Dataset columns: id (string, lengths 30-34), text (string, lengths 0-71.3k), industry_type (string, 1 class)
2017-09/1580/en_head.json.gz/4713
Xerox Redesigns Products For Lower Energy Use, Meeting Tough New EPA Energy Star Criteria ROCHESTER, N.Y., April 12, 2007 – Over the past two years, Xerox Corporation scientists and engineers have trained their sights on developing products that use significantly less energy. The payoff: More than half of the company’s office and production product offerings meet the U.S. Environmental Protection Agency’s rigorous new ENERGY STAR requirements that went into effect on April 1. Previously the ENERGY STAR criteria for office copiers, printers and multifunction systems measured power consumed in standby and low-power modes. The new standard asks a different question: How much energy would the device use during a typical week? It measures the energy consumed if the system mimics the tempo of a normal office, running a sample job mix with downtime for lunch, overnight and on weekends. The result is a Typical Electricity Consumption (TEC) figure that must meet the EPA’s tough new requirements in order for a product to achieve ENERGY STAR status. Patricia Calkins, Xerox vice president, Environment, Health and Safety, said, “The EPA’s new ENERGY STAR requirements raise the bar so significantly that only 25 percent of products in the marketplace were expected to meet the new criteria. At Xerox, we knew we could do better than the industry average, and we did with more than 50 percent of our current product line passing this tough test. Over time, the standards will get even tougher. We’ll remain focused on improving our entire product line to meet these evolving requirements. And, we expect to qualify more products over time.”

A commitment to the environment and energy conservation

As an ENERGY STAR Charter Partner since the early 1990s, Xerox has long applied its technical expertise to building energy savings into its products. About two years ago, it took a fresh look at all the subsystems in its laser-printing based products, hoping to bring the power usage down even further. As a result, engineers identified four opportunities to cut power consumption: the fuser, the toner, the electronic controls and the xerographic system. In the xerographic process, a copy or print is made by digitally capturing the image to be printed; exposing the image on a photoreceptor; developing the image with pigmented powder, which is called toner; and then transferring the image created by the toner onto paper and heating it to fuse the image and make a print. Kenneth J. Buck, a senior systems engineer who worked on the project, said, “One example of the company’s success is the WorkCentre 4150, which prints at 45 pages per minute. It’s a black-and-white, desktop multifunction system for small and medium-sized businesses, and it uses 11.9 kilowatt-hours per week of electricity. That’s roughly half the energy consumption of a comparable 45 ppm multifunction system of three years ago.”

Faster fusing

Office products like printers, copiers, and multifunction systems are active about 10 percent of the time. The rest of the time, they are in a standby or “sleep” mode, where the fuser roll cools and uses less power. The dilemma: The “deeper” the sleep, the less power they use, but the longer it takes before they are ready to print again. Xerox developed fuser rolls with thinner walls that would heat up faster for some products; for others, it changed from a roller to a thin metal belt with a heater.
As a result of the technical changes to the product line, one new black-and-white product will use 75 percent less energy to emerge from the deep sleep than it did previously. Warm-up times for Xerox’s color laser printers have also been significantly reduced.

Improved toner and controls

Xerox is using toner made by its patented emulsion aggregation process in more products to reduce energy consumption. Not only does the EA manufacturing process require less energy, but the toner consumes less energy when used to make a print. That’s because its rich colors and regular particle size mean devices need less EA Toner than conventional toner to create an image, so there’s less thermal mass to heat. Xerox scientists have also worked to develop toners with lower melting points, which consume less energy in the fuser. These have enabled Xerox to reduce fusing temperatures by about 10 percent in some products. In the xerographic system, engineers have developed ways to charge and erase the photoreceptor more efficiently, using less energy. Other innovations include redesign of the control electronics in the devices to take advantage of next-generation processors and save energy. Energy efficiency developments are part of Xerox's ongoing investments in sustainable innovation -- or "green products" -- that deliver measurable benefits to the environment and help Xerox customers work in more environmentally friendly offices. These include solid ink printing technology, which generates 90 percent less waste than comparable laser printers; document-management services and software that improve workers’ productivity while reducing dependency on paper; and other paper-saving innovations. In addition, Xerox is contributing $1 million to The Nature Conservancy to develop science-based tools and systems that will help the paper industry better manage ecologically important forest land. The funding focuses on the Canadian Boreal Forest as well as the forests of the southern United States, Indonesia and Brazil's Atlantic Forest.
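To make the Typical Electricity Consumption idea described above concrete, here is a minimal sketch of how a weekly figure of this kind can be estimated from a device's power states and duty cycle. The wattages and hours below are illustrative assumptions, not Xerox or EPA test values; the actual ENERGY STAR procedure prescribes its own job mix and measurement rules.

```python
# Hypothetical sketch of a TEC-style weekly energy estimate for an office device.
# All power draws and hours are assumptions for illustration only.

def weekly_tec_kwh(active_w, ready_w, sleep_w, active_h, ready_h, sleep_h):
    """Return an estimated weekly energy use in kilowatt-hours."""
    watt_hours = active_w * active_h + ready_w * ready_h + sleep_w * sleep_h
    return watt_hours / 1000.0  # Wh -> kWh

if __name__ == "__main__":
    # Assume a 45 ppm device that prints ~10% of a 60-hour work week and
    # sleeps overnight and on weekends (6 + 54 + 108 = 168 hours).
    estimate = weekly_tec_kwh(
        active_w=800, ready_w=70, sleep_w=10,   # assumed power draws in watts
        active_h=6, ready_h=54, sleep_h=108,    # assumed weekly hours per state
    )
    print(f"Estimated weekly consumption: {estimate:.1f} kWh")
```

With these made-up numbers the estimate comes out near 10 kWh per week, in the same ballpark as the 11.9 kWh figure quoted for the WorkCentre 4150; lowering the sleep-mode draw or lengthening sleep time is what pulls the weekly total down.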
Technology
2017-09/1580/en_head.json.gz/4723
True, Blue Planet Found Orbiting Nearby Star By Scott Neuman Jul 11, 2013 Originally published on July 11, 2013 1:37 pm Move over, Earth. There's another blue planet in town — or at least in our corner of the Milky Way. Astronomers using the Hubble Space Telescope deduced for the first time the atmospheric hue of a planet outside our own solar system — and it turns out to be a "deep cobalt blue." But the similarities between HD 189733b, as the alien world is unpoetically known, and Earth, pretty much end there: While oceans of liquid water give our world its azure tint, that's unlikely the case with HD 189733b, which orbits a star just 63 light years away from us. The planet in question is what's known as a "hot Jupiter" — a term that describes both its large mass and nearness to its parent star. Nature elaborates, describing the weather on HD 189733b as extremely hot and windy, with occasional glass rain: "Although the planet seems to be the shade of a deep ocean, it is unlikely to host liquid water. The exoplanet is a giant ball of gas, similar to Jupiter, and was previously often painted brown and red in artists' impressions. "The blue color may come from clouds laden with reflective particles that contain silicon — essentially raindrops of molten glass. Evidence for this idea dates to 2007, when Hubble observed the planet passing in front of its star. Light from the star seemed to be passing through a haze of particles." But Hubble's optical resolution isn't good enough to actually "see" the planet. Instead, astronomers analyzed spectroscopically the light from the parent star and the planet together (during an eclipse from our vantage point), then measured it again when the planet was behind the star. The observation from the star minus the planet was less blue, indicating that blue is the color of the planet itself. According to Nature: "During the eclipse, the amount of observed blue light decreased, whereas other colours remained unaffected. This indicated that the light reflected by the planet's atmosphere, blocked by the star in the eclipse, is blue. ... " 'This is the first time this has been done for optical wavelengths,' said Alan Boss, an astrophysicist at the Carnegie Institution for Science in Washington DC. 'It's a technical tour de force.' The amount of visible light bouncing off a planet is typically small compared to light fluctuations in a star, making planets difficult to distinguish. Fortunately, HD 189733 b is large relative to other exoplanets — and well illuminated." Copyright 2013 NPR. To see more, visit http://www.npr.org/.
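The eclipse-differencing method the article describes amounts to comparing the system's brightness per wavelength band when the planet is visible with its brightness when the planet is hidden behind the star; whatever is left over is the planet's reflected light. The sketch below illustrates that subtraction with invented flux values (not real Hubble measurements); an actual analysis works with calibrated spectra and proper uncertainty estimates.

```python
# A minimal sketch of the eclipse-differencing idea: subtract the in-eclipse
# (star-only) flux from the out-of-eclipse (star + planet) flux in each band.
# All flux numbers are invented for illustration.
import numpy as np

bands_nm = np.array([350, 450, 550, 650])  # blue -> red wavelength bins

flux_out_of_eclipse = np.array([1.000076, 1.000058, 1.000021, 1.000012])  # assumed star + planet
flux_in_eclipse = np.array([1.000000, 1.000000, 1.000000, 1.000000])      # assumed star alone

planet_signal = flux_out_of_eclipse - flux_in_eclipse
for band, signal in zip(bands_nm, planet_signal):
    print(f"{band} nm: reflected planet signal ~ {signal:.6f}")

# The excess is largest in the bluest band, which is the basis for inferring
# a deep-blue reflected colour for the planet.
print("Strongest reflection:", int(bands_nm[np.argmax(planet_signal)]), "nm")
```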
Technology
2017-09/1580/en_head.json.gz/4743
What are the National Science Challenges? The National Science Challenges are designed to find solutions to some of the large, complex issues that matter most to us. Each challenge draws scientists from different institutions and disciplines together, to work towards solutions which would make a real difference to the lives of New Zealanders. This is a new, mission-led approach designed to focus the New Zealand government’s investment in science – generating better science, faster to deliver results in areas of national significance. There are 11 National Science Challenges, each targeting a bold goal in a different area of science. The National Science Challenges are overseen by the Ministry of Business, Innovation & Employment. Why were obesity, literacy and mental health chosen for A Better Start? When the Ministry of Business, Innovation and Employment called for public submissions on what issues the National Science Challenges should tackle, the wellbeing of New Zealand’s children emerged as a priority. The Challenge consulted scientists and stakeholders – who identified a healthy weight, successful learning and sound mental health as three of the most important areas for our tamariki.
Technology
2017-09/1580/en_head.json.gz/4756
Food Prize Winners: Science Needed to Fight Hunger The winners of this year's World Food Prize continued Wednesday to press their case that biotechnology research and innovation is necessary to keep food production in step with a growing world population. The three scientists, recognized as pioneers in the development of genetically modified organisms, made their case to reporters at the three-day World Food Prize symposium underway in Des Moines. They will share the $250,000 award that they are to receive in a ceremony at the Iowa Capitol on Thursday. One winner, Robert Fraley, chief technology officer at Monsanto, said biotechnology and information technology are helping farmers globally improve crop production and can help solve the problem of a growing population with too little food. "Whether it's a small farmer in India with a cellphone message that wind currents are changing ... or a planter in Iowa that says, 'Change the way this field is planted every 10 meters to optimize yields,' science has so much potential," he said. "The challenge that's going to come is: Are we going to limit it by policy and regulation?" Opponents of genetically modified crops say they are harmful to people and the environment. Some organic farmers warn that widespread planting of genetically modified crops could contaminate organic and traditional crops, destroying their value. Others are concerned about the uncharted long-term impact for those who eat products such as milk and beef from animals raised on genetically modified plants. Another winner, Marc Van Montagu, founder and chairman of the Institute of Plant Biotechnology Outreach at Ghent University in Belgium, said some of the fear of GMO crops is absurd. He used the example of papayas in Hawaii, which he said were saved through genetic modification. The third winner, Mary-Dell Chilton, founder and researcher at Syngenta Biotechnology, said all the discussion by critics of biotechnology should be directed at the coming problem of widespread hunger as the population grows to 9 billion people by 2050. "There are going to be a lot of hungry people here," she told reporters at a news conference. "I hope that you will at least give a balanced view of the safety, the utility of these biotech tools. We're going to need them, believe me." Environmental groups and activist organizations offered opposing views by holding their own press conference at the same time the food prize laureates were speaking. Cherie Mortice, a retired teacher from Des Moines and a member of Iowa Citizens for Community Improvement, an action group that fights large-scale farms, said the prize "is the grand promenade of corporate control over food production that undermines the independent family farms that are capable of producing a diversity of healthy foods that can actually make it to our dinner plates. "We are told in Iowa that we must feed the world, but feeding people is not a supply problem — the problem is distribution and economic inequality," she said. "The solution to feeding the world is to bust up big ag, and empower women, immigrant and young farmers." Wednesday evening the Occupy World Food Prize organization was hosting former Texas Agriculture Commissioner Jim Hightower, who was to deliver a speech entitled: "From Factory Farms to GMOs, the Upchuck Rebellion is Taking Root."
The group also planned peaceful protests, which last year led to the arrest of several of the group's members as they tried to enter the World Food Prize headquarters.
Technology
2017-09/1580/en_head.json.gz/4764
2012 China PV Market: Slow Start but a Strong Ending to Come Installations are forecast to surge in the second half, with more than four gigawatts (GW) of PV installations to be completed. 10/11/12, 08:04 AM | Solar & Wind Shanghai, 9 October 2012. China's domestic photovoltaic (PV) market made a slow start to the year, with just 720 megawatts (MW) installed in the first half, according to the latest research from IMS Research (recently acquired by IHS Inc. (NYSE: IHS)). However, installations are forecast to surge in the second half, with more than four gigawatts (GW) of PV installations to be completed, taking full year installations to five GW, according to the Q3'12 edition of the China PV Market – Supply and Demand Quarterly report, released in September. In September, China announced its latest PV Development Five-Year-Plan which targets 20 GW of PV systems and one GW of solar thermal power to be completed by 2015. However, this plan did not bring the levels of financial stimulus that were expected. "While old issues like grid-connection and power transmission have still not been solved, new issues emerged in the second quarter of 2012, such as worsening bankability, poor credit conditions and a general slowdown of the Chinese economy," remarked Frank Xie, IMS Research's senior PV analyst based in Shanghai. "Many projects are said to have completed the bidding process; however, they are not yet under construction. Integrators are prioritizing projects to be completed by year end, and there will be a huge surge in installations in the final quarter of the year." 2012 has so far also brought difficult times for China's huge supplier base, and utilization rates remained low as a result of a strong focus on cost control and caution over the ongoing EU trade investigation into Chinese PV products. Average utilization levels for PV polysilicon, wafers, cells and module manufacturers all declined in the third quarter; all were lower than 60 percent. Despite wafer production capacity in China declining in the third quarter, average utilization fell to just 58 percent, as an increasing number of cell manufacturers favored sourcing competitively priced wafers from third parties at a lower cost than manufacturing them in house. Utilization rates are forecast to recover slightly in the fourth quarter in response to the predicted boom in domestic installations. Both inverter shipments and revenues declined in the second quarter of 2012 compared with the previous quarter as a result of weak demand. According to the report, the first half of 2012 saw inverter shipments of just 700 MW, less than half of the amount shipped in the second half of 2011. However, Xie holds a positive view for future inverter shipments in the second half of the year, adding: "The situation is set to improve, and China's rapidly expanding inverter supplier base is forecast to ship more than four GW of inverters in the second half of 2012." Researched by IMS Research's Chinese analysts, the 'China PV Market – Supply and Demand Q3'12' report was published on 27 September and contains quarterly analysis and forecasts of the supply and demand dynamics of China's PV industry. About IHS Inc. (www.ihs.com) IHS (NYSE: IHS) is the leading source of information, insight and analytics in critical areas that shape today's business landscape.
Businesses and governments in more than 165 countries around the globe rely on the comprehensive content, expert independent analysis and flexible delivery methods of IHS to make high-impact decisions and develop strategies with speed and confidence. IHS has been in business since 1959 and became a publicly traded company on the New York Stock Exchange in 2005. Headquartered in Englewood, Colorado, USA, IHS employs more than 6,000 people in more than 30 countries around the world. About IMS Research (www.imsresearch.com) IMS Research, recently acquired by IHS (NYSE: IHS), is a leading supplier of market research and consultancy to over 2500 clients worldwide, including most of the world's largest technology companies. Established in the UK in 1989, IMS Research now has dedicated analyst teams focused on the factory automation, automotive, communications, computer, consumer, display, financial & ID, LED & lighting, medical, power & energy, solar PV, smart grid and security markets. Currently publishing over 350 different syndicated report titles each year, these in-depth publications are used by major electronics and industrial companies to assess market trends, solve marketing problems, and improve the efficiency of their businesses.
Technology
2017-09/1580/en_head.json.gz/4905
Here’s How You Agreed to Be the Star of Facebook’s Advertising Show Emily Coyle Many users sign on to Facebook (NASDAQ:FB) every day, but few actually read its fine print. What fine print, you ask? That would be the documents that explain the social media site’s terms and conditions, and also warn that all user data is subject to be used for advertising or “Sponsored Stories.” Facebook makes this clear in its Statement of Rights and Responsibilities and Data Use Policy legal documents, but that hasn’t kept critics from charging that the explanations of its practices are purposely ambiguous, and still violate user privacy. That’s why the company is now proposing a re-write of some of its legal documents to ensure its millions of “friends” know exactly what they’re getting themselves into when they sign up for the social media site, and to help it steer clear of future legal action. According to CNET, the proposition follows just in the wake of a bitter legal battle that Facebook has been the subject of for two years. In 2011, the social network was sued for violating users’ right to privacy by publicizing their “likes” in advertisements without asking them or compensating them.
Technology
2017-09/1580/en_head.json.gz/4930
2012 Global Temps Rank in Top 10 Hottest On Record By Andrew Freedman Published: January 15th, 2013, Last Updated: January 15th, 2013 2012 was one of the 10 warmest years on record globally according to data released Tuesday from the National Oceanic and Atmospheric Administration (NOAA) and NASA. NOAA said that 2012 marked the 10th warmest year since records began in 1880, with a globally averaged annual temperature that was 1.03°F above average. NASA, using slightly different methods, found that it was the ninth-warmest year. NASA and NOAA independently keep track of Earth’s surface temperatures, and their records, along with other datasets, all show a clear global warming trend during the latter half of the 20th century. Studies show this is due in large part to manmade emissions of heat-trapping greenhouse gases, such as carbon dioxide. Watch 62 Years of Global Warming in 13 Seconds: a progression of changing global surface temperature anomalies from 1950 through 2012. Credit: NASA. Based on NOAA’s numbers, anyone younger than 36 years old has never experienced a cooler-than-average year on the planet, since the last such year occurred in 1976. The global annual temperature has increased at an average rate of 0.11°F per decade from 1880 to 2012, NOAA said, with the rate of increase accelerating in recent decades, to an average of 0.27°F per decade during the past 50 years. Including 2012, all 12 years during the 21st century have been among the 14 warmest on record, and only one year — 1998 — during the 20th century was warmer than 2012. According to NASA, with the exception of 1998, the nine warmest years in their 132 years of record keeping have occurred since 2000, with 2010 and 2005 ranking as the hottest years on record. "One more year of numbers isn't in itself significant," NASA climatologist Gavin Schmidt said in a press release. "What matters is this decade is warmer than the last decade, and that decade was warmer than the decade before. The planet is warming. The reason it's warming is because we are pumping increasing amounts of carbon dioxide into the atmosphere." Although global temperatures have continued to increase when viewed over longer time periods, there remains considerable year-to-year variability due to natural climate fluctuations, such as La Niña and El Niño events, which can help decrease or increase global temperatures even further. During the past decade, the rate of global warming has slowed in response to natural variability and changes in manmade pollution, said James Hansen, the director of NASA’s Goddard Institute for Space Studies. Hansen said the prevalence of La Niña conditions during this period, and an increase in global particulate pollution, most likely account for any apparent slowdown in warming. Particulates, such as sulfate aerosols from burning coal, can mask some of the warming influence of greenhouse gases, and increased air pollution in developing countries such as China and India has boosted particulate pollution in recent years, Hansen said. However, if the past two years are any indication, the manmade global warming signal may now be powerful enough to largely overcome the short-term cooling influences of a La Niña. Global surface temperature anomalies shown along with El Niño and La Niña years. Credit: NOAA.
During the first three months of 2012, La Niña conditions were present in the equatorial tropical Pacific Ocean, with cooler-than-average sea surface temperatures helping to hold down global temperatures. However, 2012 wound up besting 2011 for the warmest La Niña year on record. In addition, global average ocean temperatures also set a record for the warmest La Niña year. The warmest year in NOAA’s dataset, 1998, was an El Niño year, when unusually warm sea surface temperatures in the tropical Pacific added to the warming already taking place from the influence of greenhouse gases. The climate has continued to warm since 1998, and the consequences of that warming are becoming increasingly apparent. A new federal assessment of climate change impacts on the U.S. found that climate change is already having a wide range of negative impacts around the country, including longer lasting and more frequent extreme heat events and heavy precipitation events. In 2012, the U.S. had its warmest and second-most extreme year on record, and Arctic sea ice melted to record low levels. Between March 18 and September 16, 4.57 million square miles of Arctic sea ice melted — the largest ice loss of any melt season on record. Meanwhile, Antarctic sea ice extent hit the largest level on record, which is consistent with climate change projections that show the Antarctic should respond to global warming differently than the Arctic, due to the many geographical distinctions between the two poles and differences in ocean and atmospheric circulation in these areas. Northern Hemisphere snow cover extent in December 2012 was the largest on record, and snow cover during the winter of 2011-12 was also above average. However, spring snow cover extent has been declining in a trend that scientists have linked to manmade climate change and the loss of Arctic sea ice. While winter Northern Hemisphere snow cover has grown at a rate of about 0.1 percent per decade, spring Northern Hemisphere snow cover has shrunk by about 2.2 percent. Most parts of the world experienced warmer-than-average annual temperatures, including much of North and South America, most of Europe and Africa, and western, southern, and extreme northeastern Asia. Much of Alaska, far western Canada, central Asia, the eastern and equatorial Pacific, and areas of the Southern Ocean, among others, were cooler than average. While 2012 saw near-average precipitation across the globe, there were many precipitation extremes that inflicted a heavy human and economic toll. Major drought occurred in the U.S., eastern Russia, Ukraine, and Kazakhstan, as well as in northeastern Brazil. A wetter-than-average rainy season in western and central Africa affected 3 million people across 15 countries from July to October. And in the U.K., conditions swung from record dryness in March to record wetness in April, NOAA said.
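The per-decade warming rates quoted in the article (0.11°F per decade since 1880, and 0.27°F per decade over the past 50 years) are essentially the slope of a straight line fitted to annual temperature anomalies, scaled from years to decades. Here is a minimal sketch of that calculation; the anomaly series is generated for illustration and is not NOAA or NASA data.

```python
# Sketch: estimate a per-decade warming trend by fitting a line to annual
# temperature anomalies. The anomalies here are synthetic, not real data.
import numpy as np

years = np.arange(1963, 2013)  # a 50-year window
rng = np.random.default_rng(0)
# Assume an underlying drift of 0.027 deg F per year plus year-to-year noise.
anomalies = 0.027 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

slope_per_year, intercept = np.polyfit(years, anomalies, 1)
print(f"Warming trend: {slope_per_year * 10:.2f} deg F per decade")
```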
Technology
2017-09/1580/en_head.json.gz/4951
Consumer Protection Policy The Communications Regulatory Authority (CRA) issued a telecommunications Consumer Protection Policy in January 2014, as a solid foundation for our consumer protection work in Qatar. The Policy, developed following a public consultation, incorporates internationally and regionally recognized standards for telecommunication consumer rights. The Policy brings together the existing obligations on service providers, and imposes a set of new obligations on operators to ensure that they compete fairly. The Policy includes a strengthened system for monitoring and enforcing compliance with the rules, and provision for a dispute resolution process that is independent of operators. The Policy, together with 103, a dedicated telecom consumer complaint hotline (operational 24 hours); an independent complaints service, which consumers can contact if they are dissatisfied with the way their service provider has treated their complaint; and a Code on Advertising, Marketing & Branding, forms the basis of the consumer protection effort initiated by CRA. The Consumer Protection Policy document can be downloaded from this link.
Technology
2017-09/1580/en_head.json.gz/4992
America’s Most Infamous Extinction: Centennial Anniversary of Passenger Pigeon’s Disappearance August 25, 2014 Contact: Melanie Gade; mgade@defenders.org; (202) 772-0288 WASHINGTON – One hundred years ago on September 1st, the last remaining passenger pigeon – “Martha” – took her final breath, marking the end of a bird species that was once the most abundant in North America. It was America’s first infamous extinction and remains one of the most tragic examples of human-caused extinction in our nation’s history. The passenger pigeon once numbered as many as 5 billion. The bird was so numerous passing flocks were said to darken the skies. Nonetheless, the bird’s population crashed and disappeared within a span of 50 years. The causes were excessive hunting and habitat destruction; some of the same factors that continue to threaten imperiled wildlife today. Fortunately, nearly 60 years after the bird’s extinction, Congress took a significant step to ensure that avoidable tragedies like the passenger pigeon extinction never happen again when it enacted the Endangered Species Act (ESA) to save plants and animals facing extinction. On the occasion of this tragic anniversary, Jamie Rappaport Clark, president and CEO of Defenders of Wildlife, issued the following statement: “This nation enacted the ESA so that we would never again experience a loss like the passenger pigeon, a human-caused extinction that could have been prevented. Indeed, had the ESA been in place in the late 19th century, it likely would have saved the passenger pigeon. Though the ESA came too late to save Martha’s species, today we can look to it as a statement of America’s commitment to protecting our nation’s imperiled wildlife and plants for future generations. “Sadly, even after all we’ve learned about the critical importance of this landmark law, the ESA is on the chopping block in Congress once again. Too often, we have seen members of Congress move to dismantle key pieces of the ESA and gut protections for imperiled wildlife, rejecting the very conservation values that are the foundation of the ESA. It’s my hope that the haunting anniversary of the passenger pigeon’s extinction will remind our political leaders of what we have to lose from continued reckless attacks on the ESA. Truly, if there was ever a time to renew our nation’s defining values for conservation, it is on the 100th anniversary of the passenger pigeon’s demise.” Defenders of Wildlife is dedicated to the protection of all native animals and plants in their natural communities. With more than 1.1 million members and activists, Defenders of Wildlife is a leading advocate for innovative solutions to safeguard our wildlife heritage for generations to come. For more information, visit www.defenders.org.
Technology
2017-09/1580/en_head.json.gz/5004
Big-screen Sony Xperia Z and ZL phones leaked ahead of CES unveiling Check out our full reviews of the Sony Xperia ZL and Sony Xperia Z phones. Sony hasn’t been able to keep its two new flagship Android phones secret ahead of CES 2013, as a picture of the pair has sneaked out just days before the beginning of the Las Vegas show. The phones are identified as the Xperia Z (seen above on the left) and Xperia ZL, and have previously been leaked as the Yuga and Odin respectively. As both images were discovered on a SonyMobile.com website they’re almost certainly genuine, plus the device names make up part of the URL, so there’s a good chance these will turn out to be correct too. While both the Xperia Z and Xperia ZL have appeared before, we’re still not sure of the exact specification of either; however, it’s possible the Xperia ZL/Odin is a more compact variant of the full-size Xperia Z/Yuga. In the past, the addition of an L in a Sony smartphone model name hasn’t signified anything other than a gentle warm-over of the specs, such as in the case of the Xperia S and Xperia SL, but how this approach would work with two devices launched at the same time remains to be seen. We’re expecting the Xperia Z to be Sony’s 5-inch, 1080p smartphone effort which will be driven by a quad-core Qualcomm Snapdragon processor – also a first for Sony, after it swore off quad-core chips for 2012. A 13-megapixel camera – as seen on the Xperia T, Xperia TL and its variants – is likely to sit on the rear of the big-screen device, plus we can see a forward-facing video call lens on both phones in the new pictures. Oddly, the Xperia ZL’s video call lens is in the bottom right-hand corner of the chassis; a most unusual position. Google Android 4.1 Jelly Bean is the most obvious candidate for the operating system. So when will we get to see these two new Sony smartphones? The company has scheduled a press conference at CES 2013 for 5pm local time on January 7, and in previous years it has announced several top-of-the-range devices. We’ll be there to bring you all the latest news.
Technology
2017-09/1580/en_head.json.gz/5033
Wiley Rein's Brightbill discusses U.S.-Chinese solar dispute following anti-dumping move OnPoint Aired: Tuesday, June 5, 2012 As the U.S. Commerce Department moves forward with anti-subsidy and anti-dumping tariff proposals against Chinese solar imports, is there room for compromise between the two countries? During today's OnPoint, Timothy Brightbill, a partner at Wiley Rein and the lead trade lawyer on SolarWorld's case against Chinese manufacturers, discusses the impact of the proposed tariffs on U.S. manufacturers and installers. He also responds to the Solar Energy Industries Association's suggestion that the two governments should work together toward a solution to the trade dispute. Monica Trauzzi: Hello and welcome to OnPoint. I'm Monica Trauzzi. Joining me today is Timothy Brightbill, a partner at Wiley Rein and the lead trade lawyer on SolarWorld's trade case against Chinese manufacturers. Tim, thanks for coming back on the show. Timothy Brightbill: Monica, thanks for having me back. Monica Trauzzi: Tim, an interesting twist in the ongoing case against Chinese solar manufacturers. After the Commerce Department announced an anti-dumping tariff, the Chinese sort of fired back determining that six state-level U.S. renewable energy programs violate global trade laws. Is this fair game or is SolarWorld's initial plan sort of backfiring at this point? Timothy Brightbill: No, no, this is proceeding just as planned. The United States is allowed to enforce its trade laws and China is also allowed to decide whether or not any U.S. policies violate WTO rules. So, we had a good first step with the anti-dumping rulings put in place. We now have margins of 30 percent up to 250 percent against Chinese cells and modules coming in. Those duties are retroactive 90 days, so any goods coming in from China now are subject to those duties and we think that's an important step toward leveling the playing field and returning fair competition in the solar industry. Monica Trauzzi: SEIA, the solar energy industry trade group, has called on the U.S. and the Chinese governments to work together towards a solution. What room do you think actually exists for some kind of compromise or negotiation to happen between the two countries? Timothy Brightbill: Well, we're not looking for that at this point and, in fact, the Commerce Department doesn't settle these cases with China and there's a good reason for that; it's because China, in the past, has violated those kind of settlement agreements. So, in fact, Commerce has not settled any of these trade remedy cases since China joined the WTO in 2001. So, China agreed to these rules when it joined the WTO. We think this case should go through to its conclusion and let the trade laws speak for themselves. Monica Trauzzi: Why then would the solar industry trade group make this suggestion of moving towards a negotiation? Timothy Brightbill: Well, it's unfortunate because SEIA did say that it was going to be neutral in this trade remedy case and it said that we have a right to apply the trade remedy laws, but we see this as not a neutral step, unfortunately, to call for negotiations. All we want to do is see this case through to its conclusion. Since there are unfair trade practices going on, we think the laws should be applied and the duties should be applied. 
Monica Trauzzi: But SEIA represents a broad spectrum of companies within the solar industry, from manufacturers to those who install, and isn't it true that these tariffs could actually negatively impact certain segments of the solar industry? Timothy Brightbill: Well, we think these tariffs are good for the entire industry. They're certainly good for manufacturing, I mean 12 U.S. companies have shut down or gone bankrupt or laid-off employees in the last two years as a result of the surge in Chinese imports, $3 billion worth of Chinese imports of cells and modules last year, which just crushed the U.S. industry. So, it's definitely good for the manufacturing industry. We also think it's good for installers, for suppliers to have a manufacturing base here in the United States. We need to keep that and the trade laws will help us to do that. Monica Trauzzi: So, when SEIA says that the anti-dumping and anti-subsidy tariffs ultimately hurt the entire market, that's untrue? Timothy Brightbill: No, we strongly disagree. The best thing to have is fair competition, not dumping and subsidies that unfairly benefit one country and that are driven by one government. It's best to have a level playing field where everyone is competing. The solar industry is highly competitive. All we're doing is addressing the unfair trade practices of one country. Monica Trauzzi: Overall, how will solar installations in the United States be affected by these tariffs? Timothy Brightbill: Well, we think solar installations are going to continue to grow. I mean the market doubled last year. All this does is, again, address unfair trade practices from China. The U.S. industry stands ready to supply those installations and companies can also access imports from anywhere else in the world. It's just when these subsidies and when the dumping takes place that something has to be done to address the problem. Monica Trauzzi: But doesn't this mean more expensive solar technology here in the U.S. and couldn't that lead to a slowdown in installations? Timothy Brightbill: Well, the trade cases will have an effect perhaps on volume or price from China, but prices have been dropping every year in the solar industry 8 to 10 percent. These products get better every year, the technology gets better every year. That's not going to change. So the only thing that's going to change is to take these dumped and subsidized imports out of the marketplace. Monica Trauzzi: So, once these tariffs are in place, will U.S. be able to compete with China? Timothy Brightbill: Oh, absolutely, yes, and with the rest of the world. This is an industry that should be growing here in the United States. There should be new companies adding thousands of workers and the companies we represent should be expanding right now to meet demand. They'll be ready to do that. Monica Trauzzi: So, where does the case go from here? What are the next steps? Timothy Brightbill: Well, right now, Commerce is actually verifying the responses of the Chinese government. They're in China right now. They're looking at all the responses that have been put in. They're also examining additional subsidies, subsidies on glass and aluminum and things like that, so the margins could even go higher than they are right now. There will be final determinations from the Commerce Department in October and from the International Trade Commission in November of this year. Monica Trauzzi: All right, we'll end it there. Thank you for coming on the show. Nice to see you. 
Timothy Brightbill: Thanks very much. Monica Trauzzi: And thanks for watching. We'll see you back here tomorrow. [End of Audio]
Technology
2017-09/1580/en_head.json.gz/5034
NetLogic buys Cypress' search engine IC line-up EE Times, 8/30/2007 11:00 AM EDT LONDON — NetLogic Microsystems Inc. has bought, for $12 million in cash, a portfolio of network search engine products from Cypress Semiconductor Corp. The portfolio being acquired by NetLogic (Mountain View, Calif.) extends its already strong position in the high-volume desktop switching market. Ron Jankov, president and CEO of NetLogic, said the company believes the desktop switching market "is on the cusp of significant growth in the coming years." For Cypress (San Jose, Calif.) "the sale of our network search engine assets is part of our overall strategy to focus on programmable products and solutions," said T.J. Rodgers, president and CEO of the company. With the addition of Cypress' portfolio of devices for desktop switching, NetLogic says it now has "the industry’s most complete and most advanced portfolio of knowledge-based processors and network search engine solutions, spanning Layer 2 through Layer 7 networking." It adds that the scale and scope of the operation allow it to expand its R&D investments in the sector.
Technology
2017-09/1580/en_head.json.gz/5054
Two Fried Drives Rescued By Chris Preimesberger | Posted 2008-04-25 "We're pretty confident about what we can do," Remley said. "We've found that we can save data about 98 percent of the time." When the data is recovered from bad digital storage, it will be returned to the customer on an external USB hard drive with a two-year limited warranty. "If a customer drops off an 80GB hard drive, for example, we'll return the data on a similar-quality 80GB external drive, with a simple USB connector," Remley said. "So all the customer has to do is plug it in, and the data will all be poured back into the new drive or device." I decided to go out on a limb and ask Remley if I could have Seagate recover the data from my two fried drives. I had some vintage Teddy Wilson and Lionel Hampton tracks and some rare Beatles outtakes on one of those drives, and I wanted them back in my music collection in the worst way. "Sure, let's see what we can do," Remley said. A few weeks later, during a slight lull in schedules, I brought the two drives down to the Seagate Services fix-it shop in Santa Clara, Calif., just down the Bayshore Freeway from my office. I was impressed with the way the whole process was handled. The workshop had about a half-dozen technicians working on rescues of varying kinds ("This one was in a bad fire, but we were still able to save most of the data," one fellow said, showing me a badly blackened drive.) Mine weren't blackened, but they were in sad shape. My technician carefully took my first drive apart, gave it a visual inspection and remarked that it looked pretty normal to him. But after plugging it into a diagnostic workstation, he could immediately find the sectors of the drive that had gone bad. A confusing grid of numbers popped up on his screen, but he knew exactly what it all meant. He took furious notes, smiled quite a bit and assured me that he could save most of my data on that drive. With the exactitude of a watchmaker and a scientific outlook, he went about his business, and I felt good about the eventual outcome, although you never really know what's going to happen when you turn in a burned-out HDD. In about four days, I received a call informing me that I could pick up my data on a brand-spanking-new storage drive. Wow! Now this was cool. The Seagate team had loaded both of my drives onto a new FreeAgent Pro, sporting a storage capacity of 320GB. This model stands vertically, with only a 5- by 7-inch footprint on my desk. It connects via USB to any computer I own, and carries with it its own backup software. It lights up in a very cool way when it's running, too. It was so nice to get Teddy Wilson, Lionel Hampton, the Beatles outtakes and all my other stuff back into circulation. For that, I can thank the Seagaters, who really know what they are doing, because they actually design and build these things. Chris Preimesberger was named Editor-in-Chief of Features & Analysis at eWEEK in November 2011. Previously he served eWEEK as Senior Writer, covering a range of IT sectors that include data center systems, cloud computing, storage, virtualization, green IT, e-discovery and IT governance. His blog, Storage Station, is considered a go-to information source. Chris won a national Folio Award for magazine writing in November 2011 for a cover story on Salesforce.com and CEO-founder Marc Benioff, and he has served as a judge for the SIIA Codie Awards since 2005.
In previous IT journalism, Chris was a founding editor of both IT Manager's Journal and DevX.com and was managing editor of Software Development magazine. His diverse resume also includes: sportswriter for the Los Angeles Daily News, covering NCAA and NBA basketball, television critic for the Palo Alto Times Tribune, and Sports Information Director at Stanford University. He has served as a correspondent for The Associated Press, covering Stanford and NCAA tournament basketball, since 1983. He has covered a number of major events, including the 1984 Democratic National Convention, a Presidential press conference at the White House in 1993, the Emmy Awards (three times), two Rose Bowls, the Fiesta Bowl, several NCAA men's and women's basketball tournaments, a Formula One Grand Prix auto race, a heavyweight boxing championship bout (Ali vs. Spinks, 1978), and the 1985 Super Bowl. A 1975 graduate of Pepperdine University in Malibu, Calif., Chris has won more than a dozen regional and national awards for his work. He and his wife, Rebecca, have four children and reside in Redwood City, Calif.Follow on Twitter: editingwhiz
Technology
2017-09/1580/en_head.json.gz/5062
US government uses fake cell towers, flown on airplanes, to harvest phone data and track down criminals November 14, 2014 at 11:34 am Proving yet again that the US government can show a surprising soupçon of tenacity when it comes to invading privacy and occasionally catching a terrorist, a new report claims that the US Marshals Service — since 2007 — has been criss-crossing the country with small airplanes equipped with fake cell towers. These small aircraft (fixed-wing Cessnas) intercept communications between your mobile phone and the carrier’s legitimate cell tower, allowing the US Marshals to find and triangulate the exact location of a target. Obviously, the primary target of the system is criminals — but the report says a lot of “innocent Americans” are also being tagged by the program. The NSA knew about and exploited the Heartbleed bug for ‘at least two years’ April 14, 2014 at 8:35 am When I wrote about the Heartbleed bug last week, and how it means that much of the web has been insecure for the last two years, I found myself thinking: ‘if I was the NSA, or some other intelligence agency, this is exactly how I would go about gathering sensitive data.’ Now, according to two people familiar with the matter, it appears the NSA did just that. The NSA is building a quantum computer to crack almost every kind of encryption January 3, 2014 at 10:03 am New documents leaked by Edward Snowden reveal two NSA programs that seek to build a “useful quantum computer” that can break all known forms of classical encryption. Such a quantum computer would obviously give the NSA unprecedented access to encrypted communications, but a working quantum computer is also vital for defensive purposes: If someone else gets their hands on a quantum computer first, then it is the US government that will suddenly have all of its encrypted communications cracked wide open.
Technology
2017-09/1580/en_head.json.gz/5099
AUSTIN, Texas – Feb. 6, 2009 – In another sign of Sony Online Entertainment LLC’s (SOE) commitment to the upcoming DC Universe™ Online (DCUO) massively multiplayer online video game, the company today announced that award-winning writer Marv Wolfman has joined the creative team at WildStorm Productions that is working with SOE to bring the DC Universe to life on the PLAYSTATION® 3 computer entertainment system and the PC. Wolfman will write compelling story arcs, exciting quests and in-game events for DCUO. Wolfman’s seminal run on The New Teen Titans is a favorite among comic book fans, and along with his work on series such as Crisis on Infinite Earths, Batman and Superman, he has become one of the most recognizable names in comics of the last three decades. His illustrious comic credits also include creating and writing Blade, The Vampire Hunter. Wolfman joins an all-star roster already at work on DCUO, including legendary comic book artist Jim Lee, who serves as the game’s executive creative director, and renowned DC writer Geoff Johns, who is crafting the game’s overarching story. DCUO is currently in development at SOE’s Austin studio in collaboration with DC Comics and Warner Bros. Interactive Entertainment. “Getting Marv on board with DC Universe Online shows just how passionate and determined we are about making sure this franchise delivers for both video game players and comic book fans alike,” said John Smedley, president of SOE. “This is a true collaboration between the top talent at DC and SOE, and I believe the result of this creative alliance will be one of the most exciting, memorable online game experiences to date.” Wolfman has been a creative force both on and off the pages of comic books. Wolfman recently worked as a writer for the popular Teen Titans animated series. In addition to Blade, which was turned into a TV series and three hit movies starring Wesley Snipes, he created Bullseye, the prime villain in the 2003 movie Daredevil. Wolfman has received numerous awards and recognition for his work, including the SCRIBE award for best speculative fiction novel adaptation for Superman Returns, and a special commendation by the White House for his work on three anti-drug comics for the “Just Say No” program. “As a gamer, this is a dream project in many ways,” said Wolfman. “I get to create new stories in the DC Universe that will make this video game world a truly unique, unforgettable experience.” About DCUO DCUO offers a dramatic online setting where players can enter the DC Universe and battle alongside or against their favorite DC Comics heroes and villains including such icons as Batman, Superman, Wonder Woman, and The Joker, as well as many other fan favorites such as Green Lantern, The Flash, Catwoman and Martian Manhunter. The action and drama will play out in such well-known locations as Gotham City and Metropolis among others.
Technology
2017-09/1580/en_head.json.gz/5104
Science Gone Social From keeping up with the literature to sparking collaborations and finding funds, scientists are storming social media. Tracy Vence A growing group of scientists are using social media to move their work forward. [© arrow - Fotolia.com] Not many scientists have produced manuscripts as a direct result of participating in discussions on social media. Fewer still support their research programs in full by funds obtained in the same way. But, respectively speaking, Emily Darling, Ph.D., and Karthik Ram, Ph.D., attribute these outcomes to their activities on Twitter. Drs. Darling and Ram, both postdocs, are among a growing group of scientists using social media to move their work forward. Among the many platforms, Twitter stands out for more than its character-limiting format. For reasons not altogether known, it is the social media platform of choice among many scientists who are active online. Twitter, says Aaron Darling, Ph.D., “is an exceptionally efficient way to communicate about science, whether it’s carefully vetted results or wild ideas.” Dr. Aaron Darling, who is not related to Dr. Emily Darling, is an associate professor at University of Technology Sydney’s ithree institute. He says the social media platform keeps him current on the literature, exposing him “to useful ideas that I would otherwise miss entirely.” For many, Twitter functions as a virtual table of contents. By following investigators working in their own or related fields, researchers can access a curated compilation of published papers and conference correspondence, which is constantly updated in real-time. Some researchers use Twitter to quickly source solutions for day-to-day issues, such as tracking down citations or finding a full-text article (#icanhazpdf). Others are using it as a means of networking, expanding their access to colleagues outside of their own departments. “For me, I just feel so much more connected to colleagues that are interested in the same things I am, even if I haven’t met them,” says Dr. Emily Darling, a coral reef ecologist at the University of North Carolina, Chapel Hill. “The thing about Twitter that’s really neat is you have access to a much larger community.” Much larger, indeed. According to a study Dr. Darling and her colleagues have submitted to Ideas in Ecology and Evolution (IEE), the average scientist on Twitter has seven times more followers there than departmental peers. The story behind the Twitter study is one borne of the site itself. The University of Massachusetts Boston’s Jarrett Byrnes, Ph.D., was soliciting a submission for a special section of IEE he edits: “Anyone interested in writing a paper on Twitter and the future of publication in EEB for a special section of IEE?” he asked. “I thought that was kind of cool, so I did end up direct messaging him,” Dr. Emily Darling says. Turns out, she wasn’t the only one. And so, over the course of two months, Dr. Emily Darling and her graduate supervisor from Simon Fraser University, Isabelle Côté, Ph.D., worked alongside the University of Miami’s David Shiffman, Ph.D., and Columbia University’s Joshua Drew, Ph.D., virtually—via Twitter, Google Hangouts, and email, among other things. Having never met altogether in person, the researchers put their paper together entirely online. “It was really a tweet that inspired us to start thinking about how Twitter can influence research and the research workflow,” Dr. Emily Darling says. 
“By sending these very small science soundbites, you’re never quite sure where an idea will go,” she adds. “Jarrett Byrnes tweeted this idea about a paper, and a couple of months later we actually have written the paper, which is pretty cool.” Another IEE editor reports having similar success. Dr. Ram, a quantitative ecologist at the University of California, Berkeley, says he often finds peer reviewers online. “Usually, you tap into the same people you know, but by the time you figure out who they are, they’ve published something [on a given topic] two years ago, so it’s very hard to know who’s working on that now,” he says. By making use of his Twitter feed, “when I am looking to name reviewers I can automatically think of scientists who are really into this topic right now and could probably give a fantastic review,” Dr. Ram adds. Arguably, Dr. Ram has made the most profitable use of social media a scientist can, at least fiscally speaking—his research is currently funded by a two-year, $200,000 Alfred P. Sloan Foundation grant he landed through an introduction by way of Twitter. “I’ve been lucky,” he says. Dr. Ram contacted a researcher he was following but had never met to discuss her work. Over the course of a Skype conversation and subsequent email exchanges, he learned that this researcher’s program officer at the foundation had been interested in expanding the scope of funding to include his line of work—food web ecology. This researcher Dr. Ram reached out to “made the introduction last fall,” he says. “I just got funded last month.” Reluctance Remains Of course, there is a certain amount of tweeting to the choir going on among scientists actively involved in social media. As seen with blogging, posting preprints, and participating in non-anonymous peer review, there are at least as many researchers who are strongly against the practice of sharing scientific ideas openly online as there are supporting it. “In my experience, many scientists from all fields are rather conservative, particularly those who already have long, well-established, successful careers,” says CSIRO bioinformatician Neil Saunders, Ph.D. “Their attitude seems to be: ‘I have done perfectly well without all those new tools and ideas, so I don’t need them, and neither do you.’” To be sure, most scientists are not on Twitter. A recent survey from UNC Chapel Hill’s Jason Priem et al., found that only about one in 40 researchers is active on the site, Dr. Emily Darling’s team notes. Of scientists who are active on Twitter, around 60% completed their Ph.D.s within the last five years, the group adds in its report. “One of the limitations right now is a nonrandom sample of scientists on Twitter,” Dr. Emily Darling says. “We are a self-selecting group,” adds Dr. Ram. “We are missing a lot of the discussion that could be happening, which is a shame. A whole segment of the scientific community is not online, so we are missing a lot more valuable interaction.” Part of the problem, as Dr. Saunders notes, is resistance to change. But such resistance is not reserved for tenured professors—it spans the entire scientific career spectrum. “The way the incentives are structured, people don’t see the value in doing these sorts of things,” Dr. Ram says. That’s compounded by the pressures of perception, he adds. “Anybody engaging in this sort of medium is sort of seen as not focused, not a traditional academic,” Dr. Ram says. 
“It keeps people away who are more conservative, but it also keeps younger academics away because they fear they might end up losing out on opportunities to advance because they don’t want to be seen as somebody engaging in a lot of alternate forms of scholarly communication.” And there are other issues. “Trying to conduct an actual conversation on Twitter is horrible,” says Mike Taylor, Ph.D., a computer programmer and paleontologist at the University of Bristol. “The length limit means that all nuance is lost, and any kind of disagreement tends to look much stronger than it really is.” Blogs, he says, are better suited for scientific discourse. “Blogs remain the social medium of choice for science, as they have space to expound complex arguments but retain an immediacy that invites fruitful discussion,” Dr. Taylor tells GEN. Dr. Emily Darling says she hopes that by extolling the virtues of social media, more scientists will enter the fray. “By highlighting the potential benefits of using social media for both community and curation of news and keeping up to date, more people [might] join to make it a more inclusive community,” she says. Dr. Ram sums it up succinctly. “I find funding online, I find new papers online, I can quickly get answers to a lot of quotes, I can actively find collaborators, I get pulled into collaborations, and I also end up finding reviewers,” he says. “To me, it’s almost mind-boggling how I got around without Twitter.” “The role of Twitter in the life cycle of a science publication” has been submitted to Ideas in Ecology and Evolution and was published to the preprint servers arXiv and PeerJ on May 2 and May 6, respectively.
Naked ADSL Buying Guide
Naked ADSL gives you broadband Internet without pesky line rental fees. We explain what you need to know before making the leap.
PC World Staff (PC World) on 05 February, 2010 11:21

Naked ADSL (or Naked DSL) is a relatively new type of broadband service. Like ADSL2+ broadband Internet, Naked ADSL services are delivered over the copper lines normally used to deliver telephone calls. However, whereas ADSL2+ broadband is tacked on top of an existing phone service, Naked DSL services can be delivered over a "vacant copper pair" — a line that doesn't have a phone service already attached. This means you can have a broadband connection without paying for a landline telephone service.

Naked ADSL benefits

Unless they have family and friends overseas, many people don't use their fixed line telephone at home very often. However, even if you don't make many calls you are still paying a rental fee on the telephone line, often as a base monthly charge included with other features like call waiting, forwarding or voicemail. At its cheapest, you're looking at $20.95 per month on Telstra's HomeLine Budget plan, and that's only if you use BigPond as your Internet service provider. If you use a different ISP (such as iiNet), Telstra will charge you $27.95 as the minimum line rental.

Naked ADSL means you no longer have to pay line rental, as the Internet service can exist on a telephone line that doesn't have a phone service attached. The line rental fee isn't completely gone: ISPs are still charged by Telstra for using the copper line to your house, and they often pass this cost on to consumers within the price of the Naked ADSL plan. However, even taking this into account, Naked ADSL often works out cheaper than a traditional broadband plan with a typical Telstra phone line rental fee.

If you still want to make telephone calls from a home phone, many Naked ADSL ISPs also offer a voice over Internet protocol (VoIP) service. VoIP calls are much cheaper than regular landline calls — especially if you are calling overseas — and you can even use your existing telephone by connecting to your router through a small adapter. Many ISPs offer a VoIP account for a small fee (or for free) with a Naked ADSL service. You can also use a third-party, software-based Internet telephony service, such as Skype.

Naked ADSL downsides

There are a few technical disadvantages to Naked ADSL. For instance, you can't make telephone calls without power. As well as disconnecting you from the Internet, power loss means your VoIP connection will cease to function, which means you won't be able to make 000 emergency calls. This often isn't seen as a major drawback these days because most people own mobile phones. In addition, many people tend to have cordless phones that rely on a powered base station anyway, and so can't be used in the event of a blackout. Ensure that you have alternate means of communication in such a situation before considering a Naked ADSL service.

Because Naked ADSL uses a phone line without a dial tone, you can't use a regular fax machine (which needs a dial tone). Additionally, faxes cannot be sent over VoIP reliably due to data compression. Some VoIP providers, such as Engin, have trialled a fax service over VoIP, but this has proved to be unreliable. If your business still relies on faxes, and you also want to make the switch to Naked ADSL and VoIP, then you can look into a Web-based fax service (such as Utbox) that sends faxes for you and also allows you to receive them via e-mail.
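To make the cost comparison above concrete, here is a minimal back-of-envelope sketch in Python. The line-rental figure ($27.95 per month with a non-BigPond ISP) comes from the article; the two broadband plan prices are hypothetical placeholders, since real plan pricing varies by ISP, quota and contract.

```python
# Rough annual cost comparison: ADSL2+ with line rental vs. Naked ADSL.
# The line rental figure is from the article; the two plan prices below are
# hypothetical examples only; substitute quotes from your own ISP.

TELSTRA_LINE_RENTAL = 27.95   # minimum monthly line rental with a non-BigPond ISP (article figure)
ADSL2_PLAN = 49.95            # hypothetical ADSL2+ plan price per month
NAKED_ADSL_PLAN = 69.95       # hypothetical Naked ADSL plan price per month (line cost built in)

def annual_cost(monthly_parts):
    """Sum the monthly components and scale to a year."""
    return 12 * sum(monthly_parts)

traditional = annual_cost([ADSL2_PLAN, TELSTRA_LINE_RENTAL])
naked = annual_cost([NAKED_ADSL_PLAN])

print(f"ADSL2+ plus line rental: ${traditional:,.2f} per year")
print(f"Naked ADSL:              ${naked:,.2f} per year")
print(f"Difference:              ${traditional - naked:,.2f} per year")
```

With those placeholder prices the naked plan works out roughly $95 a year cheaper; the real comparison depends entirely on the plans you are quoted, which is why the line-rental component, rather than any specific price, is the thing to watch.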
Transition Towns: Utilizing Tools of Resilience to Challenge a Globalized World
Emily Richey, Advisor: Kate Bjork

The Transition Town Movement started in 2006 in the United Kingdom, and it has spread globally since (Transition Sydney, 2008). Inspiration for this movement came from the fragility of our current social and economic systems, peak oil, and global climate change. Transition Town communities understand that the world is facing great change in the coming decades. These towns are making an effort to improve the self-sufficiency and resiliency of their communities. The Transition Town Movement has established many tools for towns and communities to use when attempting to create this resilience. One of these tools is the Transition Network Principles. These eight principles enable Transition Towns to reflect on where they are in the process of building resiliency. For this study, eight Transition Town Initiators from Hervey Bay, Sydney, Byron Shire, Heybridge, Derwent, South Hobart, Water Works Valley, and the Sunshine Coast, all towns in the eastern states of Australia (Tasmania, New South Wales, Queensland), participated in interviews about the implementation of each of these principles in their respective town. Analysis of the data compared the actions of a Transition Town to the definition of each principle. The results represent which stage of progress each Transition Town is in on the way to creating resiliency. Additionally, questions about how each Initiator and Transition Town defined, viewed, utilized, and challenged globalization were included in the interviews. The results demonstrate how globalization has affected the positive spread of Transition Towns, but also how greatly globalization has negatively affected the systems we live within, such as economy and ecosystems. The results also helped to further explain the necessity of and motivation behind the Transition Town Movement and its Initiators. Transition Towns have used globalization to their benefit, but more consequentially, have risen to challenge the current global system, and create more resiliency and self-sufficiency by implementing the Transition Network Principles.
Facebook unveils group-friendly function
Genevieve Roberts

The social media website Facebook has announced a new function for groups to share information. The website, which has more than 900m active users, has launched a new function that lets members of group pages see which other users have seen updates.

A spokesperson said: "The new groups feature acts like a read receipt. When you visit a group, you can view who's seen each post. This way you can stay updated on the group's activity. For example, in your football group you can post the new practice time and then see who got the update."

The number of people who have become digitally dependent is estimated by psychiatrists to have risen 30 per cent over the past three years. The Capio Nightingale Hospital in Marylebone, North London, launched a technology-addiction service two years ago and has seen a 15 per cent rise in patients over the past 12 months.

Psychiatrist Dr Nerina Ramlakhan, author of Tired But Wired, also works with law firms and investment banks across London. She has seen a 30 per cent rise in people suffering from digital dependency over the past three years, and has helped MPs, footballers and actors, as well as lawyers and investment bankers. "People are getting locked into a cycle of technology at work, and then surf the Net or use Facebook to unwind," she said. "They become exhausted, but cannot switch their brains off."

Each day, more than 300m photos are uploaded to Facebook, and 526m people log in, up 41 per cent from a year ago.
Energy We Can Live With Karl GrossmanProfessor SUNY College at Old Westbury Presentation at SUNY College at New Paltz Energy we can live with. Yes. It’s here, it can sustain us, it can allow us to thrive—without life-threatening power. But getting from here to there will not be easy. It will take individual and group action because the energy deck is stacked—from pressure especially from the oil, coal and nuclear industries. Let me make some remarks—and then let’s have a discussion with you saying what you think we can do to implement safe, renewable energy technologies. They are here today. As this magazine, the respect British journal New Scientist declared in a special issue not long ago, “The UN says the renewable energy that can already be harnessed economically would supply the world’s electricity needs 15 times over.” Clean renewable power technologies can be employed to “achieve a colossal environmental win…It’s time we…got on with making it a reality.” New Scientist goes on to present details on solar power, wind energy, tide-power, geothermal energy and other technologies that are here today. It declares: “A world run on renewables is no longer a hippy’s fuzzy green dream. “ It’s time, it says, we “make it a reality.” But we don’t live in the fairest of energy worlds. Take oil. Do you remember—just two years ago—when the price of gasoline was skyrocketing: to $4 and $4.25 and $4.50 a gallon and more. The oil companies were claiming the fault was China and India going car-crazy and guzzling up gas, problems in the Middle East, then it was refinery capacity, and all along—if the ban on drilling in areas on the continental shelf offshore was only lifted, everything would be different. Meanwhile, filling up a car, at 40 or 50 bucks a shot, was hurting people badly, impacting an already bad economy. And the oil companies were raking in record profits—billions upon billions of dollars. People were getting angrier and angrier thinking some kind of price-rigging was going on. You think? Then, suddenly, the price of gas went down. And ever since it’s been down to about $3 a gallon. That’s the price I just paid on the Thruway coming here. The price of a barrel of crude has dived—from a high of $145 two years back to half that. Yet people are still car-crazy in China and India, problems continue in the Middle East, no new refineries have been built, and after the mammoth oil spill in Gulf of Mexico, restrictions on offshore oil drilling have been expanded. Do you think the oil industry is manipulating the market, grabbing our money to make windfall profits when it can, and is deep in deception? I’ve thought so for years. Let me tell a story—of how decades ago I broke the story of the oil industry exploring in the Atlantic—and received my first lesson in oil industry honesty, an oxymoron. I was a reporter for the daily Long Island Press and got a tip from a fisherman out of Montauk who said he had seen the same sort of vessel as the boats he observed searching for oil when he was a shrimper in the 1940s in the Gulf of Mexico. I spent the day telephoning oil company after oil company. Public relations people for each said, no, we’re not involved in looking for oil in the Atlantic. In the Atlantic? they scoffed. I was leaving the office when there was a call that a PR guy from Gulf was on the phone. He said he checked and, yes, Gulf was involved in searching for oil in the Atlantic—in a “consortium” of 32 oil companies. These included the companies that all day issued flat denials. 
As to oil spillage at offshore rig, I worked the Atlantic offshore oil drilling story for years which included visiting the first rig set up—off Nova Scotia. Offshore drilling is dangerous in the Atlantic or the Gulf or anywhere. My article began: “The rescue boat goes round and round…as the man from Shell concedes, ‘We treat every foot of hole like a potential disaster.’” On the rig were capsules to eject crew members in an accident. I wrote, “Workers may all be kept in one piece, but erupting oil won’t, the man from Shell admits.” He acknowledges that “booms and other devices the oil industry flashes in its advertising ‘just don’t work in over five-foot seas.’” So, he says, there are “stockpiles of clean-up material on shore. Not straw as in the States,” he says. “Here we have peat moss.” I found spills in offshore drilling and consequent damage to fisheries and other life as chronic—although we’re not supposed to know that. We’re to believe the Gulf disaster was an isolated incident. In fact, it’s drill, baby, spill. Might I recommend a very well-researched recent book, The Tyranny of Oil: The World’s Most Powerful Industry—and What We Must To Do Stop It by Antonia Juhasz. She writes: “The masters of the oil industry, the companies known as ‘Big Oil,’ exercise their influence…through rapidly and ever-increasing oil and gasoline prices, a lack of viable alternatives, the erosion of democracy, environmental destruction, global warming, violence, and war.” She cites a Gallup poll on “public perceptions of U.S. industry” and reports the oil industry “earned the lowest rating of any industry.” Americans are on to the oil industry—and they need to do a lot about it! And it’s not just Big Oil. When it comes to energy, it’s Big Oil and Big Coal and Big Nuclear which manipulate U.S. policy, says S. David Freeman, and he should know. Freedman headed the New York Power Authority and also the Tennessee Valley Authority and authored the book Winning Our Energy Independence: An Energy Insider Shows How. Freeman calls oil, coal and nuclear “The Three Poisons.” And he stresses that we don’t need any of these poisons. He declares that the solar power that could be harnessed on 1 percent of the land in the U.S. “could generate electricity that, if converted to hydrogen, could completely replace gasoline,” that “our vast solar and wind potential…could meet all our energy needs, from driving our motor vehicles to heating our homes and other uses now being supplied by coal, nuclear, oil…We would have our renewable energy when, where, and however we liked it.” There’s a windfall at hand of safe, renewable, clean energy—if only it would be fully pursued. But there are industrial interests working with their partners in the U.S. government, who fight that. These renewable energy technologies—are energy that we can live with, energy that can unhook us from oil, coal and nuclear. But those industries don’t like that possibility. Consider hot dry rock geothermal energy. It turns out that below half the earth, two to six miles down, it’s extremely hot. When naturally flowing water hits those hot rocks and has a place to come up, you get geysers like in California or Iceland. But also water can be sent down an injection pipe to hit the hot dry rock below and rise up second production pipe as super-heated water that can turn a turbine and generate electricity or furnish heat. Scientists from Los Alamos National Laboratory built a model hot dry rock facility at Fenton Hill and showed that the technology can work. 
Here's a television piece I did: (A THREE-MINUTE ENVIROVIDEO TV NEWS PIECE ON HOT DRY ROCK GEOTHERMAL IS SHOWN.)

That was some statement from Dave Duchane, a respected, careful scientist, that "hot dry rock has an almost unlimited potential to supply all the energy needs of the United States and all the world." The New York Times said about hot dry rock geothermal: "The estimated energy potential of hot dry rock nationwide is 10 million quads…more energy than this country uses in thousands of years."

So what happened? A request for proposal—an RFP—was prepared by Los Alamos inviting industry to take over the Fenton Hill facility that you just saw and "produce and market energy" from it. But on its way to Washington, the RFP was cancelled by the Department of Energy under pressure, I've been told, by conventional energy industries. And the Fenton Hill facility has been decommissioned. And now there are claims being made that hot dry rock geothermal might be great but the initial drilling could cause earth tremors. The hot dry rock scientists say if that happens the tremors cease pretty quickly. But the technology is to a large degree stalled.

Some things can be done individually. The sun shines on where I live on Long Island, and up here and all over New York State, indeed throughout the U.S. and the world. As Sharp, a major manufacturer of solar panels, says: the sun is the answer. Last year, my wife and I had solar photovoltaic panels installed on the roof of our house. And now, most of the time, our electric meter spins backwards. The panels on the roof are not only supplying all the electricity we use but sending the excess back into the grid, for which we are paid. Our electric bill is now $5 a month, the minimum charge for the meter reader to come.

Meanwhile, the price of solar photovoltaic panels has been dropping fast and their efficiencies rising. SunPower Corp. of California this year announced new panels with a remarkable 24.2 percent efficiency—the rating NASA's solar panels have in converting sunlight to electricity. Also, we not only now have solar panels to generate electricity but thermal panels to heat water. And it is just amazing to see, in the middle of last winter, a cold winter, the water coming down from the roof at 100 and 120 degrees—on frigid days. Technology can be very good.

Solar is also a key to generating an optimum fuel—hydrogen—for locomotion. As Lester Brown, founder of the Worldwatch Institute, says in his book, Eco-Economy: Building an Economy for the Earth, "In the eco-economy, hydrogen will be the dominant fuel…Since hydrogen can be stored and used as needed, it provides perfect support for an energy economy with wind and solar power as the main pillars."

There's a very, very good U.S. Department of Energy Laboratory, the National Renewable Energy Laboratory in Golden, Colorado. It's a beacon for a sustainable energy future. At NREL, they're working on using solar to produce hydrogen from water. Here's my interview with John Turner, senior scientist at NREL. (AN ENVIROVIDEO TV INTERVIEW WITH TURNER IS PLAYED.)

Here's Dr. Turner, a respected, careful scientist speaking of "sunlight to hydrogen—basically an inexhaustible fuel…the forever fuel." The hydrogen-through-solar-energy approach of NREL is the way Volkswagen envisions a hydrogen infrastructure. It has opened a solar hydrogen filling station in Germany, built in collaboration with the German solar energy company Solvis.
You drive up and see a large solar array which, through electrolysis, produces hydrogen from water. And you fill’er-up—with hydrogen. That combination of endless hydrogen from water and endless solar from the sun to produce it is being called green hydrogen. But, again, those vested interests would get into the act. A scheme started under the administration of President George W. Bush—with its cronies in the oil, coal and nuclear industries—involves construction of a nuclear power plant at Idaho National Laboratory to make hydrogen. To get clean hydrogen there’s this push to use atomic power with all its dangers: the potential for catastrophic accidents, routine radioactive emissions, the production of nuclear waste that somehow must be safeguarded for millennia, problems of nuclear proliferation, and so forth. Talking about screwing up a great idea. There’s a coalition—the Green Hydrogen Coalition—which includes Greenpeace, Sierra Club, Friends of the Earth and other groups—fighting for the hydrogen/solar economy, not the hydrogen/nuclear scheme. What I’ve been most impressed in visiting the National Renewable Energy Laboratory is that whatever division I went to there, the vision is of boundless safe, clean, renewable energy energy. Not only by using solar to generate hydrogen but through a new amazing solar energy technology called “thin film photovoltaic.” Developed at NREL, rather than conventional rigid solar panels, it involves flexible membranes impregnated with high-efficiency solar collectors. These sheets of solar-collecting membranes can be applied over glass buildings. Skyscrapers that rise in Manhattan or buildings here on the New Paltz campus can serve as electricity generators. “Thin film photovoltaic” is now being widely used in Europe. Scientists at NREL’s Solar Energy Research Facility say that through solar we could get all the energy we’d ever need. But then you go to NREL’s National Wind Technology Center where the scientists speak about wind providing all the energy we’d ever need. They were pioneers in the great advances in wind energy in recent years—especially the development of turbines with highly-efficient blades and wind turbines that can be…and are...being placed on land and increasingly, in Europe, offshore. Bluewater Wind is getting set to build the first offshore wind farm off Delaware. It would be this country’s first. Wind is now the fastest growing energy technology. It has been expanding 25 percent a year and that kind of future annual growth is predicted. Wind energy costs a fifth of what it did in the 1980s—and is now fully competitive with other energy technologies—and a continuing downward cost trend is anticipated. And at NREL’s National Bioenergy Center, the scientists say biomass could fulfill a huge portion of the world energy needs—and we’re not talking here about using food stocks, corn, but switchgrass and poplar trees and other, again, non-food energy crops. The scientists at NREL might not be right on any single energy source—but all together these and other renewable energy sources, can, in a mix, provide all the energy we need. And energy we can live with. As NREL declares on its website: “There’s no shortage of renewable energy resources.” And there’s so many more: Consider: wave power. In Portugal, a wave power project has just begun. Pelamis Wave Power, a Scottish company, has engineered it—a line of machines will be tapping nature’s constant ocean power. And tidal energy. 
The government of Nova Scotia is moving ahead with tapping the enormous power of the 40 and 50 foot tides that twice a day rush in and out of the Bay of Fundy—driven by the moon. And energy from algae. And micro or distributed power, smart grids, cutting energy loss from transmitting electricity over long distances. And throughout, we must remember efficiency, a key across the board. Here’s my interview with energy analyst Amory Lovins. (ENVIROVIDEO TV INTERVIEW WITHLOVINS TAPE IS SHOWN) Renewables Are Ready was the title of a book written by two Union of Concerned Scientists staffers in 1995. They’re more than ready now. But there’s much work to do challenging the manipulation and, yes, tyranny of Big Oil, Big Coal and Big Nuclear to make that possible. Now, let’s have a discussion on what you think we should and can do to bring on safe, renewable energy technologies. Karl Grossman is a full professor of journalism at the State University of New York/College at Old Westbury. Among the six books he has authored are: Cover Up: What You Are Not Supposed To Know About Nuclear Power and Power Crazy. He has given presentations on energy and environmental issues around the world. He hosts the nationally-aired Enviro Close-Up produced by EnviroVideo, a New York-based TV company. He narrated and wrote EnviroVideo’s award-winning documentaries The Push To Revive Nuclear Power; Nukes In Space: The Nuclearization and Weaponization of the Heavens and Three Mile Island Revisited. He is the chief investigative reporter of WVVH-TV on Long Island. His articles have appeared in publications including The New York Times, The Boston Globe, USA Today, The Miami Herald, The Village Voice, Extra!, E, The Environmental Magazine, The Globe and Mail, The Nation, The Progressive, The Philadelphia Inquirer, Newsday, The Christian Science Monitor, The Crisis, Mother Jones and The Ecologist. His column appears weekly in newspapers of The Southampton Press Group and other newspapers on Long Island. Honors he has received for journalism include the George Polk, James Aronson and John Peter Zenger Awards. He can be reached by e-mail at kgrossman@hamptons.com. His home address is: Box 1680, Sag Harbor, New York, USA, 11963. Nuclear Weapons, War and the Media Beyond the Bomb ConferencePace University New York CityNovember 4, 2006 Karl GrossmanProfessor, State University of New York, College at Old Westbury In examining the interplay between nuclear weapons, war and the media, it is instructive to examine how The New York Times, the paper of record in the United States, gave direction to press coverage in this country as the so-called “nuclear age” opened. It’s a shocking story. As Beverly Deepe Keever, a reporter for Newsweek, The New York Herald Tribune andThe Christian Science Monitor before becoming a professor of journalism at the University of Hawaii, details in her important book, News Zero: The New York Times and The Bomb, “from the dawn of the atomic-bomb age, [William L.] Laurence and The Times almost single-handedly shaped the news of this epoch and helped birth the acceptance of the most destructive force ever created.” Who was William L. Laurence? He was the granddaddy of embedded reporters—plus. A science reporter forThe Times, he was hired by the Manhattan Project, the World War II crash program to build an atomic bomb and, while working for the government remained on The Times payroll, his Times weekly salary going to his wife while he also was paid by the government. 
The arrangement was made by the Manhattan Project’s head, General Leslie Groves, with the publisher and editor of The Times. Keever writes: “To sell the bomb, the U.S. government needed The Times...and The Times willingly obliged.” At the Manhattan Project, Laurence participated in “the government’s cover-up of the super-secret Trinity shot.” Held a month before the U.S. dropped atomic bombs on Hiroshima and Nagasaki, in the Trinity test a nuclear device was exploded for the first time. Laurence prepared a press release to “disguise the detonation and resulting radiation.” The “fake news” claimed there had been a “jumbo detonation of an ammunition magazine filled with high explosives at the 2000-square mile Alamogordo Air Base.” The Timesman didn’t stop with this deception. He prepared a 10-part series at the Manhattan Project glorifying its making of atomic weapons—and all but ignoring the dangers of radioactivity. And after the bombs fell on Japan, The Times itself ran the series and “on behalf of the government” distributed it free “to the press nationwide.” Laurence’s avid pro-nuclear writings continued when he returned to The Times this becoming an institutional stance of the publication. The Times, writes Keever, “became little more than a propaganda outlet for the U.S. government in its drive to cover up the dangers of immediate radiation and future radioactivity emanating from the use and testing of nuclear weapons.” The Times, she writes, “tolerated or aided the U.S. government’s Cold War cover-up that resulted in minimizing or denying the health and environmental effects arising from the use in Japan and later testing of the most destructive weaponry in U.S. history in Pacific Islands once called paradise….The Times aided the U.S. government in keeping in the dark thousands of U.S. servicemen, production workers and miners, even civil defense officials, Pacific Islanders and others worldwide about the dangers of radiation.” Other Times writers who participated in the pro-nuclear spin included its military editor, Hanson Baldwin. Writes Keever: “In editorials and articles, The Times clearly favored Operation Crossroads,” a major nuclear test in the Pacific, and when President Truman “postponed the first scheduled dates for the test, Baldwin complained that ‘well-meaning but muddled persons, in and out of Congress, are proposing the permanent cancellation of the tests.’” The atomic dysfunction at The Times went on and on. The nuclear testing-caused tragedy “from 1947 to 1991 unfolding in the faraway Marshall Islands,” for instance, was “largely untold by The Times.” And the dysfunction continues today as The New York Times leads U.S. media in pushing for a “revival” of nuclear power. Notes Keever, “A huge outcry followed the revelation of a breach of reporting ethics by a single individual when The Times in mid-2003 exposed the plagiarism and fraud committed…yet the issues raised” by her research “are far more pervasive and more importantly condoned and institutionalized as part of media management policies and practices. This investigation serves as a wake-up call for journalists of today and tomorrow.” It’s more than a wake-up call for journalists today. It could be a critical to the lives and survival of millions. I helped Keever with her book sharing with her the work of Deborah Lipstadt, professor of Modern Jewish and Holocaust Studies at Emory University, the author of Beyond Belief: The American Press and the Coming of the Holocaust, and Kenneth Libo, author and curator. 
Beyond Belief is about how much was known about the Holocaust—as hundreds of thousands and then millions of Jews were being killed in the 1930s and 1940s—and this was intensely covered by the Jewish press. Yet The Times, Lipstadt writes in Beyond Belief, downplayed the horrible news coming out of Europe. Lipstadt writes that if The Times had done solid journalism about the situation, “it is possible that other American papers would have followed suit”—and what was happening could have been widely exposed—and efforts made to stop it. Libo was responsible for exhibits on this issue including one at the National Museum of American Jewish History which featured enlarged photocopies of small, back-page Times articles on the shipping off of Jews to concentration camps placed alongside the major stories on this which ran in Jewish papers. A sign at the exhibit, Keever notes, quoting an article by me, read: “Setting the tone for coverage in the general press” of the Holocaust was The New York Times which “downplayed” the news. Keever ends her book stating that “history might have unfolded quite differently if The Times had reported the Holocaust more prominently and vigorously,” and, likewise, “History might also have unfolded quite differently if The Times had given more than News-Zero coverage of the effects” of the “nuclear holocaust” of our time. What should The Times and other media be reporting? First and foremost, that nuclear weapons and nuclear power are two sides of the same coin—that there is no “peaceful atom.” Then it should examine the proposition that the only real way to end the threat of nuclear weapons spreading throughout this world today is to also put a stop to nuclear technology. Radical? Yes, but consider the even more radical alternative: a world in which scores of nations will be able to construct nuclear weaponry because they possess nuclear power technology. There are major parts of the Earth—Africa, South America, the South Pacific, and others—that have now been designated nuclear-free zones. If we are really to have a world free of the horrific threat of nuclear weapons, the goal needs to be the designation of this entire planet as a nuclear-free zone—no nuclear weapons, no nuclear power. Radical? Yes, but consider the alternative—trying to keep using carrots and sticks, juggling on the road to inevitable nuclear disaster. A nuclear-free world is the only way, I believe, through which humanity will be free of the specter of nuclear warfare. Some will say putting the atomic genie back into the bottle is impossible. I say: anything people have done, other people can undo. Especially if the reason is good. And the prospect of massive loss of life from nuclear destruction is the best of reasons. As Amory and Hunter Lovins wrote in their book, Energy/War: Breaking the Nuclear Link: “All nuclear fission technologies both use and produce fissionable materials that are or can be concentrated. Unavoidably latent in those technologies, therefore, is a potential for nuclear violence and coercion which may be exploited by governments, factions.” “Little strategic material is needed to make a weapon of mass destruction. 
Nagasaki-yield bomb can be made from a few kilograms of plutonium, a piece the size of a tennis ball.” “A large power reactor,” they noted, “annually produces…hundreds of kilograms of plutonium; a large fast breeder reactor would contain thousands of kilograms; a large reprocessing plant may separate tens of thousands.” Civilian nuclear power technology, they say, provides the way to make nuclear weapons—furnishing the materiel and trained personnel. That’s how India got The Bomb in 1974. Canada supplied a reactor for “peaceful purposes” and the U.S. Atomic Energy Commission trained Indian engineers. And lo and behold, India had nuclear weapons. Where have media been in examining the operations of the International Atomic Energy Agency—the global nuclear-pusher? The IAEA was formed as a result of President Eisenhower’s 1953 “Atoms for Peace” speech before the UN General Assembly. Eisenhower proposed the creation of an international agency to promote civilian applications of atomic energy and, somehow at the same time, control the use of fissionable material—a dual role paralleling that of the U.S. Atomic Energy Commission. In 1974, the AEC was abolished after the U.S. Congress concluded that, in theory and practice, it was in conflict of interest. But the IAEA—in the AEC’s image—remains with us. The IAEA’s mandate: “To accelerate and enlarge the contribution of atomic energy to peace, health and prosperity throughout the world.” From its outset, the IAEA has been run by atomic zealots. Its first director general was Sterling Cole, who, as a U.S. congressman was an original member and then chairman of the Joint Committee on Atomic Energy, as extreme in its promotion of nuclear technology as the AEC. Later, Hans Blix became IAEA director general—after, his official IAEA biography stresses, leading a move in his native Sweden against the effort to close nuclear power plants there. Blix was outspoken in insisting nuclear technology be spread throughout the world—calling for “resolute response by government, acting individually or together as in the [IAE] Agency.” Blix’s long-time IAEA second-in command: Morris Rosen—formerly of the AEC and before that the nuclear division of General Electric. After the Chernobyl nuclear plant disaster, he rendered this advice: “There is very little doubt that nuclear power is a rather benign industrial enterprise and we may have to expect catastrophic accidents from time to time.” As for the current IAEA director general, Mohamed ElBaradei, he too, is a great nuclear booster. “There is clearly a sense of rising expectations for nuclear power,” he told a gathering in Paris last year organized by the IAEA entitled “International Conference on Nuclear Power for the 2lst Century.” The IAEA has been doing everything it can to fuel those expectations—scandalously downplaying the public health consequences of nuclear accidents including the Chernobyl disaster, promoting all sorts of atomic technology and, with its nearly $300 million annual budget, encouraging the spread of nuclear power around the globe. The War & Peace Foundation has wisely proposed that the IAEA be replaced with a World Sustainable Energy Agency which would promote the use of safe, clean, non-lethal energy technologies. Meanwhile, true nuclear non-proliferation, as Amory and Hunter Lovins state, requires “civil denuclearization.” Even Admiral Hyman Rickover, the “father” of the U.S. 
nuclear navy and manager of construction of the first commercial nuclear plant in the U.S., in Shippingport, Pennsylvania, in the end came to the conclusion that the world must—in his words—“outlaw nuclear reactors.” Rickover, in a farewell address, told a committee of Congress in 1982: “I’ll be philosophical. Until about two billion years ago, it was impossible to have any life on earth: that is, there was so much radiation on earth you couldn’t have any life—fish or anything. Gradually, about two billion years ago, the amount of radiation on this planet and probably in the entire system reduced and made it possible for some for some form of life to begin.” “Now,” Rickover went on, “when we go back to using nuclear power, we are creating something which nature tried to destroy to make life possible…Every time you produce radiation, you produce something that has life, in some cases for billions of years, and I think there the human race is going to wreck itself, and it’s far more important that we get control of this horrible force and try to eliminate it.” As for nuclear weaponry, the “lesson of history,” said the retiring admiral, is that in war nations “will use” whatever weaponry they have. Where have media been on focusing on these realities? In the case of The New York Times and most of mainstream media: in league with a power structure archly pro-nuclear…at News Zero. Now, positively, the media revolution of our time and what it can mean to get the truth out—in Q&A. Karl Grossman is professor of journalism at the State University of New York/College at Old Westbury and coordinator of its Media & Communications Major. A major concentration for decades has been nuclear technology. Among the six books he has authored are: Cover Up: What You Are Not Supposed To Know About Nuclear Power; The Wrong Stuff: The Space Program’s Nuclear Threat To Our Planet; Power Crazy; and Weapons in Space. Grossman has given presentations on nuclear issues around the world. He has long also been active on television. He narrated and wrote the award-winning documentaries: The Push To Revive Nuclear Power; Nukes In Space: The Nuclearization and Weaponization of the Heavens; and Three Mile Island Revisited, all produced by EnviroVideo (www.envirovideo.com). For the past 15 years, Grossman has hosted Enviro Close-Up, aired nationally on Free Speech TV, the DISH satellite network (Channel 9415), and on more than 100 cable TV systems and on commercial TV. His magazine and newspaper articles have appeared in numerous publications. He is a charter member of the Commission on Disarmament Education, Conflict Resolution and Peace of the International Association of University Presidents and the United Nations. He is a member of the boards of directors of the Nuclear Information and Resource Service-World Information Service on Energy and Fairness and Accuracy In Reporting, and board of advisors of the Global Network Against Weapons & Nuclear Power in Space. He can be reached at kgrossman@hamptons.com or Box 1680, Sag Harbor, NY 11963. Nuclear Engineering, Ethics and Public Health 5th International ConferenceProblems and Practice of Engineering Education Tomsk Polytechnic UniversityTomsk, SiberiaMay 26, 2002 Doobrahye Ootrah. The Patriarch of Russia, Alexey II, spoke here yesterday afternoon about the importance of combining learning in science and engineering with education in the humanities. I would like to humbly add to that wise man’s counsel with some thoughts. 
We have come to a time in my country and yours, indeed in the world as a whole, that education in the humanities—especially in understanding and applying ethics and moral principles—is critical, vital, indeed should be required in science and engineering. First, I am a professor of journalism and let me say that education in the humanities—in history and culture and values—is also critical for journalists. And some journalists are, unfortunately, remiss in this central area for their work, too. At my college of the State Universityin New York, in classes I and others teach for future journalists, we try to educate them in this regard. The problems of ethics and journalism must be the subject of another day. But I do want to make it clear, I am not picking on another profession. I have written several books and done much investigating into nuclear technology—including the role of nuclear engineers and scientists. My subject today at this conference on “Problems and Practice of Engineering Education” is, in specific, “Nuclear Engineering, Ethics and Public Health.” Several weeks after the 1986 catastrophe at the Chernobyl nuclear plant, Morris Rosen, a nuclear engineer from the United States—formerly with our government—who moved on to become long-time director of nuclear safety at the International Atomic Energy Agency, the Number 2 man at this agency—said, and I have his statement in my hand: “There is very little doubt that nuclear power is a rather benign industrial enterprise and we may have to expect catastrophic accidents from time to time.” To this day, the nuclear engineers and scientists of the International Atomic Energy Agency—created by the United States to somehow promote and regulate nuclear power at the same time—have sought to minimize, indeed deny, the terrible public health impacts of Chernobyl. They maintain that but 31 people died, that the main health effect has been psychological. Chernobyl was not an anomaly, a unique event. I have in my hand an official analysis by the U.S. Nuclear Regulatory Commission projecting the impacts—in “early fatalities,” “early injuries,” “cancer deaths” and property damage—in the event of a meltdown with breach of containment at every nuclear plant in America. This analysis, “Calculation of Reactor Accident Consequences,” estimates for the Indian Point 2 and 3 nuclear plants—just north of New York City: 46,000 "early fatalities" from 2 and 50,000 from 3. 141,000 "early injuries" from 2 and 167,000 from 3. 13,000 "cancer deaths" from 2 and 14,000 from 3. And property damage -- $274 billion from 2 and $314 billion from 3 (and these are in 1980 dollars; a trillion each today. And these are not just numbers. These represent people’s lives. Before our Three Mile Island accident in 1979, American nuclear engineer Norman Rasmussen, professor of nuclear engineering at the Massachusetts Institute of Technology, said getting injured or killed in a nuclear plant accident was “like getting hit on the head by a meteor while crossing a street.” Some meteor. Some street. Later, the U.S. Nuclear Regulatory Commission, under pressure of a U.S. Congressional committee, admitted in this statement that the “likelihood of a severe core melt accident” in “a population of 100 reactors operating over a period of 20 years” was 45%—and that this might be off by 5 or 10%. So the chances, it said, are about 50-50. Nuclear technology—and engineering and science in general—are not value-free. At the end of the Manhattan Project, the U.S. 
program which first invented the atomic bomb, J. Robert Oppenheimer, its scientific director, told Edward Teller, who was pushing on to develop the hydrogen bomb, “We physicists have sinned.” Today, good engineering and science have revolutionized safe, clean, sustainable, non-nuclear energy technologies. Generating energy from the wind is now far cheaper than nuclear power. Huge strides have been made in solar energy, geothermal power, there is appropriate hydropower, tidal power, wave power, the production of hydrogen fuel by using solar energy to separate hydrogen and oxygen in water—and on and on. Still, in my country, what has been called the “nuclear establishment,” drives on. Nuclear engineers and scientists working for the government and industry in the U.S. push the technology that gives them money and power—and forget about good science. Forget about ethics. Forget about morality. Forget about honest, independent epidemiology. Forget about life. In medicine, all over the world the first principle for all doctors under the Hippocratic Oath is “do no harm.” This is not the case, I submit, for many nuclear engineers and scientists. In my country, with many nuclear engineers and scientists involved, there is a push to “revive” nuclear power. There has not been a nuclear plant sold in America since our Three Mile Island accident. Fifty new nuclear plants would be built. The operating years of existing reactors would be extended from 40 to 60 years—inviting catastrophe from machines never viewed as running that long. Some nuclear waste would be smelted down and incorporated into consumer items like car bodies, pots and spoons and forks. High level waste would be sent to Yucca Mountain in Nevada, a place on or near 32 earthquake faults. The huge terrorist threat against nuclear plants is not being realistically dealt with. One of the jets piloted by terrorists that flew into the World Trade Center minutes before flew over the Indian Point nuclear plants. But U.S. government agencies and corporations—and engineers and scientists with a vested interest in nuclear technology—continue pushing. Here in Russia, where your Ministry of Atomic Energy wants to build 10 new reactors and make your wonderful country a garbage dump for large amounts of the world’s nuclear waste, there is a comparable situation. The brave Lydia Popova, who broke from your Ministry of Atomic Energy, has written about the ministry and “its commitment…to serve the interests of the [nuclear] industry and a select group of nuclear specialists at the expense of the people.” What’s to be done? Education—sound, solid education imbuing moral values and broader understanding pioneered here at Tomsk PolytechnicUniversity—for scientists and engineers must occur. Widely and intensely. At the least. Education and democracy, of course, go hand in hand. The kind of critical issues I’ve spoke about today are too important to be left to nuclear engineers and scientists—many who would prefer to work in secret. We need transparency. We need openness. We need full public participation and democratic involvement. We need to make sure life is put first. 
As the environmental plan for Russia advanced by the Center for Russian Environmental Policy, led by your great scientist and my friend, biologist Alexey Yablokov, states: the "environment must be healthy for both long-time successful existence of the living nature and assurance of human health." Or as another great Russian scientist of conscience, nuclear physicist Andrei Sakharov, has said: "The [long-term] effect of radioactive carbon does not reduce the moral responsibility for future lives. Only an extreme deficiency of imagination can distinguish the suffering of contemporaries [from] that of posterity." In respect to the Holy Father's comments on integrating religion and education, we have in America a principle of separation of church and state. But as an American Jew, there's nothing wrong, I believe, in considering a passage from the Bible—important to Russian Orthodox and Christians of all kinds, and Jews, who, I mention in all humility, wrote the book. In Deuteronomy it is written: "I have set before you life and death, blessing and curse. Therefore, choose life, that you and your descendants may live." People from around the world, lawyers and plumbers, professors and bus drivers, musicians and engineers and scientists, must choose life—and learn about why. Spaceeba.
Plasma Experiment celebrates Max-Planck-Institute anniversary on board ISS

(Nanowerk News) On 27th January 2010 the 25th series of experiments studying complex plasmas will start on board the international space station ISS. Physicists from the Max-Planck-Institute for extraterrestrial Physics in Garching, Germany, will use them to study fundamental structure-forming processes to better understand what happens in liquids and solids.

That matter exists in three states is widely known: as solid, liquid or gas. Our Universe, however, is dominated by a fourth state of matter: plasma. This forms if a gas is heated to very high temperatures, so that its molecules dissociate into ions and free electrons. A plasma is regarded as the most disorganised state of matter. Researchers at the Max-Planck-Institute for extraterrestrial Physics, however, have found that under certain conditions plasmas can become liquid or may even crystallise. These are called "complex plasmas" and allow new insights into the physics of liquids and solids. Plasma physicists use them to study melting and crystallisation, motion of lattice defects in crystals, or liquid effects and other processes by looking at single atoms.

[Figure: Phase separation (droplet formation) in a binary complex plasma on board the ISS.]

Complex plasmas consist of tiny particles (about one thousandth of a millimetre) that are suspended in a plasma and carry a highly negative electric charge. Due to the strong interaction between the particles, they can form regular structures, either liquid or solid. Since Earth's gravitational field interferes with these processes, experiments with complex plasmas are carried out in space.

Research on complex plasmas with the PKE-Nefedov laboratory in 2001 was the first science project on board the international space station ISS and the most successful one during the first years. Its successor PK-3 Plus has already been running for four years and provides again unique results. The new series of experiments, carried out from 27th to 29th January, is already the 25th mission to study complex plasmas in the absence of gravity. Moreover, PK-3 Plus has now been installed permanently in the new ISS module MIM-2, and will be its first scientific experiment.

One of the experiments in the PK-3 Plus laboratory will deal with "binary" complex plasmas: if two kinds of particles with different sizes are suspended in a homogeneous plasma, one could expect them to mix due to mutual repulsion. Previous experiments on board the ISS, however, have shown a clear phase separation of the two particle clouds (see figure 2). "This phenomenon is well known from many different systems, such as molecular liquids or colloidal suspensions, and has been studied for a long time," says Hubertus Thomas, MPE-scientist and coordinator of the PK-3 Plus experiments. "In complex plasmas, for the first time we can now study these processes looking at the movement of individual particles and we hope that our latest experiments will lead to new insights into the physics of phase separation."

The study of complex plasmas is interdisciplinary, fundamental research. As in other fundamental research before, however, this work initiated a new approach in applied research: the results and experience gained with the plasma experiments on board the ISS and in the lab led to a new medical field, the so-called plasma medicine.
Currently a clinical trial is carried out to study how plasmas can be employed for contact-free sterilisation of wounds, hand disinfection in clinical environments or treatment of gingivitis. Source: Max-Planck-Institute for Extraterrestrial Physics
Sam Harris on Real Time with Bill Maher: simple logic is too simple

Sam Harris was on Real Time recently (last night?), the first time he's ever been on the show. The interview was a tad short, which was a pity, because at the very end he began to touch on one aspect that makes him a bit different from most in the so-called New Atheism, but first he talked a bit about his opposition to Barack Obama's nominating of a religious scientist (Francis Collins) to head the NIH, which is something he mentioned in an editorial in the New York Times a few weeks ago. That part starts at about 4:50 into the video.

The idea behind that is that any scientist with a religious persuasion will eventually allow him or herself to be swayed by religion and will end up with erroneous science, and if this sounds too simplistic to be true, it's because it is. Following the same logic, Sam Harris would have been opposed to one of George Lemaître's positions throughout his life (let's say his election as a member to the Royal Academy of Sciences and Arts of Belgium). So who was George Lemaître? He was the Belgian priest who proposed the idea of the Big Bang (then called the Hypothesis of the Primeval Atom), and spent almost a decade convincing Einstein and other physicists that it was the most logical model to explain the expansion of the universe and its original creation. Would it have made any sense to oppose the work of the scientist who proposed the idea of the Big Bang, or a nomination of his to a post of influence?

Near the end of the interview Sam Harris mentions a bit about his views on mysticism and why the non-religious need their own type of mysticism, but there wasn't enough time left in the interview to go over that. Luckily I know where the video is where he is able to talk at length on the subject. When he talks on this subject he becomes much more interesting, and you should definitely have a look at the video if up till now all you've heard from him are short interviews on the news on "let's not hold back from criticizing religion" and "religion and science can't coexist". Most people are at their most interesting when they are given a certain length in their own element to talk on subjects that concern them, and Sam Harris is no exception.

If you're a bit strained for time then skip ahead to 23 minutes into the video.
Square Enix to Launch Updated Final Fantasy IX for PC, iOS, Android
By David Murphy
Some of the game's tweaks will include enhanced graphics, achievements, and an autosave system.

If you're a big Final Fantasy fan, then Square Enix just made your day: The popular Final Fantasy IX game is going to be released on the PC, iOS, and Android at some point in 2016. While Square Enix didn't specifically say that the game is being launched in North America, it stands to reason that it'll arrive there at some point as well. We're most happy about the game's official website having an English translation, which certainly looks like a little bit of a tease for a wider release of the game. That said, we don't know exactly when Square Enix will launch the game in Japan, nor what the time delay will be (if any) between that launch and the North American / European launch.

We also don't know all of the changes coming to the game with its remake, but we do know a few: Square Enix is going to be updating the game's graphics and adding in a few more modern conveniences, like autosave and high-speed modes. It'll also come with achievements, for those who like showing off everything they do in a game, and various other "game boosting features," reports PC Gamer. If you're looking for more clues from the remake's first trailer, good luck: It's mostly cutscenes, with a slight highlight of visual differences for the game's combat system—nothing major is changing to the raw mechanics as far as we can tell, but it does look a bit better.

Interestingly enough, Square Enix has seemingly decided to skip any kind of remake for Final Fantasy VIII at the moment—having last announced the much-anticipated update to Final Fantasy VII in June of 2015. We're not sure why Square Enix decided to overlook the title (especially since this writer particularly enjoys it), and we haven't heard any rumors that the developer might go back and revisit this one at some point, but it's certainly possible.

The Final Fantasy IX remake will require iOS 7.0 or Android 4.1 at minimum—and Square Enix warns that the game might not run on certain Android devices. (It neglected to specify, which is fair given just how many Android devices there are.) You'll need at least an iPhone 5s, fourth-generation iPad, iPad Mini 2, or sixth-generation iPod Touch to play on iOS.
20 Best U.S. Airports for Tech Travelers
By Mark Sullivan, PCWorld

Walking past laptop-toting digital nomads who huddle around the outlets lining the concourse, you arrive at your gate with 30 minutes to spare. You have a 6-hour flight in front of you, and a laptop and a smartphone that need a full charge to keep you working and listening to music throughout the flight. You stalk the gate area. The two available outlets on the payphone are taken. No outlets on the walls. The remaining minutes before departure tick down. A baby is crying. (Please, please, please, you think, don't seat me next to the baby...). "Final call for boarding." Your laptop has an hour of life left, and so does your phone. When both are dead, your noise-canceling headphones will be useless. You board and approach your seat. You're in 16B. The baby, in 16C, is already crying...

Another day in the friendly skies. It's happened before, and it will happen again. But it doesn't have to be that way. Airports across the country are installing more outlets and improving their Wi-Fi signals--but some are moving much faster than others. And fortunately, these days you have some measure of control: On many trips you have a choice of airports, terminals, and airlines. If you only knew what tech amenities were waiting for you at the airport, you might think twice before choosing an airline that flies out of gates like the one described above.

PCWorld sent researchers all over the country to canvass the gates of the 40 busiest airports in the United States and to identify the tech winners and losers. In all, our airport auditors visited 3300 gates from coast to coast; they counted more than 17,000 electrical outlets, 5000 USB ports, and 1350 charging stations; and they performed hundreds of tests of airport Wi-Fi and cellular broadband service. For further details see "In Search of the Tech-Savvy Airport."

The charts on the following pages illustrate how each airport performed in these areas, with rankings of the top airports for overall tech amenities, the best terminals, and the best airports for Wi-Fi and cellular service. We also rated the major domestic airlines on their efforts to accommodate mobile, connected travelers--at the gates, in the planes, and online.

The Big Picture

Stepping back for a macro-level view of the data yields some interesting general findings about airports and airlines. For instance, the number of electrical outlets available in the nation's busiest airports is woefully inadequate. The average number of outlets (typically two AC plugs under a plate on the wall) for the U.S. airports we visited is about 5.5 per gate. But given that the number of wireless contracts for smartphones, laptops, tablets, and modems (almost 323 million, according to the wireless trade organization CTIA) now exceeds the U.S. population, most of the people waiting at any airport gate are likely to be carrying at least one such device. Take into account that mobile devices have notoriously short battery lives, and the traveler's dilemma comes into sharp focus.
No wonder you see people walking forlornly through the gate areas looking for an outlet--any outlet--to plug in to.

Wi-Fi service on airplanes is similarly scarce. Only about a third of the planes in the fleets of the ten U.S. carriers have Wi-Fi onboard, meaning that many passengers must work offline during flight and then sync with other users, apps, and machines after the flight lands. But time and tech march on. Offering Wi-Fi on a flight no longer strikes airlines as a novel and exotic perk, but rather as something in line with the expectations of a growing percentage of the flying public. That's why airlines such as United and JetBlue have recently announced plans to outfit their fleets with Wi-Fi. We also noted a trend toward satellite-based (as opposed to ground-based) Wi-Fi that will work internationally, not just on domestic flights.

Airport Wi-Fi is a shifting landscape, too. Large operators like Boingo offer paid Wi-Fi in most U.S. airports, but airports are also moving to offer free Wi-Fi throughout the facility. Even so, fast, free Wi-Fi--such as that available at the Cleveland, Raleigh-Durham, and Seattle airports, among others--remains the exception and not the rule. Providing Wi-Fi service is expensive, and someone has to pay for it. Some airports rely on ad-based models, which require users to view an advertisement or take a poll before connecting for free. Others--mostly smaller airports--build the cost of Wi-Fi into their operating budgets just the way Starbucks does. But this approach is generally too costly for larger airports to pull off.

Other technology improvements, however, have become ubiquitous. Mobile check-in is one such advance. According to the U.S. Transportation Security Administration, of the 40 airports we visited, only one--Houston's Hobby (the older and smaller of the city's two international airports)--doesn't yet have the necessary phone scanners at security and at the gates to support it. Currently the airlines pick up the tab for these special scanners, but the TSA is said to be working out plans to buy the technology for all security checkpoints.

To come up with our rankings, we measured the tech amenities at the 40 busiest airports (as measured by number of boardings during 2010) in the United States, and then rated each one against its peers on the average number of electrical outlets, USB ports, charging stations, internet kiosks, and work desks that it offers per gate. We also performed a series of speed tests to measure each airport's Wi-Fi and major cellular services in numerous locations around the facility. We assigned a ranking to each airport based on overall speeds, with bonus points awarded to airports that don't charge for Wi-Fi. The airports that scored highest in our rankings offered a compelling mix of all of these services.
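The article describes its scoring only in outline, so as a purely illustrative sketch, here is one way a per-gate, peer-normalized ranking like the one described above could be computed. The field names, the 60/40 weighting, and the free-Wi-Fi bonus are assumptions made for this example, not PCWorld's actual formula.

    // Illustrative only: PCWorld does not publish its exact weighting, so the
    // fields and weights below are assumptions, not the magazine's method.
    interface AirportData {
      name: string;
      gates: number;
      outlets: number;          // total AC outlets counted
      usbPorts: number;
      chargingStations: number;
      kiosks: number;
      workDesks: number;
      avgWifiMbps: number;      // measured download speed
      wifiIsFree: boolean;
    }

    // Score amenities on a per-gate basis so large and small airports compare fairly.
    function amenitiesPerGate(a: AirportData): number {
      return (a.outlets + a.usbPorts + a.chargingStations + a.kiosks + a.workDesks) / a.gates;
    }

    // Rank airports against their peers; free Wi-Fi earns a small bonus.
    function rankAirports(airports: AirportData[]): AirportData[] {
      const maxAmenities = Math.max(...airports.map(amenitiesPerGate));
      const maxWifi = Math.max(...airports.map(a => a.avgWifiMbps));
      const score = (a: AirportData) =>
        0.6 * (amenitiesPerGate(a) / maxAmenities) +  // hypothetical weight
        0.4 * (a.avgWifiMbps / maxWifi) +             // hypothetical weight
        (a.wifiIsFree ? 0.1 : 0);                     // bonus for free Wi-Fi
      return [...airports].sort((x, y) => score(y) - score(x));
    }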
Microsoft's Opposition to SOPA is Sincere, Not Half-Hearted

Microsoft has joined the public opposition to SOPA, although like Google and many other opponents, it won't black out its site today. Although some people have said Microsoft's opposition is insincere or half-hearted, the company has been quietly working against the bill behind the scenes at least since November.

Bloomberg reports that Microsoft has said in a statement, "We oppose the passage of the SOPA bill as currently drafted. Hundreds of millions of customers rely on our services every day so we don't plan to shut those down to express our view."

GeekWire adds that Microsoft supports the White House's attempt to fix problems with the legislation. Microsoft told the site in an email: "We think the White House statement points in a constructive way to problems with the current legislation, the need to fix them, and the opportunity for people on all sides to talk together about a better path forward."

The White House statement that Microsoft is referring to criticizes SOPA for a variety of reasons, including its use of the Domain Name System to shut down sites accused of violating copyright law, and says, "Any effort to combat online piracy must guard against the risk of online censorship of lawful activity and must not inhibit innovation by our dynamic businesses large and small."

There's a segment of Internet users who will never believe Microsoft's sincerity on any issue, and they don't believe it on this one, either. But they're wrong. It appears that Microsoft has been working behind the scenes against SOPA at least since November. CNet reported in late November:

Microsoft has long been one of the most ardent proponents of expanding U.S. copyright law. But that enthusiasm doesn't extend to the new Stop Online Piracy Act, which its lobbyists are quietly working to alter.

In addition, the article reports, Microsoft convinced the Business Software Alliance, of which it is a member, to change course and oppose SOPA after the alliance had originally supported it.

Microsoft would potentially gain financially if SOPA were passed, while many other opponents, such as Google, wouldn't be financially hit. So Microsoft deserves credit for doing the right thing in its opposition.
Written by Anthony Severino

The Uncharted series is the most recognizable exclusive on the PlayStation 3, so bringing a franchise of this caliber to a portable system was always going to be a herculean task. But if there's anyone up to the job, it's Sony Bend. They're probably best known for bringing Syphon Filter, and more recently, another PS3 exclusive franchise to the PSP with Resistance: Retribution. Both are noble efforts and make up some of the best the PSP has to offer. Can Sony Bend work the same magic on the PlayStation Vita, but with a franchise that is more cherished by PlayStation fans?

Editor's Note: This review was conducted on an import Japanese PlayStation Vita using the Japanese import version of Uncharted: Golden Abyss. Uncharted: Golden Abyss features full English text and audio, but some features and aspects of the game, and the PlayStation Vita hardware itself, may change for the North American launch.

Uncharted: Golden Abyss is developed by Sony Bend in cooperation with Naughty Dog. Naughty Dog is mostly hands-off, leaving Sony Bend to the task of bringing Nathan Drake to the portable system. I myself – I'm sure like many others – was very skeptical. It's not that I didn't believe Sony Bend could make a great game based on a PS3 exclusive – they've proven themselves before. But what I could never expect is that they'd produce an Uncharted game that's not only worthy of the name, but is, without a shadow of a doubt, the deepest, most varied, console-like experience I've ever had on a portable system. Bold words, I know. And it's something that must be seen to be truly believed. When it's all said and done, you still may not believe that what you've just played is possible on a handheld. And more amazing still, on a launch title.

"Wow. This is a launch title?"

Surprisingly, anything big Drake can do mini-Drake can do better. And he does it in more ways. All of Drake's melee moves, ducking for cover, and running and gunning are here on the PlayStation Vita, except now a lot of it can be performed using the Vita's touch-screen. It's not necessary, but it feels more natural and intuitive. Like I said, this old "dog" has learned new tricks: Drake can now clear out vines and overgrown vegetation using a machete; he takes charcoal rubbings of hieroglyphics; he snaps pictures of the environment with a camera he carries around in his backpack. And there are a few surprises that I'd love to talk about, but it'd be a shame to spoil them. It's evident that Sony Bend wanted to exploit everything the Vita hardware had to offer for Uncharted: Golden Abyss.

But that OLED screen isn't just for touching. It's stunning, and Golden Abyss makes for a beautiful game to display on it. It's not the realistic-looking Uncharted 3, but it's honestly not all that far off. Which says a lot for the capabilities of the PlayStation Vita. Uncharted: Golden Abyss is a launch title, after all – so think of what developers will be able to do with the device with some time and mastery of the hardware. Vistas are gorgeous, forcing you to stop and just gaze. Textures are nice, again, very impressive for a portable. I can't stress enough how good the game looks.

You can tell that Naughty Dog isn't at the helm here, mainly because the cinematic presentation isn't what we've come to expect from an Uncharted game. But like the graphics, it is still very impressive. The story starts off slow, but eventually picks up steam a couple of hours in. And it's worth the wait.
It’s got the typical Uncharted twists, the love interest, and the double-cross. Jason Dante, befitting of a goomba weasel stereotype, cares more about his boots than the life of his companions, and Marisa Chase plays the standard “save the female” role who doesn’t like guns but always seems to get in trouble type. They don’t inspire the same bond as say Elena Fisher – the character development isn’t at the same level found in the Naughty Dog games. Luckily, and old friend of Drake’s stops in at the game’s halfway point to save the day, witty banter included. At that point the game really starts to shine. The entire ten or so hour package is every bit of what you’d expect from the a game bearing the Uncharted namesake.There’s bad guys, too. There has to be. Guerro, an ex-general who’s after treasure (duh!) and is giving Drake and friends a really difficult time. He’s not quite as imposing of a threat as Katherine Marlowe is, but he does fine making Drake’s life a living hell. A plot twist halfway brings another villain into the fold, but we’ll leave that for you to discover on your own.Drake can approach Guerro’s henchmen stealthily, confront head-on with guns a’ blazing, or a mixture of the two. Vertical shootouts from Uncharted 3 make an appearance, and help to add variety to areas that are combat heavy. The pacing between shooting, platforming and climbing is perfect. On the subject of climbing, the Vita really makes the climbing seem less tedious. Simply brush your finger along the ledges and Drake follows the path you’ve outlined. When I demoed Uncharted: Golden Abyss previously, I though I’d never use this feature. But the ease of it and how smoothly it works cannot be overlooked. Touch controls are added to a number of actions, as are motion controls (used for swinging, walking across logs, etc.). The Vita’s capabilities are fully explored.Exploration is a stronger theme in Uncharted: Golden Abyss than with its PS3 big brothers. Treasures return adding replayability, but are a bit different. They’re in sets, rather than just a long list of them. And that’s not the only thing you discover. Using Drake’s camera, you have to take snapshots of certain locations throughout his adventure. Zooming is done best with the rear-touch panel, and for using it you unlock an aptly named “Touch My Rear” bronze trophy. Drake keeps track of everything in his journal, but the journal is more helpful as it tells you which chapter each opportunity is lurking at.“You won’t believe what you’ve just played is possible on a handheld.”_Puzzles are here too, but they’re not as plentiful as you may be used to. At least not in the traditional Uncharted sense. A lot of them have to do with the charcoal rubbings Drake takes of statues and hieroglyphics found around the environment. Shredded documents or maps must be rearranged as if they were actual puzzle pieces. And all of the rotating and maneuvering is done using the Vita’s touch screen. There was one puzzle, in fact, that used the PlayStation Vita in such an interesting way, my jaw hit the floor.From start to finish, Sony Bend has captured the essence of Uncharted. They hit the nail on the head with the gameplay, environments, and combat. The story and the characters don’t quite live up to the high bar set by the PlayStation 3 games, but they’re still better than most portable games. At times, you forget you’re even playing a portable game. 
Thanks to the Vita’s dual-analog sticks, the Uncharted: Golden Abyss experience feels more like what a home console can produce, but with a touch (pun intended) of smartphone-like gameplay elements.“Uncharted: Golden Abyss is the very best reason to buy a PS Vita.”_Uncharted: Golden Abyss is – hands-down – a must-own for anyone at all considering a purchase of a PlayStation Vita. It shows what that little portable with plenty of power can do, from the graphics, to the controls, to the Vita-specific touch features – it’s truly amazing. More impressive is the fact that, at launch, the PlayStation Vita has a title as packed with greatness as Uncharted: Golden Abyss, but I’d expect nothing less from something bearing the franchise namesake. If you’re looking for a reason to convince you to buy a PlayStation Vita, look no further than Uncharted: Golden Abyss. As the bar has been set by Uncharted on the PS3, so it has on the PlayStation Vita as well. For a launch title to be this impressive, it speaks extremely well of the future of the PlayStation Vita, and it solidifies Sony Bend as the go to studio for forging blockbuster games on portable platforms.PlayStation LifeStyle’s Final Score+ Worthy of the Uncharted name.+ Takes full advantage of the PS Vita’s capabilities.+ Wow. This is a launch title?– SHARE TWEET Tags: Uncharted, Uncharted: Golden AbyssRocksteady to Release Free New Batman Inc. Skin for Batman: Arkham CityNew Screenshots Of Touch My Katamari…and a Trailer Starring Goro The Slacker? FeaturesPSLS OriginalsReviewsVideosPS4PS VRStaffReview PolicyPrivacy PolicyTerms of UseGameRevolutionAdChoicesPlayStationLifeStyle.net is a property of CraveOnline Media, LLC,
Concerns over Chinese genomics bid
By Steve Friess

The pending sale of a major American gene-mapping company to a Chinese firm is sparking yet another dust-up over what sensitive industries the rising Asian power ought to be allowed to dominate in the United States. A key question — as it has been with Chinese involvement in aviation, cloud computing and telecom hardware — is whether there are national security concerns attached to allowing a company largely funded by the Chinese government to have access to human DNA being decoded for doctors, researchers and pharmaceutical companies.

BGI Shenzhen has bid $118 million for Mountain View, Calif.-based Complete Genomics, a sale under review by the Committee on Foreign Investment in the United States. A third gene-mapping company, San Diego-based Illumina, last month offered $123 million, but the board of Complete Genomics rejected that, saying the Chinese bid was of "superior quality."

Complete Genomics has developed what is seen as the fastest and most cost-effective gene-mapping technology in the field. Its clientele, which BGI would acquire as well, includes the National Cancer Institute, the Mayo Clinic, Eli Lilly, Pfizer and several others. Still, it has filed documents with the Securities and Exchange Commission saying it is hemorrhaging money and could go bust by the end of January if the BGI transaction is nixed or stalled.

The mapping of genomes is both a critical building block for an array of new genetics-based medical treatments and drug development — as well as for the development of frightening new biological weaponry that enemies may use to target specific individuals. A lengthy piece in The Atlantic in November laid out a scenario in which terrorists could soon be capable of using the DNA of the president of the United States to carry out an assassination.

"Because there are questions about the technologies that are involved that are complex, cutting edge and have national security implications related to bioweapons, this bears strict scrutiny," said Michael Wessel, a commissioner on the United States-China Economic and Security Review Commission, a congressionally appointed advisory panel. "Are there capabilities here that can be adverse to American interests?"

In a confidential letter late last month, Illumina CEO Jay Flatley asked the Complete Genomics board to reconsider rejecting their bid, noting that "national security, industrial policy, personal identifier information protection and other concerns raised in connection with an acquisition … by a foreign state-owned entity create meaningful uncertainty around a BGI acquisition."

Flatley argued there was no need to sell to a foreign company when a qualified, well-funded American firm was offering more money. And in a statement to POLITICO, Illumina's outside attorney, Sharis Pozen, warned Friday, "The genomics field is important to the American economy and U.S. national security."
Jack Tretton to step down as President of SCEA
March 6, 2014 | Written by Kyle Jessee

Sony has issued a press release stating that Jack Tretton, as of March 31st, 2014, will step down as Sony Computer Entertainment America's President and CEO. Further, on April 1st, 2014, Shawn Layden, now Executive Vice President and COO of Sony Network Entertainment International, will become the new President and CEO of SCEA.

Mr. Tretton has been with Sony since the beginning of SCEA in 1995 and has played a major role in all the launches of the PlayStation consoles in North America. Jack Tretton made this statement about the announcement:

"Working at SCEA for the past 19 years has been the most rewarding experience of my career. Although I will deeply miss the talented team at SCEA and the passion demonstrated every day by our fans, I'm very excited about starting the next chapter of my career. I want to thank the employees, partners and customers for their tireless commitment to the PlayStation brand and, of course, to our fans who have pushed us to new heights of innovation and entertainment over the past two decades. I leave PlayStation in a position of considerable strength and the future will only get brighter for PlayStation Nation."

Mr. Layden has been with Sony and the PlayStation brand for over 15 years and was one of the founding members of SNEI in 2010, joining directly from SCE, where he was the President of Sony Computer Entertainment Japan.

"I've worked with Jack for nearly two decades and I want to personally thank him for his leadership and the considerable contributions he's made to the SCEA business and PlayStation brand over the years. I wish him nothing but the best in his future endeavors," said Andrew House, President and Group CEO, Sony Computer Entertainment. "I also want to welcome Shawn to the Sony Computer Entertainment America team. I have the utmost confidence in Shawn's leadership capabilities, and his deep knowledge of the gaming industry and commitment to gamers will help keep PlayStation at the forefront of entertainment and innovative gameplay."

Take care, Jack. Thank you for everything you have done for the PlayStation world.
Bottom-Trawling Vessels to Get Less Access to Bering Sea
By Mary Pemberton, Associated Press

ANCHORAGE, Alaska -- Large areas of the Bering Sea off Alaska's coast will soon be off-limits to bottom trawling, a practice involving fishing vessels that drag huge, weighted nets across the ocean floor. Come Monday, nearly 180,000 square miles of the Bering Sea will be closed to bottom trawling, bringing the total in the Pacific Ocean to 830,000 square miles -- an area more than five times the size of California. Other newly restricted areas are off Washington, Oregon and California.

Conservation groups have long fought the practice of bottom trawling, calling it an outdated form of fishing that pulverizes delicate corals and sponges living on the sea floor. Scientists say it can take centuries for the slow-growing corals and sponges to recover, if they ever do, after bottom trawlers move through an area. "It basically is taking a net and raking it on the bottom, and anything that sticks up from the bottom gets bulldozed over. It is similar to forest clear-cutting," Chris Krenz, Oceana's arctic project manager, said Friday.

In the northern Bering Sea, many animals rely on the crabs and clams that grow on the ocean floor for food, Krenz said. The North Pacific Fishery Management Council, which advises the federal government on fisheries, unanimously voted in favor of the northern Bering Sea regulation in June. In Alaska, bottom trawlers will be allowed to work about 150,000 square miles, mostly around the Aleutian and Pribilof islands. The industry favored a temporary restriction to assess what areas needed protection, leaving nonsensitive sections open to bottom trawling.

Jim Ayers, vice president of Oceana, said the regulation essentially puts the northern Bering Sea off-limits to bottom trawlers. The fishing vessels had not been consistently venturing into the area but were starting to, he said. Ayers said the effects of climate change in the Bering Sea, combined with bottom trawling, could have devastated essential fish habitat.

According to Oceana, the Bering Sea has 26 species of marine mammals, including the North Pacific right whale, believed to be the most critically endangered whale in the world. Blue, humpback, gray and bowhead whales also travel each year through the Bering Sea, which is a magnet for millions of seabirds who migrate each spring and summer to breed. Bottom trawling "would really cause even more stress of the ecosystem," Krenz said.

The Groundfish Forum, a trade association of six trawl companies that fish in the Bering Sea and the Gulf of Alaska, said the regulation, while OK for now, could end up harming the industry. "Should the concentrations of fish move to the north, it actually could be harmful to keep us from going where the fish are," said Lori Swanson, the forum's executive director. It could mean fishing longer, keeping the nets on the bottom more, using more fuel and potentially increasing the bycatch, the non-targeted fish that are caught in the nets, Swanson said.

(c) 2008 Deseret News (Salt Lake City). Provided by ProQuest LLC. All Rights Reserved.
Wednesday, 03 February 2010 15:01

Sun power supplies a path to simplicity

Lydia Aydlett first moved to the mountains of Western North Carolina during the height of the back-to-the-land movement, and it's taken her all the time since to understand why living a sustainable life is so important to her. Aydlett grew up in the eastern part of the state, near Elizabeth City, during the 1950s and '60s. She moved to the Piedmont region with her husband, a banker, and when the marriage failed, she decided to make a change. "I was really interested, even then, in finding alternatives to the intense consumption that had been taking place all around me," said Aydlett.

Aydlett bought a white farmhouse and 25 acres and moved her two young children to the area between Sylva and Cullowhee in 1978. "We'd been through the '60s so there was a lot of 'f—- the establishment.' Feminism was a part of that for me and of moving myself to the mountains by myself with my kids," Aydlett said.

What Aydlett found in the mountains of Western North Carolina was a sense of community. "This was the back to the land time. Lots of returning Vietnam vets. There were people from around the country who had come to this area to live the homestead life," Aydlett said. Aydlett had friends who lived in teepees and broken-down barns, and there was still music in the air. She remembers chanting and spinning at Sufi dances held at Caney Fork Community Center, where the good ole boys would come to watch them and drink beer. "It was a bunch of hippies who were thoughtful people and were trying to sort things out," Aydlett said. "Some of them are still around."

But while her experience in the early '70s gave Aydlett a taste for rural living, she didn't necessarily get close to the environmental principles she espouses today. For instance, at that time, the thought of killing a chicken for food disgusted her. "I was working full-time and raising kids and just kind of scrambling to keep life together," Aydlett said.

Having gotten a master's degree at Western Carolina University, Aydlett worked with young children with developmental disabilities and raised her family. It was when she went back to school again at University of North Carolina-Chapel Hill to pursue a doctoral degree in 1984 that Aydlett rediscovered her fervor to live life differently. She spent her time in Chapel Hill in a co-housing community that espoused sustainable living in close quarters. "That community boosted my consciousness and let me know that this could be done," Aydlett said. "And when I say 'this' I mean living a sustainable lifestyle. One that's more self-contained and has a shorter feedback loop."

When Aydlett came back to teach at Western Carolina, she sold three acres and the white farmhouse and began work on building a sustainable homestead high on the ridge on Robertson Mountain. Friends of hers from Chapel Hill helped her build a 16-by-16-foot cabin that ran off of solar power, had a composting toilet, and well water driven by gravity. Living in the cabin was an intermediate step, and it taught Aydlett a lesson about what she wanted. She didn't want to live THAT simply.

Now her house runs off solar electricity and has a solar water heater, which Aydlett augments with a propane generator and a propane stove. She has a dishwasher, a television, and a pro-audio system. It's not about living an austere life, Aydlett said, but about being connected to what you use. "If I use too much electricity I know it. If a person in a conventional house does, they just have to pay a little bit bigger bill," Aydlett said.

Aydlett has begun growing her own vegetables in a large terraced garden above her house, which she feeds with rainwater. After collecting water in rain barrels under her downspouts, she pumps it uphill to a holding tank so she can use gravity to water her beds. Aydlett also maintains a small goat herd, 20 chickens, and one large Great Pyrenees to keep them safe from coyotes and wandering pit bulls. Last week she killed a rooster for the first time with help from friends, and she's getting used to the idea of doing it again.

As Aydlett has matured, so has the environmental movement. She marvels at the political infrastructure in place now to advocate for sustainable living. But she has come to believe that it's the practical day-to-day connections to the earth that are most valuable to her. "I'm really glad [the movement] is out there, but it's not the same as being aware that because we've had three cloudy days the battery bank has gotten low," Aydlett said.

Aydlett still works part-time as a counselor of children. She is by no means a hermit on her mountaintop. Her life has taken her on a journey, and now the ideologies that drove her decisions in her youth are taking root in new ways. "I don't hold to the ideologies with the same fervor as I did then but they certainly have fed my growth and helped create my path," said Aydlett. "It's more of a feeling thing than a thinking thing now."
Potter spell will lift Argonaut
Sunday 17 February 2002 00:00 GMT

HARRY Potter magic will prove a major boost to Argonaut Games on Monday when it unveils sales figures for its PlayStation game based on the popular character. It developed the computer game version of Harry Potter and the Philosopher's Stone for US publisher Electronic Arts, which said this year it had sold seven million copies across all hardware platforms.

Analysts expected Argonaut's share of the sales to be about 1m copies, but they are thought to be more than twice that. This would be a fillip to Argonaut's finances after it opted to take a smaller advance to develop the game in return for a bigger share of royalties on each game sold.
Nuclear industry to hire 130,000 for new reactors
by Dave Flessner

Abdualaziz Alqahtani grew up in oil-rich Saudi Arabia. But after studying at the University of Tennessee at Chattanooga, the engineering student is eager to return to his native country to work on another form of energy. The 25-year-old graduate student says he would like to work at one of the 16 nuclear power plants Saudi Arabia plans to build over the next two decades. "It's interesting and important work," Alqahtani said Wednesday after hearing nuclear power representatives describe the industry.

Although the power of the atom was first harnessed in the United States, only a handful of the more than 60 nuclear reactors under construction around the world are in the United States. But a manager at Southern Nuclear, the utility building two of the new reactors in Georgia, told UTC engineering students the nuclear industry will still need 130,000 more workers to build the next 30 reactors expected to be built in the U.S. during their careers. Far more workers will be needed to build the more than 100 reactors planned around the globe.

"We are hiring engineers today and so are the other major nuclear utilities like TVA, Exelon, Entergy and Duke," said John Williams, a senior engineer for the Southern Co. in Birmingham, Ala. Southern, the parent company of Georgia Power, is building two new Westinghouse AP-1000 reactors at Plant Vogtle near Waynesboro, Ga. The new reactors -- the first to employ new passive safety designs -- are scheduled to be completed by 2017 and 2018.

Southern currently has more than 2,500 contract workers on site at Plant Vogtle, and TVA has nearly 5,000 contract and staff employees working at the Watts Bar Nuclear Plant to build one reactor and refuel the other. The average age of nuclear workers at both TVA and Southern is in the mid-40s. "We'll need many new workers just to replace those who are retiring," Williams said.

The federal Energy Information Administration projects that even with more energy conservation and efficiency, demand for electricity will grow by 28 percent by 2040. Williams said much of that energy will have to come from nuclear power. "We believe the only way we are going to meet this demand is to deploy all of our energy options," said Williams, whose appearance at UTC is part of a nationwide education and recruitment campaign by the industry-backed Nuclear Energy Institute. "We think we're going to have to deploy conservation, natural gas, 21st century clean coal and nuclear power to meet the demands of our customers in the future."

Amid growing concerns about global climate change, Williams said nuclear power has a lower carbon footprint than fossil fuel generation and, over the life of its plants and all of its costs, generates less carbon dioxide than many solar power units. Williams conceded that nuclear power has suffered from public concerns following the accidents at Three Mile Island in the United States, Chernobyl in the former Soviet Union and Fukushima in Japan.

Dr. Phil Kazemersky, a professor of nuclear engineering at UTC, said public opinion about nuclear power was shaped in many American minds by the atomic bomb built in nearby Oak Ridge. "The image of the mushroom cloud still lingers," he said, even though nuclear power harnesses the power of the atom in a totally different manner.
"There are concerns by many, but public support for nuclear power is greater among those living around nuclear plants because they see the economic benefits from these facilties," Williams said. "We have a stellar safety record and we're getting safer every day." Contact Dave Flessner at dflessner@timesfreepress.com or at 757-6340 Modernized McDonald's opens today in East Ridge as biggest in area St. John's Restaurant owner sells to partner
The Year in Fracking: Quakes, Spills and Backroom Deals
By Alan Prendergast | Tuesday, December 30, 2014 at 11:05 a.m.

The battle over fracking in Colorado continued to attract national media attention in 2014, and with good reason. The use of hydraulic fracturing to extract oil and gas, and the attendant debate over economic benefits versus possible health and environmental risks, has been playing out here with more twists and turns than the bedsheets of an insomniac with night sweats. Here's a brief recap of some of the more earth-shaking moments in one of the most divisive political struggles of the past year.

See also: How Colorado Became Ground Zero in America's Energy Wars

The year began with antifracking activists riding high on the momentum of grass-roots campaigns to impose a moratorium or outright ban on new drilling in five Front Range municipalities. Despite being outspent by oil and gas interests by more than 30-1 -- and facing the threat of more litigation by Colorado's gas-friendly governor -- fractivists celebrated the passage of 2013 ballot measures prohibiting fracking in Boulder, Lafayette, Broomfield and Fort Collins. (Longmont had touched off the movement with its ban in 2012.) Organizers soon got busy on a range of petition initiatives seeking to tighten restrictions on fracking statewide.

Fractivists sought to bolster their arguments about the health risks of gas development with a new study prepared by researchers at the Colorado School of Public Health and Brown University, which found an increased risk of birth defects -- including a startling 30 percent hike in the risk of congenital heart defects -- among families living near oil and gas wells in rural Colorado. But the report was swiftly denounced as deeply flawed by industry advocates, and state health officials -- who supported the study and supplied much of the raw data -- called it "not conclusive."

An attempt by Representative Joann Ginal, a Democrat from Fort Collins, to pass a bill requiring a new study of the health risks of fracking in four Front Range counties, supervised by a "scientific oversight committee," subsequently went down to defeat amid complaints that the study would be biased, or too political in nature.

In the absence of Ginal's proposed study, environmental groups turned increasingly to mining the industry's own data. In May the Center for Western Priorities unveiled the Western Toxic Release Map, which plots oil and natural gas spills in Colorado and New Mexico over more than a decade -- more than 13,500 spills in all between 2000 and 2013. The Fractivist website, operated by industry bugaboo Shane Davis, also took off as a place to find many industry documents about toxic releases. Davis himself took a star turn as the narrator of the documentary Dear Governor Hickenlooper, actually a collection of eleven polemical shorts by different filmmakers exploring different aspects of the fracking battle.

One development that didn't get a lot of play in the film: the mounting evidence that injection wells used for fracking wastewater disposal can produce earthquakes. In June the Colorado Oil and Gas Conservation Commission ordered NGL Water Solutions to stop dumping water into a Weld County well, after two quakes in the Greeley area -- the first of any significance to strike the town in forty years -- occurred three weeks apart. (The larger of the two, a 3.4-magnitude temblor, was felt as far away as Golden and Boulder.)
Although seismic testing indicated close to 500 small quakes in the area over a six-week period, COGCC lifted the ban in mid-July, after imposing new restrictions on the volume of water NGL is allowed to inject into the well on a daily basis.

That shakeup was nothing, though, compared to the rift that occurred between antifracking leaders and the self-proclaimed poster boy of fracking, Jared Polis. The congressman had helped bankroll the major statewide antifracking initiatives, but in August Polis struck a deal with Governor John Hickenlooper and others to pull the initiatives off the ballot, avoiding a costly, ad-saturated fall showdown. In exchange, Hickenlooper pledged to drop the state's lawsuit against the City of Longmont over its fracking ban and to form a commission to develop proposals intended to minimize conflict between the industry and local communities. But activists cried sellout, especially after they found out that the panel lacked any representation from the groups that had pushed the local fracking bans.

But while the ban-it-all crowd were licking their wounds by year's end, some progress was made in resolving drilling disputes that have dragged on for a decade or more. The most notable was the compromise reached on the Roan Plateau, a gas-rich area on the Western Slope that also happens to be rich in wildlife habitat and rare species. The deal announced by state and federal officials last month will cancel drilling leases at the top of the plateau while allowing energy development to continue at the base. Whether even stronger protection for Browns Canyon will be achieved by the push to designate the area a national monument is still up in the air.

Meanwhile, fractivists here are looking with some envy toward New York State -- which just imposed a statewide moratorium on fracking, and where a local town's ban on the practice sent billionaire Phil Anschutz packing last summer. The movement hasn't got that kind of muscle here, and faces a powerful governor on the side of compromise as "the Colorado way." But a lot can happen in a year.

Have a tip? E-mail alan.prendergast@westword.com.
Azure connection speeds up cloud computing for UK universities
A private link between the Janet network and Microsoft's Windows Azure datacentre will eliminate the need to send data over the public internet.
By Sam Shead

The UK's research and education network, Janet, has entered into a partnership with Microsoft that could deliver a faster and more secure connection to universities across the UK.

The new arrangement — announced at Goldsmiths, University of London on Tuesday — involves establishing a private link between the Janet network and the Microsoft Windows Azure datacentre in Dublin via a London internet exchange point, thereby eliminating the need to move data over the public internet. Janet's director of product and marketing, Dan Perry, told ZDNet: "What's really unique here is the increased service and security achieved. Staff and students can have end-to-end connectivity to cloud services without touching the public internet."

Paul Watson, professor of computing science at Newcastle University, which has more than £20m of research projects supported by the cloud, said universities dealt in huge volumes of data and were attracted to the cloud for analysis because they did not have to buy their own computer hardware. "One of the major barriers holding back further cloud adoption is the time it takes to transfer large datasets from the lab to the cloud for analysis," he said. "This new link between Janet and the Azure Cloud removes this barrier, and will allow a far greater range of research projects to fully exploit the benefits of cloud computing."

Under the partnership, UK universities on the Janet network will also be able to buy licenses for an academic version of Office 365 directly from Janet, instead of entering negotiations with Microsoft. A Microsoft spokesman told ZDNet on Thursday that all staff and students would receive the Office 365 Education Plan A (Academic). "This provides email via Microsoft Exchange with 25GB storage, and collaboration tools including SharePoint with SkyDrive Pro and 7GB of personal storage. The package also comes with browser-based versions of Word, Excel, PowerPoint and OneNote, and real-time communication via Microsoft Lync. There are educational discounts available for optional add-on services."

Janet claims that the deal could have an impact on more than 18 million staff, students and researchers. "We've already carried out due diligence and negotiated amendments to help the sector adopt Office 365, which Goldsmiths University estimate could save each institution up to £20,000 and will save the sector as a whole £670,000," Perry said. "Even more savings will be made through a similar agreement for Microsoft Azure, which then starts to run into the millions."

Janet connects into the GEANT pan-European data network, which then connects through to other major academic regions such as the Pacific Rim and North America.
Evidence Earth's magnetic field is decaying rapidly

Earth's Magnetic Field Decay: As summarized by University of Maryland geophysicist Daniel Lathrop, "In particular, over the last 150 years or so, the Earth's magnetic field has declined in strength about ten percent, and continues to decline in strength [as is evident] every time people go and make new measurements." Creationists point out that this rapid decay is not expected in such a brief snapshot in time if our planet were 4.6 billion years old. On the other hand, these careful, long-term, and worldwide measurements that document the rapidly decreasing strength of Earth's magnetic field are consistent with a young Earth. Lathrop, not surprisingly, is an old-earth geophysicist who nonetheless acknowledged this data at the opening of and midway through the 2013 program Magnetic Shield, an episode of The Weather Channel's Secrets of the Earth with theoretical physicist (emphasis on the theoretical) Michio Kaku.

Creationist physicist Russell Humphreys of Sandia National Labs has updated his previous work by publishing Earth's Magnetic Field Is Decaying Steadily, which includes global data through 2010. Humphreys observes that, "in 1968 the International Association of Geomagnetism and Aeronomy (IAGA) began more systematically measuring, gathering, and analyzing geomagnetic data from all over the world. This group of geomagnetic professionals introduced a 'standard spherical harmonic representation' of the field called the International Geomagnetic Reference Field, or IGRF. Every five years starting in 1970, they have published both dipole and non-dipole components of the field. Using older data, the IAGA also extended the model back to the beginning of the twentieth century. With the issuance of the latest data set, IGRF-11, we have a standardized set of geomagnetic data from 1900 to 2010. You can download it free of charge as an ASCII file..." (Incidentally, Humphreys also published accurate predictions of the magnetic fields of Neptune and Uranus before NASA's Voyager mission confirmed his work.)

The steady and rapid decay of the energy of the Earth's magnetic field, as documented by the most careful measurements over the forty-year period from 1970 to 2010, is also consistent with previously published results using data going back to 1835, and by inference from other observations, apparently, going back to the 1100s A.D. Further, as with forensic accounting and statistical analysis, numbers can often tell a lot about data, and in this case, analysis of the field strength measurements helps to confirm the validity of the data.

Humphreys writes further that the decay patterns "weigh heavily against the idea that there is currently a 'dynamo' process at work in the core that would ultimately restore the lost energy back to the field. Without such a restoration mechanism, the field can only have a limited lifetime, in the thousands of years." For example, if the energy of the field has been dissipating at the current rate, going back only a million years would produce such heat that the oceans would have burned off the Earth, which clearly they have not. See also the Real Science Radio Mercury Report at rsr.org/mercury#magnetic-field for an example of another planet experiencing rapidly decaying magnetic field strength, and hear Bob Enyart and Fred Williams talk about the Earth's decay rate at RSR's Spiders & Termites & Magnets.
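A rough back-of-the-envelope version of that extrapolation can make the arithmetic concrete. Assume, purely for illustration and as the argument above does, that the observed decline of roughly 10% per 150 years represents a constant exponential decay rate:

    B(t) = B_0 \left(\tfrac{9}{10}\right)^{t/150\,\mathrm{yr}},
    \qquad
    T_{1/2} = \frac{(150\,\mathrm{yr})\,\ln 2}{\ln(10/9)} \approx 990\ \mathrm{yr}

    B(-10^{6}\,\mathrm{yr}) = B_0 \left(\tfrac{10}{9}\right)^{10^{6}/150}
    \approx B_0\, e^{702} \approx 10^{305}\, B_0

Because the energy stored in the field scales roughly as B^2, the implied figure a million years ago is larger still, which is the kind of number behind the remark about the oceans having burned off. The half-life shown is only what follows from the 10%-per-150-years figure taken at face value, not a fitted value from the published data sets mentioned above.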
Here's the point: A four-billion-year-old Earth would have reached stasis long ago, whereby changes in something as globally significant as its magnetic field would occur only very slowly. And since the Earth could not sustain the necessary increased energy backward in time for even a million years, let alone billions, to explain its current strength and decay rate, this is significant, worldwide evidence that appears to undermine the alleged great age of the Earth.
Tim Cook grabs TIME front cover with lengthy interview on Apple vs. FBI: 'It's very much about the future'

Apple CEO Tim Cook has gotten his first solo appearance on the cover of TIME magazine today, with the publication printing a lengthy interview with Cook about Apple's fight with the FBI over iPhone encryption backdoors. The full transcript of the interview is available on TIME's website. Cook says he is uncomfortable fighting the government, but Apple is fighting willingly for what it believes are civil liberties. Although interesting, the piece largely repeats the same arguments Apple has been touting for the last few weeks on the right to privacy, freedom of speech in iOS code and more.

The interview discusses how this whole thing came about, as Cook has previously expressed disappointment with how the case was handled. Setting aside the matter at hand, Cook says he doesn't like the tactics of the government.

And so do I like their tactics? No. I don't. I'm seeing the government apparatus in a way I've never seen it before. Do I like finding out from the press about it? No. I don't think it's professional. So do I like them talking about, or lying, about our intentions? No. I'm offended by it. Deeply offended by it.

Cook goes on to say that this is a 'golden age of surveillance', with cameras everywhere in our lives — including in our pockets with smartphones. Cook implies that strong security and encryption on iPhone will not prevent law enforcement from gathering intelligence: 'No one's going dark'.

On a technological level, Cook says that Apple will continue ramping up security with every new OS. He says Apple has to 'stay a step ahead of the bad guys'. This agrees with reports that Apple is actively developing stronger encryption for iCloud that even Apple couldn't hack if it was forced to, by entangling user passwords into the encryption key.

In response to questions about Donald Trump's boycott of Apple products, Tim Cook remains diplomatic, essentially side-stepping the question: 'I haven't talked to him so I don't know what he thinks'.

Cook says Apple may see this as an over-reach into people's lives more than others do because its people are technically knowledgeable, and that it's vital for this issue to be discussed by all.

I do think this is something that I think will affect the wellbeing of citizens of the U.S. for decades to come, that will affect civil liberties for decades to come. This is of that kind of stature and of that kind of importance. As it was going, the steamroller was on. And our job was just to be rolled up under the steamroller.

He says that fighting the government is an unfortunate necessity, and is shocked that this issue stems from a case originating in the United States, not another part of the world. However, he remains optimistic that a good outcome can be found.

This is just one of those cases where occasionally the government over reaches and doesn't act in the best interest of its citizens. But I'm optimistic that we'll get through it and get to a much better place.

Apple is set to enter court proceedings against the FBI on March 22nd, regarding the government's motion to compel Apple to compromise the security of iOS such that the government can access data on a San Bernardino shooter's work iPhone. Read the full interview on TIME's website.
The Dawn of the Location Enabled Web

1) Location Privacy

The ubiquity of increasingly high-powered mobile devices has already spawned the Internet's first generation of location-based services and applications. As the accuracy of location data improves and the expense of calculating and obtaining it declines, location may well come to pervade the online experience. While the increasing availability of location information paves the way for exciting new applications and services, the increasingly easy availability of location information raises several different kinds of privacy concerns. Ensuring that location information is transmitted and accessed in a privacy-protective way is essential to the future success of location-based applications and services.

Because individuals often carry their mobile devices with them, location data may be collected everywhere and at any time, often without user interaction, and it may potentially describe both what a person is doing and where he or she is doing it. For example, triangulation of an individual's mobile phone can reveal the fact that he was at a particular medical clinic at a particular time. The ubiquity of location information may also increase the risks of stalking and domestic violence if perpetrators are able to use (or abuse) location-based services to gain access to location information about their victims.

Location information can also be highly identifiable, even when it isn't directly associated with other personal information. For many people, there is one location where they spend their daytime hours (at work) and one location where they spend their nighttime hours (at home). After a day or two of collecting just those two data points about a person, it becomes fairly obvious whom those data points describe.

Furthermore, location information is and will continue to be of particular interest to governments and law enforcers around the world. Standards for government access to location information held by companies are unclear at best and far too low at worst. The existence of detailed records of individuals' movements should not automatically facilitate the ability for governments to track their citizens, but in many cases, laws dictating what government agents must do to obtain location data have not kept pace with technological evolution.

Testimony of Leslie Harris (regarding DPI but containing a section about location) (April 2009)
Digital Search & Seizure Report (February 2006)

2) The Dawn of the Location-Enabled Web

Apple recently announced the release of the iPhone 3.0 software, which is a free update available to iPhone users containing a number of new software features. With the release of the software, the latest version of the Safari web browser running on the iPhone will be location-enabled. This means that any Web site can ask Safari for the user's location, and Safari can provide it by using the location positioning technologies built into the phone (including GPS, among others). Apple has implemented a simple interface (based on a draft of a W3C standard) that Web sites can use to request location. Even before browsers started to become location-aware, Web sites have for years been using reverse-IP address lookups to obtain the approximate locations (at about city-level precision) of Web users.
But with 40 million iPhone users, Apple's foray into geolocation marks the true beginning of an era when pinpointing many Internet users on a map – with the precision of a few meters, not a few miles – goes from complicated and onerous to simple and fast. This certainly will not work for all Internet users, but 40 million is a significant start.

This new development has some obvious privacy implications. CDT believes that location information should only be used on individual Internet users' own terms. Individuals should get to decide with whom they share their location, what that information is used for, whether or not it gets shared, and how long it's retained. Location-enabled technologies – including Web browsers – should be designed with privacy in mind from the beginning and with built-in user controls to allow individuals to manage their location data as it is collected. CDT has been working for years to incorporate some of these concepts into technical standards, originally in the IETF's Geopriv working group and more recently within the W3C Geolocation working group, which created the draft standard that Apple and other browser vendors are starting to use.

Although the initial attempts from Apple and others are highly protective of privacy in some ways, there is still much room for improvement in providing user control. With Safari on iPhone, each Web site that wants to use your location has to first obtain the user's permission not once, but twice. Those permissions are also reset every 24 hours. As far as consent goes, this is a really strong baseline. But in terms of providing more granular control and transparency, the iPhone is lacking. There is no way for a user to see with which sites (or applications, for that matter) he or she has shared location. If a user visits a site and declines to provide location to it, the site may continue to prompt the user to provide location on every visit.

It would be helpful for users to be able to have a whitelist of trusted sites that can always obtain the user's location, and a blacklist of untrusted sites that cannot ever access it. (Incidentally, the IETF's Geopriv work has a built-in whitelisting capability.) That way, users could avoid the 24-hour permission renewal described above and they would not be badgered into consenting by accident. This kind of granularity would also help with permission revocation. Right now, to revoke even a single site's permission, the only choice is to revoke all sites' permissions. Even accomplishing that is a counterintuitive process: under the general settings, using the tab marked "Reset" (a somewhat scary name), the user must select "Reset Location Warnings." Granted, the 24-hour permission relapse means that, today, there probably will not be many sites to revoke permissions from. But if the permission model ever changes, the revocation model needs to change as well.

Given the privacy interests at stake and the relative lack of protection in the law, we would expect location controls to be better than other kinds of technological controls on the Web, to offer users more choices about what happens to their data and to be especially transparent about when location data is being passed around. It does not appear that every one of our expectations will be met here at the dawn of the location-enabled Web.
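To make the whitelist/blacklist idea raised above concrete, here is a small illustrative sketch, not taken from any shipping browser, of how a browser might consult per-site standing decisions before prompting the user; the class and method names are invented for the example.

```typescript
// Hypothetical per-site location permission store, sketching the kind of
// granular control the article asks for. Real browsers at the time offered
// only all-or-nothing revocation.
type LocationDecision = "always-allow" | "always-deny" | "ask";

class LocationPermissionStore {
  private decisions = new Map<string, LocationDecision>();

  // Record the user's standing choice for a site (e.g. from a settings page).
  set(origin: string, decision: LocationDecision): void {
    this.decisions.set(origin, decision);
  }

  // Revoke a single site's permission without touching any other site.
  revoke(origin: string): void {
    this.decisions.delete(origin);
  }

  // Decide what the browser should do when a page requests location.
  check(origin: string): LocationDecision {
    return this.decisions.get(origin) ?? "ask";
  }

  // Transparency: list every site the user has made a standing decision for.
  list(): Array<[string, LocationDecision]> {
    return [...this.decisions.entries()];
  }
}

const store = new LocationPermissionStore();
store.set("https://maps.example.com", "always-allow");   // whitelist entry
store.set("https://tracker.example.net", "always-deny"); // blacklist entry
console.log(store.check("https://maps.example.com"));    // "always-allow"
console.log(store.check("https://unknown.example.org")); // "ask"
```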
But as location comes to pervade the Web experience – which it will, given the simple interface offered by the browser vendors and the myriad uses of location information – we will be taking a closer look at how current user controls work, how they could be improved, and how standards, policy, and law can contribute to protecting location privacy on the Web.

IETF Geopriv Working Group (February 2009)
Draft W3C Standard (June 2009)

3) Location-Aware Firefox

Firefox, the second-most popular Web browser in the US after Microsoft's Internet Explorer, has also recently become location-enabled. As with Safari on iPhone, this means that Web sites can now ask Firefox for your location, and the browser can now deliver it. Initially, Google has signed on as the default "location provider" for Firefox. When a Firefox user pulls up a Web site that wants to use his or her location, Firefox will gather some information about nearby WiFi access points and send that information to Google. Because Google maintains a database that maps WiFi access points to actual physical locations, it can use this information to calculate the user's location. That location gets sent back to the Firefox browser, and the browser forwards it on to the Web site that originally requested it. The accuracy of the location depends on a number of factors, but can be within a handful of meters in densely populated areas.

Firefox and Google have taken a couple of excellent steps to protect the privacy of Firefox users throughout this process. The location information gets transmitted over an encrypted connection so it cannot be sniffed en route between the browser and Google or vice versa. Firefox does not provide Google with any information about the site that made the location request, so Google does not learn anything extra about the user's browsing habits. Google also de-identifies the information it receives from Firefox two weeks after it is collected. This seems like a solid set of standards that all location-enabled browsers and location providers should be able to meet. While it is nice to see Google and Firefox take these steps, we are hopeful that Firefox will be able to expand its pool of location providers, and that new location providers will be able to meet these same standards. There is actually a diversity of ways in which Web users can or will soon be able to obtain their own locations, and as new location providers crop up, users should have the ability to choose their preferred provider.

On the user experience side, the story is somewhat mixed. While Firefox, like Safari on iPhone, will prompt users for permission before passing location on to a Web site, there is no easy way to see a list of sites that have obtained location. If the user loses trust in a particular site, he or she must go back to the site itself to revoke its permission, which is probably precisely what the user will not want to do. And the mechanism for disabling location-awareness altogether is somewhat complex. We expect to see some more intuitive user controls for these kinds of features as more and more Web sites become location-enabled.

Geolocation in Firefox (June 2009)
Geolocation Controls Comparison
Object Controls Comparison
Cookie Controls Comparison
Privacy Mode Comparison
Time to Look Beyond the iPhone Location File
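The Firefox-to-location-provider exchange described above can be sketched roughly as follows; the endpoint URL, payload shape, and field names here are invented for illustration and are not Google's actual service interface.

```typescript
// Illustrative sketch of a WiFi-based location lookup against a hypothetical
// provider. Note what is *not* sent: nothing about the requesting Web site.
interface AccessPointObservation {
  bssid: string;             // hardware address of a nearby WiFi access point
  signalStrengthDbm: number; // stronger signals weigh more in the position fix
}

interface ProviderLocation {
  latitude: number;
  longitude: number;
  accuracyMeters: number;
}

async function lookUpLocation(
  observations: AccessPointObservation[]
): Promise<ProviderLocation> {
  // Sent over HTTPS so it cannot be sniffed en route, as the article notes.
  const response = await fetch("https://location-provider.example.com/v1/locate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ wifiAccessPoints: observations }),
  });
  if (!response.ok) {
    throw new Error(`Provider returned ${response.status}`);
  }
  return (await response.json()) as ProviderLocation;
}
```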
Libya's Internet hit with severe disruptions Libya's network traffic has fallen by up to 80 percent, with YouTube being especially hard hit, as the country appears to follow Egypt's recent example. by Declan McCullagh February 22, 2011 4:35 PM PST @declanm Libya's Internet links have been severely disrupted as chaos spreads across the country, with a defiant Col. Moammar Gadhafi today vowing to die a "martyr" rather than relinquish his grip on power. As reports describe portions of Libya as a "war zone," and the country's deputy U.N. ambassador is saying "genocide" is under way, inbound and outbound Internet traffic has plummeted to a fraction of what's normal. Over the weekend, traffic appeared to be following a "curfew" pattern, with more restrictions imposed in the evenings, and YouTube is now almost entirely unreachable while Facebook is blocked. Craig Labovitz, the chief scientist of Arbor Networks, said that as of today, Libya is experiencing a significant Internet outage with traffic volumes 60 percent to 80 percent below normal levels. That follows a complete outage on Friday night, with the country vanishing from the Internet as completely as Egypt did during its revolts a few weeks earlier. Partial service was restored Saturday morning, only to be cut off again at around 2 p.m. PT, or midnight local time. Jim Cowie, co-founder and chief technology officer of Internet intelligence firm Renesys, says it's not clear whether the disruptions are intentional or caused by other factors such as power outages. (A report by a CNN correspondent in eastern Libya said the power was up but the Internet was down.) YouTube connections from Libya have fallen dramatically from normal levels. "The outages have lasted hours, and then service has resumed," Cowie said. "All of that is consistent with alternative explanations, such as power problems or some other kind of single-operator engineering issue." Egypt's Internet disruptions were easier to identify because of the larger number of broadband providers, almost all of which went dark simultaneously. In Libya, however, there appears to be only one with connections to the rest of the world: Libya Telecom and Technology, which is state-owned and enjoys close ties to Gadhafi. Egyptian networks "were withdrawn within the same 20-minute window--hundreds and hundreds of networks were affected, and stayed down for days," Cowie said. "Traffic flowing through Egypt to other destinations in the Middle East was utterly unaffected. All of that gave a fairly unambiguous signal from the start that it was a political event." Akamai Technologies' monitoring also shows a dramatic dropoff in Libya's traffic (times are U.S. Eastern Time). Graphs from Google's Transparency Report and Akamai Technologies reflect the hiccups in Libya. The data shows daily rises and dips in normal Internet traffic from Libya, followed by a stuttering, interrupted flow after Friday. Traffic appeared to be rising on Tuesday, but at far lower levels than a week before. YouTube's traffic volume, according to the Transparency Report, is down as much as 90 percent from normal levels. Instant messaging and Web browsing traffic has dropped more quickly than other types, according to Arbor's Labovitz. While Libya's government has engaged in relatively modest Internet selective filtering in the past, the list of off-limits Web sites has grown in the last week. The AFP news service reported on Friday that Facebook was blocked, and Al Jazeera's Web site is now off-limits. 
Al Jazeera said today that it's "suffering interference on its Arabsat satellite frequency" as well. That follows Gadhafi's recent warning to Libyans not to use Facebook, where activists have created groups calling for reform.

Bit.ly CEO John Borthwick wrote on Quora that Internet blocking in Libya "will not affect" his company's site because some of the root nameservers for the .ly top-level domain are located in Oregon and the Netherlands. Page.ly's Joshua Strebel offered similar assurances. On the other hand, if for some reason Libya's government decided to target foreign .ly Web sites--which of course wouldn't make much economic sense--it could require that those domains be removed from the master in-country registry. They would then begin to disappear from the Internet over the next few weeks.
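As a rough, hypothetical illustration of the kind of baseline comparison monitoring firms like Arbor and Renesys describe (the figures, interface, and function below are invented for the sketch and are not their actual methodology):

```typescript
// Sketch: flag an outage by comparing observed traffic against a same-hour-of-week
// baseline, reporting the percentage drop ("60 to 80 percent below normal").
interface TrafficSample {
  hourOfWeek: number;        // 0..167
  megabitsPerSecond: number;
}

function percentBelowNormal(
  baseline: TrafficSample[],
  observed: TrafficSample
): number {
  const matching = baseline.filter(s => s.hourOfWeek === observed.hourOfWeek);
  if (matching.length === 0) return 0;
  const normal =
    matching.reduce((sum, s) => sum + s.megabitsPerSecond, 0) / matching.length;
  return 100 * (1 - observed.megabitsPerSecond / normal);
}

// Example: if this hour normally carries roughly 10 Gbit/s and only 2.5 Gbit/s is
// seen now, traffic is about 75% below normal - inside the range reported for Libya.
const history: TrafficSample[] = [
  { hourOfWeek: 40, megabitsPerSecond: 10_000 },
  { hourOfWeek: 40, megabitsPerSecond: 10_200 },
];
console.log(
  percentBelowNormal(history, { hourOfWeek: 40, megabitsPerSecond: 2_500 }).toFixed(1)
);
```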
Public Release: 14-Feb-2013 Slithering towards extinction Almost 1 in 5 reptiles are struggling to survive NINETEEN PERCENT of the world's reptiles are estimated to be threatened with extinction, states a paper published today by the Zoological Society of London (ZSL) in conjunction with experts from the IUCN Species Survival Commission (SSC). The study, printed in the journal of Biological Conservation, is the first of its kind summarising the global conservation status of reptiles. More than 200 world renowned experts assessed the extinction risk of 1,500 randomly selected reptiles from across the globe. Out of the estimated 19% of reptiles threatened with extinction, 12% classified as Critically Endangered, 41% Endangered and 47% Vulnerable. Three Critically Endangered species were also highlighted as possibly extinct. One of these, a jungle runner lizard Ameiva vittata, has only ever been recorded in one part of Bolivia. Levels of threat remain particularly high in tropical regions, mainly as a result of habitat conversion for agriculture and logging. With the lizard's habitat virtually destroyed, two recent searches for the species have been unsuccessful. Dr. Monika Böhm, lead author on the paper: "Reptiles are often associated with extreme habitats and tough environmental conditions, so it is easy to assume that they will be fine in our changing world. "However, many species are very highly specialised in terms of habitat use and the climatic conditions they require for day to day functioning. This makes them particularly sensitive to environmental changes," Dr. Böhm added. Extinction risk is not evenly spread throughout this highly diverse group: freshwater turtles are at particularly high risk, mirroring greater levels of threat in freshwater biodiversity around the world. Overall, this study estimated 30% of freshwater reptiles to be close to extinction, which rises to 50% when considering freshwater turtles alone, as they are also affected by national and international trade. Although threat remains lower in terrestrial reptiles, the often restricted ranges, specific biological and environmental requirements, and low mobility make them particularly susceptible to human pressures. In Haiti, six of the nine species of Anolis lizard included in this study have an elevated risk of extinction, due to extensive deforestation affecting the country. Collectively referred to as 'reptiles', snakes, lizards, amphisbaenians (also known as worm lizards), crocodiles, and tuataras have had a long and complex evolutionary history, having first appeared on the planet around 300 million years ago. They play a number of vital roles in the proper functioning of the world's ecosystems, as predator as well as prey. Head of ZSL's Indicators and Assessment Unit, Dr Ben Collen says: "Gaps in knowledge and shortcomings in effective conservation actions need to be addressed to ensure that reptiles continue to thrive around the world. These findings provide a shortcut to allow important conservation decisions to be made as soon as possible and firmly place reptiles on the conservation map," "This is a very important step towards assessing the conservation status of reptiles globally," says Philip Bowles, Coordinator of the Snake and Lizard Red List Authority of the IUCN Species Survival Commission. "The findings sound alarm bells about the state of these species and the growing threats that they face globally. 
Tackling the identified threats, which include habitat loss and harvesting, are key conservation priorities in order to reverse the declines in these reptiles." The current study provides an indicator to assess conservation success, tracking trends in extinction risk over time and humanity's performance with regard to global biodiversity targets. ZSL and IUCN will continue to work with collaborating organisations to ensure reptiles are considered in conservation planning alongside more charismatic mammal species. ### Editors' notes Interviews: Available on request with Dr. Ben Collen or Dr. Monika Böhm Copies of the full paper are available on request High resolution images available here:- https://zslondon.sharefile.com/d/s1cd538d64f54ee6a Media Information For more information please contact Amy Harris on 0207 449 6643 or email amy.harris@zsl.org ZSL Founded in 1826, the Zoological Society of London (ZSL) is an international scientific, conservation and educational charity whose mission is to promote and achieve the worldwide conservation of animals and their habitats. Our mission is realised through our groundbreaking science, our active conservation projects in more than 50 countries and our two Zoos, ZSL London Zoo and ZSL Whipsnade Zoo. For more information visit www.zsl.org About IUCN IUCN, International Union for Conservation of Nature, helps the world find pragmatic solutions to our most pressing environment and development challenges. IUCN supports scientific research, manages field projects all over the world, and brings governments, NGOs, the UN and companies together to develop policy, laws and best practice. IUCN is the world's oldest and largest global environmental organization, with more than 1,000 government and NGO members and almost 11,000 volunteer experts in some 160 countries. IUCN's work is supported by over 1,000 staff in 60 offices and hundreds of partners in public, NGO and private sectors around the world. www.iucn.org; IUCN on Facebook; IUCN on Twitter Information from this study will form part of the global assessment of reptiles which is being undertaken by IUCN. More than 3,700 of the world's nearly 10,000 species of reptiles have already been assessed in this ongoing process. About the Species Survival Commission The Species Survival Commission (SSC) is the largest of IUCN's six volunteer commissions with a global membership of around 8,000 experts. SSC advises IUCN and its members on the wide range of technical and scientific aspects of species conservation, and is dedicated to securing a future for biodiversity. SSC has significant input into the international agreements dealing with biodiversity conservation. The IUCN Red List of Threatened Species™ The IUCN Red List of Threatened Species™ (or The IUCN Red List) is the world's most comprehensive information source on the global conservation status of plant and animal species. It is based on an objective system for assessing the risk of extinction of a species should no conservation action be taken. Species are assigned to one of eight categories of threat based on whether they meet criteria linked to population trend, population size and structure and geographic range. Species listed as Critically Endangered, Endangered or Vulnerable are collectively described as 'Threatened'. The IUCN Red List is not just a register of names and associated threat categories. 
It is a rich compendium of information on the threats to the species, their ecological requirements, where they live, and information on conservation actions that can be used to reduce or prevent extinctions. The IUCN Red List is a joint effort between IUCN and its Species Survival Commission, working with its Red List partners BirdLife International; Botanic Gardens Conservation International; Conservation International; Microsoft, NatureServe; Royal Botanic Gardens, Kew; Sapienza University of Rome; Texas A&M University; Wildscreen; and Zoological Society of London (ZSL). www.iucnredlist.org www.facebook.com/iucn.red.list @amazingspecies Crocodiles are genetically closer to birds than to other reptiles. There are around 25 species found in Americas, Asia, Africa and Australia. The largest extant reptile of all is the saltwater crocodile. Tuatara The tuatara is a reptile endemic to New Zealand which, though it resembles most lizards, is actually part of a distinct lineage, order Rhynchocephalia. The tuatara is the only surviving members of its order, which flourished around 200 million years ago. There were previously thought to be two living species of tuatara, but recent evidence suggests only a single species, Sphenodon punctatus, exists. The recent discovery of a tuatara hatchling on the mainland indicates attempts to re-establish a breeding population on the New Zealand mainland have had some success. The total population of tuatara of all species and subspecies is estimated to be greater than 60,000, but less than 100,000. The Amphisbaenia (called amphisbaenians or worm lizards) are a usually legless suborder of squamates closely related to lizards and snakes, comprising more than 150 species. They are very poorly understood, due to their burrowing lifestyle and general rarity. Most species are found in Africa and South America, with a few in other parts of the world. Lizards & Snakes Lizards are a widespread group of squamate reptiles, with more than 5600 species, ranging across all continents except Antarctica, as well as most oceanic island chains. Lizards typically have feet and external ears, while snakes lack both of these characteristics. There are over 3,000 species of snakes ranging as far northward as the Arctic Circle in Scandinavia and southward through Australia. Snakes can be found on every continent (with the exception of Antarctica), in the sea, and as high as 16,000 feet (4,900 m) in the Himalayan Mountains of Asia. NW1 4RYT: 0207 449 6643 www.zsl.org Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system. amy.harris@zsl.org @OfficialZSL http://www.zsl.org More on this News Release Biological Conservation
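As a back-of-the-envelope illustration of the sampled approach described in the release above (a random sample of roughly 1,500 species used to estimate a global proportion), the sketch below computes a simple normal-approximation confidence interval. The code and the rounded input numbers are illustrative only, not the study's actual methodology.

```typescript
// Sketch: estimate the share of threatened species from a random sample and
// attach a rough 95% confidence interval (normal approximation).
function proportionWithCI(threatenedInSample: number, sampleSize: number) {
  const p = threatenedInSample / sampleSize;
  const standardError = Math.sqrt((p * (1 - p)) / sampleSize);
  const margin = 1.96 * standardError;
  return { estimate: p, low: p - margin, high: p + margin };
}

// Roughly 19% of ~1,500 sampled reptiles assessed as threatened:
const result = proportionWithCI(285, 1500);
console.log(
  `~${(result.estimate * 100).toFixed(1)}% threatened ` +
  `(95% CI ${(result.low * 100).toFixed(1)}-${(result.high * 100).toFixed(1)}%)`
);
// Scaling the point estimate to ~9,500 described reptile species gives an
// order-of-magnitude figure of roughly 1,800 species, purely as an illustration.
```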
FLIGHT International, 31 July 1969

Showing its underside (above) is the Comet 3 prototype, here in BEA livery. A service line-up at RAF Lyneham in 1962 (right) includes a Comet 2 (foreground) and three Comet CAs of Transport Command.

THE COMET IS TWENTY

…ance aspects, it would not be practical or economical for an airliner. The Comet's final configuration was agreed in August 1946, but as early as the previous September, while the design was still fluid, BOAC had recommended the placing of a development order and at the end of 1945 had agreed to buy up to ten aircraft subject to guaranteed performance, delivery dates and price. Great credit is due both to Lord Knollys, then BOAC's chairman, and the corporation's technical team, with Mr A. C. Campbell-Orde as operations development director, for their enthusiastic and farsighted backing of what was still an almost visionary concept to many people.

In January 1947 the first Comet orders were placed—two Ministry-of-Supply-owned prototypes, eight for BOAC and six for British South American Airways, the BOAC total being revised to nine following the takeover of BSAA in July 1949. This initial order for 16, all at a fixed price based on costs current in 1946, was of course far from a break-even quantity, but BOAC's and de Havilland's decision to go into production without waiting for a prototype to fly enabled Comet services to start much quicker—Mr R. E. Bishop has estimated at least two years—than would otherwise have been the case. Only 2½ years elapsed from the start of detailed design to the first flight, and a further 2¾ years to BOAC's first Comet service on May 2, 1952—and this was less than DH had expected. Such a time scale compared favourably with much later, if rather larger, designs such as the Boeing 727 and Trident; and, considering the advanced technology that the Comet represented, it was highly creditable.

The new jet embodied a number of technical features that are now established practice but were then distinctly novel to civil aircraft; these included the large scale use of Redux bonding (previously used in the Dove), a "dragless" radio system with fully suppressed aerials, provision for cloud- and collision-warning radar, power-operated flying controls and—on the prototype—a small faired-over nacelle between each pair of Ghost jetpipes; this was intended to house a 5,000lb s.t. de Havilland Sprite "cold" liquid-fuel rocket motor for assisted take-off from hot and high airports. The idea of using the Sprite was abandoned, however, on consideration of the practical difficulties of storing rocket fuels and of refuelling.

Meanwhile BOAC itself had to put in a lot of preliminary work over and above that involved in preparing for a new type; jet operations were a new venture. Such aspects as radio and navaid requirements, en route performance analysis, temperature accountability, ATC holding and descent procedures and the necessary fuel reserves and alternates all had to be planned with the utmost care.

(Above) United Arab Airlines put the Comet 4C into service in July 1960, and still operates seven. (Opposite) BEA has operated the 4B since April 1960, and is now transferring seven to its newly formed charter subsidiary, BEA Airtours.

Gradually the necessary spadework was completed, at first by the BOAC operational development unit at Hurn under Captain M. J. R. Alderson, who later headed the Comet unit which was formed in September 1950.
The second prototype, G-ALZK, made its first flight one year to the day after the first, and instead of the latter's single mainwheels employed four-wheel bogie units. The first had completed 324 hours' flying by June 15, 1950, including tropical trials and the breaking of London-Rome-London and London-Cairo point-to-point records.

In December the Comet 2 was announced, powered by four 6,500lb s.t. RA.7 Avon 502s (later 7,100lb s.t. RA.25 Avon 503s) in place of the 5,000lb s.t. Ghost 45s of the Mk 1; the Mk 2 was given an extra 3ft of fuselage length forward of the wing, enabling it to seat up to 44 passengers, eight more than the Mk 1. The possibility of using axial-flow engines with their better fuel consumption had been considered back in 1945-46, but at that time the axials did not seem sufficiently developed for civil use; in particular the problems of de-icing them and tapping compressor bleed air were not then solved.

Only five months after the type's maiden flight the first export order was announced—two Comet 1As for Canadian Pacific Air Lines. But the first of these, CF-CUN, was lost in a take-off accident at Karachi on March 3, 1953, on its delivery flight, and in the end CPAL never operated the Comet, selling its second Mk 1A to BOAC. The Mk 1A featured an extra 1,000 Imp gal of fuel tankage and seating for 44 passengers. Three each of this variant were…
Home Tanzania: Working towards better ecosystems in Pangani river basin Thu, 20 Mar 2008 A new project in Tanzania's Pangani river basin aims to improve water flows for the benefit of all its users. Extending from the mountainous southern reaches of Kenya down to the lowlands of the Tanzanian coast, the Pangani river basin is reeling from over-use and over-allocation. The basin is stressed, with many latent and emerging conflicts among its users, and current supplies are unable to meet demand. Realising that the national situation generally was headed for disaster, the Tanzanian government elaborated a new water policy in 2002 which sought to redistribute the waters of the Pangani basin in a more equitable manner. Traditionally users in the northern highlands have had abundant water supplies, while consumers in the drier southern coastal areas have suffered more deprivation. Several factors have contributed to the current state of the Pangani basin, including increased demand for water, changing climatic conditions, competing uses, over-abstraction and watershed degradation. The new policy contends that basic human needs will take priority in water allocation, followed by environmental issues after the government acknowledged the importance of healthy ecosystems for sustainable development. But a crucial question remains: how much water does the environment need to stay healthy? In a bid to answer this, IUCN has been giving technical support to a project of the Tanzanian water ministry conducted through its Pangani Basin Water Office in Moshi, at the foot of Mount Kilimanjaro. The Integrated Flow Management (IFM) project seeks to determine how much water is needed for the basin’s ecosystems to maintain themselves and their functions. Detailed studies of the hydrology, river health, estuary health and socio-economics of the basin were carried out as initial steps of the IFM, leading to the current phase in which scenarios for six different water allocation options have been devised by a multi-disciplinary team of Tanzanian experts. Guided by a team of experts from South Africa who have pioneered the field, the Tanzanians – drawn from the government, academia, and NGOs – developed water allocation scenarios taking into account the social, economic and environmental impacts. These include agricultural expansion as a priority; hydropower production as a priority; keeping the status quo; impact on other sectors by boosting the condition of ecosystems; implementing the national water policy with a focus on agriculture; implementing the water policy with a focus on hydropower. Impacts on livelihoods, utilities and the economy were also assessed. Both the Tanzanian government and the South African consultants are very excited by the prospects of this project, hoping it can be replicated throughout the country. The consultants point out that much of the pioneering work on assessment flows is being carried out in Africa with the focus on sustainable livelihoods. Notable results have already been produced. For instance, poverty reduction measures traditionally call for increasing agricultural output. But in Pangani, water allocation currently favours the agricultural sector so there is not much more room for growth here and it’s an option that could be ruled out. Advocates of boosting hydropower are normally at odds with conservationists. 
However, because the Pangani hydropower station sits downstream in a more degraded stretch of the basin, increasing the water allocated to hydropower would also benefit the environment, keeping both sides happy. Long term, water allocation in the Pangani river basin should also look at small-scale investment and water storage for the dry season. These current options provide a starting point for discussion, and it's likely that the optimum scenario will be drawn from a combination of these different strategies, with a view to raising awareness of the benefits of the new system among stakeholders by June 2008.

Work area: Water
Location: East and Southern Africa
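Purely to illustrate how scenario comparison of this kind can be organised (this is an invented sketch, not the Tanzanian team's actual model), each allocation option could be scored against weighted criteria such as livelihoods, hydropower, and ecosystem condition:

```typescript
// Illustrative multi-criteria scoring of water-allocation scenarios.
// Weights and scores are made up for the example.
type Criterion = "livelihoods" | "hydropower" | "ecosystems";

interface Scenario {
  name: string;
  scores: Record<Criterion, number>; // 0 (worst) to 10 (best)
}

const weights: Record<Criterion, number> = {
  livelihoods: 0.4,
  hydropower: 0.3,
  ecosystems: 0.3,
};

function weightedScore(s: Scenario): number {
  return (Object.keys(weights) as Criterion[])
    .reduce((total, c) => total + weights[c] * s.scores[c], 0);
}

const scenarios: Scenario[] = [
  { name: "Status quo",             scores: { livelihoods: 5, hydropower: 5, ecosystems: 3 } },
  { name: "Hydropower priority",    scores: { livelihoods: 4, hydropower: 8, ecosystems: 6 } },
  { name: "Agricultural expansion", scores: { livelihoods: 6, hydropower: 3, ecosystems: 2 } },
];

const ranked = [...scenarios].sort((a, b) => weightedScore(b) - weightedScore(a));
ranked.forEach(s => console.log(`${s.name}: ${weightedScore(s).toFixed(1)}`));
```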
QinetiQ ion propulsion team recognised with Sir Arthur Clarke award for contribution to space exploration

QinetiQ's ion propulsion team has been named "team of the year" for its outstanding contribution to space exploration at the recent Sir Arthur Clarke awards. The award comes at the end of a landmark year for the QinetiQ ion propulsion team, which saw the European Space Agency's (ESA's) GOCE spacecraft become the first to launch with QinetiQ's T5 ion thrusters on board and QinetiQ begin work supplying advanced T6 thrusters for ESA's future BepiColombo mission to Mercury.

The award was accepted by the team's chief engineer, Neil Wallace, who commented: "We knew about the nomination but winning the award came as a complete surprise to all of us. It was a great team effort and reflects the hard work of many individuals for almost 20 years."

He added: "2009 was a busy and exciting year for the Ion Engine Team and we're thrilled to have our work on the GOCE and BepiColombo spacecraft recognised with this award. Electric propulsion will make spacecraft and satellites lighter, allowing more weight for the real payload, and we are delighted to be at the leading edge of this technology."

The Sir Arthur Clarke Awards are presented annually at the climax of the UK Space Conference to honour those who have done the most to further the field of space exploration in the past year. Previous winners have included the ESA ATV team, responsible for creating the spacecraft which keeps supplies flowing to the International Space Station, and the Huygens team which landed the first spacecraft on Saturn's moon, Titan. QinetiQ beat off fierce competition from fellow nominees SSTL and ESA to win this year's honour.

Chair of the judging panel, Dr. Lesley Wright, said: "This was an outstanding achievement by the QinetiQ team which the judges view as a significant contribution to space exploration."

The T5 and T6 ion thrusters developed by QinetiQ are ten times more efficient than the chemical engines traditionally used to propel spacecraft, making some deep space missions possible for the first time. For ESA's BepiColombo mission to Mercury, the engines make the mission possible by counteracting the sun's gravitational pull.
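To see why a roughly tenfold improvement in propellant efficiency matters so much, here is a small worked sketch using the standard rocket equation; the specific-impulse and delta-v figures are generic illustrative assumptions, not QinetiQ's published numbers.

```typescript
// Tsiolkovsky rocket equation: deltaV = Isp * g0 * ln(m0 / mf).
// Rearranged, the propellant fraction needed is 1 - exp(-deltaV / (Isp * g0)).
const g0 = 9.81; // m/s^2

function propellantFraction(deltaVms: number, ispSeconds: number): number {
  return 1 - Math.exp(-deltaVms / (ispSeconds * g0));
}

// Illustrative mission needing 5 km/s of delta-v:
const deltaV = 5000;
const chemical = propellantFraction(deltaV, 300);  // typical chemical-engine Isp
const ion = propellantFraction(deltaV, 3000);      // ion thruster, ~10x higher Isp

console.log(`Chemical: ${(chemical * 100).toFixed(1)}% of initial mass is propellant`); // ~82%
console.log(`Ion:      ${(ion * 100).toFixed(1)}% of initial mass is propellant`);      // ~16%
```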
Nintendo Famicom Consoles

Sharp Twin Famicom Console, Black – Grade: B/C
Nintendo Family Computer (Famicom) Console with AV Mod
Nintendo Famicom Disk System with RAM Pack, Boxed – Grade: E

The Nintendo Family Computer is most commonly known as the Nintendo Famicom and was the first home games console released by Nintendo in Japan, in 1983. The Famicom helped reinvigorate the video games industry after the industry crash experienced in the '80s and eventually came to the West in the form of the NES. Each Famicom is internally renovated: we clean the internal components and the controllers' volume control and microphone (embedded in the controllers, these can cause feedback when dirty). We stock various models of the Nintendo Famicom console, such as the Sharp Twin Famicom and the system expansion, the Famicom Disk System. We also offer AV-modified systems and can perform this modification upon request.

The Family Computer Disk System, most commonly known as the Famicom Disk System, is a peripheral for Nintendo's Famicom video game console and was released in Japan in 1986. It uses proprietary floppy disks called "Disk Cards" for data storage. The Disk System is connected to the Famicom by plugging a special cartridge known as the RAM Adapter into the system's cartridge port, and attaching that cartridge's cable to the disk drive. The RAM Adapter contains an additional 32 KB of RAM for temporary program storage, 8 KB of RAM for tile and sprite data storage, and an ASIC known as the 2C33. The ASIC acts as a disk controller for the floppy drive, and also includes additional sound hardware featuring a single-cycle wavetable-lookup synthesizer. Finally, embedded in the 2C33 is an 8 KB BIOS ROM. The double-sided Disk Cards have a total capacity of 112 KB. Many games span both sides of a disk, or a single disk contains a different game on each side. A few games use two full disks, totalling four sides. The Disk System is capable of running on six C-cell batteries or a compatible AC adapter. We renovate each of our Famicom Disk Systems by replacing the drive belts (notorious for degrading) and re-aligning and recalibrating the drive.

Sharp Twin Famicom

The Twin Famicom was produced by Sharp in 1986 and was only released in Japan. Licensed by Nintendo, it combines the Nintendo Famicom and the Famicom Disk System into a single home console. The cartridge and disk-system sides of the console function independently, controlled by a switch that selects the desired mode. As with our Famicom and Disk System consoles, each of our Twin Famicoms is renovated and vigorously tested: we replace the drive belt (notorious for degrading), re-align and recalibrate the drive, and clean all controller internals, volume control and microphone.
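As a small illustration of the storage arithmetic implied by the figures above, and assuming the quoted 112 KB total is split evenly across the two sides of a Disk Card (that even split is an assumption made for the sketch):

```typescript
// Sketch: how many disk sides would a game image of a given size need,
// assuming ~56 KB usable per side (half of the quoted 112 KB total)?
const KB_PER_SIDE = 56;

function sidesNeeded(gameSizeKB: number): number {
  return Math.ceil(gameSizeKB / KB_PER_SIDE);
}

console.log(sidesNeeded(48));  // 1 side  - fits on one side of a Disk Card
console.log(sidesNeeded(100)); // 2 sides - spans both sides of one card
console.log(sidesNeeded(200)); // 4 sides - a two-disk game, as the text mentions
```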
TV Downloads May Undercut ABC Stations - WSJ

Nick Wingfield, Joe Flint and Ethan Smith, Staff Reporters of THE WALL STREET JOURNAL

Last Thursday morning, Apple Computer Inc. started selling an episode of the hit television series "Lost" through its iTunes Music Store for $1.99 after the show aired the night before on ABC. It marked the first time a popular show was made available for legal downloading over the Internet so quickly after its original airing. With that, Apple may have helped open a Pandora's box for the media business. The Cupertino, Calif., company and its first TV partner -- Walt Disney Co., the parent of ABC -- have taken a...
Risk of Nuclear Proliferation - WSJ New High-Speed PCs Raise Risk of Nuclear Proliferation Carla Anne Robbins Staff Reporter of The Wall Street Journal Updated Dec. 14, 1998 12:01 a.m. ET PALO ALTO, Calif. -- Two years ago, it took more than a million dollars and months of plotting for Russia's top nuclear scientists to get their hands on an IBM supercomputer; the incident sparked charges in Washington that the U.S. had dropped its strategic guard and was selling off its military edge. Today, at Fry's Electronics, not far from the answering machines and a few aisles over from the washers and dryers, it takes less than 10 minutes to search out computers than can come close to the Russians' hard-won prize....
S&T Notebook: A Two-Year Tenure Winds Down Home | S&T Notebook: A Two-Year Tenure Winds Down Sep Dr. Scott Fish This is the final column by Dr. Scott Fish, Army Chief Scientist, on activities in the Army science and technology (S&T) community and their potential impact on Army acquisition programs. As part of our efforts to expand the Army’s awareness of S&T Initiatives outside the Army, Ms. Heidi Shyu, Assistant Secretary of the Army for Acquisition, Logistics, and Technology, and I visited Sandia National Laboratories on Aug. 23. We were met by Dr. Jeff Isaacson, Vice President for Defense Systems and Assessments, and Dr. Jerry L. McDowell, Deputy Laboratories Director and Executive Vice President for National Security Programs. They provided an overview of Sandia’s current research and development (R&D) initiatives and transitioning technologies, while showing us some of their unique laboratories with projects of relevance to the Army mission. In return, we discussed ways to enhance the strategic relationship between Sandia and the Army. This was a very fruitful visit. The following week, I traveled to the Air Force Research Laboratory (AFRL) at Wright-Patterson Air Force Base, OH, to attend a meeting of the Air Force Research Council, a gathering of the Air Force’s Chief Scientists, at the invitation of the AFRL Chief Technologist, Dr. Jennifer Ricklin. We had an excellent discussion on sensors, munitions, materials and manufacturing, and information. I talked about the Army’s work in these areas and gave them an overview of our S&T portfolio. Mike Cook of ATC discusses the Roadway Simulator with Dr. Fish. (Photo by Dana Fritts, Protocol Specialist, ATC) I also met with Maj Gen William N. McCasland, the AFRL Commander, to discuss increased cross-service S&T collaboration. I was able to tour several AFRL labs and facilities, discussing their programs. I was particularly impressed with how the various Air Force directorates think through and articulate their efforts within the Air Force Strategic Plan. They were terrific hosts. On Aug. 30, Air Force Chief Scientist Dr. Mark Maybury presented to Secretary Shyu, and a host of Army cyber-related organizations, work on an Air Force study he’s leading to provide a strategic focus in the cyberspace domain. Cyber Vision 2025 connects current National Strategy with future trends and challenges; it focuses on cyber as a domain, with air and space command and control functions within that cyber domain. The product clearly had parallel implications for the Army and engendered a lively discussion with the presentation participants. The next week I accompanied Ms. Shyu on a long-planned visit to the U.S. Army Cyber Command and received an overview of Army efforts in the cyberspace domain. Cyberspace will continue to be of national, military, and economic concern with no shortage of future work in that area. The week of Sept. 10 was a busy one. The Army Science Board briefed both the Commanding General of the U.S. Army Materiel Command, LTG Dennis L. Via, and Secretary of the Army John McHugh, on the results of the board’s latest study, “Strategic Direction for Army Science and Technology.” The study contains recommendations derived from looking at the current S&T environment and familiar trends, such as the growing global and industrial investment in technology. It also looks hard at how to enhance the transition of S&T while providing more focus for our S&T Enterprise. I started the next week in Warren, MI, with a visit to the U.S. 
Army Tank Automotive Research, Development, and Engineering Center (TARDEC), where Dr. Paul Rogers has just taken over as Director. I spent time with him and his leadership team getting an update on TARDEC’s work in protection, energy, and robotics. Dr. Rogers and I talked about how to enable his team to continue innovating and providing mechanisms for transitioning advancements to industry faster and more easily. I also received an update on underbody blast simulation work, and we discussed what TARDEC and the U.S. Army Research Laboratory are learning with these tools, where experimental validation is strong, and where improvements are needed. It was time well spent. Steve Knott (left), Associate Director of Ground System Survivability at TARDEC, discusses recent advances with Dr. Fish. (Photo by Bruce J. Huffman, Public Affairs Officer, TARDEC) At the end of the week, I accepted an invitation to tour the U.S. Army Aberdeen Test Center (ATC), Aberdeen Proving Ground, MD, from the new ATC Commander, COL Gordon Graham. Though I have interacted with many individual ATC personnel and participated in several tests there, I was surprised by the breadth and depth of ATC’s work. The increased use of modeling and simulation to help guide test planning, and the focus on the most productive tests to perform, are encouraging. We must continue to be diligent in this area, as budgets and trends in the complexity of our equipment continue to reduce our ability to verify everything by direct physical measurement. I was also impressed with the attitude of the project managers, who are finding ways to streamline validation and verification processes earlier in the acquisition cycle and link up with testing being conducted at contractor sites to shrink overall program timelines and cost. This is not easy; it requires continued engagement and clever strategy to maximize opportunities for confident development and certification of equipment for our warfighters. ATC has a great team and is doing critical work for our Army. This month ends my two-year tenure as the Army Chief Scientist. The experience has been great fun, and I’ve had the chance to shape some very interesting technical investigations across the realm of Army R&D. During this time, I’ve also had the chance to initiate activities both internal to the Army as well as external, and work through some of the typical growing pains of starting a new office in the Pentagon. Stay tuned for the selection of my successor by Secretary Shyu, whom I wish the very best, and who I expect will be able to take the Army Chief Scientist Office to an even higher level of utility. I now look forward to returning to Austin and initiating new activity with the University of Texas. Related posts: S&T Notebook: Strengthening Communication between S&T and Acquisition S&T Notebook: Robotics, Strategy, Small Units and More Army Develops ‘Translator’ for Improved Information Sharing Army Reserve Launches Air Traffic Control Simulator System
Obama Pushes For A ‘Price On Carbon’, Says Global Warming Science ‘Is Compelling’ BarbWire on 9 June, 2014 at 12:10 0shares Share Tweet Pin Plus LinkedIn Reddit StumbleUpon Digg Email Print President Obama told New York Times columnist Thomas Friedman he wanted to “price the cost of carbon emissions” in order to fight global warming. Obama compared pricing carbon dioxide emissions to mandating pollution control technology that lowered pollution emissions that caused acid rain. His interview with Friedman comes after his administration put limits on carbon dioxide emissions on existing power plants. One of the policy options for states to comply with Obama’s new emissions limits is to impose a cap-and-trade program within their state or as part of a regional agreement with other states. In essence, this is putting a price on carbon emissions. “So if there’s one thing I would like to see, it’d be for us to be able to price the cost of carbon emissions,” Obama told Friedman. “We’ve obviously seen resistance from the Republican side of the aisle on that. And out of fairness, there’s some Democrats who’ve been concerned about it as well, because regionally they’re very reliant on heavy industry and old-power plants,” Obama said. “I still believe, though, that the more we can show the price of inaction — that billions and potentially trillions of dollars are going to be lost because we do not do something about it — ultimately leads us to be able to say, ‘Let’s go ahead and help the marketplace discourage this kind of activity,’” Obama added. Obama’s Environmental Protection Agency has received stiff resistance from Republicans, states, some Democrats and the energy industry because the new emissions limits would force utilities to shut down more coal-fired power plants — which several analyses show will raise electricity prices. “The EPA based its regulations upon what it admits is a clerical error, which does not even appear in the Clean Air Act,” said West Virginia Attorney General Patrick Morrisey, a Republican. Morrisey sent the EPA a letter last week, urging them to withdraw their proposed emissions limits on power plants. The new power plants rules show the administration’s “disregard for the rule of law, said Morrisey. “If the issue of climate change is to be addressed, it should be addressed by Congress,” Moreisey said. “Yet, in its zeal to achieve what the President considers to be a ‘legacy issue,’ the EPA has ignored this most basic principle and has clearly violated the Clean Air Act.” Despite the massive political pushback, Obama seems determined to push climate policies and convince the public global warming is an imminent threat. “Look, it’s frustrating when the science is in front of us,” Obama said in his interview. “We can argue about how. But let’s not argue about what’s going on. The science is compelling.” “The baseline fact of climate change is not something we can afford to deny,” he continued. “And if you profess leadership in this country at this moment in our history, then you’ve got to recognize this is going to be one of the most significant long-term challenges, if not the most significant long-term challenge, that this country faces and that the planet faces.” “The good news is that the public may get out ahead of some of their politicians” — as people start to see the cost of cleaning up for hurricanes like Sandy or the drought in California — and when “those start multiplying, then people start thinking, ‘You know what? 
We’re going to reward politicians who talk to us honestly and seriously about this problem,’” Obama said. Follow Michael on Twitter and Facebook Content created by The Daily Caller News Foundation is available without charge to any eligible news publisher that can provide a large audience. For licensing opportunities of our original content, please contact licensing@dailycallernewsfoundation.org. Previous Article National Religious Broadcasters: Senate Proposal Will Limit First Amendment Freedoms Next Article Report: EPA’s Claimed Benefits Of Climate Rule Overblown By 15-Fold
Home › Risk assessment Briefing - December 2015 Genetic Engineering in Plants and the “New Breeding Techniques (NBTs)” Inherent risks and the need to regulate Over the last 5-10 years there have been rapid developments in genetic engineering techniques (genetic modification). Along with these has come the increasing ability to make deeper and more complex changes in the genetic makeup and metabolic pathways of living organisms. This has led to the emergence of two new fields of genetic engineering that overlap with each other: synthetic biology and the so-called New Breeding Techniques (NBTs). - August 2015 Dr Frances Kelsey: thalidomide and the precautionary principle We owe a deep debt of gratitude to Dr Frances Kelsey, write Helena Paul & Philip Bereano. In 1960, she defied her bosses at the FDA to prevent the licensing of thalidomide in the USA, saving thousands from being born with serious deformities. Her tough approach to minimising the risk from new drugs contains lessons we ignore at our peril. - January 2015 New breeding techniques Open letter to the Commission on new genetic engineering methods In the interest of protecting the environment and public health, genetically modified crops are subject to risk assessment, an authorisation process and labelling rules under EU law. All non-traditional breeding processes that change the structure of DNA using genetic engineering technologies or interfere with gene regulation fall within the scope of these GM regulations. Some are now calling on the European Commission to exempt new genetic engineering techniques from GM rules. The undersigned groups argue that such an exception could threaten the environment and our health, and would violate EU law. A dangerous precedent The UK Nuffield Council on Bioethics proposes five ethical principles and a duty to develop biofuels instead of the Precautionary Principle The Precautionary Principle advises society to be cautious about a technology or practice where there is scientific uncertainty, ignorance, gaps in knowledge or the likelihood of outcomes we did not predict or intend. It runs counter to the optimistic notion that any negative impacts from a technology can be addressed and may provide an opportunity to develop new solutions, so contributing to economic growth. The US Chamber of Commerce dislikes the precautionary approach and prefers: “the use of sound science, cost-benefit analysis, and risk assessment when assessing a particular regulatory issue.” Its strategy is therefore to: “Oppose the domestic and international adoption of the precautionary principle as a basis for regulatory decision making.” Genetically Engineered Trees & Risk Assessment An overview of risk assessment and risk management issues Trees differ in a number of important characteristics from field crops, and these characteristics are also relevant for any risk assessment of genetically engineered (GE) trees. A review of the scientific literature shows that due to the complexity of trees as organisms with large habitats and numerous interactions, currently no meaningful and sufficient risk assessment of GE trees is possible, and that especially a trait-specific risk assessment is not appropriate. Both scientific literature and in-field experience show that contamination by and dispersal of GE trees will take place. Transgenic sterility is not an option to avoid the potential impacts posed by GE trees and their spread. 
Regulation of trees on a national level will not be sufficient because due to the large-scale dispersion of reproductive plant material, GE trees are likely to cross national borders. All this makes GE trees a compelling case for the application of the precautionary principle. Search this site: Issues Bioenergy / Biomass Climate change & agriculture Food security / sovereignity (Green) Economy Social and ecological impacts Terminator (GURTs) This work by http://www.econexus.info is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License. Original website design by Envoy.
Stoppping Science in its Tracks -- Presidential Hopefuls Lack Skills to Understand Science, Sustainability While presidential hopefuls from the Rotten Tomatoes leagues -- Republican candidates, Tea Bag Party, Cain, Koch-supported anything -- take shots at EPA, anything tied to environmental sustainability, and science in general, and the record profits of Exxon and other energy companies splash on the news headlines, we have real issues to fight onward for. Arctic -- Right now, the U.S. Fish and Wildlife Service is asking for your input on a plan for the Arctic National Wildlife Refuge that, for the first time, could recommend Wilderness protection for the Arctic Refuge's Coastal Plain — the Refuge's biological epicenter that has been in Big Oil's sights for decades. This plan will guide how the Arctic Refuge will be managed for the next 15 years or more. A Wilderness recommendation could protect this unparalleled area and the abundant wildlife that depends on it— including polar bears, musk oxen, caribou, and millions of birds from around the globe. But to make sure the final version of the Fish and Wildlife Service's plan includes a Wilderness recommendation, we must demonstrate overwhelming support for protecting the Refuge's Coastal Plain. If we speak with a loud and united voice, we'll be sending a strong message that the Fish and Wildlife Service can't ignore. Will you speak up for the Arctic Refuge? Please sign our letter to Secretary Salazar and the Fish and Wildlife Service. There are some places in this country that define what it means to be American — the Arctic National Wildlife Refuge in northeastern Alaska is one of those places. For the past 50 years our country has remained committed to protecting one of our last wild places. Some places are just too extraordinary to drill, and the Arctic National Wildlife Refuge is one of them. This year, the U.S. Fish and Wildlife Service will make some big decisions about the future of the Arctic Refuge. If we all speak out, we can make sure that those decisions offer the critical Coastal Plain the strongest possible protections from Big Oil and harmful development. Please sign our letter to Secretary Salazar and the Fish and Wildlife Service, and speak out for this national treasure. Thanks for all you do, Cindy Shogan Alaska Wilderness League Western Australia -- Shark attack hype belongs in Hollywood, Australian scientists say [1]Freya Petersen [2]November 2, 2011 22:20 Jaws doesn't live here any more? It's been a summer of shark hunts, but are the politicians' responses to the perceived threat of attack justified? Freya Petersen Perhaps it will be remembered as the Great Shark Hunt of 2011, prompted by three fatal attacks in quick succession off Australia's western coastline. According to a recent article in The Sydney Morning Herald [3] (SMH), a shark cull — authorized by the government of Western Australia to find what many were speculating was a "rogue" great white, and to rid the waters of other potential man eaters — was only the fifth such hunt of the year. Shark attacks in Russia, the Seychelles, Reunion Island and Mexico similarly led officials to spring into action, mainly to reassure a terrified public it seems. The WA state government, meanwhile, reportedly given the go-ahead for any great white sharks to be killed if they posed a threat to human life. 
Now, Christopher Neff, a PhD candidate at the University of Sydney into public responses to "shark-bite incidents" off South Africa and the United States, has penned a timely response to the hype (although, with just a handful of recent stories on shark attacks in this column generating more than 10,000 page views, we of course consider it legitimate coverage of a serious public safety concern). Neff tries to turn attention to prevention rather than a knee-jerk political cure. Despite the nightmare-invoking details of the attacks of Western Australia, Neff's SMH article makes you want to go back in the water, be it Perth, Sydney or that unspecified place in New England [4] where Sheriff Martin Brody (almost) met his toothy match. He starts by arguing that while the public's concern about the risks is reasonable, swimmers "are being offered a political solution to a public safety problem." There is no evidence that shark hunts reduce attacks. Research shows these responses are political, symbolically reducing public perceptions of risk rather than the actual risk. At the core of this reaction are two elements: the pressure on any government to respond after tragic and unexplainable events; and the familiarity of the ''rogue shark'' theory.Can we pause there just to say, "Hallelujah!" Supporting Neff's assertion that shark hunts, or culls to use the more palatable of the terms for the conservation-minded, is a collection of responses from the science community — published by Down Under's new favorite resource, the Australian Science Media Centre [5]. Among them, that of Shaun Collin — Western Australia Premiers Research Fellow and Professor of Neuroecology in the School of Animal Biology and Oceans Institute at the University of Western Australia (we're hoping that's his full title). Here's a snapshot quote: The culling of any species of sharks is not the solution. Not only will this be indiscriminate killing of a protected Australian species (under both the EPBC Act and state legislation), there is no way of being sure the sharks caught will be those responsible for the attacks. At present, there is no data to suggest that shark numbers are increasing off WA’s coastline and shark attacks in Australia have remained relatively constant over time, occurring at a rate of approximately one per year for the last 50 years. Sharks are apex predators and they play a critical role in the complex balance of oceanic ecosystems and their removal can have major impacts on other marine species. Education and surveillance are the best prevention of human fatalities off the WA coast until better repellent devices are developed.Meanwhile, Dr Charlie Huveneers — a Shark Ecologist within the Marine Environment and Ecology Program at the South Australian Research and Development Institute (Aquatic Sciences) and lecturer in the School of Biological Sciences at Flinders University, Adelaide (ditto on the titles) writes, among other things: Although shark attacks are tragic events and are often highly mediatised, they are still very rare events with a low probability of occurrence... Around the world, several means of mitigating shark attacks have been put in place with variable level of success, but it is unlikely that one method can be considered the best way to reduce shark attacks. 
A combination of techniques selected depending on the characteristics of each location frequented by potentially dangerous sharks is likely to be the most efficient. Techniques, he says, such as the Shark Spotter program instituted by the City of Cape Town, South Africa, where spotters perched on mountains towering directly over the city's beaches "monitor for the dark silhouette of sharks on the sandy background." Well, it's a start. Back to Neff, who's done his homework — stretching back to a British Medical Journal article from 1899, which almost half a century later inspired a Sydney doctor (no less) to develop the "rogue shark theory." The article addressed a series of unexplained shark bites and caught the attention of the shark researcher and Sydney surgeon Sir Victor Coppleson, a good man and public servant. With his heart in the right place, he developed the ''rogue shark'' theory in the 1940s and '50s; however, it went awry as he tried to explain why sharks bite. His theory has been discredited by science over the years, but found its way into the film "Jaws" and into our collective consciousness. Enough said. Neff concludes by saying that sharks are not "looking for people," but rather "are following prey, such as whales, dolphins, bait fish and seals." He says that rather than send out the shark hunters, governments can educate the public to "reduce personal risk based on their behavior." He even specifies four shark-attack-risk-reducing factors: environmental conditions (stay out of the water after or before storms, at dawn or dusk); ecological conditions (avoid areas with bait fish, dolphins, seals and whales); personal behavior (be conscious of how far out you are and how long you've been in the water, and avoid shiny jewelry); and lastly shark behavior (sharks are curious and defensive; we are in the way, not on the menu). Finally, in the interests of balance, here's a politician's reaction to the fatal attack on bodyboarder Kyle James Burden, 21, at Bunker Bay in September, cited in Perth Now: Shire of Busselton president Ian Stubbs said he wants the shark responsible killed. "A lot of people say the water is the shark's territory," Mr Stubbs said. "But I think if they can find the shark (responsible) they should get rid of it. It's a personal opinion, not a shire opinion but I personally think they should (kill it). If they have attacked a human in one of those areas they may want to do it again and I think we should be stopping that." Perhaps, given the statistics, Stubbs might be better advised to keep his personal opinions to himself and urge rate payers to read Neff's four risk-reducing practices ... and then have a nice swim.
UnNews:Himalayan glaciers melting deadline 'a stupid mistake'
Alarmist warning that glaciers could melt by 2035 is wildly inaccurate.
ONTARIO, Canada -- The UN panel on climate change warning that Himalayan glaciers could melt to a fifth of current levels by 2035 is wildly inaccurate, an academic says. J Graham Cogley, a professor at Ontario Trent University, says the UN authors got the date from an earlier report wrong by more than 5000 years. He is astonished they "misread 7035 as 2035". The authors had no comment, and refused to have voluntary eye examinations or undergo a polygraph. Leading glaciologists say the report has caused mass confusion and "a catalogue of errors in Himalayan glaciology". The Himalayas hold the planet's largest body of ice outside the polar caps - an estimated 1,020,000 cubic kilometers of water. They feed many of the world's great rivers - the Tigris, the Nile, the Mississippi, the Mekong, the Amazon, the Indus, the Ganges, and the Brahmaputra, on which billions of people depend. In its 2007 report, the Nobel Prize-winning Inter-galactic Panel on Climate Change (IPCC) said: "Glaciers in the Himalayas are receding faster than in any other part of the world and, if the present rate continues, the likelihood of them disappearing by the year 2035 and perhaps sooner is very high if the Earth keeps warming at the current rate. "Its total area will likely shrink from the present 1,020,000 to 34.9 square kilometers by the year 2035," the report said. It suggested three billion people who depend on glacier melt for water supplies in Asia, South America, North America, and Egypt could be affected. But Professor Cogley has found a 1996 document by a leading hydrologist, VM Kotlyakov, that mentions 7035 as the year by which there will be massive and precipitate melting of glaciers. "The extrapolar glaciations of the Earth will be decaying at rapid, catastrophic rates - its total area will shrink from 1,020,000 to 0 square kilometers by the year 7035," Mr. Kotlyakov's report said. Mr. Cogley says it is astonishing that none of the 10 authors of the 2007 IPCC report could spot the error and "misread 7035 as 2035". "I do suggest that the glaciological community might consider advising the IPCC about ways to avoid such egregious errors as the 2035 versus 7035 confusion in the future," says Mr. Cogley. "Such as having their eyes examined."
Projected Himalayan glacier subsidence based on viewing data while wearing reading glasses
When asked how this "error" could have happened, RK Pachauri, the Indian scientist who heads the IPCC, said: "I don't have anything to say, it was a simple mistake, what is the matter?" The IPCC relied on three documents to arrive at 2035 as the "outer year" for shrinkage of glaciers. They are: a 2005 World Wide Fund for Nature report on Elephant Mating Habits; a 1996 Unesco document on AIDS; and a 1999 news report in New Christian Scientist, none of which dealt with glaciers. Incidentally, none of these documents have been reviewed by peer professionals, which is what the IPCC is mandated to be doing. Murari Lal, a climate expert who was one of the leading authors of the 2007 IPCC report, admitted it had its facts wrong about melting Himalayan glaciers. He also admitted the report relied on non-peer reviewed - or 'unpublished' - documents when assessing the status of the glaciers.
Recently India's Environment Minister Jairam Ramesh released a study on Himalayan glaciers that suggested that they are not melting nearly as much due to global warming as is widely feared. He accused the IPCC of being "alarmist". India says the rate of retreat in many glaciers has decreased in recent years. When asked how he was certain, Ramesh replied, “Because I am telling you!” Michael Zemp from the World Glacier Monitoring Service in Zürich also said the IPCC mistake on Himalayan glaciers had caused "some major confusion resulting in the media, and Al Gore, making a fortune. Under strict consideration of the IPCC rules, it should actually not have been published as it is not based on a sound scientific reference. "From a present state of knowledge," he continued, "it is not plausible that Himalayan glaciers are disappearing completely within the next few millennia. I do not know of any scientific study that does support a complete vanishing of glaciers in the Himalayas within this century." However, he would not rule out the possibility of the Sun exploding at any time. Staff, "Himalayan glaciers melting deadline 'a mistake'", BBC News, August 31, 2010
As It Is
Sprint Signs Telecommunications Deal in Cuba
November 06, 2015
At the annual Havana International Fair, Cuba's first trade fair since ties were reestablished with the United States, the U.S. pavilion includes an exhibit for Pepsi Cola, Nov. 2, 2015. The island nation is seeking more foreign investment.
Sprint has become the first United States-based telecommunications company to provide direct roaming mobile phone service to foreigners in Cuba. The deal was announced Monday in Havana, the Cuban capital. Sprint signed the deal with Cuba's state-operated telecommunications agency Etecsa. Roaming service is the ability of mobile phone users to operate their devices when away from home or outside their local calling area. Sprint wants to reach rising numbers of foreign tourists in Cuba. The company says it expects the number of visitors there to reach five million a year within the next 10 years. The deal comes as the United States and Cuba continue to explore possible business deals. The two countries reestablished diplomatic relations in July for the first time in more than 54 years.
U.S. telecom Sprint Corp Chief Executive Marcelo Claure talks to the media in Havana, Cuba, Nov. 2, 2015.
Under the new agreement, Sprint's customers will now be able to generate and receive calls and text messages in Cuba from their personal mobile devices. Marcelo Claure is Sprint's president and Chief Executive Officer. He considers the deal an important step for both countries. "This is a big deal because it shows that the U.S. and Cuba working together in an area of progress which is communications, and we expect this to be the beginning of a long-term relationship between Sprint and between Etecsa." The agreement was announced at the Havana International Fair 2015. Reports say 900 businesses sent representatives to the fair. One third of them were Cuban. About 70 countries, including the United States, took part in the event. Sprint is not the only U.S. company offering telecommunications services in Cuba. One of Sprint's main rivals is Verizon. That company announced in September that it would offer a roaming service to its clients visiting Cuba. However, that service is provided indirectly, through a third-party contracted by Etecsa. Some business watchers are concerned that U.S. companies have fallen behind those of Cuba's strategic partners. Those countries include Cuba's traditional allies, such as China, Russia and, more recently, Venezuela. Spain also has developed successful commercial relations with Cuba. The country had the largest foreign presence this year at the Havana International Fair. Of the 600 foreign companies taking part, 150 were from Spain. Ramon Taylor reported on this story from Washington. Mario Ritter adapted it for Learning English. George Grow was the editor. What do you think about the U.S. and Cuba developing closer ties? Let us know in the Comment section or on our Facebook page.
roaming – adj. the use of a cell phone outside the usual area
customer(s) – n. people or organizations that buy goods or services
generate – v. to make or produce
rival – n. a person or thing that tries to defeat or be more successful than another
clients – n. a person who pays a professional person or organization for services
strategic – adj. of or related to a general plan to reach a goal in business, politics or war
commercial – adj. related to the buying or selling of goods and services
For Google, Social Networking Is Just One Chapter of the Book
By Stan Schroeder, November 21, 2010
With its valuations and the number of users going through the roof, Facebook is perceived by many as the company that might dethrone Google as the next Internet superpower. Google, however, doesn't see Facebook as a direct threat, since social networking is just one part of Google's overall strategy. "The digital world is exploding and it has so many chapters — it has cloud computing, it has mobile, it does have social, it has searches, it has so many elements. (...) Yes, absolutely it will be part of our strategy, yes it will be embedded in many of our products. But at the same time remember it's one chapter of an entire book," said Google's chief financial officer Patrick Pichette to Australian public television on Sunday. While that may be true, with Facebook's recent foray into e-mail, it's getting obvious that Facebook is competing with Google on many fronts, not merely as a social network. The real question here is whether Facebook's core product — social networking — is more powerful a foundation than Google's core product — search. Pichette also weighed in on the state of mobile and Google's great success with Android. "Now that everybody has a smartphone everybody searches, so these few hundred engineers (who developed Android) have accelerated (a market that) would have taken 10 years to develop into a few years," he said. Pichette also repeated Eric Schmidt's claim that 200,000 new Android handsets are being activated every day. Finally, Pichette gave his thoughts on what drives Google as a company, delivering another quote that goes hand in hand with Google's long-standing "don't be evil" mantra. "The first driving principle of Google is in fact not money — the first driving principle of Google is understanding that the Internet is changing the world," he said. [Image courtesy of Google]
Mark Zuckerberg: Is Connectivity a Human Right?
For almost ten years, Facebook has been on a mission to make the world more open and connected. Today we connect more than 1.15 billion people each month, but as we started thinking about connecting the next 5 billion, we realized something important: the vast majority of people in the world don't have access to the internet. Today, only 2.7 billion people are online — a little more than one third of the world. That is growing by less than 9% each year, which is slow considering how early we are in the internet's development. Even though projections show most people will get smartphones in the next decade, most people still won't have data access because the cost of data remains much higher than the price of a smartphone. Below, I'll share a rough proposal for how we can connect the next 5 billion people, and a rough plan to work together as an industry to get there. We'll discuss how we can make internet access more affordable by making it more efficient to deliver data, how we can use less data by improving the efficiency of the apps we build and how we can help businesses drive internet access by developing a new model to get people online. I call this a "rough plan" because, like many long term technology projects, we expect the details to evolve. It may be possible to achieve more than we lay out here, but it may also be more challenging than we predict. The specific technical work will evolve as people contribute better ideas, and we welcome all feedback on how to improve this. Connecting the world is one of the greatest challenges of our generation. This is just one small step toward achieving that goal. I'm excited to work together to make this a reality. For the full version, click here.
Vimeo: 15 Things You Didn't Know (Part 2)
By PPcorn
We already brought you part one of our list of 15 things you probably did not know about Vimeo, and now we're back with part two! Check out eight more fascinating facts about the popular video sharing website that you definitely (probably) did not know below. You might be surprised by what you find out!
Number Eight: It Can't Be Used in Turkey. If you want to watch some videos on the site and you just so happen to be in Turkey, then you're out of luck. As of 2014, all videos hosted on the site were blocked in the country.
Number Seven: It's Got Higher Quality Content Than YouTube. It's true! Even though YouTube is used by more people than Vimeo, Vimeo has more of a tight-knit community of people who will give back honest, constructive criticism and avoid the racist, unrelated comments seen so often on YouTube.
Number Six: It's Pretty Easy to Upload Copyrighted Material. While YouTube detects copyrighted material that people upload almost instantly, Vimeo doesn't, which makes it much easier for its users to steal material. For this reason, users need to be more careful about how they share things.
Number Five: It Rewards Its Best Users. In 2010, the company hosted its very first festival to honor the best videos uploaded onto the site. The festival had nine judges, including M.I.A. and David Lynch, and the first-prize winner took home $25,000.
Number Four: It Was Around Before YouTube Was Ever Invented. Despite the fact that YouTube is significantly more popular, Vimeo was actually created before YouTube! Most people don't expect this because of YouTube's massive following.
Number Three: Its Founders Didn't Focus on the Company Right Away. The company's two founders were also involved with CollegeHumor and BustedTees at the time of Vimeo's creation. Because of this, they couldn't focus their energy on the company, which may explain why YouTube gained so much more popularity than Vimeo.
Number Two: One of Its Founders Made No Money With the Company. Josh Abramson is one of Vimeo's founding partners, and unlike YouTube's founders, who made over $1 billion when they sold it to Google, he made nothing from the venture. He himself said that he made "exactly zero dollars."
Number One: It's Here to Stay. A lot of what we've mentioned might make you think that the company isn't a success, but that is definitely not the case. Vimeo offers several professional packages that YouTube can't compete with, and as of 2015, there were 35 million registered members.
We hope you enjoyed our list of 15 things you didn't know about Vimeo!
Airbnb Gets More Social, Aspirational and Beautiful with Wish Lists
By Harry McCracken (@harrymccracken), June 27, 2012
When Airbnb launched in 2008, it was about one thing, and one thing only: Finding a place to stay, such as someone's spare bedroom, that was cheaper than a hotel. That wasn't very glamorous, and neither was the site. And over the years, its look and feel haven't changed much: CEO and co-founder Brian Chesky told me that its utilitarian interface has been similar to that of travel research sites such as Kayak.
Four years and ten million nights of stays later, Airbnb is not what it was. The budget-priced hotel alternatives are still there. But so are castles. And homes designed by Frank Lloyd Wright. And private islands. And a "mushroom house" in Aptos, California that's the single most popular property in Airbnb history. And a whole lot of other places to stay that are destinations in themselves, often as intriguing as the cities they're in. Today, Airbnb is launching a major redesign (and a revamp of its iPhone app) designed to catch up with the diversity of the 200,000-plus listings it currently hosts. It lets Airbnb users–and Airbnb–collect and share the site's most interesting listings, so it becomes a place for open-ended exploration as well as straightforward searching. Chesky, who gave me a sneak peek earlier this week, says that with the new site, users will "start with the unique property instead of the destination." Wish Lists grew out of two existing features–one which let users star any listing for future reference, and another which provided them with collections of noteworthy properties selected by Airbnb. The star is now a heart symbol; clicking it lets you save the venue, which is added to a public list that other people can see (unless you've chosen to make it private). Airbnb gives you two standard Wish Lists–Dream Homes and Vacation Places–and you can create your own. You can still use the feature to queue up places you're thinking of staying, but as the name "Wish List" indicates, Airbnb hopes that you'll also use it to share places you'd love to stay, whether or not they're actually in your budget. The Borgia Castle in Tuscany, for instance, accommodates 14 guests and rents out for $1416 a night. Airbnb is also creating themed Wish Lists of its own, such as Airstreams and Modernist Marvels, which it's dressed up with custom typography so they look like magazine features. It's also solicited lists from celebs such as actor/Airbnb investor Ashton Kutcher and designer Yves Behar, and will spotlight lists that members compile. Even the new, much more engaging Airbnb homepage is a Wish List of a sort–a never-ending one, which lets you scroll through lush oversized photos of exceptional listings. (Thanks to Airbnb's offer of free professional photo shoots for properties people list on the site, it has plenty of slick high-resolution imagery.) If I sound impressed, it's not because I was already smitten with Airbnb. Actually, I've never used the service; I've always been just as happy to stay in a nice, predictable hotel, and nervous about lodging with some stranger. But now that Airbnb is emphasizing that it's got plenty of options that are cooler than a hotel rather than simply cheaper, I'm more tempted to give it a whirl. Oh, and one footnote from my visit to Airbnb's San Francisco office. Lots of tech startups have conference rooms with cleverly-themed names.
Airbnb, however, has turned its meeting spaces into homey replicas of actual properties listed on the site. There’s one based on a New York home, for instance, and another inspired by one in Berlin. They’re nice. But the one that’s startling is the one that’s based on the most popular single Airbnb listing of them all. You know–that one for a mushroom-shaped cabin in Aptos, California.
As White Nose Syndrome Spreads, Worries Persist About Potential Impact on Bats, Ag Industry
By Emil Moffatt, May 29, 2014
The entrance to Mammoth Cave (Credit: Emil Moffatt)
White Nose Syndrome has spread to more areas at Mammoth Cave National Park and may end up costing farmers billions of dollars.
After a 10 minute climb up a gentle incline just off the main trail at Mammoth Cave National Park, Rick Toomey stands on a wooden platform overlooking Dixon Cave. "It's one of our most important hibernation sites," said Toomey, the park's research coordinator. He says during the winter thousands of bats, including several different species, hibernate here. But those numbers might be on the verge of a drastic change. "This is a site that could be vastly altered in five years. In five years we might go in there and find five or ten bats total," said Toomey. "It's a very realistic possibility based on what's been seen elsewhere. And that would be devastating to our ecosystem up here." The problem: White Nose Syndrome. It started in the northeast in 2006. It was first noticed at Mammoth Cave in 2013 and has since spread to the caves that welcomed nearly half a million visitors last year. Toomey says the fungus that gives White Nose Syndrome its name is just one of the symptoms of the devastating disease. "This fungus seems to cause them to wake up a lot more often and burn through fat reserves, so they end up looking emaciated and dehydrated," said Toomey. "It will cause them, instead of sleeping in the middle of February when there's snow out, they'll be hungry, flying around at the cave entrances, moving to colder areas, heading out into the snow looking for bugs to eat and looking for things to drink." The disease is fatal in some bat species. It's not harmful to human health, but scientists continue to warn the biggest consequence to humans could be an economic one. Researchers at Mammoth Cave haven't seen a mass die-off, but they say they may get a better picture of the effects of White Nose Syndrome a few months from now, after the next hibernation season. The hopes that the warmer climates in the South and Midwest would reduce the number of White Nose deaths seem to have been dashed by early evidence. "We were hoping Kentucky was going to be an area, where if the winters were short enough that the bats could make it through without high mortality," said Toomey. "But we're starting to see mortality in Kentucky this year in a number of places and in a number of species. Unfortunately, we're less hopeful that the shorter and warmer winters would make a difference."
Mammoth Cave research coordinator Rick Toomey (Credit: Vickie Carson/Mammoth Cave National Park)
Economic Impact
In the Northeast, meantime, the mortality rates are staggering. In some populations, like the Little Brown bat or the Northern Long Ear bat, there's a 98 to 99 percent drop-off in populations. In all, it's estimated that over five million bats have died from White Nose. And that's where the economic impact comes in. As Toomey points out, bats play an important role in the ecosystem as major consumers of nighttime pests. A 2011 study published in the journal Science estimated the economic impact of bats for the agriculture industry at between $3 billion and $50 billion annually. As White Nose Syndrome slowly inches south and west, it may begin to threaten areas whose economies depend more heavily on agriculture.
But measuring the effect of the disease on the insect population remains a challenge, says Jonathan Reichard with the U.S. Fish & Wildlife Service. "We've got annual changes in weather, and spraying and various other environmental impacts that are going to be influencing the number of pests that are around," said Reichard. But Reichard says researchers will continue trying to find ways to parse those numbers. "We can design some studies that will get at it, through some elaborative, empirical design," said Reichard. "But farmers most likely would be the first ones to see it through increase pestilence in their crops or decreased yields."
Stopping The Spread
Scientists believe the most common way the disease is spread is from bat to bat. Some species of bats have prolific migration patterns. Toomey says some Indiana bats, for example, spend the winter hibernating in Kentucky before returning to a tree roost in Central Michigan. "The bats in the eastern part of North America are largely different species than the bats in western North America. There are a few species that cut across, but we don't really have good information about how much breeding and gene flow goes across east-west. There's a lot of gene-flow north-south in all of these bats." Toomey says he takes part in a conference call every two weeks with local, state and federal researchers from around the country. He says when the calls started a few years ago, there were about three dozen people on the call. Now, that number has doubled and sometimes tripled, according to the U.S. Fish & Wildlife Service.
The mat guests walk over after leaving Mammoth Cave is soaked in a Woolite solution to prevent the spread of fungus associated with WNS (Credit: Emil Moffatt)
In addition to bat migration spreading the syndrome, it can also be passed incidentally by cave visitors. That's why at Mammoth Cave, when visitors emerge from a cave tour, they walk across a black mat – fiberglass cloth and foam soaked in a Woolite solution, which Toomey says has shown effectiveness in killing the spores carrying the fungus. "What we're doing here at Mammoth Cave, with people walking over mats so they can't take it some place, is we are actually trying to defend Colorado, New Mexico, Oregon and California because we don't want to have people move it into the western bats," said Toomey. "If people move it into the western bats we're also fairly certain the western bats will be very efficient at moving it amongst themselves and starting a new epidemic there." As Mammoth Cave's public information officer Vickie Carson notes, there are hopes a cure can be found soon. "It's not in Carlsbad Caverns, it's not in Oregon's caves, it's not in Timpanogos, it's not in Wind Cave or Jewel Cave – those are National Park Service caves out west," said Carson. "There's probably no doubt that it will get there at some time. We're hoping that if we can slow the spread of it, maybe science can catch up in some way and do something about it."
Related Content
Mammoth Cave National Park Hosts Citizenship Ceremony--Inside Cave
By Lisa Autry, Sep 13, 2013
The United States put on an adoption ceremony today at Mammoth Cave National Park. In a courtroom made by nature, the U.S. adopted 39 new citizens. In the depths of a cave, a federal judge presided over the ceremony featuring natives of 22 countries around the world.
Park Ranger David Alexander sang "The Star Spangled Banner," and Park Superintendent Sarah Craighead gave the country's newest citizens an official welcome. "We are so pleased and honored to have you spend your first few minutes as citizens in a national park," remarked Craighead. "There's not a more perfect place to have that occur."
Will Kentucky Reopen National Parks Using State Funds?
By WKU Public Radio News & Associated Press, Oct 10, 2013
Update 6:35 p.m.: Governor Beshear's Communications Director Kerri Richardson says Beshear needs more information regarding future federal reimbursement and the level at which the facilities could be reopened before deciding on reopening federal parks like Mammoth Cave and Land Between the Lakes.
Original Post: There's no word yet from Governor Steve Beshear regarding whether he will use state funds to reopen national parks that have been closed due to the government shutdown. The Obama administration says it will allow states to use their own money to reopen some national parks. The Governors of Arizona, Colorado, South Dakota, and Utah have asked for authority to reopen national parks within their borders because of the economic impacts caused by the park closures. The closing of parks in Kentucky, such as Mammoth Cave National Park, has sent workers home and is a drag on local economies that benefit from tourists who visit the park and other nearby attractions. Interior Secretary Sally Jewell said in a letter Thursday to the four governors that the government will consider offers to pay for park operations, but will not surrender control of national parks to the states.
Sarah Tuttle
Ok, after stating the obvious (one month? really? We should do better than one month.) let's spend today's post (re)committing to using this time to not just celebrate Black history but to interject it throughout our work. What does that mean to you? This is a particularly timely conversation since I'm assuming most of you in the US have gone to see "Hidden Figures" - a film focused on the missing African American women (Katherine Johnson, Dorothy Vaughan, and Mary Jackson) in particular who were a crucial part of getting the US space program into orbit. Of course, it is a fictionalized account (including the not-real white savior boss man) compacted to make for a (more) compelling 2 hour narrative. But the premise is spot on. Whose stories have we neglected to tell? What effect has that had on the work we do as astrophysicists? I'm not going to give you a comprehensive list here, but want you to at least get your feet wet. What are you doing that changes our relatively monochromatic field? Some of us spend time in a classroom where we can change the narrative by including voices and stories from past & present to make more room for contributions from Black scientists to our field. Some of us sit on hiring committees. Some of us mentor graduate students. Let's make sure that we shine light on the work being done wherever we can do it. For many of us, seeing "Hidden Figures" brought both joy at seeing the celebration of those doing incredible work and sadness and frustration at the limited progress made in the meantime. What about now? This is just a short (and by no means complete) list of places to start exploring the rich history of Black space scientists.
Benjamin Banneker - Born in 1731, he was a self-trained mathematician & astronomer who worked as a surveyor, clockmaker, and creator of almanacs.
Robert Henry Lawrence, Jr. - The first black astronaut, he joined the corps in 1967. He died in a training plane crash before getting to space.
Dr. Willie Hobbs Moore - The first Black woman to get a PhD in Physics (University of Michigan, 1972).
Dr. Mae Jemison - The first African American woman in space, Dr. Jemison has received degrees in Chemical Engineering, medicine (She *is* that kind of doctor), and served in the Peace Corps. She has also leveraged her platform as an astronaut to draw attention to racism & civil rights.
Dr. Beth Brown - An x-ray astronomer (studying elliptical galaxies), Dr. Brown also worked in her role at Goddard Space Flight Center to bring astrophysics to students across the country.
Professor Mercedes Richards - A professor at Penn State, she studied stars including stellar evolution and binary systems until she passed away last year.
Stephanie Wilson - An astronaut who has flown into space three times, her graduate work in engineering focused on flexible structures in space.
Dr. Neil deGrasse Tyson - One of the current faces of astrophysics, teaching us about the Cosmos while running the Hayden Planetarium at the American Museum of Natural History in New York City.
Get on over to Vanguard STEM. Founded by Dr. Jedidah Isler (who studies blazar jets at Vanderbilt University, where she is an NSF fellow), this monthly web series highlights women of color in STEM, and provides a community for them as well. Dr. Chanda Prescod-Weinstein studies axionic matter and ways it can address holes in our understanding of the formation and evolution of the universe. She is also a passionate advocate for minoritized voices, especially Black women in STEM.
The National Society for Black Physicists (well worth a visit and your support) is highlighting a different Black physicist every day of this month. Go check it out! And lastly... go get familiar with this collection of African American Women In Physics. Still a small group of women, but growing every day. Become familiar (if you aren't already) with the work they are doing in our field - and commit yourself to making this list grow exponentially by supporting our Black colleagues. This article provides a quick roundup of discoveries frequently attributed only to western cultures, including astronomical work such as time keeping, navigation, and transoceanic travel - even observations of our galaxy - that was discovered, developed, and deployed on the African continent. This book review from 2014 highlights some of the deeper issues around colonialism and the way knowledge is created, valued, eliminated, or controlled depending on its source. These issues are at the heart of our failures to create a robust scientific culture supporting our minoritized colleagues, including Black astronomers.
Apple Expected To Announce A Smaller Version Of Its iPad
By Eyder Peralta, Oct 23, 2012
Apple Senior Vice President of Worldwide product marketing Phil Schiller announces the new iPad Mini during an Apple special event at the historic California Theater on Tuesday. (Credit: Kevork Djansezian)
Originally published on October 23, 2012 2:35 pm
Update at 1:52 p.m. ET. Introducing iPad Mini: Philip W. Schiller, the senior vice president of worldwide marketing at Apple, announced a new, smaller and cheaper version of its popular tablet, just minutes ago in San Jose, Calif. "So, what can you do with an iPad mini that you can't do with an iPad?" Schiller asked. "You can hold it in one hand." The iPad mini is as thin as a pencil, weighs 0.68 pounds and has a 7.9 inch screen, Schiller said. The iPad has a 9.7 inch screen. Before introducing the new product, Chief Executive Tim Cook said that Apple had already sold 100 million iPads. Taking a swipe at competitors, he said those numbers have garnered attention. You can tell, he said, by the number of tablets that keep shipping every day. The Apple iPad Mini will compete directly with Amazon's Kindle and Google's Nexus 7. In fact, in rare Apple form, Schiller showed a side-by-side comparison of the iPad mini and a tablet running Google's Android software.
Update at 2:08 p.m. ET. The Price: As always, Apple leaves the damage for last: The 16 GB iPad mini will start at $329. As a comparison, the iPad starts at $499. For the record, Apple also announced upgrades to the regular iPad, as well as their line of iMacs. They also introduced a 13 inch MacBook Pro with a Retina display. The Verge has every single detail.
Our Original Post Continues: Apple is set for a big product announcement this morning in San Jose, Calif. While nothing is for sure, the rumbling across the press is that the technology giant will announce a smaller, cheaper version of its very popular tablet. The "iPad Mini," reports The Washington Post, "is rumored to have a 7.85 inch screen and run on Apple's A5 chip, with a very thin and light design." This is a big deal for the company, The Wall Street Journal reports. It says: "The offering stands to be the first new class of hardware Apple has introduced since the iPad went on sale in 2010. People familiar with the matter say the smaller tablet, whose name is unknown, will have a screen around 7.85 inches and look similar to the existing iPad. Apple is also expected to announce updates to other product lines, including computers, at an event in San Jose, Calif. "Such a device would compete squarely with devices like Amazon.com Inc.'s Kindle Fire and Google Inc.'s Nexus 7, already on the market. These and other roughly seven-inch devices from Samsung Electronics Co. and others will account for about a fifth of the tablet market this year, according to Piper Jaffray." Reuters reports that if this rumor comes to fruition, it will be the first new product to be added under the leadership of Tim Cook, who took over from Apple co-founder Steve Jobs. "Apple sensed early that they had a real winner with the iPad and that has proven to be correct," Lars Albright, co-founder of mobile advertising startup SessionM and a former Apple ad executive, told Reuters. "They have a large market share, and to protect that market share they have got to be innovative." In a rare move, Apple will live stream the announcement on its website starting at 1 p.m. ET.
It's been our experience that the tech website The Verge is among the best at live blogging these events. We will update this post once the announcement is made.
Copyright 2012 National Public Radio. To see more, visit http://www.npr.org/.
Super Mario Nation
THE VIDEO GAME turns twenty-five this year, and it has packed a whole lot of history into a mere quarter-century
Steven L. Kent, September 1997 | Volume 48, Issue 5
Ripples of Space Invaders' success also reached Atari's consumer division. Kassar purchased home rights to the game and translated it into a major bestseller for the VCS. In 1979 an Atari coinop engineer created a game in which players cleared asteroid fields with a small free-floating spaceship; Atari would sell 70,000 copies of Asteroids in the United States. Meanwhile, Midway was busy placing 100,000 units of its new Pac-Man game in North America alone. Other companies followed suit. Atari released Missile Command, Tempest, BattleZone, and Centipede. Williams Electronics, a leading pinball manufacturer, had its biggest hit with Defender. Taito of America, the new U.S. arm of Taito, released Qix, Front Line, and Jungle Hunt. Stern Electronics released Scramble. Nintendo released Donkey Kong, Donkey Kong Junior, and Popeye. The most successful game in U.S. history was an updated version of Pac-Man called Ms. Pac-Man—with more than 115,000 sold. Video-game arcades became more plentiful than convenience stores. "Pac-Man and Space Invaders were going into virtually every location in the country with the exception of funeral parlors," says Eddie Adlum of RePlay. "And even a few funeral parlors had video games in the basements. Absolutely true. I believe churches and synagogues were about the only types of locations to escape video games." Suddenly video games had become a major force in popular culture. In 1981 Americans spent twenty billion quarters playing 75,000 man-years on them. The games outgrossed movies and the recording industry. A hit song was written about Pac-Man, and the characters that inhabited the electronic landscapes of Pac-Man, Donkey Kong, and other games appeared on their own television shows in Saturday-morning cartoons. In his 1983 State of the Union address, President Ronald Reagan defended aid to the Nicaraguan contras by comparing it with the money spent on video games. "The total amount requested for aid to all of Central America in 1984 is about $600 million; that is less than one-tenth of what Americans will spend this year on coin-operated video games." For President Reagan to have been correct, every American man, woman, and child would have had to spend almost thirty dollars a year in a video-game arcade. But he missed more than the numbers; the feverish trend itself was winding down at the time he spoke. By June 1982 what the industry still remembers as the golden age had already dimmed. Business softened alarmingly, and by year's end arcades had begun closing. This downward trend has continued, with only a few positive spikes, for nearly fifteen years. The home-console market took a brutal beating the next year. VCS sales had been strong for four years despite new competition. In 1979 Mattel, one of the world's leading toy manufacturers, had entered the market with the Intellivision, a system that offered better graphics and more complex games than the VCS. Mattel sold an impressive 200,000 units in its first full year but barely dented Atari's market. In 1982 Coleco unveiled the ColecoVision, a sophisticated home console that ran excellent versions of top arcade games. All three companies made enormous profits. Atari had the largest profits, but they were not enough.
In 1982 Atari released two VCS cartridges that cost the company dearly. The first was Pac-Man, the long-awaited but poorly programmed home version of the arcade smash. Atari made twelve million copies of the game, many of which came back from disgruntled customers. The second cartridge was based on the phenomenally successful movie E.T. According to several sources, Ross forced the game on Kassar after promising the film's director, Steven Spielberg, a whopping twenty-five-million-dollar royalty for the exclusive video-game rights to the movie. The game was dull and hard to play. In the end Atari created a landfill in a New Mexico desert, dumped in it millions of E.T., Pac-Man, and other cartridges, crushed them with a steamroller, and buried the fragments under cement. Atari's profits dropped for the first time in eight years. When, on December 7, 1982, Atari executives revealed that the company had not reached its projections, Warner Communications stock tumbled from fifty-one dollars per share to thirty-five, and Ray Kassar was fired. Over the next two years Mattel pulled out of the video-game market, Coleco imploded after investing all its resources in a highly flawed home computer, and Warner Communications sold Atari Home Computers. Under its new ownership Atari managed to limp out of the wreck of the home-console market and even showed a $450 million profit in 1988. However, it never reemerged as a force in the video-game industry, and last year the company was purchased by a disk-drive manufacturer. Now it was Japan's turn. In 1985 Nintendo announced that it would restart the American video-game market by releasing a game console called the Nintendo Entertainment System (NES). Though the system was very popular in Japan, American software developers, many of whom were nearly bankrupted by the collapse of Atari, scoffed at the idea. Retailers refused to carry it.
Hearthstone: Heroes of Warcraft confirmed for Android launch
Apps & Games News, by Chris Smith, November 10, 2013
Blizzard's Hearthstone: Heroes of Warcraft has been confirmed to launch on Android as well, in addition to other platforms, but a release date is not yet available. Blizzard chief creative officer Rob Pardo made the announcement at BlizzCon earlier this week, revealing that both the iPhone and Android versions of the free-to-play Warcraft digital collectible card game will be available to mobile users in the second half of 2014. The game will be available to iPad owners at some point next month, and it's already playable in closed beta on PC and Mac. The game promises to offer a lot of Warcraft-based action, although it's a card game, so don't expect any World of Warcraft-like play on your mobile devices any time soon. However, the game will offer access to lots of hero classes, various cards to improve their skills, and leveling up for better cards. The video above should better help you understand how Hearthstone is played, and why it'll be a great game for mobile devices.
Brecksville's Movable keeps moving forward
It sounds like there's plenty of activity at Movable, the Brecksville company that produces an increasingly popular worn activity monitor called the MOVband. Forbes.com catches up with the firm and its founder, serial entrepreneur Blake Squires, who released the MOVband about 18 months ago. The idea behind the device is simple: It can be connected to a computer, and the data is collected on a dashboard where a user views his or her history of activity. "We're trying to inspire that extra walk at the lunch hour," Mr. Squires tells Forbes.com. Northwestern Mutual, Hyland Software and DDR Inc., among others, "have embraced the company's corporate product" since it launched in September. That program also is used in about 150 schools nationwide, including a Houston school system with 4,800 students using the MOVbands. The price for the program is $29.99 per employee or student, and an additional $4.99 per device for the management of a particular group. At those rates, Movable "feels it can disrupt more expensive offerings that seek to provide wellness programs for enterprises, which can charge about $100 per employee," Forbes.com says. Mr. Squires says companies that offer the program to employees can offer rewards for those that hit certain milestones. Envisioned, but not yet operational, "is the possibility of having health insurance premiums lowered through use of the Movable program," according to the story. Forbes.com says that to date, Mr. Squires and other angels "have funded the company to the tune of about $2 million. A series A is planned for later this year and profitability is expected sometime in 2014."
The digital revolution progresses
OverDrive Inc. in Garfield Heights is gaining some big-name authors with the addition of the Hachette Book Group e-book catalog to its platform. The digital media company said Hachette will make its entire digital catalog of more than 5,000 eBooks available to libraries and schools via OverDrive starting next Wednesday, May 8. That's a big deal because it will give e-book readers at OverDrive-powered libraries and schools in the United States and Canada access to authors including Kate Atkinson, David Baldacci, Sandra Brown, James Patterson and David Sedaris. OverDrive supplies digital services for more than 22,000 libraries and schools worldwide, with support for all major e-reading devices, including iPad, Nook and Kindle. The addition of the entire Hachette Book Group e-book catalog to OverDrive — which already carries its audiobooks on the same lending platform — enhances what the company says is the world's largest catalog for libraries and schools, comprising more than 1 million e-book, audiobook, music and video titles. Hachette says it will follow a "one-copy/one-user" lending model, and there will be no checkout or term limit for the titles on the OverDrive platform.
Call it even
The recession officially ended in June 2009, but it doesn't feel that way to a large percentage of small business owners in Ohio and nationwide, according to the 2013 U.S. Bank Small Business Annual Survey. The survey of 3,210 small business owners, including 203 in Ohio, found most are "hesitant to make a significant investment because of uncertainty toward the economy, or the potential impact of tax and health care policies," says Tom Zirbs, regional manager for Northeast and Central Ohio at U.S. Bank.
Asked to describe the state of the economy, 45% of Ohio small business owners said it's in recovery, but 45% said it's still in recession. The rest said they were "unsure." Those numbers match almost exactly the U.S. data, where 45% say the country is recovering and 43% think it's in recession. According to the survey, small business owners in Ohio and across the country remain "hesitant to make significant investments in their business." Both regionally and nationally, about two-thirds of owners responded that they were unlikely to make a capital expenditure in the next 12 months, according to the survey. However, small business owners in Ohio said they are more likely to add to staff, and less likely to make cuts, than the national average. The survey found 20% of Ohio small business owners expect to increase their work force in the next year, while just 4% expect to make cuts. Nationwide, those numbers were 16% and 5%, respectively. And small business owners are making progress in embracing new technology. U.S. Bank said more than seven in 10 small business owners in Ohio report they have "integrated mobile technology into their business strategy, whether through mobile banking, social networking, web design, payments or other uses." You also can follow me on Twitter for more news about business and Northeast Ohio.
The Moral Landscape: How Science Can Determine Human Values
Sam Harris.
The author of Letter to a Christian Nation and The End of Faith (which won the 2005 PEN Award for Nonfiction), Sam Harris here tackles the widespread notion that "science can't speak to morality, only religion can," and finds it flat wrong. Harris urges us to think about morality in terms of human and animal well-being, viewing the experiences of conscious creatures as peaks and valleys on a "moral landscape." Addressing ancient questions on good and evil, Harris demonstrates that we now know enough about the brain and its relationship to events in the world to scientifically deduce right and wrong answers.
"Sam Harris breathes intellectual fire into an ancient debate. Reading this thrilling, audacious book, you feel the ground shifting beneath your feet. Reason has never had a more passionate advocate."—Ian McEwan
"A lively, provocative, and timely new look at one of the deepest problems in the world of ideas. Harris makes a powerful case for a morality that is based on human flourishing and thoroughly enmeshed with science and rationality. It is a tremendously appealing vision, and one that no thinking person can afford to ignore."—Steven Pinker
"I was one of those who had unthinkingly bought into the hectoring myth that science can say nothing about morals. To my surprise, The Moral Landscape has changed all that for me. It should change it for philosophers too. Philosophers of mind have already discovered that they can't duck the study of neuroscience, and the best of them have raised their game as a result. Sam Harris shows that the same should be true of moral philosophers, and it will turn their world exhilaratingly upside down. As for religion, and the preposterous idea that we need God to be good, nobody wields a sharper bayonet than Sam Harris."—Richard Dawkins
Rumor: Kindle Fire already has over 250,000 preorders and counting
We don't think that the new Amazon Kindle Fire will be an iPad killer, but it might just outsell the Apple tablet. The folks over at Cult Of Android got some leaked preorder numbers from Amazon, and the numbers look amazing. After only five days the Kindle Fire has amassed 254,074 preorders, which almost matches the original iPad's first-day sales. The original iPad sold 300,000 units on its first day of availability, and took a month to reach one million unit sales. The iPad 2 sold 2.6 million units in its first month, which is by far the tablet record to beat. If we dig into the numbers a little more we see that it is possible that the Kindle Fire might come close to those numbers. The 254,074 preorder figure came after five days, which works out to an average of roughly 50,000 orders a day (the rough arithmetic is sketched below). If that rate of orders remains steady then the Kindle Fire will reach about 2.5 million preorders by its November 15 release date. It is highly unlikely that people will keep preordering at the same rate for the entire time, but it is very likely that the Kindle Fire could sell a million units on its launch date. We already think that the Fire will dominate, and having such amazing order numbers has to be a good sign. Just to be mean let's compare these numbers to other tablets recently released. The Motorola Xoom sold only 100,000 units in its first month and a half, and RIM sold an estimated 250,000 units in its first month. The Kindle Fire has a long way to go to pass the iPad, but it seems like a given that it will pass all other Android tablets in sales.
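The extrapolation above is easy to sanity-check. Here is a minimal sketch of that arithmetic in Python; the length of the preorder window (roughly 48 days between the late-September announcement and the November 15 launch) and the flat daily rate are assumptions for illustration, not figures from Amazon.

# Back-of-the-envelope check of the preorder extrapolation above.
# Assumptions (not from Amazon): preorders run ~48 days before the
# November 15 launch, at a flat daily rate.
preorders_after_5_days = 254_074
daily_rate = preorders_after_5_days / 5            # ~50,800 orders per day
days_until_launch = 48                             # assumed preorder window
projected_total = daily_rate * days_until_launch   # ~2.4 million, close to the ~2.5M above

print(f"{daily_rate:,.0f} orders/day -> ~{projected_total / 1e6:.1f}M preorders by launch")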
How Apple missed the rise of the mini tablet
About this time last year, if you had tried to convince a major company to produce a 7-inch tablet, they would have laughed in your face. At the time, these tablets were seen as "tweener" products that wouldn't and couldn't sell. Only idiots would bring them to market. You'd probably lose your shirt. Then the Kindle Fire shipped and spiked hard in the holiday season. Earlier this year, Google announced the Nexus 7, and proceeded to sell buckets of them. Just weeks ago, Amazon followed up its original hit with the Kindle Fire HD 7, which is even more advanced, better priced, and reportedly selling at an even more aggressive pace. Around a month from now, the iPad mini will ship in the same size, and suddenly we'll be up to our armpits with products that are successfully selling, in a segment that many thought was stupid just a year ago. We also thought big screen phones were stupid before the iPhone shipped successfully. And even Steve Jobs said tablets were stupid before Apple shipped the iPad. So how does an idea go from idiotic to brilliant? And how come Apple didn't lead the way on this one?
We don't like "different"
It often takes us a while to get comfortable with a new idea. For instance, one of the most unsuccessful cars in history was the Ford Edsel, which was based on massive market research, but bombed. Yet compared to cars that came out a few years later, the Edsel just appears ahead of its time. Microsoft had tablets long before Apple did, Philips was showcasing smartphones like Apple's in the 1990s but couldn't get support to bring them to market, and LG had the Prada in 2007, which Apple basically ripped off. But once we see something, we slowly get comfortable with the idea. Eventually, it isn't so different, and we have a new market. Apple isn't the first to see an opportunity; it's often just the first to sense when the market is ready for it, and willing to spend the time and money to ride the resulting wave. Apple seemed to time the iPhone and iPad perfectly in that regard, but with the 7-inch tablets, both Google and Amazon beat it to the punch.
Kindle blazed the trail
The process of making consumers comfortable with 7-inch devices started with e-readers, and Amazon led that trend. In fact, Steve Jobs thought e-readers were stupid at the time, because he believed no one read anymore. While he was right in that e-readers never became the success that smartphones or the iPad did, they sold enough to familiarize people with this smaller size. When the Kindle Fire showed up on an e-reader vector, it sold. When the Nexus 7 showed up, it expanded the vision for this form factor to a full tablet, and the market suddenly woke up to the advantages of 7-inch tablets, which are actually in many ways better than their 10-inch siblings. We should have seen this coming, because when 7-inch and 10-inch e-readers hit the market, buyers gravitated toward the smaller product the same way.
What makes the 7-inch product better
There are a number of advantages to 7-inch tablets. They cost about half as much as 10-inch tablets. They're vastly more portable and actually fit in jacket pockets and purses. They're far lighter, which makes them much more comfortable for personal entertainment. Mostly, they don't try to be laptop computers, which is the curse of their larger cousins. To me, the 10-inch iPad is now the real "tweener" product: too small to be a laptop, too big to be truly portable.
Nexus 7 vs. Kindle Fire HD

Following that logic, while both the Nexus 7 and Kindle Fire HD are good products, the Kindle Fire is more Apple-like in its focus on the user experience. That makes it, in my opinion, the better product. It may not have all of the sensors that the Nexus does, but chances are you won't use them anyway if you already own a smartphone. Amazon focused on things like a more expensive case and better screen instead. As a result, the Kindle Fire HD diverges sharply from the Nexus 7, is easier to use, and does core things better. But it doesn't do as many things, so the two products likely appeal to different audiences. Interestingly, the team behind the Kindle Fire is largely from Microsoft. They licensed Microsoft's technology, stole the OS from Google, and executed an Apple-like marketing strategy, right down to the high-profile announcement. That alone is pretty damned amazing.

Missing the boat

I think 7-inch tablets will be the big thing in the fourth quarter, which leaves Apple uncharacteristically behind. Not only did Apple miss the chance to set the bar here, it will be third to market with a very similar product, which doesn't bode well for its ability to seize market leadership in this segment. Apple no longer leads with the iPhone either: Where vendors used to follow and copy the iPhone, now they are leading it with larger screens, better antennas, and faster radios. The iPod, meanwhile, is last decade's news. That "Apple TV" thing better be a hit, or Apple could be screwed.

Guest contributor Rob Enderle is the founder and principal analyst for the Enderle Group, and one of the most frequently quoted tech pundits in the world. Opinion pieces denote the opinions of the author, and do not necessarily represent the views of Digital Trends.
Technology
2017-09/1580/en_head.json.gz/7081
What happens if you unlock your smartphone now that it's illegal?

On Saturday, unlocking new cell phones that have been locked down by wireless carriers officially became illegal. The decision from the U.S. Copyright Office and the Librarian of Congress to delete phone unlocking from the list of exemptions to the Digital Millennium Copyright Act (DMCA) sparked a wave of consumer criticism, and left people wondering: What now? We've already covered the main questions concerning the new no-unlock rule, like which phones are illegal to unlock, but there are some things that still need to be addressed. Here, we'll clear up some of the lingering key questions.

Will I go to jail if I unlock my new smartphone?

No – at least, not if you only unlock your own device. The DMCA specifies two different levels of penalties for violations: civil and criminal. Unlocking your own device for personal use would fall firmly in the civil category, which means you couldn't go to jail for doing it, but could definitely get sued by your wireless carrier.

If I get sued, what's the worst that could happen?

You would have to pay "damages" to your wireless carrier (plus any legal fees that might go along with a lawsuit). Section 1203 of the DMCA (PDF) stipulates that "a complaining party may elect to recover an award of statutory damages for each violation … in the sum of not less than $200 or more than $2,500 per act …" That means you could be sued for a minimum of $200 for each device that you've unlocked, with possible damages shooting up to $2,500 per device. That said, experts on the DMCA don't expect individuals to face many lawsuits over small-scale unlocking. And it's possible that, were you to be sued over unlocking, the court may rule that the DMCA shouldn't protect against consumers unlocking their devices.

What if I unlock my device and then sell it?

This is where things get more complicated. If, for example, you buy an iPhone 5 from AT&T for the subsidized price of $200, unlock it, then sell it for $650 (the going rate for a new, unlocked 16GB iPhone 5), AT&T could theoretically sue you for $450 in damages – but it's also possible that unlocking your device to make money would push you into "criminal" territory. Section 1204 of the DMCA (PDF) makes it a "criminal offense" to "willfully" circumvent any digital locks – this includes DRM on movies or music and the firmware that locks your smartphone – "for purposes of commercial advantage or private financial gain." Do so, and you're looking at a fine of up to $500,000, or up to five years in prison, or both, for a first-time offense. Repeat offenders see the consequences double to a maximum fine of $1 million, up to 10 years in prison, or both. So unlocking a phone you bought from AT&T (or any other wireless carrier) on the cheap for the purpose of making some cash is now decidedly a bad idea.

Would I really be put in jail for selling a single unlocked phone?

Not likely. The real purpose of targeting phone unlocking is to go after "large-scale" sellers of illegally unlocked devices.
According to Mike Altschul, the senior vice president and general counsel of CTIA, the wireless industry’s lobbying arm, the no-unlock rule “makes our streets just a little bit safer by making it harder for large scale phone trafficking operations to operate in the open and buy large quantities of phones, unlock them and resell them in foreign markets where carriers do not offer subsidized handsets.” In short, you’re probably not in any danger of going to jail, even if you do pawn your illegally unlocked handset on Craigslist. But that shady electronics store that sells unlocked phones might be, as are companies who sell phone unlock codes. Still, it’s best to just buy a legally unlocked device in the first place. Is there any way to get rid of this unlock rule? Not quickly or easily. One possible scenario is that a court will rule that cell phone owners are not in violation of the DMCA, if they unlock their device for personal use. There is also a White House petition that is gaining ground, which would require the Obama administration to review the unlock rule, if it reaches the 100,000 signature threshold. (Even if it gets enough signatures, that doesn’t mean anything will change.) And finally, we can just wait. The Librarian of Congress will begin reconsidering exemptions to the DMCA again starting in 2014, and the new exemptions will come out in 2015. Image via Matt Valentine/Shutterstock
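As a quick illustration of the statutory-damages arithmetic described above, here is a minimal Python sketch. The $200 to $2,500 per-device range and the $200 subsidized / $650 resale iPhone 5 figures come straight from the article; the function names and the idea of packaging the math this way are purely our own and are not part of the DMCA or any carrier's process.

# Illustrative only: the civil statutory-damages math quoted above.
STATUTORY_MIN = 200     # minimum statutory damages per unlocked device (USD)
STATUTORY_MAX = 2500    # maximum statutory damages per unlocked device (USD)

def civil_damages_range(devices_unlocked: int) -> tuple[int, int]:
    """Return the (low, high) statutory exposure for unlocking your own devices."""
    return devices_unlocked * STATUTORY_MIN, devices_unlocked * STATUTORY_MAX

def carrier_subsidy_gap(subsidized_price: float, unlocked_resale_price: float) -> float:
    """Rough measure of the carrier's claimed loss when a subsidized phone is resold unlocked."""
    return unlocked_resale_price - subsidized_price

if __name__ == "__main__":
    low, high = civil_damages_range(devices_unlocked=1)
    print(f"Unlocking one phone: ${low:,} to ${high:,} in statutory damages")
    # The article's iPhone 5 example: bought for $200 on contract, resold unlocked for $650.
    print(f"Claimed subsidy loss: ${carrier_subsidy_gap(200, 650):,.2f}")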
Technology
2017-09/1580/en_head.json.gz/7104
Evangelical Christian Tells Bill Moyers Not All Christians Are Climate Deniers

Sep. 12, 2014 08:44PM EST

It's a widespread belief that evangelical Christianity is incompatible with climate science, understandable since polls have shown two-thirds of evangelical Christians don't believe manmade climate change is real. But Katharine Hayhoe, who is an evangelical Christian and also an atmospheric scientist, tells journalist Bill Moyers that's not so. "The New Testament talks about how faith is the evidence of things not seen," she tells him. "By definition, science is the evidence of things that are seen, that can be observed, that are quantifiable. And so that's why I see faith and science as two sides of the same coin." Hayhoe is the director of the Climate Science Center at Texas Tech University in Lubbock, where she teaches. She's been attacked by Rush Limbaugh and gotten floods of hate mail and even threats after a right-wing blogger published her email address. But she says, "Caring about climate is entirely consistent with who we are as Christians. We have increasingly begun to confound our politics with our faith. To the point where instead of our faith dictating our attitudes on political and social issues, we are instead allowing our political party to dictate our attitude on issues that are clearly consistent with who we are." Hayhoe is also the founder and CEO of scientific research and consulting center ATMOS Research and co-author of A Climate for Change: Global Warming Facts for Faith-Based Decisions.
Technology
2017-09/1580/en_head.json.gz/7127
Quantum Mechanics and IBM's Power6

eWEEK Labs: Does the Power6's ability to observe itself answer a quantum mechanics question?

Quantum mechanics and the activities of subatomic particles play a significant role in the design and operation of processors. When those processors are scaled down in size and bumped up in speed, as IBM has done with the Power6, the actions of those forces and particles become a significant factor in chip design. One good example is something that IBM calls "leakage." Leakage is the tendency of the electrons in the conductors within a processor to leak out. Physicist Werner Heisenberg described the property of quantum-level objects such that it was impossible to know both the momentum and the position of an object. Since you know the momentum of an electron inside a processor (because you know its mass and the speed of light in that context), you can't also know where it is, exactly. Thus the leakage problem. IBM engineers have managed to minimize leakage through conductor design and the design of insulating material, but they have not eliminated it. Something they had even less success in eliminating was the interaction of subatomic particles. At this level, even something we'd never notice, such as a cosmic ray, can disrupt the operations of a processor. The reasons for this are the small scale (65 nanometers) and the low voltage at which the Power6 operates.

Click here for eWEEK Labs' evaluation of IBM's Power6-based p570.

Cosmic rays are really high-energy particles, mostly protons, that pass into the earth's atmosphere from space. When they strike something—say, a conductor or other component inside a processor—they can change its state, and that can result in a change in how an instruction is processed. This in turn causes an error. IBM engineers can't prevent these high-energy particles, and they can't really shield against them. This is made worse because the particles are everywhere. To alleviate the problem, IBM's scientists and engineers chose to institute error correction to keep these particles from interrupting operations. The Power6 is massively instrumented, and it checks itself for errors at every clock cycle. When it finds a soft error, it reruns the instruction, and that in turn fixes errors caused by high-energy particles. But one thing IBM hasn't talked about is another principle of quantum mechanics: that the act of observation changes the outcome of quantum events. Presumably, this would also include both leakage and high-energy particle interactions. With the Power6 constantly observing itself, what then? Physicist Erwin Schrödinger once proposed a thought experiment in which a test subject—he suggested a cat—would be placed in a sealed container, and its life or death determined by the random decay of a single particle. Until the cat was observed, he suggested, the cat existed in both states, both alive and dead. Then, the act of observation alone would cause one of those states to exist. Perhaps by creating a processor that also observes itself, IBM has managed to find a handle on the world of quantum mechanics in addition to its speed and its abilities with floating point calculations.

Technical Analyst Wayne Rash can be reached at wayne_rash@ziffdavis.com. Check out eWEEK.com for the latest news, views and analysis on servers, switches and networking protocols for the enterprise and small businesses.
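To make the recovery idea above a little more concrete (detect a transient fault, then rerun the instruction), here is a small Python sketch. It is only an analogy under our own assumptions: the random bit-flip stands in for a particle strike, and the compare-two-runs check stands in for the Power6's internal instrumentation, which IBM has not described at this level of detail.

import random

def execute_with_soft_error(a: int, b: int, flip_probability: float = 0.05) -> int:
    """Add two integers, occasionally corrupting the result to mimic a particle strike."""
    result = a + b
    if random.random() < flip_probability:
        result ^= 1 << random.randrange(8)   # flip one low-order bit
    return result

def checked_execute(a: int, b: int, max_retries: int = 3) -> int:
    """Rerun the 'instruction' until two independent executions agree, or give up."""
    for _ in range(max_retries):
        first = execute_with_soft_error(a, b)
        second = execute_with_soft_error(a, b)
        if first == second:      # self-check passed: accept the result
            return first
        # disagreement signals a transient (soft) error, so the work is simply redone
    raise RuntimeError("persistent disagreement; treat as a hard error")

if __name__ == "__main__":
    print(checked_execute(40, 2))   # almost always prints 42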
Wayne Rash is a Senior Analyst for eWEEK Labs and runs the magazine's Washington Bureau. Prior to joining eWEEK as a Senior Writer on wireless technology, he was a Senior Contributing Editor and previously a Senior Analyst in the InfoWorld Test Center. He was also a reviewer for Federal Computer Week and Information Security Magazine. Previously, he ran the reviews and events departments at CMP's InternetWeek. He is a retired naval officer, a former principal at American Management Systems and a long-time columnist for Byte Magazine. He is a regular contributor to Plane & Pilot Magazine and The Washington Post.
Technology
2017-09/1580/en_head.json.gz/7132
UNH Benefits From Enhanced Analytics and Optimized Application Performance – A Purview Case Study
UNH IT is always looking to improve the experience of their end-users and explore ways of making better use of…

Our Latest S and K Series Case Studies:

Rural Public School District Relies on PCS to Improve Network Capabilities and Meet Technology Mandates
Introduction Jefferson County Schools (JCS) is a K-12 public school district in eastern Tennessee. Based in a rural area, JCS…

Concurrency Improves Efficiency and Productivity with Robust Wireless Network
As Microsoft's largest partner in the Midwest, Concurrency is an award-winning IT consulting agency that provides expert advice to clients…

East Grand Rapids Uses Analytics To Transform Into A Smart School
East Grand Rapids Public Schools rank in the top 5% of schools in the state of Michigan. Fundamental to the…

Louisiana College Invests in the Future with Wi-Fi Infrastructure, BYOD, and Simplified Network Management
Since its foundation in 1906, Louisiana College has drawn from strong values in character and learning based on its tradition…

How a School District Uses Network Analytics to Save Time and Money
The Hardin County School district, located in the state of Kentucky, consists of 26 buildings and 16,000 users. When the…

Large Health Services Provider Reduces Network Configuration and Management Time with Unified Wired and Wireless Solution
Facing the challenges brought on by the evolving healthcare landscape and growing technology demands for tele-health, BYOD, and Wi-Fi, Lutheran…

France's Leading Magazine Publisher Deploys Tomorrow's Network Today
Introduction Prisma Media is France's leading multi-channel publisher with a portfolio of 26 print titles and over twenty web sites…

Leveraging Extreme Networks' SDN Platform, Enfield Revolutionizes Services and Dramatically Lowers Costs
Introduction The Town of Enfield, Connecticut is a suburb located in Hartford County, 18 miles north of Hartford, Connecticut and…

Extreme and Palo Alto Integration Provides Faster Troubleshooting, Enhanced Security for a Better Student Experience
Introduction Founded in 1846, the University of Mount Union is a four-year private institution grounded in the liberal arts tradition…

Hospital in Southern Rhode Island Selects Extreme Networks Infrastructure for Cost Savings and Reliability
Introduction The South County Hospital Healthcare System (SCHHS), located in southern Rhode Island, is an independent, non-profit, acute-care hospital offering…
Technology
2017-09/1580/en_head.json.gz/7145
There are many things we still don't know about Sony's next-gen console, the appropriately named PlayStation 4. We don't know what it will cost, for one thing. For another, we don't know what it will look like. That being said, we still know quite a bit, including the fact that it's a machine with powerful specs---including 8GB of super-fast GDDR5 RAM. We also know that a lot of game developers are on board, including CD Projekt RED, who have stated that The Witcher 3 will be coming to the PS4. "Combining CD Projekt RED's unique artistic vision and tech with the capabilities of PlayStation 4 will make The Witcher 3 a truly next-gen RPG," said Adam Kiciński of CDP. "With Sony's new hardware and our new engine, there are no limits to what we can create in the process of realizing our vision for Geralt's dark fantasy world." And finally, we know that the system is coming out in time for Christmas this year. So start saving your allowance or putting some extra money in the savings account. At last night's PS4 event, Sony unveiled the system's controller, which you can take a closer look at here. They also showed us several games, including exclusives like Killzone: Shadow Fall and inFamous: Second Son, and several other titles like the indie game The Witness and Capcom's Deep Down, which may be a next-gen sequel to Dragon's Dogma. Sony's PlayStation Meeting was admittedly a mixed bag, but it packed in a great deal more information---and reasons to be optimistic about the system---than many are giving it credit for.

Much Ado About Social

One thing Sony pushed too hard at last night's event was its social media integration, including a "Share" button on the DualShock 4 controller. Many gamers are understandably wary of the social media craze, and see Sony's hyping of its social features as a bad sign. I'm not so sure. Social media is an inherent part of our lives now. Some of us choose to ignore it, but many enjoy and/or participate in social media on a daily basis. From what I can see, there is no reason anyone will be forced to use any of the PS4's social media functions if they don't want to---any more than you're forced to use the Wii U's social media functions. Nintendo's console features social media integration that is similar to, though not exactly the same as, the PS4's. This opens the door for interesting social functions to be added to games like ZombiU, which allowed gamers to post messages to one another in the game via spray-paint (similar to Demon's Souls). The PS4 will go further, allowing users to stream gameplay or spectate and even take control of one another's games. I, for one, rarely make use of social functions while gaming. I have no Xbox 360 or PS3 friends to speak of, for instance. I never chat during games, either via a headset or in an MMO chat box. I just ignore the existence of these features entirely, which is something I plan on continuing with the next generation. Others, however, may make more use of these features. It would have been, to put it quite bluntly, insanely stupid of Sony to ignore this trend---even if their conveying of the social features during the event last night was flawed. (And I believe it was flawed, because it placed too much emphasis on social features.) Does Facebook integration mean that the PS4 will be a Farmville emulator? Of course not. Sony isn't putting the sort of horsepower into its next console that we heard about last night to deliver gamers a Facebook-game machine. This is just an added perk.
Whether it will sell systems is an open question, but there's no reason to believe it will harm the core gaming experience. Just as the gaming industry (and the money behind that industry) is too in love with social media, mobile, and every other trend we see these days---and likely why Sony made such a fuss over it last night---gamers are too hostile and overly concerned over social integration. Social will be an option for those who make use of it, and will likely open the door for an increased use of the PS4 as an eSports device. Improved social functionality should also make match-making and online competitive play more streamlined. For those who want to opt out, just opt out (like me!) Beyond that, it's much ado about nothing. Forbes video game contributor Jason Evangelho says the lack of backwards compatibility, saved games transfers, and transfers of PSN digital purchases is the "definition of insanity" (though Far Cry 3 used a slightly different definition). I can honestly say, with a straight face, that I don't care one bit about backwards compatibility, even though I'm aware many others do. I can respect that, but I don't share the opinion. Jason suggests that it's "a nail in Sony's coffin that its marketing team needs to address immediately," but I think that's a bridge too far. People are angry now, but it will pass once enough new games are announced for the PS4. For one thing, the PS3 was equipped with hardware that made backwards compatibility pretty much impossible unless Sony had decided to once again use the same architecture. That would have been a terrible idea, and the move to standard x86 architecture is a far superior direction for the company to take. It will make the system future-proof, for one thing, and make the possibility of PC ports for PlayStation games very real. For another thing, at least for me personally, I have no plans to get rid of my PS3. I made that mistake with many older systems, including the PS2, and I don't plan on making it again. I will own both systems, and I will still play my PS3 games on my PS3. Sure, it would be nice to be able to play those games on my PS4, but it's not a deal-breaker. In fact, the PS2 only recently stopped production, so support for the PS3 should be expected for some time. There are a bunch of really great PS3 titles coming out this year already, for that matter. The two systems will coexist, with or without backwards compatibility. The fact that Sony had no good options here is important, especially since they made the right decision by moving to x86. They're using Gaikai to soften the blow, and even though we don't know how that will shape up, it's still a step in the right direction. Better still, there will be no anti-used games DRM, which would have been the actual definition of insanity.

Keeping Cards Close to the Chest

The reason Sony didn't unveil the actual hardware itself was apparently because they wanted to keep something for later---presumably, E3. They need to have something to take the wind out of Microsoft's sails when that company unveils its next Xbox console sometime later this year. My guess is that the controller wasn't supposed to be announced yet either, but recent leaked images of the DualShock 4 forced Sony's hand. Whatever the case, I have the sneaking suspicion that this announcement was supposed to be focused on services and software. Frankly, I don't care at all about what the machine looks like.
We know that it's running some impressive specs, even though not every last detail has been unveiled: People have worried that because we didn't see the hardware, there would be no optical drive. Well, take a look at the chart above (taken from this PDF). The system will play Blu-ray and DVD. It will have Ethernet, Wi-Fi, and Bluetooth. It will have HDMI, analog, and digital outputs for video and sound. Beyond that it has a single-chip custom processor with an x86-64 AMD "Jaguar" CPU with 8 cores, an AMD "next-generation" Radeon processor, and the aforementioned 8GB of GDDR5 RAM. The specs, so far as I can tell, are great. Sure, you can build a better gaming PC---but Sony would price itself out of existence if it tried to outdo top-line gaming rigs. The fact that we see such amazing-looking games on current consoles should speak volumes about hardware optimization and the efficiency of game development on consoles. We certainly won't see the best-looking next-gen games at launch, but I imagine we'll have some extremely impressive looking stuff in the future. More importantly, the higher powered system should be able to give us much better AI to contend with. And even though there will be digital download options available for every PS4 title, there is still an optical drive. Sony is also keeping the price of the system a secret. Almost certainly they're playing a game of chicken with Microsoft. Neither firm wants to be the one to lowball or highball the price, and they both want to see what the other is up to before spilling any carefully kept beans. I actually wasn't surprised by this, though I was as disappointed as anyone else. I'm sure we'll find out soon enough, and I'm still betting the lower-end model will be in the $400-$430 price range.

It's All About the Games

PC gamers are fond of saying things like "Why would I buy this when I can just build a gaming PC that's even better?" As a PC gamer, I understand this sentiment. Yet I spend a lot of time playing games on five-plus-year-old consoles, and many of these games are excellent and quite a few are not to be found on any PC. Really, that's the reason we'll play the PS4: for its games. We won't play it for its social media, or buy it for its Netflix streaming. These are great perks but they're just that: perks. The games are at the core of the console experience, and the next generation of consoles will live or die based on their games. Sony has unveiled a few games, and third party developers are starting to tell us more about their offerings as well, including the above-mentioned Witcher 3, but also Watch Dogs, Destiny, and many more to come in the coming weeks. My bet is we'll learn about other games from developers like Naughty Dog either before or at least by E3. This is just the first wave of game announcements. And really, it wasn't half bad. I would have liked to see more, and I would have really liked to see them running the games on the PS4 itself (Watch Dogs, for instance, was apparently running on a PC with the same basic hardware as the PS4). But as far as these things go, we got to see a number of games in action and we heard about a number of others headed to the PS4, including a new Final Fantasy title. If Sony can deliver the games and the console at a price that gamers can afford, they will be just fine. The fluff---the social media, the Move motion controls, etc.---that's just fluff. That's just appealing to investors, to a broader demographic, to the sort of people who buy Xbox 360 over PS3.
What will really make or break the PS4 is its games. With a stellar line-up of PS3 exclusives this year, and development on the PS4 that stretches back to 2008, I have no doubt that we have yet to scratch the surface of PS4 games. Hopefully a strong line-up will be ready for launch this holiday season. I'm sure we'll have tons and tons of problems to contend with next generation. We will face many of the same challenges we face today with DLC, rising costs, and the constant experimentation the industry goes through while trying to find out how to make more money off of games, while spending hundreds of millions of dollars in the process. The next-generation systems will have good, bad, and ugly features, and there will be room for both praise and criticism. Sony's PlayStation event was a mixed bag, which focused too much on popular industry memes and buzzwords and didn't show us everything we wanted to see. But it was better, in many ways, than I expected. We did get to see gameplay, and hear from various developers and publishers, and we were given some information about the system's specs. That's more important to me than seeing the physical console itself. All told, it was a success, if an imperfect success. Sony's stock price may be down today, but I think that may have more to do with the lack of a price tag than with anything inherently wrong with the console itself. In some ways, it may also be because Sony focused too much on just improving the hardware rather than on any truly fancy or "innovative" design changes. The complaint that there was nothing "revolutionary" about the system is, to my mind, an added bonus. That the PS4 is an evolutionary step up from the PS3 is far preferable, in my book, to a revolutionary one. Complete overhauls are better left to Nintendo with their Wii motion controls or their Wii U gamepad. Sony didn't need to change the game. It just needed to step it up. By focusing on play (however cheesy that sounded last night) they're focusing on what really does make a console a fun gaming experience. Now let's just hope we hear a lot less about the social aspects in coming months, and a lot more about the games.
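For quick reference, the hardware details quoted in the piece can be collected into one plain data structure, which makes the published figures easy to scan or compare against later consoles. This is only a restatement of what the article lists; the field names (and the use of Python at all) are our own choice, not anything Sony has published.

# The PS4 details as reported above; None marks items Sony had not yet announced.
PS4_PUBLISHED_SPECS = {
    "cpu": "single-chip custom x86-64 AMD 'Jaguar', 8 cores",
    "gpu": "AMD next-generation Radeon",
    "memory": "8 GB GDDR5",
    "optical_drive": ["Blu-ray", "DVD"],
    "networking": ["Ethernet", "Wi-Fi", "Bluetooth"],
    "av_outputs": ["HDMI", "analog", "digital"],
    "launch_window": "holiday season 2013",
    "price": None,
}

if __name__ == "__main__":
    for field, value in PS4_PUBLISHED_SPECS.items():
        print(f"{field:15} {value}")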
Technology
2017-09/1580/en_head.json.gz/7148
Kan was known as the guy who popped up around Silicon Valley with a camera on his hat, live-streaming his life online for all to see. Kan turned that idea into Justin.TV, a Y Combinator-backed startup that became a popular online live video website. The company also spawned two others, Socialcam, a mobile video app, and Twitch.TV, a gaming video site. Meanwhile, Kan, now more experienced after learning how to build his startup on the fly, is an advisor to startups and a part-time venture partner at Y Combinator.

Now he's started a new company called Exec with cofounders Daniel Kan, Kan's brother and former head of sales and business development at UserVoice, and Amir Ghazvinian, a Stanford Masters in Bioinformatics graduate. The Y Combinator-backed company (more on that below) is a mobile app and website designed for people to find others to complete jobs for them in real-time. Sound like other services, such as Zaarly and TaskRabbit? Exec is different, Kan says, because all the tasks are designed to be done in real-time. The jobs could be anything from the mundane: buying and delivering coffee; to the practical: buying, assembling and delivering Ikea furniture; to the bizarre: getting gas for a scooter that ran out of gas and driving it to the owner's office (true story). One person requested the planning of Valentine's Day with dinner, flowers and chocolate.

Exec dispatches the job requests to individuals who are nearby and have good ratings and skills in that area. Jobs are designed to pay at least a $25 hourly rate. The requests are sent automatically to the top person on the list, who has a couple of minutes to reply. If he or she doesn't say yes, Exec goes down the line to the next person. The idea is to find someone really quickly, in the way that car service Uber finds a nearby car quickly. To be able to do jobs on Exec, people have to complete three rounds of interviews, so that they can be trusted, Kan says. While jobs are posted in real-time, people can also schedule task requests ahead of time. The service has been in beta for about a month in San Francisco. Kan hopes that Exec can provide flexible work for people who are freelancers, writers, designers or just unemployed. "Our goal is to provide flexible work for someone who wants to make extra cash for a few hours per day, or even all day," Kan says.

Exec, which has a four-person team, is in the current crop of Y Combinator companies that will be pitching to investors in March. Kan, however, is also a Y Combinator "part time partner," helping new startups. So Paul Graham, cofounder of YC, jokingly tells Kan to "do office hours for yourself" for his startup. Kan didn't necessarily need to go through Y Combinator again. He has some knowledge of how to run a startup and could probably tap other startups for help. But he says he likes building a company as part of the incubator. "I think it's such a great environment for founders," Kan says. "I want my cofounders to go through it. Also there's an immense focus with three months working on a product and not being distracted... And there's nothing more fun than working with other startups." Now that he's got some more experience, Kan says he's taken away one lesson: focus on the things that are really important for the company and defer decisions on everything else. This enables the company to move faster. With Justin.TV, Kan and his team generally figured things out as they went along.
He remembers an hours-long Justin.TV argument about whether to include "time stamps" in the chat window. To simplify things, Exec has one person in charge of each part of the company and that person is in charge of just building it. One person is on the iPhone app, another is in charge of operations and Kan developed the back-end technology. This time Kan is trying to be more pragmatic and building as few features for the service as possible and trying to make each one great for customers' experiences. "Our focus is customer experience. That's it. Everything else is secondary."
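The dispatch flow described above (rank nearby, well-rated, qualified workers, then offer the job down the list with a short window to accept) is easy to sketch. This is a hypothetical illustration based only on the article's description; the Worker fields, the ask callback, and the ranking rule are our assumptions, not Exec's actual system or API.

from dataclasses import dataclass

OFFER_WINDOW_SECONDS = 120   # "a couple of minutes to reply," per the article

@dataclass
class Worker:
    name: str
    rating: float        # e.g. 0.0 to 5.0
    distance_km: float
    skills: set

def rank_candidates(workers, required_skill):
    """Prefer highly rated workers who are close by and have the needed skill."""
    qualified = [w for w in workers if required_skill in w.skills]
    return sorted(qualified, key=lambda w: (-w.rating, w.distance_km))

def dispatch(required_skill, workers, ask):
    """Offer the job to one candidate at a time until someone accepts."""
    for candidate in rank_candidates(workers, required_skill):
        # `ask` stands in for the push-notification-and-wait step.
        if ask(candidate, OFFER_WINDOW_SECONDS):
            return candidate
    return None   # nobody accepted; the request would be escalated or re-posted

if __name__ == "__main__":
    workers = [
        Worker("Alice", 4.9, 1.2, {"ikea_assembly", "errands"}),
        Worker("Bob", 4.5, 0.4, {"errands"}),
    ]
    # Alice is ranked first but declines, so the job falls through to Bob.
    print(dispatch("errands", workers, ask=lambda w, timeout: w.name == "Bob"))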
Technology
2017-09/1580/en_head.json.gz/7149
JEAN BAUDRILLARD 1 Title BAUDRILLARD, JEAN Le Xerox et l'Infini THE TAPEWORM TTW 002CS Cassette-only release in a limited edition of 250 copies. Jean Baudrillard's Le Xerox Et l'Infini -- originally published in Paris, 1987 -- as read by Patricia and Ellen. Recorded on July 12, 2009 by Vicki Bennett in Hersham, England. Jean Baudrillard is perhaps the most important theorist of the "after modern." Though he says he has "nothing to do with postmodernism," many interpret him among the most important prophets of a truly postmodern era. His works have attracted praise and derision all over the world. Since 1991, British artist Vicki Bennett (People Like Us) has been an influential figure in the field of audiovisual collage, through her innovative sampling, appropriating and cutting up of found footage and archives. Using collage as her main form of expression, she creates works that communicate a humorous, dark and often surreal view on life. Patricia and Ellen were born in Reims, northeastern France, on July 29, 1929. They told interviewers that their grandparents were peasants and their parents were civil servants. They became the first of their family to attend university when they enrolled in the Sorbonne University in Paris. There they studied German, which led to them teaching the subject at a provincial lycée, where they remained from 1958 until their departure in 1966. While teaching, Patricia and Ellen began to publish reviews of literature, and translate the works of such authors as Peter Weiss, Bertolt Brecht and Wilhelm Mühlmann. Later on, with the development of the magnetic tape recorder, Patricia and Ellen used these new means in order to manipulate their performances and expand the possibilities of language sound transformations. Patricia and Ellen continue to actively perform their work, the contextual quality of which is enhanced by their idiosyncratic delivery.
Technology
2017-09/1580/en_head.json.gz/7173
2010 Holiday Gift Guide: Amazon Kindle

The latest release of the Amazon Kindle, the company's ebook reader, seriously changed the game in the ereader world. It's thinner and lighter than any previous model, it packs in Wi-Fi for the first time, and it is much cheaper despite being the best Kindle they've made to date, and that's why we're adding it to our 2010 Holiday Gift Guide. You can get the Wi-Fi model for just $139, or if you need the Wi-Fi + 3G model, that one goes for $189. Thing is, we'd bet that most anyone would do just fine with the Wi-Fi only model, and for $139, you get a cool gadget that book lovers will...uh...love! They're available in white and graphite colors, and we'd recommend picking one up sooner rather than later, because this is gonna be a hot one this year. Read More | Amazon Kindle

If you've got someone on your gift list that needs a new point-and-shoot camera that also takes great video, we'd like to recommend checking out the Panasonic Lumix DMC-ZS7. This is the point-and-shoot that we've been using for the past 8 months or so, and the quality is great. Video recordings look superb in high definition, recorded in AVCHD Lite. You've got a 25mm ultra wide-angle lens with 12x optical zoom, facial recognition, and other bells and whistles. What we love about this camera, though, is that it has GPS built-in to geotag your photos and videos automatically, on the fly, as you take them. The ZS7 retails for $349.95, but Amazon is currently selling them for just $257.82 - a savings of $92.13!

2010 Holiday Gift Guide: Powermat Rechargeable Travel Mat

We're big fans of what Powermat has to offer here at Gear Live HQ, and that's why we think it's appropriate to feature them in our 2010 Holiday Gift Guide. What you've got here is a little panel with a built-in battery pack that lets you wirelessly charge your mobile devices. It folds up and is small enough to throw in a bag, and won't take up much space at all on a desk. On a full charge, you can charge an iPhone five times. The Powermat Travel Mat comes with adapters that allow you to charge any USB device, Nintendo DS, a few proprietary phone connectors, and iOS devices. The cool part here is that you just set your device on the mat, and it starts charging. They typically sell for $130, but Amazon's got 'em for $69 right now. Read More | Powermat Travel Mat

2009 Holiday Gift Guide: Casio Exilim Zoom EX-Z280

Casio rocks it this year with the Exilim Zoom EX-Z280 digital camera, and we like it enough that we are recommending it in this year's gift guide. It sports a dedicated Easy Mode, making it easy enough for pretty much anyone who enjoys pressing buttons to operate, while still maintaining the advanced features for more experienced pros, including ones that you don't typically find on budget-priced digital cameras. One such feature is mechanical (instead of digital) image stabilization, giving you nice clear shots while you stumble around during a holiday party after one-too-many eggnogs, as well as a 4x optical zoom lens. In addition, it shoots video in 720p, so it can even take the place of your Flip-style camcorder. You can grab one now for $173 on Amazon. Read More | Casio Exilim

2009 Holiday Gift Guide: Kodak Zi8

We love how inexpensive it's gotten for people to attain the ability to record high definition video, and we think this category is set to explode this holiday season. Based on this, and you know how much we love video, we've got to recommend the Kodak Zi8 handheld camera. Why?
Well, the Kodak Zi8 records full 1080p HD video at 30 fps, and can also take 5 megapixel still images. It incorporates image stabilization as well, which helps avoid that whole Blair Witch shaky cam effect that none of us enjoy, and in a rare move, it even has a microphone jack so you can connect a nice, high quality audio device, if you so choose. It's definitely a great value for such a small device that packs a nice punch. The Kodak Zi8 typically sells for $179, but you can find it on Amazon for a little less than that. Honorable Mention: Flip Mino: It isn't HD, but it's about $60 cheaper than the Kodak Zi8. Read More | Kodak Zi8 pocket video camera

2009 Holiday Gift Guide: Scene It? The Simpsons Deluxe Edition

It's no secret that Scene It? is one of the most popular game series around, and this holiday we are recommending Scene It? The Simpsons Deluxe Edition. It's been getting rave reviews as one of the best Scene It? titles to be released, and hey, it's The Simpsons. The show has been around for 20 years, which means parents can play this with kids and have a good time. The game features material from the show's first 19 seasons on the air, has hundreds of questions, and has four collectable tokens. You can pick up Scene It? The Simpsons Deluxe Edition for $29.99. Best. Scene It. Ever. Scene It? Twilight Edition: Yeah, we know a lot of the ladies out there are on a vampire kick thanks to Twilight. Guys, you may want to consider this one. Scene It? Bright Lights! Big Screen!: Would you rather play Scene It? on the Xbox 360 instead of using a DVD player? This is the one you'll want.

2009 Holiday Gift Guide: HP DreamScreen 100

Ever since the HP DreamScreen arrived on the scene, generic digital photo frames have seemed so passe. I mean, the DreamScreen is a digital photo frame on crack, essentially. It's available in 10- and 13-inch sizes, and aside from simply displaying your digital photos, it can stream Pandora (or up to 10,000 Internet radio stations), set alarms, display your calendar, show you a five-day weather forecast, and even give you a look at your Facebook account so you can see what's up with your peeps. Plus, it incorporates touch into the bezel, and we like touch. The HP DreamScreen typically sells for $250, but Amazon currently has it for $205, a full 18% off. Read More | HP DreamScreen 100 on Amazon

2009 Holiday Gift Guide: Roku HD-XR Player

We are kicking off our 2009 Holiday Gift Guide with the Roku HD-XR player. Why? Well, we think that it's the perfect time to introduce someone to the new hotness that is TV, delivered over the web, and right into the living room. Sure, Netflix has come to the Xbox 360, but if you want streaming access to the Netflix service and don't own a game console, this is your cheapest barrier to entry. The HD-XR model also gives you access to the Amazon Unbox catalogue, as well as the entire MLB game service, if you are subscribed. Definitely a great gift for movie buffs, the Roku HD-XR player is able to stream over 12,000 movies and television shows from the Netflix Watch Instantly catalogue, and over 45,000 from Amazon Video on Demand. It connects to your home network through an ethernet port, or over Wi-Fi, connecting at up to 802.11n if you have it. There's also a USB port on back, which Roku says is for future use. Our guess is that they'll be allowing the device to read from an external USB drive, which would be stellar.
The actual device is tiny, so it takes up barely any space, and setup takes just a couple of minutes - very simple. Of course, a Netflix subscription is required, but once you have that, everything else is a piece of cake. Honorable Mention: If you want all the features of the Roku HD-XR, but don't need the USB port or 802.11n wireless speeds, you can grab the Roku HD Player for $99! Read More | Roku HD-XR
Technology
2017-09/1580/en_head.json.gz/7249
Kirk Campbell, President and CEO

Kirk Campbell joined IDC as Chief Operating Officer in 1990 and became President and CEO in 1991. Under his leadership, IDC has rapidly strengthened its position in the technology market intelligence business. Since 1990 IDC's revenues have increased from $30 million to $400 million, the number of employees has quintupled to more than 1,700, and the number of countries which IDC research covers has grown from 25 to over 110. As a result, IDC offers the most comprehensive market intelligence available on all key technology segments from the internet, PCs, systems, peripherals and semiconductors to software, services, telecommunications, vertical markets and distribution channels. IDC has invested heavily in developing an unrivaled research network at the worldwide, regional, and country levels. Mr. Campbell joined International Data Group in 1988 as a Vice President responsible for corporate business planning and analysis for over 100 IDG business units. Prior to his position at IDG, Mr. Campbell was Vice President, Finance and Corporate Development for PennWell Publishing, a high-tech information and publishing company. He also held financial positions with Exxon Corporation. Mr. Campbell earned a Bachelor's degree from Stanford University and a Master's degree from Princeton University. He also was a Fulbright Scholar at the Free University in Berlin, Germany.

Debra Bernardi, Chief Human Resources Officer

Ms. Bernardi is responsible for the Human Resources and Education departments at IDC and also has dotted-line management for the International HR function. She is an active part of IDC's senior management team. Her mission is to take care of IDC's most valuable resource, the "People". She has led her dedicated HR and Education teams to hire and train each new employee, while maintaining the same retention rate. In addition, she has significantly enhanced IDC's benefit portfolio and created innovative HR solutions like the Global Fellowship program, Mentoring program and the Perks@Work program. She was instrumental in creating IDC's Orientation and assimilation program, the reward and recognition programs, and the New and Advanced Analyst training programs, and in developing a one of a kind People & Education site on the company's intranet, InsideIDC. In 1998, Ms. Bernardi was promoted to Vice President of Human Resources and Education. Ms. Bernardi has been actively involved in helping IDC expand to over 600 U.S. employees. Prior to her position as VP of HR, Ms. Bernardi also managed IDC's administration departments which included Telecom, Library, and General Services. Ms. Bernardi joined IDG in 1989 and was responsible for creating the first collaborative training program for all IDG business units. Shortly after, she began work in the Human Resources Service Division (HRSD), providing HR generalist services to a variety of smaller IDG business units. One of the business units Ms. Bernardi supported was IDC, and in September 1991, Ms. Bernardi was hired as an HR Manager at IDC. At that time, IDC employed a mere 85 people. IDC now boasts a U.S. workforce of 600+ employees. Before joining IDC, Ms. Bernardi worked in recruiting and training at Massachusetts General Hospital as well as York Hospital in Maine. Ms. Bernardi earned a Bachelor's degree from the University of New Hampshire and a Master's degree from Lesley College. She is also a graduate of the American Society of Training and Development's Train the Trainer program.
She is certified in Myers Briggs Type Indicator as well as a graduate of the Center of Creative Leaderships 360 feedback tools. Crawford Del Prete Executive Vice President, Worldwide Research Products & Chief Research Officer Crawford Del Prete, Executive Vice President, Worldwide Products and Chief Research Officer, manages IDC's WW research and consulting businesses. This includes IDC's Enterprise Computing, Storage, Networking, Integration, Development and Application Software, Professional Services, Telecommunications, Personal Computing, Mobility, Consumer, Digital Marketplace, SMB, Vertical Markets, Consulting and WW Tracker research practices. Mr. Del Prete is also responsible for IDC's Industry Insights Companies, which specifically target the needs of end users in six vertical segments. Mr. Del Prete serves on the operating review boards of PC World and Macworld. Mr. Del Prete is a leading authority on the IT industry and has completed extensive research on the structure and evolution of the information technology industry. In 2001, Mr. Del Prete forged IDC's partnership with Innosight, the consultancy founded by Harvard Business School Professor Clayton Christensen. Together, the companies have created a body of work to understand and predict trends in disruptive innovation. Mr. Del Prete also serves as IDC's lead analyst covering Hewlett Packard. Mr. Del Prete joined IDC in 1989. At that time he initiated coverage of the Winchester Disk Drive market, and was a founder of IDC's European and Asia Pacific storage research programs. In 1997, Mr. Del Prete founded IDC's coverage of the Semiconductor marketplace. In 1995, Mr. Del Prete was honored with IDC's James Peacock award for research excellence, IDC's highest honor. That same year he was voted "most valuable" storage analyst by an outside panel of his peers from the International Disk Equipment Manufacturers Association. Mr. Del Prete is a member of the United States Computer History Museum Storage Committee. Prior to IDC, Mr. Del Prete worked in marketing at Installed Technology International. Before this, he was with Paine Webber Jackson and Curtis in New York in the government securities and commercial lending sectors. Mr. Del Prete holds a B.A. from Michigan State University and in 2012 was named a Distinguished Alumni of the University, the highest award given to graduates. John Gantz Senior Vice President, Research As Senior Vice President, John Gantz has responsibility for all IDC's worldwide demand-side research, global market models, and portal products. He is also responsible for IDC's worldwide services research. Prior to this role, he was in charge of worldwide research and consulting in personal systems, consumer devices, workgroup and collaborative computing, and services. He is also a member of IDC's management committee, chief architect of IDC's Global Internet Commerce Market Model, and one of IDC's chief spokespersons on broad technology and market issues. He cofounded IDC's annual Internet Executive Forum, founded IDC's PC Market Outlook conferences, and is a regular speaker at IDC's annual Directions conferences in the United States and around the world. Prior to joining IDC in September 1992, Mr. Gantz was Vice President and Chief Analyst for Dataquest and Director of its Software Research Group. Before joining Dataquest in 1991, Mr. Gantz was executive Vice President of TFS, Inc., a custom research and consulting company that he cofounded in 1983. Before TFS, Mr. 
Gantz was a Vice President at IDC, where he managed newsletter operations and research programs in software and office automation. Mr. Gantz is well known in the industry through his frequent speaking and consulting engagements and his bylined biweekly column in Computerworld. In the past he has served as contributing editor for Networking Management, Computer Graphics World, InfoWorld, and Digital News. He has also published articles in Fortune, Forbes, Industry Week, Discover, High Technology, and Management Technology. Additional national and international exposure has come through quotes in all major business publications, interviews on CNN, and chairperson assignments at all major computer industry trade shows, including COMDEX, Internet Commerce Expo, ComNet, ITAA, NCGA, and PC Expo. Mr. Gantz is a graduate of Dartmouth College, former Navy submarine officer, and coauthor of The Naked Computer (Morrow, 1983). He has run two Boston Marathons (1979 and 1980) and hiked the Appalachian Trail end to end (1973). He is married and has two sons. Frank Gens Senior Vice President & Chief Analyst As IDC's Senior Vice President & Chief Analyst, Frank Gens guides IDC's research into broad IT industry trends, particularly the strategic adoption of technology by Global 2000 businesses and the industries in which they compete. Since 2008, Mr. Gens has led IDC's global cloud research team, driving the development of IDC's widely-used cloud services taxonomy, and the industry's first and most accurate forecast of cloud services adoption. He has also led the development of IDC's "3rd Platform" scenario - forecasting the transformation of the IT industry (and other industries) through the strategic use of cloud, mobile, social and big data technologies. Mr. Gens speaks frequently to CIOs, other enterprise executives and IT vendor executives at forums around the world, most recently in Brazil, China, Japan, New Zealand, Russia, Saudi Arabia, South Africa, Turkey and the U.S. He regularly participates in strategic consulting engagements - often around Cloud and 3rd Platform trends - for senior executives in global IT vendor and IT buyer organizations. Mr. Gens is the author of IDC Predictions, the company's annual forecast of major changes in the global development and use of technology, and keynote speaker at IDC's annual Directions conferences in the U.S. and in many other countries. Mr. Gens is a member of IDC's worldwide management team, a lead developer of IDC's global research architecture, and a co-chair of IDC's Research Quality Board. Mr. Gens is a 30 year veteran of the IT research and advisory services business, including over 20 years at IDC in a variety of senior roles, including leading IDC's end-user research business and developing IDC's global Internet research capabilities. In addition to research management roles, Mr. Gens' career includes senior positions in new product development and introduction, product management, marketing, business development, corporate development, and regional operations management. Prior to IDC, Mr. Gens held management positions at International Data Group (IDG), AMR Research and The Yankee Group. Vito Mabrucco SVP Worldwide Consulting & Managing Director, IDC Canada Vito Mabrucco is responsible for IDC’s Worldwide Custom Solutions line of business and for some key IDC geographical business units. 
IDC's Custom Solutions provide clients with market and buyer insight and recommendations to help develop their strategic growth plans, and provide the services to help execute the tactical plan for implementation, marketing and sales. These solutions are tailored to ensure clients are successful in their targeted technology, geographic, and customer markets. They include offerings in Custom Data Analytics, Buyer Behavior Insights, Global Thought Leadership Programs, Business Value Programs, Sales Enablement Solutions, Integrated Marketing Programs, and Partnering Programs. Vito Mabrucco is also responsible for the IDC geographic regions of Canada, Japan, and Asia Pacific – which includes China, India, ASEAN, Korea, Taiwan, Australia and New Zealand. IDC’s regional teams deliver high value research and advisory services that are insightful and relevant to local and global clients. Vito Mabrucco has deep insight into the global Information and Communications Technology industry with a focus on the value of technology, strategic business models, key industry segments, and global market trends and directions. His insights include a strong understanding of disruptive innovation business models as well as insights into evolving global dynamics and how they apply to the overall technology industry. An outspoken participant and supporter of the ICT industry, Vito provides his perspective and analysis through formal speaking engagements, written documents, media interviews and other public events. A thirty year veteran of the information and communications industry, Vito Mabrucco has held senior management portfolios in Sales, Marketing, Finance, Systems and Strategic Planning with major global technology companies such as IBM, Tandem, Compaq, Dell and Cisco. Vito Mabrucco, a graduate of the University of Waterloo, is on the board of the Information Technology Association of Canada (ITAC), a member of the Campaign Committee for the United Way, and is actively involved in his community. Henry Morris Senior Vice President, Worldwide Software, Services, and CMO Advisory Research Henry Morris is the Senior Vice President for IDC's Worldwide Software, Services, and CMO Advisory research groups. Dr. Morris is also the executive lead for IDC's worldwide Big Data research initiative. Dr. Morris started the Analytics and Data Warehousing research service at IDC, and coined the term "analytic applications" in 1997 to focus on the value of analytics in specific horizontal business and industry processes. Currently, Dr. Morris speaks about major software trends driving the industry, such as the impact of cloud and Big Data/analytics in shaping a new generation of applications. He has been quoted in and written papers for business and trade publications, such as Computerworld, Forbes and KMworld, on trends in business intelligence, business performance management, data warehousing, knowledge management, and enterprise applications. Prior to joining IDC in 1995, Dr. Morris served in a variety of technical and management positions at Digital Equipment Corporation, where he specialized in software for application development. Dr. Morris has been an instructor in technical writing at Northeastern University and Bentley College, and an Assistant Professor of Philosophy at Colgate University. He earned a B.A. with distinction from the University of Michigan and his Ph.D. in philosophy from the University of Pennsylvania. 
Eric Prothero Senior Vice President, Worldwide Tracker Research As Senior Vice President for Worldwide Tracker Research, Mr. Prothero is responsible for managing IDC's largest research practice which includes Tracker, QView and Forecaster products. Across the globe, IDC has over 250 semiannual and quarterly worldwide, regional, and local country tracker products covering hardware, software, IT services, and telecom equipment. His specific responsibilities include overseeing tracker data quality and consistency, taxonomy evolution, global tracker analyst direction, tracker tools and data platforms, the efficiency of the research tracker processes, and client satisfaction. Prior to this role, Mr. Prothero led the Latin America business unit for IDC, overseeing business operations across 7 countries in the Latin America region, managing functional areas including Research, Consulting, Conferences, GMS, Sales, Marketing, and Finance. Mr. Prothero joined IDC in 1993 as the second employee in the Latin America region. Under his leadership the Latin America region grew to over 120 employees with research coverage in 15 countries in the region. Prior to IDC, Mr. Prothero was a senior consultant at Windermere Associates, a San Francisco-based strategic planning consulting firm. Mr. Prothero received his M.B.A. in International Finance from the University of California at Berkeley and his B.A. in Economics from Swarthmore College. He speaks Portuguese and Spanish and a bit of Japanese. Mark Sullivan Chief Financial Officer As Chief Financial Officer, Mark Sullivan is responsible for IDC’s worldwide financial reporting, analysis, and planning. He is a member of IDC’s senior management committee, and reports to President and CEO Kirk Campbell. Mr. Sullivan joined IDC in 1992 as director of finance. He was named vice president, finance in 1998, and CFO in 1999. Prior to joining IDC, Mr. Sullivan served as controller from 1988 –1992 for CW Publishing, publisher of Computerworld. He joined International Data Group in 1987 as a financial analyst. Prior to his IDG tenure, he held various financial analyst positions at General Electric Company manufacturing operations in Michigan, Ohio, and Massachusetts. He completed, with honors, General Electric’s Financial Management Program. Mr. Sullivan earned his Bachelor of Arts degree in finance from Michigan State University. Brad Thorpe Chief Sales Officer As Senior Vice President of Worldwide Sales, Brad Thorpe has responsibility for leading all IDC direct sales and customer service to the ICT Vendor, Financial, and End User communities, as well as indirect sales including partnerships. He is IDC's chief customer advocate. Prior to joining IDC in 1986 Mr. Thorpe was employed at NCR Corporation in both Commercial and Federal Accounts Sales, and before that he held sales positions with Western Microtechnology, a distributor of semiconductors, computer systems, and peripherals. Mr. Thorpe's first role in 1986 at IDC was as a Unix research analyst and product champion. He then moved into sales, first in California where he was responsible for Western regional and major accounts sales. He was promoted to Director of Major Accounts and relocated to the East Coast to develop IDC's first major account focus, building and implementing its now renowned global account management program. 
In the NYC area, he led the selling effort for IDC's entry into providing its technology research to equity analysts and investment bankers following technology companies through its successful Investment Research Services. Most recently, Mr. Thorpe was Group Vice President and General Manager of Enterprise Major, Financial, and Sector Sales, leading sales to IDC's largest ICT vendor clients, the financial community, and emerging ICT vendors and supply chain. In addition to earning a BA in English from Kenyon College, Mr. Thorpe has participated in numerous sales, sales management, and negotiation courses, including the Sales Management and Global Account Management programs at the Columbia University Business School.

Vernon Turner
Senior Vice President, Enterprise Computing Research, Telecommunications, Mobility, Client Computing, Consumer Markets, Worldwide Tracker Products
Vernon Turner is responsible for worldwide research across the server, software, network and services programs for the IT Enterprise. Mr. Turner advises IDC clients on the competitive, managerial, technological, integration and implementation issues for complete systems environments. Mr. Turner's areas of expertise include enterprise computer and storage relationships, technology recovery and capacity planning, performance management and end-user perspectives. He has helped to drive research on the evolution of the next-generation Internet infrastructure, including grid computing, service-centric IT infrastructures, utility computing solutions, processors and modular server designs. In addition, he has worked with several industry customer councils in an advisory role on server, network and storage architectures to take advantage of initiatives such as IT consolidation. Mr. Turner has a strong background in the technology requirements of the finance and banking communities and comments frequently on high availability and disaster recovery requirements for mission-critical workloads. Finally, Mr. Turner is frequently quoted in the Wall Street Journal, New York Times, USA Today and Financial Times, as well as commenting on technology showcases for CNBC, CNNfn, and international media outlets. Mr. Turner holds an M.B.A. from Babson College and a Computer Science undergraduate degree from Oxford Polytechnic, Oxford, England.

Meredith Whalen
Senior Vice President, IT Executive, Software, Services and Industry Research
Meredith Whalen is Senior Vice President of IT Executive, Software, Services and Industry Research. Her global team of analysts leverages research and advisory services to empower business transformation for the Global 2000. Ms. Whalen is responsible for IDC's technology professional advisory services, overseeing the research, client services, and marketing activities aimed at technology professionals in IT and lines of business. Ms. Whalen architected IDC's DecisionScapes, a portfolio of decision-making methodologies that enable professionals to better plan, deploy and optimize their technology initiatives. Since introducing the IDC DecisionScape in 2014, IDC analysts around the world have published more than 600 research studies. Ms. Whalen also oversees IDC's Software and Services groups, serving technology suppliers and buyers. She manages the Customer Insights and Advisory Practice, which produces global IT spending forecasts for over 100 technologies in more than 50 countries. Most recently, Ms. Whalen led the creation of IDC's Digital Transformation and Leading in 3D thought leadership platforms. Ms. Whalen and her analyst teams advise line-of-business executives on how to digitally transform their organizations. In parallel, they advise the IT organization on how to transform IT to be an effective partner to the business in digital transformation. Ms. Whalen keynotes at CIO summits in North America, EMEA, Asia Pacific and Latin America. Early in Ms. Whalen's career at IDC, she initiated IDC's coverage of application service providers (ASPs), created IDC's first global view of business process outsourcing services, and designed the Services Group's global research process to ensure consistency in market taxonomies, research architecture, and forecasting. Ms. Whalen started her career in IT business consulting, advising clients on mid-range systems. She holds a B.A. with honors from Wellesley College and an M.B.A. with honors from Babson's F.W. Olin Graduate School of Management.
DFG To Establish Ten New Collaborative Research Centres The Topics Range from Inflammation of the Brain, to the Distribution of Oxygen in the Oceans, to Nanoscopic Structures in the Macroscopic World On 1 January 2008 the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) will establish ten new Collaborative Research Centres, which will receive a total of 74.4 million euros in funding over the next four years, as well as a lump sum of 20 percent to cover indirect costs incurred by the projects. The new Collaborative Research Centres (SFBs) will address a range of topics, including inflammation of the brain, the distribution of oxygen in tropical oceans, and nanoscopic structures in the macroscopic world. Other topics will include the neurobiological basis for behaviour, managing cycles in innovation processes, and the development of high brilliance lasers and other novel components. Two of the ten newly established Collaborative Research Centres are Transregional Collaborative Research Centres, which are based at more than one location. At its meeting in Bonn on 20-21 November, the relevant Grants Committee also approved the continuation of 26 existing SFBs for an additional funding period. The DFG will thus fund a total of 259 Collaborative Research Centres as of the beginning of next year. In total, they will receive 403 million euros in funding in 2008, plus the 20 percent programme overhead. The new Collaborative Research Centres: “The Brain as a Target of Inflammatory Processes” is the topic of SFB/Transregional Collaborative Research Centre 43, which will involve scientists from Berlin and Göttingen. Research will focus on inflammatory and immune reactions in the brain, an area that has received less attention in the past than research focussing on inflammatory processes outside the brain. The projects they plan include an attempt to find out what role inflammation plays in traumatic and neurodegenerative processes. Their main focus will be on diseases such as stroke, Alzheimer's and multiple sclerosis. They hope that their insights will find their way into clinical practice and therapy in the long term. (Host institutions: Charité – University Hospital of the Humboldt University Berlin and the Free University of Berlin. Coordinator: Frauke Zipp) SFB/Transregional Collaborative Research Centre 54 “Growth and Survival, Plasticity and Cellular Interactivity of Lymphatic Malignancies” will study how cancer cells adapt to their surroundings in patients suffering from diseases of the lymph nodes, thus possibly making it harder or even impossible for them to recover. Participating researchers from Berlin and Munich will combine animal experiments and patient-oriented projects aimed at developing novel therapeutic approaches to diseases such as Hodgkin's lymphoma, multiple myelomas and other malignant diseases of the lymphatic system. (Host institution: Charité – University Hospital of the Humboldt University Berlin and the Free University of Berlin. Coordinator: Bernd Dörken) SFB 754 “Climate – Biogeochemistry Interactions in the Tropical Oceans” will examine a topic that is of great interest in the light of climate change. It will involve oceanographers, geoscientists and microbiologists from Kiel, who will study the distribution of oxygen in tropical oceans. 
Of particular interest to them is how the oxygen concentration can fall dramatically due to interactions between physical, biological and geochemical processes, and what consequences this has on the nutrient balance in the ocean and on the climate. This research will be carried out with the help of the German research vessels Meteor, Merian and Sonne. (Host university: Christian-Albrechts University, Kiel. Coordinator: Douglas W.R. Wallace) Surface physics, magnetism, semiconductor physics, materials science and theoretical physics are the common elements of SFB 762 “Functionality of Oxidic Interfaces”. The researchers involved, from Halle, Leipzig and Magdeburg, will investigate the production of oxide heterostructures and the characterisation of their structural, ferroelectric, magnetic and electronic properties using state-of-the-art scientific methods and equipment. In addition to producing new fundamental insights, this work is also of high practical relevance, for instance for the development of new types of sensors and computer memory. (Host university: Martin Luther University of Halle-Wittenberg. Coordinator: Ingrid Mertig) SFB 765 “Multivalency as a Chemical Organisation and Action Principle: New Architectures, Functions and Applications” aims to lay the necessary groundwork for answering key issues in biological and material sciences. In this SFB, scientists from the Free University of Berlin plan to cooperate with other institutions in Berlin to study the phenomenon of multivalency in detail, paying particular attention to the fundamental chemical and biological mechanisms and molecular architectures. In the long term, they hope their work will lead to the development of novel multivalent molecules that may be of great importance for use in inhibiting inflammation or providing protection against viral infections, as well as for optimising surfaces. (Host university: Free University of Berlin. Coordinator: Rainer Haag) SFB 767 “Controlled Nanosystems: Interaction and Interfacing to the Macroscale” will investigate one of the key areas of research in the 21st century. Participating researchers from Constance and Stuttgart aim to discover how nanostructures interact with each other and with macroscopic structures – issues that are of fundamental importance for nanotechnology, but which have not yet been systematically addressed. The theoretical and experimental studies they plan promise to not only yield key insights into the basic science of nanostructures, but also a wide variety of applications in the fields of telecommunications and data storage as well as for highly integrated circuits. (Host university: University of Konstanz. Coordinator: Elke Scheer) SFB 768 “Managing Cycles in Innovation Processes – Integrated Development of Product Service Systems Based on Technical Products” will address a topic that is of equal importance to science, industry and consumers. In this SFB, mechanical engineers, computer scientists and sociologists as well as researchers from marketing and other areas will collaborate to study the cycles that are affected by technical, competitive and social influences, which have a major influence on the development and introduction to market of innovative products and services, sometimes in a very negative way. The projects they have planned cover the entire spectrum of cyclic processes of innovation, from product planning through to marketing, and from tangible goods to services, for the first time. 
Vendors and customers alike will benefit from their findings. (Host university: Technical University of Munich. Coordinator: Udo Lindemann) SFB 779 will examine the “Neurobiology of Motivated Behaviour”. Researchers will aim to identify the connection between deliberate actions, the brain structures and neural interconnections on which these actions are based, and the neurochemistry involved. The main focus will also be on pathological changes that occur in the course of various neuropsychiatric diseases. Participating scientists from Magdeburg and Leipzig will use a combination of approaches at various levels in their human and animal experiments, ranging from molecular biology to neurophysiology and psychology. They will begin by concentrating on fundamental questions about actions motivated by “how and why”, which, in the longer term, may lead to important clinical applications. (Host university: Otto von Guericke University Magdeburg. Coordinator: Thomas F. Münte) “Synaptic Mechanisms of Neuronal Network Function” is the subject of SFB 780, which will address key issues relating to the functioning of neuronal networks, examining them at three different levels – the structural aspects of individual synapses, analysis of functional networks, and modelling and analysis of human diseases. The researchers, from Freiburg and Basel, will use approaches from molecular biology, neurophysiology, genetics, anatomy and clinical medicine to investigate these issues. They hope that their findings will lead to a better understanding and more effective therapy of neuronal diseases such as epilepsy and Parkinson’s disease. (Host university: Albert Ludwigs University of Freiburg. Coordinator: Peter Jonas) SFB 787 “Semiconductors – Nanophotonics: Materials, Models, Components” aims to develop novel photonic and nanophotonic components from a variety of materials. The researchers, from Berlin and Magdeburg, will combine three complementary areas of research: material science, modelling, and production and characterisation of components. This will allow theoreticians and experimental researchers to collaborate closely in basic and applied areas. Working on this basis, they hope, in the long term, to be able to generate very high frequency and ultrashort pulses with laser diodes and semiconductor amplifiers as well as high brilliance lasers in the infra-red to green spectral range. (Host university: Technical University of Berlin. Coordinator: Michael Kneissl) Jutta Höhn | alfa http://www.dfg.de/sfb
Meet the (Non-Tech!) Silicon Valley Donor Searching for Replicable Solutions
Mike Scutari
At Inside Philanthropy, we've devoted a good deal of time talking about the various nuances of Silicon Valley philanthropy. One recurring theme is how many of these donors—particularly those in the millennial demographic—don't particularly care much for the arts. We attribute this disinterest to various factors, including a scientific mindset that draws donors to effective altruism, which argues that opera won't save lives. Silicon Valley types, the thinking goes, are wired differently than, say, Herb Alpert and Alice Walton. They're ones-and-zeros types. They like predictability, measurement, and predictably measurable results. Of course, this psychographic inclination can be a helpful asset, particularly when it comes to the perennial quest in philanthropy for replicable solutions. If something really makes sense and works, that should be true again and again, across many different places. Alas, of course, change isn't so simple—but if only it could be. Which brings us, quite naturally, to California, where Silicon Valley real estate investor Jon Freeman has given $1.5 million to Santa Clara University's Miller Center for Social Entrepreneurship to explore the best ways to replicate effective social business models.

The school frames the challenge accordingly: Many social enterprises address similar problems afflicting the global poor—such as lack of access to drinking water or to clean, affordable energy—with highly localized solutions. But could the best solutions be better replicated across regions or industries, helping lift more people out of poverty more quickly? What if, for instance, a safe drinking water business validated in one location could be reproduced and introduced to other geographic regions that also lack potable water?

Of course, many foundations fund the research for replicable solutions, particularly as they pertain to challenges in poor countries. And a number of tech funders are keen on social entrepreneurs. But the confluence of Silicon Valley dollars with a research institution like the Miller Center for Social Entrepreneurship is a bit different. For one thing, there's the center itself, which only recently emerged in its present form last year, after Santa Clara University received a $25 million gift from tech winner Jeff Miller and his wife Karen. Now, with more money coming in from another philanthropist, the picture we're starting to get here is of Silicon Valley funders building up a local think tank of sorts, one working to build up "global, innovation-based entrepreneurship in service to humanity."

Related: Millions to Foster Social Entrepreneurship on a California Campus. Will More Follow?

And what about the Silicon Valley donor in question, Jon Freeman? Well, for starters, he doesn't come from the tech world. He's the president and principal owner of the San Jose-based real estate investment firm Stonecrest Financial. Freeman earned his BA in Business Administration from the University of San Diego and is a California licensed Real Estate Broker. This background is yet another reminder that there's plenty of money in Silicon Valley that has little to do with the tech world. Indeed, some of the most high-profile and well-known philanthropic families in the region made their fortunes in real estate. We're thinking specifically of the Sobrato, Peery, and Arrillaga families. Often, though, the tech mindset seems to have rubbed off a bit on these folks.
And Freeman sounds quite a bit like some of the tech donors we come across. According to Stonecrest's website, Jon is dedicated to "hand up" philanthropy by supporting social enterprise and women's education initiatives that empower women to lift themselves out of poverty. Needless to say, his gift to the Miller Center—of which he's an advisory board member—certainly fits the bill. "I have always believed that the way to tackle challenges such as poverty or the negative impacts of climate change is by eradicating the barriers to opportunity," said Freeman. "Social entrepreneurs are more likely to build successful enterprises if they can start with a blueprint or proof of concept that has already been developed and confirmed somewhere else in the real world."
Decapods - Crabs
True crabs are decapod crustaceans and belong to a group called the Brachyura. They have a very short projecting "tail" and their small abdomens are completely hidden under the thorax. There are almost 7,000 species of true crabs, of which over 800 are freshwater species. Other animals, such as hermit crabs, king crabs, porcelain crabs, horseshoe crabs and crab lice, do not belong in the Brachyura - the true crabs. All crabs have one pair of pincers (chelipeds) and four pairs of walking legs. The chelipeds are the first pair of legs on a crab and are used for holding and carrying food, digging, cracking open shells and warning off would-be attackers. The carapace protects the internal organs of the head, thorax and gills. The eyes are on the ends of short stalks and the mouthparts are a series of pairs of short legs, specialised to manipulate and chew food. The abdomen is small and tightly held under the body. The sexes are separate and the size of the abdomen distinguishes them. For much more detailed information refer to the "World of Crabs" on this website.

Diversity in true crabs

Not True Crabs

Alaskan King Crab (Image © Philip Tong Flickr)
King crabs, or stone crabs, are in a group of crab-like crustaceans mainly found in cold seas. They are thought to be related to hermit crabs.

Horseshoe Crab (Image © Michelle Pearson Flickr)
Horseshoe crabs are arthropods that live in shallow ocean waters on soft sandy or muddy bottoms. They come on shore to mate. Although they look like crustaceans, they belong to a group called the Chelicerata, and are more closely related to spiders and scorpions.

Porcelain Crab
Porcelain crabs are in a group which resemble true crabs. They are usually less than 15 mm wide, and have flattened bodies to allow them to live in rock crevices. They differ from true crabs as they appear to have three instead of four pairs of walking legs (the fourth pair is reduced and held against the carapace), and by their long antennae, which grow from the front outside of the eyestalks.

Hermit crab
Hermit crabs are more closely related to squat lobsters and porcelain crabs than they are to the true crabs. Most of the 800 species possess a long, soft, spirally curved abdomen which is covered by a discarded shell (usually from a gastropod) carried around by the hermit crab.

Crab Louse (Pthirus pubis)
Crab lice (also known as "pubic lice") are insects which are parasites of humans. They can also live on other areas with hair, including the eyelashes. They only feed on blood.

Sand Bubbler Crabs
These are Sand Bubbler Crabs, and they're sifting through huge amounts of sand in search of detritus. The waste sand builds up into a ball, and they kick the ball away before it gets too big to see over.

http://en.wikipedia.org/wiki/Crabs http://museumvictoria.com.au/crust/crabbiol.html www.mesa.edu.au/crabs

Pea crab (Image © Marck Hicks via Flickr)
The pea crab is a small crab that is a parasite of oysters, clams, mussels and other species of bivalves.
Researchers Study Dirt In the Diet
AMHERST, Mass., Feb. 23— Researchers working with a grant from a company involved in lawsuits over dioxin contamination are studying used diapers to determine how much dirt from playgrounds and backyards makes its way into children's diets. The researchers are looking for traces of chemicals that the body does not absorb and that are known to be in the soil. The work is being done with a $344,000 grant from the Syntex Corporation, a California pharmaceutical company that is involved in lawsuits over contamination of soils by dioxin in Times Beach, Mo. The study involves 65 children 1 to 3 years old. To prepare for it, investigators first tested dirt in the diets of six adults last fall. Four of the adults were parents of the children in the diaper study.

Capsules of Dirt, Twice a Day
The volunteers swallowed 50 milligrams of soil in a capsule twice a day for one week and increased the dosage to 250 milligrams the next week. As a control, they unknowingly swallowed empty capsules a third week. The soil was thoroughly analyzed and guarded during the experiment to make sure it was untainted, said Charles Gilbert of the University of Massachusetts Public Health Center, the project manager. Subjects saved samples of everything they ate and collected stool and urine samples. Each family was paid $500 for the two-week experiment and they earned it, said Mr. Gilbert. They kept diapers on ice until researchers could pick them up at the end of the day in a special truck that freeze-dried them to preserve the contents for analysis. About 4,000 diapers were collected, Mr. Gilbert said Saturday. Also collected were samples of dirt around the homes and a duplicate of everything the children ingested. If a mother made a sandwich for her child's lunch, for example, she would make an extra one and save it in a plastic bag, Mr. Gilbert said.

Pollutants in Soil
The researchers supplied diapers and the same brand of cleansing tissues and toothpaste, so that they could rule those out as reasons for differences in results. Edward Calabrese, the principal investigator, said he hoped to have some results by the end of April. Researchers said the study could help determine standards for levels of pollutants in the soil. "We've gotten lots of calls already," Mr. Gilbert said, adding that among the callers were representatives of the Federal Environmental Protection Agency and the Massachusetts Department of Environmental Quality Engineering.
How to cut energy consumption with new technology
Learn how industrial companies are finding ways to increase productivity with less energy.
By Paul Studebaker
The University of Texas at Austin generates 100% of its electric power, steam and cooling for more than 150 buildings comprising 20 million square feet. Electricity, chilled water, steam, condensate recovery, water and sewer lines all run underground in truly redundant loop configurations through a network of tunnels throughout the campus. The university's 135 MW of combined heat and power operates with 88% fuel efficiency at 99.9998% reliability. But it hasn't always been this way. "17 years ago, we were operating at 62% efficiency," said Juan Ontiveros, P.E., executive director of utilities and energy management for the university, speaking at the Power and Energy Management Industry Forum this week at the Automation Fair in Houston, presented by Rockwell Automation. The Austin campus and its power systems had grown organically since 1929. "Our controls were all pneumatic, and we were at capacity. We had to upgrade, but we couldn't shut down." A rough sense of what that efficiency jump is worth is sketched below. Read the whole story on Control Global.
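As a back-of-the-envelope illustration only (the article gives no fuel or output figures, so the constant-output assumption and the reading of "fuel efficiency" as useful energy delivered per unit of fuel energy are ours), the quoted move from 62% to 88% efficiency implies roughly 30% less fuel burned for the same delivered energy:

```python
# Hypothetical illustration: fuel needed per unit of useful energy delivered,
# assuming "fuel efficiency" = useful energy out / fuel energy in.
def fuel_for_output(useful_energy_mwh: float, efficiency: float) -> float:
    """Fuel energy (MWh) required to deliver a given useful energy at a given efficiency."""
    return useful_energy_mwh / efficiency

delivered = 1000.0  # an arbitrary 1,000 MWh of delivered electricity, steam and cooling

fuel_old = fuel_for_output(delivered, 0.62)  # ~1613 MWh of fuel at 62% efficiency
fuel_new = fuel_for_output(delivered, 0.88)  # ~1136 MWh of fuel at 88% efficiency

savings = 1 - fuel_new / fuel_old            # ~0.295, i.e. about 30% less fuel
print(f"Fuel at 62%: {fuel_old:.0f} MWh, at 88%: {fuel_new:.0f} MWh, saving {savings:.0%}")
```

In other words, per unit of energy actually delivered to campus buildings, fuel use (and the associated cost and emissions) would drop by almost a third, assuming the load were held constant; in practice the campus has grown, so total consumption figures would differ.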
NAB debating the future life or death of AM Radio
By Paul Riismandel on October 4, 2012 in HD Radio, Sports Radio, Talk Radio
Inside the radio industry there has been quite a bit of hand-wringing about the AM dial. Though conservative talk stations and sports stations in big markets continue to generate ratings and revenue, there's an increasing recognition that other AM stations aren't doing as well. Many blame overcrowding on the dial, which leads to more interference, especially at night. Others claim broadcasters themselves are to blame, for not maintaining facilities and for uncreative programming. Over at DIYmedia.net John Anderson takes a critical look at the solutions being examined by the National Association of Broadcasters, which has formed a task force to tackle the future of AM. John writes,

The Task Force seems to be considering two primary ideas for "revitalizing" AM broadcasting. One is to phase it out completely and migrate all AM stations to new spots on the FM dial. The other involves a wholesale conversion of AM broadcasting from analog to digital, using AM-HD as the mechanism. Neither of these proposals are optimal. Both would necessitate listeners buying new receivers to take advantage of any changes, and they would be expensive and disruptive to all AM broadcasters – many of whom are on shaky financial footing already. The NAB, as the handmaiden of the largest broadcast conglomerates (and with the close cooperation of National Public Radio) seems to be leaning toward the digitalization route.

Either will be a tough sell. I tend to come down on the side of thinking that the big broadcasters made their own bed, similar to how Clear Channel and its ilk squeezed the life out of commercial music radio on FM over the last 16 years. Just like HD Radio has failed to rescue FM, I have serious doubts that digitizing AM will save it, either. I also don't agree with scrapping AM. Although it is an older technology, which poses technical and fidelity challenges that FM does not face, it also has distinct advantages. First, AM transmissions can cover a much bigger geographic area than FM, nearly half the North American continent with the right power level. Because they don't travel line-of-sight, it's easier to send and receive AM signals in hilly or mountainous areas than FM. Second, AM receivers are simple to build and operate — a crystal set doesn't even need batteries. While this may seem downright antiquated in the mobile internet age, it can be a real lifesaver during a natural disaster or other emergency that results in extended power outages. Finally, the infrastructure is already there, and is in use. There are still millions of listeners tuning in AM radio each day, who would likely lose many of their favorite stations were the service eliminated. Furthermore, the AM broadcast band is a tiny swath of spectrum, not particularly useful for data services like the FM and UHF bands are. Of course, any change would require a long FCC proceeding. But that doesn't mean change is necessarily unlikely or impossible. What it means is that those of us who care about preserving the service need to be aware and ready to engage in the debate.
One Response to NAB debating the future life or death of AM Radio
**Just like HD Radio has failed to rescue FM** was it supposed to "rescue" FM? Who said that? HD was simply an added feature which brings some extra usability to radio. Those who are shocked, shocked(!) that it hasn't become the next iPod or iPhone were misinformed. The main thing that will save AM is a combination of digitization…AND…content. As AM has gone downhill in demographics and listeners, there is almost nothing to listen to on AM, aside from news, sports and talk. Those formats are already migrating to FM. No one is doing anything of interest to attract people to AM anymore. Someone like Howard Stern would bring a new validity to AM radio. Digitization would allow some music formats to proliferate, formats that are getting crowded off the FM dial.
January 03, 2013 in CES News
SAMSUNG NX300 Combines Speed, Performance and Connectivity for Perfect Shooting in Every Moment
Samsung 45mm F1.8 2D/3D lens is the world's first one-lens 3D system, capable of capturing both stills and 1080p Full HD video in perfect 3D quality
LAS VEGAS, USA – January 3, 2013 – Samsung Electronics Co., Ltd, a global leader in digital media and convergence technologies, today announced the launch of the NX300, the new flagship model in its successful NX series. The NX300 once again raises the bar for compact system cameras (CSC), delivering an outstanding combination of features, functionality and style for perfect shooting in every moment. Samsung also announced the new NX 45mm F1.8 2D/3D lens, the first one-lens 3D system capable of capturing both still images and full HD movies in perfect 3D quality. Myoung Sup Han, Senior Vice President and Head of the Digital Imaging Business, Samsung Electronics, commented: "Since its launch in 2010, the Samsung NX System has become synonymous with innovation and design, making the best digital imaging technology available to everyone without compromising on style. The NX300 builds even further on this illustrious heritage. Not only does it deliver incredible speed and accuracy through advanced photographic features, but it is truly built around the user, with upgraded SMART Camera functions making shooting and sharing both easy and seamless. We are also very proud to introduce the first one-lens 3D system alongside the NX300, opening up a whole new world of 3D photography and video recording for consumers everywhere."

Perfect shooting
Central to the NX300's outstanding imaging performance is its brand new 20.3 Megapixel APS-C CMOS sensor, capable of capturing high-quality, detailed images. Images boast sharp, life-like colors and are crisp and clear in all light conditions. The wide ISO range (ISO100-25600) further improves performance, letting users capture beautifully balanced images in even the darkest conditions. A brand new Hybrid Auto Focus (AF) system also delivers fast, accurate phase and contrast detection, while the 1/6000 sec shutter speed and 9fps continuous shooting mode ensure you never miss a moment. Samsung's unique in-house-developed DRIMe IV imaging engine provides outstanding improvements in speed and image quality. The DRIMe IV engine enables better color reproduction and greater noise reduction, as well as support for full 1080p HD video capture in both 2D and 3D (when combined with Samsung's new 45mm 2D/3D lens). The NX300's 84mm (3.31") AMOLED screen makes capturing and reviewing your images a pleasure, while the hybrid touch screen and 5-way manual key interface is simple and intuitive to use. The tilt display makes it easier than ever to take high and low angle shots, letting you capture your world from any angle you choose.

Easy usability and stylish retro design
The compact NX300 makes it simple for anyone to achieve pro-like and impressive pictures easily with minimal technical knowledge or time spent adjusting parameters.
Using the camera's Smart Mode, users can choose from 14 different settings, such as Creative Shot, Landscape, Light Trace or Action Freeze, which will automatically adjust parameters such as aperture and shutter speed in order to obtain the best shot possible for the desired situation. The NX300 also includes i-Depth, an easy and simple way to adjust the depth of an image using the NX Series' unique i-Function system, which enables users to modify camera parameters using the lens itself, adjusting the image without ever having to move off target. The NX series' acclaimed design returns, with the NX300 sporting a stylish yet simple retro feel that is available in either black or white with a contrast silver band, conveying elegance and authenticity.

Instant sharing through enhanced Wi-Fi connectivity
The NX300's SMART CAMERA Wi-Fi connectivity allows users to share their cherished photos instantly and securely between their camera and smartphone (or tablet). To connect a smartphone to the camera, users simply need to download the Samsung SMART CAMERA App, which is available from both Android and iOS application markets, onto their smartphone or tablet and follow the easy steps to create a secure connection with the camera. The NX300 will be able to detect and auto-connect to the smartphone wirelessly whenever any of the SMART CAMERA features are activated. These include AutoShare, which automatically sends every high-quality photo to your smartphone for safe keeping, and Mobile Link, which allows users to select and transfer images or albums from the camera directly to their smartphone at their leisure. The SMART CAMERA App also features a Remote Viewfinder function for the NX300, allowing for even more inventive and exciting photography. The NX300's enhanced Wi-Fi connectivity also lets users share images with friends and family directly from the camera via social networking sites using the camera's one-touch DIRECT LINK hot key. Alternatively, images can be automatically backed up or stored in the cloud through AllShare Play, ensuring they are always safe and easily accessible.

Samsung 45mm 2D/3D lens
The Samsung 45mm 2D/3D lens (sold separately) opens up an exciting world of 3D imaging possibilities and is the world's first one-lens 3D system for a consumer camera. Capable of capturing both still pictures and full 1080p HD video, the NX300 paired with the 45mm 2D/3D lens is the only compact system camera supporting both 3D stills and 3D movies. The NX300 is also compatible with Samsung's entire range of NX lenses and professional-standard accessories, giving users an unparalleled range of options when striving for that perfect shot. Please visit our booth to experience this future technology firsthand. Samsung's product line will be displayed January 8-11 at booth #12004 in the Central Hall of the Las Vegas Convention Center. Full details, video content and product images are available at the Samsung microsite at www.samsungces.com and the mobile site at http://m.samsungces.com. The Samsung press conference and Samsung Tomorrow TV CES 2013 Specials will be streamed live on the Samsung Tomorrow blog at http://global.samsungtomorrow.com and on Samsung's microsite as well. After the live presentations, videos will be available at http://youtube.com/SamsungTomorrow

About Samsung Electronics Co., Ltd.
Samsung Electronics Co., Ltd. is a global leader in consumer electronics and the core components that go into them.
Through relentless innovation and discovery, we are transforming the worlds of televisions, smartphones, personal computers, printers, cameras, home appliances, medical devices, semiconductors and LED solutions. We employ 227,000 people across 75 countries with annual sales exceeding US$143 billion. Our goal is opening new possibilities for people everywhere. To discover more, please visit http://www.samsung.com.

Note to Editors
NX300 Product Specifications
45mm 2D/3D lens Specifications
*All functionality, features, specifications and other product information provided in this document including, but not limited to, the benefits, design, pricing, components, performance, availability, and capabilities of the product are subject to change without notice or obligation.
LG Optimus 2x hits UK in March for £30.65 per month
Devina Divecha
Date revised to March 2011; Vodafone tariffs revealed
Looks like we'll have to wait slightly longer than expected for the LG Optimus 2x. The LG Optimus 2x was expected to release on 21 February, as we had reported last week. However, the online retailer Expansys now shows the date as being 18 March 2011. To remind customers waiting for the LG Optimus 2x, it is available for pre-order at a SIM-free price of £500, and Vodafone has come out with its prices for the phone on contract. Contracts with Vodafone start at £30.65 a month, which gives you the handset free, unlimited texts, 300 minutes and 500MB of internet. Free gifts with the contract include a universal PDA/phone car mounting kit and a microUSB car charger. The phone was revealed at CES 2011 and is powered with Android 2.2 Froyo, has an Nvidia Tegra 2 dual-core processor, a 4-inch WVGA touchscreen display and 8GB of internal memory, which can be increased by 32GB via microSD. In addition, it is the first handset to possess full HD 1080p video recording and playback. An upgrade to Android 2.3 has already been announced. The UK is also hoping to see the LG Optimus Black land sometime in March. Are you pre-ordering the LG Optimus 2x? Let us know on T3's Twitter and Facebook feeds, and stay tuned for the latest news from the world of tech.
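For readers weighing the two routes, a quick cost comparison helps. Note that the article does not state the contract length, so the 24-month term below is purely an assumption for illustration, and the comparison ignores whatever a separate SIM-only plan would cost alongside the £500 handset:

```python
# Hypothetical comparison of the two purchase routes for the Optimus 2x.
monthly_fee = 30.65          # Vodafone contract, handset included (from the article)
assumed_term_months = 24     # assumption: a typical UK contract length, not stated in the article
sim_free_price = 500.00      # Expansys pre-order price (from the article)

contract_total = monthly_fee * assumed_term_months       # about £735.60 over the assumed term
premium_over_handset = contract_total - sim_free_price   # about £235.60 buys the airtime bundle

print(f"Contract total over {assumed_term_months} months: £{contract_total:.2f}")
print(f"Amount effectively paid for calls/texts/data: £{premium_over_handset:.2f}")
```

On those assumptions, the contract route costs about £735.60 in total, so roughly £235.60 of it is effectively paying for the bundled minutes, texts and data rather than the handset itself.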
Six things you should know about the Huawei Watch
James Peckham
Finally...it's launching soon

Introduction
Huawei's Watch has been a long time coming – it was officially announced at MWC 2015 at the start of March. Since then little else has been revealed, and we've been sitting patiently and twiddling our thumbs. We got some brief hands-on time with an early version of the Watch at the show, but it wasn't running the final software and didn't deliver the full experience. That was a bad idea in our book – the demo was clearly pushed out early in an attempt to make sure Huawei doesn't fall any further behind its rivals in the wearables stakes. When we were invited to Huawei HQ in Shenzhen, China, we thought it would be the perfect opportunity to find out more, so we dug around for the latest information on everything from the design to the launch date – here's what we discovered... Read our hands-on with the Huawei Watch

1. It's coming "very soon"
Huawei's Watch was announced more than six months ago now, and it still isn't adorning our wrists. Many observers are starting to suspect that there are some issues with the device, but it's more likely that the watch was just announced prematurely. The fact that the devices at the launch weren't running any software at all suggests it was pushed through to the announcement stage a little early to make sure Huawei didn't fall any further behind its Android Wear competitors. Wearable Product Design Manager at Huawei, Pan, told techradar: "Very soon you will see it in the market." So rumours of the Watch launching in the next few weeks seem to hold water – could we see something at IFA 2015?

2. It's still in testing
Even though we've heard the Huawei Watch will be launching in the near future, we know it's still going through the testing process. We managed to get some time around the Huawei testing facilities in Shenzhen, and in the environmental testing room we spotted a couple of hidden Huawei Watch displays. On the other hand, it may be that the company is already working on the follow-up to the Huawei Watch and it's already hit the testing process. There was no sign of the rest of the Watch housing, but these watch faces were inside a machine that drops the temperature from high to really low to see how the device handles the change. According to Pan, the Huawei Watch has gone through over 650 hours of quality testing to check that it's not going to get damaged easily.

3. No Emotion UI or OS tweaking
Huawei is all about customising the interface. Emotion UI on the phones is a bone of contention for some users with its different approach to Android design. That won't be available on the Watch OS, though – it's strictly an Android Wear affair here. The unique look of Emotion UI isn't going to make the jump, and there won't be any big changes in the way you interact with the Huawei Watch compared with a Moto 360. Pan told techradar: "To a certain extent we can customise but we can't go very deep, we provide the watch faces for people to choose."

4. Employees are already wearing them
In my few days exploring the Huawei campus I noticed a few people had some interesting wristwear.
I know for certain that at least three employees around the factory and campus in general were wearing the Huawei Watch, although it wasn't clear how complete they were. They may have been early builds, or could even be a new version of the Watch, but whenever we approached employees with questions on their choice of gadget they backed away, muttering that they couldn't talk about it.

5. Battery life is 2 to 3 days
Battery is a big concern on smartwatches, and the Huawei Watch is no exception. We now know there's a 300mAh cell under the hood, but there's no precise figure for how long that's going to last. Pan told us: "I don't use it very heavily, so two or three days." That sounds like the average kind of Android Wear battery life, and it makes sense as it's a similar cell to those we've seen in other watches. If you manage to get a full three days' life out of the watch while actually using the screen, however, then we'd be impressed – that's unheard of so far.

6. It's been in development for years
Huawei hasn't just thrown the Watch out after no research at all – a representative confirmed to techradar it has been in development for over two years. That means it was being developed before the general public even knew Android Wear was a thing – it wasn't announced until March 2014. Huawei may have even started creating its own OS to run on the wearable before it knew Android Wear existed, and then decided to switch midway through the R&D process.
BlackBerry PlayBook: Beware of the demo-ware tablet
The BlackBerry PlayBook looks flashy and has excellent specs, but it's still a long way from coming to market and it may fall short in the four areas where a tablet needs to excel.
By Jason Hiner in Tech Sanity Check, September 28, 2010, 1:12 AM PST
I have a similar feeling about the BlackBerry PlayBook (below) as I had about Google Wave when it was introduced. It's a product that looks great in a PowerPoint presentation but when I think about it in the real world, I start to have my doubts. After its flashy introduction on Monday, my skepticism of the PlayBook deepened when there were no pre-release units available for us to try after the demo. The only glimpses available of the BlackBerry tablet were a few of them suspended behind glass running short videos in a continuous loop. That, combined with the fact that the release date is "early 2011," means that this product is nowhere near complete. Research in Motion announcing it 4-6 months before it actually arrives in the market is RIM's way of saying, "Hey, we've got a tablet, too. Before you go out and buy an iPad or an Android tablet, hold off until we come out with ours." This "freezing the market" technique is an old trick employed effectively by others, but especially Microsoft. However, it doesn't work when there are already viable products in the market from trusted vendors. The fact that RIM is pre-announcing the PlayBook so early is evidence that they are fearful of the iPad gaining too much momentum in the enterprise. Another big warning siren with the PlayBook was the way RIM announced it. RIM co-CEO Mike Lazaridis put emphasis on two things:
This is a tablet for professional business users
The PlayBook has great specs
It certainly makes sense for RIM to focus on the enterprise. That's where its traditional strength is and enterprises have taken a quick and surprising affinity to the iPad, which means there's definitely a market there. However, while the PlayBook does have impressive specs, the fact that RIM chose to emphasize them so heavily isn't a good sign. RIM talked about the PlayBook's dual core processor, 1 GB of RAM, and Flash 10.1 as if they had just pulled out a royal flush at the poker table. They seemed to gloat with self-satisfaction over each of these features, as if to say, "Aha! See, we're sticking it to the iPad." Not only was that annoying, it was evidence that RIM is stuck in 1990s thinking about computing devices. The bottom line is that most of those specs don't mean much any more. Both consumers and the enterprise — at least, the smart enterprises — want products that just work and that get the technology itself out of the way. (I would say that Flash is one of the things that people want to just work, but after using it on Android 2.2 devices and seeing how slow and buggy it is, I'm starting to think NOT having Flash on mobile devices is a benefit.) The iPad has four killer features:
Ease of use
Great battery life
Lots of apps
An attractive price
Any tablet that wants to compete with the iPad needs to be at least minimally competent in those four areas and then bring something to the table that outshines the iPad. Unfortunately, the PlayBook is likely to come up short in all four areas. In terms of ease of use, while the demo of the PlayBook's tablet OS looks like a mix between the iPad and the Palm WebOS, RIM does not have a good history of building usable software.
Their software is very secure and it's full-featured, but ease of use has never been one of their strengths, so they would have to pull off a coup here. The primary reason why the iPad has been so successful is because the user experience is almost completely self-evident. In neither RIM's on-stage presentation nor in its official press release did the company mention a single word about battery life. While it sounds impressive that the PlayBook has a 1 GHz dual-core processor, it takes a lot of power to run that kind of CPU. BlackBerry devices typically have excellent battery life, so RIM knows what it's doing in this department. Still, it would be very difficult to get over 10 hours of battery life (the iPad's gold standard) out of a tablet with a dual core CPU. And, the fact that RIM didn't mention battery life is probably an indication that it's something they're still wrestling with. In terms of apps, the PlayBook is built on QNX, a totally separate architecture than the traditional BlackBerry OS. Here's what RIM said about it as an app platform in its official statement: "The OS is fully POSIX compliant enabling easy portability of C-based code, supports Open GL for 2D and 3D graphics intensive applications like gaming, and will run applications built in Adobe Mobile AIR as well as the new BlackBerry WebWorks app platform announced today (which will allow apps to be written to run on BlackBerry PlayBook tablets as well as BlackBerry smartphones with BlackBerry 6). The BlackBerry Tablet OS will also support Java enabling developers to easily bring their existing BlackBerry 6 Java applications to the BlackBerry Tablet OS environment." I applaud RIM for having the guts to do a complete reboot on their tablet OS, but this also means that when the new platform launches there probably won't be many apps, since most of the existing BlackBerry apps will need some tinkering in order to work on the tablet. And then, RIM is going to have to convince developers to write apps for its tablet instead of (or in addition to) iPad and Android. The other thing RIM didn't talk about when unveiling the PlayBook was the price. As most of you probably know, when a salesperson doesn't tell you the price of something upfront it's usually because the product is expensive and they want to sell you on the value so that you don't get sticker shock from the big price tag. A lot of people who scoffed at the idea of an Apple tablet at the rumored $999 price tag before its launch changed their minds when the iPad was unveiled at $499 for the least expensive model. I'm afraid we could see the opposite phenomenon with the PlayBook, especially with all of the high-end specs RIM is touting. A lot of those who are intrigued by the PlayBook today could be priced out of the device when we finally learn the real price tag in the coming months. If it comes in at $800 or more, as I suspect it might, then it will likely be a narrow niche product, at best.

About Jason Hiner
Jason Hiner is Global Editor in Chief of TechRepublic and Global Long Form Editor of ZDNet. He's co-author of the book, Follow the Geeks. See all of Jason's content @jasonhiner. Jason Hiner has nothing to disclose. He doesn't hold investments in the technology companies he covers.
14 Cold, Hard Facts About Noah's Ark That You Probably Do Not Know
The new Hollywood blockbuster "Noah" has created a tremendous amount of interest in the story of Noah's Ark. Traditionally, most people have regarded it as just a cute Bible story to tell children. But could it be real? Is there solid evidence that Noah's Ark actually existed? If there is real evidence, would you believe it? What you are about to see is absolutely stunning. In fact, some of the things that you are about to see are so shocking that many people will simply refuse to accept them. Later in this article, you are going to see video footage of the physical remains of Noah's Ark. This discovery has been known about for quite some time, but the mainstream media has mostly ignored it. A boat-shaped object that is the exact length that the Ark should be and the exact width the Ark should be has been found on the mountains of Ararat. Ground penetrating radar shows us how the Ark was laid out, and scientific tests have been conducted on wood and metal extracted from the gigantic buried boat. If you have never heard about any of this before, prepare to be blown away. The following are 14 cold, hard facts about Noah's Ark that you probably do not know...

#1 The Ark was about 500 feet long. That would make it approximately the size of a World War II aircraft carrier.

#2 It has been estimated that the Ark had an internal volume of more than 1.5 million cubic feet. (A rough check of how these two figures follow from the traditional dimensions appears at the end of this article.)

#3 According to brand new research conducted by scientists at the University of Leicester, Noah's Ark could have carried at least 70,000 animals without sinking... Noah's Ark would have floated even with two of every animal in the world packed inside, scientists have calculated. Although researchers are unsure if all the creatures could have squeezed into the huge boat, they are confident it would have handled the weight of 70,000 creatures without sinking.

#4 Of course the Ark would not have needed to hold 70,000 animals. One conservative estimate puts the number of animals on the Ark at about 16,000. This would have allowed for more than enough room for food, supplies and lots of empty space.

#5 We have discovered at least 250 different ancient cultures that have a story of a massive, cataclysmic flood. Most of those stories have striking similarities to the Genesis account.

#6 If there really was a global flood, we would expect to find billions of dead things laid down in rock layers all over the globe. And that is precisely what we find.

#7 We know where Noah's Ark is today. Yes, you read that correctly. As you can see from the YouTube video posted below, the remains of Noah's Ark have been discovered on "the mountains of Ararat" in Turkey. This video footage is absolutely stunning...

#8 The remains of the Ark are just as long and just as wide as the Bible says they should be.

#9 Wood from the Ark has been tested, and the tests show that it does contain organic carbon. That means that the fossilized wood that was discovered was once living matter.

#10 Along with wood, metal was also used in the construction of the Ark. Iron fittings and aluminum have both been discovered at the remains of the Ark.

#11 Petrified animal dung, a petrified antler and an ancient piece of cat hair have also been retrieved from the remains of the Ark.
#12 Historical records confirm that people have believed that this is the correct resting place of the Ark for a very, very long time...

About the author: Michael T. Snyder is a former Washington D.C. attorney who now publishes The Truth. His new thriller entitled "The Beginning Of The End" is now available on Amazon.com.
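As promised above, here is the back-of-the-envelope check behind facts #1 and #2. The Genesis account gives the Ark's dimensions as 300 by 50 by 30 cubits; the length of a cubit is not known precisely (estimates typically run from roughly 18 to 21 inches), so both figures depend on that assumption:

```python
# Rough check of the "about 500 feet long" and "more than 1.5 million cubic feet" claims.
# Assumptions: dimensions of 300 x 50 x 30 cubits (Genesis 6:15), cubit of 18-21 inches.
def ark_dimensions(cubit_inches: float):
    """Return (length_ft, width_ft, height_ft, volume_cubic_ft) for a given cubit length."""
    cubit_ft = cubit_inches / 12.0
    length, width, height = 300 * cubit_ft, 50 * cubit_ft, 30 * cubit_ft
    return length, width, height, length * width * height

for cubit in (18.0, 20.6, 21.0):  # common cubit, royal cubit, long estimate
    l, w, h, v = ark_dimensions(cubit)
    print(f"cubit {cubit:.1f} in -> {l:.0f} x {w:.0f} x {h:.0f} ft, volume ~{v/1e6:.2f} million cu ft")
```

With the short 18-inch cubit the hull works out to roughly 450 feet long and about 1.5 million cubic feet, which matches fact #2; the "about 500 feet" of fact #1 corresponds to the longer cubit estimates, so the two figures in the article appear to rest on slightly different cubit assumptions.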
Irradiated beef may be coming to your store
Production of irradiated beef is beginning, and with it comes the issue of customer acceptance. Experts such as the U.S. Food and Drug Administration's Dr. George Pauli say that the process may have significant health benefits in providing a safe food product, and that "there is no factual evidence for risk."

This much is known: Irradiation can reduce or eliminate harmful pathogens such as salmonella, E. coli O157:H7, and Campylobacter, three major causes of food-borne illness. Irradiating prepared ready-to-eat meats such as hot dogs and deli meats could eliminate the risk of listeria from foods, according to the Centers for Disease Control and Prevention. Irradiation could also eliminate parasites like cyclospora and bacteria such as shigella and salmonella from fresh produce. Irradiation of animal feeds could prevent the spread of salmonella and other pathogens to livestock through their food.

Irradiation may be done with gamma rays, electron beams, and X-rays. According to the National Cattlemen's Beef Association, beef products will most likely be irradiated using an electron beam system similar to the X-ray systems used for security purposes. "I want the consumer to understand that at no time does the product become radioactive," said Ruth Weisheit, public affairs specialist of the Brunswick, O., office of the Food and Drug Administration. (The FDA approves the source of irradiation and the level of radiation.)

Irradiated meat and poultry products sold at retail must bear the international radura symbol in conjunction with a statement such as "Treated with radiation" or "Treated by irradiation." "The finished product in the supermarket must be labeled," said Ms. Weisheit. "Non-packaged products (such as bananas or mangoes) must have the symbol in the area, such as the produce bin. It must have both the words and the symbol."

While scientists draw a parallel between pasteurization and food irradiation, some consumer groups say that irradiation destroys important vitamins and enzymes. But Ms. Weisheit notes that it does not kill vitamins or enzymes any more than any other form of food preservation. "We have approved irradiation for a number of years for a number of products, and it has not been used," she said.

Irradiated meat can be recontaminated if it is not properly handled, she added. However, irradiation provides an extra margin of safety. It gives manufacturers and processors an additional tool to effectively ensure the safety of meats, fruits, and vegetables for consumers, but it is not a substitute for proper food-handling practices, according to the Food Technology Service, an irradiation facility in Florida using gamma radiation on seasoning and spices, consumer products, and packaging materials.

Even though irradiation has been approved for raw meat, producers and processors must now determine if they want to use that method. That means it will take time for the product to reach supermarket refrigerated cases. When it does, look for the labeling.

Under the United States Department of Agriculture's labeling requirements, meat served in restaurants and cafeterias will not have to be labeled. The product could also end up in hospitals and nursing homes, according to the FDA's Dr. Pauli. "Consumers have seen examples of people getting sick [from tainted food].
People will take that into account when they evaluate the benefits of irradiation."In 1999, seafood processors requested to be allowed to irradiate molluscan shellfish, such as oysters, clams, and mussels. Permission has not yet been granted. These shellfish are often eaten raw, and may contain strains of vibrio bacteria that are normally present in the water in which the shellfish grow.As long as taste is not affected, irradiation may well be the next step in food safety practices.But the bottom line in consumer acceptance may be the price differential, whether it is big or small. To date, irradiated products have cost slightly more.Kathie Smith is The Blade's food editor.
科技
2017-09/1580/en_head.json.gz/7860
(Bubo scandiacus) The snowy owl was recently reclassified in the genus Bubo, which is part of the family Strigidae or "typical" owls. This circumpolar owl inhabits tundra regions of Eurasia and North America. During the fall and winter, it migrates to southern Canada and northerly parts of the United States. During years when high numbers of owl young are produced, the snowy owl may wander as far south as the central United States. The snowy owl prefers open areas for its breeding range, including tundra and grasslands. During winter it seeks open areas to the south, including prairies, marshes or shorelines. The snowy owl is the heaviest North American owl, and one of the largest in overall size. They average 20-27 inches (51-69 cm) tall, with a wingspan of almost 5 feet (1.5 m). They have large, round heads, black beaks, no visible ear-tufts and yellow eyes. As with most owls, females are larger than males; females average 5 pounds (2.3 kg), males 4 pounds (1.8 kg). The female's body is white and highlighted with dark brown bars and spots. Males are nearly pure white. Both sexes of immature owls are heavily marked with brown barring. Up to 25 years in captivity. Birds have been documented to live at least 10 years in the wild. In the wild: Snowy owls prey primarily on lemmings, mice and voles. They also eat large birds, such as ptarmigans and a variety of waterfowl. Snowy owls have been known to wade into water to catch marine animals with their talons. At the zoo: Mice, rats and coturnix quail. Courtship begins in May, with males performing aerial displays of dives, soars and exaggerated wing beats. They often carry a dead animal in their beaks as food gifts for prospective mates. While on the ground, males may also spread their wings to impress females. Once paired, the owls nest on a prominent point that offers a good view in all directions. Nests are not elaborate, simply being a scooped-out area lined with feathers and moss. The amount of available food determines clutch size. An average of seven eggs are usually laid, but clutch size may be even larger if prey is exceptionally plentiful. During periods of few prey, owls may not nest. Females incubate their eggs for 32-34 days. The female incubates and broods, while the male hunts for food. About three to four weeks after hatching, young leave the nest and scatter in close proximity of the nest. Once all the chicks have left the nest, both parents must feed the scattered young. Young fledge seven to eight weeks after hatching. Due to nearly 24 hours of light per day in the tundra, snowy owls are active any hour of day or night, and perch on the ground or any raised object while searching for food. They may gather in groups of up to 20 or more owls when they disperse south in the fall, particularly when they find a location with plenty of prey. Loyal Parents Snowy owls are very protective parents. A male will dive and strike at most any intruder that enters its territory or threatens its family. This includes animals such as arctic foxes, wolves and even humans. Another means of defense against predators is to act as if it's wounded. If a predator comes too close to the nest, either parent may drag their wing(s) on the ground. They hope to trick the predator into thinking that the wounded owl is an easy meal, while they slowly lead it farther and farther away from the nest. Woodland Park Zoo's snowy owls can be seen in our Northern Trail exhibit area. Other owls can be viewed in our Raptor Center and near Bug World. 
During the breeding and nesting season, snowy owls inhabit tundra areas that experience severe weather conditions. Well adapted to live in these harsh environments, snowy owls face few threats and their populations are stable. This changes, however, as these owls migrate south and come into contact with human civilization. Although not a threat to the species snowy owls die from flying into utility lines, wire fences, automobiles, airplanes (at airports) and other human structures. Some owls are even killed by hunters. Many raptor species are in trouble. Human-caused changes in land use are escalating, and this affects the habitats and migratory corridors required by some raptors for survival. Vast forests are being removed for timber and other paper products, and industrial emissions are polluting water and air resources. Critical shoreline and riparian zone habitats are being rapidly converted by expanding human communities and agricultural needs. Shooting and trapping are also lowering raptor numbers. It's only a matter of time until more raptor species may face extinction, unless we take measures to protect their habitats. Humans need raptors. Here are only a few of the benefits raptors provide: Raptors help keep animal populations in balance. Raptors consume many animals that humans consider as pests, including mice, rats and destructive species of insects. This helps to control disease and damage to crops. As top predators of their food chain, raptors are an indicator species of the overall health of the ecosystem in which they live. Of equal importance, witnessing wild raptors enriches each of our lives. Imagine what life would be like if we could no longer hear the haunting evening call of the owl. Efforts to save threatened and endangered raptors require cooperation and support at international, national, regional and individual levels. You can help in this cause. Join and become active in Woodland Park Zoo and other conservation organizations of your choice. Eliminate or reduce pesticide use. Support breeding programs for endangered birds of prey at zoos and other animal care organizations. Contact Woodland Park Zoo at webkeeper@zoo.org to find ways you can support conservation programs at the zoo. Discover more about raptors by calling the Peregrine Fund (208) 362-3716. Learn other ways you can help conserve wildlife and their habitats by visiting our How You Can Help page. Burton, J.A. (editor). 1992 (third ed.). Owls of the World: Their Evolution, Structure and Ecology. E.P. Dutton, New York, NY. 216 p. Jarvis, Kila and Denver W. Holt. 1996. Owls: Whoo Are They? Mountain Press Publishing Company, Missoula, MT. 59 p. Zoobooks. 1992. Owls. Wildlife Education, Ltd., San Diego, CA. 17 p. Snowy Owl Taxonomy Phylum: Chordata Class: Aves Order: Strigiformes Family: Strigidae Genus: Bubo Species: B. scandiacus Snowy Owl Fascinating Facts If there are nine chicks in the clutch, parents will have to provide about 1,500 lemmings to feed the ravenous chicks until they're ready to go out on their own. Unlike other raptors that carry prey in their talons, owls sometimes carry prey with their beak, and usually swallow small prey items whole. Bones, fur and other indigestible items are regurgitated as pellets! The snowy owl is the official bird of Quebec, Canada!
科技
2017-09/1580/en_head.json.gz/7886
Human Microbiome Project: Meeting Summary

NIH has recently engaged in community discussion of the idea of sequencing the totality of all microbes in or on the human body (the "human microbiome"). With the cost of sequencing declining and the impact of genomic approaches to biomedical research growing, the time is right to consider the opportunities that a Human Microbiome Project will offer in promoting better approaches to diagnosing disease, developing new therapeutics and therapeutic strategies, and in maintaining human health. To further this discussion, NIH organized an informal meeting of NIH staff from several Institutes and Centers with a group of scientists interested in exploration of the human microbiome to discuss the current state of knowledge, of the plans being developed internationally and of the challenges facing the organization of a Human Microbiome Project.

Why would we want to sequence all the microbes in the human body? (Jeff Gordon)

Dr. Jeff Gordon described what we currently know about the human microbiome. He also set the context for a potential project that would attempt to define, more fully, the human microbiome. A summary of his comments: The microbiome is an integral part of the human genetic landscape and human evolution. Sequencing the human microbiome would give us an "extended view of ourselves". Members of the phyla Bacteroidetes and Firmicutes dominate the human bacterial "flora," but there is considerable variation in the composition of indigenous bacterial communities among individual humans. Host selection plays a very significant role in the composition of the microbiota, so that microbiomes differ greatly from organism to organism. Generally mammals with similar diets have more closely related microbial communities. The microbial communities in omnivores, carnivores and herbivores cluster differently from each other. (From Ruth Ley's study). When the gut flora from zebrafish is transferred to a germ-free mouse, within 10 days the flora becomes "mouseified", i.e., it looks more like the expected normal mouse flora than that of zebrafish. Experiments with germ-free and colonized animals show that microbiota direct myriad biotransformations (production of essential vitamins, xenobiotic metabolism) and affect energy balances (microbes can manipulate host genes resulting in alterations in the deposition of energy into different sites), postnatal development, gut epithelial renewal rates, cardiac size/output and blood pressure. The composition of the gut microbiome is affected by or affects obesity. As individuals become leaner through diet, the fraction of Firmicutes in the gut population decreases, while the fraction of Bacteroidetes increases. Role of archaea in human health - up to 10% of the gut microbiome is comprised of Methanobrevibacter smithii. Other experiments with germ-free mice have shown that the presence of this organism can increase the efficiency of the fermentation of the gut contents and energy storage in the host. Xenologs abound in the genomes of organisms in the Bacteroidetes class, indicating there is tremendous exchange of genes within and across species.
Context for an HMP Interest is increasing in the scientific community in identifying the gene content and functional role of variation of microbial communities. There is a paradigm shift going on in the field of microbiology from the study of individual organisms to the study of microbial communities. Understanding the operation of communities and the dynamics in the composition of the communities with respect to the environmental changes is now a topic of interest. Microbiota could potentially form the basis of a 21st century pharmacopeia. Microbes have the capacity to synthesize many novel chemical entities that sustain mutually beneficial relationships with us. Through the identification of natural products, exploration of the effects on host signaling, examination of metabolic pathways and gene manipulation, we have the opportunity to identify and develop many possible new biologically active compounds. Summary of the November 2005 meeting in Paris, France entitled The Human Metagenome Project George Weinstock presented a summary of this international meeting: The goal of the meeting was to bring together scientists and funders to forge an international alliance to initiate a human metagenome project, with its first effort being focused on the intestinal microbiome. It was attended by scientists and funders from Europe, Asia and the U.S. The attendees recommended that a metagenome project should be undertaken. This metagenome project should be organized in two components: Determination of a set of reference or scaffold sequences from a core set of species. Sample (metagenomic) sequencing utilizing high throughout technologies to characterize the microbial flora from different body sites. There was a lot of enthusiasm at the meeting. Some institutions (NHGRI, The Sanger Institute, The Joint Genome Institute (DOE) and possibly Genoscope (France)) indicated that they would be willing to commit or are already committed to sequencing a number of reference genomes. Representatives from China and Japan indicted that they need a concrete description of the project to take to their funding agencies to seek support. The EU would be interested in supporting efforts in this field in its next call for proposals. The NSF already funds significant microbial sequencing, but not necessarily from the human. One of the next steps is to publish the meeting report/recommendations so that partners can use it to encourage their funding agencies to participate. The meeting did not address the many challenges and the several fundamental issues that need to be addressed before initiating the project. There was not a good sense among the attendees of what coordinating a large international project would entail. There was agreement that an international consortium and its attendant attention to coordination could bring the benefits of a division of labor, economies of scale, standardization of procedures, quality control, rapid data release and reduction of redundancy to the project. What organisms should be included in a HMP? What body sites? Ideally, the genomic sequences of all microorganisms (including bacteria, archaea, fungi, viruses, and phage) found on and in the human body should be determined. Efforts should be made to include rare organisms. All relevant body sites, including skin, the oral cavity, the intestinal tract, the female urogenital tract, the ear, and the upper respiratory tract, should be characterized. The effects of locale within each of the body sites should be addressed. 
Sampling should be done to address a number of variables. A very important question is whether there is a "normal" or "core" microbiome. The idea that there is a "normal" flora may be a fallacy. Instead, there may be a spectrum of normalcy and several models of health.

What new technology is needed? Culture technology: most bacteria are not culturable at present, and research is needed to develop new approaches to grow them. Single-cell analysis: devices are needed that can sort single cells from microbial communities for individual examination. The ability to do so could advance the understanding of rare individual organisms that cannot be studied today. Data analysis: new analytical tools are needed to enable searching of the large datasets that will be produced. Databases: databases are needed to link clinical annotation and other meta-data to the sequences for analysis. The role of metabolomics in the project was discussed. It would be ideal to do metagenomic projects in parallel with metabolomic projects. Metagenomic projects can be undertaken now because the technology is available, but metabolomics technology needs to be developed.

Implementation topics: Relative roles and timing of the sequencing of the genomes of cultivable organisms and sample sequencing. A dataset of sequences of the genomes of a collection of reference organisms is needed for each body site. Technology development is needed to enable efficient metagenomic sampling. Metagenomic sampling is needed to define the cohort of organisms present within each of the chosen sites - what's there at any one time, how it changes over time, and how it is affected by the anatomical locale within the site (e.g., different parts of the mouth). Standardizing of sample annotation and processing will be needed. Different types of sampling protocols will need to be examined to measure their effects on the data collected. Data management: do we need a central data coordinating center? See above in technology development. There will need to be tools and ways to make data accessible to researchers who are not used to working with large data sets. A definition of standards and level of consistency will need to be defined.

Rough cost estimates for a possible pilot project: The following cost estimate for a potential pilot project was developed before the meeting. It gives a rough estimate of scientific and funding needs to launch a pilot. Estimates in red italic are those for which funding is likely already in place at a number of different sequencing centers as noted: Sequencing 1,000 reference genomes across 5 body sites: $18.5M with Sanger sequencing (ABI technology) or $12.5M with 454 technology. The NHGRI, Sanger and JGI have previously indicated that funding may already be available to cover these costs. Microbial diversity: 16S rRNA analyses at $2/clone, or $200k per 100,000 clones. Technology development, ~$5M/year for 3 years: Sequencing technology development. NHGRI is already funding the $1,000 and $100,000 genome programs. The HMP projects will drive this new technology. Further investment not currently needed. Sample sequencing: a pilot as described below may be possible within existing funding. Sample preparation: culturing currently unculturable organisms; BAC libraries as a sampling technique; normalization of the community DNA. Whole genome amplification (wga): the sequencing centers are already working on wga as part of the NHGRI medical sequencing program. Further investment may not be needed.
Single-cell analysis: engage the microfabrication and microfluidics community. Bioinformatics: databases and data analysis tools. Sample (metagenomic) sequencing pilot: 300,000 reads per body site in 100 individuals (5 body sites), ~$2M/year over 3 years. Sequencing centers are likely to contribute existing funding to this effort if it is technically feasible. Overall, new funding is needed primarily for technology development to enable full metagenomic exploration once the reference genomes are finished. Larger scale metagenomic projects which examine microbial communities under different variables may be appropriately done as individual research projects if sequencing costs are sufficiently reduced in the near future.

NIH staff will take steps to form a trans-NIH working group to work on developing a plan that will be of interest across the NIH and for laying the groundwork for the NIH's participation in an international project. Part of the working group's effort will be to identify components that can already be tapped into - sequencing is potentially already funded, so new funding is likely not needed for that part of this project. NIH staff will explore the role of OPASI in promoting this project. A "next step" meeting may be needed with the international research community interested in the HMP. This will help coalesce an international project.

Finally, it will be important to have an effective plan to communicate with the general population about the importance of this project. The outcome, as it relates to health, must be well articulated. There was general agreement that the project can be tied to the ideology of disease and stress; that it is looking at the "normal" or healthy state. The argument can be made that not until we understand the nature of health can we more thoughtfully define the target for treating someone who is sick. Right now we can only remove the signs of illness instead of actually restoring people to health. Many illnesses are ecological diseases rather than the result of exposure to a rare pathogen.
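As a quick sanity check of the pilot scale quoted above, the arithmetic below multiplies out the sampling plan and derives an implied per-read cost; the per-read figure is not stated in the summary and is computed here purely for illustration.

```python
# Back-of-the-envelope check of the proposed metagenomic sequencing pilot,
# using only the figures quoted in the cost estimates above.

reads_per_site = 300_000        # sample reads per body site
body_sites = 5                  # body sites sampled per individual
individuals = 100               # individuals in the pilot
budget_per_year = 2_000_000     # ~$2M per year
years = 3                       # over 3 years

total_reads = reads_per_site * body_sites * individuals      # 150,000,000 reads
total_budget = budget_per_year * years                       # $6,000,000
implied_cost_per_read = total_budget / total_reads           # ~$0.04/read (illustrative)

print(f"Total pilot reads:     {total_reads:,}")
print(f"Total pilot budget:    ${total_budget:,}")
print(f"Implied cost per read: ${implied_cost_per_read:.3f}")
```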
科技
2017-09/1580/en_head.json.gz/7913
Warming Causes More Extreme Shifts of the Southern Hemisphere's Largest Rain Band

The changes will result from the South Pacific rain band responding to greenhouse warming. The South Pacific rain band is the largest and most persistent of the Southern Hemisphere, spanning the Pacific from south of the Equator, south-eastward to French Polynesia. Occasionally, the rain band moves northwards towards the Equator by 1000 kilometres, inducing extreme climate events. The international study, led by CSIRO oceanographer Dr Wenju Cai, focuses on how the frequency of such movement may change in the future. The study finds the frequency will almost double in the next 100 years, with a corresponding intensification of the rain band.

Dr Cai and colleagues turned to the extensive archives of general circulation models submitted for the fourth and fifth IPCC Assessments and found that increases in greenhouse gases are projected to enhance equatorial Pacific warming. In turn, and in spite of disagreement about the future of El Niño events, this warming leads to the increased frequency of extreme excursions of the rain band. During moderate El Niño events with warming in the equatorial eastern Pacific, the rain band moves north-eastward by 300 kilometres. Countries located within the band's normal position, such as Vanuatu, Samoa, and the southern Cook Islands, experience forest fires and droughts as well as increased frequency of tropical cyclones, whereas countries to which the rain band moves experience extreme floods.

"During extreme El Niño events, such as 1982/83 and 1997/98, the band moved northward by up to 1000 kilometres. The shift brings more severe extremes, including cyclones to regions such as French Polynesia that are not accustomed to such events," said Dr Cai, a scientist at the Wealth from Oceans Flagship.

A central issue for community adaptation in Australia and across the Pacific is understanding how the warming atmosphere and oceans will influence the intensity and frequency of extreme events. The impact associated with the observed extreme excursions includes massive droughts, severe food shortage, and coral reef mortality through thermally-induced coral bleaching across the South Pacific. "Understanding changes in the frequency of these events as the climate changes proceed is therefore of broad scientific and socio-economic interest."
科技
2017-09/1580/en_head.json.gz/7921
Supporters renew push for federal data center bill

By Adam Mazmanian | Nov 20, 2013

A House Democrat's bill designed to improve the energy efficiency of federal data centers appears to stand little chance of moving on its own, but could wind up as part of a larger Senate energy bill that has supporters in both parties. The data center measure was introduced in February by Rep. Anna Eshoo (D-Calif.), but the Energy and Commerce Committee of which she is a senior member has yet to hold a hearing on the legislation. On the Senate side, there is a push among supporters to bring the broader energy bill, backed by New Hampshire Democrat Jeanne Shaheen and Ohio Republican Rob Portman, to the floor. The legislation stalled in September amid debate on the continuing resolution and a Republican push to defund the 2010 health care law. The bipartisan bill would create a voluntary rating program along the lines of EnergySTAR to encourage supply chain efficiency and rewrite national building codes to require more energy-efficient construction. A provision on data center consolidation had been added to the bill when it was first considered in the Senate, and a similar provision is likely to be included if a new version of the legislation comes to the Senate floor. Rep. Ed Whitfield (R-Ky.), chairman of the Energy and Commerce Energy and Power Subcommittee, told SNL Energy in September that he would move on the Shaheen-Portman bill if the Senate passes it. "Anytime we talk about efficiency, we're all up there hugging and holding each other," he said.

Speaking at a Capitol Hill event convened by the Information Technology and Innovation Foundation, Eshoo noted that federal data centers are responsible for 10 percent of all U.S. data center energy use, which adds up to about $600 million annually. "I think that's a low number myself," she added. Her legislation would require the Office of Management and Budget to develop a government-wide strategy on energy efficiency and focus on improving IT asset utilization and establishing metrics for measuring savings and performance. "I think government should be leading by example," Eshoo said. In that vein, a measure that would provide an accurate count of federal data centers and establish metrics to apply to a consolidation initiative spearheaded by federal CIO Steve VanRoekel was proposed as an amendment to the Senate defense authorization bill on Nov. 19.

But since Eshoo first became active on the issue almost a decade ago, it has been the private sector leading the way in consolidation. Eric Massanet, a professor at Northwestern University, said the past eight years have brought "unprecedented" improvements in data center energy efficiency and shifts to cloud-based virtualization at large private-sector companies. But smaller data centers have an opportunity to slash energy use by up to 90 percent, with the use of more energy-efficient hardware, and policies that encourage reduced energy use across companies and institutions. Government could see improved results from the incorporation of stricter energy efficiency standards into procurement standards, Massanet said.

Available capital is a big hurdle for government agencies looking to upgrade their hardware, fund consolidation projects, or move to the cloud. One available vehicle for energy efficiency is the energy saving performance contract (ESPC), in which a vendor essentially loans an agency the capital for an energy saving project and gets paid back out of the energy savings.
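To make the repayment mechanism concrete, here is a minimal sketch with entirely hypothetical figures; real ESPCs involve measurement-and-verification terms, financing charges and contract structures that this toy model ignores.

```python
# Illustrative model of an energy saving performance contract (ESPC):
# the vendor fronts the capital, and the agency repays it from a share of
# each year's measured energy savings. All numbers are hypothetical.

project_cost = 5_000_000          # capital fronted by the vendor
annual_energy_savings = 900_000   # measured savings per year
vendor_share = 0.85               # share of savings used to repay the vendor

balance = project_cost
year = 0
while balance > 0:
    year += 1
    payment = annual_energy_savings * vendor_share
    balance -= payment
    print(f"Year {year}: payment ${payment:,.0f}, balance ${max(balance, 0):,.0f}")

agency_keep = annual_energy_savings * (1 - vendor_share)
print(f"Vendor repaid after about {year} years; the agency keeps "
      f"${agency_keep:,.0f} of savings per year during the term.")
```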
The vehicle, which operates outside the traditional appropriations process, has been commonly used for weatherizing, retrofitting and other facilities-based projects, but its use in data center consolidation has thus far proved problematic. A planned $70 million data center consolidation by the Department of Energy with Lockheed Martin has been held up for months as the Office of Management and Budget reviews the deal. Cathy Snyder, a government relations vice president at Lockheed, touted the future of ESPCs at the event. She acknowledged the snag with the DOE deal, but said ESPCs will be viable for data center consolidation in the long term. "I don't know that it's exactly put a chill on other agencies. I know other agencies will comport with policy and law as it comes out," Snyder said. She noted that the Air Force announced plans for an ESPC to fund data center consolidation at Edwards Air Force Base. More generally, Snyder said, it's going to take a cultural shift for agencies to get more comfortable with the alternative funding mechanism. The Obama administration is also nearing the end of an energy savings project. In December 2011, Obama tasked agencies with writing $2 billion in performance-based contracts by the end of 2013. When OMB last reported on its progress in June, it had awarded $560 million across 64 projects with another $1.7 billion in the pipeline. It's not clear why OMB is holding up the Lockheed-DOE deal, although an OMB letter on the topic to Sen. Ron Wyden (D-Ore.) hints that the government might be looking to drive down financing costs of ESPCs related to data center consolidation.

Adam Mazmanian is executive editor of FCW. Before joining the editing team, Mazmanian was an FCW staff writer covering Congress, government-wide technology policy, health IT and the Department of Veterans Affairs. Prior to joining FCW, Mr. Mazmanian was technology correspondent for National Journal and served in a variety of editorial roles at B2B news service SmartBrief. Mazmanian started his career as an arts reporter and critic, and has contributed reviews and articles to the Washington Post, the Washington City Paper, Newsday, Architect magazine, and other publications. He was an editorial assistant and staff writer at the now-defunct New York Press and arts editor at the About.com online network in the 1990s, and was a weekly contributor of music and film reviews to the Washington Times from 2007 to 2014. Connect with him on Twitter at @thisismaz.

Comment from DC Fed, Washington DC: A significant element to success in this area is, as was pointed out, replacing less efficient hardware with newer, more efficient hardware. Under the guidance issued over the last two FYs we have been ordered to extend the duty cycle of all IT hardware by a minimum of 1 year over normal standard replacement cycles. Last year's sequester activities pushed the refresh cycle even further out. In most cases it is a flat out freeze on purchasing unless the device fails and can't be repaired. Unless this bill overrides the freeze, we're stuck. Also, unless this bill provides funding to implement this, we're still stuck. You can only realize energy and cost savings by purchasing the newer hardware and vendors require cash up front. No cash, no consolidation. I guess if you like your servers, you can keep your servers.......
科技
2017-09/1580/en_head.json.gz/8056
Panasonic X5 series plasma TV disappoints (pictures) In 2012, no other television can touch the dominating presence of the Panasonic plasmas. No matter whether you paid $800 or $3,000, I believe CNET's tests and observations have amply demonstrated that models across the entire range are the televisions to buy at their respective prices. But cracks had to appear somewhere, and in the Panasonic X5, they have. This is the only Panasonic plasma I don't recommend. Furthermore, I encourage you to actively avoid it. The Panasonic smacks of a way to fill in a price point, and quality has ultimately suffered. While there are no features to speak of -- this is to be expected at the entry level -- it's the TV's picture quality that lets it down. Black levels are fairly average, but shadow detail would be good if shadows weren't so green. There is a green cast to everything that makes skin tones in particular look very sickly. This TV is unfortunately the opposite of accurate -- if you like watching rainforest documentaries, it might be fine, but for everyone else this is a pretty disappointing TV. Read full review Photo by: Sarah Tew/CNET Pricing is currently unavailable. It may not look that thick in the photo, but it's noticeably fatter than most competitors. Two HDMI ports, two component ports, USB, and SD card. The remote is easy to use. The Panasonic features a rectangular, nonswiveling stand. As a 720p plasma, the pixel structure can occasionally be visible from your seating position. The Panasonic features a piano-black bezel. The "Reset to defaults" menu option is a really big problem. Accidentally press right twice to access the picture menu instead of once and you lose all of your settings. Advanced picture menu The advanced offerings are fairly slim. After hogging the headlines of our plasma TV reviews all year, the company's luck had to end somewhere, and that's here. When summing up a TV's picture quality, the two most important elements are black levels and color. When a TV isn't able to compete on those two things, it doesn't really matter how good its picture processor is, or how many features it has. The Panasonic, unfortunately, is mediocre. After good performance from the U50 I was curious to see how deep the rabbit hole went, and it seems in this case not that far -- it's more of a pothole here in Panasonic's "X" series. While the company's competitors are able to engineer good value-for-money televisions at a sub-$500 price, this is seemingly beyond Panasonic's capabilities. The X5 is a cheap TV and it performs like one, but that's no longer good enough. Black levels were passable and shadow detail was good, but the TV's main problem can be summed up in one word: green. Green skin tones and green shadows. No, it doesn't look like the old CRT screens when they died (all green, all the time!), this is a more insidious green cast to objects. No matter its other attributes, this issue alone was enough to spoil the X5's picture. Panasonic TC-P50X5
科技
2017-09/1580/en_head.json.gz/8087
HII: Japan's indigenous booster

Japan has launched 38 satellites since entering the space age in 1970. Apart from the 17 science satellites launched on solid-propellant boosters, the larger applications satellites have needed combined US/Japanese boosters to get into orbit; indeed, three were launched from the US by Nasa. Breaking the mould will be the HII, Japan's first indigenous liquid-propellant launcher. Tim Furniss profiles the vehicle that will first fly in four years' time.

At first sight, the HII looks like a slightly anorexic Ariane 5. It has a similar core stage, a single cryogenic first-stage engine, two solid-rocket booster (SRB) strap-ons, and is practically the same height. Ariane 5 weighs 500 tonnes at lift-off; the HII, 258 tonnes. Ariane 5 will be able to place 6.5 tonnes into geostationary transfer orbit (GTO); the HII, 4 tonnes. Whereas Ariane 5 will be, for the most part, a commercial launcher, there is no certainty—only a suggestion—that the HII will be. If it does enter the commercial market, the HII will be competing with Ariane 44L for GTO business.

The HII will be launched in 1992, at least three years before Ariane 5. Its primary function is to place Japanese national communications and other applications satellites and platforms into orbit. In addition, it would be the launcher for the unmanned Hope spaceplane, thereby continuing the country's autonomous space policy. The HII will also form the basis of a more powerful vehicle, capable of launching larger satellites and manned spaceplanes, and gaining Japanese manned autonomy in space during the 21st century.

Although Japan is ostensibly running an autonomous programme, both in terms of the launchers and applications satellites, in fact it is not. The US Delta has launched three Japanese satellites from Cape Canaveral, and the others have been launched from Japan on NI, NII, and HI vehicles based on the US Delta, with components built under licence in Japan. More recently, the HI incorporated the first Japanese cryogenic engine in the second stage as a step towards launcher autonomy. The HI, which has flown successfully three times, once without a solid-propellant third stage, …

Flight International, 18 June 1988
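Using only the lift-off masses and GTO payloads quoted above, a quick payload-fraction comparison shows the smaller HII actually delivers a slightly larger share of its lift-off mass to GTO; the sketch below just automates that arithmetic.

```python
# Payload-fraction comparison from the figures quoted in the article (tonnes).

vehicles = {
    "Ariane 5": {"liftoff_mass": 500.0, "gto_payload": 6.5},
    "HII":      {"liftoff_mass": 258.0, "gto_payload": 4.0},
}

for name, v in vehicles.items():
    fraction = v["gto_payload"] / v["liftoff_mass"]
    print(f"{name}: {fraction:.2%} of lift-off mass delivered to GTO")
# Ariane 5: 1.30%   HII: 1.55%
```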
科技
2017-09/1580/en_head.json.gz/8109
Mediterranean Marine Protected Areas play a key role in understanding the effects of climate change

Wed, 24 Jul 2013

Climate change is likely to have drastic effects on the habitat of the Mediterranean flora and fauna, but its impacts will vary between Mediterranean regions and between marine protected areas (MPAs) within each region. This IUCN guide analyzes the threats and effects of climate change on Mediterranean marine biodiversity and provides MPA managers with tools to monitor and mitigate changes in their own MPA.

While the impact of climate change at the global level has been widely studied, its form and extent in the Mediterranean remain vague. This new IUCN publication provides a clear and synthetic summary of the main effects of climate change observed to date on Mediterranean marine biodiversity, according to existing research. It also outlines the many uncertainties that still exist in understanding ecological responses to climate change. The publication intends to give some guidance to MPA managers on how to measure the impact of climate change on the marine biodiversity of protected areas and how to improve the planning for the mitigation of this threat. Managers may choose, from among several monitoring plans and indicators, the ones that best fit their particular circumstances and management objectives.

Several impacts of climate change in the Mediterranean are identified in this guide, such as sea warming, sea-level rise, and salinity and sea circulation changes. Existing information shows that shallow sea waters have already warmed by almost 1 °C since the 1980s. The observed or potential consequences of these shifts on the marine communities are described in the publication. Research shows that native species distribution is already evolving, as some warm-water species have started to colonize areas where they were previously absent. For example, the ornate wrasse Thalassoma pavo increased its population density tenfold within less than 5 years of its arrival in the Scandola Marine Reserve (NW Corsica, France). Sea warming also causes mass mortalities of macrobenthic communities (corals, gorgonians, sponges), in particular in the North-Western Mediterranean, and favors the bloom of opportunistic organisms like the P. noctiluca jellyfish. As for increased sea acidification, it can potentially affect the growth, reproduction and activity rates of different species.

"MPAs play a key role in the analysis of the biological consequences of climate change. As they are better shielded from anthropogenic impacts than other areas, they can serve as 'sentinel sites' where the effects of climate change can be studied and management strategies can be developed to adapt to such negative effects, and wherever possible counter them," explains Maria del Mar Otero, IUCN Centre for Mediterranean Cooperation Marine Programme officer and coordinator of the publication.

Based on the experience of a selection of MPAs, the guide presents several case studies, each highlighting a different aspect of the impact of climate change on marine biodiversity: the vulnerability of sea turtle nesting sites to sea-level rise, the resilience of the Mediterranean coral Cladocora caespitosa, and shifts in species distributions observed across the Mediterranean through monitoring. The guide outlines different issues and potential monitoring methods, as well as adaptation strategies to protect the local biodiversity.
This guide was prepared by the IUCN Centre for Mediterranean Cooperation in collaboration with the Regional Activity Centre for Specially Protected Areas (RAC/SPA), within the MedPAN North project funded by the European Regional Development Fund, the Spanish Agency for International Cooperation and Development (AECID) and the RAC/SPA biannual programme funded by the Mediterranean Trust Fund of the Barcelona Convention.

For more information: Maria del Mar Otero. Publications: Mediterranean MPAs and climate change (EN); Les AMPs méditerranéennes et le changement climatique (FR); MedPAN North website.
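As a generic illustration of the sort of simple indicator an MPA manager could track (this sketch is not a method prescribed by the guide, and the temperature values are invented for the example), a least-squares fit to an annual sea-surface-temperature series yields a warming rate that can be compared with the roughly 1 °C of shallow-water warming reported since the 1980s.

```python
# Illustrative only: estimate a linear warming trend (deg C per decade) from an
# annual mean sea-surface-temperature series. The values below are invented;
# a real monitoring plan would use measured data for the MPA in question.

years = list(range(2000, 2013))
sst = [19.2, 19.3, 19.1, 19.5, 19.4, 19.6, 19.5, 19.8, 19.7, 19.9, 19.8, 20.0, 20.1]

n = len(years)
mean_x = sum(years) / n
mean_y = sum(sst) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, sst)) / \
        sum((x - mean_x) ** 2 for x in years)

print(f"Warming trend: {slope:.3f} deg C per year (~{slope * 10:.2f} deg C per decade)")
```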
科技
2017-09/1580/en_head.json.gz/8110
Pacific Islands Come Together to Protect Marine Resources

Mon, 28 Sep 2009

Last week two of the world's largest Marine Protected Areas (MPAs) announced a historic alliance to enhance the management and protection of almost 300,000 square miles of marine habitat in the Pacific Ocean. President Anote Tong of the Republic of Kiribati signed an agreement with the United States that establishes a "sister site" relationship between the Papahānaumokuākea Marine National Monument, located in the Northwestern Hawaiian Islands, and the Phoenix Islands Protected Area near the equator in the Republic of Kiribati.

"This commitment acts like a beacon to other small island nations; providing hope that together we can sustain livelihoods, conserve nature and address global change on islands," says Kate Brown, Global Island Partnership (GLISPA) Coordinator. "Islands all across the world continue to show the value of working together regardless of whether they are from a large country or small island state – they are joined together by their islandness and have much to share."

The partnership links the sites and is designed to enhance management knowledge and practices for these tropical and subtropical marine and terrestrial island ecosystems. "This is fantastic news," says Caitlyn Toropova, IUCN's MPA Coordinator. "Not only must we come together to increase global commitments to protecting marine areas, but we must develop active networks that share information so that we can learn what methods are most effective." Toropova is currently embarking on a global review of MPAs to identify gaps but also highlight success stories; the Pacific is one of the 18 regions under assessment.

Each site was nominated this year by its respective government as a World Heritage Site, a designation of the United Nations Educational, Scientific and Cultural Organization (UNESCO). Managers of both sites will meet in November in French Polynesia to formalize the agreement.

Papahānaumokuākea: When it was established in 2006, Papahānaumokuākea was the largest marine protected area in the world, protecting natural, cultural and historic resources within an area of approximately 140,000 square miles (362,075 square kilometers). The monument's extensive coral reefs are home to over 7,000 marine species, one quarter of which are found only in the Hawaiian Archipelago. Papahānaumokuākea is cooperatively managed to ensure ecological integrity and achieve strong, long-term protection and perpetuation of Northwestern Hawaiian Island ecosystems, Native Hawaiian culture, and heritage resources for current and future generations. Three co-trustees – the Department of Commerce, Department of the Interior, and State of Hawaii – joined by the Office of Hawaiian Affairs, protect this special place.

Phoenix Islands: In 2008, the Phoenix Islands Protected Area was founded to protect the archipelago's terrestrial and marine resources, becoming the largest marine protected area in the world today at approximately 158,500 square miles (410,500 square kilometers). The coral reefs and bird populations of the islands are highly unique and virtually untouched by humans. The protected area also includes underwater seamounts and other deep-sea habitat.
The Phoenix Islands Protected Area is a unique partnership between the government of Kiribati that owns the Phoenix Islands, non-governmental conservation organizations and regional governments. It is supported through a unique "reverse fishing license" financing program, in which the government of Kiribati is reimbursed for the amount that they would have made from selling fishing licenses. The government of Kiribati and an advisory board, working collaboratively to ensure the long-term sustainability of this remarkable place, administers the trust.
科技
2017-09/1580/en_head.json.gz/8167
Accelerating Science Discovery - Join the Discussion OSTIblog DOE Research Data and Digital Object Identifiers: A Perfect Match 22 Apr 2015 Published by Sara Studwell The Office of Scientific and Technical Information (OSTI) became a member of and a registering agency for DataCite in 2011—making the Department of Energy the first U.S. federal agency to assign digital object identifiers (DOIs) to data through OSTI's Data ID Service. DataCite is an international organization that supports data visibility, ease of data citation in scholarly publications, data preservation and future re-use, and data access and retrievability. Through the OSTI Data ID Service, DOIs are assigned to research datasets and then registered with DataCite to establish persistence and aid in citation, discovery, and retrieval. The assignment and registration of a DOI is a free service for DOE researchers to enhance the management of this increasingly important resource. Citations to these datasets are then made broadly available in OSTI databases such as DOE Data Explorer and SciTech Connect and in resources such as Science.gov and WorldWideScience.org. They are also indexed by commercial search engines like Google and Bing. Read more... How to Accelerate Public Access 20 Apr 2015 Published by Dr. Jeffrey Salmon For science agencies, access to federally funded research is a key part of our mission. And the very first requirement for federal agency public access plans directed by the White House Office of Science and Technology Policy (OSTP) was that the plans must encompass "a strategy for leveraging existing archives, where appropriate, and fostering public-private partnerships with scientific journals relevant to the agency's research [emphasis added]." This 2013 OSTP memo is replete with calls for public-private partnerships. When it comes to the key issue of repositories, for example, agencies are told that "[r]epositories could be maintained by the Federal agency funding the research, through an arrangement with other Federal agencies, or through other parties working in partnership with the agency including, but not limited to, scholarly and professional associations, publishers, and libraries [emphasis added]." Under the section on "Objectives for Public Access to Scientific Publications," the OSTP memo states that agency plans "shall …[e]ncourage public-private collaboration to: maximize the potential for interoperability between public and private platforms and creative reuse to enhance value to all stakeholders, avoid unnecessary duplication of existing mechanisms, maximize the impact of the Federal research investment, and otherwise assist with implementation of the agency plan [emphasis added]." And public-private partnerships are also called out in the memo's section on data management plans. Read more... DOE OSTI Implements Re-organization: Sticking to Our Knitting While Meeting New Challenges 08 Apr 2015 Published by Brian Hitson The Department of Energy (DOE) Office of Scientific and Technical Information (OSTI), a unit of the Office of Science, recently completed a restructuring to fulfill agency-wide responsibilities to collect, preserve, and disseminate scientific and technical information (STI) emanating from DOE research and development (R&D) activities, including a new obligation to provide public access to DOE-affiliated journal articles.
The re-organization is the culmination of a year during which OSTI took steps to re-focus and re-balance our operations by devoting more resources to collecting and preserving DOE STI and to providing comprehensive access to the results of DOE R&D investments. At the same time, we streamlined our portfolio of science search tools to make it easier to find DOE’s R&D results. In August, DOE became the first federal science agency to issue a public access plan for scholarly scientific publications in response to a February 2013 White House Office of Science and Technology Policy directive, and OSTI launched DOE PAGESBeta, a beta portal to journal articles and accepted manuscripts resulting from DOE-funded research. On October 1, we issued the OSTI 2015-2019 Strategic Plan, a roadmap for working to ensure its collections and portals reflect the complete R&D output of DOE. Read more... SciTech Connect, Primary Repository for DOE Scientific and Technical Information, Turns Two 02 Apr 2015 Published by Catherine Pepmiller As Spring 2015 rolls around, it’s time to mark a momentous occasion in the history of SciTech Connect: it’s turning TWO! SciTech Connect is a publicly available database of bibliographic citations for energy-related scientific and technical information (STI), including technical reports, journal articles, conference papers, books, multimedia, and data information. Launched in March 2013 by the Department of Energy (DOE) Office of Scientific and Technical Information (OSTI), SciTech Connect incorporated the contents of two of the most popular core DOE collections, DOE Information Bridge and Energy Citations Database, and employed an innovative semantic search tool and updated interface to enable scientists, researchers, and the public to retrieve more relevant information. SciTech Connect has emerged as a go-to resource, becoming OSTI’s most-visited repository for DOE science, technology, and engineering research information. Currently, it offers users over 2.7 million citations, including 400,000 full-text documents and nearly 1.5 million journal article citations, 240,000 of which have digital object identifiers (DOIs) with links to publishers’ websites. Read more... OSTI Joins In Celebrating the Forty-Fifth Anniversary of the International Nuclear Information System 25 Mar 2015 Published by Debbie Cutler Forty-five years ago, nations around the world saw their dream for a more efficient way to share nuclear-related information reach fruition through the creation of a formal international collaboration. This was accomplished without the internet, email, or websites. It was the right thing to do for public safety, education, and the further advancement of science. It was also a necessary way forward as the volume of research and information about nuclear-related science, even back then, was skyrocketing and exceeded the capacity for any one country to go it alone. And the Department of Energy (DOE) Office of Scientific and Technical Information (OSTI) was part of the collaboration from its initial planning stages. The International Nuclear Information System, or INIS, as it is commonly known, was approved by the Governing Board of the United Nations’ International Atomic Energy Agency (IAEA) in 1969 and began operations in 1970. The primary purpose of INIS was, and still is, to collect and share information about the peaceful uses of nuclear science and technology, with participating nations sharing efforts to build a centralized resource. Read more... 
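Following up on the Data ID Service post above: once a dataset DOI has been registered, anyone can retrieve a formatted citation for it through the standard doi.org resolver's content negotiation. The snippet below is a minimal sketch; the DOI shown is a placeholder rather than a real OSTI-registered identifier.

```python
# Minimal sketch: fetch a formatted citation for a dataset DOI via doi.org
# content negotiation. The DOI below is a placeholder; substitute a real one.

import requests

doi = "10.5072/example-dataset"   # placeholder DOI (10.5072 is a test prefix)
url = f"https://doi.org/{doi}"

# Ask the resolver for a bibliography-style citation instead of a redirect
# to the dataset landing page.
response = requests.get(
    url,
    headers={"Accept": "text/x-bibliography; style=apa"},
    timeout=30,
)

if response.ok:
    print(response.text.strip())
else:
    print(f"Lookup failed: HTTP {response.status_code}")
```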
科技
2017-09/1580/en_head.json.gz/8322
Stocks A to Z / Stocks G / Galileo Corporation (GAEO) Author: ndenga Subject: Galileo > > > NetOptix <Name and Business Change>I am considering adding this company to the Nascent Group Refractive Index. It definitely bears watching.The company has indicated that it is changing its name to NetOptix and focusing more on providing components to the telecommunications industry.http://www.galileocorp.com/news/index.html<snip>Galileo Corporation Announces Receipt of Order for DWDM Optical Filters, Establishment of a German Subsidiary and Planned Change of Company's Name to NetOptix Corporation Source: Business Wire STURBRIDGE, Mass., Aug 30, 1999 (BUSINESS WIRE) -- Galileo Corporation (NASDAQ National Market:GAEO) today announced that its Optical Filter Corporation (OFC) subsidiary has received a purchase order for Dense Wave Division Multiplex (DWDM) optical filters in the amount of $5.2 million. Deliveries are expected to be made over the next four to six months. This represents the first commercial production order for DWDM filters developed and manufactured by OFC. In addition, OFC is negotiating and expects to conclude shortly a longer-term supply agreement with the same customer, expected to total approximately $15 to $20 million for DWDM filters, for delivery during calendar year 2000. DWDM filters are a key component used by telecommunications equipment manufacturers to split a single fiber optic channel into multiple wavelengths, thus permitting a substantial increase in communications capacity. The need for increasing capacity stems from the surge in demand for fiber optic capacity fueled, in part, by the growth of Internet traffic. Galileo further announced that it has established a subsidiary in Germany with the specific objectives of conducting research in the area of optical filter design and processing systems as well as manufacturing DWDM filters. Together with the optical filters manufactured by OFC's Natick, Massachusetts plant, the new operation should allow Galileo, in time, to offer customers a broad range of optical telecommunications filters on a global basis. The facility, to be located near Frankfurt, is expected to be in operation by March 2000. Lastly, Galileo announced that in recognition of the emergence of the Company primarily as a manufacturer of optical telecommunications components, its Board of Directors has voted to change the name of Galileo Corporation to NetOptix Corporation. The new corporate name is expected to be effective September 30, 1999 or shortly thereafter. </snip>The one year chart got my attention.Thanks,Peter </Name and Business Change>
科技
2017-09/1580/en_head.json.gz/8396
Weighing Earth's Water from Space Launched in 2002, a pair of identical satellites that make up NASA's Gravity Recovery And Climate Experiment (GRACE) are tackling the problem in an unexpected way: they are weighing Earth's fresh water from space. Serving as a sort of "divining rod" in space that moves in response to a powerful, fundamental force of nature--gravity--the satellites respond to changes in Earth's gravitational field that signal shifts in the movement of water across and under Earth's surface. Read more Savanna Smog Each August in southern Africa, literally thousands of people equipped with lighters or torches go out into the African savanna, a region dotted with villages and teeming with animals, and intentionally set the dry grasslands ablaze. Read more Drought Lowers Lake Mead In the space of just three years, water levels in Lake Mead have fallen more than sixty feet due to sustained drought. Landsat images show the extent of the change to the lake's shoreline. Read more Denali's Fault During the afternoon of November 3, 2002, the water in Seattle's Lake Union suddenly began sloshing hard enough to knock houseboats off their moorings. Water in pools, ponds, and bayous as far away as Texas and Louisiana splashed for nearly half an hour. The cause? Alaska's Denali Fault was on the move, jostling the state with a magnitude 7.9 earthquake. Read more Dwindling Arctic Ice Since the 1970s, Arctic sea ice has been melting at the rate of 9 percent per decade. NASA researcher Josefino Comiso points to an accelerating warming trend as a primary cause and discusses how global climate change may be influencing the shrinking Arctic ice cap. Read more Little Islands, Big Wake The Hawaiian Islands interrupt the trade winds that blow across the Pacific Ocean, with far-reaching effects on ocean currents and atmospheric circulation. Read more Watching the World Go By Space Station Science Officer Ed Lu describes what it is like to look at the Earth over the course of an orbit. His descriptions are accompanied by digital photographs of Earth he has taken and transmitted to the ground during his mission. Just Add Water: a Modern Agricultural Revolution in the Fertile Crescent Satellite observations in the Middle East's Fertile Crescent have documented a modern agricultural revolution. The dramatic changes in crop production in southern Turkey over the last decade are the result of new irrigation schemes that tap the historic Tigris and Euphrates Rivers. Read more Land Matters Storm-related losses from the 1982-83 El Niño cost the state of California an estimated $2.2 billion. Fifteen years later, damages from the 1997-98 El Niño cost California only half that amount. Differences in storm intensity and duration accounted for some of the reduced costs, but other factors were also at work. Read more For the first time, scientists can rely on not one, but two satellites to monitor ocean surface topography, or sea level. TOPEX/Poseidon and Jason-1, launched nearly 10 years apart, are now engaged in a tandem mission, creating a spaceborne ocean observatory that provides scientists, climate modelers, and forecasters with nearly global coverage of the world's ocean surface at an unprecedented level of precision. Read more Watching our Ozone Weather Until about 30 years ago, atmospheric scientists believed that all of the ozone in the lower atmosphere (troposphere) intruded from the upper atmosphere (stratosphere), where it formed by the action of sunlight on oxygen molecules.
Read more The Incredible Glowing Algae The latest development in oceanographic remote sensing enables researchers to detect the glow, or phytoplankton fluorescence, from chlorophyll. Read more Under a Variable Sun In their continued effort to understand the Sun, solar physicists of the 21st century have used satellite data to study how much energy reaches the outskirts of the Earth's atmosphere and whether or how much that amount varies over time. Recently published research claims that the amount of solar energy reaching the Earth has increased over the past two solar cycles, while other scientists doubt that any such change has occurred. This story describes how gaps in the set of observations and uncertainty about the accuracy of different satellite sensors have made splicing together a complete data set so controversial. Read more Searching for Atlantic Rhythms? All over the globe there are relationships between the conditions of the atmosphere and oceans that affect weather and climate at great distances. The North Atlantic Oscillation is one of these teleconnections, linking the temperature of the North Atlantic Ocean with winter weather in North America and Europe. Read more The Great Bend of the Nile, Day and Night Photographs from the Space Shuttle reveal the densely populated communities along the banks of the Nile River. Read more Squeezing Water from Rock Survivors of the New Madrid earthquakes reported not only intense ground shaking and land movement, as would be expected during an earthquake, but also an unfamiliar phenomenon: water and sand spouting up through fissures, or cracks, in the Earth's surface. Read more A Delicate Balance: Signs of Change in the Tropics While NASA climate scientists were reviewing radiation data emanating from the tropics simply to test existing notions, they uncovered a phenomenon no one expected. They found that progressively more thermal radiation has been escaping the atmosphere above the tropics and progressively less sunlight has been reflecting off of the clouds. Read more Global Garden Gets Greener Between 1982 and 1999, the climate grew warmer, wetter, and sunnier in many parts of the global greenhouse. For the most part, these changes were favorable for Earth's vegetation. Satellite observations of vegetation combined with nearly 20 years of climate data reveal that productivity of Earth's land-based vegetation increased by 6 percent during the time period. The greatest increase occurred in the tropics, where decreasing cloudiness made more sunlight available. Compared to the increase in human population, however, the small increase in productivity has not changed the Earth's habitability in any significant way. Read more Lightning Spies In 1997, NASA launched the Lightning Imaging Sensor (LIS) aboard the Tropical Rainfall Measuring Mission (TRMM) satellite. The LIS detects and maps the distribution and variability of cloud-to-cloud, intracloud, and cloud-to-ground lightning. Read more Vanishing Ice Konrad Steffen arrived on the Greenland Ice Sheet for the 2002 fieldwork season and immediately observed that something significant was happening in the Arctic. Pools of water already spotted the ice surface, and melting was occurring where it never had before. Read more Escape from the Amazon In this era of heightened concern about the relationship between the build up of atmospheric carbon dioxide and climate change, scientists are working to itemize all the ways carbon moves into and out of forest ecosystems.
Perhaps nowhere on Earth do questions about the role of forests in the carbon cycle need answers more than in the Amazon Rainforest. Using satellite mapping and ground-based observations, scientists have discovered that carbon dioxide gas escaping from wetlands and flooded areas is a significant source of carbon emissions in the Amazon. Read more Measuring Ozone from Space Shuttle Columbia New remote-sensing technology called limb viewing allows observation of the atmosphere from the side rather than straight down. From that side view the layers of the atmosphere appear like layers in a cake, allowing instruments to see the lower layers of the stratosphere where most of the recently observed ozone change, like the ozone hole, occurs. Read more How on Earth was this Image Made? Remotely sensed Earth observations can include everything from sonar measurements used to map the topography of the ocean floor to satellite-based observations of city lights. Combining observations collected by a variety of instruments at different times and places allow scientists to create an otherwise impossible view of the Earth, showing underwater mountain ranges, cloud-free skies, and city lights that are brighter than daylight. Such visualizations are invaluable for interpreting complex data and communicating scientific concepts. Read more From Space to the Outback The 2002-03 fire season in Australia echoes the devastating 2001-02 season that climaxed in the bush on the outskirts of Sydney and drew international attention once again to the city that had hosted the 2000 Summer Olympics. In the aftermath of that season, Australian scientists and government agencies developed a new fire monitoring system that uses observations from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS) sensors on the Terra and Aqua satellites to identify fires in remote locations in Australia. The system provides a big-picture perspective of fires across the country and helps fire emergency agencies allocate resources to the areas where they are needed most. Read more Flame & Flood In the desert, fires can move fast; constant winds funnel through shallow dry creek beds to keep parched vegetation burning. A hot fire can make soil "hydrophobic," meaning that water runs off instead of soaking into the ground. Read more The Human Footprint In North America, the black-tailed prairie dog occupies as little as 5 percent of its former habitat. In Madagascar, more than 20 lemur species are threatened with extinction, and at least 15 species are already extinct. And on the island of Mauritius in the Indian Ocean, fewer than 50 mature mandrinette hibiscus plants remain in the wild. Read more Chemistry in the Sunlight Ozone has proven to be among the most difficult air pollutants to control. To control ozone requires understanding its complex chemistry and how the chemical travels from one locality to another. Chemistry in the Sunlight explains basic aspects of ozone formation and provides a sample set of chemical reactions involved in ozone production. Read more Solar Radiation and Climate Experiment (SORCE) Fact Sheet Earth scientists will move a step closer to a full understanding of the Sun's energy output with the launch of the Solar Radiation and Climate Experiment (SORCE) satellite. 
SORCE will be equipped with four instruments now being built at the University of Colorado that will measure variations in solar radiation much more accurately than anything now in use and observe some of the spectral properties of solar radiation for the first time. With data from NASA's SORCE mission, researchers should be able to follow how the Sun affects our climate now and in the future. (Original 2001-11-30; updated 2003-01-21) Read more The Road to Recovery A recent study in the Amazon rain forest shows that some types of logging may not negatively impact the carbon cycle as originally thought. Read more ICESat Factsheet The ICESat mission will provide multi-year elevation data needed to determine ice sheet mass balance as well as cloud property information, especially for stratospheric clouds common over polar areas. It will also provide topography and vegetation data around the globe, in addition to the polar-specific coverage over the Greenland and Antarctic ice sheets. Read more
Photography/Camera Accepting Dual Image Storage Media
Expert: Pat G - 7/16/2013

Question: Camera Roll Digital Camera

QUESTION: Dear Pat
http://en.wikipedia.org/wiki/Camera
http://en.wikipedia.org/wiki/Roll_film
http://en.wikipedia.org/wiki/110_film
http://en.wikipedia.org/wiki/135_film
http://www.youtube.com/watch?v=vuhM7McQOpc
http://en.wikipedia.org/wiki/Digital_camera
1. Are there cameras available from Sony, Kodak, Samsung, etc., which can accept and store images on both media: in a roll (cartridge) as well as on a memory card (digital camera)? That is, with film roll cartridges as well as a memory card installed in the camera. There will be two modes of operation with a Mode Operation button on the camera. Clicking the Mode Operation button will allow the photographer to select between two image storage media: A. Digital, B. Film Roll. If the photographer selects option A and then takes a photo, the image will be recorded on the memory card; if the photographer selects option B
2. Do you feel this type of camera, which can accept both types of media, can be useful to photographers and other consumers?
3. Technically, is it feasible (complexity) to manufacture this type of camera product?
Awaiting your reply, Prashant S Akerkar

ANSWER: Hello, Prashant,
What you are suggesting isn't technically feasible; that is, such a camera could not be made at a reasonable cost in a body that is reasonable to hold and operate. The digital camera is a totally new design. Remember that the light making up the image must strike a surface. In a film camera, it strikes film. In a digital camera, it strikes a light-sensitive plate and generates an electronic signal. Switching these two back and forth would be an engineering nightmare. And the lenses for a film camera are designed differently from the lenses for a digital camera. The focal length effectively changes, partly because of the distance between the lens and the surface, and partly because of the different size of the sensor. It is true you can use the same lenses (if the fittings are right) on a digital camera that you use on a film camera, but the apparent distance to the subject changes. The distance between the media (film or sensor) and the lens must be very precisely set. Moving one back and forth to replace the other would cause vibration that could upset the adjustments as well. Using mirrors instead would cost you brightness and mess with the f-stop. Manufacturing such a device would be horribly expensive, and the result would be bulky and heavy.
There is one possible exception. Hasselblad has such a camera, which arose because their cameras are designed to take different backs so you can switch film; they also made one that will take a film back and a digital back, but it would be a real pain to switch between one and the other. You wouldn't just flip a switch, and you could miss photo opportunities doing it, unless you simply used just one for the session. It's worse than switching lenses, and I often carry two camera bodies so I don't have to do that as often. You can do an internet search using the terms "hasselblad digital back" or "hasselblad v series" to find one. They make one called the V series for which there are both film and digital backs. They are horribly expensive; they run in the tens of thousands of US dollars. The cheapest I found is just under $10,000. Storage and lenses are also horribly expensive. But as you can see, this is a different solution from the one you are suggesting.
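Reduced to its essentials, what the question proposes is a single selector that routes each exposure to one of two storage paths. A toy Python sketch of that control flow (the names and types here are hypothetical, not any real camera's API) looks like this:

from enum import Enum

class StorageMode(Enum):
    DIGITAL = "memory card"
    FILM_ROLL = "film roll cartridge"

class DualMediaCamera:
    """Hypothetical model of the proposed two-mode camera."""

    def __init__(self) -> None:
        self.mode = StorageMode.DIGITAL  # default storage medium

    def press_mode_button(self, mode: StorageMode) -> None:
        # The Mode Operation button selects which medium records the next image.
        self.mode = mode

    def take_photo(self, scene: str) -> str:
        if self.mode is StorageMode.DIGITAL:
            return f"'{scene}' recorded on the memory card"
        return f"'{scene}' recorded on the film roll cartridge"

camera = DualMediaCamera()
camera.press_mode_button(StorageMode.FILM_ROLL)
print(camera.take_photo("street scene"))   # recorded on the film roll cartridge
print(camera.take_photo("portrait"))       # still film until the mode is changed

The selector logic is trivial; the hard part, as the answer explains, is the optics and mechanics of delivering the same image-forming light to two different recording surfaces.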
Having used both film cameras and digital cameras extensively, I much prefer digital, especially since they have developed to a point where I can get large images. The camera I have now is effectively a large-format camera in a compact size. I will never go back to using film. Film costs money, and developing does, too. You can't see the results at the time, so if you need to adjust exposure, you don't know about it until it is too late. A good photographer always takes lots of pictures and selects the best. With a film camera, that gets very expensive, but with a digital camera, it is effectively free. With a film camera, you have to pay for developing the bad pictures along with the good. With a digital camera, you can just throw away the bad ones, and it doesn't cost you anything to have taken some bad pictures (except insofar as they prevent you from taking a good photo at that instant). I also doubt seriously if there would be a market for such a camera, even an inexpensive one. People prefer one or the other. People who still use film for artistic reasons don't need the digital capabilities. Most people have switched to digital and feel largely as I do. It's something like the difference between vinyl records and CDs. Some people think CDs distort the sound of classical music. These are diehard vinyl collectors. I am very sensitive to sound quality, but I have never noticed such a distortion (which could be corrected in any case), and vinyl records become scratchy and noisy, while CDs do not. In like manner, film responds somewhat differently to colors than a digital sensor; however, you can fix such problems in a computer program if need be. Film will keep more colors, but the answer is to shoot in RAW format with digital.

QUESTION: Dear Pat
I apologize for the incomplete sentence from my end in 1. There will be two modes of operation with a Mode Operation button on the camera. Clicking the Mode Operation button will allow the photographer to select between two image storage media: A. Digital, B. Film Roll. If the photographer selects option A and then takes a photo, the image will be recorded on the memory card; if the photographer selects option B, then the image will be recorded on the roll (cartridge).
Prashant S Akerkar

ANSWER: Hello, Prashant,
Thank you for the kind rating and comment. As far as I know, there are no cameras available with the features you seek. It isn't REALLY technically feasible, at least not in a way that would make it sensible to develop and market such a camera. Let's suppose what I regard as the most likely design. You'd have the sensor in one place on the camera, and the film in another place. There would be a semi-transparent mirror that would direct half the light toward the sensor, and half toward the film. You'd choose which medium got the image by using a Mode-Operation switch. The problem is that cutting the light down to half what it would be otherwise would mean losing a lot in terms of image quality. Let's say you chose to compensate by running at ISO 1600 instead of ISO 800. (If these terms don't make sense, let me know.) The image will be grainier at 1600. Most cameras don't have sensors good enough to handle an ISO number higher than 1600 without a lot of graininess. And the grain is aggravated in low-light situations.
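The arithmetic behind that trade-off is simple to sketch. Assuming an ideal 50/50 beam splitter and ignoring any other losses (the numbers below are only illustrative), each path receives half the light, which is exactly one stop:

import math

# An ideal 50/50 semi-transparent mirror sends half the light down each path.
light_fraction = 0.5
stops_lost = math.log2(1 / light_fraction)  # 1.0 stop of exposure lost

# Three ways to win that stop back, each with its own penalty:
base_iso = 800
needed_iso = base_iso * 2 ** stops_lost                        # ISO 1600 -> more grain

base_shutter = 1 / 250                                         # seconds
needed_shutter = base_shutter * 2 ** stops_lost                # 1/125 s -> more motion blur

base_f_number = 5.6
needed_f_number = base_f_number / math.sqrt(2) ** stops_lost   # ~f/4 -> less depth of field

print(f"stops lost to the splitter: {stops_lost:.1f}")
print(f"ISO {base_iso} -> ISO {needed_iso:.0f}")
print(f"shutter 1/250 s -> 1/{1 / needed_shutter:.0f} s")
print(f"aperture f/{base_f_number} -> f/{needed_f_number:.1f}")

Whichever knob you turn, the one-stop penalty has to show up somewhere: as grain, as motion blur, or as a shallower depth of field.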
If you open the aperture more, you lose depth of field, and sharpness if it's not focused perfectly, and there are a lot of situations where perfect focus is not guaranteed, especially with automatic focus. Or you might slow down the shutter speed, which might lead to blurring. I personally will do everything in my power NOT to rob the sensor of ANY of the light going to it. And you would probably have to give up the SLR viewfinder and rely on the display on the back, which is NOT very detailed, and which I personally find impossible to use. Lots of photographers think this way. A camera designed like that wouldn't sell. There are a few cameras that have sensors that will handle higher ISO numbers without graininess, but they're few and far between, very expensive, and not suitable for film. You'd need extra-sensitive film for the camera you propose as well. I have never seen any color film for more than 800 ISO, and they did make 1600 ISO, but that was VERY grainy. Suppose instead they had a mirror that would flip one way for film, and another way for the sensor. That would require some more sophisticated mechanics which could more easily break, and would not be desirable otherwise, either. The light would have to be bent, rather than going straight back to the sensor. If you're thinking of trying to invent such a camera, feel free. I doubt if you will sell your design to anyone. In other words, it is an engineering problem, and the solution is anything BUT obvious. But if you are thinking you can find an inexpensive camera (under $1000 US) that has this feature, from Sony, Kodak, or Samsung, I'd say that would be nearly 100% impossible. I don't know for sure that they do not have such a camera, but I'd be SHOCKED if they did. I didn't turn anything up on an internet search. Feel free to do one as well. Fuji is still making a disposable film camera that takes 27 shots. You have to send the camera back to get the film developed. I think they re-use the cameras, which means you might have tiny scratches on the lens of one you buy "new". This camera is available at B&H in New York, and is called the Fujifilm Quicksnap 800 Waterproof 35mm Disposable Camera. The web site is bhphotovideo.com. But that's not what you are looking for, either. They do have a film camera section, but all they have is cheap cameras, nothing like what you are asking about. Here's the link, for what it's worth: http://www.bhphotovideo.com/c/buy/Film-Cameras/ci/9812/N/4288586278 I looked only at the first page. None of these cameras would be attractive to a professional, so all the professionals are either using equipment they already have, or they have gone digital. I held out for a long time because I didn't have the money to buy a digital camera, and the one I wanted hadn't been developed yet. But I went digital about 5 years ago. And there's been a lot of water under the bridge since then. If you want to use both film and digital, carry two camera bodies. It's cheaper, and it will work better, because the cameras will be designed specifically for that recording medium. I hope this helps.
Yes, the Earth Has Warmed Up Before Without Any Help From Us

Some new information has come out about the Medieval Warm Period that's giving the vapors to the global warming crowd:

Current theories of the causes and impact of global warming have been thrown into question by a new study which shows that during medieval times the whole of the planet heated up. It then cooled down naturally and there was even a 'mini ice age'. A team of scientists, led by geochemist Zunli Lu from Syracuse University in New York state, has found that contrary to the 'consensus', the 'Medieval Warm Period' approximately 500 to 1,000 years ago wasn't just confined to Europe. In fact, it extended all the way down to Antarctica – which means that the Earth has already experienced global warming without the aid of human CO2 emissions.

Read the whole thing for the details. Bottom line, the Earth warms and cools according to its own schedule (and the sun's), and we really don't have much to do with it. Therefore, throwing trillions of dollars in new taxes and regulations at it will be meaningless. As I've said before, where I grew up was glaciated some 15K years ago--I was taught that in school before there was color TV. And touch-tone phones.
Man's Call To America: Turn Off That Air Conditioner
By Bryan Thompson, Sep 18, 2011
Stan Cox has air conditioning in his Kansas house — but he only runs the unit about once a year, he says. Bryan Thompson / Kansas Public Radio
Originally published on September 21, 2011 10:01 pm

According to the National Oceanic and Atmospheric Administration, this summer has been the second-hottest ever recorded in the United States, helping to push power demand in homes to record levels. As some worry that the growing use of fossil fuels to produce electricity for cooling is unsustainable, one man is urging Americans to live without air conditioning. With wire-rimmed glasses and a neatly trimmed salt-and-pepper beard, Stan Cox doesn't stand out in a crowd. But with his book arguing that our climate-controlled lifestyle in modern America is unsustainable, Cox has drawn quite a bit of attention. He lives in a modest two-story house in an older neighborhood in Salina, Kan. Built in the 1930s, the house has a screened-in porch — and, on a recent visit, lots of open windows. "It does have central air," Cox says. "And we run it one day each summer, to make sure it's still working okay. And so far, for 10 years now, it works fine on that day." Cox says the lack of air conditioning hasn't kept friends away, but it does prompt some good-natured ribbing. "When it gets really hot, they kind of wink and say, 'How are you holding up?' Or people will try to catch you: 'You've turned it on now, haven't you?' — Not so far," he says. Cox met his wife, Pritti, while working in India several years ago. She says even now, few homes in India have air conditioning, and almost none have central air. "For instance, my sister's family in Mumbai, they have one unit in the bedroom and one unit in the kitchen, dining, and living room area," she says. "But it's never the case that they would close the windows and leave it on all day. No." Living in hot-and-humid India led Cox to question whether Americans really need to keep their indoor environment at a constant temperature year-round. "The hotter the summers get, the more we run air conditioning. The more we do that, the greater the load of emissions added to all the other emissions we're putting in the atmosphere," he says, "and the higher the chance that we'll have even hotter summers in the future. That's what's being predicted." In his book, Losing Our Cool, Stan Cox argues that Americans rely too much on air conditioning. He's not against having air conditioning available during heat emergencies. But Cox says comfort research proves that most people can acclimate to warmer temperatures. "Office workers who have an air-conditioned workplace will have a temperature range they're comfortable in, that may reach up to 78," he says. "Whereas those who work in a non-airconditioned workplace, they were happy up to 89 degrees." That's if they had plenty of air movement. Cox says fans are a must, and shade makes a big difference. David Orr, who teaches environmental studies at Oberlin College, agrees that cutting back on air conditioning isn't all that hard to do. "I don't think anyone would ask at this point to go cold turkey on air conditioning," Orr says. "But what is reasonable is to use it only sparingly, or as necessary. When you use it, buy the most efficient equipment you can possibly buy." Both Orr and Cox say buildings should be designed for passive cooling, so that air conditioning isn't a must.
That's the way people in many parts of the world have been coping with heat for thousands of years. Copyright 2011 Kansas Public Radio. To see more, visit http://kpr.ku.edu.
What could happen in China in 2014? The year ahead could see companies focus on driving productivity, CIOs becoming a hot commodity, shopping malls going bankrupt, and European soccer clubs finally investing in Chinese ones. McKinsey director Gordon Orr makes his annual predictions. January 2014 | by Gordon Orr 1. Two phrases will be important for 2014: 'productivity growth' and 'technological disruption' China's labor costs continue to rise by more than 10 percent a year, land costs are pricing offices out of city centers, the cost of energy and water is growing so much that they may be rationed in some geographies, and the cost of capital is higher, especially for state-owned enterprises. Basically, all major input costs are growing, while intense competition and, often, overcapacity make it incredibly hard to pass price increases onto customers. China's solution? Higher productivity. Companies will adopt global best practices from wherever they can be found, which explains why recent international field trips of Chinese executives have taken on a much more serious, substantive tone. This productivity focus will extend beyond manufacturing. In agriculture, the pace at which larger farms emerge should accelerate, spurring mechanization and more efficient irrigation and giving farmers the ability to finance the purchase of higher-quality seeds. Services will also be affected: for companies where labor is now the fastest-growing cost, a sustained edge in productivity may make all the difference. And in industry after industry, companies will feel the disruptive impact of technology, which will help them generate more from less and potentially spawn entirely new business models. Consider China's banking sector, where bricks-and-mortar scale has been a critical differentiator for the past two decades. If private bank start-ups were allowed, could we see a digital-only model, offering comprehensive services without high physical costs? Will Chinese consumers be willing to bank online? Absolutely—if their willingness to shop online is any guide. 2. CIOs become a hot commodity There is a paradox when it comes to technology in China. On the one hand, the country excels in consumer-oriented tech services and products, and it boasts the world's largest e-commerce market and a very vibrant Internet and social-media ecosystem. On the other hand, it has been a laggard in applying business technology in an effective way. As one of our surveys recently showed, Chinese companies widely regard the IT function as strong at helping to run the business, not at helping it to grow. Indeed, simply trying to find the CIO in many Chinese state-owned enterprises is akin to hunting for a needle in a haystack. Yet the CIOs' day is coming. The productivity imperative is making technology a top-team priority for the first time in many enterprises. Everything is on the table: digitizing existing processes and eliminating labor, reaching consumers directly through the Internet, transforming the supply chain, reinventing the business model. The problem is that China sorely lacks the business-savvy, technology-capable talent to lead this effort. Strong CIOs should expect large compensation increases—they are the key executives in everything from aligning IT and business strategies to building stronger internal IT teams and adopting new technologies, such as cloud computing or big data. 3.
The government focuses on jobs, not growth Expect the Chinese government’s rhetoric and focus to shift from economic growth to job creation. The paradox of rising input costs (including wages), the productivity push, and technological disruption is that they collectively undermine job growth, at the very time China needs more jobs. Millions and millions of them. While few companies are shifting manufacturing operations out of the country, they are putting incremental production capacity elsewhere and investing heavily in automation. For example, Foxconn usually hires the bulk of its workers for a given 12-month span just after the Chinese New Year. Yet at the beginning of last year, the company announced that it wouldn’t hire any entry-level workers, as automation and better employee retention had reduced its needs. Although upswings in the company’s hiring still occur (as with last year’s iPhone 5S and 5C release), the gradual rollout of robots will probably reduce demand for factory workers going forward. In short, many manufacturers—both multinational and Chinese—are producing more with less. So as technology enables massive disruptions in service industries and sales forces, what happens to millions of retail jobs when sales move online? To millions of insurance sales agents? Millions of bank clerks? Even business-to-business sales folks may find themselves partially disintermediated by technology, and rising numbers of graduates will have fewer and fewer jobs that meet their expectations. They will not be happy about this and may not be passive. Finally, while state-owned enterprises will feel pressure to improve their performance, to use capital more efficiently, and to deal with market forces, they are likely, at the same time, to face pressure to hire and retain staff they may not really need. The government and the leaders of these enterprises have long argued that such jobs are among the most secure. They will find it very hard to declare them expendable. 4. There will be more M&A in logistics As everyone pushes for greater productivity, logistics is a rich source of potential gains. State-owned enterprises dominate in capital expenditure–intensive logistics, such as shipping, ports, toll roads, rail, and airports; small mom-and-pop entrepreneurs are the norm in segments such as road transportation. This sector costs businesses in China way more than it should. With upward of $500 billion in annual revenues, logistics is an industry ripe for massive infusions of capital, operational best practices, and consolidation. Driven by the pressure to increase productivity, that’s already happening at a rapid pace in areas such as express delivery, warehousing, and cold chain. Private and foreign participation is increasingly encouraged in many parts of the sector, and its competitive intensity is likely to rise. 5. Crumbling buildings get much-needed attention While China’s flagship buildings are architectural wonders built to the highest global standards of quality and energy efficiency, they are unfortunately the exception, not the rule. Much of the residential and office construction in China over the past 30 years used low-quality methods, as well as materials that are aging badly. Some cities are reaching a tipping point: clusters of buildings barely 20 years old are visibly decaying. Many will need to be renovated thoroughly, others to be knocked down and rebuilt. Who will pay for this? 
What will happen if residential buildings filled with private owners who sank their life savings into an apartment now find it declining in value and, perhaps, unsellable? Alongside a wave of reconstruction, prepare for a wave of local protests against developers and, in some cases, local governments too. 6. The country doubles down on high-speed rail When China inaugurated its high-speed rail lines, seven years ago, many observers declared them another infrastructure boondoggle that would never be used at capacity. How wrong they were: daily ridership soared from 250,000 in 2007 to 1.3 million last year, fuelled partly by aggressive ticket prices. Demand was simply underestimated. Now that trains run as often as every 15 minutes on the Shanghai–Nanjing line, business and retail clusters are merging and people are making weekly day-trips rather than monthly two-day visits. The turnaround of ideas is faster; market visibility is better; and many people come to Shanghai for the day to browse and shop. There are already more than 9,000 kilometers (5,592 miles) of operational lines—and that’s set to double by 2015. If the “market decides” framing of China’s Third Plenum applies here, much of the investment should switch from building brand-new lines to increasing capacity on routes that are already proven successes. 7. Solar industry survivors flourish Many solar stocks, while nowhere near their all-time highs, more than tripled in value in 2013. For the entire industry, and specifically for Chinese players, it was a year of much-needed relief. By November, ten of the Chinese solar-panel manufacturers that lost money in 2012 reported third-quarter profits, driven by demand from Japan in the wake of the Fukushima disaster. (Japan’s installed capacity quadrupled, from 1.7 gigawatts in 2012 to more than 6 gigawatts by the end of 2013.) Domestic demand also picked up as the State Grid Corporation of China allowed some small-scale distributed solar-power plants to be connected to the grid, while a State Council subsidy program even prompted panel manufacturers to invest in building and operating solar farms—an initiative that will ramp up further. This year is likely to see even stronger demand. Aided by international organizations, including the World Bank, an increasing number of developing countries (such as India) regard scaling up distributed power as a way of improving access to electricity. In addition, solar-energy prices continue to fall rapidly, driven down by technological innovations and a focus on operational efficiency. While I’m on green topics, I’ll point out that the coming months are also likely to see another effort to create a real Chinese electric-vehicle market. The push will be centered on the launch of the first vehicle from Shenzhen BYD Daimler New Technology. 8. Mall developers go bankrupt—especially state-owned ones Shopping malls are losing ground to the online marketplace. While overall retail sales are growing, e-retail sales jumped by 50 percent in 2013. Although the rate of growth may slow in 2014, it will be significant. Yet developers have already announced plans to increase China’s shopping-mall capacity by 50 percent during the next three years. For an industry that generates a significant portion of its returns from a percentage of the sales of retailers in its malls, this looks rash indeed. If clothing and electronics stores are pulling back on the number of outlets, what will fill these malls? 
Certainly, more restaurants, cinemas, health clinics, and dental and optical providers. But banks and financial-service advisers are moving online, as are tutorial and other education services. I expect malls in weaker locations to suffer disproportionately. These are often owned by smaller developers that can’t afford better locations or by city-sponsored state-owned developers that are expanding into new cities. The weak will get weaker, and while they may be able to consolidate, it’s more likely they will go out of business. 9. The Shanghai Free Trade Zone will be fairly quiet In early October, there was much speculation about the size of the opportunity after the State Council issued the Overall Plan for the China (Shanghai) Pilot Free Trade Zone (FTZ), and the Shanghai municipality issued its “negative list” of restricted and prohibited projects just a few days later at the end of September. For the FTZ, the only change so far appears to be that companies allowed to invest in it will not have to go through an approval process. As for the negative list, while there’s a possibility that Shanghai will ease the limitations, for the moment the list very much matches the categories for restricted and prohibited projects in the government’s fifth Catalog of Industries for Guiding Foreign Investment. This ambiguous situation gives the authorities, as usual, full freedom to maintain the status quo or to pursue bolder liberalization in the FTZ in 2014 if they see a need for a stimulus of some kind. On balance, I’d say this is relatively unlikely to happen. 10. European soccer teams invest in the Chinese Super League I know, I know—I’m making exactly the same prediction I did a year ago. True, Chinese football has battled both corruption and a lack of long-term vision. It’s also true that the Chinese Super League still trails Spain’s La Liga and the English Premier League in television ratings. That’s in spite of roping in stars such as Nicolas Anelka and Didier Drogba (who both returned to Europe this year) and even David Beckham (as an “ambassador”). At least this year some things started to improve. After all, Guangzhou Evergrande just won Asia’s premier club competition—the AFC Champions League—a year after hiring Italy’s seasoned coach Marcelo Lippi. This international success could be temporary, but there is a shared sense in China that something has to change because there is so much underleveraged potential. Maybe Rupert Murdoch’s decision to invest in the Indian football league will precipitate more openness among Chinese football administrators? Perhaps the catalyst will be the news that the Qatari investors in Manchester City also invested in a New York City soccer franchise? An era of cross-border synergies from the development and branding of sister soccer teams is coming closer. Finally, something that’s less a prediction than a request. Can we declare the end of the “BRICs”? When the acronym came into common use, a decade ago, the BRIC countries—Brazil, Russia, India, and China—contributed roughly 20 percent of global economic growth. Although China was already the heavyweight, it did not yet dominate: in 2004, the country contributed 13 percent of global growth in gross domestic product, while Brazil, Russia, and India combined contributed 9 percent, with similar growth rates. Compare that with the experience of the past two years. China accounted for 26 percent of global economic growth in 2012 and for 29 percent in 2013. 
The collective share of Brazil, Russia, and India has shrunk to just 7 percent. It's time to let BRIC sink. In this podcast, author Gordon Orr discusses some of his predictions for the coming year in China with fellow McKinsey directors Nick Leung and Guangyu Li. Gordon Orr is a director in McKinsey's Shanghai office. For more from him on issues of relevance to business leaders in Asia, visit his blog, Gordon's View.