Internet Explorer 7 Beta 1 Review
Paul Thurrott's Supersite for Windows
Paul Thurrott
I have a checkered history with Internet Explorer, and by this point, my relationship with Microsoft's controversial browser is so tainted, I have to admit that I approached this review with some trepidation. My history with IE dates all the way back to 1995, when Windows 95 first shipped and Microsoft released the first version of this browser.
At the time, IE was sort of a joke, and Netscape Navigator ruled the Web. Netscape was pumping out new browser versions every couple of weeks, it seemed, and Microsoft's first attempt seemed a bit sad by comparison. That said, I actually liked it: IE 1.x looked a lot like the Explorer shell in Windows 95, with compact, square buttons. It's hard to appreciate this now, but at the time, the Windows 95 look and feel was brand new and IE just seemed to fit in.
I actually switched to IE full time when IE 2 was released in late 1995, though in retrospect it's unclear what advantages it offered over 1.0. By early 1996, Microsoft began publicly discussing IE 3.0 and, at a developer show that simulcast to theaters around America, Joe Belfiore showed off IE 3.0 alpha features like frames, HTML layout, and multimedia support. I was hooked, and though early builds of IE 3 were difficult to use, the final version, released in August 1996, was a watershed event in the industry. IE, for the first time, bested the feature set of Netscape Navigator. And it would never look back: From that point on, IE began gnawing away at Navigator, and would soon overcome it for good.
In late 1996, Microsoft canceled its original plans for IE 4.0 and retooled after hearing that Netscape was going to try and replace the Windows shell with an HTML-based shell codenamed Constellation. The original IE 4.0 plans called for an evolutionary update to IE 3.0 that would have included features such as Site Map and an integrated FTP client. However, catching wind of Netscape's plans, Microsoft recast IE 4.0 as a major project called "Nashville" which would combine the Windows shell with the HTML rendering engine in IE, blurring the line between local content on your PC and remote content from the Web.
Nashville resulted in two products. The first was the standalone version of IE 4.0, released in late 1997, which included the expected browser, of course, but also a new integrated version of the Windows shell, an Active Desktop that combined the Windows desktop with a Web-based layer, and other controversial features. The second was Windows 95 OSR-2, which included these new IE-based integration elements, as would every future version of Windows, including, alarmingly, those based on Windows NT.
It was here that my support for IE began to flag and then, eventually, completely unwind. Bundling IE with Windows was one thing. Integrating it deeply into the Windows core was quite another. Unlike Windows and NT, IE was new code, and adding it deeply into Windows at such an early stage--and only because of a perceived competitive threat that, frankly, never materialized anyway--was just a bad decision. The ramifications of that decision are still with us today. IE is now one of the most obvious attack vectors for malware in Windows, and the weakest technical link in the so-called shield that separates hackers from your precious data.
Anyway. The next few IE releases were relatively uninspiring updates and an unintended omen of things to come. That's because IE was starting to pull away in the market, and Microsoft had fewer reasons to improve the browser, now that Netscape was imploding. IE 5.0 was "an incremental, evolutionary upgrade to IE 4.0" (see my review and my tech showcase) that sported an alarming number of proprietary Web features. IE 5.01, included with Windows 2000, set the stage for future IE versions by offering a huge array of security fixes (see my review). And IE 5.5--designed to coincide with Windows Me--was just as unexciting, with security and bug fixes, more proprietary Web technologies, and print preview. Yawn.
The last time I reviewed a standalone IE version was in late 1999, over five years ago, and I wasn't too impressed. And though I did describe IE 6.0 as "wonderful" in my Windows XP Home and Professional Editions Review (see my review), I also noted that "it doesn't seem much different than the IE 5.x products it replaces." Since then, IE 6.0 stagnated for three years before Microsoft finally got around to updating it in XP Service Pack 2 (SP2, see my review). With SP2, IE finally got pop-up ad blocking and a simple plug-in management system, but not much else. I commented on its "laughable compliance with Web standards" and noted that I would continue using Mozilla Firefox, which I have. As a result, I have never suffered from a spyware or malware attack, a common occurrence for IE users. And, I've been beseeching people to use Firefox--which, in addition to better security, has a slew of awesome end-user features not found in IE--instead of Microsoft's buggy browser.
Then the IE 7 beta happened.
Microsoft announces IE 7
It's important to understand that Microsoft had effectively killed IE. That is, the original plan for Windows Vista, the next major version of Windows (see my Beta 1 review), called for IE to be subsumed completely into the Windows shell. There were to be no more standalone IE updates.
Two things changed those plans. First, hacker attacks on IE 6 reached record levels, with Microsoft releasing IE 6 patches constantly over a three year period. Second, the Mozilla Foundation, which rose out of the ashes of Netscape, developed the aforementioned standalone browser, Firefox (originally called Phoenix, and then briefly Firebird), which, amazingly, began eating away at IE's market share. At the time of this writing, Firefox is closing in on 10 percent of the market, with all of that market share--all of it--coming at IE's expense. In certain technology-oriented circles, Firefox's share is actually much, much higher than that, and it actually outstrips IE in some cases.
When you combine these factors with Windows Vista's constant delays--now due in late 2006, the product was first aimed at a 2003 release--it was pretty clear that Microsoft had to do something. As I documented in my first IE 7 Preview, Microsoft chairman Bill Gates announced that his company would ship IE 7 for Windows XP with Service Pack 2 (SP2; and, as it turned out, for Windows XP x64 and Windows Server 2003 with SP1
Europe and the Middle East 1000AD to the Present, software for Windows and Mac OSX
The CENTENNIA Historical Atlas
Recent Additions and Changes:
Single-user access code: lower price
New Windows edition (Windows 8, 7, Vista, XP compatible)
Macintosh OS X edition (10.5 or above, Mavericks, Lion, etc.)
Added review by Prof. Charles Ingrao
EU focus (for example, see the EU in 2008)
Read about Frank Reed, the creator of Centennia
Centennia Software's home port is now Conanicut Island USA
Here's Frank Reed, creator of the Centennia Historical Atlas, with Neil deGrasse Tyson.
CENTENNIA is a map-based guide to the history of Europe and the
Middle East from the beginning of the 11th century to the present. It is a
dynamic, animated historical atlas including over 9,000 border
changes. The map controls evolve the map forward or backward in time, bringing the static map to life. Our maps display every major war and territorial conflict, showing the status of each region at intervals of a tenth of a year. The maps reflect actual "power on the ground" rather than internationally-sanctioned or "recognized" borders.
From Kevin Kelly's review of Centennia which was published in the Whole Earth Catalog:
"As a kid I dreamed of maps that would move; I got what I wanted in
Centennia. This colorful political map of Europe and the Mid-East redraws
itself at yearly intervals from the year 1000 to present. It's a living map,
an atlas with the dimension of time. I can zoom around history, pause at
particular dates, or simply watch how nations melt away, or disintegrate
into tiny fragments, or unite! Year by year the outlines of tribes and nations
spread, retreat, and reform almost as if they were tides or infections. The
resolution of detail (almost at the "county" level) is astounding; the breadth
of time (ten centuries) thrilling. It rewards hours and hours of study."
Kevin Kelly is editor-at-large and co-founder of "Wired" magazine and an all-around prophet of the digital age.
The Centennia Historical Atlas was required reading for all beginning students at the US Naval Academy at Annapolis for over twelve years. Over 1150 copies have been purchased annually for all prospective naval officers at Annapolis. The software serves as a visual introduction to Western History from a cartographic perspective. Centennia is also licensed by hundreds of secondary schools, colleges, and universities worldwide. Editions of the Centennia Atlas are available in Greek and German, as well as English.
Individual home users also purchase the Centennia Historical Atlas. It's ideal for anyone who loves maps and history, and it's also extremely popular among genealogy enthusiasts. There's no easier way to get a long-time-scale perspective on the history of the regions of Europe and the Middle East than by watching the borders shift back and forth in Centennia.
Professor Charles Ingrao, Purdue University wrote:
The Centennia Atlas offers an instant antidote to the problem of changing frontiers. It permits
you to view any part of Europe, North Africa or the Levant from A.D. 1000 to
[the present]. You can also go forward (or backward) in time, which permits you to see
the map change in five-week intervals for the period and region of your choice.
Centennia also provides a "historical gazette" and glossary of
names/places that students might find useful. It even traces the changing
battlefronts between countries in wartime, so you can follow the inexorable
march and retreat of the Austrian armies in the Balkans and elsewhere. I was
most impressed by the developer's incredible eye for detail, which was more
precise (and often more accurate) than Magocsi's new Historical Atlas of
East Central Europe. Centennia is no less precise for Germany. Since much of my earlier work
dealt with the early modern German states, I especially appreciated the
excellent detail that Centennia provides for some of the smaller (but not
the very smallest) Kleinstaaterei. CENTENNIA covers in detail the rise and fall of the Ottoman Empire,
the Hundred Years War, the Mongol invasions, the Napoleonic Wars,
the Unification of Italy and Germany, the First World War, the Rise of Nazi
Germany, the Arab-Israeli wars, and even recent events like the collapse
of the USSR, the wars of the former Yugoslavia, and the Chechen wars.
Some video samples from the Centennia Historical Atlas:
Some earlier non-official versions created using the Centennia Historical Atlas appeared under the titles "Ten centuries in five minutes", "Epic time-lapse of Europe", and "European time-lapse map".
The Centennia Historical Atlas software runs under Apple Macintosh OSX (Leopard, Snow Leopard, etc.), as well as Microsoft Windows (8/7/Vista/XP). The software requires 20 megabytes of hard disk space and 40 megabytes of memory. Centennia does not have any other significant system requirements, and it will run well on almost any computer made in the year 2000 or later.
The downloadable edition of the Centennia Historical Atlas is available at no charge. It covers the French Revolutionary and Napoleonic Era from 1789 to 1819. The map data and text for the full period from 1000AD to the present are already present in the download file and may be opened at any time with an access code. A single-user license access code is priced at $59.00 (plus shipping and handling, if required). We also have site license pricing and group rate pricing. We also accept purchase orders from schools, universities, and campus book stores.
Watch a short video guide to the Centennia Historical Atlas software. This video was created by Legacy Family Tree, one of our dealers. © Copyright 2002-2013 Centennia Software, Conanicut Island USA. All rights reserved. www.HistoricalAtlas.com.
Acquisition Support
The SEI works directly with federal defense and civil programs. Teams of acquirers, developers, and operators help government navigate the complexities of acquiring increasingly complex software and systems.
Increasingly, the Department of Defense (DoD) and federal agencies acquire software-intensive systems instead of building them with internal resources. However, acquisition programs frequently have difficulty meeting aggressive cost, schedule, and technical objectives.
The SEI works directly with key acquisition programs to help them achieve their objectives. Teams of SEI technical experts work in actual acquisition environments in the Army, Navy, and Air Force, as well as other DoD and civil agencies, applying SEI products and services in specific contexts.
Our vision is to facilitate the rapid establishment of agile teams composed of acquirers, developers, and operators using SEI technologies to provide evolutionary, high-quality, cutting-edge software-intensive capabilities to the warfighter. Acquisition program managers are challenged not only to grasp practical business concerns, but also to understand topics as diverse as risk identification and mitigation, selection and integration of commercial off-the-shelf (COTS) components, process capability, program management, architecture, survivability, interoperability, source selection, and contract monitoring. The SEI has spent more than two decades compiling a body of knowledge and developing solutions for these topics. The SEI is focused on direct interaction with the defense, intelligence, and federal acquisition communities by
transitioning technologies and practices to improve DoD software-intensive systems
performing diagnostics such as Independent Technical Assessments (ITAs) and Independent Expert Program Reviews (IEPRs)
helping with RFP preparation
helping with technical evaluations of proposals and deliverables
collaboratively developing acquisition technologies and practices
transitioning technologies and practices to the DoD acquisition community's collaborators
reviewing and advising the DoD on acquisition policy related to software-intensive systems
The SEI is focused on delivery, support, and integration of software-intensive systems acquisition practices to help acquisition program offices. The SEI is positioning itself as a facilitator and leader of a community of practice for the acquisition of software-intensive systems.
Spotlight on Acquisition Support
Acquisition Archetypes: Robbing Peter to Pay Paul
This April 2009 whitepaper is one in a short series on acquisition failures. This paper focuses on the problems of underspending, which can result in funds being shifted from one program to another.
Software Acquisition Survival Skills | 计算机 |
RIPE NCC Regional Meeting Moscow Sets Precedent for Internet Community in Russia, CIS and Eastern Europe
7 October 2010 - The RIPE NCC held its 7th Regional Meeting in Moscow from 29 September - 1 October. With over 380 attendees, the meeting was a huge success, with tutorials, presentations and panel sessions on topics including DDoS Attacks, DNS Security (DNSSEC), Operations of Networks and Exchange Points, Regional Connectivity and Capacity, the Evolution of the Internet and IPv6 Deployment. During the meeting, the attendees decided to further their cooperation by proposing the creation of a regional forum in which the region’s Internet experts could collaborate on issues unique to the Russian Federation, CIS and Eastern Europe. Alexey Soldatov, Russian Federation Ministry of Telecommunication and Mass Media, told the attendees in his welcome address, "The future of the Internet depends on you to establish its infrastructure and maintenance. The RIPE NCC Regional Meeting Moscow is an excellent example of self-organisation and the exchange of experiences between the Russian Federation technical community and global Internet experts enables us to be fully involved in the development of the Internet."
Paul Rendek, RIPE NCC Head of External Relations and Communications, commented, "The RIPE NCC welcomes any move to create regional forums for network engineers and other technical staff that enable them to share their experiences and knowledge and identify areas for regional cooperation. It is the next logical step for a community that has, over the last five years, established itself as a unique gathering of technical minds. The RIPE NCC offers its full support and guidance throughout the process and looks forward to facilitating this process and the resulting forum."
Founded in 1992, the RIPE NCC is an independent, not-for-profit membership organisation that supports the infrastructure of the Internet. The most prominent activity of the RIPE NCC is to act as a Regional Internet Registry (RIR) providing global Internet resources and related services to a current membership base of around 7,000 members in over 75 countries.
These members consist mainly of Internet Service Providers (ISPs), telecommunication organisations and large corporations located in Europe, the Middle East and parts of Central Asia.
As one of the world’s five RIRs, the RIPE NCC performs a range of critical functions including:
The reliable and stable allocation of Internet number resources (IPv4, IPv6 and AS Number resources)
The responsible storage and maintenance of this registration data
The provision of an open, publicly accessible database where this data can be accessed
The RIPE NCC also provides a range of technical and coordination services for the Internet community. These services include the operation of K-root (one of the 13 root name servers), the Deployment of Internet Security Infrastructure (DISI) and DNS Monitoring (DNSMON).
As a result of its established position in the Internet industry, the RIPE NCC has played an important role in the World Summit on the Information Society (WSIS), the Internet Governance Forum (IGF), European Union (EU) workshops and government briefings on key issues in the current Internet landscape.
For media enquiries please contact
Blaise Hammond / Lucie Smith Racepoint Group UK Tel: +44 208 752 3200 Email: ripencc _at_ racepointgroup _dot_ com
2012 Evaluation Team Report: Web Browser Evaluation
Executive Summary
The 2012 Web Browser Evaluation Team was tasked with evaluating the impact of the changing web browser climate for desktop and mobile operating systems. It is important to note that the team was not tasked with providing a recommendation on a single best-in-class browser; rather, it was asked to assess the current and upcoming generation of web browsers and consider the implications for Penn’s various constituents. The team divided Penn’s constituents into three separate communities, and provides the following recommendations for each subset:
Developers: The team suggests (at a minimum) that the primary goal for developers should be to provide access from an operating system's built-in browser offering (for example, the latest Safari on OS X). The secondary goal should be to support a secondary browser for the operating system (for example, the latest Firefox or Firefox ESR). Developers should clearly communicate expected levels of functionality for each browser and application. When access to an application using a particular browser is restricted, the reasons for the limitation should be clearly provided at point of access, in a format that is comprehensible to both End Users and Local Support Providers.
Support Providers: The team suggests that Local Support Providers should utilize a "Browser + 1" model on all managed systems. On unmanaged systems, Local Support Providers should strongly encourage and promote this model. Because of frequent version changes, LSPs should deploy frequent updates to managed systems or allow users to update browsers to ensure mitigation of security vulnerabilities.
End Users: End Users managing their own systems should also adopt a “Browser + 1” model, and utilizing this model should provide a reasonable expectation of the availability of services.
As the environment continues to evolve, schools and centers will need to continue adopting support for a "bring your own" environment as it pertains to web browsers.
Evaluation Methodology
The 2012 Web Browser Evaluation Team considered the built-in and most popular (as of early 2012) web browsers for desktop and mobile platforms. These included Firefox and Internet Explorer (with multiple versions identified), and Safari and Chrome (with the latest version indicated). The team considered University-supported desktop and mobile platforms (with the addition of Android due to its popularity). The team began by identifying pertinent Penn-provided and Penn-affiliated websites with heavy usage by University personnel. After identification, a testing matrix was developed for each combination of browser and application (See: Mobile evaluation and desktop
evaluation sheets). For each combination, a “Yes”/”No” answer was recorded as it pertained to functionality. When an issue was encountered with a specific combination, a note was made indicating (in as much detail as possible) the cause and problem.
In addition, a separate “Pros and Cons” list was developed for each web browser platform. For each point of interest, documentation is provided from a reliable source. While the team does not make any recommendations on best-practices or use-cases for each browser, the hope is that this list can be an aid in making these decisions. (See: Pros and cons)
The team tested baseline browsers with only supported add-ons (for example, Java 6 update 33) and no major customizations to settings that would impact compatibility. The team attempted to maintain the integrity of testing, while realizing that the scope of testing had to be fairly limited (due to time and manpower restrictions). While the team endeavored to accurately and fully test every application for every browser, some of these applications have restrictions to certain features. Where an application was tested and worked for all available functionality that the tester had access to, the team made the assumption that the application was compatible. Due to time constraints, the team also asserted that browsers across similar operating systems work similarly (for example, Windows 7 and Windows Vista Firefox work similarly), except where indicated. Finally, because Android allows for heavily modified user interface (UI) overlays, the team adopted a supported stance for the built-in browser with a “use at your own risk” recommendation for overlays.
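As an illustration only (not part of the team's report), the following TypeScript sketch shows one way such a browser-by-application matrix could be recorded and summarized; the application names, browser labels, and results in it are hypothetical:

// Illustrative compatibility matrix: each cell records whether an application
// worked in a given browser, plus an optional note about any issue found.
type TestResult = { works: boolean; note?: string };

interface CompatibilityMatrix {
  [application: string]: { [browser: string]: TestResult };
}

// Hypothetical sample entries; the real data came from the team's testing sheets.
const matrix: CompatibilityMatrix = {
  "Course Registration": {
    "Internet Explorer 9": { works: true },
    "Firefox (latest)": { works: true },
    "Chrome (latest)": { works: false, note: "login applet fails to load" },
  },
  "Library Catalog": {
    "Internet Explorer 9": { works: true },
    "Safari (latest)": { works: true },
  },
};

// Summarize which applications had issues in at least one tested browser.
for (const [app, results] of Object.entries(matrix)) {
  const failures = Object.entries(results).filter(([, result]) => !result.works);
  if (failures.length > 0) {
    console.log(`${app}: issue in ${failures.map(([browser]) => browser).join(", ")}`);
  }
}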
In years past, the Browser Evaluation Teams provided a recommendation on a single preferred campus browser. Rather than naming a single recommended browser, the team developed a list of Pros and Cons for each browser on mobile and desktop platforms, to help Penn constituents make an informed decision based on preferences and use case. The list provides information (using cited resources) on browser strengths and weaknesses. The team also made a best-effort attempt to collect data for most combinations of modern web browsers, operating systems, and applications.
Browser Pros and Cons
Desktop Platform Results
Mobile Platform Results
As it pertains to developers, the team reiterates that applications should work with University-supported built-in browsers. On desktop platforms, these supported browsers are Internet Explorer 8 and 9 and Safari current. On mobile platforms, these are iOS, Android, and Windows mobile built-in browsers. Developers should work to ensure all technology End Users are equipped with a baseline operating system and built-in browser that supports access to their application. While the optimal environment would be one where access to an application was browser-agnostic, the optimistic view is that developers will strive to support their applications using the "browser + 1" model (the built-in operating system's browser plus a secondary browser). Where developers are not able to develop compatibility with a specific browser, the rationale for this limitation should be clearly communicated (for example, if a user logs into an application using Chrome and that application is not compatible, the website should disseminate specific reasons why the limitation is in place) in a format that is both comprehensible to End Users and technically informative to local support providers.
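As a hypothetical sketch of the kind of point-of-access message described above (the browser check, the plug-in reason, and the wording are illustrative assumptions, not part of the report), a TypeScript fragment might look like this:

// Hypothetical point-of-access check: if the visitor's browser is known to be
// incompatible with this application, explain exactly why rather than failing silently.
interface KnownIssue {
  browser: string;
  matches: (userAgent: string) => boolean; // does the user-agent string match this browser?
  reason: string;                          // End User-readable explanation of the limitation
}

const knownIssues: KnownIssue[] = [
  {
    browser: "Chrome",
    matches: (ua) => ua.includes("Chrome"),
    reason:
      "This application depends on a plug-in that is not available in Chrome. " +
      "Please use your operating system's built-in browser or Firefox.",
  },
];

function accessMessage(userAgent: string): string {
  for (const issue of knownIssues) {
    if (issue.matches(userAgent)) {
      // Specific enough for an End User to act on and for an LSP to diagnose.
      return `Limited support for ${issue.browser}: ${issue.reason}`;
    }
  }
  return "Your browser is supported; full functionality is expected.";
}

// Example with a sample user-agent string (illustrative only).
console.log(accessMessage("Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.19 Chrome/18.0.1025.162 Safari/535.19"));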
For local support providers, particularly where a system is manageable, emphasis is put on enforcing or allowing users to maintain an up-to-date browser where possible (particularly easy with Firefox's and Chrome's update methodology, yet challenging with Safari and Internet Explorer). To some degree, limitation on customizations that would impede access ensures a fairly homogeneous and consistent user experience across the board. When a system is not manageable, communicating appropriate requirements and limitations that have been relayed by the developers is key. When providing recommendations on browsers for particular use case scenarios, LSPs can provide recommendations based on the "Pros and Cons" list. Where browser-agnosticism isn't possible, the decision on use of browser should be on the merits of the browser itself and user preferences. The push should also be for the "browser + 1" model espoused in this report, ensuring a fallback should developers be unable or unwilling to follow the above recommendations.
As it pertains to End Users, we increasingly see an environment in which users expect more choice about their technological experience. Where access to an application is impossible, the onus is on developers to clearly communicate the specific incompatibility. By informing users about expectations, we can limit End User frustration and the extent to which they are unable to access web-based applications.
Date Posted: June 28, 2013
Please note: This article is the final report of a past evaluation team. The information may no longer be current and has been made available as a historical reference only.
We've gathered project and spec leads, go-to bloggers, best-selling authors and industry insiders for 3 days of information sharing. Here's a list of who is confirmed to present at TheServerSide Java Symposium 2011:
James Gosling
Adam Messinger
Bear Bibeault
Adam Bien
Andy Bosch
Jeanne Boyarsky
Bill Burke
Stephen Chin
Cliff Click
Adrian Cole
Emiliano Conde
Patrick Curran
Janeice Del Vecchio
Jerome Dochez
Johan Edstrom
Michael Ernest
Jonathan Fullam
Jeff Genender
Dan Hardiker
Iran Hutchinson
Claus Ibsen
Jevgeni Kabanov
Max Katz
Jon Kern
Mik Kersten
Heath Kesler
Tom Kincaid
Jim Knutson
Justin Lee
Cameron McKenzie
Andrew Monkhouse
Charles Nutter
Karen Tegan Padir
Kirk Pepperdine
Reza Rahman
Matt Raible
Scott Selikoff
Mark Spritzler
Craig Tataryn
Martijn Verburg
Patrycja Wegrzynowicz
Jason Whaley
Paul Wheaton
Meet the Java Experts
James Gosling, Father of Java
Presenting: Keynote: Surfing the Currents of Change and Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
While most likely everyone knows Java expert James Gosling by name recognition alone, his official bio includes a BSc in Computer Science from the University of Calgary followed by a PhD in Computer Science from Carnegie-Mellon University. Best known for the original design of the Java programming language and the implementation of its original compiler and virtual machine, he has also contributed to the Real-Time Specification for Java and was an original researcher at Sun labs where his primary interest was software development tools, prior to becoming Chief Technology Officer of Sun's Developer Products Group and most recently CTO of Sun's Client Software Group when Oracle finalized their acquisition of Sun in early 2010.
One of the computer industry's most noted programmers, he is also the recipient of Software Development's "Programming Excellence Award" in addition to co-authoring such programming bibles including The Java Language Specification and The Java Programming Language. In the past, James has been a speaker at key industry events and Java conferences, including last year's Java Symposium and JavaOne 2009. Visit James Gosling's blog: Nighthacks.
Steve Harris, Senior Vice-President, Application Server Development, Oracle
Presenting: Co-Keynote: Java in Flux: Utopia or Deuteronopia?
Java expert Steve Harris is senior vice president of application server development at Oracle. He joined Oracle in 1997 to manage development of the Java virtual machine for the Oracle8i release. Since then, his role has expanded to include the entire Java Platform, Enterprise Edition technologies in the Oracle Application Server and WebLogic Server product lines, including EJBs, Servlets, JSPs, JDBC drivers, SQLJ, TopLink, and Web services support in both the application server and database. Prior to Oracle, Mr. Harris was vice president of engineering at Java predecessor ParcPlace-Digitalk following acquisition of a startup providing an object-oriented database for Smalltalk developers he co-founded in 1993. More than 13 years in scientific and engineering computing, consulting, document management, and systems integration experience followed Mr. Harris's degrees from George Washington University and UC Berkeley. Steve is co-presenting the Keynote presentation Java in Flux: Utopia or Deuteronopia? with Adam Messinger.
In the past, Steve has been a speaker at key industry events and Java conferences, including EclipseCon 2010 (visit the conference website).
Rod Johnson, Creator of the Spring Framework; Author, J2EE without EJB and more Presenting: Keynote: Driving Java Innovation to the Cloud and Cloud Keynote: Bringing Code to the Cloud and Back Again Rod is the father of Spring, co-founder and CEO of Interface21, and one of the world’s leading authorities on Java and J2EE development. Rod’s best-selling Expert One-on-One J2EE Design and Development (2002) was one of the most influential books ever published on J2EE. The sequel, J2EE without EJB (July 2004, with Juergen Hoeller), has proven almost equally significant, establishing a comprehensive vision for lightweight, post-EJB J2EE development. Rod regularly speaks at conferences in the US, Europe and Asia, including the ServerSide Symposium (2003, 2004, 2005 and 2006), JavaPolis (Europe’s leading Java conference) in 2004 and 2005, JavaZone (2004 and 2005) and JAOO (2004). He was awarded a prize for giving one of the top 20 presentations (by evaluation) at JavaOne in 2005.
Rod serves in the JCP on the Expert Groups defining the Servlet 2.4 and JDO 2.0 specifications. His status as a Java expert and leader in the Java community has been recognized through his invitation to Sun’s Java Champions program. Rod continues to be actively involved in client projects at Interface21, as well as Spring development, writing and evangelism.
In the past, Rod has been a speaker at key industry events and Java conferences, including previous years of TheServerSide Java Symposium and the Jasig Spring 2010 Conference, and he will speak at other upcoming events, such as OSCON 2011 (Open Source Convention). Visit this blog with contributions from Rod Johnson: SpringSource Team Blog.
Adam Messinger, Vice-President of Development in the Fusion Middleware Group, Oracle
Presenting: Co-Keynote: Java in Flux: Utopia or Deuteronopia?
Java expert Adam Messinger is Vice President of Development in the Fusion Middleware group at Oracle. He is responsible for managing the Oracle Coherence, Oracle JRockit, Oracle WebLogic Operations Control, and other web tier products. Prior to joining Oracle, he worked as a venture capitalist at Smartforest Ventures and O'Reilly AlphaTech Ventures. Adam is a graduate of the Stanford Graduate School of Business where he was a Sloan Fellow and of Willamette University where he was a G. Herbert Smith Scholar. Adam is co-presenting the Keynote presentation Java in Flux: Utopia or Deuteronopia? with Steve Harris.
In the past, Adam has been a speaker at key industry events and Java conferences, including QCon San Francisco 2010.
Bear Bibeault, Author, jQuery in Action
Presenting: How jQuery Made Bob a Happy Man
Bear Bibeault has been turning coffee into quality software since 1976 when he started programming in BASIC on a Control Data Cyber. Having managed to wrestle two Electrical Engineering degrees from the University of Massachusetts, he taught in the Graduate Computer Engineering Program of that esteemed institution for a decade or so. He has also served stints with Digital Equipment Corporation, Lightbridge Inc., Dragon Systems, and a whole slew of other companies no one has ever heard of (or that he's ashamed to admit association with). Bear has authored four books, and contributed on many others, including:
Ajax in Practice
Prototype and Scriptaculous in Action
jQuery in Action and jQuery in Action, 2nd edition
In the past, Bear has been a speaker at key industry events and Java conferences, including the Emerging Technology for the Enterprise Conference in 2008.
Adam Bien, Author, Real World Java EE Patterns
Presenting: Java EE 6 Patterns and Best Practices: What I Learned in the Field and Lightweight Application Development with Java EE 6
Java expert, Adam Bien, is an Expert Group member for the Java EE 6, EJB 3.1, and JPA 2.0 JSRs. He has worked with Java technology since JDK 1.0 and Servlets/EJB 1.0 in several large-scale projects and is now an architect and developer in Java SE/EE/FX projects. He has edited several books about Java and J2EE / Java EE and is the author of Real World Java EE Patterns. Adam is a Java Champion, Oracle Java Developer of The Year 2010, and JavaOne 2009 Rock Star.
Andy Bosch, Independent Consultant
Presenting: JavaServer Faces in the Cloud
Andy Bosch is an independent consultant and trainer for JSF and Portlet technologies. He wrote the first German book on JavaServer Faces and just lately published "Portlets and JavaServer Faces."
Andy is responsible for the website www.jsf-forum.de, a German portal for JSF related topics. Andy is a member of the Expert Group of JSR-301 and JSR-329. He regularly publishes articles in Java magazines and teaches web programming with JSF at various conferences.
Jeanne Boyarsky, Developer for a New York City bank
Presenting: Throw Away All The Rules. Now What Process Do You Follow?
Jeanne Boyarsky is a graduate of Queens College with a degree in Computer Science, and she also holds a Master's degree in Computer Information Technology from Regis University. Currently working as a Java Developer for a bank in New York City, her development
interests include databases, Web programming and testing.
Among her speaking engagements, Jeanne gave a highly regarded 'lightning talk' at the 2007 Google Test Automation Conference, and she has written a number of often cited articles on: JDBC batching, Ant Task Dependency Graphs, The Great Forum Migration Project, and
data Migration to JForum.
Jeanne is also an open-source developer, contributing to Version 1.1.0 of Classpath Suite - Running JUnit 3.8 test classes in a 4.X suite and supporting JUnit 4.4.
Bill Burke, Senior Consulting Software Engineer, Red Hat
Presenting: REST Never Sleeps (And Neither Does Your Middleware)
Bill Burke, senior consulting software engineer at Red Hat, is a JBoss Fellow. A long-time JBoss.org contributor and architect, Bill has founded projects, including JBoss clustering, EJB3, AOP, and RESTEasy, and he was Red Hat's representative for EJB 3.0, Java EE 5, and JAX-RS JCP specifications. Bill authored O'Reilly's EJB 3.0 5th Edition and RESTFul Java with JAX-RS and has numerous in-print and online articles.
Stephen Chin, Chief Agile Methodologist, GXS
Presenting: Extending VisualVM with JavaFX
Stephen Chin is a technical expert in RIA technologies, and Chief Agile Methodologist at GXS where he is leading a large-scale Lean/Agile rollout with hundreds of developers spread out across the globe. He coauthored the Apress Pro JavaFX Platform title, which is the current leading technical reference for JavaFX, and is lead author of the upcoming Pro Android Flash title. In addition, Stephen runs the very successful Silicon Valley JavaFX User Group, which has hundreds of members and tens of thousands of online viewers. Finally, he is a Java Champion and an internationally recognized speaker featured at Devoxx, Jazoon, and JavaOne, where he received a Rock Star Award. Stephen can be followed on Twitter @steveonjava and reached via his blog.
Cliff Click, Chief JVM Architect, Azul Systems
Presenting: A JVM Does That???
With more than 25 years of experience developing compilers, Cliff serves as Azul Systems' Chief JVM Architect. Cliff joined Azul from Sun Microsystems where he was the architect and lead developer of the HotSpot Server Compiler. Previously he was with Motorola where he helped deliver industry leading SpecInt2000 scores on PowerPC chips, and before that he researched compiler technology at HP Labs. Cliff has been writing optimizing compilers and JITs for over 15 years. Cliff holds a PhD in Computer Science from Rice University.
Adrian Cole, Founder, Cloud Conscious, LLC
Presenting: Java Power Tools: The Cloud Edition
Adrian founded the open source jclouds multi-cloud library two years ago, and is actively engaged in cloud interoperability and developer circles. Recent efforts include vCloud ecosystem engineering at VMware, Java integration at Opscode, and cloud portability efforts at Cloudsoft. Adrian is currently consulting under Cloud Conscious LLC.
Emiliano Conde, Founder and Lead Developer, jBilling Software, Ltd.
Presenting: Distributed to the Extreme: The Open Source Development Process
Emiliano Conde is the Founder and Lead Developer of jBilling Software, Ltd. He oversees the architecture and product direction of jBilling, the leader in open source enterprise billing systems. He is often working on-site with companies around the world helping implement large, enterprise class billing solutions on Java environments. Emiliano Conde counts 17 years of experience in software development, his last position prior to the founding of jBilling being Software Architect for HSBC Global Systems (ranked 2nd largest bank in the world). He holds a certificate in Software Engineering from the University of British Columbia (Canada). He now lives in Ottawa, Canada.
Patrick Curran, Chair of the Java Community Process
Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
Patrick Curran is Chair of the Java Community Process (JCP). In this role he oversees the activities of the JCP Program Office including driving the process, managing its membership, guiding specification leads and experts through the process, leading Executive Committee meetings, and managing the JCP.org website.
Patrick has worked in the software industry for more than 25 years, and at Sun (now Oracle) for almost 20 years. He has a long-standing record in conformance testing, and before becoming Chair of the JCP he led the Java Conformance Engineering team in Sun's Client Software Group. He was also chair of Sun's Conformance Council, which was responsible for defining Sun's policies and strategies around Java conformance and compatibility.
Patrick has participated actively in several consortia and communities including the World Wide Web Consortium (W3C) (as a member of the W3C's Quality Assurance Working Group and co-chair of the W3C Quality Assurance Interest Group), and the Organization for the Advancement of Structured Information Standards (OASIS) (as co-chair of the OASIS Test Assertions Guidelines Technical Committee). Patrick maintains a blog here: http://blogs.sun.com/pcurran/
Janeice Del Vecchio, Independent Consultant
Presenting: What's New on the Persistence Side: Getting to Know JPA 2.0
Janeice Del Vecchio, a graduate of Western Governors University's Bachelor of Science in Information Technology program, is an Oracle Certified Java Professional who is well known to the Java community through her participation as a bartender on JavaRanch. She also volunteers her time in the Beginning Java, Cattle Drive and General Computing forums.
Jerome Dochez, GlassFish Architect, Oracle
Presenting: OSGi-enabled Java EE Applications in GlassFish and Sponsored Keynote: GlassFish 3.1: Java EE 6 and beyond
Jerome Dochez is the architect of GlassFish and led the design and implementation of the GlassFish V1 and V3 application servers. He worked at Sun Microsystems for 13 years before joining Oracle as part of the acquisition. Jerome has presented at numerous conferences including 13 consecutive JavaOne conferences, as well as Devoxx and Jazoon. He is looking at the direction of the product while maintaining a stable compatible implementation, but he spends most of his time coding as it remains the fun part of this job.
He has worked on Java EE technologies since 2000, including various aspects of the application server implementation such as deployment, Web services and kernel. Before concentrating on Java EE, he worked on the Java SE team particularly on the JavaBeans team and the Java Plug-in.
Johan Edstrom, Independent Consultant, Senior SOA Architect, Savoir Technologies
Presenting: Tax Dollars and Open Source
Johan Edstrom is an open source developer, consultant and software architect. Johan divides his time between writing software, mentoring development teams and teaching people how to use Apache ServiceMix, Camel, CXF and ActiveMQ effectively and scalably for enterprise installations. He is a senior SOA architect at Savoir Technologies, which specializes in guiding companies to leverage open source technologies and solutions.
Michael Ernest, Owner, Systems Architect; Education Specialist, Inkling Research
Presenting: The Premier Sun Designation: Mastering the Oracle Certified Architect Exam and Java Performance Tuning: Embrace the Whole Platform
Michael Ernest has 15 years' experience in consulting, training and writing, principally on Java development and Unix-based systems administration. He owns and operates Inkling Research, a small group of societal misfits that found a way to teach and learn and still live indoors.
He specializes in delivering fast-track seminars to highly-experienced programming and admin teams. He is a lead instructor, technical adviser and contributor to Oracle courseware development on topics including Solaris performance management, DTrace technology, Java EE design patterns and architecture.
Michael has spoken previously at the JavaOne, Java University and CommunityOne conferences. He co-authored the Complete Java 2 Certification Study Guide and still isn't even close to finishing The Book of DTrace, not without a lot more caffeine. Ben Evans, Technical Architect and Lead Application Developer Presenting: Back to the Future with Java 7 Ben has been a professional developer and Open Source enthusiast since the late 90s. He has delivered world-class projects for banks, media companies and charities in that time, and currently works as a lead architect, principal engineer and in-house Java expert at one of the world’s leading financial institutions. Mark Fisher, Author, Spring Integration in Action; Engineer, VMware Presenting: Developing a Message Driven Architecture with Spring Mark Fisher is an engineer within the SpringSource division of VMware. He is the lead of the Spring Integration project and co-lead of the Spring AMQP project. He is also a committer on the core Spring Framework and the Spring BlazeDS Integration project. In addition to his role as an engineer, Mark spends a significant amount of time working with customers as a consultant and trainer. The focus of such engagements is primarily in the realm of enterprise integration and message-driven applications. Mark is a frequent speaker at conferences and user groups in North America and Europe, and along with other Spring Integration committers, he is an author of the forthcoming book, Spring Integration in Action, to be published in 2011 by Manning.
Jonathan Fullam, Enterprise Content Management Consultant, Micro Strategies
Presenting: How to Reap the Benefits of Agile-Based Test-Driven Development
Jonathan Fullam is an Enterprise Content Management consultant with over 10 years of experience with software development. Currently employed by Micro Strategies, Jonathan designs and implements custom ECM solutions based on the Alfresco open source Enterprise Content Management platform and has delivered presentations at Alfresco "Lunch and Learns" and the Alfresco Developer's conference. Jonathan has a passion for software development and also enjoys public speaking.
Jeff Genender, CTO, Chief Architect and Open source evangelist
Presenting: Architecture Track Keynote: ActiveMQ In The Trenches – Advanced Tips On Architectures and Implementations
Jeff has over 20 years of software architecture, team lead, and development experience in multiple industries. He is a frequent speaker at such events as TheServerSide Symposium, JavaZone, Java In Action, and numerous Java User Groups on topics pertaining to Enterprise Service Bus (ESBs), Service Oriented Architectures (SOA), and application servers.
Jeff is an active committer and Project Management Committee (PMC) member for Apache Geronimo, a committer on OpenTerracotta, OpenEJB, ServiceMix, and Mojo (Maven plugins). He is the author of Enterprise Java Servlets (Addison Wesley Longman, 2001), coauthor of Professional Apache Geronimo (2006, Wiley), and co-author of Professional Apache Tomcat (2007, Wiley). Jeff also serves as a member of the Java Community Process (JCP) expert group for JSR-316 (Java Platform, Enterprise Edition 6 (Java EE 6) Specification) as a representative of the Apache Software Foundation.
Jeff is an open source evangelist and has successfully brought open source development efforts, initiatives, and success stories into a number of Global 2000 companies, saving these organizations millions in licensing costs.
Dan Hardiker, Chief Technical Architect and Founding Member, Adaptavist.com Ltd.
Presenting: The (Not So) Dark Art of Performance Tuning
Dan Hardiker is a Chief Technical Architect and founding member of Adaptavist.com Ltd., which specializes in Confluence consultancy, support, hosting, and bespoke development. Dan has many years of Java expertise, as well as almost two decades of experience with UNIX and networking systems - focusing on infrastructure, performance, and security. He speaks regularly on these topics and has a background in event management. He works on enabling geeks to socialize throughout the UK via the GeekUp and BarCamp initiatives.
Iran Hutchinson, Product Manager, InterSystems
Presenting: Vendor Technical Session: Globals: Extreme Performance for Java Iran Hutchinson currently serves as Product Manager at InterSystems with a focus on driving global product strategy and development on the Java platform. Prior to joining InterSystems, Hutchinson held lead roles in enterprise architecture and development in companies such as IBM, where he led the development strategy for enterprise integration and evolution of global projects using: JavaEE5, Distributed Computing, CICS, Flex, SOA and Web services. He focuses on understanding diverse architectures and technologies to lead the way to next-generation solutions surrounding high performance computing, distributed computing and complex data interactions. Hutchinson thinks the open sourcing of standards and technologies, such as Java, in concert with other best-of-breed tooling will yield a bright future. Recently, Hutchinson has taken a more active role in presenting and debating technology in the hopes of learning and spurring innovative solutions. You can find him presenting at upcoming events around the world like Java One and on the upcoming blog + technology series at InterSystems.com. Claus Ibsen, Author, Camel in Action
Presenting: Apache Camel, the Integration Framework: Tales from the Leading Camel Experts
Claus Ibsen is a software engineer and integration specialists from FuseSource and is project lead on the open source integration framework Apache Camel and co-author of the Camel in Action.
Claus is the most active contributor to Apache Camel and is very active in the Camel community. At FuseSource he leads the development of Camel and provides consulting and support to customers. Claus is frequent speaker at FuseSource community day events on subjects related to Camel, including Devoxx 2010.
Jevgeni Kabanov, Founder and CTO of ZeroTurnaround Presenting: Do You Really Get Class Loaders? and Do You Really Get Memory?
Jevgeni Kabanov is the founder and CTO of ZeroTurnaround, a development tools company that focuses on productivity. Before that he worked as the R&D director of Webmedia, Ltd., the largest custom software development company in the Baltics. As part of the effort to reduce development time tunraround, he wrote the prototype of the ZeroTurnaround flagship product, JRebel, a class reloading JVM plugin.
Jevgeni has been speaking at international conferences for over 5 years, including TheServerSide Java Symposium, JavaPolis/Devoxx, JavaZone, JAOO, QCon, JFokus and others. He also has an active research interest in programming languages, types and virtual machines, publishing several papers on topics ranging from category theoretical notions to typesafe Java DSLs. Jevgeni is a co-founder of two open-source projects - Aranea and Squill.
Max Katz, Senior Systems Engineer and Lead RIA Strategist, Exadel
Presenting: Ajax Applications with JSF 2 and RichFaces 4
Max Katz is a Senior Systems Engineer and Lead RIA Strategist at Exadel. Max is a well-known speaker, appearing at many conferences, webinars, and JUGs. Max leads Exadel’s RIA and mobile strategy and Exadel open source projects such as Fiji, Flamingo and JavaFX Plug-in for Eclipse. Max is the community manager for web-based rapid UI prototyping application Tiggr. Max has been involved with RichFaces since its inception, publishing numerous articles, providing consulting and training, and authoring the book Practical RichFaces (Apress). Max writes about RIA technologies in his blog, and can be found on Twitter as @maxkatz. Max holds a Bachelor of Science in computer science from the University of California, Davis and MBA from Golden Gate University. Jon Kern, Software Architect, Agile Mentor, and Co-author of the Agile Manifesto Presenting: Agile Track Keynote: Agile Schmagile: The Backlash Against Agile
Jon Kern is a premiere software architect and team leader/coach that keeps the people and the business in sharp focus. Aerospace engineer-turned software expert, co-author of Agile Manifesto for Software Development and Java Design. Currently, Jon helps companies develop mission-critical software. His insights are critical factors in producing solutions with significant impact to business value, quality, budget, and schedule. He brings experts from around the world to work on the project team, work with client's developers and mentors on agile and distributed development processes, techniques, & tools. Most importantly, Jon leaves behind a team that is much more valuable to the company. Mik Kersten, CEO of Tasktop Technologies and Creator of the Eclipse Mylyn open source project Presenting: Mylyn 3.4 and the New Face of the Java IDE and Cloud Keynote: Bringing Code to the Cloud and Back Again
Dr. Mik Kersten is the CEO of Tasktop Technologies, creator of the Eclipse Mylyn open source project and inventor of the task-focused interface. As a research scientist at Xerox PARC, Mik implemented the first aspect-oriented programming tools for AspectJ. He created Mylyn and the task-focused interface during his PhD in Computer Science at the University of British Columbia. Mik has been an Eclipse committer since 2002, is an elected member of the Eclipse Board of Directors and serves on the Eclipse Architecture Council. Mik's thought leadership on task-focused collaboration makes him a popular speaker at software conferences, and he was voted a JavaOne Rock Star speaker in 2008 and 2009. Mik has also been recognized as one of the top ten IBM developerWorks Java technology writers of the decade. He enjoys building tools that offload our brains and make it easier to get creative work done. Heath Kesler, Consultant and Open source software evangelist
Presenting: What Riding the Camel Can Do for You
Heath Kesler is an open source software evangelist, developer and architect; he has created Java architectures utilizing open source frameworks on large scalable, high transaction load systems for such companies as LeapFrog Enterprises, AT&T, GE & GE Healthcare, and IBM. Heath has conducted training classes at companies like Verizon, Singapore Post and the Federal Aviation Administration on Apache frameworks including ActiveMQ, ServiceMix, CXF and Camel. Heath has been a team lead in many project recovery implementations, helping to rescue systems on the verge of collapse. He was recently involved with the implementation of the customer account creation and third-party integration on mission-critical systems for the largest educational products provider in the United States.
Tom Kincaid, Vice President, Professional Services, EnterpriseDB
Presenting: Vendor Technical Session: Introduction to PostgreSQL for Development and Deployment
Tom is Vice President of Professional Services at EnterpriseDB. He is responsible for the over-site and delivery of all their professional services including support, training and consulting. He has over 24 years of experience in the Enterprise Software Industry. Prior to EnterpriseDB, he was VP of software development for Oracle's GlassFish and Web Tier products where he helped integrate Sun's Application Server and Web Tier products into Oracle's Fusion middleware offerings. At Sun Microsystems he was part of the original Java EE architecture and management teams at Sun Microsystems and played a critical role in defining and delivering the Java Platform. Tom is a veteran of the Object Database industry and helped build Object Design's customer service department holding management and senior technical contributor roles. Other positions in Tom's past include Director of Quality Engineering at Red Hat and Director of Software Engineering at Unica.
Jim Knutson, Java EE Architect, WebSphere, IBM
Presenting: Core Java Track Keynote: Enterprise Java Platforms for the Next Decade Jim Knutson, IBM WebSphere's Java EE Architect, is responsible for IBM's participation in Java EE specifications and IBM's implementations of the specifications. His involvement in Java EE goes back to before there was a J2EE platform. He is also involved in programming model evolution to support SOA and Web services. Lasse Koskela, Author, Test Driven: Practical TDD and Acceptance TDD for Java Developers Presenting: Test Smells in Your Code Base Lasse Koskela works as a coach, trainer, consultant and programmer, spending his days helping clients and colleagues at Reaktor create successful software products. He has trenched in a variety of software projects ranging from enterprise applications to middleware products developed for an equally wide range of domains.
In the recent years, Lasse has spent an increasing amount of time giving training courses and mentoring client teams on-site, helping them improve their performance and establish a culture of continuous learning. Aside from consulting leaders and managers, Lasse enjoys programming and works frequently hands-on with software teams.
In 2007, he published a book on Test Driven Development and is currently working on his next book. He is one of the pioneers of the Finnish agile community and speaks frequently at international conferences.
Justin Lee, Member, GlassFish and Grizzly teams, Oracle
Presenting: Building Websockets Applications with GlassFish/Grizzly Justin has been an active Java developer since 1996. He has worked on projects ranging from Web applications to systems integration. He has spoken internationally and at local user groups and is an active member of the open source community. For the last few years, he has been a member of the GlassFish and Grizzly teams where he works on the Web tier team. Justin is also a contributor to The Basement Coders Podcast.
Cameron McKenzie, Editor, TheServerSide.com
Presenting: What’s New on the Persistence Side: Getting to Know JPA 2.0
With over ten years of development experience, Cameron McKenzie brings with him a long and storied history with the Java platform and Java EE architectures. Cameron McKenzie is the author of five best selling Java titles, including What is WebSphere?, the SCJA Certification Guide, JSR168 Portlet Programming, and the ever popular Hibernate Made Easy. Along with emceeing the TSSJS event, Cameron, together with Janeice Del Vecchio, will be speaking about what’s new with the Java Persistence API, and what we can expect from the specification in the future.
Andrew Monkhouse, Author of the Sun Certified Java Developer Guide
Presenting: The Myths and Realities of Testing and Deployment in the Cloud
Andrew is a senior software engineer at Overstock.com in Salt Lake City - a job that he thinks is one of the best you can get.
Prior to Overstock.com, Andrew worked at companies of many different sizes, dealing with many different problems, in countries all over the world - from companies with only 2 developers all the way up to Amazon.com with its several thousand developers. During these jobs, he has worked on occupational health and safety systems, communication systems, airline systems, banking systems, and retail systems.
Andrew is best known for authoring the best selling Sun Certified Java Developers Guide (SCJD). He has also contributed to a number of other best selling Java titles, including Head First Servlets and JSP, Head First Design Patterns, and Head Rush Ajax.
Charles Nutter, Co-lead JRuby project, Engine Yard, Inc
Presenting: Language Track Keynote: Pump It Up: Maximizing the Value of an Existing Investment in Java with Ruby
Charles Nutter has been programming most of his life, as a Java developer for the past decade (named a Java Rock Star in 2007) and as a JRuby developer for over four years. He co-leads the JRuby project at Engine Yard, in an effort to bring the beauty of Ruby and the power of the JVM together. Along with the rest of the JRuby team, Charles recently celebrated the release of JRuby 1.5. The latest release makes it easier than ever for Java developers to take Ruby for a spin because of the seamless interaction it allows with commonly used Java components. Charles believes in open source and open standards and hopes his efforts on JRuby and other languages will help ensure that the many JVM users and enthusiasts have the best possible access to the benefits Ruby can bring.
Karen Tegan Padir, Vice President, Products & Marketing, EnterpriseDB
Presenting: Sponsored Keynote: Predicting Technology Ubiquity: What makes standards stick?
Karen is responsible for EnterpriseDB's product management and engineering as well as its global marketing initiatives, including demand generation, public relations and product marketing. She is a veteran software executive with 20 years of industry experience leading global business and engineering organizations.
Prior to joining EnterpriseDB, Karen was the vice president of MySQL and Software Infrastructure at Sun Microsystems where she was responsible for key Sun open source software GlassFish, Identity Management and SOA products. Prior to that Karen was vice president of engineering for infrastructure technology at Red Hat where she was responsible for Red Hat's Directory and Certificate server products, as well as Quality and Release Engineering of the Red Hat Enterprise Linux bundle. She is one of the founding members of the Java EE Platform at Sun.
She holds a Masters Degree in Business Administration and a Bachelors of Science Degree in Computer Science from Worcester Polytechnic Institute (WPI).
Kirk Pepperdine, Java Performance Tuning Expert
Presenting: Tools & Techniques Track Keynote: The (Not So) Dark Art of Performance Tuning, Extending VisualVM with JavaFX, and Performance Tuning with Cheap Drink and Poor Tools (Part Deux)
Kirk's career began in Biochemical Engineering, where he applied his researching skills in attaching computers to sheep and cats, synthesising radio-active tylenol and developing separation techniques using High Performance Liquid Chromatography for Ottawa University and the National Research Council of Canada. Subsequently, he became employed by the Canadian Department of Defense. Kirk admits that his work at the DoD involved programming Cray supercomputers as well as other Unix systems, but he refuses or is unable to divulge the exact nature of the applications in the department other than that they involved databases and high performance systems. After the DoD, Kirk consulted as an analyst at Florida Power & Light, then moved on to join GemStone Systems as a senior consultant. He is currently an independent consultant, and also an editor at TheServerSide.com. Kirk has been heavily involved in the performance aspects of applications since the start of his career, and has tuned applications involving a variety of languages from Cray Assembler, through C, Smalltalk and on to Java. Kirk has focused on Java since 1996. Kirk co-authored ANT Developer's Handbook, which was published in 2002.
Reza Rahman, Author, EJB 3 in Action; Member, Java EE 6 and EJB 3.1 expert groups
Presenting: A Quick Tour of the CDI Landscape, Effective Caching Across Enterprise Application Tiers, An Introduction to Seam 3, Testing Java EE 6 Applications: Tools and Techniques and Panelist for Keynote Panel Discussion: The Java Community Process: What's Wrong and How to Fix It
Reza Rahman is an independent consultant specializing in Java EE with clients across the greater Philadelphia and New York metropolitan areas. He is currently focused on the Resin EJB 3.1 Lite/Java EE 6 Web Profile implementation.
Reza is the author of EJB 3 in Action from Manning Publishing. He is a member of the Java EE 6 and EJB 3.1 expert groups. He is a frequent speaker at seminars, conferences and Java user groups including JavaOne as well as an avid contributor to TheServerSide.com.
Reza has been working with Java EE since its inception in the mid-nineties. He has developed enterprise systems in the financial, healthcare, telecommunications and publishing industries. Reza has been fortunate to have worked with EJB 2, Spring, EJB 3 and Seam.
Matt Raible, UI Consultant and Architect
Presenting: Everything You Ever Wanted To Know About Online Video and Comparing JVM Web Frameworks
Matt Raible has been building web applications for most of his adult life. He started tinkering with the web before Netscape 1.0 was even released. For the last 11 years, Matt has helped companies adopt open source technologies (Spring, Hibernate, Apache, Struts, Tapestry, Grails) and use them effectively. Matt has been a speaker at many conferences worldwide, including ApacheCon, JavaZone, Colorado Software Summit, No Fluff Just Stuff, and a host of others.
Matt is an author (Spring Live and Pro JSP), and an active "kick-ass technology" evangelist. He is the founder of AppFuse, a project which allows you to get started quickly with Java frameworks, as well as a committer on the Apache Roller and Apache Struts projects.
Scott Selikoff, Owner, Selikoff Solutions, LLC
Presenting: GWT Roundup: An Overview of Google's Web Toolkit and Hybrid Integration
Scott Selikoff is a senior Java/J2EE software developer with years of experience in Web-based database-driven architectures. He owns and operates Selikoff Solutions, a software consulting company servicing businesses in the NY/NJ/PA area.
Scott is the founder of Down Home Country Coding, a software blog that provides tools, tips and discussions for Java, GWT, and Flex developers. He is a member of the bst-player project, which provides support for integrating third-party video players into GWT applications. He is also an editor at TheDailyWTF.com, a humorous blog dedicated to "Curious Perversions in Information Technology."
Scott holds a Bachelor of Arts in Mathematics and Computer Science and a Masters of Engineering in Computer Science, both from Cornell University. His master's thesis was on the effectiveness of online educational software in the classroom.
Mark Spritzler, Independent Consultant, Regular Contributor to TheServerSide.com
Presenting: Comparing, Contrasting and Differentiating Between Mobile Platforms
Mark owns Perfect World Programming, LLC, a consulting and contract training firm specializing in Java, Enterprise Java, iPhone/iPad and Android development. He currently has 5 iPhone and 1 iPad applications on the Apple App Store. Most of the time, Mark is travelling the world training many software developers, working for companies like SpringSource and JBoss, as well as N-Tier Training and Sum Global. He was a technical editor on Head First Design Patterns and the K&B SCJP 5.0 Exam book.
James Strachan, Creator of the Groovy language and Software Fellow at FuseSource
Presenting: Apache Camel, the Integration Framework: Tales from the Leading Camel Experts
James is heavily involved in the open source community: he's been an Apache committer for 10 years, was one of the founders of the Apache ActiveMQ, Camel and ServiceMix projects, created the Groovy programming language and a number of other open source projects including Scalate, dom4j & Jaxen, and is a committer on a number of projects such as Apache Karaf, Maven, Lift and Jersey. James is currently a Software Fellow at FuseSource and has more than 20 years experience in enterprise software development with a background in finance, consulting and middleware.
Craig Tataryn, Editor, Basement Coders Podcast
Presenting: Evolving Enterprise Code with Scala & Wicket
Craig Tataryn started his career as a Visual Basic programmer, but don’t hold that against him! Around the year 2000 he discovered Struts and never looked back. A professional Java developer for over a decade, Craig has honed his skills on everything from Apache Wicket, CXF, Facebook and iPhone application development. Craig also enjoys his role as Editor at The Basement Coders Podcast.
Martijn Verburg, Consultant & Community Leader for Java and Open Source software
Presenting: Back to the Future with Java 7, The Diabolical Developer: What You Need to Do to Become Awesome and How Mega-Corp Open Sourced its Internal Software & Leveraged a Volunteer Community (And How Your Corporation Can Too!!!)
Martijn Verburg is a Dutch Born Kiwi who is also a permanent resident of a few other nations; he likes to call it being a "citizen of the world". Martijn co-leads the London (UK) Java User Group (JUG) and is also heavily involved in the London graduate/undergraduate developer, CTO and software craftsmanship communities. JavaRanch.com kindly invited him to be a bartender in 2008 and he's been humbled by the awesomeness of the community ever since.
He's currently working on somewhat complex JCA Connectors and an associated open source middleware platform (Ikasan), and also spends a good deal of time herding monkeys on another open source project that deals with creating characters for d20 based role playing games (PCGen).
More recently he's started writing The Well-Grounded Java Developer (covers Java 7) for Manning Publications (with Ben Evans) and can be found speaking at conferences on a wide range of topics including open sourcing software, software craftsmanship and the latest advancements in the OpenJDK.
Patrycja Wegrzynowicz, Founder and CTO at Yon Labs and Yon Consulting
Presenting: Anti-Patterns and Best Practices for Hibernate and Static Analysis in Search for Performance Anti-Patterns
Patrycja Wegrzynowicz is Founder and CTO at Yon Labs and Yon Consulting. There, she shapes the future direction of technological research in software as well as acts as a chief architect and consultant on projects from the field of automated software engineering, domain names, and Internet security. She is also associated with Warsaw University of Technology, where she serves as Technical Manager of Passim, an intelligent search engine. She is a regular speaker at top academic (e.g., OOPSLA, ASE) as well as technical conferences (e.g., JavaOne, Devoxx, JavaZone). Patrycja holds a master's degree in Computer Science and is currently finalizing her PhD at Warsaw University. Her research interests are focused on architectural and design patterns and anti-patterns along with automated software engineering, particularly static and dynamic analysis techniques to support program verification, comprehension, and optimization.
Jason Whaley, Founder Brink Systems, Freelance Java Developer
Presenting: It's Your Infrastructure Now - Developing Solutions in an IaaS World
Jason Whaley is a freelance Java developer and consultant specializing in service oriented architectures, enterprise integration, cloud computing, and continuous integration. Previously, Jason has worked in a variety of roles for both public companies and government institutions on several broad-ranging Java based projects. He is also a contributor to The Basement Coders Podcast.
Paul Wheaton, Owner, JavaRanch.com
Presenting: SEO in the Real World: A Java Case Study in What Works and What Doesn’t
Paul is a Sun Certified Java Programmer working out of Missoula, Montana.
Paul had a website dedicated to Java discussion that he started in November of 1998. He merged his site into JavaRanch when Kathy Sierra turned it over to him. His contributions include the Saloon, the Cattle Drive, most of the bunkhouse, some of the code barn, the coop, gramps and granny.
Jim White, Director of Training, Intertech, Inc.
Presenting: Java in the Microsoft Cloud: Deploying Enterprise Applications to Windows Azure
Jim White is the director of training, partner, and instructor with Intertech, Inc. He is co-author of Java 2 Micro Edition (Manning) and a frequent contributor to various journals and on-line magazines including recent articles at DevX.com. Jim also heads Intertech’s Cloud Computing practice and is co-lead of the Windows Azure User Group (www.azureug.net), which is a national virtual user group of over 750 members. He has twenty years of software development experience including time as a senior technical architect at Target Corporation.
2014-15/4479/en_head.json.gz/17818 | characterizing planetary systems
Downloadable Console
Console Tutorial #1
> worlds > reintroduction reintroduction
greg i-Phone snapshot of Difference Engine #2.
The systemic console started life over five years ago as a web-based applet for analyzing radial velocity data. The original version was a collaboration between Aaron Wolf (then a UCSC Undergraduate, now a Caltech Grad Student) and myself, and the Java was coded in its entirety by Aaron. Our goal was to clarify the analysis of radial velocity data — the “fitting” of extrasolar planets — by providing an interactive graphical interface. The look and feel were inspired by sound-mixing boards, in particular, the ICON Digital Console built by Digidesign:
Over the intervening years, the console has expanded greatly in scope. Stefano Meschiari has taken over as lead software developer, and has directed the long-running evolution with considerable skill. The console has been adopted by planet-hunting groups world-wide, as well as by classroom instructors and by a large community of users from the public.
Tuesday’s post pointed to our new peer-reviewed article (Meschiari et al. 2009) that describes the algorithms under the console’s hood, and now that the code base has matured, we’re developing documentation that can serve the widely varying needs of our users. We also intend to return the systemic backend collaboration to the forefront of relevance. A great deal of very interesting work has been done by the backend users, and it can be leveraged.
As the first step, we’re updating and expanding the tutorials, which have been largely gathering dust since November 2005. Following the page break, the remainder of this post updates tutorial #1. If you’ve ever had interest in using the console, now’s the time to start…
Console Tutorial #1: A Fish in a Barrel — HD 4208b
How do you use the console to find planets?
Stellar radial velocity data have been used to infer the presence of hundreds of extrasolar planetary systems, and in nearly every case, the radial velocity data have been tabulated in the papers that announce the discoveries. In this tutorial, we introduce the console, and use it to “discover” a planet orbiting the star HD 4208. (The tutorial assumes version 1.0.90 of the console, running on Mac OS X 10.5.7, if you are using a different set-up, there may be minor differences in details and appearance).
A radial velocity measurement of a star is the component of the velocity of the star along the line of sight from the Earth to the star. Most of this radial velocity stems from the natural motion of the star with respect to our solar system. The Sun orbits the center of the galaxy at a speed of approximately 250 kilometers per second. Most of the stars in the solar neighborhood are moving in roughly the same manner, but stellar orbits are not perfectly circular. Some of the stars in the solar vicinity are moving more quickly than the Sun, while others are moving more slowly. The average difference in orbital velocity between neighboring stars is about 20 kilometers per second. Part of this velocity will be in the so-called transverse direction, the rest is along the radial line connecting our solar system to the star. The Alpha Centauri system, for example, is headed toward us with a radial velocity of -21.6 kilometers per second. By carefully noting the Doppler shift of the stellar lines, it is possible (for favorable cases) to measure the line-of-sight speed of the star with a precision of order 1 meter per second.
In addition to the random motion that a given star has with respect to the Sun, there is also a small superimposed component of motion that is generated as the star wobbles back and forth in response to any planets that are in orbit around it. For the case of a single planet in a circular orbit, the situation is easily visualized by imagining that the star and the planet are attached to the opposite ends of a rigid rod. If you wish to balance the rod on a fingertip, then you must position your finger under a point on the rod that is much closer to the heavy star than it is to the less massive planet. For example, if the star is a hundred times more massive than the planet, then the point of balance lies one hundred times closer to the star than it does to the planet. As the orbit proceeds, one simply swings the star and the planet around the point of balance. The planet executes a large circle, and the star executes (in the same amount of time) a circle that is one hundred times smaller.
An analogous situation applies when the orbits are eccentric.
To get started, install the “cutting-edge” console on your computer, and double-click the Console.jar icon. When initialization is finished, you’ll see the main console window:
At the risk of sounding silly, it’s worth remarking that the console is ready for rough-and-ready experimentation. Push buttons, slide sliders, and get a feeling for how things work. While it’s not impossible to effectively hang the software, you can always force-quit and restart, and in a worst-case scenario you can always download a fresh-baked copy. It’s free! (Also, in response to a query from a well-known planet hunter, under no circumstances will the console “phone home”.)
When the console first appears, it is set by default to the radial velocity data sets for the multiple-planet-bearing star 55 Cancri. This tutorial walks you through the much less complex data set associated with the star HD 4208. You change data sets by clicking on the star icon
and then selecting HD 4208 from the ensuing pop-up window:
This selection plots the the published HD 4208 radial velocity data set in the console’s data window:
HD 4208 is a sunlike star lying roughly 110 light years from Earth. It’s too faint to see with the naked eye, but it can easily be spotted with binoculars or a small telescope if you know where to look. In 2002, Vogt et al. published a data set containing 35 independent radial velocity observations of the star. These measurements were accumulated at the Keck telescope over an 1,821 day (~5 year) interval starting on JD 2450366.9657 (11:10 AM on Oct. 10, 1996, Universal Time). The observations are spaced unevenly over the years because the California-Carnegie planet search team received only limited blocks of time at the telescope, and also because the star can only be easily observed from Hawaii from Ju | 计算机 |
2014-15/4479/en_head.json.gz/18774 | Previous789101112131415161718192021222324252627Next
Nurturing Entrepreneurship at Every Level
Hileman Jane
Summary: The founder and CEO of American Reading Company, Jane Hileman, has seen her company grow from a few teachers ten years ago to 111 employees today who provide books and reading goals for students to encourage a love of reading. Hileman's goals are revenue growth, profitability, and success.
Hennessy John
VideoSeries Resource
Summary: Dr. John Hennessy has been President of Stanford University since 2000. He became a Stanford faculty member in 1977. He rose through the academic ranks to full professorship in 1986 and was the inaugural Willard R. and
Inez Kerr Bell Professor of Electrical Engineering and Computer Science from 1987 to 2004. A pioneer in computer architecture, in 1981 Dr. Hennessy drew together researchers to focus on a computer architecture known as RISC (Reduced
Instruction Set Computer), a technology that has revolutionized the computer industry by increasing performance while reducing costs. In 1984, he used his sabbatical year to found MIPS Computer Systems Inc. to commercialize his research in
RISC processors. Dr. Hennessy is a recipient of the 2000 IEEE John von Neumann Medal, a 2004 NEC C&C Prize for lifetime achievement in computer science and engineering, and a 2005 Founders Award from the American Academy of Arts and
Sciences. Dr. Hennessy earned his bachelor's degree in electrical engineering from Villanova University and his master's and doctoral degrees in computer science from the State University of New York at Stony Brook.
Surviving the Lean Years
Heinmiller Robert
Summary: How do you survive personally when your business goes bust? In an article that is both realistic and compassionate, the author lays out a financial plan for the seven lean years. Stash away cash during the fat years, downsize quickly once the handwriting is on the wall, and consider moving to a lower-cost geographic area are among his suggestions.
Summary: How do you deal with things when your business is on the verge of going bust? This author lays out a financial plan for working through lean years to sustain a business. Key tips: stash away cash during good times, downsize quickly if need be, and consider relocating to a lower-cost area of the country.
Summary: Jeff Hawkins is the Founder of Numenta, but he is also well known as the co-founder of two companies, Palm and Handspring, and as the architect of many computing products, such as the PalmPilot and the Treo smartphone.
Throughout his life Hawkins has also had a deep interest in neuroscience and theories of the neocortex. His interest in the brain led him to create the non-profit Redwood Neuroscience Institute (RNI), a scientific organization focused on
understanding how the human neocortex processes information. While at RNI, Hawkins developed a theory of neocortex which appeared in his 2004 book, On Intelligence. Along with Dileep George and Donna Dubinsky, Hawkins
founded Numenta in 2005 to develop a technology platform derived from his theory. It is his hope that Numenta will play a catalytic role in creating an industry based on this theory and technology. Jeff Hawkins earned his B.S. in
electrical engineering from Cornell University in 1979. He was elected to the National Academy of Engineering in 2003.
The Three Shows
Haupt Norbert
Summary: The author asserts there are three tasks entrepreneurs need to do to attract the attention of angel investors. They are "the three shows": show up, show enthusiasm, and show humility.
Entrepreneurial Thought Leaders Lecture Series
Hansson David Heinemeier
Summary: Danish-born David Heinemeier Hansson is the programmer and creator of the popular Ruby on Rails web development framework and the Instiki wiki. He is also a partner at the Web-based software development firm 37signals,
based in Chicago. Ruby on Rails provides a "basic development environment" for programmers, according to Wikipedia.org. Based on the programming language Ruby (developed by Japanese programmer Yukihiro Matsumoto in 1995), Ruby on Rails
focuses on user interface and "convention over configuration"; meaning, developers can focus on the unique qualities of their Web site or program rather than the building blocks that every application may require. Released in 2004, Ruby on
Rails has been incorporated into many applications used by some of the biggest companies, from Twitter to Apple's 2007 release of Mac OS X v.10.5 "Leopard." Aside from his development of Ruby on Rails, Heinemeier Hansson also works as a
partner for Web-based software development firm 37signals. Joining the company in 2003, he has helped develop Basecamp, Campfire, Backpack and other Web-based applications. Working in similar ways like Web-based e-mail services like Yahoo!
e-mail and Google's Gmail, 37signals hosts a broad range of IT services for companies, including project management to information-sharing. The firm's software has been used by Kellogg's, Sun Microsystems and even Obama '08. Hansson
received his bachelor's degree from the Copenhagen Business School in 2005. In that same year, he moved to Chicago and received Hacker of the Year honors for his work on Ruby on Rails from Google and O'Reilly Media. He runs a blog called
LoudThinking.com.
Managing a Mail-Order Marriage: Building Trust With Your VC Investor
Hammer Katherine
Curle Robin Lea
Summary: Venture capitalists play a critical funding role, as entrepreneurial ventures move into the big leagues, but the price these investors extract is often too high. Entrepreneurs should consider the relationship analogous to marrying a mail-order bride and proceed accordingly, according to this comprehensive and entertaining article by two women who co-founded a software company. Tips include advising company owners to build trust with VCs and, until that is established, dealing with them in a way that allows for "a reasonable balance of power."
Endeavor's Entrepreneurs' Summit
Green Jason
Friel Tom
Frankel David
Cline Michael
Summary: J. Michael Cline is the founding Partner of Accretive LLC. Michael and other Accretive principals founded Exult, Xchanging, Fandango and Accretive Health. Before founding Accretive Michael spent 10 years as General
Partner at General Atlantic Partners helping build General Atlantic into the world's largest private investment firm focused on software and related investments. Prior to General Atlantic, Michael was an associate at McKinsey &
Company. Michael received his MBA from Harvard Business School where he was a Baker Scholar and he received a BS from Cornell University. He serves on the boards of Accretive Commerce, Fandango, Accretive Health and Willow. He is a Trustee
of the Wildlife Conservation Society (WCS) where he chairs the Tigers Forever initiative - the world's largest effort in global tiger conservation and is a Trustee of the Brunswick School. He also serves on the board of the National Fish
and Wildlife Foundation, Endeavor Global and the Harvard Business School Rock Center for Entrepreneurship.
Collecting Well: Whose Money Is It Anyway?
Green Leonard
Altman John
Summary: Entrepreneurs are apt to happen upon found money by more skillfully | 计算机 |
2014-15/4479/en_head.json.gz/18778 | ERCIM News 64
This issue in pdf(76 pages; 12Mb)
Next issue:April 2006
Next Special theme:Space Exploration
AVISPA: Automated Validation of Internet Security Protocols and Applications
by Alessandro Armando, David Basin, Jorge Cuellar, Michael Rusinowitch and Luca Viganò
AVISPA is a push-button tool for the Automated Validation of Internet Security Protocols and Applications. It provides a modular and expressive formal language for specifying protocols and their security properties, and integrates different back-ends that implement a variety of state-of-the-art automatic analysis techniques. Experimental results, carried out on a large library of Internet security protocols, indicate that the AVISPA tool is the state of the art for automatic security protocols. No other tool combines the same scope and robustness with such performance and scalability.
With the spread of the Internet and network-based services and the development of new technological possibilities, the number and scale of new security protocols under development is outpacing the human ability to rigorously analyse and validate them. This is an increasingly serious problem for standardization organizations like the Internet Engineering Task Force (IETF), the International Telecommunication Union (ITU) and the World Wide Web Consortium (W3C). It also affects companies whose products and services depend on the rapid standardization and correct functioning of these protocols, and users whose rights and freedoms (eg the right to privacy of personal data) depend on a secure infrastructure.
Designing secure protocols is a hard problem. In open networks such as the Internet, protocols should work even under worst-case assumptions, eg that messages may be seen or tampered with by an intruder (also called the attacker or spy). Severe attacks can be conducted without breaking cryptography, by exploiting weaknesses in the protocols themselves. Examples of this are 'masquerading attacks', in which an attacker impersonates an honest agent, or 'replay attacks', in which messages from one protocol session (ie execution of the protocol) are used in another session. The possibility of these attacks sometimes stems from subtle mistakes in protocol design. Typically these attacks go unnoticed, as it is difficult for humans, despite careful protocol inspection, to determine all the complex ways in which protocol sessions can be interleaved, with the possible interference of a malicious intruder.
Tools that support a rigorous analysis of security protocols are thus of great importance in accelerating and improving the development of the next generation of security protocols. Ideally, these tools should be completely automated, robust, expressive and easily usable, so that they can be integrated into protocol development and standardization processes.
Although in the last decade many new techniques that can automatically analyse small and medium-scale protocols have been developed, moving up to large-scale Internet security protocols remains a challenge. The AVISPA tool is a push-button tool for the Automated Validation of Internet Security-sensitive Protocols and Applications, which rises to this challenge in a systematic way. First, it provides a modular and expressive formal language for specifying security protocols and properties. Second, it integrates different back-ends that implement a variety of automatic analysis techniques ranging from protocol falsification (by finding an attack on the input protocol) to abstraction-based verification methods for both finite and infinite numbers of sessions. To the best of our knowledge, no other tool exhibits the same scope and robustness while enjoying the same performance and scalability.
AVISPA Web-based graphical user interface.
As shown in the figure, AVISPA is equipped with a Web-based graphical user interface that supports the editing of protocol specifications and allows the user to select and configure the back-ends integrated into the tool. If an attack on a protocol is found, the tool displays it as a message-sequence chart. The interface features specialized menus for both novice and expert users. A protocol designer interacts with the tool by specifying a security problem (ie a protocol paired with a security property that the protocol is expected to achieve) in the High-Level Protocol Specification Language (HLPSL). The HLPSL is an expressive, modular, role-based, formal language that is used to specify control-flow patterns, data-structures, alternative intruder models and complex security properties, as well as different cryptographic primitives and their algebraic properties. These features make HLPSL well suited for specifying modern, industrial-scale protocols.
In order to demonstrate the effectiveness of AVISPA, we selected a substantial set of security problems associated with protocols that have recently been, or are currently being standardized by organizations like the Internet Engineering Task Force IETF. We then formalized a large subset of these protocols in HLPSL. The result of this specification effort is the AVISPA Library (publicly available on the AVISPA Web site), which at present comprises 215 security problems derived from 48 protocols. Most of the problems in the library can be solved by the AVISPA tool in a few seconds. Moreover, AVISPA detected a number of previously unknown attacks on some of the protocols analysed, eg on some protocols of the ISO-PK family, on the IKEv2-DS protocol, and on the H.530 protocol.
The AVISPA tool can be freely accessed either through its Web-based interface or by downloading and installing the software distribution. For more details, please refer to the AVISPA Web site.
AVISPA has been developed in the context of the FET Open Project IST-2001-39252 'AVISPA: Automated Validation of Internet Security Protocols and Applications', in collaboration with the University of Genova, INRIA Lorraine, ETH Zurich and Siemens Munich.
http://www.avispa-project.org
Alessandro Armando, Università di Genova, Italy
Tel: +39 010353 2216
E-mail: armandodist.unige.it | 计算机 |
2014-15/4479/en_head.json.gz/19961 | Goto Search
Lebanese e-Government portal: DAWLATI
Thematic Website
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement
DAWLATI (Arabic for “My State”) provides Lebanese citizens with the following services: information about more than 4500 administrative transactions in the Lebanese administration, presented in a simple, accurate and constantly updated manner; electronic forms available for download, electronic filling and printing; online registration with a personalized space and storage of personal documents; and electronic services, to be announced periodically, with different administrations.
Website: www.dawlati.gov.lb
Mobile applications: DAWLATI mobile applications (ANDROID 4+ / APPLE 6+ /BLACKBERRY)
0 Views | Rated 0.0 | Created On : Nov 05, 2013
Visit | More...
International Journal of eGovernance and Networks (IJeN)
Electronic and Mobile Government, Knowledge Management in Government, Internet Governance
International Journal of eGovernance and Networks (IJeN) is a peer-reviewed publication devoted to broadening the understanding of contemporary developments and challenges in administrative and policy practices, the promotion of international scholarly and practitioner dialogs, the encouragement of international comparisons, and the application of new techniques and approaches in electronic systems of governing. IJeN intends to fill the need for a venue in which scholars and practitioners with different viewpoints bring their substantive approaches to work on various legal, social, political, and administrative challenges related to e-Governance issues. IJeN includes cutting edge empirical and theoretical research, opinions from leading scholars and practitioners, and case studies.
Call for Manuscript
IJeN uses a blind peer-review process and therefore manuscripts should be prepared in accordance with the American Psychological Association (APA) Guidelines as follows: no longer than 35 pages, including all elements (abstract, endnotes, references, tables, figures, appendices, etc.), formatted in Times New Roman, 12-point type, double-spaced with one inch margins. Please do not use automatic formatting features, and convert footnotes to endnotes.
Submissions should include the title of the manuscript, an abstract of approximately 150 words, an opinion for practitioners of 100 words, and a list of key words on the title page but do not include the author(s) name on the title page. Please ensure to remove any indications of authorship in the body of the manuscript. The author(s) name, affiliation, and contact information should be listed on a separate page preceding the title page of the manuscript. Please submit your manuscript for review in a widely accepted word processing format such as Microsoft Word.
Submission to IJeN implies that your article has not been simultaneously submitted to other journals or previously has not been published elsewhere.
Submissions should be directed to the attention of:
Younhee Kim
Managing Editor at kimy@ecu.edu
0 Views | Rated 0.0 | Created On : Oct 08, 2013
e-Governance in Small States
Journals, Training Material
Electronic and Mobile Government, ICT for MDGs, Internet Governance
ICTs can create digital pathways between citizens and governments that are affordable, accessible and widespread. This offers the opportunity for developing small states to leapfrog generations of technology when seeking to enhance governance or to deepen democracy through promoting the participation of citizens in processes that affect their lives and welfare. For small developing countries, especially those in the early stages of building an e-Government infrastructure, it is vital that they understand their position in terms of their e-readiness, reflect upon the intrinsic components of an e-Governance action plan, and draw lessons from the successes and failures of the various e-Government initiatives undertaken by other countries, developed or developing. This book aims to strengthen the understanding of policy-makers by outlining the conditions and processes involved in planning and execution of e-Government projects.
0 Views | Rated 0.0 | Created On : Sep 10, 2013
Going for Governance: Lessons Learned from Advisory Interventions by the Royal Tropical Institute
Knowledge Management in Government, Internet Governance
The 15 cases presented in this book illustrate the different kinds of advice and support that advisors from the Royal Tropical Institute (KIT) have delivered to help partners around the world improve people’ s lives by "going for governance.” Taken as a whole, these accounts show the range of processes and interventions that have helped strengthen governance in diverse settings and situations. Taken individually, each case study can be used as reference materials for a variety of training courses. The aim of this book is to provide ideas and inspiration for those who are asked to advise on governance issues in various kinds of development programs and sectors, or explore opportunities to use innovative and creative governance approaches and tools in KIT’ s joint initiatives with partners in the South.
Masters Degree Online - Public Administration
Public Administration Schools
Electronic and Mobile Government, ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management, Internet Governance
Masters Degree Online in public administration provides information to current and prospective graduate students who are pursuing a career in public administration or related fields. Its directory allows you to search schools by institution size, geographic area, tuition cost, and school type. Its primary focus is online master's degree programs, but we acknowledge that on-campus programs at traditional brick-and-mortar schools are the best options for some students. Therefore, you can search for both online and on-campus programs here.
Click here for Online Masters Degree in Public Administration.
0 Views | Rated 0.0 | Created On : Jun 28, 2013
UNCTAD Measuring ICT Website
The Measuring ICT Website provides information on the development of ICT statistics and indicators worldwide, with an emphasis on supporting ICT policies and the information economies in developing countries. The objectives of the Measuring ICT Website are to:
- Provide information to experts and the general public on progress in the field of ICT measurement, particularly by National Statistical Offices and international organizations
- Promote the discussion between practitioners of ICT statistical work on best practices, experiences, methodology, presentations, theory, etc.
- Contribute to the follow-up to the World Summit on the Information Society (WSIS)
- Support the work of UNCTAD on measuring the information economy, and of the Partnership on Measuring ICT for Development.
The Measuring ICT Website is maintained by the ICT Analysis Section of UNCTAD. The Section is part of the Science, Technology and ICT Branch, in the Division on Technology and Logistics.
Galilee International Management Institute
Training Institutions, Public Administration Schools, Training Material
ICT for MDGs, Knowledge Management in Government, Citizen Engagement, Institution and HR Management
Based in beautiful northern Israel, the Galilee Institute is a leading public training institution, offering advanced leadership, management and capacity building seminars to professionals from more than 160 transitional and industrialised countries around the world. The institute enjoys a global reputation as a top management institute, and to date, more than 10,000 senior managers, administrators and planners have graduated from the international programmes at the institute. In addition to its regularly scheduled seminars, the institute also offers tailor-made training programmes, designed to meet the requirements of governments and other international organisations. All programmes are available in English, French, Spanish, Portuguese, Russian and Arabic, and other languages are available upon request.
Click here to visit Galilee International Management Institute.
0 Views | Rated 0.0 | Created On : May 06, 2013
Approaches to Urban Slums a Multimedia Sourcebook on Adaptive and Proactive Strategies
This sourcebook, Approaches to Urban Slums: A Multimedia Sourcebook on Adaptive and Proactive Strategies, edited by Barjor Mehta & Arish Dastur of The World Bank, brings together the growing and rich body of knowledge on the vital issue of improving the lives of existing slum dwellers, while simultaneously planning for new urban growth in a way which ensures future urban residents are not forced to live in slums. The sourcebook's user-friendly multimedia approach and informal dialogue greatly increase the accessibility of the content, as well as the range of topics and information that are covered. Totaling over nine hours of modular viewing time, the sourcebook will be an essential resource for practitioners, policy makers, as well as students and academics. It contains the latest perspectives on the burning issues, and cutting edge approaches to dealing with the problems that afflict the living conditions of hundreds of millions of poor people. The sourcebook charts unfamiliar waters in two ways.
0 Views | Rated 0.0 | Created On : Mar 31, 2013
Education Index
Training Institutions, Public Administration Schools, Public Institutions, Statistical Databases
The Education Index at PhDs.org is the premier source of clear and educational data about undergraduate and graduate programs in the United States. We use publicly available numbers from the National Center for Education Statistics (NCES), and strive to present them in a simple and easy-to-digest way. Our desire is to make it easy for you to pick the best college you possibly can with this index: a college that fits your financial, social and educational interests and goals.
Click here to visit the Education Index.
The International Council for Caring Communities (ICCC)
The International Council for Caring Communities (ICCC) is a not-for-profit organization that has Special Consultative Status with the Economic and Social Council of the United Nations.
ICCC acts as a bridge linking government, civil society organizations, the private sector, universities and the United Nations in their efforts at sparking new ways of viewing an integrated society for all ages.
Since its inception, ICCC has been committed to the principle that private enterprises and individuals can help society improve communities and social public activities. This is one of ICCC's essential goals. Twenty-three renowned world leaders since 1996 have been presented with ICCC "Caring" Awards for their contributions to society.
2014 International Student Design Competition
Music as a Global Resource: Solutions for Social and Economic Issues Compendium - Third Edition
2011 ICCC Compendium on Music As a Natural Resource
2012 International Student Design Competition Winners
Statistical Databases
Training Institutions
UN Research Institutions
Public Service Awards Programs | 计算机 |
2014-15/4479/en_head.json.gz/20639 | Laptop Friday, August 15, 2008
A laptop computer or laptop (also notebook computer, notebook and notepad) is a small mobile computer, typically weighing 3 to 12 pounds (1.4 to 5.4 kg), although older laptops may weigh more. Laptops usually run on a single main battery or from an external AC/DC adapter that charges the battery while it also supplies power to the computer itself, even in the event of a power failure. This very powerful main battery should not be confused with the much smaller battery nearly all computers use to run the real-time clock and backup BIOS configuration into the CMOS memory when the computer is without power. Laptops contain components that are similar to their desktop counterparts and perform the same functions, but are miniaturized and optimized for mobile use and efficient power consumption, although typically less powerful for the same price. Laptops usually have liquid crystal displays and most of them use different memory modules for their random access memory (RAM), for instance, SO-DIMM in lieu of the larger DIMMs. In addition to a built-in keyboard, they may utilize a touchpad (also known as a trackpad) or a pointing stick for input, though an external keyboard or mouse can usually be attached.
A computer network is a group of interconnected computers. Networks may be classified according to a wide variety of characteristics. This article provides a general overview of some types and categories and presents the basic components of a network.

Connection method
Computer networks can also be classified according to the hardware technology that is used to connect the individual devices in the network, such as optical fibre, Ethernet, Wireless LAN, HomePNA, or power line communication. Ethernet uses physical wiring to connect devices. Often deployed devices are hubs, switches, bridges, and/or routers. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves as the transmission medium.

Functional relationship (Network Architectures)
Computer networks may be classified according to the functional relationships which exist among the elements of the network, e.g., Active Networking, client-server and peer-to-peer (workgroup) architecture.

Network topology
Main article: Network Topology
Computer networks may be classified according to the network topology upon which the network is based, such as bus network, star network, ring network, mesh network, star-bus network, tree or hierarchical topology network, etc. Network topology signifies the way in which devices in the network see their logical relations to one another. The use of the term "logical" here is significant. That is, network topology is independent of the "physical" layout of the network. Even if networked computers are physically placed in a linear arrangement, if they are connected via a hub, the network has a star topology, rather than a bus topology. In this regard the visual and operational characteristics of a network are distinct; the logical network topology is not necessarily the same as the physical layout.

Types of networks
Below is a list of the most common types of computer networks in order of scale.

Personal Area Network (PAN)
Main article: Personal area network
A personal area network (PAN) is a computer network used for communication among computer devices close to one person. Some examples of devices that are used in a PAN are printers, fax machines, telephones, PDAs or scanners. The reach of a PAN is typically within about 20-30 feet (approximately 6-9 metres). Personal area networks may be wired with computer buses such as USB[1] and FireWire. A wireless personal area network (WPAN) can also be made possible with network technologies such as IrDA and Bluetooth.

Local Area Network (LAN)
Main article: Local Area Network
A network covering a small geographic area, like a home, office, or building. Current LANs are most likely to be based on Ethernet technology. For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g., printers and servers) and to connect to the internet. On a wired LAN, PCs in the library are typically connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnection devices and eventually connect to the internet. The cables to the servers are typically on Cat 5e enhanced cable, which will support IEEE 802.3 at 1 Gbit/s. A wireless LAN may exist using a different IEEE protocol, 802.11b or 802.11g. The staff computers (bright green in the figure) can get to the color printer, checkout records, and the academic network and the Internet. All user computers can get to the Internet and the card catalog. Each workgroup can get to its local printer. Note that the printers are not accessible from outside their workgroup.

(Figure: Typical library network, in a branching tree topology and controlled access to resources.)

All interconnected devices must understand the network layer (layer 3), because they are handling multiple subnets (the different colors). Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user device and a Gigabit Ethernet connection to the central router, could be called "layer 3 switches" because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and academic networks' customer access routers.

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at speeds up to 10 Gbit/s. This is the data transfer rate. IEEE has projects investigating the standardization of 100 Gbit/s, and possibly 40 Gbit/s.

Campus Area Network (CAN)
Main article: Campus Area Network
A network that connects two or more LANs but that is limited to a specific and contiguous geographical area such as a college campus, industrial complex, or a military base. A CAN may be considered a type of MAN (metropolitan area network), but is generally limited to an area that is smaller than a typical MAN. This term is most often used to discuss the implementation of networks for a contiguous area. This should not be confused with a Controller Area Network. A LAN connects network devices over a relatively short distance. A networked office building, school, or home usually contains a single LAN, though sometimes one building will contain a few small LANs (perhaps one per room), and occasionally a LAN will span a group of nearby buildings. In TCP/IP networking, a LAN is often but not always implemented as a single IP subnet.
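A minimal sketch of that last point, using Python's standard ipaddress module and invented addresses, shows how hosts on a LAN share one subnet while outside hosts must be reached through a router:

```python
import ipaddress

# A typical small LAN implemented as one IPv4 subnet.
lan = ipaddress.ip_network("192.168.1.0/24")

hosts = {
    "staff-pc":      ipaddress.ip_address("192.168.1.10"),
    "color-printer": ipaddress.ip_address("192.168.1.20"),
    "remote-server": ipaddress.ip_address("203.0.113.5"),
}

for name, addr in hosts.items():
    location = "on the LAN" if addr in lan else "outside the LAN (reached via a router)"
    print(f"{name:14s} {addr!s:14s} {location}")

# A /24 subnet leaves 254 usable host addresses (network and broadcast excluded).
print(f"{lan.num_addresses - 2} usable host addresses in {lan}")
```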
In IEEE Project 802, this involves a succession of terrestrial Wireless local area networks (WLAN).[2]Internetwork Main article: Internetwork Two or more networks or network segments connected using devices that operate at layer 3 (the 'network' layer) of the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private, commercial, industrial, or governmental networks may also be defined as an internetwork. In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants of internetwork, depending on who administers and who participates in them: IntranetExtranetInternet Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the intranet or extranet is normally protected from being accessed from the Internet without proper authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as a portal for access to portions of an extranet.Intranet Main article: Intranet An intranet is a set of interconnected networks, using the Internet Protocol and uses IP-based tools such as web browsers and ftp tools, that is under the control of a single administrative entity. That administrative entity closes the intranet to the rest of the world, and allows only specific users. Most commonly, an intranet is the internal network of a company or other enterprise. A large intranet will typically have its own web server to provide users with browseable information.Extranet Main article: Extranet An extranet is a network or internetwork that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other usually, but not necessarily, trusted organizations or entities (e.g. a company's customers may be given access to some part of its intranet creating in this way an extranet, while at the same time the customers may not be considered 'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must have at least one connection with an external network.Internet Main article: Internet A specific internetwork, consisting of a worldwide interconnection of governmental, academic, public, and private networks based upon the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the U.S. Department of Defense – also home to the World Wide Web (WWW) and referred to as the 'Internet' with a capital 'I' to distinguish it from other generic internetworks. Participants in the Internet use the Internet Protocol Suite and IP Addresses allocated by address registries. Service providers and large enterprises exchange information about the reachability of their address ranges through the Border Gateway Protocol (BGP).Basic Hardware Components All networks are made up of basic hardware building blocks to interconnect network nodes, such as Network Interface Cards (NICs), Bridges, Hubs, Switches, and Routers. In addition, some method of connecting these building blocks is required, usually in the form of galvanic cable (most commonly Category 5 cable). 
Less common are microwave links (as in IEEE 802.11) or optical cable ("optical fiber").Network Interface Cards Main article: Network card A network card, network adapter or NIC (network interface card) is a piece of computer hardware designed to allow computers to communicate over a computer network. It provides physical access to a networking medium and often provides a low-level addressing system through the use of MAC addresses. It allows users to connect to each other either by using cables or wirelessly.Repeaters Main article: Repeater A repeater is an electronic device that receives a signal and retransmits it at a higher level or higher power, or onto the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair ethernet configurations, repeaters are required for cable runs longer than 100 meters.Hubs Main article: Network hub A hub contains multiple ports. When a packet arrives at one port, it is copied to all the ports of the hub for transmission. When the packets are copied, the destination address in the frame does not change to a broadcast address. It does this in a rudimentary way, it simply copies the data to all of the Nodes connected to the hub.[3]Bridges Main article: Network bridge A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges do not promiscuously copy traffic to all ports, as hubs do, but learns which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast was received. Bridges learn the association of ports and addresses by examining the source address of frames that it sees on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time that a previously unknown destination address is seen, the bridge will forward the frame to all ports other than the one on which the frame arrived. Bridges come in three basic types: Local bridges: Directly connect local area networks (LANs)Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced by routers.Wireless bridges: Can be used to join LANs or connect remote stations to LANs.Switches Main article: Network switch A switch is a device that performs switching. Specifically, it forwards and filters OSI layer 2 datagrams (chunk of data communication) between ports (connected cables) based on the Mac-Addresses in the packets.[4] This is distinct from a hub in that it only forwards the datagrams to the ports involved in the communications rather than all ports connected. Strictly speaking, a switch is not capable of routing traffic based on IP address (layer 3) which is necessary for communicating between network segments or within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still called switches as a marketing term. 
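The bridge behaviour just described (learn source addresses, forward known unicasts out of a single port, flood everything else) can be sketched in a few lines of Python. Ports and MAC addresses are invented, and this is an illustration rather than a faithful model of any real bridge:

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports      # e.g. [1, 2, 3, 4]
            self.table = {}         # MAC address -> port it was last seen on

        def handle_frame(self, in_port, src_mac, dst_mac):
            self.table[src_mac] = in_port            # learn from the source address
            if dst_mac != BROADCAST and dst_mac in self.table:
                return [self.table[dst_mac]]         # known unicast: single port
            return [p for p in self.ports if p != in_port]   # flood everything else

    bridge = LearningBridge(ports=[1, 2, 3, 4])
    print(bridge.handle_frame(1, "aa:aa:aa:aa:aa:01", "bb:bb:bb:bb:bb:02"))  # unknown: flood
    print(bridge.handle_frame(2, "bb:bb:bb:bb:bb:02", "aa:aa:aa:aa:aa:01"))  # learned: [1]
    print(bridge.handle_frame(3, "cc:cc:cc:cc:cc:03", BROADCAST))            # broadcast: flood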
A switch normally has numerous ports with the intention that most or all of the network be connected directly to a switch, or another switch that is in turn connected to a switch.[5] Switches is a marketing term that encompasses routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more OSI layers, including physical, data link, network, or transport (i.e., end-to-end). A device that operates simultaneously at more than one of these layers is called a multilayer switch. Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand networking. Many experienced network designers and operators recommend starting with the logic of devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device selection is an advanced topic that may lead to selecting particular implementations, but multilayer switching is simply not a real-world design concept.Routers Main article: Router Routers are networking devices that forward data packets between networks using headers and forwarding tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP model or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media (RFC 1812). This is accomplished by examining the Header of a data packet, and making a decision on the next hop to which it should be sent (RFC 1812) They use preconfigured static routes, status of their hardware interfaces, and routing protocols to select the best route between any two subnets. A router is connected to at least two networks, commonly two LANs or WANs or a LAN and its ISP's network. Some DSL and cable modems, for home (and even office) use, have been integrated with routers to allow multiple home/office computers to access the Internet through the same connection. Many of these new devices also consist of wireless access points (waps) or wireless routers to allow for IEEE 802.11b/g wireless enabled devices to connect to the network without the need for a cabled connection.
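A rough sketch of the router forwarding decision described above, using longest-prefix match over a tiny invented routing table (illustrative Python only; real routers also consult routing protocols, static configuration, and interface state):

    import ipaddress

    routing_table = [
        (ipaddress.ip_network("0.0.0.0/0"),     "ISP uplink"),         # default route
        (ipaddress.ip_network("10.0.0.0/8"),    "core router"),
        (ipaddress.ip_network("10.20.0.0/16"),  "distribution router"),
        (ipaddress.ip_network("10.20.30.0/24"), "access router"),
    ]

    def next_hop(destination):
        dst = ipaddress.ip_address(destination)
        matches = [(net, hop) for net, hop in routing_table if dst in net]
        # The most specific (longest) matching prefix decides the next hop.
        return max(matches, key=lambda item: item[0].prefixlen)

    for dst in ("10.20.30.7", "10.20.99.1", "10.9.9.9", "192.0.2.1"):
        net, hop = next_hop(dst)
        print(f"{dst:>12} -> {hop} (matched {net})")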
Hardware is a general term that refers to the physical artifacts of a technology. It may also mean the physical components of a computer system, in the form of computer hardware. Hardware historically meant the metal parts and fittings that were used to make wooden products stronger, more functional, longer lasting and easier to fabricate or assemble.[citation needed] In modern usage it includes equipment such as keys, locks, hinges, latches, corners, handles, wire, chains, plumbing supplies, tools, utensils, cutlery and machine parts, especially when they are made of metal.[citation needed] In the United States, this type of hardware has been traditionally sold in hardware stores, a term also used to a lesser extent in the UK.[citation needed] In a more colloquial sense, hardware can refer to major items of military equipment, such as tanks, aircraft or ships.[citation needed] In slang, the term refers to trophies and other physical representations of awards.[citation needed]
Computer software, or just software, is a general term used to describe a collection of computer programs, procedures, and documentation that perform some task on a computer system. The term includes application software such as word processors, which perform productive tasks for users; system software such as operating systems, which interface with hardware to provide the necessary services for application software; and middleware, which controls and coordinates distributed systems. "Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes, and records.
An e-book (for electronic book; also ebook or ecobook) is the digital media equivalent of a conventional printed book. Such documents are usually read on personal computers, or on dedicated hardware devices known as e-book readers or e-book devices. If you want to search for free e-books, you can look here.
Sata HD Notebook Driver for Windows XP
Notebook ACER :
Aspire 4310 - 4710
Presario V3643TU | 计算机
2014-15/4479/en_head.json.gz/21596 | Bill Swartz: "Follow The Money" The head of Mastiff Games tells us where your $50 goes when you buy a game.
While most of the GDC speeches we cover concern game design and production, the conference also features speakers on everything from audio to art to business & legal issues. One of this year's business lectures was "Follow the Money: Understanding Console Publishers," where Bill Swartz of new publisher Mastiff Games laid out what you need to know to start a game publishing company.
For his presentation, Swartz put together various slides showing where the money goes for a hypothetical game. While not based on an actual title, Swartz said his example is based on a "pretty real game," and compared it to an equivalent of Bloody Roar.
Part of the problem with putting together an example, Swartz said, is that this is a hit-driven industry. Games will either bomb and sell 40,000 copies or less, or do extremely well with 300,000 or more sold. Because of this, an example like Bloody Roar is rare, since that is a game that will sell around 90,000 -- it hits the "average" that doesn't tend to exist most of the time. However, Swartz claims that the percentages seen in his mock-up would not change drastically for a game that sold 900,000 copies, so the example should hold true for most games.
Swartz showed a breakdown of how much publishers, wholesalers, and retailers can make, as well as what risks they face.
In his example, Swartz showed that a publisher can clear seven dollars on a game, but only one of every five games will sell enough copies to make money, since publishers have to consider things like taking back inventory that doesn't sell through to customers. They have to be smart about the number of copies they ship into the market. Wholesalers have a smaller amount they can make on a single game, and face risks such as dealing with retailer payments. Retailers can sometimes make good money on a single game, but that margin drops when the game price falls.
Swartz went on to discuss each part of the market you have to consider before publishing a product, noting the amounts you can expect to spend on distribution, retail advertising, royalties, etc. once the game is complete.
A chart of the distribution, production, and royalty costs.
A closer look at the total cost and net profit of a moderately sucessful game.
Beyond numbers, Swartz gave a few tips to aspiring game publishers. He said to look at Intellectual Property as nothing more than advertising, because paying for a license serves the same purpose as marketing -- it gets the word out and attracts customers. He also advised that you "don't commit more than one crime," with those crimes being 1) new engine/technology, 2) new Intellectual property, and 3) new development team. More than one of these makes the game a risk for the publisher.
Swartz finished his speech by pleading with the audience to not be "slimey" when it comes to making payments and doing business deals. Swartz has seen way too much of this type of behavior in the past, and hopes developers won't have to worry that their next payment will come in on time every couple months -- comments that were met with quite a few head nods in the audience. | 计算机 |
2014-15/4479/en_head.json.gz/21969 | 7 hard truths about the NoSQL revolution
Forgoing features for speed has its trade-offs as these NoSQL data store shortcomings show
Peter Wayner (InfoWorld)
The NoSQL buzzword has been metastasizing for several years. The excitement about these fast data stores has been intoxicating, and we're as guilty as anyone of seeing the groundbreaking appeal of NoSQL. Yet the honeymoon is coming to an end, and it's time to start balancing our enthusiasm with some gimlet-eyed hard truths. Don't get us wrong. We're still running to try the latest experiment in building a simple mechanism for storing data. We still find deep value in MongoDB, CouchDB, Cassandra, Riak, and other NoSQL standouts. We're still planning on tossing some of our most trusted data into these stacks of code because they're growing better and more battle-tested each day. [ Also on InfoWorld: NoSQL standouts: New databases for new applications | First look: Oracle NoSQL Database | Get a digest of the key stories each day in the InfoWorld Daily newsletter. ] But we're starting to feel the chafing, as the NoSQL systems are far from a perfect fit and often rub the wrong way. The smartest developers knew this from the beginning. They didn't burn the SQL manuals and send nastygrams to the sales force of their once devoted SQL vendor. No, the smart NoSQL developers simply noted that NoSQL stood for "Not Only SQL." If the masses misinterpreted the acronym, that was their problem. This list of gripes, big and small, is thus an attempt to document this fact and to clear the air. It's meant to set things straight now so that we can do a better job understanding the trade-offs and the compromises. NoSQL hard truth No. 1: JOINs mean consistencyOne of the first gripes people have about SQL systems is the computational cost of executing a JOIN between two tables. The idea is to store the data in one and only one place. If you're keeping a list of customers, you put their street addresses in one table and use their customer IDs in every other table. When you pull the data, the JOIN connects the IDs with the addresses and everything remains consistent. The trouble is that JOINs can be expensive, and some DBAs have concocted complex JOIN commands that boggle the mind, turning even the fastest hardware to sludge. It was no surprise that the NoSQL developers turned their lack of JOINs into a feature: Let's just keep the customer's address in the same table as everything else! The NoSQL way is to store key-value pairs for each person. When the time comes, you retrieve them all. Alas, people who want their tables to be consistent still need JOINs. Once you start storing customers' addresses with everything else about them, you often end up with multiple copies of those addresses in each table. And when you have multiple copies, you need to update them all at the same time. Sometimes that works, but when it doesn't, NoSQL isn't ready to help with transactions. Wait, you say, why not have a separate table with the customer's information? That way there will only be one record to change. It's a great idea, but now you get to write the JOIN yourself in your own logic. NoSQL hard truth No. 2: Tricky transactionsLet's say you're OK to live without JOINing tables because you want the speed. It's an acceptable trade-off, and sometimes SQL DBAs denormalize tables for just this reason. The trouble is that NoSQL makes it hard to keep the various entries consistent. There are often no transactions to make sure that changes to multiple tables are made together. For that, you're on your own, and a crash could ensure that tables turn inconsistent. 
The earliest NoSQL implementations thumbed their nose at these transactions. They would offer data listings that were consistent, except when they weren't. In other words, they went after the lowest-value data where errors wouldn't make any material difference. Now some NoSQL implementations offer something approaching a transaction. Oracle's NoSQL product, for instance, offers transactional control over data written to one node and lets you choose a flexible amount of consistency across multiple nodes. If you want perfect consistency, you have to wait for each write to reach all nodes. Several other NoSQL data stores are experimenting with adding more structure and protection like this. NoSQL hard truth No. 3: Databases can be smartMany NoSQL programmers like to brag about how their lightweight code and simple mechanism work extremely quickly. They're usually right when the tasks are as simple as the insides of NoSQL, but that changes when the problems get harder. Consider the old challenge of a JOIN. Once NoSQL programmers start generating their own JOIN commands in their own logic, they start to try to do this efficiently. SQL developers have spent decades developing sophisticated engines to handle JOIN commands as efficiently as possible. One SQL developer told me he was trying to synchronize his code with the spinning hard disk so that he would request data only when the head was just above the right spot. This may seem extreme, but SQL developers have been working on similar hacks for decades. There's no doubt that programmers spend days pulling out their hair trying to structure their SQL queries to take advantage of all of this latent intelligence. It may not be simple to tap, but when the programmer figures it out, the databases can really sing. A sophisticated query language like SQL always has the potential to outshine an unsophisticated query language like those found in NoSQL. It may not matter with simple results, but when the action becomes complex, the SQL is being executed on the machine right next to the data. It has little overhead fetching the data and doing the work. A NoSQL server usually has to ship the data to where it's going. NoSQL hard truth No. 4: Too many access modelsIn theory, SQL is supposed to be a standard language. If you use SQL for one database, you should be able to run the same query in another compliant version. This claim may work with a few simple queries, but every DBA knows that it can take years to learn the idiosyncrasies of SQL for different versions of the same database. Keywords are redefined, and queries that worked on one version won't work with another. NoSQL is even more arcane. It's like the Tower of Babel. Since the beginning, NoSQL developers have each tried to imagine the best language possible, but they have very different imaginations. This hotbed of experimentation is good -- until you try to jump between tools. A query for CouchDB is expressed as a pair of JavaScript functions for mapping and reducing. Early versions of Cassandra used a raw, low-level API called Thrift; newer versions offer CQL, an SQL-like query language that must be parsed and understood by the server. Each one is different in its own way. Each tool doesn't just have its own idiosyncrasies, it sports an entirely different philosophy and way of expressing it. There are no easy ways to switch between data stores and you're often left writing tons of glue code just to give yourself the option of switching in the future. 
This may not be too difficult when you're stuffing pairs of keys and values into the system, but it can grow increasingly aggravating the more complexity you introduce. NoSQL hard truth No. 5: Schema flexibility is trouble waiting to happenOne of the great ideas from the NoSQL model is not requiring a schema. In other words, programmers don't need to decide in advance which columns will be available for each and every row in a table. One entry may have 20 strings attached to it, another may have 12 integers, and another might be completely blank. The programmers can make the decision whenever they need to store something. They don't need to ask permission of the DBA, and they don't need to fill out all the paperwork to add a new column. All that freedom sounds intoxicating, and in the right hands it can speed development. But is it really a good idea for a database that might live through three teams of developers? Is it even workable for a database that might last beyond six months? In other words, the developers might want the freedom to toss any old pair into a database, but do you want to be the fifth developer to come along after four have chosen their own keys? It's easy to imagine a variety of representations of "birthday," with each developer choosing his or her own representation as a key when adding a user's birthday to an entry. A team of developers might imagine almost anything: "bday," "b-day," "birthday". The NoSQL structure offers no support to limit this problem because that would mean reimagining the schema. It doesn't want to harsh on the mellow of the totally cool developers. A schema would get in the way. The fact is that adding a column to a table isn't a big deal, and the discipline might actually be good for the developer. Just as it helps to force developers to designate variable types, it also helps to force developers to designate the type of data attached to a column. Yes, the DBA may force the developer to fill out a form in triplicate before attaching that column, but it's not as bad as dealing with a half-dozen different keys created on the fly by a programmer. NoSQL hard truth No. 6: No extrasLet's say you don't want all of the data in all of the rows, and you want the sum of a single column. SQL users can execute a query with the SUM operation and send one -- just one -- number back to you. NoSQL users get all of the data shipped back to them and can then do the addition themselves. The addition isn't the problem because it takes about the same amount of time to add up the numbers on any machine. However, shipping the data around is slow, and the bandwidth required to ship all that data can be expensive. There are few extras in NoSQL databases. If you want to do anything but store and retrieve data, you're probably going to do it yourself. In many cases, you're going to do it on a different machine with a complete copy of the data. The real problem is that it can often be useful to do all of the computation on the machine holding the data because shipping the data takes time. But tough for you. NoSQL solutions are emerging. The Map and Reduce query structure from MongoDB gives you arbitrary JavaScript structure for boiling down the data. Hadoop is a powerful mechanism for distributing computation throughout the stack of machines that also holds the data. It is a rapidly evolving structure that offers rapidly improving tools for building sophisticated analysis. It's very cool, but still new. 
And technically Hadoop is an entirely different buzzword than NoSQL, though the distinction between them is fading. NoSQL hard truth No. 7: Fewer toolsSure, you can get your NoSQL stack up and running on your server. Sure, you can write your own custom code to push and pull your data from the stack. But what if you want to do more? What if you want to buy one of those fancy reporting packages? Or a graphing package? Or to download some open source tools for creating charts? Sorry, most of the tools are written for SQL databases. If you want to generate reports, create graphs, or do something with all of the data in your NoSQL stack, you'll need to start coding. The standard tools come ready to snarf data from Oracle, Microsoft SQL, MySQL, and Postgres. Your data is in NoSQL? They're working on it. And they'll be laboring on it for a bit. Even if they jump through all of the hoops to get up and running with one of the NoSQL databases, they'll have to start all over again from the beginning to handle the next system. There are more than 20 different NoSQL choices, all of which sport their own philosophy and their own way of working with the data. It was hard enough for the tool makers to support the idiosyncrasies and inconsistencies in SQL, but it's even more complicated to make the tools work with every NoSQL approach. This is a problem that will slowly go away. The developers can sense the excitement in NoSQL, and they'll be modifying their tools to work with these systems, but it will take time. Maybe then they'll start on MongoDB, which won't help you because you're running Cassandra. Standards help in situations like this, and NoSQL isn't big on standards. NoSQL shortcomings in a nutshellAll of these NoSQL shortcomings can be reduced to one simple statement: NoSQL tosses away functionality for speed. If you don't need the functionality, you'll be fine, but if you need it in the future, you'll be sorry. Revolutions are endemic to tech culture. A new group comes along and wonders why the last generation built something so complex, and they set out to tear down the old institutions. After a bit, they begin to realize why all of the old institutions were so complex, and they start implementing the features once again. We're seeing this in the NoSQL world, as some of the projects start adding back things that look like transactions, schemas, and standards. This is the nature of progress. We tear things down only to build them back again. NoSQL is finished with the first phase of the revolution and now it's time for the second one. The king is dead. Long live the king. Related articles NoSQL standouts: New databases for new applications First look: Oracle NoSQL Database Flexing NoSQL: MongoDB in review 10 essential performance tips for MySQL 10 essential MySQL tools for admins Master MySQL in the Amazon cloud The time for NoSQL standards is now
This story, "7 hard truths about the NoSQL revolution," was originally published at InfoWorld.com. Follow the latest developments in data management at InfoWorld.com. For the latest developments in business technology news, follow InfoWorld.com on Twitter. Read more about data management in InfoWorld's Data Management Channel.
Read how Perth-based safety footwear manufacturer, Steel Blue, was able to cut costs with shipping and improve efficiency while meeting the growing demand for their products as they expanded their national and export markets and increased their local market share, all thanks to a new ERP system.
IBM X-Force Threat Intelligence Top 8 Considerations to Enable and Simplify Mobility | 计算机 |
2014-15/4479/en_head.json.gz/22998 | Our Organization Our People Year in Review Risk Management Consider a broad range of conditions and events that can affect the potential for success, and it becomes easier to strategically allocate limited resources where and when they are needed the most.
The SEI has been conducting research and development in various aspects of risk management for more than 20 years. Over that time span, many solutions have been developed, tested, and released into the community. In the early years, we developed and conducted Software Risk Evaluations (SREs), using the Risk Taxonomy. The tactical Continuous Risk Management (CRM) approach to managing project risk followed, which is still in use today—more than 15 years after it was released. Other applications of risk management principles have been developed, including CURE (focused on COTS usage), ATAM® (with a focus on architecture), and the cyber-security-focused OCTAVE®. In 2006, the SEI Mission Success in Complex Environments (MSCE) project was chartered to develop practical and innovative methods, tools, and techniques for measuring, assessing, and managing mission risks. At the heart of this work is the Mission Risk Diagnostic (MRD), which employs a top-down analysis of mission risk.
Mission risk analysis provides a holistic view of the risk to an interactively complex, socio-technical system. The first step in this type of risk analysis is to establish the objectives that must be achieved. The objectives define the desired outcome, or "picture of success," for a system. Next, systemic factors that have a strong influence on the outcome (i.e., whether or not the objectives will be achieved) are identified. These systemic factors, called drivers, are important because they define a small set of factors that can be used to assess a system's performance and gauge whether it is on track to achieve its key objectives. The drivers are then analyzed, which enables decision makers to gauge the overall risk to the system's mission.
The MRD has proven to be effective for establishing confidence in the characteristics of software-reliant systems across the life cycle and supply chain. The SEI has the MRD in a variety of domains, including software acquisition and development; secure software development; cybersecurity incident management; and technology portfolio management. The MRD has also been blended with other SEI products to provide unique solutions to customer needs.
Although most programs and organizations use risk management when developing and operating software-reliant systems, preventable failures continue to occur at an alarming rate. In many instances, the root causes of these preventable failures can be traced to weaknesses in the risk management practices employed by those programs and organizations. For this reason, risk management research at the SEI continues. The SEI provides a wide range of risk management solutions. Many of the older SEI methodologies are still successfully used today and can provide benefits to your programs. To reach the available documentation on the older solutions, see the additional materials.
The MSCE work on mission risk analysis—top-down, systemic analyses of risk in relation to a system's mission and objectives—is better suited to managing mission risk in complex, distributed environments. These newer solutions can be used to manage mission risk across the life cycle and supply chain, enabling decision makers to more efficiently engage in the risk management process, navigate through a broad tradeoff space (including performance, reliability, safety, and security considerations, among others), and strategically allocate their limited resources when and where they are needed the most. Finally, the SEI CERT Program is using the MRD to assess software security risk across the life cycle and supply chain. As part of this work, CERT is conducting research into risk-based measurement and analysis, where the MRD is being used to direct an organization's measurement and analysis efforts. Spotlight on Risk Management
New Directions in Risk: A Success-Oriented Approach (2009)
presented in San Jose, California, at the 21st Annual SEPG North America 2009 conference March 23-26, 2009
Practical Risk Management: Framework and Methods | 计算机 |
2014-15/4479/en_head.json.gz/23112 | Daylight Saving Time and Sybase Server Products
Daylight Saving Time and Sybase Server Products Summary Most Sybase Server Products do not have any direct support for handling daylight saving time. This document examines the issues and suggests ways to avoid problems. Sybase recommends, as a best practice, running Sybase Servers under the Coordinated Universal Time (UTC) standard and having all conversions to local time zones (including daylight saving time adjustments) performed by the client applications. Additionally, applications written in Java that employ unpatched versions of the Java Developer's Kit/Java Runtime Environment may be at risk of incorrectly reporting the local-time offset from UTC. How Sybase Products Use Time and Date Sybase Servers support the use of date and time data through the datetime and smalldatetime datatypes (and, with newer server, the date and time datatypes), as well as the getdate(), dateadd(), datediff(), and datepart() functions. The getutcdate() function was also added to some servers to provide the current datetime value in Coordinated Universal Time regardless of the time zone the server is otherwise running under. The datetime and smalldatetime datatypes, however, do not store time zone information and the products are entirely ignorant of the concepts of time zones and daylight saving time. Sybase Servers only recognize and store the date and time portions of the values provided by the operating system, which are based on the time zone configured at the operating system level (typically though the TZ environment variable setting in Unix or the Date/Time function of the Windows Control Panel) for the user who started the product. The calculations behind the dateadd and datediff functions are aware of leap years (using the rule of every 4th year, except for every 100th year, except for every 400th year), but do not include any adjustments for leap seconds or transitions from daylight saving time to regular time. Most Sybase Servers usually have two sources for datetime values. The getdate() and getutcdate() functions always make a call to the operating system to get the current time with the greatest accuracy. Most Servers also maintain an internal clock, which it uses to avoid the overhead of making a system call in cases where strict accuracy is less important. For example, Sybase ASE relies on its own clock for the password change date field pwdate in syslogins, creation and modification date fields in system catalogs, and begin and commit transaction times (which are visible in the syslogshold table, and used for the with until_time option of load transaction). The internal clock is initialized at start time with the current value of the operating system clock and incremented based on regular SIGALRM signals from the operating system (typically 10 per second). Once a minute, Sybase ASE polls the operating system clock to get the current time. The two clocks sometimes fall out of synchronization. When this happens, Sybase ASE speeds up or slows down the internal clock to minimize the difference with the operating system clock. The Effect of Daylight Saving Time Most UNIX systems actually run on UTC; there are no daylight saving adjustments in the UTC definition. Such adjustments are taken care of in applications, normally by calling OS library functions. If in effect, daylight saving time causes a large discontinuous jump in the time value received from the OS, typically either forward by an hour or backwards by an hour. 
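The recommended pattern, store UTC and convert at the client, can be sketched outside of Sybase with Python's zoneinfo module; the zone and timestamp are chosen only as examples, and this is not T-SQL:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    eastern = ZoneInfo("America/New_York")
    stored_utc = datetime(2007, 3, 11, 7, 30, tzinfo=timezone.utc)   # what the server keeps
    shown_local = stored_utc.astimezone(eastern)                     # what a client displays
    print("stored (UTC):  ", stored_utc.isoformat())
    print("shown (local): ", shown_local.isoformat())   # 03:30-04:00, just after spring-forward
    # The stored UTC value is continuous across the transition; only the client-side
    # rendering changes its offset, which is the point of the recommendation.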
The getdate() function, which gets its information directly from the operating system, immediately picks up this change. However, the server cannot immediately synchronize its internal clock. System-generated datetime values, such as crdates in sysobjects, pwdate in syslogins, and begin tran and commit tran times in log records and the syslogshold table will not match the new operating system clock, though the difference will decrease over time. Use of load tran with until_time is also effected by this. Until the internal clock has synchronized with the operating system clock, the "until time" is not accurate. The various common date and time functions are also unaware of daylight saving time. For example, if you use the datediff() function with values that cross one or more daylight saving time boundaries, the results are not adjusted for this change. Recent Changes to Daylight Saving Time in the USA In the USA, a law named the Energy Policy Act was passed which altered the starting and ending dates of Daylight Saving Time by 4 weeks starting in March of 2007. Sybase Servers do not contain any built-in knowledge of daylight saving time, the adjustments are made based on OS libraries and function calls. Presuming the Sybase Product is running under daylight saving time, these products will reflect these changes if the OS has been updated. Time Conversions Done in the Java JDK/JRE The Java Run-time Environment (JRE) contains library functions for time conversion from Coordinated Universal Time (also refered to as GMT) to local time, including any Daylight Saving Time compensation. These library functions were developed prior to the United States Energy Policy Act of 2006 and may not include the changes to Daylight Saving Time in US Time Zones that result from that act. Patches to the JDK/JRE will be issued as necessary to ensure these functions are updated by both Sun and Sybase Inc. The following advice is provided by Sun Microsystems: The Java Runtime Environment (JRE) stores rules about DST observance all around the globe. Older JREs will have outdated rules that will be superseded by the Energy Policy Act of 2005. As a result, applications running on an older JRE may report incorrect time from March 11, 2007 through April 2, 2007 and from October 29, 2007 through November 4, 2007. Solutions for Java Applications: If you are concerned about application failures that may result from these DST changes, you should update your Java Runtime Environment. The following Java platform versions have correct time rules to handle the DST changes that will affect U.S. time zones in 2007. You can download any of the following Java platform versions to resolve this DST issue: JDK 6 Project (beta)
J2SE 5.0 Update 6 or later
J2SE 1.4.2_11 or later
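The duration pitfall described earlier for datediff-style arithmetic across a DST boundary can be illustrated the same way, again in Python rather than T-SQL and with invented times:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    eastern = ZoneInfo("America/New_York")
    start = datetime(2007, 3, 11, 1, 0, tzinfo=eastern)   # 01:00 local, before the jump
    end   = datetime(2007, 3, 11, 4, 0, tzinfo=eastern)   # 04:00 local, after the jump

    wall_clock = datetime(2007, 3, 11, 4, 0) - datetime(2007, 3, 11, 1, 0)   # naive math
    elapsed = end.astimezone(timezone.utc) - start.astimezone(timezone.utc)  # real duration
    print("wall-clock difference:", wall_clock)   # 3:00:00
    print("actual elapsed time:  ", elapsed)      # 2:00:00, one hour never existed locally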
Testing Daylight Saving Time Changes Testing Daylight Saving Time changes is non-trivial. Simply changing the OS clock forward or back an hour is a poor test as it is changing the root OS time, and the time zone adjustments (made through calls to the OS libraries) are being bypassed. Testing probably should be done on dedicated machines where the system clock can be changed to values just before the transition time into or out of daylight saving time, the application (i.e. Sybase Product) started and allowed to run as the OS clock advances through the transition. Best Practice To avoid such issues, the best practice is to run Sybase products so that they are not subject to daylight saving time, i.e. run them on a constant clock, such as Greenwich Mean Time (GMT) or Coordinated Universal Time (UTC). You can use one of these standards for all datetime values in the server and let clients be responsible for conversions to their local time zone, including adjustments for daylight saving time. The dateadd() and datediff() calculations will then be correct even if the values in question span the change in or out of daylight saving time. The choice of which timezone to run the server under is unfortunately often based on where the company's headquarters are at the time the server is first created, which usually works well for small companies that don't have operations in other time zones. However, headquarters are sometimes moved, and small companies can grow to become global companies. Establishing a company standard early on that the server runs under UTC and all clients are responsible to translating from UTC to local time can save a great deal of trouble in the future. What Sybase Customers should do Customers should contact operating system supplier for the updates for their particular operating system. Some Sybase products bundle the JDK/JRE. Customers using these products should get the updated JDK/JRE from the supplier or get the Sybase product update that bundles the fixed JDK/JRE. Click here for a complete list of Sybase products and instructions - Last Updated 6th March 2007. Additional Workaround If you cannot avoid making daylight saving time adjustments, the best practice is to shut down the product before the operating system clock is reset; you can restart immediately after the clock is reset. This is precautionary; changing the clock while the product is running usually does not cause problems. However, unpredictable effects can occurred, for example, WAITFOR commands or dumps hanging, or SIGALRMs not being received by the product. If a precautionary shutting down of the product while the clock is changed is not possible and such problems are subsequently seen, the product can be restarted at any convenient time to reinitialize the internal clock. To assist you, Sybase Engineers have prepared the attached matrix of products and indicated their dependency on Operating System time information. Where applicable, known operating system patches have been recommended. Additional Recommended Reading: Calendrical Calculations by Dershowitz and Reingold, Cambridge University Press. ISBN 0-521-56474-3
Developing Time-Oriented Database Applications in SQL by Snodgrass, Morgan Kaufmann Publishers. ISBN 1-55860-436-7
Sun Developer Article: "U.S. Daylight Saving Time Changes in 2007" by O'Conner: http://java.sun.com/developer/technicalArticles/Intl/USDST/
Unix MAN pages on "TIMEZONE(4)"
Copyright © 2006 Sybase, Inc. All rights reserved. DOCUMENT ATTRIBUTES
Last Revised: Mar 08, 2007
Product: EAServer, Open Server, Replication, Replication Server, Data Integration Suite, Adaptive Server Enterprise
Technical Topics: Troubleshooting
Content Id: 1048699 | 计算机 |
2014-15/4479/en_head.json.gz/23775 | Early Windows 8 Adoption Lagging Far Behind Windows 7′s Rates
By Brad Chacos, LAPTOP Contributor
| Oct 2, 2012 11:39 AM EDT
With less than a month to go until the official launch of Windows 8, consumers don’t appear to be jumping on the Metro… oops, Modern bandwagon with as much enthusiasm as they had making the switch from Vista to Windows 7.
Only 0.33 percent — or 33 out of every 10,000 PCs — currently run a Preview version or RTM trial of Windows 8, Computerworld reports, citing statistics from metrics firm Net Applications. At the same point to the release of Windows 7, 1.64 percent of all Windows PCs were running the upcoming operating system. That’s a full five times more than Windows 8 adoptees — and the gap between early Windows 7 adoption and early Windows 8 adoption is actually increasing as October 26th draws closer.
Windows 8′s current adoption number stand at the same percentage Windows 7 held six months before that operating systems launch. At that point, the Windows 7 Release Candidate wasn’t even available yet and the final RTM version was a far-off milestone.
Microsoft hopes to stimulate sales of the new operating system with an aggressive pricing structure out of the gate for early buyers, highlighted by a Windows 8 Pro upgrade that costs just $40 for current Windows users.
Simple customer satisfaction may be part of the reason for consumer hesitation; Windows 7 was the follow up to the widely panned Windows Vista. Many people consider Windows 7 to be the best version of Windows ever released, while early reaction to Windows 8′s tiled, touch-focused interface has been decidedly mixed, with one expert going so far as to call the Modern/Desktop switching “a cognitive burden.”
The new UI is sure to be a stumbling point for adoption rates. The question is whether the mass mainstream market will be willing take the time to get used to a completely new design.
8 Worst Windows 8 Annoyances and How to Fix Them
Usability Expert: Windows 8 on PCs is Confusing, a Cognitive Burden
Microsoft Windows 8 Review
Tags: Microsoft Windows 8, Windows 8, operating systems, operating system, Windows 7, Windows, Microsoft
Agasicles Says:
October 2nd, 2012 at 12:46 pm There might be some reasons for this delta. I am only offering my own perspective, and suggesting that maybe some of the 1.3% delta might be due to some of the same reasoning.
When the Win7 pre-release was made available, it was largely preferable to the on-market version of Vista. Today, I am satisfied with Windows 7. I know eventually I am going to have to go over to Win8. But that will happen naturally for me like it will with most people…when I have to upgrade to a new PC. I do not think most people upgrade to a new PC because of a new Windows OS. Sometimes you delay an upgrade, and sometimes you just upgrade the OS itself. But most times it happens just because the market does not stock new PC’s with the previous version of the OS, and it becomes time for you to buy a new PC. So without a Windows Vista or Me to run away from, I have not been compelled to test-drive the new OS. I can wait for the first Service Pack, in fact, like I’ve done for every other Windows release other than Win7. I think the downloadable pre-release availability for Win7 was the first time that was feasible, at least for the general public. The paradigm is still somewhat new. It was popular then because of the negativity around Vista. Without that goblin to push people to download the pre-release, maybe we are just all back to our normal adoption and upgrade paradigms.
– Vr/A. Stamas
October 3rd, 2012 at 6:55 am I’ve got to agree with Agasicles, I’m surely interested in windows 8 but I don’t really need an upgrade at the moment. A lot of people tried out the Windows 7 pre-releases because that’s the first time anyone could do it, and if it meant getting away from Vista then it was probably the right move. But people are happy with 7, so there’s not going to be the same desperate rush as there was with 7. I expect it will end up being just as successful as 7 come service pack 1, and by then I will probably be in the mood for an upgrade. | 计算机 |
2014-15/4479/en_head.json.gz/23880 | BlogsCool Stuff
Questions about Windows 7? Ask @MicrosoftHelps on Twitter
Posted: Oct 21, 2009 at 12:55 PM
By: Sarah Perez
Yes, it’s official. @MicrosoftHelps is in fact a real Microsoft-owned Twitter account for Microsoft Customer Service. Recently launched but given little fanfare, several bloggers and Twitter users were questioning whether or not this account was genuine and, if so, what kind of questions it was designed to help out with. Now we have the answers.The primary purpose for the @MicrosoftHelps account is to help you find the resources you need for Microsoft products and services. At launch time, the initial scope will focus solely on Windows 7. It will also be an English-only resource for now. Before launching the account, the company spoke with members from Best Buy’s social media team in order to learn from their experience with their own @twelpforce account, a Best Buy service where users can ask questions about the hardware, software, and services Best Buy provides. While similar in spirit to @twelpforce, the @MicrosoftHelps account is different because it will only focus on Microsoft products and services and initially only Windows 7. Also, the company says they don’t anticipate being able to respond to each and every question they receive. However, the team behind the account will be providing answers to customer questions about Windows 7 and if those questions are of a complex nature, they will direct those asking to the appropriate forums where in-house experts and Microsoft MVPs will be able to help. The company’s goal with the new account is to help respond to and engage with customers on the platform where so many are now choosing to participate, Twitter. Microsoft has seen a lot of people tweeting about Microsoft products and services – both good and bad – and wanted to provide a resource where customers could get easy, accurate answers. They hope that by doing so, they’ll be able to improve customers’ experience with the company by providing support, information, and even product updates if needed. Although initially the focus is on Windows 7, we’re told that Microsoft will “continue to assess the value” of using Twitter in this way going forward. Hopefully that means they’ll expand their scope beyond OS questions in the future. Tags:
Twitter, Windows 7
| 计算机
2014-15/4479/en_head.json.gz/23882 | The Sims 3 ReviewXbox 360
| PC System: PC, PS3, X360, Wii, DS
Despite all of the added features and persistent world, The Sims 3 oddly leaves out some content that should have really been included. Add-ons like season weather, which was included in one of The Sims 2's expansion packs, aren't present here. So, while The Sims 3 does provide a lot of content, there are some details that were accounted for in previous installments that aren't represented.
Another missing component is an intimate view of your career. When players send their sim off to work, they can watch as far as the door to the building, at which point the sim enters and the player is given an aerial view of the building until the sim's shift ends. Behavior options are available while the sim is at work, which affects the productivity and chances of promotion. For example, as a police officer, the player can set their sim to "Chat with Partner," which improves their relationship with their partner, but doesn't increase their chances of promotion as quickly as say setting their behavior to "Work hard." This system does give the player some control over their sims' career on a day-to-day basis, but seems strange in comparison to the amount of depth found in other areas. Another interesting and mildly irritating limitation is how a sim can only have one career at a time, despite whether work shifts overlap or not. If one shift ends at 2 p.m. and the other begins at 3 p.m., why should you not be able to work two careers at once if you want? Sure, working two jobs in real life isn't fun at all, and maybe this was the thinking behind limiting a sim to one career, but if you want to make your sim a work-a-holic with two jobs and no social life, then the option should be there.
While The Sims 3 continues the tradition of a solely offline and single-player experience, the continuation of social networking features remains as strong as ever. The Sims 3 launcher allows players to do a variety of things such as upload their own content, including individual sims, objects, houses, public buildings, and entire towns. Players have the option of creating their own player page, which includes a blog and an area to display all their created content for sharing.
In addition to being able to share content with other players via the website, exclusive content can be downloaded from the developers in exchange for SimPoints, which are purchased with real money. While this particular system isn't very popular, it doesn't hurt the game much because players have the option of just downloading shared content instead. Moreover, while some content requires SimPoints, players will be happy to know that actual game updates remain free.
These online features greatly increase the longevity of The Sims 3. Considering how detailed the creation tools are, players should have a nearly endless source of downloadable content to choose from. Perhaps the only negative thing about the online feature is the fact that it can't all be done in a browser built into the game, forcing players to muddle around in their browser and install content prior to launching the game. So, while it may not be as integrated as it could be, it definitely isn't any less easy to use.
The Sims 3 was a huge undertaking and it shows. The core gameplay remains largely unchanged, with minor tweaks and improvements that unquestionably add to the fun. Enhanced visuals and an expectedly good soundtrack excel at creating truly immersive moments, especially when moving around the largely persistent world. Even though there are areas that lack the level of detail and depth of the game as a whole, they still provide options to the player that keep the game from running into issues, which makes them identifiable as areas for improvement rather than a complete overhaul.
If you're a fan of the series, then The Sims 3 will deliver all that you've come to expect and throw in a ton of new ideas, even if the amount of extra toppings doesn't seem as vast. Newcomers to the series take note: if you've ever thought about playing The Sims, but were overwhelmed by the many expansion and "stuff" packs on the shelves, fear not; The Sims 3 is your window of opportunity.
Derek Hidey
CCC Freelance Writer
Graphics: Improved graphics bring a new level of realism to The Sims, but also bring issues for players with less powerful computers.
Control: A familiar user interface and control scheme boil the many tiers and complexities of the game into a simple and easy-to-understand system that could still be slightly daunting to newcomers to the series.
Music / Sound FX / Voice Acting: Great music helps to enhance the experience, while recycled sound effects ensure the game maintains an intimate level of familiarity.
Value: Unchanged core gameplay mechanics are improved with a host of new features and simplified ideas, and they are brought together in a single persistent environment that is just what the series needed.
Overall Rating - Must Buy. Not an average. See Rating legend above for a final score breakdown.
Game Features: Create Sims with Unique Personalities: Influence the behaviors of your Sims with traits you've chosen and watch how their traits impact their relationships and the neighborhood around them.
Expand Your Game: Add to your game by downloading exclusive content or sharing content with other players via TheSims3.com!
Determine Your Sims Ultimate Destiny: Face short- and long-term challenges and reap the rewards. Your Sims can pursue random opportunities to get fast cash, get ahead in life, get even with enemies, and more.
Customize Everything: Enjoy complete customization over your Sims' appearances with the new Create a Sim. Enjoy new, easy-to-use design tools that allow you to fine-tune your Sims' facial features, hair color, eye color, and more. | 计算机 |
2014-15/4479/en_head.json.gz/24624 | Some aspects of the logical design of a control computer: a case study1
R. L. Alonso / H. Blair-Smith / A. L. Hopkins
Summary Some logical aspects of a digital computer for a space vehicle are described, and the evolution of its logical design is traced. The intended application and the characteristics of the computer's ancestry form a framework for the design, which is filled in by accumulation of the many decisions made by its designers. This paper deals with the choice of word length, number system, instruction set, memory addressing, and problems of multiple precision arithmetic.
The computer is a parallel, single address machine with more than 10,000 words of 16 bits. Such a short word length yields advantages of efficient storage and speed, but at a cost of logical complexity in connection with addressing, instruction selection, and multiple-precision arithmetic.
In this paper we attempt to record the reasoning that led us to certain choices in the logical design of the Apollo Guidance Computer (AGC). The AGC is an onboard computer for one of the forthcoming manned space projects, a fact which is relevant primarily because it puts a high premium on economy and modularity of equipment, and results in much specialized input and output circuitry. The AGC, however, was designed in the tradition of parallel, single-address general-purpose computers, and thus has many properties familiar to computer designers [Richards, 1955], [Beckman et al., 1961]. We will describe some of the problems of designing a short word length computer, and the way in which the word length influenced some of its characteristics. These characteristics are number system, addressing system, order code, and multiple precision arithmetic.
A secondary purpose for this paper is to indicate the role of evolution in the AGC's design. Several smaller computers with about the same structure had been designed previously. One of these, MOD 3C, was to have been the Apollo Guidance Computer, but a decision to change the means of electrical implementation (from core-transistors to integrated circuits) afforded the logical designers an unusual second chance.
It is our belief, as practitioners of logical design, that designers, computers and their applications evolve in time; that a frequent reason for a given choice is that it is the same as, or the logical next step to. a choice that was made once before.
A recent conference on airborne computers [Proc. Conf. Spaceborne Computer Eng., Anaheim, Calif., Oct. 30-31, 1962] affords a view of how other designers treated two specific problems: word length and number system. All of these computers have word lengths of the order of 22 to 28 bits, and use a two's complement system. The AGC stands in contrast in these two respects, and our reasons for choosing as we did may therefore be of interest as a minority view.
2. Description of the AGC
The AGC has three principal sections. The first is a memory, the fixed (read only) portion of which has 24,576 words, and the erasable portion of which has 1024 words. The next section may be called the central section; it includes, besides an adder and a parity computing register, an instruction decoder (SQ), a memory address decoder (S), and a number of addressable registers with either special features or special use. The third section is the sequence generator which includes a portion for generating various microprograms and a portion for processing various interrupting requests.
The backbone of the AGC is the set of 16 write busses; these are the means for transferring information between the various registers shown in Fig. 1. The arrowheads to and from the various registers show the possible directions of information flow.
In Fig. 1, the data paths are shown as solid lines; the control paths are shown as broken lines.
Memory: fixed and erasable
The Fixed Memory is made of wired-in "ropes" [Alonso and Laning, 1960], which are compact and reliable devices. The number of bits so wired is about 4 X 105. The cycle time is 12 m
The erasable memory is a coincident current system with the same cycle time as the fixed memory. Instructions can address registers in either memory, and can be stored in either memory.
[1] IEEE Trans., EC-12 (6), 687-697 (December, 1963) | 计算机
2014-15/4479/en_head.json.gz/24950 | #Side Mission
New Contrast Website Launched, Images Released
Daniel Kayser September 27, 2013 at 10:45AM Contrast, the upcoming puzzle/platformer developed by Compulsion Games for PC, the PSN and XBLA has drawn the curtains back to reveal its rather slick official website and four new images to celebrate this occasion.
From information about the game, its artistry, and the innovation of its gameplay mechanics to the story and details on the cast of characters, the newly launched website offers a look behind the scenes with the latest images, art, videos, and articles from the developer blog. I was personally very intrigued by this game at E3, and the new website does a great job of capturing the mood and atmosphere that helped it stand out from the crowd during the show.
In addition, four new screenshots from the game were released, which I'm including throughout this post.
For those unfamiliar with what Contrast is all about, here's a description of the game's story:
In and around town, there's a rumor that Didi's father, Johnny, is back in town trying to promote something big – a circus that is as entertaining as it is ambitious. But to do this, Johnny needs the Great Vincenzo, a famous magician, to headline the show and draw big crowds. Johnny could find himself in a bad situation if Vincenzo doesn't agree, as this deal isn't exactly above board. Join Didi, the adventure-loving, spirited little girl, and her imaginary friend Dawn, as they discover the mysteries that lie under the big tents of Johnny's circus and Vincenzo's workshop through our four new images, as these places will be the setting for important scenes in Didi's story. So, check out the game's new website and let us know if you're interested in experiencing Contrast once it arrives on PC, PS3, and Xbox 360 in the comments below. | 计算机 |
2014-15/4479/en_head.json.gz/25393 | Spear-Phishing Attacks Out Of China Targeted Source Code, Intellectual PropertyAttackers used intelligence, custom malware to access Google, Adobe, and other U.S. companies' systemsThe wave of targeted attacks from China on Google, Adobe, and more than 20 other U.S. companies, which has led the search giant to consider closing its doors in China and no longer censor search results there, began with end users at the victim organizations getting duped by convincing spear-phishing messages with poisoned attachments. Google and Adobe both revealed last night that they were hit by these attacks, which appear to be aimed mainly at stealing intellectual property, including source code from the victim companies, security experts say. So far, the other victim companies have yet to come forward and say who they are, but some could go public later this week. Microsoft, for one, appears to be in the clear: "We have no indication that any of our mail properties have been compromised," a Microsoft spokesperson said in a statement issued today.
Google, meanwhile, first discovered in mid-December that it had been hit by a targeted attack out of China that resulted in the theft of some of its intellectual property. The attackers' primary goal was to access the Gmail accounts of Chinese human rights activists, according to Google: "Based on our investigation to date we believe their attack did not achieve that objective. Only two Gmail accounts appear to have been accessed, and that activity was limited to account information (such as the date the account was created) and subject line, rather than the content of emails themselves," said David Drummond, senior vice president of corporate development and chief legal officer at Google, in a blog post. Google discovered that at least 20 other large companies from the Internet, finance, technology, media, and chemical industries also had been hit by the attack, he said.
iDefense says the attacks were primarily going after source code from many of the victim firms, and that the attackers were working on behalf of or in the employment of officials for the Chinese government. "Two independent, anonymous iDefense sources in the defense contracting and intelligence consulting community confirmed that both the source IPs and drop server of the attack correspond to a single foreign entity consisting either of agents of the Chinese state or proxies thereof," iDefense said in a summary it has issued on the attacks.
Eli Jellenc, head of international cyberintelligence for iDefense, which is working with some of the victim companies, says on average the attacks had been under way for nearly a month at those companies. One source close to the investigation says this brand of targeted attack has actually been going on for about three years against U.S. companies and government agencies, involving some 10 different groups in China consisting of some 150,000 trained cyber-attackers. The attacks on Google, Adobe, and others started with spear-phishing email messages with infected attachments, some PDFs, and some Office documents that lured users within the victim companies, including Google, to open what appeared to be documents from people they knew. The documents then ran code that infected their machines, and the attackers got remote access to those organizations via the infected systems. Interestingly, the attackers used different malware payloads among the victims. "This is a pretty marked jump in sophistication," iDefense's Jellenc says. "That level of planning is unprecedented."
Mikko Hypponen, chief research officer at F-Secure, says a PDF file emailed to key people in the targeted companies started the attacks. "Once opened, the PDF exploited Adobe Reader with a zero-day vulnerability, which was patched today, and dropped a back-door [Trojan] that connected outbound from the infected machine back to the attackers," Hypponen says. That then gave the attackers full access to the infected machine as well as anywhere the user's machine went within his or her network, he says.
Other experts with knowledge of the attacks say it wasn't just PDFs, but Excel spreadsheets and other types of files employed as malicious attachments. The malware used in the attacks was custom-developed, they say, based on zero-day flaws, and investigators were able to match any "fingerprints" in the various versions of malware used in the attacks and determine that they were related. The attackers didn't cast a wide spam net to get their victims like a typical botnet or spam campaign. Sources with knowledge of the attacks say the attackers instead started out with "good intelligence" that helped them gather the appropriate names and email addresses they used in the email attacks. "The state sponsorship may not be financial, but it is backed with intelligence," says one source. "What we're seeing is a blending of intelligence work plus malicious cyberattacks."
iDefense's Jellenc says the attackers were able to successfully steal valuable intellectual property from several of the victim companies.
Kelly Jackson Higgins is Senior Editor at DarkReading.com. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise Magazine, ...
2014-15/4479/en_head.json.gz/26558 | Far Cry 2 performance in-depth
By Steven Walton on October 29, 2008
Editor: Julio Franco
If, like us, you are a fan of first-person shooters, then there is a good chance you have spent the better part of this year anticipating the arrival of Far Cry 2. Last week marked the release date for this awaited sequel, and so we immediately jumped in and bought our copy. However, rather than play the single-player mission from start to finish and then go into some multiplayer action, we have been hard at work bringing you this article.
As usual our in-depth performance review takes various ATI and Nvidia graphics cards and compares them in this new first-person shooter title. Having recently completed a similar article with Crytek's Crysis Warhead, we have been keen to see if Far Cry 2 is just as demanding.
You may recall the original version of Far Cry was developed by Crytek using the CryENGINE, while it was actually published by Ubisoft. For Far Cry 2, Ubisoft's Montreal studios took over the development of the game using their own Dunia engine. This game engine has been designed for use with the PC, Xbox 360, and PlayStation 3 platforms, which resulted in last week's multi-platform release. The word Dunia means "world", "earth", or "living" in the Persian language. As current players of Far Cry 2 will discover, Dunia offers a number of impressive features like destructible environments, dynamic weather, dynamic fire propagation, full day/night cycles, and many others.
Furthermore, the Dunia engine can take advantage of DirectX 10 when running on Windows Vista, but is also capable of running on DirectX 9 platforms. Now, unlike the engine used for Crysis games, Dunia is said to be less hardware demanding which could only come as great news for PC gamers. If you look back at our recent Crysis Warhead performance article you will see that this game requires a tremendous amount of GPU power to deliver playable performance.
Clearly only those with the most advanced gaming rigs are going to be able to enjoy Crysis Warhead in all its visual glory as we found that even a top of the line GeForce GTX 280 could struggle when pushed far enough. Far Cry 2, on the other hand, has been publicized to work perfectly on today's mid-range graphics cards in spite of the impressive eye candy.
As we move on, we will find out exactly how Far Cry 2 performs using a range of previous- and current-generation graphics cards. The quality presets tested include Ultra High, Very High, and High, each run at 1280x1024, 1680x1050, and 1920x1200 resolutions. The built-in Far Cry 2 benchmark tool has been used to test the various graphics cards, so you will be able to accurately compare your system's performance to ours.
Test System Specs
Benchmarks: Ultra High
Benchmarks: Very High
Benchmarks: High | 计算机 |
2014-15/4479/en_head.json.gz/26956 | Assassin's Creed 3's "Wolf Pack" co-op fails to excite Posted by: Vito Gesualdi
Though the Assassin's Creed series is largely known for its expansive single player campaigns, the unique multiplayer modes have been a big draw for gamers looking for something outside of the traditional deathmatch experience. Thankfully, many of these fan favorite multiplayer modes are returning in Assassin's Creed III, alongside several new modes which promise even more variations on the game's backstabbing fun times.
Last week I got the chance to try out the new Wolf Pack multiplayer mode at Ubisoft's AC3 preview event in Boston, Massachusetts, the first time the Assassin's Creed series has ever featured co-op. Unfortunately, though Wolf Pack is a decently enjoyable multiplayer romp, this co-op experience wasn't deep enough to hold my attention for long.
Wolf Pack is a co-op multiplayer mode where four assassins work together to take out specific groups of targets, working against the clock to rack up kills and accrue as many points as possible. The key here is that the timer is extended each time players earn enough points to reach a new "sequence," with major bonus points awarded to teams who can successfully synchronize their kills. Simply put, if each assassin simply runs around the map taking out targets on a whim, the mode is sure to end quickly. However by effectively communicating the location of targets and taking care to make the kill at the same exact time, skilled teams will likely be able to dominate the game's leaderboards.
Wolf Pack feels a bit like an Assassin's Creed-flavored twist on Resident Evil's "Mercenaries" mode, with the frantic race against the clock leaving little time to think. Unlike Mercenaries, however, players will have to maintain some degree of stealth as they progress to harder and harder sequences, where targets not only begin to spread out around the map, but are also prepared for confrontation. Synchronizing kills becomes very difficult once targets start becoming uncomfortable with the weird cloaked figures standing behind them waiting for the signal to strike. Some of these NPCs will flee if shadowed for too long, while others will turn to do battle with potential assassins. Point is, this mode will be almost impossible to play without a headset, as teams will need to be in constant contact throughout each session.
Again, the only real problem I had with Wolf Pack is that it isn't a terribly thrilling implementation of co-op, and the seeming complexity involved in setting up tandem kills means you'll need a truly dedicated team to excel at this mode. Still, it's a welcome addition to AC3's suite of multiplayer options, and it's nice to work alongside your fellow assassins for once. We'll be looking to see how popular Wolf Pack mode proves to be when Assassin's Creed 3 drops in November.
Tags: AC3, Assassin's Creed 3, Assassin's Creed III, Assassin's Creed | 计算机 |
2014-15/4479/en_head.json.gz/26982 | WPS Home
Note: your password is case sensitive. Forgot your password? Notice to employees and independent contractors of WPS and its subsidiaries: This computer system, which includes all related equipment, networks, and network devices (specifically including access to the internet), is provided only for authorized business functions. The system may be monitored by authorized personnel to ensure that your use is authorized, for management of the system, to facilitate protection against unauthorized access, and to verify security procedures. Information you place on this system is not private. Use of this computer system, authorized or unauthorized, constitutes consent to official monitoring of this system.
PRIVACY ACT WARNING! Information contained in this system with respect to Wisconsin Physicians Service Insurance Corporation and its subsidiaries is subject to The Privacy Act of 1974, 5 U.S.C. §552a, as amended. Information contained in this system may be used only by authorized persons in the conduct of official business. Any individual responsible for unauthorized disclosure or misuse of personal information may be subject to fines of up to $5,000. | WPS Home
| About WPS
| Code of Conduct | Supplier Code of Conduct | Disclaimer
© Wisconsin Physicians Service Insurance Corporation. All Rights Reserved. | 计算机 |
Industry models play a crucial role in driving enterprise intelligence transformation and innovative development. High-quality industry data is key to improving the performance of large models and realizing industry applications. However, datasets currently used for industry model training generally suffer from issues such as insufficient data volume, low quality, and lack of domain expertise.
To address these problems, we constructed and applied 22 industry data processing operators to clean and filter over 100TB of open-source datasets, including WuDaoCorpora, BAAI-CCI, RedPajama, and SkyPile-150B, producing 3.4TB of high-quality, industry-classified Chinese and English pre-training data. The filtered data consists of 1TB of Chinese data and 2.4TB of English data. To make the data easier to use, we annotated the Chinese data with 12 types of labels, including alphanumeric ratio, average line length, language confidence score, maximum line length, and perplexity.
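As an illustration, a few of the simpler labels can be computed with plain string statistics. The exact definitions used in the released pipeline are not given here, so the formulas below are assumptions; the language confidence score and perplexity labels would additionally require a language-identification model and a language model, which are omitted.

```python
# Minimal sketch of a few per-document quality labels; the exact formulas
# used by the dataset's pipeline are assumptions here.
def quality_labels(text: str) -> dict:
    lines = text.splitlines() or [""]
    total_chars = len(text) or 1
    alnum_chars = sum(ch.isalnum() for ch in text)
    return {
        "alnum_ratio": alnum_chars / total_chars,
        "avg_line_length": sum(len(line) for line in lines) / len(lines),
        "max_line_length": max(len(line) for line in lines),
    }

print(quality_labels("Industry models play a crucial role.\nHigh-quality data is key."))
```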
Furthermore, to validate the dataset's performance, we conducted continued pre-training, SFT, and DPO training on a medical industry demonstration model. The results showed a 20% improvement in objective performance and a subjective win rate of 82%.
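For reference, the DPO stage mentioned above optimizes a pairwise preference objective over chosen and rejected responses. The snippet below is a minimal sketch of that objective only, not the project's actual training code; it assumes per-sequence log-probabilities have already been computed for the policy and for a frozen reference model, and the beta value is an arbitrary example.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss from per-sequence log-probabilities."""
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Encourage the policy to prefer the chosen response more strongly
    # than the reference model does, scaled by beta.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```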
- Industry categories: 18 categories including medical, education, literature, finance, travel, law, sports, automotive, news, etc.
- Rule-based filtering: Traditional Chinese conversion, email removal, IP address removal, link removal, Unicode repair, etc.
- Chinese data labels: Alphanumeric ratio, average line length, language confidence score, maximum line length, perplexity, toxicity character ratio, etc.
- Model-based filtering: Industry classification language model with 80% accuracy
- Data deduplication: MinHash document-level deduplication (see the sketch below)
- Data size: 1TB Chinese, 2.4TB English
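One common way to implement the document-level MinHash deduplication named above is to build MinHash signatures over character shingles and query them through locality-sensitive hashing. The sketch below uses the `datasketch` library; the shingle size and similarity threshold are assumed values rather than the pipeline's documented parameters.

```python
from datasketch import MinHash, MinHashLSH

def minhash_signature(text: str, num_perm: int = 128, shingle: int = 5) -> MinHash:
    """Build a MinHash signature over character shingles of a document."""
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(text) - shingle + 1, 1)):
        m.update(text[i:i + shingle].encode("utf-8"))
    return m

docs = {
    "doc_a": "the quick brown fox jumps over the lazy dog",
    "doc_b": "the quick brown fox jumped over the lazy dog",
    "doc_c": "completely unrelated text about graphics card benchmarks",
}

lsh = MinHashLSH(threshold=0.7, num_perm=128)  # assumed similarity threshold
signatures = {key: minhash_signature(text) for key, text in docs.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)

# doc_b is a near-duplicate of doc_a and should be returned alongside it.
print(lsh.query(signatures["doc_a"]))
```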
Industry classification data size:
Industry Category | Data Size (GB) | Industry Category | Data Size (GB) |
---|---|---|---|
Programming | 4.1 | Politics | 326.4 |
Law | 274.6 | Mathematics | 5.9 |
Education | 458.1 | Sports | 442 |
Finance | 197.8 | Literature | 179.3 |
Computer Science | 46.9 | News | 564.1 |
Technology | 333.6 | Film & TV | 162.1 |
Travel | 82.5 | Medicine | 189.4 |
Agriculture | 41.6 | Automotive | 40.8 |
Emotion | 31.7 | Artificial Intelligence | 5.6 |
Total (GB) | 3386.5 | | |
To make downloading and use easier, we have split the full dataset into sub-datasets for the 18 industries. The current repository is the sub-dataset for the computer industry.
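To load this computer-industry sub-dataset with the Hugging Face `datasets` library, something like the following should work. The repository id and configuration name are placeholders (substitute the path shown on this dataset's page), and the column names are assumed to match the dataset's id / text / industry_type schema.

```python
from itertools import islice
from datasets import load_dataset

# "BAAI/IndustryCorpus" and the "computer" config name are placeholders;
# use the repository id and subset name shown on this dataset's page.
ds = load_dataset("BAAI/IndustryCorpus", "computer", split="train", streaming=True)

# Stream the first few rows without downloading the whole subset.
for row in islice(ds, 3):
    print(row["id"], row["industry_type"], row["text"][:80])
```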
Data processing workflow: