corporate communications (Van Riel, 1995; Leitch and Motion, 1999), and organizational behaviour (Hatch and Schultz, 1997; Schultz et al., 2000). An emerging challenge in corporate brand management is how to develop, manage and leverage corporate brand partnerships. This article investigates how sponsorship relationships become co-branded partnerships that result in the production of a joint identity between corporate brands. The purpose of this article, then, is to examine the process of establishing a viable co-branded identity within a sponsorship relationship, to theorize the nascent field of co-branding from a discourse perspective, and to reflect on what co-branding offers a corporate brand. A case study approach is adopted in order to examine the sponsorship relationship between adidas, a product-related corporate brand, and the All Blacks, the élite team of the New Zealand Rugby Union (NZRU). For the NZRU, as for many sports organizations, the élite team brand of the All Blacks functioned as a surrogate corporate brand. That is, the corporate brand was interpreted primarily through the reputation of its élite team. Before moving to an analysis of the NZRU co-branding strategies, however, we discuss the salient literature on corporate brands, co-branding, sponsorship, and brand equity. Brands originally functioned to identify and differentiate products (Keller, 1998), but now services, organisations, sports, art, ideas, people, and places may all be branded. Kapferer (1997) explained that a brand communicates meaning and defines identity. That meaning and identity are initially designed or expressed by marketers but reside in consumers' minds (King, 1991; Keller, 1998; de Chernatony, 2001). The extensive literature review conducted by de Chernatony and Dall'Olmo Riley (1998, p. 
437) summarized multiple diverse definitions and interpretations of brands and concluded that brands are "value systems" that are represented and communicated through symbols and designs. A corporate brand, from this perspective, may be conceptualized as the sum of the corporation's marketing efforts to present a controlled representation of the corporation's value systems (Ind, 1997; Balmer, 2001a). Value systems need to be considered in the wider context of identity theory (Balmer, 2001a), which offers a more strategic and holistic framework for understanding corporate brands (see for example: Baker and Balmer, 1997; Balmer, 1998, 2001a; Gray and Balmer, 1998; Motion and Leitch, 2002; Van Riel, 1995; Varey and Hogg, 1999). Indeed, a corporate brand may be a representation or expression of an organisation's identity. Balmer (2001a, p. 281) defined a corporate brand as "the conscious decision by senior management to distil and make known the attributes of the organisation's identity in the form of a clearly defined branding proposition". Thus, a corporate brand differs from a product brand in its strategic focus, its management, and its incorporation of corporate strategy, corporate communications and corporate culture (Balmer, 1995, 2001a, b). In establishing a strong corporate brand, an organization has the opportunity to create relationships with stakeholders by virtue of the things it comes to mean and not to mean (Aaker, 1996; Balmer and Dinnie, 1999; Christensen and Askegaard, 2001; Gordon, 1998; Motion and Leitch, 2001; Van Riel and Balmer, 1997). The meanings associated with corporate brand values should assist the organization to achieve its objectives. However, they may also place limits on the organization and restrict its ability to change and grow without devaluing its corporate brand equity. 
Through co-branding with other corporate, product or service brands, an organization has the ability to augment as well as strengthen its existing set of corporate brand values (see for example McDonald et al., 2001). Blackett and Boad (1999), who have made one of the most substantial contributions to the literature, defined co-branding as "a form of cooperation between two or more brands with significant customer recognition, in which all the participants' brand names are retained" (Blackett and Boad, 1999, p. 7). Thus, co-branding is not simply cooperation between organizations, but must involve the public linkage of corporate brands that are owned or controlled by different organizations. This linkage process should start with the corporate brand values (Blackett and Boad, 1999). Managing the co-branding of values requires that four categories of values are considered: core values; absentee values; peripheral values; and generic values (Blackett and Boad, 1999, p. 118). Each co-branded partner has, as its own core values, a set of fundamental values that define the brand and differentiate it from the competition. The challenge within co-branding ventures is to align the core values and maximize the opportunity to augment absentee values that the brand lacks but wishes to acquire. Co-branding may also offer an opportunity to abandon peripheral values that are inappropriate or negative. In order for the co-branding exercise to succeed, each partner must have the generic values that enable their brand to enter the co-brand category and compete effectively (Blackett and Boad, 1999, p. 118). Careful consideration of the areas of commonality or synergy of the potential partners' brand values provides the basis on which a co-branded relationship may be constructed. Just as successful mergers require best practice in corporate identity and corporate communications (Balmer and Dinnie, 1999), co-branding initiatives need to take into account synergies in the corporate identities and corporate communications. 
Furthermore, successful co-branding may result in transferring the three virtues of corporate brands: communicate clearly and consistently the co-brand promise; differentiate the co-brand from its competitors; and enhance the esteem and loyalty of its customers and stakeholder groups and networks (Balmer, 2001b, p. 14). A particular form of co-branding - sponsorship relationships - will now be discussed.
The business of sport, sponsorship and co-branding
Sponsorship has been variously defined: as relationships and networks (Olkkonen et al., 2000); as a "strategic investment" that can be the basis of competitive advantage (Amis et al., 1999, p. 250); as the right to associate with the profile and image of an event and to exploit this association for commercial ends (Meenaghan and Shipley, 1999, p. 328); or as an opportunity to construct a particular kind of "brand imagery" (Meenaghan and Shipley, 1999, p. 328). The opportunity to increase public awareness of a brand, to enhance the reputation of a brand, or to change the reputation of a brand, are cited as the most important reasons for an organization to enter a sponsorship agreement (Meenaghan, 1991; Mintel, 1997; Amis et al., 1999; Ferrand and Pages, 1999). Blackett and Boad (1999) drew a distinction between sponsorship and co-branding, viewing the former as a simple exchange transaction: money in return for image or reputation enhancement. However, sponsorship may operate at a much more complex level that, we would argue, is co-branding. While the receipt of money from sponsors is certainly a major incentive for sports organizations to enter into sponsorship relationships, a co-branding approach enables value to be extracted from the relationship at a variety of levels. The delineation between sponsorship and co-branding can be conceptualized as a continuum with sponsorship at one end and a joint partnership at the other. 
Overall, the sponsorship literature positions sponsorship at the low end of the co-branding continuum, viewing it as transactional rather than relational. It is, however, possible to use sponsorship as the basis for the construction of a co-branded identity that not only adds value to existing brands, but is also itself a source of value. The opportunity to create a co-brand arises when sponsorship moves from being a one-off exchange to being a long-term relationship between two or more organizations, and, as a consequence, sponsorship may be repositioned within the co-branded spectrum of the continuum. Meenaghan and Shipley (1999, p. 335) argue that "In sponsorship both the sponsor and sponsored activity become involved in a symbiotic relationship with a transference of inherent values from the activity to the sponsor". Extending this symbiotic exchange over the long term in a variety of contexts and for a range of activities provides the basis for the construction of corporate co-branded identity and relationship. The conceptualization of a brand, as a representation of values, has been complemented by the understanding that branding adds value to an organization through the creation of brand equity (see for example, Aaker, 1996; Barwise, 1993; Keller, 1993, 2000; Olins, 2000; Srivastava et al., 1998). Initially brand equity was seen from a consumer behaviour perspective that emphasised the "consumer response to the marketing of the brand" (Keller, 1993, p. 8). However, the consumer behaviour perspective has been extended to include interactive communications, marketing strategy, channel management, services and financial perspectives. This broader understanding of brand equity recognizes the importance of relationships with multiple stakeholder groups and recognizes that branding is about the creation of meaning with these groups (Berry, 2000). 
The way in which the firm communicates the brand is a significant factor in the creation of meaning, but brand meaning is also derived more directly from the stakeholder's own experiences with the brand (Fournier, 1998). From a financial perspective, a brand is regarded as an asset (de Chernatony and McWilliam, 1990) and its value to the firm lies in the ability to build and maintain earnings over and above the value created by the tangible assets. Knox et al. (2000) identified reputation, product/service performance, product brand and customer portfolio, and networks as the unique organization value proposition. Brand equity is, therefore, an intangible asset that resides in the complex interaction of brand reputation, performance, meanings and relationships that add to the value of an organization. Potential sources of brand equity for corporate co-brands are outlined below. The research questions (RQ) for this study were based on the discussion above and focus on the following co-branding issues:
RQ1. What objectives underpinned the corporate co-brand?
RQ2. How were brand values deployed to establish the corporate co-brand within particular discourse contexts?
RQ3. How was the desired rearticulation promoted to stakeholders?
RQ4. What are the sources of corporate co-brand equity?
This article employs a case study approach "that investigates a contemporary phenomenon within its real-life context" (Yin, 1989, p. 23). The case study examines the co-branding of the All Blacks and adidas from a discourse perspective in order to facilitate the understanding of co-branding and the development of co-branding theory. Brands and co-brands take on meaning within the context of the particular discourses in which they are deployed. Discourses provide the contextual frameworks within which we come to know and understand our world (Parker, 1992). As Fairclough (1992, p. 
64) stated, "Discourse is a practice, not just of representing the world, but of signifying the world, constituting the world in meaning". That is, discourses are not simple reflections of the world "out there", but also actively construct knowledge and social practice. When organizations seek to establish co-branded relationships, they are attempting to create new ways of thinking about their individual brands, and about the co-brand, as well as initiate new forms of behaviour in relation to the brands and co-brand by their stakeholders. In order to achieve these goals, the organizations must successfully articulate the brands within a particular discourse. Articulation has a double meaning, denoting both the expression of an idea as well as the linkage of two objects or concepts (Hall, 1986; Fiske, 1996). Both meanings are useful for our understanding of the process of articulating brands within discourse. Hall (1986) explained that articulation is the connection of two different elements: ... which is not necessary, determined, absolute and essential for all time ... The so-called "unity" of a discourse is really the articulation of different, distinct elements which can be rearticulated in different ways because they have no necessary "belongingness" (Hall, 1986, p. 53). In the context of co-branding, articulation is the creation of the conditions that allow the connection of brands and the production of a viable linked identity, even though the brands may have no necessary "belongingness". If the articulation is successful, then stakeholders who participate in the relevant discourse will accept the linkage and speak and behave accordingly. The concepts of discourse and articulation are deployed to underpin the case study and frame the analysis. Two forms of data were collected: advertising texts and interviews. Saatchi & Saatchi, the advertising agency for adidas, supplied all of the print and television advertisements created from July 1999 until November 2000. 
Semi-structured interviews were also conducted between 1998 and 1999 with the key individuals involved in co-branding the All Blacks and adidas. While all interviews provided useful information that was incorporated into the case background section above, four of the interviews provided the majority of the insights offered in this article into the co-branding process. These were the interviews with David Moffatt (NZRU CEO); Jack Ralston (NZRU marketing manager); John Foley (All Blacks account manager, Saatchi & Saatchi, Wellington, New Zealand); and Andrew Gaze (the All Blacks relationship manager at adidas). Excerpts from these interviews are included in the analysis sections of the article. A thematic analysis (Owen, 1984) of the interviews and the advertising texts was conducted in order to identify discursive threads of meaning that related to the research questions. The themes were initially identified according to the criteria of frequency, intensity and salience (Foss, 1989). However, salience to the research questions emerged as the most useful of these three criteria and became the overriding criterion for inclusion in this article. Salient themes that emerged from the analysis of the interviews were brand development, brand communication, and building brand value (or equity). The dominant issue that was identified within the brand development theme was meeting co-brand objectives through the alignment of brand values. Within the brand communication theme, the key issue was how to establish and gain acceptance for the co-branded articulation between adidas and the All Blacks at strategic, ideological and tactical levels. The theme of building brand value focused on the potential sources of brand equity. These themes are now explored within the context of the research questions in the following sections.
RQ1. What objectives underpinned the corporate co-brand? 
The advent of professional rugby was the impetus for the establishment of the co-branded relationship between adidas and the All Blacks. A strategic relationship of this kind was crucial to the survival of New Zealand as a major rugby-playing nation. In 1997, the looming expiry date on the NZRU's sponsorship contract with the New Zealand-based apparel company, Canterbury International, represented the first significant opportunity for the NZRU to move from sponsorship to co-branding. Canterbury International had supplied the clothing for the All Blacks and had been associated with the All Blacks' jersey since 1905 (Matheson, 1999). The three contenders for the sponsorship role were Canterbury, Nike and adidas. The key attraction in the sponsorship deal was the association with the All Blacks. In November 1997, the announcement came that this New Zealand-based company had lost the All Blacks' jersey sponsorship rights to adidas. The contract, which was to be implemented in July 1999, dwarfed the previous arrangement with Canterbury. The financial side of the contract represented only one benefit for the NZRU. In addition to money, adidas offered the NZRU the opportunity to build a strong co-branded relationship that would carry the All Blacks brand into a global market. NZRU CEO, David Moffatt, considered that the All Blacks brand could not expand into nations that did not field significant rugby teams without assistance from a strong brand partner with established distribution channels and "marketing grunt". The reverse side of this coin was the value that adidas perceived might be extracted from the association. Andrew Gaze, the All Blacks relationship manager at adidas, explained the deal in the following way: We want to grow rugby and we want to sell more rugby boots and apparel, so the All Blacks are a driving force for those objectives. Secondly, presence on the rugby jersey means TV coverage that grows the brand association between adidas and a lead sport. 
Thus, the media exposure generated by the deal was only one driver for adidas. More importantly, adidas intended to grow the size of the global rugby-apparel market rather than to simply take a larger share of the existing market. It sought association with the All Blacks brand as the vehicle for achieving this growth. The reasons that adidas may have had for selecting the All Blacks as this vehicle are discussed in the next section.
RQ2. How were brand values deployed to establish the corporate co-brand within particular discourse contexts?
John Foley, of the Wellington office of Saatchi & Saatchi, identified a constellation of ten values for the All Blacks. The three core values were "excellence", "respect" and "humility". The extended values were "power", "masculinity", "commitment", "teamwork", "New Zealand", "tradition", and "inspirational". According to Foley, collectively, as a team, the All Blacks represent the values of New Zealand. The selection of the brand values occurred as part of a process that involved Saatchi & Saatchi negotiating with the NZRU managers, the All Blacks coaches, and the All Blacks themselves. It was the All Blacks' impressive win rate that was the primary brand equity in the NZRU's campaign to promote the All Blacks to potential sponsors and co-branding partners. Jack Ralston, the NZRU marketing manager, stated that "success" had originally been offered as one of the core brand values but was replaced at the suggestion of the coach, John Hart, and the manager, Mike Banks. Ralston outlined the rationale, explaining, "New Zealanders don't like to brag about winning. We have the tall poppy syndrome and if you brag you are cut down to size." In New Zealand, those who achieve success are expected to be modest and humble. The understated nature of this brand value reflected a culturally ingrained value of the New Zealand psyche. 
However, as later events have demonstrated, success was actually a more accurate reflection of the expectations of the All Blacks. Within a co-brand, aspirational values such as success, or in this case, winning, may need to complement core values. The core brand value of "respect" denoted the respect with which the All Blacks were regarded both nationally and internationally. NZRU marketing manager, Jack Ralston, explained: Respect - that's respect for the black jersey, respect for the country and what it stands for, respect for the people. According to Ralston, the All Blacks had earned the right to own the brand value of "respect" because of their long history and impressive win rate. The extended brand value of "tradition" drew on similar themes. The brand value of "humility" represented the way in which All Blacks team members were meant to behave at all times as well as the way in which the All Blacks were to be portrayed in all marketing communication. Humility was seen as the All Blacks' defense against the "tall poppy syndrome" and as essential to their continuing popularity. Humility is therefore a culturally-based brand value. Both the NZRU vision and the All Blacks brand values proved a good fit with those of adidas. Andrew Gaze, the All Blacks relationship manager for adidas, explained that when adidas evaluate a potential partner "they look for two or three matching brand values present in their make-up or in the style in which they take part in sport". Having both core values and extended values offers a much greater opportunity for synergy. Although the core values may not change over time, extended values can be adapted to capitalize on new opportunities. The adidas mission was to be the best sports brand worldwide, which was matched with the NZRU's vision that the All Blacks be recognised as a leading sports brand worldwide. 
The values of "tradition" and "New Zealand" were matched with the adidas value of "authentic", with Saatchi & Saatchi promoting the All Blacks to adidas as "the last authentic warriors". "Inspirational" was another All Blacks brand value that adidas viewed as a match with its own values. These joint brand values or common starting points (Van Riel, 1995) were then incorporated into the communication strategy.
RQ3. How was the desired rearticulation promoted to stakeholders?
The communication challenge was to articulate the All Blacks and adidas brands in a way that would position the relationship as a full partnership rather than as traditional sponsorship. The articulation needed to simultaneously occur at ideological, strategic, tactical and emotional levels across a range of discourses. Saatchi & Saatchi was the agency selected to effect this articulation through a multi-million dollar advertising campaign. The NZRU and its leading brand, the All Blacks, were positioned within the discourses of sport, national identity and to a lesser extent, business, whereas adidas was positioned within sport and business. For the articulation to succeed in New Zealand, adidas had to enter the discourse of national identity and rugby had to enter the discourse of business as a professional sports organisation. The first goal for adidas and the NZRU was to ensure that the connection between the brand of the previous sponsor, Canterbury, and the All Blacks brand was dismantled or "disarticulated" and a new connection then formed or rearticulated with the adidas brand (Slack, 1996). The difficulty in disarticulating or rearticulating brands, however, is that the meanings that have already been created are not always easily dislodged. As Hall (1986) clarified, when you try to alter the connections that people associate with a concept, "you are going to come across all the grooves that have articulated it already" (Hall, 1986, p. 54). 
Breaking out of these historical grooves requires a rethinking of the discourses and communication associated with the articulation. A co-branding strategy must, therefore, take account of the need to disarticulate any prior connections as well as to create and communicate the new articulation. The primary objective of the first advertisement was to gain acceptance for the placement of the adidas logo on the iconic All Blacks' jersey and, thus, for the adidas-All Blacks co-brand within the discourse of national identity. The strategy employed in the "Captains" advertisement was to use a historical narrative to show that change had been a constant feature of the All Blacks' apparel. Thus, adidas was portrayed as the latest partner in the continuing evolution of the game and the team. Statements to the media, which coincided with the release of this advertisement, emphasised the technological innovations as opposed to stylistic changes made to the jersey. The articulation strategy was to portray adidas as a partner that was enhancing the performance of the team through the application of technology rather than as simply a wealthy sponsor. The key All Blacks brand value of "respect" for the jersey was thus reinforced. The first advertisement coincided with the launch of a range of All Blacks merchandise, including caps, polar jerseys, and replica clothing. To bolster sales and to build support for the co-brand, adidas launched a "Blackout" campaign, which encouraged fans to show their support for the All Blacks by wearing black on match days. A simple, text-only advertisement that read "Black" was shown on nationwide television; it had no voice-over and concluded with the logos of adidas and the All Blacks. While adidas clearly hoped that fans would wear adidas during the blackout, the advertisement itself did not show adidas apparel. Thus wearing the colour black was associated with support for the All Blacks rather than the purchase of adidas apparel. 
It was a subtle approach that was consonant with the established brand identity of the All Blacks and that appealed to the patriotism of New Zealanders as supporters of their national team. During the 1999 Rugby World Cup, a third television advertisement titled "Black" was designed to further strengthen the associations between the All Blacks-adidas co-brand and national identity. The theme that drove the campaign was "meeting the challenge". Jack Ralston, NZRU marketing manager, outlined some of the meanings of meeting the challenge: "It can mean meeting the challenge of the haka, it can mean meeting the challenge of the World Cup, it can mean meeting the challenge of the tri-series, or it can mean meeting the challenge of losses." In communicating the challenge theme, Saatchi & Saatchi focused on a number of iconic images. The advertisement opened with a shot of the boiling mud pools of Rotorua in New Zealand and of a Maori warrior performing the haka, an intimidating Maori challenge to outsiders who enter their territory. At an ideological level, the decision to link this most New Zealand of traditions with the apparel produced by a multi-national company was an articulation strategy that might have led to accusations of cultural imperialism. However, due to the subtlety of the advertising, the primary association made in the advertisement was the well-established link between the All Blacks and the haka. The adidas logo was the sole indication of the company's involvement, and the primary purpose of the advertisement appeared to be to show support for the Rugby World Cup campaign. The advertisement was thus intended to strengthen the articulation embodied by the All Blacks-adidas co-brand within the discourse of national identity. Association with national identity also served to create articulation at an emotional level and ensure that loyalty attached to the All Blacks was transferred to adidas.
RQ4. What are the sources of corporate co-brand equity? 
The unique organisation value proposition identified by Knox et al. (2000) was a prerequisite for establishing a co-branded partnership. Each corporate brand had an international reputation, recognized performance standards, product brand and customer (or stakeholder) portfolios, and networks of business partners. However, within this section, we present a series of potential sources of corporate co-branded equity.
Equity source 1 - equity is developed through access to the brand strategy and associations of the co-branded partner
Co-branding offers access to the brand strategy of another brand. Thus each organization has the opportunity to pursue new strategies assisted by an experienced partner. In the adidas-All Blacks case, adidas offered the All Blacks a prime position within a global marketing strategy. The NZRU did not need to develop the strategic or marketing capability to pursue its global ambitions. In contrast, the corporate brand of adidas was able to link itself with qualities more commonly associated with service brands: the reputation, the iconic status and the emotional affinity that people have for the All Blacks. Access to the strategic capability of a partner organisation and associations with the intangible benefits such as emotional affinity can be a prime source of co-brand equity for a service-oriented corporate brand. The basis for developing the relationship beyond the initial co-branding venture may also be a source of co-branded equity.
Equity source 2 - equity is developed through the alignment of corporate brand values
Value is derived from the alignment of agreed-on common starting points. In this case, adidas and the All Blacks brand values were compatible and connected the brands at a fundamental level. Thus, a successful co-branding articulation may enable the values associated with one brand to be linked with another brand. 
Alternatively, when particular values are shared by both brands, then these values may be even more powerfully associated with the co-brand. Alignment may occur with core, extended or inspirational brand values. Some brand values are culture-specific and may offer fewer opportunities for marketing communication promotions.
Equity source 3 - equity emerges from the marketing communications association
Association with a partner's brand facilitates the articulation, disarticulation and/or rearticulation within desired discourses. The advertising campaign that promoted the All Blacks-adidas relationship enabled the All Blacks to be successfully positioned in a professional sporting discourse, while adidas was able to augment its position in sports discourse and create an articulation to the New Zealand national identity. In this case, much of the initial co-branded equity was established through the marketing communication relationship, and it is possible to argue that the marketing relationship was crucial to the establishment of the co-brand identity and reputation. "Cross promotion", as David Moffatt, NZRU CEO, termed it, was achieved through a process of open communication in which both parties adhered to the brand values and negotiated …
Equity source 4 - the corporate co-brand reach offers equity
Brand reach may refer to access to established stakeholder relationships, media, distribution channels, and markets. A prime reason for sponsorship is to gain access to a new set of consumers, and in a co-branded relationship that access is extended to include stakeholders such as media, local communities, and even government. Co-branding also allows easy access to a brand partner's established markets and product distribution channels. In its search for a partner, the NZRU had emphasized the need for the principal partner to have global distribution channels. 
An immediate source of co-brand equity may also be access to the established stakeholder relationships, media, distribution channels, and markets of the partner brand. As this analysis has demonstrated, we should think of co-branding as a source of equity for corporate brands. The analysis focused on the adidas-All Blacks case in order to understand how co-branding within a sponsorship may develop from a simple transaction at one end of a continuum into a more strategic and complex corporate co-brand at the other end of the continuum. From this perspective, sponsorship functions as a source of value. Co-branding is a nascent area of research for corporate identity and corporate brand scholars. Discourse theory was deployed in order to theorise the process of corporate co-branding. Co-branding was conceptualized as the construction of a unified identity through a process of articulation, disarticulation and rearticulation that resulted in the formation of successful linkages. Discourse theory, therefore, offers corporate brand scholarship important insights into how identities are reflected, represented or transformed and could also be deployed to investigate the socio-cultural impact of ways of structuring knowledge and social practice. A number of managerial implications were identified within the analysis, and the following strategy for co-branding emerged. The management of corporate co-branding requires that brand values are aligned and common starting points for the establishment of a viable co-branded identity are identified. Those common starting points may then form the foundation for the marketing communications campaign, providing the basis for all advertising and media statements. Within corporate co-branding communication, articulation may serve to link particular associations at an ideological, strategic, tactical and emotional level. If the articulation is successful then value is formed for the new corporate co-brand. 
Initially, co-brand equity is established through the strategic articulation process, but it then emerges from the marketing communications effort. In the case of the adidas-All Blacks co-brand, the key source of corporate co-brand equity was the marketing communications relationship. The role of marketing communications in corporate co-brands, and the equity sources that emerge, offer a potential agenda for research and further theory development about the nature of co-branded equity. Such research will further understanding of how co-branding offers corporate brands the opportunity to move beyond sponsorship relationships to partnerships that redefine the brand identity, discursively reposition the brand and build co-brand equity.

References

Aaker, D., 1996, Building Strong Brands, Free Press, New York, NY.
Amis, J., Slack, T., Berrett, T., 1999, "Sport sponsorship as distinctive competence", European Journal of Marketing, 33, 3/4, 250-72.
Baker, M., Balmer, J.M.T., 1997, "Visual identity: trappings or substance", European Journal of Marketing, 31, 5/6, 366-82.
Balmer, J.M.T., 1995, "Corporate branding and connoisseurship", Journal of General Management, 21, 1, 24-6.
Balmer, J.M.T., 1998, "Corporate identity and the advent of corporate marketing", Journal of Marketing Management, 4, 963-96.
Balmer, J.M.T., 2001a, "Corporate identity, corporate branding and corporate marketing: seeing through the fog", European Journal of Marketing, 35, 3/4, 248-91.
Balmer, J.M.T., 2001b, "The three virtues and seven deadly sins of corporate brand management", Journal of General Management, 27, 1, 1-17.
Balmer, J.M.T., Dinnie, K., 1999, "Corporate identity and corporate communications: the antidote to merger madness", Corporate Communications: An International Journal, 4, 3, 182-92.
Balmer, J.M.T., Gray, E.R., 1999, "Corporate identity and corporate communication: creating a strategic advantage", Corporate Communications: An International Journal, 4, 4, 171-6.
Barwise, P., 1993, "Brand equity: snark or boojum?", International Journal of Research in Marketing, 10, 93-104.
Berry, L.L., 2000, "Cultivating service brand equity", Journal of the Academy of Marketing Science, 28, 1, 128-37.
Blackett, T., Boad, B., 1999, Co-branding: the Science of Alliance, Macmillan, London.
Christensen, L.T., Askegaard, S., 2001, "Corporate identity and corporate image revisited: a semiotic perspective", European Journal of Marketing, 35, 3/4, 292-315.
de Chernatony, L., 2001, From Brand Vision to Brand Evaluation: Strategically Building and Sustaining Brands, Butterworth-Heinemann, Oxford.
de Chernatony, L., Dall'Olmo Riley, F.D., 1998, "Defining a `brand': beyond the literature with experts' interpretations", Journal of Marketing Management, 14, 417-43.
de Chernatony, L., McWilliam, G., 1990, "Appreciating brands as assets through using a two-dimensional approach", International Journal of Advertising, 9, 111-9.
Fairclough, N., 1992, Discourse and Social Change, Polity Press, Cambridge.
Ferrand, A., Pages, M., 1999, "Image management in sport organizations: the creation of value", European Journal of Marketing, 33, 3/4, 387-402.
Fiske, J., 1996, "Opening the Hallway: some remarks on the fertility of Stuart Hall's contribution to critical theory", Morley, D., Chen, K.-H., Stuart Hall: Critical Dialogues in Cultural Studies, Routledge, London and New York, NY, 212-20.
Foss, S.K., 1989, Rhetorical Criticism: Exploration and Practice, Waveland, Prospect Heights, IL.
Fournier, S., 1998, "Consumers and their brands: developing relationship theory in consumer research", Journal of Consumer Research, 24, 343-73.
Gordon, I., 1998, Relationship Marketing, John Wiley & Sons Canada, Etobicoke.
Gray, E.R., Balmer, J.M.T., 1998, "Managing corporate image - an integral part of strategy", Long Range Planning, 31, 5, 695-702.
Hall, S., 1986, "On postmodernism and articulation: an interview with Stuart Hall", Journal of Communication Inquiry, 10, 2, 45-60.
Harris, F., de Chernatony, L., 2001, "Corporate branding and corporate brand performance", European Journal of Marketing, 35, 3/4, 441-56.
Hatch, M.J., Shultz, M., 1997, "Relations between organizational culture, identity and image", European Journal of Marketing, 31, 5/6, 356-65.
Ind, N., 1997, The Corporate Brand, Macmillan, London.
Kapferer, J.N., 1997, Strategic Brand Management: Creating and Sustaining Brand Equity Long Term, Kogan Page, London.
Keller, K.L., 1993, "Conceptualising, measuring, and managing customer-based brand equity", Journal of Marketing, 57, 1-22.
Keller, K.L., 1998, Strategic Brand Management: Building, Measuring and Managing Brand Equity, Prentice-Hall, Upper Saddle River, NJ.
Keller, K.L., 2000, "Building and managing corporate brand equity", Schultz, M., Hatch, M.J., Larsen, M.H., The Expressive Organization, Oxford University Press, Oxford, 115-37.
King, S., 1991, "Brand building in the 1990s", Journal of Marketing Management, 7, 1, 3-13.
Knox, S., Maklan, S., Thompson, K.E., 2000, "Building the unique organization value proposition", Schultz, M., Hatch, M.J., Larsen, M.H., The Expressive Organization, Oxford University Press, Oxford.
Leitch, S., Motion, J., 1999, "Multiplicity in corporate identity strategy", Corporate Communications: An International Journal, 4, 4, 193-9.
McDonald, M.H.B., de Chernatony, L., Harris, F., 2001, "Corporate marketing and service brands: moving beyond the fast moving consumer goods model", European Journal of Marketing, 35, 3/4.
Matheson, J., 1999, "Step up, adidas", New Zealand Rugby Monthly, 24, July, 18-26.
Meenaghan, T., 1991, "Sponsorship: legitimizing the medium", European Journal of Marketing, 25, 11, 5-10.
Meenaghan, T., Shipley, D., 1999, "Media effect in commercial sponsorship", European Journal of Marketing, 33, 3/4, 328-47.
Mintel, 1997, Mintel Annual Estimates of Sponsorship Market Values, Mintel Publications Ltd.
Motion, J., Leitch, S., 2001, "Courting the lost consumer: corporate identity, corporate brands, and the New Zealand insurance industry", Asia Pacific Public Relations Journal, 2, 2, 59-73.
Motion, J., Leitch, S., 2002, "Technologies of corporate identity", International Studies of Management and Organizations, 32, 3, 45-64.
Olins, W., 2000, "How brands are taking over the corporation", Schultz, M., Hatch, M.J., Larsen, M.H., The Expressive Organization, Oxford University Press, Oxford, 51-65.
Olkkonen, R., Tikkanen, H., Alajoutsijärvi, K., 2000, "Sponsorship as relationships and networks: implications for research", Corporate Communications: An International Journal, 5, 1, 12-18.
Owen, W.F., 1984, "Interpretive themes in relational communication", Quarterly Journal of Speech, 70, 274-87.
Parker, I., 1992, Discourse Dynamics: Critical Analysis for Social and Individual Psychology.
Shultz, M., Hatch, M.J., Larsen, M.H., 2000, The Expressive Organization, Oxford University Press, Oxford.
Slack, J.D., 1996, "The theory and method of articulation in cultural studies", Morley, D., Chen, K.-H., Stuart Hall: Critical Dialogues in Cultural Studies, Routledge, London and New York, NY, 112-.
Srivastava, R.K., Shervani, T.A., Fahey, L., 1998, "Market-based assets and shareholder value: a framework for analysis", Journal of Marketing, 62, 1, 2-18.
Van Riel, C.B., 1995, Principles of Corporate Communication, Prentice-Hall, London.
Van Riel, C.B., Balmer, J.M.T., 1997, "Corporate identity: the concept, its measurement and management", European Journal of Marketing, 31, 5/6, 340-55.
Varey, R.J., Hogg, G., 1999, "Managing identity at the crossroads of the national highroad and the corporate lowroad", International Center for Corporate Identity Studies Working Paper Series.
Yin, R.K., 1989, Case Study Research: Design and Methods, Sage, Newbury Park, CA.
Reconstruction. Renewal. Youth.

To improve the look of your skin and maximize its density, we developed COLVITA - a capsule of youth. Colvita is a unique complex of three main components:
- native, unusually bio-available collagen that rebuilds connective tissue,
- micronized algae with the high iodine content needed to maintain a proper skin structure,
- natural vitamin E obtained from wild grain seeds, protecting the most important bastion of youth and vitality - the genetic code in the DNA of skin cells.

Collagen is the most important protein of the human body and the most important component of connective tissue. It is responsible for the condition of the skin, eyeballs, bone matrix, hair and nails. According to recent research, the human ageing process is largely determined by the state of the body's collagen, which is in turn clearly reflected in the appearance of the skin. Human collagen chains are composed of about 20 amino acids. The body produces only some of these in its metabolic processes; the rest must be supplied in food. Colvita is a completely new approach to caring for tissues composed of collagen: it nourishes the body in a way that constantly replenishes the store of amino acids for the cells that produce and maintain collagen.

Colvita is produced from lyophilized fish collagen. The triple helix of collagen, similar in construction to the DNA helix but functioning in extra-cellular space, is the carrier of information about amino acid reserves in vertebrates. Lyophilization means freezing pure fish collagen to a temperature of minus 40ºC under high vacuum (1 Pa) and then sublimating (drying) it. This method guarantees the highest purity and the biological durability of nutrients, with amino acids preserved in their natural form. The process also preserves the components and gives the product excellent hygroscopicity, and therefore extremely high assimilability.
Marine algae were selected as one of the ingredients of COLVITA because they show synergy with fish collagen. They are obtained from the clear waters of the Brehat archipelago off the coast of France. Marine algae contain vitamins, macro- and microelements, peptides and amino acids - a composition conducive to the proper hydration, appearance and condition of our skin. The mineral components, the so-called biogenic elements, play an important role in the nutrition of the body. The role of micro- and macroelements, as well as amino acids, in the metabolic system is much broader than the role of vitamins. Mineral components are essential to the body for building purposes (especially in bone tissue). They form part of the body fluids, certain enzymes, high-energy compounds, and so on. They also influence the regulation of the functions of individual organs and of the whole body. The specific role of minerals is to maintain homeostasis - the internal balance of the body. Algae are rich in calcium, copper, iron, manganese, magnesium, potassium, chromium, iodine, vitamins B and C, selenium and zinc (microelements which are very important for collagen synthesis). Particular attention should be paid to iodine, which occurs naturally in marine algae and fulfils a significant task as a structural component of thyroid hormones. In addition, iodine helps maintain the proper structure of the hair, nails and skin.

Vitamin E (D-alpha-tocopherol) is present in many dietary supplements because it is the best-known and a very effective anti-oxidant. COLVITA contains the newest biotechnological formula of vitamin E, with 100% biological activity. Colvita uses D-alpha-tocopherol, a form of vitamin E obtained from natural sources which exhibits high biological activity. The properties of vitamin E were decisive in its choice as a component of Colvita.
Vitamin E supports the synthesis of collagen in the organism and protects the most important stronghold of youth and vitality - the genetic code of the skin cell, its DNA - against degeneration. This vitamin is therefore considered an important factor in inhibiting the ageing process.

RECOMMENDED DAILY INTAKE: 2 capsules daily, during or after a meal. Do not exceed the recommended daily intake. The product is a food supplement and should not be used as a substitute for a varied diet. A balanced diet and healthy lifestyle are important for maintaining good health. Do not use in case of hypersensitivity to any component of the product.

INGREDIENTS: Lemon fruit lyophilizate (carrier), algae (Fucus vesiculosus), fish gelatine (capsule), collagen lyophilizate, vitamin E (D-alpha-tocopherol).

Store in a dry and dark place, at room temperature, out of the reach of small children. BOX of 60 capsules.
Of course the game changed, an entire new generation is waiting to be played, man. And that's all the more fun, eh? I'll be waiting for this new team of yours. Make it good.

Lol i forgot ive been away from the game awhile so obviously the game has changed lol well ill be back with a different team lol nice to see u too CK!

Rebooted team. To hell with using only Gen 5 pokes... Gotta mix it up.

Flying Gem or Leftovers or Fighting Gem
- Shadow Claw
- Brave Bird

Why Braviary? Cuz he's the first Normal/Flying type bird poke that ive actually liked since Pidgeot. He is kinda bulky, and the following moveset is to cater to his exceptionally high attack. Brave Bird STAB is going to hurt a lot with 123 base attack. Most likely going with Fighting Gem to amp up Superpower to take out the strongest Steel types: if my calculations are correct, Braviary using Superpower with the Fighting Gem boost along with the Sheer Force ability OHKO's every Steel type in OU bar physically defensive Bronzong, including all, if not most, variations of Metagross (assuming my calculations are correct). Thrash at 120 base power plus Braviary's STAB and Sheer Force is kinda frightening... I know it's a risk bc of it continuing for 2-3 turns with confusion afterwards; this is kind of a "going out with a bang" move. Who doesnt wanna go around rampaging with 120 plus STAB x 1.3??? (And my Arcanine shouldve cleared out most of the Steel types in the fray by the time this thing comes out anyways.) Anyways, this thing is meant for stopping power and because I feel like playing favorites this generation. I guess I could just call it a STAB coverage sweeper.

- Wild Charge or Grass Knot (dont see much point in having both, prolly will go with the latter)

Cant talk me outta my favorite poke. Love my Arcanine. Relaxed nature so I dont have to put any of his glorious all-around stats down. Leftovers to recover his bulk, bc this dog can take a hit or two.
Extremespeed is the death of many, picked off by the legendary dog. Overheat can take down most anything, with Crunch being to alleviate me of the many new Psychics/Ghosts that run amok in this new generation, and Grass Knot to pick off the waters, grounds, and rocks of the new gen. Particularly tho, it is for Swampert, bc I know damn well he will still find his way into OU.

- Stone Edge
- Hone Claws

Ok. Trying to set him up as a sweeper. Hone Claws till Salac activates, which will be after one or two hits. Then sweep away, with EQ and Crunch as STAB and Stone Edge with increased accuracy from Hone Claws, upped attack, and Moxie after every poke it takes down. If I can pull off two Hone Claws with Salac activating, it could lead very easily to a full team sweep until a priority move comes out. Grass Gem? Not sure for item.

- Power Whip
- Iron Head
- Stealth Rock

I like Ferrothorn, neat poke, seems like a better version of Forretress to me. Arcanine covers the Fire weakness, even benefits from it. SR is obvious for setup with defenses and resistances like this, and I was thinking Grass Gem for a one hit wonder with Power Whip, but not quite sure yet...

- Ice Beam
- Dragon Pulse
- Signal Beam

My typical Kingdra. Good all around, hard hitting with LO and good STAB, this thing has ruined many peoples' days in battles.

Leftovers or Light Clay
- Leaf Storm
- Light Screen
- Dragon Tail

CK, I owe you on this one with the Leaf Storm + Contrary combo. Unbelievable. Anyways, for some reason, Serperior can take hits like a champ. Basically, set up the screen for whatever ur facing, start Leaf Storming and watch the power grow. Dragon Tail is on there in case I run into any unfriendly pokes resistant to my Leaf Storm, to force a switch instead of losing my stat boosts. So there it is. Open to constructive criticism. This is a rough prototype.

Well, that's every poke's individual analysis done. Time for the team analysis! ... Damn, it feels good to be doing this again.
From a typing perspective, you're doing pretty good, really. You've got plenty of resists, and aren't going triple on any weakness, EXCEPT Ice. With Arcanine your only resist to it, Water types going Water/Ice for coverage could really give this team trouble. At least you have some nice Water resists though, which means Rain Dance teams aren't going to straight-up wreck you. And let me tell ya, they're a legitimate threat in BW OU. Dangerous.

However, the worst flaw in this team is being powerless against Fighting types. You have two weaknesses to Fighting. A common priority move in Mach Punch, and common power-moves like Close Combat, mean your main sweeper and your only wall are just moments away from getting owned. My Terrakion could just rampage through this team with Banded Close Combat, and there'd be nothing you could do, since none of your pokemon outspeed it. Arcanine would die just to weaken it via Intimidate, and then you'd be open to being swept by it yet again. Not having Fighting resists is a death wish in BW OU, but having weaknesses to it at the same time is suicide. You need something that's a stop-all to every Fighting type, simply put. This is the team's only glaring weak point that could be spotted off the bat via Team Preview, but it sure as hell is a terrible weakness, and one that's easy to take advantage of.

As for the team's overall strength, your BSTs crammed together would be poor, no doubt. That never means your team is garbage, but it's a problem. You don't have a defensive pokemon besides Ferrothorn, you don't have any speed demons to be an emergency stop to a sweep or to sweep with, and your offensive stats, aside from Braviary and (offensive) Serperior, are just alright. But if offensive stats are your only strong point, you need more speed to back them up, IMO. So, this team is good. The problem is that it'd be good in UU, IMO.
Good synergy, nice typing and all, but aside from Ferrothorn (and if any of these are also banned in UU), I feel like Gen 5 OU is just gonna ravage this team. Skill can never entirely close the gap of stat superiority, and vice versa.

Yo i could've sworn Water Gun was an attack. Ferrothorn, Krookodile and Kingdra are the only three im willing to change on this team. The others I like faaaar too much. Sheer Force gets nulled if a move doesnt have a secondary effect on the opponent??? Wtf... Thats a rip, my friend... Completely.... Ugh... Whats the point of it then... Oh well. Ill work out a different moveset for my feathery friend then. This variant of Arcanine has always worked well for me before with picking things up mid to late game, or just putting enough of a dent into something for one of my sweepers to set up. Hes always come through for me before, and im not willing to let him go that easily. And yea, ill go with the Wild Charge over Grass Knot, which will make him mostly physical instead of special anyways, save for Overheat. Serperior im keeping as is. I wanna see how well this will or will not work, and yes CK I will let u know how it goes, glad to spark your interest lol. Ferrothorn I might keep and change how you suggested... Not sure yet.... Gotta rethink this one.

If Sheer Force canceled all the effects of any moves, we'd be ****ing doomed, lol. No LO recoil on all moves means with this ability, that's a boost comparable to a Choice Band, with the freedom to change moves. And moves like Superpower would simply be broken; a huge boost without any drawback on a 120 BP move? ****ing hell, lol. Braviary doesn't resist Fighting, and without significant defensive investment, it's not going to weather too many Fighting moves. It can force switches on slower fighters, but slower pokemon means more powerful hits for Braviary to deal with, so it's in no way a counter.
"OHKO the opposition" can never be considered a definite counter if they can just switch out. So yeah, you'll definitely need a good wall or bulky supporter/attacker to handle them fightin' types, IMO.

I'm going to create a team, but need suggestions. Here it is:
Dragonite (Dragon, Flying)
Tyranitar (Rock, Dark)
Weavile (Ice, Dark)
This is my current idea, but please come with suggestions (with reasons!), also with movesets and natures if you can. I don't have Pokemon Black or White yet, but you can give me suggestions from those games too, 'cause I'm going to buy it.

- Crobat - EV's: 128 Spd, 130 SpAtk, 252 Def
- Heat Wave
- Giga Drain
- Sludge Bomb
- Nasty Plot

This is my favorite variant I have created of a lead Crobat. Fast, and surprising with special attack. Infiltrator means Crobat bypasses Barrier, Reflect, Light Screen, and Safeguard, which means no more problems with WallZongs. The only priority move Crobat needs to fear is Ice Shard, and ive calculated the proper EV's into defense so that this variant of Crobat can survive one, sometimes two, from an Adamant-natured Mamoswine. Mamo is the strongest user of Ice Shard, and this bat can take it and, more importantly, dish back a OHKO in the form of either Heat Wave OR Giga Drain. Though Giga Drain is often shunned on those without a STAB boost for it, ive dumped the necessary EV's into SpAtk so that, with a single Nasty Plot boost, this bat can OHKO every Rock and Ground type that gets sent out to stop him. Sludge Bomb is for STAB on Psychics and others still posing a threat to my bat. Also, he has plenty of speed, but if my calculations are right, with the EV's I have invested into speed and the boost from Salac Berry, this bat will outspeed all base 100 and lower Choice Scarf users. My version of Crobat redefines the phrase "Bat Outta Hell."

- Gyarados - EV's: 252 Def, 120 SpDef, 140 HP
- Thunder Wave
- Dragon Tail

A strange version of Gyarados? No, I dont think so. Im gonna call this a Phase-Dos.
Waterfall and Bounce for STAB, why not? T-Wave on the obvious switch that will come when I bring this bad boy out against a physical sweeper, Banded or not. Either T-Wave or Dragon Tail on the switch; really piss them off, dealing some damage and forcing them to switch out whatever "counter" they may have for this. The EV spread combined with Intimidate has made this Gyarados a very formidable, if obvious, wall. Leftovers for this wall are essential: keep him alive as long as possible to put a stop to sweepers and to phase them out via T-Wave or Dragon Tail.

- Serperior - EV's: 252 SpAtk, 252 HP, 6 Spd
- Leaf Storm
- Light Screen
- Dragon Tail

Same reason for the Serp as before.

- Scrafty - EV's: 252 Atk, 252 HP, 6 Def
- Fake Out
- Drain Punch
- Ice Punch

Scrafty is a beast. Quite difficult to KO... Especially with the EV spread I have given. His attacks are strong, effective, and if they cannot get the job done, Liechi will make up for it. Fake Out has always been an annoyance, but is far more useful in this gen with the Sturdy ability becoming more and more popular, allowing those usual OHKO Rock/Ground types to make their comeback and smack down those unsuspecting of it. This set cleans up the trash, which is what Scrafty always kind of reminded me of. Hes the garbage man, and he's here to clean up the endgame.

- Golurk - EV's: 252 Atk, 252 HP, 6 SpDef
- Stone Edge

Golurk is an aesthetic choice, at best. Hes Ground/Ghost, thats amazing. He makes up as an intercept to my Electric weaknesses on the team in the form of Crobat and Gyarados. No Guard and 252 Atk EV's means hes going to hit... hard. Rock Gem to boost Stone Edge just in case of any unfriendly Mence or Gyara sweepers trying to wreck my team. He boasts 3 immunities, something I cant rightfully turn down.

- Swampert - EV's: 252 Atk, 140 Def, 120 SpDef

Swampert.... Just an awesome poke.
No special setting for him, just to absorb attacks, stay alive, and weaken the opposing team enough for my other members to pick them off one at a time. Two STABs and Endeavor just for safety reasons; this Pert can absorb some hits. Immunity to Electric is another up for my Gyara and bat. Overall, I think this team has some good synergy. It works well, and kind of catches the opponent off guard in situations that would have been obvious to solve in 4th gen, but 5th gen now makes it possible to pull the rabbit outta the hat, so to speak. Ready for review.

Also, if you want some serious advice towards movesets and the like, you'll need to make some movesets of your own first. This thread's for rating teams, getting advice for a team you've already put together, or just posting your team for the sake of it, bragging rights and such. Not to help you build from scratch, step-by-step. So if ya wouldn't mind, could ya make a rough draft of what movesets your teams would have? Then I'd be happy to help ya. =)

You've clearly remedied the previous weakness issues, lol. But with old problems solved come new ones in tow. You don't have a Dragon weakness, but you don't have a single Steel type either. Gyarados can Dragon Tail opponents out only so many times, and Serperior isn't exactly bulky enough to do the same thing against something that's DDing. Any Dragon type can DD up and Outrage you with little worry, or just fire off a Draco Meteor and pummel your pokemon into submission time and again. Ferrothorn prevented this from being problematic; but with it gone, Dragons are waiting to devour your entire team. Of course, many Dragons have a Fire move to handle Ferro, so I wouldn't recommend him as your Steel of choice. Regardless of whether or not you add a Steel back to the team to prevent Dragon clean-sweeps, I'd highly recommend placing it over Swampert.
You've got a better bulky Water type in Gyarados, and Golurk has STAB Earthquake and much higher Atk to abuse it. Swampert also adds just one good resist in Fire, with Electric already handled by Serperior and Golurk, and Rock, Steel, and Poison being things a Steel type can handle fine. So in terms of typing, Swampert adds nothing to the team at all, and is the best candidate to be replaced with something else. Namely, a Steel type.

Your only other problem is your massive 3x Ice weakness. I can see why you fear Mamoswine's Ice Shard. However, the recommended Steel helps solve this problem (superficially; Mamo's EQ is something to fear indeed), and with plenty of better leads than Crobat which aren't weak to Ice, it's easy to patch up. STAB Ice is hardly something to worry about, either. Rocks isn't as common as before, but with Mach Punch and Fighting types everywhere, Ice types are still uncommon in OU. Of course, just watch out for sweepers using Ice Beam, and have at least one resist to it, since as of now, you have none.

Also, even for a balanced team, this team is a bit slow. The amount of bulk and power the team has helps offset this weakness, but if any pokemon with really high speed or a move like Dragon Dance sets up, you're in serious trouble. Without lethal priority or an instant stop to some of the most popular stat-up sweepers, you'll need to make every move count, and above all, prevent sweepers from setting up. Hyper-offensive teams like my usual ones could potentially pick this team apart. This isn't a fatal flaw like your last team's, but just something to play very cautiously around. Dragon Tail Gyarados can't counter every threat you encounter; namely, special threats. If only there was an ability that was Intimidate for Sp. Atk... =/

Anywho, your team's typing has some good synergy and defensive core capabilities, and some key resistances; just handle your Dragon and Ice problem with a single new pokemon, and this team's definitely a threat in OU.
It's a B-team for OU standards, but B-teams can be all the more deadly through surprise factors, and being underestimated in general. This is a much better team than your previous edition, no doubt about that. Nicely done. A good team that could be even better. I can't wait to crush it. :3
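For anyone who wants to sanity-check OHKO claims like the ones traded back and forth above, the core of the Gen 5 damage formula fits in a few lines. This is a simplified sketch (the in-game calculation floors after each intermediate step and applies a random 85-100% roll), and the stats below are made-up illustrative numbers, not an actual calc for any of these pokemon:

```python
def damage(level, power, attack, defense, stab=1.0, type_eff=1.0, other=1.0):
    """Simplified Gen 5 damage estimate (no per-step flooring, no random roll)."""
    base = ((2 * level / 5 + 2) * power * attack / defense) / 50 + 2
    return int(base * stab * type_eff * other)

# Hypothetical level-100 matchup: a 120 BP STAB move from an attacker with
# 300 Atk into 250 Def, neutral type effectiveness.
plain = damage(100, 120, 300, 250, stab=1.5)
# The same hit with a one-shot 1.5x Gem boost folded into `other`.
gem_boosted = damage(100, 120, 300, 250, stab=1.5, other=1.5)
```

With these illustrative numbers, `plain` works out to 184 and `gem_boosted` to 276, which is exactly why stacking STAB with item or ability multipliers is what turns 2HKOs into OHKOs.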
Remarks by Dr Anders Nordström, Acting Director-General of WHO
Inauguration of the UNAIDS/WHO Building
20 November 2006

Mr Secretary-General Kofi Annan, Gunilla Carlsson, chair of the PCB, Margaret, Peter, Richard, representatives of the Swiss Government, colleagues, ladies and gentlemen,

Good morning. I would like to echo Peter in thanking you all very much for coming here today to celebrate with us the opening of this beautiful new building. For WHO, and for me, this morning is about teams.

Let me start with the first team, the one that built this house: 300 people were involved in the construction of the building, and it took 821 days. Let me extend my sincere appreciation and thanks to all of you. This would not have been possible without a generous loan from the Swiss Government, and the vision of the architects, whose winning design, "permeability", we are now enjoying. The net result is a true synergy: a building that is far more than its components.

My second team is the UNAIDS-WHO team. This building symbolises and gives practical effect to the joint work between UNAIDS and WHO. A building like this is not just a better way of grouping offices. It is a strong commitment to collaboration. I am very happy to be standing here today together with Peter Piot. I am proud that the UNAIDS-Cosponsors team is recognized as one of the most successful throughout the United Nations system. This is UN reform in practice! I was also very proud in Toronto in August to see that WHO is fully taking on its role and responsibility within the team.

The third team is the WHO HIV/AIDS-TB-Malaria team, who move into this building with Dr Asamoa-Baah as their team leader. The global response to AIDS, tuberculosis and malaria has been extraordinary in recent years. If we look back at the G8 meeting in Okinawa some 8 years ago and where we are today, it is quite remarkable. I would especially like to recognize the work done by the Global Fund and Richard Feachem here today.
My fourth team is the very important Global Fund-UNAIDS-WHO team.

I now have only two teams left to mention, and these relate to TB and malaria. The Stop TB team is regarded as one of the most effective partnerships today. You have an uphill task, but with your commitment you will succeed. And finally, the Roll Back Malaria team. We hosted the board meeting here in Geneva only last week, and with the very strong financial and technical commitment to scale up malaria control, I am convinced that we will now see rapid results.

Of course, teams cannot work without effective leadership. Let me, on behalf of all WHO staff and member states, sincerely thank you, Mr Secretary-General, for your strong support and personal engagement not only for AIDS, TB and malaria, but for the importance of health more broadly: polio, immunization, violence, child health... Mr Secretary-General, you also represent the wider team of the UN family, of which we are all part. You have done so much to foster collaboration and joint work between the different parts of the UN.

We cannot today talk about leadership without remembering the important work of JW Lee through the 3 by 5 initiative. He made the prospect of universal access to antiretrovirals real. Today we remember him, and recognize how our public health work is being carried on by others. Dr Margaret Chan next takes forward the work of the Director-General of WHO and brings her very strong personal commitment to our team.

Let me finally thank all of you who have joined us here today. You are our extended team. Without your political, financial and moral support we would never be able to take this work forward. Today's inauguration is an opportunity to set our sights even higher, to be inspired. Much still remains to be done. People are still dying and suffering. Our teams need to be encouraged to make even stronger progress and achieve even greater results in preventing and treating AIDS, tuberculosis and malaria, and in supporting the people affected by them.
I thank you all.
By now many of our customers have upgraded to the recently released 1.3.0 version of Synergy. Although we still have many features to add and improvements to make, Synergy is coming along rather well. Several substantial upgrades are included in 1.3.0, and although we list the changes in the release notes, they sometimes don't make sense without further explanation. So I thought I would share some of the more essential changes in terms we can all understand.

One of the biggest improvements, though the least noticeable (at least initially), is the enhanced way in which Synergy caches information. What this means for the student is dramatically improved load times for each page displayed in the Module. In a nutshell, once a page has been visited by any student, the audio, video, graphics, and so forth from that page are "cached" onto that local machine. The next time the page is accessed, those files do not have to be streamed across the network. Synergy has always done this; it just does it much better now. This is especially beneficial in large implementations where many students are accessing Synergy simultaneously.

Speaking of load times, you should also find that screens such as the Grade Book in the Faculty Portal load significantly faster than in previous versions of Synergy. Several other changes are evident within the Faculty Portal. For instance, the Individual Assignment Grade report was added. This report provides information similar to that seen from the Student Portal and is accessible on the Reports menu.

All of these are welcome changes, but some of the most important changes took place in the Scheduler itself. For example, the Scheduler now supports multiple copies of the same Module. So, if you have multiple copies of Personal Finance, you can now adjust the number of seats available for that Module.
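For the technically curious, the caching idea described earlier boils down to a check-before-stream pattern: look for a local copy of each page asset first, and hit the network only on a miss. Here is a minimal sketch in Python; the class, method names and file layout are hypothetical, purely for illustration, and are not Synergy's actual implementation:

```python
import hashlib
from pathlib import Path


class MediaCache:
    """Keep local copies of page assets so repeat visits skip the network."""

    def __init__(self, cache_dir="synergy_cache"):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(exist_ok=True)

    def _local_path(self, url):
        # Hash the URL so any asset name maps to a safe local file name.
        return self.cache_dir / hashlib.sha256(url.encode()).hexdigest()

    def fetch(self, url, download):
        """Return the asset bytes, downloading only on a cache miss.

        `download` is a callable(url) -> bytes standing in for the
        network stream.
        """
        path = self._local_path(url)
        if path.exists():           # cache hit: no network traffic
            return path.read_bytes()
        data = download(url)        # cache miss: stream once...
        path.write_bytes(data)      # ...then keep a local copy
        return data
```

With this pattern, the second request for the same asset never touches the network, which is why page loads speed up so dramatically once a page has been visited.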
Plus, you can now add content that you created to a Content Set, thereby allowing you to use the Scheduler to schedule students into non-Module lessons. Another long-overdue feature is the ability for Synergy to schedule and communicate with 2.x.x versions of the Modules. Now you are able to add older Modules – what we refer to as Legacy Modules – into Synergy and let Synergy handle the grading and scheduling, eliminating the need for Colleague.

I've just touched on the major changes in Synergy 1.3.0, but the overall functionality has been markedly improved. The load times of all the pages within the Student and Faculty Portals are shorter, the Scheduler is smarter and more efficient, many fail-safes have been implemented to prevent user error and data corruption, and we have generally tried to make the product more user friendly. We continue to improve Synergy on a daily basis, and we welcome comments and suggestions on how to make it a better product. Please feel free to send us any ideas you have, and we will see that they are considered.
Queensland Recovery and Reconstruction in the Aftermath of the 2010/2011 Flood Events and Cyclone Yasi

A report prepared by the World Bank in collaboration with the Queensland Reconstruction Authority

June 2011

The International Bank for Reconstruction and Development / The World Bank Group
1818 H Street, NW, Washington, DC 20433, USA

Queensland Reconstruction Authority
PO Box 15428, City East Q 4002, Australia

Disclaimer: The views expressed in this publication are those of the authors. The findings, interpretations, and conclusions expressed herein do not necessarily reflect the views of the Board of Executive Directors of the World Bank or the governments they represent, or the Queensland Reconstruction Authority.

Cover photos: Top: Aerial Story Bridge post flood. Photo courtesy Brisbane Marketing. Bottom left: Southbank flooding/Lyle Radford; center: Ipswich flooding, January 2011/Photo courtesy of The Queensland Times; right: Port Hinchinbrook/Photo courtesy of The Townsville Bulletin.

This report was prepared by a team led by Abhas Jha and comprising Sohaib Athar, Henrike Brecht, Elena Correa, Ahmad Zaki Fahmi, Wolfgang Fengler, Iwan Gunawan, Roshin Mathai Joseph, Vandana Mehra, Shankar Narayanan, Daniel Owen, Ayaz Parvez, Paul Procee, and George Soraya, in collaboration with participating officers of the Queensland Reconstruction Authority.
The team thanks Jim Adams, Ferid Belhaj, John Roome, Vijay Jagannathan, Charles Feinstein, Saroj Jha and Kanthan Shankar for their support and guidance, Zuzana Svetlosakova for her edits, and Mohamad Al-Arief, Sofia Bettencourt, Jack Campbell, Olivier Mahul, and Doekle Wielinga for their constructive comments.

Flood damage reconstruction work undertaken by QBuild at Milperra. The State of Queensland.

Port Hinchinbrook. Photo courtesy of the Townsville Bulletin.

Table of Contents

Foreword
Executive summary
Introduction
  Floods in Queensland
  Impact
  Response and early recovery

Part A: Achievements in Queensland
1 Institutional, Implementation and Coordination Arrangements
  Institutional framework for disaster response and preparedness
  The Queensland Reconstruction Authority
2 Measuring Needs and Results Progress in Recovery and Reconstruction
  Framework for measuring needs and results progress
  Rapid needs assessment and Value for Money approach
3 Financing the Relief, Recovery and Reconstruction
  Australia's disaster assistance framework
  Estimating and meeting the needs
4 Economic Recovery
  Economic impact
  Economic recovery measures
5 Strategic Communication
  Communication and natural disasters
  Information outreach in Queensland
6 Building Resilience
  Disaster risk reduction strategies
  Policies and actions for building resilience
7 Community Engagement in Recovery and Reconstruction
  Role of community engagement in recovery and reconstruction
  Engaging Queenslanders

Part B: Other Lessons in Global Good Practice
1 Institutional, implementation and coordination arrangements
2 Measuring needs and results progress in recovery and reconstruction
  Post disaster needs assessment
  Measuring progress and performance in recovery and reconstruction
3 Financing the relief, recovery and reconstruction
  Diversifying sources of risk financing and incentivizing resilience
  Using reconstruction spending to accelerate community recovery
4 Economic recovery
5 Strategic communication
6 Building resilience
  Mainstreaming disaster risk reduction into recovery operations
  Understanding risk
  Building resilience through integrated river basin management
7 Community engagement in recovery and reconstruction

Foreword

Towards the end of 2010 and in the early months of 2011, the State of Queensland suffered from devastating floods. Resulting from a series of heavy rains, followed by the Category 5 Cyclone Yasi, the floods caused dozens of casualties, the evacuation of over 70 towns, and in excess of US$15 billion in damages and losses. The events washed away roads and railways, destroyed crops and brought Queensland's $20 billion coal export industry to a near halt, making the flooding one of Australia's most expensive natural disasters.

The Federal Government and Queensland's State authorities responded swiftly with the help of Australia's emergency management system as well as the Australian Defence Force, effectively coordinating the evacuation and providing relief and recovery support. In February 2011, the Queensland Reconstruction Authority was established to oversee and coordinate the recovery and reconstruction efforts. Major-General Michael Slater, appointed Chair of the Queensland Reconstruction Authority, has been leading the efforts to rebuild communities across the state affected by the floods and cyclone. Only four months after the floods, Queensland is well on the path to recovery. With the long-term goal of rebuilding a safer state, Queensland now faces the long-term issue of building resilience through risk reduction and integrated watershed management.
Australia and the World Bank are close partners in the effort to aid developing countries on their path to sustainable growth, with Australia playing a significant role in the Bank's initiatives in the fields of disaster risk management and climate change adaptation, particularly through its dedicated support of the Global Facility for Disaster Reduction and Recovery (GFDRR). Following World Bank President Robert Zoellick's offer of assistance, the Government of Australia accepted the World Bank's support for the reconstruction.

The undertaking, based on the concept of a knowledge exchange in which the World Bank contributes global good practice and at the same time learns from Australia's experiences in recovery, reconstruction and risk mitigation, took place in three phases. During the first phase, in March 2011, a team of World Bank experts visited Queensland's affected areas, focusing on the overall reconstruction approaches and strategies. In the second phase, in May 2011, Bank staff supported training courses for local government authorities on developing local reconstruction plans. In mid-June, a Memorandum of Understanding is to be signed between the Queensland Reconstruction Authority and the World Bank, which will further encourage knowledge-exchange initiatives, particularly in disaster risk management.

This report, prepared by the World Bank in collaboration with the Queensland Reconstruction Authority, documents the achievements and progress made in Queensland and includes examples of global practice that the World Bank has collected in the field of reconstruction and risk reduction from across the world.

James Adams, World Bank Regional Vice President
Major-General Michael Slater, Queensland Reconstruction Authority Chair

South East Queensland flood disaster. Photo courtesy of The Toowoomba Chronicle.
Executive summary

The Queensland flooding of early 2011 was Australia's largest natural disaster in recent memory. With a ballpark estimate of US$15.9 billion[1] in total damages and economic losses (and a public reconstruction cost of approximately US$7.2 billion), it is also one of the major international disasters of the last decade. By comparison, the combined impact of the Indian Ocean Tsunami was US$11.5 billion, and the Queensland total is similar to major disasters in developed countries, such as the 1994 Los Angeles Earthquake (US$24 billion) or the 2002 flooding of the Elbe River in Germany (US$14 billion). As of March 2011, the government and private sector had mobilized an estimated US$11.8 billion (including insurance payments), representing 75 percent of the estimated damage and losses, which is already above the 45 percent average of disaster coverage in developed economies.

The Queensland reconstruction effort meets international good practice standards in many ways. Building on a wealth of experience, the Australian authorities responded rapidly to save lives, provide emergency funding to individuals and communities, and set up the institutions charged with managing the recovery and reconstruction. Four months after the floods, Queensland is well on the path to recovery: local reconstruction plans have been prepared, most coal mines are back in operation, and many families have received financial assistance to cope with the impact of the floods.

The government made three key choices in the immediate aftermath of the disaster that enabled speedy recovery. First, the army and volunteers assisted those in need immediately and subsequently managed the clean-up operation. Second, the government established a dedicated institution, the Queensland Reconstruction Authority (QldRA), and charged it with the overall coordination of the relief and recovery effort. Third, financial support was provided immediately to the beneficiaries.
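The headline coverage percentage quoted in the executive summary can be checked with a few lines of arithmetic. The sketch below uses the sectoral damage-and-loss estimates quoted later in this report (in AUD$ billion) together with the report's AUD$1 = US$1 exchange-rate convention; note that the report quotes both a US$15.9 billion headline figure and an AUD$15.7 billion table total, and the table figures are used here:

```python
# Sectoral damage-and-loss estimates (AUD$ bn), per the report's table.
sectors = {
    "Mining": 2.5,
    "Agriculture": 1.6,
    "Housing": 4.0,
    "Infrastructure": 5.0,
    "Commercial properties": 2.0,
    "Tourism": 0.6,
}
total = sum(sectors.values())  # 15.7 AUD$ bn

# Funds mobilized by government and the private sector as of March 2011,
# including insurance payments (AUD$1 = US$1 per the report's rate note).
mobilized = 11.8

print(f"Total damage and losses: AUD$ {total:.1f} bn")
print(f"Share covered: {mobilized / total:.0%}")
```

The ratio comes out at roughly 75 percent, matching the figure in the text and well above the 45 percent average coverage the report cites for developed economies.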
The financial packages strike the right balance between size, terms and eligibility criteria. The State of Queensland is focused on building back better in order to reduce the impact of future disasters and create resilient communities. The QldRA has declared building resilience an overarching goal and seeks to integrate disaster risk reduction into the main lines of reconstruction. A framework for measuring results in this area is provided by Australia's National Strategy for Disaster Resilience. Flood risk management poses particular challenges in the areas of land use planning and river basin management that will need to be addressed. In the months to come, it will be important for the QldRA to connect the dots and prepare for the transition to full-fledged reconstruction. Building on a comprehensive damage and loss assessment and a strong monitoring and evaluation system, there will be demand for strategic planning and an assessment of sectoral and geographic gaps.

[1] Figures based on a compilation of damage and losses data from various sources including IBISWorld, PricewaterhouseCoopers, and the Prime Minister's Office. Exchange rate: AUD$1 = US$1 (February 2011).

Introduction

Floods in Queensland

The 2010/2011 floods occurred after a prolonged period of drought and in quick succession, compounded intermittently by three major storm events and cyclones. Queensland, also called the Sunshine State, traditionally experiences heavy rainfall in the months from December to March. In 2010, however, already by the end of November, much of eastern Australia, including Brisbane, saw crops soaking and water catchments fill, making them more likely to overflow in case of heavy rains; 2010 ended up being, in fact, the third wettest year on record, according to the Australian Bureau of Meteorology.
This is in stark contrast to the previous years, when Queensland suffered severe droughts. This season, a particularly strong La Niña weather pattern appeared, leading to warmer waters near the northeastern coast of Australia and making Queensland particularly susceptible to tropical storms. On 25 December, Cyclone Tasha made landfall south of Cairns, bringing heavy rainfall. This was preceded by three heavy rain events, all taking place within three weeks in December. In addition, on 3 February, Category 5 Cyclone Yasi crossed the Queensland coast at Mission Beach and Tully, south of Cairns, becoming the worst cyclone to hit Australia since 1918, with 290 km/h winds destroying homes and businesses, along with infrastructure and agricultural crops, in the already suffering area.

Floods are not unknown to Queenslanders. The Commonwealth, States and Councils can rely on decades of experience, institutional memory and well-established financial and physical delivery mechanisms for effective and efficient disaster response. The La Niña years of 1916, 1917, 1950, 1954 through 1956, and 1973 through 1975 were accompanied by some of the worst and most widespread flooding of the twentieth century. In January 1974, a cyclone brought heavy rainfall to Brisbane and many parts of southeastern Queensland and northern New South Wales, with a third of Brisbane's city centre and 17 suburbs severely flooded, leaving 14 people dead, over 300 injured, 56 homes washed away and 1,600 submerged. Since the catastrophic floods of 1974, there have been major flood events in various parts of the State. In April 2010, over one million square kilometers of Queensland and New South Wales were flooded, during which some 2,000 homes were inundated. However, the 2010/2011 floods have been historically unique in their causes and wide-ranging impact.

Australia's climate, punctuated by cycles of drought and intense rain events, makes the country susceptible to flooding. Cyclones take place seasonally between October and May.
La Niña, a weather pattern that affects the Pacific Ocean region, is known as the wet counterpart of the El Niño weather pattern, which is generally associated with drier conditions. During La Niña, the cold water that pools near the coast of South America surges across the Pacific, and there is a greater build-up of warmer water along the eastern coast of Australia. As a result, there is a greater contrast in sea surface temperatures between the east and west Pacific, and a greater contrast in air pressure. The easterly trade winds become stronger due to this contrast, dragging warm, moist air along the Australian coastline, creating larger rain clouds and producing more rainfall.

Table 1. Queensland Floods Timeline

September-November: Large parts of eastern Australia, including Queensland, experience the wettest spring season, soaking crops and filling water catchments.
December 3: First series of heavy rain hits central Queensland, causing much damage in the town of Emerald.
December: Central Queensland hit again with torrential rains, causing localized flooding and strengthening floodwaters.
December: Strong rains recorded in Queensland for the third time, causing flooding.
December 24: Many river catchments are soaked.
December 25: Tropical Cyclone Tasha makes landfall near Gordonvale, south of Cairns, bringing heavy rainfall.
December 28: After six more days of constant rain, disaster is declared for the towns of Chinchilla, Theodore and Dalby in southern Queensland, prompting mass evacuation.
December 30: Bundaberg, north of Brisbane, experiences heavy flooding.
January 1: The airport at Rockhampton is cut off by a deluge from soaked inland areas.
January 3-4: Rockhampton is cut off by rising floodwaters. Other cities brace for record flooding, which is expected to last for weeks.
January 5: Violent storms overnight cause flash flooding in Brisbane.
January 12: Brisbane flood levels reach their peak, causing widespread flooding, with dozens of suburbs and thousands of properties inundated.
January: Floods menace Victoria State. Residents of Kerang evacuate.
February: Category 5 Cyclone Yasi hits south of Cairns.

Source: Telegraph.co.uk, 4 January 2011; Australian Geographic, February 2011.

Impact

All 73 of the 73 Local Government Areas (LGAs), or Councils, in Queensland declared a State of Emergency due to the flooding events. Queensland experienced both slow-onset and deep inundation events as well as flash floods in various low-lying parts and valleys. The floods inflicted significant damages and losses on private properties and businesses, and on a vast amount of public infrastructure. Ballpark estimates indicate that cumulative damages and losses from the floods and cyclones in the 2010/2011 period reached at least AUD$15.7 billion, resulting in a consequent lowering of Queensland growth estimates from 3 percent to 1.25 percent. These damages include:

- damages to more than 9,100 km of the state road network and approximately 4,700 km of the rail network;
- power disruptions to approximately 480,000 homes and businesses;
- 97,000 insurance claims in respect of damages to private assets, including privately owned residential properties;
- damages or disruptions to 54 coal mines, 11 ports, 139 national parks and 411 schools;
- estimated losses of $875 million to primary industries, primarily the sugar, fruit and vegetable sub-sectors.

Table 2 below provides the initial sectoral damage and loss estimates compiled from various sources in March 2011.
Table 2. Estimate of Damage and Losses, Queensland Floods & Cyclone Yasi (in AUD$ bn)

Sector                 Estimated damage and losses   Data source
Mining                 2.5                           PricewaterhouseCoopers
Agriculture            1.6                           IBISWorld (market research company)
Housing                4.0                           IBISWorld (based on construction value of damaged homes)
Infrastructure         5.0                           Prime Minister's Office
Commercial properties  2.0                           IBISWorld
Tourism                0.6                           IBISWorld
Total                  15.7

Response and early recovery

Australia's disaster response has benefited tremendously from prior disaster management arrangements and preparedness. The response has been largely indigenous, public-sector led and private-sector supported, without any significant reliance on the international community. The Commonwealth Government of Australia has indicated that it will invest AUD$5.6 billion in rebuilding flood-affected regions, including around AUD$3.9 billion to be allocated as the Australian Government's share (75 percent) of Natural Disaster Relief and Recovery Arrangements (NDRRA) expenditures. Likewise, the Queensland Government has pledged about AUD$2.1 billion in funding for recovery and reconstruction. However, final recovery and reconstruction costs, particularly including premiums for building back better and longer-term disaster risk reduction, are likely to be even higher.

As of mid-March 2011, the following had been achieved:

Human and social protection: More than 630,000 Australian Government Disaster Recovery Payments have been made, totaling $725m, of which 60 percent were flood-related and the rest related to the recent cyclones; more than 57,000 Disaster Income Recovery Subsidies have been granted, totaling $60m, of which 92 percent were flood-related; more than 60,000 claims have been made under NDRRA provisions; and 409 of the 411 affected schools have been made operational from their original locations.
Economic: Of the 54 affected coal mines, 49 have returned to full or partial production; more than 1,600 grant payments worth more than $8m have been made to primary industries/producers; and more than 2,100 grant payments to small businesses worth nearly $11m have been processed.

Environment: Across Queensland, 83 sewage schemes were affected; as at 6 April 2011, 76 of those were operating within approved regulatory standards. 103 water supply schemes were affected, and all are now operating within approved regulatory standards. Of the 389 stream flow gauges across the state, 36 were structurally affected by the extreme weather events; preliminary or temporary repairs had been performed on 34 of the 36 gauge sites as at 1 April 2011. And 175 of the 279 national parks closed due to the extreme weather events have re-opened.

Private recovery: Power was restored to 99 percent of the 480,000 affected homes and businesses; $310m has been paid in insurance claims, while another $2.5 billion in estimated claims is yet to be paid, for which public-sector facilitation has begun; and a quick GIS-based housing damage database and interactive map has been developed and made accessible to the public. The map is the most-frequented page on the QldRA website, with more than 22,000 unique visits out of the total of 23,500 visits made to the site in less than a month since its inception. It marks the rapid commencement of a participatory and inclusive process for damage verification and grant-eligibility determination, which can be considered a good practice example with potential for international replication.

Roads and transport: More than 40 percent of the 9,170 km of affected state roads have been re-operationalized; 3,807 km of the affected 4,748 km of rail network have been restored to service; and 10 of the 11 affected ports have been restored to full operations.
Community engagement and communications: A community assistance and outreach campaign, the Join Forces Program, was launched in February 2011 to foster, facilitate and catalyze partnerships and synergy-building across community organizations, clubs, local governments, businesses and individuals. Up to 54 community organizations have signed up for the program, with 5 successful matches or purpose-specific partnerships. A two-way communications system with the communities was established by the QldRA, which has also received early community feedback: by March 2011, it had received a total of 258 calls and letters. In addition, the QldRA website, launched in mid-February 2011, recorded over 23,000 unique visits in less than a month of its existence.

Repair works on a section of the Warrego highway. The State of Queensland.

Box 1. Good Practice: Post-Flooding and Cyclone Cleanup

The town of Grantham in the Lockyer Valley Regional Council of Queensland was one of the communities hit hardest by the recent flash flood. On 10 January 2011, this town of around 300 people was swept by an inland tsunami more than 6 meters deep in some areas. Following an extensive search and rescue operation, the community, with the support of army and police personnel and volunteers, started a cleanup operation on 18 January 2011. The Local Council and the community coordinated the cleanup of the debris, and within three weeks the flood-impacted areas had been cleared of debris and collapsed buildings. In other towns inundated by the January 2011 flood, similar cleanup operations were carried out, with more than 15,000 volunteers working alongside emergency response personnel. Post-disaster cleanup is among the standard early recovery schemes in Australia's disaster management framework.
Following Cyclone Yasi, which struck the northern part of Queensland, the Commonwealth and State governments established a $20 million Rural Resilience Fund. The Operation Cleanup Employment component of this initiative provides an opportunity for the unemployed local farm and tourism workforce in the cyclone-affected areas to be employed in the cleanup operation. This scheme enables affected residents to remain in their communities and take an active role in the rebuilding effort, where they can also receive training and other assistance to improve their job prospects.

Provision of early recovery assistance to disaster-impacted communities to clean up the debris left by a catastrophic event has been a common approach in recent post-disaster recovery practice around the world. Experience from the Indian Ocean Tsunami and the Haiti Earthquake to the Pakistan Floods suggests that such a program is well suited to community contexts where rural livelihoods or labor-intensive employment were impacted by the disaster. In the context of the Queensland reconstruction, which covers a geographically vast area, such a scheme could be expanded to include longer-term reconstruction efforts such as rebuilding the community infrastructure important for a community's long-term social and economic recovery.

Box 2. Good Practice: Cairns Local Disaster Coordination Centre

Opened in December 2010, this dedicated centre was funded through the Australian Government's Regional and Local Community Infrastructure Program, the Queensland Government and Cairns Regional Council. The building is designed to withstand Category 5 cyclones and has independent emergency power and water sources. The centre enables the synchronized delivery of information and relief to the community during a crisis. It is connected directly to Cairns Regional Council's data systems at the administration building via an optic fibre link.
The Council's team also uses the centre for disaster management training, education and planning activities, including with external community groups such as the SES, Red Cross, schools, and volunteer groups. Source: Cairns Regional Council (http://www.cairns.qld.gov.au/about-council/media-and-public-notices/media-releases/releases/cairns-local-disaster-coordination-centre)

Aerial Story Bridge post flood. Photo courtesy Brisbane Marketing.

PART A: Achievements in Queensland

1 Institutional, Implementation and Coordination Arrangements

1.1 Institutional framework for disaster response and preparedness

Australia now benefits from a robust and efficient disaster preparedness regime. Australia's disaster risk management system recognizes that not all types of natural hazards and hazard intensities throughout the vast expanse of the continent can be mitigated. This serves as the raison d'être for a robust multi-sectoral disaster response mechanism that addresses the multi-dimensional recovery needs of public-sector infrastructure and services as well as privately owned assets. Over the years, the country has built a comprehensive ex-ante disaster response strategy and preparedness regime into its normal public service delivery systems. This includes the necessary legislation, institutionalization, financial instruments and coordination mechanisms for effective disaster response.

Multi-tier institutional arrangements, legislation and formal coordination forums for disaster management are in place. The State of Queensland's Disaster Management Act 2003 (dated 21 February 2011) provides the legislative basis for the State's elaborate and well-functioning disaster management arrangements. The Act requires the establishment of disaster management groups and committees at the State level, as well as within local government in disaster-prone districts.
For disaster recovery, each disaster management group is served by Community Recovery Committees (CRCs) and Plans at the State, District and Local levels. The CRCs are tasked with specific functions and deliverables, such as inter-agency coordination; development and review of annual recovery plans; monitoring the multi-agency MOU for the provision of community recovery services in disaster events; monitoring preparedness levels and activities; support and advice on disaster recovery operations; and liaison with Emergency Management Queensland (EMQ) and downstream recovery organizations and committees. The fusion and interplay of State and lower-tier departments helps ensure central-level monitoring, coordination and standard-setting in advice and recovery, as well as decentralized decision making and the fostering of strong regional and local capacities for the implementation of disaster management and recovery plans.

There is a clear distribution and delineation of disaster management functions across departments, facilitating coordinated reaction processes. The responsibilities of the various departments for disaster management are clearly and carefully determined and delineated, including those of EMQ and the Departments of Communities; Health; Primary Industries and Fisheries; Tourism, Regional Development and Industry; Public Works (for damage assessment of impacted built infrastructure); Education and the Arts; and Housing, Families, Community Services and Indigenous Affairs. Likewise, there are formal roles and specific responsibilities for other institutions, such as peak bodies[2] and civil society groups, including the Australian Red Cross, Local Government Association of Queensland (LGAQ), Lifeline Community Care, St Vincent de Paul, the Salvation Army and Centrelink.

The Australian disaster management regime is further reinforced by innovative disaster management mechanisms.
Some examples include the cross-cutting Value for Money (VFM) regime and prior contracting arrangements for rapid reconstruction in the transport and roads sector, both of which have potential for international replication.

[2] Peak bodies are associations of industries or groups, generally established for the purposes of developing standards and processes or to act on behalf of their members in lobbying the Government.

Box 3. Good Practice: Pre-Disaster Contracting Arrangements in the Transport Sector

Traditionally, the vast expanse of Queensland and its sprawling network of roads have posed a daunting challenge in post-disaster recovery and reconstruction. In the present reconstruction program, the transport sector is the most significant in terms of investment. To deal with this recurring challenge, the transport department has adopted a system of pre-disaster contracting (on a retainership basis) under which work contracts are pre-commissioned and pre-negotiated with major contractors, enabling them to mobilize reconstruction resources and start rebuilding and re-operationalising the road network rapidly. This is a good lesson for other countries prone to recurrent disasters, particularly floods.

However, the scale and impact of the recent spate of disasters in Queensland is such that it is overwhelming existing capacities, both in the contracting industry and in the supply of basic construction inputs, including plant, machinery, tools and materials. The Department of Transport and Main Roads has entered into dialogue with industry suppliers to facilitate the procurement of these materials from external sources, such as other states in Australia.
This is also a good practice with parallels in contemporary post-disaster reconstruction globally, such as after the 2005 Pakistan Earthquake and in the tsunami reconstruction in Sri Lanka, where innovative supply-side solutions were developed both to deal with shortfalls and to control material price spikes, including the establishment of building material supply hubs in Pakistan and mechanisms for bulk community procurement of housing reconstruction materials in Sri Lanka.

1.2 The Queensland Reconstruction Authority

The Australian Government has proved quick and flexible in its institutional and financial response to the floods. In the aftermath of the quick succession of floods and cyclones, the Government rapidly established additional institutional and financing arrangements for efficient and effective early recovery. These include: a Premier's Disaster Relief Appeal that had attracted more than A$257 million as of May 2011; the immediate availability of early recovery financing through at least three pre-existing financial assistance windows for grant payments to flood-affected individuals; and the establishment of the Queensland Reconstruction Authority (QldRA) by an Act of Parliament, effective February, for two years. The QldRA's mission is to reconnect, rebuild and improve Queensland's communities and economy. The Authority has been vested with the power and authority to take charge of the reconstruction process and facilitate effective interaction between the concerned line departments at the State and local levels, in coordination with the concerned local councils.
Its key strategic objectives are to: maintain the self-confidence of Queensland; build a resilient Queensland and support resilient Queenslanders; and enhance preparedness and disaster mitigation.
Africa's largest and fastest growing social networking website, Ladies And Gentlemen book (popularly known as LAGbook), today announced its partnership with IQ4News.com to provide its 600,000 registered users with in-depth analysis and news on Africa. November 12, 2012 (Newswire.com) - IQ4News is an online platform that offers in-depth analysis and news on Africa with a collaborative feel, combining reports from professional journalists with citizen journalists, journalism students, bloggers and field experts. IQ4News was established as a media blog in 2007 and re-branded in 2011 as a news and analysis website dedicated to African issues. The website grew considerably within three months, receiving traffic from over 161 countries, 39 of which are African countries. Today, LAGbook introduced its news column powered by IQ4News. Registered LAGbook users are provided with up-to-the-minute news on Africa from the IQ4News website. News items from IQ4News are displayed in a marquee and typed progressively across the screen on every user-profile page on LAGbook. This move is primarily to promote the idea that "businesses in Nigeria, and indeed Africa, need to be more supportive of each other, particularly when it comes to start-ups", says Dr. Yemisi Akinbobola, Editor-in-Chief of IQ4News. "Since the LAGbook audio interview with IQ4News some weeks ago, I have been following news from IQ4News. Personally, I have found a lot of it interesting and useful, especially the piece on Obama's paternal grandmother. I think I should share this new experience with hundreds of thousands of people, and that starts with our 600,000 registered users on LAGbook. They should enjoy the analysis and news that IQ4News offers", says Chika Nwaogu, co-founder of LAGbook. According to Chidi Nwaogu, Co-CEO at LAGbook, "The Nigerian entrepreneurial ecosystem doesn't foster synergy, and this has greatly weakened the pace of growth among start-ups from Africa. We need to partner, and not compete.
We need to form an extensive TEAM; Together Everyone Achieves More, and that, I believe, will build a self-sustaining and flourishing ecosystem for African entrepreneurs." "Over the past months, we have partnered with an appreciable number of startups across the globe, including the UK's MusicVibe and South Africa's Umuntu Media. Now, we want to focus on the African tech space and forge partnerships with more African-based startups that will lead to growth and development. This is our major goal for 2013, and we're starting this mission with IQ4News", says Josh Osemwegie, Chief Financial Officer at LAGbook. "Our future partnerships with African startups will mostly focus on integrating them as third-party applications on LAGbook. At LAGbook, we believe social discovery is not limited to meeting new people, but extends to discovering new things and events happening around you, and this is where IQ4News comes into the BIG picture", says Nosa Ilegbinehi, Chief Publicity Officer at LAGbook. In just two years, LAGbook has become Africa's largest and fastest growing social networking website, with 600,000 registered users and 6,000 daily new sign-ups.
M & J Chemicals C.C has been providing service excellence for over twenty years. It is an affirmative action employer with a BEE rating of Level 1 and offers equal employment opportunities to all employees. Since its establishment in 1994, M & J Chemicals C.C has been committed to the people, providing quality work and delivering on specialised requirements. The company has a proven track record, accomplished through sacrifice and sustained development, and its motivated staff are highly organized and well resourced, with a sound work ethic. M & J Chemicals C.C is registered in respect of: U.I.F., the Workmen's Compensation Fund, VAT, the Bargaining Council, Public Liability Insurance cover and the SESSA Skills Development Foundation. It also features on the National Database. M & J Chemicals C.C believes in goodwill and generosity, healing and helping, caring and sharing. As part of its social responsibility, the company distributes food hampers to indigent families of all religious and racial groups in the community. M & J Chemicals C.C manufactures domestic and industrial chemicals and household detergents; cleans educational, domestic, industrial, commercial and hospitality institutions; and is expert in gardening services and landscaping, renovations and carpet cleaning, and pest control and hygiene services. The company has set a benchmark for a high standard of cleaning services. It also manufactures its own high-standard chemicals and is a supplier of SABS-approved chemicals. M & J Chemicals C.C aims to consistently meet and maintain, within a short space of time, the requirements of national standards for cleanliness at your institution. The outcome-based cleanliness standards have been developed using current best practice within the NHS.
The outcome-based standards offer:
- A tourist and customer focus
- Clarity for management, staff and service providers
- An effective aid to management
- Consistency with infection control standards and requirements
- Clear outcome statements, which can be used as benchmarks and output indicators

M & J Chemicals C.C recruits staff members very carefully to meet and sustain its high standard of service delivery, and as such any task performed by its employees is of a very high standard. All personnel employed by M & J Chemicals must undergo a medical fitness examination as well as security clearance. Regular capacity-building, team-building and training workshops are conducted to ensure that M & J Chemicals C.C fields a winning team. The company prioritises supervision as a key mechanism for effectively implementing its vision and mission: it employs supervisors on every site and managers to oversee those supervisors. These individuals are skilled in supervision, management and communication. The supervisor on site is in contact with the manager of the institution as well as the manager of M & J Chemicals C.C. Problem solving is a key element enabling the supervisor to motivate staff and carry out the function effectively and efficiently. The company appoints members to liaise with authorised personnel of the institution before any contract work is executed; this is done at all sites on which the company operates. M & J Chemicals C.C has professional first aid personnel with vast experience in paramedics, and first aid kits are available on all of its vehicles. The company intends to have its own General Medical Practitioner in the future to provide specialised medical care for its workforce.
M & J Chemicals C.C is affiliated with the following organizations:
- Black Economic Empowerment Cleaning Association (BEECA)
- SESSA Skills training
- Department of Labour – also registered for the Unemployment Insurance Fund and Workmen's Compensation
- Bargaining Council – also registered for the Provident Fund and Family Medical Crisis Plan

Tasks are executed by uniformed, specialized workers provided with a detailed job description and daily roster. Schedules and checklists are implemented. The work is strictly supervised and monitored regularly, and the performance of each worker is tracked and audited regularly. Quality assurance is enforced. Interactive meetings with progress reports and follow-up exercises are held to ensure collective effort and a sense of synergy. Interaction between the school community and us is encouraged to enhance communication, co-operation and outstanding service delivery.
Up to now, the story of how liberals and conservatives engage in political discourse and problem solving has been marked by polarization, with questionable outcomes. Each side pushes its own agenda, thinking it has the answers to all the problems. The question is whether this story has served us well. We face mounting problems, including the debt crisis, unemployment, environmental issues, energy dependency, food and water shortages, terrorism, war, and many others, which some have said will eventually lead to a perfect storm, if it hasn't happened already. It becomes ever more doubtful that we can solve these problems within the context of our polarized, divisive mindset, which has led to much of the present paralysis in Washington and has perhaps contributed to the problems themselves.

The Energy Debate

Take, for example, the issue of energy. Both sides agree that we need to become energy independent. However, each side pushes for its own agenda while disagreeing on an effective, long-range strategy to get there. In the energy debate, the main dividing line has been between the liberal concern for the environment and the conservative concern for the economy. Liberals want us to be weaned off our dependency on petroleum, nuclear, coal, and other energy sources that pollute the environment and are limited in supply, in favor of cleaner, more sustainable energy sources that support the environment, like solar, wind, and geothermal. They would like to see such a shift to cleaner energy sources happen within a relatively short time frame. They push for an increase in government funding and programs to support this endeavor, and are willing to tolerate higher energy prices and perhaps even some rationing as temporary measures during the transition. On the other hand, conservatives want to exploit the conventional energy sources that already exist in our country.
They feel that we can make the transition to energy independence much more quickly if we invest in conventional energy, since it is already proven and economical. They argue that this approach will minimize the risk of higher energy prices and rationing. Conservatives believe that some investment in alternative energy might be a good idea, but that it will take years to develop into a reliable and economical source capable of meeting our growing energy needs. In fact, they argue, alternative energy will most likely never completely replace conventional sources, but merely supplement them. Each side pushes forth its own agenda without giving much thought to the other side's merits or concerns. And if both sides did decide to work together, history suggests the likely outcome would be a middle-of-the-road compromise that lacked the necessary punch while pleasing no one, except perhaps the special interests. In fact, the debate on energy independence has been going on in one form or another since at least the energy crisis of 1973. The ongoing debate reflects the story, previously mentioned, of each side pushing for its own agenda without giving much, if any, consideration to that of the other side. And in the end, with few exceptions, our problems never really get solved, but linger or reincarnate as "new" problems. It's apparent that this story has not served us well and is no match for the mounting issues we face. What if we could change our story, the way in which we solve problems, from one of polarization and divisiveness to one of collaboration and synergy? Is that even possible? I believe it is. I would like to introduce the idea of the Symphonic Solution. A Symphonic Solution is a meeting of the minds between liberals and conservatives on a particular issue. However, it does not limit its choices to the middle range of the political spectrum, as middle-of-the-road compromises do, but is open to ideas across the entire board.
A musical symphony, or almost any melody for that matter, would be pretty blasé if its notes were limited to the middle range of the scale. A good symphony requires notes carefully chosen along the entire musical scale. A Symphonic Solution could be characterized as an effective plan that takes into account the main values and concerns of both sides. When both sides work together constructively toward solutions and feel that their voices have been heard and accounted for, they will likely come up with more options, including innovative ones. Both sides will also be more likely to support the plan in the long run. The old band-aid measures and watered-down compromises that passed for solutions will give way to fresh, creative approaches resulting from the synergy of both sides working together. This doesn't mean there will be total agreement on every point; there will still be disagreements, which is natural in our diverse society. So, returning to our example of the energy issue, how might the two sides work toward a Symphonic Solution for energy independence? Such a solution would need to address the main values and concerns of both sides. It would have to take into account the liberal values and concerns for clean, renewable energy with a low environmental impact, such as reduced pollution. It would also have to account for the conservative values and concerns for reliable energy sources that are both cost effective and affordable. Coming up with a Symphonic Solution for energy independence, or for that matter any issue, will require innovative ideas, creativity, good will, and a spirit of working together. We have a choice. We can continue with the current story of political polarization, paralysis, and ineffective solutions. Or we can change our story to one of collaboration, synergy, and effective solutions, which have a much better chance of meeting the challenges of our mounting problems.
The choice will take courage and require a shift in our thinking about how we work together to solve problems.

Mark Goodkin is publisher of Conversational Shift, a website devoted to helping people make the shift from polarized political discourse to civil discourse and synergistic solutions. He also publishes San Diego Coast Life, an online guide for locals and visitors to San Diego. He has been a website designer and content developer since 1998 and a graphic artist since 1994. In the late 1980s, he worked as a publishing assistant for the Committee for a Free Afghanistan in Washington, D.C., and as assistant to the Senior Advisor to High Frontier, Inc., in Arlington, VA. Mark Goodkin holds bachelor's degrees in Communication Design from the School of Visual Arts in Saint Paul, MN, and in Political Science from the University of California, San Diego.
Filed under: AL, Alabama Politics, baseball, Birmingham, Birmingham Alabama, Birmingham, AL, Cities, Civic Engagement, Civil Rights, Commentary, Cooper River Bridge Run, Dr. Martin Luther King, News Media (as source & subject), people, The Birmingham News, The Rambles, urban issues

The nuances of the 2007 mayoral campaign in Birmingham, Alabama have begun to surface, despite efforts to the contrary, though it speaks to a much broader issue. Councilwoman Valerie Abbott presented a resolution at the most recent city council meeting that has been adopted by "…167 cities and towns in 40 states, representing more than 16.9 million people" according to the National League of Cities' official website, specifically the page describing their Partnership for Working Towards Inclusive Communities. Rather than repost the resolution here for those who are not familiar with it, I'll link to Kathy's post of the document. I found it slightly disturbing that the resolution did not pass, especially considering this council's somewhat public record of supporting initiatives that would lead the city towards what many consider its rightful place among the South's elite. I'd read, before hopping on a plane for an extremely long flight back from Seattle, that the resolution would be reintroduced with opportunities to tweak as necessary, so I figured it was only a matter of time before the council approved words that better reflected their agenda, though I was starting to doubt just what that agenda is. Then, as I'm getting ready to run the Cooper River Bridge Run Saturday morning (I'm runner #26726 – results are normally up late Saturday if you're interested), I decide to hop on my friend's laptop and see what the latest is from town. Imagine my surprise when I see that a resolution by Frank Matthews apologizing for slavery will be introduced at Tuesday's meeting by Councilor Hoyt. (FYI – comments are closed for the linked News article post.)
At first glance, it would make some sense, except when you realize that the city of Birmingham did not exist until after the end of the Civil War. Slavery could be pointed to as a reason for the levels of racial discrimination that still at times seem to permeate the city, even as members of the same race nitpick about what it truly means to be "black" or "white" as we progress into the 21st century. I guess it bothers me plenty considering that this will probably be finished at 1 a.m. and I have to be awake at 5:20 a.m. (though you probably won't be reading this until 7:30 a.m., about the time every year when I ask myself why in the world I am getting ready to run over this bridge AGAIN). Read on though…

Filed under: AL, Architecture, B'ham Wiki, Birmingham, Birmingham Alabama, Birmingham, AL, BJCC, Cities, Commentary, Legion Field, The Rambles, urban issues

The numerous online forums here in town are always interesting to click through, especially when there is a hot-button issue dominating the boards. In recent weeks, the issue in question has been the expansion of the BJCC and the development of an entertainment district for the convention center district by Performa Entertainment Real Estate, Inc. The development of the entertainment district is a no-brainer; locating such an attraction near Malfunction Junction provides visibility near one of the busiest intersections for interstate traffic in the Southeastern United States. The area's redevelopment will most likely generate additional revenue for the city and the county by encouraging passersby to stop and take a moment to find out just what's happening around town. The more interesting debate among those taking place in the forums is the one about the expansion of the BJCC. Everyone points to all of these reasons why the expansion must include a 70,000-seat "dome" and not a 40,000-seat "arena".
In a city that is quick to point out how quickly something is out of date and needs to be replaced, maybe the real issue is whether or not the current situation is really broken. Maybe it's also a case of deciding whether political grandstanding in an election year will keep us from exploring the possibility of doing what is truly best for the city and the region. Legion Field is currently sitting by, minding her own business, waiting for people to decide a fate that is based not on whether she can be salvaged and reborn, leading a renaissance of an area, but rather on whether her heir apparent can support the one game of the year that locals most identify with her. I will not say I've taken an extensive poll, but I have heard from several people that they come to the Magic City Classic not so much for the game as for the tailgating and the socializing. Neither of these would be addressed with a new facility downtown, though general parking for those who do attend the game for… well, the game would be, relieving many from worrying about being blocked into someone's personal lot by others who want to stay for the entire experience. Now, I am one of the biggest proponents of seeing this city's downtown grow and thrive; however, this is an opportunity to take advantage of an existing asset and help bring back a great community. Yes, I said great community. My last merchants association meeting took place at Rickwood Field this week. I decided to drive down 5th Avenue North to get to the ballpark, taking me past Legion Field's front door and through its neighborhood. I'll save my piece on Rickwood's needs for a later date… The area surrounding Legion Field still has signs of its commercial past, one that would have been its entertainment district of the day, and one that would not be that hard to return to the area. The field still hosts international events, including the upcoming Futbol Internacional opening event next month.
The field serves a purpose and provides a backdrop that few other cities can offer in the age of the enclosed multipurpose facility. In an ideal situation, the expanded BJCC could be hosting a concert and a convention while the Magic City Classic or an event like it takes place on the West Side and something else is happening at both Railroad Reservation Park and Fair Park. In other words, the urban synergy that so many people talk about would actually be taking place. The area around Legion Field is worthy of some additional investment, though it will not come until someone decides that renovating one of the most revered structures for football in the South can be done. This is a case where it is not an all-or-nothing proposition; you can have as much as you desire. I've had people tell me why you have to have an expanded BJCC or a renovated Legion Field. Why can't you have both? While naming rights are not necessarily the purist's way of dealing with issues, selling the naming rights for Legion Field and using the funds generated from that "sale" to upgrade the facility would allow football games, and the real reason many come out, the idea of "sitting out in the elements" to enjoy the game or event, to rule the day. The need for expansion is definite; however, let's not forget what makes Birmingham, Alabama unique. Share 'em if you got 'em. One more later on today…

Filed under: AL, Architecture, Birmingham, Birmingham Alabama, Birmingham, AL, Cities, News Media (as source & subject), preservation, The Birmingham News, urban issues

This morning the city's Design Review Committee narrowly approved demolition of the former home of The Birmingham News at 2200 4th Avenue North. The building will be removed to provide dedicated assigned parking for employees of Birmingham, Alabama's daily newspaper. The parking lot will be fenced with brick piers, with trees and shrubs edging the property.
A compactor will be refaced with brick to match the design of the fence and the paper's new home, located directly across the street. During the presentation to the committee, Hanson informed members that he had been able to acquire original drawings of the building as well as drawings showing later modifications to the 1917 structure. He also agreed to document the building's exterior and interior using digital and traditional photography as well as video, submitting these items to the city's archives. Hanson told those in attendance that efforts had been made to transfer the entryway of the original structure to the new building, completed last year. That plan was scrapped after research revealed that what was believed to be stone was in fact terra cotta. Hanson told the committee that it will take up to eleven months to complete the changes to the site. The project must still return to the committee for approval of its landscaping plan and to provide visuals of the view with your back against the existing structure, looking across the street. If you want to get some pictures of the old building for nostalgia, now's the time to do it. Enjoy the day!

Existing site plan
Site plan after demolition
Inspired by IAM Cycling, a group of Synergy Norway Team Members assembled their own cycling team in pursuit of better health and to share the Synergy message. It all started with a simple challenge that Presidential Executive Mads Østvang issued to his team: complete the Trysilrittet, a well-known cycling race in Norway. Led in part by Emerald Executive Stefan Patrik Kristoffersen, the team developed a passion for cycling over the summer. From June to August, the team competed in three races donning IAM Cycling jerseys donated by the official Synergy-sponsored IAM Cycling professional team. A total of six Team Members completed the Trysilrittet through Trysil's majestic mountains. Stefan Patrik said one of the stand-out individuals on the team is 70-year-old Olav Hindseth, who completed the 75 km Trysilrittet much faster than many of his younger competitors. The team looks forward to repeating the race next summer, involving as many Synergy Team Members as possible to raise money for Synergy's non-profit partner 5 Star Legacy.

Nitric oxide has been shown to be important in the following cellular activities:
• Helping memory and behavior by transmitting information between nerve cells in the brain
• Assisting the immune system in fighting off bacteria and defending against tumors
• Regulating blood pressure by dilating arteries
• Reducing inflammation
• Improving sleep quality
• Heightening the senses (e.g., smell)
• Increasing endurance and strength
• Assisting in gastric motility

There have been over 60,000 studies on nitric oxide in the last 20 years, and in 1998 the Nobel Prize for Medicine was given to the three scientists who discovered nitric oxide's role as a signaling molecule in the cardiovascular system.

Nitric oxide and heart disease

Nitric oxide has gotten the most attention for its cardiovascular benefits. Alfred Nobel, the founder of the Nobel Prize, was prescribed nitroglycerin over 100 years ago by his doctor to help with his heart problems.
He was skeptical, knowing nitroglycerin was used in dynamite, but the chemical helped with his heart condition. Little did he know that nitroglycerin acts by releasing nitric oxide, which relaxes narrowed blood vessels, increasing oxygen and blood flow. The interior surface (endothelium) of your arteries produces nitric oxide. When plaque builds up in your arteries, a condition called atherosclerosis, your capacity to produce nitric oxide is reduced, which is why physicians prescribe nitroglycerin for heart and stroke patients.

You unlock your potential energy through Elite Health. Elite Health is the pinnacle of wellness, where age does not dictate ability. It's about having the energy to propel yourself, to see more, do more and be more, no matter the stage of life you're in. It is stepping into every new decade with new ambition. It's the freedom to live without limitation, and it's through a healthy mind and body that this freedom is realised. Achieving Elite Health begins with a focus on what makes every person you meet unique and functional. You see, within everybody there is something particularly incredible going on that you may not realise: your body is happily hosting 100 trillion microorganisms. In other words, your body is like a bustling city, home to thriving communities of microscopic, hard-working microbes. In fact, only 10% of your body contains your DNA; the other 90% of you is bacteria, fungi and microflora. This is your microbiome, featuring over 10,000 identified species. Your body benefits in the most amazing, even surprising ways, and it all starts inside your gut; after all, the gut is the path taken by everything you ever eat or drink, and it isn't just about digesting food. A well-balanced microbiome can lead to a variety of overall health benefits, including improved weight management, emotions, memory and immunity. The gut microbiome is even connected to the health of neighbouring organs, including your heart.
A recent study conducted using four sets of twin sisters has shown it has the power to transform.
Government health statistics show that American medicine frequently causes more harm than good. The number of people experiencing in-hospital adverse drug reactions (ADRs) to prescribed medicine is 2.2 million. In 1995, Dr. Richard Besser of the CDC put the number of unnecessary antibiotics prescribed annually for viral infections at 20 million; by 2003, Dr. Besser was referring to tens of millions of unnecessary antibiotics.
Every four years, international champions gather to participate in the ultimate test of athleticism and precision known to mankind: the Olympic and Paralympic Games. Like the Olympic Games, the Paralympic Games showcase the tenacity and dedication of the world’s elite Paralympic athletes. Even though these athletes endure physical impairments, they push forward to become great in their sports despite the obstacles they face. Sonja Tobiassen is no exception. She will be competing in the 2016 Paralympic Games in Rio on September 10 and 13, representing Norway in air rifle shooting. Working as a nurse in Norway, Sonja noticed her muscles beginning to weaken in her mid-20s. At the young age of 25, she was diagnosed with muscular dystrophy, an incurable disease that interferes with the production of proteins needed to form healthy muscle. Now, 20 years later, Sonja is in a wheelchair and living every day with love and gratitude. “The little things really mean something to me,” Sonja said. “Things like friends and family and stopping to enjoy the flowers. I live a good life, an independent life, with my daughter who is 17. I’m grateful every day for a house and a car and a sport that I love.” While in a rehabilitation center in 2010, Sonja participated in an air rifle shooting activity. One man in the room was floored by her natural aim and consistency.
Curious to see if Sonja was just having a lucky day, he took her to a shooting simulator, where she again proved her natural talent. Soon enough, the Norges Skytterforbund (Norwegian Shooting Association) learned of her gift and, to Sonja’s surprise, a couple of years later she found herself competing in the 2012 London Paralympics. To prepare for her second Paralympic Games, Sonja is putting a great deal of focus into achieving her goals. Essential to her training and everyday life is being able to take special care of her health.
One major example of the synergy of bioactive foods and extracts is their role as antioxidants and the related remediation of cardiovascular disease. There is compelling evidence that oxidative stress is implicated in the physiology of several major cardiovascular diseases, including heart failure, and involves increased free radical formation and reduced antioxidant defences. Studies indicate bioactive foods reduce the incidence of these conditions, suggesting a potential cardioprotective role for antioxidant nutrients. Bioactive Food as Dietary Interventions for Cardiovascular Disease investigates the role of foods, herbs and novel extracts in moderating the pathology leading to cardiovascular disease. It reviews existing literature, and presents new hypotheses and conclusions on the effects of different bioactive components of the diet.
• Addresses the most positive results from dietary interventions using bioactive foods to impact cardiovascular disease
• Documents foods that can affect metabolic syndrome and other related conditions
• A convenient, efficient and effective source that allows readers to identify potential uses of compounds, or to flag those compounds whose use may be of little or no health benefit
• Associated information can be used to understand other diseases that share common etiological pathways
Another study found that 600 mg of garlic powder a day could push total cholesterol down by some 10%. Other research has corroborated these findings, reporting that garlic can lower both total and LDL cholesterol while raising HDL.
Title: Bioactive Food as Dietary Interventions for Cardiovascular Disease
Author: Ronald Ross Watson, Victor R. Preedy
Publisher: Academic Press, 2012-10-22
New Star finds a buyer
Fund management group Henderson is set to snap up ailing rival New Star in a £115 million shares-and-cash deal. Under the terms of the agreement, Henderson will stump up £22 million for New Star’s ordinary shares – equating to 2p a share – plus £73 million for its preference shares, and pay off £20 million of the group’s debt. Current chief executive John Duffield will also leave the business once the deal goes through. In a letter to investors, Duffield said the move would bring two main benefits for investors. “First, it will stabilise New Star’s financial situation, which we know has been a concern for investors and advisers over recent months,” he said. “Secondly, and more importantly for the long term, Henderson is fully committed to supporting our desire to restore fund performance to the levels achieved prior to 2007.” Shareholders will, however, be left with little to show for their investments. They are in line for £5.4 million, but the majority of the cash will go to creditors. New Star had previously warned that private investors were likely to end up out of pocket. The takeover, which still requires approval from New Star and Henderson shareholders, would give fund management arm Henderson Global Investors £15 billion of funds under management and make it the fifth biggest group in the industry. Henderson is launching a placing of up to 72.3 million shares to help fund the deal, which it says will be “substantially earnings enhancing” by 2010. Henderson is also reported to be taking on around half of New Star’s 310 staff, including its leading fund managers. New Star, which was formed in 2001, has been badly knocked by the turmoil in the financial markets, with funds under management slumping from around £23 billion last year to around £10 billion. It was forced to suspend dealing in several of its unit trusts last year. Last month, it agreed a £240 million debt-for-equity swap with its lenders after its share price plummeted.
Darius McDermott, managing director of Chelsea Financial Services, says the deal should give unit trust holders a "sense of renewed hope". “Many shareholders are certain to feel aggrieved as they are effectively getting 2p in cash for each New Star share," he adds. "[But] for unit trust holders this should be greeted as positive news. Henderson has been in the retail fund management game for a long time and its considerable experience should provide a safe pair of investment hands for the incoming assets. There could also be synergy between the two asset managers in a number of areas, namely bonds and multi-manager." Henderson's chief executive, Andrew Formica, has also hinted that he wants to keep the top talent at New Star and address funds that have been serially underperforming. A unit trust is a collective investment vehicle (known in the US as a “mutual” or “pooled” fund), similar to an Oeic or investment trust in that it manages financial securities on behalf of small investors who, by investing, pool their resources to gain the combined benefits of diversification and economies of scale. Investors buy “units” in the fund that have a proportional exposure to all the assets in the fund, and these units are bought from and sold back to the fund manager. The price of units is determined by the value of the assets in the fund and will rise or fall in line with the value of those assets. Like Oeics (but unlike investment trusts), unit trusts are “open-ended” funds, meaning that the size of each fund can vary according to supply of and demand for the units from investors. Unit trusts have two prices: the higher “offer” price you pay to invest, and the lower “bid” price you receive when you sell. The difference between the two prices is commonly known as the bid/offer spread.
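The bid/offer arithmetic described above can be sketched in a few lines. This is a minimal illustration only: the prices and unit count below are hypothetical round numbers, not taken from any real fund.

```python
# Hypothetical illustration of a unit trust's bid/offer spread.
# The "offer" price is what an investor pays to buy units;
# the "bid" price is what they receive when selling back to the manager.

def spread_percentage(bid: float, offer: float) -> float:
    """Return the bid/offer spread as a percentage of the offer price."""
    return (offer - bid) / offer * 100

offer_price = 1.05  # price per unit to buy (illustrative)
bid_price = 1.00    # price per unit when selling (illustrative)

units = 1000
cost = units * offer_price      # what buying 1,000 units costs
proceeds = units * bid_price    # what selling them straight back returns

print(f"Cost to buy:  {cost:.2f}")
print(f"Sale proceeds: {proceeds:.2f}")
print(f"Spread: {spread_percentage(bid_price, offer_price):.2f}%")
```

With these illustrative figures the spread works out to roughly 4.8% of the offer price, which is the immediate cost an investor bears for buying and selling on the same day.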
Title: Times Queer
Publisher: Synergy Press
Publication date: 2008
Binding: Soft cover
Book condition: Used. This book is in good condition. Clean copy with a light amount of wear. 100% guaranteed.
Bookseller inventory code: ABE_book_usedgood_0975858114
Summary: A graphic, dark coming-of-age story set in New York's infamous Times Square during the '50s and '60s. Introduced to sexual feelings at an early age, protagonist Richard Kozlovsky continues on a path shared by many children who have been touched in a sexual way by an adult: a path of frequent masturbation, exhibitionism, and other precocious sexual behavior. Ricky grows up in spite of his hard life in a Catholic school, teasing by his classmates, and trying to survive on the streets of Manhattan with sexual predators at every turn. Frequenting the Times Square movie theaters as a teen, Ricky finds a way to supplement his meager existence and later meets the women who will introduce him to the world of women, intimacy, and love. In between he questions his sexuality: is he a faggot? Is he a whore? Where does he fit in?
Payment methods: The bookseller accepts the following payment methods.
AbeBooks bookseller since: May 7, 2014
We guarantee the condition of every book as it's described on the AbeBooks web sites. If you're dissatisfied with your purchase (incorrect book/not as described/damaged) or if the order hasn't arrived, you're eligible for a refund within 30 days of the estimated delivery date. If you've changed your mind about a book that you've ordered, please use the Ask bookseller a question link to contact us and we'll respond within 2 business days.
CLOT‘s recent residency in Los Angeles – via the Juice L.A. pop-up – has inadvertently formed a segue into a new collaboration with none other than Big Boi. After a chance encounter with Big Boi’s creative team, the two parties collaborated on a new line, influenced equally by Coachella (Southern California’s most iconic music festival) and Big Boi (one of the South’s most iconic musicians). We recently sat down with CLOT co-founder Edison Chen on the eve of Big Boi’s OutKast reunion performance to discuss how the project gained its momentum. Enjoy the interview below, and keep an eye out for the collaboration line to drop soon.
How did Big Boi and CLOT start talking about doing something together?
I have been a fan of OutKast since the mid-’90s; I’ve admired both Big Boi and Andre 3000 for almost 20 years now. Everything they do is amazing to me, and musically, they are geniuses in my mind.
Briefly describe the process of working with Big Boi on this line.
Big Boi was trying to reach out and do more cultural things [outside of] music, and I had the opportunity to meet his creative consultant crew. We had mentioned something about Coachella. I’ve gone to Coachella every year for about five years now, and I thought that the synergy on this was impeccable.
What was the inspiration behind the line?
The inspiration behind the line was one of OutKast’s most popular songs: “Bombs Over Baghdad.”
Can you please describe some of the design concepts on the graphics?
Since the Coachella venue is [called] “Indio,” we were trying to put the two together. We decided to use some wordplay on the word “Boi,” and changed it to express BOMBS OVER INDIO. We used some old military patchwork, and often used a sport jersey platform to make the silk-inspired pieces.
These are representative of our CLOT input. We didn’t want to put CLOT logos everywhere, so this was a good link to our heritage and our culture, and meshing the two was easy. KB LEE creative-directed the project with us at CLOT, and it’s been a very, very enjoyable project. This is just the beginning of many CLOT x Big Boi things, so stay posted.
Internet betting operator Mecca Bingo Mobile has announced that it has added Evolution Gaming's live dealer Roulette to its gaming library. Mecca Bingo is owned by the Rank Group, and currently has 96 bingo clubs within Britain, and also has an immensely popular online betting site, which features bingo intermixed with scratch and casino games. Commenting on the addition Rank Group's head of digital gaming Alex Franklin said, “RNG games have always been popular with our Bingo players, with up to 80 percent of our casino games’ revenue generated by Roulette. Now, with Evolution’s world-class Live Roulette, we are able to offer our players a fun, fresh and exciting alternative to RNG Roulette.” Evolution account manager Sebastian Johannisson added, “Online and Mobile Bingo is a very social and community-orientated game with players chatting with each other while they play. Live Casino also offers this element with the ability to interact with the dealer through live chat and to hear the dealer’s verbal responses and game commentary. Added to that, Roulette is an endlessly exciting numbers-based game just like Bingo, so there’s a great deal of synergy between the two games.”
Check out our new graphic cheat sheet: Okay, so it's a little hard to make out all the details on the blog, but you can download a free pdf copy at http://www.billiondollargraphics.com/BizGraphicsOnDemand.html. Just scroll down to the end of the page and click on the image of the graphic cheat sheet and the pdf will download to your desktop. We'll also be giving out printed copies during our session at The Presentation Summit in October, so stop by our booth to say "hi!" The graphic cheat sheet offers you suggestions for graphic types that best convey various concepts in simple, complex, and quantitative ways. For example, if you want to show Synergy, scroll down the far left column to the row labeled Synergy. Under the Simple column (for information not too intricate), you will see suggestions like a building block graphic, chain graphic, or pyramid graphic. For more robust concepts of Synergy, you can look under the Complex column and find suggestions like a funnel graphic, vee diagram, collage, or a stacked graphic. For numeric concepts of Synergy, look under the Quantitative column to find a pie chart and dashboard graphic. Whenever you're stuck with how to visually communicate your ideas, break out this cheat sheet! I created the sheet to give you new ideas for graphics and force you to consider different ways to show your information. Maybe for Hierarchy, you always used a pyramid graphic. However, in reviewing the sheet, you notice that a stair graphic or a temple graphic might work better and offer another way to visually communicate your information and keep your presentations and marketing materials fresh. Hope this sheet helps you find better and more creative ways to communicate your ideas. As always, you can email me at info@BillionDollarGraphics.com with any suggestions for articles or other helpful resources or graphic questions.
Cobbled streets lined with artists’ studios, shops, great bars and restaurants – LX Factory exemplifies local creative culture at its best
By Marianna Wahlsten and Sofia Andrews
Lisbon has become one of the new creative hubs in the EU. Even before the Brexit vote, tech entrepreneurs had been lured by financial incentives and a liberal attitude. The economic situation has been tough (many architects, for example, have been forced to move abroad or take up alternative creative work), but there is a flourishing start-up scene underpinned by positive synergy. The situation also shows how difficulties and economic scarcity can generate great individual creativity, something missing in the larger capital cities, such as London, where the state and big corporations hold the power. LX Factory is a great symbol of this vibrant cultural scene in Lisbon, where spontaneity and authenticity are allowed to exist. About ten years ago, artists and designers took up studios around the industrial zone of the Alcântara district, located by the 25 de Abril suspension bridge. The old textile-turned-printing-factory complex, built in the 1840s, was meant to be demolished to pave the way for new development. In light of the economic crisis, plans were pushed back and an investment group proposed a temporary occupation of the complex, giving birth to LX Factory (an ode to Warhol’s Factory in 1960s NYC). Ever since, people in the arts, fashion, media, design and photography, along with small companies, have occupied the abandoned, derelict buildings and their vast industrial spaces.
A triumph of high-tech engineering on the East Sussex coastline, designed by Marks Barfield
By Harri Närhi
In many ways, going up the i360 in Brighton, the world’s tallest moving observation tower, is akin to flying.
The whole experience is heavily mediated by the idea of air travel: visitors are greeted onto their ‘flight’ by British Airways staff acting as if to take you on a faraway journey, and indeed the view itself is something achieved only from an impressively high vantage point. i360 architects David Marks and Julia Barfield, creators of the world-renowned London Eye, have managed to deliver an experience so awe-inspiring it leaves you craving more than your allotted 20-minute slot. The i360 may have taken a total of 12 years to conceive, but there’s a sense that it’s here to stay, and will ascend to the front stage of successful 21st-century British landmarks.
A new take on a favourite summer ritual – the most spectacular public sauna by Avanto Architects
During high summer, from mid-June to mid-July, the sun hardly sets in the Finnish capital. Many locals escape to their waterside cottages to enjoy the long days. But now there is one more reason to stay in the city. The latest architectural hotspot is Löyly, a public sauna with a bar and restaurant designed by Avanto Architects on the southern waterfront – literally the hottest meeting place this summer.
The new extension of Tate Modern, designed by Herzog & de Meuron, is now open to the public. Within twenty years the museum became so popular that more space was needed. The extension tower, which cost £260m, fits there beautifully. “We did not set out to build an iconic building,” Tate director Nicholas Serota stated before the opening of the new extension. But of course it will be. It is designed by Jacques Herzog and Pierre de Meuron, the Swiss architects who had already transformed the derelict power station into Tate Modern. It has since become the world’s most visited museum for modern and contemporary art, making London a global cultural centre.
Not just one sculptural wonder by Danish architect Bjarke Ingels – this summer the Serpentine Gallery brings four experimental designs to London’s Kensington Gardens.
The show, which opened to the public over the weekend, is the last one overseen by long-term director Dame Julia Peyton-Jones, encapsulating her drive and vision. The Summer Pavilion concept is an opportunity for the chosen architect to experiment with forms in one of the most prestigious parks in central London. This year Bjarke Ingels – described by Serpentine director Hans Ulrich Obrist as ‘the first architect disconnected completely from angst’ – has made the most of the Pavilion commission, playing with scale and materials. Despite the short timeframe for completing such a project, you can see the clarity and enthusiasm in Ingels’ approach. With his firm BIG he spent exactly six months (to the minute, by midnight of the launch) completing the Pavilion. On the preview morning the last set of fibreglass blocks were still waiting to be lifted to the very top.
Powerful ideas showcased at the 15th Venice Architecture Biennale, which launched last weekend, for rediscovering the desire for architecture.
Alejandro Aravena, the Chilean curator of this year’s Venice Architecture Biennale, says of the programme that it’s not “a caricature, or a biennale for the poor” – although, on many levels, the Biennale resonates with the values of the 1960s Arte Povera movement, challenging current economic systems while also promoting a return to simpler materials and architectural concepts. Aravena urges architects to consider more carefully what they build, not making something “just because you can”. The exhibits provide a wide range of responses to Aravena’s overarching theme, Reporting from the Front. On the grounds of the Giardini there are thirty national pavilions, and the exhibitions, which Aravena describes as “conversations” on the battles and challenges we face in improving our urban environments, continue inside the Arsenale shipyards and across the city.
The Central Pavilion hosts a group exhibition, including projects by Renzo Piano, Kéré Architects, Richard Rogers and Kazuyo Sejima, although it’s the lesser-known architects that propose the more engaging and ground-breaking ideas.
360º VIEW: Installation by Gabinete de Arquitectura
Welcome to Destination Eventing! Host of the October 2015 and July 2012 USEA ICP ASSESSMENT
Stay updated on Destination Farm’s Clinic and Jumper Show schedule via
A big “Thank you!” to our wonderful sponsors: Rose Wood Hill Farm, Voltaire Design, C4 Belts and Synergy Equine Bodywork!
Working Student Position coming available for Fall 2016! Learn more about our working student program on the working students page and contact one of us to apply for the position!
We have a couple of stalls available! We have a rare opening of a couple of stalls – come board, train, and compete with us this season!
Destination Farm is a top-notch facility located in Dickerson, MD that specializes in Eventing. The farm combines the expertise of Suzannah Cornue and Natalie Hollis to offer quality boarding, lessons, training, sales, and working student programs in the heart of Area II eventing. Their combined experience, which includes managing farms, training event riders and horses at all levels, involvement in the USEA ICP program, competing horses to the top level of eventing, and Equine/Animal Science degrees, makes Destination Farm an exceptional destination for riders and horses.
Husband and wife team Dr. Robert C. Robinson III, MD, and Dr. Karla L. Robinson, MD, are trying to change the world – and they’re starting right in their own communities. Recognizing the need to empower the community to become more active participants in their healthcare, the Robinsons established Urban Housecall Magazine, an online health and wellness magazine with the health information most pressing for men, women and children in urban communities. The launch of their nationally syndicated Urban Housecall Radio Show soon followed. The couple broadcasts live weekly from Charlotte, their hometown. We caught up with the happily married power couple, who met in undergrad at Xavier University and have three beautiful children together, to find out what advice they would prescribe for newlyweds looking to build a lasting marriage.
ESSENCE.com: How did your love story begin?
ROBERT ROBINSON: We met in our philosophy class at Xavier, and interestingly enough, I think we both had very opposing philosophical views, and in some respect, that’s what attracted us to one another. I think we recognized the strengths the other had. We had a chance meeting. I say chance, but it wasn’t really chance. It was God. We had a chance meeting at church, and the rest is history.
ESSENCE.com: What are your biggest strengths as a couple?
KARLA ROBINSON: Our foundation in the church and in the word of God…that’s always been the foundation of our entire friendship, our relationship and ultimately our marriage. Even now, we’re very involved and active in ministry in addition to all of the other things we do, and that’s the common thread that keeps us going.
ROBERT: In the areas where I’m weak, my wife is strong, and in the areas where she’s weak, I’m strong. It’s the perfect marriage on so many levels—not just as it relates to our union as husband and wife, but in how it lends itself to us working together as parents, as business partners and as clinicians.
ESSENCE.com: Has working as business partners made your marriage stronger?
ROBERT: I definitely think it’s helped to strengthen our relationship. Thirteen years later, I could never imagine not being with another physician, because having similar experiences helps us to understand what we each go through on a day-to-day basis. When my wife comes home and says, ‘I’ve had a really challenging day because…’, I can completely understand.
KARLA: At the very least, we have very interesting dinnertime conversations.
ESSENCE.com: Do you feel like the dinnertime conversations are one of your secrets to happiness?
KARLA: Absolutely. I don’t know if this just happened over time because we’ve been married for so long, but we really think the same things. We respond the same way to a lot of different things. The secret to our relationship is that we have an unspoken set of communication because we’re always thinking the same thing. We don’t have to say what we’re thinking, and I think that helps.
ESSENCE.com: Sort of like a synergy, right?
KARLA: Absolutely. I think our daughter put it best. She secretly believes that we are the same person. She’ll tell me a story when she gets home from school and I’ll have a certain response, and then my husband will come in a little while later and she’ll tell him the story and he’ll have the same response. We find it funny.
ROBERT: The same line of questions and everything!
ESSENCE.com: What advice would you offer to newlyweds on how to have a healthy, happy marriage?
KARLA: The best thing you can do is really get to know your partner. This is something I think is very underestimated. Oftentimes, people assume they know one another because they’ve dated, but really get to know their intimate goals and desires. What does your future together really look like? Really having those tough conversations before you get married is key, because you want to make sure that the goals, dreams and aspirations line up.
We are both very entrepreneurship-minded and very service-minded. We both love giving back to the community, and I think it would be very challenging to have a partner who didn’t have that same drive and passion. I don’t think they would be able to handle the kind of demands we face, given how much of ourselves we give back. Really getting to know your partner is the best prescription for success.
ROBERT: I would just add, especially for the guys: be open to, and open in, communication. I think so often as men we are not the greatest communicators. We put on our blank faces and pretend we’re listening, but we’re really not. You really have to listen to and effectively communicate with your partner to have a good appreciation of who they are, what drives them and what their ambitions and goals really are. When getting to know your spouse, be open to and respectful of the idea that what you know about them prior to the marriage will change over time, so be willing to accept that and know that you’re in it for the long haul, no matter what changes.
KARLA: I like that. Growth definitely does happen, and that’s something you have to know in the beginning. It’s a good thing. You have to grow together and be careful that you don’t grow apart.
ESSENCE.com: What are some of the bigger obstacles you’ve had to face together as husband and wife?
KARLA: For us, it’s really been finding balance. We give so much of ourselves to the community and to our service as healthcare providers. Finding a balance has been a challenge. We have three kids, so it’s something that we have struggled with at times, and it’s something that we have to be intentional about. If you’re not careful, you may find yourself stretched way too thin, and your partner can end up being the last thing on the totem pole, feeling neglected. We struggle with this mainly because of our lifestyle and our hearts—we want to give so much, so we, in turn, are pulled upon quite a bit.
ROBERT: It’s a constant work in progress and we are continually working on it.
ESSENCE.com: What does a date night look like between two successful doctors?
KARLA: A date night would probably be a movie and dinner if we can get away from the kids for a bit. Our dinnertime conversation involves things that most people probably wouldn’t want to hear over their meals.
ROBERT: Date night has not really changed a great deal from when we first started dating. We were on a different kind of budget, but it was still dinner and a movie. The movie might have been a bootleg DVD that we had acquired somewhere, and it may not have been the highest-class dinner, but it’s still something we enjoy together. Again, I think that really just speaks to our union, our bond and how in harmony we are.
Learn more about the amazing work the Robinsons do over at UrbanHousecallMagazine.com.
Bay State Milling to Acquire T.J. Harkins Inc. and Subsidiaries
Date Posted: August 27, 2012
Quincy, MA – Bay State Milling Company announced Aug. 27 that it is acquiring T.J. Harkins Basic Commodity Brokers, Inc. and subsidiaries, a supplier of natural, nutritious and flavorful ingredients for grain-based foods. T.J. Harkins is a leading supplier of sesame and edible seeds, sweet spices, ancient grains, specialty grain flours and grain blends headquartered in Bolingbrook, IL, with a sales office in San Francisco, CA and a distribution facility in Miami, FL. T.J. Harkins and Bay State Milling are both focused on providing nutritious and affordable ingredient solutions for grain-based foods. The combination not only provides a secure supply chain and comprehensive product offering to customers of both organizations, but also a synergy of technical expertise across a wide range of product applications and ingredients, including the high-quality and distinctive flours and grain-based blends that Bay State Milling has provided for more than a century. “The Bay State acquisition of T.J. Harkins will bring an unrivalled platform of grains, edible seeds, specialty flours and organic ingredients to the market. We feel this union provides the perfect legacy in the continuation of our Company’s values and vision,” says Dan Collins, President and CEO of T.J. Harkins. Collins will become Executive Vice President of Business Development at Bay State Milling and will play a leading role in growing the company through the development of ingredient solutions for key customer accounts. The combination of T.J. Harkins’ organic subsidiary, H.P. Schmid/Organic Planet, and Bay State Milling’s position as a leading miller of organic wheat, durum and spelt will create a comprehensive portfolio of organic ingredients including sesame and edible seeds, cinnamon, ancient grains, grain blends and a wide range of organic flours.
“Bay State Milling’s strategic intent is to be the preferred partner and provider of specialty solutions for grain-based foods. Adding the T.J. Harkins product line, sourcing capabilities and talent to our growing list of offerings will accelerate our realization of that goal. In addition, our shared values associated with customer intimacy, employee engagement and nutrition provide a solid foundation for innovation and growth,” says Pete Levangie, President and COO of Bay State Milling Company. The acquisition will be completed on August 31, 2012. For more information, call 617-328-4400.
YOUNGSTOWN — Acquiring Oakhill Renaissance Place was a good investment for Mahoning County, according to court testimony today by County Administrator George Tablack. "There are synergy benefits to economically disadvantaged clients," Tablack said. Clients of the county's Department of Job and Family Services now have the city health department, an AIDS clinic and a Head Start program, and eventually will have other agencies, available to them under one roof at Oakhill, Tablack said. The building, which is near downtown, also offers substantial space for county record storage, he added. The $5.3 million the county borrowed in the bond market to renovate permanent quarters for JFS at Oakhill will be recovered through government grants JFS will receive, he said. JFS moved into temporary quarters at Oakhill on Monday. Tablack was testifying on the third day of a nonjury trial of a taxpayers' lawsuit filed by Ohio Valley Mall Co., a division of The Cafaro Co. OVM seeks to rescind the county's purchase of Oakhill. OVM is JFS' former landlord at Garland Plaza. Oakhill is the former Forum Health Southside Medical Center. For the complete story, see Thursday's Vindicator and Vindy.com.
If you want to be an effective leader, it is important for you to know and understand anything you can about how God has gifted you for the responsibility. If you know your strengths and weaknesses, and the strengths and weaknesses of others on your team, you can all work together to complement each other’s giftedness to create a synergy in leadership that is greater than what each of you could do individually. Having said that, I want to address two prior questions. First, who benefits from the gifts God has given you? Second, what is the foundational motivation for exercising your gifts? The answers to these two questions will shape our understanding of the gift passages in Romans 12:6-8, 1 Corinthians 12:7-11, 28-30, and Ephesians 4:11. When we start talking about God’s gifting of leaders, we can easily focus on ourselves and what God is doing in and through us. We may not readily admit it, but we are often thinking, “Look at me! See what I can do or see what gifts God has given me!” This is a dangerous trap that leads to pride and self-confidence. In all three of our passages, the gifts clearly belong to the Lord and are given by God for the benefit of others, not the leader. Look at Romans 12. The passage about gifts is in the context of presenting our lives as a living sacrifice so that we can be changed and transformed, a witness to God’s perfect will (v. 2). In addition, the gifts are described in terms of what they do for others. Each gift contributes to the building up of the body—none is complete in itself. When we exercise any of the gifts, we do so for the benefit of the others in the body. Our teaching benefits others, our prophecy is for the benefit of others, our service serves others, and so on. We may benefit from the exercise of the gift also, but the primary beneficiary is others in the body.
We can see a similar pattern in 1 Corinthians 12:7: “Now to each one the manifestation of the Spirit is given for the common good.” Clearly the beneficiaries are others in the body, not the one to whom a gift is given. The previous verses (4-6) are very clear that all the gifts come from the Holy Spirit, from the same God, and are distributed by the Spirit of God. They are not our possession in the sense that we own and control them. Paul is confronting issues of pride in the church at Corinth, where some are saying, “My gift is better than yours.” Again, Paul uses the body image to illustrate how the gifts must work together, each doing their own part. If any gift is not functioning in the body, it is like a body missing an arm or a leg. “But God has put the body together, giving greater honor to the parts that lacked it, so that there should be no division in the body, but that its parts should have equal concern for each other. If one part suffers, every part suffers with it; if one part is honored, every part rejoices with it” (12:24-26). Finally, in Ephesians 4, Paul again establishes the beneficiaries of the gifts given to leaders. In this passage he is talking about leaders who are apostles, prophets, evangelists, pastors, and teachers (sometimes referred to as offices). These leaders are gifts to the body of Christ, for the purpose of equipping God’s people “for works of service, so that the body of Christ may be built up, until we all reach unity in the faith and in the knowledge of the Son of God and become mature, attaining to the whole measure of the fullness of Christ” (4:12-13). Clearly, the leaders are important, but their importance is measured by the results in the body, by the people being built up, by the people becoming linked together, by the people becoming mature in their relationship to Christ. There is a clear sense that either we all get there together or none get there!
There are no lone leaders at the top—they must have a body they have built up by exercising their gifts. Motivation for Exercising Our Gifts My second question about motivation is equally important. If the gifts are given to individuals in the body for the benefit of others in the body, what is the motivation behind the exercise of the gifts? To answer this question I will start with 1 Corinthians. We are all familiar with 1 Corinthians 13, the famous passage on love. It is often read at weddings to encourage the husband and wife on how to love each other, but its primary context is the exercise of the gifts of the Spirit for the benefit of the whole body of Christ. This chapter is strategically placed between the introduction of the gifts of the Spirit in chapter 12, and a more detailed description of the exercise of the gift of prophecy in chapter 14. Furthermore, the introduction of chapter 13 (v. 1-3) describes the use of gifts without love as being worthless. A person can prophesy, can be eloquent and powerful in the process, clearly speaking forth truth from God, but if this is done without a motivation of love, then the person is just making noise—unpleasant noise at that. If the beneficiary of the gift is another, then we need to love that other member of the body of Christ. Our motive is never to be self-serving or self-promoting, but an expression of God’s love working through us for the blessing and benefit of the recipient. For example, how do we feel when we pray for a person who is sick, injured, or crippled, and they are healed, while at the same time we may be sick, injured, or crippled and God has seemingly not answered our prayers for ourselves? Do we get mad at God? Or jealous of the healing the other person received? Can we continue to pray for others and allow the Spirit to work through us to touch others, even when we are still seeking God for answers to similar prayers for ourselves?
This can be a big challenge, but we must remember the gifts are not ours; they are ours to give away to others as a love-gift from the Father, Son, and Holy Spirit. What an opportunity to witness to God’s love and goodness. While the 1 Corinthians passages are the most direct in addressing the relationship of the gifts of the Spirit to the motivation of love, the passages in both Romans and Ephesians also discuss the gifts in the context of loving those in the body of Christ. Romans 12:9, for example, begins to address the importance of love in the relationships between believers. Likewise, the Ephesians 4 passage is sandwiched between the prayer that the believers would grow in their knowledge of the all-surpassing love of Christ (Eph 3:14-21) and the instructions for relationships in the body that exemplify love (Eph 4:14ff). In this passage itself, Paul states that believers are to be “speaking the truth in love” so that we will grow, together, into the body of Christ, which corresponds to the purpose of the gifts of the leaders to the church to build up the body (4:12-13). In conclusion, if we are to study the gifts of the Spirit, it is foundational to understand them in the proper perspective. First, the beneficiaries of the gifts are not those who exercise the gifts, but those who are the receivers of the ministry of the gifts. As we exercise any gift of God, we cannot take credit or pride in what we do because it is God who is working through us (1 Cor 12:6), to build up the body together and minister to one another. Likewise, our motivation must be one of love, expressing God’s love to those being ministered to. Or as Peterson says in the Message Bible, “If I speak with human eloquence and angelic ecstasy but don’t love, I’m nothing but the creaking of a rusty gate” (1 Cor 13:1). We want to be transmitters of love, not creaking gates, resounding gongs, or clanging cymbals (NIV).
In Part 2, I will share some thoughts on how our natural abilities and acquired skills are meant to work together with the spiritual gifts from God.
Several tier-1 firms are out with excellent comments on Whole Foods (NASDAQ:WFMI) after the co issued CQ4 results and announced the acquisition of one of its main competitors: - Morgan Stanley notes that Whole Foods has a strong track record of turning around underperforming natural foods retailers, and they see significant opportunity for both overhead cost savings and store-level productivity gains; they believe this merger will be a significant earnings driver for Whole Foods over the next several years. Using what they view as conservative cost savings and productivity gains, the firm's pro forma 2008 EPS rises to $1.88 from current levels of $1.74 and pro forma 2009 rises from $2.08 to $2.37. Applying a 35x P/E, they arrive at a $66 12-month price target (35x pro forma 2008 EPS of $1.88) and an $83 2-year price target (35x pro forma 2009 EPS of $2.37). MS believes investors who have been on the sidelines should ramp back into WFMI shares as they see a multi-year period of significant merger-related earnings growth. Rates WFMI shares Overweight. - Goldman Sachs says that based on 1Q results alone, they believe that shares would have traded lower. Not only did EPS fall short, but pre-opening expenses will increase as the year progresses. As such, 2007 estimates may need to come down further. Thus, they believe the Oats transaction is largely responsible for the shares' 5% after-market rise. Part of this reflects potential year 2 accretion and some may be short covering since an Oats deal was unexpected. That said, given how the quarter played out and that the next several will be choppy, short covering may not be as pronounced as usual and the shares could trade lower in the intermediate term. The firm therefore maintains their Neutral rating. In their view, however, the longer-term story is intact and they would take a hard look at the shares if they fall to the low-mid $40s.
- Some of the most interesting comments come from JP Morgan, saying they obviously hadn't counted on an acquisition of competitor Wild Oats by Whole Foods. Normally, they shy away from acquisitions of this sort. Nonetheless, the potential value of this transaction is evident as WFMI attempts to acquire its largest competitor and redefine itself as a large company with $12B of sales potential. They give the company the benefit of the doubt and reiterate their Overweight rating on the stock. Given the timing, this deal is likely as much defensive as it is offensive. The firm likens this to Walgreens purchasing Rite Aid, or Best Buy purchasing Circuit City - both lower-margin, less productive competitors. The truth here, though, is that given the addressable market potential ($400B+ food retail sector annual sales potential) and the onslaught of competition, particularly within organics, the FTC shouldn't be an issue here, in their view. It would likely be for those other sectors. So, Whole Foods is essentially getting the opportunity to purchase its largest competitor, which operates at 49% of the sales per square foot of Whole Foods. This is where the true synergy potential is, as OATS has been a significantly mismanaged company, in their view, with clear merchandising and cultural issues. Whole Foods, on the other hand, is known for its merchandising and its culture. Both companies are non-union. This compensates for the inherent risk with the deal (as WFMI currently has a full plate with an aggressive new store development target). Due diligence of the OATS deal by WFMI lacked substance, in JPM's view. John Mackey, CEO, apparently contacted Wild Oats within the last two months. Interim CEO Greg Mays replaced former CEO Perry Odak on 10/25/06 (Odak resigned on 10/19/06), while the old CFO (Bob Diamond) resigned on 12/20/06. John Mackey indicated that he contacted Mays after the CFO had resigned, which implies after 12/20/06.
They announced the acquisition on 2/21/07. Ron Burkle, who owns 18% of Wild Oats, has a history of selling his companies well, a la Dominick's (to Safeway) and Fred Meyer (to Kroger). Notablecalls: Expect to see some short covering today and over the next couple of days. In the very short term the rug was surely pulled out from under the bears. On the other hand, WFMI now has its plate full, and acquisitions almost always cause operational disruptions.
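The multiple-based price targets quoted in the Morgan Stanley note are simple to sanity-check. The sketch below just multiplies the P/E multiple by the pro forma EPS figures; the function name is ours for illustration, not part of the firm's model.

```python
# Quick check of the multiple-based price targets quoted above.
# Assumes the simple model: target = P/E multiple x EPS.

def price_target(pe_multiple: float, eps: float) -> float:
    """Price target as a P/E multiple applied to earnings per share."""
    return pe_multiple * eps

# Morgan Stanley's pro forma figures from the note
target_12mo = price_target(35, 1.88)  # pro forma 2008 EPS
target_2yr = price_target(35, 2.37)   # pro forma 2009 EPS

print(round(target_12mo))  # 66
print(round(target_2yr))   # 83
```

Both figures round to the $66 and $83 targets cited in the note.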
FOXBOROUGH, Mass. -- Rapid reaction from the Patriots' 31-21 victory over the Broncos at Gillette Stadium: What it means. The Patriots held on for the victory, their second in a row, in a performance that looked decisive through three quarters before they made it interesting in the fourth quarter. It was almost a replay of the last time they faced Peyton Manning with the Colts in 2010, when they needed a late turnover to seal the result. The Broncos put on a solid late charge, but with losses to the Texans and Patriots in two of the past three weeks, they don't look ready to join the teams considered the elite in the AFC. Brady improves to 9-4 vs. Manning. Quarterback Tom Brady turned in a stellar performance (he was 17-of-20 at the half) in operating an up-tempo no-huddle attack that dictated the action. Each team had just four possessions in the first half -- the Patriots led 17-7 at the break -- so it was the type of game where the margin for error was thin and mistakes were magnified. Brady didn’t make many, if any, in the first half as the game turned in the second quarter. He is now 9-4 against Manning, who wasn’t bad, but didn’t match Brady’s level. Brady finished 23-of-31 for 223 yards and one touchdown. Offense rings up record 35 first downs. The Patriots set a franchise record with 35 first downs in the game. The offensive attack was highlighted by an up-tempo, no-huddle approach that put stress on the Broncos’ defense and required impressive synergy from the New England offense. While the offense didn’t close out the game as it would have desired, it was still impressive. Nickel defense change sparks Patriots. After having some coverage struggles in their nickel defense, a personnel change helped the Patriots produce better results in the second quarter, when the game turned.
Rookie cornerback Alfonzo Dennard was inserted into the game in place of Sterling Moore on the Broncos’ fourth drive, and was credited with a pass breakup on third down to force a punt. The Patriots also went to a lighter defensive front on the drive, which produced the stop and ultimately helped the Patriots open a 10-point lead. In a low-possession game for each team, a stop like that is crucial. Defensive end Rob Ninkovich also produced two big turnovers (strip sack, forced fumble). Ridley and running game deliver again. After churning out 247 rushing yards against the Bills, the Patriots went over the 200-yard mark again. This marked the first time since 1978 that the Patriots have had back-to-back 200-yard games. Second-year running back Stevan Ridley led the way (career-high 151 yards), while the team’s other three backs -- Danny Woodhead, Brandon Bolden and Shane Vereen -- also contributed. Injury situations to monitor. Patriots left guard Logan Mankins (hip) and right tackle Sebastian Vollmer (knee) left the game in the fourth quarter and did not return. Also, veteran linebacker Tracy White (left foot) left the game in the third quarter and did not return. What’s next. The Patriots travel to Seattle to face the Seahawks. They are scheduled to leave on Friday for the Sunday game, and one of the themes figures to be how loud it can get in Seattle, which will put stress on the team’s offense. The Broncos play a Monday night game in San Diego in Week 6.
Welcome to the 2013 ZWR Definitive Holiday Gift Guide of Things You Should Really Want For Christmas. You can see previous years’ versions here (2010), here (2011) and here. I’m STILL WAITING on that aircraft carrier, BTW. Let's get to it. Coleco Table Hockey Game Please make sure it’s the Flyers v. the stupid Rangers and has the scoreboard that drops the puck for faceoffs and the goal lights when the puck goes into the net I mean this thing R-U-L-E-S. I mean, look at all of those glorious logos on the side! Whalers! Nordiques! JEU DE HOCKEY!! SEGA Game Gear Dude, yes, Mom, I know I have a Game Boy already but this is totally different because it’s IN COLOR and the graphics are wayyyyyy better and the games are cooler like Shinobi I promise you’re not wasting money here please and thank you this thing is the wave of the future. A Hacky Sack When I went out to spend that weekend at my cousin’s all of his friends were hanging around and kicking the hacky sack around and it was really cool and mellow and fun. They’re not even that expensive. And besides that time when I saw my other really dysfunctional cousin from the other side of the family pegging his dog with it, this is pretty much the perfect stocking stuffer. Snoop Dogg "Doggystyle" CD - OR - Dr. Dre "The Chronic" CD Okay, so this is kind of an optional either/or because I know this is a controversial pick, Mom, but seriously everyone in school is listening to these and reciting these lyrics and I really kind of feel left out and YES i know that they’re “Explicit Lyrics” or whatever but I’m totally capable of handling the content matter and besides when I turn 16 these are gonna really BUMP on my twin 10” JBLs that I’m asking for next year. You don’t want me being the only one not clued in to all the drama in the LBC, do you? I’m not exaggerating, guys, I can throw this thing 70 yards.
Donger’s mom got him one for his birthday and he can literally throw it out of the end zone on every single kickoff when we play two-hand touch, it’s basically revolutionized the game of football. Plus, it’s made out of foam so I’m way less likely to get hurt. No jamming fingers with this bad boy. Nope, just throwing glorious, perfect spirals that hit my receivers 80 yards downfield. And then one time when a big chunk of foam started to get ripped out of it, we doctored it up with a few strips of clear packaging tape and, not even kidding, after that it basically had magical powers and went like 100 mph in whatever direction you aimed it. A See-Through Phone for My Room (and my own line) I mean, I’m growing up. And kids my age talk on the phone… a lot. Why not just let me have my own line, so this way I don’t have to hang up when you have to call aunt Kim to gossip or make haircut appointments or whatever. Besides, this thing is totally see-through, so it’ll teach me about engineering and stuff because I’ll see how everything works. Every house needs phones DUH can you imagine not having one plugged into each room that’s crazy talk this is an investment in our family. California Games for Commodore 64 I’m not even going to detail this one too much because it’s all pretty self-explanatory. LOOK AT THAT PICTURE. And, did you even know that this game is basically a steal because it's really like 20 games in one because you can play frisbee and go surfing and other laid back chill west coast style activities like HACKY SACK (NOTE: GIFT GUIDE LIST SYNERGY!!). Also, if you kick the hacky sack high enough you can hit the seagull flying overhead and it’s HILARIOUS. Load, “*”, Yes, Please. 
The Holiday Spirit People always ask me, “ZWR, what are your favorite things and how can I support the bolg?” And I’m always like, “Girl I like you just the way you are and thank you for being such a good friend, BUT since you asked I also really strive to make some beer money off of my bolg so if you’re interested in helping out with that please go and buy a ZWR Tee Shirt or Click Here to Shop on Amazon.com.” HAPPY HOLIDAYS, EVERYONE!
Ruminative brooding is associated with increased vulnerability to major depression. Individuals who regularly ruminate will often try to reduce the frequency of their negative thoughts by actively suppressing them. We aim to identify the neural correlates underlying thought suppression in at-risk and depressed individuals. Three groups of women were studied: a major depressive disorder group, an at-risk group (having a first-degree relative with depression) and controls. Participants performed a mixed block-event fMRI paradigm involving thought suppression, free thought and motor control periods. Participants identified the re-emergence of “to-be-suppressed” thoughts (“popping” back into conscious awareness) with a button press. During thought suppression the control group showed the greatest activation of the dorsolateral prefrontal cortex, followed by the at-risk, then depressed group. During the re-emergence of intrusive thoughts compared to successful re-suppression of those thoughts, the control group showed the greatest activation of the anterior cingulate cortices, followed by the at-risk, then depressed group. At-risk participants displayed anomalies in the neural regulation of thought suppression resembling the dysregulation found in depressed individuals. The predictive value of these changes in the onset of depression remains to be determined. Practical Methodology of Cognitive Tasks Within a Navigational Assessment Institutions: Laurentian University. This paper describes an approach for measuring navigation accuracy relative to cognitive skills. The methodology behind the assessment will thus be clearly outlined in a step-by-step manner. Navigational skills are important when trying to find symbols within a speech-generating device (SGD) that has a dynamic screen and taxonomical organization.
The following skills have been found to impact children’s ability to find symbols when navigating within the levels of an SGD: sustained attention, categorization, cognitive flexibility, and fluid reasoning1,2. According to past studies, working memory was not correlated with navigation1,2. The materials needed for this method include a computerized tablet, an augmentative and alternative communication application, a booklet of symbols, and the Leiter International Performance Scale-Revised (Leiter-R)3. This method has been used in two previous studies. Robillard, Mayer-Crittenden, Roy-Charland, Minor-Corriveau and Bélanger1 assessed typically developing children, while Rondeau, Robillard and Roy-Charland2 assessed children and adolescents with a diagnosis of Autism Spectrum Disorder. The direct observation of this method will facilitate the replication of this study for researchers. It will also help clinicians who work with children who have complex communication needs to determine the children’s ability to navigate an SGD with taxonomical categorization. Behavior, Issue 100, Augmentative and alternative communication, navigation, cognition, assessment, speech-language pathology, children An Experimental Paradigm for the Prediction of Post-Operative Pain (PPOP) Institutions: University of Washington School of Medicine. Many women undergo cesarean delivery without problems; however, some experience significant pain after cesarean section. Pain is associated with negative short-term and long-term effects on the mother. Prior to women undergoing surgery, can we predict who is at risk for developing significant postoperative pain and potentially prevent or minimize its negative consequences?
These are the fundamental questions that a team from the University of Washington, Stanford University, the Catholic University in Brussels, Belgium, Santa Joana Women's Hospital in São Paulo, Brazil, and Rambam Medical Center in Israel is currently evaluating in an international research collaboration. The ultimate goal of this project is to provide optimal pain relief during and after cesarean section by offering individualized anesthetic care to women who appear to be more 'susceptible' to pain after surgery. A significant number of women experience moderate or severe acute post-partum pain after vaginal and cesarean deliveries.1 Furthermore, 10-15% of women suffer chronic persistent pain after cesarean section.2 With the constant increase in cesarean rates in the US3 and the already high rate in Brazil, this is bound to create a significant public health problem. When women are asked about their fears and expectations regarding cesarean section, pain during and after it is their greatest concern.4 Individual variability in severity of pain after vaginal or operative delivery is influenced by multiple factors including sensitivity to pain, psychological factors, age, and genetics. The unique birth experience leads to unpredictable requirements for analgesics, from 'none at all' to 'very high' doses of pain medication. Pain after cesarean section is an excellent model to study post-operative pain because it is performed on otherwise young and healthy women. Therefore, it is recommended to attenuate the pain during the acute phase, because untreated acute pain may lead to chronic pain disorders. The impact of developing persistent pain is immense, since it may impair not only the ability of women to care for their child in the immediate postpartum period, but also their own well-being for a long period of time.
In a series of projects, an international research network is currently investigating the effect of pregnancy on pain modulation and ways to predict who will suffer acute severe pain and potentially chronic pain, by using simple pain tests and questionnaires in combination with genetic analysis. A relatively recent approach to investigate pain modulation is via the psychophysical measure of Diffuse Noxious Inhibitory Control (DNIC). This pain-modulating process is the neurophysiological basis for the well-known phenomenon of 'pain inhibits pain' from remote areas of the body. The DNIC paradigm has evolved recently into a clinical tool and simple test and has been shown to be a predictor of post-operative pain.5 Since pregnancy is associated with decreased pain sensitivity and/or enhanced processes of pain modulation, using tests that investigate pain modulation should provide a better understanding of the pathways involved with pregnancy-induced analgesia and may help predict pain outcomes during labor and delivery. For those women delivering by cesarean section, a DNIC test performed prior to surgery along with psychosocial questionnaires and genetic tests should enable one to identify women prone to suffer severe post-cesarean pain and persistent pain. These clinical tests should allow anesthesiologists to offer not only personalized medicine to women with the promise to improve well-being and satisfaction, but also a reduction in the overall cost of perioperative and long term care due to pain and suffering. On a larger scale, these tests that explore pain modulation may become bedside screening tests to predict the development of pain disorders following surgery. 
JoVE Medicine, Issue 35, diffuse noxious inhibitory control, DNIC, temporal summation, TS, psychophysical testing, endogenous analgesia, pain modulation, pregnancy-induced analgesia, cesarean section, post-operative pain, prediction Using the Threat Probability Task to Assess Anxiety and Fear During Uncertain and Certain Threat Institutions: University of Wisconsin-Madison. Fear of certain threat and anxiety about uncertain threat are distinct emotions with unique behavioral, cognitive-attentional, and neuroanatomical components. Both anxiety and fear can be studied in the laboratory by measuring the potentiation of the startle reflex. The startle reflex is a defensive reflex that is potentiated when an organism is threatened and the need for defense is high. The startle reflex is assessed via electromyography (EMG) in the orbicularis oculi muscle elicited by brief, intense bursts of acoustic white noise (i.e., “startle probes”). Startle potentiation is calculated as the increase in startle response magnitude during presentation of sets of visual threat cues that signal delivery of mild electric shock relative to sets of matched cues that signal the absence of shock (no-threat cues). In the Threat Probability Task, fear is measured via startle potentiation to high probability (100% cue-contingent shock; certain) threat cues whereas anxiety is measured via startle potentiation to low probability (20% cue-contingent shock; uncertain) threat cues. Measurement of startle potentiation during the Threat Probability Task provides an objective and easily implemented alternative to assessment of negative affect via self-report or other methods (e.g., neuroimaging) that may be inappropriate or impractical for some researchers. Startle potentiation has been studied rigorously in both animals (e.g., rodents, non-human primates) and humans, which facilitates animal-to-human translational research.
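The startle potentiation computation described in the abstract above (mean startle magnitude to threat cues minus mean magnitude to matched no-threat cues) can be sketched in a few lines. The EMG magnitudes below are made-up illustrative values, not data from the task.

```python
# Minimal sketch of startle potentiation as described above:
# potentiation = mean EMG startle magnitude during threat cues
#                minus mean magnitude during matched no-threat cues.

def startle_potentiation(threat_magnitudes, no_threat_magnitudes):
    """Difference in mean startle response magnitude (threat minus no-threat)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(threat_magnitudes) - mean(no_threat_magnitudes)

# Hypothetical EMG response magnitudes (arbitrary units)
certain_threat = [52.0, 58.0, 61.0, 57.0]    # 100% shock-contingent cues (fear)
uncertain_threat = [44.0, 47.0, 49.0, 46.0]  # 20% shock-contingent cues (anxiety)
no_threat = [30.0, 33.0, 31.0, 34.0]

fear = startle_potentiation(certain_threat, no_threat)        # 25.0
anxiety = startle_potentiation(uncertain_threat, no_threat)   # 14.5
```

In the task, fear corresponds to potentiation under certain threat and anxiety to potentiation under uncertain threat, exactly as the two difference scores above separate them.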
Startle potentiation during certain and uncertain threat provides an objective measure of negative affect and distinct emotional states (fear, anxiety) to use in research on psychopathology, substance use/abuse and broadly in affective science. As such, it has been used extensively by clinical scientists interested in psychopathology etiology and by affective scientists interested in individual differences in emotion. Behavior, Issue 91, Startle; electromyography; shock; addiction; uncertainty; fear; anxiety; humans; psychophysiology; translational Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults Institutions: Emory University School of Medicine, Brigham and Women’s Hospital and Massachusetts General Hospital. Adapted tango dancing improves mobility and balance in older adults and additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, varied speeds and rhythms. Focus on foot placement, whole body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango’s demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology to disseminate the adapted tango teaching methods to dance instructor trainees and to implement the adapted tango by the trainees in the community for older adults and individuals with Parkinson’s Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, Tandem stance, Berg Balance Scale, Gait Speed and 30 sec chair stand), safety and fidelity of the program is maximized through targeted instructor and volunteer training and a structured detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls A New Technique for Quantitative Analysis of Hair Loss in Mice Using Grayscale Analysis Institutions: Children's Hospital at Montefiore. Alopecia is a common form of hair loss which can occur in many different conditions, including male-pattern hair loss, polycystic ovarian syndrome, and alopecia areata. Alopecia can also occur as a side effect of chemotherapy in cancer patients. In this study, our goal was to develop a consistent and reliable method to quantify hair loss in mice, which will allow investigators to accurately assess and compare new therapeutic approaches for these various forms of alopecia. The method utilizes a standard gel imager to obtain and process images of mice, measuring the light absorption, which occurs in rough proportion to the amount of black (or gray) hair on the mouse. Data that has been quantified in this fashion can then be analyzed using standard statistical techniques (i.e., ANOVA, t-test). This methodology was tested in mouse models of chemotherapy-induced alopecia, alopecia areata and alopecia from waxing. In this report, the detailed protocol is presented for performing these measurements, including validation data from C57BL/6 and C3H/HeJ strains of mice. This new technique offers a number of advantages, including relative simplicity of application, reliance on equipment which is readily available in most research laboratories, and application of an objective, quantitative assessment which is more robust than subjective evaluations. Improvements in quantification of hair growth in mice will improve the study of alopecia models and facilitate evaluation of promising new therapies in preclinical studies.
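As a rough illustration of the grayscale idea in the abstract above (darker pixels correspond to more black or gray hair, so mean darkness over an image region tracks hair coverage), here is a minimal sketch. The pixel values and the scoring function are our illustrative assumptions, not the authors' actual gel-imager protocol.

```python
# Hedged sketch of grayscale-based hair quantification:
# light absorption rises with the amount of dark hair, so mean pixel
# darkness over a region of interest serves as a hair-coverage score.

def hair_score(grayscale_pixels, max_value=255):
    """Mean darkness in [0, 1]: 0 = fully light (bald), 1 = fully dark (haired)."""
    darkness = [(max_value - p) / max_value for p in grayscale_pixels]
    return sum(darkness) / len(darkness)

# Toy 'images': flat lists of 8-bit grayscale pixel values
haired_mouse = [20, 30, 25, 15]      # mostly dark pixels
waxed_mouse = [230, 240, 235, 245]   # mostly light pixels

print(hair_score(haired_mouse) > hair_score(waxed_mouse))  # True
```

Scores computed this way per animal could then feed into the standard group comparisons (ANOVA, t-test) mentioned in the abstract.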
Structural Biology, Issue 97, Alopecia, Mice, Grayscale, Hair, Chemotherapy-Induced Alopecia, Alopecia Areata Mindfulness in Motion (MIM): An Onsite Mindfulness Based Intervention (MBI) for Chronically High Stress Work Environments to Increase Resiliency and Work Engagement Institutions: The Ohio State University College of Medicine, Wexner Medical Center, The Ohio State University College of Medicine. A pragmatic mindfulness intervention to benefit personnel working in chronically high-stress environments, delivered onsite during the workday, is timely and valuable to employee and employer alike. Mindfulness in Motion (MIM) is a Mindfulness Based Intervention (MBI) offered as a modified, less time-intensive method (compared to Mindfulness-Based Stress Reduction), delivered onsite during work, and is intended to enable busy working adults to experience the benefits of mindfulness. It teaches mindful awareness principles, rehearses mindfulness as a group, emphasizes the use of gentle yoga stretches, and utilizes relaxing music in the background of both the group sessions and individual mindfulness practice. MIM is delivered in a group format, for 1 hr/week for 8 weeks. CDs and a DVD are provided to facilitate individual practice. The yoga movement is emphasized in the protocol to facilitate a quieting of the mind. The music is included for participants to associate the relaxed state experienced in the group session with their individual practice. To determine the intervention's feasibility and efficacy we conducted a randomized study with a wait-list control group in Intensive Care Units (ICUs). ICUs represent a high-stress work environment where personnel experience chronic exposure to catastrophic situations as they care for seriously injured/ill patients. Despite high levels of work-related stress, few interventions have been developed and delivered onsite for such environments.
The intervention is delivered on site in the ICU, during work hours, with participants receiving time release to attend sessions. The intervention is well received, with a 97% retention rate. Work engagement and resiliency increase significantly in the intervention group, compared to the wait-list control group, while participant respiration rates decrease significantly pre-post in 6/8 of the weekly sessions. Participants value institutional support, relaxing music, and the instructor as pivotal to program success. This provides evidence that MIM is feasible, well accepted, and can be effectively implemented in a chronically high-stress work environment. Behavior, Issue 101, Mindfulness, resiliency, work-engagement, stress-reduction, workplace, non-reactivity, Intensive-care, chronic stress, work environment A Dual Task Procedure Combined with Rapid Serial Visual Presentation to Test Attentional Blink for Nontargets Institutions: Dartmouth College. When viewers search for targets in a rapid serial visual presentation (RSVP) stream, if two targets are presented within about 500 msec of each other, the first target may be easy to spot but the second is likely to be missed. This phenomenon of attentional blink (AB) has been widely studied to probe the temporal capacity of attention for detecting visual targets. However, with the typical procedure of AB experiments, it is not possible to examine how the processing of non-target items in RSVP may be affected by attention. This paper describes a novel dual task procedure combined with RSVP to test effects of AB for nontargets at varied stimulus onset asynchronies (SOAs). In an exemplar experiment, a target category was first displayed, followed by a sequence of 8 nouns. If one of the nouns belonged to the target category, participants would respond 'yes' at the end of the sequence; otherwise they would respond 'no'.
Two 2-alternative forced choice memory tasks followed the response to determine if participants remembered the words immediately before or after the target, as well as a random word from another part of the sequence. In a second exemplar experiment, the same design was used, except that 1) the memory task was counterbalanced into two groups with SOAs of either 120 or 240 msec and 2) three memory tasks followed the sequence and tested memory for nontarget nouns that could appear anywhere from 3 items before the target noun position to 3 items after it. Representative results from a previously published study demonstrate that our procedure can be used to examine divergent effects of attention that not only enhance targets but also suppress nontargets. Here we show results from a representative participant that replicated the previous finding. Behavior, Issue 94, Dual task, attentional blink, RSVP, target detection, recognition, visual psychophysics How to Study Placebo Responses in Motion Sickness with a Rotation Chair Paradigm in Healthy Participants Institutions: University Hospital Tübingen, Clemson University. Placebo responses occur in every medical intervention when patients or participants expect to receive an effective treatment to relieve symptoms. However, underlying mechanisms of placebo responses are not fully understood. It has repeatedly been shown that placebo responses are associated with changes in neural activity, but for many conditions it is unclear whether they also affect the target organ, such as the stomach in motion sickness. Therefore, we present a methodology for the multivariate assessment of placebo responses by subjective, behavioral and objective measures in motion sickness with a rotation chair paradigm. The physiological correlate of motion sickness is a shift in gastric myoelectrical activity towards tachygastria that can be recorded with electrogastrography.
The presented study applied the so-called balanced placebo design (BPD) to investigate the effects of ginger compared to placebo and the effects of expectations induced by verbal information. However, the study revealed no significant main or interaction effects of ginger (as a drug) or information on outcome measures, but showed interactions when the sex of participants and experimenters is taken into consideration. We discuss limitations of the presented study and report modifications that were used in subsequent studies demonstrating placebo responses when rotation speed was lowered. In general, future placebo studies have to identify the appropriate target organ for the studied placebo responses and to apply the specific methods to assess the physiological correlates. Neuroscience, Issue 94, motion sickness, nausea, placebo response, placebo effect, expectancy, electrogastrography, gastric myoelectric activity, rotation tolerance, balanced placebo design Adaptation of Semiautomated Circulating Tumor Cell (CTC) Assays for Clinical and Preclinical Research Applications Institutions: London Health Sciences Centre, Western University, London Health Sciences Centre, Lawson Health Research Institute, Western University. The majority of cancer-related deaths occur subsequent to the development of metastatic disease. This highly lethal disease stage is associated with the presence of circulating tumor cells (CTCs). These rare cells have been demonstrated to be of clinical significance in metastatic breast, prostate, and colorectal cancers. The current gold standard in clinical CTC detection and enumeration is the FDA-cleared CellSearch system (CSS).
This manuscript outlines the standard protocol utilized by this platform as well as two additional adapted protocols that describe the detailed process of user-defined marker optimization for protein characterization of patient CTCs and a comparable protocol for CTC capture in very low volumes of blood, using standard CSS reagents, for studying in vivo preclinical mouse models of metastasis. In addition, differences in CTC quality between healthy donor blood spiked with cells from tissue culture versus patient blood samples are highlighted. Finally, several commonly discrepant items that can lead to CTC misclassification errors are outlined. Taken together, these protocols will provide a useful resource for users of this platform interested in preclinical and clinical research pertaining to metastasis and CTCs. Medicine, Issue 84, Metastasis, circulating tumor cells (CTCs), CellSearch system, user defined marker characterization, in vivo, preclinical mouse model, clinical research Drug-induced Sensitization of Adenylyl Cyclase: Assay Streamlining and Miniaturization for Small Molecule and siRNA Screening Applications Institutions: Purdue University, Eli Lilly and Company. Sensitization of adenylyl cyclase (AC) signaling has been implicated in a variety of neuropsychiatric and neurologic disorders including substance abuse and Parkinson's disease. Acute activation of Gαi/o-linked receptors inhibits AC activity, whereas persistent activation of these receptors results in heterologous sensitization of AC and increased levels of intracellular cAMP. Previous studies have demonstrated that this enhancement of AC responsiveness is observed both in vitro and in vivo following the chronic activation of several types of Gαi/o-linked receptors including D2 dopamine and μ opioid receptors. Although heterologous sensitization of AC was first reported four decades ago, the mechanism(s) that underlie this phenomenon remain largely unknown. 
The lack of mechanistic data presumably reflects the complexity involved with this adaptive response, suggesting that nonbiased approaches could aid in identifying the molecular pathways involved in heterologous sensitization of AC. Previous studies have implicated kinase and Gβγ signaling as overlapping components that regulate the heterologous sensitization of AC. To identify unique and additional overlapping targets associated with sensitization of AC, the development and validation of a scalable cAMP sensitization assay is required for greater throughput. Previous approaches to study sensitization are generally cumbersome, involving continuous cell culture maintenance as well as a complex methodology for measuring cAMP accumulation that involves multiple wash steps. Thus, the development of a robust cell-based assay that can be used for high throughput screening (HTS) in a 384-well format would facilitate future studies. Using two D2 dopamine receptor cellular models, we have converted our 48-well sensitization assay (>20 steps, 4-5 days) to a five-step, single-day assay in 384-well format. This new format is amenable to small molecule screening, and we demonstrate that this assay design can also be readily used for reverse transfection of siRNA in anticipation of targeted siRNA library screening. Bioengineering, Issue 83, adenylyl cyclase, cAMP, heterologous sensitization, superactivation, D2 dopamine, μ opioid, siRNA Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study Institutions: RWTH Aachen University, Fraunhofer Gesellschaft. Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety.
Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody Using Learning Outcome Measures to assess Doctoral Nursing Education Institutions: Harris College of Nursing and Health Sciences, Texas Christian University. Education programs at all levels must be able to demonstrate successful program outcomes. 
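The knowledge-based parameter selection described in the DoE abstract above is typically paired with a software-generated design matrix. As a minimal illustration (the factor names and level values here are hypothetical assumptions, not the study's actual settings), a two-level full-factorial design can be enumerated in Python:

```python
from itertools import product

# Hypothetical two-level factors loosely modeled on the categories the
# abstract mentions (expression construct, plant growth, incubation);
# names and values are illustrative assumptions only.
factors = {
    "promoter": ["35S", "nos"],
    "incubation_temp_C": [22, 25],
    "plant_age_days": [35, 42],
}

# One experimental run per combination of factor levels
full_factorial = [dict(zip(factors, combo))
                  for combo in product(*factors.values())]

assert len(full_factorial) == 2 ** 3  # 8 runs cover all combinations
```

In practice, DoE software would reduce this to a fractional or optimal design and augment it stepwise, in line with the complexity-reduction strategy the abstract describes.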
Grades alone do not represent a comprehensive measurement methodology for assessing student learning outcomes at either the course or program level. The development and application of assessment rubrics provides an unequivocal measurement methodology to ensure a quality learning experience by providing a foundation for improvement based on qualitatively and quantitatively measurable, aggregate course and program outcomes. Learning outcomes are the embodiment of the total learning experience and should incorporate assessment of both qualitative and quantitative program outcomes. The assessment of qualitative measures represents a challenge for educators at any level of a learning program. Nursing provides a unique challenge and opportunity as it is the application of science through the art of caring. Quantification of desired student learning outcomes may be enhanced through the development of assessment rubrics designed to measure quantitative and qualitative aspects of the nursing education and learning process. They provide a mechanism for uniform assessment by nursing faculty of concepts and constructs that are otherwise difficult to describe and measure. A protocol is presented and applied to a doctoral nursing education program with recommendations for application and transformation of the assessment rubric to other education programs. Through application of these specially designed rubrics, all aspects of an education program can be adequately assessed to provide information for program assessment that facilitates the closure of the gap between desired and actual student learning outcomes for any desired educational competency. Medicine, Issue 40, learning, outcomes, measurement, program, assessment, rubric Brain Imaging Investigation of the Memory-Enhancing Effect of Emotion Institutions: University of Alberta, University of Illinois, Urbana-Champaign, Duke University, University of Illinois, Urbana-Champaign.
Emotional events tend to be better remembered than non-emotional events [1,2]. One goal of cognitive and affective neuroscientists is to understand the neural mechanisms underlying this enhancing effect of emotion on memory. A method that has proven particularly influential in the investigation of the memory-enhancing effect of emotion is the so-called subsequent memory paradigm (SMP). This method was originally used to investigate the neural correlates of non-emotional memories [3], and more recently we and others also applied it successfully to studies of emotional memory (reviewed in [4-7]). Here, we describe a protocol that allows investigation of the neural correlates of the memory-enhancing effect of emotion using the SMP in conjunction with event-related functional magnetic resonance imaging (fMRI). An important feature of the SMP is that it allows separation of brain activity specifically associated with memory from more general activity associated with perception. Moreover, in the context of investigating the impact of emotional stimuli, SMP allows identification of brain regions whose activity is susceptible to emotional modulation of both general/perceptual and memory-specific processing. This protocol can be used in healthy subjects [8-15], as well as in clinical patients where there are alterations in the neural correlates of emotion perception and biases in remembering emotional events, such as those suffering from depression and post-traumatic stress disorder (PTSD) [16,17]. Neuroscience, Issue 51, Affect, Recognition, Recollection, Dm Effect, Neuroimaging Assessment and Evaluation of the High Risk Neonate: The NICU Network Neurobehavioral Scale Institutions: Brown University, Women & Infants Hospital of Rhode Island, University of Massachusetts, Boston. There has been a long-standing interest in the assessment of the neurobehavioral integrity of the newborn infant.
The NICU Network Neurobehavioral Scale (NNNS) was developed as an assessment for the at-risk infant. These are infants who are at increased risk for poor developmental outcome because of insults during prenatal development, such as substance exposure or prematurity or factors such as poverty, poor nutrition or lack of prenatal care that can have adverse effects on the intrauterine environment and affect the developing fetus. The NNNS assesses the full range of infant neurobehavioral performance including neurological integrity, behavioral functioning, and signs of stress/abstinence. The NNNS is a noninvasive neonatal assessment tool with demonstrated validity as a predictor, not only of medical outcomes such as cerebral palsy diagnosis, neurological abnormalities, and diseases with risks to the brain, but also of developmental outcomes such as mental and motor functioning, behavior problems, school readiness, and IQ. The NNNS can identify infants at high risk for abnormal developmental outcome and is an important clinical tool that enables medical researchers and health practitioners to identify these infants and develop intervention programs to optimize the development of these infants as early as possible. The video shows the NNNS procedures, shows examples of normal and abnormal performance and the various clinical populations in which the exam can be used. Behavior, Issue 90, NICU Network Neurobehavioral Scale, NNNS, High risk infant, Assessment, Evaluation, Prediction, Long term outcome Perceptual and Category Processing of the Uncanny Valley Hypothesis' Dimension of Human Likeness: Some Methodological Issues Institutions: University of Zurich. 
Mori's Uncanny Valley Hypothesis [1,2] proposes that the perception of humanlike characters such as robots and, by extension, avatars (computer-generated characters) can evoke negative or positive affect (valence) depending on the object's degree of visual and behavioral realism along a dimension of human likeness (DHL) (Figure 1). But studies of affective valence of subjective responses to variously realistic non-human characters have produced inconsistent findings [3-6]. One of a number of reasons for this is that human likeness is not perceived as the hypothesis assumes. While the DHL can be defined following Mori's description as a smooth linear change in the degree of physical humanlike similarity, subjective perception of objects along the DHL can be understood in terms of the psychological effects of categorical perception (CP) [7]. Further behavioral and neuroimaging investigations of category processing and CP along the DHL and of the potential influence of the dimension's underlying category structure on affective experience are needed. This protocol therefore focuses on the DHL and allows examination of CP. Based on the protocol presented in the video as an example, issues surrounding the methodology in the protocol and the use in "uncanny" research of stimuli drawn from morph continua to represent the DHL are discussed in the article that accompanies the video. The use of neuroimaging and morph stimuli to represent the DHL in order to disentangle brain regions neurally responsive to physical human-like similarity from those responsive to category change and category processing is briefly illustrated.
Behavior, Issue 76, Neuroscience, Neurobiology, Molecular Biology, Psychology, Neuropsychology, uncanny valley, functional magnetic resonance imaging, fMRI, categorical perception, virtual reality, avatar, human likeness, Mori, uncanny valley hypothesis, perception, magnetic resonance imaging, MRI, imaging, clinical techniques Identification of Disease-related Spatial Covariance Patterns using Neuroimaging Data Institutions: The Feinstein Institute for Medical Research. The scaled subprofile model (SSM) [1-4] is a multivariate PCA-based algorithm that identifies major sources of variation in patient and control group brain image data while rejecting lesser components (Figure 1). Applied directly to voxel-by-voxel covariance data of steady-state multimodality images, an entire group image set can be reduced to a few significant linearly independent covariance patterns and corresponding subject scores. Each pattern, termed a group invariant subprofile (GIS), is an orthogonal principal component that represents a spatially distributed network of functionally interrelated brain regions. Large global mean scalar effects that can obscure smaller network-specific contributions are removed by the inherent logarithmic conversion and mean centering of the data [2,5,6]. Subjects express each of these patterns to a variable degree represented by a simple scalar score that can correlate with independent clinical or psychometric descriptors [7,8]. Using logistic regression analysis of subject scores (i.e. pattern expression values), linear coefficients can be derived to combine multiple principal components into single disease-related spatial covariance patterns, i.e. composite networks with improved discrimination of patients from healthy control subjects [5,6]. Cross-validation within the derivation set can be performed using bootstrap resampling techniques [9]. Forward validation is easily confirmed by direct score evaluation of the derived patterns in prospective datasets [10].
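The SSM preprocessing and decomposition steps just described (logarithmic conversion, removal of subject and group means, then PCA) can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions, not the in-house software the abstract refers to; the function name, array shapes, and random data are invented for the sketch:

```python
import numpy as np

def ssm_pca(images, n_patterns=2):
    """Minimal sketch of the scaled subprofile model (SSM) steps:
    log-transform, remove each subject's global mean and the group
    mean image, then PCA on the residuals via SVD.
    images: (n_subjects, n_voxels) array of positive voxel values."""
    log_data = np.log(images)
    # Remove each subject's global mean (row centering) ...
    centered = log_data - log_data.mean(axis=1, keepdims=True)
    # ... and the group mean image (column centering)
    residuals = centered - centered.mean(axis=0, keepdims=True)
    # Rows of vt are spatial patterns (GIS analogues);
    # u scaled by s gives each subject's pattern expression score
    u, s, vt = np.linalg.svd(residuals, full_matrices=False)
    scores = u[:, :n_patterns] * s[:n_patterns]
    patterns = vt[:n_patterns]
    return scores, patterns

rng = np.random.default_rng(1)
data = np.exp(rng.normal(size=(10, 200)))  # 10 subjects, 200 voxels
scores, patterns = ssm_pca(data)
assert scores.shape == (10, 2) and patterns.shape == (2, 200)
```

Because the group mean image is removed, each pattern's subject scores sum to approximately zero across the group; in a real analysis these scores would then feed the logistic regression and validation steps the abstract outlines.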
Once validated, disease-related patterns can be used to score individual patients with respect to a fixed reference sample, often the set of healthy subjects that was used (with the disease group) in the original pattern derivation [11]. These standardized values can in turn be used to assist in differential diagnosis [12,13] and to assess disease progression and treatment effects at the network level [7,14-16]. We present an example of the application of this methodology to FDG PET data of Parkinson's Disease patients and normal controls using our in-house software to derive a characteristic covariance pattern biomarker of disease. Medicine, Issue 76, Neurobiology, Neuroscience, Anatomy, Physiology, Molecular Biology, Basal Ganglia Diseases, Parkinsonian Disorders, Parkinson Disease, Movement Disorders, Neurodegenerative Diseases, PCA, SSM, PET, imaging biomarkers, functional brain imaging, multivariate spatial covariance analysis, global normalization, differential diagnosis, PD, brain, imaging, clinical techniques Measuring Frailty in HIV-infected Individuals. Identification of Frail Patients is the First Step to Amelioration and Reversal of Frailty Institutions: University of Arizona, University of Arizona. A simple, validated protocol consisting of a battery of tests is available to identify elderly patients with frailty syndrome. This syndrome of decreased reserve and resistance to stressors increases in incidence with increasing age. In the elderly, frailty may pursue a step-wise loss of function from non-frail to pre-frail to frail. We studied frailty in HIV-infected patients and found that ~20% are frail using the Fried phenotype with stringent criteria developed for the elderly [1,2]. In HIV infection the syndrome occurs at a younger age.
HIV patients were checked for 1) unintentional weight loss; 2) slowness as determined by walking speed; 3) weakness as measured by a grip dynamometer; 4) exhaustion by responses to a depression scale; and 5) low physical activity as determined by assessing kilocalories expended in a week's time. Pre-frailty was present with any two of the five criteria and frailty was present if any three of the five criteria were abnormal. The tests take approximately 10-15 min to complete and they can be performed by medical assistants during routine clinic visits. Test results are scored by referring to standard tables. Understanding which of the five components contribute to frailty in an individual patient can allow the clinician to address relevant underlying problems, many of which are not evident in routine HIV clinic visits. Medicine, Issue 77, Infection, Virology, Infectious Diseases, Anatomy, Physiology, Molecular Biology, Biomedical Engineering, Retroviridae Infections, Body Weight Changes, Diagnostic Techniques and Procedures, Physical Examination, Muscle Strength, Behavior, Virus Diseases, Pathological Conditions, Signs and Symptoms, Diagnosis, Musculoskeletal and Neural Physiological Phenomena, HIV, HIV-1, AIDS, Frailty, Depression, Weight Loss, Weakness, Slowness, Exhaustion, Aging, clinical techniques Utility of Dissociated Intrinsic Hand Muscle Atrophy in the Diagnosis of Amyotrophic Lateral Sclerosis Institutions: Westmead Hospital, University of Sydney, Australia. The split hand phenomenon refers to predominant wasting of thenar muscles and is an early and specific feature of amyotrophic lateral sclerosis (ALS). A novel split hand index (SI) was developed to quantify the split hand phenomenon, and its diagnostic utility was assessed in ALS patients.
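The five-criterion frailty scoring rule described above reduces to a simple count. A minimal Python sketch follows; the criterion names and dictionary format are illustrative assumptions, not part of the published protocol, and the cutoffs (three or more abnormal criteria = frail, two = pre-frail) are taken from the text:

```python
# The five Fried-phenotype criteria checked in the protocol above
CRITERIA = ("weight_loss", "slowness", "weakness", "exhaustion", "low_activity")

def classify_frailty(abnormal):
    """Classify a patient from a dict mapping each criterion to
    True (abnormal) / False (normal), using the cutoffs stated in
    the protocol: >= 3 abnormal criteria = frail, 2 = pre-frail."""
    n = sum(bool(abnormal[c]) for c in CRITERIA)
    if n >= 3:
        return "frail"
    if n == 2:
        return "pre-frail"
    return "non-frail"

patient = dict(weight_loss=True, slowness=True, weakness=False,
               exhaustion=True, low_activity=False)
assert classify_frailty(patient) == "frail"  # 3 abnormal criteria
```

Knowing which criteria drive the count is what lets the clinician address the relevant underlying problems, as the abstract notes.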
The split hand index was derived by dividing the product of the compound muscle action potential (CMAP) amplitudes recorded over the abductor pollicis brevis and first dorsal interosseous muscles by the CMAP amplitude recorded over the abductor digiti minimi muscle. In order to assess the diagnostic utility of the split hand index, ALS patients were prospectively assessed and their results were compared to those of patients with other neuromuscular disorders. The split hand index was significantly reduced in ALS when compared to neuromuscular disorder patients (P<0.0001). Limb-onset ALS patients exhibited the greatest reduction in the split hand index, and a value of 5.2 or less reliably differentiated ALS from other neuromuscular disorders. Consequently, the split hand index appears to be a novel diagnostic biomarker for ALS, perhaps facilitating an earlier diagnosis. Medicine, Issue 85, Amyotrophic Lateral Sclerosis (ALS), dissociated muscle atrophy, hypothenar muscles, motor neuron disease, split-hand index, thenar muscles Dynamic Visual Tests to Identify and Quantify Visual Damage and Repair Following Demyelination in Optic Neuritis Patients Institutions: Hadassah Hebrew-University Medical Center. In order to follow optic neuritis patients and evaluate the effectiveness of their treatment, a handy, accurate and quantifiable tool is required to assess changes in myelination at the central nervous system (CNS). However, standard measurements, including routine visual tests and MRI scans, are not sensitive enough for this purpose. We present two visual tests addressing dynamic monocular and binocular functions which may closely associate with the extent of myelination along visual pathways. These include Object From Motion (OFM) extraction and Time-constrained stereo protocols. In the OFM test, an array of dots composes an object: the dots within the image of the object move rightward while the dots outside it move leftward, or vice versa.
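The split hand index defined above is a single ratio and is straightforward to compute. A hedged Python sketch (the function name and the example amplitude values are invented for illustration; the 5.2 cutoff is the one reported in the abstract):

```python
def split_hand_index(cmap_apb, cmap_fdi, cmap_adm):
    """Split hand index (SI) as defined above: the product of the CMAP
    amplitudes over abductor pollicis brevis (APB) and first dorsal
    interosseous (FDI), divided by the CMAP amplitude over abductor
    digiti minimi (ADM). Amplitudes are assumed to be in mV."""
    return (cmap_apb * cmap_fdi) / cmap_adm

# Illustrative (not clinical) values: per the study, an SI of 5.2 or
# less reliably differentiated ALS from other neuromuscular disorders.
si = split_hand_index(cmap_apb=2.0, cmap_fdi=4.0, cmap_adm=8.0)
assert si == 1.0
assert si <= 5.2  # would fall in the ALS-suggestive range
```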
The dot pattern generates a camouflaged object that cannot be detected when the dots are stationary or moving as a whole. Importantly, object recognition is critically dependent on motion perception. In the Time-constrained Stereo protocol, spatially disparate images are presented for a limited length of time, challenging binocular 3-dimensional integration in time. Both tests are appropriate for clinical usage and provide a simple, yet powerful, way to identify and quantify processes of demyelination and remyelination along visual pathways. These protocols may be effective for diagnosing and following optic neuritis and multiple sclerosis patients. In the diagnostic process, these protocols may reveal visual deficits that cannot be identified via current standard visual measurements. Moreover, these protocols sensitively identify the basis of currently unexplained visual complaints that persist after recovery of visual acuity. In the longitudinal follow-up course, the protocols can be used as a sensitive marker of demyelinating and remyelinating processes over time. These protocols may therefore be used to evaluate the efficacy of current and evolving therapeutic strategies targeting myelination of the CNS. Medicine, Issue 86, Optic neuritis, visual impairment, dynamic visual functions, motion perception, stereopsis, demyelination, remyelination Assessing the Multiple Dimensions of Engagement to Characterize Learning: A Neurophysiological Perspective Institutions: Université du Québec à Montréal, HEC Montreal, HEC Montreal, Université du Québec à Montréal. In a recent theoretical synthesis on the concept of engagement, Fredricks, Blumenfeld and Paris [1] defined engagement by its multiple dimensions: behavioral, emotional and cognitive.
They observed that individual types of engagement had not been studied in conjunction, and little information was available about interactions or synergy between the dimensions; consequently, more studies would contribute to creating finely tuned teaching interventions. Benefiting from recent technological advances in neurosciences, this paper presents a recently developed methodology to gather and synchronize data on multidimensional engagement during learning tasks. The technique involves the collection of (a) electroencephalography, (b) electrodermal, (c) eye-tracking, and (d) facial emotion recognition data on four different computers, which leads to synchronization issues for data collected from multiple sources. Post-hoc synchronization in specialized integration software gives researchers a better understanding of the dynamics between the multiple dimensions of engagement. For curriculum developers, these data could provide informed guidelines for achieving better instruction/learning efficiency. This technique also opens up possibilities in the field of brain-computer interactions, where adaptive learning or assessment environments could be developed. Behavior, Issue 101, Measurement of engagement, learning, neurophysiology, electroencephalography, signal synchronization, electrodermal activity, automatic facial emotion recognition, emotional valence, arousal
Courtney Lee's time in Boston has been brief, but also disappointing. We were promised elite 3-point shooting, but he has shot a career-worst 24%. He was supposed to be a good, athletic finisher, but he has shot a career-worst 52.9% at the rim. Some over the summer were even predicting that Lee would keep the starting job over Avery Bradley, but instead he lost it to Jason Terry before the schedule even hit double-digit games. But if you look past some of his offensive struggles, Lee has really shown an all-around game that has helped him be a nice role player, even when his 3s are not falling. Lee was praised for his defense when we first got him, and he has not disappointed. Showing off his defensive versatility, Lee has proven his ability to apply ball pressure, be stout in the post, and close out well. Even when his footwork and rotations are not perfect, his boundless energy always gives headaches to the opposition. And it is not just the eye test; the stats back it up too. According to Synergy Sports data, Lee was allowing a mere 0.704 points per play, ranking him in the 91st percentile among all NBA players. No other Celtic comes close to those numbers. On a team that lost its defensive ace to injury, Lee has done an amazing job filling in for Bradley. While Lee has struggled from 3-point territory and at the rim, he has not entirely been a disaster offensively. Lee has never been a great mid-range shooter, except for his sophomore year in NJ (when he shot 43% on 3.1 attempts), but in his time in Boston he has exceeded even those numbers. Lee has shot 60% on 1.4 attempts from mid-range in his 14 games as a Celtic. While that 60% is most likely unsustainable, it's encouraging to see a guy labeled a "3 and D" player show the ability to take one dribble past the 3-point line and hit a jumper from 24 ft. Lee also shows off his catch-and-shoot ability here, as he has been assisted on 83.3% of those mid-range shots.
Lee has not entirely met expectations during his short time in Boston (whether or not those expectations were fair is another story), but he has done things that bode well for when his 3-point shot returns. Lee is a career 38% 3-point shooter, so we know he will not keep shooting 24%. Lee knows he is struggling, and is working hard to fix the problem. Until then, time is the best thing to give him. And once he gets that 3-point shot back, hopefully we will see the player we wanted all along. Follow Jun Pang on Twitter @CjEuLnTICS. Jun Pang, 12/02/2012, 05:20 PM
Close encounters of the celeb kind No. 3 Heather Graham backstage at Diesel's spring/summer 2007 show. WHAT BRINGS YOU TO DIESEL? Fashion. I think Renzo is a really cool guy and I love his [new, New York] shop. DO YOU GO TO A LOT OF FASHION SHOWS? Not a lot. A few. Not a lot. WHAT'S THE APPEAL OF NEW YORK FASHION WEEK TO CELEBRITIES? New York is such a fun place to be. You've got a lot of people who live here and if they don't live here, then they love to come here. YES, BUT THERE ARE A LOT OF CELEBRITIES HERE IN NEW YORK. EVEN KEVIN COSTNER WAS HERE THE OTHER NIGHT. Funny, what show was he at? MAX AZRIA. BUT WHAT IS THE SYNERGY BETWEEN CELEBRITIES AND FASHION EVENTS? Well I think because as a celebrity you have to go out to different events and people who are great designers, sometimes they give you things. Like you wear some of their stuff and so you develop sometimes relationships with people that you like their clothes, because you wear their stuff. So I'm sure he probably wears Max Azaria [sic] and that's why he went. DO CELEBRITIES EVER HAVE TO BUY ANYTHING? Yeah. I mean you definitely get a lot of perks though, you get a lot of goods free. But no, you buy stuff, completely. WERE YOU BITTERLY DISAPPOINTED THAT YOU WEREN'T NOMINATED FOR AN OSCAR FOR YOUR ROLE AS ROLLER GIRL? Oh you're sweet for saying that, thank-you. WAS THAT THE SEMINAL ROLE OF YOUR CAREER DO YOU THINK? Hopefully I'll have more good roles. Heather so should have won an oscar for playing a roller girl - absolutely. And quiet frankly i think SMH need to send me to these fashion shows to get all the freebies. Yes i may be a nobody - but im a fabulous nobody who loves fashion. hahahaha - Posted by: jamie on September 14, 2006 4:59 PM
We designed and fabricated a silicon probe with a nanophotonic force sensor to directly stimulate neurons (PC12 cells) and measured its effect on neurite initiation and elongation. Single-layer, pitch-variable diffractive nanogratings were fabricated on a silicon nitride probe using e-beam lithography, reactive-ion etching, and wet-etching techniques. The nanogratings consist of flexure folding beams suspended between two parallel cantilevers of known stiffness. The probe displacement, and therefore the force, can be measured through the grating transmission spectrum. We measured the mechanical membrane characteristics of PC12 cells using force sensors with a displacement range of 10 μm and a force sensitivity of 8 μN/μm. Young's moduli of 425 ± 30 Pa were measured at a membrane deflection of 1% for PC12 cells cultured on a polydimethylsiloxane (PDMS) substrate coated with collagen or laminin in Ham's F-12K medium. In a series of measurements, we also observed stimulation of directed neurite contraction of up to 6 μm on extended probing over a period of 30 min. This method is applicable to measuring the mechanics of central neurons under subtle tensions for studies of development and morphogenesis. The close synergy between the nanophotonic measurements and neurological verification can improve our understanding of the effect of external conditions on the mechanical properties of cells during growth and differentiation. Keywords: mechanotransduction, cytomechanics, PC12, cell membrane, growth, differentiation, nanogratings, micro-electro-mechanical systems (MEMS), force sensor.
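As a back-of-the-envelope illustration of the calibration reported above (this is not the authors' code; treating the quoted 8 μN/μm force sensitivity as an effective Hooke's-law stiffness, and the example displacement value, are our simplifying assumptions), a measured probe displacement converts to force as follows:

```python
# Hedged sketch: converting a probe displacement read out from the grating
# transmission spectrum into an applied force, using F = k * x with the
# sensor figures reported in the abstract.

SENSITIVITY_UN_PER_UM = 8.0   # force per unit displacement, uN/um (from text)
RANGE_UM = 10.0               # usable displacement range, um (from text)

def force_from_displacement(displacement_um):
    """Return the probe force in micronewtons for a displacement in microns."""
    if not 0.0 <= displacement_um <= RANGE_UM:
        raise ValueError("displacement outside the sensor's working range")
    return SENSITIVITY_UN_PER_UM * displacement_um

# Example displacement of 0.06 um (an invented value for illustration):
f = force_from_displacement(0.06)
```

The same linear relation, inverted, is what lets the known cantilever stiffness turn an optical readout into a force measurement.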
When Should You Fire Your Website Designer? When he doesn't know your business. You may say that it isn't his job. But you are wrong. Your online business is not a wild, crazy idea, but a source of your income. Trust the right people to design your website. Your 17-year-old nephew may be able to create your website, or you can get a free one from several services; they might even be kind of pretty. In my research and experience, less than 2% of businesses that have websites have planted the seeds of online success. Many of them are your competitors. The problem is that web designers focus on the technical aspects of the site. They don't have a clue how to build a website that makes you a ton of money, drives floods of traffic, and gets you top placement on search engines. Why Do You Have a Website? Just one simple fact. The reason you have a website for your business, no matter what business you are in, is that you want to rake in bundles of cash. Maybe not now, but later it should definitely be bringing returns on the investment. How does a website make money for your business? Well, there are only 5 things that matter if your site is going to be a cash magnet:
1. You have to be findable on the web
2. You have to generate leads = people to your site
3. The leads need to become prospects
4. The prospects have to be converted to buyers = sales
5. You should save bundles of money that you are spending now on things like customer service, order fulfillment, delivery tracking, and automating offers, anywhere that information is provided without getting a bunch of humans involved.
That's all there is to it! If your web designers know these, well and fine, but most do not. Worse, website maintenance costs money. And you are losing business to the competition. That is BAD. Your Website Is Like Your Business. You Want It to Make Money. And Save Time. All the Time. Years ago I took one of my own ideas and created a $1 million+ online business from scratch.
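The five stages above multiply together, which is worth seeing in numbers. A toy funnel model follows; every rate and dollar figure in it is invented purely for illustration, not taken from the article.

```python
# Toy funnel arithmetic for the five money-making stages: visitors become
# leads, leads become prospects, prospects become buyers, and automation
# savings are added on top.

def funnel(visitors, lead_rate, prospect_rate, buyer_rate, avg_order, monthly_savings):
    leads = visitors * lead_rate          # step 2: people reaching the site who opt in
    prospects = leads * prospect_rate     # step 3: leads who show buying intent
    buyers = prospects * buyer_rate       # step 4: prospects who actually purchase
    revenue = buyers * avg_order
    return revenue + monthly_savings      # step 5: add cost savings from automation

# 10,000 visitors, 20% become leads, 25% of leads become prospects,
# 10% of prospects buy a $50 order, plus $500/month saved on manual handling:
total = funnel(10000, 0.20, 0.25, 0.10, 50.0, 500.0)
```

Notice how a small improvement at any single stage multiplies through every stage after it, which is why the article insists on testing each one.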
It is now on the first page of the search engines. And it's my business and livelihood. Make sure that your web designer owns a successful online business and knows how to run it. Your sole purpose in having a website should be to use it as a marketing and communications tool. It is not there to be pretty. It should not be there to win awards. It is there to make you money. Even if you have a better product or service than your competition, the one who attracts more prospects and customers wins! Being the best at marketing is all that matters. Online Marketing Is Completely Different Than Marketing Offline All of the tools, techniques, communication, etc. in the online world are different from those offline. A very common mistake is to think that because it works offline you can just put it on the website! It's important to realize that the offline reader thinks and processes information in a different way than someone reading online copy. What works offline may be a complete flop online. They are two different worlds. For example, online marketing REQUIRES you to know how to get your site ranked high, very high, in search engines like Google and Yahoo, so you routinely appear on the first page for the optimal search terms for your business. Of course, this assumes you know how to find the optimal words for your business, your marketplace, your niche, etc. Just so you know, the "include all words" strategy has proven to be a total failure. And most designers and businesses do not know that SEO (search engine optimization) is not SEM (search engine marketing). If you do one and not the other, you will probably be very disappointed with your results. Studies have shown that you need to be on the first page of search engine results to get enough people coming to your site. SEO and SEM are not optional for online success; they are mandatory! 62% of users searching on the internet select a result within the first page of results.
More than 90% click within the first 3 pages of search results. Beyond that, you are toast. Increase Your ROI: Only Work with People Who Know Websites and Online Marketing When you are planning your financial future, you hire a financial planner or an attorney who specializes in that area of law. When you want plastic surgery, you don't go to a podiatrist, unless you want to end up with your nose looking like a foot. So why wouldn't you hire someone who specializes in creating money-making websites to build your website? Why would you hire a techno-wizard who knows nothing about businesses? I know this sounds silly, but I have to say it because that is what the great majority of people do: pay someone to build a pretty site, slap it online, and hope for the best. Did you know that 99% of web design companies don't know how to devise online sites that actually market your products or services? So what you need is someone who can do more than build a website. Much, Much More! You need people who can help you create hot products or services that your customers really want. You need people who make it possible for you to have a money-tree business. People who can produce money like it grows on trees. People who have the communication skills you need to capture the attention and cash the order with your customers. Online or offline. (That means using multiple marketing channels and starting with the lowest cost = online!) You need people with real-world experience as well as online experience who can combine the two for the benefit of YOU. No one has the time to learn everything about their business, the internet, marketing, copywriting, finances, and so on. That is why "the price of ignorance is paid forever!" Successful entrepreneurs value all the real-world experience they can get. Where do they get it? By surrounding themselves with a team of experts who can provide the knowledge, guidance and successful experience for a wide variety of businesses.
One of the many beauties of direct marketing online is that you are never face-to-face with your prospects, so you never, ever experience a single "No" response! No one enjoys hearing "No." At first it seems like a personal rejection. For lots of people, never hearing anything but "Yes" is a lot of fun too! Always Remember: Your Website Is Your Marketing Machine I just talked about how a good-looking site can be totally useless and invisible to the public. I talked about how and why you want to be on the first page of the search engines. Now, about driving people to your website. This is more than being found for the right search terms. This is about having a comprehensive strategy that links your offline and online marketing together to create leverage for your business and get maximum results. Everything you do must have synergy, leverage and scalability. If not, don't do it. You need to be sure your online partner can:
- Create a site that entices visitors to convert from information seekers to paying customers
- Generate a stream of online and offline leads
- Show you what functions of your business can be automated to save you money, and put that savings to good use getting more customers
- Show you the secrets of capturing information and how to utilize it for easy access and follow-up
- Show you how to make more sales with your existing customers
- Help you set up marketing campaigns that get real results and build customer loyalty
- Help you use email marketing ethically and effectively
Seriously, if your web designer/dungeon master/graphics guru cannot do ALL of these things and more, FIRE them now. You are wasting time and money! And time is often worth more than money! Test, Test, and Test Some More & Correlate Results and Changes If you really want online success, or even offline success, then you must understand the importance of successful marketing testing. It is Crucial. Start with these questions: Have you tested each of your offline campaigns?
Do you know the return on investment for each offline strategy? Do you know the speed of the return on investment? Do you know all the measures for your website traffic? Do you know the unique visitors, hits, time spent, pages hit, etc., for your website? Do you know how your offline and online marketing complement each other? And affect each other? Do you even have a Total Marketing Strategy? Okay, I'm not trying to overwhelm you, but here are a few specific areas of just your online world that should also be tested:
- Your home page
- Your navigation settings
- Your keyword selection
- Your links, both incoming and outgoing
- Your customer offers, bonuses, etc.
- Your information offers
- Your location of certain offers, features, buttons, etc.
- Your shopping cart (if you have one)
- Your prospect-capturing system
- Your prospect and customer communication system
- Your pictures and graphics
- Your lead generation techniques
- Your automated business functions
And test how these variables and others interact and affect one another. You can't test in a vacuum. You need good information in order to decide what you will continue to use, what you will modify and re-test, and what you will get rid of. Did you know that your testing should never stop? And you must have real testing results before making decisions. That means real numbers, statistics, data... you get the idea. Businesses that are wildly successful with their offline and online strategies are always rabid about testing and knowing how and what to test. That is the only way to really rake it in! Track and Measure Correctly You want to know everything that is happening on your website, or not happening. All of that testing will do you no good if you are not measuring and tracking all of the data associated with it. Here are some of the tracking measures you should be talking to your web designer/builder about. In fact, they should be talking to you about these things.
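"Real numbers, not hope" is the point of the testing the article demands. A minimal sketch of how a variant test is actually decided follows; the conversion counts are made up, and the two-proportion z-test shown here is one standard choice among several, not something the article prescribes.

```python
# Hedged sketch: did variant B of a page really convert better than variant A,
# or is the difference just noise? A two-proportion z-test gives a first answer.
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic for conv_x conversions out of n_x visitors."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # combined conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Invented numbers: 2% vs. 3% conversion over 5,000 visitors each.
z = z_score(100, 5000, 150, 5000)
significant = z > 1.645  # one-sided test at the 5% level
```

Only when `significant` is true do you keep variant B; otherwise you keep collecting data, which is exactly the "never stop testing" discipline described above.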
If you have to bring them up, you are already in trouble, with a capital T.
- How many visitors are coming to the site
- How many of the visitors aren't visitors (i.e., spiders, crawlers, etc., from search engines)
- How many visitors are new vs. returning
- How long each visitor stays on your site
- What each visitor looks at
- Which graphics, words, pictures, etc., are generating the most responses
- Which search engines are getting you the best prospects
- Where else your customers are coming from
- How many pages the visitor looks at
- What your website rankings are
- How much money you have made from the average visitor
- Who your biggest money-making customers are
- If you use PPC, whether it is working and paying for itself
- Which links are bringing your visitors, and whether they are converting to customers
And so on. This is not a comprehensive listing; rather, it gives you an idea of how many things you could or should be tracking when it comes to your online marketing. If you are like most people, you are thinking there is no way I could remember all of that, much less do it. You would be right. Remember, that is why we all need a team of experts around us to do the things we either don't know or don't have time for. But if you use these kinds of tools and tracking, you will join the 1-2% of successful online businesses. You do want to make money with your website, right? It is simple: your website should bring in more money than it costs to maintain it! What I see every day is that our clients do not have any idea about virtually any of these statistics. So they are not making good decisions about what to do next. Most are just leaving the website as-is and hoping for the best. You should get measurements for every aspect of your business, including offline, so that you make informed decisions based on facts, not fiction! Some Final Thoughts Today, you must be ONLINE with a WEBSITE to be successful.
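Several of the tracking measures just listed boil down to a few lines of arithmetic over a visit log. The sketch below is purely illustrative: the log format, field names, and numbers are invented, and a real site would pull these from an analytics tool rather than hand-rolled code.

```python
# Hypothetical illustration: reducing a raw visit log to a handful of the
# tracking numbers the article says you should know.

visits = [
    {"visitor": "a", "source": "google", "pages": 5, "revenue": 40.0},
    {"visitor": "b", "source": "google", "pages": 1, "revenue": 0.0},
    {"visitor": "a", "source": "direct", "pages": 2, "revenue": 0.0},
    {"visitor": "c", "source": "yahoo",  "pages": 8, "revenue": 120.0},
]

def summarize(log):
    unique = len({v["visitor"] for v in log})                      # unique visitors
    buyers = len({v["visitor"] for v in log if v["revenue"] > 0})  # who converted
    revenue = sum(v["revenue"] for v in log)
    return {
        "unique_visitors": unique,
        "conversion_rate": buyers / unique,
        "revenue_per_visitor": revenue / unique,          # money from the average visitor
        "pages_per_visit": sum(v["pages"] for v in log) / len(log),
    }

stats = summarize(visits)
```

The same grouping trick extends to per-source breakdowns, which is how you would answer "which search engines are getting you the best prospects."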
Research has shown that people are abandoning the yellow pages and many other traditional forms of advertising. The internet is the #1 SOURCE for information on virtually every topic or subject you can imagine, and it is still growing rapidly. It is as it has always been: survival of the fittest. Those businesses which combine their offline and online strategies to maximize their effectiveness are going to survive and thrive. The others will die. And in these times there will be more deaths than usual. You see it: the "for lease" signs appearing everywhere, the announcement each week of another big business failure, the empty spaces in office buildings. Now is not the time to sit in the corner huddled with fear. If that is you, then you will soon join the ranks of dying businesses. There are a select few who are taking action, gaining market share, taking in the customers from those dying businesses, and thriving! And they are positioned to be even bigger winners when the economy recovers. Which one are you? Hopefully you have gained some valuable insights that will help you achieve outstanding results with your business, online and offline. Our goal is to grow our business by helping people like you grow yours. The more you learn about us, the more you will see that we run our business with the same ethics, values and business-building tools we recommend to you. The key to lasting success is to create lasting value. Turn transactions into relationships. In fact, the last sentence may be the most important and valuable one you read.
~Destructive Forces~ With Tornadus-T banned and my most successful team built around it, I decided to reconstruct a team around its Therian brother, Landorus-T. Landorus-T has a great offensive presence in OU, coming into BW2 with a new ability and the same great coverage moves, typing, etc. Need I say more? Landorus-T is ultimately the star of the team and therefore steals the spotlight in most battles. I decided to take a completely different approach in using Landorus-T since my last RMT (V-C8), using the Double Dance set over the defensive pivot set. Without further ado, here is Destructive Forces!

Up Close & Personal

Landorus-T @ Leftovers
Trait: Intimidate
Adamant nature
EVs: 72 HP / 252 Atk / 184 Spe
- Earthquake
- Stone Edge
- Swords Dance
- Rock Polish

Landorus-T makes the spot as the star of the show! Double Dance only adds more pressure on the opposing team to find a counter, and quickly, before too many stat boosts have been set in motion! About 40% of the time, Landorus-T isn't the team's lead, due to the expectation that it'll set up rocks, so the opponent tries to counter its efforts ASAP. Other than that, it comes in on any Pokémon that doesn't threaten it, sets up, and sweeps away. The moveset of Earthquake / Stone Edge / Swords Dance / Rock Polish provides Landorus-T with the notorious Edge-Quake coverage, which gives it great power against various opposing Pokémon.
Synergy: Water: Keldeo-R / Latios; Ice: Keldeo-R / Jirachi / Heatran / Forretress
________________________________________________________

Keldeo-R @ Leftovers
Trait: Justified
Timid nature
EVs: 252 SpA / 252 Spe / 4 HP
IVs: 31/31/30/31/30/31
- Surf
- Secret Sword
- Hidden Power (Ghost)
- Calm Mind

Keldeo-R makes its debut as yet another set-up sweeper that presents a huge threat to the OU tier! Keldeo has great synergy with Landorus-T, as it can come in on the Water and Ice attacks aimed at him with impunity, set up at least one Calm Mind, and begin to sweep. At +2, Keldeo does its job quite easily. Keldeo's major weakness to status gave me the brilliant idea to use a Pokémon I'd not used much since its release. The moveset of Surf / Secret Sword / Hidden Power (Ghost) / Calm Mind secures a safe dual STAB while also providing coverage for Jellicent, who would otherwise wall its entire set when running Hidden Power (Ice).

Synergy: Electric: Landorus-T / Latios; Psychic: Latios / Jirachi / Heatran / Forretress
________________________________________________________

Latios @ Choice Scarf
Trait: Levitate
Timid nature
EVs: 252 SpA / 252 Spe / 4 Def
- Draco Meteor
- Psyshock
- Surf
- Trick

Latios tags along on the team as the revenge killer! Scarfed Latios has been rising in popularity and, while I often find myself contemplating between Specs and Scarf, I usually choose Scarf for the boosted Speed. Latios' greatest threat is Scizor, which is why I chose to include its biggest counter on the team, too. The moveset of Psyshock / Draco Meteor / Surf / Trick is the best set to use when trying to win the speed race against opposing Latios that carry Hidden Power (Fire).
Draco Meteor provides Latios with enough power to revenge any sort of Specs Lati variant, as well as any other Dragon (bar Dragonite at full HP with Multiscale intact). Psyshock helps in a variety of ways, the main one being its ability to revenge Keldeo, Blissey and Chansey. Trick also plays a role in revenging the aforementioned threats.

Synergy: Dark: Keldeo-R / Heatran / Forretress; Bug: Landorus-T / Keldeo-R / Heatran / Forretress; Dragon: Jirachi / Heatran / Forretress; Ice: Keldeo-R / Jirachi / Heatran / Forretress; Ghost: Heatran / Forretress
________________________________________________________

Jirachi @ Leftovers
Trait: Serene Grace
Careful nature
EVs: 252 HP / 224 SpD / 32 Spe
- Iron Head
- Body Slam
- Wish
- Protect

Jirachi tags along as the hax master! I decided to go with the standard specially defensive Rachi to better complement Landorus-T's synergy with the rest of the team. I often switch back and forth between Body Slam and Thunder Wave, depending on what fits the situation: when I don't mind being risky, I use Body Slam; otherwise, it's Thunder Wave all the way. Obviously, Iron Head is on this set for ParaFlinch spam. I've also used Fire Punch to rid my team of any annoying Scizor switch-ins to Jirachi; I know I have Heatran, who is described next, but a bit of extra assurance is never a bad thing. I've considered giving Heatran an SR + 3 Attacks set so Jirachi could run a spread of Iron Head / Body Slam / Fire Punch (or Ice Punch) / Wish, but I'm pretty satisfied as it is for now. Currently, it's running Iron Head / Body Slam / Wish / Protect, as listed above.
Synergy: Fire: Keldeo-R / Latios / Heatran; Ground: Landorus-T / Latios
________________________________________________________

Heatran @ Leftovers
Trait: Flash Fire
Calm nature
EVs: 252 HP / 252 SpD / 4 SpA
- Lava Plume
- Toxic
- Stealth Rock
- Roar

When I considered Heatran for the last specially oriented slot, I was suggested a defensive set, although I really am better suited to the standard specially defensive set, so I'll be using that for the time being. I decided to run the moveset of Lava Plume / Toxic / Stealth Rock / Roar to stop the momentum of any setting up done by Volcarona, SubCM Jirachi, and other Heatran variants that lack Earth Power. Roar, when my own hazards are up, is very useful for flushing out the opponent's Pokémon. Its role is to devour the Fire attacks aimed at both Jirachi and Forretress, then set up Stealth Rock, spread Toxic, or Lava Plume the opponent.

Synergy: Ground: Landorus-T / Latios; Water: Keldeo-R / Latios; Fighting: Landorus-T / Latios
________________________________________________________

Forretress @ Leftovers
Trait: Sturdy
Relaxed nature
EVs: 252 HP / 176 Def / 80 SpD
IVs: 31/31/31/31/31/0
- Gyro Ball
- Volt Switch
- Rapid Spin
- Spikes

Now we come to our last Pokémon: Forretress, the iron ball! Forretress earned the spot for its ability to spin away hazards, take physical attacks with impunity, and Volt Switch out against predicted switch-ins like Magnezone, Heatran, etc. Forretress is also the ideal team member to bring in against predicted Thunder Waves, making it hard for the opponent to annoyingly burn my team with a surprise Fire Punch (from the likes of Jirachi, Tyranitar, etc.) or an unpredicted Will-O-Wisp.
Its current moveset of Gyro Ball / Volt Switch / Rapid Spin / Spikes gives it a great set to abuse repeatedly, using its fully invested HP to come in on +1 DD Outrages from Haxorus, Dragonite, Salamence, etc.

Synergy: Fire: Keldeo-R / Latios / Heatran
________________________________________________________

Conclusion: So, there you have it! I appreciate those who took the time to read my RMT and rate it or suggest changes. I'll be updating anywhere from every other day to every third day if possible, depending on how many suggestions come in. I've recently dabbled on the OU ladder on Showdown!. Those on the server may or may not have faced this team, but, regardless, have fun with it!
4-H Alberta Steer Carcass Competition Guidelines
Deadline: Oct 30
Goal: The goal of this competition is to find the carcass that provides the highest quality beef for a restaurant.
Requirements: A 4-H member in good standing, located and registered in the Province of Alberta. All Policy 6.05 Rules and Regulations must be followed, in addition to the following:
- An expression of interest must be provided to the 4-H Specialist (firstname.lastname@example.org) by October 30, 2016.
- Carcass steers must be weighed in and tagged with a 4-H and CCIA tag, with an Animal Registration Form and registration fee submitted to the 4-H Alberta Steer Carcass Competition Committee. The deadline for weigh-in, the animal registration form and fee is December 1, 2016. The Provincial Carcass Committee will provide a date range, to those who express interest, for when all carcass competition animals must be weighed in. These weigh-in dates will be in November.
- A picture of the animal on weigh-in day must accompany the animal registration form.
- The animal must be "tie broke."
- Members must agree to deliver the animal to one of the abattoirs selected by the Provincial Carcass Committee on the date specified. Estimated delivery will be early July.
- Members must provide a final picture of the member and animal on the day of delivery to the abattoir.
- Provide a display/PowerPoint on the animal for use at the Centennial Celebration at Fever Weekend, August long weekend 2017.
For more information contact:
Photos by 4-H Alberta. Summer Synergy 2016 in Olds, Alberta
Your Chrysler dealership isn't the only place to buy trusted replacement parts for your Intrepid. Chrysler automobile enthusiasts have become used to a certain level of reliability when driving their Intrepid around town. You purchased your Intrepid because its high value and trusted name appealed to your senses; so why would you settle for second best when it comes to great quality auto parts? Make no mistake, you bought your Intrepid because you wanted strength and a dash of luxury impeccably combined into one car, truck, or SUV. It requires a certain synergy - the many cooling components on your Chrysler Intrepid working in concert - to maintain constant engine temperature, and if one fails then the whole system is compromised. A very important one is the cooling fan assembly, which turns on when the engine temperature reaches the high end of its nominal range. Most of the time, your Chrysler Intrepid cooling fan assembly turns on when your car or truck is not traveling fast; this is because the radiator is not able to scoop up enough air to function properly. As the cooling fan pulls air through the metal fins of your radiator, the temperature of the coolant inside the core lowers and it re-enters the engine. Buy all the car parts you will ever need from carpartsdiscount.com and save time and money on all your upcoming repairs. At Car Parts Discount, we have real customer support agents on the phone with enough expertise to help you select the appropriate Chrysler Intrepid part for your needs. Don't let a cooling fan assembly repair bring about the demise of your car, truck, or SUV, and don't risk lowering its resale value by buying anything but the best replacement parts. Need 2004, 2003, 2002, 2001, 2000, 1999, 1998, 1997, 1996, 1995, 1994, 1993 Chrysler Intrepid Cooling Fan Assembly parts? We've got them right here.
There’s no mystery or range of expectations whatsoever for the Golden State Warriors this year. A team already in the conversation for best of all time before a few crazy weeks in May and June went out and added one of the best players of a generation squarely in his prime, instantly transforming the letdown of history slipping through their fingers into a whirlwind of excitement at fielding the most dominant on-paper squad ever assembled. Even if title-or-bust is the obvious mantra surrounding this team, the path toward glory will have plenty of intrigue along the way. Which lessons, if any, should be drawn from last year’s eventual shortcomings? How will a combination of offensive talent never before seen on a single roster coalesce and adjust to the Xs and Os of a virtual All-Star team? Will defense or depth in certain areas be a realistic problem minus a couple key contributors, or will the overall skill level simply overwhelm these kinds of concerns? With all this and more, Basketball Insiders previews the 2016-17 Golden State Warriors. FIVE GUYS THINK The Warriors are super good. What else really needs to be said? They’re basically an All-Star team set to play against a field of proles all season long. Kevin Durant was a huge acquisition, Stephen Curry is so in the zone and Klay Thompson is the best shooting guard in the league. Draymond Green can guard all five positions at an elite level, the bench is still stacked and Steve Kerr is a great coach. We expect big things, but that’s only because big things seem inevitable. Anything can happen (just ask the 2003-2004 L.A. Lakers), but “anything” also can include a championship. 1st Place – Pacific Division – Joel Brigham Adversity builds character. The heart of a champion is often determined by how well they respond to challenges that would break normal spirits. 
The Warriors were within one victory of capping off a historic 73-win regular season with a repeat championship, but the club dropped three straight games in the Finals and watched the Cavaliers celebrate on their own court. In many ways that setback was the first true test for the Warriors, who had begun to run roughshod over the league with little resistance. The club was already built to make another trip to the Finals in 2017, but the addition of All-Star Kevin Durant essentially makes this a lock – barring major injury. See you in June. 1st Place – Pacific Division – Lang Greene The Golden State Warriors were already elite and then they added Kevin Durant. And this isn't the same as when LeBron James and Chris Bosh joined Dwyane Wade in Miami. The Warriors already have really good chemistry, and Durant is going to fill the starting position that Harrison Barnes held. The dynamic will have to change on offense somewhat since Durant and Stephen Curry both need the ball in their hands, so it will be up to Warriors head coach Steve Kerr to adjust accordingly. Another scary part about this team is that Durant flashed defensive versatility in the postseason that reminds us of Draymond Green. If Durant can continue defending at that level, this Warriors team will basically be unstoppable. It should be noted that some key contributors from the last few seasons are now gone, but the Warriors did a nice job of plugging the holes that were left after adding Durant. This team is stacked and should make it back to the NBA Finals this season. 1st Place – Pacific Division – Jesse Blancarte Let's not kid ourselves into thinking that the Warriors aren't the favorites to win their division, their conference and the 2017 NBA Finals. What I will say, though, is that it's not every day that you see a team that won 73 games and took a 3-1 series lead in the NBA Finals radically redesign itself.
Of course, adding Kevin Durant to the already big three of Stephen Curry, Klay Thompson and Draymond Green seems worth it, but let’s take a moment to recognize that Harrison Barnes, Andrew Bogut, Festus Ezeli, Leandro Barbosa, Brandon Rush and Marreese Speights are all gone. Those six guys were among their top 11 rotation players last season, and they have effectively been replaced by Durant, Zaza Pachulia, David West, Phil Pressey and (perhaps) JaVale McGee. I obviously like the Warriors to win the Pacific Division, but for me, there is enough intrigue with the new core in Oakland to keep me watching all season long. I doubt Steve Kerr even entertains the idea of allowing his team to chase down 70 wins again, because losing the Finals last year probably changed the perspective of everyone associated with the team. We’ll spend a lot of time talking about these guys this coming season, so I’ll end this here and just state the obvious: they’re the clear favorite. 1st Place – Pacific Division – Moke Hamilton Anything less than a championship will obviously be a disappointment for this Warriors squad. I know a lot of NBA fans were upset about the Kevin Durant addition because they believe the 2016-17 season will now be pretty anticlimactic. However, as we saw in last year’s NBA Finals, nothing is guaranteed in the NBA. Injuries, chemistry issues and more can change the landscape of the NBA in an instant. We’ll see if the Warriors can live up to the ridiculously high expectations. My guess is that they will – mainly because their star-studded squad is full of unselfish players who are versatile and complement each other well. But titles aren’t won in the offseason, so we’ll have to see how they come together. 1st Place – Pacific Division – Alex Kennedy TOP OF THE LIST Top Offensive Player: Kevin Durant Honestly, how is one supposed to support a single candidate here for a team that now boasts two of the five most devastating offensive players in the game? 
There can be absolutely zero argument against either Kevin Durant or Steph Curry here, but the nod goes to KD primarily for this reason: He’s slightly more matchup-proof. Don’t fly off the handle, Chef Curry fans – no one’s doubting Steph’s ability to bend physics and break defenses on a night-in, night-out basis. He’d have won this category going away over Durant and any other player on earth last season. But while some of it was surely due to lingering injury issues and other context, we saw smart defenses poke tiny holes in his preferred methods of dominance in the postseason. In particular, opponents began stationing a wing player on Draymond Green and negating the deadly Curry-Green pick-and-roll by switching it between two guys capable of hanging with Steph off the dribble for a possession at a time. Not everyone has the defensive talent or discipline to pull this off – and Curry at his full powers can often abuse these switches himself – but the theme certainly looked primed to become a blueprint for those with the right personnel. In comes Durant, and out goes that theory. Want to switch the Curry-Durant pick-and-roll? Fine with them. Go right ahead and switch a smaller guy onto Durant, who shot an unreal 61 percent in the post last year and was the league’s most efficient per-possession volume player on the block, per Synergy Sports. The opponent is clogging the block and denying the entry? Cool, either they’ll rotate to another knockdown shooter for an open three or simply give Durant the ball in isolation, where he was also a top-10 efficiency player last year among volume guys (in a less spacious offense and more commonly against guys closer to his own size, at that). None of this even gets into KD’s numerous other prodigious skills, most of which fit like a glove within what was already the league’s most dominant offense. 
With Durant’s ability to rip up the one meager trump card the league had finally managed to conjure against them, this group could be primed to set records. Top Defensive Player: Draymond Green Ah, much easier. Green is already among the most versatile elite defenders in the history of the game – seriously, how many other guys ever have been capable of locking down all five positions on the floor individually, from running with jittery guards to protecting the rim against giants and LeBron James? The list of players who have done so at a consistently elite level while also playing a large role on the other end of the floor is probably limited to one hand, maybe even with a couple fingers to spare. Green will have even more defensive responsibility after the departure of guys like Andrew Bogut and Festus Ezeli in the frontcourt, but he’s proven more than up to the task. Top Playmaker: Steph Curry There’s an under-the-radar case to be made here for Green, a fantastic passer who actually averaged more nightly “assist opportunities” (passes which either became an assist or would have if the shooting player had made his resulting shot) than Curry last season, per SportVU data. On a deeper level, though, even Draymond himself would likely admit that many of these were simply a trickle-down result of the way Steph’s magic forces teams to contort themselves. Many of those four-on-three chances where Green is free to rumble down the lane and take his pick of open shooters evaporate with any other ball-handler in the world as his partner. Curry makes those plays possible while also maintaining his own strong passing numbers. His percentage of passes which led to a positive team event (assists, free-throw assists or secondary “hockey” assists) – a Holy Grail-type category topped consistently by consensus elite creators like Chris Paul, James Harden and Russell Westbrook – fell in the league’s top 10 last season, decimals behind LeBron James and Ricky Rubio. 
Curry remains the distributing engine that powers this offensive machine, and could even be in for an uptick with another elite offensive player in the lineup next to him. Top Clutch Player: Steph Curry This is another category likely to end up in a split of some sort between Curry, Durant and the general team scheme that the Warriors have done well at sticking with during rare clutch moments the last couple seasons. Curry took about a third of the team's regular season shot attempts during these minutes last year, with Durant right in the same neighborhood with OKC, albeit in a far different team context. Curry was more efficient than KD, particularly from deep (he shot 38.1 percent from three compared to 32.4 percent for Durant in the clutch), and who can forget that legendary game-winner on Durant's own floor? The Unheralded Player: Andre Iguodala Iguodala should have won the NBA's Sixth Man of the Year award, but his remarkable importance to his team continued to fly under the radar in favor of more traditional metrics. He doesn't post flashy box score stats or make many highlight-reel plays, instead contributing in exactly the sort of ways that go overlooked. He was a constant presence in crunch-time lineups last season (appearing in nearly 90 percent of all such minutes while healthy), and his on-court/off-court impact on variations of Golden State's "Death Lineup" was comparable to or perhaps even greater than that of any other member besides Curry himself. He's the most important defensive player outside Green and a vital locker room presence, and shouldn't be looked past as part of the heart of this team. Top New Addition: Kevin Durant Yeah, the contrarian pick might be a bit difficult to sell here. We covered much of Durant's potential impact, but a couple other summer signings will be meaningful as well. Zaza Pachulia took a huge pay cut to chase a ring, and should take over for Andrew Bogut in the starting center spot.
He’s not the passer, defender or overall basketball savant Bogut is, but he’s a more durable body who brings consistent effort and performance. David West brings another veteran voice to the locker room as a solid backup who can play both big positions, though it’s fair to wonder how much he has left in the tank at 36 years old. Neither are stars, but with so much skill at the top of the roster there’s no need – these guys will provide solid complementary skills and depth. – Ben Dowsett WHO WE LIKE - Klay Thompson Oh yeah, him. It’s a little insane that a truly legitimate case can be made for the second-best shooter in the entire NBA as just the fourth-most important piece of the equation for his team, but here we are. Concerns about Thompson’s usage and involvement are at least partially valid, but questions about his role aren’t: It’s the same. He’ll use wildly underrated conditioning (almost certainly best on the team and among the tops in the league) to continuously rocket around picks and open up space with his gravity offensively, then spend most of nearly every game locking down the opponent’s top guard defensively. Thompson is prone to the sort of shooting barrages even Curry can’t match, and we should see even more of these with Durant around to draw attention. If the number of mouths to feed in the offense becomes a problem, the Warriors will cross that bridge when they come to it. For now, they’ll simply plug even more talent into his lineups and turn Klay loose with the exact same mandate as last season. A not-so-bold prediction: He leads the NBA in three-point percentage among volume shooters next season. - Steve Kerr Whether you do or don’t believe Kerr had his share of correctable errors at various points last season, there’s little doubt the year will serve as a vital learning experience. Even the best of us make our share of mistakes, and failure is necessary before success can truly be attained for most in the NBA. 
Kerr has had the summer to reflect on his bigger picture (when he’s not fist-pumping at the team’s offseason acquisitions at least), and should have more perspective for a group almost certain to chase some more history. He’s already proven himself times over as one of the most adaptable and player-friendly coaches in the league, with strong tactical chops and a willingness to critique himself. It’s easy to forget he’s only entering his third NBA season at the helm – he’s still likely improving as a coach. - Shaun Livingston Livingston has put a catastrophic injury well behind him in becoming a key bench cog for the Warriors, one with the skills to prop up an offense for a few minutes a game (his midrange post game felt unstoppable for long stretches last season) plus the size at the point to maintain the Dubs’ switch-everything defensive identity. His size makes him capable of fitting in alongside starter-heavy units when there’s a need, and he may have been the single Warrior most capable of exploiting a one-on-one size mismatch in a pinch until Durant came along. He’ll continue to do important work behind the scenes. - David West West brings experience, savvy and guile as the team’s new elder statesman, and more importantly might save Kerr from his maddening tendency to trot Anderson Varejao out at strange times. He’s physical enough to help make up for a general lack of size at the big positions, and could be a great mentor for someone like Green. – Ben Dowsett SALARY CAP 101 Once Kevin Durant agreed to join the Warriors, the team renounced the rights to free agents Harrison Barnes, Festus Ezeli, Leandro Barbosa and others, then traded Andrew Bogut to the Dallas Mavericks. Once they had enough room under the NBA’s $94.1 million salary cap, they signed Durant to a two-year, $54.3 million contract. 
Durant can opt out next season, and either re-sign with Non-Bird Rights at $31.8 million — or push Golden State to use cap room to pay him a maximum salary that projects to be $33.5 million with a $102 million projected salary cap. The former makes a lot more sense for the team, and is probably a necessary sacrifice for Durant. Meanwhile, the team has 14 guaranteed salaries, with five players vying for one spot (Elliot Williams, JaVale McGee, Phil Pressey, Cameron Jones and Elgin Cook). The team has until the end of October to pick up Kevon Looney’s rookie-scale option. Next summer, the Warriors can get to about $60 million in cap space, but that number assumes Stephen Curry, Andre Iguodala and Durant move on as free agents. Naturally, the Warriors would seriously prefer to not drop under next year’s cap. – Eric Pincus Barring catastrophe, the Warriors will contend with some of the most dominant offenses in league history. They’re the most talented group of shooters ever assembled by a wide margin, and Durant brings them one of the league’s most efficient one-on-one options for the brief stretches where gravity within their team scheme isn’t enough. Expect them to once again be near the league lead in transition chances and efficiency, plus overall pace – no one is more comfortable trading quick possessions. They could be in for some amount of defensive slippage, but it’s possible this still remains a strength with a number of talented, like-sized guys in the rotation and Green at the helm. Green was also the linchpin for strong team rebounding figures, which should likely continue this year. – Ben Dowsett Depth concerns are probably overstated among those simply trying to find something negative about this team, but Green could be the exception here: Where guys like Durant, Curry and Thompson at least have some of the same overlapping skills and gravity, no one else on this roster does what Draymond does or even comes close. 
Any prolonged absence or slippage from Green is the only semi-realistic regular season scenario that really casts doubt on the team’s depth, but that scenario could be scarier than most assume. Age and durability are concerns for basically the entire frontcourt outside of Green. It’s also fair to wonder whether the likes of Pachulia and West are as capable on either end of the ball as Bogut and Festus Ezeli, particularly defensively, and whether the season-long trickle-down might be enough to drop the Warriors out of the league’s top 10 for defensive efficiency. – Ben Dowsett THE BURNING QUESTION Do the Warriors win a ring or not? Every team has an abundance of smaller queries that add up to this big one, but few others in recent memory have been in a situation where that all-important question is so singularly prominent. This is arguably the strongest collection of talent to ever share a court in this league, and anything but the ultimate prize will, right or wrong, be considered a failure. What the Warriors do during the regular season is about as close to irrelevant as it gets – their entire year will be sculpted with that couple-month stretch from April to June solely in mind. Expectations are sky-high, but so is this group’s confidence and, of course, their skill level. Only the hardware will represent a successful season this time around. – Ben Dowsett Have something to add to this story? Share it in the comments.
NewswireToday - /newswire/ - Fremont, CA, United States, 2008/08/19 - Emantras partnered with Abilene Christian University (ACU) to develop products that allow teachers to generate and push educational content to students for mobile consumption. Emantras is a digital education company that evangelizes education in multiple digital modalities. Our products, solutions, and services allow the learner to access education across multiple devices and in multiple formats to create an educational environment that is optimal for individual learning styles. We use technology so there is complete access to educational material to take advantage of "teachable moments". We have partnered with Abilene Christian University (ACU) to develop products that allow teachers to generate and push educational content to students for mobile consumption. These unique solutions empower educators to generate content on the fly and not be captive to pre-published content. There are several educational solutions in the marketplace that deliver content within a mobile environment; our goal was to develop solutions that provide educators with the means to create and deliver educational material without being encumbered by "pre-determined" content. This ensures the relevancy of the mobile content, which is now dynamic, creating an adaptive learning environment on mobile devices. "Working with Emantras has been a rewarding experience. Their vision, eagerness, and maturity bring benefit not just to our mobile learning initiative, but most importantly, to our students who will directly benefit from increased learning opportunities. I appreciate the professional approach to software design Emantras provides. That approach has allowed us to move from design, to prototype, to working product in just a short period of time. Regardless of our request, Emantras stepped up and met the challenge." - George Saltsman, Director of the Adams Center for Teaching and Learning at Abilene Christian University.
“Partnering with ACU allows us to focus on the technology and design of the solutions while accessing the collective knowledge of the ACU educational staff to help us understand and refine the workflow within an educational environment. This partnership creates an exciting synergy, allowing us to understand and deliver solutions of exceptional value to the learning eco-system.” - Supra Manohar, Executive Vice President, Emantras, Inc. About Emantras, Inc. Emantras (emantras.com) is a leading digital education and performance improvement company, delivering comprehensive and versatile educational solutions with world-class training options to enterprises seeking to excel. Emantras provides engaging and user-centric learning experiences that enable effective knowledge propagation through integrated digital and mobile solutions. Backed by years of experience combining learning techniques and technology, Emantras provides pioneering, reliable, and transparent solutions, on time and on budget.
Statistics show that about 170 million people in the world have diabetes mellitus and that this number will double in the next 20 years (Wild et al., 2004). The prevalence of type 2 diabetes is inextricably linked to the increased incidence of cardiovascular disease and of obesity due to adipose tissue accumulation. Moreover, diabetes mellitus arises from the accumulation of defects in different tissues of the body, for example the liver, skeletal muscle and pancreas, which are involved in glucose homeostasis and insulin regulation. Therefore, different proteomics approaches have been used to understand the pathophysiological processes that lead to diabetes development and to identify pathways to target for diagnostic and therapeutic analyses (Parikh and Groop, 2004). In addition, transcriptomics approaches such as expression profiling of mRNA have identified several candidate genes associated with diabetes: FOXC2 (Ridderstrale et al., 2002) and calpain-10 (Baier et al., 2000), to name a few. Uncoupling proteins (UCPs) are a set of proteins found in mitochondria which can dissipate the H+ gradient across the inner mitochondrial membrane. UCPs have been proposed as candidate genes for type 2 diabetes because they are able to decrease membrane potential and augment thermogenesis. Among the three UCP homologues, UCP1, UCP2 and UCP3, studies using yeast and knock-out mice have found that UCP2 and UCP3 possess uncoupling activity (Dalgaard and Pedersen, 2001). In studies using UCP2 knockout mice, phenotypic changes such as increased insulin secretion leading to hyperinsulinaemia, increased mitochondrial coupling activity and altered ROS production established UCP2 as a suitable candidate gene for type 2 diabetes analysis (Arsenijevic et al., 2000; Zhang et al., 2000). The proteome is the full complement of proteins expressed in a biological system at a specific time point.
Since the proteome is dynamic, its study, i.e. proteomics, has gained interest in biomedical research. Proteomics is now widely used in clinical applications and has enabled major breakthroughs in the discovery of diagnostic and prognostic disease biomarkers. It is particularly useful and promising for biomarker discovery through the analysis of biological fluids. These biomarkers are vital for risk identification and for detection of a disease at an early stage. Nonetheless, the proteome of a biological system is complex, since the genome can give rise to different mRNA transcripts through alternative splicing, and a series of post-translational modifications will then result in a whole series of different proteins. In general, proteomic analyses comprise two aspects: expression proteomics and functional proteomics. Typical technologies in expression proteomics are separation by two-dimensional gel electrophoresis (2DE) followed by identification using matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF-MS). Advances in proteomic analysis have seen other technologies being applied, such as surface-enhanced laser desorption ionisation (SELDI) mass spectrometry and electrospray ionisation (ESI) MS/MS. Fundamentally, expression proteomics involves the identification and quantification of protein biomarkers, as well as the characterisation of post-translational modifications and cellular localisation. Functional proteomics, on the other hand, involves the identification of phosphoproteins to provide more insight into signalling pathways, along with the study of protein-protein interactions and protein networks (Scott et al., 2005). Proteomics techniques and their merits 2DE and MALDI-TOF-MS This system is the most commonly used method in proteomics research.
Complex protein mixtures in a biological fluid are separated by 2DE, which combines isoelectric focusing and SDS/PAGE. Proteins are first separated according to relative charge based on isoelectric point, followed by a second-dimension separation based on molecular mass into single detectable protein spots. The protein spots separated on the gel are often visualized with different stains, such as silver staining, Coomassie Blue and Sypro Ruby. Sypro Ruby overcame the problems of low sensitivity (Coomassie Blue) and poor dynamic range (silver staining), with higher sensitivity (1-2 ng) and a linear dynamic range of three orders of magnitude. Pre-staining with Cy dyes is sometimes carried out but is not common. This method usually reveals more than 1000 apparent protein spots on a single gel. Different spectra of blue and red dye indicate the expression behaviour of the proteins, either over-expressed or under-expressed. The expression profile is then further analysed using different mass spectrometry techniques. Conventionally, the protein spots of interest are excised from the gel and in-gel-digested by proteases, often trypsin, and the resulting fragments are spotted onto a MALDI-TOF-MS plate. The samples are first dried, then coated with an acidic matrix and subsequently subjected to laser radiation. Peptides are separated according to their mass/charge (m/z) ratio, which is determined from their time of flight. The data from the analysis are then compared against an online database, for instance Mascot, to identify the protein spots of interest (Poon and Mathura, 2009; Scott et al., 2005). The 2DE coupled with MALDI-TOF-MS technique has been used widely for the analysis of whole proteins, and the gel can be analysed through a myriad of available stains and imaging software. 2DE is considered the best method for identifying proteins where various post-translational modifications and splice variants are present.
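The database-matching step at the heart of peptide mass fingerprinting (comparing observed MALDI-TOF peaks against theoretical tryptic peptide masses, as engines like Mascot do with probability-based scoring) can be illustrated with a minimal sketch. Every mass, tolerance and protein name below is fabricated for demonstration:

```python
# Toy peptide mass fingerprinting (PMF) match: count observed m/z peaks
# that fall within a tolerance of any theoretical tryptic peptide mass
# for each candidate protein, then rank candidates by match count.
# All values are fabricated; real engines (e.g. Mascot) use
# probability-based scoring, not a raw count.

def match_peaks(observed_mz, theoretical_masses, tolerance=0.5):
    """Number of observed peaks within `tolerance` Da of any theoretical mass."""
    return sum(
        any(abs(mz - mass) <= tolerance for mass in theoretical_masses)
        for mz in observed_mz
    )

# Hypothetical database: protein -> theoretical tryptic peptide masses (Da)
database = {
    "protein_A": [842.5, 1045.6, 1296.7, 2211.1],
    "protein_B": [915.4, 1106.5, 1570.7, 1940.9],
}

observed = [842.6, 1296.5, 2211.3, 1500.0]  # peaks from the excised spot

scores = {name: match_peaks(observed, masses) for name, masses in database.items()}
best_candidate = max(scores, key=scores.get)  # "protein_A" in this toy example
```

The point of the sketch is only the workflow: ionised peptides yield a list of m/z peaks, and identification reduces to ranking database proteins by how well their predicted digest matches that list.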
Ongoing technical advances in 2DE have improved the reproducibility, sensitivity and throughput of proteome analysis. Furthermore, this method can analyse up to 1000 different target protein spots on a single gel, which renders it suitable for global analysis of protein expression in a biological system. In other words, 2DE allows hundreds of proteins to be separated and displayed on a single two-dimensional gel, enabling a global view of proteins at a given point in time (Scott et al., 2005). 2DE has been employed in research comparing the renal proteome of type 1 diabetes mellitus nephropathy and non-diabetic mice; further identification by MALDI-TOF-MS revealed under-expression of elastase IIIB and over-expression of monocyte/neutrophil elastase inhibitor (Thongboonkerd et al., 2004). In type 2 diabetes, 2DE coupled with MALDI-TOF-MS has identified various protein variants in blood plasma and serum. Although 2DE has high resolving power and large sample loading capacity, reproducibility is its main setback. Nevertheless, the easy visualization of protein variants by 2DE gives a very informative analysis of a proteome, and it is currently the fastest technique for directly targeting protein expression differences. The SELDI-TOF technique is described by Caffrey (2010) as uniquely suited to studying the urine proteome thanks to its high salt tolerance, the small sample required for analysis and its high-throughput properties. Urine is one of the major samples used for proteomic analysis in diabetes mellitus studies, as recently carried out by Andersen et al. (2010). The SELDI-TOF technique weds MALDI-TOF with chromatography: functional groups are immobilized on a chip surface and proteins bind to them.
Proteins are bound by exploiting different chemical properties such as anion and cation exchange, reverse phase and metal affinity, and surfaces pre-coated with reactive groups can capture various proteins such as antibodies and receptors. A protein identification method similar to that of MALDI-TOF-MS is applied afterwards, where the proteins bound on the chip are ionised and separated by TOF according to m/z ratio. Sundsten et al. (2006) identified several serum proteins from normal glucose tolerance individuals and type 2 diabetes mellitus patients using the SELDI-TOF technique. The experiment was successful, as four differentially expressed proteins were discovered: apolipoprotein C3, transthyretin, albumin and transferrin. The main advantage of the SELDI-TOF technique is that it does not require a salt removal step, unlike 2DE and MALDI-TOF-MS. The steps involved are simple: for example, the protein-bound chip is washed in deionized water and can be analysed by MS after it is dried. Furthermore, SELDI conserves the analysed samples; as little as 5 µl is required for analysis. SELDI's high throughput allows simultaneous protein profiling of many urine samples, speeding up the discovery process. However, reproducibility is an issue that needs to be addressed. Moreover, SELDI can only identify biomarkers; further analyses such as MS/MS analysis and peptide mass fingerprinting are required for protein functional analyses (Caffrey, 2010). Liquid chromatography coupled to tandem MS (LC-MS/MS) In LC-MS/MS, complex protein mixtures are first digested by proteolytic enzymes, usually trypsin, and liquid chromatography is then used to further fractionate the resulting peptides based on charge, pH or hydrophobicity. As Wang and Hanash (2003) highlighted, cation exchange chromatography preceding reversed-phase chromatography is usually applied. Subsequently, peptide analysis by MS/MS is carried out.
The fragmented peptides from the first phase of MS are further subjected to a second phase of MS to obtain peptide sequence information. ESI is usually coupled to quadrupole mass analysers in tandem MS analysis (Aebersold and Mann, 2003). Zhan et al. (2004) used liquid chromatography-electrospray ionization-quadrupole-ion trap tandem MS (LC-ESI-Q-IT-MS/MS) to study the down-regulation of secretagogin in human non-functional pituitary adenomas. The most significant advantage of LC-MS/MS is its high throughput, which enables the identification of up to hundreds of proteins in 24 hours. In contrast to 2DE, LC-MS/MS has a wide dynamic range of protein concentration. However, this technique does not comprehensively identify splice variants and post-translational modifications in a complex peptide mixture. A genome-wide study of the expression of mRNA levels is termed transcriptomics. The transcriptome present in a biological system holds an accurate representation of important biological phenomena, where gene expression patterns give potential insights into the development and mechanism of a disease. Furthermore, the transcriptomics approach has been adopted to identify biomarkers in clinical applications for diagnostic, therapeutic and prognostic purposes. Transcriptomics has been spearheaded by microarray technology, whereby the gene expression profile or the whole transcriptome in a sample can be analysed. Transcriptomics differs from proteomics in that the former can study the expression patterns and behaviours of multiple genes simultaneously. The analysis of the transcriptome, i.e. the RNA, is vital with respect to the genome, as the RNA sequence can be modified by either differential splicing or RNA editing.
Given that skeletal muscle is one of the key sites of insulin-stimulated glucose disposal, microarray analyses in type 2 diabetes studies have been carried out to identify two important regulators of oxidative phosphorylation, namely PPARγ and NRF1 (Patti et al., 2003). Transcriptomics techniques and their merits The principle of microarrays is based on the complementary hybridization between nucleotides harvested from the sample and the DNA sequences, of which up to thousands can be present on a small platform. Generally, gene expression microarrays can be categorised as cDNA microarrays and oligonucleotide microarrays. cDNA microarrays utilize cDNA probes, usually of about 500-5000 bases. The mRNAs isolated from the target sample are first treated with reverse transcriptase and subsequently labelled with fluorescent tags, usually green and red. The labelled cDNAs are then hybridized to the cDNA microarray. The image is scanned and the colour intensity and colour changes are visualized through computer software, from which the level of expression of each transcript can be read. In an oligonucleotide microarray, the RNAs isolated from samples are used to synthesize double-stranded cDNA. This cDNA serves as a template to construct biotin-labelled cRNA. The biotin-labelled cRNA then hybridizes to the nucleic acid probes on the microarray. The microarray is scanned and the abundance of each transcript can be read by observing the amount of biotin-labelled cRNA associated with each DNA probe location (Kittleson et al., 2009; Albelda and Sheppard, 2000). Microarrays are utilized to identify over-expression or under-expression of genes between disease states, to discover vital pathways in a disease mechanism and biomarkers for therapeutic targets. Microarray experiments are often followed by validation steps such as quantitative PCR (qPCR) and Northern blotting. In a study conducted by Zhang et al.
(2010), microarray coupled with multiplex amplification was used to type and subtype influenza viruses. A microarray containing 46 short virus-specific oligonucleotides proved effective in detecting 5 subtypes of influenza A, including H1N1. Microarrays enable a rapid, comprehensive and accurate diagnostic method and, at the same time, can type and subtype a particular strain of pathogen. Furthermore, mRNA expression profiles from different large data sets and different samples can be compared within a single database, thus allowing comparison between control and diseased samples. However, limitations such as low turnover, low gene expression and rapid RNA degradation have to be taken into consideration (Lockhart et al., 1996; Hyatt et al., 2006). Correlation between transcript and protein levels The transcriptome is the total complement of mRNA in a biological system at a given time point, and it serves as the template for protein synthesis, forming the proteome, the protein complement of the transcriptome. Proteomics on the whole is limited in the width and depth of its coverage owing to variations in protein properties such as hydrophobicity, size, charge, stability and abundance. By contrast, transcriptomics is cost-effective and high-throughput, and able to analyse up to thousands of transcripts in a single automated format. Anderson and Seilhamer (1997) first analysed the correspondence between mRNA and protein abundances in human liver and concluded that protein abundance cannot be reliably inferred from mRNA levels. Subsequent biological explanations attributed the differences between transcript and protein abundances to RNA splicing, protein turnover, allosteric protein interactions, proteolytic processes and, most importantly, post-translational modifications of proteins. In this case, the transcript and protein levels of the UCP2 gene can be elucidated by the aforementioned explanations.
In addition, protein abundance is also heavily affected by rapid mRNA degradation during translation (Guhaniyogi and Brewer, 2001). Furthermore, post-transcriptional processes such as alternative splicing of mRNA enable various proteins to be translated from a fixed number of genes. A reduction in the number of transcripts while protein levels remain unchanged can be explained by differences in shelf life: proteins have a longer shelf life than mRNAs (Pratt et al., 2002). Beyond these biological explanations, lack of correlation between transcriptome and proteome can also be attributed to technical issues with transcriptomics and proteomics technologies. Even the most sophisticated devices have flaws; for example, microarrays are unable to systematically recognize changes in splice variants, whereas the proteins encoded by those splice variants can be detected by proteomics (Hegde et al., 2003). In general, the biological system of a patient with a disease is distinguished from that of a healthy person by a certain set of changes in gene expression and protein products. Usually, a single approach, be it proteomics or transcriptomics, does not suffice to give comprehensive insight into physiological and pathophysiological processes. For example, Zhang et al. (2010) combined transcriptomics and proteomics approaches in H1N1 diagnosis and treatment. Ideker et al. (2001) identified fifteen mRNA transcripts in Saccharomyces cerevisiae galactose utilization whose expression was unchanged while the corresponding protein expression changed to a certain degree; the overall correlation between mRNA and protein levels was found to be r = 0.6. The discrepancy was attributed to post-transcriptional regulation. Transcriptomics and proteomics are both cutting-edge technologies, vital in providing insights into diseases and contributing discoveries of clinical significance.
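The transcript-protein comparison behind figures like Ideker et al.'s r = 0.6 reduces to a Pearson correlation computed across genes. A minimal sketch, using fabricated abundance values purely for illustration:

```python
# Pearson correlation between per-gene mRNA and protein abundances,
# as used when comparing a transcriptome with its proteome.
# The abundance values below are fabricated for illustration.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical per-gene abundances (arbitrary units); protein levels
# track mRNA only imperfectly because of post-transcriptional effects.
mrna = [1.0, 2.0, 3.0, 4.0, 5.0]
protein = [1.2, 1.9, 2.4, 4.5, 4.0]

r = pearson_r(mrna, protein)  # high, but well short of a perfect 1.0
```

An r well below 1 on real data is expected for exactly the reasons discussed above: splicing, turnover and post-translational modification all decouple protein abundance from transcript abundance.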
By understanding each technology’s advantages, transcriptomics and proteomics can complement each other and work in synergy. For example, transcriptomic techniques can be exploited for their high throughput and lower cost; however, they are limited in the use of human target tissues for expression profiling. Proteomics can then be utilized to analyse the protein complement of alternatively spliced transcripts. Combinations of transcriptomic and proteomic approaches have proved successful in the identification of biomarkers in different kinds of cancer. For instance, YKL-40 was identified as a biomarker for glioblastoma multiforme using DNA microarrays and ELISA techniques (Tanwar et al., 2002). In the case of type 2 diabetes mellitus, utilizing both techniques gives a better understanding of the pathways and mechanisms involved and provides a more comprehensive prognostic and therapeutic approach.
Wait a minute. Grab a deck, put Street Peddler in it From a competitive perspective I’ve been impressed with the SanSan cycle thus far. Every pack has given us at least a few cards that have been making their way into tournament viable decks, and it’s hard to argue that it hasn’t had a substantial impact on the metagame. The Underway looks to be no exception to this rule, with Marcus Batty particularly drawing a ton of attention. But we’re not here to talk about dusty old sysops with psychic powers today. Today we’re hitting the streets. Anarch – Where we’ve been, where we are Before we break down everyone’s new street heart, let’s have a look at the faction it’s been printed in. Arguably Anarch is the faction that’s undergone the most upheaval throughout the history of the game (probably tied with Shaper). After Jackson Howard was printed and Noise suffered, Anarch’s tournament viability was all but non-existent. Beyond some metagame calls with Whizzard and persistent Noise players, Shaper and Criminal became the factions that you played if you wanted to take home playmats. The Lunar Cycle marked the beginning of the return of the Anarch, with Cache and Inject throwing a bone to all the hungry Anarch players of the world. But as it turned out, The Lunar Cycle was just the starter before the delicious Order and Chaos main course. MaxX, I’ve Had Worse, Eater and Hivemind all enabled new and powerful Anarch builds. The faction was back with a vengeance. Up until this point the SanSan cycle has continued to give support to everyone’s favourite angry runners, with out of faction cards Career Fair and Net Ready Eyes finding their homes into more tournament winning Anarch decks. Crazy as it is, we seem to be living in a world where Anarch is a solid tournament choice, and Criminal is the one being desperately clung onto by die-hard fans of the faction. How far we’ve come. From a mechanical perspective, what makes the faction what it is? 
Understanding this will be key to assessing the place of new cards like Peddles McGhee. Beyond everything else, Anarchs are the best at attacking the Corp’s resources. Parasite, Imp and Vamp are just a few of the cards that smash down whatever the Corp has built up. If Shaper is about letting the Corp have whatever they want and navigating its way through it, Anarch disrupts the Corp directly. Why find your way through the maze when you can burn it down? Additionally, Anarch has always been the faction that engineers the most frightening and powerful board states. Whether it’s a Medium with ten counters on it or a Lamprey headlock, Anarch seems to be the faction with inevitability, power turns, and cards that demand immediate answers. While Noise takes this to the extreme with his mills, you could argue that even something like Reg Ass MaxX has a better long game than most of the decks out of other factions. Against Criminal and Shaper the Corp knows that it can build a bigger and better board state the longer the game goes. By contrast, nothing is safe from the Anarch arsenal. Power and inevitability come with a cost. The issue that Anarch has had to combat since day one is that when compared to their blue and green counterparts, they’re inefficient and slow. Up until the Lunar Cycle, Anarch card draw meant Wyldside and Anarch tutor meant Djinn. The Criminal and Shaper factions were able to offer tournament players that all-important consistency that would stop them from losing games because of bad draws. While this Anarch issue has been smoothed out with the release of I’ve Had Worse, Inject and Earthrise Hotel, they’re still not out of the woods. Anarch is still the faction most susceptible to giving up agendas because they didn’t have the right breaker at the right time (also known as ‘gear checking’).
With no tutors like Self-modifying Code or generous hand refillers like Quality Time, Anarch decks are the most likely to lose because they weren’t fast or efficient enough to keep up with the Corp; often they just don’t have the time. The powerful cards are already here, it’s just finding and playing them expediently that’s the issue. If we’re looking at room for improvement, it’s this. Enter Street Peddler The Card Itself – Who Is She? Before we look at the ins and outs of the various effects of the card, let’s be clear on what it is and what it does. For a start it’s a 0 cost resource. The cost of a card often makes or breaks it, and Street Peddler soars over this first hurdle. When it comes to cards that generate resources, it goes without saying that cheaper is better. Diesel is so good because you can just play it as soon as you draw it without having to worry about it taking you out of range of playing your other cards. Being out of range to even play your economy cards in the first place is even worse. Peddles McGhee is never going to ask us for money upfront. Because Street Peddler is also not unique, there’s very little discouraging us from just firing it off as soon as we draw it and having it sit on the table. Being a resource isn’t really any kind of advantage for a card that we don’t want to use with Career Fair, and it is a small liability if we want to play Vamp and Account Siphon in our deck. However it’s increasingly common for Anarch decks to be resource based anyway, so the type isn’t a downside. It isn’t going to spend a ton of time hanging around, so we don’t really have to worry about it getting trashed when we’re tagged. It’s also Seedy, so it will be all ready to go when the Seedy tutor is printed. Let’s look at the text box. Once Street Peddler is installed, the only way to get at any of the cards underneath it is to use its trash ability and install one of them.
Since Street Peddler only installs cards, any events that go underneath can’t be played, and are lost to us. Other than that, however, Street Peddler can currently install every other card type in the game. The cards hosted on Street Peddler are facedown, with only the Runner being allowed to look at them. This means that Street Peddler hides information that the Runner has from the Corp, previously a privilege only enjoyed by the Runner’s Grip and the contents of their Stack. If Street Peddler is trashed, whether through the activation of its ability or through something like Aesop’s Pawnshop, the rest of the cards will go to our heap. This means that Street Peddler plays pretty well with any kind of heap recursion; if we have a Clone Chip installed and Peddler hits two of the breakers we want to install, we just found two breakers for one click. It also means that we can get back any crucial events that we hit with Deja Vu or Same Old Thing. Also bear in mind that Street Peddler’s ability doesn’t require a click. We can install something in the middle of a run or during the Corp’s turn. It should go without saying that this is an important part of the card. Part of the power of Clone Chip and Self-modifying Code is that they operate similarly. With the ins and outs of the card noted, let’s examine Street Peddler. Why exactly do I feel the need to write an article about it? Burrowing Through the Stack – The See Three Let’s start by assessing the most important part of Street Peddler. When we install it we’re giving ourselves the option to install one of the top three cards of our deck. Provided that we don’t hit three events with Street Peddler, installing it is a strict upgrade to drawing a card. Even if we just hit one card that we’re interested in installing, we get to see a card that we hadn’t previously, and can install it, while only spending the one click installing Street Peddler.
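To put a rough number on the whiff risk, here’s a quick probability sketch. The deck size and event count below are hypothetical, purely for illustration:

```python
# Back-of-the-envelope for Street Peddler's "see three": the only way
# it finds nothing installable is if all three hosted cards are events.
# With E events among N cards left in the stack, that chance is
# hypergeometric: C(E,3) / C(N,3). The numbers below are hypothetical.
from math import comb

def p_all_events(deck_size, events):
    """Probability that all 3 hosted cards are events."""
    if events < 3:
        return 0.0
    return comb(events, 3) / comb(deck_size, 3)

# e.g. 45 cards left in the stack, 20 of them events
whiff = p_all_events(45, 20)  # chance Peddler finds nothing installable
hit = 1 - whiff               # chance at least one hosted card is installable
```

Even in a deck that’s nearly half events, Peddler whiffs less than one time in ten under these assumptions, which is why treating it as a near-strict upgrade to a draw is usually safe.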
We don’t even have to acknowledge the 1 credit discount or the instant-speed install to see how powerful that is. We’re not losing any credits for this privilege either, so unless we think we might get decked this game, installing Street Peddler as soon as we draw it comes with no downside. In fact, even if we had to pay a Click to install one of the cards under Street Peddler it would still be better than drawing a card. Instead of giving us access to one card that we can choose to install, it has simply given us access to three. We’ll cover the discount, the instant install, the possibility of decking and the saved Click later in the article. Street Peddler also trashes the two cards that we don’t install when we use its second ability. For the most part Anarch would rather have a card in its heap than its stack, because Anarch decks tend to pack a lot of recursion. If we’re playing Street Peddler in Noise and we hit two copies of Cache, we can install one of them immediately, sell it to Aesop next turn, and then Deja Vu two copies into our hand. If we have a Same Old Thing in play, Street Peddler also helps us to find that crucial Legwork or Blackmail. In fact, Street Peddler could even hit a Same Old Thing and an Event (though this is more of a possible fringe benefit than a huge point in its favour). The fact that installing it has essentially no downside is huge. When assessing the tournament viability of a card, it’s all too easy to focus on the best case scenarios without looking at the worst case. Too many runners have excitedly sleeved up Comets and Quest Completed because they envision some absurd sequence of events where they draw their cards in the perfect order and completely destroy the Corp. Of course you should consider a card’s best case scenario when you play it, but it’s important to focus on when the card is going to be at its worst.
The reason that cards like Diesel and Dirty Laundry appear in so many tournament-winning decklists is that they’ll almost always be worth playing; these are the kinds of cards that grind you through Swiss rounds. If we want to know if a card is worth the include we need to look at how good it’s going to be in bad situations. Street Peddler asks so little of us. As long as we have a Click to spare, it’s going to give us an advantage. Crucially we’re losing practically nothing when we’re playing it versus drawing a card, so unless you never plan to use a Click to draw for the rest of the game (something that’s fairly unlikely in the majority of Anarch builds) it’s going to pull its weight. If we’re looking at cards that are strict upgrades to drawing there’s only one other that takes this title: the aforementioned Diesel. Street Peddler is in good company in this regard. Even if we never install one of the cards under Street Peddler, it’s dug us three cards through our stack to help us find what we need. If we’re looking for one specific card Street Peddler has helped us, even if it doesn’t find it. Bear in mind that all these statements only apply to the ‘see three and install one of them’ aspect of Street Peddler. This is before we examine the numerous other advantages. So Much to Do – The Click If we install one of the cards underneath it, Street Peddler has saved us a Click. We go from having the card in our stack where we can’t access it, to in play, for one Click. The only other card in the game with this capability is Self-modifying Code. A lot of the time cards don’t allow us to save a click like this; Hostage and Planned Assault both charge us an extra click for the privilege of playing the cards they find. If we use Street Peddler to find and install a Medium, we’re given an extra Click to run R&D with.
This will always be a big deal, but it’s particularly important when the two best Corp decks at the time of writing (Replicating Perfection and Near Earth Hub) pressure the Runner’s Clicks so effectively. In a world of rich Runners, the only resource that can realistically be taxed out is Clicks. Spending 0 Credits to effectively save ourselves a Click absolutely should not be underestimated. An extra Click at the right time is the kind of thing that can win us games, particularly in the Anarch faction, which is most likely to lose to a Corp that rushes out Agendas with gear checks. Got Some Rare Things on Sale Stranger – The Discount Street Peddler has a further benefit beyond allowing us access to a card and allowing us to install it for a single click. For some reason we also get a discount of one credit. I guess that we don’t get a warranty or something. Recognising the benefit of this doesn’t require as much of an explanation as the previously mentioned benefits, but it is worth acknowledging. If we do install the card under Street Peddler and it costs more than 0, installing it has been better than clicking for a Credit. We’re up one Credit, while also reaping the previously mentioned benefits. Anarch is not the faction of reducing costs, so this is new ground. This card operates on every economic level – we’re seeing more cards, saving time on getting one of them into play, and making money. Right Here, Right Now – The Instant Install We can install one of the cards under Street Peddler at any time. This has several benefits, some of which I’ll mention here and some of which warrant their own sections later. For now let’s talk about the fact that Street Peddler can install cards during runs, during the Corp’s turn or when you can afford them. All of these are very real benefits that you’re going to appreciate during games.
Let’s say you hit an Icebreaker that you’ll probably want with Street Peddler. Paying upfront costs for Icebreakers when you’re not running is always a little annoying, since you’re losing Credits right now for some potential benefit later. This is particularly annoying with expensive Icebreakers like Yog.0 or Crypsis. Street Peddler allows us to have our Yog.0 at the ready without having to pay 5 Credits before we use it. We can happily make runs and wait for the Corp to rez their Quandary or Wraparound, at which point we happily pull off our breaker (at a discount) and avoid losing a Click to an End the Run gear check. With a D4v1d or a Mimic under Peddler we can avoid a nasty face check without having to commit resources up front. Let’s say we hit an Imp or a Medium: we can wait to see if we’re going to make it through the Corp’s Ice before spending our hard-earned Credits on the program we want to use. It’s an annoying loss of resources to pay for a Medium that sits uselessly when it turns out that we can’t get into R&D. Stimhack is worth mentioning separately. Stimhack and Street Peddler are both powerful 1-Influence cards in the same faction, so they will often make it into the same decklist on their individual merits. Street Peddler lets us spend Stimhack Credits on installing whatever is hosted on it, squeezing additional use out of them. MaxX and Kate often Stimhack a naked SanSan City Grid and use the rest of the money to Self-modifying Code or Clone Chip out something, and Street Peddler lets anyone make a similar play. Stimhacking out a Liberated Account is a huge economic boost. Stimhack + Street Peddler is an example of the best kind of combo: two cards that are great on their own combining to do something even more powerful. Installing cards during the Corp’s turn is the kind of benefit that comes up only occasionally, but it’s important when it does. The most obvious candidates for this are Clot and Plascrete Carapace.
We can just run out Street Peddler at no cost and wait to see what the Corp is up to. If they go for that SEA Source, we’ll pull off our Plascrete and live to run another day. If they don’t have it, we’ll save our Credits. We don’t have to waste precious resources anticipating a situation that might never have occurred anyway. We also have the potential to surprise the Corp, something we’ll cover later. Another benefit of putting off the install until later is the Credit factor. Liberated Account is seeing a lot of play at the moment because it’s a solid economy card. However, the card’s high entry fee can sometimes be prohibitive. If we find it with Street Peddler, we can just leave it there until we’re ready to use it. We can go up to our maximum hand size and still have another card that we might want to install, so in that sense it’s like having an extra card in hand. Street Peddler has enough benefits that it would still be good if it made us install one of the cards right away, but it gives us even more options by allowing us to leave the cards under it until we want them.

I’ll Wait – Keeping Your Options Open

So far all the points in Street Peddler’s favour have applied even if we want only one of the cards hosted on it, but things get even better if we hit more than one card that we want. Because we can install the cards when we need them, we can leave several useful cards under Street Peddler and pull off the one we need when we need it. Let’s say we hit two Icebreakers and want to get into a Remote Server to trash an Adonis Campaign. We can run it, see what Ice they rez and pull off the right one (during the run, at a discount). If we hit Mimic and Medium we can run R&D, pulling off Medium if we get in or Mimic if Architect is rezzed. If neither of these things happens we can save our money, put off making the decision and do the same thing again in a few turns. This benefit also works well in conjunction with some of the benefits mentioned above.
If we hit a Plascrete and an economy card we can wait to see if the Corp tries to SEA Scorch us; if they don’t, we can grab the Daily Casts or Kati Jones that Street Peddler hit instead. Getting to make important decisions with more information available is a big deal, and it’s not the kind of benefit we’d usually see from an economy card. One of the reasons that Self-modifying Code is so powerful is that you can wait until the last minute to fetch the card you need with it. You force the Corp to commit its money before you need to commit yours. This benefit is further pronounced when we factor in the fact that Street Peddler hides information.

If It Weren’t For You Peddlin’ Kids – The Hidden Information

If Street Peddler hosted the cards it hit face up, all the benefits previously mentioned would still apply. Except it doesn’t, and I have to write about even more good things this card does. Street Peddler stands alongside the Runner’s Grip and Stack as the only information in the game that the Runner knows and the Corp doesn’t. This is huge when combined with the instant-speed install, because it can lead to the Corp taking actions that turn out badly for them. As soon as Street Peddler hits the table, the Corp suddenly has to respect that a Plascrete Carapace could come out of nowhere and make their expensive SEA Source look ridiculous. They could spend 3 Credits to rez an Enigma that they expect to fire and suddenly find that the Runner is busting through it without breaking a sweat. These are supposed to be the kinds of things that Shapers are known for, yet here we have a card with a lot of other reliable economic benefits that can surprise the Corp, out of Anarch. There’s potential to blow out the Corp and massively swing the game in the Runner’s favour. Other economic stalwarts like Inject and Diesel will very rarely do that. The potential to lead to extreme blowouts whilst also having very little downside is a potent combination.
It also helps us that if Street Peddler totally misses there’s no way for the Corp to find out; they must continue to respect it even if it does nothing for us. If Street Peddler hits our Blackmails, our opponent doesn’t know and must keep playing around the fact that we might have drawn them. I’ve mentioned this last because I don’t think it’s as important as the other upsides. I’ve seen some people focus on the surprise factor of Street Peddler, and I think that’s a mistake. In high-level play hidden information simply doesn’t play as much of a factor as you might think. Street Peddler would still be a very good card if the cards on it were hosted face up. I would be remiss not to acknowledge this upside, however. Clearly Street Peddler isn’t all insane value all the time. While I think that part of the card’s strength is its relative lack of downside, there are some points against it that I should mention.

Out of Stock – If We Miss

Sometimes Street Peddler won’t have anything that we want. It could hit all Events and unique cards that we’ve already installed, or it could just hit cards that aren’t going to be installed this game. Hitting Events is an obvious drawback, and hitting everything else mentioned is much less of one, so we’ll tackle them separately. It’s definitely true that Street Peddler gets worse the more Events you pack in your deck. Exactly how many Events is too many to discourage us from maxing out on Street Peddler in an Anarch deck is not within the scope of this article, but I’d posit that once over a third of our deck is Events it might be cause for concern. Within the Anarch faction this is extremely rare; part of the reason Street Peddler is so good in faction is that Anarch runs Event-light anyway. It’s fairly common for an Anarch deck to run fewer than 10 Events. Street Peddler is also good enough that I’d happily shave my Event numbers to make it better.
Bear in mind that playing Street Peddler isn’t making it any less likely that we’ll see our Events over the course of a game, unless we end the game with 0, 1 or 2 cards in our Stack. If you have Same Old Thing installed or in your Stack, bear in mind that Street Peddler might actually improve your chances of finding and playing the Event that you need this turn. This also applies to Clone Chip and Inject. If we’ve decided in deckbuilding that we want multiples of a unique card or more situational cards, we’re committing to the possibility of drawing them anyway, and Street Peddler hitting three cards that you’ll never install is the same as playing a Diesel and hitting three duds. Street Peddler isn’t anti-synergy with a deck full of those cards, because we’d have to see them at some point regardless, so unless there are much better alternatives Peddler should still be welcome. A lot of the time Street Peddler will dig us past multiple unique cards faster and keep us hitting gas.

End of the Line – If We Deck

Obviously, digging us through our deck very quickly starts to look like a downside in the games where we draw every card in our deck, because in that situation cards like Street Peddler have stopped us from eking out every last bit of value from our cards. While this is something to bear in mind, I constantly see people worrying about this situation way, way too much. Outside of MaxX, I tend to find that if I have time to dig through my entire deck I’m probably winning anyway. This is particularly true in Anarch, a faction full of powerful, high-impact cards and on-board long-term econ. If you’re Whizzard or Noise, seeing your whole Stack is going to be very good for you, and you shouldn’t worry about optimising for that situation during deck construction, since you ought to be winning that game regardless.
Anarch is the faction that loses on speed and Click compression, so making ourselves much stronger in this area while undermining our best-case scenario ever so slightly shouldn’t be something we worry about. Besides, it isn’t even guaranteed that Street Peddler will make things worse for us if we do deck; if it hits a breaker we already have and a unique Resource we’ve already installed, the decking factor isn’t even a worry, because they were dead cards anyway. It is true that sometimes you’ll be close enough to the bottom of your Stack that playing Street Peddler is incorrect, but this isn’t a strong strike against the card. Inject sometimes does this too and it’s not a problem, because it’s bad in a situation that’s already good for you. You’re always going to have an early game, and a lot of the time (particularly against good players) there won’t be a long late game. Stop worrying about what happens when you draw your whole Stack. The obvious exception to this point is MaxX. Right now I’m pretty sure that MaxX isn’t interested in Street Peddler, and she’s the only Anarch I plan not to play Street Peddler in for the foreseeable future. MaxX does need to worry about getting as much value as possible out of every card, because she can run out of cards without having practically won the game already.

The Street Belongs to Everyone – On Splashing

So far my examples have mostly focused on the Anarch faction, because as well as it not costing them Influence, I think Anarchs generally benefit the most from this effect, since they’re still the faction that lacks tutoring. However, in keeping with its low-cost theme, Street Peddler only charges Shapers and Criminals 1 Influence per copy. Because of its power I think that Street Peddler improves a lot of decks, making it a question of Influence a lot of the time. Even 1 Influence is a big ask when the card pool is as big as it is, though.
Datasucker and Quality Time are fantastic splashable cards too, but they aren’t seeing as much play outside of their respective factions anymore. Generally Runners tend to be more interested in spending their Influence on splashy out-of-faction effects instead of efficiency, so I’d be hesitant to throw Street Peddler into every Criminal and Shaper deck that plays light on Events. However, as we’ll see in the Decklists section below, there are some instances in which Street Peddler has small synergies out of faction that make it worth the include. You shouldn’t write it off as a card that you only play if you’re an Anarch. Runners not included in the Decklists that synergise with Street Peddler include Nasir (for installing during runs) and Hayley (who loves cards that don’t cost Credits).

Word on the Street – Decklists

Honestly, I’d play 3x Street Peddler in every non-MaxX, non-Event-heavy Anarch list. Just as Diesel has essentially become an auto-include in Shaper, Street Peddler is probably better than most of the cards in your deck. However, I understand that this is a bit wishy-washy, so here are some decklists including the Peddler that I think will particularly benefit.

Street Noise (45 cards)
Noise: Hacker Extraordinaire

3 Deja Vu
3 I’ve Had Worse
3 Sure Gamble
3 Clone Chip
3 Aesop’s Pawnshop
2 Daily Casts
1 Earthrise Hotel
3 Street Peddler

I play a lot of Noise, and recently I’ve been going cold on the Wyldside + Adjusted Chronotype combo. Making your good draws great at the expense of making your awkward draws worse just exacerbates the problems that Noise has anyway, and I was sick of digging for action and hitting Adjusted Chronotype. Right now my instinct is to play the full 3 of I’ve Had Worse, Inject and Street Peddler and just be as lean and quick as possible. Your Parasites, Imps and Deja Vus, combined with your ID ability, make your late game extremely powerful anyway.
Street Peddler is fantastic in Noise because it’s another card that digs us to Aesop quickly, and speed is Noise’s biggest problem. I added the miser’s Earthrise Hotel because I wanted a little more draw, but it may well be a terrible include. It might be better to play Symmetrical Visage or even another breaker or something. Faust is also an unknown quantity at this point, but it’s certainly worth testing. Even if you still want to play Wyldside in Noise I’d still give Street Peddler the nod, possibly losing I’ve Had Worse or Inject, depending on how much damage concerns you. Street Peddler is probably even better in Wyldside + Chronotype decks, because it can help you to find both combo pieces, and Peddler can store the Chronotype until Wyldside turns up.

Niles “Hhooo” Stanley Street Edit (50 cards)
Valencia Estevez: The Angel of Cayambe

3 Career Fair
3 I’ve Had Worse
3 Sure Gamble
3 Clone Chip
1 Plascrete Carapace
3 Daily Casts
3 Earthrise Hotel
2 Kati Jones
3 Liberated Account
2 Same Old Thing
3 Street Peddler
2 ZU.13 Key Master

Valencia is also a natural home for Street Peddler, for a couple of reasons. First of all, her 50-card minimum deck size is somewhat mitigated by the low-cost efficiency and digging that Street Peddler provides. We’re less likely to lose because we can’t find a key breaker in time. Street Peddler also makes use of Valencia’s Bad Publicity if we don’t need to spend any Credits during a run, say when we go to check Archives after a Jackson overdraw. This small synergy pushes the power of Street Peddler a little further. This list is based on the one that Stimhacker Hhooo took to a 5th/6th-place finish at the Philadelphia Regionals. I cut the Injects for the Street Peddlers and swapped the Deja Vus for Same Old Things to make better use of Peddler, but I haven’t tested those changes. It’s very possible that you want to keep the Injects and make other cuts, but I didn’t want to make Career Fair worse by cutting Resources.
It’s also true that Career Fair and Street Peddler play a little awkwardly together, but right now my instinct is that they’re both good enough that we should just give both of them the nod. Testing might disprove that notion. Street Peddler helps us to find Net Ready Eyes + Yog.0 as well, so having it in our list might make us want to swap the ZU.13s for that combo. Let’s have a look at some out-of-faction uses for Street Peddler.

Spooky Geist (45 cards)
Armand “Geist” Walker: Technolord

3 Account Siphon
1 Career Fair
2 Emergency Shutdown
2 Special Order
3 Sure Gamble
1 Clone Chip
2 R&D Interface
3 Daily Casts
2 Earthrise Hotel
3 Fall Guy
2 Kati Jones
2 Same Old Thing
2 Security Testing
2 Street Peddler
1 Femme Fatale
1 ZU.13 Key Master

Geist is the Criminal that almost certainly wants to be splashing Street Peddler. The little picture of a bin in Street Peddler’s rules text makes the efficiency it offers Geist completely insane, and I’d be surprised if there isn’t a Geist deck that wants at least two. The fact that Peddler dumps the non-installed cards in the bin helps Geist even more, since he’ll happily Same Old Thing an Account Siphon and draw a card, after already having drawn a card with Peddler. Card flow is typically a little awkward in Criminal, and Geist with Peddler is a legitimate solution. I’m unsold on the disposable breakers as a concept, which is why I haven’t included a list that plays them. Honestly, I feel that most Geist decks want Street Peddler, and I’m not attached to this particular list. Credit to Andrew ‘Xenasis’ Hynes for this list.

Drugs 2: Drugs Harder (40 cards)
Chaos Theory: Wunderkind

2 Quality Time
3 Sure Gamble
2 Clone Chip
3 R&D Interface
3 Personal Workshop
2 Same Old Thing
2 Street Peddler
3 Cerberus “Lady” H1
1 Femme Fatale
2 Gordian Blade
3 Magnum Opus
3 Self-modifying Code

Stimshop has fallen out of favour recently with the steady rise of classic Prepaid Kate.
However, Street Peddler may well be the shot of life that the deck needs. Street Peddler helps us to find our Personal Workshop while also doing a mini-Workshop impression when Stimhack is involved. Making Stimshop even faster while mitigating the set-up time is an appealing concept, and this list looks tournament-viable to me. Again, credit to Andrew ‘Xenasis’ Hynes for this list.

Would You Like a Bag? – Wrap Up

Street Peddler isn’t lighting the world on fire and turning the metagame on its head. But that’s because it’s an economy card, not some kind of ridiculous high-impact card that changes how we play the game. Sure Gamble doesn’t break the metagame either, but it’s still a 3-of in most Runner decks. I think you could defend the claim that Street Peddler is better than any other Runner card in Order and Chaos. Hopefully I’ve adequately demonstrated the ins and outs of why this card is as good as it is, and made a believer out of you. If you have any comments or criticisms, don’t hesitate to post them in the Stimhack thread accompanying this article, and I’ll do my best to answer them. Now hit the streets. I’d like to thank Andrew ‘Xenasis’ Hynes for providing two of the featured decklists and for his help with proofreading and editing. Credit to Daniel Baxendale and the patrons of Waylands Forge, Birmingham for the picture of a bunch of Street Peddlers.
Delegates from Georgia and Louisiana participate in the first regular winter session of the AVMA House of Delegates. The next regular annual session of the HOD will be in New Orleans this summer, in conjunction with the AVMA Annual Convention. Dr. James O. Cook, AVMA president-elect, presided over the House of Delegates' first regular winter session, held Jan. 12 during the AVMA Veterinary Leadership Conference. Previously, the HOD held an informational assembly during the leadership conference. The winter session is held in addition to the regular annual session held in conjunction with the AVMA Annual Convention. Dr. Mark Helfat, chair of the AVMA House Advisory Committee, said the winter session will create greater synergy between the HOD and other leadership committees and allow delegates more flexibility in conducting business. At the first regular winter session, delegates approved the minutes of the 2007 HOD Annual Session and a new House of Delegates Manual. Noteworthy changes to the manual included adding the Department of Homeland Security to the HOD advisory panel and making editorial changes to accommodate the regular winter session. The advisory panel now comprises eight agencies or organizations. Besides convening in formal sessions, the HOD met in reference committees and engaged in informal open discussion on a number of planned topics and additional ones introduced by delegates. Craig Little, director of the AVMA Information Technology Division, demonstrated the electronic voting system the AVMA has purchased for use by the House of Delegates and other entities. Delegates had expressed interest in electronic voting, which will provide instant, accurate, and retrievable results in elections and votes on issues. The results will be weighted as mandated by the AVMA Bylaws. Electronic voting will become operational when the HOD holds its regular annual session July 18-19 in New Orleans.
At first glance, it may seem counterintuitive for a community bank to promote the idea of community unity outside the physical walls of its branches or the region it serves. Generally, a community refers to a distinct locality, common background or society. But taken within the context of commonality through the interest of managing finances, securing credit and loans, or learning how to prepare for the future, BankAtlantic has been working to extend its reach in the community beyond traditional interactions with local residents. The Fort Lauderdale, Fla.-based bank has taken its community-building efforts onto the web. "Online channels are an extension to build a stronger and more meaningful relationship with the customer," says Tom Triozzi, marketing director for the community bank, which has $4.5 billion in total assets. "Our job is to understand the customer and provide the information they need." To that end, BankAtlantic (bankatlantic.com) has worked to create blogs, newsletters, social media interaction and educational videos that highlight not only the bank's products and services, but also give professionals within the community a platform to share their expertise with others. Along the way, BankAtlantic has received recognition for its online efforts from The Interactive Media Awards, The WebAwards, the Internet Advertising Competition, and J.D. Power and Associates. While Triozzi says the bank is pleased to receive the recognition, that's not what it's all about. Rather, BankAtlantic has adopted a strategy to provide advocacy and education to its customers, along with information that is relevant to the community it serves, he insists. "From our standpoint, our whole strategy as a community bank is to differentiate ourselves by getting involved in the community, helping out in the community, lending money in the community," Triozzi explains. 
"We all live and work and play here, and this bank has always tried to have a special relationship with its customers."

At the Customer's Convenience

In terms of reaching its local community, Triozzi says, BankAtlantic recognizes the shifting demographics of its users. Even at the geographic community level, he relates, some customers choose to conduct certain banking transactions online rather than in person. "As the banking relationships become more online in the overall scheme of things, we have to figure out ways that we can still have that banking relationship with our customers," Triozzi comments. "We have to figure out different ways to maintain that." But the online channel is not necessarily driving customers out of BankAtlantic's branches. Instead, Triozzi says, the option to go online to conduct banking transactions or research products and services at any time of day adds convenience, increases awareness, and lets customers conduct business at their own pace and on their own terms. "It's the synergy of all of the channels together, not just online," Triozzi says. "It's the synergy of all our channels -- but everything we do has some sort of online direction to it. We're trying to let people pick and choose. Different customers at different times, depending on their preferences, can choose their interaction." Simply put, Triozzi indicates, it's the bank's responsibility to respond to its customers' habits and provide the tools necessary to empower them to get what they want, when they want it.

Blogs, Tweets and Videos, Oh My

If you visit BankAtlantic's blog, floridabankblog.com, you won't just see marketing rhetoric from the company's management. Yes, there are updates from the bank about the bank; but there are also advice columns, advocacy articles and other information from several community members outside the bank's staff with expertise in small business, finance and commerce, Triozzi notes. 
The bank's new "Video Room," available through its website, tells a similar story. Videos are categorized as most recent, community-oriented, events coverage and financial education. And they don't all feature in-house talking heads. Triozzi also points out the ongoing utility of e-mail, as well as the growing importance of social media sites, such as Twitter and Facebook, in engaging customers and keeping them informed of bank updates and marketing offers. "[It] allows us to reduce their mail budget and get similar results -- in some cases, even better results -- than we would with our direct mail," he says. "The other part of it is, we can improve our advocacy or respond to things that are happening in the market." And ultimately, those various methods of outreach are all interconnected. "Since we started using all the social media networks, our web numbers have continued to increase dramatically," Triozzi reports, adding that if BankAtlantic tweets a link to a video, that video's viewership numbers invariably increase.

Engagement, Feedback, Results

BankAtlantic uses Nielsen to scan customer sentiment online. According to Triozzi, the bank is able to informally monitor whether it's getting things right by looking at its numbers, including web traffic, video views, e-mail clickthroughs and Facebook wall posts. And it works to respond to the changing market by monitoring how its customers access the bank's site. "We just updated our website, and we've seen a lot of positive responses to those improvements," Triozzi relates. "This is one area where you need to constantly be making the wheel rounder -- we aren't going to be reinventing the wheel, but it's really just looking at best practices, and evaluating where your strengths and where your weaknesses are." All that ongoing improvement has produced results. 
The bank has seen approximately a 140 percent increase in clickthroughs on its calls to action along with an increase in online account opening, Triozzi reports. "Some small things we have changed just by watching our customers use the web," he says. "What we're trying to do is improve the level of service -- that's what this channel is all about, that constant improvement. It's easier to get that information and adjust on the fly."
Experience a nourishing cleanse with PAYOT Duo Intense Cleansers for All Skin Types, a duo of deluxe-sized skincare products that work in synergy to purify and tone skin. Combining a creamy cleanser with an illuminating toner, the two-step routine will leave your complexion feeling energised and radiant. Suitable for all skin types. PAYOT does not test on animals.

The Duo Contains:

Lait Demaquillant Fraicheur Silky Smooth Cleansing Milk (400ml): A silky cleansing milk that effortlessly removes makeup and impurities to reveal clean, fresh, soft skin beneath. Enriched with jojoba oil and cranberry extracts, the cleanser helps maintain moisture levels so your complexion feels supple, smooth and comforted. Perfect for dry skin.

Lotion Tonique Fraicheur Exfoliating Radiance Boosting Lotion (400ml): An energising toner enriched with cranberry and pineapple extracts. Its soothing properties help refresh and purify the complexion whilst energising and restoring radiance. Performing a gently exfoliating action, it leaves skin refined and clarified.

Lait Demaquillant Fraicheur Silky Smooth Cleansing Milk: Aqua (Water), Isononyl Isononanoate, Ethylhexyl Palmitate, Propylene Glycol, Butylene Glycol, Pentylene Glycol, PEG-40 Stearate, Glyceryl Stearate, PEG-100 Stearate, Butyrospermum Parkii (Shea) Butter, Simmondsia Chinensis (Jojoba) Seed Oil, Panthenol, Alpha-Glucan Oligosaccharide, Allantoin, Smithsonite Extract, Vaccinium Macrocarpon (Cranberry) Fruit Juice, Olivine Extract, Bisabolol, Cyclopentasiloxane, Cetyl Alcohol, Stearyl Alcohol, Polyacrylamide, C13-14 Isoparaffin, Tromethamine, Dimethiconol, Acrylates/C10-30 Alkyl Acrylate Crosspolymer, Disodium EDTA, Laureth-7, Citric Acid, Trideceth-9, Parfum (Fragrance), Limonene, Citronellol, Geraniol, Coumarin, Chlorphenesin. 
Lotion Tonique Fraicheur Exfoliating Radiance Boosting Lotion: Aqua (Water), Propylene Glycol, Pentylene Glycol, Alpha-Glucan Oligosaccharide, Smithsonite Extract, Ananas Sativus (Pineapple) Fruit Extract, Vaccinium Macrocarpon (Cranberry) Fruit Juice, Olivine Extract, Bisabolol, Potassium Phosphate, Glycerin, Oleth-20, Disodium EDTA, Disodium Phosphate, Citric Acid, Tromethamine, Trideceth-9, Alcohol Denat., Benzophenone-4, Parfum (Fragrance), Limonene, Citronellol, Geraniol, Coumarin, Chlorphenesin, CI 17200 (Red 33), CI 15985 (Yellow 6).
Palm Pre Eyes-On, Plus Plenty of Pre Questions Answered

Just got back from my meeting with Palm (actually it was some time ago, but sketchy internet reigns supreme here at MWC09). The above would be the highlight of the trip: a real live working Pre in GSM flavor. That's right, folks, it's a GSM Pre. It was live on Vodafone's network and running the same version of webOS we saw at CES, but that's about all the information I could glean about the GSM side of things. An updated OS is currently undergoing testing at Sprint and presumably will hit the GSM version too. As for what carriers the GSM Pre will land on, when, and whether there will be an unlocked version, Palm was staying mum. Fret not, though, gentle reader. I had a smorgasbord of Pre and webOS questions to ask and actually managed to get answers to most of them. Read on for all the nitty-gritty on Synergy, developer support, and plenty more photos. Oh, and get this: there are actually photos of a real, honest-to-god telephone call. Using a smartphone to make a call -- imagine that! The big news of the day has to be on the development side, with Palm announcing both a partnership with Adobe for Flash on the Pre and the O'Reilly book starting to trickle out, chapter by chapter. I haven't had a chance to read said book (naturally), but Palm tells me that the first chapter is full of hints about how exactly device memory works and whatnot. The question of the day, though, is just how much the Mojo SDK will allow developers to accomplish. Sure, it's locally-stored web widgets, and sure, you can do quite a bit with them (just look at Palm's own apps as examples). But what about games? What about deeper access to the Bluetooth stack so that developers can get things like Bluetooth keyboards working? All in good time is the word. None of that extra-special access will be made available when the SDK launches in its initial form, but Palm expects to have it later. 
So no Bluetooth keyboards at launch, but we're holding out for them later. As for when that SDK will actually get released in full, Palm is hedging a bit on their initial estimate that it would be available at the same time that the Pre launches. They're expecting it to "lag a bit" after the launch, which is a good news/bad news sort of thing. Good news that Palm is pressing hard to launch the Pre as soon as possible, bad news that the SDK won't be available on launch day. Back to the Adobe/Palm partnership. Although we're currently looking at "end of the year" for Flash support on the Pre, it turns out that's actually the target date that Adobe and Palm have set for Adobe to finish up their Flash support and deliver it to Palm. Adobe delivering it to Palm and Palm delivering it to customers are, sadly, two completely separate things. We suppose it will make a very good test case for Palm's promised over-the-air ROM updates. If they can get the added functionality finished, baked into ROM, approved by carriers, and pushed out in short order, we'll have the final proof that we're looking at a brand new kind of Palm. One last thing: Palm is well aware of preDevCamp and is absolutely smitten with the idea of people getting excited about the Pre. They're offering as much support to the project as they can, though no official ties are to be found. Lastly (well, actually we started the conversation with this, but who's counting?) we talked a bit about Synergy. Specifically, we wanted to know what services would be supported at launch. Here's the short list: There will likely be a few more, but it's important to note that Hotmail/Live Mail and Yahoo are NOT supported by Synergy, at least at launch. Palm said that the APIs for accessing these services are still up in the air and frankly, we have to cut them some slack there because when it comes to IMAP support, Hotmail is definitely still stuck in 1997. You can get to Hotmail and Yahoo Mail via POP3.
Next I wanted to know what pushed and what didn't, and, for the stuff that didn't, how often information was updated. The reps I spoke with (all five of them; I asked around) honestly didn't know how often Synergy made sure your stuff was up to date with non-push systems. Most of them said that it never really affected them in day-to-day usage. One thing that did stand out, though, is when I asked about Gmail push. The rep said "Gmail pushes, doesn't it?" Wha wha WHAT!? After my incredulous response, the reps I spoke with backtracked a little and said that maybe it didn't but they were pretty sure it did, we'll get back to you, and all that. However, it does seem that if you're working with an IMAP server, email pushes. We haven't gotten a clear answer on this, but provisionally it's looking like the Pre and webOS will support IMAP IDLE.

Making a call

Yes, we witnessed the Palm Pre both make and receive a call. Yes, it worked fine. No, we weren't able to extensively test it, but you can see below for in-call screens and the like. One nice thing: a call coming in looks like any other "big alert," meaning you get your caller ID photo popping up from the bottom, but it doesn't interfere with whatever you're working on. Nice.

Documents, Attachments, and Errata

Although we know that DataViz is a launch partner with Palm, we don't know yet whether or not Docs To Go document editing will be available at launch (trust us, we'll ask DataViz first chance we get). In the meantime, Palm intends on making sure that there's full-featured document viewing available and, if you didn't know, image viewing already works great and works inline with email. One sour note -- no support for zip archives at launch. Durn. We also had a few people ask about WiFi, specifically whether or not University-style PEAP authentication would be supported.
The Palm rep we spoke with wasn't sure, but we convinced them to give us a peep into the WiFi preferences, which revealed support for "WEP, WPA, and Enterprise." Guess we'll have to wait and see just what's inside that "Enterprise" section. One last thing before we get to the galleries. The calendar really is stupendous and offers full support for recurring and non-standard scheduling of appointments. Wahoo! Ok, really, one last thing: there. is. no. alarm. app. Palm said that people are very picky when it comes to alarms (that's true), so they figure it's an excellent opportunity for 3rd party developers. On to the gallery!
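As for that mysterious "Enterprise" section: on desktop Linux, a University-style 802.1X/PEAP network is configured in wpa_supplicant roughly like this. The SSID and credentials below are placeholders, and whether webOS exposes the same options is exactly what we're waiting to find out.

```
network={
    ssid="CampusNet"            # placeholder SSID
    key_mgmt=WPA-EAP            # 802.1X / "Enterprise" mode
    eap=PEAP
    identity="user@example.edu" # placeholder credentials
    password="changeme"
    phase2="auth=MSCHAPV2"      # inner authentication used by PEAP
}
```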
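On the push-email question above: IMAP IDLE (RFC 2177) is the standard mechanism that lets a server announce new mail without the client polling. Here's a minimal sketch of what that exchange looks like on the wire; the helper names and sample responses are our own illustration, not anything from Palm's actual Synergy code.

```python
# Minimal sketch of the IMAP IDLE flow (RFC 2177). Purely illustrative --
# these helpers are hypothetical, not from webOS or any real client.

def idle_command(tag):
    """Build the tagged IDLE command a client sends to enter idle mode."""
    return "{} IDLE\r\n".format(tag).encode("ascii")

def parse_idle_updates(server_lines):
    """Scan untagged responses the server pushes while the client idles,
    and pull out '* <n> EXISTS' lines, which announce new mail arriving
    with no polling on the client's part."""
    counts = []
    for line in server_lines:
        parts = line.strip().split()
        # Untagged responses start with '*'; '<n> EXISTS' means the
        # mailbox now holds n messages.
        if len(parts) == 3 and parts[0] == "*" and parts[2] == "EXISTS":
            counts.append(int(parts[1]))
    return counts

# Typical exchange: client sends 'a001 IDLE', server answers '+ idling',
# then pushes updates until the client sends 'DONE' to exit idle mode.
pushed = parse_idle_updates(["+ idling", "* 23 EXISTS", "* 23 RECENT"])
```

The design point is simple: instead of the client asking "anything new?" every few minutes, the connection sits open and the server speaks first, which is what makes IMAP feel like push on a phone.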
According to a new survey by CareerBuilder, 68% of hiring managers spend less than TWO MINUTES looking at a resume. And 17% spend less than 30 SECONDS. Which is why using keywords is probably more important than you realize. So here are the top ten words and phrases you should and SHOULDN'T use. The number one word you SHOULD use is . . . ACHIEVED. 52% of hiring managers said they like to see it on a resume. The rest of the top ten goes: Improved . . . trained . . . managed . . . created . . . resolved . . . volunteered . . . influenced . . . increased . . . and decreased. The top phrase you SHOULDN'T use is "best of breed" . . . which is a dog show term. We're not sure WHO would use that on a resume, but 38% of hiring managers said they'd probably reject someone who did. And we don't blame them. The rest of the top ten phrases you SHOULDN'T use goes: Go-getter . . . think outside the box . . . synergy . . . go-to guy . . . thought leadership . . . value add . . . results-driven . . . team player . . . and bottom line. (PR Newswire)
COMING UP: ‘Celebrating Figuration and Abstraction’ exhibition, Chelsea, London | 3 November - 18 November 2014 | Elvira Bach - Pip Dickens - Gernot Kissel (1939 – 2008). Curated by Renee Pfister Art & Gallery Consultancy and Vera Schuhmacher Fine Art. Profile image: 'Vignette/Dream Scene', oil on canvas, 2013 (from the SCREEN series).

Pip Dickens was born in 1962. She took her Master's in Fine Art (painting) at the Slade School of Art (UCL, London), graduating in 2000, and trained under the internationally renowned artists Tess Jaray, Estelle Thompson, Liz Rideal and David Hepher. Prior to studying art she worked extensively across the Middle East (1978-1985), from the relatively young age of seventeen until her mid-twenties. She perceives this period of her life as critical to her outlook as an artist. Her first London solo show was with Cassian De Vere Cole at his Notting Hill gallery apartment in 2000. After the closure of his gallery she had numerous group and solo shows in London, including ‘Bittersweet’ at Danielle Arnaud, ‘Oil and Stone’ at East 73rd Gallery, and several exhibitions with the Mayfair gallery Sarah Myerscough Fine Art. She was shortlisted for the NatWest Art Prize in 1997 and is the recipient of the Jeremy Cubitt Prize (Slade School of Fine Art). She also won the Edna Lumb Art Travel Prize in 1995, with which she travelled to Iceland. She was a nominee for the Jerwood Contemporary Painters 2009 and shortlisted for the Celeste Painting Prize 2009. A major solo exhibition, 'Toward the Light' (14 Aug - 7 Nov 2010), was held at Cartwright Hall Art Gallery, Bradford, supported by Arts Council England. This touring exhibition featured electroacoustic sound compositions by sound artist Monty Adkins, based on selected paintings, and toured to The Brindley Arts Centre, Cheshire, in 2012.
Other recent solo exhibitions include: 'SCREEN: Pip Dickens', Rugby Art Gallery & Museum, supported by Arts Council England (2013); ‘Pip Dickens - Patterns of Shadows', Daiwa Anglo Japanese Foundation, London (6 Mar - 17 Apr 2012); and ‘Pip Dickens - New Works', Stanley & Audrey Burton Gallery, University of Leeds (18 Jan - 14 Apr 2012). She was Leverhulme Trust Award Artist in Residence at the University of Huddersfield, Department of Music (2010-2011), collaborating with electroacoustic composer Monty Adkins. A substantial book, 'SHIBUSA - Extracting Beauty', was published in 2012. The book, Dickens' resultant paintings and Adkins' compositions examine the synergy between music and painting (commemorating the 100th anniversary of Schoenberg meeting Kandinsky) and Japanese kimono motifs.

Shibusa - Extracting Beauty
Edited by Monty Adkins and Pip Dickens
Size: 280 x 210mm
Number of images: 97 (89 in colour)
Published by University of Huddersfield Press
Email enquiries to: email@example.com

In addition to her painting practice, she has worked with Ken Shuttleworth's practice, Make Architects, on a design for a major integral public art commission in London.
The United Natural Products Alliance (UNPA), an international association of forward-looking companies dedicated to providing consumers with natural health products of superior quality, benefit and reliability, is pleased to announce the executive-level membership of InQpharm. The Kuala Lumpur, Malaysia-based pharmaceutical company, founded in 2004 with U.S. headquarters in Salt Lake City, works in synergy with its global network of technology and commercialization partners to develop and market unique, safe, therapeutic compounds and products using ingredients from natural sources. As a result of its extensive research and development program, InQpharm has a growing portfolio of patented and patent-pending platform technologies. These cutting-edge technologies offer opportunities for the development and commercialization of innovative "best-in-class" products in areas of high unmet need. A global company with offices in Europe, Asia and North America, InQpharm is a dynamic company, passionate about delivering its vision to create a healthy life for all. The company sees a strong synergy between its core mission and UNPA’s, according to David Mastroianni, InQpharm’s managing director for North America. "InQpharm joined UNPA to support its valuable efforts to protect Americans’ rights to have access to safe and efficacious dietary supplements," he said. “InQpharm’s experienced executive team, led in the United States by industry veteran David Mastroianni, and its extensive in-house expertise, provides a model of efficiency and collaboration for product development in today’s dynamic global environment,” said Loren Israelsen, UNPA president. “The company’s expertise in the development and then bringing products to market with its understanding of the global regulatory environment will be a great asset to UNPA and its members.”
Monday, February 13, 2012

Yesterday, I posted on the need to distinguish between rules of professional conduct and professional identity. As we all know, lawyers frequently face disciplinary action. Is this mostly due to a failure to know the rules or a lack of professional identity--a general ethical failing? In a recent post, Professor Michael Downey declared, "practical learning in legal ethics is critical. I started teaching trust accounting in my legal ethics courses after hearing a Colorado disciplinary counsel discuss that recent graduates were facing an increasing number of bar investigations when they overdrew their trust accounts. Considering that the economy is pushing many lawyers to smaller firms or solo practices, knowledge of trust accounting can be crucial, particularly when mishandling client funds may result in prompt, catastrophic discipline." Although I applaud Professor Downey for stressing practical learning in legal ethics, I wonder whether mishandling client funds is usually due to a failure to know the rules or a lack of professional identity. Professor David Thomson has defined professional identity as "one’s own decisions about [professional] behaviors. . . as well as a sense of duty as an officer of the court and responsibility as part of a system in our society that is engaged in upholding the rule of law." Lawyers know that it is against the ethical rules to steal a client's funds, and the rule against co-mingling a client's funds with the lawyer's funds is emphasized in most legal ethics classes. It seems to me that one who mishandles a client's funds lacks "a sense of duty as an officer of the court" and fails to understand his "responsibility as a part of a system in our society that is engaged in upholding the rule of law." It involves "one's own decision about [professional] behaviors," not a misunderstanding of the rules.
While I have said in previous posts that we should be teaching the ethical rules across the curriculum, law schools must also develop their students' professional identities, or the teaching of the ethical rules will be a waste. This article entitled Top 10 Law School Home Pages of 2011 by Georgetown law librarian Roger Skalbeck, available at 2 Journal of Law (1 J. Legal Metrics) 25 (2012), ranks web pages for all ABA accredited schools based on 24 criteria including visual design as well as the technical, non-visual characteristics like metadata and html source code. Here are the top 10: #4 (tie) Wake Forest University School of Law Click here and scroll down to see the rankings for the remaining 190 or so schools. William D. Henderson and Andrew Morriss have started a new blog The Legal Whiteboard on legal education. Bill Henderson says the following on the blog's first post: "Despite the fact that I am one of the go-to people on the speaker circuit when it comes time to talk about structural change, I am not in the sky-is-falling camp. Instead, I see a lot of opportunities for lawyers, law students and legal educators to do very important and creative work. What is most exciting about this work is that it will make society better off – law will become better, faster and cheaper. Many legal services will become more standardized, productized and commoditized. I realize that these words will rankle some of the old guard, particularly those still making a good living under the bespoke model. But clients – including corporations, government and ordinary citizens—will love it. Professional ideals will remain the cornerstone of successful legal enterprises, but denying the exigencies of the marketplace is, to my mind, unprofessional. Because clients and society want better, faster and cheaper law, I believe lawyers (including legal educators) have a professional duty to ardently pursue this goal. 
The hardest part of this assignment – and the most vexing and interesting – is how to parlay this transformation into a decent living. Many people assume that the new paradigm means lawyers working longer hours for lower wages. That is one future business model. But I think it utterly lacks imagination. Lawyers are problem solvers. To my mind, the growing price elasticity for legal services and legal education is just a very difficult problem. And whenever I am faced with a very difficult problem, I typically start writing out my thoughts on a massive whiteboard. (I am told it is quite a spectacle to behold.) I am also someone who loves to collaborate. With an outward facing Legal Whiteboard, I am hoping to elicit the genius of my fellow travelers." Andy Morriss states in his first post: "I'm going to focus on a couple of things here, at least initially. First, I'm slowly working my way through bits of the considerable body of literature on teaching from outside the legal academy. There's a lot of good stuff out there - some of it based on data! - and I'll try to spur some conversation about that. Second, as Bill noted, there's a lot of interesting data out there that the legal academy is not yet using. I'll try to help that conversation along as well." Bill and Andy, welcome to the legal education debate! We look forward to reading your blog. How lawyers get a bad name department. From the Charleston Daily Mail: For the fifth year, the Webb Law Firm will give away a free divorce for Valentine's Day. The service will go to the person who, in the opinion of the firm, presents the most compelling story as to why they deserve a free divorce. The divorce is limited to an uncontested, no-fault case with no or minimal child custody issues and is limited to the jurisdiction of West Virginia. The service is free, but the winner is responsible for fees, costs and expenses. Current Webb clients are ineligible. Happy Valentine’s Day!
Brent Evan Newton has posted a new article on SSRN concerning legal education reform entitled The Ninety-Five Theses: Systemic Reforms of American Legal Education and Licensure. Professor Newton's thesis is that "Every major decision made by a law school should reflect a genuine fiduciary commitment to their students – with the ultimate goal of producing graduates who will be competent, ethical entry-level attorneys, that is, graduates who are 'practice ready.'" While I don't agree with every one of his points, I think that Professor Newton's article is a major contribution to the legal education reform debate. Abstract: Knowledgeable and respected authorities inside and outside the legal academy are correctly describing the American system of legal education as being in a state of "crisis" and in need of dramatic reforms. Yet most members of the legal academy refuse to accept that major structural reforms are necessary. Despite the academy’s intransigence, I feel compelled to nail my 95 theses to the academy’s door in the hope of hastening, however slightly, its glacial movement towards meaningful reform. The theses comprise six major themes, the first five concerning the legal academy and the sixth concerning the legal profession itself: (1) defects in the law school admissions process; (2) structural problems resulting from the excessive number of law schools, the ABA accreditation process, the current manner of law school faculty governance, and the current system of ranking law schools; (3) defects in law schools’ curricula, pedagogical methods, and assessments of students; (4) deficiencies in the professoriate at law schools; (5) problems related to legal scholarship and law reviews; and (6) flaws in the bar exam and licensure process and also in the process of graduates’ transition from law school to the job market. Most of the problems are interrelated and result in a negative synergy that increasingly threatens the health of the legal profession.
As a result, the only way to effect meaningful change likely to persist is to implement systemic reform – root to branch. Every major decision made by a law school should reflect a genuine fiduciary commitment to their students – with the ultimate goal of producing graduates who will be competent, ethical entry-level attorneys, that is, graduates who are "practice ready." They should hire faculty members; design curricula and pedagogies; and admit and assess students with the primary goal of producing attorneys who can hit the ground running upon graduation. Law professors should make legal scholarship secondary to their teaching duties, and their scholarship should be relevant to the bench, bar, and legal policy-makers. Law schools also need to charge a fair amount of tuition in view of the quality of the legal education that they provide to students and expect students to carry a reasonable amount of debt in relation to their job prospects. Finally, state licensing authorities should require law school graduates to demonstrate the broad range of competencies needed to be an effective entry-level practitioner before licenses are issued. With these aspirations for the legal academy and legal profession in mind, I contend that many structural changes in the current system of legal education are necessary – beginning with the manner that schools admit law students, continuing with the manner they teach and assess them during law school, and concluding with the manner in which law school graduates are admitted to the bar. Some proposed reforms look to effective practices in American medical schools and business schools as models. For most of the reforms to occur, law schools must engage in paradigm shifts in several areas in addition to modernizing their curricula and pedagogies – they must alter the composition of their faculties, their approach to legal scholarship, and their relationship with members of the bench and bar.
The ABA’s Section on Legal Education and Admissions to the Bar must pave the way in order for these structural changes to occur. In particular, the ABA standards governing law school accreditation must be amended substantially – with respect to faculty composition, faculty governance, faculty duties concerning scholarship, and law school curricular requirements. Without such changes, no meaningful systemic reform will ever occur, and the many problems that currently plague legal education will continue. The ball is in the ABA’s court but, ultimately, law schools must effect change themselves (with or without the ABA’s help, to the degree that they are able) – for the good of law students, the legal profession, and the public. We can, and should, turn the current crisis in legal education into an opportunity for meaningful change. A few weeks ago, I wrote a post on incorporating legal ethics into doctrinal courses. I concluded that "I agree with the Carnegie Report that we need to teach professionalism better in law school. Probably the best way of doing this is to incorporate skills training into doctrinal courses, especially now that casebooks and supplemental texts allow professors to easily do this." David Thomson posted a reply on his blog Law School 2.0: "Scott is right on track. It is possible to integrate professional ethics issues into any doctrinal course, and with the Skills & Values Series (as it grows to cover nearly every subject), it should be fairly easy to do. I would only add that as we develop our thinking about professional ethics instruction, we should be explicit about what we mean. It seems to me that the terms "Professionalism" and "Professional Identity" have been getting confused. Yes, there is some overlap between them, but each contains components that are distinct from the other. 
The Carnegie report is critical of legal education in not teaching or - more accurately, I think - creating opportunities for students to develop their professional identities. Here is my shot at the distinction - Professionalism relates to behaviors, such as timeliness, thoroughness, respect towards opposing counsel and judges, responding to clients in a timely fashion. I actually think we teach this pretty well in law school, across the curriculum. We expect certain behaviors (often we define them in our course policies documents, and certainly they are defined in the student handbook), and for the most part we get them. Professional identity relates to one’s own decisions about those behaviors (which sounds like overlap, but it’s not), as well as a sense of duty as an officer of the court and responsibility as part of a system in our society that is engaged in upholding the rule of law. For me, "teaching" Professional Identity means we ask the student to finish this sentence: "I am a lawyer, and that means, for me that I will resolve this ethical dilemma as follows…" I agree with Professor Thomson that law schools need to teach both types of professionalism–the rules of professionalism (along with professional behavior) and professional identity. Law schools rarely teach the second one, except for clinics and some legal skills classes. Maybe this gap is why there are so many ethical complaints against lawyers and why the public views lawyers so poorly. New law students have a picture of what being a lawyer is like from television and movies. Unfortunately, television and movie writers seem to be unaware of the rules of professional conduct. Of course, it is more dramatic to win a case through a "clever" trick than good, ethical lawyering. 
Therefore, we need to teach our students professional identity–not socialization as an elite, which we currently teach and which dates back over 100 years to Langdell, but how to be a professional in the real world in relation to clients, other attorneys, judges, and the public. I have mentioned this article before, but I will cite it again because it provides a good beginning for developing professional identity: A recent article by Denise Platfoot Lacey advocates that law schools go beyond classroom teaching of ethics to evaluate their students' professional conduct in law school. Abstract: "There has been a repeated call to incorporate professionalism training in legal education in order to assist students in developing professionalism. While law schools have begun to answer this call, they often fail to teach and assess actual professionalism behaviors of their law students. Such failure results in lost opportunities to impart to law students the expectations of the legal profession, as well as to help them to develop the highest standards of conduct. This article will present information about a model of professionalism assessment in medical education and how it can be integrated into legal education to facilitate the teaching and evaluation of professionalism in law students."

Sunday, February 12, 2012

100 Parenthetical Starter Words (Part 3 of 3)

In a pair of posts last month we looked at how to effectively use parentheticals in legal writing. For those who missed Part I, you can find it by clicking here. As I mentioned in Part II, parentheticals should almost always take one of three forms. The most common form of parenthetical begins with an "-ing word." Take, for example, the following parenthetical: United States v. Hill, 2011 WL 90130 (N.D. Cal. Jan. 10, 2011) (upholding warrantless search of iPhone photos incident to arrest).
Here is a list of some of the most common parenthetical starter words in legal writing to help you begin:
- admonishing (or reprimanding; chastising; rebuking)
- adopting (or endorsing; approving; embracing; espousing; supporting)
- allowing (or permitting; accepting; condoning; enabling; facilitating)
- analyzing (or examining)
- arguing (or advocating)
- asserting (or affirming; declaring)
- avoiding (or sidestepping)
- awarding (or granting; assigning)
- canvassing (or inspecting; scrutinizing)
- cautioning (or warning)
- challenging (or questioning)
- clarifying (or elaborating; expanding; correcting; adding)
- comparing (or analogizing)
- conceding (or admitting)

Villanova University School of Law (VSL) announced today that Jeffrey S. Moorad VSL ’81, the vice chairman and CEO of the San Diego Padres, has committed $5 million for the creation of the new Jeffrey S. Moorad Center for the Study of Sports Law. The landmark gift, which marks the largest in VSL’s history, was officially announced at the 2012 Villanova Sports and Entertainment Law Journal Symposium. Headlined and conceived by Moorad, the event featured an all-star panel of baseball experts including Oakland A’s General Manager Billy Beane, subject of the book and Oscar-nominated film Moneyball, former New York Mets General Manager and current San Diego Padres Senior Vice President Omar Minaya and MSNBC President Phil Griffin, discussing the impact of "Moneyball" on sports and business. Former Pennsylvania Governor Edward G. Rendell VSL ’68 moderated. The Jeffrey S. Moorad Center for the Study of Sports Law, named in recognition of Moorad’s leadership gift, is one of only a handful of institutes in the U.S. dedicated to the study of sports law. VSL’s new Center will prepare students for careers in amateur and professional sports through rigorous academic study, innovative programs, internship opportunities, scholarship and research. Here’s the full story. Professor Neil H.
Buchanan has posted a defense of law schools against recent criticisms on Dorf on Law. He writes, "Some of the complaints about law schools are clearly meritorious -- for example, it is impossible to make a case in favor of allowing law schools to lie about their employment statistics -- but many others are downright silly. My overall reaction to the public discussion is that far too many people are launching broad, baseless, ill-informed attacks on an institution that is both fundamentally well designed and essential to the maintenance of a civilized society." He declares, "It seems plausible to imagine that the current media hype itself is ultimately driven by little more than the state of the economy." "Potential law students are, for very good reasons, focused on the future, and they have every incentive to think about whether a law degree will be good for them. If the economy improves for lawyers, then it is hard to imagine that applications will not rise in response." He is concerned that these complaints "might have the effect of reducing the number of people who are potentially interested in attending law school." He also worries that "long-term damage is being done to the notion of the legal academy as an academic institution. Even if future applicants are not being permanently put off of legal education, the public at large -- and especially political players, many of whom are generally hostile to academic inquiry and intellectual freedom -- is being inundated with claims that legal academics are fundamentally out of touch and wasting time and money." He concludes: "In short, The New York Times and other news sources are doing serious damage to the long-term prospects of the legal academy, and ultimately to society as a whole. That damage, however, goes far beyond the possibility that our future client pool is being drained on the basis of over-hyped claims.
The future of intellectual inquiry is at stake, and there is good reason to fear that the damage being done now will have serious consequences well into the future." I disagree with Professor Buchanan that for the most part there is nothing wrong with legal education. I agree that legal education is fundamental to the maintenance of a civilized society, but I do not view our current legal educational system as fundamentally well designed for today on both the structural level and how we teach. While the economic crisis has made things worse, there is much wrong with legal education that is not due to a bad economy. As Professor Brian Tamanaha has warned: "Law schools are caught in the grip of two separate, reinforcing declines that portend a severe contraction in the immediate future: fewer people are taking the LSAT test, and fewer people who take the test go on to apply to law school. (It is possible that a sharp decline in the former will lead to a rise in the latter, but that has not happened so far.) A painful dose of economic discipline for law schools is just around the corner." While I disagree with Professor Buchanan's main thesis, I think that many of the attacks on legal education have been broad, baseless, and ill-informed. I do not worry so much, as Professor Buchanan does, that these attacks will affect the notion of the legal academy as an academic institution. Rather, I am worried how these attacks will affect the legal education reform movement. The hyperbole of the scam bloggers makes it easy to ignore the fact that there is some truth in what they are saying. For example, Paul Campos has posted a reply to Professor Buchanan's post. Campos declares that "He sounds, in short, like a parody of an arrogant, clueless law professor, talking about stuff he actually doesn't know anything about, while appealing to the self-evident truth of his assertions." 
Other words he uses to describe Buchanan and his ideas include bluster, arrogant, clueless, audacious, diatribe, dysfunctional, etc. With these kinds of attacks on legal education, is it any wonder that many in the academy aren't taking the problems in legal education seriously? We need a reasoned discussion on the crises facing legal education, and we need to do it in a civilized manner. A person will never be convinced by the language that the scam bloggers and their followers are employing. People do not react well to personal attacks. Let's tone down the rhetoric and eliminate the personal attacks, and get down to the business of identifying and solving legal education's problems. "All a poet today can do is warn." Wilfred Owen (English poet killed in World War I) By Professors Bradley T. Borden (Brooklyn) and Robert J. Rhee (Maryland), available at 63 S.C. L. Rev. 1 (2011) and here on SSRN. From the abstract: This Article introduces the concept of the law school firm. The concept calls for law schools to establish affiliated law firms. The affiliation would provide opportunities for students, faculty, and attorneys to collaborate and share resources to teach, research, write, serve clients, and influence the development of law and policy. Based loosely on the medical school model, the law school firm will help bridge the gap between law schools and the practice of law.
Timothy Bryant, Space Positioning Optical Radar Tracking operations chief, 412th Operations Support Group, Edwards Air Force Base, Calif., was one of five people honored by the Department of Defense as a recipient of the 2016 Spirit of Hope Award, which is given for exemplary service and support to the troops. Synergy across organizations and across services was on full display during a recent deployment of three F-35B Joint Strike Fighter aircraft, 75 Marines and 21 JSF operational test team members from Edwards Air Force Base, Calif., to Eglin Air Force Base, Fla. The 412th Test Wing Innovation Team at Edwards Air Force Base, Calif., is always looking for a few good ideas, and it may have received one Sept. 23, 2016. An Airman with the 412th Medical Group at Edwards Air Force Base, Calif., was recently selected to commission as a second lieutenant in the Medical Service Corps.
Synergy Global Multimedia PT WE ARE URGENTLY SEEKING A PAYMASTER JOB LOCATION : BINTARO SEKTOR 9 JOB DESCRIPTION : - Ensures personnel actions are in compliance with current Human Resources and Finance policies and guidelines. - Identifies, writes, and implements Human Resources policies and guidelines regarding the HRIS. - Identifies opportunities for improving Human Resources processes through information systems changes. - Assists in the preparation of proposals to develop new systems and/or operational changes. - Develops training curriculum and conducts formal and informal training sessions regarding the HRIS. - Provides HRIS technical support to Human Resources and other staff. - Serves as liaison among HRIS, Benefits, Human Resources, Recruitment, Payroll and Finance areas with regard to operations and the HRIS. - Resolves complex technical problems. - Provides other support to Human Resources management and staff as assigned. - Knows, understands and works with an HR/Payroll system that offers this functionality. - A comprehensive knowledge of human resource practices. - A thorough knowledge of the organisation and its policies and procedures. - A full understanding of how the HRIS software operates and which functions it integrates with. - In terms of organisational mission, vision, goals and objectives, be sure that you know (exactly) in which direction your organisation is headed. - Acquire a thorough working knowledge and in-depth understanding of the needs of line managers: i.e. determine what their wants and needs truly are. - Candidate must possess at least a Bachelor's Degree in Computer Science/Information Technology, Finance/Accountancy/Banking, Human Resource Management, Business Studies/Administration/Management or equivalent. - At least 3 year(s) of working experience in the related field is required for this position. - Applicants must be willing to work in BINTARO SEKTOR 9.
Preferred domicile: BSD/SERPONG/BINTARO - Preferably Staff (non-management & non-supervisory) specializing in Human Resources or equivalent. - Full-Time position(s) available. - FAMILIARITY WITH AN HR INFORMATION SYSTEM IS A MUST. Industry: Business / Management Consulting. Company: Synergy Global Multimedia PT. PT. SYNERGY GLOBAL INDONESIA registered its business license in September 2014, and has been focusing on IT Consulting and HR Information Systems as its core business ever since. Synergy provides services in developing and implementing the Indonesian HR Information System software "SunFish" for local companies, and also distributes/exports it to overseas customers and international companies in Hong Kong, Singapore, China, and Japan.
When it comes to aligning our public higher-education systems with the workforce needs of our modern economy, state policy-makers are all too aware of a paradox: On the one hand, there is recognition of the need for more people to have college degrees, regardless of subject matter. On the other hand, workforce experts and business leaders agree that there is a "skills gap" wherein individuals with post-secondary education are unable to fill the jobs that are available or find employment consistent with their degrees. There can be little doubt about the impact on a state's per-capita income of increasing the percentage of the public with any baccalaureate degree: The National Center for Higher Education Management Systems calculates that correlation at .83. Meanwhile, however, the skills gap has contributed to a growing cynicism toward liberal-arts degrees, along with increased support for community colleges and interest in technical-training certificates. These competing views of future workforce needs are reflected in two state policy and regulatory systems: higher education and labor. Each has its own mission, governance process, programs, constituencies, service providers and data systems. The task is to find ways to integrate these systems. When Utah Gov. Gary Herbert challenged the regents of his state's university system to come up with a plan to align post-secondary education with the state's workforce needs, we led an effort to identify best state practices throughout the country. One method is to create coordinated data systems. Over $750 million in federal funds has been distributed to build Statewide Longitudinal Data Systems (SLDS) in 47 states. Guiding much of this work is the Data Quality Campaign, which has identified 10 state actions to serve as roadmaps for state policy-makers. Unfortunately, however, most of this data has not been translated into meaningful policy information. Education data languishes in "data warehouses."
Except for a handful of states, it is safe to say the data has not played a key role in state policy-making on these issues. To address that problem, we came up with four recommendations for governors that fit into the Data Quality Campaign policy roadmap but offer a greater focus on leadership. Clearly, our higher-education and workforce-development systems need to work together more effectively than they do today, and better use of data is the key. Our experience in Utah showed the value of gubernatorial leadership. With governors providing political muscle, vision and sustained commitment, we can go a long way toward making this important synergy happen. This story was originally published by Governing. VOICES is curated by the Governing Institute, which seeks out practitioners and observers whose perspective and insight add to the public conversation about state and local government. For more information or to submit an article to be considered for publication, please contact editor John Martin.
As parents, one thing we have tried to impress upon our children is the importance of sharing, whether with siblings, friends, or other people. At the same time, we have tried to help them understand that they must be careful with what they share through their social networks, the different social apps they use, the people with whom they engage and the relationships they have with others. It’s Not That Simple Being a “modern” educator, for some, means having a PLN, integrating technology, and, through various means, “sharing”. However, educators who aren’t integrating, tweeting or blogging, or aren’t seen as embracing technological advancements, are too often described as somehow being “less” as teachers, as being not as worthy: “And, sadly, some people write off technology as a chore or passing fad.” This attitude, unfortunately, continues to reinforce the binary of the “good/bad” teacher, which does little to explore people’s strengths but, instead, serves to limit people and continue the traditional power structures that have dominated educational discourses throughout history, where certain groups are described as “less worthy” because of their lack of knowledge or talent or whatever can be used to create the power binary. We have to remember that, throughout time, “good/bad” teaching has meant very different things than it does at present. The idea that it is right to be a student-centered and caring teacher rather than a self-centered teacher is one that, while strongly held at this point in time, is as contingent as any other idea about good teaching in any other historical period. McWilliam, 2004 Sharing, as an educator, has now become what “relevant teachers” do because it is now “right and proper” to do so.
But the definition of “sharing” continues to change and morph, as can be seen in the continual changes to the Terms of Service of apps like Facebook and Twitter and in the use of various social networks for various types of sharing. In fact, there are numerous examples of people who have made poor decisions when sharing online, of how sharing and privacy have become issues, and of the harmful effects that follow when things are shared without people’s knowledge or consent: phishing scams where people have had their information used by scammers, and the harmful and destructive consequences for people who have had pictures stolen and shared against their consent. Sharing is Important Learning to be generous with time and resources is something I want my children to develop and appreciate. However, it’s also not quite as simple as Mark Zuckerberg makes it out to be: “Facebook’s mission and what we really focus on is giving everyone the power to share all of the things that they care about.” Yes, sharing is important and something that needs to continue, especially for teachers. However, it’s not as simple as “just sharing”. There are many instances when, although I wanted to share, doing so would have been unethical or might have had negative consequences. Like many others, I’ve been on the receiving end of nasty trolling for taking a particular point of view. It’s not always possible or positive to share one’s experiences. In a world dominated by the digital, sharing online seems to be the ONLY way that some people consider to be real sharing. Yet, in many instances, the intimate conversations that take place between two people, or in a small group, can be what really cements and binds our socially mediated relationships.
For educators, relationships are so important and, although digital relationships and learning to live in a world of digital discourse, literacy, citizenship, and relationships are important, there is a place for people who are more comfortable with the less-digital, less-technological. If we believe that each person’s development is important, then genuinely respecting and honouring them should allow us to feel anything but “sad”. In fact: Good teachers will one day feel differently about progressive teaching, just as they have done in other times and places. McWilliam, 2004 What do You Share? How do You Share? How do you share? What do you share? How does sharing fit in your lifestyle as a teacher? Parent? Partner? Individual? How Do You View Change? Change is constant. Change is inevitable. Change can be positive or it can be negative. Change can be hard to describe and its effects can be even harder to put into words. A Summer of Change This summer has brought about a number of changes for many people I know – some are moving to new jobs, some are moving to new schools, some are moving to new communities, some are entering new stages of their lives and a few are doing all of the above! Having gone through the process of moving (9 times), a new job (8 times), a new school (10 times) and community (6 times) and the change brought on by having children (8 of them) or having children leave (3 of them), I’ve come to view change as the way life is lived. I’ve recently had to begin to care for my parents as they age, something with which I have little experience, which means that, like many things, I’m learning as I go. As I read various articles aimed at examining changes that might be experienced by teachers, whether from new technology or new strategies or new assessment or different expectations or the recent online phenomenon or ….
it goes on, I notice that there is a natural tendency to generalize things across a population, something that tends to happen quite often in education. People speak of “teachers” needing to “…..” because of their particular worldview and point of view. Not that it’s bad, but that view really is theirs and, sometimes, if it’s the dominant societal one, it goes without question. However, in my experience, this tendency masks the individual responses that people experience as they go through change. Generalizing that this change or that change will have this effect or that effect misses the point – the change will be individual and will have a different effect depending on the person. What I view as a positive change, others will deem negative and, surprisingly, many won’t even register or care about it. How do You envision Change? However, if change is happening regularly, maybe taking a different approach might help. The diagram at the beginning of the post is from the Design Thinking approach to problem solving and innovation. Having read Tim Brown’s book Change by Design a number of years ago, reread parts since then, and taken the Stanford course on Design Thinking, I began applying the principles to how I view change and the changes taking place around me. Eventually, applying these principles, I determined that change was not only okay, but desirable – part of the reason that I headed off to the University of Regina to begin a PhD with Dr. Alec Couros. As this image from the Change by Design site shows, looking at change from different perspectives not only can help one to determine the What and How of change but also gives you different options for addressing change.
Change By Design at IDEO | IDEO http://buff.ly/2a1Xkk7 In combination with the work of Todd Henry – Accidental Creative, Die Empty, Louder than Words – and Cal Newport – So Good they Can’t Ignore You, Deep Work – and others, I have shifted my thinking about my work, the impact of change and the process of development from one that “happens” to me to one in which I am able to be part of the solution process and make decisions that help me continue to follow my unique path. Instead of always looking to innovate, adopt a new mindset or flip something, I can be open to new ideas and new processes but not always looking for the “next big thing” because my focus isn’t being distracted by my peripheral vision – something I borrowed from Todd Henry. So, yes, there are many things going on – change is all around us – but, for many people, it’s a distraction from doing their great work, a distraction from the path they are creating and building. Learning from/with others is important, such as doing a book study with others to expand ideas and push oneself, reading different authors and listening to TEDtalks and other forms of learning, but it’s just as important to be creative, to question what people are saying, and to build your own – isn’t that what everyone seems to be saying needs to happen? Often many of us are pulled this way or that, always looking for the next “new thing, great book, interesting method and innovation” instead of focusing on the path we are building. Yes, something might be interesting and worth exploring – but make no mistake, many who are commenting on it and writing about it are interweaving it with their path – seeing how it can add to their message – which is what you need to do. You are on your own journey – one unique to you. Jana Scott Lindsay, in her last post, Consumed, explores the impact of being connected and how she is seeing a need for change … I think that it is time to work at finding balance.
Leave your devices out of sight, to encourage out of mind for a time & space each and every day. Change – yes, it’s always happening. Change – what about you? Week 2 of the #saskedchat Summer Blogging Challenge Our topic this week is Supporting. Tribe, a post by Jana Scott Lindsay, has me pondering how we support ourselves and, just as importantly, how we can be part of a support system for others. Jana starts her post off with a great quote from Seth Godin – go check it out. I’ll wait. Pretty great quote, isn’t it? Great post too! Seth Godin constantly reminds me that I don’t have to write a short story to get a point across. In fact, sometimes less is more. In his post today, The Top of The Pile, he writes: We need an empathy of attention. Attention is something that can’t be refunded or recalled. Once it’s gone, it’s gone. So, what have you done to earn it? In his latest book, What to do When It’s Your Turn (and it’s always your turn), Godin reminds us that: Now, more than ever, more of us have the freedom to care, the freedom to connect, the freedom to choose, the freedom to initiate, the freedom to do what matters, if we choose. It’s that choice part that I need to pay attention to and remind myself about. As Jana discusses in her post – you read it, right? – being conscious of others is a choice, being part of a tribe is a choice, being involved is a choice for most of us. There are others, however, who don’t get to have those choices. How do we support them? How do we make others aware of this fact? How do we get to the top of their pile? And not just because it’s part of being an educator but because we have the freedom to care, connect, choose, initiate – we have privilege. Support – what does it mean to you? Image by Amielle Christopherson Most people like a good challenge, something that pushes them to reach beyond where they are at the moment, to reach a new level or develop a new skill.
This summer a number of people from #saskedchat have indicated that they are interested in taking on a blogging challenge in order to kickstart their blogging and get back into the habit of writing. To help with this, the #saskedchat Blogging Challenge will offer weekly topics for blogging and, hopefully, provide an opportunity for people to encourage and support others who are taking the challenge. Why Blog? Why Write? Last January, one of the #saskedchat topics was blogging and I wrote a post about blogging as a professional. In that post, I discuss 5 habits to develop as a blogger: 1. Plan for it. 2. Make it part of your routine. 3. Say “NO” to something else. 4. Set yourself up to succeed. 5. Check on your progress, adjust, and move forward. These five habits will help you to develop your writing habit. The one thing I would suggest BEFORE you begin is to develop a “WHY I blog” statement in order to ground your work with “why” – what is the purpose of your blogging? Why will you commit to doing this and developing this habit? How do we form habits? Where do they come from? Why are they so darn hard to change? Because for reasons they were just beginning to understand, that one small shift in Lisa’s perception that day in Cairo – the conviction that she had to give up smoking to accomplish her goal – had touched off a series of changes that would ultimately radiate out to every part of her life…. and when researchers began examining images of Lisa’s brain, they saw something remarkable: One set of neurological patterns – her old habits – had been overridden by new patterns. They could still see the neural activity of her old behaviours, but those impulses were crowded out by new urges. As Lisa’s habits changed, so had her brain. (Location 95) In the book The Power of Habit: Why We Do What We Do in Life and Business, Charles Duhigg explores how people develop habits and how people like Lisa change their habits.
By focusing on one pattern – what is known as a “keystone habit” – Lisa had taught herself how to reprogram the routines in her life, as well. (Location 104) Making blogging a “habit” means it needs to fit in with who you are as a person. If you make it another “add-on”, it becomes something else that you try to ‘fit’ into your schedule and, unless it is a priority, it often ends up as something you think about as you doze off to sleep. Like exercise, eating, reading and a whole host of other habits, what you do in one area affects your life in other areas. Want more energy? Look at your eating and sleeping habits. Trying to reduce stress? What habits do you have for organization, sleep, etc.? There are many articles written about the habits of famous people. Although we can learn from these, it’s important not to try to be them but, instead, to be your best by developing your own success habits. I Started Running Last year I began running – again. I’ve started running a number of different times in the past, but usually it fell to the side – I just didn’t have the time to do something healthy! What was different this time? Partly I needed to take my health a bit more seriously. However, the biggest part, the part I usually don’t tell anyone, was that I needed to replace an unhealthy habit – smoking – with a healthy habit. But first, I had decided that I needed to change. That change was hard. But by replacing one unhealthy habit with a healthy habit I have been able to make changes in my life that are helping me be a healthier person. I now plan my days to include exercise and healthier eating. I’m still working on adopting other healthy habits – it takes time to develop a new habit. Like any other habit, writing and blogging needs to be planned and you have to have a “why”. Now, there is no “you must do this or else” part to this challenge.
As I wrote in January: Now, there has been a great deal written about the benefits of blogging and many connected professionals who do a great deal of blogging will attest to the benefits. Teachers who have a classroom blog discuss the many benefits to the process of blogging for their students. If, however, you wish to develop this into a consistent habit, then developing your “why” is important. As Simon Sinek points out in Start with Why: How Great Leaders Inspire Everyone to Take Action: Knowing your WHY is not the only way to be successful, but it is the only way to maintain a lasting success and have a greater blend of innovation and flexibility. When a WHY goes fuzzy, it becomes much more difficult to maintain the growth, loyalty and inspiration that helped drive the original success. For me, I have been blogging on and off again for a number of years. I just haven’t decided how it fits. Because I don’t have a resounding “WHY”, I often start off with great gusto only to fade quickly into the dark “It’s been 3 months since my last post?”. It’s not that I don’t have topics I could write about – I do have an opinion on almost everything! (Just ask my wife and kids). It’s actually been a case of “Who cares!”. Unlike others who can write just because, I need some feedback, something to tell me that I’m not just screaming into the storm. Yes, it’s good to work through things, but if there is no feedback then the same thing could be accomplished through journaling. Part of this blogging challenge is to encourage others to not just read but also comment on what people are writing, to provide feedback and offer suggestions. The idea isn’t to do this with all the blogs – that would become onerous and even I would feel too anxious to get involved. Instead, I suggest interacting as part of a habit you develop for reading blogs. We hope to have all the blogs curated at the Saskedchat blog site so people can read through the posts in one place. Ready for a Challenge?
For the next 2 months, each week a new topic will be posted for those who wish to take part in the weekly blogging. If that’s too much, then choose to do whatever works for you. If that isn’t enough, then use the topic as a springboard to help you. If the topic doesn’t resonate, do your own thing – this isn’t about prescription but support and encouragement. As Chris Brogan says: “Be a very clear and true version of you that helps others in some way.” So this morning as I sip my coffee with Southern Pecan creamer, I encourage you to join the challenge: to make a space in your life to share and interact with others through blogging, and to help and support others on this journey. This week – tell us about your “WHY” if you can. Why are you wanting to take this challenge? Why are you motivated to join? Why is this important to you? Why do you need to change? For me, part of my reason is to develop this into a habit that will last long after the challenge is completed. As I have made changes in other areas, I know that I need to have a “keystone pattern” that I can focus upon. I will use this challenge to develop a writing habit that will become integrated into my life-habits. What’s your “WHY”? It’s tempting to sit in the corner and then, voila, to amaze us all with your perfect answer. But of course, that’s not what ever works. Seth Godin The other day I gave a presentation in an undergraduate class about using social media in teaching. During the discussion, I was asked if students should continue to blog when they are done their classes. Yes. Continue to blog and share your learning. Make it a part of your professional practice. Don’t see it as an add-on but as part of your daily learning practice. Every day is a Professional Development day. See your blog as part of your PD practice. Blogging helps me to put my ideas down and work through them. Part of my online Portfolio shows the work that I am doing.
It is also a place where I can share what I am thinking about, pondering, exploring,….. Blogging is a part of my Professional Development. Sometimes I blog openly about it but other days I write just to work through ideas and thoughts. Not everything needs to be published. Ship before you’re ready, because you will never be ready. Ready implies you know it’s going to work, and you can’t know that. The purpose isn’t to please the critics. The purpose is to make your work better. Polish with your peers, your true fans, the market. Because when we polish together, we make better work. Seth Godin This is the part with which many, including myself, struggle. When is it “ready”? That’s not easy to decide. Harold Jarche’s recent post, a half-baked idea, discusses why blogging is important for everyone: “I’m thinking of doing some coaching in a few years and helping people make decisions around food and nutrition”, I was told the other day by a young man working in a shop. My advice was to start a blog: now. While he had no intention of freelancing for the near term, he needed to get his thoughts in order. A blog is a good place to do this over time. You can start slow. The process builds over time. My early blog posts were pretty bad but they helped me see what ideas I could revisit and build upon. And it took time. “And it took time.” In my post Blogging as a Professional I discuss some of the reasons teachers should blog and some of the things to consider when you start out, one of which is “why” you want to blog. This is important for keeping your focus. It’s easy to begin blogging but it takes time to develop your voice and produce your best work. Todd Henry discusses this in his latest book Louder Than Words. He calls this the Aspiration Gap: “When this gap exists, it’s often due to high personal expectations founded in your observation of the work of other people you admire. 
When you are incapable of producing work that meets those high standards, it’s tempting to give up far too soon. For this reason, many people either quit or move on to something more “reasonable” simply because they were frustrated by their temporary inability to achieve their vision” One reason I blog is that it’s part of my professional mission: “To relentlessly pursue supporting educators to develop creativity and innovation in the classroom through connections, relationships and effective professional development.” There are many people whose work I admire and follow. I don’t see my own work meeting those standards. Many days I hesitate to push “publish”. I know that being consistent is important, just as it is in any other aspect of life, because it helps to improve your skills. To make progress we have to consistently practice. As Seth Godin says, What works is evolving in public, with the team. Showing your work. Thinking out loud. Failing on the way to succeeding, imperfecting on your way to better than good enough. The interesting thing about this idea is that my portfolio may have found you, or you may have found it, but in both cases, anyone can see it. There are different ways I can share my learning through different mediums. I love to write, but I am also able to share through visuals, podcasts, video, or things that I couldn’t even imagine. But, as George points out, not all the learning he does makes it to the portfolio to be published: I also have the option of allowing you to see it or not. I do have spaces where my learning is for my eyes only, or in what I choose to share. This is a crucial point. Not all we do is ready for shipping. The learning process isn’t about publishing everything. Some works are in the incubation stage, some are in the development stage and some are at the sharing stage. You should ship when you’re prepared, when it’s time to show your work, but not a minute later.
Seth Godin Sharing our work isn’t easy but it is necessary for growth and development. Feedback from others helps us to reflect on the work being done. How are you continuing to develop and learn as a professional? Are you sharing that with others and getting feedback? Do you have an online portfolio? Are you shipping? Your mindset and attitude influence your success. What’s yours? I’d love to hear your comments and feedback so leave a comment. Thanks for taking the time to read. Trying Things On I have a confession. I like to go shopping. Yeah, it’s a bit weird but I like to wander around stores and look at what’s new. I used to enjoy going shopping with my girls when they were younger (and would let me go along!) Now, my boys and I sometimes just spend an afternoon wandering around and looking at different things. Sometimes, I even try things on. Things don’t always fit like I think they will. My mind’s eye doesn’t always give me an accurate image of what things will look like once I actually try them on. Sometimes things that I didn’t think would look that great look pretty good. It’s like that with many things in life. We don’t know how things will really turn out until we overcome our fear and try them. John Spencer’s latest post The Unintended Consequences of Doing Creative Work explores what happens when someone is working through the creative process. More often than not, the unintended consequences are actually both negative and positive at the same time. It’s neither all positive nor all negative, unlike how we often imagine things working out – we tend to see things as either/or, not a messy both. It’s Scary – the Fear is Real It is scary and difficult to try new things. We don’t know how they will turn out and we tend to imagine things that don’t happen – we convince ourselves that it’s not worth the risk. We talk ourselves out of trying something on because, well, we just know it won’t fit.
Average it out Respect the status quo Don’t even bother Seth Godin This spills over into the classroom. Instead of trying something different or giving students different options, we stick with what we know. It’s less scary. Our students learn that taking chances and trying things on is scary and, well, not really worth it. Yes, there are sometimes negatives that come along from trying things and being creative but, often, they aren’t what we think. The world does not end. In fact, if we are open to learning, we grow and develop from these experiences whether they are positive or negative. Rejection Proof is one person’s experiment in learning to deal with rejection – in trying things on that they were scared of doing. Jia Jiang asks: What is this rejection? What is this monster that cripples us? Try It On – It Just Might Fit Trying things on is taking an opportunity to see how something might fit. It doesn’t always fit but sometimes things fit that we didn’t think would. And sometimes, things we thought would be great, well, just don’t turn out that way. Often, we take someone along with us to get their opinion. We value the input of others. We get insights about how things look from a different perspective. What if we did this in school? What if we asked someone else for their opinion as we try something new? What if we asked our students what they might think would fit? Do we give them feedback after they try it or do we discourage them before they even try? Your mindset and attitude influence your success. What’s yours? I’d love to hear your comments and feedback so leave a comment. Thanks for taking the time to read. Another Edu-Awesome #saskedchat! Our topic was Student Engagement and our guest moderator was Jade Ballek (@jadeballek), a principal in the Sun West School Division at Kenaston Distance Education Learning Centre. We had over 40 participants take part in the chat.
For some, this was their first experience joining a chat, which can be a bit of a shock with how fast the chat moves and the number of different conversations that take place. With this number of participants, missing part of a conversation happens, and that is why we archive all the #saskedchats! What does "Student Engagement" mean to you? Over time, my ideas about student engagement have changed. As a young teacher I was focused on the lesson and my teaching, on creating lessons that were, I thought, "engaging". Later, as I developed confidence as a teacher and began to explore different teaching strategies, I became less worried about "my teaching" and more focused on "student learning". In Matt Head's post Learning or Teaching? he states: As I reflect on my own teaching I have come to realize that what and how I am teaching is usually my first priority. That is what teachers are doing: focusing on their teaching, because that is part of the job. There is the focus on planning, assessment, planning, classroom management, planning, classroom design, planning, student interaction, and planning. During a recent episode of ITTNation, Dave Bircher and I discussed Cross-Curricular planning and how the act of deeply understanding the curricula can open up opportunities for learning that allow for FLOW to take place: Focus, Learning, Observation and Wonder. Teachers are able to allow the Focus of the lesson to emerge from interaction with students. The Learning takes place through the interactions and is driven by student ideas, interests and passions. Through Observation the teacher is able to guide students in their interests while making connections to Learning Outcomes. This allows students and teachers to Wonder – exploring different topics and concepts from a place of Wonder. The current global focus in education is on what teachers do in the classroom.
Debates between Reformers of all types draw different ideas about what needs to happen in the classroom in order for students to be prepared for their future. Sometimes, missing from the debate, is what is happening NOW. How many educators are wondering about how the recent two wins by Google's AlphaGo over the world champion Go player will impact schools? What will this mean for students? Overall, Google's DeepMind is calling on a type of AI called deep learning, which involves training artificial neural networks on data — such as photos — and then getting them to make inferences about new data. Venture Beat Are we preparing students for today? Are we engaging them in a discussion about what is happening in the present? Too often the mantra is "Prepare for the Future". In some respects, today isn't even close to what I thought it was going to be 10 years ago. In other ways, it is. "Difficult to see. Always in motion is the future." Yoda This is not a call to toss out all of what is currently happening in schools and classrooms. In the present reform cacophony, it's hard sometimes to even hear oneself think, never mind trying to make sense of what is being proposed, especially when there is more and more being added to the discussion. This isn't just about what tools to use in the classroom or whether there should be interactive whiteboards, whether teachers should adopt flipped learning or embrace blended learning or Project Based Learning. The discussion includes environment design, learning design, social justice, content bias, differentiated learning systems, game theory, makerspaces, content diffusion, digital citizenship, digital literacy and other pedagogical and theoretical discussions/issues, each with their representatives and lobbyists. Education, it's a serious business. There are no simple answers, and stopping schooling until things get figured out isn't going to happen. It is a work in progress.
Yes, the shuttle is being built as it is being flown – it is the only way learning can continue. Engaging or Empowering? Our chat briefly touched on whether engaging someone is the same as empowering them. What do we want to happen in schools? Why is this important to discuss? As we live in the midst, it is struggling with such questions that helps us to make sense of the noise. If we want people to feel empowered, then releasing control and giving ownership is the only way this can truly happen. George Couros Are teachers being engaged or empowered? Are administrators? Are parents? Do we allow people to have ownership of their learning? How do we manage such a shift? Like other such discussions, everyday implementation is, itself, a work in progress. As an administrator, getting input from students and parents was important, but so were division and provincial policies. Providing leadership opportunities and helping people develop their strengths was important to developing a school culture of learning and growth. Shifting school culture from a top-down model to a collaborative/shared leadership model isn't just about "sharing responsibility". It involves creating a culture of shared growth, trust, learning and collaboration. Such development takes time and, in an environment of efficiency and improvement, can often be overshadowed by "what the data says". The #saskedchat provided a great many things to think about, some of which I've touched on. Your mindset and attitude influence your success. What's yours? I'd love to hear your comments and feedback. We do get bogged down by obstacles. They grab our attention. We spend time pondering how they got there. We even spend energy being angry about them. None of this is helpful. We have to look for the openings, choose well, and find our way around them. Rob Hatch How often do you hear someone wishing they had more freedom? Or opportunity?
I know I often said such things as I looked out onto the world and thought I was being held back. Turns out I was, but the reason wasn't linked to someone else. I was that someone. Often, instead of seeing the opportunities, I was focused on barriers. Instead of choosing freedom, I chose to acquire more responsibilities. When in doubt, when you're stuck, when you're seeking more freedom, the surest long-term route is to take more responsibility. Seth Godin In a world where possibility surrounds us, it can be difficult to admit that we are responsible for our own freedom in different ways. I would often look around and see what others were doing, seeing what I believed to be the freedom and opportunity they had compared to my own, mostly self-imposed, limitations. As a school administrator, the frustration and stress grew with mounting expectations. Instead of seeking the knowledge and expertise of the people who surrounded me, I forged on, almost wearing the frustration and responsibility like a badge. Responsibility without freedom is stressful. There are plenty of jobs in this line of work, just as there are countless jobs where you have neither freedom nor responsibility. Seth Godin Part of the issue was that my attitude was keeping me in a place where there was little opportunity for freedom despite a great deal of responsibility. I was afraid of "freedom" so it was easier to take on more responsibility, hoping it would somehow lead to more freedom. A Feeling of Dissonance The lack of freedom created a dissonance in the work environment. The increasing number of details that educators are required to deal with and work through each day ("follow the plan, do the initiative, fill in the form. Don't make mistakes.") creates a dissonance when they are also urged to "take risks and be innovative". This type of dissonance, like the dissonance of a sound that is off, creates stress that drains creativity and energy.
Expectations and responsibilities are part of any work. It's how these impact the environment, work culture, and individual performance that is important. When we experience a dissonance, it bothers us and makes us uneasy. We want to correct the dissonance. It is much like the attention residue that results from multi-tasking, which prevents you from moving smoothly from one task to another: it is difficult for people to transition their attention away from an unfinished task and their subsequent task performance suffers. Sophie Leroy This dissonance continues to impact all aspects of the learning environment. Students and teachers are affected by the dissonance created when what is said doesn't align with what is expected. "We want our students to be risk-takers and collaborators – but our reporting system rewards individuality and conformity" It's In the Details Now, details matter. As Dean Brenner discusses, details are important: But the amount of detail we discuss in meetings and presentations, and the way in which we communicate it, is a daily source of frustration in many work cultures. Often, there is an overwhelming amount of detail, in the form of data, provided to educators. This increased amount of detail becomes an overwhelming point of stress, not because of the detail itself but because of the lack of opportunity to reflect on it, integrate it into the current situation, and make the adjustments and changes indicated by the data. No one wants their time wasted. You must walk into the room ready to get to the point. You should include enough detail to satisfy the expectations and facilitate discussion, but not so much that everyone is looking at their watches. (Or, in the case of a classroom, the floor, the ceiling, out the window or in the desk!) This applies to all parts of education – we want people to be empowered to learn and develop. In an educational setting, we should Be ready to go deep, but allow the audience to take you there.
In classrooms, staff meetings, professional development, and presentations, is the audience allowed to direct what takes place? What if those sitting in the audience were provided the opportunity to go deep? How often do you attend a workshop or PD event where a presenter makes some great points but there isn't time to reflect? Why doesn't this happen more often? Your mindset and attitude influence your success. What's yours? I'd love to hear your comments, ideas, and thoughts. Thanks for reading and sharing with me. Running and Pacing I've been training for an upcoming 1/2 marathon for a while. Now, in order to do this, I've had to make a few changes to my lifestyle. I have adopted an early morning routine. That change, in itself, has been the subject of a number of books and podcasts. However, all the changes don't mean anything if I don't actually put on my running shoes and run. As I prepare for this upcoming meet, I've adopted a running routine. Part of the routine is to help me with my pacing and the other part is to help me improve my running. I used to have the idea that "Well, I just need to run." But, as Susan Paul explains: The marathon is a very unique blend of different running components; it requires speed, strength, and endurance. The different training paces you see recommended for runs reflect each of these components. You will need some speed, some strength, and a lot of endurance to successfully complete your race. So I did some searching and found a routine for a 1/2 marathon that I am following. Now, I could have just gone it alone, but there are many people who have already done this and have advice and ideas that can help me as I train, especially since I haven't been doing much long-distance running in a while. I casually run (is that even possible?) but not in the same way one does in a marathon-type event.
The Act of Running Running is a solitary act but it can be done as part of a group, and there are all sorts of online groups and sites that allow you to connect and track your running. I happen to run by myself in the morning mostly because, well, I'm the only one up in my house at that time, no one else wants to get up and run with me at that time, and I don't know anyone around who is running. I could find someone but I like running on my own. It gives me time to think and wrestle with different ideas and concepts. But it's not for everyone and that's okay. In fact, finding our own pace and place is part of the fun and enjoyment of living. The act of running, however, isn't the only thing I do. It is only a part, and to define me through it misses so many other things. "Exactly how is this going to connect to technology?" I've been reading a number of posts that discuss technology and its use in schools. Everything from looking at how to get teachers to embrace technology to reflections on the use of technology in schools and some of the issues with what is currently happening. I see many of these as being how I used to view running – just run. You know what to do; running is something that we have done since just after we learned to walk. But, as Susan Paul points out: Yes, you can "just go out and run" but you would be wise to incorporate runs that address these aspects of running to adequately prepare yourself for the demands of the marathon. Marathon training requires logging quite a few miles each week too, so by varying your training paces and mileage, you'll not only improve the quality of your training, but you will also reduce the risk of injury or mental burnout. What if we looked at learning, with or without technology, in this way? Varying the pacing and mileage of learning. Doing different courses and incorporating various aspects into the training? At 50, I can no longer train like I did but it doesn't mean I can't continue to run.
In the same way, meeting the needs of the learner means beginning where they are and listening before we start advocating particular ways of doing things. We need to start with their passions and ideas, but there is a place for learning from others and their wisdom and knowledge. Neither age nor experience, in this case, is "the" determining factor of what can be accomplished. Too often, as Stephen Covey said, "Most people do not listen with the intent to understand; they listen with the intent to reply." How often have we begun a discussion with a fixed position or way of doing something or point of view already firmly established and ready for the discussion? To Whom Do We Listen To be honest, listening to someone who has run many marathons and is a veteran might not be the best solution for me. I need to consider a few different things that a veteran marathoner might not be able to tell me as someone starting out. Sometimes, as someone who has been using technology for years, I have had to remind myself of this point. I have a perspective that might not be as open as I'd like to think. In this way, looking outside of education can give us some great insights. "I would let that kid know that it's not too late. Doors might be closed, but that doesn't mean that they're locked." That conversation has stuck with me since then. What if he's right? What if we told kids that they don't have to have it all figured out ahead of time? What if they knew that doors might be shut but they aren't locked for good? What if we approached all our relationships and conversations from this perspective? Do we close doors because of our own mindset and what people have told us? How do you approach learning? Why do you think this way? Your attitude shapes your mindset. What's yours? I'd love to hear your ideas and comments and what you are thinking about. A few years back, my daughters were given the responsibility of running the local swimming pool for the summer.
They were hired by the local pool board and given the responsibility of getting the pool ready for the upcoming year. There was a manual and someone who worked on maintaining the mechanical aspects of the pool, but they were responsible for the rest. The one hired as the general manager asked the chairperson how she was supposed to learn all that she needed to do. I absolutely loved the response, which I was fortunate to hear because, in a small town, they were discussing this in our kitchen: We hired you because you are smart and capable. We know that you have the skills necessary to do what is needed. We will support you and I can tell you who you can contact for help, but you are the manager. You and your staff will need to keep the pool up and running and I can't be leaving work to help you out. I'll do what I can but we have full faith that you will be able to do what is necessary. That's why we hired you. And the girls did just that. It was one of the best learning experiences my daughters had before they went to university. To this day, they talk about how much they learned. They still get the odd phone call from new managers about how to do things. Did they make mistakes? You bet they did. Were there stressful moments? Yep. I was privy to some "deep discussions" (arguments) between the two sisters about everything from schedules to expectations of staff to expectations of patrons to what pool toys to purchase (who knew a blow-up whale could cause so many problems!) The board trusted these young people to do what was right and make good decisions, and were rewarded for that trust with hard work and young people who gave it their all (and a lot more) and provided a great service to a small community. Grew Their Strengths There were courses to take and tests to pass, inspections to meet and technical aspects to master. Each one required different strengths to be developed. Each girl had different strengths which they were allowed to use – to grow.
Because they were allowed to use their strengths, they were willing to take risks. And when something wasn't a strength? Fortunately, each of the girls who worked there (and they were all girls) had different strengths which they used. Sometimes, it took the intervention of someone to point out that maybe someone else might be better suited to organizing the swimming lessons or managing the chemicals and ensuring that all safety standards were met. Did they always use their strengths? Nope. In fact, stubborn determination sometimes meant they had to learn through mistakes. But mistakes they did make, and learn they did. For three years, this group managed an outdoor pool in a small town, taking it from losing money to breaking even. All have gone on to other things but each of them grew in so many ways during that time. I was fortunate to be able to learn with/from them. The role of school leadership and its impact on change and innovation has been well documented and discussed. There are different opinions as to the exact extent of the impact that school leadership has on student achievement or the changing role of school leaders in schools today. As a former school administrator, there always seemed to be a wide array of opinions about what I should be doing as a leader and what my role was within the school and the community. Having been an administrator in 8 different schools in 5 communities, my experiences were different and unique in each setting. Although there were some things that were similar, each school and community was unique with its own set of characteristics, strengths, and challenges. Seeing Strengths in Others In education, we traditionally focus a great deal of attention on weaknesses or areas of improvement. A great deal of Data Driven Decision-Making is focused on identifying areas for growth – areas of weakness – that need improvement.
One of the primary responsibilities of an educational leader is to use that data to identify areas and implement initiatives to make improvements. A lot of time and effort is spent on looking for deficits. It's somewhat similar at all levels: identify weaknesses and areas for improvement, and focus on these. But What About Strengths? As an administrator I spent so much time focused on identifying weaknesses in everyone, including myself, but not nearly enough time identifying strengths and helping people use and improve them. What I learned from watching my daughters was how important it was to focus on strengths – grow them, improve them, nourish them. Through a collaborative team effort where people's strengths are combined, the synergy of the team leads to even greater growth and development, especially in areas of strength. Liz Wiseman in Multipliers identifies 5 traits of leaders who grow people – develop them and allow them to improve. And areas of weakness? They improve but, more importantly, they aren't used to hold someone back from progressing and growing. Differentiate to Grow Strengths Too often an inordinate amount of time is devoted to weaknesses instead of building teams that are strong because of the variety of strengths the people on the team possess. Teachers, for the most part, spend their days working in classrooms with students. Many teachers are themselves Multipliers, helping students to grow and develop strengths. However, these strengths aren't the ones found on tests or reflected in test scores, which shifts the focus away from helping both teachers and students grow and develop their strengths. Too often, time is spent trying to improve areas of weakness, with minimal improvement, while areas of strength are left without development. This stifles growth and drains students and teachers of energy. To foster innovation, we need to support people in using their strengths, giving them the freedom to develop and improve.
We tend to think of innovation as arising from a single brilliant flash of insight, but the truth is that it is a drawn-out process involving the discovery of an insight, the engineering of a solution and then the transformation of an industry or field. That's almost never achieved by one person or even within one organization. If we truly are looking for innovation in education, focusing on improving deficits will not bring that innovation. Instead, allowing people, teachers and students, to use, develop and grow their strengths through collaborative efforts and connecting provides opportunity for creativity and innovation and the possibility of transformational growth. How are you growing others' strengths? How are you growing your own strengths? I'd love to hear your experiences, either of helping others to grow or of someone who helped you and the impact it had on you.
Called “FrackingSense Greeley: What We Know, What We Don’t Know, and What We Hope to Learn about Oil and Gas Development,” the series is organized by the University of Colorado-Boulder’s Center for the American West in conjunction with the AirWaterGas Sustainability Research Network funded by the National Science Foundation, and with support and participation from the city of Greeley, KUNC-FM 91.5, Mineral Resources Inc., Synergy Resources, the Greeley Tribune, UNC and Weld Air & Water. Monday’s session, “Drilling Practices and an Overview of the Issues,” will be held from 6:30 to 9 p.m. in the Longs Peak Room of UNC’s University Center. Panelists will include Patty Limerick, Center for the American West faculty director; Will Fleckenstein, interim department head and BP adjunct professor of petroleum engineering at Colorado School of Mines in Golden; and Joe Ryan, professor and Bennett-Lindstedt Faculty Fellow in the Civil, Environmental and Architectural Engineering Department at CU-Boulder. After the session, those attending are invited to an informal discussion at Kress Cinema and Lounge, 817 Eighth Ave., Greeley. The second session, “Risk Assessment and Modern Life, with Air Quality Emphasis,” will be held from 6:30 to 9 p.m. April 28 at Billie Martinez Elementary School, 341 14th Ave. in Greeley. Panelists will be Limerick; John Adgate, environmental and occupational health chair at the Colorado School of Public Health; Jim Martin, senior counsel at Beatty & Wozniak P.C. and former Region 8 administrator for the federal Environmental Protection Agency; and Garry Kaufman, deputy director of the air pollution control division of the Colorado Department of Public Health and Environment. The final session, “The Regulatory Environment for Oil and Gas,” will be held from 6:30 to 9 p.m. May 12 at Northridge High School, 7001 Grizzly Drive, Greeley. 
Panel participants will be Limerick; Colorado Oil and Gas Conservation Commission director Matt Lepore; Mike Paules, senior staff regulatory adviser at WPX Energy; Gunnison County Attorney David Baumgarten; and Gary Graham of Western Resource Advocates. Although the project is funded by the National Science Foundation, the foundation is not responsible for any statements made by speakers in the program. For more information, contact Limerick at 303-492-4879 or firstname.lastname@example.org, or Brad Mueller, director of community development for the city of Greeley, at 970-350-9786 or Brad.Mueller@greeleygov.com.
Summary of the Second NCI Epidemiology Leadership Workshop: Understudied Rare Cancers
Co-sponsored with the NIH Office of Rare Diseases
Sponsored by: Epidemiology and Genetics Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute, National Institutes of Health, and Office of Rare Diseases, National Institutes of Health
- Opening Session
- Welcome and Meeting Overview
- Design Issues in the Study of Rare Cancers: Rare Cancers Working Group Report of the 1st NCI Epidemiology Leadership Workshop
- Statistics on Rare Cancers From the SEER Program
- Cancer Registry Issues in Studying Rare Cancers: A NAACCR Perspective
- Keynote Address: My 30-Year Love Affair with Hodgkin's Lymphoma—Lessons Learned
- Charge and Mission
- Early Morning Session for New Investigators
- Plenary Session
- Panel Discussion
- Luncheon Keynote Address
- Plenary Session and Panel Discussion
- Final Sessions
- A – Cancer Site Working Group Report: Brain and Eye Cancer
- B – Cancer Site Working Group Report: Endometrial Cancer
- C – Cancer Site Working Group Report: Esophageal, Liver, Stomach, and Renal Cancer
- D – Cancer Site Working Group Report: Head and Neck Cancer
- E – Cancer Site Working Group Report: Hodgkin's Disease and Leukemia
- F – Cancer Site Working Group Report: Non-Hodgkin's Lymphoma, Myeloma, and Kaposi's Sarcoma
- G – Cancer Site Working Group Report: Ovarian and Testicular Cancer
- H – Working Group Participants List
Session Chair: Margaret R. Spitz, M.D., M.P.H., Professor and Chair, Department of Epidemiology, The University of Texas M. D. Anderson Cancer Center
Welcome and Meeting Overview
Edward Trapido, Sc.D., Associate Director, Epidemiology and Genetics Research Program (EGRP), Division of Cancer Control and Population Sciences (DCCPS), National Cancer Institute (NCI)
Dr. Trapido opened the meeting by describing how investigators were chosen to attend the meeting.
Attendees included EGRP-funded investigators working on rare cancers, defined as cancers of organ sites with 40,000 or fewer cases per year. The purpose of the meeting was to facilitate a broadening of EGRP's portfolio of research grants to include more understudied and rare cancers. The following types of cancers are being addressed by this workshop, and EGRP-funded investigators working on them were included:
- Brain and ocular cancer
- Oral cavity and pharynx cancer
- Head and neck cancer
- Endometrium, ovary, and testis cancer, including cancers of the vulva, vagina, and penis (no studies are currently funded on these cancers), and excluding cervical cancer because the etiology is well understood
- Digestive and urinary systems cancer, including esophagus, stomach, liver, and kidney cancer, and also small intestine, anus, gallbladder, and ureter (no studies are currently funded on these cancers)
- Larynx, bones, joints, soft tissues, thyroid, and other endocrine systems
- Non-Hodgkin's lymphoma, which has more than 40,000 cases per year but is understudied; Hodgkin's disease; leukemia; myeloma; and Kaposi's sarcoma
Pancreatic cancer was excluded for the purposes of this workshop because a Program Announcement (PA) on the disease recently was issued, and another funding opportunity is planned. EGRP-funded investigators were asked to suggest the names of junior investigators, who were also invited to the workshop to promote interest in the study of rare cancers.
Design Issues in the Study of Rare Cancers: Rare Cancers Working Group Report of the 1st NCI Epidemiology Leadership Workshop
Isis S. Mikhail, M.D., M.P.H., Dr.P.H., Program Director, Clinical and Genetic Epidemiology Research Branch (CGERB), EGRP, DCCPS, NCI
Dr. Mikhail reported on a workshop, held at the 1st NCI Epidemiology Leadership meeting, to gather input from NCI investigators on why and how best to study rare cancers. The workshop focused on adult tumors.
Rare adult cancers were defined as those with an incidence of less than 15 per 100,000 or fewer than 40,000 cases per year. Workshop participants indicated a number of reasons why the study of rare cancers is worthwhile. As a group, rare cancers can have a large impact, especially in certain populations. The total incidence from all rare tumors is substantial, and rates of some have risen steadily over the last several years (for example, esophageal cancer). Some rare cancers are highly lethal, and those that occur at a young age result in significant years of life lost. Some otherwise rare cancers occur disproportionately in specific ethnic groups, such as male breast cancer in Zambians or nasopharyngeal cancer in Asians. In addition, the study of rare cancer etiology could improve our understanding of all cancers. Some rare tumors tend to have a simpler etiology (for example, retinoblastoma, angiosarcoma), which if understood might provide insight into the etiology of common, more complex cancers. Family studies have shown that some rare cancers tend to be heritable, thus perhaps shedding light on genetic mechanisms. Also, the first study of a rare tumor is more likely to give useful results than the 101st study of a more common and complex tumor that has thus far proven intractable. There are ethical reasons to study rare cancers as well. Rare tumors have been given much less attention by the research community. Patients who have rare cancers should not carry the burden of disease alone and should be allowed some hope that a cure is in the future. Workshop participants also addressed the question of how best to study rare cancer etiology. Several methods were proposed, including the use of descriptive data from the Surveillance, Epidemiology and End Results (SEER) Program, the use of existing cohorts, and piggy-backing onto existing clinical trials.
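The two-part threshold in this working definition (incidence below 15 per 100,000, or fewer than 40,000 new cases per year) is simple arithmetic. The following is an illustrative sketch only, not an NCI tool; the case counts and population figure are hypothetical:

```python
# Illustrative sketch of the workshop's working definition of a rare
# adult cancer: incidence below 15 per 100,000, or fewer than
# 40,000 new cases per year. Not an NCI tool; numbers are hypothetical.

RATE_THRESHOLD = 15.0      # cases per 100,000 person-years
COUNT_THRESHOLD = 40_000   # new cases per year

def incidence_per_100k(cases: int, population: int) -> float:
    """Crude incidence rate per 100,000 population."""
    return cases / population * 100_000

def is_rare(cases: int, population: int) -> bool:
    """True if either the rate or the annual case count falls below threshold."""
    return (incidence_per_100k(cases, population) < RATE_THRESHOLD
            or cases < COUNT_THRESHOLD)

# Hypothetical example: 8,000 annual cases in a population of 290 million.
us_population = 290_000_000
print(round(incidence_per_100k(8_000, us_population), 2))  # 2.76
print(is_rare(8_000, us_population))                       # True
print(is_rare(250_000, us_population))                     # False
```

Because the two criteria are joined by "or", a cancer qualifies as rare on either one; non-Hodgkin's lymphoma, which exceeds 40,000 cases per year, fails both and was included in the workshop as understudied rather than rare.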
Multiple existing cohorts, while modest in size individually, could be combined to potentially identify moderate to strong risk factors. A potential caveat would be whether or not questionnaire data and biospecimens had been collected and, if they had, whether these materials would be obtainable. Existing clinical trials have been used before to gather etiologic data on childhood cancers. There is a potential for bias because clinical trial cases are likely to have the worst prognosis. Workshop participants noted that we cannot afford to be overly fastidious in studying rare cancers, as strong apparent risk factors should be robust to small biases. Dr. Mikhail urged researchers to stay "open minded" about this approach. The participants also highlighted the importance of new studies that could be designed to address specific hypotheses, to generate fresh samples for use in phenotypic assays, or to allow molecular characterization of subgroups within a specific rare cancer type. These new studies could be integrated with prognosis and treatment studies, and could pool baseline data from multiple rare tumor types. The studies could be simplified by creating a common rare tumor protocol, including a single questionnaire and a common biospecimen collection protocol. Participants recommended that such studies be hospital-based since "that's where the money is." Dr. Mikhail concluded by noting that both the use of existing cohorts and clinical trials and the design of new studies will depend on building partnerships among researchers, along with a supportive infrastructure for collaboration. Workshop participants proposed that NCI set aside supplemental funds to explore the feasibility of using the NCI-designated Comprehensive Cancer Centers to facilitate such partnerships. Statistics on Rare Cancers From the SEER Program Benjamin F. Hankey, Sc.D.
Chief, Cancer Statistics Branch (CSB) Surveillance Research Program (SRP), DCCPS, NCI The next two speakers described how tumor registries could be used to advance the study of rare cancers. Dr. Hankey provided a series of tables and graphs showing statistics on rare cancers from the SEER database. The tables and graphs, which showed incidence rates, mortality rates, trends, survival data, and other statistics, were intended for use in the working group sessions if needed. In addition to statistical data, Dr. Hankey described a number of services that SEER provides. Researchers can access SEER's public-use file through the Internet, along with software tools that can facilitate their own statistical analyses. For example, SEER software can be used to calculate different types of survival rates, including crude, net, observed, and relative. It can be used to calculate frequencies, incidence rates, and prevalence. Such tools might be useful for generating etiologic hypotheses. Researchers who do not have time to learn how to use the tools can also ask the SEER staff to perform desired analyses. Two main Web sites give researchers access to SEER cancer data and statistical tools: - The SEER Program at the Cancer Statistics Branch: Contains the SEER public-use file, SEER statistics tutorials, and cancer statistics. - The Statistical Research and Applications Branch: Contains software tools, such as CanSurv, for cancer survival analysis; ComPrev, for prevalence analysis; and DevCan, for calculating the probability of developing or dying of cancer. SEER's data and advanced statistical tools might be especially valuable for the study of rare cancers, Dr. Hankey said. Cancer Registry Issues in Studying Rare Cancers: A NAACCR Perspective Holly L. Howe, Ph.D. North American Association of Central Cancer Registries In a presentation on population-based cancer registries, Dr.
Howe described the work of the North American Association of Central Cancer Registries (NAACCR), an umbrella organization of all cancer registries and surveillance programs in the United States and Canada. NAACCR defines standards for data collection and incidence statistics, trains registration professionals on these standards, certifies registries achieving high data quality, releases an annual statistical monograph, conducts population-based cancer research and surveillance, and promotes the use of population-based cancer incidence data in cancer research conducted by others. Dr. Howe revisited the question of rare cancer definition. She suggested that no standard definition exists and many are used: cancers with rare organ/histology combinations; rare subtypes of common cancers (for example, inflammatory breast cancer); and rare, exposure-related cancers, such as mesothelioma. She also noted that some cancers are rare only in specific age groups or populations. She suggested that a rare cancer might also be defined as an orphan cancer: one with no support, no advocates, and no population-based information. Without a standard definition, studies on rare tumors will not be comparable, and surveillance statistics used for directing research on rare tumors will also be inconsistent. In addition to definition, registries face other challenges in registering rare cancers. There are questions about the validity of diagnosis. If a cancer appears to be very rare in a given data set, is it due to reporting or coding errors, or to inconsistency in pathology interpretation? Such errors will affect statistics for rare cancers, and even the definition of a rare cancer itself. Dr. Howe presented rare cancer data from the NAACCR Cancer in North America (CINA) aggregated data set, from seven Canadian provinces, 42 U.S. states, and Washington, D.C. These data represent cancer cases from about 60 percent of the U.S. population and one-third of the Canadian population.
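Registry counts like these feed directly into the standard surveillance statistics that both SEER and NAACCR report. As an illustration only (the formulas are the textbook definitions of a crude incidence rate and of relative survival; all numbers below are invented, not registry figures):

```python
# Two standard surveillance quantities, sketched from their definitions.
# All input values here are made up for illustration.

def crude_incidence_rate(new_cases: int, person_years: float) -> float:
    """Crude incidence rate per 100,000 person-years of observation."""
    return new_cases / person_years * 100_000

def relative_survival(observed: float, expected: float) -> float:
    """Relative survival: observed survival proportion in the patient
    cohort divided by expected survival in a demographically matched
    general population (from life tables)."""
    return observed / expected

rate = crude_incidence_rate(new_cases=120, person_years=1_500_000)
print(f"{rate:.1f} per 100,000")                 # 8.0 per 100,000
print(f"{relative_survival(0.60, 0.92):.2f}")    # 0.65
```

Under the workshop's threshold of 15 per 100,000, a rate like the hypothetical 8.0 above would mark the tumor as rare.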
These data tables can be used in the working group sessions, if needed, along with the SEER data. Her group is planning an overview paper of rare tumors in the NAACCR aggregated data set that will first establish a rare tumor definition, then assess data quality, and finally provide descriptive statistics for all included sites. Rare tumors with sufficient numbers to enable more detailed epidemiologic descriptions will be identified, and a consortium will be convened to prepare a series of manuscripts that may be compiled into a monograph on rare tumors. Dr. Howe proposed that NAACCR could help rare tumor research by acting as a coordinating center for a rapid case ascertainment network (RCAN), which could include all registries in the U.S. and Canada. A NAACCR RCAN could provide quality control for diagnoses, obtain patient consents, collect biospecimens, and refer consented participants to investigators for interviews. This population-based RCAN would promote consistency and efficiency among studies conducted in various states and regions. The large number of cases and the population-based research capability offered by a NAACCR RCAN should be especially valuable for the study of rare tumors, she said. My 30-Year Love Affair with Hodgkin's Lymphoma—Lessons Learned Nancy Mueller, Sc.D. Professor, Department of Epidemiology Harvard School of Public Health In the keynote address, Dr. Mueller described her experiences studying the rare cancer Hodgkin's lymphoma (HL). Hodgkin's is noted for a "perplexing" bimodal U.S. age-incidence curve that peaks in young adulthood, around age 25, followed by a drop in incidence and another peak after age 45. The young adult form of HL is associated with higher socioeconomic status, a smaller number of siblings, a more highly educated mother, and living in a single-family home.
All of these factors can influence the age of first childhood infections, which led to the hypothesis that young adult HL was associated with delayed infection by a common oncogenic agent. Epstein-Barr virus, or EBV, was the "prime candidate," Dr. Mueller said, because it is B-cell tropic, and HL is a malignancy of B cells. Moreover, HL patients often showed an abnormal antibody profile against EBV at first diagnosis, and multiple studies have shown that altered EBV antibody levels can be present years before and after HL diagnosis. But molecular evidence for an EBV-HL link was lacking. Such evidence was difficult to obtain before the availability of molecular biology techniques because very few cells in an involved HL lymph node are actually malignant. The breakthrough came in 1989 with the demonstration of clonal EBV genomic DNA in about 30 percent of HL cases. The presence of clonal DNA implied a very early role in HL pathogenesis, but if EBV was central to HL pathogenesis, "Why not 100 percent?" asked Dr. Mueller. Epidemiologic data presented several other paradoxes. EBV-genome-positive HL cases were primarily older adults, not the young adults the hypothesis would have predicted. EBV-positive status was found to be associated with poorer living conditions, greater age at diagnosis, and other factors that lead to poorer immune response. These results led to the further hypothesis that EBV starts the oncogenic process in all HL, but is "kicked out" in patients with adequate immune function and thus no longer detectable. In the 1990s Dr. Mueller obtained a program project grant to test this hypothesis. The project included a population-based case-control study carried out by her group, a cohort study on pre-diagnosis specimens, and functional immunologic analyses on biosamples from EBV-positive and EBV-negative cases. This study found no evidence that EBV was involved in EBV-negative HL cases. 
Moreover, there appeared to be no difference in the susceptibility of younger EBV-negative HL cases to late infections, including EBV. She has since concluded that EBV-negative HL may be due to another, as yet unknown virus, with a similar transmission pattern to EBV and a greater oncogenic potential in immune-competent persons. Dr. Mueller followed this summary of her HL research by describing what she has learned about the rewards and pitfalls of studying a rare cancer. While post-childhood HL is highly curable, she said, it remains a significant health problem for survivors and an intriguing scientific problem. However, because HL is so rare, she found few epidemiologists with whom to share data, and few basic scientists interested in collaboration. Funding was also a problem, as HL receives a low priority in peer review and attracts few advocates. She also highlighted the need for academic scientists to diversify their portfolio by doing parallel research on another disease, both to gain additional perspective and to maintain a good publication record. She said that it took a good 10 years for her case-control study to be completed, from the time of application through data acquisition and analysis. Dr. Mueller concluded her talk by saying that the reward of rare cancer research lies in the opportunity to make a difference. "It's really a labor of love... you do it because you care," she said. At the conclusion of her talk, Dr. Mueller was honored with a certificate of appreciation for her contributions to epidemiology. "Nancy has been an important leader in the epidemiological community, both within NCI and in the wider scientific community," Dr. Trapido said. Charge and Mission Edward Trapido, Sc.D. Associate Director, EGRP, DCCPS, NCI Dr. Trapido closed this opening session by describing NCI's mission and how EGRP and the present meeting fit into that mission. 
He noted that the deadline for the NCI challenge goal to eliminate the suffering and death due to cancer by 2015 is getting ever closer. "We have our work cut out for us," he said. One of NCI's primary goals is a better understanding of gene-environment interactions, part of the mandate of EGRP. Within the context of NCI, Dr. Trapido emphasized that although there are budget issues, NCI spends the bulk of its $6.17 billion budget on extramural research projects led by individual investigators. Budget issues include a lack of increases, institutional "taps" for new initiatives, and out-year commitments for multiyear awards. These pressures have increased appreciation for the value of resource sharing, leading to the current data sharing policy. Despite these pressures, however, he noted that NCI still receives "the lion's share" of funding, and there is still a lot of money for extramural research. EGRP's research portfolio runs the gamut from understanding subcellular mechanisms of cancer to health outcomes in cancer patients. EGRP also has budgetary issues with respect to its approximately 500 grants. Pay lines are tougher, grant proposals are more closely scrutinized, and consortia are becoming increasingly attractive as a means of maximizing resources. But even with these issues, Dr. Trapido emphasized that there are still many opportunities for individual investigators, particularly in rare cancers. The purpose of this meeting is to gather together research leaders to explore the scientific issues, identify the common roadblocks, and update researchers on the funding available for rare cancers. Junior scientists were invited to give them a chance to learn from experienced NCI-funded scientists. In turn, NCI leaders have been invited to gather suggestions about new approaches to the study of rare cancers that might enable them to justify new programs and funding opportunities. Ultimately, said Dr.
Trapido, these new approaches are likely to include transdisciplinary research in areas like behavioral and survivorship studies. He also suggested that increased use of the Specialized Programs of Research Excellence (SPOREs) and the Comprehensive Cancer Centers would be likely to play a role. "Reaching out towards other people really is the name of the game" for more effective research, he said. Early Morning Session NCI-Supported Opportunities for Training and Career Development in Cancer Research Lester S. Gorelic, Ph.D. Program Director, Cancer Training Branch, Office of the Deputy Director In a special session for young investigators in the early stages of their careers, Dr. Gorelic described the two major types of extramural funding that are available from NCI: institutional grants (grants awarded to the institution to which the applicant applies for funding) and individual grants (the applicant applies directly to the National Institutes of Health (NIH)/NCI, and the award is made directly to the applicant). STEPS IN THE GRANTS PROCESS: Dr. Gorelic stated that the first step in the grant process is for a young investigator to assess his or her career goals before initiating a search for extramural funding opportunities. For example, individuals with a doctoral degree should determine what their research focus is for the immediate future, where they currently are in their career development, how much research experience they already have, and what their strengths and deficiencies are. For clinicians, it is also necessary to determine whether their research career will be patient- or laboratory-focused, or whether they plan to pursue a career in translational research.
It is also important that they assess the proposed research environment to determine the level of support for research career development, including sources of funding, availability of appropriate onsite mentors, opportunities for collaboration, and other resources, including access to patients. For clinicians, it is also important to assess the institutional culture as it relates to the support of clinical research versus clinical practice. After formulating a career development plan, young investigators should search for extramural funding from Federal sources, professional societies, and foundations, and can do so by accessing their Web sites. Published compendia and professional colleagues are also good sources of information. After selecting potential funding sources, it is very important for individuals to identify the appropriate contact person at the funding agency who can help determine which program is the most appropriate for the applicant's needs. NIH/NCI FUNDING TRAINING OPPORTUNITIES: Dr. Gorelic then described 14 NIH and NCI grant mechanisms that are appropriate for various stages in the career track of young investigators and span the continuum from the earliest mentored career development awards to those awarded to established investigators. - Institutional awards made directly to an institution include the National Research Service Award (NRSA) T32 program, which provides research training for those with very limited research experience as well as for postdoctorals; the K12 award, which provides funding for clinicians who wish to conduct patient-oriented research; and the NCI R25T program, one of the fastest growing segments in NCI's grant portfolio, which supports a research career development experience that is directly relevant to epidemiologists.
- Individual awards include the F32 postdoctoral fellowships and the career (K) awards. K awards can be mentored, for individuals early in their research career development; unmentored, for individuals who are transitioning to their first independent position or are within the first 2 years of a first independent research position; or for established principal investigators who need protected time to expand their own research programs and to mentor those of young investigators. Individuals who are early in their epidemiologic research careers should consider the NIH F32, K08, or K23 awards, or the NCI K07, an award specifically for individuals pursuing a career in cancer prevention, control, behavioral, or population sciences research. The NCI K22 should be considered by mentored individuals (Ph.D. or health professional degree) who are pursuing a research career in cancer prevention, control, behavioral, or population sciences, or who have health professional degrees and are pursuing careers in basic or patient-oriented cancer research, and who are ready to transition to their first independent position or are within the first 2 years of their first independent research position. Federally employed Ph.D. basic scientists are also eligible to apply for the NCI K22. A unique feature of the NCI K22 is that applicants do not need a sponsoring institution to apply for an award; they have up to 12 months to identify an appropriate sponsoring institution to "activate" an award should one be made. Finally, mid-career investigators in patient-oriented research and established investigators in cancer prevention, control, behavioral, or population sciences are eligible for the NCI K05 award, which provides protected time to expand their research program and to mentor young investigators.
Individuals from groups underrepresented in biomedical research should refer to the NCI Comprehensive Minority Biomedical Branch (CMBB) for additional opportunities for support of research training and career development. GUIDELINES FOR PREPARING A GRANT APPLICATION: Dr. Gorelic presented general guidelines that young investigators should follow when they are considering submitting an application for extramural funding. First and foremost, they should work with their mentor(s) and with the NIH/NCI grant program contact person to identify the funding mechanism(s) appropriate to the stage of their career and their desired research career plans. The "K" Awards: Dr. Gorelic also focused on the elements that are critical to preparing a successful application for mentored individual NIH/NCI career (K) awards. Major elements of the "K" awards are the career development plan (which includes a description of the research plan), the didactic plan, and the expertise of the selected mentors/co-mentors/collaborators. The proposed sponsors (mentors) must have expertise and current research support in the proposed area of training, as well as a proven track record in training researchers. Dr. Gorelic advised that the research plan should be focused and should include a small number of well-defined, hypothesis-driven specific aims. Preliminary data should be included to show that the applicant has some experience with the methodology to be used to achieve the objectives of the research plan, and are essential wherever the feasibility of the proposed studies is in question. The research plan must parallel the objectives of the proposed career development plan. The application should identify potential pitfalls, explain how those pitfalls will be circumvented, and cite, in the Background section, the critical research by others in the field.
Everything in the career development and research plans should be developed through careful consultation with the sponsor (for a mentored award). If one is applying for a career transition award (K22), it is important to demonstrate that the applicant is ready to begin an independent research program and will be able to submit a research-type ("R") application before the third year of the grant. If additional expertise is needed, individuals should be brought on as collaborators, but not as mentors. Process for Submitting an Application: Dr. Gorelic advised that applicants should submit their NIH application using the most current electronic PHS Form 398 and should pay attention to the required criteria and format, including the required font size. The application should include the applicant's biosketch (and, for mentored awards, those of the mentor(s), including information on the mentor(s)' current research support) and documentation pertaining to human subjects and to the inclusion of women, minorities, and children in research. [Note: NIH has since transitioned to mandatory use of the SF 424 Research and Related (R&R) application and electronic submission.] If an application submitted to the NIH includes research that falls within the mission of more than one NIH Institute (for example, the National Institute of Child Health and Human Development (NICHD) and the National Institute on Aging (NIA)), the applicant should include a cover letter with the application requesting that the primary assignment be made to the Institute that represents the major focus of the application and that secondary assignments be made to the other Institutes, providing a scientific justification for this request. This alerts the NIH staff that the application is being considered by more than one Institute, so they can collaborate with one another on its review and possible award. At the end of his talk, Dr.
Gorelic described the process that a grant application follows through the NIH system. Applications are sent to the NIH Center for Scientific Review (CSR), where they are assigned through the referral process either to an Institute study section or to a CSR special emphasis panel (for example, for F32 awards). (At the NCI, K award and T32 applications are assigned to different study sections, with care taken not to review basic and clinical research applications in the same Institute review group.) In the event that an application is not funded, Dr. Gorelic encouraged investigators "not to give up": once they receive the Summary Statement, they should contact the Program Director assigned to their application. The Program Director will assist the applicant in interpreting the critiques and provide additional input on the review that will be useful in assembling the revision of the original application. For mentored career development awards, the applicant should discuss the critiques with their mentor(s) prior to contacting their Program Director. Creating Consortia: Rationale, Roadblocks, and Successes Session Chair: Leslie Bernstein, Ph.D. Professor, Norris Comprehensive Cancer Center University of Southern California Consortia: A Tool for Interdisciplinary Research in Epidemiology Daniela Seminara, Ph.D., M.P.H. Program Director, CGERB, EGRP, DCCPS, NCI Dr. Seminara presented her work on the characteristics and formation of consortia and other large-scale collaborative scientific projects. She described consortia as an "emerging new research paradigm" in which large interdisciplinary teams of scientists work together collaboratively, using common protocols and methods and performing coordinated parallel or pooled analyses. This approach to research creates synergy by exposing scientists from different disciplines to new concepts and approaches, she said.
For epidemiologists, consortia can provide the resources necessary to study the effects of environmental exposures, identify genetic factors, evaluate gene-environment (GxE) interactions, unravel the etiologic heterogeneity of tumor subgroups, and determine prognostic factors. Consortia can facilitate the rapid replication of findings, the pooling of data to increase sample size, and the initiation of new large-scale studies. EGRP supports epidemiology consortia with several different types of designs, including cohort studies designed to track multiple outcomes and identify converging mechanisms; more specialized case-control studies, generally focusing on less common tumors; and family-based studies that might identify high- or intermediate-penetrance genes and show effects of environmental modifiers. Another large segment of the EGRP-supported consortia concentrates on research infrastructures with hybrid designs, such as the Breast and Colon Cancer Family Registries (BC-CFR). Dr. Seminara emphasized the growing importance of this approach to the conduct of research by listing established or emerging consortia focusing on 15 different cancers. EGRP works to foster consortia development by identifying research priorities, assessing needs and providing resources, facilitating communication, and aiding in study implementation. A program task is also to evaluate consortia's performance, develop milestones, and incorporate best practices for high research standards. The recently established EGRP Consortia Working Group reviews the status of EGRP-supported consortia, identifying issues and obstacles and proposing solutions. Through a soon-to-be-established consortia Web site, EGRP plans to disseminate information about how to plan, develop, and evaluate consortia to give the general research community the benefit of its expertise in this area. Dr. Seminara said that the Consortium Working Group has developed a set of criteria with which to evaluate proposed consortia.
First they look at the scientific rationale: are there scientific questions that only this consortium can address? Clearly defined leadership roles and an appropriate organizational structure are also very important. The proposal should address issues such as data and specimen sharing, and publication policies. It should also address potential difficulties due to differences in design, data variables, and specimen acquisition and storage among the different research groups involved. She addressed some of the funding issues faced by consortia. Consortia require larger financial commitments over longer periods of time, which are difficult to obtain given current infrastructure and funding mechanisms, especially with tighter pay lines. She illustrated potential new funding mechanisms for consortial grants and satellite grants that are currently under development at NIH. For consortial grants, RFAs could be issued to solicit additional applications to become an integral part of a specific consortium. These applications would be reviewed within the context of that consortium. Satellite grants, for proposals that are affiliated with the consortium only by scientific serendipity, would not become an integral part of the consortium. In both cases, continued funding would depend on the success of both the individual project and the overall consortium. She described a number of other challenges faced by consortia. These include effective communication and coordination among the research groups belonging to the consortium, sufficient informatics and analytical support to handle very large data sets, and overcoming institutional boundaries that separate scientists working in different disciplines. Consortia must also find ways to rapidly integrate cutting-edge technologies, including genomic methods, and to form biorepositories that can facilitate the storage and use of critical biosamples. 
She mentioned the sharing of intellectual property rights and authorship as additional challenges. The Consortium Working Group can provide suggestions and support to help emerging consortia deal with these and other difficulties. Lastly, Dr. Seminara gave some examples of consortia that have recently published results or commentaries. These included the Human Genome Epidemiology Consortium (HuGE), the International Consortium for Prostate Cancer Genetics (ICPCG), and the Genetic Epidemiology of Lung Cancer Consortium (GELC). She said that she expects such very large consortia to provide additional challenges in the future, as consortia "superstructures" will be needed to support their activities. An Example From the Brain Tumor Consortium Melissa L. Bondy, Ph.D. Professor of Epidemiology The University of Texas M. D. Anderson Cancer Center Dr. Bondy described her work with the Brain Tumor Epidemiology Consortium (BTEC). It is estimated that there will be 17,000 new brain tumor cases and 13,100 deaths from brain tumors in the United States in 2005. Brain tumor rates are increasing, especially for adults over age 65, although the apparent increase might be attributable to advances in detection technology. However, brain tumors are still quite rare, and research consortia are needed to provide adequate numbers for epidemiologic study, she said. The BTEC, which was initiated at a meeting organized by NCI in 2003, was formed to promote multicenter, interdisciplinary collaborations leading to an understanding of the etiologies, outcomes, and prevention of brain tumors. Pooled or parallel analyses from different labs could provide sufficient data to study statistically difficult questions, such as the roles of gene-gene and gene-environment interactions. The consortium also aims to help its members keep up with the latest molecular and genomic advances, so that brain tumor epidemiology can be understood at the molecular level.
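The pooled analyses described here rest on a standard statistical idea: combining per-study effect estimates with inverse-variance weights so that small studies of a rare tumor gain power together. The sketch below is a generic fixed-effect pooling of log odds ratios, not any specific BTEC analysis; the study estimates are invented for illustration.

```python
import math

# Fixed-effect (inverse-variance) pooling of log odds ratios across
# studies -- one standard way consortia combine parallel analyses to
# gain statistical power for rare tumors. All inputs below are invented.

def pool_log_or(estimates):
    """estimates: list of (log_odds_ratio, standard_error) pairs.
    Returns (pooled log odds ratio, standard error of the pooled estimate)."""
    weights = [1.0 / se**2 for _, se in estimates]          # weight = 1/variance
    pooled = sum(w * b for (b, _), w in zip(estimates, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies: (log OR, SE)
studies = [(0.40, 0.25), (0.25, 0.30), (0.55, 0.40)]
log_or, se = pool_log_or(studies)
print(f"pooled OR = {math.exp(log_or):.2f} (SE of log OR = {se:.2f})")
```

The pooled standard error shrinks as studies are added, which is exactly the sample-size benefit the consortium model is meant to deliver; a real analysis would also test for between-study heterogeneity before adopting a fixed-effect model.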
The consortium consists of an international, multidisciplinary group of investigators including epidemiologists, statistical geneticists, neurosurgeons, oncologists, neuropathologists, and basic scientists. It also includes brain tumor advocates and fundraising organizations. The consortium is overseen by a coordinating committee, with U.S. and European chairs, and includes four research focus groups concentrating on adult glioma etiology, family studies, meningiomas, and pediatric brain tumors. The BTEC's first funded initiative will undertake the first large epidemiological study of meningiomas. This multicenter study will examine environmental, genetic, pathological, and clinical variables associated with meningioma risk. Dr. Bondy described some of the difficulties the consortium encountered in applying for funding. She said there was initially some uncertainty as to how to submit the proposal to NCI, but that they succeeded by submitting a group of linked R01s. At review time, the study section had difficulty finding outside reviewers since almost everyone in the field was involved in one of the linked proposals. The National Brain Tumor Foundation (NBTF) has also funded two multicenter pilot studies initiated by the BTEC. One study will look at single nucleotide polymorphisms in the DNA repair gene pathway to gather clues about the etiology of glioblastomas. A second study will focus on the descriptive epidemiology of oligodendroglioma, seeking to classify and identify risk factors for this very rare tumor. The BTEC is now seeking funding for a large consortium to be called GLIOGENE, which would target the genetic epidemiology of familial and sporadic gliomas. This consortium would build on a number of previously established relationships among centers in the United States and Europe, including several SPORE programs. 
It will include a steering committee, made up of principal investigators from each site, and an advisory committee, made up of experts in genetics, molecular biology, and brain tumors. Dr. Bondy stressed the importance of having such outside advisors to maintain high scientific standards. Dr. Bondy closed by describing some of the reasons for BTEC's success. In terms of logistics, the Central Brain Tumor Registry provides administrative support for handling meeting planning and finances, a difficult task to accomplish through individual institutions. The BTEC has attracted long-term support for meetings and pilot projects through foundations such as NBTF and from NCI. Scientifically, the BTEC consists of a highly collaborative and committed group of investigators, who are gathering retrospective data for merged analyses, initiating prospective studies, and creating opportunities for young investigators. They are also developing criteria for publications so that all participants can receive proper credit for their work. These activities should greatly increase our knowledge about the little understood etiology of brain cancers. Opportunities for Partnerships Along the DCCPS Cancer Control Continuum Session Chair: Robert T. Croyle, Ph.D. Director, DCCPS, NCI Dr. Croyle opened the session on partnership opportunities in cancer control research by describing the organizational structure of the Division of Cancer Control and Population Sciences (DCCPS). DCCPS includes the Office of Cancer Survivorship (OCS) and four programs: Epidemiology and Genetics Research Program (EGRP), Behavioral Research Program (BRP), Applied Research Program (ARP), and Surveillance Research Program (SRP). 
He highlighted a variety of DCCPS resources of potential interest: SRP's biostatistics group and funded investigators, and the BRP-funded Centers of Excellence in Cancer Communications Research, Transdisciplinary Tobacco Research Centers (TTURC), and Transdisciplinary Research on Energetics and Cancer (TREC) Centers, which are expected to be awarded in October 2005. DCCPS also conducts research on how to assess health disparities and collaborates with NCI's Center to Reduce Cancer Health Disparities. These programs may be potential sources of data, resources, and collaborators for EGRP-funded investigators. He said that when DCCPS considers funding grant applications by exception because they fall outside the payline, it looks at whether the investigator is taking advantage of existing resources and using the most effective strategies for achieving the research aims. He encouraged investigators to piggyback on existing resources, for example, the SEER/Medicare-linked database, the HMO Cancer Research Network (CRN), and NCI's Cancer Information Service (CIS). Web sites for the above-mentioned resources: - SRP biostatistics group and funded investigators - Centers of Excellence in Cancer Communications Research - Transdisciplinary Tobacco Research Centers (TTURC) - Transdisciplinary Research on Energetics and Cancer (TREC) Centers - SEER/Medicare-linked database - HMO Cancer Research Network (CRN) - NCI Center to Reduce Cancer Health Disparities - Cancer Information Service Surveillance Research Program Benjamin F. Hankey, Sc.D. Chief, Cancer Statistics Branch (CSB), SRP, DCCPS, NCI Dr. Hankey described the Surveillance Research Program (SRP), which includes two branches: the Cancer Statistics Branch and the Statistical Research and Applications Branch. The Cancer Statistics Branch collects and analyzes data to answer questions about cancer incidence, mortality, and the cancer-related health status of various regions and populations in the United States.
This branch provides a number of resources for the study of rare cancers, including the SEER Program. The 2006 release of the SEER public-use file will include ecologic data from the Census Bureau and other sources at the county level. SEER data are also linked to cohort data from the National Longitudinal Mortality Study, which includes 26,000 linked cases from 11 registries. The SEER registries themselves provide high-quality data and mechanisms for rapid case ascertainment and rapid-response surveillance studies. The Cancer Statistics Branch is also interested in promoting geographic information systems (GIS) studies and held a workshop in June to address the development of future GIS methods. SEER also holds an annual meeting to explore topics for study using the rapid response surveillance mechanism, one of a number of meetings that might be useful to research consortia. The Statistical Research and Applications Branch develops statistical methods for analyzing trends in cancer rates, for evaluating the impact of cancer control interventions, and for evaluating the impact of geographical, social, behavioral, genetic, and health care delivery factors on the cancer burden. This branch develops software tools for generating epidemiologic statistics using the SEER public-use file. It is also investigating the use of GIS methods. Dr. Hankey provided a list of Web sites that can be used to obtain more information about SRP activities and resources: - Surveillance Research Program - SEER Program - Geographic Information Systems - National Longitudinal Mortality Study Resources for Studying Rare Cancers Martin L. Brown, Ph.D. Chief, Health Services and Economics Branch (HSEB), Applied Research Program (ARP), DCCPS, NCI Dr. Brown described the mission of the Applied Research Program (ARP) and how its work might be of interest to those studying rare cancers.
ARP supports the evaluation of patterns and trends in cancer-associated health behaviors, practices, genetic susceptibilities, health services, economics, and outcomes. Its staff also monitors and evaluates cancer control activities in the United States and determines the influences of these factors on cancer incidence, morbidity, mortality, survival, cost, and health-related quality of life. It can provide technical assistance with databases and surveys, and advise on grant issues. ARP has a number of resources applicable to rare cancers. One is the Breast Cancer Surveillance Consortium, which includes study of prognostic factors for the rare cancer ductal carcinoma in situ (DCIS). Another important resource is the HMO Cancer Research Network, or CRN. This is a network of cancer research centers associated with large nonprofit HMOs. The network, which covers a population of 10 million, provides a broad range of data, including pharmacy information, on large, diverse populations. These populations include large numbers of cases of rare cancers such as multiple myeloma, esophageal cancer, and glioblastoma. The CRN is carrying out several multicenter studies, including one on pancreatic cancer etiology and one on multiple myeloma. Outside investigators can access the CRN by submitting a proposal to collaborate with CRN investigators. Initial estimates of cancer cases and related health care and pharmaceutical use can be obtained through the CRN Virtual Data Warehouse. Dr. Brown described another resource known as SEER-Medicare, which is a linkage of SEER data with Medicare data. This resource provides detailed information about elderly persons with cancer and represents a retrospective, longitudinal data set that can be used for a number of types of epidemiologic and health services studies. It includes huge numbers of Medicare recipients and will continue to grow as the population ages. More than 150 studies have already taken advantage of this resource.
The database is not for public use but access can be gained by a straightforward process. Dr. Brown listed several Web sites for further information on access to data and collaboration with these resources: The Epidemiology/Cancer Survivorship Interface Julia H. Rowland, Ph.D. Director, Office of Cancer Survivorship (OCS), DCCPS, NCI According to Dr. Rowland, cancer survivorship research seeks to identify and control adverse cancer- and treatment-related outcomes to provide a knowledge base that will allow optimal follow-up care and surveillance of cancer survivors, and to optimize health after cancer treatment. She said that epidemiologic studies should focus not only on survival but on all the things that make survival possible, including long-term effects and predisposing factors that might make for a poor trajectory. Survivorship studies can use classic epidemiological research designs such as cohort and case-control studies, and trial/intervention designs. She provided some examples of epidemiologic research germane to survivorship, including studies on the incidence and risk factors of physiological late effects, such as cardiotoxicity; examination of lifestyle and health behaviors, such as exercise and smoking, on morbidity; and identification of protective factors, especially in "extraordinary" survivors. She also identified a number of gap areas that epidemiologic research could address, such as the influence of predisposing factors on survivorship outcomes, health outcomes in long-term survivors, the role of co-morbidity, and the roles of socio-cultural and behavioral factors, family, and post-treatment care on outcomes. Dr. Rowland suggested that existing studies and databases supported by NCI could be leveraged to collect data about survivorship and provide answers for some of these gap questions. New multidisciplinary studies could also be used to link epidemiologic data such as risk factors to survivorship outcomes. The CIS Research Program Susan E. 
Rivers, Ph.D. Senior Research Coordinator, New England Cancer Information Service Yale Cancer Center Dr. Rivers explained that the Cancer Information Service, or CIS, is an NCI program that operates through contracts with academic institutions, hospitals, and Comprehensive Cancer Centers. The CIS operates three component services, including an Information Service, a Partnership Program, and a Research Program. The CIS provides answers to individuals seeking information on cancer through toll-free phone numbers, instant messaging, and by e-mail through NCI's Web site (cancer.gov). Comprehensive information is provided on cancer risks and prevention, symptoms and diagnosis, and treatments and clinical trials. The Partnership Program collaborates with trusted community organizations to reach minority and medically underserved populations with cancer information. The outreach effort helps to enroll these populations into cancer detection and prevention programs as well as into clinical trials. It also provides training to those organizations on cancer-related topics and the use of NCI resources, links organizations with similar goals, and helps plan and evaluate programs. The Research Program seeks to understand, apply, and disseminate effective communication approaches to educate the public about cancer and contribute to cancer control efforts. Research themes include testing novel health communication and education interventions, increasing access and use of cancer-related information, discovering effective models for disseminating cancer information, and understanding general information seeking behaviors. Dr. Rivers said that the CIS provides an opportunity for investigators to collaborate with highly skilled researchers who have access to large numbers of cancer information seekers, many eager to participate in research. 
The CIS provides multiple venues for dissemination of information, often in partnership with organizations that can reach minority and underserved populations. Its staff is highly trained and can provide cancer content expertise, design research methodology, obtain informed consent, and design and administer eligibility assessments and baseline questionnaires. Luncheon Keynote Address Health Informatics, caBIG, and Population Sciences and Cancer Control Deborah M. Winn, Ph.D. Chief, Clinical and Genetic Epidemiology Research Branch (CGERB), EGRP, DCCPS, NCI Dr. Winn presented the work of the Health Informatics Steering Committee, whose mission is to apply health informatics tools to DCCPS research activities in order to optimize new data collection as well as the use of existing data. The committee also promotes the application and sharing of data and research advances with individuals and communities. Key focus areas include cancer care and surveillance, data collection and analysis strategies in population science, behavioral and cancer survivorship research, and the application of GIS methods to cancer data and bioimaging. She described an NCI initiative known as caBIG, for cancer Biomedical Informatics Grid. The grid represents a network of individuals and institutions, designed to facilitate the sharing of cancer-related data, tools, and infrastructure. The network includes about 50 NCI-designated Cancer Centers, including participants from the cancer and biomedical research communities, private industry, and patient advocacy groups. Working groups within caBIG are taking open-source, open-data approaches to such tasks as clinical trials management, integrative cancer research, and tissue banks and pathology tools. Other groups are planning systems architecture, vocabularies, and common data elements. Strategic-level groups are involved in strategic planning, addressing data sharing and intellectual capital issues, and training efforts. 
Tools are under development to handle data from cutting edge technologies such as microarrays, proteomics and computational genomics. The Health Informatics Steering Committee joined with the extramural community and the NCI Center for Bioinformatics, which developed caBIG, to create the Population Sciences Special Interest Group within the Integrated Cancer Research Working Group. This interest group is focusing its efforts on developing tools and resources to facilitate epidemiologic and cancer control research. DCEG and DCCPS also have created a joint project to develop common data elements (CDEs) for population sciences and cancer control. These CDEs, which are being developed for subject areas like demographics, tobacco history, and body mass index, will be placed in the Cancer Data Standards Repository, or caDSR, along with common vocabularies. Thus, the caDSR will provide unambiguous semantics for the data collected in all cancer studies and trials. The caDSR will also provide a Form Builder tool that can build standardized questionnaires and forms using the CDEs and vocabularies. These standardized forms will be stored in the repository for use by others in the research community. They should allow for more consistent data collection and analysis, reduction of errors, and vastly enhanced data sharing and data pooling capabilities. Plenary Session and Panel Discussion Transdisciplinary Science: Partnering Population, Basic and the Clinical Sciences Session Chair: Graham A. Colditz, M.D., Dr.P.H. Professor of Medicine, Brigham and Women's Hospital Merging Basic Science and Population Science to Elucidate Mechanisms of Breast Cancer Development Jonine L. Bernstein, Ph.D. Associate Attending Epidemiologist, Memorial Sloan-Kettering Cancer Center Dr. Bernstein discussed her work with the Women's Environment, Cancer and Radiation Epidemiology Study, also known as the WECARE Study. 
This interdisciplinary, multicenter study was designed to investigate the joint roles of radiation exposure and genetic susceptibility in second primary cancers in women with breast cancer. Only 5 to 10 percent of women with breast cancer develop a second primary cancer in the contralateral breast, making this a rare cancer. Breast cancer patients are two to five times more likely to develop a second breast cancer than women in the general population are to develop a first breast cancer. The risk of developing a second primary persists for at least 30 years. Although epidemiologic data are scarce, the only other consistently identified risk factors for second primary breast cancer are: early age at diagnosis of the first primary breast cancer, lobular histology of the first primary, a family history of breast cancer, and carrying mutations in the cancer genes BRCA1 and 2. Radiation treatment for the first primary also elevates the risk of developing a second primary, while tamoxifen and chemotherapy do not, which led to the idea that risk might be associated with DNA damage and damage repair pathways. ATM (ataxia-telangiectasia mutated) is activated by DNA damage, such as that from radiation exposure. ATM lies upstream of, and controls, a number of damage control proteins, including the Chk2 protein, which interacts with the P53 tumor suppressor. Homozygous mutations in the ATM gene cause the disorder ataxia-telangiectasia (A-T), an autosomal recessive disease characterized by progressive neuronal degeneration, immunologic deficiency, and premature aging and death. A-T is also characterized by increased radiosensitivity and susceptibility to cancer. This led to the hypothesis tested by the WECARE Study that women who carry a single ATM gene mutation may be more susceptible to radiation-induced breast cancers than those with no mutations.
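In a case-control design of this kind, the strength of the association between an exposure (here, carrying an ATM mutation and receiving radiation treatment) and the outcome is conventionally summarized as an odds ratio with a confidence interval. A minimal sketch of that calculation, using entirely hypothetical counts rather than WECARE data:

```python
import math

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio and 95% CI (Woolf's log-scale method) for a 2x2 case-control table."""
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Standard error of log(OR) from the four cell counts
    se_log = math.sqrt(1 / exposed_cases + 1 / unexposed_cases +
                       1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, (lo, hi)

# Hypothetical counts, not study results: "exposed" = ATM mutation carrier
# who received radiation; cases = bilateral, controls = unilateral disease.
or_, ci = odds_ratio(30, 670, 10, 1390)
```

An odds ratio well above 1, with a confidence interval excluding 1, would be the kind of signal the study was designed to detect; the numbers above are placeholders only.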
To improve the chances of detecting the effects of these relatively rare mutations, the WECARE Study enrolls women with asynchronous bilateral breast cancer as cases and women with unilateral breast cancer as controls. Preliminary results show that women who carry a deleterious ATM mutation and who received radiation treatment for their first primary have a much higher risk of developing a second breast cancer than do women with neither risk factor. Ongoing studies are looking at genes in the entire ATM-Chk2 DNA repair pathway, as well as the roles of BRCA1 and 2. Dr. Bernstein also described the organization of this large, successful collaborative study. Since the cancer is rare and a large sample size was required to achieve adequate statistical power, the study involves multiple data collection centers. Multiple genotyping laboratories are also needed to analyze the very large, complex ATM gene, as well as laboratories with experience in the technical challenges of measuring radiation exposure. The study includes more than 70 investigators from 25 institutions, 5 countries, and 7 time zones. The WECARE field organization includes working groups that direct different aspects of data collection and analysis, such as Radiation Dosimetry, Genotyping, and Biostatistics. These groups are coordinated by a Steering Committee and by Internal and External Advisory Committees, which Dr. Bernstein credits with helping to keep the study moving. There are also subcommittees formed to deal with practical issues such as biorepository use, budgets, and publications, and with data analysis. Dr. Bernstein ended her presentation by describing some of the "lessons learned" from the WECARE Study about interdisciplinary partnerships. The study team worked well, she said, because it included deep expertise in every scientific aspect of the study, along with a prior track record of collaboration among many of the investigators.
For the follow-up study, they are planning to include more junior investigators. Communication within the Working Groups was constant and effective, but she found that annual meetings and e-mails were only barely adequate as a means of overall communication with the team as a whole. Support from NCI was helpful, but funding is an issue now that the initial WECARE:ATM funding is finished. Additional funding is needed to maintain the group's infrastructure so that this work can be built upon and future studies completed. Integration of Oncogenomics and Population Science to Improve Patient Outcome in Myeloma Kenneth C. Anderson, M.D. Director, Jerome Lipper Multiple Myeloma Center Dana-Farber Cancer Institute Dr. Anderson presented his work using both genomic and epidemiologic data to guide clinical research on potential chemotherapeutic agents for the treatment of myeloma. "Teamwork is the only way to go," he said, making the case for multidisciplinary approaches to rare cancer research. Dr. Anderson leads a SPORE in myeloma. Multiple myeloma is a disease resulting from excess plasma cells in the bone marrow. The disease is incurable, although the median survival of 3 to 4 years with conventional therapy can be improved slightly with high-dose therapy and bone marrow transplant. Myeloma accounts for 2 percent of cancer deaths in the United States, and there are 14,400 new cases each year. Incidence is especially high in African Americans and Pacific Islanders. Predisposing factors include environmental exposures such as radiation or petroleum products, and occupations such as farmer, paper producer, furniture manufacturer, or woodworker. Another predisposing factor for myeloma is the presence of a clinical syndrome known as Monoclonal Gammopathy of Undetermined Significance, or MGUS.
MGUS occurs in about 2 percent of individuals older than 50 years and is characterized by less than 3.5 grams/liter monoclonal immunoglobulin and less than 5 percent monoclonal bone marrow plasma cells. Individuals with MGUS have a 25-fold higher risk of developing multiple myeloma within 20 years, as well as greatly increased risks for other blood disorders such as macroglobulinemia, plasmacytoma, and primary amyloidosis. Dr. Anderson's group is investigating how gene expression profiles change in the progression from normal to MGUS to myeloma. They have identified a large number of genes whose expression is either up-regulated or down-regulated at different stages in the progression. These genes and their protein products represent potential targets for therapy against myeloma. Dr. Anderson refers to this research strategy as "oncogenomics." A total of 258 expressed oncogenes were identified. In order to target myeloma-specific genes, genes that were over-expressed in more than one cancer were eliminated. Some of the remaining genes were over-expressed only in certain patients. For example, only 20 percent of patients over-expressed fibroblast growth factor receptor 3 (FGFR3), a tyrosine kinase. A kinase inhibitor is in clinical trials, but only patients who over-express FGFR3 would be expected to show a response. By these and other methods, Dr. Anderson's group has identified seven potential targets for anti-myeloma therapy. They have developed compounds against six of these. In addition to the anti-FGFR3 agent, they are testing inhibitors of angiogenesis, telomerases, the proteasome, and the stress response. They are also working on an Mcl-1 antisense RNA as an anti-apoptotic agent. Most of these studies involve cells in culture. But Dr. Anderson said that it is important to remember that cells live in a particular microenvironment, and this can change gene expression patterns.
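The target-filtering step described above, in which genes over-expressed in more than one cancer are eliminated to isolate myeloma-specific targets, is at bottom a set-difference operation. A minimal sketch with placeholder gene sets; the gene lists here are illustrative, not data from the study:

```python
# Hypothetical expression results: for each cancer, the set of over-expressed
# genes. FGFR3/MCL1 appear in the talk; GENE_A, GENE_B, and the other
# cancer lists are placeholders.
overexpressed = {
    "myeloma": {"FGFR3", "MCL1", "HSPB1", "GENE_A", "GENE_B"},
    "breast":  {"GENE_A", "ERBB2"},
    "colon":   {"GENE_B", "MYC"},
}

# Union of genes over-expressed in any non-myeloma cancer.
other = set().union(*(genes for cancer, genes in overexpressed.items()
                      if cancer != "myeloma"))

# Keep only genes over-expressed in myeloma and in no other cancer.
myeloma_specific = overexpressed["myeloma"] - other
```

With these placeholder inputs, `GENE_A` and `GENE_B` drop out because they also appear in other cancers, leaving only the myeloma-restricted candidates.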
His group has identified many of the genes that control the interactions of multiple myeloma cells with cells in the bone marrow microenvironment. These studies have led to the use of thalidomide analogs to manipulate these interactions. One such analog, Revlimid (lenalidomide), has been very successful in Phase III clinical trials. Another drug that blocks tumor/microenvironment interactions is Velcade (bortezomib). This drug gained FDA approval in less than 3 years from bench to bedside, which Dr. Anderson attributes to the power of collaborative research. Further demonstrating the value of teamwork, he said that two companies have joined together to test bortezomib and Revlimid in combination, which has been successful in some patients who have failed treatment with one or the other alone. Dr. Anderson's group is also using gene expression microarrays to help predict clinical responses to drugs. He said that population studies are needed to identify gene targets associated with drug sensitivity or resistance. His group found that heat shock protein 27 (Hsp27) was over-expressed in patients who were resistant to the new proteasome inhibitor. He is now directing a clinical trial to determine whether inhibiting Hsp27 indirectly, by inhibiting p38 MAP kinase, will affect proteasome inhibitor resistance in these patients. Dr. Anderson's latest project, on IGF gene variation and multiple myeloma risk, is funded by a SPORE career development award, which he called a "wonderful model" for team-related integrated research. This study will investigate the role of IGF-1 mediated signaling cascades in myeloma, combining molecular biology and biochemistry, animal models, and epidemiological research. The results could be useful in other cancers where this pathway has been implicated, including breast cancer. Another new collaborative project will investigate genetic risk factors for myeloma using single nucleotide polymorphisms and epidemiologic studies. Dr. 
Anderson concluded with two lessons learned from his studies of rare cancers. First, he has developed a new treatment paradigm that targets both the cancer cell and its microenvironment. This paradigm may prove useful for other cancers. Second, the SPORE collaborative oncogenomic and population studies have proven very useful for identifying new therapeutic targets and for informing the design of clinical protocols. Translational Research: Trends of the Future Jorge Gomez, M.D., Ph.D. Chief, Organ Systems Program National Cancer Institute Dr. Gomez followed Dr. Anderson's talk with a discussion of some of the general issues involved in multidisciplinary research, and how NCI can facilitate this research. He defined multidisciplinary research as studies performed by a team of experts, based on common scientific goals. There are many projects now that cannot be accomplished using single investigator research, although the R01 is still the basis of most biomedical research. He said that the NCI staff can facilitate such research in a number of ways. They can help initiate and promote interactions among scientists. They can provide administrative advice, including pre-application consultation and advice on funding opportunities and program requirements. They can also help coordinate with other NCI programs and help to establish partnerships with private industry. He emphasized that their role should be facilitative rather than regulatory. Dr. Gomez described a number of trends occurring in translational research. This research is incorporating new technologies into patient research and supporting the development of new drugs and novel clinical interventions. It is also opening up new, more creative ways of interacting with private industry and involving patient advocacy groups. Translational research requires flexible management of the interactions between grantees, NCI, and NIH programs, and other government agencies. 
A high level of leadership is needed to coordinate and support these interactions, in the context of an appropriate organizational structure. It also requires the resources to respond quickly and efficiently to newly identified gaps. "Rare diseases are outside the line," Dr. Gomez concluded, and will require new models for management and funding that can support translational, multidisciplinary approaches. Shelia Hoar Zahm, Sc.D. Deputy Director, Division of Cancer Epidemiology and Genetics (DCEG), NCI Dr. Zahm brought forward another perspective on the challenges facing transdisciplinary, integrative, and translational research. First, she noted that such research requires sufficient funding and resources to enable adequate communication, including in-person meetings, to plan and ensure successful conduct of complex projects. Adequate and continued funding is also a necessity to fully exploit the resources developed by the study. A major challenge is the increasing complexity of projects as disciplines are added to a study. With each discipline often needing unique data, biospecimens, or environmental specimens, there is the risk that the protocol may become a crushing burden for staff and subjects. Lengthy questionnaires, complicated environmental and biospecimen collection, shipping, and storage procedures, and other requirements can be logistically challenging and expensive, can decrease response rates, and can lead to staff burnout. In the face of decreasing response rates, transdisciplinary studies, in particular, may need to increase incentives paid to subjects and may need to grapple with the best method to inform subjects of the possible study components without jeopardizing participation. Researchers launching transdisciplinary projects need to seek the best scientific collaborators, but it also helps to seek collaborators who are reasonable, communicate well, and are willing to compromise appropriately, if necessary.
This is especially important because as the science becomes more complex, it becomes harder to judge the value of proposals for study components that are outside one's own discipline. Transdisciplinary research is challenging but well worth it, she said, providing a "veritable goldmine" of data. Factors that promote success include good communication, mutual respect, real-time monitoring of each component, and sufficient resources. Panel Discussion: Rare Cancer Advocates and Survivors: The Few and Far Between Session Chair: Julia H. Rowland, Ph.D. Director, Office of Cancer Survivorship, DCCPS, NCI - Douglas Bank President and Editor Testicular Cancer Resource Center - Richard N. Boyajian, R.N., M.S. Lance Armstrong Foundation Adult Survivorship Clinic Perini Family Survivors Center Dana-Farber Cancer Institute - Cary Zahrbock National Coalition for Cancer Survivorship In this discussion, the moderator, Dr. Rowland, asked the panelists to respond to a series of questions about the role of epidemiology in the life of cancer survivors and members of the public who have never had cancer. 1) How accessible is epidemiology to consumers? Panelists responded that many people do not understand what epidemiology is and have difficulty understanding how risk factors apply to them (for example, the difference between relative, absolute, and individual risk). They felt that many people get their information from television in a form that may not be accurate or readily understandable. People need to know how they can apply epidemiologic data to themselves and their families in practical ways. They would benefit from information that emphasizes the key, most important messages. People are also especially attuned to messages that give them hope. 2) How effective are consumer advocacy programs in research and related activities, such as the Director's Consumer Liaison Group (la.cancer.gov/dclg.html)? 
Panelists noted that there are a number of consumer advocates who have been trained to provide input in research-related activities. However, they also felt that many of these individuals are underutilized. They advocated for a more open process that includes consumer advocates and cancer survivors on review panels for many types of grants and on Request for Applications (RFA) review panels as well. They see this involvement as an important avenue for communication between the public and scientists. They noted that the training of advocates varies greatly and suggested that it may be useful to have a way to review the training and/or standardize the approach to orienting advocates who wish to serve in this capacity, as is done in NCI's CARRA (Consumer Advocates in Research and Related Activities) Program. 3) What are the biggest barriers preventing collaboration between scientists and consumer advocates? Scientists need to incorporate advocates into the process before the research is designed, they said. The advocates may bring about changes in protocols by introducing a "human factor" that addresses whether a given protocol is reasonable for the subjects participating. They did note that some advocates who are survivors are unable to separate their emotions from the science. Careful screening should be used to make sure advocates are emotionally ready to participate in the research review and planning process. Accessing survivors through groups, such as the CARRA Program, or established advocacy groups can help scientists identify and enlist the input of trained, articulate advocates. 4) What are the questions that consumers want to know about? Panelists described a number of concerns and questions: - What are the long-term and late effects of treatment? - Should the chronic and late effects of cancer be studied by treatment exposure and not based solely on disease?
This would provide a larger population, stop the discussions about relevance by disease, and perhaps also reveal differences in patterns of effects experienced. It was acknowledged that this would require a large investment of both money and manpower. - How can scientists best study/validate late effects and side effects of treatment? - What should cancer survivors expect and what screening tools should they be using after treatment ends? - What should they be doing to promote a longer life and prevent late effects? - How can they promote their overall quality of life after cancer? - What is a cancer cluster? How do you know if a number of cancers experienced by members of a group is significant, and who would you report it to? - How does lack of insurance coverage affect cancer outcomes? - Better morbidity and mortality data could help cancer survivors who face discrimination from life insurance companies and potential employers. 5) What are the lifestyle and family effects of cancer? People with a cancer diagnosis want to know how to lessen the impact of disease both on themselves and their families. Often they want to make their lifestyles healthier, and they need information about how to do this, the panelists said. They also need tools to help them make such changes, like smoking cessation programs. Further, they need to know what they should tell their families about risks both to themselves and to other family members. If a cancer is hereditary, what steps should they take to protect their family? What is the level of evidence regarding lifestyle modification and reduction of cancer risk? Could this information help families reduce their cancer burden? To what extent are factors that may be associated with risk of cancer also predictors of survival or morbidity after a cancer diagnosis? This, they felt, is where we need more answers. Finally, genetic links or risks need to be looked at in larger study populations instead of smaller cohorts.
Many survivors also have psychosocial needs that are going unmet, the panelists pointed out. Their quality of life may be poor because of psychological rather than physical effects of cancer. Psychologists need more training to deliver this care, and more programs need to be developed to help survivors and their loved ones deal with these problems.

6) How can advocates help with epidemiologic research?

Panelists said that cancer survivors are often eager to share information about their lifestyles and participate in research, but are not asked to do so. They suggested several ways that advocates/survivors could help researchers. For example, scientists wishing to find survivors of rare cancers could go to cancer survivor groups. These groups might also be able to help set up community-based research and be trained to carry it out. They felt that outreach and study enrollment carried out by survivors might be more effective because they are peers or fellow patients. The panel also suggested that giving tax credits to participants might be an effective way to enroll more survivors and more cases and controls for studies. Scientists could also team up with advocates when going to policymakers to seek funding, because the survivors, by bringing their personal experience, would allow the scientists to tell a more compelling story.

In summary, the panel felt that consumers and cancer advocacy groups are eager to be part of the research process and can contribute valuable perspectives and practical assistance to epidemiologists.

Working Groups Report Back
Session Chair: Hoda Anton-Culver, Ph.D.
Professor and Chief, Epidemiology Division, University of California, Irvine

In the last session of the meeting, Dr. Anton-Culver presented a comprehensive summary of the information and action items generated by the working groups in response to the following questions.

a) What is the state of the science—what do we know?
b) What are the scientific gaps—what do we not know?
c) What obstacles (scientific, infrastructure, technical) impede progress, and what needs to be done to remove the obstacles? Are there solutions to some of these problems that work?
d) What expertise, disciplines, and linkages do we need to "bring to the table" to enhance progress?
e) What are the partnering opportunities with other DCCPS programs?

(Answers to questions "d" and "e" were not explicitly discussed but are included in the individual working groups' summaries.)

Question a) What is the state of the science—what do we know?

The working groups concluded that while at least some descriptive data are available for all but the rarest subtypes, the amount of data available varies widely by cancer type. Some cancers are well studied with respect to age, gender, and ethnicity, but others are not. For some types, most research data come from international studies, which may or may not be applicable to American populations. Suggested action items:

- Find ways to translate international data to American populations
- Include international investigators in new studies
- Initiate new research into differences among racial groups with respect to etiology, response to treatment, and expression of different rare cancer subtypes.

Question b) What are the scientific gaps—what do we not know?

Dr. Anton-Culver reported that the biological mechanisms underlying most rare cancers are unknown. There is also a need for molecular markers that could distinguish between subclasses within rare cancer types, especially in complicated types such as head and neck, leukemia, and brain cancers. Lack of knowledge about the latency period between exposure and effect also hampers understanding of etiology.
Suggested action items:

- Identification and study of prediagnostic lesions could lead to understanding of biological mechanisms
- Study of people with predisposing conditions could also provide information about mechanisms and latency period
- Study of susceptibility could lead to identification of molecular markers for classification and early detection
- General research on major physiological processes, such as energy balance, the blood-brain barrier, viral load, immune mechanisms, and assessment of environmental exposures, could shed light on mechanisms common to all cancers.

Question c) What obstacles (scientific, infrastructure, technical) impede progress, and what needs to be done to remove the obstacles? Are there solutions to some of these problems that work?

The working groups identified two major types of obstacles: those affecting data collection and those affecting the ability of researchers to collaborate. Data collection is plagued by a number of problems. There are standardization issues, such as the lack of histological definitions for some cancers, the lack of standard core questionnaires, and the lack of standard bioinformatics methods and analyses. Practical issues may also impede data collection, including difficulties in obtaining appropriate biospecimens (for example, skin biopsies and other specialized samples); a need for ultra-rapid case ascertainment in cancers with rapid mortality; Health Insurance Portability and Accountability Act (HIPAA) and Institutional Review Board (IRB) issues; and the high costs of data collection. Data analysis is impeded by the small sample sizes available for most rare cancers. This is a particular problem for junior investigators because of their limited funding. Problems preventing effective collaboration included limited contact between scientists in different disciplines and a lack of appropriate funding mechanisms for consortia and interdisciplinary research. The groups proposed a number of solutions to these problems.
- Increased funding for rare cancer research
- Cross-training for new investigators to encourage interdisciplinary research
- Training for international fellows to facilitate study in other countries
- Coordination of resources by a central agency such as NCI
- Increased involvement of advocacy groups and private organizations
- Encouraging universities to be flexible in giving academic credit to investigators involved in multiple principal investigator projects
- Formation of new consortia and expansion of existing consortia by adding new investigators.

At this point in the presentation, Dr. Anton-Culver opened the floor to the session chairs and any other participants who wished to discuss these or other potential solutions to the challenges facing investigators of rare cancers. Four major themes emerged from the discussion:

- Need for improvements to the review process for rare cancer research proposals
- Need for targeted funding for rare cancer research
- Suggestions for promoting consortia formation
- Potential usefulness of greater involvement of the cancer registries in research.

Improvements to the review process

Because rare cancers require the study of large populations to achieve adequate sample sizes, the price tag for such research is generally high. This is particularly true if extra funds are needed for consortium building. Grant reviewers, who often are not epidemiologists but clinicians, generally do not appreciate the reasons for these extra expenses. These reviewers can be very critical, especially when the cancer involved is very rare and the research might be considered to benefit only a small number of people. One suggestion was to ask the Center for Scientific Review (CSR) to form a special study section that could take these special challenges into account. Because it is very difficult to get CSR to form a new standing committee, it was suggested that ad hoc study sections might be formed instead.
However, even if this could be done, there might be difficulty in finding enough researchers who could act as outside reviewers, because almost all of them would likely be involved in the consortium covering their field. Another suggestion was to hold workshops to educate reviewers from outside epidemiology on the specific challenges of rare cancer research. It was also pointed out that, as multidisciplinary consortia were formed, more basic scientists would be needed on study sections to ensure adequate scientific review. Epidemiologists attending this meeting were encouraged to take ownership of the problem by acting as advocates for rare cancer research while serving on study sections and when interacting with colleagues in other settings. Another suggestion was that reviewers be asked to use the NIH CRISP database (crisp.cit.nih.gov) to ensure that the proposals they are reviewing are really novel. This would cut down on duplication of existing research and perhaps free up funding for more rare cancer research proposals.

Funding targeted specifically to rare cancers

Several workshop participants suggested that difficulties in competing against other grants could be at least somewhat alleviated if more money were targeted specifically to rare cancers—for example, through an RFA. An RFA would also emphasize to reviewers NCI's commitment to rare cancer study. However, other participants felt that reviewers do not really pay attention to whether a grant proposal targets a specific RFA. Another suggestion was the creation of a funding level slightly above the R03, so that junior investigators could obtain enough funding to get started on rare cancer research.

Formation of consortia

Several participants discussed how to develop an infrastructure that would support emerging consortia. Consortium planning grants could be used to defray the costs of getting people face-to-face in order to start a new consortium.
Such funding would need to include an ongoing mechanism, so that groups could have multiple meetings as needed without continually applying for more funds. It was suggested that consortium grants include a required mentoring component to help junior investigators get involved. Participants noted that consortia should be investigator-driven and should include new investigators as well as those from other disciplines. Consortia could include both extra- and intramural NCI investigators if careful attention were paid to keeping funds separate. Selection of consortium members could use the criteria developed by Dr. Seminara (described earlier in the meeting).

Involvement of cancer registries

Representatives from several cancer registries said that the registries would like to help with rare cancer research. They suggested including the state registries in research plans, in addition to the SEER Program. NCI could set aside some funds to help registries develop their ability to actively participate in research, above and beyond funding for building the actual research resources. Better communication between NCI and outside organizations could also help support the registries. The Centers for Disease Control and Prevention (CDC) and the American Cancer Society (ACS) provide some support for registries. This support could be coordinated with that of NCI through the North American Association of Central Cancer Registries (NAACCR) as an umbrella organization. Other suggestions were to use the Comprehensive Cancer Centers for ultra-rapid case ascertainment and to form specific national registries for some rare cancers, if needed.

Meeting Evaluation, Wrap-up, and the Bottom Line...
Edward Trapido, Sc.D.
Associate Director, EGRP, DCCPS, NCI

Dr. Trapido wrapped up the session by stating that NCI would use the working groups' comments and suggestions to help prioritize funding mechanisms, to write new initiatives, and when discussing review issues with the CSR.
He addressed researchers' frequently expressed concerns about funding mechanisms for consortia by stating that NIH is already working on a mechanism for funding multiple principal investigator grants. He said that NCI will also ask the American College of Epidemiology and other professional organizations to help set standards that will guide academic departments in evaluating the work of junior investigators on multiple principal investigator, multidisciplinary projects.
Phil Wannell and I have been mates for a while now, and he has been a strong supporter of Headfonics for a few years under his previous brand, Custom Cable. Times change though, and this week Phil and his hard-working team have opened up an entirely new brand, Audio Sanctuary. Remember, this is the team responsible for the massively successful Headroom Audio show held every year in London, so I am expecting cool things from these guys. Phil was only too happy to tell us all about it last week and what UK audiophiles can expect.

1. So tell us a bit about Audio Sanctuary and what they are all about?

Audio Sanctuary has morphed from Custom Cable, which started trading in 1985 as a purely mail-order company. As the name suggests, we started out selling high-quality cables for home audio systems. In 2009 Custom Cable launched an online site where it was easier to see our extensive range of products and for customers to purchase from us. During the following few years, we made the decision to invest in the portable audio market and listed the first audio device on our website, the Colorfly C4. This was the start of the future of Custom Cable and now Audio Sanctuary.

2. Why the change from Custom Cable to Audio Sanctuary – a changing market?

With the addition of the Colorfly C4 to our inventory, it was clear that we were on to a winner as sales took off. We dramatically expanded our inventory to include headphones and many portable players, but as this became what we were known for, Custom Cable really didn't fit the bill, so we decided to rebrand to something a little less specific.

3. Are you still going to be making cables under the AS brand?

You bet! We chose the name as it still encompasses the audio side of our business, so we can now offer anything related to audio, effectively. We had been a tad limited by our previous name, as many people have pointed out to us over the years of trading as Custom Cable. Now we can expand and grow and not be held back by our name.
I am hoping that this is a wise decision, but I guess only time can tell.

4. You guys run the Headroom brand also, right? How does this all fit in with your plan for world domination?

World domination, not sure about that. 🙂 Yeah, we run the headroom show for the UK market as it was something missing from our shores. There is a market for this type of industry here, and we thought that the only way to expand to different consumers was to attract a wider audience. headroom's inaugural show back in January 2015 was much, much better than we had ever anticipated, and with that in mind we have expanded again. This time, we will have another show (headroom @ indulgence) in October, which will enable us to have an even larger area. This will mean we can fit in plenty more exhibitors that we unfortunately had to turn away at January's headroom due to space constraints at the venue we used.

5. Introduce the team, Phil. Who are the key players?

Well, I kind of run the website and social media activities of AS. I guess I'm the spokesman of the brand and know a thing or two about the equipment and the synergy of various products. We have 6 other team members who help run our London, UK store, each of whom has their own specialism. David knows everything there is to know about home cinema, Joe knows a lot about everything. Vernon and Keith both know their way around a turntable, and Amit is the cable specialist. We also have our own installer, Erick, who is awesome at getting your lovely equipment all set up and working flawlessly. It gives us a great advantage, as we can pretty much try and help with any audio-related inquiry across our 330-odd years of knowledge and experience.

6. What is your opinion on the UK audiophile market as it currently stands compared to other markets?

Different, very different. I think we are certainly ahead of the game and are slowly catching up with how things are in the Far East.
We have many people here wanting to invest in high-quality audio devices and headphones that are not only made by the big, famous brands you see on TV. We find that people are more than happy to make a big investment in portable audio.

7. What's hot these days in terms of buying from AS?

Anything that sounds good. 😛 In all seriousness, people have keen ears, and just because something looks well specced on paper doesn't mean Joe Bloggs will like it. That's, I guess, part of my job. I like to listen to things; I enjoy hearing what works and what doesn't. Plenty of times you can pair up an awesome-spec player with an insanely great pair of headphones and it just doesn't "work". When I say work, I don't mean turn on, I just mean get your foot tapping. Pair the player with another pair of headphones and you get a completely different sound that not only gets your foot tapping but makes you want more. 🙂

9. What is unique about AS in the UK market?

I think I will be ultra cheeky and just say, contact us to see how we work. 🙂 I don't want to give away our USP. The only thing I will say is that we are in London, just 22 minutes from Waterloo. We may also have a large, very large selection of products available for demonstration.

10. Describe the typical UK audiophile? Big spender, DIY guy?

The UK audiophile, that's tricky, and it has certainly changed over the years. They used to be into their home HiFi; now they are keen on their portable and home headphone setups. Across the board, some are big spenders and others love to tinker with valves on an almost daily basis. It really is a very broad spectrum in the UK and I love it. No client is the same, and all have exceptionally different requirements.

Location & Contact Details
Audio Sanctuary is located at: 35 High Street, New Malden, Greater London / Surrey
Tel: +44 20 8942 9124
A table with ottomans is really worth having if your space is small and needs to be kept functional. One brilliant idea is a coffee table with matching ottomans that can be completely hidden under the tabletop. Another is a 3-piece set with the functions of bench, coffee table, living room storage, ottomans, and footrests packed into one relatively small solution.

Elegant, compact modern set with rectilinear wooden frames finished in black. A rectangular ottoman can serve as a table and has 2 storage pockets. Two tucked rectangular storage stools have hinged seats. All pieces are covered in dark brown pleather.

Stunning antique-style ottoman that also serves as a table. Its round, warm brown-finished wooden frame has rich floral carvings on the top supports and low feet. It features a bottom drawer and button-tufted brown pleather upholstery on a mid-firm seat.

Trendy and versatile, poufs can be a great addition to almost any space, but they're expensive. Using fabric and an Ikea cube, this DIY ottoman cost $40, not $300! Source: Kristi Murphy

This very classic cube-shaped ottoman is a multifunctional piece of furniture. The storage chest provides extra space for small items. The comfortable, padded top creates a seat and has decorative, contrasting stitches.

This rectangular storage ottoman can be used as an extra seat, footrest, storage space, or coffee table. It has a durable wooden frame and is fully covered with leather. The top is padded, tufted, and finished with decorative buttons.

This simple and elegant storage ottoman is a great choice for your living room in any decor. The ottoman may be used as a table, seat, or storage unit.
The furniture features a very practical table drawer board that may be hidden inside when it's not needed.

A very simple but useful and attractive product. It is a serving tray that has two convenient handles and a very attractive pattern. The overall size of this product is 16 x 10 x 1.5 inches. This tray is ideal for use with bars, ottomans, lounges, etc.

This very practical ottoman features four tray tops, so it can be used as a coffee table, extra seating, and storage area. This ottoman is covered with black faux leather, and the versatile trays have seat cushions.

Add an extra seat and storage space to any interior with this contemporary-styled storage ottoman. It has a comfortable, padded top and is upholstered with brown leather. It features decorative double stitches and buttons.

This round, oriental ottoman can be used as an extra seat or coffee table, thanks to its flat top. It is fully made of woven rush and available in a matte black finish. Lightweight yet durable, and easy to keep clean.

This functional piece of furniture features a large storage chest, so it can be used both as an extra seat and as storage space. It has full leather upholstery and a decorative, tufted top finished with buttons.

This classy leather bench has the potential to grant you not only a cozy and comfy sitting area. It can also be used as a coffee table and storage compartment. It has two ottomans that can be placed inside the bench for space saving.

This extraordinarily stylish and practical storage coffee table is gonna totally amaze every fan of unique and original solutions. Check it out now and fall in love with its unusual design and functionality!

This multifunctional set consists of two square-shaped seat ottomans and one larger, rectangular one with a storage chest. It is available in a few bright upholstery colors. The faux leather cover has decorative stitches.

Are you looking for some simple and stylish equipment for your living room or bedroom?
Then check out this awesome and stylish storage ottoman! It's gonna bring you a huge dose of design and functionality.

This elegant, rectangular ottoman can be used as extra seating or storage space. Its frame has a durable wooden construction, covered with faux leather in an espresso finish. The top has a decorative, tufted finish.

This lovely 3-piece play set for children consists of 1 square table, 2 storage ottomans, and a beautiful espresso finish. The table is sturdy, with a specially designed top perfect for playing, learning, and building blocks.

This contemporary-styled round ottoman in an astonishing teal finish is fully covered with sleek, smooth fabric with a tufted, buttoned top. It features a storage space and a solid wooden frame in dark brown.

Add extra storage space to any room with this stylish, elegant ottoman. It has tufted, dark brown leather upholstery, a durable and sturdy wooden frame, and carved legs in a color matching the upholstery.

Functional ottoman, upholstered with brown leather and accented with decorative stitches. Each part of its sectional top reverses to reveal a serving tray with cutout handles. The piece of course has a storage compartment inside.

An incredibly practical and stylish piece of furniture worth having. Made of rubberwood, the table top with a solid chrome base comes in a set with four upholstered ottomans with roomy compartments inside. It's the perfect way to make the most of your space.

This type of product is a high-quality, large ottoman with a square shape. Its black color is neutral, so it looks very attractive in any decor. The size of this product is 36-inch x 36-inch x 17-inch.

If you're looking for some simple and intriguing solutions for your living room or bedroom, check out this awesome and stylish storage ottoman! It's gonna bring you a unique design and the highest functionality.

Large round tufted storage ottoman. This contemporary piece adds style and comfort to any room.
Each round ottoman features tufted fabric with a smooth button design. A living room ottoman perfect as a coffee table or footstool. (Tan)

Ottoman in white that is upholstered in quality faux leather. The top lifts on dual hinges, easy and simple. It's a practical piece of furniture that can be used in many ways, such as a coffee table or footrest.

This ottoman set consists of 1 large and 2 small ottomans, each upholstered in black faux leather. The large ottoman has a reversible lid, with two seats on one side and two trays on the other.

This set includes three comfortable and attractive ottomans. One of them serves as a bench for two people. Two small ottomans can also provide additional sitting space, but they can also serve as coffee tables.

The Synergy Lift Top Storage Cocktail Ottoman with a practical lid is as stylish as it is functional. The top of the lid can be used as a coffee bench or reading area. Or it can be lifted up, revealing a capacious storage compartment.

If you're a fan of original and intriguing solutions, this awesome and stylish coffee table might perfectly fulfill your expectations! Check it out now and enjoy its extraordinary design and unusual functionality.

Round, modern, universal piece of furniture. It can serve as a table, ottoman, or storage unit thanks to a flip-top lid. The wooden construction with legs finished in black is upholstered in padded brown leather. It has hidden casters for easy moving.

Cubic ottoman, entirely padded with high-quality black leatherette and topped off with quilting on all sides. Such a design provides a chic contemporary look. The piece may be used as a side table, too.

Convenient, functional, and easy-to-assemble monitor riser that helps you upgrade the comfort of your TV corner. It comes with a neutral black finish that complements a variety of decors. It leaves a practical storage space underneath.
The neat construction of this coffee table is comprised of polished metal and clear tempered glass; this simple idea gives an impression of an ultra-modern and chic look. Assembly takes literally just 5 minutes.

If you're looking for some stylish and intriguing solutions for your living room or bedroom, check out this amazing storage ottoman! It's gonna bring you a huge dose of unusual design and functionality.

It is a very functional coffee table and storage ottoman. It is a perfect solution for small spaces in your living room. You will be impressed by how amazing this ottoman is. You need to have it.

This five-piece set is an extensive coffee table and ottoman combination that can be used in many ways. The stylish leatherette upholstery is very elegant, so it fits perfectly into any kind of interior. The covers serve as trays.

The original design of this pouf will look great in a kids' room, bedroom, den, or living room. Filled with polystyrene beads and covered in high-quality wool that mimics the fur of a Mongolian lamb. The cover is removable and easy to wash.

Crafted from quality metal, this 3-piece ottoman set in a gold finish can be a beautiful improvement for your contemporary home décor. Stylish and built to last, each ottoman is going to serve you well for many years.

Elegant, functional modern set with fibreboard frames upholstered in mid-brown pleather with a solid pattern. A table with a rectangular top and tall, straight square legs accommodates 2 square storage ottomans with flip seats.

Gorgeous modern set of fibreboard covered in greyish pleather with silvery-pin tufting on the tabletop apron and the ottomans' flip seats. The table has a rectangular top and tall, straight legs. The ottomans, with storage spaces, can be tucked under the table.

Pretty, functional modern set with rectilinear fibreboard frames covered in black pleather with a zebra pattern. A table with a rectangular top has tall, straight square legs. Each tucked ottoman has a flip top and a storage space inside.
This is an ideal way of adding a unique touch to your interior style. This ottoman features an amazing split-lift lid that opens to reveal ample storage space, while the dark brown finish, along with the tufted design, makes for an elegant solution.

Functional modern set with fibreboard frames covered in woven-like dark brown pleather. The table has a rectangular top and tall, straight square legs. Each of the 2 square ottomans has a flip-out top and a storage space, and can be tucked under the table.

Use this soft ottoman alone, or place a whole bunch of them in the corner of your den or kids' room and relax like a king. The removable cover is made of acrylic faux fox fur in a champagne color.

Eco-friendly, luxurious-looking faux fur ottoman. Put your feet up to rest while reading, or use it alone as a soft, comfortable seat. The cover is removable. It is recommended to wash it by dry cleaning only.

Filled with polystyrene beads, this ottoman is lightweight and easy to move around. Use it as a chair, side table, or an accent piece in your home decoration. The cover features a zipper and is easily removable for washing.
Most Cited EJSO - European Journal of Surgical Oncology Articles
The most cited articles published since 2011, extracted from Scopus.

Antitumor effectiveness of electrochemotherapy: A systematic review and meta-analysis
B. Mali | T. Jarm | M. Snoj | G. Sersa | D. Miklavcic
Volume 39, Issue 1, January 2013, Pages 4-16

Background: This systematic review has two purposes: to consolidate the current knowledge about the clinical effectiveness of electrochemotherapy, a highly effective local therapy for cutaneous and subcutaneous tumors; and to investigate the differences in effectiveness of electrochemotherapy with respect to tumor type, chemotherapeutic drug, and route of drug administration. Methods: All necessary steps for a systematic review were applied: formulation of the research question, systematic search of the literature, study selection and data extraction using an independent screening process, assessment of risk of bias, and statistical data analysis using two-sided common statistical methods and meta-analysis. Studies were eligible for the review if they provided data about the effectiveness of single-session electrochemotherapy of cutaneous or subcutaneous tumors in various treatment conditions. Results: In total, 44 studies involving 1894 tumors were included in the review. Data analysis confirmed that electrochemotherapy had significantly (p < .001) higher effectiveness (by more than 50%) than bleomycin or cisplatin alone. The effectiveness was significantly higher for intratumoral than for intravenous administration of bleomycin (p < .001 for complete response rate, CR%; p = .028 for objective response rate, OR%). Bleomycin and cisplatin administered intratumorally resulted in equal effectiveness of electrochemotherapy. Electrochemotherapy was more effective in sarcoma than in melanoma or carcinoma tumors.
Conclusions: The results of this review shed new light on the effectiveness of electrochemotherapy, can be used to predict tumor response to electrochemotherapy with respect to various treatment conditions, and should be taken into account for further refinement of electrochemotherapy protocols. © 2012 Elsevier Ltd. All rights reserved.

Systematic review of radioguided surgery for non-palpable breast cancer
P. J. Lovrics | S. D. Cornacchi | R. Vora | C. H. Goldsmith | K. Kahnamoui
Volume 37, Issue 5, May 2011, Pages 388-397

Background: This systematic review examines whether radioguided localization surgery (RGL) (radioguided occult lesion localization - ROLL and radioguided seed localization - RSL) for non-palpable breast cancer lesions produces lower positive margin rates than standard wire-guided localization (WGL) surgery. Methods: We performed a comprehensive literature review to identify clinical studies using either ROLL or RSL. Included studies examined invasive or in situ breast cancer and reported pathologically assessed margin status or specimen volume/weight. Two reviewers independently assessed study eligibility and quality and abstracted relevant data on patient and surgical outcomes. Quantitative data analyses were performed. Results: Fifty-two clinical studies on ROLL (n = 46) and RSL (n = 6) were identified. Twenty-seven met our inclusion criteria: 12 studies compared RGL to WGL and 15 were single cohorts using RGL. Ten studies were included in the quantitative analyses. Data for margin status and re-operation rates from 4 randomized controlled trials (RCTs; n = 238) and 6 cohort studies were combined, giving a combined odds ratio (OR) of 0.367 and 95% confidence interval (CI) of 0.277 to 0.487 (p < 0.001) for margin status, and an OR of 0.347, 95% CI: 0.250 to 0.481 (p < 0.001), for re-operation rates. Conclusions: The results of this systematic review of RGL versus WGL demonstrate that the RGL technique produces lower positive margin rates and fewer re-operations.
While this review is limited by the small size and quality of the RCTs, the odds ratios suggest that RGL may be a superior technique to guide surgical resection of non-palpable breast cancers. These results should be confirmed by larger, multi-centered RCTs. © 2011 Elsevier Ltd. All rights reserved.

Prospective trial of adipose-derived regenerative cell (ADRC)-enriched fat grafting for partial mastectomy defects: The RESTORE-2 trial
R. Pérez-Cano | J. J. Vranckx | J. M. Lasso | C. Calabrese | B. Merck | A. M. Milstein | E. Sassoon | E. Delay | E. M. Weiler-Mithoff Volume 38, Issue 5, May 2012, Pages 382-389

Aims: Women undergoing breast conservation therapy (BCT) for breast cancer are often left with contour defects and few acceptable reconstructive options. RESTORE-2 is the first prospective clinical trial using autologous adipose-derived regenerative cell (ADRC)-enriched fat grafting for reconstruction of such defects. This single-arm, prospective, multi-center clinical trial enrolled 71 patients post-BCT with defects ≤150 mL. Methods: Adipose tissue was collected via syringe lipoharvest and then processed during the same surgical procedure using a closed automated system that isolates ADRCs and prepares an ADRC-enriched fat graft for immediate re-implantation. ADRC-enriched fat graft injections were performed in a fan-shaped pattern to prevent pooling of the injected fat. Overall procedure times were less than 4 h. The RESTORE-2 protocol allowed for up to two treatment sessions, and 24 patients elected to undergo a second procedure following the six-month follow-up visit. Results: Of the 67 patients treated, 50 reported satisfaction with treatment results through 12 months. Using the same metric, investigators reported satisfaction with the outcome in 57 of 67 patients. Independent radiographic core laboratory assessment reported improvement in the breast contour of 54 of 65 patients based on blinded assessment of MRI sequences.
There were no serious adverse events associated with the ADRC-enriched fat graft injection procedure. There were no reported local cancer recurrences. Injection site cysts were reported as adverse events in ten patients. Conclusion: This prospective trial demonstrates the safety and efficacy of the treatment of BCT defects utilizing ADRC-enriched fat grafts. © 2012 Elsevier Ltd. All rights reserved.

Validation of the Joensuu risk criteria for primary resectable gastrointestinal stromal tumour - The impact of tumour rupture on patient outcomes
P. Rutkowski | E. Bylina | A. Wozniak | Z. I. Nowecki | C. Osuch | M. Matlok | T. Świtaj | W. Michej | M. Wroński | S. Głuszek | J. Kroc | A. Nasierowska-Guttmejer | H. Joensuu Volume 37, Issue 10, October 2011, Pages 890-896

Background: Approval of imatinib for adjuvant treatment of gastrointestinal stromal tumours (GIST) raised discussion about the accuracy of prognostic factors in GIST and the clinical significance of the available risk stratification criteria. Methods: We studied the influence of a new modification of the NIH Consensus Criteria (the Joensuu risk criteria), the NCCN-AFIP criteria, and several clinicopathological factors, including tumour rupture, on relapse-free survival (RFS) in a prospectively collected tumour registry series consisting of 640 consecutive patients with primary, resectable, CD117-immunopositive GIST. The median follow-up time after tumour resection was 39 months. None of the patients received adjuvant imatinib. Results: The median RFS time after surgery was 50 months. In univariable analyses, high Joensuu risk group, tumour mitotic count >5/50 HPF, size >5 cm, non-gastric location, tumour rupture (7% of cases; P = 0.0014) and male gender had an adverse influence on RFS. In multivariable analysis, mitotic count >5/50 HPF, tumour size >5 cm and non-gastric location were independent adverse prognostic factors.
Forty, 151, 86 and 348 patients were assigned according to the Joensuu criteria to the very low, low, intermediate and high risk groups, with 5-year RFS of 94%, 94%, 86% and 29%, respectively. Conclusion: The Joensuu criteria, which include 4 prognostic factors (tumour size, site, mitotic count and rupture) and 3 categories for the mitotic count, were found to be a reliable tool for assessing the prognosis of operable GIST. The Joensuu criteria identified high-risk patients particularly well; these are likely the proper candidates for adjuvant therapy. © 2011 Elsevier Ltd. All rights reserved.

Prognostic factors and oncologic outcome in 146 patients with colorectal peritoneal carcinomatosis treated with cytoreductive surgery combined with hyperthermic intraperitoneal chemotherapy: Italian multicenter study S.I.T.I.L.O.
F. Cavaliere | M. De Simone | S. Virzì | M. Deraco | C. R. Rossi | A. Garofalo | F. Di Filippo | D. Giannarelli | M. Vaira | M. Valle | P. Pilati | P. Perri | M. La Pinta | I. Monsellato | F. Guadagni Volume 37, Issue 2, February 2011, Pages 148-154

Aim: The present study was specifically designed to assess the major clinical and pathological variables of patients with colorectal peritoneal carcinomatosis in order to investigate whether currently used criteria appropriately select candidates for peritonectomy procedures (cytoreductive surgery) combined with hyperthermic intraperitoneal chemotherapy (HIPEC). Patients and methods: Preoperative, operative and follow-up data on 146 consecutive patients presenting with peritoneal carcinomatosis of colorectal origin and treated by surgical cytoreduction combined with HIPEC in 5 Italian hospital and university centers were prospectively entered in a common database. Univariate and multivariate analyses were used to assess the prognostic value of clinical and pathologic factors.
Results: Over a minimum 24-month follow-up, the overall morbidity rate was 27.4% (mortality rate: 2.7%) and was directly related to the extent of surgery. Peritoneal cancer index (PCI), unfavorable peritoneal sites, synchronous or previously resected liver metastasis and the completeness of cytoreduction all emerged as independent prognostic factors correlated with survival. Conclusions: Until research provides more effective criteria for selecting patients based upon the biomolecular features of carcinomatosis, patients should be selected according to the existing independent prognostic variables. © 2010 Elsevier Ltd. All rights reserved.

Natural orifice total mesorectal excision using transanal port and laparoscopic assistance
J. J. Tuech | V. Bridoux | B. Kianifard | L. Schwarz | B. Tsilividis | E. Huet | F. Michot Volume 37, Issue 4, April 2011, Pages 334-335

Natural Orifice Transluminal Endoscopic Surgery (NOTES) is an emerging concept which has recently been applied to the field of rectal excision. The authors describe a case of total mesorectal excision using a transanal port and laparoscopic assistance. We describe a procedure performed in a 45-year-old patient for a rectal adenocarcinoma (1 cm wide, T1sm3) 3 cm above the dentate line. The procedure is described in the text and in a didactic video. © 2010 Elsevier B.V. All rights reserved.

Management of lobular carcinoma in-situ and atypical lobular hyperplasia of the breast - A review
M. Hussain | G. H. Cunnick Volume 37, Issue 4, April 2011, Pages 279-289

Objectives: To determine the incidence of malignancy (invasive carcinoma or DCIS) in patients diagnosed with lobular neoplasia (LN; B3) on core needle biopsy (CNB) of breast lesions by reviewing the published literature. Methods: Medline, Embase, the OVID database and reference lists were searched to identify and review all English-language articles addressing the management of LN diagnosed on CNB. Studies on mixed breast pathologies were excluded.
Results: Of 1229 LNs diagnosed on CNB, 789 (64%) underwent surgical excision. 211 (27%) of the excisions contained either DCIS or invasive disease. 280 of the excision specimens were classified as ALH, 241 as LCIS, 22 as pleomorphic LCIS (PLCIS) and 246 as unspecified LN on the original CNB. After surgical excision, 19% of the ALH cases, 32% of the LCIS cases and 41% of the PLCIS cases contained malignancy. 29% of the unspecified LNs were upgraded to malignancy. The higher incidence of malignancy within excision specimens for LCIS and PLCIS compared to ALH was significant (P < 0.04 and P < 0.003, respectively). Conclusion: There is a significant underestimation of malignancy in patients diagnosed with breast LN on CNB. 27% of CNB-diagnosed LN cases were found to contain malignancy following surgical excision. All patients diagnosed with LN on CNB should be considered for surgical excision biopsy. © 2010 Elsevier B.V. All rights reserved.

Peritoneal carcinomatosis treated with cytoreductive surgery and Hyperthermic Intraperitoneal Chemotherapy (HIPEC) for advanced ovarian carcinoma: A French multicentre retrospective cohort study of 566 patients
N. Bakrin | J. M. Bereder | E. Decullier | J. M. Classe | S. Msika | G. Lorimier | K. Abboud | P. Meeus | G. Ferron | F. Quenet | F. Marchal | S. Gouy | P. Morice | C. Pomel | M. Pocard | F. Guyon | J. Porcheron | O. Glehen Volume 39, Issue 12, December 2013, Pages 1435-1443

Background Despite a high response rate to front-line therapy, the prognosis of epithelial ovarian carcinoma (EOC) remains poor. Approaches that combine Cytoreductive Surgery (CRS) and Hyperthermic Intraperitoneal Chemotherapy (HIPEC) have been developed recently. The purpose of this study was to assess early and long-term survival in patients treated with this strategy. Patients and methods A retrospective multicentre cohort study of French centres was performed. All consecutive patients with advanced and recurrent EOC treated with CRS and HIPEC were included.
Results The study included 566 patients from 13 centres who underwent 607 procedures between 1991 and 2010. There were 92 patients with advanced EOC (first-line treatment) and 474 patients with recurrent EOC. Complete cytoreductive surgery was performed in 74.9% of patients. Mortality and grade 3 to 4 morbidity rates were 0.8% and 31.3%, respectively. The median overall survivals were 35.4 months and 45.7 months for advanced and recurrent EOC, respectively. There was no significant difference in overall survival between patients with chemosensitive and chemoresistant recurrence. The Peritoneal Cancer Index (PCI), which evaluates disease extent, was the strongest independent prognostic factor for overall and disease-free survival in all groups. Conclusion For advanced and recurrent EOC, a curative therapeutic approach combining optimal CRS and HIPEC should be considered, as it may achieve long-term survival in a disease with a severe prognosis, even in patients with chemoresistant disease. PCI should be used for patient selection. © 2013 Elsevier Ltd. All rights reserved.

Positron emission tomography (PET) for assessment of axillary lymph node status in early breast cancer: A systematic review and meta-analysis
K. L. Cooper | S. Harnan | Y. Meng | S. E. Ward | P. Fitzgerald | D. Papaioannou | L. Wyld | C. Ingram | I. D. Wilkinson | E. Lorenz Volume 37, Issue 3, March 2011, Pages 187-198

Purpose: Sentinel lymph node biopsy (SLNB) and axillary lymph node dissection (ALND) are used to assess axillary nodal status in breast cancer, but are invasive procedures associated with morbidity, including lymphoedema. This systematic review evaluates the diagnostic accuracy of positron emission tomography (PET), with or without computed tomography (CT), for assessment of axillary nodes in early breast cancer. Methods: Eleven databases including MEDLINE, EMBASE and the Cochrane Library, plus research registers and conference proceedings, were searched in April 2009.
Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS) checklist. Sensitivity and specificity were meta-analysed using a bivariate random effects approach. Results: Across 26 studies evaluating PET or PET/CT (n = 2591 patients), mean sensitivity was 63% (95% CI: 52-74%; range 20-100%) and mean specificity 94% (95% CI: 91-96%; range 75-100%). Across 7 studies of PET/CT (n = 862), mean sensitivity was 56% (95% CI: 44-67%) and mean specificity 96% (90-99%). Across 19 studies of PET only (n = 1729), mean sensitivity was 66% (50-79%) and mean specificity 93% (89-96%). Mean sensitivity was 11% (5-22%) for micrometastases (≤2 mm; five studies; n = 63) and 57% (47-66%) for macrometastases (>2 mm; four studies; n = 111). Conclusions: PET had lower sensitivity and specificity than SLNB. Therefore, replacing SLNB with PET would avoid the adverse effects of SLNB, but would lead to more false-negative patients at risk of recurrence and more false-positive patients undergoing unnecessary ALND. The present evidence does not support the routine use of PET or PET/CT for the assessment of the clinically negative axilla. © 2011 Elsevier Ltd. All rights reserved.

Outcomes of colorectal cancer patients with peritoneal carcinomatosis treated with chemotherapy with and without targeted therapy
Y. L B Klaver | L. H J Simkens | V. E P P Lemmens | M. Koopman | S. Teerenstra | R. P. Bleichrodt | I. H J T De Hingh | C. J A Punt Volume 38, Issue 7, July 2012, Pages 617-623

Background: Although systemic therapies have been shown to result in a survival benefit in patients with metastatic colorectal cancer (mCRC), outcomes in patients with peritoneal carcinomatosis (PC) are poor. No data are available on outcomes of current chemotherapy schedules plus targeted agents in mCRC patients with PC.
Methods: Previously untreated mCRC patients treated with chemotherapy in the CAIRO study and with chemotherapy and targeted therapy in the CAIRO2 study were included and retrospectively analysed according to the presence or absence of PC at randomisation. Patient demographics, primary tumour characteristics, progression-free survival (PFS), overall survival (OS), and occurrence of toxicity were evaluated. Results: Thirty-four patients with PC were identified in the CAIRO study and 47 patients in the CAIRO2 study. Median OS was decreased for patients with PC compared with patients without PC (CAIRO: 10.4 versus 17.3 months, p ≤ 0.001; CAIRO2: 15.2 versus 20.7 months, p < 0.001). The median number of treatment cycles did not differ between patients with and without PC in either study. Major toxicity was more frequent in patients with PC treated with sequential chemotherapy in the CAIRO study than in patients without PC. This was not reflected in reasons to discontinue treatment. In the CAIRO2 study, no differences in major toxicity were observed. Conclusion: Our data demonstrate decreased efficacy of current standard chemotherapy with and without targeted agents in mCRC patients with PC. This suggests that the poor outcome cannot be explained by undertreatment or increased susceptibility to toxicity, but rather by relative resistance to treatment. © 2012 Elsevier Ltd. All rights reserved.

A comparison of three methods for nonpalpable breast cancer excision
N. M A Krekel | B. M. Zonderhuis | H. B A C Stockmann | W. H. Schreurs | H. Van Der Veen | E. S M De Lange De Klerk | S. Meijer | M. P. Van Den Tol Volume 37, Issue 2, February 2011, Pages 109-115

Aims: To evaluate the efficacy of three methods of breast-conserving surgery (BCS) for nonpalpable invasive breast cancer in obtaining adequate resection margins and volumes of resection.
Materials and methods: A total of 201 consecutive patients undergoing BCS for nonpalpable invasive breast cancer between January 2006 and 2009 in four affiliated institutions were retrospectively analysed. Patients with pre-operatively diagnosed primary or associated ductal carcinoma in situ (DCIS), multifocal disease, or a history of breast surgery or neo-adjuvant treatment were excluded from the study. The resections were guided by wire localisation (WL), ultrasound (US), or radio-guided occult lesion localisation (ROLL). The pathology reports were reviewed to determine oncological margin status, as well as tumour and surgical specimen sizes. The optimal resection volume (ORV), defined as the spherical tumour volume with an added 1.0-cm margin, and the total resection volume (TRV), defined as the corresponding ellipsoid, were calculated. By dividing the TRV by the ORV, a calculated resection ratio (CRR) was determined to indicate the excess tissue resection. Results: Of all 201 excisions, 117 (58%) were guided by WL, 52 (26%) by US, and 32 (16%) by ROLL. The rate of focally positive and positive margins for invasive carcinoma was significantly lower in the US group (N = 2 (3.7%)) than in the WL (N = 25 (21.3%)) and ROLL (N = 8 (25%)) groups (p = 0.023). The median CRRs were 3.2 (US), 2.8 (WL) and 3.8 (ROLL) (WL versus ROLL, p < 0.05), representing a median excess tissue resection of 3.1 times the optimal resection volume. Conclusion: US-guided BCS for nonpalpable invasive breast cancer was more accurate than WL- and ROLL-guided surgery because it optimised the surgeon's ability to obtain adequate margins. The excision volumes were large in all excision groups, especially in the ROLL group. © 2010 Elsevier Ltd. All rights reserved.

The Dutch Surgical Colorectal Audit
N. J. Van Leersum | H. S. Snijders | D. Henneman | N. E. Kolfschoten | G. A. Gooiker | M. G. Ten Berge | E. H. Eddes | M. W J M Wouters | R.
A E M Tollenaar Volume 39, Issue 10, October 2013, Pages 1063-1070

Introduction In 2009, the nationwide Dutch Surgical Colorectal Audit (DSCA) was initiated by the Association of Surgeons of the Netherlands (ASN) to monitor, evaluate and improve colorectal cancer care. The DSCA is currently widely used as a blueprint for the initiation of other audits, coordinated by the Dutch Institute for Clinical Auditing (DICA). This article illustrates key elements of the DSCA and results of three years of auditing. Methods Key elements include: a leading role of the professional association, with integration of the audit in the national quality assurance policy; web-based registration by medical specialists; weekly updated online feedback to participants; annual external data verification with other data sources; and improvement projects. Results Within two years, all Dutch hospitals participated in the audit. Case ascertainment was 92% in 2010 and 95% in 2011. External data verification by comparison with the Netherlands Cancer Registry (NCR) showed high concordance of data items. Within three years, guideline compliance for diagnostics, preoperative multidisciplinary meetings and standardised reporting increased; complication, re-intervention and postoperative mortality rates decreased significantly. Discussion The success of the DSCA is the result of effective surgical collaboration. The leading role of the ASN in conducting the audit resulted in full participation of all colorectal surgeons in the Netherlands. By integrating the audit into the ASN's quality assurance policy, it could be used to set national quality standards. Future challenges include reduction of the administrative burden; expansion to a multidisciplinary registration; and the addition of financial information and patient-reported outcomes to the audit data. © 2013 Elsevier Ltd. All rights reserved.
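The calculated resection ratio (CRR) defined in the Krekel et al. abstract above (ORV as the spherical tumour volume with an added 1.0-cm margin, TRV as the corresponding ellipsoid) reduces to simple solid geometry. A minimal sketch follows; the function names and example dimensions are illustrative, not study data, and it assumes the ellipsoid is computed from the specimen's three diameters:

```python
import math

def optimal_resection_volume(tumour_diameter_cm: float, margin_cm: float = 1.0) -> float:
    """Spherical tumour volume with an added circumferential margin, in cm^3."""
    radius = tumour_diameter_cm / 2 + margin_cm
    return (4 / 3) * math.pi * radius ** 3

def total_resection_volume(length_cm: float, width_cm: float, height_cm: float) -> float:
    """Ellipsoid volume of the excised specimen from its three diameters, in cm^3."""
    return (4 / 3) * math.pi * (length_cm / 2) * (width_cm / 2) * (height_cm / 2)

def calculated_resection_ratio(tumour_diameter_cm, specimen_dims_cm, margin_cm=1.0):
    """CRR = TRV / ORV; values above 1 indicate excess tissue resection."""
    orv = optimal_resection_volume(tumour_diameter_cm, margin_cm)
    trv = total_resection_volume(*specimen_dims_cm)
    return trv / orv

# Hypothetical example: a 1.5 cm tumour excised as a 6 x 5 x 4 cm specimen
crr = calculated_resection_ratio(1.5, (6, 5, 4))
```

On these illustrative numbers the CRR comes out near 2.8, i.e. roughly the median excess resection the abstract reports for the WL group.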
Advanced cytoreduction as surgical standard of care and hyperthermic intraperitoneal chemotherapy as promising treatment in epithelial ovarian cancer
M. Deraco | D. Baratti | B. Laterza | M. R. Balestra | E. Mingrone | Antonio MacRì | S. Virzì | F. Puccio | P. S. Ravenda | S. Kusamura Volume 37, Issue 1, January 2011, Pages 4-9

Favorable oncological outcomes have been reported in several trials with the introduction of Cytoreductive Surgery (CRS) and Hyperthermic Intraperitoneal Chemotherapy (HIPEC) in the treatment of Advanced Epithelial Ovarian Cancer (EOC). However, most of the studies testing the combined approach are observational and have been conducted in inhomogeneous series, so the evidence supporting this combined treatment is still weak. Median overall and disease-free survivals of up to 64 months and 57 months, respectively, have been reported. Although morbidity rates of up to 40% have been observed in some series, CRS + HIPEC continues to gain popularity. Several prospective randomized trials are ongoing, using the procedure at various time points of the disease. In this review, issues such as the impact of cytoreduction and residual disease (RD) on outcomes, as well as the role of HIPEC, are updated from the literature; some HIPEC-related points of controversy are also discussed. Recent experience with a more aggressive surgical approach to the upper abdomen to resect peritoneal carcinomatosis (PC) has allowed increased rates of optimal cytoreduction and demonstrated an apparently better outcome. This evidence, together with the positive results of a phase III trial testing normothermic intraperitoneal chemotherapy as first-line treatment, is leading some investigators to propose CRS + HIPEC in the primary setting. Several prospective phase II and III trials have recently been launched to validate the role of the combined treatment at various time points of the disease's natural evolution.
© 2010 Elsevier Ltd. All rights reserved.

MicroRNA-21 and PDCD4 expression in colorectal cancer
K. H. Chang | N. Miller | E. A H Kheirelseid | H. Ingoldsby | E. Hennessy | C. E. Curran | S. Curran | M. J. Smith | M. Regan | O. J. McAnena | M. J. Kerin Volume 37, Issue 7, July 2011, Pages 597-603

Introduction: MiRNAs regulate gene expression by binding to target sites and initiating translational repression and/or mRNA degradation. Studies have shown that miR-21 exerts its oncogenic activity by targeting the 3′-UTR of the PDCD4 tumour suppressor. However, the mechanism of this regulation is poorly understood. In colorectal cancer, loss of PDCD4 has been reported in association with increased tumour aggressiveness and poor prognosis. The purpose of this study was to delineate the interaction between PDCD4 and its oncogenic modulator miR-21 in colorectal cancer. Methods: A cohort of 48 colorectal tumours, 61 normal tissues and 7 polyps was profiled for miR-21 and PDCD4 gene expression. A subset of 48 specimens (31 tumours and 17 normal tissues) was analysed for PDCD4 protein expression by immunohistochemistry. Results: A significant inverse relationship between miR-21 and PDCD4 gene expression (p < 0.001) was identified by RT-qPCR. In addition, a significant reduction of PDCD4 expression (p < 0.001) and reciprocal upregulation of miR-21 (p = 0.005), progressing from tumour to polyp to normal mucosa, were identified. Analysis of protein expression by IHC revealed loss of PDCD4 staining in tumour tissue. Patients with disease recurrence had higher levels of miR-21. Conclusion: This study demonstrates the inverse relationship between miR-21 and PDCD4, suggesting that miR-21 post-transcriptionally modulates PDCD4 via mRNA degradation. Pharmacological manipulation of the miR-21/PDCD4 axis could represent a novel therapeutic strategy in the treatment of colorectal cancer. © 2011 Published by Elsevier Ltd.
Multidisciplinary management of hilar cholangiocarcinoma (Klatskin tumor): Extended resection is associated with improved survival
T. M. Van Gulik | J. J. Kloek | A. T. Ruys | O. R C Busch | G. J. Van Tienhoven | J. S. Lameris | E. A J Rauws | D. J. Gouma Volume 37, Issue 1, January 2011, Pages 65-71

Background: Effective diagnosis and treatment of patients with hilar cholangiocarcinoma (HCCA) is based on the synergy of endoscopists, interventional radiologists, radiotherapists and surgeons. This report summarizes the multidisciplinary experience in management of HCCA over a period of two decades at the Academic Medical Center in Amsterdam, with emphasis on surgical outcome. Methods: From 1988 until 2003, 117 consecutive patients underwent resection on suspicion of HCCA. Preoperative work-up included staging laparoscopy, preoperative biliary drainage, assessment of the volume/function of the future remnant liver, and radiation therapy to prevent seeding metastases. A more aggressive surgical approach combining hilar resection with extended liver resection was applied from 1998 onwards. Outcomes of resection, including actuarial 5-year survival, were assessed. Results: Eighteen patients (15.3%) appeared to have a benign lesion on microscopic examination of the specimen, leaving 99 patients with histologically proven HCCA. These 99 patients were analysed according to three 5-year periods of resection, i.e. period 1 (1988-1993, n = 45), period 2 (1993-1998, n = 25) and period 3 (1998-2003, n = 29). The rate of R0 resections increased, and actuarial five-year survival significantly improved from 20 ± 5% for periods 1 and 2 to 33 ± 9% in period 3 (p < 0.05). Postoperative morbidity and mortality in the last period were 68% and 10%, respectively. Conclusion: Extended surgical resection resulted in an increased rate of R0 resections and significantly improved survival. Candidates for resection should be considered by a specialized, multidisciplinary team. © 2010 Elsevier Ltd.
All rights reserved.

A feasibility study (ICG-10) of indocyanine green (ICG) fluorescence mapping for sentinel lymph node detection in early breast cancer
G. C. Wishart | S. W. Loh | L. Jones | J. R. Benson Volume 38, Issue 8, August 2012, Pages 651-656

Background: There is now increasing evidence to support the use of indocyanine green (ICG) for sentinel lymph node (SLN) detection in early breast cancer. The primary objective of this feasibility study (ICG-10) was to determine the sensitivity and safety of ICG fluorescence imaging for sentinel lymph node identification when combined with blue dye and radiocolloid. Methods: One hundred women with clinically node-negative breast cancer (95 unilateral; 5 bilateral) had sentinel lymph node biopsy using blue dye, radioisotope and ICG. One patient was excluded from analysis, and the sensitivity (detection rate) of ICG alone, and in combination with blue dye and/or radioisotope, was calculated for the remaining 104 procedures in 99 patients. Results: Transcutaneous fluorescent lymphography was visible in all 104 procedures. All 202 true SLNs, defined as blue and/or radioactive, were also fluorescent with ICG. Detection rates were: ICG alone 100%, ICG & blue dye 95.0%, ICG & radioisotope 77.2%, ICG & blue dye & radioisotope 73.1%. Metastases were found in 25 of 201 SLNs (12.4%), and all positive nodes were fluorescent, blue and radioactive. The procedural node positivity rate was 17.3%. Conclusion: The results of this study confirm the high sensitivity of ICG fluorescence for SLN detection in early breast cancer. The combination of ICG and blue dye had the highest nodal sensitivity at 95.0%, defining a dual approach to SLN biopsy that avoids the need for radioisotope. © 2012 Elsevier Ltd. All rights reserved.

Prevention of seroma formation after axillary dissection in breast cancer: A systematic review
A. J M Van Bemmel | C. J H Van De Velde | R. F. Schmitz | G. J.
Liefers Volume 37, Issue 10, October 2011, Pages 829-835

Background: The most common complication after breast cancer surgery is seroma formation. It is a source of significant morbidity and discomfort. Many articles have been published describing risk factors and preventive measures. The aim of this paper is to provide a systematic review of studies and reports on risk factors and preventive measures. Surgery lies at the core of seroma formation; therefore, the focus is placed on surgical ways of reducing seroma. Methods: A computer-assisted Medline search was carried out, followed by manual retrieval of relevant articles found in the reference listings of original articles. Results: 136 relevant articles were reviewed. Though the level of evidence remains varied, several factors (type of dissection, the tools with which dissection is carried out, reduction of dead space, suction drainage, use of fibrin glue, and octreotide usage) have been found to correlate with seroma formation and have been shown to significantly reduce seroma rates. Conclusion: Seroma formation after breast cancer surgery cannot be avoided at present. There are, however, several methods to minimize seroma and the associated morbidity. Future research should be directed towards the best ways of reducing seroma by combining proven methods. © 2011 Elsevier Ltd. All rights reserved.

Complications of lymphadenectomy for gynecologic cancer
A. Achouri | C. Huchon | A. S. Bats | C. Bensaid | C. Nos | F. Lécuru Volume 39, Issue 1, January 2013, Pages 81-86

Introduction: Symptomatic postoperative lymphocysts (SPOLs) and lower-limb lymphedema (LLL) are probably underestimated complications of lymphadenectomy for gynecologic malignancies. Here, our objective was to evaluate the incidence and risk factors of SPOLs and LLL after pelvic and/or aortocaval lymphadenectomy for gynecologic malignancies.
Methods: Single-center retrospective study of consecutive patients who underwent pelvic and/or aortocaval lymphadenectomy for ovarian cancer, endometrial cancer, or cervical cancer between January 2007 and November 2008. The incidences of SPOL and LLL were computed with their 95% confidence intervals (95%CIs). Multivariate logistic regression was performed to identify independent risk factors for SPOL and LLL. Results: We identified 88 patients, including 36 with ovarian cancer, 35 with endometrial cancer, and 17 with cervical cancer. The overall incidence of SPOL was 34.5% (95%CI, 25-45) and that of LLL was 11.4% (95%CI, 5-18). Endometrial cancer was independently associated with a lower risk of SPOL (adjusted odds ratio [aOR], 0.09; 95%CI, 0.02-0.44) and one or more positive pelvic nodes with a higher risk of SPOL (aOR, 4.4; 95%CI, 1.2-16.3). Multivariate logistic regression failed to identify factors significantly associated with LLL. Conclusion: Complications of lymphadenectomy for gynecologic malignancies are common. This finding supports a more restrictive use of lymphadenectomy or the use of less invasive techniques such as sentinel node biopsy. © 2012 Elsevier Ltd. All rights reserved.

Prognostic models for outcome following liver resection for colorectal cancer metastases: A systematic review
L. Spelt | B. Andersson | J. Nilsson | R. Andersson Volume 38, Issue 1, January 2012, Pages 16-24

Background: Liver resection provides the best chance for cure in colorectal cancer (CRC) liver metastases. A variety of factors that might influence survival and recurrence have been identified. Predictive models can help in risk stratification, to determine multidisciplinary treatment and follow-up for individual patients. Aims: To systematically review the available prognostic models described for outcome following resection of CRC liver metastases and to assess their differences and applicability.
Methods: The Pubmed, Embase and Cochrane Library databases were searched for articles proposing a prognostic model or risk stratification system for resection of CRC liver metastases. Search terms included 'colorectal', 'liver', 'metastasis', 'resection', 'prognosis' and 'prediction'. The articles were systematically reviewed. Results: Fifteen prognostic systems were identified, published between 1996 and 2009. The median study population was 305 patients and the median follow-up was 32 months. All studies used Cox proportional hazards for multivariable analysis. No prognostic factor was common to all models, though the number of metastases, CRC spread to lymph nodes, maximum size of metastases, preoperative CEA level and extrahepatic spread tended to emerge as independent risk factors. Seven models assigned more weight to selected factors considered of higher predictive value. Conclusion: The existing predictive models are diverse and their prognostic factors are often not weighted according to their impact. For the development of future predictive models, the complex relations within datasets and differences in the relevance of individual factors should be taken into account, for example by using artificial neural networks. © 2011 Elsevier Ltd. All rights reserved.

The value of 18-FDG PET/CT in early-stage breast cancer compared to traditional diagnostic modalities with an emphasis on changes in disease stage designation and treatment plan
Z. Garami | Z. Hascsi | J. Varga | T. Dinya | M. Tanyi | I. Garai | L. Damjanovich | L. Galuska Volume 38, Issue 1, January 2012, Pages 31-37

Background: Proper preoperative staging is vital in the treatment of breast cancer patients. The aim of our study was to assess the value of the diagnostic information provided by PET/CT in surgical practice in breast cancer cases considered early-stage by conventional diagnostic modalities.
Methods: Whole-body 18-FDG PET/CT was performed on 115 breast cancer patients in whom traditional diagnostic modalities showed no signs of distant metastases or extensive axillary and/or extra-axillary lymphatic spreading, and the size of the primary tumor was <4 cm. Results: The sensitivity of PET/CT in the detection of the primary tumor was 93%. The sensitivity of the traditional diagnostic modalities in the detection of multifocality was 43.8% while that of PET/CT was 100% (p < 0.001). In the assessment of axillary lymph nodes, ultrasound had a sensitivity of 30% and a specificity of 95%. The corresponding estimates for PET/CT were 72% and 96%, respectively. PET/CT detected distant metastases in 8 patients. TNM classification was modified after PET/CT scanning in 54 patients (47%). PET/CT data changed the treatment plan established upon the results of traditional imaging modalities in 18 patients (15.6%). Conclusions: PET/CT is able to assess primary tumor size and axillary lymphatic status more accurately than traditional diagnostic methods. It can detect distant metastases in 7-8% of those patients who were declared free of metastasis by clinical investigations. PET/CT scan modifies the disease stage determined by traditional diagnostic modalities in almost half of the patients and leads to a change in the treatment plan in every 6th patient. © 2011 Elsevier Ltd. All rights reserved.

Gastric cancer: ESMO-ESSO-ESTRO clinical practice guidelines for diagnosis, treatment and follow-up. T. Waddell | M. Verheij | W. Allum | D. Cunningham | A. Cervantes | D. Arnold Volume 40, Issue 5, January 2014, Pages 584-591

Breast cancer sentinel lymph node mapping using near infrared guided indocyanine green and indocyanine green-human serum albumin in comparison with gamma emitting radioactive colloid tracer. K. Polom | D. Murawa | P. Nowaczyk | Y. S. Rho | P.
Murawa Volume 38, Issue 2, February 2012, Pages 137-142 Aims: Recently, a novel method of using near infrared (NIR) guided indocyanine green (ICG) and ICG conjugated with human serum albumin (ICG:HSA) for sentinel lymph node biopsy (SLNB) of breast cancer patients has shown true potential. The aim of this study was to compare the usefulness of NIR guided ICG and ICG:HSA against the gamma emitting radiocolloid (RC). Methods: A group of 49 consecutive breast cancer patients underwent SLNB using RC. From this group, the first 28 patients were compared against ICG, while the next 21 patients were compared against ICG:HSA. The number of patients with a visible fluorescent path was recorded. Furthermore, the number of SLNs detected by fluorophores percutaneously and the total number of intraoperative SLNs detected by fluorophores and/or RC were noted. Results: NIR guided real-time lymphatic flow was observed in 47/49 patients (96%). In all cases except one, SLNs detected by the RC tracer were also detected by their respective fluorophore. Additionally, ICG detected 10 additional SLNs in 8 patients, while 3 additional SLNs were detected by ICG:HSA in 3 patients. Statistical analysis revealed no difference in the number of SLNs detected between ICG and ICG:HSA or between RC and ICG:HSA. However, a statistically significant difference was observed between RC and ICG (p = 0.0117), as well as between the combined NIR guided method and RC (p = 0.0033). Conclusions: The use of either ICG or ICG:HSA with RC to obtain SLNB seems to be an effective alternative. Compared to RC alone, the use of ICG:HSA, more so than ICG alone, may provide additional benefits. © 2011 Elsevier Ltd. All rights reserved.

Comparison of surgical performance and short-term clinical outcomes between laparoscopic and robotic surgery in distal gastric cancer. B. W. Eom | H. M. Yoon | K. W. Ryu | J. H. Lee | S. J. Cho | J. Y. Lee | C. G. Kim | I. J. Choi | J. S. Lee | M. C. Kook | J. Y. Rhee | S. R.
Park | Y. W. Kim Volume 38, Issue 1, January 2012, Pages 57-63 Aims: The authors aimed to compare the surgical performance and the short-term clinical outcomes of robotic assisted laparoscopic distal gastrectomy (RADG) with laparoscopy-assisted distal gastrectomy (LADG) in distal gastric cancer patients. Method: From April 2009 to August 2010, 62 patients underwent LADG and 30 patients underwent RADG for preoperative stage I distal gastric cancer by one surgeon at the National Cancer Center, Korea. Surgical performance was measured using lymph node (LN) dissection time and the number of retrieved LNs, which were viewed as surrogates of technical ease and oncologic quality. Results: Among the clinicopathologic characteristics, mean age, depth of invasion and stage were significantly different between the LADG and RADG groups. Mean dissection time at each LN station was greater in the RADG group, but no significant intergroup difference was found in the number of retrieved LNs. Furthermore, proximal resection margins were smaller, and hospital costs were higher in the RADG group. In terms of the RADG learning curve, mean LN dissection time was smaller in the late RADG group (n = 15) than in the early RADG group (n = 15) for the 4sb/4d, 5, 7-12a stations, but the numbers of retrieved LNs per station were similar. Conclusion: With the exception of operating time and cost, the numbers of retrieved LNs and the short-term clinical outcomes of RADG were found to be comparable to those of LADG, despite the surgeon's familiarity with LADG and lack of RADG experience. Further studies are needed to objectively evaluate ergonomic comfort and to quantify the patient benefits conferred by robotic surgery. © 2011 Elsevier Ltd. All rights reserved.

The prognostic value of PD-L1 expression for non-small cell lung cancer patients: A meta-analysis. A. Wang | H. Y. Wang | Y. Liu | M. C. Zhao | H. J. Zhang | Z. Y. Lu | Y. C. Fang | X. F. Chen | G. T.
Liu Volume 41, Issue 4, January 2015, Pages 450-456 © 2015 Elsevier Ltd. All rights reserved. Background: A meta-analysis was conducted to investigate the much-debated relationship between the gene expression of programmed cell death-ligand 1 (PD-L1) and cancer patient prognosis. The prognostic value of measuring PD-L1 expression in non-small cell lung cancer (NSCLC) patients was analyzed. Methods: We searched PubMed for studies about the relationship between PD-L1 expression and NSCLC patient prognosis. Only studies with patient survival data related to PD-L1 expression in NSCLC patients with different characteristics were included. The effect size (ES) for this analysis was the hazard ratio (HR) with 95% confidence intervals (CI) for overall survival (OS). Results: Six studies with 1157 patients were included under the defined inclusion and exclusion criteria. There was no significant heterogeneity among the studies (I2 = 0%, p = 0.683). PD-L1 expression was significantly associated with the differentiation of the tumor (poor vs. well: OR = 1.91, 95% CI: 1.33-2.75, p = 0.001). High PD-L1 expression was also correlated with poor prognosis in terms of the OS of patients with NSCLC (pooled HR = 1.75, 95% CI: 1.40-2.20, p < 0.001; heterogeneity test: I2 = 0%, p = 0.643). Conclusions: NSCLC patients with positive PD-L1 expression exhibited poor OS. PD-L1 expression was higher in tumors with poor differentiation.

Meta-analysis of predictive factors for non-sentinel lymph node metastases in breast cancer patients with a positive SLN. R. F D Van La Parra | P. G M Peer | M. F. Ernst | K. Bosscha Volume 37, Issue 4, April 2011, Pages 290-299 Aims: A meta-analysis was performed to identify the clinicopathological variables most predictive of non-sentinel node (NSN) metastases when the sentinel node is positive. Methods: A Medline search was conducted that ultimately identified 56 candidate studies. Original data were abstracted from each study and used to calculate odds ratios.
The random-effects model was used to combine odds ratios to determine the strength of the associations. Findings: The 8 individual characteristics found to be significantly associated with the highest likelihood (odds ratio >2) of NSN metastases are SLN metastases >2 mm in size, extracapsular extension in the SLN, >1 positive SLN, ≤1 negative SLN, tumour size >2 cm, ratio of positive sentinel nodes >50% and lymphovascular invasion in the primary tumour. The histological method of detection, which is associated with the size of metastases, had a correspondingly high odds ratio. Conclusions: We identified 8 factors predictive of NSN metastases that should be recorded and evaluated routinely in SLN databases. These factors should be included in a predictive model that is generally applicable among different populations. © 2010 Elsevier B.V. All rights reserved.
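Both meta-analyses above pool per-study effect sizes (hazard ratios, odds ratios) under a random-effects model. As a minimal sketch of how such pooling works, here is an inverse-variance combination with the DerSimonian-Laird between-study variance estimate; the abstracts do not specify their exact estimator, and the example inputs are hypothetical log odds ratios, not data from either study:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool log-scale effect sizes (e.g. log odds ratios or log hazard
    ratios) under a DerSimonian-Laird random-effects model."""
    w = [1.0 / se ** 2 for se in ses]                  # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)      # between-study variance
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]    # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    return pooled, math.sqrt(1.0 / sum(w_star)), tau2

# Hypothetical per-study log odds ratios and their standard errors.
log_ors = [0.7, 0.4, 0.9, 0.5]
ses = [0.25, 0.30, 0.35, 0.20]
pooled, se, tau2 = dersimonian_laird(log_ors, ses)
print(f"pooled OR = {math.exp(pooled):.2f}, tau^2 = {tau2:.3f}")
```

When the studies are homogeneous (Q at or below its degrees of freedom), tau² collapses to zero and the estimate reduces to the ordinary fixed-effect inverse-variance mean.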
Iomega is lately becoming quite aggressive across SMBs. Why the shift from the core focus on the consumer segment?

Huberman: Iomega operates across NAS, DAS and Multimedia. DAS is becoming a less interesting market as most offices/houses have more than one computing device and a single attached disk no longer meets the technology requirements. The price difference between DAS and NAS has reduced dramatically. We will however continue in multimedia, which is a tiny part of the business. Let me clarify that we are not abandoning the consumer business. China and India have a colossal SMB footprint, especially the ‘S’ part. Hence, the opportunity is enormous. In the next few years, we will predominantly become an ‘SMB and Distributed Enterprises’ company. NAS across Indian SMBs is completely under-penetrated and we are targeting this through a solutions-based approach.

But most technology vendors pursue a solutions-based approach rather than a ‘box-push’ for SMBs.

Huberman: Software is where all vendors are differentiated. A NAS box with no additional software other than the basic one is only good for file share and backup. Our personal file technology shifts the mentality of local file sharing and backup and moves it from LAN to WAN. Video surveillance, video monitoring and analysis will be the largest drivers in India. We announced a partnership with Tulip Telecom and Mindtree around innovative video surveillance solutions for SMB and distributed networks in India. Mindtree and Iomega will take the same technology across the globe.

How will Iomega win the fierce competition in the NAS market with dozens of vendors, including the likes of Buffalo and Netgear?

Huberman: It is a price-dependent game, as we try to be the lowest priced to succeed in a volume-centric market. The capability lies in our software to deliver ease-of-use products with solutions wrapped around them. We have more feet on the street and the wide network of multiple offices of EMC in India.
Post-sales support is both local and global, with the EMC CoE offering L1/L2 support to end customers. We recently announced a next-business-day service plan for SMBs.

How does Iomega leverage the technology expertise and brand recall of EMC?

Huberman: Iomega has an extensive NAS portfolio targeting SMBs, starting from 300 USD in India and going up to 10,000 USD. We are launching a wide range of refresh products next year for this segment. For software, we rely on EMC Avamar technology, mainly across endpoints for smaller/distributed offices. The mainline storage structure in corporates can run on EMC NAS, but when it comes to smaller branches, cost becomes an issue. With Avamar software installed on every product, Iomega NAS can tie back to EMC NAS in the corporate house.

Doesn’t this synergy result in a conflict between the two brands during deals?

Huberman: On the hardware side, VNXe is the lowest-end product from EMC for mainline enterprises. There is barely an overlap. NAS from EMC and Iomega are fundamentally different products. Iomega NAS is more value oriented, with great flexibility and application support, while EMC offers high availability and premium products. It really depends on what the user wants and at what price points.

What about channel partnerships? Do you depend on the long-standing partner base of EMC?

Huberman: We have at least one common distributor across most countries. Technically we both go after different market segments. For example, a small jewelry shop cannot afford EMC NAS but can opt for Iomega or our competition. India distributors for Iomega are Neoteric and Redington (who are also EMC distributors). Our program is ioclub, which is quite different from the EMC Velocity partner program. Some systems integrators are common to both. However, we have more partners (1000 plus) than EMC in India. We sell more petabytes across a large number of SMBs.
Originally posted at: http://earthhive.wordpress.com/2012/11/04/the-alchemistry-of-the-new-earth-headed-towards-zero-point-and-the-eye-of-god/ We have reached a passage on Earth of great intensity and excitement. The alchemical concoction of the New Earth is now pouring out into the physical plane of our lives and the matter of Earth with great speed. The elements have been mixed, the impurities identified, the change agents called in, the filtration system at its peak operation, and the new alchemical mixture for life has successfully been released. What I’m getting at is this: not only is there no turning back to the old Earth ways, the New Earth ways have fully been installed and are operational! While the New Earth elixir takes hold, the tensions and nervous energies of the old Earth are more than palpable. This is a very natural process. As the New Earth chemistry is moving into the physical, lower dimensions of Earth, old Earth foundations become very unsettled. Energetically, the old Earth foundations cannot coincide with the newly laid chemistry of pure light and love. So until the “settling agent” comes in, we may continue to experience personally and globally a lot of bizarre and disruptive events. Fortunately, there is a very powerful foundation that has been laid and does work now. The New Earth elixir of life can be experienced with full embodiment now if we know that it is here and what it requires. Primarily, we are required to stay in our hearts, no matter what, and to allow the Light of God in our hearts to show us the truth of all situations. So now for the enormous updates of only a mere few weeks. Just what is this New Earth elixir made of? About two weeks ago, an important system for the New Earth came online, the New Earth Grid circuitry. This resulted in a Blue Ray awakening en masse (not so pretty for all Blue Rays, some of whom have been jolted into their “awakening”).
The Blue Rays were the first point of human contact for the New Earth Grid. It is through them that the New Earth Grid circuitry is now connecting into humanity, upgrading our nervous systems into a functioning, whole system that operates with the light of the stars. Stellar light, as a bluish, cool and platinum light, is a strong protective energy as well as the basis of our new nervous system. With this new circuitry lighting up, there is now an expanded light structure in the Earth’s grid to receive the Heavenly Father light. It is through the evolution of Archangel Michael that I witnessed this incoming light structure. Michael’s New Earth function is one of protection, not from lower energies as in the old Earth, but of creating structure and foundation for the Mother Light of Creation to unfold in perfect Grace and Harmony. As Mother Earth continues to build her Hummingbird Body so that she can be fueled by the Nectar of the Mother, the protector energy is growing (the Heavenly Father). When fully saturated into the New Earth, the Nectar of the Mother and the Heavenly Father light will conjoin, activating the Twin Flame energy of the Earth. This will also reverberate throughout humanity’s hearts, awakening the Twin Flame of the Heart, a super intelligence of the synergy of the divine masculine and feminine energies within and in relationships of all kinds. As the Great Mother and Father supports for new life are coming in, we are still reeling from some powerful purification from the past. Hurricane Sandy, which struck the East Coast of the U.S., is one example of this purification. There have been many environmental traumas we have experienced that ramped up our fearful states and separation from oneness. From Atlantis to the Deluge of Mesopotamia, to the more recent Katrina and devastating earthquakes and tsunamis, the very water in our cells remembers these experiences.
The potential now is to heal the traumas from the past and to re-align with the elementals. We are at a stage now where it’s the water consciousness of our planet that is seeking deep healing. At the same time, the new water of the New Earth is coming through the inter-chambers of the Time-Space Matrix and meeting with the old waters. Again, this is the process of the New Earth elixir meeting with the old, and chemical and energetic tensions working towards harmony. One of the reasons I have been guided to make healing elixirs from water is so we have a vehicle to introduce, with ease and comfort, the New Earth chemistry into our old Earth bodies. There is a growing theme to the essence making from which I can see our quickly accelerating evolution. The first few are meant to rearrange the biology and chemistry of our bodies, the next set to embody the full Unified Heart intelligence, and a final set will awaken the tribe of Angelic Humans and usher in the Angelic Earth. It sounds like a lot but in truth this is all being timed to unfold in a short amount of time coinciding with the end of 2012. An important development in our evolution is humanity and Earth coming into realignment with the Eye of God. As Earth makes her dance towards the Galactic Center we will find ourselves and all of our social, economic and environmental foundations realigning with the Zero Point, the Center of all centers. This is being expressed to me as the Eye of God realignment. One of the realignments going on at this moment is our monetary system. As I wrote about in my last blog The Gold at the End of the Rainbow, the foundation for the Bounty of the New Earth has been laid. In this last week, the Dispensation of the New Money Matrix entered into the Earth plane. Those that have decided to co-create to increase love, joy, beauty and radiance are receiving this dispensation now.
The realignment came through a possibility that opened up at the Great Pyramid, a keeper of the Eye of God power on the planet. As this power has been used in a distorted fashion for thousands of years, it required a certain degree of light embodiment through the Rose Merkaba field of the Earth and Humanity before this possibility became visible. It is now here and has been dispensed to a first wave of co-creators. The richness that they will create will quickly multiply, opening up more circles of abundance and reciprocity among co-creators. Since the Venus Transit in June, the elixir of the New Earth has been revealing her secrets. If you’ve been following this blog since then you know about these miracles. But to summarize a few of the highlights, the physical Earth awakening has seen the new elemental light bodies arrive, the Cosmic-Earth Rose energies descend, the activation of the Rose Merkaba field, the expansion of the Rainbow Bridge and the Rainbow Light Body to the Earth Plane, the transmutation of Gold into White Gold as the core Earth element for richness, and now the Blue Ray-Earth Grid Circuitry powering up to receive the Heavenly Father and soon the Angelic Earth as we become the Angelic Humans of our Future Selves. While the mixing of the new with the old elixir of Earth will certainly lead to a few more upheavals, I leave you with this encouraging vision. A few days ago, through the co-creation of a healing session with a new client, I was led to a high mountain in Nepal. There was a post there with tattered prayer flags. I asked what we were to do, and I heard, “Move the post!” An eagle arrived and with post in hand we flew to the Andes Mountains and placed the post atop the highest peak. Then, an amazing swirling energy field of light pinks, soft blues, yellows, lavenders… came in sideways, realigning to the new placement of the post.
I asked, “What is this?” And I heard, “It’s the Ascension!” I realized then that we had been charged with moving the Earth’s Kundalini Point so that the Ascension Field could come into Earth’s upper atmosphere and find its new location. I then joined with the elementals and we made it snow. The softest, lightest, gentlest Snow of Peace began to fall across the lands of South and North America. With the Snow of Peace are the spirit beings of all the white animals, the white lions, the white peacocks, the white doves, the white tigers, the white wolves… They began to traverse the Earth spreading the Snow of Peace. The quiet felt after a big snow overtook the planet and our pure divinity was all we could see.
The massage was outstanding. I would definitely go back considering the price and the treatment I received. Synergy Health & Fitness, founded in 2007, is located at 205 Graceland Dr Ste 8 in Dothan, AL, and specializes in Health Clubs. Additional information is available at www.synergyhandf.com or by contacting Laura Baunard at (334) 699-5433.
Program Goals and Initiatives The overall goal of the Single Cell Analysis Program is to accelerate the discovery, development and translation of cross-cutting, innovative approaches to analyzing the heterogeneity of biologically relevant populations of cells in situ. Specifically, the program is: - Addressing key roadblocks in analyzing single cells by supporting cross-cutting, transformative research - Catalyzing the emerging field of single cell research through a synergistic program of unique initiatives - Coordinating NIH efforts in advancing the next generation of technologies for single cell analysis which will improve our ability to characterize cells and understand the biological significance of heterogeneity In support of these overarching goals, the program is composed of multiple initiatives. STUDIES TO EVALUATE CELLULAR HETEROGENEITY USING TRANSCRIPTIONAL PROFILING OF SINGLE CELLS The first initiative focuses on evaluating cellular heterogeneity, and aims to develop a collaborative program of study to evaluate transcriptional profiles of single human cells derived from a mixed population in a variety of tissue types; to classify cell types and cell states based on transcriptional signatures; to develop novel analytical tools to evaluate cell heterogeneity using transcriptional endpoints; and to define technical obstacles and develop novel approaches to alleviate them. EXCEPTIONALLY INNOVATIVE TOOLS AND TECHNOLOGIES FOR SINGLE CELL ANALYSIS The second initiative supports exploratory projects that establish feasibility for high-impact concepts which have the potential to transform single cell analysis, and validate and disseminate exceptionally innovative tools for single cell analysis.
ACCELERATING THE INTEGRATION AND TRANSLATION OF TECHNOLOGIES TO CHARACTERIZE BIOLOGICAL PROCESSES AT THE SINGLE CELL LEVEL The third initiative emphasizes supporting the validation and translation of innovative tools and novel capabilities for single cell analysis, and developing a broad range of technological solutions that will offer substantial benefits to end users. SINGLE CELL ANALYSIS WORKSHOPS AND MEETINGS The program hosts annual workshops and meetings with the goals of disseminating current research findings in single cell analysis; discussing and overcoming roadblocks in order to gain a deeper understanding of the significance of cell level heterogeneity; identifying new emerging challenges in single cell analysis and potential technological solutions; and building synergy between the projects supported by this initiative and other projects funded by NIH. These meetings serve as opportunities to foster communication and networking among multidisciplinary investigators. Additional information on past meetings is available at MEETINGS. SINGLE CELL ANALYSIS CHALLENGE Through the Follow that Cell challenge, the program strives to highlight and address key roadblocks and opportunities in single cell analysis, as well as attract new approaches and researchers to the field. Phase 1 of the challenge sought robust novel methods for analysis of individual cells to detect and assess changes in cell behavior and function over time, either as a result of natural state changes or when perturbed. Phase 2, Reduction to Practice, is under way with submissions due March 20, 2017.
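As a toy illustration of the first initiative's goal of classifying cell types and cell states from transcriptional signatures, here is a minimal sketch using a naive k-means grouping over per-cell expression vectors. This is not a tool the program prescribes, and the marker-gene counts below are entirely hypothetical:

```python
def kmeans(profiles, k, iters=20):
    """Naive k-means over per-cell expression vectors (illustration only)."""
    centers = [list(p) for p in profiles[:k]]  # deterministic initialization

    def nearest(p):
        # Index of the center closest to profile p (squared Euclidean distance).
        return min(range(k),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in profiles:
            clusters[nearest(p)].append(p)
        for i, members in enumerate(clusters):
            if members:  # move each center to its cluster's mean profile
                centers[i] = [sum(g) / len(members) for g in zip(*members)]
    return [nearest(p) for p in profiles]

# Hypothetical expression of three marker genes in six single cells:
# two transcriptional states (low vs. high) should fall into two clusters.
cells = [[1, 2, 1], [2, 1, 2], [1, 1, 2], [9, 8, 9], [8, 9, 8], [9, 9, 8]]
labels = kmeans(cells, 2)
print(labels)  # cells 0-2 share one label, cells 3-5 the other
```

Real single-cell pipelines add normalization, dimensionality reduction and far more robust clustering, but the core idea, grouping cells by similarity of their transcriptional profiles, is the same.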
Changes at the usually under-the-radar defensive tackle position may have the biggest impact on a UCF defense that has the potential to be one of the best in program history. The Knights lost just one senior at tackle, E.J. Dunston, but after failing to generate much push up front in 2013, the coaching staff knew it needed to add some strength and pass-rushing ability in the middle of the defensive line. UCF signed, among others, 6-foot, 300-pound freshman Jamiyus Pittman and 6-foot-4, 310-pound junior college transfer Lance McDowdell. Both are expected to contribute this season. Those players, along with last year's junior college addition, Jaryl Mamea, may be the biggest factors in upgrading a UCF pass rush that generated just 29 sacks last season. "Everybody looks at the defensive ends as pass rushers, but you never have good defensive ends unless you get push in the middle," UCF coach George O'Leary said. "You need people up front in the middle so that the quarterback can't step up to throw the ball. I think as quick as we've gotten outside, it's still not quick enough. … We're better up front just watching the first couple days." The benefits of better tackle play are seen most by the players immediately outside of them: defensive ends Deion Green, Thomas Niles, Luke Adams, Miles Pace, Deondre Barnett and Seyvon Lowry. Lowry and Green both showed promise last year before suffering injuries, while Niles' best game of the season came in the last one: a five-tackle, one-sack, one-hurry performance during a 52-42 win over Baylor in the Fiesta Bowl. Otherwise, UCF's defensive line combined for just 17.5 sacks all season, not including three sacks from departed freshman Blake Keller. Green and Lowry are healthy again, and Niles said there is no pressure to increase the pass rush productivity, but rather for the line to function better together. "On the defensive line, you've got four people. 
You can't have one person that makes all the plays and does everything," Niles said. "Everybody has to hold up their end. It all works together. I might make the quarterback step up and one of [the tackles] might get the sack, or they might flush the quarterback out to me. It goes hand-in-hand. Everyone has to be on the same page." Part of what will factor into that synergy is leadership. The Knights will field 20 seniors on this roster, but just two will be in the defensive line meeting room — Mamea and walk-on Rob Sauvao. Mamea is only in his second season in the program, while Niles, a redshirt junior, has the most experience of any lineman, with 20 career starts. With that tenure comes an expanded responsibility to lead, a role Niles is still trying to grasp, but defensive coordinator Tyson Summers emphasized it's more about how Niles works on the field. "The last three seasons, [Niles has] been a part of a lot of winning games," Summers said. "He has to be a leader in that room in particular, the D-line, and not just that, he's got to be able to help us lead our defense. He's got that type of ability, he's got that type of potential and he has to be able to do that at all times."
Hamass militant wing Al Qassam (Izz al-Din al-Qassam) played a major role in the Intifada. -. She is a perfect and complete manifestation of Pure Spirit, and Pure Opti on cannot be diseased; conse- quently she is not diseased. I do not love him-I love her. on the binary option trading login and repression is stressed. 5) as a reduction in the size of the cartilage blastema. 17879). Diehl V, Franklin J, Hasenclever D, et al. 114). However, because this situation carries a 3 risk of having a hereditary ovarian cancer syndrome, these women binary options broker api be counseled by a gynecologic oncologist or All women should have a comprehensive family history taken that focuses on all the known ovarian cancer risk factors. Opption. Madison, CT International Universities Press. One may expect an answer to such a question as the following. Binary option trading login manipulation is an option for management of Stimulates growth, differentiation, and proliferation of activated T cells; generates lymphokine- activated killer cell activity and other killer cells; stimulates the immune system against tumor cells CHAPTER 124 CANCER TREATMENT AND CHEMOTHERAPY 2311 Interferon alfa (interferon α-2b; Intron A binary option trading login Interferon α-2a; Roferon-A, IFN)a KS; CML; melanoma; hairy cell leukemia; renal cell carcinoma; NHL; multiple myeloma Hepatitis B and C; condyloma acuminata Aldesleukin (interleukin-2, IL-2; Proleukin)b Renal cell cancer; melanoma Stimulates the immune system against tumor cells; direct and indirect cytotoxic activity; increases expression of tumor- associated antigens aFrom Binary put option payoff and Border. His first wife died. Mabuse) Potel, Victor Binary option trading login Travels) Potro, Nelson (Dona Flor e seus dois maridos) Potter, Betty (Matter of Life and Death) Poujouly, Georges Binary options safe brokers. But as in Fig. And Penrose, Orbis, 29, 1985, 46589. 
An experience of satisfaction can ensue, because of an intervening adult who creates an WISH, HALLUCINATORY SATISFACTION OF A Optio DICTIONARY OF PSYCHOANALYSIS 1865 Page 2025 WISH, HALLUCINATORY SATISFACTION OF A association between the two mnemic images. Indication of chondrocyte binary option trading login. The other four were all vice 139 Page 148 Library of Binary option india legal Federal Research Division The Sociology and Psychology of Terrorism ministers of Aums Ministry of Science and Technology and included Masato Yokoyama, 31, an applied-physics graduate; Kenichi Hirose, 30, binary option trading login graduated at the top of his class in applied physics at the prestigious Waseda University; Yasuo Hayashi, 37, an electronics engineer; and Toru Toyoda, a physicist.Binary options trading brokers
Two's A Party: Lifecrowd and Eventup Join Forces

Webster’s dictionary defines synergy as...just kidding. Occasionally two businesses are such a natural fit that it’s hard to imagine them operating separately. When those businesses are both startups less than a year old, launched very publicly out of neighboring incubators, the likelihood and value of them aligning themselves grow exponentially. Events need venues. Venues need events. As Rob Schneider's character said in the movie Big Daddy, “Oh, yes. They make terrific pair. They went together like peanut butter and tuna fish...Maybe you like spaghetti and meatball? You more comfortable with that analogy?” The partnership is such that users of each online marketplace can instantly click through to the other to complete their event planning. While a host is booking a Lifecrowd event, such as wine tasting or swing dancing, they can hop over to Eventup to find the perfect space. In the inverse, a user can first book a venue and then seamlessly create a corresponding event listing. With Lifecrowd looking to promote unique and interesting activities, the ability for its users to host them in out-of-the-ordinary locations is fantastic. For Eventup, anything that leads to more events being hosted is a victory. The initial rollout of the partnership will be available in Los Angeles and San Francisco only, as these are the two early markets the companies have in common. Eventup is also operating in New York, while Lifecrowd has expanded solely within California, to San Diego and Orange County. Both companies have full “world domination” plans in place and can be expected to grow into new markets together. The two companies will kick off the partnership by hosting an exclusive event, naturally. The event will consist of multiple Lifecrowd sessions, among them sushi rolling.
It will take place at Geisha Studios in the unique Granada Buildings in Los Angeles, which date back to the 1920s and have an interesting labyrinth of interconnected courtyards and terraces. "The historic Granada Buildings are located in the Westlake district of downtown Los Angeles, where the stardust of a bygone era still hangs in the air." ~ Starlight Studios Seeing these companies come together is a perfect illustration of what’s going right in LA’s tech market at the moment. Amid all the growth and friendly competition for user attention, investor dollars, and employee talent, is a community. For the most part, entrepreneurs -- the smart and successful ones at least -- realize that a spirit of collaboration and support is critical to the success of the ecosystem. More importantly, the success of the ecosystem is critical to each individual company. As a VC told me yesterday, “The only formula I’ve come up with in two decades in this business is Karma = Synchronicity.” Lifecrowd was the first company to graduate from MuckerLab and has raised $5 million in Series A financing led by Lightbank along with Bullpen Capital, Baroda Ventures, and Prism VentureWorks. Eventup is the earliest company to launch from Science, a technology studio run by Mike Jones. Eventup has yet to announce financing outside of that received from Science. [Image Credit: Starlight Studios]
Customers Can Now Experience the Embedded Industry’s First Integrated Platform Specifically Designed to Accelerate Development of Application Software ANAHEIM, Calif. — (BUSINESS WIRE) — October 13, 2015 — RENESAS DEVCON — Renesas Electronics America, a premier supplier of advanced semiconductor solutions, today announced availability of the first products in the company’s new Renesas Synergy™ Platform, a qualified, easy-to-use platform designed to accelerate time to market, reduce total cost of ownership and remove many of the obstacles engineers face when designing Internet of Things (IoT) and other embedded applications. The platform’s first products include the S7G2 and S3A7 microcontroller (MCU) groups, which target connected human machine interface (HMI) and power-efficient control applications used in industrial, healthcare, home appliance and metering products. Development kits featuring the two MCU groups, software development tools, and the Renesas Synergy Software Package (SSP) are available now from the Renesas Synergy Gallery. The S7G2 and S3A7 are the first MCUs in the Renesas Synergy Platform, and the first scalable MCUs in the industry to be optimized for software development. By supporting an API that gives designers direct access to a completely integrated real-time operating system (RTOS) with communication stacks, middleware, libraries, an application framework and the MCUs’ drivers and peripherals, the Renesas Synergy Platform eliminates the design work needed to enable the essential system functions so designers can quickly begin development of their own application software. “With the launch of this beta program, we are now providing our customers, partners and distributors the tools, hardware and software they’ll need to experience the speed and ease with which they can develop new IoT products using the Renesas Synergy Platform,” said Peter Carbone, Vice President of the IoT Business Unit, Renesas Electronics America. 
“With mass production parts scheduled for availability in December 2015, we’re excited to see the innovative new IoT products they are creating with the Renesas Synergy Platform.”

Accelerating HMI Design

MCUs from the Renesas Synergy S7G2 group feature a 240MHz ARM® Cortex®-M4 CPU core and high-capacity 4-megabyte flash to accommodate the software needs of HMI applications today and in the future. MCUs from the Renesas Synergy S3A7 group incorporate a 48MHz ARM Cortex-M4 CPU to deliver a hardware solution well suited for HMI applications requiring power-efficient operation. The S7G2 and S3A7 MCUs are pin and peripheral compatible, so software developed for one MCU group can be used with the other. Future Renesas Synergy MCUs will also support this compatibility feature. To facilitate the design of HMI applications, the S7G2 and S3A7 MCUs integrate support in silicon for touchscreen LCD displays, including a capacitive touch sensing unit and a highly configurable LCD color graphics controller (S7G2) or segment LCD controller (S3A7). Design kits for both MCUs are now available with many enabling features. The DK-S7G2 kit has a detachable 4.3-inch WQVGA color LCD touch display plus a VGA camera, and the DK-S3A7 kit has a detachable custom segment LCD display and extensions for deep evaluation of Renesas’ third-generation capacitive touch technology. The kits have a variety of connectivity options including USB-HS and USB-FS, dual Ethernet, CAN, and a Bluetooth® low energy 4.0 radio. A very low-cost starter kit, the SK-S7G2, is also available for S7G2 MCUs; for well under US$100 it opens complete access to the entire Renesas Synergy Platform and all of its software capabilities, enabling development of color graphics, high-speed connectivity, and multi-media functions. Beyond standard development kits, the first product example (PE) is available for S7G2 MCUs, providing engineers and developers a complete example of an actual HMI end-product implementation.
The PE-HMI1 includes a detailed document that captures how it was developed, giving engineers a step-by-step journey through the design choices made to build the solution so they can reproduce or alter the design themselves for their own end-products. This is the first of many Renesas Synergy product examples to come.

High-Performance Software Features

The Renesas Synergy Software Package (SSP) includes the ThreadX® RTOS, X-Ware middleware (NetX™, USBX™, GUIX™, FileX®), a rich set of peripheral drivers, and a comprehensive application framework API. This eliminates the need to source this software from multiple vendors and to perform compatibility testing between different software elements. The API provides access to underlying silicon features and low-level software drivers, so engineers don’t have to spend valuable development time configuring basic software driver functionality. Smart Documentation and Verified Software Add-ons (VSAs) are available online through the Renesas Synergy Gallery to give developers powerful tools to get to market faster.

Join the Renesas Synergy Beta Program

Between now and December, developers interested in early access to the latest Synergy hardware and software are encouraged to visit http://www.RenesasSynergy.com and sign up to participate in the Renesas Synergy Platform Beta Program. For more information on Renesas, follow Renesas Electronics America at @RenesasAmerica on Twitter and http://www.facebook.com/RenesasAmerica.

About Renesas Electronics America Inc.
Global technology innovator LG Electronics sweeps Consumer Reports’ Best Product of the Year in all of the 40-inch-and-over TV categories. As the most prestigious product review magazine in the U.S., Consumer Reports annually announces its best product choices in a number of categories. In the November issue of Consumer Reports, LG LCD TVs were ranked number one in each of the following categories: 60 inches or larger, 55 to 59 inches, 46 to 52 inches, and 40 to 43 inches. The evaluation criteria included image quality, 3D performance, viewing angle, sound quality, user convenience, and power consumption. Notably, the 60-inch LG CINEMA 3D Smart TV (60LA8600) received the American publication’s ‘Excellent’ label, with a total score of 75 points. Models from Samsung Electronics and Sharp were tied in second place with 72 points each. In the 55 to 59 inches category, the LG Google TV (55GA7900) and CINEMA 3D Smart TV (55LA7400) shared the top spot, with Samsung (UN55ES8000) and Sony (XBR-55X900A) products coming in next with 74 points apiece. Equipped with the latest OS version (Google TV 3.2), the LG Google TV was rated highly for its convenient, user-centric features such as voice search and program recommendations. The 47-inch CINEMA 3D Smart TV (47LA6900) beat the competition in the 46- to 52-inch LCD TV category, scoring 71 points and receiving a ‘Good’ rating for every criterion measured. It boasts sophisticated picture technology that optimizes color, brightness, and definition, as well as smart searching, sharing, recommendation, and storage options. Additionally, the 42-inch LG Google TV (42GA6400) was announced the winner in the 40 to 43 inches category with 67 points, besting products from Samsung and Panasonic. Greatly encouraged by these excellent results, LG’s TV division continues to aggressively assert itself in the global TV market.
They have already unveiled the world’s first 84-inch Ultra HD TV and have successfully launched flat and curved OLED TV models globally before any of their competitors. Industry experts note that LG’s commitment to and investment in TV research and development are starting to pay dividends. To reinforce the competitiveness of its TV R&D, the company established the LG Gangnam R&D Center, a large-scale facility located in Dogok-dong, Gangnam-gu, Seoul. The center has greatly improved synergy, bringing together more than 2,000 of LG’s best TV researchers, engineers, and designers. In February of this year, LG acquired webOS from HP in order to bolster its Smart TV software division. LG has a lot more in store. Learn about TVs and other Home Entertainment products on their website www.lg.com/ph, at www.lgblog.com.ph, or at www.LGnewsroom.com; like LG’s official Facebook page “LG Philippines”; or follow @LGPhilippines on Twitter or Instagram.
CursiveLogic was founded out of necessity. A young man, older than the average age for learning cursive, wanted to learn cursive. His reading tutor, Linda Shrewsbury, discovered an easier way. CursiveLogic teaches the entire cursive lowercase alphabet in just 4 lessons. We were lucky enough to receive the CursiveLogic Workbook to use with my high school son. Now, he had already learned cursive, but it wasn’t his favorite; print was still his penmanship of choice. We knew that in order to write essays for his upcoming college exams he would need to feel totally comfortable writing in cursive. CursiveLogic uses a simple formula to teach cursive. In just 4 lessons you will learn (or relearn) to write the entire lowercase alphabet in cursive. Each lesson goes over one of the 4 shapes. After that first young man was taught cursive so quickly using this method, Ms. Shrewsbury went on to teach cursive to many people of all different ages and backgrounds. The method was created by a well-educated individual. Linda Shrewsbury is the primary author of the CursiveLogic curriculum. Linda graduated from Harvard University in 1974 with a concentration in economics. In 2007, she received a Masters in International Studies from Oklahoma State University. In addition to homeschooling her own three children for 13 years, Linda has been privileged to work as a teacher and a college professor in both private and public institutions. But CursiveLogic was not born from the ivory towers of higher learning. Linda created CursiveLogic to meet the needs of one special student who simply wanted to learn to sign his name. Let me tell you a little bit about the method. There are 4 basics to it. Letters grouped by shape ― Four foundational shapes underlie the entire lowercase alphabet.
Rather than teaching the letters alphabetically, CursiveLogic groups the lowercase alphabet into four groups based on the shape of the initial stroke of the letters and teaches all of the similarly shaped letters in a single lesson. Letters are also taught in a specific order that reinforces the pattern. By teaching all of the similar letters together, CursiveLogic captures the natural synergy of the alphabet itself, allowing each letter in the series to reinforce the proper formation of all the others. Letter strings ― CursiveLogic captures the flow of cursive by teaching all of the similarly shaped letters in a connected string rather than as individual letters. CursiveLogic’s letter strings teach students to connect letters from the first lesson, allowing students to internalize the flow of cursive handwriting even before they have learned all 26 letters. CursiveLogic also uses visual and auditory cues to reinforce the shape patterns: Theme colors — Each shape string has a color—orange ovals, lime loops, silver swings, and mauve mounds—that reinforces the formation of the basic common shape. Verbal task analysis — Students learn a simple, rhythmical chant that describes the path of the writing instrument as the letter shapes are formed. The process of verbally describing a motor task while performing it aids the acquisition of new motor skills. A day/week in our life: Now, with my son having already learned cursive, we planned on using this as more of a review and to raise his comfort level with writing in cursive. I originally wanted him to use this workbook daily, but once we began he ended up using it only about twice per week. His cursive has improved so much in just the first two weeks of using it. Our thoughts and feelings: I asked my son what he thought. He said that it takes off the pressure of remembering each letter and striving for perfection. You practice the 4 shapes and get those down, and you can write any letter with ease.
He said that it focuses his mind on what shape he should be writing instead of what letter he is writing, and that makes him feel like he can think more clearly. Check out CursiveLogic on social media. As you can imagine, with all the controversy over continuing to teach cursive, they stand firmly on the side of keeping cursive alive! Join them in their crusade to keep cursive.
The translational potential of pre-clinical stroke research depends on the accuracy of experimental modeling. Cerebral perfusion monitoring in animal models of acute ischemic stroke allows researchers to confirm successful arterial occlusion and exclude subarachnoid hemorrhage. Cerebral perfusion monitoring can also be used to study intracranial collateral circulation, which is emerging as a powerful determinant of stroke outcome and a possible therapeutic target. Despite the recognized role of Laser Doppler perfusion monitoring in the current guidelines for experimental cerebral ischemia, a number of technical difficulties limit its widespread use. One of the major issues is obtaining a secure and prolonged attachment of a deep-penetration Laser Doppler probe to the animal skull. In this video, we show our optimized system for cerebral perfusion monitoring during transient middle cerebral artery occlusion by intraluminal filament in the rat. We developed in-house a simple method to obtain a custom-made holder for twin-fibre (deep-penetration) Laser Doppler probes, which allows multi-site monitoring if needed. Continuous and prolonged monitoring of cerebral perfusion could easily be obtained over the intact skull.

17 Related JoVE Articles

Modeling Stroke in Mice: Permanent Coagulation of the Distal Middle Cerebral Artery
Institutions: University Hospital Munich, Munich Cluster for Systems Neurology (SyNergy), University Heidelberg, Charing Cross Hospital.

Stroke is the third most common cause of death and a main cause of acquired adult disability in developed countries. Only very limited therapeutic options are available for a small proportion of stroke patients in the acute phase. Current research is intensively searching for novel therapeutic strategies and is increasingly focusing on the sub-acute and chronic phase after stroke because more patients might be eligible for therapeutic interventions in a prolonged time window.
These delayed mechanisms include important pathophysiological pathways such as post-stroke inflammation, angiogenesis, neuronal plasticity and regeneration. In order to analyze these mechanisms and to subsequently evaluate novel drug targets, experimental stroke models with clinical relevance, low mortality and high reproducibility are sought after. Moreover, mice are the smallest mammals in which a focal stroke lesion can be induced and for which a broad spectrum of transgenic models is available. Therefore, we describe here the mouse model of transcranial, permanent coagulation of the middle cerebral artery via electrocoagulation distal of the lenticulostriatal arteries, the so-called “coagulation model”. The resulting infarct in this model is located mainly in the cortex; the relative infarct volume in relation to brain size corresponds to the majority of human strokes. Moreover, the model fulfills the above-mentioned criteria of reproducibility and low mortality. In this video we demonstrate the surgical methods of stroke induction in the “coagulation model” and report histological and functional analysis tools.

Medicine, Issue 89, stroke, brain ischemia, animal model, middle cerebral artery, electrocoagulation

A Protocol for Computer-Based Protein Structure and Function Prediction
Institutions: University of Michigan, University of Kansas.

Genome sequencing projects have deciphered millions of protein sequences, which require knowledge of their structure and function to improve the understanding of their biological role. Although experimental methods can provide detailed information for a small fraction of these proteins, computational modeling is needed for the majority of protein molecules which are experimentally uncharacterized. The I-TASSER server is an on-line workbench for high-resolution modeling of protein structure and function.
Given a protein sequence, a typical output from the I-TASSER server includes secondary structure prediction, predicted solvent accessibility of each residue, homologous template proteins detected by threading and structure alignments, up to five full-length tertiary structural models, and structure-based functional annotations for enzyme classification, Gene Ontology terms and protein-ligand binding sites. All the predictions are tagged with a confidence score which tells how accurate the predictions are without knowing the experimental data. To facilitate the special requests of end users, the server provides channels to accept user-specified inter-residue distance and contact maps to interactively change the I-TASSER modeling; it also allows users to specify any protein as a template, or to exclude any template proteins during the structure assembly simulations. The structural information can be collected by the users based on experimental evidence or biological insights with the purpose of improving the quality of I-TASSER predictions. The server was evaluated as one of the best programs for protein structure and function prediction in the recent community-wide CASP experiments. There are currently >20,000 registered scientists from over 100 countries who are using the on-line I-TASSER server.

Biochemistry, Issue 57, On-line server, I-TASSER, protein structure prediction, function prediction

Compensatory Limb Use and Behavioral Assessment of Motor Skill Learning Following Sensorimotor Cortex Injury in a Mouse Model of Ischemic Stroke
Institutions: Illinois Wesleyan University, University of Victoria.

Mouse models have become increasingly popular in the field of behavioral neuroscience, and specifically in studies of experimental stroke. As models advance, it is important to develop sensitive behavioral measures specific to the mouse. The present protocol describes a skilled motor task for use in mouse models of stroke.
The Pasta Matrix Reaching Task functions as a versatile and sensitive behavioral assay that permits experimenters to collect accurate outcome data and manipulate limb use to mimic human clinical phenomena, including compensatory strategies (i.e., learned non-use) and focused rehabilitative training. When combined with neuroanatomical tools, this task also permits researchers to explore the mechanisms that support behavioral recovery of function (or lack thereof) following stroke. The task is both simple and affordable to set up and conduct, offering a variety of training and testing options for numerous research questions concerning functional outcome following injury. Though the task has been applied to mouse models of stroke, it may also be beneficial in studies of functional outcome in other upper extremity injury models.

Behavior, Issue 89, Upper extremity impairment, Murine model, Rehabilitation, Reaching, Non-paretic limb training, Good limb training, Less-affected limb training, Learned non-use, Pasta matrix reaching task

Permanent Cerebral Vessel Occlusion via Double Ligature and Transection
Institutions: University of California, Irvine.

Stroke is a leading cause of death, disability, and socioeconomic loss worldwide. The majority of all strokes result from an interruption in blood flow (ischemia)1. The middle cerebral artery (MCA) delivers a great majority of blood to the lateral surface of the cortex2, is the most common site of human stroke3, and ischemia within its territory can result in extensive dysfunction or death1,4,5. Survivors of ischemic stroke often suffer loss or disruption of motor capabilities, sensory deficits, and infarct. In an effort to capture these key characteristics of stroke, and thereby develop effective treatment, a great deal of emphasis is placed upon animal models of ischemia in the MCA.
Here we present a method of permanently occluding a cortical surface blood vessel. We will present this method using an example of a relevant vessel occlusion that models the most common type, location, and outcome of human stroke: permanent middle cerebral artery occlusion (pMCAO). In this model, we surgically expose the MCA in the adult rat and subsequently occlude it via double ligature and transection of the vessel. This pMCAO blocks the proximal cortical branch of the MCA, causing ischemia in all of the MCA cortical territory, a large portion of the cortex. This method of occlusion can also be used to occlude more distal portions of cortical vessels in order to achieve more focal ischemia targeting a smaller region of cortex. The primary disadvantage of pMCAO is that the surgical procedure is somewhat invasive, as a small craniotomy is required to access the MCA, though this results in minimal tissue damage. The primary advantages of this model, however, are: the site of occlusion is well defined, the degree of blood flow reduction is consistent, functional and neurological impairment occurs rapidly, infarct size is consistent, and the high rate of survival allows for long-term chronic assessment.

Medicine, Issue 77, Biomedical Engineering, Anatomy, Physiology, Neurobiology, Neuroscience, Behavior, Surgery, Therapeutics, Surgical Procedures, Operative, Investigative Techniques, Life Sciences (General), Behavioral Sciences, Animal models, Stroke, ischemia, imaging, middle cerebral artery, vessel occlusion, rodent model, surgical techniques, animal model

Utilization of Microscale Silicon Cantilevers to Assess Cellular Contractile Function In Vitro
Institutions: University of Central Florida.

The development of more predictive and biologically relevant in vitro assays is predicated on the advancement of versatile cell culture systems which facilitate the functional assessment of the seeded cells.
To that end, microscale cantilever technology offers a platform with which to measure the contractile functionality of a range of cell types, including skeletal, cardiac, and smooth muscle cells, through assessment of contraction induced substrate bending. Application of multiplexed cantilever arrays provides the means to develop moderate to high-throughput protocols for assessing drug efficacy and toxicity, disease phenotype and progression, as well as neuromuscular and other cell-cell interactions. This manuscript provides the details for fabricating reliable cantilever arrays for this purpose, and the methods required to successfully culture cells on these surfaces. Further description is provided on the steps necessary to perform functional analysis of contractile cell types maintained on such arrays using a novel laser and photo-detector system. The representative data provided highlights the precision and reproducible nature of the analysis of contractile function possible using this system, as well as the wide range of studies to which such technology can be applied. Successful widespread adoption of this system could provide investigators with the means to perform rapid, low cost functional studies in vitro, leading to more accurate predictions of tissue performance, disease development and response to novel therapeutic treatment. Bioengineering, Issue 92, cantilever, in vitro, contraction, skeletal muscle, NMJ, cardiomyocytes, functional Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study Institutions: RWTH Aachen University, Fraunhofer Gesellschaft. Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. 
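Contraction-induced substrate bending of this kind is commonly converted to force with a simple beam-mechanics relation. The sketch below is illustrative only: the actual system described above uses a calibrated laser and photo-detector readout, and the end-loaded Euler-Bernoulli assumption, dimensions, and modulus here are assumptions, not values from the manuscript.

```python
# Convert a measured cantilever tip deflection to an estimated contractile
# force using the end-loaded Euler-Bernoulli beam relation F = 3*E*I*d / L^3.
# All dimensions and the modulus below are illustrative, NOT the values used
# in the cantilever arrays described above.

def cantilever_force(deflection_m, length_m, width_m, thickness_m, youngs_modulus_pa):
    """Estimated point force (N) at the free end of a rectangular cantilever."""
    second_moment = width_m * thickness_m**3 / 12.0  # I for a rectangular cross-section
    return 3.0 * youngs_modulus_pa * second_moment * deflection_m / length_m**3

# Example: a hypothetical silicon cantilever 750 um long, 100 um wide, 4 um
# thick, deflected 10 um at the tip (E_silicon ~ 170 GPa).
force = cantilever_force(10e-6, 750e-6, 100e-6, 4e-6, 170e9)
print(f"{force * 1e6:.2f} uN")
```

In practice the relation would be inverted the other way as well: a known calibration force applied to the cantilever fixes the effective stiffness, so that photo-detector voltage can be mapped directly to force without relying on nominal geometry.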
Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. Bioengineering, Issue 83, design of experiments (DoE), transient protein expression, plant-derived biopharmaceuticals, promoter, 5'UTR, fluorescent reporter protein, model building, incubation conditions, monoclonal antibody Prehospital Thrombolysis: A Manual from Berlin Institutions: Charité - Universitätsmedizin Berlin, Charité - Universitätsmedizin Berlin, Universitätsklinikum Hamburg - Eppendorf, Berliner Feuerwehr, STEMO-Consortium. In acute ischemic stroke, time from symptom onset to intervention is a decisive prognostic factor. 
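The structure of a screening design like the one described in the transient-expression study above can be sketched in a few lines. This is a minimal illustration with invented factor names and levels; the DoE software used in such studies would typically generate a fractional or D-optimal subset of runs rather than the full factorial.

```python
# A minimal full-factorial design matrix for the kinds of factors discussed
# above (expression-construct elements, plant age, incubation conditions).
# Factor names and levels are invented for illustration.
from itertools import product

factors = {
    "promoter":        ["35S", "double-35S"],
    "plant_age_days":  [35, 42, 49],
    "incubation_temp": [22, 25],
}

# Every combination of levels, one dict per experimental run.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

print(len(design))  # 2 * 3 * 2 = 12 runs in the full factorial
print(design[0])
```

Splitting the problem into modules, as the authors describe, corresponds to building several small design matrices like this instead of one combinatorial explosion over all parameters at once.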
In order to reduce this time, prehospital thrombolysis at the emergency site would be preferable. However, apart from neurological expertise and laboratory investigations, a computed tomography (CT) scan is necessary to exclude hemorrhagic stroke prior to thrombolysis. Therefore, a specialized ambulance equipped with a CT scanner and point-of-care laboratory was designed and constructed. Further, a new stroke-identifying interview algorithm was developed and implemented in the Berlin emergency medical services. Since February 2011, the identification of suspected stroke in the dispatch center of the Berlin Fire Brigade prompts the deployment of this ambulance, a stroke emergency mobile (STEMO). On arrival, a neurologist, experienced in stroke care and with additional training in emergency medicine, performs a neurological examination. If stroke is suspected, a CT scan excludes intracranial hemorrhage. The CT scans are telemetrically transmitted to the neuroradiologist on call. If the coagulation status of the patient is normal and the patient's medical history reveals no contraindication, prehospital thrombolysis is applied according to current guidelines (intravenous recombinant tissue plasminogen activator, iv rtPA, alteplase, Actilyse). Thereafter, patients are transported to the nearest hospital with a certified stroke unit for further treatment and assessment of stroke aetiology. After a pilot phase, weeks were randomized into blocks either with or without STEMO care. The primary end-point of this study is time from alarm to the initiation of thrombolysis. We hypothesized that alarm-to-treatment time can be reduced by at least 20 min compared to regular care. Medicine, Issue 81, Telemedicine, Emergency Medical Services, Stroke, Tomography, X-Ray Computed, Emergency Treatment, stroke, thrombolysis, prehospital, emergency medical services, ambulance Protein WISDOM: A Workbench for In silico De novo Design of BioMolecules Institutions: Princeton University.
The aim of de novo protein design is to find the amino acid sequences that will fold into a desired 3-dimensional structure with improvements in specific properties, such as binding affinity, agonist or antagonist behavior, or stability, relative to the native sequence. Protein design lies at the center of current advances in drug design and discovery. Not only does protein design provide predictions for potentially useful drug targets, but it also enhances our understanding of the protein folding process and protein-protein interactions. Experimental methods such as directed evolution have shown success in protein design. However, such methods are restricted by the limited sequence space that can be searched tractably. In contrast, computational design strategies allow for the screening of a much larger set of sequences covering a wide variety of properties and functionality. We have developed a range of computational de novo protein design methods capable of tackling several important areas of protein design. These include the design of monomeric proteins for increased stability and complexes for increased binding affinity. To disseminate these methods for broader use we present Protein WISDOM (http://www.proteinwisdom.org), a tool that provides automated methods for a variety of protein design problems. Structural templates are submitted to initialize the design process. The first stage of design is an optimization sequence selection stage that aims at improving stability through minimization of potential energy in the sequence space. Selected sequences are then run through a fold specificity stage and a binding affinity stage. A rank-ordered list of the sequences for each step of the process, along with relevant designed structures, provides the user with a comprehensive quantitative assessment of the design. Here we provide the details of each design method, as well as several notable experimental successes attained through the use of the methods.
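The rank-ordering idea behind the sequence selection stage described above can be illustrated with a toy scorer. The per-residue "energy" values below are invented for illustration and bear no relation to the actual potential-energy function minimized by Protein WISDOM.

```python
# Toy illustration of the rank-ordering step described above: score a set of
# candidate sequences and return them best-first. The "energy" here is a
# made-up per-residue score, NOT the pairwise potential used by Protein WISDOM.

TOY_ENERGY = {"A": -0.5, "L": -1.8, "K": 0.9, "E": 0.7, "F": -2.2, "G": 0.0}

def toy_energy(seq):
    """Lower is better, mimicking minimization of potential energy."""
    return sum(TOY_ENERGY[res] for res in seq)

candidates = ["ALKF", "GGKE", "FLLA", "AKEG"]
ranked = sorted(candidates, key=toy_energy)  # best (lowest energy) first

for seq in ranked:
    print(seq, round(toy_energy(seq), 1))
```

In the real workbench each candidate would then pass through the fold specificity and binding affinity stages, each producing its own rank-ordered list in the same spirit.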
Genetics, Issue 77, Molecular Biology, Bioengineering, Biochemistry, Biomedical Engineering, Chemical Engineering, Computational Biology, Genomics, Proteomics, Protein, Protein Binding, Computational Biology, Drug Design, optimization (mathematics), Amino Acids, Peptides, and Proteins, De novo protein and peptide design, Drug design, In silico sequence selection, Optimization, Fold specificity, Binding affinity, sequencing The Hypoxic Ischemic Encephalopathy Model of Perinatal Ischemia Institutions: Stanford University School of Medicine. Hypoxic-Ischemic Encephalopathy (HIE) is the consequence of systemic asphyxia occurring at birth. Twenty-five percent of neonates with HIE develop severe and permanent neuropsychological sequelae, including mental retardation, cerebral palsy, and epilepsy. The outcomes of HIE are devastating and permanent, making it critical to identify and develop therapeutic strategies to reduce brain injury in newborns with HIE. To that end, the neonatal rat model for hypoxic-ischemic brain injury has been developed to model this human condition. The HIE model was first validated by Vannucci et al.1 and has since been extensively used to identify mechanisms of brain injury resulting from perinatal hypoxia-ischemia2 and to test potential therapeutic interventions3,4. The HIE model is a two-step process and involves the ligation of the left common carotid artery followed by exposure to a hypoxic environment. Cerebral blood flow (CBF) in the hemisphere ipsilateral to the ligated carotid artery does not decrease because of the collateral blood flow via the circle of Willis; however, with lower oxygen tension, the CBF in the ipsilateral hemisphere decreases significantly and results in unilateral ischemic injury.
The use of 2,3,5-triphenyltetrazolium chloride (TTC) to stain and identify ischemic brain tissue was originally developed for adult models of rodent cerebral ischemia5, and is used to evaluate the extent of cerebral infarction at early time points up to 72 hours after the ischemic event6. In this video, we demonstrate the hypoxic-ischemic injury model in postnatal rat brain and the evaluation of the infarct size using TTC staining. Neuroscience, Issue 21, Hypoxic-ischemic encephalopathy (HIE), 2,3,5-triphenyltetrazolium chloride (TTC), brain infarct 2-Vessel Occlusion/Hypotension: A Rat Model of Global Brain Ischemia Institutions: Wayne State University School of Medicine. Cardiac arrest followed by resuscitation often results in dramatic brain damage caused by ischemia and subsequent reperfusion of the brain. Global brain ischemia produces damage to specific brain regions shown to be highly sensitive to ischemia1. Hippocampal neurons have higher sensitivity to ischemic insults compared to other cell populations, and specifically, the CA1 region of the hippocampus is particularly vulnerable to ischemia/reperfusion2. The design of therapeutic interventions, or study of mechanisms involved in cerebral damage, requires a model that produces damage similar to the clinical condition and in a reproducible manner. Bilateral carotid vessel occlusion with hypotension (2VOH) is a model that produces reversible forebrain ischemia, emulating the cerebral events that can occur during cardiac arrest and resuscitation. We describe a model modified from Smith et al. (1984)2, as first presented in its current form in Sanderson et al., which produces reproducible injury to selectively vulnerable brain regions3-6.
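Infarct size from serial TTC-stained sections like those described above is commonly quantified by integrating per-section areas over the slice thickness, often with an indirect (edema-corrected) measure. The sketch below uses illustrative numbers and a common correction scheme; it is not data or the exact method from either protocol.

```python
# Indirect (edema-corrected) infarct volume from serial TTC-stained sections,
# a common approach in rodent stroke models: per section, infarct is taken as
# (contralateral hemisphere area) - (non-infarcted ipsilateral area), then
# areas are integrated across sections by slice thickness. Numbers below are
# illustrative, not data from the protocols above.

SLICE_THICKNESS_MM = 2.0

# (contralateral_area_mm2, intact_ipsilateral_area_mm2) per coronal section
sections = [(42.0, 30.5), (45.0, 28.0), (44.0, 31.0), (40.0, 34.5)]

infarct_volume = sum(
    (contra - intact) * SLICE_THICKNESS_MM for contra, intact in sections
)
total_contra_volume = sum(contra for contra, _ in sections) * SLICE_THICKNESS_MM
percent_infarct = 100.0 * infarct_volume / total_contra_volume

print(f"infarct volume: {infarct_volume:.1f} mm^3 ({percent_infarct:.1f}%)")
```

The indirect measure is preferred at early time points because ipsilateral edema inflates the apparent infarct area; referencing the contralateral hemisphere cancels most of that swelling out.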
The reliability of this model is dictated by precise control of systemic blood pressure during applied hypotension, the duration of ischemia, close temperature control, a specific anesthesia regimen, and diligent post-operative care. An 8-minute ischemic insult produces cell death of CA1 hippocampal neurons that progresses over the course of 6 to 24 hr of reperfusion, while less vulnerable brain regions are spared. This progressive cell death is easily quantified after 7-14 days of reperfusion, as a near complete loss of CA1 neurons is evident at this time. In addition to this brain injury model, we present a method for CA1 damage quantification using a simple, yet thorough, methodology. Importantly, quantification can be accomplished using a simple camera-mounted microscope, and a free ImageJ (NIH) software plugin, obviating the need for cost-prohibitive stereology software programs and a motorized microscopic stage for damage assessment. Medicine, Issue 76, Biomedical Engineering, Neurobiology, Neuroscience, Immunology, Anatomy, Physiology, Cardiology, Brain Ischemia, ischemia, reperfusion, cardiac arrest, resuscitation, 2VOH, brain injury model, CA1 hippocampal neurons, brain, neuron, blood vessel, occlusion, hypotension, animal model A Research Method For Detecting Transient Myocardial Ischemia In Patients With Suspected Acute Coronary Syndrome Using Continuous ST-segment Analysis Institutions: University of Nevada, Reno, St. Joseph's Medical Center, University of Rochester Medical Center . Each year, an estimated 785,000 Americans will have a new coronary attack, or acute coronary syndrome (ACS). The pathophysiology of ACS involves rupture of an atherosclerotic plaque; hence, treatment is aimed at plaque stabilization in order to prevent cellular death. However, there is considerable debate among clinicians, about which treatment pathway is best: early invasive using percutaneous coronary intervention (PCI/stent) when indicated or a conservative approach (i.e. 
, medication only with PCI/stent if recurrent symptoms occur). There are three types of ACS: ST elevation myocardial infarction (STEMI), non-ST elevation MI (NSTEMI), and unstable angina (UA). Among the three types, NSTEMI/UA is nearly four times as common as STEMI. Treatment decisions for NSTEMI/UA are based largely on symptoms and resting or exercise electrocardiograms (ECG). However, because of the dynamic and unpredictable nature of the atherosclerotic plaque, these methods often under-detect myocardial ischemia because symptoms are unreliable, and/or continuous ECG monitoring was not utilized. Continuous 12-lead ECG monitoring, which is both inexpensive and non-invasive, can identify transient episodes of myocardial ischemia, a precursor to MI, even when asymptomatic. However, continuous 12-lead ECG monitoring is not usual hospital practice; rather, only two leads are typically monitored. Information obtained with 12-lead ECG monitoring might provide useful information for deciding the best ACS treatment. Therefore, using 12-lead ECG monitoring, the COMPARE Study was designed to assess the frequency and clinical consequences of transient myocardial ischemia in patients with NSTEMI/UA treated with either early invasive PCI/stent or those managed conservatively (medications or PCI/stent following recurrent symptoms). The purpose of this manuscript is to describe the methodology used in the COMPARE Study. Permission to proceed with this study was obtained from the Institutional Review Board of the hospital and the university. Research nurses identify hospitalized patients from the emergency department and telemetry unit with suspected ACS. Once consented, a 12-lead ECG Holter monitor is applied, and remains in place during the patient's entire hospital stay. Patients are also maintained on the routine bedside ECG monitoring system per hospital protocol.
Off-line ECG analysis is done using sophisticated software and careful human oversight. Medicine, Issue 70, Anatomy, Physiology, Cardiology, Myocardial Ischemia, Cardiovascular Diseases, Health Occupations, Health Care, transient myocardial ischemia, Acute Coronary Syndrome, electrocardiogram, ST-segment monitoring, Holter monitoring, research methodology Breathing-controlled Electrical Stimulation (BreEStim) for Management of Neuropathic Pain and Spasticity Institutions: University of Texas Health Science Center at Houston , TIRR Memorial Hermann Hospital, TIRR Memorial Hermann Hospital. Electrical stimulation (EStim) refers to the application of electrical current to muscles or nerves in order to achieve functional and therapeutic goals. It has been extensively used in various clinical settings. Based upon recent discoveries related to the systemic effects of voluntary breathing and intrinsic physiological interactions among systems during voluntary breathing, a new EStim protocol, Breathing-controlled Electrical Stimulation (BreEStim), has been developed to augment the effects of electrical stimulation. In BreEStim, a single-pulse electrical stimulus is triggered and delivered to the target area when the airflow rate of an isolated voluntary inspiration reaches the threshold. BreEStim integrates intrinsic physiological interactions that are activated during voluntary breathing and has demonstrated excellent clinical efficacy. Two representative applications of BreEStim are reported with detailed protocols: management of post-stroke finger flexor spasticity and neuropathic pain in spinal cord injury. 
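The BreEStim trigger described above, a single pulse delivered when the airflow of a voluntary inspiration reaches threshold, can be sketched as a simple state machine. The threshold value, the re-arming rule, and the synthetic airflow trace below are assumptions for illustration, not parameters from the protocol.

```python
# Sketch of the BreEStim trigger logic described above: deliver one stimulus
# pulse when inspiratory airflow first crosses a threshold, then re-arm only
# after airflow falls back below it, so each inspiration fires at most once.
# Threshold and trace are illustrative.

THRESHOLD = 0.8  # arbitrary airflow units

def breestim_triggers(airflow_samples, threshold=THRESHOLD):
    """Return sample indices at which a stimulus pulse would be delivered."""
    pulses, armed = [], True
    for i, flow in enumerate(airflow_samples):
        if armed and flow >= threshold:
            pulses.append(i)   # fire a single pulse for this inspiration
            armed = False
        elif flow < threshold:
            armed = True       # breath ended; re-arm for the next one
    return pulses

# Two synthetic inspirations: only the first supra-threshold sample fires.
trace = [0.1, 0.5, 0.9, 1.2, 0.6, 0.2, 0.3, 1.0, 1.1, 0.4]
print(breestim_triggers(trace))  # -> [2, 7]
```

Coupling the stimulus to the patient's own inspiratory effort is the point of the protocol: the pulse arrives only during active voluntary breathing, when the systemic interactions the authors describe are engaged.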
Medicine, Issue 71, Neuroscience, Neurobiology, Anatomy, Physiology, Behavior, electrical stimulation, BreEStim, electrode, voluntary breathing, respiration, inspiration, pain, neuropathic pain, pain management, spasticity, stroke, spinal cord injury, brain, central nervous system, CNS, clinical, electromyogram, neuromuscular electrical stimulation Bilateral Common Carotid Artery Occlusion as an Adequate Preconditioning Stimulus to Induce Early Ischemic Tolerance to Focal Cerebral Ischemia Institutions: Charité - Universitätsmedizin Berlin, Germany. There is accumulating evidence, that ischemic preconditioning - a non-damaging ischemic challenge to the brain - confers a transient protection to a subsequent damaging ischemic insult. We have established bilateral common carotid artery occlusion as a preconditioning stimulus to induce early ischemic tolerance to transient focal cerebral ischemia in C57Bl6/J mice. In this video, we will demonstrate the methodology used for this study. Medicine, Issue 75, Neurobiology, Anatomy, Physiology, Neuroscience, Immunology, Surgery, stroke, cerebral ischemia, ischemic preconditioning, ischemic tolerance, IT, ischemic stroke, middle cerebral artery occlusion, MCAO, bilateral common carotid artery occlusion, BCCAO, brain, ischemia, occlusion, reperfusion, mice, animal model, surgical techniques Epidural Intracranial Pressure Measurement in Rats Using a Fiber-optic Pressure Transducer Institutions: The University of Newcastle. Elevated intracranial pressure (ICP) is a significant problem in several forms of ischemic brain injury including stroke, traumatic brain injury and cardiac arrest. This elevation may result in further neurological injury, in the form of transtentorial herniation1,2,3,4 , midbrain compression, neurological deficit or increased cerebral infarct2,4 . Current therapies are often inadequate to control elevated ICP in the clinical setting5,6,7 . 
Thus there is a need for accurate methods of ICP measurement in animal models to further our understanding of the basic mechanisms and to develop new treatments for elevated ICP. In both the clinical and experimental setting ICP cannot be estimated without direct measurement. Several methods of ICP catheter insertion currently exist. Of these the intraventricular catheter has become the clinical 'gold standard' of ICP measurement in humans8 . This method involves the partial removal of skull and the instrumentation of the catheter through brain tissue. Consequently, intraventricular catheters have an infection rate of 6-11%9 . For this reason, subdural and epidural cannulations have become the preferred methods in animal models of ischemic injury. Various ICP measurement techniques have been adapted for animal models, and of these, fluid-filled telemetry catheters10 and solid state catheters are the most frequently used11,12,13,14,15 . The fluid-filled systems are prone to developing air bubbles in the line, resulting in false ICP readings. Solid state probes avoid this problem (Figure 1 ). An additional problem is fitting catheters under the skull or into the ventricles without causing any brain injury that might alter the experimental outcomes. Therefore, we have developed a method that places an ICP catheter contiguous with the epidural space, but avoids the need to insert it between skull and brain. An optic fibre pressure catheter (420LP, SAMBA Sensors, Sweden) was used to measure ICP at the epidural location because the location of the pressure sensor (at the very tip of the catheter) was found to produce a high fidelity ICP signal in this model. There are other manufacturers of similar optic fibre technologies13 that may be used with our methodology. 
Alternative solid state catheters, which have the pressure sensor located at the side of the catheter tip, would not be appropriate for this model as the signal would be dampened by the presence of the monitoring screw. Here, we present a relatively simple and accurate method to measure ICP. This method can be used across a wide range of ICP-related animal models. Medicine, Issue 62, Neuroscience, brain, rat, intracranial pressure, epidural, fibre-optic transducer, ischemic injury The Application Of Permanent Middle Cerebral Artery Ligation in the Mouse Institutions: University of Rochester, University of Alabama at Birmingham, University of Rochester. Focal cerebral ischemia is among the most common types of stroke seen in patients. Due to its clinical significance, there has been a prolonged effort to develop suitable animal models to study the events that unfold during ischemic insult. These techniques include transient or permanent, focal or global ischemia models using many different animal models, with the most common being rodents. The permanent MCA ligation method, which is also referred to as pMCAO in the literature, is used extensively as a focal ischemia model in rodents1-6. This method was originally described for rats by Tamura et al. in 19817. In this protocol a craniotomy was used to access the MCA and the proximal regions were occluded by electrocoagulation. The infarcts involve mostly cortical and sometimes striatal regions depending on the location of the occlusion. This technique is now well established and used in many laboratories8-13. Early use of this technique led to the definition and description of "infarct core" and "penumbra"14-16, and it is often used to evaluate potential neuroprotective compounds10,12,13,17. Although the initial studies were performed in rats, permanent MCA ligation has been used successfully in mice with slight modifications18-20. This model yields reproducible infarcts and increased post-survival rates.
Approximately 80% of the ischemic strokes in humans happen in the MCA area21, and thus this is a very relevant model for stroke studies. Currently, there is a paucity of effective treatments available to stroke patients, and thus there is a need for good models to test potential pharmacological compounds and evaluate physiological outcomes. This method can also be used for studying intracellular hypoxia response mechanisms in vivo. Here, we present the MCA ligation surgery in a C57BL/6 mouse. We describe the pre-surgical preparation, MCA ligation surgery and 2,3,5-triphenyltetrazolium chloride (TTC) staining for quantification of infarct volumes. Medicine, Issue 53, brain, stroke, mouse, middle cerebral artery ligation Modeling Stroke in Mice - Middle Cerebral Artery Occlusion with the Filament Model Institutions: Center for Stroke Research Berlin, Charité Universitätsmedizin. Stroke is among the most frequent causes of death and adult disability, especially in highly developed countries. However, treatment options to date are very limited. To meet the need for novel therapeutic approaches, experimental stroke research frequently employs rodent models of focal cerebral ischaemia. Most researchers use permanent or transient occlusion of the middle cerebral artery (MCA) in mice or rats. Proximal occlusion of the MCA via the intraluminal suture technique (the so-called filament or suture model) is probably the most frequently used model in experimental stroke research. The intraluminal MCAO model offers the advantage of inducing reproducible transient or permanent ischaemia of the MCA territory in a relatively non-invasive manner. Intraluminal approaches interrupt the blood flow of the entire territory of this artery. Filament occlusion thus arrests flow proximal to the lenticulo-striate arteries, which supply the basal ganglia.
Filament occlusion of the MCA results in reproducible lesions in the cortex and striatum and can be either permanent or transient. In contrast, models inducing distal (to the branching of the lenticulo-striate arteries) MCA occlusion typically spare the striatum and primarily involve the neocortex. In addition, these models require a craniectomy. In the model demonstrated in this article, a silicone-coated filament is introduced into the common carotid artery and advanced along the internal carotid artery into the Circle of Willis, where it blocks the origin of the middle cerebral artery. In patients, occlusions of the middle cerebral artery are among the most common causes of ischaemic stroke. Since varying ischaemic intervals can be chosen freely in this model depending on the time point of reperfusion, ischaemic lesions with varying degrees of severity can be produced. Reperfusion by removal of the occluding filament at least partially models the restoration of blood flow after spontaneous or therapeutic (tPA) lysis of a thromboembolic clot in humans. In this video we will present the basic technique as well as the major pitfalls and confounders which may limit the predictive value of this model. Medicine, Issue 47, Stroke, middle cerebral artery occlusion, MCAo, animal model, mouse, techniques Ischemic Tissue Injury in the Dorsal Skinfold Chamber of the Mouse: A Skin Flap Model to Investigate Acute Persistent Ischemia Institutions: Technische Universität München, University Hospital of Basel, University of Saarland, University Hospital Zurich. Despite profound expertise and advanced surgical techniques, ischemia-induced complications ranging from wound breakdown to extensive tissue necrosis are still occurring, particularly in reconstructive flap surgery. Multiple experimental flap models have been developed to analyze underlying causes and mechanisms and to investigate treatment strategies to prevent ischemic complications.
The limiting factor of most models is that they do not permit direct and repeated visualization of microvascular architecture and hemodynamics. The goal of this protocol is to present a well-established mouse model that provides these missing elements. Harder et al. have developed a model of a musculocutaneous flap with a random perfusion pattern that undergoes acute persistent ischemia and results in ~50% necrosis after 10 days if left untreated. With the aid of intravital epi-fluorescence microscopy, this chamber model allows repeated visualization of morphology and hemodynamics in different regions of interest over time. Associated processes such as apoptosis, inflammation, microvascular leakage and angiogenesis can be investigated and correlated to immunohistochemical and molecular protein assays. To date, the model has proven feasible and reproducible in several published experimental studies investigating the effect of pre-, peri- and postconditioning of ischemically challenged tissue. Medicine, Issue 93, flap, ischemia, microcirculation, angiogenesis, skin, necrosis, inflammation, apoptosis, preconditioning, persistent ischemia, in vivo model, muscle.
Citrix XenServer 6.0 is on its way. As Citrix Systems prepares to host its Summit partner conference and Synergy user conference later this month, the company will make the beta version of XenServer 6.0 available next week, according to a tweet from its product marketing team and a company blog post. "The beta release of XenServer 6.0 is currently scheduled for mid May," Citrix engineer Deane Smith wrote.
XenServer 6.0 vs. VMware
The XenServer 6.0 news comes as Citrix continues to carve out a niche in the server virtualization market. XenServer is a popular platform for the company's XenApp and XenDesktop products, as well as other specific use cases. An executive with a Citrix partner on the East Coast said he expects XenServer 6.0 will add features that will make it easier to move and manage workloads across private and public cloud computing infrastructures. The message from Citrix should be, "(Cloud) is a force that's going to be around for a while," this partner said. "And the better you play with that force, it's better from our standpoint and I think from the customer standpoint. It makes it easier for everybody." But some users say it may be too late for Citrix to make any real gains against VMware. "They made some really big strides in version 5, especially around networking and storage, that have really started to make me think I can start looking at it a bit more seriously as my full-time hypervisor," said Phil DeMeyer, a systems administrator with a Midwestern preschool program that runs some XenServer but mostly VMware. "But I'm really comfortable with VMware and where I'm at with that, and frankly I don't have a lot of trouble.
… I guess I have more reasons to stay with VMware than I do to immediately go jump on XenServer, no matter what they do." DeMeyer's department actually runs XenApp, Citrix's application virtualization software, on VMware and uses XenServer for some specific virtual appliances that "just seem to run better" on that platform, he said. "Hopefully now XenServer, if they make a full version jump, they'll be maybe considered a bit more mature, and people will start to take it more seriously," he said. Unfortunately for Citrix, some organizations may not give XenServer a second chance. That was the case at a XenApp and XenDesktop shop in the Midwest. "The business liked (XenServer) but did not understand that it was not just a new product that Citrix came up with overnight, but something that had been around for a long time," said an IT professional at this organization. "The idea was, 'Well, VMware has been doing this for long enough that we are comfortable with them. Let Citrix iron out the bugs and we can take a look at it in a few years.'" Executives chose VMware over XenServer and haven't looked back, the IT pro said. But the East Coast Citrix partner said there are opportunities for XenServer to make strides against VMware. He cited the high cost of many advanced VMware features, such as Site Recovery Manager, that smaller shops just don't need. "VMware has become so bloated with features … that it makes no sense," he said. "What are you paying for? What do you need to accomplish?" This partner also said Citrix needs to be more aggressive in the server virtualization market and be less afraid of stepping on Microsoft's toes. "If they grow a pair and … come out and attack the marketplace, it'll make an absolute difference," he said.
XenServer 5.6 Feature Pack 2
Meanwhile, Citrix this week also released XenServer 5.6 Feature Pack 2, which brings changes to some of its advanced features, making them easier for admins to deploy and configure.
A project manager with a Citrix partner in the Southeast said the goal with these changes is to make XenServer feel more like an integrated product and less like a hypervisor that you have to add a bunch of extras onto. "They added some complexity, but now they're smoothing it back out," he said. One of the changes has to do with StorageLink Gateway, which allows XenServer to make storage calls and offload storage management to the array controller. StorageLink Gateway currently has several different components and its own graphical user interface, and it requires separate configuration. In XenServer 5.6 Feature Pack 2, StorageLink Gateway will simply be part of the server configuration process, accessible via a drop-down menu. In addition, Citrix is changing the deployment model for XenServer Workload Balancing, Smith wrote on The Citrix Blog. Workload Balancing will be available as a Linux-based virtual appliance that will eliminate the need for a separate installer or separate licensing. To get the new Workload Balancing up and running, users will only have to "download the virtual appliance from Citrix.com, import it to a XenServer pool, start the appliance and answer some basic questions," Smith wrote. Citrix Synergy takes place May 25-27 in San Francisco, preceded by the Citrix Summit partner conference May 23-24.
Dear Fellow Members, It has been a fulfilling journey so far. IACC has embarked on the daunting task of taking Indo US trade from $100 billion to $500 billion. With your support, IACC is successfully marching ahead in catalyzing the same through organizing major events on key sectoral issues across the country & continuous interactions with policy makers at both national and state levels. As India is poised to achieve a GDP growth rate of more than 8% in the coming year, IACC has a major responsibility of driving the growth momentum & streamlining the Indo US economic partnership. This Chamber is exclusively devoted to the cause of improving Indo-US economic relations. That is our raison d'être. I am delighted to speak about two major events organised by the Chamber recently: the Annual Convention 2016, with the theme "Unleashing US – India Economic Synergy", on 22nd & 23rd August 2016 in Mumbai, and the 12th Indo-US Economic Summit on 14th & 15th September 2016 in Delhi. The success of both of these mega events will go a long way for the IACC. IACC Annual Convention "Unleashing US – India Economic Synergy" – August 22nd & 23rd, 2016, Mumbai The IACC Annual Convention is a flagship program where eminent thought leaders, policy makers as well as industry leaders find a suitable platform to discuss the emerging opportunities the two largest democracies could offer to the world. IACC’s Annual Convention reflected on the theme "Unleashing US – India Economic Synergy". The panel discussions revolved around various sectors and subjects: Infrastructure: Ports & Logistics, Defence & Aerospace, Oil & Gas, Dedicated Freight Corridor, Tourism, Digital India, Skill Development, Banking & Financial Services and Energy. Some of the eminent policy makers who addressed the Annual Convention this year were Mr Nitin Gadkari, Hon’ble Minister, Road, Transport and Highways & Shipping; Mr Suresh Prabhu, Hon’ble Minister, Railways, Govt.
of India; Mr Piyush Goyal, Hon’ble Minister, Power, Coal, New and Renewable Energy and Mines, Govt. of India & Dr. Sudhanshu Trivedi, National Spokesperson, BJP. For the first time we gave the Lifetime Achievement Award for Global Leadership at the Annual Convention. This year the award went to the late Aditya Vikram Birla. Shri Vidyasagar Rao, Hon’ble Governor of Maharashtra, kindly consented to give this award and grace the Convention. Ms. Rajashree Birla received the award on behalf of the late Mr. Aditya Vikram Birla. The two-day conference was very well received and has been of immense value to our members and the business community. It was widely covered by the media. Below is the link to all the media coverage for your reference. 12th Indo US Economic Summit, 14th & 15th September, 2016, New Delhi The 12th Indo US Economic Summit is another flagship event of the Chamber that focuses on contemporary themes that play a defining role in shaping bilateral relations & showcasing technical sessions on crucial sectors of Indo-US engagement. The 12th Summit deliberated on augmenting the existing two-way trade to $500 billion and emphasized sectors that foster growth. The Summit was addressed by Chief Guest Mr. Venkaiah Naidu, Union Minister of Urban Development, Housing and Urban Poverty Alleviation and Information & Broadcasting, Govt. of India, and H.E. Mr Richard Verma, US Ambassador to India, with keynote addresses by Ms. Harsimrat Kaur Badal, Hon’ble Minister for Food Processing Industries, Govt. of India, and Mr. P. Chaudhary, Union Minister of State, Ministry of Law and Justice and Ministry of Electronics and Information Technology. Our revenues of Rs. 9.35 crores in FY 2015-16 were the highest in the 48-year history of the Chamber. The number of our major seminars this year has also tripled. It is my sincere hope and belief that our Chamber can grow from strength to strength, in terms of both the quality and breadth of our services.
This Chamber can be a catalyst in accelerating Indo-US economic relations. This is only possible with your continued support and active involvement in the Chamber’s initiatives. The 47th Annual General Meeting of the Chamber is to be held on Saturday, 22nd October, 2016 at 11.00 a.m. at The Taj Mahal Palace, Apollo Bunder, Mumbai, Maharashtra 400001. The Annual Report has already been uploaded on the IACC website for your reference. The link is given below: Wishing you all a happy Dussehra and Diwali in advance. Dr. Lalit Kanodia Ph. D. (MIT, USA) Indo-American Chamber of Commerce
Stem cell backers focus on defeating anti-cloning bill February 10, 2005 SAN FRANCISCO – The campaign committee that spent $35 million last year backing California's $3 billion stem cell initiative plans to use its fund-raising prowess to fight a federal bill seeking to ban human cloning. Robert Klein II, the wealthy Palo Alto housing developer who led the campaign, said the organization intends to raise $1 million to fight Sen. Sam Brownback, R-Kansas, and his anti-cloning supporters in the Senate. A spokesman for Brownback declined to comment on the issue. Brownback is sponsoring legislation that would ban the cloning of human embryos for any reason. Such a law would directly threaten the California Institute for Regenerative Medicine, which intends to use some of the $3 billion in voter-approved bond money to fund grants for so-called therapeutic cloning projects. Some stem cell researchers say cloning human embryos in petri dishes will help them better understand diseases. Cloning also could offer a way to avoid immune-system rejection after transplanting replacement tissue in sick people. The scientists universally oppose cloning to create babies. Many abortion opponents and other conservatives view the work as immoral, regardless of its purpose. "This is clearly going to be a major battle this year," said Larry Soler, the Washington, D.C.-based chief lobbyist for the Juvenile Diabetes Research Foundation. The foundation is part of a Washington-based coalition of nonprofit organizations supporting stem cell research. The coalition has been battling various anti-cloning proposals almost since it formed in 2001. Two similar bills have passed in the House but have been stalled in the Senate.
Klein also is a board member of the Juvenile Diabetes Research Foundation and is interim president of the California stem cell institute. The campaign committee supported Proposition 71, which passed with 59 percent of the vote in November and created the California stem cell institute. By Paul Elias Heart stem cells found in newborns February 10, 2005 UCSD researchers have found heart stem cells in human newborns, rats and mice, a discovery that could lead to novel treatments for pediatric cardiac disease, according to a report published today. Called isl1+ cells, these cardiac stem cells were able to grow in the lab into fully functioning heart cells. The UCSD scientists describe their findings in the journal Nature. Scientists had thought the cells were absent after birth, but the team at the University of California San Diego said it located a small number of them within an atrium of the heart. In the lab, they were able to take a few of the cells and multiply them into millions of mature cardiac stem cells. The study suggests that isl1+ cells could be harvested from an individual's heart tissue, multiplied in a laboratory setting and implanted into the patient. Researchers identified the isl1+ cells in the tissue of newborn rats and mice, and then in heart tissue taken from five human newborns undergoing surgery for congenital heart defects. In lab experiments with the tissue, they found that isl1+ cells are programmed to become heart-muscle cells that begin to beat once they are exposed to other heart cells. These stem cells are found in regions of the atrium typically discarded during heart surgery. So the discovery means that people might be able to receive their own cardiac stem cells to correct a spectrum of pediatric heart diseases, said UCSD researchers Alessandra Moretti and Karl-Ludwig Laugwitz, a Heisenberg scholar of the German Research Foundation. Both were co-authors of today's report.
Currently, pediatric cardiologists and cardiac surgeons rely on mechanical devices, human and synthetic tissue grafts and artificial and animal-derived valves to surgically repair heart defects, said Kenneth Chien, director of the UCSD Institute of Molecular Medicine. The stem cells the UCSD researchers identified "won't grow a whole new heart, (but) our research has shown that they can spontaneously become cells from specific parts of the heart by simple co-exposure to other heart cells," Chien said. Such cells, once implanted, might help infants born with heart defects maintain normal heartbeats, he added. "We think that these cells normally play an important role in the remodeling of the heart after birth, when the newborn heart no longer relies upon the mother's circulation and oxygenation," Chien said. "We believe the isl1+ progenitor cells are left over from fetal development so that they can ensure the closure of any existing small heart defects and the formation of a completely mature heart in newborns." Chien and his team plan to transplant the cells into animals to study their role in repairing injured hearts. By Bruce Lieberman Infusion of Young Blood Revives Old Muscles By Robert Roy Britt LiveScience Senior Writer posted: 16 February, 2005 1 p.m. ET Old and tired muscles might repair themselves just fine if it weren't for the old blood running through the aging human body, a new study shows. It's not quite a recipe for the fountain of youth, but the work could lead to methods for healing wounds in the elderly and tackling some diseases. Stanford neurologist Thomas Rando knew from previous work that aging muscles seemed to have all the ingredients to repair themselves but for some reason did not. In the new work, his team focused on specialized stem cells in muscle tissue. Stem cells of various kinds are able to make new tissue, so they are key to the body's ability to replenish damaged skin, bone, muscle and more. 
Satellite muscle stem cells lie dormant when not needed. In young bodies, both of humans and rodents, these cells come alive when muscle is damaged. In older mice, the study found, satellite stem cells appear normal but don't do their job. In a test not for the squeamish, Rando and his colleagues fused the blood supplies of younger and older mice. Then they damaged the muscles in the older rodents by zapping spots with dry ice. The muscles healed, thanks to the young blood coursing through their bodies. In a similar test, the livers of older mice responded better to the infusion of young blood, too. The researchers suspect a similar process might occur with other body cells. The key to the process appears to involve a protein called delta. When muscles are damaged, satellite muscle cells produce more delta. But in older cells, the delta production doesn't rise. In old mice with young blood, delta again rises in response to injury. The research demonstrates that "the decline of tissue regenerative potential with age can be reversed through the modulation of systemic factors," the scientists write in the Feb. 17 issue of the journal Nature. It might seem like the fabled fountain of youth is a sanguine river running through you. But staying forever young is not so simple. "Basically, the main implications of our findings are not longer life, not a reversal of aging per se, and not really even a delay of the aging process," Rando told LiveScience. "Our findings really speak more to the issue of tissue repair, for example in the setting of an acute injury." Eventually, the research could lead to techniques that would help an older person with a broken bone, a skin wound, or a muscle injury. "Tissue repair in older people could be enhanced so that it approaches that of younger people," Rando said. The idea is to restore function "not so much to a 'youthful' level but rather back to the point that the person was before the injury."
But there is much work to be done before even that is possible. It's not known what triggers the extra delta production in the muscle. It could be any of the thousands of proteins, lipids, sugars or other molecules in the blood, the researchers caution. And for now, there's no evidence that the same trigger works for repairing different types of tissues. "It's as big a fishing expedition as you can possibly imagine," Rando said. A personal comment on the above: It's actually the younger stem cells, HSCs and MSCs, contained in the blood that do the repairs. I should introduce Thomas Rando to Eric Kool's work. Surprisingly, they are both at Stanford University. Petitions challenge stem-cell initiative's constitutionality February 23, 2005 Two petitions seeking to immediately stop California's stem-cell initiative were filed with the state Supreme Court yesterday by groups linked to politically conservative individuals who campaigned against the effort. The petitions raise constitutional questions about the control of taxpayer funds and whether the initiative is too broad in the powers it bestows upon nonelected officials who will distribute $300 million annually in research grants. Many anti-abortion groups oppose embryonic stem-cell research because it requires the destruction of embryos. But the groups stayed away from those issues in their petitions, just as they did in the campaign. The stem-cell initiative known as Proposition 71, which created the California Institute for Regenerative Medicine, was passed by 59 percent of voters after supporters waged a $34 million campaign. The Life Legal Defense Foundation, a lawyers' group that opposes abortion and assisted suicide, wrote one of the petitions, which alleges that Proposition 71 violates Article 16 of the state constitution because the $3 billion in research funding it authorizes would not be controlled by the Legislature.
The petition the foundation filed yesterday lists two other groups, People's Advocate and the National Tax Limitation Foundation, as the plaintiffs. The groups took the unusual step of filing directly with the Supreme Court rather than Superior Courts because time is of the essence, said Dana Cody, a lawyer with the Life Legal Defense Foundation. Robert Klein, chairman of the oversight committee that will distribute the research funds, wants to make the first grant awards by May. The groups seek to stop the state from selling bonds to fund the grants while the court weighs whether the initiative is constitutional. The second petition was filed by a newly formed nonprofit group called Californians for Public Accountability and Ethical Science. It alleges that provisions in Proposition 71 that exempt members of the oversight committee from some conflict-of-interest laws are illegal. David Llewellyn, the Sacramento attorney representing the plaintiffs, identified two of the people behind the new nonprofit: Dr. Vincent Fortanasce, who was president of the No on 71 campaign, and Joni Eareckson Tada, a paraplegic who founded Joni and Friends Ministries in Agoura Hills. Officials on both sides of the petitions agree that some of the legal issues raised are very similar to those raised by Big Tobacco when it tried to stop Proposition 10, an initiative that placed a 50-cent tax on cigarettes. In that case, tobacco's legal claims were found to be invalid, said Julie Bruckner, a spokeswoman for Proposition 71. "Nearly 60 percent of the electorate voted in favor of Proposition 71, with the clear mission to find treatments and cures for disease," Bruckner said. "The same voters felt comfortable that there was ample oversight and public accountability built into the initiative. Those voters now have every right to expect the work of this institute to proceed." By Terri Somers Criteria for stem cell institute revised. Bay Area isn't shoo-in for site. 
February 25, 2005 Putting the state's new stem cell institute in the San Francisco Bay Area is not a done deal. Members of the committee overseeing the initiative saw to that yesterday. Committee members were presented with a proposal of the requirements that would be given to the cities – including San Diego – vying to become home to the institute. A subcommittee charged with selecting the site met by conference call, and many of its members said the criteria seemed to be skewed to favor one area over another, and proceeded to cut the controversial items. "I know Fresno isn't in the running, but please don't push it in our face," Dr. Phyllis Preciado, a committee member from that area, said after reading some of the requirements that were later removed. "We had to start somewhere," Committee Chairman Robert Klein said in defense of the criteria he developed with the help of the state Department of General Services. Biotechnology and economic development people around the state have said the Bay Area has a definite edge on being selected as home to the institute because Klein lives there. He's hired about six institute staff members so far, and most live in the Bay Area. Klein, who helped write the stem cell initiative, has repeatedly insisted the site selection process will be open and fair. Dr. Richard Murphy, president of the Salk Institute, questioned Klein about his proposal that the site be in a region that employs more than 25,000 people in biomedical research. That number would not include people who work in medical device companies. "One could interpret this to be designed to fit the strengths of one applicant over another," Murphy said. Several of San Diego's largest and most successful biotechnology companies develop medical devices. Locating the institute in an area with a dense and talented work force in the biomedical area is important to attract the most talented person to be its president, Klein said. The committee opted to drop the requirement.
Instead, applicants will be advised to explain how rich their area is in biomedical research, including the number of employees in the sector, researchers and their level of education. Klein also defended his proposal to locate the site within two hours total travel time of Sacramento because he anticipates institute staffers will be called to meet with legislators and state officials several times a month. The committee voted to remove that as a requirement, and add it to the list of items that might give one site preference over others. A revised version of the requirements, completed at the subcommittee hearing yesterday, will be issued Monday. Sites vying to be home to the institute must submit their proposals by 5 p.m. March 16. The committee is seeking a minimum of 17,000 square feet of office space with two conference rooms that can each seat 50 people. It is seeking free rent, or nominal monthly payment for the first seven years. The subcommittee charged with selecting the site will whittle the list to four or five sites March 25. After visiting the sites, the subcommittee will select its first and second choices April 22. Those choices will be presented to the entire 29-member committee May 6. By Terri Somers Stem Cell Central? With its concentration of scientific know-how, San Diego looks to become world headquarters for stem cell research... By Eilene Zimmerman San Diego Magazine 'Safer' stem cells bring therapies closer. Completely fresh supplies of human embryonic stem cells have been created for the first time without having to grow them on potentially contaminating mouse "feeder" cells. Nor do they need to be nourished with serum derived from animals. The breakthrough boosts the prospects of growing safe tissues for transplant from embryonic stem cells - the unspecialised, primitive cells in the embryo from which all tissues originate.
"The ability to generate new stem cell lines in completely [mouse-]cell-free and serum-free conditions solves a major problem associated with the use of stem cells in the treatment of human medical conditions." Recent experiments showed that all lines of human embryonic stem cells grown on a scaffolding of mouse feeder cells may be potentially contaminated with animal substances and therefore unsafe for treatment. This includes the cell lines approved by President George W Bush in 2001, to which federally funded US scientists are restricted. Stem cell institute leader in the hot seat By Terri Somers March 10, 2005 SACRAMENTO – The new president of California's fledgling stem cell institute cordially took a grilling yesterday from a joint legislative committee that wants to wrest some control over the $3 billion the institute will spend over the next decade. State Sen. Deborah Ortiz, who presided over the five-hour hearing, asked institute President Zach Hall to promise that the citizens committee overseeing the initiative would consider adopting a policy that any drugs or therapies that result from the state's research efforts be made affordable and available to all Californians. Ortiz and other legislators also criticized the oversight committee's plan to hold some working group meetings in private. And they debated protections for women who donate their eggs for research. But all the legislators could really do was debate and ask for promises. The elected officials have no direct power over the stem cell institute, its funding or its oversight committee. Proposition 71, the stem cell initiative, was specifically written to avoid meddling by the Legislature for the first three years. But Ortiz is trying to work around that. She wants to hold at least two more public hearings with the stem cell research officials before she leaves office in two years.
Such hearings, she said, help keep oversight committee members accountable and give them guidance and direction when setting policy. If something goes wrong and the people of California don't like what is happening with the stem cell institute, Ortiz said, they will hold the Legislature accountable. "The average person doesn't distinguish between the (stem cell research oversight committee) and the Legislature," she said. Ortiz was a vocal backer of Proposition 71, which received 59 percent voter approval. She introduced legislation in December, before the oversight committee held its first meeting, that seeks to resolve what she sees as some of the initiative's policy problems. Those issues were debated yesterday. Robert Klein, chairman of the oversight committee and an author of Proposition 71, has assured Ortiz that the committee would address her policy concerns, if it was just given the time. Ortiz had planned to grill Klein on the committee's progress yesterday. But Klein instead sent Hall, who was hired a week ago. And Hall calmly took one for the team. He vowed that all decisions of the oversight committee would be made in public, including who will get the $300 million annually that it will give in grants. When Vince Brown, chief operating officer in the state controller's office, testified yesterday about performing audits of the stem cell institute, Ortiz tried to get him to pledge his boss's endorsement of her bill. Brown said he'd have to talk to his boss, Steve Westly. Ortiz said she also sees other options for controlling the research dollars. It might be possible to pass legislation that skirts Proposition 71, she said. For instance, she said, Senate lawyers said a law establishing protections for egg donors might pass muster because it would include more than just women involved with stem cell research. Scientists say Connecticut needs $100 million to compete for stem cell work By Matt Apuzzo March 15, 2005 NEW HAVEN, Conn.
– Connecticut needs to commit $100 million for stem cell research if it wants to compete for the best researchers in the pioneering field, scientists from Yale and the University of Connecticut said. That's five times more than the $20 million, two-year plan proposed by Gov. M. Jodi Rell, but two of the state's top scientists said researchers won't choose Connecticut without a long-term commitment, especially with California pumping $3 billion into stem cell research. "We think we're going to need something like $100 million over 10 years, minimum," Dr. Robert J. Alpern, dean of the Yale School of Medicine, said last week. "Everybody understands it's going to take a bigger commitment." Yale has already begun a worldwide search for a scientist to lead a new stem cell research group and the University of Connecticut is considering expanding its Center for Regenerative Biology, one of the country's leading cloning laboratories. Scientists worldwide are eager to dive into the field because certain stem cells can morph into all cell types found in the body. Some scientists believe that stem cells can be used to repair damaged tissue, replace entire organs and cure diseases such as diabetes, Parkinson's and Alzheimer's disease. Opponents of embryonic stem cell research say it's unethical to use cells from human embryos or fetuses and say there's no evidence it will provide scientific breakthroughs. Once funding begins, they say, scientists will continue to demand more money. "It'll be just a little bit more, just a little bit more," said Tony Perkins, president of the Washington-based Family Research Council. "And at that point there will be billions wasted." California voters recently approved $3 billion for stem cell research and many scientists expect the state will attract some of the top minds in the field. With New York, New Jersey, Illinois and other states also considering funding research, competition for the world's top scientists will be fierce.
"They're going to have to be convinced that there's a high likelihood of long-term money," Alpern said. "You're always at the whim of governments changing their minds." Two lawmakers push for stem cell accountability By Bill Ainsworth March 17, 2005 SACRAMENTO – A bipartisan pair of legislators introduced a constitutional amendment yesterday that would apply open meeting, financial disclosure and conflict-of-interest laws to the agency that administers stem cell research grants under Proposition 71. State Sens. Deborah Ortiz, D-Sacramento, and George Runner, R-Lancaster, said they are aiming to protect taxpayer money and make the organizations created to oversee the state's new push for stem cell research more accountable. Besides the constitutional amendment, they are backing bills that would require audits of the new stem cell organization and seek to protect women's health by applying a three-year moratorium on multiple egg donations for research. Ortiz, who favored the ballot measure, said the new bills will maintain public confidence and the state's financial investment in the research. "These measures will uphold our promise to the public that Proposition 71 is implemented in an open, thoughtful and deliberative manner," she said. Runner, who opposed Proposition 71, said the package will clarify the proposition. "Unfortunately, there were many parts of Proposition 71 that were left cloudy," he said. The ballot measure, approved by voters in November, authorizes the state to spend $3 billion on grants in an area of science that proponents hope will lead to cures or treatments for diseases. Critics have worried that Proposition 71 doesn't have sufficient safeguards to guarantee that taxpayer money would be well spent. Two lawsuits against the new stem cell research organization contend that Proposition 71 illegally exempts agency members from conflict-of-interest laws. 
The state Attorney General's Office said the stem cell agency will be unable to sell bonds while the lawsuits are pending. Attorney General Bill Lockyer has asked the California Supreme Court to take jurisdiction of the lawsuits to expedite the cases. The new constitutional amendment would require a variety of laws that cover other government agencies to apply to the organizations created by Proposition 71, including the Independent Citizens' Oversight Committee, which operates the new California Institute for Regenerative Medicine, and the oversight committee's working groups, which will help decide who gets the grants. The amendment must win the votes of two-thirds of the Legislature and then win approval from voters. Robert Klein, chairman of the Independent Citizens' Oversight Committee, in a statement with Vice Chairman Ed Penhoet, implied that the proposed package of legislation wasn't necessary. The oversight committee, he said, is already moving ahead with significant conflict-of-interest rules to apply to advisers and staff. Furthermore, he said, federal rules are already in place to protect patients. Klein disputed the notion that Proposition 71 doesn't have sufficient protections. When they approved Proposition 71, Klein said, voters were expressing confidence that the initiative "contains sufficient governance, oversight and accountability mechanisms to address the very same issues Senators Ortiz and Runner discussed today." The measure was written to sidestep the Legislature by placing power into the hands of a 29-member board of directors composed of leaders in academia, research and business. San Diego makes proposal for research headquarters By Terri Somers March 17, 2005 San Diego yesterday offered more than 17,000 square feet of rent-free office space overlooking the Torrey Pines Golf Course and surrounded by premier research institutes as the headquarters for the state's new stem cell institute. 
San Diego was one of at least six cities that submitted proposals to the state yesterday. Some did it with a flourish in front of the media. Others did it quietly, even somewhat secretly. All are hoping to be chosen as the hometown for the California Institute for Regenerative Medicine, which will administer $3 billion in stem cell research grants over the next decade. A committee overseeing the implementation of the stem cell effort known as Proposition 71 will hold public hearings on the proposals before making a selection May 6. Although the institute will employ no more than 50 people, it is expected to bring its host city prestige that could ultimately attract millions of dollars of investments in biomedical research and companies. During the past month, a team that included representatives from San Diego city government, the San Diego Regional Economic Development Corp., trade groups and private industry spent excruciatingly long hours putting together the proposal submitted yesterday. Much the same scenario was playing out in cities such as San Francisco, San Jose, Emeryville, Los Angeles and Sacramento. "We just felt that this was a very important statement to make," Mayor Dick Murphy said. "San Diego is the life-sciences center of America, and we are going to do everything we can to keep us on the map." He called the city's proposal a "remarkable effort in philanthropy." San Diego defense contractor SAIC helped guide the team putting together the proposal. The company has experience submitting large proposals to the government and also administers the National Cancer Institute, said Julie Meier Wright, president of the San Diego Regional Economic Development Corp. The 60-plus-page report includes photos of the office space and surrounding area, as well as maps showing the concentration of research institutes and biotechnology companies on Torrey Pines Mesa. 
Slough, a real estate company that said it will have no connection to anyone eventually applying for stem cell research grants, agreed to donate the glass-walled, one-story office space. Slough would provide free rent, utilities, upgrades and janitorial services for the full 10-year lease. The building on Torrey Pines Road is surrounded by biotechnology companies and is within a short drive of the Scripps, Salk and Burnham institutes and the University of California San Diego. The committee overseeing the implementation of the stem cell initiative was seeking free rent for at least the first four years of the lease. San Jose, San Francisco and Emeryville also submitted proposals offering free rent for the entire term of the lease. San Jose Mayor Ron Gonzalez held a news conference on the Capitol steps yesterday to announce that the plan he submitted offers two possible sites for the oversight committee's consideration – one near the airport and the other downtown. Los Angeles offered a site in a downtown high-rise, where rent would be free for the first four years and negotiable thereafter. San Francisco did not reveal the details of its proposal yesterday. The state Department of General Services did not release a list of all cities that submitted proposals. It is keeping all proposals under wraps. In response, oversight committee Chairman Robert Klein and Vice Chairman Edward Penhoet released a joint statement encouraging bidders to post their proposals on their respective Web sites to foster full and open public access. All the proposals that have been disclosed so far offer amenities ranging from free or discounted hotel rooms for people visiting the institute on business to free office furniture and discount rental cars. The Los Angeles proposal offers $1 million in foundation grants to help with the administrative startup of the institute, plus private jet service on occasion. Murphy was unfazed by that component.
"It is worth a million dollars to be out there on Torrey Mesa overlooking Torrey Pines Golf Course and the Pacific Ocean, and to be surrounded by UCSD and some of the finest research institutions in America," he said. San Diego is offering what amounts to much more than $1 million in free or discounted services to help get the institute up and running, the EDC's Wright said. Those incentives include 100 hours of free legal services, free branding and marketing advice from public-relations firm Mentus. The San Diego Workforce Partnership offered to provide free help in recruiting staff, while biotechnology trade group Biocom and UCSD Connect are offering free memberships and the services those entail. SAIC is offering pro bono assistance in setting up information technology systems. Invitrogen, a Carlsbad company that sells products used by stem cell researchers, has offered free educational courses in that science to the institute staff. "We wrapped things around the headquarters that we think are important for the institute to get a very solid start," Wright said. "And we've gone beyond that, looking to establish them at Biocom and Connect and other organizations, which will be the cornerstone of things we will be able to do for them in the future." A unique feature of San Diego's proposal is the formation of a "Readiness Team" and an "Advisory Council." The Readiness Team would consist of public-and private-sector leaders and their organizations who would be available to assist in planning, coordination and startup of the facility, furnishings and systems. The Advisory Council would be a team of top local executives who would be available to the institute for advice, introductions within the community, resources and crisis support. Biocom President Joe Panetta said a San Diego address also would benefit the institute by putting it in the midst of a research community with a reputation for cooperation and collaboration. 
"I don't think you can possibly go anywhere else but San Diego and see the kind of concentration of biotech and research and service firms and academia," he said. Headquarters effort a study in teamwork By Terri Somers March 18, 2005 San Diego's life-sciences community often brags about how it is uniquely collegial and collaborative compared with other clusters of biotech and drug discovery companies. That for-the-greater-good spirit encompassed even more businesses in the past month as the region quickly put together more than $9 million in donations to support a bid for the headquarters of the state's new stem cell institute. The effort entailed 20-hour days, dozens of telephone calls soliciting help, lots of takeout food and raising about $4 million in less than two weeks. "Everyone up to the mayor jumped in and helped out with various aspects of this, from fund raising to considering permitting," said Joe Panetta, president of the biotechnology group Biocom. San Diego was one of at least six cities that submitted bids Wednesday to be the home of the California Institute for Regenerative Medicine, which will administer the $3 billion in stem cell research grants that voters approved in November under Proposition 71. An independent citizens committee overseeing the initiative will review all the bids and visit top contenders. The committee plans to select the institute's site by May 6. San Diego's slick, 60-plus-page color proposal devotes more than a page to describing the region's spirit of collaboration and the advantage of having 38,000 people working in 500 life-sciences companies and eight research institutes located mostly in one or two adjacent ZIP codes. 
"In more than a dozen enterprises on Torrey Pines Road, you will find daily interactions of lab scientists, administrators and professors who have the chance to walk down the street to poster sessions or visiting lecturers covering the latest in biology, chemistry and medical discoveries," the proposal states. While there may be more biotechnology companies in the San Francisco Bay Area, they are spread across several cities. Emeryville, San Francisco and San Jose submitted proposals, and San Jose offered two possible sites for the institute. San Diego could have proposed multiple sites in multiple communities, Panetta said. But the communities and businesses agreed that a site in Torrey Pines would be best for the institute and the region's research community. Panetta said the bid includes the most attractive site of those offered and appears to provide the most amenities to get the institute started. In December, leaders of Biocom, the San Diego Regional Economic Development Corp. and UCSD Connect, which promotes businesses and technology, started discussing how they could help the region's researchers vie for the $300 million in annual research grants. "I said we also wanted to go for the headquarters," said Julie Meier Wright, president of the Economic Development Corp. "I saw this as the biggest statewide economic development opportunity ever." Although the institute will employ no more than 50 people, leaders statewide think it will attract millions of dollars more of investment in research and business. In January, the Proposition 71 oversight committee put out a request for proposals calling for 15,000 square feet of office space in an area with a high concentration of life-sciences companies. The Economic Development Corp., Biocom and Connect leaders put together what Wright called a "Red Team," a group of people from government and industry who have expertise in many areas. As the weeks passed, the group grew larger. 
Many of the initial members were familiar to one another because they belong to Biocom or Connect and have worked on other projects, said Brent Jacobs, senior vice president with Burnham Real Estate Services. Private companies such as defense contractor SAIC were enlisted for their expertise in drafting large government proposals, said Jane Signaigo-Cox of the Economic Development Corp. In January, the group started meeting for two to three hours twice a week to discuss what needed to go into the city's proposal. Between meetings, a flurry of e-mail ensued. Mary Ann Beyster of SAIC went to the first meeting and was impressed by what she described as a "well-qualified and highly energized group." With the help of Jacobs and Kennon Baldwin of McGraw/Baldwin Architects, the team listed about 20 properties that might be suitable. The site on Torrey Pines Road they selected is near the University of California San Diego and the Burnham, Salk, Scripps and Neurosciences institutes, as well as the Sidney Kimmel Cancer Center and the La Jolla Institute for Allergy and Immunology. Last month, when the oversight committee adopted a finalized request for proposals, it boosted the space requirement to 17,000 square feet with low or no rent. Access to low- or no-cost conference and hotel facilities was added. The site specifications narrowed the group's choice to the one it offered yesterday, which overlooks Torrey Pines Golf Course and the Pacific Ocean. Baldwin and Jacobs contacted the U.S. director of Slough Enterprises, the English company that owns the building. The company, which also owns institute-worthy space in the Bay Area, looked at the life-sciences roots established there and in San Diego. The company decided San Diego is best suited to hosting the institute, Baldwin said. Although the market rate for the space is about $3 a foot, Slough agreed to make it available for $1 a foot, Panetta said.
That discount represents about $4 million over the 10-year life of the lease, he said. The Red Team quickly looked for donations to supplement that cost, since the fiscally troubled city could not take on that burden, said Duane Roth, who leads UCSD Connect. Meanwhile, the team members called any company that provided services the institute will need. They ranged from hotels in La Jolla such as the Estancia, the Lodge at Torrey Pines, the Hilton and the Marriott, to office furniture vendors, to recruitment specialists, movers and even coffee services. It usually took one telephone call to gain a commitment to help, Beyster said. And about 24 hours later, the company would call back with its offer of free or discounted services and support, she said. Red Team members credit Roth with getting about $5 million more in donations to supplement the headquarters' rent in just over 12 days. The Los Angeles bid includes $1 million from two foundations. Roth said the San Diego bid includes donations from several foundations, businesses and individuals – 21 donors in all. "The answers came back very fast," he said. "It was almost an immediate sort of, 'Yes, this would be really good for San Diego.' "

Reassure voters - Oversight for stem cell agency is good idea
March 20, 2005

Sometime this year, the California Institute for Regenerative Medicine will begin distributing $300 million annually for the next 10 years, which is why a growing number of people are raising concerns. Californians voted overwhelmingly in November for Proposition 71, the stem cell initiative, which eventually will cost taxpayers $6 billion, including interest costs for $3 billion in state bonds. The primary author of the proposal, Bob Klein, wanted research funded in the hope that family members suffering from degenerative conditions, such as Alzheimer's and Parkinson's disease, might someday see treatments or possible cures. In Klein's case, his teenage son has Type 1 diabetes.
But, as is the case with most initiatives, many voters did not read the fine print. Not discussed widely during the campaign was the fact that the institute would be governed by a 29-member board with no oversight from anyone. The bonds would be authorized from the state treasury, and the institute would distribute the money. Language in the initiative also allows a number of decisions to be made behind closed doors. That lack of legislative oversight and a few other concerns are causing a delayed reaction. In February, two groups representing people who opposed Proposition 71 filed petitions with the state Supreme Court raising constitutional questions about the control of taxpayer funds and the power of the nonelected oversight board. (Members of the board are appointed by the governor, other constitutional officers, University of California chancellors and others.) State Attorney General Bill Lockyer said last week that the institute likely will not be able to begin distributing grant money until the suits are decided. Another challenge came last week from two members of the Legislature – Sens. Deborah Ortiz, D-Sacramento, and George Runner, R-Lancaster – who want to place limits on the institute. Ortiz, who actively supported Proposition 71, and Runner, who opposed it, both proposed a constitutional amendment that would impose conflict-of-interest requirements and open more meetings to the public. It would further require that medicines developed with public funding be made available to Californians at affordable prices. Separately, they also introduced a bill banning for three years "multiple egg donations" from women who want to contribute them to research. Embryos created from in vitro fertilization procedures still would be available to researchers.
Because of the way Proposition 71 was structured, the Legislature can only change it through a constitutional amendment, which requires a two-thirds vote in both the Senate and Assembly, and subsequent approval by voters. Ortiz and Runner say they have wide support in the Legislature. Klein and others on the oversight committee have said they want to work with the Legislature. Considering the amount of money involved, compromises must be reached, or the Legislature should move ahead with plans to put the constitutional amendment before voters during the next election.

What stem cell agency can find here
By Standish Fleming and Ivor Royston
March 22, 2005

Hundreds of life sciences researchers, CEOs and financiers converge on San Diego today for the 2005 CalBio Summit. The buzz at the conference will certainly be the selection of a site for the headquarters of the California Institute for Regenerative Medicine, which will administer $3 billion worth of stem cell research grants over the next decade. In attempts to lure the CIRM headquarters, last week several cities submitted strong proposals rich with enticements that include free office space, foundation grants, discounted rental cars, hotel rooms for guests and even private jets. In fact, the geographic location of CIRM's headquarters should have no influence on where the research dollars are spent. Rather, these cities are looking for the prestige, the money associated with conferences and the generation of other investments that the CIRM headquarters will spark. On March 12, The San Diego Union-Tribune reported that the "leaders of the stem cell effort said they will go wherever they get the best deal." But the selection committee needs to look beyond the best monetary and in-kind offers, which every aspiring headquarters city has put forth.
To realize its potential, the CIRM headquarters must sit squarely at the junction of innovation and scientific discovery, in an electric environment where great ideas arc between individuals and between institutions. San Diego – alone among the competitors – boasts unparalleled strengths in this regard. Every city on the list has offered subsidies ranging into the millions of dollars, but only San Diego can offer an intangible benefit responsible for its lightning-fast emergence as one of the leading biotech regions in the world, with special strength in early-stage research. Just last summer, the Milken Institute identified San Diego as "the top biotechnology cluster in the country" in its study titled America's Biotech and Life Science Clusters: San Diego's Position and Economic Contributions. Los Angeles has offered plush offices in a downtown high-rise. San Jose suggested two possible sites, one near the airport and the other downtown, both replete with soothing fountains and fitness centers for the institute's 50 employees. San Francisco has not released details of its proposal but will face a similar dilemma regarding a geographic location for the headquarters. Even if these cities have strong biotech industries – which some do not – they lack what San Diego offers in abundance: a spectacular concentration of life sciences companies, research institutes, universities and support services that sparks creative ferment and fosters a collaborative spirit. Without this intangible, San Diego would have been hard-pressed to emerge as the nation's top biotech cluster in little more than two decades. San Diego's offering is a building overlooking the world-famous Torrey Pines Golf Course and the azure Pacific beyond. But it's much more than a room with a view.
The institute's proposed new San Diego headquarters are surrounded by hundreds of biotechnology companies, the Scripps Research Institute, the Salk Institute, the Burnham Institute, the Sidney Kimmel Cancer Center, the Neuroscience Research Institute, the La Jolla Institute of Allergy and Immunology, the University of California San Diego, and Pfizer's La Jolla campus, among others. This interconnectedness has created a life sciences community unlike any other in the world. And its distinguishing features include collegiality and collaboration. As we all know, competition is good. But results are better when rivalry is tempered with cooperation and combined with the attitude that making a breakthrough scientific discovery is more important than personal gain or glory. This is what distinguishes San Diego's life sciences community from any other in the world. And this rare quality is what makes San Diego the best, most productive site for the CIRM headquarters.

- Fleming and Royston are founding managing members of Forward Ventures, a San Diego-based life sciences venture capital fund. Royston is the former chief executive officer of the Sidney Kimmel Cancer Center and a former faculty member of the UCSD Cancer Center. Fleming is a former president of the Biotech Ventures Investors Group.

Q&A: Stem cell research
Robert Klein, author of Proposition 71
John Reed, president, Burnham Institute

Klein wrote Proposition 71, an initiative passed in November to allocate $3 billion to stem cell research in California. The president of a real estate-related company, Klein has a son diagnosed with juvenile diabetes and a mother with Alzheimer's. He is now chairman of the initiative-spawned California Institute for Regenerative Medicine. Reed, a medical doctor and biomedical researcher, is president of the San Diego-based Burnham Institute. Reed is a member of the oversight committee for the new organization.
They were interviewed March 10 by members of the Union-Tribune's editorial board...

Calif. high court tosses two lawsuits against stem cell agency
By Paul Elias
March 23, 2005

SAN FRANCISCO – The California Supreme Court on Wednesday tossed out two lawsuits that sought to eliminate the state's newly created $3 billion stem cell research agency. The high court refused to hear the two cases with little comment. But the court did say its unanimous ruling doesn't prevent the lawsuits from being refiled in a trial court, which could still spell trouble for the California Institute for Regenerative Medicine. The two lawsuits were filed directly with the Supreme Court last month by conservative public interest groups with ties to Christian organizations. The lawsuits sought to invalidate Proposition 71, which was passed in November and created the California Institute for Regenerative Medicine. "We would have preferred for the California Supreme Court to rule on this litigation, but the institute will now consider its option and take prompt action on an alternative plan," Bob Klein, chairman of the committee that oversees the agency, said in a prepared statement. He didn't offer any details of what the "alternative plan" may be. Agency officials and California Attorney General Bill Lockyer had said the agency couldn't sell bonds to finance research grants as long as the lawsuits were pending in the Supreme Court. Lawyers for the plaintiffs said they're not sure what effect their lawsuits would have on the agency's bond-selling abilities if refiled in a lower court, but said it appears it could hamper the agency. Lawyers representing plaintiffs in both lawsuits said they will probably refile. "It's almost an absolute certainty that it will be filed in Superior Court," said David Llewellyn, a Sacramento attorney representing the newly created nonprofit called Californians for Public Accountability and Ethical Science, which filed one of the lawsuits.
Llewellyn said the groups took the unusual legal step of filing directly with the Supreme Court in hopes of getting a swift resolution. He said the court's ruling "wasn't unexpected." Llewellyn's lawsuit alleged that it was illegal to exempt members of the institute from some government conflict-of-interest laws, as Proposition 71 allows. The other lawsuit alleged that the committee that oversees the agency violates state law because it doles out public funds but isn't governed exclusively by the state government.

Stem cell restrictions may be eased
April 3, 2005

America just witnessed a collision among religion, politics and science in the end-of-life struggle over Terri Schiavo, fought between members of her family. These same forces have been at play with abortion and stem cell research, and the debate over Schiavo likely will affect the political future of both these contentious issues. While American public opinion on the central issues of the abortion debate has changed little over the past decade, there has been a dramatic shift on stem cell research, which only in recent years has become a political issue. Some public opinion polls show a doubling of support for stem cell research within the past three years. A recent survey conducted for the American Society for Reproductive Medicine shows that more than 66 percent of Americans support the research. A Harris poll taken just before the November 2004 election showed even greater support. Some political observers point to the success of California's Proposition 71 in November as another sign of the growing public support for the research, which one day may lead to cures for degenerative diseases such as Alzheimer's and diabetes, as well as for severe burns and spinal cord injuries. It should come as little surprise, then, that leaders in the House of Representatives recently relented and decided to allow a vote on a bill in support of loosening the restrictions placed on stem cell research by President Bush in 2001.
Under pressure from some on the religious right, the president said then that federal funding for research would be limited to 60 or so stem cell lines already in existence. Since then, researchers have said only about 20 of those lines are actually available and that some of them are contaminated. Researchers and advocates for victims of diseases have been pushing for a relaxation of the restriction. As support for easing the restrictions has increased among the public, it also has increased in Congress. Last year, 206 members of the House, including 31 Republicans and many opposed to abortion, asked the president to reconsider his decision. So did 58 senators. Some who follow the issue believe there are enough votes in both houses to pass a bill easing restrictions on research. The bill the House is likely to consider within the next few months already has 183 cosponsors. It would allow researchers to use federal funds to study newer stem cell lines derived from embryos that would otherwise be discarded by fertility clinics. It would not allow for the use of embryos from cloning or other means. Even with widespread support, the White House says President Bush still believes the restrictions are appropriate and that he likely would veto a bill to override the federal funding ban. Researchers have made clear that the existing lines are insufficient. The president's restrictions are causing money and researchers to go overseas. He should consider the growing public support for research that has so much potential for easing the suffering of millions.

Boston Globe's Gareth Cook wins Pulitzer prize in explanatory journalism for a series of articles about stem cell research. The stories nominated for the Explanatory Journalism prize are here.
"Harvard University will soon launch a multimillion-dollar center to grow and study human embryonic stem cells, in what could be the largest American effort yet to circumvent the Bush administration's tight restrictions on the controversial research." From political action, to research news, the Boston Globe has an extensive stem cell news roundup here. If you want to increase your awareness of the current debate over stem cell research these links are a good place to start.
MARVEL Contest of Champions APK
Downloads: 256
Updated: November 11, 2015

MARVEL Contest of Champions Description
If you’re a fan of Marvel comics and their many heroes, you’ll love MARVEL Contest of Champions. It’s a fun action game where you’ll have the chance to put your fighting powers to the test. The game features a series of fights in which you’ve got to assemble the best team possible in order to beat Kang and Thanos. You’ll easily pick up how the game works and the different types of attacks you can use, but what is difficult is combining them so that you become the best fighter in the Marvel world.

BUILD YOUR ULTIMATE TEAM OF CHAMPIONS:
• Assemble a mighty team of heroes and villains (choosing Champions such as Iron Man, Hulk, Wolverine, Storm, Star-Lord, Gamora, Spider-Man, Deadpool, Magneto and Winter Soldier)
• Embark on quests to defeat Kang and Thanos and face the challenge of a mysterious new super powerful cosmic competitor, ultimately to prevent the total destruction of The Marvel Universe
• Improve your team’s offense and defense with multiple Mastery trees

COLLECT THE MIGHTIEST HEROES (AND VILLAINS!):
• Collect, level up, and manage your teams of heroes and villains wisely to receive synergy bonuses based upon team affiliation and relationships taken from the pages of Marvel Comics
• Pair up Black Panther and Storm or Cyclops and Wolverine for bonuses, or make a team of Guardians of the Galaxy for a team affiliation bonus
• The more powerful the Champion, the better their stats, abilities and special moves will be

QUEST AND BATTLE:
• Journey through an exciting storyline in classic Marvel storytelling fashion
• Fight it out with a huge array of heroes and villains in iconic locations spanning the Marvel Universe, such as Avengers Tower, Oscorp, The Kyln, Wakanda, The Savage Land, Asgard, the S.H.I.E.L.D. Helicarrier, and more!
• Explore dynamic quest maps and engage in a healthy dose of action-packed fighting utilizing controls developed specifically for the mobile platform

Editors’ Notes
The game has a range of challenges, grouped by story or individual events, so it isn’t likely that you’ll get bored; there will always be a battle you can take part in. Note: this version has not been tested by our editors.

Download MARVEL Contest of Champions APK File
Download MARVEL Contest of Champions V5.0.1:
- Price: free
- Requires: Android 4.0 and up
- File Name: MARVEL Contest of Champions.apk
- Downloads: 256
- Version: V5.0.1
- Category: Action

Old Versions of MARVEL Contest of Champions
- MARVEL Contest of Champions V5.0.1 Apk File [free] – Date: 2015-11-10

What's New in MARVEL Contest of Champions
- Act 4 has been released!
- Summoner level maximum has been increased to level 60!
- 5-Star Champions are coming to The Contest! These are the most powerful Champions yet!
- Alliance Quest Series have received several improvements to communicate all the important Alliance Quest Series information better
- New Catalyst Inventory has been added
- Additional improvements have been made to the UI, Versus Arenas, Synergy Bonuses, the Stash & Items Store. Check out our forums for more details!

MARVEL Contest of Champions Screenshots

DISCLAIMER: MARVEL Contest of Champions is the property and trademark of its respective owner; all rights reserved. Click on the above link to proceed to the APK file download page or app buy page.
Bauer Media and Nine Entertainment Co.’s Mi9 today announced that as of the 1st April 2015, to coincide with the launch of the ‘To Love’ digital women’s network, Bauer Media will take over full trading responsibilities for its entire digital inventory. The deal will see Bauer Media centralise all advertising inventory and content across both digital and print platforms, providing advertisers with one point of contact when booking campaigns associated with Bauer brands. “With the launch of Bauer’s new digital lead products coming in early 2015, it was important for us to bring our digital trading capabilities in-house, streamline the booking process and provide advertisers with greater brand synergy between print, digital and live experience opportunities,” said Tony Kendall, Bauer Media’s Director of Sales. “In the short term it’s business as usual, as we work towards transitioning the ad serving, campaign management and billing side of our business from Mi9 to Bauer Media by the 31st March 2015,” he added. Bauer Media digital assets will continue to be co-represented by Mi9 and Bauer Media until March 31st 2015, with all current campaigns and those booked within this period to be implemented and monitored through current booking procedures. Bauer Media remains a partner of Nine Entertainment Co.’s Powered, providing clients with strategic cross media solutions involving some of Australia’s most recognisable consumer brands.
Mark my words… Google TV is going to absolutely blow up. The reason? There are a bunch – many of which Vic Gundotra and Rishi Chandra identified in their Google IO keynote on Day 2 – but they’re very similar to the reasons that Android has quickly become the dominant (yes, dominant) mobile operating system over the past two years. Every fall, a slew of tech products and flagship devices vie for the hearts and dollars of holiday shoppers the world over. This holiday season we’ve already been promised Chrome OS Netbooks (and probably tablets) and it seems as though 3D TVs will make a push for popularity. Don’t forget that an avalanche of Android Phones will likely scramble for limelight, but if you ask me, Google TV will be THE must-have holiday gift (at least in tech) for 2010. Allow me to explain…

Google TV: A Winning Concept

At Google IO, Vic Gundotra acknowledged that there were two very different Android-based devices on stage: one was a 3-point-something-inch phone and the other was a 50-inch television. With the operating system running successfully in two very different environments, one could assume that Android could also be successfully integrated on all the screen sizes in between. We’re talking tablets, netbooks, laptops, MIDs and more. It’s safe to say that Android itself has a bright future beyond phones. So why will the TV be so successful when tablets, netbooks, laptops and MIDs have yet to really take off? It’s already the biggest screen in your house. The HTC EVO 4G will launch on Sprint June 4th and immediately be one of the largest Android Phones on the market with a 4.3-inch screen. As long as it fits in your pocket, everyone seems to want bigger, brighter, faster and even more bigger (biggerer).
While you’re doing all sorts of ridiculously awesome things on your Android Phone and perhaps playing on the web with your laptop, you’re often sitting directly in front of the biggest screen in your house, which is unfortunately relegated to a single duty – television. What if the power of that HUGE screen could be unlocked to do so much more? It can… and it was just a matter of time before it happened. That time is now. But wait – a phone, tablet or netbook would only cost a few hundred bucks while a new Google TV will cost thousands, right? Not really… you already own one. While Sony will indeed be selling their “Sony Internet Television” – which will come “with Google TV” built in much like Android Phones come “with Google” – you won’t need a brand new television to enjoy Google TV. With the help of 3rd parties like Logitech, you’ll be able to purchase a box that connects directly to your HD-capable television to immediately enjoy all that is Google TV. Although pricing hasn’t been disclosed, I can’t imagine the unit will cost more than $200 or $300, which is on par with any other holiday gadget you’d be buying. Perhaps we’re getting ahead of ourselves. Do we even know what Google TV is? Not completely, but Google explains the basics pretty well. To sum that up:
- Access typical television content in a revolutionary way with the ability to search television programs (and content, actors, dialogue, etc…) directly from your “remote” to help you find what you want quicker and easier.
- Supplement typical television content with the ENTIRE web (*cough* Flash *cough*) that not only seamlessly integrates with your TV experience, but provides a new level of synergy.
- Google TV is based on Android 2.2, so you’ll be able to download apps and games and use/play them directly on your Google TV.

As the video stated, this is just scratching the surface. Put them into context and you’re really in for a ride.
For example, how about using a few Android Phones with the Logitech Remote App, loading up an Android Game and playing a multi-player game in the same vein as Wii? I’m sure it’ll happen. How about translating foreign language television programs on-the-fly to read subtitles in your native language? Google already demonstrated that. Full-screen YouTube and DVR integration? Mark it 8, Walter. In all fairness, these ideas aren’t brand new. Many of them (if not most of them) were thought of a LONG time ago and over the past several years other companies have already launched similar products, but with marginal success. So why will Google TV be any different? Openness and timing. Are you familiar with Apple TV? How about WebTV (now MSN TV)? These are already-launched products that share a similar vision with that of Google TV, yet they haven’t achieved mainstream success. The difference is that these were products developed by ONE company in hopes of creating a successful opportunity for that single company. Google TV wasn’t just announced by Google: Sony, Intel, Logitech, Dish TV and Best Buy all announced the product together at Google IO. Was this a formality? Absolutely not. Instead of one company building one product for the financial bottom line of themselves, Google has taken one concept and opened the opportunity to a multitude of industry leaders who will ALL be pushing for the product’s success. Sound familiar? This is the exact reason Android has become so successful. When Google announced Android they didn’t just announce a mobile operating system, they announced the Open Handset Alliance. With an open source operating system that carriers and manufacturers across the globe could all utilize, the game was forever changed. This allowed us to wave goodbye to the proprietary operating systems responsible for lackluster feature phones with scantily clad features. 
Instead of one company pushing for the adoption of their precious baby, dozens of carriers and manufacturers would soon be building the best products and promoting them to the full extent. Although Google is a big company with a lot of marketing power, that alone isn’t enough. But with Google facilitating the move towards a new generation of devices AND allowing all companies to benefit from the rising tide, Android – and now Google TV – have the combined resources and momentum to change their respective rustic industries.

And we’re still just grazing the surface. Creating an open platform that these industry players ALL push will surely allow the product to get into the hands of consumers. But will the product be good enough that millions and millions of people will want it? You can already start optimizing your website for Google TV, but when the product launches this holiday season, the real madness will begin. That’s because Google will launch the Google TV SDK and Web APIs that will allow anyone to immediately begin creating applications and content meant specifically for use on Google TV.

Remember to think about this in terms of openness; this isn’t Apple, where developers create something that can only be used on one of a few devices made exclusively by Apple. You create an application and it is immediately available on a wide range of televisions, created by a wide range of manufacturers, running on a wide array of television service operators, controlled using a wide array of possible remotes.

As the Google team happily confessed at Google I/O, Android wouldn’t be what it is today without the 180,000+ developers who made innovative and intriguing applications for the platform. Not only do they make any new Android Phone that launches an automatic winner because of the sheer number of free and paid downloadable apps and games, but they essentially future-proof your device as well.
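What “optimizing your website for Google TV” might look like in practice can be sketched in a few lines. This is a hypothetical illustration only: it assumes Google TV browsers identify themselves with a “GoogleTV” token in the user-agent string, and the helper names (`isGoogleTV`, `stylesheetFor`) and stylesheet filenames are my own inventions, not anything from Google’s actual developer guidance.

```javascript
// Hypothetical sketch: pick a "10-foot UI" stylesheet when the visiting
// browser appears to be a Google TV device. The "GoogleTV" user-agent
// token and the CSS filenames are illustrative assumptions.
function isGoogleTV(userAgent) {
  // Case-insensitive check; tolerate a missing user-agent string.
  return /GoogleTV/i.test(userAgent || "");
}

function stylesheetFor(userAgent) {
  // Larger fonts and visible focus outlines suit a remote-driven TV UI.
  return isGoogleTV(userAgent) ? "tv-10foot.css" : "desktop.css";
}

// Illustrative user-agent strings (not real captured values).
const tvUA = "Mozilla/5.0 (X11; Linux i686) AppleWebKit/533.4 GoogleTV/1.0";
const pcUA = "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 Chrome/5.0";

console.log(stylesheetFor(tvUA)); // → tv-10foot.css
console.log(stylesheetFor(pcUA)); // → desktop.css
```

The same branch could just as easily serve a different HTML template server-side; the point is simply that a TV sitting ten feet away calls for different presentation than a monitor two feet away.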
You buy a phone today but its capabilities continually grow through the availability of new applications and the push of new OS versions. In short, others have failed because they’ve gone it alone. Google isn’t too proud to ask for help. Nor does their ego require that they earn all the riches and claim all the credit. Instead they create an opportunity for themselves AND others that – were it not for an open approach of diverse industry leadership and 3rd-party support – might not reach its full potential. And up until now you’ve seen just that: next-generation television products that have failed to reach their full potential. But Google TV will be different.

Timing is everything

We could go on for hours about innovative technological advances that were simply before their time and failed to gain the support needed to continue. Until, that is, the timing is right and another enterprising individual or company continues where they left off under more favorable conditions. Melding the web and television isn’t a new idea, but up until now it has mainly centered around watching television on the web. Using the web on your TV is a much scarcer, albeit not brand-new, concept.

The problem with the web on your television is that it requires the prerequisite of quick internet access. There aren’t too many folks rocking dial-up these days, but even a few years ago the numbers were much more menacing. Now Sprint is about to launch a 4G mobile network with astonishing speeds and Google is testing fiber for community internet use. A huge chunk of web-enabled consumers have the necessary infrastructure to enjoy the Google TV offering, whereas in previous years the potential market was much smaller based purely on a technological infrastructure gap. Combine all this with increased ownership of big-screen HD-capable TVs and dropping prices of tech hardware and gadgets, and you’ve got a great opportunity.
And don’t forget the leverage Google has created with Android, not only because Google TV runs on Android but also because they’ve taken a similarly stagnant industry from complacently-slow-moving tortoise to wow-that’s-awesome fruition in the recent past (and will continue to do so in the near and distant future).

Price Points, Demographics and Demand

Every holiday season people are looking for the “it” thing to buy as a gift, often for their kids. Usually it’s a gaming console such as the Xbox 360, PS3 or Wii, or even a game that goes along with it like Rock Band or Guitar Hero. The general investment is around $150 to $300 which, if I had to guess, will be about the price point for a Logitech Google TV box. From a purely speculative price-comparison standpoint, Google TV could easily become a popular holiday purchase, since we KNOW how popular these consoles can be.

Whereas gaming consoles are usually reserved for kids and teenagers, Google TV will reach a very different market that also INCLUDES the younger segment. Sure, kids of all ages will love the ability to search for their favorite TV shows or browse the web, YouTube and more from their big screen, but remember how we talked about the huge opportunity for gaming through Google TV? Don’t lose sight of that, because it is a very real and very big opportunity. The shift in demographics occurs in the older segment: Google TV could be the “it” thing to buy for college students and older. Personally (if you’re reading, sorry for spoiling it mom and dad) I already know I’m getting Google TV for my parents this Christmas. Would I ever buy a gaming console for my parents? Maybe a Wii because it’s family oriented, but otherwise no.

Google TV will spark ageless and genderless demand. It will be a product that anyone can buy for anyone else and it will just make sense. Of course, success and sales depend on the ability to convince consumers that this is something that they and/or their loved ones want or need.
This will NOT be an easy feat, especially considering that it’s hard to grasp the concept of what Google TV is and how it will benefit you without understanding/foreseeing the realm of not-yet-existent possibilities. This is something usually reserved for early adopters and tech-savvy gadget lovers. Big challenge? Absolutely. Can it be overcome? It can and it will.

Hands-On The Holidays

I’m excited about Google TV because even I don’t understand the implications it will have on my television and web surfing habits. We get so familiar with doing one or the other that we fail to see how an innovative integration will truly change the way we approach the media with which we interact. The only way to completely understand is to experience it yourself – and that is exactly what will happen this holiday season.

Google announced Best Buy as a launch partner and I can already picture it now: a heavily trafficked area of the store will be roped off with a HUGE Google TV display where consumers passing by can’t help but stop and look. Numerous TVs are set up as Best Buy employees demonstrate the product on one while consumers try it for themselves on a couple of others. They’re kind of amazed at what they’re actually seeing. But how much does it cost? Well, you can buy the TV for $X,XXX, the box for $XXX, this remote for $XX, and so on and so forth. Enough people will see it in action to want it and make it a hit. Not to mention I’m sure there will be some huge advertising dollars designated for Google TV campaigns alone.

If you ask me, the combination of everything above SHOULD make Google TV the holiday success of 2010, but not everyone is looking through the same crystal ball that I am. Those with a different outlook will visit their early-adopter friends’ houses over the next month or two, see Google TV in action and at some point think, “I gotta have me one of these!” Google TV is a concept that can’t just be explained – it needs to be experienced.
Best Buy and other retailers will be the initial catalysts, and pretty soon family rooms and living rooms across the United States (and later the world) will spontaneously become out-of-store demonstration facilities staffed by eager advocates.

As you can tell, I’m incredibly bullish on Google TV. I was also incredibly bullish on Android; after all, I started this website the day Android was announced. People initially doubted Android would succeed. They didn’t understand the impact it could have, and even if it could have that impact, they didn’t think an out-of-towner could galvanize an industry that had essentially become a dusty oligopoly. But it did. The T-Mobile G1 launched during the holidays of 2008, and while Android didn’t become the runaway sensation I thought it would upon launch, it eventually would.

Google TV will follow suit because, well… industry players won’t have a choice. Consumers vote with their wallets and in the end, they’ll buy the products and services that are best for them. Yesterday mobile phone consumers just wanted their phones to call people and calculate the tip at a restaurant; now we’ve got an overflow of upset souls when a manufacturer fails to include Wi-Fi. There was a paradigm shift wherein the lowest common denominator was changed and the bar was raised. With the launch of Google TV the bar will be immediately raised and industry players will have one of three choices:
- Attempt to offer their own competing service
- Attempt to create their own partnerships offering a competing service
- Adopt Google TV

What do you expect your TV to do for you? The answer to that question will change dramatically in the next several years. As consumers expect more, manufacturers and service providers will have to provide more. They can either attempt to provide more by roughing it themselves, or by using the open resources Google is offering to successfully enter this new market.
Competitors will already be offering products and services using Google TV, and those who don’t follow suit will face a huge disadvantage. There will certainly be cost savings associated with following this path, as the need to inform consumers and market/brand the product will be lowered significantly. Google TV will provide those who adopt it with a competitive advantage, and for those who don’t initially adopt it, jumping on the bandwagon afterwards will be the path of least resistance and allow the opportunity to easily catch up. Some stakeholders may play the wait-and-see game as Verizon and AT&T did with Android, but eventually they’ll see the opportunity is real and hop aboard.

The Bottom Line

Google has created an environment where next-generation TV products and services can be built and promoted by absolutely any company or individual who wants in on the action. By taking an open approach where everyone (even competitors) can benefit and succeed, Google has created an environment where huge companies are comfortable joining forces to take an industry of mutual interest to the next level. Together they can help the water level rise higher than any of them could single-handedly, and ultimately, by giving up total control, each will enjoy larger success. And no, I don’t care if Steve Jobs disagrees.

For consumers, Google TV will change the way we interact with both television and the web. It’ll change the way we think about both. This shift will take time to accomplish and won’t be realized until the platform matures, but its eventual impact – even in a 2 to 3 year time frame – will be extensive and far-reaching. Best of all, the potential is limited only by what developers and 3rd-party providers can dream up and implement. We already know that Google will continually add new features to Google TV, but the SDK and APIs ensure that innovation is uncapped.
I’m pretty darn confident that Google TV will be the “it” thing for the 2010 holidays, but I could be wrong. Heck, I was pretty darn confident that Android would be the “it” thing for the 2008 holidays, and I was wrong there. But I was wrong for all the right reasons and, in my opinion, if I’m wrong again it will be for the same reasons. Google TV is poised to do for the television industry what Android has done for the mobile industry… and to say “that’s a lot” would be the understatement of the year.
905 F.2d 681
134 L.R.R.M. (BNA) 2573, 59 USLW 2087, 115 Lab.Cas. P 10,170

NATIONAL LABOR RELATIONS BOARD, Petitioner,
Local 282, International Brotherhood of Teamsters, Intervenor,
v.
GLOVER BOTTLED GAS CORP., Synergy Gas Corp., Vogel's Inc., New York Propane Corp., Synergy Group, Inc., Respondent.

No. 1312, Docket 90-4022.

United States Court of Appeals, Second Circuit.

Argued May 7, 1990. Decided June 13, 1990.

William M. Bernstein, N.L.R.B. (Jerry M. Hunter, General Counsel, Robert E. Allen, Associate General Counsel, Aileen A. Armstrong, Deputy Associate General Counsel, N.L.R.B., Washington, D.C., of counsel), for petitioner.

Arthur R. Kaufman (Peter A. Schneider, Kaufman, Frank, Naness, Schneider & Rosensweig, Melville, N.Y., of counsel), for respondent.

Before ALTIMARI and MAHONEY, Circuit Judges, and POLLACK*, Senior District Judge.

ALTIMARI, Circuit Judge:

On this appeal we consider whether an employer gave sufficient notice of its withdrawal of recognition from a union so as to commence the limitation period pursuant to Section 10(b) of the National Labor Relations Act, as amended, 29 U.S.C. Sec. 160(b) (1982 ed.) ("the Act"). The National Labor Relations Board ("N.L.R.B." and "the Board") petitions for enforcement of its February 8, 1989 order against Glover Bottled Gas Corp. ("Glover"), Vogel's Inc., New York Propane Corp., Synergy Gas Corp. and Synergy Group, Inc. (collectively "the Company"). The Company cross-petitions for a review of the Board's order. Local 282, International Brotherhood of Teamsters ("the Union"), as intervenor, joins in and adopts in its entirety the Board's position on this petition. We have jurisdiction pursuant to Section 10(e) of the Act. For the reasons discussed below, the Board's application for enforcement of the order is granted; the Company's cross-petition to vacate the order is denied.
The Company, which is comprised of several corporations including Glover, acts as a single employer engaged in the sale and distribution of propane gas and related products. Glover and the Union were parties to a collective bargaining agreement covering all drivers, platform workers, and servicemen which expired on July 31, 1982. Between August 1, 1982 and February 2, 1983, Glover and the Union participated in several unsuccessful collective bargaining sessions aimed at reaching a new agreement. On February 3, 1983, the Union commenced a strike against Glover. During the course of the strike, Glover hired eleven permanent replacements for the fifteen strikers. Approximately ten weeks after commencing the strike, the Union notified Glover of the fifteen strikers' unconditional offer to return to work. In 1983 and 1984, the Union filed charges with the Board alleging several unfair labor practices in violation of the Act. Specifically, the Union alleged that, although Glover reinstated three of the striking employees, Glover failed to pay vacation money for 1983 to all three reinstated employees, assigned two of the three employees to low-skilled jobs normally performed by new employees, and refused bereavement pay to one of the three employees, in violation of Sections 8(a)(1), 8(a)(3) and 8(a)(5) of the Act. The Union further alleged that, although there were numerous openings and sufficient work, Glover declined to reinstate any of the remaining twelve strikers, and filled the openings by hiring new employees or transferring employees into Glover from other divisions of the Company, in violation of Sections 8(a)(3) and 8(a)(1) of the Act. 
The Union also alleged that in June of 1984, Glover unilaterally instituted an unlawful mandatory polygraph examination policy, and that shortly thereafter Glover discharged an employee who was scheduled to testify at a Board hearing and who had expressed his intention not to submit to the polygraph examination, in violation of Sections 8(a)(1), 8(a)(4) and 8(a)(5). The various allegations were eventually consolidated, and in May and June of 1986, hearings were conducted in connection with them before Administrative Law Judge McLeod ("the A.L.J."). From 1983 through 1986, the Union continued to represent the employees at arbitration proceedings. Either in October 1985 or January 1986, the attorneys for Glover and the Union discussed a possible settlement to one of these arbitration proceedings. During this discussion, Glover's legal counsel informed an attorney for the Union of his doubt that the Union could be presumed to represent the employees. On April 9, July 1, and August 8, 1986, the Union sent written requests to meet and bargain with Glover. Glover failed to respond to these requests, and on September 26, 1986, the Union filed a charge alleging a violation of Sections 8(a)(5) and 8(a)(1) of the Act. At a subsequent hearing before the A.L.J. in connection with this charge, Glover's legal counsel testified about his January conversation with the Union attorney: [I]t was in response to his saying that we had this continuing obligation to negotiate on these subjects and I told him that we didn't recognize having an obligation, that we didn't think that they not only didn't [sic] represent the majority, but that they didn't [sic] represent anybody and that's the phrase I used. The record is bereft of any indication that the Glover attorney commemorated a notice of withdrawal of recognition of the Union at the time of his discussion with the Union attorney. In a lengthy and thorough opinion, dated August 26, 1987, the A.L.J. 
found in favor of the Union, and thereafter, the A.L.J.'s decision was affirmed by a panel of the N.L.R.B. The Board found that the Company: (1) treated returning strikers as new employees and withheld vacation and bereavement pay from returning strikers, in violation of Sections 8(a)(1), 8(a)(3) and 8(a)(5) of the Act; (2) discharged an employee in retaliation for his agreement to give testimony before the Board under the guise that he refused to submit to polygraph policy which policy was determined to be unlawful, in violation of Sections 8(a)(1), 8(a)(4) and 8(a)(5) of the Act; (3) failed to offer available positions to striking employees who made an unconditional offer to return to work, in violation of Sections 8(a)(3) and 8(a)(1) of the Act; and (4) refused to bargain with, or provide information, to the Union, in violation of Sections 8(a)(5) and 8(a)(1) of the Act. On this appeal, the Company does not contest that it engaged in the unfair labor practices specified in the Board's first two findings. At the outset, we note that Congress has entrusted the Board with primary responsibility for developing and applying national labor policy, and consequently, a Board rule is entitled to considerable deference so long as it is rational and consistent with the Act. See N.L.R.B. v. Curtin Matheson Scientific, Inc., --- U.S. ----, 110 S.Ct. 1542, 1549, 108 L.Ed.2d 801 (1990); N.L.R.B. v. Cooper Union for Advancement of Science and Art, 783 F.2d 29, 31 (2nd Cir.), cert. denied, 479 U.S. 815, 107 S.Ct. 70, 93 L.Ed.2d 27 (1986). Moreover, the Act "vests primary responsibility in the [N.L.R.B.] ... to resolve ... difficult questions of fact." N.L.R.B. v. American Geri-Care, Inc., 697 F.2d 56, 59 (2nd Cir.1982), cert. denied, 461 U.S. 906, 103 S.Ct. 1876, 76 L.Ed.2d 807 (1983). On appellate review, the Board's findings will not be overturned if they are supported by "substantial evidence." See N.L.R.B. v. Erie Resistor Corp., 373 U.S. 221, 236, 83 S.Ct. 
1139, 1149, 10 L.Ed.2d 308 (1963); N.L.R.B. v. S.E. Nichols, Inc., 862 F.2d 952, 956 (2nd Cir.1988), cert. denied, --- U.S. ----, 109 S.Ct. 3162, 104 L.Ed.2d 1025 (1989). Recently, we elucidated the substantial evidence rule by stating: "even if we disagree with the Board's findings, reversal based upon ... [a] factual question will only be warranted if, after looking at the record as a whole, we are left with the impression that no rational trier of fact could reach the conclusion drawn by the Board." N.L.R.B. v. Springfield Hospital, 899 F.2d 1305, 1310 (2nd Cir.1990). Accordingly, our "scope of review on petition for enforcement of an NLRB order is properly quite limited." American Geri-Care, 697 F.2d at 59. The Company first argues that the Board erred when it found that the Company violated Sections 8(a)(5) and 8(a)(1) of the Act by refusing to bargain with, or provide information to, the Union. Specifically, the Company contends that it officially withdrew recognition from the Union during a January 1986 meeting between their attorneys, and that pursuant to the six month period provided for in Section 10(b) of the Act, the September 26, 1986 charge was not timely filed. We disagree. Section 10(b) of the Act provides that "no complaint shall issue upon any unfair labor practice occurring more than six months prior to the filing of the charge with the Board ..." See 29 U.S.C. Sec. 160(b). Several of the Courts of Appeals have adopted the rule that "[t]he 10(b) period begins when the victim of an unfair labor practice receives unequivocal notice of a final adverse decision." Esmark, Inc. v. N.L.R.B., 887 F.2d 739, 746 (7th Cir.1989); accord Teamsters Local Union No. 42 v. N.L.R.B., 825 F.2d 608, 616 (1st Cir.1987) ("As ... no unequivocal notice ... was communicated.... the unfair labor practice was timely filed ..."); N.L.R.B. v. Allied Products Corp., Richard Bros. 
Div., 548 F.2d 644, 650 (6th Cir.1977) ("[T]he six month limitation period does not begin to run until the employer's unlawful activity ... has become known to the charging party."). The unequivocal notice rule rests on the fundamental procedural objective of promoting prompt filing of ripe charges while not precipitating premature filing. Particularly in the area of labor relations disputes where rumors and suspicions can abound, the 10(b) limitations period ought not to commence until an aggrieved party has actual knowledge "of the facts necessary to support a present, ripe, unfair labor practice charge." Esmark, 887 F.2d at 746. It follows that for the purposes of Section 10(b), a charging party must receive unequivocal notice of an employer's withdrawal of recognition of a union's majority status. In the instant case, we think that " 'substantial evidence on the record considered as a whole,' " American Geri-Care, 697 F.2d at 59 (quoting N.L.R.B. v. International Metal Specialties, Inc., 433 F.2d 870, 871 (2nd Cir.1970), cert. denied, 402 U.S. 907, 91 S.Ct. 1378, 28 L.Ed.2d 647 (1971)), supports the Board's determination that the Union had not received unequivocal notice from the Company. The Board based its determination on the ground that statements made by Glover's legal counsel in association with the January 1986 arbitration amounted to an expression of opinion and not to an actual notice of withdrawal of recognition. As the A.L.J. observed, Glover's legal counsel, an experienced labor attorney, would be unlikely to communicate a withdrawal of recognition except in a clear and exact manner. This observation is bolstered by the fact that no record was made at the time of the informal discussion between the attorneys and that we must now rely on the legal counsel's recollection of the conversation as recalled in the testimony before the A.L.J. Further, the A.L.J. 
noted that the purpose of the January 1986 meeting was to discuss a possible settlement to a specific arbitration proceeding and not to bargain over Glover's changes in employee benefits and working conditions. Moreover, the January conversation between attorneys for Glover and the Union occurred in the absence of any pending request from the Union to Glover to bargain. As our scope of review is "quite limited", American Geri-Care, 697 F.2d at 59, and findings of the Board " 'cannot lightly be overturned' ", id. at 60 (quoting N.L.R.B. v. Advanced Business Forms Corp., 474 F.2d 457, 464 (2nd Cir.1973) (citations omitted)), we see no reason to repudiate the Board's finding that the statement by Glover's legal counsel did not amount to an unequivocal notice of withdrawal. Accordingly, the Company's assertion of the affirmative defense based on the statute of limitations in Section 10(b) of the Act must fail. In the alternative, the Company argues that the Board was required to dismiss the charge that the Company failed to bargain with, or provide information to, the Union pursuant to Jefferson Chemical Co., Inc., 200 N.L.R.B. 992 (1972). In that case the Board held that where a broad "refusal to bargain collectively charge" had previously been litigated, the General Counsel was precluded from subsequently litigating a "surface bargaining charge" which arose out of facts that could have been discovered in connection with the previous litigation. Id. According to the Company, the holding in Jefferson Chemical prohibited the Board from considering the Union's withdrawal of recognition charge since the facts supporting the charge transpired before the May and June 1986 hearings on the other alleged unfair labor practices. To the contrary, the Board affirmed the A.L.J.'s rejection of the Company's position since at the time of the earlier hearings it simply was not yet clear that Glover had actually withdrawn recognition from the Union. 
Additionally, "[a]lthough the Board's decisions are by no means immune from attack", we must defer to the Board where "its explication is not inadequate, irrational or arbitrary." Erie Resistor, 373 U.S. at 236, 83 S.Ct. at 1149. The Board is best suited to interpret its own precedent and to apply it to the facts of a particular case, see N.L.R.B. v. J. Weingarten, Inc., 420 U.S. 251, 265-66, 95 S.Ct. 959, 967-68, 43 L.Ed.2d 171 (1975), and its interpretation here, in light of the finding that Glover failed to provide unequivocal notice of withdrawal, cannot be said to be inadequate, irrational or arbitrary. The Company next argues that the Board erred by failing to find that the Union had lost majority status based on the anti-union sentiments of the employees, and that the Company was under no obligation to bargain with the allegedly defunct Union. An employer seeking to terminate a collective bargaining relationship by withdrawing recognition from a Union must rebut a presumption of continued majority status " '1) by showing that on the date recognition was withdrawn, the union did not in fact enjoy majority support, or 2) by presenting sufficient evidence to show that the refusal to bargain was based on a serious good faith doubt of the union's majority.' " N.L.R.B. v. Windham Community Memorial Hospital, 577 F.2d 805, 811 (2nd Cir.1978) (quoting Retired Persons Pharmacy v. N.L.R.B., 519 F.2d 486, 489 (2nd Cir.1975)); see also Curtin Matheson, 110 S.Ct. at 1549-50. The "determination of the sufficiency of the employer's evidence regarding loss of majority status or good faith doubt is a question of fact for the Board which is subject to limited review." N.L.R.B. v. Koenig Iron Works, Inc., 681 F.2d 130, 137 (2nd Cir.1982) (citing Windham, 577 F.2d 805, 811, and Orion Corp. v. N.L.R.B., 515 F.2d 81, 85-86 (7th Cir.1975)). 
Although the Union made no formal requests for bargaining between 1983 and 1986, the Board found that during this time period the Union had been actively involved in representing employees before the Board in unfair labor practice proceedings, before arbitrators, and before this Court on several occasions. Further, the Supreme Court recently upheld the Board's rule articulated in Station KKHI, 284 N.L.R.B. 1339 (1987), enforced sub nom. N.L.R.B. v. Buckley Broadcasting Corp. of Cal., 891 F.2d 230 (9th Cir.1989), which refuses to adopt a general presumption about the sentiments of striker replacements as a basis for determining the loss of a union's majority status. Curtin Matheson, 110 S.Ct. at 1554. Thus, the Company has not met its burden to show that on the date it claims to have withdrawn recognition, the union did not actually enjoy majority status. In addition, "[a]n employer cannot use the good-faith doubt defense to reap benefit from its own unfair labor practices." Proxy Communications of Manhattan, Inc. v. N.L.R.B., 873 F.2d 552, 554 (2nd Cir.1989) (citing N.L.R.B. v. Fotochrome, Inc., 343 F.2d 631, 633 (2nd Cir.), cert. denied, 382 U.S. 833, 86 S.Ct. 76, 15 L.Ed.2d 76 (1965)). In the instant case, it is uncontroverted that the Company engaged in various unfair labor practices. Accordingly, we agree with the Board that the Company has failed to rebut the presumption of the Union's majority status. Finally, the Company also contests the Board's determination that the Company violated Sections 8(a)(3) and 8(a)(1) of the Act by failing to offer available positions to striking employees who, through the Union, made an unconditional offer to return to work. An employer who denies or delays reinstatement following employees' unconditional offers to return to work violates the Act absent a showing of "a substantial and legitimate business reason for refusing reinstatement." Koenig Iron Works, 681 F.2d at 145; (citing N.L.R.B. v. 
Great Dane Trailers, Inc., 388 U.S. 26, 87 S.Ct. 1792, 18 L.Ed.2d 1027 (1967), and N.L.R.B. v. Fleetwood Trailer Co., 389 U.S. 375, 377-79, 88 S.Ct. 543, 545-46, 19 L.Ed.2d 614 (1967)). Here, the Company does not offer any business reason for its failure to reinstate, but only asserts that a questionnaire it mailed to strikers constitutes a valid offer of reinstatement. To the contrary, the Board found that the questionnaire did not include an offer of reinstatement, but simply sought information regarding availability. Applying the substantial evidence rule, we perceive no reason why the Board's finding should be set aside. See Erie Resistor, 373 U.S. at 236, 83 S.Ct. at 1149; S.E. Nichols, 862 F.2d at 956. We have examined the Company's remaining contentions and find them to be without merit. For all the foregoing reasons, the N.L.R.B.'s petition for enforcement of its order is granted. The Company's cross-petition is denied. The Honorable Milton Pollack of the United States District Court for the Southern District of New York, sitting by designation
DISC Interiors is featured on Design Milk talking about what inspires our work. "It was at a show nearly 20 years ago, at the Museum of Contemporary Art in Chicago, that I first discovered the paintings of Agnes Martin. The show, called “Negotiating Rapture: The Power of Art to Transform Lives,” included works by Joseph Beuys, James Lee Byars, and Lucio Fontana. Never having read about Martin’s work before, her paintings shattered my thoughts of what a painting needed to do, and introduced me to the idea of what art could do. There is a great line in one of her early letters, where she says, “I have only one worry in the world! It is that my paintings will show downtown and fail there. They will fail because they are non-aggressive – they are not even outgoing – in a competitive environment, with big displays of aggressive artwork.” Her work is a testament that quieter works do have an impact. I’ve continued to be captivated by the way her works celebrate light and space, and the strange calm I feel just gazing into her ethereal grids and painted boxes." – David John DISC Interiors: The dynamic duo behind DISC Interiors is Krista Schrock and David John, a Los Angeles-based pair that offers clients full-service interior design. Established in 2011, the firm quickly gained a reputation for warm, modern spaces that exude an effortless California feel. The synergy of their diverse backgrounds and their work with various artisans, furniture and textile designers makes for the perfect combination of eclectic modernity. For this week’s Friday Five, the twosome shares their motley mix of inspirations.
"Jem and the Holograms" is an animated television show that aired from 1985 to 1989. A film adaptation, also titled "Jem and the Holograms," was released on Oct. 23, 2015. The plot of "Jem" focuses on a high school student named Jerrica Benton, who is secretly Jem, a famous pop star. Jem's group, The Holograms, consists of Jerrica's younger sister, Kimber, and her childhood friends, Aja Leith and Shana Elmsford. Jem is also associated with Synergy, a complex holographic computer built by Jerrica's father. Many episodes involve villains attempting to steal Synergy, with Jem and the Holograms attempting to stop them. "Jem" was created by Hasbro, the toy company also responsible for "G.I. Joe" and "Transformers." The company hired Marvel Productions to develop "Jem" in hopes of creating a show that would appeal to young girls, as "G.I. Joe" and "Transformers" primarily appealed to young boys. Hasbro's plan was successful, and "Jem" developed a significant following in the late 1980s, with the franchise competing directly with Mattel's "Barbie and the Rockers" products. In 2015, a live-action "Jem and the Holograms" film was released to overwhelmingly negative reviews. A common criticism of the movie was that its connection to the original "Jem and the Holograms" was completely superficial.
BLEEDOUT is back! The next installment of the episodic campaign of CRIMECRAFT: BLEEDOUT is set to debut! Episode 6 will be released this Saturday, Feb. 5 at 7:00 p.m. EST (Midnight GMT / Server Time). In this first major content expansion to the Massively Multiplayer Online Role-Playing Shooter CRIMECRAFT, players will experience The New Age of Ruin through 10 consecutive weekly episodes featuring story content from a lineup of comic book industry superstars. Episode 6: THE ACHILLES HELIX Nowadays, you’re lucky if you don’t end up as a puddle of hair, teeth, and eyeballs. And it’s not just the violent combat running wild on the streets that can put your life at risk. Even the promises of “beneficial science” come with a risk, but if you’re one of the lucky ones to survive the program, you could stroll out of the lab more powerful than ever. But that’s a big if… This episode focuses on The Sons of Liberty, and the secret projects brewing in their military laboratories. The Player will participate in experiments that could either result in Uber-soldier abilities or tragedy. The mysterious disappearance of their former leader also starts to point toward dangerous secret agendas that could change the landscape of Sunrise City forever. Illustrated by Gary Erskine (THE FILTH, JACK CROSS, DAN DARE). In April 2011, Archaia Black Label and Vogster Entertainment will publish an original graphic novel hardcover based on BLEEDOUT, written by Mike Kennedy and collecting short stories drawn by Erskine and other top talents in the comic book industry, including Ben Templesmith, Tim Bradstreet, Glenn Fabry, Trevor Hairsine and Nathan Fox. Pre-order the book now at your local comics shop or wherever books are sold! Don’t miss out on the future of video game and graphic novel synergy! Download the free CRIMECRAFT client and start playing today!
Michael Polikoff, PLA, ASLA As seen on Angie’s List: “The project turned out marvelously! Michael Polikoff is truly an artist as well as an accomplished architect, a rare combination indeed. His understanding of synergy with geometric shapes, space utilization, and intriguing design features is very impressive. His knowledge of botanical varieties, and their adaptability to various environments and locations, is also enviable. Michael took my small, rectilinear back yard from a “ho-hum”, run-of-the-mill tract home layout to a “WOW!”. It is an eye-popping botanical garden sanctuary, complete with a unique custom cedar pergola, Saltillo-tile extended patio, six-foot lion’s head with two-tiered water fountain, and flowering “focus” areas surrounding a central lawn feature. I couldn’t have been happier with the design and final product. I requested plenty of color in the landscape design, and he delivered in style. He was very helpful in selecting a contractor to implement the design. I would strongly recommend Michael Polikoff to any commercial or residential client looking for a talented landscape architect possessing insight, vision, breadth of knowledge, and a successful track record. He is highly responsive, punctual in appointments, and always “on-call” should the need arise. For such a talented individual, his billing rates are very reasonable.” - Mr. and Mrs. Watson, Four Hills Area, Albuquerque “Thank you for two more nice designs. My clients are very happy! I am very happy too with your creative touch! Thanks so much.” - Ralph Amana, Developer, Landscape Contractor “Michael Polikoff has done several residential landscape design projects for me over the past 15 years, and has provided follow-up consultation until just recently. His work, in every case, has been exceptional and has proven to provide landscape plans that mature into really striking gardens! 
Our first contact was regarding the redesign of our residence, which included hardscape and plants. The result still draws positive comments from the new owners of the property, and our realtor who sold the property told us that the landscaping helped to sell the house for a record price. The next project was more complicated and larger, and involved a multi-phased approach to developing an older property with over 25 mature trees to suit the new house we were building. Michael created a long-range master plan and skillfully advised on which trees should be saved. The final phase involved hardscape, plantings to complement the existing older trees, and new lighting. The total investment was about $250,000, and the result is a truly great garden that continually receives compliments and a high level of interest from most people who see it. I’ve been very impressed with Michael’s professional skills in the area of landscape design, and his ability to think outside the box to achieve really exceptional results.” - Michael D. McCarver, Home Owner “Michael – Thanks! Everything is looking great and the project isn’t even quite done yet. We’re very happy!” - Vicki Sanders, Homeowner “Dear Michael – The landscaping of my front yard is finished – and it is beautiful! Everyone worked so hard and did such a good job on it. However, it would not be so elegant without your plan with its sweeping curves and pretty plants, some of which I had not known before. I can hardly wait for spring when they will all bloom. Thank you! Thank you!” - Mae Latimer, Home Owner “I have been working with Michael Polikoff in the landscape construction industry since 1992. For more than the last decade, my work in landscape construction, pool construction, and waterfall and spa construction has involved Michael Polikoff as our landscape architect and design specialist. 
We have coordinated on over 300 residential projects together of various sizes, many with construction budgets as high as $100,000 to $400,000. Mr. Polikoff’s professionalism and superior communication skills with our clients and business have been a pleasure to work with. His attention to detail, plant knowledge, construction techniques, creative design solutions, and warm customer relations have resulted in many happy clients!” - Alan Crownover, Owner, Paradise Pools and Gardens “I am pleased to recommend the landscape architecture services of Michael D. Polikoff. I worked with Michael for several years beginning in 1995, when he was the principal landscape architect used by Rinconada Hills Association. Rinconada Hills comprises 107 acres with 394 townhouses and 40 single-family homes. The community was developed from 1968 to 1981, and offers the special challenges of a mature landscape. The landscaped areas include turf, shrubs, ground cover, annual flowers, and natural areas including two lakes. In addition, there is an extensive automatic irrigation system which serves these areas. Michael’s challenge was to develop and assist in the implementation of a long-range plan for the replanting of the landscaped areas within Rinconada Hills. This included developing community standards, general design objectives, and specific working drawings. Naturally, all of this work had to be completed within established budget guidelines. I know I speak for the Landscape Committee and the Board of Directors of Rinconada Hills Association in stating that we are very pleased with the results of Michael’s efforts. He was able to easily identify the problems that needed correction and offer attractive, cost-efficient solutions. Therefore, I would recommend Michael to anyone in need of the comprehensive landscape architecture service that he provides.” - Tomas P. Stearns, General Manager, Rinconada Hills
Synergy is the integration of discrete entities that produces a total effect greater than the sum of the effects of the individual parts. The term "synergism" derives from the Greek 'synergia', meaning working together. In the natural world, synergism is observed in many phenomena: in chemistry, drugs taken together can react in different ways to produce varied effects; in biology, microbes interact, genes combine for varied effects, and even species like bees and other animals behave differently in groups. In the context of business, synergism is seen widely in mergers and acquisitions, alliances, and joint ventures. Generally, if synergistic benefits are foreseen through any deal or alliance, there are greater chances of the deal going through successfully. Jet Airways and Etihad Airways got into a deal because of synergistic benefits such as: - Funding for debt-laden Jet - Entry into the Indian market for Etihad - Better service delivery through collaborative efforts for both Synergism is also observed in organizational behaviour, where it is the ability of a group to perform much better than its strongest individual member.
Porter's (2008) five competitive forces provide a framework for organisations to analyse their industries and to develop strategies that create a competitive advantage. The framework can be used to evaluate the organisation's strategic position. Porter's (1996) strategic fit among activities includes consistency, reinforcement, and optimisation: a fit among activities is fundamental to achieving competitive advantage and sustainability. A fit creates an array of interlocked activities that becomes hard to imitate, whereas individual activities, for instance a management approach or a sales force, are easy for rivals to copy. Scheer's (2007) theory describes the intensity of control and connectivity within the organisation that facilitate creativity and communication. A high intensity of control ensures stability but inhibits connectivity; in contrast, high connectivity implies low control. Scheer (2007) suggests the 'edge of chaos' balances flexibility and stability so that an organisation's structure can be adaptive. This is useful for ABN to consider in achieving adaptability. Moore (2010) is concerned with the differentiation of core and context processes: the recycling of resources and the importance of repurposing resources from context processes to core processes to obtain funding and resources for innovation. ABNs that combine resources from partners to obtain competitiveness can be further discussed using Porter's strategies. Moore's (2010) process management and Scheer's (2007) approach to achieving an adaptive organisation structure can be analysed and extended to derive implications in the ABN context. These theories are usually used to analyse a single enterprise; to interpret ABN, a different approach is needed. From a holistic view, ABN can be seen as one virtual organisation, whereas at a lower level, ABN consists of multiple organisations that interact dynamically. 
The heterogeneous partners and their relationships in ABN are important aspects to consider when applying these business theories in the ABN environment. Porter's (2008 and 1996) strategies include the five competitive forces that help organisations analyse their industries, and trade-off positioning, which concerns focusing on one position so that efforts can be spent on the organisation's unique position [????]. The three types of fit among activities create an array of interlocked activities that yields a sustainable competitive advantage. Five Competitive Forces that Shape Strategy Porter (2008) discusses five important forces that determine competitive power in the business environment, shown in Figure 26: supplier power, buyer power, competitive rivalry, the threat of new entrants, and the threat of substitution. Suppliers are considered a threat to a company because they can easily push up the prices of raw materials and reduce profit margins: they hold the inputs or resources that the business requires, and the cost of switching to another supplier is high. The fewer the suppliers, the more powerful they are. Buyer power: buyers can drive a business's prices down. Individual buyers are important to a business, and the fewer the buyers, the more they can dictate terms. Competitive rivalry: businesses lose power by offering equally attractive products and services, as suppliers and customers are likely to switch impulsively. In contrast, businesses offering superior or unique goods or services have tremendous strength, even monopolies. The threat of substitutes depends on the ability of customers to find a different way to get the same service, for instance, the extent to which a different product or service can be used in place of yours. Easy and viable substitution weakens a business's power. 
The threat of new entrants: new entrants can easily enter the market and weaken a business's position and power, for instance by driving prices down, if the cost (capital, resources and effort) of entering is low and the time required is short, if there are few economies of scale, or if the technology can be easily implemented. The five forces model allows an organisation to understand where the power lies with respect to its suppliers, buyers, threats, rivals and substitutes. By understanding and identifying the strength and direction of each force, businesses can assess their current competitive position and the strength of a position that the company aims to move into. For instance, finding where the position is weakest, or an opportunity that is available to others, informs how to gain strength. The end goal is to reduce the power of each force, and to increase the organisation's power with respect to those forces. Figure 26: The five forces that shape industry competition (Porter, 2008) Porter (2008) outlines two critical points from the five forces: 1) identify and understand each force and how it affects the power of the organisation, and 2) find exposed opportunities or unique positions that could reduce the power of each force and obtain competitive advantage. Applying these key ideas in the ABN context reveals two perspectives: high and low level. Firstly, at the high level, ABN is seen as one virtual enterprise that dynamically interconnects partners with different strengths to deliver differentiated and valued products or services to targeted customers. ABN as a virtual enterprise essentially involves customers, suppliers, threats, substitutes and competition outside the ABN environment. ABN needs to consider each force and how it affects the power of the enterprise in order to find a unique positioning. 
For instance, there is the threat of others imitating this form of relationship and these offerings: buyer and supplier power will remain if the product is not differentiated, and competition still exists if others are offering similar types of products or services. However, the five forces and their definitions become more complex in ABN than in a single business due to ABN's features, including network features, interdependencies, collaboration, coordination, self-organisation and co-evolution. Seen from the lower perspective, ABN consists of one or more focal organisations and multiple actors interconnected to form a powerful network. Customer and supplier forces threaten ABN in one situation but may be beneficial in another. Customers in this context refer to other organisations that provide goods or services that ABN needs to deliver its products. The customers and suppliers that collaborate within the network could be working with other organisations or other networks outside the ABN environment. This gives them bargaining power over prices with the focal organisation within the ABN environment. In this circumstance, they are considered threatening; however, the threat can be substantially reduced if the partners are highly collaborative with the focal organisation. The level of the coordination and collaboration relationship influences the level of threat those partners pose to the focal organisation in ABN. This brings up two critical aspects of ABN: collaboration and coordination. Collaboration and coordination are important factors that determine how ABN functions and operates. The relationship between the focal organisation and the partners can take the form of coordination and/or collaboration. Nevertheless, the levels of coordination and collaboration can differ and have different effects. For instance, the relationship with suppliers can involve more collaboration and less coordination. 
More collaboration denotes that suppliers are willing to give up a portion of their profit margin for a long-term relationship with the focal organisation. Collaboration emphasises that members are equal and are equally motivated to participate in the network. In a collaborative environment, everyone is better off as a result of forming the network. A collaborative network supports high interaction, effective information sharing and high information visibility. Improvement of the network means improvement of the performance of every partner. In contrast, with more coordination and less collaboration, suppliers have low involvement in the network: the performance of the network does not affect the supplier as much. More coordination and less collaboration indicate that the focal organisation organises the individual firms to achieve a goal established by the focal organisation, which in this case focuses on sourcing skilled resources. The other firms, whose goals are efficiency and speed, benefit less than the main company when network performance improves. Collaboration increases involvement, interaction, dependence and close relationships with suppliers, reducing their bargaining power. When partners in the network share the focal organisation's goal, they work towards improving the overall performance of the network, which in turn increases the benefits to the individual firms. This creates the incentive that drives suppliers to offer the highest-quality resources, such as raw materials, at the best prices and promptly as required, thereby improving the performance of the whole network. The relationship with suppliers is thus best approached through collaboration rather than coordination. End customers are threats because they have the power to push prices down when other businesses offer similar products.
If the enterprise does not offer products or services that match customer needs, customers are likely to shift to other companies offering better ones. Customer threats cannot be eliminated, but they can be reduced when customers are loyal to the enterprise. Building customer relationships into the loyalty that reduces the likelihood of shifting to alternatives requires extensive marketing effort. An ABN, with its ability to bring skills and strengths together to provide differentiated and valued products and services, can draw customers away from bargaining over prices and towards the value the enterprise offers. This moves buyers' purchase decisions away from price and towards the attributes that make the product unique. An ABN can also cut out intermediaries (third parties, retailers, etc.) by selling directly to consumers. This reduces prices, further decreasing the customer threat. New entrants can threaten the position and profitability of the organisation if the cost and capital required are low. This threat can be reduced through approaches including economies of scale, branding, supply chain management to tie up suppliers and distributors, and alliances between products and services. Since individual companies do not have all the resources required to meet the opportunities in the market, to achieve a competitive edge they align with suppliers, customers, and distributors to streamline operations and work together to achieve a level of agility beyond the reach of individual companies (Lin, Chiu, & Chu, 2004).
ABNs that collaborate and coordinate with suppliers, customers, distributors, and other partners create networks with strong member linkages: tightly coupled relationships become hard to imitate, requiring extensive time and effort to form a competitive network. The threat of substitution can be reduced through increasing alliances, accentuating differences between offerings, or increasing switching costs for customers. An ABN can achieve these easily by collaborating with its partners. The ABN itself is a form of alliance, where the relationship is built on agreement and members unite to achieve a common goal as well as to secure their individual interests.
Trade-off of positions
Porter (1996) emphasises that a sustainable strategic position requires trade-offs. A strategic position is not sustainable unless there are trade-offs with other positions. Organisations with limited resources need to focus on one unique position and spend all their effort making that position successful and making themselves stand out from the crowd. A trade-off happens when activities are incompatible. For example, a luxury, high-value position for a product cannot coexist with a low-cost position. Trade-offs also create the need for choice and protect against straddlers and repositioners who try to imitate profitable positions or management techniques. Trade-offs in the ABN context likewise imply the need to choose what activities to perform and what not to perform. Porter (1996) discusses three reasons trade-offs arise: 1) to produce consistency in brand image, for instance a low-cost or high-value image; 2) trade-offs arise from the activities themselves: different positions require different product configurations, different employee behaviour, and different skills, resources, technology and systems.
3) Trade-offs arise from the limits on internal coordination and control; for instance, managers announce the strategy to compete in one way and not another, making clear what to perform and what not to perform. An ABN combines many skills and resources, creating great potential to perform a range of activities. However, making trade-offs limits what the enterprise offers, and this is what makes the enterprise stand out from the crowd and achieve uniqueness. The essence of trade-off strategy lies in what not to do. Sacrificing activities that do not support the core strategy of the company also saves resources that can contribute to developing a unique position and competitive edge. After the organisation establishes a unique position and makes its trade-offs, the next step is to plan and manage to ensure that the activities fit the strategies. Porter (1996) discusses three types of fit strategies for activities, which are discussed in the next section.
Strategic Fit among Activities
To achieve differentiation and value to the customer, Porter (1996) emphasises the importance of fit among activities within an organisation. Fit requires that activities be closely aligned and matched to create an interlocked array: activities must not be separate and isolated. Individual activities are easily imitated by straddlers, whereas interlocked activities are harder to replicate. Interlocked activities create a sustainable competitive advantage. The fit of a set of activities is critical, whereas discrete activities, such as a particular technology or a management technique like a sales-force approach, are not (Porter, 1996). Applied in the ABN context, this implies that the focal organisation needs to ensure that there is a fit between the activities performed by the partners. The three types of activity fit are simple consistency, reinforcement, and optimization of effort. Simple consistency emphasises that the fit between activities supports the overall strategy.
Consistency ensures that strategy and activity do not erode each other, for instance by aligning all activities to either a low-cost strategy or a high-value strategy. Since strategies are differentiated, the activities must fit the strategy; activities that deviate from the strategy should be eliminated. Consistency thus ensures that the competitive advantages of activities accumulate and do not cancel each other out. An ABN is a collection of heterogeneous organisations that bring their competencies, skills, and resources to achieve a common objective (Chituc, Toscano, & Azevedo, 2008). The activities performed by heterogeneous partners need to be aligned to the common goal. At the strategic level, consistency in an ABN implies the need for a common goal that leads the individual goals of the heterogeneous partners. Wycisk et al. (2008) describe heterogeneous agents within a network as having their own functions and individual goals; these agents are distinguished by different abilities, goals and patterns of action. A common goal ensures that each activity and each individual goal is aligned to the previously determined objectives that provide the core customer value. At the business process level, consistency implies the need for interoperability, which allows business processes to connect and operate to effectively offer products or services to customers. The ability to interoperate is also essential to facilitate effective communication and information visibility among activities (Evgenious, 2002; Heinrich & Betts, 2003). This in turn supports a common ABN goal. At the IS level, business integration is essential, as organisations run different business processes on different platforms and their methods are hard to connect. Business integration ensures that businesses have access to one another's data, thereby ensuring consistency of activities.
Reinforcement is ensuring that activities complement each other to deliver the message of the core value. The competitive advantage comes from the activities supporting the overall strategy, which may also lower marketing or operating costs. At the strategic level this implies that an ABN requires a combination of core competencies from selected organisations to deliver a core value to a group of targeted customers. Partners are selected to join the network for their best business activity; jointly, they can achieve what they cannot achieve alone. The idea of capability synergy emphasised by Janneck, Nagel, Schmid, Raim, Connolly & Moll (2009) also supports this. Synergy describes a smart business network of multiple members working in sequence and communicating to carry out the activities. Janneck et al. (2009) give the example of constructing a house: a contractor hires skilled plumbers, electricians, and construction workers instead of building a house from raw materials alone. Integrating different capabilities, hiring people to perform what they are good at, creates a synergistic environment. Agility is the speed with which organisations react to changing markets or customer demand (Sherehiy et al., 2007). To achieve such agility, activities must first be closely aligned, so that when the time comes for change, each organisation can respond synergistically and effectively. In other words, reinforcement better prepares an ABN to achieve agility in responding to its environment, whether in reconfiguring its business processes or in a shift of business strategy. With activities that are reinforced and aligned, change can be achieved effectively. Collaboration and coordination also contribute to the reinforcement of activities performed within the network. Firms that collaborate on the basis of agreement perform pre-agreed activities, thus minimizing incompatibility between activities.
Coordination involves some control: although collaboration exists, the focal organisation's coordination takes the initiative when implementing change, with the coordinator providing a general direction towards achieving the goal by influencing the organisations in the network. At the business process level, the ABN needs to consider the activities in order to choose potential partners. Reinforcing activities at the business process level implies composition: choreography, orchestration and improvisation. Choreography is partners agreeing to perform certain activities. Orchestration is the focal organisation influencing and guiding the operation. Improvisation is partners interacting, responding to the operations of others and constantly making adjustments. More insight into orchestration, choreography and improvisation is given in Section 2.3.2. At the IS level, standardization develops technical standards for common operating procedures agreed to by the network partners. Standardization of business processes is essential because organisations run different platforms and rules. Standardization means that the business processes within an ABN run on a consistent platform, ensuring smooth and effective operations between partners across organisational boundaries. Standardization also supports reinforcement, as processes running on a consistent platform are easier to interoperate, allowing fast implementation of changes in business processes. Standardization facilitates compatibility, interoperability and information sharing, thus supporting collaboration between partners. Optimization of effort focuses on coordination and information exchange across activities to minimize wasted effort, reduce redundancy and optimize results. Extending optimization of effort to the ABN's strategic level implies the need for agility and high quality.
High quality is an important factor for agility, as organisations responding to changes in customer demand must promptly produce high-quality products or services. Within an ABN, the interdependent relationships, where one decision can influence the rest of the network, indicate a trend of co-evolution over time: organisations are simultaneously adapting to one another (Wycisk et al., 2008). Co-evolution implies that organisations within the ABN react to, and influence, their environment. Interdependent relationships mean that organisations may respond sequentially to each other's actions (Wycisk et al., 2008); the decision of one organisation affects the decisions of the others. Wycisk et al. (2008) also point to the limited resources within the network, which force co-evolving adaptive responses within the network or between the network and its environment. Moore's (2010) innovation strategy defines optimization as eliminating redundant processes, automating processes, and streamlining processes to reduce complexity, risk and costs. This optimization focuses on increasing the productivity of context processes, with cost reduced as a result. Organisations then have the choice of either lowering their prices or increasing their profit margins. Optimization therefore ensures that organisations generate enough profit to reinvest or innovate. At the business process level, optimization further requires monitoring and controlling to reduce risks. Optimization considers costs, waste reduction, and achieving optimal results from the partnering relationship. This requires enterprises to focus on aligning every aspect of customer needs, and it promotes business efficiency and effectiveness while striving for innovation. For instance, BPM could make an enterprise's business processes more efficient and effective. At the IS level, to support optimization and agility, system modularity is essential.
Moore (2010) contends that business processes that are modularized can then be optimized. Modularity is the degree to which system components can be separated and recombined to form new business processes. Modularization involves deconstructing business processes into smaller components for easy reengineering and gains in productivity. This mix-and-match capability allows fast configuration and response to changes in the business environment. Table 1 summarises the three types of activity fit in the ABN context at three levels.
Table 1: Porter's strategic fit among activities in ABN context at three levels
Applying Scheer's (1996) 'Edge of Chaos' idea in the ABN environment, the level of coordination and collaboration between members of an ABN is particularly important. High coordination restricts connectivity, whereas high collaboration implies a very low level of control. Cooperative partnership is defined here as collaboration. If all members within the network share a cooperative partnership, the network has high connectivity and a low intensity of control; as discussed by Scheer (1996), this is a state of maximum flexibility and lowest stability. In contrast, if the network has only coordinated partnerships, it has a high intensity of control and low flexibility. Scheer (1996) emphasises balancing intensity of control against connectivity to achieve a balance between flexibility and stability, an optimum described as the 'Edge of Chaos' (Scheer, 2007). According to Figure 29, Area 1 describes high coordination and low collaboration: the enterprise has too much control, which lessens communication between network members and results in inflexibility. Area 3 describes a situation without rules or control: the enterprise faces chaos when every member collaborates with the others and there is very little control throughout, so informed decisions are unlikely to be made.
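The four areas of the edge-of-chaos diagram can be sketched as a small classifier over two axes, intensity of control (coordination) and connectivity (collaboration). The numeric thresholds are invented for illustration; only the four-area interpretation comes from the text:

```python
# Hypothetical classifier for Scheer's 'edge of chaos' areas.
# Inputs are normalised 0.0 .. 1.0; thresholds are illustrative.

def classify_area(coordination, collaboration):
    """Map (control intensity, connectivity) to Areas 1-4."""
    if coordination > 0.7 and collaboration < 0.3:
        return 1   # over-controlled: rigid, little communication
    if collaboration > 0.7 and coordination < 0.3:
        return 3   # no rules or control: chaos, uninformed decisions
    if collaboration > 0.5 and coordination >= 0.3:
        return 4   # edge of chaos: minimal control, maximal flexibility
    return 2       # stable, not stagnated, but not yet flexible

print(classify_area(0.9, 0.1))  # 1
print(classify_area(0.1, 0.9))  # 3
print(classify_area(0.4, 0.8))  # 4
print(classify_area(0.5, 0.4))  # 2
```

The point of the sketch is that Area 4 requires both dimensions to be present: enough coordination to avoid chaos, enough collaboration to avoid rigidity.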
Area 4 represents the optimal status, where connectivity and control are balanced: maximum collaboration with the minimal constraint or control needed to avoid rigidity or chaos. At this point the ABN has the minimum control required to ensure stability while retaining the flexibility and adaptability to respond to a changing environment and to achieve innovation and creativity. Area 2 describes a stable situation in which an enterprise has not yet stagnated but does not achieve the flexibility of Area 4.
Figure : Edge of chaos between collaboration and coordination
Figure : Edge of chaos (adapted from Scheer, 2007)
Achieving this balance between coordination and collaboration is difficult, as an ABN involves different interdependence relationships between partners. Considering this, an ABN needs to weigh two things: 1) the traits of the members, which Wycisk et al. (2008) describe as heterogeneous agents, and 2) the fact that an ABN is a demand-driven network (Christopher, 2000). These two factors determine whether the relationship between the focal organisation and the other partners (suppliers, distributors, manufacturers, and complementary organisations, i.e. those who have agreed to bundle their product or service with that of the network) is cooperative or coordinated. That the network is demand-driven means both collaboration and coordination are required in the ABN environment. End customers are a threat to the enterprise because they have the power to push down the price of products or services and their requirements are constantly changing. In order to meet customer needs, an enterprise requires strong collaboration to ensure it has the resources and strategies to match demand (Word, 2009). Collaboration increases engagement levels, provides mutual commitment, combines knowledge, and creates overall value that allows the enterprise to successfully respond to change (Li, Chou & Zajac, 2009).
Coordination is essential because the focal organisation takes the initiative in bringing a new strategy (change) to the whole network. This requires some degree of coordination, in which the focal organisation plans and organises the activities required to collaborate in meeting the objective (the common goal). Members within the network also collaborate with one another; for instance, supplier and manufacturer collaborate to ensure timely delivery of materials for production. However, there is minimal coordination between members, as they are different enterprises (suppliers, manufacturers, distributors) and it is unlikely that one would take control over the others. Figure 30 illustrates the idea that the focal organisation has strong collaboration with its suppliers, distributors, and manufacturers while also exercising some degree of coordination. The ABN needs to maintain its relationship with its customers to ensure constant feedback on their satisfaction and requirements. The diagram below provides an example of a possible approach to distributing collaboration and coordination levels between network partners. The focal organisation collaborates with partners and also coordinates at certain times. Members also collaborate by sharing critical information and learning from each other in order to be adaptive.
Figure : The dynamics of collaboration and coordination in ABN
Scheer (2007) also discusses the importance of choreography, orchestration and improvisation in business management. The terms come from the performing arts: orchestration from music, choreography from dance. Choreography is the plan of movements in a dance, and the choreographer is the planner. Once the overall dance and its steps have been determined to the satisfaction of the choreographer, the choreographer exerts no further control at performance time (Talbot, 2007).
This concept extends to the business context at the process level: in an orchestrated enterprise there is always a central point of control, whereas in a choreographed enterprise the central control is removed and each partner in the network knows what it is supposed to do in relation to all the other partners. To ensure that each organisation reinforces the others, there needs to be an orchestrator. At the beginning, the focal organisation selects partners that add value to the network in pursuit of a common goal. The partners are selected on the basis of agreement. Once the partners have joined, the focal organisation has minimal control over how well they operate; this depends on how the partners interact, that is, on improvisation. Scheer (2007) used jazz to describe the adaptive management of enterprises: when performing, the musicians are constantly communicating, listening and responding to each other, with the emphasis on the soloist. During an improvisation the soloist uses the structure of the lead sheet as a scaffold upon which to create new melodies on the spot. Applied to the business context, the process of improvisation is one of constant emergent change. Improvisation in the ABN context therefore means that partners operate with respect to what their partners are doing. By considering what others in the network are doing, business partners can adjust their behaviour to support and complement each other.
Moore's Business Process Lifecycle
At the business process level, business operations need to be planned and managed to implement the unique position. Differentiation means that the business's offerings are distinct from those of competitors. It is achieved by finding and amplifying a specific business activity from the core for innovation, in order to generate an 'unmatchable' differentiation. This includes managing the business process lifecycle of an enterprise (shown in Figure 31).
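The orchestration/choreography distinction drawn above can be sketched in code: under orchestration a central controller dictates the sequence of partner activities, while under choreography each partner reacts to the event emitted by the previous one and no central sequence exists. The partner names, handlers, and event model are invented for illustration:

```python
# Sketch of orchestration (central control) vs choreography
# (decentralised reactions). Each hypothetical partner handler
# returns (what it did, the event it emits next).

def orchestrate(handlers, plan):
    """Orchestration: a central controller invokes each partner in turn."""
    return [handlers[name]()[0] for name in plan]

def choreograph(handlers, event):
    """Choreography: no central control; each partner reacts to the
    event emitted by the previous one until no handler responds."""
    results = []
    while event in handlers:
        output, event = handlers[event]()
        results.append(output)
    return results

handlers = {
    "order":    lambda: ("supplier ships materials", "shipped"),
    "shipped":  lambda: ("manufacturer produces goods", "produced"),
    "produced": lambda: ("distributor delivers", "done"),
}

# Orchestrated: the focal organisation dictates the sequence.
print(orchestrate(handlers, ["order", "shipped", "produced"]))
# Choreographed: the same sequence emerges from partner-to-partner events.
print(choreograph(handlers, "order"))
```

Both runs produce the same result here, but only the choreographed version would continue to work if a partner changed the event it emits, which is the flexibility-versus-control trade-off the text describes.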
According to Moore (2010), an organisation engages in 'mission-critical activities and enabling activities.' Mission-critical activities refer to the innovation of core processes and the standardization of context processes; enabling activities include the invention of core processes and the commoditization of context processes. At the business process level, the focal organisation (actor) needs to differentiate its core processes from its context processes. A business's core drives business innovation and creates the competitive differentiation that sustains competitive advantage. Context processes are everything else that the organisation does. The goal of core is to create competitive advantage, whereas the goal of context is to meet market standards: performing context badly is punished by the market for failing its standards, but performing context brilliantly is not rewarded (Moore, 2010). Context processes are built up from core processes that have been imitated. Organisations engage in constant business process change: invention, innovation, standardization, and commoditization. The invention stage involves inventing business processes that can increase business performance and differ from competitors'. This is non-mission-critical core, as it involves extensive experiments and pilot projects, taking risks to pursue differentiation (Moore, 2010). When the invention is judged ready for prime time, it moves up to become mission-critical core. Innovation is the realization of the invented business processes to form the core process, with the intention of improving the business processes. When new business processes bring differentiation, new effort and resources are required to realize the benefits: a new marketing campaign and an expanded customer base, for instance. This is where organisations expect the highest return, as differentiation offers a distinct competitive advantage.
However, innovation often does not last: competitive differentiation is not sustained, because competitors always find ways to neutralize the advantage by imitating the business process or offerings. When this happens, the core processes become non-mission-critical context processes (standardization). Managers must change attitudes and ensure that context processes are done well, meeting market standards; the focus shifts from differentiation to productivity. This may include replacing talented people with automation to free up resources. Businesses must maximize resource extraction: mission-critical tasks must be moved to become non-mission-critical context processes. This minimizes the risk of tying up high-value resources in processes that can be outsourced to those who do them better at lower cost and effort. This process is described as commoditization. Outsourcing allows effort to be focused on core processes rather than non-mission-critical context processes. Outsourcing can also become a mission-critical activity through the composition process. Outsourcing is difficult because each organisation has its own systems and applications; it requires interoperability with external systems and integration of internal and external systems, which further requires managing the performance of third-party vendors against service-level agreements, going beyond normal organisational boundaries. The success of outsourcing relies on the ability of multiple systems with different logic, rules, and functions to work consistently with each other. Consolidation is difficult because it involves upgrading IT, acquiring new skills, and supporting integration with new applications. Applications are hard to integrate because of the various technologies they comprise and proprietary code that is too complex to integrate. The process of returning context processes to core processes requires composition.
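The lifecycle described above (invention, innovation, standardization, commoditization) can be summarised as a minimal state machine. The transition map below is a sketch of the text, not Moore's own formal model:

```python
# Minimal state machine for the Moore-style process lifecycle
# sketched in the text: invention -> innovation -> standardization
# -> commoditization. The comments note what drives each transition.

LIFECYCLE = {
    "invention":       "innovation",       # judged ready for prime time
    "innovation":      "standardization",  # competitors imitate the core
    "standardization": "commoditization",  # context moved out / outsourced
    "commoditization": None,               # returning to core needs composition
}

def advance(stage):
    """Advance a process one lifecycle step."""
    nxt = LIFECYCLE[stage]
    if nxt is None:
        raise ValueError("commoditized processes require composition to return to core")
    return nxt

stage = "invention"
history = [stage]
while LIFECYCLE[stage] is not None:
    stage = advance(stage)
    history.append(stage)
print(history)
```

The dead end at commoditization mirrors the text's point that the reverse path, context back to core, is not an automatic transition but a difficult composition effort.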
This is challenging, as it is not easy to leverage existing investments. Although some organisations use EAI to integrate enterprise applications, composition requires employees with specialized skills who understand both systems to create tightly coupled integration. In addition, it is not easy to turn tasks over to people less familiar with them, as they are less productive and efficient in performing them. Taking this business process lifecycle into the ABN context, an ABN is a network that finds high-value partners for their core competences to start collaborative processes (Camarinha-Matos, et al., 2009). The ABN and the other organisations with which it interoperates are perceived as one virtual entity, whose effectiveness depends on selecting suitable partners and processes to form a collaborative network. Each organisation is selected for its competencies: the core processes that the network needs to meet its customer demand. Moore's (2010) cycle of innovation discusses the importance of extracting and repurposing resources to obtain funds for reinvestment. This ensures that organisations do not have to constantly seek funding for inventing and implementing new core processes, which take extensive effort and resources, for instance the marketing effort to convince customers to use the new service or product offered by the organisation. The fact that organisations do not always have the financial support to constantly invent and scale up new core processes further underlines the importance of extracting resources from context to repurpose for core. Moore (2010) describes how innovation creates differentiation for businesses and therefore gives competitive advantage, by providing value for which customers are willing to pay premium prices. Without innovation, offerings become commoditized as they look more and more alike, and customers are able to play one vendor off against another.
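The extract-and-repurpose idea can be made concrete with a toy resource model: context processes give up a fraction of their resources, which are reassigned to core. The process names and the 30% extraction rate are invented for illustration; only the direction of the flow, context to core, comes from the text:

```python
# Illustrative model of Moore-style resource extraction: the 'levers'
# applied to context processes free resources to repurpose for core.

EXTRACTION_RATE = 0.3  # hypothetical fraction of context resources freed

def extract_for_core(processes):
    """processes: dict name -> {'kind': 'core'|'context', 'resources': n}.
    Mutates the dict, moving freed context resources into core processes.
    Returns the total amount freed."""
    freed = 0.0
    for p in processes.values():
        if p["kind"] == "context":
            cut = p["resources"] * EXTRACTION_RATE
            p["resources"] -= cut
            freed += cut
    # Repurpose freed resources into core processes, split evenly.
    cores = [p for p in processes.values() if p["kind"] == "core"]
    for p in cores:
        p["resources"] += freed / len(cores)
    return freed

processes = {
    "new_service": {"kind": "core",    "resources": 20.0},
    "billing":     {"kind": "context", "resources": 50.0},
    "hr_admin":    {"kind": "context", "resources": 30.0},
}
freed = extract_for_core(processes)
print(freed)                                  # 24.0
print(processes["new_service"]["resources"])  # 44.0
```

This is the mechanism by which an established enterprise funds new core without seeking outside investment: the larger the context pool, the more the levers can recover.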
When an organisation starts up, most of its resources are committed to core processes so as to differentiate it from competitors. As it matures and competitors imitate it, it must convert core to context while still adding new core in order to survive in the market. Over time, as more resources become context in the established enterprise, the ratio of core to context becomes inverted. Moore (2010) observes that context processes are the breeding ground of inertia. Inertia is resistance to change (SAP, 2005); in a business context it occurs when processes are stuck and resources become hard to repurpose. Established organisations with an inverted core-to-context ratio, even when able to fund processes, find it hard to stay competitive, because the inertia of context becomes so great that investment in core cannot overcome it and innovations cannot get to market. When resources are stuck in context, and organisations lack profit or investor support, the only way to overcome the growing inertia is to extract resources from context to repurpose for core. Quadrant three in Figure 31 is where the resources get stuck. Moore (2010) discusses the 'five levers', the management actions that reengineer mission-critical workloads to allow the extraction of resources: 1) Centralization brings operations under a single authority to reduce management overhead, free resources for reassignment, and create a single decision-making authority to manage operations. 2) Standardization puts similar processes under one authority, then reduces the variety and variability of resources delivering the same or similar output, to further reduce resource consumption and minimize risks. 3) Modularization deconstructs a product or process into subsystems that can be reengineered to increase productivity.
This requires specialized support, for instance from expert consultants or external agencies, to identify a simple process that meets input and output criteria for quality results. 4) Optimization aims to eliminate redundant processes, automate standard sequences, streamline processes, and reduce complexity and risk, while freeing up experienced people. 5) Outsourcing increases productivity by driving processes out of the enterprise to further reduce overhead, minimize investment, and free up resources. This strategy benefits organisations in three ways: 1) not having to seek additional investors, 2) not having to initiate cost-reduction programs to increase revenue for reinvestment, and 3) reducing the inertia that resists core, since the more resources move from context to core, the more powerful this becomes.
Figure : Cycle of innovation (adapted from Moore, 2010)
Summary of theories used in ABN context
The critical aspects of the strategies of Porter (1996, 2008), Moore (2010) and Scheer (2007) have been extended and applied in the ABN context. The tables below summarise the implications of each of these theories. Porter (1996, 2008) discusses three main strategies: the five competitive forces that allow an organisation to find its unique positioning, the trade-off of positions, and the three types of activity fit. Table 1 provides the summary of Porter's theories in the ABN context.
Table 1: Summary of Porter's strategy in ABN context
Scheer (2007) discusses the main concept of the 'edge of chaos', where intensity of control and connectivity are balanced, thereby achieving both stability and flexibility. The composition and orchestration concepts are extended in the ABN context for effective business process operations, as is improvisation, which achieves innovation. Table 2 shows the summary of Scheer's theories in the ABN context.
Table 2: Summary of Scheer's (2007) business management in ABN context
Moore's (2010) process lifecycle is critical for network organisations as well as for the whole ABN. Moore's (2010) continuous innovation concept allows for better management to free up resources and reinvest them in the core process to maintain its competitiveness. Table 3 shows the summary of Moore's theories in ABN context.
Table 3: Summary of Moore's business process lifecycle in ABN context
Sections 2.2 and 2.3 analysed definitions and concepts for ABN based on existing studies, together with the implications of the business theories. These provide a holistic view of what components an ABN should have in order to be successful. On this basis, existing ABN frameworks can be analysed to uncover their underlying problems. The next section analyses the existing frameworks of ABN.
Review Frameworks from Research
The existing frameworks for ABN, for instance the agile enterprise supply chain and the adaptive supply chain, are analysed to identify problems, issues, and requirements for ABN. Ivanov et al. (2010) propose an adaptive supply chain management (SCM) framework and see the framework as integrated; for example, SCM serves as a basis for integration, cooperation, and coordination (Figure 32). SCM agility and flexibility are achieved through core competencies, building virtual enterprises, and using technologies such as web services. SCM sustainability involves new product development, adapting, and complying with policy and social changes. The combination of supply chain management, agility, and sustainability produces Adaptive Supply Chain Management (ASCM). The ASCM framework has two main components: 1) supply chain management, which includes the integration, cooperation, and coordination of business partners in achieving the goal of the supply chain, and 2) agility, which supports SCM's sustainability. ASCM is described as having three drivers: its product and life cycles, customers, and suppliers.
The framework illustrates that agility and sustainability create adaptive supply chain management, which then provides profitability for the organisation. Profitability is achieved through means such as creating a competitive advantage, the ability to be responsive, cost-efficiency, and supply chain flexibility. The framework covers some of the concepts needed to achieve an adaptive supply chain. However, not all concepts are included (such as collaboration and continuous business process improvement), and the framework does not provide a good illustration of how the concepts relate to one another. The adaptive supply chain is basically depicted in three dimensions; nevertheless, to achieve adaptivity and agility, the relationships between the major components need to be better illustrated.
Figure: Framework of adaptive supply chain management (Ivanov et al., 2010)
Problems of 'Framework of adaptive supply chain management' (Ivanov et al., 2010)
Problem 1 Absence of continuous business process improvement
The framework covers the Product Life Cycle (PLC); however, the PLC is only a small part of enterprise improvement and does not imply Moore's (2010) continuous business process improvement. In the ABN context, process improvement becomes more complex as it involves considering improvement in relation to business process partners. In order to survive and remain sustainable in a competitive environment, enterprises need to continuously improve and innovate their business processes to meet these challenges.
Problem 2 Absence of collaboration
This framework depicts cooperation and coordination, which an ABN environment requires. Both need to be managed well to ensure the enterprise operates on the 'edge of chaos'. Cooperation is the partners joining for the success of the focal organisation's mission. Collaboration, however, emphasises the mutual benefit when firms work together; they join based on the mutual intention to improve the overall network performance.
Problem 3 Absence of information visibility
The framework covers web services, which support a certain level of agility, by which real-time demand and inventory can be detected. Porter (1996) describes information visibility as the ability to share the critical data required for informed decisions regarding products or services. Information visibility is facilitated through effective collaboration, which supports sharing information among multiple participants across the network.
Problem 4 Absence of standardization of business processes
The framework covers integration of business processes, which is different from standardization of business processes (Moore, 2010). Standardization highlights operational procedures: all network participants run the same business procedures or operating systems, while integration focuses on streamlining business operations to ensure communication among businesses; for instance, data and information can be transferred across the applications and data structures of the partners' systems.
Problem 5 Absence of alignment between individual goals and the common goal
Heterogeneous organisations have different characteristics, roles, functions, and responsibilities (Wycisk et al., 2008). The activities and operations performed by these partners need to be consistent with the common goal established through agreements and policies (Chituc et al., 2008). Porter (1996) discusses the importance of fit among activities in supporting the accomplishment of the core strategy. Heterogeneous partners work together based on agreement about what each of them will perform. This requires concise clarification of the distribution and classification of the activities to perform, and avoiding or minimizing activities that could jeopardize the core strategy.
Problem 6 Absence of effective communication to facilitate learning
The ABN is unstructured because its partners are heterogeneous and the relationships in the network are loosely coupled.
This therefore requires shared goals, mutual agreement, and high involvement of members to find and share critical information when it is required to make informed decisions. Heterogeneous partners learn from each other information they were unaware of; this supports learning among partners and in turn improves the knowledge of the network. This idea is also discussed by Rose-Anderssen et al. (2009) in their discussion of expansive learning, which happens when organisations interact and learn from each other, reaching beyond the boundaries of the organisations. Additionally, Rose-Anderssen et al. (2009) emphasised the importance of forming a learning community in which knowledge is transformed to produce innovative results that provide competitive ability.
Problem 7 Absence of management of resources
Sharing resources is a significant aspect of an ABN that requires careful management. Open and seamless sharing of resources is facilitated by legal agreements to minimize risks and conflicts. Peltoniemi and Vuori (2004), who describe the ABN as an ecosystem, emphasize that the most critical element in a natural ecosystem is the energy, which needs to be used efficiently for the ecosystem to prosper. Resources in an ABN are analogous to energy in an ecosystem and require efficient use in order for the organisation to increase its performance.
Problem 8 Absence of platform flexibility to allow connects/disconnects in the network
ABNs, as adaptive networks, are constantly engaged in renewing resources and in dynamically changing their partners and relationships in response to changes in the environment. The discussion of adaptability in Section 220.127.116.11 emphasizes that competitiveness is maintained through the ability to "pick, plug and play" (Heck and Vervest, 2007). This adaptability allows organisations to capture opportunities emerging in the market and accrue value to the network.
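In implementation terms, the "pick, plug and play" capability described above amounts to a registry that supports dynamic connect/disconnect of partners. The following is a minimal illustrative sketch; the class name, partner names, and capability labels are invented for the example and are not drawn from Heck and Vervest (2007):

```python
class BusinessNetwork:
    """Minimal registry supporting dynamic connect/disconnect of partners."""

    def __init__(self):
        self.partners = {}  # partner name -> set of capabilities offered

    def connect(self, name, capabilities):
        """Plug a partner into the network with the capabilities it offers."""
        self.partners[name] = set(capabilities)

    def disconnect(self, name):
        """Unplug a partner, e.g. when it no longer fits the network goal."""
        self.partners.pop(name, None)

    def pick(self, capability):
        """Pick the partners currently able to deliver a required capability."""
        return sorted(n for n, caps in self.partners.items() if capability in caps)


net = BusinessNetwork()
net.connect("LogisticsCo", {"transport", "warehousing"})
net.connect("FabLtd", {"manufacturing"})
net.connect("ShipFast", {"transport"})
net.disconnect("FabLtd")          # respond to a change in the environment
carriers = net.pick("transport")  # partners still offering transport
```

The point of the sketch is that adaptability comes from the loose coupling: partners are added and removed without restructuring the rest of the network, and "pick" is always evaluated against the current membership.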
Problem 9 Absence of constant review of strategic directions
Porter (2008) identified five competitive forces that pose threats to organisations. These require managing in order to acquire competitive advantage and be sustainable. Threats constantly emerge from customers, substitutes, suppliers, and new entrants. Being adaptable also implies the need to detect changes in the five forces and adjust strategic directions to improve business performance. Haeckel's (1999) sense-interpret-decide-act (SIDA) loop can also be used to analyse the framework in Figure 34 to identify whether the framework satisfies the basic requirement of sense and respond. Haeckel's (1999) SIDA loop describes how enterprises adapt to changes in the environment through sensing, interpreting, deciding, and acting. Sense is the ability to hear or see changes in the environment. The sensed information needs to be interpreted in terms of its implications for the organisation at different levels; for instance, the strategic direction and business processes may require change. The information sensed could be an opportunity or a threat, and the ability of the organisation to use the opportunity or to avoid the threat depends on the enterprise's ability to interpret the sensed information correctly. Once the information is interpreted, the organisation decides what needs to be changed in response to the environment. Lastly, the enterprise takes action on what has been decided. Sense and interpret are classified as "sensing", and decide and act as "responding".
Figure: SIDA adaptive enterprise framework (adapted from Haeckel, 1999)
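Haeckel's (1999) SIDA loop lends itself to a direct algorithmic reading. The minimal sketch below is illustrative only; the signal sources, the noise threshold, and the responses are invented for the example and are not taken from Haeckel. It shows how one pass of the loop separates the "sensing" steps (sense, interpret) from the "responding" steps (decide, act):

```python
from dataclasses import dataclass
from enum import Enum


class Assessment(Enum):
    OPPORTUNITY = "opportunity"
    THREAT = "threat"
    NOISE = "noise"


@dataclass
class Signal:
    source: str    # e.g. "customers", "suppliers", "new entrants"
    change: float  # magnitude of sensed change (+ favourable, - adverse)


def sense(environment):
    """Sense: collect raw change signals from the environment."""
    return [Signal(src, delta) for src, delta in environment.items()]


def interpret(signal):
    """Interpret: classify each sensed change for the organisation."""
    if abs(signal.change) < 0.1:            # illustrative noise threshold
        return Assessment.NOISE
    return Assessment.OPPORTUNITY if signal.change > 0 else Assessment.THREAT


def decide(signal, assessment):
    """Decide: choose what to change in response."""
    if assessment is Assessment.OPPORTUNITY:
        return f"invest in {signal.source}"
    if assessment is Assessment.THREAT:
        return f"adjust strategy toward {signal.source}"
    return None  # noise: no response needed


def act(decision):
    """Act: carry out the decision (here, simply record it)."""
    return decision


def sida_cycle(environment):
    """One pass of the sense-interpret-decide-act loop."""
    actions = []
    for signal in sense(environment):           # sensing
        assessment = interpret(signal)          # sensing
        decision = decide(signal, assessment)   # responding
        if decision:
            actions.append(act(decision))       # responding
    return actions


actions = sida_cycle({"customers": 0.4, "new entrants": -0.6, "suppliers": 0.05})
```

A correct interpretation step is what makes the loop adaptive: the same sensed magnitude produces investment when favourable and a strategy adjustment when adverse, while sub-threshold changes are filtered out as noise.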
TAIPEI--Intel's move to open up its Atom family of microprocessors to a wide spectrum of form factors will make it hard for its competitors to replicate that computing "experience", according to a senior Intel executive. Shmuel Eden, vice president and general manager of Intel Architecture Group's PC client group, noted that while competition is good and healthy for the industry, the company's "compute continuum"--which refers to its range of products from servers to smartphones and other handheld devices--provides users a unique synergy that helps differentiate Intel from the rest of the market. "What you're going to get from client devices, for example, within such a continuum in the future is that data stored within a laptop can be seamlessly transferred to a nearby desktop PC with similar x86 architecture," Eden told ZDNet Asia after his keynote presentation here Tuesday at Computex 2010. He expressed hopes that with the direction the company is now taking, it will no longer be viewed simply as a chipmaker, but as a computing powerhouse in its own right. "We don't just build microprocessors anymore but also provide software and services up the stack," he said. "In fact, a sizable portion of our employees are computing experts today."
Intel aims Atom beyond netbooks
At the conference here, Intel also announced various new Atom chips and use models in a bid to extend the processor's presence beyond its current primary market of netbook devices. According to Eden, the company is now targeting popular device categories including tablets and consumer electronics. Its collaboration with Google and Sony on the Google TV project is one example of how Intel is using its Atom line of solutions to branch out to other market segments, he said.
The Intel executive added that the company is "definitely playing in the tablet field" but was unable to predict exactly what type of tablet--whether one similar to Apple's touchscreen iPad slate or one that supports both touchscreen and keypad--will succeed in the market. David Perlmutter, executive vice president and general manager of Intel Architecture Group, also revealed that the Atom chip will be powering media signage and in-vehicle infotainment (IVI) devices. Perlmutter, who delivered a separate keynote presentation here Tuesday, said media signage is moving from "static, paper-based" delivery to rich-media, digital content, and Intel is looking to support the shift with its latest chips. He noted that instead of being "cannibalized" by the emerging tablet segment, netbooks are not on a decline. In fact, he said the device is the "fastest ramping device segment" of the past few years, ahead of popular devices such as Research in Motion's BlackBerry smartphone and Apple's iPod media player. Perlmutter added that with new Atom chips being introduced such as Intel's "Pine Trail" mobile dual-core processor and "Oak Trail" processors optimized for sleek netbook form factors, the company is looking to bring the device segment up a notch to "netbook 2.0". Pine Trail chips are expected to be baked and on shelves by "the winter holiday shopping season" later this year, according to the company, while Oak Trail will be made available from early 2011. The former promises to deliver greater battery life and a more responsive user experience, while the latter will offer full high-definition video playback and up to 50 percent reduction in average power consumption, Eden added. Kevin Kwang of ZDNet Asia reported from Computex 2010 in Taipei, Taiwan.
Hotfrog Singapore provides information regarding Exousia-productions in Singapore. Exousia-productions is located at 180 Cecil Street 1104 Bangkok Bank Building, Singapore 069546 and provides Web Design services. Contact them on +65 9641 7950 or by visiting their website http://www.exousia-productions.com.
Holmes, Bruce J.; Durham, Michael H.; Tarry, Scott E. This paper summarizes both the vision and the early public-private collaborative research for the Small Aircraft Transportation System (SATS). The paper outlines an operational definition of SATS, describes how SATS conceptually differs from current air transportation capabilities, introduces four SATS operating capabilities, and explains the relation between the SATS operating capabilities and the potential for expanded air mobility. The SATS technology roadmap encompasses on-demand, widely distributed, point-to-point air mobility, through hired-pilot modes in the nearer term and through self-operated user modes in the farther term. The nearer-term concept is based on aircraft and airspace technologies being developed to make smaller, more widely distributed community reliever and general aviation airports and their runways more useful in more weather conditions, in commercial hired-pilot service modes. The farther-term vision is based on technical concepts that could be developed to simplify or automate many of the operational functions in the aircraft and the airspace for meeting future public transportation needs, in personally operated modes. NASA technology strategies form a roadmap between the nearer-term concept and the farther-term vision. This paper outlines a roadmap for scalable, on-demand, distributed air mobility technologies for vehicle and airspace systems. The audiences for the paper include General Aviation manufacturers, small aircraft transportation service providers, the flight training industry, airport and transportation authorities at the Federal, state and local levels, and organizations involved in planning for future National Airspace System advancements.
Abbott, Terence S.; Consiglio, Maria C.; Baxley, Brian T.; Williams, Daniel M.; Jones, Kenneth M.; Adams, Catherine A. This document defines the Small Aircraft Transportation System (SATS) Higher Volume Operations concept.
The general philosophy underlying this concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA). Within the SCA, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. This document also provides details for a number of off-nominal and emergency procedures which address situations that could be expected to occur in a future SCA. The details for this operational concept, along with a description of candidate aircraft systems to support this concept, are provided.
Jabbal, M; Liddle, SC; Crowther, WJ Copyright @ 2010 American Institute of Aeronautics and Astronautics This paper considers the effect of the choice of actuator technology and associated power systems architecture on the mass cost and power consumption of implementing active flow control systems on civil transport aircraft. The research method is based on the use of a mass model that includes a mass due to systems hardware and a mass due to the system energy usage. An Airbus A320 aircraft wing is used as a case-study applicatio...
The algorithm of unconditional and conditional optimization of Markov models of transport-airplane maintenance systems, used in the improvement of their programs of technical operation, is considered.
Kemmerly, Guy T. To all peoples in all parts of the world throughout history, the ability to move about easily is a fundamental element of freedom. The American people have charged NASA to increase their freedom and that of their children, knowing that their quality of life will improve as our nation's transportation systems improve. In pursuit of this safe, reliable, and affordable personalized air transportation option, in 2000 NASA established the Small Aircraft Transportation System (SATS) Project. As the name suggests, personalized air transportation would be built on smaller aircraft than those used by the airlines.
Of course, smaller aircraft can operate from smaller airports and 96% of the American population is within thirty miles of a high-quality, underutilized community airport as are the vast majority of their customers, family members, and favorite vacation destinations. Abbott, Terence S.; Jones, Kenneth M.; Consiglio, Maria C.; Williams, Daniel M.; Adams, Catherine A. This document defines the Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) concept for normal conditions. In this concept, a block of airspace would be established around designated non-towered, non-radar airports during periods of poor weather. Within this new airspace, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Using onboard equipment and procedures, they would then approach and land at the airport. Departures would be handled in a similar fashion. The details for this operational concept are provided in this document. Airworthiness certification of commercial transport aircraft requires a safety analysis of the propulsion system to establish that the probability of a failure jeopardising the safety of the aeroplane is acceptably low. The needs and desired features of such a propulsion system safety analysis are discussed, and current techniques and assumptions employed in such analyses are evaluated. It is concluded that current assumptions and techniques are not well suited to predicting... Stough, H. Paul, III Atmospheric effects on aviation are described by Mahapatra (1999) as including (1) atmospheric phenomena involving air motion - wind shear and turbulence; (2) hydrometeorological phenomena - rain, snow and hail; (3) aircraft icing; (4) low visibility; and (5) atmospheric electrical phenomena. Aircraft Weather Mitigation includes aircraft systems (e.g. 
airframe, propulsion, avionics, controls) that can be enacted (by a pilot, automation or hybrid systems) to suppress and/or prepare for the effects of encountered or unavoidable weather or to facilitate a crew operational decision-making process relative to weather. Aircraft weather mitigation can be thought of as a continuum (Figure 1) with the need to avoid all adverse weather at one extreme and the ability to safely operate in all weather conditions at the other extreme. Realistic aircraft capabilities fall somewhere between these two extremes. The capabilities of small general aviation aircraft would be expected to fall closer to the "Avoid All Adverse Weather" point, and the capabilities of large commercial jet transports would fall closer to the "Operate in All Weather Conditions" point. The ability to safely operate in adverse weather conditions is dependent upon the pilot's capabilities (training, total experience and recent experience), the airspace in which the operation is taking place (terrain, navigational aids, traffic separation), the capabilities of the airport (approach guidance, runway and taxiway lighting, availability of air traffic control), as well as the capabilities of the airplane. The level of mitigation may vary depending upon the type of adverse weather. For example, a small general aviation airplane may be equipped to operate "in the clouds" without outside visual references, but not be equipped to prevent airframe ice that could be accreted in those clouds.
Rising, J. J.; Davis, W. J.; Grantham, W. D. The use of modern control theory to develop a high-authority stability and control system for the next generation of transport aircraft is described, with examples taken from work performed on an advanced pitch active control system (PACS).
The PACS was configured to have short-period and phugoid mode frequency and damping characteristics within the shaded S-plane areas, column force gradients with set bounds and constant slope, and a blended normal-acceleration/pitch-rate time history response to a step command. Details of the control law, feedback loop, and modal control syntheses are explored, as are compensation for the feedback gain, the deletion of the velocity signal, and the feed-forward compensation. Scheduling of the primary and secondary gains is discussed, together with control law mechanization, flying qualities analyses, and application on the L-1011 aircraft.
N. Shantha Kumar A new avionics concept called the integrated enhanced and synthetic vision system (IESVS) is being developed to enable flight operations during adverse weather/visibility conditions even at non-precision airfields. This paper presents the latest trends in IESVS, the design concept of the system, and the work being carried out at National Aerospace Laboratories, Bangalore towards indigenous development of the same for transport aircraft. Defence Science Journal, 2013, 63(2), pp. 157-163, DOI: http://dx.doi.org/10.14429/dsj.63.4258
Dollyhigh, Samuel M.; Yackovetsky, Robert E. (Technical Monitor) An analysis was conducted to examine the market viability of small aircraft as a transportation mode in competition with automobile and scheduled commercial air travel by estimating the pool of users that would potentially switch to on-demand air travel due to cost/time savings. The basis for the analysis model was the Integrated Air Transportation System Evaluation Tool (IATSET), which was developed under contract to NASA by the Logistics Management Institute. IATSET is a macroeconomic model that predicts at a national level the mode choice between automobile, scheduled air, and on-demand air travel based on the value of a traveler's time and the monetary cost of the trip.
A number of modifications to the original IATSET are detailed to better model the changing small aircraft environment. The potential trip market was modeled for the Eclipse 500 operated as a corporate jet and as an air taxi for the business travel market. The Cirrus 20R and an $80K single engine piston aircraft (based on automobile manufacturing technology) are evaluated in the pleasure and personal business travel market.
Long, Dou; Lee, David; Johnson, Jesse; Kostiuk, Peter; Yackovetsky, Robert (Technical Monitor) The Small Aircraft Transportation System (SATS) demand modeling is a tool that will be useful for decision-makers to analyze SATS demands in both airports and airspace. We constructed a series of models following the general top-down, modular principles of systems engineering. There are three principal models: the SATS Airport Demand Model (SATS-ADM), the SATS Flight Demand Model (SATS-FDM), and LMINET-SATS. SATS-ADM models SATS operations, by aircraft type, from forecasts of fleet, configuration and performance, utilization, and traffic mixture. Given SATS airport operations such as the ones generated by SATS-ADM, SATS-FDM constructs the SATS origin and destination (O&D) traffic flow based on the solution of the gravity model, from which it then generates SATS flights using Monte Carlo simulation based on the departure time-of-day profile. LMINET-SATS, an extension of LMINET, models SATS demands on airspace and airports by all aircraft operations in the US. The models use parameters to provide the user with flexibility and ease of use in generating SATS demand for different scenarios. Several case studies are included to illustrate the use of the models, which are useful to identify the need for a new air traffic management system to cope with SATS.
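The two modelling steps attributed to SATS-FDM above (a gravity model for O&D traffic flow, then Monte Carlo sampling of flights from a departure time-of-day profile) can be sketched as follows. This is not the LMI implementation: the populations, distance, calibration constant, distance-squared damping, and time-of-day profile are all invented for illustration.

```python
import random


def gravity_flow(pop_i, pop_j, distance, k=1e-6):
    """Gravity model: O&D trip demand proportional to the product of the
    origin and destination 'masses', damped by distance squared."""
    return k * pop_i * pop_j / distance ** 2


def sample_departures(n_flights, profile, rng):
    """Monte Carlo step: draw a departure hour for each flight from a
    discrete time-of-day profile (hour -> probability weight)."""
    hours = list(profile)
    weights = [profile[h] for h in hours]
    return [rng.choices(hours, weights=weights)[0] for _ in range(n_flights)]


# Illustrative inputs (invented): two airport catchment populations,
# distance in miles, and a simple morning/evening-peaked profile.
demand = gravity_flow(pop_i=500_000, pop_j=200_000, distance=250)
n_flights = round(demand)

profile = {7: 0.3, 12: 0.2, 17: 0.35, 21: 0.15}
rng = random.Random(42)  # fixed seed so the simulation is repeatable
departures = sample_departures(n_flights, profile, rng)
```

The design follows the text's top-down modularity: the gravity step fixes how many flights flow on each O&D pair, and the Monte Carlo step independently decides when those flights depart.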
Acosta, Diana M.; Guynn, Mark D.; Wahls, Richard A.; DelRosario, Ruben The future of aviation will benefit from research in aircraft design and air transportation management aimed at improving efficiency and reducing environmental impacts. This paper presents civil transport aircraft design trends and opportunities for improving vehicle- and system-level efficiency. Aircraft design concepts and the emerging technologies critical to reducing thrust specific fuel consumption, reducing weight, and increasing lift-to-drag ratio currently being developed by NASA are discussed. Advancements in the air transportation system aimed toward system-level efficiency are discussed as well. Finally, the paper describes the relationship between the air transportation system, aircraft, and efficiency. This relationship is characterized by operational constraints imposed by the air transportation system that influence aircraft design, and operational capabilities inherent to an aircraft design that impact the air transportation system.
Bowen, Brent D.; Holmes, Bruce J.; Hansen, Frederick The National Aeronautics and Space Administration (NASA), U.S. Department of Transportation, Federal Aviation Administration, industry stakeholders, and academia have joined forces to pursue the NASA National General Aviation Roadmap leading to a Small Aircraft Transportation System (SATS). This strategic undertaking has a 25-year goal to bring next-generation technologies and improve travel between remote communities and transportation centers in urban areas by utilizing the nation's 5,400 public-use general aviation airports. To facilitate this initiative, a comprehensive upgrade of public infrastructure must be planned, coordinated, and implemented within the framework of the national air transportation system.
The Nebraska NASA EPSCoR Program has proposed to deliver research support in key public infrastructure areas in coordination with the General Aviation Program Office at the NASA Langley Research Center. Ultimately, SATS may permit tripling aviation system throughput capacity by tapping the underutilized general aviation facilities to achieve the national goal of doorstep-to-destination travel at four times the speed of highways for the nation's suburban, rural, and remote communities.
Optical communications for transport aircraft are discussed. The problem involves: increasing demand for radio-frequency bands from an enlarging pool of users (aircraft, ground and sea vehicles, fleet operators, traffic control centers, and commercial radio and television); the desirability of providing high-bandwidth dedicated communications to and from every aircraft in the National Airspace System; the need to support communications, navigation, and surveillance for a growing number of aircraft; and improved meteorological observations by use of probe aircraft. The solution involves: optical signal transmission to support very high data rates; optical transmission of signals between aircraft, orbiting satellites, and ground stations, where unobstructed line-of-sight is available; conventional radio transmission of signals between aircraft and ground stations, where optical line-of-sight is unavailable; and radio priority given to aircraft in weather.
Coleman, Anthony S.; Hansen, Irving G. NASA is pursuing a program in Advanced Subsonic Transport (AST) to develop the technology for a highly reliable Fly-By-Light/Power-By-Wire aircraft. One of the primary objectives of the program is to develop the technology base for confident application of integrated PBW components and systems to transport aircraft to improve operating reliability and efficiency.
Technology will be developed so that the present hydraulic and pneumatic systems of the aircraft can be systematically eliminated and replaced by electrical systems. These motor-driven actuators would move the aircraft wing surfaces as well as the rudder to provide steering controls for the pilot. Existing aircraft electrical systems are not flight critical and are prone to failure due to Electromagnetic Interference (EMI) (1), ground faults and component failures. In order to successfully implement electromechanical flight control actuation, a Power Management and Distribution (PMAD) System must be designed having a reliability of one failure in 10^9 hours, EMI hardening, and a fault-tolerant architecture to ensure uninterrupted power to all aircraft flight critical systems. The focus of this paper is to analyze, define, and describe the technically challenging areas associated with the development of a Power-By-Wire aircraft and the typical requirements to be established at the box level. The authors propose areas of investigation, citing specific military standards and requirements that need to be revised to accommodate the 'More Electric Aircraft Systems'.
Fly-by-wire flight control systems are becoming more common in both civil and military aircraft. These systems give many benefits, but also present a new set of problems due to their increased complexity compared to conventional systems and the larger choice of options that they provide. The work presented here considers the application of fly-by-wire to a generic regional transport aircraft. The flying qualities criteria used for typical flying qualities evaluations are described...
Galvin, James J., Jr. The National Aeronautics and Space Administration (NASA) is leading a research effort to develop a Small Aircraft Transportation System (SATS) that will expand air transportation capabilities to hundreds of underutilized airports in the United States.
Most of the research effort addresses the technological development of the small aircraft as well as the systems to manage airspace usage and surface activities at airports. The Federal Aviation Administration (FAA) will also play a major role in the successful implementation of SATS; however, the administration is reluctant to embrace the unproven concept. The purpose of the research presented in this dissertation is to determine if the FAA can pursue a resource management strategy that will support the current radar-based Air Traffic Control (ATC) system as well as a Global Positioning Satellite (GPS)-based ATC system required by the SATS. The research centered on the use of the System Dynamics modeling methodology to determine the future behavior of the principal components of the ATC system over time. The research included a model of the ATC system consisting of people, facilities, equipment, airports, aircraft, the FAA budget, and the Airport and Airways Trust Fund. The model generated system performance behavior used to evaluate three scenarios. The first scenario depicted the base case behavior of the system if the FAA continued its current resource management practices. The second scenario depicted the behavior of the system if the FAA emphasized development of GPS-based ATC systems. The third scenario depicted a combined resource management strategy that supplemented radar systems with GPS systems. The findings of the research were that the FAA must pursue a resource management strategy that primarily funds a radar-based ATC system and directs lesser funding toward a GPS-based supplemental ATC system. The most significant contribution of this research was the insight and understanding gained of how
Holmes, Bruce J.
This paper presents trends and forces that shape 21st-century demand for higher-speed personal air transportation and outlines guidance developed by NASA, in partnership with other federal and state government and industry partners, for Small Aircraft Transportation System (SATS) investment and partnership planning. Jones, D. R.; Parrish, R. V.; Person, L. H., Jr.; Old, J. L. With the advent of digital avionics, the workload of the pilot in a modern transport aircraft is increasing significantly. This situation makes it necessary to reduce pilot workload with the aid of new advanced technologies. As part of an effort to improve information management systems, NASA has, therefore, studied an advanced concept for managing the navigational tasks of a modern transport aircraft. This concept is mainly concerned with the simplification of the pilot interface. The advanced navigational system provides a simple method for a pilot to enter new waypoints to change his flight plan because of heavy traffic, adverse weather conditions, or other reasons. The navigational system was implemented and evaluated in a flight simulator representative of a modern transport aircraft. Attention is given to the simulator, flight simulation, multimode devices, and the navigational system. Liddle, Stephen C; Crowther, William J.; Jabbal, Mark This article is placed here with permission from the Royal Aeronautical Society - Copyright @ 2009 Royal Aeronautical Society The use of flow control (FC) technology on civil transport aircraft is seen as a potential means of providing a step change in aerodynamic performance in the 2020 time frame. There has been extensive research into the flow physics associated with FC. This paper focuses on developing an understanding of the costs and design drivers associated with the systems needed ... Viken, Sally A.; Brooks, Frederick M.; Johnson, Sally C.
It has become evident that our commercial air transportation system is reaching its peak in terms of capacity, with numerous delays in the system and the demand still steadily increasing. NASA, FAA, and the National Consortium for Aviation Mobility (NCAM) have partnered to aid in increasing the mobility throughout the United States through the Small Aircraft Transportation System (SATS) project. The SATS project has been a five-year effort to provide the technical and economic basis for further national investment and policy decisions to support a small aircraft transportation system. The SATS vision is to enable people and goods to have the convenience of on-demand point-to-point travel, anywhere, anytime for both personal and business travel. This vision can be obtained by expanding near all-weather access to more than 3,400 small community airports that are currently under-utilized throughout the United States. SATS has focused its efforts on four key operating capabilities that have addressed new emerging technologies, procedures, and concepts to pave the way for small aircraft to operate in nearly all weather conditions at virtually any runway in the United States. These four key operating capabilities are: Higher Volume Operations at Non-Towered/Non-Radar Airports, En Route Procedures and Systems for Integrated Fleet Operations, Lower Landing Minimums at Minimally Equipped Landing Facilities, and Increased Single Pilot Performance. The SATS project culminated with the 2005 SATS Public Demonstration in Danville, Virginia on June 5th-7th, by showcasing the accomplishments achieved throughout the project and demonstrating that a small aircraft transportation system could be viable. The technologies, procedures, and concepts were successfully demonstrated to show that they were safe, effective, and affordable for small aircraft in near all weather conditions. 
The focus of this paper is to provide an overview of the technical and operational feasibility of the Tarry, Scott E.; Bowen, Brent D.; Nickerson, Jocelyn S. The aviation industry is an integral part of the world's economy. Travelers have consistently chosen aviation as their mode of transportation as it is reliable, time-efficient, and safe. The outdated hub-and-spoke system, coupled with high demand, has led to delays, cancellations, and gridlock. NASA is developing innovative solutions to these and other air transportation problems. This research is being conducted through partnerships with federal agencies, industry stakeholders, and academia, specifically the University of Nebraska at Omaha. Each collaborator is pursuing the NASA General Aviation Roadmap through their involvement in the expansion of the Small Aircraft Transportation System (SATS). SATS will utilize technologically advanced small aircraft to transport travelers to and from rural and isolated communities. Additionally, this system will provide a safe alternative to the hub-and-spoke system, giving more time to more people through high-speed mobility and increased accessibility. Viken, Sally A.; Brooks, Frederick M. The Small Aircraft Transportation System (SATS) project has been a five-year effort fostering research and development that could lead to the transformation of our country's air transportation system. It has become evident that our commercial air transportation system is reaching its peak in terms of capacity, with numerous delays in the system while demand keeps steadily increasing. The SATS vision is to increase mobility in our nation's transportation system by expanding access to more than 3400 small community airports that are currently under-utilized. The SATS project has focused its efforts on four key operating capabilities that have addressed new emerging technologies and procedures to pave the way for a new way of air travel.
The four key operating capabilities are: Higher Volume Operations at Non-Towered/Non-Radar Airports, En Route Procedures and Systems for Integrated Fleet Operations, Lower Landing Minimums at Minimally Equipped Landing Facilities, and Increased Single Pilot Performance. These four capabilities are key to enabling low-cost, on-demand, point-to-point transportation of goods and passengers utilizing small aircraft operating from small airports. The focus of this paper is to discuss the technical and operational feasibility of the four operating capabilities and demonstrate how they can enable a small aircraft transportation system. Abbott, Terence S.; Consiglio, Maria C.; Baxley, Brian T.; Williams, Daniel M.; Conway, Sheila R. This document expands the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept to include off-nominal conditions. The general philosophy underlying the HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA). During periods of poor weather, a block of airspace would be established around designated non-towered, non-radar airports. Aircraft flying enroute to a SATS airport would be on a standard instrument flight rules flight clearance with Air Traffic Control providing separation services. Within the SCA, pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft. Previous work developed the procedures for normal HVO operations. This document provides details for off-nominal and emergency procedures for situations that could be expected to occur in a future SCA. Williams, Daniel M.; Murdoch, Jennifer L.; Adams, Catherine H. This paper provides a summary of conclusions from the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) Flight Experiment which NASA conducted to determine pilot acceptability of the HVO concept for normal conditions.
The SATS HVO concept improves efficiency at non-towered, non-radar airports in Instrument Meteorological Conditions (IMC) while achieving a level of safety equal to today's system. Reported are results from flight experiment data that indicate that the SATS HVO concept is viable. The success of the SATS HVO concept is based on acceptable pilot workload, performance, and subjective criteria when compared to the procedural control operations in use today at non-towered, non-radar controlled airfields in IMC. The HVO Flight Experiment, flown on NASA's Cirrus SR22, used a subset of the HVO Simulation Experiment scenarios and evaluation pilots in order to validate the simulation experiment results. HVO and Baseline (today's system) scenarios flown included: a single aircraft arriving for a GPS non-precision approach; aircraft arriving for the approach with multiple traffic aircraft; and aircraft arriving for the approach with multiple traffic aircraft and then conducting a missed approach. Results reveal that all twelve low-time instrument-rated pilots preferred SATS HVO when compared to current procedural separation operations. These pilots also flew the HVO procedures safely and proficiently without additional workload in comparison to today's system (Baseline). Detailed results of pilot flight technical error and their subjective assessments of workload and situation awareness are presented in this paper. Tarry, Scott E.; Bowen, Brent D. America's air transport system is currently faced with two equally important dilemmas. First, congestion and delays associated with the overburdened hub-and-spoke system will continue to worsen unless dramatic changes are made in the way air transportation services are provided. Second, many communities and various regions of the country have not benefited from the air transport system, which tends to focus its attention on major population centers.
An emerging solution to both problems is a Small Aircraft Transportation System (SATS), which will utilize a new generation of advanced small aircraft to provide air transport services to those citizens who are poorly served by the hub-and-spoke system and those citizens who are not served at all. Using new innovations in navigation, communication, and propulsion technologies, these aircraft will enable users to safely and reliably access the over 5,000 general aviation landing facilities around the United States. A small aircraft transportation system holds the potential to revolutionize the way Americans travel and to greatly enhance the use of air transport as an economic development tool in rural and isolated communities across the nation. Baxley, B.; Williams, D.; Consiglio, M.; Conway, S.; Adams, C.; Abbott, T. The ability to conduct concurrent, multiple aircraft operations in poor weather, at virtually any airport, offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of charter operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase traffic flow at any of the 3400 non-radar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during Instrument Meteorological Conditions (IMC). The concept's key feature is that pilots maintain their own separation from other aircraft using procedures, aircraft flight data sent via air-to-air datalink, cockpit displays, and on-board software. This is done within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility or low ceilings around an airport without Air Traffic Control (ATC) services.
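Self-separation from air-to-air datalink state data, as described for the SCA above, ultimately reduces to geometry: given two aircraft states, when and how close is the closest point of approach? The abstracts do not specify the on-board algorithm, so the following is only an illustrative sketch of such a check, with made-up units and thresholds:

```python
def closest_approach(p1, v1, p2, v2):
    """Closest point of approach for two aircraft modeled with constant
    2-D positions (nautical miles) and velocities (knots).
    Returns (time_hours, miss_distance_nmi)."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])   # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    # Parallel tracks: separation never changes, so CPA is "now".
    t = 0.0 if dv2 == 0 else max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    cx, cy = dp[0] + dv[0] * t, dp[1] + dv[1] * t
    return t, (cx * cx + cy * cy) ** 0.5

# Head-on geometry: 10 nmi apart, each at 120 kt, closing.
t_cpa, d_cpa = closest_approach((0, 0), (120, 0), (10, 0), (-120, 0))
# CPA occurs at 10/240 h (2.5 min) with zero miss distance: a conflict
# if d_cpa falls below whatever separation minimum the procedure defines.
```

A real implementation would of course work on broadcast state vectors, apply defined separation minima, and handle vertical geometry; the sketch only shows the core test.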
The research described in this paper expands the HVO concept to include most off-nominal situations that could be expected to occur in a future SATS environment. The situations were categorized into routine off-nominal operations, procedural deviations, equipment malfunctions, and aircraft emergencies. The combination of normal and off-nominal HVO procedures provides evidence for an operational concept that is safe, requires little ground infrastructure, and enables concurrent flight operations in poor weather. Avionics of the present day comprises advanced technology and software-intensive systems. Earlier generation avionics constituted a federated architecture and used line replaceable units (LRUs) having individual resources for each application with redundant hardware and software. However, with the advancement of technology, methods, and mechanisms, the industry moved quite rapidly towards the integrated architecture called integrated modular avionics (IMA). Over the last decade there has been tremendous growth in these technologies, which has resulted in reduced weight, volume, and developmental efforts. Usage of complex systems with advanced technologies and their certification for use in civil aircraft are the key issues to be addressed even today. Avionics of general aviation aircraft consists of typical systems like communication, navigation, display, radar, engine indication, and data acquisition and recording systems. These can be realised in federated as well as integrated architectures. The LRU requirements for avionics sub-systems depend on the certification standards like FAR 23 or FAR 25. The whole cycle of architecture definition, integration, testing, and means of compliance of the complete suite is the major activity in any new aircraft development programme. Development of ground-based test facilities and proper maintenance of the entire system on aircraft are other important activities in such programmes.
These issues are presented in this paper for a typical light transport aircraft (LTA). The new technologies, with their relevance, merits/demerits, awareness of the global systems being adopted, etc., which are being attempted as indigenous design and development, are also presented. Defence Science Journal, 2011, 61(4), pp. 289-298, DOI: http://dx.doi.org/10.14429/dsj.61.1090 Carreno, Victor A.; Gottliebsen, Hanne; Butler, Ricky; Kalvala, Sara New concepts for automating air traffic management functions at small non-towered airports raise serious safety issues associated with the software implementations and their underlying key algorithms. The criticality of such software systems necessitates that strong guarantees of their safety be developed for them. In this paper we present a formal method for modeling and verifying such systems using the PVS theorem proving system. The method is demonstrated on a preliminary concept of operation for the Small Aircraft Transportation System (SATS) project at NASA Langley. Williams, Daniel; Consiglio, Maria; Murdoch, Jennifer; Adams, Catherine This document provides a preliminary validation of the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept for normal conditions. Initial results reveal that the concept provides reduced air traffic delays when compared to current operations without increasing pilot workload. Characteristic of the SATS HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA), which would be activated by air traffic control (ATC) around designated non-towered, non-radar airports. During periods of poor visibility, SATS pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft in the SCA. Using onboard equipment and simple instrument flight procedures, they would then be better able to approach and land at the airport or depart from it.
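The PVS-based verification mentioned above is far beyond a snippet, but the core idea it exploits (exhaustive exploration of every reachable state, rather than sampled testing) can be shown on a toy model. Everything below is illustrative and is not the SATS concept of operations: a two-aircraft state machine is searched breadth-first for any reachable state violating a "one aircraft on the runway" safety invariant.

```python
from collections import deque

def explore(initial, transitions, invariant):
    """Breadth-first exhaustive exploration of a finite state machine.
    Returns a reachable state that violates the invariant, or None."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy model: counts of aircraft (holding, on_approach, on_runway), 2 total.
def transitions(s):
    h, a, r = s
    out = []
    if h:
        out.append((h - 1, a + 1, r))       # leave holding fix, begin approach
    if a and r == 0:
        out.append((h, a - 1, r + 1))       # land only when the runway is free
    if r:
        out.append((h, a, r - 1))           # landed aircraft clears the runway
    return out

violation = explore((2, 0, 0), transitions, lambda s: s[2] <= 1)
# violation is None: the landing guard keeps the invariant in every state.
```

Unlike flight test or simulation, this check covers every reachable state of the model; the hard part, which PVS addresses, is proving such properties for infinite or continuous (hybrid) state spaces.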
This concept would also require a new, ground-based automation system, typically located at the airport, that would provide appropriate sequencing information to the arriving aircraft. Further validation of the SATS HVO concept is required and is the subject of ongoing research and subsequent publications. Holmes, Bruce J. The National Aeronautics and Space Administration (NASA), Federal Aviation Administration, as well as state, industry, and academia partners have joined forces to pursue the NASA National General Aviation Roadmap leading to a Small Aircraft Transportation System (SATS). This long-term strategic undertaking has a goal to bring next-generation technologies and improve air access to small communities. The envisioned outcome is to improve travel between remote communities and transportation centers in urban areas by utilizing a new generation of single-pilot light planes for personal and business transportation between the nation's 5,400 public-use general aviation airports. Current NASA investments in aircraft technologies are enabling industry to bring affordable, safe, and easy-to-use features to the marketplace, including "Highway in the Sky" glass cockpit operating capabilities, affordable crashworthy composite airframes, more efficient IFR flight training, and revolutionary engines. To facilitate this initiative, a comprehensive upgrade of public infrastructure must be planned, coordinated, and implemented within the framework of the national air transportation system. State partnerships are proposed to coordinate research support in key public infrastructure areas. Ultimately, SATS may permit more than tripling aviation system throughput capacity by tapping the under-utilized general aviation facilities to achieve the national goal of doorstep-to-destination travel at four times the speed of highways for the nation's suburban, rural, and remote communities. Millsaps, Gary D.; Yackovetsky, Robert E.
(Technical Monitor) It is acknowledged that the aviation and aerospace industries are primary forces influencing the industrial development and economic well-being of the United States and many countries around the world. For decades the US national air transportation system has been the model of success - safely and efficiently moving people, cargo, goods, and services and generating countless benefits throughout the global community; however, the finite nature of the system and many of its components is becoming apparent. Without measurable increases in the capacity of the national air transportation system, delays and service delivery failures will eventually become intolerable. Although the recent economic slowdown has lowered immediate travel demands, that trend is reversing and cargo movement remains high. Research data indicate a conservative 2.5-3.0% annual increase in aircraft operations nationwide through 2017. Such growth will place additional strains upon a system already experiencing capacity constraints. The stakeholders of the system will continue to endure ever-increasing delays and tolerate lesser levels of service to many lower population density areas of the country unless more efficient uses of existing and new transportation resources are implemented. NASA's Small Aircraft Transportation System (SATS) program is one of several technologies under development that are aimed at using such resources more effectively. As part of this development effort, this report is the first in a series outlining the findings and recommendations resulting from a comprehensive program of multi-level analyses and system engineering efforts undertaken by NASA Langley Research Center's Systems Analysis Branch (SAB). These efforts are guided by a commitment to provide systems-level analysis support for the SATS program.
Subsequent efforts will build upon this early work to produce additional analyses and benefits studies needed to provide the technical and economic basis for national Williams, Daniel M. Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four-phase building-block validation and verification process included multiple elements, ranging from formal analysis of HVO procedures, to flight test, to a full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS). Baxley, B.; Williams, D.; Consiglio, M.; Adams, C.; Abbott, T. The ability to conduct concurrent, multiple aircraft operations in poor weather at virtually any airport offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase capacity at the 3400 non-radar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during low visibility or ceilings.
The concept's key feature is that pilots maintain their own separation from other aircraft using air-to-air datalink and on-board software within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility and low ceilings around an airport without Air Traffic Control (ATC) services. While pilots self-separate within the SCA, an Airport Management Module (AMM) located at the airport assigns arriving pilots their sequence based on aircraft performance, position, winds, missed approach requirements, and ATC intent. The HVO design uses distributed decision-making and safe procedures, attempts to minimize pilot and controller workload, and integrates with today's ATC environment. The HVO procedures have pilots make their own flight path decisions when flying in Instrument Meteorological Conditions (IMC) while meeting these requirements. This paper summarizes the HVO concept and procedures, presents a summary of the research conducted and its results, and outlines areas where future HVO research is required. More information about SATS HVO can be found at http://ntrs.nasa.gov. Bowen, Brent D. The National Aeronautics and Space Administration (NASA), U.S. Department of Transportation, Federal Aviation Administration, industry stakeholders, and academia have joined forces to pursue the NASA National General Aviation Roadmap leading to a Small Aircraft Transportation System (SATS). This strategic undertaking has a 25-year goal to bring next-generation technologies and improve travel between remote communities and transportation centers in urban areas by utilizing the nation's 5,400 public-use general aviation airports. To facilitate this initiative, a comprehensive upgrade of public infrastructure must be planned, coordinated, and implemented within the framework of the national air transportation system.
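The Airport Management Module's sequencing role described above can be caricatured as an ordering problem: rank arrivals by estimated time at the approach fix, with priority for aircraft executing a missed approach. The field names and the priority rule below are hypothetical illustrations, not the published AMM algorithm:

```python
from dataclasses import dataclass

@dataclass
class Arrival:
    callsign: str
    eta_fix_min: float      # estimated time at the initial approach fix (minutes)
    missed_approach: bool   # hypothetical rule: missed approaches get priority

def assign_sequence(arrivals):
    """Toy AMM: order arrivals by (missed-approach priority, then ETA)."""
    # False sorts before True, so `not missed_approach` puts priority first.
    ranked = sorted(arrivals, key=lambda a: (not a.missed_approach, a.eta_fix_min))
    return [a.callsign for a in ranked]

seq = assign_sequence([
    Arrival("N123", 12.0, False),
    Arrival("N456", 9.5, False),
    Arrival("N789", 14.0, True),   # executing a missed approach
])
# seq == ['N789', 'N456', 'N123']
```

The real AMM, per the abstract, also weighs aircraft performance, position, winds, and ATC intent; the sketch only shows that sequencing reduces to a well-defined ordering over broadcast aircraft state.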
The Nebraska NASA EPSCoR Program has proposed to deliver research support in key public infrastructure areas in coordination with the General Aviation Program Office at the NASA Langley Research Center. Ultimately, SATS may permit tripling aviation system throughput capacity by tapping the underutilized general aviation facilities to achieve the national goal of doorstep-to-destination travel at four times the speed of highways for the nation's suburban, rural, and remote communities. O'Neil, Patrick D.; Tarry, Scott E. The following collection of research summaries is submitted as fulfillment of a request from NASA LaRC to conduct research into existing enabling technologies that support the development of the Small Aircraft Transportation System aircraft and accompanying airspace management infrastructure. Due to time and fiscal constraints, the included studies focus primarily on visual systems and architecture, flight control design, instrumentation and display, flight deck design considerations, Human-Machine Interface issues, and supporting augmentation technologies and software. This collection of summaries is divided into sections in an attempt to group similar technologies and systems. However, the reader is advised that many of these studies involve multiple technologies and systems that span many categories. Because of this fact, studies are not easily categorized into single sections. In an attempt to help the reader more easily identify topics of interest, a SATS application description is provided for each summary. In addition, a list of acronyms is provided at the front of the report to aid the reader. UijtdeHaag, Maarten; Thomas, Robert; Rankin, James R. The report discusses the architecture and the flight test results of a 3-Dimensional Cockpit Display of Traffic and terrain Information (3D-CDTI).
The presented 3D-CDTI is a perspective display format that combines existing Synthetic Vision System (SVS) research and Automatic Dependent Surveillance-Broadcast (ADS-B) technology to improve the pilot's situational awareness. The goal of the 3D-CDTI is to contribute to the development of new display concepts for NASA's Small Aircraft Transportation System research program. Papers were presented at the PLANS 2002 meeting and the ION-GPS 2002 meeting. The contents of this report are derived from the results discussed in those papers. WU Huzi; GENG Jianzhong; TANG Changhong; LI Wei A correction method is proposed for the accumulated error of the Inertial Navigation System (INS) of large transport aircraft. The system errors comprise aircraft position, altitude, and speed errors, and there are two ways to reduce them: increasing the accuracy of the hardware, or developing low-cost software algorithms. Because improving the hardware is currently the more difficult route, developing software algorithms is the essential approach, and such algorithms have been validated on several aircraft types. A combined heuristic algorithm is presented, pairing an advanced back-propagation neural network (ABPNN) with the least-squares method (LSM), which incorporates the effects of the flight region and terrain heights measured by radar and barometer. Based on this algorithm, an appropriate match region is obtained online, in real time, by recognition against a fiducial digital map. Taking the minimum position error as the cost function, subject to the given constraint conditions, flight positions are recognized continuously in real time and the least sum of squares is computed with the LSM; in other words, the optimized result is obtained. Simulation cases demonstrate that the method is successful, with a correct recognition rate above 90 percent. In short, the presented algorithm is economical, valid, and effective. Englar, Robert J.; Willie, F. Scott; Lee, Warren J.
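The least-squares core of the terrain-matching INS correction described in the abstract above can be sketched in isolation: compare terrain heights measured along the flown track against a digital map, and pick the position offset that minimises the sum of squared differences. The terrain function, track, and candidate grid below are all invented for illustration; the published method additionally uses a neural network to select the match region.

```python
def terrain_map(x, y):
    # Stand-in digital terrain map (metres); the real system uses survey data.
    return 100 + 20 * ((x * 0.1) % 3) + 10 * ((y * 0.1) % 2)

def best_offset(track, measured, candidates):
    """Return the (dx, dy) candidate minimising the sum of squared
    differences between measured heights and map heights along the track."""
    def cost(dx, dy):
        return sum((terrain_map(x + dx, y + dy) - h) ** 2
                   for (x, y), h in zip(track, measured))
    return min(candidates, key=lambda c: cost(*c))

# Simulate an INS position error of 5 m east / 3 m south along a short track.
track = [(i * 4.0, i * 2.0) for i in range(12)]
true_err = (5.0, -3.0)
measured = [terrain_map(x + true_err[0], y + true_err[1]) for x, y in track]
candidates = [(dx, dy) for dx in range(-8, 9) for dy in range(-8, 9)]
estimate = best_offset(track, measured, candidates)   # recovers (5, -3)
```

A grid search stands in here for the LSM solve; the principle is the same cost function (sum of squared height residuals) the abstract describes.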
In the Task I portion of this NASA research grant, configuration development and experimental investigations have been conducted on a series of pneumatic high-lift and control surface devices applied to a generic High Speed Civil Transport (HSCT) model configuration to determine their potential for improved aerodynamic performance, plus stability and control of higher-performance aircraft. These investigations were intended to optimize pneumatic lift and drag performance; provide adequate control and longitudinal stability; reduce separation flowfields at high angle of attack; increase takeoff/climbout lift-to-drag ratios; and reduce system complexity and weight. Experimental aerodynamic evaluations were performed on a semi-span HSCT generic model with improved fuselage fineness ratio and with interchangeable plain flaps, blown flaps, pneumatic Circulation Control Wing (CCW) high-lift configurations, plain and blown canards, a novel Circulation Control (CC) cylinder blown canard, and a clean cruise wing for reference. Conventional tail power was also investigated for longitudinal trim capability. Also evaluated was unsteady pulsed blowing of the wing high-lift system to determine if reduced pulsed mass flow rates and blowing requirements could be made to yield the same lift as that resulting from steady-state blowing. Depending on the pulsing frequency applied, reduced mass flow rates were indeed found to provide lift augmentation at lesser blowing values than for the steady conditions. Significant improvements in the aerodynamic characteristics leading to improved performance and stability/control were identified, and the various components were compared to evaluate the pneumatic potential of each. Aerodynamic results were provided to the Georgia Tech Aerospace System Design Lab to conduct the companion system analyses and feasibility study (Task 2) of these concepts applied to an operational advanced HSCT aircraft.
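Blowing rates in circulation-control studies like the one above are conventionally characterised by the momentum coefficient C_mu = m_dot * V_jet / (q * S), the jet momentum normalised by freestream dynamic pressure times reference area. The formula is standard in the CC literature, but the numbers below are purely illustrative and not from the report:

```python
def momentum_coefficient(m_dot, v_jet, rho, v_inf, area):
    """Blowing momentum coefficient C_mu = m_dot * V_jet / (q * S),
    with q = 0.5 * rho * V_inf**2 the freestream dynamic pressure.
    All arguments in SI units (kg/s, m/s, kg/m^3, m/s, m^2)."""
    q = 0.5 * rho * v_inf ** 2
    return m_dot * v_jet / (q * area)

# Illustrative wind-tunnel-scale numbers, not from the report:
c_mu = momentum_coefficient(m_dot=0.05, v_jet=300.0, rho=1.225,
                            v_inf=40.0, area=0.5)   # ~0.031
```

This normalisation is what makes statements like "the same lift at lesser blowing values" comparable across conditions: pulsed and steady blowing are judged at matched or reduced C_mu.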
Results and conclusions from these Murphy, Patrick C.; Klein, Vladislav Continued studies have been undertaken to investigate and develop aerodynamic models that predict aircraft response in nonlinear unsteady flight regimes for transport configurations. The models retain conventional static and dynamic terms but replace conventional acceleration terms with indicial functions. In the Subsonic Fixed Wing Project of the NASA Fundamental Aeronautics Program and the Integrated Resilient Aircraft Controls project of the NASA Aviation Safety Program, one aspect of the research is to apply these current developments to transport configurations to facilitate development of advanced simulation and control design technology. This paper continues development and application of a more general modeling methodology to the NASA Langley Generic Transport Model, a sub-scale flight test vehicle. In the present study, models for the lateral-directional aerodynamics are developed. Consiglio, Maria C.; Carreno, Victor A.; Williams, Daniel M.; Munoz, Cesar A multilayer approach to the prevention of conflicts due to the loss of aircraft-to-aircraft separation, which relies on procedures and on-board automation, was implemented as part of the SATS HVO Concept of Operations. The multilayer system gives pilots support and guidance during the execution of normal operations and advance warning for procedure deviations or off-nominal operations. This paper describes the major concept elements of this multilayer approach to separation assurance and conflict prevention and provides the rationale for its design. All the algorithms and functionality described in this paper have been implemented in an aircraft simulation in the NASA Langley Research Center's Air Traffic Operations Lab and on the NASA Cirrus SR22 research aircraft. Carreno, Victor; Munoz, Cesar A critical factor in the adoption of any new aeronautical technology or concept of operation is safety.
Traditionally, safety is accomplished through a rigorous process that involves human factors, low and high fidelity simulations, and flight experiments. As this process is usually performed on final products or functional prototypes, concept modifications resulting from it are very expensive to implement. This paper describes an approach to system safety that can take place at early stages of a concept design. It is based on a set of mathematical techniques and tools known as formal methods. In contrast to testing and simulation, formal methods provide the capability of exhaustive state exploration analysis. We present the safety analysis and verification performed for the Small Aircraft Transportation System (SATS) Concept of Operations (ConOps). The concept of operations is modeled using discrete and hybrid mathematical models. These models are then analyzed using formal methods. The objective of the analysis is to show, in a mathematical framework, that the concept of operations complies with a set of safety requirements. It is also shown that the ConOps has some desirable characteristics such as liveness and absence of deadlock. The analysis and verification are performed in the Prototype Verification System (PVS), which is a computer-based specification language and theorem proving assistant. Pirkl, Martin; Tospann, Franz-Jose This paper presents a guideline to meet the requirements of forward-looking sensors of an enhanced vision system for both military and civil transport aircraft. It gives an update of a previous publication with special respect to airborne application. For civil transport aircraft an imaging mm-wave radar is proposed as the vision sensor for an enhanced vision system. For military air transport an additional high-performance weather radar should be combined with the mm-wave radar to enable advanced situation awareness, e.g. spot-SAR or air-to-air operation.
For tactical navigation the mm-wave radar is useful due to its ranging capabilities. To meet these requirements the HiVision radar was developed and tested. It uses a robust concept of electronic beam steering and will meet the strict price constraints of transport aircraft. Advanced image processing and high frequency techniques are currently being developed to enhance the performance of both the radar image and integration techniques. The advantages of the FMCW waveform even enable a sensor with low probability of intercept and high resistance against jamming. The 1997 highlight will be the optimization of the sensor and flight trials with an enhanced radar demonstrator. Mavris, Dimitri N.; Tai, Jimmy C.; Kirby, Michelle M.; Roth, Bryce A. The primary aspiration of this study was to objectively assess the feasibility of the application of a low speed pneumatic technology, in particular Circulation Control (CC), to an HSCT concept. Circulation Control has been chosen as an enabling technology to be applied on a generic High Speed Civil Transport (HSCT). This technology has been proven for various subsonic vehicles including flight tests on a Navy A-6 and computational application on a Boeing 737. Yet, CC has not been widely accepted for general commercial fixed-wing use but its potential has been extensively investigated for decades in wind tunnels across the globe for application to rotorcraft. More recently, an experimental investigation was performed at Georgia Tech Research Institute (GTRI) with application to an HSCT-type configuration. The data from those experiments were to be applied to a full-scale vehicle to assess the impact from a system level point of view. Hence, this study attempted to quantitatively assess the impact of this technology on an HSCT. The study objective was achieved in three primary steps: 1) Defining the need for CC technology; 2) Wind tunnel data reduction; 3) Detailed takeoff/landing performance assessment.
Defining the need for the CC technology application to an HSCT encompassed a preliminary system level analysis. This was accomplished through the utilization of recent developments in modern aircraft design theory at the Aerospace Systems Design Laboratory (ASDL). These developments include the creation of techniques and methods needed for the identification of technical feasibility show-stoppers. These techniques and methods allow the designer to rapidly assess a design space and disciplinary metric enhancements to enlarge or improve the design space. The takeoff and landing field lengths were identified as the concept "show-stoppers". Once the need for CC was established, the actual application of data and trends was assessed. This assessment entailed a reduction of the Bowen, Brent (Editor); Holmes, Bruce; Gogos, George; Narayanan, Ram; Smith, Russell; Woods, Sara , Codes, and Strategic Enterprises. During the first year of funding, Nebraska established open and frequent lines of communication with university affairs officers and other key personnel at all NASA Centers and Enterprises, and facilitated the development of collaborations between and among junior faculty in the state and NASA researchers. As a result, Nebraska initiated a major research cluster, the Small Aircraft Transportation System Nebraska Implementation Template. Wagner, R. D. The incorporation of laminar flow control into transport aircraft is discussed. Design concepts for the wing surface panel of laminar flow control transport aircraft are described. The development of small amounts of laminar flow on small commercial transports with natural or hybrid flow control is examined. Techniques for eliminating the insect contamination problem in the leading-edge region are proposed. In accordance with Air Force requirements, the comparative analysis of short/medium transport aircraft supports the procurement decision for such aircraft.
This paper presents, in short, the principles and the results of the comparative analysis for short/medium military transport aircraft. Lohr, Gary W.; Williams, Dan; Abbott, Terence; Baxley, Brian; Greco, Adam; Ridgway, Richard The Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept holds the promise of increased efficiency and throughput at many of the nation's under-used airports. This concept allows for concurrent operations at uncontrolled airports that under today's procedures are restricted to one arrival or one departure operation at a time, when current-day IFR separation standards are applied. To allow for concurrent operations, SATS HVO proposes several fundamental changes to today's system. These changes include: creation of dedicated airspace, development of new procedures and communications (phraseologies), and assignment of roles and responsibilities for pilots and controllers, among others. These changes would affect operations on the airborne side (pilot) as well as the ground side (controller and air traffic flow process). The focus of this paper is to discuss some of the issues and potential problems that have been considered in the development of the SATS HVO concept, in particular from the ground side perspective. Reasonable solutions to the issues raised here have been proposed by the SATS HVO team, and are discussed in this paper. National Aeronautics and Space Administration — This proposal defines innovative aerodynamic concepts and technology goals aimed at vehicle efficiency for future subsonic aircraft in the 2020–2030 timeframe.... White, John A. Aircraft manufacturers are developing fiber optic technology to exploit the benefits in system performance and manufacturing cost reduction. The fiber optic systems have high bandwidths and exceptional Electromagnetic Interference immunity that exceeds all new aircraft design requirements.
Additionally, aircraft manufacturers have shown production readiness of fiber optic systems and design feasibility. Williams, L. J. In response to a request for a report from a U.S. Senate committee, NASA formed a Small Transport Aircraft Technology (STAT) team in 1978. STAT was to obtain information concerning the technical improvements in commuter aircraft that would likely increase their public acceptance. Another area of study concerned how NASA's aeronautical research and development program could help commuter aircraft manufacturers solve technical problems. Attention is given to commuter airline growth, current commuter/regional aircraft and new aircraft in development, prospects for advanced technology commuter/regional transports, and potential benefits of advanced technology. A list is provided of a number of particular advances appropriate to small transport aircraft, taking into account small gas turbine engine component technology, propeller technology, three-dimensional wing-design technology, airframe aerodynamics/propulsion integration, and composite structure materials. Harlow, Charles; Zhu, Weihong Accurate data is important in the aviation planning process. In this project we consider systems for measuring aircraft activity at airports. This would include determining the type of aircraft such as jet, helicopter, single engine, and multiengine propeller. Some of the issues involved in deploying technologies for monitoring aircraft operations are cost, reliability, and accuracy. In addition, the system must be field portable and acceptable at airports. A comparison of technologies was conducted and it was decided that an aircraft monitoring system should be based upon acoustic technology. A multimedia relational database was established for the study.
The information contained in the database consists of airport information, runway information, acoustic records, photographic records, a description of the event (takeoff, landing), aircraft type, and environmental information. We extracted features from the time signal and the frequency content of the signal. A multi-layer feed-forward neural network was chosen as the classifier. Training and testing results were obtained. We were able to obtain classification results of over 90 percent for training and testing for takeoff events. Pavlin, Stanislav; Roguljić, Slavko Airport aprons are areas for aircraft handling, parking and maintenance. According to international rules the number of positions at the apron has to be at least equal to the number of aircraft staying at any one time at the airport. The air traffic at Split Airport increased rapidly in the mid-90s when it became the UN logistics base for Bosnia and Herzegovina. There were no means nor free space for further expansion of the apron, so the traffic had to be reorganised and re-coordinated. Alter... Dugan, J. F., Jr. Review of the procedures used to select engines for transport and combat aircraft by illustrating the procedures for a long haul CTOL transport, a short haul VTOL transport, a long range SST, and a fighter aircraft. For the CTOL transport, it is shown that advances in noise technology and advanced turbine cooling technology will greatly reduce the airplane performance penalties associated with achieving low noise goals. A remote lift fan powered by a turbofan air generator is considered for the VTOL aircraft. In this case, the lift fan pressure ratio which maximizes payload also comes closest to meeting the noise goal. High turbine temperature in three different engines is considered for the SST. Without noise constraints it leads to an appreciable drop in DOC, but with noise constraints the reduction in DOC is very modest.
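The acoustic-classification pipeline described by Harlow and Zhu (frequency-content features feeding a multi-layer feed-forward network) can be sketched as follows. Everything here is hypothetical: the synthetic "prop" and "jet" signals, the band edges, and the tiny network are illustration choices, not the study's actual features or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
FS, N = 8000, 1024  # assumed sample rate (Hz) and record length

def band_energies(x, edges=(0, 500, 2000, 4000)):
    """Fraction of spectral energy in each frequency band (3 features)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / FS)
    e = np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                  for lo, hi in zip(edges[:-1], edges[1:])])
    return e / e.sum()

def synth(kind):
    """Synthetic stand-ins for recorded events (invented, not real data)."""
    t = np.arange(N) / FS
    if kind == "prop":  # strong low-frequency tone plus noise
        return np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(N)
    return np.sin(2 * np.pi * 3000 * t) + 0.1 * rng.standard_normal(N)  # "jet"

X = np.array([band_energies(synth(k)) for k in ["prop"] * 20 + ["jet"] * 20])
y = np.array([0] * 20 + [1] * 20)

# Tiny feed-forward net (3 -> 8 -> 1), batch gradient descent, cross-entropy.
W1 = rng.standard_normal((3, 8)) * 0.5
b1, b2 = np.zeros(8), 0.0
w2 = rng.standard_normal(8) * 0.5
lr = 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ w2 + b2)))
    g = (p - y) / len(y)                    # dLoss/dlogit for cross-entropy
    gh = np.outer(g, w2) * (1 - h ** 2)     # backprop through tanh layer
    w2 -= lr * h.T @ g
    b2 -= lr * g.sum()
    W1 -= lr * X.T @ gh
    b1 -= lr * gh.sum(axis=0)

h = np.tanh(X @ W1 + b1)
p = 1 / (1 + np.exp(-(h @ w2 + b2)))
accuracy = float(((p > 0.5) == y).mean())
```

On these trivially separable synthetic features the net separates the two classes easily; real apron recordings would of course be far noisier, which is why the study reports its 90-percent figure against held-out test events.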
For the fighter aircraft it is shown how specific excess power requirements play the same role in engine selection as noise constraints for commercial airplanes. Bartle, John R. The objective of SATS is to reduce gridlock at hubs, reduce travel times, allow for personal control over travel, and anticipate demand shifts resulting from a migration from suburbs to rural places. The technology is presently available and economical to produce SATS aircraft. The public issue centers on the airports. SATS is a federal program, and many airports in the U.S. are under the control of local governments. The scope of the objective will require thousands of airports in rural and suburban areas to modify their infrastructure and increase their investment. Researchers at the University of Nebraska at Omaha (UNO), and others at other institutions, have prepared reports surveying the relevant issues of implementing SATS. Our UNO team focused on the issues of policy implementation, economic development, management, and finance specific to Nebraska. We are finding that these issues are similar to those in other states in our region and other rural states. This paper discusses how this investment might be financed. Mission performance of a fighter aircraft is crucial for survival and strike capabilities in today's aerial warfare scenario. The guidance functions of such an aircraft play a vital role in meeting the requirements and accomplishing mission success. This paper presents the requirements of precision guidance for various missions of a fighter aircraft. The concept of the guidance system as a pilot-in-the-loop system is pivotal in understanding and designing such a system. Methodologies of designing such a system are described. A brief overview is given of the on-going NASA Automated Cooperative Trajectories project. Current status and upcoming work are previewed.
The motivating factors and innovative aspects of ACT are discussed along with technical challenges and the expected system-level impacts if the project is successful. Preliminary results from the NASA G-III hardware in the loop simulation are included. C.M. Ananda; K.G. Venkatanarayana; Preme M.; Raghu M. Avionics of the present day comprises advanced technology and software-intensive systems. Earlier generation avionics constituted a federated architecture and used line replaceable units (LRUs) having individual resources for each application with redundant hardware and software. However, with the advancement of technology, methods, and mechanisms, the industry moved quite rapidly towards the integrated architecture called integrated modular avionics (IMA). Over the last decade there has been tre... Hopkins, A. L.; Martin, J. H.; Brock, L. D.; Jansson, D. G.; Serben, S.; Smith, T. B.; Hanley, L. D. Candidate data communication techniques are identified, including dedicated links, local buses, broadcast buses, multiplex buses, and mesh networks. The design methodology for mesh networks is then discussed, including network topology and node architecture. Several concepts of power distribution are reviewed, including current limiting and mesh networks for power. The technology issues of packaging, transmission media, and lightning are addressed, and, finally, the analysis tools developed to aid in the communication design process are described. There are special tools to analyze the reliability and connectivity of networks and more general reliability analysis tools for all types of systems. The National Airspace System (NAS) in the United States had an inventory of 5,156 big jets at the end of December 2002, of which 4,085 were narrow bodies, and 1,071 were wide bodies. In addition, there were 1,180 regional jets and 660 turboprops in the system at that time.
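The kind of network reliability and connectivity analysis mentioned in the Hopkins et al. abstract can be illustrated with a toy exact computation: for a small mesh, enumerate every subset of surviving links and sum the probability of those subsets that keep all nodes connected. The 4-node topology and the per-link survival probability are invented for illustration; real tools use far more scalable methods than brute-force enumeration.

```python
from itertools import combinations

# Hypothetical 4-node mesh and per-link survival probability.
NODES = {0, 1, 2, 3}
LINKS = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
P_LINK = 0.9

def connected(live_links):
    """True if every node is reachable from node 0 over the surviving links."""
    reach, frontier = {0}, [0]
    while frontier:
        n = frontier.pop()
        for a, b in live_links:
            if a == n and b not in reach:
                reach.add(b)
                frontier.append(b)
            elif b == n and a not in reach:
                reach.add(a)
                frontier.append(a)
    return reach == NODES

def reliability():
    """Exact all-terminal reliability: sum the probability of every subset of
    surviving links that still leaves the node set connected."""
    total = 0.0
    for k in range(len(LINKS) + 1):
        for live in combinations(LINKS, k):
            if connected(live):
                total += P_LINK ** k * (1 - P_LINK) ** (len(LINKS) - k)
    return total

R = reliability()
```

Even with only 90-percent-reliable links, the redundant mesh paths push the network reliability close to 0.98 here, which is the basic argument for mesh topologies in flight-critical data and power distribution.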
Empirical research reveals that there is a critical link between the flow of scheduled passenger services and the choice of aircraft used by the airlines in serving market pair demand. This relationship can be empirically r... Gundy-Burlet, Karen; Krishnakumar, K.; Limes, Greg; Bryant, Don This paper examines the feasibility, potential benefits and implementation issues associated with retrofitting a neural-adaptive flight control system (NFCS) to existing transport aircraft, including both cable/hydraulic and fly-by-wire configurations. NFCS uses a neural network based direct adaptive control approach for applying alternate sources of control authority in the presence of damage or failures in order to achieve desired flight control performance. Neural networks are used to provide consistent handling qualities across flight conditions, adapt to changes in aircraft dynamics and to make the controller easy to apply when implemented on different aircraft. Full-motion piloted simulation studies were performed on two different transport models: the Boeing 747-400 and the Boeing C-17. Subjects included NASA, Air Force and commercial airline pilots. Results demonstrate the potential for improving handling qualities and significantly increased survivability rates under various simulated failure conditions. Horsfall, I; Austin, S J; Bishop, W. This paper describes the structural response of a current ceramic-faced composite armour system and a proposed structural armour system for aircraft use. The proposed structural ballistic armour system is shown to be capable of providing significant structural integrity even after ballistic impact whilst providing ballistic protection equivalent to an existing applique system. The addition of a carbon fibre reinforced plastic front panel to the existing ceramic faced composite armour system i...
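The direct adaptive control idea behind the NFCS study above can be illustrated with a classical scalar model-reference adaptive control (MRAC) loop. This is a stand-in for the neural-network scheme, not the NFCS algorithm itself, and every number below (plant pole, reference model, adaptation rate) is an invented illustration value. The key point it demonstrates: the controller recovers the reference-model response without knowing the plant parameter, which is how consistent handling qualities can be maintained after damage changes the dynamics.

```python
# Scalar MRAC sketch: plant x_dot = A_TRUE*x + u with A_TRUE unknown to the
# controller (think: altered by damage or failure). The adaptive gains kx, kr
# are driven by the model-following error e = x - xm.

DT, STEPS = 0.01, 2000          # 20 s of simulated flight, Euler integration
A_TRUE = 1.5                    # unknown, unstable plant pole
AM, BM = -2.0, 2.0              # reference model: xm_dot = AM*xm + BM*r
GAMMA = 5.0                     # adaptation rate (tuning choice)

x, xm, kx, kr, r = 0.0, 0.0, 0.0, 0.0, 1.0   # r = step pilot command
errors = []
for _ in range(STEPS):
    e = x - xm                  # model-following error
    errors.append(abs(e))
    u = -kx * x + kr * r        # control law with adaptive gains
    kx += GAMMA * x * e * DT    # Lyapunov-based adaptation laws:
    kr -= GAMMA * r * e * DT    # they guarantee e -> 0 for this plant class
    x += (A_TRUE * x + u) * DT
    xm += (AM * xm + BM * r) * DT
```

After a transient, the tracking error decays and the closed loop behaves like the reference model even though the plant is open-loop unstable. The NFCS replaces these fixed-structure adaptation laws with a neural network so the same idea scales to full multi-axis aircraft dynamics.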
Bryer, Paul; Buckles, Jon; Lemke, Paul; Peake, Kirk This university design project concerns the Eagle RTS (Regional Transport System), a 66 passenger, twin turboprop aircraft with a range of 836 nautical miles. It will operate with a crew of two pilots and two flight attendants. This aircraft will employ the use of aluminum alloys and composite materials to reduce the aircraft weight and increase aerodynamic efficiency. The Eagle RTS will use narrow body aerodynamics with a canard configuration to improve performance. Leading edge technology will be used in the cockpit to improve flight handling and safety. The Eagle RTS propulsion system will consist of two turboprop engines with a total thrust of approximately 6300 pounds, 3150 pounds thrust per engine, for the cruise configuration. The engines will be mounted on the aft section of the aircraft to increase passenger safety in the event of a propeller failure. Aft mounted engines will also increase the overall efficiency of the aircraft by reducing the aircraft's drag. The Eagle RTS is projected to have a takeoff distance of approximately 4700 feet and a landing distance of 6100 feet. These distances will allow the Eagle RTS to land at the relatively short runways of regional airports. Urnes, James, Sr.; Nguyen, Nhan; Ippolito, Corey; Totah, Joseph; Trinh, Khanh; Ting, Eric Boeing and NASA are conducting a joint study program to design a wing flap system that will provide mission-adaptive lift and drag performance for future transport aircraft having light-weight, flexible wings. This Variable Camber Continuous Trailing Edge Flap (VCCTEF) system offers a lighter-weight lift control system having two performance objectives: (1) an efficient high lift capability for take-off and landing, and (2) reduction in cruise drag through control of the twist shape of the flexible wing.
This control system during cruise will command varying flap settings along the span of the wing in order to establish an optimum wing twist for the current gross weight and cruise flight condition, and continue to change the wing twist as the aircraft changes gross weight and cruise conditions for each mission segment. Design weight of the flap control system is being minimized through use of light-weight shape memory alloy (SMA) actuation augmented with electric actuators. The VCCTEF program is developing better lift and drag performance of flexible wing transports with the further benefits of lighter-weight actuation and less drag using the variable camber shape of the flap. Marshall, Douglas M; Hottman, Stephen B; Shappee, Eric; Most, Michael Thomas Introduction to Unmanned Aircraft Systems is the editors' response to their unsuccessful search for suitable university-level textbooks on this subject. A collection of contributions from top experts, this book applies the depth of their expertise to identify and survey the fundamentals of unmanned aircraft system (UAS) operations. Written from a nonengineering civilian operational perspective, the book starts by detailing the history of UASs and then explores current technology and what is expected for the future. Covering all facets of UAS elements and operation, including an examination of s Sabel, R.; Reffeltrath, P.A.; Jonkman, A.; Post, T. As a participant in the three-nation partnership for development of the ANBCP-S for use in Helicopters, Transport Aircraft and Fast Jet, the Royal Netherlands Air Force (RNLAF) picked up the challenge to design a Filter-Blower-Unit (FBU). Major Command (MajCom) of the RNLAF set priority to develop a Stephens, D. G. Measured vibration and interior noise data are presented for a number of air and surface vehicles.
Consideration is given to the importance of direction effects; of vehicle operations such as take-off, cruise, and landing; and of measurement location on the level and frequency of the measurements. Various physical measurement units or descriptors are used to quantify and compare the data. Results suggest the range of vibration and noise associated with a particular mode of transportation and illustrate the comparative levels in terms of each of the descriptors. Collectively, the results form a data base which may be useful in assessing the ride of existing or future systems relative to vehicles in current operation. С. С. Юцкевич Specifics of modern civil aviation transport aircraft fly-by-wire control systems are described. A comparison of the systems-level hardware and software, expressed through guidance modes, is carried out for the Airbus A-320, Boeing B-777, Tupolev Tu-214, and Sukhoi Superjet SSJ-100. The possibility of transition from mechanical control wiring to control through a fly-by-wire system in the backup channel is shown. Pornet, C.; Isikveren, A. T. The European Flightpath 2050 and corresponding Strategic Research and Innovation Agenda (SRIA) as well as the NASA Environmentally Responsible Aviation N+ series have elaborated aggressive emissions and external noise reduction targets according to chronological waypoints. In order to deliver ultra-low or even zero in-flight emissions levels, there exists an increasing amount of international research and development emphasis on electrification of the propulsion and power systems of aircraft. Since the late 1990s, a series of experimental and a host of burgeoning commercial activities for fixed-wing aviation have focused on gliders, ultra-light and light-sport airplanes, and this is proving to serve as a cornerstone for more ambitious transport aircraft design and integration technical approaches.
The introduction of hybrid-electric technology has dramatically expanded the design space and the full-potential of these technologies will be drawn through synergetic, tightly-coupled morphological and systems integration emphasizing propulsion - as exemplified by the potential afforded by distributed propulsion solutions. With the aim of expanding upon the current repository of knowledge associated with hybrid-electric propulsion systems a quad-fan arranged narrow-body transport aircraft equipped with two advanced Geared-Turbofans (GTF) and two Electrical Fans (EF) in an under-wing podded installation is presented in this technical article. The assessment and implications of an increasing Degree-of-Hybridization for Useful Power (HP,USE) on the overall sizing, performance as well as flight technique optimization of fuel-battery hybrid-electric aircraft is addressed herein. The integrated performance of the concept was analyzed in terms of potential block fuel burn reduction and change in vehicular efficiency in comparison to a suitably projected conventional aircraft employing GTF-only propulsion targeting year 2035. Results showed that by increasing HP,USE, significant Researchers at NASA are investigating the potential for electric propulsion systems to revolutionize the design of aircraft from the small-scale general aviation sector to commuter and transport-class vehicles. Electric propulsion provides new degrees of design freedom that may enable opportunities for tightly coupled design and optimization of the propulsion system with the aircraft structure and control systems. This could lead to extraordinary reductions in ownership and operating costs, greenhouse gas emissions, and noise annoyance levels. We are building testbeds, high-fidelity aircraft simulations, and the first highly distributed electric inhabited flight test vehicle to begin to explore these opportunities. A. S. 
Abufanas The principles of constructing mathematical models of unmanned aircraft systems as complex systems consisting of a plurality of subsystems, each of which is itself considered as a system, are presented. In this case, the relationships between the subsystems are described by equations based on topological graph theory, and for the preparation of component equations describing the dynamics of the subsystems it is proposed to use differential equations of discontinuous type based on the theory of systems with random structure. Air transport has been a key component of socio-economic globalisation. The ever increasing demand for air travel and air transport is a testament to the success of the aircraft. But this growing demand presents many challenges, one of which is the environmental impact of aviation. The scope of the environmental impact of aircraft can be discussed from many viewpoints. This research focuses on the environmental impact due to aircraft operation. Aircraft operation causes... Nish, W A; Walsh, W F; Land, P; Swedenburg, M The number of civilian air ambulance services operating in the United States has been steadily increasing. The quantity and sophistication of electronic equipment used during neonatal transport have also increased. All medical equipment generates some electromagnetic interference (EMI). Excessive EMI can interfere with any of an aircraft's electrical systems, including navigation and communications. The United States military has strict standards for maximum EMI in transport equipment. Over the past 15 years, approximately 70% of neonatal transport monitors, ventilators, and incubators have failed testing due to excessive EMI. As neonatal transport equipment becomes more sophisticated, EMI is increased, and there is greater potential for aircraft malfunction.
The Federal Aviation Administration should develop civilian standards for acceptable EMI, civilian aircraft operators must be aware of the possible dangers of excessive EMI, and equipment which does not meet future FAA standards should not be purchased. PMID:2751593 The book addresses all major aspects to be considered for the design and operation of aircraft within the entire transportation chain. It provides the basic information about the legal environment, which defines the basic requirements for aircraft design and aircraft operation. The interactions between airport, air traffic management and the airlines are described. The market forecast methods and the aircraft development process are explained to aid understanding of the very complex and risky business of an aircraft manufacturer. The principles of flight physics as a basis for aircraft design are presented and linked to the operational and legal aspects of air transport including all environmental impacts. The book is written for graduate students as well as for engineers and experts who are working in aerospace industry, at airports or in the domain of transport and logistics. Magee, Todd E.; Fugal, Spencer R.; Fink, Lawrence E.; Adamson, Eric E.; Shaw, Stephen G. completed as a precursor to the selection of the facilities used for validation testing. As facility schedules allowed, the propulsion testing was done at the NASA Glenn Research Center (GRC) 8 x 6-Foot wind tunnel, while boom and force testing was done at the NASA Ames Research Center (ARC) 9 x 7-Foot wind tunnel. During boom testing, a live balance was used for gathering force data. This report is broken down into nine sections. The first technical section (Section 2) covers the general scope of the Phase II activities, goals, a description of the design and testing efforts, and the project plan and schedule. Section 3 covers the details of the propulsion system concepts and design evolution.
A series of short tests to evaluate the suitability of different wind tunnels for boom, propulsion, and force testing was also performed under the Phase II effort, with the results covered in Section 4. The propulsion integration testing is covered in Section 5 and the boom and force testing in Section 6. CFD comparisons and analyses are included in Section 7. Section 8 includes the conclusions and lessons learned. The economic aspects of the STOL aircraft for short-haul air transportation are discussed. The study emphasized the potential market, the preferred operational concepts, the design characteristics, and the economic viability. Three central issues governing economic viability are as follows: (1) operator economics given the market, (2) the required transportation facilities, and (3) the external economic effects of a set of regional STOL transportation systems. National Aeronautics and Space Administration — Development of an Aircraft Nodal Data Acquisition System (ANDAS) is proposed. The proposed methodology employs the development of a very thin (135 um) hybrid... National Aeronautics and Space Administration — Development of an Aircraft Nodal Data Acquisition System (ANDAS) based upon the short haul Zigbee networking standard is proposed. It employs a very thin (135 um)... D. P. Coldbeck In the 1980s the British aircraft industry changed its approach to the management of projects from a system in which a project office managed a project and relied on a series of specialist departments for support, to a more process-oriented method using systems engineering models, whose most outwardly visible sign was the introduction of multidisciplinary product teams. One of the problems with the old method was that the individual departments often had different priorities and projects would get uneven support.
The change in the system was only made possible for complex designs by the electronic distribution of data, giving instantaneous access to all involved in the project. In 1997 the Defence and Aerospace Foresight Panel emphasised the need for a systems engineering approach if British industry was to remain competitive. The Royal Academy of Engineering recognised that the change in working practices also changed what was required of a chartered engineer and redefined its requirements in 1997. The result of this is that engineering degree courses are now judged against new criteria with more emphasis placed on the relevance to industry rather than on purely academic content. At the University of Glasgow it was realized that the students ought to be made aware of current working practices and that there ought to be a review to ensure that the degrees give students the skills required by industry. It was decided to produce a one week introduction course in systems engineering for Masters of Engineering (MEng) students, to be taught by both university lecturers and practitioners from a range of companies in the aerospace industry, with the hope of expanding the course into a module. The reaction of the students was favourable in terms of the content, but it seems ironic that the main criticism was that there was not enough discussion involving the students. This paper briefly describes the individual teaching modules and discusses the Lawing, P. L.; Pagel, L. L. (Inventor) The system eliminates the necessity of shielding an aircraft airframe constructed of material such as aluminum. Cooling is accomplished by passing a coolant through the aircraft airframe, the coolant acting as a carrier to remove heat from the airframe. The coolant is circulated through a heat pump and a heat exchanger which together extract essentially all of the added heat from the coolant. The heat is transferred to the aircraft fuel system via the heat exchanger and the heat pump.
The heat extracted from the coolant is utilized to power the heat pump. The heat pump has an associated power turbine mechanism which is also driven by the extracted heat. The power turbines are utilized to drive various aircraft subsystems, the compressor of the heat pump, and provide engine cooling. The present invention concerns an aircraft transporting container for nuclear fuels. A sealing container that seals a nuclear fuel container and constitutes a sealed boundary for the transporting container is incorporated in an inner container. Shock absorbers are filled in the gap between the inner container and the sealing container to absorb impact shock energy. The inner container is surrounded by filled wooden impact shock absorbers so that it is situated in a substantially central portion of an external container. Partitioning cylinders are disposed coaxially in the cylindrical layer filled with wooden impact shock absorbers at an intermediate portion between the outer and the inner containers. Further, a plurality of longitudinally intersecting partitioning disks are disposed, each at a predetermined distance, in the right and left cylindrical wooden impact shock absorbing layers which are in contact with the end face of the inner container. Accordingly, the impact shock energy can be absorbed efficiently by the wooden impact shock absorbers by means of the plurality of partitioning disks and partitioning cylinders. (I.N.) Bolonkin, Alexander; Gilyard, Glenn B. Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope.
The optimization approach developed and illustrated for flight uses variable camber for optimization of aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed based on analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection is obtained by simple interpolation for given conditions. An algorithm is also presented for computation of the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.

Harvey, W. Don; Foreman, Brent
This report provides updated information on the current market and operating environment and identifies interlinking technical possibilities for competitive future commuter-type transport aircraft. The conclusions on the market and operating environment indicate that the regional airlines are moving toward more modern and effective fleets with greater passenger capacity and comfort, reduced noise levels, increased speed, and longer range. This direction leads to a nearly 'seamless' service and continued code-sharing agreements with the major carriers. Whereas the benefits from individual technologies may be small, their overall integration in existing and new aircraft designs can produce improvements in direct operating cost and competitiveness. Production costs are identified as being equally important as pure technical advances.

Wing joint design is one of the most critical areas in aircraft structures.
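The "simple interpolation" step described in the Bolonkin and Gilyard abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the Mach grid and the precomputed optimal deflections are invented values.

```python
# Hypothetical table of precomputed optimal flap deflections (deg) versus
# cruise Mach number; values are invented for illustration only.
mach_grid = [0.70, 0.74, 0.78, 0.82]
opt_deflection = [2.1, 1.4, 0.6, -0.3]

def interp_optimal_deflection(mach):
    """Piecewise-linear interpolation of the precomputed optimum,
    clamped at the table edges."""
    if mach <= mach_grid[0]:
        return opt_deflection[0]
    if mach >= mach_grid[-1]:
        return opt_deflection[-1]
    for m0, m1, d0, d1 in zip(mach_grid, mach_grid[1:],
                              opt_deflection, opt_deflection[1:]):
        if m0 <= mach <= m1:
            t = (mach - m0) / (m1 - m0)
            return d0 + t * (d1 - d0)

print(interp_optimal_deflection(0.76))  # midway between 1.4 and 0.6
```

In a real implementation the table would be multi-dimensional (Mach, weight, altitude), but the lookup-and-interpolate structure is the same.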
Efficient and damage-tolerant wing-fuselage integration structure, applicable to the next generation of transport aircraft, will facilitate the realisation of the benefits offered by new aircraft concepts. The Blended Wing Body (BWB) aircraft concept represents a potential revolution in subsonic transport efficiency for large airplanes. Studies have shown the BWB to be superior to conventional airframes...

Valenzuela Arroyo, Marta
The goal of this project is to design and implement a mission manager for unmanned aircraft systems. The mission manager will work under the USAL architecture designed by the ICARUS UAV group at the EPSC. The student will gain skills in programming, teamwork, and research.

White, Henry J.; Brownjohn, Nick; Baptista, João
Achieving affordable high-speed fiber-optic communication networks for airplane systems has proved to be challenging. In this paper we describe a summary of the EU Framework 7 project DAPHNE (Developing Aircraft Photonic Networks). DAPHNE aimed to exploit photonic technology from terrestrial...

Beltramo, M. N.; Morris, M. A.; Anderson, J. L.
A model comprising system-level weight and cost estimating relationships (CERs) for transport aircraft is presented. To determine the production cost of a future aircraft, its weight is first estimated from performance parameters, and the cost is then estimated as a function of weight. For initial evaluation, the CERs were applied to the actual system weights of six aircraft (three military and three commercial) with mean empty weights ranging from 30,000 to 300,000 lb. The resulting cost estimates were compared with actual costs; the average absolute error was only 4.3%. The model was then applied to five aircraft still in the design phase (Boeing 757, 767 and 777, and BAC HS146-100 and HS146-200).
While the estimates for the 757 and 767 are within 2 to 3 percent of their assumed break-even costs, it is recognized that these estimates are very sensitive to the validity of the estimated weights, the inflation factor, the amount assumed for nonrecurring costs, etc.; it is suggested that the model be used in conjunction with other information such as RDT&E cost estimates and market forecasts. The model will help NASA evaluate new technologies and production costs of future aircraft.

Cunningham, Kevin; Foster, John V.; Morelli, Eugene A.; Murch, Austin M.
Over the past decade, the goal of reducing the fatal accident rate of large transport aircraft has resulted in research aimed at the problem of aircraft loss of control. Starting in 1999, the NASA Aviation Safety Program initiated research that included vehicle dynamics modeling, system health monitoring, and reconfigurable control systems focused on flight regimes beyond the normal flight envelope. In recent years, there has been an increased emphasis on adaptive control technologies for recovery from control upsets or failures, including damage scenarios. As part of these efforts, NASA has developed the Airborne Subscale Transport Aircraft Research (AirSTAR) flight facility to allow flight research, validation, and system testing for flight regimes that are considered too risky for full-scale manned transport airplane testing. The AirSTAR facility utilizes dynamically scaled vehicles that enable the application of subscale flight test results to full-scale vehicles. This paper describes the modeling and simulation approach used for AirSTAR vehicles that supports the goals of efficient, low-cost and safe flight research in abnormal flight conditions. Modeling of aerodynamics, controls, and propulsion is discussed, as well as the application of simulation to flight control system development, test planning, risk mitigation, and flight research.
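The two-stage estimate in the Beltramo, Morris and Anderson abstract (performance parameters to weight, then weight to cost) can be sketched as below. The functional forms and every coefficient here are invented placeholders; the actual CERs are system-by-system regressions not reproduced in the abstract.

```python
# Hypothetical two-stage estimate: performance -> weight -> cost.
# All coefficients are invented placeholders, not the paper's CERs.
def estimate_empty_weight(payload_lb, range_nmi):
    # Illustrative linear regression on two performance parameters.
    return 1.2 * payload_lb + 18.0 * range_nmi

def estimate_production_cost(empty_weight_lb):
    # Illustrative power-law CER: cost = a * W**b (quantity and
    # learning-curve effects omitted).
    a, b = 2.5e3, 0.85
    return a * empty_weight_lb ** b

w = estimate_empty_weight(40000.0, 3000.0)   # -> 102000.0 lb
cost = estimate_production_cost(w)           # grows sublinearly with weight
```

The sublinear exponent (b < 1) reflects the common observation that cost per pound falls as aircraft size grows; the real model estimates such exponents per system from historical data.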
Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.
The mathematical model and associated code to simulate a high-speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth's atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use data from the Boeing Reference H cycle 1 simulation database. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.

Coogan, J. J.
Modifications were designed for the B-737-100 Research Aircraft autobrake system hardware of the Advanced Transport Operating Systems (ATOPS) Program at Langley Research Center. These modifications will allow the on-board flight control computer to control the aircraft deceleration after landing to a continuously variable level, for the purpose of executing automatic high-speed turnoffs from the runway. A breadboard version of the proposed modifications was built and tested in simulated stopping conditions. Test results for various aircraft weights, turnoff speeds, winds, and runway conditions show that the turnoff speeds are generally achieved with errors of less than 1 ft/sec.

NASA's Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs, as well as a number of Agency innovations, have helped Duluth, Minnesota-based Cirrus Design Corporation become one of the world's leading manufacturers of general aviation aircraft. SBIRs with Langley Research Center provided the company with cost-effective composite airframe manufacturing methods, while crashworthiness testing at the Center increased the safety of its airplanes. Other NASA-derived technologies on Cirrus SR20 and SR22 aircraft include synthetic vision systems that help pilots navigate and full-plane parachutes that have saved the lives of more than 30 Cirrus pilots and passengers to date. Today, the SR22 is the world's top-selling Federal Aviation Administration (FAA)-certified single-engine airplane.

Krus, Petter; Braun, Robert; Nordin, Peter; Eriksson, Björn
Developments in computational hardware and simulation software have come to a point where it is possible to use whole-mission simulation in a framework for conceptual/preliminary design. This paper is about the implementation of full system simulation software for conceptual/preliminary aircraft design. It is based on the new Hopsan NG simulation package, developed at Linköping University.
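Commanding a continuously variable deceleration level, as the ATOPS autobrake modification does, can be illustrated with a toy discrete PI loop. The first-order brake model and the gains below are invented for illustration, not the breadboard design.

```python
# Toy discrete PI controller tracking a commanded deceleration (ft/s^2).
# Plant model (first-order brake response) and gains are invented.
dt = 0.05                  # control period (s)
kp, ki = 0.8, 2.0          # PI gains
target = 8.0               # commanded deceleration (ft/s^2)
tau = 0.5                  # brake/tire response time constant (s)

decel, integ = 0.0, 0.0
for _ in range(400):       # 20 s of simulated braking
    err = target - decel
    integ += err * dt                          # integral of tracking error
    brake_cmd = kp * err + ki * integ          # PI control law
    decel += dt * (brake_cmd - decel) / tau    # first-order plant response
```

The integral term drives the steady-state tracking error to zero, which is what allows errors in achieved turnoff speed to stay small across varying aircraft weight and runway friction.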
The Hopsan NG software is implemented in C++. Hopsan NG is the first simulation software that has su...

The aircraft industry is moving towards the All Electric and More Electric Aircraft (MEA), the future trend of adopting a single power type (electrical power) to drive the non-propulsive aircraft systems. The trend in the industry is to replace hydraulic and pneumatic systems with electrical systems, achieving more comfort and monitoring features. The structure of the MEA distribution system improves aircraft maintainability, reliability, flight safety and efficiency. Moreover, MEA reduces aircraft emissions of air pollutant gases, which can contribute significantly to addressing climate change. However, the MEA places challenges on the aircraft electrical system, both in the amount of power required and in the processing and management of this power. MEA electrical distribution systems are mainly in the form of multi-converter power electronic systems.

Zhao, Xin; Guerrero, Josep M.; Wu, Xiaohao
In recent years, the electrical power capacity is increasing rapidly in more electric aircraft (MEA), since the conventional mechanical, hydraulic and pneumatic energy systems are partly replaced by the electrical power system. As a consequence, the capacity and complexity of aircraft electric power systems (EPS) will increase dramatically and more advanced aircraft EPSs need to be developed. This paper gives a brief description of the constant frequency (CF) EPS, variable frequency (VF) EPS and advanced high voltage (HV) EPS. Power electronics in the three EPS is overviewed. Keywords: Aircraft Power System, More Electric Aircraft, Constant Frequency, Variable Frequency, High Voltage.
Rizzi, Stephen A.; Christian, Andrew
The NASA Environmentally Responsible Aviation project has been successful in developing and demonstrating technologies for integrated aircraft systems that can simultaneously meet aggressive goals for fuel burn, noise and emissions. Some of the resulting systems differ substantially from the familiar tube-and-wing designs constituting the current civil transport fleet. This study explores whether the effective perceived noise level metric used in the NASA noise goal accurately reflects human subject response across the range of vehicles considered. Further, it seeks to determine, in a quantitative manner, whether the sounds associated with the advanced aircraft are more or less preferable to those of the reference vehicles beyond any differences revealed by the metric. These explorations are made through psychoacoustic tests in a controlled laboratory environment using simulated stimuli developed from auralizations of selected vehicles based on systems noise assessments.

Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M.
A computer program has been written to analyze the cooling systems of hypersonic aircraft. This computer program, called NASP/SINDA, is written in the SINDA'85 command structure and uses the SINDA'85 finite-difference subroutines. Both internal fluid flow and heat transfer must be analyzed, because increased heating causes a decrease in the flow of the coolant. Also, local hot spots will cause a redistribution of the coolant in the system. Both steady-state and transient analyses have been performed. Details of empirical correlations are presented. Results for two cooling system applications are given.
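A greatly simplified, one-dimensional version of the coupled fluid-flow/heat-transfer calculation that NASP/SINDA performs on full thermal networks might look like the following. The geometry, lumped coefficients and numbers are invented; the real code solves general finite-difference networks with empirical correlations.

```python
# Simplified 1D explicit finite-difference model of a cooled panel:
# wall nodes conduct axially and convect to a coolant stream that heats
# up as it flows downstream. All parameter values are illustrative.
N = 20                     # wall nodes
dt, steps = 0.01, 2000     # time step (s), number of steps
k_cond = 5.0               # lumped axial conduction coupling (W/K)
h_conv = 2.0               # wall-to-coolant convection (W/K)
c_wall = 100.0             # nodal heat capacity (J/K)
q_aero = 50.0              # aerodynamic heating per node (W)
mdot_cp = 40.0             # coolant capacity rate m_dot * c_p (W/K)
t_cool_in = 300.0          # coolant inlet temperature (K)

T = [300.0] * N
for _ in range(steps):
    # March the coolant downstream, absorbing heat node by node.
    t_cool = t_cool_in
    q_conv = []
    for Tw in T:
        q = h_conv * (Tw - t_cool)
        q_conv.append(q)
        t_cool += q / mdot_cp
    # Explicit update of wall temperatures.
    Tnew = []
    for i in range(N):
        q_net = q_aero - q_conv[i]
        if i > 0:
            q_net += k_cond * (T[i - 1] - T[i])
        if i < N - 1:
            q_net += k_cond * (T[i + 1] - T[i])
        Tnew.append(T[i] + dt * q_net / c_wall)
    T = Tnew
```

Because the coolant warms as it flows, downstream wall nodes run hotter, which is exactly the redistribution effect the abstract says must be captured by analyzing flow and heat transfer together.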
Nengjian Wang; Hongbo Liu; Wanhui Yang
An aircraft tractor plays a significant role as a kind of important marine transport and support equipment. It is necessary to study its control and manoeuvring stability to improve operating efficiency. A virtual prototyping model of the tractor-aircraft system, based on Lagrange's equation of the first kind with Lagrange multipliers, was established in this paper. According to the towing characteristics, a path-tracking controller using fuzzy logic theory was designed. Direction control was carried out through a compensatory tracking approach. Interactive co-simulation was performed to validate the path-tracking behaviour in closed loop. Simulation results indicated that the tractor followed the reference courses precisely on flat ground.

Hange, Craig E.
This presentation will be given at the AIAA Electric Hybrid-Electric Power Propulsion Workshop on July 29, 2016. The workshop is being held so the AIAA can determine how it can support the introduction of electric aircraft into the aerospace industry. This presentation will address the needs of the community within the industry that advocates the use of powered lift as an important new technology for future aircraft and air transportation systems. As the current chairman of the VSTOL Aircraft Systems Technical Committee, I will be presenting generalized descriptions of past research in developing powered lift and generalized observations on how electric and hybrid-electric propulsion may provide advances in the powered-lift field.

This report addresses the author's Group Design Project (GDP) and Individual Research Project (IRP). The IRP is discussed primarily herein, presenting the actuation technology for the Flight Control System (FCS) on civil aircraft.
Actuation technology is one of the key technologies for next generation More Electric Aircraft (MEA) and All Electric Aircraft (AEA); it is also an important input for the preliminary design of the Flying Crane, the aircraft designed in the author’s G... Park, Pangun; Khadilkar, Harshad Dilip; Balakrishnan, Hamsa; Tomlin, Claire J. This paper addresses the design of a secure and fault-tolerant air transportation system in the presence of attempts to disrupt the system through the satellite-based navigation system. Adversarial aircraft are assumed to transmit incorrect position and intent information, potentially leading to violations of separation requirements among aircraft. We propose a framework for the identification of adversaries and malicious aircraft, and then for air traffic control in the presence of such deli... Nguyen, Truong X.; Dudley, Kenneth L.; Scearce, Stephen A.; Ely, Jay J.; Richardson, Robert E.; Hatfield, Michael O. An investigation was performed to study the potential for radio frequency (RF) power radiated from Portable Electronic Devices (PEDs) to create an arcing/sparking event within the fuel tank of a large transport aircraft. This paper describes the experimental methods used for measuring RF coupling to the fuel tank and Fuel Quantity Indication System (FQIS) wiring from PED sources located in the passenger cabin. To allow comparison of voltage/current data obtained in a laboratory chamber FQIS installation to an actual aircraft FQIS installation, aircraft fuel tank RF reverberation characteristics were also measured. Results from the measurements, along with a survey of threats from typical intentional transmitting PEDs are presented. The resulting worst-case power coupled onto fuel tank FQIS wiring is derived. The same approach can be applied to measure RF coupling into various other aircraft systems. 
The new 1996 IAEA regulation for the transportation of radioactive material specifies a 90 m/s drop test onto an unyielding surface for packages transported by air. This figure originates from a statistical analysis of civil aircraft accidents during the period 1975 to 1985. A review of the 1983-1989 period demonstrates comparable velocities with a less stringent definition of accident. The statistical analyses are combined with the hardness of the overflown ground and the impact angle to generate probability curves giving the occurrence of a mechanical stress exceeding the drop-test velocity. The following steps were carried out: statistical analysis of the accident database from ICAO (International Civil Aviation Organisation) to characterize the accidents, with special attention to the impact velocity and angle; modelling of the impact speed in each phase of flight (take-off, climb, cruise, approach and landing), in order to evaluate the probability of occurrence of a given crash impact for different flight configurations; and qualitative analysis of recorder failures available in France. As the statistical analysis is based on the available impact speeds, a bias can be introduced if major crashes are neglected. In this respect, the qualitative analysis should give some elements to characterize the relationship between the non-availability of the information recorded on the black box and the severity of the accident. (author)

The study of naturally occurring radiation and its associated risk is one of the preoccupations of bodies responsible for radiation protection. Cosmic particle flux is significantly higher on board aircraft than at ground level. Furthermore, its intensity depends on solar activity and eruptions. Due to their professional activity, flight crews and frequent flyers may receive an annual dose of some millisieverts.
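The probability-of-exceedance idea in the drop-test abstract can be illustrated with a crude empirical estimate. The velocity sample and all per-phase numbers below are fabricated for illustration; they are not the ICAO data.

```python
# Fabricated sample of accident impact velocities (m/s) -- not ICAO data.
impact_speeds = [12, 25, 31, 44, 52, 60, 68, 75, 83, 95, 110, 130]

def exceedance_probability(samples, threshold):
    """Empirical probability that an impact exceeds the threshold speed."""
    return sum(1 for v in samples if v > threshold) / len(samples)

p_over_test = exceedance_probability(impact_speeds, 90.0)   # 3 of 12

# Combining per-phase exceedance with phase-of-flight weights, mirroring
# the abstract's per-phase modelling (all numbers invented).
phase_weight = {"takeoff": 0.15, "climb": 0.10, "cruise": 0.30,
                "approach": 0.20, "landing": 0.25}
phase_exceed = {"takeoff": 0.02, "climb": 0.10, "cruise": 0.60,
                "approach": 0.08, "landing": 0.01}
p_total = sum(phase_weight[p] * phase_exceed[p] for p in phase_weight)
```

The study additionally conditions on impact angle and ground hardness; those would enter as further weighting factors in the same total-probability sum.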
This is why the European directive adopted in 1996 requires aircraft operators to assess the dose and to inform their flight crews about the risk. The effective dose is to be estimated using various experimental and calculation means. In France, the computerized system for flight assessment of exposure to cosmic radiation in air transport (SIEVERT) is delivered to airlines to assist them in the application of the European directive. This dose assessment tool was developed by the French General Directorate of Civil Aviation (DGAC) and its partners: the Institute for Radiation Protection and Nuclear Safety (IRSN), the Paris Observatory and the French Institute for Polar Research - Paul-Emile Victor (IPEV). This professional service is available on an Internet server accessible to the companies, with a public section. The system provides doses that take account of the routes flown by aircraft. Various results obtained are presented. (authors)

The more-electric aircraft concept is a major trend in aircraft electrical power system engineering and results in an increase in electrical loads based on power electronic converters and motor drive systems. Unfortunately, power-electronic-driven loads often behave as constant power loads exhibiting a small-signal negative impedance that can significantly degrade the power system stability margin. Therefore, the stability issue of aircraft power systems is of great importance. The research of ...

Gerren, Donna S.
A study has been conducted to determine the capability to control a very large transport airplane with engine thrust. This study consisted of the design of an 800-passenger airplane with a range of 5000 nautical miles, design and evaluation of a flight control system, and design and piloted simulation evaluation of a thrust-only backup flight control system. The location of the four wing-mounted engines was varied to optimize the propulsive control capability, and the time constant of the engine response was studied.
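At its core, the route-based dose assessment that SIEVERT performs reduces to integrating a dose rate over the segments of a flight profile. A minimal sketch with invented dose rates (SIEVERT's actual rates come from measured and modelled radiation fields that vary with altitude, latitude and solar activity):

```python
# Hypothetical flight profile as (duration_h, dose_rate_uSv_per_h)
# segments; the rates are invented, not SIEVERT's modelled values.
profile = [
    (0.5, 1.0),   # climb at lower altitude
    (7.0, 5.0),   # cruise at high altitude, higher cosmic-ray flux
    (0.5, 1.0),   # descent
]

def route_dose(segments):
    """Total effective dose (microsieverts) summed over route segments."""
    return sum(hours * rate for hours, rate in segments)

dose = route_dose(profile)   # 0.5 + 35.0 + 0.5 uSv
```

Summing such per-flight doses over a crew member's annual roster gives the annual figure of "some millisieverts" mentioned above.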
The goal was to provide level 1 flying qualities. The engine location and engine time constant did not have a large effect on the control capability. The airplane design did meet level 1 flying qualities based on frequencies, damping ratios, and time constants in the longitudinal and lateral-directional modes. Project pilots consistently rated the flying qualities as either level 1 or level 2 based on Cooper-Harper ratings. However, because of the limited control forces and moments, the airplane design fell short of meeting the time required to achieve a 30 deg bank and the time required to respond to a control input.

Mekel, R.; Nachmias, S.
A learning control system and its utilization as a flight control system for the F-8 Digital Fly-By-Wire (DFBW) research aircraft is studied. The system has the ability to adjust a gain schedule to account for changing plant characteristics and to improve its performance and the plant's performance in the course of its own operation. Three subsystems are detailed: (1) the information acquisition subsystem, which identifies the plant's parameters at a given operating condition; (2) the learning algorithm subsystem, which relates the identified parameters to predetermined analytical expressions describing the behavior of the parameters over a range of operating conditions; and (3) the memory and control process subsystem, which consists of the collection of updated coefficients (memory) and the derived control laws. Simulation experiments indicate that the learning control system is effective in compensating for parameter variations caused by changes in flight conditions.
Adachi, Fumiyuki; Miyazaki, Hiroyuki; Endo, Chikara
If a large-scale disaster similar to the 2011 Great East Japan Earthquake happens, some areas may be isolated from the communications network. Recently, unmanned aircraft system (UAS) based wireless relay communication has been attracting much attention, since it can quickly re-establish the connection between isolated areas and the network. However, the channel between the ground station (GS) and the unmanned aircraft (UA) is unreliable due to the UA's swing motion, and as a consequence the relay communication quality degrades. In this paper, we introduce space-time block coded (STBC) amplify-and-forward (AF) relay for UAS-based wireless relay communication to improve relay communication quality. A group of UAs forms a single frequency network (SFN) to perform STBC-AF cooperative relay. In STBC-AF relay, only conjugate operation, block exchange and amplifying are required at the UAs. Therefore, STBC-AF relay improves the relay communication quality while alleviating the complexity problem at the UAs. It is shown by computer simulation that STBC-AF relay can achieve better throughput performance than conventional AF relay.
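The conjugate-and-forward operation at the UAs follows the structure of the classic two-branch Alamouti space-time block code. A noise-free two-relay sketch (channel gains and symbols are invented) showing that the linear combining step recovers both symbols:

```python
# Alamouti-style STBC over two relays, noise-free flat channels.
# Relay 1 transmits [x1, x2]; relay 2 transmits [-conj(x2), conj(x1)].
h1, h2 = 0.8 - 0.3j, 0.5 + 0.6j     # relay-to-ground channel gains (invented)
x1, x2 = 1 + 1j, -1 + 1j            # QPSK-like symbols

# Two received samples over two consecutive symbol periods.
r1 = h1 * x1 + h2 * (-x2.conjugate())
r2 = h1 * x2 + h2 * x1.conjugate()

# Linear combining recovers each symbol scaled by the total channel gain.
g = abs(h1) ** 2 + abs(h2) ** 2
x1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / g
x2_hat = (h1.conjugate() * r2 - h2 * r1.conjugate()) / g
```

With noise present the same combiner yields full diversity order two, which is why swing-motion fading on individual GS-UA links is mitigated without any channel knowledge at the relays.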
Hibbs, Bart D.; Lissaman, Peter B. S.; Morgan, Walter R.; Radkey, Robert L.
This disclosure provides a solar rechargeable aircraft that is inexpensive to produce, is steerable, and can remain airborne almost indefinitely. The preferred aircraft is a span-loaded flying wing, having no fuselage or rudder. Travelling at relatively slow speeds, and having a two-hundred-foot wingspan that mounts photovoltaic cells on most of the wing's top surface, the aircraft uses only differential thrust of its eight propellers to turn. Each of five sections of the wing has one or more engines and photovoltaic arrays, and produces its own lift independent of the other sections, to avoid loading them. Five two-sided photovoltaic arrays, in all, are mounted on the wing, and receive photovoltaic energy both incident on top of the wing and incident from below, through a bottom, transparent surface. The aircraft is capable of a top speed of about ninety miles per hour, which enables it to attain and continuously maintain altitudes of up to sixty-five thousand feet. Regenerative fuel cells in the wing store excess electricity for use at night, such that the aircraft can sustain its elevation indefinitely. A main spar of the wing doubles as a pressure vessel that houses hydrogen and oxygen gases for use in the regenerative fuel cell. The aircraft has a wide variety of applications, which include weather monitoring and atmospheric testing, communications, surveillance, and other applications as well.

Nielsen, Peter Vilhelm; Zhang, Chen; Wojcik, Kamil
Traditionally, air is supplied to the aircraft cabin either by individual nozzles or by supply slots. The air is expected to be fully mixed in the cabin, and the system is considered to be a mixing ventilation system. This paper will provide measurements on the mixing flow in an aircraft cabin...

Maintenance and support are basic elements in realizing the effectiveness of aircraft.
For basic analysis of the characteristics of an aircraft maintenance and support system, a simulation method is presented in this paper, and the structure and realization of the simulation system are discussed.

Thomas, Russell H. (Inventor); Czech, Michael J. (Inventor); Elkoby, Ronen (Inventor)
The aircraft exhaust engine nozzle system includes a fan nozzle to receive a fan flow from a fan disposed adjacent to an engine disposed above an airframe surface of the aircraft, a core nozzle disposed within the fan nozzle and receiving an engine core flow, and a pylon structure connected to the core nozzle and structurally attached with the airframe surface to secure the engine to the aircraft.

Delgado, Francisco J.; White, Janis; Abernathy, Michael F.
This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SCS3D), has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results. Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel with an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used during all weather and visibility conditions. While the advantages of a synthetic-vision-only system are considerable, the major disadvantage of such a system is that it displays a synthetic scene created using "static" data acquired by an aircraft or satellite at some point in the past.
The SCS3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system; see figure 1. The SCS3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year. We are also identifying other NASA programs that would benefit from the use of this technology.

The cabin environment of a commercial aircraft, including the cabin layout and the quality of the air supply, is crucial to airline operators. These aspects directly affect passengers' experience and willingness to travel. The aim of this thesis is to design the cabin layout for a flying wing aircraft as part of the cabin environment work, followed by the air quality work, which is to understand what effect the ECS can have in terms of cabin air contamination. The project, initially, focuses on the...

Storvold, Rune; la Cour-Harbo, Anders; Mulac, Brenda
Satellites and manned aircraft are the traditional platforms on which scientists gather data of the atmosphere, sea ice, glaciers, fauna and vegetation. However, significant data gaps still exist over much of the Arctic because there are few research stations, and satellites are often hindered by cloud cover, poor resolution, and the complicated surface of snow and ice. Measurements made from manned aircraft are also limited because of range and endurance, as well as the danger and costs presented by operating manned aircraft in harsh and remote environments like the Arctic. Unmanned aircraft systems (UAS...

Burken, John J.; Frost, Susan A.; Taylor, Brian R.
When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag-reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and sideslip angle). To distribute the control effort among the redundant set of actuators, either a fixed-mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas a fixed mixer cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can then produce ill-defined loads on the aircraft. The load uncertainty and complexity have prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load limit constraint is shown and compared to the same maneuver without the load limitation.

Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)
The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three-dimensional map to be compressed and dividing the three-dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate the terrain data of the three-dimensional map data.
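The minimum-norm baseline that both fixed mixers and online allocators build on can be sketched with a pseudo-inverse. The effectiveness matrix below is invented, and the constraint handling that is the hard part discussed in the Burken et al. abstract is omitted.

```python
import numpy as np

# Control effectiveness matrix B maps five surface deflections to the
# three controlled variables; all values are invented for illustration.
B = np.array([
    [2.0, -2.0, 0.4, -0.4, 0.0],   # roll  effectiveness of 5 surfaces
    [0.1,  0.1, 1.5,  1.5, 2.0],   # pitch effectiveness
    [0.3, -0.3, 0.2, -0.2, 0.0],   # yaw   effectiveness
])
v_cmd = np.array([1.0, 0.5, 0.0])  # commanded roll/pitch/yaw response

# Minimum-norm allocation: delta = B^+ v. This is the unconstrained
# "fixed mixer" answer; an online allocator would instead solve a
# constrained least-squares problem with rate, position and load limits.
delta = np.linalg.pinv(B) @ v_cmd
```

Because `pinv` returns the minimum-norm solution, the commanded moments are met exactly while surface activity is spread across the redundant actuators; adding load limits turns this into a constrained optimization solved at each control step.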
The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within the specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision avoidance system for an aircraft. National Aeronautics and Space Administration — Aircraft powered by hydrogen power plants or gas turbines driving electric generators connected to distributed electric motors for propulsion have the potential to... This Transportation System Requirements Document (Trans-SRD) describes the functions to be performed by and the technical requirements for the Transportation System to transport spent nuclear fuel (SNF) and high-level radioactive waste (HLW) from Purchaser and Producer sites to a Civilian Radioactive Waste Management System (CRWMS) site, and between CRWMS sites. The purpose of this document is to define the system-level requirements for Transportation consistent with the CRWMS Requirement Document (CRD). These requirements include design and operations requirements to the extent they impact the development of the physical segments of Transportation. The document also presents an overall description of Transportation, its functions, its segments, and the requirements allocated to the segments and the system-level interfaces with Transportation. The interface identification and description are published in the CRWMS Interface Specification.
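The recursive subdivision scheme in the patent abstract above can be sketched in a few lines. This is a minimal illustration, not the patented algorithm: the planar least-squares fit, the maximum-residual tolerance test, and the four-way quadtree split are assumptions standing in for the unspecified "free-edged, flat geometric surface" machinery.

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares plane z ~ a*x + b*y + c; returns coefficients and max residual."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs, np.max(np.abs(A @ coeffs - z))

def compress(terrain, x0, y0, size, tol, out):
    """Recursively approximate a square DTM patch with flat surfaces."""
    ys, xs = np.mgrid[y0:y0 + size, x0:x0 + size]
    patch = terrain[y0:y0 + size, x0:x0 + size]
    coeffs, err = fit_plane(xs.ravel(), ys.ravel(), patch.ravel())
    if err <= tol or size <= 2:          # within tolerance: keep one plane
        out.append((x0, y0, size, coeffs))
        return
    half = size // 2                     # otherwise split into four quadrants
    for dx in (0, half):
        for dy in (0, half):
            compress(terrain, x0 + dx, y0 + dy, half, tol, out)

# Toy terrain: a smooth ridge on a 16x16 elevation grid.
n = 16
yy, xx = np.mgrid[0:n, 0:n]
terrain = 0.1 * xx + np.sin(yy / 3.0)
planes = []
compress(terrain, 0, 0, n, tol=0.05, out=planes)
print(len(planes), "flat surfaces cover", n * n, "elevation posts")
```

Flat regions collapse into a single plane while curved terrain is subdivided, which is the source of the compression.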
It is possible to get a crude estimate of wind speed and direction while driving a car at night in the rain, with the motion of the raindrop reflections in the headlights providing clues about the wind. The clues are difficult to interpret, though, because of the relative motions of ground, car, air, and raindrops. More subtle interpretation is possible if the rain is replaced by fog, because the tiny droplets would follow the swirling currents of air around an illuminated object, like, for example, a walking pedestrian. Microscopic particles in the air (aerosols) are better for helping make assessments of the wind, and reflective air molecules are best of all, providing the most refined measurements. It takes a bright light to penetrate fog, so it is easy to understand how other factors, like replacing the headlights with the intensity of a searchlight, can be advantageous. This is the basic principle behind a lidar system.
While a radar system transmits a pulse of radiofrequency energy and interprets the received reflections, a lidar system works in a similar fashion, substituting a near-optical laser pulse. The technique allows the measurement of relative positions and velocities between the transmitter and the air, which allows measurements of relative wind and of air temperature (because temperature is associated with high-frequency random motions on a molecular level). NASA, as well as the National Oceanic and Atmospheric Administration (NOAA), has interests in this advanced lidar technology, as much of their explorative research requires the ability to measure winds and turbulent regions within the atmosphere. Lidar also shows promise for providing warning of turbulent regions within the National Airspace System, allowing commercial aircraft to avoid encounters with turbulence and thereby increasing the safety of the traveling public. Both agencies currently employ lidar and optical sensing for a variety of weather-related research projects, such as analyzing... Kochan, Kay; Sachau, Delf; Breitbach, Harald The active noise control (ANC) method is based on the superposition of a disturbance noise field with a second anti-noise field using loudspeakers and error microphones. This method can be used to reduce the noise level inside the cabin of a propeller aircraft. However, during the design process of the ANC system, extensive measurements of transfer functions are necessary to optimize the loudspeaker and microphone positions. Sometimes, the transducer positions have to be tailored according to the optimization results to achieve a sufficient noise reduction. The purpose of this paper is to introduce a controller design method for such narrow-band ANC systems. The method can be seen as an extension of common transducer placement optimization procedures. In the presented method, individual weighting parameters for the loudspeakers and microphones are used.
With this procedure, the tailoring of the transducer positions is replaced by adjustment of controller parameters. Moreover, the ANC system will be robust because uncertainties are considered during the optimization of the controller parameters. The paper describes the necessary theoretical background for the method and demonstrates its efficiency in an acoustical mock-up of a military transport aircraft. PMID:21568404 In aircraft development, it is crucial to understand and evaluate behavior, performance, safety and other aspects of the systems before and after they are physically available for testing. Simulation models are used to gain knowledge in order to make decisions at all development stages. Modeling and simulation (M&S) in aircraft system development, for example of fuel, hydraulic and electrical power systems, is today an important part of the design process. Through M&S a problem in a f... ... ratings at the airline transport pilot certification level). 61.63 Section 61.63 Aeronautics and Space... aircraft ratings (other than for ratings at the airline transport pilot certification level). (a) General. For an additional aircraft rating on a pilot certificate, other than for an airline transport... ... Transportation To Investigate Certain Aircraft Accidents Appendix to Part 800 Transportation Other Regulations... the Department of Transportation To Investigate Certain Aircraft Accidents (a) Acting pursuant to the... Safety Board Act of 1974, and as set forth below to investigate the facts, conditions, and... Aerodynamic design of transport aircraft has been steadily improved over the past several decades, to the point where today highly detailed shape control is needed to achieve further improvements. Aircraft manufacturers are therefore increasingly looking into formal optimization methods, driving high-fidelity CFD analysis of finely-parametrized candidate designs.
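The narrow-band ANC idea summarized above, superposing an anti-noise field and limiting transducer effort through weighting parameters, can be illustrated with a single-tone phasor sketch. The secondary-path value, the effort weight rho, and the closed-form optima below are illustrative assumptions, not the paper's controller design.

```python
import numpy as np

# Single-tone ANC sketch: a disturbance phasor d reaches an error
# microphone; a loudspeaker contributes s per unit complex drive w
# (the secondary path). Cancellation means minimising |d + s*w|.
d = 1.0 * np.exp(1j * 0.7)     # disturbance at the error mic (assumed)
s = 0.4 * np.exp(-1j * 1.2)    # secondary-path response (assumed)

w = -d / s                     # unconstrained optimum: exact cancellation
print(abs(d + s * w))          # ~0: tone cancelled at the microphone

# A weighting parameter rho penalising loudspeaker effort (in the spirit
# of the transducer weights above): minimise |d + s*w|^2 + rho*|w|^2.
rho = 0.05
w_reg = -np.conj(s) * d / (abs(s) ** 2 + rho)
print(abs(d + s * w_reg))      # small but nonzero: effort-limited drive
```

Raising rho trades residual noise against loudspeaker drive, which is the effect the weighting parameters in the paper exploit per transducer.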
We present an adjoint gradient-based approach for maximizing the aerodynamic performance index relevant to the cruise-climb mission segment... National Aeronautics and Space Administration — Electrical power systems play a critical role in spacecraft and aircraft, and they exhibit a rich variety of failure modes. This paper discusses electrical power... Reck, G. M. Possible changes in fuel properties are identified based on current trends and projections. The effects of those changes with respect to the aircraft fuel system are examined and some technological approaches to utilizing those fuels are described. National Aeronautics and Space Administration — Phase 1 has seen the development of a revolutionary new type of sensor for making carbon dioxide (CO2) measurements from small Unmanned Aircraft Systems (UAS) and... Westenberger, A.; Bleil, J.; Arendt, M. [Airbus Deutschland GmbH, Hamburg (Germany)] The intention of using a highly integrated component based on fuel cell technology, installed on board large commercial passenger aircraft to generate onboard power for the systems demand during an entire aircraft mission, was the subject of several studies. The results of these studies have been based on the simulation of the whole system in the context of an aircraft system environment. The work began with analyses of different fuel cell technologies and of the aircraft system environment. Today onboard power is provided on the ground by an APU and in flight by the main engines. In order to compare fuel cell technology with today's usual gas turbine, operational characteristics have been analysed. A second analysis was devoted to the system demand for typical aircraft categories. The MEA (more electric aircraft) system concept was assumed in all cases.
The favourable concept represented an aircraft propelled by conventional engines with starter-generator units providing AC electrical power, covering in total approximately half of the power demand, plus a component based on fuel cell technology. This component provided electrical DC power, clean potable water, thermal energy at 180 degrees Celsius, and nitrogen-enriched air for fire suppression and fire extinguishing agent. In contrast to a usual gas-turbine-based APU, this new unit was operated as the primary power system. (orig.) The evaluation index system of aircraft survivability is constructed for the first time from three aspects: susceptibility, vulnerability, and combat resilience; the bargaining weight method is proposed to determine the weights of the indexes and to evaluate aircraft survivability. The bargaining weight method brings different opinions into accord under the constraint of minimum loss; it can overcome partial subjectivity in determining weights and lends the evaluation objectivity. The example testifies to the rationality and feasibility of the evaluation system. Overø, Helene Martine; Larsen, Allan; Røpke, Stefan The Danish innovation project entitled “Intelligent Freight Transport Systems” aims at developing prototype systems integrating public intelligent transport systems (ITS) with the technology in vehicles and equipment as well as the IT systems at various transport companies. The objective is to enhance efficiency and lower the environmental impact in freight transport. In this paper, a pilot project involving real-time waste collection at a Danish waste collection company is described, and a solution approach is proposed. The problem corresponds to the dynamic version of the waste collection... Hayhurst, Kelly J.; Maddalon, Jeffrey M.; Miner, Paul S.; Szatkowski, George N.; Ulrey, Michael L.; DeWalt, Michael P.; Spitzer, Cary R.
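The survivability abstract above does not spell out the mathematics of its bargaining weight method. The sketch below merely illustrates the general idea: reconcile several expert weightings under a minimum-loss compromise, then form a composite index over the three stated aspects. The expert proposals, scores, and the least-squares compromise rule are all hypothetical.

```python
import numpy as np

# Hypothetical expert weightings for (susceptibility, vulnerability,
# combat resilience); each row is one expert's proposal.
experts = np.array([
    [0.5, 0.3, 0.2],
    [0.4, 0.4, 0.2],
    [0.6, 0.2, 0.2],
])

# A compromise vector minimising total squared deviation from all
# proposals is their mean, renormalised to sum to 1.
w = experts.mean(axis=0)
w /= w.sum()

# Scores in [0, 1] (higher = better survivability) for two candidates.
scores = np.array([
    [0.7, 0.8, 0.6],   # aircraft A
    [0.9, 0.5, 0.7],   # aircraft B
])
index = scores @ w
print(index)   # composite survivability index per aircraft
```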
The use of unmanned aircraft in national airspace has been characterized as the next great step forward in the evolution of civil aviation. To make routine and safe operation of these aircraft a reality, a number of technological and regulatory challenges must be overcome. This report discusses some of the regulatory challenges with respect to deriving safety and reliability requirements for unmanned aircraft. In particular, definitions of hazards and their classification are discussed and applied to a preliminary functional hazard assessment of a generic unmanned system. Liou, Meng-Sing; Stewart, Mark E.; Suresh, Ambady; Owen, A. Karl This report outlines the Space Transportation Propulsion Systems for the NPSS (Numerical Propulsion System Simulation) program. Topics include: 1) a review of Engine/Inlet Coupling Work; 2) Background/Organization of Space Transportation Initiative; 3) Synergy between High Performance Computing and Communications Program (HPCCP) and Advanced Space Transportation Program (ASTP); 4) Status of Space Transportation Effort, including planned deliverables for FY01-FY06, FY00 accomplishments (HPCCP Funded) and FY01 Major Milestones (HPCCP and ASTP); and 5) a review current technical efforts, including a review of the Rocket-Based Combined-Cycle (RBCC), Scope of Work, RBCC Concept Aerodynamic Analysis and RBCC Concept Multidisciplinary Analysis. Velders, G.J.M.; Heijboer, L. C.; Kelder, H. A three-dimensional off-line tracer transport model coupled to the ECMWF analyses has been used to study the transport of trace gases in the atmosphere. The model gives a reasonable description of their general transport in the atmosphere. The simulation of the transport of aircraft emissions (as NOx) has been studied as well as the transport of passive tracers injected at different altitudes in the North Atlantic flight corridor. A large zonal variation in the NO Bridgelall, Raj; Rafert, J. 
Bruce; Tolliver, Denver The global transportation system is massive, open, and dynamic. Existing performance and condition assessments of the complex interacting networks of roadways, bridges, railroads, pipelines, waterways, airways, and intermodal ports are expensive. Hyperspectral imaging is an emerging remote sensing technique for the non-destructive evaluation of multimodal transportation infrastructure. Unlike panchromatic, color, and infrared imaging, each layer of a hyperspectral image pixel records reflectance intensity from one of dozens or hundreds of relatively narrow wavelength bands that span a broad range of the electromagnetic spectrum. Hence, every pixel of a hyperspectral scene provides a unique spectral signature that offers new opportunities for informed decision-making in transportation systems development, operations, and maintenance. Spaceborne systems capture images of vast areas in a short period but provide lower spatial resolution than airborne systems. Practitioners use manned aircraft to achieve higher spatial and spectral resolution, but at the price of custom missions and narrow focus. The rapid size and cost reduction of unmanned aircraft systems promises a third alternative that offers hybrid benefits at affordable prices by conducting multiple parallel missions. This research formulates a theoretical framework for a pushbroom type of hyperspectral imaging system on each type of data acquisition platform. The study then applies the framework to assess the relative potential utility of hyperspectral imaging for previously proposed remote sensing applications in transportation. The authors also introduce and suggest new potential applications of hyperspectral imaging in transportation asset management, network performance evaluation, and risk assessment to enable effective and objective decision- and policy-making. Tischler, Mark B.
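The claim above that every hyperspectral pixel carries a unique spectral signature can be made concrete with a standard signature-comparison measure. The spectral angle metric shown here is a common remote-sensing technique, not one drawn from the study above, and the six-band reflectance values are invented for illustration.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle (radians) between two spectral signatures; smaller = more alike."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 6-band reflectance signatures (one value per wavelength band).
asphalt_ref = np.array([0.05, 0.07, 0.09, 0.11, 0.13, 0.15])
pixel_good  = np.array([0.06, 0.08, 0.10, 0.12, 0.14, 0.16])  # intact surface
pixel_worn  = np.array([0.20, 0.19, 0.18, 0.17, 0.16, 0.15])  # weathered surface

a_good = spectral_angle(pixel_good, asphalt_ref)
a_worn = spectral_angle(pixel_worn, asphalt_ref)
print(a_good, a_worn)   # the worn pixel diverges from the reference signature
```

Because the angle ignores overall brightness, it separates material condition from illumination, which is one reason per-pixel signatures support the condition assessments described above.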
System-identification methods compose a mathematical model, or series of models, from measurements of inputs and outputs of dynamic systems. The extracted models allow the characterization of the response of the overall aircraft or of component subsystem behavior, such as actuators and on-board signal processing algorithms. This paper discusses the use of frequency-domain system-identification methods for the development and integration of aircraft flight-control systems. The extraction and analysis of models of varying complexity, from nonparametric frequency responses to transfer functions and high-order state-space representations, is illustrated using the Comprehensive Identification from FrEquency Responses (CIFER) system-identification facility. Results are presented for test data of numerous flight and simulation programs at the Ames Research Center including rotorcraft, fixed-wing aircraft, advanced short takeoff and vertical landing (ASTOVL), vertical/short takeoff and landing (V/STOL), tiltrotor aircraft, and rotor experiments in the wind tunnel. Excellent system characterization and dynamic response prediction are achieved for this wide class of systems. Examples illustrate the role of system-identification technology in providing an integrated flow of dynamic response data around the entire life cycle of aircraft development, from initial specifications, through simulation and bench testing, and into flight-test optimization. Price, D. B.; Gracey, C. The Theoretical Mechanics Branch has as one of its long-range goals to work toward solving real-time trajectory optimization problems on board an aircraft. This is a generic problem that has application to all aspects of aviation, from general aviation through commercial to military. While the overall interest is in the generic problem, specific problems are examined to achieve concrete results.
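The nonparametric frequency-response extraction mentioned above, the starting point for tools such as CIFER, can be sketched with a basic averaged cross-spectrum estimate. This is a generic textbook estimator, not CIFER's algorithm, and the first-order actuator model is an assumed example.

```python
import numpy as np

# Simulate a first-order actuator y[n] = a*y[n-1] + (1-a)*u[n] driven by
# broadband noise, then recover its frequency response nonparametrically
# as H(f) = S_uy(f) / S_uu(f) (cross spectrum over input auto spectrum).
rng = np.random.default_rng(1)
a = 0.9
N = 1 << 14
u = rng.standard_normal(N)
y = np.empty(N)
y[0] = 0.0
for n in range(1, N):
    y[n] = a * y[n - 1] + (1 - a) * u[n]

# Averaged periodogram over K segments (a basic Welch-style average).
K = 32
L = N // K
Suu = np.zeros(L, complex)
Suy = np.zeros(L, complex)
for k in range(K):
    U = np.fft.fft(u[k * L:(k + 1) * L])
    Y = np.fft.fft(y[k * L:(k + 1) * L])
    Suu += U * np.conj(U)
    Suy += Y * np.conj(U)
H_est = Suy / Suu

# Compare against the true transfer function H(z) = (1-a) / (1 - a*z^-1).
w = 2 * np.pi * np.arange(L) / L
H_true = (1 - a) / (1 - a * np.exp(-1j * w))
err = np.mean(np.abs(H_est[1:L // 2] - H_true[1:L // 2]))
print("mean FRF error:", err)
```

The parametric step (fitting transfer functions or state-space models to this nonparametric response) is where facilities like CIFER go further.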
The problem is to develop control laws that generate approximately optimal trajectories with respect to some criteria such as minimum time, minimum fuel, or some combination of the two. These laws must be simple enough to be implemented on a computer that is flown on board an aircraft, which implies a major simplification from the two-point boundary-value problem generated by a standard trajectory optimization problem. In addition, the control laws must allow for changes in end conditions during the flight and changes in weather along a planned flight path. Therefore, a feedback control law that generates commands based on the current state, rather than a precomputed open-loop control law, is desired. This requirement, along with the need for order reduction, argues for the application of singular perturbation techniques. Benoit, Michael J. The Mercer Engineering Research Center (MERC), under contract to the United States Air Force (USAF) since 1989, has been actively involved in providing the Warner Robins Air Logistics Center (WR-ALC) with a robotic workcell designed to perform automated defastening and hole location/transfer rework operations on F-15 wings. This paper describes the activities required to develop and implement this workcell, known as the Automated Aircraft Rework System (AARS). AARS is scheduled to be completely installed and in operation at WR-ALC by September 1994. Ustijana RECHKOSKA SHIKOSKA The most critical operation for an aircraft to perform is landing. In bad weather, more specifically poor visibility, landing becomes virtually impossible without instrument guidance to aid the pilot. The more extreme case occurs when the visibility is near zero and the pilot cannot land the plane manually. This situation requires an automatic landing or precision approach to be performed by the aircraft flight control system in conjunction with a landing/guidance system.
This type of guidance has been provided by the integration of the Global Positioning System (GPS) and the Inertial Navigation System (INS). Douglass, Anne R.; Rood, Richard B. Assessments of the impact of aircraft engine exhausts on stratospheric ozone levels are currently limited to 2D zonally averaged models which, while completely representing chemistry, involve high parameterization of transport processes. Prospective 3D models under development by NASA-Goddard will use winds from a data-assimilation procedure; the upper troposphere/lower stratosphere behavior of one such model has been verified by direct comparison of model simulations with satellite, balloon, and sonde measurements. Attention is presently given to the stratosphere/troposphere exchange and the nonzonal distribution of aircraft engine exhaust. Ortiz Llorente, Maria Begoña The in-flight entertainment system "Immfly" is a gate-to-gate entertainment system that provides flight passengers a new form of entertainment during their flight until they arrive at their destination. It is a solution that generates a wi-fi network inside the aircraft to which the users can connect. Hennessy, Michael J. NASA is investigating advanced turboelectric aircraft propulsion systems that use superconducting motors to drive multiple distributed turbofans. Conventional electric motors are too large and heavy to be practical for this application; therefore, superconducting motors are required. In order to improve aircraft maneuverability, variable-speed power converters are required to throttle power to the turbofans. The low operating temperature and the need for lightweight components that place a minimum of additional heat load on the refrigeration system open the possibility of incorporating extremely efficient cryogenic power conversion technology. This Phase II project is developing critical components required to meet these goals.
The notification is defined under the provisions of the regulations for execution of the aviation law. Terms for exclusive loading and containers are explained. Transportable radioactive materials hereunder exclude naturally igniting fluid materials; substances that must be contained in vessels which filter interior gas with filters or refrigerate contents with cooling devices, etc.; and BM loads requiring continuous ventilation. Radioactive materials to be conveyed as radioactive loads and L loads are prescribed in attached tables. Technical standards for radioactive loads are stipulated for L, A, BM and BU loads respectively. Confirmation of the safety of radioactive loads may be made by examination of documents prepared by persons acknowledged as proper by the Minister of Transportation. Radioactive materials are uranium 233 and 235; plutonium 238, 239 and 241; their compounds; and those materials which include one or more of such substances. Materials whose quantities or quantities of components are less than 15 grams, and natural or depleted uranium, are excluded. The maximum dose from containers with radioactive loads shall not exceed 200 millirem per hour on the surface and 10 millirem per hour at a distance of 1 meter from the surface. Confirmation of safety of transport, method of loading, prevention of criticality, restriction of mixed shipment, transport index, signals and others are provided for in detail. (Okada, K.) Straeter, T. A.; Williams, J. R. The paper describes techniques for testing and validating software for the TCV (Terminal Configured Vehicle) program, which is intended to solve problems associated with operating a commercial transport aircraft in the terminal area. The TCV research test bed is a Boeing 737 specially configured with digital computer systems to carry out automatic navigation, guidance, flight controls, and electronic displays research.
The techniques developed for time and cost reduction include automatic documentation aids, automatic software configuration, and an all-software generation and validation system. Estkowski, Regina I. (Inventor) An unmanned vehicle management system includes an unmanned aircraft system (UAS) control station controlling one or more unmanned vehicles (UV), a collaborative routing system, and a communication network connecting the UAS and the collaborative routing system. The collaborative routing system is configured to receive flight parameters from an operator of the UAS control station and, based on the received flight parameters, automatically present the UAS control station with flight plan options to enable the operator to operate the UV in a defined airspace. Arcara, P. C., Jr.; Bartlett, D. W.; Mccullers, L. A. The FLOPS aircraft conceptual design/analysis code has been used to evaluate the effects of incorporating hybrid laminar flow control (HLFC) in a 300-passenger, 6500 n. mi. range, twin-engine subsonic transport aircraft. The baseline configuration was sized to account for 50 percent chord laminar flow on the wing upper surface as well as on both surfaces of the empennage airfoils. Attention is given to the additional benefits of achieving various degrees of laminar flow on the engine nacelles, and to the horsepower extraction and initial weight and cost increments entailed by the HLFC system. The sensitivity of the results to fuel price and off-design range is also noted. ... Aeronautics and Space Administration announces a meeting of the Unmanned Aircraft Systems (UAS) Subcommittee... SPACE ADMINISTRATION NASA Advisory Council; Aeronautics Committee; Unmanned Aircraft Systems... of NASA UAS Integration into the National Airspace System (NAS) Phase 2 Activity Selection... Ardema, M. D. Sensitivity data for advanced technology transports have been systematically collected. These data have been generated in two separate studies.
In the first of these, three nominal, or base-point, vehicles designed to cruise at Mach numbers 0.85, 0.93, and 0.98, respectively, were defined. The effects on performance and economics of perturbations to basic parameters in the areas of structures, aerodynamics, and propulsion were then determined. In all cases, aircraft were sized to meet the same payload and range as the nominals. This sensitivity data may be used to assess the relative effects of technology changes. The second study was an assessment of the effect of cruise Mach number. Three families of aircraft were investigated in the Mach number range 0.70 to 0.98: straight-wing aircraft from 0.70 to 0.80; swept-wing, non-area-ruled aircraft from 0.80 to 0.95; and area-ruled aircraft from 0.90 to 0.98. At each Mach number, the values of wing loading, aspect ratio, and bypass ratio which resulted in minimum gross takeoff weight were used. As part of the Mach number study, an assessment of the effect of increased fuel costs was made. We develop a theory of transport in Hamiltonian systems in the context of iteration of area-preserving maps. Invariant closed curves present complete barriers to transport, but in regions without such curves there are still invariant Cantor sets named cantori, which appear to form major obstacles. The flux through the gaps of the cantori is given by Mather's differences in action. This gives useful bounds on transport between regions, and a universal scaling law for one-parameter families when a curve has just broken, which agree well with numerical experiments of Chirikov and explain an apparent disagreement with results of Greene. By dividing the phase space into regions separated by the strongest barriers, and assuming the motion is mixing within them, we derive a global picture of transport, which can be used, for example, to predict confinement times and to explain long-time tails in the decay of correlations. G. J. M.
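The area-preserving-map transport theory summarized above is conventionally illustrated with the Chirikov standard map: while rotational invariant circles survive they bound the momentum completely, and once the last circle breaks into a cantorus, orbits leak through its gaps and diffuse. The initial condition and iteration count below are illustrative choices.

```python
import numpy as np

def standard_map(theta, p, k, steps):
    """Iterate the Chirikov standard map, an area-preserving map."""
    traj = np.empty((steps, 2))
    for i in range(steps):
        p = p + k * np.sin(theta)            # momentum kick
        theta = (theta + p) % (2 * np.pi)    # free rotation
        traj[i] = theta, p
    return traj

# Below the critical coupling k_c ~ 0.9716 (Greene's value for the
# golden-mean circle) invariant curves bound the momentum; above it
# the surviving cantori only slow, but no longer stop, transport.
t_low = standard_map(theta=0.1, p=0.0, k=0.5, steps=5000)
t_high = standard_map(theta=0.1, p=0.0, k=2.0, steps=5000)
print("k=0.5 momentum spread:", np.ptp(t_low[:, 1]))
print("k=2.0 momentum spread:", np.ptp(t_high[:, 1]))
```

The bounded spread at k=0.5 versus the large spread at k=2.0 is the complete-barrier versus partial-barrier distinction the abstract draws.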
Velders A three-dimensional off-line tracer transport model coupled to the ECMWF analyses has been used to study the transport of trace gases in the atmosphere. The model gives a reasonable description of their general transport in the atmosphere. The simulation of the transport of aircraft emissions (as NOx) has been studied, as well as the transport of passive tracers injected at different altitudes in the North Atlantic flight corridor. Large zonal variations in the NOx concentrations, as well as large seasonal and yearly variations, were found. The altitude of the flight corridor greatly influences the amount of tracer transported into the troposphere and stratosphere. Wingrove, Earl R., III; Hees, Jing; Villani, James A.; Yackovetsky, Robert E. (Technical Monitor) Throughout U.S. history, our nation has generally enjoyed exceptional economic growth, driven in part by transportation advancements. Looking forward 25 years, when the national highway and skyway systems are saturated, the nation faces new challenges in creating transportation-driven economic growth and wealth. To meet the national requirement for an improved air traffic management system, NASA developed the goal of tripling throughput over the next 20 years, in all weather conditions while maintaining safety. Analysis of the throughput goal has focused primarily on major airline operations through the hub-and-spoke system. However, many suggested concepts to increase throughput may operate outside the hub-and-spoke system. Examples of such concepts include the Small Aircraft Transportation System, civil tiltrotor, and improved rotorcraft. Proper assessment of the potential contribution of these technologies to the domestic air transportation system requires a modeling capability that includes the country's numerous smaller airports, acting as a fundamental component of the National Airspace System, and the demand for such concepts and technologies.
Under this task for NASA, the Logistics Management Institute developed higher-fidelity demand models that capture the interdependence of short-haul air travel with other transportation modes and explicitly consider the costs of commercial air and other transport modes. To accomplish this work, we generated forecasts of the distribution of general aviation based aircraft and GA itinerant operations at each of nearly 3,000 airports, based on changes in economic conditions and demographic trends. We also built modules that estimate the demand for travel by different modes, particularly auto, commercial air, and GA. We examined GA demand from two perspectives, top-down and bottom-up, described in detail. National Aeronautics and Space Administration — Unmanned aircraft systems (UAS) can be used for scientific, emergency management, and defense missions, among others. The existing federal air regulations,... Culick, Fred E. C.; Jahnke, Craig C. Dynamical systems theory has been used to study nonlinear aircraft dynamics. A six-degree-of-freedom model that neglects gravity has been analyzed. The aerodynamic model, supplied by NASA, is for a generic swept-wing fighter and includes nonlinearities as functions of the angle of attack. A continuation method was used to calculate the steady states of the aircraft, and bifurcations of these steady states, as functions of the control deflections. Bifurcations were used to predict jump phenomena and the onset of periodic motion for roll-coupling instabilities and high-angle-of-attack maneuvers. The predictions were verified with numerical simulations. Petley, Dennis H.; Jones, Stuart C.; Dziedzic, William M. Numerical methods have been developed for the analysis of hypersonic aircraft cooling systems. A general-purpose finite difference thermal analysis code is used to determine areas which must be cooled. Complex cooling networks of series and parallel flow can be analyzed using a finite difference computer program.
Both internal fluid flow and heat transfer are analyzed, because increased heat flow causes a decrease in the flow of the coolant. The steady-state solution uses a successive point iterative method. The transient analysis uses implicit forward-backward differencing. Several examples of the use of the program in studies of hypersonic aircraft and rockets are provided. Burke, David A. One of the pillars of aviation safety is assuring sound engineering practices through airworthiness certification. As Unmanned Aircraft Systems (UAS) grow in popularity, the need for airworthiness standards and verification methods tailored for UAS becomes critical. While airworthiness practices for large UAS may be similar to those for manned aircraft, it is clear that small UAS require a paradigm shift from the airworthiness practices of manned aircraft. Although small in comparison to manned aircraft, these aircraft are not merely remote-controlled toys. Small UAS may be complex aircraft flying in the National Airspace System (NAS) over populated areas for extended durations and beyond line of sight of the operators. A comprehensive systems engineering framework for certifying small UAS at the system level is needed. This work presents a point-based tool that evaluates small UAS by rewarding good engineering practices in design, analysis, and testing. The airworthiness requirements scale with vehicle size and operational area, while allowing flexibility for new technologies and unique configurations. Elmer, James D. This curriculum guide accompanies another publication in the Aerospace Education II series entitled "Propulsion Systems for Aircraft." The guide includes specific guidelines for teachers on each chapter in the textbook.
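The "successive point iterative method" cited earlier for the steady-state thermal network can be sketched as a Gauss-Seidel sweep over nodes, each interior node relaxing to the conductance-weighted mean of its neighbours. The uniform 1-D rod, equal conductances, and boundary temperatures below are simplifying assumptions, not the referenced code.

```python
import numpy as np

# Successive point iteration for a steady-state thermal network:
# a 1-D rod of 11 nodes with its ends held at fixed temperatures.
n = 11
T = np.full(n, 400.0)          # initial guess everywhere
T[0], T[-1] = 500.0, 300.0     # boundary temperatures (kelvin)

for sweep in range(2000):
    for i in range(1, n - 1):            # update each point in place
        T[i] = 0.5 * (T[i - 1] + T[i + 1])   # equal conductances both sides

# The converged profile of a uniform rod is linear between the ends.
print(T[5])   # -> close to 400.0 (the midpoint temperature)
```

Unequal conductances, radiation terms, or coolant-flow coupling change only the per-node update, which is why the point-iterative form handles complex series and parallel networks.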
Suggestions are included for objectives (traditional and behavioral), suggested outline, orientation, suggested key points,… This paper describes work conducted in 1998 and 1999 by AEROSPATIALE MATRA on the development of an obstacle detection system, which has been tested on a demonstrator aircraft in Toulouse. The purpose of this mock-up was to verify the feasibility of a passive technology and to consider the limits of its use. Athans, M.; Willner, D. A flight control system design that can be implemented in analog hardware is presented for controlling an aircraft with uncertain parameters. The design is based upon the use of modern control theory. The ideas are illustrated by considering control of STOL longitudinal dynamics. Berg, F. van den; Eisses, A.R.; Beek, P.J.G. van A new approach for an airport noise monitoring system is presented that comprises not only a number of measuring stations, but also a dedicated sound propagation model and an aircraft noise emission model. This approach enables estimation of noise levels in the whole area around the airport, not only at the measuring stations. The contemporary transport aircraft information-communication system is extremely sophisticated. The aim of the current study is to contribute to the current knowledge of information entropy, and to show how its alteration could indicate possible errors, which may help prevent future aircraft accidents. In this study a principle model of such a system is described, consisting of two peripheral, sensory units and their central, processing units, upon which a numerical simulation is carried out. Two states of the system are defined: states of regular and irregular dynamics. Data transfer between system elements is characterised through information entropy, whose average change and accompanying standard deviation show the difference between the regular and non-regular state.
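The entropy comparison described in the abstract above can be illustrated with Shannon entropy over a sensor's symbol stream: a regular (predictable) state yields low entropy, an irregular one yields higher entropy. This is a toy sketch with invented symbol streams, not the cited study's model or data.

```python
# Shannon entropy (in bits) of the empirical symbol distribution of a stream.
from collections import Counter
from math import log2

def shannon_entropy(symbols):
    """H = -sum(p * log2(p)) over observed symbol frequencies."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

regular_state = "AB" * 50       # strictly alternating readings: 1 bit/symbol
irregular_state = "ABCD" * 25   # four equiprobable symbols: 2 bits/symbol
assert shannon_entropy(irregular_state) > shannon_entropy(regular_state)
```

In the spirit of the abstract, a sustained shift in the average entropy of the transferred data (beyond its usual standard deviation) would flag a transition from the regular to the irregular state.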
When an error of the same kind is introduced at each of the sensors, the results show a sufficiently pronounced deviation, which may make error detection by information entropy analysis possible. Banerjee, J. R. The purpose of this paper is to provide theory, results, discussion and conclusions arising from an in-depth investigation on the modal behaviour of high aspect ratio aircraft wings. The illustrative examples chosen are representative of sailplane and transport airliner wings. To achieve this objective, the dynamic stiffness method of modal analysis is used. The wing is represented by a series of dynamic stiffness elements of bending-torsion coupled beams which are assembled to form the overall dynamic stiffness matrix of the complete wing. With the cantilever boundary condition applied at the root, the eigenvalue problem is formulated and finally solved with the help of the Wittrick-Williams algorithm to yield the eigenvalues and eigenmodes, which are essentially the natural frequencies and mode shapes of the wing. Results for wings of two sailplanes and four transport aircraft are discussed and finally some conclusions are drawn. Pirrello, C. J.; Baker, A. H.; Stone, J. E. A detailed analytical study was made to investigate the effects of fuselage cross section (circular and elliptical) and the structural arrangement (integral and nonintegral tanks) on aircraft performance. The vehicle was a 200-passenger, liquid-hydrogen-fueled Mach 6 transport designed to meet a range goal of 9.26 Mm (5,000 n mi). A variety of trade studies were conducted in the areas of configuration arrangement, structural design, and active cooling design in order to maximize the performance of each of three point-design aircraft: (1) circular wing-body with nonintegral tanks, (2) circular wing-body with integral tanks and (3) elliptical blended wing-body with integral tanks. Aircraft range and weight were used as the basis for comparison.
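The bending natural frequencies that the dynamic stiffness method yields for the wing study above can be sanity-checked against the classical closed-form result for a uniform Euler-Bernoulli cantilever. The beam properties below are illustrative values of my own choosing, not data from the paper.

```python
# First bending natural frequencies (Hz) of a uniform cantilever beam:
# f_n = (beta_n * L)^2 / (2*pi) * sqrt(E*I / (rho*A*L^4)),
# where beta_n*L are the roots of cos(bL)*cosh(bL) = -1.
from math import pi, sqrt

def cantilever_frequencies(E, I, rho, A, L, n_modes=3):
    """Return the first n_modes bending natural frequencies in Hz."""
    beta_L = [1.8751, 4.6941, 7.8548]            # first three roots
    return [(bl ** 2 / (2 * pi)) * sqrt(E * I / (rho * A * L ** 4))
            for bl in beta_L[:n_modes]]

# Illustrative aluminium beam, roughly wing-spar-like in slenderness:
freqs = cantilever_frequencies(E=70e9, I=1e-4, rho=2700.0, A=0.01, L=10.0)
```

A useful check is that the ratio of successive frequencies depends only on the (beta_n L) roots, so f2/f1 is about 6.27 for any uniform cantilever; coupled bending-torsion models like the paper's depart from these values as the coupling grows.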
The resulting design and performance characteristics show that the blended-body integral tank aircraft weighs the least and has the greatest range capability; however, producibility and maintainability factors favor nonintegral tank concepts. Aircraft dispatch reliability was the main subject of this research, in the wider context of aircraft reliability. The factors affecting dispatch reliability, aircraft delay, causes of aircraft delays, and aircraft delay costs and magnitudes were examined. Delay cost elements and aircraft delay scenarios were also studied. It concluded that aircraft dispatch reliability is affected by technical and non-technical factors, and that the former are under the designer's control. It showed that ... National Aeronautics and Space Administration — It is proposed to develop an accurate in-service aircraft engine life monitoring system for the prediction of remaining component and system life for aircraft engines.... ...: Meeting Notice of RTCA Special Committee 203, Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this... Document--Operational Functional Requirements and Safety Objectives for Unmanned Aircraft Systems and... Federal Aviation Administration Twenty-Fourth Meeting: RTCA Special Committee 203, Unmanned... At large international airports, aircraft can be refuelled either by fuel trucks or using dedicated underground pipeline systems. The latter, hydrant refuelling, is considered to be an optimal fuelling method as it increases safety, shortens the aircraft turnaround time and cuts overall costs. However, at smaller airports, implementation of this system can lead to high investment costs. Thus, the paper discusses the airport size from which this system may be efficient to implement. Various definitions of the term "airport size" are assessed. Based on the data collected, a hydrant system model is created within the paper.
As a result, a methodology for assessing the suitability of hydrant system implementation is set out. This methodology can be used at any airport using three simple inputs. Bond, E. Q.; Carroll, E. A.; Flume, R. A. A comparison is made between airplane productivity and utilization levels derived from commercial airline-type schedules which were developed for two subsonic and four supersonic cruise speed aircraft. The cruise speed component is the only difference between the schedules, which are based on 1995 passenger demand forecasts. Productivity-to-speed relationships were determined for three discrete route systems: North Atlantic, Trans-Pacific, and North-South America. Selected combinations of these route systems were also studied. Other areas affecting the productivity-to-speed relationship, such as aircraft design range and scheduled turn time, were examined. The safety of spent fuel transport casks in severe accident conditions is always a matter of concern. This paper surveys German missile impact tests that have been carried out in the past to demonstrate that German cask designs for transport and interim storage are safe even under the conditions of an aircraft crash impact. A fire test with a cask beside an exploding propane vessel and temperature calculations concerning prolonged fires also show that the casks have reasonably good safety margins in thermal accidents beyond regulatory fire test conditions. (author) Smith, Jeremy C.; Viken, Jeffrey K.; Guerreiro, Nelson M.; Dollyhigh, Samuel M.; Fenbert, James W.; Hartman, Christopher L.; Kwa, Teck-Seng; Moore, Mark D. Electric propulsion and autonomy are technology frontiers that offer tremendous potential to achieve low operating costs for small aircraft. Such technologies enable simple, safe-to-operate vehicles that could dramatically improve regional transportation accessibility and speed through point-to-point operations.
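The productivity-to-speed relationship examined in the schedule study above can be sketched with back-of-the-envelope arithmetic: productivity per aircraft-day is seats times stage length times the number of trips the utilization window allows, and fixed turn time erodes the advantage of higher cruise speed. All numbers below are illustrative assumptions, not values from the study.

```python
# Daily productivity (seat-nautical-miles per aircraft) vs. cruise speed.

def daily_productivity(seats, stage_nm, block_speed_kt, turn_h, util_h=14.0):
    """Seat-nm produced per aircraft per day for a repeated stage."""
    trip_time_h = stage_nm / block_speed_kt + turn_h   # flight + ground turn
    trips_per_day = util_h / trip_time_h
    return seats * stage_nm * trips_per_day

subsonic = daily_productivity(seats=200, stage_nm=3000, block_speed_kt=480, turn_h=1.0)
supersonic = daily_productivity(seats=200, stage_nm=3000, block_speed_kt=1200, turn_h=1.0)
```

With these assumed inputs the 2.5x faster aircraft gains only about 2.1x in productivity, because the fixed turn time takes a growing share of each trip; this is the kind of effect the study's scheduled-turn-time trade examines.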
This analysis develops an understanding of the potential traffic volume and National Airspace System (NAS) capacity for small on-demand aircraft operations. Future demand projections use the Transportation Systems Analysis Model (TSAM), a tool suite developed by NASA and the Transportation Laboratory of Virginia Polytechnic Institute. Demand projections from TSAM contain the mode of travel, number of trips and geographic distribution of trips. For this study, the mode of travel can be commercial aircraft, automobile or on-demand aircraft. NASA's Airspace Concept Evaluation System (ACES) is used to assess NAS impact. This simulation takes a schedule that includes all flights (commercial passenger and cargo, conventional general aviation, and on-demand small aircraft) and operates them in the simulated NAS. The results of this analysis project very large trip numbers for an on-demand air transportation system competitive with automobiles in cost per passenger mile. The significance is that this type of air transportation can enhance mobility for communities that currently lack access to commercial air transportation. Another significant finding is that the large number of operations can have an impact on the current NAS infrastructure used by commercial airlines and cargo operators, even if on-demand traffic does not use the 28 airports in the Continental U.S. designated as large hubs by the FAA. Some smaller airports will experience greater demand than their current capacity allows and will require upgrading. In addition, in future years, as demand grows and vehicle performance improves, other non-conventional facilities such as short runways incorporated into ...: Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 203: Unmanned Aircraft Systems. DATES: The meeting will be held October 19-21... Special Committee 203: Unmanned Aircraft Systems meeting.
The agenda will include: Tuesday, October 19 9... ... Federal Aviation Administration Nineteenth Meeting: RTCA Special Committee 203: Unmanned Aircraft Systems...: Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 203: Unmanned Aircraft Systems. DATES: The meeting will be held May 17-19,... ...: Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 203: Unmanned Aircraft Systems. DATES: The meeting will be held June 8-10, 2010... given for a Special Committee 203: Unmanned Aircraft Systems meeting. The agenda will include:... ...: Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of a meeting of RTCA Special Committee 203: Unmanned Aircraft Systems. DATES: The meeting will be held February 16-18...: Unmanned Aircraft Systems meeting. The agenda will include: February 16, 2010 Opening Plenary... ... carrier operations and maintenance program (14 CFR part 43, 14 CFR part 91, 14 CFR part 121). (b) Each... § 141.804 Aircraft water system operations and maintenance plan. (a) Each air carrier must develop and implement an aircraft water system operations and maintenance plan for each aircraft water system that... This Individual Research Project (IRP) is an extension of the group design project (GDP) work in which the author participated during his MSc programme. The GDP objective was to complete the conceptual design of a 200-seat flying-wing civil airliner, FW-11. The next generation of aircraft design demands higher reliability, safety and maintainability. As vehicle hydraulic system technology develops, the equipment and systems become more and more complex, and their reliability...
In passing the Nuclear Waste Policy Act of 1982 (NWPA), the United States Congress initiated a systematic process for addressing the national problem of what to do with the growing inventory of high-level nuclear waste and spent fuel. In addition to requiring development of geologic repositories, the NWPA directed the Secretary of Energy to perform a detailed study of the need for, and the feasibility of, monitored retrievable storage (MRS) and to submit to Congress a proposal for construction of one or more MRS facilities. As a third element of the disposal system, the NWPA also directed the development of the transportation capability to ship the nuclear wastes from the points of origin (chiefly reactors at commercial power plants) to the facilities developed under the NWPA. The Office of Civilian Radioactive Waste Management (OCRWM) of the Department of Energy (DOE) was created to manage the overall disposal program. Within OCRWM, the Office of Storage and Transportation Systems (OSTS) is responsible for developing the mandated proposal for an MRS facility, establishing the transportation capability to support the disposal operation, and directing the integrated development of system components so that the entire waste system functions in an optimized way. This paper deals only peripherally with the DOE proposal for an MRS facility since an in-depth paper on that program will be delivered at a later session of this meeting. The primary focus of this discussion is the program that OCRWM is developing to ensure the availability of a safe, efficient transportation system for shipping under provisions of the NWPA ... an Airline Transport Pilot Certificate J Appendix J to Part 141 Aeronautics and Space FEDERAL... Airline Transport Pilot Certificate 1. Applicability. This appendix prescribes the minimum curriculum for an aircraft type rating course other than an airline transport pilot certificate, for: (a) A... 
Bin ZHANG; Bao-guo YAO; Ying-lin KE A novel 6-degree of freedom (DOF) posture alignment system, based on 3-DOF positioners, is presented for the assembly of aircraft wings. Each positioner is connected with the wing through a rotational and adsorptive half-ball shaped end-effector, and the positioners together with the wing are considered as a 3-PPPS (P denotes a prismatic joint and S denotes a spherical joint) redundantly actuated parallel mechanism. The kinematic model of this system is established and a trajectory planning method is introduced. A complete analysis of inverse dynamics is carried out with the Newton-Euler algorithm, which is used to find the desired actuating torque in the design and path planning phase. Simulation analysis of the displacement and actuating torque of each joint of the positioners based on inverse kinematics and dynamics is conducted, and the results show that the system is feasible for the posture alignment of aircraft wings. This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. 
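The inverse kinematics of the wing posture alignment system described above reduces, for each 3-DOF positioner, to placing its ball-joint end-effector at the rigidly transformed location of the wing-side attachment point. The sketch below illustrates that step only; the attachment geometry and pose are hypothetical, not the paper's.

```python
# Inverse kinematics sketch for wing posture alignment: given a desired wing
# pose (rotation R, translation t), each positioner's three prismatic joints
# must drive its ball joint to the transformed attachment point R @ p + t.
import numpy as np

def positioner_targets(R, t, attachment_points):
    """Return the XYZ target of each positioner end-effector."""
    return [R @ p + t for p in attachment_points]

# Wing-frame attachment points of three positioners (metres, assumed):
points = [np.array([0.0, 0.0, 0.0]),
          np.array([4.0, 0.0, 0.0]),
          np.array([2.0, 3.0, 0.0])]
theta = np.deg2rad(10.0)                      # small roll adjustment about x
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta), np.cos(theta)]])
t = np.array([0.0, 0.0, 0.5])                 # raise the wing 0.5 m
targets = positioner_targets(R, t, points)
```

Because the transform is rigid, the pairwise distances between the three targets equal those between the attachment points, which is a convenient consistency check before commanding the redundantly actuated mechanism.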
The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban areas, electric and fuel cell passenger cars, high-speed tilting trains, High Speed Rail (HSR), Transrapid Maglev (TRM), the Evacuated Tube Transport (ETT) system, advanced commercial subsonic and Supersonic Transport Aircraft (STA), and conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans... Cherry, G. W. Consideration of the problems facing air transport at present and to be expected in the future. In the Northeast Corridor these problems involve community acceptance, airway and airport congestion and delays, passenger acceptance, noise reduction, and improvements in low-density short-haul economics. In the development of a superior short-haul operating system, terminal-configured vs. cruise-configured vehicles are evaluated. CTOL, STOL, and VTOL aircraft of various types are discussed. In the field of noise abatement, it is shown that flight procedural techniques are capable of supplementing 'quiet engine' technology. National Aeronautics and Space Administration — Hybrid turboelectric aircraft with gas turbines driving electric generators connected to electric propulsion motors have the potential to transform the aircraft... Laue, Jay H. This document is the final report by the Science Applications International Corporation (SAIC) on contracted support provided to the National Aeronautics and Space Administration (NASA) under Contract NAS8-99060, 'Space Transportation Systems Technologies'. This contract, initiated by NASA's Marshall Space Flight Center (MSFC) on February 8, 1999, was focused on space systems technologies that directly support NASA's space flight goals. It was awarded as a Cost-Plus-Incentive-Fee (CPIF) contract to SAIC, following a competitive procurement via NASA Research Announcement, NRA 8-21. This NRA was specifically focused on tasks related to Reusable Launch Vehicles (RLVs).
Through Task Area 3 (TA-3), "Other Related Technology," of this NRA contract, SAIC extensively supported the Space Transportation Directorate of MSFC in effectively directing, integrating, and setting its mission, operations, and safety priorities for future RLV-focused space flight. Following an initially contracted Base Year (February 8, 1999 through September 30, 1999), two option years were added to the contract. These were Option Year 1 (October 1, 1999 through September 30, 2000) and Option Year 2 (October 1, 2000 through September 30, 2001). This report reviews SAIC's accomplishments for the Base Year, Option Year 1, and Option Year 2, and summarizes the support provided by SAIC to the Space Transportation Directorate, NASA/MSFC. ... Federal Aviation Administration Twentieth Meeting: RTCA Special Committee 203, Unmanned Aircraft Systems... RTCA Special Committee 203, Unmanned Aircraft Systems. SUMMARY: The FAA is issuing this notice to advise the public of the twentieth meeting of RTCA Special Committee 203, Unmanned Aircraft... The software house Oy Fision Ltd decided to develop a custom information system to support the continuing airworthiness management of general aviation aircraft. The project was put in motion as a master's thesis project by the author. The helicopter continuing airworthiness management and maintenance company Helitech Oy became a partner in the project. The paper starts by introducing the regulations for continuing airworthiness management and the requirements they impose on the daily work of a conti... Darnell, Bart W. Unmanned aircraft systems (UAS) have been a part of aviation since the beginnings of manned aviation and have become a vital tool of our overseas military and national security operations. Public and private sector interest continues to grow for UAS to be used in a variety of domestic missions, such as border patrol, law enforcement, and search and rescue.
With growing concerns over issues such as border security and critical infrastructure protection, it would seem that UAS would be a logical ... Fighter aircraft pilots may experience risks other than cosmic radiation exposure due to the characteristics of a typical fighter flight. The combined risks for fighter pilots due to G-forces, hypobaric hypoxia, cosmic radiation exposure, etc. have determined that pregnant female pilots should remain on the ground. However, several military transport missions can be considered equivalent to an ordinary civil aircraft flight, and the question arises whether a pregnant female pilot could be part of the aircrew. In this work the cosmic radiation dose received in several transport missions was estimated. Typical transport missions carried out in one month by a single air squad were considered. The flights departed from Lisbon to areas such as the Azores, several countries in central and southern Africa, the western coast of the USA, and the Balkans, and an estimate of the cosmic radiation dose received on each flight was carried out. A monthly average cosmic radiation dose to the aircrew was determined and the dose values obtained were discussed in relation to the limits established by the European Union Council Directive 96/29/Euratom. The cosmic ray dose estimates were performed using the EPCARD v3.2 and the CARI-6 computing codes. EPCARD v3.2 was kindly made available by GSF-National Research Centre for Environment and Health, Institute of Radiation Protection (Neuherberg, Germany). CARI-6 (version July 7th, 2004) was downloaded from the web site of the Civil Aerospace Medical Institute, Federal Aviation Administration (USA). In this work an estimate of the cosmic radiation dose received by military aircraft crew on realistic typical transport missions is made.
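The monthly-dose bookkeeping in the study above amounts to summing per-flight dose estimates (which the authors obtained from EPCARD and CARI-6) over the month's roster and comparing the total with the regulatory limits. The sketch below shows only that bookkeeping step; every dose value in it is invented for illustration and is not from the study.

```python
# Summing assumed per-flight cosmic-ray dose estimates over a month's roster.

FLIGHT_DOSE_USV = {                 # per-flight effective dose, microsieverts (assumed)
    "Lisbon-Azores": 8.0,
    "Lisbon-Central Africa": 30.0,
    "Lisbon-USA West Coast": 60.0,
    "Lisbon-Balkans": 12.0,
}

def monthly_dose_msv(flights_flown):
    """Total monthly effective dose in mSv for a list of route names."""
    return sum(FLIGHT_DOSE_USV[r] for r in flights_flown) / 1000.0

month = ["Lisbon-Azores"] * 4 + ["Lisbon-Balkans"] * 2 + ["Lisbon-USA West Coast"]
dose = monthly_dose_msv(month)      # 0.116 mSv for this assumed roster
```

Extrapolating the monthly total to a year gives the figure that would be set against the dose constraints of Directive 96/29/Euratom, including the stricter constraint that applies once a pregnancy is declared.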
(author) Gratton, GB; Porteous, TC The UK microlight aircraft community, under the guidance of the British Microlight Aircraft Association (BMAA), has developed a formalised system for the training and qualification of civil test pilots on this class of aircraft. This system is unique in Britain, where most of the rest of the industry relies upon a pool of military-trained test aircrew, most of whom have no experience of microlight aircraft. This paper describes the system operated by the BMAA for the training and qualifica... Żurek Józef; Grzesik Norbert; Kurpas Jakub The paper describes a control support method for Zlin 143Lsi aircraft engine work parameters, with hourly fuel flow as the main factor under consideration. The method concerns a project for an aircraft throttle control support system using fuzzy logic (fuzzy inference). The primary purpose of the system is aircraft performance optimization, reducing flight cost and at the same time supporting proper aircraft engine maintenance. Matlab Software and the Fuzzy Logic Toolbox were used in the project. Work of the sy... Cestino, Enrico; Borello, Fabio; Romeo, Giulio Fuel cells could become the main power source for small general aviation aircraft or could replace the APU and internal sub-systems on larger aircraft, to obtain all-electric or more-electric air vehicles. There are several potential advantages of using such a power source, ranging from environmental and economic issues to performance and operability aspects. A preliminary design is reported. The paper also contains a description of testing activities related to experimental flights of an all...
Ryder, Claire; Highwood, Ellie; Rosenberg, Phil; Trembath, Jamie; Brooke, Jennifer; Bart, Mark; Dean, Angela; Dorsey, James; Crosier, Jonny; McQuaid, Jim; Brindley, Helen; Banks, James; Marsham, John; Sodemann, Harald; Washington, Richard Measurements of Saharan dust from recent airborne campaigns have found variations in size distributions and optical properties across Saharan and sub-Saharan Africa. These variations have an impact on radiation and thus weather and climate, and are important to characterise and understand, in particular how they vary with time after dust uplift, with transport, and with height in the atmosphere. New in-situ aircraft measurements from the Fennec 2011 aircraft campaign over a remote part of the Sahara Desert and the Atlantic Ocean will be presented and compared to previous airborne measurements. Size distributions extending to 300 μm will be shown, representing measurements extending further into the coarse mode than previously published for Saharan dust. The dust sampled by the aircraft covered a wide variety of loadings, dust source regions (Mali, Mauritania and Algeria) and dust ages (from fresh uplift to several days old). A significant coarse mode was present in the size distribution measurements, with effective diameters up to 23 μm, and the mean size distribution showed greater concentrations of coarse mode than previous aircraft measurements. Single scattering albedo (SSA) values at 550 nm calculated from these size distributions revealed high absorption, ranging from 0.77 to 0.95, with a mean of 0.85. Directly measured SSA values were higher (0.91 to 0.99), but new instrumentation revealed that these direct measurements, taken behind Rosemount inlets, overestimate the SSA by 0.02 to 0.20 depending on the concentration of coarse particles present. This is caused by inlet inefficiencies and pipe losses. Previous measurements of SSA from aircraft may also have been overestimates for this reason.
This has a significant impact on atmospheric heating rates. The largest dust particles were encountered closest to the ground, and were most abundant in cases where dust was freshly uplifted. Number concentration, mass loading and extinction coefficient showed inverse John P.T. Mo The Australian Defence Force and industry are undergoing significant changes in the way they work together in capability enhancement programs. There are capability gaps in maintaining and supporting current obligations during major asset acquisition, which has migrated into the front line of Royal Air Force Fighter Groups as a new capability. This paper examines a steady-state support solution and argues that in order to change from one support solution to a new architecture there must be a period of transition, which may need its own interim business model and operational service. A preliminary study of several existing support solutions reveals the generic elements that need to be parameterized and traced through the support system architecture trajectory. Zertuche, Tony; Mckinnie, James Three missions have been identified by NASA for a Space Shuttle-supplementing Alternate Transportation System (ATS) encompassing combinations of booster vehicles, crew modules, and service modules: (1) to achieve manned access to orbit for Space Station crew rotation every 90 days, (2) the lofting of a logistics module resupplying the Space Station every 180 days, and (3) the simultaneous launch of both crews and logistics to the Space Station. A reentry glider is considered, in conjunction with the Space Shuttle's unmanned cargo version and the Apollo manned capsule, as an important ATS element. The Titan IV/NUS is used as a booster. Mukhopadhyay, Vivek; Welstead, Jason R.; Quinlan, Jesse R.; Guynn, Mark D. The structural configuration of an advanced aircraft fuselage concept is investigated.
This concept is characterized by a double-bubble section fuselage with rear-mounted engines. Based on lessons learned from structural systems analysis of unconventional aircraft, high-fidelity finite-element models (FEM) are developed for evaluating the structural performance of three double-bubble section configurations. Structural sizing and stress analysis are applied for design improvement and weight reduction. Among the three double-bubble configurations, the double-D cross-section fuselage design was found to have a relatively lower structural weight. The structural FEM weights of these three double-bubble fuselage section concepts are also compared with several cylindrical fuselage models. Since these fuselage concepts differ in size, shape and material, the fuselage structural FEM weights are normalized by the corresponding passenger floor area for a relative comparison. This structural systems analysis indicates that an advanced composite double-D section fuselage may have a relative structural weight advantage over a conventional aluminum fuselage. Ten commercial and conceptual aircraft fuselage structural weight estimates, which are empirically derived from the corresponding maximum takeoff gross weight, are also presented and compared with the FEM-based estimates for possible correlation. A conceptual full-vehicle FEM model with a double-D fuselage is also developed for preliminary structural analysis and weight estimation. Objectives: For the purpose of flight safety, military aircrew must be healthy. P-wave dispersion (PWD) is the difference in P-wave duration in an electrocardiographic (ECG) examination and represents the risk of developing atrial fibrillation. In the study we aimed at investigating PWD in healthy military aircrew who reported for periodical examinations. Material and Methods: Seventy-five asymptomatic military aircrew were enrolled in the study.
All the subjects underwent physical, radiologic and biochemical examinations, and 12-lead electrocardiography. P-wave dispersions were calculated. Results: The mean age of the study participants was 36.15±8.97 years and the mean P-wave duration was 100.8±12 ms in the whole group. Forty-seven subjects were non-pilot aircrew, and 28 were pilots. Thirteen study subjects were serving in jets, 49 in helicopters, and 13 were transport aircraft pilots. Thirty-six of the helicopter and 11 of the transport aircraft aircrew were non-pilot aircrew. P-wave dispersion was the lowest in the transport aircraft aircrew, and the highest in jet pilots. P-wave dispersions were similar in the pilots and non-pilot aircrew. Twenty-three study subjects were overweight, 19 had thyroiditis, 26 had hepatosteatosis, 4 had hyperbilirubinemia, 2 had hypertension, and 5 had hyperlipidemia. The PWD was significantly associated with thyroid-stimulating hormone (TSH) levels. Serum uric acid levels were associated with P-wave durations. Serum TSH levels were the most important predictor of PWD. Conclusions: While TSH levels were associated with PWD, uric acid levels were associated with P-wave duration in the military aircrew. The jet pilots had higher PWDs. These findings reveal that military jet pilots may have a higher risk of developing atrial fibrillation, and PWD should be recorded during periodical examinations. This book gives a systematic application of the methods of physical kinetics to phonon systems. The results presented are of direct relevance to materials whose transport and other properties are dominated by phonons. This class of materials includes most common dielectrics as well as such unusual substances as He-II, glasses and some semiconductors. The theory is presented in its rigorous mathematical formulation, and qualitative physical reasoning is given only to elucidate some of the results thus obtained.
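The P-wave dispersion metric discussed in the aircrew study above is computationally simple: it is the spread between the longest and shortest P-wave duration measured across the 12 ECG leads. The per-lead durations below are invented for illustration, not study data.

```python
# P-wave dispersion (PWD) from per-lead P-wave durations: PWD = Pmax - Pmin.

def p_wave_dispersion(durations_ms):
    """Return the dispersion (ms) across the measured leads."""
    return max(durations_ms) - min(durations_ms)

leads_ms = [102, 98, 110, 95, 100, 104, 99, 101, 97, 103, 108, 96]  # 12 leads
pwd = p_wave_dispersion(leads_ms)   # 110 - 95 = 15 ms
```

Because PWD is a max-minus-min statistic, it is sensitive to measurement quality in any single lead, which is one reason studies report it alongside the mean P-wave duration.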
An introductory chapter, containing the derivation of phonon spectra in the harmonic approximation and the perturbative treatment of anharmonicity, as well as the fundamentals of physical kinetics, makes the text accessible for those who enter this field as beginners. Subsequent chapters deal with heat transport, second sound, dielectric losses, sound attenuation, etc. The basic equations of phonon hydrodynamics and the superdiffusion equation are derived and solved for specific cases. The application of sophisticated field-theoretical methods (Kubo formula, Feynman diagrams) is limited and delegated to an appendix, because they only exceptionally go beyond what ordinary quantum-mechanical perturbation theory or the Boltzmann equation provide for the systems under consideration. The author's preference for the less formal approach gives the reader a grip on the physical significance of the assumptions involved and thus of the limits of validity of the theory. (Auth.) Unscheduled aircraft maintenance causes a lot of problems and costs for aircraft operators. This is because aircraft incur significant costs if flights have to be delayed or canceled, and because spares are not always available at any place and sometimes have to be shipped across the world. Reducing the amount of unscheduled maintenance is thus a major cost factor for aircraft operators. This thesis describes three methods for aircraft health monitoring and prediction; one method fo... Leavitt, L. D.; Washburn, A. E.; Wahls, R. A. NASA has had a long history in fundamental and applied high-lift research. Current programs provide a focus on the validation of technologies and tools that will enable extremely short takeoff and landing coupled with efficient cruise performance, simple flaps with flow control for improved effectiveness, circulation control wing concepts, some exploration into new aircraft concepts, and partnership with the Air Force Research Lab in mobility.
Transport high-lift development testing will shift more toward mid and high Rn facilities, at least until the question "How much Rn is required?" is answered. This viewgraph presentation provides an overview of high-lift research at NASA. Purpose: To enable automatic transportation of nuclear substances with the transportation distance freely settable, even over long distances, facilitating the automation of the transportation and decreasing the space required for the direction-change section of the transporting path. Constitution: A transporting vehicle having a pair of permanent magnets or ferromagnetic bodies mounted with a predetermined gap to each other along the transporting direction is provided in the transporting path, which includes a bent direction-change section, for transporting specimens such as nuclear materials; a plurality of driving vehicles having permanent magnets or ferromagnetic bodies for magnetically attracting the transporting vehicle from outside of the transporting path are arranged on the outside of the transporting path. At least one of the driving vehicles is made to run along the transporting direction of the transporting path by a driving mechanism incorporating a running section such as an endless chain to drive the transportation vehicle, and the transporting vehicle is successively driven by each of the driving mechanisms. (Kawakami, Y.) The primary objective of this thesis was to study, implement, and test low-cost electronic flight control systems (FCS) in remotely piloted subscale research aircraft with relaxed static longitudinal stability. Even though this implementation was carried out in small, simplified test-bed aircraft, it was designed with the aim of being installed later in more complex demonstrator aircraft such as the Generic Future Fighter concept demonstrator project. The recent boom of the unmanned aircraft ... 
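The stability problem such a flight control system must solve can be illustrated with a toy model (not taken from the thesis; all values are illustrative): a statically unstable pitch mode x' = lam*x + u with lam > 0 diverges on its own, but simple proportional feedback u = -k*x stabilizes it whenever k > lam.

```python
# Toy illustration of relaxed static stability (illustrative values,
# not from the thesis): an unstable open-loop mode is stabilized by
# simple proportional state feedback.

def simulate(lam, k, x0, dt=0.01, steps=1000):
    """Euler-integrate x' = lam*x + u with feedback u = -k*x."""
    x = x0
    for _ in range(steps):
        u = -k * x          # flight-control feedback command
        x += dt * (lam * x + u)
    return x

unstable = simulate(lam=2.0, k=0.0, x0=0.1)    # open loop: diverges
stabilized = simulate(lam=2.0, k=5.0, x0=0.1)  # closed loop: decays toward zero
```

In a real FCS the feedback law is multi-variable and gain-scheduled; this sketch only shows why continuous electronic feedback becomes indispensable once static stability is relaxed.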
Kavehrad, Mohsen; Fadlullah, Jarir This paper focuses on leveraging the progress in semiconductor technologies to facilitate production of efficient light-based in-flight entertainment (IFE), distributed sensing, navigation and control systems. We demonstrate the ease of configuring "engineered pipes" using cheap lenses, etc. to achieve simple linear transmission capacity growth. Investigation of energy-efficient, miniaturized transceivers will create a wireless medium, for both inter- and intra-aircraft use, providing enhanced security and improved quality-of-service for communications links in greater harmony with onboard systems. The applications will seamlessly inter-connect multiple intelligent devices in a network that is deployable for aircraft navigation systems, onboard sensors and entertainment data delivery systems, and high-definition audio-visual broadcasting systems. Recent experimental results on a high-capacity infrared (808 nm) system are presented. The light source can be applied in a hybrid package along with a visible lighting LED for both lighting and communications. Also, we present a pragmatic combination of light communications through "Spotlighting" and existing onboard power-lines. It is demonstrated in detail that a high-capacity IFE visible light system communicating over existing power-lines (VLC/PLC) may lead to savings in many areas through reduction of size, weight and energy consumption. This paper addresses the challenges of integrating optimized optical devices in the variety of environments described above, and presents mitigation and tailoring approaches for a multi-purpose optical network. Inteligentni transportni sistemi pri načrtovanju in usklajevanju gibanja in parkiranja letal na ploščadi letališča [Intelligent transportation systems in the planning and coordination of aircraft traffic at the airport apron]: Pavlin, Stanislav; Roguljić, Slavko Airport aprons are areas for aircraft handling, parking and maintenance. 
According to international rules the number of positions on the apron has to be at least equal to the number of aircraft staying at any one time at the airport. The air traffic at Split Airport increased rapidly in the mid-90s when it became the UN logistics base for Bosnia and Herzegovina. There were neither the means nor the free space for further expansion of the apron, so the traffic had to be reorganised and re-coordinated. Alter... This publication presents selected aspects of the wide spectrum of Unmanned Aircraft Systems (UAS)/UAV adaptation within military structures. Drawing on the author's many years of experience within the national and NATO Integrated Air Defence Command and Control System, the paper also relates to Airspace Management (ASM) in the light of present and future use of UAS in this environment. Wider and wider application of UAS in many areas of human life as w... Dinallo, Michael Anthony; Lopez, Christopher D. An aircraft wire systems laboratory has been developed to support technical maturation of diagnostic technologies being used in the aviation community for detection of faulty attributes of wiring systems. The design and development rationale of the laboratory is based in part on documented findings published by the aviation community. The main resource at the laboratory is a test bed enclosure that is populated with aged and newly assembled wire harnesses that have known defects. This report provides the test bed design and harness selection rationale, harness assembly and defect fabrication procedures, and descriptions of the laboratory for usage by the aviation community. 
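Several wiring-diagnostic technologies of the kind matured on such test beds, reflectometry in particular, locate a defect from the round-trip time of a pulse reflected at the fault. A minimal sketch of that distance calculation (the velocity factor and timing values are illustrative assumptions, not figures from the report):

```python
# Reflectometry-style fault location (illustrative sketch):
# distance = v * t / 2, where v is the signal propagation velocity in
# the wire and t is the round-trip delay of the reflected edge.

C = 299_792_458.0  # speed of light in vacuum, m/s

def fault_distance(round_trip_s, velocity_factor=0.7):
    """Distance to an impedance discontinuity from a TDR round-trip time.

    velocity_factor: fraction of c at which signals travel in the
    harness (insulated wiring is typically around 0.6-0.8; assumed).
    """
    v = velocity_factor * C
    return v * round_trip_s / 2.0

# A reflection arriving 100 ns after launch puts the defect ~10.5 m away.
print(round(fault_distance(100e-9), 2))  # 10.49
```

The same arithmetic underlies time-, frequency- and spread-spectrum reflectometry; the methods differ mainly in how robustly the delay t is estimated on a live or noisy harness.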
Jacob, Daniel James; Crawford, James; Kleb, Mary; Connors, Vickie; Bendura, Richard; Raper, James; Sachse, Glen; Gille, John; Emmons, Louisa; Heald, Colette The NASA Transport and Chemical Evolution over the Pacific (TRACE-P) aircraft mission was conducted in February–April 2001 over the NW Pacific (1) to characterize the Asian chemical outflow and relate it quantitatively to its sources and (2) to determine its chemical evolution. It used two aircraft, a DC-8 and a P-3B, operating out of Hong Kong and Yokota Air Force Base (near Tokyo), with secondary sites in Hawaii, Wake Island, Guam, Okinawa, and Midway. The aircraft carried instrumentation f... de Castro, Helena V. The blended-wing-body (BWB) configuration appears as a promising contender for the next generation of large transport aircraft. Although the idea of blending the wing with the fuselage and eliminating the tail is not new, it has long been known that tailless aircraft can suffer from stability and control problems that must be addressed early in the design. This thesis is concerned with identifying and then evaluating the flight dynamics, stability, flight controls and handling qualities of a generic B... Howell, Charles T., III Research is needed to determine what procedures, aircraft sensors and other systems will be required to allow Unmanned Aerial Systems (UAS) to safely operate with manned aircraft in the National Airspace System (NAS). This paper explores the use of Unmanned Aerial System (UAS) Surrogate research aircraft to serve as platforms for UAS systems research, development, and flight testing. These aircraft would be manned with safety pilots and researchers, allowing flight operations almost anywhere in the NAS without the need for a Federal Aviation Administration (FAA) Certificate of Authorization (COA). With pilot override capability, these UAS Surrogate aircraft would be controlled from ground stations like true UASs. 
It would be possible to file and fly these UAS Surrogate aircraft in the NAS with normal traffic, and they would be better platforms for real-world UAS research and development than existing vehicles flying in restricted ranges or other sterilized airspace. These UAS Surrogate aircraft could be outfitted with research systems as required, such as computers, state sensors, video recording, data acquisition, data link, telemetry, instrumentation, and Automatic Dependent Surveillance-Broadcast (ADS-B). These surrogate aircraft could also be linked to onboard or ground-based simulation facilities to further extend UAS research capabilities. Potential areas for UAS Surrogate research include the development, flight test and evaluation of sensors to aid in the process of air traffic "see-and-avoid". These and other sensors could be evaluated in real time and compared with onboard human evaluation pilots. This paper examines the feasibility of using UAS Surrogate research aircraft as test platforms for a variety of UAS-related research. ... (65 FR 19477-78) or you may visit http://DocketsInfo.dot.gov . ] Docket: To read background documents... and Procedures of the Department of Transportation (DOT) (44 FR 1134, February 26, 1979) provide that... Aircraft and Airmen for the Operation of Light-Sport Aircraft" (Sport Pilot Rule) (69 FR 44772, July... The Nuclear Waste Policy Act of 1982 (NWPA), as amended, authorized the DOE to develop and manage a Federal system for the disposal of SNF and HLW. OCRWM was created to manage acceptance and disposal of SNF and HLW in a manner that protects public health, safety, and the environment; enhances national and energy security; and merits public confidence. This responsibility includes managing the transportation of SNF and HLW from origin sites to the Repository for disposal. 
The Transportation System Concept of Operations is the core high-level OCRWM document written to describe the Transportation System integrated design and present the vision, mission, and goals for Transportation System operations. By defining the functions, processes, and critical interfaces of this system early in the system development phase, programmatic risks are minimized, system costs are contained, and system operations are better managed, safer, and more secure. This document also facilitates discussions and understanding among parties responsible for the design, development, and operation of the Transportation System. Such understanding is important for the timely development of system requirements and identification of system interfaces. Information provided in the Transportation System Concept of Operations includes: the functions and key components of the Transportation System; system component interactions; flows of information within the system; the general operating sequences; and the internal and external factors affecting transportation operations. The Transportation System Concept of Operations reflects OCRWM's overall waste management system policies and mission objectives, and as such provides a description of the preferred state of system operation. The description of general Transportation System operating functions in the Transportation System Concept of Operations is the first step in the OCRWM systems engineering process, establishing the starting point for the lower Subramanian, Shreyas Vathul This research combines the disciplines of system-of-systems (SoS) modeling, platform-based design, optimization and evolving design spaces to achieve a novel capability for designing solutions to key aeronautical mission challenges. 
A central innovation in this approach is the confluence of multi-level modeling (from sub-systems to the aircraft system to aeronautical system-of-systems) in a way that coordinates the appropriate problem formulations at each level and enables parametric search in design libraries for solutions that satisfy level-specific objectives. The work here addresses the topic of SoS optimization and discusses problem formulation, solution strategy, the need for new algorithms that address special features of this problem type, and also demonstrates these concepts using two example application problems: a surveillance UAV swarm problem, and the design of noise-optimal aircraft and approach procedures. This topic is critical since most new capabilities in aeronautics will be provided not just by a single air vehicle, but by aeronautical Systems of Systems (SoS). At the same time, many new aircraft concepts are pressing the boundaries of cyber-physical complexity through the myriad of dynamic and adaptive sub-systems that are rising up the TRL (Technology Readiness Level) scale. This compositional approach is envisioned to be active at three levels: validated sub-systems are integrated to form conceptual aircraft, which are further connected with others to perform a challenging mission capability at the SoS level. While these multiple levels represent layers of physical abstraction, each discipline is associated with tools of varying fidelity forming strata of 'analysis abstraction'. Further, the design (composition) will be guided by a suitable hierarchical complexity metric formulated for the management of complexity in both the problem (as part of the generative procedure and selection of fidelity level) and the product (i.e., is the mission Zedek, Sabeha; Zedek, Sabeha Fettouma; Escriba, Christophe; Fourniols, Jean-Yves Our main subject of interest is Structural Health Monitoring in aeronautics. 
Most of our work is dedicated to the detection of delaminations, disbonds and cracks in heterogeneous (composite) and homogeneous (aluminum 2024) aircraft structures. To successfully combine detection and alert generation we based our approach on the use of a new generation of chip called SoC (System on Chip). We tried to develop an autonomous system able to detect damage on aircraft structures. Acco... National Aeronautics and Space Administration — Hybrid turbo-electric aircraft with gas turbines driving electric generators connected to electric propulsion motors have the potential to transform the aircraft... Krause, Hans-Joachim; Hohmann, Rainer; Grueneklee, Michael; Zhang, Yi; Braginski, Alex I. For the detection of deep-lying flaws in aircraft structures, a mobile eddy-current system is being developed in conjunction with a high-temperature superconductor (YBa_2Cu_3O_7) thin-film HTS SQUID gradiometer. The challenge is to operate the SQUID sensor during movement in strong ambient fields, independent of orientation. A planar rf double-hole gradiometer with a gradient sensitivity of 500 fT/(cm √Hz) was designed for that purpose. Two different cooling concepts were successfully implemented: the SQUID operation in the vacuum region of a lightweight nitrogen cryostat, constructed for operation in any orientation, and the use of a commercial Joule-Thomson cryocooler for liquid-nitrogen-free SQUID cooling. With a SQUID integration scheme using a sapphire cold finger, motion-related additional noise is nearly eliminated. Using a system equipped with a differential eddy-current excitation, two-dimensional scans were performed to find fatigue cracks and corrosion pits hidden below several layers of aluminum. For demonstration in the Lufthansa maintenance facility at Frankfurt Airport, the system was used to detect flaws in aircraft wheels. Work in progress includes developing longer-base gradiometers for detection of deep flaws. 
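The quoted gradient sensitivity of 500 fT/(cm √Hz) is a noise spectral density, so the smallest resolvable field gradient depends on the measurement bandwidth: multiply the density by the square root of the bandwidth. A quick back-of-the-envelope check (the 100 Hz bandwidth is an assumed example, not a figure from the paper):

```python
import math

# Gradient noise density of the rf SQUID gradiometer, as quoted
# in the abstract: 500 fT/(cm*sqrt(Hz)).
SENSITIVITY = 500e-15  # T/(cm*sqrt(Hz))

def min_detectable_gradient(bandwidth_hz):
    """White-noise-limited gradient resolution over a bandwidth, in T/cm."""
    return SENSITIVITY * math.sqrt(bandwidth_hz)

# Over an assumed 100 Hz eddy-current detection bandwidth:
print(min_detectable_gradient(100.0))  # 5e-12 T/cm, i.e. 5 pT/cm
```

This is why operating during motion in strong ambient fields is the hard part: the flaw signal sits at the picotesla-per-centimeter level, far below motion-induced disturbances unless those are suppressed by the gradiometric design and the rigid sapphire cold-finger mounting.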
This document provides an assessment of the Canadian hydrocarbon transportation system. In addition to regulating the construction and operation of Canada's 45,000 km of pipeline that cross international and provincial borders, Canada's National Energy Board (NEB) regulates the trade of natural gas, oil and natural gas liquids. The ability of pipelines to deliver this energy is critical to the country's economic prosperity. The pipeline system includes large-diameter, cross-country, high-pressure natural gas pipelines, low-pressure crude oil and oil products pipelines and small-diameter pipelines. In order to assess the hydrocarbon transportation system, staff at the NEB collected data from pipeline companies and a range of publicly available sources. The Board also held discussions with members of the investment community regarding capital markets and emerging issues. The assessment focused largely on evaluating whether Canadians benefit from an efficient energy infrastructure and markets. The safety and environmental integrity of the pipeline system was also evaluated. The current adequacy of pipeline capacity was assessed based on price differentials compared with firm service tolls for major transportation paths; capacity utilization on pipelines; and the degree of apportionment on major oil pipelines. The NEB concluded that the Canadian hydrocarbon transportation system is working effectively, with adequate capacity in place on existing natural gas pipelines, but with tight capacity on oil pipelines. It was noted that shippers continue to indicate that they are reasonably satisfied with the services provided by pipeline companies and that the NEB-regulated pipeline companies are financially stable. 
14 refs., 11 tabs., 28 figs., 4 appendices. Chevillot, Fabrice; Sinou, Jean-Jacques; Hardouin, Nicolas Friction-induced vibration is still a cause for concern in a wide variety of mechanical systems, because it can lead to structural damage if high vibration levels are reached. Another effect is the noise produced, which can be very unpleasant for end-users, thereby making it a major problem in the field of terrestrial transport. In this work the case of an aircraft braking system is examined. An analytical model with polynomial nonlinearity in the contact between rotor... Pace, Scott; Oria, A. J.; Guckian, Paul; Nguyen, Truong X. This report compiles and analyzes tests that were conducted to measure cell phone spurious emissions in the Global Positioning System (GPS) radio frequency band that could affect the navigation system of an aircraft. The cell phone in question had, as reported to the FAA (Federal Aviation Administration), caused interference to several GPS receivers on board a small single-engine aircraft despite being compliant with data filed at the time with the FCC by the manufacturer. NASA (National Aeronautics and Space Administration) and industry tests show that while there is an emission in the 1575 MHz GPS band due to a specific combination of amplifier output impedance and load impedance that induces instability in the power amplifier, these spurious emissions (i.e., not the intentional transmit signal) are similar to those measured on non-intentionally transmitting devices such as, for example, laptop computers. Additional testing on a wide sample of different commercial cell phones did not result in any emission in the 1575 MHz GPS band above the noise floor of the measurement receiver. 
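When assessing whether a spurious emission threatens a GPS receiver, the comparison is typically made in decibels against the receiver's interference susceptibility threshold. A simplified margin calculation follows; all numeric values are illustrative placeholders, not values from the tests described above.

```python
# Simplified interference-margin check in dB (illustrative values only).

def interference_margin_db(emission_dbm, path_loss_db, threshold_dbm):
    """Margin between a receiver's interference threshold and the
    received spurious power; positive means the emission stays below
    the threshold."""
    received_dbm = emission_dbm - path_loss_db
    return threshold_dbm - received_dbm

# e.g. a -70 dBm spurious emission, 50 dB of path loss between cabin
# and GPS antenna, and a -110 dBm receiver susceptibility threshold:
print(interference_margin_db(-70.0, 50.0, -110.0))  # 10.0 dB of margin
```

Working in dB turns the chain of gains and losses into simple addition, which is why emission limits, path loss and receiver thresholds are all specified that way in interference analyses.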
de Boer, Gijs; Palo, Scott; Argrow, Brian; LoDolce, Gabriel; Mack, James; Gao, Ru-Shan; Telg, Hagen; Trussell, Cameron; Fromm, Joshua; Long, Charles N.; Bland, Geoff I.; Maslanik, James; Schmid, Beat; Hock, Terry This paper presents the University of Colorado Pilatus unmanned research aircraft, assembled to provide measurements of aerosols, radiation and thermodynamics in the lower troposphere. This aircraft has a wingspan of 3.2 meters and a maximum take-off weight of 25 kg and is powered by an electric motor to reduce engine exhaust and concerns about carburetor icing. It carries instrumentation to make measurements of broadband up- and downwelling shortwave and longwave radiation, aerosol particle size distribution, atmospheric temperature, relative humidity and pressure and to collect video of flights for subsequent analysis of atmospheric conditions during flight. In order to make the shortwave radiation measurements, care was taken to carefully position a high-quality compact inertial measurement unit (IMU) and characterize the orientation offset between it and the upward-looking radiation sensor. Using measurements from both of these sensors, a correction is applied to the raw measurements to correct for aircraft attitude and sensor tilt relative to the sun. The data acquisition system was designed from the ground up in order to accommodate the variety of sensors deployed. Initial test flights completed in Colorado provide promising results, with measurements from the radiation sensors generally agreeing with those from a nearby surface site. Additionally, estimates of surface albedo from onboard sensors were consistent with local surface conditions, including melting snow and bright runway surface. Aerosol size distributions collected are internally consistent and have previously been shown to agree well with larger, surface-based instrumentation. 
Finally the atmospheric state measurements evolve as would be expected, with the near-surface atmosphere warming over time as the day goes on, and the atmospheric relative humidity decreasing with increased temperature. No directional bias on measured temperature, as might be expected due to uneven heating of the sensor National Aeronautics and Space Administration — NASA is investigating advanced turboelectric aircraft propulsion systems that utilize superconducting motors to drive a number of distributed turbofans. In an... Tagge, G. E.; Irish, L. A.; Bailey, A. R. The results of the Integrated Digital/Electric Aircraft (IDEA) Study are presented. Airplanes with advanced systems were defined and evaluated, as a means of identifying potential high-payoff research tasks. A baseline airplane was defined for comparison, typical of a 1990's airplane with advanced active controls, propulsion, aerodynamics, and structures technology. Trade studies led to definition of an IDEA airplane, with extensive digital systems and electric secondary power distribution. This airplane showed an improvement of 3% in fuel use and 1.8% in DOC relative to the baseline configuration. An alternate configuration, an advanced technology turboprop, was also evaluated, with greater improvement supported by digital electric systems. Recommended research programs were defined for high-risk, high-payoff areas appropriate for implementation under NASA leadership. Clinedinst, Winston C.; Slominski, Christopher J.; Dickson, Richard W.; Wolverton, David A. The individual software processes used in the flight computers on-board the Advanced Transport Operating System (ATOPS) aircraft have many common functional elements. 
A library of commonly used software modules was created for general use among the processes. The library includes modules for mathematical computations, data formatting, system database interfacing, and condition handling. The modules available in the library and their associated calling requirements are described. The Lockheed Martin N+2 Low-Boom Supersonic Commercial Transport (LSCT) aircraft was optimized in this study through the use of a multidisciplinary design optimization tool developed at the National Aeronautics and Space Administration Armstrong Flight Research Center. A total of 111 design variables were used in the first optimization run. Total structural weight was the objective function in this optimization run. Design requirements for strength, buckling, and flutter were selected as constraint functions during the first optimization run. The MSC Nastran code was used to obtain the modal, strength, and buckling characteristics. Flutter and trim analyses were based on the ZAERO code, and landing and ground control loads were computed using an in-house code. The weight penalty to satisfy all the design requirements during the first optimization run was 31,367 lb, a 9.4% increase from the baseline configuration. 
The second optimization run was prepared based on the big-bang big-crunch algorithm. Six composite ply angles for the second and fourth composite layers were selected as discrete design variables for the second optimization run. Composite ply angle changes could not improve the weight of the N+2 LSCT configuration. However, this second optimization run creates more tolerance in the active and near-active strength constraint values for future weight optimization runs. The air data and inertial reference system (ADIRS) is one of the complex sub-systems in aircraft navigation and plays an important role in the flight safety of the aircraft. This paper proposes an optimized neural network algorithm, a combination of a neural network and the ant colony algorithm, to improve the efficiency of maintenance engineers' job tasks. The United States is poised to integrate commercial unmanned aircraft systems (UAS) into the national airspace and enable government entities to use UAS in a more expedient manner. This policy change, mandated by the Federal Aviation Administration (FAA) Modernization and Reform Act of 2012, offers new economic, social and scientific opportunities as well as enhanced law enforcement capacity. However, such benefits will be accompanied by concerns over misuse and abuse of the new technologies by criminals and terrorists. Privacy has been the focus of public debate over the more widespread use of UAS. This paper examines a variety of issues related to allowing broad UAS operations in domestic airspace, and puts forth that safety should be the top priority of policy makers in their effort to integrate UAS into the national airspace system. João Henrique Lopes Guerra This is a theoretical-conceptual study that aimed to identify some likely consequences of the system integration model that has been adopted in the aerospace industry by the major aircraft manufacturers in the world. 
In the model of system integration, these manufacturers maintain internally the activities associated with their core competencies and transfer the other activities to suppliers. We identified the following consequences: the growth of strategic alliances in the airline industry; the internationalization of aeronautical chains, with the strengthening of productive activities in some geographic regions; challenges related to the domestic supplier base and the consolidation of national chains; the greater power of first-tier suppliers; the contribution to the dissemination of knowledge among supply chains; and the potential emergence of new competitors. A modelling system for assimilation of CO total columns measured by IASI/MetOp was developed. The system, based on a sub-optimal Kalman filter coupled with the LMDz-INCA chemistry transport model, allows both assimilating long periods of historical data and making rapid forecasts of the CO concentrations in the middle troposphere based on the latest available measurements. Tests of the forecast system were conducted during the international POLARCAT campaigns. A specific treatment that takes into account the representativeness of observations at the scale of the model grid is applied to the IASI CO columns and associated errors before their assimilation in the model. This paper presents the results of assimilation of eight months of historical satellite data measured in 2008. Comparisons of the assimilated CO profiles with independent in situ CO measurements from the MOZAIC program and the POLARCAT aircraft campaigns indicate that the assimilation leads to a considerable improvement of the model simulations in the middle troposphere as compared with a control run with no assimilation. Model biases in the simulation of background values are reduced and improvement in the simulation of very high concentrations is observed. 
The improvement is due to the transport by the model of the information present in the IASI CO retrievals. The consistency of the improvement contributes to the validation of the IASI CO data. An aircraft is composed of systems that convert fuel energy to mechanical energy in order to perform work - the movement of people and cargo. Today, the fast-growing demand for air travel has outpaced the rate of improvement in the energy efficiency of aircraft systems. The increase in the total energy consumption and environmental impact of aviation necessitates a strategy to induce further technological and operational innovations to mitigate the increase in aircraft energy use and environmental effects. However, the uncertainty associated with the climate effects of jet engine emissions hinders further improvement to the energy efficiency of aircraft systems. Also the unique characteristics (e.g., trade-off between emissions species) of aircraft systems make it difficult to focus on abatement efforts. Based on a short review of how aircraft technology and operations relate to energy use and the future outlook for aircraft performance, energy use, and environmental impact, the key technology and policy issues related to improving the energy efficiency of aircraft systems are presented. Then, the drivers of technological change in aircraft systems are examined. Government regulation effects and industry characteristics as they relate to improvement of energy use are also presented. Based on these discussions, this paper provides insights on how to accelerate the induction of energy efficient, environmentally friendly innovations. Lee, Joosung J. [College of Engineering, Yonsei University, Seoul 120-749 (Korea)]
Sodemann, H.; Pommier, M.; Arnold, S. R.; Monks, S. A.; Stebel, K.; Burkhart, J. F.; Hair, J. W.; Diskin, G. S.; Clerbaux, C.; Coheur, P.-F.; Hurtmans, D.; Schlager, H.; Blechschmidt, A.-M.; Kristjansson, J. E.; Stohl, A. During the POLARCAT summer campaign in 2008, two episodes (2-5 July and 7-10 July 2008) occurred where low-pressure systems traveled from Siberia across the Arctic Ocean towards the North Pole. The two cyclones had extensive smoke plumes from Siberian forest fires and anthropogenic sources in East Asia embedded in their associated air masses, creating an excellent opportunity to use satellite and aircraft observations to validate the performance of atmospheric transport models in the Arctic, which is a challenging model domain due to numerical and other complications.
Here we compare transport simulations of carbon monoxide (CO) from the Lagrangian transport model FLEXPART and the Eulerian chemical transport model TOMCAT with retrievals of total column CO from the IASI passive infrared sensor onboard the MetOp-A satellite. The main aspect of the comparison is how realistically horizontal and vertical structures are represented in the model simulations. Analysis of CALIPSO lidar curtains and in situ aircraft measurements provides further independent reference points to assess how reliable the model simulations are and what their main limitations are. The horizontal structure of mid-latitude pollution plumes agrees well between the IASI total column CO and the model simulations. However, finer-scale structures are diffused too quickly in the Eulerian model. Applying the IASI averaging kernels to the model data is essential for a meaningful comparison. Using aircraft data as a reference suggests that the satellite data are biased high, while TOMCAT is biased low. FLEXPART fits the aircraft data rather well, but due to added background concentrations the simulation is not independent of the observations. The multi-data, multi-model approach allows separating the influences of meteorological fields, model realisation, and grid type on the plume structure. In addition to the very good agreement between simulated and observed total column CO fields, the results also highlight the Bridgelall, Raj; Rafert, J. B.; Atwood, Don; Tolliver, Denver D. Transportation agencies expend significant resources to inspect critical infrastructure such as roadways, railways, and pipelines. Regular inspections identify important defects and generate data to forecast maintenance needs. However, cost and practical limitations prevent the scaling of current inspection methods beyond relatively small portions of the network. Consequently, existing approaches fail to discover many high-risk defect formations.
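The averaging-kernel step mentioned above is commonly written x_smoothed = x_prior + A(x_model - x_prior): the model profile is smoothed as the retrieval would see it before the two are compared. A minimal sketch, with an invented prior, model profile, and kernel rather than actual IASI products:

```python
import numpy as np

def apply_averaging_kernel(x_model, x_prior, A):
    """Smooth a model profile with the retrieval's averaging kernel:
       x_smoothed = x_prior + A (x_model - x_prior)."""
    return x_prior + A @ (x_model - x_prior)

x_prior = np.array([100.0, 90.0, 80.0])       # a priori CO (ppb), invented
x_model = np.array([140.0, 95.0, 70.0])       # model CO profile, invented
A = np.array([[0.6, 0.2, 0.0],                # illustrative averaging kernel
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 0.4]])
x_smoothed = apply_averaging_kernel(x_model, x_prior, A)
```

Where the kernel rows sum to less than one, the smoothed profile relaxes toward the prior, which is why skipping this step would unfairly penalize the models in the comparison.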
Remote sensing techniques offer the potential for more rapid and extensive non-destructive evaluation of the multimodal transportation infrastructure. However, optical occlusions and limitations in the spatial resolution of typical airborne and space-borne platforms limit their applicability. This research proposes hyperspectral image classification to isolate transportation infrastructure targets for high-resolution photogrammetric analysis. A plenoptic swarm of unmanned aircraft systems will capture images with centimeter-scale spatial resolution, large swaths, and polarization diversity. The light field solution will incorporate structure-from-motion techniques to reconstruct three-dimensional details of the isolated targets from sequences of two-dimensional images. A comparative analysis of existing low-power wireless communications standards suggests an application-dependent tradeoff in selecting the best-suited link to coordinate swarming operations. This study further produced a taxonomy of specific roadway and railway defects, distress symptoms, and other anomalies that the proposed plenoptic swarm sensing system would identify and characterize to estimate risk levels. Price, M.; Raghunathan, S.; Curran, R. The challenge in aerospace engineering in the next two decades, as set by Vision 2020, is to meet the targets of an 80% reduction in nitric oxide emissions, 50% reductions in both carbon monoxide and carbon dioxide, and a 50% reduction in noise, all with reduced cost and improved safety. All this must be achieved amid an expected increase in capacity and demand. Such a challenge must be met against a background where the understanding of the physics of flight has changed very little over the years and where industrial growth is driven primarily by cost rather than new technology.
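The abstract does not name a specific hyperspectral classifier, but one common choice, the spectral angle mapper (SAM), illustrates how pixels could be matched to infrastructure materials. The band reflectances and material library below are invented for illustration only.

```python
import numpy as np

def spectral_angle(pixel, reference):
    # Angle between two spectra, insensitive to overall brightness.
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixel, library):
    """Assign the library material with the smallest spectral angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

# Invented 4-band reflectance spectra for three candidate materials.
library = {"asphalt":    np.array([0.10, 0.12, 0.15, 0.18]),
           "rail steel": np.array([0.30, 0.28, 0.27, 0.26]),
           "vegetation": np.array([0.05, 0.08, 0.40, 0.45])}
label = classify(np.array([0.11, 0.13, 0.16, 0.19]), library)
```

Because SAM compares spectral shape rather than magnitude, it tolerates the illumination differences a low-flying swarm would encounter, which is one reason it is a common baseline for isolating targets like pavement or rail.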
The way forward to meet these challenges is to introduce innovative technologies and develop an integrated, effective and efficient process for the life-cycle design of aircraft, known as systems engineering (SE). SE is a holistic approach to a product that comprises several components: customer specifications, conceptual design, risk analysis, functional analysis and architecture, physical architecture, design analysis and synthesis, trade studies and optimisation, manufacturing, testing, validation and verification, delivery, and life-cycle cost and management. Further, it involves interaction between traditional disciplines such as Aerodynamics, Structures and Flight Mechanics and people- and process-oriented disciplines such as Management, Manufacturing, and Technology Transfer. SE has become the state-of-the-art methodology for organising and managing aerospace production. However, like many well-founded methodologies, it is difficult to embody the core principles into formalised models and tools. The key contribution of the paper is to review this formalisation and to present the very latest knowledge and technology that facilitates SE theory. Typically, research into SE provides a deeper understanding of the core principles and interactions, and helps one to appreciate the required technical architecture for fully exploiting it as a process, rather than a series of events. There are major issues as The authors' earlier model for the vulnerability of aircraft, in which the aircraft was considered a combination of cylinders, cones and wedges, has been extended to the case where structural data of the aircraft as well as its vital parts are given in the form of three-dimensional curvilinear triangles. In the case of VT-fused ammunition, a spherical normal distribution has been used to estimate the landing probability of the shell in a cylindrical vicinity region around the aircraft. Kill criteria for vital parts have been redefined.
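Under the spherical (isotropic trivariate) normal assumption, the probability that a shell bursts within a given radius of the aim point has the closed form of the Maxwell CDF, P(r < R) = erf(R/(sigma*sqrt(2))) - sqrt(2/pi)*(R/sigma)*exp(-R^2/(2 sigma^2)). The radius and dispersion below are invented, not the paper's values, and a spherical region stands in for its cylindrical vicinity region.

```python
import math

def landing_probability(R, sigma):
    """P(r < R) for an isotropic 3-D normal with per-axis std dev sigma
    (the Maxwell distribution CDF)."""
    z = R / sigma
    return (math.erf(z / math.sqrt(2.0))
            - math.sqrt(2.0 / math.pi) * z * math.exp(-z * z / 2.0))

# Invented numbers: 30 m vicinity radius, 15 m dispersion per axis.
p = landing_probability(R=30.0, sigma=15.0)
```

A kill probability for a vital part would then weight such landing probabilities by the part's kill criteria, which is the role the redefined criteria play in the extended model.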
National Aeronautics and Space Administration — Diagnostic and prognostic algorithms for many aircraft subsystems are steadily maturing. Unfortunately there is little experience integrating these technologies... National Aeronautics and Space Administration — Aircraft design is a complex process requiring interactions and exchange of information among multiple disciplines such as aerodynamics, strength, fatigue,... This volume contains input data and parameters used in the model of the transportation sector of the National Energy Modeling System. The list of Transportation Sector Model variables includes parameters for the following: Light duty vehicle modules (fuel economy, regional sales, alternative fuel vehicles); Light duty vehicle stock modules; Light duty vehicle fleet module; Air travel module (demand model and fleet efficiency model); Freight transport module; Miscellaneous energy demand module; and Transportation emissions module. Also included in these appendices are: Light duty vehicle market classes; Maximum light duty vehicle market penetration parameters; Aircraft fleet efficiency model adjustment factors; and List of expected aircraft technology improvements. This final report has been prepared by Honeywell Engines & Systems, Phoenix, Arizona, a unit of Honeywell International Inc., documenting work performed during the period June 1999 through December 1999 for the National Aeronautics and Space Administration (NASA) Glenn Research Center, Cleveland, Ohio, under the Small Engine Technology (SET) Program, Contract No. NAS3-27483, Task Order 24, Business and Regional Aircraft System Studies. The work performed under SET Task 24 consisted of evaluating the noise reduction benefits compared to the baseline noise levels of representative 1992 technology aircraft, obtained by applying different combinations of noise reduction technologies to five business and regional aircraft configurations. 
This report focuses on the selection of the aircraft configurations and noise reduction technologies, the prediction of noise levels for those aircraft, and the comparison of the noise levels with those of the baseline aircraft. Moses, C. A. Problems of ensuring compatibility of Navy aircraft with fuels that may be different from the fuels for which the equipment was designed and qualified are discussed. To avoid expensive requalification of all the engines and airframe fuel systems, methodologies to qualify future fuels by using bench-scale and component testing are being sought. Fuel blends with increasing JP5-type aromatic concentration were seen to produce less volume swell than an equivalent aromatic concentration in the reference fuel. Furthermore, blends with naphthenes, decalin, tetralin, and naphthalenes do not deviate significantly from the correlation line of aromatic blends. Similar results are found with tensile strength and elongation. Other elastomers, sealants, and adhesives are also being tested. The aim of this paper is to give an overview of recent research, development and civil application of remotely piloted aircraft systems (RPAS) in Europe. It describes a European strategy for the development of civil applications of RPAS and reflects most of the contents of the European staff working document SWD(2012) 259 final. Andrieu, Christian W. Aircraft maintenance control operates in a dynamic, high-intensity environment. Maintenance work priorities are set several times daily under extremely demanding and time-sensitive conditions. The person responsible for scheduling aircraft, usually the Maintenance Master Chief, draws upon years of experience when assigning priorities for both scheduled and unscheduled maintenance. An Expert System Advisor for Aircraft Maintenance Scheduling (ESAAMS) is being implemented at the Naval Postg... Belcastro, Christine M.; Jacobson, Steven R.
Loss of control remains one of the largest contributors to aircraft fatal accidents worldwide. Aircraft loss-of-control accidents are highly complex in that they can result from numerous causal and contributing factors acting alone or (more often) in combination. Hence, there is no single intervention strategy to prevent these accidents. This paper presents future system concepts and research directions for preventing aircraft loss-of-control accidents.
Natural, moisturising Earthnlove Cape Chamomile Hand and Body Lotion Beauty Salon, Skin Care, Natural Products Earthnlove Cape Chamomile Hand & Body Lotion contains a powerful synergy of Cape Chamomile, Lavandin Grosso, Palma Rosa and natural moisturising factors. Cape Chamomile is an essential oil from the fynbos family. Its powerful therapeutic and cosmetic benefits as well as its stress-relieving properties are derived from the presence of naturally occurring azulene compounds. Cape Chamomile essential oil helps with the treatment of eruptions on the skin, infected cuts, wounds, insect bites, acne, sunburn, sun rashes, extremely dry skin, nappy rash and eczema. Lavandin Grosso essential oil is known to be very healing and soothing to the skin; it aids skin healing by stimulating cellular regeneration and by calming inflammation. Palma Rosa essential oil moisturises the skin whilst balancing hydration levels. It is acknowledged to rejuvenate and regenerate the skin. The unique formulation containing the above ingredients results in naturally rich, moisturising products that are particularly valuable for sensitive skin and extremely beneficial to any condition where the skin is dry, dehydrated, inflamed, bruised, burnt or irritated. The therapeutic properties of Earthnlove Cape Chamomile Hand & Body Lotion include anti-inflammatory, rejuvenating, regenerating, soothing, calming and moisturising benefits. Body Massage, Body Products, Cosmetic Products, Moisturising body lotion, Natural Products, Skin Care
FIG A: A VST Expression editing lane appears just below note events in the Key Editor. Clicking in the lane of a particular event inserts the proper MIDI data. I've often lamented that playing some modern software samplers expressively requires the skills of a helicopter pilot due to the number of MIDI-control gestures required to bring the extra articulations to life. Cubase 5 takes some big steps to return musical context to a task that often feels more like muse-crushing data entry. Cubase 5's new VST Expression feature provides a discrete track to codify and edit expression-oriented controllers alongside musical data. That, for example, eliminates the need to recall which notes are key switches or which control changes create a growl for a trumpet part. Instead, Cubase 5 uses an Expression Map, which displays such events as musical articulations rather than streams of MIDI data. In a sense, this is similar to a drum map, in which note data is predefined as hi-hat, snare, kick or another kit piece. If you make an instrument's expression map visible in the Track Inspector, you get visual confirmation when, for example, an electric bass switches from a muted attack to harmonics. Your choice of articulations also appears in separate lanes of the piano-roll-style Key Editor, where you select and insert expression events from a pop-up menu (see Fig. A and Web Clip 3). VST Expression Maps are initially set up for an additional set of instruments included with the built-in Halion One sampler, but you can edit and create Expression Maps for any instrument that supports an extended set of sampled articulations, such as key or velocity switches. Although it doesn't completely alleviate the memorization of key switches or the non-real-time aspect of inserting the proper articulations, it greatly facilitates editing and removes much of the tedium involved in the task.
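Conceptually, an Expression Map is just a lookup that turns raw key-switch notes into named articulations, much as a drum map renames notes as kit pieces. The sketch below illustrates that idea only; the note numbers and articulation names are invented, not Halion One's actual layout or Cubase's internal format.

```python
# Hypothetical key-switch table: MIDI note number -> articulation name.
KEY_SWITCHES = {24: "sustain", 25: "staccato", 26: "legato", 27: "growl"}

def label_events(notes):
    """Replace key-switch note numbers with articulation names,
    leaving ordinary pitches labeled as plain notes."""
    return [KEY_SWITCHES.get(n, f"note {n}") for n in notes]

labels = label_events([24, 60, 26, 62])
```

Here the stream 24, 60, 26, 62 reads back as "sustain", a played note, "legato", another played note, which is exactly the musical-context view the editing lane presents instead of raw MIDI data.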
It's particularly handy (and more musically relevant) for those constructing music in the Score window to insert an object labeled Legato beneath a note than to consult a legend to find the appropriate command. THINGS YOU MAY HAVE MISSED Often the constraints of page and word counts limit the coverage a product receives. Here's a random grab bag of Cubase 5 features, including things you may have missed in earlier versions. I'm not sure when it made its Cubase debut, but the Retrospective Record feature can be a real lifesaver. Have you ever noodled on a track and thought that it was loads better than your last take? Cubase retains a memory buffer, and Retrospective Record will take whatever MIDI data you played and place it in a currently record-enabled track, whether or not you were in Playback mode. You can even set the size of the buffer to a specific number of MIDI events. The Halion instruments tailored for VST Expression are beautifully sampled and supremely expressive. Electric guitars sparkle, and their acoustic counterparts are woody and warm without sacrificing high-frequency detail. Drums are equally detailed, with plenty of crispness and punch. Auto-LFO is a textbook example of the synergy between sequencer and synthesizer I covered in Sequencing With Style. An extra modulation component is always welcome, and you can choose exactly what this LFO sends, select a waveform, set the density of the MIDI data and scale the modulation depth. Embracer, a pad machine equally at home in stereo and surround systems, is one of the more notable of the no-frills synths that fulfill a musical application with a minimum of programming. You can modulate the width of the pad so that it blooms from a narrow monophonic output through stereo to surround.
Grabbing one of the color-coded rings in the instrument's Eye will move one of the two oscillators around the surround field: trippy stuff. Although there are no sophisticated modulation matrices, you can choose waveforms and set attack rates, and a simple tone control for each oscillator will warm up or brighten the pad.
CHARLOTTE, N.C.—Stephen Curry stopped mid-workout to run and grab his phone. There was something he had to show his trainers at Accelerate Basketball. On the small screen was a video of Curry throwing down a windmill, the first time he's completed the dunk. Being healthy sure has its advantages. Curry's almost giddy approach in the gym this offseason comes from being freed of past restrictions, from back when he was still recovering from multiple surgeries to his right ankle. Over this summer, his first at full health since entering the NBA in 2009, Curry has taken advantage of every opportunity to add to his game. Whether he's running sand dunes or resorting to hotel weight rooms, Curry hasn't passed up a day of workouts. "I haven't missed any time," Curry said. "That's a big part of being consistent over the summer, so that come August, I won't be trying to push too hard to make up for the summer. I feel like I'm in pretty good shape right now, and I'll get to training camp stronger and a better player." Historically, Curry has raised his level of play as the season goes on—and as his health improves. Such was the case last season, when he narrowly missed the 2013 All-Star Game only to become one of the best stories of the second half. Judging from the tone of his workout Sunday, Curry's determined to get off to a strong start in 2013-14 with a Warriors team expected to compete in the West. He pushed himself from the opening minutes of a three-hour workout, ignoring phone calls in favor of shooting, ballhandling and footwork drills. Each drill targeted a specific area of growth potential for Curry. Because he's one of the NBA's best shooters, Curry doesn't simply count misses and makes like the next guy. Between exhausting workouts, he steps up to the free-throw line, where he shoots at a 90 percent clip, and counts the number of shots he can make without touching the rim. In this process, Curry grows discontented with simply hitting shots.
Frustration sets in on his face when he touches the rim too often. In the rare event that he actually misses, "It's always the ball's fault," said Brandon Payne, who handles Curry's skill development work. The ball didn't take much blame on this day. But that's just the start. When Curry steps out behind the 3-point line is when the real work sets in. He hit an NBA single-season record of 272 3-pointers last season, but "he's not satisfied with 272," Payne said. Curry was strapped to a weighted pulley system and charged with making basketball moves. He practiced the sweep-through and step-back with weight added around his waist. Once the weight was taken off, Curry completed the moves and went into his shot. Trainers stood by and added contact by hitting Curry's arms with pads as he went into his move. Curry executed pull-up shots and a double step-back without traveling, a move he says they'll have to warn officials about. Curry is taking on more in his workout every day, and after a banner year, he's become more self-assured. "I feel a lot more confident," Curry said. "A lot of that is in conjunction with my injury, being healthy."

Scoring through contact

At 6-foot-3, 190 pounds, Curry isn't the physical specimen NBA players of his level tend to be. His trainers want him to be as strong as possible—at 190 pounds. So, on Sunday, he didn't lift a weight in the conventional sense. Curry's strength training was geared toward being able to take bumps in the lane and finish at the basket. As things stand, Curry can stretch the defense to 30 feet. Adding to his ability to finish at the rim, though, will only give him more space to get off one of the NBA's best outside shots. He's done well against defenses there, hitting 43.2 percent of his shots when guarded in catch-and-shoot situations, according to Synergy Sports. Curry's efficiency takes an arc that runs contrary to most players', however.
He becomes less efficient as he gets closer to the basket; last season he scored 1.024 points per possession on shots around the rim, which rates as below average. Curry shot 53.1 percent inside of 5 feet, but his shot attempts from outside of 25 feet more than doubled those at the basket. He wants to balance that and reach the free-throw line more, which is the next progression on a path that could lead him to become one of the NBA's top five scorers. "I hopefully will be able to shoot the ball at a high level every year," Curry said. "But, as a point guard, with all the different teammates you have, you have to be able to make plays and handle contact." Curry entered the NBA with the reputation of a shooter and not much more. Each year, he's worked to shed that label and prove that he has overall game. He continues to work toward that now, even after four NBA seasons and a breakout year at point guard. In his workout, Curry did ballhandling drills with weighted balls, weights around his wrists and a number of different stimuli that forced him to process information and react with the appropriate action. Because Curry hasn't had to bother with rehab as of late, he's been able to improve his ballhandling, which showed when he averaged 6.9 assists per game last season, a career high. "Throughout the injury and the last two years," Curry said, "I've been able to add skills to my game that I didn't have before when it comes to strength and movement on the court, and my ballhandling getting a lot more crisp and being able to make a lot more plays off the dribble." Set to work out at Accelerate Basketball, Gerald Henderson called Payne and asked if he could bring along a friend, Stephen Curry. Curry returned alone the next day and never left. It's been two years now, and Curry has become one of the NBA's top point guards. "I've kind of been in an underdog situation since high school," Curry said.
"That's the one thing I can control, how hard you work in the gym in the summer because it prepares you to get through an 82-game schedule. It gets harder and harder every year. That's a big part of my progression as a player." Curry reached the playoffs for the first time last season. His Warriors defeated the Denver Nuggets and pushed the San Antonio Spurs to six games. Curry said he is putting time in this summer with the thought of another postseason trip in mind. "The team knows your game inside and out," Curry said of the playoffs. "It is more physical, there's no doubt about that. At the end of the season, you've already played 82 games, plus preseason, playing heavy minutes and having to raise that intensity level even though your body is fatigued from the season. That's where this part of the summer is huge. "Anybody can get ready for training camp and get ready for pre-All-Star schedule, but to sustain yourself over the long haul, especially in the playoffs, that's when you have to raise your game to a new level."
Bambang wrote: My dear friend, certainly, it's up to you to choose. I also do not have any intention to force my opinion on everyone, including you. So, in this case, we are not on the same line. I belong to the first group, the fighters, and you belong to the second one. No problem my friend. That's the essence of a discussion buddy. You can take my points of view and you can also get rid of them. That's normal. My friend, firstly, I don't have the intention to force my opinion on you; that's not the point because, as you said, that's the core of the discussion, when you can hear different points of view and enrich your perception of things. Second, I have said, and I've been very consistent with this, that I would fight if two conditions are fulfilled: 1) if I strongly believe in the cause I'm fighting for, and 2) if I can assure my family's future. I'll give you a case where I would fight: if my country is invaded and I consider that action doesn't have any logical justification, and also I manage to assure my family's future without me, for example, by sending them to live abroad. In this case I would fight with all my heart to defend my ideals without any distraction, so in that way, I can give my best for the cause I'm fighting for. Bambang wrote: Well my friend, we can play our own roles to solve this problem. If one group fights the war inside the country and the other group fights it outside, it would be a very good synergy. Yeah, fight it inside and outside the country. But my friend, your fund-raising activities, speeches, or the like are useless when there are only very few people fighting inside. They will not respect you and they will not even give you their hands. They will appreciate you when the majority of your people fight the war inside and a small number of the people fight outside and make some diplomatic approaches. So, in this case, your country needs more fighters inside. But once again my friend, that's your choice. I cannot force you to take it.
As you said my friend, it's a synergy. Also, I guess it would be completely useless if you have an entire nation fighting but they don't have enough resources to fight back, in the case that your enemy has more resources than you. So, with this idea in mind, I'll tell you that I could be more effective doing diplomatic work instead of fighting a war where I can't be absolutely focused on my duty. But, as you said, it's a matter of choice and, more important, a matter of conviction. Bambang wrote: My friend, no country on this planet has the right to invade other independent countries. Iraq is an independent country. If there is a problem inside it, then the country itself should solve its own problem. Other countries just have the right to give advice or help or the like, but they don't have the right to carry out "an invasion". They don't have the right to kill others. It absolutely goes against the Universal Declaration of Human Rights. I want to link this comment you have made with mine in the first paragraph. I'll quote myself: "If my country is invaded and I consider that action doesn't have any logical justification." My opinion is that we need to see an invasion in the context in which it takes place. I disagree with you when you say that no country has the right to invade others. For example, imagine that your country is attacked by an external one which is launching missiles from its territory. To me, it is completely justifiable to invade that country which is attacking mine. Or are you going to do nothing to stop it just because you have to respect its sovereignty? Or are you going to advise it to stop the attack? That's why I said that some invasions are necessary, but they must have a pretty good justification. Bambang wrote: Then he said Saddam was a killer. But in fact - and I'm convinced that you are in line with me in this case - Mr Bush himself is the real killer. He has killed more people than he accused Saddam of killing. I don't know which one is worse.
To me, both are very alike, and the good thing is that at the end, in some way, justice prevails. In the case of Saddam, who wasn't an angel, well, we know how he ended up; and regarding Mr Bush, well, everybody hates him and he's become the most unpopular president in the history of the United States, and to a person who is arrogant and has an inflated ego, that is some kind of punishment. Bambang wrote: My friend, this is a different case. The American soldiers are not defending their country and protecting their families. In fact, they are invading another country!!! They will not be considered cowards if they try to seek asylum in other countries. They are even considered heroes, because they are trying to uphold justice and truth on this planet. You said that because you're thinking as a Filipino and not as an American (I suppose you're Indonesian because your location is Jakarta). Remember that Mr Bush made up a new way to "protect" them and sadly, they bought it. So, in their minds they were protecting their country and their families, and I respect that, even though I didn't agree with their arguments....
Klarna, the Swedish eCommerce payment platform that counts Mike Moritz and Klaus Hommels as board members, is one of the burgeoning European companies that is turning its attention to Israel. Thanks to a recent financing round of $155 million led by Yuri Milner's DST, the company is determined to spread its wings to new territories. Yuval Samet, head of product at Klarna Israel, took us on a tour of the company's R&D base in the swanky new Electra Building in Tel Aviv, where Google also sits on three floors. We find out why the Scandinavian company turned its attentions to Israel, the importance of shared parties and the main differences between Israelis and Swedes… So tell us how Klarna became involved in Israel? My partner [older brother Ohed Samet] owned a company called Analyzd, which dealt with risk management and fraud prevention in eCommerce. Previously Ohed headed up risk management and new ventures within PayPal, and we decided to start consulting for various companies in this area. Klarna approached us about an acquisition and we found that we had an amazing synergy with them, not only in terms of expertise, but also in terms of product development. I came on board as CPO while Ohed is CRO (Chief Risk Officer). When we joined, we started to establish an engineering centre in Israel. We have a lot of talent here in this field in Israel, and that was something the company was lacking. We've already grown the team from four to 32. We have very talented engineers from the Intelligence Unit in the Israeli army. Nearly everyone we've spoken to in the startup scene has been part of this unit in the Israeli army – is this the most important place for business networking? Well, this is the Israeli market! Everybody knows everyone and people move in tribes. The army is by far the most important recruitment channel. The Intelligence Unit itself is quite big – and generally speaking, yes, most of the people in the startup scene have served in this unit in some capacity.
What goes on here at Klarna Israel? Mainly R&D. Klarna has been extremely successful in establishing the whole buy-first-pay-later experience. But the relationship and engagement with our end consumer and with the merchant were not at the same level. What we do here is build teams that work on relationship management – from the onboarding process and starting the service with Klarna as a merchant to, very soon, more end-consumer products that will help you get more from the experience – choose your payment methods easily, understand the settlement, and so on. We also have a product analytics team here that supports the whole R&D process. Did it jar with you to go from founder of your own company to CPO when you were acquired? With most acquisitions under $200m, I think it's all about the relationship between the founders and how they will work to push the business forward. Professionally, but also personally, we felt a very good connection to Sebastian [Siemiatkowski], Niklas [Adalberth] and Victor [Jacobsson]. It was obvious that together we'd be able to scale the business to take over Europe… and the world. What links do you have to the Swedish mothership? We invest a lot in our culture – every employee flies to Sweden at least twice a year, and everyone goes to the Kickoff Party in September; it's important to have people party together. We try to maintain this close relationship throughout our structure – it's quite a flat management style – I don't have an office, and there are no separate offices in Stockholm either. Is there enough of a talent pool here in Israel, given the number of R&D centres here – Google, IBM, Apple, etc.? I suspect that the biggest threat isn't other big companies' R&D, but other Israeli startups – like Berlin, I guess. I hear of lots of German developers who leave big companies to become part of a startup. We experience the same thing.
But we try to be very competitive, not only in terms of pay, but in terms of experience and culture. We also offer to train developers in Ruby on Rails if they don't know how to work with it. For example, we wanted to make a statement by getting the very best head of engineering that we could find. Uri [Nativ, previously of VMware, above] came to us highly recommended by VCs, especially Sequoia, and we knew we needed someone who could scale our centre quickly. What are the main differences between Israeli and Swedish business practices? I think the Swedish and Israeli cultures complement each other. The Swedes are amazing engineers. Everything is structured, everything is a process. Every discussion is open and consensus is very significant. Israelis are much more agile and direct, and while there is an open discussion culture, there is always someone willing to take a decision. Israelis don't plan that well for the long run, and Swedes do – so our differences do us favours. But this isn't simply an Israeli office with a Klarna sign on its wall. Mainly because we don't have a Klarna sign yet! But the point is that we see ourselves as the "Mediterranean Klarna"…
In true Saxon style, discerning travellers and connoisseurs can now savour the delectable cuisine of the hotel's latest opening, Saxon Qunu Grill. Following months of development and devotion to the project, executive chef David Higgs and his team of culinary prodigies were in high spirits at the official launch of the new eatery. Formerly the Saxon Restaurant & Terrace, Saxon Qunu Grill exudes the warmth of the hotel's African heritage through its rich surroundings, red tones, woven lampshades and ambient glow, enhanced by the remarkable hanging gardens. Just beyond the striking gardens, glass double doors lead guests out onto the terrace, where they find themselves dining under a striking canopy of age-old fig trees, offering a sense of being an intrinsic part of Africa. The restaurant's Xhosa name pays homage to the Eastern Cape village where former president Nelson Mandela was born. This well-known and respected icon chose the Saxon as a quiet refuge while writing his memoirs, so it is only fitting that his birthplace has made a permanent mark on the hotel. In his autobiography, Mandela credits Qunu as the place where he spent some of his happiest childhood moments, and after his retirement he again made it his home. Whether the occasion is a business lunch, a special celebration or simply the need for inspired and appetizing cuisine, Saxon Qunu Grill provides the perfect setting where memories are made. Offering an innovative menu with bold flavours and an honest approach to cuisine, using only the finest South African ingredients, patrons can delight in a hearty breakfast, comforting lunch or cosy, intimate dinner, all with distinctive African flair. From the West Coast mussels to the free-range, grass-fed beef sourced from the KwaZulu-Natal Midlands, chef Higgs sources only the finest quality South African ingredients and challenges the norm with his creative menu items and attention to detail.
Chef Higgs remarked: “There is such synergy here, with the history of the hotel, and that of my talented culinary team. “With an inherent focus on the experience and the food, Saxon Qunu Grill is a place where people can relax in the tranquil and charming atmosphere and revel in the fantastic service whilst indulging in the finest wine and food pairings. “With the Saxon exuding the warmth of a home, Saxon Qunu Grill’s experience is the very same one I would afford my family and friends in my very own home.” Gastronomy aficionados and gourmands alike will revel in the multi-faceted selections, and guests are spoiled for choice with a delectable variety of fish, beef, poultry, game and vegetarian options. “The introduction of Saxon Qunu Grill to the hotel’s dining options has been a natural progression for us, where the heritage of Africa is encapsulated in not only the décor, but in the flavours of the fare we present to our guests,” said Saxon Hotel managing director, George Cohen.
SECReT 2009 PhD projects
- The transfer, persistence and secondary transfer of gunshot residue (GSR): Implications for crime reconstruction and forensic protocol, studied using Bayesian modelling
- To what extent can forensic evidence aid in the investigation and prosecution of internal child sex trafficking (ICST)?
- Complex systems approaches to issues in crime and security
- Developing tools for anticipating and mitigating the negative societal impact, while preserving the positive impact, of security technologies, for use by the developers of these technologies upstream in the design process
- How new ways of spatial analysis can improve the geographical understanding of illegal drug markets and the distribution of drug-related crime
- Computational cryptography
- Developing analytical Blood Pattern Analysis (BPA) techniques for environmentally altered bloodstains, and examining the range and influence of visualization methods available for BPA presentation in the context of jury decision making
- Optimisation of illicit material detection using X-ray diffraction: Drug identification using Low Angle X-ray Scatter - DILAX III
- Improving the understanding of and responses to internal child sex trafficking in the UK: An empirical multi-method analysis
- Securing threat detection: Synergy of technological and neuropsychological factors

Improving the understanding of and responses to internal child sex trafficking in the UK: An empirical multi-method analysis
7 March 2012
This multi-disciplinary doctoral research will draw on a range of empirical analytical techniques to model the structure and function of internal child sex trafficking (ICST) networks. Unlike the vast majority of child sex offences, ICST typically involves multiple perpetrators and victims. Consequently, a network-based approach to modelling the crime and its agents appears critical for effective crime reduction.
Access has been granted to sensitive police data from six major ICST investigations, involving hundreds of victims and offenders. These novel data will be analysed using social network analysis, empirical and statistical modelling and communications data analysis. In addition, original interviews with convicted ICST offenders will explore knowledge gaps around offending networks. Access to prisoners has been secured with the support of the Child Sexual Exploitation and Online Protection Centre (CEOP). The findings are expected to shape a nascent research literature and directly impact both strategic and tactical policing. By offering data provision and support at a time of major budgetary cuts, police and other agencies demonstrated their confidence in the value of this research. This unique project has already attracted considerable interest from major national and international organisations including DSTL (the Ministry of Defence’s Defence Science and Technology Laboratory), SOCA (Serious Organised Crime Agency), UKHTC (UK Human Trafficking Centre), the Home Office, CEOP, the Children’s Commissioner, the Dutch national rapporteur on trafficking, and numerous police forces, children’s services and third sector organisations. Early findings have been published in two academic articles, presented at several conferences, and discussed in the national media.
Do you really want to buy used underwear? I mean, someone else's 'boys' have been in there... ;-) There may be a law that prohibits selling used underwear; I've never seen any, but I could be wrong. Perhaps it's that no one buys it, so they don't bother putting it out. OK - I get it now. But you posted the comment under the underwear portion. At least I got a good laugh out of it. Actually, when I lived in Florida, I frequented the local Salvation Army and Goodwill stores quite a bit - not for clothes, but hardware. Unfortunately, I rarely found anything worth buying. There are no stores near where I live now - and it would require a trip into downtown - a place I try my best to never go. OK, Off Topic Crap. But... While I have been verbally lamenting the loss of US industry, don't get the idea I am a Nationalist. I buy products from Germany, Italy, France, England and even Japan and other developed, economically mature countries without *too* much chagrin. What I resent is the unlevel playing field that Chinese products represent, and the total collapse of our own industries as a result. I have no problem with the Chinese people, but I do not trust the Government nor their intentions. Their quiet build-up of arms and technology, continuing human rights issues, environmental chaos, and their not-so-secret desire to become a world empire once again lead me to believe that our pandering is a bad thing for us and the world. And they certainly have no intention of ever buying our products in an open marketplace - they shun ours and develop their own. A few examples: DVDs? No way; the Chinese government promoted the internal development of CVD and refuses to enforce foreign copyrights. Cars? Ha! Don't even go there; we can't even sell them to Americans. Machinery? Well, we don't make anything anymore - other than military weapons. The Japanese have dominated the robotics industry. And they already make everything else. Computer Software? Double Ha-Ha.
Bill Gates is fuming at this very minute - millions of bootleg copies of Windows are in use already. And they have developed their own Linux-based O.S. What is left to sell them? Food? I bought a gallon bottle of apple juice at Kroger the other day, and on the side of the bottle, in tiny little letters, was stamped "Imported from China". I took it back and raised hell. The last thing I'm going to drink is a cadmium-, mercury-, PCB-, etc.-filled bottle of apple juice from China. And why didn't we embrace the Russians with as much zeal as we have the Chinese? After all, they attempted to embrace democracy and capitalism, and we snubbed them. And as a result, arms and nuclear materials have spread worldwide, and they are near chaos. Not very good democracy building, if you ask me... Even though the old Soviet bloc countries are inching forward, with companies like Groz and such, they certainly haven't received the economic boost we handed China. While I'm obviously not a foreign affairs guru, it makes me wonder what the hell we are doing - Well, actually I do know... $$$$$$$$$ Avarice reigns supreme. :-\ We've had a Prestige 'Aroma' - a dual, non-sissy, full Monty, four-banger bagel toaster - for about four years that I thought was a pretty damn good toaster, because that sucker has a timer SO F**&ING GOOD that it pops up PERFECTLY toasted bread the EXACT _same_ second that my two morning eggs are done to ... until I read your post, that is. I just turned the damn thing over (spreading crumbs all over the place in the process ... thanks a lot) and DAMN me if it ain't <gasp> "Made in China"! Now I feel screwed, despite the fact it's performed flawlessly for all that time. Jeeeezuss, what's this country coming to?!? Swingman (in 882dnXJ4fJ7ea firstname.lastname@example.org) said: | Jeeeezuss, what's this country coming to?!? You made me look. The KitchenAid (a Whirlpool brand) on our counter claims to come from St. Joseph, Michigan.
I have another in my consulting kit that I bought in a Philly WalMart for $14.95 (along with a similarly priced rice cooker) that I can't imagine coming from anywhere other than China. The KitchenAid toaster has an LED "toastedness" display, but doesn't do a noticeably better job (repeatable result, even toasting, etc.) than the $85-cheaper Chinese product. Both of 'em are a PIA when it comes to emptying the crumbs [thank you very much for the reminder!] But the only conclusion that this discussion leads to is that some Chinese factories can produce _some_ things less expensively than American factories can. If you're a toaster production line assembly person being paid $20/hour (based on seniority) to put the four bottom screws through the plastic feet, that's probably disturbing. The word "some" above is important. There's stuff _not_ coming out of Chinese factories yet that _is_ produced here. We can either complain about how they've learned to do some of the things that we learned sooner, or we can focus on providing the world with the things that they can't produce less expensively (yet). The really important question has to be: Is there a scenario in which everyone does the business activity they do best so as to produce a synergy of American and Chinese (and ...) efforts? I'm fairly well convinced that win-lose strategies ultimately produce only lose-lose results. It would seem that the world has shrunk to the point where we're obliged to start learning how to "play well with others" - and to remember that we don't own all the toys nor make all the rules. Fortunately, excellence is still treasured everywhere. DeSoto, Iowa USA Barry (in email@example.com) said: | On Sun, 30 Oct 2005 11:57:31 -0600, "Morris Dovey" || Swingman (in 882dnXJ4fJ7ea firstname.lastname@example.org) said: ||| Jeeeezuss, what's this country coming to?!? || You made me look. The kitchenAid (a Whirlpool brand) on our counter || claims to come from St. Joseph, Michigan.
| I've seen tons of stuff labeled "packed in USA", on the box, and the | device inside is labeled with the actual country of origin. Me too. This was on the back of the toaster itself. The box is long gone. DeSoto, Iowa USA Since toaster flipping is a popular hobby, I took a close look at mine. It does say KA in Michigan, but the bottom of the label also says Made in China. I'd be surprised if there are any made in the US in the past 10 years. At work we pack a lot of products in large poly bags. The best thing to seal them with is a Teflon-coated household iron. I buy Black & Decker. When I first started buying them 15 years ago, they were $20 to $23 (origins unknown). Now they are $13. What I don't understand is that I was willing to pay 20 bucks, so why go through a lot of contortions to sell them for less? I've been looking to upgrade my TS for a while now. I've been eyeing the 1023S for a while, so when I saw your post asking about the Unisaw for $1299, my first thought was no, but I don't have experience with either to advise from actual use. Although, I did read some reviews from some very impressed people on Amazon regarding the 1023S, so I would have a hard time spending an extra $400 (or $375 for the SL) on a refurbished Unisaw. Many of the pros here could probably justify the extra $400, but for my budding shop I'd rather put that extra $$ on a Grizzly 14" band saw. There are delivery charges to consider as well. The local Unisaw dealer won't charge you freight. I used the older '70s vintage Unisaws in college and at a friend's shop. When I started shopping to upgrade to a cabinet saw 4-5 years ago I was pretty much predisposed to the Unisaw. After looking at newer Unisaws I decided to expand my search. They weren't what they used to be and frankly they haven't improved since (handwheel brake nuts, for example). My final decision did come down to the Unisaw, Grizzly 1023S and the Jets. The 1023 won based on value and the fact it reminded me so much of the earlier Unisaws.
This included both visual and tactile impressions (smooth handwheels, tabletop machining/flatness, specifications, sound, vibration, etc.) I do believe the Unisaw is still a fine machine, but I cannot personally justify the extra hundreds of dollars for the brand name. My machine has served me well for four years, requires very little adjustment, and Grizzly delivery support is great. I have never had to use their warranty. BTW - Grizzly will probably provide the names of up to two recent customers, in your area, who have made recent purchases of a 1023 (and who have agreed to talk to folks like you). This service, plus a trip to the Springfield store, won me over. I hear ya - that funky T thing is a departure from the nice old knobs. I hear you again! I've always wanted a Unisaw, but when it comes to laying out cold, hard cash - they worry me these days. Still, I prefer to buy American products and support my neighbors whenever possible - but I am tiring of being burned by that desire. It's getting to the point where when I see Made in USA, it's worse garbage than the Chinese crap - which is steadily improving. I live in the SE, and there are no Grizzly dealers. Makes it more difficult to access their products. But the 1023 looks like a very nice saw. I have a Unisaw and I love it, but problems such as a warped extension table and slightly warped left wing have left me wondering if I would ever purchase another Delta product. My DJ-20 jointer has a small pit in the outfeed table. Not enough to take it back, but the quality assurance just wasn't there. I'm not sure you'd be any better off with a Grizzly. I've heard horror stories from many fellow woodworkers with various brands of tools, and I'm wondering if buying tools is getting to be a crap shoot. The folks who seem to get it right, in my opinion, these days are the Canadians. The quality of tools from Canada is to me impressive. Just my opinion. I have a Delta X jointer - I was lucky, but many had problems with warped fences.
My contractor saw had a table that was so warped, it left a wave on the end of a cut board. This was years ago, and the first tool I purchased - so it took me a while to figure out what was wrong. I ended up grinding the thing by hand to true it up. They do seem to have a problem with rushing green castings into production too soon. And the customer service has really deteriorated. They used to respond quickly to warranty parts replacements, but the last time I called for a warped bandsaw wheel on a brand new 14" Delta, I never got the parts. Called again, still never got the part. I bent the damned thing true myself in order to use it, and just gave up on them. The same dealer also carries General. Their left tilt contractor saws and the 650s are good saws but I hear bad things about their support and manuals. And I'm not too certain about the availability of accessories like snap-in splitters and zero clearance inserts. Their fence is a nice Canadian made Beis clone. I hear great things about the 1023SLX Grizzly, but really horrible stories about the delivery process. Much down time and broken/damaged parts from freight handlers. They DO seem to respond quickly with new parts, no questions asked. And the massive carriage and handwheels action on the 1023 is impressive. But I've never cut wood on one... I bought the new Porter Cable 2 1/4 HP router kit when it first came out, and what a pile-o-crap full of Chinese parts. Shoulda gotten an old, used 690... Manufacturing, Products and Support are failing miserably in this country. If something isn't done - like killing off some bean counters and greedy Wall Street investors, we are going to become a real third rate country. We've already lost the number one spot. HomeOwnersHub.com is a website for homeowners and building and maintenance pros. It is not affiliated with any of the manufacturers or service providers discussed here. All logos and trade names are the property of their respective owners.
About Me: http://about.me/boadams1 Google+: Bo Adams (mountvernonschool); Bo Adams (personal) Bo Adams considers himself first and foremost to be a learner. He treasures being a husband and a dad, as well as an educator. Bo identifies as a lifelong runner, and he experiences regular bouts of obsession with triathlon, mountain biking, and kayaking. Bo loves to read, write, draw, design, tinker, think and do. Bo believes any boundaries between school and life should be blurred and semi-permeable at least. For more than a decade, his core research question has been, “If school is meant to prepare kids for real life, then why doesn’t school look more like real life?” In June 2013, Bo joined Mount Vernon Presbyterian School as Chief Learning and Innovation Officer. Head of School Dr. Brett Jacobsen announced the appointment on May 14, and Bo could not be more excited to become a member of the team at this leading school of inquiry, innovation, and impact. Bo remains active as an edu-blogger at It’s About Learning (https://itsaboutlearning.wordpress.com) and @boadams1 on Twitter. Additionally, Bo regularly pursues deeper understanding in the area of “schools of the future and the future of schools.” Bo currently serves on the board of directors at MODA. From July 2012 to May 2013, Bo served as the Director of Educational Innovation at Unboundary, a strategic design studio in Atlanta, Georgia, specializing in transformational change processes, communications, and brand significance. From 1995 until 2012, Bo worked at The Westminster Schools in Atlanta, Georgia, where he fulfilled a number of roles during his tenure there. In Bo’s last nine years at Westminster, he served as the principal of the Junior High School. In 2006, Bo co-founded The Center for Teaching at The Westminster Schools. 
Bo’s primary points of focus throughout his principalship were faculty development, professional learning communities, assessment literacy, project-based learning, change management and educational innovation. In 2010, Bo and Jill Gough launched “Synergy” – a community-issues, project-based, problem-identification-and-solutions course for eighth graders. Prior to coming to Westminster, Bo worked at Darlington School in Rome, Georgia; Loudoun Country Day School in Leesburg, Virginia; and Camp Sea Gull in Arapahoe, North Carolina.

Two Speaking Demonstrations – Martin Institute & TEDxAtlanta

Carney, Sandoe & Associates names Bo one of 8 Thought Leaders to Follow Now (December 19, 2013)

More about Bo’s Previous Chapters
- While Bo was at Unboundary, “We Love Creativity” conducted and published a series of interviews with Bo.
- On Feb. 29, 2012, students from the Westminster 8th grade Writing Workshop team that produces the Wildcat Press Xpress posted this podcast interview with Bo. Thanks WPX!
- Synergy Bahamas
- Shirley Street across from Epworth Hall
- P.O. Box:
- Nassau / Paradise Island, Bahamas
- Social Networks:

Opening hours:
Monday: 8:30 am - 9:00 pm
Tuesday: 8:30 am - 9:00 pm
Wednesday: 8:30 am - 9:00 pm
Thursday: 8:30 am - 9:00 pm
Friday: 9:00 am - 5:00 pm
Saturday: 8:30 am - 4:30 pm
Sunday: closed

Synergy Bahamas is the preferred technology firm in the Bahamas. With over 15 years of combined experience in the field, our consultants are more than capable and willing to assist you with your training and consulting needs. Synergy was formed by leading consultants who share a common vision: to facilitate the adoption of technology and education in the Bahamian business community. The team at Synergy Bahamas holds the highest industry-standard certifications; our consultants and trainers are all industry certified. We believe in education and the power of knowledge, and it is our goal to pass these benefits on to our clients. Synergy Bahamas has formed key relationships with, and provides solutions based on, industry-leading manufacturers and providers to offer your company the highest quality IT solutions and products available today.
- Microsoft
- Cisco
- IBTA
- Certiport
- PearsonVue
- Prometric
- Softkey PMu
- Dell
This is unorthodox. It's usually the other way around. Not likely to work well, but I would never say never. Actually, this is the setup being used by Tom Evans Audio Design in their ultra-fi gear. They argue that ss is most critical at the preamp level, since tube noise overwhelms the tiny signals there. The preamp just won HiFi+ product of the year; the amp is not yet released but has been mentioned very favorably in that magazine. You can read their arguments at http://www.thebesthifiintheworld.com. I have used every combination. I still have my original APT Holman solid state pre and Hafler 500s that I bought new. I bought a used CJ tube amp and then a used CJ tube pre. In my system there is no question that the best combination is using both a tube preamp and amp. I don't know about the synergy available from some of the newer solid state combos, but from my perspective tubes are where it's at. Using a solid state pre will not alter the sound of your tube amp - in fact it's easier to match a ss pre to a tube amp than the reverse. Both add in similar ways to the sonics. Personally I would never go back to a ss amp - I just love the sound of tubes in the preamp stage - and as a part-time hobbyist I enjoy experimenting with alternate tubes to change the sound a bit from time to time. That sure beats having to buy a new ss preamp when I want change. Having spent 30 years as an audiophile and having owned tons of gear, I can attest to the fact that often our current system tends to dictate our response... i.e., we don't want to be honest with ourselves in absolute terms! In other words, we delude ourselves that what we currently have is the best combo! As for myself, the latest transformation of my system has been done upon reflection of the vast number of combinations of gear that have gone through my house. What I did cost me dearly, not just in terms of cost, but in terms of my audio ego!
I traded in a pair of Dunlavy SC-5 speakers for Watt/Puppy 6s plus $11,000. I then had to have Transparent cable to lash it all up... $18,000!! They were good speakers, but the thrill was gone from the music. I traded them back after 9 months for a pair of B&W 801 Nautilus speakers... they were less than exciting as well. In the end I traded them back in for my original Dunlavys. Of course the Transparent cable mucked up the sound with my old speakers back in the system, so I traded for Harmonic Technologies Magic Cables all around. In the end, all is extremely satisfying. So, with this lesson and many others under my belt, I say to you that putting a solid state pre-amp in front of a tube amp is not going to yield the most musical of results. It may sound nice... but not full, dynamic and solid the way live music sounds! I have owned all kinds of tube amps and pre-amps as well as ss amps and pre-amps (I like to mix and match), but no system has sounded better than the ones with a great tube pre-amp and a world-class ss amp! SS pre's squash the life out of the signal... the tube amp adds back some body and action, but to a brutalized signal. A tube pre-amp adds life and tonal color, as well as dynamics that a great ss amp will pass on without the smoke between notes!
The locked-out musicians of the Minnesota Orchestra issued a unanimous vote of no confidence in the organization’s president and CEO, Michael Henson, on Tuesday. Here’s a GIF of my reaction to this news: You can read a list of objections the musicians have to Mr. Henson on their website. Their first is that he misled “the Minnesota Legislature about the orchestra’s finances during his testimony in favor of the orchestra’s bonding request.” There they linked to an mp3 of Mr. Henson testifying before the Cultural and Outdoor Resources Finance Division of the Minnesota House of Representatives in January 2010… and misleading them, if not lying to them. Here’s a link to the mp3. The segment having to do with the Minnesota Orchestra begins at 2:38:55. In the interest of context and thoroughness, and for future reference, I’m transcribing the entirety of Mr. Henson’s appearance here. Apologies for its length, but… it’s long! They always say that lawmaking is like sausage-making: people don’t like knowing how either is made. Well, here’s your chance to watch some sausage-making, up close and personal… If you’re anything like me, the process will make you a little queasy. Here is the cast of characters (listed in order of appearance), with their initials, their political party, and their title at the meeting (if applicable). Information courtesy of this page and quick Google searches…

Mary Murphy (MM), DFL, Chair of meeting
Margaret Kelliher (MK), DFL, then Speaker of the Minnesota House of Representatives
Michael Henson (MH), President and CEO, Minnesota Orchestra
Greg Davids (GD), Republican, Lead of meeting (he is referenced by Margaret Kelliher; he does not actually speak)
Lyndon Carlson (LC), DFL, ex-officio
Alice Hausman (AH), DFL
Dean Urdahl (DU), Republican
Diane Loeffler (DL), DFL

MM: Rep. Kelliher, 2528. MK: Madame Chair and Committee members, thank you for your work; I’ve been watching you, and you have a lot of good projects in front of you.
I could say something very nice about every single thing I’ve seen. I just hope that Rep. Davids and I don’t have to team up like we had to in my first term in the legislature this year to make some of these things happen. So I really appreciate your hearing a couple of bills today. We’re first here to present our bill on the Orchestra – the Minnesota Orchestra, and Orchestra Hall and Peavey Plaza. And so I’m going to be brief about this; I want to tell you just a couple of things about the Orchestra. The Orchestra was formed in 1903, and since 1907 there have been 680 concerts in 60 communities around the state. There’s a wonderful packet that they’ve put together for all of you, including the impact on your own districts of the Orchestra. But I do love this quote by a Tyler resident, who had only seen the Orchestra once as a young boy. “On Friday night he was hearing the Minnesota Orchestra perform as a whole new experience. ‘It’s a pretty nice deal,’ he said, ‘getting something like this out here.'” He was quoted in the weekender Independent in Marshall, Minnesota, in February 2008. So the Orchestra has a broad scope and reach. Over 80,000 students are served by educational programs by the Orchestra every year. It performs over 200 concerts. And Orchestra Hall has hosted ten million visitors since 1974. And that’s our topic today. This renovation of Orchestra Hall and Peavey Plaza is job-intensive. Over 900 jobs will be created with this little bit of state money, partnered with a lot of private money. This Orchestra is also one of our state’s great cultural exports. The Orchestra has been winning terrific acclaim all around the globe, including the London Daily Telegraph, as well as the New York Times. And you can also know the reach of this Orchestra by the fact that it’s one of the only – it is the only American orchestra with a regular broadcast on the BBC. I think that’s pretty amazing, Madame Chair, and members. 
And I have to tell you just a quick personal story. My own children got to participate in something very special through our church a few years ago, and it was the production of the oratorio that had been commissioned. And it was an oratorio that the music of course was played by the Minnesota Orchestra. And the singers came from a large pool of singers, including children from the Basilica of St. Mary. They had an amazing experience, being able to record that piece – the first recording of it ever, in Orchestra Hall, by a Swedish company that came in and did that with a Swedish production company. And it has had an amazing and profound impact. The oratorio itself was about the impact – it was actually commissioned by our priest at the time, Father Michael O’Connell – and the story was the story of the children of the Holocaust. And my own children, when they sang in that production, said, “Oh, Mom.” I mean, you could just imagine the terrifying thing that was happening to those children at that time. So I think for me, what music connects, and what a project like this connects, for people, for children, for adults, all across the state, is how music tells the story of people’s lives, whether that story was a long time ago, or that story is today. And so I’m pleased to introduce to you the President and the CEO of the Minnesota Orchestra, Michael Henson.

MM: Welcome, Mr. Henson.

MH: Thank you very much, Madame Chair, and Representatives, and what a great pleasure it is to be here today, and thank you, Speaker Kelliher, for such an eloquent presentation. I’d like to begin by sharing a bit more background on the Minnesota Orchestra with you, and then to update you on the substantial progress we’ve made on our building project since we appeared at the Capitol in 2008, requesting planning funds for the renovation of Orchestra Hall.
I joined the Minnesota Orchestra just over two years ago, coming from England, and one of the factors that drew me here was the Orchestra’s reputation. It is one of the top orchestras in the world. The Minnesota Orchestra was founded in 1903, as Speaker Kelliher mentioned. It started touring the state only four years later, and has continued to do so ever since, traveling to every corner of the state. We began our education concerts in 1911 and they continue to this day, too. Today the Minnesota Orchestra performs nearly 200 concerts a year, reaching over 400,000 people; 200,000 additional individuals across the state weekly hear our radio broadcasts, and millions across the country through national and international radio broadcasts.

On the financial front, we have announced balanced budgets over the last three consecutive years, and we are facing the current economic downturn with stability. In general, the orchestra is musically enjoying a Golden Period with music director Osmo Vänskä. And we are excited about the many possibilities surrounding our hall renovation.

Let me detail the project very briefly. I have to say that I found this project to be an extremely captivating one since the first moment I visited Orchestra Hall. I was struck then by the tremendous potential of a revitalized Orchestra Hall in this community. Since I joined the Orchestra, we have tested and re-scaled the scope of the hall project in light of the very challenging economy. The result is a very focused and feasible project. Our vision for an expanded Orchestra Hall is a $40 million renovation that re-invents our public spaces, better serves our young audiences, and makes certain that Orchestra Hall lives up to its full potential as a beacon in the city, accessible to all in the community. Our general contractor estimates that the project will create nearly 900 jobs. Orchestra Hall was built in 1974 for approximately $15 million.
The bulk of these resources were put into the auditorium, which still functions very well. The lobby, on the other hand, was built to last only fifteen to twenty years. We have three priorities in our renovation, and the top amongst these is an improved lobby. The second is to modernize our auditorium. And last, but not least, we have started to regenerate Peavey Plaza in the Orchestra Hall renovation. We believe that the reinvention of this entire city block will have a powerful social and economic impact on our community. I’d like to note that the $40 million figure relates only to the cost of renovating Orchestra Hall, not Peavey Plaza. We are currently working with the City to determine the appropriate costs for the renovation of Peavey. Our private fundraising efforts are going very well, but public funding is critical if we are to reach our ultimate goal. Our private donors are keen to hear that the state is a partner in our project. I thank you in advance for your support of our plans to re-imagine our hall and Peavey Plaza for our new audiences in this century. Thank you very much.

MM: So Rep. Kelliher, was the orchestra heard on BBC before Mr. Henson came to Minneapolis?

MH: I’ve had a close working relationship with the BBC for twenty years. That has obviously helped; however, we have a world-class orchestra and if we were not a world-class orchestra, we would not be appearing on the BBC. So I think there is a very good synergy between a world-class orchestra and another world-class broadcaster.

MM: Good answer. Very good. Rep. Carlson.

LC: As ticket holders, my wife and I might be interested in where will we be attending during the construction period?

MH: I think in the construction period we actually looked at a variety of options. One was to close the hall over a three year period – six months each year.
What we decided to do is to close the hall for one season, and we are currently in advanced stages of negotiating where we’re going to appear in the downtown. We’re aiming to maintain the vast majority of that orchestral series, and the object has to be to actually retain that audience, so that when we close the hall and reopen it in a year’s time, we have retained as much of that audience as possible and retained that enthusiasm. So hopefully in the next couple of months we will be announcing that, and we are trying to minimize the amount of disruption.

LC: So the main point is that you’re still going to perform.

MM: Maybe in Duluth. [laughter and chatter]

LC: He never said which downtown.

MH: If I could also supplement that, we’re also aiming to increase our state touring for that year as well. And we’ll be looking at between two to four weeks of activity. So I think we’re going to see a smaller main season, but we’re also going to take that in terms of increasing our presence across the whole state.

MM: Representative Hausman.

AH: Thank you, Madame Chair. I believe it is this weekend we have the opportunity to hear the Minnesota Orchestra performing together with the St. Paul Chamber Orchestra, and as the newspaper account says, those conductors who have international experience had really great things to say about the quality of the musical experience we have available in this state.

MH: That’s extremely pleasing to hear, and I know the orchestras are working as we speak at the moment, and I think it is going to be a truly splendid series of concerts.

MM: Representative Urdahl.

DU: Thank you, Madame Chair. Mr. Henson. I have had occasion a couple of times to attend the performances at the [?] Performing Arts Center, and enjoyed that, particularly with my Finlander wife and Mr. Vänskä. But if you’re looking for a home, you know, I’m sure that a good deal could be struck with the [?] Performing Arts Center.
[Editor’s Note: I can’t make out which performing arts center Rep. Urdahl is referring to! Please listen to the mp3 yourself to judge and leave your ideas in the comment section. His comments are at 2:50:05. I’ll edit this entry if I get any clarification…]

MK: What a generous offer, Rep. Urdahl.

MH: Thank you very much.

MM: Rep. Loeffler.

DL: Thank you, Madame Chair. And Mr. Henson, I’d like to put something on your short list of alternative locations. Just about two miles north of where you are is the original home of the Minneapolis Orchestra, which became the Minnesota Orchestra, at least it did all of its original recordings in the Edison High School Auditorium, which had perfect acoustics. I don’t think they’ve changed that much since then, and it’s in the official arts district of the city, and you’ve never toured to our area, so I think coming back home and maybe re-playing some of those wonderful classics that were done and recorded there would really be a really interesting thing, to tour within – for your home city and back to something that is the historical roots of the Orchestra.

MH: Thank you very much for that very helpful suggestion.

MK: Madame Chair, I feel like we’re being lobbied as much as we’re lobbying all of you today.

MM: Any other suggestions for their off-season? [laughter] Thank you very much.

I’ll have more thoughts on this transcription later. If you have any corrections to my transcription, let me know. In the meantime, what are your thoughts?
Welcome to Vinny’s Pizza & Wings. Located in DeKalb, Illinois since 2005 and under new management since December 2009. Our experienced and friendly staff look forward to taking care of your Pizza Needs!

Set in the heart of Chicago's Gold Coast in the chic Waldorf Astoria Chicago Hotel, Balsan is a market-driven bistro showcasing refined yet rustic fare with American sensibility.

MT Barrels, a Schaumburg sports bar/live music venue that serves great food accompanied by a multi-faceted beer menu consisting of 20 drafts, over 30 bottles and multiple barrel-aged scotches, bourbons and tequilas.

Zagat-rated No. 1 in 2008/2009 and 2009/2010. Authentic, award-winning Spanish tapas cuisine; valet parking.

This cozy and contemporary offspring of the legendary Father and Son Restaurant group is family owned and operated, preparing all of its specialties from scratch using fresh ingredients. The thin-crust pizza, a house favorite, can be tossed using whole wheat, seven-grain crusts or prepared with Abbe's Gluten Free. Don't overlook the BBQ ribs, broasted chicken, healthy salads and sandwiches.

Spacca Napoli Pizzeria was inspired by the authentic aroma, taste, and craft of pizza found on the streets and in the pizzerias in Naples. It is the intention that everything at Spacca Napoli is faithful to tradition, from the high quality of the food to the way it is served: The oven was built by third- and fourth-generation artisans from Napoli; the dough mixer also comes from Italy. The decor is reminiscent of classic Italian pizzerias, yet is contemporary and filled with light. Spacca Napoli strives to be a neighborhood place but it is also a destination for anyone who wants to taste genuine Neapolitan style pizza.

Little Beans is a family café and imaginative indoor playground for kids. The café boasts a delicious assortment of coffee drinks and snacks, where parents can enjoy the ambiance of a comfortable café with their tiny tots.
At Little Beans, kids have the freedom to play, learn, imagine and create in their custom interactive indoor village and playground. They are truly a destination that is fun for the whole family.

Kai Zan is the brainchild of local sushi legends Melvin and Carlo Vizconde, also known as the sushi twins. With modern and traditional cooking techniques the twins will create an omakase dinner experience that you will come back for over and over again.

Since 1979, the Sellis family has created an imaginative and tantalizing menu combining American cuisine with European influences. Quality fresh seafood, including Dover Sole, is expertly prepared and served at your table. Skilled chefs prepare steaks to your liking. Stop by Palm Court and enjoy the piano lounge and a drink!

LaMirage is a Russian restaurant and banquet facility located in Rolling Meadows. Offering a wide variety of appetizers, entrees, desserts, and drinks, it's the perfect location for any occasion!

Stonewood Ale House is the perfect balance between a classic restaurant and a great bar. Our extensive menu and comfortable atmosphere attract a strong lunch and dinner clientele, while also appealing to a solid after-work and late-night bar crowd.

Burritoville in Dekalb, IL has the best Mexican food you could ask for. Delivery and Online Food Ordering to NIU and Dekalb ensures that you will never be hungry again!

Located in the beautiful South Shore Cultural Center with a view of Lake Michigan, this hidden gem offers a seasonal menu in an upscale environment. Patio dining available. Reservations suggested.

The Old Orland Historic District has antique and specialty shops, and interesting service businesses nestled in an 1800s neighborhood of historic churches, charming houses and colorful gardens.

St. James at Sag Bridge is the Oldest Church in Northern Illinois. Built by Irish immigrants who built the canal.
Established in 1833, it is the sole country parish of the Archdiocese of Chicago serving a growing community of Catholics who come together to worship God through the celebration of the Eucharist and traditional devotional activities within truly unique and beautiful surroundings.

Premium seafood and steak with a touch of Mexico! All the flavors from Mexico are mixed to give our food a unique and authentic flavor.

Sultry & sensuous, like a movie set of French Colonial Vietnam, complete with ceiling fans, palm trees, period photography and colonial shutters. The food is traditional Vietnamese & exquisite.

Full-service restaurant and upscale sports bar specializing in live charcoal grilling and broasted meals. Accommodates both intimate groups and large parties. Wide variety of beer and wine, full liquor bar. Sun.-Thurs. 11am-Midnight; Fri. & Sat. 11am-2am.

In Evanston is a dining experience that’s much more than it seems: Found is a kitchen, a social house and a restaurant with a mission. Its rustic new American dishes change with the market and season and are served in a relaxed, creative atmosphere inspired by 1920s Bohemian Paris and icons of the Beat Generation.

Singin' in The Rain is based on the 1952 American musical comedy film. It offers a lighthearted depiction of Hollywood in the late '20s, with the three stars portraying performers caught up in the transition from silent films to talkies. This play will offer the audience some splashy old-fashioned fun.

A wine bar offering an outstanding collection of wine by the glass or bottle with a complementary menu. Come find out why this Evanston favorite is at the top of so many people's lists.

Chicago native Sarah Levy developed a passion for food at an early age. The daughter of Mark Levy, co-founder of Levy Restaurants, Sarah grew up surrounded by fine food and quickly developed a passion for all things sweet.
Inspired by the joy that fine food can bring to people's lives, Sarah founded Sarah's Pastries & Candies. Sarah started her company making chocolate candies out of her mom's kitchen. After a year she had built up a wholesale business that includes 17 Whole Foods Markets in the Midwest. Sarah launched her first retail store at 11 E. Oak St. in September 2005. She has expanded her line to include morning pastries, cakes, tarts, espresso drinks and other delectable pastries.

Lou Malnati's is a family-owned and operated company that takes pride in its service and quality. We ensure that every pizza served in each of our locations is handmade exactly to our specifications so we can deliver on our promise of quality. You’ll know with every bite that each pizza was made with care and special attention.

Los Fernandez Taqueria has been offering Streamwood authentic Mexican food for years. With quick service and plenty of options to choose from, it is a great stop for lunch or dinner.

Casual, carefree dining, nearly world famous wings, sandwiches, seafood and salads all served by the famous Hooter Girls! Now with a full bar.

Grande Jake's Authentic Mexican Grill offers you the finest in authentic Mexican cuisine. Come and dine with us and enjoy our fine food and atmosphere.

Casual dining from hamburgers and sandwiches to steaks and seafood.

Create your own stir-fry meal. Offering more than 25 ingredients including traditional and exotic fruits and vegetables, a selection of rice and noodles, signature handcrafted sauces, prime beef tenderloin and seasonal offerings including game and seafood. Also featuring Asian inspired appetizers, homemade desserts, and a full bar featuring Tiki drinks and urban cocktails.

Upscale dining for mid-range prices. Our cuisine is a mixture of both Northern and Southern styles. The restaurant is intimate and warm. L: 11am-4pm; D: 4:30pm-11:30pm.
Oysters, Moonshine and Live Country Music at your very own Old Crow Smokehouse in Schaumburg!

Delicious northern Italian fare, comfortable surroundings and friendly service. Menu rotates bi-weekly in order to offer fresh, seasonal ingredients.

Shaw’s is two restaurants in one – a jazzy, sophisticated, seafood restaurant and a carefree, lively oyster bar. Both serve top-grade fish and shellfish, several varieties of just-shucked oysters, and inventive sushi and sashimi combinations, made with the freshest fish in town.

The tri-level rock n' roll inspired restaurant, bar and music venue spins a fresh take on amplified American cuisine and live stage performances weekly. Each floor offers a variety of elements: 170" plasma video wall; two stages; three full bars, large outdoor cafe and a private room offering personable service.

Three floors of fun! Shop our 1st floor boutique, create in our upstairs art studio and upcycle with vintage soul re-sale in our lower level.

Contemporary, casually-sophisticated trattoria ambiance. Good, honest cooking, friendly service and fair prices. Our menu features the zesty earthy cuisine of Rome and the surrounding areas of Tuscany, Umbria and Lazio.

An authentic Japanese restaurant with a floating sushi boat and hibachi tables. Open daily for lunch and dinner.

Contemporary Latin. Serving Allen Brothers steaks. Outdoor dining, full bar, specialty drinks, parking, catering.

Offering the best in Japanese teppanyaki since 1989. Our experienced chefs will not only provide a memorable and enjoyable experience, but we pride ourselves in using only quality and fresh ingredients to maximize taste with all of our homemade teriyaki sauce, salad dressing, horseradish, spicy garlic and Tokyo batters.

Tapalpa Restaurant was established in 1992 by Judi and Abraham Aguilar. It's named after the town of Tapalpa, Jalisco, located in the Sierra Mountains, one of the most beautiful towns in Mexico.
Tapalpa strives to bring authentic Mexican cuisine to Elk Grove Village.

Los Rancheros Mexican Restaurant in DeKalb IL is open daily for Lunch & Dinner.

This Chicago restaurant is a jewel that also houses a chic culinary cocktail lounge where people can share their lives over sumptuous American Contemporary cuisine and classic cocktails.

Family Dining & Cocktails. Serving the Fox River Valley for over 30 years. Home of the double decker pizza & famous for our thin crispy crust.

GT Fish & Oyster opened in March of 2011, and has helped redefine the American seafood restaurant. Michelin-starred, and Food & Wine's Best New Chef 2008, Giuseppe Tentori has developed a menu that’s half-traditional, half-modern, in an ever-changing small plates format.

These shows are the best places for the general public and breeders alike to come out & meet many of the World's top reptile breeders. Reptile breeders bring out their top projects & showcase them at the NARBC Shows. The shows also provide a chance to meet some of your favorite TV personalities such as Mark O'Shea, Nigel Marven, Donald Schultz, Ton Jones and a few up-and-comers that you will soon be watching on Animal Planet and Discovery Channel! We welcome you to get up close and personal with some of the rarest and also some of the best "beginner pet" reptiles the reptile breeding community has to offer!

Recently renovated and locally owned, the Relax Inn features rooms with wood floors and flat-screen TVs; microwaves and refrigerators are also available. This pet-friendly hotel is located within 20 miles of four state parks.

Johnny Rockets is a place where you can sit in a red-padded booth, play your favorite song and eat some good old-fashioned hamburgers and more.

The perfect place for before, during or after the game. Menu features a variety of entrees, sandwiches and appetizers. Satellite TV in the bar. No better place to watch the game.
Coach's Corner is a Family Pizzeria & Sports Grill open 7 days a week for Lunch & Dinner. Homemade American and Italian food with a full bar and 27 HD TVs showing sports 24/7.

Featuring an extensive menu filled with our delicious recipes of creative pasta specialties, a variety of Chicken, Steak, Veal & Seafood dishes, and of course PIZZA served Thin Crust, Classic Pan and Stuffed Deep Dish. RoccoVino's is always here to serve you. Stop by and Dine In with us or enjoy the best of Italy in your own home.

The choices offered at this establishment are overwhelming, accurately billing its fare as an "International Vegetarian Cuisine". The diverse menu covers virtually all popular cuisines from basic American to Mexican, Italian, Middle Eastern and truly provocative varieties from Indian cuisine.

Open 24 hrs., 7 days a week. Serving Joliet's favorite coffee and donuts since 1966. Featuring over 45 varieties of donuts, eclairs, cinnamon rolls, muffins and cookies, still made by hand using our original recipes. Featured on Chicago's Best, Windy City Live, CLTV and WJOL.

It’s time to get your bling on! Amdur Productions is proud to launch a new holiday shopping festival: Bling, The Jewelry & Gift Show. Sending out some love to all of the jewelers and jewelry lovers with this Highland Park, Illinois indoor festival November 18, 19 and 20, located at the Highland Park Country Club. This event offers free admission and free parking to the public.

At Champps we’ve focused on redefining American Cuisine, so you get the best of both worlds. As soon as you walk in the door you feel the atmosphere oozing from our Champpions, ready and eager to serve a heaping portion of Burgers, Beer and Sports.

Enjoy 48 premium flavors of ice cream and sherbet (over 150 flavors rotated)! Kid-approved flavors. Non-dairy sherbet options. Homemade waffle cones. Fruit smoothies. Custom cakes. Specialty drinks. And toppings, toppings, toppings!
The perfect spot for an intimate dinner for two or a large celebration for 300 people. You'll love our beautiful mahogany decor and stained glass windows; and with a fresh list and menu that changes twice daily, we are excited to offer you the freshest dining experience.

Built in 1929, the Villa Park Historical Museum building originally served the community as the Villa Ave stop for the Chicago, Aurora & Elgin electric train line and an appliance store. It was placed on the National Register of Historic Places in 1986. Today, it houses relics and artifacts from Villa Park's past including articles from the Ovaltine Factory which once operated in Villa Park and Sears Catalog Homes in the area.

Pioneer of the famous stuffed and equally renowned thin-crust pizza! Pastas, sandwiches and assorted fresh salads. Dine-in, delivery, carryout, catering and full service bar.

Plano Synergy is a wholesale distributor whose foundation is built on a dream and a passion for the outdoors. As a newly created family of high performance outdoor brands, Plano Synergy embodies over 150 combined years of industry experience, values and product quality, all thanks to each one of its founders. As devout fishermen and passionate hunters, each Plano Synergy brand set out to create the finest products for their fellow outdoorsman, dedicating themselves to producing top quality tools of the trade to outdoorsmen of all experience and ability levels. Thriving on new challenges, new goals and bold endeavors to become the latest and greatest in the field, the makers of Plano Synergy products are determined to remove all limitations on the water and in the field, because this isn’t simply a hobby…it’s a way of life. Our brands include Ameristep, Avian-X, Barnett Crossbows, BloodSport, Caboodles, Creative Options, Evolved, Flextone, Frabill, GroundEFX, Halo Optics, Plano Molding, Wildgame Innovations and Zink Calls, with hundreds of products made in the United States.
With this desire and drive for excellence instilled in the minds of the innovators behind our products, the possibilities for our future are endless. The people who work at Plano Synergy often come and stay awhile, and the same goes for the people that use our products. As fellow outdoorsmen, we feel you will be happy with our products and may go confidently knowing that your outdoor experiences go beyond what you bring home at the end of the day to the experience you get out of it.

A unique blend of authentic Asian and classic American dining. Full breakfast, lunch and dinner available. Grab-and-go options and full espresso bar as well.

Sam’s of Arlington is the perfect venue for any occasion. Our menu is based on a distinguished list of homemade recipes that use only high quality ingredients. Our impressive full-service bar is always kept stocked so you can enjoy a cold one while you take in the big game.

A Rosebud restaurant, located in 3 First National Plaza. Charming and fast paced by day, elegant and relaxing at night. Customers continuously select Rosebud as the ultimate lunch spot, pre-theater dining destination and post-work haven, serving the finest complimentary appetizer buffet in town. Mon.-Fri. 11am-10pm; Sat. 4pm-10pm.

Banquet hall and event center specializing in corporate meetings, weddings, family reunions and fundraisers. In-house catering. Handicap accessible.

Stop by and enjoy our casual atmosphere. Serving lunch and dinner, delivery available. We have burgers, hotdogs, sandwiches, salads, soups, desserts and more.

In Chinese culture, quality and health are more than just values, they're traditions. Everything on our menu is made to order with these two traditions in mind. We've brought the finest Dim Sum chefs in from Chinatown, and let them loose in our kitchen, to create an authentic Dim Sum selection rarely found in the suburbs.

Enjoy delicious food including breakfast and lunch specialties at our restaurant.
We have a location in Wheaton, Illinois as well. With more than 35 years of experience, Egg'lectic Café, Inc. offers freshly made breakfast and lunch throughout the day.

There is always a first time for everything and our restaurant is not an exception. If you are new to Sushi Station, you may find our serving system to be very different from others. This is called "Kaiten (revolving) Sushi" and it is very popular in Japan.

Come to Pop's, located in Roselle off Nerge Road, and enjoy our daily specials. We have a huge sports bar with lots of HD plasma screen TVs and HD projectors to watch your favorite sports teams and catch all UFC, WEC and major pay-per-view fights. Great views from any seat, and we don't charge a cover!

Rawshu’a is a vegetarian-friendly eatery, juice bar, water bar and organic market. Open from 8a to 6p Monday, Tuesday, Thursday and Friday and from noon to 6p on Saturday.

Located on the Chicago River, River Roast offers city and water views from every seat, inside and out. The menu features contemporary American tavern fare and new inventive drinks. With one of the best patios in the city, the restaurant is a great place to connect with friends and family.

Cere's Table serves contemporary American food with Italian, particularly Sicilian, influences. We use local, seasonal & sustainable products when possible.

Upscale resale: re-defining the resale experience! Boutique Repeats is designed to provide customers with new and gently used clothes, purses, jewelry and more. At your favorite upscale resale shop, we buy, consign and sell designer and better brand name women’s apparel, accessories, home decor and furniture. Let Boutique Repeats buy your high-end items, so you can go after that new must-have! This hassle-free selling experience is in a boutique environment that carries new inventory daily! Appointments are recommended for a personal buying experience, but not necessary.

Flat Top Grill is a create-your-own stir-fry adventure.
Choose from the freshest ingredients for create-your-own breakfast, lunch, and dinner. Enjoy appetizers, homemade desserts and our full bar. We accept reservations and welcome large groups. Gift certificates are available.

Gus's Diner is the perfect stop for breakfast, lunch, and dinner! Conveniently located in Rolling Meadows, you can have your pick of pancakes or a burger all day long.

Penny's has earned a notable reputation due to the ease with which it integrates a wide mix of Asian cuisines - including Chinese, Japanese, Vietnamese, and Thai - into a varied menu that is appealing to American palates. Founded in 1991, Penny's is just steps from the Frank Lloyd Wright Historic District and is a short walk from the CTA line connecting Oak Park to Chicago's city center. Catering and delivery are available. Visit the website or contact the shop for further details.

At Jameson's we bring you the same wonderful experiences we grew up on: from our warm Greek hospitality; to the authentic aromas in our open kitchen; to our flavorful, handmade recipes passed down for more than 3 generations.

The Rocky Horror Picture Show is an outrageous assemblage of the most stereotyped science fiction movies, Marvel comics, Frankie Avalon/Annette Funicello outings and rock 'n' roll of every vintage. Rated R. Featuring the live shadow cast of "Irrational Masters". Props that are not allowed are: lighters, candles or flames of any kind, large water guns, toast, hot dogs, or prunes. Prop kits will be available for purchase in the lobby before the show. Show Dates: August 27, September 23, October 28, and November 18

The Northwestern Wildcats offer some of the best Division 1 college sports action in the Midwest, featuring 14 teams competing in the Big Ten Conference. Come early and enjoy the festivities at Wildcat Alley for a pre-game fan fest. Home Game Schedule: • Sept. 3: versus Western Michigan • Sept. 10: versus Illinois State • Sept. 17: versus Duke • Sept.
24: versus Nebraska • Oct. 22: versus Indiana • Nov. 5: versus Wisconsin • Nov. 26: versus Illinois

J&K Half Moon Tavern staff will know you by name by the time you finish your meal. We strive to give you that true midwestern hospitality. Come in for our famous Oyster Sunday, or one of our many other specials! We take your dining experience to the next level. Try us out today!

Open: Wednesday, 1 pm to 3 pm, or by appointment. This museum, in an 1870s building, features artifacts from the Rock City area including pictures, clothing, articles and more.

Committed to producing the finest small batch hand-crafted spirits in the Midwest, using premium locally sourced ingredients, tried and true processes, superior equipment, and the collective knowledge and skill of dedicated entrepreneurs.

Food that is comfortingly familiar, yet excitingly unexpected. Quince serves contemporary American cuisine in the heart of Evanston. Offering an outstanding dining experience in a lively and comfortable setting, all at an approachable price.

Villaggio's Ristorante is a family owned and operated Italian trattoria run by three brothers for over fifteen years. When you dine at Villaggio's you are treated like family!

Along with 18 superbly conditioned holes of golf, The Sanctuary has an excellent grass practice range, short game area, putting green, and a full service golf shop with PGA Professional Bob Schulz. The Bunker Grill has great food and beverage offerings. For those who want a challenging golfing experience at a great price, there are few better choices than The Sanctuary, anywhere! Book your tee-time now at 815-462-GOLF (4653) or book on-line.

Family owned and operated restaurant featuring Italian and Mexican dishes at an affordable price.

Midtown is a contemporary American restaurant offering a unique dining experience. Within steps of Michigan Ave., the Theatre District and Business District, Midtown is centrally located in the heart of Chicago. Daily 11am-10pm.
Rose Garden Cafe, in Elk Grove Village, has a menu that includes everything: breakfast, lunch, dinner, specials, ice cream creations and a case of yummy-looking desserts!

Blending old-world tradition with culinary flair and innovation, this is a celebration of Mediterranean cuisine. Named after the "royal" herb, Basils brings the delicate balance and warmth of Greek comfort food to the modern dining table. Named one of 15 Unique Restaurants by Only In Your State: "Who knew some of the best Greek food wasn't in Greektown but was instead in a suburb? Yes, it's at Basils, a fantastic restaurant that has phenomenal Greek food perfectly paired with wines." (July 16, 2015)

Authentic Cantonese and Szechwan style dishes featuring dim sum on Sundays.

Two-time culinary champion Raimondo's serves the best pizza and is the best ethnic restaurant in the Aurora area, according to our local mayors. Handcrafted pizza pies, sauces and sandwiches made to order, with each pie dough rolled by hand. Salivate over the menu. Experience the best when you experience Raimondo's!

Sunshine Scoop is an ice cream shoppe and bakery combo, so you can get your favorite dessert with a scoop of your favorite ice cream right on top. We start with quality ingredients to bring you quality products that you and your whole family can enjoy. Come and visit today for a sweet treat! If you have dietary concerns, we also have gluten-free, low-fat, dairy-free, and no-sugar-added options!

Scheduled sales once or twice a month featuring a fresh selection of vintage, industrial, and farmhouse items for yourself and your home.

Willow's Hometown Café is located in the Frederick Townsend Garage, which is listed on the National Register of Historic Places as part of the Sycamore Historic District. The distinctive stone structure was constructed in 1906 as a garage for the estate of Frederick B. Townsend; his former home is the Queen Anne mansion that overlooks the garage property.
The structure is made of granite rocks gathered from Townsend's farmland, right here in Sycamore.

Beer House, located on the outskirts of Yorktown Center, is a unique concept bringing over 60 tap beers and hundreds of bottled beers to one great venue. You can be sure to find almost half of our tap line dedicated to local breweries. Catering to both the beer aficionado and those wanting to learn more, our staff is happy to answer questions and make suggestions. Come drink the best beers, enjoy live music, catch sports on our 15 televisions, or just relax with friends!

"Merry and Bright: A Downtown Holiday" returns with Stroll on State, a spectacular holiday event to attend with family and friends to get you in the holiday spirit. There are plenty of activities to do, decorations to see, and delicious food to eat at this fourth annual event celebrating the holiday season downtown.
Tri Synergy and Egosoft have reached an agreement that will allow Tri Synergy to co-publish the latest X title, X Rebirth, in the US. They'll also be showing it off in some capacity at E3, so add that one to the list of PC exclusives we'll be seeing something of this year.

X Rebirth promises to be as approachable for newcomers to the space simulation series as it is for loyal fans. In keeping with the previous games, you'll be exploring a huge, open universe, engaging in trading and combat, and generally doing as you please. While in space!

The game is due towards the end of 2013. It's been a while in the making, as this reveal trailer from two years ago shows.