id | url | title | contents
---|---|---|---|
a2ec0df46dac81576eac9b972f883ae2 | https://www.brookings.edu/articles/chinas-digital-services-trade-and-data-governance-how-should-the-united-states-respond/ | China’s digital services trade and data governance: How should the United States respond? | China’s digital services trade and data governance: How should the United States respond? China is the world’s second largest digital economy, second only to the United States, and leads the world in the value of many digital applications, including e-commerce and mobile payments. This extensive online activity by Chinese netizens also provides huge amounts of data that can be used to train artificial intelligence (AI) algorithms. China’s dominance in 5G infrastructure will further support China’s digital economy, and early 5G rollout in China could give Chinese entrepreneurs a head start in developing new digital applications. China’s digital economy and the importance of data and digital services are also intertwined with its manufacturing activity and the centrality of China in global value chains (GVC), providing enormous scope to export digital services as inputs in manufactured products. Yet, China remains largely closed to foreign competition, with restrictions on digital services imports, a heavily restricted and regulated internet that requires data to be localized, and limited access to online information. These limits to foreign competition stand in contrast to China’s outward-focused efforts to shape the international environment and the development of norms and rules affecting data governance consistent with its domestic approach. This includes efforts in international standard-setting bodies and support for broadband connectivity and smart cities as part of its Digital Silk Road (DSR) and broader Belt and Road Initiative (BRI). These Chinese efforts abroad and restrictions domestically are harmful to U.S. interests. The United States has been leading efforts supporting an open internet, particularly through its development of digital trade commitments and support for similar efforts in the G20 and Organisation for Economic Co-operation and Development (OECD). However, more is needed to effectively counter China’s efforts globally, including as part of its DSR, or risk an internet bifurcated between the United States and China, with security and economic consequences for the United States and its allies. China’s digital economy is large and growing. Based on an OECD taxonomy, Figure 1 shows that China’s digital economy was around 6% of gross domestic product in 2018, compared to around 7% in the United States, 8% in Japan, and 10% in Korea. Figure 1: China’s digital economy1 However, in absolute terms China has the world’s second largest digital economy, second only to the United States. Moreover, China has particular digital strengths. For example, China accounts for 40% of global e-commerce transactions, larger than that of France, Germany, Japan, the United Kingdom, and the United States combined.2 The transaction value of China’s mobile payments in 2016 was $790 billion, 11 times that of the United States, the next largest market. When it comes to AI, China overall trails the United States but leads in specific applications such as facial recognition.3 This range of digital activity drives extensive online activity that generates enormous quantities of data that can be used to train AI algorithms. Much of what drives China’s digital economy — the data and key digital technologies — is digital services. 
In fact, the digital economy is largely about digital services, and includes cloud computing, AI, blockchain, and data analytics to derive better business insights, manage supply chains, and enable digital payment, as well as the online delivery of professional services, retail, education, and healthcare. Understanding the scope of China’s digital services exports requires accounting for the role of digital services as important inputs into manufacturing exports. In some areas of manufacturing such as automobiles, the digital services component (including software, sensors, and AI) provides much of the value add and is where competition is most fierce. In fact, McKinsey estimates that by 2030 up to 30% of automotive manufacturers’ revenues will come from services offerings.4 China’s growth in using digital services in manufacturing is part of a broader push by China into advanced manufacturing, which includes leading the world in its use of robots, where China holds one-third of the global stock, over twice that of the United States.5 Chinese companies are also dominant in the supply of 5G hardware, which will also affect growth in data-driven services. By one measure, Huawei owns the largest share of standard essential patents on 5G.6 5G will bring high-speed connectivity to the edge of the network, reducing latency and increasing speed.7 This will enable the internet-of-things (IoT) and edge computing, and support a range of new applications, content, and businesses such as augmented reality and autonomous delivery systems. Cisco estimates that the number of devices connected to the internet will be 30 billion by 2023, half of these being machine-to-machine connections, including connected factories, home devices, and cars.8 5G will also impact the development of global value chains and the role of digital services, as 5G and associated technologies expand the capacity of business to collect data from things in real time, analyze the data, and develop business solutions along supply chains.9 5G will affect Chinese growth in digital services. In part because China is rapidly installing 5G domestically, Chinese entrepreneurs will have a head start over their Western competitors in developing the new data-driven business models that 5G will enable, with opportunities to get to market first and, where network effects create winner-take-all outcomes, to become new and dominant tech companies. China is also expanding its digital services exports and data governance more generally through its role as a central hub in GVCs and the BRI. China is a central player when it comes to information and communication technology (ICT) goods exports as part of global supply chains. For instance, China accounts for 32% of global ICT goods exports (11% in value added terms — reflecting China’s position in GVC), and around 6% of ICT services exports. 
However, these figures understate the importance of China’s digital services exports, as they fail to account for services as value-added inputs into manufacturing and export — often in the context of GVC.10 This growth of services in manufacturing GVC has contributed to services exports growing faster than goods exports.11 The World Trade Organization (WTO) estimates that services accounted for 30% of the value of China’s manufactured exports in 2015, a 19% increase in services value added since 2005.12 The trends toward using services and data in GVC point to the broader servicification of manufacturing, another development that China is driving and is well positioned to take advantage of.13 China is also expanding its digital services and approach to data governance through the development of a Digital Silk Road (DSR), which aims to expand internet infrastructure, promote e-commerce, and develop common internet technology standards amongst participating countries.14 China’s March 2015 white paper made digital connectivity a top priority. As of 2019, China had invested over $80 billion in digital DSR projects, including fiber optic cables. China is also building data centers, which Beijing has called a “fundamental strategic resource.”15 These developments are integrated with other BRI initiatives, including smart cities, ports, and space systems.16 Each of these developments creates new opportunities for China to expand access to data and integrate DSR countries into a broader digital ecosystem centered around China.17 China’s regime for governing digital services and data is based around a relatively closed domestic market for digital services alongside restrictions on cross-border data flows, including access to information. This closed market helps underpin Chinese support for national champions, including by preventing Chinese citizens from accessing those champions’ U.S. competitors. The extent of Chinese control over data and access to information led former Google CEO Eric Schmidt to foresee the single global internet bifurcating into a U.S.-led and a China-led internet.18 This could also include different standards and frequencies for 5G. Indeed, both the United States and China have identified leadership in 5G standards as a key element to securing their version of 5G.19 The risk for the United States of a bifurcated internet is further heightened by China’s efforts to shape the international environment to support its vision of data governance and digital services exports. China maintains a relatively restricted market for digital services. The OECD digital services trade index in Figure 2 shows barriers affecting trade in digitally enabled services categorized into five policy areas: infrastructure and connectivity, electronic transactions, payment systems, intellectual property rights, and other barriers. The higher the score, the greater the restrictions. As can be seen, amongst the countries listed, China is the most restrictive when it comes to digital services across all metrics. China is also the most restrictive when it comes to telecommunications services.20 Figure 2: China’s digital services trade regulations are the most restrictive21 These restrictions are paired with domestic policies aimed at dominating emerging technologies. 
This includes industrial policies such as the National Medium and Long-Term Science and Technology Development Plan Outline (2006-2020), which calls for China to become an innovation-oriented society by the year 2020 and a world leader in science and technology by 2050, based on developing capabilities for indigenous innovation. The “Made in China 2025” initiative launched in 2015 is a 10-year plan for China to achieve 70 percent self-sufficiency in strategic technologies such as advanced information technology, robotics, aircraft, new energy vehicles, new materials, and biotechnology. Similar industrial policies are also being implemented at the sub-central government level.22 China also employs the most extensive restrictions on access to and use of data, including data localization requirements and restrictions on the movement of data across borders, further restricting opportunities for digital services trade. Figure 3 below shows the number of data flow restrictions and compares that with other economies within the Asia-Pacific Economic Cooperation (APEC). China’s most restrictive cross-border data flow regulations concern security, internet access and control, and financial flows and services. Such data flow restrictions include China requiring banks and insurers to localize data and China’s data localization requirements under its cybersecurity law. In addition to restricting data flows, which affects access for digital services, these regulations could be used to require access to companies’ source code and intellectual property under the guise of national security, which could be used by Chinese companies to compete against U.S. and other companies.23 Figure 3: China has the largest number of, and the most restrictive, data flow restrictions in APEC24 China is affecting the market for digital services and the uptake globally of Chinese data governance practices by influencing international norms and rules and by creating facts on the ground, such as by leveraging the DSR and access to China’s internal market. A key way China is working to develop rules and norms that will affect growth in China’s digital services trade is by shaping international standards to suit Chinese companies and technologies.25 Chinese companies and officials engage across standard-setting organizations and forums, including 3GPP (3rd Generation Partnership Project), which is responsible for 5G standards, and the International Telecommunication Union (ITU), where China is working to develop standards that suit its technological ambitions, such as in the areas of facial recognition26 and IoT.27 This includes placing Chinese officials in senior positions in the ITU and supporting participation of Chinese engineers in technical working groups.28 China-specific standards are also being normalized through the DSR, as investments in connectivity, smart cities, and data centers come with Chinese standards.29 The United States has already responded in part. It is developing new digital trade rules in free trade agreements (FTAs) supporting the free flow of data and further services trade liberalization. The United States is also pushing for data flow commitments in the WTO e-commerce negotiations.30 In the G20, the United States has worked with leaders on statements on data flows and digital trade, yet these G20 outcomes have been limited by G20 members China, India, and Russia.31 The United States continues to work on developing international standards. 
As noted, however, the strategic engagement by China with international standards bodies, including the resources China brings to bear, suggests the United States needs to revisit its approach to standard setting, including allocating increased resources and political capital to ensure that international standards are technically optimal and support open and competitive markets. U.S. efforts to address Huawei’s engagement in 5G are well documented. Tom Wheeler has suggested that another way of addressing Huawei’s proprietary edge in 5G equipment is to support open standards for 5G.32 The United States is also working with allies on broader data governance issues around specific technologies, such as through the Global Partnership on AI and developing OECD principles on AI. These efforts strengthen normative expectations that markets remain open and competitive. However, more is needed from the United States. Any approach to addressing the challenges China presents on digital services trade and data governance will require a coordinated approach with allies as well as more attention domestically to the regulatory issues that drive data flow restrictions.33 On the international front, the United States should work with like-minded countries on a comprehensive approach to data governance that could also draw in market access issues around data flows and access to digital services. This could involve some combination of more robust common international standards and the development of interoperability mechanisms that support data flows while achieving domestic regulatory objectives such as privacy and cybersecurity.34 In parallel, the United States needs more comprehensive domestic regulation on various data and tech issues, particularly federal privacy regulation. This is needed to address concerns amongst allies, especially the EU, with respect to online privacy. Progress here could lead to better alignment and ambition amongst U.S. allies on other U.S. data governance priorities. More broadly, the traditional reliance of the United States on industry self-regulation has left the U.S. government without robust regulatory models around data governance that can serve as a guide to other countries looking to understand how to shore up trust in online activity while also benefiting from the economic and trade opportunities of cross-border data flows. The lack of any such U.S. regulatory models has provided space for China’s approach to gain traction, pulling these countries closer into the digital sphere that China is carving out for itself, including all the concomitant emphasis on data restrictions and data localization that act as barriers to trade in digital services. |
6b087faaf9172326321ece40eab01e0f | https://www.brookings.edu/articles/city-centered-investing-in-metropolitan-areas-to-build-the-next-economy/ | City Centered: Investing in Metropolitan Areas to Build the Next Economy | City Centered: Investing in Metropolitan Areas to Build the Next Economy With our recovery sluggish and our politics in rancorous free fall, the U.S. is on a desperate search to create jobs in the near term and retool the economy for the long haul. As Dorothy found in The Wizard of Oz, the answers surprisingly lie no farther away than home, or more precisely, the top 100 cities and their environs — the major metros — where most Americans live, work and play. If we unleash the energies in our metros, we can compete with anyone. Our 100 largest metropolitan areas constitute a new economic geography, seamlessly integrating cities and suburbs, exurbs and rural towns. Together, they house almost two-thirds of our population, generate 74% of our gross domestic product (GDP) and disproportionately concentrate the assets that drive economic success: patents, advanced research and venture capital, college graduates and Ph.D.s, and air, rail and sea hubs. This intense concentration is the magic elixir of modern economies. It explains why Silicon Valley and Boston lead the world in technological innovation, why San Diego and Indianapolis are global players in life sciences and why Wichita, Kans., and Portland, Ore., specialize in advanced manufacturing and exports. This dynamic holds not only for the U.S. but also around the globe. The rise of Brazil, India and China is a direct product of their rapid urbanization and the growth of supersize metro economies like São Paulo, Mumbai and Shanghai. We mythologize the benefits of small-town America, but it’s the major metros that make the country thrive. Why? When cities collect networks of entrepreneurial firms, smart people, universities and other supporting institutions in close proximity, incredible things happen. People engage. Specializations converge. Ideas collide and flourish. New inventions and processes emerge in research labs and on factory floors. New products and companies follow. As Henry Cisneros, former U.S. Secretary of Housing and Urban Development, likes to say, “Cities are places where two plus two equals five.” The U.S. needs its metro powerhouses as it makes a painful transition from an economy fueled by debt, speculation and excess consumption to one in which we grow in productive, sustainable and inclusive ways. A chorus of business leaders, including Bill Gates, Andy Grove and Jeff Immelt, as well as leading economists, has called for a new American economy. It’s an economy driven by exports, to take advantage of rising global demand. It’s powered by low carbon, to lead the clean-energy revolution. It’s fueled by innovation, to spur growth through ideas. And it’s rich with opportunity, to reverse the troubling, decades-long rise in income inequality. By making smart investments and managing for growth as opposed to maintenance, our major cities and metro areas can lead this transformation. The San Diego region shows how. In 1985 an energetic nonprofit called Connect sprang up to link the scientists and inventors at top research institutions — including the University of California at San Diego, the Salk Institute and the Sanford-Burnham Medical Research Institute — with investors, advisers and support services so their new ideas could become new products and companies. 
The inventive brew that Connect fermented has made San Diego home to a cluster of life-sciences and technology companies such as Qualcomm, Biogen Idec, Life Technologies and Gen-Probe. New York City has had its eye on San Diego’s success and announced its own undertaking in February. San Diego Mayor Jerry Sanders says, “When we emerged out of the period when the defense industry left San Diego, Connect was there. They helped to create eight clusters of technology that have been employment drivers in San Diego, and we’ve been able to build on that ever since.” In terms of jobs, the region’s technology sector has fared better in this recession than its broader economy. Yet the U.S. has been slow to recognize and build on the power of its metropolitan economic engines. A powerful segment of our popular culture and political leadership still paints us into the corner of quaint small towns rather than embracing a network of dominant metro economies. A closer look shows that prosperous small towns are most often suburbs of major cities, and metros generate the majority of GDP in 47 of the 50 states, including such “rural” states as Nebraska, Iowa, Kansas and Arkansas. For decades, however, the federal government has treated cities like disaster zones, pursuing urban policies devoted to subsidized housing and tax incentives to revitalize inner-city neighborhoods rather than creating policies that, for example, support powerful and promising industry clusters. There has been major improvement under the Obama Administration, but old habits die hard, and legacy interventions still get more support than new approaches. If the federal government is outmoded in its approach, states are often openly hostile to their major cities. Greater Chicago contains 67% of the residents of Illinois and generates 78% of the state’s economic output. But Illinois has pursued transportation and infrastructure policies that divert tax revenue from Chicago to subsidize inefficient investments in the rest of the state. By contrast, our competitors understand that prosperity in this century will come via the distinct assets and attributes of their metro engines. Germany, China and Brazil are investing in wholesale change through advanced research, renewable energy, modern ports, high-speed rail and urban transit in Munich, Shanghai and São Paulo — the metros that drive their economies. We must do the same. Here’s how: First, we should stop refueling the old economy’s bad habits. Why, for example, provide huge tax subsidies for consuming more and more expensive housing? Incredibly, the amount of revenue the government forgoes because of the mortgage-interest deduction is projected to grow from $79 billion in the 2009 fiscal year to $150 billion in 2015. What if we capped these write-offs at the current level and directed half the recaptured revenue to reducing the federal deficit and the other half — roughly $25 billion a year — to efforts that grow exports? Second, we should start investing to help American businesses innovate and have access to a world-class infrastructure that connects them with global markets. For example, the federal government can invest in new-energy-discovery institutes to develop breakthrough technologies that will be in demand in a low-carbon world, like solar power that’s cheaper to generate and deliver than fossil fuels. Washington should also create a national-infrastructure bank to help finance projects that are too complex to be paid for in conventional ways and too important to defer. 
Think port infrastructure in global-trade gateways like Los Angeles or freight corridors in and around Chicago. Finally, let’s challenge every metro area to meet and exceed the national goal of doubling exports. Instead of subsidizing businesses to move across municipal lines — a complete waste of taxpayer dollars — cities and suburbs need to team up with businesses to devise export initiatives that build on their metro’s distinctive position in the market. We can’t as a nation double exports unless our metros and their businesses (both large and small) do. America’s cities are its centers for talent, capital and innovation. They are our hubs for trade, commerce and migration. With market-based incentives and the proper business climate, they can be unparalleled engines for the next spurt of American growth and prosperity. If our political leaders can put aside rancor, habit and outdated ideas about what kind of nation we are and should be, we can find success not over the rainbow but in our very own metros. |
6a57bce82e5f778ace77656df3840a97 | https://www.brookings.edu/articles/clintons-strong-defense-legacy/ | Clinton’s Strong Defense Legacy | Clinton’s Strong Defense Legacy The notion that President Bill Clinton was a poor steward of the armed forces has become so commonly accepted that it is now often taken for granted—among moderates and independents as well as Republicans such as George W. Bush, who made the charge in the first place. The Clinton administration, so the thinking goes, presided over an excessive downsizing of the U.S. military, seriously weakening the magnificent fighting machine built by Ronald Reagan and honed by George H.W. Bush. It frittered away American power and left the country an object of derision to its enemies, tempting them to misbehave. This assessment, however, is wrong. The Clinton administration’s use of force (or lack thereof) may be controversial, but the Clinton Pentagon oversaw the most successful defense drawdown in U.S. history—cutting military personnel by 15 percent more than the previous administration had planned while retaining a high state of readiness and a strong global deterrence posture. It enacted a prescient modernization program. And the military it helped produce achieved impressive successes in Bosnia and Kosovo and, more significant, in Afghanistan and Iraq. Although these victories were primarily due to the remarkable dedication and skill of U.S. troops, credit is also owed to Clinton’s defense policy. The Clinton defense team did not, however, do a good job of managing military morale, taking too long to figure out how to distribute a demanding workload fairly and sustainably across a smaller force. As a consequence, U.S. troops became overworked and demoralized, and many left the military or considered doing so. Although many of these problems were largely repaired by the end of the decade, they undoubtedly detract from Clinton’s military achievements. But they do not justify the overwhelmingly negative assessment of his defense record. |
8c56deea1a57bf6ed93e903c21708db8 | https://www.brookings.edu/articles/contemporary-immigrant-gateways-in-historical-perspective/ | Contemporary Immigrant Gateways in Historical Perspective | Contemporary Immigrant Gateways in Historical Perspective This article focuses on settlement trends of immigrants during the periods that bookend the twentieth century, both eras of mass migration. It compares settlement patterns in both periods, describing old and new gateways, the growth of the immigrant population, and geographic concentration and dispersion. Historically, immigrants have been highly concentrated in a few places. Between 1930 and 1990, more than half of all immigrants lived in just five metropolitan areas. Since then, the share of these few destinations has declined, as immigrants have made their way to new metro areas, particularly in the South and West. During the same period, immigrants began to choose the suburbs over cities, following the decentralization of jobs and the movement of opportunities to suburban areas. There are now more immigrants in U.S. suburban areas than cities. Editor’s Note: This article originally appeared in the summer 2013 edition of Daedalus, the Journal of the American Academy of Arts & Sciences published by MIT Press. |
dcd00da6035c6c9855018d23d2dd5e2f | https://www.brookings.edu/articles/corporate-warriors-the-rise-and-ramifications-of-the-privatized-military-industry/ | Corporate Warriors: The Rise and Ramifications of the Privatized Military Industry | Corporate Warriors: The Rise and Ramifications of the Privatized Military Industry A failing government trying to prevent the imminent capture of its capital, a regional power planning for war, a ragtag militia looking to reverse its battlefield losses, a peacekeeping force seeking deployment support, a weak ally attempting to escape its patron’s dictates, a multinational corporation hoping to end constant rebel attacks against its facilities, a drug cartel pursuing high-technology military capabilities, a humanitarian aid group requiring protection within conflict zones, and the world’s sole remaining superpower searching for ways to limit its military costs and risks. When thinking in conventional terms, security studies experts would be hard-pressed to find anything that these actors may have in common. They differ in size, relative power, location in the international system, level of wealth, number and type of adversaries, organizational makeup, ideology, legitimacy, objectives, and so on. There is, however, one unifying link: When faced with such diverse security needs, they all sought external military support. Most important is where that support came from: not from a state or even an international organization but rather the global marketplace. It is here that a unique business form has arisen that I term the “privatized military firm” (PMF). PMFs are profit-driven organizations that trade in professional services intricately linked to warfare. They are corporate bodies that specialize in the provision of military skills—including tactical combat operations, strategic planning, intelligence gathering and analysis, operational support, troop training, and military technical assistance. With the rise of the privatized military industry, actors in the global system can access capabilities that extend across the entire spectrum of military activity—from a team of commandos to a wing of fighter jets—simply by becoming a business client. PMFs represent the newest addition to the modern battlefield, and their role in contemporary warfare is becoming increasingly significant. Not since the eighteenth century has there been such reliance on private soldiers to accomplish tasks directly affecting the tactical and strategic success of engagement. With the continued growth and increasing activity of the privatized military industry, the start of the twenty-first century is witnessing the gradual breakdown of the Weberian monopoly over the forms of violence. PMFs may well portend the new business face of war. This is not to say, however, that the state itself is disappearing. The story is far more complex than that. The power of PMFs has been utilized as much in support of state interests as against them. As Kevin O’Brien writes, “By privatizing security and the use of violence, removing it from the domain of the state and giving it to private interest, the state in these instances is both being strengthened and disassembled.” With the growth of the privatized military industry, the state’s role in the security sphere has become deprivileged, just as it has in other international arenas such as trade and finance. |
cfbcff25c0411eae04991943b0630f89 | https://www.brookings.edu/articles/covid-19-has-given-the-2020-npt-review-conference-a-reprieve-lets-take-advantage-of-it/ | COVID-19 has given the 2020 NPT Review Conference a reprieve. Let’s take advantage of it. | COVID-19 has given the 2020 NPT Review Conference a reprieve. Let’s take advantage of it. Given the sharp differences among parties to the Nuclear Nonproliferation Treaty (NPT), especially on the pace of nuclear disarmament, it was widely assumed that the 2020 NPT Review Conference — originally scheduled to begin on April 27 in New York — would be highly contentious, would fail to achieve a consensus outcome, and might even result in erosion of support for the treaty. Now that the coronavirus has forced its postponement, there is an opportunity — if the time between now and the re-scheduled gathering of NPT parties is used effectively — to prepare for a more successful conference that could contribute significantly to strengthening the NPT regime. The initial announcement of the postponement stated that, in light of the COVID-19 pandemic, the conference would be held at “a later date, as soon as circumstances permit, but no later than April 2021.” Subsequently, when no better dates could be found on the very crowded U.N. calendar than January 4 to 29, 2021, the conference president-designate decided to place a tentative hold on those dates, pending resolution of the pandemic, and NPT parties subsequently agreed to proceed on that basis. The new date is noteworthy because, if it holds, the U.S. presidential inauguration will fall during the conference, opening the possibility that one administration would be in office at the beginning and its successor would be in office at the end. But NPT parties should not defer preparations while awaiting the outcome of the U.S. elections. Instead, they can and should use the additional preparation time to work toward a more productive and successful conference than would have otherwise been possible. Pessimistic outlook for 2020 Review Conference. NPT Review Conferences have been held at five-year intervals since the treaty’s entry into force in 1970. At each review conference, parties have sought to produce a comprehensive final document, approved by consensus, that reviews the implementation of the treaty and sets forth recommendations for follow-on actions to strengthen it. While such a consensus outcome has not always been achieved, it has been viewed as the litmus test of a successful conference and an indication of a healthy NPT regime. With the 2020 Review Conference commemorating the 50th anniversary of the NPT’s entry into force, producing a successful outcome has taken on considerable symbolic importance. Expectations for the 2020 conference were low from the get-go. The key fault line running through the NPT’s membership has been the belief by many non-nuclear weapon states, especially within the group of non-aligned states, that the NPT nuclear weapon states have not done enough to meet their obligation under the treaty to pursue nuclear disarmament. This has been a divisive issue at all previous review conferences. But the difficulty of finding common ground on disarmament matters has increased for the 2020 conference, mainly because of the deteriorating international security environment. 
Recent years have witnessed a sharp decline in U.S.-Russian and U.S.-Chinese relations; the unraveling of the international arms control regime with the termination of the Intermediate-range Nuclear Forces treaty and uncertainty about the future of the New START treaty; the possibility of new, destabilizing nuclear arms competitions; and consequently little near-term prospect of further nuclear arms reductions. However, disarmament is not the only issue that could stand in the way of a consensus conference outcome. Implementation of the 1995 resolution on establishing a Middle East zone free of all weapons of mass destruction — an issue that prevented consensus at the 2015 Review Conference — could again pose an obstacle. Addressing the Iran nuclear issue, with the future of the 2015 nuclear agreement very much in doubt, could also be very contentious. The Treaty on the Prohibition of Nuclear Weapons, with its non-nuclear weapon state signatories sharply divided from its opponents (consisting largely of the NPT nuclear powers and non-nuclear states protected under the U.S. nuclear umbrella), will also be hotly debated. And a range of more technical issues, such as whether the International Atomic Energy Agency (IAEA) Additional Protocol should be the universal safeguards standard for NPT members, could also be the focus of divergent conference positions. Surveying some of the issues that could produce discord at the 2020 Review Conference, Tariq Rauf, a former senior IAEA official and close observer of NPT Review Conferences, wrote in March, before the postponement was announced, that “the die is cast for this year’s review conference to degenerate into a Hobbesian fray, perhaps even leading some countries to threaten rhetorically to withdraw from the treaty.” Despite a widely held gloomy outlook, there have been some glimmers of hope that the 2020 Review Conference would not end in failure. Notwithstanding some strong disagreements among the parties on the implementation of the treaty, there is a deep reservoir of support for the NPT itself and a reluctance to place it in jeopardy, especially on its 50th anniversary. In addition, many advocates of more rapid progress on disarmament may recognize that the current international environment is not conducive to such progress and may scale back their demands and settle for more modest results in order to avoid putting strain on the NPT. Moreover, the Trump administration has been hopeful that its initiative on Creating the Environment for Nuclear Disarmament, which brings nuclear and non-nuclear weapon states together to consider how to promote an environment more favorable to disarmament, has been gaining traction internationally and could facilitate a less confrontational conference. Still, even if a train wreck at the originally scheduled conference could have been avoided, the best that was expected was an outcome that minimized harm to the NPT, rather than a result that significantly advanced the treaty’s goals. Hopefully, the reprieve provided by the postponement has created an opportunity to work toward such a positive result at the re-scheduled conference. Impact of the U.S. election. The November 2020 presidential election in the United States is a factor in NPT parties’ thinking about what may be achievable at the re-scheduled conference. 
This was especially the case after the initial announcement that the postponed conference would take place as soon as possible but no later than April 2021, which meant that it could be held either before or significantly after the American election. A thinly veiled hope of many countries in the wake of that initial announcement was that it would take place after an election in which a Democrat (namely, Joe Biden) would win and adopt positions closer to their preferences than those of the Trump administration, especially on disarmament-related matters as well as on such regional proliferation issues as the Iran nuclear deal. The Trump administration had a different scenario in mind. It hoped the postponed conference would take place before the election (perhaps in October 2020), when it would be guaranteed to be in office and to represent the United States at the conference. It would thus have the opportunity to set forth its own approach to NPT issues and put its own stamp on the NPT review process, regardless of the eventual winner of the November election. The revised dates — January 4 to 29 — alter the situation. These dates are still only tentative and could conceivably change. But the January period, if it holds, raises a possibility unprecedented for NPT Review Conferences — that one U.S. administration would be in office at the beginning of the conference and its successor would be in office at the end. Of course, this would not occur if President Trump is re-elected in November. In that case, NPT parties’ expectations would be shaped by the knowledge that the Trump administration would be in charge of U.S. policy throughout the conference and beyond. But if Biden is elected and takes office on the mandated inauguration day of January 20, the presidential transition would occur just before the final week of the conference, always the most decisive period in any review conference. Given this theoretical scenario, a number of NPT parties might entertain hopes that a Biden administration would come to the rescue in the concluding phase of the conference and deliver what they consider to be a positive outcome. Don’t defer preparations. But it would be unwise for NPT parties to pin their hopes on a Biden victory in November and therefore put off engaging in serious substantive preparations until the election results are in. For one thing, Trump has a reasonable chance of being re-elected. Current polls indicate that the presidential race is likely to be close. Moreover, even if Biden wins the November election, it is unclear what his embryonic administration would be willing to do to alter the course of the conference in its concluding days. NPT parties would be wrong to assume that the clock could simply be turned back to the optimistic days of the 2010 conference and the early years of the Obama presidency. The international environment for disarmament has changed for the worse since then. A Biden administration would clearly be much more supportive of arms control than the current administration. But it is unlikely, especially in its earliest days, to champion some recommendations that strong supporters of disarmament would like it to support (such as seeking deeper nuclear reductions in the near term, giving high priority to ratification of the Comprehensive Nuclear Test Ban Treaty in the U.S. Senate, reducing alert levels for nuclear forces, or pledging not to use nuclear weapons first). 
It may also be unrealistic to expect a Biden administration to adopt specific substantive positions on sensitive issues in the waning days of the conference. Although the president-elect’s transition team would have done a lot of thinking and planning about NPT-related issues leading up to and following the election, no administration political-level appointees (other than perhaps a few cabinet secretaries) will be confirmed by the Senate and in place in the days immediately following the inauguration, and the formal interagency reviews that help set the key policies of a new administration will not even have begun. Moreover, from a political standpoint, the newly installed administration may be reluctant to adopt bold new positions that could subject it to criticism for supposedly showing weakness on national security in its opening days. The Biden team undoubtedly would want to demonstrate to NPT parties in the concluding phase of the conference that it will be charting a different course than its predecessor. But it may only be ready to signal that different course in general terms (such as by expressing support for revitalizing the arms control process and making further progress toward a world without nuclear weapons). An exception would be New START, where a decision to extend the treaty would be needed before its February 5 expiration (if the Trump administration had not already acted to extend it). It is possible to imagine coordination between the incoming and outgoing teams on how to handle the NPT Review Conference. There are many conference issues, including measures to strengthen nonproliferation controls or promote the peaceful uses of nuclear energy, on which the Trump and Biden teams could agree. The U.S. delegation (which by the concluding phase of the conference could conceivably include Trump appointees, Biden appointees not requiring Senate confirmation, and career officials) could be instructed to pursue those common positions. But in areas where the administrations disagree, including some disarmament-related matters, the American delegation might adopt the approach of urging other delegations not to press for conference findings or recommendations on those difficult issues, appealing to those delegations on the grounds that it is simply too early to expect the new administration to take a stand. Another reason for NPT parties not to count heavily on the results of the November election is that the United States, whoever is president, bears only limited responsibility for a successful review conference. Russia (with its Intermediate-Range Nuclear Forces Treaty violation and resistance to further nuclear reductions), China (with its reluctance to accept limits on its growing nuclear arsenal or engage in serious strategic stability talks), North Korea (with its defiance of the international community), Iran (with its destabilizing regional posture), and certain non-nuclear weapon states (with their uncompromising approach to key review conference issues) all deserve their fair share of blame for the current state of affairs and should accept their share of responsibility for getting on a more promising path. All NPT parties need to do their part in promoting an outcome that strengthens the nonproliferation regime. So, for a variety of reasons, it is not a good idea for NPT parties to hold off on preparations in the hope that a newly installed Democratic administration will be able and willing to support the kind of conference outcome they favor. 
Instead, the parties should use the roughly eight-month reprieve provided by the postponement to prepare the way for a more productive conference than would have been possible if the 2020 Review Conference had proceeded on schedule. That means starting now to work closely with the Trump administration and other key governments to find common ground on key issues. Obstacles to early substantive progress. There are two significant obstacles to using the available time effectively. First, it is hard to predict when countries will be sufficiently recovered from the COVID-19 pandemic to permit in-person diplomatic engagements that could facilitate conference preparations. Until it is safe for diplomats to congregate, preparatory work will need to be conducted virtually. And as long as the virus is the overriding preoccupation of national governments, limited bandwidth will remain for lower priorities. Second, progress in finding common ground on difficult issues will depend on overcoming the traditional reluctance of most NPT parties to adjust their national positions and pursue compromise solutions before the conference gets underway. This is why Review Conference Preparatory Committees, which meet years in advance of review conferences, have had such a difficult time achieving their goal of reaching consensus recommendations to be sent to the review conferences for their consideration. NPT parties will need to do their best to overcome these obstacles. While Zoom meetings have their drawbacks and are not a fully satisfactory substitute for in-person consultations, governments are getting accustomed to virtual diplomacy, and there is much that can be done remotely, bilaterally as well as multilaterally, until conditions permit a return to normal practices. The tendency to put off painful compromises until the negotiating endgame may be more difficult to overcome. But this is an unusual year, with the pandemic creating extreme hardships for practically all countries and strains within the NPT membership raising questions about the future of the treaty. On the NPT’s 50th anniversary — and with the devastating global impact of the coronavirus driving home the importance of international cooperation to address common challenges — parties may be more motivated than at previous review conferences to reach early agreement on hard issues for the sake of promoting a successful outcome. The role of the Review Conference bureau. Supported by the United Nations Office of Disarmament Affairs and the IAEA, the Review Conference “bureau” — consisting of conference president-designate Gustavo Zlauvinen of Argentina and the Malaysian, Polish, and Dutch designated chairs of the conference’s three main committees — can play a critical role in giving impetus to preparations at a time when the international community’s attention is understandably directed elsewhere. Zlauvinen, his country’s highly skillful and experienced former deputy foreign minister, and his bureau colleagues know that, as officers of the conference, they are expected to remain neutral on key issues and not try to steer the conference in particular substantive directions. 
But especially in this difficult period, they can help keep governments focused, bring them up to speed on the issues they will need to address and the choices they will have to make, facilitate their engagement with one another, and urge them to work together to make early progress finding mutually agreeable solutions on critical issues and crafting formulations for a review conference outcome document. Before the coronavirus hit and closed down in-person meetings, various governments and organizations organized a number of workshops, often on a regional basis, to discuss review conference issues in general or to focus on particular “pillars” of the NPT (disarmament, nonproliferation, peaceful uses of nuclear energy). Similar gatherings of NPT parties should now be pursued on an intensified basis, initially virtually but eventually in person as conditions permit. Such events could involve different groups of participants, focus on different subjects, and have different purposes. Regarding participation, meetings could be held on a regional basis or bring together key players on particular issues from various regions. Experts from non-governmental organizations might also be invited when appropriate. Subjects could vary from a wide-ranging discussion of review conference issues, to a narrower discussion of individual pillars, to an even more focused discussion of specific issues (such as nuclear risk reduction measures, the future of nuclear arms control, the IAEA Peaceful Uses Initiative, the IAEA Additional Protocol, or the 1995 Middle East zone resolution). The purpose of the meetings might be educational or consciousness-raising, or they might be intended to acquaint participants with arguments surrounding issues on which differences exist. Setting in motion an ambitious program of meetings will be challenging in present circumstances. Ambassador Zlauvinen and his team will need to be proactive in formulating and scheduling such a program and in gaining sufficient buy-in for it among the most influential NPT parties. Together with the separate bilateral and small-group consultations that Zlauvinen and the bureau will be pursuing to move the process forward, these multilateral gatherings could play an important role in building momentum toward a productive conference. Seeking progress on key issues. While the bureau can play a valuable convening, facilitating, and catalyzing role in the period before the conference, most of the heavy lifting in moving toward substantive agreement on the most salient issues will be the responsibility of NPT governments. On many issues, consensus will be readily achievable. But outcomes on a smaller number of more controversial issues will determine whether the conference succeeds in supporting and strengthening the NPT and the broader nonproliferation regime. Key NPT parties should use the time available to resolve or at least make substantial headway on those issues. New START. Agreement by the United States and Russia to extend New START for five years would give a huge boost to review conference prospects. The Trump administration has so far resisted extension, arguing that simply extending the bilateral treaty would not address some significant Russian capabilities (including non-strategic nuclear weapons and a few novel strategic systems) or cover China’s growing nuclear programs. It has instead proposed including China in a new trilateral arms control agreement. 
But citing the huge disparity between its own nuclear capabilities and those of the United States and Russia, Beijing has firmly rejected such a trilateral agreement. Washington and Moscow should agree to a five-year extension of New START and link that extension to the commencement of bilateral strategic talks that would consider how New START or a follow-on agreement could address systems not yet covered that are of concern to either side or both of them (including non-strategic nuclear weapons, long-range conventional strike missiles, and hypersonic systems). Those bilateral talks would also consider how additional nuclear-armed states, especially China, could be brought into the international arms control framework. Creating an environment for nuclear disarmament. With or without New START extension, it would be useful in advance of the conference to make progress on the U.S. initiative on Creating an Environment for Nuclear Disarmament. While a significant number of NPT parties initially approached the initiative with skepticism about the Trump administration’s motives — and some still do — the initiative has gained buy-in as a forum that can bring nuclear and non-nuclear weapon states together for a detailed and systematic consideration of how to improve prospects for nuclear disarmament. Constructive discussions in its three subgroups, whether carried out virtually or in person, could improve the atmosphere for addressing disarmament matters at the Review Conference. Nuclear risk reduction. Many NPT parties recognize that, in today’s increasingly troubled strategic environment — with the outlook for further nuclear reductions bleak, at least in the near term — a very high priority, perhaps top priority, should be given to reducing the risk of nuclear war, which many observers believe has significantly increased in recent years. Confidence-building and transparency measures can minimize the likelihood of misperceptions, accidents, and miscalculations that could lead to conventional armed conflict and escalate to the use of nuclear weapons. While such measures are most appropriately pursued in particular regional contexts (for example the NATO-Russia border, the Korean Peninsula, the western Pacific, or South Asia), the conference can give these measures its strong support and call on governmental or non-governmental experts to prepare a study of past and current nuclear risk reduction arrangements that could help promote and inform future regional and global initiatives. U.S.-China strategic stability dialogue. A major gap in international arms control and nuclear risk reduction efforts is the absence of a serious strategic dialogue between the United States and China, two protagonists whose growing strategic competition is a serious potential source of instability in East Asia. Washington and Beijing should pursue a high-level bilateral strategic dialogue aimed at gaining a better understanding of each other’s strategic intentions and capabilities, avoiding worst-case planning and a destabilizing arms competition, and exploring risk reduction measures that could reduce the likelihood of dangerous miscalculations and armed conflict. The United States has long sought such a dialogue, but China has resisted it. In advance of the Review Conference, the two countries should agree to begin such talks, a development that would certainly be welcomed as an important contribution to reducing nuclear dangers. Role of the P5. 
The five NPT nuclear weapon states, dubbed the P5 because they are the five countries with permanent seats on the U.N. Security Council, have usually worked together cooperatively at previous review conferences, both to promote successful conference outcomes and to defend their records on nuclear disarmament against criticism from non-nuclear weapon states. Although the souring of relations within the group has made P5 solidarity more difficult to achieve, the five need to work together cooperatively in the “P5 Process” (a consultative mechanism begun in 2009) to demonstrate they are taking seriously their collective commitment to reducing nuclear threats and pursuing disarmament. They will report to the Review Conference on progress they have made in several areas, including a dialogue on nuclear doctrines and a glossary of key nuclear terms. In the area of doctrine, the group considered, but could not reach consensus on, a collective endorsement of the Reagan-Gorbachev statement that a nuclear war could not be won and should never be fought. Instead, they have been working on a new statement that would address the role of nuclear weapons. The P5 should use the time between now and the conference to produce a statement acknowledging and affirming their responsibility to work toward a world in which nuclear weapons play a smaller and smaller role and are eventually eliminated. Although each of the five holds its own national position on the role and use of nuclear weapons (China‘s no-first-use policy is the most restrictive), such a P5 statement would seek to outline positions that they share, drawing perhaps on the following elements: Strengthening barriers to proliferation. At past review conferences, proposals for additional nonproliferation measures have encountered resistance, especially by certain non-nuclear weapon states, on the grounds that such measures are not justified in the absence of further steps by NPT nuclear weapon states, mainly the United States and Russia, to meet their obligation to pursue nuclear disarmament. While deep disappointment with the current impasse on disarmament is understandable, it is unwise to hold critical nonproliferation steps hostage to further progress in disarmament. Not only does such a strategy consistently fail to promote advances in disarmament (which depend on international and domestic factors unrelated to such pressure tactics); it risks easing the path to the acquisition of nuclear weapons by additional states — a development that would adversely affect the interests of both non-nuclear weapon states and nuclear weapon states. NPT parties should take advantage of the additional time provided by the postponement to make a major push for two critical nonproliferation measures: making the IAEA Additional Protocol the universal standard for NPT monitoring arrangements and preventing abuse of the NPT’s withdrawal provision. Despite repeated statements by the IAEA that the Additional Protocol is a critical tool for verifying compliance with the NPT by non-nuclear weapon states, a number of NPT parties with nuclear facilities or significant nuclear plans, including Algeria, Argentina, Brazil, Egypt, Saudi Arabia, and Syria, have so far been unwilling to adhere to it. (Iran provisionally abides by it, but its commitment to adhere formally depends on whether it will continue to be bound by the 2015 nuclear agreement, the Joint Comprehensive Plan of Action.) 
Technically based arguments against Additional Protocol adherence, particularly that it would place undue burdens on civil nuclear energy programs (or in the case of Brazil, on its naval propulsion program), have been convincingly refuted. But political arguments persist, especially that non-nuclear states should not be required to take on additional commitments until the nuclear weapons states fulfill their disarmament commitments and also, by some Arab governments, that Israel’s neighbors should not be asked to accept additional measures while Israel remains outside the NPT. Overcoming these political objections will require diplomatic efforts at the highest levels of government. NPT parties should also find a remedy for one of the treaty’s major shortcomings: if a party exercises its right to withdraw, IAEA safeguards on its nuclear facilities and materials automatically lapse, leaving it legally entitled to use the facilities and materials it acquired under the treaty in a nuclear weapons program. North Korea didn’t even wait for withdrawal before pursuing nuclear weapons; it violated the treaty years before formally withdrawing from it. Although there is broad support among NPT parties for certain principles (such as that states cannot escape responsibility for NPT violations by withdrawing), specific proposals advanced at previous review conferences have not been adopted. Among the ideas proposed for addressing the withdrawal problem was a requirement for Security Council consideration of a state’s declared reasons for withdrawal; a requirement for a withdrawing state to accept intrusive IAEA inspections to determine whether it is already in non-compliance; and a requirement for states to accept “fallback safeguards” on critical facilities that would kick in if NPT safeguards lapsed due to withdrawal. Several states opposed such ideas, arguing that they would abridge the sovereign right of non-nuclear states to leave the treaty. With some parties now hinting at withdrawal and possibly considering a run for nuclear weapons, the need for concrete action on the withdrawal question is greater than ever. Preparations for the Review Conference in coming months should include workshops on both Additional Protocol adherence and NPT withdrawal. Such workshops could involve governments that have taken an active interest in those issues. Peaceful uses of nuclear energy. Peaceful uses has traditionally been the least contentious of the NPT’s three pillars, and that will again be the case at the upcoming Review Conference. In preparation for the conference, regional seminars devoted primarily to peaceful uses were held in Nigeria and South Africa. Additional seminars were planned for Latin America and Southeast Asia, but did not take place. The United States and other parties hope that the regional seminars and further consultations in coming months can provide the basis for an impressive package of new commitments on peaceful uses. A significant focus of these efforts will be to augment the IAEA Peaceful Uses Initiative, which was launched in 2010 as a vehicle for raising extra-budgetary contributions to complement the IAEA’s Technical Cooperation Fund. It has raised over $100 million to fund a wide range of nuclear activities, especially in the developing world, in such areas as water and the environment, food and agriculture, human and animal health, and nuclear safety and nuclear power. Addressing the ban treaty and the Middle East zone. 
Two divisive issues with the potential to block a conference consensus are the Treaty on the Prohibition of Nuclear Weapons (often referred to as the “ban treaty”) and the proposal for a Middle East zone free of all weapons of mass destruction. Both issues are now being handled outside the NPT review process, and this may decrease the likelihood of their having a disruptive effect on the conference. Following a 2016 U.N. General Assembly resolution, a U.N. conference open to all member states completed negotiation of the ban treaty in 2017. Efforts by proponents are currently focused on gaining ratification by 50 states, the requirement for bringing it into force. Similarly, in accordance with a 2018 U.N. General Assembly resolution (rather than a Review Conference decision, which would have required a consensus), an international conference on the zone proposal was held in November 2019 under U.N. auspices (not the auspices of the three NPT depositary governments) and will continue to be held annually, despite the strong objections of the United States, Israel, and others. The two issues remain extremely contentious. Ban treaty proponents would presumably like to see a favorable reference to the treaty, including the judgment that it reinforces the NPT, a point that is strongly disputed by opponents. Opponents might want the conference either to be critical of the ban treaty or simply to remain silent on it. On the Middle East issue, Arab governments, with Egypt likely taking the lead and with the support of Iran, may seek review conference endorsement of the U.N. General Assembly-mandated conferences, while the United States will continue to oppose a process that gives Israel little incentive to participate. Parties should seek to resolve both of these issues by pursuing brief factual or non-controversial references to them in the outcome document. On the ban treaty, the conference could simply note that the treaty has been concluded and indicate how many states have already ratified it. On the zone, it could reaffirm the 1995 Review Conference resolution calling for the zone (which should not be controversial) and take note of the international conferences that were held on the issue in 2019 and 2020. Getting these potential obstacles out of the way before the conference begins could create a more favorable climate for tackling other difficult issues when the conference gets underway. Approach to North Korea and Iran. The two “country issues” that most affect the future of the global nonproliferation regime — North Korea and Iran — need to be addressed, even though any real substantive work on the issues will take place separately. The outlook on both has become increasingly unpromising and could deteriorate further by the time of the conference. On North Korea, summit diplomacy has reached a dead end. Kim Jong Un says he is no longer interested in negotiations with the United States and is no longer bound by his moratorium on testing nuclear weapons and long-range missiles. He has accelerated testing of short-range missiles and rockets and continued production of fissile material for nuclear weapons. But so far, he has avoided major provocations, such as nuclear or ICBM tests, probably to avoid alienating China and Russia or triggering a strong U.S. response. There is very little prospect of resuming negotiations before either the U.S. presidential elections or the Review Conference (if it is held in January). The Iran situation is equally bleak. 
In response to the Trump administration’s withdrawal from the nuclear agreement and its adoption of a maximum pressure campaign, Iran has begun to rebuild its enrichment program in violation of the deal’s nuclear restrictions and has engaged in military provocations, either directly or through its proxies, against the United States and its regional partners. It has remained bound by the deal’s extensive monitoring provisions but has not cooperated with an IAEA investigation of possible violations of its safeguards obligations. The risk of armed conflict in the region has substantially increased. The agreement is hanging by a thread and could be gone altogether by the time of the conference. Even without the COVID-19 crisis, constructive diplomacy on North Korea or Iran in 2020 would have been exceedingly unlikely. The pandemic has made it virtually impossible. Given evolving developments on North Korea and Iran, it will be difficult to reach conclusions in advance on how the issues will be addressed in a review conference outcome document. That will probably have to await the conference itself and will depend on the situations prevailing at that time. In light of serious differences among NPT parties on both issues, common ground probably will only be attainable on brief, general recommendations. For example, on the North Korea issue (where consensus may be somewhat easier to reach because North Korea, as a non-party, will not attend the conference), the Review Conference might call for the resumption of negotiations, urge all countries to practice restraint and avoid provocations, and support the goal of the complete denuclearization of the Korean Peninsula (a goal shared by all NPT parties despite widespread doubts that it can be achieved). Iran will be trickier because its representatives will be present and able to block a consensus, because of the sharp divide between the Trump administration and most other parties on the value of the 2015 nuclear deal, and because, if a U.S. presidential transition occurs during the conference, the incoming and outgoing administrations may clash on how to address the issue. Still, as with the North Korea case, a general recommendation may be possible — for example, simply calling on all interested parties to engage in negotiations on the nuclear issue and other issues of concern and, in the meantime, to exercise restraint and reduce tensions in the region. Considering different kinds of outcomes. The postponement of the 2020 Review Conference provides an opportunity to consider various departures from traditional conference practices that could help promote its success. Gift baskets. The outcome sought by all previous review conferences has been a comprehensive final document, approved by consensus, that reviewed the past operation of the NPT and set forth recommendations for advancing its goals in the future. But even when the parties were able to achieve such a consensus final document, their recommendations did not necessarily lead to concrete actions. Putting those recommendations into practice depended on follow-on decisions and actions by national governments and specialized bodies like the U.N. Security Council, the IAEA Board of Governors, and the Conference on Disarmament. Sometimes conference recommendations eventually led to concrete results, but sometimes they did not. One way for the upcoming conference to demonstrate tangible results right away is to borrow an innovative feature from the 2010–2016 Nuclear Security Summit process. 
Delegations were encouraged to come to the summit meetings with “gift baskets”: voluntary commitments by individual countries or groups of countries to take specific steps to strengthen their own nuclear security systems and demonstrate their support for the international nuclear security regime. NPT parties should be encouraged to bring similar gift baskets to the Review Conference. These voluntary undertakings would supplement, not replace, the effort to produce a comprehensive final document containing recommendations for future action. Because of the postponement, there is now enough time for national governments to do the internal reviews needed to decide what kind of commitments, if any, they are prepared to make. Gift baskets could take many forms: a U.S.-Russian joint statement on extending New START; a statement by a nuclear weapons state that it is accelerating its dismantlement of retired nuclear weapons; a pledge to adhere to the Additional Protocol; an assurance by a group of nuclear weapons states (perhaps the P5) that they will continue their unilateral moratoria on nuclear testing for at least five more years; a monetary or in-kind contribution to the IAEA Peaceful Uses Initiative; a commitment to accept fallback safeguards on a country’s enrichment or reprocessing facilities; or a decision to join various nuclear safety or security conventions. Taken together, the gift baskets could make an impressive, concrete contribution to the nonproliferation regime and bolster the success of the Review Conference. Plan B. Another departure from past review conferences would be to relax the requirement that all of the findings and recommendations in the final document be approved by consensus. Because of this all-or-nothing approach, several previous review conferences did not produce a final document and were seen as “failed” conferences. Many constructive ideas offered at these supposedly failed conferences had gained approval in the main committees, but could not be recorded in a final document because disagreements on a small number of issues blocked their adoption. NPT parties should seek a consensus final document, just as at previous review conferences. But in the event that such an outcome is not achievable, the parties should be prepared to fall back to Plan B: a positive conference outcome that includes both consensus and non-consensus elements in a final document. There are any number of possible Plan B outcomes, in terms of format and substance that could demonstrate a successful Review Conference. Conference leaders — the conference president and his bureau and key delegations — should not get locked into a single approach. Instead, they should be flexible and creative in putting together a package of elements, not all of them supported by consensus, that could be incorporated into a final document and capable of gaining the support of the parties. Such a document could begin with a brief, high-level political declaration reaffirming strong support for the NPT and pledging to fulfill its obligations and ensure that its goals are fully realized. The document would give prominence to conference findings and recommendations that could achieve a consensus, but it would also acknowledge areas of disagreement as well as positions and recommendations that fall short of gaining consensus. (For a fuller discussion of Plan B, see “The 2020 NPT Review Conference: Prepare for Plan B.”) Making good use of the reprieve. 
Many NPT governments and non-governmental observers believe that if the 2020 Review Conference had been held on schedule at the end of April 2020, a harmonious outcome would have been unlikely and that a contentious result could have eroded confidence in the future of the NPT. The postponement of the conference due to the COVID-19 pandemic provides a reprieve that can permit additional preparations. Of course, additional time to prepare does not guarantee a more positive outcome. The Review Conference is likely to be difficult no matter when it is held. The polarization among NPT parties that has hampered the NPT regime for decades has gotten worse in recent years, exacerbated by the deteriorating international security environment, the increasingly adversarial bilateral relations between key major powers, and the resulting sharp decline in prospects for disarmament. But the reprieve at least provides the opportunity to try to do better than what appeared possible before the postponement. Unfortunately, it is an opportunity significantly inhibited by the practical constraints on diplomatic engagement imposed by the coronavirus. It will require considerable ingenuity and energy on the part of the conference president and his bureau, supported by U.N. Office of Disarmament Affairs and the IAEA, to ensure that NPT governments stay actively focused on conference preparations during the challenging months ahead, to help structure preparatory work effectively, and not least to assist the parties in finding practical and innovative ways to interact with one another at a time when traditional in-person engagement is not possible. And most important, key NPT parties, working with the bureau and each other, will need to take advantage of the additional time to try, as early as possible, to resolve key issues that will determine whether a positive conference outcome can be achieved. In the end, the most important determinant of success will be a recognition by parties to the NPT that, despite their differences, they have a strong common interest in preserving and strengthening a treaty that for 50 years has promoted international peace and security and that provides the indispensable international framework for pursuing disarmament, nonproliferation, and the peaceful uses of nuclear energy long into the future. |
c483095580e8d3cb20b43768c63df5e2 | https://www.brookings.edu/articles/debt-is-cheating-our-childrens-future/ | Debt is Cheating Our Children’s Future | Debt is Cheating Our Children’s Future Now that we have all gone through the painful process of paying income taxes, let’s stop and think what tax-paying – and the federal fiscal environment – may be like 25 years from now. If we think we have it bad, our children and grandchildren face potential tax and financial burdens that will be crippling if nothing is done to reduce our nation’s growing debt. A recent AARP Bulletin, belying the stereotype of the greedy senior, put two naked toddlers on its cover and superimposed on their backs the grim headline “$156,000 in debt.” That’s the amount that every American child already owes, on behalf of his or her country, if you add our $8.3 trillion national debt, plus unfunded commitments to Medicare, Social Security and other entitlement programs. That’s nearly three times the average household’s net worth and about four times the average American’s annual income. And it’s all because of our fiscal profligacy – or should we say immorality? We’re the grown-ups who should be taking care of America for future generations. Instead, we’re bequeathing a fiscal mess of biblical proportions. This is not just an abstraction or a problem that will go away with faster economic growth, cutting government “waste,” or as one focus-group participant recently suggested, requiring the Bush daughters to spend less on designer shoes. To put it in more personal terms, rising deficits will slow economic growth and reduce the average family’s income by $1,800 by 2014, drive up interest rates (costing the average American an additional $2,000 per year in mortgage interest), and force average taxes to rise by $7,000 by 2030 if we keep our current promises to the elderly. Moreover, deficits have other perverse effects. As our debt grows, we will soon pay one-quarter of our taxes on interest on that debt – with that money going to our creditors, half of whom are Chinese, Japanese, Saudis and other foreigners. That, of course, is if they don’t dump our T-bills and send our financial markets tumbling. The second perverse effect: The more we spend on entitlements and debt service, the less there is to spend on investments in the future. When people talk of cutting nondefense “discretionary spending” – i.e., everything that government does other than support the big entitlement programs and defense – they’re talking about less than one-fifth of the federal budget. Yet, it is these investments – not just in scientific research, transportation and other infrastructure or environmental protection, but also in children themselves – that are increasingly shortchanged. The 77 million 0-to-18-year-old Americans are our future. If we don’t invest in their education, health and general well-being, we might as well say that we don’t care about their future or the future of the United States. Without the best education, health and other opportunities – which our society is more than wealthy enough to provide – America may well fall behind in global competitiveness. Secondly, by piling up 12-figure deficits, we constrain our children’s and grandchildren’s freedom. If most public spending when they are adults is devoured by entitlements and debt service, they will be unable to make the political choices promised by a democracy on what they want to spend their tax dollars. 
And if their incomes fall, and their taxes and interest payments rise, that will financially constrain their historic American freedom to the “pursuit of happiness.” Is this what we want? No one in his or her right mind would say yes, but that’s the course we’re on. The glimmer of good news is that this is not inevitable. It’s like the future shown in Charles Dickens’ A Christmas Carol – what might be, if nothing is done. With political leadership and public outcry, a can-do country like the United States can reform its entitlement programs, cut wasteful spending and find ways of raising new — but not onerous — revenues. Look into your child’s or grandchild’s eyes and think of their future when you consider what our president and Congress must do. |
54e1349548701d26a29da750df261d27 | https://www.brookings.edu/articles/democratic-expressions-amidst-fragile-institutions-possibilities-for-reform-in-dutertes-philippines/?shared=email&msg=fail | Democratic expressions amidst fragile institutions: Possibilities for reform in Duterte’s Philippines | Democratic expressions amidst fragile institutions: Possibilities for reform in Duterte’s Philippines This primer characterizes the authoritarian practices of Philippine President Rodrigo Duterte’s administration and their legacies for liberal democracy in the country. It argues that the policy and rhetoric of the Duterte administration’s war on drugs have created fragile democratic institutions that are prone to abuse of power. It highlights three key areas of concern: the increasing role of coercive institutions like the police and the military in all levels of governance undermines long-standing efforts at institutionalizing democratic control over security forces; the regime’s systematic and aggressive attacks against the political opposition, the judiciary, and the media weaken the capacity of monitory institutions to scrutinize and hold the regime accountable; and disinformation campaigns further corrode the capacity of the public to engage in critical discourse and informed political decisionmaking. Despite the intensification of authoritarian practices in the Philippines, there remain robust albeit fragmented democratic expressions in the form of standout local mayors, digital innovations, and electoral resilience. These micropolitical democratic practices may have limited scope, but they are meaningful in consequence. The primer concludes by offering possibilities for scaling up these seemingly mundane yet nevertheless powerful expressions of counter-authoritarian practices. At the height of the COVID-19 pandemic, Philippine President Rodrigo Duterte registered an approval rating of 91%. A vast majority of Filipinos support the government’s pandemic response, despite the Philippines recording one of the highest numbers of infections and COVID-19-related deaths in Southeast Asia. The debate continues about the reasons behind the president’s popularity, but one thing is for certain: public satisfaction lends legitimacy to Duterte’s authoritarian project. The Senate opposition did not win a single seat in the midterm elections. The Supreme Court is packed with Duterte’s appointees. The media is facing increasing constraints. Indeed, there are fewer obstacles for the current administration to mainstream authoritarian practices. This primer begins by providing an inventory of authoritarian practices by the Duterte regime and reflecting on their implications for democratic institutions. The term “authoritarian practice” is deliberately used to refer to patterns of action that disable voice and accountability. Instead of using the catch-all term “authoritarianism,” which confounds rather than clarifies political transformations in the Philippines, the term authoritarian practice lends precision in identifying political decisions, policies, and rhetoric that undermine democratic contestation and scrutiny of power. The key message in the first part of the primer is that authoritarian practices corrode the quality of democratic institutions by rendering them vulnerable to abuses of power. But this is not the whole story. As the Philippines witnesses the intensification of authoritarian practices, there remains room for democratic action that facilitates participation and creative forms of co-governance. 
These not only serve to push back against authoritarian practices but also develop democratic projects that fit the Philippines’ youthful, global, and digital participatory cultures. This primer spotlights these democratic expressions as opportunities for reform, and concludes by considering possibilities to scale up these counter-authoritarian practices in the remainder and in the aftermath of the Duterte regime. The Philippines has an uneven trajectory of building democratic institutions. Three decades after the 1986 People Power Revolution that put an end to Ferdinand Marcos’ dictatorship, the country appears to have developed an electoral habit of rotating power between populist and reformist presidents. 2016 was a populist leader’s turn, but instead of perpetuating a rich-versus-poor narrative, Rodrigo Duterte amplified the latent anxiety of many Filipinos that pit the virtuous citizens against unscrupulous criminals. Duterte referred to Davao — the city where he was mayor for over two decades — as Exhibit A. With unconventional methods of governance, Duterte, so the story goes, was able to transform Davao from the murder capital of the Philippines to a peace and order paradise. Becoming president allowed him to scale up this effort. “It will be bloody,” he warned the nation. Four years into his term, President Duterte did fulfill his campaign promise. He empowered the Philippine National Police to lead his “war against drugs” which has resulted in over 8,000 deaths, as reported by the United Nations Office of the High Commissioner for Human Rights. Even the pandemic did not halt drug-related killings. The drug war is not only Duterte’s landmark policy. It also serves as the organizing logic of his rule. A nation at war justifies authoritarian practices, for due process is a slow-moving process, and protests of “bleeding heart liberals” get in the way of the president’s law and order agenda. The policy and rhetoric of the drug war have vast implications. They create fragile democratic institutions that are prone to abuse. Three key areas of concern are worth highlighting. First, the drug war mainstreamed the securitization of social issues. Coercive institutions such as the police have been at the forefront of implementing social policies. Addressing the issue of illegal drugs is a clear example, with the Philippines bucking the global trend of treating substance abuse as a public health issue rather than a law and order issue. The logic of securitizing social issues extended to pandemic response. The police were among the most visible front-liners enforcing curfew and social distancing policies with punitive measures. Protesters were dispersed and arrested with the police wearing full battle gear. Military tanks were deployed in Cebu City to communicate strict lockdown policies. A little over a month since Manila went on lockdown, over 30,000 people were arrested for breaking quarantine restrictions. Cases of police brutality surfaced. Some violators were locked in dog cages while others were made to sit under the sun. The tone from the top guarantees impunity for the state’s security forces. “Shoot them dead” was the president’s order for violators, just like the “permission to kill” in the drug war. The result of the president’s rhetoric is the culture of impunity in the police force. 
Decades-long efforts at institutionalizing democratic control over security forces are being undermined, as a new generation of police officers is socialized into an unaccountable institution in which officers who killed suspects in drug raids were hailed as heroes and rewarded with promotions. A greater role is also accorded to ex-military generals who sit in key sites of power, including the task force in charge of pandemic response. The growing power of the military is further legitimized by legislation such as the Anti-Terror Law, which broadens the definition of terrorism and legalizes detention without charge for 14 days. These developments, among others, illustrate the reach of authoritarian practice as far as curtailing prospects for accountability and democratic control of security forces is concerned. Second, authoritarian practices compromise monitory institutions or bodies designed to scrutinize power. Among the earliest signs of monitory institutions’ fragility is the complicity, if not active participation, of lawmakers in the prosecution of opposition Senator Leila De Lima. As former human rights commissioner, De Lima led a Senate investigation into Duterte’s death squads a few months after Duterte assumed the presidency. Sixteen of her fellow senators voted to oust her as chair of the Senate Committee on Human Rights, followed by a series of humiliating investigations that suggested De Lima had taken money from drug lords. De Lima has been in detention for three years based on what appear to be politically motivated charges. De Lima’s case is a clear manifestation of authoritarian practice. It constrains accountability by subverting the role of the Senate as a check on executive power. It also constrains voice, as De Lima was made an example of how far the state can go in retaliating against critical voices. Following De Lima’s detention is the ouster of Chief Justice Maria Lourdes Sereno, also a former human rights lawyer, justified based on her failure to disclose financial earnings when she was first appointed to the Supreme Court. Threats and humiliation of critical voices extend outside formal political institutions. Other controversial examples include a military general who tagged female celebrities who speak up for human rights as communist sympathisers and threatened they would “suffer the same fate” as activists killed in military encounters. The cases of Senator De Lima and Chief Justice Sereno serve as a reminder that such threats may be carried out. This sends a strong signal to watchdogs and whistleblowers to think twice about scrutinizing power. Third, authoritarian practices create a fragile public sphere. The Duterte regime is notorious for its systematic distortion of public discourse. Academic studies as well as investigative reports have uncovered the administration’s mobilization of state-sponsored troll armies, which creates a toxic online environment that punishes dissenting voices. Press freedom in the Philippines is also eroding, as news organizations not only face threats of being shut down but have actually been closed by congressional votes and judicial rulings. As in previous sections, these authoritarian practices are given the green light by Duterte, who labelled journalists as “presstitutes” and propagators of fake news. Meanwhile, the Philippines’ protest culture is confronted by pandemic-related restrictions, leading to arrests of activists despite protesters practicing social distancing. 
The fragility of the public sphere, however, is not the sole creation of the Duterte regime. Long before Duterte assumed power, the Philippines already suffered from a patchy track record of press freedom. The Philippines is widely celebrated as having a vibrant media environment and robust commentary culture, especially when compared to its neighbours in Southeast Asia. This reality, however, uncomfortably co-exists with the country’s track record as the deadliest peacetime country for journalists. Similarly, increasing mistrust of news organizations has made the public sphere more vulnerable to disinformation. One cannot overstate the worry of seeing an increasingly fragile public sphere. The Philippines may not have well-established political parties, but the highly networked and vibrant public sphere has always been a political force in sparking change, whether it was ousting the Marcos dictatorship or calling out the corruption of Presidents Joseph Estrada and Gloria Macapagal-Arroyo. Many observers find it curious that dissent against the Duterte regime has not crystallized to date. At best, protests have been fragmented and fleeting. Could this be an indication of the normalization of authoritarian practices? There are two ways of answering this question. A pessimistic answer is yes, all these developments signal the normalization of authoritarian practices. The Philippines’ pathway to democratization has long been undermined by political elites’ refusal to institutionalize reforms that strengthen political competition and accountability. President Duterte, one could argue, is simply a beneficiary of clan politics that has long defined electoral democracy in the Philippines. With political families dominating all sectors of government, including Duterte’s own family in Davao City, there is little space for alternative voices — whether in the form of opposition parties, social movements, or civil society groups — to offer credible democratic projects that can withstand the political machinery of political elites that benefit from the Duterte regime. On the other hand, a less pessimistic take, is no, the fragmented and fleeting contestation of the Duterte regime does not signal the normalization of authoritarian practices. What it could signal, however, are less spectacular expressions of democratic participation today. This section of this briefing, therefore, places a spotlight on some of these democratic expressions. These, one could argue, are plausible efforts at sustaining democratic action amidst authoritarian practices. Three bright spots are worth paying attention to. The first bright spot can be found in local governance. The pandemic has generated attention to standout local mayors whose open and participatory approaches to governance stand in contrast to the Duterte regime’s centralised and militarized approach. Vico Sotto — the thirty-year-old mayor who put an end to the three-decade reign of a political clan in Pasig City — has established a reputation for institutionalizing good governance practices inspired by participatory practices in cities like Naga in the Bicol region. Sotto focused on democratizing government data — from creating Freedom of Information kiosks to soliciting citizen-centred scorecards that monitor and assess the local government’s delivery of public services. He also championed inclusive governance during the pandemic. He granted financial aid to LGBTQ families and converted hotels to quarantine facilities for communities living in poverty. 
The young mayor is not the first and certainly not the only local chief executive who has embraced the language and practice of inclusive and participatory governance. But what is curious about his governance style is that it is pitched not as opposition to the Duterte regime — indeed the mayor has been careful not to condemn the Duterte administration — but as an alternative way of governing effectively without an iron fist. This is worth spotlighting, for it invites observers to notice practices that are not overtly oppositional but nevertheless create pockets of democratic innovation even in challenging times. The second bright spot rests on the emergence of digital governance cultures in the Philippines. The rise of troll armies has been diagnosed as an outcome of a tech-savvy generation left with little choice but to engage in precarious digital labor. The flipside of this development, however, is the rise of a generation confident in proposing technological interventions to complex governance problems. Millennial data scientists have creatively used mobility apps like Waze and Google Maps to track the spread of COVID-19, while others focused on developing a dashboard that allows citizens to monitor government spending during the pandemic. These examples, among others, lend insight into the character of democratic innovations embraced by young Filipinos today. Beyond expressions of dissent on social media, the digital public sphere is also kept alive by seemingly depoliticized yet nevertheless important behind-the-scenes work that promotes open data critical for inclusive governance. The third, final, and undoubtedly most obvious avenue for democratic expression is elections. As the Duterte administration’s rule draws to a close in 2022, speculations about “no election” scenarios are being raised, while questions about succession increasingly heat up. Despite the fragility of democratic institutions, one can argue that elections remain one of the most resilient features of Philippine democracy. They not only serve as a mechanism for the peaceful transfer of power but have, in local culture, been celebrated as a “ritualized gamble.” Elections, as anthropologists describe, are “hugely popular, are taken seriously, and draw very high participation rates.” It is therefore important to focus attention on identifying political actors that enhance competitive elections, such as grassroots movements and community leaders that can challenge entrenched political clans. The Philippine legislature continues to be controlled by a handful of families, but there are exceptional success stories of so-called “dragon-slayers” that challenge the configuration of local power. The three avenues of democratic expression discussed in the previous section send a key message — micropolitical reforms may have limited scope, but they are meaningful in consequence. This briefing concludes by offering three possibilities for scaling up these seemingly mundane yet nevertheless powerful expressions of counter-authoritarian practices. First, champions of participatory governance at the local level warrant support, but this must go beyond idealizing individual leaders. 
The success stories discussed above are not singlehanded achievements of heroic politicians, but are built on a cadre of professionalized and committed civil servants who not only have the technical skills to manage day-to-day problems of running local governments but also have the sensibility to listen and engage with the feedback of ordinary citizens. A critical space for reform, therefore, rests on normalizing this ethos of civil service and drawing attention to collective achievements rather than glamorizing individual leaders. Second, it is critical for the Philippines’ large population of digital natives to serve as the main defenders of the digital public sphere. Doing this goes beyond campaigns for digital literacy and education against disinformation. As the previous section suggests, the thriving disinformation industry was a beneficiary of a precarious class of digital workers left with little choice but to work for shady clients. A polluted public sphere cannot be rescued without addressing the political economy of disinformation. Finally, expanding the field for electoral competition remains a challenge for the Philippines. Large-scale efforts at voters’ education remain futile if voters are left with a narrow field of candidates to choose from. The discourse of voter-blaming does little to deepen democratic practice. Advocacies on party building and reform remain relevant today, as well as a more serious recognition of cultural agents that shape citizens’ views on democracy and politics. While celebrities and influencers have been disparaged as insignificant voices in politics, it is worth recognizing that some of the most successful albeit fleeting campaigns against authoritarian practice, especially disinformation, are sustained by supporters of these cultural actors who are key vectors in shaping public conversation. As authoritarian practices in the Philippines’ national politics continue to unfold, increasing attention is needed to consistent, behind-the-scenes, less spectacular forms of democratic labor. These, as this primer argues, have the power to prevent fragile democratic institutions from completely breaking apart. |
bc0381735f1e5e172c506bfc31ea8aaf | https://www.brookings.edu/articles/democratization-on-hold-in-malaysia/ | Democratization on hold in Malaysia | Democratization on hold in Malaysia In May 2018, Malaysians elected a new coalition to power, putting an end to 61 years of a single party’s de facto monopoly on power. Two years later, the democratization many expected has yet to materialize. Structural and contextual factors have plunged the country into a political crisis alongside the health crisis of the COVID-19 pandemic. While the pandemic has been relatively well-handled, it has created a unique climate of emergency, uncertainty, and fear. These extraordinary twin crises are shaking Malaysia’s fragile democratic equilibrium, creating new political opportunities and failures that could lead to another change in government. General elections are expected in early 2021. Meanwhile, the democratic reforms many anticipated have been put on hold. This paper explores Malaysia’s domestic politics and its recent democratic struggles, as well as their implications for U.S. foreign policy. Malaysia has been a crossroads of civilization, culture, and religion and a strategic route on the Maritime Silk Road since the 6th and 7th centuries. Its society is diverse, including large minorities of Chinese and Indians whose migration goes back as far as the 15th century as well as indigenous peoples, the majority of whom inhabit the two states of Borneo, in East Malaysia. Roughly 60% of the population practices Sunni Islam, but other religions are also highly represented. The United Malays National Organisation (UMNO), a Malay nationalist party, ruled Malaysia continuously from independence to 2018. Most political parties are ethnically and/or religiously based. Religion is highly politicized, which regularly results in tensions and occasional, mostly politically orchestrated sparks of violence. Despite this, Malaysia remains a relatively peaceful country and an important strategic partner to the United States for trade and strategic cooperation in a highly sensitive regional context. Malaysia is a hybrid regime oscillating between democratic institutions and authoritarian practices, considered “partly free” by Freedom House. Several laws have been designed to control the public space and suppress dissenting voices in the opposition and the media. The 2018 victory of an opposition coalition raised hopes for democratic reforms after 61 years of single-party rule. However, the new government failed to reform a system embedded in an old political culture of patronage and lasted for only 21 months. The new ruling coalition (Perikatan Nasional, or PN) that took power in March 2020 has revived a more Malay-oriented agenda. In the context of the COVID-19 pandemic, the PN government led by Prime Minister Muhyiddin Yassin has prioritized health and economic measures rather than deep reforms. The opposition coalition (Pakatan Harapan, or PH), led by Anwar Ibrahim, is in limbo. In the last few months, Anwar, a former democratic icon jailed twice on sodomy charges, has lost credibility as a potential prime minister in the eyes of his allies, who are now asking for his resignation. Mahathir Mohammad, who, after two decades ruling Malaysia under the UMNO flag (1981-2003), switched to the opposition in 2016 and won the election in 2018, is still eyeing the top job in the next general elections, at age 95. The unending feud between the two men, Anwar and Mahathir, further destabilizes a fragile political equilibrium. 
In May 2018, after a fierce campaign, Mahathir Mohammad’s coalition Pakatan Harapan (the Pact of Hope, or PH) won Malaysia’s general elections. Mahathir, who had ruled over the country for more than two decades starting from 1981, resigned in 2003, promising never to come out of retirement. In spite of this promise, Mahathir made an unexpected comeback in his nineties, reuniting with his former rival Anwar Ibrahim to topple then-Prime Minister Najib Razak, who was embroiled in the 1MDB financial scandal, and the ruling United Malays National Organisation (UMNO). Since their loss of power, Najib and other UMNO leaders have been charged on several counts of abuse of power, corruption, and misappropriation of state funds. Najib was sentenced to 12 years in prison in July 2020, a decision which he is currently appealing. During the campaign, with Anwar’s blessing, Mahathir had taken over the leadership of Anwar’s democratic movement (Reformasi) while the latter was in jail. In exchange, Mahathir had promised: (1) he would deliver the victory the movement had failed to achieve since its creation in 1998, (2) upon victory, he would release Anwar Ibrahim, and (3) he would hand leadership of the government to Anwar within a few years. Mahathir delivered on his first two promises — an electoral victory and the release of Anwar — but over the course of 2019, the old leader seemed more and more reluctant to hand over power to his ally. Anwar’s mounting pressure on Mahathir’s government, and the growing divisions this created in both Anwar’s party (Keadilan) and the government coalition, precipitated Mahathir’s resignation in January 2020. A few weeks later, to end the feud between Anwar and Mahathir, the country’s monarch appointed a third man to replace Mahathir: Muhyiddin Yassin. Muhyiddin, then vice president of Bersatu, a party he had created with Mahathir and his son Mukhriz Mahathir in 2016, became prime minister in March 2020. He had previously served as a loyal deputy for both Mahathir and Najib Razak, occupying several ministerial posts before rising to the position of deputy prime minister and ultimately resigning in 2015 after voicing concerns regarding Najib’s role in the 1MDB financial scandal. Muhyiddin Yassin now faces multiple crises: political instability, a health crisis, and a looming economic recession. In the midst of these storms, the premier is also struggling with continuous challenges from opposition political leaders as well as those within his own camp. In March 2020, Muhyiddin formed a Malay majority government with the support of a coalition that includes the former ruling party UMNO and the Islamist Party (PAS). The new opposition was then composed of the Pakatan Harapan (PH), including the ethnic Chinese-based DAP (Democratic Action Party), Anwar’s Keadilan, and the Bersatu faction that chose to remain faithful to Mahathir. Keadilan suffered a tremendous loss with the departure (and purge) of some top leaders who pledged allegiance to the new government. Despite their historic loss in 2018, UMNO, now led by Zahid Hamidi (who has also been accused of corruption), is steadily regaining its political influence. The party currently occupies key ministerial positions in Muhyiddin’s cabinet and 38 of 222 seats in the parliament, making it the most powerful party in the ruling coalition with the strongest political machinery, a factor which helped to deliver the coalition its recent victories. 
In July, Najib was sentenced to 12 years in jail on different counts of corruption, a decision he appealed to the higher courts. While outsiders might expect that Najib’s popularity would have crashed, he is in fact slowly rebuilding his image and inventing a new narrative. “Bossku” (My Boss), as his supporters have colloquially named him, is still a major player. Despite UMNO’s deep factionalism, and the fact that Najib has yet to regain the support of the party’s entire leadership (and he might not), the talented tactician is paving his way back to power. The path forward for Malaysia’s democracy remains unclear. On September 23, Anwar Ibrahim announced he had the necessary parliamentary majority to unseat Prime Minister Muhyiddin, a declaration he made without showing any proof to the public or giving any warning to his allies. Anwar had made similar claims in the past. In each case, his assertions ultimately proved unfounded, as they did in this case as well. While Anwar’s prophecy of power did not materialize, the chaos it created for Malaysian democracy sent tremors through the entire country. The coalition is now in limbo, and some of its leaders are calling for Anwar’s resignation. Mahathir is also actively pursuing power, attempting to create a political front led by his new party, Pejuang. Though 95 years old, he recently announced his intent to run in the next election (possibly 2021). As for Muhyiddin, he remains in a vulnerable position, pressured from within by his own coalition allies (mostly UMNO) as well as by the opposition. In reviewing the twists and turns of Malaysia’s recent domestic politics, a few key trends deserve to be highlighted. The silence of a new generation and the resilience of old leaders. While younger democratic leaders have emerged elsewhere in Southeast Asia, this is not the case in Malaysia, where Malaysian voters appear more willing to trust older leaders and father figures. Najib, Anwar, and Mahathir have been at the forefront of Malaysia’s political scene for decades, and a new generation of leaders has not yet emerged. Mahathir’s successful reinvention from autocrat to democrat in the 2018 campaign suggests that Malaysian voters in general remain conservative and unlikely to call for a new generation of leadership despite the controversies of the past. The (non-)impact of other democratic movements in the region. Democratic movements in Hong Kong and in Thailand have had limited impact in Malaysia. This may be partially explained by the absence of a coordinated, nonpartisan youth movement (as noted above). The new youth party (Muda) created by former Minister of Youth and Sport Syed Saddiq has not received the attention some had hoped for, nor is it clear this party would be independent from Malaysia’s longtime power brokers. The party is an offspring of Mahathir’s new movement, and Syed remains a fervent supporter of the former premier. The role of the monarch. While the status of royalty is under threat in Thailand, the Malaysian king, Sultan Abdullah Sultan Ahmad Shah, is subtly crafting a new role for himself, far from the political tradition and his constitutional prerogatives. In this highly volatile political context, the king is portraying himself as a neutral arbitrator and a guarantor of stability. COVID-19 adds further stress. Malaysia’s unstable governance has added to the economic challenges caused by COVID-19, pushing Malaysia into a recession that will leave an impact throughout 2021. 
The partial lockdowns caused by the pandemic are increasingly contested, creating further tensions between Malaysians and their government. In some cases, these lockdowns, such as the one in Selangor, seem to have been used as a means of asserting government strength and preventing further political instability. The prime minister has announced he will call for a general election as soon as the pandemic has ended, but this delay is perceived by many merely as a strategy to contain the opposition’s disaffection and the instability within his own government. Malaysian politics are admittedly complex, even for close observers. One of the problems this causes is that outside observers, including U.S. policymakers, can often make misguided assumptions about the more important factors shaping Malaysian democracy. Two examples come to mind: the “China factor” and civil society. For most Malaysians, the question of China’s influence over domestic issues is perceived as quasi-irrelevant. For many, the question of Western (and specifically American) interference and support for civil society groups is a bigger concern. Many Malaysians share a general suspicion of superpowers — whether the United States or China — that transcends simple dichotomies such as a preference for democracy or illiberalism. Outside observers can also overestimate the impact that civil society can exert on Malaysian democracy. Observers often overlook the fact that (1) few Malaysian non-government organizations (NGOs) and civil society organizations (CSOs) are independent; most are partisan and politically obedient; (2) several large CSOs are in fact umbrella organizations for militants or political proxies that have been paid to mobilize voters; (3) the influence of CSOs is limited; most of Malaysia’s most impactful reform movements have been led or supported by political parties or politicians (i.e., the Reformasi movement in 1998 and the Bersih movement since 2007). As such, any real movement for change in Malaysia will have to be a political one. Because Malaysia’s civil society lacks strength and independence, external support to CSOs needs to be provided with more diligence, as it is sometimes directed at the wrong organization or short-sighted initiatives. Long-term initiatives targeting youth civic education are urgently needed in both English and vernacular languages. General knowledge about democratic values is poor in Malaysia, and the universality of these values is sometimes contested. There is also a need for more civic education focused on equal rights (gender, sexual orientation, ethnicity, etc.), and the inclusion of minorities and vulnerable populations that often remain outside of the Malaysian policy debates and are ignored in the general discourse on democracy. Malaysia’s democracy would also benefit from stronger support for outside voices such as journalists and academic researchers. Malaysia’s low levels of civic awareness and the absence of public debates about democratic principles are partly due to censorship and self-censorship, but also to the limited training offered to journalists and/or political commentators. New media outlets like New Naratif are attempting to bridge academia and journalism, but their existence remains fragile due to both political and economic pressure. In Malaysian academia, while there are researchers conducting excellent research, they often lack external funding. This dependence on public funds tends to subject them to administrative and political constraints. 
Despite its political uncertainties, Malaysia remains a stable partner for the United States in Southeast Asia. For that reason, constant analysis and anticipation of Malaysia’s domestic political developments based on solid on-the-ground research are key. The looming economic crisis and the political chaos provoked by party rivalries and ego fights have thus far stymied all attempts at reform. The next few months, and particularly the coming election, will be crucial in determining the future of democracy in the country. |
3800536ff2c9e30ef3fc0794f865abcc | https://www.brookings.edu/articles/dirt-into-dollars-converting-vacant-land-into-valuable-development/ | Dirt Into Dollars: Converting Vacant Land into Valuable Development | Dirt Into Dollars: Converting Vacant Land into Valuable Development American cities have always been about growth. A hundred years ago, boosters organized boomtowns to exploit resources like minerals and cattle. Today, growth coalitions design New Urbanist towns to maximize value while deflecting political backlash by husbanding resources like farmland and road capacity. But from Sunbelt cities to suburbs everywhere, growth is the logic, the politics, and the policy of American places. For the past half-century, this reality has made the older, once-central cities of the Northeast and Midwest awkward anomalies. St. Louis, Pittsburgh, and Cleveland lost about half their population between 1950 and 2000. Yet even after 50 years it remains difficult for such cities to adapt to a world in which the policy problem is one of managing decline rather than growth. That failure has allowed several consequences of long-term depopulation, namely abandoned property and blight, to reach overwhelming levels in many older American cities. When depopulation and blight are addressed at all, it is piecemeal: by attacking the environmental issue of brownfields, the public safety issue of crack houses in empty row homes, the quality-of-life issue of trash-strewn vacant lots. But depopulation is so fundamental and has been so sustained in older industrial cities during this century that blight is now a problem in its own right and demands bold and comprehensive policy responses. Philadelphia, my own home town, is a powerful illustration of these concerns. Like all the nation’s older cities, it has suffered major population loss. Like them, it has been slow in coming to grips with the problem of depopulation. But in the past five years the tide may have begun turning. Philadelphia may now have the opportunity to lead the way in what I call “civic speculation.” Guided by a strategic vision of a “right-sized” city for its current population, civic leaders could leverage abandoned and vacant land and change the subject from decline through abandonment to growth through consolidation. The Good Old Days From its founding in 1683, Philadelphia grew steadily for more than 250 years. The city’s boundaries were enlarged in 1854, when what is known today as Center City was consolidated with the townships of the surrounding county. In 1860, with 565,529 residents in the consolidated city, Philadelphia reclaimed its traditional rank as the nation’s second largest city (it had slipped to fourth behind Baltimore and New Orleans after 1840). By 1900, the population of this economic and political powerhouse had more than doubled to 1,293,697 residents. And by 1950, it had increased another 60 percent to 2,071,605. With the largely undeveloped tracts of Far Northeast and Southwest Philadelphia beckoning, it seemed that the city’s population growth could continue unimpeded. But troubling signs existed, even at mid-century. A 1952 city planning report noted that many older wards had been losing population since 1920 or even earlier. These planners calculated that because the population of the city’s older neighborhoods had already peaked, the city’s housing stock was capable of accommodating nearly 2.5 million people in 1950. 
With the actual population only 2.1 million, it was clear that the city’s older neighborhoods were being left for newer housing farther north, south, and west of city hall. The city’s overall population growth was hiding this depopulation in the older neighborhoods. What no one could foresee then was the extent to which the city as a whole would begin to lose population. From 1950 to 1970, the city’s population fell about 6 percent, from 2,071,605 to 1,948,609. In the next two decades, the pace picked up dramatically. The population dropped from 1,948,609 in 1970 to 1,585,577 in 1990, a loss of 19 percent. And it has continued falling. In a widely cited statistic, Philadelphia has lost more people than any county in the United States in the past two years. Its population now hovers around 1.4 million, roughly what it was at the turn of the last century. The population loss since 1950 is related, of course, to suburbanization and the fixed political boundaries of the metropolitan region. While Philadelphia lost half a million residents between 1950 and 1990, its surrounding suburbs gained nearly four times that number. Depopulation and Blight Philadelphia’s steady, sometimes precipitous, decline since 1950 has many causes, some good and some bad, some intended and some not. Rising prosperity allowed middle-class households to purchase privacy in suburban subdivisions, while racial prejudice drove whites out of cities where minorities lived. Technological innovation made sprawling one-story business campuses more efficient than dense multi-story loft buildings, while redlining by lending institutions made stability in older neighborhoods virtually impossible. Public policies fostered the growth of modern and lower-density housing, over-subsidized growth at the suburban edge, and over-depreciated assets at the urban core. But regardless of the causes, the city’s oldest neighborhoods, outside of Center City, have suffered catastrophic residential losses. Between 1950 and 1990, one neighborhood in North Philadelphia lost nearly half its population, dropping from 210,000 to 109,000. Another lost two-thirds, down from 111,000 to 39,000 residents. Vacancy and abandonment have been extensive. As of 1992, the Department of Licenses and Inspections identified 27,000 abandoned residential buildings and 15,800 vacant residential lots. More recent estimates roughly double both figures, and the number of abandoned houses is now thought to exceed 50,000. In 1999, a study by Fairmount Ventures for the Pennsylvania Horticultural Society identified 30,900 vacant residential lots of one acre or less in the city, about two-thirds of which are privately owned. About 1,000 residential structures have been demolished every year during the 1990s, not nearly fast enough to keep long-term vacant buildings from becoming unsafe. A series of recent reports has attempted to sound a warning bell. In June 1995, the Philadelphia City Planning Commission released Vacant Land in Philadelphia, an excellent analysis of both vacancy conditions and the administrative procedures that have evolved to deal with them. The report, which has become the definitive source for the underlying state law relevant to the myriad departments and procedures affecting vacant property, makes two recommendations. First, the city needs to build an information base on vacant property that is comprehensive, timely, and capable of supporting strategic decisionmaking. 
Second, it must coordinate agencies and streamline procedures in accordance with a strategic plan for property acquisition and disposition. In September 1995, the Pennsylvania Horticultural Society with the financial support of the Pew Charitable Trusts released Urban Vacant Land: Issues and Recommendations, a state-of-the-art review of vacant land and how it is managed not only in Philadelphia, but in other pioneering cities, including Boston and Cleveland. The report describes a variety of management techniques, both short-term strategies related to greening and gardening and long-term techniques such as parcel assembly and intensive reuse. The report echoes the two key City Planning Commission recommendations. First, Philadelphia should create an integrated land records database or an inventory that is easy to access and update. Second, it should coordinate decisionmaking to assure that various city agencies are working toward common goals. By the end of 1997, the City’s Office of Housing and Community Development had issued Vacant Property Prescriptions and Neighborhood Transformations, two reports that advanced public debate by illustrating the variety of partnerships and specific projects that the city and nonprofits had pursued during the administration of Mayor Edward Rendell. Distilling lessons from that experience, the reports recommend that the city tailor programs and interventions to neighborhood-specific needs, which can vary considerably, and coordinate efforts, especially between the Philadelphia Housing Authority and the Office of Housing and Community Development. The William Penn Foundation has also supported a far-ranging series of reports under the auspices of the Pennsylvania Horticultural Society. The first was a cost-benefit analysis of remediating and maintaining the inventory of over 30,900 vacant residential lots in the city. Later reports highlighted demonstration efforts in target neighborhoods, suggested changes in city policies and practices, and developed a financing plan for citywide vacant land management. Barriers to Progress Philadelphia is now in the forefront of policy analysis and action on the issue of vacant property. But several barriers to significant and lasting progress remain. The most important is the administrative apparatus available to confront the problem of blight. The responsibility for vacant property in Philadelphia is divided among 15 public agencies. Anyone wanting to buy a property may spend weeks negotiating the maze of agencies just to apply. A developer often invests months to meet the differing agency requirements. Decisions made by one city agency can undermine those made by others. A single city block can contain homes owned by the Pennsylvania Horticultural Society, slated for demolition by the Department of Licenses and Inspections, included in a Redevelopment Authority urban renewal project, awarded an Office of Housing and Community Development grant for rehabilitation, and promised for a specific redevelopment plan by a council person. Each action, taken independently, has a profound effect on the others. The current system of land management, complete with numerous checks, balances, and mandatory waiting periods, evolved to help structure and organize a growing city. Despite a half-century of decline, the complex system continues to define the city’s approach to managing vacant property today. The 1995 City Planning Commission report exhaustively summarizes the missions and procedures of the key public agencies. 
As just one example, obtaining a certification of blight (which allows the Redevelopment Authority to condemn, acquire, and ultimately transfer a property to a new owner) involves no fewer than six city agencies. Improved interdepartmental cooperation during the Rendell administration has made transfers such as this one more efficient, cutting average time from two years to six months. But it is not enough to expedite the current approach to property disposition, which operates on an individual first-come, first-served basis and largely without reference to any strategic plan for the affected block, neighborhood, or city as a whole. To the extent that these small-scale property transfers reduce the inventory of available parcels and transfer them to individual owners, they also reduce the opportunity for assembling and consolidating parcels into larger redevelopment opportunities. Transferring a small parcel to an adjacent owner for use as a side parking lot, or to a local group for use as a community garden, will often be the appropriate disposition for a vacant lot. But these transfers should be held to some standard comparison with alternate uses rather than simply determined by who’s in line at the title window. Repeated recommendations to strengthen interagency coordination have arguably led to improvements in the past year or two but still have not created a strategic vision or public apparatus bold enough to confront the effects of 50 years of depopulation. Coordination, even if it could be achieved, is not likely to be a strong enough response. As noted, the dozen or so agencies to be coordinated evolved during an era of growth and remain oriented to the obsolete land uses that abandoned properties had decades ago. Prior use too often determines jurisdiction under this administrative regime, though it is a weak criterion for determining responsibility and authority for vacant property. To be sure, in some cases prior use has technical implications for vacant property’s future disposition (for example, remediating the contamination of some former industrial properties; preserving the historic character of some former residential properties). But 50 years of depopulation and its accumulated effects on vacancy and abandonment demand an administrative capacity that can think strategically beyond prior use and understand vacant property as a generic resource. These vacant properties must be viewed in the context of surrounding neighborhoods. In some Philadelphia neighborhoods half the lots have no houses, and half the houses left standing are abandoned. Areas that often look like Dresden after World War II must be consolidated and, in some cases, the remaining households relocated. A single authority should replace the current fragmented efforts to acquire, manage, consolidate, and dispose of the abandoned buildings and vacant land in Philadelphia. This is the crux of the policy debate now under way in the city in the first months of the new administration of John F. Street. Mayor Street has made blight removal the top priority of his first year, and a transition planning committee has endorsed the idea of a single blight authority. But the devil is in the administrative details, and right now Philadelphia is the center of a fascinating debate on both blight policy and municipal governance issues. A consolidated public authority in a city like Philadelphia should have the administrative capacity to pursue three functions. 
First, it should create a strategic plan with a citywide view of the vacant property inventory. This change of scale is essential to “right-sizing” a city’s land use to serve a much smaller population. Fairly obviously, this approach implies a triage strategy. Triage will always be difficult. But its object, it is important to note, is near-empty streets and blocks, not people or neighborhoods. Properties can be assembled with minimal relocation to improve every person’s housing quality and stabilize vulnerable neighborhoods. What is valuable is the land, not the largely obsolete buildings. Demolition, maintenance, redevelopment (almost everything) is cheaper with assembly and consolidation. Only a citywide strategic plan can motivate and sustain a “right-sizing” approach. Second, a single public authority could redevelop its consolidated inventory as market conditions warrant. The complex, and politicized, maze of fragmented public authority in Philadelphia impedes progress even when private actors want to act, whether it be a homeowner wanting to acquire an adjacent lot for parking or a supermarket chain looking for a 10-acre parcel. There is certainly demand, especially in this economy, for some sites in some locations to develop some uses. A consolidated authority can help restore the property market to places like North Philadelphia. Finally, a single authority can serve as an intentional land bank. Philadelphia has several unintentional land banks in the form of the dormant holdings of multiple public agencies. A single authority probably facilitates redevelopment. But even more important, it creates the capacity for responsibly maintaining the undoubtedly large fraction of the vacant property inventory that is unlikely to be redeveloped any time soon. That maintenance is an obligation all too easily shirked in Philadelphia today. Of course, all this administrative capacity would be pointless without financial resources. The consensus estimate in Philadelphia is that more than $750 million would be needed to fund demolition alone. To his credit, Mayor Street in his first few months in office has backed his commitment to blight removal with a new $250 million anti-blight proposal. The jury is still out on the financing and administrative details of that proposal. But it is increasingly clear that managing the effects of 50 years of decline is an issue whose time has come for Philadelphia and for the nation’s other once-central cities. |
39182ff5b8a39e9bc6f89be0f65c682e | https://www.brookings.edu/articles/drug-trafficking-from-north-korea-implications-for-chinese-policy/?shared=email&msg=fail | Drug Trafficking from North Korea: Implications for Chinese Policy | Drug Trafficking from North Korea: Implications for Chinese Policy Despite many press reports, magazine articles, and academic analyses about drug production and trafficking by the Democratic People’s Republic of Korea (DPRK or North Korea) since the early 1990s, Chinese authorities have avoided acknowledgement of this fact – possibly due to political reasons. In China’s press reports, drug trafficking from North Korea is usually referred to as trafficking from an unidentified overseas country. But in July 2004 Chinese authorities officially recognized that relatively limited quantities of North Korean drugs, mostly methamphetamine (also called “ice”), were shipped into China. In recent years, drug trafficking from North Korea has posed a more and more serious threat to Northeast China, and will be a new challenge to Northeast Asia. North Korean drug trafficking into Northeast China Methamphetamine production in North Korea is reported to have started in 1996 after heavy rains decreased income from poppy production.[1] It is believed the most methamphetamines produced in North Korea are trafficked into Northeast China, then to Shandong, Tianjin, Beijing, and other interior provinces; a smaller percentage is smuggled into South Korea and Japan, where they turn a high profit. Yanbian Korean Autonomous Pefecture and Changbai Korean Autonomous County in China’s Jilin Province, and Dandong city in Liaoning Province, along the border with North Korea, have been identified as key transit points for North Korean drugs into China. Yanbian shares a border of 522.5 kilometers with North Korea; over 1 million Korean Chinese live in the region, as well as 100,000 to 200,000 North Korean refugees. Geographical and ethnic-cultural-linguistic ties provide helpful networks for cross-border trafficking of drugs and illegal immigrants. Furthermore, drug trafficking groups from North Korea, China, and South Korea often cooperate on drug trafficking across the border. South Korea’s JoongAng Daily newspaper has even reported that a “drug trafficking triangle” has been established between them.[2] Recently uncovered cases illustrate this trend. In October 2008, Baishan city border patrol agencies seized 5.4kg of 100 percent pure ice in Changbai Korean Autonomous County.[3] In July 2010, in the celebrated “5. 20” case, Yanbian border patrol agencies arrested 6 suspects from North Korea, including drug kingpin “Sister Kim,” and several Korean Chinese; seized 1.5 kg of ice; and confiscated RMB 132,000 (about US$19,300) of drug money and two cars.[4] As China’s biggest border city on the Yalu River, opposite Sinuiju in North Korea, Dandong city in Liaoning Province is another major transshipment point for drug trafficking from North Korea into Northeast China. 
On December 23, 2004, Dandong border patrol agencies uncovered the largest drug trafficking operation from North Korea into China since this border patrol was established, seizing 13,775 MaGu tablets (in which the dominant ingredient is amphetamine) and arresting 4 suspects.[5] On February 17, 2005, Dandong law enforcement officials again seized 2,000 MDMA (ecstasy) tablets and 300 MaGu tablets from North Korea, and arrested 7 suspects.[6] Jilin Province is not only the most important transshipment point for drugs from North Korea into China, but has itself become one of the largest markets in China for amphetamine-type stimulants (ATS). Chinese scholar Cui Junyong notes that over the last three to five years, most of the ice consumed in Yanbian has originated from North Korea. In Yanji, Yanbian’s capital, there were 44 registered drug addicts in 1991, but 2,090 in 2010.[7] In all of Jilin Province, as of early June 2010, there were more than 10,000 registered drug addicts, and the provincial Public Security Agency admits that the actual figures may be five or six times more than the official data. Across China, more than 70 percent of drug addicts abuse heroin, but in Jilin Province more than 90 percent of addicts abuse new synthetic drugs, and ice in particular. Potential drug crisis in Northeast Asia Clearly, ATS from North Korea have become a threat to China in recent years. It is uncertain whether drug trafficking across the North Korea-China border is sponsored by the North Korean government.[8] But the Chinese National Narcotics Control Commission (NNCC) has recognized the increasing threat of drug smuggling and use in the region, and in 2005 it opened a “Northeastern battlefield” in the fight against cross-border drug trafficking. This strategy focuses on Jilin Province and emphasizes counternarcotics cooperation between China’s Customs and Public Security agencies on deterrence, checkpoints, and patrols along the China-North Korea border. As with other issues, including traditional and non-traditional security, Chinese counternarcotics policy relating to North Korea is often subordinated to the goal of maintaining a good overall relationship between the two countries. The Chinese government implements a relatively tolerant policy toward the cross-border drug traffic, which could be very costly in the long run. As drug abuse becomes established in Northeast China, increased demand will lead to a substantially increased supply of illicit drugs. This will stimulate drug production in North Korea, and will also attract other international drug trafficking organizations. Some have speculated that Afghan heroin smuggled into the Russian Federation will be re-trafficked into Northeast China via the China-Russia border.[9] If a northeast route for Afghan heroin is established, traffickers could also use it to target other countries in Northeast Asia. The Northeast China market has also attracted domestic drug producers and traffickers from Shandong, Jiangsu, Zhejiang, and Guangdong provinces. This poses the danger that ATS production factories from South China will be expanded and/or relocated to Northeast and North China. If this interaction between North China and South China in ATS production and trafficking is consolidated, China may become a major ATS consumer and producer, which would be a new threat to Northeast Asia. 
Finally, and somewhat paradoxically, if North Korea were to carry out a reform and opening policy in the near future, China-North Korea border trade would grow and economic ties would be strengthened. As a result, the Chinese government would not only relax its control of the border area but also actively facilitate cross-border commerce. In such an environment, the volume of drug trafficking across the border into Northeast China and other Northeast Asian countries would quickly increase unless the DPRK were to take strict measures to eliminate drug production. China currently faces similar dilemmas in its southwestern and northwestern border regions. Policy recommendations Though Chinese authorities have begun to confront the issue, their measures have not been able to stop or significantly interdict North Korean drug trafficking into China. What new steps should be taken to deal with this problem, or at least to avoid a new tragedy in Northeast Asia that might originate in North Korea? First, the Chinese government should actively urge North Korea to undertake serious investigations into the scope and patterns of ATS production in North Korea and subsequent trafficking across the border. Whenever feasible, the Chinese government could provide technology and equipment assistance to North Korea, in order to help it control and ultimately eliminate drug production and trafficking. Second, the Chinese government should promote the establishment of a regional counternarcotics cooperation mechanism and intelligence sharing system. Initially, China, Russia, South Korea, Japan, and the U.S. could cooperate to stem outbound drug trafficking from North Korea and potentially from Russia, and thereby work to prevent the deterioration of drug problems in Northeast Asia. Ideally, North Korea itself could also be involved in this effort in the future. Third, Chinese central, provincial, and local governments should strengthen counternarcotics law enforcement, border checkpoints, and border control in Northeast China, and establish a network to monitor illegal activities related to drug trafficking in the Korean Chinese and Uyghur Chinese communities, which are often located in the border regions that see the heaviest drug trafficking in China. Fourth, Chinese authorities should enhance their education of the public on the harm caused by amphetamine-type stimulants and establish advanced drug treatment centers for ATS addicts. Without efforts to reduce the demand for illicit drugs, actions aimed at reducing the illicit drug supply will be doomed to failure. |
bf0a8c3fc66c5bc91f52f925240a2d61 | https://www.brookings.edu/articles/economic-reform-and-military-downsizing-a-key-to-solving-the-north-korean-nuclear-crisis/ | Economic Reform and Military Downsizing: A Key to Solving the North Korean Nuclear Crisis? | Economic Reform and Military Downsizing: A Key to Solving the North Korean Nuclear Crisis? U.S. policy toward North Korea is in need of a major overhaul. Six-party negotiations in Beijing in late August did not break down, but neither did they achieve any substantive progress. Meanwhile, North Korea continues to develop a nuclear arsenal right before our eyes. We propose an ambitious plan that would get to the heart of the matter—North Korea’s broken economy and other aspects of its failed society—by proposing a grand bargain to Pyongyang. North Korea would be offered a new relationship with the outside world and substantial aid if it would denuclearize, reduce military forces, and move in a direction similar to that of Vietnam and China in recent decades. If the plan failed, Washington would have a huge consolation prize—having seriously attempted diplomacy, it would then be in a much stronger position to argue to Seoul, Tokyo, and Beijing that tough measures were needed against North Korea. Despite some impressive successes, notably the 1994 Agreed Framework capping North Korea’s nuclear activities, the Clinton policy of engagement does not offer a promising route. That approach was effective for a time but was somewhat too narrow and tactical, focusing largely on the crisis du jour. The approach appears ultimately to have encouraged in North Korea’s repressive leaders a worsening habit of trying to extort resources from the international community in exchange for cutting back its dangerous weapons programs. President Bush is impatient with this sort of attempted blackmail. But his apparent policy preference—insisting that North Korea immediately stop its nuclear activities and severely limiting talk of possible incentives to Pyongyang until it does—may fail. To date, it clearly has been failing, as the North Korean situation has changed from a serious security problem to a major crisis on his watch. North Korean leaders tend to become more intransigent when their backs are against the wall, and they are clearly willing to see their own people starve before capitulating to coercion. Pushing North Korea to the brink may also increase the odds that it will sell plutonium to the highest bidder to rescue its crumbling economy and preserve its power. North Korea is now well into its second decade of poor economic performance, and its leaders do not seem to know what to do. They have tried modest reforms—price liberalization, special economic zones, and limited business transactions with South Koreans—with little success. They have not yet been prepared to take the risks associated with China-style economic reforms, and their instincts still push them toward high military spending. The military, at 1 million troops out of a population of 22 million, is the largest in the world in per capita terms and 10 times the global average. North Korea devotes a far greater share of gross domestic product to its armed forces than any other country, and its forces arrayed near the demilitarized zone with South Korea are the densest concentration of firepower in the world by far. 
We propose a plan that would address the nuclear weapons crisis that has so dominated headlines in recent months—and would also go much further to recast North Korea’s overly militarized economy (see box). Its centerpiece would be a combination of deep conventional arms cuts, economic reform, and external economic assistance aimed at reform. President Bush himself, shortly after taking office, suggested the need for conventional arms cuts in North Korea in exchange for continued aid and diplomatic relations. Though the administration did not follow up on the president’s suggestion, the idea is worth pursuing seriously. Aside from the security value of reducing North Korea’s huge military presence on the Korean peninsula, a fleshed-out plan along those lines could reduce the enormous economic burden that pushes North Korea to provoke nuclear crises to extort resources from the international community. Our plan to prod North Korea toward demilitarization and serious economic reform and to provide the resources to give reform a real chance offers no guarantee of success. But it also has few downsides. Failure would leave us no worse off than we are today. In fact, it would probably improve our hand in convincing South Korea, Japan, and China to adopt a tougher policy toward North Korea as needed. Success would admittedly help a totalitarian regime stay in power. But the alternatives—further economic collapse attended by misery and starvation, war, or the sale of nuclear weapons if the nation finds itself on the verge of collapse—are worse. And under our plan, the regime would be radically transformed even if it clung to power. The Rise and Fall of the North Korean Economy When the Korean peninsula was divided after World War II and set free from Japanese colonization, North Korea had a relatively strong economy. It boasted three-fourths of the peninsula’s mining production, at least 90 percent of its electricity generation capacity, and 80 percent of its heavy industry. The South, with a better climate, was largely an agricultural region. North Korea quickly nationalized major industries and drove up production. Its economic success probably contributed to President Kim Il Sung’s confidence that he would win a war against the South, which he unleashed in 1950. North Korea continued to outperform South Korea immediately after the war. But the seeds of its eventual economic deterioration were soon sown. It collectivized agriculture in 1953 and increasingly invested in heavy industry, much of it defense related, while turning inward and autarkic. Kim Il Sung’s juche concept of self-reliance kept North Korea isolated from the outside world and deprived it of foreign trade. By the late 1960s, North Korea was devoting 15–20 percent of its GDP to the military, and its economic growth gradually slowed. During the early 1970s Pyongyang tried to boost output by borrowing capital on international markets and purchasing whole factories from abroad. But oil price shocks and global stagflation thwarted that strategy. Matters thereafter went from bad to worse. The nation came to depend more and more on economic relations with the Soviet bloc, importing arms and exporting minerals, textiles, steel, and other goods. It also increased its own arms exports, largely to Iran. When the Soviet Union dissolved, North Korea lost access to most of its markets and also to subsidized Soviet oil. 
China provided coal and oil on favorable terms to stanch the flow of North Korean economic refugees across its borders, but even so, by the end of the decade North Korean energy resources were about half what they had been in 1990. In the past 13 years, North Korea has suffered continuous economic contraction. GDP and per capita income have been cut roughly in half. Alternating periods of drought and flooding, together with a broken political system, have exacerbated agricultural problems. Famine has killed hundreds of thousands of North Koreans despite food aid from abroad. Attempts at Reform For the better part of 20 years, North Korea, like China, has shown an interest in finding a “third way” between communism and capitalism. But unlike China, it has not shown a serious commitment to reform and has achieved little success. As its economy has worsened, North Korea has grown ever more dependent on extortion, drug trafficking, counterfeiting, whatever arms exports it can still find markets for, and cash remittances from overseas North Koreans. Its foreign trade is roughly half what it was in the 1980s. About half of all imports come from China, much of the rest from Japan and South Korea. Likewise, exports go principally to Japan, South Korea, and China. North Korea also still receives aid—mostly food and energy—averaging up to $1 billion a year from China, South Korea, Japan, the United States, and the European Union. Given its poor track record and the leadership’s continuing communist rhetoric, North Korea’s dedication to economic reform today is weak. Its leaders surely fear that liberalizing the economy could lead to political liberalization—and thus their own loss of power. Still, President Kim Jong Il, Kim Il Sung’s son and successor, and other North Korean leaders seem to be searching for alternatives. Kim Jong Il has visited China at least three times since May 2000, traveled to economic centers in Beijing, Shanghai, and Shenzhen, and received briefings from Chinese economists. Korean Workers Party functionaries have also met with their Chinese counterparts to explore reform at the working level. The challenge for the United States and its chief regional allies in this matter—South Korea, Japan, and China—is to give North Koreans a push to take reform more seriously. North Korean leaders have made three types of reforms to date. They have created special economic zones, in which they have encouraged foreign investment. They have allowed South Korean tourists to visit the North, demanding expensive surcharges for the privilege. They have also recently liberalized prices, increased wages, and begun to tolerate limited private agriculture as well as an expansion of farmers’ markets where goods can be bought and sold outside the rigidities of the command economy. North Korea established its primary investment-oriented economic zone in the Rajin-Sonbong area, also known as the Tumen River delta, in the early 1990s. More than 700 square kilometers in area and deliberately removed from much of the rest of the country, it benefits from relatively good natural port potential. The terms granted to foreign investors here are more generous even than those granted by China and Vietnam in their similar zones, at least on paper. Foreign firms can own all the capital invested in a given project, repatriate profits, access the region without visas, enjoy guarantees against nationalization of their assets, and possess 50-year leases on land. 
So far, however, the region has yet to attract much capital. By the end of the 1990s, foreign investments totaled no more than $34 million, and progress appears not to have accelerated. Problems include poor infrastructure, the great distances from Pyongyang and other cities, and high wage rates. Unresolved geopolitical tensions also hinder investors. Whatever the region’s ultimate prospects, recent times have seen a decline in South Korean ventures in the North. Would-be foreign investors voice continued frustration with political and economic conditions in the region and the country as a whole. In other moves to increase foreign private investment—and win desperately needed support from international financial institutions—North Korean leaders have lobbied the United States to lift trade sanctions and to remove North Korea’s name from the U.S. list of state sponsors of terrorism. They have also attempted rapprochement with Japan. But these efforts too have been generally unsuccessful to date. Other reform efforts have also faltered. Although South Koreans pay up to several hundred dollars a person to visit certain important sites in North Korea, much of the cash appears to have wound up in the hands of the regime for its own uses, not for nationwide development efforts. And although in 2002 North Korea finally liberalized prices and raised wages in much of the country, the nation’s distorted industrial production base, poor trade balance, and limited natural resources have hamstrung the reform effort. Inflation has become severe, and some industries are reportedly unable to pay workers the promised higher wages. Prospects for Recovery and Reform Against this backdrop, could a serious program of economic reform buttressed by outside resources succeed? One big plus is the nation’s workforce. As its South Korean counterpart has demonstrated, Korean culture, with its emphases on hard work and group effort, is capable of remarkable things. And North Korea’s population is reasonably well educated, despite heavy doses of propaganda in school and decades of isolation from the outside world. A transition period would be necessary, but on balance North Korea’s human raw materials are impressive. China’s experience also provides an important model for North Korea, as its leaders seem to recognize. The PRC’s ability to take an economy that Mao had largely destroyed and, without losing political power, turn it into one of the fastest-growing countries in the world must hold enormous appeal for North Korea’s leaders. To be sure, reform could be harder in North Korea. China’s reforms began only after Mao had departed the scene, whereas Kim still rules. China was able to accomplish many reforms by making agriculture more efficient, taking workers off the land and putting them into industrial activity. Because North Korea already has a large share of its workers in industry and a relatively small share in agriculture, its leaders would have to take workers from unproductive industry to devote to more productive nonfarm enterprises. But North Korea has available another huge source of productive manpower—its military. Downsizing its armed forces would be a key to reform, for it would free the nation’s youngest, strongest, best workers for important tasks in the years ahead. Many economists are positive about what reform could accomplish. 
In Avoiding the Apocalypse, Marcus Noland shows that North Korea’s real GDP might be expected to grow anywhere from 60 percent to almost 100 percent under various assumptions about reform. As he puts it, “There are solutions to North Korea’s economic problems…. The real issue is whether reform would be compatible with the continued existence of the Kim Jong Il regime….” On this latter point, Noland is agnostic, as are we. But that is no argument against proposing reform as part of a grand bargain to Pyongyang and trying to negotiate an acceptable arrangement. Many scholars and officials in Northeast Asia have concluded that North Korea has little choice but to try such reforms. Our plan would encourage leaders in Pyongyang to get the message and act on it. The Role of External Aid in Reform Given the North Korean regime’s apparent interest in economic reform, but uncertainty about how to make it work while maintaining political control, how can outside powers play a constructive role? Clearly no guarantees for success exist. But the downsides to trying seem few. The cost to the United States, Japan, and South Korea would be small, especially when weighed against the security implications of the likely alternatives—either a collapsing North Korea possibly willing to sell fissile materials abroad or war on the peninsula. Successful reform would gradually change North Korean society, making life better for citizens while forcing the nation’s leaders to modify at least some of their ways. Failure would leave the United States no worse off than it would have been otherwise, especially since the North Korean nuclear program would have been capped in the interim in any event. (This would be a precondition to any negotiation, along with a U.S. pledge not to use force against North Korea while talks continued and a resumption of U.S. fuel oil shipments to North Korea.) Outside aid would be provided over time, rather than in a lump sum. There would be a strong presumption that the aid would continue from year to year, even in the face of setbacks in reform. But Washington and its allies would retain unspoken leverage to keep Pyongyang complying with the plan and gradually expanding economic reforms. The aid efforts would serve as a form of pilot project, while also giving leaders in Pyongyang confidence that they could manage reform as it gradually spread countrywide. To the extent that North Korea allowed nationwide aid efforts and continued to support the economic reforms needed to make aid work, assistance would expand. In the first couple of years, aid would focus on improving infrastructure, mainly in the special economic zones. Implementing such projects should be relatively straightforward. Broadening the aid effort nationwide would require North Korea to accept a greater international presence on its territory and more changes to its educational, agricultural, public works, and health care sectors. China could mitigate North Korean fears by providing most of the on-the-ground development experts, with funding itself coming in large measure from Japan, South Korea, and the United States. Initial success in the special economic zones might also increase North Korea’s confidence in the package deal. Above and beyond humanitarian relief and energy assistance, North Korea would probably need an average of roughly $2 billion a year for a decade to embark on the path of economic recovery. 
In per capita terms—roughly $50–$75 a year—the total is commensurate with that given such development success stories as Taiwan and South Korea. There is good reason to think that Japan might provide much of this assistance as a form of reparations for its colonial occupation of Korea until 1945. Japan provided $500 million to South Korea during the 1960s; adjusting that number to account for inflation and economic growth would lead to aid in the range of $5–$10 billion. Other outside assistance would also be critical. As an expression of its good faith and its commitment to improving relations, the United States would have to go beyond its current humanitarian aid, roughly $200 million a year. It would also lift all trade sanctions and provide funds to develop North Korea’s infrastructure. Its annual development aid to North Korea might reach roughly $300 million a year. The total aid—about $500 million annually—would be well below what it gives Israel and Egypt and roughly comparable to the amount it provides to the next tier of U.S. aid recipients—Jordan, Afghanistan, Pakistan, and Colombia. The United States, one of the least generous aid providers among the major industrial economies as a share of its national wealth, can certainly afford this expansion in aid to North Korea. South Korea and China would clearly be critical players too. Both are already providing more aid to North Korea than the United States is. South Korea, which has largely recovered from the 1997 Asian financial crisis, should be able to provide much more assistance under this type of radical overhaul in North-South relations. But its private sector would be the real growth engine, investing in North Korea on a major scale over time. Aid would largely lay the economic groundwork for this private investment. China would help North Korean leaders learn how to create a mixed economy that retained command features in some areas but enterprise zones in others, one that gradually carried out further price liberalization nationwide. Prospects for Success The prospects for this aid effort are unclear. But it has a good chance of succeeding if North Korea wants to make it work. Expectations, of course, must be reasonable—North Korea need not become another South Korea, or even another China, anytime soon. Vietnam might be a better near-term model. As first priorities, North Korea needs to fix its economy enough to take care of the basic survival needs of its people, get out of its extortionate habit of trying to use dangerous weapons programs to gain hard currency, and stop counterfeiting and drug running. More broadly, it must accept a vision for constructive engagement with the international community. By convincing Pyongyang to do so, the United States and its partners can use aid to achieve much better national security. The benefits of the aid effort would go beyond its immediate prospects for economic success. By uniting the major powers of Northeast Asia in pursuit of a common vision for the Korean peninsula, it would harmonize the interactions of Washington, Seoul, Tokyo, and Beijing as they face the simmering nuclear crisis. These four capitals have had a difficult time uniting around any Korea policy, and their confusion and open dissension not only have hurt the prospects for collaboration but also complicate coordinating policy in case things go badly wrong and more dire measures need to be considered. 
A combination of conventional force reductions and major economic reform initiatives strikes many observers as too much to add to the North Korea agenda. But more limited engagement has failed. Focusing on North Korea’s nuclear and missile programs holds little appeal for President Bush, who feels he is being blackmailed by Pyongyang’s words and deeds in these areas. And without a broader reform effort, North Korea will remain a broken economy whose leaders will almost surely continue to resort to extortion—or worse—given their stark lack of alternatives for gaining hard currency. President Bush’s initial instincts that North Korea needed to reduce its threatening conventional military forces if it wished more aid and diplomatic relations with the United States were on the mark. It is time now to translate that view into a comprehensive policy proposal. |
7a60444a342a9e7ede4459216a7cc8ce | https://www.brookings.edu/articles/europes-lost-decade/ | Europe’s Lost Decade | Europe’s Lost Decade Europe’s role in world affairs over the next five years will be determined more by how it has handled the euro crisis and challenges to European integration than by its external environment or bureaucratic efforts to forge a common foreign and security policy. During the past five years, analysts have concluded that Europe faces three possible futures. The euro could collapse, Europe could take a great leap towards fiscal and political integration or the continent could ‘muddle through’. In 2012 and 2013, the verdict came in. It is very unlikely that the eurozone will collapse in the next few years and a major leap forward in integration is off the table. We are left with muddling through. But this third scenario has served as a catch-all to describe everything except collapse and unity. Very little work has been done to explore what it actually means. Muddling how? Through to what? The term is something of a misnomer because it incorrectly suggests that Europe is making its way out of its predicament, however inefficiently. The evidence provides no such assurance. Based on current policy and its likely effects, we are looking at a lost European decade of economic stagnation – low growth, high unemployment, zombie banks and vulnerability to exogenous shocks – which will sap Europe’s strength, heighten political tensions about the future of the eurozone and European Union, and cause Europe to play a diminished role in world affairs. To escape this scenario, Europeans must dismiss muddling through as an acceptable alternative to collapse. Instead, it should be recognised for, and treated as, what it is: one of two worst-case scenarios that should be avoided, if at all possible. European policymakers must consider radical steps to escape a lost decade, and the United States should assist Europe in this endeavour. This essay explains why the greater unification and collapse scenarios have failed to materialise, and identifies the characteristics of a prolonged period of stagnation. It considers the impact of stagnation on European integration, the implications for Europe’s global role and what must be done to escape a lost decade. Editor’s note: “Europe’s Lost Decade,” by Thomas Wright received the 2013 Palliser Essay Prize, an annual award in honor of Sir Michael Palliser (1922–2012), former chairman of the council and vice-president of the International Institute for Strategic Studies. |
d37d81ffe4b0e52786fedf93d9e4e8bc | https://www.brookings.edu/articles/fabulous-formless-darkness-presidential-nominees-and-the-morass-of-inquiry/ | Fabulous Formless Darkness: Presidential Nominees and the Morass of Inquiry | Fabulous Formless Darkness: Presidential Nominees and the Morass of Inquiry The White House wants to know what real estate you or your spouse now own. It also wants a list of properties you and your spouse have owned in the past six years but don’t now. The FBI wants to know about properties in which you have an interest. Presumably the properties you might have an interest in include more than those you own outright. Drop the spouse and drop the past six years. The U.S. Office of Government Ethics wants you to report real properties that you have sold or bought. It also wants you to list real estate assets currently held, as well as any you have sold that made you at least $200. Drop the past six years, but add the past two. Skip the properties you own but have not bought recently. Add your spouse to the mix. Add any dependent children. Then set the values of the transactions within one of eleven ranges. A Senate committee wants to return to the White House question of ownership, drop the spouse, drop the dependent children, take the FBI time frame, drop the past six years, then drop the two years, forget about sales and acquisitions, drop the value ranges. But add a specific value to each of the properties reported. Though W. B. Yeats had in mind the primordial chaos of mythology when he penned my title phrase, the Irish poet could well have been speaking of the inquisition that U.S. presidential appointees face in securing a post in the federal government. Over the past 30 years, the process by which the president’s nominees are confirmed has become an increasingly murky fen of executive branch and Senate forms, strategic entanglements, and “gotcha politics.” According to the 1996 Task Force on Presidential Appointments assembled by the Twentieth Century Fund, the appointment process has discouraged and demoralized many who would work in a presidential administration. A recent survey of former appointees from the past three administrations released by the Presidential Appointee Initiative elicited such descriptions of the process as “embarrassing,” “confusing,” and “a necessary evil.” The PAI study concluded that “the Founders’ model of presidential service is near the breaking point. Not only is the path into presidential service getting longer and more tortuous, it leads to ever more stressful jobs. Those who survive the appointment process often enter office frustrated and fatigued.” Both the Twentieth Century Fund’s task force and the Presidential Appointee Initiative report called for finding ways to diminish the blizzard of form filings. This article explores what such efforts might entail. It describes the different inquiries, identifying the general areas of scrutiny, specific questions and their variants, and the array of relationships between these questions. It demonstrates the degree of commonality in areas of scrutiny and across forms. And it assesses three potential approaches to reform, concluding that two strategies seem most effective. The Formless Darkness Anyone nominated for a position requiring Senate confirmation must file four separate forms. The first, the Personal Data Statement (PDS), originates in the White House and covers some 43 questions laid out in paragraphs of text. 
Applicants permitted by the White House to go on to the vetting stage fill out three other forms. The first, the Standard Form (SF) 86, develops information for a national security clearance investigation, commonly called the “FBI background check.” The SF-86 has two parts: the standard questionnaire and a “supplemental questionnaire” that repackages some questions from the SF-86 into broader language often similar though not identical to questions asked on the White House PDS. The second additional questionnaire, SF-278, comes from the U.S. Office of Government Ethics (OGE) and gathers information for financial disclosure. It doubles as an annual financial disclosure report for all federal employees above the rank of GS-15. For most nominees, the third additional form comes from the Senate committee with jurisdiction over the nomination. Having returned each of these four forms, some nominees will receive a fifth questionnaire, again from the Senate committee of jurisdiction, with more specific questions about the nominee’s agency or policies it implements. While nominees complain about several aspects of the process, they regularly and uniformly express frustration with the repetitive and duplicative questions. Indeed, nominees leave the impression that the forms contain nothing but repetitive inquiries. Although the problem is not that severe, the degree of repetitiveness does represent an undue burden. As indicated at the outset, for example, a presidential nominee is obliged to muster information on real estate property on four forms involving three separate time periods, three separate classes of owners, and at least two separate types of transactions—providing essentially the same information four times, but sorted each time in a different way.
Table 1: How Repetitive Are the Questions?
(columns: nonrepetitive questions / repetitive questions / total questions / percent repetitive)
Personal & family background: 39 / 22 / 61 / 36
Professional & educational background: 22 / 39 / 61 / 64
Tax & financial information: 11 / 21 / 32 / 66
Domestic help issue: 1 / 0 / 1 / —
Public & organizational activities: 2 / 7 / 9 / 78
Legal & administrative proceedings: 10 / 25 / 35 / 71
Miscellaneous: 31 / 3 / 34 / 9
Totals: 116 / 117 / 233 / 50 (avg.)
Source: White House Personal Data Statement, Standard Form 86, Standard Form 278, and a representative Senate committee questionnaire.
Measuring Repetitiveness Just how repetitive are the forms? This section tackles that question, first identifying the different levels of repetitiveness and then assessing the distribution of repetitiveness over the different categories of inquiry pursued in the questionnaires. The questions fall into three repetitiveness categories based on how much common information they require. Identical questions (for example, “last name”) inquire into the same subject without varying the information elicited. Repetitive questions (for example, the real property questions) request information on the same subject but vary it along at least one dimension. And nonrepetitive questions (for example, the “nanny-tax” question asked only on the White House PDS) seek different information. On the four forms mentioned (one a representative Senate committee questionnaire), nominees must respond to 233 questions. They must answer 116 nonrepetitive questions (those without an analog) and 99 repetitive questions (those with analogs). They regularly repeat the answers to 18 identical questions. 
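The arithmetic behind these counts is simple division of each topic's repetitive count by its total; the short sketch below is a minimal illustration (in Python, not part of the original study) that recomputes the percent-repetitive column of table 1 and the overall share from the figures reported above.
```python
# Illustrative only: recomputes the percent-repetitive figures in table 1.
# Per-topic (nonrepetitive, repetitive) counts are taken directly from the table.
table1 = {
    "Personal & family background": (39, 22),
    "Professional & educational background": (22, 39),
    "Tax & financial information": (11, 21),
    "Domestic help issue": (1, 0),
    "Public & organizational activities": (2, 7),
    "Legal & administrative proceedings": (10, 25),
    "Miscellaneous": (31, 3),
}

for topic, (nonrepetitive, repetitive) in table1.items():
    total = nonrepetitive + repetitive
    pct = 100 * repetitive / total
    print(f"{topic}: {total} questions, {pct:.0f}% repetitive")

nonrep_total = sum(n for n, _ in table1.values())  # 116
rep_total = sum(r for _, r in table1.values())     # 117 (99 repetitive + 18 identical)
grand_total = nonrep_total + rep_total             # 233
print(f"Overall: {100 * rep_total / grand_total:.0f}% of {grand_total} questions have an analog elsewhere")
```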
Thus half of the questions nominees answer have some analogs elsewhere while slightly less than half have no analogs anywhere. Table 1 shows how questions are distributed across the seven topics used in the White House Personal Data Statement—personal and family information, profession and education, taxes and finances, domestic help, organizational activities, legal and administrative activities, and miscellaneous. More than a third of the questions cover personal and family background. This large share derives primarily from the detailed background information required on the SF-86. Most of the remaining questions focus on professional and educational achievement—much emphasized by the PDS and the FBI background check—and legal entanglements. Table 1 also reports the degree to which a category includes repetitive questions. One potentially misleading result, however, should be noted. Although the personal and family background category has a repetitiveness rate of 36 percent, it is not as burdensome on nominees as might appear, primarily because it contains almost all the identical questions (15 of the 18 asked) found across the four forms, and those questions tend to focus on basic information such as name and telephone number. This category also accounts for the largest number of separate questions (39). One prescription for reducing repetitiveness in this category, then, could simply be to reduce the contact information required of nominees. The greatest repetitive burden occurs on three topics: professional and educational background (64 percent over 61 questions), tax and financial information (66 percent over 32 questions), and legal and administrative proceedings (71 percent over 35 questions). Association with employers and potential conflicts of interest constitute a classic example of repetitiveness. Everyone involved in vetting nominees wants to know about potential conflicts of interest embedded in the nominee’s professional relationships. Patterns of repetitiveness in reporting conflicts of interest resemble those found in reporting property: multiple reporting periods, multiple subjects, and multiple types of information. Real property, of course, is a classic example of the kinds of repetitiveness found under the rubric of tax and financial information. The level of repetitiveness under the rubric of legal and administrative proceedings seems particularly telling because, as noted, the Office of Government Ethics asks no questions about legal entanglements. The repetitiveness results almost exclusively from the FBI’s tendency to turn a single general question from the PDS into multiple specialized variations. For example, while the White House asks about arrests, charges, convictions, and litigation all in one question, the FBI asks a series of questions covering separate classes of offenses and case dispositions: felonies, firearms, pending charges on felonies, courts martial, civil investigations, agency procedures, and so on. The FBI background check also changes the time period from that used on the PDS. Strategies for Rescuing Nominees The informational burden on nominees can be eased by reform in three directions—by narrowing the scope of inquiry, by cutting redundancy, and by reconsidering strategic institutional imperatives. Ask Fewer Questions Reducing the scope of inquiry would be most straightforward. Because fewer than half of all questions asked of nominees are repetitive, reform could properly focus on reducing the number of unique questions. 
Yet of the 116 questions having no counterpart elsewhere, exactly half (58) are on the FBI background check; more than half of those (37), or a third of the total, involve personal and family background. They establish a host of background characteristics presumably necessary to trace an individual’s identity, including basic descriptors like “height” and “hair color” and spouse citizenship. The only questions that might seem superfluous require information on the nominee’s previous marriages and descriptions of adults who reside with the nominee but are not part of the immediate family. It does not seem likely that trying to ask fewer questions will reduce the burden on nominees, except where authorities are willing to challenge the basic techniques used in carrying out a background investigation. One possible reform in this area would be to transfer basic background information on a nominee before the FBI conducts its investigation. The administration would request a name search on the nominee from the government’s files, transfer the results to the appropriate forms, then hand the forms to the nominee to check, amend, and complete. At that point the background check would begin in earnest. This approach would not only reduce the burden on nominees but also reduce the time the FBI spends retracing its earlier investigatory steps.
Table 2: How Repetitive Are the Questions after Reform?
(columns: nonrepetitive questions / repetitive questions / total questions / percent repetitive after reform / percent repetitive before reform)
Personal & family background: 39 / 19 / 58 / 33 / 36
Professional & educational background: 22 / 11 / 33 / 33 / 64
Tax & financial information: 11 / 6 / 17 / 35 / 66
Domestic help issue: 1 / 0 / 1 / — / —
Public & organizational activities: 2 / 7 / 9 / 78 / 78
Legal & administrative proceedings: 10 / 7 / 17 / 41 / 71
Miscellaneous: 31 / 3 / 34 / 9 / 9
Totals: 116 / 53 / 169 / 31 (avg.) / 50 (avg.)
Source: White House Personal Data Statement, Standard Form 86, Standard Form 278, and a representative Senate committee questionnaire.
Reduce Repetitiveness Reform could also accommodate nominees by reducing repetitiveness, as shown in table 2. Taking this approach increases the number of identical questions by smoothing the questions asked across forms, and it may involve changing congressional mandates. Among the repeated questions, three-quarters require nominees to reshape answers to previous questions. The real property questions described earlier are a perfect example. Nominees must answer six separate though similar questions. Settling on a single question—using the OGE approach, for example—instead of on six, would cut the percentage of repetitiveness in the tax and financial category by 47 percent, from 66 percent to 35 percent, while cutting the number of questions in this category almost in half. To create one common question, the four institutions could rely on the broadest range of information required on any dimension involved in a topic. On the real property questions, for example, all institutions could settle on the longer time periods of the White House, the broader definition of subjects used by the FBI, and the broader notion of ownership inherent in the FBI’s term “interest.” In the end, this reform reduces the burden on nominees by affording them a standard format in which to provide information. Rethinking questions about professional relationships could also help. At least ten separate questions involve connections between the nominee and corporations and other institutions. 
Like the questions on property, they vary by time period, the type of organizations involved, the level of connection to the organization necessary to report, the level of compensation triggering a report, and so forth. Reform here could reduce the number of questions on conflict of interest from ten to, say, three. Other changes could cut the number of questions about education, plans for post-government compensation, and foreign representation. Consolidation in these three groups could reduce eight questions to three. In all, reformulation could decrease repetitiveness in this area by half—from 64 percent to 33 percent. Under the last topic with serious repetitiveness, legal and administrative proceedings, reformulation could eliminate all but seven questions, reducing repetitiveness from 71 percent to 41 percent. Overall, reformulating these questions would reduce repetitiveness in the executive branch forms from almost half of all questions to less than one-third—a very substantial improvement of 38 percent. The difficulty of this approach is that the questions generated by both the FBI in SF-86 and the OGE in SF-278 have substantial institutional justification. In the former, the FBI can rely on expertise about the nature of the investigative process to suggest that it must generate sufficient data to discover security risks. In the latter, the SF-278 has a substantial statutory basis for its inquiries. Changing the form requires changing the statute. Reconsidering Institutional Imperatives A final reform strategy would be for one of the four institutions to surrender control over information and rely instead on information already gathered by others. The White House has the best opportunity to take this approach. Because it initiates the process, it can afford to limit its own information requirements by securing the information delivered to the other agencies. Instead of offering its own form, the White House could rely on the fact that it can see how applicants fill out their SF-86 and draft their SF-278 as part of the initial negotiations that identify eventual nominees. Based on those drafts, the White House would then decide whether to carry through its intent to nominate, thereby triggering the appointment vetting process. Because almost all the PDS questions are repeated on other forms, this strategy would reduce repetitiveness to around 28 percent, slightly less than the more complicated strategy outlined earlier. For its own deliberations, the White House would not lose any relevant information. Except for the “nanny question,” the PDS provides information secured on other forms. Because the PDS does not provide information on any “decision criteria” unique to White House concerns, eliminating it would not adversely affect White House considerations. The Senate Forms Except for a few questions requiring the nominee to list publications and honors, Senate committee questionnaires differ from executive branch forms in two important respects. First, they attempt to commit nominees to resolving “constitutional” conflicts in the Senate’s favor. For example, committee questionnaires regularly require nominees to commit to reporting to the Senate on policy decisions that vary from legislative policy. No amount of reform will likely reduce the interest of the Senate in committing nominees to follow committee dictates on policy differences. 
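One point worth making explicit: the 47 percent and 38 percent improvements quoted above are relative reductions in the repetitiveness rate, not percentage-point drops. The short Python sketch below is offered only as an illustration of that arithmetic (it is not part of the reform proposal itself); it recomputes both figures from the counts in Table 2, using the rounded before-reform rates of 66 and 50 percent reported in the table and the text.

# A minimal sketch of the arithmetic behind the reform claims, assuming the
# quoted improvements are relative (not percentage-point) reductions.

def relative_cut(before_pct, after_pct):
    """Relative reduction, in percent, from before_pct to after_pct."""
    return 100 * (before_pct - after_pct) / before_pct

# Tax and financial information: 66 percent repetitive before reform,
# 6 of 17 questions repetitive after reform.
after_tax = 100 * 6 / 17                              # about 35 percent
print(round(after_tax), round(relative_cut(66, 35)))  # 35 47

# All executive branch questions: about 50 percent repetitive before reform,
# 53 of 169 questions repetitive after reform.
after_all = 100 * 53 / 169                            # about 31 percent
print(round(after_all), round(relative_cut(50, 31)))  # 31 38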
Second, many Senate committees require more detailed financial information than the executive branch questionnaires, in the form of “net worth” statements. The issue here has become the necessity of requiring information about net worth when it does not clearly indicate the kinds of relationships typically understood to create conflicts of interest. The Relative Ease of Reform Extracting nominees from the formless darkness of the appointment questionnaires requires only a few simple changes in the requirements imposed on them for information. As noted, streamlining information across forms, taking the highest and broadest levels of variation as the focus, greatly reduces repetitiveness without severely curtailing the information made available. Without even attempting to assess either what information is necessary to select the president’s team or whether decision criteria are appropriate, the government can make big improvements and thereby begin to reverse the unwholesome atmosphere for potential appointees. |
647010c966ab20969ea12527af80841d | https://www.brookings.edu/articles/family-structure-the-growing-importance-of-class/ | Family Structure: The Growing Importance of Class | Family Structure: The Growing Importance of Class In 1965, Daniel Patrick Moynihan released a controversial report written for his then boss, President Lyndon Johnson. Entitled “The Negro Family: The Case for National Action,” it described the condition of lower-income African American families and catalyzed a highly acrimonious, decades-long debate about black culture and family values in America. The report cited a series of staggering statistics showing high rates of divorce, unwed childbearing, and single motherhood among black families. “The white family has achieved a high degree of stability and is maintaining that stability,” the report said. “By contrast, the family structure of lower class Negroes is highly unstable, and in many urban centers is approaching complete breakdown.” Nearly fifty years later, the picture is even more grim—and the statistics can no longer be organized neatly by race. In fact, Moynihan’s bracing profile of the collapsing black family in the 1960s looks remarkably similar to a profile of the average white family today. White households have similar—or worse—statistics of divorce, unwed childbearing, and single motherhood as the black households cited by Moynihan in his report. In 2000, the percentage of white children living with a single parent was identical to the percentage of black children living with a single parent in 1960: 22 percent. What was happening to black families in the ’60s can be reinterpreted today not as an indictment of the black family but as a harbinger of a larger collapse of traditional living arrangements—of what demographer Samuel Preston, in words that Moynihan later repeated, called “the earthquake that shuddered through the American family.” That earthquake has not affected all American families the same way. While the Moynihan report focused on disparities between white and black, increasingly it is class, and not just race, that matters for family structure. Although blacks as a group are still less likely to marry than whites, gaps in family formation patterns by class have increased for both races, with the sharpest declines in marriage rates occurring among the least educated of both races. For example, in 1960, 76 percent of adults with a college degree were married, compared to 72 percent of those with a high school diploma—a gap of only 4 percentage points. By 2008, not only was marriage less likely, but that gap had quadrupled, to 16 percentage points, with 64 percent of adults with college degrees getting married compared to only 48 percent of adults with a high school diploma. A report from the National Marriage Project at the University of Virginia summed up the data well: “Marriage is an emerging dividing line between America’s moderately educated middle and those with college degrees.” The group for whom marriage has largely disappeared now includes not just unskilled blacks but unskilled whites as well. Indeed, for younger women without a college degree, unwed childbearing is the new normal. These differences in family formation are a problem not only for those concerned with “family values” per se, but also for those concerned with upward mobility in a society that values equal opportunity for its children. 
Because the breakdown of the traditional family is overwhelmingly occurring among working-class Americans of all races, these trends threaten to make the U.S. a much more class-based society over time. The well-educated and upper-middle-class parents who are still forming two-parent families are able to invest time and resources in their children—time and resources that lower- and working-class single mothers, however impressive their efforts to be both good parents and good breadwinners, simply do not have. The striking similarities between what happened to black Americans at an earlier stage in our history and what is happening now to white working-class Americans may shed new light on old debates about cultural versus structural explanations of poverty. What’s clear is that economic opportunity, while not the only factor affecting marriage, clearly matters. The journalist Hanna Rosin describes the connection between declining economic opportunities for men and declining rates of marriage in her book The End of Men. Like Moynihan, she points to the importance of job opportunities for men in maintaining marriage as an institution. The disappearance of well-paying factory jobs has, in her view, led to the near collapse of marriage in towns where less educated men used to be able to support a family and a middle-class lifestyle, earning $70,000 or more in a single year. As these jobs have been outsourced or up-skilled, such men either are earning less or are jobless altogether, making them less desirable marriage partners. Other researchers, including Kathryn Edin at Harvard, Andrew Cherlin at Johns Hopkins, and Charles Murray of the American Enterprise Institute, drawing on close observations of other working-class communities, have made similar arguments. Family life, to some extent, adapts to the necessities thrown up by the evolution of the economy. Just as joblessness among young black men contributed to the breakdown of the black family that Moynihan observed in the ’60s, more recent changes in technology and global competition have hollowed out the job market for less educated whites. Unskilled white men have even less attachment to the labor force today than unskilled black men did fifty years ago, leading to a decline in their marriage rates in a similar way. In 1960, the employment rate of prime-age (twenty-five to fifty-five) black men with less than a high school education was 80 percent. Fast-forward to 2000, and the employment rate of white men with less than a high school education was much lower, at 65 percent—and even for white high school graduates it was only 84 percent. Without an education in today’s economy, being white is no guarantee of being able to find a job. That’s not to say that race isn’t an issue. It’s clear that black men have been much harder hit by the disappearance of jobs for the less skilled than white men. Black employment rates for those with less than a college education have sunk to near-catastrophic levels. In 2000, only 63 percent of black men with only a high school diploma (compared with 84 percent of white male graduates) were employed. Since the recession, those numbers have fallen even farther. And even black college graduates are not doing quite as well as their white counterparts. Based on these and other data, I believe it would be a mistake to conclude that race is unimportant; blacks continue to face unique disadvantages because of the color of their skin. 
It ought to be possible to say that class is becoming more important, but that race still matters a lot. Most obviously, the black experience has been shaped by the impact of slavery and its ongoing aftermath. Even after emancipation and the civil rights revolution in the 1960s, African Americans faced exceptional challenges like segregated and inferior schools and discrimination in the labor market. It would take at least a generation for employers to begin to change their hiring practices and for educational disparities to diminish; even today these remain significant barriers. A recent audit study found that white applicants for low-wage jobs were twice as likely to be called in for interviews as equally qualified black applicants. Black jobless rates, in short, continue to exceed those of whites. Yet a single-minded focus on declining job prospects for men and their consequences for family life ignores a number of other factors that have led to the decline of marriage. Better male employment prospects can lead to more marriages, but scholars such as Harvard’s David Ellwood and Christopher Jencks have argued that economic factors alone cannot explain the wholesale changes in the frequency of single parenting, unwed births, divorce, and marriage, especially among the least educated, that are leading to growing gaps between social classes. So what else explains the decline of marriage? First, and critically important in my view, is the changing role of women. In my first book, Time of Transition: The Growth of Families Headed by Women, published in 1975, my coauthor and I argued that it was not just male earnings that mattered, but what men could earn relative to women. When women don’t gain much, if anything, from getting married, they often choose to raise children on their own. Fifty years ago, women were far more economically dependent on marriage than they are now. Today, women are not just working more, they are better suited by education and tradition to work in such rapidly growing sectors of the economy as health care, education, administrative jobs, and services. While some observers may see women taking these jobs as a matter of necessity—and that’s surely a factor—we shouldn’t forget the revolution in women’s roles that has made it possible for them to support a family on their own. In a fascinating piece of academic research published in the Journal of Human Resources in 2011, Scott Hankins and Mark Hoekstra discovered that single women who won between $25,000 and $50,000 in the Florida lottery were 41 percent to 48 percent less likely to marry over the following three years than women who won less than $1,000. We economists call this a “natural experiment,” because it shows the strong influence of women’s ability to support themselves without marriage—uncontaminated by differences in personal attributes that may also affect one’s ability or willingness to marry. My own earlier research also suggested that the relative incomes of wives and husbands predicted who would divorce and who would not. Women’s growing economic independence has interacted with stubborn attitudes about changing gender roles. When husbands fail to adjust to women’s new breadwinning responsibilities (who cooks dinner or stays home with a sick child when both parents work?), the couple is more likely to divorce. It may be that well-educated younger men and women continue to marry not only because they can afford to but because many of the men in these families have adopted more egalitarian attitudes.
While a working-class male might find such attitudes threatening to his manliness, an upper-middle-class man often does not, given his other sources of status. But when women find themselves having to do it all—that is, earn money in the workplace and shoulder the majority of child care and other domestic responsibilities—they raise the bar on whom they’re willing to marry or stay married to. These gender-related issues may play an even greater role for black women, since while white men hold slightly more high school diplomas and baccalaureate degrees than white women, black women are much better educated than black men. That means it’s more difficult for well-educated black women to find black partners with comparable earning ability and social status. In 2010, black women made 87 percent of what black men did, whereas white women made only 70 percent of what white men earned. For less educated black women, there is, in addition, a shortage of black men because of high rates of incarceration. One estimate puts the proportion of black men who will spend some time in prison at almost one third. In a forthcoming book, Doing the Best I Can: Fatherhood in the Inner City, Timothy Nelson and Edin, the Harvard sociologist, describe in great detail the kind of role reversal that has occurred among low-income families, both black and white. What they saw were mothers who were financially responsible for children, and fathers who were trying to maintain ties to their children in other ways, limited by the fact that these fathers have very little money, are often involved in drugs, crime, or other relationships, and rarely live with the mother and child. In other words, low-income fathers are not only withdrawing from the traditional breadwinner role, they’re staging a wholesale retreat—even as they make attempts to remain involved in their children’s lives. Normative changes figure as well. As the retreat from marriage has become more common, it’s also become more acceptable. That acceptance came earlier among blacks than among whites because of their own distinct experiences. Now that unwed childbearing is becoming the norm among the white working class as well, there is no longer much of a stigma associated with single parenting, and there is a greater willingness on the part of the broader community to accept the legitimacy of single-parent households. Despite this change in norms, however, most Americans, whatever their race or social class, still aspire to marriage. It’s just that their aspirations are typically unrealistically high and their ability to achieve that ideal is out of step with their opportunities and lifestyle. As scholars such as Cherlin and Edin have emphasized, marriage is no longer a precursor to adult success. Instead, when it still takes place, marriage is more a badge of success already achieved. In particular, large numbers of young adults are having unplanned pregnancies long before they can cope with the responsibilities of parenthood. Paradoxically, although they view marriage as something they cannot afford, they rarely worry about the cost of raising a child. Along with many others, I remain concerned about the effects on society of this wholesale retreat from stable two-parent families. The consequences for children, especially, are not good. Their educational achievements, and later chances of becoming involved in crime or a teen pregnancy are, on average, all adversely affected by growing up in a single-parent family. 
But I am also struck by the lessons that emerge from looking at how trends in family formation have differed by class as well as by race. If we were once two countries, one black and one white, we are now increasingly becoming two countries, one advantaged and one disadvantaged. Race still affects an individual’s chances in life, but class is growing in importance. This argument was the theme of William Julius Wilson’s 1980 book, The Declining Significance of Race. More recent evidence suggests that, despite all the controversy his book engendered, he was right. To say that class is becoming more important than race isn’t to dismiss race as a very important factor. Blacks have faced, and will continue to face, unique challenges. But when we look for the reasons why less skilled blacks are failing to marry and join the middle class, it is largely for the same reasons that marriage and a middle-class lifestyle is eluding a growing number of whites as well. The jobs that unskilled men once did are gone, women are increasingly financially independent, and a broad cultural shift across America has created a new normal. Editor’s Note: This article originally appeared in the January/February 2013 issue of The Washington Monthly under a different title. |
a1b150fda074476c44e5ad547ade1620 | https://www.brookings.edu/articles/fifty-years-after-black-september-in-jordan/ | Fifty years after “Black September” in Jordan | Fifty years after “Black September” in Jordan The Jordanian civil war in 1970, better known as Black September, was decided by an intelligence success led by King Hussein and his chief of intelligence. It was a mystery for years until revealed in the memoir of a former CIA officer serving in the region at the time. President Richard Nixon and National Security Advisor Henry Kissinger took great credit for managing the Black September crisis, but in fact their role was marginal to the outcome of the biggest threat to Hussein’s survival, the Iraqi army in eastern Jordan. |
f85e29a84bf12cf3100707e92995d308 | https://www.brookings.edu/articles/finding-a-new-normal-in-u-s-india-relations/ | Finding a New Normal in U.S.-India Relations | Finding a New Normal in U.S.-India Relations To hear recent commentary on US-India relations, one would think observers were talking about a fraught relationship, full of crises between two countries with little in common and little interaction. This narrative, however, ignores the broader dynamics of the relationship, as well as how far it has come in a short period of time – it was only about a decade and a half ago, after all, that the US had sanctions on India in place. Today, India’s relationship with the US is broader and deeper than ever before – yes, this is a cliché, but one that also happens to be true. Cooperation ranges from India buying C-130s from America to the US Centres for Disease Control helping their Indian counterpart establish an Epidemic Intelligence Service; it spans from sectors like education to economics. Bilateral trade and investment has increased; with over $120 billion in trade in goods and services, the US is India’s largest trading partner. Significantly, the economic relationship has been a two-way street. American companies have invested an estimated $50 billion in India and there’s an estimated $25 billion of Indian investment in the US. Bilateral defence trade has gone from zero to $10 billion dollars, with American companies wanting to sell more to India. In addition, in a few years, US liquefied natural gas exports to India are scheduled to begin. Over the last year and a half, dozens of senior American and Indian policymakers from both the central and state levels have exchanged visits. It’s also no longer unusual for US governors (like those from Arizona and Kentucky) or mayors (like those from San Antonio and Indianapolis) to visit India or for chief ministers from India to travel to the US – even from landlocked states like Madhya Pradesh that haven’t traditionally been as globally connected. There are indeed so many dialogues, working groups, and visiting business and government delegations, that observers seem to lose track of the exact number. On people-to-people ties, there are over 2.2 million non-resident Indians and persons of Indian origin in the US – more than in any other single country in the world. Furthermore, in poll after poll, the Indian public expresses support for good relations with the US, often urging that they be improved further. Indian officials, in turn, admit that these good relations and US interest in India has benefited India’s relations with countries like China, Japan and Saudi Arabia. An Indian official recently referred to this existing phase of relations between the two countries as one of normalcy. Currently, however, despite all these links between India and the US, normalcy seems to have brought with it loud, public condemnations, a sense of drift, an emphasis on differences, and even a questioning of the value of the relationship. The challenge for the new government in Delhi – and the opportunity – will be in finding a new normal. Any new government will first have to assess how significant the US is for India. It will have to ask the question: Is this bilateral relationship ‘India’s most important foreign policy relationship’, as the Indian ambassador to the US recently stated, or is the US at least one of India’s most crucial partners? There are reasons to answer yes. 
The US will matter for India – and not just for the positive reasons that policy-makers point out when they’re listing areas of convergence. Given the global role and power of the US – with an improving economy and in the midst of an energy revolution – even if Indian leaders are not convinced that the US can play a critical role in helping India achieve its strategic and economic interests, they will have to grapple with the fact that the US will play a role in shaping the environment in which India is operating and, potentially, could play spoiler vis-à-vis Indian interests. Whether a relationship with the US is seen as important for positive or negative reasons, a new Indian government will have to deal with five dangers vis-à-vis the India-US relationship: drift, the dominance of differences, disillusionment, the difficulties of dealing with another democracy, and the dilution of India’s importance. It will also have to seek new opportunities to find a new normal that will benefit Indian interests. Drift: Policymakers in both India and the US are likely to continue to be domestically preoccupied over the next few years. Strengthening the economy and creating jobs at home are likely to be priorities in both countries. In the US, the mid-term elections in November 2014 will be the main focus of political attention. Once that is over, the 2016 presidential election season will begin – some would argue that it already has. There’s a possibility that like his predecessors, President Obama will experience a lame-duck phase – one that could be challenging in terms of getting things done. However, it could also be an opportunity for India-US relations. American presidents are thought to have more flexibility on foreign policy during such phases, and if the president believes that a strong relationship with India will be one of his key legacies, he might be inclined to pay greater attention to it. In the foreign policy realm, both India and the US will likely have more pressing concerns for the next few years. On the US side, the last two or three years have involved a focus on crises in the Middle East and Ukraine and, to a certain extent, in East Asia. As a result, many are wondering what happened to the pivot or rebalance to Asia in which the Obama administration had seen India as playing a crucial role. A new Indian government, in turn, might find itself preoccupied with foreign policy issues closer to home. With limited bureaucratic capacity (time, energy, resources), these other foreign policy priorities and crises might constrain the amount of attention Indian and US policymakers can devote to India-US relations. Having recently experienced a diplomatic crisis and seen its impact on the relationship, neither country would like this kind of (negative) attention. However, absent crises or high profile initiatives (such as the nuclear deal) that can focus bureaucratic and political attention on the relationship, it might suffer from inattention. Dilution of importance: Inattention can also stem from a sense that the other country isn’t as important – and the potential dilution of American importance in India and Indian importance in the US is a danger. Some doubters in India have questioned the value of getting closer to a country that they believe is on the decline. The American investment in India, on its part, has been predicated on at least three assumptions. For some, it has been the idea of India that has been important – a diverse, developing democracy that could be a partner. 
For others, India’s economic potential has been what makes it attractive. For yet others, it has been India’s strategic potential, especially as a balance against China. India’s importance because of the latter, however, can wax and wane with the health of Sino-US relations or with assessments of India’s willingness and capacity. As for economic potential, there has been more doubt than hope on this front over the last three years. Recent developments have also meant that the India-as-a-role-model constituency is disappointed. All this has resulted in the devaluation of India’s stock in the US. This has been exacerbated by the fact that some of the strongest advocates of strong India-US relations in the US government have moved on to other positions. Disillusionment: The two countries are also no strangers to disillusionment. Often this is a result of heightened expectations that are left unmet. The disillusionment problem is exacerbated because, in many cases, the returns on the investment in the relationship may only become apparent in the medium to long term. It perhaps also results from a phenomenon that one might call India-US exceptionalism: each of the countries involved not just thinks that it is exceptional, but that the other should make exceptions for it. Each also expects more from the other than perhaps any other of its allies or partners and expects that, as a fellow democracy, the other should understand its constraints. Each also seems to believe that the other does not understand its exceptionalism, leading to doubt and disappointment. Differences: Over the last year such disillusionment has been evident, especially as differences have dominated the relationship – or at least the narrative about it. Over the next few years, these differences might continue to be in the spotlight instead of the bilateral achievements that tend to take place behind the scenes. Progress in terms of style (the bureaucracies developing habits of cooperation) or substance (in areas like intelligence sharing) are sometimes invisible. But even when progress is visible, achievements might get little, if any, attention – especially with the media seeming to believe that good news doesn’t necessarily make good copy. A new Indian government could have to deal with potential differences with the US on a number of issues. The US relationship with Pakistan is one, especially with concerns that US actions in the run-up to the 2014 draw down of troops in Afghanistan will compromise Indian interests vis-à-vis Afghanistan, Pakistan and, potentially, counter-terrorism. As Washington continues to calibrate its relationship with Beijing, concerns about a China-US G-2 may also arise again. Similarly, Sino-Indian cooperation might create consternation in some quarters in the US. If US relations with Iran deteriorate again, that might be another area of difference, as might US-India divergences on other countries. Renewed activity in three multilateral arenas – trade, non-proliferation, climate change – might also bring India-US differences to the fore. Finally, growing economic ties will mean that economic tangles will naturally increase. This has already been evident, with US companies and legislators complaining about domestic sourcing, the state of intellectual property protection, and taxation and regulatory policies in India. Indian companies, on their part, have expressed concerns about protectionism, immigration reform and market access in the US. 
Dealing with another democracy: All these differences are likely to be complicated and exacerbated by an element that also facilitates the India-US relationship – the fact that both countries are democracies. This factor means that debates and differences will play out publicly, negotiations will take place under the gaze of a free press, and domestic politics will have to be navigated and negotiated. Adding to the complications is another element that has driven good relations: the breadth and depth of relations between these two democracies. The quantitative and qualitative change in the relationship means that it involves more issues, interactions and stakeholders than ever before, making greater friction natural. The relationship also involves engagement on issues that span the foreign-domestic divide, including in the economic, energy, education and immigration realms. These issues will require policymakers to tread carefully, given that both countries are sensitive to outsiders trying to influence their domestic politics and policies. However, the depth and breadth of the relationship also provides an opportunity for the new Indian government in its relations with the US. The nature of the relationship means there’s not only one particular person or department or company interested in a good working relationship. This is only one factor that creates opportunity. Second, there is bipartisan support for India in the US, particularly in the mainstream of the Democratic and Republican parties. Third, India can make the case that it is – especially through its companies’ investment in the US – contributing to US policy-makers’ economic goals. Fourth, while in the US there are negative references to outsourcing and complaints about Indian trade and investment policies, India is not seen as a strategic threat. On the contrary, American policymakers have reiterated in public and private venues that the US supports India’s rise. Chinese observers indeed comment on the difference between Washington’s pronouncements vis-à-vis India and China in this regard. The reason offered for this support is that a strong, prosperous India will be good for US geopolitical and economic interests, even if this India won’t always be on the same page as the US and will sometimes create problems for it. Fifth, there is a value-based reason that many in the US want India to succeed – this is where ‘the democracy thing’ plays a positive role. While American commentators often state that their country is unique, a strong, successful, democratic India – in some ways in its own image – is seen by some in the US as validation of the democratic idea and thus as a role model (the unstated part being the contrast to China). This is especially important at a time when democracy hasn’t been getting good press. Taking advantage of these opportunities, however, will require avoiding the dangers mentioned above. A new Indian government will have to ensure that this relationship doesn’t just get bureaucratic attention, but that of the political leadership as well. There is a tendency to take this bilateral relationship for granted. But, natural as India-US relations might seem, without nurture they won’t get beyond the current state. A new Indian government will also have to deal with differences. Differences are unavoidable; India and the US will differ at times, as all allies and partners do – sometimes the two countries will agree on ends, but disagree on means. 
However, India and the US have shown an ability to manage differences – for example, they successfully navigated the Iran sanctions issue in 2012, when US Congressional unhappiness with Indian oil imports from Iran could have caused serious tensions and loss of support for India on Capitol Hill. Working with American counterparts, Indian officials can also try to minimize the negative impact of differences, for example, through advance consultation and notification. Officials can also deal with differences privately to the extent possible, without assuming the worst of the other side. Furthermore, it’s worth planning for the scenario of such differences becoming public. The next Indian government can’t make policy always thinking, ‘What will Arnab say?,’ but such planning will help if disagreements do dominate the headlines. And, when differences do become public, it would help if the two sides worked to temper the public discussion. In dealing with another democracy, it would also help if the next government (and its American counterpart) tried to support – rather than undermine – the other side in dealing with various domestic constituencies. Sometimes, this will also require showing the same patience with the domestic political constraints their American counterparts face that Indian officials expect or request. On a less defensive note, a new Indian government can also consolidate existing constituencies and create new ones for the relationship in India and the US among officials, legislators, corporations, and individuals outside government. First, it can strengthen the Indian economy and its security. This will increase India’s importance and alleviate the problems of ‘India fatigue’ and ‘India irrelevance’ in the US. Improved economic growth and a sense of momentum will be especially likely to change the narrative about India and India-US relations for the better. To put it bluntly, if India is once again seen as a winner, it will quieten the whiners. Moreover, it will change how countries around the world perceive India and an India taken seriously globally will be taken more seriously by the US. Second, a new Indian government can work with the US to implement existing agreements, conclude outstanding negotiations and explore new opportunities, especially on the economic front. There are potential initiatives that can be taken in the trade and investment, defence trade, space, maritime, energy and education realms. There are regions, such as Southeast Asia or Africa or the Indian Ocean, where India and the US might consider working together on specific initiatives. What about a big-ticket item, which some have called for? Perhaps, but only if the associated agreements are not just negotiated, but implemented as well. A big idea unfulfilled, on the other hand, can lead to disillusionment – as with the two countries’ civil nuclear deal. Third, the Indian government can create greater awareness of the opportunities India offers, as well as the constraints that exist in the country. There is a tendency in the policymaking community and the public in India and the US to assume that they know the other country. Yet one thing the recent crisis involving the detention of an Indian diplomat showed is how little the two countries know or understand each other, often reverting to long-held negative stereotypes in times of crises. A new Indian government can encourage learning about contemporary India. 
It can facilitate study tours for influential Americans – and not just or even primarily for Indiawallahs. The next government can also ease the ability of a greater number of Americans to work and study in India – including by making easier the research and employment visa application processes and, perhaps, by encouraging the private sector to create a significant scholarship fund for US students designed to increase understanding of contemporary India. It should also promote greater learning about the US in India. There are few real experts focusing on the US in India – this needs to be rectified, especially if Indian government and business would like to understand the country in which and with whom they’re increasingly operating. Finally, a new Indian government can act on initiatives to show specific US constituencies – especially political and corporate ones – that their investment in India can yield tangible benefits. Both India and the US need to get beyond asserting that the bilateral relationship is not transactional, while constantly asking of the other: ‘What have you done for me lately?’ Realistically, foreign relations are not altruistic; if the India-US partnership is to be sustainable and if the new normal is to be at a higher, more constructive level, each needs to feel that they derive or will derive benefit from it. Taking advantage of some of these opportunities will help not just if and when an Indian government believes that the US will be a supporter vis-à-vis Indian objectives, but also if and when it proves to be – or has the potential to be – a spoiler. This article was first published in Seminar Magazine . |
c46ad43a9e6cd782c8d1006a07a6f462 | https://www.brookings.edu/articles/france-and-europe-an-ambivalent-relationship/?shared=email&msg=fail | France and Europe: An Ambivalent Relationship | France and Europe: An Ambivalent Relationship France’s relationship with Europe is paradoxical. On one hand, France has long been a strong supporter of the idea of a united Europe. Aristide Briand, Jean Monnet and Robert Schuman were the founding fathers of European integration. This enthusiasm also stems from the intellectual, idealistic and universal dimensions of French philosophy. But France is also a country with a long history as a nation-state and an early experience with global power. Even when France’s position within Europe was weakened during the 19th century as a result of the rising power of Germany, France was able to maintain its importance in the global arena. It found solace in its colonial adventures and by 1914 was the second largest colonial empire in the world. And even when the Cold War forced Europe to rely on the United States, France was quick to demonstrate its independence and weight during the presidency of Charles de Gaulle. Since the break-up of the Soviet bloc and the reunification of Germany, France’s place at the center of Europe has become threatened. France’s reaction was to step up its attempts to bring about European integration, especially through promotion of the single currency. France decided also to help to create a political Europe (“Europe puissance”) by promoting the Common Foreign and Security Policy (CFSP) and by re-launching a common European security and defense policy (ESDP), together with the UK, at the 1998 St. Malo Summit. Three lessons about France’s approach to Europe derive from the past and apply to the present. First, the concept of Europe is popular in France and is perceived by many as a way of avoiding both the conflicts of European history and the problems of a balance of power in Europe. |
8c3002902459090d8f32ce55bad342b5 | https://www.brookings.edu/articles/heavy-traffic-international-migration-in-an-era-of-globalization/ | Heavy Traffic: International Migration in an Era of Globalization | Heavy Traffic: International Migration in an Era of Globalization At the start of the new millennium, some 150 million people, or 2.5 percent of the world’s population, live outside their country of birth. That number has doubled since 1965. With poverty, political repression, human rights abuses, and conflict pushing more and more people out of their home countries while economic opportunities, political freedom, physical safety, and security pull both highly skilled and unskilled workers into new lands, the pace of international migration is unlikely to slow any time soon. Few countries remain untouched by migration. Nations as varied as Haiti, India, and the former Yugoslavia feed international flows. The United States receives by far the most international migrants, but migrants also pour into Germany, France, Canada, Saudi Arabia, and Iran. Some countries, such as Mexico, send emigrants to other lands, but also receive immigrants: both those planning to settle and those on their way elsewhere. Institutions and laws for achieving cooperation among receiving, source, and transit countries are in their infancy. The World Trade Organization oversees the movement of goods worldwide and the International Monetary Fund monitors the global movement of capital, but no comparable institution regulates the movements of people. Nor does a common understanding exist among states, or experts for that matter, as to the costs and benefits of freer or more restrictive immigration policies. The surge in international migration, though, is prompting states everywhere to recognize the need for greater harmonization of policies and approaches. Bilateral discussions of migration issues (between the United States and Mexico, for example, over an expanded guestworker program and amnesty for unauthorized Mexican workers) have become more commonplace. During the past decade, regional groups have been set up in the Americas, Europe, East Asia, Africa, and elsewhere to allow receiving, source, and transit countries to address issues of mutual concern. New Trends Economic, geopolitical, and demographic trends reinforce the need to consolidate these regional institutions and begin to develop a global regime for managing migration. Economic trends influence migration patterns in many ways. Multinational corporations, for example, press governments to ease movements of executives, managers, and other key personnel from one country to another. When labor shortages appear, whether in information technology or seasonal agriculture, companies also seek to import foreign workers to fill jobs. Although the rules for admitting foreign workers are largely governed by national legislation, regional and international trade regimes such as the North American Free Trade Agreement and the General Agreement on Trade in Services include provisions for admitting foreign executives, managers, and professionals. Under NAFTA, for example, U.S., Canadian, and Mexican (as of 2004) professionals in designated occupations may work in the other NAFTA countries without regard to numerical limits imposed on other foreign nationals. The growth in global trade and investment also affects source countries.
Economic development has long been regarded as the best long-term solution to emigration pressures arising from the lack of economic opportunities in developing countries. Almost uniformly, however, experts caution that emigration pressures are likely to remain and, possibly, increase before the long-term benefits accrue. Wayne Cornelius and Philip Martin postulate that as developing countries’ incomes begin to rise and opportunities to leave home increase, emigration first increases and declines only later as wage differentials between emigration and immigration countries fall. Italy and Korea, in moving from emigration to immigration countries, give credence to that theory. Geopolitical changes since the Cold War era offer both opportunities and challenges for managing international migration, particularly refugee movements. During the Cold War, the United States and other Western countries saw refugee policy as an instrument of foreign policy. The Cold War made it all but impossible to address the roots of refugee movements, which often resulted from surrogate conflicts in Southeast Asia, Central America, Afghanistan, and the Horn of Africa. Few refugees were able or willing to return to lands still dominated by conflict or Communism. With the end of the Cold War, new opportunities to return emerged as decades-old conflicts came to an end. Democratization and increased respect for human rights took hold in many countries, as witnessed in the formerly Communist countries of East Europe, making repatriation a reality for millions of refugees who had been displaced for years. At the same time, rabid nationalism fueled new conflicts that have led to massive displacement in places such as the former Yugoslavia and the Great Lakes region of Africa. When the displacements spilled over to other countries, or became humanitarian crises that threatened the lives of millions, governments proved willing to intervene, even with military force, on behalf of victims. Faced with crises in northern Iraq, Bosnia, Kosovo, and East Timor, classic notions of sovereignty that would once have precluded such intervention came under considerable pressure. On the positive side, people who once would have had to cross international borders to find aid could now find it at home. On the negative side, however, the so-called safe zones established in Bosnia, Iraq, and elsewhere often proved far from secure, leaving internal refugees more vulnerable than those able to cross into neighboring countries. Demographic trends also reinforce arguments for a global regime to manage migration. Worldwide, fertility rates are falling, although developing countries continue to see rapid population growth. In most industrialized countries, fertility levels are well below replacement rates. In Europe, the average number of children born per woman is 1.4; Italy’s fertility rate is 1.2. Countries with declining fertility face the likelihood of a fall in total population, leading some demographers to see a looming population implosion. Such nations can also expect an aging population, with fewer working-age people for each older person. Although immigration will not solve the problem, it will help ease labor shortages and redress somewhat the aging of the society. Demographic trends also help explain emigration pressures in Africa, Latin America, and some parts of Asia, where fertility rates are high. Rapidly growing societies often cannot generate enough jobs to keep pace with new entries into the labor force.
Growth may also cause environmental degradation, particularly when land use policies do not protect fragile ecosystems. Natural disasters also wreak havoc on densely populated areas in poor countries. Recent hurricanes in Honduras and Nicaragua and earthquakes in El Salvador and India displaced huge numbers of people from ravaged homes and communities. New Responses In 1997, United Nations Secretary General Kofi Annan addressed the possibility of convening a conference on international migration and development. Upon consulting with UN member governments, he found insufficient consensus about what such a conference could accomplish and reported: “The disparate experiences of countries or subregions with regard to international migration suggest that, if practical solutions are to be found, they are likely to arise from the consideration of the particular situation of groups of countries sharing similar positions or concerns with the global international migration system…. In the light of this, it may be expedient to pursue regional or subregional approaches whenever possible.” Since 1997, regional processes have matured. Perhaps most developed is the Regional Migration Conference, the so-called Puebla Group, which brings together all the countries of Central and North America for regular dialogue on migration issues, including an annual session at the vice-ministerial level. The Puebla Group’s Plan of Action calls for cooperation in exchanging information on migration policy, exploring links between development and migration, combating migrant trafficking, returning extra-regional migrants, and ensuring full respect for the human rights of migrants, as well as reintegrating repatriated migrants, equipping and modernizing immigration control systems, and training officials in migration policy and procedures. Discussions have led law enforcement officials of the United States, Mexico, and several Central American countries to cooperate in arresting and prosecuting members of large-scale smuggling and trafficking operations that move migrants illegally across borders and then force them to work in prostitution, sweatshops, and other exploitive activities. Similar regional groups are working in East and Southeast Asia. The “Manila Process” focuses on unauthorized migration and trafficking in East and Southeast Asia. Since 1996, it has brought together 17 countries each year for regular exchange of information. The Asia-Pacific Consultations include governments in Asia and Oceania and focus on a broad range of population movements in the region. Both ongoing dialogues were strengthened by a 1999 International Symposium on Migration hosted by the Royal Thai government. In the resulting Bangkok Declaration on Irregular Migration, 19 Asian countries agreed to cooperate to combat smuggling and trafficking. Other such groups are in the making in the Southern Cone of South America, in western and southern Africa, and in the Mediterranean. The intent is to bring together the governments of all countries involved in migration, whether origin, transit, or receiving. At present, the groups are forums for exchanging information and perspectives, although the more developed ones, such as the Puebla Group, are leading to joint action as well. Given the lack of shared information or consensus about migration policies and practices, the discussion stage is a necessary first step in developing the capacity for joint efforts.
Steps Toward a Global Regime Will these regional processes lead to a global migration regime? It is too early to know. Three issues must first be addressed. First, states must reach a consensus that harmonizing policies will make migration more orderly, safe, and manageable. And in fact signs exist of growing convergence among regional groups in setting out an agenda for harmonization. Many items on regional agendas relate to unauthorized migration and how best to deter it, consistent with respect for the rule of law and the human rights of migrants. Although source and destination countries may differ still about what causes unauthorized migration, agreement is growing on some approaches to it: for example, that curbing alien smuggling and trafficking (a global enterprise that nets an estimated $7-10 billion a year) requires international cooperation. Other issues arise in addressing forced migration. How, for example, should states protect people fleeing repression and conflict? When conflicts end and migrants no longer need protection, when and how should they be required to return? The growing use of temporary protection, as in the crises in Bosnia and Kosovo, has led European Union member states to place a high priority on harmonizing temporary protection policies and mechanisms for burden sharing. At the same time, the Puebla Group has focused on temporary protection of victims of natural disasters in the aftermath of Hurricane Mitch. More recently, issues involving legal admissions have appeared on regional agendas. When and to whom should visa restrictions apply? Under what circumstances should family reunification be guaranteed? Who should be eligible for work and residence permits? What rights should accrue to those legally admitted for work or family purposes? Answers to these questions will take time because states still differ widely in their attitudes and policies toward legal immigration. But signs of change can be seen even here. The European Union has led the way, with free movement of labor a long-held principle for its own nationals. The 1997 Amsterdam Treaty takes the EU the next step, mandating a common immigration policy for participating states. Second, a global migration regime will require standards, policies, and new legal frameworks. A legal framework already exists for refugee movements, with most countries now signatories to the 1951 UN Convention on the Status of Refugees or its 1967 Protocol. Most important, signatories agree that they will not return (refoule) persons to countries where they have a well-founded fear of persecution. International agreements also apply to the rights of migrant workers, but few states ratified the 1990 UN Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families, showing that they were unwilling to take on a comprehensive set of obligations toward migrant workers and their families. No body of international law or policy governs responses to other forms of international migration. But with growing economic integration, global trade agreements may become vehicles for formulating such policies. Ongoing negotiations on the General Agreement on Trade in Services involve new agreements, under the rubric of the “movement of natural persons,” to ease the admission of executives, managers, and professionals providing services. The third issue to be settled before a global migration regime can come into being is organizational responsibilities.
At the heart of the refugee regime is the UN High Commissioner for Refugees, whose mandate dates back to 1950. No comparable institution exists for other migration matters. The International Organization for Migration (IOM), a Geneva-based intergovernmental body, with 86 member states and 41 observer states, comes closest. Since its founding in 1951 to help resettle refugees and displaced persons from World War II and the Cold War, IOM has taken on a broad set of responsibilities, including counter-trafficking programs, migrant health and medical services, technical assistance and capacity building, and assisted return programs. In particular, it serves as the secretariat for some of the regional processes discussing international migration. IOM is not a part of the UN system (although it cooperates with UN agencies), and it represents far fewer states. To become the focal point of a new migration regime, it would need substantial new resources and government support, particularly in helping governments formulate new global migration policies. Its governing board, composed of government representatives, has already requested and provided funds to IOM to strengthen its capacity to advise governments on best practices in migration management. Both source and destination countries recognize the need for improved responses, and IOM’s broad membership helps it devise policies that balance varied governmental interests. A Multilateral Approach In an increasingly interconnected world, governments are unlikely to be able to solve the many problems posed by international migration through unilateral approaches only. Source, receiving, and transit countries must all cooperate to manage international migration. Reaching agreement will be relatively straightforward when countries share similar interests and problems: in combating the most exploitive type of human trafficking, for example. Often, however, interests will diverge. Source countries such as Mexico will press for easier access to the labor markets of wealthier countries. Receiving countries such as the United States will face public concerns about seemingly uncontrolled movements into their territories. Agreement will be difficult. Still, these issues will not go away just because different countries see them differently. Sheer necessity is likely to move governments toward a global migration regime. |
9613060967d238bb2301bf270fefc80e | https://www.brookings.edu/articles/help-wanted-connecting-inner-city-job-seekers-with-suburban-jobs/?shared=email&msg=fail | Help Wanted: Connecting Inner-City Job Seekers with Suburban Jobs | Help Wanted: Connecting Inner-City Job Seekers with Suburban Jobs Cities have historically been the hubs of economic activity in America: the focal points of commerce where farmers brought crops and livestock, where office buildings bustled, and where manufacturing industries thrived. In short, cities were the places where people worked. Suburbs began growing up around the cities in the 1950s and 1960s, but they were originally mostly residential. People commuted from them to jobs in the city. Now, though, after decades of decentralization from city to suburb to exurb, the landscape of home and work has radically changed. Today, work is still central to the role of cities in the new knowledge-based economy. With the U.S. economy booming, many new jobs are being created in central cities. Between 1993 and 1997, for the first time in nearly 30 years, central city job growth rivaled both national and suburban growth rates. Urban unemployment rates fell nearly 40 percent between 1993 and 1999. |
b7066bd3cc8505980df49b759a941443 | https://www.brookings.edu/articles/high-and-low-politics-in-afghanistan-the-terrorism-drugs-nexus-and-what-can-be-done-about-it/ | High and low politics in Afghanistan: The terrorism-drugs nexus and what can be done about it | High and low politics in Afghanistan: The terrorism-drugs nexus and what can be done about it As the world debates drug policy at the Special Session of the United Nations General Assembly on the World Drug Problem (UNGASS 2016), opium poppy remains deeply rooted in Afghanistan. Narcotics production and counternarcotics policies there are of critical importance not only for drug control in the country and worldwide, but also for the security, reconstruction, and rule-of-law efforts in Afghanistan. Perhaps nowhere in the world has a country and the international community faced such an extensive illicit drug economy as in Afghanistan. Given deteriorating security, economic, and political conditions in Afghanistan, there is no realistic prospect for radically reducing the Afghan opium poppy economy, or the Afghan economy’s dependence on it, for years. In 2007 opium production climbed to a staggering 8,200 metric tons (mt).[1] As a result of the subsequent oversaturation of the illicit opiates market and the intense outbreak of a poppy disease, production fell to 3,600 mt in 2010 but rose again to 5,800 mt in 2011 and remained, with some fluctuations, at this level.[2] In 2015, owing to a combination of market saturation and correction and poppy disease (and far less so to sustainable policies), opium production declined to 3,300 mt again.[3] These levels of production are enough to supply most of the world’s opiates market.[4] For two decades, opium has been Afghanistan’s leading cash-generating economic activity. Valued at the border, profits from opiates represent about 10-15% of GDP.[5] But when one takes into account macroeconomic spillovers, with drugs underpinning much of other legal economic activity, drugs easily constitute between a third and a half of the overall economy.[6] The United Nations Office on Drugs and Crime (UNODC) provides a smaller number – namely, that the farmgate value of opium production in Afghanistan represents a single-digit percentage of GDP, for example about 4% of the country’s GDP in 2013.[7] But this number is misleading. By focusing on farm-gate value only, it does not take into account value-added in Afghanistan or economic spillover effects, such as the fact that much of the consumption of durables and non-durables, as well as construction, is underpinned by the opium poppy economy. For much of the rural population, the opium poppy economy is an essential source of basic livelihoods and human security. When access to the opium poppy economy is cut off, such as through bans on cultivation or eradication, large segments of the rural population face economic immiseration and deprivation even in terms of access to food, medical treatment, and schooling for children.[8] The significance of opium poppy production for the Afghan economy, and crucially for employment, will only grow now that Afghanistan has entered serious economic and fiscal crises due to the departure of international military forces, the presence of which structured much of economic activity in Afghanistan over the past decade, and due to political instability and persisting insecurity scaring away investment and generating large capital flight.
The Taliban offers itself as a protector of poppy farmers, drawing great political capital as well as physical resources from opposing eradication and taxing the poppy fields. Measuring the size of illicit economies and any derivative numbers, such as profit levels, is notoriously difficult, but it is estimated that somewhere between 20 and 40% of the Taliban’s income comes from drugs.[9] But the Taliban is not the only actor profiting in multiple ways from the opium poppy economy. So are various criminal gangs, often connected to the government, the Afghan police and other elements of the Afghan security forces, tribal elites, and many ex-warlords cum government officials at various levels of the Afghan government, including the top one. In short, the illicit drug economy exacerbates insecurity, strengthens corruption, produces macroeconomic distortions, and contributes to a vast increase in drug use and addiction in Afghanistan. But it also provides a vital economic lifeline for many Afghans and enhances their human security. It thus produces political capital for those who sponsor opium poppy cultivation, and it is politically explosive for those who sponsor eradication. Neither opium poppy cultivation nor heroin production is a new, post-2001 phenomenon: each robustly existed during the Taliban era and before. Already during the 1990s, the Taliban sponsored and taxed opium poppy cultivation.[10] Its 2000 ban on cultivation drove production down by some 90% for one year, but was unsustainable, and indeed not sustained. Other decreases in poppy cultivation and opiate production since 2001, like last year’s, have been largely driven by the saturation of the global and local drug markets, by poppy crop disease, or by temporary coercive measures in certain parts of Afghanistan that could nonetheless not be sustained and have mostly already broken down. The structural drivers of the Afghan poppy economy, including, critically, insecurity, political power arrangements, and a lack of ready economic alternatives, remain unchanged and cannot easily be overcome for years to come. The Taliban and related insurgencies remain deeply entrenched, pose a serious threat to the survival of the Afghan government, and are a potential trigger of an expanded civil war. Although also facing serious constraints on its expansion and sustainment, and now also internal fragmentation and the rise of a rival Islamic State in Afghanistan,[11] the Taliban feels the battlefield momentum is on its side. As U.S. and ISAF troops significantly reduced their presence, they handed to the Afghan security forces and people an ongoing war that intensified significantly in 2014.[12] Unfortunately, many of the counternarcotics policies adopted during most of the 2000s not only failed to reduce the size and scope of the illicit economy in Afghanistan, but also had serious counterproductive effects on the other objectives of peace, state-building, and economic reconstruction. Most counterproductive of all, eradication and bans on opium poppy cultivation, often borne by the poorest and most socially marginalized, have generated extensive political capital for the Taliban and undermined counterinsurgency. In a courageous break with a previous counterproductive policy, the Obama administration wisely decided in 2009 to scale back poppy eradication in Afghanistan, but it struggled to implement its new strategy effectively.
Although the Obama administration backed away from centrally-led eradication, Afghan governor-led eradication goes on haphazardly, but since it is extremely politically explosive and dependent on good security conditions, it is minimal in its intensity. Blanket interdiction efforts served only to vertically integrate smuggling networks. Selective interdiction focused on Taliban-linked traffickers, conducted by NATO’s International Security Assistance Force for Afghanistan (ISAF) between 2008 and 2014, complicated the Taliban’s logistics, but did not severely weaken the Taliban. Moreover, during the surge in Afghanistan, ISAF-led interdiction became at times so intense that it approximated eradication in its negative effects on farmers’ well-being and their receptivity to Taliban mobilization. But the reduction in ISAF presence means that the scale of interdiction has radically declined, and, like the rest of Afghan policy, interdiction is subject to political favoritism, patronage, and corruption. Under such circumstances, interdiction efforts can reshuffle who controls a local drug market within a district or a province and alter an individual’s (often a top politician’s) political power and resource access, but they do not produce system-wide effects on drug production and smuggling in Afghanistan. Often designed ineffectively and implemented poorly, alternative livelihoods efforts rarely have generated sustainable income for poppy-dependent populations, if they have materialized at all. These outcomes are not surprising, and both Afghanistan and the international community should have expected them. Lessons from anti-poppy policies in Thailand, Burma, and even China, as well as from anti-coca policies in Peru and Bolivia, clearly indicate that even under the most auspicious circumstances, such as in Thailand in the 1980s, a significant sustained reduction of opium poppy cultivation would require years, even decades, of systematic and well-designed efforts as well as a prior end to military conflict.[13] Given high world demand for illicit opiates, suppression of poppy cultivation in Afghanistan would not leave a highly lucrative market unsatiated, but would shift it elsewhere. Unlike coca, for example, opium poppy is a very adaptable plant that can be grown under a variety of climatic conditions. Theoretically, its cultivation could spread to many areas: Central Asia, back to the Golden Triangle of Southeast Asia, or West Africa.[14] By far the worst scenario from a global security perspective would be the shift of poppy cultivation to the Federally Administered Tribal Areas (FATA), Khyber Pakhtunkhwa, or even Punjab in Pakistan. For over twenty years, Pakistan has been a major heroin refining and smuggling hub in the region. It has an extensive hawala system, including for moving drug profits. Today, these territories also have extensive and well-organized Salafi insurgent and terrorist groups that seek to limit the reach of the Pakistani state and topple the Pakistani government. A relocation of extensive poppy cultivation there would be highly detrimental to global security and counterterrorism interests since it would contribute to a critical undermining of the Pakistani state and fuel jihadi insurgencies and terrorism. Such a shift would not only increase profit possibilities for Pakistani belligerents, but also provide them with significant political capital by allowing them to become an important local employer sponsoring a labor-intensive economy in areas with minimal employment opportunities.
Nor would Pakistan be a newcomer to the drug trade. During the heyday of illicit poppy cultivation in Pakistan in the 1980s, opium poppy was grown in the Federally Administered Tribal Areas (FATA) and the then Northwest Frontier Province (NWFP; now renamed Khyber Pakhtunkhwa), with agencies such as Bannu, Khyber, and Dir being significant loci of cultivation. In many of these areas, opium poppy cultivation involved entire tribes and represented the bulk of the local economy in these highly isolated (geographically, politically, and economically) places.[15] Pakistan was also the locus of heroin production and smuggling, with prominent and official actors, such as Pakistan’s military and intelligence services, deeply involved in the heroin trade. Counternarcotics policies in Afghanistan must be judicious, well sequenced, and well prioritized. Eradication should remain suspended. It should only be undertaken in areas where a legal economy already exists and generates sufficient livelihoods. Interdiction operations should predominantly target the most dangerous actors, such as international terrorist groups, the Taliban, and the Islamic State, but also violent disruptive powerbrokers who oppose the Taliban but seek to bring down the Afghan government for their own political power-plays. Alternative livelihoods efforts should be streamlined into overall economic development and human capital development. Focused on secure areas, such efforts must include both rebuilding the rural economy and creating off-farm opportunities. Improving access to treatment for addicts and undertaking smart approaches to prevent opiate abuse should be greatly elevated in policy and funded far more extensively than has been the case so far. The latter is probably the most feasible, as well as important, policy tool. But perhaps the most important policy implication is patience and realistic expectations: barring some major exogenous shock, such as the outbreak of some poppy disease persisting in the Afghan soil for years to come, opium poppy will flower in Afghanistan for decades. This article was originally published by Ahora in Spanish. [1] See United Nations Office on Drugs and Crime (UNODC), “Afghanistan Opium Survey 2007: Executive Summary,” August 2007, http://www.unodc.org/pdf/research/AFG07_ExSum_web.pdf.[2] UNODC, “Afghanistan Opium Survey 2014,” November 2014, http://www.unodc.org/documents/crop-monitoring/Afghanistan/Afghan-opium-survey-2014.pdf: 7.[3] UNODC, “Afghanistan Opium Survey 2015,” http://www.unodc.org/documents/crop-monitoring/Afghanistan/_Afghan_opium_survey_2015_web.pdf: 8.[4] See UNODC, World Drug Report 2011 (New York: United Nations, 2011), http://www.unodc.org/documents/data-and-analysis/WDR2011/World_Drug_Report_2011_ebook.pdf: 45.[5] William Byrd and David Mansfield, “Afghanistan’s Opium Economy: An Agricultural, Livelihoods, and Governance Perspective,” The World Bank, revised version, June 23, 2014: ii.[6] See UNODC, “Afghan Opium Survey 2007.” Since 2002, the percentage of drugs to licit GDP has oscillated between 60 and 30 percent, not because the illicit economy has been reduced, but due to the expansion of some sectors of the legal economy, such as telecommunications. See, for example, Christopher Ward and William Byrd, “Afghanistan’s Opium Drug Economy,” World Bank Report No.
SASPR-5, December 2004, http://documents.worldbank.org/curated/en/2004/12/5533886/afghanistans-opium-drug-economy.[7] UNODC, “Afghanistan Opium Survey 2013,” December 2013, http://www.unodc.org/documents/crop-monitoring/Afghanistan/Afghan_Opium_survey_2013_web_small.pdf: 12.[8] See, for example, David Mansfield, “From Bad They Made It Worse,” Afghanistan Research and Evaluation Unit (AREU), May 2014. http://www.areu.org.af/Uploads/EditionPdfs/NRM%20CS6%20ver%202%20(2).pdf.[9] See, for example, Christopher M. Blanchard, “Afghanistan: Narcotics and U.S. Policy, Congressional Research Service,” Report No. RL32686, July 2009, www.fas.org/sgp/crs/row/RL32686.pdf; and Letizia Paoli, Victoria A. Greenfield, and Peter Reuter, The World Heroin Market: Can Supply Be Cut? (Oxford: Oxford University Press, 2009), 41-83 and 111-114.[10] Vanda Felbab-Brown, Shooting Up: Counterinsurgency and the War on Drugs (Washington, DC: The Brookings Institution, 2010): Ch. 5.[11] Borhan Osman, “Toward Fragmentation? Mapping the Post-Omar Taliban,” Afghanistan Analysts Network, November 24, 2015.[12] Erin Cunningham, “Taliban Fighters Seize Afghan Territories as NATO Chief Visits in Kabul,” The Washington Post, March 15, 2016.[13] Vanda Felbab-Brown, “Improving Supply Side Policies: Smarter Eradication, Interdiction, and Alternative Livelihoods and the Possibility of Licensing,” LSE Drug Reform Series, May 2014, https://www.brookings.edu/~/media/research/files/reports/2014/05/07-improving-supply-side-policies-felbabbrown/improvingsupplysidepoliciesfelbabbrown.pdf.[14] For a discussion of these drug markets and their history of drug production and trade, see, Vanda Felbab-Brown, “The Drug-Conflict Nexus in South Asia: Beyond Taliban Profits and Afghanistan,” in Daveed Gartenstein-Ross and Clifford May, eds., The Afghanistan-Pakistan Theater: Militant Islam, Security, and Stability (Washington, DC: Foundation for Defense of Democracies, May 2010): 90-112, and Vanda Felbab-Brown, “West African Drug Trade in the Context of Illicit Economies and Poor Governance,” Brookings Institution, October 14, 2010, https://www.brookings.edu/speeches/2010/1014_africa_drug_trade_felbabbrown.aspx.[15] Amir Zada Asad and Robert Harris, The Politics and Economics of Drug Production on the Pakistan-Afghanistan Border (Burlington: Ashgate, 2003) and Nigel J. R. Allan, “Opium Production in Afghanistan and Pakistan,” in Dangerous Harvest: Drug Plants and the Transformation of Indigenous Landscapes, Michael K. Steinberg, Joseph J. Hobbs, and Kent Mathewson, eds. (Oxford: Oxford University Press, 2004): 133-152. |
febf7009da1c7d5e75c27c4b2b8584f3 | https://www.brookings.edu/articles/how-americas-cities-are-growing-the-big-picture/ | How America’s Cities Are Growing: The Big Picture | How America’s Cities Are Growing: The Big Picture Suburban sprawl has been the dominant form of metropolitan-area growth in the United States for the past 50 years. This article analyzes the nature of such sprawl, why it occurs in U.S. metropolitan areas, the problems it causes or aggravates, and some alternative possible forms of future metropolitan-area growth. |
46a1423c6aa5e2825059e9d642a5a23e | https://www.brookings.edu/articles/how-china-is-responding-to-escalating-strategic-competition-with-the-us/ | How China is responding to escalating strategic competition with the US | How China is responding to escalating strategic competition with the US There seems to be a growing consensus in Beijing that U.S.-China relations will remain rocky for the foreseeable future. Even so, President Xi Jinping and others have been touting that time and momentum are on China’s side in its quest to move closer to the center of the world stage. Chinese officials recognize that they will need to overcome obstacles in their country’s pursuit of its national goals. To do so, China appears to be pursuing a three-pronged medium-term strategy: maintaining a non-hostile external environment in order to focus on domestic priorities; reducing dependence on America while increasing the rest of the world’s dependence on China; and expanding the reach of Chinese influence overseas. At the same time, China’s actions are generating significant reactions, both at home and abroad. Whether China can learn from this feedback loop to address its own vulnerabilities remains an open question, one that only China will be capable of answering. Understanding China’s evaluation of – and response to – sharp shifts in U.S.-China relations and its international environment has rarely been more important. Given its expanding economic reach and growing strategic weight, China’s actions now directly impact lives in the United States and around the world. Yet, in some respects, it has become more difficult to see clearly what assumptions and decisions are guiding China’s changing approach to America and the world. There has been more heat than light in many recent American debates about China’s ambitions. Travel restrictions due to COVID-19 have eliminated opportunities for both informal in-person exchanges with Chinese officials and first-hand observation of Chinese society, which often have served as one of the richest sources of insight into the policy zeitgeist in Beijing. And into this vacuum, many American scholars have come to rely more on interpreting official and semi-official Chinese texts to develop conclusions on China’s strategic direction. My previous government service at the U.S. Embassy in Beijing and in the White House National Security Council (NSC) has instilled in me humility about extrapolating China’s strategic designs from its publicly available statements and reports. Nevertheless, drawing from over 50 hours of Zoom-based dialogues with Chinese officials and scholars, a review of Chinese officials’ speeches and Chinese expert commentaries, and over a decade of interacting with senior Chinese officials on such questions, I do believe it is possible to draw some preliminary observations about China’s evolving approach to its changing international environment. This evaluation of China’s orientation toward the United States remained largely intact through January 2020, when both sides finalized negotiations on a “phase-1” trade deal. In the weeks that followed, the bilateral dynamic shifted sharply. Facing the humanitarian and financial losses resulting from the uncontrolled spread of COVID-19, President Trump shifted from touting Xi Jinping as his friend to branding China as his enemy and the source of the pain that many Americans were feeling.
And China largely reciprocated, pointing its propaganda cannons at America’s response to the public health crisis and the cascade of social, economic, and political problems that flowed from it. In subsequent months, a tit-for-tat pattern emerged, e.g., on treatment of each side’s journalists, on consulate closures, on recriminations over each side’s involvement in the origin of COVID-19, and on sanctions of high-level individuals in both countries. Beijing began mirror-imaging America’s economic pressure toolkit. Like the United States, China developed laws and regulations for export controls, national security investment screening, policy-related visa sanctions, and extraterritorial provisions in laws and administrative regulations. Beijing also grew less restrained in its actions at home and abroad. Chinese authorities advanced a campaign of brutal suppression in Xinjiang, tightened control of Hong Kong, crushed dissent across the country, engaged in deadly clashes with Indian troops for the first time in 45 years, punished countries and individuals that challenged China’s preferred narratives on sensitive issues, and pointedly criticized the performance of Western democracies. These actions represented a significant departure from the foreign policy focus on calibration and caution that had been observable as recently as spring 2019. There appears to be broad agreement among officials and experts in China that America’s power in the international system is declining relative to China’s. Many Chinese experts diagnose America’s anxiety about its relative decline as driving its reflexive efforts to undermine China’s rise. Chinese State Councilor and Foreign Minister Wang Yi gave expression to this viewpoint, for example, in his end-of-year interview with Xinhua on January 2, 2021. Reflecting on U.S.-China relations over the previous year, Wang concluded: In recent years, China-US relations have run into unprecedented difficulties. Fundamentally, it comes down to serious misconceptions of U.S. policymakers about China. Some see China as the so-called biggest threat and their China policy based on this misperception is simply wrong. What has happened proves that the U.S. attempt to suppress China and start a new Cold War has not just seriously harmed the interests of the two peoples, but also caused severe disruptions to the world. …China policy toward the United States is consistent and stable.3 In other words, Wang put forward Beijing’s boilerplate explanation for the downturn in relations – it’s America’s fault. There are a diminishing number of Chinese officials or experts who remain willing, at least visibly, to question this explanation of the downturn in U.S.-China relations. One of the few to do so, albeit subtly, is Wang Jisi, president of the Institute of International and Strategic Studies at Peking University. In a January 2021 op-ed, Wang observed, “Our actions at home and in the world determine to a large extent the attitude of the U.S. toward us. I believe that China, not the United States, can turn the tide of U.S.-China relations at historical junctures, although this position may be debatable.”4 One issue where there does appear to be convergence of views inside China, though, is the expectation that there will be continuity in America’s strategic orientation toward China from Trump to Biden.
Even as Chinese experts acknowledge that the Biden administration likely will adopt a more nuanced tone and professional approach for dealing with problems, they expect the root causes of American antagonism toward China will remain unchanged. Reflecting this view, Yuan Peng, an advisor to China’s top leaders and president of the China Institute of Contemporary International Relations (CICIR), assesses, “A divided United States and polarized politics will limit Biden’s room to maneuver and force him to focus more energy on domestic challenges. …Biden’s first priority is to reunite the United States. …The U.S. will be consumed with dealing with its own structural challenges for many years.”5 Peking University’s Wang Jisi similarly has concluded, “American policy toward China will continue unchanged under Biden.”6 Beijing appears to be preparing for a long-term struggle with a declining but still dangerous United States. Privately, according to a well-informed policy advisor, China’s leadership has re-evaluated long-term trends and concluded that it no longer can base its national plans on expectations of generally stable relations with the United States.7 Partly as a result, Chinese leaders have pulled forward plans to promote a “dual circulation” economic strategy. In rolling out the strategy, President Xi Jinping explained, “Only by being self-reliant and developing the domestic market and smoothing out internal circulation can we achieve vibrant growth and development, regardless of the hostility in the outside world.”8 China’s spokespeople and official Chinese state media have sought to set public expectations for a long-term struggle with the United States. Key security officials, such as Politburo member and domestic security chief Guo Shengkun, have warned of the likelihood of a long-term struggle with the United States.9 China’s leaders now often refer to “profound changes unseen in a century” to describe their evaluation of the current fluidity in the international system. These changes often are presented as a paradox, posing both risks and opportunities for China. On one side of the coin, “profound changes unseen in a century” portend dangerous challenges to China. Politburo member and top diplomat Yang Jiechi framed the challenges by noting, “The world economy has been hit hard. …The pandemic has had a tremendous impact on international trade, investment, consumption, and other economic activities. …The pandemic [also] has exacerbated social cleavages, ethnic conflicts, and political confrontations. …The number of international security risks has increased.”10 On the other side of the coin, Yang observed, “Reform of the international order has sped up. The PRC has taken the lead in controlling the epidemic on a global scale, and in achieving full resumption of work and production, all parties have increased their expectation and reliance on China.”11 Thus, despite expectations of a protracted struggle with the United States, a view of China as an ever more central actor in the international system appears to be gaining traction inside China. At least outwardly, China’s leaders have grown self-congratulatory in their assessment of global trends working in China’s favor.
In October, Xi Jinping told cadres at the 5th Plenum that “time and momentum are on our side.”12 Similarly, Chen Yixin, secretary general of the Central Political and Legal Affairs Commission – the top oversight body for China’s domestic security – told a study session on January 15, “The rise of China is a major variable [in the world today]…the rise of the East and decline of the West has become a trend; changes of the international landscape are in our favor.”13 Discerning Beijing’s medium- and long-term strategic objectives has become one of the most heated debates in Western discourse on China. Proponents of viewing China as a malevolent power that seeks to impose its vision and its values on the rest of the world have been emboldened in recent years, largely as a result of China’s brutish behavior at home and abroad. China’s trampling of its citizens’ rights in Xinjiang and Hong Kong has undermined arguments outside of China that the country will pursue its national ambitions in a benign manner. From my vantage, Beijing sees itself as progressing along a continuum leading to China’s restoration as a central actor in Asia and a leading power on the world stage, a country with greater ability to shape rules, norms, and institutions toward its preferences. China’s leaders have consistently made clear their desire to have their political and economic models respected.14 It also has been a consistent feature of Chinese foreign policy to push for deference to its “core interests.”15 China’s foreign policy practitioners have explained that the country’s external relations should support its national goals, particularly its sustainable development.16 These goals include realization of the country’s 14th five-year plan, its 2035 plan, and its second centenary goal of becoming a prosperous, strong, advanced country by the 100th anniversary of the founding of the People’s Republic of China in 2049.17 In recent years, key Chinese voices also have become more willing to articulate China’s global ambitions. For example, Politburo member Yang Jiechi has written about the need for China’s foreign policy to lay a foundation “for national rejuvenation and provide an important guarantee for us to lead the world’s great changes and shape the external environment” (emphasis added).18 To reach its long-term goals, Beijing recognizes it must first overcome near-term obstacles. One such potential obstacle is the formation of allied blocs to oppose Chinese initiatives and to obstruct China’s rise. Such concerns have taken on added urgency with Joe Biden’s election as President given Biden’s sustained emphasis on coordinating with allies and partners to push back against Chinese behaviors of concern. Although the absence of any publicly available definitive government strategy document makes it difficult to assert what strategies China will employ to advance its objectives, a few preliminary judgments can be reasonably made based on behavioral pattern recognition, statements by senior officials, and commentary by Chinese experts and policy advisors. In recent years, three strategic lines of effort have become visible. 1. Maintain a Non-hostile External Environment Key features of Beijing’s medium-term strategy appear to be seeking to lower the temperature of tensions with the United States, strengthening ties with its neighbors, deepening relations with Russia, and encouraging the European Union’s continued movement toward strategic autonomy. 
Beijing sees such efforts as critical to breaking what it perceives as Washington’s encirclement strategy of China. China’s leaders also view it as important to keep external problems at bay in order to maintain the primary focus on addressing domestic concerns – including its anti-poverty, anti-pollution, and anti-corruption campaigns – upon which public perceptions of its performance ultimately will be most heavily based. On U.S.-China relations, China’s Foreign Ministry spokesperson has been appealing to the “better angels” of U.S.-China relations to lead the relationship away from adversarial antagonism.19 State Councilor and Foreign Minister Wang Yi has been promoting “peaceful coexistence and win-win cooperation” with the United States.20 At the same time, Beijing has signaled no willingness to moderate its approach to Xinjiang, Hong Kong, Tibet, human rights, or Taiwan. Beijing’s unwillingness to recalibrate its approach to issues that are most inflaming U.S.-China tensions effectively forecloses any broad improvement in overall relations. At best, the U.S. and China will be able to manage tensions and lower the temperature on recent mutual recriminations. On regional affairs, China’s completion of the Regional Comprehensive Economic Partnership (RCEP) marked a significant step in its efforts to strengthen relations with its neighbors. The trade bloc accounts for nearly 30 percent of global GDP and global population.21 RCEP has positioned China at the heart of the world’s largest trade grouping in the most dynamic region of the global economy, thus ensuring China will remain central to regional value chains, not isolated from them. Even beyond trade, China has tailored a regional strategy that speaks to the top interests and concerns of leaders in the region. Foreign Minister Wang has laid out a China-ASEAN agenda for 2021 that is focused on defeating COVID-19; bolstering economic recovery; and pushing forward poverty reduction, disaster prevention and relief, climate change, and environmental protection.22 In so doing, Wang appears to be acting on a recognition that many leaders in Asia prioritize economic development and improvement of social conditions. Regional leaders are not indifferent to security concerns, but they recognize that economic instability poses a more proximate threat to their hold on power than the risk of armed conflict. On Russia, China has shown sustained interest in steepening the upward trajectory in overall relations. In recent years, Beijing and Moscow have grown closer across the full range of relations, including technological and military cooperation. Foreign Minister Wang Yi now touts both countries as standing “side by side against power politics,” supporting each other’s core interests, and serving as each other’s “strategic anchor” and “global partner.”23 Beijing also has been encouraging the European Union to pursue strategic autonomy, including by resisting Washington’s entreaties for Brussels to join a trans-Atlantic front in opposition to China. Such encouragement by Beijing for Brussels to chart its own path on the world stage has been a mainstay of leader-level communications for years. 
Foreign Minister Wang Yi put a fine point on such messaging in his end-of-year press interview on January 2, 2021, when he implored China and the EU to dedicate themselves to “unity and cooperation rather than group politics,” and to “transcend systemic differences rather than draw lines along ideology.”24 Beijing’s desire to forestall trans-Atlantic policy convergence on China seems to have played a role in the December 30, 2020, conclusion of negotiations on a China-European Union Comprehensive Agreement on Investment. After seven years and 35 rounds of negotiations, the imminent inauguration of Joe Biden appears to have provided an impetus for Beijing to make fresh concessions that contributed to getting the agreement over the finish line. In this contest between Washington’s efforts to form coalitions to confront China on specific issues and Beijing’s counter-bloc strategy, Beijing does not appear to be taking success for granted, at least in its messaging to foreign audiences. Xi Jinping used an address to the Davos World Economic Forum on January 25 to warn of the dangers of attempts to build an alliance of democracies to counter China. Xi warned, “Forming small groups or launching new cold wars on the world stage …would only push the world toward division, if not confrontation.” He stressed, “Repeatedly, history and the reality remind us that, if we walk down the path of confrontation – be it a cold war, a hot war, a trade war or a tech war – all countries are going to suffer in terms of their interests and their people’s well-being.”25 2. Reduce Dependence on America While Increasing the World’s Dependence on China Faced with the prospect of being cut off from or having curtailed access to American supply chains, Chinese leaders in recent years have intensified their push to diversify economic relationships and strengthen self-sufficiency. They also have pursued policies that have had the effect of encouraging other countries to become more dependent upon China for their own economic development. China’s “dual circulation strategy” seeks to reduce dependence on foreign suppliers through a domestic cycle of production, distribution, and consumption, alongside a separate cycle of external trade of goods and services. The early results of this approach provide cause for optimism from Beijing’s perspective. In 2020, the world became more reliant on China for growth. China’s economy is expected to account for 16.8 percent of global gross domestic product, adjusted for inflation, the most of any country in the world, according to forecasts by Moody’s Analytics.26 China is forecast to smash historical records in 2020 with the largest current account surplus of any country in history.27 China also became the largest recipient of foreign direct investment in 2020, thereby displacing the United States from its customary role as the largest magnet for foreign capital.28 Chinese economic growth and rising living standards have fueled external demand for commodities, as well as autos, luxury goods, and other sectors. This demand has made trading partners that specialize in these industries more dependent upon exports to China for future growth. This dynamic, combined with recently completed trade and investment agreements with ASEAN and the EU, respectively, has reordered China’s trade patterns. In 2020, the ASEAN bloc became China’s No. 1 trading partner, with the EU moving to No. 2 and the US falling to third place.
At the same time, China is embarking on an aggressive push to become more self-sufficient in the high technology sector. This directive, endorsed by Xi Jinping, has been embraced at every level of China’s government and Party institutions. Jiang Jinquan, head of the Communist Party’s influential Central Policy Research Office, recently framed technological self-reliance as being essential for overcoming America’s efforts to impede China’s scientific and technological development.29 Beijing has been allocating eye-popping sums of money to push China down the path of technology independence. Whether on domestic semiconductor development,30 next-generation technology infrastructure,31 artificial intelligence,32 biotechnology, aerospace, or a range of other advanced technology sectors, the Chinese government has laid out ambitious plans to become the global pacesetter. Across these sectors, China’s technology incubation strategies have combined a closed domestic market, massive subsidies for domestic national champions, aggressive acquisitions of intellectual property, strategic investments in firms in Silicon Valley and elsewhere, and cyber and other means of relentless industrial espionage. 3. Expand the Reach of Chinese Influence Overseas In recent years, Chinese authorities have become more proactive in seeking to extend their reach into other countries. Chinese entities have made significant investments in overseas media platforms as part of the central government’s mandate to strengthen China’s discourse power.33 Beijing has sought to present itself abroad as a non-revolutionary power, a contributor of global public goods, an opponent of geopolitical bullying, and an upholder of regional and global stability.34 In Beijing’s preferred telling, China is a benevolent rising power, standing on the side of science and reason to lead global efforts to beat back the spread of COVID-19 and to counter the effects of climate change. As part of such efforts, Chinese media outlets also have been generating a wave of commentaries extolling the virtues of China’s governance model and the shortcomings of Western democratic governments, e.g., in containing the spread of COVID-19, delivering economic growth, maintaining social stability, etc. Chinese officials and Chinese media outlets have employed an increasingly sharp tongue in responding to perceived slights to China’s international image. China’s Executive Vice Foreign Minister, Le Yucheng, has justified this approach by explaining that China “cannot submit to the unscrupulous suppression by hostile anti-China forces but naturally fights back. The criticism about ‘wolf warrior diplomacy’ is another version of ‘China threat theory’ and another ‘speech trap,’ which aims to make China give up and never fight back. China’s diplomacy has always been free from all cowardice or obsequiousness and firmly determined to defend national interests and dignity.”35 Reports of China’s use of coercive, corrupt, or covert tools to interfere in other countries’ domestic political decisions also have become more common.36 China also has sought to leverage its expertise in infrastructure construction to push forward the Belt and Road Initiative (BRI), which, at its core, seeks to increase China’s influence in many countries across the globe. Beijing also has been punishing countries and foreign individuals that have promoted viewpoints that have challenged China’s preferences.
Beijing’s justification of its economic penalties on Australia, for example, was based in part on the Australian government’s call for an independent assessment of the origins of COVID-19, and on reports issued by an Australian think tank on Xinjiang that Beijing found objectionable. A similar story applied to China’s announcement of sanctions on twenty-eight former Trump administration officials for advocating or implementing policies that Beijing opposed.37 At the same time, Beijing has been expanding the overseas mandate of its domestic security agencies, including through extradition treaties, institutional partnerships between Chinese and foreign security agencies, new legal provisions,38 as well as the export of high-technology tools of surveillance to foreign governments.39 In so doing, Beijing appears to be seeking to advance three interlocking objectives. The first is to have a chilling effect on any individual, Chinese or foreign, who advocates views or policies that challenge Chinese interests, broadly defined. Beijing wants to build an impression, particularly among its expatriate community, that no individual is beyond the reach of Chinese law enforcement. Second, Beijing has a growing need to strengthen its capacity to protect Chinese citizens and commercial interests overseas. Third, Beijing would like to encourage more countries to emulate or to draw from its practices for addressing security challenges. The more that countries embrace Chinese practices and/or Chinese surveillance technology, the more likely it will be for Beijing to gain legitimation overseas for its own domestic security model.40 It remains an open question as to whether China’s medium-term strategy will enable China to overcome hurdles that stand in the way of achieving its national ambitions. China’s strategic choices are not made in a vacuum. Chinese actions often generate reactions, whether at home or abroad. For example, China’s tightening grip on the corporate sector appears to elevate control over innovation. This raises a fundamental question about whether a system that presses for conformity and adherence to plans is capable of allowing the unorthodox and boundary-testing thinking that is the lifeblood of next-generation innovations. Such constraints may partly explain why some of China’s most creative minds, such as the founders of the video-conferencing service Zoom and chipmaker Nvidia, along with many of the world’s leading AI researchers, have chosen to pursue their goals outside of China. Beneath China’s flashy economic growth numbers, there also are flashing warning signs about the long-term health of the economy. One such indicator is the declining growth in productivity – or output per worker and unit of capital. China’s economy is only 30 percent as productive as the world’s best-performing economies, such as the U.S., Japan, or Germany, according to the IMF.41 And as China’s aging population demands more resources for social services, this will place stress on the government’s ability to continue propping up growth with government expenditures and state-sector investments. China also confronts questions about whether its pursuit of technological self-sufficiency is achievable or practical as a policy goal. Without access to advanced lithography and other critical external inputs for semiconductor manufacturing, it will be very difficult for China to produce cutting-edge chips that are necessary inputs for China to achieve its technological ambitions. 
The more adversarial Beijing’s relationship with other advanced powers becomes, the more of a long shot its attempts to achieve technological self-reliance become. Similarly, China’s domestic policies are failing to win over the Chinese who live along the country’s borders. There are growing numbers of examples of ethnic Mongols, Uyghurs, Tibetans, and others chafing at Beijing’s intrusive involvement in their lives and its attempts to impose cultural conformity.42 Ditto for Hong Kong.43 The tighter Beijing squeezes, the more that negative attitudes toward China appear to be hardening along the country’s inner periphery and in many parts of the world.44 The United States government already has characterized China’s conduct in Xinjiang as an act of genocide.45 Furthermore, China’s stated ambitions and determined efforts to become a world leader in an expanding number of high technology fields, and to push for rules and norms around those technologies that reflect Beijing’s illiberal tendencies, have generated unease in many parts of the Western world. In response, London has proposed the establishment of a D-10 of leading powers (the G-7 plus Australia, South Korea, and India) to pool resources and align policies to accelerate development of new technologies in democratic societies. By the same token, the more loudly nationalistic China’s diplomacy becomes, the more alarmed many Western countries have become about China’s domestic and foreign policy trend-lines. China’s expanding interests overseas will demand a greater Chinese presence. Already, as the PLA Navy has become more active beyond its immediate periphery, so too has the level of coordination among other powers risen in response. This trend can be seen in the Indian Ocean, where there have been corresponding increases in Chinese naval activity alongside rising security coordination among like-minded powers (i.e., the “Quad”: Australia, India, Japan, and the United States). Perhaps for some of these reasons, some Chinese experts have been urging sobriety in evaluations of China’s position in the international system. For example, Renmin University scholar and government advisor Shi Yinhong recently cautioned: China’s chances of filling the vacuum created by the Trump administration’s abandonment of America’s original “global leadership role” are limited, and indeed smaller than many at home and abroad predicted. The appeal of China’s “soft power” in the world, the resources and experiences available to China, are quite limited, and the domestic and international obstacles China will encounter, including the complexities created by the coronavirus pandemic, are considerable.46 Experts such as Shi Yinhong appear to be warning against presupposing that China will continue to ascend on a linear trajectory indefinitely in the direction of its national ambitions. Analysts outside of China similarly would be well-served to preserve a healthy degree of modesty in forecasting China’s future path. From Mao’s upheavals to Deng’s reform and opening, from the Tiananmen tragedy to double-digit economic growth, from low-profile foreign policy to brash assertiveness on the world stage, China’s path over recent decades has navigated a series of shifts. These shifts have been driven in large measure by a dynamic interaction between China’s strategic goals, its evaluation of its external environment, and its domestic requirements. This dynamic interaction between external and internal forces has not ended under Xi Jinping.
Going forward, the interplay of these forces on China’s policy decisions will merit continued in-depth study. |
e6b0ce5b679312218fca1fecc423be78 | https://www.brookings.edu/articles/how-the-center-right-co-opts-the-far-right-in-austria/ | How the center-right co-opts the far-right in Austria | How the center-right co-opts the far-right in Austria The various country cases discussed in this project show the extent to which right-wing populist parties have shaped and radicalized the discourse on Islam and Muslims in European societies. In contrast to the United States, with its de facto two-party system, the multiparty systems in Europe afford smaller and niche parties greater opportunities to engage in agenda setting and to influence larger parties. This happened with radical right and right-wing populist parties, which, once marginal, have grown their support and can now often act as agenda setters in national politics. This is especially salient in countries where such parties are part of governing coalitions, as has been the case in Denmark, Italy, Norway, Switzerland, and Austria. In contrast to Western Europe, in Hungary and Poland mainstream right parties have basically transformed themselves into radical right populist parties. But this radicalization process was not the result of niche party influence; rather, it was due to ideological shifts and voter-seeking strategies of the governing parties themselves. The Austrian case is instructive because the governing People’s Party (ÖVP) under the leadership of 33-year-old Sebastian Kurz has started co-opting the anti-Muslim (and anti-immigration) agenda of the Austrian Freedom Party (FPÖ). Here, we have a nominally center-right party rather than a radical right-wing party (like the League in Italy) playing a central role in the implementation of anti-Muslim positions. It is thus important to understand how these policy claims have been popularized and subsequently implemented through legislation and other measures such as shutting down mosques, banning facial covering in public, and issuing a headscarf ban in kindergartens. Despite the many differences among the cases in the project, there are also crucial similarities. For instance, the Dutch case highlights the rather specific context of a consensus-oriented political system in which right-wing anti-Muslim rhetoric has emerged. Anti-Muslim discourse in the Netherlands is based on the notion of defending liberal values against the perceived anti-liberal tendencies of Islam. This stands in stark contrast to the situation in Hungary, where, among the far-right, the dominant notion centers on defending Europe’s traditional conservative Christian character. Typically, Western Europeans do not perceive their countries as especially Christian or, for that matter, Europe as a Christian continent, which is an important difference with Poland and Hungary. In Western Europe, the idea of a “Judeo-Christian tradition in Europe” has become a new way of excluding Muslims from the construction of a national identity. As radical right parties have gained at the polls, they have also increased their cooperation. This trend has culminated in the emergence of a populist far-right party group in the European Parliament — Identity and Democracy (ID) — for which Europe’s Christian and anti-Muslim identity is a central pillar. These radical right parties have strategically shifted from their former, often more explicitly anti-Semitic profile to an anti-Muslim one. Steve Bannon’s transnational operation The Movement is another recent example of such increased transnational cooperation.
Understanding the structure of these networks and their connections to external actors like Russia is important for understanding the future development of Europe’s radical right parties. From a comparative perspective, understanding the different influences of each national context is important. For example, a widely shared understanding of national identity in secular terms plays into producing a perception of Islam as “the other.” This is the case in France. In addition, socially liberal countries such as the Netherlands offer a completely different point of departure when dealing with Muslim minorities than do countries with a strong Catholic tradition. To take another example, the East-West divide within Germany can also be viewed from the perspective of varying degrees of secularism. Eastern Germany, with its long history of anti-religious socialization under state socialism, may have influenced the way in which anti-Muslim mobilization thrives. The emergence of Patriotic Europeans Against the Islamization of the Occident (PEGIDA) as well as the relative strength of the Alternative for Germany (AfD) in Germany’s eastern states shows the need for further research into how this historical legacy is shaping voter preferences. How center-right and center-left parties respond to the radical right’s othering of Muslims also plays an important role, depending on whether such positions gain mainstream acceptance (e.g., the Christian Democrats in Austria) or are rejected and delegitimized (e.g., Christian Democrats in Germany). While conservative parties in many cases fully co-opt the radical right agenda, social democrats on the center-left tend not to have a clear position on Islam as they seek to appeal to both more cosmopolitan urban voters and the working class. Lastly, Muslim constituencies, or, more generally, pro-diversity constituencies that want to include Muslims in their societies without demanding assimilation, also shape the political system. In the Netherlands, Muslims became estranged from established political parties, which led two former Social Democratic politicians to set up their own political party, called DENK. In other cases such as Austria, Muslims are integrated into more mainstream parties. Returning to the similarities between cases, an experience shared by many contributors to this project was methodological: it was more difficult to find party representatives willing to make themselves available for interviews on the specific issue of Muslims as compared to discussions on other issues. Hence, while publicly criticizing Islam and Muslims is more accepted, and more “mainstream” today than it was some years ago, party members still seem generally reluctant to discuss this development and its underlying reasons. Still, our interviews with voters and supporters of the Austrian Freedom Party offered considerable insight. The extensive discussions and listening to their concerns afforded us an opportunity to trace their individual reasons for their skepticism toward Muslims. The conversations suggested that concerns about Muslims are less the result of actual personal experiences than of impressions gained from public discourse, which is shaped to a large extent by the very parties hostile toward Islam. In addition, the interviews also revealed certain inconsistencies.
This was the case, for example, when the same person who criticized Muslims for parading their religion, because, according to the interviewee, religion ought to be a personal and private thing, simultaneously complained that Christian symbols and traditions were being pushed out of public life to appease Muslims’ supposed desires (e.g., discussions about banning crucifixes or Christmas-related celebrations in public schools and kindergartens). To be sure, we conducted a small number of interviews, which were not representative of the entire population of Freedom Party supporters. However, our initial findings suggest that engaging supporters of populist parties instead of merely talking about them — as some mainstream political parties do — could be an opportunity to mitigate some of the hostile views toward Islam while responding to these citizens’ concerns. |
e680c6dc2ba5c3c2f02fc582c3715dba | https://www.brookings.edu/articles/how-will-chinas-privacy-law-apply-to-the-chinese-state/ | How will China’s privacy law apply to the Chinese state? | How will China’s privacy law apply to the Chinese state? China’s government is drafting its first Personal Information Protection Law (the “Draft”) to regulate the collection, storage, use, processing, transmittal, provision, and disclosure (collectively, “handling”) of personal information by “organizations and individuals.” Most attention so far has surrounded the Draft’s application to companies. But it also specifically imposes personal information handling requirements on “state organs.” These include China’s legislatures, courts, procuratorates, supervision commissions, and military commissions, in addition to administrative departments under the central government—the State Council—and all levels of government throughout the country. The Draft’s inclusion of state authorities is notable, given the Chinese government’s national security orientation and broad information access powers as regulator and enforcer. As discussed below, actual enforcement of the Draft’s obligations against the state will be challenging. Nonetheless, China’s privacy law will be supported, in principle, by an evolving and ostensibly privacy-protective regulatory framework that purports to constrain, as well as empower, public authorities. The Chinese government, like all governments, collects and creates massive amounts of information in connection with diverse regulatory, security, law enforcement, and social welfare tasks. It also regulates data flows and is responsible for the security of data created or acquired by government departments throughout the country, which is to be governed by a proposed Data Security Law that also covers state organs. Many Chinese citizens have seemed rather untroubled by governmental, as opposed to commercial, collection and use of personal information. But they are raising concerns about hacking, illegal sale, and leaks of personal data, whether held by private entities or the government, and about practices like publicizing blacklists of court judgment defaulters. Citizens are contesting, including through lawsuits, over-collection and abuse of personal data through facial recognition and other forms of surveillance technology during the COVID-19 epidemic and more generally within public spaces, prompting some municipal bans. Chinese regulators acknowledge that, in the information age China has embraced, personal information protection is among the most direct concerns of the people. The Chinese Communist Party (CCP) and State Council even cited personal information infringement as an issue that could impact social stability during the upcoming Spring Festival holiday. Although some revisions are likely before enactment of the law, the Draft subjects state organs to its general limiting principles of legality, legitimacy, necessity, and minimum scope for data handling, and specifies they must handle personal information according to their legal authority and not exceed the scope and limits necessary to carry out their statutory duties (Article 34). Like other personal information handlers, state organs must notify individuals and obtain their consent, unless handling such personal information is necessary to fulfill statutory duties, respond to public health and other emergencies, or take other action in the public interest (Article 13). 
For state organs specifically, notice and consent is also not required if laws adopted by the national legislature or administrative regulations issued by the State Council (collectively, “law”) require confidentiality or where it would impede performing their duties (Article 35)—situations that presumably would apply to national security and law enforcement matters. Without the individual’s consent, state organs must also not publicly disclose or provide others—including other state organs—with personal information they handle, absent authorization stipulated in law (Article 36). State organs must further comply with general requirements relating to automated decision-making (Article 25) and use of facial recognition and surveillance for public safety purposes (Article 27). China’s highest law, the Constitution, stipulates that all state organs must abide by and be held accountable for any violation of the Constitution and the law; specifically protects as civil rights a citizen’s personal dignity and confidentiality of correspondence—foundational concepts supporting privacy protection; and grants citizens compensation for infringement thereof by state organs or personnel. The new Civil Code, adopted last year, in Article 1039 expressly requires state organs and staff to keep confidential, and not leak or unlawfully provide to others, the personal information they learn while performing their duties. Diverse other laws require courts to protect privacy when trying cases and publishing decisions; procuratorates to do the same when investigating crimes; and judges, procurators, and supervision personnel to keep confidential private information they learn through their work. Moreover, a recent law covering “public employees” of both the Chinese Communist Party (CCP) and state entities imposes sanctions for unlawfully divulging private information acquired in their official capacities. The Draft does not mention privacy, and instead identifies a category of “sensitive personal information” requiring special procedures (Article 29), but the Civil Code (Article 1034) defines privacy as a subset of personal information. The administrative bureaucracy is subject to the widest array of law on information management generally. China has long regulated government information through archives and state secrets legislation and, more recently, through regulations and policies on the internet, e-government, credit reporting, and social credit. The Draft would operate within a regulatory environment that emphasizes the sharing and public disclosure of government-held “information resources,” breaking down “information silos” to facilitate more efficient governance, innovation and economic development, social welfare, and a well-functioning market. A joint CCP and State Council “informatization” project is building a basic information resources system for collecting, managing and using information, which is to protect privacy and other confidential information while ensuring information usability. CCP proposals for China’s 14th five-year development plan, for the period 2021–2025, urge both orderly opening of basic public information and enhanced personal information protection. Chinese laws, including expansive national security legislation, and administrative regulations, increasingly require personal and other information handling by government agencies to accord with law and be confined to what is necessary for carrying out statutory duties. 
State Council Open Government Information (OGI) Regulations promote public disclosure of government-held records as a general presumption, but prohibit administrative organs from releasing private information absent consent, subject to a public interest override. Public interest–based disclosure would constitute a statutory exception to the Draft’s consent requirement for disclosure (Article 26), but individuals can contest such decisions. Apart from public disclosure, the Draft normally requires consent to provide personal information to others (Article 24), which could inhibit inter-governmental sharing of government information that contains personal information. Such sharing, encouraged to facilitate efficiency of government services, is currently regulated by policy, not law. Following earlier, local provisions, 2016 State Council measures stipulate “government information resources” shared across departments should be lawfully collected by government departments, managed within the scope of their legal authority, and used to perform government functions. While government-produced information is presumed eligible for sharing, departments seeking information from other departments must indicate their need for and use of requested information, which is provided based on catalogues that classify information for unconditional, partially restricted, and no sharing. Other State Council provisions, which explicitly require protecting personal information, prohibit government service providers from using material shared by administrative counterparts for purposes unrelated to their services. Concerns about privacy and personal information protection are prominent in China’s evolving, fragmented social credit “system” (SCS), which entails governmental collection and sharing of regulatory information across departments and levels of government, and disclosure of “public credit information” (PCI) that is generated or acquired while performing regulatory duties, such as fines, punishments, court orders, and professional licenses. The Civil Code identifies one’s credit as an important reputational element (Article 1024) and grants individuals the right to request correction, deletion, and other measures regarding their credit information (Article 1030). Credit reporting and enterprise information publicity components of the SCS are governed by State Council regulations that impose governmental privacy protective obligations, as does some local social credit legislation. However, SCS development to date is largely governed by policy documents, which are not “law.” The State Council’s foundational 2014 SCS development plan called for regulating personal information handling, misuse, and protection, and 2016 guidance on personal creditworthiness requires privacy protection, prohibits collecting personal PCI unless authorized by law, and advocates compiling a national personal PCI catalogue—still being finalized—with classification and sharing standards. Concurrent guidance on government integrity limits official action to that expressly authorized by law, and conditions government information disclosure on protecting privacy. Court provisions on publishing judgment defaulter lists, considered part of the SCS inter-departmental enforcement system, stipulate non-private information to be released. 
December 2020 State Council guidance on standardizing the SCS further emphasizes privacy protection and requires the government to observe principles of legality, legitimacy, necessity, and minimization, and to state clearly the purpose, method, and scope when collecting and using private information. Disclosure of personal credit information in particular must be based on consent or laws, regulations, or State Council decisions and orders. The Draft specifies safeguards and remedies concerning violations by personal information handlers, some of which are not entirely new. In the Draft, individuals are given the rights to access and copy their personal information (Article 45) and correct (Article 46) or delete (Article 47) inaccurate or illegally collected information. They already may access and request correction of information relating to themselves held in government files pursuant to the 2007 OGI Regulations, which further provide the right to request reconsideration of or litigate an administrative organ’s unsatisfactory response. They can file objections and correct inaccurate personal information in social credit files and credit reports, including reports issued by the government-sponsored Credit Reference Center. The Draft empowers individuals to file complaints and reports with responsible departments concerning illegal handling of personal information (Article 61), for which some procedures already exist. Most state organs have online channels for filing claims and petitions concerning rights violations, as well as other matters, and individuals may file reports and accusations regarding the CCP with its discipline inspection authorities. The Draft also provides that unlawful acts be recorded in personal information handlers’ credit files (Article 63), and administrative organs and staff have their own files under the SCS. Where state organs fail to fulfill their personal information protection duties, the Draft directs their superior organs or other competent departments to order correction and discipline responsible officials (Article 64), internal redress mechanisms already codified in law. Individuals may seek compensation for handling activities that infringe their personal information rights, including court determination of the amount (Article 65). While the Draft does not establish the compensation procedure, individuals should be entitled to sue administrative organs for compensation for actions taken and failures to act that violate the Draft’s requirements. Article 67 provides criminal liability for violations that constitute a crime; China’s Criminal Law imposes fines on units including, at least theoretically, “organs” and criminal liability on their personnel for selling or illegally providing personal information. The Beijing CCP committee and municipal government in December endorsed making personal information protection violations eligible for procuratorate-led public interest litigation, a remedy the Draft anticipates for large-scale infringements (Article 66), against both administrative organs and civil entities. However, while the top court in December added “disputes over privacy and personal information protection” to official civil causes of action, making it easier to sue private persons, it did not include them in its administrative causes of action, thus raising questions concerning judicial enforcement against administrative organs, at least until the Draft becomes law. 
Moreover, Chinese courts cannot entertain administrative lawsuits involving foreign policy or national defense, and are reluctant to adjudicate national security matters. Remedies against the non-administrative state organs such as legislatures and courts are even more problematic. It is doubtful individuals can seek formal legal remedies and compensation for infringement of personal information rights from state organs other than government agencies in the absence of clear procedures stipulated in other laws. Further legislation and implementing regulations will be required to shore up the statutory basis and establish procedures for applying the Draft’s limiting requirements to administrative and other state organs. They typically would be required to publish detailed rules on personal information handling, including its statutory basis, purpose, necessity, use, and scope; procedures to access, copy, correct, and delete information; and available remedies. They should release drafts for public comment, as required by law and as China’s State Internet Information Office did for its draft scope of necessary personal information for mobile applications. They should also publish the compliance audits required by Article 53 to enhance implementation. Clearly, the Draft’s application of personal information handling requirements to all state organs reflects a largely aspirational intent at present, and it would maintain broad authority for state organs to access and use personal information to perform broad statutory functions. And formal legal challenges to even administrative actions are seldom successful, although they can help foster improvements. Yet, China is developing the legal infrastructure for a comprehensive, privacy protective government information management system onto which additional personal information handling requirements for administrative—and potentially other—organs can be grafted. The Draft reinforces or codifies: existing information handling principles of legality, necessity, and minimization that already apply as a policy matter to government employees and are being enforced as to privacy limitations on disclosure through OGI litigation; the privacy protection obligation of all public employees, which has been criminally enforced against government staff including police that leak or unlawfully sell private information; and the rights to access and correct erroneous personal information and seek compensation for infringement, through private as well as official enforcement. Overall, the Draft generally aligns with global privacy trends. It also provides some common ground, in principle, for China’s participation in formulating international personal information protection norms (Article 12). To be sure, the CCP’s role in overseeing China’s legal system, which faces substantial enforcement challenges, China’s divergent stance on data sovereignty, its ubiquitous use of surveillance, and other seemingly intractable issues including how to balance privacy against broad cybersecurity and national security priorities complicate the prospect of reaching agreement on global data governance rules. Nonetheless, the Draft suggests that China is taking personal information protection seriously and establishing related legal checks on government authority for ordinary operations, based on domestic dynamics propelled by the expectations of the Chinese people. 
The final law, which should undergo at least one more round of public comment, should explicitly grant citizens the legal tools to help assure a measure of enforceability of their privacy rights against the Chinese state. |
9bf7136b28d9551e2e7c7a076ec566a9 | https://www.brookings.edu/articles/indonesian-foreign-policy-a-million-friends-and-zero-enemies/ | Indonesian Foreign Policy: ‘A Million Friends and Zero Enemies’ | Indonesian Foreign Policy: ‘A Million Friends and Zero Enemies’ After nearly ten years in office, President Susilo Bambang Yudhoyono (widely known as SBY) will hand power this July to an elected successor in competitive multi-party elections. His efforts to consolidate Indonesia’s democracy and expand its economy have yielded tangible gains across a variety of measures, with much more room to improve. His mark on Indonesian foreign policy, while rooted in nonalignment and pragmatism, has been noteworthy for its willingness to address values of democracy and human rights head-on. What will this legacy mean for Indonesia’s potential as a leader for other societies, particularly from the Muslim world? Indonesia’s first directly elected president, SBY came to power in 2004 with more than 60 percent of the vote; in 2009, he won re-election in the first round by a similarly wide margin. After a tumultuous transition following the 1998 downfall of the despot President Suharto, the relative success of these two elections, and the country’s acceptance of the results, propelled Indonesia’s rapid transformation into a flourishing democracy and economic dynamo with a rapidly expanding middle class. Now, with SBY unable to seek a third term due to constitutionally mandated term limits, the race is on to succeed him. Several figures have emerged at the forefront. Aburizal Bakrie, head of a successful business conglomerate, is the leading candidate for the Golkar Party, the ruling party during General Suharto’s 32-year reign. Prabowo Subianto, a former Special Forces commander under Suharto who ran unsuccessfully for vice president in 2009, heads the Gerindra Party ticket. Joko Widodo, the current governor of the capital city Jakarta, is most favored to become the next president according to nationwide polls. Unlike Bakrie and Prabowo, Jokowi (as he is known) is a relatively new politician who made his name in the post-Suharto era. He has a reputation for being an honest politician with a can-do attitude, detached from the old political guard embodied by Bakrie and Prabowo. Although he is a favorite to win, it remains unclear whether he will run if his party’s leader, former President Megawati Sukarnoputri, chooses to make another claim for the presidency. As the campaign heats up, one thing is certain: most of the presidential hopefuls will focus on tackling domestic issues such as improving infrastructure and curbing corruption. During the past year, SBY’s Democratic Party has been marred by corruption scandals and his political image has taken a hit. While his domestic policy agenda may now be in jeopardy, it seems that SBY’s foreign policy legacy will largely remain unscathed as his term comes to an end. Much credit should be given to his administration for guiding Indonesia to economic prosperity and international prominence in the last ten years. He has worked to expand Indonesia’s clout on the international stage mainly through its active leadership of the Association of Southeast Asian Nations (ASEAN) and closer cooperation with India, Australia and China. 
SBY’s administration has been eager to share its experiences on democratic transition with other leaders of aspiring democracies, including Myanmar and Egypt, and hosts an annual Asia-Pacific forum on democracy designed to lend legitimacy to a political reform agenda. SBY also has chosen gradually to increase Indonesia’s international profile by taking part in the G-20 summits and co-chairing the UN Secretary General’s 27-member High Level Panel on the Post-2015 (Millennium Development Goals) Development Agenda. Although this strategy has elevated Indonesia’s standing in the international spotlight, doubts on its rise as an influential global player persist, as SBY’s administration has avoided major commitments that would compromise its historic preference for neutrality and non-interference. To understand Indonesia’s current foreign policy behavior, one must delve into its recent history. Just like politicians in the United States who channel its founding fathers, Indonesian leaders look to their independence heroes for inspiration. After gaining independence in 1945, Indonesia’s first president—Sukarno—pursued a “free and active” foreign policy strategy. The strategy entailed protecting its own national interests, not aligning with major world powers (i.e. the Soviet Union and the United States), and forming strong bonds with other non-aligned countries such as India. During this period, Indonesia became one of the leading members of the Non-Aligned Movement, which grew out of the group’s founding conference hosted by Indonesia in Bandung in 1955. After General Suharto came to power (1967-98), however, Indonesia kept a lower international profile and cultivated close relations with Washington and other Western economic powers in order to develop its economy. What we see in Indonesia’s current foreign policy stance is a blending of these two strategies. It is engaging with the international milieu of both major and minor powers, but still holding back on making significant commitments that could challenge its preference of remaining relatively neutral in international disputes. In a country obsessed with Facebook and other social media, SBY used his 2009 inaugural address to describe the strategic outlook of Indonesia’s current foreign policy this way: “Indonesia is facing a strategic environment where no country perceives Indonesia as an enemy and there is no country which Indonesia considers an enemy. Thus Indonesia can exercise its foreign policy freely in all directions, having a million friends and zero enemies.” For Indonesia, having “a million friends and zero enemies” does much to help sustain its impressive growth in foreign trade and investment. It also helps explain its reluctance to take hard human rights positions that might upset major economic partners that have poor human rights records, such as China. Furthermore, Indonesia is generally timid in making strong commitments to uphold human rights at the international level because it continues to struggle with its own human rights issues. Recently, for example, Indonesia has seen a significant uptick in religious intolerance and government infringement on civil rights and liberties. As the Muslim world’s largest democracy, such troubling internal human rights issues pose a real threat to the credibility of its leaders’ claim to be a beacon of democracy for other fragile democracies. 
Indonesia’s longstanding preference for non-intervention and opposition to external attempts to meddle in others’ internal affairs align conveniently with its desire to avoid international criticism of its own domestic agenda. This is one motivation behind Indonesia’s failure to ratify the Rome Statute of the International Criminal Court (ICC). Amid calls from the DPR (House of Representatives) to ratify the Statute, Defense Minister Purnomo Yusgiantoro stated: “We’ve already got a law on human rights, a law on human rights tribunals and the Constitution, all of which govern the rights and responsibilities of all citizens. All of them cover the issue of human rights. So even without ratifying [the Rome Statute], we’re already complying with the principles enshrined in it.” Although Indonesia is generally complying with the Rome Statute, it has not officially ratified it for fear of compromising its independence to handle sensitive issues on its own. Ratifying the statute could open Indonesia to unwanted attention by the ICC and limit its options if crimes against humanity were to occur within its borders. While Indonesia is not high on anyone’s list for ICC scrutiny, religious intolerance is growing (most notably the destruction of homes belonging to the Shias of the Sampang Regency at the hands of their Sunni neighbors), and a culture of impunity prevails amongst the country’s security forces. Its elite counterterrorism unit—Densus 88—has been implicated in killings of suspected terrorists and, last March, members of Kopassus—the Indonesian Special Forces—stormed Cebongan prison and executed four detainees suspected of murdering a Kopassus sergeant. Unfortunately, this culture of impunity amongst members of the armed forces is nothing new. During Indonesia’s 24-year occupation of East Timor, its military committed atrocities against the East Timorese—most notably the 1991 Santa Cruz Massacre and the spate of killings after the East Timorese independence referendum in 1999. The Santa Cruz tragedy occurred when Indonesian troops opened fire on a memorial procession at the Santa Cruz Cemetery in the East Timor capital of Dili. The East Timorese independence referendum of 1999 saw at least 1,200 East Timorese slaughtered by anti-independence militias backed by the Indonesian army. Efforts to seek justice for the atrocities in East Timor bore no fruit. A makeshift human rights court set up by Indonesia and the UN Special Panels in East Timor tried 18 individuals for abuses committed during 1999, but all were acquitted. The CAVR (East Timor’s Commission for Reception, Truth and Reconciliation), which ran from February 2002 to October 2005, accused Wiranto, Chief of the Armed Forces during 1998-99, of being complicit in the abuses committed in East Timor. The Serious Crimes Unit—a prosecutorial body within the UN Mission in East Timor—issued a warrant for Gen. (ret) Wiranto’s arrest in 2004, but the East Timorese government never forwarded it to Interpol. East Timor’s leaders have opted for a reconciliatory approach with Indonesia, rather than seeking punishment. 
Former East Timor President Jose Ramos-Horta spoke on the issue in 2006: “We have consciously rejected the notion of pushing for an international tribunal for East Timor because, A, it is not practical, B, it would wreck our relationship with Indonesia, and, C, we are serious about supporting Indonesia’s own transition towards democracy…In today’s Indonesia or in the foreseeable future, there will be no leader strong enough who can bring to court and prison senior military officers who were involved in violence in the past. . . . They are still too powerful.” Indeed, these former Indonesian military officials still hold much power. Wiranto ran for the presidency in 2004, the vice presidency in 2009, and will likely run for the presidency again in 2014. Lt Gen. (ret) Prabowo—mentioned earlier as a possible presidential candidate—is alleged to have been responsible for human rights violations during the widespread rioting in Jakarta surrounding Suharto’s downfall in 1998. Although the ICC’s jurisdiction is not retroactive, Indonesia’s troubled history of impunity may give its current leaders reason enough to hold the international lawyers at bay for the foreseeable future. Regardless of the Rome Statute’s fate, the next president will have to tackle the rule of law agenda frontally, on its own or with help from the international community. Further from home, Indonesia’s evolving views on international norms of democracy and human rights were evident in its handling of Syria. SBY suggested that Bashar al-Assad step down from power to allow for a political transition towards leadership accepted by all Syrian parties involved. SBY made the suggestion during a meeting with a delegation of Islamic scholars led by Sheikh Muhammad Ali Ash-Shobuni of Saudi Arabia on January 7, 2013. It seemed Indonesia would take a more active approach in helping to resolve the Syrian crisis. However, Indonesia so far has done little to follow up on SBY’s statement. At the UN General Assembly on May 15, it abstained from voting for a resolution condemning al-Assad’s Syrian regime and accepting the Syrian National Coalition as party to a political transition. Foreign Minister Natalegawa summed up the reason behind the abstention: “Indonesia is not able to support the resolution because it contains elements that run counter to the established international law and international relations by taking side with certain parties in the conflict. For Indonesia, the legitimate government of Syria is for Syrian people themselves to decide, not outside parties.” Indonesia’s behavior towards the Syrian crisis encapsulates a shift toward a more engaged effort to encourage others to uphold international human rights and democracy norms, particularly in the Muslim world, while avoiding any meaningful, substantive action. From the Indonesian perspective, supporting the rebels would cross a line toward meddling in Syria’s affairs, which would contradict its coveted notion of non-interference. Supporting the rebels could also disappoint China and Russia and cause rifts in the already deeply divided bloc of Islamic nations. Similarly, in regards to Egyptian President Mohamed Morsi’s ouster, SBY has maintained Indonesia’s commitment to non-interference: “We do not have a recipe, neither are we advising Egypt on what to do. 
We are not in the position to do so, and that would not be right.” At Egypt’s request, with some gentle prodding from Jakarta, Indonesia advised Cairo in 2011 regarding organizing elections and setting up regulations on political parties. But this assistance was carefully packaged in tones of non-interference as highlighted in Foreign Minister Natalegawa’s comment: “We used the bilateral approach, which was more acceptable and they opened up to us and invited us to come to share our experience…But we must ensure that we do it cautiously without giving the impression that we are lecturing them.” While these examples demonstrate Indonesia’s growing engagement with the international community, it remains wedded to its historical preference for neutrality and non-interference, i.e., to have “a million friends and zero enemies.” Apart from its limited engagement at the global level, Indonesia has taken some action within its neighborhood to promote human rights and democracy. For example, it has made concrete efforts to encourage Myanmar to make the transition from dictatorship to democracy. Shortly after the Burmese government crackdown on the participants of the Saffron Revolution in 2007, SBY sent retired General Agus Widjojo to cajole Myanmar’s military junta to embrace democratic transition. During the 1990s, Gen. Widjojo was known to be one of the reformist thinkers in the Indonesian military who encouraged Gen. Suharto to step down to make way for a democratic transition. Indonesia has also provided humanitarian assistance to conflict-prone areas of Myanmar, including $1 million to help build three schools in the Rakhine State. These and similar efforts to help Myanmar’s transition have drawn praise from its democratic friends, including Australia. The Bali Democracy Forum (BDF) is another medium through which Indonesia has promoted international norms of democracy. Indonesia launched BDF in 2008 as an annual open intergovernmental forum on the development of democracy in the Asia-Pacific region. Participating countries engage in dialogue based on sharing experiences and best practices in regard to the promotion of democratic ideals. The Institute for Peace and Democracy (IPD)—also established in 2008—supports the BDF’s goal of instilling concepts and skills for peace and democracy through intellectual exchanges, training for practitioners, joint missions, network building, publications, and capacity building at Bali’s Udayana University where it is headquartered. Indonesia also has actively supported the creation of the ASEAN Intergovernmental Commission on Human Rights (AICHR), a weak but nonetheless important initiative to insert human rights into the ASEAN agenda, and appointed a civil society leader as its representative. It voluntarily held a human rights dialogue with the Commission on June 25, 2013, where it reported to the Commission on the current human rights issues within the country. While these developments are a step in the right direction, they also highlight that Indonesia is playing it safe in taking responsibility on human rights and democracy promotion. It cautiously promotes rights in countries of the greatest immediate interest – those within its ASEAN neighborhood – with a modest and soft touch. As the champion of ASEAN, Indonesia carefully has leveraged Myanmar’s interest in the rotating presidency, which it will assume in 2014, to nudge it closer to embarking on a serious democratic transition. 
The Bali Democracy Forum, for all its appeal, is also criticized for its inclusivity; countries like Saudi Arabia, Qatar and Iran take part in the Forum largely to pay lip service to their own credentials as “democracies.” Similarly, although Indonesia has boosted the AICHR’s influence by voluntarily opting to engage in a human rights dialogue with the Commission, ASEAN’s human rights body continues to have limited impact because it lacks a mechanism to enforce good human rights practices or to demand information from member countries. It is merely a consultative body, where member countries can choose whether or not to consult with it to improve their protection of human rights. Indonesia’s next president should use its clout in ASEAN to encourage other member nations to support the development of enforcement mechanisms for the AICHR. SBY has taken Indonesia a great distance toward a special seat at the global table. In order for Indonesia to reach its full potential as an influential global player, its next leader should build on this record and be willing to saddle up and take on more substantive actions to promote human rights and democracy at the international level. To do so, it will have to make more commitments than it has to date to both practicing and promoting the liberal international norms it now proudly embraces. It should also invest in building the regional architecture needed to help engender and sustain democratic transitions in its own neighborhood, where it has immediate interests. If, on the other hand, Indonesia does not take the fight for human rights and democracy to the global stage, it will limit its influence as a global leader. In sum, Indonesia is making a lot of interesting noises, but isn’t really making any music yet. The country’s next president needs to continue where SBY left off, and do much more. This article was originally published by The Diplomat . |
4754f7edc3b332eb18c82a743d109148 | https://www.brookings.edu/articles/inside-the-inferno-counterterrorism-professionals-reflect-on-their-work/?shared=email&msg=fail | Inside the Inferno: Counterterrorism Professionals Reflect on Their Work | Inside the Inferno: Counterterrorism Professionals Reflect on Their Work Ursula Wilder, a clinical psychologist working within the Intelligence Community, explores the impact of counterterrorism (CT) work on the professionals who operate in high-intensity and high-stress environments, often for many years. For some, the work involves actual combat or engagement with terrorists and their violent acts. For others, the work involves making decisions that affect the lives of many and, as a consequence, bearing the weight of making life-and-death decisions. And for yet another group, counterterrorism assignments involve piercing through massive amounts of intelligence data and reports, while coping with the great uncertainty in locating terrorists and, in turn, warning others of potential terrorist acts before bad outcomes happen. Writing in the Central Intelligence Agency’s journal Studies in Intelligence, Wilder covers new territory for the 60-year-old publication – the examination of the effects of violence and high stress on counterterrorism practitioners and the impacts caused by the weight of making many difficult decisions on a routine basis. While some researchers have explored the impact of war on the human psyche – most notably recent work on post-traumatic stress disorder – less research has been done on the psychology of individuals who work in the high-stress world of counterterrorism. The attached article is based on interviews with people who work across the range of counterterrorism vocations. Some purposefully pursued work in CT, deliberately dedicating themselves to this work for a short period or for an entire career. Other professionals found themselves thrown unexpectedly into CT work because an act of terrorism occurred in close proximity to a recent assignment, requiring the urgent deployment of their knowledge and skills. Local medical and emergency personnel, police, reporters, mental health practitioners, and morticians are examples of professionals who have increasingly been required since 9/11 to react to violence of this sort. Whether CT professionals have been engaged in the work of counterterrorism by choice or by circumstances beyond their control, those who have stepped up to perform these jobs have been—and will continue to be—affected by their professional experiences, in ways subtle and profound, positive and negative. Often their loved ones have been affected, secondarily, but no less profoundly. Wilder’s article draws from interviews with 57 professionals from the main domains in the CT field. The interviews were conducted in 2012 while the author was an Intelligence Community Federal Executive Fellow at the Brookings Institution. While those interviewed represent only a small portion of the CT profession in which many thousands work and have worked, their personal reflections and insights nevertheless provide a window into the psychological trends that likely exist among their colleagues in the entire CT community. Ursula M. Wilder is an Intelligence Community psychologist with field experience in counterintelligence and counterterrorism, who is currently posted at CIA’s Sherman Kent School for Intelligence Analysis. She was a 2011-2012 Federal Executive Fellow at Brookings. 
Editor’s Note: This work is part of unclassified extracts from Studies in Intelligence, Vol. 58, No. 4 (Extracts, December 2014), a publication of the Center for the Study of Intelligence, a division of the U.S. Central Intelligence Agency. Studies in Intelligence explores many aspects of the intelligence community and its people. Most often it has addressed the field’s history, its methods, and future development. Less often have the journal’s authors examined the personal and psychological impact of the work on intelligence professionals, as Wilder does in this piece. |
b66eb148ac2521388b803a53505cce7c | https://www.brookings.edu/articles/is-obama-like-eisenhower/ | Is Obama Like Eisenhower? | Is Obama Like Eisenhower? “I remember some of the speeches of Eisenhower,” Hillary Clinton said during a joint interview with President Obama in January. “You know, you’ve got to be careful, you have to be thoughtful, you can’t rush in.” It seems likely her memories were jogged by the reviews of Evan Thomas’s recent book, Ike’s Bluff, which argued that Eisenhower’s experience as a soldier and general taught him the limitations of exercising power. That book and a spate of other recent studies have established Ike firmly in the public mind as the very embodiment of presidential prudence. They have also turned him into a posthumous adviser to the Obama administration. Before becoming secretary of defense, Chuck Hagel bought three dozen copies of David A. Nichols’s study of the Suez Crisis and distributed them to (among others) the president, Hillary Clinton, and Leon Panetta, his predecessor as secretary of defense. At Suez, Ike refused to support Britain and France when they (in collusion with Israel) invaded Egypt, and he effectively killed the intervention. Hagel’s lesson was clear: Don’t let allies drag you into ill-advised military adventures. In an influential essay published last year in Time entitled “On Foreign Policy, Why Barack Is Like Ike,” Fareed Zakaria argued that when the president showed a wariness to intervene in places like Syria, he was displaying an uncanny resemblance to Eisenhower. The key quality that the two share, Zakaria argued, is “strategic restraint.” In his recent book, Presidential Leadership and the Creation of the American Era (Princeton University Press, 200 pages), Joseph S. Nye of Harvard takes the argument even one step further. Nye claims Eisenhower was actually an early practitioner of what an Obama aide, speaking of the administration’s role in the ouster of the Muammar Gaddafi regime in Libya, notoriously called “leading from behind.” A cursory examination of Eisenhower’s actual Middle East policies reveals the hollowness of both this thesis and the notion that Eisenhower, as president, followed a strategy of restraint—especially as regards the Middle East. To be sure, he frequently exercised prudence in military affairs. He ended the war in Korea and did not intervene in 1956 when the Hungarians rose in revolt against their Soviet masters. Most notable of all, he refrained from intervention in Vietnam. But military prudence should not be confused with global strategy. Modern-day “restraintists” are quick to cite Eisenhower’s warning, in his farewell address, regarding the dangers of “the military industrial complex.” They typically forget, however, to quote his justification for it: “We face a hostile ideology—global in scope, atheistic in character, ruthless in purpose, and insidious in method. Unhappily the danger it poses promises to be of indefinite duration.” Eisenhower, in other words, zealously prosecuted the Cold War. Indeed, contemporary critics diagnosed his administration as suffering from “pactomania,” an irresistible urge to organize alliances against Communism. Many historians now regard his reliance on the CIA, which toppled regimes in Iran and Guatemala, as anything but restrained. And there are also more public examples of Eisenhower flexing his presidential muscles. There was Syria, for one. Then, as now, the country was at the center of a regional power struggle. 
In the summer of 1956, when the Syrian government began to drift toward the Soviet Union, Eisenhower instructed the CIA to topple it. By summer 1957, the spy agency had attempted to stage two coups, both of which failed. No sooner had Syrian counterintelligence rolled up the second plot than Eisenhower formulated another plan: fomenting jihad. He instructed the CIA to position itself in order to stir up violent disturbances along Syria’s borders. The goal was to present these incidents to the world as a threat—a Syrian threat—to the peace and security of the region. Syria’s neighbors would then use the unrest as a pretext to invade and topple the government in Damascus. The trickiest part of the plan was convincing the Arab states to invade. In the hope that Saudi Arabia would help, Eisenhower wrote to King Saud. The letter expressed alarm over the “serious danger that Syria will become a Soviet Communist satellite.” It affirmed that “any country that was attacked by a Syria which was itself dominated by International Communism” could count on the United States for support. And then it closed with an appeal to Islam: “In view of the special position of Your Majesty as Keeper of the Holy Places of Islam, I trust that you will exert your great influence to the end that the atheistic creed of Communism will not become entrenched at a key position in the Moslem world.” The letter missed its mark. “Saud,” as the historian Salim Yaqub wrote, “had little interest in Eisenhower’s jihad.” In praise of Ike’s pacific record, Zakaria notes that “from the end of the Korean War to the end of his presidency, not one American soldier died in combat.” The statistic is striking, but it creates a misleading impression. In truth, Eisenhower had the one quality all successful leaders have: He was lucky. Any number of his policies could easily have backfired, producing a much less impressive statistic. The Syrian crisis of 1957 is a case in point. While Eisenhower was attempting to generate a jihad, the Turkish government amassed 50,000 troops on the Syrian border. The move provoked the Soviets. In an interview with the New York Times, Nikita Khrushchev, then the Soviet premier, publicly accused the United States of fomenting the crisis and issued a warning to the Turks: “If the rifles fire,” he said bluntly, “the rockets will start flying.” Secretary of State John Foster Dulles immediately came to the aid of the Turks: “If there is an attack on Turkey by the Soviet Union,” he said, “it would not mean a purely defensive operation by the United States, with the Soviet Union a privileged sanctuary from which to attack Turkey.” In such tense circumstances, a miscalculation by a Turkish, Syrian, or Soviet commander could have dragged the United States into an extremely ugly conflict. History, in that case, would have produced less impressive statistics. Zakaria also happens to be factually wrong. A number of soldiers did die on Eisenhower’s watch—three, to be exact. One fell to an enemy sniper; the other two to friendly fire. All of them died in Lebanon during the 1958 intervention. Zero or three—either way the record is remarkable, but the fallen Marines should remind us of an important fact: Eisenhower, when the situation required, did not shrink from entering a messy conflict. In the first half of 1958, Camille Chamoun, the Lebanese president, was battling an insurgency and strongly urged Eisenhower to come to his assistance. 
The insurgents were receiving support from Syria, which by this time had merged with Gamal Abdel Nasser’s Egypt to form the United Arab Republic. Eisenhower feared a quagmire and resisted calls to intervene. But overnight, his calculus changed. When Eisenhower went to bed on Sunday, July 13, Iraq was an ally—“the country,” he wrote in his memoirs, “that we were counting on heavily as a bulwark of stability and progress in the region.” By the time he woke on Monday, the bulwark had collapsed. In the early morning hours, renegade army officers staged a successful coup, destroying Iraq’s Hashemite monarchy and replacing it with an Arab nationalist republic that Eisenhower feared might align with the United Arab Republic and its Soviet patron. In a mere instant, a Cold War ally had disappeared. Fearing a push by Nasser and the Soviet Union against all Western-leaning states of the region, a number of American allies—including the Lebanese, Saudis, and Jordanians—called for immediate intervention by the United States. Cairo and Moscow, they argued, must be put on notice that the Americans would not let their remaining friends go the way of the Iraqi monarchy. If the United States failed to intervene, the Saudi king informed Eisenhower, it would be “finished” as a power in the region. Eisenhower sprang into action with remarkable speed. Within a few hours, he gave the order to send in the Marines to bolster the resolve of allies and reinvigorate the deterrent capability of the United States. Almost immediately, Eisenhower invited a bipartisan group of congressional leaders to the White House for a briefing. Sam Rayburn, the speaker of the House, expressed concerns: “If we go in and intervene and our operation does not succeed, what do we do then?” He also worried that “the Russians would threaten general war.” Eisenhower replied that it was impossible “to prophesy the exact course of events. If we do or if we don’t go in, the consequences will be bad.” He calculated, however, that it was crucial to take “a strong position rather than a Munich-type position, if we are to avoid the crumbling of our whole security structure.” Rayburn also believed that “intervention would intensify resentment against us throughout the area.” Eisenhower shared his fear. The Lebanon intervention, we now know, went as cleanly as any such operation in history. At the moment of decision, however, Eisenhower regarded the venture as highly risky—so dangerous, in fact, that it reminded him of giving the go order on D-Day, the most momentous event of his life. “Despite the disparity in the size of the two operations,” he wrote in his memoirs, “the possible consequences in each case, if things went wrong, were chilling.” What, in particular, made the intervention so dangerous? “In Lebanon, the question was whether it would be better to incur the deep resentment of nearly all of the Arab world (and some of the rest of the Free world) and in doing so to risk general war with the Soviet Union or to do something worse—which was to do nothing.” Over the last year, a parade of America’s Middle Eastern allies have made their way through the White House, raising the alarm over Syria and urging Obama to organize a more robust international response. Unlike Ike, Obama calculated that doing nothing was preferable to taking actions that have uncertain outcomes. As a result, when Obama finally decided that some response to Assad’s use of chemical weapons was necessary, he found himself almost bereft of allies. 
And what about Nye’s favorable comparison of Obama’s foreign policy with Eisenhower’s? “An incautious comment by a midlevel White House official characterized the Libya policy as ‘leading from behind,’ and this became a target for political criticism,” Nye writes, but adds that “Eisenhower was a great exemplar of knowing that sometimes it is most effective to keep a low profile and to lead from behind.” This is an act of rhetorical legerdemain. Nye’s use of the term gives the impression that two very different things are actually one and the same. With respect to Obama, “leading from behind” describes his administration’s policy toward Libyan intervention. With respect to Ike, it describes his management style, which Fred Greenstein famously called “the hidden-hand presidency.” In Eisenhower’s day, intellectuals almost universally regarded him as an amiable dolt, more golfer than strategist. Before Greenstein (together with Stephen Ambrose and others) set the record straight in the 1980s, it was widely assumed that John Foster Dulles was the man who actually ran American foreign policy. Using declassified documents, Greenstein and his cohort showed that Eisenhower was resolutely in charge, a master of detail, fully in command of strategy and tactics. Eisenhower might have put Dulles out front and center stage, but he was always guiding him with a “hidden hand.” The diary of Jock Colville, Winston Churchill’s right-hand man, provides a vivid example of Eisenhower’s skills at “gentle persuasion,” to use Nye’s phrase. After Stalin died in March 1953, Churchill, then in his final term as prime minister, perceived signs of moderation in Moscow. He began a campaign to convince Eisenhower to convene a summit with the USSR on the model of the great wartime conferences. Ike repeatedly rebuffed Churchill, who eventually made his differences with Eisenhower publicly known. Tensions came to a head in Bermuda in December 1953 at a conference attended by the leaders of the United States, Britain, and France. During one of the opening meetings, Churchill immediately delivered an eloquent appeal for engaging the new Soviet leaders. Eisenhower, Colville writes, was enraged. He reacted with “a short, very violent statement, in the coarsest terms,” likening the Soviet Union to “a whore” whom the United States would drive off the main streets. Colville was shocked by Eisenhower’s profanity. “I doubt,” he noted, “if such language has ever been heard at an international conference.” Now consider: The Islamic Republic of Iran recently elected a new president, Hassan Rouhani, whom many observers regard as a moderate. Those observers have been urging Obama to engage with him directly, just as Churchill urged Ike. Imagine a conference between Obama and a delegation of European leaders who argue eloquently for reaching out to Rouhani. Obama springs up, enraged. The veins in his forehead pop out, throbbing. He launches into a profanity-laced tirade. “Iran,” he thunders, “is a whore and we are going to drive her off the streets of the Middle East.” If Obama were truly like Ike in foreign policy, this thought experiment would not be a fanciful one. The popular association of the Eisenhower administration with “strategic restraint” is itself the product of historical revisionism. It was not the contemporary view. Until the 1980s, most pundits believed the opposite. Their view was perfectly distilled in Townsend Hoopes’s The Devil and John Foster Dulles (1973). 
The unstated goal of the book was to saddle the Republicans with responsibility for the Vietnam War—no mean feat, given that Democrats Kennedy and Johnson had made the key decisions to intervene. Nevertheless, Hoopes found an ingenious method to lay the responsibility squarely on Eisenhower’s shoulders—or, more precisely, on the shoulders of his secretary of state. John Foster Dulles’s influence, Hoopes explains, was so immense that it extended beyond the Republican Party. Dulles managed to shape the zeitgeist by establishing in the broad culture the unassailable sanctity of “America’s posture of categorical anti-Communism and limitless strategic concern.” Once he successfully stamped the culture with anti-Communist zealotry, the Democrats had no choice but to follow its inexorable logic, which led to imperial overreach in Vietnam. “In early 1968,” Hoopes writes, “when the Tet offensive and Lyndon Johnson’s withdrawal from further political combat tore away the final veil hiding the misperception and failure of America’s freedom-defending and nation-building in South Vietnam, I faced, along with many others, the dawning of the realization that an era in American foreign policy had ended.” This was hysterically overwrought, obviously, but in its day, intellectuals took the argument seriously. It’s worth considering why. Caricature, of course, exaggerates recognizable aspects of reality. In the 1970s, the very real anti-Communism of the Eisenhower era was still a part of living memory. “Mutual Assured Destruction,” “the domino theory,” “brinkmanship”—these 1950s catchphrases reverberated, testifying to the fact that Ike, even while steering clear of military adventures, took the fight to the enemy. By contrast, contemporary audiences know Ike only from history books such as Greenstein’s, which emphasizes Eisenhower’s pragmatism precisely in order to supplant the prevailing caricature of his stupidity. Still, there was more than just a grain of truth to Hoopes’s presentation. Ike operated in a specific ideological context. To detach “Ike the pragmatist” entirely from it is to draw a caricature every bit as distorted as “Dulles the zealot.” Zakaria sees Ike and Obama as uncannily similar for exhibiting “strategic restraint” in their Middle East policies. That Obama has been restrained is undeniable. In what way, however, is his reluctance to use military force “strategic”? What larger plan does the policy serve? The best answer came last March from Tom Donilon, his former national-security adviser. The Obama administration, he explained in an interview, had determined that the United States was “over-invested in our military efforts in South Asia and in the Middle East.” At the same time, it was “dramatically under-invested” in Asia, which was “the most economically dynamic region in the world.” Therefore, it was “rebalancing” to Asia. So Obama, the global strategist, pores over a huge map spread out on the table before him. Using his pointer stick like a croupier, he slides pieces from the Middle East to Asia. That’s all well and good on the global level, but what about the Middle East? The region is undergoing an epochal transformation. Where does the president see it headed? What is the American role in guiding it there? In May 2011, a few months after the Arab Spring first broke out, Obama identified a powerful movement toward freedom and democracy and reached out his hand in partnership. 
“The question before us,” Obama said at the time “is what role America will play as this story unfolds.” He answered with clarity: “There must be no doubt that the United States of America welcomes change that advances self-determination and opportunity.” Only two years later, he struck a less hopeful note. In the Middle East, he said, “there are ancient sectarian differences, and the hopes of the Arab Spring have unleashed forces of change that are going to take many years to resolve. And that’s why we’re not contemplating putting our troops in the middle of someone else’s war.” Where Obama was nurturing democracy two years ago, he is now arguing for quarantining sectarian violence. This blatant shift raises even more questions. Will this sectarianism burn itself out, or will the conflagration grow? What security structures will best contain it? How will the “rebalancing” to Asia help build them? One suspects that there are no answers to any of these questions, because the decision to pull back was disconnected from a larger vision of the Middle East. “Strategic restraint,” when applied to Obama’s policies, is synonymous with “strategic neglect.” That was not true of Eisenhower’s policies. His eight years in office also coincided with a revolutionary wave. The old imperial and colonial order was crumbling. A new one, dominated by secular pan-Arab nationalism, was taking its place. Eisenhower saw it plainly and formulated a strategy to deal with it. His goal was to channel the nationalism of the region away from the Soviet bloc and toward the West by offering security and economic assistance. The United States was engaged in a delicate balancing act, supporting its European allies against the Soviet Union while simultaneously facilitating the rise of the independent nations of the Middle East, which were hostile to the Europeans. It is impossible to understand any of Ike’s major moves without reference to this vision. Take, for instance, the Suez Crisis, which Zakaria cites as a prime example of “strategic restraint” and which Hagel holds up as a model for Obama. When Eisenhower turned against his allies, he did not do so out of any overarching commitment to “restraint.” He simply believed Britain and France were alienating Arab nationalists and destroying the prospect for a strategic accommodation between the Arab states and the West. He therefore shunted the Europeans aside—in what was actually the most dramatic assertion of American primacy of the Cold War. In the midst of the crisis, he announced the Eisenhower Doctrine, a unilateral American commitment to defend the entire Middle East. His doctrine put the world on formal notice that the United States was replacing Britain as the dominant power in the region. The result of Ike’s “strategic restraint” was a massive increase in the global responsibilities of the United States. Obama’s restraint represents an attempt to shed those responsibilities. The Ike–Obama analogy creates an illusion of commonality and historic continuity where none exists. It is bad history, because it depicts Eisenhower as a two-dimensional figure, entirely detached from his key associates and their core beliefs. At the same time, the analogy presents us with a distorted view of Obama. The Eisenhower Doctrine asserted American primacy in the Middle East, and every president since has regarded it a vital American interest to shape the international order of the region. Every president, that is, except the present one. The old order in the Middle East is crumbling. 
The enemies and rivals of the United States—Russia, Iran, Syria, Hezbollah, and al-Qaeda—are working assiduously to mold the new order that benefits them. Their efforts, which are often in conflict, have ignited a great fire. Unlike his predecessors, Barack Obama has determined that the United States is best served by hanging back. This is a sharp break with the past—especially with Eisenhower. Those desperately looking to burnish Obama’s reputation when it comes to foreign policy by associating it with that of a successful presidency will have to look elsewhere. This article originally appeared in Commentary Magazine. |
bc852b3b7f3422f66be43c69345358ec | https://www.brookings.edu/articles/islam-in-france/ | Islam in France | Islam in France In a long-planned public-relations initiative, France faced Algeria in a “friendly” soccer match three weeks after the September 11 terrorist attacks in the United States. The organizers hoped to showcase the success of “republican” integration and the maturity of the Algerian migrant presence in France. Contrary to script, however, some young soccer fans booed the national anthem, threw objects at two government ministers and—once the French lead had reached 4-to-1—ran onto the field, forcing the game’s cancellation. The game’s spectacular ending gave a different impression from that desired by its organizers. It also confirmed the misgivings of those wondering, in the aftermath of September 11, whether Muslims in France could represent a threat to the Republic. “Where are the beurs going?” asked the cover of Le Nouvel Observateur, above photographs of the second- and third-generation immigrant youth invading the soccer field. (“Beurs” is a French slang term for Arabs.) Whether from the disruptive actions of a few, or because the five million Muslims residing in France are beginning to take on organized religious and political contours, a decisive moment has arrived in the French Republic’s long coexistence with Islam. Nationality It is difficult to know exactly how many Muslims of different nationalities live in France because the state does not collect religious or ethnic census data. The republican notion of citizenship keeps religious practice and ethnic origin squarely in the private domain (and the French are wary of past abuses of such government files). Half of the estimated five million French Muslims are born or naturalized French citizens. Those of Algerian origin form the largest subgroup. Most arrived during the post-war economic boom: North African independence coincided with strong French demand for manual labor. These workers were already settling in France and starting families when the government ended labor migration in the mid-1970s. A steady annual flow of thousands has persisted under family reunification provisions. This has especially reinforced the Muslim populations around Paris, where one-third are concentrated, and in Lille and Marseilles, which together are home to another one-third. National Origins of the Muslim Population These migrants have tended to concentrate geographically and have benefited from state supports in social and religious policy. But any articulation of their interests as group interests is discouraged. Here the revolutionary spirit of the National Assembly debate granting citizenship to the Jews is still current: deny them everything as a nation and grant them all as individuals. To suggest that Muslims form a coherent, unitary “community” is a taboo act of reification, inconsistent with the Jacobin tradition. Of course practical electoral and symbolic politics leads to a different outcome. French politicians actively court different religious communities. Though there were some 1.2 million Muslim voters by the mid-1990s, the foreign nationality of other Muslims in France complicates the state’s interaction with them. Moreover, France’s reluctance to identify specific suffering or needs of sub-national groups has recently been tested. The last two governments have issued decrees compensating the Jewish communities of France for material and spiritual damages under Vichy. 
France has also timidly begun revisiting its mixed history of domination and war in Algeria. But there remains a general “republican” reluctance to address the specific challenges of integrating the heirs of this experience. French Muslims and the War on Terrorism Journalists’ speculations in the early fall that the largely migrant-populated suburbs might harbor wasps’ nests of America-haters—if not sleeper cells—coincided with the lethal explosion of a chemical factory in Toulouse, breeding weeks of speculation that France was again a victim of Islamic terrorism. Reports of a foiled suicide bomb plot against the American embassy in Paris were followed by coverage of Zaccarias Moussaoui, the Frenchman arrested by FBI agents in August and thought to be the missing twentieth 9/11 hijacker. But Usama Bin Laden’s videotaped call for Muslims to rise up against western host societies does not seem to have won sympathy among French Muslims. French Muslims as a community, even in the religiously sensitive period of Ramadan, overwhelmingly condemned the attack on New York. Ninety-two percent agreed in an IFOP survey that Islam condemns terrorist acts and the same number said the hijackers were not legitimately Muslim “because Islam is a religion of peace and moderation.” Seventy percent also approved of French participation in an eventual military response, only slightly lower than support among the overall French population.1 Muslims have organized themselves in several principal federations, oriented in part along lines of national origin. Total membership in these associations probably does not exceed 10-20 percent of Muslims in France. Foremost among the Algerian presence is the Grande Mosquée de Paris and its associated Muslim Institute. Similar in size, the Union des Organisations Islamiques de France (UOIF) is a large umbrella organization with Moroccan and Egyptian ties. There are also three main Turkish and various African religious associations, in addition to small cliques around individual religious and political personalities. Nearly all of these religious groups and the mosques they represent have participated in consultations with the Interior Ministry to create a High Authority of Islam in France. In the realm of civil society, a host of groups representing Muslim students and women have gained associational status under the 1901 law governing secondary associations. Some of these associations have had established contacts with local authorities for decades, such as those founded on behalf of the resident “Harki” population of Algerians who sympathized and fought on the French side during the Algerian War. Muslim Piety in 2001 (and 1994) Laïcité The 1905 law separating church and state prevents the public funding and official recognition of religious communities. The law allows for the public maintenance of religious buildings in existence at the time of the law’s passage and also contains similar exceptions for the salaries of clergy employed by prisons, hospitals and the army. In its affirmation of the principle of equality in the free exercise of religion for all French citizens, however, the law is increasingly interpreted as a commitment to create sufficient material conditions for Muslims to practice their religion. This includes prayer spaces and Halal-related requirements, and eventually the training of clergy and the accommodation of Muslims in public cemeteries. 
Efforts like the recently elected mayor of Marseille’s project to build a large Islamic cultural center and mosque have required consultation of the local Muslim populations, and thus the organization of their representatives. The community’s religious needs are not minor. Of 1,558 prayer spaces in France, the vast majority can accommodate fewer than 150 people, and only 20 can hold more than 1,000 congregants. In all, there are five mosques in use in France that were built expressly as mosques. These numbers compare with 40,000 Catholic buildings, 957 temples and 82 synagogues. Since Napoleon forcefully organized Catholics, Protestants, and Jews in centralized consistoires with designated religious heads, those communities have developed parallel umbrella institutions in the political and social domains. Many Muslim leaders would like to combine the roles of political and religious interlocutor in a single council. Consultations have been taking place at the national level under the Interior Ministry (also the Ministère des cultes) since the Rocard government in 1990. Ministers Joxe, Pasqua, Chevénement, and now Vaillant, have attempted to finalize a new High Authority of Islam. But the French government—which strongly supports the republican brand of secularism known as laïcité—favors the creation of a strictly religious council. It is especially loath to institutionalize domestic political consultations with non-French citizens. Nationality of Imams in France (1990) State involvement in the provision and training of Muslim clergy is also a question of concern to national security, as recent raids on Islamic religious centers in Hamburg and Milan have shown. Due to state inaction in the prison system, for example, a lack of state-provided Imams has created space for Islamic proselytizing to take root. With the share of Muslims in the prison population surpassing 50 percent, only forty-four clerics fulfill the state’s duty to provide religious consultation. By comparison, the prisons employ 460 Catholic clerics. To address technical issues of Muslims’ religious needs, advisers in the Interior Ministry maintain close contact with the diverse components of the Muslim organizational world. The Ministry says it is “accompanying” the Muslims in their quest for centralized organization. But the inclusive nature of this consultation has provoked accusations of religious extremism and nationalism among rival associations. Dalil Boubakeur, rector of the Grande Mosquée de Paris, and his ally Soheib Bencheikh, Mufti of Marseille, have systematically opposed the inclusion of the UOIF, whose Moroccan spokesman they denounce as an Islamic extremist. Plans for a regional assembly election in spring 2002 have also been the subject of controversy. Delegates will be allotted according to a mosque’s surface area, which does not always reflect actual attendance. These technical questions of representation will likely prove resolvable, as the recent elections in Belgium have shown. Indeed in the French region of Alsace-Moselle, governed by a special state-church regime because it was under German control at the time of the 1905 law, organized Islam enjoys the same recognition and benefits as other officially recognized religions. Reconciliation While integration of Islam is taking place at the symbolic and institutional levels, political leaders have begun to confront France’s troubled past with Algeria. 
Plaques have gone up on the Pont Saint-Michel and in the Invalides courtyard commemorating Muslims who died during the Algerian war. The long-neglected Harkis, who had fought on the side of France and were massacred by nationalist Algerians, finally received a presidential day of honor. And the victims of police violence during a 1961 Paris protest—a still uncounted number of “French Muslims of Algerian origin” estimated at between 50 and 200—had their memory inscribed by the mayor at the site of their deaths. Indeed it was in the context of formally expressing gratitude to 100,000 Muslim soldiers who had died for France in WWI that the Grande Mosquée de Paris and Muslim Institute were founded in 1922, in contravention of the 1905 law. The recent official review of the historical service and suffering of France’s half-million Jews has certainly accelerated their community recognition and strengthened their hand in negotiations with the state. Will something similar happen with France’s Muslim population? Reacting to the October 6 soccer match in Le Monde, the Algerian ambassador to France argued that the French state was equally responsible for the alienation of second-generation migrant youth, and called for their prompt social and political incorporation. The consequences of not integrating Islam into existing state-church structures would likely include an increase in foreign Imams and foreign money, and a decline in transparent and homegrown religious organization. Authorities would rightly fear the increasing hostility of a new generation excluded from public institutions; many also feel a duty to extend to them the promise of republican citizenship. President Chirac did not let a month pass after September’s attacks before convening leaders from the French Muslim world at the Elysée, and the new mayor of Paris organized a gala soirée du Ramadan at the Hôtel de Ville this year. But echoes from the bizarre soccer match still linger in the media, a striking counter-image to the heroics of French star Zinedine Zidane, himself of Algerian origin, during the World and European Cup championships in 1998 and 2000 that won so much support among the French. When Franco-Portuguese youth whistled at the Marseillaise during a recent France-Portugal game, politicians and the press did not pay any attention. The contrast reflects a strong impression that Arab-Muslim youth in particular have not been properly integrated into French society, and that achieving this will require the development of targeted state policy in domains where the state is reluctant to act. How this occurs, over the course of the next decade, will be a major factor in the successful integration of one-third of Europe’s Muslim population. |
b4d68f10d9e1248c992736473381d15d | https://www.brookings.edu/articles/israeli-palestinian-peace-talks-must-be-inclusive/ | Israeli-Palestinian Peace Talks Must Be Inclusive | Israeli-Palestinian Peace Talks Must Be Inclusive Editor’s note: this article originally appeared in The Guardian on September 29, 2010. The future of a viable Palestinian state will ultimately lie in the choices that President Abbas makes in the coming days. His overtures to the Arab League to garner support for next steps are a positive move. Whether or not the peace talks will proceed in the long run is unclear. But if negotiations are to continue – and be successful – now is the time for Abbas to re-examine how they can truly become more inclusive and representative of the realities at play in the West Bank and Gaza. With Gaza under the control of Hamas, decision-making regarding the future of its population’s welfare and statehood will proceed without the buy-in of Hamas. Saudi Arabia, a major player in past peace negotiations, is currently sidelined even though it could serve as a stabiliser of relations between Hamas and the Palestinian Authority. The exclusion of affected players from the table makes neither political nor economic sense. Finding a way to bring Gaza representatives into the discussions is critical, not only for the legitimacy of the negotiations but because in order to have a viable and sustainable two-state solution, the West Bank and Gaza need to be integrated economically. Currently, the government of Israel has in place more than 500 barriers to movement in the West Bank. In 2008, a World Bank study found that these barriers increase costs to Palestinian businesses by increasing the distance of internal routes from one West Bank town to another by as much as 40%. Additionally, with the completion of the security wall, the government of Israel plans on implementing a “back-to-back” transportation system for all goods from the West Bank moving into Israel. In Gaza, the back-to-back system caused delays of more than 24 hours at the Karni crossing as well as increased costs and damaged goods because of excessive handling and spoilage during the loading and unloading process. Even without this system, it takes almost two hours for Palestinian goods from the West Bank to cross into Israel. Since Israel is the destination for over 85% of Palestinian exports, representing 45% of total West Bank GDP, further delays will make Palestinian exports uncompetitive with those coming from Asia and Latin America. For the West Bank to have a viable economic future, it needs Gaza. It needs the potential of a Gaza port and airport to serve as a connection to the outside world, as Israel plans to continue the regime of imposing increasingly high costs on Palestinian imports and of economic separation from the West Bank. Today, outside a few areas of growth, the Palestinian economies of both territories continue to suffer. Unemployment in 2009 was nearly 40% for men in Gaza and nearly 20% in the West Bank. This does not even take into account youth unemployment, which is as high as 65% in Gaza, with young people having to wait an average of two years after graduation to find a job. Because their economic futures depend on each other, the current diplomatic efforts need someone at the table who can speak for the interests of Gaza. Given Hamas’s militant position, this does not have to be Ismail Haniyeh or any other official Hamas representative. 
In fact, the absence of direct talks with Hamas does not mean that indirect representation is impossible. This could be achieved in two ways. First, a set of high-ranking bureaucrats or expatriate Palestinians who are sympathetic to Hamas could sit at side tables to the main talks between the Palestinian Authority and Israel to ensure that Hamas’s voice is heard. Second, another Arab head of state could represent, and serve as a liaison to, Hamas. The second approach is more likely to be successful than the first. The reason is simple. Excluding Hamas from the talks is only one example; many other representatives are missing as well. Palestinians from outside the West Bank, including refugees in surrounding Arab states, do not have a voice for their interests. But because the issue of refugees is on the table, they too must be represented. Saudi Arabia has emerged as the country that is best suited to give voice to and represent the various political factions in a reconciliatory manner – especially when looking at how to tackle Palestinian-to-Palestinian politics. The Arab Peace Initiative, spearheaded by King Abdullah, showcases the country’s inclination toward addressing the Palestinian-Israeli conflict through regional Arab diplomacy and co-operation. In a recent letter to the Saudis on their national day, the US secretary of state, Hillary Clinton, acknowledged the contribution of the king’s regional vision via the bilateral peace talks. This now must translate into practical action: along with speaking to Syria about its role this week, the US – alongside Egypt and Jordan – needs to proactively seek other countries in the region to engage. Securing peace is certainly a formidable challenge, but finding ways to sustain peace and prosperity can only be done through total inclusion and unbiased representation. Ultimately, the real test of success will be President Abbas’s ability to secure a deal that can bridge the growing political divide between the two territories and meet the economic needs of all Palestinians. |
9beda45ad4f792af555e21377669e7cc | https://www.brookings.edu/articles/its-time-for-a-new-policy-on-confucius-institutes/ | It’s time for a new policy on Confucius institutes | It’s time for a new policy on Confucius institutes On March 5, the U.S. Senate voted to deny Department of Education funding to universities that host Confucius Institutes (CIs)—the controversial Chinese language and culture centers partially financed by the People’s Republic of China (PRC)—unless they meet oversight requirements. A federal campaign against their alleged “malign influence,” pressure from politicians and Department of Defense funding restrictions have prompted and accelerated closure of more than half the CIs in the United States. Faculty concerns over preserving academic freedom and university budget constraints concerning operating funds have all contributed to the trend. But so has a decline of American student interest in China studies and learning Mandarin Chinese. These closings and the attendant inflammatory rhetoric exacerbate a national foreign language deficit at a time when training Mandarin speakers familiar with an ever more consequential China should be a national priority. To meet this challenge, the U.S. government should increase funding for Mandarin language and China studies courses, but also stop forcing cash-strapped universities to choose between federal funding and properly managed CI programs. Multiple investigations into U.S.-based CIs, including by the Senate, have produced no evidence that they facilitate espionage, technology theft or any other illegal activity, no evidence that federal funds are used for their support, and only a handful of objectionable U.S. incidents. The Biden administration should lift, or provide necessary waivers of, federal funding restrictions on universities that demonstrate appropriate academic freedom and institutional safeguards around their CIs, which are no longer directly funded by the Chinese government. It should also consider authorizing the Confucius Institute U.S. Center (CIUS) to serve as a visa sponsor to assist Chinese teachers and staff of CIs obtain the proper visas, as well as enable CIUS to serve as a clearinghouse for information on such PRC personnel for relevant U.S. government agencies. The global CI program was initially launched under China’s Ministry of Education (MOE) in 2004, and more recently has been advanced as part of the PRC’s national strategy of Chinese culture “going global.” It consists of campus-based language and culture partnerships formerly funded in part and supported by the MOE. Many CIs also assist Confucius Classrooms teaching Chinese language at K-12 schools. The CI program sent hundreds of teachers to help meet U.S. government goals for Mandarin instruction under the Bush and Obama administrations. An estimated 51 CIs, 44 of them campus-based, continue to operate, down from a peak of 110 throughout the country. This number includes at least seven CIs that are scheduled to close in 2021. In addition, K-12 schools continue to host about 500 Confucius Classrooms. Prior to a June 2020 reorganization, U.S. universities typically negotiated five-year CI agreements with the MOE CI headquarters, called “Hanban,” and Chinese partner universities. While a 2019 Senate subcommittee report described CIs as being “controlled, funded, and mostly staffed” by the Chinese government, they have operated as U.S.-Chinese joint ventures, jointly funded and managed. 
Sometimes, they have co-directors from China and the United States, but many are directed by a U.S. faculty director and a Chinese deputy. Boards of directors composed of university officials and faculty from each side exercise general oversight. Hanban contributed start-up funds to, and shared operating costs with, the U.S. partner institution, which also supplied classrooms and administrative support. Hanban additionally provided language teaching materials, if requested, and paid the salaries and international travel costs for the Mandarin language teachers from the Chinese partner university, as well as grants for research, study tours to China and other matters in some cases. The exact arrangements vary. At larger universities with separate Chinese language departments teaching for-credit courses, CIs typically focus on language teacher training, K-12 language classes and community language and cultural outreach. Some CIs specialized in areas such as healthcare, business, Chinese food and beverage culture, and Chinese film. CIs generated legitimate concerns about academic freedom and independence due to their direct support from, and admitted role as a “soft power” instrument for, China’s party-state. The Chinese Communist Party’s (CCP) United Front organization oversees propaganda and education and is tasked with promoting cultural exchanges, friendship between the Chinese and other peoples and a good international environment for achieving China’s policy objectives. In a 2014 report on CI partnerships, the American Association of University Professors (AAUP) argued that allowing third-party control of academic matters compromises academic freedom and institutional autonomy. AAUP recommended that universities cease involvement with CIs, which it characterized as “an arm of the Chinese state,” unless their agreements are transparent to the university community, afford them control over all academic matters and grant CI teachers the same rights enjoyed by other faculty. The subsequent closure of CIs at two universities attracted congressional scrutiny and prompted a series of dueling reports. An influential 2017 study of 12 CIs by the National Association of Scholars identified a range of concerns including transparency, contractual language, academic freedom and pressure to self-censor. It urged closing all CIs and suggested prudential measures for universities that refused to do so. The study further called for congressional inquiries to evaluate CI national security risks through “spying or collecting sensitive information” and their role in monitoring and harassing Chinese, although it documented no such incidents. In contrast, a 2018 joint Hoover Institution-Asia Society study of Chinese influence activities in the U.S., which acknowledged concerns that campus-based CIs might “potentially infringe” on academic freedom—and made similar recommendations to reduce potential risks—found no actual interference by CIs in mainstream Chinese studies curricula on U.S. campuses and that most CIs operate without controversy. A congressionally commissioned study by the Government Accountability Office (GAO) published in February 2019 essentially supported that view. Its analysis of governance and secrecy provisions in 90 CI agreements found that U.S. university personnel generally control curriculum and teaching materials, although this is not always made clear in agreements. With respect to a frequently voiced concern that CI agreements often stipulate applicability of both U.S. 
and Chinese law, it reproduced a common provision also contained in the Hanban template CI agreement that Chinese personnel working at CIs must comply with U.S. law, while Chinese law would apply to Americans involved in China-based CI activities. It further reported a variety of negotiated provisions making U.S. law, as well as school policies, applicable to all CI activities, as in this published agreement. The GAO found that, although 42 of the 90 agreements contained confidentiality clauses, many agreements are publicly available, either posted online (as at least 11 universities have done), through state open records laws, or upon request. After describing the benefits of CIs, including increased resources, as well as concerns about potential constraints on campus programming and speech, the GAO reported that school officials denied having such concerns about their CIs, a finding supported by a contemporaneous 2019 Senate report. Early attempts to impose political requirements on CIs, for example to support the “One China Principle” or refrain from discussing Tibet, were rejected. At least three U.S. universities with CIs have hosted the Dalai Lama, although a CI director warned another university’s provost that re-scheduling a cancelled visit by the Dalai Lama could disrupt relationships with China, leading the provost to observe that a CI does present opportunities for “subtle pressure and conflict.” Most CIs do limit their scope to language and traditional culture, leaving political and other topics to other university contexts. The CI project is intended to promote a favorable understanding of China, but CIs do not enjoy a monopoly over information available on campuses, and based on interviews and at least one study, any concerns that American students will be brainwashed by CCP propaganda, delivered through CIs or otherwise, are overblown. Nonetheless, school officials joined others interviewed in the GAO and Senate studies in suggesting CI management improvements, such as clarifying U.S. universities’ authority and making agreements publicly available. CI partnerships also became embroiled in a Department of Education (DOE) initiative to enforce a foreign gift reporting requirement. After the 2019 Senate study found nearly 70 percent of universities that received more than $250,000 from Hanban failed to properly file, the drive focused on China, even though other countries were larger donors to U.S. higher education. The DOE report on the initiative’s results referenced CIs in connection with concerns that “foreign money buys influence or control over teaching and research.” Widespread non-compliance with the reporting requirement, more a matter of confusion than secrecy, prompted a new DOE reporting portal in June 2020. As tensions between the U.S. and China grew, federal policymakers frequently conflated CI-related academic freedom concerns with a broader set of issues including Chinese efforts to steal technology, intellectual property and research data; disruptive activities by some campus-based Chinese student associations and China’s consulates; Chinese talent recruitment plans; and other suspect influence efforts. Passed in August 2018, the 2019 National Defense Authorization Act (NDAA) prohibited the Pentagon from financing Chinese language programs at universities that host a CI, absent Department of Defense waivers, which have not been granted. 
Despite a bipartisan congressional finding announced in February 2019 of “no evidence that these institutes are a center for Chinese espionage efforts or any other illegal activity,” the 2021 NDAA broadens the restriction to funding for any program at universities that host CIs. China’s MOE reorganized the CI project in June 2020, implementing a CCP-approved reform plan to develop CIs as a “significant force” for cultural and educational exchange with other countries. MOE replaced Hanban with a new agency to manage overseas language and culture exchanges, the Center for Language Education and Cooperation (CLEC). CLEC will continue to help provide Mandarin teachers and requested teaching materials. However, the Chinese International Education Foundation (CIEF), a nominally independent organization registered with the Civil Affairs Ministry, supervised by MOE, and initiated by 27 Chinese universities, companies and social organizations, will manage the CI brand and program. CIEF is now responsible, working together with Chinese partner universities, for contractual and funding arrangements, not Hanban or MOE. This rebranding is unlikely to relieve suspicions about the role of CIs in China’s “soft power” projection. Chinese universities that participate in CIEF and serve as CI partners are mostly state-funded and, like everything in China, under CCP leadership. Moreover, as a recent study commissioned by China’s MOE observed, in a charged U.S. political atmosphere, the “Confucius Institute” brand is now associated with Chinese political interference. Nonetheless, at least one U.S. university, Georgia’s Wesleyan College, signed on with CIEF for the duration of its current CI agreement, although others in the U.S. and Europe are proceeding with announced closures. Elsewhere, CIs continued to open in Chile, South Africa, Kenya and Greece, with plans to establish them in Dominica, Maldives, Chad and Central Africa. In August 2020, the Department of State designated the Confucius Institute U.S. Center (CIUS) as a “foreign mission,” effectively controlled by the Chinese government that funds it. Established in Washington, D.C. in 2012 to promote Chinese language teaching and learning in the U.S., CIUS connects school districts interested in developing a Chinese language curriculum to appropriate CI and other resources, and provides professional development opportunities to Confucius Classroom teachers. While acknowledging that CIUS does not undertake diplomatic activities and none of its employees are government officials, the department characterized it as the “de facto headquarters of the Confucius Institute network” and “an entity advancing Beijing’s global propaganda and malign influence campaign on U.S. campuses and K through 12 classrooms.” Citing its opacity and state-directed nature as the “driving reasons behind this designation,” the State Department also directed CIUS to provide details on funding and curriculum materials it supplied to CIs and K-12 Confucius Classrooms and the names of all PRC citizens CIUS had referred or assigned to them. In its response to the department, CIUS explained that, although it seeks to foster awareness of CI programs, it does not fund, supply, staff, supervise or serve as a headquarters for CIs in the U.S. As a registered nonprofit corporation, its financials and related organizational details are publicly available through annual IRS Form 990s. 
Moreover, after the Hanban reorganization in June 2020, CIUS is no longer directly supported by China’s MOE, nor has it received any funding from CLEC or CIEF and must look to fundraising from Chinese and U.S. universities and other sources. Given this reorganization and CIUS’s role, the State Department might revisit its foreign mission designation. Regardless, CIUS could usefully serve as a visa sponsor, as do some states and nonprofits like the Cordell Hull Foundation, for U.S.-based CIs. Visa issues for visiting teachers have prompted suspensions and contributed to cancellation of some CI programs. As a centralized visa sponsor, CIUS could help ensure compliance with U.S. law and serve as an information clearinghouse on Chinese CI personnel in the U.S., one of the benefits the department had hoped to obtain from the CIUS foreign mission designation. A State Department report on the China challenge calls for the U.S. to train a new generation of public servants and policy thinkers to attain fluency in Chinese and acquire extensive knowledge of China’s culture and history. Yet, interest among U.S. students has been declining since peaking around 2011, as American views of China more generally have plunged to the lowest level since polling began. Multiple factors, including dimmer China-related job prospects, as well as pollution and academic and lifestyle concerns relating to study within the PRC, explain this trend. Nonetheless, official U.S. pressure to close CIs and their K-12 programs, including by withholding federal funds for universities that host CIs, is further exacerbating a national “language deficit” precipitated in part by decreased U.S. government higher education and foreign language funding over the years. In addition, some universities still have difficulty finding qualified Mandarin teachers, especially at the K-12 level, to satisfy remaining demand. Meanwhile, Chinese students are required to learn English from elementary school and as a requirement to gain admission to, and in many cases graduate from, college, with an estimated 400 million Chinese—including front-line military troops—now learning English. To be sure, some private U.S. NGOs offer Mandarin learning, including an Asia Society program with 35,000 students studying Chinese in 100 K-12 schools around the country that are linked with sister schools in China. U.S.-based China and Taiwan-oriented groups also offer various Chinese education, culture and teacher training courses, as well as teaching of Chinese dialects and traditional Chinese characters still used in Taiwan and Hong Kong. Nonetheless, federal funding is needed to adequately meet the Mandarin language challenge and lessen cash-strapped universities’ dependence on Chinese funding and other teaching support. The U.S. government launched an initiative with Taiwan in December 2020 to expand existing Mandarin language opportunities in the U.S. and help fill a gap created by CI closings. It should also increase Mandarin language and China studies funding under other critical language programs, and re-authorize the Fulbright program with China, including language awards, that were terminated in July 2020. Budget cuts impacting universities’ ability to finance their share of operating costs, coronavirus obstacles and low Mandarin class enrollment, compounded by federal government funding restrictions, may mean the end of CIs after a 15-year, generally controversy-free record in the United States. Yet the U.S. 
is facing a critical shortage of Mandarin-speaking China experts. Even critics concede the CI program has provided valuable learning experiences otherwise unavailable due to budget constraints and the lack of Mandarin teachers at universities and public schools across the nation. The Biden administration has the opportunity to reassess the concerns, evidence and U.S. actions taken with respect to the remaining Confucius Institutes and Classrooms. It should disaggregate legitimate national security concerns, including Chinese espionage and technology theft, from academic freedom issues that are best left to our universities. The federal government and Congress should work to protect our national security in a manner that does not impinge on the academic freedom or institutional autonomy they also seek to protect. Over 30 of the universities, as well as the College Board, that ended CI partnerships since 2017 did so under political pressure that threatened loss of federal funding—not over concerns of Chinese interference or declining interest. Marshall Sahlins, an early and eloquent CI critic who was instrumental in closing the University of Chicago CI in 2014, observed ironically in mid-2018 that “the American government now mimics the totalitarian regime of the PRC by dictating what can and cannot be taught in our own educational institutions.” Universities should, of course, continue to be vigilant against the potential for unwelcome influence including implicit pressure on faculty to self-censor, as well as to ensure compliance with the Department of Education’s foreign gift and other reporting requirements, and visa rules for CI exchange visitors. Given the allegations surrounding CIs, which continue to be pressed by bipartisan Congressional coalitions, CI host universities should all publish their CI agreements online. The CIUS, no longer directly funded by China’s MOE, is well positioned to serve as both a visa agent to help ensure appropriate visas are obtained and a clearinghouse for information on Chinese teachers and administrators working in CIs. More broadly, the U.S. government also has an urgent interest in stabilizing the U.S.-China relationship so that the two countries can work together constructively to meet common challenges. That formidable task requires the U.S. to foster more realistic and actionable expectations, criticisms and commitments, rather than policies and actions based on an alarmist China caricature that does not reflect the more complex reality of that country, its people and its behavior abroad. In an era of tight funding for and decline of interest in Chinese language and culture programs, and a clear need for cultivating Mandarin speakers and China expertise across multiple disciplines, the modest financial contribution and native Mandarin language professionals provided through an appropriately managed CI network should be welcomed, not castigated. |
db9a428c0f807b44ed31df54a23c8392 | https://www.brookings.edu/articles/japans-democratic-renewal-and-the-survival-of-the-liberal-order/ | Japan’s democratic renewal and the survival of the liberal order | Japan’s democratic renewal and the survival of the liberal order Japan plays a central role in the endeavor to rekindle liberal internationalism. The country boasts one of Asia’s oldest democracies and the fulcrum of institutions and norms of representative democracy: free and fair elections, rule of law, full civil rights, and freedom of the press. Given that stable democracies that have adjusted to economic globalization and avoided populist disruption are increasingly in short supply, Japan’s strengths are undisputable. However, the current era of political stability has derived largely from the fragmentation of opposition parties, and there are troubling signs of waning democratic dynamism such as voter apathy, tepid inter-party competition, and the weakening of accountability channels. Internationally, the United States and Japan share concerns about democratic recession and China’s coercive diplomacy. Even though Washington and Tokyo have not historically aligned on a strategy of democracy promotion, they can coordinate efforts to ensure democratic resilience and the survival of the liberal order. Japan has long pressed to include democracies in Asia’s regional architecture and to disseminate economic standards that tame corruption and curb digital protectionism, and it has cultivated deeper security cooperation with democracies that share strategic interests. These are diplomatic tracks that can find resonance with the Biden administration. As the second decade of the 21st century drew to a close, the world order has been consumed by a vortex of change. The casualties of the COVID-19 pandemic keep mounting both in terms of lives lost and livelihoods shattered. Government competence has been tested — and frequently found wanting — in the pressing tasks of outbreak control and long-term economic viability. For many nations, COVID-19 has underscored the growing inequalities of opportunity and risk and the deep tears to the social fabric that confound whole-of-society responses. The international order has not fared much better. The U.S.-China rift has deepened, vaccine nationalism and economic mercantilism have reared their ugly heads, and the spirit of multilateral cooperation has at times appeared depleted. Long gone are the days when liberal democracy looked ascendant. Not only have authoritarian governments perfected digital tools of social control, but populism has shaken the institutions of representative democracy in the West. The challenges for the United States, long a paragon of the free world, are particularly poignant as the recent assaults by members of the losing party on the integrity of the presidential election outcome attest. At this critical juncture, it is nevertheless possible to build on positive trends. Science, with the record-breaking development of vaccines, has offered us the truly viable strategy to overcome the pandemic. Asian democracies, leaning on the lessons learned from past infectious disease outbreaks, have demonstrated their ability to respond effectively to the current public health crisis without compromising civil liberties. Global supply chains have proven resilient and ensured that the world economy did not seize up with prolonged shortages of critical supplies. 
Middle powers have doubled down on rules-based trade brokering large-scale agreements. The incoming Biden administration has pledged to reassert American leadership in addressing transnational challenges, shoring up multilateralism, and reinvesting in alliances. Central to its governance project is a call for democratic renewal, both at home and abroad. These tasks will undoubtedly face myriad obstacles. Japan plays a central role in the endeavor to rekindle liberal internationalism. The country boasts one of Asia’s oldest democracies and the fulcrum of institutions and norms of representative democracy: free and fair elections, rule of law, full civil rights, and freedom of the press. Japan’s credentials as a consolidated democracy matter more at a time of widespread democratic backsliding and the rising influence of an authoritarian economic behemoth like China. The move away from “America First” transactionalism offers an opportunity for the United States and Japan to deepen bonds based on shared values and seek to leverage their partnership to tackle transnational challenges and shore up multilateralism. Making the U.S.-Japan alliance a bulwark of democracy is a shared and important goal, one that has acquired greater urgency in light of the profound challenges to democratic governance in America. Given that stable democracies that have adjusted to economic globalization and avoided populist disruption are increasingly in short supply, Japan’s strengths are undisputable. Yet, the nation’s penchant for stability has also come at a price, with troubling signs of declining democratic dynamism: weakened inter-party competition and disengaged voters frustrated by insufficient government transparency and responsiveness. In the past, Tokyo’s forays into values-based diplomacy proceeded haltingly and the U.S. and Japan have not converged on the goals and tactics of democracy promotion. Japan faces a distinct set of challenges in reinvigorating its own democracy, but can also share some lessons learned in its efforts to deepen collaboration with fellow democracies and to promote governance and rule of law via economic assistance to developing Asia. Democratic resilience, functional collaboration among like-minded countries, and a commitment to a rules-based order in the Indo-Pacific offer significant venues for U.S.-Japan collaboration. In understanding Japan’s democratic trajectory, it is helpful to revisit the moniker applied to the country during the Cold War era as an “uncommon democracy.” The term was used to denote the phenomenon of decades of unbroken rule by a dominant party (the conservative Liberal Democratic Party, or LDP) in a political system with free elections and media, and well-established civil and political rights.1 Of course, much has changed domestically and internationally since. The LDP lost power twice (for a few months after the election of August 1993 and for three years in 2009-2012), and the geopolitical context has also changed dramatically with the collapse of the Soviet Union, a moment of American unipolarity, and the shift to U.S.-China strategic rivalry. And yet, Japan has reverted to a dominant political ticket (the LDP plus its coalition partner Komeito, a lay-Buddhist party) dwarfing opposition parties. 
The U.S.-China context does not approximate the original Cold War (given China’s extensive integration into the modern world economy and the absence of a formal Chinese bloc of influence), but it has increasingly acquired an ideological undertone with calls for the “free world” to oppose Chinese authoritarianism. Hence, “uncommon democracy” offers a useful frame to highlight what sets Japan apart from other liberal democracies rocked by populism, its distinctive democratic evolution where apathy trumps polarization,2 and the different approach on democratic support pursued by Tokyo in its aid diplomacy towards Asia — ground zero for U.S.-China strategic competition. Populism is far from a new political phenomenon, but it has risen lately to new prominence in Western democracies as it has made electoral inroads with the growing weight of far right parties in Continental Europe, the success of the Brexit campaign, and a populist American president in Donald J. Trump. Populism thrives when the political establishment appears ineffective in addressing the concerns of disaffected citizens, and it has important consequences for both representative democracy at home and an open international order. It can yield illiberalism by promoting exclusionary politics (where only the interests of the “true people” matter), and by eroding institutional checks and balances. And to the extent that the country’s ills are attributed to outside forces, populist governments favor closed borders and economic nationalist policies. Japan’s political stability has cut a stark contrast to the upswing of populist forces elsewhere. In September 2020, Prime Minister Shinzo Abe finished an eight-year run in office as the longest-serving prime minister in Japan’s history. During his tenure, Japan moved towards more economically liberal policies by assuming leadership in ambitious trade negotiations and with more modest immigration reforms to allow entry of manual workers. While the lock of the establishment on Japanese politics looks secure, the public has grown frustrated with prolonged economic stagnation and the palpable rise of income inequality. The disappointment has manifested in growing ranks of unaffiliated voters (by some counts 39% of total voters in the last general election in 2017) that can decide elections when persuaded by a candidate’s reform promises. They came in force in 2005 for Prime Minister Junichiro Koizumi when he attacked vested interests within his own party, in 2009 for Prime Minister Yukio Hatoyama when he led the Democratic Party of Japan’s dethroning of the ruling party, and in 2018 for Governor of Tokyo Yuriko Koike when she promised to disrupt the “old boy network.” But none of these maverick politicians preached or practiced populism to the detriment of Japan’s representative democracy. Nor has the Japanese public scapegoated globalization. If anything, with the realization that demographics dictate a shrinking internal market, an open trading system is seen more as opportunity than peril.3 The rhythm of postwar Japanese politics was largely set by the rules of electoral competition. The single non-transferable vote in multi-member districts was not widely adopted elsewhere, but it had important consequences for the evolution of Japan’s democracy. It compelled members of the same party to face each other at the electoral booth, thereby weakening party labels and encouraging the operation of party factions to manage internal competition over nominations, funds, and posts. 
Individual candidates sought to differentiate themselves by cultivating ties with blocs of organized voters (construction, agriculture, etc.) or catering to the needs of local constituencies. Pork barrel practices and money politics were rampant. The electoral and political funding reforms ushered in during the brief stint of a coalition of parties in opposition to the LDP in the early 1990s created an opportunity to improve the quality of Japanese democracy. Increased transparency in political funding with the establishment of public subsidies for political parties and the adoption of a hybrid electoral system (with Japanese citizens casting two votes in Lower House elections: one for a candidate in single-member districts and another for a party in regional blocs with proportional representation rules for the allocation of seats) transformed the currents of Japanese politics. The role of factions declined and programmatic campaigns became more important to electoral contests. While corruption was not eliminated, it did diminish overall, and the pattern of politically-targeted fiscal spending (with “bridges to nowhere”) abated.4 Much hope was placed on the emergence of a robust two-party system to bring about greater government accountability and spur healthy debates over competing policy platforms. After the creation of the Democratic Party of Japan (DPJ) in 1996, Japan did experience strong inter-party competition during the 2000s. When the DPJ won the national election in 2009, it promised to usher in a new era for Japanese politics by reining in bureaucrats, increasing politician oversight, and cutting wasteful spending. However, the DPJ botched domestic policymaking, created friction with the United States over the fate of the Futenma air base in Okinawa, and found itself ill-prepared to respond to the Triple Disaster of March 2011 (earthquake, tsunami, and nuclear power plant accident). In 2012, the party was voted out of office as the LDP under Shinzo Abe made a comeback. The short-lived DPJ experiment is frequently described as Japan’s inoculation to populism, but its impact is more profound: It dashed hopes that party turnover could bring genuine political and economic reform. The DPJ was unable to regain voters’ confidence and eventually fractured, paving the way for a string of six wins in national elections (three each for both the Lower House and the Upper House) for the current ruling coalition, which has resulted in a commanding position in both houses of the National Diet. In September 2020, the splinter groups of the DPJ coalesced into the Constitutional Democratic Party of Japan (CDP, established in 2017), with 150 Lower House seats, far short of the 233 needed to create a majority. Recapturing the public’s trust is still an uphill battle, however, with a modest 8% support rate for the consolidated party. Japan’s much vaunted political stability, therefore, has much to do with the implosion of a viable opposition and is less motivated by a ringing endorsement of government performance. The Japanese public’s normative support of representative democracy remains very strong. In the 2016 Asian Barometer Survey, 95% of Japanese respondents endorsed the notion that while democracy may have its problems, it still is the best form of government. However, there is frustration with the degree of government responsiveness to the public’s demands and pessimism over the country’s long-term future. 
According to a 2018 Pew Research Center poll, only 35% of respondents feel that elected officials care about what ordinary people think, and 62% believe that no matter who wins, things do not change much. The marked disengagement from politics does not bode well for a dynamic democracy. Voter turnout in general elections dropped from 69.3% in 2009 to 59.2% in 2012, then bottomed out at 52.7% in 2014, and had a small recovery to 53.7% in 2017. More broadly, few Japanese report a disposition to participate in civic/political activities: In a 2014 NHK survey, 70% of survey respondents disclosed no intention to partake in demonstrations, political assemblies, or letter writing to parliamentarians or media outlets.5 Apathy towards the political process hinders the vitality of Japanese democracy, as captured by the democracy index compiled by the Economist Intelligence Unit. In 2019, Japan ranked number 24 (sitting one slot ahead of the United States) with an aggregate score of 7.99. Japan had higher marks on electoral process, pluralism, and civil liberties, but its lowest scores were in the categories of political participation and culture, which measure, among other things, citizen engagement in politics and the number of female politicians, both of which are weak spots for Japan. The close-knit ties between journalists and public officials through the kisha kurabu system (media associations that receive exclusive access to public figures) hinder the participation of outside media and more robust investigative journalism. Those concerns are a primary reason for Japan ranking number 66 in the 2020 World Press Freedom Index. The arrival of political stability during Abe’s second premiership (2012-20) afforded Japan many advantages. Executive leadership enabled the country to become a more proactive actor in world affairs and to make progress in the fight against domestic deflation. However, the sources of stability matter, and when it is derived from the inability of the opposition camp to offer a meaningful alternative to LDP rule, the vibrancy of democracy suffers. The weakening of meaningful political competition in Japan has had significant consequences. For one, it encourages citizen passivity towards a political process that seems incapable of delivering change. It also dulls legislative deliberations, and without the prospect of alternation in office between competing political tickets, transparency and accountability weaken. It is no coincidence that some of the political scandals that emerged in the Abe years flowed from the phenomenon of sontaku (government officials awarding policy favors to individuals who they surmise enjoy the prime minister’s support). Robust electoral competition incentivizes governments to remain attuned to public demands. The LDP, and Abe in particular, mounted a comeback in 2012 showing such responsiveness, running with a platform of economic revitalization and an unconventional reflationist strategy.6 The stimulus to remain nimble, however, has since receded. Japan, therefore, is managing its first leadership transition in eight years at a time when the pandemic has delivered the worst economic crisis since the Great Depression. Yoshihide Suga, Abe’s former chief cabinet secretary, clinched his position as Abe’s successor as prime minister both by skillfully navigating factional politics and by portraying himself to the public as a steady hand at a time of turbulence. 
In aiming to deliver structural reform, Prime Minister Suga has rolled out two landmark initiatives: digitalization and carbon neutrality by 2050. Because these are cross-cutting reform initiatives that will impact multiple vested interests and step into sensitive bureaucratic turfs, they are bound to elicit strong resistance. Nevertheless, Suga has very little time to prove that he will be more than a caretaker leader capable of delivering on an ambitious economic reform program. He faces both reelection as LDP party president and a general election in less than a year. Greatly complicating his chances of success is the fact that the honeymoon period with the Japanese public proved short. His decision to break precedent in not appointing the full roster of nominees to the Science Council, the resurfacing of Abe’s scandals over alleged violations of political funding laws (by subsidizing cherry blossom viewing parties for his supporters), and more importantly, dissatisfaction with the government’s pandemic response as COVID-19 cases continue to soar have resulted in a marked drop in public support: from 74% in September to 42% in December. As Japan faces this most severe crisis and with voters heading to the polls later this year, much hangs in the balance for Japan’s political parties: Can the ruling coalition convincingly steer the ship in addressing these pressing national problems? Can a reconsolidated opposition party rise to the occasion and offer a credible alternative to meet these challenges in order to recapture the public’s trust? Political stagnation could yet again bring back the disruptive dynamic of a rapid turnover of prime ministers — which would hamper both the quality of domestic governance and the ability of Japan to sustain a proactive international role. A potential inward turn would be much more consequential at a time when Japan has filled an international leadership vacuum with initiatives such as the Free and Open Indo-Pacific strategy (FOIP). Democratic recession and the weakening of the liberal international order are shared concerns for the United States and Japan. However, values-based diplomacy has evolved differently in each country, and there has not been convergence on the policy of democratic promotion. Japan’s most explicit effort to incorporate universal values into its foreign policy took place in the mid-2000s in the form of the Arc of Freedom and Prosperity initiative. Keenly aware of China’s growing clout in the region, Tokyo chose to emphasize values such as freedom, democracy, and fundamental human rights to distinguish its diplomatic approach from China’s. However, this redirection in Japanese diplomacy proved ephemeral. The policy was quickly shelved when it faced pushback from China who deemed it a containment ploy and was received unenthusiastically in Southeast Asia where the norm of non-interference in domestic affairs runs deep. The role of democratic support in Japan’s aid programs has differed from Western liberal democracies. Japan has shied away from political uses of aid, citing painful memories of Japanese militarism in Asia, the desire to maintain stability in recipient countries and not engage in nation-building, and a belief that gradual governance reforms will improve democracy’s chances in the future. Hence, Japan’s share of economic assistance allocated to civil society is very modest (1.7% of the total aid budget) and it operates through official channels responding to project requests from counterpart governments. 
As Yasunobu Sato explains, the precepts of the Japanese approach have been human security (with an emphasis on freedom from want by attending to basic human needs) and rule of law. Japan’s lower-case democracy efforts include judicial capacity building, civil code development, and election support. They are deemed essential to check abuse of state power, protect human rights and adjudicate conflicts, and develop the institutions of a market economy.7 Japan’s distinctive approach to democracy support has been met with skepticism in some quarters. For critics, Tokyo is willing to prioritize strategic interests at the expense of defending universal values, and its cautious approach to democratic support runs the risk of solidifying the hold of authoritarian regimes.8 In launching the Free and Open Indo-Pacific strategy in 2016, Japan sought to respond to an even starker shift in the regional balance of power and to leave its own imprint on the regional architecture. The diplomatic push advances Japanese national interests but is also built on lessons learned from previous forays into values-based diplomacy. In FOIP, Tokyo has sought to prevent China’s dominance in regional affairs, yet has also emphasized inclusivity by leaving open the possibility of cooperation if China abides by Japan’s higher standards for economic integration. The FOIP incorporates values such as openness, freedom, and commitment to universal rights, and promotes a connectivity agenda with economic assistance programs to improve governance and free trade. A central line of effort is the dissemination of liberal values such as respect for international rule of law, freedom of navigation, and the rejection of coercion to solve inter-state disputes. As Maiko Ichihara carefully documents, Japan’s foreign policy discourse on values has shifted over time, with more recent emphasis on liberal rules that both democracies and non-democracies can endorse to bring stability to the region.9 This, more than democracy promotion, is an area where Japan and the United States have found alignment. Cooperation with established Indo-Pacific democracies that share strategic interests is a core tenet of Japan’s FOIP strategy. By resurrecting the Quadrilateral Security Dialogue (or simply, “the Quad”) framework, which includes Australia, India, and the U.S., Tokyo is placing a bet on closer ties with maritime democracies that can countervail China’s influence based on greater defense and intelligence cooperation. Both values and interests inform this track of Japanese diplomacy, and the desire to buttress security ties with each of the Quad members is unmistakable. Japan has sought greater military interoperability with the United States, and the Abe administration reinterpreted the right of collective self-defense to enable Japan to assist its ally if under attack — provided Japan’s own security is also at stake. Ties with India have also deepened with the launch of a 2+2 ministerial dialogue and Japanese participation in the Malabar military exercises. Japan’s security partnership with Australia has made great strides, most recently with the November 2020 reciprocal access agreement, which grants military personnel from each country rights to visit and participate in joint training. The launch this year of a Quad-Plus mechanism to address the pandemic by including three additional countries (New Zealand, Vietnam, and South Korea) shows flexibility in embracing functional cooperation with like-minded countries. 
It also offers a glimpse of hope for Japan and South Korea to cooperate in addressing common challenges given the profound deterioration of relations between America’s two democratic Northeast Asian allies. The newly arrived Biden administration has announced a major shift in U.S. foreign policy priorities — with calls for a Summit for Democracy to address the world’s most pressing challenges. It also confronts at home a profound crisis for American democracy as painfully captured by the attack on the Capitol in a failed attempt to prevent Congressional certification of the presidential election results. Hence, the fight for democratic renewal has acquired new poignancy for the United States, creating a greater sense of identification with struggles elsewhere to defend democracy. Times are hard, but there are ample opportunities for the United States and Japan to jointly advance their shared values and interests. Japan and the United States have in the past taken different approaches to the task of democracy support, but share a common concern over democratic recession in Asia and the use of coercion to settle inter-state disputes. The allies can find common ground in ensuring the resilience of liberal democracy and of the liberal international order. To this end, there are elements from Japan’s own experience that can be helpful. First, a Summit for Democracy may appear exclusionary, blunting its appeal and effectiveness. An alternative approach is to use the politics of membership towards greater inclusivity and representation of democracies in the regional architecture. This is the tack that Japan used in the membership battles over the East Asia Summit and the Regional Comprehensive Economic Partnership to ensure greater democratic representation in Asian institutions to balance China’s weight. Second, much can be learned from Tokyo’s embrace of functional cooperation with democracies and non-democracies to disseminate rules that promote good governance and limit the reach of state-controlled models (digital economy) or corrupt practices (transparency and sustainability in infrastructure finance). Functional cooperation and rule-making efforts show great promise in tackling areas such as vaccine distribution, supply chain resilience, international debt management, and open and trusted data flows. Finally, where values and interests align, deeper bilateral security cooperation and networked cooperation can be effective, as the revival of the Quad suggests. The return of the United States to multilateralism, regionalism, and the commitment to coordination with allies should energize these endeavors. |
bb8764f8f25673a3dda405eb4ae264b6 | https://www.brookings.edu/articles/mass-displacement-caused-by-conflicts-and-one-sided-violence-national-and-international-responses/ | Mass Displacement Caused by Conflicts and One-Sided Violence: National and International Responses | Mass Displacement Caused by Conflicts and One-Sided Violence: National and International Responses INTRODUCTION Massive displacement of people within countries and across borders has become a defining feature of the post-cold war world. It is also a major feature of human insecurity in which genocide, terrorism, egregious human rights violations and appalling human degradation wreak havoc on civilians. The need of internally displaced persons (IDPs), people forcibly uprooted in their own countries, for international protection from conflict and one-sided violence was one of the factors that prompted a shift in global policy and security thinking. Over the past two decades, a strictly state-centred system in which sovereignty was absolute has evolved into one in which sovereignty has become a matter of international concern and scrutiny. This evolution largely grew from the efforts of the human rights movement, which had long championed the view that the rights of people transcend frontiers and that the international community must hold governments to account when they fail to meet their obligations. It also arose from the efforts of the humanitarian community to reach people in need. The deployment of large numbers of relief workers and peacekeeping operations in the field to protect civilians reflects this new reality, as do preventive and peacebuilding efforts. Nonetheless, concepts of sovereignty as responsibility and the responsibility to protect remain far ahead of international willingness and capacity to enforce them. The failure of states to protect their citizens has too often been met with a weak international response. It is therefore critical that the United Nations, concerned governments, regional bodies and civil society assist states in developing their own capacities to prevent mass atrocities while also pressing for the development of the tools needed to enable the international community to take assertive action when persuasive measures fail and masses of people remain under the threat of violence and humanitarian tragedy. This chapter examines the challenges posed by mass displacement caused by violence. Section II looks at the scale and nature of displacement, presents examples of states’ failure to protect their citizens and discusses the consequences of displacement. Section III focuses on the political, legal and operational steps needed to provide greater protection for displaced populations and other civilians caught up in massive violence. Section IV presents conclusions and recommendations for the way forward. |
0f7f27b33e129bdfbf39a44dd92b2d80 | https://www.brookings.edu/articles/meet-the-sims-and-shoot-them/ | Meet the Sims…and Shoot Them | Meet the Sims…and Shoot Them The country of Ghanzia is embroiled in a civil war. As a soldier in America’s Army, your job is to do everything from protect U.S. military convoys against AK-47-wielding attackers to sneak up on a mountain observatory where arms dealers are hiding out. It is a tough and dangerous tour of duty that requires dedication, focus, and a bit of luck. Fortunately, if you get hit by a bullet and bleed to death, you can reboot your computer and sign on under a new name. America’s Army is a video game — a “tactical multiplayer first-person shooter” in gaming lingo — that was originally developed by the U.S. military to aid in its recruiting and training, but is now available for anyone to play. Among the most downloaded Internet games of all time, it is perhaps the best known of a vast array of video game-based military training programs and combat simulations whose scope and importance are rapidly changing not just the video-game marketplace, but also the way the U.S. military finds and trains its future warriors and even how the American public interfaces with the wars carried out in its name. For all the attention to the strategic debates of the post-9/11 era, a different sort of transformation has taken place over the last decade — largely escaping public scrutiny, at modest cost relative to the enormous sums spent elsewhere in the Pentagon budget, and with little planning but enormous consequences. These “games” range from the deadly serious, like programs designed to train soldiers in cultural sensitivity or help veterans overcome the trauma of combat, to the truly outlandish, like a human-sized hamster wheel that makes virtual-reality software feel more realistic. There are even video-game modules that teach soldiers about the perils of sexual harassment. All told, the U.S. military is spending roughly $6 billion each year on its virtual side, embracing the view, as author Tom Chatfield put it, that “games are the 21st century’s most serious business.” The link between games and war goes all the way back to “boards” scratched onto the back of statues by Assyrian guards almost 3,000 years ago. Three millennia later, as the U.S. military recruits from, and is increasingly led by, a generation raised on Grand Theft Auto, real warfare is taking on the look and feel of a video game, from the aerial drones launching precision strikes at terrorists in remote hideouts in Afghanistan and Pakistan to the joystick-controlled robots defusing roadside explosives in Iraq. “The biggest change is that it’s gone from being unique to being ubiquitous. It’s everywhere now,” Mark Sinclair, a staff vice president at military contractor General Dynamics, told a U.S. Navy journal. The Pentagon’s embrace of video games is part of a much larger phenomenon — “militainment” — that is reshaping how the public understands today’s conflicts. The term was first coined to describe any public entertainment that celebrated the military, but today it could be redefined to mean the fascinating, but also worrisome, blurring of the line between entertainment and war. For example, while America’s Army is technically a publicly funded recruiting and training platform, its main commercial rival is Call of Duty: Modern Warfare 2, a game published by Activision Blizzard. The two games compete for market share, but also over who can better define contemporary war zones. 
In America’s Army, you deploy to the fictitious country of Ghanzia; in Modern Warfare 2, you join a U.S. special operations team that roams from Afghanistan to the Caucasus, winning hearts and minds (or losing them) with a mix of machine pistols and Predator strikes. The players also fight it out in a range of potential future areas of conflict, from Brazil’s rough urban favelas to a simulated Russian invasion of Washington, D.C. (This is actually a major flaw in the game; any invasion force would clearly get stuck in Beltway traffic.) The stakes are high. Modern Warfare 2 came out on Nov. 10, 2009. By the end of the next day, it had racked up $310 million in sales. To put this in perspective, Avatar, James Cameron’s latest Hollywood blockbuster (notably following an ex-Marine remotely fighting through a video-game-like battle environment), earned a measly $27 million on its first day. Another comparison might be even more apt. Roughly 70,000 young Americans chose to join the U.S. Army last year. By contrast, 4.7 million chose to spend Veterans Day playing war at home. And this is no mere American trend. More than 350 million people play video games worldwide, with the war-oriented sector perhaps the most important part of the global market. Modern Warfare 2 may have players join a U.S. special operations team, but one out of every 49 British citizens did so in its first 24 hours. Niche games have also amassed huge followings; in the polarized Middle East, Hezbollah-produced Special Force plays out an attack on Israeli soldiers, while Ummah Defense provides the vicarious thrill of taking on the U.S. military, Israeli settlers, and killer robots. Reporting on militainment sometimes suffers from either an uncomfortable gee-whiz quality or blanket condemnation, when the story is far more complex. There’s much to be impressed by: The Pentagon is saving millions in training costs while creating a learning environment that can look astonishingly like the real thing, potentially saving real-world lives. But as the training and fighting — and even the public’s relationship with war — becomes ever more distant and virtual, there is also an emerging dark side to keep our (glazed-over) eyes on. Be (Online) All That You Can Be Like many innovations, America’s Army and the broader rise of militainment didn’t grow out of a grand strategic plan. Almost a decade ago, a group of U.S. Marines hacked the commercial video game Doom II to create “Marine Doom,” a software program that helped them teach urban warfare (instead of fighting the demons of hell, the Marine version has players team up to take out an enemy bunker). Col. Casey Wardynski, then a professor at West Point, was impressed by the program, as well as by the fact that his two teenage sons were fans of action video games. While researching how to build an increasingly high-tech U.S. military, he approached the Pentagon about making an online game as a recruiting tool. The idea stuck, and in 2000 the Army contracted the Naval Postgraduate School to create it. After two years of development, the game, called America’s Army, was released at the Electronic Entertainment Expo, a sort of annual pilgrimage for video-gamers that draws some 60,000 people to the Los Angeles Convention Center. What happened next surprised all: The Army didn’t just have a new recruiting tool, but an actual market hit. 
It quickly became one of the top 10 most popular games on the Internet, and within its first five years, some 9 million individuals had signed up to join America’s video-game army, spending some 160 million hours on the site and making it one of the top 10 of all video games, online or otherwise. From the Army’s perspective, commercial triumph was secondary. Its goal was to recruit. And at this, too, the game proved to be a wild success. To log on to the game, you have to connect via the Army’s recruitment website and fork over your information. Gamers can also check out profiles of current Army soldiers and video testimonials of why they joined. Just one year after America’s Army was released, one-fifth of West Point’s freshman class said they had played the game. By 2008, a study by two researchers at the Massachusetts Institute of Technology found that “30 percent of all Americans age 16 to 24 had a more positive impression of the Army because of the game and, even more amazingly, the game had more impact on recruits than all other forms of Army advertising combined.” Notably, this is from a game that the Pentagon has spent an average of $3.28 million a year developing and promoting over the last 10 years — compared with the military’s roughly $8 billion annual recruiting budget. Behind the success of America’s Army lies a story of entrepreneurship and good old-fashioned interservice rivalry. As one of the developers explained in an online forum, “The Navy started to get pissed at the Army because there was never any mention that the game was actually built within a Naval think tank.” So, with the Navy angry, the Army did the only logical thing: It brought in the French. Future versions of America’s Army were licensed to French game company Ubisoft, which also allowed the game to be used on Microsoft Xboxes — a breakthrough because it meant the game could penetrate a much wider marketplace. Ubisoft paid the Army $2 million upfront, plus 5 percent in royalties per game sold. The Army also kept its right to edit content; video games were becoming too violent even for the U.S. military. As Wardynski, the program’s originator who now directs the Army Game Project that administers it, told National Defense, the military wanted to ensure that it wouldn’t be “the sort of game where you tear off someone’s arm and beat them to death with it.” Instead, America’s Army is meant to imbue potential recruits with traditional military values. To join, would-be video soldiers have to pass training sessions not only for the game but also for U.S. military systems, lingo, and values. Over time, they can go through added training to specialize in becoming anything from a Javelin missile operator to a Humvee driver. Once in, players enter a virtual battle built around a scenario from recent real-world U.S. Army experience — in a squad with other online soldiers anywhere in the world. America’s Army has proven so popular globally that, with so many users signing on from Internet cafes in China, the Chinese government tried to ban it. Real soldiers play too, usually identifiable by the unit insignias listed in their gamer names. The game itself isn’t just a regular shoot ’em up, but features an “honor system.” Those who cooperate the best tend to win the most, while the go-it-alone Rambo types tend not to last too long. Following the rules of engagement wins extra points, as does stopping to give medical care to your buddies. Commit a “friendly fire” incident, and you get banned from the server. 
If you log back on under that same account, your point of view is from behind jail bars in a virtual Fort Leavenworth military prison. The game is not without its shortcomings, suggesting a far more antiseptic version of war than the real thing. Get hit by a bullet and there is only a tiny puff of pink smoke that quickly disappears. And the real world’s “fog of war” — the chaos, confusion, and mistakes in battle that Prussian überstrategist Carl von Clausewitz defined as an enduring feature of warfare — doesn’t cloud the game. One scenario, for example, is built around a real battle during the 2003 invasion of Iraq. A team of Green Berets was attacked by an Iraqi motorized infantry company backed by artillery, tanks, and armored personnel carriers. As the real-life Green Berets did, you can beat back the virtual Iraqis by deftly firing Javelin anti-tank missiles. However, the game leaves out the part when the Green Berets called in an airstrike. Instead of hitting the Iraqis, the plane mistakenly dropped a bomb on friendly forces, killing 17 U.S. troops and their Kurdish allies. Virtual Vets America’s Army quickly expanded from a potent recruiting tool into a valuable training system for soldiers already in the military. Military contractor Foster-Miller’s Talon robot, for example, is used widely in Iraq and Afghanistan to dismantle roadside bombs, the most deadly weapon used against U.S. troops there. The game’s Talon training module cost just $60,000 to develop, but took training in how to operate robots in war to a whole new level. “Prior to this, the only way to train was to take the robot and the controller to the trainees, give them some verbal instruction, and get them started,” Bill Davis, head of the America’s Army future applications program, told National Defense. “This allows them to train without breaking anything.” But with these advances, it’s getting harder to figure out where the games end and the war begins. In Talon the game and the real-life version, soldiers are watching the action through a screen and even holding the very same physical controllers in their hands. And these controllers are modeled after the video-game controllers that the kids grew up with. This makes the transition from training to actual use nearly seamless. As one Foster-Miller executive explained to me, describing the game’s training package for the Talon’s pissed-off big brother, the machine gun-armed Swords robot, “With a flip of the switch, he has a real robot and a real weapon.” Because of “the realism,” he said, the company is finding that “the soldiers train on them endlessly in their free time.” Such “serious games,” as the Army calls them, go well beyond the America’s Army series. One program trains aerial drone operators while Saving Sergeant Pabletti teaches some 80,000 Army soldiers a year what is and isn’t sexual harassment. This use of gaming extends across the entire experience of war. Virtual Iraq is being used at 40 clinics around the United States to help the thousands of veterans returning from war cope with post-traumatic stress disorder. The game platform allows them to re-experience traumatic episodes in a safe environment. These training tools are not just for raw recruits. For example, Gator Six is built around 260 realistic video clips that simulate many of the difficult judgment calls an officer might have to make in modern wars. 
Designed with the help of 20 Afghanistan and Iraq war veterans, it is broken into three parts: pre-deployment, combat, and transitioning into post-conflict. The military is also turning to video games to help raise its soldiers’ cross-cultural understanding. Army 360 presents 11 “choose your own adventure” mission episodes that take place in present-day Iraq, Afghanistan, and Somalia. The player chooses from four or more decision paths at various stages in the mission, which ultimately lead to as many as 20 different outcomes. Produced by Ken Robinson, a 24-year Special Forces veteran, the game aims to prep soldiers for the real conundrums that they might experience when deploying into different cultures. For instance, upon hearing a burst of AK-47 fire, an infantry patrol leader might mistake a wedding celebration for an ambush, taking the game down a far more dangerous path. Much like their civilian counterparts, military game designers are moving from two-dimensional screens and sitting behind simulator mock-ups to three-dimensional experiences that hit multiple senses. At the Institute for Creative Technologies, founded with a $100 million Pentagon grant and housed at the University of Southern California, the self-described mission is to tap the best of Hollywood to create “synthetic experiences so compelling that participants react as if they are real.” Its Army leader training module, for example, builds off a scenario that has more than 300 pages of script, works in 75 characters, and was even coordinated with USC’s School of Theatre. The program sends trainees out wearing not just 3-D goggles, but also a “scent collar,” which sprays what one reporter described as “aromatic microbursts — of anything from cordite to spices.” One driving simulator is said to be so lifelike that the participants often unconsciously wave to virtual trucks that drive by. A few of the virtual drivers even get carsick. Another center for military gaming research — conveniently located just a short drive from Walt Disney World — bills itself as “a theme park on steroids.” It marries physical sets with virtual-reality projectors for an experience its creators dub “mixed reality.” The Pentagon’s ultimate goal with all this gaming technology is to create “simulation on demand” for almost any military skill set. The concept is not unlike Neo’s kung fu crash courses in the movie The Matrix: When the military needs its people to learn something, it’ll be able to get personally tailored, realistic training instantly. A good example is a tool for young officers called Self-Directed Learning Internet Modules, or SLIMs. They turn the latest military intelligence reports into packets that can be “played” by soldiers. Instead of just getting a briefing a few hours before a mission, the officer can virtually “play” what he might have to do in his real-world mission. For example, a Sadr City SLIM, layered on top of a 3-D map of the Baghdad neighborhood, is the game’s version of what a patrol there might entail. Kids come out and warn of a mine, and then the player has to figure out whether to believe them. A woman screams that the Americans killed her husband, and he has to decide how to respond. Hidden within the training are also lessons the Army wants the young officer to pick up. Taking digital photos and GPS locations of key sites, for example, bumps up the tally, and a “hearts and minds” score measures interaction with the public. The collective effect is potentially revolutionary. 
Game-based training can be tailored to specific scenarios as well as to an individual’s own rate of learning, sped up or slowed down based on how quickly he or she acquires knowledge. The result is an enormous gain in efficiency. The Navy, for example, switched to such programs for its communications technicians and estimates that it saved some 58 man-years in training time. Virtual training is also appealing because it allows soldiers to learn and exercise their skills, again and again, without the accompanying physical risks. “Combat veterans live longer,” Col. Matthew Caffrey, professor of war gaming and planning at the Air Command and Staff College, told National Defense. “One reason we use war games is to make virtual vets.” For the military as a whole, there are also tens of millions of dollars saved by conducting virtual training at the unit level. One recent war game, for instance, linked the crew of a U.S. aircraft carrier at sea, British and German submarines (docked in harbor, but the crews were on simulators), British airplane crews (sitting at simulators), and a set of Patriot missile batteries (working on practice mode). The players shared information and made joint decisions, just as if the exercise were real. The Navy estimates that its use of gaming at bases, in lieu of doing the same exercises at sea, saves it some 4,000 barrels of fuel a year, while simulated missile launches, rather than firing the real thing, save some $33 million annually. Avatar Fatigue Not everything about militainment is controversial: Who is going to complain, after all, about trying to find a better way to save soldiers’ lives, help trauma victims, or prevent sexual harassment? And as Maj. Gen. John Custer told Training & Simulation Journal, the world has changed: “You have to realize what generation you’re trying to teach. You know what? PowerPoint is not the way to go.” But there are many concerns about what these dramatic changes mean for war’s future. With only so many hours in the day, some in the military worry that video games are beginning to edge out real-world training. Navy Capt. Stephen David complained in the service’s in-house journal that the virtual vets arriving aboard his ship lacked “the requisite familiarity with even the most basic shiphandling skills.” Others raise what is called the “O’Brien Effect,” referring to the time talk-show host Conan O’Brien challenged tennis champion Serena Williams to a match, only to defeat her on the Nintendo Wii. At some point, piloting a plane in combat is different from piloting a computer workstation, just as hitting a real tennis ball is not the same as hitting the Wii version. The real danger of militainment, though, might be in how it risks changing the perceptions of war. “You lose an avatar; just reboot the game,” is how Ken Robinson, the Special Forces veteran who produced Army 360, put it in Training & Simulation Journal. “In real life, you lose your guy; you’ve lost your guy. And then you’ve got to bury him, and then you’ve got to call his wife.” This is not just an issue for the military, but also for a broader public that has less and less to do with actual war. 
As Celeste Zappala of Philadelphia, a mother who lost her son in Iraq, told Salon, “I’ve always believed when people participate in virtual violence, it makes the victims of violence become less empathetic and less real, and people become immune to the real pain people suffer.” But for most parents, having to send their children to war is not something they worry about, even as it becomes something that more of them play at. At the same time, the nexus of video gaming, war, and militainment is growing even fuzzier with the rapid growth in unmanned systems that use video-gaming technology to conduct actual military operations (the United States now has some 7,000 unmanned systems in its aerial inventory and another 12,000 on the ground). Indeed, the executive at robot-maker Foster-Miller worries that it is becoming too fuzzy. “It’s a Nintendo issue,” he told me. “You get kids used to playing Grand Theft Auto moving on to armed robots. Are you going to feel guilt after killing someone?” With more and more soldiers sitting at a robot’s computer controls, experiencing no real danger other than carpal tunnel syndrome, the experience of war is not merely distanced from risk, but now fully disconnected from it. One Air Force officer speaking to Wired’s Noah Shachtman about his experiences in the Iraq war, which he fought from a cubicle hundreds of miles away, described the feeling: “It’s like a video game…. It can get a little bloodthirsty. But it’s fucking cool.” A commander of a Predator drone squadron based in Nevada probably best summed up to me the quandaries, for both the military and the public. A former F-15 pilot, the officer described the new generation of unmanned systems operators with awe. Years of video gaming had made them “naturals” in the fast-moving, multitasking skills required for modern warfare. But there was also a cost. “The video-game generation is worse at distorting the reality of it [war] from the virtual nature. They don’t have that sense of what’s really going on,” he told me. This might be the essence of this new era of militainment: a greater fidelity to detail, but perhaps a greater distortion in the end. Every day, this officer heads off to virtual war. But when he comes home, he doesn’t let his own children play the many war games aimed at them. “We do the car ones instead.” |
afbfb4b38cd1cdc66a8e62e35fb572a3 | https://www.brookings.edu/articles/must-innocents-die-the-islamic-debate-over-suicide-attacks/ | Must Innocents Die? The Islamic Debate Over Suicide Attacks | Must Innocents Die? The Islamic Debate Over Suicide Attacks Over the last two years, the issue of suicide attacks or “martyrdom operations” against Israel has dominated public discussion throughout the Arab world. Since the outbreak of the current Palestinian intifada, in September 2000, the Palestinian resort to suicide attacks has won widespread Arab public acceptance as a legitimate form of resistance against Israeli occupation. Some Muslim clerics and other commentators justify them on political, moral, and religious grounds. Even those attackers who bomb and kill women and children are hailed as martyrs for their heroism in confronting the enemy. It is often said that the “martyrdom operations” are acts of religious extremism. The operatives who recruit young men and women to detonate themselves in crowds of Israelis manipulate religious fervor by wedding the ideas of heavenly reward to martyrdom. A young believer who detonates himself in the midst of the enemy will ascend straight to heaven and enter paradise—so he or she is indoctrinated. This is presented as the ultimate sacrifice and reward for a devout young Muslim. But the operations themselves are very carefully calculated maneuvers. Islamist and other groups launch suicide attacks because they are seen as effective means to demoralize Israel. The “martyrdom operations” are deemed the only answer to the vastly superior military capabilities of the Israeli army. In the words of the founder and spiritual leader of Hamas, Sheikh Ahmad Yasin, “Once we have warplanes and missiles, then we can think of changing our means of legitimate self-defense. But right now, we can only tackle the fire with our bare hands and sacrifice ourselves.” Advocates have described the attacks as the most important “strategic weapon” of Palestinian resistance. And while religious justification of such attacks is important for many Muslims, secular groups related to Fatah such as the Tanzim and Al-Aqsa Martyrs Brigade have resorted to similar tactics. Suicide attacks enjoyed almost unquestioning support in the Arab world—until the suicide attacks of September 11, 2001. Overnight, Muslims everywhere found themselves defending their religion against charges of espousing violence and terror. Many Muslim scholars have responded by condemning the assault against America as terrorism. But even as they affirm that the attacks in America were terrorism—because they killed innocent civilians—many of the same scholars still regard attacks carried out against Israeli civilians as “martyrdom operations,” a form of legitimate resistance to occupation of holy Muslim land. These scholars now seek to explain the difference between suicide operations in New York and Washington and those perpetrated in Tel Aviv and Jerusalem. The acceleration of Palestinian suicide bombings against Israeli civilians in spring 2002 complicated the issue still further. The “martyrdom operations” against Israeli civilians, following one another in rapid succession, licensed Israel to launch massive reprisals. Before these reprisals, Arab governments and opinion-makers had been content to let the “suicide fever” run rampant, in the media and in street demonstrations. 
But as Israeli military responses escalated, Arab governments sent their clients—Muslim clerics, journalists, and officials—to throw cold water on the frenzied enthusiasm of the masses. The specter that such operations might spread to their own countries, threatening their own security, cannot be far from their minds. In short, three main arguments have emerged: the first, endorsing the attacks of September 11 and against Israeli targets; the second, rejecting attacks like September 11, but supporting attacks against Israeli targets; and the third, rejecting all suicide attacks, wherever they take place. The debate is now fully engaged, yet it is not entirely new. The debate over “martyrdom operations” goes back to the 1980s, when various groups employed the technique against U.S., French, and Israeli forces in Lebanon. But the sheer number of Palestinian “martyrdom operations” against Israel and the unprecedented number of Americans killed in New York and Washington have imparted a new urgency to the debate. What follows is a sketch of the recent arguments for and against the operations against Israel. It is important to acknowledge that a debate is underway. It is also crucial to recognize that those who sanction attacks against Israeli civilians seem to be winning it. For and Against The debate began in early December 2001, when a deadly wave of suicide operations struck Jerusalem and Haifa, leaving 26 Israelis dead. The bombings constituted the first major wave of terrorism, post-September 11. They came at a time when the leadership of Egypt and Saudi Arabia, whose nationals had been implicated in the September 11 attacks, were still reeling from criticism that they had not done enough to battle terrorism and suicide fever in the region. After September 11, the governments of Egypt and Saudi Arabia faced severe criticism for the role their nationals played in the attacks. They also deemed it in their interest to promote a more conciliatory image of Islam—to argue that it opposes the killing of innocent civilians and is based on ideals of peace and justice. In November, Saudi Arabia’s crown prince Abdullah called a meeting of senior Islamic clerics including the grand mufti of the kingdom, to urge them to exercise caution in their public declarations, and to remember that their words were now under unprecedented international scrutiny: Brothers, you know that we are passing through crucial days. We must be patient and you must clarify to your brothers in good words since you are now the target of the enemies of the Islamic faith…do not be unnerved or provoked by passion. President Mubarak of Egypt made similar appeals to religious scholars and preachers during the holy month of Ramadan. He stressed the tolerant nature of Islam, calling on the religious figures to “clarify the real nature of Islam’s divine message, that it is the religion of tolerance and mercy, that it forbids the killing of innocent civilians.” The December 2001 bombings in Israel immediately put Egypt and Saudi Arabia to the test. Would their religious leaders finally speak out against the widespread glorification and acceptance of “martyrdom” and indiscriminate killing that had taken root in their own societies? Two of them did. Sheikh Muhammad Sa’id Tantawi, head of Egypt’s Al-Azhar mosque and university, had been equivocal about the issue in past declarations. 
Now he reiterated the government’s position, declaring that the shari’a (Islamic law) “rejects all attempts on human life, and in the name of the shari’a, we condemn all attacks on civilians, whatever their community or state responsible for such an attack.” Echoing Tantawi’s ruling, Sheikh Muhammad bin ‘Abdallah as-Sabil, member of the Saudi council of senior ulema (clerics) and imam at the grand mosque in Mecca, also decried the suicide attacks. “Any attack on innocent people is unlawful and contrary to the shari’a,” he announced, adding, “Muslims must safeguard the lives, honor, and property of Christians and Jews, attacking them contradicts the shari’a.” The Islamic legal arguments against the operations relied upon three principles of Islamic law: the prohibition against killing civilians, the prohibition against suicide, and the protected status of Jews and Christians. But would these arguments hold? For centuries, Muslim rulers have paid the salaries of religious figures and used them to boost their own legitimacy. The highest religious leaders of Egypt and Saudi Arabia are government-appointed officials. Any edict emanating from these religious bodies inevitably reflects state policy. Both governments tap their religious establishments whenever they need religious backing for controversial policies. In the past, the Sheikh al-Azhar, the head of the Azhar theological seminary, provided Anwar Sadat with a religious edict, or fatwa, in support of the peace treaty Sadat signed with Israel. During the Kuwait war, the late grand mufti of Saudi Arabia, Sheikh ‘Abd al-‘Aziz bin Baz, ruled that the presence of foreign troops was permitted on Saudi soil to defend the kingdom from Saddam Hussein. He also issued an extremely controversial fatwa in 1995, ruling that the shari’a permitted peace with Israel, under certain conditions. The dependence of the clerics on the rulers has sometimes eroded the credibility of state religious officials and their edicts. In any event, no Muslim authority exercises a kind of “papal” authority over his coreligionists. And so it was not surprising that the declarations of Tantawi and Sabil, far from ending the debate, actually intensified it. The harshest rebuttal came from the Egyptian-born Sheikh Yusuf al-Qaradawi, who currently heads the Sunni studies department at Qatar University. Sheikh Qaradawi is a consistent critic of the United States as well as many pro-Western Arab governments, and is increasingly popular throughout the Muslim world. He was also one of the first religious scholars to sanction the use of suicide attacks by Palestinian militants during the waves of Hamas-led bombings in the mid-1990s. Qaradawi has gained popularity and legitimacy throughout the Arab world by questioning the authority of the state, and he reaches a broad audience through his regular appearances on the Arabic satellite channel, al-Jazeera. Qaradawi has emerged as one of the preeminent Islamic religious figures in the Arab world and arguably represents the mainstream of Arab Muslim society. “I am astonished that some sheikhs deliver fatwas that betray the mujahideen, instead of supporting them and urging them to sacrifice and martyrdom,” announced Qaradawi. 
Responding specifically to the imam of Mecca, Qaradawi stated, “It is unfortunate to hear that the grand imam has said it was not permissible to kill civilians in any country or state, even in Israel.” Qaradawi based his opposition to these fatwas on the premise that Israelis were not civilians but rather combatants in a war of occupation waged against the Palestinians. He argued that Israeli society was completely military in its make-up and did not include any civilians…How can the head of Al-Azhar incriminate mujahideen who fight against aggressors? How can he consider these aggressors as innocent civilians? While sanctioning suicide attacks against Israelis, Qaradawi quickly condemned the September 11 attacks against American civilians, claiming that “such martyrdom operations should not be carried out outside of the Palestinian territories.” Attempting to differentiate between terrorism and “martyrdom,” Qaradawi declared, “The Palestinian who blows himself up is a person who is defending his homeland. When he attacks an occupier enemy, he is attacking a legitimate target. This is different from someone who leaves his country and goes to strike a target with which he has no dispute.” Qaradawi distinguished “martyrdom operations” from terrorism as an act of self-defense and thus a legitimate form of resistance. He continues: “The Palestinians have a right to defend their land and property from which they were driven out unjustly…the Palestinians have a right to resist this usurping colonialism with all the means and methods they have. This is a legitimate right endorsed by the divine laws, international laws, and human values.” Qaradawi was also at pains to distinguish between suicide and martyrdom. Islam clearly prohibits suicide, yet views martyrdom as a noble act, assuring individuals a place in heaven. In an interview with al-Jazeera, Qaradawi rejected the term “suicide operations.” This is an unjust and misleading name because these are heroic commando and martyrdom attacks and should not be called suicide operations or be attributed to suicide under any circumstances. He clarified that the term suicide applies to someone who kills himself for personal reasons and is therefore a coward. In contrast, an attack against Israel is defined as martyrdom and therefore legitimized as a brave, unselfish sacrifice carried out on behalf of the entire Muslim community. Other critics of the edicts issued by Tantawi and Sabil based themselves on the status of Jews and Christians in Islam, considered “people of the covenant”—ahl adh-dhimma, or dhimmis. There are clear guidelines in the Qur’an and Sunna for Muslim relations with Jews and Christians, providing for the protection of their lives and property. But as one commentator argued, “preserving the life of the dhimmis is conditional on their living under Muslim rule in a Muslim state. This does not apply to the dhimmis mentioned by the imam [of Mecca], since they are living in their own state that has usurped the rights of Muslims and occupied their lands.” Jews and Christians are protected under Islam, but only when they live under Muslim rule; outside the boundaries of Islamic rule, they are no longer protected. According to this chain of reasoning, it is permissible to kill Jews in Israel who live in their own state, especially as its territory has been usurped from Muslims. Other religious scholars within the Azhar establishment continued to challenge the prohibition on killing civilians espoused by Tantawi and Sabil. 
‘Abd al-‘Azim al-Mit’ani, a lecturer at Al-Azhar University, rejected arguments differentiating between Israeli civilian and military targets claiming, “They should not make any difference between civilians and military. It is a fact that Israel is one big military camp. There is no real civilian there. It is the Palestinians’ rights to hit all the inhabitants of Israel as they can.” Al-Mit’ani continued by claiming that the Prophet’s words prohibiting the killing of children, elderly, or women did not apply in the case of Palestinian suicide bombers, stating, “He was talking about an ordinary war, between two armies. The situation in occupied Arab Palestine is different. We are faced with an enemy that attacks indiscriminately. The Palestinians have every right to return the treatment.” The killing of innocent women and children is often quickly dismissed by advocates of suicide operations as “collateral damage” and an inevitable by-product of the struggle against Israel. Addressing this issue in an interview, Sheikh Qaradawi denied that such casualties contradict Islamic doctrine. “Some children, old people, and women may get hurt in such operations. This is not deliberate. However, we must all realize that the Israeli society is a military society, men and women…we cannot say that the casualties were innocent civilians.” ‘Abd as-Sabur Shahin, a lecturer at the Islamic Dar al-‘Ulum College in Cairo, concurred and argued that, “We are at war, as we have never been before throughout history. If civilians are killed in the course of Palestinian operations, this is not a crime.” Islamic scholars who endorse and even promote suicide or “martyrdom” attacks justify their positions by reference to the shari’a. But these rulings are driven more by emotions and television images of Palestinian suffering than by deep immersion in the records of ancient Islamic precedent. These clerics, like many secular enthusiasts of the bombings, seek to champion the Palestinians in their resistance against Israel. And for the Muslim believer, the young men who sacrifice their lives in the name of resistance are not only defending the Palestinian nation, they are also defending the wounded honor of the Arab nation, in a struggle that was lost decades ago by the secular Arab regimes. Enter the Rulers Once the clerics had staked out their positions, it was the turn of the rulers to “operationalize” them. Yet it did not take long for the debate to become muddled, as both the Egyptian and Saudi Arabian governments hedged their own condemnations of the operations. The judgments of Tantawi and Sabil condemning violence against civilians prepped the call by the leaders of Egypt, Saudi Arabia, and Syria to “reject all forms of violence,” at a May 2002 tripartite summit meeting in Sharm al-Sheikh. But their final statement did not specifically condemn “martyrdom operations.” Syria soon exploited the loophole. True, Syrian president Bashar al-Assad had attended the meeting and agreed to the text of the final communiqué—this, despite the fact that Syria continues to support suicide attacks carried out by Hamas and the Palestine Islamic Jihad. But after the summit, Syrian foreign minister Faruq ash-Shar’ explained why Syria put its name to the summit communiqué, by claiming that the term “violence” referred to “Israeli crimes against the Palestinian people.” Syrian sources then confirmed that there would be no change in Syria’s political support for “resistance” and for leaders of Hamas and Islamic Jihad. 
Saudi Arabia’s effort to curb support for violence against Israel was discussed between President George W. Bush and Crown Prince Abdullah at their meeting in Crawford, Texas, in April 2002, with Abdullah promising to use his influence over both Hamas and Syria. Widely circulating reports later claimed that Saudi Arabia was attempting to use its influence on Hamas to end suicide attacks through high-level meetings. But Hamas spokesman Mahmud az-Zahar denied that any meetings occurred, while affirming the movement’s willingness to discuss the issue of “martyrdom operations” with any Arab party. Such attempts were renewed later in the year under the auspices of the Egyptian government, yet broke down over the refusal of Hamas to end the attacks. Moreover, it soon turned out that Saudi Arabia itself had been paying allowances to the families of “martyrs” and funneling money directly for Hamas, which was responsible for a majority of suicide attacks. The Saudi Committee for the Support of Al-Quds Intifada, headed by Saudi interior minister Prince Nayif bin ‘Abd al-‘Aziz, had been channeling money to Hamas and affiliated organizations. In the words of the Saudi government, the committee “has been extending assistance to the families of Palestinian martyrs, as well as injured and handicapped Palestinians.” According to its own figures, published on an official Saudi government website, the family of each suicide bomber is paid 20,000 Saudi riyals (about $5,300). In addition, money from the Saudi Committee for the Support of Al-Aqsa Intifada was paid to the Tulkarm Charity Committee, an organization cited by the United States as connected to Hamas. While it is clear that the charity committee does administer social work and aid, it is also known to have ties with the military apparatus of Hamas. Secretary of State Colin Powell acknowledged that Saudi cash was funding Hamas during a hearing of the Foreign Operations Subcommittee of the Senate Appropriations Committee. In response to a question about the destination for funds raised through a series of Saudi telethons which raised millions of dollars, Secretary Powell claimed, “We have seen some indications, and I’ve even seen in an Arab newspaper—handed to me by Chairman Arafat, I might add—where some of the money, at least according to this Arab newspaper advertisement, would be going to elements of Hamas.” The Saudi embassy in Washington attempted to deflect criticism that it was funding the families of suicide bombers by claiming that the term “martyr” used in relation to official Saudi efforts to raise funds for Palestinians referred not to suicide bombers but to “Palestinians who are victimized by Israeli terror and violence.” Yet the executive manager of the Saudi committee stated otherwise: “We support the families of Palestinian martyrs, without differentiating between whether the Palestinian was a bomber or was killed by Israeli troops.” In the Egyptian case, the weak link in the debate proved to be Sheikh Tantawi. The Sheikh al-Azhar, subjected to withering criticism, began to issue confusing and contradictory statements, effectively abrogating his earlier fatwa. In an interview with the Egyptian state-owned magazine Ruz al-Yusuf, Tantawi claimed that his earlier rulings had been distorted, stating, My words were clear…a man who blows himself in the middle of enemy militants is a martyr, repeat, a martyr. What we do not condone is for someone to blow himself up in the middle of children or women. 
If he blows himself up in the middle of Israeli women enlisted in the army, then he is a martyr, since these women are fighters. In later statements he reiterated this formula, declaring, “I repeat that those who defend their rights by blowing themselves up in the midst of their enemies who murder his people, occupy their land or humiliate their people, are martyrs, martyrs, martyrs.” Since shifting his position on attacks, Tantawi has continuously sought to clarify the issue by distinguishing between terrorism and jihad, which is the impetus for “martyrdom.” “Jihad in Islam was ordained in order to support the oppressed and defend sacred places, human lives, personal funds, occupied land, and so on. Terrorism, on the other hand, is an aggression and an insistence on killing innocent people, civilians, and peaceful people.” The distinction rests on the notion of self-defense, which distinguishes martyrdom operations from terrorism. Tantawi intended to cover both angles of the debate, conferring the status of martyr on Palestinian suicide bombers engaged in a struggle of self-defense, while condemning the killing of innocent civilians. Within Palestinian society there have been calls to halt the campaign of suicide attacks. Unfortunately, the majority of these appeals are based on strategic considerations and not on religious or moral arguments. The usual arguments against suicide attacks are that they harm the image of the Palestinian struggle or engender harsh Israeli reprisals—not that the attacks are themselves reprehensible. Some Palestinians support attacks only within the West Bank and Gaza and not in the pre-1967 borders of Israel. But Palestinian critics of the attacks clearly have not persuaded their planners and perpetrators since the attacks continue. Next Phase? If suicide attacks are permissible against Israeli targets, might they be deemed legitimate against repressive Arab leaders accused of being “apostates”? Religious decrees were used to justify the assassination of Egyptian president Anwar Sadat. Various edicts issued against leading intellectuals such as the Egyptian author and Nobel laureate Naguib Mahfouz (who was stabbed by a would-be assassin), and Egyptian secular thinker Farag Foda (who was murdered), have put secularists on notice. All of this raises the prospect that, even if the “suicide fever” subsides among the Palestinians, it could surface elsewhere—not just in the West, where it has already taken a devastating toll, but in Arab capitals ruled by regimes friendly to the West and at peace with Israel. This would be the point of entry for theological freelancers. In recent years, there has been a proliferation of fatwas issued by various religious scholars of dubious authority. These fatwas may lack legal soundness, yet they are often accepted and adhered to by many Muslims who are dissatisfied with the status quo. Some of these fatwas conform to Islamic law, and some do not. But the crucial point is that the lack of qualifications of those issuing such rulings has become nearly irrelevant. Usama bin Ladin’s so-called fatwa of 1998, for example, urged Muslims to “kill the Americans and their allies, civilians and military.” Though such a ruling by a person lacking legal training has no authority in Islam, the fact remains that a small group of people who believed these words perpetrated an unprecedented act of terror against the United States on September 11. 
Thus, is it inconceivable that a disgruntled extremist group, desperate in its confrontation with an authoritarian state in the Middle East, could use such tactics against an Arab regime? Bin Ladin’s 1998 call for murder was directed at “Americans and their allies,” easily interpreted as the Western-allied states of the Arab world. Arab governments have struggled against the proliferation of fatwas and have taken various measures to limit those issuing religious rulings. The government of Saudi Arabia issued a public statement that only authorized clerics could issue fatwas. This was in part a response to a fatwa issued by a dissident cleric, calling for jihad against the United States. The Saudi minister of Islamic affairs also attempted to curb public calls for jihad, which he declared could only be ordered by the government. In Kuwait, which faces a growing Islamic opposition, the Ministry of Justice, Religious Endowments, and Islamic Affairs issued rules to mosque preachers in an attempt to control religious discourse. The ministry also established a fatwa committee to coordinate and approve religious rulings. Similar restrictions and regulations have been enacted throughout the Arab world in an attempt to limit the influence of independent clerics. Until now, most Islamic groups have refrained from directly confronting the state for ideological or tactical reasons. Generally speaking, those groups that have chosen violence have been crushed. But the widespread legitimacy of the “martyrdom operations” could set extremists to considering the tactic, if and when they revive armed struggle against the state. Egyptian president Hosni Mubarak raised such concerns in an interview with The New York Times: “I am afraid of what’s happening in the Middle East for the future…the seriousness of the situation may generate new kinds of terrorism against all of us, against the U.S., against Egypt, against Jordan.” Arab rulers could find themselves the next target of a new wave of “martyrs.” This suggests that the debate over “martyrdom operations” may continue for a long time to come. And at this moment in time, those in favor of such attacks seem to be scoring points. Cairo and Riyadh remain reluctant to oppose such attacks fully in clear and definitive language, and such attacks continue. Perhaps their hesitation has to do with the answer to this question: Even if they were to call unambiguously for an end to the suicide attacks, would anyone heed them?
https://www.brookings.edu/articles/north-koreas-long-shadow-on-south-koreas-democracy/ North Korea’s long shadow on South Korea’s democracy South Korea has often been touted as a democratic and economic miracle. After decades of authoritarianism, it transitioned into a consolidated democracy and a technologically advanced, economic powerhouse in the past three decades. In recent months, the country has received international praise for its successful tackling of the coronavirus pandemic. Yet South Korean presidents’ desire to advance their goals vis-à-vis North Korea and the legacy of authoritarianism, especially the centralization of power in the presidency, have led to uneven applications of liberal democratic functions and, at times, egregious examples of abuse and rights violations. President Moon Jae-in, a former human rights lawyer, seemed to offer a break from the previous decade of conservative rule during which the Cold War-era National Security Law was invoked to quash pro-North Korea sentiment and punish any critic of then-Presidents Lee Myung-bak and Park Geun-hye. Moon, however, has used his power to dampen anti-North speech and activities to support his pro-engagement policy toward Pyongyang, undermining his domestic policy goals — such as taming corruption and inequality — while making little headway in relations with an intransigent Kim regime. Progress on strengthening South Korea’s democratic institutions will take time and political will, but in the meanwhile, Moon can take small steps toward those longer-term goals by changing his approach to North Korean human rights and defector groups, empowering civic organizations, and setting the foundations for principled and sustainable policies. With the world still reeling from the coronavirus pandemic, few countries get as much positive attention for successfully managing it as South Korea. Not only did President Moon Jae-in’s administration show what effective governance, technology and science, and expert-driven policies can accomplish in protecting public health, but Seoul also showed that it could safely hold national legislative elections, even as Britain, France, and almost 20 U.S. states postponed votes. An impressive two-thirds of South Korea’s eligible voters turned out during the pandemic and delivered a supermajority for the ruling party. The Washington Post, among other media, lauded South Korea for showing the world how “free, fair, and safe elections” can be done, even during a pandemic. Even as South Korea was being touted as a beacon of democracy in global media, the Moon administration has been criticized by domestic observers and international organizations, including the United Nations, for its illiberal practices related to North Korean defectors and human rights advocates in the country. A democracy since the late 1980s, South Korea still bears the weight of previous military regimes’ legal and institutional legacies that continue to hamper consistent application of liberal democratic functions. In particular, each South Korean president’s goals vis-à-vis relations with North Korea, as well as the centralization of power in the position itself, cast shadows on Seoul’s policy choices.
While conservative presidents had generally cracked down on any whiff of pro-North sentiment, citing the notorious National Security Law, a relic of the Cold War and South Korea’s anti-communism often deployed to silence pro-democracy entities, Moon’s progressive government has flipped the script, aiming to squelch opposition to his pro-engagement policy toward Pyongyang. From October 2016 to March 2017, hundreds of thousands of South Korean citizens — families, young adults, the elderly, schoolchildren — gathered every weekend in Seoul demanding President Park Geun-hye’s resignation for charges of graft. While corruption scandals that marked the Park administration were the initial cause of these “candlelight protests,” decades of pent-up anger and resentment about deepening socioeconomic inequality, harsh labor conditions and lack of benefits for temporary jobs, long working hours, and low quality of life bubbled up to the surface. In 2016, the Organisation for Economic Cooperation and Development (OECD) ranked South Korea 28th among 38 countries in its quality of life index report; it fell further to 29th in 2017. For South Koreans who protested in the streets for months against the corruption of Park, the daughter of one of South Korea’s dictators, seen as aloof and out of touch — and ultimately impeached and removed from office in March 2017 — Moon Jae-in was seen as a breath of fresh air. Among 13 candidates, Moon won a commanding 41% of the vote, with a 17-point lead over the runner-up. An astonishing 77% of eligible voters cast their ballots. At the time, international observers pronounced the exuberant and peaceful candlelight protests as a “democratic miracle” and wrote that South Korea “showed the world how to do democracy.” Moon Jae-in entered office in May 2017 promising to reform the chaebols, or conglomerates, that dominate the social, economic, and political landscape in South Korea, improve working conditions and hours, and increase the minimum wage. A civil and human rights lawyer who had been jailed for his pro-democracy activism during the country’s authoritarian rule, the progressive Moon said he would be a “president for the people.” In a prescient article, Alexis Dudden, a U.S. historian of modern Korea, celebrated the successes of the candlelight protests in ousting the sitting president and showing that no one is above the law. But she also warned that, “South Koreans will find it challenging in the coming years to keep attention trained on what they want for their country and for the broader Korea because the North Korean vortex will likely define Moon’s presidency.” Indeed, driven by a desire to “take an audacious step” toward an end of the Korean War declaration and the signing of a peace treaty — and undoubtedly alarmed by U.S. President Donald Trump’s apparent lack of concern about South Korea’s security during the “fire and fury” threats in late 2017 that many feared would spiral into a military conflict — Moon has elevated rapprochement with Pyongyang as a top priority. He has also resolved to use the powers of the presidency toward that goal, even if it meant the selective tamping down of civil liberties in his own country. The same abuses of power that was rife under President Park Geun-hye proved their stickiness in the Moon administration, in what political scientists Aram Hur and Andrew Yeo call the “democratic ceiling” in South Korea. 
Much like his predecessors, President Moon’s tenure has been rocked by scandals, including a reported intervention by top presidential Blue House officials to help one of Moon’s friends become mayor of a southeastern city and graft charges and ethics violations against Moon’s close ally Cho Kuk, who was appointed as minister of justice in September 2019 and resigned a month later. The Cho scandal drew massive crowds to oppose his appointment. Additional allegations that Cho and his wife forged documents to help their daughter get admission into medical school enraged young people who were already sensitive to economic inequality and unfair advantages enjoyed by the elite, and they directed their ire at Moon for covering up for Cho, whom they called a hypocrite. Moon has also drawn criticism and protests for suppressing dissent, particularly from conservatives, continuing a cycle of recrimination and retaliation, and deepening the polarization of Korean politics. But what has drawn special attention from the international community has been Seoul’s cracking down of North Korean defector organizations and others who oppose Moon’s pro-engagement policy toward Pyongyang. Fiercely protective of his goal of making inter-Korean progress during his single five-year term, Moon’s government has put intense pressure on non-governmental organizations and defector groups that focus on human rights in North Korea to mollify Kim Jong Un. In late 2018, the government cut funding for the North Korea Human Rights Foundation (established by law in 2016) by 93%, in part to sustain the summitry of that year. In October 2018, the South Korean government barred a veteran journalist who was a North Korean defector from covering the inter-Korean talks, reportedly to avoid irritating the North Korean regime. The incident spurred the International Press Institute, a global network of media executives and journalists, to write a letter to Moon stating, “We fear that the government has set a new precedent, and, in the future, it would attempt to silence any journalist who is critical of North Korea or the talks between the two countries.” Since then, Seoul has stripped licenses of NGOs that conducted activities such as floating anti-regime leaflets into the North, raided offices and filed criminal complaints, and surveilled or detained activists, prompting these nongovernmental organizations to appeal to the United Nations for help. Activists told Reuters that Seoul’s investigations and withholding of funds have scared away donors, further hampering the organizations’ efforts to support defections and defector networks. A senior leader of the international organization Human Rights Watch stated that “South Korea should be standing up for its own principles” and that the Moon administration is at risk of “violating the rights they spent their entire careers trying to build up.” In December 2020, the Moon government passed a controversial bill that banned the sending of anti-North Korea leaflet balloons, sparking an outcry from civic groups, defectors, and international human rights organizations. Progressive goals, like reining in the power and influence of chaebols and strengthening the social safety net, also fell to the wayside. 
Political scientist Robert Kelly observed that previous liberal presidencies of Kim Dae-jung and Roh Moo-hyun “lost themselves in trying for an elusive deal with Pyongyang,” and that Moon, too, appeared to be ignoring longstanding liberal domestic agenda items for an as-yet unrequited promise of inter-Korean reconciliation. Furthermore, Moon enlisted the chaebols for his North Korea project. Seeking to lure Kim Jong Un’s regime with economic carrots, Moon brought a group of business leaders, including the heads of four of South Korea’s biggest conglomerates — Samsung, SK, Hyundai, and LG — to the third inter-Korean summit in September 2018. Samsung’s leader at the time was a defendant in a bribery scandal during the Park administration, an inconvenient truth that the Blue House batted away, stating that it was a separate matter. Meanwhile, the South Korean public viewed the economy as the most urgent issue. A poll from September 2018 showed Moon’s approval rating falling to a low of 49%, reflecting the respondents’ view that the Moon administration did little to improve the employment rate, check soaring real estate prices, or address other economic problems. Nevertheless, the Moon administration proposed multiple infrastructure projects, such as the inter-Korean railroad and rebuilding North Korea’s roads and ports, which would have cost tens of billions of dollars, according to expert and government estimates. Seoul’s actions also elicited concern in Washington, as policymakers there reportedly called South Korean companies and banks to remind them of the need for North Korea sanctions enforcement. As one of Moon’s close advisers told the Voice of America in July 2020, “The peaceful management of inter-Korean relations is the number one priority for us, and human rights would come second.” Such sentiment was consistent with the administration’s position since Moon took power. A senior adviser said in January 2019, “North Korean defectors might not enjoy the same benefits that they enjoyed during the two previous conservative governments,” and emphasized that resolution of the North Korean nuclear issue was the top goal for Moon. The attempts to marginalize and silence civil society organizations like the North Korea defector and human rights groups are both symptom and driver of a weakness in South Korean democracy, particularly the extreme centralization of power in the presidential Blue House. South Korean political scientist Choi Jang Jip argued that his country’s democracy lacks the systematic inclusion of societal actors in the policymaking process because of the overarching goals of economic development and national security, even at the cost of protecting individual rights. In the milieu of anti-communism and maintaining order, the progressive forces that drove the country’s pro-democracy movement in the 1980s did not embrace the concept of placing checks on government power to enhance individual rights and liberties. As a result, political parties were weak, underdeveloped and dependent on the personality of the president, the National Assembly continued to lack institutional autonomy as in the authoritarian era, and the judiciary saw its value through the lens of national security and economic development, often to the advantage of the wealthy and powerful. The concentration of power in the Blue House has been rooted in South Korea for decades and has been reinforced regardless of which party is in power. 
Asia scholars Stephan Haggard and Jong-Sung You have observed that, “Governments on both the political right and left have placed limits on freedom of expression to contain political opposition, and constitutional, legal, and political checks have proven insufficient to stop them.” The progressive governments of Kim Dae-jung (1998-2003) and Roh Moo-hyun (2003-2008) have prosecuted journalists and sought to neutralize conservative media. In 2012, Amnesty International issued a scathing condemnation of the conservative government of Lee Myung-bak (2008-2013), for a “dramatic increase in the abuse of national security laws in a politically motivated attempt to silence debate.” And the Park Geun-hye (2013-2017) government’s application of the National Security Law to surveil, monitor, and crack down on critics, often implying that they are pro-North Korea subversives, elicited warnings from the United Nations Human Rights Council, Freedom House, and South Korean civic groups. A corollary legacy of the military dictatorships of the past is the corruption stemming from the toxic partnership between the state and conglomerates, which impedes the advancement of pluralism and policies responsive to the people. Historians have well documented that South Korea’s economic “miracle” was grounded in the authoritarian practices of President Park Chung-hee — the impeached Park Geun-hye’s father — whose export-dominated policies and government incentives gave rise to the now household names such as Samsung and Hyundai, while violently suppressing unions and tolerated labor exploitation by the corporations.1 Over the years, these family-controlled conglomerates have amassed enormous political influence and have wielded that power to protect their interests despite persistent calls for reform. Every presidential administration since democratization has been implicated in bribery and corruption scandals, including the current Moon government, underscoring the difficulties of decoupling chaebol money from politics and policies. Unwillingness or inability to reform the chaebols leaves in place a system that propagates corruption, entrenches corporate power, and undermines good governance and democratic accountability. Choi Jang Jip averred that the chaebol “have proven a formidable obstacle for furthering the development of democracy, the wide application of the rule of law, and the liberalization and pluralization of civil society.” Frustrated by the lack of progress on chaebol reform under the Moon government, workers (again) took to the streets, given the absence of governance structures that mediate between civil society and government. To their credit, Moon and his party have attempted to revise the Constitution, unsuccessfully in 2018 and again in 2020, to try to decentralize the power of the presidency, change the single five-year presidency to a four-year term with opportunity for a second term, lower the voting age, grant more autonomy to local governments, and delegate more authority to the prime minister, as well as to dismantle conditions that contribute to rampant corruption. Even though Moon said he would not personally benefit from these measures, the National Assembly scuttled the proposal, calling it “imperial” but also highlighting the sclerosis of the legislature and the polarization of politics. 
Moon has also worked to target and investigate corruption in the previous administration, but, as South Korean economist Park Sang-in noted, the administration has yet to produce and implement specific measures to curb the concentration of power and wealth. Yet, part of the reason for the persistence of the practice of tamping down on freedom of expression is its political utility for presidents in managing opponents, especially since South Korean presidents have a single five-year term to establish their legacy. South Korea’s strong, centralized presidency, relatively unencumbered by legislative or judicial oversight — and dependent on the chaebols to deliver on promises of economic growth — has had negative implications for foreign policy. Choi Jang Jip has argued that the result of such a system is policymaking that is a “makeshift, short-sighted, and improvised process influenced by the president’s immediate policy concerns.” A top-down, personalized foreign policy approach runs the risk of producing inconsistent policies from one administration to another. For example, Moon cast doubt on the deployment of the U.S. Terminal High Altitude Area Defense (THAAD) missile defense system, which the previous government had agreed to install. He also threatened to withdraw from a military intelligence sharing agreement with Japan, another product of the previous administration. The wide fluctuations in policies from one president to another have created concerns in Washington and elsewhere about the reliability and consistency of South Korea’s policies. Indeed, Moon’s approach does not appear to be having the intended effect on Pyongyang. The consistent volley of anti-South rhetoric from the Kim regime, its refusal to accept Seoul’s humanitarian aid during the COVID-19 pandemic, and destruction of the inter-Korean liaison office in June 2020 all strongly suggest that Moon’s conciliatory approach is not working. In fact, Moon’s attempts to silence civil society voices, especially on human rights issues, might be fueling Kim’s perception that he can coerce Seoul to comply with his demands, rather than inspiring him to dismantle his nuclear weapons program. The Trump administration’s decision to not focus on North Korea’s human rights violations since early 2018 at the outset of summit diplomacy — the human rights envoy position has been vacant since 2017 — also played a part in shifting global attention away from the Kim regime’s repressive practices, while providing tacit acceptance of Seoul’s actions against rights groups. Robert R. King, the former special envoy for North Korea human rights under the Obama administration, pithily wrote, “Ignoring human rights does not make the abuses go away, nor does ignoring abuses increase the desire or will of the Kim regime to reach an agreement on security issues in the long run.” Furthermore, the chaebols probably have little to contribute to inter-Korean economic engagement, given existing sanctions, the poor investment and business environment in North Korea, and the Kim regime’s lack of interest. There is also a mismatch between what North Korea can offer and what these companies need.
The South Korean economist Park Sang-in writes, “South Korean chaebols are concentrated in heavy and chemical industries, and North Korean workers do not yet have the human capital that would be suitable for these industries.” Moon has an opportunity to further consolidate South Korea’s democracy by allowing civil society groups, even ones that are critical of North Korea, to flourish, without damaging his pro-engagement policy toward Pyongyang. Although North Korea typically responds harshly to any criticism of the regime, when pressed on the issue of human rights, it has made efforts to improve them in certain cases. For example, North Korea in 2017 delivered progress on the rights of persons with disabilities and allowed for the visit of an official from the U.N. Human Rights Council, to the surprise of many Korea observers. More broadly, the Moon administration and its successors will continue to wrestle with the weaknesses in the system inherited from South Korea’s autocratic predecessors. As this author has laid out elsewhere, tackling these legacies and loosening the grip of chaebols will require bold actions to increase resources to the public sector and services, institutionalize an improved, bipartisan system of checks and balances, encourage and provide more opportunities for the younger generation of political leaders, and address the culture of widespread corruption that lingers in both government and businesses. The risks of letting North Korea policy objectives eclipse domestic priorities are not insignificant. The resounding win for the ruling party at the April 2020 legislative elections was the result of the Moon administration’s successful management of the pandemic, not its flailing North Korea policy; before the pandemic, Moon’s approval rating had fallen to as low as 30% as a result of the country’s economic slowdown and political scandals. But since the election, Moon’s approval rating has fallen steadily, as scandals, counterproductive policy moves, labor controversies, and slow or insensitive government responses soured the public’s view of the administration. Moreover, the conditions that fueled the candlelight protests of 2016-2017 still exist and in some cases have worsened. According to the Bank of Korea, 25-29 year-olds accounted for over 20% of the unemployed in 2018, the highest figure among OECD countries for seven consecutive years. The income disparity is wider than when Moon came to office, and the support of voters in their 20s who had helped to propel him into office dropped from 90% in June 2017 to 44% in October 2019. Progress on strengthening South Korea’s democratic institutions will take time and political will, but in the meanwhile, Moon can take small steps toward those longer-term goals by changing his approach to human rights and defector groups. As Jennifer S. Oh of South Korea’s Ewha Women’s University has argued, “civil society strengthens democratic governance by creating democratic citizens.” What better way to show North Korea and the world how to “do democracy” than to show the power and resilience of South Korea’s democracy — confident enough to listen to critics, empower civic organizations, and sustain wise policies beyond a single presidential term?
https://www.brookings.edu/articles/not-by-bread-alone-the-role-of-the-african-american-church-in-inner-city-development/ Not by Bread Alone: The role of the African-American church in inner-city development Deep social problems continue to plague inner-city America. Fashioning a response to the scourge of drugs, gangs, violent crime, unemployment, AIDS, failed schools, fatherless families, and early unwed pregnancy is among the most serious domestic policy challenges confronting the nation today. Some attribute these problems solely to structural causes. But a key aspect of the problems is the patterns of behavior that have emerged among young men and women in inner-city communities that limit their ability to seize existing opportunity. While social analysts agree that these behaviors must change if progress is to occur, they disagree fundamentally about how to accomplish such change. For some, the intensification of pathological behaviors among the urban poor is due to the lack of economic opportunities; for others, it is the result of disincentives created by various welfare programs. Though sharply different in their policy implications, these two positions have something important in common. Each assumes that economic factors ultimately drive the behavioral problems, even behaviors involving sexuality, marriage, childbearing, and parenting, which reflect people’s basic understanding of what gives meaning to their lives. A different view of these matters takes off from the biblical injunction, “man must not live by bread alone.” From this perspective, the values, attitudes, and beliefs that govern a person’s behaviors are at least partially autonomous, leaving open the prospect that communal agencies of moral and cultural development might change the way individuals conduct their lives. Since religious institutions are primary sources of legitimate moral teaching in our society, this point of view suggests that significant positive change may be possible if inner-city churches can reach individuals, engage them in the activities of the church, and thereby help transform their lives. This suggestion raises interesting issues of theory, of evidence, and of ethics for students of social change. Setting aside appeals to divine intervention, the question arises as to what are the characteristics of religious institutions that, in principle, might make them effective instruments of behavior modification and that are not present in secular settings. Also, what evidence supports the claim that the scope of church involvement in the inner city, and its impact on the behavior of churchgoers, is large enough to potentially make a real difference in these communities? Moreover, instrumental calculations aside, one might ask why churches, in particular, should be charged with the awesome responsibility of helping to achieve renewal in our society’s most desolate backwaters. Each of us, both as scholar and as citizen, has been interested for some time in the idea that religion might promote development in low-income communities. Recently we have been investigating it more systematically. This essay reports on some of our findings and opinions in this critical, but as yet little explored, area of social policy studies, relative to the questions of theory, evidence, and ethics raised above. It is hardly our last word on the subject.
Not a Task for Government Arguably, encouraging “good behavior” means making discriminations among people based on assessments that are difficult, legally and politically, for public agencies to make. Discerning the extent to which particular people have risen to, or fallen short of, our expectations in the concrete, ambiguous circumstances of everyday life is a nontrivial task. If promoting “virtue” necessitates setting, communicating, and enforcing standards, then it requires a high level of knowledge about a person’s circumstances and an ability to draw fine distinctions among individual cases based on that knowledge. Both the informational demands of this activity and the requisite authority to act on what information is available will often exceed the capacity of governmental actors, since citizens have procedural protections and privacy rights that cannot and should not be abrogated. Publicly enforced judgments must be made in a manner consistent with these rights. Voluntary civic associations, as exemplified by religious institutions, are not constrained in the same way or to the same degree. A government agency, when trying to assess whether a welfare recipient has put forward adequate effort toward achieving self-sufficiency, is forced to rely on information like a caseworker’s observations and self-reports of the recipient. Any attempt to limit assistance because the recipient failed to try hard enough would stand up to subsequent judicial review only in the most egregious of cases. Yet families and communal groups providing help to the same person would typically base their continued assistance on a much richer (and, admittedly, often impressionistic) array of information. They would discriminate more finely than a state-sponsored agent ever could between the subtle differences in behavior among individuals that constitute the real content of morality and virtue. Moreover, in a pluralistic society public agents must be neutral in areas where private citizens differ sharply among themselves as to which set of values is the “correct” one. Publicly enforced judgments necessarily reflect a “thin” conception of virtue, weak enough to accommodate the underlying diversity of values among the citizenry, to be contrasted with the “thick” conceptions characteristic of the moral communities in which we are embedded in private life. Thus, introducing into the public schools in any large city a curriculum of sex education that teaches the preferability of two-parent families might be resisted by educators who would cite the great number of their students from single-parent backgrounds. But what if these are the students most in need of hearing the authoritative expression of such a value judgment? In a parochial school context, such a possibility well might affect the design and implementation of a sex education curriculum. Consider the fact that some (one hopes, few) young mothers are not competent–for emotional and intellectual reasons–to nurture their children. In such circumstances, the autonomy of the parent-child relation must somehow be breached if the children are to have a decent shot at developing their God-given talents. Although this is difficult ground, there clearly are circumstances in which, to prevent significant injustice to children, we have somehow to get inside the family sphere and get our hands on the lives of these youngsters. Where does the authority–the standing–come from for that kind of intervention to take place? The government’s doing it is deeply problematic.
Yet faith-based communities, where participation is voluntary and social relations among members are close, can in some situations exercise that authority. The Role of Religious Communities Assume for the moment that religious communities do have a unique role to play in the socioeconomic development of low-income areas. What has been their performance to date? Hope for a substantial church role rests in part on the fact of widespread religious participation in the United States. The existing literature documents that more than half of all Americans regularly attend church or are church members. This level of participation and the relative strength of the various denominations appear not to have changed much for at least 20 years. In addition, the bulk of the literature on church attendance concludes that any fall in participation has been mainly among young people with relatively high social status and thus would not affect urban low-income populations. Indeed, studies of racial differences in church participation uniformly find that blacks participate at a greater rate than whites. Nevertheless, a sober review of the evidence does not support the view that inner-city churches are now having a substantial impact on the quality of life in low-income communities by altering the socio-economic status of individual church members. (We say this despite the many examples of outstanding urban ministries doing excellent work in particular communities.) For example, while overall church attendance is higher among blacks than whites, it is relatively low in urban areas, especially in the central cities of the North, where much of the low-income black population is concentrated. Also the fastest growth in church membership for blacks (and for whites) over the past two decades has been among Baptists and other, more conservative religious groups whose members have fewer years of schooling than those of other denominations even after differences in the nonreligious characteristics of members are taken into account. Studies of the effects of religiosity on income and schooling invariably find only small positive effects. We want to stress that the existing literature is unsatisfactory in a number of ways. More direct measures of “religiosity” are needed to determine whether behavioral effects exist. Furthermore, only a few studies can break down their results by race and socioeconomic status; yet there may be important differences across groups. To illustrate, if the social networks of poor black families are less dense than those of others, the effects of any particular social connection might be magnified. Also, if children from more advantaged families acquire beneficial skills or attitudes inside their household, while children from poorer families are relatively more dependent on beneficial external influences, then the potential of religious institutions to play an important role in the inner cities will be underestimated. We therefore urge caution in extending to low-income urban populations the findings of a small effect of religiosity on behavior obtained from aggregate samples. We are well aware of the knotty problem of inferring causality in this area of research. While it is certainly plausible that religiosity favorably affects work, education, and other behaviors, these behaviors may themselves affect religious commitment and participation. Moreover, measures of religiosity may also be correlated with unobserved nonreligious traits that affect, say, years of schooling. 
One of us has tried to address these problems in a study of the effect of religious participation on schooling using the National Longitudinal Survey of Youth. That study looked at how church attendance during the senior year of high school affected the total years of schooling ultimately completed, relying on differences in the effects of church attendance before, during, and after the senior year to control for any spurious correlations. We found that church attendance during the senior year of high school adds about 0.2 years to total schooling for white women and for blacks, but had no significant effect for white men. We construe this as modest evidence that church attendance may alter behavior in a constructive way. Beyond Social Science Ultimately we do not believe that social scientific evidence can justify what we see as an ethical imperative for institutions of faith, rooted in urban black America, to work toward the redemption and reconstruction of these communities. It is perhaps worth recalling that, as an historic matter, the religiosity now so widespread among black Americans grew out of the experience of slavery. People were driven by brute circumstance to create among themselves a culture with spiritual and moral depth of heroic proportion. They simply had no choice. The brutality of the assault they endured–on their persons, their relations one with another, and their sense of dignity and self-respect–was such that either they would be destroyed as moral beings or they would find a way, through faith, to transcend their condition. That “man must not live by bread alone” was for them more than a theoretical proposition. Grasping the truth of that proposition was their key to survival. These moral and spiritual values proved profoundly significant in the post-slavery development of black Americans. A spirit of self-help, rooted in a deep-seated sense of self-respect, was widely embraced among blacks of all ideological persuasions well into this century. They did what they did–educating their children, acquiring land, founding communal institutions, and struggling for equal rights–not in reaction to or for the approval of whites, but out of an internal conviction of their own worth and capacities. Even acts of black protest and expressions of grievance against whites were, ultimately, reflections of this inner sense of dignity. The crowning achievements of the civil rights movement–its nonviolent method and its successful effort at public moral suasion–can be seen as the projection into American politics of a set of spiritual values that had been evolving among blacks for more than a century. Jesse Jackson, Sr., teaches young blacks the exhortation, “I am somebody,” and this is certainly true. But the crucial question then becomes, “Just who are you?” Many of our fellow citizens now look upon the carnage playing itself out on the streets of ghetto America and supply their own dark answers. The youngster’s response should be: “Because I am somebody, I waste no opportunity to better myself; I respect my body by not polluting it with drugs or promiscuous sex; I comport myself responsibly, I am accountable, I am available to serve others as well as myself.” It is the doing of these fine things, not the saying of any fine words, that teaches oneself and others that one is somebody who has to be reckoned with. But who will show the many hundreds of thousands of black youngsters now teetering on the brink of disaster how to be somebody? 
One finds a precedent for the huge task we face in the Old Testament book of Nehemiah, which begins as follows: “Hanani, one of my brethren came, he and certain men of Judah; and I asked them concerning the Jews who had escaped, who were left of the captivity, and concerning Jerusalem. And they said unto me, The remnant that are left of the captivity there in the province are in great affliction and reproach; the wall of Jerusalem also is broken down, and its gates are burned with fire. And it came to pass when I heard these words, that I sat down and wept, and mourned certain days, and fasted and prayed before the God of heaven.” “The wall is broken down and its gates are burned with fire.” This metaphor of decay and assault is an apt one for our current ills. We are invited to think of a city without walls as one with no integrity, no structure, subject to the vagaries of any passing fad or fancy. We imagine the collapse of civil society; the absence of an internally derived sense of what a people stand for, of what they must and must not do. With the wall broken and its gates burned, anything becomes possible. In the biblical account Nehemiah heroically led the Jews of Jerusalem to renewal. He went to the Persian king whom he served as cup bearer, secured provision, and returned to Jerusalem, where he rolled up his sleeves and went to work restoring the physical integrity of the environment, but also presiding over a spiritual revival amongst the citizenry. Now, let us relate this to our overarching theme, lest you think you are about to read a sermon. (We are fully capable of sermonizing on this subject–that our second son’s name is Nehemiah is no accident.) Nehemiah, a Jew, was specifically concerned about his people. His work, the reconstruction of civil society, could only be undertaken, as it were, “from the inside out.” He dealt in the specific and concrete circumstances confronting the Jews. He did not deal only in abstractions. He made himself present among those for whom he had a special affection, toward whom he felt a special loyalty. His is not so bad a model. In the inner-city ghettos today “the remnant there are in great affliction and reproach.” For the civic wound of black alienation to be fully and finally bound, a great deal of work must be done in these communities. We blacks are connected–by bonds of history, family, conscience, and common perception in the eyes of outsiders–to those who languish in the urban slums. Black politicians, clergy, intellectuals, businessmen, and ordinary folk must therefore seek to create hope in these desolate young lives; we must work to rebuild these communities; we must become our brother’s keeper. To say this is, of course, not to absolve the broader American public of its responsibility to formulate decent and prudent social policies aimed at assisting all who languish on the social margins, regardless of race or creed. The ultimate goal is for the sentiment that we must become our brother’s keeper to become more widely shared. Yet when reflecting on the role that churches can play in renewing civil society among the urban poor, we find moral considerations such as those set out here to be, unavoidably, an important part of the dialogue that is now so desperately needed.
https://www.brookings.edu/articles/outlaw-of-the-sea-the-senate-republicans-unclos-blunder/ Outlaw of the Sea: The Senate Republicans’ UNCLOS Blunder When U.S. Senators Kelly Ayotte (R-N.H.) and Rob Portman (R-Ohio), both vice presidential hopefuls, recently declared their opposition to the UN Convention on the Law of the Sea, they virtually guaranteed that it would be dead on arrival if it were sent to the Senate. A group of 34 senators, including Ayotte and Portman and led by Jim DeMint (R-S.C.), is now on the record promising to vote against UNCLOS, which is enough to make getting the two-thirds majority necessary for ratification impossible. UNCLOS was first negotiated 30 years ago. But back then, U.S. President Ronald Reagan objected to it because, he argued, it would jeopardize U.S. national and business interests, most notably with respect to seabed mining. A major renegotiation in 1994 addressed his concerns, and the United States signed. Now, the U.S. Navy and business community are among UNCLOS’ strongest supporters. So, too, was the George W. Bush administration, which tried to get the treaty ratified in 2007 but failed due to Republican opposition in the Senate. Today’s opponents, including Ayotte, DeMint, and Portman, focus on two issues. First, they argue, the treaty is an unacceptable encroachment on U.S. sovereignty; it empowers an international organization — the International Seabed Authority — to regulate commercial activity and distribute revenue from that activity. Yet sovereignty is not a problem: During the 1994 renegotiation, the United States ensured that it would have a veto over how the ISA distributes funds if it ever ratified the treaty. As written, UNCLOS would actually increase the United States’ economic and resource jurisdiction. In fact, Ayotte, DeMint, and Portman’s worst fears are more likely to come to pass if the United States does not ratify the treaty. If the country abdicates its leadership role in the ISA, others will be able to shape it to their own liking and to the United States’ disadvantage. Read the full article at foreignaffairs.com »
https://www.brookings.edu/articles/outsourcing-war/ Outsourcing War Understanding the Private Military Industry The tales of war, profit, honor, and greed that emerge from the private military industry often read like something out of a Hollywood screenplay. They range from action-packed stories of guns-for-hire fighting off swarms of insurgents in Iraq to the sad account of a private military air crew languishing in captivity in Colombia, abandoned by their corporate bosses in the United States. A recent African “rent-a-coup” scandal involved the son of a former British prime minister, and accusations of war profiteering have reached into the halls of the White House itself. Incredible as these stories often sound, the private military industry is no fiction. Private companies are becoming significant players in conflicts around the world, supplying not merely the goods but also the services of war. Although recent well-publicized incidents from Abu Ghraib to Zimbabwe have shone unaccustomed light onto this new force in warfare, private military firms (PMFs) remain a poorly understood—and often unacknowledged—phenomenon. Mystery, myth, and conspiracy theory surround them, leaving policymakers and the public in positions of dangerous ignorance. Many key questions remain unanswered, including, What is this industry and where did it come from? What is its role in the United States’ largest current overseas venture, Iraq? What are the broader implications of that role? And how should policymakers respond? Only by developing a better understanding of this burgeoning industry can governments hope to get a proper hold on this newly powerful force in foreign policy. If they fail, the consequences for policy and democracy could be deeply destructive. Private Sector and Public Interest PMFs are businesses that provide governments with professional services intricately linked to warfare; they represent, in other words, the corporate evolution of the age-old profession of mercenaries. Unlike the individual dogs of war of the past, however, PMFs are corporate bodies that offer a wide range of services, from tactical combat operations and strategic planning to logistical support and technical assistance. The modern private military industry emerged at the start of the 1990s, driven by three dynamics: the end of the Cold War, transformations in the nature of warfare that blurred the lines between soldiers and civilians, and a general trend toward privatization and outsourcing of government functions around the world. These three forces fed into each other. When the face-off between the United States and the Soviet Union ended, professional armies around the world were downsized. At the same time, increasing global instability created a demand for more troops. Warfare in the developing world also became messier—more chaotic and less professional—involving forces ranging from warlords to child soldiers, while Western powers became more reluctant to intervene. Meanwhile, advanced militaries grew increasingly reliant on off-the-shelf commercial technology, often maintained and operated by private firms. And finally, many governments succumbed to an ideological trend toward the privatization of many of their functions; a whole raft of former state responsibilities—including education, policing, and the operation of prisons—were turned over to the marketplace.
The PMFs that arose as a result are not all alike, nor do they all offer the exact same services. The industry is divided into three basic sectors: military provider firms (also known as “private security firms”), which offer tactical military assistance, including actual combat services, to clients; military consulting firms, which employ retired officers to provide strategic advice and military training; and military support firms, which provide logistics, intelligence, and maintenance services to armed forces, allowing the latter’s soldiers to concentrate on combat and reducing their government’s need to recruit more troops or call up more reserves. Although the world’s most dominant military has become increasingly reliant on PMFs (the Pentagon has entered into more than 3,000 such contracts over the last decade), the industry and its clientele are not just American. Private military companies have operated in more than 50 nations, on every continent but Antarctica. For example, European militaries, which lack the means to transport and support their forces overseas, are now greatly dependent on PMFs for such functions. To get to Afghanistan, European troops relied on a Ukrainian firm that, under a contract worth more than $100 million, ferried them there in former Soviet jets. And the British military, following in the Pentagon’s footsteps, has begun to contract out its logistics to Halliburton. Nowhere has the role of PMFs been more integral—and more controversial—than in Iraq. Not only is Iraq now the site of the single largest U.S. military commitment in more than a decade; it is also the marketplace for the largest deployment of PMFs and personnel ever. More than 60 firms currently employ more than 20,000 private personnel there to carry out military functions (these figures do not include the thousands more that provide nonmilitary reconstruction and oil services)—roughly the same number as are provided by all of the United States’ coalition partners combined. President George W. Bush’s “coalition of the willing” might thus be more aptly described as the “coalition of the billing.” These large numbers have incurred large risks. Private military contractors have suffered an estimated 175 deaths and 900 wounded so far in Iraq (precise numbers are unavailable because the Pentagon does not track nonmilitary casualties)—more than any single U.S. Army division and more than the rest of the coalition combined. More important than the raw numbers is the wide scope of critical jobs that contractors are now carrying out, far more extensive in Iraq than in past wars. In addition to war-gaming and field training U.S. troops before the invasion, private military personnel handled logistics and support during the war’s buildup. The massive U.S. complex at Camp Doha in Kuwait, which served as the launch pad for the invasion, was not only built by a PMF but also operated and guarded by one. During the invasion, contractors maintained and loaded many of the most sophisticated U.S. weapons systems, such as B-2 stealth bombers and Apache helicopters. They even helped operate combat systems such as the Army’s Patriot missile batteries and the Navy’s Aegis missile-defense system. PMFs—ranging from well-established companies such as Vinnell and mpri to startups such as the South African firm Erinys International—have played an even greater role in the postinvasion occupation and counterinsurgency effort. 
Halliburton’s Kellogg, Brown & Root division, the largest corporate PMF in Iraq, currently provides supplies for troops and maintenance for equipment under a contract thought to be worth as much as $13 billion. (This figure, in current dollars, is roughly two and a half times what the United States paid to fight the entire 1991 Persian Gulf War, and roughly the same as what it spent to fight the American Revolution, the War of 1812, the Mexican-American War, and the Spanish-American War combined.) Other PMFs are helping to train local forces, including the new Iraqi army and national police, and are playing a range of tactical military roles. An estimated 6,000 non-Iraqi private contractors currently carry out armed tactical functions in the country. These individuals are sometimes described as “security guards,” but they are a far cry from the rent-a-cops who troll the food courts of U.S. shopping malls. In Iraq, their jobs include protecting important installations, such as corporate enclaves, U.S. facilities, and the Green Zone in Baghdad; guarding key individuals (Ambassador Paul Bremer, the head of the Coalition Provisional Authority, was protected by a Blackwater team that even had its own armed helicopters); and escorting convoys, a particularly dangerous task thanks to the frequency of roadside ambushes and bombings by the insurgents. PMFs, in other words, have been essential to the U.S. effort in Iraq, helping Washington make up for its troop shortage and doing jobs that U.S. forces would prefer not to. But they have also been involved in some of the most controversial aspects of the war, including alleged corporate profiteering and abuse of Iraqi prisoners. Five Obstructions The mixed record of PMFs in Iraq points to some of the underlying problems and questions related to the industry’s increasing role in U.S. policy. Five broad policy dilemmas are raised by the increasing privatization of the military. The first involves the question of profit in a military context. To put it bluntly, the incentives of a private company do not always align with its clients’ interests—or the public good. In an ideal world, this problem could be kept in check through proper management and oversight; in reality, such scrutiny is often absent. As a result, war-profiteering allegations have been thrown at several firms. For example, Halliburton—Vice President Dick Cheney’s previous employer—has been accused of a number of abuses in Iraq, ranging from overcharging for gasoline to billing for services not rendered; the disputed charges now total $1.8 billion. And Custer Battles, a startup military provider firm that was featured on the front page of the Wall Street Journal in August 2004 has since been accused of running a fraudulent scheme of subsidiaries and false charges. Still more worrisome from a policy standpoint is the question of lost control. Even when contractors do military jobs, they remain private businesses and thus fall outside the military chain of command and justice systems. Unlike military units, PMFs retain a choice over which contracts they will take and can abandon or suspend operations for any reason, including if they become too dangerous or unprofitable; their employees, unlike soldiers, can always choose to walk off the job. Such freedom can leave the military in the lurch, as has occurred several times already in Iraq: during periods of intense violence, numerous private firms delayed, suspended, or ended their operations, placing great stress on U.S. troops. 
On other occasions, PMF employees endured even greater risks and dangers than their military equivalents. But military operations do not have room for such mixed results. The second general challenge with PMFs stems from the unregulated nature of what has become a global industry. There are insufficient controls over who can work for these firms and for whom these firms can work. The recruiting, screening, and hiring of individuals for public military roles is left in private hands. In Iraq, this problem was magnified by the gold-rush effect: many firms entering the market were either entirely new to the business or had rapidly expanded. To be fair, many PMF employees are extremely well qualified. A great number of retired U.S. special forces operatives have served with PMFs in Iraq, as have former members of the United Kingdom’s elite sas (Special Air Service). But the rush for profits has led some corporations to cut corners in their screening procedures. For example, U.S. Army investigators of the Abu Ghraib prisoner-abuse scandal found that “approximately 35 percent of the contract interrogators [hired by the firm caci] lacked formal military training as interrogators.” In other cases, investigations of contractors serving in Iraq revealed the hiring of a former British Army soldier who had been jailed for working with Irish terrorists and a former South African soldier who had admitted to firebombing the houses of more than 60 political activists during the apartheid era. Similar problems can occur with PMFs’ clientele. Although military contractors have worked for democratic governments, the UN, and even humanitarian and environmental organizations, they have also been employed by dictatorships, rebel groups, drug cartels, and, prior to September 11, 2001, at least two al Qaeda-linked jihadi groups. A recent episode in Equatorial Guinea illustrates the problems that PMFs can run into in the absence of external guidance or rules. In March 2004, Logo Logistics, a British-South African PMF, was accused of plotting to overthrow the government in Malabo; a planeload of employees was arrested in Zimbabwe, and several alleged funders in the British aristocracy (including Sir Mark Thatcher, the son of Margaret Thatcher) were soon implicated in the scandal. The plotters have been accused of trying to topple Equatorial Guinea’s government for profit motives. But their would-be victim, President Teodoro Obiang Nguema Mbasogo, is a corrupt dictator who took power by killing his uncle and runs one of the most despicable regimes on the continent–hardly a sympathetic victim. The third concern raised by PMFs is, ironically, precisely the feature that makes them so popular with governments today: they can accomplish public ends through private means. In other words, they allow governments to carry out actions that would not otherwise be possible, such as those that would not gain legislative or public approval. Sometimes, such freedom is beneficial: it can allow countries to fill unrecognized or unpopular strategic needs. But it also disconnects the public from its foreign policy, removing certain activities from popular oversight. The increased use of private contractors by the U.S. government in Colombia is one illustration of this trend: by hiring PMFs, the Bush administration has circumvented congressional limits on the size and scope of the U.S. military’s involvement in Colombia’s civil war. The use of PMFs in Iraq is another example: by privatizing parts of the U.S. 
mission, the Bush administration has dramatically lowered the political price for its Iraq policies. Were it not for the more than 20,000 contractors currently operating in the country, the U.S. government would have to either deploy more of its own troops there (which would mean either expanding the regular force or calling up more National Guard members and reservists) or persuade other countries to increase their commitments—either of which would require painful political compromises. By outsourcing parts of the job instead, the Bush administration has avoided such unappealing alternatives and has also been able to shield the full costs from scrutiny: contractor casualties and kidnappings are not listed on public rolls and are rarely mentioned by the media. PMF contracts are also not subject to Freedom of Information Act requests. This reduction in transparency raises deep concerns about the long-term health of American democracy. As the legal scholar Arthur S. Miller once wrote, “democratic government is responsible government—which means accountable government—and the essential problem in contracting out is that responsibility and accountability are greatly diminished.” PMFs also create legal dilemmas, the fourth sort of policy challenge they raise. On both the personal and the corporate level, there is a striking absence of regulation, oversight, and enforcement. Although private military firms and their employees are now integral parts of many military operations, they tend to fall through the cracks of current legal codes, which sharply distinguish civilians from soldiers. Contractors are not quite civilians, given that they often carry and use weapons, interrogate prisoners, load bombs, and fulfill other critical military roles. Yet they are not quite soldiers, either. One military law analyst noted, “Legally speaking, [military contractors] fall into the same grey area as the unlawful combatants detained at Guantánamo Bay.” This lack of clarity means that when contractors are captured, their adversaries get to define their status. The results of this uncertainty can be dire–as they have been for three American employees of California Microwave Systems whose plane crashed in rebel-held territory in Colombia in 2003. The three have been held prisoner ever since, afforded none of the protections of the Geneva Conventions. Meanwhile, their corporate bosses and U.S. government clients seem to have washed their hands of the matter. Such difficulties also play out when contractors commit misdeeds. It is often unclear how, when, where, and which authorities are responsible for investigating, prosecuting, and punishing such crimes. Unlike soldiers, who are accountable under their nation’s military code of justice wherever they are located, contractors have a murky legal status, undefined by international law (they do not fit the formal definition of mercenaries). Normally, a civilian’s crimes fall under the jurisdiction of the country where they are committed. But PMFs typically operate in failed states; indeed, the absence of local authority usually explains their presence in the first place. Prosecuting their crimes locally can thus be difficult. Iraq, for example, still has no well-established courts, and during the formal U.S. occupation, regulations explicitly exempted contractors from local jurisdiction. Yet it is often just as difficult to prosecute contractors in their home country, since few legal systems cover crimes committed outside their territory. 
Some states do assert extraterritorial jurisdiction over their nationals, but they do so only for certain crimes and often lack the means to enforce their laws abroad. As a result of these gaps, not one private military contractor has been prosecuted or punished for a crime in Iraq (unlike the dozens of U.S. soldiers who have), despite the fact that more than 20,000 contractors have now spent almost two years there. Either every one of them happens to be a model citizen, or there are serious shortcomings in the legal system that governs them. The failure to properly control the behavior of PMFs took on great consequence in the Abu Ghraib prisoner-abuse case. According to reports, all of the translators and up to half of the interrogators involved were private contractors working for two firms, Titan and CACI. The U.S. Army found that contractors were involved in 36 percent of the proven incidents and identified six employees as individually culpable. More than a year after the incidents, however, not one of these individuals has been indicted, prosecuted, or punished, even though the U.S. Army has found the time to try the enlisted soldiers involved. Nor has there been any attempt to assess corporate responsibility for the misdeeds. Indeed, the only formal inquiry into PMF wrongdoing on the corporate level was conducted by CACI itself. CACI investigated CACI and, unsurprisingly, found that CACI had done no wrong. In the absence of legislation, some parties have already turned to litigation to address problems with PMFs—hardly the best forum for resolving issues related to human rights and the military. For example, some former Abu Ghraib prisoners have already tried to sue in U.S. courts the private firms involved with the prison. And the families of the four Blackwater employees murdered by insurgents in Fallujah have sued the company in a North Carolina court, claiming that the deceased had been sent into danger with a smaller unit than mandated in their contracts and with weapons, vehicles, and preparation that were not up to the standards promised. The final dilemma raised by the extensive use of private contractors involves the future of the military itself. The armed services have long seen themselves as engaged in a unique profession, set apart from the rest of civilian society, which they are entrusted with securing. The introduction of PMFs, and their recruiting from within the military itself, challenges that uniqueness; the military’s professional identity and monopoly on certain activities is being encroached on by the regular civilian marketplace. Most soldiers thus have a deeply ambivalent attitude toward PMFs. On the one hand, they are grateful to have someone help them bear their burden, which, thanks to military overstretch in Iraq, feels particularly onerous at the moment. Even though the job of the U.S. armed services has grown, the force has shrunk by 35 percent since its Cold War high; the British military, meanwhile, is at its smallest since the Napoleonic Wars. PMFs help fill the gap as well as offer retired soldiers the potential for a second career in a field they know and love. Some in the military worry, on the other hand, that the PMF boom could endanger the health of their profession and resent the way these firms exploit skills learned at public expense for private profit. They also fear that the expanding PMF marketplace will hurt the military’s ability to retain talented soldiers. 
Contractors in the PMF industry can make anywhere from two to ten times what they made in the regular military; in Iraq, former special forces troops can earn as much as $1,000 a day. Certain service members, such as pilots, have always had the option of seeking work in the civilian marketplace. But the PMF industry marks a significant change, since it keeps its employees within the military, and thus the public, sphere. More important, PMFs compete directly with the government. Not only do they draw their employees from the military, they do so to play military roles, thus shrinking the military’s purview. PMFs use public funds to offer soldiers higher pay, and then charge the government at an even higher rate, all for services provided by the human capital that the military itself originally helped build. The overall process may be brilliant from a business standpoint, but it is self-defeating from the military’s perspective. This issue has become especially pointed for special forces units, which have the most skills and are thus the most marketable. Elite force commanders in Australia, New Zealand, the United Kingdom, and the United States have all expressed deep concern over the poaching of their numbers by PMFs. One U.S. special forces officer described the issue of retention among his most experienced troops as being “at a tipping point.” So far, the U.S. government has failed to respond adequately to this challenge. Some militaries now allow their soldiers to take a year’s leave of absence, in the hope that they will make their money quickly and then return, rather than be lost to the service forever. But Washington has failed to take even this step; it has only created a special working group to explore the issue. Caveat Emptor—and Renter As all of these problems suggest, governments that use PMFs must learn to recognize their responsibilities as regulators—and as smart clients. Their failure to do so thus far has distorted the free market and caused a major shift in the military-industrial complex. Without change, the status quo will result in bad policy and bad business. To improve matters, it is first essential to lift the veil of secrecy that surrounds the private military industry. There must be far more openness about and public oversight of the basic numbers involved. Too little is known about the actual dollars spent on PMFs; the Pentagon does not even track the number of contractors working for it in Iraq, much less their casualties. To start changing matters, clients—namely, governments that hire PMFs—must exercise their rights and undertake a comprehensive survey to discern the full scope of what they have outsourced and what the results have been. Washington should also require that, like most other government documents, all current and future contracts involving nonclassified activities be made available to the public on request. Each contract should also include “contractor visibility” measures that list the number of employees involved and what they are to be paid, thus limiting the possibility of financial abuse. The U.S. military must also take a step back and reconsider, from a national security perspective, just what roles and functions should be kept in government hands. Outsourcing can be greatly beneficial, but only to the point where it begins to challenge core functions. 
According to the old military doctrine on contracting, if a function was “mission-critical” or “emergency-essential”—that is, if it could affect the very success or failure of an operation—it was kept within the military itself. The rule also held that civilians were to be armed only under extraordinary circumstances and then only for self-protection. The United States should either return to these standards or create new ones; the present ad-hoc process is yielding poor results. A third lesson is self-evident but has often been ignored: privatize something only if it will save money or raise quality. If it will not, then do not. Unfortunately, the Pentagon’s current, supposedly business-minded leadership seems to have forgotten Economics 101. All too often, it outsources first and never bothers to ask questions later. That something is done privately does not necessarily make it better, quicker, or cheaper. Rather, it is through leveraging free-market mechanisms that one potentially gets better private results. Success is likely only if a contract is competed for on the open market, if the winning firm can specialize on the job and build in redundancies, if the client is able to provide oversight and management to guard its own interests, and if the contractor is properly motivated by the fear of being fired. Forget these simple rules, as the U.S. government often does, and the result is not the best of privatization but the worst of monopolization. Tapping simple business expertise would help the government become a better client. A staggering 40 percent of Defense Department contracts are currently awarded on a noncompetitive basis, adding up to $300 billion in contracts over the last five years. In the case of CACI, the firm linked to abuses at Abu Ghraib, Army investigators subsequently reported not only that a CACI employee may have helped write the work order, but also that the Abu Ghraib interrogators had been hired by simply amending an existing contract from 1998—for computer services overseen by the Department of the Interior. When hiring contractors, the Defense Department must learn to better guard its own and the public’s interest. Doing so will require having sufficient eyes and ears to oversee and manage contracts. So far, the military woefully lacks this capacity. The U.S. government has only twice as many personnel overseeing contractors in Iraq, for example, as it had during the 1990s for its Balkans contracts—even though there are now 15 times more contracts and the context is much more challenging. The government should also change the nature of the contracts it signs. Too often, the “cost plus” arrangement has become the default form for all contracts. But this setup, in effect, gives companies more profit if they spend more. When combined with inadequate oversight, it creates a system ripe for inefficiency and abuse. In addition to insisting on more stringent terms, the government should start to use the power of market sanctions to shape more positive results. These days, the opposite seems to happen far more often: Halliburton and CACI were both granted massive contract extensions for work in Iraq, despite being in the midst of government investigations. Finally, more must be done to ensure legal accountability. To pay contractors more than soldiers is one thing; to also give them a legal free pass (as happened with Abu Ghraib) is unconscionable. Loopholes must be filled and new laws developed to address the legal and jurisdictional dilemmas PMFs raise. 
Laws should be written to establish who can work for these companies, who the firms can work for, and who will investigate, prosecute, and punish any wrongdoing by contractors. Because this is a transnational industry, the solution will require international involvement. Proposals to update the international antimercenary laws and to create a UN body to sanction and regulate PMFs have already been made. But any such international effort will take years. In the meantime, every state that has any involvement with the private military industry, as a client or a home base, should update its laws. One hopes that countries will coordinate their efforts and involve regional bodies to maximize coverage. The United Kingdom, for example, could coordinate its present efforts with the rest of the European Union, and the United States should do the same with its allies. The forces that drove the growth of the private military industry seem set in place. Much like the Internet boom, the PMF bubble may burst if the current spate of work in Iraq ever ends, but the industry itself is unlikely to disappear anytime soon. Governments must therefore act to meet this reality. Using private solutions for public military ends is not necessarily a bad thing. But the stakes in warfare are far higher than in the corporate realm: in this most essential public sphere, national security and people’s lives are constantly put at risk. War, as the old proverb has it, is certainly far too important to be left to the generals. The same holds true for the CEOs. |
cb56eedf85bb1d4c13ba1f1fdf2120c1 | https://www.brookings.edu/articles/peacekeepers-inc/ | Peacekeepers, Inc. | Peacekeepers, Inc. Violence breaks out in a small African state. The local government collapses and reports emerge that civilians are being massacred by the tens of thousands. Refugees stream out in pitiable columns. As scenes reminiscent of the Rwanda genocide are played out on the world’s television screens once again, pressure mounts to do something. The U.N.’s calls for action fall on deaf ears. In the U.S., the leadership remains busy with the war on terrorism and Iraq and decides that the political risks of doing nothing are far lower than the risks of losing any American soldiers’ lives in what is essentially a mission of charity. Other nations follow its lead, and none are willing to risk their own troops. As the international community dithers, innocent men, women, and children die by the hour. It is at this point that a private company steps forward with a novel offer. Using its own hired troops, the firm will establish protected safe havens where civilians can take refuge and receive assistance from international aid agencies. Thousands of lives might be saved. All the company asks is a check for $150 million. What would the international community do when faced with such a choice? Would it allow peacekeeping to become a profit-making exercise? Or would it choose to spurn the firm’s offer, but at the risk of lives on the ground? It is certainly a fascinating dilemma, but one that sounds almost too implausible to consider seriously. It is not. A number of states are balanced precariously on the brink of a descent into chaos (Burundi, Congo, and Zimbabwe, to name just a few). And, despite a decade of swerving from crisis to crisis, the world appears in no better a position to respond if they do. The ultimate problem is that the U.N. remains a voluntary organization of states. Its peacekeeping options are thus dependent on the enthusiasm of its members to send their own troops into harm’s way. In areas outside their spheres of influence, this is ever less present, in particular among the developed states whose militaries are best prepared to do a good job. The same problem holds true for regional organizations, which are often weakest in the areas of the world where they are needed most. Sometimes, coalitions can be built to respond to crises, but they require time, cohesion, and a willingness and capability to intervene that may not always be there. Thus, when state failure or chaos occurs, often no one answers the call. Even when peacekeeping forces are available, the units are often slow and cumbersome to deploy, poorly trained, underequipped, lacking in motivation, or operating under a flawed mandate. At the same time, there is growing global trade in private military services for hire, better known as the privatized military industry. These companies range from small consulting firms, formed by retired generals, to transnational corporations that offer battalions of commandos for hire. Often operating out of public sight, such firms have been players in a number of conflicts over the past decade, ranging from Angola to what was Zaire. Even the U.S. military has become one of the prime clients of the industry, with private firms now providing the logistics of every major U.S. 
military deployment, maintaining such strategic weapons systems as the B-2 stealth bomber and Global Hawk unmanned aerial vehicle, and taking over the ROTC programs in over 200 American universities. Indeed, from 1994 to 2002, the U.S. Defense Department entered into over 3,000 contracts with U.S.-based military firms, estimated at a value of more than $300 billion. Their role in supporting the Iraq war will only see these numbers grow. As a result, over the past several years, many have begun to call for a twenty-first-century business solution to the world’s twenty-first-century security problems. If everything from prisons to welfare has been privatized, why not try turning peacekeeping over to the private market? Proponents of exploring this idea obviously include the companies who stand to profit from it. But they have also expanded well beyond, to include not only the British government, which just issued a “Green Paper” exploring the issue, but also many traditional supporters of U.N. peacekeeping, including even former U.N. Under Secretary Sir Brian Urquhart, who is considered the founding father of peacekeeping. As a U.N. officer summed up his feelings on the firms in an interview with the Ottawa Citizen (April 6, 1998), “In a perfect world we don’t need them or want them. But the world isn’t perfect.” Privatized peacekeeping offers both promise and peril, and the time has come for the international community to face up to some hard choices—before the next disaster forces an even worse dilemma. The privatized military industry Given the fact that few have even heard of it, the privatized military industry is a surprisingly big business. It has several hundred companies, operating in over 100 countries on six continents, and over $100 billion in annual global revenue. In fact, with the recent purchase of MPRI (a Virginia-based military advisory company) by the Fortune 500 firm L-3, many Americans already own slices of the industry in their 401(k)s. In the immediate aftermath of the September 11 attacks, the industry was one of the few to rise in stock valuation rather than plummet. The reason is that the attacks essentially lodged a “security tax” on the economy, from which the private military industry stands to benefit. The industry began its boom roughly a decade ago. The opening of a market for private military services was the result of a synergy between three powerful forces. The immediate catalyst was a massive disruption in the supply and demand of capable military forces since the end of the Cold War. Not only did global military downsizing create a new labor pool of over 6 million recently retired soldiers, but at the same time there was an increase in violent, but less strategically significant, conflicts around the world. With the great powers less willing to intervene or prop up local allies, the outcome was a gap in the market of security, which private firms found themselves able to fill. At the same time, massive transformations are underway in the nature of warfare. While small-arms simplification and proliferation has increased the ability of minor warring groups to disrupt entire societies, creating even greater demand, warfare has also become more technological at its highest levels. As the vast array of military contractor support in the Iraq war illustrated, the most modern forces are more reliant than ever on civilian specialists to run increasingly sophisticated military systems. 
Finally, the past few decades have been characterized by a normative shift towards the marketization of the former public sphere. The successes of privatization programs and outsourcing strategies have given the market-based solution not only the stamp of legitimacy, but also the push to privatize any function that can be done outside government. The past decade, for example, was marked by the cumulative externalization of a number of functions that were once among the nation-state’s defining characteristics, including schools, welfare programs, prisons, and defense manufacturers. In fact, the parallel to military service outsourcing is already manifest in the domestic security market, where in states as diverse as Britain, Germany, the Philippines, Russia, and the United States, the number of private security forces and the size of their budgets greatly exceed those of public law-enforcement agencies. In short, this newest outsourcing industry drew precedents, models, and justifications from the wider “privatization revolution.” In a parallel to the wider outsourcing industry, there are three primary business sectors in the privatized military industry, with the firms distinguished by the range of services that they offer. Military provider firms, also known as private military companies or PMCs, offer services at the forefront of the battlespace. That is, their employees engage in actual fighting. Military consultant firms provide combat and strategic advisory and training services. Military support firms, akin to supply chain management firms, provide rear-echelon services, such as logistics, technical support, and transportation. The industry’s growth means that almost any military capability can now be hired off the global market. After they receive contracts from clients, who range from state governments and multinational corporations to humanitarian aid groups and even some suspected terrorist groups, the firms recruit military specialists to fill them. They find their employees through formal job announcements in trade journals and through informal alumni networks of elite units. The vast majority are recently retired, meaning that the cost of training is borne elsewhere, an added savings. Where once the creation of a military force required huge investments in both time and resources, today the entire spectrum of conventional forces can be obtained in a matter of weeks, if not days. The barriers to acquiring military strength are thus lowered, making power more fungible than ever. In other words, clients can undertake operations, which they would not be able to do otherwise, simply by writing a check. This is not just a flight of fancy, but has actually already come to fruition in a number of cases. For instance, the armies in the West African ECOWAS organization all lack certain specializations, such as air support and logistics, that are critical for effective intervention operations. Nonetheless, due to the facilitation of firms such as International Charters, Inc. (ICI) of Oregon, the organization’s forces were able to intervene in the Liberian war in the early 1990s. Using a mix of former U.S. special forces and Soviet Red Army veterans, the firm provided the assault and transport helicopters that supplied the regional force and deployed it into combat. In fact, when the capital, Monrovia, was overrun by rebels and the firm’s helicopters were destroyed at the airport base, ICI’s personnel retreated to the American embassy and helped defend it from being burned down. 
Similarly, in 1998, Ethiopia leased a wing of the latest Su-27 jet fighters (roughly equivalent to F-15s) from the Russian firm Sukhoi—along with the pilots to fly them, the technicians to maintain them, and the mission planners to direct them. This private air force helped Ethiopia win its war with neighboring Eritrea. Peacekeeping possibilities Within the peacekeeping sphere, the military consulting and support industry sectors have already made inroads (one example being firms providing certain national contingents in the East Timor operation with logistics support). However, the idea of military provider firms replacing blue helmet troops on the ground is one of the most controversial proposals to emerge from the industry’s growth. Proponents believe that such outsourcing of peacekeeping will increase both the effectiveness and efficiency of peace operations. In contrast to the U.N.’s tin-cup dependence on whatever forces its members choose to donate, private firms could target their recruiting at more capable personnel and scour the markets for the best equipment. The firms also would lack the procedural hang-ups that hamper international organizations; they are less threatened by the internal tensions that plague multinational forces and can take quicker and more decisive action. In short, private companies might be able to do peacekeeping faster, better, and cheaper. The contrasting experiences in Sierra Leone between the military provider firm Executive Outcomes and the U.N.’s peacekeeping operation are the most often cited example of privatization’s promise. In 1995, the Sierra Leone government was near defeat from the RUF, a nefarious rebel group whose habit of chopping off the arms of civilians as a terror tactic made it one of the most truly evil groups of the late twentieth century. Supported by multinational mining interests, the government hired the private military firm, made up of veterans from the South African apartheid regime’s elite forces, to help rescue it. Deploying a battalion-sized unit of assault infantry (numbering in the low hundreds), which was supported by firm-manned combat helicopters, light artillery, and a few armored vehicles, Executive Outcomes was able to defeat the RUF in a span of weeks. Its victory brought enough stability to allow Sierra Leone to hold its first election in over a decade. After the firm’s contract was terminated, however, the war restarted. In 1999 the U.N. was sent in. Despite having a budget and personnel size nearly 20 times that of the private firm, the U.N. force took several years of operations, and a rescue by the British military, to come close to the same results. There are three potential scenarios for the privatization of peacekeeping forces. The first is privatized protection. The problem of security for relief operations is widespread and pervasive. In fact, more Red Cross workers were killed in action in the 1990s than U.S. Army personnel. Thus, while the ability of humanitarian actors to create a consensual environment themselves is severely limited, military provider firms might be able to provide site and convoy protection to aid groups. This would allow much more effective aid actions in areas where the local government has collapsed. Besides the direct benefit to the workers on the ground, better protection might also prevent local insurgents from gaining control of supplies and lessen the pressure on outside governments to become involved in messy situations, including scenarios like the 1992 Somalia operation. 
Humanitarian organizations still operating in dangerous places such as Mogadishu already contract for protection with local warlords, so the more formal business alternative might be preferable. In fact, this scenario is not all that unlikely, given that several U.N. agencies already use such firms to provide security for their own offices. The second possibility is hired units constituted as a “Rapid Reaction Force” within an overall peacekeeping operation. Whenever recalcitrant local parties break peace agreements or threaten the operation, military firms would be hired to offer the muscle that blue helmets are unable or unwilling to provide. The quick insertion of a more combat-minded force, even a relatively small private one, could be critical in deterring local adversaries and stiffening the back of the overall peace operation. Paid firms might thus provide the short-term coercion necessary at critical junctures in the operation. The final, and most contentious, scenario is the complete outsourcing of the operation. When a genocide or humanitarian crisis occurs and no state is willing to step forward to send its own troops, the intervention itself might be turned over to private firms. Upon its hire (by the U.N. or anyone else willing to pay), the firm would deploy to a new area, defeat any local opposition, set up infrastructures for protecting and supporting refugees, and then, once the situation was stabilized, potentially hand over control to regular troops. This idea may sound quite incredible but actually was an option considered by policymakers behind closed doors during the refugee crisis that took place in eastern Zaire in 1996. Both the U.N. Department of Peacekeeping and the U.S. National Security Council discussed the idea that, in lieu of U.N. peacekeepers, a private firm be hired to create a secure humanitarian corridor. The plan was dismissed when the question of who would actually foot the bill was raised. The scenarios illustrate how the concept of the private sector taking over peace operations could radically transform the very nature of peacekeeping, opening up all sorts of new possibilities. For example, firm executives have proposed that they could be paid to take back cities such as Mogadishu, which have been lost to warlords and lawlessness. The firms would stabilize them and then turn the cities over to local or U.N. administration, thus perhaps allowing failed states to rejoin the international system. Similarly, the aforementioned Executive Outcomes performed a business exploration of whether it would have had the capacity to intervene in Rwanda in 1994. Internal plans claim that the company could have had armed troops on the ground within 14 days of its hire and been fully deployed with over 1,500 of its own soldiers, along with air and fire support (roughly the equivalent of the U.S. Marine force that first deployed into Afghanistan), within six weeks. The cost for a six-month operation to provide protected safe havens from the genocide was estimated at $150 million (around $600,000 a day). This private option compares quite favorably with the eventual U.N. relief operation, which deployed only after the killings. The U.N. operation ended up costing $3 million a day (and did nothing to save hundreds of thousands of lives). 
More recently, a consortium of military firms, interestingly entitled the “International Peace Operations Association,” has proposed that it be hired to work on behalf of the largely ineffectual MONUC peacekeeping operations in the Eastern Congo. The private military firms, which range from aerial surveillance operators to a company of Gurkha veterans, have offered to create a “Security Curtain” (50 km demilitarized zone) in one of the most lawless areas on the African continent. The IPOA’s charge would be between $100 million and $200 million, depending on the scale of the operation. So far, it has found no takers, but the level of violence in the area continues to escalate. Privatization’s perils Obviously, such proposals hold great promise, which explains the enthusiasm for them. But before the international community leaps into the privatization revolution, it would do well also to consider its perils. A perfect market exists only in theory, and marketizing public services therefore carries both advantages and disadvantages. To put it in economic terms, privatization of any type always carries positive and negative externalities. This is never more true than in the military sphere, where profit motives further cloud the fog of war. While experience has shown that these private military businesses may be able to operate more efficiently and effectively than the forces of public organizations, their hire also has often raised an array of worrisome issues. These challenges are certainly better resolved before peacekeeping is turned over to the private market. The first issue is the contractual dilemmas that arise with privatization. There are obvious market incentives for firms to act in their clients’ interests. Any company that does otherwise risks not being hired again. The problem is that market constraints are always imperfect and tend to work only over the long term. In actuality, the security goals of clients are often in tension with the firms’ aim of profit maximization. The result is that considerations of the good of the private company are not always identical with the public good. For privatized peacekeeping, the ensuing dangers include all the problems one has in standard contracting and business outsourcing. The hired firms have incentives to overcharge, pad their personnel lists, hide failures, not perform to their peak capacity, and so on. The worry, though, is that these are all now transferred into the security realm, where people’s lives are at stake. The most worrisome contractual dilemma, however, is that outsourcing also entails turning over control of the actual provision of service. For peacekeeping, this means the troops in the field are not part of national armies, but private citizens hired off the market, working for private firms. Security is now at the mercy of any change in market costs and incentives. One example of the resulting danger derives from the nasty habit humanitarian interventions have of becoming more complex over time. A firm hired to establish a safe haven might later find the situation more difficult than it originally expected. The operation might become unprofitable or, due to any increase in local opposition, more dangerous than anticipated. Thus, the company could find it in its corporate interest to pull out. Or, even if the company is kept in line by market constraints, its employees might decide that the personal risks they face in sticking it out in an operation are too high relative to their pay. 
Not bound by military law, they can simply break their contracts without fear of punishment and find safer, better paying work elsewhere. In either case, the result is the same: the abandonment of those who were dependent on private protection without consideration for the political costs or the client’s ability to quickly replace them. Second, privatization also raises certain risks stemming from problems of adverse selection and a lessening of accountability. Military provider firms are not always looking for the most congenial workforce, but instead, understandably enough, recruit those known for their effectiveness. For example, many former members of the most notorious and ruthless units of the Soviet and apartheid regimes have found employment in the industry. These individuals acted without concern for human rights in the past and certainly could do so again. In either case, the industry cannot be described as imbued with a culture of peacekeeping. Even if the firms are scrupulous in screening their hires (which is hard to accomplish, given that few prospective employees would think to include an “atrocities committed” section on their resumes), it is still difficult for them to monitor their troops in the field. Furthermore, if employees do commit violations, there is little incentive for a firm to turn them over to any local authorities. To do so risks scaring off both clients and other prospective employees. This turned out to be the case recently in the Balkans. Employees of Dyncorp, who had been contracted to perform police duties for the U.N. and aircraft maintenance for the U.S. Army, were later implicated in child prostitution rings. Dyncorp’s Bosnia site supervisor even filmed himself raping two women. These employees were transferred out of the country, and none were ever criminally prosecuted. Industry executives counter that U.N. peacekeepers have certainly been involved in crimes of their own in the past, so the risks of human rights violations occurring during peace operations are nothing new. The difference with privatization, though, is that while soldiers in U.N. missions are ultimately held responsible under their national military code of justice, contracted peacekeepers are subject only to the laws of the market. Current international law has been found inapplicable to the actions of the industry, as the firms fall outside of the outdated legal conventions that deal only with individual mercenaries. The only possible regulation must then come either from the law of the state in which the operation is taking place or the law of the state in which the firm is based. Since the collapse of the rule of law is what tends to create the conditions for hiring firms in the first place, the first alternative is almost never an option. The transnational nature of the industry makes the second option of home-state regulation difficult as well. Besides the fact that extraterritorial monitoring (i.e., of firms operating outside national boundaries) is very difficult, any time a firm finds the regulation too onerous, it can simply transfer to more friendly environs. Moreover, even among firms that stay based in the few countries with the ability and will to regulate, the jurisdiction is still problematic. For example, U.S. criminal law does not apply outside of U.S. territorial and special maritime jurisdictions, so that if an employee of an American military firm commits an offense abroad, the likelihood of prosecution is extremely low. 
Consequently, other than nonrenewal of contract, there are no real checks and balances on military firms that will ensure full accountability. The third challenge of privatization is its long-term implications for local parties. The key to any durable peace is the restoration of legitimacy. In particular, this requires the return of control over organized violence to public authorities. Unfortunately, if peacekeeping is privatized, the companies may become a temporary mechanism for preserving peace but still do little to address the underlying causes of unrest and violence. In the view of many, most importantly the U.N. Department of Peacekeeping (which may have a vested bureaucratic interest in opposing the privatization of forces), the act of becoming a peacekeeper is about more than just changing the color of one’s helmet or beret. Peacekeepers’ roles and responsibilities differ markedly from regular military operations. They require an entirely new cultural outlook focused on humanitarian concerns, which at times can duel with or shackle normal military instincts. Not only must peacekeepers operate under very different rules of engagement, but the most important directive is a guiding ethic of neutrality, the act of not taking sides. Thus, the most successful peacekeeping operations (such as experiences in Mozambique, Namibia, and Guatemala) are not simply about placing third-party troops on the ground. Instead, they include a wide variety of “peacebuilding” activities designed to restore torn social fabrics and foster cooperation among local parties. These range from cease-fire monitoring and troop disarmament and demobilization to reconstruction and election monitoring. Thus, U.N. operations are often so unwieldy for the very reason that they must also carry on these essential activities. Private military firms, untrained or uninterested in the culture of peacekeeping, might be ill-equipped to handle them. Moreover, reliance on an outside private force does little to reestablish the local social contract. Instead, it appears more likely to reinforce the idea that power belongs only to those with the ability to afford it. Finally, the nitty-gritty details of implementation often bedevil privatization’s promise in regular government contracting and general industry; they likely will do so with peacekeeping as well. For example, there is no clear answer to the question of who should have the power to hire private military firms. The first scenario of contracted protection not only challenges norms of aid group neutrality, but also perhaps hazardously expands the powers of these outside organizations, which are responsible only to their donors. The presence of such protection forces entails a further multiplication of armed forces on the ground, hardly the best thing in the midst of a complex operation. Likewise, if the power to hire military firms for peacekeeping is restricted to the U.N., it is still unclear what body of the institution should decide. The decision-making process of the General Assembly is certainly unwieldy and also biased against certain states. Limiting authority to the Security Council, however, leaves the developing world—the very place where the privatized deployments are likely to occur—underrepresented. The result is that many of the same arguments that have been made against the U.N.’s having its own standing army also apply to it having its own contracted force. Similar concerns also occur at the operational level. 
In the rapid reaction force scenario, for example, there will likely be difficulties of integrating a better-paid private force within a larger U.N. peacekeeping force. The probable resentment between the two forces could jeopardize operational cohesion. Likewise, it is difficult to determine who should be in operational command. Few military firms are willing to accept outside commanders of their units, particularly from the U.N., while clients would obviously prefer to have their own people at the top. In lieu of this, some firms have expressed a willingness to allow outside observers to be present during their operations. The exact powers of these observers, though, are also unsettled. For example, who will provide them and ensure their independence? Will they be like rapporteurs, just providing independent reporting on the operations, or like referees, with the ability to veto certain actions or suspend operations in mid-course? Hard choices The international community must face up to the fact that its own weaknesses have presented it with a hard choice. It is only a matter of time before the next humanitarian crisis occurs in an area that falls outside the interests of the leading states. Whenever it happens, there is a strong possibility that the U.N. will either have to stomach its concerns about the unseemliness of privatized peacekeeping or face the prospect of watching thousands of men, women, and children die when the market could have saved them. The onus is to deal with these issues now, before the next crisis brings this quandary to the fore. This requires action on two fronts. The ills of U.N. peacekeeping have been known for over a decade. As pithy as it sounds, it would be in the interest not only of the institution, but also of its primary donors finally to do something about them. A good starting point would be full support for the implementation of the Brahimi Report, a statement of recommendations on peacekeeping reform written last year by an international body of experts. Prime among these proposals is to raise the bar for the recruiting and vetting of international peacekeeper units. The U.N. should also seriously explore the possibility of using the private market to get a better bang for its buck out of existing peacekeeping units. Military support firms already provide the transport, communications, and logistics of operations for many militaries from well-off states. For example, Brown & Root Services provides such support to U.S. forces deployed in the Balkans, Central Asia, and the Gulf. Units from the developing world, which make up the majority of U.N. forces, are glaringly weak in performing these functions. By outsourcing these services and standardizing them over the whole U.N. peacekeeping system, a synergy of public troops and private support might become possible. Similarly, military consultant firms might be able to provide training and assistance that would improve U.N. operational output. The second front is perhaps even more difficult than U.N. reform. The decision to watch genocide and do nothing not only is morally unacceptable, but also is likely untenable in a world of ever-present media attention. So, if the international community is unwilling to pay the costs of providing its own capable peacekeeping forces, then it is better that it now begin finding ways to mitigate the underlying concerns with contracting out humanitarian intervention. This is preferable to an ad hoc response at the point of crisis. The U.N. 
is currently ill-prepared to enter the business environment that privatization entails. If the decision is made to turn down this path, it will have to make institutional adjustments to protect both its own and public interests. A good starting point is the creation of standardized monitoring and contracting processes. Other priorities include the establishment of clear contractual standards and incentives programs, systems for outside vetting of personnel, and the creation of independent observer teams (with powers not only to monitor, but also to control the delivery of payment in order to establish their authority over the firm). Most important, however, is that military firms be brought under the control of the law, just like any other industry. This will require both the extension of the International Court of Justice to their activities and clear contract provisos that military firm personnel fall under the jurisdiction of international tribunals. From passenger planes serving as cruise missiles to private companies trading in armies, we live in a time of immense flux in the international security environment. A mere 10 years ago, the notion of private firms taking over the responsibilities of peacekeeping would have been absurd. It is now a real prospect. These firms, however, are not altruistic by any measure, meaning that peacekeeping would best be left to the real generals. But if the public sector is unwilling to get its own house in order, the private sector offers a new way to protect those who would otherwise be defenseless. |
d5ae9c341a611dbdd542e7735b7e9899 | https://www.brookings.edu/articles/population-policy-and-politics-how-will-history-judge-chinas-one-child-policy/ | Population, Policy, and Politics: How Will History Judge China’s One-Child Policy? | Population, Policy, and Politics: How Will History Judge China’s One-Child Policy? One of the main puzzles of modern population and social history is why, among all countries confronting rapid population growth in the second half of the twentieth century, China chose to adopt an extreme measure of birth control known as the one-child policy. A related question is why such a policy, acknowledged to have many undesirable consequences, has been retained for so long, even beyond the period of time anticipated by its creators. With the world’s population growth rate now at half its historical peak level and with nearly half of the world’s population living in countries with fertility below replacement level, we can look back at the role politics played in formulating, implementing, and reformulating policies aimed at slowing population growth (Demeny and McNicoll 2006; Robinson and Ross 2007; Demeny 2011). In this context, an examination of China’s unprecedented government intervention in reproduction offers valuable lessons in appreciating the role of politics in the global effort of birth control in the twentieth century. Aside from the rise and fall of Communism, family planning programs along with the Green Revolution could be considered two of the most consequential social experiments of the twentieth century. These two experiments differ, however, in both content and approach. The Green Revolution was aimed at feeding the population, while family planning programs were designed to curtail its growth. The Green Revolution was technological, economic, and global, while family planning programs were social, political, and often country specific. Nowhere in the world did politics and policies figure more prominently in the effort to control population growth than in China. The policy of allowing all couples to have only one child finds no equal in the world and it may be one of the most draconian examples of government social engineering ever seen. In this essay, we cast China’s one-child policy in the changing global context of population policymaking, we revisit the supposed necessity of such a policy by examining the claim that the policy was responsible for preventing 400 million births, and we discuss the reasons such a policy, with all its known negative consequences, has been allowed to stay in place for more than thirty years since its inception. Editor’s Note: this paper first appeared in Population and Development Review, published by the Population Council. |
a89b1fe2e5bd4b6255202620e108eed3 | https://www.brookings.edu/articles/postindustrial-hopes-deferred-why-the-democratic-majority-is-still-likely-to-emerge/ | Postindustrial Hopes Deferred: Why the Democratic Majority Is Still Likely to Emerge | Postindustrial Hopes Deferred: Why the Democratic Majority Is Still Likely to Emerge Early during the summer of 2002, as the economy slowed and the winds of scandal swept through Washington and Wall Street, it looked as if the Democrats might win big in November, despite the post-September 11 boost in the GOP’s fortunes. But the two-month-long preelection debate over war with Iraq, initiated and orchestrated by the Bush administration, capped by Bush’s barnstorming tour of key states right before the election, ensured GOP success. The Democrats did well in some key gubernatorial contests—Illinois, Pennsylvania, Michigan, Wisconsin, and Arizona—where talk of war had relatively little effect, but they did not claim the majority toward which they have been moving, by fits and starts, since 1996. Before this decade is over, however, the Democrats are likely to complete this journey, moving the country from a conservative Republican majority to a progressive Democratic one. Particular elections depend on a host of contingencies, from the quality of candidates to the money at their disposal to outside events that help one party much more than the other, as in the 2002 election. But political trends are the product of deeper shifts within society and the economy. The old New Deal Democratic majority, which reigned from 1932 to 1968, was based in the industrial North and the segregated South. The conservative Republican majority of the 1980s exploited dissatisfaction with the civil rights movement and the 1960s counterculture to win the white South and parts of the ethnic North. Those areas, combined with the Republicans’ traditional base in the farms of the prairies and the boardrooms of the North, gave the party a majority that lasted until the early 1990s. Since then, however, a new Democratic majority—different from both its New Deal ancestors and its conservative Republican rivals—has been emerging. The Spread of the Ideopolis This new Democratic majority is rooted in the growth of a postindustrial economy. The old industrial economy was based in cities and organized around assembly-line manufacturing, farming, and mining; the new postindustrial economy is based in large metropolitan areas—or "ideopolises"—that include cities and suburbs and are organized around the production of ideas and services. Many ideopolises, such as metropolitan Boston, Silicon Valley, or the Seattle area, hug the North and far West, but others, like North Carolina’s Research Triangle, the Maryland and Virginia suburbs of Washington, and the Tucson and Phoenix areas in Arizona, dot the nation. During the 1980s, many ideopolises voted Republican; in the 1990s they began to elect Democrats. And the Democratic party itself began to change to reflect the priorities of these people, including growing numbers of professionals and technicians, from computer programmers and financial analysts to teachers and nurses. A quarter or so of the jobs in Austin, Raleigh-Durham, Boston, or San Francisco are held by workers like these, many of whom are women who have joined the workforce since the 1960s. 
Plentiful, too, are low-level service and information workers, including waiters, hospital orderlies, sales clerks, janitors, and teachers’ aides, many of whom are Hispanics and African-Americans. Together, professionals, women, and minorities, bolstered by blue-collar workers attracted to the Democrats’ stands on economic issues, have formed powerful coalitions that now dominate the politics of many ideopolises. This politics emphasizes tolerance and openness. It is defined more by the professionals, many of whom were deeply shaped by the social movements of the 1960s, than by any other group. They worry about clean air and water, and when the market fails to provide these environmental goods, they call on government. They favor civil rights and liberties and good government. They disdain the intolerance and fundamentalism of the religious right. But they are also leery of the old Democratic politics of “big government” and large-scale social engineering. Republicans can and do argue that not all of America is like Silicon Valley or Boston’s Route 128. But during the past four decades America has become more like these areas—and less like the Mississippi Delta or the Texas Panhandle. A careful study of these postindustrial metropolitan areas indicates the varied ways they have developed. Some, like Silicon Valley or Colorado’s Boulder area, boast large manufacturing facilities, but the manufacturing, whether of pharmaceuticals or of semiconductors, involves applying complex ideas to physical objects. Other postindustrial metro areas like New York and Los Angeles specialize in producing entertainment, media, fashion, design, and advertising or in providing databases, legal counsel, and other business services. Most include major universities that funnel ideas and people into the hard or soft technology industries. Route 128 feeds off Harvard and MIT. Silicon Valley is closely linked to Stanford and the University of California at Berkeley. Dane County’s biomedical research is tied to the University of Wisconsin at Madison. How Do Ideopolises Vote? A close look at 263 ideopolis counties that are part of these high-tech metro areas vividly illustrates the political impact of America’s rising ideopolises. These counties, which account for 44 percent of the nation’s vote, are among its most dynamic, fastest-growing areas. Between 1990 and 2000, the average ideopolis county grew 23 percent, while non-ideopolis counties—centered on less technically advanced cities like Greenville, South Carolina, or Muncie, Indiana, or in rural areas—grew an average of only 10 percent. The Democrats have come to dominate these ideopolis counties. In 1984, for instance, the ideopolis counties went for Ronald Reagan by 55 percent to 44 percent. But in 2000, Al Gore garnered 55 percent of their votes as against 41 percent for George Bush. And if left-wing candidate Ralph Nader’s 3 percent share is included, the total Democratic-leaning vote in America’s ideopolises can be reckoned at close to 58 percent. Republicans’ strength is now in the smaller low-tech and rural counties, where Gore lost to Bush by 53 percent to 44 percent. Indeed, since 1980, the beginning of the Reagan era, almost all the pro-Democratic change in the country has been concentrated in America’s ideopolis counties. The Democrats have become the party of the postindustrial future; the Republicans, the party of the industrial and agricultural past. So What Happened in 2002? 
These trends notwithstanding, the 2002 election was a poor one for Democrats nationwide. The primary cause was national security, sparked by the Iraq debate, which mobilized Republicans, especially conservative whites in rural and exurban areas, and moved many close elections into the Republican column (though Democratic demobilization because of an anemic Democratic campaign and program was also clearly at work). The white vote was important: Republicans made little headway among key Democratic groups like Hispanics. Indeed, Hispanic support for Democrats was rock solid. For example, in California, the one state where exit poll data are available, Democrat Gray Davis won the governor’s race with 65 percent of the Hispanic vote. Republican Bill Simon won 24 percent of the Hispanic vote—the same share won by Republican Dan Lundgren in California’s 1998 gubernatorial contest. In terms of the national vote for Congress, a Greenberg-Quinlan-Rosner postelection poll found that Hispanics supported the Democrats in 2002 by 62 percent—again nearly identical with 1998 exit poll figures, which showed 63 percent Democratic support for Congress among Hispanics. Political scientist James Gimpel of the University of Maryland confirms that Hispanic voting patterns held firm in the 2002 election. He finds that Hispanics in 10 states polled by Fox News (Texas, Florida, New Jersey, New Hampshire, Arkansas, Colorado, Georgia, Minnesota, Missouri, and South Dakota) supported Democrats over Republicans for the Senate by more than two to one (67 percent to 33 percent). Gimpel sees little evidence that Latinos, in general, are moving away from the Democratic party, despite all the talk about Hispanics as swing voters. He also finds, however, that poor turnout of Latinos did benefit the Republicans in 2002 and that Latino demobilization, if continued, would be a big plus for the GOP. What’s Ahead? Can the Republicans continue a politics that mobilizes their constituencies and demobilizes the Democrats’? Under certain scenarios, national security could continue to crowd out other issues and give Republicans the edge, as it did in 2002. But it is more likely that the importance of national security will ebb and flow and that it will become more, not less, contested between the parties. In this case, the underlying trends described here are likely to come to the fore and continue to move the country toward a new Democratic majority. But—and perhaps this is the chief lesson of the 2002 elections for the Democrats—that majority will not coalesce automatically. Imaginative political leadership, such as that displayed by Democrats in the 1990s, will be required. One leadership challenge is to develop a national security policy that is a plausible alternative to that of the Republicans. Another is to offer voters a domestic policy agenda that goes beyond prescription drugs and defending Social Security—good but tired issues that did not capture voters’ imaginations. New ideas abound on the Democratic side on both national security and domestic policy, but they must be articulated if they are to reach voters—both the swing voters lost in 2002 and the base voters who found the 2002 Democratic program uninspiring and stayed home. But if Democrats can mobilize their base and compete vigorously for swing voters, the gathering impact of postindustrial change is clear. 
The spread of ideopolises over ever-wider sections of the country should continue to weaken Republicans, even in formerly “safe” states, and strengthen Democrats. That’s why conservative Republican dominance is coming to an end and why a new Democratic majority is still likely by the decade’s end. |
87b969ec223f14ad283b693259641883 | https://www.brookings.edu/articles/postscript-a-new-model-afghan-army/ | Postscript: A New Model Afghan Army | Postscript: A New Model Afghan Army Nearly a year since U.S. forces helped topple the Taliban, attempts to build a functioning Afghan government remain in their infancy. This void, particularly in the security sphere, casts a shadow over the long-term prospects of Operation Enduring Freedom. While attention has shifted elsewhere, the international community has barely been able to maintain the status quo in Afghanistan. The new Afghan government remains essentially divided and powerless. Token U.S. and French efforts to build an Afghan army have trained only about 1,600 soldiers over the past six months. Moreover, the force's leadership is unrepresentative of the country's ethnic makeup and the pay is low, so roughly a third of the recruits end up quitting. Thus, there is still no substantial and representative Afghan national force, and security remains a major problem. Indeed, President Hamid Karzai must rely on an outside private security company to protect him from assassination. Resurgent Taliban and Al Qaeda forces operate with increasing temerity (averaging 50 attacks on U.S. forces every month), rival warlords continue to control most of Afghanistan, and the only flourishing trade is the illegal one in private customs duties and narcotics. Despite this, the Afghan government still clings to the idea that it can build a centralized army of 70,000 and completely disarm the warlord forces – who by some estimates number 700,000 fighters – all in the span of a year. The United States, for its part, now estimates that the army will cost $350 million a year to train, equip and operate, and that this training can be completed in two years. Both plans are highly optimistic. Moreover, without a concept for integrating, rather than instigating, the warlords and making the Afghan army truly representative, they risk backfiring. This failure to think realistically about Afghanistan's future could undermine the overall goals of the U.S. in Central Asia. After its initial military success, the Bush administration reverted to its campaign rhetoric of avoiding "nation building," and shunned the lessons that the international community painfully learned in the interventions of the 1990s, including the need to build an international apparatus to help rebuild war-torn societies. As a result of this policy, U.S. military forces are ironically more involved in local politics and civil affairs than ever (i.e., "nation building" by any other name), but do not have the backing of international institutions and NGOs better suited for these roles. The local government is no closer to standing on its own two feet. This not only raises great worries about the prospects for long-term stability in Afghanistan, but also for what might happen in a potential post-war Iraq. While the U.S. has shown the capacity to eject malevolent leaders from power, it has yet to develop a blueprint on how to build a successful and stable political system in their void. The U.S. failure to do so not only brings the problems that originally motivated intervention back full circle, but also risks ensnaring U.S. military forces in these troubled areas for the foreseeable future. |
ad04be7ec13753e50345be4c41514be6 | https://www.brookings.edu/articles/promoting-young-guards-the-recent-high-turnover-in-the-pla-leadership-part-ii-expansion-and-escalation/ | Promoting “young guards”: The recent high turnover in the PLA leadership (Part II: Expansion and escalation) | Promoting “young guards”: The recent high turnover in the PLA leadership (Part II: Expansion and escalation) The most noticeable trend under the leadership of Xi Jinping since the 2012 National Congress of the Chinese Communist Party (CCP) has been the continuing consolidation of power. In particular, the military has been a key forum in which Xi has strengthened both his personal power and his new administration’s authority. Xi has adopted several approaches and political tactics to achieve this, including purging the two highest-ranking generals under the previous administration for corruption and other charges; arresting 52 senior military officers on various charges of wrongdoing; reshuffling generals between regions, departments, and services; attempting to systematically reform the PLA’s structure and operations; and, last but not least, rapidly promoting “young guards” (少壮派) in the Chinese military. These bold moves will have profound implications, not only for Xi’s political standing in the lead-up to the next leadership turnover in 2017, but also for the development of civilian-military relations in the country and for the trajectory of China’s military modernization. The second installment in this series focuses on the reform of the military, including a detailed discussion of the background and chronology of the military reform plan. Although this reform is only in the initial stage of a multiyear plan, the People’s Liberation Army (PLA) has already undergone its greatest transformation—in terms of administrative lineup, operational theaters, and strategic priorities—since the founding of the People’s Republic of China (PRC) in 1949. This is part two of a series that will appear in the upcoming issue of the China Leadership Monitor. Download the article in full below. The first paper in the series can be found here: Promoting “young guards”: The recent high turnover in the PLA leadership (Part 1: Purges and reshuffles) |
bc5f4b3240acac83e8c0cd23f98bf62d | https://www.brookings.edu/articles/recruiting-executive-branch-leaders-the-office-of-presidential-personnel/ | Recruiting Executive Branch Leaders: The Office of Presidential Personnel | Recruiting Executive Branch Leaders: The Office of Presidential Personnel Despite the demise of the spoils system about which Garfield was complaining, the demand for government jobs after each presidential election continues to be a hallmark of American politics. It took the assassination of President Garfield by one of the vultures, deranged office-seeker Charles Guiteau, to galvanize Congress to pass the Pendleton Act in 1883 establishing the merit system of civil service. But remaining atop the executive bureaucracy was and is a layer of political officers, a layer that has grown thicker in recent years. The Constitution vests the "executive power" in the president and commands that "the laws be faithfully executed." To fulfill this responsibility each president appoints the major officers of the government. The government's ability to carry out its primary functions depends crucially on capable civil servants, whose effectiveness is intimately tied to the quality of the leadership of the executive branch, that is, presidential appointments. Each new president who comes to office appoints thousands of men and women to help lead the executive branch. While the career civil servants who work under their direction are recruited on a continual basis by the Office of Personnel Management and individual agencies, the leaders themselves are recruited by the White House Office of Presidential Personnel, which is formed anew by each president. The obligations of the OPP are threefold—to serve the nation by recruiting executive branch leaders, to serve the president by finding qualified loyalists, and to shepherd nominees through the sometimes treacherous appointment process. Serving the Nation Until a few decades ago presidents lacked the personal staff to control the process by which appointees were selected. In the 19th and first half of the 20th century, presidential appointments were dominated by the political parties. As presidents began to assert more personal control, they slowly increased the institutional capacity of the White House to recruit their own nominees for positions in the government and gradually superseded the dominance of the political parties. The presidential recruitment function was transformed in the second half of the 20th century in four ways. First, an increasingly professionalized executive recruitment capacity replaced the political parties as the primary source of appointees. Second, this capacity, which began with one person in charge in the Truman administration, was gradually institutionalized as a regular component of the White House Office headed by an aide with the title of assistant to the president. Third, the reach of the office was extended not only to presidential appointments but also to what are technically agency head appointments (noncareer Senior Executive Service and Schedule C positions). And finally, the office grew from six people in the Kennedy administration to more than 100 at the beginning of the Reagan and Clinton administrations. Thus an institutionalized OPP is handling an increasing number of political appointments for the president. The positions for which the OPP recruits are the most important in the executive branch: the cabinet and subcabinet, leaders of independent agencies, and regulatory commissioners.
Together with ambassadors (185), U.S. attorneys (94), U.S. marshals (94), and others, the total number of presidential appointments requiring Senate confirmation is 1,125. Additional lower-level political appointments are available to each administration to help implement its priorities. For example, noncareer appointments in the Senior Executive Service (created in 1978) can by law amount to 10 percent of the total career SES; today they number 720. Schedule C positions, about 200 when first created in 1953, now number 1,428. These latter two categories, technically made by cabinet secretaries and agency heads, have been controlled by the OPP since the Reagan administration. Though less important than presidential appointments, they place an added burden on the OPP, which must also advise the president on hundreds of part-time appointments, many to boards and commissions that may meet several times a year. Given the growing number of political positions, along with the OPP’s increasing scope of authority, it is not surprising that the pace of appointments has slowed in the past four decades (see table 1). Table 1: Length of Appointment Process, As Reported by Appointees LENGTH OF PERIOD 1964–84 1984–99 1 or 2 months 48% 15% 3 or 4 months 34% 26% 5 or 6 months 11% 26% More than 6 months 5% 30% Source: Paul C. Light and Virginia L. Thomas, The Merit and Reputation of an Administration (Presidential Appointee Initiative, 2000), p.8. The number of appointees surveyed was 532 for 1964–84, 435 for 1984–99. Serving the President The primary task of the OPP—helping the president match the right nominee with the right position—is not simple. The personnel office must be ready to go the day after the election, so advance planning is crucial, but often neglected in the pressure of the campaign. The onslaught of office seekers begins immediately, and the OPP must be ready to handle the volume with some political sophistication. The delay in establishing the personnel recruitment process is one of the reasons that the 2000-01 transition has been particularly challenging. A process must be set up to strike the right balance between the president’s personal attention and the need to delegate much of the recruitment task to the OPP. Intense pressure for appointments from the presidential election campaign, Capitol Hill, interest groups, and the newly designated cabinet secretaries will buffet the process. Perhaps most important, the newly elected president’s policy agenda will not be fully implemented until most of the administration’s appointees are confirmed and in office. Political patronage has a long and colorful history in the United States. The purposes of patronage appointments are to reward people for working on the campaign and for the political party and also to ensure that the government is led by people who are committed to the political philosophy and policy agenda of the president. As long as these purposes are consistent with putting qualified people in charge of government programs, there is no problem. But from the perspective of the OPP, demands for patronage are frustrating. Pressures for appointments come from all sides: everybody, it seems, wants to ride the president’s coattails into Washington jobs. According to Pendleton James, President Reagan’s assistant for presidential personnel in 1981-82, “…being the head of presidential personnel is like being a traffic cop on a four-lane freeway. You have these Mack trucks bearing down on you at sixty miles an hour. 
They might be influential congressmen, senators, state committee chairmen, heads of special interest groups and lobbyists, friends of the president’s, all saying ‘I want Billy Smith to get that job.'” Thus the OPP has to deal with external pressures for appointments, but it also faces internal battles with cabinet secretaries over subcabinet appointments. From the White House staff perspective, subcabinet positions are presidential appointments and should be controlled by the White House. But from the cabinet secretary’s perspective, these appointees will be part of his or her management team, and the secretary will be held accountable for the performance of the department, so substantial discretion should be delegated to department heads. Cabinet secretaries also suspect that the White House OPP is more concerned with repaying political debts than with the quality of subcabinet appointments. Chase Untermeyer, President Bush’s director of presidential personnel, voiced the White House perspective when he suggested that the president introduce his assistant for presidential personnel to his newly appointed cabinet secretaries as someone who has “my complete confidence,” someone who “has been with me many years and knows the people who,…while you were in your condo in Palm Beach during the New Hampshire primary,…helped me get elected so you could become a cabinet secretary.” And, the president should conclude, he will depend on his assistant “to help me see that those people who helped us all get there are properly rewarded.” The perspective of the cabinet secretary was expressed by Frank Carlucci, secretary of defense in the Reagan administration, whose advice to newly appointed cabinet secretaries was, “Spend most of your time at the outset focusing on the personnel system. Get your appointees in place, have your own political personnel person, because the first clash you will have is with the White House personnel office. And I don’t care whether it is a Republican or a Democrat…if you don’t get your own people in place, you are going to end up being a one-armed paper hanger.” What the White House sees as a presidential prerogative and opportunity to reward loyal supporters of the president, the cabinet secretary sees as a chance to mold a management team. The OPP has to strike the right balance for each president. Serving Presidential Nominees While OPP’s most important duties are to the nation and the president, it also has obligations to the individual Americans who want to serve their country. U.S. citizens have a venerable tradition of serving in government for a few years and then returning to private life. The practice brings in people with new ideas and much energy to participate in governing their country. Many of these idealistic Americans, however, have recently had less than inspiring experiences with their nominations to high office. When past and present presidential appointees were asked their general impressions of the nomination and confirmation process, 71 percent thought the process was “fair,” but many also had negative reactions. Twenty-three percent found it “embarrassing”; 40 percent “confusing”; and 47 percent a “necessary evil.” Most nominees began by seeing public service as an honor, but were later put off by the intrusiveness of the process in delving into their personal finances, the investigations into their backgrounds, and the time it takes to be confirmed. 
Becoming a presidential appointee necessitates collecting much information for financial disclosure forms. Of appointees who served between 1984 and 1999, 32 percent found gathering the information difficult or very difficult (compared with 17 percent of appointees from 1964 to 1984). Completing the financial disclosure forms was so complicated that 25 percent of appointees had to spend between $1,000 and $10,000 for outside expert advice; 6 percent had to spend more than $10,000. From surveys of past appointees, it is clear that the nomination and appointment process has room for improvement. Many problems cited by respondents, however, are not hard to alleviate. One theme that came through clearly is that, once contacted by the OPP, many potential nominees felt that they had been abandoned without sufficient information about how the process would unfold. Chase Untermeyer pointed out "the sad truth" that "often nominees feel abandoned." He noted how important it is for a nominee "to have somebody holding his or her hand in getting through the process." The OPP should allocate sufficient personnel to keeping nominees informed of the status of their nominations and helping them through the difficult aspects of disclosure forms and Senate confirmation. Thus the dilemma of the modern White House personnel operation. The OPP is far larger and more professional than ever before, yet the number of appointees under its purview has grown so huge and the appointment process itself so procedurally thick and politically vexing that the office is often pushed to or beyond its limits in meeting the needs of the president, the appointees, and the country it is expected to serve. |
90e7e66e1132bca17da0262ff1145205 | https://www.brookings.edu/articles/refreshing-european-energy-security-policy-how-the-u-s-can-help/ | Refreshing European Energy Security Policy: How the U.S. Can Help | Refreshing European Energy Security Policy: How the U.S. Can Help The U.S. can help Central and Eastern Europe and Ukraine by refreshing its European energy security policy. The current crisis validates America's long-term policy goal of diversifying Europe's energy supply to diminish Russia's ability to use energy as a coercive tool against its neighbors. While much progress has been made, with bipartisan support, Russia still dominates Central and Eastern European (CEE) natural gas supply, and recent events call for refocusing our efforts. Stiffening the EU's spine to create a truly competitive internal energy market, promoting the efforts of the International Monetary Fund (IMF) on internal market reform in CEE countries, supporting indigenous gas production and taking steps to build a reliable energy bridge to Europe through U.S. exports should be the cornerstones of U.S. policy. While it is no panacea, respected U.S. energy experts have been too quick to dismiss a linkage between Europe's (and Ukraine's) energy insecurity and the utility of expediting U.S. hydrocarbon exports. Indeed, a clear signal that liquefied natural gas (LNG) exports to European allies are "deemed to be in the national interest"[1] would have an immediate impact on Russia's market power and help accelerate the build-out of gas transportation infrastructure in Europe. The U.S. has already caused Russia to renegotiate current gas contracts and discount renewed contracts due to the displacement of LNG flows once meant for our markets. An immediate signal that future U.S. LNG exports will be available to Europe will send a message to Russia that within a few years, despite its current ability to pressure Ukraine and other nations once part of the USSR, this will no longer be possible. Expectations of future supply will impact price expectations and infrastructure investment decisions made today. Ukraine's future energy security lies in greater reverse flows of gas from Europe and well-managed gas storage in Ukraine. To the extent that the firm promise of U.S. LNG exports in the 2017-2022 period sustains lower LNG prices and helps finance new interconnections from LNG import terminals on the continent, Ukraine will benefit indirectly as well. U.S. policy has promoted redundant infrastructure as a cornerstone of energy security policy since the 1990s. Long-gestating projects like the Baku-Tbilisi-Ceyhan (BTC) pipeline and the Southern Corridor were fundamental goals. Enhancing pipeline interconnections to move gas freely across the continent, internal pricing and efficiency reform and development of clean energy alternatives were the additional core elements. Much progress has been made over two decades. BTC is operational, Azeri gas flows to Turkey and there will be a Southern Gas Corridor (although not as ambitious as the Nabucco project), which will bring gas to Turkey, Italy, Greece and Albania. Norway is providing competitively priced gas to the continent. The European Union's Third Energy Package[2] has eliminated destination clauses (allowing free sale of gas) and should advance internal reforms.
EU competition policy should restrict Russia's ability to monopolize midstream and downstream energy infrastructure as well as gas supply, including barring Russia from completing the South Stream pipeline intended to allow Russia's gas flows to bypass Ukraine. The U.S. shale gas boom, as noted above, has had the greatest impact on the competitiveness of the European gas market by creating a glut of LNG supply that has opened a spot market and driven down long-term contract prices. But much remains to be done. The important work ahead depends more on policy reform than pipelines. EU Natural Gas Market Reform EU gas market integration efforts are highly incomplete. As my colleague Pierre Noel has incisively noted: "What emerges in Europe is a patchwork of tightly regulated, interconnected national gas systems governed by ever more detailed and complex rules that Brussels then wants to harmonize."[3] The commoditization of gas, and the price pressure of spot pricing, has not yet reached the CEE states. While Russia may have renegotiated some of its contracts with Western European buyers to reflect lower gas prices, it remains the dominant gas supplier in CEE. Although EU rules ostensibly prevent destination clauses, "the rules governing the Yamal-Europe pipeline, the pipeline across Romania and Bulgaria into Greece and Turkey and the Trans Austria Pipeline" essentially amount to preventing the resale of Russian gas by Western European importers.[4][5] Natural gas cannot yet flow freely across the European pipeline system, even from Spain to France.[6] True point-to-point gas sales from LNG importing terminals from West to East should be a priority goal. Internal Energy Market Reform The price of natural gas in many CEE countries remains below the cost of import. Domestic pricing formulas (weighting Russian and domestic gas) ironically favor higher-priced Russian gas over potential imported gas. While intended to provide price relief to domestic economies, this policy inhibits both imports of gas from the West, and the economic viability of new infrastructure that would rely on imported LNG.[7] Price reform must therefore precede new infrastructure. The U.S., EU and IMF have important roles to play on this agenda. New LNG Terminals and Pipeline Interconnection Infrastructure. EU market reform and CEE price reform will provide the price signals needed to flow LNG into CEE states. These can come both from new (closer-to-market) LNG terminals and from enhanced interconnection. Accurate price signals can also make heretofore-uneconomic projects viable. Fortunately, there are both LNG import projects [see Table 1] afoot in Europe and interconnections planned [see Table 2] that could advance this vision. While some of these projects may still be far afield and face considerable challenges, they demonstrate the level of commitment CEE nations show towards ensuring diversity of supply. These LNG import terminals are planned to be operational between 2016 and 2020, the same timeframe in which U.S. LNG exports would begin to flow. While policy reform will play the greatest role in enhancing CEE energy security, unfettered exports of U.S. LNG to Europe could play a powerful role in advancing this market – with immediate impacts on Russia's market power and market share. Skeptics make four errors in dismissing this connection: 1) assuming most U.S. LNG exports will go to Asia, 2) assuming the post-2016 delivery time for U.S.
LNG will not impact price formation today, 3) underestimating the importance of securing Henry Hub-based LNG supply for financing European infrastructure projects and 4) failing to see the immediate strategic importance of degrading Russia's future share of the European gas market. LNG Exports to Europe. Multiple skeptics assume U.S. LNG exports will go to Asia and not Europe because price differentials are higher. This is simplistic. Companies like Poland's PGNiG are currently trying to renegotiate high-priced, oil-linked contracts with suppliers like Qatar, and would value alternative supplies on the market that would lower global prices. Given the opportunity to buy U.S. LNG, European buyers (BG, Spain's Gas Natural, France's Total, and UK-based Centrica) have contracted 65 percent of the supply of the only fully licensed U.S. LNG export project (Cheniere's Sabine Pass).[10] Asian demand may be impacted by a larger share of nuclear power generation than expected and Asian buyers are pushing hard to erode oil-linked pricing as well. Meanwhile, the governments of CEE nations are using diplomatic channels to make it clear that they see imports of U.S. gas as a vital component of their energy diversification strategies.[11] Purchasers weigh price heavily of course, but they also weigh the diversity of supply source, and the likelihood of timely project completion.[12] Price Formation. Prices for long-term gas supply are based on future expectations. This is why Russia is now offering discounts on renewals of its gas supply contracts to Europe. Energy is a business of long lead times, where the marginal barrel (or cargo) sets the price. Every decision, from investing in oil or gas production, to building a power plant, or financing an LNG import or export terminal is based on future price expectations. Multiple studies, including those conducted by the U.S. Department of Energy, have concluded that U.S. natural gas supply will rise to meet new LNG export demand.[13] It defies logic to believe that creating more certainty with respect to U.S. LNG exports will do anything other than increase supply and add price pressure to oil-linked gas suppliers like Russia. Europe enjoys the price benefits of a short-term glut in LNG supply today, but allowing U.S. Henry Hub-priced exports may be the only way to sustain this price pressure over the long term. Financing New Infrastructure. Analysts rightly point out that commercial parties, not governments, decide whether to build pipelines or LNG plants. Securing reliable supply, however, is indispensable to those transactions. (Governments can also make these transactions easier by providing credit support or harder by delaying permitting). Making U.S. LNG supply available, priced competitively against Russian piped gas, or oil-linked Qatar gas, will add significantly to the ability to finance these projects. Russia may indeed compete for market share by offering short-term discounts to some customers, but enabling buyers to access other sources is the entire point of the policy. Our litmus test (and time horizon) for helping Ukraine, Poland, Lithuania, Latvia and others highly dependent on Russian gas should extend beyond how we help them next week to how we help them next year and beyond. Degrading Russia's Market Share. Skeptics cite the delay in actual delivery of LNG supplies until post-2016 as a reason to dismiss the utility of unfettering LNG exports now. This is short-sighted at best.
American policy concerning energy security for Europe has been built almost entirely on pursuing long-term projects to create diversity of supply: Baku-Tbilisi-Ceyhan and the Southern Gas Corridor, to name two. Commercial entities that buy LNG have already locked up their supply through 2016 or so; they are shopping today for supplies after that date. The best way to limit the attractiveness of investment in Russia’s upstream, or its revenue stream over the medium term, is to enable other suppliers to capture market share and erode the oil/gas price linkage so critical to Russia’s income stream. In its study on the global impacts of U.S. LNG exports, Deloitte found that moderate exports of U.S. LNG, 6 billion cubic feet per day, could result in a wealth transfer of up to four billion dollars from Russia to European consumers, simply through reduced contract prices and market share.[14] (Allowing U.S. LNG exports to Asia also damages Russia’s future market, but countries on Russia’s periphery have fewer choices of supply than Asian buyers do). If U.S. LNG exports were deemed to be in the national interest now, the potentiality of that supply would be scored by every banker – and every public shareholder of Russian companies. The focus of U.S.-European energy security policy must move from pipeline promotion to policy reform. The progress Western Europe has made on diversity of supply and competitive pricing has not reached CEE states largely due to their own policies and incomplete EU market reform. But U.S. policy can help move this progress eastward through diplomacy, and through a clear and unequivocal signal that U.S. LNG is available for export to these markets in a useful time frame. A business as usual approach will further entrench the current energy situation, in which Western Europe, where most countries have access to diverse sources of LNG, will continue to be distanced from the reality of Central and Eastern Europe, where Russian dependence persists. We can urge others – the EU, the IMF and host countries – to do their part to create a competitive market for gas and an investment climate commercial parties will want to invest in. But providing LNG supply to Europe is a lever we control and one we should exercise. If the U.S. deemed sales of LNG to Central and East European countries (or NATO allies or countries cooperating on Iran sanctions) to be in the national interest, there would be a race to lock up those supplies. Every molecule a European buys from the U.S. is a molecule subtracted from another supplier’s gas market- and in Europe that supplier is most likely to be Russia, given its large market share. Russia’s pricing power declines today, as does its negotiating power for future supply – and investment. The current process makes it highly uncertain whether or when U.S. LNG export project developers can deliver gas to European customers and the pace of the U.S. Department of Energy approvals risks delaying project approval and construction for too long. A Brookings LNG task force suggested that the optimal U.S. policy would be to let the market decide where exports should go, and to neither promote nor restrict them. I share that view. But as the current environment makes the optimal policy unavailable, we should consider unfettering our future LNG exports to Europe as a first step. Many influential voices from both sides of the partisan aisle have contributed to this discussion in favor of supporting U.S. 
LNG exports in order to reduce Russia's dominance in European energy markets, including Larry Summers, former adviser to President Obama, and former Secretary of State Condoleezza Rice. The geopolitical imperative is clear. Both Republicans and Democrats in the U.S. Congress have introduced numerous pieces of legislation that would authorize exports of U.S. LNG to allies, be they NATO or WTO members. We have spent nearly two decades of intense diplomacy trying to diversify Europe's energy supply by getting Azerbaijan, Kazakhstan, Turkmenistan and even Iraq to sell Europe energy. Baku-Tbilisi-Ceyhan. Nabucco. The Southern Corridor. The Trans-Caspian Gas Pipeline. We finally have a tool at our disposal that can provide direct relief to Europe over time, and accelerate the competitiveness of that market today. We want everyone else to help. Shouldn't we? |
4032053c23a6d9db970d3d2c211f5fd5 | https://www.brookings.edu/articles/remembering-iraqs-displaced/ | Remembering Iraq’s Displaced | Remembering Iraq’s Displaced Looking at the past 10 years of Iraq’s history through the lens of displacement reveals a complex — and sobering — reality. Before the U.S. invasion of Iraq in March 2003, humanitarian agencies prepared for a massive outpouring of Iraqi refugees. But this didn’t happen. Instead a much more dynamic and complex form of displacement occurred. First, some 500,000 Iraqi refugees and internally displaced persons (IDPs) who had been displaced by the Saddam Hussein regime returned to their places of origin. Then, in the 2003 to 2006 period, more than a million Iraqis were displaced as sectarian militias battled for control of specific neighborhoods. In February 2006, the bombing of the Al-Askaria Mosque and its violent aftermath ratcheted the numbers of IDPs up to a staggering 2.7 million. In a period of about a year, five percent of Iraq’s total population fled their homes and settled elsewhere in Iraq while an additional 2 million or so fled the country entirely. It is important to underscore that this displacement was not just a by-product of the conflict, but rather the result of deliberate policies of sectarian cleansing by armed militias. The internally displaced were the most vulnerable — and perhaps the clearest sign of the success of sectarian cleansing as entire neighborhoods were transformed. Sunnis and Shiites alike moved from mixed communities to ones where their sect was the majority. And while the displacement of Sunnis and Shiites was massive, proportionately the displacement of religious minorities was even more sweeping in effect. Those who couldn’t find shelter with families or friends, or without the resources to rent lodging, occupied public buildings and built informal settlements (slums) on the outskirts of Baghdad and throughout the country. Hundreds of thousands of Iraqi IDPs lived — and continue to live — in these informal settlements where living conditions are harsh and the threat of eviction is constant. This large-scale internal displacement also increased the pressure on the Iraqi government to provide basic services such as health, education, sanitation, electricity, food, and shelter. As of September 2012, Iraq’s Ministry of Displacement and Migration (MoDM) reported that there were still over 1.3 million IDPs. (However, if the earlier figures of 2.7 million were correct, one wonders what has happened to the other 1.4 million. Have they all truly integrated into their new communities or moved elsewhere in the country, or simply slipped further under the radar screen?) One of the few international agencies still monitoring displacement in Iraq, the International Organization for Migration, reports that few of today’s IDPs expect to ever return to their homes. In fact, the percentage of those expressing a wish to return to their homes has dropped from 45 percent in 2006 to six percent in 2012, mostly because of the lack of security. And the sectarian dimension remains alive and well. Provincial political leaders view potential returns of IDPs through a sectarian lens, seeing returns of particular groups in terms of their impact on the communitarian makeup of their province and the balance of power between different communities. 
For those who do want to return to their homes, getting their property back is an extremely complex and bureaucratic process that will, in the best of cases, take years. The Iraqi MoDM wants to "close the displacement file" by finding solutions for those displaced and has offered cash enticements to encourage people to return to their communities. But finding durable solutions for IDPs isn't so easy in Iraq, particularly given the difficult economic conditions. As the former Representative of the Secretary-General on the Human Rights of IDPs, Walter Kaelin, said two years ago, resolving displacement in Iraq is a political imperative, a development challenge, and a vital issue for reconciliation and peacebuilding. While IDPs face difficult and uncertain living conditions inside Iraq, Iraqi refugees seeking safety in neighboring countries have faced their own vulnerabilities. With the exception of Palestinian Iraqis, the Iraqis who fled to neighboring countries have not lived in camps, but are dispersed within communities. This has made it difficult to accurately estimate their numbers, assess their needs, and deliver assistance. The Syrian government estimated that a million Iraqis had crossed into its territory and Jordan reported that it was hosting half a million Iraqis. However, the number registered with the United Nations High Commissioner for Refugees (UNHCR) and receiving assistance was far lower. Host governments have been generous in allowing the Iraqis to enter their countries but those policies have been ambiguous and the Iraqis have never had formal refugee status. (None of the governments hosting large numbers of Iraqis is a signatory to the 1951 U.N. Convention on Refugees.) Some of the Iraqis are legal residents. In some countries, they are registered but not allowed to work. Many Iraqis have gone back and forth to Iraq in circular migration patterns — for example, to check on property or collect pensions. The latest figures, based on government estimates, are that there are 1,428,308 refugees of Iraqi origin in Jordan and Syria of whom only 135,000 receive assistance from the UNHCR. Since the numbers peaked in 2009, some Iraqis have returned to Iraq. According to the UNHCR, an estimated 550,000 Iraqis returned to the country between 2008 and 2011, but most weren't able to return to their homes and instead joined the ranks of IDPs. And some Iraqis have been resettled outside the region: more than 85,000 Iraqi refugees over the past decade — 72 percent of whom have gone to the United States. Surprisingly, more than 3,000 Iraqis were resettled out of Syria last year — a testament to the courageous UNHCR staff in Damascus and to the desperation of Iraqis wanting to escape the conflict in Syria. Refugee resettlement has worked, but it has been a lengthy and bureaucratic process; in some cases the enhanced security procedures have led to delays stretching for years. Today Iraqi refugees throughout the region face dwindling donor support, particularly as the needs of Syrian refugees increase. For the hundreds of thousands of Iraqis who remain in Syria, the situation is particularly dire. Some have been displaced within Syria. Some Iraqis have moved to other countries in the region (though they have faced an uncertain welcome from governments facing new inflows of Syrians). Many — perhaps 100,000 — Iraqis have chosen to return to Iraq in the past year (though given the violence in Syria, it is hard to see this as a voluntary decision).
Those that have returned to Iraq have either congregated in a hastily-constructed camp along the Iraq-Syrian border (which has often been closed) or have simply become IDPs. Most of those who fled from their homes in Iraq — whether because of the atrocities of the Hussein regime or the violence of sectarian conflict — left their homes quickly. The journeys to other Iraqi towns or across borders to neighboring countries took hours or days, or in some cases, a few weeks. Many expected that the displacement was temporary and when things settled down, they would return. It’s now been 10 years — six years since the mass displacement triggered by the February 2006 bombing — and solutions, safe and lasting solutions, appear as distant as ever. And there is little international pressure or attention on the Iraqi refugees and IDPs anymore. Perhaps 3 million people — 10 percent of Iraq’s population — remain displaced. And forgotten. |
ef1892591e6823a5f39123b6cc2db032 | https://www.brookings.edu/articles/robots-at-war-the-new-battlefield/?shared=email&msg=fail | Robots at War: The New Battlefield | Robots at War: The New Battlefield There was little to warn of the danger ahead. The Iraqi insurgent had laid his ambush with great cunning. Hidden along the side of the road, the bomb looked like any other piece of trash. American soldiers call these jury-rigged bombs IEDs, official shorthand for improvised explosive devices. The unit hunting for the bomb was an explosive ordnance disposal (EOD) team, the sharp end of the spear in the effort to suppress roadside bombings. By 2006, about 2,500 of these attacks were occurring a month, and they were the leading cause of casualties among U.S. troops as well as Iraqi civilians. In a typical tour in Iraq, each EOD team would go on more than 600 calls, defusing or safely exploding about two devices a day. Perhaps the most telling sign of how critical the teams’ work was to the American war effort is that insurgents began offering a rumored $50,000 bounty for killing an EOD soldier. Unfortunately, this particular IED call would not end well. By the time the soldier was close enough to see the telltale wires protruding from the bomb, it was too late. There was no time to defuse the bomb or to escape. The IED erupted in a wave of flame. Depending on how much explosive has been packed into an IED, a soldier must be as far as 50 yards away to escape death and as far as a half-mile away to escape injury from bomb fragments. Even if a person is not hit, the pressure from the blast by itself can break bones. This soldier, though, had been right on top of the bomb. As the flames and debris cleared, the rest of the team advanced. They found little left of their teammate. Hearts in their throats, they loaded the remains onto a helicopter, which took them back to the team’s base camp near Baghdad International Airport. That night, the team’s commander, a Navy chief petty officer, did his sad duty and wrote home about the incident. The effect of this explosion had been particularly tough on his unit. They had lost their most fearless and technically savvy soldier. More important, they had lost a valued member of the team, a soldier who had saved the others’ lives many times over. The soldier had always taken the most dangerous roles, willing to go first to scout for IEDs and ambushes. Yet the other soldiers in the unit had never once heard a complaint. In his condolences, the chief noted the soldier’s bravery and sacrifice. He apologized for his inability to change what had happened. But he also expressed his thanks and talked up the silver lining he took away from the loss. At least, he wrote, “when a robot dies, you don’t have to write a letter to its mother.” The “soldier” in this case was a 42-pound robot called a PackBot. About the size of a lawn mower, the PackBot mounts all sorts of cameras and sensors, as well as a nimble arm with four joints. It moves using four “flippers.” These are tiny treads that can also rotate on an axis, allowing the robot not only to roll forward and backward using the treads as a tank would, but also to flip its tracks up and down (almost like a seal moving) to climb stairs, rumble over rocks, squeeze down twisting tunnels, and even swim underwater. The cost to the United States of this “death” was $150,000. Read the full article at The Wilson Quarterly » Learn more about Singer’s book, Wired for War: The Robotics Revolution and Conflict in the 21st Century » |
76564ba4b5540299ea20c727379c2fd9 | https://www.brookings.edu/articles/secular-stagnation-even-truer-today-larry-summers-says/ | ‘Secular stagnation’ even truer today, Larry Summers says | ‘Secular stagnation’ even truer today, Larry Summers says Larry Summers is doubling down on his secular-stagnation hypothesis. The Harvard economist and former Treasury secretary first offered the bleak diagnosis in November 2013 at an International Monetary Fund conference. The U.S. and much of the rest of the world was suffering from a chronic shortage of demand and profitable investment opportunities, he argued. There wasn’t any interest rate that would produce healthy growth (given that rates can’t go much below zero). At a recent academic conference at the Federal Reserve Bank of San Francisco, I asked Mr. Summers how his secular stagnation hypothesis looks today, three and half years after he inserted a Depression-era phrase into today’s debate about the economic outlook. Many economists have had their doubts about his gloomy hypothesis, and not all has gone wrong with the U.S. economy. Unemployment, for example, has fallen to 4.4% from 7.2% in 2013, leading to a rise in wages. But Mr. Summers says he has been vindicated by slow economic growth, low inflation and low interest rates, which many forecasters now expect to persist. Today, he is more convinced than ever that secular stagnation is the defining economic problem of our time—one that won’t be easily defeated as long as fiscal authorities are overly preoccupied with debt and central bankers are overly focused on keeping inflation at low levels. Here are edited excerpts of Mr. Summers’s observations from our exchange. With hindsight, his gloomy 2013 view looks better than the consensus 2013 view. “When I made my comments in 2013 at the IMF they were couched with very substantial doubts. Today I would have fewer. The essence of my argument then was that because of a variety of structural factors the neutral rate of interest was much lower than it had been and, therefore, getting to an adequately low rate was going to be more difficult. And that was going to act as a constraint on aggregate demand much more of the time than people thought. Relative to the prevailing forecasts at the time that I spoke, interest rates have been very substantially lower. Growth has been very substantially lower. Inflation has been very substantially lower for the industrialized world. Fiscal policy has been more expansionary. So the broad argument that I was making at that time seems more true today.” The decline in the U.S. unemployment rate doesn’t disprove his hypothesis. “Nobody ever said that the economy was always going to be permanently in a state of deflation. If you go back to the Alvin Hansen [who coined the secular stagnation phrase in 1939 ], he talked about weak recovery. So here we are. We’ve managed to get to 2% growth, not much inflation pressure, 4% unemployment and in order to be there, we’ve got a fed-funds rate eight years into a recovery of 1%. I read that as, on net, something substantial has happened relative to what anybody expected rather than nothing important happened.” Economists are no longer arguing about whether the neutral rate of interest has fallen, but instead are wondering why. “We now have a kind of embarrassing overabundance of explanations for the decline in the neutral rate [the interest rate that will prevail when the economy is at full employment and price stability]. 
You got smart thoughtful people who think that demography is 75% or 80%. You've got smart thoughtful people who think increased risk aversion and a shortage of safe assets is 75% or 80% of it. You've got smart people who think widening inequality and a higher propensity to save is half of it. You've got smart people quantifying sludged-up financial intermediation as explaining a significant part of it. You've got careful thinking about declining capital-goods prices explaining a significant part of it. You've got people thinking about corporate savings and rising profitability as explaining a significant part of it. And nobody has come forward with strong a priori arguments for why the real rate should have increased." This trend has its roots in developments that preceded the Great Recession. "One thing you should pay attention to is the yield on 10-year TIPS [Treasury Inflation-Protected Securities] because I think the interesting part is not the short-run dynamics but averaging over the cycle. If you look at the real interest rate decade by decade, it's been going down for five decades." All this strengthens the case for more public investment. "I would be trying to raise R-star [another term for the neutral rate] so I would be wanting to operate with a different fiscal-monetary mix. Even though some of the things the Trump administration is doing are giving it a bad name, the basic impulse that increased business confidence that raises the propensity to invest is a good thing. So, first, more public investment I think is a good thing." When pressed, Mr. Summers acknowledges a few vulnerabilities in the secular-stagnation view. "I always try to phrase this carefully with words like 'the foreseeable future,' because I know that if you'd asked me in 2003 was liquidity-trap economics going to be central to understanding the American economy in the rest of my professional lifetime, I would have said overwhelmingly likely no, and I would have been wrong. So could a whole different configuration with a whole different set of issues prove to be important 10 years from now? Yeah, that could certainly happen. "I don't think we're as straight as we'd like to be on the global aspects of this. One of the arguments that I've made is that we had the mother of all housing bubbles, we had a vast erosion of credit standards, we had really easy money, we had the Bush tax cuts plus the Iraq war, and all that got us in the precrisis period was adequate growth. Doesn't that show that there's some kind of secular stagnation that you needed all that extraordinary stuff to get to adequate plus growth? That's an argument I've made. It's made a little more awkward by the very large current-account deficit we had in much of that period, which suggests that maybe there was stronger demand, and it was just falling outside the United States. I'm comfortable with my overall view, because I think this is best framed as an issue of the industrial world, but I think that this is a weakness of the line of argument that I've taken. "There is some evidence related to hysteresis for what I called a Reverse Say's Law—lack of demand creates its own lack of supply down the road in terms of productivity growth.
But if one was attempting to synthesize everything rather than to push a very important aspect of a phenomenon that had received too little attention, I think integrating exogenous developments on the supply side would be worthwhile." "Some people would say it's really all the supply side, and you're all about the demand side. We've had a big productivity slowdown and isn't that the right thing for you to think about, and I think there's obviously something to that. The point I'd make about that is that, in general, we have a way of telling the difference between supply shocks and demand shocks, which is that supply shocks raise prices and demand shocks lower prices. The general tendency to low inflation coincident with low quantity guides you a little more towards the demand-shock view therefore than the supply-shock view." |
68ee4524e5bdb152a1e886d795d0a93a | https://www.brookings.edu/articles/should-we-treat-domestic-terrorists-the-way-we-treat-isis-what-works-and-what-doesnt/ | Should we treat domestic terrorists the way we treat ISIS?: What works—and what doesn’t | Should we treat domestic terrorists the way we treat ISIS?: What works—and what doesn’t The mass shooting in Las Vegas on Sunday night has again raised fears about terrorism. There’s much we don’t yet know. The Islamic State (ISIS) has claimed the attack, but the FBI claims that there is no international terrorism link. The attacker, Stephen Paddock, was 64 years old and white, fitting a stereotype of a right-wing terrorist more than a jihadist one. And he may just be a crazy nut. But regardless of Paddock’s particular pathology, the situation highlights how the United States treats similar forms of violence differently depending on the nature of the perpetrator. Almost two months before, on August 12, 2017, James Alex Fields Jr. drove his car into a crowd of peaceful demonstrators in Charlottesville, killing Heather Heyer, one of the demonstrators. Heyer’s death came after a day of demonstrations in which armed neo-Nazis and Klansmen, including Fields, who bore the symbols of Vanguard America (a white supremacist, neo-Nazi organization), marched ostensibly to protest the removal of a statue of Confederate Civil War hero Robert E. Lee but really to trumpet their hateful ideas. Fields’ act was treated as the crime it was: he was charged with second-degree murder and hit-and-run, along with several crimes related to the injuries of other victims. Yet given the political nature of the violence, and given the power of terrorism as a label, many have called for treating Heyer’s death as terrorism. Indeed, Fields’ use of a car to drive through a crowd resembles nothing more than the vehicle attacks that we’ve seen in Barcelona, Berlin, London, Nice, and other cities in the past two years. In turn, U.S. Attorney General Jeff Sessions labeled Field’s attack “domestic terrorism,” which federal law defines as trying to intimidate a civilian population or affect government policy through violence in an area of U.S. territorial jurisdiction. Fields’ attack is not the first time right-wing violence has raised questions about the terrorism label. After the 2015 attacks on a Planned Parenthood Clinic and a black church in Charleston, demands to treat right-wing violence as terrorism were loud. A December 2015 YouGov poll found that 52 percent of Americans thought the Planned Parenthood attack was terrorism, and civil rights activists widely criticized the fact that few media outlets were calling Dylann Roof, the perpetrator of the Charleston massacre, a terrorist. Words matter, but deeds matter more. It is all well and good to label left- and right-wing violence at home as terrorism, but what if the U.S. government went beyond rhetoric and truly treated these groups as it treats Americans suspected of being involved with jihadist organizations like ISIS? The differences would be profound. Not only would the resources that law enforcement devotes to nonjihadist groups soar, but so too would the means of countering those groups. The legal toolkit would grow dramatically. Perhaps as important would be the indirect effects: banks, Internet companies, and other organizations vital to any group’s success would shy away from anything smacking of domestic terrorism. 
Nonviolent groups that share some of the radicals’ agenda would also face pressure, and many would feel compelled to change, often in ways that go against U.S. ideals of free speech and free assembly. Taken to its logical conclusion, this thought experiment makes clear that treating domestic extremism just like foreign terrorism would be a mistake, but moving a bit in that direction would be desirable. Federal law enforcement in the United States should have the legal authority to take on more responsibility for addressing domestic terrorism. However, given the power of many terrorism-related laws and the political connotations, the terrorism label should be used sparingly, and the new authorities should be tightly defined and monitored. Independent of greater federal legal authority, the resources allocated to countering domestic terrorism in general and right-wing violence in particular should increase given the danger these groups pose. Counterterrorism and law enforcement officials should also focus on the violent elements within radical groups, using the law to move them away from the line that separates legitimate (if deplorable) protest and violence. In addition, officials should use the law more aggressively to stop potentially violent situations, such as when armed protestors show up at a legitimate march. Finally, legitimate mainstream groups that share part of the radicals’ agenda have a responsibility to report suspects and otherwise police their ranks—just as the United States expects other communities, like American Muslims, to do. Although many definitions of terrorism exist, the most prominent make little differentiation between violent individuals like Fields and jihadists who pledge their allegiance to ISIS. However, both the political baggage that comes with the terrorism label and American public perceptions make it far easier to use the t-word when discussing foreign groups than when describing domestic ones. Bruce Hoffman, one of the world’s foremost experts on terrorism, defines terrorism as organized political violence (or the threat of it) by a substate group with a goal of having a broader psychological impact. U.S. government definitions follow a similar logic. The State Department contends that terrorism is “the unlawful use of force and violence against persons or property to intimidate or coerce a government, the civilian population, or any segment thereof, in furtherance of political or social objectives.” Whether aiming to kill non-Muslims or non-whites is not relevant—it is the nature of the act, not the particular agenda behind it, that matters. With images of 9/11 dominating the thinking on terrorism, some might dismiss white supremacist and like-minded violence as a lesser threat unworthy of the terrorism label, but in post-9/11 United States, non-jihadist terrorists are as dangerous as jihadists. Although the right-wing category encompasses many disparate groups—including white nationalists, neo-Nazis, neo-Confederates, Sovereign Citizens, anti-abortion zealots, anti-Muslim and anti-immigrant groups, Christian Identity believers, and related fanatics—they collectively form a serious threat to Americans. Since 9/11, right-wing groups killed 68 people compared to 95 for jihadist groups; the jihadist figure would be more than halved (and lower than right-wing murders) if the 2015 shooting of 49 people at a gay night club in Orlando were excluded. 
The year 2016 was the bloodiest year for domestic extremists since 1995, when the right-wing radical Timothy McVeigh killed 168 people by bombing a federal building in Oklahoma City. Since Donald Trump's election as president, right-wing violence has soared. In the 34 days following the vote, the Southern Poverty Law Center documented 1,094 bias-related incidents, and 37 percent of incidents involved Trump or his campaign slogans. (If the Las Vegas attack is determined to be the work of a terrorist group, that will have a big impact on the overall numbers.) What about the political impact of domestic terrorism? We often evoke the days after September 11 as a time of national unity in the face of horror. But left- and especially right-wing violence is driving us apart. On the left, the killing is minimal, but images of Antifa and other violence are used to bolster the perception that Trump's opponents are extreme and committed to stopping legitimate speech and protest. The impact is far greater on the right because of the scope and scale of the violence. Right-wing violence is usually in the name of a cause—racism, anti-abortion, gun rights, opposition to migrants, and so on—that at least some Americans support, even if the more mainstream groups vehemently reject violence. Over 15 percent of Americans believe that abortion should be illegal in all cases, for example. But very few have ever taken up arms in support of that cause. In 2016 polls, meanwhile, most white people believed that anti-white bias was a bigger problem than anti-black bias, despite outcomes for white Americans exceeding those of black Americans on almost every metric. Violent white supremacists thus poke at bigger political wounds than do jihadists, with many Americans sympathizing with the cause but rejecting the killing. That problem is compounded by the fact that Americans are much more comfortable calling some types of political violence "terrorism" than others because the terrorism label carries normative weight. It suggests that both the person's actions and cause are beyond the pale. The definitions and laws above are all about actions, not the ideology behind them, but most non-experts lump the two together. As terrorism analyst Brian Jenkins observed many years ago, "terrorism is what the bad guys do." There are also the legal complications to contend with. Although the United States does have a federal definition of domestic terrorism, domestic terrorism is not an independent federal crime. The U.S. State Department maintains a list of "Foreign Terrorist Organizations," and this list is necessary so law enforcement, businesses, and ordinary citizens can know which groups are illicit even if they agree with the cause as a whole. To treat domestic terrorism like international terrorism, the United States would need a separate "Domestic Terrorist Organization" list, presumably compiled by the Department of Homeland Security and the Federal Bureau of Investigation with input from other agencies. Who is on that list would be tricky, though, as many "radical" activities such as encouraging hateful beliefs are protected free speech. Also, the government would have to disentangle the domestic and foreign lists if there are groups at home that have no real foreign links but are fanboys of foreign groups such as ISIS. Making this more complex, some states do have domestic terrorism laws. 
Alabama, Arizona, and New York have prosecuted ISIS cases as state terror cases because the feds either didn't want to or didn't have enough evidence. Even if the United States developed a federal statute, there is no immediate legal reason to use a terrorism charge unless other legal means do not offer sufficient prosecutorial power, resources, or other advantages. In general, the U.S. system gives the states, not the federal government, responsibility for punishing violent crimes. There is no general federal murder statute, for example. To qualify as federal, the issue must be a national one, requiring cross-state authority and federal resources. For example, when a U.S. citizen abroad might be a victim of terrorism, the FBI investigates. In addition, several laws enable the federal government to play a role in terrorism-like cases without calling it terrorism. The Freedom of Access to Clinic Entrances Act, for example, gives the FBI some responsibility for securing abortion clinics, enabling it to investigate potential threats. Federal hate crime statutes can be invoked against violent white supremacists. These laws can be as tough as anything we'd see if terrorism were made a crime, with some carrying the federal death penalty. Dylann Roof, who murdered nine black churchgoers in Charleston, South Carolina, in 2015, was convicted of 33 federal charges, including hate crimes, but not terrorism. Nevertheless, he is on death row. What if the United States changed the rules anyway? Treating domestic groups the way we do American individuals tied to designated foreign groups would make a profound difference. Consider, notionally, an individual suspected of ties to ISIS and one suspected of ties to the National Socialist Movement, the organization of American Nazis (or, if it suits your predilections, say Antifa). In addition to action against ISIS abroad, the response to the ISIS suspect at home would be characterized by a mix of the following: First, there would be early action and monitoring. The government would move quickly to address the terrorism threat. Depending on the opinion of the Department of Justice and FBI, a confidential informant might be used to befriend the suspect and gather evidence. A gentler approach might be a knock on the door by local law enforcement, working closely with federal officials, to assess the situation, or possibly an attempt to work with relatives and community leaders to move the individual away from violence. Second, there would be far more resources. The anti-ISIS counterterrorism budget is robust. Much of the funding goes to intelligence, including electronic surveillance, human sources, and other means. As of 2016, the United States had spent $6.2 billion in the fight against ISIS, and the Trump administration's budget proposal requested another $2 billion flexible fund for fighting the group. There is no neo-Nazi or Klan proto-state comparable to what ISIS has carved out in Iraq and Syria, and thus no need for expensive air and ground operations or satellite intelligence on suspected facilities. However, the numbers above are roughly the same as the FBI's entire budget, which includes cyber defense, white-collar crime, and other concerns. Of the small slice of the FBI budget that goes to counterterrorism, a tiny portion specifically goes to activities focused on right-wing extremist organizations. 
Indeed, domestic terrorism gets normalized into the FBI’s, DHS’, and the Department of Justice’s regular budgets in part because many of the tools used to address domestic terrorism closely resemble what is done in the name of “ordinary” law enforcement. Third, there would be no tolerance for violence. When the individual seemed to be acquiring firearms, let alone materials for a bomb, law enforcement officials would swoop in. It is hard to imagine concerns about First Amendment rights leading the government to allow a group of armed protestors to march through a town chanting slogans extolling the virtues of shari’a. Fourth, treating domestic extremist violence like ISIS-linked terrorism would open up broad use of the power of the law. The statutes about providing material support for terrorism are incredibly powerful, enabling prosecutors to nab suspected terrorists for even limited support, including simply joining a group (and thus giving one’s own person to the cause). According to the Center on National Security at Fordham Law, in 2016, 80 percent of ISIS-related prosecutions involved a material support charge. Suspected jihadists are often convicted of lesser charges just to get them off the streets. For terrorists without a foreign link, the rules are different. Fewer law enforcement resources coupled with a cautious government response out of fear of violating First and Second Amendment rights offers domestic terrorist groups more space to operate, organize, and preach without heavy surveillance or government interference. The United States does not have an existing infrastructure for preemptive disruption, like with the anti-ISIS campaign, and the law is applied more carefully than it is with regard to suspected jihadists. The FBI and other U.S. government organizations do try to preempt right-wing as well as jihadist attacks at home, of course. For ISIS, however, a few tweets may be enough to get the FBI to act; the bar for non-jihadist organizations is much higher. In short, a change in domestic terrorism policy would result in huge legal and political shifts. If right-wing and the smaller left-wing groups received similar attention as jihadists, the halls of the FBI and DHS would be bursting with new employees. Intelligence penetration of the right-wing community (and any relevant left-wing groups) would soar. This would range from monitoring phones and electronic communications of individuals with possible links to terrorists to planting informers in right-wing radical groups, regular police check-ins with leaders of legitimate right-wing causes to encourage them to provide information on potentially violent members, and trying to “turn” current group members to inform on their comrades. The public social media accounts of potential radicals would also be scrutinized. Affected communities would hate this. Many American Muslims believe that U.S. counterterrorism programs single out Muslims unfairly, and expanding the use of informants and surveillance would increase the numbers under the microscope. Because of the greater intelligence coverage—and lower regard for the privacy of the affected citizens —the amount of information on right-wing groups and individuals would skyrocket. Using link-analysis software, metadata, and greater information gathering, the government would know not only suspected right-wing terrorists, but also their friends, family members, and other parts of their network. 
The amount of analysis of groups, causes, and individuals would also grow exponentially. The expansion would seek to uncover heretofore unknown connections and otherwise anticipate and disrupt problems before they manifest. Informants would play a particularly large role. With suspected jihadists, the FBI often uses undercover agents and informants who claim to be members of a foreign terrorist organization. They engage individuals they feel have radical ideas and then encourage them to take prosecutable actions. A man named Antonio Martinez, for example, posted messages endorsing violent jihad on Facebook. An FBI informant posing as a jihadist then began to interact with Martinez. The agent gave him a fake bomb, which Martinez then tried to set off. Sting operations are used on right-wing groups, but far less frequently. If we treated right-wing radicals as we do jihadists, government officials would scan Facebook for individuals posting messages embracing violence against Jews or African Americans and then provide them with fake bombs in order to prosecute them. Social media sites contain a staggering number of anti-Semitic and racist threats, and the FBI would have its hands full. Given the politicized nature of America today, if an administration focused largely on right- or left-wing groups, it would be accused of suppressing opposition, not fighting terrorism. Past experience is useful. In the early 1990s, the FBI conducted the PATCON (Patriot Conspiracy) program. FBI agents posed as members of a fictional right-wing extremist group and collected information on real like-minded groups by attending conventions and other gatherings as well as reporting on private conversations. Analyst J.M. Berger found that the investigation produced little that could be used for criminal prosecution because it did not meet a high evidentiary bar, but it did provoke paranoia in the groups being monitored, with some members being removed as suspected spies. It's hard to feel sorry for these individuals, but the result—that individuals not guilty of any crime found it harder to exercise freedom of assembly and speech—should trouble everyone. The counterterrorism microscope would also reveal numerous minor but prosecutable offenses not related to terrorism. As investor guru Warren Buffett noted, "If a cop follows you for 500 miles, you're going to get a ticket." Arresting such individuals would send a message that the police were watching. Credit card fraud, drug use, and other minor crimes would serve as justification to arrest and disrupt suspected terrorists and as leverage to convince them to cooperate in other investigations of their associates. Indeed, the government at times does not pursue criminal charges if the offense is deemed minor or resources are needed elsewhere, but this would be far less likely if there were a perceived terrorism link. For example, Virginia allows its residents to openly carry a firearm. However, the law stipulates that a non-Virginian from a state where open carry is illegal cannot carry a firearm—a seemingly obscure technicality. Police, however, did not check to make sure all the marchers in Charlottesville—many of whom were from other states—met this criterion. (Part of the reason was that the protesters were better armed than the police.) If the marchers were perceived as having potential ties to designated terrorist groups, then law enforcement would likely take extra precautions to ensure they abide by the law – and to have more firepower at hand. 
Using such a standard would allow protest, but it would make it harder for at least some participants to do so while armed. Another change would be that government lawyers would use the material support laws to go after domestic terrorists. Under the material support statute, providing money, weapons, or even a person's time to a designated terrorist group is a felony. Civil rights advocates have criticized the scope and breadth of these laws for years. In 2011, David Cole, now the National Legal Director for the ACLU, argued that the laws even create risk for nongovernmental organizations that might advise designated terrorist groups on how to begin peace negotiations to end conflict. Causes linked to domestic terrorist groups would have to eliminate any ambiguity in their actions for fear it could result in their arrest or be used against them in court. For those linked to a group but not convicted of any crime, the consequences could still be dire and should trouble many Americans. In 2011, the FBI declared all Juggalos, the fans of the hip-hop group Insane Clown Posse, which has both violent and nonviolent members, to be a criminal street gang. The designation means that some Juggalos and Juggalettes (female fans) have lost custody battles and jobs, and have otherwise suffered as their employers and the courts treated them as gang members. Juggalos staged several protests against the FBI designation, and many describe its effects as discrimination. Beyond changing the groups and those pursuing them, treating right-wing violence as terrorism would affect the broader community that might sympathize with the cause but not the violence. Doing business with a declared terrorist organization would create legal liability and would damage a firm's reputation, which every financial institution would seek to avoid. A U.S. bank, for example, held up money going from a legitimate U.S. charity to provide supplies for hospitals in Syria in order to ensure that the money did not end up in the hands of terrorists. One can imagine legitimate charities operating in poor areas with a large Klan or Sovereign Citizen presence facing financial problems because banks fear the money could end up in the wrong hands. The government would also try to shut down the virtual presence of suspected terrorists. After the killing in Charlottesville, GoDaddy, a website hosting company, refused to allow the American neo-Nazi news and commentary site The Daily Stormer to use its platform even though GoDaddy had hosted the site since 2013. When its webmasters went to Google for hosting, they were also turned down. Facebook banned links to the site on its platform. Rather than drawing on the law to justify their actions, these companies said that The Daily Stormer violated their terms of service because it could incite violence. Government scrutiny would make far more companies willing to purge themselves of anything that could be linked to terrorism and eager to create service terms that enable them to do so. Although these companies already take steps to mitigate extremism—Facebook recently announced an initiative to use more artificial intelligence to screen content for extremist material—they would have to apply those steps to the specific needs of right- and left-wing extremism if there were a change to the law. Suspected domestic terrorists would be treated similarly to suspected child pornographers: if companies had any doubt, they would shun the group. 
Counter-messaging programs—just say no to jihad—would be far more robust, targeting the community in the hopes of reducing support for radicals within it. The Obama administration funded these poorly, and the Trump administration appears even more skeptical. The Department of Homeland Security gives out $10 million each year in Countering Violent Extremism Grants to organizations that work to develop resilience to extremism. Under the Trump administration, the program has focused almost wholly on jihadism. But similar programs to target non-jihadist domestic extremism would logically follow a change to the laws although, of course, it remains politically easier to focus on jihadists, which both the right and left agree are dangerous, than extremists on either end of the U.S. political spectrum. Nonviolent groups that share part of the radicals’ agenda would also be viewed differently. Many Sovereign Citizens share the NRA’s pro-gun agenda; eco-terrorists’ goals overlap with those of the Sierra Club. Today, such nonviolent political mobilization is not just protected under freedom of speech and freedom of assembly: it is considered vital to democracy, which requires citizens to express their views in the marketplace of ideas and to inform their elected officials. But with a change to the treatment of domestic terrorists, peaceful organizations might become considered “feeders” of the violent organizations. At the very least, they would be suspected of creating a conducive environment to radicalism. Saudi Arabia, for example, funds an array of mosques, textbooks, and extremist preachers for Muslim communities around the world and is often blamed when young men take up arms in the service of these ideas. Domestic nonviolent groups would have an incentive to distance themselves from violent individuals, with the added benefit of helping to make sure that extremists do not slip through the cracks by hiding among a more moderate cohort. This would have a chilling effect on organizations as many would avoid links to individuals who might have links to violence, however defined. Any terrorist seizure of territory, meanwhile, would be viewed as particularly dangerous. When terrorists control territory, they can create training camps that indoctrinate new recruits and provide training on bomb-making, document forgery, assassination, and other dangerous tactics. In addition, controlling territory gives the group legitimacy and undermines the authority of the state. Even small safe houses are dangerous. In 2016, Ammon Bundy and several followers seized control of the headquarters building in the Malheur National Wildlife Refuge in Oregon, claiming that federal land should be turned over to the states to manage. The result was a siege that lasted more than a month, with the participants eventually being arrested—and one killed when he resisted. Under the new system, the radicals would still be given a chance to surrender, and if the police and FBI could arrest them without risk, they would do so. But the situation would not be allowed to persist for days, let alone weeks. In addition, if the danger were deemed high, then taking domestic terrorists out via snipers, drones, or other standoff means would be considered acceptable. Finally, public perception would change. Using the terrorism label carries normative force and would change how these fringe groups are viewed. It would also indirectly shape causes seen as linked to a terrorist group. 
Before 9/11, a handful of Americans favored a government ruled under Islamic law; after 9/11, the number of supporters remains a handful, but fear concerning this issue became a major political issue in some states. In 2017 alone, 13 states have introduced anti-sharia legislation; Texas and Arkansas passed such laws. This public perception would give any government crackdowns legitimacy and, at the same time, put pressure on companies, local police departments, and others if they do not act decisively. This thought exercise shows how profoundly things might change if the terrorism label were applied more broadly—and that’s probably why we should do so cautiously. Many of the current measures to fight domestic terrorists, such as arrests, work well. In addition, most Americans probably don’t want their government to be treating legitimate political movements with suspicion or making banks or Internet companies suppress free speech. U.S. law enforcement, intelligence, and counter-messaging professionals should apply the law aggressively to prevent and disrupt violent activity while leaving individuals espousing the same ideas to protest in peace. History professor and former senior government official Philip Zelikow calls for shutting down groups that seek to create private militias—that is, the ones that are not “well-regulated” by the states. The Charlottesville marchers, for example, are more like an organized rival to the state rather than individuals acting on their own because they involve organizations with membership rolls, leaders, and even uniforms responding to a common call. This approach balances public safety concerns and First Amendment protections. This standard also works well for domestic Islamist groups, even if they champion ideas most Americans find objectionable. If the non-jihadist terrorism problem continues to grow, the United States should also consider having a carefully worded domestic terrorism statute at the federal level along with an associated list of designated organizations. This would enable the federal government to step in more effectively, using its resources and legal power, if a group becomes a greater danger. Many of the measures described above would represent too much of a change, and any legislation should factor in counterterrorism measures we don’t want as well as ones we do. Suspicion of authority is at a high point, and even the perception that the government might abuse new powers could worsen this even more. The language should be tightly worded and subject to regular legislative oversight—the definition of a foreign terrorist organization is broad, and any domestic legislation should focus heavily on the threat or use of violence and be regularly reviewed to ensure that changes in group behavior are reflected. Even without such a legislative change, the government must allocate an appropriate level of funding and manpower to domestic terrorism. The right-wing threat in particular is comparable to that of jihadist violence at home, and similar resources should be allocated to addressing it. The FBI and DHS should create larger offices dedicated to domestic groups and otherwise develop their intelligence presence. Some of the resources used for jihadist violence could be transferred with little loss. Finally, Americans should recognize the responsibilities of nonviolent organizations that contain radical members or cross paths with them. 
Just as Muslim-Americans are a vital source of information on suspected ISIS supporters, so too should other mainstream communities and organizations feel compelled to point out the few troublemakers in their midst. Nonviolent pro-life advocacy groups should, for example, monitor comment threads on forums for indicators of violent activity. We want the line between violence and nonviolence to be bright, but this puts some moral onus on legitimate groups to police themselves rather than shrug off violence as the work of a few bad actors. |
7e792921e6013c410b9462583be9a79c | https://www.brookings.edu/articles/social-policy-in-singapore-a-crucible-of-individual-responsibility/ | Social Policy in Singapore: A Crucible of Individual Responsibility | Social Policy in Singapore: A Crucible of Individual Responsibility This article was first published in Issue 9, June 2011 of ETHOS, a journal of the Civil Service College Singapore. For more information, please visit http://www.cscollege.gov.sg/ethos. An important achievement of the capitalist democracies is the creation of policies and programmes that put a human face on capitalism.[1] To use a word popular in Europe, these nations have found ways to balance capitalism with solidarity. Solidarity is the principle that the people of a nation, often operating through their government, accept some responsibility for helping fellow citizens (and even non-citizens) avoid destitution and enjoy some of the fruits of modern economies. There are substantial differences across the capitalist democracies in both the nature and the impacts of their solidarity programmes, but they all provide public help for the elderly, the unemployed, the sick or disabled, and the destitute.[2] These four groups are at risk of poverty or worse because their ability to work and support themselves and their families is impeded by age, infirmity, or difficulty finding a job. It is not unusual at any given time for 40% or more of the individuals in a capitalist country to fall into one or more of these four work-inhibiting categories.[3] Thus, without solidarity programmes that express the commitment by society to help the troubled, a capitalist nation – even a productive and affluent one – could have high levels of poverty, suffering, and even early death. In addition to public spending on the unfortunate, capitalist nations invest heavily in human capital programmes which help people develop their knowledge and skills to become economically productive and financially independent. The most fundamental and most expensive of these programmes is education at the pre-school, elementary, secondary and post-secondary levels. The specific policies and programmes vary across nations, but education and other human capital programmes are universally regarded as vital to efficiency and economic growth.[4] These programmes also promote the solidarity principle because they offer opportunity for advancement to everyone. Smart and hardworking people regardless of background have many opportunities to get ahead in capitalist democracies. On the other hand, family factors and structural factors in society can be so difficult to overcome that no nation has achieved complete equality of opportunity. Even so, the capitalist democracies have achieved substantial economic mobility, in large part because a significant portion of the cost of education is borne by taxpayers. Thus, due to the productivity of capitalist economies and the aim of citizens and their governments to provide equality of opportunity, many children of poor and low-income families receive educational benefits that their parents could not afford. Recent work by Irwin Garfinkel and his colleagues shows that if all expenditures on social programmes and education are combined, many capitalist democracies in Europe and Scandinavia spend over 35% of their Gross Domestic Product (GDP) on these programmes. 
Even the US, often assailed as a laggard in social spending and solidarity, spends 32%.[5] Clearly, promoting both solidarity and opportunity is a primary goal of the capitalist democracies, and they put their money where their mouth is. In the case of Singapore, which gained independence peacefully in 1965 after over a century of British colonial government and a few years as part of federated Malaysia, three wise policy decisions were made early on that have had continuing influence on Singapore's ethos and social environment. The first was to emphasise education. By 1965, it was already evident that education would be key to a nation's economic progress and wealth.[6] An educated workforce was becoming increasingly important for employment and productivity in trade, finance, technology and manufacturing; education and creativity could also prompt economic innovation. Consequently, the early leaders of newly independent Singapore emphasised public education.[7] Primary education in Singapore is universal and free; both secondary and pre-university education are heavily subsidised and virtually free for low-income families; and undergraduate education is also highly subsidised for students from low-income families. As a result, Singapore is among the world's leaders in educational achievement. Singaporean children's scores on international achievement tests are astonishing. In 2007, for example, on the Trends in International Mathematics and Science Study (TIMSS), Singaporean children scored second of 36 countries on 4th grade math, third of 48 countries on 8th grade math, and first on both 4th and 8th grade science.[8] The US, by comparison, did not score higher than eighth on either test for either grade and finished eleventh on both 4th grade math and 8th grade science. In addition, nearly 97% of Singaporean men and about 93% of Singaporean women are literate (and most of the permanent residents are bilingual), and about a third of the permanent residents have university degrees, a figure that more than doubled over the previous decade.[9] The second fruitful decision by Singapore's early leaders, made over a period of years preceding and following independence, was to build the nation's social policy around pensions, healthcare and housing. Unlike most capitalist nations, Singapore established a pension system in the 1950s based on defined contributions rather than defined benefits. The crucial difference between defined-benefit and defined-contribution plans is their respective allocation of risk. Governments that promise to provide their citizens with defined benefits are at great risk of insufficient long-term financing such that benefits due can exceed contributions owed, which at some point leads to bankruptcy of the entire system and perhaps even of the government. This problem has plagued nearly every government-sponsored defined-benefit plan in the world, primarily because of rapid increases in life expectancy and an unexpected slowdown in population growth.[10] Indeed, many nations have been forced, often under emergency conditions, to refinance their pension system by increasing contributions, reducing benefits, or both.[11] By basing its pension system on defined contributions, the Government of Singapore avoided these problems. Thus, Singaporeans and their employers pay into personal accounts under the Central Provident Fund (CPF); the funds in the account are invested; the remainder of funds in the account can be withdrawn, or used to purchase a life annuity, upon retirement. 
Part of the money paid into the fund is also used to help pay for medical expenses, or as a source of borrowing to finance a home or other approved investments. The Government's roles are to require regular contributions to the account, administer the accounts, make the investment decisions or provide approved opportunities from which participants can select their own investments and, from time to time, contribute excess government funds into the accounts, providing account owners with a kind of windfall bonus. The individual's role is to make contributions, to make decisions about investment of funds in their account (within limits established by the Government), and to withdraw funds only for major purchases such as a home. There are numerous advantages to a defined-contribution pension scheme, which amounts to enforced savings.[12] Establishing its pension system around the central principle of individual ownership is consistent with Singaporean society's emphasis on individual responsibility. Not surprisingly, interviews show that Singaporeans like the fact that they own their own account and do not have to share it with others.[13] Another major advantage of enforced savings is that individuals have a source from which they can borrow at reasonable rates to make major purchases such as homes. Given that health expenses are paid for out of a separate section of the individual accounts, it seems likely that individuals and families are aware of how much their healthcare costs: perhaps the most fundamental aspect of using the market to control medical costs, and something that many other nations have failed to do.[14] The role of the CPF in teaching individual responsibility and self-reliance must be counted as a considerable advantage of Singapore's pension system. The decision to base government-supported pensions on defined contributions 30 or 40 years ago could have made it seem that the Government was erring on the side of a social policy that was too conservative. But today, with government pension systems all over the world in need of cash infusions and with the solvency of entire governments at risk because of flawed pension fund financing, Singapore's decision to base its pension system on defined contributions looks better and better. The third emphasis of Singaporean social policy is housing, now administered by the Housing and Development Board (HDB), which is responsible both for overseeing the construction of public housing and the sale of units to the people of Singapore. The result is that 81% of the population is served by the Government's housing programmes; 79% of households own their own apartments and 2% rent from the Government.[15] A key public priority during the early years of post-independence nation-building was to help the population obtain decent housing and in this way bind them to the Government and to the nation. Whether housing actually achieved this purpose is difficult to measure, but there is no question that this policy created a nation of homeowners[16] and avoided the growth of slums and the incidence of homelessness that plague many other capitalist countries.[17] As a foreign observer, I think any fair reckoning of housing policy in Singapore would have to conclude that it has been a success – and again I would emphasise the role of the CPF in providing a sound home financing mechanism that has encouraged choice and individual responsibility. 
Over the years since independence, the Government of Singapore has gradually expanded its social policy to include wage subsidies for low-income workers, child care programmes, work training and other programmes. But education, enforced savings for retirement and other purposes, and housing remain the cornerstones of Singaporean social policy. As with defined-contribution pension plans, the standard criticism of a somewhat minimalist social policy such as Singapore's is that it does not cover enough of the risks and actual problems faced by the poor, the elderly, and the sick and disabled. As we have seen, European, Scandinavian and North American countries spend around 35% of their GDPs on social programmes. Although a comparable figure on total social welfare spending is not available for Singapore, the Government of Singapore spends only 16.7% of its GDP on all its programmes, a figure that is less than half of what European and Scandinavian countries spend on social welfare alone.[18] But a nation's social policy should be judged on more than the percentage of its GDP devoted to social programmes. Sociologists and anthropologists constantly remind us about cultural differences, so perhaps we should grant that a nation's social policy is conditioned by the cultural values of the society that created and sustains the government. In the case of Singapore, similar in many ways to the US, the culture is one that emphasises individual responsibility as a necessary precursor to government responsibility. In Singapore, solidarity begins with a nearly universal commitment to individual, family and community responsibility. Reading government documents that describe the goals of Singapore's social policy is like reading speeches by conservative politicians in the US criticising government spending on social programmes and arguing that individuals and families should do more to support themselves. Here is a recent mission statement of Singapore's Ministry of Community Development, Youth and Sports (MCYS): The three core principles that underlie [the Ministry's] responses to the range of social challenges are: (i) Self-reliance and social responsibility; (ii) Family as the first line of support; and (iii) The Many Helping Hands approach.[19] The MCYS focuses on organising its activities and spending its resources on programmes designed to promote personal, family and community responsibility. Note the emphasis on "Many Helping Hands". The idea of helping hands is that "volunteer welfare organisations and grassroots bodies" work with Government to implement the safety net. Again, private responsibility is emphasised, in this case by Government providing social assistance through volunteer and grassroots organisations. As a foreign observer who has spent more than three decades participating in the formulation of federal social policy in the US and studying the enactment, implementation and impacts of social policy in the US and Europe, I see a great deal to admire and emulate in Singapore's social policy. 
Its education system is one of the world's finest and produces young students of world-class achievement; its defined-contribution pension plan and government discipline in managing the system have helped Singapore avoid the type of financial crisis that threatens the solvency of the pension funds of many nations, and have played a major role in developing individual responsibility in its citizens; its housing policy has led to huge rates of home ownership and virtually no homelessness; and its health policy has produced a healthy and long-lived population without threatening government solvency. Its legislation on child care, wage subsidies, and employment and training programmes for low-income workers has created a system that provides economic opportunity for all; and its emphasis on self-reliance and family and community responsibility has inculcated self-reliance and minimal dependency among its citizens. It is little wonder that the World Economic Forum recently determined that Singapore is the world's third most competitive nation, ahead even of the US.[20] There is, of course, always room for improvement. It might make sense for Singapore to focus even more resources on high-quality pre-school programmes, especially for children from low-income families. It might also prove worthwhile to provide greater work support in the form of child care and wage subsidies to low-income workers. Fairness would also be advanced if Singapore did more for its guest workers, especially by focusing more attention on their housing. But none of these suggestions should detract from the achievements of Singapore in creating one of the best-educated, most disciplined, and most self-reliant populations in the world. These great achievements have provided the foundation on which a social and economic miracle has been built. Endnotes 1. In the past year, I have had the opportunity to spend several days in Singapore and a month at the Social Science Research Center Berlin. While in Germany, I studied programmes by which the US and Europe attempt to impose work requirements and use other incentives to encourage work rather than dependency on public programmes. The ideas expressed in this paper were developed in large part during my trips to Singapore and Germany. 2. Garfinkel, Irwin, Rainwater, Lee and Smeeding, Timothy, Wealth and Welfare States: Is America a Laggard or Leader? (London: Oxford, 2010); Esping-Andersen, Gosta, The Three Worlds of Welfare Capitalism (Princeton, NJ: Princeton, 1990); Castles, Francis G., The Future of the Welfare State: Crisis Myths and Crisis Realities (London: Oxford, 2004) 3. The figure for the US was 40.9% in the summer of 2009; for the elderly, the figure approached 100%; for children under age 18 the figure was 50%. Based on special analysis of the Survey of Income and Programme Participation performed by Richard Bavier and sent to the author on 13 September 2010. Figures in the European and Scandinavian democracies are almost certain to be higher than those of the US. 4. Goldin, Claudia and Katz, Lawrence R., The Race between Education and Technology (Cambridge: Harvard, 2008) 5. Garfinkel et al., p. 50. The UK, the US, Belgium, Italy, Germany, and the Netherlands are the countries that spend over 30% of the GDP on social and education programmes. 6. See Endnote 4. 7. 
For reasons that are not altogether clear, but probably include close and mutually-supportive family structures and traditions that place a high value on study and learning, most Asian ethnic groups excel in achieving high levels of education and, as a result, high levels of economic productivity. In the US for example, Asians achieve higher levels of education and average family income than any other ethnic group (including whites of European origin). According to data from the US Census Bureau, in 2009, 47.7% of Asians as compared with 26.0% of non-Asians had a four-year college degree; 18.2% of Asians as compared with 8.9% of non-Asians had a graduate or professional degree. In 2007, the median income of Asian families was $76,606 as compared with $61,355 for all families and $64,427 for white families. See US Census Bureau, “Educational Attainment in the United States: 2009”, Detailed Tables, http://www.census.gov/population/www/socdemo/education/cps2009.html; US Census Bureau, “The 2010 Statistical Abstract, Income, Expenditures, Poverty, & Wealth”, Table 681. “Money Income of Families – Median Income by Race and Hispanic Origin in Current and Constant”, (Washington, DC: Author, 2007), http://www.census.gov/compendia/statab/cats/income_expenditures_poverty_wealth.html 8. Patrick Gonzales and others, Highlights from TIMSS 2007: Mathematics and Science Achievement of US Fourth and Eighth-Grade Students in an International Context (Washington, DC: National Center for Education Statistics, 2009), http://nces.ed.gov/pubs2009/2009001.pdf 9. Singapore Department of Statistics, “Changing Education Profile of Singapore Population”, presented at Conference on Chinese Population and Socioeconomic Studies: Utilizing the 2000/2001 round Census Data, Hong Kong University of Science and Technology, June 19-21, 2002, http://www.singstat.gov.sg/pubn/papers/people/cp-education.pdf 10. Life expectancy has jumped by almost a decade since 1960 in many nations, including Germany, the Netherlands, the US and the UK. In Singapore, in 2007, the birth rate was at an all-time low of 1.24 children per woman of childbearing age – the 28th year in a row that the birth rate in Singapore remained below the replacement rate of 2.5 children per woman of childbearing age; see World Bank Database, “World Development Indicators: Life Expectancy at Birth, Total (Years)”, SP.DYN.LE00.IN (http://databank.worldbank.org/ddp/home.do [July 2010]); Mydans, Seth, “A Different Kind of Homework for Singapore Students: Get a Date”, New York Times, 29 April 2008, p. A9 11. In the late 1970s and early 1980s, the Social Security Trust Fund in the US was running so low that a financing crisis seemed imminent. Thus, in 1981, President Reagan and Congress appointed a blue-ribbon commission headed by Alan Greenspan to make recommendations on how to reform Social Security so payments from the trust fund could continue. The Greenspan Commission recommended adding newly hired federal workers to the programme (which would boost tax revenues), subjecting a part of Social Security benefits to taxation (which would also boost revenues), increasing implementation of previously scheduled payroll tax increases, and delaying cost-of-living adjustments from June to December each year. Congress enacted these changes and also passed an increase in the full retirement age beginning in 2000. See Greenspan Commission, Report of the National Commission on Social Security Reform (January 1983), http://www.socialsecurity.gov/history/reports/gspan.html 12. 
The major criticism of defined-contribution plans is that individuals bear nearly all the risk. Critics argue that the government should share the risk, but at least pension fund insolvency will not threaten the financial stability of the entire pension system – or indeed the entire government. 13. Sherraden, Michael, "Provident Funds and Social Protection: The Case of Singapore", Alternatives to Social Security: An International Inquiry, eds. Midgley, James and Sherraden, Michael, (Westport, CT: Greenwood Publishing Group, 1997), pp33-60; Sherraden, Michael and others, "Social Policy Based on Assets: The Impact of Singapore's Central Provident Fund", Asian Journal of Political Science, 3 (2) (1995), pp112-133, http://www.informaworld.com/smpp/content~db=all~content=a789141937 14. But some studies have shown that patients are not good at determining the quality of care and therefore might not make choices that would optimise their health; see Callahan, Daniel, "Consumer-directed Health Care: Promise or Puffery?" Health Economics, Policy and Law, 3 (2008), pp301–311 15. Centre for Governance and Leadership, "Overview of the Social Safety Net", Civil Service College, Singapore, 2008. 16. More accurately, a nation of apartment owners, land being a premium commodity in tiny Singapore. 17. Housing for foreign workers is often inferior; see Ofori, George, Foreign construction workers in Singapore (working paper, School of Building and Estate Management, National University of Singapore, International Labour Office, Geneva, 2000), http://www.ilo.org/public/english/dialogue/sector/papers/forconst/index.htm 18. Government of Singapore, Ministry of Finance, "Fiscal Outlook for Financial Year 2010", http://www.mof.gov.sg/budget_2010/download/fy2010_budget_highlights_part2.pdf 19. Centre for Governance and Leadership, "Overview of the Social Safety Net", p3 20. World Economic Forum, "The Global Competitiveness Report, 2010-2011" (Geneva: Author, 2010), http://www.weforum.org/en/media/Latest%20News%20Releases/NR_GCR10 |
802438ee48523f84e52030afb67a2ba9 | https://www.brookings.edu/articles/still-the-land-of-opportunity/ | Still the Land of Opportunity? | Still the Land of Opportunity? America is known as "the land of opportunity." But whether it deserves this reputation has received too little attention. Instead, we seem mesmerized by data on the distribution of incomes, which show that incomes are less evenly distributed than they were 20 or 30 years ago. In 1973, the richest 5 percent of all families had 11 times as much income as the poorest one-fifth. By 1996, they had almost 20 times as much. But it is not only the distribution of income that should concern us. It is also the system that produces that distribution. Indeed, I would argue that one cannot judge the fairness of any particular distribution without knowing something about the rules of the game that gave rise to it. Imagine a society in which incomes were as unequal as they are in the United States but where everyone had an equal chance of receiving any particular income; that is, in which the game was a completely unbiased lottery. Although some, especially those who are risk averse, might blanch at the prospect of losing, and might wish for a more equal set of outcomes a priori (as most famously argued by John Rawls), others might welcome the chance to do exceedingly well. But, and this is the important point, no one could complain that they hadn't had an equal shot at achieving a good outcome. So the perceived fairness of the process is critical, and the rules governing who wins and who loses matter as much as the outcomes they produce. In talking about this issue, we often invoke the phrase "equal opportunity," but we seldom reflect on what we really mean by "opportunity," how much of it we really have, and what we should do if it's in short supply. Instead, we have an increasingly sterile debate over income equality. One side argues for a redistribution of existing incomes, through higher taxes on the wealthy and more income support for the poor. The other side argues that inequality reflects differences in individual talent and effort, and as such is a spur to higher economic growth, as well as just compensation for unequal effort and skill. If there is any common ground between these two views, it probably revolves around the idea of opportunity and the measures needed to ensure that it exists. Opportunity first The American public has always cared more about equal opportunity than about equal results. The commitment to provide everyone with a fair chance to develop their own talents to the fullest is a central tenet of the American creed. This belief has deep roots in American culture and American history and is part of what distinguishes our public philosophy from that of Europe. Socialism has never taken root in American soil. Public opinion is only one reason to refocus the debate. Another is that the current emphasis on income inequality raises the question of how much inequality is too much. Virtually no one favors a completely equal distribution of income. Inequality in rewards encourages individual effort and contributes to economic growth. Many would argue that current inequalities far exceed those needed to encourage work, saving, and risk taking, and further that we need not worry about the optimal degree of inequality in a society that has clearly gone beyond that point. 
But the argument is hard to prove and will not satisfy those who believe that inequality is the price we pay for a dynamic economy and the right of each individual to retain the benefits from his or her own labor. In light of these debates, if any public consensus is to be found, it is more likely to revolve around the issue of opportunity than around the issue of equality. A final reason why opportunity merits our attention is that it gets at the underlying processes that produce inequality. It addresses not just the symptoms but the causes of inequality. And a deeper understanding of these causes can inform not only one’s sense of what needs to be done but also one’s sense of whether the existing distribution of income is or is not a fair one. Three societies Consider three hypothetical societies, all of which have identical distributions of income as conventionally measured. The first society is a meritocracy. It provides the most income to those who work the hardest and have the greatest talent, regardless of class, gender, race, or other characteristics. The second one, I will call a “fortune-cookie society.” In this society, where one ends up is less a matter of talent or energy than pure luck. The third society is class-stratified. Family background in this society is all important, and thus you need to pick your parents well. The children in this society largely end up where they started, so social mobility is small to nonexistent. The United States and most other advanced countries are a mixture of these three ideal types. Given a choice between the three, most people would probably choose to live in a meritocracy. Not only do the rules determining success in a meritocracy produce greater social efficiency but, in addition, most people consider them inherently more just. Success is dependent on individual action. In principle, by making the right choices, anyone can succeed, whereas in a class-stratified or fortune-cookie society, people are buffeted by forces outside their control. So, even if the distribution of income in each case were identical, most of us would judge them quite differently. We might even prefer to live in a meritocracy with a less equal distribution of income than in a class-stratified or fortune-cookie society with a more equal distribution. Indeed, social historians have found this to be the case. The American public accepts rather large disparities in income and wealth because they believe that such disparities are produced by a meritocratic process. Even those at the bottom of the distribution believe that their children will do better than they have. It is this prospect, and the sense of fairness that accompanies it, that has convinced the American body politic to reject a social-welfare state. For the last 25 years, the top one-fifth of the population has been improving their prospects while the other 80 percent has lagged behind. Yet no one has rebelled. The many have not imposed higher taxes on the few. (Small steps in this direction were taken in 1993, but the Democratic president who proposed them later apologized to a group of wealthy donors for doing so.) Even welfare recipients tell survey researchers that they consider the new rules requiring them to work at whatever job they can get fair. They plan on “bettering themselves.” Such optimism flies in the face of studies suggesting that women on welfare (and those similar to them) will earn poverty-level wages for most of their lives. 
But it is an optimism that is characteristically, if in this case poignantly, American. Several points need to be made about our purported meritocracy. The first is that even a pure meritocracy leaves less room for individual agency than is commonly believed. Some of us are blessed with good genes and good parents while others are not. The second is that the United States, while sharing these inherent flaws with other meritocracies, remains a remarkably dynamic and fluid society. Although it is not a pure meritocracy, it has moved closer to that ideal than at any time in its past. The third point is that, in the past, a rapid rate of economic growth provided each new generation with enhanced opportunities. It was this fact, in large part, that contributed to our image as the land of opportunity. But a mature economy cannot count on this source of upward mobility to leaven existing disparities; it needs instead to repair its other two opportunity-enhancing institutions: families and schools. The remainder of this essay elaborates on each of these points. The inherent limits of a meritocracy In a meritocracy, one would expect to find considerable social and economic fluidity. In such a system, the abler and more ambitious members of society would continually compete to occupy the top rungs. Family or class background, per se, should matter little in the competition while education should matter a lot. The social-science literature contains a surprising amount of information on this topic. Based on my own reading of this literature, I would argue that social origins or family background matter a good deal. Not everyone begins the race at the same starting line. The kind of family into which a child is born has as much or more influence on that child’s adult success than anything else we can measure. Yes, education is important too, but when we ask who gets a good education, it turns out to be disproportionately those from more advantaged backgrounds. Well-placed parents are much more likely to send their children to good schools and to encourage them to succeed academically. In short, although not as evident as in a class-stratified society, even in a meritocracy one had better pick one’s parents well. Why do families matter so much? There are at least three possibilities. The first is that well-placed parents can pass on advantages to their children without even trying: They have good genes. The second is that they have higher incomes, enabling them to provide better environments for their children. The third is that they are simply better parents, providing their children an appropriate mix of warmth and discipline, emotional security and intellectual stimulation, and preparation for the wider world. It has proved difficult to discover which of these factors is most important. However, as Susan Mayer demonstrates in her recent book, What Money Can’t Buy, the role of material resources has probably been exaggerated. Most studies have failed to adjust for the fact that parents who are successful in the labor market have competencies that make them good parents as well. It is these competencies, rather than the parents’ income, that help their children succeed. I don’t want to leave the impression that income doesn’t matter at all. It enables families to move to better neighborhoods; it relieves the stresses of daily living that often produce inadequate parenting; and, most obviously, it enables parents to purchase necessities. 
Still, additional income assistance, although possibly desirable on other grounds, is not likely to produce major changes in children’s life prospects. Genes clearly matter. We know this from studies of twins or siblings who have been raised apart. However, IQ or other measures of ability are at least somewhat malleable, and differences in intelligence only partially explain who ends up where on the ladder of success. Good parenting and an appropriate home environment are much harder to measure, but studies suggest that they may explain a substantial portion of the relationship between family background and later success in school or in the labor market. In addition, children with two parents fare much better than those with only one, in part because they have higher incomes but also because the presence of a second parent appears, according to all of the evidence, to be beneficial in and of itself. So, for whatever reason, families matter. Unless we are willing to take children away from their families, the deck is stacked from the beginning. And even if one could remove children from their homes, there would still be the pesky little matter of differences in genetic endowments. Since a meritocracy has no good way of dealing with these two fundamental sources of inequality, it is a pipe dream to think that it can provide everyone with an equal chance. If we want a society in which there is less poverty and more equality, we will have to work harder and more creatively to compensate for at least some of these initial advantages and disadvantages. How much social mobility? Whatever its flaws, a meritocracy is clearly better than some of the alternatives. Although economic and social mobility may be inherently limited, it exists. But just how much of it do we actually have in the United States? Do families matter so much that children can rarely escape their origins? Do people move up and down the economic ladder a little or a lot? Before attempting to answer these questions, let us consider a simple example of a society consisting of only three individuals: Minnie, Mickey, and Mighty. Assume that Minnie, Mickey, and Mighty start with incomes (or other valued goods) of $20,000, $30,000, and $40,000 respectively. Now imagine that Minnie’s children do extremely well, moving from an income of $20,000 to one of $40,000. Mighty’s children, by contrast, fall in status or well-being from $40,000 to $20,000. Mickey’s situation doesn’t change. This is the sort of social mobility we would expect to find in a meritocracy. It is a story of rags to riches (or the reverse) in a generation. Note that the distribution of income, as conventionally measured, has not changed at all. As Joseph Schumpeter once put it, the distribution of income is like the rooms in a hotel-always full but not necessarily with the same people. This same rags-to-riches story can occur over a lifetime as well as between generations. Those at the bottom of the income scale often move up as they accumulate skills and experience, add more earners to the family, or find better jobs. Those at the top may move down as the result of a layoff, a divorce, or a business failure. Thus any snapshot of the distribution of incomes in a single year is unlikely to capture the distribution of incomes over a lifetime. For example, in a society in which everyone was poor at age 25 but rich at age 55, the distribution of annual incomes for the population as a whole would be quite unequal, but everyone would have the same lifetime incomes! 
Now note that it is theoretically possible for the distribution of income to become more unequal at the same time that the Minnies of the world are improving their status. Is this what happened over the last few decades in the United States? The answer is yes and no. On the one hand, we know that there is a lot of income mobility within the population. Every year, about 25 percent or 30 percent of all adults move between income quintiles (say, from being in the bottom one-fifth of the income distribution to being in the second-lowest fifth). This rate increases with time, approaching 60 percent over a 10-year period. So there is considerable upward and downward movement. A lot of the Minnies in our society move up, and a lot of the Mightys move down. A few of the Minnies may even trade places with the Mightys of the world, as in our example. On the other hand, most people don’t move very far; many remain stuck at the bottom for long periods; and some apparent moves are income reporting errors. (These are particularly large among the very poor and the very wealthy whose incomes tend to come from unearned sources that are difficult to track and that they may be reluctant to reveal.) Most importantly, from the data we have, there is no suggestion of more mobility now than there was 20 or 30 years ago. So one can’t dismiss complaints about growing income inequality with the argument that it has been accompanied by more opportunity than in the past for everyone to share in the new wealth. But what about Minnie’s and Mighty’s children? Suppose we look at mobility across generations instead of looking at it across their own life cycles? Here, the news is much more positive. Social mobility in America appears to have increased, at least since 1960, and probably going back to the middle of the last century (though the data for measuring such things is much better for the more recent period). This conclusion is based on studies done by Michael Hout, David Grusky, Robert Hauser, David Featherman, and others-studies that show less association between some measure of family background and eventual adult career success now than in the past. This association has declined by as much as 50 percent since the early 1960s, according to Hout. What has produced this increase in social mobility? The major suspects are a massive broadening of educational opportunities, the increased importance of formal education to economic success, and more meritocratic procedures for assigning workers to jobs (based on “what you know rather than who you know”). In addition, the extension of opportunities to some previously excluded groups-most notably women and blacks-has produced greater diversity in the higher, as well as the lower, ranks. How much economic mobility? Now return to our three-person society and consider a second scenario. In this one, the economy booms, and Minnie, Mickey, and Mighty all double their initial incomes from $20,000, $30,000, and $40,000 to $40,000, $60,000, and $80,000. Clearly, everyone is better off, although the relative position of each (as well as the distribution of income) is exactly the same as before. It is this sort of economic mobility, rather than social mobility per se, that has primarily been responsible for America’s reputation as the land of opportunity. In other words, the growth of the economy has been the most important source of upward mobility in the United States; it is the reason that children tend to be better off than their parents. 
In a dynamic economy, a farmer’s son can become a skilled machinist, and the machinist’s son a computer programmer. Each generation is better off than the last one even if there is no social mobility. (Class-based differentials in fertility aside, social mobility-as distinct from economic mobility-is, by definition, a zero-sum game.) But, as important as it was historically, economic mobility has been declining over the past few decades for the simple reason that the rate of economic growth has slowed. Young men born after about 1960, for example, are earning less (in inflation-adjusted terms) than their fathers’ generation did at the same age. It would be nice to assume that a higher rate of growth is in the offing as we enter a new century. Certainly, new technologies and new markets abroad make many observers optimistic. But whatever the force of these developments, they haven’t yet improved the fortunes of the youngest generation. In sum, both these factors-the increase in social mobility and the decline in economic mobility-have affected prospects for the youngest generation. The good news is that individuals are increasingly free to move beyond their origins. The bad news is that fewer destinations represent an improvement over where they began. For those concerned about the material well-being of the youngest generation, this is not a welcome message. But for those concerned about the fairness of the process, the news is unambiguously good. Class stratification Not only has economic growth slowed but its benefits now accrue almost entirely to those with the most education. Simply being a loyal, hard-working employee no longer guarantees that one will achieve the American dream. Whatever progress has been made in extending educational opportunities, it has not kept pace with the demand. Men with a high-school education or less have been particularly hard hit. The combination of slower growth and a distribution of wage gains that have favored women over men and the college educated over the high-school educated since the early 1970s has hurt poorly educated men. Their real incomes are less than one-half what they otherwise would have been in 1995. Education is, to put it simply, the new stratifying variable in American life. This, of course, is what one would hope for in a meritocracy, but only if everyone has a shot at a good education. It is said that Americans would rather talk about sex than money. But they would rather talk about money than class, and some would rather not talk about the underclass at all. Many people consider the label pejorative, but research completed in the past decade suggests that such a group may indeed exist. Its hallmark is its lack of mobility. This group is not just poor but persistently poor, often over several generations. It is concentrated in urban neighborhoods characterized by high rates of welfare dependency, joblessness, single parenthood, and dropping out of school. It is disproportionately made up of racial and ethnic minorities. Although still relatively small (a little under three million people in 1990, according to an Urban Institute analysis of Census data), it appears to be growing. Anyone who doubts the existence of such a group need only read the detailed first-hand portrayals of ghetto life in Alex Kotlowitz’s There Are No Children Here, Leon Dash’s Rosa Lee, or Ron Suskind’s A Hope in the Unseen. 
These accounts suggest that dysfunctional families, poor schools, and isolation from mainstream institutions are depriving a significant segment of our youth of any prospect of one day joining the middle class. All of this is by way of a caution: Whatever the broader trends in economic and social mobility, there may be enclaves that get left behind. Moreover, one can argue that it is this subgroup-and their lack of mobility-that should be our main concern. The very existence of such a group threatens our sense of social cohesion and imposes large costs on society. Its nexus with race is particularly disturbing. What to do? If families and education matter so much, we had best look to them as sources of upward mobility for all Americans-and especially for those stuck at the bottom of the economic ladder. Imagine a world in which everyone graduated from high school with the basic competencies needed by most employers-a world in which no one had a child before they were married and all had a reasonably decent job. Even if these parents held low-wage jobs, and one of them worked less than full-time, they would have an income sufficient to move them above the official poverty line (about $12,000 for a family of three in 1995). The entry-level wage for a male high-school graduate in 1995 was $15,766. If his wife took a half-time job at the minimum wage, they could earn another $5,000 a year. No one should pretend that it is easy to live on $20,000 a year, especially in an urban area. Rent, utilities, and work-related expenses alone can quickly gobble up most of this amount. It would make enormous sense, in my view, to supplement the incomes of such families with an earned income tax credit, subsidized health care, and subsidized child care. What does not make sense is to insist that the public continue to subsidize families started by young unwed mothers. As of 1990, 45 percent of all first births were to women who were either teenagers, unmarried, or lacking a high-school degree. Add in all those with high-school diplomas that are worthless in the job market, and the picture is even grimmer. There is no public-policy substitute for raising a child in a home with two parents who are adequately educated. Of course, poorly educated parents are nothing new. In fact, the proportion of mothers who are high-school graduates is higher now than it has ever been. But bear in mind that in the past mothers were not expected to work (in part because far more of them were married), that the economy didn’t require people of either sex to have nearly as much education, and that the proportion of children in single-parent families was a fraction of what it is today. Because of increases in divorce and especially out-of-wedlock childbearing, we now have a situation in which three-fifths of all children will spend time in a fatherless family. Almost one-third of all children are born out of wedlock in the United States, and the proportion exceeds one-half in such cities as New York, Chicago, Philadelphia, Detroit, and Washington, D.C. One needn’t be an advocate of more traditional family values to be worried about the economic consequences of such social statistics. In fact, the growth of never-married mothers can account for almost all of the growth in the child poverty rate since 1970. Where does the cycle stop? Urban schools that half a century ago may have provided the children of the poor a way into the middle class are now more likely to lock them into poverty. 
More than half of fourth and eighth graders in urban public schools fail to meet even minimal standards in reading, math, or science, and more than half of students in big cities will fail to graduate from high school. How can America continue to be the land of opportunity under these circumstances? If families and schools are critical to upward mobility, these children have little chance of success. We have no choice but to address both of these issues if we want to provide opportunities for the next generation. Strengthening families Despite all the talk about the deterioration of the family, no one knows quite what to do about the problem. Welfare reform, which has not only eliminated AFDC as a permanent source of income for young mothers but also made young fathers more liable to pay child support, may well deter some out-of-wedlock childbearing. The next step should be to make the Earned Income Tax Credit (EITC) more marriage friendly. Today, as a result of the credit, a working single parent with two children can qualify for almost $4,000 a year. But if she marries another low-wage earner, she stands to lose most or all of these benefits. Congress should consider basing the credit on individual, rather than family, earnings. (A requirement that couples split their total earnings before the credit rate was applied would prevent benefits from going to low-wage spouses in middle-income families.) Such a revised EITC would greatly enhance the incentive to marry. Equally important, we should find top-quality child care for those children whose mothers are required to work under the new welfare law. Indeed, such care might provide them with the positive experiences that they often fail to get within the home. Such intervention, if properly structured to accomplish this goal, can pay rich dividends in terms of later educational attainment and other social outcomes. The research on this point is, by now, clear. Although early gains in IQ may fade, rigorous studies have documented that disadvantaged children who receive a strong preschool experience are more likely to perform well in school. Some argue that out-of-wedlock childbearing is the result of a lack of jobs for unskilled men. Although I don’t think the evidence backs this view, it may have some merit. If so, we should offer jobs to such men in a few communities and see what happens. But we should tie the offer of a job to parental responsibility or give preference to men who are married. Finally, I am convinced that messages matter. Many liberals argue that young women are having babies out of wedlock because they or their potential spouses are poor and face bleak futures. It is said that such women have no choice but to become unwed mothers. As an after-the-fact explanation, this may be partly true, but it is often accompanied by too ready an acceptance of early, out-of-wedlock childbearing by all concerned. Such fatalistic expectations have a way of becoming self-fulfilling. Just as it is wrong to presume that poor children can’t excel in school, so too it is wrong to suggest to young women from disadvantaged backgrounds that early out-of-wedlock childbearing is their only option. The fact remains that education and deferred childbearing, preferably within marriage, are an almost certain route out of poverty. Perhaps if more people were willing to deliver this message more forcefully, it would begin to influence behavior. 
Though the question needs to be studied more closely, it would appear that the decline in welfare caseloads since 1993 was triggered, in part, by a new message. Moreover, the new emphasis on conservative values may have contributed to the decrease in teen pregnancy and early childbearing since 1991. These new values can explain as much as two-thirds of the decline in sexual activity among males between 1988 and 1995, according to an Urban Institute study. Fixing urban schools We must stem the tide of early, out-of-wedlock births for one simple reason: Even good teachers cannot cope with large numbers of children from poor or dysfunctional homes. And equally important, children who are not doing well in school are more likely to become the next generation of teenage mothers. This is a two-front war in which success on one front can pay rich dividends on the other. Lose the battle on one front, and the other is likely to be lost as well. That many schools, especially those in urban poor neighborhoods, are failing to educate their students is, I think, no longer in dispute. What is contested is how to respond. Some say that the solution lies in providing vouchers to low-income parents, enabling them to send their children to the school of their choice. Others argue that school choice will deprive public schools of good students and adequate resources. They favor putting more money into the public schools. But choice programs have the potential to provide a needed wake-up call to these same schools. Too many people are still defending a system that has shortchanged the children of the poor. Public schools are not about to disappear, and no one should believe that choice programs alone are a sufficient response to the education crisis. We should be equally attentive to the new choice programs and to serious efforts to reform the public schools. In Chicago, for example, a new leadership team took over the school system in 1995-96 and instituted strong accountability measures with real consequences for schools, students, and teachers. Failure to perform can place a school on probation, lead to the removal of a principal, or necessitate that a student repeat a grade. New supports, such as preschool programs, home visiting, after-school and summer programs, and professional development of teachers, are also emphasized. Early indications are that these efforts are working to improve Chicago’s public schools. A more equal chance I began with a plea that we focus our attention less on the distribution of income and more on the opportunity each of us has to achieve a measure of success, recognizing that there will always be winners and losers but that the process needs to be as fair and open as possible. It can be argued that the process is, to one degree or another, inherently unfair. Children do not have much opportunity. They do not get to pick their parents-or, for that matter, their genetic endowments. It is these deepest of inequalities that have frustrated attempts to provide a greater measure of opportunity. Education is supposed to be the great leveler in our society, but it can just as easily reinforce these initial inequalities. Thus any attempt to give every child the same chance to succeed must come to terms with the diversity of both early family environments and genetic endowments. In policy terms, this requires favoring the most disadvantaged. Numerous programs, from Head Start to extra funding for children in low-income schools, have attempted to level the playing field. 
But even where such efforts have been effective, they have been grossly inadequate to the task of compensating for differences in early environment. Assuming we are not willing to contemplate such radical solutions as removing children from their homes or cloning human beings, we are stuck with a certain amount of unfairness and inequality. The traditional liberal response to this dilemma has been to redistribute income after the fact. It is technically easy to do but likely to run afoul of public sentiment in this country, including the hopes and dreams of the disadvantaged themselves. They need income; but they also want self-respect. In my view, we must find ways to strengthen families and schools in ways that give children a more equal chance to compete for society’s prizes. To do otherwise runs counter to America’s deepest and most cherished values. |
9eafe643ac277298cf41bb572d2a34fa | https://www.brookings.edu/articles/testing-the-limits-of-china-and-brazils-partnership/ | Testing the limits of China and Brazil’s partnership | Testing the limits of China and Brazil’s partnership Brazil is China’s most important economic and political partner in South America, as well as a key participant in the Brazil, Russia, India, China, and South Africa (BRICS) grouping of emerging powers that China increasingly leads. When it comes to global aspirations, China and Brazil have historically been in sync on their critiques of the liberal international order, if not on their preferred remedies. Their prescriptions for foreign policy, however, differ in important ways. China would prefer a world order that better accommodates its interests, and it is becoming less reluctant to use the threat of force in foreign policy to maintain its ascendancy in its geopolitical neighborhood. Brazil traditionally has preferred a rules-bound liberal international order that applies to everyone, especially superpowers. Unlike China, it forswears the use of coercion in international affairs, even to protect its interests in its immediate neighborhood, South America. Since President Jair Bolsonaro assumed office in January 2019, this historical pattern has been upended. Bolsonaro and his foreign policy team have adopted a strongly pro-U.S. (specifically pro-President Donald Trump) agenda internationally, including engaging in frequent critiques of China. Domestically, the partnership with China has been controversial with some sectors. Specifically, the partnership is criticized by the Brazilian manufacturing sector, which faces strong competition from Chinese products and lacks reciprocal access to the Chinese market, and by nationalist-populist voters who support Bolsonaro. Agricultural export interests, by contrast, favor a strong relationship with Beijing because China is a major market for their products. While initially restrained in response to criticism from the Bolsonaro administration, Chinese diplomats have struck back in 2020 in interviews and op-eds with local media.1 This confrontational dynamic is a marked departure from the historical trend in Brazil-China relations, which has moved steadily toward deeper economic and political ties. China has a long-term interest in a close diplomatic relationship with Brazil, important both for its strategy in Latin America and for maximizing its global leadership. Beijing is unlikely to want this tension to become the “new normal” in its relations with Brazil. In the face of the COVID-19 pandemic, the Bolsonaro administration has steered an erratic course between conciliatory rhetoric, seeking Chinese assistance against the novel coronavirus, and further criticism.2 Despite the preferences of its current foreign policy team, Brazil has important long-term strategic interests in maintaining a working partnership with China. Brazil’s path toward emerging power status has been a rocky one, as it has tried different strategies to secure a seat at the table to negotiate a place in the international order commensurate with its aspirations. It has vacillated between collaborating with the United States, as occurred during World War II and during the 1990s after the end of the Cold War, and charting its own autonomous path to great power status during the Cold War and during the first decades of the 21st century. 
Each time, Brazil’s aspirations have been undermined by profound crises in its domestic political and economic arrangements that have belied its claim to great power status.3 During the periods when it sought international autonomy, Brazil has found in China an attractive partner in criticizing the liberal international order fostered by the United States in the wake of World War II. Brazil’s military government established diplomatic relations with the People’s Republic of China in 1974, ending its recognition of the Republic of China (Taiwan), and China and Brazil entered a “strategic bilateral partnership” in 1993, initially focused on economic and technological cooperation, but eventually evolving into a more global partnership.4 Both prioritized relationships with the Global South based on solidarity, non-intervention, and mutual respect, deliberately contrasting their approach with that of the superpowers. In particular, they criticized the degree to which the United States ignored the rules of the rules-based liberal order it purportedly championed. What bound the two countries together was a critique of the international system as stacked against the developing world. Both China and Brazil have sought rapid economic and technological development (although China has had much greater success) and have pursued industrialization as an important means to international autonomy and a seat at the table of the world’s major powers.5 One of the signature institutions via which China-Brazil international cooperation has become more formalized is the BRICS partnership. BRICS brings together Brazil, Russia, India, China and South Africa to address global concerns of mutual interest. A club of “emerging powers” (although Russia is arguably declining), BRICS has served as a venue for mutual admiration, for club deals among the members, and sometimes for proposing an alternative world order. Particularly when seeking reforms in the liberal international order, the BRICS countries have proposed alternatives to existing institutions such as the BRICS Development Bank (an alternative to the World Bank) and the Contingency Reserve Arrangement (an alternative to the International Monetary Fund).6 Both China and Brazil have found BRICS a useful mechanism to signal to the incumbent great powers that rising states have both the capacity and the interest in establishing their own global institutions, even though these are as yet nascent and may not prosper. China and Brazil’s relationship is grounded in an expansive trade and investment partnership. China began to trade with South America in significant volumes after 2000, focusing initially on acquiring commodities to supply its rapidly growing industrial base and feed its population.7 Brazil is one of the most productive agricultural export economies in the world, rivaling the United States in this area, as well as a significant exporter of mineral products. By 2019, bilateral trade reached over $100 billion, making China the main destination for Brazilian exports.8 The China-U.S. trade rivalry under the Trump administration benefited Brazil as China shifted its food trade away from the United States, and Brazil’s highly competitive agricultural exporters were eager to take up the slack.9 After 2005, when China began to invest abroad, Brazil also became a significant destination for its foreign direct investment, first in the commodities sector, but then in a wider array of infrastructure projects. 
By 2017, over half of China’s investments in South America were destined for Brazil.10 Although not formally a target for China’s signature Belt and Road Initiative, Brazil’s global interests and export markets are clearly affected by China’s overseas investment programs, not least because they tend to shift the global economic center of gravity away from the United States, one of Brazil’s other major international trading partners.11 However, these trends are not all entirely positive for Brazil or for its relationship with China. Brazil has suffered from a persistent failure in industrial policy, which among other factors has contributed to the deindustrialization of its economy and loss of well-paying formal sector jobs. This is in part due to competition from cheap Chinese manufactured imports. The Brazilian industrial sector, traditionally able to rely on a large domestic market, has favored protectionist policies historically and is not fully competitive globally. It has thus been unprepared to deal with relatively less expensive Chinese imports. In addition, Brazil’s manufacturers face non-tariff barriers when attempting to export to China. While Brazil’s exports to China have soared since 2000, the overall effect has been to emphasize the agricultural and mining sector at the expense of the manufacturing sector, which tends to have higher Brazilian value-added.12 Since winning office in 2018, Bolsonaro has oscillated between the highly critical stance on China that he campaigned on and a conciliatory approach reflective of the importance of economic and trade relations between the two countries. His foreign policy team, led by Foreign Minister Ernesto Araújo, represents a radical break with recent Brazilian diplomatic tradition. Abandoning Brazil’s tradition of autonomy in foreign policy, the Bolsonaro administration has adopted a highly pro-U.S. approach, mimicking the international policies of the Trump administration, even following the U.S. lead in announcing that it would move its embassy in Israel from Tel Aviv to Jerusalem, which resulted in heavy criticism from Brazilian agricultural exporters concerned about markets in the Arab world.13 Challenging this tendency are the government officials viewed as the “adults in the room,” led by Vice President Hamilton Mourão, a former Army general, and including Economics Minister Paulo Guedes and Agriculture Minister Tereza Cristina. The business community, particularly the agricultural sector, is not in favor of any policies that would upset commercial relations with China, especially since they think it unlikely that the United States would open up to imports of Brazilian agricultural and mining products.14 Although the Brazilian government’s criticism of China was tempered in 2019, particularly in the lead-up to a BRICS summit in Brasília, the coronavirus pandemic crisis has brought the worst of Bolsonaro’s anti-China rhetoric to the fore. His allies and followers have engaged in anti-China conspiracy-mongering around the origins of the pandemic in social media, claiming it is an attack on capitalism, echoing arguments put out by far-right circles in the United States. In fact, the degree of political polarization in Brazil is rivaled only by that in the United States. For example, Bolsonaro’s supporters have widely accused the “adults in the room,” such as Guedes, Cristina, and Mourão, of being “secret communists” on social media. 
In the meantime, Chinese diplomats have responded forcefully in the Brazilian media, triggering further controversy. This is all occurring as Bolsonaro, who tends to dismiss the pandemic as a hoax, has presided over one of the worst government responses to the crisis in the world to date.15 Since the pandemic began, Bolsonaro has fired or lost two health ministers, and no permanent replacement has yet been appointed. Bolsonaro’s family is suspected of ties to corruption and other crimes. There is increasing talk of impeachment in Brasília, and Bolsonaro lacks a sustained base of support in the Brazilian legislature. The Supreme Court has taken a keen interest in allegations made against the president.16 Supporters of Bolsonaro have encouraged a military coup d’état if any actions are taken against the president or his interests.17 This is particularly significant because Bolsonaro has increasingly militarized his government, particularly the health ministry, which places the armed forces in an awkward situation since military officers with no public health experience may be left holding the bag for a government that is failing to adequately address the pandemic.18 The armed forces have been a silent partner for Bolsonaro throughout his presidency, but the current situation may place the normally popular military at risk of suffering reputational damage.19 With both Bolsonaro and Trump under severe domestic criticism for their handling of the coronavirus pandemic (Brazil’s policies mirror those of the United States), any change of course with regard to China policy may well take a back seat to domestic politics, electioneering, and simple political survival until after the 2020 U.S. presidential elections. Trump has consistently sought to raise the anti-China card as part of his reelection campaign, and this effort may have foreign policy implications, including for U.S.-Brazil relations. We could imagine that the Trump administration might put more pressure on Bolsonaro to forswear cooperation with China, and Bolsonaro has shown a tendency to follow Trump’s lead across a range of issues, not just on the coronavirus response. However, China’s economy appears poised to recover more rapidly than that of the United States due to its more effective coronavirus response. As U.S.-China trade relations deteriorate, Brazil is the natural alternative to U.S. suppliers for Beijing to secure imports of food and other commodities. Additionally, there is a long tradition in Brazil of enacting laws “para inglês ver,” i.e., for the English (foreigners) to see. In other words, policies that look good on paper but, in practice, are never enforced. If Brazil were to face Trump administration pressure, it might be more practical to feint publicly towards a harder line on China while its private sector commodity producers seek to export as much as possible, especially in the face of a severe global recession. For China, it may be best to wait out the current political debacle in Brasília. Bolsonaro lacks deep roots in Brazil’s establishment or party system. In addition, Brazil trades twice as much with China as with the United States, a trend accentuated by recent U.S.-China trade disputes.20 The rise of China as a counterweight to U.S. hegemony will remain appealing in the long term for Brazilian officials and foreign policy analysts who seek to maximize their country’s strategic autonomy internationally. 
For China, a “strategic partnership” is appealing to significant political and economic interests in Brazil, which suggests this approach will win out in the long run. The international economic interests of the United States and Brazil, on the other hand, are not naturally aligned. In many important agricultural and mineral commodity export markets, their economies are not complementary, but rather in competition with one another. Moreover, Brazilian diplomats have a long memory for all the times the United States disregarded Brazil’s interests, to say nothing of its failure to build a special relationship. Brazil and U.S. interests are poorly aligned in South America, an area where Brazil seeks to preserve its freedom of action rather than cooperate. This means that over time, despite an affinity between Bolsonaro and Trump, we should expect a reversion to the mean in U.S.-Brazilian relations, which historically have been correct, but distant. |
ce3512cf7d9c8fbdb907e6197c36b964 | https://www.brookings.edu/articles/the-advantages-of-an-assertive-china-responding-to-beijings-abrasive-diplomacy/ | The Advantages of an Assertive China: Responding to Beijing’s Abrasive Diplomacy | The Advantages of an Assertive China: Responding to Beijing’s Abrasive Diplomacy Over the past two years, in a departure from the policy of reassurance it adopted in the late 1990s, China has managed to damage relations with most of its neighbors and with the United States. Mistrust of Beijing throughout the region and in Washington is palpable. Observers claim that China has become more assertive, revising its grand strategy to reflect its own rise and the United States’ decline since the financial crisis began in 2008. In fact, China’s counterproductive policies toward its neighbors and the United States are better understood as reactive and conservative rather than assertive and innovative. Beijing’s new, more truculent posture is rooted in an exaggerated sense of China’s rise in global power and serious domestic political insecurity. As a result, Chinese policymakers are hypersensitive to nationalist criticism at home and more rigid — at times even arrogant — in response to perceived challenges abroad. A series of recent standoffs and tough diplomatic gestures certainly seem a world apart from China’s previous strategy, set in the 1990s, of a “peaceful rise,” which emphasized regional economic integration and multilateral confidence building in an effort to assuage the fears of China’s neighbors during its ascendance to great-power status. Examples of China’s recent abrasiveness abound. In 2009, Chinese ships harassed the unarmed U.S. Navy ship Impeccable in international waters off the coast of China. At the ASEAN (Association of Southeast Asian Nations) Regional Forum in July 2010, Chinese Foreign Minister Yang Jiechi warned Southeast Asian states against coordinating with outside powers in managing territorial disputes with Beijing. Later that year, Beijing demanded an apology and compensation from Tokyo after Japan detained — and then released, under Chinese pressure — a Chinese fishing boat captain whose boat had collided with a Japanese coast guard vessel. Also in 2010, Chinese officials twice warned the United States and South Korea against conducting naval exercises in international waters near China — even after North Korea sank a South Korean naval vessel in March, revealed a well-developed uranium-enrichment program in November, and then shelled a South Korean island, Yeonpyeong, that same month. Despite the image of a more powerful China seeking to drive events under the rubric of a new grand strategy, Beijing — with a few important exceptions — has been reacting, however abrasively, to unwelcome and unforeseen events that have often been initiated by others. In many ways, China’s foreign policy was more creative and proactive in the two years leading up to the financial crisis than it is today. Between 2006 and 2008, China adopted constructive and assertive policies toward North Korea, Sudan, and Somali piracy that were unprecedented in the history of the People’s Republic of China’s foreign relations. The United States and its diplomatic partners should promote the return of such an assertive China — without which Washington will face greater difficulty in addressing pressing global challenges such as nuclear proliferation, climate change, and global economic instability. 
China has become far too big to stand on the sidelines — let alone to stand in the way — while others attempt to resolve these issues. THE GOOD OLD DAYS? In September 2005, then U.S. Deputy Secretary of State Robert Zoellick called for China to become a “responsible stakeholder” on the international stage. The goal of this Bush administration initiative was to move the U.S.-Chinese relationship beyond traditional bilateral issues — relations across the Taiwan Strait, human rights, and economic frictions — and toward cooperation on ensuring stability in places such as Northeast Asia, the Persian Gulf, and Africa. In the following two years, the Chinese responded impressively, although only partially, to this shift in U.S. policy. Beijing not only continued to host the six-party talks on North Korea’s nuclear program but also participated in the crafting of international sanctions against Pyongyang in the UN Security Council. Especially in late 2006 and early 2007, China also exerted bilateral economic pressure on North Korea, which led to the disablement of its nuclear facilities at Yongbyon, the only concrete progress made to date as part of the six-party talks. Beijing also changed course on Sudan. It went from protecting Sudan’s regime against international pressure over human rights abuses in Darfur to backing then UN Secretary-General Kofi Annan’s three-phase plan for peace and stability in the region in late 2006. Chinese officials pressured Khartoum to accept the second phase of that plan, which called for the creation of a joint United Nations-African Union peacekeeping force. Then, in early 2007, after a dialogue about the region between the U.S. State Department and the Chinese Foreign Ministry, Beijing agreed to send more than 300 Chinese military engineers to Darfur, the first non-African peacekeepers committed to the UN operation. In late 2008, China also agreed to send a naval contingent to the Gulf of Aden to assist in the international effort to counter piracy off the coast of Somalia. Perhaps most significant, considering Beijing’s traditional principle of noninterference in the internal affairs of sovereign states, the UN resolution enabling the mission allowed for the pursuit of pirates into Somalia’s territorial waters. To be sure, Washington and its diplomatic partners would have liked to see even more from Beijing in this period. But China’s new policies represented more than a minor shift. Beijing was moving away from its traditional foreign policy relationships and softening, although not abolishing, its long-held and once rigid positions on sanctions and noninterference in the internal affairs of states. By making clear to skeptical Chinese audiences that Washington does not view the relationship as a zero-sum game, the Bush administration’s initiative was good for U.S.-Chinese bilateral relations. More important, U.S. policy underscored that addressing global problems, such as nuclear proliferation in North Korea and Iran, terrorism, transnational crime, global financial instability, environmental degradation, and piracy on the high seas, is in everyone’s interest, including China’s. Finally, the U.S. initiative reflected Washington’s understanding that with China’s rising clout come increased responsibilities. 
Put simply, China has become too big to maintain its traditional policy of noninterference and its aversion to economic sanctions; too big to preserve friendly diplomacy toward international pariahs such as Pyongyang, Khartoum, and Tehran; and too big to fall back on its developing-country status as a way to resist making sacrifices to stabilize the world economy and mitigate environmental damage. LOST MOMENTUM Unfortunately, China has failed to maintain this positive momentum in its foreign policy, damaging U.S.-Chinese relations in the process. The most dramatic change is in its North Korea policy: rather than pressuring Pyongyang after its nuclear and missile tests in the spring of 2009, Beijing seems to have doubled down on its economic and political ties with Kim Jong Il’s regime. Knowledgeable observers believe that trade and investment relations between China and North Korea have deepened over the past three years. There has also been frequent high-level public diplomacy between Chinese and North Korean leaders, including two visits by Kim to China last year. Last October, Zhou Yongkang, a member of the Chinese Communist Party’s Politburo Standing Committee, stood with top members of the Kim regime during the Korean Workers’ Party’s anniversary celebration. This attention was most welcome in Pyongyang during the regime’s sensitive transition period, in which Kim has been grooming his youngest son, Kim Jong Un, to eventually take over. Driven by the fear of a precipitous collapse of a neighboring communist regime and the reduction of Chinese influence on the Korean Peninsula, Beijing has fallen back on long-held conservative Communist Party foreign policy principles in backing North Korea. In particular, it stood by the Kim regime during the course of several crises sparked by Pyongyang last year. In May, an international commission determined that a North Korean submarine had indeed sunk the South Korean naval ship Cheonan in March; for its part, China refused to review the evidence and protected North Korea from facing direct criticism in the UN Security Council. In so doing, Chinese leaders alienated many in the international community, especially South Korea, Japan, and the United States. Beijing similarly protected North Korea from international condemnation after Pyongyang revealed last fall that it had secretly developed a uranium-enrichment facility. And then, after North Korea shelled a South Korean island in November, Beijing once again adopted an agnostic pose, simply calling for calm and warning all sides against any further escalation. The only specific warning it could muster was its ultimately unsuccessful effort to dissuade U.S. warships involved in joint U.S.-South Korean naval exercises from entering the Yellow Sea, which overlaps with China’s exclusive economic zone. The picture on Iran is more mixed, in part because the Bush administration and U.S. partners had made such limited progress on eliciting China’s cooperation before 2008. Beijing’s efforts to water down UN Security Council Resolution 1929 — which imposed a fourth round of sanctions against Tehran in June 2010 — therefore cannot be seen as retrograde behavior. In fact, the Obama administration deserves credit for managing to get any resolution passed at all. Optimists can point to the fact that these UN sanctions — which include an arms embargo and financial measures — might cause some real discomfort to influential figures in Iran. 
Still, the sanctions placed no direct pressure on Iran’s lucrative energy sector. In what might be a sign of progress in China’s policy on Iran, media reports suggest that China slowed its pursuit of new energy deals with Iran in the months following the passage of the sanctions resolution. China’s continued pursuit of oil and gas agreements with Iran, even as new international sanctions have been leveled against the country, has long been a sore point for those worried about Iran’s nuclear ambitions. Many fear that as European and Japanese firms leave the Iranian market, Chinese firms will simply “backfill” that economic space. It is too soon to judge the meaning of any alleged change in China’s policy toward Iran. Not much time has passed since the adoption of the latest UN resolution; moreover, the reasons behind the reported slowdown in new Chinese business activity in Iran remain unclear (purely economic issues may be the cause). It is also possible that any newfound Chinese restraint in Iran is less a symptom of a sudden acceptance of its role as a responsible stakeholder and more a sign of its grudging, and potentially temporary, acquiescence to unilateral measures enacted by the United States and Europe that target third-country firms working in Iran. Beijing views such sanctions as illegitimate and unfair. Last year was also marked by bilateral tension between the United States and China over such issues as Chinese Internet hacking and media restrictions, U.S. arms sales to Taiwan, and U.S. President Barack Obama’s meeting with the Dalai Lama. Even though U.S. policies on these issues were not new, the reaction in Beijing was more strident than in the past. China was also rankled by U.S. Secretary of State Hillary Clinton’s diplomacy regarding the management of sovereignty disputes in the South China Sea at the ASEAN Regional Forum meeting in Vietnam last July. China is the only nation in the region that claims all the disputed islands in the sea. Its expansive claims are also ambiguous, relying on maps that predate the People’s Republic of China and sometimes on vague terms such as “historic waters,” which carry no validity in international law. At the meeting, Clinton called for the peaceful settlement of differences, freedom of navigation, a legal basis for all claims rooted in customary international law, and multilateral confidence-building measures. Even though Clinton did not specifically name China and her comments did not change the United States’ traditional neutrality on maritime sovereignty disputes, the U.S. initiative was unwelcome in Beijing. The Chinese foreign minister’s harsh reaction at the conference — warning regional actors against collaborating with outside powers in dealing with the disputes — created tension between China and relevant ASEAN states and between China and Japan, which, like the United States, has no territorial claims in the South China Sea but is concerned about maintaining freedom of navigation there and regional security. BEIJING’S CONFIDENT INSECURITY What explains the acerbic turn in Beijing’s foreign policy? Rather than a simple assertion of its newfound power, China’s negative diplomacy seems rooted in a strange mix of confidence on the international stage and insecurity at home. Since the onset of the financial crisis in 2008, Chinese citizens, lower-level government officials, and nationalist commentators in the media have often exaggerated China’s rise in influence and the declining power of the United States. 
According to some of my Chinese interlocutors, top officials in Beijing have a much more sober assessment of China’s global position and of the development challenges ahead. Yet those domestic voices calling for a more muscular Chinese foreign policy have created a heated political environment. Popular nationalism, the growth in the number of media outlets through which Chinese citizens can express their views, and the increasing sensitivity of the government to public opinion in a period of perceived instability have provided the space for attacks on the United States and, by association, criticism of Beijing’s U.S. policy as too soft. These are not just the views of those far from power, however: the authors of such critiques have notably included active-duty military officers and scholars at state-run think tanks and universities. Apparently gone are the days when Chinese elites could ignore these voices. The government currently seems more nervous about maintaining long-term regime legitimacy and social stability than at any time since the period just after the 1989 Tiananmen massacre. Party leaders hope to avoid criticism along nationalist lines, a theme that has the potential to unify the many otherwise disparate local protests against Chinese officials. Moreover, individual officials need to foster their reputations as protectors of national pride and domestic stability during the leadership transition process, which will culminate in 2012 with the party’s formal selection of a successor to President Hu Jintao. Such an environment does not lend itself to policies that might be seen as bowing to foreign pressure or being too solicitous of Washington. Further complicating matters is the fact that an increasing number of bureaucracies have entered into the Chinese foreign policymaking process, including those of the military, energy companies, major exporters of manufactured goods, and regional party elites. This is a rather new phenomenon, and the top leadership seems unwilling or unable to meld the interests of these different groups into a coordinated grand strategy. Some of these domestic actors arguably benefit from China’s cooperation with pariah states, expansive and rigid interpretations of sovereignty claims, and, in some cases, tension with the United States and its allies. They might benefit less — or even be hurt — by the sort of Chinese internationalism sought by the European Union, Japan, South Korea, the United States, and others. Therefore, nationalist pundits and bloggers in China find allies in high places, and top government officials are nervous about countering this trend directly. The result has been the creation of a dangerously stunted version of a free press, in which a Chinese commentator may more safely criticize government policy from a hawkish, nationalist direction than from a moderate, internationalist one. According to my sources in China, these factors produce two deleterious effects on Chinese foreign policy. First, for domestic and bureaucratic reasons, Beijing elites need to react stridently to all perceived slights to national pride and sovereignty. When, for example, various Asian states sided with Clinton at the ASEAN meeting in Hanoi, Chinese Foreign Ministry officials felt compelled to respond in caustic terms that alienated several of China’s southern neighbors. 
The negative Chinese reaction to Japan’s jailing of the fishing boat captain on domestic legal grounds was predictable, but the Chinese government was especially bellicose in its response: Beijing cut off rare-earth shipments to Japan and, perhaps most important, demanded an official apology and reparations after the Japanese had already acceded to Chinese demands to release the ship’s captain and crew. This may have impressed domestic audiences in China, but it deeply alienated the Japanese public, which, according to recent polls, now holds very negative views of China. All of this trouble is occurring while the Democratic Party of Japan — traditionally considered very accommodating to China — is Japan’s ruling party. The timing of the tense state of Chinese-Japanese relations thus speaks volumes about the opportunity costs of China’s diplomatic truculence. Similarly, no one believes that China truly supports North Korea’s military provocations or development of nuclear weapons. But Beijing’s concerns about maintaining domestic stability in North Korea, peace on the Korean Peninsula, and social stability in China have prevented Chinese officials from criticizing North Korea publicly or allowing the UN Security Council to do so. What is more, these interests also keep Chinese officials from refuting conspiracy theories in the Chinese media and on the Internet that the United States and South Korea plotted to exacerbate tensions on the Korean Peninsula to create an excuse to carry out military exercises near China’s borders. To the contrary, the Foreign Ministry only fed the fire in July and November 2010 by warning the United States not to place warships in waters near China without Beijing’s permission. This move may have won some favor within the Chinese military and the Chinese public, but the diplomatic costs of being seen to pardon or even defend Pyongyang’s actions were high in Seoul, Tokyo, and Washington. A truly assertive great power would not allow a small pariah state to hijack its foreign policy in such a fashion. The second negative and important effect on China’s foreign policy is that Beijing has become less likely to join the international community in tackling global problems. For example, a tough Chinese stand on North Korean or Iranian nuclear proliferation is now easily portrayed by nationalist elements as an accommodation to the United States. At the same time, domestic interest groups — such as energy companies and financial institutions in the case of sanctions against Iran and economic interests in northeastern China and the military in the case of North Korea — oppose policy innovations that would hurt their parochial interests. Such groups can express themselves directly in a more diversified policy process, and they can also use the media and the Internet to create a negative domestic political environment for policy changes. WHAT BEIJING CAN GAIN Throughout 2009, many Chinese both inside and outside the government believed that the new Obama administration was seeking to accommodate China, either as a matter of political orientation or based on a realistic assessment of the perceived global power transition. That year, U.S. officials discussed the need for mutual strategic reassurance, eschewed new arms sales to Taiwan, and kept the Dalai Lama from meeting with Obama in Washington prior to Obama’s trip to China in November. 
On that visit, China and the United States issued a joint statement in which the two nations pledged to respect each other’s “core national interests” and sovereignty. But then, in early 2010, as many in Beijing saw it, Washington appeared to reverse course. In this view, the Obama administration violated China’s core interests by notifying Congress of the impending sale of defensive weapons to Taiwan, criticizing China’s poor record on Internet freedom, and allowing for a private visit between Obama and the Dalai Lama. It is only logical, according to many Chinese observers, that Beijing should in turn refuse to assist the United States in pursuing what Beijing believes to be U.S. core national interests, such as preventing nuclear proliferation in Iran and North Korea or stabilizing the U.S. economy and the international financial system through the sale of U.S. Treasury bills. But understanding U.S.-Chinese relations as a horse trade over Chinese and U.S. core national interests is intellectually incorrect and politically unhelpful. The most basic problem is that almost everything the United States is asking of China falls directly in line with China’s interests. In other words, curbing nuclear proliferation or policing international waters for pirates is not “assisting” the United States — it is serving China’s own interests as well. Consequently, if China reduces its cooperation with the United States on such issues, it will harm its own foreign policy portfolio. China’s North Korea policy provides the clearest example. If the six-party talks were to fail permanently, the biggest loser — besides the North Korean people — would arguably be China. Beijing justifiably gained diplomatic prestige by becoming a leader in the six-party talks; the other parties were quick to credit China for taking an unexpectedly proactive stance. But just as China gained praise for the progress in the six-party talks in 2006 and 2007, it now suffers a loss of prestige when North Korea refuses to abide by the demands of the international community. How can China portray itself as a great power when it cannot even influence the behavior of its weak neighbor and ally, which is entirely dependent on its economic ties to China? Moreover, since China maintains basically normal economic and diplomatic relations with North Korea — despite the UN Security Council sanctions it helped create and the much stricter unilateral sanctions by Japan and South Korea — its relationship with North Korea raises suspicions in regional capitals about Beijing’s long-term intentions. North Korea’s nuclear program is also likely to spur the buildup of new military hardware and the deepening of alliances in East Asia. Japan considers Pyongyang’s development of deliverable nuclear weapons a real threat, for example. In the most dramatic, although arguably least likely, scenario, advancements in North Korea’s nuclear program might cause Japan to scrap its nuclear taboo and develop its own nuclear weapons. What is less appreciated is how North Korean nuclear developments could affect Japan’s conventional military programs in ways that would worry China. It is reasonable to expect increased Japanese participation in the ongoing U.S.-led program to develop a regional missile defense system in East Asia, an initiative that China considers a challenge to its own deterrent capabilities. 
Moreover, Japan seems likely to jettison its long-standing self-restraint on developing offensive conventional capabilities by investing in an arsenal of fast, conventionally tipped strike weapons that could destroy North Korean missiles on the ground before launch. These strike weapons would have multiple uses, and their development would have symbolic meaning for the future of Japan’s overall military posture, making such an outcome undesirable from China’s perspective. If left unchecked, the further development of North Korean nuclear weapons would also lead to greater and more active cooperation between the United States and its regional allies. Many components of this effort would be unwelcome in Beijing. For example, South Korea might more readily join a regional missile defense program with Japan and the United States. More generally, since the international community is also concerned about the transfer of nuclear materials from North Korea to other states and nonstate groups, the United States and its regional allies are likely to enhance their naval cooperation and exercises, as well as active inspections of North Korean shipping vessels as part of the Proliferation Security Initiative. In a related way, North Korea’s military provocations last year led to a series of U.S.-South Korean military exercises, including antisubmarine warfare training, and a tightening of security consultations among Seoul, Tokyo, and Washington. In the near term, Chinese leaders must consider the potential for instability or war on the neighboring Korean Peninsula in the event North Korea were to retaliate against these new measures. And over the long term, Beijing is likely to be concerned about the effects of this increased cooperation on its own military position in the region. What is true for China’s North Korea policy is also true for its policy toward Iran. China is a net importer of energy with a large export sector that would be greatly affected by sudden, sharp price increases in energy, which would raise the costs of both production and shipping. This reality should affect its calculus with Iran, a major destabilizing force in the energy-rich Middle East and Persian Gulf — and one that would likely become only more destabilizing if its regime gained the added confidence of a nuclear deterrent. Moreover, Israel considers the development of Iranian nuclear weapons an existential threat; it appears quite probable that if diplomacy fails to alter the current trajectory of Iran’s nuclear ambitions, Israel will eventually take military action against Iran. Such a turn of events could lead to massive instability in the region, threatening the free flow of energy on which China and all other net importers rely. It is therefore in Beijing’s interest to work more closely with Washington and its allies — all of which would like to see stable energy markets — to craft diplomatic approaches that might prevent such an outcome. PERSUASION, NOT CONTAINMENT There may be some cause for optimism, however restrained, regarding Beijing’s recent turn toward a more conservative and reactive foreign policy. Fortunately for the United States and its allies, there is an active debate among elites in Beijing about the costs and benefits of the country’s current policy line (according to my interlocutors, this debate is most heated about China’s recent policies toward North Korea). 
Washington and other governments have an opportunity to shape the international environment in a way that can assist those Chinese elites who are espousing creative, constructive, and assertive policies while undercutting those who advocate reactive, conservative, and aggressive ones. The best way to do this is to consistently offer China an active role in multilateral cooperative efforts — and without displaying jealousy of the newfound influence China might gain by accepting this role. At the same time, the United States and its allies need to emphasize that they will react to the challenges posed by North Korea or Iran with or without Chinese cooperation; China’s interests will suffer if it obstructs those efforts or even stands on the sidelines. Such an approach has historically been successful. In the mid-1990s, Beijing similarly alienated many of its neighbors and the United States, by bullying Taiwan, adopting a muscular posture toward the Philippines in the South China Sea, and overreacting to enhanced U.S. security cooperation with Japan. But a combination of wise and firm policies by Washington and its partners (for example, the “Nye Initiative” to strengthen the U.S.-Japanese alliance and the dispatch of two aircraft carrier battle groups to the waters off Taiwan during a crisis in 1996) helped foster the ascendance of more moderate thinking in Beijing. By 1997, Chinese diplomacy was on a much more positive track. There is no reason to believe that a similar process cannot occur today — but given the perceptions about China’s increased power and potential domestic instability discussed above, the challenges now may be greater than they were in the 1990s. Although some in Washington and many in Beijing grossly exaggerate when they say that the United States has “returned to Asia” under Obama, there is no question that China and other regional players have noticed that Obama and his principal advisers — Secretary of State Clinton, Secretary of Defense Robert Gates, and National Security Adviser Thomas Donilon — have traveled often to the region, including in November 2010, when Obama and Clinton went on separate multination tours of Asia. More concretely, the U.S.-South Korean military exercises in the Yellow Sea following the November attack by North Korea and the trilateral meeting of Japanese, Korean, and U.S. security officials in Washington demonstrated that the United States and its partners have diplomatic and security options even without China’s active cooperation. Beijing does not like such initiatives — all the more reason for China to return to a more creative, assertive, and reassuring set of policies to solve the problems that caused the United States and its allies to react this way in the first place. The Obama administration should continue to strengthen U.S. relationships in Asia. Such an agenda is a good idea under any circumstances. But especially when China’s policies are damaging to everyone’s interests — including its own — Washington should underscore that even though it would prefer to address problems with Beijing’s active cooperation, there are other, less attractive options available. This is persuasion and not containment; China is still being asked to play a larger, not smaller, role both regionally and globally. In addition, Washington should portray the prospect of cooperation not as a request based on U.S. national interests but as a means through which Beijing can pursue its own interests and, at the same time, reassure other actors. 
The fact that the term “core national interest” has not been used by a high-level U.S. official since the 2009 joint statement suggests that U.S. government officials already understand the counterproductive psychology that such terms foster in China’s strategic thinking. Instead, U.S. diplomacy toward China has appropriately emphasized the pursuit of mutual interests while recognizing areas of serious difference. Finally, as it has in the past, Washington should publicize and laud the examples of past and current Chinese cooperation with the international community in addressing global problems. In 2010, the Obama administration’s policies in Asia had a positive, albeit limited, effect. Despite ongoing differences between China and the United States — over North Korea, Chinese currency valuation, and the U.S. Federal Reserve’s “quantitative easing” policy — Beijing nonetheless sought to improve bilateral relations in the lead-up to President Hu’s visit to the United States in January of this year. For example, Beijing allowed for the restoration of military-to-military dialogue in the fall of 2010 after a nine-month hiatus caused by China’s disapproval of U.S. arms sales to Taiwan earlier in the year, and China’s minister of defense, General Liang Guanglie, invited Defense Secretary Gates to visit China the same month Hu traveled to Washington. There are also signs that China is beginning to reach out to ASEAN member states to address ongoing security concerns that were exacerbated by Beijing’s bullying at the last ASEAN Regional Forum. Finally, China may have played a constructive role in reining in North Korea after it threatened South Korea in response to a South Korean artillery exercise off Yeonpyeong Island in December 2010, just a month after North Korea’s attack on the island: as of this writing, no retaliation had occurred. That is the good news. What is less commonly noted, however, is that the same factors that have caused China’s recent tensions with its neighbors and the United States have produced an arguably stickier and more consequential long-term problem: they have retarded, if not halted outright, what was a very positive and much-needed shift in Chinese foreign policy during the last two years of the Bush administration. During that period, Beijing showed a willingness to soften some of its traditional prohibitions on an assertive foreign policy so as to assist the international community in dealing with problems faced by all global actors, including China. Even if U.S.-Chinese ties improve and China reverses the negative trends in its regional diplomacy, Washington may still be unsatisfied if the shift does not include enhanced Chinese participation in international efforts to tackle global problems, especially proliferation in North Korea and Iran. For the United States and its allies, securing this kind of Chinese cooperation may be the highest hurdle to clear. Obama has an impressive group of advisers on Asia, but the domestic political and psychological factors in China will create reasons for pessimism, at least until China’s succession is complete in 2012. Unfortunately, without such a change in China’s policies, solving problems from proliferation to climate change will be much more difficult for the United States and the rest of the international community. In this one important sense, the United States needs a more assertive China. Reprinted by permission of FOREIGN AFFAIRS, March/April 2011, Vol 90, No 2. 
Copyright 2011 by the Council on Foreign Relations, Inc. |
bfbca0955fbaafb5cb61f4a6f235b04c | https://www.brookings.edu/articles/the-aging-of-america-will-the-baby-boom-be-ready-for-retirement/ | The Aging of America: Will the Baby Boom Be Ready for Retirement? | The Aging of America: Will the Baby Boom Be Ready for Retirement? This article is part of a broader study of saving funded by the National Institute on Aging and TIAA-CREF. The baby boom generation—the roughly 76 million people born between 1946 and 1964—has been reshaping American society for five decades. From jamming the nation's schools in the 1950s and 1960s, to crowding labor markets and housing markets in the 1970s and 1980s, to affecting consumption patterns almost continuously, boomers have altered economic patterns and institutions at each stage of their lives. Now that the leading edge of the generation has turned 50, the impending collision between the boomers and the nation's retirement system is naturally catching the eye of policymakers and the boomers themselves. Retirement income security in the United States has traditionally been based on the so-called three-legged stool: Social Security, private pensions, and other personal saving. Since World War II the system has served the elderly well: the poverty rate among elderly households fell from 35 percent in 1959 to 11 percent in 1995. But the future is uncertain. Partly because of the demographic bulge created by the baby boomers, Social Security faces a long-term imbalance. The solution, even if it involves privatization, must in some way cut benefits or raise taxes. The private pension system has changed dramatically in ways that give workers increased discretion over participation, contribution, and investment decisions and easier access to pension funds before retirement—thus raising questions about how well future pensions can help finance retirement. Personal saving, also problematic, has remained anemic for over a decade. Net personal saving other than pensions has virtually disappeared. These developments would be enough to raise concern about retirement preparations under the best of circumstances. But the prospect of a huge generation edging unprepared toward retirement raises worrisome questions about the living standards of the baby boomers in retirement, the concomitant pressure on government policies, and the stability of the nation's retirement system. Are the baby boomers making adequate preparations for retirement? In part, the answer depends on what is meant by "adequate." One definition is to have enough resources to maintain preretirement living standards in retirement. A rule of thumb often used by financial planners is that retirees should be able to meet this goal by replacing 60-80 percent of preretirement income. Retired households can maintain their preretirement standard of living with less income because they have more leisure time, fewer household members, and lower expenses. Taxes are lower because retirees escape payroll taxes and the income tax is progressive. And mortgages have, for the most part, been paid off. On the other hand, older households may face higher and more uncertain medical expenses, even though they are covered by Medicare. From a public policy perspective, assuring that retirees maintain 100 percent of preretirement living standards may be overly ambitious. But should policymakers aim to ensure that they maintain 90 percent of their living standards? Or that they stay out of poverty? Or use some other criterion? 
Retirement planning takes time, and these issues need to be addressed sooner rather than later. A second big question is how to measure how well baby boomers are preparing for retirement. Studies that focus only on personal saving put aside for retirement yield bleak conclusions. One found that in 1991 the median household headed by a 65-69 year old had financial assets of only $14,000. But expanding the measure to include Social Security, pensions, housing, and other wealth boosts median wealth to about $270,000. A third issue—crucial but as yet little explored—is which baby boomers are not providing adequately for retirement and how big the gap is between what they have and what they should have. Some boomers are doing extremely well, others quite poorly. Summary averages for an entire generation may not be useful as descriptions of the problem or as suggestions for policy. The uncertain prospects for the baby boomers in retirement are particularly troubling because, as a society, we as yet understand little about the dynamics of retirement. Only one or two generations of Americans have had lengthy retirements, and the crucial retirement issues—health care, asset markets, Social Security, life span—keep changing rapidly, making long-term predictions even harder. How Well Are the Boomers Doing? Interpreting the Evidence Only a few studies have examined how well the boomers are preparing for retirement. The Congressional Budget Office recently compared households aged 25-44 in 1989 (roughly the boomer cohort) with households the same age in 1962. Boomer households, it turned out, had more real income and a higher ratio of wealth to income than the earlier generation. Though this finding seems promising, in fact the CBO study implies that baby boomers are going to do well in retirement only if (i) the current generation of elderly is thought to be doing well, (ii) the retirement needs of the two generations are the same, (iii) the experience from middle age to retirement is the same for both, and (iv) boomers will be content to do as well in retirement as today's retirees. None of these is certain. For example, although today's elderly are generally thought to be doing well, some 18 percent were living below 125 percent of the poverty line in 1995. And the boomers' longer life expectancy means that they will need greater wealth for retirement. Whether the boomers and the previous generation will have similar experiences from middle age to retirement is an open—and still evolving—question. The earlier generation benefited from the growth of Social Security and housing values in the 1970s. But the boomers have gained from the dramatic rise in the stock market since the early 1980s, from smaller household size, which reduces living expenses, and from higher employment rates for women, which will raise their pension coverage. In addition, boomers are more likely to be in white-collar work and so should expect earnings to peak later in life and be able to work longer if they wish. Finally, boomers may not be content with the living standard of today's retirees. They may aim instead for retirement living standards more comparable to those of their own working years. For all these reasons, how to interpret CBO's finding is unclear, even if the finding itself is unambiguous. The most comprehensive study of these issues was undertaken by Stanford's Douglas Bernheim in conjunction with Merrill Lynch. 
Bernheim developed an elaborate computer model that simulates households' optimal saving and consumption choices over time, as a function of family size, earnings patterns, age, Social Security, pensions, and other factors. He then compared households' actual saving with what the simulations indicated they should be saving. His primary finding, summarized in a "baby boomer retirement index," is that boomers are saving only about a third of what they need to maintain preretirement living standards in retirement. The index has attracted much attention but is not well understood. It does not measure the adequacy of saving by the ratio of total retirement resources (Social Security, pensions, and other assets) to total retirement needs (the wealth necessary on the eve of retirement to maintain preretirement living standards). Instead, it examines the ratio of "other assets" to the part of total needs not covered by Social Security and pensions. As a result, the index reveals little about the overall adequacy of retirement preparations (see table 1). In case A, a hypothetical household needs to accumulate 100 units of wealth. It is on course to generate 61 in Social Security, 30 in pensions, and 3 in other assets. Total retirement resources are projected to be 94 percent of what is needed to maintain living standards. But according to the boomer index, the household is saving only 33 percent of what it needs.

Table 1. Two Ways to Measure Adequacy of Retirement Saving
(units of wealth; total resources index = [(1)+(2)+(3)]/(4); boomer index = (3)/[(4)-(1)-(2)])

Case   Social Security (1)   Pension (2)   Other assets (3)   Needs (4)   Total resources index (%)   Boomer index (%)
A               61                30              3               100               94                      33
B                0                 0             33               100               33                      33
C               20                20             20               100               60                      33
D               61                 0             33               100               94                      85
E               61                 0             33               100               78                      45
F               61                30              3                95               99                      75
G               61                30              3                93              101                     150

Thus, a baby boomer index standing at one-third does not imply that, absent changes in saving behavior, boomers' retirement living standards will be one-third their current living standard. It could mean that (as in case B), or it could mean retirement living standards will be 60 percent of current living standards (case C), or 94 percent (case A), or even over 99 percent (if Social Security and pensions were 99 and other saving were 0.33). A second problem is that changes in the boomer index over time, or differences across groups, do not correspond to changes or differences in the adequacy of overall retirement saving. If, as in case D, the household in A rolls over its pension into an IRA, the boomer index soars, though total retirement resources are unchanged. If, as in case E, household A rolls over half of its pension into other assets and spends the rest on a vacation, the household has a higher boomer index, but less adequate total retirement preparation. Finally, the boomer index can be extremely sensitive to estimates of retirement needs. In case F, retirement needs are 5 percent lower than in A, and the index rises from 33 percent to 75 percent. In case G, retirement needs are 7 percent lower than in A, and the index rises to 150 percent. Bernheim points out that his model understates the retirement saving problem. The wealth measure, he notes, includes assets the household has earmarked for retirement as well as half of other (non-housing) wealth. The model also assumes no cuts in future Social Security benefits, no increases in Social Security taxes, and no increase in life span. But in other ways the model overstates the problem. 
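To make the contrast between the two measures concrete, the short sketch below recomputes both indices for hypothetical cases A through D from Table 1. The formulas and case values come straight from the table; the Python code and its function names are only an illustrative reconstruction, not part of the Bernheim-Merrill Lynch model itself.

```python
# Illustrative sketch only: recompute the two adequacy measures from Table 1
# for hypothetical cases A-D (values in units of wealth, as in the table).

def total_resources_index(ss, pension, other, needs):
    # Share of total retirement needs covered by all resources combined:
    # [(1) + (2) + (3)] / (4)
    return (ss + pension + other) / needs

def boomer_index(ss, pension, other, needs):
    # Share of the needs NOT covered by Social Security and pensions that is
    # covered by "other assets": (3) / [(4) - (1) - (2)]
    return other / (needs - ss - pension)

cases = {
    "A": (61, 30, 3, 100),
    "B": (0, 0, 33, 100),
    "C": (20, 20, 20, 100),
    "D": (61, 0, 33, 100),
}

for name, values in cases.items():
    print(name,
          f"total resources index: {100 * total_resources_index(*values):.0f}%",
          f"boomer index: {100 * boomer_index(*values):.0f}%")
```

Case A comes out near 94 percent on the first measure but only 33 percent on the second, and case D shows how rolling a pension into an IRA leaves total resources unchanged while the boomer index jumps to about 85 percent, which is the distinction the table illustrates.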
The model assumes that any man not covered by a pension at the time of the survey, when respondents are 35-45 years old, will never be covered, though pension coverage rates tend to rise a good bit as a worker ages. The model also likely understates pension benefits since it uses benefit data from the 1970s. Because the pension system grew rapidly from the 1940s to the 1970s, workers retiring in the 1970s likely had fewer years in the pension system and hence lower benefits than the boomers will upon retiring. The model excludes all housing wealth and inheritances—no small matter, since, by Bernheim's calculation, including housing would raise the index to 70 percent, and a fair proportion of boomers is likely to receive substantial inheritances. The model assumes that people will retire at age 65, though the normal Social Security retirement age will be 66 for most boomers, 67 for the youngest. The model also excludes all earnings after "retirement," though about 18 percent of the income of the elderly today is from working. And with partial retirement on the increase, retired boomers may work even more regardless of the adequacy of saving. Finally, the model makes no allowance for retirees' lower work-related expenses or lower expenses for mortgages or other durable goods—such as furniture, appliances, and cars. Whether all these biases are larger or smaller than those in the opposite direction noted by Bernheim is unclear. Measuring and including these items is an important area for further research. A New Perspective Fundamental questions about retirement saving remain not only unanswered, but unasked. What proportion of households is saving adequately for retirement? What are the characteristics of those households? How has the proportion changed over time? Among those not saving enough, how big is the problem? Table 2 begins to answer such questions by presenting my own estimates of the proportion of married households, with the husband working, who are "on track" toward accumulating enough wealth for retirement. The measure of "on track" is based on calculations in a study by Bernheim and John Karl Scholz, of the University of Wisconsin, that determines how much a household needs to have saved by a given age, given its earnings, prospective Social Security benefits, pension status, family size and other characteristics. (That study uses the Bernheim model described above, so the data suffer from all the biases already mentioned. Another bias here is that the sample includes only married couples where the husband works full time. Other married couples and singles are likely to be faring worse.)

Table 2. Proportion of Married Households Saving Adequately for Retirement (percent)

Proportion saving adequately when:
Year       Net assets exclude      Net assets include half      Net assets include all
           housing equity          of housing equity            of housing equity
All households (a)
1983              44                       66                          76
1986              53                       71                          78
1989              43                       63                          72
1992              47                       61                          70
Baby boomer households (b)
1989              48                       67                          73
1992              48                       63                          71

Source: Author's calculations from the Survey of Consumer Finances.
(a) Husband is aged 25-64 and works at least 20 hours per week.
(b) Husband was born between 1946 and 1964 and works at least 20 hours a week.

When housing equity is not counted, slightly less than half of all households—and about the same share of all boomers—were saving adequately in 1992. When half (all) of housing equity is counted, the adequacy rate climbs to 61 percent (70 percent). Adequacy rates rise with education and income. 
Within the baby boom generation, adequacy rates generally decline somewhat with age. They are higher for boomers with pensions than for those without, either because pensions raise households' overall wealth or because people more oriented toward saving and thinking about retirement are also more likely to have jobs with pensions. High adequacy rates do not necessarily require high levels of saving. For example, suppose annual retirement needs are 75 percent of final earnings. According to the Social Security Administration, Social Security benefits replace about 46 percent of final earnings for the average worker earning $50,000 at retirement. (Note that in this example Social Security replaces 61 percent—46/75—of total retirement needs, as in case A in table 1. The percentage would be higher for workers with lower earnings.) With pensions typically replacing 25-30 percent of final earnings, a household with Social Security and a pension would not need much more saving to maintain adequate living standards, especially if the household can work for a time in retirement or expects to receive bequests. As table 3 shows, the wealth shortfall among households that are not saving adequately (ignoring all housing equity) is relatively small for many. The median inadequate saver has a shortfall of $22,000, or about six months of earnings—a problem that could be solved either by postponing retirement for six months or by lowering retirement living standards a little. Even among 60-64 year-olds, the median inadequate saver could completely resolve his or her saving shortfall by working for two more years past age 65.

Table 3. Median Shortfall in Retirement Wealth (a)

Age                        Shortfall in dollars    Shortfall in terms of annual earnings
25-29                             2,960                           0.12
30-34                             3,400                           0.10
35-39                            13,180                           0.37
40-44                            26,940                           0.73
45-49                            33,500                           0.82
50-54                            65,100                           1.25
55-59                            51,800                           1.47
60-64                            75,470                           2.17
All households                   22,480                           0.52
Baby boomer households           13,480                           0.38

Source: Author's calculations based on the 1992 Survey of Consumer Finances.
(a) The sample is households not saving adequately for retirement when housing equity is not included.

Thus, the glass can be viewed as half full or half empty. When housing equity is ignored, the typical household seems to be barely saving adequately or just missing. When housing is included, over two-thirds of households appear to have more than the minimum needed, given their age and other factors. Roughly speaking, a third of the sample is doing well by any measure, a third is doing poorly by any measure, and the middle third is (or may be) just hanging in there. Both of the following statements are equally true. Up to two-thirds of the households are now saving at least as much as they should be. And two-thirds are "at risk" in that any deterioration in their situation could make it impossible for them to maintain their living standards in retirement. In short, two key factors matter tremendously to any characterization of the problem: the heterogeneity of saving behavior across households and uncertainty concerning the right measures of wealth to use. Areas of Uncertainty The boomers' prospects are also complicated by uncertainty in other areas: retirement patterns, life spans, home values, asset markets, health care costs, and the economy itself. Average age at retirement, which fell through the 20th century for men, may start rising regardless of the adequacy of saving. Many of today's jobs do not depend on "brawn" and can thus be done by older people. 
The normal Social Security retirement age will rise to 66 by 2009 and 67 by 2027 even if no further changes are made in Social Security. Partial retirement may matter as well. Many retirees cut back on work gradually rather than abruptly. According to a study by economist Christopher Ruhm, only 36 percent of household heads retire immediately at the end of their career jobs. Nearly half remain in the labor force for at least five years. Of workers eligible for a pension, 47 percent continue to work after leaving their career job. If people continue to work even after retirement, they will be better able to support living standards in retirement. A related uncertainty involves life span. Expected remaining life spans of 65 year-olds have grown in the past two decades and are projected to grow further. Living longer means having to stretch a given amount of money over a longer period. Uncertainty regarding home equity is twofold. First, how will housing prices evolve? Both demographic pressures and the reduction in tax rates in the 1980s may reduce the long-term value of housing. And, second, regardless of housing values, to what extent should housing be counted as part of household wealth? In recent decades, the elderly have been reluctant to cash in their housing equity. But baby boomers have been willing to extract housing equity and were major recipients of home equity lending booms in the 1980s and 1990s. It remains to be seen whether the boomers in retirement will act more like themselves in earlier years or like current retirees. In any case, a household with low financial assets that lives in a $300,000 house and refuses to dip into housing equity may not be considered a pressing social concern. Asset markets too are uncertain. Equity values cannot continue to grow as rapidly as they did in 1996. And even if the boomers accumulate what seem to be sufficient retirement funds, they will, loosely speaking, all want to cash in those funds at roughly the same time. That could mean massive sell-offs of stocks and bonds that could depress asset prices. Conceivably asset prices could fall sharply, but since markets are forward looking, asset prices may instead be stagnant for a long period. Finally, the evolution of health care costs and of the economy as a whole could have a major impact on the adequacy of retirement preparations. What's in Store? The retirement prospects for the baby boomers are uncertain. One issue is what policymakers and boomers themselves will accept as a reasonable goal for retirement living. More thought needs to be given as to how to assess living standards when, as a matter of biology, retirees face declining health. In addition, they typically have more leisure time and can literally substitute time for money. A second source of uncertainty is the boomers themselves. Whatever imponderables the economy as a whole may offer, baby boomers can improve their retirement prospects by saving more—that is, by reducing their current living standards. What can government do? First, keep the fiscal house in order by reducing the long-term budget deficit in ways that do not reduce private saving. Second, the government could provide, or encourage others to provide, financial education to workers and households on how much they need to save. Third, the government should encourage people to use the many saving incentives already in place. 
Fourth, judicious Social Security and pension reform, especially pension reform that raises pension coverage, could help resolve these problems and raise private saving at the same time. |
28d5eef0ef3f9272bba9f7489f330409 | https://www.brookings.edu/articles/the-contemporary-presidency-the-bush-white-house-first-appraisals/ | The Contemporary Presidency: The Bush White House: First Appraisals | The Contemporary Presidency: The Bush White House: First Appraisals When the disputed election of 2000 ended with the Supreme Court's decision on December 12, it effectively shortened the presidential transition to less than fifty days and complicated the incoming administration's personnel problems. Chief among George W. Bush's immediate hiring decisions was the choice of senior White House staff, those advisers with whom he would have the most day-to-day contact. Selecting an ideal White House staff is confounded by a host of factors: satisfying the president-elect's personal preferences, honoring political obligations, finding experts with the appropriate ideological hue, and achieving diversity goals. While these considerations shaped Bush's initial choices, the terrorist attacks of September 11, 2001, required instant adjustments that resulted in structural, procedural, and staff changes. This article examines Bush's first crack at assembling his White House and assesses its early performance as well as the staff and structural changes made in the wake of the terrorist attacks. In an effort to gain perspective on the Bush record, we compare his staff to the initial staffs of his three immediate predecessors—Bill Clinton, George H.W. Bush, and Ronald Reagan. More specifically, we examine appointments to the Executive Office of the President (EOP), including such senior staff members as the national security adviser and the director of the Office of Management and Budget. The conventional wisdom was that President Bush hired an older, wiser set of advisers than President Clinton, who had rewarded "the kids"—hard-working, youthful campaign staffers (Stephanopoulos 1999, 148; Houston 1993, 22). Furthermore, while Clinton worked hard to assemble a team that "looked like America," Bush hired establishment Republicans, particularly those with a conservative bent. However, staff biographies published in the National Journal reveal remarkable similarity between the two administrations. Adding Presidents George H.W. Bush and Ronald Reagan into the comparison provides a long-term look at presidents' initial staffing, revealing additional similarities as well as important differences. This article identifies the unique features of President Bush's staffing organization as well as recent additions. Part two discusses presidents' first attempts to staff the White House from 1981 through 2001, demonstrates key demographic characteristics, and concludes with an evaluative discussion of the Bush operation. Inaugural Innovations Although President Bush's staff possessed qualities similar to those of his predecessors, he imposed his own ideas about running a White House by making structural changes within the EOP, reflecting his administration's priorities, goals, and general approach to governing. He began his term by adding two new units: the Office of Strategic Initiatives (OSI) and the Office of Faith-Based and Community Initiatives (OFBCI). He bolstered the Office of the Vice President, and his cabinet was given both standard and untraditional functions (Nakashima and Milbank 2001, A1). 
The events of September 11, 2001, additionally imposed various structural and procedural changes that affected cabinet and White House staff. Each innovation represented a break with the Clinton presidency, although in some cases, there were roots in prior administrations. The OSI, led by Bush confidant Karl Rove, was designed to think ahead and devise long-term political strategy. "It is an effort to solve the problem that consistently dogs White House staffs: the pressure to respond to unexpected events and to react to daily news cycles, which causes presidential advisers to lose sight of the big picture" (Milbank 2001a, A1). The equivalent during the Reagan administration could have been the Office of Planning and Evaluation, led by Richard Beal, a colleague of pollster Richard Wirthlin. It is hardly unusual for presidents to create offices designed to ensure their political longevity. For instance, Reagan's Office of Political Affairs, initially led by Lyn Nofziger, was charged with maintaining and expanding his electoral coalition but was not afforded the opportunity to devise long-term strategy. The unique feature of the OSI was that the president's leading political adviser was in charge. George H.W. Bush relied on the strategic advice of Lee Atwater but did not provide him with a White House perch. Atwater resided at the Republican National Committee until health problems forced him to resign. After Atwater's death, the absence of political insight and strategy became a serious weakness in the administration and the reelection campaign. President Clinton used outside consultants James Carville, Paul Begala, Mandy Grunwald, and pollster Stanley Greenberg until the disastrous 1994 midterm elections. Subsequently, Dick Morris provided strategic input while running a consulting firm in which he offered advice to politicians of all stripes. The Bush administration clearly took a different approach by thoroughly integrating Rove into the White House chain of command. Though Rove was a polarizing figure in the early days of the administration, if his office has the capacity to create a successful long-term governing strategy, particularly in the aftermath of September 11, it will have been a sensible organizational solution to a persistent presidential problem. Prospects for success, however, are dim. According to one aide in the post-terrorism crisis period, "You can't predict events more than 72 hours out at most…We pretty much have a game plan for next week, but that could change" (Milbank and Graham 2001, A4). Clearly, these events have posed and will continue to pose great challenges for Rove and his new office. The White House OFBCI, established by executive order, was meant to demonstrate President Bush's commitment to "compassionate conservatism" by reaching out to faith-based and community organizations in an effort to help the needy. The initial legislative initiative endorsed by the administration, H.R. 7, primarily sought to ease government restrictions on religious organizations so that faith-based groups could more easily provide government services such as day care and alcohol rehabilitation. Numerous presidents have created offices solely for the sake of pursuing a single policy (Clinton for the Y2K problem, for example). But the establishment of such an office by means of an executive order is unusual and perhaps unwise since it can only be eliminated by issuing another executive order. 
Special White House offices create unreasonable expectations for constituencies that previously lacked a White House contact. Such expectations also create problems for an overburdened White House staff. Aside from structural innovations, President Bush expanded the influence of some positions, most notably the vice presidency. The stature of vice presidents has risen markedly since Jimmy Carter selected Walter Mondale in 1976, and Al Gore was clearly the most engaged vice president of the twentieth century. But Cheney's vast Washington experience, as well as his formidable role in the transition, has catapulted the vice presidency to new heights. According to one report, "Hardly anyone would minimize the enormous role Cheney plays in running George W. Bush's administration" (Barnes 2001, 814). The vice president's initial activities included devising energy policy, diplomacy, and congressional lobbying. In the aftermath of September 11, while intermittently placed in an "undisclosed location" for security reasons, the vice president continued to play an integral role in the administration. In fact, Cheney created the initial plan to set up the Office of Homeland Security. At the same time, his aides closely coordinated and collaborated with the president's staff. His chief of staff attended "most of the A-level meetings" at the White House, and two of his aides—Mary Matalin and Lewis Libby—also were titled assistant to the president (Barnes 2001). President Bush's reputation as one who likes to delegate authority, along with the impressive resumes of some cabinet members, led observers to expect the cabinet to play an enhanced role in the administration. According to one early forecast, "With their golden resumes, long years of public service, strong personalities and close ties to Mr. Bush, Vice President-elect Cheney, and the Republican establishment-in-waiting, the men and women of the emerging Cabinet can be expected to exert just as much influence over the administration as the staff in the White House exerts, if not more" (Kahn 2000, 1). The supposition was that these department heads would need little direction from the White House, particularly on day-to-day matters. But students of American politics remembered Jimmy Carter's failed attempt to form a "cabinet government" and how his White House staff rejected this approach in favor of centralizing control, maintaining the authority to rein in cabinet members when necessary. Though this centripetal force is quite powerful, the September 11 attacks will likely preclude the marginalization of key cabinet members in the Bush administration. Just days before the September 11 attack, Time magazine published an article declaring Secretary of State Colin Powell the "odd man out." Then the events on that fateful day enhanced and strengthened not only the role of Secretary Powell but a number of other cabinet secretaries as well (particularly Attorney General Ashcroft and Secretary of Defense Rumsfeld). Indeed, any department with important homeland security concerns instantly moved up a notch on the power ladder in Washington. Just as scholars have noted the presence of an "inner" and "outer" cabinet in which the original departments (State, War, Treasury, and Justice) dominate the president's time and attention, the events of September 11 have created a somewhat expanded inner cabinet. 
So while some observers anticipated the possibility of a more active cabinet, the tendency for certain members (given their prior relationship with the president or the relevance of their department) to exert more influence than others resulted in a variation on the inner and outer cabinet model. The foremost cabinet innovation to deal with terrorism, the "war cabinet," is a body "composed of top national security officials from the White House, CIA, State Department, and Pentagon [and] has become the main decision-making body determining how the United States will frame its response to the Sept. 11 attacks" (Allen and Sipress 2001, A3). After 9/11, Bush's organizational challenge was how to respond to the urgent need for homeland security. He had to choose between two basic approaches: the department model, a single operating agency with overall responsibility for preventing, protecting against, and responding to terrorist attacks; or the National Security Council (NSC) model, a White House office responsible for coordinating the various operating agencies and getting them to work as a team. Bush opted for the NSC model and chose a good friend, Governor Tom Ridge of Pennsylvania, to be his homeland security adviser. But with no authority over the operating units, Ridge's chances of success, observers calculated, would depend importantly on his perceived clout with the president and his personal leadership skills. At the same time, many in Congress were pushing a bill to create a new cabinet department whose secretary would be confirmed by the Senate and who would be expected to testify before congressional committees. The most important test of whether the NSC model under Ridge's leadership was effective came in December, when he proposed to the president a new agency merging those parts of government with responsibilities for protecting U.S. borders, such as the Coast Guard and the Customs Service. What happened next, according to the Washington Post, was that the proposal was leaked to the press and "the bureaucracies erupted…scuttling the border proposal." The lesson learned by Bush's team was that "ideas introduced piecemeal will be killed piecemeal" (Von Drehle and Allen 2002). This began a top-secret White House operation that eventually produced a surprising mega-proposal. Reversing course, the president on June 6, 2002, asked Congress to join him in creating a Department of Homeland Security with 169,154 employees and a budget of $37.5 billion. The major pieces of the new department—the third largest unit of the federal government—would be the Coast Guard, the Transportation Security Agency, the Immigration and Naturalization Service, the Customs Service, the Secret Service, and the Federal Emergency Management Agency. Bush would continue to have a cabinet-level Homeland Security Council and a homeland security adviser in the White House, both created by executive order. In January 2002, Bush turned to the executive order format again to create another White House office following his State of the Union challenge to the American people to commit themselves to four thousand hours of public service during their lives (Executive Order 13254). He called his new program the USA Freedom Corps, with a council similar to the Homeland Security Council, and gave an assistant, John Bridgeland, instructions to coordinate other government agencies in the volunteer business, such as the Peace Corps, AmeriCorps, and Senior Corps. 
Aside from these more ambitious innovations, the events of September 11 altered "business as usual" in the White House. In the immediate aftermath, most aides, whether in the Offices of Communications, Public Liaison, Political Affairs, or the OFBCI, assumed responsibilities pertaining to the attack and recovery (Milbank 2001b, A25). Deputy Chief of Staff Josh Bolten was put in charge of the Domestic Consequences Principals Committee, assessing the impact of the attacks on domestic policy. Presidential confidant Karen Hughes created a special White House-based public relations operation aimed at winning international support, particularly in the Islamic world, for the antiterrorist campaign (Pincus and DeYoung 2001, A18). While it is important to identify new features of an incoming administration, it is equally important to note the volatility of these innovations. If they fail to live up to expectations—or worse, if they create new problems—innovations must be quickly discarded. Presidents are rightly cautious when it comes to adding or subtracting White House offices and responsibilities. Unfortunately, they are often less adept at correcting their own mistakes. Hitting the Ground Running Seeking to avoid the missteps of the early Clinton administration as well as skepticism surrounding his ability to govern, President-elect Bush moved with surprising speed. Clinton had not chosen his White House staff until a week before his inauguration. But by January 4, 2001, Bush had nearly completed the selection of his senior aides. As he made his choices public, pundits were quick to highlight distinctions between the Clinton and Bush staffs, with Bush's people clearly getting higher grades. The National Journal, for instance, characterized the Bush team as "one of the most experienced senior staffs in modern memory" (Simendinger 2001, 246). Interestingly, as Table 1 illustrates, the characteristics of these staffs—age, gender, ethnicity—were remarkably similar.

TABLE 1. The Executive Office of the President (EOP): The "A" Team (Reagan 1981, Bush 1989, Clinton 1993, and Bush 2001) [table values not preserved in this text]

The average age of incoming staffers has remained steady since 1981. For all the raves about "seasoned veterans" and critiques of Clinton's youthful staff, the average age of Bush's "A" team was identical to Clinton's. Articles written in the early days of the Clinton administration portrayed his aides as "star struck young staffers" and compared the atmosphere to that of a college dormitory. The title of one op-ed piece—"Home Alone 3: The White House; Where Are the Grown-Ups?"—captures the sentiment among many observers (Krauthammer 1993, A31). Yet for the past twenty years, the average age of presidents' closest advisers has hovered in the mid-forties, a particularly productive and energetic period in the lives of many executives. Despite opposition cries that the Bush White House is nothing but "a bunch of white males," the numbers indicate that the president appointed women and minorities in numbers that more closely resemble Clinton than Bush the elder. Significantly, President Bush appointed women to more influential positions than any prior president. The Bush inner circle includes Karen Hughes, counselor to the president ("the most powerful woman ever to work on a White House staff"); National Security Adviser Condoleezza Rice; and Margaret Spellings (née La Montagne), assistant to the president for domestic policy. 
President Bush has appointed substantially more minorities than all three predecessors, and at the highest echelons, appointments include Rice and Alberto Gonzales, counsel to the president. The expanded role of Hispanics reflects the president’s Texas roots as well as the growing influence of this sector of the population. Surrounding oneself with a home state “mafia,” as the press sometimes charges, may be viewed as a president choosing loyalty over ability. But for two-term governors of the nation’s two largest states, it is hardly surprising that Reagan and Bush turned to the talent pools of California and Texas for executives. The first President Bush, lacking a true home state (Connecticut, Maine, or Texas), had a low percentage of home-state appointments. And despite the public impression of Washington being overrun by Arkansans when Clinton was president, Arkansas is a very small state, which was reflected in the low number of appointments. Finally, in terms of prior experience, the Bush administration turned most often to his campaign, reflecting that there had been less “ad hocery” than in many campaign organizations. As the well-financed front-runner, he had the luxury of picking experts who could move into the White House with him. The Clinton administration was the only one in which working in the executive branch was not among the top two occupations, a phenomenon explained by the twelve-year dearth of Democratic presidents. This absence of Democratic presidents resulted in a smaller talent pool of former White House staff members. Working within these constraints, Clinton recruited from the halls of Congress, where Democratic aides and advisers bided their time between presidential elections. What is most surprising in this longitudinal comparison is that much of conventional wisdom is wrong. The Clinton administration was not run by youngsters, and the Bush administration was not hostile to appointing minorities and women. The realities of White House staffing defy popular myth. Assessing the Bush Team: Early Missteps Mistakes are endemic to the start of any administration. A lethal combination of early arrogance and euphoria often derail best intentions. George W. Bush’s first stumble was over the nomination of Linda Chavez for labor secretary. Some blamed the debacle on a lax vetting process, but Chavez withdrew quickly and a less controversial successor was named and confirmed without incident. Having recovered from this mishap, the White House endured an eruption of criticism over the delayed stock divestitures of senior staff and cabinet members, especially the holdings of Secretary of the Treasury Paul O’Neill. The heat was turned up even more when Karl Rove met with lobbyists for Intel, a company in which he owned stock, thereby opening the door to political opponents who promptly demanded an investigation. These missteps resulted in bad publicity that may have distracted the White House officials but did not prove disabling. In contrast, the newly established White House OFBCI attracted controversy from day one when Republicans and Democrats alike began to question the constitutionality of financially assisting religious institutions that provide government services. The mission of this office not only created opposition within both parties but created enemies and allies within the religious community—the very group that supposedly had the most to gain from such an office. 
Compounding the administration’s tribulations was a summertime leak from an employee of the Salvation Army who revealed that the White House was seeking to protect it from cities’ domestic partnership benefit requirements. The uproar created even more suspicion. Eventually, the OFBCI’s bill passed the House with anemic support—a vote of 233 to 198—and faced a rocky road in the Senate. The key Democratic supporter, Senator Lieberman, was not pleased with H.R. 7 and expressed his desire to rewrite the bill. Ultimately, the events of 9/11 relegated the issue to the sidelines, and OFBCI pursued the less controversial component of its agenda—tax incentives for charitable giving (the CARE Act, S. 1924). These missteps pale in comparison with the shock waves following Senator Jim Jeffords’s defection from the Republican Party, causing the party’s loss of majority status in the Senate, jeopardizing the president’s legislative agenda, and for the first time casting serious doubt on the performance of the White House staff. Why didn’t they know about Jeffords’s apparent dissatisfaction? If they did know, why didn’t they do something? As we enter the summer of 2002, it is not clear that the staff innovations designed to meet the challenges of “America’s new war” can achieve the necessary level of integration and cooperation. Nevertheless, the Bush team appears to have established an advisory system reflective of a unique balance between White House staff and cabinet input—a system that fits the needs of today’s crisis atmosphere. The president will have to respond to cracks that appear in his administrative structures and personnel weaknesses, but he deserves credit for diminishing the intense public fear in the aftermath of 9/11 by responding in a manner that suited the needs of the country. In the end, it is impossible for any administration to be mistake-free. Stumbling is inevitable. Still, President Bush benefited from his predecessors’ mistakes. His transition and first days seemed like a cakewalk compared to the unrelenting criticism faced by President Clinton. Assembling a staff of seasoned veterans in less than fifty days is no small feat. Avoiding all manner of mistakes is well beyond the realities of governing in Washington. References Allen, Mike, and Alan Sipress. 2001. Attacks refocus White House on how to fight terrorism. Washington Post, September 26. Barnes, James A. 2001. The imperial vice presidency. National Journal 33 (11, March 17): 814. Houston, Fiona. 1993. Youth actively served by junior Clinton aides. New York Times, March 28. Kahn, Joseph. 2000. Bush’s selections signal a widening of cabinet’s role. New York Times, December 31. Krauthammer, Charles. 1993. Home Alone 3: The White House; where are the grown-ups? Washington Post, May 14. Kumar, Martha. 2002. Recruiting and organizing the White House staff. PS 35 (1): 35-40. Milbank, Dana. 2001a. Serious “strategery”; as Rove launches elaborate political effort, some see a nascent Clintonian “war room.” Washington Post, April 22. –. 2001b. White House staff switches gears. Washington Post, September 17. Milbank, Dana, and Bradley Graham. 2001. With crisis, White House style is now more fluid. Washington Post, October 10. Nakashima, Ellen, and Dana Milbank. 2001. Bush cabinet takes back seat in driving policy. Washington Post, September 5. Pfiffner, James. 1996. The strategic presidency: Hitting the ground running. 2d ed. Lawrence: University Press of Kansas. Pincus, Walter, and Karen DeYoung. 2001. U.S. 
says new tape points to Bin Laden. Washington Post, December 9. Simendinger, Alexis. 2001. Stepping into power. National Journal 33 (4, January 27): 246. Stephanopoulos, George. 1999. All too human. New York: Little, Brown. Von Drehle, David, and Mike Allen. 2002. Bush plan’s underground architects. Washington Post, June 9. |
70aa6eba7d22e48f4a8d23df07af3be7 | https://www.brookings.edu/articles/the-cyber-terror-bogeyman/ | The Cyber Terror Bogeyman | The Cyber Terror Bogeyman We have let our fears obscure how terrorists really use the Internet. About 31,300. That is roughly the number of magazine and journal articles written so far that discuss the phenomenon of cyber terrorism. Zero. That is the number of people who have been hurt or killed by cyber terrorism at the time this went to press. In many ways, cyber terrorism is like the Discovery Channel’s “Shark Week,” when we obsess about shark attacks despite the fact that you are roughly 15,000 times more likely to be hurt or killed in an accident involving a toilet. But by looking at how terror groups actually use the Internet, rather than fixating on nightmare scenarios, we can properly prioritize and focus our efforts. Part of the problem is the way we talk about the issue. The FBI defines cyber terrorism as a “premeditated, politically motivated attack against information, computer systems, computer programs and data which results in violence against non-combatant targets by subnational groups or clandestine agents.” A key word there is “violence,” yet many discussions sweep all sorts of nonviolent online mischief into the “terror” bin. Various reports lump together everything from Defense Secretary Leon Panetta’s recent statements that a terror group might launch a “digital Pearl Harbor” to Stuxnet-like sabotage (ahem, committed by state forces) to hacktivism, WikiLeaks and credit card fraud. As one congressional staffer put it, the way we use a term like cyber terrorism “has as much clarity as cybersecurity — that is, none at all.” Another part of the problem is that we often mix up our fears with the actual state of affairs. Last year, Deputy Defense Secretary William Lynn, the Pentagon’s lead official for cybersecurity, spoke to the top experts in the field at the RSA Conference in San Francisco. “It is possible for a terrorist group to develop cyber-attack tools on their own or to buy them on the black market,” Lynn warned. “A couple dozen talented programmers wearing flip-flops and drinking Red Bull can do a lot of damage.” The deputy defense secretary was conflating fear and reality, not just about what stimulant-drinking programmers are actually hired to do, but also what is needed to pull off an attack that causes meaningful violence. The requirements go well beyond finding top cyber experts. Taking down hydroelectric generators, or designing malware like Stuxnet that causes nuclear centrifuges to spin out of sequence, doesn’t just require the skills and means to get into a computer system. It’s also knowing what to do once you are in. To cause true damage requires an understanding of the devices themselves and how they run, the engineering and physics behind the target. The Stuxnet case, for example, involved not just cyber experts well beyond a few wearing flip-flops, but also experts in areas that ranged from intelligence and surveillance to nuclear physics to the engineering of a specific kind of Siemens-brand industrial equipment. It also required expensive tests, not only of the software, but on working versions of the target hardware as well. As George R. Lucas Jr., a professor at the U.S. 
Naval Academy, put it, conducting a truly mass-scale action using cyber means “simply outstrips the intellectual, organizational and personnel capacities of even the most well-funded and well-organized terrorist organization, as well as those of even the most sophisticated international criminal enterprises.” Lucas said the threat of cyber terrorism has been vastly overblown. “To be blunt, neither the 14-year-old hacker in your next-door neighbor’s upstairs bedroom, nor the two- or three-person al-Qaida cell holed up in some apartment in Hamburg are going to bring down the Glen Canyon and Hoover dams,” he said. We should be crystal clear: This is not to say that terrorist groups are uninterested in using the technology of cyberspace to carry out acts of violence. In 2001, al-Qaida computers seized in Afghanistan were found to contain models of a dam, plus engineering software that simulated the catastrophic failure of controls. Five years later, jihadist websites were urging cyber attacks on the U.S. financial industry to retaliate for abuses at Guantanamo Bay. Nor does it mean that cyber terrorism, particularly attacks on critical infrastructure, is of no concern. In 2007, Idaho National Lab researchers experimented with cyber attacks on their own facility; they learned that remotely changing the operating cycle of a power generator could make it catch fire. Four years later, the Los Angeles Times reported that white-hat hackers hired by a water provider in California broke into the system in less than a week. Policymakers must worry that real-world versions of such attacks might have a ripple effect that could, for example, knock out parts of the national power grid or shut down a municipal or even regional water supply. But so far, what terrorists have accomplished in the cyber realm doesn’t match our fears, their dreams or even what they have managed through traditional means. The only publicly documented case of an actual al-Qaida attempt at a cyber attack wouldn’t have even met the FBI definition. Under questioning at Guantanamo Bay, Mohamedou Ould Slahi confessed to trying to knock offline the Israeli prime minister’s public website. The same goes for the September denial-of-service attacks on five U.S. banking firms, for which the Islamist group “Izz ad-Din al-Qassam Cyber Fighters” claimed responsibility. (Some experts believe the group was merely stealing credit for someone else’s work.) The attacks, which prevented customers from accessing the sites for a few hours, were the equivalent of a crowd standing in your lobby blocking access or a gang of neighborhood kids constantly doing “ring and runs” at your front doorbell. It’s annoying, to be sure, but nothing that would make the terrorism threat matrix if you removed the word “cyber.” And while it may make for good headlines, it is certainly not in the vein of a “cyber 9/11” or “digital Pearl Harbor.” Even the 2007 cyber attacks on Estonia, the most-discussed episode of its kind, had little impact on the daily life of the average Estonian and certainly no long-term effect. Allegedly assisted by the Russian government, and hence well beyond the capacity of most terror organizations, the attacks merely disrupted public-facing government websites for a few days. Compare that with the impact of planes crashing into the center of the U.S. financial system, the London subway attacks or the thousands of homemade bomb attacks that happen around the world each year. 
Even when you move into the “what if” side, the damage potential of cyber terror still pales compared with other types of potential terror attacks. A disruption of the power grid for a few days would certainly be catastrophic (though it’s something that Washington, D.C., residents have lived through in the last year. Does the Pepco power company qualify as a cyber threat?). But, again, in strategic planning, we have to put threats into context. The explosion of just one nuclear bomb, even a jury-rigged radiological “dirty bomb,” could irradiate an American city for centuries. Similarly, while a computer virus could wreak havoc in the economy, a biological attack could change our very patterns of life forever. As one cyber expert said, “There are [cyber] threats out there, but there are no threats that threaten our fundamental way of life.” Terrorists Online Rather than fixating on an unlikely hack that opens the floodgates of Hoover Dam, in assessing cyber terrorism we should look at how terror groups actually use the Internet. The answer turns out to be: pretty much how everyone else uses it. Yes, the Internet is becoming a place of growing danger and new digital weaponry is being developed. We must be mindful of forces that would use malware against us, much as we have used it in offensive operations against Iran. But the Internet’s main function remains to gather and share information across great distances with instant ease. For instance, online dating sites and terror groups alike use the Internet to connect people of similar interests and beliefs who wouldn’t otherwise meet. Similarly, online voices — be they restaurant bloggers or radical imams — are magnified, reaching more people than ever. (Indeed, the Internet seems to reward the more extreme with more attention.) Al-Qaida, denied safe havens by U.S. military operations after 9/11, spent the next decade shifting its propaganda distribution from hand-carried cassette tapes to vastly superior online methods. The last video that Osama bin Laden issued before his death was simultaneously uploaded onto five sites. Counterterrorism groups rushed to take them down, but within one hour, the video had been captured and copied to more than 600 sites. Within a day, the number of sites hosting the video had doubled again, each watchable by thousands. Beyond propaganda, cyberspace allows groups to spread particular knowledge in new and innovative ways. The same kinds of tools that allow positive organizations such as the Khan Academy to help kids around the world learn math and science have given terrorist groups unprecedented ways to discuss and disseminate tactics, techniques and procedures. The recipes for explosives are readily available on the Internet, while terror groups have used the Internet to share designs for IEDs instantly across conflict zones from Iraq to Afghanistan. Online sharing has helped such groups continue their work even as drone strikes and other global counterterror efforts deprive them of geographic spaces to teach and train. And what terror groups value from the Internet is the same as what the rest of us value — reliable service, easy terms and virtual anonymity — which complicates the old way of thinking about the locale of threats. The Taliban, for example, ran a website for more than a year that released propaganda and kept a running tally of suicide bombings, rocket attacks and raids against U.S. troops in Afghanistan. 
And yet the host for the website was a Texas company called The Planet, which rented out websites for $70 a month, payable by credit card. The company, which hosted some 16 million accounts, wasn’t aware that one of them was a Taliban information clearinghouse until it was contacted by U.S. authorities and shut the site down. This gaining of knowledge is not just about the “how” of a terror attack, but even the “who” and the “where” on the targeting side. Groups have used cyberspace as a low-cost, low-risk means to gather intelligence in ways they could only dream about a generation ago. For example, no terrorist group has the financial resources to afford a spy satellite to scope out targets from above with pinpoint precision, let alone the capability to build and launch it. Yet, Google Earth filled in just as effectively for Lashkar-e-Taiba, a Pakistan-based terror group, when it was planning the 2008 Mumbai attacks, and for the Taliban team that planned the raid earlier this year on Camp Bastion in Afghanistan. What this means when it comes to terrorism is that, much like in other areas of cybersecurity, we have to be aware of our own habits and uses of the Internet and how bad actors might take advantage. In 2007, when U.S. Army helicopters landed at a base in Iraq, soldiers reportedly used their smartphones to snap photos and upload them to the Internet. The geotags embedded in the photos allowed insurgents to pinpoint and destroy four of the helicopters in a mortar attack. The incident has become a standard part of experts’ warnings. “Is a badge on Foursquare worth your life?” asks Brittany Brown, the social media manager at Fort Benning, Ga. A growing worry here is that groups might use social networking and Kevin Mitnick-style “social engineering” to seek information not just about hard targets, but human ones. After the bin Laden raid in 2011, an American cybersecurity analyst wondered what he could find out about the supposedly super-secret unit that carried it out. He was able to find 12 current or former members’ names, their families’ names and their home addresses. This information was acquired not as the result of leaks to the press, but rather through the use of social networking tricks (for instance, tracking people and their network of friends and family by their appearances in pictures wearing T-shirts with unit logos or through websites that mention BUDS training classes). In similar experiments, he uncovered the names of FBI undercover agents and, in one particularly saucy example, a pair of senior U.S. government officials who opened themselves up to potential blackmail by participating in a swinger site. The analyst uses the results of such exercises to warn his “targets” that there was more about them on the Internet than they realized — a useful reminder for us all. Ultimately, in making a global risk assessment, we have to weigh an imagined future, in which terror groups unleash a cataclysm via computer virus, against the present reality, in which they use information flows to inform and improve their actions in the physical world. So what does that suggest for cyber counterterror efforts? A Double-Edged Sword “It seems that someone is using my account and is somehow sending messages with my name,” emailed one person who fell for an online trick. 
“The dangerous thing in the matter is that they [his contacts replying to what they thought was a genuine email] say that I had sent them a message including a link for download, which they downloaded.” We can all empathize with this fellow, whose story was captured by Wired magazine’s Danger Room blog. Many of us have gone through the same experience or received similar warnings from friends or family that someone’s hacked their account and to be aware of suspicious messages. The difference is that the person complaining about being hacked in this case was “Yaman Mukhadab,” a prominent poster inside what was supposed to be an elite password-protected forum for radicals, called Shumukh. Before he sent out his warning to the forum, the group had been engaged in such activities as assembling a “wish list” of American security industry leaders, defense officials and other public figures for terrorists to target and kill. Mukhadab’s cyber hardships — induced, of course, by counterterrorism agencies — illustrate how technology remains a double-edged sword. The realm of the Internet is supposed to be a fearful place, perfect for terrorists, and yet it can also work for us. Some counterterror experts argue that instead of playing a never-ending game of Whac-a-Mole — trying to track and then shut down all terrorist use of the Internet — it might be better to take advantage of their presence online. “You can learn a lot from the enemy by watching them chat online,” Martin Libicki, a senior policy analyst at the Rand Corp., told the Washington Post. While the cyber era allows terror groups to easily distribute the playbook of potential terrorist tactics, techniques and procedures, it also reveals to defenders which ones are popular and spreading. If individuals and groups can link up as never before, so too do intelligence analysts have unprecedented abilities to track them and map out social networks. This applies both to identifying would-be cyber terrorists designing malware as well as those still using the bombs and guns of the present world. In 2008 and 2009, U.S. intelligence agencies reportedly tried to attack and shut down the top terrorist propaganda websites on the anniversary of 9/11, to try to delay the release of a bin Laden video celebrating the attacks. In 2010, however, they took a different tack. As Wired magazine reported, “The user account for al-Qaida’s al-Fajr media distribution network was hacked and used to encourage forum members to sign up for Ekhlaas, a forum which had closed a year before and mysteriously resurfaced.” The new forum was a fake, the equivalent of an online spider web, stickily entangling would-be terrorists and their fans. The following year, a similar thing happened to the Global Islamic Media Front, a network for producing and distributing radical propaganda online. GIMF was forced to warn its members that the group’s own encryption program, “Mujahideen Secrets 2.0,” shouldn’t be downloaded because it had been compromised. More amusing was the 2010 episode in which al-Qaida in the Arabian Peninsula posted the first issue of Inspire, an English-language online magazine designed to draw in recruits and spread terror tactics. Excited terrorist readers instead found the pages replaced by a PDF for a cupcake recipe, reportedly put there by hackers for British intelligence agencies. One can imagine deadlier forms of information corruption, such as changing the online recipes of how to make a bomb, so that a would-be bombmaker blows himself up during assembly. 
We can look at the digital world with only fear or we can recognize that every new technology brings promise and peril. The advent of reliable post in the 1800s allowed the most dangerous terrorists of that time, anarchist groups, to correspond across state borders, recruiting and coordinating in a way previously not possible, and even to deploy a new weapon: letter bombs. But it also allowed police to read their letters and crack down on them. So, too, today with the digital post. When it comes to cyber terrorism versus the terrorist use of cyberspace, we must balance chasing the chimeras of our fevered imaginations with watching the information flows where the real action is taking place. |
e663bbf9472fe85fb4c8d8b1639195b8 | https://www.brookings.edu/articles/the-end-of-the-ccps-resilient-authoritarianism-a-tripartite-assessment-of-shifting-power-in-china/ | The End of the CCP’s Resilient Authoritarianism? A Tripartite Assessment of Shifting Power in China | The End of the CCP’s Resilient Authoritarianism? A Tripartite Assessment of Shifting Power in China Editor’s Note: This paper first appeared in The China Quarterly (Volume 211, September 2012). ABSTRACT This essay challenges the widely held view of the Chinese Communist Party’s (CCP) purported “resilient authoritarianism,” which asserts that China’s one-party political system is able to enhance the state capacity to govern effectively through institutional adaptations and policy adjustments. An analysis of the recent and still unfolding Bo Xilai crisis reveals the flaws in China’s political system, including nepotism and patron-client ties in the selection of leaders, rampant corruption, the growing oligarchic power of state-owned enterprises, elites’ contempt for the law and the potential failure to broker deals between competing factions in the Party leadership. The essay argues that the CCP’s “authoritarian resilience” is a stagnant system, both conceptually and empirically, because it resists much-needed democratic changes in the country. The problems of the resilient authoritarianism thesis are traceable to the monolithic conceptualizing of China – the failure to appreciate seemingly paradoxical transformative trends in the country, which this essay characterizes as three parallel developments, namely, 1) weak leaders, strong factions; 2) weak government, strong interest groups; and 3) weak Party, strong country. One should not confuse China’s national resilience (in terms of the emerging middle class, new interest group politics, and dynamic society) with the CCP’s capacity and legitimacy to rule the country. The essay concludes that if the CCP intends to regain the public’s confidence and avoid a bottom-up revolution, it must abandon the notion of “authoritarian resilience” and embrace a systematic democratic transition with bold steps towards intra-Party elections, judicial independence and a gradual opening of the mainstream media. |
fc8958af8a10c5bd6a4285cba0158513 | https://www.brookings.edu/articles/the-law-catches-up-to-private-militaries-embeds/ | The Law Catches Up to Private Militaries, Embeds | The Law Catches Up to Private Militaries, Embeds Over the last few years, tales of private military contractors run amuck in Iraq — from the CACI interrogators at Abu Ghraib to the Aegis company’s Elvis-themed internet “trophy video” — have continually popped up in the headlines. Unfortunately, when it came to actually doing something about these episodes of Outsourcing Gone Wild, Hollywood took more action than Washington. The TV series Law and Order punished fictional contractor crimes, while our courts ignored the actual ones. Leonardo DiCaprio acted in a movie featuring the private military industry, while our government enacted no actual policy on it. But those carefree days of military contractors romping across the hills and dales of the Iraqi countryside, without legal status or accountability, may be over. The Congress has struck back. Amidst all the add-ins, pork spending, and excitement of the budget process, it has now come out that a tiny clause was slipped into the Pentagon’s fiscal year 2007 budget legislation. The one-sentence section (number 552 of a total 3510 sections) states that “Paragraph (10) of section 802(a) of title 10, United States Code (article 2(a) of the Uniform Code of Military Justice), is amended by striking ‘war’ and inserting ‘declared war or a contingency operation’.” The measure passed without much notice or any debate. And then, as they might sing on School House Rock, that bill became a law (P.L. 109-364). The addition of five little words to a massive US legal code that fills entire shelves at law libraries wouldn’t normally matter for much. But with this change, contractors’ ‘get out of jail free’ card may have been torn to shreds. Previously, contractors would only fall under the Uniform Code of Military Justice, better known as the court martial system, if Congress declared war. This is something that has not happened in over 65 years and is out of step with the most likely operations in the 21st century. The result is that whenever our military officers came across episodes of suspected contractor crimes in missions like Bosnia, Kosovo, Iraq, or Afghanistan, they had no tools to resolve them. As long as Congress had not formally declared war, civilians — even those working for the US armed forces, carrying out military missions in a conflict zone — fell outside their jurisdiction. The military’s relationship with the contractor was, well, merely contractual. At most, the local officer in charge could request that the employing firm demote or fire the individual. If he thought a felony occurred, the officer might be able to report them on to civilian authorities. Getting tattled on to the boss is certainly fine for some incidents. But, clearly, it’s not how one deals with suspected crimes. And it’s nowhere near the proper response to the amazing, awful stories that have made the headlines (the most recent being the contractors who sprung a former Iraqi government minister, imprisoned on corruption charges, from a Green Zone jail). And for every story that has been deemed newsworthy, there are dozens that never see the spotlight. One US army officer recently told me of an incident he witnessed, where a contractor shot a young Iraqi who got too close to his vehicle while in line at the Green Zone entrance. The boy was waiting there to apply for a job. 
Not merely a tragedy, but one more nail in the coffin for any US effort at winning hearts and minds. But when such incidents happen, officers like him have had no recourse other than to file reports that are supposed to be sent on either to the local government or the US Department of Justice, neither of which had traditionally done much. The local government is often a failed one or too weak to act — the very reason we are still in Iraq. And our Department of Justice has treated contractor crimes in a more Shakespearean than Hollywood way, as in Much Ado About Nothing. Last month, DOJ reported to Congress that it has sat on over 20 investigations of suspected contractor crimes without action in the last year. The problem is not merely one of a lack of political will on the part of the Administration to deal with such crimes. Contractors have also fallen through a gap in the law. The roles and numbers of military contractors are far greater than in the past, but the legal system hasn’t caught up. Even in situations when US civilian law could potentially have been applied to contractor crimes (through the Military Extraterritorial Jurisdiction Act), it wasn’t. Underlying previous laws like MEJA was the assumption that civilian prosecutors back in the US would be able to make determinations of what is proper and improper behavior in conflicts, go gather evidence, carry out depositions in the middle of warzones, and then be willing and able to prosecute them before juries back home. The reality is that no US Attorney likes to waste limited budgets on such messy, complex cases 9,000 miles outside their district, even if they were fortunate enough to have the evidence at hand. The only time MEJA has been successfully applied was against the wife of a soldier, who stabbed him during a domestic dispute at a US base in Turkey. Not one contractor of the entire military industry in Iraq has been charged with any crime over the last 3 and a half years, let alone prosecuted or punished. Given the raw numbers of contractors, let alone the incidents we know about, it boggles the mind. The situation perhaps hit its low point this fall, when the Under Secretary of the Army testified to Congress that the Army had never authorized Halliburton or any of its subcontractors (essentially the entire industry) to carry weapons or guard convoys. He even denied the US had firms handling these jobs. Never mind the thousands of newspaper, magazine, and TV news stories about the industry. Never mind Google’s 1,350,000 web mentions. Never mind the official report from U.S. Central Command that there were over 100,000 contractors in Iraq carrying out these and other military roles. In a sense, the Bush Administration was using a cop-out that all but the worst Hollywood script writers avoid. Just like the end of the TV series Dallas, Congress was supposed to accept that the private military industry in Iraq and all that had happened with it was somehow ‘just a dream.’ But Congress didn’t bite, it now seems. With the addition of just five words in the law, contractors now can fall under the purview of the military justice system. This means that if contractors violate the rules of engagement in a warzone or commit crimes during a contingency operation like Iraq, they can now be court-martialed (as in, Corporate Warriors, meet A Few Good Men). At face value, this appears to be a step forward for realistic accountability. 
Military contractor conduct can now be checked by the military investigation and court system, which unlike civilian courts, is actually ready and able both to understand the peculiarities of life and work in a warzone and kick into action when things go wrong. The amazing thing is that the change in the legal code is so succinct and easy to miss (one sentence in a 439-page bill, sandwiched between a discussion on timely notice of deployments and a section ordering that the next of kin of medal of honor winners get flags) that it has so far gone completely unnoticed in the few weeks since it became the law of the land. Not only has the media not yet reported on it. Neither have military officers or even the lobbyists paid by the military industry to stay on top of these things. So what happens next? In all likelihood, many firms, who have so far thrived in the unregulated marketplace, will now lobby hard to try to strike down the change. We will perhaps even soon enjoy the sight of CEOs of military firms, preening about their loss of rights and how the new definition of warzone will keep them from rescuing kittens caught in trees. But, ironically, the contractual nature of the military industry serves as an effective mechanism to prevent loss of rights. The legal change only applies to the section in the existing law dealing with those civilians “serving with or accompanying an armed force in the field,” i.e. only those contractors on operations in conflict zones like Iraq or Afghanistan. It would apply not to the broader public in the US, not to local civilians, and not even to military contractors working in places where civilian law is stood up. Indeed, it even wouldn’t apply to our foes, upholding recent rulings on the scope of military law and the detainees at Gitmo. In many ways, the new law is the 21st century business version of the rights contract: If a private individual wants to travel to a warzone and do military jobs for profit, on behalf of the US government, then that individual agrees to fall under the same codes of law and consequence that American soldiers, in the same zones, doing the same sorts of jobs, have to live and work by. If a contractor doesn’t agree to these regulations, that’s fine, don’t contract. Unlike soldiers, they are still civilians with no obligation to serve. The new regulation also seems to pass the fairness test. That is, a lance corporal or a specialist earns less than $20,000 a year for service in Iraq, while a contractor can earn upwards of $100,000-200,000 a year (tax free) for doing the same job and can quit whenever they want. It doesn’t seem that unreasonable then to expect the contractor to abide by the same laws as their military counterpart while in the combat theatre. Given that the vast majority of private military employees are upstanding men and women — and mostly former soldiers, to boot — living under the new system will not mean much change at all. All it does is now give military investigators a way finally to stop the bad apples from filling the headlines and getting away free. The change in the law is long overdue. But in being so brief, it needs clarity on exactly how it will be realized. For example, how will it be applied to ongoing contracts and operations? Given that the firm executives and their lobbyists back in DC have completely dropped the ball, someone ought to tell the contractors in Iraq that they can now be court martialed. 
Likewise, the scope of the new law could be made clearer; it could be either too limited or too wide, depending on the interpretation. While it is apparent that any military contractor working directly or indirectly for the US military falls under the change, it is unclear whether those doing similar jobs for other US government agencies in the same warzone would fall under it as well (recalling that the contractors at Abu Ghraib were technically employed by the US Department of Interior, sublet out to DOD). On the opposite side, what about civilians who have agreed to be embedded, but not contracted? The Iraq war is the first in which journalists could formally embed with units, so there is not much experience with its legal side in contingency operations. The lack of any legal precedent, combined with the new law, could mean that an overly aggressive interpretation might now also include journalists who have embedded. Given that journalists are not armed, not contracted (so not paid directly or indirectly from public monies) and, most important, not there to serve the mission objectives, this would probably be too extensive an interpretation. It would also likely mean fewer embeds. But given the current lack of satisfaction with the embed program in the media, any effect here may be a tempest in a teapot. As of Fall 2006, there were only nine embedded reporters in all of Iraq. Of the nine, four were from military media (three from Stars and Stripes, one from Armed Forces Network), two not even with US units (one Polish radio reporter with Polish troops, one Italian reporter with Italian troops), and one was an American writing a book. Moreover, we should remember that embeds already make a rights tradeoff when they agree to the military’s reporting rules. That is, they have already given up some of their 1st Amendment protections (something at the heart of their professional ethic) in exchange for access, so agreeing to potentially fall under UCMJ when deployed may not be a deal breaker. The ultimate point is that the change gives the military and the civilian courts a new tool to use in better managing and overseeing contractors, but leaves it to the Pentagon and DOJ to decide when and where to use it. Given their recent track record on legal issues in the context of Iraq and the war on terror, many won’t be that reassured. Congress is to be applauded for finally taking action to rein in the industry and aid military officers in their duties, but the job is not done. While there may be an inclination to let such questions of scope and implementation be figured out through test cases in the courts, our elected public representatives should request that DoD answer the questions above in a report to Congress. Moreover, while the change may help close one accountability loophole, in no way should it be read as a panacea for the rest of the private military industry’s ills. The new Congress still has much to deal with when it comes to the still-unregulated industry, including getting enough eyes and ears to actually oversee and manage our contracts effectively, creating reporting structures, and forcing the Pentagon to develop better fiscal controls and market sanctions, to actually save money rather than spend it. A change of a few words in a legislative bill certainly isn’t the stuff of a blockbuster movie. So don’t expect to see Angelina Jolie starring in “Paragraph (10) of Section 802(a)” in a theatre near you anytime soon. 
But the legal changes in it are a sign that Congress is finally catching up to Hollywood on the private military industry. And that is the stuff of good governance. |
adcc0bcbde54879e0738bf523f26ea18 | https://www.brookings.edu/articles/the-man-who-lost-an-empire/ | The man who lost an empire | The man who lost an empire In 1921, the year before he founded the Union of Soviet Socialist Republics, Vladimir Lenin challenged his fellow Bolsheviks with a rhetorical question: “Who overtakes whom?” Joseph Stalin preferred a starker version: “Who-whom?” Both saw politics as a deadly competition in which the winner takes all—war by other means. So does Vladimir Putin. He looks back at the late 1980s and 1990s with bitterness. He sees the tumult of that period not just as the collapse of the Soviet Union and a victory for its enemies in the West but a devastating blow to something far more precious, venerable, and enduring: the Russian state. Under his regime another familiar question looms: “Who is to blame?” In Putin’s eyes, the heaviest responsibility falls on the last leader of the Soviet Union, Mikhail Gorbachev, who was the prime mover of what Putin has called “the greatest geopolitical catastrophe” of the twentieth century. Yet instead of denouncing or punishing Gorbachev, Putin has treated him with thinly disguised condescension. For years Gorbachev traveled abroad to accept honors and honorariums, knowing that, back home, masses of his fellow citizens scorned him. In 1996, he ran in the presidential election and got half a percent of the vote. Now eighty-six and in shaky health, Gorbachev lives quietly in a dacha outside Moscow—but not silently. Over the course of a decade, he gave eight long interviews to William Taubman, a historian at Amherst College, and his wife Jane, who taught Russian there. The result of Taubman’s research is a masterpiece of narrative scholarship. It is also the first comprehensive biography of this world-historical figure. Other chronicles of Gorbachev’s life and verdicts on his record will follow, but they will be without the trove of personal insights that Taubman has gleaned from his access to Gorbachev himself, his advisers, and other participants in those dramatic years. Taubman notes that his subject’s birth in a peasant village in the North Caucasus coincided, both in time and place, with the rise of Stalinism—and that his family both resisted that phenomenon and suffered from it. Gorbachev’s grandmothers and mother insisted that he be secretly baptized when he was born in 1931, in defiance of the draconian suppression of religion that had accompanied communization. He would never know two of his uncles and one aunt who perished in the famine of those years, and both of his grandfathers were sent to the Gulag during Stalin’s Great Terror. The book also contains a love story. Gorbachev met Raisa Titarenko at Moscow State University. Their romance was enhanced and sustained by her intelligence, strong will, firm convictions, and dedication to her ambitious and extroverted husband. She sacrificed what could have been a successful career as a sociologist in order to help her husband and protect him from his enemies, but also from his tendency to let his loquaciousness, charm, and self-confidence trip him up. No one influenced him more. Taubman’s previous subject was the life and career of Nikita Khrushchev, a crude, bumptious, erratic, and belligerent character who nonetheless was the only leader of the Soviet Union before Gorbachev to qualify as a reformer.1 Khrushchev spent much of his life as one of Stalin’s henchmen, with blood on his hands. 
But once the dictator had died and Khrushchev had outmaneuvered his peers in the scramble for succession, he did his best to “de-Stalinize” the Soviet Union, freeing prisoners from the Gulag and relaxing repression and censorship. Khrushchev paid a heavy price for his experiments in the liberalization of society and culture. In 1964, the Presidium (later renamed the Politburo) summoned him back to Moscow from a Black Sea vacation, fired him, and consigned him to house arrest as a “special pensioner” for the rest of his life. Gorbachev’s punishment has been watching the renunciation of much of his own legacy, and the restoration of an authoritarian, predatory regime ruled by a former middle-rank KGB officer whose purposes, methods, and policies are antithetical to those that made Gorbachev a transformative global figure. Gorbachev, too, had crucial connections with the KGB at the highest level. He probably would never have ascended to power without the patronage of Yuri Andropov, the head of Soviet intelligence for fifteen years before he became party leader in 1982. Taubman tells us that Andropov met Gorbachev in 1968, soon after he took over the KGB. One of the youngest provincial party chiefs in the USSR, Gorbachev combined a reputation for loyalty with a fertile mind, pragmatism, and a talent for innovation—qualities that Andropov felt the country sorely needed. Andropov’s position gave him access to data on the deterioration of Soviet society. The economy was anemic, the governing structures were rigid and inefficient, and laws were ignored or unjust. Factory towns polluted the air and water as they churned out armaments while ordinary citizens had to stand in long lines for paltry supplies of food and shoddy goods. Collectivized agriculture made the lives of farmers and their customers miserable. Public health services were abysmal, and the population, especially the Slavic majority, suffered from pervasive alcoholism, low birthrates, and decreasing life expectancy. Andropov faulted the complacent and stultifying policies of Leonid Brezhnev, the ponderous, beetle-browed apparatchik who replaced Khrushchev in 1964. Andropov succeeded Brezhnev in 1982. His kidneys were failing and within a few months he was working mostly from his dacha, tethered to a dialysis machine. From a hospital bed in 1983 he urged his underlings to choose Gorbachev as his successor. But when Andropov died the following February, they had already decided that Konstantin Chernenko, a seventy-two-year-old party hack, would be a safer pick, although he was afflicted with emphysema, pleurisy, pneumonia, and heart disease. They were organizing Chernenko’s funeral just thirteen months later. Unlike the succession after Andropov’s death, the outcome this time had not been decided beforehand. Gorbachev, who had just celebrated his fifty-fourth birthday, was the front-runner. He had risen to the number-two position in the Politburo, which meant that he would chair the meeting on succession. There were half a dozen other aspirants, ranging in age from sixty-two to eighty, all orthodox Communists to the bone and all experienced in protecting their personal turf and the status quo. Gorbachev let all the members have their say while he steered them toward selecting him by acclamation to head the commission that would supervise Chernenko’s interment in the Kremlin Wall Necropolis. That ceremonial duty was, by tradition, a signal that he would be the next leader. 
The Central Committee rubber-stamped the decision the next day.2 Until this triumphal moment in his career, Gorbachev was wary about how Raisa would react to his appointment and therefore had not told her his plans. Taubman, in one of their interviews, probed why he kept his closest confidante in the dark. Was it because she would be disappointed if he lost out, or because she feared the troubles that the job would bring? The latter, Gorbachev replied laconically. His critics accused him then—and still do now—of hubris and lust for power. Taubman does not agree: “True, he wanted the top spot and had been maneuvering to get it. But he didn’t want power for its own sake; if power had been his goal, he often insisted in later years, he would have presided happily ever after over the status quo, as Brezhnev had.” Taubman opens his chapter on Gorbachev’s debut in office with another question that resonates with Russian history, “What Is to Be Done?,” the title of a 1901 pamphlet by Lenin (quoting the nineteenth-century revolutionary Nikolai Chernyshevsky). The pamphlet called for a socialist revolution that Gorbachev, like many of his fellow Leninists, felt had lost its way. Gorbachev’s first public events as general secretary were refreshingly informal and engaging. He radiated charisma and confidence, creating expectations that he knew what had to be done to shape the next stage of Soviet history. However, Gorbachev had no cogent plans to tackle the chronic ills of the economy. Even if he had, it would have been threatening to others in the Politburo. One-man rule had died with Stalin and been supplanted by a collective leadership—a super-board of directors who kept an eye on the chair, especially one so young, untested, and cocky. Gorbachev never got over the fear that he might suffer some version of Khrushchev’s fate. Partly for that reason, he at first shied away from radical economic reforms. As Taubman puts it, “Although Gorbachev’s style was unprecedented, the substance of his policies during his first year in office was not.” But before long he was transforming the Soviet Union’s foreign policy and reconstructing its political ethos, practices, and institutions. By the end of 1986, he concluded that the only way to maintain the Communist Party’s authority over Soviet society was to open the system to the participation of the citizenry. He had yet to fully conceive of this goal, not to mention implement it. But he knew that it would involve replacing conservatives with reformers. His ability to do this came with the office of the general secretary. He had already nudged Andrei Gromyko out of the foreign ministry into the titular position of head of state, so he could apply his “new thinking” to Russia’s place in the world. Old thinking, as he saw it, was based on the conviction that the Western imperialists were bent on invading the USSR. Taubman notes that in Gorbachev’s first months, he repeatedly floated the public admonition, “We must live and let live,” a tacit repudiation of the Leninist-Stalinist motto of “who-whom?” Several months before Gorbachev became general secretary, he had visited Britain and impressed Margaret Thatcher. (“We can do business together,” she announced.) Ronald Reagan, who had famously dubbed the USSR “the evil empire,” was initially skeptical, but a summit in Geneva in the fall of 1985 broke the ice. While there was no diplomatic progress, both leaders called the meeting a “breakthrough,” largely, Taubman shows, because of extraordinary personal chemistry. 
They compared their backgrounds and found affinity: both had begun life in “small farming communities,” Reagan told Gorbachev at Geneva, yet here they were “with the fate of the world in [our] hands.” In Reagan’s eyes, the Soviet bloc was still an empire, but less evil than it had been before. Neither Gorbachev nor Reagan realized that the empire itself was on the verge of implosion. From the day he became general secretary, Gorbachev paid little attention to the Soviet vassal states of the Warsaw Pact. He believed that they would follow his own reforms. After all, since the early 1950s, East Germany, Hungary, and Czechoslovakia had made attempts to liberalize, only to be invaded by Soviet tanks. In 1989, Gorbachev presided over a Warsaw Pact summit that guaranteed “equality, independence and the right of each country to arrive at its own political position, strategy and tactics without interference from an outside party.” He thought that his renunciation of violence as the foundation of governance would be the salvation of the Soviet Union and its “socialist” brethren in Eastern Europe. This, for Taubman, was Gorbachev’s “sharpest break of all” from his predecessors. But without the protection of Moscow, the party chiefs in Eastern Europe were at the mercy of their citizens, and Moscow’s example of glasnost (openness or free speech), perestroika (restructuring), and new thinking emboldened independence-minded reformers in all the Soviet captive states. A similar dynamic was taking hold within the USSR itself, with dangerous consequences for Gorbachev. Most of the Central European countries had experienced parliamentary democracy before their occupation by the Third Reich and then decades of Soviet domination. Their reformers could invoke that past. Russia’s political culture, in contrast, was disadvantaged by its roots in centuries of autocracy. Gorbachev, in his anomalous if not schizophrenic dual role as head of the Communist Party and reformer-in-chief, hoped to democratize the party, not to expunge it. His plan in 1989, as Taubman explains, was to shift power from the party hierarchy into the hands of citizens by using free elections to fill local, district, and national councils, or soviets (the Bolshevik term that found its way into the name of the country and its system of governance). The opening up of politics to civil society is what his liberal supporters had long awaited. It also marked the incipient return of genuine elections in the Soviet Union, not seen since the brief episode of constitutional reform, the establishment of the state duma (parliament), and multiparty elections that occurred after the Russian Revolution of 1905, only to be snuffed out by the Bolsheviks. As Taubman puts it, the election campaign for a new USSR Congress of People’s Deputies “constituted the new, more democratic ‘game’ that Gorbachev was still learning to play.” It was by no means clear that he would win. Just as Gorbachev expected, hard-liners abominated and resisted the dilution of the party’s power. What unsettled his plan was that liberals—including the widely admired dissident Andrei Sakharov, whom Gorbachev had freed from internal exile—felt he was moving too slowly, while regional leaders were tightening their hold on their own fiefdoms. The most important of these regional leaders was Boris Yeltsin, a former Gorbachev ally. 
Gorbachev had plucked Yeltsin out of the provincial capital Sverdlovsk in the Urals, where he had a reputation as an energetic innovator, and put him in charge of the Moscow party. Yeltsin brought more zeal to the job than Gorbachev bargained for. The personal acrimony and political feuding that ensued between them, as dramatically described by Taubman, made for a spectacle on the level of a Shakespearean tragedy or a Mussorgsky opera. Yeltsin fumed over Gorbachev’s reluctance to make him a full member of the Politburo. When Yeltsin dared criticize Gorbachev at a crucial Central Committee plenum in October 1987, the session turned into a five-hour rhetorical pummeling of Yeltsin. Outside the hall, this move backfired. To many Russians fed up with the party, Gorbachev came off looking like a bully, increasing his victim’s popularity. As Taubman notes, Gorbachev inadvertently created his own nemesis. These two antagonists shared democratic values and goals, but their approaches differed sharply. Gorbachev was determined to maintain a reformed USSR under his own leadership. To do so he had to straddle a widening fissure between a dispirited, schismatic, and widely reviled Communist Party and a burgeoning anti-establishment, democratic movement. Yeltsin, by contrast, worked feverishly to consolidate the Russian Republic—one among fifteen republics in the USSR—as a base from which he could harness the union’s centrifugal forces to his own political advantage. Yeltsin handily won an election for the presidency of the Russian Republic in 1991, having quit the Communist Party the year before, calling into question the viability of the Soviet state. Russia’s current propagandists have rewritten history to support the myth that Western powers, particularly the United States, connived in the breakup of the USSR. There was no such goal in Washington. However, President George H.W. Bush, in his early months in office, called for a hiatus in the American embrace with Moscow so that he could take counsel from his secretary of defense Dick Cheney, national security adviser Brent Scowcroft, and Scowcroft’s deputy Robert Gates. They felt that Reagan had gone too far in endorsing perestroika and other reforms. Taubman believes that the so-called “pause” was a serious error. For the first time since the Bolshevik Revolution, the Soviet Union had a leader who no longer saw the United States as an enemy but as a potential partner. Gorbachev had good reason to resent Bush’s lukewarm support in early 1989, since it weakened him politically within the USSR. And when Gorbachev pleaded in 1990 and 1991 for extensive economic aid on the scale of the Marshall Plan, Washington refused, citing economic troubles of its own. However, despite Bush’s initial hesitation, he echoed Reagan’s praise of Gorbachev and his reforms, asserting that they were in the interest of the United States and world peace. President Bush also tried to tamp down the pressure for independence within the Soviet republics, particularly in Ukraine, where citizens were about to vote for secession in a referendum. In July 1991, Bush made a trip to Moscow and Kiev with those two purposes in mind. He failed in both capitals. Gorbachev was wobbly and Yeltsin confident. In Ukraine, both the leaders and the populace rejected Bush’s plea to give Gorbachev more time to liberalize the USSR. Neither the American nor the Soviet president realized that a mortal threat to the USSR had already emerged. 
Since the spring, Vladimir Kryuchkov, the chief of the KGB, had been fomenting a plot to force Gorbachev to declare a state of emergency and relinquish much of his power to a junta of hard-liners. In August 1991, the conspirators placed Gorbachev and his family under house arrest at their vacation home on the Black Sea. The coup was a fiasco. The conspirators were incompetent and in at least one case drunk; another committed suicide. But it was nonetheless a serious setback for Gorbachev, and one for which he was also at fault. Taubman makes clear that Gorbachev ignored warnings, including from President Bush, that a coup was coming. One of Gorbachev’s closest and most loyal aides lamented his boss’s overconfidence. Gorbachev “couldn’t believe” that the conspirators would betray him. Why? Because he thought they “were incapable of doing anything without their leader.” The citizens of Moscow saved Gorbachev, pouring into the streets in peaceful demonstrations. However, their goal was less to return him to power than to defend their newborn democracy, which Gorbachev had made possible and Yeltsin now championed. Yeltsin climbed onto a tank, rallied the crowds, and showed the world that he was the man of the future. For Gorbachev, the coup was a personal as well as a political tragedy. Raisa had a stroke amid the stress of humiliation and danger of their house arrest in Crimea. She never fully recovered. While the plotters went to jail and Gorbachev returned to his office in the Kremlin, he was now a spent force. He resigned from the post of general secretary in hopes that he, too, could distance himself from the party, allowing him to remain the president of the Soviet Union. It was too late. Yeltsin and leaders of other republics were negotiating a treaty that would create the Commonwealth of Independent States. The USSR dissolved on December 26, and by New Year’s Day 1992, the hammer-and-sickle flag that had flown over the Kremlin for sixty-eight years had been replaced by the tricolor of Russia’s tsarist past. The transition was much less turbulent than Gorbachev feared it would be. As Taubman notes, he used a private channel to ask for Bush’s help in protecting him from reprisals or indignities. (The historian Michael Beschloss and I were the intermediaries.)3 Bush’s secretary of state, James Baker, complied, admonishing Yeltsin to handle his victory “in a dignified way—as in the West.” In 2006, an interviewer asked Gorbachev if he had contemplated the use of lethal force as a means of keeping the USSR intact. “Of course not,” Gorbachev insisted. “It never came into my head, because if it had, I wouldn’t have been Gorbachev.”4 This grandiloquent reversion to the third person suggests that the lion in winter still regarded himself as a heroic figure. It would probably have elicited a remonstration from Raisa. But she had died of leukemia seven years before, in 1999, in a hospital in Germany. “Of course, I’m guilty,” he said publicly while in mourning. “I’m the one who did her in. Politics captivated me. And she took it all to heart. If only our life had been more modest, she would be alive today.” Yeltsin’s triumph and the euphoria of his supporters were short-lived. Missteps, failures, and humiliations awaited him early in his tempestuous eight-year tenure, just as they had cast a shadow over the accomplishments of his predecessor. But like Gorbachev, Yeltsin clung to his revulsion for the brutishness of the Communist past. 
He, too, was loath to use force or risk instability as the world’s largest territorial state dismantled itself. One of his most important decisions was to maintain the borders dividing the republics of the USSR as the new international borders for the post-Soviet independent states. This decision and its enforcement spared the former USSR the kind of revanchist, religious, and ethnic carnage that accompanied the disintegration of Yugoslavia. Russians have often compared the chaotic Yeltsin presidency to the so-called Time of Troubles, an interregnum between tsarist dynasties at the end of the sixteenth and the early seventeenth centuries. Still, Yeltsin tried as hard as Gorbachev to establish participatory democracy and partnership with the West. Despite the antipathy between them, both wanted their children to live in a “normal, modern country”—a deliberately understated phrase for an outcome that reformers knew would require a long, difficult, and often dangerous process to achieve. Throughout Russian history, progress has often awakened the forces of regression. On the last day of the twentieth century, Yeltsin, suffering from heart disease and politically exhausted, resigned without warning from the presidency. In the course of his leadership, he had six prime ministers, one who was appointed twice and four of whom he fired.5 Putin, as the last prime minister before Yeltsin’s resignation, got the ultimate promotion. A major factor in his ascent was his scorched-earth strategy in subduing Chechen secessionists. Bringing the Caucasus homeland back under Moscow’s rule boosted Putin’s reputation for being a tough, vigorous leader. Over the last seventeen years, especially since the end of his second term as president in 2008, he has thrown Russia’s evolution into reverse, dismantling the incipient democracy that both Gorbachev and Yeltsin had made it their life’s mission to establish. What if the Politburo had not decided thirty-two years ago to bypass the old guard and take a chance on the youthful, innovative Gorbachev? The USSR, the Communist Party of the Soviet Union, the Warsaw Pact, the Iron Curtain, the Berlin Wall, and the cold war might have lasted well into the twenty-first century. In that alternative scenario, Lieutenant-Colonel Putin, an undercover agent in the sleepy backwater of East Germany during five of Gorbachev’s years in power, might have spent the rest of his professional life serving the Soviet state in obscurity. Instead, after burning the files in the KGB rezidentura in Dresden, Putin returned to his native Leningrad (soon to be St. Petersburg) and joined the orbit of the mayor, Anatoly Sobchak, formerly a leading liberal legislator in the new, genuinely democratic parliament that Gorbachev had created, a fateful move that would soon position Putin to be Yeltsin’s handpicked successor. Shortly before Yeltsin died in 2007, he privately acknowledged what a disastrous mistake that choice was. In that same year, Putin ratcheted up his scolding of the West, bullying Russia’s neighbors, concentrating power in his hands, cowing the parliament, muzzling independent media, rigging Russian elections, and meddling in the politics of other countries. One thing has not changed: Russia today limps along with a dysfunctional economy inherited from the Soviet and Yeltsin eras. 
It still lacks robust manufacturing and service sectors for domestic and foreign markets, while depending on natural resources mined or pumped out of the ground, and suffering more than ever from flagrant, institutionalized corruption—itself a form of dictatorship. The Russian people and the world have lived with Putinism for more than a decade. Now is an appropriate time to pose a version of Lenin’s question: Whose vision of Russia’s future will prevail? Was the Gorbachev-Yeltsin experiment in democratization an aberration, while Putin’s tyranny is Russia’s fate? There are plenty of liberal (and therefore brave) Russians who are optimistic, if only in the long run. They see their current president reaching back to a gruesome past for the means to shore up his power, hoping that he will make Russia great again. Yet those means were the ruin of Soviet Russia. The Bolsheviks, at least, had a new, bold, untried, albeit disastrous answer to the question of what should be done. The Putinists have no such excuse. |
c69e4e12deb162cb709b1a1845c52dbd | https://www.brookings.edu/articles/the-myth-of-charter-schools/ | The Myth of Charter Schools | The Myth of Charter Schools Ordinarily, documentaries about education attract little attention, and seldom, if ever, reach neighborhood movie theaters. Davis Guggenheim’s Waiting for “Superman” is different. It arrived in late September with the biggest publicity splash I have ever seen for a documentary. Not only was it the subject of major stories in Time and New York, but it was featured twice on The Oprah Winfrey Show and was the centerpiece of several days of programming by NBC, including an interview with President Obama. Two other films expounding the same arguments—The Lottery and The Cartel—were released in the late spring, but they received far less attention than Guggenheim’s film. His reputation as the director of the Academy Award–winning An Inconvenient Truth, about global warming, contributed to the anticipation surrounding Waiting for “Superman,” but the media frenzy suggested something more. Guggenheim presents the popularized version of an account of American public education that is promoted by some of the nation’s most powerful figures and institutions. The message of these films has become alarmingly familiar: American public education is a failed enterprise. The problem is not money. Public schools already spend too much. Test scores are low because there are so many bad teachers, whose jobs are protected by powerful unions. Students drop out because the schools fail them, but they could accomplish practically anything if they were saved from bad teachers. They would get higher test scores if schools could fire more bad teachers and pay more to good ones. The only hope for the future of our society, especially for poor black and Hispanic children, is escape from public schools, especially to charter schools, which are mostly funded by the government but controlled by private organizations, many of them operating to make a profit. The Cartel maintains that we must not only create more charter schools, but provide vouchers so that children can flee incompetent public schools and attend private schools. There, we are led to believe, teachers will be caring and highly skilled (unlike the lazy dullards in public schools); the schools will have high expectations and test scores will soar; and all children will succeed academically, regardless of their circumstances. The Lottery echoes the main story line of Waiting for “Superman”: it is about children who are desperate to avoid the New York City public schools and eager to win a spot in a shiny new charter school in Harlem. For many people, these arguments require a willing suspension of disbelief. Most Americans graduated from public schools, and most went from school to college or the workplace without thinking that their school had limited their life chances. There was a time—which now seems distant—when most people assumed that students’ performance in school was largely determined by their own efforts and by the circumstances and support of their family, not by their teachers. There were good teachers and mediocre teachers, even bad teachers, but in the end, most public schools offered ample opportunity for education to those willing to pursue it. 
The annual Gallup poll about education shows that Americans are overwhelmingly dissatisfied with the quality of the nation’s schools, but 77 percent of public school parents award their own child’s public school a grade of A or B, the highest level of approval since the question was first asked in 1985. Waiting for “Superman” and the other films appeal to a broad apprehension that the nation is falling behind in global competition. If the economy is a shambles, if poverty persists for significant segments of the population, if American kids are not as serious about their studies as their peers in other nations, the schools must be to blame. At last we have the culprit on which we can pin our anger, our palpable sense that something is very wrong with our society, that we are on the wrong track, and that America is losing the race for global dominance. It is not globalization or deindustrialization or poverty or our coarse popular culture or predatory financial practices that bear responsibility: it’s the public schools, their teachers, and their unions. The inspiration for Waiting for “Superman” began, Guggenheim explains, as he drove his own children to a private school, past the neighborhood schools with low test scores. He wondered about the fate of the children whose families did not have the choice of schools available to his own children. What was the quality of their education? He was sure it must be terrible. The press release for the film says that he wondered, “How heartsick and worried did their parents feel as they dropped their kids off this morning?” Guggenheim is a graduate of Sidwell Friends, the elite private school in Washington, D.C., where President Obama’s daughters are enrolled. The public schools that he passed by each morning must have seemed as hopeless and dreadful to him as the public schools in Washington that his own parents had shunned. Waiting for “Superman” tells the story of five children who enter a lottery to win a coveted place in a charter school. Four of them seek to escape the public schools; one was asked to leave a Catholic school because her mother couldn’t afford the tuition. Four of the children are black or Hispanic and live in gritty neighborhoods, while the one white child lives in a leafy suburb. We come to know each of these children and their families; we learn about their dreams for the future; we see that they are lovable; and we identify with them. By the end of the film, we are rooting for them as the day of the lottery approaches. In each of the schools to which they have applied, the odds against them are large. Anthony, a fifth-grader in Washington, D.C., applies to the SEED charter boarding school, where there are sixty-one applicants for twenty-four places. Francisco is a first-grade student in the Bronx whose mother (a social worker with a graduate degree) is desperate to get him out of the New York City public schools and into a charter school; she applies to Harlem Success Academy where he is one of 792 applicants for forty places. Bianca is the kindergarten student in Harlem whose mother cannot afford Catholic school tuition; she enters the lottery at another Harlem Success Academy, as one of 767 students competing for thirty-five openings. Daisy is a fifth-grade student in East Los Angeles whose parents hope she can win a spot at KIPP LA PREP, where 135 students have applied for ten places. 
Emily is an eighth-grade student in Silicon Valley, where the local high school has gorgeous facilities, high graduation rates, and impressive test scores, but her family worries that she will be assigned to a slow track because of her low test scores; so they enter the lottery for Summit Preparatory Charter High School, where she is one of 455 students competing for 110 places. The stars of the film are Geoffrey Canada, the CEO of the Harlem Children’s Zone, which provides a broad variety of social services to families and children and runs two charter schools; Michelle Rhee, chancellor of the Washington, D.C., public school system, who closed schools, fired teachers and principals, and gained a national reputation for her tough policies; David Levin and Michael Feinberg, who have built a network of nearly one hundred high-performing KIPP charter schools over the past sixteen years; and Randi Weingarten, president of the American Federation of Teachers, who is cast in the role of chief villain. Other charter school leaders, like Steve Barr of the Green Dot chain in Los Angeles, do star turns, as does Bill Gates of Microsoft, whose foundation has invested many millions of dollars in expanding the number of charter schools. No successful public school teacher or principal or superintendent appears in the film; indeed there is no mention of any successful public school, only the incessant drumbeat on the theme of public school failure. The situation is dire, the film warns us. We must act. But what must we do? The message of the film is clear. Public schools are bad, privately managed charter schools are good. Parents clamor to get their children out of the public schools in New York City (despite the claims by Mayor Michael Bloomberg that the city’s schools are better than ever) and into the charters (the mayor also plans to double the number of charters, to help more families escape from the public schools that he controls). If we could fire the bottom 5 to 10 percent of the lowest-performing teachers every year, says Hoover Institution economist Eric Hanushek in the film, our national test scores would soon approach the top of international rankings in mathematics and science. Some fact-checking is in order, and the place to start is with the film’s quiet acknowledgment that only one in five charter schools is able to get the “amazing results” that it celebrates. Nothing more is said about this astonishing statistic. It is drawn from a national study of charter schools by Stanford economist Margaret Raymond (the wife of Hanushek). Known as the CREDO study, it evaluated student progress on math tests in half the nation’s five thousand charter schools and concluded that 17 percent were superior to a matched traditional public school; 37 percent were worse than the public school; and the remaining 46 percent had academic gains no different from those of a similar public school. The proportion of charters that get amazing results is far smaller than 17 percent. Why did Davis Guggenheim pay no attention to the charter schools that are run by incompetent leaders or corporations mainly concerned to make money? Why propound to an unknowing public the myth that charter schools are the answer to our educational woes, when the filmmaker knows that there are twice as many failing charters as there are successful ones? Why not give an honest accounting? The propagandistic nature of Waiting for “Superman” is revealed by Guggenheim’s complete indifference to the wide variation among charter schools. 
There are excellent charter schools, just as there are excellent public schools. Why did he not also inquire into the charter chains that are mired in unsavory real estate deals, or take his camera to the charters where most students are getting lower scores than those in the neighborhood public schools? Why did he not report on the charter principals who have been indicted for embezzlement, or the charters that blur the line between church and state? Why did he not look into the charter schools whose leaders are paid $300,000–$400,000 a year to oversee small numbers of schools and students? Guggenheim seems to believe that teachers alone can overcome the effects of student poverty, even though there are countless studies that demonstrate the link between income and test scores. He shows us footage of the pilot Chuck Yeager breaking the sound barrier, to the amazement of people who said it couldn’t be done. Since Yeager broke the sound barrier, we should be prepared to believe that able teachers are all it takes to overcome the disadvantages of poverty, homelessness, joblessness, poor nutrition, absent parents, etc. The movie asserts a central thesis in today’s school reform discussion: the idea that teachers are the most important factor determining student achievement. But this proposition is false. Hanushek has released studies showing that teacher quality accounts for about 7.5–10 percent of student test score gains. Several other high-quality analyses echo this finding, and while estimates vary a bit, there is a relative consensus: teachers statistically account for around 10–20 percent of achievement outcomes. Teachers are the most important factor within schools. But the same body of research shows that nonschool factors matter even more than teachers. According to University of Washington economist Dan Goldhaber, about 60 percent of achievement is explained by nonschool factors, such as family income. So while teachers are the most important factor within schools, their effects pale in comparison with those of students’ backgrounds, families, and other factors beyond the control of schools and teachers. Teachers can have a profound effect on students, but it would be foolish to believe that teachers alone can undo the damage caused by poverty and its associated burdens. Guggenheim skirts the issue of poverty by showing only families that are intact and dedicated to helping their children succeed. One of the children he follows is raised by a doting grandmother; two have single mothers who are relentless in seeking better education for them; two of them live with a mother and father. Nothing is said about children whose families are not available, for whatever reason, to support them, or about children who are homeless, or children with special needs. Nor is there any reference to the many charter schools that enroll disproportionately small numbers of children who are English-language learners or have disabilities. The film never acknowledges that charter schools were created mainly at the instigation of Albert Shanker, the president of the American Federation of Teachers from 1974 to 1997. Shanker had the idea in 1988 that a group of public school teachers would ask their colleagues for permission to create a small school that would focus on the neediest students, those who had dropped out and those who were disengaged from school and likely to drop out. He sold the idea as a way to open schools that would collaborate with public schools and help motivate disengaged students. 
In 1993, Shanker turned against the charter school idea when he realized that for-profit organizations saw it as a business opportunity and were advancing an agenda of school privatization. Michelle Rhee gained her teaching experience in Baltimore as an employee of Education Alternatives, Inc., one of the first of the for-profit operations. Today, charter schools are promoted not as ways to collaborate with public schools but as competitors that will force them to get better or go out of business. In fact, they have become the force for privatization that Shanker feared. Because of the high-stakes testing regime created by President George W. Bush’s No Child Left Behind (NCLB) legislation, charter schools compete to get higher test scores than regular public schools and thus have an incentive to avoid students who might pull down their scores. Under NCLB, low-performing schools may be closed, while high-performing ones may get bonuses. Some charter schools “counsel out” or expel students just before state testing day. Some have high attrition rates, especially among lower-performing students. Perhaps the greatest distortion in this film is its misrepresentation of data about student academic performance. The film claims that 70 percent of eighth-grade students cannot read at grade level. This is flatly wrong. Guggenheim here relies on numbers drawn from the federally sponsored National Assessment of Educational Progress (NAEP). I served as a member of the governing board for the national tests for seven years, and I know how misleading Guggenheim’s figures are. NAEP doesn’t measure performance in terms of grade-level achievement. The highest level of performance, “advanced,” is equivalent to an A+, representing the highest possible academic performance. The next level, “proficient,” is equivalent to an A or a very strong B. The next level is “basic,” which probably translates into a C grade. The film assumes that any student below proficient is “below grade level.” But it would be far more fitting to worry about students who are “below basic,” who are 25 percent of the national sample, not 70 percent. Guggenheim didn’t bother to take a close look at the heroes of his documentary. Geoffrey Canada is justly celebrated for the creation of the Harlem Children’s Zone, which not only runs two charter schools but surrounds children and their families with a broad array of social and medical services. Canada has a board of wealthy philanthropists and a very successful fund-raising apparatus. With assets of more than $200 million, his organization has no shortage of funds. Canada himself is currently paid $400,000 annually. For Guggenheim to praise Canada while also claiming that public schools don’t need any more money is bizarre. Canada’s charter schools get better results than nearby public schools serving impoverished students. If all inner-city schools had the same resources as his, they might get the same good results. But contrary to the myth that Guggenheim propounds about “amazing results,” even Geoffrey Canada’s schools have many students who are not proficient. On the 2010 state tests, 60 percent of the fourth-grade students in one of his charter schools were not proficient in reading, nor were 50 percent in the other. It should be noted—and Guggenheim didn’t note it—that Canada kicked out his entire first class of middle school students when they didn’t get good enough test scores to satisfy his board of trustees. 
This sad event was documented by Paul Tough in his laudatory account of Canada’s Harlem Children’s Zone, Whatever It Takes (2009). Contrary to Guggenheim’s mythology, even the best-funded charters, with the finest services, can’t completely negate the effects of poverty. Guggenheim ignored other clues that might have gotten in the way of a good story. While blasting the teachers’ unions, he points to Finland as a nation whose educational system the US should emulate, not bothering to explain that it has a completely unionized teaching force. His documentary showers praise on testing and accountability, yet he does not acknowledge that Finland seldom tests its students. Any Finnish educator will say that Finland improved its public education system not by privatizing its schools or constantly testing its students, but by investing in the preparation, support, and retention of excellent teachers. It achieved its present eminence not by systematically firing 5–10 percent of its teachers, but by patiently building for the future. Finland has a national curriculum, which is not restricted to the basic skills of reading and math, but includes the arts, sciences, history, foreign languages, and other subjects that are essential to a good, rounded education. Finland also strengthened its social welfare programs for children and families. Guggenheim simply ignores the realities of the Finnish system. In any school reform proposal, the question of “scalability” always arises. Can reforms be reproduced on a broad scale? The fact that one school produces amazing results is not in itself a demonstration that every other school can do the same. For example, Guggenheim holds up Locke High School in Los Angeles, part of the Green Dot charter chain, as a success story but does not tell the whole story. With an infusion of $15 million of mostly private funding, Green Dot produced a safer, cleaner campus, but no more than tiny improvements in its students’ abysmal test scores. According to the Los Angeles Times, the percentage of its students proficient in English rose from 13.7 percent in 2009 to 14.9 percent in 2010, while in math the proportion of proficient students grew from 4 percent to 6.7 percent. What can be learned from this small progress? Becoming a charter is no guarantee that a school serving a tough neighborhood will produce educational miracles. Another highly praised school that is featured in the film is the SEED charter boarding school in Washington, D.C. SEED seems to deserve all the praise that it receives from Guggenheim, CBS’s 60 Minutes, and elsewhere. It has remarkable rates of graduation and college acceptance. But SEED spends $35,000 per student, as compared to average current spending for public schools of about one third that amount. Is our society prepared to open boarding schools for tens of thousands of inner-city students and pay what it costs to copy the SEED model? Those who claim that better education for the neediest students won’t require more money cannot use SEED to support their argument. Guggenheim seems to demand that public schools start firing “bad” teachers so they can get the great results that one of every five charter schools gets. But he never explains how difficult it is to identify “bad” teachers. If one looks only at test scores, teachers in affluent suburbs get higher ones. 
If one uses student gains or losses as a general measure, then those who teach the neediest children—English-language learners, troubled students, autistic students—will see the smallest gains, and teachers will have an incentive to avoid districts and classes with large numbers of the neediest students. Ultimately the job of hiring teachers, evaluating them, and deciding who should stay and who should go falls to administrators. We should be taking a close look at those who award due process rights (the accurate term for “tenure”) to too many incompetent teachers. The best way to ensure that there are no bad or ineffective teachers in our public schools is to insist that we have principals and supervisors who are knowledgeable and experienced educators. Yet there is currently a vogue to recruit and train principals who have little or no education experience. (The George W. Bush Institute just announced its intention to train 50,000 new principals in the next decade and to recruit noneducators for this sensitive post.) Waiting for “Superman” is the most important public-relations coup that the critics of public education have made so far. Their power is not to be underestimated. For years, right-wing critics demanded vouchers and got nowhere. Now, many of them are watching in amazement as their ineffectual attacks on “government schools” and their advocacy of privately managed schools with public funding have become the received wisdom among liberal elites. Despite their uneven record, charter schools have the enthusiastic endorsement of the Obama administration, the Gates Foundation, the Broad Foundation, and the Dell Foundation. In recent months, The New York Times has published three stories about how charter schools have become the favorite cause of hedge fund executives. According to the Times, when Andrew Cuomo wanted to tap into Wall Street money for his gubernatorial campaign, he had to meet with the executive director of Democrats for Education Reform (DFER), a pro-charter group. Dominated by hedge fund managers who control billions of dollars, DFER has contributed heavily to political candidates for local and state offices who pledge to promote charter schools. (Its efforts to unseat incumbents in three predominantly black State Senate districts in New York City came to nothing; none of its hand-picked candidates received as much as 30 percent of the vote in the primary elections, even with the full-throated endorsement of the city’s tabloids.) Despite the loss of local elections and the defeat of Washington, D.C. Mayor Adrian Fenty (who had appointed the controversial schools chancellor Michelle Rhee), the combined clout of these groups, plus the enormous power of the federal government and the uncritical support of the major media, presents a serious challenge to the viability and future of public education. It bears mentioning that nations with high-performing school systems—whether Korea, Singapore, Finland, or Japan—have succeeded not by privatizing their schools or closing those with low scores, but by strengthening the education profession. They also have less poverty than we do. Fewer than 5 percent of children in Finland live in poverty, as compared to 20 percent in the United States. Those who insist that poverty doesn’t matter, that only teachers matter, prefer to ignore such contrasts. If we are serious about improving our schools, we will take steps to improve our teacher force, as Finland and other nations have done. 
That would mean better screening to select the best candidates, higher salaries, better support and mentoring systems, and better working conditions. Guggenheim complains that only one in 2,500 teachers loses his or her teaching certificate, but fails to mention that 50 percent of those who enter teaching leave within five years, mostly because of poor working conditions, lack of adequate resources, and the stress of dealing with difficult children and disrespectful parents. Some who leave “fire themselves”; others were fired before they got tenure. We should also insist that only highly experienced teachers become principals (the “head teacher” in the school), not retired businessmen and military personnel. Every school should have a curriculum that includes a full range of studies, not just basic skills. And if we really are intent on school improvement, we must reduce the appalling rates of child poverty that impede success in school and in life. There is a clash of ideas occurring in education right now between those who believe that public education is not only a fundamental right but a vital public service, akin to the public provision of police, fire protection, parks, and public libraries, and those who believe that the private sector is always superior to the public sector. Waiting for “Superman” is a powerful weapon on behalf of those championing the “free market” and privatization. It raises important questions, but all of the answers it offers require a transfer of public funds to the private sector. The stock market crash of 2008 should suffice to remind us that the managers of the private sector do not have a monopoly on success. Public education is one of the cornerstones of American democracy. The public schools must accept everyone who appears at their doors, no matter their race, language, economic status, or disability. Like the huddled masses who arrived from Europe in years gone by, immigrants from across the world today turn to the public schools to learn what they need to know to become part of this society. The schools should be far better than they are now, but privatizing them is no solution. In the final moments of Waiting for “Superman,” the children and their parents assemble in auditoriums in New York City, Washington, D.C., Los Angeles, and Silicon Valley, waiting nervously to see if they will win the lottery. As the camera pans the room, you see tears rolling down the cheeks of children and adults alike, all their hopes focused on a listing of numbers or names. Many people react to the scene with their own tears, sad for the children who lose. I had a different reaction. First, I thought to myself that the charter operators were cynically using children as political pawns in their own campaign to promote their cause. (Gail Collins in The New York Times had a similar reaction and wondered why they couldn’t just send the families a letter in the mail instead of subjecting them to public rejection.) Second, I felt an immense sense of gratitude to the much-maligned American public education system, where no one has to win a lottery to gain admission. |
54071d4f44a980fdb457c069412a93d9 | https://www.brookings.edu/articles/the-obama-administrations-evidence-based-social-policy-initiatives-an-overview/?shared=email&msg=fail | The Obama Administration’s Evidence-Based Social Policy Initiatives: An Overview | The Obama Administration’s Evidence-Based Social Policy Initiatives: An Overview Summary: This paper outlines the Obama administration’s plan to strengthen the evidence base for U.S. social policy. The Obama administration has created the most expansive opportunity for rigorous evidence to influence social policy in the history of the U.S. government. No president or presidential budget director has ever been so intent on using evidence to shape decisions about the funding of social programs as President Obama and former Budget Director Orszag. The Obama plan to create evidence-based social policy initiatives turns the normal relationship between policy decision making and the use of social science evidence on its head. Instead of evidence being on the outside of the decision making process trying to get in, Obama brings evidence inside from the beginning. The administration must still convince others that the use of evidence will improve policymaking and program outcomes, but the argument that evidence deserves a prime role in policymaking is being made by people inside the administration, who are arguing to retain an evidence-based approach as a fundamental part of the president’s legislative agenda rather than fighting from the outside to insert evidence-based policies into the decision making process. Although less emphasized, the Obama plan for basing program decisions on rigorous evidence can be useful for cutting spending as well as for funding new programs. To read the full report, please visit the National Endowment for Science, Technology and the Arts. |
f19461f84a890f1bd5772020c74f18a7 | https://www.brookings.edu/articles/the-oceans-11-of-cyber-strikes/ | The “Oceans 11” of Cyber Strikes | The “Oceans 11” of Cyber Strikes There is perhaps no contemporary security policy issue that is as important, but so poorly understood, as cybersecurity. A major part of the problem is a simple lack of familiarity with the most basic terms and definitions. Almost all policymakers today are digital immigrants — people who grew up in a world where computers were rarely used, but who now live in a world where they are ubiquitous. Unlike younger digital natives to whom computers are a natural feature, these leaders often feel like strangers in a new land, unable to speak the language, and thus more likely to keep silent for fear of embarrassment or misunderstanding. These immigrants are also the ones whose ignorance is most often taken advantage of by get-rich-quick schemes and other bad policy advice. I recently watched a good example of this at a meeting in Washington of government officials and business leaders. A so-called consultant in cybersecurity (at least that’s what his business card and website said, and who are we to question the Internet?) spent half his presentation talking up the massive boogeyman of cyber danger that loomed for any and all, mentioning again and again the new specter of APTs, or advanced persistent threats. Fortunately, he spent the second half explaining how all that was needed to deter such threats was to be “good enough.” As long as you make sure your defenses are slightly better than the next guy’s, the attackers will give up and quickly move on. And lo and behold, his firm had a generic package for sale that would solve for just those needs. It was a presentation that was slick and effective — and wrong. APTs are a phenomenon that has gained more and more notoriety in recent years but still is poorly understood. They illustrate the challenge in the policy world of calling attention to very real emerging challenges in cyberspace while also avoiding overreaction, hype and hysteria. What is an APT? If cybersecurity threats were movies, an advanced persistent threat would be the “Ocean’s Eleven” of the field. It’s not that APTs star handsome actors like George Clooney or Brad Pitt in Armani suits; indeed, they are more likely to be run by their polar opposites, in sweat-stained T-shirts. Rather, APTs have a level of planning that sets them apart from other cyber threats. Like the plots in the “Ocean’s” movies, they are the work of a team that combines organization, intelligence, complexity and patience. And because of that, like the kind of major casino heists depicted in the movies, APTs are actually quite rare. While there are some 60,000 new pieces of malware created by cyber criminals each day, a minuscule percentage has anything to do with an APT. And even more, much as every group of criminals would like to think they are just like the gang in “Ocean’s Eleven,” only a subset of groups behind APTs are actually that good. One defense firm, for example, is aware of a number of APTs targeting its systems, but divides them into an A-team group that spooks it and a wider set of Z-team groups that it laughs about. An APT starts with a specific target in mind. The perpetrators know what they want and who, specifically, they will go after to get it. APT targets have ranged from military jet designs to classified diplomatic documents to oil company trade secrets. 
While everyone would like to think they are important enough to be targeted by an APT, the reality is that most of us don’t rise to that level. Sorry, you’re not going to be played by Al Pacino, as in the last “Ocean’s” movie. Once the target has been identified, the hallmark of an APT is how it reflects the work of a coordinated team of specialized experts, each taking on different roles. Much like a robber casing a bank or a spy observing a military base, a surveillance team performs target development — learning everything they can about the person or organization they are going after and its key vulnerabilities. In this effort, online search tools and social networking are a blessing to the attackers. Want to steal a new defense widget and therefore need to know who is the vice president of product development? In the past, you would have had to send James Bond to seduce the receptionist in Human Resources and then sneak into her files while she was sleeping off a night of romance and martinis. Now, just have your Red Bull-sipping targeting guy use a search engine and he can get everything from that executive’s résumé to the name of her daughter’s pet iguana. As cybersecurity expert Gary McGraw notes, “The most impressive tool in the attackers’ arsenal is Google.” These groups might not just use search from afar, but also work to bring themselves closer to the target with physical or even virtual means, such as social networking. Perhaps the most innovative recent example was when senior British officers and defense officials were tricked into accepting friend requests from a fake Facebook account claiming to be Adm. James Stavridis. As the Telegraph reported, “They thought they had become genuine friends of NATO’s Supreme Allied Commander — but instead every personal detail on Facebook, including private email addresses, phone numbers and pictures were able to be harvested.” It is this phase that also explains why such attacks are differentiated as persistent. The reconnaissance and preparations conducted can literally take months. The teams are not just trying to understand the organizational structure of the target, but also its key concerns and even its tendencies. One APT, for example, was casing a technology firm headquartered in Minnesota. They ultimately figured out that the best way to crack the system was to wait until a major blizzard. Then they sent a faked email with a document purporting to be the firm’s new snow day policy. Another effort, reported by Reuters in 2011, was allegedly conducted by Chinese intelligence and military units, who gathered details not only on who were the key friends and associates of U.S. national security officials, but even what farewell message they typically signed off with in their emails. Once the target is understood, an intrusion team of professional hackers will then work to breach the system. One of the most common compromise activities is spear phishing, where an individual or group is targeted with a communication that seems to come from a trusted source. When they open files or links in the message, they instead trigger a download of malware, as in the “snow day” email. A faked email is frequently used by such attackers. Take Operation Shady RAT (remote access tool), a highly successful campaign that targeted some 72 organizations around the world, from aerospace firms to the World Anti-Doping Agency (notably right before the 2008 Olympics). 
When the counterfeit email attachment was opened, malware was implanted inside the target’s network. This created a backdoor communication channel to an outside Web server, which had, in turn, been compromised with hidden instructions in the Web page’s code in an effort by the attackers to cover their tracks. What is notable here is that the initial target is frequently not the end target. Often, the best way into a network is via trusted outsiders, often with lower levels of defense. One defense firm was penetrated in 2010 via smaller company vendors. Next, the attackers may target people in the network who have some level of access that will open the gates wider. Last year, an APT was launched at various think tanks. The attackers sought access to scholars who worked on Asian security issues, but aimed initially at employees who had administrative rights and access to passwords. Email is not the only way in. Other APTs have used Facebook and other social networks to figure out the friends of individuals with a high level of privilege inside a targeted network. Then, they compromise these friends’ instant messaging chats as a way to sneak in. The malware used in these attachments is often quite sophisticated. Polymorphic malware, for example, changes form every time it runs, to stay ahead of defenses, and then can burrow deep into computer networks to avoid discovery. The best APTs might use even more advanced tools, like malware that is tailored to the system it is targeting or avoids automatic propagation that might lead to detection, or even goes after a new vulnerability known as a zero day. (In this case, the attack is acting before the first or “zeroth” day of developer awareness of the weakness, meaning there is not yet a security fix available to users of the software.) Much like a military unit or even a sports team would do, APT groups often conduct dry runs and even quality assurance tests to minimize the number of anti-virus programs that can detect them. Once the team is in, they branch out like a viral infection, often with more personnel joining the effort. They jump from the initial footholds, compromising additional machines inside the network that can run the malware and be used to enter and leave. This often involves installing keystroke-logging software and command-and-control programs, which allow them to direct the malicious code to seek out sensitive information. At this point, the target is “pwned” (a common mistyping of “owned” and a term used by hackers and online gamers when they have gotten the better of a target or opponent). An exfiltration team begins work on retrieving what the APT was targeting all along. Here is another hallmark of a real APT: Such a team eschews the usual criminal ethic of “grab what you can get” in favor of a disciplined pursuit of specific files. In many cases, the attackers don’t even open the files during a theft, suggesting that their earlier reconnaissance was thorough enough that they didn’t need to double-check. Many analysts believe this discipline suggests the hidden hand of military or intelligence officials, either as team members or advisers, in many APTs. Many APTs are detected during exfiltration, when data is leaving the network in massive amounts that are hard to mask. Exfiltration teams therefore use all sorts of tricks to cover their tracks. One frequently used tactic is to have the data routed through way stations in multiple countries, akin to a money launderer running stolen funds through banks all over the world. 
This not only makes them harder to track down, but also routes their activities through different legal jurisdictions. Some APTs do more than just copy the data. French officials, for example, said an APT run out of China gained access to the computers of several high-level French political and business leaders, and then activated the devices’ microphones and Web cameras so that they could eavesdrop on their owners’ conversations. Even more nefarious, some APTs alter the files to which they gain access. By definition, this is the point at which an action moves from theft or spying into sabotage or even attack. It may also become the line international law ultimately decides is the difference between espionage and war. What makes APTs even more of a challenge is that even if a target finds out it has been attacked, the pain is not yet over. Finding which machines and accounts inside the system have been infected can take months. Even more, if the effort is truly persistent — say, if the target has some sort of ongoing value to the attacker — there might be an additional unit in the APT whose very job it is to maintain an electronic foothold in the network. Its job is to ensure there is a sequel — an “Ocean’s Twelve,” so to speak. Rather than focusing on what information to steal, this team might, for example, monitor internal emails to learn how the defenders are trying to get them out, in order to stay one step ahead. With their electronic communications compromised, defenders often resort to old-school responses. They will do things like literally yank hard drives out of their computers and post handwritten signs in the hallways about needed password changes. In sum, APTs are a nightmare scenario for any organization. Many don’t know they’ve been hit until it is too late. And even if they do find out, it is often impossible to prove who did it. Indeed, that is why APTs may be the most controversial of all the threat vectors in cybersecurity. Except in cases where the attackers are sloppy (such as when a high-ranking officer in China’s People’s Liberation Army employed the same server to communicate with his mistress that he was also using to coordinate an APT), there is little actual proof that would stand up in a court of law or even the court of public opinion. What we are often left with instead are just suspicions and finger-pointing, which is why APTs have become so poisonous for diplomatic relations, especially between the U.S. and China. This is exactly why it is important to better understand the nature of such threats: to be able to respond more effectively. APTs are not as pervasive as they seem from the level of discussion in business pitches and congressional hearings. Their very sophistication both creates a problem and acts as a limiting factor. But in turn, that sophistication means the threat is not likely to be stopped by some “secret sauce” sold by the many firms that have followed the federal budget money train into cybersecurity. A good defense is complex and layered, taking on each of the attacker’s phases, from surveillance to exfiltration. This means that the best counters will range from the highly sophisticated (such as new mechanisms to monitor anomalies in network traffic) to simply spreading better understanding of the basics of the issue. Take the relatively simple but important job of getting network users to observe basic cyber hygiene. 
In one noteworthy case in 2008, a DoD network was reportedly compromised via a memory stick left in the parking lot outside the base. A foreign intelligence agency was alleged to have left it there, thinking U.S. soldiers wouldn’t be able to resist its lures. And they were right: Someone picked it up and plugged it into a computer. Yet, you wouldn’t stick something you found in a parking lot into your mouth, so why would you think it OK to stick it into your computer? In APTs, as well as in the wider issues of cybersecurity, information is power. This cuts both ways, however. The very real threats, as well as those who would profit from them, are targeting some valued bit or byte of knowledge. But their success, whether at stealing that information or at banking on our fears, depends on our ignorance. |
c607e402286374fc112fb5c9e813365c | https://www.brookings.edu/articles/the-risks-to-australias-democracy/ | The risks to Australia’s democracy | The risks to Australia’s democracy Australians are proud that their country is one of the first genuine liberal democracies in the modern world. Its democratic institutions and practices have been hailed for their robustness, adaptability, functionality, and resilience. Indeed, Australia has been leading the world when it comes to a public conversation about protecting liberal institutions from subversion and interference by entities linked to foreign governments, and passing legislation to deter and prosecute such activities. The ease with which foreign governments, especially the People’s Republic of China, have been able to infiltrate, disrupt and/or influence Australian decisionmakers and institutions is of the highest concern. But Australia is making good progress in managing these activities. As important, but less appreciated, are domestic challenges to the country’s democratic institutions, practices, and governance in the context of a global pandemic and the national health emergency which continues to unfold. It is in this context that longstanding complacency with respect to governance standards, deep public ignorance about the proper workings of institutions, and arguable overreach by various levels of government without accountability for such overreach are worryingly evident. How the country assesses and responds to these governance failures, albeit at a time of immense stress brought about by the COVID-19 pandemic, will determine whether Australia will emerge as an even more robust, adaptable, and functional democracy when the health crisis is over. In March 2017, then-Australian Prime Minister Malcolm Turnbull argued that Australia had the “most successful multicultural society in the world.” He did so at a launch of Australia’s Multicultural Statement, which lauded the strength, unity, and success of the country’s democracy. In doing so, Turnbull praised the robustness, adaptability, functionality, and resilience of Australia’s democratic institutions. It was these institutions which have allowed Australia to offer a superior model of good governance for its citizens and create a diverse yet cohesive society at the same time. There is much data and evidence to back up Turnbull’s boast. For example, the World Bank’s Worldwide Governance Indicators place Australia in the top three Indo-Pacific countries with respect to all relevant categories, such as accountability, political stability, government effectiveness, regulatory quality, rule of law, and controlling corruption. That Australian citizenship is one of the most sought after in the world is further evidence of a successful and multicultural democratic nation, as is the extreme reluctance of Australians abroad to give up their citizenship (and therefore the right to return to Australia). In short, any discussion about successful democratic nations and institutions ought to include Australia. Nevertheless, the standing of democracy amongst Australians — presumably including that of Australian democracy — is not overwhelmingly positive. In the latest authoritative Lowy Institute Poll of Australian attitudes towards democracy, it is troubling that 30% of 18-29 year-old citizens surveyed believed a non-democratic system is preferable to a democratic one under some circumstances, while 55% believed democracy is preferable regardless of circumstance. 
This contrasts with those surveyed aged 60 years and over, only 15% of whom believed a non-democratic system might be preferable, while 72% believed democracy was always preferable. The overall numbers for all surveyed were 22% and 65%, respectively. The lower regard for democracy amongst younger Australians is reflected in previous polling going back to 2012. The reasons for this must be understood and addressed. Furthermore, there are some domestic and external challenges which have great potential to weaken or corrupt the country’s democracy and related institutions. “Exogenous” factors such as the rise of an increasingly authoritarian and ambitious China, the emergence and use of certain forms of technology, and the immense disruption of the COVID-19 pandemic have highlighted vulnerabilities and shortcomings regarding the purported robustness, adaptability, and functionality of Australian democracy. In turn, one would expect that failure to meet these challenges will result in decreased regard for democracy, even if it does not necessarily entail an increased regard for autocracy. Most Australians are famously uninterested in politics and contrast their general lack of interest favorably with the passionate and intense American debates about the state of U.S. democracy. While there is broad support for the fact that Australia is one of the few democracies where voting is compulsory, there is perhaps a sense that one’s democratic duty is fulfilled when one is lining up to cast a vote during a federal, state, or local election. As one former prime minister commented to this author several years ago, “we [Australians] don’t need to talk about democracy — we just do it.” This flippant attitude to politics means that Australian democracy is far less partisan and divisive than democracy in a country such as the United States. However, there are downsides. For example, the COVID-19 pandemic has revealed widespread ignorance among Australians about the structure and workings of their democracy. This has been illustrated throughout the health emergency, in which difficult decisions must be made to prevent infection and protect the community while limiting negative impacts on the economy and civil society. There is immense public ignorance as to the respective roles of the federal and state governments. Much of the Australian public is unaware that it is the latter which is largely responsible for the public health response to prevent and/or manage the consequences of the pandemic — decisions which have immense impact on the daily lives and livelihoods of Australians. Yet, community scrutiny and criticism are being (mis)directed toward the federal government for decisions being taken by state authorities on issues such as closing domestic borders, lockdown laws, and domestic quarantine procedures. The upshot is that state governments are enjoying far less public and media scrutiny of the decisions they have taken, resulting in lower standards of accountability and transparency than ought to be the case. The lower the accountability and transparency for decisions taken and implemented, the weaker the incentive for those decisions to be proportionate and appropriate to the problem at hand. That lack of accountability and transparency is exacerbated by the peculiarity of the Australian taxation system, in which the federal government collects more than 81% of all taxation revenue, including all income tax. 
Other levels of government collect more obscure taxes, such as land and payroll taxes and stamp duties on the purchase of property. This means Australian citizens overwhelmingly associate paying taxes with the federal government, and consequently apply scrutiny predominantly to policy decisions made by the federal government. In the current environment, this has led to shortcomings with respect to issues of accountability. State governments retain primary authority to respond to COVID-19 through decisions which have profoundly disruptive and sometimes destructive impacts on Australian citizens, households, and businesses — border closures, lockdowns, curfews, the diversion of health care resources away from other areas of need, and so on. At the same time, it is the federal government which must manage and finance the economic, social, health, and other impacts of decisions made by state governments. This means that dissatisfaction with policies made and implemented by state governments is frequently directed against the federal government, accompanied by public demands that further federal resources be allocated to alleviate the economic impacts of measures enacted by state governments, including domestic border closures. The point about poor democratic accountability arises because state governments have few incentives to consider the economic and non-economic impacts of their policies. More troubling than flaws in institutional and fiscal design is the mindset of some state governments that legislation and actions suspending the civil and even some human rights of citizens in response to the health emergency can be applied arbitrarily and must be accepted uncritically, promptly, and without scrutiny of the reasoning behind or implementation of such emergency measures. The most egregious example occurred in Victoria, the country’s second most populous state. As with any great crisis or disruption, democratic institutions, practices, and mindsets are being tested in a way which does not occur during business as usual. The suspension of the state parliament in August to “prevent the spread of COVID-19” means that the government has not been answerable to the peak forum for political and public scrutiny. This was a period in which there was a spike in infections following the Victorian government’s poor handling of quarantine for returning international travellers. The suspension of parliament also meant there was no formal political debate on the imposition of an indefinite curfew from early August onward, despite the high controversy surrounding the decision. To put this in context, federal and state parliaments sat during both world wars and the Spanish Flu, and curfews were never imposed. In responding to a question about whether he had gone too far with respect to imposing a curfew (avoiding the question of why a curfew was needed when no other state had one), Victorian Premier Daniel Andrews replied: “It is not about human rights. It is about human life.” There was little explanation of the reasons for declaring both a state of emergency and a state of disaster, which together hand the state’s police minister extraordinary powers to suspend the operation of any legislation passed by parliament, control all movement in and out of Victoria, take possession and make use of any person’s property, and direct any government agency to do or refrain from performing any act considered necessary in responding to what was a poorly defined emergency or disaster. 
Note also the passing in October of the COVID-19 Emergency Measures Bill 2020 in the same state, which was condemned by a group of eminent lawyers, including a former High Court judge, in an open letter as "unprecedented, excessive and open to abuse." It may be that the measures are necessary at the current time, but there was little effort by the government to invite debate or make the case for them, as one would expect in an open democracy. Finally, many Australians have become disconcertingly accepting of the need for premiers to exercise emergency powers without regard to time limits or to when such powers might cease. This writer observed widespread admiration for the "effectiveness" of China's decision to impose a brutal lockdown in Wuhan earlier in 2020, amid public calls that similar measures be applied to so-called but ill-defined "hotspots" within Australia, without regard to the checks and balances that are intrinsic to a liberal democracy. One should also note that concerns are not centered only on relatively unaccountable state premiers. Majorities of those polled continue to support state premiers promising harsher policies that suspend the normal rights of citizens, suggesting a preference for an "authoritarian" solution to pressing public crises and challenges. Checks, balances, and rights are perceived to be obstacles to solutions rather than inalienable principles around which solutions must be built. It is so far unclear whether Australia will emerge as a more robust, adaptable, and functional democracy after COVID-19. Australian politicians and citizens need reminding that decisionmakers ought to bear the political and economic costs of their actions. The failure to understand how institutions influence incentives, and therefore behavior, is dangerous. In a region where alternative political systems are assessed according to their capacity to overcome challenges and deliver results, the suspension of normal standards of transparency, accountability, and debate when faced with a serious emergency is a nod to those who believe that authoritarian approaches are superior when it comes to meeting complex problems. That is a dangerous line of argument in any democracy. Finally, Australia's ability to constantly refine and improve its own democracy has consequences for its standing and, in turn, for Australian efforts to promote democracy in the region. Australia is one of the few genuine liberal democracies in the region and shoulders a special responsibility. Australian democracy promotion efforts in the region, including through its aid and development programs, champion the implementation of principles and practices such as transparency, accountability, and adaptability at all levels of government. One of the selling points is that such liberal institutions will ultimately strengthen national resilience and help stabilize governments by balancing the competing rights and interests of a pluralistic society — especially at a time of crisis. The ability of Australia to weather the pandemic and maintain faith in its own democratic institutions and processes will be closely watched by its neighbors. Australia is an immigrant nation and is proud of its multicultural values. At the same time, Australia is at the forefront of calling out and passing legislation against covert influence and foreign interference activities, mainly by Chinese operatives. 
Of highest concern is Beijing’s United Front, a vast, organized, and well-resourced network of domestic and foreign entities and individuals, whose purpose is to co-opt ethnic Chinese individuals and organizations, which has been called a “magical weapon” by Chinese President Xi Jinping. As awkward as it is, there is no escaping that race and ethnicity has become a legitimate political and national security issue, and a challenge to Australia democratic institutions. The following is just one illustration. In April 2019, the ABC reported that Gladys Liu, the first Chinese-born member of Parliament in Australia, was associated with Australian-based organizations with ties to the United Front. Liu’s loyalty was widely questioned when she refused to criticize Beijing or affirm Australia’s settled position on the South China Sea when asked about it on a television interview. Regardless of whether Liu was fairly scrutinized or not, one needs to be clear about the origins of the problem: the Chinese Communist Party (CCP) has chosen to politicize and even weaponize race as a tool of foreign policy and subversion. Xi has delivered multiple speeches and made it formal policy to demand loyalty and commitment from diasporas who the Party refers to as the “sons and daughters” of China. This implies one’s identity and loyalty are not defined by citizenship but race or ethnicity. In Australia, the majority of Chinese-language press is owned by entities linked to Beijing. The problem is compounded by the reality that social media platforms used by Chinese-Australians such as WeChat and Weibo are already moderated and censored by authorities in China. Many Australian-based Chinese community organizations have been set up specifically to influence the diaspora while existing ones are targets for influence and infiltration through financial incentives or intimidation. Beijing’s policies are creating concern for the government and outrage amongst many Australian citizens. A chilling illustration of the CCP threatening to use its supposed “magic weapon’ occurred in April 2017 at a time when the opposition Australian Labor Party was opposing the ratification of an extradition treaty between the two countries. In a meeting with Labor’s spokespersons for foreign affairs and defence, then-Politburo member and security czar Meng Jianzhu was widely reported to have issued the threat to his Australian interlocutors that Beijing would be forced to tell Australians of Chinese ethnicity the Labor Party did not support the Australia-China relationship unless the party changed its position on the treaty. As it is in other liberal democracies, the objective must be to ensure that Australians of all ethnicities feel free to hold and express their legitimate views without fear of censure or consequences. The point is not to tell the Chinese diaspora what they should think — it is to protect them against foreign governments telling them what they must think. Members of Chinese community organizations and the population at large both need to have the assurance that these organizations are not front entities for Beijing or else have been infiltrated to support the CCP’s agenda. If that assurance is lacking, all members will inevitably and unfairly be tainted simply by association. That will only lead to the fracturing of multicultural societies. 
If Chinese diasporas are to feel respected and valued in Australia and other countries, and if more ethnic Chinese citizens are to be encouraged to run for political office, the countering of Beijing's United Front operations needs to be taken seriously. Those operations are the source of the fissure in the first place. Existing legislation is largely about transparency with respect to one's source of funds and from whom one takes instruction. Legislation prohibiting such covert influence and interference activities ought to be passed. Politicians, community leaders, and individuals must be given the space and support to call out external attempts to covertly influence, silence, or intimidate. There needs to be transparency in media ownership and prohibitions on controlling ownership of media assets by entities linked to Beijing. Regarding social media and apps, the current Australian government's acknowledgment that applications such as TikTok and WeChat carry the risk of data being sent back to and misused by Beijing, but that there is no basis on national security grounds to ban these apps, appears to be a contradiction. It would be dangerous for these Chinese platforms to become the dominant ones used by Chinese diasporas in the country. Many members of the Chinese diaspora receive most of their news via Mandarin-language social apps, and most of the news content on these apps is drawn from censored material from mainland China. Most of all, the perceived link between race on the one hand, and one's loyalty and views on the other, must be broken. In Australia's case, failure to do so could mean that Liu is the first and last Chinese-born Australian to join the federal parliament — with ramifications for other democracies. Liberal democracies (i.e., those with universal suffrage and liberal institutions such as a free press and rule of law) excel at highlighting their weaknesses and are poor at showcasing their strengths. Authoritarian systems tend toward the converse. Xi Jinping's China has become adept at overplaying its achievements and underplaying its failures. Importantly, Beijing has become far more confident and brazen in promoting its approach to governance as superior to democratic approaches for developing economies. COVID-19 has challenged the robustness, adaptability, functionality, and resilience of all governance systems. That must be understood within the framework of the contest of values, institutions, and systems which is currently playing out, most of all in the Indo-Pacific region. Australia has stood up well so far, but there are indications of weaknesses and failings. These should not be swept under the carpet when the pandemic passes. Any deterioration in the standards of one of the few genuinely liberal democratic nations in the Indo-Pacific would have consequences for Australians and for fledgling democracies throughout the region. |
615ce7dd6c9ac7695dec3152cf596178 | https://www.brookings.edu/articles/the-self-limiting-success-of-iran-sanctions/ | The Self-Limiting Success of Iran Sanctions | The Self-Limiting Success of Iran Sanctions Reprinted by permission of INTERNATIONAL AFFAIRS, November 2011. Since the 1979 revolution that ousted Iran’s pro-American monarchy and replaced it with a theocratic regime hostile to the West, the United States has sought to temper Iran’s geopolitical ambitions through a combination of tough rhetoric and economic sanctions. After more than 30 years, the cycle is as unsurprising as it is ineffective: the United States and its allies orchestrate stringent economic measures through the United Nations, and then await concessions that somehow never materialize. Indeed, as UN proscriptions have amassed and Iran’s trade with its traditional partners withers, there is no indication that the theocratic state is prepared to adjust its aspirations with respect to either its nuclear programme or its claims to regional power. A closer look reveals that the international community missed a critical turning point in Iran’s international orientation, and squandered the single obvious opportunity to shift Iranian policies towards a more constructive direction. In the 1990s, Iran appeared to be on the verge of discarding its radical patrimony, at least with respect to its foreign policy, much as other revolutionary states such as China and Vietnam have done. The end of the long war with Iraq and the death of the Islamic Republic’s charismatic founder facilitated a period of reconstruction, a respite from the state’s existential insecurities, and a predictable reconsideration of the regime’s ideological verities. By the end of the decade, a reformist cadre led by President Muhammad Khatami sought to rejoin the international community by conceding to its mandates and adhering to its conventions. At the dawn of the twenty-first century, Iran finally appeared ready to usher in its own Thermidorian Reaction. Yet this prospect appeared to fade after the election of hardliner Mahmoud Ahmadinejad to succeed Khatami in 2005. In the succeeding years, the Islamic Republic has regressed towards policies that resemble the worst excesses of its zealous early years: at home, unambiguous repression of any dissent and an insistence on absolute fealty to an ageing clerical tyrant; abroad, provocative policies towards its neighbours and belligerence towards Washington. Unexpectedly, it has been a younger generation of Iranian politicians—Ahmadinejad and his cohort—who have rejected the nascent pragmatism of their elders; these children of the revolution are seeking to revive its mandates rather than to restrain them. At the same moment as Iran’s formidable new right wing came to the fore, the region began an even more dramatic set of political transformations, first with the US interventions to Iran’s east and west that removed the theocracy’s most menacing adversaries, and later with the advent of a powerful, far-reaching movement for democratic accountability across the Arab world. As a result of these intersecting trends, Iran’s paranoid, combative leadership has been emboldened to take advantage of the opportunities to be found in an uncertain regional environment with a shifting balance of power. For this reason, the threats posed by Iran’s domestic and regional policies loom ever larger for Washington and the broader international community. 
To date, however, the Obama administration has stuck to the essential framework of the carrot-and-stick diplomacy it adopted upon taking office in 2009—an approach that differs merely in style from that of the Bush administration during its second term. This self-described 'dual-track' strategy relies on economic pressure to persuade Tehran to enter negotiations and moderate its policies, consistent with the basic American formula for dealing with Iran since 1979. The achievements of such an approach have always been open to question. Even as the Obama administration has imposed the broadest and most robust multilateral restrictions on Iran in history, all of Tehran's most disturbing policies, including its aggressive nuclear programme, proceed apace. Sanctions have imposed heavy financial and political costs on the Islamic Republic, but they have not convinced Iranian leaders that their interests would be better served by relinquishing their nuclear ambitions, abandoning their other reckless policies, or even opening a serious dialogue with Washington. This obduracy is a function of the complex political transformation within Iran over the course of the past decade, the regime's well-honed capabilities for evading and insulating itself against sanctions, and of course the momentous changes that have swept the broader region. As a result, in dealing with the Islamic Republic of 2011, economic sanctions can have little expectation of achieving meaningful changes in Tehran's policies. This article examines the history of sanctioning the Islamic Republic, and argues that despite their increasing severity, sanctions have failed to achieve their intended policy results thanks to the regime's capacity for resisting international pressure. Moreover, the rise of a new generation of hard-liners and the uncertain aftermath of the Arab Spring have exacerbated the regime's aversion to compromise. Three decades of pressure The essential framework for US policy towards Tehran was established during the earliest hours after the seizure in November 1979 of the US Embassy and its staff. As a former senior State Department official recalled of the deliberations, 'almost as soon as policy discussions began on [the day after the Embassy was overrun], the members of the crisis team in both the White House and the State Department focused on a two-track strategy'. The objective then was to 'open the door to negotiation' while also 'increas[ing] the cost to Iran of holding the hostages'.1 From that time forward, each US president has indulged in fine-tuning, but the basic blueprint for addressing the array of threats emanating from the Islamic Republic has remained almost precisely the same as the plan crafted at the outset of the hostage crisis. The embrace of a dualistic approach for responding to the hostage crisis reflected the divisions that quickly erupted within the Carter administration, personified by two individuals: Secretary of State Cyrus Vance, who favoured negotiations, and National Security Advisor Zbigniew Brzezinski, who pressed for greater consideration of coercive options. However, none of the military options presented to the President in November 1979—including a rescue mission and retaliatory strikes or raids—offered compelling prospects for successfully extracting the captive diplomats. 
With American lives at stake and the hostages’ safe release defined both privately and publicly as the paramount US policy objective, the administration opted early on to exhaust non-military measures before resorting to force. As a result, this original version of the dual-track US strategy relied heavily on economic pressure: prohibition of Iranian oil imports to the United States, a freeze of all Iranian state assets held by US institutions, and eventually a travel ban and a comprehensive embargo on nearly all forms of trade with Iran. Of these measures, the freeze on assets proved uniquely powerful: in one fell swoop—precipitated at least in part by concerns that Iran planned to withdraw its US deposits— Washington ‘effectively immobilized $12 billion in Iran’s assets, including most of its available foreign exchange reserves’.2 The financial constraints imposed by the freeze may not have fully crippled the Iranian economy, which was already reeling as a result of revolutionary chaos, but this measure magnified the negative consequences of Iran’s inept and ideological management of its oil sector as well as the existential crisis precipitated by the September 1980 Iraqi invasion. As the hostage crisis moved slowly towards resolution, economic interests assumed an increasingly high priority for Iran’s leadership in the terms of any agreement, and the negotiations that produced the January 1981 Algiers Accords entailed a considerable focus on the accounting and mechanics of settling the outstanding financial disputes between the two countries. As one of the negotiators of that agreement has argued, the existence of vast and complex financial claims between the two countries, including many private American claims stemming from the revolution itself, would have effectively obligated both sides to engage in negotiations even if no political imperative for doing so had existed.3 In this respect, the hostage crisis may offer the single striking example of the efficacy of sanctions in producing a demonstrable concession from Tehran, and the US–Iran Claims Tribunal that was created as a result of the Algiers Accords has proved the single consistent and enduring mechanism for official interactions between the two governments. However, the hostage crisis also underscores the limitations of sanctions, and particularly highlights the difficulty of achieving broad multilateral support for stiff penalties against Iran. Despite US allies’ sympathies with the American predicament and outrage over the affront to international law and diplomatic protocol, even Washington’s closest international partners proved deeply reluctant to jeopardize their economic interests in Iran or undertake measures that might alienate opinion in other parts of the Muslim world. Japan and the European Community adopted minimalist trade sanctions only midway through the 15-month crisis, and even those measures were loose enough to enable trade to continue to rebound from post-revolutionary disruptions. For its part, the Soviet Union actively undermined the sanctions regime by vetoing a modest array of proposed United Nations sanctions in January 1980 and dangling ambiguous offers of economic assistance to Tehran throughout the crisis. The deliberations over sanctions during the hostage crisis reflected a persistent overconfidence among US officials about Washington’s capacity to exercise a positive impact on the struggle for power within Iran’s fractious polity. 
Then as now, US officials sought to craft a balance of inducements and penalties that would empower the theocracy's persistently embattled moderates. The release of the hostages was ultimately a pragmatic move by Tehran; it is difficult to discern any significant constructive influence of carrot-and-stick diplomacy on the interim outcome of the enduring contest for influence within the Islamic Republic. Although tensions between Tehran and Washington remained high even in the aftermath of the hostages' release, the Algiers Accords included provisions for the lifting of trade sanctions, meaning that there were now no legal impediments to economic interactions between Iranians and Americans. However, the climate degenerated quickly, as Iran stepped up its campaign of subversion against its neighbours and began cultivating terrorist proxies in the Middle East. At the same time, Washington began shifting from its previous neutrality in the long, bloody Iran–Iraq war to a distinct tilt towards Baghdad, in part to avert the regional upheaval that would have ensued if Iran had succeeded in its aim to take the Iraqi capital. Throughout its two terms, the Reagan administration sought new mechanisms for ratcheting up pressure on Tehran, including a willingness to deploy military as well as economic coercion. Early measures included an aggressive campaign to prevent Tehran from securing desperately needed military equipment and the 1984 designation of Iran as a state sponsor of terrorism for its support of Hezbollah in Lebanon and its involvement in the bombing in 1983 of a US barracks in Beirut that killed 241 US service members. These restrictions were relatively limited measures, at least compared to the hostage-era sanctions, and whatever sense of urgency they may have conveyed to Tehran was surely undercut by the nearly simultaneous covert sales of US arms to the revolutionary regime that took place as part of the Iran–Contra episode. Just as State Department officials in 1979–81 sought to empower those presumed to be moderates in the hope of securing the hostages' freedom, the Reagan administration officials who embarked on the secret weapons sales were motivated by a conviction that the effort would tip the internal balance of power against Khomeini and in favour of pro-western moderates. In the embarrassing aftermath of the Iran–Contra scandal, the Reagan administration persisted in pursuing a bifurcated policy towards Tehran, albeit one that was much more heavily biased towards coercion. While senior US officials continued to hold the door open to engaging Iran's leadership, this phase also included the sole period of direct armed conflict between Iranian and American forces, after Washington agreed to escort Kuwaiti oil tankers that had come under Iranian attack through the Gulf. In addition, the administration strengthened the sanctions regime in the wake of the Gulf skirmishes, with a 1987 embargo on all imports from Iran and selected exports to it, although it should be noted that the effects of this measure were checked by the loopholes that permitted US companies to continue to do business with Iran's oil sector.4 The persistence of a hostage problem in the Middle East persuaded President George H. W. Bush to turn up the volume on inducements within the dual-track approach to Iran. In his inaugural address of January 1989, President Bush referred to American hostages held in Lebanon and added that 'assistance can be shown here, and will be long remembered. Good will begets good will. 
Good faith can be a spiral that endlessly moves on.’5 He and other officials repeatedly signaled publicly their openness to engagement. Iran’s leaders vociferously and publicly rebuffed Bush, and their back-channel efforts to assist in Lebanon proved unconvincing to Washington given the ongoing violence there. Still, even as the administration’s engagement track faltered, so too did any meaningful efforts to apply new pressure on Tehran. New sanctions enacted during this period were codified in the 1992 Iran–Iraq Nonproliferation Act, which prohibited any transfer of any goods or technologies that could facilitate the development of chemical, biological or nuclear weapons or destabilizing conventional weapons such as missiles. This legislation is notable as the first salvo of Washington’s increasing reliance on threats of extraterritorial measures against third countries as a means of attaining new leverage against Iran, something that prior administrations had studiously avoided for fear of alienating crucial allies. The administration also sought to strengthen multilateral restrictions on dual-use goods, albeit to little effect. The relative inertia displayed by the first Bush administration on Iran masks a hardening of Washington’s attitude towards the theocratic state that is evident in two important respects. The first was the decision to ostracize both Iran and Iraq, as part of a broader reconceptualization of a post-Soviet strategic landscape dominated by threats from rogue states.6 This approach contrasts with the historic tendency to balance one of the northern Gulf states against the other. Second, the administration’s direct and fruitful immersion in the precarious terrain of Middle East peacemaking created a new imperative for antagonism between the two old adversaries. At the same time, Tehran stepped up its regional assertiveness in the aftermath of the Iraqi defeat and in response to the momentous prospect of Arab– Israeli peace. Its stated goal was to drive American forces from the region and reverse the tentative shoots of regional peacemaking; beyond this, the revolutionary leadership sought to establish itself as the pre-eminent power in the Gulf and undermine its Arab neighbours. All these trends only intensified during the Clinton administration from 1993. Secretary of State Warren Christopher’s experience of leading the contentious negotiations that freed the hostages left a legacy of profound distrust of Iran and a revulsion from the policies of the Islamic regime. He and others in the administration entered office determined to avoid Tehran’s duplicitous tactics and to firmly isolate Tehran, as well as Baghdad, in order to facilitate the emergence of a ‘new Middle East’ anchored by a post-Oslo Accords Arab–Israeli peace. The new US policy became known as ‘dual containment’, a rejection of the historical tendency of Washington to counterbalance one of the Gulf powerhouses by indulging the other. Washington sought to shape the essential parameters of regional politics through economic warfare against Iran. The Clinton administration’s initial efforts were focused on generating greater cooperation from its reluctant European allies to apply pressure on Tehran, a posture that was undermined by the persistence of considerable volumes of US trade with Iran. 
A combination of European resistance, frictions between the Republican Congress and a Democratic administration over Iran policy, and an unexpected Iranian overture precipitated a dramatic tightening of US economic restrictions against the Islamic Republic. In 1995, Tehran offered its first upstream oil deal since the revolution, and opted to put the offer to an American company in what Iranian President Ali Akbar Hashemi Rafsanjani later described as ‘a message to the United States that was not properly understood’.7 Whatever the intended impact may have been, the move triggered a renunciation of the deal, as well as new executive and congressional measures that imposed a comprehensive US trade and investment ban and threatened penalties on third-country investors in Iran’s energy sector. The latter measure, codified in the Iran–Libya Sanctions Act of 1996, is notable for having caused a predictable uproar among America’s allies. The application of secondary sanctions violated the declared US commitments to free trade, complicated Washington’s efforts to generate a united front with Europe, and ultimately was diluted by the evident reluctance of Washington to implement its provisions—or even identify its prospective targets. No sooner had the Clinton administration erected the most rigorous unilateral sanctions regime since the hostage crisis than it was unexpectedly confronted by a changing set of circumstances within Iran’s domestic political contestation. The 1997 election of a moderate president and the advent of a new political force within the Islamic Republic that aimed at reviving the republican imperatives of the Islamic revolution prompted a frenzied US effort to translate these changes into a genuine improvement in the bilateral relationship. After a flurry of signals from both sides, including the proposal by Secretary of State Madeleine Albright for the development of a ‘road map’ towards rapprochement, the same American administration that had significantly intensified sanctions for the first time since the revolution whiplashed in the opposing direction. As part of the most dramatic series of US overtures towards Tehran since 1979, President Clinton adopted a number of measures that eased existing sanctions. These included authorizing the sale of spare airline parts, lifting restrictions on sales of food and pharmaceuticals, removing Iran’s designation as a conduit for narcotics production and transit, and—finally and most dramatically—a wide-ranging speech by Albright that articulated US regret for a range of previous policies and announced the lifting of sanctions on caviar, carpets and pistachios. Neither the onerous economic restrictions of 1995 and 1996 nor the accommodating overtures of 1998, 1999 and 2000 generated positive changes in Iranian behaviour or rhetoric. Much like the Clinton administration, President George W. Bush and his team entered office with deep mistrust of Iran and a determination to avoid repeating the mistakes of their predecessors. Unlike the 1990s, the events of the Bush era—particularly the 9/11 attacks and the revelations about Iran’s covert nuclear programme—only reinforced that mistrust and the pursuit of maximum pressure on Tehran. 
Like Clinton, President Bush sought to cordon off Iran from the rest of the Middle East in order to facilitate a broader regional transformation; however, in doing so, he also sought to align US policy with the aspirations of the Iranian people, rather than exploit whatever distinctions remained within its rapidly consolidating hardline government. The hallmarks of his first-term approach to Tehran tended towards diplomatic grandstanding over significant policy initiatives or specific new sanctions. This period coincided with the waning days of the Iranian reform movement, which struggled to sustain temporary concessions on the regime’s nuclear activities. During its second term, however, the Bush administration adopted a more assiduous and innovative approach that sought to strengthen US leverage in dealing with an apparently ascendant Iran. While retaining the pugnacious rhetoric of his first term, President Bush adopted a substantially different course, one that prioritized multilateral cooperation and identified new mechanisms for broadening the impact of US unilateral measures. This involved several components, including the use of executive prerogative to designate Iranian entities as associated with terrorism, under measures adopted after the September 11 attacks, as well as enhanced counterproliferation sanctions. These unusually far-reaching restrictions effectively precluded foreign banks with US interests or presence from doing business with designated institutions in Iran. While extraterritorial sanctions had provoked European opposition in the past, these measures met little overt resistance from either the diplomatic or the financial community. The surprising degree of compliance reflects a combination of effective US diplomacy with allies, a more sceptical international mood towards Tehran, and the obliqueness of the measures, which ostensibly targeted merely the Iranian institutions but indirectly imposed constraints on any of their foreign business partners. These measures were accompanied by a concerted campaign, primarily focused on financial firms in Europe and the Gulf, intended to highlight both the increasing legal roadblocks to investing in Iran as well as the reputational risks of doing so. The outcome was dramatic: after more than two decades of trying to bring the rest of the world into line with American efforts to isolate and pressurize Iran, Washington helped launch a wave of divestment from Iran simply by capitalizing on the unique role of the US financial system in magnifying the impact of US restrictions. The US federal measures were complemented by the proliferation of state-level measures, the cumulative effect of which was to reinforce the disincentives for any firm with American interests to deal with Iranian counterparts. Finally, the Bush administration embarked on a long and arduous campaign to bring the Iranian nuclear file before the United Nations Security Council (UNSC), in the course of which it reversed its prior refusal to negotiate with Tehran and relaxed its stance towards future Iranian nuclear activities. These concessions won Washington a trio of successive UNSC sanctions resolutions that began to build an international consensus on penalizing both the state of Iran and specific institutions within it over the nuclear issue. The end result was a new relevance for Washington’s Iran policy—but with no evidence that this sudden salience is producing the intended moderation of Iranian policies. 
In contrast to his predecessors as well as his rivals, as a candidate for the US presidency Barack Obama publicly campaigned on the exigency of a more effective approach to Iran; during the Democratic primary race, he embraced the need for direct negotiations without preconditions. Moreover, after taking office the new President personally invested himself in reaching out to Iran, via a video message commemorating the Iranian new year (Nowruz) in March 2009—a greeting that was evidently crafted to appeal to regime elites as well as ordinary citizens. Over the course of the next several months, the administration reportedly initiated other gestures towards Tehran of a more private nature, including unprecedented correspondence from President Obama to the Iranian supreme leader. However, President Obama indicated from the outset that engagement would be given an early deadline to prevent Tehran from using the process to evade demands made of it. As in other elements of its Iran policy, the Obama administration retained the basic outline of the Bush approach to Iran sanctions, with modest enhancements. The designation of Iranian individuals and institutions under the counterproliferation and counterterrorism statutes remains a powerful tool for creating ripple effects across the global landscape of the country’s trade ties. Beyond these steps, however, President Obama has sought to enhance the persuasive power of US policy—initiating early overtures towards Tehran as a means of demonstrating to Europe the seriousness of American readiness to negotiate, making key compromises on issues at stake with Russia to draw Moscow into a more cooperative relationship on Iran, and investing in a protracted negotiation of the latest (and presumably last) UN resolution on Iran, Security Council Resolution 1929, so that it would serve as a platform for additional measures by individual states as well as the European Union. The advantages of this synergy cannot be overestimated, and in many ways those subsequent unilateral sanctions are far more significant than the UN measure itself. Washington took other steps to encourage cooperation among ‘like-minded states’ in Europe and in Asia, notably by using sanctions policy to highlight human rights abuses in Iran and to restrict the government’s access to technology used to control the free flow of information. Notable new measures include the July 2010 Comprehensive Iran Sanctions and Divestment Act (CISADA), which includes a rescission of the prior exemption of caviar, carpets and pistachios from US sanctions, as well as a new array of extraterritorial measures including restrictions on sales of refined petroleum products to Tehran. In part because CISADA was enacted so quickly on the heels of the UN resolution, there was some grumbling, particularly from the Russians, that Washington was exceeding its mandate. Still, the unilateral American actions did not provoke intra-alliance tensions or defections from the overall international consensus on putting pressure on Tehran. A measure of success—and continued obduracy After more than three decades of reliance on sanctions as the centrepiece of US policy towards Tehran, Washington can finally claim a measure of success, at least with respect to the breadth of multilateral cooperation, the potency of international implementation, and apparent costs imposed on Iran as a result of its defiance of UN mandates. The consequences of the sharpened sanctions regime can be seen across the board within Iran. 
Trade with Europe has declined precipitously, and sanctions have forced Tehran to recapitalize its banks and seek out creative mechanisms—including barter instruments—for increasing proportions of its considerable trade finance requirements. Indian imports of Iranian crude oil have gone unpaid for months, for lack of a legally viable payment process, while Iranian jets have been grounded in Europe as a result of US restrictions on sales of refined petroleum products. A wide range of Iranian politicians, including Ayatollah Khamenei, have acknowledged the increasing hardships resulting from the restrictions. The argument in favour of sanctions is grounded in the historical evidence that Iranian policy is often shaped by a rational assessment of costs and benefits. And yet it is not apparent that the mounting costs of sanctions have brought the clerical leadership any closer to a meaningful process of dialogue—much less serious compromises—on its nuclear programme or the other elements of its provocative policies. This reflects the formative influence of Iran's domestic political dynamics, and its unexpected evolution, on the regime's assessment of risks and rewards. Historically, Iranian leaders have tended to reject the significance of sanctions, at least rhetorically, and have celebrated the country's capacity to withstand external economic pressure, particularly the measures imposed on Iran by Washington. In the immediate aftermath of the revolution, this ethos was philosophically consistent with the revolutionary leadership's quest for independence and its ambivalence about capitalism and international entanglements. The rupture of Iran's financial relationship with the United States and the American ban on exporting military equipment to Iran spurred Tehran to invest in its domestic capacity, particularly in the security sector. Over time, sanctions have been integrated within the regime's ideological narrative. Like the war with Iraq in the 1980s, economic pressure represents another component of the international conspiracy to undermine the Islamic Revolution, a plot that has been foiled by Iran's wise and righteous leaders, who have used sanctions to the country's benefit by strengthening its indigenous capabilities and sovereignty. From this perspective, the hard-liners may perceive merely surviving new sanctions—even at a significant price—as a victory, and will portray it as such to their support base. 'These [past] sanctions didn't work to our detriment,' Khamenei said in 2008. 'We were able to create an opportunity out of this threat. It's the same today. We are not afraid of Western sanctions. With the blessing of God, the Iranian nation, in the face of any sanction or economic embargo, will demonstrate an effort which will double or increase its progress by many folds.'8 Since that time he has returned to the question of sanctions repeatedly. In February 2011, Khamenei implicitly acknowledged the constraints that the latest round of UNSC sanctions, including an arms embargo, had imposed on Iran's military in an address to the air force on the anniversary of the revolution. These tendencies have been redoubled as a result of the historic transformation that Iran has undergone over the course of the past two decades. The 1990s are often seen as a decade of economic reconstruction and political reform in Iran's revolutionary theocracy. 
Intellectuals, businessmen and technocrats dominated the public sphere, as Iran seemed to be distancing itself from its revolutionary heritage. The clerical reformers were seeking to reconcile democracy and religion, while the younger generation was moving away from a political culture that celebrated martyrdom and spiritual devotion. However, beneath the surface of innovation and change was an emerging war generation—pious former soldiers who as young men had served on the front lines of the Iran–Iraq conflict. This cohort of austere veterans maintained their revolutionary zeal and a commitment to Ayatollah Khomeini’s original mission. This segment of society would produce some of Iran’s more important future power-brokers, such as Mahmoud Ahmadinejad, Sa’id Jalili and Mujtaba Samarah Hashemi. The young reactionaries defined their ideology by calling for a return to the ‘roots of the revolution’. The new right would often romanticize the 1980s as a pristine decade of ideological solidarity and national cohesion. They saw it as an era when the entire nation was united behind the cause of the Islamic Republic and was determined to assert its independence in the face of western hostility and Saddam’s aggression. Khomeini and his disciples were dedicated public servants free of the corruption and crass competition for power that would characterize their successors. Self-reliance and self-sufficiency were the cherished values of a nation that sought to mould a new Middle East. Western restrictions ‘backfired’, Khamenei said earlier in 2011, adding: ‘The sanctions made our youth think and produce whatever the enemy did not like us to possess. They produced such things on their own and in some cases they produced even better versions.’9 Like all idealized recollections, the conservative view of the 1980s has a limited connection to reality. For most Iranians, the first decade of the revolution was a time of economic privations, encroaching autocracy and a seemingly endless war that nearly destroyed the country. In the path of self-aggrandizement, Iran has to be prepared to pay a price. Ayatollah Ahmad Jannati, the head of the Guardian Council, stressed this point, noting: ‘We have to have perseverance. We will tolerate sanctions and enmities and continue in our Islamic stance.’ While serving as deputy secretary of the Supreme National Security Council, Ali Hussein-Tash similarly noted: ‘A nation that does not engage in risks, and difficult challenges, and a nation that does not stand up for it, can never be a proud nation.’ In essence, the new right has redefined Iran’s national interests, privileging strategic gain over economic growth. The western politicians who insist that financial penalties will somehow detract the theocracy from its planned course do not fully appreciate the hard-liners’ mindset. Defiance and the aspirations of Iran’s new right Given their determination to eclipse American power, the Iranian new right seems to regard the acquisition of nuclear capability as an important objective. Ayatollah Muhammad Taqi Mesbah-Yazdi has declared this task a ‘great divine test’, while the mouthpiece of the extreme right, Kayhan, has openly called for acquiring ‘knowledge and ability to make nuclear weapons that are necessary in preparation for the next phase in future battlefields’.10 While the Rafsanjani and Khatami administrations looked on nuclear weapons as tools of deterrence, for the conservatives they are a critical means of solidifying Iran’s pre-eminence in the region. 
The current leaders of the Islamic Republic accordingly display little intention of bartering the programme away for commercial contracts or security guarantees which they openly deride. A hegemonic Iran requires a robust and extensive nuclear infrastructure. As is typical of Iranian politics, the rise of the new right has invited challenges from other conservatives who may share its objectives but are pressing for a more tempered approach. Ironically, evidence suggests that this tendency was also shaped by the seminal experience of the Iran–Iraq war. In the aftermath of the war, many officials within the intelligence and security services, along with combatants from the Revolutionary Guards, began to contemplate their nation’s future path. Among the leading members of this group are the mayor of Tehran, Muhammad Qalibaf; the Speaker of the parliament, Ali Larijani; and the former defence minister Ali Shamkhani. Their writings and speeches reflect a shared conviction that the end of the Cold War and Iran’s unique geographical location position it naturally as the pre-eminent power of the region; but that, for too long, the ideological edges of the regime and its unnecessarily hostile approach to the region have thwarted its ambitions to occupy that position. They argue that the only way for the Islamic Republic to reach its desired status is to behave in a reasonable manner while increasing its power. Such an Iran would have to impose some limits on the expressions of its influence, accede to certain global norms and be prepared to negotiate mutually acceptable compacts with its adversaries. From the perspective of the country’s dominant conservatives, Iran has been offered a rare and historic opportunity to emerge as the predominant power in the Persian Gulf and a pivotal state of the Middle East. The ‘Arab Spring’ that has already displaced an important American ally in Egypt and is putting pressure on the remaining pillars of the US alliance system, offers opportunities for projection of Iran’s influence. The consolidation of Shi’i power in Iraq, the continued ascendance of Hezbollah and Hamas, and the disarray in the pro-western camp are all seen as positive developments. Iranian officialdom is convinced that the goal of regional predominance is within its grasp. Whether it is correct in its assessment of America’s declining power or the fragility of the hold on power exercised by the incumbent Arab regimes is less relevant. The salient point is that such perceptions condition their approach to international affairs. At a time when the region is undergoing an unpredictable transition and the Arab populace is pressing for autonomy from American mandates, Iran cannot be seen as conceding to the US on the critical nuclear issue. The domestic politics of Iran only reinforce calls for intransigence. After years of proclaiming that the nuclear programme is the most important national asset since the nationalization of the oil industry in 1951, the government believes that even merely suspending the programme will challenge the legitimacy of the state. The Islamic Republic’s deliberate strategy of marrying Iran’s national identity to the cause of nuclear aggrandizement makes the task of engaging its leadership in a process of constructive diplomacy even more daunting. The ultimate arbiter of Iranian politics and the person responsible for setting the national course remains the Supreme Leader, Ali Khamenei. 
Thus far, Khamenei has upheld the essential aspects of the conservative approach and has been effusive in his praise for Iran’s nuclear defiance. ‘Among the significant characteristics of the ninth government is justice-seeking and its campaign against global hegemony which are among the mottos of the revolutionary government,’ insisted Khamenei. He has also echoed Ahmadinejad’s claims by stressing that any ‘setback will encourage the enemy to become more assertive’.11 As a Supreme Leader who has survived the internal challenge of the reform movement and the external threat of American intervention, he seems at ease with the new right’s nuclear advocacy. As Khamenei gazes across Iran’s contested domestic landscape, his foremost concern is the survival of the regime, which is beset by economic difficulty and political ferment. The rise of the democratic opposition group known as the Green Movement has unsettled the Supreme Leader, who tends to view all opposition as externally engineered, and who now sees a menacing America in conjunction with its fifth column seeking to displace the theocratic regime. ‘It was sedition. Sedition means that certain people chant slogans with a 100 percent wrong content in order to deceive people, but they failed. Last year’s sedition was a manifestation of enemies’ plots,’ pronounced Khamenei.12 Whatever doubts and suspicions Khamenei already harboured against the West were further accentuated by the latest wave of protest to engulf the Islamic Republic. In an important and largely ignored speech in 2010, Khamenei set out his case against engaging with the United States. ‘The change of behavior they want—and which they don’t always emphasize—is in fact a negation of our identity … Ours is a fundamental antagonism,’ declared Khamenei.13 Iran’s Supreme Leader appreciates that engagement with the United States is potentially subversive and could undermine the pillars of the Islamic state. At a time when the Arab public is seeking to reclaim its destiny and finally free itself from external dominance, Khamenei does not want to be the regional ruler who compromises with the United States. In the end, for the Supreme Leader the political costs of engagement outweigh its economic benefits. This stance is consistent with the strategic preferences of the Islamic Republic from its inception. Since the revolution, Iran has experienced a number of episodes of severe economic pressure, as a result of volatile oil prices and the severe political crises that ensued after the revolution and during the war with Iraq. None of these episodes of economic pressure generated any significant moderation of Iranian foreign policy; on the contrary, when purse strings tightened the Iranian regime pulled together and rallied the public behind it. The current political context is, of course, unique, but a review of Iranian history dispels any illusion that Tehran will automatically buckle when its budget becomes too tight. Iran’s conservative power structure has responded to the latest volley of sanctions in a multifaceted fashion, including defiance, mitigation, aversion, insulation and a self-serving public diplomacy campaign. Tehran has taken a number of steps over the years to mitigate its vulnerability to external economic leverage. 
In particular, Tehran instituted a range of measures to minimize gasoline consumption and ramp up refinery capacity in a bid to reduce the country’s reliance on imported petroleum products, and has launched a historic effort to eliminate the longstanding and profoundly debilitating price subsidies on various vital consumer goods, including bread and gasoline. These steps have been a clearly articulated priority for Tehran for at least several years, specifically intended to undercut the impact of international restrictions.14 The regime is resourceful, adaptable and well versed in insulating its preferred constituencies and in identifying alternative suppliers. Through trade and mercantilist diplomacy, Iran has deliberately sought to expand its network of trading partners and to reorientate its trade and investment patterns to privilege countries with international influence and minimal interest in political intervention. Iranian leaders are experienced at replacing prohibited suppliers, finding alternative financiers, and absorbing additional costs in order to mitigate the impact of sanctions. Loopholes in the landscape Since the 1979 revolution, sanctioning Iran has largely been the lonely work of the US government. Underpinning the international community’s historical reluctance to embrace sanctions is a divergence in views both on Iran itself and on the efficacy of economic pressures. The early achievements of the Obama administration have established a solid basis for multilateral cooperation on Iran, but there is reason to suspect that the international community may splinter as the standoff persists. Few countries other than the United States have consistently treated the Islamic Republic as a pariah state; on the contrary, important international actors such as China and Russia have invested significantly in developing a deep relationship with a country viewed by many as the region’s natural powerhouse. And while energy interests and other economic enticements, including Iran’s role as a market for Russian arms, have proved powerful binding forces, dismissing international resistance to sanctions as purely mercenary is overly simplistic. In Moscow, Beijing and other capitals, Iran remains a useful interlocutor in a critical region of the world, and these countries are loath to jeopardize their relationship with this important agent. They also share a resentment of American prerogatives and a mistrust of Washington’s intentions. Achieving international consensus on tough sanctions is further complicated by divergent perspectives on the likely consequences. Traditionally, Washington has argued that increasing the costs of Iranian malfeasance can alter the regime’s policy calculus and dissuade it from adopting problematic policies. This view of sanctions as an instrument that can affect a recalcitrant regime is not widely shared within the international community. In particular, Moscow and Beijing have repeatedly raised concerns that, rather than inducing moderation, sanctions might provoke further Iranian radicalization and retaliation, either via direct actions against governments that adhere to any boycott or by accelerating their nuclear activities and withdrawing from the Nuclear Non-Proliferation Treaty. Moscow’s and Beijing’s reluctance to follow the US line is also informed by their long memories of their own countries’ experiences with sanctions and other forms of western economic pressure. 
In recent years, Russia and China have drawn the lion's share of attention and recriminations for hindering progress on sanctions, and yet ambivalence runs deep throughout much of the rest of the international community. Even within many European polities, the legacy of three decades of 'constructive engagement'—an approach that endeavoured to moderate Iranian policies by drawing the regime into a more mutually beneficial network of relationships—has left a residue of discomfort among some leaders with sanctions as the primary policy instrument. In addition, Iran's neighbours in the Persian Gulf region, who revile the Shi'i theocracy and would prefer almost any outcome to a nuclear-capable Iran, remain somewhat ambivalent about directly confronting the Islamic Republic with anything short of devastating force.15 Their trepidation is based on fears of Iranian retaliation and concerns about preserving their own economic stability in the midst of profound global uncertainty. In any negotiations involving multiple parties and interests, a single, influential hedger can dissuade other fence-sitters from signing up to an agreement. Such self-reinforcing mistrust within the international community persistently undercuts efforts to achieve a comprehensive sanctions regime. Today, European companies grumble about pressure to forfeit opportunities to their Chinese competitors, who will quickly take their places with impunity. Tehran has exploited this dynamic, seeking to expand its economic ties in ways that complicate any prospects for western leverage. Iranian leaders have also used the opportunities afforded by the rise of ambitious new powers on the international scene, through mechanisms such as the May 2010 'Trilateral Declaration' with Brazil and Turkey that sought to undercut progress on the latest round of UN sanctions. Conclusion: options and alternatives Economic sanctions are typically intended to influence both the cost–benefit analysis of decision-makers and the ability of their government agencies to implement problematic policies. It seems self-evident that 30 years of American economic restrictions and broad export controls on military materiel have imposed functional constraints on Iran's capacity to cause trouble in its region. But on the arguably more important plane of leadership choices, sanctions have thus far failed to dissuade Tehran from pursuing its most objectionable policies, particularly its efforts to develop a vast nuclear infrastructure. During the tenures of Rafsanjani and Khatami, the Iranian approach to the world was conditioned by tensions between pragmatism and revolutionary values. But it has undergone a marked change over the course of the past decade. Under the auspices of the Supreme Leader, a 'war generation' with imperial ambitions and an austere Islamism has come to power, redefining the parameters of Iran's international relations and pressing its perceived advantages to their uppermost limits. Iran is well along the path to achieving a nuclear weapons capability while emerging as the most important state in the Middle East. As long as Iranian leaders perceive themselves to be under siege from a domestic insurrection orchestrated by their longstanding enemies, they may be reluctant, and less able, to negotiate in a serious and sustained fashion with the international community, particularly on the nuclear programme—an issue that they have identified as critical to the security of the regime and the state. 
Moreover, regional developments are almost certainly undermining the ultimate objective of the Obama administration’s approach, namely to pressurize or persuade Iran’s leader to bargain this nuclear programme away. The current approach is minimally sufficient for dealing with Iran, in the sense that it has successfully impeded Iran’s most problematic policies without actually generating much progress towards reversing them or altering the regime’s political calculus. But without a viable endpoint, Washington’s strategy is simply too heavily reliant on economic sanctions, a tool whose efficacy progressively declines, to resolve successfully the most urgent American concerns about Iranian policies. Even though the sanctions succeed on a purely economic basis—that is, in imposing significant costs on the regime and exacerbating public frustration over economic hardships—they appear to be backfiring by further entrenching Iranian intransigence. Under such circumstances, the policy of economic sanctions as a means of producing reliable interlocutors must be reconsidered. 1 Harold H. Saunders, ‘Diplomacy and pressure: November 1979–May 1980’, in Warren Christopher et al., American hostages in Iran: the conduct of a crisis (New Haven, CT: Yale University Press, 1985), p. 73. 2 Robert Carswell and Richard J. Davis, ‘Crafting the financial settlement’, in Christopher et al., American hostages in Iran, p. 232. 3 Roberts B. Owen, ‘Final negotiation and release in Algiers’, in Christopher et al., American hostages in Iran, pp. 299–300. 4 Hossein G. Askari, John Forrer, Hildy Teegen and Jiawen Yeng, Case studies of U.S. economic sanctions: the Chinese, Cuban and Iranian experience (Westport, CT: Praeger, 1987), p. 190. 5 ‘The 41st President; transcript of Bush’s Inaugural Address: “Nation stands ready to push on”’, New York Times, 21 Jan. 1989. 6 Robert Litwak, Rogue states and U.S. foreign policy: containment after the Cold War (Washington DC: Woodrow Wilson Center Press, 2000), p. 166. 7 New York Times, 10 May 1995. 8 Speech by Ayatollah Seyyed Ali Khamenei in Shiraz, 20 April 2008, Vision of the Islamic Republic of Iran Network 1, Thursday, 1 May 2008, World News Connection, document no. 200805011477.1_cd440fd14763d95e, accessed 25 Oct. 2011. 9 Islamic Republic News Agency (IRNA), 2 Feb. 2011. 10 Keyhan, 12 Feb. 2006. 11 IRNA, 2 March 2008. 12 IRNA, 8 June 2011. 13 IRNA, 20 Aug. 2010. 14 See e.g. Khamenei’s sermon on 13 Oct. 2007, in which he declares that ‘we pay billions to import gasoline or import other things in order that a certain section of us—or a segment of our society—can spend and be extravagant. Is this right, I ask you? We, as a nation, have to look at this as a national problem . . . They [the West] have launched sanctions against us time and again precisely because they pin their hope on this particularly negative characteristic of our nation. If we continue to be a wasteful and profligate nation, we will be vulnerable to difficulties. But a nation which refrains from such extravagance and takes care with its expenditures and revenues will not be vulnerable to these difficulties. In this case they can sanction the nation all they like. Such [a] nation will not suffer if it faced sanctions.’ World News Connection, NewsEdge document no. 200710131477.1_adcf03c738cd1232, accessed 25 Oct. 2011. 15 ‘Iran sanctions raise Saudi doubts’, Al-Jazeera (English), 16 Feb. 2010, http://english.aljazeera.net//news/middleeast/2010/02/201021641617847397.html, accessed 25 Oct. 2011. |
1614d91fcac58458d74efc2439af8fad | https://www.brookings.edu/articles/the-senate-and-executive-branch-appointments-an-obstacle-course-on-capitol-hill/ | The Senate and Executive Branch Appointments: An Obstacle Course on Capitol Hill? | The Senate and Executive Branch Appointments: An Obstacle Course on Capitol Hill? Bolstered by analyses from both journalists and academics, the conventional wisdom now holds that the Senate has become increasingly hostile to presidential appointees. Would-be judges, justices, ambassadors, commissioners, and executive branch officials are “borked” by vicious special interests and their Capitol Hill co-conspirators. Appointees are “held hostage” by senators who seek substantive trade-offs or the confirmation of their own favored candidates for judicial or regulatory posts. Senators place so-called “holds” on nominations, thus delaying matters interminably. All in all, the Senate’s performance, at least as commonly portrayed, does little to enhance the appointment-confirmation process. Quite the contrary. The Senate, to recall Robert Bendiner’s description of more than 30 years ago, seems a major culprit in the lengthy and often distasteful politics of confirmation—a veritable “obstacle course on Capitol Hill.” This characterization fits with our broader understanding of the Senate of the past 20 years. As detailed by political scientist Barbara Sinclair and her fellow congressional scholars, the Senate has become both highly individualized and extremely partisan. At first blush, such a pairing seems unlikely—would not senators in a highly partisan legislature subordinate their individual desires for the good of the entire partisan caucus? But the Senate, once a bastion of collegiality, has become less civil, less cordial—sometimes almost rivaling the raucous House of the 1990s in its testiness. The lengthy, increasingly bitter partisan stand-off over the final year of the Clinton administration has given further credence to the perception that the Senate has become deeply hostile to appointments from a Democratic executive. Still, headlines, assumptions, and conventional wisdom can be wrong, to a greater or lesser extent. We might do well to examine the data on confirmations. Do Senate confirmations take longer than they used to, especially in the modern era? Are more nominations withdrawn or returned to the executive? Second, we might well ask how Senate processes might be altered in a partisan, individualistic era, especially when the upper chamber, unlike the rule-dominated House, usually operates through the mechanism of unanimous consent—that is, a single senator’s objection can delay, if not stop, the normal legislative process. Even if we find that the conventional wisdom is accurate and that presidential appointments often run into a congressional roadblock, there may be little that can be done within the legislative branch. Indeed, Christopher Deering’s assessment of Senate confirmation politics, circa 1986, bears repeating: “The relationship between the executive and legislative branches…remains essentially political…. The Senate’s role in the review of executive personnel is but one example of that relationship. The Senate’s role in the confirmation process was designed not to eliminate politics but to make possible the use of politics as a safeguard…a protection against tyranny.” Circa the year 2000, one might well argue that more is going on than “protection against tyranny,” but exactly what remains open to question. 
The following discussion focuses on 329 top policymaking, full-time positions in the 14 executive departments that require presidential appointment and the approval of the Senate. Ambassadors, regulatory commission slots, military commissions, and federal attorneys are excluded. Executive Branch Appointments and the Senate, 1981–99 To understand the magnitude of any "problem" with Senate confirmations of executive branch appointments, we need to know three things. First, how lengthy is the Senate confirmation process, and to what extent do some appointments take a disproportionately long time to be resolved? Second, how many appointments are withdrawn and returned? And third, how are appointments processed under differing conditions, such as divided government and various periods of a presidency (especially during the transition to a new administration as opposed to the remainder of a president's tenure)? Because the available data do not allow for systematic explorations before the Reagan administration, over-time comparisons are limited. Still, some trends do begin to emerge. First, the confirmation process has grown longer. In 1981, the Republican Senate took an average of 30 days to confirm Ronald Reagan's executive branch appointees; in 1993, the Democratic Senate took 41 days to confirm Bill Clinton's first nominees, an increase of 37 percent. Six years later, in the first session of the 106th Congress, the confirmation process had dragged out to 87 days, more than twice the 1993 figure and almost three times that of 1981. The comparisons are skewed somewhat by the Republican control of the Senate in 1999 and the less urgent, less visible nature of confirming appointees late in an administration, as opposed to the initial round of appointments that receive considerable attention given the need to put a government in place a few weeks after the November election. Still, the process has grown longer, both early and late in an administration. In 1992, for example, a sample of George Bush's appointments during the last year of his presidency (and facing a Democratic Senate) averaged 60 days for confirmation, in contrast to Clinton's 86 days in 1999 and early 2000. Thus the typical 1999 confirmation process averaged almost three months when Congress was in session. Taking into account the 34 days of late-summer congressional recess, the confirmation process of 1999 averaged 121 days—almost exactly four months. The positions, of course, did not necessarily remain vacant, because the 1998 Federal Vacancies Reform Act allowed acting officials to fill many slots. Still, the lengthening Senate confirmation process indicates that a problem does exist—all the more so given the increasing time that the president has taken to make appointments. A second data set relates to the likelihood that a president's appointments will be confirmed. How often does the Senate return appointments to the president or cause nominations to be withdrawn? Again, looking at the rates under differing circumstances makes sense. The president is likely to do better with appointments he makes right after being elected than with any other, and divided government may affect confirmation rates. As reported in table 1, Presidents Reagan, Bush, and Clinton fared about the same in winning confirmation for their nominees. All three won approval of more than 95 percent of their appointees in their administrations' first two years, but did less well for the rest of their tenure in office. 
To the extent that a trend emerges, it reinforces the inference that the Senate has put up more obstacles over time. Reagan's nominees were confirmed at an 86 percent clip between 1983 and 1988, whereas Clinton won approval for only 79 percent of his appointments. But Clinton faced a Republican Senate for the entire six-year stretch, while Reagan dealt with a Democratic chamber only in his last two years, when his confirmation rate fell to 82.5 percent (much like Bush's 81.6 percent success rate in 1991-92, under similar conditions). A summary view of confirmations over the past two decades demonstrates that the Senate process has grown longer, that divided government lowers confirmation rates a bit, and that the president's capacity to win Senate approval for his nominees declines modestly. At the same time, President Clinton did win confirmation of 96 percent of his nominees in the first two years of his administration, albeit with less dispatch than did Ronald Reagan in 1981. This leads us to consider whether the Senate is truly the culprit here and, if it is, whether anything might be done to affect the way the chamber handles the confirmation process.
Table 1: Confirmed, Returned, and Withdrawn Executive-Branch Nominees, 1981–99
Years         Nominees   Confirmed No. (%)   Returned No. (%)   Withdrawn* No. (%)
Bill Clinton
1999          85         68 (80.0)           0                  3 (3.5)
1997–98       207**      166 (80.2)          16 (7.8)           9 (4.3)
1995–96       79         59 (74.7)           19 (24.1)          1 (1.7)
1993–94       323        310 (96.0)          9 (2.8)            4 (1.2)
George Bush
1991–92       136        111 (81.6)          21 (15.4)          4 (2.9)
1989–90       292        278 (95.5)          8 (2.7)            0
Ronald Reagan
1987–88       159        131 (82.5)          24 (15.1)          4 (2.5)
1985–86       182        165 (90.7)          14 (7.7)           3 (1.7)
1983–84       111        93 (83.8)           18 (16.2)          0
1981–82       269        260 (96.7)          6 (2.2)            3 (1.1)
* In 1999, 14 appointments were carried over; in 1998, 9 recess appointments were made.
** Included 16 carryover appointments.
Sources: Various Congressional Research Service studies, 1983–2000, compiled by Rogelio Garcia.
The Senate: Partisan, Individualistic, and Separate Aside from anecdotal evidence of particularly bitter confirmation fights, such as former Senator John Tower's failure to win confirmation as secretary of defense in 1989, or ineptly handled appointments, such as Lani Guinier and Zoe Baird in the Clinton transition, we have little systematic data on how the Senate affects confirmation politics in the post-1980 era of increased individualism and stronger partisanship. Nevertheless, convincing evidence does exist that the Senate has become both more individualistic and more partisan. Barbara Sinclair, for example, reports steady growth in filibusters over the past 40 years, especially in the past 20, and Sarah Binder and Steven Smith demonstrate that the use of filibusters continues to reflect the policy goals of individual senators, groups of senators, and, at times, the minority party. Moreover, the Senate continues to consider itself a co-equal partner within the appointment process. As separation-of-powers scholar Louis Fisher observes, "The mere fact that the President submits a name for consideration does not obligate the Senate to act promptly." Indeed, the Senate's willingness to sit on a nomination may reflect its status in a "separate-but-equal" system. Still, the unlikely combination of individualism and partisanship surely defines the contemporary Senate. 
As Sinclair summarizes, by the mid-1970s, the Senate "had become a body in which every member regardless of seniority considered himself entitled to participate on any issue that interested him for either constituency or policy reasons. Senators took for granted that they—and their colleagues—would regularly exploit the powers the Senate rules gave them." Senators also emphasized "their links with interest groups, policy communities and the media more than their ties with each other." And, notes Sinclair, by the late 1980s, "Senators were increasingly voting along partisan lines. In the late 1960s and early 1970s, only about a third of Senate roll call votes pitted a majority of Democrats against a majority of Republicans. By the 1990s, from half to two-thirds of roll calls were such party votes. …By the 1990s a typical party vote saw well over 80 percent of the Democrats voting together on one side and well over 80 percent of the Republicans on the other." In fact, in the 105th Congress, Senate party loyalty scores slightly exceeded those of the House, which has been seen as the more partisan chamber. Unsurprisingly, the heightened individualism and partisanship have affected the confirmation of executive branch nominees. Despite guarantees by the Senate leadership to the contrary, every senator can place a "hold" on a nomination—delaying it, if not delivering a death sentence—though this tactic has been used more visibly on ambassadorial than on executive department appointments. Even noncontroversial nominations can fall victim to highly partisan Senate politics, as nominees are "held hostage" to other nominations, to appropriations bills, or to substantive legislation. Where there is real controversy, as with the appointment of Bill Lann Lee to head the Justice Department's civil rights division, partisan conflict increases and extends beyond Congress to the Senate's relationship with the White House. Confirmation and the Senate The great majority of presidential appointees to high-level executive positions win approval by the Senate, although the success rate hovers at about 80 percent once a president has initially constructed his administration. Adding to their uncertainty, these later appointees must wait an average of four months for the Senate to act, once it has received their nomination. For these nominees, the process is long, and the outcome uncertain. Add to this the partisan politicking and the intense scrutiny, and it is no wonder that some potential officeholders decline the honor of nomination. Might the Senate smooth the way for future nominees? Given the profound changes in the chamber over the past 25 years—the great latitude allowed individual members and the intense partisanship that dominates much decisionmaking—it seems unlikely that reformers would profit much from attempting to reshape Senate procedures. The best circumstance for speedier and more successful confirmations would be for the same party to control both the Senate and the presidency. Ronald Reagan did better in the mid-1980s with a Republican Senate than did either George Bush or Bill Clinton with opposing-party control in the 1990s. Bridging the separate institutions may be more valuable than seeking to reform an institution that has proven highly resistant to planned change. |
ee2d9494f92ac0218b9adca6d01466c3 | https://www.brookings.edu/articles/the-three-faith-factors/ | The Three Faith Factors | The Three Faith Factors How, if at all, does religion affect health and social welfare? Under what, if any, conditions does religion help to improve the lives of disadvantaged urban children and families, and how, if at all, can we foster those conditions? Is there any significant body of evidence to suggest that religion reduces crime and delinquency among low-income, inner-city youth? In 1995, when I began asking these questions in earnest, there was little reliable empirical research with which to address them. Today, however, we have many first-rate statistical and ethnographic studies that supply some preliminary answers. Though far from definitive, the evidence to date suggests that religion can improve individual well-being and ameliorate specific social problems. But what types of religious influences are most beneficial to the individual and society? At least three separate but related faith factors can be identified-what I will call "organic religion," "programmatic religion," and "ecological religion." "Organic religion" is defined as a belief in God and regular attendance of religious services in a church, synagogue, mosque, or other traditional places of worship. "Programmatic religion" refers to individual participation in social programs run by organizations with a religious affiliation. With or without attending religious services, a child might be enrolled in an after-school program that is staffed mainly by religious leaders and volunteers. Lastly, even if one does not believe in God or attend services or religiously run social programs, one may still be exposed to "ecological religion." For many urban youths, the only institutions more ubiquitous than liquor outlets are churches, the only unbroken windows they see are stained-glass windows, and many of the social-service programs that routinely supply them or their neighbors with basic necessities and services are operated through community ministries. Even without any formal religion in their lives, such youths may still be exposed to religious influences. State of the research The empirical research to date suggests that, especially for low-income urban children, youth, and young adults, these different forms of religious influence help to counter other, negative individual and social influences. Other things being equal, church attendance, participation in faith-based programs, and benefits received or services delivered from the hands of people working through local congregations are each associated with a greater probability that urban youth will escape poverty, crime, and other social ills. Still, the body of research on organic, programmatic, and ecological religion is far from comprehensive. While the literature on organic religion is now highly developed, that on programmatic religion is much less so. Scholars have usually approached organic and programmatic religion in ways that lend themselves to addressing whether these respective religious factors improve life prospects. In contrast, the literature on ecological religion focuses far more on the extent to which community congregations supply social services to their needy neighborhoods than on whether their presence and activities improve individual life prospects or overall community conditions. Certain questions remain almost completely unasked and hence unanswered. 
One might reasonably posit that, other things being equal, an inner-city youth who is exposed to all three kinds of religious influence would be most likely to prosper. But in fact we do not yet know whether such a "three-factor" youth is likely to do better than an otherwise comparable "single-factor" or "two-factor" youth. Moreover, there is as yet no firm empirical basis for knowing whether faith-based social programs outperform secular ones, and if so, why. And the only answers we can now credibly give to healthy skepticism about the social efficacy of religion are based not on experimental evidence but on counterfactual reasoning. Thus, to those who say, "If religion reduces deviance and is so ubiquitous, then why are things still so bad?", we can only respond, "How much worse would things be were it not for religious influences?" Still, we do know far more today than we did seven years ago. The studies we have allow us to examine organic, programmatic, and ecological religion in relation to relevant research literature on urban crime and delinquency. What we know is highly encouraging. These three types of religious influence constitute a social trinity of "spiritual capital" that can help low-income urban children, youth, and families. Organic religion Imagine two sets of people who are alike in terms of average age, income, and other socioeconomic and demographic characteristics. One group consists of people who believe in God, attend worship services regularly, and exhibit other religious commitments. The other group consists of nonbelievers who attend worship services rarely, if at all, and exhibit few if any marks of religious commitment. Other things being equal, the former group will suffer less, on average, from hypertension, depression, and drug and alcohol abuse, will have lower rates of suicide, nonmarital childbearing, educational failure, and juvenile delinquency, and will boast more members who live into their seventies and eighties. The University of Pennsylvania's Byron Johnson recently conducted a systematic review of over 700 relevant organic religion studies. In the vast majority of these studies, organic religion was found to vary inversely with negative social and health outcomes, and was associated with emotions and behavioral traits that vary directly with positive social and health outcomes. For example, based on his systematic review of 46 organic religion and delinquency studies, Johnson reported that "religious commitment and involvement helps protect youth from delinquent behavior and deviant activities…. There is mounting evidence that religious involvement may lower the risks of a broad range of delinquent behaviors, including both minor and serious forms of criminal behavior." Likewise, a 1985 study by Harvard economist Richard Freeman reported that churchgoing, independent of other factors, made young black males from high-poverty neighborhoods substantially more likely than otherwise comparable "unchurched" young men to escape poverty, crime, and other social ills. In a 1998 re-analysis and extension of Freeman's study, Johnson and David B. Larson mined national longitudinal data on urban black youth and confirmed that religious commitments are powerful predictors of whether those youth will escape crime, poverty, and other social problems. There are now over a dozen similar findings in the literature on religiosity and delinquency, and nearly as many in the literature on religiosity and adult criminality. 
Johnson speculates that religious commitment “may help adolescents learn ‘pro-social behavior’ that emphasizes concern for others’ welfare,” giving them “a greater sense of empathy toward others,” and thereby rendering them “less likely to commit acts that harm others.” Such speculations square with the views of the hundreds of community and ministry leaders I have gotten to know over the years. Eva Thorne, an MIT-trained political scientist who has spent over a decade ministering to children and youth in one of Boston’s poorest neighborhoods, explains that much of society tends to relate to these “problem children” in terms of “the abuses the children suffered, the violence they have witnessed or done or had inflicted on them, the failures they had in school, the illicit acts they have committed, the illegal drugs they have consumed or sold, the low-income neighborhood in which they have lived.” In contrast, she explains, community ministers and religious leaders relate to them, and encourage them to understand themselves, as “children of God.” Programmatic religion Notwithstanding the great social benefits of organic religion, we cannot and should not use public funds or other public means to proselytize, promote sectarian worship, or advance religious instruction. But government is permitted to contract with certain faith-based organizations, from local houses of worship to national religious charities, that deliver specific social services and programs. Government, for example, cannot aid a local church’s Sunday school bible-studies program (an example of organic religion), but it can fund the same church’s after-school literacy training (an example of programmatic religion), even if the program is staffed by clergy or religious volunteers and is conducted in a place of worship. Unfortunately, the empirical research on the efficacy of programmatic religion in improving health and social welfare is far less developed at this point than the literature on organic religion. Relatively little data exists for even those programs run by large, well-known religious service organizations. The Salvation Army presently operates some 9,500 centers, takes in $2 billion in yearly revenues, and serves over 30 million people from coast to coast. The organization also runs a number of social-service programs, from homeless shelters and housing programs to drug-rehabilitation and job-training programs. But we know next to nothing about how effective the organization’s famous Adult Rehabilitation Centers (ARCs) are in reducing substance abuse, because the ARCs have yet to be systematically studied or evaluated. Other religious social-service organizations, such as the national evangelical Christian organization Teen Challenge, are also largely unexamined. Teen Challenge features a two-part process in which program participants spend several months at a reception center getting clean and sober before officially beginning the program. The program is explicitly Christ-centered and predicated on the belief that drug addiction can be cured only by total reliance on God’s grace. Teen Challenge reports tremendous (over 80 percent) success rates in curing young adult drug abusers. A few decent studies of Teen Challenge have been completed, and their results are generally positive, but none of these studies meets the most rigorous evaluation research standards. Selection bias, a lack of data over an extended time period, and other problems plague the extant documentation on the program’s efficacy. 
The picture becomes clearer when we turn to another national faith-based organization, the Prison Fellowship Ministries (PFM). Led by former Watergate felon Charles Colson, PFM is a $60 million-a-year evangelical Christian ministry that encompasses not only men and women behind bars but the children and families of prisoners as well. PFM sponsors a variety of prison-based education, drug-treatment, and other programs. Now emerging is a critical mass of systematic research and follow-up studies suggesting that participants in certain PFM programs have significantly lower recidivism rates than otherwise comparable offenders. For example, one study found that 14 percent of the inmates in four New York State prisons who participated in PFM's bible-studies activities were rearrested one year after release, compared to 42 percent of otherwise comparable inmates in these prisons who did not participate in the program. Aside from these data, there has yet to be a single strongly experimental or quasi-experimental study of any major faith-based program. Indeed, with respect to faith-based programs for drug addicts, prisoners, at-risk youth, and other populations, most of the "success rates" one hears about are simple summary statistics that lack meaningful data-gathering, analysis, or evaluation. The good news, however, is that nearly all programmatic religion powerhouses-including the Salvation Army, Teen Challenge, and PFM-welcome independent evaluations of their programs. The same is generally true for the extraordinarily large and diverse sector of local and regional faith-based social programs, including those that work through interfaith, ecumenical, public and private, or religious and secular partnerships. Philadelphia's Youth Education for Tomorrow (YET) program, which offers intensive after-school reading instruction through a cooperative effort of several dozen churches, Catholic schools, grassroots religious groups, and public schools, is a promising example of programmatic religion's success. A recently published first-year evaluation of the program by Public/Private Ventures analyzed data on about 1,000 YET children, all of whom entered the program reading two grades or more below grade level. The study showed that children who attended YET 100 days or more vaulted 1.9 years in reading ability, while children who came fewer than 100 times still registered an impressive average gain of 1.1 grade levels. YET is typical of local programmatic religion programs that, in collaboration with secular and other partners, aim at achieving a specific civic goal. Consider the story of one YET worker as recounted in the aforementioned Public/Private Ventures report: There was a little boy who was going to get kicked out of school, so they sent him to his YET teacher, and she basically told him, "Jesus loves you, and He sent me here to tell you that He loves you, and there's a good little boy in there that I'm going to help bring out, and I just want you to know that wherever you go, Jesus loves you and I love you." And that was outside of the literacy program, but he was in her literacy program, and he made the honor roll. 
In a similar vein, Robert Woodson, president of the National Center for Neighborhood Enterprise, suggests that the reason programmatic religion may reduce deviance and delinquency is that faith-based social-service providers, armed with a religious sensibility, often go above and beyond the call of duty and act in ways that inspire an unusual degree of trust among program beneficiaries. In his 1999 book The Triumphs of Joseph, Woodson recalls: I will never forget the sight of former felons and addicts washing pots and pans and scrubbing down pews in restitution for some violation of a program's rules. Previously not a threat of a death sentence or life imprisonment meant anything to these individuals who accepted homicides as a fact of life and anticipated a life span of under thirty years. Yet they had willingly accepted the discipline of an unpretentious sixty-year-old outreach minister because he had won their trust. A religious difference? One approach to testing such ideas about the efficacy of programmatic religion is to examine how faith-based organizations fare at activities-such as mentoring at-risk youth-for which there exists a sizable body of research and a track record of success by secular providers. By doing so, we may better understand the conditions under which faith-based programs produce positive social outcomes and can judge whether, in fact, faith-based programs hold any comparative advantages over strictly secular ones. To date, the most well-regarded study of mentoring is Public/Private Ventures' 1995 evaluation of the Big Brothers Big Sisters of America program. Joseph Tierney and Jean Baldwin Grossman of Public/Private Ventures found that youth (most of them low-income minority youth) who were matched with a big brother or sister reaped significant benefits compared with their counterparts who remained on waiting lists. The matched children were 46 percent less likely to begin using drugs and 27 percent less likely to begin using alcohol. They were a third less likely to hit someone. They skipped school half as many days as the wait-listed youth. They also liked school more, got better grades, and formed better relationships with their parents and peers. These effects held for boys and girls across all races. But the evaluation called attention to at least two problems. First, thousands of eligible children remained on waiting lists due to a shortage of available mentors. Second, the inner-city youths who most needed responsible nonparental adult support and guidance in their lives were the least likely to get it. The simple reason is that Big Brothers Big Sisters of America, like most other effective mentoring programs that reach at-risk urban youth, attracts youths who already have at least one parent, guardian, or other adult in their lives who is responsible enough to sign them up, follow through on interviews and phone calls, fill out forms, and so forth. In a three-city study completed in 1998, Public/Private Ventures found that many low-income minority youth in urban neighborhoods lacked even this level of guidance. As many as a quarter of youth in "moderately poor" neighborhoods were completely "disconnected" from "positive adult supports." In response, Public/Private Ventures, in partnership with Big Brothers Big Sisters of America and networks including over 30 local churches in Philadelphia, began in 2000 what has become known as the Amachi mentoring program. 
"Amachi" is a West African word signifying wonder at the precious gifts God gives us through children. The program attempts to reach some of the most severely at-risk youth, namely, low-income urban children who have one or both parents incarcerated. In only its first year, Amachi mobilized over 500 adult mentors through local churches, which more than doubled the total number of mentors that Big Brothers Big Sisters of America had in the city. With the aid of Amachi's leader, Reverend W. Wilson Goode, Sr., researchers are analyzing the program and will eventually undertake a full-scale study. Already, however, it seems clear that Amachi's mentoring relationships between religious adults and the often desperately needy children of prisoners differ from the usual "active matches." Baseline data gathered thus far indicate that the religious mentors spend more hours with the children, open both their homes and their pocketbooks more, meet more often to discuss how best to help the children, and otherwise exceed official program expectations. However, even with most mentors going above and beyond the call of duty, it remains to be seen whether Amachi produces any long-term improvements in the lives of these children, and whether, beyond its proven advantages in mobilizing mentors, it outperforms strictly secular mentoring programs. Ecological religion Suppose that a low-income, semiliterate urban youth with a mom or dad behind bars is not in any faith-based program and has no experience with organic religion. Nevertheless, faith could still be a significant factor in his life, one that affects his life prospects-including the probabilities that he will become delinquent, or a criminal, or be criminally victimized by others. The child might not be in any formal faith-based program, but he might receive food, money, medicine, shelter, or attend preschool, day care, or summer camp through local religious congregations. Without believing in God or being the least bit religious, he might end up receiving court-ordered juvenile justice services administered by community-serving ministries. Or, he may simply live in a neighborhood dotted with churches. In sum, even without organic religion or programmatic religion in his life, he may still be exposed to ecological religion. The ecology of urban life, especially in the poorest neighborhoods-and most especially for low-income African Americans-is a religious one. It is not just that churches, synagogues, mosques, and other religious institutions, large and small, are everywhere one turns in these communities. It is also that, after decades of public and private disinvestments, virtually all other institutions (save the ubiquitous local liquor outlets) have folded or fled these neighborhoods. This has left to sacred places the provision of an ever-wider range of civic functions, usually, however, without much in the way of either governmental or private-individual, philanthropic, or corporate-financial support or technical assistance. And it has led many public agencies to rely, de facto, on community-based religious volunteers and organizations to administer social-welfare programs and services, even where those urban government-by-proxy contracting arrangements have involved, sotto voce, pervasively sectarian organizations. The literature on ecological religion falls into three categories: first, the path-breaking congregation surveys led by the University of Pennsylvania's Ram A. 
Cnaan; second, the subset of studies that focus on ecological religion in relation to partnerships with public agencies (particularly criminal-justice agencies); and, third, the studies in economics, psychology, and other disciplines that have analyzed ecological religion as an element of social capital or, as I prefer, spiritual capital. Congregations and blessing stations Of the many studies of the social services provided by local religious congregations, the most thorough are the congregation surveys by Cnaan. In the mid 1990s, Cnaan and his Penn research team conducted a multi-city study of more than a hundred randomly selected urban churches and their services within their communities. Congregations were surveyed in Philadelphia, New York, Chicago, Indianapolis, Mobile, Oakland, and San Francisco. The study was based on site visits that averaged three hours in length, along with a carefully crafted 20-page questionnaire covering over 200 specific social-welfare services. Cnaan and his colleagues found that over 93 percent of these churches opened their doors to the larger community. Each church provided an average of 5,300 hours of volunteer support and $140,000 a year in community services, and each church supported an average of four major programs in addition to informal and impromptu services. Poor children who were not church members or otherwise affiliated with the church benefited especially from the church-supported services, and, with only a few exceptions, these ministries did not make entering their buildings, receiving their services, or participating in their social-welfare outreach efforts contingent upon religious conversion or worship. More recently, Cnaan launched a census of Philadelphia congregations and their involvement in social-service delivery. Thus far, following the same intensive field and survey research protocols that marked the previous study, he has gathered data on 1,376 of an estimated 2,095 congregations. He has found that 88 percent of the congregations provide at least one social service, and most provide two or more services. Again, the primary beneficiaries are neighborhood children and youth, most of whom are otherwise unaffiliated with the ministries that serve them. According to the census, about 40 percent of the city’s churches, synagogues, and mosques already collaborate with secular and governmental organizations, and over 60 percent are open to working with government welfare programs. What would it cost for government or other nonreligious organizations to replace the social services provided annually by community-serving ministries in Philadelphia alone? Estimating the value of their space at subpar motel rates, and the value of volunteer hours at sub-minimum wage rates, Cnaan arrived at a figure of $250,000,000. This is a very conservative estimate for two reasons: First, the calculation accounts for only five major services provided by each congregation (many provide more); and, second, the survey, and hence the calculation, deals only with congregations’ services, and does not include those services provided by ministries that operate independently of congregations. How many such noncongregation “storefront” or “blessing station” ministries exist in urban America? These ministries are an important element of the religious ecology of urban neighborhoods that has yet to be systematically studied. 
All of the exploratory studies to date suggest, however, that the service contributions of blessing-station ministries may rival, or even exceed, those of the congregations. In the late 1990s, two of my former Princeton undergraduate students, Jeremy White and Mary de Marcellus, spent several months in the poorest, most crime-ridden neighborhoods of Southeast Washington, D.C. They were attempting to find noncongregation-based ministries that provided children and youth with after-school safe havens, recreation centers, homework help, gang-violence prevention, or other services. They found over one hundred such ministries, most of which could not be located in the phone book or by any means other than being on the streets. As White and de Marcellus reported in a summary of their findings published by the Manhattan Institute, these ministries served about 3,500 children and youth on a weekly basis. Like the congregations in the Cnaan surveys, the majority of the blessing stations in this exploratory study-even the most expressly evangelical ones-served young people who were not coreligionists, and did not make religious profession a condition of receiving services. All were led by a single, highly dedicated person of deep religious faith who sacrificed personally to get the ministry started, and did so with no expectation of outside help. Public-private partnerships The religious environment of urban life also includes contractual and other partnerships between government agencies and faith-based organizations. These partnerships play significant roles in community development, welfare-to-work programs, day care, and other areas. In June 2001, the U.S. Conference of Mayors unanimously resolved to stimulate further public partnerships with faith-based organizations, and over a hundred mayors have created local offices of faith-based initiatives. Some, including Baltimore's Martin O'Malley and Philadelphia's John Street, are making networks of local community-serving ministries integral to carrying out school reform and other policies. What is known about such public-private partnerships as they relate to the justice system? In the mid-1990s, I became fascinated with the role of various Boston ministries in the city's criminal-justice system. Police and probation officers were working with preachers and religious volunteers in a variety of programs to monitor and mentor young men on probation, to identify middle-school children who were being recruited into gangs, to broker truces among competing drug lords and gangs, and to counsel prisoners and help find jobs for recent parolees. In each level of the justice system-prevention, intervention, and enforcement-and within each component of the administration (courts, police, corrections), cadres of Boston's ministers were found to be involved in official or unofficial partnerships with public agencies. What was especially striking about these partnerships was that the religious organizations were frequently involved in handling the most difficult cases. But as the social scientist's quip goes, the plural of anecdote is not data. The important question was whether the high level of collaboration taking place in Boston was typical or anomalous. To determine this, I helped Public/Private Ventures launch in 1996 a national study of the interface between justice agencies and faith-based organizations. 
The aim was to examine the extent to which these agencies and organizations work together to serve youth already involved in violent or criminal activities, or who are high-risk candidates for such behavior. Field research on the project began in 1998 and, over the next four years, encompassed 16 cities. The subsequent report, Faith and Action: Implementation of the National Faith-Based Initiative for High-Risk Youth, concluded that in every city studied, the partnerships between justice-system agencies and faith-based organizations like those first seen in Boston are widespread. Though hardly without financial constraints and other challenges, most of the faith-based organizations in almost every city were important participants in delivering one or more of the justice system's mentoring, education, employment, and life-skills services. They also performed street outreach, detention-center outreach, and court advocacy. Consistent with the aforementioned research of Professor Cnaan, most of these programs welcomed children and youth and provided services without any preconditions regarding the beneficiaries' present or future religious commitments. Prayer and other religious practices and expressions did figure in some of the services, but participation was not required. Although these "faith-based practices" occurred more frequently when programs took place in churches rather than at a neutral site, not all programs that offered activities in their churches were found to have "a high salience of faith." Nevertheless, the systematic interviews conducted with district attorneys, other justice officials, and the youth themselves indicate that, if anything, most welcomed the use of religious vernacular and spiritual symbols. Spiritual capital As suggested in Better Together, the recently released final report of the Saguaro Seminar discussion series led by Harvard University professor Robert D. Putnam, much of the nation's social capital-"community connections of trust and reciprocity"-is spiritual capital produced by community-serving religious leaders, volunteers, and institutions. It is worth quoting the report's conclusions at some length: Houses of worship build and sustain more social capital-and social capital of more varied forms-than any other type of institution in America…. Roughly speaking, nearly half of America's stock of social capital is religious or religiously affiliated, whether measured by association memberships, philanthropy, or volunteering…. Faith gives meaning to community service and good will, forging a spiritual connection between individual impulses and great public issues. That is, religion helps people to internalize an orientation to the public good. Because faith has such power to transform lives, faith-based programs can enjoy success where secular programs have failed. Many factors have been offered as explanations for the post-1993 decline in crime rates: better policing, longer prison terms, disrupted illegal drug markets, more restrictive gun-control policies, less restrictive gun-control policies, a sudden surge in the efficacy of prevention programs, the delayed effects of legalized abortion, and the immediate effects of a booming economy. Nobody, however, predicted these drops in crime, and no one has yet truly explained them. 
But rather than debate which of the usual-suspect factors, and in what proportions, were responsible for how much of the declines, I would suggest that we instead consider the contributions of spiritual capital in its organic, programmatic, and ecological forms. In a 2001 article in the flagship academic journal Criminology, Byron Johnson reported preliminary evidence that the risk of illicit drug use associated with growing up in a neighborhood where crime and disorder are ever-present “can be mitigated by the adolescent’s individual religiosity and related protective networks of social relations.” But studies of crime and delinquency that factor spiritual capital into the analysis remain few and far between, and many questions are left unanswered. For instance, does living in a poor urban neighborhood that is rich in ecological religion have any independent effect on one’s present well-being or future life prospects, including the prospects of avoiding engaging in or becoming a victim of crime? We do not know, but we ought to find out. As James Q. Wilson has observed, “The chief federal role in domestic law enforcement should be to encourage and fund research,” where “research” is understood chiefly to mean “evaluations of ideas about how to reduce crime.” I would extend Wilson’s remarks to include federal research on spiritual capital and how it can help to prevent teenage pregnancies, reduce public health problems, combat illiteracy, and achieve many other vital social goals. Such empirical research can serve to guide ongoing debates about whether government ought to cooperate with faith-based organizations or help fund sacred places that serve civic purposes. Of course, better empirical research cannot resolve the entire range of questions that surround faith-based approaches to social and urban problems, or, for that matter, tell us precisely how to solve those problems. But organic, programmatic, and ecological religion represent three paths to social well-being that deserve far more intellectual and civic interest than we have yet shown them. |
8fbf753aac8a13883f290e1e80900d86 | https://www.brookings.edu/articles/the-war-on-terrorism-the-big-picture/ | The War on Terrorism: The Big Picture | The War on Terrorism: The Big Picture “Today, we lack metrics to know if we are winning or losing the global war on terror. Are we capturing, killing, or deterring and dissuading more terrorists every day than the madrassas and the radical clerics are recruiting, training, and deploying against us? “Does the US need to fashion a broad, integrated plan to stop the next generation of terrorists? The US is putting relatively little effort into a long-range plan, but we are putting a great deal of effort into trying to stop terrorists. The cost-benefit ratio is against us! Our cost is billions against the terrorists’ costs of millions. . . . Is our current situation such that ‘the harder we work, the behinder we get’?”1 Such are the big-picture questions for the War on Terror, the kind that should have shaped Pentagon strategy from the start. Unfortunately, they apparently weren’t asked by Secretary of Defense Donald Rumsfeld until 16 October 2003, in a private memo that he issued to his top staff. While the media focused on his admission of a “long, hard slog” in Iraq, contrary to the rosy predictions made earlier, the true surprise was that Secretary Rumsfeld questioned even whether we are “winning or losing the Global War on Terror.” He described how his office had yet to enact a “bold,” measurable, or even systematic plan to win the War on Terror, despite being two years and two ground wars into the fight. In short, what Rumsfeld’s memo admitted was a shocking absence of strategic thinking. One hopes that the “12-step” programs are right, and that admitting one’s errors is the first step to solving them. Since the 9/11 attacks, the United States has expended an immense amount of blood and treasure in the pursuit of security for its citizens and punishment upon those who would do them harm. Yet while the US military has successfully overturned vile regimes in both Afghanistan and Iraq, Secretary Rumsfeld’s internal memo disclosed his frank assessment that we appear little closer to resolving the actual challenge that drives us, eradicating the group that carried out the 9/11 attacks and preventing any repeats. While the United States and its allies have seized a portion of al Qaeda lieutenants and assets, the organization remains vibrant, its senior leadership largely intact, its popularity greater than ever, its ability to recruit unbroken, and its ideology and funds spreading across a global network present in places ranging from Algeria and Belgium to Indonesia and Iraq. Of greatest concern, its potential to strike at American citizens and interests both at home and abroad continues. After the Madrid bombings, some worry that its capabilities may even be growing.2 View Full Article at Parameters |
5618a032738db0df9c04fc98851048ba | https://www.brookings.edu/articles/the-world-economy-is-recovering/ | The World Economy is Recovering | The World Economy is Recovering Editor’s Note: This commentary is based on research and analysis from the Tracking Indexes for the Global Economic Recovery (TIGER) interactive map, which appears on the Financial Times Web site. View the Brookings version of the interactive map » Despite all the portents of doom the world economy has been quietly mending itself. This is not to say that the recovery is firmly entrenched or that few risks remain, but despite the rough patches in 2010, it is important to keep in mind that the economic picture looks far better now than it did a year ago. Why do I conclude this? Well, to get an accurate picture of where the world economy now stands, we need to look at a broad set of economic data. We have gathered data from the G20 economies for three types of indicators: real economic activity, captured by GDP, industrial production, employment, imports and exports; financial indicators such as national stock market indexes, stock market capitalization and, in the case of emerging markets, their bond spreads relative to U.S. treasuries; and finally, indicators of business and consumer confidence. By combining information from these variables using statistical techniques, we can take the pulse of individual economies as well as the world economy. And thus was born the Brookings Institution-Financial Times index for the world economy, which we have christened TIGER—Tracking Indices for the Global Economic Recovery. The composite indexes reveal five dominant themes. First, the global economy turned the corner by mid-2009 and has strengthened gradually since then. Growth rates of many indicators have rebounded strongly after plunging into negative territory during 2008. These high growth rates are off a lower base of course and there is still a lot of ground to be made up before the levels of these indicators are back at their pre-crisis levels. For instance, growth rates of industrial production in many G20 economies are now higher than before the crisis but, because growth rates fell sharply during 2008, the levels of industrial production are still below pre-crisis levels. Still, the recovery has clearly gathered momentum. Second, the recovery has been rather uneven. Growth rates of industrial production and trade volumes have recovered strongly, while the recovery in GDP and employment has been modest at best. Employment growth, which tends to be a lagging indicator of the business cycle, was very weak in advanced economies until the beginning of 2010 but is now showing some signs of life. So the recovery is ever so slowly becoming more broad-based. Third, the performance of world financial markets has outpaced that of key macro variables. In the last two months, however, financial markets have dipped as they have been rattled by the problems in Europe. This could signal prescience of financial markets about more difficult times ahead or just a temporary pullback from an earlier surge of unfounded optimism. Either way, this is not good for the recovery. Then again, a more tempered financial market performance may not be such a bad thing for the longer term. Fourth, confidence measures have regained some of the ground they lost during the worst of the crisis. 
In both advanced and emerging market economies, business confidence is still rising gradually but consumer confidence in advanced economies has been stuck in a rut in recent months. Resurgent business confidence is a positive sign as it could boost investment. But weak consumer confidence and minimal employment growth could dampen the recovery if they translate into tepid growth in private consumption. And finally, emerging markets felt the effects of the global crisis later than the advanced economies and have also recovered more sharply. Among the major emerging markets, the recoveries in China and India have been particularly strong. So far in 2010, emerging markets are still barreling their way to a strong performance despite the problems that have beset advanced economies. Perhaps, in a long-term structural sense, they are becoming less dependent on advanced economies. But emerging markets cannot pull the world economy along by themselves. If advanced economies continue to turn in a weak performance, we are in for a long and hard slog towards a durable global economic recovery. We are certainly not out of the woods yet and all manner of risks could still forestall the recovery. While it is easy to paint dire scenarios, it is still worth recognizing that there is a lot of positive news relative to the desperate circumstances that the world economy was in a year ago. It’s not yet time to open up the bubbly, but at least there is less need now for a stiff drink. |
464377c696ecf4e8053956dea44e6100 | https://www.brookings.edu/articles/truth-and-reconciliation-sidestepping-the-filibuster/ | Truth and Reconciliation: Sidestepping the Filibuster | Truth and Reconciliation: Sidestepping the Filibuster Editor’s Note: After a year of trying to pass health care reform, an effort highlighted by the Blair House summit on February 25, President Obama and Democratic leaders are building a case for using the Senate reconciliation process to push through legislation. Under reconciliation, bills cannot be filibustered and can thus pass the Senate by majority vote. In this article published last year, Thomas Mann, Molly Reynolds and AEI’s Norm Ornstein analyzed how reconciliation has been used to pass landmark legislation. “Reconciliation” means “restoration of harmony.” But as a term of art in budgeting, it has become an act of war. President Obama and most Democrats in Congress hope to include health and education reform in reconciliation instructions as part of the budget process. No mystery why. The Senate’s sixty-vote filibuster hurdle could scotch these central components of their agenda via united Republican opposition. Bills considered under reconciliation cannot be filibustered and can therefore pass the Senate by majority vote. Republicans are outraged by what they argue is an egregious partisan power grab, one that tramples on Senate rules and norms permitting extended debate and amendment. What is the precedent for using reconciliation to enact major policy changes? Much more extensive than the architects of the Congressional Budget and Impoundment Control Act of 1974 had in mind, or than Senate Republicans are willing to admit these days. Reconciliation was designed as a narrow procedure to bring revenue and direct spending under existing laws into conformity with the levels set in the annual budget resolution. It was used initially to cut the budget deficit by increasing revenues or decreasing spending, but in more recent years its primary purpose has been to reduce taxes. Twenty-two reconciliation bills were passed between 1980 and 2008, although three (written by Republican majorities in Congress) were vetoed by President Clinton and never became law. Whether reducing or increasing deficits, many of the reconciliation bills made major changes in policy. Health insurance portability (COBRA), nursing home standards, expanded Medicaid eligibility, increases in the earned income tax credit, welfare reform, the state Children’s Health Insurance Program, major tax cuts and student aid reform were all enacted under reconciliation procedures. Health reform, 2009 style, would be the most ambitious use of reconciliation, but it fits a pattern used over three decades by both parties to avoid the strictures of Senate filibusters. To be sure, there is a price beyond the political one for using reconciliation. Elements in bills that are not strictly designed to have a budget impact can be removed on points of order, leaving comprehensive bills less than comprehensive. And the time frame for reconciliation bills is at most ten years, after which they expire unless explicitly renewed (the problem, of course, with the Bush tax cuts). The best path would be to have reconciliation as an implicit or explicit threat: if Democrats can employ it to accomplish the policy goal with only a simple majority, Republicans may be persuaded to abandon efforts to use their 41 votes to just say no and instead engage the majority constructively to find common ground. 
But if that is not feasible, it is perfectly reasonable for Democrats to use the process for health care reform that both parties have used regularly for other major initiatives. The result might be more piecemeal and imperfect, but it would be better than the alternative of no bill at all. |
85d30f76b2a8a1ebe2d8c484d7176f52 | https://www.brookings.edu/articles/understanding-urban-riots-in-france/ | Understanding Urban Riots in France | Understanding Urban Riots in France When Theo van Gogh was murdered in Amsterdam on November 2, 2004, by Mohammed Bouyeri, an Islamist with Dutch and Moroccan citizenship, many said that this was the failure of the Dutch model of integration by tolerance. When bombs planted by young Britons of Pakistani and Jamaican descent exploded in the London subway on July 7, 2005, many said that this signified the failure of the British model of integration by multiculturalism. A month and a half later, when levees broke in New Orleans under the onslaught of Hurricane Katrina and the poor, predominantly black population was trapped in the flooded city, many said that this revealed the failure of the integrating power of the “American dream.” And when riots erupted on the outskirts of major French cities (though not in Marseille, as will be seen later) in November 2005, many said that this unmasked the weakness of the French “one-law-for-all” republican model of integration. Now that all major models of integration are proclaimed dead, serious analysis may finally begin, because these models often hide as much as they reveal. For example, in spite of the alleged rigidity of the “republican” model, supposed to prevent French officials from implementing any specific policies directed at immigrant populations, France has actually experimented with policies close to affirmative action. Without recognizing ethnic or religious minorities as such, ambitious social programs have been implemented in urban areas where immigrants live. Those programs, initiated in the early 1980s, included the creation of Zones of Educational Priority, known as ZEPs (Zones d’Education Prioritaire), and special tax-exempt zones (zones franches) meant to stimulate local economic activity. Those programs did in fact bring some—albeit insufficient—results. A lot of public money has been spent on rehabilitating bleak housing projects in immigrant neighborhoods under the guise of “urban policy” (politique de la ville), which could be more aptly called “suburban” (banlieues) policy. The French military has initiated several recruiting programs aimed at the young from the banlieues. Private firms and even grandes écoles (major universities), like Sciences-Po in Paris, have been reaching out to the minorities in order to diversify their workforce and student bodies. In other words, the real problem was not the French “republican” model, which has been hailed by many immigrants and which is more flexible than generally admitted, but insufficient mobilization of the French people to make it a reality. The rioting expresses, among other things, frustration caused by the gap between the model and the reality, and a desire to see the fulfillment of the promises inherent in the model. In any case, it is difficult to imagine how the adoption of the “multicultural” model, where minorities are treated as groups endowed with separate collective “identities” and special rights, would suddenly cure such social ills as everyday discrimination, unemployment and ghettoization, which lie at the heart of the current crisis. It is worth remembering that the evolution of the “multicultural” model in the Netherlands and in Great Britain has raised some serious issues, especially after recent terrorist attacks in London. 
“We have allowed tolerance of diversity to harden into the effective isolation of communities,” said Trevor Phillips, the black British chairman of Great Britain’s Commission for Racial Equality. “We have made too much room for the expression of minorities’ historical identity to the detriment of their loyalty to the United Kingdom today.”[i] To get a better idea of the causes of the November 2005 urban riots in France, which have claimed 200 million euros in damaged property and one death, one should try to forget about theoretical models and concentrate on specific factors that caused the eruption at this specific moment and in these specific places. Those factors include the particular French ethnic context, economic conditions, discrimination, police violence, housing, and (bad) national policies. It should also be clear that despite the claims of many foreign commentators, religion was conspicuously absent from the mix. Unlike its many European neighbors, France has always been a country of immigrants and has absorbed numerous waves of foreigners. In 1999 no fewer than 23 percent of the French population claimed foreign origin (with at least one parent or grandparent coming from abroad). Within this group 5 percent had their roots in sub-Saharan Africa, 22 percent in the Maghreb, and 2.4 percent in Turkey. Together, those groups represented about 30 percent of French residents of foreign descent—between 4 and 5 million people. In religious terms, today’s France has the largest Muslim and Jewish minorities in the whole of Europe. This means that the challenges of integration are much greater in France than in other European countries, especially because most immigrant workers, who arrived in the 1960s and 1970s, and their families, who joined them between the 1970s and the present, come from rural areas and had little or no education. That does not mean that they are not being integrated into the French mainstream, but their integration is certainly slower and more challenging (and success stories, which are more and more frequent, generally go unreported). For example, children of immigrants do as well at school as French children from the same socio-economic group. However, since immigrants constitute a disproportionately high percentage of the lower classes, in absolute terms their children do less well than children from French families. The French integration system from the 19th to the 20th century rested on three pillars: school, compulsory military service, and work. French public schools in the banlieues, despite difficult conditions, are for the most part still fulfilling this task. But the general military draft was abolished in the late 1990s, and the economic slowdown that started in 1973 made jobs for the new arrivals increasingly scarce. The young men and teenagers from the banlieues are rioting and burning cars largely because they have little hope of upward social mobility. Unemployment among the young men of the cités (largely immigrant housing projects in the suburbs) is as high as 40 percent. Slow growth at the national level is not the only cause of unemployment among these young men—and President Jacques Chirac acknowledged as much in his November 14 speech.[ii] Racism and discrimination are very much alive in French society, whether in housing, in the job market, or in social life. Young men of North African origin are more likely to be unemployed than their French contemporaries with similar job training.[iii] 
Negative racial stereotypes lingering from colonial or even earlier times often make the everyday life of persons of African origin difficult and frustrating. The young “Beurs” (a slang word for Arabs) and “Blacks” from the cités report many cases of discrimination, such as being refused entrance to nightclubs. Of course, such acts are illegal and they are combated by a new government agency called HALDE (Haute Autorité de Lutte contre les Discriminations et pour l’Egalité) with the authority to monitor and fight instances of discrimination. But less obvious forms of racial prejudice persist, and young Frenchmen of Arab origin often believe they should change their names and appearance in order to be considered fully French. The experience of exclusion and unfairness was high on the list of factors that have driven their revolt—sometimes with the tacit approval of their fathers and older brothers. Police violence and racial profiling. Among the various types of discrimination suffered by the young from the banlieues, one in particular stands out: racial profiling by the police. The riots started when Ziad Benna and Bouna Traore, two teenagers of Tunisian and Malian origin in Clichy-sous-Bois, died in a power substation where they were hiding from the police. Of course, the police in these neighborhoods are working under arduous conditions, but their record has been far from exemplary. Racial profiling is ubiquitous, and even older inhabitants of the cités complain about various affronts they suffer at the hands of the police. Even worse, due to political changes introduced in 2002 by the Minister of the Interior, Nicolas Sarkozy, the previous government’s policies of a friendlier mode of police work, police de proximité (neighborhood policing), were scrapped. As a result, policemen go into the cités only to do the “repressive” part of their job—to impose order, investigate a crime or perform an arrest, which strains their relations with idle and disgruntled teenagers. An aggravating factor in the life of French ethnic communities is their de facto ghettoization. Strangely enough, an important role is played by the architecture of the cités. Between the mid-1950s and the 1960s a severe housing crisis hit France. The authorities responded with a rush construction program. They built clusters of high-rise apartment houses of ten stories or more that at the time passed for the quintessence of architectural modernity. In addition, they could be built cheaply and quickly enough to provide new, permanent living quarters for the inhabitants of slums that had developed around some cities. But this seeming housing remedy soon turned into a social disaster. The bleak, unglamorous concrete-slab neighborhoods were gradually abandoned by those who could afford to leave: first by French blue-collar workers and later by more successful immigrants. The populations that stayed behind consisted mostly of the underclass, or “losers,” creating zones of highly concentrated social pathology: school underperformance, unemployment, drug trafficking and other crime, etc. For a teenager, whatever his or her origin, to live in those pockets of poverty is a curse and a social stigma. At the same time a protective, highly territorial cité subculture has developed: either you belong here, it is your place and people respect you, or you are a stranger and you had better keep out. That includes, of course, the police. In general, cité youngsters rarely venture far from their familiar turf. 
That is why rioting hardly ever spread to downtown areas, where the young people feel out of place and vulnerable. With high unemployment, the cités are also zones of profound boredom. There is literally nothing to do there, especially when local associations and community programs are severely underfunded. No wonder that teenagers are having a great time playing Cowboys and Indians with the cops. It is like real-life Game Boy, and the media pay attention! In other European countries similar phenomena did not develop: there are “tough” neighborhoods, but not quite as bleak as cités that seem to distill social ills, hopelessness and despair to the point of encouraging self-destructive behavior (teenagers were sometimes burning schools and sports facilities they were using themselves). The French authorities are gradually demolishing cités and replacing them with more human-scale housing projects, but the scope and the cost of this endeavor are immense, and it is going on too slowly. Why did the riots erupt in November 2005 and not three or six or nine years before? Evidently, we are dealing with a cumulative effect, but there is more to it. After the elections of 2002 the new Jean-Pierre Raffarin government embarked on a more conservative policy and de-emphasized social programs. The so-called neighborhood policing was abandoned. Instead, Minister Sarkozy instructed the police to concentrate on providing public safety and combating crime, and not on “social work.” The Raffarin government also severely cut subsidies for community associations and local social workers despite the fact that many sociologists stressed their importance in creating a better social climate and a more nurturing environment for teenagers. A good way to see the importance of maintaining the “social fabric” is to study the case of Marseille. Despite a large immigrant population, especially from the Maghreb (and the Comoros), and the existence of bleak cités (the quartiers Nord or “Northern quarters” as they are known in Marseille), there was very little unrest. Though no systematic comparative studies have been conducted yet, most experts say that Marseille’s relative stability results from its established social networks, smaller ethnic and economic differences between the rich downtown and the suburbs, the work performed by the community (social workers, mediators, associations, etc.), and better relations between the police and the population. Last but not least, the one factor that was conspicuously absent was Islam. Reading some conservative American commentators one could get the impression that Paris had been overrun by hordes of radical Islamists. For example, Daniel Pipes writes in The New York Sun[iv] about “the first instance of a semi-organized Muslim insurgency in Europe” and about “rioting by Muslim youth that began October 27 in France to calls of ‘Allahu Akbar’.” Of course, many of the perpetrators of recent violence come from Muslim backgrounds—as do many of their victims. But they have no religious agenda and, even more tellingly, no political agenda: most of them are teenagers, often deprived of hopes for a good future and a good job. Many have already had their brushes with the law. Those youngsters are not likely to listen to anybody: neither to their parents, nor social workers, nor even the soccer star Zinedine Zidane, and least of all to religious authorities. 
Both radical Muslim organizations, such as the Tabligh, an international proselytizing group active in France, and more moderate ones, like the Union of French Islamic Organizations (UOIF), which issued a fatwa[v] condemning the riots as un-Islamic, have revealed their powerlessness and total lack of impact on the situation. Teenagers from the cités are having an exhilarating time and are not going to stop because of an order from an imam or an Islamist recruiter who wants them to lead a boring, pious life. The only real Islamist danger would be to send them to prison, where they could encounter religious radicalism. There have been problems with religious radicalism in the banlieues and among disaffected young French of Muslim background, but they are largely separate from the rioting and rampaging of November 2005. The teenagers who were burning cars were not the ones who cared for religion—even religion suffused with anti-French or anti-Western ideology. It is sad that French Muslims, who as a group had nothing to do with the riots and who according to a recent report[vi] feel more and more at home in France, and are even more optimistic about France’s future than other religious groups[vii], may end up paying a disproportionate part of the bill in the form of increased suspicion from their compatriots and from the international community. [i] As quoted by Frances Stead Sellers in the Washington Post, and by Le Monde. [ii] Read Chirac’s November 14 speech. [iii] See: http://www.ined.fr/publications/collections/dossiersetrecherches/130.pdf [iv] See: http://www.danielpipes.org/article/3113 [v] See: http://www.uoif-online.com/modules.php?op=modload&name=News&file=article&sid=414&mode=thread&order=0&thold=0 [vi] Sylvain Brouard and Vincent Tiberaj, Rapport au politique des Français issus de l’immigration, CEVIPOF, Sciences Po, June 2005. [vii] See: http://www.vaisse.net/BiblioJustin/Tribunes/BiblioJustin-Marx_not_Bin_Laden9Nov05.pdf |
290e4e9fef7062d6ee66a734f63e2eab | https://www.brookings.edu/articles/us-japan-relations-in-the-era-of-trump/?shared=email&msg=fail | US-Japan relations in the era of Trump | US-Japan relations in the era of Trump No bilateral relationship matters more to Japan than the one with the United States. From wartime foe and occupying force, the United States morphed into Japan’s indispensable security partner, becoming its anchor for a successful reintegration into the postwar international system. The basic deal undergirding the U.S.-Japan alliance remains intact: the extension of the U.S. nuclear umbrella protection in exchange for Japan hosting American military bases that are at the heart of U.S. forward military projection in the critical Asian region. But if there is a constant in the U.S.-Japan relationship, it is its perpetual transformation. The bilateral relationship is now entering uncharted territory. U.S. domestic politics proved to be the black swan of our era, ushering in the Trump presidency. His “America First” policy, with its transactional view of alliances and its brazen economic unilateralism, challenges key pillars of the U.S.-Japan relationship. |
bb7b6e27e7f43e37892eb248652412a5 | https://www.brookings.edu/articles/veiled-meaning-the-french-law-banning-religious-symbols-in-public-schools/?emci=24fe5c35-7571-eb11-9889-00155d43c992&emdi=ea000000-0000-0000-0000-000000000001&ceid= | Veiled Meaning: The French Law Banning Religious Symbols in Public Schools | Veiled Meaning: The French Law Banning Religious Symbols in Public Schools On March 3, 2004, the French Senate gave the final approval for a bill prohibiting the wearing of conspicuous religious symbols in public schools. The law, which will enter into force in September, does not ban the wearing of headscarves or any other conspicuous symbol in public places, universities, or in private schools, and does not actually change the status quo established in France by a government ruling in 1989 and a ministerial decree in 1994. Rather, the law is a narrowly defined reassertion of religious neutrality within French public schools.2 This vote implements one of the recommendations of a special commission on religion in France, appointed by the government and headed by Bernard Stasi, a former member of the European Parliament and now the mediator, essentially Ombudsman, of the Republic, which heard hundreds of witnesses between July and December 2003.3 This law has been widely condemned in the United States. American public high schools accept students wearing religious symbols, such as the headscarf, a Jewish skullcap or a large Christian cross. Many Americans therefore assume that the wearing of such personal symbols in public schools can be accommodated without violating principles of religious freedom. French supporters of the headscarf ban, however, argue that in the current French social, political and cultural context, they cannot. That is why the government felt it was necessary to pass a new law. |
71ebb34e59b4d7ff0abf3f0355df6be2 | https://www.brookings.edu/articles/vox-populi-public-opinion-and-the-democratic-dilemma/ | Vox Populi: Public opinion and the democratic dilemma | Vox Populi: Public opinion and the democratic dilemma “Now at the feast the governor was accustomed to release for the crowd any one prisoner whom they wanted. . . . The governor again said to them, ‘Which of the two do you want me to release for you?’ And they said, ‘Barabbas.’ Pilate said to them, ‘Then what shall I do with Jesus who is called Christ?’ They all said, ‘Let him be crucified.’ And he said, ‘Why what evil has he done?’ But they shouted all the more, ‘Let him be crucified.’ So when Pilate saw that he was gaining nothing, but rather that a riot was beginning, he took water and washed his hands before the crowd . . . Then he released for them Barabbas and having scourged Jesus, delivered him to be crucified.” Thus, Matthew’s dramatic rendering of Pilate’s accession to the demand of the crowd for the crucifixion of Jesus raises the fundamental dilemma of democratic governance: the relative claims of the wishes of the public and the wisdom of public officials in making policy. That is, what is the appropriate balance between the preferences of citizens and the considered judgment of policymakers? Some four centuries earlier the same issue had arisen in a society with a more democratic tradition, ancient Athens, and the people had ruled, if less passionately, similarly unwisely. With Athens recovering from a protracted war and experiencing some political turmoil, Socrates stood accused of introducing novel religious practices and corrupting the young. He chose trial rather than voluntary self-exile. As reported by Plato, Socrates was, despite an eloquent self-defense, found guilty by a jury of 500 citizens and sentenced to die. Although his friends contrived for him to escape from prison, he opted to remain in chains, arguing that while he believed himself innocent, he did not wish to violate a lawful process. Eventually Socrates drank the hemlock. That we can attribute the condemnation of two men who influenced so profoundly the course of Western civilization to the myopia and suggestibility of ordinary citizens acting collectively might lead us to be skeptical of the capacity of the people for self-government and to infer that we should trust instead in the wisdom of their leaders. Before writing off the public, however, we should recall that the history of the ancients does not always cast such doubt on the decisions of ordinary people. As related by Thucydides in “The Debate on Mytilene,” the Athenians demonstrated superior judgment in their deliberation about the fate of Mytilene, a city on the island of Lesbos that had broken ranks with Athens to join forces with Sparta in the Peloponnesian War. The angry Athenians first made a hasty and unprecedented decision to put to death not only the captured Mytilenian rebels who had incited the revolt but the city’s entire adult male population. After dispatching a trireme with the news, however, the Athenians had second thoughts and, following a debate of considerable elevation, rejected a demagogic appeal by Cleon and reversed their original cruel decision. Furthermore, the trials of Jesus and Socrates contain their own ambiguities. Pilate, as the Roman procurator of Judea, did not owe his tenure in office to the potentially unruly crowd to which he deferred and presumably enjoyed considerable autonomy. 
And the prosecution of Socrates was instigated not by the citizens of Athens but by Anytus, one of the leaders of the restored Athenian democracy, who persuaded the young Meletus to bring the charges. It All Depends Given the wealth of examples—ranging from scandalous disregard for democratic process by the Nixon administration during Watergate to the something-for-nothing tax aversion of the public—of venality, shortsightedness, opportunism, and ignorance on the part of both the American people and their democratic leaders, it is not surprising that our political rhetoric displays considerable ambivalence about the dilemma of public opinion and democratic governance. When political leaders alter their stated positions to correspond to the expressed preferences of constituents, are they being responsive or pandering? Is finding the middle ground compromise or selling out? Are public officials who stake out a position at variance with the popular will and then seek to bring the public behind them demonstrating leadership or a stubborn refusal to listen to the people? Are they educating or propagandizing the citizenry? Our answers inevitably depend on whose ox is being gored. How we evaluate the politician who sails against the wind of public opinion depends on whether he is Winston Churchill delivering Cassandra-like warnings against the German threat on the eve of World War II or Slobodan Milosevic fomenting ethnic hostility and leading the Serbs into war. Similarly, how we respond to the politician who heeds public reactions depends on whether he is Franklin Roosevelt dropping his unpopular Supreme Court-packing plan or George Wallace attributing his loss in Alabama’s 1958 runoff gubernatorial primary to having been “out-segged” by his Klan-backed opponent and vowing never to let it happen again. Churchill was a leader; Milosevic, a demagogue. Roosevelt was responsive; Wallace pandered. What Do the People Say? As might be expected, this ambivalence is reflected in what the public says about how the democratic process ought to work. The July-August 2002 issue of Public Perspective reports the results of a survey conducted early in 2001 in collaboration with the Henry J. Kaiser Family Foundation that might suggest that the public comes down overwhelmingly on the side of an “instructed-delegate” version of democracy. Asked “[H]ow much influence do you think the views of the majority of Americans should have on the decisions of elected and government officials in Washington?” fully 68 percent of respondents replied “a great deal,” and another 26 percent said “a fair amount.” When the question was rephrased and an alternative suggested, however, the consensus evaporated. Forty-two percent believed that elected and government officials should “use their knowledge and judgment to make decisions about what is the best policy to pursue even if this goes against what the majority of the public wants.” Fifty-four percent believed that officials should “follow what the majority of the public wants, even if it goes against the officials’ knowledge and judgment.” When the question was further qualified, the commitment to majoritarian democracy eroded yet again. 
Reminded that “at times in the past, the majority of Americans have held positions later judged to be wrong, such as their support of racial segregation of blacks and whites,” 40 percent opined that “officials in Washington should do what the majority wants because the majority is usually right,” while 51 percent replied that “officials [should] rely on their knowledge and judgment when they think the majority is wrong.” Just to muddy the waters further, 23 percent of respondents agreed strongly that “elected officials consult polls because they believe the public should have a say in what government does,” and 58 percent agreed strongly that “the main reason officials consult polls is because they want to stay popular and get re-elected.” Although these two findings were presented under the heading “Cynicism,” it could be easily argued that both represent forms of democratic responsiveness. The emergence and refinement of the public opinion poll would seem to give the contemporary political leader who wishes to be responsive an important edge over such historical counterparts as the Ottoman Sultans, said to have disguised themselves to go out among their people, or George Washington, who would occasionally get on his horse to make soundings among the citizenry. Nevertheless, the Public Perspective survey suggests that the public has, at best, mixed views about opinion polls. On one hand, 76 percent of respondents indicated that polls are very or somewhat useful “for elected and government officials to understand how the public feels about important issues.” And 83 percent agreed strongly or somewhat that “public opinion polling is far from perfect, but it is one of the best means for communicating what the public is thinking.” On the other hand, 53 percent believed that opinion polls “accurately reflect what the public thinks” only some of the time and another 11 percent, that they hardly ever do. Moreover, 80 percent agreed strongly or somewhat that “the questions asked in polls don’t give people the opportunity to say what they really think about an issue.” Asked to choose among a variety of alternatives, only 25 percent selected public opinion polls as “the best way for officials to learn what the majority of people think about important issues,” compared with 43 percent who chose town hall meetings. The public’s lukewarm view of polls also emerges from the answers to a battery of eight items in the Public Perspective survey about the information sources to which public officials should pay attention when making decisions about important issues. Thirty-eight percent of respondents thought officials should pay a great deal of attention to public opinion polls. Coming in first—ahead not only of public opinion polls but also of journalists, lobbyists, campaign contributors, and officials’ own conscience or judgment—were “members of the public who contact them about the issue.” Of those surveyed, 58 percent said that public officials should pay a great deal of attention (and 32 percent, a fair amount of attention) to people who get in touch with them, a finding that raises another aspect of the democratic dilemma. It is well known that those who contact a public official are unlikely to represent a random sample of public opinion about the matter. They are, among other things, more likely to care intensely and to have relatively extreme views. To pay them special heed would be, in many cases, to oppose the preferences of the majority. 
The relative weight to be given to the views of a rather indifferent majority—as opposed to the views of a minority deeply concerned about a policy controversy—is a puzzle of long standing in democratic theory. The Framers of the Constitution showed considerable solicitude for minority viewpoints, and concern about the dangers of a majority faction informs Madison’s Federalist No. 10. Once again, however, it matters whose ox is being gored. Many people will urge deference to the wishes of either the intense minority that opposes handgun control or the intense minority that opposes school prayer, but not to both. In environmental conflicts, I have differing amounts of sympathy for various intensely concerned interests: the Midwestern utilities whose pollutants result in the acid rain that is killing my favorite New England ponds; the snowmobilers whose recreational tastes I do not share; and the commercial fishermen whose plight has been explained by the congressman we share and who provide savory fare for my table. Whether democratic leaders do respond to public opinion elicits no more agreement than whether they should. Various academic studies can be adduced to demonstrate strong links between public opinion and the actions of policymakers. These analyses proceed from the implicit assumption that policymakers, especially elected officials, feel constrained to pursue policy objectives congruent with—or, at least, seemingly congruent with—the preferences of their constituents. Some of these studies aggregate across a variety of issues. They show, for example, that, with a variety of factors taken into account, states where the public leans in a liberal (or conservative) direction tend to enact policies that reflect the overall tilt in public preferences. In addition, studies demonstrate that abrupt ideological shifts in public mood tend to produce subsequent policy changes in the corresponding direction. Furthermore, inquiries into the politics of particular policies, from Medicare to welfare to civil rights, find evidence for responsiveness to public opinion by public officials. But other analysts reach the opposite conclusion. Their studies stress a variety of themes: that the general public lacks the time and political concern to have informed and detailed opinions about multiple policy issues; that policymakers are more responsive to the attentive publics—campaign funders, lobbyists, and other policy activists—whose support is critical than to the public at large; and that policymakers can shape public opinion by using the media as well as polls and focus groups to craft messages that placate public opinion while maximizing autonomy. No Simple Answers The way to reconcile what might seem to be discrepant findings is to argue not that the truth lies in the middle, but that it all depends. Policymaking in America is so diverse and complex that no single pattern obtains for the relationship between public opinion and policy, and the appropriate goal is to specify the circumstances under which public opinion places broad or narrow limits on a public official’s actions. For example, we might inquire as to the nature of the constituency that matters for democratic responsiveness: the public of the whole nation, state, or municipality; the electoral district; or the partisan majority that elected the official. Moreover, issues differ—in their complexity, their visibility, and their salience to the public at large and to various groups within the public. 
Presumably, policymakers are more likely to be constrained by opinion when issues are highly visible and widely salient. When complexity is added to the equation, however, public officials may gain freedom to act—so long as they do something. As exemplified by recent federal legislation on both education and homeland security, the public wanted action and was impatient with congressional stalemate. Whether the resulting policies reflect public preferences—and whether they will address complex problems successfully—is not fully clear. Ironically, although we would expect policymakers to be more likely to be obliged to follow public opinion when issues are visible and salient, on certain visible issues, such as abortion, they may be compelled not to put a finger to the wind, but rather to remain steadfast to the dictates of conscience. Just as issues differ, so do political actions and policy arenas. Once again, visibility would seem to be a key factor. Legislators who feel constrained to follow their constituencies when a roll call vote is taken in the House may have more maneuvering room when a bill is marked up in subcommittee or when a narrow provision is slipped into an omnibus piece of legislation. A decision by an obscure administrative unit of the executive branch would seem less likely to receive public scrutiny than would an executive order. Moreover, policymakers in different offices are accountable to the public in different ways. Judges, especially those who serve for life, surely have more autonomy from public opinion than do legislators, though even judges are limited in the extent to which they can fly in the face of deeply held public preferences. These conjectures suggest the many dimensions that must be taken into account when specifying the circumstances under which policymakers are likely to pay attention to public preferences; to heed the pleadings of attentive special publics, their fellow partisans, or other political allies; and to exercise independent judgment. More years ago than I care to remember, when I studied the writing of the Constitution in 11th-grade U.S. history, I naively asked my teacher, “Are the people we send to Congress supposed to do what we want them to do or are they supposed to do what they think is best?” I do not remember exactly what my teacher replied. I know that I did not get a satisfactory answer. I still do not have one. |
6560ec32ee49ae7449976c5373aceda6 | https://www.brookings.edu/articles/washingtons-think-tanks-factories-to-call-our-own/?shared=email&msg=fail | Washington’s Think Tanks: Factories to Call Our Own | Washington’s Think Tanks: Factories to Call Our Own Editor’s Note: This article originally appeared in the Washingtonian’s August 2010 issue. Click here to view the text with full graphics and a chart. America has been defined by its great cities and their signature industries. Pittsburgh became the city of steel during the industrial age. During the first half of the 20th century, Detroit became the city of automobiles. Los Angeles led the rise of film and television. More recently, California’s Silicon Valley spearheaded the technology boom, and New York is synonymous with Wall Street. Washington has always been different. It has never had much of a manufacturing base, and since Georgetown ceased to be an active port in the late 18th century, it hasn’t exported many products. In fact, Washington “makes” very little. Yet there is one industry that Washington can claim as its own: the ideas industry. Travel down Massachusetts Avenue in Northwest DC and you’ll find yourself in the heart of an industry that was, when it began, unique to the nation’s capital. The imposing facades of the Brookings Institution, the Carnegie Endowment for International Peace, and the Johns Hopkins School of Advanced International Studies bear little resemblance to the old steel mills of Pittsburgh, but they are factories all the same—producing an endless stream of books, policy papers, reports, analyses, and commentary on everything from health care to taxes to defense. Washington’s “ideas” economy, based in its think tanks and universities, has made the city an intellectual leader. In 2009, the University of Pennsylvania conducted a survey of the world’s think tanks. It identified 6,305 in 169 countries. At the center of this universe was Washington. Some 393 think tanks were located in the District, more than in any other city in the world; DC is home to about one-fifth of all the think tanks in the United States. Another 149 are in Virginia and Maryland. With budgets ranging from a few hundred thousand dollars to $80 million, the ideas industry is a huge driver of the local economy. And it’s not just a matter of numbers. When the think tanks in the survey were rated for the influence of their work, nine of the top ten in the United States had offices in Washington; the Hoover Institution at Stanford University—staffed with many DC refugees—was the only non-DC think tank to make the top ten. Results were similar when think tanks were broken down into specialties. All of the world’s top five that work on environmental issues are in DC. Indeed, in almost every area of importance, Washington came out on top, with four of the top five in international economics and international affairs and three of the top five each in health policy and social policy. When it came to ranking the world’s think tanks according to overall influence and respect, the winner was the Brookings Institution, formed in 1916 by the Midwestern industrialist Robert Brookings. Coming in second was its next-door neighbor, the Carnegie Endowment for International Peace. The previous year, Brookings had tied for the top with the Peterson Institute for International Economics, across the street. If Washington is the center of the think-tank universe, the 1700 block of Massachusetts Avenue, just off Dupont Circle, is ground zero. 
“It is not surprising that think tanks cluster in Washington,” says Martin Indyk, a former US ambassador to Israel who helped found the Washington Institute for Near East Policy and now is director of foreign policy at Brookings. “It is at their core to deal with policy, for some to study it and others to advise on it.” With regard to policy, Washington’s think tanks can claim to have created an immense amount of change that has reshaped our nation and the world. Everything from the Marshall Plan to the US Agency for International Development to environmental standards found their origins in think tanks scattered around Washington. For instance, when President Reagan took office in 1981, he quickly gave every member of his cabinet an 1,100-page book from the Heritage Foundation, Mandate for Leadership, that provided an outline for conservative principles he wished to enact. Of its 2,000 recommendations, roughly 60 percent came to fruition—“which is why Mr. Reagan’s tenure was 60 percent successful,” leading conservative William F. Buckley Jr. later quipped. At the other end of the political spectrum, just days after the 2008 election the Center for American Progress—a progressive think tank founded in 2003 partly as a reaction to the success of Heritage—released a massive, 704-page outline of a possible agenda for newly elected Barack Obama. The yearlong effort, which resulted in the book Change for America: A Progressive Blueprint for the 44th President, helped the Obama administration jump-start its agenda as it came to Washington in early 2009, and more than 50 staff members from CAP have since joined the administration. While the Marshall Plan of 1947—which arose from work at Brookings—is hailed as a masterpiece of policy and diplomacy, many would argue that some of the worst policy decisions also had connections to think tanks. For example, a number of the erroneous claims about how easy a war against Iraq would be came with a think-tank seal of approval; most were cheerleading masquerading as analysis. Not everyone is a fan of Washington’s think tanks. Hady Amr, director of Brookings’s Doha Center, has studied Arab world views on the topic. He says that the prevailing view in the Middle East was that our think tanks were actually an “activist lobby entity that cooks up schemes that are secretly woven into government plans.” Ralph Peters, a retired Army lieutenant colonel and now a columnist for the New York Post, has a different, but no less critical, view. “Think tanks are simply welfare agencies for intellectuals who can’t survive in the marketplace as well as holding pens for political creatures briefly out of office,” he says, pointing out the thousands of policy papers that come out of the region’s think tanks. “The Sierra Club should be picketing them over all the innocent trees they’ve killed.” When I worked in the Pentagon, where the time horizon often extended only as far as the next morning’s newspaper, meticulous research-based findings were frequently eschewed in favor of whatever memos emerged from a balky bureaucratic process. By contrast, when I worked at Harvard, I used to joke that the professors would be glad if the newspaper stopped arriving each morning, as reality had the habit of getting in the way of their theories. Think tanks are like the bicycle chain that links the policy world with the research world, applying academic rigor to contemporary policy problems. In a sense, they’re universities with no students, whose world of study is politics and policy. 
Think tanks “help set policy agendas and bridge the gap between knowledge and power,” according to James G. McGann, director of the Think Tanks and Civil Societies Program at the University of Pennsylvania, who has spent more than 20 years studying the field. “At its best, a think tank contributes to a better world,” says Richard Danzig, a former Secretary of the Navy who has served on the boards of the Center for Strategic and Budgetary Assessments, the Rand Corporation, and Public Agenda and is now chairman of the Center for a New American Security. “It does this by sponsoring thought, research, and dialogue. Optimally, it provides support, time, and space to the privileged few who populate it so that they think more deeply, more broadly, and more soundly than the prevailing wisdom.” Think tanks can approach a tough policy problem without the time pressures government officials face. As Shawn Brimley, a Pentagon strategist who works in the Office of the Secretary of Defense, says, think tanks “help the government overcome the tyranny of the in box by providing good analysis on long-term strategic problems.” Even when think tanks come to different conclusions, as often happens, they provide policymakers and the public the means to a deeper understanding of issues than would have been possible without their scholarship and analysis. When the situation in Iraq deteriorated between 2004 and 2006, it was think tanks across the political spectrum that pushed the mix of alternative policies—a surge in troop numbers, new counterinsurgency tactics, and a change in pressure on the Iraqi government—that were ultimately adopted. Ideally, their nongovernmental status gives think-tankers a semblance of academic freedom. As former Pentagon official Phil Carter points out, it affords them “the ability to arrive at intellectually honest conclusions without fear of reprisal.” For this reason, think tanks are often stalking horses for controversial policies that policymakers want to deal with but consider too hot politically. When Obama officials wanted to show progress on allowing gay people to serve openly in the armed forces, they suggested having Rand study the “don’t ask, don’t tell” policy put in place during the Clinton administration. Think tanks have a quiet power that government either lacks or is unwilling to use: They bring together leaders and experts who should meet but whom government can’t convene publicly. Such “track twos” range from hosting Israeli and Palestinian negotiators to quietly gathering the multiple actors in the health-care debate for an off-the-record meeting. As much as think tanks thrive because of their nongovernmental status, they’re filled with people who either have been or want to be in government—or both. This “shadow government” function isn’t necessarily a bad thing in our political system, where opposition parties don’t have “shadow ministers” as they do in the United Kingdom and other parliamentary structures. Says Stephen Cohen, a former State Department official and now a Brookings senior fellow: “When I did my time in State policy planning, I realized that no one was learning anything once they got into the government—they relied upon their intellectual capital before they went in for however many years they lasted.” In this role, think tanks almost serve as a sort of recruiting network and farm team for government. 
Each year, Washington is restocked with bright young talent from around the world, thousands of whom arrive to work as interns or research assistants at think tanks, getting their feet wet in the policy world. And at this they are remarkably successful. Most governmental departments have a significant percentage of people who have worked at think tanks at some point in their careers. More than 60 percent of the assistant secretaries at the State Department came out of think tanks. Yet this is also part of the weakness of the field: Some think-tankers can get caught up in the action—politicking, networking, hobnobbing—and forget the research part of the job. Too many use think-tank perches as bully pulpits for delivering opinion cloaked as expertise. Watch CNN or Fox News for a week and this becomes clear. Many of the same “experts” pop up again and again, whether the topic is Pakistan or swine flu. They may seem authoritative, especially compared with the blow-dried hosts, but they’re often keeping that seat from being filled by an actual expert, who truly has studied the issue and would have something insightful or new to say. The seductive effect of being so close to power, as well as constantly being called by the media for quotes in stories or appearances on talk shows, can lead many in Washington’s think tanks to confuse visibility with utility. The more prominent someone is, the more likely the media are to solicit that person’s opinion. The result is that media appearances can result in real influence, warranted or not. The problem is that if we use the number of media citations as our main measure (which, unfortunately, many government policymakers and think-tank funders do), then Paris Hilton should be among the most powerful and consulted people in the world. Even worse is that this cross between politicking and media appearances can result in real influence, or at least grease one’s path to power. Sometimes things can get ugly. After Obama won the presidency, a number of left-leaning think tanks feuded in the blogs and the media over who had helped generate the ideas used by his campaign. It was an effort not only to further their institutional prominence—and show donors that their money had been well spent—but also to place their people in government. In Washington, the most common divide among people and organizations is the partisan one. But think tanks aren’t technically Republican or Democrat, for the simple reason that such labels would change their tax status. Anyone working in think tanks quickly learns the code words “conservative” and “progressive.” On the right are outfits such as the Heritage Foundation, whose Web site describes it as conservative, while on the left are groups such as the Center for American Progress, whose site says its mission is to “engage in the war of ideas with conservatives.” Partisanship or the lack thereof can be a key determinant of a think tank’s independence and effectiveness. An ideological institution can help set a political party’s agenda, but then it’s wed to party leaders’ decisions rather than being independent. Ironically, partisan think tanks often have easier times when the opposition is in power. When their team wins, the new administration sucks away their top talent, donors are less eager to contribute, and the freedom to comment—and criticize—is curtailed. 
In between the partisan divide are such entities as the Council on Foreign Relations, Brookings, and the Center for Strategic and International Studies, which are nonpartisan in more than name only. While they have fellows and staff who lean one way or the other (as is their right as citizens), these organizations don’t push a particular political view. A survey of their staffs probably would show that they tend to tilt slightly right or left overall, but usually in opposition to whichever side is in power. This is simply because more talent tends to be job-hunting from the side that’s out of power. To judge a think tank solely by where it lies on the political spectrum is to miss the other key differentiators. For instance, the target audience of the think tank can be a huge determinant of everything from its mode of communication to its geographic location. The Heritage Foundation and the American Enterprise Institute are right-leaning think tanks. Heritage focuses on influencing Congress, so it not only is located near the Capitol—with headquarters on the Senate side and a second, newer location near the House—but also produces shorter publications that frequently target legislative votes. It famously packages its ideas in a format designed to be readable in the time it takes a member of Congress to walk from his or her office to the House or Senate floor. By contrast, AEI may be just as right-wing in its partisanship—amusingly, it will sometimes try to deny this in angry letters to the editor, an effort hampered not only by its uniformly right-leaning staff and policy positions but also by the fact that its former president described it as “right of center” and extolled its mentorship of the careers of Milton Friedman, Robert Bork, and Jeane Kirkpatrick—but it has targeted the executive branch in its focus, with its staff briefing President George W. Bush as part of the debate over the “surge” in Iraq. To find another key differentiator among think tanks, follow the money. Only a few of the older ones are lucky enough to have endowments. Funding sources include government, corporate, foundation, and individual donors, each with its own goals in mind. Some donors want to influence votes in Congress or shape public opinion, others want to position themselves or the experts they fund for future government jobs, while others want to push specific areas of research or education. Critical factors are the parameters under which a think tank accepts money and how dependent it is on one donor’s largess. Can it walk away from the money if the goals of the donor and the think tank begin to differ? If not, it has been “captured” by the donor. Funding can also affect the question of who decides what is to be researched. The individual researcher? The think tank’s leadership? An outside group? Both Brookings and Rand are large, nonpartisan think tanks that would seem to have much in common. But Brookings is funded by a mix of endowment and donor money, with individual scholars deciding their own major research projects, while Rand is a Federally Funded Research and Development Center—a think tank directly supported by the federal government, as are other centers such as the National Cancer Institute in Frederick, the Center for Naval Analyses in Alexandria, and McLean’s Center for Advanced Aviation System Development—which means that government officials decide its research priorities. 
In any industry, the channels of distribution matter greatly, and here, too, is a factor that is greatly shaping Washington’s ideas industry. Fairness and Accuracy in Reporting found that Brookings has consistently been the think tank most frequently cited in newspapers and on TV over the last five years. However, the number of traditional press mentions it needed to hold the top spot has declined from 4,675 to 2,166. It wasn’t that think tanks were doing less work but that there were fewer newspapers, and the ones that remained were covering less news. As print loses its central importance and new forms of communication emerge, think tanks are wrestling with how to adapt—many are adding blogs, Twitter feeds, and Facebook pages. Tensions recently arose between the libertarian Cato Institute and the Heritage Foundation not because of the split between libertarians and conservatives but because staffers at the two think tanks accused one another of committing the cardinal Facebook faux pas of stalking their institution’s “fan” list. Whether trying to “friend” someone else’s fans is something think tanks should be doing is a good question, but there are more-serious issues. All Washington think tanks must wrestle not only with how to get their messages out in this new medium but also with whether the medium begins to change the message. Can something of importance in public policy be said in ten-second sound bites or Twitter posts of 140 characters? Another trend is the simultaneous proliferation and shrinkage of Washington’s think tanks. Most of those that have popped up in the last decade have been smaller and more specialized. Their lower overhead has allowed them to compete rapidly with more-established think tanks. Yet the rise of the single-dimension format coincides with the growth of policy problems that are more multidimensional. The newer think tanks also tend to be shorter-term in their organizational focus, more reliant on the ebb and flow of donor money, and thus potentially less independent and accountable. This issue of accountability is of even greater importance now that the chief targets of the work of think tanks—government officials—are figuring out how to co-opt it. At one time, think tanks were held at a distance by people in government even in the close confines of Washington. Those policymakers who appreciated their work did so for the very reason that the work was produced outside government channels, while those who disliked think tanks viewed them as the enemy. President Nixon’s tapes revealed that more than a year before Watergate he ordered a break-in at Brookings’s offices, which was foiled by a security guard. Today think-tankers joke that you haven’t made it until you’ve served on some sort of government “review” or “advisory” panel. During the policy debate over the war in Afghanistan, no fewer than three review groups of think-tankers were formed by government bodies, each given access and status—and thus better chances for op-eds and interviews—but no actual power. Some of the group members seem to have been chosen less for their expertise than because of their prominence—part of a broader transformation of instant Iraq experts into automatic Afghanistan gurus. 
On Foreign Policy’s Web site, Laura Rozen slyly wrote of General Stanley McChrystal’s efforts to win important hearts and minds: “He has moved to deftly enlist the Washington class of think tankers, armchair warriors, foreign-policy pundits and op-ed writers in the success of his mission—as well as grab up a few people who have made their mark in Afghanistan.” Later, some of those McChrystal had wooed found out his true feelings in that infamous Rolling Stone article, as he and his staff vented more harsh feelings about Washington policymakers. Rory Stewart, director of the Carr Center for Human Rights Policy and a member of Richard Holbrooke’s special committee for Afghanistan and Pakistan policy (a different advisory group from McChrystal’s), described, in a Financial Times interview, the perils of selecting advisers for reasons other than to get their advice: “I do a lot of work with policymakers, but how much effect am I having? It’s like they’re coming in and saying to you, ‘I’m going to drive my car off a cliff. Should I or should I not wear a seatbelt?’ And you say, ‘I don’t think you should drive your car off the cliff.’ And they say, ‘No, no, that bit’s already been decided—the question is whether to wear a seatbelt.’ And you say, ‘Well, you might as well wear a seatbelt.’ And then they say, ‘We’ve consulted with policy expert Rory Stewart and he says . . . .’ ” Washington think tanks are being buffeted by the economy—the financial crisis is shaping both the funding environment for think tanks and the way they respond. With shrinking public and private dollars, many enacted hiring freezes. Unfortunately for Bush-administration refugees, this happened at almost the same time they started to job-hunt, so the revolving door didn’t swing as widely as it had in the past, leaving many staffs more unbalanced than they once were, not for strategic reasons but because of economics. The shrinking money environment also put motivated donors in a more privileged position and presented think tanks with harder choices about whom to take money from. Last year, lawyers representing private military firms offered to help raise money in the six figures and up for a set of think tanks—if they would conduct a study on private military firms that would be issued to the public. In a time of staff layoffs and budget deficits, some think tanks accepted the money; others took the high road—and suffered the budgetary consequences. But perhaps the most important trend affecting think tanks is globalization. As corporations and universities have been doing for years, many think tanks are opening up branches abroad. The Carnegie Endowment for International Peace has reached beyond Dupont Circle to open new offices in Beijing, Beirut, Brussels, and Moscow. The other part of globalization is the rise of indigenous think tanks. China has gone from just a few a decade ago to 428 in 2009, moving it into second place in number of think tanks. The People’s Daily newspaper reported that the Chinese organizations are also growing in influence: “Every time that the government formulates fundamental policies, or seeks to resolve problems in the lives of citizens, these think tanks provide their counsel.” The rise of so many think tanks around the world raises questions about their future. As James McGann of the University of Pennsylvania explains, “Designed to look like nongovernmental organizations, these are in fact arms of the government. 
They’ve emerged as a favorite strategy for authoritarian regimes to mask their diktats as a flourishing civil society.” Perhaps the best illustration is that there are now two think tanks in North Korea; it’s hard to argue that they actually provide independent research. The broader globalization question for think tanks here may be simpler—and more worrisome. Washington may have been the origin and center of think tanks for the last century, but no industry stays the same forever. Indeed, the 2009 Global Think-Tank Summit wasn’t held inside the Beltway—it was in Beijing. Could what happened to America’s manufacturing industry also one day befall Washington’s ideas industry? |
778d6bef68f2b642e1d7ac22f7e16161 | https://www.brookings.edu/articles/what-now-the-oval-office/ | What Now? The Oval Office | What Now? The Oval Office You are about to move into the Oval Office—one of the most dramatic, architecturally satisfying rooms in the world—and you are going to have to make a basic decision: do you wish to admire it or work in it? Clearly, this is a great ceremonial place. You will call in members of the press pool to snap photos of you chatting with world leaders, with the marble mantel of the fireplace in the background, the presidential seal set in plaster on the ceiling, the flags of the United States and the president behind the desk. But is this really where you want to roll up your sleeves, spread out your papers, loosen your tie and work? I have worked in the White House twice, for two very different people, and their answers were yes (Eisenhower) and no (Nixon). The Oval Office was where President Eisenhower chose to conduct the affairs of state. He didn't even bother to change the green carpet and draperies from Harry Truman's occupancy. Nixon, on the other hand, established his serious workspace across the gated West Executive Street and up a flight of stairs in Room 180 of the Executive Office Building (since renamed the Eisenhower Executive Office Building). It was here that Nixon probably had a hole drilled into his desk to secure the wires to the machine that was taping "Watergate" conversations. The Oval Office was relegated to the ceremonial place. The president who made himself most at home in the Oval Office was John F. Kennedy. He brought in a rocking chair to ease his back pain; his collection of ship models; maritime paintings (instead of presidential portraits); a silver goblet from New Ross, Ireland, the town from which his great-grandfather set off for America; a watercolor of the White House painted by his wife; a chair from his student days at Harvard; and a plaque, given to him by Admiral Hyman Rickover, inscribed with the words of the Breton fisherman's prayer: "O, God, Thy sea is so great and my boat is so small." On his desk, encased in plastic, sat the coconut shell carved with the message that led to his rescue after PT-109 was cut in two by a Japanese destroyer in the Solomon Islands. Other presidents added their own personal touches. Reagan gave the Oval Office a distinctly Western flavor with Remington cowboy sculptures and miniature bronze saddles. George H. W. Bush featured blue and white, the colors of Yale, his alma mater. His son hung scenes of Texas by Texan artists on loan from Texas museums. In addition to deciding which portraits to hang in the Oval Office, you will have to choose which desk you will use there. The options, however, are limited to four historic desks—the Resolute Desk, the Theodore Roosevelt Desk, the Wilson Desk, and the C&O Desk—or bringing your own. The Resolute Desk: The Resolute Desk is a partner's desk, meaning it was designed to accommodate a person sitting and working on either side. Franklin D. Roosevelt, however, chose to add a center panel with a carved Seal of the President in order to hide his iron leg braces from view and to conceal a safe. While the desk has been used often by presidents since 1880, Kennedy was the first to put it in the Oval Office. Carter, Reagan, Clinton, and George W. Bush also used the Resolute Desk in the Oval Office. 
Theodore Roosevelt Desk This is the original West Wing desk, made in 1902 for Theodore Roosevelt, used in the Oval Office by Taft, Wilson, Harding, Coolidge, Hoover, Franklin D. Roosevelt, Truman, and Eisenhower. Nixon chose this desk for his "working office," Room 180 in the Eisenhower Executive Office Building, and presumably the Watergate tapes were made by an apparatus concealed in its drawer. Its practical advantage is a larger work surface than the Resolute Desk's. The Wilson Desk The Wilson Desk was used in the Oval Office by Presidents Nixon and Ford. This was Nixon's desk in the Capitol when he was vice president, and he requested it for the White House. His attachment stemmed from his belief that the desk once belonged to Woodrow Wilson. Unfortunately, a footnote in the 1969 edition of Public Papers of the Presidents noted: "Later research indicated that the desk had not been President Woodrow Wilson's as had long been assumed but was used by Vice President Henry Wilson during President Grant's administration." C&O Desk The C&O Desk was used in the Oval Office by George H. W. Bush, who moved it from his vice presidential office in the Capitol. It is a handsome reproduction of an eighteenth-century English double pedestal desk, with a full set of drawers on each side, made around 1920 for the owners of the Chesapeake & Ohio Railway. It was later donated to the White House and used by Ford, Carter, and Reagan in the West Wing Study. Your Own Desk You can, of course, bring your own desk with you to the Oval Office, as did Lyndon Johnson. The Johnson desk is now in the replica Oval Office at the LBJ museum in Austin, and, I am reliably told, the retired president sometimes sat at the desk to surprise unsuspecting museum visitors. |
a58a7ffd58ae9f2be8eba416365255a6 | https://www.brookings.edu/articles/who-needs-harvard/?shared=email&msg=fail | Who Needs Harvard? | Who Needs Harvard? Today almost everyone seems to assume that the critical moment in young people’s lives is finding out which colleges have accepted them. Winning admission to an elite school is imagined to be a golden passport to success; for bright students, failing to do so is seen as a major life setback. As a result, the fixation on getting into a super-selective college or university has never been greater. Parents’ expectations that their children will attend top schools have “risen substantially” in the past decade, says Jim Conroy, the head of college counseling at New Trier High School, in Winnetka, Illinois. He adds, “Parents regularly tell me, ‘I want whatever is highest-ranked.'” Shirley Levin, of Rockville, Maryland, who has worked as a college-admissions consultant for twenty-three years, concurs: “Never have stress levels for high school students been so high about where they get in, or about the idea that if you don’t get into a glamour college, your life is somehow ruined.” Admissions mania focuses most intensely on what might be called the Gotta-Get-Ins, the colleges with maximum allure. The twenty-five Gotta-Get-Ins of the moment, according to admissions officers, are the Ivies (Brown, Columbia, Cornell, Dartmouth, Harvard, Penn, Princeton, and Yale), plus Amherst, Berkeley, Caltech, Chicago, Duke, Georgetown, Johns Hopkins, MIT, Northwestern, Pomona, Smith, Stanford, Swarthmore, Vassar, Washington University in St. Louis, Wellesley, and Williams. Some students and their parents have always been obsessed with getting into the best colleges, of course. But as a result of rising population, rising affluence, and rising awareness of the value of education, millions of families are now in a state of nervous collapse regarding college admissions. Moreover, although the total number of college applicants keeps increasing, the number of freshman slots at the elite colleges has changed little. Thus competition for elite-college admission has grown ever more cutthroat. Each year more and more bright, qualified high school seniors don’t receive the coveted thick envelope from a Gotta-Get-In. But what if the basis for all this stress and disappointment—the idea that getting into an elite college makes a big difference in life—is wrong? What if it turns out that going to the “highest ranked” school hardly matters at all? The researchers Alan Krueger and Stacy Berg Dale began investigating this question, and in 1999 produced a study that dropped a bomb on the notion of elite-college attendance as essential to success later in life. Krueger, a Princeton economist, and Dale, affiliated with the Andrew Mellon Foundation, began by comparing students who entered Ivy League and similar schools in 1976 with students who entered less prestigious colleges the same year. They found, for instance, that by 1995 Yale graduates were earning 30 percent more than Tulane graduates, which seemed to support the assumption that attending an elite college smoothes one’s path in life. But maybe the kids who got into Yale were simply more talented or hardworking than those who got into Tulane. To adjust for this, Krueger and Dale studied what happened to students who were accepted at an Ivy or a similar institution, but chose instead to attend a less sexy, “moderately selective” school. 
It turned out that such students had, on average, the same income twenty years later as graduates of the elite colleges. Krueger and Dale found that for students bright enough to win admission to a top school, later income “varied little, no matter which type of college they attended.” In other words, the student, not the school, was responsible for the success. Research does find an unmistakable advantage to getting a bachelor’s degree. In 2002, according to Census Bureau figures, the mean income of college graduates was almost double that of those holding only high school diplomas. Trends in the knowledge-based economy suggest that college gets more valuable every year. For those graduating from high school today and in the near future, failure to attend at least some college may mean a McJobs existence for all but the most talented or unconventional. But, as Krueger has written, “that you go to college is more important than where you go.” The advantages conferred by the most selective schools may be overstated. Consider how many schools are not in the top twenty-five, yet may be only slightly less good than the elites: Bard, Barnard, Bates, Bowdoin, Brandeis, Bryn Mawr, Bucknell, Carleton, Carnegie Mellon, Claremont McKenna, Colby, Colgate, Colorado College, Davidson, Denison, Dickinson, Emory, George Washington, Grinnell, Hamilton, Harvey Mudd, Haverford, Holy Cross, Kenyon, Lafayette, Macalester, Middlebury, Mount Holyoke, Notre Dame, Oberlin, Occidental, Reed, Rice, Sarah Lawrence, Skidmore, Spelman, St. John’s of Annapolis, Trinity of Connecticut, Union, Vanderbilt, Washington and Lee, Wesleyan, Whitman, William and Mary, and the universities of Michigan and Virginia. Then consider the many other schools that may lack the je ne sais quoi of the top destinations but are nonetheless estimable, such as Boston College, Case Western, Georgia Tech, Rochester, SUNY-Binghamton, Texas Christian, Tufts, the University of Illinois at Champaign Urbana, the University of North Carolina at Chapel Hill, the University of Texas at Austin, the University of Washington, the University of Wisconsin at Madison, and the University of California campuses at Davis, Irvine, Los Angeles, and San Diego. (These lists are meant not to be exhaustive but merely to make the point that there are many, many good schools in America.) “Any family ought to be thrilled to have a child admitted to Madison, but parents obsessed with prestige would not consider Madison a win,” says David Adamany, the president of Temple University. “The child who is rejected at Harvard will probably go on to receive a superior education and have an outstanding college experience at any of dozens of other places, but start off feeling inadequate and burdened by the sense of disappointing his or her parents. Many parents now set their children up to consider themselves failures if they don’t get the acceptance letter from a super-selective school.” Beyond the Krueger-Dale research, there is abundant anecdotal evidence that any of a wide range of colleges can equip its graduates for success. Consider the United States Senate. This most exclusive of clubs currently lists twenty-six members with undergraduate degrees from the Gotta-Get-Ins—a disproportionately good showing considering the small percentage of students who graduate from these schools. But the diversity of Senate backgrounds is even more striking. Fully half of U.S. 
senators are graduates of public universities, and many went to “states”—among them Chico State, Colorado State, Iowa State, Kansas State, Louisiana State, Michigan State, North Carolina State, Ohio State, Oklahoma State, Oregon State, Penn State, San Jose State, South Dakota State, Utah State, and Washington State. Or consider the CEOs of the top ten Fortune 500 corporations: only four went to elite schools. H. Lee Scott Jr., of Wal-Mart, the world’s largest corporation, is a graduate of Pittsburg State, in Pittsburg, Kansas. Or consider Rhodes scholars: this year only sixteen of the thirty-two American recipients hailed from elite colleges; the others attended Hobart, Millsaps, Morehouse, St. Olaf, the University of the South, Utah State, and Wake Forest, among other non-elites. Steven Spielberg was rejected by the prestigious film schools at USC and UCLA; he attended Cal State Long Beach, and seems to have done all right for himself. Roger Straus, of Farrar, Straus & Giroux, one of the most influential people in postwar American letters, who died last spring at eighty-seven, was a graduate of the University of Missouri. “[Students] have been led to believe that if you go to X school, then Y will result, and this just isn’t true,” says Judith Shapiro, the president of Barnard. “It’s good to attend a good college, but there are many good colleges. Getting into Princeton or Barnard just isn’t a life-or-death matter.” That getting into Princeton isn’t a life-or-death matter hit home years ago for Loren Pope, then the education editor of The New York Times. For his 1990 book, Looking Beyond the Ivy League, Pope scanned Who’s Who entries of the 1980s, compiling figures on undergraduate degrees. (This was at a time when Who’s Who was still the social directory of American distinction—before the marketing of Who’s Who in Southeastern Middle School Girls’ Tennis and innumerable other spinoffs.) Pope found that the schools that produced the most Who’s Who entrants were Yale, Harvard, Princeton, Chicago, and Caltech; that much conformed to expectations. But other colleges near the top in Who’s Who productivity included DePauw, Holy Cross, Wabash, Washington and Lee, and Wheaton of Illinois. Pope found that Bowdoin, Denison, Franklin & Marshall, Millsaps, and the University of the South were better at producing Who’s Who entrants than Georgetown or the University of Virginia, and that Beloit bested Duke. These findings helped persuade Pope that the glamour schools were losing their status as the gatekeepers of accomplishment. Today Pope campaigns for a group of forty colleges that he considers nearly the equals of the elite, but more personal, more pleasant, less stress-inducing, and—in some cases, at least—less expensive. Institutions like Hope, Rhodes, and Ursinus do not inspire the same kind of admissions lust as the Ivies, but they are places where parents should feel very good about sending their kids. (A list of the well-regarded non-elite colleges Pope champions can be found at www.ctcl.com.) The Gotta-Get-Ins can no longer claim to be the more or less exclusive gatekeepers to graduate school. Once, it was assumed that an elite-college undergraduate degree was required for admission to a top law or medical program. No more: 61 percent of new students at Harvard Law School last year had received their bachelor’s degrees outside the Ivy League. 
“Every year I have someone who went to Harvard College but can’t get into Harvard Law, plus someone who went to the University of Maryland and does get into Harvard Law,” Shirley Levin says. For Looking Beyond the Ivy League, Pope analyzed eight consecutive sets of scores on the medical-school aptitude test. Caltech produced the highest-scoring students, but Carleton outdid Harvard, Muhlenberg topped Dartmouth, and Ohio Wesleyan finished ahead of Berkeley. The elites still lead in producing undergraduates who go on for doctorates (Caltech had the highest percentage during the 1990s), but Earlham, Grinnell, Kalamazoo, Kenyon, Knox, Lawrence, Macalester, Oberlin, and Wooster do better on this scale than many higher-status schools. In the 1990s little Earlham, with just 1,200 students, produced a higher percentage of graduates who have since received doctorates than did Brown, Dartmouth, Duke, Northwestern, Penn, or Vassar. That non-elite schools do well in Who’s Who and in sending students on to graduate school or to the Senate suggests that many overestimate the impact of the Gotta-Get-Ins not only on future earnings but on interesting career paths as well. For example, I graduated from Colorado College, a small liberal arts institution that is admired but, needless to say, is no Stanford. While I was there, in the mid-1970s, wandering around the campus were disheveled kids whose names have since become linked with an array of achievements: Neal Baer, M.D., an executive producer for the NBC show ER; Frank Bowman, a former federal prosecutor often quoted as the leading specialist on federal sentencing guidelines; Katharine DeShaw, the director of fundraising for the Los Angeles County Museum of Art; David Hendrickson, the chairman of the political-science department at Colorado College; Richard Kilbride, the managing director of ING Asset Management, which administers about $450 billion; Robert Krimmer, a television actor; Margaret Liu, M.D., a senior adviser to the Bill and Melinda Gates Foundation, and one of the world’s foremost authorities on vaccines; David Malpass, the chief economist for Bear Stearns; Mark McConnell, an animator who has won Emmys for television graphics; Jim McDowell, the vice-president of marketing for BMW North America; Marcia McNutt, the CEO of the Monterey Bay Aquarium Research Institute; Michael Nava, the author of the Henry Rios detective novels; Peter Neupert, the CEO of Drugstore.com; Anne Reifenberg, the deputy business editor of the Los Angeles Times; Deborah Caulfield Rybak, a co-author of an acclaimed book about tobacco litigation; Ken Salazar, the attorney general of Colorado and a Democratic candidate for the U.S. Senate in 2004; Thom Shanker, the Pentagon correspondent for The New York Times; Joe Simitian, named to the 2003 Scientific American list of the fifty most influential people in technology; and Eric Sondermann, the founder of one of Denver’s top public-relations firms. In terms of students who went on to interesting or prominent lives, Colorado College may have done just as well in this period as Columbia or Cornell or any other Gotta-Get-In destination. Doubtless other colleges could make the same claim for themselves for this or other periods; I’m simply citing the example I know personally. The point is that for some time the center of gravity for achievement has been shifting away from the topmost colleges. Fundamental to that shift has been a steady improvement in the educational quality of non-elite schools. 
Many college officials I interviewed said approximately the same thing: that a generation or two ago it really was a setback if a top student didn’t get admitted to an Ivy or one of a few other elite destinations, because only a small number of places were offering a truly first-rate education. But since then the non-elites have improved dramatically. “Illinois Wesleyan is a significantly better college than it was in the 1950s,” says Janet McNew, the school’s provost, “whereas Harvard has probably changed much less dramatically in the past half century.” That statement could apply to many other colleges. Pretty good schools of the past have gotten much better, while the great schools have remained more or less the same. The result is that numerous colleges have narrowed the gap with the elites. How many colleges now provide an excellent education? Possibly a hundred, suggests Jim Conroy, of New Trier; probably more than two hundred, Shirley Levin says. The improvement is especially noteworthy at large public universities. Michigan and Virginia have become “public Ivies,” and numerous state-run universities now offer a top-flight education. Whether or not students take a public university up on its offer of a good education is another matter: large, chaotic campuses may create an environment in which it’s possible to slide by with four years of drinking beer and playing video games, whereas small private colleges usually notice students who try this. Yet the rising quality of public universities is important, because these schools provide substantial numbers of slots, often with discounted in-state tuition. Many families who cannot afford private colleges now have appealing alternatives at public universities. One reason so many colleges have improved is the profusion of able faculty members. The education wave fostered by the GI Bill drew many talented people into academia. Because tenured openings at the glamour schools are subject to slow turnover, this legion of new teachers fanned out to other colleges, raising the quality of instruction at non-elite schools. While this was happening, the country became more prosperous, and giving to colleges—including those below the glamour level—shot up. When the first GI Bill cohort began to die, big gifts started flowing to the non-elites. (Earlier this year one graduate bequeathed Pitt’s law school $4.25 million.) Today many non-elite schools have significant financial resources: Emory has an endowment of $4.5 billion, Case Western an endowment of $1.4 billion, and even little Colby an endowment of $323 million—an amount that a few decades ago would have seemed unimaginable for a small liberal arts school without a national profile. As colleges below the top were improving, the old WASP insider system was losing its grip on business and other institutions. There was a time when an Ivy League diploma was vital to career advancement in many places, because an Ivy grad could be assumed to be from the correct upper-middle-class Protestant background. Today an Ivy diploma reveals nothing about a person’s background, and favoritism in hiring and promotion is on the decline; most businesses would rather have a Lehigh graduate who performs at a high level than a Brown graduate who doesn’t. Law firms do remain exceptionally status-conscious—some college counselors believe that law firms still hire associates based partly on where they were undergraduates. 
But the majority of employers aren’t looking for status degrees, and some may even avoid candidates from the top schools, on the theory that such aspirants have unrealistic expectations of quick promotion. Relationships labeled ironic are often merely coincidental. But it is genuinely ironic that as non-elite colleges have improved in educational quality and financial resources, and favoritism toward top-school degrees has faded, getting into an elite school has nonetheless become more of a national obsession. Which brings us back to the Krueger-Dale thesis. Can we really be sure Hamilton is nearly as good as Harvard? Some analysts maintain that there are indeed significant advantages to the most selective schools. For instance, a study by Caroline Hoxby, a Harvard economist who has researched college outcomes, suggests that graduates of elite schools do earn more than those of comparable ability who attended other colleges. Hoxby studied male students who entered college in 1982, and adjusted for aptitude, though she used criteria different from those employed by Krueger and Dale. She projected that among students of similar aptitude, those who attended the most selective colleges would earn an average of $2.9 million during their careers; those who attended the next most selective colleges would earn $2.8 million; and those who attended all other colleges would average $2.5 million. This helped convince Hoxby that top applicants should, in fact, lust after the most exclusive possibilities. “There’s a clear benefit to the top fifty or so colleges,” she says. “Connections made at the top schools matter. It’s not so much that you meet the son of a wealthy banker and his father offers you a job, but that you meet specialists and experts who are on campus for conferences and speeches. The conference networking scene is much better at the elite universities.” Hoxby estimates that about three quarters of the educational benefit a student receives is determined by his or her effort and abilities, and should be more or less the same at any good college. The remaining quarter, she thinks, is determined by the status of the school—higher-status schools have more resources and better networking opportunities, and surround top students with other top students. “Today there are large numbers of colleges with good faculty, so faculty probably isn’t the explanation for the advantage at the top,” Hoxby says. “Probably there is not much difference between the quality of the faculty at Princeton and at Rutgers. But there’s a lot of difference between the students at those places, and some of every person’s education comes from interaction with other students.” Being in a super-competitive environment may cause a few students to have nervous breakdowns, but many do their best work under pressure, and the contest is keenest at the Gotta-Get-Ins. Hoxby notes that some medium-rated public universities have established internal “honors colleges” to attract top performers who might qualify for the best destinations. “Students at honors colleges in the public universities do okay, but not as well as they would do at the elite schools,” Hoxby argues. The reason, she feels, is that they’re not surrounded by other top-performing students. There is one group of students that even Krueger and Dale found benefited significantly from attending elite schools: those from disadvantaged backgrounds. Kids from poor families seem to profit from exposure to Amherst or Northwestern much more than kids from well-off families. 
Why? One possible answer is that they learn sociological cues and customs to which they have not been exposed before. In his 2003 book, Limbo, Alfred Lubrano, the son of a bricklayer, analyzed what happens when people from working-class backgrounds enter the white-collar culture. Part of their socialization, Lubrano wrote, is learning to act dispassionate and outwardly composed at all times, regardless of how they might feel inside. Students from well-off communities generally arrive at college already trained to masquerade as calm. Students from disadvantaged backgrounds may benefit from exposure to this way of carrying oneself—a trait that may be particularly in evidence at the top colleges. It’s understandable that so many high schoolers and their nervous parents are preoccupied with the idea of getting into an elite college. The teen years are a series of tests: of scholastic success, of fitting in, of prowess at throwing and catching balls, of skill at pleasing adults. These tests seem to culminate in a be-all-and-end-all judgment about the first eighteen years of a person’s life, and that judgment is made by college admissions officers. The day college acceptance letters arrive is to teens the moment of truth: they learn what the adult world really thinks of them, and receive an omen of whether or not their lives will be successful. Of course, grown-up land is full of Yale graduates who are unhappy failures and Georgia Tech grads who run big organizations or have a great sense of well-being. But teens can’t be expected to understand this. All they can be sure of is that colleges will accept or reject them, and it’s like being accepted or rejected for a date—only much more intense, and their parents know all the details. Surely it is impossible to do away with the trials of the college-application process altogether. But college admissions would be less nerve-racking, and hang less ominously over the high school years, if it were better understood that a large number of colleges and universities can now provide students with an excellent education, sending them onward to healthy incomes and appealing careers. Harvard is marvelous, but you don’t have to go there to get your foot in the door of life. |
a3311bb5ae68fe619a94689746c67ad2 | https://www.brookings.edu/articles/why-the-turkey-krg-alliance-works-for-now/ | Why the Turkey-KRG alliance works, for now | Why the Turkey-KRG alliance works, for now Turkey's relationship with Iraq's Kurds has not been without its problems. Not so long ago Ankara refused to deal with the Kurdistan Regional Government (KRG) and opposed its efforts to consolidate control in disputed oil-rich areas such as Kirkuk. Moreover, the Kurdistan Workers' Party (PKK) has complicated the KRG's efforts to strengthen political and economic ties with its neighbours, all of whom have historically combated Kurdish rebellions within their own territories. Times have certainly changed. There have been landmark visits and exchanges between senior-ranking Turkish and KRG officials as well as a rapid increase in trade that has seen Turkish companies flood Kurdistan's market and the building of a pipeline that enables the KRG to independently export its hydrocarbons to international markets. The shift in Turkish foreign policy towards Erbil is happening despite concerns about the PKK's rebellion and Turkey's major economic and political interests in Baghdad. It is a response to the violent instability in Iraq and the growing Iranian influence. While Turkey has significant political and economic ties with the rest of Iraq, Ankara – like others in the Arab and Islamic world – may believe Baghdad's Shia ruling establishment has shifted too far into Iran's orbit of influence and, during the course of the campaign against the Islamic State of Iraq and the Levant (ISIL) group, Tehran has further expanded its influence in the country. In fact, Turkey may double down on its cooperation with the KRG because of the uncertainties and challenges that may follow in the post-ISIL period, both in Iraq and Syria. The political and administrative structures that emerge after ISIL's eventual defeat have as much relevance for Turkey and its geopolitical rivals as they do for Iraq's own political actors. Forging close ties with Iraqi Kurdistan, based around mutual security, economic and strategic interests, after the Mosul operation enables Turkey to maintain a buffer against Iranian influence. The encroachment of Shia militias into Tal Afar – some of which include Shia Turkmen who were expelled by ISIL when it seized northern Iraq in 2014 – brings the militias closer to disputed and strategically important areas such as Sinjar, where the PKK and Massoud Barzani's Kurdistan Democratic Party (KDP) have a significant presence. This has implications for Turkey's security interests in the longer run. The presence of Shia militias in the vicinity of places such as Sinjar will inflame tensions between the militias and the KRG as well as embolden the PKK, which has attempted to bring Sinjar under its own administrative control and whose fighters in Sinjar are financially supported by Baghdad. Ankara fears that these recently emerged alliances and security structures could become stronger, especially because of the strong nexus that connects the PKK, Shia militias and Iran. Turkey has, therefore, continued its troop presence in northern Iraq, despite threats from Baghdad and Shia militias. Moreover, beyond the KRG, the PKK undermines Ankara's other allies in Iraq, such as Atheel Nujayfi, the former governor of Nineveh province, whose 6,500-strong militia has been trained by Ankara. 
Turkish influence in Iraqi Kurdistan has largely been framed as a Turkey-KDP project aimed at securing the KDP's position as the dominant party there, to the detriment of its main rival, the Patriotic Union of Kurdistan (PUK), which enjoys closer ties to the Democratic Union Party (PYD) and the People's Protection Units (YPG), PKK-leaning groups in Syria. But the situation is much more complicated than that. It is commonly believed that Turkey's strong ties with the KRG only recently emerged, when, in fact, the relationship strengthened in the 1990s. Turkey played an important role in alleviating the humanitarian crisis that followed the first Gulf War. The ensuing western-backed no-fly zone and the creation of an autonomous Kurdish region further created space for commercial ties, even if, politically and publicly, relations remained tense and constrained. Today's political cooperation between Ankara and the KRG is an extension of these ties. Contrary to conventional wisdom, in the past both the KDP and the PUK worked alongside Turkey against the PKK, whose Marxist-orientated vision of Kurdish nationalism runs contrary to their social-democratic and liberal outlook. In the 1990s, the PUK founder and former president of Iraq, Jalal Talabani, attacked the PKK for "working to abort our democratic experiment and remove our parliament". As recently as 2009, Talabani asked the group to leave Iraqi Kurdistan. Both Barzani and Talabani were provided with Turkish passports, which allowed them to travel freely outside of Iraq and Turkey, and were even allowed to establish official representation in Ankara. Over the past two years, the spillover from the Syria conflict has complicated Turkey's ties to the region's Kurds, owing to its own domestic confrontation with the PKK but also the ascendancy in Syria of the PYD and the YPG. The Syria conflict has heightened Kurdish nationalistic sentiments and has provided the opportunity for greater Kurdish autonomy throughout the region. This has also made it more difficult for the KRG to balance domestic Kurdish sentiment with its dependency on Ankara. Yet Turkey is still the only reliable ally for Iraq's Kurds. In a region that can no longer count on United States engagement, Turkey may be the least worst option for the KRG. Iran also enjoys close ties to the Kurds and has historically provided them with a base from which to fight the former Baath regime, but it cannot offer what Turkey can. Its nuclear ambitions, support for groups identified as terrorist organisations, support for anti-Kurdish Shia militias (who have clashed with PUK Peshmerga forces) and the anti-western rhetoric of its leadership have undermined its international standing and make it a less reliable, less predictable and economically weaker ally. Partnering with Turkey – a major military power, a NATO member and historic western ally with a resilient economy – provides the Kurdistan Region with its own "buffer" against the atomised security structures in Iraq and the rest of the region. The state and non-state actors that threaten the Kurdistan region in the current political and security environment will think twice before challenging Turkey's security interests in Iraq and, for now, those interests overlap with the KRG's own. The KRG will also benefit from increased foreign investment, technological expertise and access to the European markets. 
Continued interaction could also help to alleviate Turkey’s tensions with other Kurdish groups in the region and, potentially, restart the peace process with the PKK. |
4614ef106356774f8778ef08b81a8e5e | https://www.brookings.edu/articles/workers-rights-labor-standards-and-global-trade/ | Workers' Rights: Labor standards and global trade | Workers' Rights: Labor standards and global trade Of all the debates surrounding globalization, one of the most contentious involves trade and workers' rights. Proponents of workers' rights argue that trading nations should be held to strict labor standards—and they offer two quite different justifications for their view. The first is a moral argument whose premise is that many labor standards, such as freedom of association and the prohibition of forced labor, protect basic human rights. Foreign nations that wish to be granted free access to the world's biggest and richest markets should be required to observe fundamental human values, including labor rights. In short, the lure of market access to the United States and the European Union should be used to expand the domain of human rights. The key consideration here is the efficacy of labor standards policies. Will they improve human rights among would-be trading partners? Or will they slow progress toward human rights by keeping politically powerless workers mired in poverty? Some countries, including China, might reject otherwise appealing trade deals that contain enforceable labor standards. By insisting on tough labor standards, the wealthy democracies could lay claim to the moral high ground. But they might have to forgo a trade pact that could help their own producers and consumers while boosting the incomes and political power of impoverished Chinese workers. The second argument for strict labor standards stresses not the welfare of poor workers, but simple economic self-interest. A trading partner that fails to enforce basic protections for its workers can gain an unfair trade advantage, boosting its market competitiveness against countries with stronger labor safeguards. Including labor standards in trade deals can encourage countries in a free trade zone to maintain worker protections rather than abandoning them in a race to the bottom. If each country must observe a common set of minimum standards, member countries can offer and enforce worker protections at a more nearly optimal level. This second argument, unlike the first, can be assessed with economic theory and evidence. Evaluating these arguments requires answering three questions. First, what labor standards are important to U.S. trade and foreign policy? Second, how can labor standards, once negotiated, be enforced? Finally, does it make sense to insist that our trade partners adhere to a common set of core labor standards and, if so, which standards? Which Labor Standards Matter Most? Although the international community agrees broadly on the need to respect labor standards, agreement does not extend to what those standards should be. Forced labor and slavery are almost universally regarded as repugnant, but other labor safeguards thought vital in the world's richest countries are not widely observed elsewhere. The International Labor Organization, created by the Treaty of Versailles after World War I, has published labor standards in dozens of areas, but it has identified eight essential core standards, most of which refer to basic human rights. Of the 175 ILO member countries, overwhelming majorities have ratified most of the eight standards. More than 150 have ratified the four that address forced labor and discrimination in employment and wages. 
Washington has ratified just two standards, one abolishing forced labor and the other eliminating the worst forms of child labor, placing the United States in the company of only eight other ILO member countries, including China, Myanmar, and Oman. Many proponents of labor standards would expand the core list of ILO protections to cover workplace safety, working conditions, and wages. The U.S. Trade Act of 1974 defines “internationally recognized worker rights” to include “acceptable conditions of work with respect to minimum wages, hours of work, and occupational safety and health.” The University of Michigan, for example, obliges producers of goods bearing its insignia to respect the core ILO standards and also requires them to pay minimum wages and to offer a “safe and healthy working environment.” The labor standards that might be covered by a trade agreement fall along a continuum from those that focus on basic human rights to those that stress working conditions and pay. On the whole, the case for the former is more persuasive. Insisting that other nations respect workers’ right of free association reflects our moral view that this right is fundamental to human dignity. Workers may also have a “right” to a safe and healthy workplace, but that right comes at some cost to productive efficiency. Insisting that other nations adopt American standards for a safe and healthy workplace means that they must also adopt our view of the appropriate trade-off between health and safety, on the one hand, and productive efficiency, on the other. Enforcing Labor Standards: The Status Quo The principal global institution enforcing labor standards today is the ILO, which reports regularly and periodically on the steps each nation takes to implement the standards it has ratified. If complaints are lodged, the ILO investigates the alleged violation and publicizes its findings. Even if a member nation has not ratified the freedom-of-association conventions, the ILO may investigate alleged violations of those conventions. The ILO cannot, however, authorize retaliatory trade measures or sanctions. Instead it provides technical assistance to member countries to bring their labor laws and enforcement procedures into compliance. Although the work of the ILO has been recognized with a Nobel Peace Prize, many labor sympathizers are skeptical that it can protect workers using its existing enforcement tools since they impose little penalty besides bad publicity. Putting Teeth into Standards Enforcement Labor advocates favor strengthening enforcement by expanding the role of the World Trade Organization or using bilateral trade agreements. WTO rules do not apply to labor standards; they govern members’ treatment of the goods, services, and intellectual property of other member countries. In those areas the WTO has developed elaborate dispute settlement procedures to investigate complaints. If a WTO panel finds that a member country has violated WTO rules, it may allow the complaining country to retaliate. At the 1996 WTO ministerial meeting, developing countries strongly resisted efforts to allow the WTO to enforce labor standards, and the meeting concluded by affirming the ILO’s role in determining and dealing with labor standards. Similarly, when President Clinton and some EU leaders tried to bring workers’ rights into the next round of multilateral trade negotiations at the 1999 WTO ministerial meeting in Seattle, developing countries rejected the initiative. 
In a recent free trade pact, Jordan and the United States agreed to protect core ILO workers’ rights. They also spelled out how to resolve disputes over labor standards: if one country weakens its labor laws or fails to bring its laws or enforcement into compliance with the ILO core standards, the other may take appropriate measures, including withdrawal of trade benefits. The AFL-CIO has endorsed the labor provisions of the Jordan trade pact, while the U.S. Chamber of Commerce has denounced them. The Chamber favors free trade agreements, and it fears that most countries will resist including enforceable labor standards in any new agreement. This view is almost certainly correct, at least in the developing world. Practical Difficulties Some Americans may fear that including enforceable labor standards in trade agreements will open the United States to charges that it fails to enforce ILO core standards, exposing it to possible trade penalties. But U.S. civil rights and labor laws already contain the fundamental protections demanded by the ILO conventions. Citizens in developing countries might be less confident that their laws and enforcement procedures will meet the tests implied by the ILO conventions, especially as construed by observers from affluent countries. Interpretations devised in the drawing rooms of Paris or the recreation rooms of suburban Washington might seem out of touch with conditions in countries where half or more of the population lives on less than $2 a day. Two of the most troublesome ILO standards involve child labor. Rich countries—very sensibly—restrict children’s participation in the job market so that youngsters can attend school and prepare to become workers. But in poor countries, where children’s earnings are a crucial family resource and schooling may be unavailable, the restrictions may not be appropriate. Of course, children in poor countries deserve protection and education too, but the standard of protection and the resources available for schooling will be far below those in a wealthy country. A standard of protection that is appropriate in rich countries can impose excessive burdens on poor ones. Third-world leaders fear, understandably, that including enforceable labor standards in trade treaties will expose their countries to constant challenge in the WTO—and that the standards will be used mainly to protect workers and businesses in developed countries from competition from third-world workers. AFL-CIO President John Sweeney denies that enforcing labor standards can have a protectionist impact. The ILO standards, he notes, are designed to protect the interests of workers in low-income as well as high-income countries. The WTO and United States strongly defend intellectual property (IP) rights and enforce trade penalties when developing countries violate those rights. Extending the same protections to workers’ rights, he reasons, cannot be protectionist. While it is easy to sympathize with Sweeney’s view, there is a big difference between worker rights in another country and the IP rights of a country’s own citizens. If Burma denies its workers the right to organize independent unions, its actions are deplorable but do not directly injure me. If Burma allows publishers and recording companies to reproduce my copyrighted books and songs without compensating me, the theft of my creative efforts injures me directly. It is hardly surprising that U.S. 
voters would insist on remedies for injuries to themselves before fixing the problems of workers overseas. Sweeney may object that the injury to Burmese workers from human rights abuses is much more serious than the monetary losses from copyright infringement suffered by a handful of artists, inventors, and U.S. corporations. And he may well be right. But American artists, inventors, and corporate shareholders can vote in U.S. elections; Burmese workers cannot. How to Assess WTO Penalties? If the WTO is to be used to assess penalties against countries violating international labor norms, its member countries must devise a new way to assign penalties for violations. Under current procedures, a country found to have a valid trade complaint may retaliate against the offending country by withholding a trade benefit roughly equivalent to the benefit denied it by the offender as a result of the violation of WTO rules. It is not obvious how to calculate the penalty when the violation involves a labor standard. There the injury has been suffered by workers in the offending country, and residents of the complaining country may have enjoyed a net benefit. Suppose, for example, the United States accuses another country of employing underage children in its apparel industry. The violation increases the offending country’s supply of low-wage workers, thus reducing producers’ wage costs and the prices charged to domestic and overseas consumers. The adult workers in the offending country have clearly suffered injury, as have the children if their work has deprived them of schooling that was otherwise available. How did the violation affect Americans? U.S. apparel workers probably lost wages and jobs. But their losses are counterbalanced by gains to U.S. consumers, who bought clothing more cheaply because of child labor in the offending country. Since all American workers, including those in the apparel industry, are themselves consumers, it is not clear whether the violation injured U.S. workers as a class. Last year apparel imports into the United States exceeded exports by about $55 billion. If the use of child labor overseas cut the cost of imports, Americans spent less for clothing than they otherwise would have. While most Americans deplore child labor, at home or abroad, it is hard to see how an overseas violation of the child labor standard has injured them. Nor is the United States likely to weaken its own child labor laws because it has benefited from the availability of cheaper imported clothes. Private Sanctions As a final option for enforcing labor standards, American consumers can apply their own private sanctions. Anyone who finds child labor or forced labor reprehensible can refuse to buy products made in countries that tolerate those practices. The ILO could push consumers into action by publishing information about offending countries and their violations. It could also publicize any country’s refusal to cooperate with ILO investigations. If voters want more information about imported goods and services from countries that comply with ILO standards, their own national governments can provide it. Washington can help American consumers increase pressure on offending countries by requiring sellers to label products with the country of origin. It could also encourage or require sellers to identify goods and services produced in countries that fully comply with ILO’s core labor standards. Should Uncle Sam Enforce Labor Standards? 
The case for enforcing labor standards is strongest when it involves basic human rights, such as freedom of association or freedom from slavery, and when it rests on moral grounds rather than economic calculation. If Washington wants to require its trading partners to respect basic human rights, it must be prepared to accept the real costs it will thereby impose on its own producers and consumers—and occasionally on the victims overseas whom it is trying to help. Economic theory and evidence may be useful in calculating the potential cost of trade sanctions to the United States and its trading partners. They are not helpful in determining whether the potential gains to human rights are worth the income sacrifice. Nor is social science very informative about whether a policy of trade sanctions is likely to improve victims' rights. The case for requiring U.S. trade partners to respect international labor standards is least compelling when it involves the terms and conditions of employment. If a country respects ILO core standards, then workers will be able to negotiate for the best combination of pay, fringe benefits, work hours, and workplace amenities that their level of productivity allows. If we insist that the resulting compensation package meet minimum international standards, we are substituting our own judgment for that of the affected workers and their employers. Readers may object, rightly, that the weak bargaining position of workers in poor countries makes it unlikely that their negotiations with employers will secure decent compensation and safe working conditions. But their weak bargaining position is linked to their low productivity and skills. Today U.S. and European labor standards are much higher, and labor regulation is enforced more rigorously, than was the case 50 years ago. The improvement is closely associated with workers' increased skill and productivity. Even in the developing world, the better-off countries are more likely than the poorest to conform to ILO labor standards. In countries with per capita income of $500 a year or less, 30 to 60 percent of children between the ages of 10 and 14 work. In countries with per capita income of $500 to $1,000, just 10 to 30 percent of youngsters work. As productivity improves, so too will the bargaining position and wages of industrial workers. If history is any guide, national labor standards will improve as well. The most reliable way to improve the condition of third-world workers is to boost their average productivity. Concerned voters in rich countries can help make this happen by pressing to open up their own markets to third-world products. Many low-income countries have a comparative advantage in manufacturing apparel, textiles, and footwear and in producing staple foods, fruits, and vegetables. Rich countries often impose high tariffs or quotas on these products, and nearly all provide generous subsidies to their farmers—thus denying third-world producers and farmers access to a huge potential market. The World Bank estimates that tariff and nontariff barriers, together with subsidies lavished on U.S. and European farmers, cost third-world countries more in lost trade than they get in foreign aid. If we insist that developing countries immediately meet the labor standards that the richest countries achieved only gradually, we will keep some of them out of the world's best markets. 
The poor countries that agree to abide by ILO standards will occasionally be challenged—sometimes by representatives of rich countries more intent on protecting their own workers from “unfair” overseas competition than on improving the lot of third-world workers. While the moral case for requiring our trading partners to respect labor rights is compelling, the case for removing trade barriers that limit the product markets and incomes of the world’s poorest workers is just as powerful. |
32f39bb9f5a287a7c79a56a357935dd8 | https://www.brookings.edu/articles/would-the-saudis-go-nuclear/ | Would the Saudis Go Nuclear? | Would the Saudis Go Nuclear? In the wake of September 11, 2001, and last week’s terrorist bombings in Riyadh, America’s relationship with Saudi Arabia has sunk to new lows. On the right, neoconservatives rail against Saudi support for extremist Islam and fume over the kingdom’s refusal to allow American airmen to fly sorties from Prince Sultan Air Base during the Afghan war. The conservative magazine Commentary recently ran an article titled “Our Enemies, the Saudis,” while The Wall Street Journal’s editorial page has blasted the Saudis for funneling money to militant Islamists. For their part, liberals point to Saudi Arabia’s lack of political freedom as evidence that America’s realpolitik-based Middle East policy is morally corrupt: This month, The Washington Post editorial page took the Bush administration to task for not citing Saudi Arabia as a violator of human rights and religious freedoms. In fact, hawks and doves who disagree about virtually everything else agree that the United States would be better off without its Saudi “ally.” “I think we’re heading for a divorce,” Youssef Ibrahim, former head of a Council on Foreign Relations task force on U.S.-Saudi relations, told BusinessWeek last year. Realists counter that the United States needs Saudi oil and Saudi military bases. But there’s a less obvious argument for making sure the long-standing Washington-Riyadh partnership doesn’t fracture: If it does, the Saudis might well go nuclear. Saudi Arabia could develop a nuclear arsenal relatively quickly. In the late ’80s, Riyadh secretly purchased between 50 and 60 CSS-2 missiles from China. The missiles were advanced, each with a range of up to 3,500 kilometers and a payload capacity of up to 2,500 kilograms. What concerned observers, though, was not so much these impressive capabilities but rather the missiles’ dismal accuracy. Mated to a conventional warhead, with a destructive radius of at most tens of meters, these CSS-2 missiles would be useless—their explosives would miss the target. But the CSS-2 is perfect for delivering a nuclear weapon. The missile itself may miss by a couple of kilometers, but, if the bomb’s destructive radius is roughly as large, it will still destroy the target. The CSS-2 purchase, analysts reasoned, was an indication that the Saudis were at least hedging in the nuclear direction. July 1994 brought more news of Saudi interest in nuclear weapons when defector Mohammed Al Khilewi, a former diplomat in the Saudi U.N. mission, told London’s Sunday Times that, between 1985 and 1990, Saudi Arabia had actively aided Iraq’s nuclear weapons program, both financially and technologically, in return for a share of the program’s product. Though Khilewi produced letters supporting his claim, no one has publicly corroborated his accusations. Still, the episode was unsettling. Then, in July 1999, The New York Times reported that Saudi Defense Minister Prince Sultan bin Abdulaziz Al Saud had recently visited sensitive Pakistani nuclear weapons sites. Prince Sultan toured the Kahuta facility where Pakistan produced enriched uranium for nuclear bombs—and which, at the same time, was allegedly supplying materiel and expertise to the North Korean nuclear program. The Saudis refused to explain the prince’s visit. If Saudi Arabia chose the nuclear path, it would most likely exploit this Pakistani connection. 
Alternatively, it could go to North Korea or even to China, which has sold the Saudis missiles in the past. Most likely, as Richard L. Russell, a Saudi specialist at National Defense University, argued two years ago in the journal Survival, the Saudis would attempt to purchase complete warheads rather than build an extensive weapons-production infrastructure. Saudi Arabia saw Israel destroy Iraq’s Osirak reactor in 1981, and it is familiar with America’s 1994 threat to bomb North Korea’s reactor and reprocessing facility at Yongbyon. As a result, it would probably conclude that any large nuclear infrastructure might be preemptively destroyed. At the same time, Riyadh probably realizes that America’s current hesitation to attack North Korea stems at least in part from the fact that North Korea likely already has one or two complete warheads, which American forces would have no hope of destroying in a precision strike. By buying ready-made warheads, Riyadh would make a preemptive attack less likely. And, unlike recent proliferators such as North Korea, the Saudis have the money to do so. Some analysts would argue that Saudi Arabia could enhance its security by undertaking a conventional, rather than a nuclear, buildup. But, as Middle East expert F. Gregory Gause III argues in a new Brookings Institution paper, Saudi Arabia’s military is “weak in part by design, to prevent the internal threat of a military coup.” Until the Saudi government is widely legitimate in the eyes of its own people—and that day seems a long way off—it is unlikely to build a large conventional military. By contrast, Saudi leaders might favor a nuclear arsenal, believing it enhances their security against external threats while being fairly useless to coup-plotters. Why would Riyadh want nukes now? Because of a potentially dangerous confluence of events. The rapidly progressing nuclear program of traditional rival Iran has no doubt spooked the Saudi leadership. Last fall, dissidents revealed the existence of a covert Iranian uranium-enrichment program, forcing analysts to drastically revise down their estimates of how long it might take Iran to obtain nuclear weapons. Reacting to that development, Patrick Clawson, deputy director of the Washington Institute for Near East Policy, recently wrote that “Saudi Arabia is the state most likely to proliferate in response to an Iranian nuclear threat” because, he argued, the Saudis fear a nuclear-armed Iran could have designs on Saudi Arabia, a Sunni monarchy that is home to a large number of oppressed Shia. After all, Tehran has for years allegedly supported Shia terrorist groups operating in Saudi Arabia and was blamed by many analysts for the 1996 Khobar Towers bombing. Holding back the Saudi nuclear program, of course, has been the kingdom’s relationship with the United States. Though America has never signed a formal treaty with Riyadh, since World War II the United States has made clear by its actions—most notably, by protecting Saudi Arabia during the 1991 Gulf war—and by informal guarantees given to Saudi leaders by American officials that it will protect the monarchy from outside threats. Since the September 11 attacks, though, that relationship has grown increasingly frail. 
When a RAND analyst last summer told the Defense Policy Board, then chaired by Richard Perle, that Saudi Arabia was "the kernel of evil, the prime mover, the most dangerous opponent" in the Middle East, he not only raised hackles in Riyadh, he reflected the opinion of many close to the Bush administration. R. James Woolsey, former CIA director and White House confidant, was even more emphatic in a speech last November, referring to "the barbarics [sic], the Saudi royal family." The recent decision by Washington to pull most of its forces out of Saudi Arabia, reducing its deployment from 5,000 to 400 personnel and moving its operations to Qatar, has added facts on the ground to the rhetorical barrage. This recent decline in U.S.-Saudi relations can hardly make the Saudi royal family feel secure. Suddenly removing the U.S. security blanket just as regional rivalries are intensifying could push the Saudis into the nuclear club. That's a scary prospect, particularly when you consider the possibility of Islamists overthrowing the monarchy. Instead, the United States should be careful to maintain Saudi Arabia's confidence even as the two nations inevitably drift apart. The United States might even extend an explicit security guarantee to the Saudis, the kind of formal treaty it gave Europe to keep it non-nuclear during the cold war—and the kind of formal arrangement Washington and Riyadh have never signed before. Such a formal deal could raise anti-American sentiment in the desert kingdom. But the alternative might be worse. |
8dd1e81340fe1008b49c7c119348d8fe | https://www.brookings.edu/blog/africa-in-focus/2013/11/13/sexual-and-gender-based-violence-in-the-democratic-republic-of-the-congo-opportunities-for-progress-as-m23-disarms/ | Sexual and Gender-based Violence in the Democratic Republic of the Congo: Opportunities for Progress as M23 Disarms? | Sexual and Gender-based Violence in the Democratic Republic of the Congo: Opportunities for Progress as M23 Disarms? Years of war in the eastern Democratic Republic of Congo (DRC) have brought a host of tragedies, large and small. Amongst the greatest tragedies is undoubtedly that the DRC has come to be known as the "rape capital of the world." At some points in the conflict, an estimated 48 women were raped every hour, by militiamen but also by notoriously undisciplined Congolese soldiers. Rampant sexual and gender-based violence has long been a driver of the country's displacement crisis. An estimated 2.7 million Congolese are displaced within their own country, and one million were uprooted last year alone. In a cruel irony, risk of sexual and gender-based violence is not only a driver of displacement but one of its consequences: Internally displaced persons (IDPs) who are separated from their families and communities often face increased risk of violent attacks, including rape. The recent defeat of the M23 rebel group has raised cautious hopes of peace and an end to the systematic sexual and gender-based violence that has so brutally characterized the war in the DRC. A recent event at the Brookings Institution addressed the challenge of preventing and responding to sexual and gender-based violence in conflict contexts, including in the DRC. On October 21, the Brookings-LSE Project on Internal Displacement and IMA World Health hosted Dr. Denis Mukwege, the renowned Congolese physician and human rights advocate who founded the Panzi Hospital, an institution dedicated to assisting the survivors of rape. Since establishing the Panzi Hospital in South Kivu in 1998, Dr. Mukwege and his colleagues have performed more than 30,000 surgeries to address complications stemming from rape. Dr. Mukwege's tireless efforts to speak out against rape in the DRC have earned him a Nobel Peace Prize nomination, but also forced him to flee his own home after an assassination attempt. Despite the danger, Dr. Mukwege returned to the eastern Congo to continue his work, at the urging of the women he has assisted. In his speech at Brookings, Dr. Mukwege stressed that the roots of wartime rape lie in the gender inequalities that characterize societies not only in Africa but around the world; conflict merely amplifies these pre-existing inequalities. Education is needed to eradicate these inequalities and to increase understanding of the nature of rape itself as a crime. Rape is not, Dr. Mukwege emphasized, a sexual relationship. It is an effort to negate another human being and destroy their equality. This is a crime that echoes across generations, and creates a range of direct and indirect victims whose needs and rights are often forgotten, but must be addressed. These indirect victims include families who see their loved ones being raped, children born of rape, and whole communities in which the social fabric is destroyed because of systematic rape. Advocating a holistic socio-economic approach that enables the reintegration of survivors into society and the transformation of victims into community leaders, Dr.
Mukwege underlined the need for psychological support for victims to restore their "strength so that they can fight for their rights." The struggle to prevent and respond to sexual and gender-based violence in the DRC, across the African continent and around the world, depends in large part on the question of impunity. Dr. Mukwege underlined the critical role of the rule of law in deterring violence against women. "In the DRC," Dr. Mukwege said, "we have probably one of the best laws in the world that was written in 2006 to protect women. But how many women know about this law? How many men know about this law? Actually, it's not only a question of them knowing it, but the application." Crucially, the agreement negotiated in Uganda on the disarmament of M23 states that the rebel leaders will be held accountable in court for the major violations for which they are responsible. In a break from the past, there is to be no amnesty. Efforts to ensure accountability must tackle the interlinked questions of responsibility for sexual and gender-based violence and the country's displacement crisis. Ratifying the African Union Convention on the Protection and Assistance of Internally Displaced Persons in Africa (Kampala Convention), an important new human rights treaty, would be an important step in this direction. The DRC is currently a signatory to the Convention, which explicitly prohibits sexual and gender-based violence against IDPs and requires states to prevent it. The Convention also calls on states to "take special measures to protect and provide for the reproductive and sexual health of internally displaced women as well as appropriate psycho-social support for victims of sexual and other related abuses." Building on the momentum created by the defeat of M23, the government of the DRC should ratify the Kampala Convention and develop a concerted plan to implement the agreement, particularly those provisions that seek to prevent and respond to sexual and gender-based violence. At the same time, capacity to implement the domestic laws Dr. Mukwege highlighted must be strengthened. The scourge of sexual and gender-based violence in the DRC will not disappear with the defeat of M23—rapes and other attacks continue to be perpetrated by members of other militias and by Congolese soldiers, as well as in domestic contexts. Nonetheless, the dismantling of M23 must be capitalized on as an opportunity to redouble the fight against sexual violence in conflict. |
dcf13f6064486a336a818f96ff2760cb | https://www.brookings.edu/blog/africa-in-focus/2014/01/14/japan-in-africa-a-rising-sun/ | Japan in Africa: A Rising Sun? | Japan in Africa: A Rising Sun? Japanese Prime Minister Shinzo Abe completed his African tour this Monday in Ethiopia after visits to Côte d'Ivoire and Mozambique. His visit was the first tour of Africa by a Japanese leader in eight years and the first visit to a francophone West African country. Thankfully, this low frequency of visits to the continent by Japanese leaders does not do justice to Japan's involvement in Africa. According to official aid statistics (which exclude China), Japan is the fifth largest bilateral official development assistance (ODA) donor to Africa after the U.S., France, the U.K. and Germany. Japanese ODA to the continent averaged about $1.8 billion per year in 2008-2012—double its 2003-2007 level (see Figure 1). Figure 1. Source: Japan's Official Development Assistance White Paper 2012. For the past 20 years, Japan's main roadmap for its assistance to Africa has been charted by the TICAD (Tokyo International Conference on African Development). The TICAD is a global forum between Japanese and African heads of state and is held every five years. It is co-organized with the U.N., the UNDP, the World Bank and the African Union Commission. In principle, the TICAD advocates "Africa's ownership" of its development and the "partnership" between Africa and the global community. It also serves as an accountability framework. Africans have also become familiar with the JICA acronym (Japan International Cooperation Agency) and may have seen young Japanese men and women from the JOCV (Japanese Overseas Cooperation Volunteers). Up until last year, most of Japan's focus on Africa under TICAD IV was on traditional aid targets (infrastructure, agriculture, water and sanitation, education, and health, as well as peacekeeping operations: Japan has provided 400 self-defense forces personnel as part of the U.N. mission in South Sudan). But Japan's involvement in Africa is now at a crossroads. TICAD V, which was held in Yokohama in June 2013, added a new element: private sector involvement. As Prime Minister Abe put it in his opening address at TICAD V, "What Africa needs now is private sector investment, and public-private partnership leverages that investment." In Yokohama, the prime minister committed to support African growth over the next five years, through not only $32 billion in ODA but also $16 billion of "other public and private resources." He also mentioned $2 billion of trade insurance underwriting. These funds will be targeted to areas that were identified in consultation with African countries, including infrastructure, capacity building, health and agriculture. So Prime Minister Abe's recent African trip is in line with TICAD V. It is therefore not surprising that business leaders joined the trip and that $570 million in loans to gas-rich Mozambique were announced. With this in mind, it is encouraging to see that two of the stops in the Japanese prime minister's visit took into account regional integration on the continent. In Addis Ababa, the prime minister gave a speech at the African Union headquarters.
His intervention was mostly focused on the need to maintain peace and security on the continent, and he pledged about $320 million for conflict and disaster response, including $25 million to address the crisis in South Sudan and $3 million to address the one in the Central African Republic. Earlier, in Abidjan, Côte d'Ivoire, Abe met heads of state and government of the Economic Community of West African States (ECOWAS). In short, Prime Minister Abe's visit heralds a new type of relationship between Japan and Africa. Japanese engagement with African countries will involve the private sector much more than previously. It is up to African policymakers to seize this opportunity to meet the continent's transformational agenda. |
db454bfc7f851206082092fcbfce4282 | https://www.brookings.edu/blog/africa-in-focus/2014/03/17/ending-child-marriage-should-be-a-development-priority-for-africas-grassroots-in-2014-2/ | Ending Child Marriage Should be a Development Priority for Africa's Grassroots in 2014 | Ending Child Marriage Should be a Development Priority for Africa's Grassroots in 2014 The year 2014 holds yet another chance to eliminate the harmful traditional practice of child marriage in a long struggle that has consumed the attention of gender activists, public health practitioners and human rights advocates over the past 50 years. As youth employment fails to keep pace with the continent's teeming population growth rates and as rural economies collapse in the face of armed conflict and climate displacement, the problem of what to do with the girls of Africa is as much a central concern for the poor village man as it is for the global community troubled by human security. The United Nations Population Fund (UNFPA) estimates that if current population growth trends continue, one in four of the world's adolescent girls will live in sub-Saharan Africa by 2030, and that the total number of adolescent mothers in this zone will increase from 10.1 million in 2010 to 16.4 million in 2030. Most of Africa's adolescent mothers are child brides married before the age of 18 years—the age of maturity, designated in United Nations Conventions such as the Convention on the Rights of the Child and the Convention on the Elimination of All Forms of Discrimination Against Women. Child marriage, often referred to as forced or early marriage, is a formal marriage or an informal union taking place in the shadows of society—as it is underreported and largely practiced under customary and religious laws. Child marriage is shaped by custom, religion and poverty and exacerbated by ethno-religious crisis, conflict and environmental disasters. Girls are denied the right to education, made to toil in domestic servitude and live in physical seclusion in their husbands' marital homes. Child brides everywhere are disempowered, vulnerable and exploited. Implications go beyond the well-being of the child: Child marriage negatively impacts socio-economic development of regions and nations as well as global stability. Gordon Brown, in a 2013 review of child marriage, observes that infants born to child mothers under the age of 18 are 60 percent more likely to die in the first year of life than infants born to mothers 19 years and older. World Bank studies by Nguyen and Wodon (2012) note that for each year of early marriage, the probability of literacy for the girl is reduced by 5.6 percentage points, and the probability of secondary school completion declines by 6.5 percentage points. In Africa, where the median age of marriage is younger, girls have even less access to educational, family planning and obstetric care services. In addition, the millions of girls in Africa who marry before age 18 are more likely to have few or no years of schooling, reside in poorer and rural areas, be victims of physical or sexual violence, have their right to free movement restricted, and be denied access to health and social services. Although the practice is on the decline worldwide and in North Africa in particular, in sub-Saharan Africa child marriage continues to be a dominant form of customary union as the options of school and youth employment remain unattainable in the largely agrarian economies of this region.
While the absolute numbers and prevalence of child marriages are highest in South Asia, many experts agree that the intensity, pattern and context of child marriage in sub-Saharan Africa in general and West Africa in particular make this phenomenon most severe and detrimental to the lives of girls. The mean spousal age difference between husband and wife is five years greater in West Africa than in East and Southern Asia; the age of first pregnancy and the total number of adolescent pregnancies are also higher in Africa than in Asia; and access to and uptake of family planning services are lower in this region. Four of the 10 countries with the highest rates of child marriage worldwide are in West Africa. Three of these countries—Niger (75 percent), Guinea (63 percent) and Mali (55 percent)—have maintained their top spots for over 10 years. In addition, in the Central African Republic, Mozambique, South Sudan, Malawi and the Democratic Republic of the Congo, more than 50 percent of girls are in a marital union before the age of 18 years. Ending child marriage is a global good. It can lessen the burden on countries' health infrastructure and mitigate the human footprint of resource-poor countries in Africa. It reduces human suffering, recognizes human dignity and challenges gender-based discrimination. Ultimately, ending child marriage frees up untapped human resources and enables girls and women to contribute meaningfully to development. Despite numerous initiatives by activists and the global development community, child marriage has failed to go away in many hotspots of the global South, where it remains as persistent as in the pre-independence era. In West Africa—the region identified in Gordon Brown's 2013 review as having the highest incidences of child marriage worldwide—a recent Ford Foundation report found that the median age of first marriage had only increased by a little over one year between 2000 and 2012. Explanations for the persistence of child marriage in Africa revolve around the gender blindness of male policymakers; weakness of child protection and human rights agencies; and the persistence of culture and tradition in the context of state fragility. Against this background, pushing through a policy agenda to end child marriage is a difficult and uphill struggle. Since 2014 is no ordinary year, but one in which the global development priorities are being set for the post-2015 sustainable development era, now is the opportunity to create policies and goals to eliminate this harmful practice within the global framework while taking into account the local needs of the man and woman in the village. As Africa lays out the continent's development objectives for the post-2015 period, high-level consultations at the African Union have resulted in the African Consensus Position Document with three clear policy objectives. However, none of the development priorities focus intently on gender equality or ending child marriage. Rather, gender equality only emerges as one of eight targets within the third development priority (the human development priority), and it links the eradication of female genital mutilation (FGM) with ending child marriage. A recent publication by the Brookings Institution on child marriage in West Africa suggests that such a linkage hinders and complicates the cause of ending child marriage for policymakers.
Moreover, the target of eradicating FGM and child marriage is proposed without reference to social justice for violation of the myriad child marriage laws that exist, and no operationalized plan is put forward for building support for this initiative at the grassroots level. With limited attention accorded to ending child marriage and with regional economic agencies driving priority setting in Africa, there is a real and palpable fear that the target of ending child marriage may not be operationalized. Similarly, while many activists and civil society organizations across the continent have committed to ending child marriage, they find themselves significantly under-resourced, circumscribed in their outreach base, limited in representation at global platforms and weak in their ability to utilize rights and justice in the struggle. African feminist associations have protested the limited attention accorded to female genital mutilation and ending child marriage in the African Consensus Position and have called for a standalone goal on gender equality and women's empowerment with specific indicators around increasing the median age of marriage and a legislative framework guaranteeing justice for girls whose rights have been violated. The position advanced by feminist groups is a simple but convincing one—the continent's crippling youth unemployment problem, skills deficit and overburdened infrastructure cannot be addressed if girls are denied schooling, are coerced into marriage before the age of consent, and are forced into motherhood in childhood. Not surprisingly, recent commitments towards ending child marriage in Africa within the framework of the post-2015 agenda appear to be yet another attempt at laying out a wish list with little meaning for the people of the continent as efforts remain concentrated at the high level of regional institutions. Examples include the 6th February 2014 Outcome Document of the 58th session of the U.N. Commission on the Status of Women on the theme "Challenges and achievements in the implementation of the MDGs for women and girls" and the 8th February 2014 Declaration of the African Union Ministers of Gender and Women's Affairs on the Post-2015 Development Agenda. These documents are not backed up by a framework inspiring consultation from below and contain no mechanism for building commitment from men, women, and community and faith leaders at the grassroots level. For Africa to achieve the seismic shift away from child marriage, more has to be done to join policy initiatives with grassroots concerns. At the policy level, the region's politicians and bureaucrats must translate high-level commitments into real, national-level policies, programs and projects linking the goal of ending child marriage with youth employment, adolescent reproductive health and education policies. Policymakers must also design and implement smart interventions that convince communities of the benefits of girls' education by paying attention to quality and by linking education to income generation, skill acquisition and job creation for youth. It seems clear that robust, high-level initiatives may well fail to be effective if they do not provide an alternative model for communities making tough decisions about the future for girls in the face of insecurity, high youth unemployment, and low-quality but high-cost education. The good news is that the global community shaping the post-2015 agenda is beginning to commit to ending child marriage as a development priority.
The most recent campaign to end this harmful traditional practice, launched by the U.N. secretary-general at the inaugural International Day of the Girl Child, has been picked up in illustrative goal 2 of the High-Level Panel report, and the UNFPA Proposal for the Post-2015 Development Framework has called for an end to child marriage. However, the global community must do more to provide models, training, and platforms for action and inclusive debate around the goal of ending child marriage within the framework of a meaningful post-2015 sustainable development agenda in which girls and women are central. |
9775c182496e10029ceff04f2ce147fe | https://www.brookings.edu/blog/africa-in-focus/2014/05/09/possible-trajectories-of-the-boko-haram-conflict-in-nigeria/ | Possible Trajectories of the Boko Haram Conflict in Nigeria | Possible Trajectories of the Boko Haram Conflict in Nigeria In a continuation of the conflict with Boko Haram in Nigeria, earlier this week suspected Boko Haram fighters killed at least 100 people when they attacked Gamboru village in the country's northeast. The village was being used as a base in the globally prominent search for the missing schoolgirls. And earlier this week, the United States joined other countries (such as France, the United Kingdom and China) in offering to assist the Nigerian government in the search. But where is this violence heading? President Goodluck Jonathan hopes that the current tragedy involving these girls could be "the beginning of the end of terror in Nigeria." However, other critics warn that outside intervention might only fan the flames. In my last blog, I discussed theories for the emergence and radicalization of both Boko Haram and Ansaru. In my next blog, I will discuss possible strategies for containing the conflict in the short and medium term, as well as long-term strategies for neutralizing the two terrorist groups and the threats they pose to the Nigerian state. Below I discuss possible trajectories of the conflict: Can the conflict abate? Will current patterns hold steady? Or will the violence accelerate, and why? It is possible for the conflicts with Boko Haram and Ansaru to abate. However, this prospect does not look good in the short term. In fact, despite intensified military campaigns against the sects and the declaration of a state of emergency in Borno, Yobe and Adamawa states in May 2013, the groups were resilient enough to carry out major attacks, such as the one on an air force base in Maiduguri last fall that left several people dead. As many experts have noted, the recent attacks show that the threats from Boko Haram and Ansaru are growing, not diminishing. If the Boko Haram conflict abates, it may not be before the conclusion of the 2015 presidential election, which President Goodluck Jonathan, a Christian from the minority Ijaw ethnic group, is likely to contest. Though both Boko Haram and Ansaru couch their terrorism in religious revivalism, they are able to tap into social and political discontent within the local population, ensuring that at least some locals can sympathize with their cause. One of the issues appears to be a belief by some people in the north that the decision by President Jonathan to contest the 2011 presidential election flouted the ruling People's Democratic Party's (PDP) zoning and power rotation policy, thereby "cheating" the north out of its "turn" at producing a president. Under the PDP's zoning and power rotation arrangement, the north was supposed to produce the president of the country for eight years—after Olusegun Obasanjo, a Christian Yoruba, had served out two terms of four years each. However, Umaru Yar'Adua, who succeeded Obasanjo as president and was from the north, died in office after only three years, paving the way for the then-vice president, Goodluck Jonathan, to succeed him. If a northern Muslim defeats Jonathan in the 2015 presidential election, some of the popular political discontent in the north on which Boko Haram and Ansaru feed will be removed, likely leading to an abatement in their terrorist attacks.
This was precisely what happened in the restive Niger Delta when Jonathan, a minority from that region, became the vice president of the country. In fact, in the amnesty granted to the Niger Delta militants, then-Vice President Jonathan played a key role in the negotiations with the militants. Under the Jonathan presidency it would be difficult for the militants in that area to renew their violent agitations because they know such actions would be perceived in the local community as undermining the regime of one of their own. The flipside to this scenario is that if Jonathan loses the 2015 election, it could re-ignite militancy in the Niger Delta, with severe adverse implications for crude oil production. It could also send dangerous signals to other sections of the country that they need their own insurgency groups capable of holding the country to ransom if their section is to produce a president. Another factor that could lead to a deceleration in Boko Haram terrorism is if the federal government replaces the civilian governors of the three most affected states (Borno, Yobe and Adamawa) with military administrators. Military governors are more likely to be in a position to slow Boko Haram terrorism in their states. On the other hand, this move could mark the beginning of the truncation of democracy in Nigeria since ambitious military officers could use the opportunity to seize power at the national level. The country's current democratic rule only started in May 1999. Though Nigeria gained her independence from Britain in 1960 and started as a Westminster-style liberal democracy, the military usurped power in 1966 and established a prolonged dictatorship. An attempt to re-establish democracy in the country in 1979 was stopped again in December 1983 when the military once again supplanted the civilian regime and established another dictatorship that lasted until 1999. A third factor that could lead to the abatement of Boko Haram-related violence in the country is the recent kidnapping of over 200 Chibok girls and the collective anger it has mobilized against the sect both within and outside the country. Already, the United States, France, Britain and China have offered various forms of military and intelligence-sharing assistance to find the girls, which the Nigerian government accepted. About seven U.S. military officials are expected to arrive in Nigeria today to help in the search for the missing girls. They will join about 60 U.S. interagency members who have been on the ground since before the kidnappings as part of the United States' counterterrorism efforts within Nigeria. If the U.S. military is able to quickly locate the whereabouts of the abducted girls and free them without many of the kidnapped girls losing their lives in the process or heavy civilian casualties during any rescue operation, it will bolster the U.S.'s standing in the eyes of the Nigerian public and possibly lead to a request for broader U.S. assistance in fighting the sect. Then, if the U.S. is able within a short time frame to help the Nigerian government arrest the leaders and sponsors of the sect—with minimum casualties on all sides—the conflict could also abate. Under this scenario, Borno, Yobe and Adamawa States remain the hotbeds of conflict, with episodic occurrences in other northern states. This scenario is not likely to occur because the conflict has already intensified from what it was only a few months ago.
The sect has become more audacious as its recent attacks in Borno State vividly demonstrated. Consider, for instance, the sect's attack on the town of Gamboru this week: allegedly, the terrorists, "wearing military fatigue, came driving dozens of pick-up trucks and motorcycles, with three armored personnel carriers providing cover." Residents of Gamboru also claimed that an aircraft hovered in the skies throughout the attack, as militants wreaked havoc for four hours in the middle of the day. Thus, the scenario of the conflict remaining unchanged in its character and trajectory is unlikely. Unless it abates in line with the first trajectory above, the conflict will likely continue to intensify. It is possible for the Boko Haram conflict to grow far worse than it is now. This trend could happen under at least four possible scenarios. There is a strong feeling that Jonathan will contest and narrowly win the 2015 presidential election against a Muslim presidential candidate from the north and that the outcome of the election will be hotly disputed. Post-election violence in the north could be re-enacted along the lines of what happened after the 2011 presidential elections when Muhammadu Buhari of the defunct Congress for Progressive Change (CPC) lost to Jonathan. Boko Haram and Ansaru could tap into fairly generalized political frustrations among Muslims in the northern part of the country to increase and widen the tempo of their activities, targeting especially Christians and those thought to have collaborated with Jonathan in "rigging" the election. Under this scenario, Nigeria will be saved from anarchy or civil war only if the urge for reprisal attacks in the south is contained. In essence, it is possible for the Boko Haram conflict to be contained or to widen. In the next blog, I will discuss possible strategies for containing the conflict in the short to medium term as well as long-term strategies for neutralizing the two terrorist groups and the threats they pose to the Nigerian state. Note: This blog reflects the views of the author only and does not reflect the views of the Africa Growth Initiative. Just this month, the Brookings Africa Growth Initiative is wrapping up a yearlong study on the impact conflict has had on the agricultural sectors in northern Nigeria and Mali. Adibe collaborated with Brookings on this study and specifically put together a long-form exposition on the possible trajectories of Nigeria's conflict. While the full report moves toward publication, Brookings asked him to publish excerpts for Africa in Focus, 1) explaining the emergence of Boko Haram, 2) discussing possible scenarios on how the conflict could evolve, and 3) providing policy recommendations for curbing the violence. |
7570ec3c492442a453717eb91768253c | https://www.brookings.edu/blog/africa-in-focus/2014/05/14/boko-haram-in-nigeria-the-way-forward/ | Boko Haram in Nigeria: The Way Forward | Boko Haram in Nigeria: The Way Forward On Monday, in a video showing 130 of the over 200 kidnapped Nigerian schoolgirls, Boko Haram announced that it would be willing to let the girls go as part of a trade for Boko Haram militants currently held by Nigeria. Later that day, Nigerian Interior Minister Abba Moro announced that Nigeria declined the offer, stating that the sect is not in any moral position to swap prisoners for the innocent girls. As I stated in an earlier blog, this kidnapping is only the latest in a long list of attacks against the Nigerian state and its innocent civilians. Boko Haram militants have been active around the country and especially in the northeast for many years. In fact, this week President Goodluck Jonathan also asked Nigeria's parliament to extend the state of emergency declared in May of 2013 in the northeastern states of Adamawa, Borno and Yobe—the ones most vulnerable and consistently victimized by Boko Haram—by another six months. However, the tragedies in Nigeria and the conflict with Boko Haram require more than just responses to terrorist activities. Though foreign governments are now providing Nigeria with security and surveillance support, the conflict will not end until longer-term and deeply held grievances are addressed. The strategies adopted by the government should be divided into long-term measures aimed at neutralizing the groups and short- to medium-term measures aimed at containing them and their terrorism. Nigeria, the most populous country in sub-Saharan Africa as well as the biggest economy, is facing a severe crisis in its nation-building process. Virtually every part of Nigeria claims it is "marginalized." Concomitantly, groups have been calling for the convocation of a "Sovereign National Conference"—a euphemism for a meeting to discuss whether Nigerians want to continue to live together as one country. Something nasty has happened to the effort to create "true Nigerians"—that is, Nigerians who privilege their Nigerian identity over the other identities they bear in the country. Thus, some people still believe that Nigeria is a "mere geographical expression," a nation only in name and with only very few "true Nigerians." The struggle in nation building mixes with poverty, inequality and a lack of development in the country, creating an existential crisis for many Nigerians. As I stated in my previous blog, for many young people, a way of resolving this sense of alienation is to retreat from the "Nigeria project"—the idea of fashioning a nation out of the disparate nationalities that make up the country—and construct meanings in chosen primordial identities, often with the Nigerian state as the enemy. I have elsewhere [i] called this phenomenon the "de-Nigerianization process." In Nigeria, there is a heavy burden of institutionalized memories of hurt, injustice, distrust and even a disguised longing for vengeance by various individuals, ethnic groups, regions and religious groups. In this sense, actions that ordinary Nigerians rightly see as heinous are seen by some as normal, even heroic. There is a feeling that this "de-Nigerianization process" is accelerating by leaps and bounds.
No individual or political authority enjoys universally perceived legitimacy across the main fault lines and therefore the country is in desperate need of creating more "true Nigerians." If this trend continues, there is a high risk of a growing number of individuals and groups impairing or even attacking the Nigerian state. Already, some of those entrusted with the nation's common patrimony steal it blind; some law enforcement officers turn the other way if offered a little inducement; organized labor (including university lecturers) sometimes goes on prolonged strikes on a whim; students may resort to cultism and exam malpractices; and workers often drag their feet, refuse to put in their best and engage in moonlighting. It seems that everyone has one form of grouse or another against the Nigerian state and its institutions. A long-term solution for containing Boko Haram's and Ansaru's terrorism, and for neutralizing them along with other insurgency groups in Nigeria, is to resolve the crisis in the country's nation-building processes. Terrorism will end when Nigerians come to see themselves as one people and develop that sense of what Benedict Anderson calls "imagined communities." For Anderson, a nation is a community socially constructed and imagined by the people who perceive themselves as part of the group. For him, a nation "is imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion."[ii] Re-starting the stalled nation-building process is not going to happen overnight. The following measures, however, hold good promise: (a) I remain skeptical that the on-going ad hoc National Conference convened by the federal government to recommend solutions to the country's many challenges will succeed, because of deeply ingrained distrust among Nigerians. However, the conference, if well managed, could be a credible platform for all stakeholders to vent their grievances and frustrations with the Nigeria project. The catharsis will be useful as the country strives for long-term solutions to its nation-building problems. In the same vein, some recommendations from the conference, if implemented, could help mollify some aggrieved groups. (b) Perhaps one of the long-term solutions to the Boko Haram challenge could come by default. The increasing wave of "Naija optimism" could help blunt the pull of the centrifugal forces. This is a wave of new hope around the country's economic prospects, typified in the recent inclusion of Nigeria in the MINT (Mexico, Indonesia, Nigeria and Turkey) emerging economies and the rebasing of its GDP, making it the largest economy in Africa and the 26th largest in the world. Because people instinctively want to identify with success, economic growth, especially if it is accompanied by more equitable distribution and people-oriented development, could pacify irredentist pressures, as separatist forces may have to contend with the fear of leaving at a time when the country is being tapped as one of the likely future economic superpowers of the world. (c) As Nigeria's economy develops, the various parts of the country could develop organic economic linkages that will help further the cause of the nation-building process.
For instance, if the groundnuts produced in the north are used in the manufacture of peanut butter in the southeast, and the cocoa produced in the west is used for manufacturing chocolate drinks in the north, such economic linkages will help blunt interregional animosities and thus further the cause of national unity. In the short- to medium-term, the government should adopt a combination of koboko (Hausa word for whip) and "pieces of the National cake" (a Nigerian phrase for "patronage" or "co-optation into the system"). In Western speak, carrot-and-stick strategies. Some of the measures the government could take include: (i) Empowering the state governments in the north to lead the charge and be the faces of the fight against Boko Haram. This could, if anything, address the conspiracy theory in the north that President Goodluck Jonathan's administration is funding Boko Haram either to make Islam look bad or to depopulate the north ahead of the 2015 elections. It is important to underline that the conspiracy theories have made it more difficult to mobilize collective anger against Boko Haram. (ii) Creating a Ministry of Northern Affairs—just like the Ministry of Niger Delta Affairs—to help address the numerous challenges in the north, including the problems of poverty, unemployment, illiteracy and radical Islam. This establishment would be one way of winning the hearts and minds of the locals and cooling local grievances on which Boko Haram feeds. (iii) Conducting speedy and fair trials, under Islamic laws, of those found to be Boko Haram activists or funders and letting the law take its full course. Having suspects stand trial for months or even years creates a backlash, and often has a way of mobilizing sympathy for the suspects. It may also be strategic to try the suspects under Islamic laws since the sect members have openly rejected Western civilization, including its jurisprudence. Whatever punishment is meted out to them under Islamic jurisprudence will not be seen as part of a Western conspiracy against Islam. (iv) Instituting a sort of Marshall Plan for the northeast aimed at winning the hearts and minds of the local populace. The plan should aim at providing quality education, building local capacity and providing jobs. (v) Exploring the option of offering amnesty to the more moderate members of the sects while side-lining the hardliners and finding means to effectively neutralize them. Conclusion There is no quick fix to fighting terrorism anywhere in the world as the experiences in Afghanistan, Somalia, Yemen and other countries have shown. However, with the above recommended short- to medium-term strategies pursued concurrently with the long-term strategy of resolving the crisis in Nigeria's nation-building processes, Boko Haram's and Ansaru's terrorism can be contained, and the groups eventually neutralized. Note: This blog reflects the views of the author only and does not reflect the views of the Africa Growth Initiative. Just this month, the Brookings Africa Growth Initiative is wrapping up a yearlong study on the impact conflict has had on the agricultural sectors in northern Nigeria and Mali. Adibe collaborated with Brookings on this study and specifically put together a long-form exposition on the possible trajectories of Nigeria's conflict.
While the full report moves toward publication, Brookings asked him to publish excerpts for Africa in Focus, 1) explaining the emergence of Boko Haram, 2) discussing possible scenarios on how the conflict could evolve, and 3) providing policy recommendations for curbing the violence. |
d9dc130aeba1bcbdbb5f4a5d1025a22a | https://www.brookings.edu/blog/africa-in-focus/2014/06/06/africa-in-the-news-us-reinforces-power-africa-and-nigeria-has-a-new-central-bank-governor/ | Africa in the News: US Reinforces Power Africa, and Nigeria Has a New Central Bank Governor | Africa in the News: US Reinforces Power Africa, and Nigeria Has a New Central Bank Governor On June 3-4, 2014, U.S. Secretary of Energy Ernest Moniz led a delegation of high-level U.S. officials to Addis Ababa for the U.S.-Africa Energy Ministerial, cohosted by the government of Ethiopia. Based around the theme, "Catalyzing Sustainable Energy Growth in Africa," the ministerial convened leaders from government, the private sector, academia and civil society to discuss the technologies, strategies and partnerships necessary for leveraging the continent's extraordinary renewable and hydrocarbon energy resources. Attendees included Rajiv Shah, administrator of the U.S. Agency for International Development; Alex Rugamba, director of the African Development Bank; Fred Hochberg, chairman of the U.S. Export-Import Bank; and Elizabeth Littlefield, president of the Overseas Private Investment Corporation. Public figures representing ministries of Energy and Development from African countries also participated in the meeting. Discussions at the ministerial highlighted the importance of energy infrastructure development as an enabler of economic development and the need for sustainable energy, given the risks in Africa from climate change. The forum also served as a platform to launch the new Power Africa initiative "Beyond the Grid," a framework for American investment in off-grid and small-scale energy projects. These projects will target primarily rural areas in the six Power Africa partners: Ethiopia, Ghana, Kenya, Liberia, Nigeria and Tanzania. An estimated 240 million people in rural and peri-urban communities in Africa lack access to electricity and have largely been excluded from government plans to extend electricity grids beyond densely populated urban centers. Through the Beyond the Grid approach, 27 private investors and donors have committed $1 billion to improving access to affordable energy in these areas, covering an expected 20 million underserved households and businesses. Proposed projects by investors include: $80 million in funding by Schneider Electric to finance off-grid energy small and medium enterprises and train 1,000 Africans in energy-related trades, as well as Solar Sister's commitment to develop a distributed network of female entrepreneurs who will run clean energy micro-businesses. On Tuesday, Godwin Emefiele, former chief executive officer of Zenith Bank Plc, assumed the role of governor of the Nigerian Central Bank. He replaced Lamido Sanusi, who was suspended by President Goodluck Jonathan in February for alleged financial mismanagement. Sanusi denies President Jonathan's claims and argues that his firing was politically motivated. In a press conference on Thursday, Emefiele outlined his agenda as governor, including a major focus on "development banking"—cutting unemployment and poverty rates—not just monetary stability. Specifically, he will aim to implement measures to identify and direct credit toward productive sectors of the economy. Emefiele also stated that he will work to achieve "very daunting twin goals": gradually reducing interest rates while upholding the stability of the naira.
Slumping foreign currency reserves due to below-target oil outputs (the country's main export) have made it increasingly difficult for the Nigerian Central Bank to maintain the currency peg over the past year. This challenge has led to mounting pressure to devalue the naira—a move that Emefiele staunchly opposes. Following his statements on Thursday, the naira dropped by 0.69 percent against the dollar to a one-month low of 163.85 as bond yields and treasury bills also fell. As noted by the Financial Times, the volatility in the foreign exchange market is based in part on investors' concerns over the shift in the interest rate policy as well as perceptions of the eroding independence of the Nigerian Central Bank. |
f6a89ada49305e3b1a5651f96338653c | https://www.brookings.edu/blog/africa-in-focus/2014/08/15/ghanas-request-for-imf-assistance/ | Ghana's Request for IMF Assistance | Ghana's Request for IMF Assistance On Friday, August 8, 2014, IMF Deputy Managing Director Min Zhu issued the following short statement: "Today, IMF Management received a formal request from the Ghanaian authorities to initiate discussions on an economic program that could be supported by the IMF. The Fund stands ready to help Ghana address the current economic challenges it is facing. We expect to send an IMF team to Ghana in early September to initiate discussions on a program." The last time Ghana went to the IMF was five years ago, and, during its resulting three-year program, the country managed to raise its real GDP growth rate from about 4.0 percent in 2009 to 7.9 percent in 2012 with a peak of 14.4 percent in 2011. However, last year, growth slowed down to about 5.4 percent, and the IMF forecasts a 4.8 percent figure this year. More troubling, indicators on the macroeconomic dashboard are sending alarm signals: Headline inflation is hovering around 15 percent; the Ghanaian currency—the cedi—has fallen 37 percent against the U.S. dollar since the beginning of the year; and the central bank increased its policy rate to 19 percent in July. The government has been missing its fiscal deficit forecasts, and social indicators are also sending some warning signals—activists marched through the capital Accra in late July during "Red Friday" to protest the worsening economic situation. Ghana's economic performance has been plagued by the "twin deficits": a fiscal deficit and a current account deficit. In other words, the government has been spending more than it collects in terms of revenues, and the country has been importing more than it exports. This situation cannot be sustained for long and creates macroeconomic imbalances. Two main drivers of the twin deficits are higher salaries and lower gold and cocoa prices. Higher salaries: When it comes to the worsening fiscal deficit, the Institute of Statistical, Social and Economic Research (ISSER), one of AGI's partner think tanks, reports in its State of the Ghanaian Economy in 2012 that discretionary expenditure doubled to 18.5 billion cedis in 2012, driven by higher personal emoluments, which have been consistently rising since 2009. Lower gold and cocoa prices: The high current account deficit is mainly attributed to the fall in the global prices of gold and cocoa. Ghana's heavy dependence on the export of primary commodities (gold and cocoa accounted for 62 percent of export receipts in 2012) makes the country vulnerable to external prices. In addition, the government has not been able to raise enough revenues to meet its fiscal targets, and although the country has recently joined the club of oil exporters, oil revenues are not yet a significant buffer against lower gold and cocoa prices. This year, the government of Ghana developed its Economic and Financial Policies for the Medium Term to address the twin deficits. The major elements of this strategy include: 1. Imposing levies on certain imports and profits of specific sectors, eliminating fuel subsidies, and raising electricity and water tariffs to compensate for the shortage in tax collection and grants; 2. Raising the amount and coverage of value-added tax rates to mobilize additional revenue and maintain current primary spending; 3.
Adopting public sector reforms that include a rationalization of the public service, the termination of some public services, the reduction of budget rigidities by adjusting the statutory funds, and efforts to increase tax compliance by reforming the revenue administration system; and 4. Enacting a tighter monetary policy. In its latest report on Ghana, IMF staff noted that the Ghanaian government’s strategy “is an important first step that now needs to be translated into specific, quantified, and time-bound actions, particularly with respect to the planned rationalization of the public service and tax policy measures.” The IMF also noted that “in light of Ghana’s significant fiscal and external imbalances, staff would strongly encourage the government to target a larger and more frontloaded fiscal consolidation.” Fitch Ratings recently stressed that an IMF program that supports fiscal consolidation and addresses macroeconomic imbalances in Ghana could stabilize the outlook on the sovereign credit rating, which is currently at B-. The credit rating agency, however, warns that “an IMF program is not a foregone conclusion, nor is its effective implementation as a lasting reduction in exchange rate and funding pressures is unlikely until a program is agreed and a credible deficit reduction strategy is implemented.” Deteriorating market conditions will test the extent to which the government can manage the current situation without resorting to external funding. The government is facing more expensive short-term funding costs (182-day Treasury bill yields have crossed the 25 percent barrier) and is having problems raising long-term money domestically (auctions of five- and seven-year bonds were cancelled). Fitch Ratings recently noted that a shortage of local currency liquidity has resulted in banks and non-bank financial institutions cutting holdings of government securities, leading to the central bank funding $1 billion (or 85 percent) of the budget deficit during the first five months of 2014. The shortage of U.S. dollars is increasing, and parallel exchange markets complicate the central bank’s management of liquidity. Gross international reserves are falling: They were estimated at $4.5 billion, or 2.2 months of current external payments, in June, down from $5.6 billion at the end of 2013. An IMF program will help stop the bleeding, but the key challenge for the government will be to engage all stakeholders in implementing the measures necessary to put Ghana back on a sustainable macroeconomic path. As noted above, people have already taken to the streets of Accra to protest the worsening economic conditions, and protesters are demanding a reduced cost of utilities such as water and power; reduced fuel prices; inclusive growth and transparency; more jobs; and a stronger local currency. Similarly, polytechnic teachers and nurses were recently on strike over unpaid salaries and allowances. In the months ahead, the Ghanaian government will need strong political will and skillful negotiation strategies. On the one hand, it will have to convince the Ghanaian public that short-term pain will be needed for Ghana to achieve its long-term transformational agenda. On the other hand, the government will need to identify policy decisions that are credible for market participants and IMF staff, who are expecting a large and rapid fiscal consolidation. This will not be easy, but Ghana has been down this road three times in the recent past. 
The country had three-year programs with the IMF in 2009, 2003 and 1999, and a good starting point for the government will be to identify what worked well in the past. |
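The reserve figures quoted in the post above lend themselves to a quick back-of-the-envelope check. The sketch below is a minimal illustration of the "months of import cover" arithmetic, not anything drawn from the IMF or the Bank of Ghana: it takes the two numbers reported in the article ($4.5 billion of reserves equal to 2.2 months of current external payments) to derive the implied monthly external payments, and then, assuming those monthly payments were roughly unchanged over the period, applies the same ratio to the end-2013 reserve level.

```python
# Back-of-the-envelope sketch of the import-cover arithmetic cited in the post.
# Inputs are the figures reported in the article; the implied monthly external
# payments are a rough derivation, not an official statistic, and the end-2013
# comparison assumes monthly payments were unchanged over the period.

reserves_june_2014 = 4.5   # USD billions, as reported
cover_june_2014 = 2.2      # months of current external payments, as reported
reserves_end_2013 = 5.6    # USD billions, as reported

# Implied average monthly external payments (USD billions)
monthly_payments = reserves_june_2014 / cover_june_2014

def months_of_cover(reserves: float, monthly: float) -> float:
    """Import cover = gross reserves divided by average monthly external payments."""
    return reserves / monthly

print(f"Implied monthly external payments: ${monthly_payments:.2f} billion")
print(f"Cover at the end-2013 reserve level: {months_of_cover(reserves_end_2013, monthly_payments):.1f} months")
```

Run as written, the sketch implies monthly external payments of roughly $2 billion and an import cover of about 2.7 months at the end-2013 reserve level, which illustrates how quickly the buffer eroded over six months.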
8da564cf9f176ee78c9c0eb5e4c3aebc | https://www.brookings.edu/blog/africa-in-focus/2014/08/19/are-ghanas-women-more-entrepreneurial-than-its-men/ | Are Ghana’s Women More Entrepreneurial Than its Men? | Are Ghana’s Women More Entrepreneurial Than its Men? Like that of many countries in sub-Saharan Africa, Ghana’s impressive economic growth in recent decades, largely propelled by higher international prices for commodities such as cocoa and gold, has so far failed to restructure the country’s economy. The broader Ghanaian economy continues to exhibit fragility and weak adaptability to internal and external challenges, with limited capacity to absorb shocks. More importantly, the impressive economic growth of recent years has not led to the creation of sufficient employment, especially for the teeming youth who enter the labor market every year. Within this context of limited employment and other opportunities in the labor market, the development of entrepreneurial activity is seen as having a key role to play. This is because entrepreneurship can drive innovation and competition, and act as a catalyst for structural transformation of an economy and, consequently, a reduction in poverty. While the literature on entrepreneurship is vast, little is known about the forms and characteristics of female entrepreneurship in Ghana in particular and sub-Saharan African countries in general. Following the first-ever Global Entrepreneurship Monitor (GEM) survey in Ghana, we examined the reasons for, and effects of, the relatively higher levels of entrepreneurial activity among women compared to their male counterparts.[1] More specifically, we examined women and entrepreneurship in Ghana, women’s motivation to start and operate businesses, the key challenges they face, and the overall impact on national development and poverty reduction. Studies have indicated that, even though both genders experience difficulty in the labor market, females tend to fare worse in proportional terms than their male counterparts.[2] For instance, the 2010 Population and Housing Census of Ghana indicates that although unemployment declined between 2000 and 2010, it was still relatively higher for women compared to men. Unemployment among males aged 15 years and above declined from 10.1 percent in 2000 to 4.8 percent in 2010, compared with a decline from 10.7 percent to 5.8 percent for females over the same period. This situation is partly due to the relatively low levels of education among females and the other constraints they face in the labor market. It is within this context that female participation in entrepreneurship becomes very important. Female-controlled businesses in Ghana tend to be micro and small enterprises (MSEs) largely concentrated in the informal sector. This trend also hinders female success: MSEs in the informal sector are unable to expand because they are confronted with a myriad of challenges such as limited access to public infrastructure and services (water, electricity, etc.), cheap and long-term credit, and new technologies. According to Yankson et al. (2011), across all countries in the world, societal views on culture, religion, and child care, as well as levels of education and development, have serious implications for attitudes and opinions about working women. These societal views can weigh heavily against women entering the business world. Despite these challenges, the 2010 GEM survey revealed that in Ghana, women are more entrepreneurial than men—a condition that is an exception across all the GEM countries. 
Using a key measurement in GEM, the “total early stage entrepreneurial activity” (TEA) rate,[3] the 2010 study revealed that—with the exception of Ghana—across all GEM countries male participation in entrepreneurial activity exceeds that of females (see Figure 1). The TEA rate for Ghana was estimated at almost 60 percent for females and 42 percent for males. In other words, unlike in other countries, in Ghana there are fewer men than women starting businesses. The GEM study confirms the findings of other studies that the number of female entrepreneurs in Ghana far exceeds the number of male entrepreneurs. The growth of female entrepreneurs needs to be regarded as part of a broader process of social change that is marked by increases in the number of women in the workforce, including women in business. This process of social change is associated with increased education for women, postponement of early marriage, smaller family sizes and the increased desire for financial independence—all of which contribute to the growth of women-owned businesses.[4] In the view of Dzisi (2008), entrepreneurship is today an accepted career path for women; it is even preferred to some degree, as it is seen to have the potential to offer flexibility and independence that typical employment does not. Though Ghanaian women have long been active in business, the effects of economic reform programs beginning in the mid-1980s have pushed more women into the informal sector, either as the sole providers or supplementers of household incomes.[5] The reforms ushered in an era of rising prices of basic needs, growing unemployment and underemployment of male partners, declining real income, and the growing demand to meet local levies for social amenities provision through user charges. Under these conditions, supportive income from women became imperative for the household, and women’s income-generating activities became indispensable to family survival.[6] Commenting on the factors that motivate women to start businesses, Mumuni et al. (2013) note that these factors can be grouped into several categories. Clearly, the myriad reasons that motivate or drive women to engage in entrepreneurship can be broadly classified as either necessity-driven or opportunity-driven. In this vein, Mumuni et al.’s (2013) categories of “no choice,” “by chance,” and “forced” entrepreneurship could be classified as necessity-driven entrepreneurship, while “informed” and “pure” entrepreneurship could fall under opportunity-driven entrepreneurship. Although it has been argued that the institutional, legal and regulatory frameworks for business development in Ghana are to a large extent gender neutral, certain socio-cultural factors and structural conditions continue to militate against women’s entrepreneurship. Compared to their male counterparts, women in Ghana are poorer, have heavier time burdens, and are less likely to be literate. All these factors have negative impacts on women’s entrepreneurship, even though the country’s regulatory, legal and institutional regimes can be described as gender neutral. The recent IFC/World Bank study, Voices of Women Entrepreneurs in Ghana, highlighted the issues, concerns and successes of female entrepreneurs in their own voices. 
The IFC/World Bank study revealed several areas perceived by Ghanaian female entrepreneurs as particularly challenging, and four key areas stand out: balancing work and family life; dealing with corruption; access to credit; and managing male employees (see Figure 2). Of all the perceived difficulties that hinder women in business, the most critical is balancing work and family life. Culturally, a role has been carved out for Ghanaian women as maintainers and caretakers of homes, a role that is often at odds with being a business owner. Women often struggle to balance the time it takes to run a business with the expectations of society in meeting family commitments. Another key challenge highlighted in many other studies is cultural practice regarding land and property ownership (especially inheritance) and its negative impact on women’s entrepreneurship. Access to land is administered under customary law, which contains built-in discrimination against female land ownership. In addition, inheritance systems largely administered through traditional and cultural practices tend to discriminate against women—access to land is therefore one of a range of constraints faced by female entrepreneurs. Furthermore, it has been argued that women’s limited access to start-up capital is related to access to credit, which in turn is correlated with formal property ownership. Therefore, the inability of women to own property has serious implications for their access to credit, as they have no property to use as collateral for start-up capital. Economic and political reforms over the last three decades in Ghana have resulted in a stable socio-economic environment that has facilitated significant economic growth rates, with a strong focus on private sector development. These growth rates have not, however, generated significant employment opportunities in the private sector, which is predominantly informal. In addition, economic liberalization and privatization have been associated with dwindling employment levels in the public sector, which in any case to a large extent favors individuals with relatively higher levels of education and skill training. Despite women’s dominance in entrepreneurial activities in Ghana, women still face significant obstacles. In addition, the challenges of men’s and women’s entrepreneurship in Ghana cannot be separated from those of the informal sector in particular and private sector development in general. Consequently, the policy response should not focus narrowly on promoting female entrepreneurship but should broadly address the constraints affecting the informal sector and entrepreneurship in general. In this way, support for female entrepreneurship should be viewed as one element of a comprehensive development policy drive that addresses the complex factors and relationships that influence women’s access to meaningful employment as well as their contribution to national development. This approach will contribute to improved gender equity and promote human capital accumulation, women’s economic participation, and beneficial economic growth effects. Note: George Owusu is an associate professor at the Institute of Statistical, Social & Economic Research (ISSER) at the University of Ghana, Legon. He can be reached at gowusu@ug.edu.gh or geowusu@yahoo.com. Peter Quartey and Simon Bawakyillenuo are an associate professor and research fellow, respectively, at ISSER. 
ISSER is one of the Brookings Africa Growth Initiative’s six local think tank partners based in Africa. This blog reflects the views of the authors only and does not reflect the views of the Africa Growth Initiative. [1] The Global Entrepreneurship Monitor (GEM) is the largest independent survey of entrepreneurship in the world and is carried out annually. It analyzes the relationship between the level of entrepreneurship and economic growth, and examines the conditions that foster and constrain entrepreneurship in each participating country. Over 60 countries have been involved in the GEM research consortium over the decades, but very few from sub-Saharan Africa have participated. Through ISSER’s research project titled Youth and Employment: The Role of Entrepreneurship in African Economies (YEMP), funded by the Danish Ministry of Foreign Affairs/Danida, Ghana participated in the 2010 GEM global survey for the first time. [2] Ghana Trade Union Congress (GTUC) 2005: Policies on Employment, Earnings and the Petroleum Sector, Accra: GTUC/UNDP. [3] TEA is defined as the proportion of the total adult population aged 18-64 years who are either nascent entrepreneurs or owner-managers of a new business (3-42 months old). This measurement takes into account the entire adult population, not simply those engaged in entrepreneurship. [4] Fielden, S. and Davidson, M. 2005: International Handbook of Women and Small Business Entrepreneurship, Northampton, Massachusetts: Edward Elgar Publishing Limited. [5] Owusu, G. and Lund, R. 2004: Markets and women’s trade: Exploring their role in district development in Ghana, Norsk Geografisk Tidsskrift – Norwegian Journal of Geography, 58(3), pp. 113-124. [6] Robertson, C. 1995: Comparative advantage: Women in trade in Accra, Ghana and Nairobi, Kenya. In House-Midamba, B. and Ekechi, K.E. (eds.): African Market Women and Economic Power, London: Greenwood Press, pp. 99-119. |
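Footnote [3] gives a precise definition of the TEA rate, and the definition can be made concrete with a short computation. The sketch below is a hypothetical illustration of that definition only; it is not part of the GEM consortium's actual methodology or code, and the record fields and the mini-sample are invented for the example.

```python
# Hypothetical illustration of the TEA definition in footnote [3]: the share of
# adults aged 18-64 who are nascent entrepreneurs or owner-managers of a new
# business (3-42 months old). The data structure and sample are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Respondent:
    age: int
    nascent_entrepreneur: bool          # actively setting up a business
    business_age_months: Optional[int]  # age of an owned/managed business, None if no business

def is_early_stage(r: Respondent) -> bool:
    """Nascent entrepreneur, or owner/manager of a business 3-42 months old."""
    new_owner = r.business_age_months is not None and 3 <= r.business_age_months <= 42
    return r.nascent_entrepreneur or new_owner

def tea_rate(sample: List[Respondent]) -> float:
    """Share of the adult population aged 18-64 engaged in early-stage activity."""
    adults = [r for r in sample if 18 <= r.age <= 64]
    return sum(is_early_stage(r) for r in adults) / len(adults)

# Mini-sample: two of the three working-age adults qualify, so TEA = 2/3.
sample = [
    Respondent(age=29, nascent_entrepreneur=True, business_age_months=None),
    Respondent(age=41, nascent_entrepreneur=False, business_age_months=18),
    Respondent(age=35, nascent_entrepreneur=False, business_age_months=None),
    Respondent(age=70, nascent_entrepreneur=False, business_age_months=12),  # outside the 18-64 window
]
print(f"TEA rate: {tea_rate(sample):.0%}")
```

A production calculation would presumably weight respondents by sampling weights rather than taking a simple average, but the qualifying condition is the same one used to produce headline figures like the roughly 60 percent female and 42 percent male TEA rates cited in the article.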
5e34af3d661b44dab56d5044c2b6674a | https://www.brookings.edu/blog/africa-in-focus/2014/11/20/africas-failure-to-industrialize-bad-luck-or-bad-policy/ | Africa’s Failure to Industrialize: Bad Luck or Bad Policy? | Africa’s Failure to Industrialize: Bad Luck or Bad Policy? On Thursday, November 20 the United Nations celebrated the 25th Africa Industrialization Day. But perhaps “celebrate” is not exactly the right word. Africa’s experience with industrialization over the past quarter century has actually been disappointing. In 2010 sub-Saharan Africa’s average share of manufacturing value added in GDP was 10 percent, unchanged from the 1970s. At the same time, manufacturing output per person was about a third of the average for all developing countries, and manufactured exports per person about 10 percent. Thus, I pose the question: Is Africa’s failure to industrialize in the 25 years since the first African Industrialization Day due to bad policy or bad luck? About four years ago the African Development Bank, the Brookings Institution and the United Nations University-World Institute for Development Economics Research (UNU-WIDER) came together to try to answer a seemingly simple but puzzling question: Why is there so little industry in Africa? We called our program of research Learning to Compete, because this was the greatest challenge faced by African industry. Among the projects that we sponsored were 11 detailed country case studies—eight from sub-Saharan Africa, one from North Africa and two from newly industrializing East Asia—done by researchers from the countries involved. The case studies are now available here. They make discouraging reading for anyone interested in African industrialization. The eight sub-Saharan countries—Ethiopia, Ghana, Kenya, Mozambique, Nigeria, Senegal, Tanzania and Uganda—were all among the region’s early industrializers and are also all among the stars of the region’s growth turnaround. Tunisia—along with Mauritius, which we did not study in detail—is one of the brighter lights in the African continent’s industrialization story. The Asian countries—Cambodia and Vietnam—were chosen because they are emerging Asia’s newest industrializers. The country studies describe the range of public policies used to promote industrial development and the evolution of industry in each country. Most seek to identify the factors that have constrained industrialization and the nature of public actions designed to relieve those constraints. What is striking about the eight sub-Saharan African countries is that, despite considerable diversity in geographical location, resource endowments and history, they share remarkable similarity in their experience with industrialization. The Asian and the Tunisian stories begin in very much the same place as these sub-Saharan countries with an early drive for state-led import substituting industrialization but diverge substantially in terms of industrial policies and performance in later periods. Africa’s failure to industrialize is partly due to bad luck. The terms of trade shocks and economic crises of the 1970s and 1980s brought with them a 20-year period of macroeconomic stabilization, trade liberalization and privatization. Import competition forces inefficient firms, both public and private, out of business. Uncertainty with the outcome of the adjustment process and low or negative economic growth meant that there was little private investment overall and practically none in industry. 
Political instability and conflict also caused investors to hold back. When Africa emerged from its long economic hibernation around the turn of the 21st century, African industry was no longer competing with the high-wage industrial “North,” as it had in the 1960s and 1970s. It was competing with Asia. From the point of view of industrial development the timing of the region’s economic recovery was unlucky to say the least. But the failure to industrialize was also due to bad policy. The eight sub-Saharan countries enacted remarkably similar policies for industrial development: state-led import substitution, Structural Adjustment and investment climate reform. Import substitution sowed the seeds of its own destruction. High protection and heavy import dependency meant that African industry was poorly prepared for international competition. The tendency of many African governments to assign a leading role to the state in creating and operating manufacturing firms simply made the problem worse. Investments were often made with little regard to efficiency, and the managerial capacity of the state was badly overstretched. While the reforms of the Structural Adjustment period paid off in terms of better macroeconomic management and faster overall growth, the rapid liberalization of trade and some ill-advised conditions—such as freeing up the import of second-hand clothing for resale—probably caused a more severe contraction of industry than was desirable. But, hindsight is always 20/20. The key issue looking forward is: Do the policies African governments now have in place prepare Africa to turn the corner in industrial development? Around 2000, the World Bank and many bilateral donors shifted their focus in spurring industrial development to the “investment climate”—the policy, institutional and physical environment within which private firms operate. Investment climate reforms reflect the priorities and dogmas of the aid community. Given the importance of development assistance in the eight sub-Saharan economies, it is perhaps unsurprising that all have implemented investment climate reforms since 2000. Our country studies, however, strongly suggest that the donor agenda on the investment climate is both poorly implemented and insufficient. Although in principle improvements in the investment climate are supposed to cover the whole range of issues from macroeconomic management, to infrastructure and skills, to the policies and institutions that most closely affect private investors, in practice the investment climate agenda has centered too narrowly on regulatory reform. Setting new priorities for the investment climate is certainly possible—and we make some suggestions how to do that in a forthcoming book, Learning to Compete, that summarizes the results of the project—but, by themselves, changes in the investment climate are unlikely to be enough to overcome the challenges faced by African economies trying to compete in global industry. What is the alternative? Beginning with Japan and moving through the “Four Tigers,” Indonesia, Malaysia, Thailand and spectacularly then on to China, East Asian economies all followed quite similar industrial development policies with striking results. The source of their early industrial dynamism came from rapid growth of export manufacturing, based on an “export push”—a coordinated set of macroeconomic and structural policies designed to boost industrial exports. 
East Asian countries also actively supported industry more generally, developing programs to encourage diversification and increase firm-level productivity. Today, Cambodia and Vietnam—the two countries that we studied—are taking the same path. Industrial growth in both has been explosive. The two African countries—Mauritius and Tunisia—that went their own way in terms of policies for industrialization emulated the East Asian model. While it is fair to say that neither country’s industrialization story is an unqualified success—both have had some difficulty in making the transition from low-end manufacturing toward more sophisticated and technology-intensive goods—relative to the rest of the continent they are the “leopards” of industry. Perhaps it is time to think again about investment climate reform. |
fdacdf17b239d6c9210d00dcb01d919f | https://www.brookings.edu/blog/africa-in-focus/2015/02/20/suggestions-for-obamas-last-trip-to-africa-as-president/ | Suggestions for Obama’s last trip to Africa as president | Suggestions for Obama’s last trip to Africa as president Since becoming president of the United States, Barack Obama has visited five African countries: Ghana, Egypt, Senegal, Tanzania, and South Africa. The president used his 2009 trips to Ghana and Egypt to articulate his broad and ambitious policy of engagement towards sub-Saharan Africa and the Arab world, respectively. The president’s pronouncements during the trips to Ghana and Egypt generated high expectations for a new dawn in the relationship between the United States and these regions. Nevertheless, there was not much by way of new policy initiatives to back these pronouncements, and, thus, to a large extent these two visits were more symbolic than substantive. As such, during his first term a lot of criticism from policy analysts was directed at the president’s detachment from Africa. Many felt that under Obama’s presidency America was lagging behind many other countries, especially China, India, and Brazil, and even smaller economies such as Turkey, in its engagement with Africa. In 2013, the president made a more extensive and substantive trip to Africa, traveling to Senegal, South Africa and Tanzania. During this visit, the president announced concrete initiatives that aim to deepen commercial relations, support regional trade logistics, and enhance security. Also significant was the announcement of the first U.S.-Africa Leaders Summit to be held the following year, in August 2014. In 2013 President Obama indicated that he would visit Africa at least one more time during his presidency. The expectation is that this trip will be later in 2015 but most likely in 2016—his last full year in office. Given that U.S. presidential international trips require months, if not years, of planning, it is a good bet that planning for the next African trip will soon be underway. Thus, his planning team should note that a good way to maximize the impact of his trip is to be more strategic in the choice of countries visited and also to include a policy focus relevant to the entire continent. While the countries visited so far are quite deserving of the honor, the omission of others has been both very significant and clearly misguided. It appears the choice of countries visited by the president was based on what were seen as “safe bets”—those meeting certain peace and governance thresholds. The president has avoided countries facing major challenges such as terrorism and poor governance records. For a more lasting impact, though, the president needs to get out of his comfort zone, visit non-“safe bet” countries, and connect with countries that show openness to reform, are rising economic leaders, and could be key strategic security partners. In this regard, I propose that the president’s trip cover at least the following countries: Nigeria, Ethiopia, and Kenya. Nigeria, a country characterized by serious governance problems compounded by ever-intensifying incidents of terrorism perpetrated by Boko Haram, would not pass the “safe bet” test. Notwithstanding the failures and challenges that Nigeria faces, and regardless of how the 2015 elections turn out, this is the country that deserves a visit by President Obama. 
It is a country that has long been characterized by high levels of corruption and serious ethnic and religious fractures. But Nigeria is the most important country in Africa, and it is the country that has the most influence on the direction that Africa takes. It is now the largest economy on the continent and has the largest population there. In addition, Nigeria is the dominant country in West Africa’s regional economic community—the Economic Community of West African States (ECOWAS). Despite all its shortcomings, Nigeria has, in recent years, undertaken major reforms that are helping stimulate the economy and shift it away from an overreliance on oil. By all accounts, Nigeria can be considered the continental anchor: Whatever happens in that country has large spillover effects across the continent. The president could use the visit to articulate a strategy to fight terrorism not only in Nigeria but also across the continent. Fighting terrorist groups should be a key focus of the president’s trip in Nigeria but also during the visit to other countries given the increasing threats posed by these groups and the fact that they have potential to grow and export terror outside Africa. In addition, this is the place where the president can focus on the importance of strengthening the institutions of governance for peaceful co-existence among the country’s very diverse ethnic and religious groups. Many of Nigeria’s problems are linked to its failure to deal with diversity, which is a problem that characterizes most of the continent and costs a great deal both in terms of violent conflicts and economic performance. Thus, Nigeria would be the perfect country for the president to articulate how the United States can work with Africans to strengthen institutions. Ethiopia is another large country also characterized by significant governance problems. The country’s past has been characterized by dictatorships, serious conflict and devastating famines. However, since the dictator Mengistu Haile Mariam was deposed, Ethiopia has made important progress, including adoption of a new federalist constitution and far-reaching economic reforms that have seen the country achieve one of the highest growth rates in the continent over the last decade. The economic reforms have attracted new foreign direct investments with the consequential emergence of new industrial clusters, especially in leather processing. Not all is perfect though: Like with governance, Ethiopia still lags far behind other countries in deregulating some key sectors of the economy especially telecommunications, land markets, banking, and finance. This country deserves a visit by President Obama for a number of reasons. First, the leadership in Addis Ababa has demonstrated willingness to reform. Although a work in progress, the reform process is on a positive trajectory and is a good example for other African countries. Second, the country is an important ally in the war against terrorism and has been pivotal in the war against al-Shabab. Finally, the president should use the trip to visit the headquarters of the African Union in Addis Ababa. A visit to the AU headquarters by the U.S. president would be a significant endorsement of the role of the continental organization and would, indeed, be the best forum in which to hold the next U.S.-African Leaders Summit—building up on the success of the first summit held in Washington in 2014. 
Given the central role that the AU is charged with in advancing the African integration project, President Obama and the African leaders could use the summit to discuss strategies to advance the pace of regional integration, especially as it pertains to the involvement of the U.S. private sector, such as in the building of regional infrastructure. As the president’s second “home,” Kenya must be included in the itinerary. Previous U.S. presidents have shown great pride in visiting their ancestral homes. Notable are the visits by Presidents Kennedy, Reagan and Clinton to their ancestral homes in Ireland. It will be an opportunity for the president to demonstrate pride in his African roots. Although President Obama visited Kenya as a private citizen and again as a U.S. senator, a visit as president will have great significance not only to him but also to Kenyans and indeed other Africans. Outside of his personal connection, there are other reasons for the president to visit Kenya. Kenya has made major political and economic reforms. It now has one of the most progressive constitutions in the world, and the implementation of this constitution is continuing steadily. It is the largest economy in East Africa and a leader in the integration of the East African Community. Kenya is emerging as Africa’s innovation hub and has also been at the forefront of the war against terrorism, especially against al-Shabab. Kenya continues to play a very important role in brokering peace initiatives in the region. For all these reasons, Kenya deserves to be included in the president’s itinerary. For Obama’s final trip to Africa as president to be impactful, it is also crucial that he focuses on a few key policy issues that have continental implications as opposed to many small, fragmented policies. Furthermore, the policy approach should build on the mutualism that was prominent in the deliberations during the U.S.-Africa Leaders Summit. In this regard, and as discussed above, the key policy issues that the president should focus on include collaborative strategies in the fight against terrorist groups in Africa and support for Africa’s regional integration project, especially through the participation of the U.S. private sector. Finally, the president should focus on the Post-2015 Development Agenda. Specifically, he should articulate how the U.S. would work with Africans in advancing the development agenda. It would be particularly impactful if the president were to mobilize the international community to support Africa’s Post-2015 Development Agenda. U.S. initiatives that support Africans in dealing with these broad issues are a sure way for the president to solidify an African legacy. |
f3017a44e378ed76e9f338808218845a | https://www.brookings.edu/blog/africa-in-focus/2015/03/10/financing-for-development-six-priorities-for-africa/ | Financing for Development: Six Priorities for Africa | Financing for Development: Six Priorities for Africa Less than 150 days remain before the Third International Conference on Financing for Development (FfD) takes place in Addis Ababa. The African development financing context has certainly changed since the two previous gatherings in Monterrey in 2002 and in Doha in 2008: For example, private capital flows, mainly in the form of foreign direct investment, and remittances have now overtaken official development assistance (ODA). Similarly, China and the other BRICS countries have strongly increased their presence on the continent. Given the challenges Africa faces, African stakeholders have much to contribute to the dialogue, but, as Africa’s context has changed, we must reflect on what Africa’s priorities should be. The upcoming event, “Financing the future: Fresh perspectives on global development,” hosted by the Overseas Development Institute [1] on March 17-18, will be exploring these issues in order to set the stage for the debate in Addis later this year. In anticipation of the Addis meetings, policymakers will examine new mechanisms and sources of financing to inform those discussions. For example, the Brookings Africa Growth Initiative is chairing a panel on the comparative advantage of international public finance in relation to other forms of development finance. In anticipation of the Sustainable Development Goals, Africa has already established a common position on the Post-2015 Development Agenda, based on six pillars, with the aim of speaking with one voice and facilitating the discussion toward a global consensus on the SDGs. The first five pillars cover a number of more specific priorities. For instance, Pillar One focuses on structural economic transformation and inclusive growth, while Pillar Two highlights science, technology, and innovation. We know that both of these objectives in particular face major financing gaps, as domestic resources are not sufficient to cover the costs associated with the SDGs. This is why Pillar Six, finance and partnerships, is so important and must be linked with the first five pillars. Thus, the Addis FfD conference must take into account the financing of the “pillars” in the Common African Position. As world leaders begin to prepare for their final decision on financing for development, I want to provide my recommendations on how best to surmount Africa’s financing gaps to achieve its development goals: This priority may seem quite obvious, but it is very tempting to focus on raising more finance without questioning the intended use of the money raised. For instance, there is a consensus forming around the need to invest in upgrading and developing Africa’s infrastructure. Yet, the focus of the ongoing conversation is on energy infrastructure (highlighted by the U.S.’s Power Africa initiative), while urban infrastructure has largely been omitted from the discussion. We feel, instead, that urban infrastructure should be considered a priority given that African cities are growing quickly and have vast needs, including new roads, public transit, and water and sanitation systems. 
Mechanisms such as municipal bonds issued by cities could be a unique means of filling these financing voids—USAID and the Gates Foundation, for example, are currently working with the city of Dakar to issue its first municipal bond, which will be the first non-government guaranteed municipal bond for sub-Saharan Africa (outside of South Africa). A multifaceted approach, including public and private, domestic and international finance, will be necessary to meet the continent’s vast financing needs. Over the past 15 years, external financing from the private sector, especially foreign direct investment (FDI), has risen relative to public financing through ODA. Meanwhile, domestic public finance has increased, as countries have received some debt relief, improved revenue collection mechanisms, and benefitted from commodity price booms (although tax revenue generation still remains relatively low). However, an important question to ask when examining the roles of these different sources of finance is: What can African governments really control? Governments can influence public finance and, to some extent, domestic markets, so they should start by focusing on these two areas. Taking a regional view in developing capital markets can make a lot of sense, since it allows for economies of scale and has worked well for the West African Economic and Monetary Union (WAEMU). When it comes to capital markets, for instance, why not set continental or at least regional targets and commitments: put in place regional legal and regulatory frameworks, develop the money markets (the cornerstone of capital markets), and integrate payments systems in order to reduce transaction costs? These strategies would provide the basis for strengthening domestic markets and public finance. Remittances are increasing too, averaging $21.8 billion over the past decade—with some countries, including Nigeria and Senegal, receiving approximately 10 percent of their GDPs in remittances. Yet the costs of sending remittances to Africa are the highest in the world, and transfers within Africa cost even more. Since remittances mostly fuel consumption within the social sectors (health and education) and thus have developmental impacts, let’s reduce the cost of sending remittances and transform the ways they can be invested to spur entrepreneurship and development. For instance, if a bank sees that an individual regularly receives remittances, it could invite the recipient to join the bank’s clientele, and, based on the history of remittances received, make a loan to the individual to help further her or his entrepreneurial pursuits. While remittance flows to Africa have increased over the past decade, recent trends in global financial regulation, such as the tightening of anti-money laundering (AML) and combating the financing of terrorism (CFT) standards, have had unintended consequences for the continent and stifled remittances. For instance, AML-CFT regulations have hurt Africa, as seen when many U.S. banks discontinued remittance services to Somalia after AML-CFT regulations were implemented there. Even Basel III, with its disincentives for banks to engage in long-term finance due to more stringent liquidity ratios, can have negative consequences for Africa’s attempts to reduce its financing gap in long-term infrastructure projects. The cost of compliance can push global banks to reduce or even cease their activities in small African markets: Why take the risk in small markets when the costs are so high? 
The case of BNP Paribas, which was slapped with a $9 billion fine for its business in Sudan and Iran, is still fresh in the minds of global bankers. Illicit financial flows are relatively high by some estimates, with African countries losing nearly $60 billion a year, predominantly due to tax evasion by commercial firms and the undervaluing of services and traded goods, while corruption and organized crime also contribute to illicit flows. This loss in capital has translated into lost opportunities for advancing economic and human development in Africa. Efforts by African countries and global institutions such as the United Nations are under way to engage foreign governments and corporations to track and reduce illicit financial flows, and they should be strengthened. Improving the quantity and quality of finance to Africa is necessary, but not sufficient, to secure sustainable development for the region. For instance, in dollar terms, FDI still predominantly goes to resource-rich countries such as Nigeria, Angola, and South Africa (although there has been some progress in diversification to other countries with large consumer markets or financial sectors in recent years). Importantly, we also need to manage the risks from African countries issuing foreign currency-denominated bonds, as I discuss in my recent Brookings paper, Trends and Developments in African Frontier Bond Markets. But African countries must focus on getting a bigger bang for their buck through partnerships that will promote the transfer of knowledge and skills and integrate African businesses into global value chains. At the same time, they should avoid detrimental regulation such as excessive local content regulation. For instance, it may be mutually beneficial for both African governments and foreign firms to foster local participation in some parts of the value chain (such as the downstream oil sector) and be less demanding initially in other parts (such as the upstream oil sector). Finally, it’s worth highlighting that Africa is not a country. Fragile and low-income countries still receive a lot of aid. New issues such as “green” (climate change funding) and “blue” (ocean preservation) finance require aid. African governments must remember to be granular in their approaches and at times focus on one particular type of financial flow. Coordination of the different types of stakeholders can lead to important gains for all. Again, take infrastructure: Our work shows that many stakeholders (including the U.S. and China) are committed to investing in the African power sector, but that means that coordination and cooperation are needed among the various actors. Sweden, for instance, is investing in the U.S. Power Africa initiative. But how about engaging China and other partners in the energy sector? The African Development Bank could play a central role in coordinating this. The good news is that Africa is increasingly speaking with one voice, as seen in the case of the Common African Position on the Post-2015 Development Agenda. The priorities above are consistent with Pillar Six of the Common African Position, which is about finance and partnerships. Clearly, African policymakers have emphasized the need to (a) improve domestic resource mobilization; (b) maximize innovative financing (remittances and long-term, non-traditional financing mechanisms); and (c) implement existing commitments and promote the quality and predictability of external financing. 
These are all relevant issues, but we should strive to have Pillar Six strengthen the five other pillars. Finance should underpin the SDGs! [1] In collaboration with the Brookings Institution, the African Center for Economic Transformation (ACET), the Collaborative Africa Budget Reform Initiative (CABRI), the Centro de Pensamiento Estratégico Internacional (CEPEI), Development Finance International (DFI), the Economic and Social Research Foundation (ESRF), and the United Nations Development Program (UNDP). |
431eb55a99f480a0fa519dde2b8e3bce | https://www.brookings.edu/blog/africa-in-focus/2015/05/15/five-takeaways-from-the-bangui-forum-for-national-reconciliation-in-the-central-african-republic/ | Five takeaways from the Bangui Forum for National Reconciliation in the Central African Republic | Five takeaways from the Bangui Forum for National Reconciliation in the Central African Republic More than two years after the predominantly Muslim Séléka rebel group overthrew the government of President François Bozizé, igniting waves of intercommunal violence between the country’s Muslim minority and Christian populations, the Central African Republic (CAR) is still working to end the cycles of violence and reunite the country. This month, the CAR took an important step toward fostering national cohesion through its week-long Bangui Forum on National Reconciliation, which concluded on Monday, May 11. The Bangui Forum brought together nearly 700 leaders from diverse groups within the CAR’s society—including the transitional government, national political parties, the main opposing armed groups (the Séléka and anti-balaka), the private sector, civil society, traditional chiefs, and religious groups—to define their collective vision for the country’s future. At breakout meetings on the themes of peace and security, justice and reconciliation, social and economic development, and governance, participants debated the different elements of the country’s peacebuilding agenda, and during the plenary session over the weekend, adopted several important recommendations, highlighted below: Ten factions of the Séléka and anti-balaka militias signed a Disarmament, Demobilization and Reintegration (DDR) agreement, which called for all combatants to give up their weapons by the time of the national elections. According to the agreement, former combatants (who have not been charged with war crimes) will either be integrated into state security institutions—the army, police or national forestry and water commission—or become beneficiaries of income-generating community development projects. Meanwhile, armed actors from other countries who did not commit war crimes will be repatriated to their countries of origin. While observers have widely applauded the conclusion of this agreement, significant challenges to its full implementation remain, including major funding gaps for the DDR program (an issue that held up DDR during the last crisis in the CAR), as well as the weak capacity of the armed groups’ leaders to exercise control over all of their members and ensure their compliance with the disarmament process. For the DDR program to succeed this time around, the government and international community must put forward sufficient funding for the proposed process and build up the state’s security institutions and development programs targeted to former combatants in order to incentivize them to leave their positions in the militias. Leaders of the two main armed groups have agreed to release all children under their control, estimated to number from 6,000 to 10,000 children, according to the United Nations Children’s Fund (UNICEF). Once the children are released, they will receive medical treatment, psychosocial support, and then will be returned to their families and communities or placed in foster care. The exact timeline for the release of all the children has yet to be determined; however, on Thursday, May 14, 357 children were released in a ceremony near Bambari. 
The Séléka and anti-balaka leaders have also granted humanitarian actors complete and immediate access to the areas where the children are located so that UNICEF and its partners can begin identifying and reuniting them with their families. Although the release and provision of emergency services to more than 300 children on Thursday is considered a great first step toward ending child suffering in the CAR, as noted by UNICEF representative Mohamed Malick Fall, the children will still “require extensive support and protection so that they can rebuild their lives and resume their childhood.” To fund its child soldier reintegration and rehabilitation efforts in the CAR, UNICEF has requested $73.9 million in 2015, but as of April 30, only $17 million has been funded. The recommendations adopted by the forum called for the elections to be postponed—to June and July for parliamentary elections and to August for the presidential one. This does not come as a big surprise, considering the sheer number of prerequisites (e.g., establishing a threshold of security, obtaining polling equipment, and training electoral staff and observers) that need to be met in order to hold the elections, and the limited time and funding made available (only 26 percent of the requested amount has been received by May 2015) to achieve these requirements. In turn, participants called for the current transitional government to make a request to the Conference of Heads of State of the Economic Community of Central African States (ECCAS) to remain in office until the elections take place. According to former prime minister and president of the CAR’s URCA party (Union pour le renouveau centrafricain), Anicet-George Dologuélé, extending the current government’s time in office until the elections take place is the most practical way forward, since assembling a new government and allowing it time to settle in and establish its work plan will take several months—by which point it will be time to replace them via the national elections. Still, some critics of the decision to keep the current government in power, including the president of the party Mouvement démocratique pour la renaissance et l’évolution de la Centrafrique, Joseph Bendouga, argue that the country should not allow an extension of the current government since it could give the current leaders the false idea that their time in power could be indefinite. Participants agreed on the structures for justice and reconciliation in the country, including a national truth and reconciliation commission, as well as broad-based, local peace and reconciliation committees. Local initiatives will often be spearheaded by traditional chiefs given their strong influence within communities (compared to the state’s weak resources and capacity to reach out to communities). Trust between stakeholders will need to be gained and strengthened, and the role of religious leaders in building peace will remain important. Participants also called for a formal investigation into cross-border crimes, especially those committed by the Lord’s Resistance Army. Restarting the CAR’s economy despite the pockets of insecurity that still exist throughout the country is foremost on the minds of Central Africans. Building inclusive economic institutions is extremely important to the country as a means of reducing the poverty and inequalities that have created the grievances and tensions between different social groups in the past. 
Opportunities for revitalizing the mining and agricultural sectors received the most attention from the forum’s participants, who proposed that the sanctions imposed on diamonds in the CAR by the Kimberley Process be removed and that seeds, tools, and other agricultural inputs be distributed to farmers to restart their production. The forum also emphasized the importance of enabling herders, who fled to neighboring countries during the conflict, to return to the CAR by replenishing their herds. While the government is facing a number of competing priorities in the allocation of its limited fiscal resources, focusing on targeted investments in the sectors that will help mobilize additional domestic revenues is crucial to creating a foundation for economic recovery and can help strengthen domestic institutions. Efforts by the international community to restore peace in the CAR should not preclude assistance toward economic development. On the one hand, these promising agreements reflect the desire of Central Africans to move past the conflict and build a more peaceful, democratic society; on the other hand, as noted by Interim President Catherine Samba-Panza, the CAR has a history of holding national debates on peace and reconciliation—five since 1980—and then descending again into crisis. Even on the forum’s last day, disorder broke out as some members of the anti-balaka expressed their dissatisfaction with the forum’s final recommendations by walking out during the closing ceremony. Two hundred to 300 anti-balaka and Séléka protesters gathered outside the forum to voice frustrations over the fact that several of their members have been put under house arrest and will face criminal trials for crimes that they committed during the conflict. As the ceremony ended, shots sounded in the street, demonstrating the fragile security situation the country still faces. Yet, unlike past national hearings on peace and reconciliation—in which political elites assembled to make decisions of national import on behalf of the entire country—this forum relied heavily on grassroots consultations and the inclusion of citizens’ voices, especially those most affected by the conflict. With strong participation from affected populations, it is now up to the transitional government and its international supporters to muster the political will to carry out the aspirations of the Central African people. |
0fa63c43412c93b5a6f68bc080d0cd26 | https://www.brookings.edu/blog/africa-in-focus/2015/07/29/obama-in-kenya-a-report-from-the-field-and-a-recap-of-the-global-entrepreneurship-summit/ | Obama in Kenya: A report from the field and a recap of the Global Entrepreneurship Summit | Obama in Kenya: A report from the field and a recap of the Global Entrepreneurship Summit The nation of Kenya was gripped by a palpable sense of excitement and outsized expectations prior to President Obama’s arrival on Friday, July 24. On the day the president arrived, offices in Nairobi were closed, banks shut down early, and the city’s emptied streets were adorned with American flags and ubiquitous posters conveying the message, “Welcome Home Obama.” One television station devoted full-time coverage to the visit. The tenor of expectation was captured by Macharia Gaitho, a columnist for the Daily Nation, who wrote that Obama’s trip to Kenya was “the most important visit by a foreign leader since independence.” While seemingly excessive, the sentiment was strongly shared by several well-regarded business leaders. There was no question that Obama’s visit would be a boost to Kenya’s confidence, shaken by al-Shabab’s horrific attacks on the Westgate Mall and Garissa University, and by the uncertainty of the nation’s relationship with the U.S. given the International Criminal Court indictments of President Uhuru Kenyatta (since dropped due to a lack of evidence) and Vice President William Ruto. Many Kenyans have also questioned why it took Obama six years after coming into office to visit his father’s homeland. However, all doubts about Kenya’s standing with the U.S. seemed to evaporate when President Obama bounded down the steps of Air Force One and embraced a waiting President Kenyatta at the foot of the stairs. The president then received a bouquet of flowers from a nine-year-old girl and proceeded to greet the dignitaries lined up on the red carpet. Most emotive was the president’s warm embrace of his older half-sister, Auma Obama, last in the greeting line. Auma then got into the president’s limousine and the two took off, embedded in a large train of security vehicles, for the hotel and a family reunion. Twenty-seven years earlier, in a story that was repeated often in the press over the weekend, Auma had picked up a 25-year-old Barack Obama at Jomo Kenyatta International Airport on his first visit to Kenya. Shortly after leaving the airport, her aged VW Beetle broke down. The story added poignancy to the sight of the two leaving the airport in the presidential limousine, as well as a sense that a definitive new chapter in U.S.-Kenyan relations was beginning. And, as Auma quipped two days later while introducing her half-brother for his speech to the nation, she appreciated the returned favor of a ride from the airport. At the core of Obama’s visit was a debate over Kenya’s identity. Was the nation a “hotbed of terror,” as CNN reported on the eve of Obama’s arrival, or, as President Kenyatta rebutted in his opening to the Global Entrepreneurship Summit, a “hotbed of vibrant culture, spectacular natural beauty, and infinite possibility”? The president’s participation in the Global Entrepreneurship Summit (GES) in Nairobi underscored his belief that entrepreneurship is the “spark of prosperity,” and that people around the world, especially young people, want to start businesses to improve their lives and communities. 
He similarly expressed his belief in a rising continent during the trip, declaring on several occasions that “Kenya’s on the move, Africa’s on the move,” and emphasizing the country’s strong middle class, high economic growth, and entrepreneurial spirit. He also noted that, in 2006 when he visited South Korea, the Asian nation’s economy was 40 times larger than Kenya’s, and, since then, that gap has been cut in half. The president, though, was not overly optimistic: In his frank, even personal, speech to the nation, Obama still identified corruption, tribalism, and ethnicity, and a lack of investment in women and girls’ education as the country’s most significant challenges, specifically pointing out that corruption costs the nation 250,000 jobs a year. Against this backdrop, the Global Entrepreneurship Summit provided the rationale for Obama’s visit to Kenya. The GES, launched in 2009 in Cairo, is especially relevant in Africa where 10-12 million youth enter the labor market every year, and youth remain almost twice as likely to be unemployed as their elders. Over the course of two days at the sprawling United Nations campus on the outskirts of Nairobi, entrepreneurs from Kenya, Africa, and more than 100 countries had the opportunity to network with each other, interact with U.S. start-up executives from companies such as Airbnb and Uber, and hear from venture capitalists from Silicon Valley. A principal theme that emerged from numerous conversations and panels was the difference in scale and experience between entrepreneurs in the U.S. and Africa. Entrepreneurs in the U.S. want to “disrupt” existing platforms to enhance their impact and value. Entrepreneurs in Africa share the desire to impact their communities and generate income, but they largely have to create systems and platforms that do not exist. One venture capitalist from Silicon Valley commented that capital is usually invested in start-ups that already have products and have identified employees for key functions. Start-ups in Africa, however, are frequently self-funded and dependent on finding a commercial niche that enables them to be sustainable. Most will never attract venture capital, and finding skilled employees is a major problem. As one entrepreneur from Ethiopia remarked, her business is “me, myself, and I,” even though she has hired 10-20 employees to fill orders as they come in. Western entrepreneurs think about their businesses as being part of an ecosystem. In Africa, it seems that entrepreneurs are more focused on being part of a value chain and finding a market for their products. As one conference participant put it, entrepreneurship in Africa is about turning challenge into opportunity. Many African entrepreneurs at GES presented compelling stories. As Dr. Shadi Sabeh, the founder of the Brilliant Footsteps Academy in Sokoto, Nigeria, put it, “My goals are to make money, positively impact my community, and do what I love to do, which is to teach.” Sabeh’s school, which has grown to a staff of 120 and 500 students in three years, is dedicated to integrating formal and informal Islamic instruction in an effort to combat extremism and promote peace in northern Nigeria. Hello Tractor, founded by Jehiel Oliver, is a start-up in Abuja, Nigeria, that uses mobile phone technology and GPS to make tractors available to farmers on a rental basis to lower the cost of the machines, enhance their use, and ensure they have proper maintenance. 
The start-up has received $180,000 in funding and aims to service 110,000 farmers and create 1,000 jobs. Eco-pads is a start-up in Kampala, Uganda, founded by Lucy Athieno with a $25,000 grant from the U.S. African Development Foundation, that makes reusable sanitary napkins available to 800 girls to enable them to stay in school. Sales have more than doubled over the last year. Tamarind Nott started a company in 2012 that relies on the traditional knowledge of the Himba women in Namibia to produce a skin cream called Namibian Myrrh that is available in Namibia and, via web sales, in South Africa. Several American companies clearly had relevance for African entrepreneurs. A representative from Uber noted that companies need to be clear about what is negotiable and what is non-negotiable. For example, given that some developing market legal systems are not ready for a shared economy, it is possible to pay for an Uber ride in Nairobi in cash (Uber also has a presence in Nigeria and South Africa). A former Facebook representative noted that the Facebook platform was developed by engineers in Menlo Park, California, for smartphones on high-speed connections. In Lagos, however, as Facebook engineers have come to understand, most smartphones do not have enough memory and only 2G networks are available, so they have had to rework the platform for the Nigerian market. The lesson imparted here: New products should be adaptable to different environments. Perhaps most interesting for African entrepreneurs, to my mind, was Ross Baird of the successful Village Capital, who emphasized that it is essential to know where the market will be in 20-30 years and discussed how to position a company to grow into that market. As a result, Village Capital has focused on early-phase investments in African start-ups at a range between $50,000 and $500,000. The firm has made 114 venture deals of less than $1 million over the last year. Village Capital has also pioneered (and recommended) peer-to-peer due diligence—in which 10-12 entrepreneurs collectively determine where their investments will have the largest impact—which has now become a key part of investment decisions. In addition, Baird noted that a focus on women entrepreneurs has also been a priority, as women-owned businesses generally have a track record of success, and last year 30 percent of Village Capital’s investments went to women-owned businesses. In the course of his visit, Obama signed a number of agreements related to fighting terrorism, curbing corruption, and promoting trade and investment. Yet perhaps his most significant “deliverable” was the candor with which he spoke to the Kenyan people and his example of overcoming tremendous odds to assume the most powerful position on earth. This message also resonated with the approximately 1,500 entrepreneurs who participated in GES and frequently face significant challenges in ensuring the success of their businesses. When President Obama began his speech to the nation, he noted that he was the first sitting U.S. president to visit Kenya. He also noted that he was the first Kenyan-American to be elected president of the United States, an observation he had rarely, if ever, made publicly before. In doing so, Obama emphasized his connection to the Kenyan people in a way that only he could and, as one Kenyan put it, redefined the Kenyan dream. |
56e1ec947c4abf8c84d0c25c2f77c891 | https://www.brookings.edu/blog/africa-in-focus/2015/09/16/metals-and-oil-a-tale-of-two-commodities/ | Metals and oil: A tale of two commodities | Metals and oil: A tale of two commodities “It was the best of times, it was the worst of times.” With these words Charles Dickens opens his novel “A Tale of Two Cities.” Winners and losers in a “tale of two commodities” may one day look back with similar reflections, as prices of metals and oil have seen some seismic shifts in recent weeks, months, and years. This blog seeks to explain how supply, demand, and financial market conditions are affecting commodity prices in somewhat different ways. Stay tuned for a deeper analysis of the trends in a special commodities feature, which will be included in next month’s World Economic Outlook by the International Monetary Fund. Low metal and oil prices are likely to have dramatic consequences for exporters of these commodities, especially as African countries have made major mineral and oil/gas discoveries in recent decades. For countries like Mozambique, Tanzania, and Uganda, where the investment phase in the extraction sector has started but the production has not, there are especially important macro-fiscal risks. Base metals—such as iron ore, copper, aluminum, and nickel—are the lifeblood of global industrial production and construction. Shaped by shifts in supply and demand, they are a valuable weathervane of change in the world economy. There is no doubt about the direction of the prevailing wind for metals in recent years. Prices have been gradually declining since 2011 (see Figure 1.SF.7). While oil prices have also dropped, their decline is more recent (prices peaked in 2014) and more abrupt. That said, in both cases, the downward pressure on prices results broadly from the abundant production built up during the era of high prices. This is now coming home to roost with lower demand from both emerging markets and advanced economies. There are important nuances, however, in the relative strength and nature of those forces. In the early 2000s, demand for metals shifted from advanced economies in the West to emerging markets in the East. China, by far the main driving force, now accounts for half of global base metal consumption. Compare that with China’s more modest consumption of 14 percent of the world’s oil, which is almost exclusively used for transportation. It is therefore no surprise that metal prices are heavily influenced by demand, and the needs of one economic giant in particular. (India, Russia, and South Korea have also increased their metal consumption, but remain far behind China.) The slower pace of investment in China in the last few years, however, compounded by concerns over future demand amid the sharp stock market decline and currency devaluation this summer, has been exerting downward pressure on metal prices. In our other tale, concerning oil prices, our view is that supply factors are playing a bigger role than demand ones. (See also our blog from last December.) OPEC’s decision to maintain its level of production and strong shale oil production in the United States—in addition to the large production capacity from earlier investment—have contributed to an unprecedented supply glut. Add to that the prospect of Iran increasing oil production following the nuclear deal; Congress possibly lifting the U.S. ban on crude oil exports; and Libya and Iraq beating many analysts’ expectations for production, despite their difficult geopolitical contexts. 
With slowing demand from emerging markets and advanced economies, we are more likely to see an era of much lower prices than we have seen in the past few years. While we have highlighted changes in demand, supply also matters in the metals story. Global production has increased across the board for most metals, owing to rapid investment in capacity in the 2000s. Strikingly, the world production of iron ore has tripled since 2003. Such resource wealth can be a boon to developing countries, but it can also pose macroeconomic challenges if it accounts for a large portion of their exports. Fluctuations in prices or changing demand from large importers such as China imply obvious vulnerabilities. Over the last decade, many developing countries have seen their dependence on metals exports deepen. For example, metals account for more than half of the total exports of Mauritania, Chile, and Niger. Beyond supply and demand, a third factor has been influencing short-run fluctuations in commodity prices. Investors can rapidly move away from what they perceive to be riskier bets, including stocks and commodities. This so-called “risk off” behavior has at times put downward pressure on prices of both oil and metals. The sell-off on August 24 is a clear example. Initially oil prices recovered, then metals also rebounded significantly. Figure: Metal and oil price indices before and after “Black Monday,” August 24. Just like a Dickens plot twist, the next moves of metal prices cannot be predicted with certainty. Futures markets clearly point to continued low prices. But it is helpful to go beyond futures and review the forces underpinning demand and supply of metals. On the demand side, the Chinese economy is projected to slow further, gradually, but with considerable uncertainty. Our simple analysis finds that 60 percent of the variance in metal prices can be explained by fluctuations in China’s industrial production. Recent further falls in Chinese industrial production could further justify metal price declines, especially considering the intended rebalancing that is shifting away from investment toward consumption. The change in the composition of China’s growth may disproportionately affect metals compared to oil for reasons like the decline in the construction sector and the rising demand for transportation from the growing middle class. On the supply side, investment in the metals sector has dropped but it is unlikely that it will lead to a significant price rebound in the near future. Indeed, low energy prices have helped in reducing costs for mining and refining, including for copper, steel, and aluminum. We also continue to discover more and more major mines outside advanced economies, which will add to global supply. Indeed, the frontiers of metals extraction have been expanding to Latin America and Africa over the past decades, and this trend is unlikely to reverse significantly. If anything, improvements in the investment climate, which drives investment in exploration and extraction, are likely to steadily continue in these regions. Ample supply is therefore likely to continue pushing metal prices further down. While both oil and metal prices are currently relatively low, there are important nuances in the underlying driving factors behind the fall. All in all, the imbalance between weaker demand and steady increase in supply suggests that the metals market is likely to see a continued glut and a “low for long” price scenario. 
If this turns out to be true, there is a risk that investment will continue to falter and lead to a sharp increase in prices down the road. |
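For readers curious about the “share of variance explained” figure cited above, such a statement typically refers to the R-squared of a simple one-variable regression. The sketch below is purely illustrative: the growth rates are hypothetical stand-ins, not the IMF’s data, and the point is only to show the mechanics of regressing metal price growth on China’s industrial production growth.

# Illustrative only: hypothetical year-over-year growth rates (percent),
# not the IMF's actual series. A one-variable OLS regression of metal price
# growth on China's industrial production growth; R^2 is the share of the
# variance in prices "explained" by industrial production.

china_ip_growth = [14.0, 11.0, 13.5, 10.0, 9.5, 8.0, 6.5, 6.0]
metal_price_growth = [25.0, 12.0, 20.0, 5.0, -2.0, -8.0, -15.0, -18.0]

n = len(china_ip_growth)
mean_x = sum(china_ip_growth) / n
mean_y = sum(metal_price_growth) / n

cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(china_ip_growth, metal_price_growth)) / n
var_x = sum((x - mean_x) ** 2 for x in china_ip_growth) / n
var_y = sum((y - mean_y) ** 2 for y in metal_price_growth) / n

beta = cov_xy / var_x                      # price response to a 1-point change in IP growth
r_squared = cov_xy ** 2 / (var_x * var_y)  # share of price variance explained by IP

print(f"OLS slope: {beta:.2f}")
print(f"Share of variance explained (R^2): {r_squared:.2f}")

On real data the same two lines of arithmetic (or any statistics package) would produce the kind of figure quoted; nothing in this sketch should be read as reproducing the IMF’s estimate.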
419910efccf9e8aa664b9a65a71c587b | https://www.brookings.edu/blog/africa-in-focus/2015/09/18/resource-funds-stabilization-parking-and-intergenerational-transfer/ | Resource funds: Stabilization, parking and intergenerational transfer | Resource funds: Stabilization, parking and intergenerational transfer Since July 2014 the price of crude oil has fallen from above $100 per barrel to below $40. This drop has brought the risks facing oil exporters into stark relief. Nowhere more so than in Africa, where there are now questions about how viable the recent oil and gas discoveries in Kenya, Tanzania, and Uganda will be. At the same time, many other African nations, including Angola, Ghana, Nigeria, and Senegal, are seeing their newly established sovereign wealth funds tested for the first time. With the volatility of oil revenues clear in people’s minds, now is a good time to discuss how such funds can be used to stabilize oil revenues and share their benefits across generations. In particular, do sovereign wealth funds make sense for all oil exporters? If so, what should they look like, and how should they be used? There are broadly three reasons why a resource exporter might set up a sovereign wealth fund: intergenerational transfer, parking, and stabilization. The relevance of each will depend on the country’s level of development. Intergenerational funds are a way for developed countries to convert below-ground natural resource assets into above-ground financial assets, like Norway’s $900 billion Government Pension Fund Global. In this way, the natural wealth is preserved for future generations. Rather than investing at home, these funds should hold a globally diversified portfolio that ideally offsets some of the country’s exposure to oil price movements. The reason is that developed countries, with ready access to international capital, should already have invested in all opportunities that yield more than the return on foreign assets. Thus, it is better to invest abroad—to diversify risk, take demand out of the economy, and limit the appreciation of the real exchange rate. In keeping with the aim of preserving wealth for future generations, the funds should make use of their long horizon to invest in assets that yield an illiquidity premium. These intergenerational funds should have tight spending rules to preserve their capital. The rule should ideally be expressed as a fixed share of total above- plus below-ground wealth, so that it is stable over time (this is equivalent to a declining share of the fund, which represents a larger fraction of total wealth as oil is extracted). This share should be slightly below the long-run rate of return of the fund in order to incorporate some precautionary savings (discussed below). While developed countries should preserve their resource wealth abroad, developing countries face more pressing needs at home. These countries need domestic investment because their cost of borrowing abroad is typically high. The financial and social rate of return on this investment is likely to be much higher than that earned on foreign assets. Adjusting the pace of oil extraction to match the economy’s capacity to absorb investment is a simple way of avoiding absorption constraints. However, this won’t typically be possible, which brings about the need for parking funds. Parking funds are designed to temporarily hold resource revenues until they can be invested efficiently. 
In the early days of a resource windfall the country may have major constraints on its ability to absorb investment. For example, there is no point in investing rapidly in schools because it takes time for teachers to train new teachers. Similarly, investing quickly in roads will drive up the cost of construction workers, so that less road is laid for every dollar spent. While these absorption constraints bind, it makes more sense to invest the revenues abroad, where they will earn a temporarily higher return. These parking funds need different asset allocations and spending rules from those of developed countries’ intergenerational funds. Their asset allocation should focus on medium-horizon assets, because they will eventually be sold to invest at home. Their spending rule should have limits on the amount spent each year and ideally direct it to domestic investment. In practice, it is better to feed this revenue through the government budget, which is better placed to identify investments with a high social rate of return, than investing directly as a separate fund. The risk created by volatile resource prices creates a case for saving in a stabilization fund, which receives revenues when the resource price is high. When the price falls, as in the current environment, governments will inevitably have to rein in their spending because it is very difficult to forecast when—or if—the price will rise again. Stabilization funds should be used to spread the fall in government spending over a few periods. This is so important because of the adjustment costs, or real frictions, associated with changing government spending. When commodity prices fall these costs might involve labor market frictions, with a reallocation of workers away from the public service, or nominal frictions, with a contraction in government spending. The typical response to a negative demand shock is to loosen monetary policy. However, in developing countries where monetary policy may be constrained (e.g., by a currency peg) or ineffective (e.g., because of underdeveloped financial markets), these demand shocks will distort the economy. In the present climate, countries that have planned ahead to accumulate stabilization funds can use them to smoothly adjust government spending, offsetting these distortions, without needing to turn to debt markets. However, the cost of these funds is that they should hold relatively liquid, but low-yielding, assets. Now that the recent commodity price boom is over, many people are asking, “What do we have to show for it?” Those from countries with sovereign wealth funds will be seeing the benefit of buffering government spending. Those without such funds, like Chad, Australia, and Ecuador, must use the opportunity to prepare for the future. Introducing rent taxes will have relatively little impact in the short term while commodity prices are low, and so will face less opposition. Doing so now, and committing the revenues to sovereign wealth funds, will prepare those countries for when the next boom begins. A lesson may be learned from Norway’s experience in the late 1980s, when an oil price collapse precipitated a recession and banking crisis. There is a sense that missing the opportunity of the last boom, and the pain of poor stabilization policy, helped create the political will needed to establish a now-lauded sovereign wealth fund. Hopefully the clarity of hindsight will allow others to similarly share their natural resource wealth efficiently and stably with future generations around the world. 
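To make the intergenerational spending rule described above concrete, here is a stylized sketch. All numbers are hypothetical: below-ground wealth is proxied by the value of remaining reserves, the fund is assumed to earn a 4 percent long-run real return, and spending is set at 3 percent of total (above- plus below-ground) wealth, slightly below that return to leave room for precautionary saving.

# Stylized sketch of a spending rule tied to total resource wealth.
# Hypothetical numbers in billions; not calibrated to any country.

fund = 0.0               # above-ground financial assets
reserves = 500.0         # value of oil still in the ground
extraction = 25.0        # oil revenue deposited into the fund each year
fund_return = 0.04       # assumed long-run real return on the fund
spend_share = 0.03       # fixed share of TOTAL wealth spent each year

for year in range(1, 21):
    fund += fund * fund_return + extraction       # investment income plus new oil revenue
    reserves = max(reserves - extraction, 0.0)    # below-ground wealth shrinks as oil is extracted
    total_wealth = fund + reserves
    spending = spend_share * total_wealth         # tied to total wealth, not to annual oil revenue
    fund -= spending
    if year in (1, 5, 10, 15, 20):
        print(f"year {year:2d}: fund={fund:7.1f}  reserves={reserves:6.1f}  "
              f"spending={spending:5.1f}  spending/fund={spending / fund:6.1%}")

Running it shows spending changing only gradually from year to year, even as oil is pumped and the fund grows, while the same spending falls steadily as a share of the fund, which is the equivalence the authors note in parentheses above.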
Note: This blog reflects the views of the authors only and does not reflect the views of the Africa Growth Initiative. |
24e2ede428b6f5cd9770624aa270d8ed | https://www.brookings.edu/blog/africa-in-focus/2015/09/30/the-sdgs-business-and-the-development-challenge/ | The SDGs, business, and the development challenge | The SDGs, business, and the development challenge The most intriguing characterization of the new Sustainable Development Goals (SDGs), endorsed by world leaders at the United Nations General Assembly on September 25, came from Amina Mohammed, assistant secretary-general of the United Nations and special advisor on Post-2015 Development Planning. During a panel discussion at the African Leadership Forum on September 24, Ms. Mohammed described the SDGs in a novel way, as “17 opportunities for investment.” Ms. Mohammed’s comment is a recognition that the private sector will play an important role in achieving the 17 SDGs, especially those related to food security, climate change, education, health, sanitation and water, gender equality, and reliable sources of energy. Why is the role of business in the SDGs important? For the first time, the private sector has a recognized role in achieving the global development agenda. Indeed, in the Millennium Development Goals (MDGs), published in 2000, there was only a passing reference to the private sector, and that was to call on technology companies to enhance cell phone availability and internet penetration. The new global development agenda is significantly different: Within the SDGs, there is an appreciation of the private sector’s role in the complex process of social and economic development. For example, Goal 8 (“decent work and economic growth”) emphasizes job creation, entrepreneurship, innovation, and small- and medium-sized enterprises. Goal 9 focuses on building resilient infrastructure, promoting inclusive and sustainable industrialization, and fostering innovation—all tasks in which business and entrepreneurs are essential. Goal 17 (“partnerships for the goals”) encourages and promotes “effective public, public-private and civil society partnerships,” which are critical to commercial success and economic development in emerging markets, especially in sub-Saharan Africa. Importantly, given that the cost of implementing the SDGs is estimated to be $4.5 trillion per year, there is a need for new collaboration models and financing instruments. Corporations and investment funds will be critical to mobilizing the needed funds. The SDGs, unlike the MDGs, reflect a common language that increasingly is understood by government, civil society, and business. While distrust may remain in certain quarters, trust clearly is improving. As Horst Köhler, the former International Monetary Fund chief and one of the architects of the SDGs, said, these goals are “our declaration of inter-dependence for the 21st century.” In the MDGs, full and productive employment and decent work for all was merely a target of the first goal. In the SDGs this target has been elevated to a goal in itself (Goal 8), which is to “promote sustained, inclusive and sustainable economic growth, full and productive employment and decent work for all.” Obviously “productive employment” and “decent work” cannot occur without a robust private sector and a conducive investment environment. Similarly, investors are waking up to the opportunity in Africa, and civil society increasingly is acknowledging the role of business in accelerating economic development. 
For example, the African Growth and Opportunity Act (AGOA) Civil Society Network issued a communiqué following the 2015 U.S.-Africa AGOA Forum in Gabon that called on member governments “to create the investment and business environment, including protection for intellectual property rights, that will attract foreign direct investment to the textile and apparel sector with a view of increasing production capacity and vertical integration.” Notably, the Post-2015 Development Agenda actually overlooks the important role the private sector can play in some goals that matter for economic development. For example, when we consider Goal 16 (“peace, justice, and strong institutions”), we should remember that businesses and entrepreneurs can assist in the development of post-conflict states and contribute to the peace-building process. This suggestion is not new: There have been instances where companies have entered conflict zones. For example, U.S. oil companies navigated the Angolan civil war in the 1970s and helped stabilize the fledgling government in Luanda in the face of intense uncertainty and pressure, especially from the U.S. government. More recently, Nespresso has been working with TechnoServe in the Yei region of South Sudan. Over the last two years, Nespresso has worked with more than 300 smallholder farmers to purchase and export 12 metric tons of fully washed green coffee. Several hundred farmers have received agronomy training while the civil war has raged. Including the private sector in the development agenda comes with an important requirement: corporate accountability. The SDGs try to capture this priority by encouraging companies, “especially large and transnational companies,” to adopt sustainable practices and integrate sustainability information into their reporting cycles and annual reports (Goal 12.6). Companies will continue to be challenged to synchronize the outcomes of their investments, social and otherwise, with global development goals. Relatedly, drawing the private sector more directly into the social and economic development process reflects the need for a clear alignment between a company’s commercial objectives and a country’s social and economic development targets. While this places an emphasis on the importance of national development strategies, global goals such as the SDGs help clarify vital global objectives and create a common approach for achieving those goals. As Jane Nelson, director of the CSR Initiative at the Harvard Kennedy School and Brookings nonresident senior fellow, commented during a September 24 forum on “Business and the SDGs: Building Blocks for Success at Scale,” the Sustainable Development Goals are best seen as a “lighthouse” for all stakeholders engaged in the development process, both as a guide to decision-making and a measurement of progress. Thus, the global development agenda remains large, the progress of the last 15 years notwithstanding. That agenda cannot be fulfilled without bringing business more directly into the development process and the development policy dialogue with government and civil society. |
664b15ecebdf829000de97c60c765e42 | https://www.brookings.edu/blog/africa-in-focus/2015/12/31/foresight-africa-2016-free-trade-agreements-and-the-implications-for-africa/?shared=email&msg=fail | Foresight Africa 2016: Free trade agreements and the implications for Africa | Foresight Africa 2016: Free trade agreements and the implications for Africa The launch of Foresight Africa: Top priorities for the continent in 2016 is on the horizon. In anticipation of this year’s report, the Africa Growth Initiative team offers the below preview of one of the major topics covered and encourages you to check out the report here. You can also join the conversation on Twitter using #ForesightAfrica. Much of the world is entering into mega-regional free trade areas (FTAs). Indeed, October 5, 2015 marked the conclusion of negotiations on the Trans-Pacific Partnership (TPP), the most significant trade agreement in recent years. The United States is now negotiating with the European Union toward the establishment of the Transatlantic Trade and Investment Partnership (TTIP). When combined, the TPP and the TTIP will cover 60 percent of global GDP. But where does that leave sub-Saharan Africa, as no country in the region is party to these agreements? According to Foresight Africa issue brief author and Brookings Global Economy and Development Senior Fellow Joshua P. Meltzer, these exclusions will impede African businesses’ ability to compete on a global scale. The 12 TPP countries, which represent 40 percent of global GDP, 25 percent of global exports, and 30 percent of global imports, clearly are set to dominate global trade in upcoming years and likely to move much trade away from Africa—trade that was not very large in the first place. For example, the value in U.S. dollars of exports from the United States to other TPP countries is 29 times the value of U.S. exports to all of sub-Saharan Africa. The infographic below (taken from this year’s Foresight Africa) shows the existing disproportionate trade distribution (in terms of exports) among TPP countries, sub-Saharan Africa, and the rest of the world. Figure 1 shows the proportions of exports of the TPP countries (and sub-Saharan Africa) on the left to the destinations on the right. Very little goes to Africa and, with the conclusion of the TPP in 2015, the proportion going to TPP countries will only grow. Table 1 shows those shares in more detail. Source (Figure 1 and Table 1): WITS World Bank, accessed on November 12, 2015. Vietnam data is from 2013. Sub-Saharan Africa will fall behind for many reasons. First, the preferential access offered through the TPP will lead to relatively higher tariffs for trade with Africa. A second reason is that, as Meltzer argues in his brief, new, tougher labor, environmental, and health and safety standards enshrined in the TPP will become de facto global standards, and African businesses may struggle to meet them. African trade may also suffer because of its growing dependence on the services sector and new rules around global supply chains, on which Meltzer elaborates in Foresight Africa this year. These changes will happen despite the fact that the African Growth and Opportunity Act (AGOA) was reauthorized in 2015 and extended to 2025. Yes, AGOA is the cornerstone of the U.S.-African trade relationship, but in this evolving trade environment, and given the underutilization of AGOA so far, sub-Saharan Africa will still fall behind. 
In fact, Foresight Africa viewpoint contributor and Africa Growth Initiative Nonresident Fellow Witney Schneidman describes the legislation as an “underutilized resource,” as it provides duty-free access to many manufactured goods—notably textiles. Indeed, this underutilization also hurts manufacturing jobs, which make up only around 6 percent of the total, a figure that has remained unchanged for decades. Without country-led efforts to create AGOA strategies, beneficiary countries won’t be able to truly capitalize on the advantages the legislation offers. For more on how the mega-regional trade agreements will affect African trade in the years ahead, make sure you check out Foresight Africa: Top priorities for the continent in 2016 and join the conversation on Twitter using #ForesightAfrica. |
d68a91ba3fac899254d43db8ab59773e | https://www.brookings.edu/blog/africa-in-focus/2016/03/22/benins-landmark-elections-an-experiment-in-political-transitions/ | Benin’s landmark elections: An experiment in political transitions | Benin’s landmark elections: An experiment in political transitions Benin is the new field of dreams and promises kept. In a year when many countries on the continent are changing their constitutions to allow for incumbent presidents to run yet again, Benin, under President Yayi Boni, is respecting the term limits set down in its constitution. Thanks in part to pressure from the population, this development is allowing for political and democratic change. Indeed, the second round of elections took place on Sunday, between Prime Minister Lionel Zinsou and cotton magnate Patrice Talon, who won 27.1 percent and 23.5 percent of the first-round votes, respectively. In the second round, Patrice Talon is believed to have won with a provisional margin of about 65 percent. Official certification from the Supreme Court is still pending, but Lionel Zinsou has publicly conceded. These elections prove that Benin is consolidating its political transition process. It joins many other African countries such as Ghana, Nigeria, Kenya, Namibia, Zambia, and Tanzania in respecting their constitutions, continuing the process of political transition, and supporting institution building. With a total of 20 leadership transitions (defined as a change in the ruler/president of the country) overall since Benin’s independence, the last five have been contested multi-party elections. Sunday’s election is the sixth successive multiparty contest since 1991 with three complete, democratic leadership transitions—from Nicephore Soglo in 1991, to Mathieu Kerekou in 1996, to Yayi Boni in 2006. Benin is unique: Only six African countries have similar records. Ghana, Mozambique, Namibia, Mauritius, Cape Verde, and Malawi have succeeded in having three uninterrupted peaceful leadership transitions over the last 20-25 years with two-term presidents. Benin’s success comes with a number of electoral innovations that are challenging the status quo and could have important implications for the rest of the continent in the future. First, in this election, Benin’s politicians attempted succession planning, albeit tentatively in many ways. Africa has more often witnessed presidents who undermine the aspirations of their cabinet and government members, creating a leadership vacuum around them that generally paves the way for changing the constitution to run again. In Benin, however, outgoing President Boni appointed Lionel Zinsou Prime Minister in June 2015, following a two-year vacancy in the job, to provide him with a platform to prepare for the elections. In December 2015, Zinsou announced his candidacy for the presidency of the republic, endorsed by Boni. The nomination and endorsement of Zinsou by the president creates a potentially welcome precedent. Second, the diverse nature of the candidates and the influence of the private sector: For the first time on the continent over the last three decades we will have two candidates vying for the highest public office who have spent the majority of their careers in the private sector. While private sector actors have traditionally played a more backseat role, funding campaigns or influencing the choice and direction of policy, they have rarely had their names on the ballot. 
In most countries technocrats, defined as specialists especially in the public policy field, have shied away from politics. Of the 33 original candidates in the first round, the top five candidates comprised one of the most technocratic short lists of presidential candidates possibly ever seen on the continent. The list included two prime ministers (one current and one former), Zinsou (formerly the head of French private equity firm PAI Partners) and Koupaki; two independent and wealthy candidates, Talon, the cotton baron of Benin, and Adjavon, the poultry baron; and two former IMF and West African Central Bank senior managers, Bio Tchane and Koupaki (again). Zinsou worked for the French government as adviser to the foreign minister, while Koupaki also worked as an advisor to President Alassane Ouattara of Côte d’Ivoire in the past. The choices for the population were between successful international technocrats and extremely successful businessmen. Choices like these are not generally available to populations on the continent. The diverse line-up in Benin could mean that once leadership transitions are institutionalized and populations’ belief in the transparency and governance of the process is established, more robust candidates are willing and prepared to participate in the process, which until now could mean jail, exile, or torture for anyone outside the ruling party. This election saw a growing involvement of the diaspora in politics: The role and importance of the diaspora in leadership transitions on the continent remains a thorny issue in many countries. While increasingly more countries make it possible for the diaspora to vote in elections, the majority of countries do not. Benin’s constitution was amended in 1995 to allow candidates with dual citizenship to stand for elections and for the diaspora to vote in elections. There are over 4 million Beninois who live outside of Benin, mostly in other African countries such as Nigeria, as well as in France and the U.S. The diaspora generally is associated with opposition parties and hence incumbent parties tend to be suspicious of them. However, as the number and frequency of multiparty transitions increase, all political parties are increasingly courting the diaspora, and in some countries the diaspora is represented in parliament. In the case of Benin, Zinsou, Koupaki, and Bio Tchane—among the top five candidates—all come from the diaspora. They spent most of their professional lives abroad working for international organizations, doing business, or serving other functions. These welcome trends imply that African leadership transition institutions are maturing. Leadership positions are moving from being occupied by professional politicians and the military to technocrats, professionals, and the private sector, thanks to a more open and participatory process. The continent has moved from a place where civil society (including the private sector) could only speak up through riots and street protests to one where they can play a direct role in the leadership transition process. From Burkina Faso to Kenya to Benin, the continent is maturing, despite setbacks in places like Burundi. Second, political platforms and second-round elections will continue to be important to the future of the leadership transition process. 
In Benin, while the incumbent party benefited from a well-organized party process, and the candidate had the support of the organized political parties through successful succession planning, 22 opposition candidates had a loosely defined coalition, called the “rupture alliance,” aimed at improving governance in the country and stopping the progression of the incumbent party candidate. The two wealthy private sector candidates, Talon and Adjavon, ran as independents with no party apparatus. With a second round, the losing candidates are obliged to coalesce into a more organized and homogeneous structure as they throw their voices behind either of the top two candidates. This process helps build institutions and strengthen the leadership transition process, providing clear choices to the population and obliging the last two candidates to improve the messages and choices they offer. As African countries move from uncontested one-party elections to more contestability, this sequential process will help build alternative coalitions and form the bases for genuine political party development over time. Moving from a loose coalition of people to a coalition of people with like-minded public policy positions will help mature the participatory process. Third, the new makeup of the candidate field is changing the amount and composition of election spending. There is no spending limit for elections in most African countries. In a country with a GDP per capita of $800, poverty remains high and free handouts in cash or kind can sway votes. In most elections, the incumbent party has the most resources. It has access to national resources and can dominate the campaign scene. In fact, an IMF study by Ebeke and Ölçer, Fiscal policy over the election cycle in low-income countries (2013), showed that government consumption significantly increases during election years, normally because incumbents vying for a new mandate use the public purse for the election. While increasingly many countries budget for and provide some resources to all candidates, these resources are a small fraction of what is spent by the incumbent ruling party. In Benin, with two of the richest private sector contenders vying for the presidency, the cost of the first round of elections was high. Candidates spent unusually large sums to get out the vote. Both Adjavon and Talon, self-made millionaires, had their own money to use for the elections. It is clear that the overall cost of the election went up as the war chest of the private sector candidates was comparable to that of the incumbent party candidate. Further analysis is needed to see if such massive private sector funding of the election substantially increased the share of public sector consumption. If this phenomenon were to continue, there would be a need to focus more on campaign finance on the continent. However, in this case, it also helped put candidates on equal footing with the incumbent party candidate. With similar financial resources, the two private sector candidates came very close to the incumbent: 27.3 percent for the incumbent party candidate and 23.5 percent and 22.1 percent for the two private sector candidates, Talon and Adjavon, respectively. Fourth, there is a place for the diaspora in African politics. Increasingly, countries on the continent like Côte d’Ivoire, Liberia, Kenya, Mali, Burkina Faso, and Nigeria are leveraging their diaspora to serve within the government. There are increasingly more ministers and prime ministers from the diaspora in African governments. 
However, the appetite for having a president from the diaspora is not yet widespread. Côte d’Ivoire and Liberia are the most common examples of this. Zinsou’s strong showing after the first round implies that populations may be more willing to consider diaspora candidates than they have previously been. In countries with a substantial part of the population outside the country this could be an important development as it broadens the choices of candidates for the population and should be encouraged. Candidates from the diaspora are often seen, wrongly or rightly, as more willing to tackle difficult policy issues and more willing to tackle corruption. As the people of Benin prepare to welcome a new president, there are already a number of lessons to draw and learn from. Overall, the Benin experience demonstrates that Africa’s leadership transition process is maturing as strong institutions are being built, and Africa’s processes are converging with the rest of the world in an era where we have politicians and business moguls vying for top public office in many other countries. |
9cb6409224aa262879220454915e02d9 | https://www.brookings.edu/blog/africa-in-focus/2016/04/05/commodities-industry-and-the-african-growth-miracle/ | Commodities, industry, and the African Growth Miracle | Commodities, industry, and the African Growth Miracle The 2016 Spring Meetings of the International Monetary Fund (IMF) and World Bank occur during uncertain times for the “African Growth Miracle.” After more than two decades of sustained economic expansion, growth in sub-Saharan Africa slowed to 3.4 percent in 2015, the weakest performance since 2009. The growth slowdown reflects lower commodity prices, declining growth in major trading partners, and tightening borrowing conditions. According to the World Bank, many of these factors—including low commodity prices and weak global trade—are expected to persist, pointing to a weak recovery for the region. GDP growth is expected to pick up to 4.2 percent in 2016 and to 4.7 percent in 2017-18. With population growth still about 2.7 percent per year, progress against poverty and growth of the emerging African middle class will slow. Africa’s pattern of exports makes it particularly vulnerable to commodity price shocks. Fuels, ores, and metals accounted for more than 60 percent of the region’s total exports in 2010-14 compared with 16 percent for manufactured goods. Following sharp declines in 2014, commodity prices weakened again in 2015. The prices of oil and metals, such as iron ore, copper, and platinum, declined substantially, accompanied by more moderate declines in those of some agricultural commodities, such as coffee. Commodity prices are expected to stabilize but remain low through 2017. A sharper-than-expected slowdown in China could have additional repercussions. The World Bank estimates that in the space of two years a 1 percentage point drop in China’s growth could result in a decline in average commodity prices of about 6 percentage points. In its January 2016 Global Economic Prospects report the World Bank proposes a policy solution to Africa’s continued vulnerability to commodities: “creating the conditions for a more competitive manufacturing sector.” Sadly, while advocating “structural reforms…to alleviate domestic impediments to growth [and] a major improvement in providing electricity,” the Bank is woefully short on specifics. This is hardly surprising. Beyond supporting improvements in the “investment climate”—structural reforms by another name—and pushing its Doing Business agenda, the Bank and the larger donor community have ignored Africa’s industrialization challenge for more than 20 years. By any measure Africa’s failure to industrialize is striking. In 2013 the average share of manufacturing in GDP in sub-Saharan Africa was about 10 percent, half of what would be expected from the region’s level of development. Africa’s share of global manufacturing fell from about 3 percent in 1970 to less than 2 percent in 2013. Manufacturing output per person is about a third of the average for all developing countries and manufactured exports per person, a key measure of success in global markets, are about 10 percent of the global average for low-income countries. For an institution dedicated to “ending extreme poverty and promoting shared prosperity,” ignoring, until the recent commodity price collapse, a sector that has the potential to create millions of well-paid jobs for people of moderate skills seems a major oversight. 
It turns out that finding policies to assist Africa to overcome its manufacturing deficit is not as simple as advocating structural reforms and more electrical power. Over the past five years the African Development Bank, Brookings, and the United Nations University-World Institute for Development Economics Research (UNU-WIDER) have jointly sponsored a multi-year, multi-country research project designed to answer the question: Why is there so little industry in Africa? On April 12, 2016 we will launch one output of the project, the book Made in Africa: Learning to Compete in Industry at Brookings. Made in Africa offers some new thinking on how Africa can industrialize. A major finding of our research is that three closely related drivers of firm-level productivity—exports, agglomeration, and firm capabilities—have been largely responsible for East Asia’s industrial success, and their absence goes a long way toward explaining Africa’s lack of industrial dynamism. In Made in Africa we spell out how African governments can address the objectives of boosting manufactured exports, supporting industrial agglomerations, and building firm capabilities. We have some advice for the Bank and the donors as well: Try to become part of the solution rather than part of the problem. On April 12, the Brookings Africa Growth Initiative, African Development Bank, and United Nations University World Institute for Development Economics Research will co-host an event to discuss these issues and more. Professor Benno Ndulu, Governor of the Bank of Tanzania, will be among the panelists. You can register to attend here. |
0fd8ff2fb90f5a7dc68146e6f7c2b074 | https://www.brookings.edu/blog/africa-in-focus/2016/04/25/african-lions-unpacking-labor-trends-and-growth-in-mozambique/ | African Lions: Unpacking labor trends and growth in Mozambique | African Lions: Unpacking labor trends and growth in Mozambique Mozambique, over the last two decades, has experienced explosive growth, with an average GDP growth rate of almost 8 percent between 1997 and 2015. Not only that, but, for the most part, Mozambique has a track record of solid macroeconomic policies, like controlling inflation, reducing the current account deficit, and lowering the country’s dependence on aid. Like many other sub-Saharan African countries, though, the rapid growth rate has not translated into substantially lower poverty rates. Indeed, while Mozambique’s poverty rates fell dramatically from 1997 to 2003, many experts attribute that trend to post-war recovery from the civil war that ended in 1992; no clear progress seems to have been made from 2003 to 2009. In a recent paper, Understanding Mozambique’s growth experience through an employment lens, and as part of the wider African Lions project, Sam Jones and Finn Tarp examine macro and microeconomic developments in Mozambique and apply labor market decomposition tools to investigate the disconnect between the country’s performance in growth and poverty reduction. In their study, the authors find that, while labor movement out of agriculture—like in classic structural transformation—has contributed to aggregate economic growth, this trend is over-concentrated in the services sector (Figure 1), which itself is experiencing a decrease in labor productivity. Figure 1: Trends in sectoral shares of employment (1996-2014) Source: Jones and Tarp’s (2015) calculations using Mozambican official statistics. As the authors point out, while labor makeup is changing, each sector’s contribution to GDP is only shifting a little (Figure 2), with minor changes in sectoral trends in labor productivity overall. Figure 2: Trends in sectoral shares of real GDP (1996-2014) Source: Jones and Tarp’s (2015) calculations using Mozambican official statistics. Thus, as seen in Figure 3, as the labor share for services has grown, its labor productivity has fallen. Manufacturing follows the same path, but to a more modest degree. Interestingly, as also shown in the figure, the mining sector, while slightly losing labor, has grown in labor productivity. The authors attribute the falling productivity in the services sector to the fact that “new workers in this sector tend to operate on an informal basis and undertake activities that are more precarious relative to existing workers.” At the same time, they credit the trend in mining to the lack of creation of “new employment posts in line with the pace of new entrants to the economy.” Overall, they note, “recent aggregate growth appears to have been driven by capital-intensive growth in the mining sector and by comparatively rapid growth of employment in services sector but typically in activities that have lower productivity than the sector average.” Thus, the authors’ main findings include: Given these revelations and agriculture’s continued low productivity, how might Mozambique continue its high growth, especially in light of the oncoming demographic dividend? The authors provide policy recommendations for continued growth. 
These include: To read the full paper, see here » Note: The African Lions project is a collaboration among United Nations University-World Institute for Development Economics Research (UNU-WIDER), the University of Cape Town’s Development Policy Research Unit (DPRU), and the Brookings Africa Growth Initiative that provides an analytical basis for policy recommendations and value-added guidance to domestic policymakers in the fast-growing economies of Africa, as well as for the broader global community interested in the development of the region. The six papers, covering Mozambique, Kenya, Ghana, South Africa, Ethiopia, and Nigeria, explore the key constraints facing African economies as they attempt to maintain a long-run economic growth and development trajectory. |
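For readers interested in the mechanics behind the labor market decomposition mentioned above, the sketch below shows one common shift-share variant (similar in spirit to the decomposition popularized by McMillan and Rodrik), in which aggregate labor productivity growth is split into a within-sector component and a between-sector, or structural change, component. The sector shares and productivity levels are hypothetical, chosen only to mimic the pattern described in the post: labor moving out of agriculture into services activities whose productivity is falling.

# One common shift-share decomposition of aggregate labor productivity growth.
# All numbers are hypothetical illustrations, not Jones and Tarp's estimates.
# For each sector: (employment share t0, employment share t1,
#                   labor productivity t0, labor productivity t1)

sectors = {
    "agriculture": (0.80, 0.72, 1.0, 1.1),
    "industry":    (0.05, 0.06, 6.0, 6.5),
    "services":    (0.15, 0.22, 5.0, 4.2),
}

p0 = sum(s0 * y0 for s0, s1, y0, y1 in sectors.values())  # aggregate productivity, start
p1 = sum(s1 * y1 for s0, s1, y0, y1 in sectors.values())  # aggregate productivity, end

within = sum(s0 * (y1 - y0) for s0, s1, y0, y1 in sectors.values())   # gains inside sectors
between = sum(y1 * (s1 - s0) for s0, s1, y0, y1 in sectors.values())  # gains from labor reallocation

print(f"aggregate productivity change: {p1 - p0:+.3f}")
print(f"  within-sector component:     {within:+.3f}")
print(f"  structural-change component: {between:+.3f}")
assert abs((within + between) - (p1 - p0)) < 1e-9  # the two components sum to the total

With these hypothetical numbers the structural-change term is positive while falling services productivity drags the within-sector term below zero, echoing the authors’ finding that reallocation out of agriculture has supported aggregate growth even as productivity in the receiving services activities has declined.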