Amid the coronavirus pandemic, companies need a crisis response coordinated by top management that gives experts and managers the autonomy to implement creative, pragmatic solutions.
The COVID-19 outbreak, caused by the coronavirus (SARS-CoV-2), is a deep humanitarian crisis that has also gravely affected the global economy. It is posing difficult, even unprecedented, challenges for business leaders, who are finding that the fast-moving situation is impervious to familiar remedial actions. By the time a response is mounted, the situation has changed, and the scale, speed, and impact of its issues have intensified. Leaders everywhere have experienced some form of such disruption, but the magnitude of the present crisis is testing the limits of that experience, which makes the struggle to avoid ineffective, reactive approaches all the more difficult.
Together with many leading companies, we have developed a better approach—a flexible structure for guiding the work—called the integrated nerve center. In an unfamiliar crisis, such as the COVID-19 outbreak, the nerve center concentrates crucial leadership skills and organizational capabilities and gives leaders the best chance of getting ahead of events rather than reacting to them.
The integrated nerve center is not a formulaic panacea. It is, rather, an efficient means of coordinating an organization’s active response to a major crisis. It is endowed with enterprise-wide authority and enables leaders and experts to test approaches quickly, preserve and deepen the most effective solutions, and move on ahead of the changing environment. In hundreds of discussions conducted in the past few weeks, we have looked at the efforts of many companies now in the process of building COVID-19 nerve centers. We feel that the insights of this common experience are of wide and pressing importance.
Discover, decide, design, deliver: Lessons from past crises
Crisis management makes four essential demands, and common failures arise from each of them. The first task of crisis management is to discover the current situation, form an accurate view of how it might evolve, and derive the implications for the organization. From discovery, leaders must move on to decide on and design the necessary immediate and strategic actions, speedily establishing a pragmatic, flexible operating model.
This model is ideally based on adequate stress testing of contextualized hypotheses and scenarios. It should also adhere to company and societal values. Finally, companies must deliver the solutions in a disciplined and efficient way, with enough built-in flexibility to accommodate late pivotal changes. In real crises, things go awry in each of these four categories:
Inadequate discovery. This is a failure to invest in an accurate, full determination of the depth, extent, and velocity of the crisis. Initial assessments typically reflect an optimism bias, as do subsequent reassessments. Eventually the false hopes embodied in these inaccurate assessments become plainly insupportable, but by then the crisis has worsened, and much valuable time and many resources have been wasted.
Poor decision making. Most poorly handled crises are defined by poor decision making. Bad decisions can result from many causes, such as acting on incomplete information (action bias). In our experience, reluctance to act until “all the facts are in” is a more common fallacy. The tendency for decision makers to analogize a new and unfamiliar situation to past experience (pattern recognition) is another serious pitfall. Groupthink and political pressure commonly lead decision makers astray. Reputations, and sometimes compensation incentives, are often at stake in large, expensive projects. Consequently, when an unforeseen problem arises, undue pressure can be exerted to push past it, its resolution disregarded or judged insufficiently important to justify revising timelines and budgets. By this dynamic, relatively minor technical issues can become major problems and even lead to catastrophic failures.
Constrained solution design. Many crises have one or more technical causes—the problem in itself—that must be addressed with tailored solutions. These solutions must be either newly invented or imported to a new domain. Responding organizations must not allow themselves to be constrained by poor or inadequate solution designs. The immediate technical solution for diagnosing COVID-19—the starting point for treatment solutions—is the effective test. A type of test known as polymerase-chain-reaction (PCR) testing, developed in China, Europe, and South Korea for the disease, has become the standard for effective testing and is now being produced at scale around the world. The test was first produced in Germany in January 2020, not long after COVID-19 appeared in China. Yet in the United States, the presence of an ineffective test delayed the adoption of the effective one for a crucial early period in the spread of the virus.
Delivery failure. For anyone with actual experience in handling a crisis, execution failure is a constant risk. Small contingent (random) failures can cause larger failures of the most well-thought-out plans. Faulty solutions can command undue loyalty from managers suffering from “operations addiction”: instead of recognizing the root problem, responsible parties look for patches to preserve the flawed response. Chaotic conditions will necessarily cause disruptions, but the presence of accountable leaders with good judgment and the freedom to act and improvise as needed can minimize execution delays and failures.
The COVID-19-response structure
The nerve center is designed to resolve these four challenges under the heavy pressures of a major crisis. Certainly, companies and institutions are facing such a crisis with the COVID-19 outbreak, which has triggered travel restrictions, border closings, supply-chain disruptions, and work stoppages across the globe. The exhibit shows one example of a COVID-19-response structure.
In this example, the nerve-center structure is organized around five teams, each responsible for a number of work streams. It is designed as an agile structure, coordinated through an integration team, but there is enough autonomy of action granted to constituent team leaders to work through bottlenecks and keep the response moving.
Nerve-center integration team
The nerve-center integration team is the coordinating head of the larger nerve-center structure. Its purpose is to set the overall tone of the COVID-19-response work, acting as a single source of truth, in real time, for all information and actions related to the outbreak and response. It must maintain close two-way communication with all teams.
It is headed by a senior C-suite leader and includes an epidemiological expert, a project coordinator, and a scenario-planning analyst. The organization should empower this team to command whatever resources it deems necessary to integrate closely with, and support the work of, the other four teams. The team’s responsibilities can be summarized as follows:
acting as the single source of truth for issue resolution
ensuring that sufficient resources are deployed where and when needed
coordinating the portfolio of remedial actions across the work streams of all teams, based on scenarios and triggers
aligning team leaders on scenarios, with the help of roundtables and other exercises as needed
Workforce protection

For most organizations, business as usual cannot be expected to reign during the COVID-19 outbreak. Organizations need to develop a plan to support employees that is consistent with conservative health and safety guidelines. The plan must be flexible enough to accommodate policy changes as needed through the outbreak. It is useful for companies to compare their efforts in this domain with the actions that other organizations of similar size are taking, to determine the right policies and levels of support for their people.
The most helpful workforce-protection models provide clear, simple language to local managers on how to deal with COVID-19 that is consistent with the guidelines provided by WHO, national health organizations (such as the US Centers for Disease Control and Prevention), and local health agencies. The model should provide managers with a degree of autonomy sufficient to allow them to deal with any quickly evolving situation. Free two-way communication is also important so that managers can monitor adherence to policies as they evolve and employees can safely express their reservations about personal safety, as well as any other concerns.
The recommended workforce-protection team includes the head of HR (team leader); the HR full-time leader; representatives from security, legal, and employee communications; and the ombudsperson. The workforce-protection team is charged with the following work streams:
developing brief policy papers, issue-escalation criteria and call trees, and actions (including preventative actions), as needed
managing multichannel communications, including confidential feedback and reporting channels
aligning policies and incentives for third-party and real-estate contractors
establishing or maintaining communications platforms to enable employees to work from home (necessary infrastructure includes a virtual private network, telephony, and broadband readiness), including, as appropriate, deployment of collaborative software tools to enable video and audio conferencing, screen sharing, “whiteboarding,” polling, chat, and other interactive capabilities
helping manage productivity through such means as staggered work times, social-distancing norms, and health checks
developing “issue maps” and clear ownership and deadlines for issue resolution
engaging with local, state, and national political leaders and health officials
Supply-chain stabilization

Companies need to define the extent and likely duration of their supply chain’s exposure to areas experiencing community transmission, across tier-one, -two, and -three suppliers, as well as their inventory levels. Most companies are now primarily focused on immediate stabilization, given that most plants in China (where few new COVID-19 cases are being reported) are now restarting. In addition to supporting supplier restarts, companies should explore bridging strategies, including supply rationing, prebooking logistics capacity (shipping, rail, and airfreight), using after-sales stock, and gaining higher-priority status from suppliers. Companies should plan to manage supply for products that may see unusual spikes in demand as they come back on line. In some cases, longer-term stabilization strategies may be necessary. Here, companies will have to use updated demand planning, optimize their networks further, and identify new suppliers. These approaches may be warranted in general to ensure enduring supply-chain resilience against risks beyond COVID-19, once the crisis is over.
The supply-chain-stabilization team will include the head of procurement (team leader), the procurement manager, a supply-chain analyst, the regional supply-chain managers, and the logistics manager. This team will manage four work streams:
ensuring risk transparency across tier-one, -two, and -three suppliers; supporting supplier restarts; managing orders; and ensuring the qualifications of new suppliers
managing ports, prebooking logistics capacity, and optimizing routes
identifying critical parts, rationing parts as needed, and optimizing locations
developing scenario-based sales and operations planning for SKU-level demand and managing the planning for production and sourcing
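The first of these work streams, risk transparency across supplier tiers, amounts to walking the supplier graph and flagging exposed nodes. The sketch below illustrates one way to do this; the supplier names, regions, and graph structure are illustrative assumptions, not real data.

```python
from collections import deque

# Hypothetical supplier records: each supplier has a region and a list
# of its own sub-suppliers. All names and regions are illustrative.
SUPPLIERS = {
    "assembler-A": {"region": "DE", "feeds": ["caster-B", "molder-C"]},
    "caster-B":    {"region": "CN-Hubei", "feeds": ["foundry-D"]},
    "molder-C":    {"region": "US", "feeds": []},
    "foundry-D":   {"region": "CN-Hubei", "feeds": []},
}
AFFECTED_REGIONS = {"CN-Hubei"}  # areas with community transmission

def exposed_suppliers(tier_one, suppliers, affected, max_tier=3):
    """Walk the supply graph breadth-first and return each exposed
    supplier with the tier at which it was found."""
    exposure = {}
    queue = deque((name, 1) for name in tier_one)
    seen = set(tier_one)
    while queue:
        name, tier = queue.popleft()
        if suppliers[name]["region"] in affected:
            exposure[name] = tier
        if tier < max_tier:
            for nxt in suppliers[name]["feeds"]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, tier + 1))
    return exposure

print(exposed_suppliers(["assembler-A"], SUPPLIERS, AFFECTED_REGIONS))
# → {'caster-B': 2, 'foundry-D': 3}
```

In practice the same traversal would run against procurement-system data, with inventory coverage attached to each exposed node to estimate the duration of the disruption.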
Customer engagement

Companies that successfully navigate disruptions often do so because they invest in their core customer segments and anticipate those segments’ needs and actions. In China today, for example, consumer demand is down but has not disappeared, far from it. People have shifted dramatically toward online shopping and ordering for all types of goods, including food and produce delivery. Companies should invest more in online channels as part of their push for multichannel distribution, including ensuring the quality and delivery of goods sold online. Keep in mind, too, that changed customer preferences may not return to preoutbreak norms.
The customer-engagement team will include the head of sales and marketing (team leader), a financial analyst, and managers for customer communications, customer incentives, and SKUs. The customer-engagement team will manage three work streams:
communicating to B2B customers (through a dedicated site) and developing scenario-based risk communications
intervening as needed across the customer journey to prevent leakage, training customer-facing employees, and monitoring customer-service execution
developing customer communications about COVID-19 situations and practices, as well as fact-based reports on COVID-19-related issues
Financials stress testing
Companies need to develop business scenarios tailored to their own contexts. Experts using analytics can define the values for the critical variables that will affect revenue and cost. Companies should model their financials (cash flow, profit and loss, and balance sheet) in each scenario and identify triggers that might significantly impair liquidity. For each trigger in each scenario, companies should define moves to stabilize the organization. Such moves could include optimizing accounts payable and receivable, cost-reduction measures, and divestment or M&A actions.
The financials-stress-testing team will include the CFO (team leader), the leader of strategy or business development, the leader of treasury, a representative from legal, and one or more financial analysts. The team will manage two work streams:
developing relevant scenarios based on the latest epidemiological and economic outlooks
assembling relevant financials data according to different scenarios, especially working-capital requirements
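The scenario-and-trigger logic of stress testing can be sketched in a few lines: project the cash balance month by month under each scenario and note when a liquidity trigger is hit. The scenario parameters, cash figures, and trigger rule below are illustrative assumptions, not recommendations.

```python
# Minimal liquidity stress test. All figures are illustrative.
SCENARIOS = {
    "rapid-recovery":   {"revenue_drop": 0.10, "months_disrupted": 3},
    "delayed-recovery": {"revenue_drop": 0.30, "months_disrupted": 6},
    "prolonged":        {"revenue_drop": 0.50, "months_disrupted": 12},
}

def months_of_liquidity(cash, monthly_revenue, monthly_cost, scenario):
    """Project the cash balance month by month; return the month in
    which a liquidity trigger (cash below zero) is hit, else None."""
    revenue = monthly_revenue * (1 - scenario["revenue_drop"])
    for month in range(1, scenario["months_disrupted"] + 1):
        cash += revenue - monthly_cost
        if cash < 0:
            return month  # trigger hit: stabilizing moves needed
    return None  # no trigger within the disrupted period

for name, s in SCENARIOS.items():
    hit = months_of_liquidity(cash=50, monthly_revenue=100,
                              monthly_cost=95, scenario=s)
    print(name, "→ trigger at month", hit)
```

A real model would track the full cash flow, profit and loss, and balance sheet rather than a single net-burn line, but the output is the same: for each scenario, when a trigger fires and therefore when stabilizing moves must be ready.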
Getting started quickly: The minimal viable nerve center
A common pitfall in nerve-center design is needless complexity. A good way of avoiding this is to aim at a minimal viable nerve center. Companies taking this approach quickly assemble the bare essentials needed to get operations up and running. The core nerve-center group, which might include all the team heads, will shape the structure, as needed, as the crisis evolves. Experience points to four essential elements that should be put in place right away.
The teams need to be staffed quickly, with individual roles, responsibilities, and accountabilities made clear. Flexibility will be an important principle, since roles will change over time, sometimes quite rapidly. Also important is that nerve-center leaders be authorized to make timely decisions, sometimes without the opportunity to syndicate with other leaders.
Meetings should be limited to those in which vital deliberations are conducted and actions decided on. They should, however, be frequent enough to foster collaboration. Ensure that meetings address essential topics and elicit the best thinking for the relevant work streams.
The responsible members for each work stream should have the opportunity to seek input from the coordinating leaders. Solutions should be tested and decisions made to commit to effective methods and set aside ineffective ones. Select meeting attendees with care: Meetings of only senior leaders tend to encourage purely upward reporting rather than constructive debate and real problem solving.
Meetings with too many frontline managers and individual contributors can become overly focused on tactical issues rather than the central problems. The difficulty of a high-quality operating cadence lies in maintaining a basic underlying structure and then allowing flexibility so that the organization can pivot when it needs to.
The nerve center will first identify the critical issues present in each work stream, with the expectation that these will evolve over time. Issues should be described in issue maps covering both risks and threats. In their totality, these maps represent the core problem statement for the crisis and allow the group to articulate and address the challenges clearly and relatively quickly. The mapping can be divided between immediate, addressable risks and unforeseen, emerging threats. Risk maps can be longer and more comprehensive; threat maps should concentrate on the biggest issues, those that could drive significant disruption as the crisis continues.
Some known COVID-19 risks, such as those posed to traveling employees, could be readily addressed with policies (such as travel restrictions).
Unforeseen threats that could arise as the crisis continues can be anticipated in “premortem” workshops. In these workshops, nerve-center teams work out possible responses, for example, to a sudden gap in the supply chain opened by policies imposed beyond the company’s control.
Once companies establish a good understanding of the critical issues across all work streams, they will find it helpful to run financial calculations (balance sheet, cash flow, and profit and loss) on issues and responses. This will project scenarios for particular issues, allowing companies to form views on issue likelihood, timing, and magnitude.
Leaders can find it extremely difficult to craft sensible goals during a crisis. Many trade-offs usually have to be made between ideal outcomes and the real constraints the organization faces. Once realistic goals reflecting those trade-offs are set, they can be assigned a few milestones and key performance indicators (KPIs) so that progress toward them can be tracked in simple ways.
A few other elements can become helpful as the nerve center evolves. For the COVID-19 crisis, these could include common operating pictures, giving a single view on the current status of the response; KPI dashboards, to confirm whether or not hoped-for outcomes are being achieved; and listening posts, which are early-warning indicators that can point out forthcoming changes in the trajectory of a crisis.
The cultural challenge
The hard truth about effective business leadership is that leaders operate within powerful cultural and social contexts. The largest organizations, with hundreds of thousands of employees, might appear, in normal business conditions, to operate according to a command-and-control structure.
The reality is more complex. While large organizations use many top-down, pyramid-like structures and processes, these work only when outcomes are predictable. Such routinized ways of working impede the creativity and flexibility that organizations need to respond at speed amid a crisis.
The exhibit of the integrated-nerve-center structure we have offered is not meant as a precise instruction manual. It is a general outline in need of contextual tailoring from organization to organization. The form described is most applicable to large corporations with global supply chains. For financial institutions, the structure would give little prominence to supply-chain stabilization and much more weight to financials stress testing. The structure is, however, adaptable for any large organization and can be effectively deployed in any crisis.
From a business standpoint, the COVID-19 outbreak is a particular kind of crisis, quite different from one affecting a single large, multinational company. It is more like the financial crisis of 2008 to 2009 in that it presents a shock to the greater part of global economic activity: all the more reason for organizations to concentrate leadership and capabilities in a fast-acting, integrated nerve center.
With senior-leadership support and participation, the nerve-center structure can provide the organizational parameters that companies need to navigate through the disruptions caused by the COVID-19 outbreak. The approach works because it enables a coordinated response led by top management while also giving experts and managers the autonomy they need to implement creative, pragmatic solutions.
As the COVID-19 pandemic sweeps across the globe, manufacturing organizations face significant operational challenges. Some companies have temporarily shuttered factories in response to government restrictions or falling demand, but others are facing significant increases in demand for essential supplies.
Frontline manufacturing staff can’t take their work to the relative safety of their homes. Plant leaders are therefore looking for ways to operate through the immediate crisis—all while preparing for a potentially much longer period of heightened uncertainty regarding demand and supply, and a lasting need to maintain enhanced hygiene and physical distancing.
Three areas of focus can help plant leaders navigate the transition from initial crisis response to the “next normal”:
Protect the workforce: Formalize and standardize operating procedures, processes, and tools that help keep staff safe. Build workforce confidence through effective, two-way communication that responds to employees’ concerns through flexible adaptation.
Manage risks to ensure business continuity: Anticipate potential changes and model the way the plant should react well ahead of the fluctuations to enable rapid, fact-based actions.
Drive productivity at a distance: Continue to effectively manage performance at the plant while physical distancing and remote working policies remain in place.
Protect the workforce
The most critical focus for every organization is to keep employees safe in an environment where repeated outbreaks are a persistent threat. To achieve this, companies can deploy a comprehensive set of policies and guidelines, including enhanced hygiene measures, provision of additional personal protective equipment (PPE) where necessary, physical distancing, and modifications to existing governance and behaviors. Protecting employees’ mental health has also emerged as a high priority, with companies in China (and elsewhere) providing counseling services to employees returning after prolonged quarantines. These measures, developed in the initial response to the crisis, can be integrated into an organization’s standard procedures as it makes the transition to next-normal operations.
Communication is key
Ramping up internal communications is vitally important, including regular sharing of information about the company’s evolving knowledge of the crisis and how it is using that knowledge to protect employees and the organization. Clarity, simplicity, and framing all matter—research from earlier epidemics shows that positive messages focused on best practices were more effective than negative messages designed to address misinformation. Frequency counts as well, as audiences need to hear a message repeatedly before fully absorbing it. And that implies consistent content, reflecting a single source of truth at the corporate center.
Finally, the best communication is two-way, with managers answering questions and engaging in an open dialogue with employees at all levels. One equipment maker, for example, asks supervisors to collect queries and concerns from frontline team members every morning. The company’s HR department then publishes an updated daily list of questions and answers, which are displayed on monitors around the factory. After the introduction of the new policy, absenteeism among shop-floor staff dropped significantly and productivity returned to pre-crisis levels. As an additional, unintended benefit, the approach uncovered a number of frontline concerns unrelated to the pandemic, allowing managers to take additional steps to boost productivity and improve workforce satisfaction.
Plant leaders are already telling us that their frontline personnel appreciate the increased frequency and clarity of two-way communication necessitated by the outbreak. Organizations can capitalize on these improvements by standardizing their enhanced communication approach, rather than letting things regress to pre-crisis norms as the situation stabilizes.
Enabling workplace physical distancing
To keep staff safe over the longer term, companies can retain and formalize appropriate parts of their emergency-response guidelines, so they become part of plants’ standard operating procedures. Such guidelines might include enhanced health surveillance, restrictions on the use of communal tools and areas, regular sanitization of equipment along with periodic deep cleans of whole workplaces, and HR policies that ensure workers can stay at home if they feel unwell. Regulatory changes also merit extra attention, as governments introduce new rules on mandatory sick pay, or requirements for employees to limit contact with products or one another.
At the onset of the crisis, some companies began to ask employees to take a digital survey before starting on-site work, confirming that they do not have any COVID-19 symptoms, sharing their travel history since their last shift, and verifying they understand new health and safety guidelines. This approach provided valuable data that could aid contact tracing (where consistent with local practices) in the event of a positive test at the plant. It also helps to reinforce the importance of following health policies and reminds employees to avoid the risk of getting others sick.
Minimizing the potential future impact of infections will require companies to alter team structures and working methods to limit contact across the workforce. One way to do this is by establishing “pods” for all on-site personnel: self-contained teams with clearly defined tasks and workspaces that can be physically and socially separated from one another as much as possible.
Organizational changes to support the introduction of pods include dedicating workers to a single production line and removing “floating” workers—for example, by making pod members responsible for collecting materials and for conducting their own routine quality checks and maintenance. Shift handover meetings can be conducted remotely, using videoconferencing technology, while the start, stop, and break times of different pods can be staggered to minimize contact in communal areas of the plant. Plants may even choose to modify shift patterns, so lines in close proximity to one another are staffed and run at different times.
Exhibit 1 shows how the pod approach might work on a packaging line. Before the changes, operators working on the line were responsible for multiple machines, supported by logistics, quality, and utility personnel who worked across multiple lines. Under the pod system, operators are assigned to fewer machines but responsible for more tasks within their work area, thereby minimizing contact with staff and equipment outside the pod.
Instead of multiple employees handling each pallet, for example, a single team member is responsible for its entire journey. Some tasks, such as quality assurance, are now conducted by remote specialists, aided by cameras and digital tools. New physical barriers guard against accidental contact between pod workers, while allowing the unimpeded movement of product.
Manage risks to ensure business continuity
The coronavirus crisis has dramatically increased risk for every business, with many experiencing shocks in both supply and demand. Manufacturing plants are at the center of that uncertainty, and their continued operation through the crisis and beyond will depend in large part on the organization’s ability to navigate these wider risks. We have written elsewhere about the necessary steps to build resilience into the wider supply chain, and plant leaders will play a central role in their organization’s response.
Plant leaders can also plan their own response to risks that could directly affect operations in their facility—starting with what to do if an employee anywhere in the plant tests positive for a COVID-19 infection. Responses can include—but would not be limited to—consulting with health authorities, quarantining the affected person (together with any other staff who were working in close proximity), and isolating and sanitizing exposed products, tools, and workspaces.
Facing higher levels of uncertainty over the medium term, plants will likely find it useful to ramp up their scenario planning, with a higher planning cadence and a wider range of potential scenarios in their analysis. When closely tied to the organization’s wider response and recovery strategy, this accelerated planning helps the plant develop strategies to accommodate substitute materials or produce hard-to-source parts in-house.
Some companies are using digital twins of their facilities to simulate operation under different staffing levels and production scenarios. This approach can support many aspects of operational planning, from evaluating the impact of changes to plant layout to determining the mix of skills that on-site teams will require.
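A full digital twin is far richer than anything shown here, but the core idea, simulating output under different staffing levels, can be illustrated with a toy Monte Carlo model. The crew sizes, line rate, and absenteeism figures below are assumptions for illustration only.

```python
import random

# Toy stand-in for a digital-twin run: estimate the expected output of
# one line under different rostered staffing levels. All figures are
# illustrative assumptions, not real plant data.
FULL_CREW, MIN_CREW, RATE_PER_OPERATOR = 10, 6, 120  # units per shift

def simulate_output(staff_rostered, absentee_rate, runs=1000, seed=42):
    """Monte Carlo estimate of expected units per shift: the line only
    runs if the crew actually present meets the minimum staffing level."""
    rng = random.Random(seed)
    total = 0
    for _ in range(runs):
        present = sum(rng.random() > absentee_rate
                      for _ in range(staff_rostered))
        crew = min(present, FULL_CREW)
        total += crew * RATE_PER_OPERATOR if crew >= MIN_CREW else 0
    return total / runs

for staffing in (10, 8, 6):
    print(staffing, "rostered →", simulate_output(staffing, absentee_rate=0.15))
```

Running the same model across layouts or skill mixes, as the text suggests, is a matter of varying the parameters rather than changing the structure.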
The transition to the next normal in manufacturing plants will require both leaders and frontline teams to develop new capabilities. The introduction of pods on the production line, for example, may call for operators with a wider range of skills, so they can complete all the tasks required in their pod or cover for absent colleagues.
New digital approaches can accelerate the capability-building process and allow employees to develop new skills remotely. Such techniques include the remote delivery of training using e-learning systems or the use of virtual-reality technologies to familiarize operators with new tasks or plant layouts. Augmented-reality systems help shop-floor staff to receive training, advice, and support from remote colleagues. Specialist contractors can use such systems to guide shop-floor staff through machine maintenance or troubleshooting.
Drive productivity at a distance
For as long as virus transmission among employees remains a risk, companies will naturally want to minimize unnecessary contact between personnel. Anybody not absolutely required on-site, including managers and many support functions, can be encouraged to work remotely as much as possible to protect the health of their shop-floor colleagues. To minimize the risk that an entire leadership cohort would need to enter quarantine at the same time, leadership staff who do need to stay on-site can be separated into at least two teams, with no physical contact between them.
As they reconfigure their operations to keep employees safe and respond to changes in the wider value chain, companies still need to maintain manufacturing performance. In many plants, leaders have long managed performance face to face, using daily shift briefings, visual management, and regular “gemba walks”—observant walk-throughs of the shop floor and wherever else the “real work” is being done. Physical-distancing and remote-working policies will make these established approaches more difficult, compelling companies to find new ways to manage shop-floor performance.
The technology necessary to support these changes doesn’t need to be expensive. Staff working off-site can use secure remote-access programs from their personal devices to handle shift handover meetings and similar activities. Some plants have equipped operators with two-way radios, assigning channels to specific teams or functional groups. This approach can actually increase the speed at which issues are communicated and resolved.
Now is a good time for companies to revisit the suites of metrics they use to track manufacturing performance. To make up for reduced in-person access to the shop floor, some factory-management teams are already beginning to identify and track leading key performance indicators (KPIs) in addition to the standard first- and second-level KPIs they usually rely upon.
Exhibit 2 illustrates this approach with a simplified cascade of KPIs from a high-speed production line. Each of the top-level performance KPIs on the left of the chart sits over a number of second-level KPIs that describe the major sources of losses experienced on the line. The leading KPIs in the third column track previously agreed-on actions designed to minimize those losses.
Monitoring how often frontline teams are cleaning, checking, and adjusting critical parts of the equipment—perhaps using sensors, if available—can give team leaders and plant managers a useful early warning of potential problems before they weaken operational performance. Historically, senior managers would rely on line leaders to review these activities in person, but with only remote monitoring possible, these data points can fill critical information gaps for managers. For example, if the number of times the infeed rails are cleaned starts to fall on a filler line, managers can follow up with the operators rather than wait for jams to reduce the line’s overall equipment effectiveness—the standard KPI that leadership teams usually follow.
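As a minimal illustration of how such leading KPIs could be tracked, the sketch below flags any agreed preventive action that is running below its target frequency. The KPI names, targets, and readings are invented for the example, not taken from any real line.

```python
# Illustrative sketch of leading-KPI monitoring for a filler line.
# All KPI names, targets, and readings are hypothetical examples.

# Leading KPIs: counts of agreed preventive actions in the current shift.
leading_kpis = {
    "infeed_rail_cleanings": {"target_per_shift": 4, "actual": 2},
    "filler_valve_checks":   {"target_per_shift": 2, "actual": 2},
}

def early_warnings(kpis):
    """Flag preventive actions running below their agreed frequency."""
    return [
        name
        for name, k in kpis.items()
        if k["actual"] < k["target_per_shift"]
    ]

print(early_warnings(leading_kpis))  # -> ['infeed_rail_cleanings']
```

In this toy version, a drop in infeed-rail cleanings is surfaced immediately, before it shows up as jams in the line's overall equipment effectiveness.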
Absenteeism rates are another important area of focus. Understandably, employees concerned about COVID-19 exposure could be reluctant to come to work, while others may be prevented from doing so by sickness or by quarantine rules. Some companies are proactively reaching out to employees the day before and the morning of their shifts to ask if they are planning to come to work, while others are offering hazard pay or soliciting volunteers to be “on call” for overtime, depending on vacancies. With advance notice of absenteeism and clear production priorities, plant teams stand a better chance of developing and executing efficient production plans.
Managers can use a skills matrix (Exhibit 3) to identify potential shortages of critical capabilities on a day-to-day tactical basis and, together with scenario modeling, guide decisions about staff training or recruitment requirements. Even a simple spreadsheet can quickly highlight problems and identify opportunities for reskilling or upskilling to improve workforce resilience.
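A simple version of such a matrix takes only a few lines of code. The operators, stations, and skill levels below are hypothetical; the point is that even a crude "qualified head count per station" calculation surfaces single-person dependencies.

```python
# Minimal skills-matrix sketch for spotting capability shortages.
# Names, stations, and skill levels are invented for illustration.
# 2 = can run the station unsupervised, 1 = in training, 0 = untrained.
skills = {
    "Ana":   {"filler": 2, "labeller": 1, "palletizer": 0},
    "Ben":   {"filler": 0, "labeller": 2, "palletizer": 2},
    "Chloe": {"filler": 2, "labeller": 0, "palletizer": 0},
}

def coverage(matrix):
    """Count fully qualified operators (level 2) per station."""
    stations = next(iter(matrix.values())).keys()
    return {
        s: sum(1 for ops in matrix.values() if ops[s] == 2)
        for s in stations
    }

def at_risk(matrix, min_cover=2):
    """Stations where one absence could halt production."""
    return [s for s, n in coverage(matrix).items() if n < min_cover]

print(at_risk(skills))  # -> ['labeller', 'palletizer']
```

Running the same check against absenteeism scenarios (removing one operator at a time) extends this into the kind of scenario modeling described above.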
In the longer term, the organization’s response to COVID-19 should accelerate the digital transformation that is already under way in many manufacturing environments. For teams working remotely or under physical-distancing guidelines, real-time data collection and advanced-analytics technologies can provide a more detailed, accurate, and up-to-date picture of plant operations.
Handheld cameras and smart glasses can give remote staff a virtual shop-floor presence, allowing them to assist frontline teams with troubleshooting tasks or even participate in gemba walks to support line supervisors and operators. Digital standard operating procedures (SOPs) and problem-solving guides can support frontline teams when managers or more experienced colleagues are not on hand. Online learning technologies can help staff develop new skills quickly, creating a more flexible, more technology-savvy workforce at every level of the organization (Exhibit 4).
The next normal is also likely to drive a change in the metrics and targets companies use to optimize manufacturing performance. Management systems that typically emphasize productivity and quality will expand to include a greater focus on flexibility (for example, the number of staff cross-trained to perform multiple tasks on the line) and resilience (the number of component shortages due to supply-chain or quality issues, or the skills that are in short supply because only a small number of employees have the necessary training or experience). Companies can reinforce those changes by adjusting targets and incentives for individual employees, such as by emphasizing adherence to health and safety guidelines. Staff could be rewarded for developing broader skill sets, reducing reliance on external contractors and increasing the overall resilience of the workforce.
The coronavirus will have long-lasting—perhaps permanent—effects on manufacturing organizations, forcing companies to restructure their operations to maintain production while protecting their workers. The coming weeks and months will remain extremely challenging for plant leaders, but the crisis also creates an opportunity to reimagine the way work is done. By accelerating the adoption of new digital technologies and by drawing on the flexibility and creativity of their frontline staff, companies have the opportunity to emerge from the crisis with manufacturing operations that are safer, more productive, and more resilient.
The specter of war is frequently invoked in discussions about the COVID-19 pandemic. Heads of state and government leaders from Donald Trump to Emmanuel Macron have employed wartime rhetoric to describe the crisis—“we are at war,” Macron declared in his March television address announcing a nationwide lockdown, while Trump has tweeted about the virus as “the invisible enemy.” And as the death toll rises in the United States, many have made comparisons to the number of those killed in the Vietnam War.
The past is not prologue, and the comparisons to war have limits and detractors (Germany’s president, Frank-Walter Steinmeier, for one, has said the pandemic is not a war but rather a “test for humanity”). Still, wartime analogies can be useful for an understanding of the potential economic consequences of this crisis. Wars last longer than downturns, and the economic cycle in which we suddenly find ourselves is unlike any peacetime cycle we have experienced in the past half century—including during the Vietnam War and in the aftermath of 9/11. In some key ways, the period we are going through resembles the fully immersed experience of a mass-mobilization wartime economy. While some European countries and parts of the United States are now starting to loosen lockdown measures, the duration of this “war” will be dictated by the time it takes to defeat the virus with effective treatments, vaccines, and immunity, and its depth will be dictated by how much and how effectively we mobilize.
Here are seven insights from a sweep through history highlighting parallels and some differences with today’s pandemic:
This could go on much longer than we anticipate. Years-long wars often don’t start with that expectation. At the onset of the First World War, in August 1914, Kaiser Wilhelm II told German troops they would be “home before the leaves have fallen from the trees,” and in England, the talk was about the war being over by Christmas. Churchill, then First Lord of the Admiralty, used the phrase “business as usual” in December 1914 to describe the maxim of Britain in the war. In the American Civil War, thousands of volunteers signed up for 90 days in the expectation of a brief conflict.
What does this mean for us today? Parts of the economy are slowly reopening, though in most cases the opening is tentative and will remain below capacity. Are we at the “end of the beginning,” or should we prepare for a resurgence in the fall or even sooner? History shows us we have been in this fog before. In most of those cases, though, unlike political leaders and the general public, military leaders—similar to some epidemiologists and medical experts in the early phase of the COVID-19 crisis—did expect the conflict to be drawn out and more painful than the conventional wisdom allowed. In Britain in 1914, for instance, the secretary of state for war, Lord Kitchener, predicted a war lasting at least three years, with fighting down to “the last million” soldiers.
Government becomes a much bigger actor in the economy. As war expands, deficit-financed public spending ramps up to levels unimaginable in peacetime—slowly at first, then suddenly as the magnitude of the conflict becomes evident. Government becomes the primary actor and purchaser in the economy. At the start of the First World War, government consumption in Britain rose from 8 percent of GDP in 1913 to 13 percent in 1914; by 1915, it had shot up to 33 percent of GDP and peaked at nearly 40 percent in 1917, according to the Bank of England’s “millennium of macroeconomic data” set, the source of the UK statistics in this article. In the Second World War, America’s government consumption rose from 15 percent of GDP in 1940 (already up from 9 percent in 1930) to 48 percent by 1943. The increase in spending was supported by both taxes and debt. The US federal deficit, which averaged 5 percent of GDP in the mid-1930s before falling to zero in 1938, ramped up to 26 percent of GDP in 1943. Federal tax receipts also rose, from 7 percent of GDP in 1941 to 19 percent by 1944 (US data cited here is mostly from Federal Reserve Economic Data, or FRED).
What’s different now? Public debt is much higher to begin with. By 2018, central-government debt was at 80 to 85 percent of GDP in France and the United Kingdom, 130 percent of GDP in Italy, and nearly 200 percent of GDP in Japan, according to International Monetary Fund data. At 106 percent of GDP, US public debt is already near its 1946 historical peak. US federal tax receipts have remained above 15 percent of GDP during the postwar period. Total assets on the Federal Reserve’s balance sheet amounted to $4 trillion in 2019—at nearly 20 percent of GDP, which is close to the ratio at the end of the Second World War—and in the COVID-19 response have increased to nearly $7 trillion. Central-bank assets in the euro area also jumped by around €800 billion to €5.3 trillion in April 2020. The public purse is thus already as stretched in many countries as it was at the end of the Second World War. Yet if the “wartime economy” continues for longer than we expect, growth in government consumption is what will keep GDP growth going as households cut back on consumption, businesses cut back on investment, and exports fall.
Wartime increases in government spending come with wartime mobilization of people and materials. Britain’s armed forces doubled in 1914 from 400,000 to 800,000—then shot up to more than four million by 1917. During the Second World War, US military personnel grew from 330,000 in 1939 to two million in the European theater alone, with frontline troops making up roughly 40 percent, according to some estimates. To sustain such mobilization there were additional resources for infrastructure, logistics, and administration in the theater, as well as increases at home for the production of machinery and equipment, vehicles, and agricultural and mining output to support the war effort.
Could such a mobilization happen this time? In one sense it may already be happening—with an “at-home mobilization” of residents being asked or required to stay home, forgo paychecks, and risk unemployment. Government spending has ramped up to finance such a reverse mobilization, paying workers directly or through their employers, just as it did soldiers in war. Across Europe’s five largest economies, more than 30 million furloughed workers continue to receive much or all of their pay via government subsidies to companies. In the United States, roughly half the Coronavirus Aid, Relief, and Economic Security (CARES) Act package includes direct payments to households ($300 billion), expanded unemployment benefits ($260 billion), and paycheck protection for employed workers ($670 billion). The US at-home mobilization currently under way lasts through July 31 and amounts to nearly 6 percent of US GDP—roughly the US military budget’s share of GDP in 1942. In France and the UK, government payments to furloughed workers alone amount to roughly 2 percent of GDP.
Such a reverse mobilization, if extended over a long period, could be in addition to a more typical “frontline” mobilization of two to three million healthcare workers, including nurses, technicians, healthcare aides, contact tracers, and testers. The labor mobilization could come with a mandated redeployment of capital and direct government contracts. One example is the $2.6 billion in ventilator contracts with Ford and GE, GM, Philips, and a half-dozen other firms, whose combined value amounts to roughly 0.1 percent of the US government’s current $3 trillion of final consumption.
Mobilization ramps up to absorb all the slack in the economy, tightening the labor market and raising inflation. The massive labor mobilization of wartime brings unemployment levels down—sometimes down to levels not seen in peacetime. Britain’s unemployment rate fell below 1 percent during the First World War as the civilian labor force shrank in size. Wages rose, and union membership doubled. The US unemployment rate in the Second World War also fell—from 17 percent in 1939 to 1 percent in 1944. Large-scale mobilization tightened the labor market and, combined with farm prices that were held high to ensure adequate food supply, contributed to inflation. In the First World War, Britain’s price index tripled from 1913 to 1920; in America, the periods of highest inflation in the 20th century, aside from the 1970s, were the years immediately following the two world wars.
Such a scenario seems implausible today. It is hard to imagine that many or most of the 30 million US workers who filed for unemployment (as of May 1, 2020), or the 30 million furloughed workers in France, Germany, Italy, Spain, and the United Kingdom, will be absorbed by mobilization. Instead of a full-scale frontline mobilization of health workers and an at-home mobilization of nearly everyone else, we may settle for a half-normal situation. The Economist calls this the “90 percent economy,” one in which travel and hospitality operate well below capacity, bankruptcies and financial hardships continue at a steady pace, and there remain persistent worries about a second or third wave of infections. In such a scenario, labor mobilization is unlikely to absorb much slack. The unemployment rate may be much higher than in prior war periods, along with a high risk of long-term unemployment, discouraged workers, and persistent distress in communities across the country. The relatively low mobilization of such an extended crisis may not contribute to inflation. In any case, over the past decade, inflation has remained persistently weak despite the longest economic expansion on record.
Wartime means major winners and losers among sectors. In recessions, economic resuscitation attempts focus on jump-starting the whole system, but in wartime economies, resources move quickly from one area to another. Governments call the shots for anything deemed strategic, from tanks to food. Britain’s steel output grew by 25 percent between 1913 and 1917; its munitions output increased 40-fold in the same period. France and Germany saw even greater increases in their munitions output. Meanwhile, sectors that depend on households’ discretionary spending can see a fall in output—sometimes enforced by constraints. Between 1941 and 1944, for instance, urban American households reduced their spending on household furnishings, appliances, recreation, and entertainment by 25 percent, according to research by the US Bureau of Labor Statistics. Spending on automobiles fell by more than 50 percent as automobile factories were retooled for military trucks, jeeps, tanks, aircraft, vehicle parts, and munitions. The dispersion of sector outcomes in wartime can be as wrenching as in the 2008–09 period (when automobile industry GDP fell by more than 50 percent, and many other industries saw 20–25 percent declines in a single year) and last much longer than in most recessions.
During this pandemic, economic loss has been disproportionately in sectors affected by the lockdowns. Sectors such as transportation, recreation, hospitality, and discretionary retail make up 50 percent of households’ discretionary spend, or about 10 percent of total GDP. These sectors are usually the ones affected by the impact of war on households’ discretionary budgets. We haven’t yet seen the reallocation in this crisis, as the government response so far is mostly in the form of transfers to households and businesses to maintain current allocations, not direct government spending to reallocate resources.
War can end with a recession. When war causes great physical destruction, as in France and Germany during the Second World War, urgently needed reconstruction can fuel long economic-growth periods—but that’s not always the case. In the United States there was a recession after both world wars, the American Civil War, and the Korean War. Government consumption shrank quickly, but households did not have the income growth to step up as economic engines. In some cases, inflation and central-bank action were additional triggers. The size and duration of the recession were affected partly by the backlash to rising public debt or inflation. America and Britain saw sharp recessions in 1920–21, with falling farm prices and worker incomes, austerity measures, and high unemployment. In Britain, for instance, unemployment rose in the years following the First World War, reaching 11 percent by 1921. Historians suggest that some conditions for the 1929 crash and subsequent depression can be traced to policy actions immediately following the First World War.
What will happen when our efforts to defeat the virus end, for instance, with a vaccine? Some of the conditions for a postpandemic slowdown have already been seeded. Governments in many countries are taking extraordinary fiscal measures, and the end of war will be signaled by a pullback in those measures. The public debt could trigger concerns and calls for cutbacks and austerity (as has happened in our living memory after the financial crisis). Household spending has already been hit and may take time to recover if unemployment continues to be high and persistent. In the corporate sector, large firms may be more likely to bounce back from such a recession; smaller firms—especially those in smaller towns and nonmetropolitan areas—tend to be more vulnerable, and many may not survive. Trade growth had already been slowing since 2012 and could slow further if companies focus on localizing resilient supply chains (especially coming on the heels of tariff and “decoupling” concerns).
The end of war can bring institutional changes and a better social contract. The two world wars were followed by periods that saw a range of attempts to improve social services and reintegrate soldiers into the workforce and society. In Britain, France, and some other European countries, major welfare reforms were enacted during or at the end of the Second World War, including the introduction of universal social security in France and the Beveridge Reforms in Britain, which created the National Health Service. In the United States, the GI Bill gave returning soldiers an opportunity to upgrade their skills and education. The social contract between institutions and individuals was strengthened.
Could that happen this time? It may depend on the extent of mobilization. In a scenario of low mobilization and a half-normal economy, the COVID-19 pandemic could potentially heighten economic insecurity, which has grown for individuals in their roles as workers, savers, and consumers over the past two decades. The situation postpandemic would be very different from wartime precedents in such a scenario, with wages and inflation remaining subdued, interest rates remaining near or below zero, and high unemployment persisting even after labor mobilization. These factors would create major social and economic challenges for government and business leaders.
On the other hand, a full-scale healthcare and at-home mobilization could put us in a different situation. Just as previous wars brought forward labor-market changes, such as greater unionization, worker benefits, and increases in female participation in the workforce, this war could accelerate changes such as universal incomes, remote work, and greater resilience for households, workers, and companies in supply chains. With these changes we could end up with a renewed social contract that improves income security, expands access to technology, and creates a rising tide of productivity and economic prosperity.
“War is hell,” General Sherman famously remarked, and the pain and suffering caused by the COVID-19 pandemic are also proving hellish for its many victims. The economics of the pandemic look similarly bleak, and the timing and strength of any recovery are still unclear. Amid the talk of U-shaped and V-shaped recoveries and forecasts for the new normal, we are also being cautioned that pandemics create unforeseeable breaks in trends and that mean reversion, or moving back toward the norm over time, may not be the most likely outcome. Wartime analogies may not all be appropriate or relevant to this crisis—but they do provide some indication of what’s likely to be an unpredictable road ahead.
The COVID-19 crisis reminds us how underprepared the world is to detect and respond to emerging infectious diseases. We must make smart investments now to simultaneously navigate COVID-19 and prepare for future pandemics.
Not the last pandemic: Investing now to reimagine public-health systems
The COVID-19 pandemic has exposed overlooked weaknesses in the world’s infectious-disease-surveillance and -response capabilities—weaknesses that have persisted in spite of the obvious harm they caused during prior outbreaks. Many countries, including some thought to have strong response capabilities, failed to detect or respond decisively to the early signs of SARS-CoV-2 outbreaks. That meant they started to fight the virus’s spread after transmission was well established. Once they did mobilize, some nations struggled to ramp up public communications, testing, contact tracing, critical-care capacity, and other systems for containing infectious diseases. Ill-defined or overlapping roles at various levels of government or between the public and private sectors resulted in further setbacks. Overall, delayed countermoves worsened the death toll and economic damage.
Correcting those weaknesses won’t be easy. Government leaders remain focused on navigating the current crisis, but making smart investments now can both accelerate COVID-19 response and strengthen public-health systems to reduce the chance of future pandemics. Investments in public health and other public goods are sorely undervalued; investments in preventive measures, whose success is invisible, even more so. Many such investments would have to be made in countries that cannot afford them.
Nevertheless, now is the moment to act. The world has seen repeated instances of what former World Bank president Jim Kim has called a cycle of “panic, neglect, panic, neglect,” whereby the terror created by a disease outbreak recedes, attention shifts, and we let our vital outbreak-fighting mechanisms atrophy. And while some are calling the COVID-19 crisis a 100-year event, we might come to see the current pandemic as a test run for a pandemic that arrives soon, with even more serious consequences. Imagine a disease that transmits as readily as COVID-19 but kills 25 percent of those infected and disproportionately harms children.
The case for strengthening the world’s pandemic-response capacity at the global, national, and local levels is compelling. The economic disruption caused by the COVID-19 pandemic could cost between $9 trillion and $33 trillion—many times more than the projected cost of preventing future pandemics. We have estimated that spending $70 billion to $120 billion over the next two years and $20 billion to $40 billion annually after that could substantially reduce the likelihood of future pandemics (Exhibit 1).
These are high-level estimates with wide error bars. They do not include all the costs of strengthening health systems around the world. A comprehensive program of health-system strengthening at all levels would cost substantially more and also contribute to effective outbreak management. Our preliminary findings call for further investigation, but we hope the overall message is clear: infectious diseases will continue to emerge, and a vigorous program of capacity building will prepare the world to respond better than we have so far to the COVID-19 pandemic.
In this article, we describe the five areas that such a program might cover: building “always on” response systems, strengthening mechanisms for detecting infectious diseases, integrating efforts to prevent outbreaks, developing healthcare systems that can handle surges while maintaining the provision of essential services, and accelerating R&D for diagnostics, therapeutics, and vaccines (Exhibit 2).
From ‘break glass in case of emergency’ response systems to always-on systems and partnerships that can scale rapidly during pandemics
Responding to outbreaks of infectious diseases involves different norms, processes, and structures from those used when delivering regular healthcare services. Decision making needs to be streamlined; leaders must make no-regrets decisions in the face of uncertainty. But much of our present epidemic-management system goes unused until outbreaks happen, in a “break glass in case of emergency” model. It is difficult to switch on those latent response capabilities suddenly and unrealistic to expect them to work right away.
A better system might be founded on a principle of active preparedness and constructed out of mechanisms that can be consistently used and fine-tuned so they are ready to go when outbreaks start (Exhibit 3). We see several means of instituting such an always-on system. One is to use the same mechanisms that we need for fast-moving outbreaks (such as COVID-19) to address slow-moving outbreaks (such as HIV and tuberculosis) and antimicrobial-resistant pathogens. Case investigation and contact tracing are skills familiar to specialists who manage HIV and tuberculosis. But few areas have deployed their experts effectively in responding to the COVID-19 pandemic.
Another way to build active preparedness is to form cross-sector partnerships—something that becomes much more challenging during a crisis. The private sector has generally been willing to help during the COVID-19 crisis, but many companies have had trouble finding effective channels. The Coalition for Epidemic Preparedness Innovation (CEPI) represents a model for always-on partnerships across sectors. It was founded in 2017 as a not-for-profit platform to accelerate the development of vaccines against emerging infectious diseases. When the COVID-19 outbreak began, the organization pivoted from studying a wide set of diseases with epidemic potential to focus much of its attention on the new threat. Along with the Gavi alliance and others, CEPI has been an important vehicle for ensuring that vaccine-development efforts for COVID-19 hit the ground running.
Governments can also maintain their information-sharing practices between major outbreaks and then ramp them up when outbreaks start. South Korea, for example, built an always-on disaster- and safety-information system to capture risk information in real time following its experience in responding to MERS. The system brings together data, including localized geospatial information, from 11 existing disaster-management systems and 16 government ministries. It includes a rapid emergency-approval system for diagnostic-testing kits. As COVID-19 spread, South Korea activated that approval system to scale up testing quickly.
The principle of active preparedness might also lead governments to strengthen other aspects of pandemic response, such as the development of diagnostics and therapeutics for emerging infectious diseases (which might focus on known gaps between epidemics), the manufacturing of personal protective and medical equipment, and the sharing of information. Predefining response roles for different stakeholders at the global, national, and local levels is also an important part of active preparedness, since well-defined roles prevent delays and confusion when an outbreak occurs.
Last, governments can keep outbreak preparedness on the public agenda. Iceland offers an example of how to do that effectively. Since 2004, the country has been testing and revising its plans for responding to global pandemics. Authorities there also encourage the public to take part in preparing for natural disasters. The government’s efforts to heighten public awareness of the threat posed by infectious diseases and to engage the public in the necessary response measures aided the country’s successful response to the COVID-19 pandemic.
To build always-on systems around the world, an up-front two-year investment of $20 billion to $30 billion and ensuing annual investments of $5 billion to $10 billion (for a ten-year total of $60 billion to $110 billion) would go into the following areas:
building and maintaining high-quality, flexible outbreak-investigation capacity in all geographies: most countries have a field-epidemiology-training program of some kind, but many of them are underfunded and place their graduates onto uncertain career pathways; strengthening such programs is likely to be one of the most effective investments that a country can make in developing its outbreak-investigation capacity
supporting epidemiological-response capacity with emergency operations centers (EOCs) that function during all types of major crises
maintaining robust stockpiles of medical supplies and emergency supply-chain mechanisms at the subnational, national, or regional levels (depending on the setting)
conducting regular outbreak simulations and other cross-sectoral preparedness activities
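The ten-year total quoted above follows from simple arithmetic: two up-front years plus eight further years of annual spending. A quick check of the quoted range (figures in billions of dollars):

```python
# Verify the ten-year cost range: a two-year up-front investment
# plus annual spending in each of the remaining eight years.
def ten_year_range(upfront_lo, upfront_hi, annual_lo, annual_hi, years=10):
    follow_on = years - 2  # annual-spending years after the up-front phase
    return (upfront_lo + follow_on * annual_lo,
            upfront_hi + follow_on * annual_hi)

# Always-on response systems: $20B-$30B up front, then $5B-$10B per year.
print(ten_year_range(20, 30, 5, 10))  # -> (60, 110), i.e., $60B-$110B
```

The same formula reproduces the ten-year totals quoted for the other investment areas in this article.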
From uneven disease surveillance to strengthened global, national, and local mechanisms to detect infectious diseases
Retrospective analysis of tissue samples shows that SARS-CoV-2 was circulating in a number of countries well before it was first recognized. Failures to detect the disease meant that chains of transmission had been firmly established before countries began to respond. Such problems occur because disease surveillance is often based on old-fashioned practices: frontline health workers noticing unusual patterns of symptoms and reporting them through analog channels. Most countries are far from realizing the potential of advanced analytics to supplement traditional event-based surveillance in identifying infectious disease risks so that authorities can initiate efforts to stop individual chains of transmission.
We have begun to see wider use of nontraditional data during the response to the COVID-19 pandemic—for example, the use of mobility and credit-card-transaction data to monitor compliance with public-health measures—but there is potential to do much more (Exhibit 4).
Stopping individual chains of transmission requires strong detection and response capabilities at the national and local levels. Those capabilities are important to have in place across the globe, especially in parts of the world where frequent human–wildlife interactions make zoonotic events (transmission of pathogens from animals to people) more likely. Many developing countries will need external funding and support to build up their disease-surveillance systems. Donor countries might think of their investments in those systems as investments in their own safety.
Recognizing that one country’s infectious-disease threat is a threat to all nations—a lesson reinforced by outbreaks of SARS in Toronto, cholera in Haiti, MERS in South Korea, and Zika across the Americas—previous generations created the International Health Regulations (IHR) to promote cooperation and coordination on outbreak response. However, compliance with the IHR has been imperfect because countries may be reluctant to suffer the economic consequences of admitting to a major outbreak. Weak cooperation efforts were identified as a factor in the slow initial response to the West Africa Ebola outbreak. As the COVID-19 crisis continues, leaders might find reason to renew their commitment to global and regional mechanisms for coordinating outbreak responses.
Such an agenda might include deepening understanding of viral threats around the world, renewing and strengthening commitments to sharing data on infectious diseases, taking steps to limit the trade in wildlife, cooperating more extensively on R&D, and ensuring that information is widely accessible. An investment program of $10 billion to $15 billion for the first two years and $4 billion to $6 billion per year thereafter (for a ten-year total of $42 billion to $63 billion) would pay for the following:
significantly strengthening disease-surveillance systems (including for animal health) in low- and middle-income countries and promoting their interoperability to improve compliance with IHR; investments at the local and national levels would help pay for the technology systems and human capacity needed to detect pandemic-prone pathogens
addressing surveillance gaps in high-income economies through investment at the national and local levels
developing stronger regional surveillance networks in Africa, Asia, and South America
supporting the development and global rollout of advanced technologies for disease surveillance
From waiting for outbreaks to an integrated epidemic-prevention agenda
While we cannot prevent all epidemics, we can use the tools in our arsenal to prevent many of them. Three approaches stand out: reducing the risk of zoonotic events, limiting antimicrobial resistance (AMR), and administering vaccines more widely (Exhibit 5).
Zoonotic events, in which infectious diseases make the jump from an animal to a human, touched off some of the most dangerous recent epidemics, including COVID-19, Ebola, MERS, and SARS. Zoonosis can’t be eliminated, but its occurrence can be reduced. Areas with high biodiversity and places where humans frequently encounter wildlife present the greatest risk of zoonotic events and therefore require special attention.
Another root cause is ecosystem degradation, which makes zoonotic events more likely by increasing interactions between humans and wildlife. Scientists have estimated that a large portion of zoonotic-disease outbreaks can be linked to changes in agriculture, land use, and wildlife hunting over the past 80 years. Economic incentives, legal changes, and public education can lessen contact between humans and wildlife and help protect forests and wilderness areas, thereby decreasing the likelihood of zoonosis. There is also much more to learn about the threats we face through wider mapping of the viruses that exist in animal populations.
Limiting AMR—the evolution of pathogens to be less susceptible to antimicrobial agents—is another important way to prevent epidemics. AMR is a public-health crisis to be managed in its own right. It is also a potential accelerant of future outbreaks: as pathogens become resistant, diseases that are currently controllable can spread more widely. Fortunately, managing AMR requires many of the same tools and techniques that support responses to acute outbreaks, including surveillance, case investigation, information sharing, and special protocols for healthcare settings. Efforts to improve AMR management, therefore, not only strengthen outbreak-response capabilities but also help prevent outbreaks in the first place.
Finally, the unprecedented R&D effort that has been launched to develop a vaccine against COVID-19 serves as a reminder that we are not realizing the full benefit of existing vaccines. Recent outbreaks of measles, for example, show that places with lower vaccination rates are more susceptible to diseases that vaccines can prevent. Achieving full global coverage of all of the vaccines in our arsenal would save millions of lives over the coming decades. It will be especially important to jump-start immunization efforts after the current pandemic with catch-up campaigns for children who have missed scheduled vaccines.
The approaches we have described represent important steps toward preventing outbreaks. We estimate that it would cost approximately $20 billion to $30 billion for two years and then $5 billion to $12 billion per year thereafter (for a ten-year total of $60 billion to $126 billion) to limit human exposure to wild animals, map more of the global virome, slow the spread of AMR, and close the global immunization gap.
From a scramble for healthcare capacity to systems ready to surge while maintaining essential services
Exponential case growth during the early phases of the COVID-19 pandemic compelled officials in some countries to rapidly redirect much of their healthcare capacity to treating patients with COVID-19. Most health systems have met this challenge, but future waves of COVID-19 or other epidemics may provide sterner tests (Exhibit 6). To prepare, health systems can establish plans detailing how capacity can be diverted to pandemic management and how additional capacity can be added quickly (for example, by converting nonmedical facilities to temporary healthcare facilities and by establishing field hospitals). Some places used existing plans of that type to respond to the COVID-19 pandemic; others created emergency plans during the outbreak. More can be done to codify and improve such plans. Not all health-system gaps around the world can be addressed in the short term, but tools such as the Service Availability and Readiness Assessment (SARA) and joint external evaluations (JEEs) can help in assessing overall system readiness and identifying the highest-priority needs.
Surge-capacity plans for pandemics should account for the need to maintain essential healthcare services (Exhibit 6). It does little good to prevent 1,000 epidemic deaths if 1,000 other people die because they couldn’t obtain healthcare. In addition to the deaths attributed to COVID-19, the pandemic has resulted in excess short-term mortality for reasons such as delays in urgent care for acute conditions. The US Centers for Disease Control and Prevention, for example, has estimated that approximately 5 to 10 percent more deaths than normal have occurred during the COVID-19 outbreak, excluding those that are fully attributable to the disease itself.
In the long term, epidemics also tend to increase mortality because people defer preventive measures (such as routine immunization) and care (such as diabetes management). Similar challenges arose during prior outbreaks. Amid the 2014–16 Ebola outbreak in West Africa, decreases in healthcare delivery led to setbacks in non-Ebola care, with more than 1,000 measles cases resulting from reduced vaccination coverage. Similarly, the 2010 earthquake and ensuing cholera epidemic in Haiti stalled improvements in the mortality rates for children younger than age five to a greater extent than could be directly attributed to those events (Exhibit 7). Last, the response to the COVID-19 pandemic has increased the burden of mental illness and caused an economic downturn that could worsen the health of many people.
Certain investments can help prepare healthcare systems to handle surges while delivering essential and routine services. An initial two-year outlay of $5 billion to $10 billion and yearly spending of $2 billion to $4 billion thereafter (for a ten-year total of $21 billion to $42 billion) would pay for the following actions:
conducting relevant assessments (such as SARA and JEEs) to highlight gaps and address the challenges identified in scaling healthcare capacity
strengthening health systems in targeted ways: while building resilient health systems around the world is a multidecade agenda, closing the largest gaps in care capacity offers disproportionate benefit (the total cost of building high-quality, resilient health systems will be far higher than the cost of closing capacity gaps and goes beyond the scope of the analysis presented in this article)
planning explicitly to manage secondary health impacts and maintain continuity, including task shifting and expanded use of telehealth
improving the use of real-time data to provide early warnings of secondary health consequences (for example, mortality in excess of historical baselines, home-birth rates, and short-term immunization rates) and to share information across entire healthcare systems
employing alternative care-delivery models, such as immunization and family-planning campaigns
From underinvestment in R&D for emerging infectious diseases to a renaissance
Humans have done more to overcome the threat posed by infectious diseases in the past 100 years than during the previous 10,000. The widespread availability of antibiotics allows us to manage most bacterial infections. HIV remains a serious condition, but it isn’t usually an immediately life-threatening one for people with access to antiretroviral therapy, thanks to the innovations of the past 35 years. And the past decade has seen remarkable progress in our ability to cure hepatitis C.
However, important gaps remain. Public-health leaders have frequently called attention to the threat posed by emerging infectious diseases. Even before the COVID-19 outbreak, the pandemic threat posed by known pathogens such as influenza and by an unknown “pathogen X” was well understood. Innovation in antibiotics is not keeping pace with the rise of antimicrobial resistance. Current regulatory and incentive structures fail to reward innovations that can help counteract emerging infectious diseases or resistant bacteria. It is difficult for companies to project the financial returns from interventions for diseases that emerge sporadically and may be controlled before clinical trials are complete (as happened during the West Africa Ebola outbreak). That is especially true of interventions for diseases that mainly affect people in low-income countries.
R&D efforts in response to the COVID-19 pandemic have been unprecedented: hundreds of vaccine and therapeutic candidates are being evaluated. While these efforts have been extremely exciting, many eyes will also be focused on whether market dynamics (such as economics, competition, and demand) in the coming months demonstrate that healthy markets are possible for pandemic-response products, and how those dynamics will affect incentives for future development.
Building on the momentum created by COVID-19-related R&D, there is potential to spark a renaissance in infectious-disease R&D (Exhibit 8). The renaissance might focus on several necessities that the response to the COVID-19 pandemic has highlighted. One necessity is a portfolio of options. We can be cautiously optimistic about the potential of an effective COVID-19 vaccine being available during 2021—but only because so many candidates are in the works. Another necessity is flexible manufacturing capacity that can be deployed rapidly to make massive quantities of the most effective vaccines and therapeutics. A third necessity is intervention across a range of potential outbreak pathogens, requiring active programs for more than ten diseases.
Delivering such necessities will require building on the early success of initiatives such as CEPI to reimagine product-development pathways, from funding models and collaboration platforms to regulatory review and access agreements. Spending $15 billion to $35 billion in the first two years and $4 billion to $6 billion per year thereafter (for a ten-year total of $47 billion to $83 billion) would fund these activities:
accelerating the development of diagnostics, therapeutics, and vaccines against known threats—including influenza, for which effective R&D might yield significant advances
accelerating the development of next-generation antibiotics to counter the threat of AMR
establishing and funding platforms for the development of diagnostics, therapeutics, and vaccines against emerging infectious diseases
maintaining the capacity to manufacture five billion doses of vaccine and large quantities of therapeutics
Bringing it all together
As we continue to respond to the COVID-19 pandemic, countries should make deliberate investments to reduce the chance of such a crisis happening again. We estimate that an initial global investment of $70 billion to $120 billion over the next two years ($35 billion to $60 billion per year), followed by an investment of $20 billion to $40 billion per year to maintain always-on systems, would significantly reduce the chance of a future pandemic. Those figures, totaling $230 billion to $425 billion over the next decade, include spending at the global, country, and subnational levels (Exhibit 9).
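The ten-year totals quoted for each of the four investment streams follow the same arithmetic: a two-year initial outlay plus eight further years of sustaining spend. A short Python sketch (figures taken directly from the estimates above) reproduces them:

```python
# Quick arithmetic check of the investment estimates quoted above
# (all figures in billions of US dollars). Each stream's ten-year
# total is a two-year initial outlay plus eight further years of
# sustaining spend.
streams = {
    # name: (two-year low, two-year high, annual low, annual high)
    "disease surveillance": (10, 15, 4, 6),    # quoted total: 42-63
    "epidemic prevention": (20, 30, 5, 12),    # quoted total: 60-126
    "health-system surge": (5, 10, 2, 4),      # quoted total: 21-42
    "infectious-disease R&D": (15, 35, 4, 6),  # quoted total: 47-83
}

totals = {
    name: (two_lo + 8 * yr_lo, two_hi + 8 * yr_hi)
    for name, (two_lo, two_hi, yr_lo, yr_hi) in streams.items()
}

for name, (lo, hi) in totals.items():
    print(f"{name}: ${lo} billion to ${hi} billion over ten years")
```

The headline global figure also includes spending beyond these four streams, so it is not simply the sum of the stream totals.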
The playwright Edward Albee once said, “I find most people spend too much time living as if they’re never going to die.” So it is with the global response to infectious diseases: we have spent too much time behaving as though another deadly pathogen won’t emerge. Outbreaks of SARS, MERS, Ebola, and Zika led to some investments in pandemic preparedness over the past 20 years, but few of them produced the lasting, systemic changes needed to detect, prevent, and treat emerging infectious diseases. And now, even with all of humanity’s knowledge and resources, hundreds of thousands of people have been killed by a disease that was identified only six months ago. The COVID-19 pandemic won’t be the last epidemic to threaten the world. By taking action and funding changes now, we can better withstand the next one.
Cities are at the center of global economic activity. They have also been hubs for the spread of COVID-19. City governments have had to shoulder a large share of the fight against the virus. Many have acted decisively to curb its spread and to lower infection levels. They have restricted the movements of citizens, put in place the resources to quickly diagnose and manage the sick, and improved hygiene and safety in public places.
The success of these measures, in place for months, has been accompanied by inevitable negative effects on the economy and social activity (such as education). The pressure to reopen has mounted, and in some cities early attempts to do so resulted in flare-ups of COVID-19. The balance between protecting public health and restoring the economy is now absorbing the attention of municipal authorities everywhere.
Given the danger of resurgence, cities seeking to restart their economies must roll back restrictions with care, ensuring that the pandemic remains under control. Cities are using several metrics to evaluate the public-health situation. The most important of these are the absolute number of infections (including new cases, recovered cases, and active cases) and the reproduction number (Rt): the average number of people who become infected by an infected person. Our findings add a third important indicator, resident mobility. Together with the number of infections and Rt, it creates a more rounded view of the state of the pandemic in cities.
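To make the Rt concept concrete, here is a minimal illustrative estimator. It is not the method the cities or public-health agencies used; it simply assumes a fixed mean serial interval and compares new cases in the latest window with the window one serial interval earlier:

```python
# Minimal illustration (not an official method): estimating Rt from a
# daily new-case series, assuming a fixed mean serial interval.
def estimate_rt(new_cases, serial_interval=5):
    """Crude Rt estimate: new cases in the latest window divided by
    new cases one serial interval earlier."""
    if len(new_cases) < 2 * serial_interval:
        raise ValueError("need at least two serial intervals of data")
    recent = sum(new_cases[-serial_interval:])
    previous = sum(new_cases[-2 * serial_interval:-serial_interval])
    return recent / previous if previous else float("inf")

# A declining epidemic: each window has fewer cases than the last,
# so the estimate falls below the crucial threshold of 1.
falling = [100, 90, 81, 73, 66, 59, 53, 48, 43, 39]
print(round(estimate_rt(falling), 2))  # → 0.59
```

An estimate below 1 indicates shrinking transmission; above 1, expanding transmission. Real-world estimators additionally account for reporting delays and uncertainty in the serial interval.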
Given the danger of resurgence, cities seeking to restart their economies must roll back restrictions with care, ensuring that the pandemic remains under control.
This mobility metric allows cities to generate new insights into how they are coping with the virus at the initial stages of reopening and to gauge the effectiveness of their initiatives. Changed mobility patterns, as we shall see, can be correlated with the slower spread of the coronavirus, and some approaches to reopening have been correlated with desirable public-health outcomes. Leaders can use these findings to help inform their reopening plans. While direct evidence of causation is difficult to establish, given the multitude of measures implemented simultaneously, cities can look at the associations between different interventions and public-health outcomes.
Which factors should be tracked?
The story of each city is unique, since the speed of transmission depends heavily on the density and mobility of populations and their local cultures. Medical researchers around the world have mobilized to improve our understanding of the virus’s biological transmissibility and how best to fight it. At this stage of the pandemic, scientific proof of causation about what controls the virus’s spread in major cities has not been established. While fully recognizing that correlation is not causation, our research suggests an elevated degree of correlation between certain initiatives to reduce social interaction and the desired outcomes in controlling and slowing the spread of COVID-19. This knowledge can be useful to city governments.
Our sample contains many of the biggest cities in the twenty countries with the highest number of registered infections—Berlin, London, Madrid, Milan, Moscow, New York, Paris, and Rome. We also investigated cities, such as Lisbon, at the other end of the spectrum, with low levels of infection. Our sample thus covered cities that were grappling with infection rates ranging from above 1.5 percent of the population (tens of thousands of sick people urgently needing care) to below 0.2 percent of the population (patients numbering only in the dozens). To better understand the effective reproduction number, we researched the relationships of over 50 different factors, using multivariable regression in time and across cities. These factors included 13 categories of measures that cities had implemented (such as physical distancing and face-covering requirements), different mobility types as measures of social interaction, and data on weather and population (including population density, totals, and area).
The analysis yielded two important results:
Measuring mobility indicators is among the most effective ways to predict the movement of Rt over the next seven to 14 days. The mobility indicators explained around 80 percent of the Rt variation in time. Cities that most reduced social interaction (as measured by mobility indicators) achieved correspondingly larger reductions in Rt.
Among the types of mobility, the strongest correlations emerged from intercity mobility, measured as the use of public and private transportation. Much weaker correlations were found for local mobility, such as visits to local retail stores.
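The lagged relationship between mobility and Rt can be sketched with ordinary least squares. The example below uses synthetic data and a hypothetical ten-day lag purely for illustration; the analysis described above used real city data and more than 50 factors:

```python
# Illustrative sketch (synthetic data, not the article's dataset):
# regressing Rt on a mobility indicator lagged by 10 days, in the
# spirit of the multivariable analysis described above.
import numpy as np

rng = np.random.default_rng(0)
days = 120

# Mobility declines from prepandemic levels (1.0) toward 0.2, with noise.
mobility = np.clip(np.linspace(1.0, 0.2, days) + rng.normal(0, 0.02, days), 0, 1)

LAG = 10  # assumption: today's mobility influences Rt ~10 days later
rt = 0.5 + 2.5 * mobility[:-LAG] + rng.normal(0, 0.05, days - LAG)

# Ordinary least squares: Rt[t + LAG] ~ a + b * mobility[t]
X = np.column_stack([np.ones(days - LAG), mobility[:-LAG]])
coef, *_ = np.linalg.lstsq(X, rt, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((rt - pred) ** 2) / np.sum((rt - rt.mean()) ** 2)
print(f"slope = {coef[1]:.2f}, R^2 = {r2:.2f}")  # lagged mobility explains most Rt variation
```

The positive slope and high R-squared mirror the qualitative finding: reduced mobility today is associated with a lower Rt one to two weeks later.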
Using mobility as an additional metric to augment the case count and the Rt number, we looked at how cities were coping with the spread of COVID-19. We found that when lockdown and quarantine measures were first introduced, most of the cities faced very serious conditions: high case counts and an Rt above 3 or even 4. Once urgent steps were put in place, such as limiting the number of visits to public places and local businesses, the trajectory of new cases began to flatten, and the Rt level rapidly fell below 1. This is a crucial threshold, meaning that transmission is falling rather than expanding (Exhibit 1).
These cities fully recognized that their initiatives would probably have a severe impact on economic activity but nonetheless regarded them as necessary during the pandemic. Two groups of typical initiatives were the most widely implemented, with the added rule that facial coverings should be worn in public at all times.
The initiatives correlating with the most substantial reduction in the spread of the virus (case count and Rt) were stay-at-home measures, sometimes called lockdowns. Except for essential workers, governments restricted the movement of people outside their homes, often imposing fines for violating the orders. Residents by and large complied with (and sometimes anticipated) them, at least partly from fear of infection. The stay-at-home measures correlate more closely with a reduction in viral spread than physical-distancing measures do (see below).
The measures were accompanied by efforts to increase public awareness, with literature and media announcements. Many offices, stores, and manufacturing facilities were closed, and public events were canceled. Cities regulated mobility, allowing unrestricted movement mostly for essential workers. As the spread of the virus declined, public officials recognized the limits on the duration of such measures because of their adverse impact on the economy. As Exhibit 2 shows, mobility indicators help the authorities track the reduction in social interaction. That in turn helps them understand the expected change in the number of cases and in Rt over seven to 14 days.
The second set of initiatives focuses on optimizing physical distancing when people do leave their homes and on regulating cleaning and hygiene. These measures are usually put in place while stay-at-home measures are in effect; they also stay in place when the stricter measures are lifted.
The most important of these measures are the maintenance of a minimum distance (such as 1.5 to 2.0 meters) between each person in public places and the prohibition of meetings of more than two people. The wearing of masks is mandatory—in some cases, whenever people leave their homes and, in others, when they use public transportation or enter stores. Another important initiative that city authorities are taking is to set sanitary and hygienic standards for local businesses. The implementation of these measures correlates with some reduction in new cases and Rt, but less than stay-at-home measures do.
Mobility restrictions and the effects of their removal
City governments seek guidance on the criteria and time lines for lifting COVID-19 restrictions. We believe our analysis will be helpful. At the lowest point, mobility levels in the cities we studied ranged from 10 to 25 percent of prepandemic levels. The reduced mobility brought down the number of new cases and the Rt levels. Once restrictions on mobility are lifted, it is reasonable to expect that the downward trend in the spread of the virus may slow or stop. Cities must therefore guard against the real risk that the infection rate will move back again to epidemic levels—that is, to an Rt of more than 1. We observe that most cities are succeeding in increasing mobility while keeping Rt stable, though none of them has attained anything close to normal levels of mobility.
Significantly, in some cities mobility revived even before restrictions were lifted. In New York, for example, mobility reached 43 percent of normal, from a low of 25 percent, while restrictions were still in place. Rt increased in New York as well but stayed below 1. Cities need to keep an eye on the number of new infections and on Rt levels and to impose further restrictions when the numbers rise unacceptably.
Our research suggests that two approaches are correlated with avoiding a viral resurgence. The first combines conservative timing and very low infection levels. The second is linked to urban mobility levels and focuses on preventive measures in urban public-transportation systems. The two approaches are not counterposed but rather express different, partly sequential aspects of controlling the virus and reopening the economy.
‘Near zero’ virus levels
A conservative approach to reopening is needed, commencing at the point where the virus is at near-zero levels. Our research suggests that cities can safely begin lifting restrictions when almost no new cases are being registered and Rt has held at 0.6–0.7 (or lower) for several weeks. Cities that have met these criteria have found that the Rt number remains stable thereafter.
Surveyed cities have used one of two time frames for lifting restrictions. Berlin and Madrid acted after Rt remained low for one to two weeks. London and Paris waited three or more weeks before lifting restrictions. Under both approaches, rigorous processes were put in place to monitor public health and ensure that any rebound in Rt was kept within manageable and acceptable levels. The mobility metric is increasingly important for such decisions because it helps cities to track whether initiatives are still effective.
Preemptive countermeasures in public transport
Cities have applied a number of measures to limit the spread of infection in public transport. These include more frequent cleaning and disinfecting of trains, trams, buses, and stations; requiring passengers and employees to wear face coverings; applying stickers, markers, shields, and barriers to aid in physical distancing; running public conveyances more frequently; limiting the number of passengers; and increasing access to bicycle sharing. Our research shows that once mobility restrictions start to be lifted, the use of public transport comes back slowly. As of June, ridership had reached 30 to 40 percent of prepandemic levels in Berlin and Lisbon; 25 percent in London, Milan, and Paris; and 17 percent in Madrid.
Some city experiences
Our research shows a correlation between the evolution of Rt in cities and these two approaches (Exhibit 3). Rt evolved more desirably overall, and Rt rates did not increase, in cities that chose longer waiting periods (three or more weeks) and implemented more safety measures. Rt movements appear to be satisfactory in cities that waited only one to two weeks and implemented only some of the preemptive countermeasures, although that approach has been slightly riskier.
Exhibit 3 presents early-stage evidence of the success of the approaches used in all the cities. While Rt jumped in some cities, it remained below 1 in all. The approaches taken so far have, by and large, effectively prevented the disease from spreading rapidly again. These cities not only waited for Rt to stabilize before easing their rules but also lifted restrictions step-by-step: they made a change, waited to see its effects, and then decided on their next move. The waiting period was generally two to three weeks. However, it is important to recognize that cities are taking very different approaches to gradual reopening, with variations in which industries go first and which restrictions stay in place.
In the cases we examined, rigorous processes monitor public health and ensure that Rt remains within manageable and acceptable levels. If the Rt rate rises above 1.0, this triggers extra measures and a delay in additional steps toward reopening, until Rt rates again fall below 1. These cities also keep in place all safeguards—for example, the obligatory wearing of masks—to protect people who leave their homes, and they continue to enforce rules for physical distancing.
Cities are reopening their economies in rounds. Restrictions are lifted step-by-step for different industries, depending on how essential they are, the level of risk they present for spreading the virus, and their economic condition. In the first round, cities have often allowed manufacturing and construction activities to resume operation, along with small retail stores and parks. In some cases, schools and public spaces (such as museums) also reopen during the first round. After the first round starts, these cities have waited two to three weeks and then moved on to the second round of reopening. The process can also be reversed, as we have seen in some US locations where virus levels have resurged.
Berlin and London present a study in contrasts. In Berlin, small stores (including pharmacies, household-goods stores, pet shops, and dry cleaners) opened in the first round. Secondary schools, starting from the tenth grade, also opened at this stage; the entire primary and secondary school system was scheduled to reopen in late August. Berlin has also reopened public places such as museums, art galleries, and zoos. Yet a number of restrictions were kept in place, such as the obligatory wearing of face coverings and restrictions on public events of more than 1,000 people.
In Berlin, the phases followed one another more quickly than they did in London, where the authorities are moving more slowly: a waiting period of 30 days, on average, has separated each phase. In its first phase, London opened food production, construction, manufacturing, and logistics and distribution companies. Educational facilities were reopened for vulnerable children only. During June and July, London began opening schools generally, and restaurant openings are slated for July and August.
Our research suggests that leaders of city governments can focus their efforts in three broad areas. The first is to get the number of cases toward zero and Rt below 1 as rapidly as possible. Stay-at-home measures appear to achieve these goals more effectively than physical-distancing measures alone, but both are needed. Our findings also align with recent research on the importance of banning large public events, which accelerate the spread of the virus.
Second, the reopening should begin only after the number of cases has stabilized toward zero for a number of weeks and Rt rates are low. At this point cities must take countermeasures to prevent resurgence, particularly addressing the danger of infection on public transportation.
Finally, restrictions should be lifted in a step-by-step approach. A meaningful waiting period should precede each step, to ensure that performance criteria have been met, before the next stage is begun. It should be noted that in late July (the time this article was written), the cities farthest along in the process of safe reopening have only reached 50 to 60 percent pre-COVID mobility. For all of us, a long journey to the new normal still lies ahead.