Big Oil’s electric shock

This article first appeared on the website of Better Place

A great indicator that disruptive innovations are nearing the all-important tipping point is when powerful incumbents start peddling nonsense masquerading as facts, to sow doubt about the viability of the emerging technology or business model.  There’s nothing particularly sinister about this.  By scrambling to erect roadblocks to new market entrants that threaten their hegemony, oligopolies are only doing what comes naturally to an organism under attack by an existential threat.  And if your job is to find, extract, refine, distribute and sell liquid fuels, then electric cars certainly qualify.

I’m thoroughly heartened when I read statements from Big Oil about the “many barriers” that must be overcome before electrons can make a significant dent in a mobility sector dominated by petroleum.  Heartened because as recently as two years ago I would have been hard pressed to find any commentary at all from the oil majors about transport electrification.  Back then, the tune was all about the prospects for second generation biofuels and the supposed holy grail that is hydrogen.  But today, barely an eyebrow is raised when senior executives from the likes of ExxonMobil or Shell claim that electric cars hold genuine future promise, but not before we decarbonise the power supply.  In other words: “You EV guys are very well meaning – and we wish you well – but until the world stops burning coal, allow motor manufacturers to continue tinkering with incremental efficiency gains while we drill, baby, spill!”.

The decarbonised grid storyline is becoming the new conventional wisdom.  And like much conventional wisdom, it appears reasonable on the surface but, when examined closely, turns out to be patent nonsense.  We begin to understand why it is flawed when we examine what I call the Four Truths that we can hold to be self-evident.  They hold whenever we elect to set fire to carbon-based fuels in order to benefit from motorised kilometres:

(1) Large is better than small

Megawatt (MW) scale plants are able to run hotter, therefore more efficiently, than the kilowatt (kW) scale engines that power motor cars.  This truth has its roots firmly in the basic laws of thermodynamics, which are not subject to revision.

(2) Constant load is better than variable load

Combustion facilities have an optimal operating efficiency that is achievable more or less continuously in a power plant.  In vehicles, the engine speed is seldom constant, as it is dictated by the variable driving conditions.

(3) Stationary is better than mobile

In practical terms it is far easier to manage, collect, and process combustion emissions from stationary plants than from mobile vehicle tailpipes.

(4) Few is better than many

The greater the number of emissions sources, the harder it becomes to do anything about them.

Notice that truths (1) and (2) relate to energy efficiency, while (3) and (4) are all about emissions control – this is why (1) and (4) are not merely different ways of expressing the same point.  And what should we conclude from these truths?  It is better to burn fuel – be it coal, crude oil, natural gas, or biomass – in hundreds of large, stationary power plants running at constant load than in millions of small, mobile internal combustion engines running variably.  Put differently, all else being equal, electricity beats liquid fuels on both energy efficiency and emissions control.

The real killer for Big Oil is that for years we’ve been led to believe that petroleum was too valuable to turn into electricity.  That’s true only if your core business is shackled to the liquid transport fuel paradigm.  From an energy efficiency, energy security and environmental perspective, crude oil is far too valuable to waste in automobiles.  The same goes for coal, natural gas, and biomass.  Biofuels – the tenuous lifeline of the liquid fuel companies – break against the rocks here.  Far better to convert the biomass into heat and electricity to displace dirty coal.

So back to the conventional wisdom.  Let’s imagine a world in which 100% of our primary energy comes from fossil fuels.  Electric mobility wins, hands down.  But of course, we don’t live in such a world.  The world we live in has a steadily decarbonising electricity supply, while oil majors are forced to exploit ever-more exotic and energy-intensive forms of black gold.  They’ll have a helluva job making diesel or gasoline from wind turbines and solar panels.

CSV Forum = Coalescence of Sceptical Viewpoints?

This blog first appeared on the website of think tank and strategy consultancy SustainAbility

Reflecting on yesterday’s CSV Forum 2010 in London, I confess that my expectations going into the event were low. I always find it difficult to accept that events of this nature are not exquisitely choreographed by the hosts – in this case the world’s largest food company, Nestlé, which espouses the Creating Shared Value (CSV) concept of corporate responsibility. It’s not that I don’t see the value of the CSV approach, or that I don’t welcome the opportunity to mix with – and learn from – some of the great thinkers on Nestlé’s three CSV focus areas, namely Water, Rural Development and Nutrition. I suppose my expectations for a frank and lively conversation were somewhat lowered at the pre-Forum dinner on Wednesday evening, enjoyed in a spectacularly lavish setting at the top of the iconic Gherkin building in the City of London. That’s where I heard Nestlé’s CEO Paul Bulcke remark that “CSV” is in fact nothing new, it’s simply a helpful articulation – thanks to Prof. Michael Porter of Harvard Business School – of what Nestlé has always believed and practised (or words to that effect). Hey ho, this is going to be a looong day…

With my sceptic hat perched comfortably atop my thinning pate, I took my seat in the 300-strong audience at the Mermaid Conference Centre on the north bank of the Thames, and I found myself genuinely engaged. In fact, I almost started to consider myself embraced, to borrow Nestlé’s own language of stakeholder engagement. There actually was a frank and lively conversation, and it was bookended by an audience poll that certainly didn’t give the impression of choreography.

A question was put to the audience by the session moderator – SustainAbility non-executive director Sophia Tickell – at the very start of the Forum, along the lines of: “To what extent do you agree with the following statement: Sustainability is now firmly embedded in corporate strategy?”. Forum delegates were asked to respond on a five-point scale from ‘Strongly disagree’ through ‘Neither agree nor disagree’ to ‘Strongly agree’. What was particularly insightful was that the very same question was posed at the end of the Forum, after some eight hours of debate on the sustainability issues most material to Nestlé’s business. The before and after responses are summarised in the following graph:

My take on this: as soon as the great sustainability challenges of our time are brought into sharp focus, even comparatively well informed observers come to realise there is so much more that needs to be done before businesses can consider themselves to have embraced sustainability.

And I’ll leave a final question open-ended: is CSV really just a smart Harvard professor’s articulation of what Nestlé has always been doing?

The Future of Oil

This article, co-authored with John Elkington, first appeared in China Dialogue and was repeated on the Guardian Environment Network

The race for the world’s remaining oil reserves could get very nasty.  Recently, Nigerian militants announced their determination to oppose the efforts of a major Chinese energy group to secure six billion barrels of crude reserves, comparing the potential new investors to “locusts”.  The Movement for the Emancipation of the Niger Delta (MEND) told journalists that the record of Chinese companies in other African nations suggested “an entry into the oil industry in Nigeria will be a disaster for the oil-bearing communities”.  

Whatever the facts, the end of the first decade of the twenty-first century is likely to be seen by future historians as the beginning of the final chapter of a unique, unrepeatable period in human development.  Even oil companies now see the Age of Oil in irreversible decline – even if that decline spans decades. International oil companies (IOCs) increasingly accept that they must transform themselves completely – or expire – by mid-century.  

Superficially, the so-called “super majors” appear to be in good health. Fortune’s Global 500 list places the “big six” – Shell, ExxonMobil, BP, Chevron, Total, and ConocoPhillips – among the seven largest corporations in the world, as measured by 2008 revenues.  In third place, Wal-Mart stands alone as the only top-seven company not dedicated to finding, extracting, processing, distributing and selling the liquid transportation fuels that drive the global economy – although few business models are as dependent as Wal-Mart’s on the ready availability of relatively cheap oil.

Worryingly for such companies, 2008 may prove to have been the high water mark for the global oil industry, with geological, geopolitical and climate-related pressures now creating new market dynamics.  The oil question is now, more than ever, a transport question.  Cheap and reliable supplies of transportation fuel are the very lifeblood of our globalised economy.  So it matters profoundly that we are entering an era in which oil supplies will be neither cheap nor reliable. 

For the likes of Shell, BP, and ExxonMobil, whose rates of liquid hydrocarbon production peaked in 2002, 2005, and 2006 respectively, the current economic paradigm requires them to replace reserves.  Investors primarily value IOCs on this basis, as well as on their ability to execute projects on time and within budget.  A key problem for the IOCs is that petroleum-rich countries feel increasingly confident in the ability of their own national oil companies to steward their domestic resources.  So generous concessions once offered to IOCs in return for technical and managerial expertise are now deemed unnecessary.

The imperative to satisfy investor expectations fuels an increasingly risky growth strategy, which drives IOCs towards energy-intensive (and potentially climate-destabilising) unconventional oil substitutes, such as tar sands (in Canada), gas-to-liquids (in Qatar), and coal-to-liquids (in China and elsewhere).  These pathways are not chosen as ideals: they are more or less reflexive responses to external market pressures.

Meanwhile, the uncomfortable fact is that our economies are addicted to liquid hydrocarbon transport fuels, the consumption of which creates a catalogue of negative side effects.  And we cannot hope to address this addiction by way of our “dealers” developing even more damaging derivatives of the same drug. 

As if that were not enough, there is the hot topic of “peak oil”, defined as the point at which global oil production reaches a maximum rate, from where it steadily declines.  The basic principle is uncontroversial: production of a finite non-renewable resource cannot expand endlessly, and this has been demonstrated in practice at national level all over the world.  The heated debate centres on the point at which the peak in global oil production is likely to be reached. 

“Early toppers” argue that the peak has already been passed, and that the world will never produce more than 85 million barrels per day.  By contrast, “late toppers” point to the huge scale of unconventional reserves – for example, Alberta’s tar sands resource is vast – that remain untapped, as well as the potential bounty locked away in frontier regions such as the Arctic Ocean, where global warming is opening up new areas for oil and gas exploration. 

Unfortunately, what matters is not the absolute size of these unconventional and frontier resources, but the rate at which they can be developed and brought to market.  By definition, this is the “difficult” oil.  Production rates are determined by a series of significant financial, social, and environmental constraints that raise grave concerns for the viability of a global economic system made possible by liquid transport fuels. 

At the same time, leaders of all the major economies finally acknowledge what scientists have long been warning: to avoid catastrophic climate-change impacts, the global average surface temperature increase must be limited to 2° Celsius compared with the pre-industrial era.  To stand any reasonable chance of avoiding a 2° Celsius rise, our best understanding of the climate change science suggests that global greenhouse-gas emissions must peak within the next five to ten years, and then decline by more than 80% on 1990 levels by 2050.  Realistically, meeting this requirement will demand that we engineer a transition to a zero-carbon energy system by mid-century. 

So what might a zero-carbon energy system look like?  As well as dramatic improvements in the energy efficiency of buildings and appliances, and massive deployment of sustainable renewable energy technologies, we will no longer be allowed to burn fossil fuels without capturing and sequestering the carbon dioxide emissions.  This implies that we must restrict our use of fossil fuels to stationary facilities, such as power plants, where carbon capture and storage (CCS) is practical (see “Outlook and obstacles for CCS”).  Strikingly, a zero-carbon energy system will also mean that no liquid hydrocarbon fuels, with the exception of biofuels, can be consumed in mobile applications such as transport. 

This does not make pleasant reading for international oil companies.  Their core business today may be described as: digging geological carbon resources out of the ground, converting those resources into liquid fuels, then marketing those fuels to consumers who set them on fire in internal combustion engines to move around.  By 2050, these activities will all be considered to be strikingly primitive. 

Two degrees is too much

This article first appeared on the website of think tank and strategy consultancy SustainAbility

To avoid many of the worst consequences of climate change, the increase in global average surface temperature must remain below 2°C compared with the pre-industrial era.  Measures to reduce greenhouse gas emissions commensurate with this overarching ambition must be guided by the best available science.  Here I explain why I continue to advocate the 2°C threshold for avoiding dangerous climate change, why I don’t embrace a “ppm” target, and what 2°C means for policy makers and the business community.

Credible climate change policies and business strategies are driven by the primary objective of the United Nations Framework Convention on Climate Change (UNFCCC) – ratified by 192 countries, including the US and China – as well as being informed by the scientific assessments of the Intergovernmental Panel on Climate Change (IPCC), the authoritative voice on the causes, impacts, and mitigation of climate change.  

Article 2 of the UNFCCC states as its ultimate objective: “to stabilise GHG concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.”  The problem is that the UNFCCC never went so far as to define what “dangerous” meant, and this omission has been a bone of contention ever since.  However, through different means, several researchers have arrived at the figure of 2° Celsius, referring to the maximum acceptable increase in the Earth’s average surface temperature versus the pre-industrial era.  

The 2°C threshold has since become the de facto limit advocated by most civil society organisations concerned with climate change, as well as several scientific advisory bodies.  In the mid 1990s it was also adopted by the European Union as official policy and therefore, by extension, by the heads of government of the EU Member States.

The EU’s first mention of the need to stay below 2°C appears in the Spring Council conclusions of 1996.  Referring to the IPCC’s Second Assessment Report (SAR) – the most recent scientific assessment at that time – the European Council introduced a link between 2°C and 550 ppm (parts per million) of CO2:

The Council believes that global average temperatures should not exceed 2 degrees above pre-industrial level and that therefore concentration levels lower than 550 ppm CO2 should guide global limitation and reduction efforts.

Back in 1996, based on the contemporary science of IPCC SAR, it was believed that stabilisation of atmospheric CO2 concentrations at below 550 ppm would deliver a below 2°C outcome.  Numerous policy makers and businesses subsequently adopted 550 ppm, and many have since failed to move in step with the advancing science.

In 2003, the German Advisory Council on Global Change (WBGU) issued a report titled “Climate Protection Strategies for the 21st Century: Kyoto and Beyond”.  In it, the link was explicitly made between temperature increase and tangible climate-related impacts, such as threats to biodiversity, food security, water scarcity, and ice sheet collapse.  The WBGU reaffirmed its earlier conviction that in order to avert dangerous climatic changes “it is essential to comply with a ‘climate guard rail’ defined by a maximum warming of 2°C relative to pre-industrial values”.  To meet this requirement, it recommended a stabilisation target of below 450 ppm CO2, accompanied by substantial reductions in other GHGs.

Two years later in 2005, an International Climate Change Taskforce (ICCT) – comprising leading scientists, public officials, and representatives of business and NGOs from both developed and developing countries – published a report titled “Meeting the Climate Challenge” which said essentially the same thing regarding the dangerous 2°C threshold, but went further than the WBGU by assigning a much stricter GHG limit of 400 ppm CO2-equivalent (CO2e).  Accounting for current levels of non-CO2 greenhouse gases, this figure amounted to roughly 350 ppm CO2.

It is now believed that even this target may be too high; the IPCC’s Fourth Assessment Report (AR4) suggested that stabilisation at 400 ppm CO2e could lead to a temperature rise anywhere up to 2.5°C.

Alarmed by the lack of practical progress on climate change, civil society organisations have been cranking up the pressure on the road to COP-15 in Copenhagen.  The campaign group 350.org was established following the publication of a study by NASA climatologist James Hansen and colleagues in 2008, titled “Target Atmospheric CO2: Where Should Humanity Aim?”  The report’s conclusions?  CO2 will need to be reduced from its current 385 ppm to at most 350 ppm CO2 – with the greatest uncertainty in the target arising from possible changes in non-CO2 effects.

The initial “350” call was more or less aligned with the recommendation of the ICCT in 2005.  However, the 350.org campaign has morphed during its relatively short lifetime to mean the more challenging 350 ppm CO2e, as the website explains: 

Climate impacts happening more quickly than anticipated have led 350.org to see the 350 ppm target not only in terms of CO2, but CO2e.  On a technical level, this becomes a more ambitious target, incorporating other greenhouse gases.  On a practical level, it signifies the same priorities 350.org has embodied all along.

To complete the picture, in July 2009 following the welcome return of the United States to constructive international dialogue, at the Major Economies Forum on Climate Change in L’Aquila, Italy, leaders of the world’s largest economies – including the EU, US, Japan, China, India, Brazil and Russia – made the following ground-breaking declaration:

We recognise the scientific view that the increase in global average temperature above pre-industrial levels ought not to exceed 2°C.

Despite falling short of calling for a long-term target for stabilisation or percentage emissions cuts, the political significance of this declaration is huge.  What is clear from the science is that to stand a reasonable chance of staying below 2°C, we will need to engineer a rapid transition to a completely decarbonised energy system by 2050, as well as reversing deforestation and cutting emissions from other land-based sources.

It is for good reasons that the 2°C principle is widely accepted by heads of state, civil society organisations, climate change scientists and policy makers.  But the threshold has not been universally accepted.  To date, we do not know of any business that has a publicly declared ambition to keep global warming below 2°C.  Then again, we are not aware of any business that has taken any public position on what represents “dangerous” climate change.

The really bad news – and perhaps an insight into why businesses are slow to embrace 2°C – is that every time we look at the science it seems the greenhouse gas stabilisation target for avoiding 2°C is lower than before: from 550 ppm CO2 in 1996, to 450 ppm CO2 in 2003, to 400 ppm CO2e in 2005, even as low as 350 ppm CO2e in 2009.

This is precisely the reason why the 2°C concept is so important to understand. “Staying below 2°C” sets the overarching level of ambition, from which we derive stabilisation targets in “parts per million” (based on the best available science), which then inform us about actionable “percentage emissions reductions” (e.g. minus 85% by 2050).  We cannot fall into the trap of conflating 2°C with a particular ppm stabilisation target – recent history has shown this to be folly.
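To make the chain from threshold to percentage concrete, here is a quick, hedged calculation: if emissions must fall by 85% by 2050, the implied sustained annual rate of reduction follows from one line of arithmetic.  The 40-year window (roughly 2010 to 2050) is my own illustrative assumption, not a figure from the text.

```python
# Illustrative arithmetic only: what "minus 85% by 2050" implies as a
# sustained annual reduction rate. The 40-year window is an assumption.
target_fraction_remaining = 0.15   # i.e. minus 85% on the baseline
years = 40                         # ~2010 to 2050 (my assumption)

# The constant yearly multiplier that compounds to the target fraction
annual_factor = target_fraction_remaining ** (1 / years)

print(f"Implied cut: ~{1 - annual_factor:.1%} per year, every year")
# → Implied cut: ~4.6% per year, every year
```

A sustained cut of nearly 5% a year, sustained for four decades, gives a sense of why “minus 85% by 2050” is a far more demanding statement than it first appears.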

Our atmosphere is already loaded with greenhouse gases to the tune of around 435 ppm CO2e and rising by 2-3 ppm every year, i.e. well beyond what today’s science indicates is the necessary long-term stabilisation level.  So as well as eliminating emissions from fossil fuels, we will likely need to enhance terrestrial carbon sinks – through massive reforestation programmes and advanced soil management practices – as well as considering other ways to suck CO2 out of the atmosphere if we are to stand any chance of avoiding “dangerous climate change”.

How Green are Electric Cars?

This article first appeared on the Energy Bulletin website

I have been reading and watching with some bemusement a number of stories appearing in the British press and on television this past week on the subject of electric cars.  The media interest is largely a reaction to the UK government’s recent announcement of plans to provide cash incentives to buyers of plug-in vehicles, designed to stimulate the market for highly efficient vehicles.  A number of articles, some of which have hot-links from the ODAC website, have ‘experts’ variously dismissing the environmental benefits of electric cars as fiction, claiming their mass adoption will cause blackouts, or accusing the government of a cheap gimmick.  Whatever the rights and wrongs of the proposed stimulus package, its lack of sophistication should not be allowed to undermine the fact that electric cars are fundamentally a good idea.  Shifting transport away from liquid hydrocarbon fuels towards electricity can make a significant contribution to the twin challenges of climate change and energy security.

Frequently repeated is the lazy sound bite that “electric cars are only as green as the electricity they run on”.  Sounds obvious, doesn’t it?  But it neglects the fact that based on today’s UK electricity mix – still heavily reliant on natural gas and coal – electric cars can cut CO2 emissions in half compared with conventional mechanical vehicles running on petroleum.  Even taking into account transmission and distribution losses, it is always more energy efficient to burn carbon-based fuels – coal, oil, gas, and biomass – in large stationary power plants running at constant load than it is to waste additional energy converting them into liquid transport fuels and then burning them in small mobile internal combustion engines running at variable speeds. 

In a Daily Telegraph article, one expert was quoted as saying that modern diesel engines can achieve 45% efficiency.  This is an extraordinarily optimistic estimate, especially considering that automotive engines are seldom running at optimal efficiency but instead are subject to cold start energy losses, frequent short journeys, stop/start urban driving conditions, idling at traffic lights and in queues, fast acceleration and hard braking, all of which combine to reduce the practical efficiency of the mechanical powertrain to around 20%.

The electric motor is a vastly more efficient – and reliable – device in principle than the internal combustion engine.  To get the picture, we need to compare two vehicles sharing the same platform but utilising different powertrains.  This way, we can eliminate variables such as vehicle size and aerodynamics which complicate comparisons from one vehicle platform to another.  I reviewed the US Department of Energy website devoted to vehicle fuel economy and found that in 2003 the electric variant of the Toyota RAV4 was 4.9 times more energy efficient over the standard test cycle than its petroleum-powered equivalent.  4.9 times!  Note also that Toyota’s aim was not to build an energy efficient vehicle per se, but to comply with California’s “Zero-Emissions Vehicle Mandate” (the RAV4-EV used nickel metal hydride batteries, which are less efficient than modern lithium batteries that will power the new generation of electric cars).  In other words, Toyota achieved this factor ~5 efficiency advantage almost by accident! 

Putting this efficiency advantage into context, we can apply the carbon intensity of any given energy source to see what the effective life-cycle emissions would be.  Imagine a run-of-the-mill pulverised coal plant generating power at approximately 1,000 gCO2/kWh.  Factor in grid losses of around 6%, and the electricity at the plug socket carries roughly 1,064 gCO2/kWh.  Meanwhile, petroleum-based fuels contain around 300 gCO2/kWh, taking into account the efficiency of a typical oil refinery.  On this basis it looks as though petrol is better for the environment than coal-fired electricity.  But when you apply the energy efficiency advantage of the RAV4-EV (i.e. 1,064 divided by 4.9), the carbon intensity of energy at the wheels is 28% less than that of the petrol version.  Diesel engines are typically around 25% more efficient than petrol engines, all else being equal.  This means the RAV4-EV charged with electricity from a run-of-the-mill pulverised coal plant would still be marginally better in terms of CO2 emissions than its diesel-powered equivalent. 

But no country, not even China, has exclusively coal-fired electricity.  In Britain, a diverse range of power generating technology means that electricity drawn at the domestic socket emits around 520 gCO2/kWh on average.  On this basis, an electric RAV4 would produce two-thirds less CO2 per mile driven than the petrol version, and half as much as a comparable diesel.
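The arithmetic in the two paragraphs above is easy to misread, so here is a minimal sketch of the same sums in Python.  All the figures (1,000 gCO2/kWh coal, ~6% grid losses, 300 gCO2/kWh petrol, the 4.9× RAV4-EV advantage, and the 520 gCO2/kWh UK grid average) come straight from the text; the variable and function names are mine.

```python
# Back-of-envelope check of the carbon-intensity comparison above.
# All input figures are quoted in the article; names are illustrative.

EV_EFFICIENCY_ADVANTAGE = 4.9   # RAV4-EV vs petrol RAV4 (US DoE test cycle)
PETROL_G_PER_KWH = 300          # gCO2 per kWh of petrol, incl. refining
DIESEL_G_PER_KWH = PETROL_G_PER_KWH / 1.25  # diesel ~25% more efficient

def ev_effective_intensity(grid_g_per_kwh):
    """Grid carbon intensity at the socket, scaled down by the EV's
    energy-efficiency advantage so it is comparable with liquid fuels."""
    return grid_g_per_kwh / EV_EFFICIENCY_ADVANTAGE

# Worst case: pulverised coal at 1,000 g/kWh, plus ~6% grid losses
coal_at_socket = 1000 / (1 - 0.06)                    # ~1,064 g/kWh
ev_on_coal = ev_effective_intensity(coal_at_socket)   # ~217 g/kWh
print(f"EV on coal: {ev_on_coal:.0f} g/kWh "
      f"({1 - ev_on_coal / PETROL_G_PER_KWH:.0%} below petrol)")

# UK grid average at the domestic socket: ~520 g/kWh (losses included)
ev_on_uk_grid = ev_effective_intensity(520)           # ~106 g/kWh
print(f"EV on UK grid: {ev_on_uk_grid:.0f} g/kWh vs "
      f"diesel {DIESEL_G_PER_KWH:.0f} g/kWh")
```

Running it reproduces the headline claims: roughly 217 g/kWh for an EV on pure coal power, 28% below petrol’s 300 g/kWh and just under diesel’s ~240 g/kWh, and roughly 106 g/kWh on the UK grid mix.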

Furthermore, once all those CO2 emissions have been concentrated from millions of vehicle tailpipes into a relatively few stationary point sources, then they lend themselves to a future in which we can capture and lock away the CO2 underground.  Personally, I cannot imagine carbon capture and storage (CCS) from moving car tailpipes, but I can envisage CCS from large stationary power plants situated near suitable geological storage locations.

Further still, electric vehicles can actually help to accelerate the penetration of renewables such as wind and solar power, because one of the limits to renewable electricity generation is storage of energy from intermittent sources.  With millions of electric vehicles connected to the grid we will have created a massive distributed energy storage facility, in the form of automotive batteries.

The more important point is this: if we are to avert catastrophic climate change, then the power sector will need to decarbonise steadily, because it represents the single largest source of CO2 emissions.  The good news is that we know how to decarbonise the power sector; we have a range of technologies and policy measures at our disposal, and all that’s lacking is a globally inclusive international treaty to put an effective cap on emissions.  In this respect, it is sensible to take “power decarbonisation over time” as one of our starting assumptions.

Contrast this with the liquid fuels sector, in which the carbon intensity is heading northwards as oil companies are forced to exploit more energy-intensive forms of liquid hydrocarbon (e.g. oil sands, oil shale, coal-to-liquids, etc.).  Biofuels – even when produced sustainably with real greenhouse gas benefits – will struggle to make up the difference. Oil is going to get dirtier.  And if the worst of electricity (i.e. pulverised coal) compares favourably with the best that petroleum has to offer (i.e. conventional diesel), then over time the advantage of electric vehicles can only increase.

Finally, there is much to be done in redesigning the entire transport paradigm, e.g. through modal shift from private cars to mass transit, encouraging more walking and cycling, and improving urban planning practices to reduce demand for transport.  Electric vehicles are not a panacea for all transportation ills.  However, the clear energy efficiency advantages of electric vehicles, not to mention their crucial energy diversification potential (energy security frequently trumps environmental security in policy discussions), make them a very important part of the solution as we move toward a sustainable energy future.

Why $40 per barrel is no cause for complacency

This article, co-authored with David Strahan, first appeared on the website of think tank and strategy consultancy SustainAbility

These days it is comforting to have one thing not to worry about.  As the world teeters on the edge of a full-blown depression, and business is crushed between slumping sales and seized-up credit markets, at least the oil price is in retreat.  From an historic high of $147 per barrel last July to around $40 today, the price of crude has collapsed so quickly it is tempting to believe it means the end of the energy crisis; that the spike was just some speculative aberration; and that all talk of ‘peak oil’ is so 2008.  

It is true that the horizon has been utterly transformed.  Last year the big issue keeping many company bosses awake in the small hours was rising energy bills – this year all manner of competing spectres haunt their sleepless nights.  But to relegate oil simply because the price has slumped is to misunderstand the causes of the recent spike and collapse, and therefore the future outlook for energy prices and what it means for business and the climate.  

It is commonplace to blame $147 oil on booming demand in China and India, but that is only one half of the equation.  The other is that global oil production between early 2005 and mid 2008 was stagnant, at around 86 million barrels per day.  So for three years the oil supply was a zero sum game: the East consumed more, and with production static, the price of crude had to rise to force the West to consume less.  Under the circumstances the oil price was a one way bet.  But in the past, rising demand has always been met by increased output, so the key question is: why did global oil production fail to grow?

Analysts divide the oil producing world into two halves: OPEC and the rest.  Non-OPEC output has underperformed against forecasts every year this century.  Because it depends on production from regions that are increasingly mature, non-OPEC output is widely expected to peak by around the end of this decade.  But OPEC also failed to raise its game, and this is unlikely to have been the result of deliberate market manipulation.  At $147 per barrel, the incentive to pump more oil rather than risk destroying demand would have been irresistible, if it were possible.  In fact, there are good reasons to suspect that the cartel’s members have been exaggerating the size of their reserves for decades (most observers attribute the sharp jump in proved reserves of several Middle Eastern members during the 1980s to a dispute over production quotas, which created an incentive to overstate reserves).  So OPEC’s collective inability to respond to record prices by raising production may suggest its output is approaching its geological limits.  If we have not yet arrived at the oil peak, we seem at least to be in the foothills.

The subsequent oil price collapse is just as misunderstood as the spike that preceded it.  Of course, the price is falling because demand is shrinking, and that’s due to the recession.  But what caused the recession?  The obvious culprit is the banking crisis, which has clearly been extraordinarily damaging.  But so too are oil price spikes; every major recession since World War II has been preceded by one.

It’s not hard to see why: the global transportation system – moving goods, workers and consumers around, thereby enabling an increased level of economic activity to take place – is almost entirely fuelled by crude oil.  When the price of oil soars, almost all aspects of modern daily life become more expensive.  And as the oil exporters accumulate more of the world’s money, so everyone else has to make do with less.

The 2008 spike not only set a new record high oil price, in both absolute and inflation-adjusted terms, but it was also very sudden, with the price almost trebling in around eighteen months.  So it seems highly likely that even without the credit crunch, the oil market fundamentals would have been sufficient to push us into a global recession.

Far from being a source of relief, today’s relatively low oil price is as damaging in its own way as the spike.  Oil companies around the world are cancelling or delaying investment in planned production projects, because they are uneconomic at current levels; $60 billion of investment in the Canadian oil sands was shelved in the three months to January alone.  At the same time, existing global production capacity is constantly shrinking, as oil fields age and reservoir pressures decline.  The International Energy Agency (IEA) estimates that capacity is currently shrinking by around two million barrels per day each year, and that this decline rate will accelerate in future (World Energy Outlook 2008).  Oil production projects have long lead times, so the combination of declining reserves and limited investment means there is a very real danger that when economic growth returns, oil supplies will be inadequate to meet demand, and the price will spike once more.  And the cycle starts all over again.

Extreme volatility in the oil price will of course mean the same for gas and electricity, since natural gas purchasing agreements are tied to the price of crude – as the past two years have demonstrated.

This is likely to wreak havoc with company budgets and share valuations – at least for those companies that do not take steps to reduce their exposure.  A recent analysis of the correlation between energy costs and the share valuations of logistics companies showed that financial markets can reward fuel thrift and punish profligacy.  A 10% rise in energy costs was credited with precipitating a 10 cent fall in FedEx’s share price, but a rise of 3 cents for UPS.  It turns out that fuel-per-package-delivered is a key performance indicator at UPS, for which managers are held accountable.  So in addition to carbon reduction, cost cutting, and resilience to short-term supply disruption such as the UK fuel duty protests of 2000 (which are likely to become more frequent as the oil supply tightens), there is now yet another reason for companies to eliminate their dependence on oil.

When the next spike occurs depends crucially on the depth of the recession – or depression – although analysts such as Barclays Capital forecast that in the fourth quarter of this year the oil price will average $87 per barrel, rising to $96 twelve months later.  But for as long as the oil price stays low, it’s bad not just for the future oil supply, but also for investment in renewable electricity generation, where the economics are judged against the cost of electricity from gas-fired power stations.  The impact is worsened by the low price of allowances in the EU Emissions Trading Scheme (ETS), now languishing at around €10 per tonne of CO2 – a level at which most energy analysts believe it is impotent as a stimulus for green energy investment.  (The economic slowdown has thus highlighted one of the inherent flaws in the existing EU ETS: emissions allowances are allocated in advance, on the assumption that economies will grow.)  Major projects such as the London Array offshore wind farm hang in the balance, while plans by the legendary oilman T Boone Pickens to build the world’s largest wind farms in Texas have already been put on hold.  Paradoxically, one of the indirect impacts of falling oil consumption is that investments in green energy technologies become less economically viable.

If we are still in the foothills of peak oil, there is good evidence to suggest we will reach the summit well within most companies’ planning horizons.  We are clearly already in deeply unsustainable territory: the rate of oil discovery has been falling for over forty years, while consumption has risen inexorably, save for a couple of brief recessionary interludes.  Today, for every barrel of oil we discover, we consume three; annual production is already falling in over sixty of the world’s 98 oil-producing nations.  Many oil companies and forecasters expect trouble at least by the middle of the next decade – whether or not they strictly accept the term ‘peak oil’.  Shell expects global production to plateau, Total’s chief executive, Christophe de Margerie, says the world will never produce more than 89 million barrels per day, and the IEA says we face a “supply crunch”.

Given the prominence of the peak oil debate, no CEO can claim they were not put on notice about this fundamental threat to their business, irrespective of their role within the economy.  It is hard to imagine any sector prospering today in the absence of a functioning transportation system.

The good news, however, is that there is absolutely no shortage of energy.  The sunlight that hits the earth in an hour contains enough energy to run the global economy for a year.  But while solar, wind, wave, tidal and geothermal energy can all be harnessed to generate clean electricity, they cannot hope to solve the oil crunch – and with it many of the environmental consequences of our crude oil addiction, not least climate change – for as long as the global economy runs on liquid hydrocarbon fuels.

There is scant evidence that governments have awoken to the scale of the peak oil crisis, the impacts of which will surely be felt well before the worst effects of climate change start to kick in.  Oil market psychology lurches between two extremes: complacency and panic.  What we need is to find the middle ground: a sense of urgency and an appetite for action commensurate with the challenge, and to sustain it even when oil prices are low.  The trick for corporate leaders will be to figure out what the post-petroleum economy is going to look like, what technologies and policy frameworks will be required to expedite the transition, and what risks and opportunities will emerge within the changing regulatory environment.  In short, they will need to plan how to survive – or better still, profit from – the inevitable transformation.