From Behind Enemy Lines in World War II

ESCAPE FROM BEHIND ENEMY LINES

Cartophilia: in the second world war, Allied airmen carried maps printed on fabric—by the makers of Monopoly. Rebecca Willis tells their story


From INTELLIGENT LIFE magazine, January/February 2015

This map may well have saved a life. Soft, secret and silent, it is made for use behind enemy lines. Thousands of these so-called silk maps were printed during the second world war and issued to Allied servicemen, mostly to air crew who might be shot down and those about to be parachuted into enemy territory. Cloth maps have several advantages over paper ones: they don’t rustle, they don’t tear or disintegrate when wet, and they are easily concealed against the body or sewn into clothing. They can be used to filter water, or double as a tourniquet, a sling or a bandage.

Silk maps vary in size, colour, scale and material—the early ones were printed on silk left over from making parachutes, but most were on polyester or rayon, as this one is. The size of a large table napkin, it charts parts of north Africa, to a different scale on each side, with the ink showing through. “It’s quite small-scale, so it’s not much use in getting you out of a particular hole,” says Peter Barber, head of map collections at the British Library in London, where it resides. “But once you’re out of the hole, it would help you get out of the country.” He points out the pre-war country names: Anglo-Egyptian Sudan, Italian Libya, and the administrative districts of Libya that ceased to exist in 1963—Cyrenaica, Fezzan and Tripolitania.

Shortly after the start of the war the British government created a department called MI9, responsible for escape and evasion. Its mission was to help resistance fighters and Allied troops in enemy-occupied territory, often by providing equipment, such as compasses hidden inside buttons or pens. Christopher Clayton-Hutton was the intelligence officer behind a whole range of ingenious gadgets; it may not be a coincidence that he worked on the floor below Ian Fleming. He regarded a map as “the escaper’s most important accessory” and realised that one made of cloth would have huge advantages, not least because it could be hidden inside a cigarette packet or the hollow heel of a flying boot.

Clayton-Hutton enlisted the help of the manufacturer of Monopoly, Waddingtons, which was used to printing on fabric as it also made bunting and souvenirs for jubilees and fêtes. Letters between MI9 and Waddingtons reveal a quest for the lightest and most durable maps. Code words were used in case the letters fell into the wrong hands: maps are always referred to as “pictures”, and rather than being delivered to the War Office, the finished maps were sent to the left-luggage desk at King’s Cross station, to be collected later.

This fascinating correspondence survives only because, when Waddingtons was being taken over in 1994 by Hasbro, which had no interest in its archives, Peter Barber received a phone call from a member of staff there. The man was trying to find a home for the silk maps, but mentioned that he’d just thrown out all the letters relating to their creation. Barber asked him to fish them out of the skip, at once. “And now they’re one of our prized manuscripts. If I hadn’t been in the office that day…” History hangs by such slender threads.

In conditions of great secrecy, Waddingtons started by printing ordinary silk maps. Later in the war, it produced lighter versions on tissue paper that could be smuggled into prisoner-of-war camps inside playing cards, chess sets or Monopoly boards. These sometimes gave directions to the nearest border and advice on how to cross it. Subtle differences on the Monopoly boards indicated which map was inside: a full stop after “Free Parking” meant Germany and northern France; a full stop after Marylebone Station denoted Italy.

Reading the directions on some of these maps leaves you breathless. They still give off the smell of danger, all the sharper for their matter-of-fact, military tone. “As this road heads west to Italy, it is important to turn left as soon as it is reached.” That one shows the route from Salzburg to Mojstrana in the then Yugoslavia, held by Allied-friendly forces—a distance of 290km, which, according to Google Maps, would take 61 hours on foot, and presumably more if trying to avoid fascist troops. “Frontier guards usually go alone and seldom in more than pairs. If pursued on open mountains, make for loose rocks which can be rolled and avoid solid rock. Besides the cover, one near-miss with a 10lb rock will often scare off a man. Roll five small rocks rather than one large one. Approach to the frontier along a spur is harder going but far less likely to be spotted.”

Not being spotted was what the maps were all about. It is estimated that, of the 35,000-plus Allied troops who made their way back from behind enemy lines, about half would have had a silk map about their person.

Rebecca Willis is our associate editor and a former travel editor of Vogue

Image Colin Crisford

Why the Mona Lisa Stands Out



Top 12 of 2014. No.1: when a work of art is considered great, we may stop thinking about it for ourselves. Ian Leslie weighs the evidence

From INTELLIGENT LIFE magazine, May/June 2014

In 1993 a psychologist, James Cutting, visited the Musée d’Orsay in Paris to see Renoir’s picture of Parisians at play, “Bal du Moulin de la Galette”, considered one of the greatest works of impressionism. Instead, he found himself magnetically drawn to a painting in the next room: an enchanting, mysterious view of snow on Parisian rooftops. He had never seen it before, nor heard of its creator, Gustave Caillebotte.

That was what got him thinking.

Have you ever fallen for a novel and been amazed not to find it on lists of great books? Or walked around a sculpture renowned as a classic, struggling to see what the fuss is about? If so, you’ve probably pondered the question Cutting asked himself that day: how does a work of art come to be considered great?

The intuitive answer is that some works of art are just great: of intrinsically superior quality. The paintings that win prime spots in galleries, get taught in classes and reproduced in books are the ones that have proved their artistic value over time. If you can’t see they’re superior, that’s your problem. It’s an intimidatingly neat explanation. But some social scientists have been asking awkward questions of it, raising the possibility that artistic canons are little more than fossilised historical accidents.

Cutting, a professor at Cornell University, wondered if a psychological mechanism known as the “mere-exposure effect” played a role in deciding which paintings rise to the top of the cultural league. In a seminal 1968 experiment, people were shown a series of abstract shapes in rapid succession. Some shapes were repeated, but because they came and went so fast, the subjects didn’t notice. When asked which of these random shapes they found most pleasing, they chose ones that, unbeknown to them, had come around more than once. Even unconscious familiarity bred affection.

Back at Cornell, Cutting designed an experiment to test his hunch. Over a lecture course he regularly showed undergraduates works of impressionism for two seconds at a time. Some of the paintings were canonical, included in art-history books. Others were lesser known but of comparable quality. These were exposed four times as often. Afterwards, the students preferred them to the canonical works, while a control group of students liked the canonical ones best. Cutting’s students had grown to like those paintings more simply because they had seen them more.

Cutting believes his experiment offers a clue as to how canons are formed. He points out that the most reproduced works of impressionism today tend to have been bought by five or six wealthy and influential collectors in the late 19th century. The preferences of these men bestowed prestige on certain works, which made the works more likely to be hung in galleries and printed in anthologies. The kudos cascaded down the years, gaining momentum from mere exposure as it did so. The more people were exposed to, say, “Bal du Moulin de la Galette”, the more they liked it, and the more they liked it, the more it appeared in books, on posters and in big exhibitions. Meanwhile, academics and critics created sophisticated justifications for its pre-eminence. After all, it’s not just the masses who tend to rate what they see more often more highly. As contemporary artists like Warhol and Damien Hirst have grasped, critical acclaim is deeply entwined with publicity. “Scholars”, Cutting argues, “are no different from the public in the effects of mere exposure.”

The process described by Cutting evokes a principle that the sociologist Duncan Watts calls “cumulative advantage”: once a thing becomes popular, it will tend to become more popular still. A few years ago, Watts, who is employed by Microsoft to study the dynamics of social networks, had a similar experience to Cutting in another Paris museum. After queuing to see the “Mona Lisa” in its climate-controlled bulletproof box at the Louvre, he came away puzzled: why was it considered so superior to the three other Leonardos in the previous chamber, to which nobody seemed to be paying the slightest attention?

When Watts looked into the history of “the greatest painting of all time”, he discovered that, for most of its life, the “Mona Lisa” languished in relative obscurity. In the 1850s, Leonardo da Vinci was considered no match for giants of Renaissance art like Titian and Raphael, whose works were worth almost ten times as much as the “Mona Lisa”. It was only in the 20th century that Leonardo’s portrait of his patron’s wife rocketed to the number-one spot. What propelled it there wasn’t a scholarly re-evaluation, but a burglary.

In 1911 a maintenance worker at the Louvre walked out of the museum with the “Mona Lisa” hidden under his smock. Parisians were aghast at the theft of a painting to which, until then, they had paid little attention. When the museum reopened, people queued to see the gap where the “Mona Lisa” had once hung in a way they had never done for the painting itself. The police were stumped. At one point, a terrified Pablo Picasso was called in for questioning. But the “Mona Lisa” wasn’t recovered until two years later when the thief, an Italian carpenter called Vincenzo Peruggia, was caught trying to sell it to the Uffizi Gallery in Florence.

The French public was electrified. The Italians hailed Peruggia as a patriot who wanted to return the painting home. Newspapers around the world reproduced it, making it the first work of art to achieve global fame. From then on, the “Mona Lisa” came to represent Western culture itself. In 1919, when Marcel Duchamp wanted to perform a symbolic defacing of high art, he put a goatee on the “Mona Lisa”, which only reinforced its status in the popular mind as the epitome of great art (or as the critic Kenneth Clark later put it, “the supreme example of perfection”). Throughout the 20th century, musicians, advertisers and film-makers used the painting’s fame for their own purposes, while the painting, in Watts’s words, “used them back”. Peruggia failed to repatriate the “Mona Lisa”, but he succeeded in making it an icon.

Although many have tried to explain it that way, it seems improbable that the painting’s unique status can be attributed entirely to the quality of its brushstrokes. It has been said that the subject’s eyes follow the viewer around the room. But as the painting’s biographer, Donald Sassoon, drily notes, “In reality the effect can be obtained from any portrait.” Duncan Watts proposes that the “Mona Lisa” is merely an extreme example of a general rule. Paintings, poems and pop songs are buoyed or sunk by random events or preferences that turn into waves of influence, rippling down the generations.

“Saying that cultural objects have value,” Brian Eno once wrote, “is like saying that telephones have conversations.” Nearly all the cultural objects we consume arrive wrapped in inherited opinion; our preferences are always, to some extent, someone else’s. Visitors to the “Mona Lisa” know they are about to see the greatest work of art ever and come away appropriately awed—or let down. An audience at a performance of “Hamlet” know it is regarded as a work of genius, so that is what they mostly see. Watts even calls the pre-eminence of Shakespeare a “historical fluke”.

Shamus Khan, a sociologist at Columbia University, thinks the way we define “great” has as much to do with status anxiety as artistic worth. He points out that in 19th-century America, the line between “high” and “low” culture was lightly drawn. A steel magnate’s idea of an entertaining evening might include an opera singer and a juggler. But by the turn of the 20th century, the rich were engaged in a struggle to assert their superiority over a rising middle class. They did so by aligning themselves with a more narrowly defined stratum of “high art”. Buying a box at the opera or collecting impressionist art was a way of securing membership of a tribe.

Although the rigid high-low distinction crumbled in the 1960s, we still use culture as a badge of identity, albeit in subtler ways. Today’s fashion for eclecticism—“I love Bach, Abba and Jay Z”—is, Khan argues, a new way for the bohemian middle class to demarcate themselves from what they perceive to be the narrow tastes of those beneath them in the social hierarchy.

The innate quality of a work of art is starting to seem like its least important attribute. But perhaps it’s more significant than our social scientists allow. First of all, a work needs a certain quality to be eligible to be swept to the top of the pile. The “Mona Lisa” may not be a worthy world champion, but it was in the Louvre in the first place, and not by accident.

Secondly, some stuff is simply better than other stuff. Read “Hamlet” after reading even the greatest of Shakespeare’s contemporaries, and the difference may strike you as unarguable. Compare “To be or not to be”, with its uncanny evocation of conscious thought, complete with hesitations, digressions and stumbles into insight, to any soliloquy by Marlowe or Webster, and Shakespeare stands in a league of his own. Watts might say I’m deluding myself, and so are the countless readers and scholars who have reached the same conclusion. But which is the more parsimonious explanation for Shakespeare’s ascendancy?

A study in the British Journal of Aesthetics suggests that the exposure effect doesn’t work the same way on everything, and points to a different conclusion about how canons are formed. Building on Cutting’s experiment, the researchers repeatedly exposed two groups of students to works by two painters, the British pre-Raphaelite John Everett Millais and the American populist Thomas Kinkade. Kinkade’s garish country scenes are the epitome of kitsch—the gold standard for bad art. The researchers found that their subjects grew to like Millais more, as you might expect, given the mere-exposure effect. But they liked Kinkade less. Over time, exposure favours the greater artist.

The social scientists are right to say that we should be a little sceptical of greatness, and that we should always look in the next room. Great art and mediocrity can get confused, even by experts. But that’s why we need to see, and read, as much as we can. The more we’re exposed to the good and the bad, the better we are at telling the difference. The eclecticists have it.

Flocking to her: the “Mona Lisa” in her room at the Louvre, surrounded by admirers. Or are they sheep?

Ian Leslie is the author of “Curious: The Desire to Know and Why Your Future Depends on It” and tweets @mrianleslie

Game Theory, Poker, and Real Life

Game theorists crack poker

An ‘essentially unbeatable’ algorithm for the popular card game points to strategies for solving real-life problems without having complete information.

Philip Ball
08 January 2015

maxuser/Shutterstock

Robots are unlikely to be welcome in casinos any time soon, especially now that a poker-playing computer has learned to play a virtually perfect game — including bluffing.

A new computer algorithm can play one of the most popular variants of poker essentially perfectly. Its creators say that it is virtually “incapable of losing against any opponent in a fair game”.

That means that this particular variant of poker, called heads-up limit hold’em (HULHE), can be considered solved. The algorithm is described in a paper in Science.

The strategy the authors have computed is so close to perfect “as to render pointless further work on this game”, says Eric Jackson, a computer-poker researcher based in Menlo Park, California.

“I think that it will come as a surprise to experts that a game this big has been solved this soon,” Jackson adds.

A few other popular games have been solved before. In particular, in 2007 a team from the computer-science department at the University of Alberta that produced the new poker result — including Neil Burch, a co-author of the latest study — cracked draughts, also known as checkers.

With regret
In poker, the main challenge is dealing with the immense number of possible ways that a game can be played. Michael Bowling and his colleagues at the University of Alberta in Edmonton, Canada, have looked at one of the most popular forms, called Texas hold’em. With just two players, the game becomes heads-up, and it is a ‘limit’ game when it has fixed bet sizes and a fixed number of raises. There are 3.16 × 10¹⁷ states that HULHE can reach, and 3.19 × 10¹⁴ possible points at which a player must make a decision.

Bowling and colleagues designed their algorithm so that it would learn from experience; getting to its champion-level skills required playing more than 1,500 games. At the beginning, it made its decisions randomly, but then it updated itself by attaching a ‘regret’ value to each decision, depending on how poorly it fared.

This procedure, known as counterfactual regret minimization, has been widely adopted in the Annual Computer Poker Competition, which has run since 2006. But Bowling and colleagues have improved it by allowing the algorithm to re-evaluate decisions considered to be poor in earlier training rounds.
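The regret-updating idea can be sketched in a few lines of code. What follows is not the team’s counterfactual algorithm — which propagates regrets through poker’s entire game tree — but the simpler regret-matching rule it builds on, applied here to rock-paper-scissors against a fixed, rock-heavy opponent; the payoffs, iteration count and opponent strategy are all illustrative.

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[a][b]: payoff to the player choosing a against an opponent choosing b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regrets(regrets):
    # Play each action in proportion to its positive cumulative regret;
    # fall back to uniform play when no action has positive regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total > 0:
        return [p / total for p in pos]
    return [1.0 / ACTIONS] * ACTIONS

def train(iterations, opp_strategy):
    regrets = [0.0] * ACTIONS
    strategy_sum = [0.0] * ACTIONS
    for _ in range(iterations):
        strat = strategy_from_regrets(regrets)
        for a in range(ACTIONS):
            strategy_sum[a] += strat[a]
        my_a = random.choices(range(ACTIONS), weights=strat)[0]
        opp_a = random.choices(range(ACTIONS), weights=opp_strategy)[0]
        # Regret for each action: how much better it would have done
        # than the action actually taken this round.
        actual = PAYOFF[my_a][opp_a]
        for a in range(ACTIONS):
            regrets[a] += PAYOFF[a][opp_a] - actual
    total = sum(strategy_sum)
    # The time-averaged strategy is what converges, not the last one played.
    return [s / total for s in strategy_sum]

random.seed(0)
avg = train(100_000, [0.4, 0.3, 0.3])  # opponent leans towards rock
```

Against a rock-heavy opponent the average strategy drifts heavily towards paper, the best response: the algorithm starts out random and, exactly as the article describes, learns purely from accumulated regret.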

The other crucial innovation was the handling of the vast amounts of information that need to be stored to develop and use the strategy — of the order of 262 terabytes. This volume of data demands disk storage, which is slow to access. The researchers devised a data-compression method that shrinks the data to a more manageable 11 terabytes, and the use of disk storage adds only 5% to the computation time.

“I think the counterfactual regret algorithm is the major advance,” says computer scientist Jonathan Shapiro at the University of Manchester, UK. “But they have done several other very clever things to make this problem computationally feasible.”

Bluffing game
As part of its developing strategy, the computer learned to inject a certain dose of bluffing into its plays. Although bluffing seems like a very human, psychological element of the game, it is in fact part of game theory — and, typically, of computer poker. “Bluffing falls out of the mathematics of the game,” says Bowling, and you can calculate how often you should bluff to obtain best results.

Of course, no poker algorithm can be mathematically guaranteed to win every game, because the game involves a large element of chance based on the hand you’re dealt. But Bowling and his colleagues have demonstrated that their algorithm always wins in the long run.

The problem is only what the researchers call ‘essentially solved’, meaning that there is an extremely small margin by which, in theory, the computer might be beaten by skill rather than chance. But this margin is negligible in practice.

Bowling says that the approach might be useful in real-life situations when one has to make decisions with incomplete information — for example, for managing a portfolio of investments. The team is now focusing on applying their approach to medical decision-making, in collaboration with diabetes specialists.

New Year Resolutions

Doing Better in 2015

For some reason, I continue making resolutions for each coming year even though my track record is somewhat worse than mediocre. Being, in a minor way, a creature of habit, I am doing so again this year, though adding a resolution to keep more than three-fourths of them. Some are for self-improvement (fitness and mental agility), while others deal with my relationships with others, including family and friends.

So, here they are:

Ride my bicycle at least 5,000 miles in 2015
Run two half marathons
Take an inch off my waist
Other than wine, consume no simple carbohydrates

Finish my Latin 1-3 course on Rosetta Stone
Finish a Spanish course
Take more pictures for my online gallery

Listen more attentively to family and friends
Collect more stories from them and tell fewer of my own.
Get around to writing my first novel.
Tell my wife I love her at least twice each day even when we are apart
Live in the present rather than in the future
Continue denying that I am getting old: older is okay, but old is unforgivable at my age.
Become more active and less cynical about local politics

I will give a report card this time next year.

Good luck to you with your resolutions.

Lives Lived in Miniature


Betty Pinney House, 1870, Victoria and Albert Museum

~ Posted by Kassia St Clair, December 29th 2014


For her seventh birthday earlier this year my niece asked for a box that only she could open. It would come to contain, she explained, all her secrets. This may be familiar from your own childhood: you begin to close off physical and interior spaces from your parents and siblings. You begin to yearn for privacy.

This intimate idea is explored in a new exhibition of twelve dolls’ houses at the V&A Museum of Childhood in East London. Although here they stand slack-doored, open and illuminated, so that visitors can peer inside, their recesses capture the impulse to craft personal space and write your own narrative. A stranger looking in might only spot that the lady of the house is propped upstairs by the bed, dwarfed by a wooden dresser that nearly reaches the ceiling. The setter of the scene would understand why the lady was up there: she might be waiting for someone, or perhaps hiding from the sinister footman with shapely ankles stalking through the drawing room below. The bigger the house, the more elaborate the décor, and the more convoluted the plot. Betty Pinney’s house (slide 2), built in the 1890s and renovated in the 1970s, is a daedal structure decorated with extreme care and teeming with eccentric dolls and shrunken everyday objects: a rocking horse, a tiny set of tin soldiers, a working lift, a drunken man slumped in the living room next to a set of decanters.

The houses are conspicuously female spaces, a series of fetishised snapshots of changing tastes in interior design and decoration. Miss Pinney was a textile designer, and she used her own prints for wallpaper and upholstery. Amy Miles’s house (slide 3), built in the 1890s, is stuffed with miniature versions of fashionable contemporary consumer goods: folding japanned screens, towel racks and a paper bath mat. An imposing cream and verdigris villa, built in 1935 in the new Art Deco style, has a pool and tennis court. Unusually, in this design by Moray Thomas (slide 4), the outside is as important as the inside: a swimmer prepares to dive, a doll stretches out on a sun-lounger on an upstairs balcony; murals in soft hues depict leisure activities, like flying and swimming.

For most of the period covered here—the 18th to 20th centuries—the dolls’ houses would be the closest to property ownership women would get. They were passed from mother to daughter, moving with them from the full-sized homes of their fathers into those of their husbands. The Tate Baby House, modelled after a late-18th-century country home, spent 170 years descending the female line of a single family, traipsing from Covent Garden to a Cambridge mansion to a country manor house and finally back to London. Female empowerment comes late in the exhibition. The jewel-coloured Jenny’s Home modular system, created in the 1960s in conjunction with Homes & Gardens magazine, is set up here as a high-rise apartment block. But it could just as easily be slotted together as a sprawling villa or a two-up, two-down town house. As women’s rights progressed it was accepted that they could be architects, designing buildings and their interiors. And as the decades march on, more women can expect to be homeowners, too. My niece has little need of a portable symbol of ownership and privacy: she can expect to build her own story to scale.

Small Stories: At Home in a Dolls’ House is at the V&A Museum of Childhood in London until September 6th

Kassia St Clair is contributing editor (style) to Intelligent Life and assistant books and arts editor at The Economist

Solar Power for Japan?

FEATURED STORY
Can Japan Recapture Its Solar Power?
The way the Land of the Rising Sun built and lost its dominance in photovoltaics shows just how vulnerable renewables remain to changing politics and national policies.

By Peter Fairley on December 18, 2014

WHY IT MATTERS

The fate of solar power in Japan, which lost 30 percent of its electricity production after the Fukushima disaster, will be an important test of renewable technologies.

It’s 38 °C on the Atsumi Peninsula southwest of Tokyo: a deadly heat wave has been gripping much of Japan late this summer. Inside the offices of a newly built power plant operated by the plastics company Mitsui Chemicals, the AC is blasting. Outside, 215,000 solar panels are converting the blistering sunlight into 50 megawatts of electricity for the local grid. Three 118-meter-high wind turbines erected at the site add six megawatts of generation capacity to back up the solar panels during the winter.

Mitsui’s plant is just one of thousands of renewable-power installations under way as Japan confronts its third summer in a row without use of the nuclear reactors that had delivered almost 30 percent of its electricity. In Japan people refer to the earthquake and nuclear disaster at Tokyo Electric Power Company’s Fukushima Daiichi nuclear power plant on March 11, 2011, as “Three-Eleven.” Radioactive contamination forced more than 100,000 people to evacuate and terrified millions more. It also sent a shock wave through Japan’s already fragile manufacturing sector, which is the country’s second-largest employer and accounts for 18 percent of its economy.

Eleven of Japan’s 54 nuclear reactors shut down on the day of the earthquake. One year later every reactor in Japan was out of service; each had to be upgraded to meet heightened safety standards and then get in a queue for inspections. During my visit this summer, Japan was still without nuclear power, and only aggressive energy conservation kept the lights on. Meanwhile, the country was using so much more imported fossil fuel that electricity prices were up by about 20 percent for homes and 30 percent for businesses, according to Japan’s Ministry of Economy, Trade, and Industry (METI).

The post-Fukushima energy crisis, however, has fueled hopes for the country’s renewable-power industry, particularly its solar businesses. As one of his last moves before leaving office in the summer of 2011, Prime Minister Naoto Kan established potentially lucrative feed-in tariffs to stimulate the installation of solar, wind, and other forms of renewable energy. Feed-in tariffs set a premium rate at which utilities must purchase power generated from such sources.

The government incentive is what motivated Mitsui to finally make use of land originally purchased for an automotive plastics factory that was never built because carmakers moved manufacturing operations overseas. The site had sat idle for 21 years before Mitsui assembled a consortium to help finance a $180 million investment in solar panels and wind turbines. By moving fast, Mitsui and its six partners qualified for 2012 feed-in tariffs that promised industrial-scale solar facilities 40 yen (35 cents) per kilowatt-hour generated for 20 years. At that price, says Shin Fukuda, the former nuclear engineer who runs Mitsui’s energy and environment business, the consortium should earn back its investment in 10 years and collect substantial profits from the renewable facility for at least another decade.
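Fukuda’s ten-year payback estimate can be sanity-checked with a rough calculation. The tariff, solar capacity and investment figures come from the article; the capacity factor is an assumed value typical of utility-scale solar, and operating costs, the wind turbines’ output and currency movements are all ignored.

```python
# Rough payback check for the Mitsui solar plant. The capacity factor
# is an assumption, not a figure from the article.
capacity_kw = 50_000          # 50 MW of solar capacity
tariff_usd_per_kwh = 0.35     # 2012 feed-in tariff: 40 yen (35 cents) per kWh
investment_usd = 180e6        # the consortium's stated investment
capacity_factor = 0.13        # assumed average output vs. nameplate capacity

annual_kwh = capacity_kw * 8760 * capacity_factor   # 8,760 hours in a year
annual_revenue_usd = annual_kwh * tariff_usd_per_kwh
payback_years = investment_usd / annual_revenue_usd  # roughly 9 years
```

On these assumptions the plant earns about $20 million a year and repays the investment in roughly nine years, consistent with the ten-year figure quoted in the article.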

Sanyo Electric’s so-called Solar Ark, built in 2001 during the heyday of the country’s initial solar boom, was designed to generate 630 kilowatts of power, making it one of the world’s largest solar facilities. It boasts 5,046 solar panels.

Overnight, Japan has become the world’s hottest solar market: in less than two years after Fukushima melted down, the country more than doubled its solar generating capacity. According to METI, developers installed nearly 10 gigawatts of renewable generating capacity through the end of April 2014, including 9.6 gigawatts of photovoltaics. (The nuclear reactors at Fukushima Daiichi had 4.7 gigawatts of capacity; overall, the country has around 290 gigawatts of installed electricity-generating capacity.) Three-quarters of the new solar capacity was in large-scale installations such as Mitsui’s.

Yet this explosion of solar capacity marks a bittersweet triumph for Japan’s solar-panel manufacturers, which had led the design of photovoltaics in the 1980s and launched the global solar industry in the 1990s. Bitter because most of the millions of panels being installed are imports made outside the country. Even some Japanese manufacturers, including early market leader Sharp, have taken to buying panels produced abroad and selling them in Japan.

How Japan—once the world’s most advanced semiconductor producer and a pioneer in using that technology to manufacture photovoltaic cells—gave away its solar industry is a story of national insecurity, monopoly power, and money-driven politics. It is also a tale with important lessons for those who believe that the strength of renewable technologies will provide sufficient incentives for countries to transform their energy habits.

In Japan, for most of the 2000s, impressive advances in photovoltaics were ignored because the country’s powerful utilities exerted their political muscle to favor nuclear power. And despite resurging consumer demand for solar power and strong public disdain for nuclear, the same thing could happen again. Will a country with few fossil-fuel resources and bleak memories of the Fukushima disaster take advantage of its technical expertise to recapture its position as a leading producer of photovoltaics, or will it turn away from renewable energy once more?

Riches

Longer than three football fields and over 37 meters tall, the Solar Ark is clearly visible from the Tokaido Shinkansen as the bullet train crosses central Japan. The structure, covered with photovoltaic panels, looks like a temple of energy from another era—a time when Japan owned the solar-power industry. Sanyo erected the Ark in 2001, arraying on it 5,046 solar panels capable of generating 630 kilowatts of pollution-free electricity.

An image from Japanese television captures smoke rising after a hydrogen explosion at Fukushima Daiichi’s unit 3 on March 14, 2011, days after the initial earthquake. Following the Fukushima disaster, all the country’s nuclear reactors were shut down.

The era that gave rise to this feat began with the energy crises of the 1970s, when spiking global petroleum prices pummeled Japan’s export-driven manufacturing economy. The country harnessed its dominance in the production of electronic semiconductor chips to pursue alternatives for cleaner, safer power in photovoltaics. And unlike other countries, such as the United States, it stuck with the resulting solar development programs even when oil prices dropped in the 1980s. Between 1985 and 2007, Japanese researchers filed for more than twice as many patents in solar technologies as rival U.S. and European inventors combined. Companies like Sharp, Sanyo Electric, Panasonic, and Kyocera became the clear leaders in solar technology. Japanese producers began ramping up sales and solar installations in the 1990s. By 2001 total solar-power output in Japan was 500 times higher than it had been a decade earlier—a decade in which U.S. solar generation edged up by a meager 15 percent.

Then it all came crashing to a halt a decade ago as the country staked its future on nuclear power.

The government’s nuclear plans were ambitious: by the time Fukushima Daiichi melted down, they called for 14 additional reactors by 2030, which would have nearly doubled nuclear generation, raising it to 50 percent of Japan’s power supply. Meanwhile, photovoltaic sales in Japan declined during the mid-2000s, and by 2007 Japanese producers had ceded global market leadership to U.S., Chinese, and European manufacturers. In just a few years, the country had gone from industry leader to has-been.

What turned Japan away from the sun was a pernicious blend of perception, culture, and politics. Nuclear power had an aura of strength, while energy based on intermittent renewable power sources looked weak and unreliable—an impression encouraged by the country’s politically powerful utilities. Though Japan has numerous locations that are ideal for wind and solar power, power companies convinced the public that energy choices were limited. “We are really severely of the mind-set that we lack resources and that Japan has to depend on imported fuel,” says Mika Ohbayashi, director of the Tokyo-based Japan Renewable Energy Foundation.


The utilities’ view was colored by self-interest. Japan’s 10 utilities were (and remain) vertical monopolies. Each controls power generation, transmission, and distribution in its respective region, and its grid is designed to deliver electricity from centralized power plants—including large nuclear reactors. They lack, by design, the interconnections that facilitate the safe use of variable power generation. In most industrialized countries, governments have broken up the monopolies in power markets, freeing operators of transmission grids to build those interconnections, but Japan’s utilities have bucked the deregulation trend. The interconnection problem is further compounded by a historical artifact: two AC frequencies that split the country’s electrical system in two. Eastern Japan operates at 50 hertz, while western Japan uses 60-hertz power—a barrier that proved crippling in 2011, in the immediate aftermath of the Fukushima disaster, when a suddenly underpowered Tokyo could access little of Osaka’s surplus power.

Asked why Japan chose not to push solar power aggressively when it dominated the global industry, former prime minister Naoto Kan told me he puts the blame squarely on the country’s utilities: “The reason is very clear. The electric power companies, the people who wanted to promote nuclear power, were opposed.”

Revival

In a subdivision spreading over reclaimed land in the bay at Ashiya, a city between Osaka and Kobe, a 400-unit residential development called Smart City Shio-Ashiya (“Salty-Ashiya”) is taking shape, the brainchild of the Panasonic subsidiary PanaHome. On a Sunday in July, solar panels atop each of the 50 houses built to date are pumping surplus power into the local grid, and PanaHome salespeople are selling a couple with toddlers on the homes’ energy benefits and earthquake resistance.

Shio-Ashiya’s two-story homes include geothermal heating and cooling and other green design features to minimize power consumption, while the high-efficiency rooftop solar panels maximize power generation. The surplus power should, according to PanaHome saleswoman Saho Watanabe, earn residents roughly 100,000 yen ($825) each year. Watanabe touts another feature, which should be invaluable when the grid goes down—say, in an earthquake or typhoon. She opens a cupboard in the dining room of a model home to reveal a lithium battery that, working with an energy management system near the kitchen, can run the family’s AC/heat pumps, first-floor lighting, and refrigerator for about two days.

Panasonic’s solar hopes rest on a technology invented by researchers at Sanyo in the 1990s and acquired by Panasonic four years ago when the corporations merged. The solar cells combine conventional crystalline-silicon and thin-film amorphous-silicon technologies to achieve relatively high efficiency in converting sunlight to electricity. Called HIT, for heterojunction with intrinsic thin layer, the hybrid technology has become a mainstay of the company’s solar strategy.

Shingo Okamoto, a materials scientist who spent his career at Sanyo Electric before becoming director of solar R&D for Panasonic’s EcoSolutions business group, says the panels are earning premium pricing in domestic sales because they produce far more electricity from a given rooftop than the cheaper polycrystalline panels that dominate the market. Assuming that each household consumes electricity at the Japanese average of 1,400 kilowatt-hours per year during daylight hours, he says, a household with the Panasonic system will have 52 percent more surplus power to return to the grid than a home with an ordinary solar system.

Residential power in Japan is pricey—at 24.33 yen (20 cents) per kilowatt-hour in 2013, it was nearly double the U.S. average. And given that electricity prices are “sure to keep going up,” says Okamoto, the most efficient rooftop photovoltaic systems will have a strong advantage. When we met in July at Panasonic’s Shiga plant, east of Kyoto, the plant had just started shipping its newest and most powerful panel design. The advances behind the panel, which uses cells with an efficiency of 22.5 percent, include a light-scattering film on the backside to enhance light absorption. Assembly lines were running 24 hours a day to keep up with domestic demand.

Further advances are in the pipeline. In April, Okamoto’s group produced a silicon solar cell that reached 25.6 percent efficiency, breaking a 15-year-old world record of 25.0 percent. Though the record was set in the lab using a prototype device, Okamoto predicts that the group will ultimately be able to produce commercial cells whose efficiency is within a few percentage points of crystalline silicon’s theoretical limit, 29 percent.

Repowering

Across the coastal mountains from the smashed reactors at Fukushima Daiichi and the contaminated landscape they created, one of the world’s most advanced facilities dedicated to renewable-energy R&D is gearing up. The $100 million complex opened in April in Koriyama, Fukushima Prefecture’s commercial center, and pulls together previously disparate research by Japan’s science and technology agencies. The institute is not here by accident. It’s an explicit commitment to the emotionally and economically devastated region.

The verdant prefecture north of Tokyo remains depopulated after the earthquake, tsunami, and meltdowns of March 2011. Many of the more than 100,000 residents rendered homeless by the disasters will never return. Replacing lost residents and businesses in an area known for radioactive contamination is not easy. Solar-powered radioactivity monitors in Koriyama show that the air is safe, but 100 kilometers to the east, Tokyo Electric Power Company (TEPCO) still struggles to keep contamination from polluting both groundwater and the sea.

The Koriyama R&D facility boasts state-of-the-art labs for crystallizing, slicing, and patterning silicon wafers, and its production line can churn out up to 360 wafers an hour. Outside, a variety of photovoltaics are being tested, along with a modest-sized wind turbine and a large grid-connected battery. Its most ambitious program is directed by Makoto Konagai, one of Japan’s most celebrated solar scientists, who has moved to Koriyama from the Tokyo Institute of Technology. His goal is to smash through the theoretical efficiency limit of silicon cells, demonstrating rates of 30 percent by 2016 and up to 40 percent by 2021. It is an ambitious plan, but three large manufacturers, including Panasonic, have signed on.
Workers watched in October as a crane lifted a section of a radiation shroud that had been placed over a reactor at Fukushima after the earthquake. Lifting the cover exposed the debris inside the destroyed building for the first time since 2011.

While some other researchers seek more efficient alternatives to silicon, which accounts for 90 percent of current solar production, Konagai seeks to redesign the silicon cell from top to bottom. One of his teams, for example, is developing a casting method to produce higher-quality silicon ingots. Another team is rethinking the way semiconductor structures are patterned to turn silicon wafers into cells: Konagai’s plan is to etch or build vertical structures just a few nanometers across, almost 100,000 times narrower than the silicon wafer itself. If his simulations are borne out, the resulting nanowires or nanowalls will alter the electrical behavior of the silicon within, boosting its potential to absorb light and gather electrical charge.

In June 2011, Fukushima’s previously pro-nuclear governor, Yuhei Sato, declared that Fukushima should pin its future on renewable energy. Community activists initiated dozens of projects across the prefecture, and in 2012 the prefecture set a goal of increasing renewable energy from 22 percent to 100 percent of its power supply by 2040.

The cold reality of Japan’s energy predicament, however, is that such bold ambitions are likely to fall short. The type of solar expansion that can be expected from feed-in tariffs alone isn’t likely to meet the prefecture’s goals—or even to replace the power that Japan’s nuclear fleet once delivered. And political and economic forces don’t seem to favor policies that would expand renewables more dramatically.

Projections by the Japan Photovoltaic Energy Association, a Tokyo-based trade group, suggest that annual solar installations will peak this year just shy of seven gigawatts. The group predicts that total installed solar capacity in Japan will reach 102 gigawatts by 2030, which would be enough to meet only a small fraction of the country’s electricity needs. Moderate deployment of wind power would provide some additional electricity. But Japan needs far more. While Japanese consumers and industry have cut power demand since 2011, utilities covered most of the nuclear shortfall by ramping up combustion of imported natural gas, petroleum, and coal. Fossil fuels accounted for some 89 percent of Japan’s electricity generation in 2012. As a result, its total greenhouse-gas emissions were 7 percent higher that year than in 2010.

The prospects for renewable power could get worse. To hedge against the possibility that they may be unable to restart nuclear reactors, utilities are building a new generation of coal-fired power stations. By Ohbayashi’s count, some 13 gigawatts of new coal-fired power generation are now in development.

Meanwhile, the relatively high cost of Japan’s solar power threatens to incite a backlash against renewable energy, encouraged by the pro-nuclear utilities. “There is no doubt that with the current photovoltaics, power generation is expensive,” says Okamoto, expressing his personal viewpoint rather than Panasonic’s. He fears negative reactions from ratepayers, whose rising power bills pay the tariffs that fund photovoltaic systems on rooftops and at power plants like Mitsui Chemicals’: “If we continue to expand our business with the current level of costs, we may have objections.”

What’s more, the old politics that favor nuclear power seem to be returning. Though opinion polls consistently show that a majority of Japanese oppose restarting the utilities’ idled reactors, Prime Minister Shinzo Abe vows to restart those deemed safe by Japan’s Nuclear Regulation Authority. In July the agency issued the first such certification, to a pair of reactors on the southern island of Kyushu—even though offsite emergency control centers mandated after Fukushima have yet to be completed and the reactors are dangerously close to an active volcano. Iodine pills were quickly distributed to the reactors’ neighbors, and the precedent-setting restart is expected soon, having been given the green light by the local governor and by the plant’s host city, Satsumasendai, whose economy is crippled without the jobs, tax dollars, and business that the plant provides.

At the same time, utilities are delaying grid connections to renewable developments or imposing grid-upgrade fees that render renewable projects infeasible. The pushback is hitting wind power hardest. Japan’s meager market for wind turbines has actually slowed since Fukushima.

This summer the Ministry of Economy, Trade, and Industry (METI) launched a committee to manage the implementation of new energy policies. One topic: recent efforts by utilities and the government to restrain further solar installations. Ohbayashi says METI is backpedaling because it misjudged the commercial potential of renewables and their potential impact on the utilities. Says Ohbayashi, “They didn’t foresee the explosive growth of photovoltaics.”

The Japanese government has plans to radically overhaul the country’s balkanized wholesale market and power grid, preparing for a future in which producers compete for the right to deliver power. In that scenario, renewable energy could thrive.

The most critical step, however, is still years away: forcing the vertically integrated utilities to “unbundle” their power generation and transmission businesses. Unbundling is essential to create a level playing field for producers and a system optimized to deliver the cheapest and cleanest power available in real time.

Reëngineering the grid to accommodate massive flows of renewables such as wind and solar is a potentially expensive route for Japan. However, it’s not necessarily more costly than the path back to nuclear that the current government and the utilities are charting. Factoring in the cost of insurance against accidents and upgrades to prevent them could double the cost of nuclear energy.

As former prime minister Naoto Kan told me, the disaster at Fukushima Daiichi has forever altered the economics of nuclear power. “In the past, nuclear power was said to be able to supply power at a very cheap cost, but we know now that is not correct,” he said. “That calculation assumed that no accidents could occur. Now we know they can.”

Peter Fairley is a contributing editor for MIT Technology Review.

Cricket vs Baseball

THE BATTLE OF THE BATS

Sport

Reading the Game: which is the better sport, cricket or baseball? Ed Smith deliberates between the two

From INTELLIGENT LIFE magazine, November/December 2014

Like parents and their teenagers, cricket and baseball are very much alike and yet determined to remain a mystery to each other.

Fourteen years ago, as a 22-year-old professional cricketer who spent his winters in New York, I began writing a book comparing these two sports. I joined up with the New York Mets and swung at some pitches. I lived through an all-New York World Series, trying to follow the path that leads from the baseball diamond to America’s soul. I wandered the streets of Manhattan in the days after 9/11 and watched the Yankees summon a moment of sporting ecstasy amid the rubble. “Playing Hard Ball” was the result. But one question—the big one—seemed too risky even to address. Which game is actually better?

Constraints came from both directions. I felt loyalty to cricket, which paid my wages and filled my dreams. Baseball exerted a different hold: a sense of joyous thanks, bordering on infatuation, towards not only a game but also a city. Now there are no such excuses; daunted but accepting, I must plunge into judgment. It’s a penalty shoot-out, cricket versus baseball, played over five criteria.

First: drama. Cricket, especially the five-day Test, has the ability to nurture deepening tension. The crowd hold several narratives in their minds. What might happen is as interesting as what actually happens, an imaginative depth made possible by cricket’s defining characteristic: time. But for sheer dramatic ecstasy, baseball has the edge. In its rarity and decisiveness, the home run—two extreme forces colliding with brutal symmetry—is like a goal in football (only less liable to be scrappy and untidy). One-nil to baseball—or one-and-oh, as pitchers say.

Second: beauty. Both sports are photogenic. Baseball’s archive of black-and-white photos—the slide home to base, studs high and mud flying, grace and clarity of purpose down in the dirt—matches anything in the museum at Lord’s. The double play, devastatingly complete and perfect, may even trump the direct hit that follows a diving stop at midwicket. But even the smoothest line drive must bow down before cricket’s cover drive. You could watch decades of baseball and never see the equal of David Gower driving. The bat held loosely, the swing an unfurling rather than a coiled spring, the effect gentleness as well as majesty: 1-1.

Third: psychological depth. Before I understood baseball, I used to think this was no contest. But I was watching the wrong things. I used to study the batter (my sporting cousin, after all), trying to enter into his mind, feel his struggle. But the psychological roles are reversed in baseball. In cricket, because he is expected to win any given ball, and because losing his wicket is utterly final, it is the batsman who lives with the guillotine hanging over his neck. In baseball, I eventually realised, that is the life of a pitcher. He, not the batter, is expected to prevail in the next play. Giving up a run in baseball is rare and potentially disastrous, so it has more in common with losing a wicket than with scoring a (cricket) run. This is sport’s ultimate paradox: the more you are expected to succeed, the greater the pressure. The scoring units are just a currency—the more numerous they are, the lower their value. When I saw the torment in the eyes of pitchers, how they live with the terror of conceding a run, how they are solitary and exposed, surrounded by teammates whose primary purpose is to score runs not to prevent them—grasping all this suffering, I felt suddenly at home.

Cricket at its best, however, offers a more symmetrical psychological contest. Baseball batters are not around long enough to go through as great a struggle. In cricket, when bat and ball are in perfect equipoise, each protagonist in danger of toppling over with only the slightest misstep, the pressure is equal on both. It’s 2-1 to cricket, by a whisker.

Fourth: is it fun to play? Any sport, at a level of mastery, offers deep satisfaction. So let’s focus on the experience of the amateur or the child. How high are the barriers to pleasure? Cricket’s technical restraints—bowlers forbidden from bending the arm, batsmen taught to remain stately and sideways-on rather than rotating like a lumberjack hacking at a tree—are bound up with its aesthetic potential. But they certainly don’t help the uninitiated. Baseball, more natural and less buttoned-up, is much closer to the way we throw and hit before we learn how we are supposed to do it. And the grass doesn’t have to be so manicured. In ordinary life, too, cricket’s hunger for time becomes a drag, with the club game relying on spouses being willing to put in a long shift as a single parent. Messing around in a field after a picnic? It’s got to be baseball; 2-2.

Fifth: caring. Judging the Booker prize, Philip Larkin set himself the ultimate test: “Did I care? If so, what was the quality of the caring?” All sports fans care. But not all sports allow the same complexity and subtlety. Every contest has a clear central story, but what about the subplots and counter-rhythms? Here cricket, which can lay on a series of five Tests, is unrivalled. It becomes something you live with. Baseball is a stirring symphony, cricket is the Ring Cycle.

So 3-2 to cricket? For now, yes. But if Twenty20—an effort to squeeze cricket into the three-hour slot that baseball has always occupied—consumes the whole sport, cricket will be just another game, lacking a USP and looking worryingly desperate to please.

Cricket can’t beat baseball by imitating it. The parent is rarely well served by copying the child.

Ed Smith is a former England cricketer and Times leader writer. He is now a commentator on “Test Match Special” and the author of “Luck”

Image: Topham
