Model of uncertainty
Written by Adam Bernstein
In a world that is increasingly reliant on technology, Adam Bernstein asks if our ability to manage the risks is keeping pace with a seemingly exponential rate of change
Not a day goes by without technology making the headlines – hardly surprising given its deep and rapid integration into personal and commercial life around the globe. New technologies are announced at a seemingly exponential rate. And as technology develops, so does our reliance on it, sometimes to our detriment.
In July 2013 a system failure at the UK’s air traffic control provider, NATS, had a significant impact on flight departures and arrivals; in March 2011 the explosion and ensuing contamination at Japan’s Fukushima nuclear plant affected the lives of some 60 million people within Japan and led to a re-evaluation of nuclear policy around the world; and to a less perilous extent, but just as painful to some, May 2013 saw yet another service failure at BlackBerry. Each of these scenarios brings our dependence on technology into sharp focus.
Consider the London 2012 Olympics, a world away from the post-war 1948 Games in terms of both technology and scale. A report on the BBC’s website highlighted the genuine fears among the authorities that the opening night would suffer a cyber attack. Even something as mundane as a hacking attack on the lighting systems would have been catastrophic for the organisers.
The BBC’s security correspondent, Gordon Corera, noted what Oliver Hoare, the head of cyber security at the Olympics, said when he received a call at four in the morning on the day of the opening ceremony: “There was a suggestion that there was a credible attack on the electricity infrastructure supporting the Games.” Hoare said that the organisers had planned for this scenario: “We’d tested no less than five times the possibility of an attack, a cyber attack, on the electricity infrastructure.”
Luckily that particular threat did not materialise, but it nevertheless illustrates that nothing can be left to chance, and that adequate planning is paramount when it comes to managing risk. And this particular risk is not exactly new to us.
The risks in context
In its most recent review of the risk landscape, the World Economic Forum (WEF) analysed the perceived dangers for the year, based on a survey of 1,000 experts examining a register of 50 global risks. Of the top five global risks, the unforeseen consequences of life science technologies was the biggest mover up the likelihood rankings. The report also made great play of digital wildfires in a “hyperconnected” world, making the point that social media can spread information virally around the globe, so that misinformation propagates faster than any corrections that follow.
While the top five risks listed in the report in terms of likelihood involved financial and physical issues such as income disparity, fiscal imbalances and water supply issues, the number one issue in terms of its potential global impact was a “major systemic financial failure”. Although this does not specifically detail technology at its root cause, it points to the interconnected nature of our global financial systems that in turn are heavily dependent on technology.
One need not spend too long on a web search to illustrate the point. “Flood of errant trades is a black eye for Wall Street” was the headline used by the New York Times in August 2012 to detail the effect of an automated stock trading program that not only flooded the market with millions of trades on Wall Street but spread turmoil even further afield. The UK has also suffered, as the Guardian reported in February 2011: “Trading on the London Stock Exchange was halted for more than four hours after a technical glitch, leaving share dealers unable to react to a number of key events, including the volatile situation in Libya, results from Lloyds Banking Group and disappointing UK economic growth figures.”
The pervading web
What the WEF terms our hyperconnected world lies at the heart of many of today’s concerns about emerging technology risk. According to Dai Davis, a UK-based chartered engineer and technology lawyer, Web 3.0 is the next development to watch. “We’re not talking about the banal – say a Google connected car or a dishwasher connected to the Internet – I’m talking about microproducts that are Internet addressable, say lightbulbs in a car or a building,” he explains, outlining a world where devices automatically report their status and, if appropriate, order a replacement. While that technology is available now, the price of the component devices is currently prohibitive; but, as we have seen with so many once novel, now commonplace technologies, that will not always be the case. And the risks of malfunction or insecurity will inevitably become apparent at some point.
Meanwhile, the web is taking technology in a number of different directions. Among the most fascinating of these developments is 3D printing. The technology itself is not new (Charles Hull of US-based 3D Systems Corporation created the first device in 1984), but the web is propagating the idea faster and further. Who knows where it will be in 20 years’ time? Ask the children – in the UK it is about to appear on the curriculum.
3D printing starts with a blueprint from a computer-aided design (CAD) drawing, which is digitally sliced into cross sections that a computer uses to guide the printer. The design can then be placed on the web for anyone, in theory, to download and use.
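To make the slicing step concrete, here is a minimal sketch of the geometry involved: each triangle of a 3D model is intersected with a series of horizontal planes, and the resulting segments form the cross sections that guide the print head. The mesh representation (a list of triangles as (x, y, z) points) and the layer height are illustrative assumptions, not any particular slicer’s format or API.

```python
def slice_triangle(tri, z):
    """Return the segment where a horizontal plane at height z cuts a
    triangle, or None if the plane misses it. tri is three (x, y, z) points."""
    points = []
    for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):
        if (z1 - z) * (z2 - z) < 0:           # this edge crosses the plane
            t = (z - z1) / (z2 - z1)          # interpolation factor along edge
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(points) if len(points) == 2 else None

def slice_mesh(triangles, layer_height):
    """Group the cut segments of every triangle by layer, bottom to top."""
    zs = [z for tri in triangles for (_, _, z) in tri]
    layers = []
    z = min(zs) + layer_height / 2            # sample at mid-layer heights
    while z < max(zs):
        segments = [s for tri in triangles if (s := slice_triangle(tri, z))]
        layers.append((round(z, 6), segments))
        z += layer_height
    return layers

# A single upright triangle, 2 mm tall, sliced into 0.5 mm layers:
tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 0.0, 2.0)]
for z, segs in slice_mesh([tri], 0.5):
    print(z, segs)
```

Real slicers add much more on top of this – joining segments into closed contours, generating infill and support – but the core idea is exactly this plane-by-plane intersection.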
As the technology has developed, the cost of the printers and the process has fallen and the number of applications has increased. In May 2013 the Guardian ran a report on how the technology is becoming mainstream. Dr Greg Gibbons, an expert in additive manufacturing at the University of Warwick, believes we will soon see 3D print shops, much as we once saw photocopy shops. He reckons these print shops would house four or five different machines printing in several different materials. “Customers could go in with a CAD file or a dishwashing machine part to be scanned and leave with a three-dimensional copy,” Dr Gibbons explained.
Even now, what cost thousands of pounds a few years ago can be bought online in the UK for less than £700 from retailer Maplin; its Velleman K8200 printer has already sold out, despite the relatively high cost of the machine and its printing materials.
Judging by recent developments, 3D printing has a bright future. Scientists from Harvard University and the University of Illinois have used a 3D printer to make tiny lithium-ion microbatteries. Harvard professor Jennifer Lewis and her colleagues effectively created a battery from stacks of interlaced, ultra-thin electrodes and designed new functional ‘inks’ with useful chemical and electrical properties.
Says Lewis: “Not only did we demonstrate for the first time that we can 3D print a battery; we demonstrated it in the most rigorous way.” With time and cheaper printers and materials there’s nothing to stop private citizens printing this type of battery at home.
Taking the idea of 3D printing in a completely different direction, Davis sees nothing to prevent the ‘printing’ of almost anything. Davis considers pharmaceuticals to be one area that could work well in a ‘paint mixer’ type machine where a cocktail of various chemicals can be ‘printed’ into a drug.
Understanding the implications
Just as the legitimate can be converted into a 3D drawing or scanned, so too can more harmful products. Twenty-five-year-old American law student Cody Wilson created and released to the web the blueprints for a 3D printable gun, the Liberator. The fact that a plastic gun made on a US$8,000 printer would be hard for the authorities, especially those at airports, to detect made the actions of Wilson’s group, Defense Distributed, all the more alarming. It did not take long for the US government to act; although the group’s website – defcad.com – remains operational, the drawing has been removed, with the US State Department claiming control over it.
Lawyer Dai Davis justifiably wonders how digital technologies can be policed. “The present rules on, say, designs and copyright just won’t work – there’s nothing to stop someone scanning a product, reverse engineering it and then printing their own scanned copy.” He asserts that the only thing that would work is a tax on the means of production (the printer and materials). “Globally”, he says, “laws are just too slow to catch up with technology – just look at the issue of digital downloading.”
One of the most anticipated technologies of 2013, albeit not yet on general release, is Google Glass. The wearable computing concept is not new, at least in science fiction – TV’s Six Million Dollar Man and, more recently, the Terminator franchise both featured a form of the technology.
As for real science, it was Ivan Sutherland at the University of Utah who, in 1965, first posited the notion of a head-mounted display that superimposed a virtual world on the real one. Known as ‘augmented reality’, the technique offers great potential and great risk in equal measure. With a built-in camera, computer and wireless connection, it is not hard to see why firms are concerned about the security implications of Google Glass wearers – even Google followed the lead of casinos and banned its own product from its annual shareholders’ meeting in June.
The International Society for Presence Research published a paper in June 2013 that looked at the future for the technology. It quoted mobile analyst Tomi Ahonen as reckoning that augmented reality could be used by a billion people by 2020. Intel wants a part of the action and is investing around US$100m over the next three years to fund companies developing next generation apps and user interfaces.
The author of the paper, Dan Farber, makes the point that once technologies in this area are standardised, it will truly become a technology that digital natives will adopt en masse. He also predicts that in time augmented reality will become more integrated with the human body.
Steven Feiner, a researcher at Columbia University, says that it’s “scary in terms of the information available, especially when billions of people with cameras and microphones can capture anything in public. There are no laws against it.”
If researchers at iSec can, in 2013, hack into a Verizon mobile phone network extender and snoop on the calls, texts and data that it receives and transmits to Verizon, how will data and privacy be protected with augmented reality? It will be hard, considering the apps being developed now – Winky, for example, will let wearers take surreptitious images just by winking an eye at Google Glass. Will governments let Google Glass recognise faces on sight? San Francisco-based Lambda Labs is already developing that technology...
Fear of the unknown
People don’t like change and Davis highlights nanotechnology as a perfect example of where technology is having something of a frightening effect. “People don’t like the concept of nano because they can’t see it and if they can’t see something they assume it’s dangerous.”
According to David Holmes of The Lancet, nanotechnology is set to revolutionise everything from energy production to the clothes we wear, but for him the greatest application of this technology is in medicine. A search on the journal’s website for “nanotechnology” proves the point, with 61 references in all Lancet journals and another 9,672 on the publication’s Medline, covering topics as diverse as imaging and treating cancer, site-targeted drug delivery, nanotechnology in dentistry and the risks of nanotechnology itself.
Earl Boysen of understandingnano.com details just a few of the many applications for nanotechnology. He says: “researchers have demonstrated a way to use nanoparticles to diagnose infectious diseases early on.” In this procedure nanoparticles attach themselves to molecules in the blood to indicate the start of an infection, and a scanner tracks their progress. Other researchers have developed nanoparticles with radioactive cores that attach to lymphoma cells. “The researchers,” says Boysen, “are designing this to stop the spread of cancer from the primary tumor.”
Nanotechnology has the potential to make a vast impact. As far back as July 2007, the European Commission noted in a report by D.G. Rickerby and M. Morrison, Nanotechnology and the environment: A European perspective, the wide variety of potential applications in the biomedical, optical, and electronic fields. While acknowledging the potential for nanotechnology in creating solutions to environmental challenges, the report also warned of the risks of the unknown, given how little is documented about the potential impacts of nanoparticles on the environment and human health.
A 2008 study published in Nature found that long, thin carbon nanotubes introduced into the abdominal cavity of mice behaved like long, thin asbestos fibres, raising concerns that exposure to carbon nanotubes may lead to pleural abnormalities such as mesothelioma (a cancer of the lining of the lungs associated with asbestos exposure).
But nanotechnology, like genetic engineering, can be used for purposes good and bad. The solution is not to ban a technology outright without discussing it properly first. The problem with many scientific debates is that they are held in isolation and that’s a fault in the process. A national parliament is the wrong place to debate technological issues; properly constituted scientific committees that can look at an issue holistically are much better.
The big challenge when it comes to technology is the pace of change. US IT management and solutions firm CA Technologies says that the accelerating pace of IT developments is “forcing IT decision-makers to adopt new strategies that keep their existing technology investments from becoming a drag on business value.” However, it notes that companies are still getting substantial value from systems they have invested in over the last 10 to 20 years, but that they need to “proactively plan and manage the lifespan of their investments by both optimising their existing assets and adopting next generation solutions.”
An IDC white paper, Maximizing and Accelerating Value with Technology Lifecycle Planning, reads: “with ever-accelerating technological change, it is tempting to over-focus on ‘what’s new’ and downplay the importance of stable in-place solutions that often supply critical business functions.” The authors go on to say that when such solutions are based on mature or even late stage technologies, IT organisations are faced with real challenges as to how to support and prolong usefulness – including updates and modernisation – while planning for eventual transition to newer technologies and operational environments.
For Davis, legacy systems will become more of an issue as cloud computing takes greater hold. Those who cannot take advantage of cloud-based services – say online accounts and billing, or the price comparison websites that are now standard for insurance and utilities – risk being excluded from the market altogether.
But legacy systems are not all bad; sometimes they are maintained precisely for the sake of security. This can be an advantage – if a firm finds a system difficult to update, then so will a hacker.
Insuring the future
The European Commission is also concerned about emerging technology risks. Its July 2013 call for information on the impact of IT on the insurance market – in particular, insurance for (semi-)autonomous ICT-based systems and the cyber security insurance market – is one such example. Tom Scourfield, a partner in the technology team of CMS Cameron McKenna, is certain that cyber risk is driving the insurance market right now: “the market is writing insurance policies to protect [firms] against hacking and data liability where held data is either abused or lost.”
He says that the call for information is all part of the Commission’s cyber strategy policy. “Much of what happens goes unreported,” Scourfield asserts. He believes that the Commission wants all breaches to be compulsorily reported to a central point. In the UK, for example, there is no obligation to report a breach to the Information Commissioner, the UK data regulator. Firms that suffer a breach need only risk-assess the likely damage before deciding whether or not to report it.
Scourfield says that the call for information from the insurance market is linked to the forthcoming EU data protection directive, which should be in place by 2015. Under that directive, firms will be liable for a levy of two per cent of their global turnover for data breaches.
“The Commission wants to drive compliance [in the insurance market] and then minimise losses by encouraging the right behaviour from the start,” says Scourfield.
The Association of Corporate Counsel (ACC), an in-house bar association for professional corporate counsel who practice in legal departments globally, offers advice to those wanting to buy insurance coverage. It talks of covering first party risks – to protect against the costs of losing a firm’s own data, or damage that follows from that loss – and third party risks, to cover liability to clients and governmental bodies.
In particular, the ACC says that when buying cyber risk insurance organisations should consider whether the policy covers data held on unencrypted devices, the cost of data restoration, whether cover extends to transmissions off premises, and whether identity theft resolution services are included.
Interestingly, the ACC suggests that policyholders may be able to have a risk management service evaluate their in-house data security measures to help lower their premiums. In some cases the cyber risk insurer may arrange this at a discounted or reimbursed rate.
As to the future, professional services firm KPMG reckons in its latest Global Technology Innovation survey that the US will be the most likely source of the next ‘disruptive technology breakthrough’ within the next four years.
Thirty-seven per cent of respondents placed the US in prime position, while 24 per cent cited China and 10 per cent India, followed by Korea (seven per cent), Japan (six per cent) and Israel (six per cent) as likely innovators of a new market-changing technology. The UK was named by only one per cent of respondents as a future hotspot for the next breakthrough, ranking joint ninth with Russia.