Five years ago, the number of Internet-connected devices exceeded the number of people on Earth. Today, devices outnumber humans by three to one. Within another five years that ratio will be closer to ten to one, if projections are to be trusted. Do we understand the implications?
2016 also marked a few sad milestones. In September, we endured the first major distributed denial-of-service attack carried out with “things” instead of computers. It was followed in November by accusations (true or not) that a major social media network allowed “fake news” to affect a presidential election. These events show how connected both people and things are today; neither could have occurred just five years ago. This pervasive connectivity is a huge benefit in many respects, but it is not entirely without problems.
Opportunity, threat, or both?
Recently I had the opportunity to attend one of the largest tech conferences in the world: Web Summit in Lisbon, Portugal. Beneath the atmosphere of disruptive new business opportunities and visions of a wondrous future brought forth by technology, there ran an undercurrent of concern: how will these innovations affect our society, and are we doing everything we can to ensure the positives outweigh the negatives?
Where does the responsibility of a hardware manufacturer, software vendor or service provider end?
A talk by the CSO of Facebook, Alex Stamos, illustrated one aspect of this concern. Facebook’s users have to deal with the constant risk of identity theft, scams, and privacy violations. Most of them live outside Europe or the U.S., and certainly don’t use the newest Samsung or Apple flagship to access Facebook. How can Facebook ensure the security of those users who run a five-year-old Android phone without a single security patch installed?
Is it enough to just make the software secure and let the user shoulder the rest of the responsibility?
It turns out that no, it’s not enough. For the user, identity theft via Facebook is Facebook’s fault, even if the underlying reason was an insecure device, and even if a court cleared Facebook of any negligence. While Stamos’ talk was too short to go into much detail, it was plain that Facebook understands this and designs its services and applications accordingly. Sadly, not everybody else does.
While not the first such incident, last September’s Mirai DDoS attack received the most publicity thus far. It was carried out by infecting IoT devices such as IP cameras through their default passwords. Those devices usually come with set-up guides telling the user, in fairly big letters, to change the default password. But does the manufacturer’s responsibility end there? Nope. It’s possible, even likely, that devices that are insecure out of the box will become illegal to sell within the EU. Manufacturers of insecure “digital home” devices may soon be liable for attacks made using them – attacks that can cause damage worth millions of euros. Far better to secure them now.
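The mechanism Mirai exploited was almost embarrassingly simple: it scanned for reachable devices and tried a short, hardcoded list of factory-default username/password pairs. A minimal sketch of the defender’s side of that check might look like the following – note that the credential list here is an illustrative sample I made up, not Mirai’s actual dictionary:

```python
# Illustrative audit for factory-default credentials, in the spirit of
# what Mirai exploited. KNOWN_DEFAULTS is a hypothetical sample list,
# not the real Mirai dictionary.

KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "root"),
    ("root", "12345"),
    ("admin", "password"),
}

def is_factory_default(username: str, password: str) -> bool:
    """Return True if the credential pair appears on the known-default list."""
    return (username, password) in KNOWN_DEFAULTS

# A device still running its shipped credentials is an easy Mirai-style target.
print(is_factory_default("admin", "admin"))    # → True: unchanged default
print(is_factory_default("admin", "x7#kQ9!"))  # → False: user set a password
```

A manufacturer can close this hole entirely by shipping each unit with a unique random password, or by forcing a password change on first login – which is exactly the kind of out-of-the-box security the proposed EU rules would demand.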
Responsibility is not only about security
A Web Summit debate about self-driving cars hit the million-dollar question: how should the car’s autopilot decide between two bad outcomes? Should it ensure the safety of the passengers in all cases, or sacrifice the elderly driver so that the child running onto the street survives? A Daimler chief recently took the former position, but had to quickly retract his words amidst Internet outrage.
Most car manufacturers have so far skirted the question by using the easy cop-out that self-driving cars would prevent more fatalities than they cause.
While undoubtedly true, this won’t absolve the manufacturer of responsibility for the autopilot’s decisions. The rules for these life-and-death situations must be written down somewhere, and it’s in the interests of both the car manufacturer and the public that they are as clear and fair as possible. If the manufacturers won’t write them themselves, the regulators soon will.
Sometimes it’s better to not disrupt
Disruption is the cool buzzword every Uber or Airbnb wannabe likes to use at every opportunity.
Not all disruption is necessarily good, though – at least not everywhere. If you had a cheap and reliable drone-based delivery system for getting things “the last mile” to customers, deploying it in Europe or Japan would be merely good business sense, no matter who gets disrupted. But what about an African country where youth unemployment is already at 70%, where people get much-needed bits of income performing small jobs like deliveries, and where the most important technological innovations in the last 60 years were the Nokia 3310 and the AK-47?
One drone-delivery startup (medications, in this case) struggled with exactly this dilemma, and ended up leaving the last mile to the locals.
I have been involved for some years now in certain railway-related projects. From the beginning, our team had the unofficial motto “Let’s avoid being interviewed by an Accident Investigation Board”. Our systems are for the most part not on the critical safety loop, and any such interview would likely not be motivated by a failure in them. Even so, part of the responsibility for safety across the entire system of systems extends to us. If keeping the big picture in mind helps us avoid that sort of interview, it’s effort well spent.
As professionals we tend to focus on details. I propose we add a wide-angle lens to our toolbox, to also see the world around our individual projects.