I would like to convince you of two things. The first is that robotics will follow computers and the Internet as the next transformative technology.1 The second is that, for the first time in recent memory, the United States runs the risk of being left behind. I explain why we lawyers are to blame, and offer a modest, non-Shakespearean solution.
As William Gibson once said: “The future is already here—it’s just not very evenly distributed.” Transformative technologies have their early adopters. One is the military. The United States military was among the first organizations to use computers. It also created the ARPAnet, the Internet’s precursor. Today, the military makes widespread use of robots, as Peter Singer catalogs exhaustively in his 2009 book, Wired for War. The numbers are incredible; Air Force drones recently reached a million combat hours.
Other early adopters include artists and hobbyists. Computer-generated music began as early as the 1950s. Frank Herbert, the author of Dune, was an early convert to personal computing. He wrote one of the first home computer guides — the ominously titled Without Me You’re Nothing. Today, hobbyists and “makers” are using Arduino and other platforms to build their own robots. The editor-in-chief of Wired Magazine is a noted DIY drone enthusiast. This summer there was an entire film festival devoted to robotics in New York City.
There is a sense in which robots are already mainstream. Your car was probably built by a robot. If you have ever purchased shoes from Zappos.com, a robot likely fished them out of the warehouse. Robot assistance is more common than not in certain surgeries. Sales of iRobot’s robotic vacuum cleaner are in the millions.
Look closely at headlines and you’ll begin to see robots there as well. Robotic submarines helped assess the extent of the BP oil spill. A robot defused the bomb in Times Square. We sent robot ground units and drones to the Fukushima Daiichi nuclear power plant. Robots helped rescue the trapped New Zealand miners. More telling still: In the wake of a mining accident in West Virginia, a journalist asked why we were still sending real people into dangerous mines in the first place.
It is for these reasons and more that I believe Bill Gates’ vision of “a robot in every home”; I can see where Honda comes up with the estimate that it will sell more robots than cars by 2020; and I can understand why the Computing Community Consortium would title its 2009 report (PDF) to Congress “A Roadmap for U.S. Robotics: From Internet to Robotics.”2
Yet for all its momentum, robotics is at a crossroads. The industry faces a choice — one that you see again and again with transformative technologies. Will this technology be essentially closed, or will it be open?
What do I mean by these words? “Closed” robots resemble any contemporary appliance: They are designed to perform a set task. They run proprietary software and are no more amenable to casual tinkering than a dishwasher. The popular Roomba robotic vacuum cleaner and the first AIBO mechanical pet are closed in this sense. “Open” robots are just the opposite. By definition, they invite contribution. An open robot has no predetermined function, runs third-party or even open-source software, and can be physically altered and extended without compromising performance.3
Consumer robotics started off closed, which goes part of the way toward explaining why it has moved so slowly. A few years ago, only a handful of companies — Sony and iRobot, for instance — were in the business of making consumer robots or writing robot software. Any new device or functionality had to come down from the top. As introduced to the market, Sony’s AIBO dog could only run two programs, both written by Sony. At one point, consumers managed to hack the AIBO and get it to do a wider variety of things. A vibrant community arose, trading ideas and AIBO code. That is, until Sony sued them for violating the copyright in its software.
Compare the early days of personal computing, as described in detail by Jonathan Zittrain in his book, The Future of the Internet. Personal computers were designed to run any software, written by anyone. Indeed, many of the innovations or “killer apps” that popularized PCs came from amateur coders, not Apple or IBM. Consumers bought PCs, Zittrain recounts, not for what the machines did, but for what they might do.
The same is true of open robots. They become more valuable as use cases surface. (It can fold laundry! It can walk a dog!) That open robots are extensible or “modular” constitutes a second advantage. Versatile and useful robots are going to be expensive. Meanwhile, the technology continues to change. Let’s say there is a breakthrough in sensor technology or someone invents a new, more maneuverable gripper. The owner of a closed robot will have to wait for the next model to incorporate these technologies. The owner of an open robot can swap the sensor or gripper out. As Barbara van Schewick argues in another context, this encourages consumers to buy personal and service robots earlier in the product cycle.
The open model — best exemplified, perhaps, by the Silicon Valley robotics incubator Willow Garage — is gaining momentum. Five years ago, iRobot’s co-founder Colin Angle told The Economist that robots would be relatively dumb machines designed for a particular task. Robot vacuums will vacuum; robot pool cleaners will clean the pool. This year at the Consumer Electronics Show, the same company unveiled a robot called AVA designed to run third-party apps. Following a backlash over its copyright lawsuit, Sony released a software developer kit for AIBO, which continues to be used by classrooms and in competitions. Microsoft recently gave open robotics a boost by developing an SDK (software development kit) for its popular Kinect sensor. So far, so good.
Enter the lawyers. The trouble with open platforms is that they open the manufacturer up to a universe of potential lawsuits. If a robot is built to do anything, it can do something bad. If it can run any software, it can run buggy or malicious software. The next killer app could, well, kill someone.
Liability in a closed world is fairly straightforward. A Roomba is supposed to do one thing and do it safely. Should the Roomba cause an injury in the course of vacuuming the floor, then iRobot generally will be held liable as it built the hardware and wrote or licensed the software. If someone hacks the Roomba and uses it to reenact the video game Frogger on the streets of Austin (this really happened), then iRobot can argue product misuse.

But what about in an open world? Open robots have no intended use. The hardware, the operating system, and the individual software — any of which could be responsible for an accident — might each have a different author. Open source software could have many authors. But plaintiffs will always sue the deep pockets. And courts could well place the burden on the defendants to sort it out.4
I noted earlier that personal computers have been open from the start. They, too, have no dedicated purpose, run third-party software, and are extensible (through USB ports). But you would not think to sue Microsoft or Dell because Word froze and ate your term paper. It turns out that judges dismissed early cases involving lost or corrupted data on the basis that the volatility of computers was common knowledge. These early precedents congealed over time practically to the point of intuition. Which, I would argue, is a good thing: People might not have gone into the business of making PCs if they could get sued any time something went wrong.
But there is one key difference between PCs and robots. The damage caused by home computers is intangible. The only casualties are bits. Courts were able to invoke doctrines such as economic loss, which provides that, in the absence of physical injury, a contracting party may recover no more than the value of the contract. Where damage from software is physical, however, when the software can touch you, lawsuits can and do gain traction. Examples include plane crashes based on navigation errors, the delivery of excessive levels of radiation in medical tests, and “sudden acceleration” — a charge that took a team of NASA scientists ten months to investigate before clearing Toyota’s software of fault.
Open robots combine, arguably for the first time, the versatility, complexity, and collaborative ecosystem of a PC with the potential for physical damage or injury. The same norms and legal expedients do not necessarily apply. In robotics no less than in the context of computers or the Internet, the possibility that providers of a platform will be sued for what users do with their products may lead many to reconsider investing in the technology. At a minimum, robotics companies will have an incentive to pursue the slow, manageable route of closing their technology.
To recap: Robots may well be the next big thing in technology. The best way to foster innovation and to grow the consumer robotics industry is through an open model. But open robots also open robotic platform manufacturers to the potential for crippling liability for what users do with those platforms. Where do we go from here?
My proposed solution is a narrow immunity, akin to what we see in general aviation, firearms, and the Internet. In each case, Congress spotted a pattern that threatened an American industry and intervened. Congress immunized the companies that created the product for what consumers or others might do with their product.
For many of the same reasons, I believe we should consider immunizing the manufacturers of open robotic platforms for what users do with them. I am talking here about a kind of Section 230 immunity for robotics. You cannot sue Facebook over a defamatory wall post. Nor can you immediately sue an Internet service for hosting copyrighted content. Analogously, if someone adds a chainsaw to their AVA5 or downloads the “dive-bomb” app for their AR.Drone, it should not be possible to name iRobot or Parrot as a defendant. Otherwise, why would these companies take the chance of opening their products?
One final note: It may be tempting to take a wait-and-see approach. Perhaps the fears I’ve outlined are overblown; maybe the courts will find another expedient to incentivize safety without compromising innovation. Scholars have speculated that the courts would have arrived at a Section 230-like solution for Internet content even without the statute. What’s the rush?
We risk a lot in waiting. I don’t think we want to wait to intervene until this young industry is bankrupted, as we did in the context of general aviation. (It was called the General Aviation Revitalization Act for a reason.) Several countries already have a head start in robotics, a higher bar to product liability litigation, or both. The risk of waiting is that, by the time we sort this out, the United States will not be a serious player in a transformative technology for the first time since the steam engine. Now is the moment to start thinking about this problem.
Thanks very much for reading. Your thoughts are warmly welcome.
This post was adapted from my recent article Open Robotics, which appears in volume 70 of Maryland Law Review and can be downloaded on SSRN. Thanks to Robert Richards and Vox PopuLII for the opportunity to share my research.
[Editor's Note: Mr. Calo's post has implications as well for AI and law scenarios, e.g., those involving open robots -- having artificial intelligence -- that engage in conduct determined by automated decisions taken using legal rules modeled in computer language.]
 Of course, robotics incorporates and builds upon these technologies. By robotics, I mean to refer to technology that incorporates at least three elements: a sensor, a processor, and an actuator. This is a fairly common if admittedly imperfect definition.
 You may be thinking: we’ve been down this road. The 1980s saw a robotics craze and nothing came of it. This is not entirely true: the use of robotics for manufacturing and space exploration grew exponentially. Processors and sensors were not cheap enough to realize the same vision for personal and service robots. They are now.
 I realize that a plaintiff must generally show the injury to be “foreseeable.” But recall that the defendant need only foresee the category of harm, not the exact situation. Moreover, some jurisdictions shift the burden to the product liability defendant to show that the injury was not foreseeable.
 Thanks to Paul Ohm for this example.
M. Ryan Calo is a project director at the Stanford Center for Internet and Society. Calo co-founded the Legal Aspects of Autonomous Driving program, a unique, interdisciplinary collaboration between Stanford Law School and the School of Engineering. He is on the Program Committee for National Robotics Week and co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence. Calo blogs, tweets, and publishes on the intersection of law and technology.