VoxPopuLII
I would like to convince you of two things. The first is that robotics will follow computers and the Internet as the next transformative technology.1 The second is that, for the first time in recent memory, the United States runs the risk of being left behind. I explain why we lawyers are to blame, and offer a modest, non-Shakespearean solution.
As William Gibson once said: “The future is already here—it’s just not very evenly distributed.” Transformative technologies have their early adopters. One is the military. The United States military was among the first organizations to use computers. It also created the ARPAnet, the Internet’s precursor. Today, the military makes widespread use of robots, as Peter Singer catalogs exhaustively in his 2009 book, Wired for War. The numbers are incredible; Air Force drones recently reached a million combat hours.
Other early adopters include artists and hobbyists. Computer-generated music began as early as the 1950s. Frank Herbert, the author of Dune, was an early convert to personal computing. He wrote one of the first home computer guides — the ominously titled Without Me You’re Nothing. Today, hobbyists and “makers” are using Arduino and other platforms to build their own robots. The editor-in-chief of Wired Magazine is a noted DIY drone enthusiast. This summer there was an entire film festival devoted to robotics in New York City.
There is a sense in which robots are already mainstream. Your car was probably built by a robot. If you have ever purchased shoes from Zappos.com, a robot likely fished them out of the warehouse. Robot assistance is more common than not in certain surgeries. Sales of iRobot’s robotic vacuum cleaner are in the millions.
Look closely at headlines and you’ll begin to see robots there as well. Robotic submarines helped assess the extent of the BP oil spill. A robot defused the bomb in Times Square. We sent robot ground units and drones to the Fukushima Daiichi nuclear power plant. Robots helped rescue the trapped New Zealand miners. More telling still: In the wake of a mining accident in West Virginia, a journalist asked why we were still sending real people into dangerous mines in the first place.
It is for these reasons and more that I believe Bill Gates’ vision of “a robot in every home”; I can see where Honda comes up with the estimate that it will sell more robots than cars by 2020; and I can understand why the Computing Community Consortium would title their 2009 report (PDF) to Congress “A Roadmap for U.S. Robotics: From Internet to Robotics.”2
Yet for all its momentum, robotics is at a crossroads. The industry faces a choice — one that you see again and again with transformative technologies. Will this technology be essentially closed, or will it be open?
What do I mean by these words? “Closed” robots resemble any contemporary appliance: They are designed to perform a set task. They run proprietary software and are no more amenable to casual tinkering than a dishwasher. The popular Roomba robotic vacuum cleaner and the first AIBO mechanical pet are closed in this sense. “Open” robots are just the opposite. By definition, they invite contribution. An open robot has no predetermined function, runs third-party or even open-source software, and can be physically altered and extended without compromising performance.3
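The architectural difference can be made concrete in software terms. Here is a minimal, purely hypothetical sketch — none of these class or method names come from any actual robot SDK — of what “open” means in practice: the manufacturer ships only a generic extension point, and third parties supply the behavior.

```python
# Hypothetical sketch of an "open" robot platform. The manufacturer ships
# the hardware and a generic skill interface; anyone can write a skill.

class Skill:
    """Base class any third-party developer can implement."""
    def step(self, sensors: dict) -> dict:
        """Map sensor readings to actuator commands."""
        raise NotImplementedError

class VacuumSkill(Skill):
    """One third-party behavior among many the platform could run."""
    def step(self, sensors):
        # Turn in place on a bump; otherwise keep moving forward.
        if sensors.get("bump"):
            return {"left_wheel": -1.0, "right_wheel": 1.0}
        return {"left_wheel": 1.0, "right_wheel": 1.0}

class OpenRobot:
    """The platform itself has no predetermined function: it runs skills."""
    def __init__(self):
        self.skills = []

    def install(self, skill: Skill):
        # The extension point: third-party or open-source code plugs in here.
        self.skills.append(skill)

    def tick(self, sensors: dict) -> dict:
        commands = {}
        for skill in self.skills:
            commands.update(skill.step(sensors))
        return commands

robot = OpenRobot()
robot.install(VacuumSkill())
print(robot.tick({"bump": True}))
```

A closed robot, by contrast, would hard-wire something like VacuumSkill into the device with no install() step at all — which is exactly why its liability picture is so much simpler.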
Consumer robotics started off closed, which goes part of the way toward explaining why it has moved so slowly. A few years ago, only a handful of companies — Sony and iRobot, for instance — were in the business of making consumer robots or writing robot software. Any new device or functionality had to come down from the top. As introduced to the market, Sony’s AIBO dog could only run two programs, both written by Sony. At one point, consumers managed to hack the AIBO and get it to do a wider variety of things. A vibrant community arose, trading ideas and AIBO code. That is, until Sony sued them for violating the copyright in their software.
Compare the early days of personal computing, as described in detail by Jonathan Zittrain in his book, The Future of the Internet. Personal computers were designed to run any software, written by anyone. Indeed, many of the innovations or “killer apps” that popularized PCs came from amateur coders, not Apple or IBM. Consumers bought PCs, Zittrain recounts, not for what the machines did, but for what they might do.
The same is true of open robots. They become more valuable as use cases surface. (It can fold laundry! It can walk a dog!) That open robots are extensible or “modular” constitutes a second advantage. Versatile and useful robots are going to be expensive. Meanwhile, the technology continues to change. Let’s say there is a breakthrough in sensor technology or someone invents a new, more maneuverable gripper. The owner of a closed robot will have to wait for the next model to incorporate these technologies. The owner of an open robot can swap the sensor or gripper out. As Barbara van Schewick argues in another context, this encourages consumers to buy personal and service robots earlier in the product cycle.
The open model — best exemplified, perhaps, by the Silicon Valley robotics incubator Willow Garage — is gaining momentum. Five years ago, iRobot’s co-founder Colin Angle told The Economist that robots would be relatively dumb machines designed for a particular task. Robot vacuums will vacuum; robot pool cleaners will clean the pool. This year at the Consumer Electronics Show, the same company unveiled a robot called AVA designed to run third-party apps. Following a backlash over its copyright lawsuit, Sony released a software developer kit for AIBO, which continues to be used by classrooms and in competitions. Microsoft recently gave open robotics a boost by developing an SDK (software development kit) for its popular Kinect sensor. So far, so good.
Enter the lawyers. The trouble with open platforms is that they open the manufacturer up to a universe of potential lawsuits. If a robot is built to do anything, it can do something bad. If it can run any software, it can run buggy or malicious software. The next killer app could, well, kill someone.
Liability in a closed world is fairly straightforward. A Roomba is supposed to do one thing and do it safely. Should the Roomba cause an injury in the course of vacuuming the floor, then iRobot generally will be held liable as it built the hardware and wrote or licensed the software. If someone hacks the Roomba and uses it to reenact the video game Frogger on the streets of Austin (this really happened), then iRobot can argue product misuse.
But what about in an open world? Open robots have no intended use. The hardware, the operating system, and the individual software — any of which could be responsible for an accident — might each have a different author. Open source software could have many authors. But plaintiffs will always sue the deep pockets. And courts could well place the burden on the defendants to sort it out.4I noted earlier that personal computers have been open from the start. They, too, have no dedicated purpose, run third-party software, and are extensible (through USB ports). But you would not think to sue Microsoft or Dell because Word froze and ate your term paper. It turns out that judges dismissed early cases involving lost or corrupted data on the basis that the volatility of computers was common knowledge. These early precedents congealed over time practically to the point of intuition. Which, I would argue, is a good thing: People might not have gone into the business of making PCs if they could get sued any time something went wrong.
But there is one key difference between PCs and robots. The damage caused by home computers is intangible. The only casualties are bits. Courts were able to invoke doctrines such as economic loss, which provides that, in the absence of physical injury, a contracting party may recover no more than the value of the contract. Where damage from software is physical, however, when the software can touch you, lawsuits can and do gain traction. Examples include plane crashes based on navigation errors, the delivery of excessive levels of radiation in medical tests, and “sudden acceleration” — a charge it took a team of NASA scientists ten months to investigate before clearing Toyota’s software of fault.
Open robots combine, arguably for the first time, the versatility, complexity, and collaborative ecosystem of a PC with the potential for physical damage or injury. The same norms and legal expedients do not necessarily apply. In robotics no less than in the context of computers or the Internet, the possibility that providers of a platform will be sued for what users do with their products may lead many to reconsider investing in the technology. At a minimum, robotics companies will have an incentive to pursue the slow, manageable route of closing their technology.
To recap: Robots may well be the next big thing in technology. The best way to foster innovation and to grow the consumer robotics industry is through an open model. But open robots also open robotic platform manufacturers to the potential for crippling liability for what users do with those platforms. Where do we go from here?
My proposed solution is a narrow immunity, akin to what we see in general aviation, firearms, and the Internet. In each case, Congress spotted a pattern that threatened an American industry and intervened. Congress immunized the companies that created the product for what consumers or others might do with their product.
For many of the same reasons, I believe we should consider immunizing the manufacturers of open robotic platforms for what users do with them. I am talking here about a kind of Section 230 immunity for robotics. You cannot sue Facebook over a defamatory wall post. Nor can you immediately sue an Internet service for hosting copyrighted content. Analogously, if someone adds a chainsaw to their AVA5 or downloads the “dive-bomb” app for their AR.Drone, it should not be possible to name iRobot or Parrot as a defendant. Otherwise, why would these companies take the chance of opening their products?
One final note: It may be tempting to take a wait-and-see approach. Perhaps the fears I’ve outlined are overblown; maybe the courts will find another expedient to incentivize safety without compromising innovation. Scholars have speculated that the courts would have arrived at a Section 230-like solution for Internet content even without the statute. What’s the rush?
We risk a lot in waiting. I don’t think we want to wait to intervene until this young industry is bankrupted, as we did in the context of general aviation. (It was called the General Aviation Revitalization Act for a reason.) Several countries already have a head start in robotics, a higher bar to product liability litigation, or both. The risk of waiting is that, by the time we sort this out, the United States will not be a comparatively serious player in a transformative technology for the first time since the steam engine. Now is the moment to start thinking about this problem.
Thanks very much for reading. Your thoughts are warmly welcome.
This post was adapted from my recent article Open Robotics, which appears in volume 70 of Maryland Law Review and can be downloaded on SSRN. Thanks to Robert Richards and Vox PopuLII for the opportunity to share my research.
[Editor’s Note: Mr. Calo’s post has implications as well for AI and law scenarios, e.g., those involving open robots — having artificial intelligence — that engage in conduct determined by automated decisions taken using legal rules modeled in computer language.]
________________
[1] Of course, robotics incorporates and builds upon these technologies. By robotics, I mean to refer to technology that incorporates at least three elements: a sensor, a processor, and an actuator. This is a fairly common if admittedly imperfect definition.
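That three-part definition corresponds to the classic sense-think-act control loop. A toy sketch, with entirely illustrative names and a thermostat-like device standing in for a robot:

```python
# Toy sense-think-act loop matching the sensor/processor/actuator
# definition above. All names are illustrative, not from any real API.

def sense(temperature_reading: float) -> float:
    # Sensor: observe the physical world.
    return temperature_reading

def think(observation: float, setpoint: float = 20.0) -> bool:
    # Processor: decide what to do with the observation.
    return observation < setpoint

def act(heater_on: bool) -> str:
    # Actuator: change the physical world in response.
    return "heater on" if heater_on else "heater off"

def control_step(reading: float) -> str:
    return act(think(sense(reading)))

print(control_step(17.5))   # below the setpoint, so the heater turns on
print(control_step(23.0))   # above the setpoint, so the heater turns off
```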
[2] You may be thinking: we’ve been down this road. The 1980s saw a robotics craze and nothing came of it. This is not entirely true: the use of robotics for manufacturing and space exploration grew exponentially. Processors and sensors were not cheap enough to realize the same vision for personal and service robots. They are now.
[3] I draw these definitions in part from the important work of Barbara van Schewick.
[4] I realize that a plaintiff must generally show the injury to be “foreseeable.” But recall that the defendant need only foresee the category of harm, not the exact situation. Moreover, some jurisdictions shift the burden to the product liability defendant to show that the injury was not foreseeable.
[5] Thanks to Paul Ohm for this example.
M. Ryan Calo is a project director at the Stanford Center for Internet and Society. Calo co-founded the Legal Aspects of Autonomous Driving program, a unique, interdisciplinary collaboration between Stanford Law School and the School of Engineering. He is on the Program Committee for National Robotics Week and co-chairs the American Bar Association Committee on Robotics and Artificial Intelligence. Calo blogs, tweets, and publishes on the intersection of law and technology.
VoxPopuLII is edited by Judith Pratt. Editor-in-Chief is Robert Richards, to whom queries should be directed.
Some Twitter discussion of this post is being tracked by Topsy: http://topsy.com/blog.law.cornell.edu/voxpop/2011/07/01/open-robotics/
The U.S. will benefit enormously from self-driving (robotic) cars. Billions of man-hours freed up, road capacity maximized, fuel efficiency benefits, and fewer accidents.
But accidents there will be. I am predicting the U.S. will be last to let robotic cars on the road, because manufacturers such as VW and Honda will deploy the technology elsewhere but not here.
By the time the U.S. finally relents and immunizes auto manufacturers, the lead in expertise in this area will be in the hands of foreign companies. Even our sole U.S. car manufacturer, Ford, will be lagging behind, and will soon afterward be toast.
The Apple of self-driving car apps will be European.
So basically, the US used to be the world leader in new technologies, until we handed the country to the product liability lawyers. Now we are toast.
Obviously, the best solution is for the US to abandon robotics and then have American lawyers launch broad enterprise-liability class action suits against foreign open source robotics manufacturers. Thus at least one third of whatever they make will accrue to American pockets, albeit only a few pockets in Armani suits, but suits worn by Americans nevertheless. Robotics could be the new Bendectin.
You make a convincing argument that, when it comes to robots, closed systems are better. I don’t want a piece of malware to cause my Roomba v5.4 to smother me in my sleep with a pillow. Or just unlock my front door while I’m on vacation so thieves can help themselves.
In all seriousness, I would never mess around with robot software for the reasons you describe. I would only want to install things that have been heavily tested, and heavily vetted, with no possibility of a malicious coder inserting malware. I’m thinking of Apple’s App Store review, x500. Does this raise costs? You betcha. Call it an insurance premium.
Sorry, but I am not interested in a “non-Shakespearean” solution. 😛
You failed to mention the Lego Mindstorms Robotics Kit. It started out, for the most part, closed. You could program it using a rather limited proprietary language. People loved it, but some didn’t like the limitations of the programming language. Thus began a community devoted to reverse engineering the kit and producing a more powerful environment. The Lego company could have unleashed the lawyers and (possibly) put a stop to this. Somehow common sense took hold, in a corporate environment, and they embraced the open source community. They made only one reasonable demand: change the project name, as LegOS was too similar to their trademark. The project name became BrickOS and Mindstorms sales boomed. When the company decided to update the kit they even went so far as to approach the BrickOS community for ideas.
Contrast the continued popularity of Lego Mindstorms with Sony’s Aibo. The Aibo is dead. Sony lawyers killed any hope of a community of hackers driving sales. Noting Sony’s history of litigation against their customers, I would have to assume that Sony hires the world’s dumbest lawyers.
Here’s your template. Change arms to robots and you’re good to go.
http://www.law.cornell.edu/uscode/uscode15/usc_sup_01_15_10_105.html
‘You make a convincing argument that, when it comes to robots, closed systems are better. I don’t want a piece of malware to cause my Roomba v5.4 to smother me in my sleep with a pillow. Or just unlock my front door while I’m on vacation so thieves can help themselves’
Unfortunately, this is just the cold truth. Robotics is fundamentally _different_ from PCs and infotech: it operates in the real, physical world and can have real, irreversible physical consequences. Therefore it _cannot_ be allowed to operate with the same openness and looseness as PCs. It’s easy to say that freedom is more important than security; in real-world practice, the correct answer to which one (freedom or security) is more important is ‘it depends’.
A household robot should not be directly connected to the Internet, nor should it be readily possible to change its basic functional programming. Not in a domestic environment with children, the elderly, etc., around. Nor should its sensors be any more acute than need be for its tasks. For this sort of thing, you _want_ limitations. It’s basic safety engineering.
‘Easter eggs’ and malware bombs and ‘back doors’ are mostly annoyances in programs that process payrolls and calculate statistical models. They can be _deadly_ in a free-operating robot. There have already been instances of things like cell-phone microphones and PC webcams being activated remotely without permission, robotics opens up new possibilities in this.
Sometimes, closed is better than open.
“To recap: Robots may well be the next big thing in technology. The best way to foster innovation and to grow the consumer robotics industry is through an open model. But open robots also open robotic platform manufacturers to the potential for crippling liability for what users do with those platforms. Where do we go from here?”
In my opinion, open versus closed is just a small element of innovation. If you want to get serious about innovation, you need to put money behind your ideas, especially if we are talking about robotics. It is much more expensive to build robots than websites. Secondly, we need to be strengthening our labour base in math and science. This is where we seriously lag behind India and China, and it has given them the competitive edge.
Thanks for all of your great comments. Some follow up thoughts below.
@Monica — I certainly agree that there are other factors at play. My impression from speaking to industry is that they are specifically worried about liability and other legal hurdles. In April, I had occasion to attend a meeting between robotics companies and a local congressman. Although the companies thought the government could be spending money better (for instance, prizes instead of grants), almost to a company they identified the prospect of open-ended liability and regulatory hurdles as the greatest business threat. Several said their largest market was abroad precisely because they had less to worry from a legal perspective. But, again, I don’t mean to downplay the importance of STEM or investment in robotics, such as President Obama’s recent commitment of $70 million for robot R&D.
@Rabel — Thanks! I use the example of the Protecting Lawful Commerce in Arms Act in the article.
@Ross — Great example, one I wish I had thought of and used in the post and article. I’d love to hear more.
@Xavier — Thanks for your thoughts on the future of autonomous vehicles. If you’re interested in this topic, we (the Center for Internet and Society at Stanford Law School) are teaming up with the Center for Automotive Research at Stanford within the School of Engineering to study the legal aspects of autonomous driving. Our full-time fellow starts in the fall.
Professor Kenneth Anderson has commented on this post at Volokh Conspiracy: http://volokh.com/2011/07/12/robotics-as-social-and-legal-policy/ HT @rcalo