Self-driving cars have problems. Automated automobiles are sophisticated, powerful machines. They’re big, heavy, flashy, and fast. And no one’s behind the wheel. The problems associated with driverless cars may be easy to imagine: they do the sorts of things human drivers don’t do, or they fail at things that seem simple, even mundane, to a human driver.
Google’s reports uniformly blame human drivers. The techno-dreamers have kept very much in character on this issue; people, they say, are getting in the way of the inevitable, automated future. But what’s the real story? How should consumers and drivers approach this odd topic? Here are a few things you should know.
The Trouble With Robot Cars
Cars are complicated. As anyone who’s spent even 15 minutes navigating the chaos of early morning traffic knows, driving involves sophisticated reactions and interpretations of innumerable, unique situations. A computerized driver can deal with rules, speed limits, and weather patterns, but real-world driving demands immediate responses to one-of-a-kind situations: what can a programmer tell a machine to do when the truck on your passenger side starts tossing banana peels out the window and swerving unpredictably? How will it calculate the correct response to a drunk police officer speeding without his lights on? The real world doesn’t always work the way it should. Real people are equipped to expect the unexpected in ways no computer can.
It is widely reported that the only accidents driverless cars have been involved in were caused by human drivers. This is an interesting statistic that points to the safety of these machines. For much of their adoption, however, driverless cars will share the road with human drivers. It is imperative, then, that these cars be equipped to deal with the unpredictability of the road, which Google, no doubt, is tirelessly working on. Can a human-programmed vehicle, however, ever be well-equipped enough to share the road with human drivers? Maybe an ideal world would do away with human drivers altogether, making way for a perfect road where few, if any, accidents ever occur.
The moment you step into a driverless car is the moment you hand the responsibility of driving over to the programmers who wrote its code. This raises interesting questions about what happens if you are ever in an accident. Would the maker of the car be liable, or the driver? Nevertheless, the fact stands that even with all the testing that can happen, accidents with driverless cars are possible. It is near-impossible for programmers to account for every unpredictable possibility on the road. Are you willing, in the most extreme of cases, to put your life in the hands of those programmers, and quite possibly be subject to the accidents and craziness of other drivers on the road? Will your driverless car know what to do when that happens?
Maybe this isn't an issue with driverless cars but with the human condition itself and its vulnerability to human error, on both ends: the programmer of the driverless car and the driver. I, for one, know I am not ready to be caught in the middle of that battle when driverless cars are first released for public consumption. Maybe in the future these cars will be able to account for enough possibilities to make them safe even against the average human being.
Sure, you might say, but won’t drivers have the option of simply turning their automation off? Won’t people be able to choose to take control when the situation calls for it? Well, it’s not so simple. Some reporting suggests that automation makes people worse at what they do. Recent incidents involving automation failures in airplanes make that clear. A self-driving car will not only force drivers into passive positions; it will strip them of the hard-earned skills they’ve developed over the years. A person who switches off autopilot will come to the wheel as unprepared as an off-season athlete. And, of course, self-driving cars will turn everyone into a distracted driver; how closely do you pay attention to traffic on the bus?
What the Google Car is Still Working On
Google often gives the impression that its car can drive anywhere a car may legally drive. This is often backed up with the statistic that Google cars have safely driven over 700,000 miles. Impressive, isn't it? Yet the Google car still lacks much of what would make it viable for the everyday consumer.
The Google car isn't simply let loose into the wild, allowed to drive however it pleases from point A to point B. Google almost always maps out its routes extensively to make sure nothing conflicts with them. Something as simple as a new traffic light may not be detected by the Google car, since it isn't yet registered in its database. No one wants their flashy new driverless vehicle earning them a ticket, or worse, an accident, because it sped right through a stop sign, do they?
Autonomous-car researcher Alberto Broggi has stated that he is concerned with this very problem. If a road or route changes in ways Google is not informed of, that poses a serious obstacle for autonomous driving.
If Google were to release a driverless car to the world, it would have to maintain an up-to-the-minute database of route changes across millions of miles of road around the world. This may mean Google will work on the car's release slowly, with the Google car only able to operate in a few areas at first and expanding as its database scales with the real world.
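To see why a stale map is such a problem, here is a minimal toy sketch. All of the data and function names are hypothetical; this illustrates only the general idea of a pre-surveyed map falling behind reality, not anything about Google's actual software:

```python
# Hypothetical map database, built when the route was last surveyed.
route_map = {
    ("Main St", "1st Ave"): "traffic_light",
    ("Main St", "2nd Ave"): "stop_sign",
}

# What is actually on the road today: a new light was installed
# after the survey, so the map knows nothing about it.
real_world = {
    ("Main St", "1st Ave"): "traffic_light",
    ("Main St", "2nd Ave"): "stop_sign",
    ("Main St", "3rd Ave"): "traffic_light",  # new, unmapped
}

def unmapped_controls(route_map, real_world):
    """Return intersections whose traffic controls are missing from the map."""
    return [loc for loc in real_world if loc not in route_map]

print(unmapped_controls(route_map, real_world))
```

A car relying purely on `route_map` would sail through the new light at 3rd Ave; keeping that gap empty everywhere, all the time, is the scale of the database problem described above.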
An MIT Article points out numerous issues still present with the Google Car. Notably, it cannot yet drive in the snow.
Among other unsolved problems, Google has yet to drive in snow, and Urmson says safety concerns preclude testing during heavy rains. Nor has it tackled big, open parking lots or multilevel garages. The car’s video cameras detect the color of a traffic light; Urmson said his team is still working to prevent them from being blinded when the sun is directly behind a light. Despite progress handling road crews, “I could construct a construction zone that could befuddle the car,” Urmson says.
In addition, Urmson states that the Google car can't tell the difference between a crumpled piece of paper and a rock, nor can it spot potholes:
The car’s sensors can’t tell if a road obstacle is a rock or a crumpled piece of paper, so the car will try to drive around either. Urmson also says the car can’t detect potholes or spot an uncovered manhole if it isn’t coned off.
So: a car that can't drive in snow, may be blinded by the sun, can't navigate multilevel garages, can't tell the difference between a rock and a piece of paper, and can't spot potholes or manholes. It seems the Google car still has quite a road to cover before it can even be considered by consumers.
So What Should You Know?
Well, first of all, hold off on buying an automatic car. Consumers should be wary. Early adoption is a risky business, and there's no guarantee this tech will take off. Newer is not always better. Remember the LaserDisc. E-books have not been nearly as dominant as Silicon Valley predicted. Even vinyl has recently overtaken the CD.
Drivers should, however, remember that computerized cars are about as perfect as any other computer program. Don't relax just because that boxy Google-mobile is operating on pure programmer logic. I have high hopes that automatic cars won't glitch out as often as your average web browser, but who knows? They're not even available yet. Stay aware of your surroundings and be willing to adapt to unpredictable situations. Although the driverless car is an exciting concept, I wouldn't want to ride in something that doesn't know the difference between a rock and a piece of paper.
© 2015, insidious All Rights Reserved.