
Driverless Cars

Grades 9-10 | Informative | Source-Based

Source Lexile®: 1330L-1380L

Prompt: The self-driving car has become a big topic of discussion in our society today. After reading the documents and viewing the video regarding the latest technology in driverless cars, write an essay that

  1. Describes how self-driving cars work,

  2. Discusses what the current problems and limitations are, and

  3. Explains how this new technology is being put to use in the world today.


Be sure to support your information with evidence from the source documents.




Source 1

How Do Driverless Cars Work? (video)




Source 2

"Self-driving cars: who's building them and how do they work?" 
By Samuel Gibbs 


Volvo is testing driverless lorries to work underground and Google has autonomous cars on the road – but what does it mean for the future of motoring?


Google’s two-seat self-driving prototype car. Photograph: Elijah Nouvelage/Reuters


From self-driving cars to robot lorries, autonomous vehicles are the future of road transportation. But who’s in pole position, who’s stuck in the pit lane and how far away is the starting grid?



How far along are we?


Autonomous vehicles are already on our roads. At the cutting edge there are self-driving cars being tested in pilot programmes, and they are proving perfectly capable of motoring alongside human drivers. But beyond robotic cars, many high-end vehicles available today are already practically capable of driving themselves either under the guise of passenger safety or driver convenience.



Who’s doing it?


In short, everyone. Google started work on the pioneering technology about eight years ago, helped by expert recruits from Stanford, but Uber, China’s Baidu and even Apple – if you believe the rumours – are working on self-driving technology.


The automotive manufacturers aren’t sitting on their hands either. Elon Musk’s Tesla is working on the technology for its electric cars, while GM, Daimler, Volvo, Ford, Jaguar Land Rover, Audi and BMW are also developing solutions.


Smaller companies and groups are also developing and testing the technology in the UK, including consortiums running trials in Greenwich, Bristol, Milton Keynes and Coventry. The Transport Research Lab, Arup, the AA, RAC, Atkins and Imperial College London are all involved.



What kinds of vehicles are they working on?


Self-driving vehicles can take many forms. Most of the automotive manufacturers are looking to create cars very similar to those we already drive – for individual ownership but with the ability to drive themselves.


Others, including Google, are looking at creating cars that are either smaller and more compact, or larger and laid out without a traditional driver’s seat, turning the car’s cabin into a mobile lounge area.


Other research has focused on autonomous vehicles replacing traditional buses and public transport shuttles. Some resemble tram cars without tracks. Other firms, including Uber, are trying to create vehicles that will eventually replace taxis.


Commercial goods vehicle manufacturers are also looking at autonomous trucks, which resemble traditional lorries, but could look more like a train or storage container on wheels.



Where are they doing testing?


Google has been testing its self-driving cars, which have included modified Toyota Priuses, Lexus RX450h SUVs and a bespoke self-driving bubble car, on public roads in Nevada, Florida, California and Michigan since 2012.


Uber recently began testing a self-driving car in Pittsburgh carrying passengers, with a human driver for backup.


Volvo and several other car manufacturers have also performed limited tests on some public roads around Europe and the US, while Baidu partnered with BMW for limited testing in China.


Large-scale testing, including Volvo’s 100-car test with members of the public on a Gothenburg commuter route, is scheduled to start next year. A version of that trial is expected to go ahead in the UK in 2018. The UK is expected to green-light trials on motorways from next year.



Who’s leading the autonomous pack?


Google is currently out in front, having driven more autonomous miles and collected more data than anyone else. But traditional car manufacturers are quickly catching up.


It’s also unclear what Google’s intentions are. The company recently partnered with Fiat Chrysler to fit its self-driving technology into the Chrysler Pacifica hybrid minivan, but its efforts to develop a bespoke self-driving car without a steering wheel or pedals point to an intention to develop cars on its own.


Volvo has been working on self-driving technology under the guise of safety features for years, and has explored the idea of road trains for commercial vehicles, where a front lorry guides a convoy.



What’s required to make a self-driving car work?


The bulk of the technology required for self-driving cars is not all that futuristic, but it is the combination of different sensors with advanced computer vision systems that makes it work.


Many of the vehicles use what is called Lidar (Light Detection and Ranging) – a rotating laser, usually mounted on the roof, that continually scans the environment around the car. Traditional radar is also used for detecting distances to objects and cars, as are various cameras, accelerometers, gyroscopes and GPS, which are all used in conjunction to build a 3D picture of the environment around the vehicle.


The most complex part of a self-driving system is the software that collects the data, analyses it and actually drives the vehicle. It has to be capable of recognising and differentiating between cars, bikes, people, animals and other objects, as well as the road surface; it must know where the car is in relation to built-in maps, and it must be able to react to an often unpredictable environment.
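The sense-analyse-act loop described above can be sketched in a few lines of code. This is purely an illustration for readers, not any manufacturer's real software: the categories, distances and rules are invented to show the shape of the problem.

```python
# Toy sketch of the pipeline the article describes: take classified
# detections of the surroundings, then choose a driving action.
# All rules and thresholds here are made up for illustration.

def classify(detection):
    """Map a raw detection to a coarse category (toy rules)."""
    if detection["moving"] and detection["size"] == "large":
        return "car"
    if detection["moving"]:
        return "pedestrian_or_bike"
    return "static_object"

def choose_action(detections):
    """Pick the most cautious action required by any detection."""
    action = "continue"
    for d in detections:
        kind = classify(d)
        if kind == "pedestrian_or_bike" and d["distance_m"] < 20:
            return "brake"          # most urgent: stop immediately
        if kind == "car" and d["distance_m"] < 10:
            action = "slow_down"
    return action

scene = [
    {"moving": True, "size": "small", "distance_m": 15},   # cyclist ahead
    {"moving": False, "size": "large", "distance_m": 5},   # parked van
]
print(choose_action(scene))  # brake: the cyclist is within 20 m
```

A real system replaces each of these toy rules with machine-learned models trained on millions of miles of sensor data, but the overall structure (classify, then decide) is the same.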



Are there speed bumps ahead?


There are several major hold-ups between the developmental prototypes and commercialisation of driverless technology. One of the biggest is the problem of ethics.


Unlike a human who reacts instinctively in an emergency, an autonomous car will have to calculate and choose the appropriate response to each scenario, including possibly a choice between killing its occupants or other people.


Legislation must also be changed before self-driving vehicles will be permitted on public roads beyond small tests, while insurers must decide who pays when an autonomous car inevitably has an accident.


Further down the road, another question will be whether, once autonomous vehicles work and are safer than human drivers, human drivers should be banned.



When are we going to be able to step into one?


Many experts believe that full adoption of autonomous vehicles won’t happen until 2030, but some vehicles with self-driving capabilities are expected by 2020. Whether they are legal to drive everywhere or to drive without an occupant – to pick up a passenger or park themselves – remains to be seen.



What’s available right now?


No purely autonomous vehicles are available at present, but several with self-driving features are currently on our roads.


Tesla’s Model S has an advanced cruise control feature called Autopilot, which uses cameras and radar to detect the car’s position in lane, the proximity of other cars and the speed limit. It can control the car’s speed and steering to keep it in the middle of the lane, reacting to other cars and changing lanes on command.


Volvo’s latest XC90 includes a raft of autonomous driving features, including lane assist, adaptive cruise control and a suite of automatic emergency systems that stop the car from pulling into oncoming traffic or from rear-ending cars.




Source 3

"Why self-driving cars aren't safe yet: rain, roadworks and other obstacles"
By Olivia Solon


Driverless technology remains a work in progress as the fatal crash of Tesla Model S tragically showed. Here are some flaws that persist in autopilot technology


The inside of a Tesla vehicle as it sits parked in a showroom. Photograph: Spencer Platt/Getty Images

Last week’s fatal crash involving a Tesla Model S offers a startling reminder that driverless technology is still a work in progress.


As Tesla’s own blogpost on the “tragic loss” points out, the autopilot technology that was controlling Joshua Brown’s car when it ploughed into a truck is in a “public beta phase”. That means the software has been released into the wild to be stress-tested by members of the public so that bugs can be flushed out. It’s the kind of approach we are used to seeing when we gain early access to new email applications or virtual reality headsets. As Apple co-founder Steve Wozniak told the New York Times: “Beta products shouldn’t have such life-and-death consequences”.


Until the investigation into the tragic incident concludes, we won’t know whether it was caused by a software glitch or human error – particularly with reports suggesting the driver may have been watching a Harry Potter DVD. All we know is that “neither autopilot nor the driver” noticed the white side of the tractor trailer against the brightly lit sky “so the brake was not applied”.


Tesla’s autopilot uses both cameras and radar to detect and avoid obstacles, so in this case we know there must have been a double failure. The cameras struggled with the glare from the sun, while the radar – according to Musk – “tunes out what looks like an overhead road sign to avoid false braking events”.


Elon Musk may have taken to aggressively dismissing coverage of the crash on his Twitter account, but there are still significant everyday flaws that present obstacles to wider adoption of self-driving car technology.



Sensor fusion


When you have multiple sensors giving conflicting information, which one do you defer to? This seemed to be an issue at play in the fatal Tesla crash, where the one sensor that did spot the truck misinterpreted it as a road sign overhead.

“The big question for driverless car makers is: how does the intelligence of the machine know that the radar sensor is the one to believe? That’s the secret sauce,” says Sridhar Lakshmanan, a self-driving car specialist and engineering professor at the University of Michigan-Dearborn.
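One simple way to picture the "which sensor do you believe?" problem is a confidence vote. The fusion rule below is a hypothetical illustration, not Tesla's or anyone else's actual method: each sensor reports a label with a confidence, the scores are summed per label, and the highest total wins.

```python
# Toy illustration of sensor fusion as a confidence-weighted vote.
# Sensors, labels and confidence values are invented for the example.
from collections import defaultdict

def fuse(readings):
    """readings: list of (sensor, label, confidence) tuples."""
    score = defaultdict(float)
    for sensor, label, conf in readings:
        score[label] += conf
    return max(score, key=score.get)

readings = [
    ("camera", "unknown",       0.3),   # glare: the camera is unsure
    ("radar",  "overhead_sign", 0.6),   # radar misreads the obstacle
    ("radar",  "truck",         0.5),
    ("lidar",  "truck",         0.7),
]
print(fuse(readings))  # truck: 0.5 + 0.7 outweighs any other label
```

The hard part, as Lakshmanan suggests, is setting those confidences correctly in every condition; a vote like this is only as good as the weights feeding into it.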





Roadworks


When Delphi sent an autonomous car 3,400 miles across the US in April 2015, engineers had to take control of the car for only a 50-mile stretch. The reason? Unpredictable urban conditions with unmarked lanes and heavy roadworks. In other words, an average city commute.



Sandbags (and assumptions)


One of Google’s self-driving cars collided with a public bus in Mountain View in February as it tried to navigate some sandbags on the street. In attempting to move around the sandbags, the car’s left front struck the side of the bus that was trying to overtake. The car had detected the bus but predicted it would yield, and the test driver behind the wheel also made that assumption.


“Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day,” said Google of the incident.





Bad weather


Adverse weather conditions create visibility problems for both people and the sensors that power driverless technology. Rain can reduce the range and accuracy of laser-based Lidar sensors, obscure the vision of on-board cameras and create confusing reflections and glare. In a bid to improve the performance of driverless technology in soggy conditions, Google has started testing its cars on public roads near Seattle, where regular rain is guaranteed.





Hackers


As cars become more hi-tech they become more vulnerable to hacking. With driverless vehicles, the extra computers, internet connectivity and sensors increase the possible vulnerabilities. In a proof-of-concept attack, security researcher Jonathan Petit showed that Lidar can be easily fooled into detecting a non-existent obstacle using a handheld laser pointer, which can force the car to slow down, stop or swerve.





Humans


Just as humans are at fault in more than 90% of car accidents, so too can they be the weakest link in semi-autonomous vehicles – particularly when a functionality labelled as “autopilot” encourages users to place their trust in the machine. “Maybe these intermediate levels [of automation] are not a viable consumer product,” says Richard Wallace, the director of the Transportation Systems Analysis group within the Center for Automotive Research. “They go a little too far in encouraging drivers to check out and yet they aren’t ready to take control.”



And other humans


It’s not just the humans inside cars with self-driving technology who need to be vigilant, but those in other vehicles too. Accident rates involving driverless cars are twice as high as for regular cars, according to a study by the University of Michigan’s Transportation Research Institute which looked at data from Google, Delphi and Audi.


However, the driverless cars weren’t at fault – they are typically hit from behind by inattentive or aggressive humans unaccustomed to self-driving motorists being such sticklers for the rules of the road. Google has started to programme its cars to behave in more familiar, human ways, such as inching forward at a four-way stop to indicate they will be moving next.


But it’s at this collision point that the biggest challenges for technology firms lie: encouraging adoption of rapidly developing new technology by a population that is quirky, unpredictable and, in turn, both sceptical and overtrusting.




Source 4

Let the Robot Drive: The Autonomous Car of the Future is Here
Infographic excerpted from Tom Vanderbilt for Wired, Jan 20, 2012




  1. Radar 
    High-end cars already bristle with radar, which can track nearby objects. For instance, Mercedes’ Distronic Plus, an accident-prevention system, includes units on the rear bumper that trigger an alert when they detect something in the car’s blind spot.
  2. Lane-keeping 
    Windshield-mounted cameras recognize lane markings by spotting the contrast between the road surface and the boundary lines. If the vehicle leaves its lane unintentionally, brief vibrations of the steering wheel alert the driver.
  3. LIDAR 
    Google employs Velodyne’s rooftop Light Detection and Ranging system, which uses 64 lasers, spinning at upwards of 900 rpm, to generate a point cloud that gives the car a 360-degree view.
  4. Infrared Camera 
    Mercedes’ Night View assist uses two headlamps to beam invisible, nonreflective infrared light onto the road ahead. A windshield-mounted camera detects the IR signature and shows the illuminated image (with hazards highlighted) on the dashboard display.
  5. Stereo Vision 
    Mercedes’ prototype system uses two windshield-mounted cameras to build a real-time 3-D image of the road ahead, spotting potential hazards like pedestrians and predicting where they are headed.
  6. GPS/Inertial Measurement 
    A self-driver has to know where it’s going. Google uses a positioning system from Applanix, as well as its own mapping and GPS tech.
  7. Wheel Encoder
    Wheel-mounted sensors measure the velocity of the Google car as it maneuvers through traffic.
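The wheel encoder in item 7 measures velocity with simple arithmetic: count pulses per wheel revolution, convert revolutions to distance via the wheel's circumference, and divide by elapsed time. The sketch below illustrates the calculation; the tick counts and wheel size are made-up example values, not figures from Google's car.

```python
# Back-of-the-envelope wheel-encoder velocity calculation.
# All numbers below are illustrative, not real vehicle specs.
import math

def speed_mps(ticks, ticks_per_rev, wheel_diameter_m, dt_s):
    """Speed in metres per second from encoder pulses over an interval."""
    revolutions = ticks / ticks_per_rev
    distance_m = revolutions * math.pi * wheel_diameter_m  # circumference per rev
    return distance_m / dt_s

# 500 ticks in 1 s on a 100-tick encoder with a 0.6 m wheel:
v = speed_mps(ticks=500, ticks_per_rev=100, wheel_diameter_m=0.6, dt_s=1.0)
print(round(v, 2), "m/s")  # about 9.42 m/s (roughly 34 km/h)
```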






Source 5

Graphic excerpted from The Economist “Look, No Hands” Sept 1, 2012










