Driverless cars are getting smarter by the minute, but researchers are still endlessly tweaking them to make them safer on the roads. Now, a new system called SegNet allows an autonomous vehicle to “see” its environment.
SegNet analyzes an image or video pixel by pixel and classifies each pixel into one of 12 categories: Sky, Building, Pole, Road Marking, Road, Pavement, Tree, Sign Symbol, Fence, Vehicle, Pedestrian and Bike. Instead of relying on sensors, a driverless car could use cameras.
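The article does not describe SegNet's internals, but the core idea of per-pixel classification can be illustrated with a minimal sketch: a network produces a score for each of the 12 classes at every pixel, and each pixel is labeled with its highest-scoring class. The function and array shapes below are illustrative assumptions, not SegNet's actual API.

```python
import numpy as np

# The 12 SegNet categories listed in the article.
CLASSES = [
    "Sky", "Building", "Pole", "Road Marking", "Road", "Pavement",
    "Tree", "Sign Symbol", "Fence", "Vehicle", "Pedestrian", "Bike",
]

def label_pixels(class_scores):
    """Assign each pixel the class with the highest score.

    class_scores: array of shape (height, width, 12), one score per
    class per pixel, as a segmentation network would output.
    Returns a (height, width) array of class indices.
    """
    return np.argmax(class_scores, axis=-1)

# Toy 2x2 "image": the top row scores highest for Sky, the bottom for Road.
scores = np.zeros((2, 2, len(CLASSES)))
scores[0, :, CLASSES.index("Sky")] = 1.0
scores[1, :, CLASSES.index("Road")] = 1.0

labels = label_pixels(scores)
print([[CLASSES[i] for i in row] for row in labels.tolist()])
# [['Sky', 'Sky'], ['Road', 'Road']]
```

A real segmentation network computes these score maps with a deep encoder-decoder; the final labeling step, however, is exactly this per-pixel argmax.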
The new system was developed by researchers at the University of Cambridge. They “trained” it by manually labeling every pixel in 5,000 images, a process that took two days.
Real Time Processing with SegNet
Once trained, SegNet was able to label over 90 percent of an image’s pixels in real time. The researchers also demonstrated that it could be used for videos, which means that it can be used in tandem with a high-quality camera fitted outside a car.
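The “over 90 percent” figure refers to pixel accuracy: the fraction of pixels whose predicted class matches the hand-labeled ground truth. As an illustration (the function name and label maps here are hypothetical, not from the researchers' code), it can be computed like this:

```python
import numpy as np

def pixel_accuracy(predicted, ground_truth):
    """Fraction of pixels whose predicted class matches the label."""
    predicted = np.asarray(predicted)
    ground_truth = np.asarray(ground_truth)
    return float((predicted == ground_truth).mean())

# Hypothetical 3x3 label maps (class indices): 8 of 9 pixels agree.
pred = np.array([[0, 0, 4], [0, 4, 4], [4, 4, 4]])
truth = np.array([[0, 0, 0], [0, 4, 4], [4, 4, 4]])
print(round(pixel_accuracy(pred, truth), 3))  # 0.889
```

For video, the same computation runs frame by frame, which is why a camera feed can be labeled continuously in real time.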
All the training images were taken during the daytime. However, the researchers said the system can also identify scenes at night and in images with extreme shadows and lighting.
PhD student Alex Kendall said in a statement: “It’s remarkably good at recognizing things in an image, because it’s had so much practice. However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better.”
SegNet knows urban roads and highways well. However, the system still needs to learn other settings, such as deserts, snow and rural areas. Once it has learned all types of environments, the program could be used by self-driving cars.
SegNet vs LIDAR
Currently, driverless cars use expensive and complicated Light Detection and Ranging (LIDAR) setups. These sensors detect obstacles in the environment so the vehicle does not crash into them.
Besides being expensive, LIDAR sensors have been found to be vulnerable to laser pointers: the system can be fooled into detecting a nonexistent obstacle by firing a laser at it.
Initial tests of the SegNet system in other environments went well, but the researchers said there is still a lot of tweaking to do. In the meantime, the program could be used for warning systems in today’s cars.
Self-driving cars also make use of GPS (Global Positioning System), which provides maps and real-time location feedback. However, some areas across the globe still lack GPS coverage, and the signal can fail when the vehicle drives through a tunnel or under a bridge.
Precise Location Technology
A second, as-yet-unnamed localization system was also developed by Kendall and Professor Roberto Cipolla. The program has already been tested in central Cambridge, according to Dispatch Tribunal, where it determined orientation and location more accurately than GPS.
The system can determine which way it is facing even when a building’s sides are identical; it uses the geometry of a scene to work out its precise location and orientation in real time.
Kendall said that there has been significant progress in robotics and artificial intelligence in the past few years. Autonomous vehicles are now being tested on public roads, and the technologies they use will continue to evolve throughout that testing.
Driverless Cars and Machine Learning
Machine learning is one of the driving technologies behind self-driving cars. A vehicle needs to actively learn where it is and how it is positioned on the road, and it must quickly identify nearby obstacles, especially when driving at higher speeds.
Progress in machine learning has also been rapid. Researchers at MIT have managed to teach a machine to learn the way humans do.
The machine was able to fool human judges who had to identify whether a figure was drawn by a human or by the machine. As with SegNet, a huge number of examples was needed to train it.
Kendall said: “In the short term, we’re more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance. It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics.”
SegNet and Other Machine Learning Technology
SegNet is just one of many impressive advances in machine learning. Other projects also show how machines can learn from examples.
One is Microsoft’s (NASDAQ: MSFT) recent Project Oxford. The program tries to determine a person’s emotions in a picture based on their facial expression.
While it may not help navigate a self-driving car, it could still be used in other ways inside the vehicle. For instance, the car could detect that a passenger is bored, then play some music or show a video on the screen to entertain them.
The program could also be used to tell whether a passenger is frightened, say, if a criminal tried to rob them inside the car. Even without an alarm button, the car could determine from the passenger’s facial expression that they were in danger and alert nearby authorities or speed off.
Another system from MIT researchers was their Data Science Machine. The system was able to beat human teams in developing prediction algorithms.
Human teams took several months to finish data analysis tasks that the machine completed in about 12 hours. Self-driving cars could benefit from such a system by analysing potential road accidents in advance.
For instance, an autonomous vehicle could quickly decide where to turn, brake or accelerate if there was an impending crash with another vehicle or a skid caused by wet roads.
SegNet and self-driving cars may soon cross paths once additional testing and tweaking are complete. The system could help driverless cars travel more safely on streets and highways in the future.