What’s around the corner for autonomous vehicles?
- From passenger safety to averting collisions around corners, an update from the world of self-driving vehicles
- Researchers at MIT have developed a system that could allow these vehicles to see around corners and avoid potential collisions
The past week was brimming with updates from the world of autonomous vehicles. In London, the self-driving car project StreetWise launched commuter research trials with the help of FiveAI, a European startup developing software to power shared, self-driving vehicle services.
On 25 October, South Korean carmaker Hyundai announced that it will be offering autonomous ride-sharing services in the Californian city of Irvine, starting 4 November, to study consumer behaviour in an autonomous driving environment. The company is collaborating with technology group Pony.ai and rideshare firm Via on an on-demand service called BotRide that will offer free rides in autonomous vehicles. As part of this pilot, commuters will be able to use the BotRide app to hail autonomous electric sport utility vehicles. Via’s software will coordinate the shared rides, while the vehicles will run on self-driving technology from Pony.ai.
4 November will also mark the beginning of the International Conference on Intelligent Robots and Systems (Iros) in Macau, China. The conference is an annual academic event on advances in robotics. One of the papers being presented at Iros will look to tackle a big sticking point in the conversation on driverless and autonomous vehicles: passenger safety.
The paper will be presented by Massachusetts Institute of Technology (MIT) researchers who have developed a system that senses tiny changes in shadows on the ground to determine whether a moving object is approaching from around a corner. This could potentially be used in autonomous cars to avoid collisions with other cars or pedestrians before they come into view.
In the paper, the researchers, led by Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory at MIT, describe successful experiments with an autonomous car driving around a parking garage and an autonomous wheelchair navigating hallways. Their system, called ShadowCam, uses computer-vision techniques to detect and classify changes to shadows falling on the ground.
As the press release explains, ShadowCam uses sequences of video frames from a camera targeting a specific area, in this case the floor in front of a corner. The system detects changes in light intensity from image to image that may indicate something “moving away or coming closer”.

In early versions, the researchers relied on augmented-reality labels called “AprilTags”. These are essentially QR-code-like markers that robots can scan to calculate their 3D position and orientation relative to the tag. Since deploying AprilTags throughout real-world environments is impractical, the researchers devised a tag-free process that combines image registration with a new “visual-odometry” technique. Image registration looks for variations or changes between images by overlaying multiple images, while visual odometry estimates the camera’s motion by tracking “feature points” in different environments. For this, the researchers used “direct sparse odometry” (DSO), which “plots features of an environment on a 3D point cloud, and then a computer-vision pipeline selects only the features located in a region of interest, such as the floor near a corner”.
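To make the registration step concrete, here is a minimal sketch in Python using OpenCV of how consecutive frames might be aligned before comparison. The function name, warp model and parameters are illustrative assumptions, not the researchers’ actual implementation:

```python
import cv2
import numpy as np

def register_to_reference(reference_gray, frame_gray):
    """Illustrative image-registration step: estimate a Euclidean
    (rotation + translation) warp that aligns `frame_gray` with
    `reference_gray`, so camera motion is not mistaken for shadow motion."""
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    # ECC maximization finds the warp that best overlays the two frames.
    _, warp = cv2.findTransformECC(reference_gray, frame_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    h, w = reference_gray.shape
    return cv2.warpAffine(frame_gray, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

# Hypothetical usage: crop the region of interest (the floor near a
# corner) from each registered frame before looking for shadow changes.
# roi = registered[y0:y1, x0:x1]
```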
The next step in this process is a technique called signal amplification. Once ShadowCam takes in the image sequences from a region of interest (the floor in front of a corner) and computes the variations between images, the signal-amplification step boosts the pixels that contain shadows, making even weak changes in a shadow easier for the system to detect.
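As a rough illustration of what such an amplification step could look like, the sketch below subtracts the per-pixel temporal mean and scales the residual. The gain value and the neutral-grey encoding are assumptions for demonstration, not details from the paper:

```python
import numpy as np

def amplify_shadow_signal(frames, gain=3.0):
    """Hypothetical signal amplification: subtract the per-pixel temporal
    mean so static floor texture cancels out, then scale the residual so
    faint shadow motion stands out. Grey value 128 encodes 'no change'."""
    stack = np.stack(frames).astype(np.float32)      # shape (time, h, w)
    static_background = stack.mean(axis=0, keepdims=True)
    residual = (stack - static_background) * gain    # boost weak changes
    return np.clip(residual + 128.0, 0.0, 255.0).astype(np.uint8)
```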
According to the release, ShadowCam processes this information and classifies each image as containing either a stationary object or a dynamic, moving one. If it detects a dynamic image, the vehicle reacts accordingly, slowing down or stopping to avoid a potential collision. The release adds that, as of now, the system has only been tested in “indoor settings”. “Robotic speeds are much lower indoors, and lighting conditions are more consistent, making it easier for the system to sense and analyse shadows,” it adds.
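Continuing the sketch above, a simple and purely illustrative classifier might just threshold the amplified signal; the cutoff values below are invented for demonstration, whereas the actual system classifies shadow changes with a learned pipeline:

```python
import numpy as np

def classify_frame(amplified_roi, deviation_threshold=12.0,
                   min_fraction=0.005):
    """Label a frame 'dynamic' if enough pixels deviate from the neutral
    grey (128) that marks 'no change'. Both thresholds are hypothetical."""
    deviation = np.abs(amplified_roi.astype(np.float32) - 128.0)
    moving_fraction = float((deviation > deviation_threshold).mean())
    return "dynamic" if moving_fraction > min_fraction else "static"
```

In a vehicle, a “dynamic” label on the floor ahead of a corner would be the cue to slow down before the occluded object itself comes into view.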
Car-detection times showed that the system was fractionally faster than LiDAR, the laser-based sensor conventionally used in autonomous vehicles to detect visible objects. The researchers now aim to develop the system further so that it works in both indoor and outdoor lighting conditions.