Moving Objects Segmentation at a Traffic Junction from Vehicular Vision


Joo Kooi Tan
Seiji Ishikawa
Shinichiro Sonoda
Makoto Miyoshi
Takashi Morie

Abstract

Automatic extraction/segmentation and recognition of moving objects in a road environment are often problematic. This is especially the case when cameras are mounted on a moving vehicle (vehicular vision), yet it remains a critical task in vision-based transportation safety. The essential problem is twofold: extracting the foreground from the moving background, and separating and recognizing pedestrians from other moving objects, such as cars, that appear in the foreground.
The challenge of our proposed technique is to use a single mobile camera to separate the foreground from the background and to recognize pedestrians and other objects from vehicular vision, in order to achieve a low-cost, intelligent driver-assistance system. In this paper, the normal distribution is employed for modelling pixel gray values. The proposed technique separates the foreground from the background by comparing the pixel gray values of an input image with the normal distribution model of the pixel. The model is renewed after the separation to provide a new background model for the next image, and the renewal strategy changes depending on whether the concerned pixel is in the background or in the foreground. The performance of the present technique was examined on real-world vehicular videos captured at a junction while the car turns left or right, and satisfactory results were obtained.
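To illustrate the kind of per-pixel background model described above, the following is a minimal sketch of a normal-distribution background model with a selective (foreground/background-dependent) renewal rule. The class name, learning rates (alpha_bg, alpha_fg), threshold k, and initial variance are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

class GaussianBackgroundModel:
    """Per-pixel normal-distribution background model with selective update.

    Sketch only: parameter values are assumptions, not the paper's settings.
    """

    def __init__(self, first_frame, alpha_bg=0.05, alpha_fg=0.005, k=2.5):
        self.mean = first_frame.astype(np.float32)      # per-pixel mean gray value
        self.var = np.full_like(self.mean, 15.0 ** 2)   # per-pixel variance (assumed init)
        self.alpha_bg = alpha_bg   # faster renewal for pixels judged background
        self.alpha_fg = alpha_fg   # slower renewal for pixels judged foreground
        self.k = k                 # decision threshold in standard deviations

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        # A pixel is foreground if it deviates from the model by more than k sigma.
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Selective renewal: the update rate depends on the pixel's classification.
        alpha = np.where(foreground, self.alpha_fg, self.alpha_bg)
        self.mean += alpha * diff
        self.var += alpha * (diff ** 2 - self.var)
        return foreground.astype(np.uint8) * 255  # binary foreground mask
```

In use, grayscale frames from the in-vehicle camera would be fed to apply() one by one; the returned mask marks candidate moving objects, which a later stage would separate into pedestrians and other objects.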

Article Details

How to Cite
[1]
J. K. Tan, S. Ishikawa, S. Sonoda, M. Miyoshi, and T. Morie, “Moving Objects Segmentation at a Traffic Junction from Vehicular Vision”, ECTI-CIT Transactions, vol. 5, no. 2, pp. 73–88, Apr. 2016.
Section
Artificial Intelligence and Machine Learning (AI)