Motion Estimation

Full description

Nowadays, handheld devices such as mobile phones are manufactured with a built-in digital camera that allows users to capture pictures and record videos of their favorite moments. The acquired images often suffer from annoying motion blur or out-of-focus problems. Adjusting the camera exposure parameters, such as shutter speed and ISO value (sensor sensitivity), reduces motion blur and results in clearer images. Furthermore, continuous focusing provides increased sharpness. Sophisticated mechanisms or dedicated sensors are commonly found in commercial cameras to reduce blurring and to determine a correct focus. However, they cannot be accommodated in mobile phones due to the small power source and limited physical space. In this work, we have developed a vision-based solution in which the incoming visual input from the phone's on-board camera is used to detect significant motion in the scene and to provide continuous motion information, facilitating the capture of blur-free images and allowing continuous focusing.

  1. Inter-frame Motion (IM) Estimation
  • Exhaustive search with a block distortion measure (BDM)
  • Search area (SA) of \(2R \times 2R\)
  • Produces local motion information
  2. Accumulation of Local Feature Motion
  • Voting in Hough space
  • Clusters motion data originating from the same object
  • Overcomes a degree of noise and false motion data
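The two steps above can be sketched in a few lines. This is a simplified illustration, not the deployed implementation: the sum of absolute differences (SAD) is assumed as the BDM, blocks stand in for features, and the Hough-style accumulator is a plain histogram over displacement bins whose peak gives the dominant motion.

```python
import numpy as np

def block_sad(a, b):
    """Block distortion measure (BDM), here assumed to be the
    sum of absolute differences (SAD)."""
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def estimate_block_motion(prev, curr, y, x, bs, R):
    """Exhaustive search: test every displacement inside the
    (2R x 2R) search area around the reference block."""
    ref = prev[y:y+bs, x:x+bs]
    best, best_mv = None, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + bs > curr.shape[0] or xx + bs > curr.shape[1]:
                continue  # candidate block falls outside the frame
            cost = block_sad(ref, curr[yy:yy+bs, xx:xx+bs])
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

def dominant_motion(prev, curr, bs=8, R=4):
    """Accumulate local motion vectors by voting (Hough-style);
    the peak bin survives noisy or false local motion data."""
    acc = {}
    H, W = prev.shape
    for y in range(0, H - bs + 1, bs):
        for x in range(0, W - bs + 1, bs):
            mv = estimate_block_motion(prev, curr, y, x, bs, R)
            acc[mv] = acc.get(mv, 0) + 1
    return max(acc, key=acc.get)
```

Because each block votes independently, a few blocks that latch onto false matches near the frame border cannot outvote the consistent majority, which is exactly the robustness the accumulation step is meant to provide.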




Edge segment based moving object detection

Accurate detection of moving objects is an important research problem for tracking and recognizing objects. In this work, we propose an edge-segment-based background modeling algorithm for detecting moving objects with a static camera. Traditional pixel-based methods have difficulty updating the background model, and they produce ghosts when a sudden change occurs in the background. Although edge-based methods are robust to illumination variation and noise, existing edge-pixel-based methods suffer from scattered moving edge pixels because they cannot utilize edge shape information. Moreover, traditional edge-segment-based methods treat every edge segment equally, which creates edge mismatches under a non-stationary background. We present an edge-segment-based statistical approach that models the background using ordinary training images, which may even contain moving objects. The proposed method relies on background edge segment matching and therefore does not leave ghosts behind. Moreover, it maintains a statistical model for every background edge segment individually, which makes the approach robust to camera movement and able to adapt to background motion (e.g., moving tree branches). Experiments with natural image sequences show that our method can detect moving edges efficiently under the above-mentioned difficulties.


Figure: (a) the accumulated background edge map with the selected ROI; (b) the edge distribution map of the ROI in (a); (c) a cut of the distribution in (b) at column 60.
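The idea of a per-segment statistical model can be illustrated with a minimal sketch. All names here are hypothetical and the statistics are deliberately simplified (a running mean and variance of each segment's centroid): a current-frame segment that is statistically close to some background segment updates that model, while an unmatched segment is declared moving, so no ghost is left behind.

```python
import numpy as np

class SegmentModel:
    """Per-segment background statistics: running mean and variance of
    the segment's centroid, so a background edge can drift slightly
    (camera jitter, swaying branches) and still be matched."""
    def __init__(self, points):
        self.mean = np.mean(points, axis=0)
        self.var = np.full(2, 4.0)  # assumed initial positional variance (px^2)

    def distance(self, points):
        # Variance-normalized distance between centroids
        c = np.mean(points, axis=0)
        return float(np.sqrt(np.sum((c - self.mean) ** 2 / self.var)))

    def update(self, points, alpha=0.1):
        # Exponential update of the per-segment statistics
        c = np.mean(points, axis=0)
        self.var = (1 - alpha) * self.var + alpha * (c - self.mean) ** 2
        self.mean = (1 - alpha) * self.mean + alpha * c

def detect_moving_segments(background, segments, thresh=3.0):
    """Segments with no statistically close background counterpart are
    declared moving; matched segments refresh their background model."""
    moving = []
    for seg in segments:
        dists = [m.distance(seg) for m in background]
        if dists and min(dists) < thresh:
            background[int(np.argmin(dists))].update(seg)
        else:
            moving.append(seg)
    return moving
```

In the actual method the matching criterion operates on whole edge segments rather than centroids alone, but the per-segment model and the match-or-declare-moving decision follow the same pattern.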


Human body tracking and behavior analysis

We propose a method for tracking non-rigid human body parts, for which there is great demand in automated human body part detection and tracking. Human body tracking is difficult due to the large variation in movement across different body parts. Edge segments from the limbs (legs and arms) show high movement variation while walking or running, whereas the head and torso segments move slowly. Thus, we can assign weights to moving segments so that different levels of flexibility can be applied during segment matching. The figure describes a possible application of moving edge segments. Here we can build a segment tracker to track every moving edge segment, as shown. Segments with similar motion and similar side color can form a group, for which the average group motion can be computed. While tracking a group, a member edge segment that deviates from the average group motion, or whose side intensity does not match the group's intensity range, can be eliminated. Utilizing a human model, we can initialize a body-part tracker that predicts possible body part locations, as depicted, for the next frame.
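The grouping and pruning steps described above can be sketched as follows. This is an illustrative simplification under assumed thresholds (`mv_tol`, `int_tol` are hypothetical names): segments are grouped greedily by motion-vector and side-intensity similarity, and members that later deviate from the average group motion are dropped.

```python
import numpy as np

def group_segments(motions, intensities, mv_tol=2.0, int_tol=20.0):
    """Greedy grouping: a segment joins the first group whose average
    motion vector and average side intensity are both close to its own."""
    groups = []
    for i, (mv, it) in enumerate(zip(motions, intensities)):
        placed = False
        for g in groups:
            g_mv = np.mean([motions[j] for j in g], axis=0)
            g_it = np.mean([intensities[j] for j in g])
            if np.linalg.norm(mv - g_mv) <= mv_tol and abs(it - g_it) <= int_tol:
                g.append(i)
                placed = True
                break
        if not placed:
            groups.append([i])  # start a new group
    return groups

def prune_group(group, motions, mv_tol=2.0):
    """Eliminate members whose motion deviates from the group average."""
    avg = np.mean([motions[i] for i in group], axis=0)
    return [i for i in group if np.linalg.norm(motions[i] - avg) <= mv_tol]
```

Fast-moving limb segments and slow-moving torso segments naturally fall into separate groups here, which is what allows different levels of matching flexibility per group.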




SCI/SCIE Indexed Journals:

▷ Md. Hasanul Kabir, M. Abdullah-Al-Wadud, and Oksam Chae, “Brightness Preserving Image Contrast Enhancement using Weighted Mixture of Global and Local Transformation Functions”, International Arab Journal of Information Technology (IAJIT), Volume 7, No. 4, pp. 403–410, October 2010.

▷ M. Abdullah-Al-Wadud, Md. Hasanul Kabir, M. Ali Akber Dewan, and Oksam Chae, “A Dynamic Histogram Equalization for Image Contrast Enhancement”, IEEE Transactions on Consumer Electronics, Volume 53, Issue 2, pp. 593–600, May 2007.

▷ International patent, Yong Gu Lee, Young Kwon Yoon, Oksam Chae, and Md. Hasanul Kabir, “Method and Apparatus for Motion Compensation”, US Patent no. 20100239239, September, 2010.

▷ M. Hossain, M. A. A. Dewan, and O. Chae, “A Flexible Edge Matching Technique for Object Detection in Dynamic Environment”, Applied Intelligence, Springer Netherlands, ISSN 0924-669X, pp. 1–11, 2011.

▷ M. Hossain, M. A. A. Dewan, K. Ahn, and O. Chae, “A Linear Time Algorithm of Computing Hausdorff Distance for Content-based Image Analysis”, Circuits, Systems, and Signal Processing, Birkhäuser Boston, ISSN 0278-081X, pp. 1–11, 2011.

▷ Mahbub Murshed, Adin Ramirez, Jaemyun Kim, and Oksam Chae, “Statistical Binary Edge Frequency Accumulation Model for Moving Object Detection”, Accepted, International Journal of Innovative Computing, Information and Control (ISSN 1349-4198), Volume 8, Number 6, June 2012. [SCIE], Impact Factor 2.79.

▷ Mahbub Murshed, Md. Hasanul Kabir, and Oksam Chae, “Moving Object Tracking – An Edge Segment-based Approach”, International Journal of Innovative Computing, Information and Control (IJICIC) (ISSN 1349-4198), Volume 7, Number 7, July 2011. [SCIE], Impact Factor 2.932.

▷ M. A. A. Dewan, M. J. Hossain, and O. Chae, “Background Independent Moving Object Segmentation for Video Surveillance”, IEICE Transactions, 92-B(2), pp. 585–598, 2009.

▷ Md. Hasanul Kabir, Taskeed Jabid, and Oksam Chae, “Local Directional Pattern Variance (LDPv): A Robust Feature Descriptor for Facial Expression Recognition”, International Arab Journal of Information Technology (IAJIT), Volume 9, Accepted, 2012.

▷ Taskeed Jabid, Md. Hasanul Kabir, and Oksam Chae, “Robust Facial Expression Recognition based on Local Directional Pattern”, ETRI Journal, Volume 32, No. 5, pp. 784–794, October 2010.
