Generative Models for Clutter and Occlusion
Background clutter and occlusion are two common challenges in visual tracking. We studied generative models that include extra hidden processes for non-stationary clutter and occlusion. Inferring these processes makes the tracker more robust and gives it self-awareness of its own tracking state.
One example is a dynamic Bayesian network that explicitly models non-stationary clutter for contour tracking. The generative model consists of multiple hidden processes that model the target, the clutter, and the occlusion. The image observation models, which describe how the image features are generated, are conditioned on all of the hidden processes. Within this model, the tracker automatically switches among different observation models according to the hidden states of the clutter and occlusion, and inferring these hidden states provides a self-evaluation of the tracker.
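The switching mechanism can be sketched as a minimal particle filter that carries, alongside the continuous target state, a discrete hidden mode (clean / clutter / occluded) evolving as a Markov chain. Everything concrete below — the 1-D state, the random-walk dynamics, the transition matrix, and the three likelihood forms — is an illustrative assumption, not the model from the work described:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mode semantics for this sketch:
# 0 = clean observation, 1 = clutter, 2 = occluded
TRANS = np.array([[0.90, 0.05, 0.05],
                  [0.30, 0.60, 0.10],
                  [0.30, 0.10, 0.60]])

def obs_likelihood(z, x, mode):
    """Observation model switched on the discrete hidden mode."""
    if mode == 0:                                  # clean: sharp Gaussian
        return np.exp(-0.5 * (z - x) ** 2 / 0.1)
    if mode == 1:                                  # clutter: diffuse Gaussian
        return np.exp(-0.5 * (z - x) ** 2 / 4.0)
    return 1.0                                     # occluded: flat, uninformative

def step(particles, modes, weights, z):
    # Propagate the continuous target state (random-walk dynamics).
    particles = particles + rng.normal(0.0, 0.2, size=particles.shape)
    # Propagate the discrete clutter/occlusion mode via the Markov chain.
    modes = np.array([rng.choice(3, p=TRANS[m]) for m in modes])
    # Reweight each particle by the mode-conditioned observation likelihood.
    w = weights * np.array([obs_likelihood(z, x, m)
                            for x, m in zip(particles, modes)])
    return particles, modes, w / w.sum()

N = 500
particles = rng.normal(0.0, 1.0, N)
modes = np.zeros(N, dtype=int)
weights = np.ones(N) / N

for z in [0.10, 0.20, 0.15, 5.0, 0.25]:   # 5.0 plays the role of a clutter outlier
    particles, modes, weights = step(particles, modes, weights, z)

# The posterior probability that the current observation is clean falls out
# of inferring the hidden mode -- this is the tracker's self-evaluation.
p_clean = weights[modes == 0].sum()
```

Because the mode is inferred jointly with the state, an outlying measurement shifts posterior mass toward the clutter/occlusion modes rather than dragging the state estimate off the target.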
Another example we have studied is occlusion in appearance-based target tracking, where multiple targets must be tracked simultaneously and their identities maintained throughout. The generative model accommodates an extra hidden process for occlusion and stipulates the conditions under which the image observation likelihood is calculated. Statistical inference of this hidden process can reveal the occlusion relations among the targets, which makes the tracker robust against partial and even complete occlusions.
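The key idea — scoring each target's appearance only where that target is actually visible under a hypothesized occlusion ordering — can be illustrated on a toy 1-D "image". The rendering scheme, box intervals, constant appearances, and Gaussian noise model below are all assumptions made for the sketch, not the observation model of the work described:

```python
import numpy as np

def joint_likelihood(image_row, boxes, depth_order, appearance, noise=0.1):
    """Likelihood of a 1-D image under target positions plus a hypothesized
    depth (occlusion) ordering; depth_order[0] is the front-most target.

    Where target intervals overlap, only the front target generates pixels,
    so the occluded target's appearance is not scored there -- the
    'stipulated condition' on the observation likelihood.
    """
    width = len(image_row)
    owner = np.full(width, -1)
    for t in reversed(depth_order):        # paint back-to-front
        lo, hi = boxes[t]
        owner[lo:hi] = t
    pred = np.zeros(width)                 # background appearance assumed 0
    for t, a in enumerate(appearance):
        pred[owner == t] = a
    resid = image_row - pred
    return np.exp(-0.5 * np.sum(resid ** 2) / noise ** 2)

# Synthetic scene: target 1 (appearance 2.0) partially occludes target 0.
img = np.zeros(20)
img[4:10] = 1.0                            # visible part of target 0
img[8:14] = 2.0                            # target 1, in front
boxes = {0: (4, 12), 1: (8, 14)}
appearance = [1.0, 2.0]

l_1_occludes_0 = joint_likelihood(img, boxes, depth_order=[1, 0], appearance=appearance)
l_0_occludes_1 = joint_likelihood(img, boxes, depth_order=[0, 1], appearance=appearance)
```

Comparing the two hypotheses recovers the occlusion relation: the ordering that matches the scene explains every pixel and dominates the likelihood, which is how inference over the hidden occlusion process maintains target identities through an overlap.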