Visual tracking faces a fundamental dilemma in practice: tracking must be computationally efficient, but verifying whether the tracker is following the true target tends to be demanding, especially in the presence of clutter and occlusions. As a result, many existing methods are either computationally inefficient, when they use sophisticated image observation models, or vulnerable to distraction, when they use simple visual measurements. This dilemma mainly threatens long-duration robust tracking.

Context-awareness in Persistent Tracking
We propose a novel and powerful solution for real-world tracking tasks. Rather than focusing only on the target, our approach tracks a random field in which the target motion is one site and the remaining sites are auxiliary objects, discovered automatically on the fly by video data mining. At least within a short time interval, auxiliary objects have three properties: persistent co-occurrence with the target, consistent motion correlation with the target, and ease of tracking. Collaborative tracking of this random field leads to efficient computation as well as accurate tracking verification. This new approach has exhibited outstanding performance on many challenging real-world test cases.
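The two measurable criteria above, persistent co-occurrence and consistent motion correlation, can be illustrated with a simple candidate filter. This is a minimal sketch under illustrative assumptions: the function names, data layout (per-frame positions with `None` for missed detections), and thresholds are hypothetical, not the actual mining procedure used in this work.

```python
import math

def displacements(track):
    """Frame-to-frame (dx, dy) motion vectors of a trajectory."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(track, track[1:])]

def motion_correlation(target, candidate):
    """Mean cosine similarity between per-frame motion vectors."""
    sims = []
    for (ux, uy), (vx, vy) in zip(displacements(target),
                                   displacements(candidate)):
        nu, nv = math.hypot(ux, uy), math.hypot(vx, vy)
        if nu < 1e-9 or nv < 1e-9:  # skip near-stationary frames
            continue
        sims.append((ux * vx + uy * vy) / (nu * nv))
    return sum(sims) / len(sims) if sims else 0.0

def select_auxiliary(target_track, candidates, min_cooccur=0.8, min_corr=0.7):
    """Keep candidates that co-occur often enough and move with the target.

    `candidates` maps an object id to a list of per-frame positions,
    with None where the object was not detected in that frame.
    """
    aux = []
    n = len(target_track)
    for obj_id, track in candidates.items():
        # Criterion 1: persistent co-occurrence (fraction of frames seen).
        cooccur = sum(p is not None for p in track) / n
        if cooccur < min_cooccur:
            continue
        # Criterion 2: motion correlation over the frames where the
        # candidate was detected (an approximation for sparse gaps).
        pairs = [(t, c) for t, c in zip(target_track, track) if c is not None]
        tgt = [t for t, _ in pairs]
        cand = [c for _, c in pairs]
        if motion_correlation(tgt, cand) >= min_corr:
            aux.append(obj_id)
    return aux
```

For example, an object riding alongside the target (same displacement each frame) passes both tests, while one moving in the opposite direction, or one visible in only a few frames, is rejected.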
[Demo videos]
Note: In each demo: [top-left] results of on-line video data mining that identifies the video context of the target (e.g., the head in this case); [top-right] robust information integration that combines all the predictions from the video context.

Note: [bottom-left] comparison with a dedicated head tracker; [bottom-right] our result, where the yellow bounding box highlights the tracked target.