

Journal article:

A. Ion, D. Banica, A. Agape, C. Sminchisescu:
"Video Object Segmentation by Salient Segment Chain Composition";
International Journal of Computer Vision, 1 (2013), pp. 1-8.



English abstract:
We present a model for video segmentation, applicable to RGB (and, if available, RGB-D) data, that constructs multiple plausible partitions corresponding to the static and the moving objects in the scene: i) we generate multiple figure-ground segmentations in each frame, parametrically, based on boundary and optical flow cues, then track, link, and refine the salient segment chains corresponding to the different objects over time, using long-range temporal constraints; ii) a video partition is obtained by composing segment chains into consistent tilings, in which the individual object chains explain the video and do not overlap. Saliency metrics based on figural and motion cues, as well as measures learned from human eye movements, are exploited, with substantial gain, at the level of segment generation and chain construction, in order to produce compact sets of hypotheses that correctly reflect the quality of the different configurations. The model makes it possible to compute multiple hypotheses both for individual object segmentations tracked over time and for complete video partitions. We report quantitative, state-of-the-art results on the SegTrack single-object benchmark, and promising qualitative and quantitative results on clips showing multiple static and moving objects, collected from Hollywood movies and from the MIT dataset.
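
The composition step in (ii) can be read as selecting a subset of non-overlapping segment chains that jointly explain the video. The sketch below is a minimal, hypothetical illustration of that idea in Python: chains are represented as frame-indexed binary masks and picked greedily by a per-chain saliency score. The function names, data layout, and the greedy selection are assumptions for illustration only and stand in for the paper's actual composition procedure.

import numpy as np

def chains_overlap(chain_a, chain_b):
    # Each chain maps frame index -> binary mask (H x W bool array).
    # Two chains conflict if their masks share any pixel in a common frame.
    for t in set(chain_a) & set(chain_b):
        if np.logical_and(chain_a[t], chain_b[t]).any():
            return True
    return False

def compose_video_partition(chains, scores):
    # Greedy stand-in for chain composition: visit chains in order of
    # decreasing saliency score and keep those that do not overlap any
    # already selected chain. Returns indices of one tiling hypothesis.
    order = sorted(range(len(chains)), key=lambda i: scores[i], reverse=True)
    selected = []
    for i in order:
        if all(not chains_overlap(chains[i], chains[j]) for j in selected):
            selected.append(i)
    return selected

Several alternative tilings could be produced by, for example, re-running the selection with the top-ranked chain excluded, which mirrors the abstract's emphasis on multiple hypotheses over complete video partitions.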

Created from the publication database of the Technische Universität Wien.