Snoek C, Sande K, Rooij O, et al., 2008, The MediaMill TRECVID 2008 Semantic Video Search Engine, TRECVID Workshop
In this paper we describe our TRECVID 2008 video retrieval experiments. The MediaMill team participated in three tasks: concept detection, automatic search, and interactive search. Rather than continuing to increase the number of concept detectors available for retrieval, our TRECVID 2008 experiments focus on increasing the robustness of a small set of detectors using a bag-of-words approach. To that end, our concept detection experiments emphasize in particular the role of visual sampling, the value of color-invariant features, the influence of codebook construction, and the effectiveness of kernel-based learning parameters. For retrieval, a robust but limited set of concept detectors necessitates relying on as many auxiliary information channels as possible. Therefore, our automatic search experiments focus on predicting which information channel to trust given a certain topic, leading to a novel framework for predictive video retrieval. To improve the video retrieval results further, our interactive search experiments investigate the roles of visualizing preview results for a certain browse dimension and of active learning mechanisms that learn to solve complex search topics by analysis of user browsing behavior. The 2008 edition of the TRECVID benchmark has been the most successful MediaMill participation to date, resulting in the top ranking for both concept detection and interactive search, and a runner-up ranking for automatic retrieval. Again a lot has been learned during this year's TRECVID campaign; we highlight the most important lessons at the end of this paper.
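The abstract above refers to a bag-of-words approach with codebook construction. As a hedged illustration only (not the paper's actual pipeline, which additionally involves visual sampling strategies, color-invariant descriptors, and kernel-based learning), a minimal sketch of the generic bag-of-visual-words idea is: cluster local descriptors into a codebook, then represent each image as a normalized histogram of nearest-codeword assignments. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def build_codebook(descriptors, k, iters=10, seed=0):
    """Toy k-means codebook over local descriptors (illustrative, not the
    paper's method). descriptors: (n, d) float array; returns (k, d) centers."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize an image's descriptors against the codebook and return a
    normalized codeword-frequency histogram (the bag-of-words vector)."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

In a full system the resulting histograms would feed a kernel classifier per concept; the abstract's experiments study exactly how the sampling, descriptor, codebook, and kernel choices affect that pipeline's robustness.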
Mikolajczyk K, Matas J, 2007, Improving descriptors for fast tree matching by optimal linear projection, 11th IEEE International Conference on Computer Vision, Publisher: IEEE, Pages: 337-344, ISSN: 1550-5499
Mikolajczyk K, Leibe B, Schiele B, 2006, Multiple object class detection with a generative model, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol: 1, Pages: 26-33, ISSN: 1063-6919
In this paper we propose an approach capable of simultaneous recognition and localization of multiple object classes using a generative model. A novel hierarchical representation allows individual images as well as various object classes to be represented in a single, scale- and rotation-invariant model. The recognition method is based on a codebook representation in which appearance clusters built from edge-based features are shared among several object classes. A probabilistic model allows for reliable detection of various objects in the same image. The approach is highly efficient due to fast clustering and matching methods capable of dealing with millions of high-dimensional features. The system shows excellent performance on several object categories over a wide range of scales, in-plane rotations, background clutter, and partial occlusions. The performance of the proposed multi-object class detection approach is competitive with state-of-the-art approaches dedicated to single-object-class recognition.
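The abstract above describes a codebook whose appearance clusters are shared among several object classes, with a probabilistic model scoring detections. As a hedged toy sketch only (the numbers and voting scheme below are invented for illustration, not taken from the paper), sharing can be expressed as each codeword carrying learned class posteriors, with an image's matched codewords accumulating votes per class:

```python
import numpy as np

# Hypothetical toy posteriors: 4 shared codewords, 2 object classes.
# p_class_given_word[w] = assumed learned P(class | codeword w).
p_class_given_word = np.array([
    [0.9, 0.1],   # codeword 0 votes mostly for class A
    [0.2, 0.8],   # codeword 1 votes mostly for class B
    [0.5, 0.5],   # ambiguous codeword shared equally by both classes
    [0.1, 0.9],
])

def class_scores(matched_words):
    """Accumulate class votes from the codewords matched in an image and
    normalize them into per-class scores (toy illustration of sharing)."""
    votes = p_class_given_word[matched_words].sum(axis=0)
    return votes / votes.sum()
```

The point of sharing is that one appearance cluster (like codeword 2 above) contributes evidence to several classes at once, so the codebook grows with appearance variety rather than with the number of classes.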
Leibe B, Mikolajczyk K, Schiele B, 2006, Segmentation Based Multi-Cue Integration for Object Detection., Publisher: British Machine Vision Association, Pages: 1169-1178
Leibe B, Mikolajczyk K, Schiele B, 2006, Efficient Clustering and Matching for Object Class Recognition., Publisher: British Machine Vision Association, Pages: 789-798
Yan F, Mikolajczyk K, Barnard M, et al., Lp Norm Multiple Kernel Fisher Discriminant Analysis for Object and Image Categorisation, IEEE Conference on Computer Vision and Pattern Recognition
Awais M, Yan F, Mikolajczyk K, et al., Augmented Kernel Matrix vs Classifier Fusion for Object Recognition, 22nd British Machine Vision Conference, Publisher: BMVA Press, Pages: 60.1-60.11