Bag of Visual Features Applications 

Presenter: Ana Paula Brandão Lopes (UFMG)
Date: 04/08/2009

Abstract


Bag of visual features (BOVF) representations have been used successfully in several tasks, such as object recognition, human action recognition, and scene classification. In this presentation, we review the basics of BOVF and show how we are applying it to two specific applications: the classification of historical photographs based on the presence of buildings, and nude detection.
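As a rough illustration of the BOVF basics reviewed here, the following sketch (a minimal, hypothetical example, not the presenters' implementation) quantizes local descriptors against a precomputed visual codebook and builds the normalized word histogram that represents an image:

```python
import numpy as np

def bovf_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest visual word and
    return an L1-normalized bag-of-visual-features histogram."""
    # Squared Euclidean distance from every descriptor to every codeword.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                 # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy data: 2-D "descriptors" clustered near the second of three codewords.
# (Real descriptors would be SIFT-like vectors; the codebook would come
# from k-means over a training set.)
rng = np.random.default_rng(0)
codebook = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
descriptors = rng.normal(loc=codebook[1], scale=0.1, size=(20, 2))
h = bovf_histogram(descriptors, codebook)
```

An image is then represented by its histogram, and any standard classifier (e.g., an SVM) can be trained on these representations.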

VSUMM: An Approach Based on Color Feature Extraction for Automatic Summarization and Subjective Evaluation of Static Video Summaries 

Presenter: Sandra Eliza Fontes de Avila (UFMG)
Date: 04/08/2009

Abstract


Advances in compression techniques, decreasing storage costs, and high-speed transmission have changed the way videos are created, stored, and distributed. The growing amount of video data deployed in applications such as search engines and digital libraries reveals the importance of video as a multimedia data type, and also creates a need for the efficient management of video data. This need paved the way for new research areas, such as automatic video summarization. Essentially, this research area consists of automatically generating a short summary of a video, which can be either a static summary (a set of keyframes) or a dynamic summary (a set of video segments). This work presents VSUMM, a methodology for producing static summaries. The method is based on the extraction of color features from video frames and on unsupervised classification. The resulting video summaries were evaluated by users and compared with approaches found in the literature. With a confidence level of 98%, the proposed solution provided results of superior quality relative to the approaches to which it was compared.
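The summarization pipeline described above can be illustrated with a small sketch. This is a simplified stand-in, not the VSUMM implementation (which, among other things, works in the HSV color space and includes pre-sampling and filtering steps): color histograms are extracted per frame, frames are clustered without supervision, and the frame nearest each cluster center becomes a keyframe.

```python
import numpy as np

def color_histogram(frame, bins=16):
    """16-bin histogram of the first color channel (a stand-in for the
    color features VSUMM extracts from each frame)."""
    h, _ = np.histogram(frame[..., 0], bins=bins, range=(0, 256))
    return h / h.sum()

def vsumm_keyframes(frames, k, iters=20, seed=0):
    """Cluster frame color features with plain k-means; the frame
    closest to each cluster center is kept as a keyframe."""
    feats = np.array([color_histogram(f) for f in frames])
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        dist = ((feats[:, None] - centers[None]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():           # leave empty clusters in place
                centers[j] = feats[labels == j].mean(axis=0)
    dist = ((feats[:, None] - centers[None]) ** 2).sum(axis=2)
    return sorted({int(dist[:, j].argmin()) for j in range(k)})

# Toy "video": two dark frames followed by two bright frames.
frames = [np.full((4, 4, 3), v, np.uint8) for v in (10, 12, 200, 205)]
keyframes = vsumm_keyframes(frames, k=2)
```

On this toy input, one frame from each group of visually similar frames is selected as a keyframe.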

Learning to Rank at Query-Time Using Association Rules 

Presenter: Marcos André Gonçalves (UFMG)
Date: 04/07/2009

Abstract



Some applications have to present their results in the form of ranked lists. This is the case of many information retrieval applications, including Content-Based Image Retrieval (CBIR), in which objects (documents, images) must be sorted according to their relevance to a given query. This has drawn the interest of the information retrieval community to methods that automatically learn effective ranking functions. In this paper we propose a novel method which uncovers patterns (or rules) in the training data associating features of an object with its relevance to the query, and then uses the discovered rules to rank these objects. To address problems inherent to the use of association rules (such as missing rules and rule explosion), the proposed method generates rules on a demand-driven basis, at query time. The result is an extremely fast and effective ranking method. We conducted a systematic evaluation of the proposed method using the LETOR benchmark collections. We show that generating rules on a demand-driven basis can boost ranking performance, providing gains ranging from 12% to 123% and outperforming state-of-the-art learning-to-rank methods, with no need for time-consuming and laborious pre-processing. As a highlight, we also show that additional information, such as query terms, can make the generated rules more discriminative, further improving ranking performance. Although focused on document retrieval, the techniques presented here are easily adapted to any retrieval task in which the objects are represented as a bag of features.
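To make the idea of query-time rule generation concrete, here is a toy sketch (a hypothetical simplification, not the paper's algorithm: it mines only single-feature rules, whereas the actual method uncovers richer rules and can exploit query terms). Feature values are discretized into bins; rules of the form "feature i falls in bin b -> relevance r" are gathered from the training set when the query arrives, and each object is scored by a support-weighted vote of the rules it matches:

```python
import numpy as np

def rank_by_rules(train_feats, train_rel, query_feats, bins=3):
    """Demand-driven associative ranking, reduced to 1-item rules:
    discretize features, collect matching (bin -> relevance) evidence
    from the training set at query time, and score by weighted vote."""
    edges = [np.quantile(train_feats[:, i], np.linspace(0, 1, bins + 1)[1:-1])
             for i in range(train_feats.shape[1])]
    def binned(x):
        return tuple(int(np.searchsorted(edges[i], x[i])) for i in range(len(x)))
    train_bins = [binned(x) for x in train_feats]
    scores = []
    for q in query_feats:
        qb = binned(q)
        score, weight = 0.0, 0.0
        for i, b in enumerate(qb):             # rules this object matches
            matches = [r for tb, r in zip(train_bins, train_rel) if tb[i] == b]
            if matches:                        # vote weighted by rule support
                score += np.mean(matches) * len(matches)
                weight += len(matches)
        scores.append(score / weight if weight else 0.0)
    # Indices sorted by decreasing predicted relevance.
    return list(np.argsort(scores)[::-1])

# Toy training set: a single feature where high values indicate relevance.
train_feats = np.array([[0.1], [0.2], [0.9], [1.0]])
train_rel = [0, 0, 1, 1]
order = rank_by_rules(train_feats, train_rel, np.array([[0.95], [0.15]]))
```

Objects whose binned features co-occur with high relevance in the training data are ranked first; because rules are generated per query, only the rule space that the query objects actually touch is ever explored.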

Automatic Detection of the Damaged Leaf Area by Pests in Cultivars Through Digital Images 

Presenter: Antônio Carlos de Nazaré Júnior (UFOP)
Date: 04/07/2009

Abstract


In agriculture, there are many difficulties involved in the handling of pests in tillage. There are several types of pest attacks, one of which directly affects the leaves of the plant. The preservation of the leaves is of great importance for the primary metabolism of the plant and for maintaining fruit production. Therefore, the accurate and precise detection of the damaged leaf area is essential for determining the control action, since small damage does not require any control measures.

In the present work, we propose methods for the automatic detection and quantification of the leaf area damaged by pests. These methods work with digital images captured with a digital camera (by the farmers themselves) or with a scanner. Thus, they can overcome the difficulties presented by other methods in the literature, such as the use of the planimeter (a gridded surface used to measure the area of an arbitrary two-dimensional shape) and the recovery of concave regions present in the damaged leaves.
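As a rough sketch of what such a quantification can look like (a hypothetical simplification, not the authors' method, which must also handle concave regions and broken leaf edges), pixels that are not background are taken as the leaf, and leaf pixels where the green channel does not dominate are counted as damaged:

```python
import numpy as np

def damaged_leaf_fraction(image, bg_value=255):
    """Fraction of the leaf area counted as damaged.
    Assumes a uniform (e.g., white) scan background."""
    img = image.astype(int)
    leaf = ~np.all(img == bg_value, axis=2)   # any non-background pixel
    green = (img[..., 1] > img[..., 0]) & (img[..., 1] > img[..., 2])
    damaged = leaf & ~green                   # leaf pixels that are not green
    return damaged.sum() / leaf.sum()

# Toy image: white background, a 6x6 green leaf with a 3x3 brown lesion.
img = np.full((10, 10, 3), 255, dtype=np.uint8)
img[2:8, 2:8] = (0, 200, 0)       # healthy leaf tissue
img[3:6, 3:6] = (120, 80, 40)     # damaged (brown) area
frac = damaged_leaf_fraction(img)
```

Here the lesion covers 9 of the 36 leaf pixels, so the damaged fraction is 0.25; real images require the noise, shadow, and edge handling discussed below.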

These new methods will reduce the time required for the assessment of the damage and increase the reliability of the leaf analysis. At present, we are testing the methods on samples of soybean leaves (Glycine max (L.) Merrill) collected from the experimental fields of the Phytotechnical Department of the Federal University of Viçosa.

We are currently studying pre-processing techniques for the suppression of noise and shadows (added to the images during the digitization of the samples), as well as methods capable of detecting break points in the edges of the leaves (necessary for an accurate quantification of the damaged leaf area).

Event Detection in Surveillance Videos 

Presenter: Guillermo Cámara Chávez (UFMG)
Date: 04/07/2009

Abstract


Large quantities of video surveillance data exist in today's world. Cameras are everywhere, constantly recording daily occurrences from many angles. Our objective is to develop a framework to aid video analysts in detecting suspicious activity. We consider an activity-based representation of video content: visual events are detected and classified automatically in the scene. Although there are many ways to represent the content of video clips in current video retrieval algorithms, a semantic gap still exists between users and retrieval systems. Visual surveillance systems provide a platform for investigating semantic-based video retrieval.

NPDI

Núcleo de Processamento Digital de Imagens.
Departamento de Ciência da Computação.
Instituto de Ciências Exatas.
Universidade Federal de Minas Gerais


Address

Av. Antônio Carlos 6627 - ICEx - sl. 3055
Pampulha - CEP 31270-010
Belo Horizonte - Minas Gerais - Brasil.

Phone: (31) 3409-5854