RetrievalLab - A programming tool for content based retrieval

RetrievalLab Download

Test Images Download

RetrievalLab is a content-based retrieval tool (Windows 10, 8, 7, XP) designed for educational and research purposes. It facilitates the testing of new features, segmentation methods, and evaluation methods by presenting a Matlab-like interface that supports variables, functions, and plugins.

Contents:

  • Introduction
  • Tutorial 1 - Image retrieval
  • Tutorial 2 - Visual concept detection
  • Tutorial 3 - Adding a feature plugin
  • Acknowledgements
  • This Work Was Published In

Introduction

RetrievalLab is a tool for illuminating content-based retrieval. It can be used in research and in educational workshops to explore, compare, and demonstrate the use of features, databases, images, and evaluation methods in content-based retrieval tasks. In education, the intention is that students can learn about content-based retrieval without first spending months creating a custom system. In addition, RetrievalLab has a plugin architecture that lets users add new functionality. We have already implemented several well-known color features (e.g. the HSV histogram), texture features (e.g. LBP), salient point approaches (e.g. SIFT), and widely used machine learning classifiers such as Support Vector Machines (SVM) and neural networks.

There are four sections in the interface:

  • Variables
    All allocated variables will be shown here. Each variable has a name, a type and a short description.
  • Features
    This is a list of the features that are available to the user. They can be used in functions, or they can be dropped onto variables to calculate or load the feature for that variable.
  • Output
    All executed commands will be shown here, together with optional responses from the system.
  • Command
    The user can type the command that should be executed here.

Tutorial 1 - Image retrieval

For retrieving images, we need a database object and an image object. We start by constructing a database object, where the directory called testimages contains a small image collection (i.e. 50 images of size 640x480):

> db = loaddatabase("D:\testimages")

The list of variables will be updated to represent the new state.

After this, we can calculate features for each image in the database.

> updatefeature(db, "hsv")

Each call to UpdateFeature adds or replaces the feature given as a parameter. Images can hold a list of features, so calling UpdateFeature with another feature adds that feature to the list. Two convenience functions are also available: LoadFeature and SaveFeature. These two functions load or save feature data from a local binary storage maintained by RetrievalLab, which saves considerable time once features have been calculated.
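
For example, assuming that SaveFeature and LoadFeature take the same arguments as UpdateFeature, a feature calculated once can be stored and later loaded instead of recalculated:

> savefeature(db, "hsv")
> loadfeature(db, "hsv")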

Next, we load an image and also calculate a feature. Note that the UpdateFeature function works on databases and images.

> im = loadimage("D:\testimages\img001.jpg")
> updatefeature(im, "hsv")

The function DisplayImage can display an image and optionally extra information like tags or image segments (which will be demonstrated in the second tutorial).

> displayimage(im)

Now that we have a database and a single image, both with a feature, we can start the actual searching.

> index = searchimage(db, im)

The variable 'index' now contains an ordering of the database, based on feature distances to the query image. There are two methods of visualizing this index:

> displayindex(index)

> displayindexmap(index)

DisplayIndex displays the top 16 results from the index variable. The DisplayIndexMap function generates an image containing the top 50 results from the index, arranged around the best-matching image according to their distance to the query image.
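
The distance measure that SearchImage uses is not described here; for histogram features such as "hsv", a common choice is the L1 (city block) distance. The sketch below illustrates the idea and is not RetrievalLab's actual implementation:

#include <cmath>

// Illustrative L1 distance between two feature vectors of equal length.
// A smaller distance means a better match and an earlier position in the index.
double FeatureDistanceL1(const double* a, const double* b, int Length)
{
  double Distance = 0;

  for (int i = 0; i < Length; i ++)
    Distance += std::fabs(a[i] - b[i]);

  return Distance;
}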

Tutorial 2 - Visual concept detection

A visual concept detection task involves two steps: first, the concept is learned from a set of positive and negative examples; then, with the learned concept, new images can be analyzed and tagged with the concept.

Visual concept detection takes a different approach than image search. Visual concepts are usually not found at the global image level, but only in a part of the image. Therefore, we included image segmentation methods that define the image regions used to learn and to detect the concepts.

We start by loading the databases, segmenting the images, and adding features. Note that UpdateFeature can be called multiple times; each call adds a new feature to the list attached to each image segment, and a feature already present in the list is replaced.

> dbpos = loaddatabase("D:\testimages\trees\")
> segmentimage(dbpos, "sift")
> updatefeature(dbpos, "hsv")
> updatefeature(dbpos, "lbp")

> dbneg = loaddatabase("D:\testimages\nottrees\")
> segmentimage(dbneg, "sift")
> updatefeature(dbneg, "hsv")
> updatefeature(dbneg, "lbp")

We now have two databases in which every image has been segmented using the SIFT segmentation method. This method determines a set of SIFT interest points and extracts a small rectangle around each point from the image. For each of these segments, three features are calculated.
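
How a rectangle is cut out around an interest point is not spelled out above; the sketch below shows one way to do it. The patch size, the border clamping, and the interleaved 8-bit RGB layout are illustrative assumptions, not RetrievalLab's actual code:

#include <algorithm>
#include <vector>

// Hypothetical extraction of a PatchSize x PatchSize RGB rectangle
// centered on the interest point (x, y).
std::vector<unsigned char> ExtractPatch(const unsigned char* Data, int Width, int Height, int Stride, int x, int y, int PatchSize)
{
  std::vector<unsigned char> Patch(PatchSize * PatchSize * 3);

  for (int i = 0; i < PatchSize; i ++)
  {
    // Clamp source coordinates so patches near the image border stay valid.
    int sy = std::clamp(y - PatchSize / 2 + i, 0, Height - 1);

    for (int j = 0; j < PatchSize; j ++)
    {
      int sx = std::clamp(x - PatchSize / 2 + j, 0, Width - 1);

      const unsigned char* Source = Data + sy * Stride + sx * 3;
      unsigned char* Target = &Patch[(i * PatchSize + j) * 3];

      Target[0] = Source[0];
      Target[1] = Source[1];
      Target[2] = Source[2];
    }
  }

  return Patch;
}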

For learning a visual concept, we also need a classifier. In this example, a support vector machine is created. Other classifiers are also available: nearest neighbor classification ("nearest") and neural network ("neural").

> classifier = createclassifier("svm")

Now we can combine our two databases and the classifier to create a visual concept. The following function will train the classifier and connect it to the visual concept:

> concept = learnvisualconcept(dbpos, dbneg, classifier)

The visual concept variable is now able to use the trained support vector machine, which classifies image segments based on the three features.

If we want to detect a visual concept in an image, we follow the same steps as for the databases: loading, segmentation, and feature extraction:

> image = loadimage("D:\testimages\img002.jpg")
> segmentimage(image, "sift")
> updatefeature(image, "hsv")
> updatefeature(image, "lbp")

The last step is to detect the concept in the image. Each image segment is classified and given the label "tree" if the classifier detects the concept. The DisplayImage function recognizes that the image now has a list of image segments with associated tags; tagged segments are displayed in a clearly visible color.

> findvisualconceptlocations(image, concept, "tree")
> displayimage(image)

Tutorial 3 - Adding a feature plugin

RetrievalLab has a plugin architecture that enables user-supplied functionality to be added to the program. The current version supports feature plugins that can be provided in the form of a DLL.

When RetrievalLab starts, it looks for files named Feature*.dll and assumes that each feature DLL exports seven functions. Note that support for feature parameters will be added soon. The seven functions are:

  • const char* GetDescription()
    Returns a readable description of this feature.
  • const char* GetShortDescription()
    Returns a short description that RetrievalLab can use internally for storing calculated feature data.
  • bool NeedsGrayscale()
    Returns true if this feature should be computed over a grayscale image. RetrievalLab will call ProcessGrayscale for the actual feature calculation.
  • bool NeedsColor()
    Returns true if this feature should be computed over a color image. RetrievalLab will call ProcessColor for the actual feature calculation.
  • int GetFeatureLength()
    Returns the length of the feature.
  • void ProcessGrayscale(unsigned char* Data, int Width, int Height, int Stride, double* FeatureVector)
    This function will be called by RetrievalLab with a grayscale representation of an image or image segment. FeatureVector will be an array of doubles that has been set to the length that GetFeatureLength returned.
  • void ProcessColor(unsigned char* Data, int Width, int Height, int Stride, double* FeatureVector)
    This function will be called by RetrievalLab with an RGB representation of an image or image segment. FeatureVector will be an array of doubles that has been set to the length that GetFeatureLength returned.

A very simple feature is calculated by the following function:

void ProcessColor(unsigned char* Data, int Width, int Height, int Stride, double* FeatureVector)
{
  int Offset;
  int R, G, B;

  // Clear all 512 histogram bins (the length that GetFeatureLength returns).
  for (int i = 0; i < 512; i ++)
    FeatureVector[i] = 0;

  for (int i = 0; i < Height; i ++)
  {
    // Stride is the width of one image row in bytes.
    Offset = i * Stride;

    for (int j = 0; j < Width; j ++)
    {
      // Keep the three most significant bits of each 8-bit channel.
      R = Data[Offset] >> 5;
      G = Data[Offset + 1] >> 5;
      B = Data[Offset + 2] >> 5;

      // Combine the three 3-bit values into a histogram bin index (0..511).
      FeatureVector[(R << 6) + (G << 3) + B] ++;

      // Advance to the next interleaved RGB pixel.
      Offset += 3;
    }
  }
}

This function processes the RGB image and computes a 512-bin feature vector, using three bits per color channel.

RetrievalLab was written in C++ using Qt for the interface. When creating a DLL with Qt, four files form a basic template for a feature plugin; the template files themselves are not reproduced here.
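
As a stand-in for the template, the single-file sketch below shows what the complete exported interface could look like for the "rgb" feature. The extern "C" linkage, the __declspec(dllexport) attribute, and the assumption that GetShortDescription supplies the feature name are guesses about the export mechanism, not the actual template files:

// FeatureRGB.cpp - hypothetical single-file sketch of a feature plugin
// exporting the seven functions listed above.
#define EXPORT extern "C" __declspec(dllexport)

EXPORT const char* GetDescription()
{
  return "RGB histogram with 512 bins";
}

EXPORT const char* GetShortDescription()
{
  // Assumed to double as the feature name that RetrievalLab exposes.
  return "rgb";
}

EXPORT bool NeedsGrayscale()
{
  return false;
}

EXPORT bool NeedsColor()
{
  return true;
}

EXPORT int GetFeatureLength()
{
  return 512;
}

EXPORT void ProcessGrayscale(unsigned char* Data, int Width, int Height, int Stride, double* FeatureVector)
{
  // Unused: NeedsGrayscale returns false for this feature.
}

EXPORT void ProcessColor(unsigned char* Data, int Width, int Height, int Stride, double* FeatureVector)
{
  // The same histogram calculation as shown above.
  for (int i = 0; i < 512; i ++)
    FeatureVector[i] = 0;

  for (int i = 0; i < Height; i ++)
  {
    int Offset = i * Stride;

    for (int j = 0; j < Width; j ++)
    {
      int R = Data[Offset] >> 5;
      int G = Data[Offset + 1] >> 5;
      int B = Data[Offset + 2] >> 5;

      FeatureVector[(R << 6) + (G << 3) + B] ++;

      Offset += 3;
    }
  }
}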

When the template files are compiled into a DLL and the DLL is copied into the RetrievalLab directory, a new feature "rgb" becomes available. Image retrieval and visual concept detection can then use this feature, and its data can be stored in the internal binary storage that RetrievalLab maintains.

Acknowledgements

We are grateful to Leiden University for their support of this project.

This Work Was Published In

RetrievalLab: a programming tool for content based retrieval
Ard Oerlemans and Michael S. Lew
Proceedings of ACM International Conference on Multimedia Retrieval (ICMR)
ACM New York, NY, USA
Article No. 71, 2011
