CHAPTER 1: INTRODUCTION

1.1 Introduction to the Project
Content Based Image Retrieval (CBIR) is a challenging research area in digital image processing. The central theme of CBIR is to extract the visual content of an image automatically, such as its color, texture, or shape. The simplest way to retrieve an image from an image set is to use image search tools such as Google Images or Yahoo. The main goal is efficient search over an information set. When searching text, we can search flexibly using keywords; when searching images, we search using features of the images, and these features play the role of keywords. Color and shape image retrieval (CSIR) describes a possible solution for designing and implementing a system that can bridge the informational gap between the color and the shape of an image. Images of similar color and shape are retrieved by comparing the query against a number of images in the datasets.
"Content-based" means that the search analyzes the contents of the image rather than the metadata. Metadata refers to keywords, tags, or descriptions associated with the image. Here, the term "content" refers to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because most web-based image search engines rely purely on metadata, which produces a lot of garbage in the results. Also, having humans manually enter keywords for images in a large database is inefficient and expensive, and may not capture every keyword that describes the image. Thus a system that can filter images based on their content would provide better indexing and return more accurate results. There is growing interest in CBIR because of the limitations inherent in metadata-based systems, as well as the large range of possible uses for efficient image retrieval. Textual information about images can be easily searched using existing technology, but this requires humans to manually describe each image in the database. This is impractical for very large databases or for images that are generated automatically, e.g. those from surveillance cameras. It is also possible to miss images whose descriptions use different synonyms. Systems based on categorizing images into semantic classes, such as "cat" as a subclass of "animal", avoid this problem but still face the same scaling issues. An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital libraries. Until now, existing search engines have retrieved images using metadata such as captions, keywords, or descriptions stored alongside the images, or low-level features extracted from the images such as shape, color, and texture. A user formulating a query usually has just one topic in mind, while the results produced to satisfy this query may belong to different topics. Therefore only part of the search results is relevant to the user.

1.2 Literature Survey
Sandeep Singh et al. carried out research on "Content Based Image Retrieval using SVM, NN and KNN Classification". CBIR indexes and retrieves images based on their visual content, and thereby avoids many problems associated with traditional keyword-based retrieval; a growing interest in CBIR has thus been established in recent years. The performance of a CBIR system depends mainly on the particular image representation and similarity matching function employed. A new CBIR system is therefore proposed which provides more accurate results than previously developed systems, using soft computing techniques. The proposed retrieval system evaluates the similarity of each image in its data store to a query image in terms of various visual features and returns the images within a desired range of similarity. The work develops and implements efficient feature extraction using NN and SVM, extracting features according to the data set and automatically calculating feature weights with a neural network. Precision and recall graphs are presented in a GUI according to the contents retrieved from the datasets. Back-propagation or feed-forward algorithms are applied for neural network classification, and cross-correlation is calculated with a weakening model applied for feature matching.

M.E. ElAlami published a paper, "A new matching strategy for content based image retrieval system". This paper introduces a content based image retrieval system using an artificial neural network approach as a soft computing technique. The proposed system is composed of three major phases: feature extraction, ANN classification, and matching. In the feature extraction phase, local and global features are extracted and a Gabor filter is then applied to enhance the images so that better results can be obtained. The artificial neural network in the proposed system serves as a classifier: selected features of a query image are used as input to find, as output, the classes with the largest similarity to the query image. Finally, a feature matching strategy is used to retrieve images from the database as the result.

In this paper, an algorithm has been proposed to retrieve images from a database which match a query image. Local and global features of the images are calculated, and classification is done using an artificial neural network. A feed-forward back-propagation learning technique is used to train and test the database images. Three distance measures, Spearman, Correlation, and Relative Deviation, are used as similarity metrics. The performance of the system is reported in the form of a confusion matrix and precision and recall graphs.

Guang-Hai Liu et al., in their paper "Content-based image retrieval using color difference histogram", retrieved images based on color, texture and shape, and on several other combinations. The images are extracted automatically based on the texture information available in each image. Some images are retrieved purely based on color, while others are based on texture or shape; still others can be retrieved by including or excluding other attributes. The proposed content-based image retrieval system evaluates the similarity of each image in the database to a query image in terms of color, shape or texture features and returns the images within a desired range of similarity. From the results obtained, they conclude that it is possible to refine the search process to a reasonable extent by combining these features with proper weights to obtain the desired image.

In Ahmed Talib et al., "A weighted dominant color descriptor for content-based image retrieval", color has been extensively used in the process of image retrieval. The dominant color descriptor (DCD) proposed by MPEG-7 is a famous case in point: it compactly describes the prominent colors of an image or a region. However, this technique suffers from some shortcomings, especially with respect to object-based image retrieval. In this paper, a new semantic feature extracted from the dominant colors (a weight for each DC) is proposed. The newly proposed technique helps reduce the effect of the image background on the image matching decision, so that an object's colors receive much more focus. In addition, a modification to the DC-based similarity measure is also proposed. Experimental results demonstrate that the proposed descriptor with the modified similarity measure performs better than the existing descriptor in content-based image retrieval applications. The proposed descriptor is considered a step forward for object-based image retrieval.

1.3 Comparison with the Existing Systems
Color and shape image retrieval (CSIR) describes a possible solution for designing and implementing a project which can handle the informational gap between the color and shape of an image. Sketch-based image retrieval, one possible application area, was introduced in the QBIC and VisualSEEK systems. In these systems the user draws color sketches and blobs on the drawing area. The images are divided into grids, and the color and texture features are determined for each grid. Grids were also used in other algorithms, for example in the edge histogram descriptor method. This method captures the variations that occur when the number of grids is changed.

When working with images, the large amount of data and its management cause problems; the processing space is enormous. Our purpose is therefore to develop a content based image retrieval system which can retrieve, using sketches, from frequently used databases. The user has a drawing area where sketches can be drawn efficiently, and these sketches form the basis of the retrieval method. In some cases we can recall things more easily with the help of figures or drawings.

1.4 Proposed System
Content Based Image Retrieval (CBIR) is the process of automatically searching for relevant images based on user input. The input could be parameters, sketches, or example images. A typical CBIR process first extracts the image features and stores them efficiently. Then it compares them with the images in the database and returns the results. In the case of images, we search using features of the images, and these features are the keywords. Feature extraction and similarity measurement are very dependent on the features used. Each feature may have more than one representation; among these representations, the histogram is the most commonly used technique to describe features.

We propose a technique for content based image retrieval. The step-by-step methodology for the research process consists of pre-processing the image with a suitable technique if the image is not clear or requires further enhancement; this increases the quality of the image. The next step consists of representing the image as something more meaningful and easier to analyze. Then a feature extraction algorithm is implemented to extract suitable features according to the available data set using soft computing techniques.

1.4.1 Objectives
Content-based image retrieval (CBIR) is the application of computer vision techniques to the image retrieval problem, that is, the problem of searching for digital images in large databases. Content-based image retrieval stands in contrast to traditional concept-based approaches.

“Content-based” means that the search analyzes the contents of the image rather than the metadata such as keywords, tags, or descriptions associated with the image. The term “content” in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. CBIR is desirable because searches that rely purely on metadata are dependent on annotation quality and completeness. Having humans manually annotate images by entering keywords or metadata in a large database can be time consuming and may not capture the keywords desired to describe the image. The evaluation of the effectiveness of keyword image search is subjective and has not been well-defined. In the same regard, CBIR systems have similar challenges in defining success.

1.4.2 Methodologies
The most common method for comparing two images in content-based image retrieval is an image distance measure. An image distance measure compares the similarity of two images in various dimensions such as color, texture, and shape. For example, a distance of 0 signifies an exact match with the query with respect to the dimensions that were considered, while a value greater than 0 indicates various degrees of similarity between the images. Search results can then be sorted by their distance to the query image.

Computing distance measures based on color similarity is achieved by computing a color histogram for each image that identifies the proportion of pixels within an image holding specific values.
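As an illustrative sketch only (the report's actual implementation is in MATLAB; the function names below are assumptions, not the project's code), the histogram-and-distance idea can be expressed in Python/NumPy:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Proportion of pixels falling into each intensity bin, per RGB channel."""
    hists = [np.histogram(image[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()  # normalize so the histogram sums to 1

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms; 0 means an exact match."""
    return float(np.abs(h1 - h2).sum())

# Identical images are at distance 0; a differently colored image is not.
a = np.full((4, 4, 3), 128, dtype=np.uint8)
b = np.zeros((4, 4, 3), dtype=np.uint8)
print(histogram_distance(color_histogram(a), color_histogram(a)))  # 0.0
```

Images whose distances fall below a threshold would then be returned, matching the "desired range of similarity" described elsewhere in this report.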

Texture measures look for visual patterns in images and how they are spatially defined. Textures are represented by texels which are then placed into a number of sets, depending on how many textures are detected in the image. These sets not only define the texture, but also where in the image the texture is located.

Shape does not refer to the shape of an image but to the shape of a particular region that is being sought. Shapes are often determined by first applying segmentation or edge detection to an image. Other methods use shape filters to identify given shapes in an image. Shape descriptors may also need to be invariant to translation, rotation, and scale.
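Edge detection, the first step mentioned for recovering shape, can be sketched with a plain 3×3 Sobel operator; this Python/NumPy version is a minimal illustration under that assumption, not the project's MATLAB code:

```python
import numpy as np

def sobel_edges(gray):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

# A vertical step edge produces strong responses only near the boundary.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
edges = sobel_edges(img)
```

A shape descriptor would then be computed from the detected edge map; invariance to translation, rotation and scale would require further normalization beyond this sketch.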


2.1 Introduction
There are several kinds of requirements: functional, non-functional, user-interface, hardware and software requirements. The functional requirements cover the inputs and outputs and their processing inside the system. Non-functional requirements are the other qualities of the system, such as efficiency, speed, and capacity. User-interface requirements describe how the user interacts with the system. Hardware requirements specify the basic devices and their specifications.

[Figure: User Interface → Feature Extraction → Feature Storage → Similarity Measure]

Figure 2.1: Flow of typical CBIR system
2.2 Functional Requirements
User interface:
The input is given as images, parameters, or sketches.

Feature Extraction:
Here we formulate the query using histograms.
Feature Storage:
This involves loading of pre-extracted data from the database.
Similarity Measure:
Comparing the features of the input image and the images in the database. Similar Images are obtained as result.
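Taken together, the four functional blocks above amount to a small pipeline. The following Python sketch wires them up; the toy feature and all names are invented purely to illustrate the flow, and stand in for the project's MATLAB implementation:

```python
import numpy as np

def extract_feature(image):
    """Toy feature extraction: mean intensity per channel."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def retrieve(query, feature_db, top_k=2):
    """Similarity measure: rank stored images by Euclidean distance to the query."""
    q = extract_feature(query)
    dists = {name: float(np.linalg.norm(q - f)) for name, f in feature_db.items()}
    return sorted(dists, key=dists.get)[:top_k]

# Feature storage: pre-extracted features for a tiny database.
db_images = {"dark": np.zeros((8, 8, 3)), "light": np.full((8, 8, 3), 200.0)}
feature_db = {name: extract_feature(img) for name, img in db_images.items()}

# User interface: the query arrives as an image; similar images come back ranked.
print(retrieve(np.full((8, 8, 3), 30.0), feature_db))  # ['dark', 'light']
```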

2.3 Non-functional Requirements
Time Efficiency: Older methods of image indexing, ranging from storing an image in the database and associating it with a keyword or number, to associating it with a categorized description, have become time consuming; these are not CBIR. In CBIR, each image stored in the database has its features extracted and compared to the features of the query image. This involves two steps:
Feature Extraction: The first step in the process is extracting image features to a distinguishable extent.

Matching: The second step involves matching these features to yield a result that is visually similar.

Space Efficiency: One advantage of a signature over the original pixel values is the significant compression of image representation. However, a more important reason for using the signature is to gain an improved correlation between image representation and visual semantics.

Performance: It is crucial to provide fast, reliable and on-time responses to user enquiries in order to provide better navigation and raise interest in the system. The time required to access data should be kept very low.

Flexibility: The system must be flexible in order to allow inserting, editing, and removing elements.

Usability: A friendly, flexible interface with strong graphical capability and succinct, clear messages can raise the system's efficiency.

Reliability: A reliable application depends on its capacity to handle all the kinds of errors that may eventually occur and to inform the users how to proceed to solve problems. This gives the user more confidence.

2.4 User-interface Requirements
The user must have full prior knowledge of the system before working with it.

The user should provide a standard image as correct input, so that the system provides the exact output.

2.5 Hardware Requirements
System: Pentium IV, 2.4 GHz
Hard Disk: 160 GB
Processor: Dual core
Monitor: 15-inch VGA Color
Mouse: Logitech

2.6 Software Requirements
Developing tool: MATLAB R2015a
OS: Windows 8/8.1
Language: MATLAB
Domain: Data mining with aid of Image Processing
An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and rows. An image in the "real world" is considered to be a function of two real variables, for example a(x, y), where a is the amplitude of the image at the real coordinate position (x, y).

Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories:
Image Processing (image in → image out)
Image Analysis (image in → measurements out)
Image Understanding (image in → high-level description out)
An image may be considered to contain sub-images, sometimes referred to as regions-of-interest (ROIs), or simply regions. This reflects the fact that images frequently contain collections of objects, each of which can be the basis for a region. In an image processing system it should be possible to apply specific image processing operations to selected regions. Thus one part of an image might be processed to suppress motion blur while another part might be processed to improve color rendition. Image processing systems require that the images be available in digitized form, that is, as arrays of finite-length binary words. For digitization, the given image is sampled on a discrete grid and each sample or pixel is quantized using a finite number of bits. The digitized image is processed by a computer. To display a digital image, it is first converted into an analog signal, which is scanned onto a display.
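The sampling-and-quantization step described above can be illustrated in a few lines; this Python sketch (illustrative only) samples a continuous intensity ramp on a discrete grid and quantizes each sample with a finite number of bits:

```python
import numpy as np

def quantize(samples, bits):
    """Quantize samples in [0, 1) to 2**bits discrete levels."""
    levels = 2 ** bits
    return np.clip((samples * levels).astype(int), 0, levels - 1)

# Sample a continuous 1-D intensity ramp on an 8-point grid, then use 2 bits.
grid = np.linspace(0.0, 1.0, 8, endpoint=False)
codes = quantize(grid, bits=2)
print(codes.tolist())  # [0, 0, 1, 1, 2, 2, 3, 3]
```

With 8 bits per pixel the same scheme yields the familiar 256 gray levels.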

Closely related to image processing are computer graphics and computer vision. In computer graphics, images are manually made from physical models of objects, environments, and lighting, instead of being acquired from natural scenes, as in most animated movies. Computer vision, on the other hand, is often considered high-level image processing, in which a machine/computer/software intends to decipher the physical contents of an image or a sequence of images. In modern sciences and technologies, images also gain much broader scope due to the ever-growing importance of scientific visualization. Examples include micro-array data in genetic research, or real-time multi-asset portfolio trading in finance. Before processing, an image is converted into digital form; after converting the image into bit information, processing is performed. These processing techniques include image enhancement, image restoration, and image compression.

Image enhancement:
It refers to accentuation, or sharpening, of image features such as boundaries or contrast to make a graphic display more useful for display and analysis. This process does not increase the inherent information content of the data. It includes grey level and contrast manipulation, noise reduction, edge crisping and sharpening, filtering, interpolation and magnification, pseudo-colouring, and so on.

Image restoration:
It is concerned with filtering the observed image to minimize the effect of degradations. The effectiveness of image restoration depends on the extent and accuracy of the knowledge of the degradation process as well as on the filter design. Image restoration differs from image enhancement in that the latter is concerned more with the extraction or accentuation of image features.

Image compression:
It is concerned with minimizing the number of bits required to represent an image. Applications of compression are in broadcast TV, remote sensing via satellite, military communication via aircraft, radar, teleconferencing, facsimile transmission, educational and business documents, medical images that arise in computer tomography, magnetic resonance imaging and digital radiology, motion pictures, satellite images, weather maps, geological surveys, and so on.

Text compression: CCITT Group 3 and Group 4
Still image compression: JPEG
Video image compression: MPEG
Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. An image processing system usually treats images as two-dimensional signals and applies already-established signal processing methods to them. It is among the rapidly growing technologies today, with applications in various aspects of business, and it forms a core research area within the engineering and computer science disciplines.

Image processing basically includes the following three steps:
Importing the image with optical scanner or by digital photography.

Analyzing and manipulating the image, which includes data compression, image enhancement, and spotting patterns that are not visible to human eyes, as in satellite photographs.

Output is the last stage in which result can be altered image or report that is based on image analysis.

2.6.1 Purpose of Image processing
The purpose of image processing is divided into 5 groups. They are:
Visualization – Observe the objects that are not visible.

Image sharpening and restoration – To create a better image.

Image retrieval – Seek for the image of interest.

Measurement of pattern – Measures various objects in an image.

Image Recognition – Distinguish the objects in an image.

2.6.2 Types of image processing
The two types of methods used for image processing are analog and digital image processing. Analog or visual techniques of image processing can be used for hard copies like printouts and photographs. Image analysts use various fundamentals of interpretation while using these visual techniques. Image processing is not confined to the area that has to be studied; it also depends on the knowledge of the analyst. Association is another important tool in image processing through visual techniques, so analysts apply a combination of personal knowledge and collateral data to image processing. Digital processing techniques help in the manipulation of digital images by using computers. Raw data from the imaging sensors on a satellite platform contains deficiencies; to overcome such flaws and to recover the original information, it has to undergo various phases of processing. The three general phases that all types of data have to undergo while using digital techniques are pre-processing, enhancement and display, and information extraction.

2.6.3 MATLAB R2008a (Version 7.6) Features
Object-Oriented Programming:
Major enhancements to object-oriented programming capabilities allowing easier development and maintenance of large applications and data structures.

New classdef keyword enabling you to define properties, methods, and events in a class definition file.

New handle class with reference behavior, aiding the creation of data structures such as linked lists.

Events and listeners allowing the monitoring of object property changes and actions.

JIT/Accelerator support providing significantly improved object performance over previous releases.

Several enhancements to the development environment to support developing and using classes including improved support for objects in the variable editor and M-lint warnings specific to classes.

Development Environment:
Ability to customize and rearrange the MATLAB Desktop and Editor toolbars.
Expanded code-folding support in the Editor, providing the ability to collapse cells and language constructs (including for, if, switch, and more).
Enhanced inspection of structures and objects with the Variable Editor, previously known as the Array Editor.
File comparison tool expanded to allow comparison of directories, MAT-files and binary files.
Several enhancements to automatic M-file publishing, including support for functions and the ability to define configurations on a per-file basis.
M-Lint code checker support for Embedded MATLAB™ features.
Ability to insert custom FFTW and LAPACK libraries.
New algorithms for LDL, logm, and funm based on recent numerical methods research.
Graphics and GUI Building:
Ability to link plots to workspace variables, synchronizing displays of changing data.
Support for “brushing” (interactively selecting) data in plots for analysis and manipulation.
Brushed variables in one plot also will highlight in other plots linked to the same workspace data.
New uitable control, accessible from GUIDE, enabling the display and editing of tabular information in graphical user interfaces.
File I/O and External Interfacing:
MEX support for Microsoft® Visual Studio® 2008, OpenWATCOM 1.7, and Intel® FORTRAN 10.1 software.
mmreader multimedia reader expanded to support QuickTime video on the Apple® Macintosh® platform (previously released on Microsoft® Windows® platforms).
Performance and Large Data Set Handling:
New memory function providing memory information such as largest block available, providing diagnostics of memory problems on Windows platforms.
JIT/Accelerator support enhanced to statements executed at the MATLAB command line and in cell mode in the editor, providing improved performance in these environments.
Automatic multithreaded computation providing improved performance of supported functions on computers with multiple processors.
Significant speed improvement in multiplication of sparse matrices.
3.1 Content Based Image Retrieval System
The earliest use of the term content-based image retrieval in the literature seems to have been by Kato, to describe his experiments into automatic retrieval of images from a database by color and shape features. The term has since been widely used to describe the process of retrieving desired images from a large collection on the basis of features (such as color, texture and shape) that can be automatically extracted from the images themselves. The features used for retrieval can be either primitive or semantic, but the extraction process must be predominantly automatic. Retrieval of images by manually-assigned keywords is definitely not CBIR as the term is generally understood, even if the keywords describe image content.

CBIR differs from classical information retrieval in that image databases are essentially unstructured, since digitized images consist purely of arrays of pixel intensities, with no inherent meaning. One of the key issues with any kind of image processing is the need to extract useful information from the raw data (such as recognizing the presence of particular shapes or textures) before any kind of reasoning about the image’s contents is possible. Image databases thus differ fundamentally from text databases, where the raw material (words stored as ASCII character strings) has already been logically structured by the author. There is no equivalent of level 1 retrieval in a text database.

CBIR draws many of its methods from the field of image processing and computer vision, and is regarded by some as a subset of that field. It differs from these fields principally through its emphasis on the retrieval of images with desired characteristics from a collection of significant size. Image processing covers a much wider field, including image enhancement, compression, transmission, and interpretation. While there are grey areas (such as object recognition by feature analysis), the distinction between mainstream image analysis and CBIR is usually fairly clear-cut. An example may make this clear. Many police forces now use automatic face recognition systems. Such systems may be used in one of two ways. Firstly, the image in front of the camera may be compared with a single individual’s database record to verify his or her identity. In this case, only two images are matched, a process few observers would call CBIR. Secondly, the entire database may be searched to find the most closely matching images. This is a genuine example of CBIR.


[Fig 3.1 (block diagram): the query image passes through query formation and visual content description to produce a feature vector; images in the image database pass through visual content description into a feature database; similarity comparison between the query feature vector and the feature database, followed by indexing and retrieval, yields the retrieval results.]

Fig 3.1 Block Diagram of CBIR System

The process of retrieving desired images from a large collection on the basis of features (such as color, texture and shape) that can be automatically extracted from the images themselves. The features used for retrieval can be either primitive or semantic, but the extraction process must be predominantly automatic.

In a typical content-based image retrieval system (Fig 3.1), the visual contents of the images in the database are extracted and described by multi-dimensional feature vectors. The feature vectors of the images in the database form a feature database. To retrieve images, users provide the retrieval system with example images or sketched figures. The system then converts these examples into its internal representation of feature vectors. The similarities/distances between the feature vectors of the query example or sketch and those of the images in the database are then calculated, and retrieval is performed with the aid of an indexing scheme. The indexing scheme provides an efficient way to search the image database.

Recent retrieval systems also incorporate users' relevance feedback to modify the retrieval process in order to generate perceptually and semantically more meaningful retrieval results. In this chapter, we introduce these fundamental techniques for content-based image retrieval.

3.2 CBIR Definition
Content Based Image Retrieval is an application for retrieving the images from a huge set of image databases based on the image features such as color, texture and some other attributes. Here we take image feature as the index to that image and retrieve that particular image.

This project makes use of five methods to retrieve both Color and Gray scale images.

The methods used are as follows:
For Gray scale images-
Columnar Mean
Diagonal Mean
Histogram Analysis
For Color (RGB) images-
R, G, B components, retrieving similar images using Euclidean Distance

Here we fix the dimensions of the image to 256×256 for image analysis and feature extraction. If the input image is larger than the specified dimensions, we resize it to 256×256. For gray scale image analysis we have taken Portable Gray Map (PGM) images, and for color (RGB) image analysis we have taken JPG images.
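The gray-scale means and the RGB Euclidean distance listed above can be sketched as follows; this Python/NumPy version only illustrates the computations (the project itself is implemented in MATLAB) and assumes the image has already been resized to the fixed 256×256 dimension:

```python
import numpy as np

def columnar_mean(gray):
    """Feature vector: the mean of each column of a gray-scale image."""
    return gray.mean(axis=0)

def diagonal_mean(gray):
    """Single feature: the mean of the main diagonal."""
    return float(np.diagonal(gray).mean())

def rgb_euclidean(img_a, img_b):
    """Euclidean distance between the mean R, G, B components of two images."""
    ma = img_a.reshape(-1, 3).mean(axis=0)
    mb = img_b.reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(ma - mb))

gray = np.arange(16.0).reshape(4, 4)  # 4x4 stand-in for a 256x256 image
print(columnar_mean(gray).tolist())   # [6.0, 7.0, 8.0, 9.0]
print(diagonal_mean(gray))            # 7.5
```

A query image's features would be compared against those of every database image, and the images with the smallest distances returned.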

The above-mentioned methods are implemented in MATLAB 7 and have been run successfully.


Fig 3.2 Flow Chart for Image Retrieval system

[Fig 3.3 (use case): the user performs query image formulation on a query image; the system performs feature extraction and similarity comparison and returns the retrieval image.]

Fig 3.3 Use Case Diagram for Image Retrieval System
Title: Image Retrieval System
Actors: User, Image Retrieval System
Description:
1. Comparing the input image with the images in the database.
2. Retrieving the similar images as output.
Flow of Events:
1. Use case begins with the user giving an image as input.
2. Resizing the image into a 256×256 matrix.
3. Extracting features of the input image.
4. Comparing the input image and the images in the database.
5. Retrieving the similar images as output.

Table 3.1 Use Case Diagram Details

[Fig 3.4 (sequence diagram): the user selects one of the methods used (HSV Histogram, Color Moments, Color Autocorrelogram); the system issues the corresponding calls get hsvImage(), get colorMoments(Image) and get colorAutocorelogram(Image), each returning its features.]

Fig 3.4: Sequence Diagram for Image Retrieval System

4.1.1 Content Based Image Retrieval
Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision to the image retrieval problem, that is, the problem of searching for digital images in large databases.

"Content-based" means that the search will analyze the actual contents of the image. The term "content" in this context might refer to colors, shapes, textures, or any other information that can be derived from the image itself. Without the ability to examine image content, searches must rely on metadata such as captions or keywords. Such metadata must be generated by a human and stored alongside each image in the database.

Problems with traditional methods of image indexing [Enser, 1995] have led to a rise of interest in techniques for retrieving images on the basis of automatically-derived features such as color, texture and shape, a technology now generally referred to as Content-Based Image Retrieval (CBIR). However, the technology still lacks maturity and is not yet being used on a significant scale. In the absence of hard evidence on the effectiveness of CBIR techniques in practice, opinion is still sharply divided about their usefulness in handling real-life queries in large and diverse image collections. The concepts presently used in CBIR systems are all under research.

Let us start with the word “image”. The surrounding world is composed of images. Humans use their eyes, which contain about 1.5×10^8 sensors, to obtain images of the surrounding world in the visible portion of the electromagnetic spectrum (wavelengths between 400 and 700 nanometers). The light changes on the retina are sent to the image-processing centers of the cortex.

In image database systems, geographical maps, pictures, medical images, pictures in medical atlases, pictures obtained by cameras, microscopes, telescopes and video cameras, paintings, drawings, architectural plans, drawings of industrial parts, and space images are all considered images.

There are different models for color image representation. In the seventeenth century Sir Isaac Newton showed that a beam of sunlight passing through a glass prism emerges as a rainbow of colors; he was thus the first to understand that white light is composed of many colors. Typically, a computer screen can display 2^8 = 256 different shades of gray. For color images this gives 2^(3×8) = 16,777,216 different colors.

Clerk Maxwell showed in the late nineteenth century that every color image could be created using three images: a red, a green, and a blue image. A mix of these three images can produce every color. This model, named the RGB model, is the one primarily used in image representation. An RGB image can be presented as a triple (R, G, B), where R, G, and B usually take values in the range [0, 255]. Another color model is the YIQ model (luminance (Y), in-phase (I), quadrature (Q)); it is the basis of the color television standard. Images are represented in computers as a matrix of pixels, each covering a finite area. If we decrease the pixel dimension, the pixel brightness becomes closer to the real brightness. The same image with different pixel dimensions is shown below.
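As a small illustration of the two color models above, the sketch below converts one RGB triple to YIQ using the commonly quoted (approximate) NTSC coefficients; the function name is illustrative, not from the report.

```python
import numpy as np

# Approximate NTSC RGB -> YIQ conversion matrix. The coefficients vary
# slightly between sources; these are the commonly quoted values.
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(pixel):
    """Convert one (R, G, B) triple in [0, 255] to (Y, I, Q)."""
    return RGB_TO_YIQ @ np.asarray(pixel, dtype=float)
```

For a pure white pixel (255, 255, 255) the luminance Y is 255 and the chrominance components I and Q are 0, which matches the intent of the model: Y alone carries the grayscale image.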

4.1.3 Image Database Systems
Sets of images are collected, analyzed and stored in multimedia information systems, office systems, Geographical Information Systems (GIS), robotics systems, CAD/CAM systems, earth resources systems, medical databases, virtual reality systems, information retrieval systems, art gallery and museum catalogues, animal and plant atlases, sky star maps, meteorological maps, catalogues in shops and many other places.

There is a set of international organizations dealing with different aspects of image storage, analysis and retrieval. Some of them are: AIA (automated imaging/machine vision), AIIM (document imaging), ASPRS (remote sensing/photogrammetry), etc.

There are also many international centers storing images such as: Advanced imaging, Scientific/Industrial Imaging, Microscopy imaging, Industrial Imaging etc. There are also different international work groups working in the field of image compression, TV images, office documents, medical images, industrial images, multimedia images, graphical images, etc.

4.1.4 Logical Image Representation in Database Systems:
The logical image representation in image databases systems is based on different image data models. An image object is either an entire image or some other meaningful portion (consisting of a union of one or more disjoint regions) of an image. The logical image description includes: meta-semantic, color, texture, shape, and spatial attributes.

Color attributes can be represented as a histogram of the intensities of the pixel colors. A histogram refinement technique is also used, partitioning histogram bins based on the spatial coherence of pixels. Statistical methods have also been proposed to index an image by color correlograms: a table of color pairs in which the k-th entry for (i, j) specifies the probability of finding a pixel of color j at distance k from a pixel of color i in the image.
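As a sketch of the correlogram idea, the code below estimates a color auto-correlogram (the (i, i) diagonal of the full correlogram) on an already color-quantized image, sampling only the four axis-aligned neighbours at each distance k. This simplification, and the function name, are assumptions for illustration; Python/NumPy is used in place of the report's MATLAB.

```python
import numpy as np

def autocorrelogram(img, n_colors, distances=(1, 3, 5)):
    """For each colour c and distance k, estimate the probability that
    a pixel k steps away (along the axes) from a pixel of colour c
    also has colour c."""
    h, w = img.shape
    result = np.zeros((n_colors, len(distances)))
    for di, k in enumerate(distances):
        for c in range(n_colors):
            matches = total = 0
            ys, xs = np.nonzero(img == c)
            for y, x in zip(ys, xs):
                for dy, dx in ((k, 0), (-k, 0), (0, k), (0, -k)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += 1
                        matches += img[ny, nx] == c
            result[c, di] = matches / total if total else 0.0
    return result
```

On a uniformly coloured image the auto-correlogram is 1 everywhere for that colour, since every neighbour at every distance shares it.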

4.1.5 Classification and indexing schemes
Many picture libraries use keywords as their main form of retrieval – often using indexing schemes developed in-house, which reflect the special nature of their collections. A good example of this is the system developed by Getty Images to index their collection of contemporary stock photographs. Their thesaurus comprises just over 10 000 keywords, divided into nine semantic groups, including geography, people, activities and concepts. Index terms are assigned to the whole image, the main objects depicted, and their setting. Retrieval software has been developed to allow users to submit and refine queries at a range of levels, from the broad (e.g. “freedom”) to the specific (e.g. “a child pushing a swing”).

Probably the best-known indexing scheme in the public domain is the Art and Architecture Thesaurus (AAT), originating at Rensselaer Polytechnic Institute in the early 1980s, and now used in art libraries across the world. AAT is maintained by the Getty Information Institute and consists of nearly 120,000 terms for describing objects, textual materials, images, architecture and other cultural heritage material. There are seven facets or categories, which are further subdivided into 33 sub-facets or hierarchies. The facets, which progress from the abstract to the concrete, are: associated concepts, physical attributes, styles and periods, agents, activities, materials, and objects. Other tools from Getty include the Union List of Artist Names (ULAN) and the Getty Thesaurus of Geographic Names (TGN). Another popular source for providing subject access to visual material is the Library of Congress Thesaurus for Graphic Materials (LCTGM). Derived from the Library of Congress Subject Headings (LCSH), LCTGM is designed to assist with the indexing of historical image collections in the automated environment. Greenberg (1993) provides a useful comparison between AAT and LCTGM.

A number of indexing schemes use classification codes rather than keywords or subject descriptors to describe image content, as these can give a greater degree of language independence and show concept hierarchies more clearly. Examples of this genre include ICONCLASS from the University of Leiden (Gordon, 1990) and TELCLASS from the BBC (Evans, 1987). Like AAT, ICONCLASS was designed for the classification of works of art, and to some extent duplicates its function; an example of its use is described by Franklin (1998). TELCLASS was designed with TV and video programmes in mind, and is hence rather more general in its outlook. The Social History and Industrial Classification (SHIC), maintained by the Museum Documentation Association, is a subject classification for museum cataloguing. It is designed to make links between a wide variety of material including objects, photographs, archival material, tape recordings and information files.

A number of less widely-known schemes have been devised to classify images and drawings for specialist purposes. Examples include the Vienna classification for trademark images (World Intellectual Property Organization, 1998), used by registries worldwide to identify potentially conflicting trademark applications, and the Opitz coding system for machined parts (Opitz et al., 1969), used to identify families of similar parts which can be manufactured together.

A survey of art librarians conducted for this report suggests that, despite the existence of specialist classification schemes for images, general classification schemes, such as the Dewey Decimal Classification (DDC), Library of Congress (LC), BLISS and the Universal Decimal Classification (UDC), are still widely used in photographic, slide and video libraries. The first of these, DDC, is the most popular, which is not surprising when one considers its dominance in the UK public and academic library sectors. ICONCLASS, AAT, LCTGM and SHIC are all in use in at least one of the institutions in the survey. However, many libraries and archives use in-house schemes for the description of subject content. For example, nearly a third of all respondents have their own in-house scheme for indexing slides.

When discussing the indexing of images and videos, one needs to distinguish between systems geared to the formal description of the image and those concerned with subject indexing and retrieval. The former is comparable to the bibliographical description of a book. However, there is still no single standard in use for image description, although much effort is being expended in this area by a range of organizations such as the Museum Documentation Association, the Getty Information Institute, the Visual Resources Association, the International Federation of Library Associations/Art Libraries, and the International Committee for Documentation (CIDOC) of the International Council of Museums (ICOM).

The descriptive cataloguing of photographs presents a number of special challenges. Photographs, for example, are not self-identifying. Unlike textual works, which provide such essential cataloguing aids as title pages, abstracts and tables of contents, photographs often contain no indication of author or photographer, names of persons or places depicted, dates, or any textual information whatsoever. Cataloguing of images is more complex than that for text documents, since records should contain information about the standards used for image capture and how the data is stored, as well as descriptive information such as title and photographer (or painter, artist, etc.). In addition, copies of certain types of images may involve many layers of intellectual property rights, pertaining to the original work, its copy (e.g. a photograph), a digital image scanned from the photograph, and any subsequent digital image derived from that image.

Many published reviews of traditional indexing practices for images and video discuss the difficulties of indexing images and the problems of managing a large image collection. Unlike books, images make no attempt to tell us what they are about, and they may often be used for purposes not anticipated by their originators. Images are rich in information and can be used by researchers from a broad range of disciplines. As Besser comments:
“A set of photographs of a busy street scene a century ago might be useful to historians wanting a ‘snapshot’ of the times, to architects looking at buildings, to urban planners looking at traffic patterns or building shadows, to cultural historians looking at changes in fashion, to medical researchers looking at female smoking habits, to sociologists looking at class distinctions, or to students looking at the use of certain photographic processes or techniques.”
Svenonius (1994) discusses the question of whether it is possible to use words to express the “aboutness” of a work in a wordless medium, like art. To get around the problem of the needs of different user groups, van der Starre (1995) advocates that indexers should “stick to ‘plain and simple’ indexing, using index terms accepted by the users, and using preferably a thesaurus with many lead-ins”, thus placing the burden of further selection on the user. Shatford Layne (1994) suggests that, when indexing images, it may be necessary to determine which attributes provide useful groupings of images; which attributes provide information that is useful once the images are found; and which attributes may, or even should, be left to the searcher or researcher to identify. She also advocates further research into the ways images are sought and the reasons that they are useful, in order to improve the indexing process. Constantopoulos and Doerr (1995) also support a user-centred approach to the design of effective image retrieval systems. They urge that attention be paid to the intentions and goals of the users, since this will help define the desirable descriptive structures and retrieval mechanisms, as well as an understanding of what is ‘out of scope’ for an indexing system.

When it comes to describing the content of images, respondents in our own survey seem to include a wide range of descriptors including title, period, genre, subject headings, keywords, classification and captions (although there was some variation by format). Virtually all maintain some description of the subject content of their images. The majority of our respondents maintain manual collections of images, so it is not surprising that they also maintain manual indexes. Some 11% of respondents included their photographs and slides in the online catalogues, whilst more than half added their videos to their online catalogues. Standard text retrieval or database management systems were in use in a number of libraries (with textual descriptions only for their images). Three respondents used specific image management systems: Index+, iBase and a bespoke in-house system. Unsurprisingly, none currently use CBIR software.

4.2 Gray Scale Image Analysis
4.2.1 Columnar Mean
The columnar mean is one of the methodologies we use in the CBIRS to retrieve gray-map images. This method uses only grayscale images for image analysis. We calculate the average (empirical mean) value of each column of the image (the image is stored as a matrix using standard MATLAB matrix conventions), use those values as the index for that image, and store them in the database. When retrieving images based on an input image, we calculate the mean value of each column of the input image and compare these values with those stored in the database; if there is a match, we retrieve those images.
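A hedged sketch of the columnar-mean index in Python/NumPy (the report's code is MATLAB; the tolerance used in match is an assumption, since the text does not specify how exact the comparison must be):

```python
import numpy as np

def columnar_mean(img):
    """Index for a grayscale image: the mean of each column."""
    return np.asarray(img, dtype=float).mean(axis=0)

def match(query, index, tol=1.0):
    """True when every column mean agrees with the stored index
    within `tol`."""
    return bool(np.all(np.abs(columnar_mean(query) - index) <= tol))
```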

4.2.2 Diagonal Mean
In this approach we calculate the empirical mean of the pixels lying on the principal diagonal of the image (the image is stored as a matrix using standard MATLAB matrix conventions), use that single value as the index for the image, and store it in the database. When retrieving images based on an input image, we calculate the mean of the diagonal elements of the input image and compare this value with those stored in the database; if there is a match, those images are retrieved.

The advantage of this method is that, instead of taking the 256 column means as the index, we take only one value as the index for the image. Hence the computational time is reduced, as we need to match only one field in the database.

The disadvantage of this approach is that the accuracy of image retrieval is lower, which reduces the efficiency of the CBIRS.
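The diagonal mean reduces to a one-line computation; a Python/NumPy sketch with an illustrative name (the report's implementation is MATLAB):

```python
import numpy as np

def diagonal_mean(img):
    """Single-value index: mean of the principal-diagonal pixels."""
    return float(np.diag(np.asarray(img, dtype=float)).mean())
```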

4.2.3 Histogram Analysis
The histogram is a summary graph showing a count of the data points falling in various ranges. The effect is a rough approximation of the frequency distribution of the data. The groups of data are called classes, and in the context of a histogram they are known as bins, because one can think of them as containers that accumulate data and “fill up” at a rate equal to the frequency of that data class. An image histogram is a chart that shows the distribution of intensities in an indexed or intensity image. The image histogram function imhist creates this plot by making n equally spaced bins, each representing a range of data values. It then calculates the number of pixels within each range.
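The binning behaviour described for imhist can be approximated as follows (a Python/NumPy sketch; MATLAB's imhist uses slightly different bin-edge conventions for integer images, so this is an approximation, not a re-implementation):

```python
import numpy as np

def image_histogram(img, n_bins=256, max_val=255):
    """Count the pixels falling in n equally spaced intensity bins,
    approximating what imhist produces for an intensity image."""
    counts, _ = np.histogram(np.asarray(img).ravel(),
                             bins=n_bins, range=(0, max_val))
    return counts
```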

Fig 4.1(a): Query image

Fig 4.1(b): Histogram of the input image
4.3 Color Image Analysis
4.3.1 RGB Components
An RGB image, sometimes referred to as a true color image, is stored in MATLAB as an m-by-n-by-3 data array that defines red, green, and blue color components for each individual pixel. RGB images do not use a palette. The color of each pixel is determined by the combination of the red, green, and blue intensities stored in each color plane at the pixel’s location. Graphics file formats store RGB images as 24-bit images, where the red, green, and blue components are 8 bits each. This yields a potential of 16 million colors.

The precision with which a real-life image can be replicated has led to the commonly used term true color image.

To further illustrate the concept of the three separate color planes used in an RGB image, the code sample below creates a simple RGB image containing uninterrupted areas of red, green, and blue, and then creates one image for each of its separate color planes (red, green, and blue). It displays each color plane image separately, and also displays the original image.
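A sketch of such a code sample in Python/NumPy (the report presumably used MATLAB; the array size and the solid-color layout are illustrative):

```python
import numpy as np

# Build a small RGB image with solid red, green, and blue areas.
rgb = np.zeros((2, 6, 3))
rgb[:, 0:2, 0] = 1.0   # red plane lit in the left third
rgb[:, 2:4, 1] = 1.0   # green plane lit in the middle third
rgb[:, 4:6, 2] = 1.0   # blue plane lit in the right third

# Extract each colour plane as its own single-channel image.
red_plane, green_plane, blue_plane = (rgb[:, :, c] for c in range(3))
```

Each extracted plane is an m-by-n matrix; displaying the three planes side by side with the original shows how the solid areas map onto the separate channels.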

The following figure depicts an RGB image of class double.

Fig 3.7: RGB values of an Image

Fig 3.8: Separate RGB plane
4.3.2 Retrieving similar images using Euclidean Distance
Retrieval using global average RGB:
We use the average RGB to measure color similarity. The average RGB descriptor computes the average value of the R, G and B channels over all pixels of an image and uses the result as a descriptor of the image for comparison purposes.

The average values of R, G and B used for calculating the Euclidean distance are the same values used in the retrieval of images using RGB components for color images. The Euclidean distance is a geometrical concept which takes into consideration the coordinate values of the points between which the distance is to be found. This distance expresses how far apart two points are in terms of pixel values, which in image processing are the values of R, G and B.

Here is the distance measure for images Ia and Ib; we use the weighted Euclidean distance. The distance between two identical images will be 0, and the distance between the two most dissimilar images (black and white) will be 1 or 255, depending on whether the range of RGB values is 0 to 1 or 0 to 255.

Formula used for calculating the Euclidean Distance is as follows:
d(Ia, Ib) = [((ra − rb)^2 + (ga − gb)^2 + (ba − bb)^2) / 3]^(1/2)

In this method we calculate the distance between the query image and the candidate images stored in the database; if the distance falls within a fixed threshold, we retrieve those images as similar to the query image.

The advantage of this approach is that it is easy to implement; the disadvantage is that it consumes more computation time as the number of images in the database increases.
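The average-RGB descriptor and the weighted Euclidean distance above can be sketched as follows (a Python/NumPy sketch with illustrative names; channel values are assumed scaled to [0, 1], so the black-to-white distance comes out as 1):

```python
import numpy as np

def average_rgb(img):
    """Mean of each colour channel of an m-by-n-by-3 image."""
    return np.asarray(img, dtype=float).reshape(-1, 3).mean(axis=0)

def weighted_euclidean(img_a, img_b):
    """Weighted Euclidean distance between average colours:
    0 for identical averages, 1 for pure black vs. pure white
    when channels lie in [0, 1]."""
    diff = average_rgb(img_a) - average_rgb(img_b)
    return float(np.sqrt((diff ** 2).sum() / 3.0))
```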

The Manhattan distance function computes the distance that would be traveled to get from one data point to the other if a grid-like path is followed. The Manhattan distance between two items is the sum of the absolute differences of their corresponding components. The formula for this distance between a point X = (X1, X2, etc.) and a point Y = (Y1, Y2, etc.) is:

d = Σ (i = 1 to n) |Xi − Yi|

where n is the number of variables, and Xi and Yi are the values of the i-th variable at points X and Y respectively. It is also called the L1 distance. If u = (x1, y1) and v = (x2, y2) are two points, then the Manhattan distance between u and v is given by:

MH(u, v) = |x1 − x2| + |y1 − y2|

SYSTEM TESTING
5.1 Introduction
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

5.2 Test Objectives
All field entries must work properly.

Image retrieval should be faster than search using text.

The entry screen, messages and responses must not be delayed.

5.3 Features to be tested
Verify that the entries are of the correct format.

No unknown entries should be allowed.

5.4 Types of Tests
Unit testing
Integration Testing
Functional test
System test
Black box and White box testing
Acceptance testing
5.4.1 Unit Testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit before integration.

This is structural testing, which relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs.

5.4.2 Integration Testing
Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

5.4.3 Functional test
Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals.

Functional testing is centred on the following items:
Valid Input : identified classes of valid input that must be accepted.

Invalid Input : identified classes of invalid input that must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

System/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions or special test cases. In addition, systematic coverage pertaining to identify business process flows; data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

5.4.4 System test
System testing ensures that the integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

5.4.5 Black box and White box testing
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot “see” into it. The test provides inputs and responds to outputs without considering how the software works.

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

5.4.6 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully; no defects were encountered. The following table describes the testing mechanisms and the considerations to be satisfied.

Test Case | Expected Output | Obtained Output
1. Clicking on “Select folder” | Should display a dialog box to pick a *.bmp or *.jpg image file. | Same as expected.
2. Clicking “Open” | Selects the query image. | Same as expected.
3. Pressing the “OK” button in the dialog box | 1. Accepts the number of images to be retrieved. 2. After entering, a dialog box with a comparison of the features should appear. | Same as expected.
4. Selecting the method | 1. Accepts the method using which the image has to be retrieved. 2. After entering, a dialog box appears with the retrieved images. | Same as expected.

5.5 Outcome of the Project
5.5.1 Performance
Methods used             Run 1      Run 2
Main                     88.337 s   82.616 s
HSV Histogram            45.037 s   45.643 s
Color auto-correlogram    7.654 s    7.815 s
Color moments             0.489 s    0.517 s
L1 distance               –          1.594 s
L2 distance               0.172 s    –
Profile (total)          90 s       83 s

Fig 6.1: Selecting the Folder

Fig 6.2: Selecting query image from the folder

Fig 6.3: Giving exact number of images that is to be retrieved

Fig 6.4: Comparing the features of the input image and the images in the database

Fig 6.5: Selecting the method

Fig 6.6: Retrieved images
A method for the retrieval of images from a large database has been proposed in this project. The retrieval method is based on the extraction of features such as color, shape and texture. The folder containing the query image is selected, and the image is selected from that folder. The selected image is resized for further processing, and its histogram is generated. The correlation between the color components, i.e. the RGB components, is found, and then the color moments of these components are calculated pixel by pixel. These features are stored in the database. The user then enters the number of images to be retrieved, and the features of the query image are compared with the features of the images in the database. The distance measures used for retrieving the images are the Euclidean distance, the Manhattan distance, the city-block distance, etc., one of which is selected for retrieving the images. Finally, the similar images are displayed. To validate the performance of the retrieval method, various images have been tested.

The proposed method is very useful for real-time applications. The retrieval system presented in this project mainly reduces the computational time and at the same time increases user interaction. The results obtained are given as numbers, so the user does not need to spend more time in analysis. To achieve fast retrieval speed and make the retrieval system truly scalable to large image collections, an effective multi-dimensional indexing module is an indispensable part of the whole system. This module will be pushed forward by the computational geometry, database management and pattern recognition research communities. Future work is to increase the performance level and to extend the algorithm to more advanced and intelligent applications.

1. M. E. ElAlami, "A new matching strategy for content based image retrieval system", Applied Soft Computing, vol. 14, Elsevier, 2014.
2. Guang-Hai Liu, Jing-Yu Yang, "Content-based image retrieval using color difference histogram", Pattern Recognition, vol. 46, Elsevier, 2013.
3. Ahmed Talib, Massudi Mahmuddin, Husniza Husni, Loay E. George, "A weighted dominant color descriptor for content-based image retrieval", J. Vis. Commun. Image R., Elsevier, 2013.
4. S. Manoharan, S. Sathappan, "A comparison and analysis of soft computing techniques for content based image retrieval system", International Journal of Computer Applications (0975–8887), vol. 59, no. 13, December 2012.
5. Ying Liu, Dengsheng Zhang, Guojun Lu, Wei-Ying Ma, "A survey of content-based image retrieval with high-level semantics", Pattern Recognition, vol. 40, Elsevier, 2007.
6. K. Jalaja, Chakravarthy Bhagvati, B. L. Deekshatulu, Arun K. Pujari, "Texture Element Feature Characterizations for CBIR", IEEE, 2005.
7. Arnold W. M. Smeulders, Amarnath Gupta, "Content-Based Image Retrieval at the End of the Early Years", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, December 2000.
8. Anil K. Jain, Robert P. W. Duin, Jianchang Mao, "Statistical Pattern Recognition: A Review", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, January 2000.
9. K. Zagoris, S. Chatzichristofis, A. Arampatzis, "Bag-of-visual-words vs. global image descriptors on two-stage multimodal retrieval", 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1251–1252, 2011.
10. H. Tamura, S. Mori, T. Yamawaki, "Textural features corresponding to visual perception", IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-8, no. 6, pp. 460–472, June 1978.
11. H. B. Kekre, S. D. Thepade, T. K. Sarode, V. Suryawanshi, "Image retrieval using texture features extracted from GLCM, LBG and KPE", International Journal of Computer Theory and Engineering, vol. 2, no. 5, October 2010.