Category: Personal Projects

Image Super Resolution: Overall Design

Image enhancement systems see wide use among select communities, whether for personal, commercial, or academic purposes. In developing and operating such a system, the design must focus on five key parameters: (1) up-scaling enhancement quality, (2) efficiency, (3) system acceptance, (4) operating platform, and (5) ease of training and implementation.

Through extensive testing of up-scaling and enhancement quality, system design analysis, and operating platform evaluation, our team has improved and optimized several critical aspects of an image up-scaling and enhancement system by adapting existing algorithms and developing our own processing logic. We hereby present a low-cost, easily adaptable system that is close to being ready for commercial development and high-level applications.

Our filter design is based on a machine learning approach to image super resolution. By learning an end-to-end mapping between low-resolution and high-resolution images, a convolutional neural network can be trained. The trained network provides the intelligent filtering that the program uses, via convolution, to transform a low-resolution input into a high-resolution output. It also employs image segmentation to aid the filter in edge detection. The system completes the process in the following steps:

  • A training process that takes a large set of images, trains the model, and outputs the filters
  • Denoise the image
  • Segment the image into partitions
  • Scale each segment using bicubic upscaling by the desired scale value and run the filter
  • Merge all upscaled segments into one final output image
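
To make this flow concrete, here is a minimal, runnable C skeleton of the runtime path (steps 2 through 5; the offline training of step 1 is omitted). Every type and function name here is an illustrative placeholder of my own, and each stage is an empty stub that simply passes the image through; this is a sketch of the control flow, not our actual implementation.

/* Skeleton of the runtime pipeline described above. All stage functions
 * are placeholders that pass the image through; the real denoiser,
 * segmenter, learned filter, and merger would replace them. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { float *px; int w, h; } Image;

static Image denoise(Image in) { return in; }                       /* step 2 placeholder */
static Image *segment(Image in, int *n)                             /* step 3 placeholder */
{ *n = 1; Image *s = malloc(sizeof(Image)); s[0] = in; return s; }
static Image upscale_and_filter(Image seg, int scale)               /* step 4 placeholder: bicubic + learned filter */
{ (void)scale; return seg; }
static Image merge(Image *segs, int n) { (void)n; return segs[0]; } /* step 5 placeholder */

Image super_resolve(Image input, int scale)
{
    int n = 0;
    Image clean = denoise(input);           /* step 2: denoise            */
    Image *segs = segment(clean, &n);       /* step 3: segment            */
    for (int i = 0; i < n; ++i)             /* step 4: upscale each piece */
        segs[i] = upscale_and_filter(segs[i], scale);
    Image out = merge(segs, n);             /* step 5: merge              */
    free(segs);
    return out;
}

int main(void)
{
    Image img = { NULL, 0, 0 };             /* a real image would be loaded here */
    Image out = super_resolve(img, 2);      /* 2x upscale */
    printf("output: %dx%d\n", out.w, out.h);
    return 0;
}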

Image Super Resolution: Project Overview

As consumer products continue to improve in quality and the amount of media data consumed keeps climbing, the demand for high resolution increases as well. This makes upscaling a very important part of the equation. Whether it is remastering old footage or upscaling an input stream such as Netflix, upscaling has real-world uses that are more relevant than ever.

Per literature databases, there has been a surge of interest in image up-scaling since 1993. This followed the establishment of the World Wide Web in 1991 and the launch of the T3 network standard in 1993, which enabled commercial users to transfer data at 44.736 Mbit/s rather than the 1.544 Mbit/s of the T1 standard. With more accessible bandwidth and cheap storage options, people were no longer satisfied with low-resolution footage on the internet; in addition, they wanted to restore detail that had been lost.

Although various up-scaling and enhancement algorithms have already been developed, and some are even sold commercially, they are not perfect. Many of them must meet a performance requirement and often underperform as a result.

To address this, our application aims to generate a unique filter for each assigned image category. This is done using a convolutional neural network. With the advantages of machine learning, we can more readily handle detailed images and complex objects.

Project Metrics:

  1. Quality/Flexibility
  2. Speed of machine learning
  3. Segmentation for edge detection and feature handling
  4. Cost and efficiency
  5. Adaptability

Week 4: Efficient Graph-Based Image Segmentation (Pairwise Region Comparison)

In light of the impending project I need to complete for ECE420, I have decided to pursue this implementation of image segmentation in order to output multiple images to be upscaled individually (my partner's work) and later merged back together to form a fully upscaled version of the image. The motivation for studying and adapting Felzenszwalb and Huttenlocher's implementation of this algorithm is the simplicity of the adaptation.

Although I have begun implementing Normalized Cut in C, I have realized that the port from MATLAB to C is horrid. This is not only due to the size of the code, but also the incredible time complexity of its current iteration. The second problem is the need to re-implement MATLAB native functions (which cannot be converted into C MEX functions). Although Octave offers implementations of functions like spdiags and eigs, an easy port of functions such as imread will be close to impossible, given that MATLAB's source code is closed.

With that, I would like to begin discussion of the algorithm presented in the paper Efficient Graph-Based Image Segmentation by Pedro F. Felzenszwalb and Daniel P. Huttenlocher.

Variables Used

G = (V, E): an undirected graph with vertex set V and edge set E

S: an array of segments, composed of components that are sections of the image; implemented as a disjoint-set forest

C: a component, an individual section of the image connected to other components in the graph; internally a minimum spanning tree

Introduction

This implementation of image segmentation, like NCut, is based on pairwise region comparison. The image is stored in S, a disjoint-set forest that uses path compression and union by rank to keep each tree flatter and faster to access. Each tree stored inside this disjoint-set forest represents a single segment of the image.
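
As a concrete reference, below is a minimal C sketch of such a disjoint-set forest with path compression and union by rank. The array-based layout and the ds_* function names are my own illustrative choices, not taken from the paper or its released code.

/* Minimal disjoint-set forest: each image segment is one tree.
 * Path compression flattens trees on find(); union by rank keeps
 * them shallow when two segments are merged. Illustrative sketch. */
#include <stdlib.h>

typedef struct {
    int *parent;   /* parent[i] == i means i is a root (segment id)        */
    int *rank;     /* upper bound on tree height, used to pick the new root */
    int *size;     /* number of vertices (pixels) in the segment rooted at i */
} DisjointSet;

DisjointSet ds_create(int n)
{
    DisjointSet ds;
    ds.parent = malloc(n * sizeof(int));
    ds.rank   = calloc(n, sizeof(int));
    ds.size   = malloc(n * sizeof(int));
    for (int i = 0; i < n; ++i) {
        ds.parent[i] = i;   /* every vertex starts as its own segment */
        ds.size[i]   = 1;
    }
    return ds;
}

int ds_find(DisjointSet *ds, int x)
{
    int root = x;
    while (ds->parent[root] != root) root = ds->parent[root];
    while (ds->parent[x] != root) {   /* path compression: repoint every   */
        int next = ds->parent[x];     /* node on the path directly at root */
        ds->parent[x] = root;
        x = next;
    }
    return root;
}

/* Merge the segments containing a and b; returns the surviving root. */
int ds_union(DisjointSet *ds, int a, int b)
{
    int ra = ds_find(ds, a), rb = ds_find(ds, b);
    if (ra == rb) return ra;
    if (ds->rank[ra] < ds->rank[rb]) { int t = ra; ra = rb; rb = t; }
    ds->parent[rb] = ra;              /* attach the shallower tree under the taller one */
    ds->size[ra] += ds->size[rb];
    if (ds->rank[ra] == ds->rank[rb]) ds->rank[ra]++;
    return ra;
}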

The algorithm also takes into account a \tau factor, which allows one to prioritize larger or smaller segments based upon perceptible differences. This is huge for our usage, as tweaking the \tau factor effectively lets us tune the edge detection behaviour of the segmentation.

Condition for Segmentation

In the paper, Felzenszwalb and Huttenlocher define a predicate D that decides whether two components should or should not be joined. The main idea is to evaluate whether evidence for a boundary between a pair of components exists. To do so, Felzenszwalb and Huttenlocher compare the difference between the two components to the internal difference within each of those components.

A simple explanation of the terms mentioned above is as follows.

Internal Difference: the maximum weight edge in the minimum spanning tree of a component,

Int(C) = \max_{e \in MST(C, E)} w(e)

Difference: the minimum weight edge connecting two components; if the two components are not connected, this value is infinity,

Dif(C_1, C_2) = \min_{v_i \in C_1,\, v_j \in C_2,\, (v_i, v_j) \in E} w((v_i, v_j))

Minimum Internal Difference: the smaller of the two components' internal differences, each padded by a threshold,

MInt(C_1, C_2) = \min(Int(C_1) + \tau(C_1),\; Int(C_2) + \tau(C_2))

where \tau is defined as \tau(C) = k/|C|, k is an adjustable variable that controls the sensitivity of the segmentation, and |C| is the cardinality of the component. A larger k will bias the algorithm toward producing larger components.

The reasoning for these definitions can be explained as follows.
Internal Difference: the maximum weight edge already accepted inside a component tells us how dissimilar the component's interior is allowed to be, so it is the natural reference for deciding whether anything else is relevant enough to be connected to it.

Difference: this is the cheapest edge linking the two components, giving an idea of how strongly the two components are related to each other.

Minimum Internal Difference: the same as internal difference, except it adds a threshold that controls the "relevance" of a component and therefore whether two components should be merged.

Taking all of this into account, Felzenszwalb and Huttenlocher state the decision criterion as follows:

D(C_1, C_2) = \text{true if } Dif(C_1, C_2) > MInt(C_1, C_2), \text{ false otherwise}

The predicate above checks whether a boundary exists between the two components by testing whether the difference between them is large relative to the internal difference of at least one of the components. If it evaluates to true, a boundary exists and the components are left separate; if false, they are merged.

This works because Int(C) returns the maximum weight already accepted inside C: an edge between the two components whose weight does not exceed Int(C) plus the \tau threshold means the components are at least as similar to each other as their own interiors, so the condition produces a merge; weights above that bound are taken as evidence of a boundary.
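
To see how the threshold plays out, here is a small worked example using made-up numbers (k and all component sizes and edge weights below are purely illustrative). Suppose k = 300, |C_1| = 100, |C_2| = 25, Int(C_1) = 12, Int(C_2) = 9, and the cheapest edge connecting the two components has weight 14. Then

\tau(C_1) = 300/100 = 3,    \tau(C_2) = 300/25 = 12
MInt(C_1, C_2) = \min(12 + 3,\; 9 + 12) = \min(15, 21) = 15
Dif(C_1, C_2) = 14 \le 15

so D(C_1, C_2) is false, no boundary is detected, and the two components are merged. Had the connecting edge weighed 16 instead, D would be true and the components would stay separate.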

A note on \tau

  • A large k will produce a preference for larger components
    • In the case of checking for long and thin segments, a large k will discourage such segments.
  • A small k will produce a preference for smaller components
    • In the case of checking for long and thin segments, a small k will encourage such segments.

Algorithm

The algorithm can be described in 4 steps as follows.

  1. Construct an array of edges sorted by non-decreasing weight
  2. Start with the initial segmentation S_0, in which each vertex is its own component.
  3. Generate segmentations S_1 through S_m (where m is the number of edges), considering one edge per step and merging its two components according to the predicate D
  4. Return S=S_m
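
As a sketch of how these four steps map onto code, here is an illustrative C version of the per-edge loop. It reuses the DisjointSet type and ds_create/ds_find/ds_union helpers from the sketch in the Introduction section; the Edge layout, the threshold array, and the function names are my own choices, not the authors' released implementation.

/* Illustrative sketch of the per-edge merge loop. Assumes the DisjointSet
 * type and ds_create/ds_find/ds_union from the earlier sketch. */
#include <stdlib.h>

typedef struct { int a, b; float w; } Edge;   /* edge (v_a, v_b) with weight w */

static int edge_cmp(const void *p, const void *q)
{
    float wa = ((const Edge *)p)->w, wb = ((const Edge *)q)->w;
    return (wa > wb) - (wa < wb);
}

/* num_vertices = pixels in the image, edges/num_edges = the pixel
 * neighbourhood graph, k = the sensitivity constant in tau(C) = k/|C|. */
DisjointSet segment_graph(int num_vertices, Edge *edges, int num_edges, float k)
{
    DisjointSet ds = ds_create(num_vertices);

    /* Step 1: sort edges by non-decreasing weight. */
    qsort(edges, num_edges, sizeof(Edge), edge_cmp);

    /* threshold[root] holds Int(C) + tau(C) for the component rooted there;
     * initially Int(C) = 0 and |C| = 1, so the threshold is just k. */
    float *threshold = malloc(num_vertices * sizeof(float));
    for (int i = 0; i < num_vertices; ++i) threshold[i] = k;

    /* Steps 2-3: start with every vertex in its own component (ds_create)
     * and consider each edge in order, merging when D is false. */
    for (int i = 0; i < num_edges; ++i) {
        int ra = ds_find(&ds, edges[i].a);
        int rb = ds_find(&ds, edges[i].b);
        if (ra == rb) continue;                          /* already in the same component */
        if (edges[i].w <= threshold[ra] && edges[i].w <= threshold[rb]) {
            int r = ds_union(&ds, ra, rb);               /* no boundary evidence: merge */
            /* because edges are sorted, this edge is the new Int(C) of the merged component */
            threshold[r] = edges[i].w + k / ds.size[r];
        }
    }
    free(threshold);

    /* Step 4: the surviving roots of ds are the final segmentation S_m. */
    return ds;
}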

Next up: Analyzing the source code and adaptation

Source:

Efficient Graph-Based Image Segmentation
P. Felzenszwalb, D. Huttenlocher
International Journal of Computer Vision, Vol. 59, No. 2, September 2004