We still have several things to address for our project. Most of the issues have to do with the run time and efficiency of applying segmentation and upscaling together. The current implementation requires upscaling every segment individually and then merging the results. For larger images, especially with the CPU-based implementation, 1080p->2160p upscaling already takes 44 minutes for a single segment. With many segments, as in the city scene with its 205 segments, this works out to 205*44=9020 minutes, approximately 150 hours, which is unacceptable. If we were to incorporate segmentation again, it would have to happen inside the upscaling loop itself or somehow be built into the models. In hindsight, this problem can be disregarded entirely when a dGPU does the processing, since one segment then takes only several seconds, meaning any image can be scaled by 2x in a few minutes. However, that nullifies any notion of a low-power (no dGPU) or mobile implementation using image segmentation. Another way to achieve a much better run time would be to use a super-resolution algorithm developed by someone else, for example Milanfar’s algorithm.
The current denoising model developed for this project is somewhat peculiar: it introduces noise even into a completely black image. Although the output appears identical to the human eye, the segmentation algorithm is very sensitive to the (visually inoffensive) noise added by the denoising process. It would be good to redevelop the model to add an exception for the case of RGB values of (0,0,0) and do nothing there. This issue has led us to produce segments from the original file instead, due to the inconsistencies produced by the model.
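A minimal sketch of the exception we have in mind, assuming the denoiser iterates over the pixels of an 8-bit BGR cv::Mat (the function name and loop structure here are illustrative, not the actual model code):

```cpp
#include <opencv2/core.hpp>

// Illustrative sketch only: leave pure-black (0,0,0) pixels untouched so the
// segmentation stage still sees them as background after denoising.
void denoiseSkippingBlack(cv::Mat& img /* CV_8UC3, BGR */) {
    for (int r = 0; r < img.rows; ++r) {
        for (int c = 0; c < img.cols; ++c) {
            cv::Vec3b& px = img.at<cv::Vec3b>(r, c);
            if (px == cv::Vec3b(0, 0, 0))
                continue;                 // exception case: do nothing
            // ... apply the denoising model to px here ...
        }
    }
}
```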
The current segmentation code uses a custom RGB class. Although simpler than the OpenCV matrix representation, the code would be easier to read and to adapt if everything used OpenCV types. In addition, the segmentation, although currently dependent on the RGB class, can be implemented in the YUV format (the diff condition can be adapted to fit YUV). This would make it unnecessary to use cv::cvtColor to convert between YUV and RGB.
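For illustration, the pairwise pixel difference used as an edge weight could be computed directly on OpenCV types as sketched below (the function name and the CV_32FC3 assumption are ours, not the existing code):

```cpp
#include <opencv2/core.hpp>
#include <cmath>

// Edge weight between two pixels, computed directly on an OpenCV matrix
// (CV_32FC3) instead of a custom RGB class. The same form can be adapted to
// a YUV image converted once up front, avoiding repeated cv::cvtColor calls.
static float pixelDiff(const cv::Mat& img, int r1, int c1, int r2, int c2) {
    const cv::Vec3f& a = img.at<cv::Vec3f>(r1, c1);
    const cv::Vec3f& b = img.at<cv::Vec3f>(r2, c2);
    return std::sqrt((a[0] - b[0]) * (a[0] - b[0]) +
                     (a[1] - b[1]) * (a[1] - b[1]) +
                     (a[2] - b[2]) * (a[2] - b[2]));
}
```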
For improvements to the CNN, a better optimization algorithm could be implemented in the future (when one is developed) to further reduce the memory required for computation. Currently, running the model takes several days on a server with 128 GB of RAM, and a rig with any less RAM will overflow its memory. Although probably infeasible now, there is also the possibility of moving the core algorithm (without segmentation) to mobile devices.
Android has support for OpenCL on many devices, which opens up the possibility of using the device’s GPU to implement the algorithm on mobile devices. This would significantly reduce the runtime, as we have seen above, thanks to the parallelization capabilities of GPUs. As the chips get faster and faster, it may become possible to run the implementation in real time, especially for 480p->720p and 720p->1080p. This would make the most of the available cable bandwidth, which is very desirable for streaming services (albeit to the dismay of the user), as they could stream compressed video and upscale it on the user’s end while claiming “true HD resolution”.
Things to note: 1. We are using OpenCV version 3.0 RC1 for all the tests below. The other versions we tried (2.8 and 3.1) each failed in one aspect or another: 2.8 failed to compile on Linux, and 3.1 failed for an unknown reason when deallocating memory for matrices as a function went out of scope.
Results from the i7-5600U Processor (CPU version of the algorithm using filter2)
We will again use the same 3-color bar image as a control base case, with the same input arguments of sigma=0.1, k=30, min_size=10 and an upscale ratio of 2.0x. First, we take a look at the scaled segments. The runtime of scaling + segmentation + merge is 15.814 seconds. Here is the input image.
Here are the upscaled segments…
And now merged together…
Now we make the image more complex, using the Undertale logo for this test with parameters sigma=0.8, k=900, min_size=550, and a scale ratio of 2.0x. First, we look at the input image. The runtime was 9 minutes 41 seconds. We can already see the large difference in run time, primarily due to the differences in resolution, bit depth, and the number of segments produced (12).
Now we take a look at the upscaled segments (only a few shown).
And after a few merges…
And finally, here is the output with all the segments merged together. Even beyond the upscaling itself, we can see that the heart inside the R appears to have gained some sharpness, and in general the whole image has gained more contrast. However, due to the filter used (filter2), the output looks a bit softer than it should.
Below we look at the source image for our last example.
Given this computer’s CPU, it took a total of 44 min 12 sec to upscale one segment (the original image) to 2160p. Thus we concluded it would be infeasible to perform a full segmented upscaling of this image to 2160p. The output of the 4x upscaling is below.
Now we use a completely different setup to perform the upscaling. Given our previous knowledge of the multiplicative effect of image segmentation on runtime, all of the following tests are done without image segmentation. We first perform the same upscaling using an i7-5930K, a 6-core CPU capable of 12 threads with a base frequency of 3.5 GHz and a turbo of up to 3.7 GHz. Using the same input image as above, this CPU performed the upscaling in 38 minutes and 42 seconds. Considering the price and core-count difference compared to the ThinkPad, this is only marginally faster, which shows that CPU computation yields diminishing returns. Below is the output run with filter2.
Our third test case was an AMD FX 9590 Black, an 8-core processor with a 4.7 GHz clock speed and a turbo speed of 5.0 GHz. There are two main issues to address when using AMD products for this computation. Firstly, because OpenCV was first developed by people working at Intel, it relies on SSE (Streaming SIMD Extensions) optimizations. Although AMD later added support for SSE instructions starting with its Athlon XP line, its implementation is still not up to par with Intel’s, so computation is inherently slower than on an Intel product. Thus, even with its superior core count and clock speed, this processor is slower than a dual-core Intel i7 due to the limitations of its SSE implementation, with a runtime of 1 hour 31 minutes and 9 seconds.
Running the same image with a GPU implementation enables the use of CUDA and much more efficient parallelization. We used an Nvidia GTX 1070 (1920 CUDA cores, clock 1506 MHz, boost clock 1683 MHz), a GTX 980Ti (2816 CUDA cores, clock 1075 MHz, boost clock 1075 MHz), and 2 GTX 980Ti in SLI. This was significantly faster, on the order of hundreds of times faster than a CPU implementation.
All results have been summarized below for the city scene.
| Processor Type | Time to execute |
| --- | --- |
| AMD FX 9590 Black (filter2) | 1 hour 31 minutes 9 seconds |
| Intel i7-5600U (filter2) | 44 minutes 12 seconds |
| Intel i7-5930K (filter2) | 38 minutes 42 seconds |
| Intel E5-2690v4 (filter2) | 16 minutes 27 seconds |
| Nvidia GTX 1070 (filter2) | 5.0823 seconds |
| Nvidia GTX 980Ti (filter2) | 8.0292 seconds |
| Nvidia GTX 980Ti SLI (2) (filter2) | 1.16984 seconds |
And the output
The hardware used to run the segmentation is an i7-5600U processor (in a ThinkPad T450s, which lacks a dGPU). Using a more powerful processor would improve the runtime of the algorithm. However, as we will see, the runtime of the segmentation algorithm (CPU implementation) is insignificant compared to the runtime of the upscaling, so there is not much to be gained from a hardware point of view. Note: other than for the first test image, a sigma of 0.8 (for smoothing) seems to work best.
Here is the source image of an image to segment. It is composed of 3 colors, a black background and 2 colored bars.
These calculations were run with the input arguments k=30, min_size=10, sigma=0.1 and had a run time of .153 seconds for 3 segments.
For the next image, the calculation and output of the segments took 9.666 seconds and created 17 segments. In the interest of space, only the most significant segments are shown.
Here is the source image.
As we increase the complexity of the image, the number of segments begins to grow rapidly. Now we look at the final example of image segmentation.
Here is the source image.
The input arguments were sigma=0.8, k=750, min_size=2000, had a run time of 10.320 seconds and produced 27 segments.
Upscaling Block Diagram
Discussion on Merge:
The merge process took an extensive amount of time to get working. For some reason, after denoising and upscaling there were many anomalies within the image data. For instance, a black pixel might have nonzero RGB values, or, in YUV form, nonzero UV channels or very large Y values for zero UV. We concluded that we could not use == 0 as the threshold for deciding whether a pixel is part of a segment.
Originally, each segment was passed to the merge portion of the code as a map of RGB pixels, and we checked whether the R, G, B channels had values < 1; if they did, we would do nothing to the pixel. However, this proved impossible because, after denoising and upscaling, some of the supposedly black pixels weren’t truly black. Worst of all, the values weren’t consistent; sometimes they were (1,1,1), (3,3,3), or (5,5,5), and we were not able to discern a pattern or an upper bound on these values.
After that we decided to convert the image to a black and white format and then directly compare whether a pixel is “black” or not. The same problem occurred as above, and some segments would end up overwriting parts of other segments due to blurring of pixels around the edges.
We tried the same thing going from RGB to YUV and performing the same computation. However we were faced with the same problem again with erratic Y values.
There are two things to consider about why the above three approaches failed. Firstly, we were using OpenCV’s implementation of pixel conversion, and we are not sure how the conversion is done or whether it introduces interpolation or computation errors. Secondly, the way we accessed the Y values of the YUV-converted image was (probably) wrong; we were getting uint output instead of the float values the Y channel was supposed to contain.
To resolve these issues, after much trial and error we found that the best approach is to calculate the Y channel ourselves manually. This requires scaling the output Y value by 255.0 and doing the calculations in floating point to maintain as much accuracy as possible. Only this way were we able to properly threshold whether a pixel is black, with a much smaller margin of error thanks to the use of floating point.
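A minimal sketch of this check, assuming 8-bit BGR input and a placeholder threshold (the actual constant used in the project was tuned experimentally):

```cpp
#include <opencv2/core.hpp>

// Sketch: decide whether a pixel belongs to a segment by computing luma (Y)
// ourselves in floating point rather than trusting the converted matrix.
// BLACK_THRESHOLD is a placeholder; the real value was tuned by trial and error.
static const double BLACK_THRESHOLD = 2.0 / 255.0;

bool isSegmentPixel(const cv::Vec3b& bgr) {
    // BT.601 luma weights; note OpenCV stores pixels as B, G, R.
    double y = (0.299 * bgr[2] + 0.587 * bgr[1] + 0.114 * bgr[0]) / 255.0;
    return y > BLACK_THRESHOLD;   // anything above the threshold counts as part of the segment
}
```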
Image Segmentation Block Diagram
Discussion on Disjoint-Set Forests as Segment Storage
We use disjoint-set forests with union by rank and path compression; let us define what these terms mean. A disjoint-set data structure allows us to keep track of a collection of elements partitioned into disjoint components. In our case, each disjoint set represents an individual segment. The two notable functions it provides are find and union (named join in the implementation). These allow us to determine whether two representatives share the same root and to join two trees into a single tree, building up the segmentation.
In union by size, the determining factor for which set to join to the other is the number of elements in each set. Union by rank, by contrast, ensures that we retain minimum depth for maximum efficiency, resulting in a complexity of roughly O(log n) per operation. Rank is initially 0 when a one-element set is formed; then, as we join sets, joining two sets of the same rank increments the rank variable of the new union set.
Now what is a disjoint-set forest? Instead of implementing a disjoint-set data structure using an array or vector, in which a representative could be any element inside the set, a disjoint-set forest uses a tree structure where the representative is the root of the tree. This alters find so that, when called on a node, it follows parent pointers until it reaches the root and returns that root. This means we do not need to keep track of which set an element belongs to, as we can easily walk up a tree to determine whether two elements are in the same set. In addition, with path compression, when find is called on any node it attaches that node and its ancestors directly to the root, lowering the height of the tree and making subsequent finds much faster.
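For concreteness, a minimal disjoint-set forest with union by rank and path compression might look like the sketch below (modeled on, but not identical to, the implementation used in this project):

```cpp
#include <utility>
#include <vector>

// Minimal disjoint-set forest with union by rank and path compression.
// Each pixel starts as its own one-element set; join() merges two segments.
class DisjointSetForest {
public:
    explicit DisjointSetForest(int n) : parent(n), rank(n, 0), size(n, 1) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }

    // find: follow parents to the root, flattening the path as we go.
    int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];  // path compression (halving)
            x = parent[x];
        }
        return x;
    }

    // join: attach the lower-rank root under the higher-rank root.
    void join(int a, int b) {
        a = find(a);
        b = find(b);
        if (a == b) return;
        if (rank[a] < rank[b]) std::swap(a, b);
        parent[b] = a;
        size[a] += size[b];
        if (rank[a] == rank[b]) ++rank[a];
    }

    int componentSize(int x) { return size[find(x)]; }

private:
    std::vector<int> parent, rank, size;
};
```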
Felzenszwalb and Huttenlocher’s algorithm has complexity O(m α(m)), where α(m) is the inverse Ackermann function and m represents the number of vertices in the graph. Because there are at most three edges connected to each vertex (one for each RGB component), all operations can be done in constant time. Therefore, the only limiting factor is the rank (depth), which is bounded by O(m α(m)).
Optimization of the CNN was done using the Adam algorithm developed by Diederik P. Kingma and Jimmy Lei Ba, described in their paper Adam: A Method for Stochastic Optimization. The motivation for choosing this algorithm is its low computational complexity: as stated in the paper, it only requires first-order gradients of the objective function, which have the same computational complexity as evaluating the function itself. Other advantages are that the magnitudes of parameter updates are invariant to rescaling of the gradient, its step sizes are bounded by the step-size hyperparameter, and it naturally performs a form of step-size annealing. In any case, this means the algorithm has a degree of automation and adaptivity that many other algorithms do not offer.
We start by defining the pseudo code that describes how this functions. Adam uses the method of moments, in which unknown parameters are estimated using 0th, 1st, and higher-order moments; Adam uses only the first two. For example, in the case of a random vector, the 1st-order moment is the mean vector and the 2nd-order moment is the covariance matrix, a measure of how related two variables are.
Inputs: α (step size), β1, β2 (exponential decay rates for moment estimation, in [0, 1)), f(θ) (stochastic objective function), θ_0 (initial parameter vector). As advised by the paper, we use α=0.001, β1=0.9, β2=0.999, and ε=10^-8 (used in the final update step).
Initialization: m_0=0 (1st moment vector), v_0=0 (2nd moment vector), t=0 (Initialize timestep)
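For reference, here is the per-timestep update as given in Kingma and Ba’s paper (our transcription from the cited reference; the square of g_t denotes the elementwise square):

$$
\begin{aligned}
t &\leftarrow t + 1\\
g_t &\leftarrow \nabla_{\theta} f_t(\theta_{t-1}) && \text{gradient of the stochastic objective}\\
m_t &\leftarrow \beta_1 m_{t-1} + (1-\beta_1)\, g_t && \text{biased 1st moment estimate}\\
v_t &\leftarrow \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 && \text{biased 2nd moment estimate}\\
\hat{m}_t &\leftarrow m_t/(1-\beta_1^{\,t}), \qquad \hat{v}_t \leftarrow v_t/(1-\beta_2^{\,t}) && \text{bias correction}\\
\theta_t &\leftarrow \theta_{t-1} - \alpha\, \hat{m}_t/\big(\sqrt{\hat{v}_t}+\epsilon\big) && \text{parameter update}
\end{aligned}
$$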
In general, we are trying to minimize a noisy objective function with respect to its parameters such that its output matches our expected output (an image). Moment estimates are used to compute a converging parameter estimate using bias correction. Bias correction is required because the moments are initialized to zero; without it, the estimates would remain biased toward zero, especially during the first timesteps.
Given the gradient g of the stochastic objective f, we can estimate the moment based on all previous gradients. This is done using the following summation formula, where β represents the decay rate and the squared gradient is the elementwise square of the vector (the second moment is shown here).
The expected value of the moment can be calculated in the same manner, where ζ represents an offset term that accounts for the moving average of the second moment.
Looking at the final line of the equation, we can see how to calculate the expected moment and the scalar that multiplies the expected second-order gradient to obtain our biased second-order moment. This, in turn, is what is used for scaling to obtain the correct value of the newly defined timestep scalar below.
Kingma and Ba state that the following conditions describe a good choice of α as a step size: $|\Delta_t| \le \alpha \cdot (1-\beta_1)/\sqrt{1-\beta_2}$ in the case $(1-\beta_1) > \sqrt{1-\beta_2}$, and $|\Delta_t| \le \alpha$ otherwise. Recall that α is the step size and essentially bounds how many iterations the algorithm must go through before reaching an estimate of our parameters. Often we can choose α based on some physical limitation in the model we are building. However, although inefficient, choosing a small enough step size (and thus a large number of iterations) will still generate the same result as long as the algorithm runs long enough. In Adam’s case the step size is automatically scaled as we approach the expected parameter values in the asymptotic case, so larger step sizes remain relatively effective. We can see this in the equation for the update step, where α is scaled by the moments.
Kingma and Ba “define” a signal-to-noise ratio which describes the condition in which we have converged on the expected parameter: the SNR reaching zero. This is easy to see because a value multiplied by zero is zero.
Kingma, Diederik P., and Jimmy Lei Ba. “Adam: A Method for Stochastic Optimization.” ICLR (2015): 1-15. 23 July 2015. Web.
In general, the convolutional neural network is similar to a traditional neural network. They share the same features: objects are represented as neurons, and each neuron unit takes inputs and performs a dot product, and the procedure does not need to be linear. The advantage is that neurons have learnable biases and weights, which makes them easy to model. Layers are formed from neurons, and the input is decomposed through the model. As a result, a single differentiable score function can be obtained from the raw image pixels. At the end, a fully connected layer evaluates the loss function to produce the class scores. A simplified example is depicted in Figure 1.
Still, compared to the traditional neural network, the convolutional neural network explicitly takes images as inputs, and its architecture encodes the properties of images. Therefore, due to this simplicity, it is easier to connect neurons and fewer parameters are required to implement the network.
To avoid the overfitting that a traditional neural network can suffer from, we deploy our neurons in a 3D volume. On each layer, the size is defined by the width and height, which are directly related to the input. We call the third dimension the depth, which refers to the depth of the activation volume rather than the depth of the neural network.
Next, consider the layers from the input to the output. Say we have an input image with a resolution of A by A; the parameters for each layer then break down as follows (a toy sketch of these layer types follows the list):
- The input is decomposed into RGB channels, so the RAM usage is [AxAx3].
- The convolution layer computes the outputs of neurons connected to local regions of the input. If we decide to use B filters, the RAM allotment is [AxAxB].
- The next layer is the ReLU, which applies an elementwise activation function, max(0, x), thresholding at 0. The size of the neuron volume stays at [AxAxB].
- The next pooling layer simply performs down sampling in the spatial dimensions.
- The final fully-connected layer computes the class scores, which are defined by the user. If we categorize the image set into C categories, the output is [1x1xC]. Per the nature of the convolutional network, each neuron in the final layer is connected to all the coefficients in the previous volume.
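As a toy illustration of these layer types, here is a sketch using OpenCV primitives rather than a real deep-learning framework; the function name and kernel are ours and are not part of the trained model. It runs one conv -> ReLU -> pooling pass on a single channel (a real layer would keep B such responses, giving [AxAxB]):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Toy sketch of one conv -> ReLU -> pooling pass on a single channel.
cv::Mat convReluPool(const cv::Mat& inputA_by_A /* CV_32FC1 */,
                     const cv::Mat& kernel      /* small CV_32FC1 filter */) {
    cv::Mat conv;
    cv::filter2D(inputA_by_A, conv, CV_32F, kernel);   // convolution layer: [AxA] response
    cv::Mat relu = cv::max(conv, 0.0);                 // ReLU: elementwise max(0, x)
    cv::Mat pooled;
    cv::resize(relu, pooled, cv::Size(), 0.5, 0.5,
               cv::INTER_AREA);                        // pooling: 2x spatial downsampling
    return pooled;
}
```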
Next, consider the convolutional layer, which is defined as a set of learnable filters. It is fundamental to convolutional networks, was first applied in LeNet, and is the most computationally heavy module in the machine-learning part. Each filter is small spatially but extends through the full depth of the 3D neuron volume. Much like in the human brain, certain layers are activated only when similar features are found in the object.
For single-image super resolution, there are 4 major categories: edge-based modeling, prediction modeling, example patch modeling, and image statistical modeling. In terms of general performance, example patch modeling appears to achieve the best overall quality, and it can be divided into two distinct methods. The internal learning method was first proposed in 2009 and applied to AlexNet in 2012; it exploits self-similarity properties and generates the exemplar patches from the input image itself. The external method, which is the one used in this case, learns the mapping between low- and high-resolution image patches from external libraries. With the help of a nearest-neighbor strategy, it becomes easier for the user to control the speed and the computational complexity, which later came to be called the sparse coding formulation. There have been applications of similar approaches to image restoration; however, there is a key aspect they did not focus on, namely denoising.
When we build the model for a specific patch set, an end-to-end mapping is used with several network parameters to estimate. These parameters are obtained by minimizing the loss between the related high-resolution image and the reconstructed image. If we denote the reconstructed image as I, its high-resolution counterpart as k, the low-resolution image as i, and the total pixel count as j, we can write the loss function using the Mean Squared Error:
Here n is the number of images in the training patch set.
Then, we can achieve a high Peak Signal-to-Noise Ratio (PSNR). This is a standard metric widely used to evaluate image restoration quality.
In addition, MAX_I is the maximum possible pixel value of the image.
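For completeness, the standard forms of these two quantities are given below (this is what we take the expressions above to be; the notation follows the variables defined in the text):

$$
\mathrm{MSE} = \frac{1}{n}\sum_{p=1}^{n} \frac{1}{j}\,\big\lVert I_p - k_p \big\rVert_2^2,
\qquad
\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}_I^{\,2}}{\mathrm{MSE}}\right)
$$

where $I_p$ is the $p$-th reconstructed image, $k_p$ its high-resolution counterpart, $j$ the pixel count per image, and $n$ the number of training images.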
Yang, Jianchao, and Thomas Huang. “Image Super-Resolution: Historical Overview and Future Challenges.” (n.d.): n. pag. Web.
Yang, Chih-Yuan, Chao Ma, and Ming-Hsuan Yang. “Single-Image Super-Resolution: A Benchmark.” Computer Vision – ECCV 2014 Lecture Notes in Computer Science (2014): 372-86. Web.
Yang, Jianchao, John Wright, Thomas Huang, and Yi Ma. “Image Super-Resolution via Sparse Representation.” IEEE (n.d.): n. pag. Web
Various communities make wide use of image enhancement systems, whether for personal, commercial, or academic purposes. In developing and operating such a system, there are five key parameters on which the design must focus: (1) up-scaling enhancement quality, (2) efficiency, (3) system acceptance, (4) operating platform, (5) ease of training and implementation.
Through extensive image up-scaling and enhancement quality testing, system design analysis, and operating platform evaluation, our team has improved and optimized some critical aspects of an image up-scaling and enhancement system by adapting an existing algorithm and developing our own pipeline. We hereby present the community with a low-cost, easily adaptable system that is close to ready for commercial development and high-level applications.
Our filter design system is based on machine learning methods for image super resolution. By mapping low-resolution images to high-resolution images end to end, a convolutional neural network can be trained. This network provides the intelligent filtering that performs the convolution to transform the low-resolution input into a high-resolution output. It also employs image segmentation to aid the filter in edge detection. The system completes the process in the following steps (a structural sketch of the pipeline follows the list):
- A process that takes a large set of images to train the model for the filter and output the filters
- Denoise the image
- Segment the image into partitions
- Scale each segment using bicubic upscaling by the desired scale value and run the filter
- Merge all upscaled segments into one final output image
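A structural sketch of how these steps chain together is given below; every function name is a placeholder standing in for the corresponding project module, not the actual code:

```cpp
#include <opencv2/core.hpp>
#include <vector>

// Placeholder declarations for the project's actual modules (names are illustrative).
cv::Mat denoise(const cv::Mat& img);
std::vector<cv::Mat> segmentImage(const cv::Mat& img, float sigma, float k, int minSize);
cv::Mat upscaleSegment(const cv::Mat& segment, double scale);   // bicubic upscale + CNN filter
cv::Mat mergeSegments(const std::vector<cv::Mat>& upscaledSegments);

cv::Mat upscalePipeline(const cv::Mat& input, double scale,
                        float sigma, float k, int minSize) {
    cv::Mat clean = denoise(input);                                          // denoise the image
    std::vector<cv::Mat> segments = segmentImage(clean, sigma, k, minSize);  // partition into segments
    std::vector<cv::Mat> upscaled;
    for (const cv::Mat& seg : segments)
        upscaled.push_back(upscaleSegment(seg, scale));                      // upscale + filter each segment
    return mergeSegments(upscaled);                                          // merge into the final output
}
```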
As consumer products continue to improve in quality and media data usage continues to climb, the demand for high resolution increases as well. This makes upscaling a very important part of the equation. Whether it is remastering old footage or upscaling an input stream such as Netflix, upscaling has real-world uses that are more relevant than ever.
Per databases, there has been a surge of interest in image up-scaling since 1993. This coincides with the establishment of the World Wide Web in 1991 and the launch of the T3 network standard in 1993, which enabled commercial users to transfer data at 44.736 Mbit/s, compared with 1.544 Mbit/s over the T1 standard. With the increase in accessible bandwidth and cheap storage options, people were no longer satisfied with low-resolution footage on the internet; in addition, they wanted to restore detail that had been lost.
Although various up-scaling and enhancement algorithms have already been developed, and some are even sold commercially, they are not perfect. Many of them must meet a performance requirement and often underperform as a result.
To solve this problem, our application aims to generate a unique filter for each assigned image category. This is done using a Convolutional Neural Network. With the advantages of machine learning, we can more easily handle detailed images and complex objects.
- Speed of machine learning
- Segmentation for edge detection and feature handling
- Cost and efficiency
I am returning to the literature about image super resolution and have taken a look at Sina Farsiu, Dirk Robinson, Michael Elad, and Peyman Milanfar’s work on super resolution. On Milanfar’s page there are 4 papers on this subject, covering both still-image and video applications of their work. In the future I will look into these papers and, over time, try to port/create an implementation of this outside of the MATLAB code they used for prototyping.
Super resolution is categorized as an inverse problem: we are trying to reverse the output of the camera/capture system to recreate the original real-life/source representation, in this case the image.
Since the dawn of SR, the idea has been as follows:

$$\underline{y}(t) = M(t)\,\underline{x}(t) + \underline{n}(t)$$

where $\underline{y}(t)$ is defined as the time-varying output of the imaging system, $M(t)$ represents the imaging system (qualities such as blurring or the fidelity of the hardware), $\underline{x}(t)$ represents the real-life scene, and $\underline{n}(t)$ represents the noise that could have been introduced (for example, imperfections in the lens). Note: the underline means it is a vector.
With this in mind we need to “invert” this equation such that the answer is the new output. In order to determine the best solution we must take into account a cost function.
The cost function is used to gauge the fidelity of the output solution. The classic, and now proven to be ineffective/costly, solution uses the least squares method to determine the difference from the estimated solution to the real-life scene, a cost of the general form

$$\hat{\underline{x}} = \operatorname*{argmin}_{\underline{x}} \Big[ \lVert \underline{y} - M\underline{x} \rVert_2^2 + \lambda\,\Upsilon(\underline{x}) \Big]$$

Here $\lambda$ and $\Upsilon(\underline{x})$ represent a scaling factor and a penalty function, which are used to limit the number of solutions to this high-dimension linear algebra equation. This is done not only in the interest of computation time but also to rule out the many outlier solutions which may just introduce large amounts of noise. To speed up the calculation, matrices are used for the high-dimension calculations.
Although the idea is simple, modern implementations of these cost functions often use machine learning and neural networks to train the penalty function. Milanfar and his team’s main contribution is offering a computationally efficient and more accurate version of this cost function, which uses the L1 norm instead of the L2 norm to measure the error.
Describing M, the Model Matrix
In general, the model matrix M is described as follows (a reconstruction of its composition is given after the list).
M: Model matrix
D/A: Sampling effects of the sensor
H: Point spread function
F: Scalar to preserve the intensity of the image, and image motion
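Putting the factors above together (this is our reconstruction from the definitions listed; the exact factor ordering in Farsiu et al. may differ):

$$M(t) = D(t)\,A(t)\,H(t)\,F(t), \qquad \underline{y}(t) = M(t)\,\underline{x}(t) + \underline{n}(t)$$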
Note: The point spread function is essentially the impulse response of an imaging system. As in audio processing applications, this impulse response is the inherent change caused by the hardware itself; for audio it could be an unintentional low-pass or high-pass filtering of the signal, or added noise. Parameters of the imaging system are stored in D, A, and H.
Note 2: Motion prediction in F is hard or impossible to do accurately in real systems due to the random nature of subjects’ movement (such as live organisms). Here, as a proof of concept (due to its simplicity and the simplifications it allows in calculation), Milanfar and others used the translational model. It is important to note that one of the major simplifications this model enables is the commutative nature of the model matrix’s factors above.
Milanfar’s Work on the Cost Function
Milanfar and his collaborators developed a modified cost function. As mentioned before, they use the L1 norm to calculate error instead of the L2 norm; their work has shown that the L1 norm exhibits robustness to the data outliers which introduce noise. Here M has been replaced by the description of the model matrix above.
The penalty function is the other point of interest. The scaling factor λ remains; however, they implement the translational model here.
The summation is across l pixels in the x direction and m pixels in the y direction. The S terms serve as the shifting operators, and alpha is the scaling factor.
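Written out, the penalty described here takes the following form (our transcription of the bilateral total-variation regularizer from the cited paper; P is the maximum shift considered):

$$\Upsilon(\underline{x}) = \sum_{l=-P}^{P}\ \sum_{m=-P}^{P} \alpha^{|l|+|m|}\, \Big\lVert \underline{x} - S_x^{\,l} S_y^{\,m}\, \underline{x} \Big\rVert_1$$

where $S_x^{\,l}$ and $S_y^{\,m}$ shift the image by $l$ pixels in the $x$ direction and $m$ pixels in the $y$ direction, and $0 < \alpha \le 1$ gives distant shifts less weight.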
Calculation of the high resolution output X
The output can now be calculated in two steps.
- Produce a blurred high-resolution version of the image
- Deblur and denoise the image.
Here B represents a diagonal matrix of weights equal to the number of measurements made at a specific element. This ensures that elements that were not sampled (at all or as often) are not weighted as heavily, so that the interpolation is done from actual samples from the camera sensor.
Farsiu, S., Robinson, D., Elad, M., & Milanfar, P. (2004). Advances and challenges in super-resolution. International Journal of Imaging Systems and Technology, 14(2), 47-57. doi:10.1002/ima.20007
Things to add
- statistical arguments
In light of the impending project I need to complete for ECE420, I have decided to pursue this implementation of image segmentation in order to output multiple images to be upscaled individually (my partner’s work) and later merged back together to form a fully upscaled version of the image. The motivation for studying and adapting their implementation of this algorithm is the simplicity of the adaptation.
Although I have begun an implementation of Normalized Cut in C, I have realized the port from MATLAB to C is horrid. This is due not only to the size of the code, but also to the incredible time complexity of its current iteration. The second problem is the need to reimplement MATLAB native functions (which cannot be converted into C MEX functions). Although Octave offers implementations of things like spdiags and eigs, an easy port of functions such as imread will be close to impossible given that MATLAB has closed off all the source code.
With that, I would like to begin discussion of the algorithm presented in the paper Efficient Graph-Based Image Segmentation by Pedro F. Felzenszwalb and Daniel P. Huttenlocher.
G = undirected graph, G = (V, E)
S = array of segments, composed of components which are sections of an image; stored as a disjoint-set forest
C = component, an individual section of the image connected within the graph; represented as a minimum spanning tree
This implementation of image segmentation, like NCut, is based on pairwise region comparison. An image is stored in S, a disjoint-set forest, using path compression and union by rank in order to create a flatter, faster-to-access tree. Each tree stored inside this disjoint-set forest represents a single segment of the image.
This algorithm takes into account a factor which allows one to prioritize larger or smaller segments based on perceptible differences. This is important for our usage, as tweaking this factor effectively lets us tune edge detection in the segmentation.
Condition for Segmentation
In the paper, Felzenszwalb and Huttenlocher discuss a predicate D which is the decision factor for whether two components should or should not be joined. The main idea is to evaluate whether a boundary between a pair of components exists. In doing so, Felzenszwalb and Huttenlocher compare the difference between two components to the internal difference of those same two components.
A simple explanation of the terms mentioned above is as follows.
Internal Difference: Maximum weight edge in a component
Difference: Minimum weight edge connecting two components. If the two components are not connected this value is infinity.
Minimum internal Difference: min(internal difference of component 1 + threshold, internal difference of component 2 + threshold)
Here tau is defined as τ(C) = k/|C|, where k (which controls the sensitivity of the segmentation) is a variable to be adjusted and |C| represents the cardinality of the component. A larger k will bias the algorithm to produce larger components.
The reasoning for these definitions can be explained as follows.
Internal Difference: If the maximum weight edge of a component is connected we know that this component has relevance to be connected or segmented.
Difference: This is sort of like a “median” of the weights in a graph, giving an idea of how the weights are biased in both components.
Minimum Internal Difference: This is the same as internal difference except it allows us to account for a threshold to control the “relevance” of a component and whether two components should be merged.
Taking all of this into account, Felzenszwalb and Huttenlocher state the decision criterion as follows.
The predicate above checks whether a boundary exists between these two components by checking if the difference between the components is large relative to the internal difference of at least one of the components. If it evaluates to true, a boundary exists and the components are kept separate; if false, they are merged.
This works because, recall, Int(C) returns the maximum weight already inside C; an edge whose weight does not exceed the minimum internal difference of both components introduces no evidence of a boundary, so checking this condition produces a merge between both components.
A note on k
- Large k will produce a preference for larger components
- Note that k does not enforce a minimum component size; in the case of checking for long and thin segments, a large k will discourage such segments.
- Small k will produce a preference for smaller components
- In the case of checking for long and thin segments a small k will encourage such segments.
The algorithm can be described in 4 steps as follows (a condensed code sketch is given after the list).
- Construct an array of edges sorted by non-decreasing weight
- Start with the first segmentation S_0 with each vertex as its own component.
- Generate segmentations S_1 to S_m (where m is the number of edges) using the predicate D
- Return S=S_m
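Below is a condensed sketch of those four steps (a paraphrase of the reference implementation rather than our exact code; the Edge struct is ours, and DisjointSetForest refers to the sketch given earlier):

```cpp
#include <algorithm>
#include <vector>

struct Edge {
    int a, b;       // vertex (pixel) indices
    float weight;   // pixel difference across the edge
};

// Felzenszwalb-Huttenlocher style merge loop (simplified sketch).
// 'forest' is a disjoint-set forest as described above; k controls the
// segmentation sensitivity via the threshold tau(C) = k / |C|.
void segmentGraph(std::vector<Edge>& edges, DisjointSetForest& forest,
                  int numVertices, float k) {
    // Step 1: sort edges by non-decreasing weight.
    std::sort(edges.begin(), edges.end(),
              [](const Edge& x, const Edge& y) { return x.weight < y.weight; });

    // Step 2: each vertex starts as its own component; threshold starts at k / 1.
    std::vector<float> threshold(numVertices, k);

    // Step 3: consider each edge in order and apply the predicate D.
    for (const Edge& e : edges) {
        int a = forest.find(e.a);
        int b = forest.find(e.b);
        if (a == b) continue;
        // Merge only if the edge is no heavier than both components' minimum internal difference.
        if (e.weight <= threshold[a] && e.weight <= threshold[b]) {
            forest.join(a, b);
            int root = forest.find(a);
            threshold[root] = e.weight + k / forest.componentSize(root);
        }
    }
    // Step 4: the forest now holds S = S_m; each tree is one segment.
}
```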
Next up: Analyzing the source code and adaptation
Felzenszwalb, P., and D. Huttenlocher. “Efficient Graph-Based Image Segmentation.” International Journal of Computer Vision, Vol. 59, No. 2, September 2004.