I'm trying to solve a challenging problem in C++ that uses concepts I'm not familiar with.
I'm trying to apply a filter to a matrix. However, as I said, I'm quite new at this, and after some investigation I found this link, where it shows that applying a filter is basically a multiplication.
However, what confuses me is this: what if my filter is [0,1,0] and I have to apply it to a 5x5 matrix? How would I be able to do that?
EDIT: The second link really confused me. Right now I'm trying to work out the "application" process. If I follow the idea of creating a 3x3 matrix with [0,1,0] on the diagonal, do I apply it like in the second link, or do I have to apply it to every single cell in the matrix? Or, if it's really meant to be a 1-D filter, should I, again, apply it to every single cell, or leave out the edges and corners?
I think the thing that's being overlooked is that the multiplication is repeated for every element of the input array using subsets of the input data.
The GIMP example showed how to filter a 5x5 image using a 3x3 filter for a single pixel:
. . . . . . . . . .
. - - - . . . . . . . . .
. - @ - . x . . . -> . . @ . .
. - - - . . . . . . . . .
. . . . . . . . . .
I've labelled one input pixel with a @ and its neighbors with -. You use the smaller matrix:
- - - . . .
- @ - x . . . = 3x3 array
- - - . . .
Sum up the numbers in the resultant 3x3 array, and store that value into the new image in place of the @ pixel.
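That single-pixel step can be sketched directly in code. This is a minimal illustration, not anyone's library API: the kernel and the 3x3 neighborhood around the @ pixel are passed in as plain arrays.

```cpp
#include <cassert>
#include <cmath>

// Multiply a 3x3 kernel element-wise with the 3x3 neighborhood
// around one pixel, then sum the products. The sum is the new
// value for the centre (@) pixel.
double apply_3x3(const double kernel[3][3], const double patch[3][3]) {
    double sum = 0.0;
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            sum += kernel[r][c] * patch[r][c];  // element-wise multiply
    return sum;
}
```

For instance, a box-blur kernel whose nine entries are all 1/9 simply averages the neighborhood.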
To take this to your example, when filtering a 5x5 image using a 1x3 filter:
. . . . . . . . . .
. . . . . . . . . .
. - @ - . x . . . -> . . @ . .
. . . . . . . . . .
. . . . . . . . . .
You'll use a smaller subset of the input array to match your kernel:
- @ - x . . . = 1x3 array
Then, again, sum the numbers in the resultant array, and store that value into the new image in place of the @ pixel.
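To make the 1x3 case concrete, here is a small sketch (the function name and arguments are mine, for illustration). With the [0,1,0] filter from the question, the neighbor weights are zero, so each output pixel is just a copy of the input pixel.

```cpp
#include <cassert>
#include <vector>

// Apply a 1x3 filter at column j of one image row.
// Assumes 1 <= j <= row.size()-2 so j-1 and j+1 are in bounds.
double apply_1x3(const std::vector<double>& filter,
                 const std::vector<double>& row, std::size_t j) {
    return filter[0]*row[j-1]   // left neighbor
         + filter[1]*row[j]     // centre pixel
         + filter[2]*row[j+1];  // right neighbor
}
```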
That's a convolution kernel.
The idea is that you replace each pixel with a weighted average of it and its neighbors, where the weights are given by your convolution kernel. The process is explained nicely e.g. here.
I find it strange that you have a 1-D convolution kernel (i.e. one that would be suitable for a one-dimensional image), when for image processing 2-D convolution kernels (which also take pixels from the rows above/below) are usually used, but it could be that your algorithm needs to work only with pixels from the current row.
1x3 matrix. The kernels will do different things, so it is important to figure out what specifically you're being asked to do - sarnold 2012-04-04 22:00
It's unclear what you are looking for in an answer. If we assume that your filter is stored in a std::vector<double> called filter, and that your image is really 2-D with type std::vector< std::vector<double> > called image, then we can do the following to apply the 1-D filter [-1,0,1]:
std::vector< std::vector<double> > new_image;
std::vector<double> filter;
filter.push_back(-1.0); filter.push_back(0.0); filter.push_back(1.0);
for(std::size_t i = 0; i < image.size(); i++){
    new_image.push_back(std::vector<double>());               // start a new output row
    for(std::size_t j = 1; j + 1 < image.at(i).size(); j++){  // skip the edge columns
        new_image.at(i).push_back( filter.at(0)*image.at(i).at(j-1)
                                 + filter.at(1)*image.at(i).at(j)
                                 + filter.at(2)*image.at(i).at(j+1) );
    }
}
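If it helps to see that loop end to end, here is a self-contained version wrapped in a function; the image contents below are made up purely for illustration. With [-1,0,1], each output value is simply the right neighbor minus the left neighbor.

```cpp
#include <cassert>
#include <vector>

// Apply a 1x3 filter across every row of a 2-D image, skipping the
// first and last columns so that j-1 and j+1 stay in bounds.
std::vector< std::vector<double> > filter_rows(
        const std::vector< std::vector<double> >& image,
        const std::vector<double>& filter) {
    std::vector< std::vector<double> > out;
    for (std::size_t i = 0; i < image.size(); i++) {
        out.push_back(std::vector<double>());
        for (std::size_t j = 1; j + 1 < image.at(i).size(); j++)
            out.at(i).push_back(filter.at(0)*image.at(i).at(j-1)
                              + filter.at(1)*image.at(i).at(j)
                              + filter.at(2)*image.at(i).at(j+1));
    }
    return out;
}
```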
If you want to have a 2-dimensional filter, like this one for example:
[0 1 0]
[1 0 1]
[0 1 0]
then we assume it is stored as a vector of vectors as well, and basically do the same:
std::vector< std::vector<double> > new_image;
for(std::size_t i = 1; i + 1 < image.size(); i++){            // skip the edge rows
    new_image.push_back(std::vector<double>());               // start a new output row
    for(std::size_t j = 1; j + 1 < image.at(i).size(); j++){  // skip the edge columns
        double top_filter_term = filter.at(0).at(0)*image.at(i-1).at(j-1)
                               + filter.at(0).at(1)*image.at(i-1).at(j)
                               + filter.at(0).at(2)*image.at(i-1).at(j+1);
        double mid_filter_term = filter.at(1).at(0)*image.at(i).at(j-1)
                               + filter.at(1).at(1)*image.at(i).at(j)
                               + filter.at(1).at(2)*image.at(i).at(j+1);
        double bot_filter_term = filter.at(2).at(0)*image.at(i+1).at(j-1)
                               + filter.at(2).at(1)*image.at(i+1).at(j)
                               + filter.at(2).at(2)*image.at(i+1).at(j+1);
        new_image.back().push_back(top_filter_term + mid_filter_term + bot_filter_term);
    }
}
Please note -- I'm not making any real effort at boundary handling here: you should only apply this away from the edges of the image, or add code for whatever boundary conditions you want for your filter. I'm also not making any claims about this being optimized. For most purposes, using vectors is a good way to go because they are dynamically resizable and provide enough built-in support to do a lot of useful image manipulations. But for really large-scale processing, you'll want to optimize things like filter operations.
As for your question about filtering a 3D array, there are a couple of things to consider. One, make sure that you really do want to filter the whole array. For many image processing tasks, it is better and more efficient to split all of the color channels into their own 2D arrays, do your processing, and then put them back together. If you do want a true 3D filter, then be sure that your filter actually is 3D, that is, it will be a vector of vectors of vectors. Then you'll use the exact same logic as above, but you'll have an additional layer of terms for the parts of the filter applied to each color channel, or "slice", of the image.
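The channel-splitting idea above can be sketched as follows. The pixel layout is an assumption: here the 3-D image is stored as [row][column][channel], with channels 0, 1, 2 for R, G, B. You would extract each channel, filter it with the 2-D code above, and put the results back together.

```cpp
#include <cassert>
#include <vector>

// Assumed layout: image[row][col][channel], channels 0/1/2 = R/G/B.
typedef std::vector< std::vector< std::vector<double> > > Image3D;
typedef std::vector< std::vector<double> > Image2D;

// Pull one color channel out of a 3-D image as a plain 2-D array,
// ready to be filtered with the 2-D convolution code above.
Image2D extract_channel(const Image3D& image, std::size_t c) {
    Image2D out;
    for (std::size_t i = 0; i < image.size(); i++) {
        out.push_back(std::vector<double>());
        for (std::size_t j = 0; j < image.at(i).size(); j++)
            out.at(i).push_back(image.at(i).at(j).at(c));  // one component per pixel
    }
    return out;
}
```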
I think you are talking about a color filter. Technically a 5x5 image is actually 5x5x3 (call it A), where the 3 corresponds to the three basic color channels (RGB). Now create a 3x3 matrix T with [0,1,0] on the diagonal.
Now multiply the two matrices (A x T) to get the new 5x5x3 image matrix.
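A minimal sketch of that per-pixel multiplication, assuming each pixel's RGB values form a row vector that is multiplied by the 3x3 matrix T. Note that with T = diag(0,1,0) only the middle (green) component survives, which is worth keeping in mind when reading the comment below.

```cpp
#include <cassert>
#include <vector>

// Multiply one pixel's RGB row vector by a 3x3 matrix T:
// out[c] = sum over k of rgb[k] * T[k][c].
std::vector<double> transform_pixel(const std::vector<double>& rgb,
                                    const double T[3][3]) {
    std::vector<double> out(3, 0.0);
    for (int c = 0; c < 3; c++)
        for (int k = 0; k < 3; k++)
            out[c] += rgb[k] * T[k][c];  // row vector times matrix
    return out;
}
```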
[0, 1, 0] is going to be an identity transformation, unless the different values represent colors, as ElKamina suggests, or some other information is missing - sarnold 2012-04-04 00:36