Background: I am working on a bottom-up approach to image segmentation, where I first over-segment the image into small regions / superpixels / supervoxels and then merge adjacent regions based on some criterion. I am experimenting with a criterion that measures how similar two regions are in appearance. To describe the appearance of a region I use several measures - intensity statistics, texture features, etc. - and I concatenate all of them into one long feature vector per region.

Question: Given two adjacent regions R1 and R2, let F1 and F2 be the corresponding feature vectors. My questions are as follows:

- What are good metrics for measuring the similarity between F1 and F2?
- How should F1 and F2 be normalized before computing the metric? (A supervised approach to learning the metric is not an option, because I do not want to tune my algorithm to a particular set of images.)

Solution in my mind: similarity(R1, R2) = dot_product(F1 / norm(F1), F2 / norm(F2)). I first normalize F1 and F2 into unit vectors and then use their dot product as a similarity measure between the two regions. I wonder whether there are better ways to normalize the vectors and better metrics to compare them. I would be glad if the community could point me to some references and the reasoning for why one similarity measure works better than another.

Answer: The state of the art in image segmentation is the conditional random field (CRF) defined over superpixels (in my opinion this kind of algorithm is the best option). This type of model captures the relationship between each superpixel and its adjacent superpixels (the parameters are commonly learned with an SSVM). You generally extract a bag of features for each superpixel, such as histograms, or any other feature you think might help. Several papers describe this process, but there are not many libraries or software packages for working with CRFs.
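As a concrete reference for the normalized dot product proposed in the question, here is a minimal sketch of that cosine similarity in Python with NumPy; the feature values in the example are made up purely for illustration.

```python
import numpy as np

def cosine_similarity(f1, f2, eps=1e-12):
    """Cosine similarity between two region feature vectors.

    Both vectors are L2-normalized first, so the result lies in [-1, 1]
    (or [0, 1] when every feature is non-negative, e.g. histograms and
    intensity statistics).
    """
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps))

# Example: feature vectors of two adjacent regions R1 and R2
# (illustrative numbers, e.g. intensity statistics + texture responses)
F1 = np.array([0.20, 0.50, 0.10, 0.90])
F2 = np.array([0.25, 0.45, 0.15, 0.80])
print(cosine_similarity(F1, F2))  # close to 1.0 -> the regions look similar
```

One thing to watch out for with this measure: if the individual features live on very different scales (say, raw intensity means next to unit-range texture responses), the large-scale features dominate the dot product; standardizing each feature dimension across all regions before the L2 normalization is a common way to avoid that.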
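The CRF answer rests on two ingredients that can be sketched without any CRF library: the adjacency structure between superpixels and a bag of features (here just an intensity histogram) per superpixel. The snippet below is an illustrative NumPy-only sketch of those two steps under assumed inputs (a 2-D float image in [0, 1] and an integer label map); the function names are my own, and the actual pairwise model and SSVM learning would require a structured-prediction package and are not shown.

```python
import numpy as np

def adjacent_superpixel_pairs(labels):
    """Return the set of (label_a, label_b) pairs that share an edge
    in a 2-D superpixel label image (4-connectivity)."""
    pairs = set()
    # compare each pixel with its right neighbour, then its bottom neighbour
    for a, b in ((labels[:, :-1], labels[:, 1:]),
                 (labels[:-1, :], labels[1:, :])):
        diff = a != b
        pairs.update(zip(a[diff].tolist(), b[diff].tolist()))
    # keep each pair once, smallest label first
    return {tuple(sorted(p)) for p in pairs}

def region_histogram(image, labels, region, bins=16):
    """Normalized intensity histogram of one superpixel (a simple bag of features)."""
    values = image[labels == region]
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Toy example: a 4x4 image split into two superpixels (labels 0 and 1)
image = np.array([[0.1, 0.1, 0.8, 0.9],
                  [0.2, 0.1, 0.9, 0.8],
                  [0.1, 0.2, 0.8, 0.9],
                  [0.2, 0.1, 0.9, 0.8]])
labels = np.array([[0, 0, 1, 1]] * 4)

print(adjacent_superpixel_pairs(labels))   # {(0, 1)}
print(region_histogram(image, labels, 0))  # histogram feature for region 0
```

In a CRF over superpixels, the per-region histograms would feed the unary potentials and the adjacent pairs would define where pairwise potentials are placed; how those potentials are parameterized and learned is what the papers the answer refers to describe.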