A Deep Multiscale Fusion Method via Low-Rank Sparse Decomposition for Object Saliency Detection Based on Urban Data in Optical Remote Sensing Images
Algorithm 1: Proposed saliency detection method.
Input: Raw image I, number of scales N, and the segmentation parameters at each scale.
Output: Final saliency map SM.
for i = 1 to N do
{
if i = 1 then
(1) Using the parameters determined for this scale, segment image I into superpixels with SLIC;
(2) Determine the input region of each superpixel;
(3) Feed each input region into GoogLeNet to extract its deep feature;
(4) Stack the deep features of all superpixels into a matrix W, and compute the transformation matrix A of W via PCA to obtain the principal-component features;
(5) From the principal-component features, compute saliency values without object priors to obtain the saliency map of the first scale;
else
(6) Using the parameters determined for this scale, segment the image with the watershed algorithm;
(7) Take the saliency map of the previous scale as the object prior map, then extract and optimize the proposal object set;
(8) Determine the input region of each segment in the proposal object set;
(9) Feed each input region into GoogLeNet to extract its deep feature;
(10) Stack the deep features of all superpixels into a matrix W, and compute the transformation matrix A of W via PCA to obtain the principal-component features;
(11) From the principal-component features, compute saliency values with object priors to obtain the saliency map at this scale;
end if
}
(12) Compute the weight of the saliency map at each scale;
(13) Fuse the obtained N saliency maps with weighted cellular automata to obtain the final saliency map SM.
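The PCA step in (4) and (10) can be sketched as follows. The function name `pca_features`, the feature dimensionality, and the number of retained components are illustrative, not from the paper:

```python
import numpy as np

def pca_features(W, k):
    """Project the superpixel feature matrix W (n_superpixels x d) onto its
    top-k principal components, i.e. apply the PCA transformation matrix A."""
    Wc = W - W.mean(axis=0)            # center each feature dimension
    C = np.cov(Wc, rowvar=False)       # covariance of the centered features
    vals, vecs = np.linalg.eigh(C)     # eigh: symmetric input, ascending eigenvalues
    A = vecs[:, ::-1][:, :k]           # transformation matrix: top-k eigenvectors
    return Wc @ A                      # principal-component features

rng = np.random.default_rng(0)
W = rng.normal(size=(50, 16))          # e.g. 50 superpixels, 16-dim deep features
F = pca_features(W, k=4)
print(F.shape)                         # (50, 4)
```

Because the eigenvectors are sorted by eigenvalue, the first output column captures the most feature variance and each later column progressively less.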
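Step (5) assigns each superpixel a saliency value from its principal-component features. One simple prior-free scoring, sketched here purely as an assumption (the paper's actual measure, built on its low-rank sparse decomposition, may differ), rates a superpixel by how far its feature vector lies from the global mean:

```python
import numpy as np

def saliency_from_features(F):
    """Hypothetical prior-free scoring for step (5): distance of each
    superpixel's principal-component feature vector from the mean feature,
    normalised to [0, 1]. A sketch only, not the paper's exact formula."""
    d = np.linalg.norm(F - F.mean(axis=0), axis=1)   # feature-space distance
    return (d - d.min()) / (d.max() - d.min() + 1e-12)

# five background-like superpixels plus one feature outlier
F = np.vstack([np.zeros((5, 3)), 10 * np.ones((1, 3))])
s = saliency_from_features(F)
print(s.argmax())   # 5 — the outlier gets the highest saliency
```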
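Steps (12)–(13) fuse the N per-scale maps with weighted cellular automata. The sketch below follows the multi-layer cellular-automaton update popularised by Qin et al. (each map is nudged in logit space by the weighted votes of the other maps); the weights, thresholds, and iteration count are assumptions, and the paper's exact update rule may differ:

```python
import numpy as np

def mca_fuse(maps, weights, steps=5, gamma=0.5, lam=0.65):
    """Weighted multi-layer cellular-automaton fusion of N saliency maps.
    At each step, every other map votes per pixel (above/below threshold
    gamma), scaled by that scale's weight, and the vote shifts the current
    map's saliency in logit space. Simplified sketch, not the paper's code."""
    eps = 1e-6
    S = [np.clip(m.astype(float), eps, 1 - eps) for m in maps]
    logit = lambda p: np.log(p / (1 - p))
    step_l = logit(lam)                       # logit shift per unit vote
    for _ in range(steps):
        L = [logit(s) for s in S]
        new_S = []
        for i in range(len(S)):
            vote = sum(weights[j] * np.sign(S[j] - gamma)
                       for j in range(len(S)) if j != i)
            new_S.append(1.0 / (1.0 + np.exp(-(L[i] + step_l * vote))))
        S = new_S
    return np.mean(S, axis=0)                 # final fused saliency map

# two toy 1x2 maps that agree: left pixel salient, right pixel not
a = np.array([[0.9, 0.1]])
b = np.array([[0.8, 0.2]])
fused = mca_fuse([a, b], weights=[0.5, 0.5])
print(fused[0, 0] > fused[0, 1])              # True
```

Because agreeing maps reinforce each other's votes, consistently salient pixels are pushed toward 1 and consistent background toward 0, which is the intended effect of the fusion stage.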