An Efficient Model of Saliency Detection in Compressed Domain Video

International Journal of Computer Science (IJCS Journal), published by SK Research Group of Companies (SKRGC), Scholarly Peer-Reviewed Research Journals

Format: Volume 3, Issue 1, No 2, 2015.

Copyright: All Rights Reserved ©2015

Year of Publication: 2015

Authors: Mrs. M. Priya, Dr. K. Mahesh

Reference: IJCS-076


Abstract

Delivering video over the Internet raises several problems, and compression is central to solving them: video compression is essential for minimizing storage space and transmission cost, and it is required by many applications such as multimedia, the Internet, and remote sensing. Earlier methods detect visual and motion saliency in the compressed domain of a video; however, they do not determine which of several moving targets is nearest. To address this, depth saliency is proposed for compressed video: depth serves as an additional cue to help select the nearest of a number of moving targets in various multimedia applications. A new fusion method, parameterized normalization, sum and product (PNSP), is designed to combine the individual saliency maps into a final saliency map for each video frame. The proposed method achieves a high compression ratio and high video quality, and the proposed model predicts the salient regions of video frames efficiently.
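The abstract does not give the exact PNSP formula, but the idea of combining per-cue saliency maps through normalization, a sum term, and a product term can be sketched as follows. This is a minimal illustration, assuming a simple parameterized blend `alpha * sum + beta * product` of the normalized visual, motion, and depth maps; the function name `pnsp_fuse` and the weights are hypothetical, not taken from the paper.

```python
import numpy as np

def normalize(s):
    # Scale a saliency map to [0, 1]; a constant map becomes all zeros.
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def pnsp_fuse(visual, motion, depth, alpha=0.5, beta=0.5):
    """Fuse three saliency maps with a parameterized mix of their
    normalized sum and product (hypothetical PNSP-style form)."""
    maps = [normalize(m.astype(np.float64)) for m in (visual, motion, depth)]
    s_sum = sum(maps) / len(maps)     # sum term: any cue contributes
    s_prod = np.prod(maps, axis=0)    # product term: rewards cue agreement
    return normalize(alpha * s_sum + beta * s_prod)

# Toy example: 4x4 maps where all three cues agree on a top-left patch.
v = np.zeros((4, 4)); v[:2, :2] = 1.0
m = np.zeros((4, 4)); m[:2, :2] = 0.8
d = np.zeros((4, 4)); d[:2, :2] = 0.6
final = pnsp_fuse(v, m, d)
print(final[0, 0], final[3, 3])  # salient pixel -> 1.0, background -> 0.0
```

The product term suppresses regions where only one cue fires, while the sum term preserves regions supported by any single cue; the weights trade off these two behaviors per application.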



Keywords

video saliency detection, regions of interest, video frames, video compression.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.   
