Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/652
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Subudhi, B N
dc.contributor.author: Nanda, P K
dc.date.accessioned: 2008-03-23T05:34:52Z
dc.date.available: 2008-03-23T05:34:52Z
dc.date.issued: 2008
dc.identifier.citation: Proceedings of SPIT-IEEE Colloquium and International Conference, 4-5 February 2008, Sardar Patel Institute of Technology, Mumbai, India, Vol. 1, pp. 97-102 (en)
dc.identifier.uri: http://hdl.handle.net/2080/652
dc.description: Copyright for the paper belongs to the Proceedings Publisher (en)
dc.description.abstract: We present a novel approach to video segmentation using the proposed compound Markov Random Field (MRF) video model. The segmentation scheme follows a spatio-temporal approach in which one MRF models the spatial image and two further MRFs model the temporal direction. An edge feature in the temporal direction is introduced to preserve edges in the segmented images. Segmentation is formulated as a pixel labeling problem, and the pixel labels are estimated using the Maximum a Posteriori (MAP) criterion. The MAP estimates are obtained with the proposed hybrid algorithm. The performance of the proposed method is better than that of the JSEG method in terms of the percentage of misclassification. Different examples are presented to validate the proposed approach. (en) (An illustrative sketch of MAP pixel labeling follows this metadata record.)
dc.format.extent: 406009 bytes
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: SPIT-IEEE (en)
dc.subject: Covariance matrices (en)
dc.subject: Feature extraction (en)
dc.subject: Gaussian distribution (en)
dc.subject: Gaussian process (en)
dc.subject: Image edge analysis (en)
dc.subject: Image segmentation (en)
dc.subject: Pattern recognition (en)
dc.subject: Simulated Annealing (en)
dc.title: Compound Markov Random Field Model Based Video Segmentation (en)
dc.type: Article (en)
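
The abstract above describes MAP estimation of pixel labels under a compound spatio-temporal MRF model, obtained with a hybrid algorithm. As an illustration only, the following Python sketch performs approximate MAP pixel labeling with a simple Gaussian likelihood, a Potts-style spatial prior, a single temporal agreement term, and simulated annealing (one of the subject keywords). The energy terms, the parameter values, and the helper names energy_at and map_segment are assumptions made for this sketch; they do not reproduce the paper's compound MRF model or its hybrid algorithm.

# Minimal sketch (not the paper's exact model): MAP pixel labeling for video
# segmentation with a simple compound MRF energy, optimized by simulated
# annealing. All energy terms and parameter values are illustrative assumptions.
import numpy as np

def energy_at(labels, frame, prev_labels, x, y, lab, means, beta_s, beta_t):
    """Local energy of assigning label `lab` to pixel (x, y)."""
    h, w = labels.shape
    # Likelihood term: squared distance of the pixel intensity from the
    # class mean (an i.i.d. Gaussian observation model is assumed).
    e = (frame[x, y] - means[lab]) ** 2
    # Spatial prior: Potts penalty for each disagreeing 4-neighbour.
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < h and 0 <= ny < w and labels[nx, ny] != lab:
            e += beta_s
    # Temporal prior: penalty for disagreeing with the previous frame's label.
    if prev_labels is not None and prev_labels[x, y] != lab:
        e += beta_t
    return e

def map_segment(frame, prev_labels, n_labels=3, beta_s=1.0, beta_t=0.5,
                t0=2.0, cooling=0.95, sweeps=50, rng=None):
    """Approximate MAP label field for one frame via simulated annealing."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = frame.shape
    labels = rng.integers(0, n_labels, size=(h, w))
    # Class means estimated crudely from intensity quantiles (assumption).
    means = np.quantile(frame, np.linspace(0.1, 0.9, n_labels))
    temp = t0
    for _ in range(sweeps):
        for x in range(h):
            for y in range(w):
                old = labels[x, y]
                new = rng.integers(0, n_labels)
                d = (energy_at(labels, frame, prev_labels, x, y, new, means, beta_s, beta_t)
                     - energy_at(labels, frame, prev_labels, x, y, old, means, beta_s, beta_t))
                # Metropolis rule: accept energy decreases outright, and
                # accept increases with probability exp(-d / temp).
                if d < 0 or rng.random() < np.exp(-d / temp):
                    labels[x, y] = new
        temp *= cooling  # geometric cooling schedule
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame1 = rng.normal(0.0, 0.1, (32, 32))
    frame1[8:24, 8:24] += 1.0            # synthetic bright square
    seg1 = map_segment(frame1, None, rng=rng)
    frame2 = rng.normal(0.0, 0.1, (32, 32))
    frame2[10:26, 10:26] += 1.0          # the square has moved
    seg2 = map_segment(frame2, seg1, rng=rng)  # temporal term uses seg1
    print(seg1.shape, seg2.shape)

In this sketch the temporal term simply rewards agreement with the previous frame's labels; the paper instead uses two MRFs in the temporal direction together with a temporal edge feature to preserve edges, which this toy energy does not capture.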
Appears in Collections: Conference Papers

Files in This Item:
File          Size        Format
spit-82.pdf   396.49 kB   Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.