
Please use this identifier to cite or link to this item: http://hdl.handle.net/2080/1122

Full metadata record

contributor.author: Khaitan, P
contributor.author: Korra, S B
contributor.author: Jena, S K
contributor.author: Majhi, B
date.accessioned: 2010-01-08T03:01:30Z
date.available: 2010-01-08T03:01:30Z
date.issued: 2009-12
identifier.citation: Proceedings of the 2nd International Conference on Computer Science and its Applications (CSA 2009), Dec 10-12, 2009, South Korea, pp. 59-64
identifier.uri: http://www.stanford.edu/~pranavkh/Privacy%20Preservation.pdf
identifier.uri: http://hdl.handle.net/2080/1122
description: Copyright belongs to the proceedings publisher, IEEE
description.abstract: There are two major aspects of data privacy: identity disclosure and attribute disclosure. A number of algorithms have been developed to protect against identity disclosure, but many of them are not very successful at preventing attribute disclosure. Moreover, the standard implementation techniques for the commonly used privacy models are NP-hard; they are time-consuming and often unnecessary. This paper investigates the existing algorithms for preserving privacy by preventing identity disclosure, and proposes improved approximation algorithms that address both identity disclosure and attribute disclosure. The existing approximation algorithm for k-anonymity is extended with simulated annealing to reduce attribute disclosure, and the concepts of l-diversity and t-closeness are used to minimize attribute disclosure in polynomial time. An improved algorithm for calculating the value of t-closeness is also proposed. A new parameter is introduced to anonymize the dataset with respect to both privacy and utility, and an existing algorithm is modified to take utility into account when determining the best full-domain generalization scheme. Finally, an approximation algorithm is proposed that preserves privacy while taking utility into consideration; it is an example of partial-domain generalization, in which different tuples can be anonymized to different extents.
format.extent: 922039 bytes
format.mimetype: application/pdf
language.iso: en
publisher: IEEE
relation.ispartofseries: IEEE Proceedings; Vol. I
subject: privacy
subject: approximation algorithm
subject: anonymity
subject: diversity
subject: closeness
subject: utility
title: Approximation algorithms for optimizing privacy and utility
type: Article
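
The abstract refers to k-anonymity, l-diversity and t-closeness without defining them. As background only, the short Python sketch below shows how the three measures can be checked on a toy table; the dataset, column names and helper functions are invented for illustration, and the sketch does not reproduce the approximation algorithms proposed in the paper. For a categorical sensitive attribute with equal ground distance, the Earth Mover's Distance used by t-closeness reduces to half the L1 distance between distributions, which is what the sketch computes.

    # Illustrative sketch only: toy checks for k-anonymity, l-diversity and
    # t-closeness. The table, columns and helpers are invented for this example
    # and are not the algorithms proposed in the paper.
    from collections import Counter

    # Hypothetical released table: quasi-identifiers (age band, ZIP prefix)
    # plus one sensitive attribute (disease).
    rows = [
        {"age": "2*", "zip": "479**", "disease": "flu"},
        {"age": "2*", "zip": "479**", "disease": "flu"},
        {"age": "2*", "zip": "479**", "disease": "cancer"},
        {"age": "3*", "zip": "479**", "disease": "flu"},
        {"age": "3*", "zip": "479**", "disease": "hepatitis"},
        {"age": "3*", "zip": "479**", "disease": "cancer"},
    ]
    QI = ("age", "zip")
    SENSITIVE = "disease"

    def equivalence_classes(table):
        """Group rows that share the same quasi-identifier values."""
        classes = {}
        for r in table:
            classes.setdefault(tuple(r[q] for q in QI), []).append(r)
        return classes

    def k_anonymity(table):
        """Largest k such that every equivalence class has at least k rows."""
        return min(len(c) for c in equivalence_classes(table).values())

    def l_diversity(table):
        """Largest l such that every class holds at least l distinct sensitive values."""
        return min(len({r[SENSITIVE] for r in c})
                   for c in equivalence_classes(table).values())

    def t_closeness(table):
        """Worst-case distance between a class's sensitive-value distribution and
        the overall distribution; with equal ground distance the Earth Mover's
        Distance equals half the L1 distance."""
        overall = Counter(r[SENSITIVE] for r in table)
        n = len(table)
        worst = 0.0
        for c in equivalence_classes(table).values():
            local = Counter(r[SENSITIVE] for r in c)
            dist = 0.5 * sum(abs(local[v] / len(c) - overall[v] / n)
                             for v in overall)
            worst = max(worst, dist)
        return worst

    print(k_anonymity(rows), l_diversity(rows), t_closeness(rows))

On this toy table every equivalence class has 3 rows (k = 3), the least diverse class contains 2 distinct diseases (l = 2), and the worst-case distance from a class's disease distribution to the overall one is 1/6, i.e. the release satisfies t-closeness for any t >= 0.17.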
Appears in Collections: Conference Papers

Files in This Item:

File: Privacy Preservation.pdf (900 KB, Adobe PDF)


All items in DSpace are protected by copyright, with all rights reserved.

 
