Data De-Duplication Using SHA (Secure Hash Algorithm)
| Published in | International Journal of Scientific Research in Computer Science, Engineering and Information Technology, pp. 1-5 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | 04.05.2019 |
| ISSN | 2456-3307 |
| DOI | 10.32628/CSEIT1952181 |
| Summary: | Nowadays the number of users of cloud storage has increased, so the amount of stored data has grown at an exponential rate. The data should be secured and the storage used efficiently, but a great deal of duplicate data is present because two or more users may upload the same data. To use cloud storage efficiently, redundant data must be reduced, which in turn conserves resources such as storage space and disk I/O operations for cloud vendors. Data de-duplication is the process of removing redundant data and storing only one instance of each duplicate item. The objective of the proposed system is to make the comparison of hash values of different data blocks more efficient and to improve data security. This paper presents a method for data de-duplication using SHA (Secure Hash Algorithm) and AES. SHA is used because it is more secure than other hashing algorithms. The data is encrypted with AES on the owner's machine itself, and SHA hashes are used to detect and eliminate redundant data. |
|---|---|
| ISSN: | 2456-3307 |
| DOI: | 10.32628/CSEIT1952181 |
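
To illustrate the approach described in the summary, below is a minimal sketch of hash-based block de-duplication with client-side AES encryption. The fixed block size, the single shared AES-GCM key, the SHA-256 variant, and the in-memory `store` dictionary standing in for cloud storage are all assumptions made for illustration; the abstract does not specify block size, key management, or the exact SHA variant the authors use.

```python
# Sketch of hash-based block de-duplication with client-side AES encryption.
# Assumptions (not stated in the abstract): 4 KiB blocks, SHA-256 fingerprints,
# one shared AES-GCM key, and an in-memory dict standing in for cloud storage.
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK_SIZE = 4096                          # assumed block size
store = {}                                 # fingerprint -> (nonce, ciphertext)
key = AESGCM.generate_key(bit_length=256)  # assumed single shared key
aes = AESGCM(key)

def upload(data: bytes) -> list:
    """Split data into blocks; store ciphertext only for blocks not seen before."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fingerprint = hashlib.sha256(block).hexdigest()  # SHA hash identifies duplicates
        if fingerprint not in store:                     # new block: encrypt and keep one copy
            nonce = os.urandom(12)
            store[fingerprint] = (nonce, aes.encrypt(nonce, block, None))
        refs.append(fingerprint)                         # duplicate block: keep only a reference
    return refs

refs_a = upload(b"hello world" * 1000)
refs_b = upload(b"hello world" * 1000)  # same data again: no new ciphertext is stored
assert refs_a == refs_b
print(f"unique blocks stored: {len(store)}")
```

A second upload of identical data yields the same SHA-256 fingerprints, so only references are recorded and no additional ciphertext is written, which is the saving in storage space and disk I/O that the paper targets.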