
Data Deduplication for Efficient Storage on Cloud using Fog Computing Paradigm
Shubham Sharma1, Richa Jain2, Pronika3

1Shubham Sharma, Student, B.Tech in CSE, Manav Rachna International Institute of Research & Studies, Faridabad, India.
2Richa Jain, Student, B.Tech in CSE, Manav Rachna International Institute of Research & Studies, Faridabad, India.
3Mrs. Pronika, Assistant Professor, Department of CSE, Manav Rachna International Institute of Research and Studies, Faridabad, India.

Manuscript received on April 09, 2020. | Revised Manuscript received on April 11, 2020. | Manuscript published on May 30, 2020. | PP: 812-815 | Volume-9 Issue-1, May 2020. | Retrieval Number: A1595059120/2020©BEIESP | DOI: 10.35940/ijrte.A1595.059120
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Cloud services have taken the IT world by storm by making computing resources available to everyone over a large geographic area. With the increasing amount of data generated every minute, it has become increasingly difficult to manage resources and storage. Thus, data compression techniques such as data deduplication, which aims at eliminating redundancy in data by forming chunks that can be stored on a distributed system, can prove to be a logical solution. However, when it comes to the cloud, problems such as security have always been a major issue. In order to eliminate these challenges, we need to implement a layer of fog computing that deals with the shortcomings of cloud computing and, at the same time, presents a filtration front for the incoming data.
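The abstract refers to deduplication as splitting data into chunks and storing only unique chunks on a distributed store. The following is a minimal sketch of that idea, assuming fixed-size chunking, SHA-256 fingerprints, and an in-memory index; these are illustrative choices and are not specified in the paper itself.

import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size; the paper does not specify one


class ChunkStore:
    """Minimal in-memory chunk-level deduplication store (illustrative only)."""

    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes (unique chunks only)
        self.files = {}    # file name -> ordered list of fingerprints ("recipe")

    def put(self, name, data):
        """Split data into fixed-size chunks and store only previously unseen chunks."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:      # duplicate chunks are not stored again
                self.chunks[fp] = chunk
            recipe.append(fp)
        self.files[name] = recipe

    def get(self, name):
        """Reassemble a file from its chunk recipe."""
        return b"".join(self.chunks[fp] for fp in self.files[name])


if __name__ == "__main__":
    store = ChunkStore()
    store.put("a.txt", b"hello world" * 1000)
    store.put("b.txt", b"hello world" * 1000)   # identical content adds no new chunks
    assert store.get("b.txt") == b"hello world" * 1000
    print(len(store.chunks), "unique chunks stored for two identical files")

In a fog-assisted deployment as described above, this chunking and fingerprinting step would sit at the fog layer, so only chunks not already present in cloud storage are forwarded upstream.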
Keywords: Cloud computing, distributed system, deduplication, fog computing.
Scope of the Article: Cloud Computing