Ghemawat: The Google file system. SOSP'03
08-04-2009, 10:46 PM (This post was last modified: 07-09-2018 07:13 PM by lingu.)
Post: #1
Ghemawat: The Google file system. SOSP'03
Ghemawat, S., Gobioff, H., and Leung, S. The Google file system. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (SOSP'03), Bolton Landing, NY, USA, October 19 - 22, 2003. 29-43.

ACM entry:

Very good technical details. One GFS cluster consists of a single master and multiple chunkservers. GFS has a weak consistency model.

The single-master and single-primary-chunkserver structure simplifies the design of GFS. The master grants a lease to one chunkserver holding a replica, designating it the "primary". Until the lease expires, the primary decides the mutation order within the chunk and applies it to all other replicas.

The client caches chunk locations, but not the file data. The resulting inconsistency window is limited by the cache entry timeout and by the next open of the file, which purges all chunk information for that file from the cache.

Common settings are:
Lease period: 60s
Cache entry timeout: unknown
Heartbeat rate: unknown
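As a sketch of the client-side behavior described above (the class and method names are made up; GFS's actual client code is not public), a chunk-location cache with an entry timeout and a purge-on-open might look like:

```python
import time

class ChunkLocationCache:
    """Client-side cache of (file, chunk index) -> chunkserver locations."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self.entries = {}  # (filename, chunk_index) -> (locations, expiry)

    def put(self, key, locations):
        self.entries[key] = (locations, time.monotonic() + self.ttl)

    def get(self, key):
        hit = self.entries.get(key)
        if hit is None:
            return None
        locations, expiry = hit
        if time.monotonic() > expiry:  # entry timed out: treat as a miss
            del self.entries[key]
            return None
        return locations

    def purge_file(self, filename):
        # On (re)opening a file, drop all cached chunk info for that file.
        for key in [k for k in self.entries if k[0] == filename]:
            del self.entries[key]
```

A cache miss (or timed-out entry) would then trigger a fresh chunk-location request to the master.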

20121231 Add ACM entry and remove broken PDF link. /gl

Attached File(s)
.pdf  gfs-sosp2003.pdf (Size: 269.47 KB / Downloads: 2)
09-15-2009, 02:17 PM
Post: #2
Interesting comments from
Google File System II: Dawn of the Multiplying Master Nodes
By Cade Metz in San Francisco

Link to the article is

Below are some interesting observations and points.

"Updated As its custom-built file system strains under the weight of an online empire it was never designed to support, Google is brewing a replacement. Apparently, this overhaul of the Google File System is already under test as part of the "Caffeine" infrastructure the company announced earlier this week."

"In an interview with the Association for Computing Machinery (ACM), Google's Sean Quinlan says that nearly a decade after its arrival, the original Google File System (GFS) has done things he never thought it would do."

Quinlan (GFS tech leader for two years, Google principal engineer): "Its staying power has been nothing short of remarkable given that Google's operations have scaled orders of magnitude beyond anything the system had been designed to handle, while the application mix Google currently supports is not one that anyone could have possibly imagined back in the late 90s"

"But GFS supports some applications better than others. Designed for batch-oriented applications such as web crawling and indexing, it's all wrong for applications like Gmail or YouTube, meant to serve data to the world's population in near real-time."

"(The situation that throughput outweighs latency) has changed over the past ten years and though Google has worked to build its public-facing apps so that they minimize the shortcomings of GFS, Quinlan and company are now building a new file system from scratch."

"The trouble (of single-master GFS) is that ... A single point of failure may not have been a disaster for batch-oriented applications, but it was certainly unacceptable for latency-sensitive applications, such as video serving."

"In the beginning, GFS even lacked an automatic failover scenario if the master went down. You had to manually restore the master, and service vanished for up to an hour. Automatic failover was later added, but even then, there was a noticeable service outage. According to Quinlan, the lapse started out at several minutes and now it's down to about 10 seconds... Which is still too high."

Quinlan: "While these instances - where you have to provide for failover and error recovery - may have been acceptable in the batch situation, they're definitely not OK from a latency point of view for a user-facing application"

"But even when the system is running well, there can be delays." -- Quinlan: "There are places in the design where we've tried to optimize for throughput by dumping thousands of operations into a queue and then just processing through them," ... "That leads to fine throughput, but it's not great for latency."

"GFS dovetails well with MapReduce, Google's distributed data-crunching platform. But it seems that Google has jumped through more than a few hoops to build BigTable, its (near) real-time distributed database. And nowadays, BigTable is taking more of the load."

"Our user base has definitely migrated from being a MapReduce-based world to more of an interactive world that relies on things such as BigTable. Gmail is an obvious example of that. Videos aren't quite as bad where GFS is concerned because you get to stream data, meaning you can buffer. Still, trying to build an interactive database on top of a file system that was designed from the start to support more batch-oriented operations has certainly proved to be a pain point."

"The other issue is that Google's single master can handle only a limited number of files. The master node stores the metadata describing the files spread across the chunkservers, and that metadata can't be any larger than the master's memory."

"With its new file system - GFS II? - Google is working to solve both problems. Quinlan and crew are moving to a system that uses not only distributed slaves but distributed masters. And the slaves will store much smaller files. The chunks will go from 64MB down to 1MB."

Quinlan: (with 1MB chunks) you have more room to accommodate another ten years of change. "My gut feeling is that if you design for an average 1MB file size, then that should provide for a much larger class of things than does a design that assumes a 64MB average file size. Ideally, you would like to imagine a system that goes all the way down to much smaller file sizes, but 1MB seems a reasonable compromise in our environment."

About why Google didn't design the original GFS around distributed masters -- Quinlan: "The decision to go with a single master was actually one of the very first decisions, mostly just to simplify the overall design problem. That is, building a distributed master right from the outset was deemed too difficult and would take too much time".

Quinlan: "There's no question that GFS faces many challenges now," "Engineers at Google have been working for much of the past two years on a new distributed master system designed to take full advantage of BigTable to attack some of those problems that have proved particularly difficult for GFS."
10-17-2009, 11:43 PM
Post: #3
@inproceedings{Ghemawat03,
  author    = {Ghemawat, Sanjay and Gobioff, Howard and Leung, Shun-Tak},
  title     = {The Google file system},
  booktitle = {SOSP '03: Proceedings of the nineteenth ACM symposium on Operating systems principles},
  year      = {2003},
  isbn      = {1-58113-757-5},
  pages     = {29--43},
  location  = {Bolton Landing, NY, USA},
  doi       = {},
  publisher = {ACM},
  address   = {New York, NY, USA},
}
12-20-2009, 09:03 PM
Post: #4
RE: Ghemawat...The Google file system. SOSP'03
There are some challenges to GFS:

1) Very large files. As a file grows, the probability that it becomes unreadable grows with it. Suppose a single chunk replica is lost with probability p, and each chunk keeps 3 replicas. A chunk is unreadable only when all 3 of its replicas are lost, which happens with probability q = p^3. If a file consists of N chunks, the file is unreadable with probability 1-(1-q)^N. When N is large enough, this failure probability may become unacceptable. The problem is more serious when the chunk size is smaller than 64 MB, since the same file then consists of more chunks.
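This kind of reliability estimate is easy to sketch in a few lines, under the assumption that a chunk is unreadable only when all of its replicas are lost (the function names are made up; p and N are free parameters):

```python
def chunk_loss_prob(p, replicas=3):
    # A chunk becomes unreadable only if every one of its replicas is lost.
    return p ** replicas

def file_loss_prob(p, n_chunks, replicas=3):
    # A file is readable only if all of its chunks survive.
    q = chunk_loss_prob(p, replicas)
    return 1 - (1 - q) ** n_chunks

# Halving the chunk size doubles n_chunks for the same file,
# which strictly increases file_loss_prob for any p > 0.
```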

2) Very large overall file-system size. As mentioned in the interview, the single master's memory limits the total amount of metadata, and therefore the number of files and chunks the system can hold. Moreover, the smaller the chunk size, the more chunks (and thus more metadata) the same amount of data requires, so the limit is reached sooner. A distributed-master file system is a good answer to this problem.
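A back-of-the-envelope calculation makes the limit concrete. The paper reports that the master keeps under 64 bytes of metadata per chunk; taking 64 bytes as a rough assumption (the helper names below are made up):

```python
def max_chunks(master_mem_bytes, bytes_per_chunk_meta=64):
    # The master must hold all chunk metadata in memory.
    return master_mem_bytes // bytes_per_chunk_meta

def max_data_bytes(master_mem_bytes, chunk_size, bytes_per_chunk_meta=64):
    # Total data the cluster can address before the master's memory is full.
    return max_chunks(master_mem_bytes, bytes_per_chunk_meta) * chunk_size

GiB, MiB = 2**30, 2**20
# With 16 GiB of master memory:
#   64 MiB chunks -> max_data_bytes(16*GiB, 64*MiB) == 2**54 bytes (16 PiB)
#    1 MiB chunks -> max_data_bytes(16*GiB,  1*MiB) == 2**48 bytes (256 TiB)
```

So shrinking chunks from 64 MB to 1 MB cuts the addressable capacity of a single master by a factor of 64, which is exactly why smaller chunks push toward distributed masters.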
12-21-2009, 09:44 AM
Post: #5
RE: Ghemawat...The Google file system. SOSP'03
GFS was designed to store huge amounts of data reliably, so that the data can be processed effectively and efficiently. Some highlights of its properties: global namespace, high availability, efficient processing, atomic record append, and snapshot.

But actually there are some drawbacks in the GFS design:

1) Whether GFS needs to support random writes. In the paper the authors say the write workload of GFS consists almost entirely of large streaming writes; small random writes are nearly nonexistent. And Google later designed and implemented a database system, Bigtable, on top of GFS: it can batch small items into a large file and then write that file to GFS. Moving such functionality out of GFS keeps the file system simple while still getting the benefit.
2) About the fixed chunk size. GFS uses a fixed 64 MB chunk size. It often happens that a record near a chunk boundary is split across two chunks, so the file system has to access two different locations to read the whole record. With MapReduce, the efficient case is when a task's data is stored on the local node; if the parts of a record are stored on different nodes, the communication overhead increases.
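The boundary issue in point 2 can be seen with a little arithmetic (a sketch; `chunks_touched` is a made-up helper):

```python
CHUNK_SIZE = 64 * 2**20  # 64 MiB

def chunks_touched(offset, length, chunk_size=CHUNK_SIZE):
    """Return the chunk indices that the byte range [offset, offset+length) covers."""
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return list(range(first, last + 1))

# A 1 KiB record written just before a chunk boundary spans two chunks,
# so reading it may require contacting two different chunkservers:
# chunks_touched(64*2**20 - 512, 1024) -> [0, 1]
```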