GlusterFS
06-26-2017, 06:16 PM
Post: #11
RE: GlusterFS
(06-26-2017 05:58 PM)zma Wrote:  
(06-26-2017 05:00 PM)xwcwt Wrote:  BTW: I have started the glusterd service on all of gm62's nodes and created a /data partition formatted with XFS. You can test in this directory by creating a volume of each mode.

I guess it is for testing only, so it is fine to remove the data in /data or to remove/reconfigure the glusterd service on gm62. Is this correct?

Yes. There is no important data on this GM now.
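
For reference, a minimal sketch of how a test volume could be created on those /data bricks. The node names gm62-1 through gm62-3 and the volume name testvol are hypothetical; substitute the real ones.

    # On gm62-1: add the other nodes to the trusted storage pool
    gluster peer probe gm62-2
    gluster peer probe gm62-3

    # Create a 3-way replicated test volume on the /data bricks
    gluster volume create testvol replica 3 \
        gm62-1:/data/brick1 gm62-2:/data/brick1 gm62-3:/data/brick1

    # Start the volume and verify its configuration
    gluster volume start testvol
    gluster volume info testvol

The same pattern, with a different volume type in place of "replica 3", can be used to try each volume mode.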
06-28-2017, 04:45 PM
Post: #12
RE: GlusterFS
(12-09-2016 01:29 PM)xwcwt Wrote:  
(12-08-2016 03:46 PM)zma Wrote:  
(11-22-2016 03:55 PM)xwcwt Wrote:  Installation & configuration:

This is a good start. Please continue studying more aspects of it, such as:

- I/O speed on a single node
- I/O speed for parallel access
- Fault handling
---- a disk failure
---- a node failure
- Stability
---- under stress workloads: parallel read-heavy, parallel write-heavy, parallel random read/write-heavy
- Data recovery and re-replication after one replica is lost
- Monitoring capability: how to find out that there are problems such as a disk failure, node failure, or node down, and report them?

Will do. This also needs some time.

@xwcwt: please continue studying and testing GlusterFS in these aspects. We are seriously considering it as a key system component (http://tab.d-thinker.org/showthread.php?tid=8702), so please get some key numbers/info for the above aspects. A possible starting point for the I/O measurements is sketched below.
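
For the I/O speed items, one possible first pass, assuming the test volume is mounted at the hypothetical mount point /mnt/testvol:

    # Single-node sequential write and read speed
    dd if=/dev/zero of=/mnt/testvol/ddfile bs=1M count=1024 conv=fdatasync
    dd if=/mnt/testvol/ddfile of=/dev/null bs=1M

    # Parallel random read/write stress with fio: 8 concurrent jobs,
    # 4 KB blocks, 1 GB per job, aggregated reporting
    fio --name=randrw --directory=/mnt/testvol --rw=randrw \
        --bs=4k --size=1G --numjobs=8 --group_reporting

Varying --rw (read, write, randread, randwrite, randrw) and --numjobs covers the read-heavy, write-heavy, and mixed parallel cases.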
06-28-2017, 05:10 PM
Post: #13
RE: GlusterFS
(06-28-2017 04:45 PM)zma Wrote:  [...] please continue studying and testing GlusterFS in these aspects. We are seriously considering it as a key system component, so please get some key numbers/info for the above aspects.

Got it. Xiaotong and I are pushing this work forward.
06-28-2017, 05:44 PM
Post: #14
RE: GlusterFS
Red Hat's Administration Guide looks like a high-quality guide we can refer to for managing GlusterFS: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/
06-28-2017, 05:45 PM
Post: #15
RE: GlusterFS
(06-28-2017 05:10 PM)xwcwt Wrote:  [...] Got it. Xiaotong and I are pushing this work forward.

The 3-way distributed replicated volume seems a promising choice: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Creating_Distributed_Replicated_Volumes.html#Creating_Three-way_Distributed_Replicat . We may study that configuration first; a creation sketch is below.
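
A sketch of creating such a volume, following the pattern in that guide. The server names server1 through server6 and the volume name distrep3 are hypothetical; with six bricks and replica 3, the bricks are grouped in order into two 3-way replica sets, and files are distributed across the sets:

    # Six bricks with replica 3 => a distributed volume of two 3-way replica sets
    gluster volume create distrep3 replica 3 \
        server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 \
        server4:/data/brick1 server5:/data/brick1 server6:/data/brick1

    gluster volume start distrep3
    gluster volume info distrep3   # Type should show Distributed-Replicate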
06-28-2017, 05:53 PM
Post: #16
RE: GlusterFS
(06-28-2017 05:45 PM)zma Wrote:  [...] The 3-way distributed replicated volume seems a promising choice. We may study that configuration first.

Got it.
07-18-2017, 11:31 AM
Post: #17
RE: GlusterFS
(06-28-2017 05:53 PM)xwcwt Wrote:  [...] Got it.

@xwcwt: please continue working on this. I guess we need a testing cluster for GlusterFS, as the GM that was being used is not available now.
08-07-2017, 05:50 PM
Post: #18
RE: GlusterFS
(07-18-2017 11:31 AM)zma Wrote:  [...] @xwcwt: please continue working on this. I guess we need a testing cluster for GlusterFS, as the GM that was being used is not available now.

tbg20 is a good choice, and I have set up the environment now. If you want to have a try, you can log in to tb20-1 to test (you may need to start the service and then mount the filesystem according to the instructions; a sketch of those steps is below).
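
A sketch of those steps, assuming a hypothetical volume name gv0 served from tb20-1 (check "gluster volume info" for the real name):

    # On tb20-1: make sure the management daemon is running
    systemctl start glusterd
    systemctl status glusterd

    # Mount the volume with the native FUSE client and verify
    mkdir -p /mnt/gv0
    mount -t glusterfs tb20-1:/gv0 /mnt/gv0
    df -h /mnt/gv0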
08-07-2017, 05:54 PM
Post: #19
RE: GlusterFS
(08-07-2017 05:50 PM)xwcwt Wrote:  [...] tbg20 is a good choice, and I have set up the environment now. If you want to have a try, you can log in to tb20-1 to test.

Okay. I may do some tests when I get a chance. I guess I can assume all data on the GlusterFS volume is for testing and can be deleted without notice.

But please continue testing the aspects of GlusterFS around failure handling, monitoring, etc. A few commands that may help with those checks are sketched below.
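
A sketch for the failure-handling and monitoring items, again assuming the hypothetical volume name gv0:

    # Peer and brick health: spot node or brick failures
    gluster peer status
    gluster volume status gv0

    # Self-heal state: files pending re-replication after a replica was lost
    gluster volume heal gv0 info

    # Trigger a full self-heal, e.g. after replacing a failed disk/brick
    gluster volume heal gv0 full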