All -

This discussion has been very helpful to me... I'm setting up a couple of new systems and was going to use ext3, but now I've decided to take the plunge and use ext4. It's not formally supported in RHEL 5.5 yet, but I discovered you can load it as a "Technology Preview", and from what I can tell, ext4 is nearing a supported release from Red Hat and is pretty stable (though they warn you not to use it on production systems). Looks like all I have to do is "yum install e4fsprogs" and start using it, though the ext4 commands have a "4" in them instead of a "2" until the formal release.
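In case it helps anyone else, here is a rough sketch of what that Technology Preview setup might look like on RHEL 5.5. The device /dev/sdb1 and mount point /data are placeholders, and the exact command names may differ on your release; check the Red Hat Technical Notes linked below.

    # install the preview userspace tools (the "4" variants of the usual e2fsprogs commands)
    yum install e4fsprogs

    # create and check an ext4 filesystem on a spare partition (placeholder device)
    mke4fs -t ext4 /dev/sdb1       # "4" counterpart of mke2fs
    e4fsck -f /dev/sdb1            # "4" counterpart of e2fsck

    # mount it
    mkdir -p /data
    mount -t ext4 /dev/sdb1 /data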
Here are a couple of links in case anyone else is interested:

http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/5.5_Technical_Notes/chap-Technical_Notes-_Technology_Previews_.html
https://ext4.wiki.kernel.org/index.php/Ext4_Howto

Art

On Mon, 25 Oct 2010, Gerald Creager wrote:
Hi, Kevin,

ext4 appears, so far, to be a good fit for our gluster services. That's all we've looked at it for, and for that matter, we're still using XFS on our gluster system for the HPC instance.

XFS has proven sensitive to loss of a RAID shelf, or loss of a particular drive or two on a shelf, in our world when we were using LVM2 to create larger volumes. That can cause a lot of problems all of a sudden. When a hardware failure happens and XFS goes offline, an XFS recovery operation often requires loss of data when the XFS log is blown away, which you shouldn't HAVE to do, but, in our experience, is required about 2/3 of the time.

That said, if we never had hardware failures, XFS would be just about perfect...

gc

Kevin R. Tyle wrote:

Hi Gerry, et al.:

Any sense as to whether ext4 is improved enough to warrant its use in lieu of a non-journaling fs such as ext2?

Also, what particular problems are of concern with XFS?

--Kevin

______________________________________________________________________
Kevin Tyle, Systems Administrator
Dept. of Atmospheric & Environmental Sciences
University at Albany, ES-235
1400 Washington Avenue
Albany, NY 12222
ktyle@xxxxxxxxxxxxxxxx
518-442-4578 (voice)
518-442-5825 (fax)
______________________________________________________________________

On 10/25/2010 08:43 PM, Gerald Creager wrote:

I have not, but with the amount of disk we run, we've gone far away from ext3. We have XFS for our NFS partitions, and ext4 for our Gluster. There are a whole lot of problems associated with ext3, including but not limited to how long it takes to do one of its periodic checks when you've exceeded the time limit or reboot count.

gerry

Arthur A. Person wrote:

Has anyone tried mounting ext3 with the data=writeback option (with the risk that active data files could be corrupt after a crash)? I'm wondering how close performance would come to ext2...

Art

On Mon, 25 Oct 2010, Gerald Creager wrote:

The choices for files of this size are ext2 if you don't care about journaling, or XFS. XFS has its own share of problems but does work very well.

gerry

Tyler Allison wrote:

We switched off of ext3 for our data partitions for high-speed I/O. It blows chunks compared to ext2. You lose journaling, but I don't care; I don't have long-term storage.

Sent from my iPhone

On Oct 25, 2010, at 3:10 PM, "Jeff Lake - Admin" <admin@xxxxxxxxxxxxxxxxxxxx> wrote:

Yep, both 7200 SATA2's. Was considering a Velociraptor 10K RPM, but at the extra $60/m... need to really consider long and hard.

Jeff,

If your disk RPM isn't at least 7200, your I/O is going to be bad with Level 2/3 feeds.

*******************************************************************************
Gilbert Sebenste (My opinions only!)
Staff Meteorologist, Northern Illinois University
E-mail: sebenste@xxxxxxxxxxxxxxxxxxxxx
Web: http://weather.admin.niu.edu
*******************************************************************************
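For reference, here is a hedged sketch of the knobs touched on in this thread; the device /dev/sdb1 and mount point /data are placeholders. tune2fs controls the periodic-check limits Gerry mentions, data=writeback is the ext3 mount option Art asks about, and xfs_repair -L is the last-resort operation that zeroes ("blows away") the XFS log.

    # show, then disable, ext3's periodic fsck triggers (mount count and time interval)
    tune2fs -l /dev/sdb1 | grep -i -e 'mount count' -e 'check'
    tune2fs -c 0 -i 0 /dev/sdb1

    # mount an ext3 data partition with metadata-only (writeback) journaling
    mount -t ext3 -o data=writeback /dev/sdb1 /data

    # last-resort XFS repair that zeroes the log -- recent data can be lost
    xfs_repair -L /dev/sdb1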
Arthur A. Person
Research Assistant, System Administrator
Penn State Department of Meteorology
email: person@xxxxxxxxxxxxx, phone: 814-863-1563

_______________________________________________
ldm-users mailing list
ldm-users@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/