
Re: Storage server



On 9/8/2012 1:10 PM, Martin Steigerwald wrote:
> On Friday, 7 September 2012, Stan Hoeppner wrote:
>> On 9/7/2012 12:42 PM, Dan Ritter wrote:
> […]
>>> Now, the next thing: I know it's tempting to make a single
>>> filesystem over all these disks. Don't. The fsck times will be
>>> horrendous. Make filesystems which are the size you need, plus a
>>> little extra. It's rare to actually need a single gigantic fs.
>>
>> What?  Are you talking about crash-recovery boot-time "fsck"?  With any
>> modern journaled FS log recovery is instantaneous.  If you're talking
>> about an actual structure check, XFS is pretty quick regardless of
>> inode count as the check is done in parallel.  I can't speak to EXTx
>> as I don't use them.  For a multi terabyte backup server, XFS is the
>> only way to go anyway.  Using XFS also allows infinite growth without
>> requiring array reshapes or LVM, while maintaining striped write
>> alignment and thus maintaining performance.
>>
>> There are hundreds of 30TB+ and dozens of 100TB+ XFS filesystems in
>> production today, and I know of one over 300TB and one over 500TB,
>> attached to NASA's two archival storage servers.
>>
>> When using correctly architected reliable hardware there's no reason
>> one can't use a single 500TB XFS filesystem.
> 
> I assume that such correctly architected hardware contains a lot of RAM in 
> order to be able to xfs_repair the filesystem in case of any filesystem 
> corruption.
> 
> I know the RAM usage of xfs_repair has been reduced, but such a 500 TiB 
> XFS filesystem can still contain a lot of inodes.

The system I've been referring to, with the ~500TB XFS filesystem, is
an IA64 SGI Altix with 64 processors and 128GB of RAM.  I'm pretty sure
128GB is plenty for xfs_repair on filesystems much larger than 500TB.
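
If anyone wants to sanity-check RAM sizing up front, a no-modify dry
run with the memory limit set artificially low should make xfs_repair
print an estimate of what it would actually need (recent xfsprogs; the
device name below is just a placeholder):

  # -n = no-modify dry run, -m 1 = cap memory at 1MB, -vv = verbose;
  # with the cap this low, xfs_repair should just report how much
  # memory it estimates it needs rather than doing any real work
  xfs_repair -n -vv -m 1 /dev/sdX1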

> But for XFS filesystems of up to 10 TiB I wouldn't care too much about
> those issues.

Yeah, an 8GB machine typically has enough RAM to run xfs_repair on
filesystems much larger than 10TB.
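
And if a box does end up tight on RAM, xfs_repair can be told to cap
its own memory use rather than grabbing whatever it wants; the figure
here is just an example, not a recommendation:

  # cap xfs_repair at roughly 6GB on an 8GB machine,
  # leaving headroom for the OS and page cache
  xfs_repair -m 6144 /dev/sdX1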

-- 
Stan


