
Re: Unidentified flying subject!



Charles Curley <charlescurley@charlescurley.com> writes:

> On Fri, 09 Feb 2024 04:30:14 +0000
> Richmond <dnomhcir@gmx.com> wrote:
>
>> So you need to store a lot of data and then verify that it has written
>> with 'diff'.
>
> Yeah.
>
> I've been thinking about this. Yeah, I know: dangerous.
>
> What I would do is write a function to write 4096 bytes of repeating
> data, the data being the block number being written to. So the first
> block is all zeros, the second all ones, etc. For convenience they
> would be 64 bit unsigned ints.
>
> And, given the block number, a function to verify that block number N
> is full of Ns and nothing else.
>
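
That part is concrete enough to sketch. Here's roughly how I read it
(a Python sketch; the function names are mine, and treating the drive
as one big file of 4096-byte blocks is my assumption):

    import struct

    BLOCK_SIZE = 4096
    INTS_PER_BLOCK = BLOCK_SIZE // 8   # 512 64-bit unsigned ints per block

    def make_block(n):
        # Build a 4096-byte block that is nothing but the block number
        # n, repeated as little-endian 64-bit unsigned ints.
        return struct.pack('<Q', n) * INTS_PER_BLOCK

    def verify_block(dev, lba, n):
        # True if the 4096-byte block at LBA lba is full of Ns and
        # nothing else.
        dev.seek(lba * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE) == make_block(n)
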
> By doing it this way, we don't have to keep copies of what we've
> written. We only have to keep track of which block got written to which
> LBA so we can go back and check it later.
>
> Now, divide the drive in half. Write block zero there. Divide the two
> halves each in half, and write blocks one and two. Divide again, and
> write blocks three through six. Etc., a nice binary division.
>
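
If I follow the subdivision correctly, the write order can be
generated like so (a sketch; it assumes the total block count is a
power of two, and subdivision_order is my own name):

    def subdivision_order(total_blocks):
        # Yield (block_number, lba) pairs: the drive's midpoint first,
        # then the midpoints of each half, of each quarter, and so on.
        n = 0
        step = total_blocks
        while step > 1:
            for lba in range(step // 2, total_blocks, step):
                yield n, lba
                n += 1
            step //= 2

For 8 blocks that yields LBA 4 first, then 2 and 6, then 1, 3, 5 and
7, which matches your description.
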
> Every once in a while, I would go back and verify the blocks already
> written. Maybe every time I subdivide again.
>
> If we're really lucky, and the perpetrators really stupid, the 0th block
> will fail, and we have instant proof that the drive is a failure.
> We don't care why the drive is failing, only that the 0th block (which
> is clearly not at the end of the drive) has failed.
>
> Here's a conjecture: This was designed to get people who use FAT and
> NTFS. I know that FAT starts writing at the beginning of the partition,
> and goes from there. This is because floppy disks (remember them?) have
> track 0 at the outside, which is far more reliable than the tracks at
> the hub simply because each flux reversal is longer. So the first
> 64G should be fine; only after you get past there do you see bad
> sectors. I believe NTFS does similarly.
>
> But I don't think that's what they're doing. Other operating systems
> have put the root directory and file allocation table (or equivalent)
> in the middle of the disk (for faster access), Apple DOS for one.
> mkfs.extX writes blocks all over the place.
>
> I think that they are re-allocating sectors on the fly, regardless of
> the LBA, until they run out of real sectors. So we write 64G+ of my
> 4096-byte blocks. It'll take a while, but who cares?
>
> If Gibson is correct that these things only have 64 gig of real memory,
> and my arithmetic is correct, we should start seeing failures after
> writing 16,777,216 of my 4096-byte blocks.
>
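
The arithmetic checks out if Gibson means binary gigabytes: 64 GiB is
2^36 bytes, and 2^36 / 2^12 = 2^24 = 16,777,216 blocks of 4096 bytes.
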
> Of course, these things might allocate larger sectors than 4096
> bytes, in which case we'll hit the limit sooner.

I tried ValiDrive on a 64G drive and it ran very quickly. An older
32G drive was much slower, presumably because such drives are designed
for sequential writes, so ValiDrive's scattered 4k accesses are slow
on them.

Note this in the FAQ:

"Q:How much of the storage of a drive does ValiDrive test?

"A:ValiDrive's drive map contains 32 x 16 squares. So it tests 576
evenly-spaced 4k byte regions of any drive for a total of 2,359,296
bytes, or about 2.36 megabytes. If a drive contains internal RAM
caching, ValiDrive will detect that and may increase its testing region
size, as necessary, to bypass such caching; but this is not commonly
encountered."

This would be considerably quicker than your 64G write, and also cause
less wear.

But you need a friend with Windows to run it. :)

