on flash disks the access time differences are smaller, but there are still advantages to sequential reads/writes compared to random ones.
in addition, flash devices erase and rewrite in chunks much larger than the typical filesystem block size (filesystem blocks are in the 1-4K range, eraseblocks are in the 128K to multi-megabyte range).
a 'worst case' scenario with current low-end hardware would be overwriting an existing file on a filesystem with 1K blocks and 256K eraseblocks. a 1M file spans 4 eraseblocks, so a sequential rewrite could cost as few as 4 erase/write cycles; a badly fragmented rewrite, touching one 1K block at a time, could cost up to 1024 erase/write cycles.
normally files don't get this fragmented, so you won't see that worst case in real life, but the grouping can really matter.
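the arithmetic behind that worst case is simple enough to sketch (the sizes below are just the hypothetical numbers from the example, not measurements from real hardware):

```python
# back-of-envelope sketch of the flash rewrite worst case described above.
# sizes match the hypothetical example: 1K fs blocks, 256K eraseblocks, 1M file.
FS_BLOCK = 1 * 1024           # filesystem block size
ERASEBLOCK = 256 * 1024       # flash eraseblock size
FILE_SIZE = 1 * 1024 * 1024   # 1M file

# best case: the file is laid out sequentially, so the rewrite touches
# each eraseblock exactly once
best = FILE_SIZE // ERASEBLOCK   # 4 erase/write cycles

# worst case: every 1K block lands somewhere different, so each block
# written forces its own eraseblock rewrite
worst = FILE_SIZE // FS_BLOCK    # 1024 erase/write cycles

print(best, worst)  # → 4 1024
```

a factor of 256 in erase/write cycles, for the same 1M of user data.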
another place where you could see a huge difference is on RAID 5/6 arrays. if your system checks parity on reads, reading one block requires reading the entire stripe and verifying the parity. if all the data you need is on that stripe, you get it all in one pass; if not, you have to repeat the process for each additional stripe.
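a toy illustration of why a parity-checked read drags in the whole stripe (this is just xor parity on made-up byte strings, not real raid code; a real array works on whole disk blocks, but the xor relationship is the same):

```python
# parity-checking reads on a hypothetical 3-data + 1-parity raid5 stripe
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """bytewise xor of two equal-length chunks"""
    return bytes(x ^ y for x, y in zip(a, b))

stripe = [b"AAAA", b"BBBB", b"CCCC"]   # data chunks on 3 disks
parity = reduce(xor, stripe)           # parity chunk on the 4th disk

# to verify a read of any ONE chunk, the controller still has to read
# every chunk in the stripe: data xor parity must come out all zeroes
check = reduce(xor, stripe + [parity])
assert check == bytes(len(parity))     # all zeroes → stripe is consistent
```

so even a 1-block read costs a full-stripe read on a parity-checking array; if your data happens to fill the stripe, that extra IO wasn't wasted.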
for writes it can be even more drastic. to modify an existing stripe you need to read the existing block from the disk being changed and from the parity disk(s), calculate the new parity, and write the changes out. if you know you are overwriting the entire stripe, there is no need to read any old data first: just calculate the new parity from the new data and do the write. if a system could do this (no system does this today), streaming/large-file writes to raid5/6 would be just about as fast as to raid0 (some additional cpu time, and the IO bandwidth of the parity disks, but unless the system is saturated these won't matter).
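the two write paths can be sketched the same way (again a toy sketch with made-up chunks, assuming plain xor parity as in raid5):

```python
# the two raid5 write paths described above, on a hypothetical 3+1 stripe
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """bytewise xor of two equal-length chunks"""
    return bytes(x ^ y for x, y in zip(a, b))

old_stripe = [b"AAAA", b"BBBB", b"CCCC"]
old_parity = reduce(xor, old_stripe)

# path 1: modify one chunk -> read-modify-write.
# requires reading the old chunk AND the old parity off the disks first:
new_chunk = b"XXXX"
rmw_parity = xor(xor(old_parity, old_stripe[0]), new_chunk)

# path 2: overwrite the entire stripe -> no reads at all.
# parity is computed purely from the new data being written:
new_stripe = [new_chunk, b"BBBB", b"CCCC"]
full_parity = reduce(xor, new_stripe)

# both paths land on the same parity; path 2 just skips the reads
assert rmw_parity == full_parity
```

path 2 is the "streaming write as fast as raid0" case: the only extra cost over raid0 is computing and writing the parity chunk, with no read-before-write on the data path.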