adventure with green drives

wd [and other vendors] sell green drives; they are reasonably cheap but have an annoying tendency to spin down very, very quickly. i recently bought 2x WD20NPVT-00Z2TT0 with the intention of using them [in a very unprofessional fashion] in a PERC6 RAID1 array.

those drives park the heads and spin down after 8 seconds of inactivity:

while true ; do
        sleep 7
        smartctl -d sat -a /dev/sdb | grep Load_Cycle_Count
done

gives:

193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24

while the same code with a slightly longer sleep value

while true ; do
        sleep 10
        smartctl -d sat -a /dev/sdb | grep Load_Cycle_Count
done

gives:

193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       10
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       11
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       12
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       13
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       14
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       15
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       16
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       16
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       17
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       18
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       19
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       20
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       21
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       22
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       23

notice the increasing value in the last column; such behavior might be fine for a media center machine serving some video from time to time, but it is undesirable for a 24×7 server with background cron jobs running all the time.

i downloaded idle3-tools for linux and ran

./idle3ctl -s 129 /dev/sdb

129 corresponds to an idle interval of 30 sec.
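as far as i understand the idle3-tools docs, the raw timer value is encoded in two ranges: values up to 128 are tenths of a second [which would put the factory 8s default at raw 80], and values above 128 count 30s units. a minimal sketch of that conversion [the function name is mine]:

```shell
# idle3 timer encoding as i understand it from idle3-tools:
# raw 1-128   -> raw * 0.1 seconds
# raw 129-255 -> (raw - 128) * 30 seconds
idle3_to_seconds() {
        raw=$1
        if [ "$raw" -le 128 ] ; then
                # tenths of a second; integer math, print as x.y
                echo "$(( raw / 10 )).$(( raw % 10 ))"
        else
                echo "$(( (raw - 128) * 30 ))"
        fi
}

idle3_to_seconds 80    # 8.0  -> the factory default, if my reading is right
idle3_to_seconds 129   # 30   -> the value set above
idle3_to_seconds 254   # 3780 -> 63 min
```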

after a power off + power on cycle the same test as above gave much better results:

193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       31
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       31
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       31
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       31
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       31

the disk no longer parks after an 8s idle period.

in the long run i’ll use the -s 254 parameter, which should correspond to an idle time of 254-128 = 126 30s periods = 63 min. i’ll add a 30 min cron job dumping smart values to wake the hard drives and to monitor whether the load cycle count increases:

smartctl -d megaraid,2 -a /dev/sda >> log
smartctl -d megaraid,3 -a /dev/sda >> log
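to check from a script whether the counter keeps growing, the raw value can be pulled out of the smartctl output; a minimal sketch, with the sample line hardcoded for illustration [in the real job it would come from the smartctl calls above]:

```shell
# pick the raw value (last column) of the Load_Cycle_Count attribute;
# the sample line below is copied from the smartctl output shown earlier
line='193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24'
count=$(echo "$line" | awk '$2 == "Load_Cycle_Count" { print $NF }')
echo "$count"    # 24
```

comparing the value against the one stored on the previous cron run [e.g. in a small state file] tells whether the drives are still parking.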

the WD20NPVT worked fine in a desktop computer. read and write performance was reasonable, 80-100MB/s for sequential access. but then i put the disks into the server. i had read earlier that i should expect problems, although i had not seen any details. i connected both disks to the PERC6 in a PE1950, created a RAID1 on the 2x WD20NPVT, let it initialize and measured linear access speeds. reads were perfectly fine at ~100MB/s, but writes never went above 30MB/s. i think i’ve tried it all:

  • creating a raid0 on a single drive
  • enabling the drive’s internal cache [ MegaCli64 -LDSetProp EnDskCache -L1 -a0 ]
  • checking different stripe sizes [in theory this should not affect RAID1 as i understand it, but apparently it does, although results vary by maybe +/-10%]
  • turning the write-back cache provided by the PERC6 off and on
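for linear speed numbers like the above, sequential dd runs are enough; a rough sketch, where the target path is a made-up default and should point at a file on the array [on the real volume a few GiB plus oflag=direct / iflag=direct give more honest numbers by bypassing the page cache]:

```shell
# rough sequential write + read throughput check; TARGET is a
# hypothetical default, point it at the array before trusting the numbers
TARGET=${TARGET:-/tmp/ddtest}
# dd prints the "... copied, X s, Y MB/s" summary on stderr; keep only it
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync 2>&1 | tail -1
dd if="$TARGET" of=/dev/null bs=1M 2>&1 | tail -1
rm -f "$TARGET"
```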

but i never got a better write speed. still, 30MB/s should be enough for my particular use case. i’ve been running a heavy IO load on the server for the last 10 days and so far all works fine.

i’ll update this post if i encounter any trouble in the future.

the server i’m preparing also has 2x WD1000CHTZ 1TB wd raptor drives. there were no performance issues with those. the raptors, also in RAID1, will be the boot drives and the spool to which rsyncs / vmware dumps will arrive. from there rdiff-backup will create copies onto the RAID1 on 2x WD20NPVT; from there the data will be copied to an offsite location where it’ll be placed on offline media.

edit 2013-07-13: out of the blue, write speed deteriorated from a tolerable 20MB/s to 4MB/s on the 2TB 2.5″ green disks. final verdict: don’t mix the green WD20NPVT with the PERC6. i’m moving the data to another machine and will use 2x WD2003FYYS. according to this they should work fine with the PERC6.
edit 2014-07-07: WD RE4 2TB disks do indeed work with dell’s PERC6; they’ve been working flawlessly in RAID1 for the last 12 months under moderate load [WD2003FYYS, WD2000FYYZ]. the WD20NPVT are used as rotated / offline backup drives and are doing fine too.
