June 10, 2003 (Tue) [this day in past years]
#2 It appears to have finally passed away
The HDD that had been acting up the other day seemed fine for a while, but it finally gave out……
  hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: dma_intr: error=0x04 { DriveStatusError }
  hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: dma_intr: error=0x04 { DriveStatusError }
  hdc: recal_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: recal_intr: error=0x04 { DriveStatusError }
  hdc: dma_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: dma_intr: error=0x04 { DriveStatusError }
  hdc: DMA disabled
  hdd: DMA disabled
  ide1: reset: master: formatter device error
  hdc: set_geometry_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: set_geometry_intr: error=0x04 { DriveStatusError }
  hdc: recal_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: recal_intr: error=0x04 { DriveStatusError }
  hdc: recal_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: recal_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: recal_intr: error=0x04 { DriveStatusError }
  hdc: set_multmode: status=0x51 { DriveReady SeekComplete Error }
  hdc: set_multmode: error=0x04 { DriveStatusError }
  hdc: recal_intr: status=0x51 { DriveReady SeekComplete Error }
  hdc: recal_intr: error=0x04 { DriveStatusError }
  ide1: reset: master: formatter device error
  end_request: I/O error, dev 16:01 (hdc), sector 20380248
  raid1: Disk failure on hdc1, disabling device.
         Operation continuing on 1 devices
  end_request: I/O error, dev 16:01 (hdc), sector 20380256
  md: updating md0 RAID superblock on device
  md: hdd1 [events: 000000f7]<6>(write) hdd1's sb offset: 39099200
  md: recovery thread got woken up ...
  md0: no spare disk to reconstruct array! -- continuing in degraded mode
  md: recovery thread finished ...
  md: (skipping faulty hdc1 )
So, checking the state……
  $ cat /proc/mdstat
  Personalities : [raid1]
  read_ahead 1024 sectors
  md0 : active raid1 hdd1[1] hdc1[0](F)
        39099200 blocks [2/1] [_U]
  unused devices: <none>
Ugh……. I need to get hold of a replacement drive right away……
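For the record, the rebuild once a replacement arrives should go roughly like this. This is my own dry-run sketch, assuming the raidtools2 toolset and this machine's device names; the `run` helper only echoes each command instead of executing it, since the real thing needs root and a fresh disk at /dev/hdc:

```shell
#!/bin/sh
# Dry-run sketch of rebuilding the degraded RAID1 (assumes raidtools2 and
# this box's device names). run() echoes instead of executing.
run() { echo "would run: $1"; }

# Copy the surviving disk's partition table onto the replacement disk.
run 'sfdisk -d /dev/hdd | sfdisk /dev/hdc'
# Hot-add the fresh partition; the md driver then resyncs hdd1 -> hdc1.
run 'raidhotadd /dev/md0 /dev/hdc1'
# Watch reconstruction progress.
run 'cat /proc/mdstat'
```

Swap `run` for real execution only after double-checking the device names; sfdisk will happily clobber the wrong disk.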
(@590)
@ Notice:
So when an HDD in a RAID array dies, Debian's raidtools2 actually notifies you via cron.
  /etc/cron.daily/raidtools2:
  WARNING: Some disks in your RAID arrays seem to have failed!
  Below is the content of /proc/mdstat:

  Personalities : [raid1]
  read_ahead 1024 sectors
  md0 : active raid1 hdd1[1] hdc1[0](F)
        39099200 blocks [2/1] [_U]
  unused devices: <none>
I see, that's handy. Not that I ever wanted to find out about this feature……
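Presumably the cron job just scans /proc/mdstat for failure markers. A minimal sketch of that kind of check (my own guess at the logic, not the actual raidtools2 script): a member flagged `(F)`, or an underscore in the status bitmap like `[_U]`, means the array is degraded.

```shell
#!/bin/sh
# Report DEGRADED if any md member is marked (F) or the status bitmap
# shows a missing disk (an underscore inside [...], e.g. [_U]).
check_mdstat() {
  if grep -E '\(F\)|\[[^]]*_[^]]*\]' "$1" >/dev/null; then
    echo "DEGRADED"
  else
    echo "OK"
  fi
}

# Against a sample matching this entry's state:
printf '%s\n' 'md0 : active raid1 hdd1[1] hdc1[0](F)' \
              '39099200 blocks [2/1] [_U]' > /tmp/mdstat.sample
check_mdstat /tmp/mdstat.sample   # prints DEGRADED
```

A healthy array shows `[UU]` with no `(F)`, so the same function would print OK.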
(@955)