This is the old SliTaz forum - Please use the main

DMA issues, drives randomly disabling DMA mode
  • xtremeqg December 2009
    Greetings, I have been trying to install SliTaz onto my computer for the last couple of days, but I have run into some problems which I cannot resolve myself.

    The situation is as follows: I have the LiveCD version booting just fine from an XD card inside my camera (which is doubling as a card reader), as I don't have any USB thumb drives lying around. I have 8 hard drives of various sizes on multiple controllers, which I wish to tie together using LVM on top of software RAID 5. I ran a couple of simulations in VMware to be sure that this setup would work, and I did not encounter any showstopping problems.
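
    For reference, a setup like the one described is typically built roughly as follows (a sketch only; the device names and sizes here are illustrative placeholders, not the actual 8-drive layout):

    ```shell
    # Build a RAID 5 array from example member disks
    # (the real setup spans 8 drives on multiple controllers).
    mdadm --create /dev/md0 --level=5 --raid-devices=3 \
        /dev/hda1 /dev/hdb1 /dev/hdc1

    # Layer LVM on top of the array: physical volume,
    # volume group, then a logical volume carved out of it.
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -L 100G -n data vg0

    # Watch the initial resync progress (and its speed).
    cat /proc/mdstat
    ```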

    However, now that I am actually building the RAID arrays, I am running into many problems. Just building the first array (~80 GB in size) took about 2 hours at a very slow speed of about 8 MB/s. Raising the min and max sync speed limits had no effect whatsoever, nor could I find any reason for it being so slow on my hardware. While trying to diagnose the problem, I found numerous errors and warnings in dmesg about interrupts being lost and DMA mode being disabled at random:

    --- body too long, continuing below ---
  • xtremeqg December 2009
    hdb: dma_timer_expiry: dma status == 0x60
    hdb: DMA timeout retry
    hdb: timeout waiting for DMA
    hda: dma_timer_expiry: dma status == 0x21
    hda: DMA timeout error
    hda: dma timeout error: status=0x50 { DriveReady SeekComplete }
    ide: failed opcode was: unknown
    hdb: status timeout: status=0xd0 { Busy }
    ide: failed opcode was: unknown
    hdb: drive not ready for command
    Clocksource tsc unstable (delta = 4686797829 ns)
    ide0: reset: success
    hdh: dma_intr: status=0x51 { DriveReady SeekComplete Error }
    hdh: dma_intr: error=0x84 { DriveStatus Error BadCRC }
    ide: failed opcode was: unknown
    -- last 3 lines repeated 4 times --
    hdg: DMA disabled
    hdh: UDMA/100 mode selected
    ide3: reset: success

    hdb: lost interrupt
    hdb: lost interrupt
    hdb: lost interrupt
    hdb: lost interrupt
    ide-cd: cmd 0x1e timed out
    hdc: lost interrupt
    -- repeated 30 or so times --

    At first I thought it was because some of these drives are quite old; however, when I ran dd if=/dev/zero of=/test/somefile bs=1048576 count=1024, the following errors popped up in dmesg:

    hdf: dma_intr: status=0x51 { DriveReady SeekComplete Error }
    hdf: dma_intr: error=0x84 { DriveStatus Error BadCRC }
    ide: failed opcode was: unknown
    -- last 3 above repeated 4 times --
    hde: DMA disabled
    hdf: UDMA/100 mode selected
    ide2: reset: success

    I don't understand why drives have their DMA modes disabled when they are not in use (not mounted, nor part of an array), nor why lost interrupts and DMA errors are occurring across 8 different drives on 2 different controllers. Does anyone have any suggestions?
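
    (For reference, the DMA state and the sync speed limits mentioned above can be inspected from userspace; a sketch, assuming the classic IDE driver and hdparm are available:)

    ```shell
    # Check whether DMA is currently enabled for a drive
    # (look at the "using_dma" flag in the output).
    hdparm -d /dev/hdf

    # Try to re-enable DMA after the kernel has dropped it.
    hdparm -d1 /dev/hdf

    # The md resync speed limits, in KB/s; raising the
    # minimum forces the kernel to sync faster.
    cat /proc/sys/dev/raid/speed_limit_min
    echo 50000 > /proc/sys/dev/raid/speed_limit_min
    ```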

    Forum ate my formatting, apologies.
  • xtremeqg December 2009
    Apparently nobody cares... In any case, this is not a hardware problem. I downloaded the Fedora 12 64-bit LiveCD. After some initial problems (I/O buffer errors on sd0, solved by using the disk-at-once burn method), I am now happily building my arrays at a more comfortable 20 MB/s, without errors and without ever having touched my hardware. I will continue using SliTaz in VMs, however; the extremely low memory and disk footprint of your distribution is quite nice.
