
General questions about LVM2 and RAID

Hi all! I recently built a cold storage server with three 1TB drives configured in RAID5 with LVM2. This is my first time working with LVM, so I’m a little bit overwhelmed by all its different commands. I have some questions:

  1. How do I verify that none of the drives are failing? This is easy in the case of a catastrophic drive failure (running lvchange -ay <volume group> will yell at you that it can’t find a drive), but what about subtler cases? (I’ve put a sketch of the kind of check I mean after this list.)
  2. Do I ever need to manually resync logical volumes? Will LVM ever “ask” me to resync logical volumes in cases other than drive failure?
  3. Is there any periodic maintenance that I should do on the array, like running some sort of health check?
  4. Does my setup protect me from data rot? What happens if a random bit flips on one of the hard drives? Will LVM be able to detect and correct it? Do I need to scan for data rot manually?
  5. LVM keeps yelling at me that it can’t find dmeventd. From what I understand, dmeventd doesn’t do anything by itself; it’s just a framework for different plugins. This is a cold storage server, meaning that I will only boot it up every once in a while, so I would rather perform all maintenance manually than delegate it to a daemon. Is it okay not to install dmeventd?
  6. Do I need to monitor SMART status manually, or does LVM do that automatically? If I have to do it manually, is there a command/script that will just tell me “yep, all good” or “nope, a drive is failing”, as opposed to the somewhat overwhelming output of smartctl -a? (The SMART part of the sketch after this list is my current guess.)
  7. Do I need to run SMART self-tests periodically? How often? Long test or short test? Offline or online?
  8. The boot drive is an SSD separate from the raid array. Does LVM keep any configuration on the boot drive that I should back up?
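
To make questions 1, 3, 6 and 7 more concrete, here is the kind of manual check I was imagining running each time I boot the box. This is only a sketch using my own VG/LV names and my best guesses about the right tools (lvchange --syncaction and smartctl -H), so corrections are very welcome:

#!/bin/sh
# Rough manual health check for the array.
# "check" scrubs (reads data and parity and counts mismatches) without writing;
# "repair" would also fix whatever it finds.
VG=myvg

for lv in vol1 vol2 vol3; do
    lvchange --syncaction check "$VG/$lv"
done

# Progress and results: a non-zero mismatch count or a non-empty health
# status presumably means trouble.
lvs -a -o lv_name,segtype,sync_percent,raid_sync_action,raid_mismatch_count,lv_health_status "$VG"

# One-line SMART verdict per drive (smartctl comes from Alpine's smartmontools package).
for d in /dev/sda /dev/sdb /dev/sdc; do
    printf '%s: ' "$d"
    smartctl -H "$d" | grep -i 'overall-health'
done

Is something along those lines sensible, or am I missing checks that dmeventd would normally do for me?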

Just to be extra clear: I’m not using mdadm. /proc/mdstat lists no active devices. I’m using the built-in raid5 feature in lvm2. I’m running the latest version of Alpine Linux, if that makes a difference.
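
(One thing I did to convince myself of that, assuming I’m reading the output right, is check the segment type of each volume:)

lvs -a -o lv_name,segtype,devices myvg

The volumes report segtype raid5, and the hidden _rimage/_rmeta sub-LVs show which disk each leg lives on.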

Anyway, any help is greatly appreciated!


How I created the array:


pvcreate /dev/sda /dev/sdb /dev/sdc             # label each whole disk as a physical volume
vgcreate myvg /dev/sda /dev/sdb /dev/sdc        # pool them into one volume group

pvresize /dev/sda                               # make sure each PV spans its whole device
pvresize /dev/sdb
pvresize /dev/sdc

lvcreate --type raid5 -L 50G -n vol1 myvg       # three raid5 logical volumes
lvcreate --type raid5 -L 300G -n vol2 myvg
lvcreate --type raid5 -l +100%FREE -n vol3 myvg
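
Related to question 8: my current understanding is that the real metadata lives in the PV headers on the array disks themselves, and that /etc/lvm/backup and /etc/lvm/archive on the boot SSD only hold text backups of it. Is it enough to refresh and copy that text backup somewhere off the SSD, along these lines (destination path is just an example)?

vgcfgbackup myvg                                # refreshes the text backup in /etc/lvm/backup/myvg
cp /etc/lvm/backup/myvg /mnt/backup/myvg.conf   # example destination off the boot SSD

Or is there other state on the SSD (lvm.conf tweaks, filters, etc.) that I should be copying as well?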

For educational purposes, I also simulated a catastrophic drive failure by zeroing out one of the drives. The procedure I used to repair the array, which seemed to work correctly, was as follows:


pvcreate /dev/sda                               # re-label the wiped disk as a fresh PV
vgextend myvg /dev/sda                          # add it back into the volume group
vgreduce --removemissing --force myvg           # drop the old, now-missing PV from the VG
lvconvert --repair myvg/vol1                    # rebuild each degraded raid5 LV onto the new PV
lvconvert --repair myvg/vol2
lvconvert --repair myvg/vol3
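
Is something like the following the right way to confirm the repair actually finished (no missing devices left in pvs output, and every volume back at 100% sync), or is there more to check?

pvs -o pv_name,vg_name,pv_size,pv_free          # should no longer list any missing/unknown device
lvs -a -o lv_name,segtype,sync_percent,lv_health_status,devices myvg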