<code>qemu-img create -f qcow2 sda 18G</code>
<code>lvcreate -L 100G -n vmnamehome vgvms</code>

See [[:tech:virt:start|here]] for further details
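A quick way to sanity-check the results (an added sketch; ''sda'' and ''vgvms'' are the image file and volume group from the commands above):
<code>qemu-img info sda</code>
<code>lvs -o lv_name,lv_size vgvms</code>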
===== Misc Tech Trivia =====
==== Storage Management ====
  * Good article: [[https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-18-04]]
  * **NOTE:** There seems to be an issue with some hardware where GPT partition tables are manipulated by some UEFI functionality, so RAID volumes created on a raw disk (without a partition) are at risk of losing their superblock(s). Because of this, it's generally recommended to use a partition instead of a full-disk device (an example of preparing such a partition follows below).
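  * One way to prepare a member disk as recommended above (a rough sketch, not from the original page; ''/dev/sdX'' is a placeholder for the disk to prepare, repeat per member disk):<code># create a GPT label and one full-size partition, flagged as a RAID member
parted --script /dev/sdX mklabel gpt mkpart primary 0% 100%
parted --script /dev/sdX set 1 raid on</code>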
  * Set up RAID5 with ''mdadm'':<code>mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1</code>
  * Check status: ''cat /proc/mdstat''
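  * For more detail than ''/proc/mdstat'' shows (an added sketch; ''/dev/md0'' is the array created above):<code>mdadm --detail /dev/md0</code>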
update-initramfs -u
</code>
  * Scrub to prevent bit-rot: ''echo check > /sys/block/md**X**1/md/sync_action'' \\ (use ''cat /proc/mdstat'' to monitor)
  * Tune rebuild speed ([[https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html]]): \\ To show limits: ''sysctl dev.raid.speed_limit_min ; sysctl dev.raid.speed_limit_max'' \\ To increase speed, enter: ''sysctl -w dev.raid.speed_limit_min=value'' \\ In ''/etc/sysctl.conf''<code>dev.raid.speed_limit_min = 50000
## good for a 4-5 disk array ##