tech:linux:start · Last modified: 2022/11/13 23:20 by rk4n3

 <code>qemu-img create -f qcow2 sda 18G</code>
 <code>lvcreate -L 100G -n vmnamehome vgvms</code>

See [[:tech:virt:start|here]] for further details
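The two storage options above can be sketched together. A minimal, hedged example — ''sda.qcow2'' and the ''vmnamehome''/''vgvms'' names are just placeholders from the notes, and the qcow2 step is skipped if ''qemu-img'' (from qemu-utils) is not installed:

```shell
#!/bin/sh
# Sketch: back a new VM's disk with either a qcow2 image file or an
# LVM logical volume (both options from the notes above; names are examples).
set -eu

# Option 1: qcow2 image -- a sparse file that grows on demand (needs qemu-utils)
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 sda.qcow2 18G
    qemu-img info sda.qcow2        # verify format and virtual size
else
    echo "qemu-img not installed; skipping"
fi

# Option 2: LVM logical volume -- needs root and an existing volume group:
#   lvcreate -L 100G -n vmnamehome vgvms
```

The qcow2 file only consumes host space as the guest writes, which is why an 18G image can be created instantly on a nearly-full disk.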
  
 ===== Misc Tech Trivia =====

==== X11 Display Auth ====
  * From display owner //(where X is display number)//: ''xauth extract cookiefilename ://**X**//''
  * For remote user, get cookie file and: ''xauth merge cookiefilename''
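The hand-off above can be exercised end-to-end without a running X server, since ''xauth'' operates on plain authority files. A sketch assuming only that ''xauth'' is installed — display '':99'', the ''cookiefile'' name, and the temp authority files are examples standing in for the owner's and remote user's ''~/.Xauthority'':

```shell
#!/bin/sh
# Sketch of the X11 cookie hand-off; skips quietly if xauth is absent.
set -eu
command -v xauth >/dev/null 2>&1 || { echo "xauth not installed; skipping"; exit 0; }

auth_owner=$(mktemp -u)    # stands in for the display owner's ~/.Xauthority
auth_remote=$(mktemp -u)   # stands in for the remote user's ~/.Xauthority

# Display owner: register a cookie for display :99, then extract it to a file
cookie=$(od -An -N16 -tx1 /dev/urandom | tr -d ' \n')
xauth -f "$auth_owner" add :99 MIT-MAGIC-COOKIE-1 "$cookie"
xauth -f "$auth_owner" extract cookiefile :99

# Remote user: after transferring cookiefile (e.g. via scp), merge it in
xauth -f "$auth_remote" merge cookiefile
xauth -f "$auth_remote" list
```

In real use the remote user merges into their default authority file (no ''-f''), after which clients with ''DISPLAY'' pointed at the owner's display can authenticate.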
  
 ==== Storage Management ====
  * Good article: [[https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-18-04]]
  * **NOTE:** There seem to be issues with some hardware where GPT partition tables are manipulated by UEFI functionality, so RAID volumes created on a raw disk (without a partition) are at risk of losing their superblock(s). Because of this, it's generally recommended to use a partition instead of the full-disk device.
  * Set up RAID5 with ''mdadm'':<code>mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdX1 /dev/sdY1 /dev/sdZ1</code>
  * Check status: ''cat /proc/mdstat''
  * Configure for availability after boot: \\ //(**NOTE**: this must be done after recovery status is complete)//<code>
update-initramfs -u
</code>
  * Scrub to prevent bit-rot: ''echo check > /sys/block/md**X**/md/sync_action'' \\ (use ''cat /proc/mdstat'' to monitor)
  * Tune rebuild speed ([[https://www.cyberciti.biz/tips/linux-raid-increase-resync-rebuild-speed.html]]): \\ To show limits: ''sysctl dev.raid.speed_limit_min ; sysctl dev.raid.speed_limit_max'' \\ To increase speed, enter: ''sysctl -w dev.raid.speed_limit_min=value'' \\ In ''/etc/sysctl.conf'':<code>dev.raid.speed_limit_min = 50000
## good for 4-5 disks based array ##
dev.raid.speed_limit_max = 2000000
## good for large 6-12 disks based array ##
dev.raid.speed_limit_max = 5000000</code>
  * Sample LVM with ''mdadm'' RAID:<code>
pvcreate -M2 /dev/md0