Univention Bugzilla – Bug 38830
"partman-md/raid10layout" is force-set to "n2"
Last modified: 2020-07-03 20:56:34 CEST
Created attachment 7000 [details]
partman_rai10layout.patch

Ticket: 2015070221000318

The customer wanted to configure an MD RAID 10 with "offset copies" ("o2") instead of the default "near copies" ("n2") via

    d-i partman-md/raid10layout string o2

but "partman-md/choose_partition/md/do_option" forces the option to "n2" and overwrites what has been defined in the preseed file.
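The attached patch presumably makes do_option fall back to "n2" only when nothing was preseeded. A minimal, self-contained sketch of that fallback logic (the variable RET merely stands in for what cdebconf's db_get would return; in the real installer this value comes from the debconf database, not a plain variable):

```shell
#!/bin/sh
# Sketch: honour a preseeded raid10 layout, defaulting to "n2" (near copies).
# RET mimics the result of `db_get partman-md/raid10layout`.
RET='o2'                 # preseeded value; would be empty if not preseeded
LAYOUT="${RET:-n2}"      # keep "n2" as the default when nothing is preseeded
echo "$LAYOUT"
```

With RET='o2' this prints "o2"; with RET unset or empty it prints "n2".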
Created attachment 7001 [details]
partman-md_75_all.deb

As a workaround, the attached partman-md package (containing the patch) may be added to the local repository with the following commands:

    REPO='/var/lib/univention-repository/mirror/4.0/maintained/'
    COMP='4.0-2/all'
    PKG='partman-md_75_all.udeb'
    cp ../"$PKG" "$REPO/$COMP/"
    cd "$REPO"
    sed -i -e "/^Package: ${PKG%%_*}/,/^\$/d" "$COMP/Packages"
    apt-ftparchive packages "$COMP/$PKG" | tee -a "$COMP/Packages"
    python -c 'from univention.updater.mirror import UniventionMirror;m=UniventionMirror(False);m.update_dists_files()'
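The sed range expression above deletes the whole stanza of the old package from the Packages index before apt-ftparchive appends the new one. A small standalone demo of that deletion (the file content and path here are made up for illustration):

```shell
#!/bin/sh
# Demo: "/^Package: partman-md/,/^$/d" drops everything from the matching
# "Package:" line up to and including the next empty line.
PKG='partman-md_75_all.udeb'
cat > /tmp/Packages.demo <<'EOF'
Package: partman-md
Version: 74

Package: other-udeb
Version: 1
EOF
sed -i -e "/^Package: ${PKG%%_*}/,/^\$/d" /tmp/Packages.demo
cat /tmp/Packages.demo
```

Only the "other-udeb" stanza survives; `${PKG%%_*}` strips everything from the first underscore, yielding the package name "partman-md".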
With the patched package (and "d-i partman-md/raid10layout string o2", of course) /var/log/installer/cdebconf/questions.dat contains:

    ---
    Name: partman-md/raid10layout
    Template: partman-md/raid10layout
    Value: o2
    Owners: d-i, partman-md
    Flags: seen
    Variables:
     ID = partman-md/raid10layout
    ---

But /proc/mdstat still shows the MD with "2 near-copies".
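The layout actually in use can be read from the copies description in /proc/mdstat (or from `mdadm --detail`). A self-contained sketch classifying a sample mdstat-style line (the sample line is illustrative, not copied from the affected system):

```shell
#!/bin/sh
# Classify the raid10 layout from an mdstat-style line; "near-copies" is
# what the unpatched installer always produces regardless of the preseed.
line='md1 : active raid10 vdd2[3] vdc2[2] vdb2[1] vda2[0] [4/4] [UUUU] 2 near-copies'
case "$line" in
  *near-copies*)   echo 'layout: near (n2)' ;;
  *offset-copies*) echo 'layout: offset (o2)' ;;
  *far-copies*)    echo 'layout: far (f2)' ;;
esac
```

For the sample line this prints "layout: near (n2)", matching the bug's observation.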
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=790941
"partman-auto-raid" creates the RAID itself and does not call "partman-md":

    ../partman-auto-raid/auto-raidcfg
    78	mdadm --create /dev/md$MD_NUM --auto=yes --force -R -l raid$RAID_TYPE \
    79		-n $DEV_COUNT $MDADM_PARAMS

"partman-md" is only used in interactive mode. As setting up the nested structure "partition(raid(lvm(fs)))" is quite tricky, "partman-auto-raid" contains the logic for that, but it can't be tricked into leaving the creation to "partman-md" while still doing the rest.

    d-i partman-auto/method string raid
    d-i partman-auto/disk string \
        /dev/vda /dev/vdb /dev/vdc /dev/vdd
    d-i partman-auto/expert_recipe string \
        multiraid :: \
            100 512 256 raid \
                $lvmignore{ } \
                $primary{ } \
                method{ raid } \
            . \
            1000 1000 1000000000 raid \
                $lvmignore{ } \
                $primary{ } \
                method{ raid } \
            . \
            1024 10240 100000000 ext4 \
                $defaultignore{ } \
                $lvmok{ } \
                method{ format } \
                format{ } \
                use_filesystem{ } \
                filesystem{ ext4 } \
                mountpoint{ / } \
                label{ root } \
            . \
            1024 512 100% linux-swap \
                $defaultignore{ } \
                $lvmok{ } \
                method{ swap } \
                format{ } \
                label{ swap } \
            .
    d-i partman-auto-raid/recipe string \
        1 4 0 ext3 /boot \
            /dev/vda1#/dev/vdb1#/dev/vdc1#/dev/vdd1 \
        . \
        10 4 0 lvm - \
            /dev/vda2#/dev/vdb2#/dev/vdc2#/dev/vdd2 \
        .
    d-i partman-md/device_remove_md boolean true
    d-i partman-lvm/device_remove_lvm boolean true
    d-i partman-md/confirm_nooverwrite boolean true
    d-i partman/confirm_nooverwrite boolean true
    d-i partman/choose_partition select finish
    d-i partman/confirm boolean true
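A possible fix is to let an extra "#"-separated recipe field flow into $MDADM_PARAMS of the mdadm call above. A hedged sketch of that idea under stated assumptions: the field name "--layout#o2" follows the later attachment-7016 example, and the tr-based "#" to "=" conversion is my illustration, not the literal patch code:

```shell
#!/bin/sh
# Sketch: turn the recipe field "--layout#o2" into an mdadm option and
# assemble the create command from auto-raidcfg-style variables.
MD_NUM=1; RAID_TYPE=10; DEV_COUNT=4
EXTRA='--layout#o2'                                # extra field from the recipe line
MDADM_PARAMS=$(printf '%s' "$EXTRA" | tr '#' '=')  # -> --layout=o2 (assumed mapping)
echo mdadm --create "/dev/md$MD_NUM" --auto=yes --force -R -l "raid$RAID_TYPE" \
     -n "$DEV_COUNT" $MDADM_PARAMS
```

This prints the command line rather than creating an array, so it can be inspected safely: "mdadm --create /dev/md1 --auto=yes --force -R -l raid10 -n 4 --layout=o2".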
Created attachment 7015 [details] Patched partman-auto-raid
Created attachment 7016 [details]
Patch for partman-auto-raid source code

Example:

    d-i partman-auto-raid/recipe string \
        1 4 0 ext3 /boot \
            /dev/vda1#/dev/vdb1#/dev/vdc1#/dev/vdd1 \
        . \
        10 4 0 lvm - \
            /dev/vda2#/dev/vdb2#/dev/vdc2#/dev/vdd2 # --layout#o2 \
        .
For GPT the following must be added - otherwise GRUB fails to install:

(In reply to Philipp Hahn from comment #4)
> d-i partman-partitioning/default_label select gpt
> d-i partman-auto/expert_recipe string \
>     multiraid :: \
>         32 32 32 free \
>             $gptonly{ } \
>             $lvmignore{ } \
>             $primary{ } \
>             $bios_boot{ } \
>             method{ biosgrub } \
>         . \
>         100 512 256 raid \
>         ...
A Customer ID is set, so I set the flag "Enterprise Customer affected".
The fixed version is in Debian Stretch - this will likely be fixed with UCS 4.3.
This issue has been filed against UCS 4.2. UCS 4.2 is out of maintenance and many UCS components have changed in later releases. Thus, this issue is now being closed. If this issue still occurs in newer UCS versions, please use "Clone this bug" or reopen it and update the UCS version. In this case please provide detailed information on how this issue is affecting you.