Contributed by: Paulo Licio de Geus
Publication date: April 5, 2012
For servers, it is convenient to use RAID arrays for safe storage (disk redundancy), performance, and flexibility when growing storage space. Below is the procedure carried out on Twister to create software RAID, LVM on top of RAID, and EXT3 on top of LVM.
Using 3 IDE disks, we tested the creation of 3 RAID arrays of varying sizes (5, 10, and 15 GB), simulating failures and expansions.
Partitions created:
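(The original partition list did not survive the page conversion; judging from the commands below, each disk appears to have held a ~5 GB, a ~10 GB, and a ~15 GB partition: /dev/hda5, /dev/hda6, and /dev/hda7 on hda; /dev/hdb1, /dev/hdb2, and /dev/hdb3 on hdb; /dev/hdc1, /dev/hdc2, and /dev/hdc3 on hdc; plus /dev/hdb4, a 10 GB partition added later for the expansion test.)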
Packages used:
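(This list was also lost; from the commands used throughout the procedure, the packages involved are presumably mdadm, for software RAID management, and lvm2, for logical volume management.)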
Creating the RAID arrays:
mdadm --create --verbose /dev/mdx --level=5 --raid-devices=n /dev/part1 /dev/part2 ... /dev/partn
where x is the RAID number (it can be chosen) and n is the number of devices.
In the test case:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/hda5 /dev/hdb1 /dev/hdc1
mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 /dev/hda6 /dev/hdb2 /dev/hdc2
mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 /dev/hda7 /dev/hdb3 /dev/hdc3
The command:
mdadm --detail /dev/mdx
shows details of the RAID arrays, such as the partitions allocated to each one, its size, its status, whether there is a failure, and so on.
In the test case:
mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Sep 27 15:05:28 2007
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.14

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3       65        1      active sync   /dev/hdb1
       2      22        1        2      active sync   /dev/hdc1
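Once the arrays are created, it is usually worth persisting their definitions so they can be assembled automatically at boot. A minimal sketch, assuming the configuration file is /etc/mdadm.conf (Debian-based systems typically use /etc/mdadm/mdadm.conf instead):

# append the current array definitions to the mdadm configuration file
mdadm --detail --scan >> /etc/mdadm.conf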
On top of the RAID arrays, we create LVM2 volume groups.
Preparing the partition
pvcreate /dev/mdx
Creating the volume groups
vgcreate lvm-raidx /dev/mdx
Displaying details of all volume groups
vgdisplay
Note the Free PE / Size field (xxxx / xx.x GB); since we are going to create the volumes using all the available space, we will use that value in the next step.
Creating the volumes
lvcreate -l xxxx lvm-raidx -n lvmx
where lvmx is the name of the new volume.
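If the LVM2 version in use supports it, the extent count can also be given as a percentage, which avoids copying the Free PE value by hand; a minimal sketch:

# create a logical volume using all of the free space in the volume group
lvcreate -l 100%FREE lvm-raid0 -n lvm0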
In the test case:
pvcreate /dev/md0
  Physical volume /dev/md0 successfully created
pvcreate /dev/md1
  Physical volume /dev/md1 successfully created
pvcreate /dev/md2
  Physical volume /dev/md2 successfully created
vgcreate lvm-raid0 /dev/md0
  Volume group lvm-raid0 successfully created
vgcreate lvm-raid1 /dev/md1
  Volume group lvm-raid1 successfully created
vgcreate lvm-raid2 /dev/md2
  Volume group lvm-raid2 successfully created
vgdisplay
  --- Volume group ---
  VG Name               lvm-raid2
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               28.62 GB
  PE Size               4.00 MB
  Total PE              7326
  Alloc PE / Size       0 / 0
  Free PE / Size        7326 / 28.62 GB
  VG UUID               KZna1j-MFa6-BLkd-wAPR-EulJ-RG2k-V7khjc

  --- Volume group ---
  VG Name               lvm-raid1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.09 GB
  PE Size               4.00 MB
  Total PE              4886
  Alloc PE / Size       0 / 0
  Free PE / Size        4886 / 19.09 GB
  VG UUID               MXqCKo-ukoh-LtEK-qrKn-VQut-Nh14-X4xGX5

  --- Volume group ---
  VG Name               lvm-raid0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.54 GB
  PE Size               4.00 MB
  Total PE              2443
  Alloc PE / Size       0 / 0
  Free PE / Size        2443 / 9.54 GB
  VG UUID               Xq7RAT-aMBa-qSnp-MnIg-wq3x-EqyJ-CyN0LD

lvcreate -l 2443 lvm-raid0 -n lvm0
  Logical volume lvm0 created
lvcreate -l 4886 lvm-raid1 -n lvm1
  Logical volume lvm1 created
lvcreate -l 7326 lvm-raid2 -n lvm2
  Logical volume lvm2 created
On top of the volumes just created, we will use the EXT3 file system:
mkfs.ext3 /dev/lvm-raidx/lvmx
In the test case:
mkfs.ext3 /dev/lvm-raid0/lvm0
mkfs.ext3 /dev/lvm-raid1/lvm1
mkfs.ext3 /dev/lvm-raid2/lvm2
After this step, the /dev/lvm-raidx/lvmx partitions are ready to be mounted and can be added to /etc/fstab, as sketched below.
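A minimal sketch of mounting one of the volumes and of a matching /etc/fstab entry (the /l/ mount points are simply the ones used in the test):

mkdir -p /l/lvm0
mount /dev/lvm-raid0/lvm0 /l/lvm0
# example /etc/fstab line:
# /dev/lvm-raid0/lvm0  /l/lvm0  ext3  defaults,noatime  0  2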
We can add partitions of the same size as the existing ones and thus grow the total capacity of the RAID online, that is, without having to unmount the volumes.
Adding the new partition to the RAID:
mdadm /dev/mdx -a /dev/nova_particao
Or, equivalently:
mdadm --manage /dev/mdx -a /dev/nova_particao
It will initially appear as a spare partition.
Increasing the number of disks in the RAID:
mdadm --grow /dev/mdx --level=5 --raid-disks=n+1
where n is the current number of disks.
The RAID expansion process is quite slow and can be followed with the status check command:
mdadm --detail /dev/mdx
The progress percentage appears in the Reshape field. The total size only increases once the process finishes.
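The reshape can also be followed through the kernel's software RAID status file; for example (the interval is arbitrary):

# refresh the software RAID status every 30 seconds
watch -n 30 cat /proc/mdstat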
In the test case (note: for this test we added a partition from a disk that already had another partition in this RAID (hdb). If that disk fails, both partitions involved fail and the RAID cannot recover; for this reason, two partitions of the same disk should never be used in the same RAID):
# New partition /dev/hdb4, 10 GB
mdadm /dev/md1 -a /dev/hdb4
mdadm --grow /dev/md1 --level=5 --raid-disks=4
mdadm --detail /dev/md1
/dev/md1:
        Version : 00.91.03
  Creation Time : Thu Sep 27 07:41:45 2007
     Raid Level : raid5
     Array Size : 20016768 (19.09 GiB 20.50 GB)
  Used Dev Size : 10008384 (9.54 GiB 10.25 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Fri Sep 28 11:55:09 2007
          State : clean, recovering
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
 Reshape Status : 55% complete
  Delta Devices : 1, (3->4)
           UUID : a86fb88b:ad13de99:c3fd5b06:f4e079fb
         Events : 0.3726

    Number   Major   Minor   RaidDevice State
       0       3        6        0      active sync   /dev/hda6
       1       3       66        1      active sync   /dev/hdb2
       2      22        2        2      active sync   /dev/hdc2
       3       3       68        3      active sync   /dev/hdb4
Only after this expansion is it possible to check how much space is available for growing the volume.
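Before checking, the physical volume sitting on the grown RAID must itself be resized so that LVM sees the new space; this step appears in the test output further below:

pvresize /dev/mdx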
Once again, we check the details:
vgdisplay lvm-raidx
paying attention to the Free PE / Size value, which will be used next.
Expanding the volume:
lvresize -l +xxxx /dev/lvm-raidx/lvmx
where xxxx is the number of extents available for growth (Free PE / Size).
In the test case:
pvresize /dev/md1
  Physical volume /dev/md1 changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
vgdisplay lvm-raid1
  --- Volume group ---
  VG Name               lvm-raid1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               28.63 GB
  PE Size               4.00 MB
  Total PE              7330
  Alloc PE / Size       4886 / 19.09 GB
  Free PE / Size        2444 / 9.55 GB
  VG UUID               MXqCKo-ukoh-LtEK-qrKn-VQut-Nh14-X4xGX5

lvresize -l +2444 /dev/lvm-raid1/lvm1
  Extending logical volume lvm1 to 28.63 GB
  Logical volume lvm1 successfully resized
Finally, we expand the file system on the volume. Using:
df -h
we can identify the mounted partitions and take note of the name of the one we want to expand.
Expanding:
resize2fs /dev/mapper/lvm--raidx-lvmx
(Note: the device name used here is different from the ones used previously.)
The process can be followed with the df command.
In the test case:
df
Filesystem                  1K-blocks     Used Available Use% Mounted on
/dev/hda2                    19228308  1980080  16271476  11% /
udev                          1036836     2696   1034140   1% /dev
/dev/hda1                    19228276  3338580  14912948  19% /mnt/ubuntu
shm                           1036836        0   1036836   0% /dev/shm
/dev/mapper/lvm--raid2-lvm2  29536340 11268676  16767300  41% /l/lvm2
/dev/mapper/lvm--raid1-lvm1  19698968  8780456   9917860  47% /l/lvm1
/dev/mapper/lvm--raid0-lvm0   9849376  5533592   3815460  60% /l/lvm0

resize2fs /dev/mapper/lvm--raid1-lvm1
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/lvm--raid1-lvm1 is mounted on /l/lvm1; on-line resizing required
Performing an on-line resize of /dev/mapper/lvm--raid1-lvm1 to 7505920 (4k) blocks.
The filesystem on /dev/mapper/lvm--raid1-lvm1 is now 7505920 blocks long.

df
Filesystem                  1K-blocks     Used Available Use% Mounted on
/dev/hda2                    19228308  1980080  16271476  11% /
udev                          1036836     2696   1034140   1% /dev
/dev/hda1                    19228276  3338580  14912948  19% /mnt/ubuntu
shm                           1036836        0   1036836   0% /dev/shm
/dev/mapper/lvm--raid2-lvm2  29536340 11268676  16767300  41% /l/lvm2
/dev/mapper/lvm--raid1-lvm1  29551588  8780456  19272120  32% /l/lvm1
/dev/mapper/lvm--raid0-lvm0   9849376  5533592   3815460  60% /l/lvm0
To remove a disk from an LVM setup without losing the file system:
# umount /l/lv1
# e2fsck -f /dev/mapper/vgscsi1-lv1         # resize2fs forces you to run this first...
# resize2fs /dev/mapper/vgscsi1-lv1 69984M
# e2fsck -f /dev/mapper/vgscsi1-lv1         # just in case...
# lvreduce -L 69984M /dev/mapper/vgscsi1-lv1
# pvmove /dev/sda4                          # if it complains, add --alloc anywhere to the command
# vgreduce vgscsi1 /dev/sda4
# e2fsck -f /dev/mapper/vgscsi1-lv1         # just in case, again...
# badblocks -sv /dev/mapper/vgscsi1-lv1     # just reading, really paranoid...
# mount /l/lv1
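A quick sanity check afterwards, if desired, is to confirm that the device really left the volume group and that the file system still mounts cleanly:

# pvs            # /dev/sda4 should no longer appear as part of vgscsi1
# vgs            # the volume group size should reflect the removal
# df -h /l/lv1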
Failure simulations were performed to test RAID recovery. Two kinds of simulation were carried out: first, the simple removal of a partition from the RAID, and then the physical removal of one of the disks, simulating a disk failure.
Marking a partition as failed (faulty):
mdadm /dev/mdx -f /dev/particao
where /dev/mdx is the RAID and /dev/particao is one of its partitions.
The whole process can be observed with:
mdadm --detail /dev/mdx
Removing the partition from the RAID:
mdadm /dev/mdx -r /dev/particao
Re-adding the partition:
mdadm /dev/mdx -a /dev/particao
After that, the RAID goes into a slow rebuild process.
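The fail and remove steps can also be combined in a single invocation, since mdadm's manage mode accepts several operations on one command line; a sketch using the same placeholders as above:

mdadm /dev/mdx --fail /dev/particao --remove /dev/particao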
In the test case:
mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Oct 1 08:45:30 2007
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.60

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3       65        1      active sync   /dev/hdb1
       2      22        1        2      active sync   /dev/hdc1

mdadm /dev/md0 -f /dev/hdb1
mdadm: set /dev/hdb1 faulty in /dev/md0

mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Oct 1 11:16:51 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.62

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       0        0        1      removed
       2      22        1        2      active sync   /dev/hdc1

       3       3       65        -      faulty spare   /dev/hdb1

mdadm /dev/md0 -r /dev/hdb1
mdadm: hot removed /dev/hdb1

mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Oct 1 11:17:08 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.66

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       0        0        1      removed
       2      22        1        2      active sync   /dev/hdc1

mdadm /dev/md0 -a /dev/hdb1
mdadm: re-added /dev/hdb1

mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Mon Oct 1 11:17:18 2007
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 0% complete
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.70

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       3       3       65        1      spare rebuilding   /dev/hdb1
       2      22        1        2      active sync   /dev/hdc1
We can remove one of the disks, format it, and re-add its partitions to the RAID arrays without losing any data.
In the test case:
mount
/dev/hda2 on / type ext3 (rw,noatime)
proc on /proc type proc (rw,nosuid,nodev,noexec)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec)
udev on /dev type tmpfs (rw,nosuid)
devpts on /dev/pts type devpts (rw,nosuid,noexec)
/dev/hda1 on /mnt/ubuntu type ext3 (rw,noatime)
shm on /dev/shm type tmpfs (rw,noexec,nosuid,nodev)
/dev/mapper/lvm--raid2-lvm2 on /l/lvm2 type ext3 (rw,noatime)
/dev/mapper/lvm--raid1-lvm1 on /l/lvm1 type ext3 (rw,noatime)
/dev/mapper/lvm--raid0-lvm0 on /l/lvm0 type ext3 (rw,noatime)
usbfs on /proc/bus/usb type usbfs (rw,noexec,nosuid,devmode=0664,devgid=85)

df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/hda2                     19G  1.9G   16G  11% /
udev                        1013M  2.7M 1010M   1% /dev
/dev/hda1                     19G  3.2G   15G  19% /mnt/ubuntu
shm                         1013M     0 1013M   0% /dev/shm
/dev/mapper/lvm--raid2-lvm2   29G   11G   16G  41% /l/lvm2
/dev/mapper/lvm--raid1-lvm1   29G  8.4G   19G  32% /l/lvm1
/dev/mapper/lvm--raid0-lvm0  9.4G  5.3G  3.7G  60% /l/lvm0

mdadm --detail /dev/md{0,1,2}
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 10:40:01 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.86

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3       65        1      active sync   /dev/hdb1
       2       0        0        2      removed
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:41:45 2007
     Raid Level : raid5
     Array Size : 30025152 (28.63 GiB 30.75 GB)
  Used Dev Size : 10008384 (9.54 GiB 10.25 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 10:40:12 2007
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : a86fb88b:ad13de99:c3fd5b06:f4e079fb
         Events : 0.6702

    Number   Major   Minor   RaidDevice State
       0       3        6        0      active sync   /dev/hda6
       1       3       66        1      active sync   /dev/hdb2
       2       0        0        2      removed
       3       3       68        3      active sync   /dev/hdb4
/dev/md2:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:41:58 2007
     Raid Level : raid5
     Array Size : 30009216 (28.62 GiB 30.73 GB)
  Used Dev Size : 15004608 (14.31 GiB 15.36 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 10:40:14 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : e8116732:4aadf4f5:7386ce84:809fccf4
         Events : 0.34

    Number   Major   Minor   RaidDevice State
       0       3        7        0      active sync   /dev/hda7
       1       3       67        1      active sync   /dev/hdb3
       2       0        0        2      removed

### Disk hdc has been reformatted
mdadm /dev/md0 -a /dev/hdc1
mdadm: added /dev/hdc1
mdadm /dev/md1 -a /dev/hdc2
mdadm: added /dev/hdc2
mdadm /dev/md2 -a /dev/hdc3
mdadm: added /dev/hdc3

mdadm --detail /dev/md{0,1,2}
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:37:25 2007
     Raid Level : raid5
     Array Size : 10008192 (9.54 GiB 10.25 GB)
  Used Dev Size : 5004096 (4.77 GiB 5.12 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 11:22:52 2007
          State : clean, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
 Rebuild Status : 18% complete
           UUID : 82893a79:7e095f12:28a3a77f:3e276a5e
         Events : 0.96

    Number   Major   Minor   RaidDevice State
       0       3        5        0      active sync   /dev/hda5
       1       3       65        1      active sync   /dev/hdb1
       3      22        1        2      spare rebuilding   /dev/hdc1
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:41:45 2007
     Raid Level : raid5
     Array Size : 30025152 (28.63 GiB 30.75 GB)
  Used Dev Size : 10008384 (9.54 GiB 10.25 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 11:23:04 2007
          State : clean, degraded
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : a86fb88b:ad13de99:c3fd5b06:f4e079fb
         Events : 0.6710

    Number   Major   Minor   RaidDevice State
       0       3        6        0      active sync   /dev/hda6
       1       3       66        1      active sync   /dev/hdb2
       4      22        2        2      spare rebuilding   /dev/hdc2
       3       3       68        3      active sync   /dev/hdb4
/dev/md2:
        Version : 00.90.03
  Creation Time : Thu Sep 27 07:41:58 2007
     Raid Level : raid5
     Array Size : 30009216 (28.62 GiB 30.73 GB)
  Used Dev Size : 15004608 (14.31 GiB 15.36 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent
    Update Time : Thu Oct 4 11:23:09 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1
         Layout : left-symmetric
     Chunk Size : 64K
           UUID : e8116732:4aadf4f5:7386ce84:809fccf4
         Events : 0.42

    Number   Major   Minor   RaidDevice State
       0       3        7        0      active sync   /dev/hda7
       1       3       67        1      active sync   /dev/hdb3
       3      22        3        2      spare rebuilding   /dev/hdc3
Any LVM command at all would complain with:
Incorrect metadata area header checksum
Diagnosis: the metadata of the volume in question (/dev/md0, the only one) was corrupted. This metadata is the control information that identifies the volume as part of an LVM setup and sits at the beginning of the volume's data area. We do not know exactly what led to this, since the machine crashed for no apparent reason, and it is not clear whether it had already finished rebuilding the RAID triggered by the grow command issued after the new disks were added.
Solution: reboot into single-user mode, so that the RAID can be brought up without the LVM on top of it and the command can run with the /dev/mdX device free:
pvcreate -ff --uuid "qwLAHx-y7nb-PjXf-FYCT-wdAn-BIdh-5Mphgm" --restorefile /etc/lvm/backup/vgmedia /d
The UUID above is that of the problematic PV, in this case the only one, which was the RAID's /dev/mdX. paulo 15:02, 25 October 2009 (BST)
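The usual follow-up to a pvcreate --restorefile recovery, assuming the metadata backup in /etc/lvm/backup/vgmedia is intact, is to restore the volume group metadata from that same backup and reactivate the group:

vgcfgrestore vgmedia
vgchange -ay vgmedia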
When manipulating the volumes, whether with mdadm or with LVM, you may run into problems that show up as messages like Device or resource busy. This can happen because the volume is being mapped by the Device Mapper (it appears under /dev/mapper). If you need to manipulate the volumes mapped by the Device Mapper, just invoke:
dmsetup
Its help output is quite self-explanatory.
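A few typical calls, using one of the mappings from the test above as an example:

dmsetup ls                       # list the mapped devices
dmsetup info lvm--raid1-lvm1     # show the state of one mapping
dmsetup remove lvm--raid1-lvm1   # drop the mapping that keeps the volume busy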