This should be an extremely quick fix for someone with mdadm/LVM experience.
I have a 4-drive USB 3.0 external ext4 mdadm/LVM RAID-5 array connected to my Ubuntu 16.10 box (pulled from a broken Synology). The disk manager shows the array as healthy and all the logical volumes are visible, but only 2 of the 5 mount properly. The other 3 fail with exit status 32, and dmesg says "Number of reserved GDT blocks insanely large: 8189"
A filesystem check on those volumes reports "Corruption found in superblock. (reserved_gdt_blocks = 8189)"
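For reference, I believe the offending value can be read straight from the superblock with dumpe2fs; the LV path below is just a placeholder for whichever volume fails to mount:

# -h prints only the superblock header; the LV path is a placeholder
sudo dumpe2fs -h /dev/vg1000/lv_2 | grep -i 'reserved gdt'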
I've attempted to restore the superblocks on those volumes from their backups using e2fsck -b, but none of the backup locations work.
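In case my method is the problem, this is what I tried: a dry-run mke2fs to list where the backup superblocks should live, then e2fsck pointed at one of them. The LV path is again a placeholder, and the reported locations are only a hint if the volume was created with non-default parameters:

# -n is a dry run: it prints the would-be layout and writes nothing to the device
sudo mke2fs -n /dev/vg1000/lv_2
# then retry the check against one of the backup locations it reports, e.g.:
sudo e2fsck -b 32768 /dev/vg1000/lv_2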
mdadm --examine --scan /dev/md127 returns no output.
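Possibly I'm pointing --examine at the wrong level; my understanding is that --examine reads metadata from the member partitions while --detail reports on the assembled array, so these are the checks I'd expect to return something (member names taken from the dmesg output below):

# array-level status of the assembled device
sudo mdadm --detail /dev/md127
# member-level metadata on the component partitions
sudo mdadm --examine /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3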
parted result:

Error: /dev/md127: unrecognised disk label
Model: Linux Software RAID array (md)
Disk /dev/md127: 8987GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
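If I understand the Synology layout correctly, md127 holds an LVM physical volume rather than a partition table, so the "unrecognised disk label" may be harmless. These are the commands I'd run to confirm the LVM stack is intact (names will differ per setup):

# check for a physical volume sitting directly on the array
sudo pvs /dev/md127
# list volume groups / logical volumes, then activate anything inactive
sudo vgs
sudo lvs
sudo vgchange -ay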
Relevant dmesg output from the array assembly:

[ 41.648463] md/raid:md127: device sdb3 operational as raid disk 1
[ 41.649433] md/raid:md127: allocated 4374kB
[ 41.649675] md/raid:md127: raid level 5 active with 4 out of 4 devices, algorithm 2
[ 41.649675] RAID conf printout:
[ 41.649675] --- level:5 rd:4 wd:4
[ 41.649676] disk 0, o:1, dev:sdc3
[ 41.649677] disk 1, o:1, dev:sdb3
[ 41.649677] disk 2, o:1, dev:sde3
[ 41.649678] disk 3, o:1, dev:sdd3
[ 41.649697] md127: detected capacity change from 0 to 8987271954432
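For completeness, the array itself looks healthy whenever I check:

# should show md127 active raid5 with all four members, i.e. [4/4] [UUUU]
cat /proc/mdstat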
I can set up a TeamViewer session with whoever has the most relevant experience to resolve this small but annoying issue.