How to fix “disk contains BIOS metadata error”

A couple of days ago I had to perform an OS install on a customer’s Dell RX 200 server. It was a spare machine that was previously used for backups.

Everything went fine until, at some point during the installation, I received the following message: “Disk contains BIOS metadata, but is not part of any recognized BIOS RAID sets. Ignoring disk sdb.”

OK, this didn’t look like a big issue: one of the two disks had been part of a RAID array before, so I switched to a different terminal (ALT+F2) during the installation and decided to remove the RAID metadata the “classic” way:

WARNING! If you don’t know what you’re doing you may end up deleting all your data! I strongly recommend backing everything up before you start.

dmraid -r -E /dev/sdb
Do you really want to erase "pdc" ondisk metadata on /dev/sdb ? [y/n] :
y

Although no error message was displayed, when I restarted the installation process, guess what: “Disk contains BIOS metadata, but is not part of any recognized BIOS RAID sets. Ignoring disk sdb.” Damn, something was still wrong. One simple workaround would have been to start the installation without RAID support (add nodmraid to the boot options), but that wasn’t possible in my case because I needed to configure the system with RAID support. It was time to find out why the metadata wasn’t removed by dmraid -E or mdadm --zero-superblock.

I tried to zero out the first bytes of the disk with dd, which did NOT help! At this point it became personal 🙂

After a bit of Googling I found a kernel.org wiki page which provides details about the RAID superblock formats.

OK, so it seems that the position of the metadata (256 bytes in total) depends on the superblock version: it can be placed at the BEGINNING or at the END of the device! This means that if you get a drive configured with superblock metadata v0.90 on a system which uses v1.2 and you try to zero out or remove the metadata, the tool will not remove it, because it’s looking for it in the wrong location.
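If you want to see which variant you are dealing with, you can probe the offsets listed on that wiki page for the Linux md superblock magic number (0xa92b4efc, stored little-endian on disk): byte 0 for v1.1, 4 KB from the start for v1.2, and near the end of the device for v0.90/v1.0. A minimal sketch follows; it runs against a scratch file (with a planted magic, purely for illustration) so it is safe to try, and you would substitute the real device, e.g. /dev/sdb, at your own risk:

```shell
# Sketch: probe a device for the Linux md superblock magic (0xa92b4efc).
# DEV points at a scratch file here so the example is safe to run;
# on a real system you would point it at the disk, e.g. /dev/sdb.
DEV=./fake-disk.img
dd if=/dev/zero of="$DEV" bs=1k count=64 2>/dev/null         # 64 KB scratch "disk"
# Plant a v1.2-style magic at offset 4096 for the demo
# (on disk the magic is little-endian: fc 4e 2b a9)
printf '\374\116\053\251' | dd of="$DEV" bs=1 seek=4096 conv=notrunc 2>/dev/null
# v1.1 lives at byte 0, v1.2 at byte 4096; v0.90/v1.0 sit near the END instead
for off in 0 4096; do
  echo "offset $off: $(dd if="$DEV" bs=1 skip=$off count=4 2>/dev/null | od -An -tx1 | tr -d ' ')"
done
MAGIC=$(dd if="$DEV" bs=1 skip=4096 count=4 2>/dev/null | od -An -tx1 | tr -d ' ')
rm -f "$DEV"
```

If one of the probed offsets prints fc4e2ba9, you know exactly which superblock version (and therefore which location) you have to wipe.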

You can erase 99% of the device and that error message will not vanish unless you erase the right portion of the device!

The solution:

1. Get the disk block count. You can get this from /proc/partitions or with fdisk -s (both report the size in 1 KB blocks):

root@server:~# cat /proc/partitions |grep -i sdb
8       16  125034840 sdb

root@server:~# fdisk -s /dev/sdb
125034840
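Since fdisk -s reports the size in 1 KB blocks, the seek offset for wiping the tail is simply that number minus the number of 1 KB blocks you want to step back from the end. A quick sketch of the arithmetic, with the size hard-coded to the value reported above (on a live system you would use `SIZE_KB=$(fdisk -s /dev/sdb)`):

```shell
# Arithmetic sketch: derive the dd seek offset from the size in 1 KB blocks.
SIZE_KB=125034840            # value reported by fdisk -s /dev/sdb above
SEEK=$((SIZE_KB - 2))        # back up two 1 KB blocks from the end
echo "dd if=/dev/zero of=/dev/sdb bs=1k seek=$SEEK"
```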

2. Erase the first 1024 bytes from the beginning of the disk:

dd if=/dev/zero of=/dev/sdb bs=1k count=1

3. Erase the last 2048 bytes at the end of the disk (without a count=, dd writes from the seek offset, here two 1 KB blocks before the end, until it hits the end of the device):

dd if=/dev/zero of=/dev/sdb bs=1k seek=125034838

Or you can simply wrap everything up in a one-liner which zeroes both the beginning and the end of your disk:

dd if=/dev/zero of=/dev/sdb bs=1k count=1; dd if=/dev/zero of=/dev/sdb bs=1k seek=$((`fdisk -s /dev/sdb` - 2))
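If you prefer something a bit more defensive than the one-liner, the same wipe can be wrapped in a short script. The sketch below runs against a scratch file so it is safe to try; DEV, the file size, and the stat call are demo assumptions, and you would point DEV at the real disk (as root) only after triple-checking the device name. Note that on a regular file dd needs conv=notrunc and an explicit count, whereas on a block device the original one-liner simply stops when it reaches the end of the disk.

```shell
#!/bin/sh
# Sketch of the same wipe as the one-liner, demonstrated on a scratch file.
DEV=./scratch.img                                          # substitute the real device here
dd if=/dev/urandom of="$DEV" bs=1k count=32 2>/dev/null    # stand-in "disk"
SIZE_KB=$(( $(stat -c %s "$DEV") / 1024 ))                 # fdisk -s "$DEV" on a real disk
# Zero the first 1 KB block and the last two 1 KB blocks.
dd if=/dev/zero of="$DEV" bs=1k count=1 conv=notrunc 2>/dev/null
dd if=/dev/zero of="$DEV" bs=1k seek=$((SIZE_KB - 2)) count=2 conv=notrunc 2>/dev/null
# Verify: count the non-zero bytes at each end (both should be 0 now).
nonzero() { dd if="$DEV" bs=1k skip="$1" count="$2" 2>/dev/null | tr -d '\0' | wc -c; }
HEAD_NZ=$(nonzero 0 1)
TAIL_NZ=$(nonzero $((SIZE_KB - 2)) 2)
echo "non-zero bytes: head=$HEAD_NZ tail=$TAIL_NZ"
rm -f "$DEV"
```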

7 thoughts on “How to fix “disk contains BIOS metadata error””

  1. PC: AMD A7N8X Rev 2.0 / Athlon XP 3200+ CPU, 2.2 GHz. Adding SATA 2 and SATA 3 drives with a Syba SY-PCI40010 SATA II to PCI RAID card. Two Hitachi 2 TB 7200 rpm SATA 2 server drives.

    1) Made clean install Win 7 with nonraid settings with Syba controller card. All went fine. 2) Decided to make raid 0. All went great. 3) Tried to undo Raid 0 by restoring image made of Win7pro raid 0 on single hard drive. Note: can only boot OS if system drive is a concatenated drive array.

    Want to know how to return again to having win7 pro booting as a non-raid system drive? Would your code work for me? How do I implement it? Thank you.

    Is there a way to get back to non-raid bootable Win 7 Pro with the Syba SYPCI40010 Sata2 Controller card?

    • Hi,

      This is not related to the Microsoft Windows operating system; the “fix” only works on GNU/Linux, so if you use a GNU/Linux distribution it might help.
      The Syba controller card is where you defined the RAID array; you need to destroy the array from its own tool in order to revert to a normal disk setup. Then you will have to reinstall your operating system.

  2. Good article. My addendum: basically, when commands fail (in this case dmraid’s remove-superblock option, or mdadm’s remove-superblock option), try doing the job manually.

    The idea here is that dmraid (or mdadm) and similar RAID tools keep RAID information – also known as RAID superblocks – on the devices. What’s a device? A device can be a partition, a disk, or an LV (anything that can hold data); in this case the whole disk was used as a RAID device, and thus the whole disk held the RAID superblock. Ideally you could clear the superblock by doing this: dd if=/dev/zero of=/dev/sda bs=1M – but the problem is that this will get rid of all of your data (including the superblock). So find the superblock instead: determine the version of RAID used, do some research, or explore the obvious areas with a hex tool – the beginning and the end of the device (disk/partition/LV, whatever); obviously no one would put RAID superblocks in the middle, because that’s reserved for the higher I/O layers’ data – that’s where your data goes.

    In this case Cristian did the correct research, found exactly where the RAID superblock is, and got rid of it using dd: he seeked to the place where the superblock starts and wrote zeros over it, count blocks at a time, until that part of the superblock was overwritten. In the end the part that held the RAID information is all zeros, so the next time the system boots into whatever installer he used, that RAID info will not be detected. Note: if you’re not worried about the data on the drives, you could just write 10 MB of zeros at the beginning and 10 MB of zeros at the end and you’ll be good to go (who cares about the random 1s and 0s in the middle – the higher layer, the filesystem, will allocate that as data space and overwrite it over time).

  3. The solution involved removing the BIOS RAID metadata, apparently left over from a residual software RAID that the 40 GB HDD must have been used in. Running this command in CentOS 6.

  4. I had the same issue with the metadata. The drive isn’t going into a system with RAID but installing Linux failed due to the metadata. Thanks for the resolution. Please note that steps 2 & 3 did NOT solve the problem. I had to use the one-liner command for it to work.

    dd if=/dev/zero of=/dev/sdb bs=1k count=1; dd if=/dev/zero of=/dev/sdb bs=1k seek=$((`fdisk -s /dev/sdb` - 2))
