How to bring a zpool back online (ZFS data recovery)


I have several TBs of very valuable personal data in a zpool which I cannot access due to data corruption. The first search result when googling for 'zfs recover' is the ZFS Troubleshooting and Data Recovery chapter of the Solaris ZFS Administration Guide. One encouraging report I came across describes someone with 15 years of video and music stored in a 10 TB ZFS pool that became defective after a power failure; after about a week of examining the data on the disks, its author was able to restore basically all of it.
Further, Ben Rockwood has posted a detailed article, and there is a video of Max Bruning talking about ZFS recovery (and mdb) at the Open Solaris Developer Conference in Prague on June 28, 2008. I think I have found the root cause: Max Bruning was kind enough to respond to an email of mine very quickly, asking for the output of zdb -lll.
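For reference, this is roughly the command in question - a sketch only, since the device name below is an example and will differ on your system:

    # Print the ZFS vdev labels of one pool member; run it once per drive.
    # Each extra 'l' increases the verbosity of the label dump.
    zdb -lll /dev/ada0

On a healthy member all four labels (0-3) should unpack; a drive truncated by an HPA typically shows problems with the last two.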
So by the looks of it, it was not one of the OS installations that 'wrote a bootloader to one of the drives' (as I had assumed before); it was actually the new motherboard (an ASUS P8P67 LE) creating a 2 MB host protected area at the end of three of the drives, which messed up my ZFS metadata.
The links lead to the Serial Number Checker (which for some reason is protected by a captcha - mine was 'Invasive users') and a knowledge base article about the firmware update. I won't rush into updating the firmware of three drives at a time that have truncated partitions and are part of a broken storage pool. Therefore, the very first thing I'm going to do next is image the drives and work with the copies, so there's an original to go back to if anything goes wrong.
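The imaging itself is a plain dd run; a sketch, assuming /dev/ada1 is a pool member and /dev/ada5 is a blank target of at least the same size (both device names are made up):

    # Bit-exact copy of the whole source drive onto the target drive.
    # conv=noerror,sync keeps going past read errors and pads unreadable
    # blocks; the large block size speeds things up over the 512-byte default.
    dd if=/dev/ada1 of=/dev/ada5 bs=1048576 conv=noerror,sync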
The other option would be to work with the originals and keep the mirrored drives as backup, but then I would probably run into the above complexity if something went wrong with the originals. In order to clear out the three hard drives that will serve as imaged replacements for the three drives with the buggy firmware in the broken pool, I need to create some storage space for the stuff that's on them now. So I'll dig deep into the hardware box and assemble a temporary zpool from some old drives - which I can also use to test how ZFS deals with swapping in dd'd drives.
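The temporary pool is nothing fancy; a sketch with made-up pool and device names:

    # Throw-away striped pool across whatever old disks are at hand -
    # no redundancy, just temporary parking space for the data that has
    # to move off the future image targets.
    zpool create scratch /dev/ada6 /dev/ada7 /dev/ada8
    zfs create scratch/parking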
This article is already much longer than anyone with a ZFS file server out of action has the time to read, so I will go into details here and create an answer with the essential findings further below.
I dug deep in the obsolete hardware box to assemble enough storage space to move the stuff off the single 500GB drives to which the defective drives were mirrored. Ironically, when I first connected the old drives, I realized there was an old zpool on them that I must have created for testing, containing an older version of some, but not all, of the personal data that had gone missing. While this somewhat reduced the data loss, it also meant additional shifting of files back and forth.
Finally, I mirrored the problematic drives to backup drives, used those for the zpool and left the original ones disconnected.
I believe ZFS does notice the hardware change (by some hard drive UUID or whatever), but doesn't seem to care. As the HPA Wikipedia article mentioned earlier notes, the presence of a host protected area is reported when Linux boots and can be investigated using hdparm.
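Under Linux the check looks roughly like this (the device name is an example, and the second, commented-out command is destructive, so it only makes sense on drives you have already imaged):

    # Compare the visible sector count against the native one; the output
    # states whether an HPA is enabled on the drive.
    hdparm -N /dev/sdb

    # To lift the clipping, the visible size can be reset to the native
    # maximum; the leading 'p' makes the setting permanent.
    # hdparm -N p<native-sector-count> /dev/sdb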
So the problem obviously was that the new motherboard created an HPA of a couple of megabytes at the end of the drive which 'hid' the upper two ZFS labels, i.e. zdb could only read labels 0 and 1 and reported 'failed to unpack label' for labels 2 and 3.


After that, I restarted the FreeBSD 7.2 virtual machine on which the zpool had originally been created, and zpool status reported a working pool again. I exported the pool on the virtual system and re-imported it on the host FreeBSD 8.2 system. The problem was that the new motherboard's BIOS created a host protected area (HPA) on some of the drives - a small section used by OEMs for system recovery purposes, usually located at the end of the hard drive. ZFS maintains 4 labels with partition meta-information, and the HPA prevents ZFS from seeing the upper two.
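The export/import step itself is only a couple of commands; 'tank' stands in for the real pool name:

    # Inside the FreeBSD 7.2 VM where the pool was created:
    zpool status tank    # confirm the pool is healthy again
    zpool export tank    # cleanly release it

    # On the FreeBSD 8.2 host:
    zpool import tank
    zpool status tank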
The problem did not only occur with the new motherboard; I had a similar issue when connecting the drives to a SAS controller card. The very first thing I would recommend you do is to get some more hard drives and make duplicate copies of the 8 drives you have with your data on them, using the dd command. I've done this before, and there were times I didn't need it, but the times I did need it made it totally worth the effort.
Since then, I have had to replace pretty much all of the hardware of the host machine and install several host operating systems. The troubleshooting guide's verdict on corrupted data is not encouraging: even if the underlying devices are repaired or replaced, the original data is lost forever.
I see there is zdb, which doesn't seem to be officially documented by Sun or Oracle anywhere on the web. On any of the 4 hard drives in the 'good' raidz1 half of the pool, the output is similar to what I posted above.
However, I do remember I started the pool with 4 drives, then one of them died and was replaced under warranty by Seagate.
I believe this is because the HPA is only created on older drives with a firmware bug that Seagate later fixed in an update. When this entire incident began a couple of weeks ago, I ran Seagate's SeaTools to check whether there was anything physically wrong with the drives (still on the old hardware), and I got a message telling me that some of my drives need a firmware update.
Just a last remark about DOS SeaTools: don't bother trying to boot it standalone - instead, invest a couple of minutes and make a bootable USB stick with the awesome Ultimate Boot CD, which apart from DOS SeaTools also gets you many other really useful tools.

Swapping in the copies might introduce an additional complexity, as ZFS will probably notice that drives were swapped (by means of the drive serial number or yet another UUID or whatever), even though they are bit-exact dd copies onto the same hard drive model.

I've spent months with several open computer cases on my desk, with various stacks of hard drives hanging out, and I have also slept a few nights with earplugs because I could not shut down the machine before going to bed while it was running some lengthy critical operation.


I also had to rip a few hard drives out of their USB enclosures so I could connect them directly over SATA.
With that much hardware, it is an enormous help to have it stacked properly; cables coming loose or a hard drive falling off your desk surely won't help in the process and might cause further damage to your data integrity. The backup drives have newer firmware; at least SeaTools does not report any required firmware updates.
The guys from Sun who created that system have every reason to call it 'the last word in filesystems'.
If you want another, possibly newer point of view, you could try a Solaris 11 Express live CD.
The FreeBSD VM is still available and running fine; only the host OS has now changed to Debian 6.
I post this question here in the hope that it helps me gather enough information for a sane, structured, controlled, informed, knowledgeable approach to getting my data back - and hopefully helps someone else out there in the same situation. On the first 3 of the 4 drives in the 'broken' half, however, zdb reports 'failed to unpack label' for labels 2 and 3.
As I am now trying to reproduce the exact details of that message and the link to the firmware update download, it seems that since the motherboard created the HPA, both SeaTools DOS versions fail to detect the hard drives in question - a quick 'invalid partition' or something similar flashes by when they start, and that's it. For starters, the firmware update most likely cannot be undone - and that might irrevocably ruin my chances of getting my data back.

There were some more unrelated issues involved, and some of the old drives started to fail when I put them back into action, requiring a zpool replace, but I'll skip that.

There's likely a lot newer code running there (zpool in Solaris is now at version 31, whereas you are at version 6), and it might offer better recovery possibilities. I recommended the live CD because it's easy to try out - you can run it in a VM as well. Depending on the importance of your data, you might wish to contact a professional recovery firm, however, as tampering with inaccessible storage pools carries a good chance of making things worse.

The hard drives are made accessible to the guest VM by means of VMware generic SCSI devices, 12 in total.
Don't run zpool upgrade under Solaris though if you want to keep the pool mountable under FreeBSD.
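If you do boot newer ZFS code, checking versions is harmless, and the rewind-style recovery import is the kind of extra possibility meant above. Note that the -F/-n import options did not exist back in pool version 6, so this is only a sketch of what a newer implementation offers ('tank' is again a placeholder):

    # Show the on-disk pool version and the versions this implementation
    # supports - neither command changes anything on disk.
    zpool get version tank
    zpool upgrade -v

    # Dry run: would discarding the last few transactions make the pool
    # importable? Nothing is modified.
    zpool import -nF tank

    # The actual recovery import, rolling back to the last consistent
    # transaction group.
    # zpool import -F tank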


