Posts Tagged Error

Backup job failed. Cannot create a shadow copy of the volumes containing Exchange writer’s data.


I have been facing the Exchange VSS Writer issue when backing up my Exchange Server 2007 with Veeam Backup & Replication for quite some time, and the only way to clear it is to reboot the Exchange Server; sometimes the job passes on the second or third retry.

Last Friday, 16th Nov 2012, my Veeam backup jobs were supposed to run as Active Full backups as per my configuration. The backups went fine for all the jobs except the one which contains the Exchange Mailbox Server, which gave the usual error:

11/16/2012 8:43:20 PM :: Unable to release guest. Error: Unfreeze error: [Backup job failed.
Cannot create a shadow copy of the volumes containing writer's data.
A VSS critical writer has failed. Writer name: [Microsoft Exchange Writer]. Class ID: [{76fe1ac4-15f7-4bcd-987e-8e1acb462fb7}]. Instance ID: [{8ea7190d-337c-448f-b264-3401303b586b}]. Writer's state: [VSS_WS_FAILED_AT_FREEZE]. Error code: [0x800423f2].]

I rebooted the server and retried the job, but no joy; it didn't help. I rebooted a second and a third time, but still no joy. The error was persistent.

I searched and searched for a solution, but the usual results showed up: either reboot the server to clear the timed-out VSS writer, or restart the Microsoft Exchange Information Store service, which should also clear the timed-out writer. I have done both, but without any luck.
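For what it's worth, the writer state can be checked from inside the guest with vssadmin list writers before deciding whether a reboot or an Information Store restart is even needed. The small Python sketch below is just my own quick way of pulling the Exchange writer's state out of that output; the parsing is best-effort and may need adjusting for your Windows version.

import re
import subprocess

def exchange_writer_state():
    # Run the built-in "vssadmin list writers" command and capture its output.
    output = subprocess.run(
        ["vssadmin", "list", "writers"],
        capture_output=True, text=True, check=True
    ).stdout

    # Each writer is reported in its own block; find the Exchange writer's block
    # and pull out its State and Last error lines.
    for block in output.split("Writer name:")[1:]:
        if "Microsoft Exchange Writer" in block:
            state = re.search(r"State:\s*(.+)", block)
            error = re.search(r"Last error:\s*(.+)", block)
            return (state.group(1).strip() if state else "unknown",
                    error.group(1).strip() if error else "unknown")
    return None

print(exchange_writer_state())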

I got fed up with troubleshooting during my holiday and left home to go fishing. While I was out at sea, it occurred to me: why not exclude the C:\ drive of the Exchange VM and select only those drives which contain the Exchange databases?

Hmmm, it seemed like a brilliant idea. As soon as I reached home, I immediately logged in remotely again, excluded the C:\ drive VMDK SCSI (0:0) from the job, and selected only the disks which contain the Exchange databases (the disks are vRDMs):


SCSI (0:1)
SCSI (0:2)
SCSI (0:3)
SCSI (0:4)
SCSI (0:5)
SCSI (0:6)
SCSI (0:8)
SCSI (0:9)
SCSI (0:10)
SCSI (0:11)
SCSI (0:12)
SCSI (0:13)
SCSI (0:14)
SCSI (0:15)
SCSI (1:0)
SCSI (1:1)
SCSI (1:2)
SCSI (1:3)
SCSI (1:4)
SCSI (1:5)
SCSI (1:6)
SCSI (1:8)

Imagine: the job passed the snapshot process very quickly, and to my surprise it started reading :) The Exchange databases were put into backup mode, and the backup speed was a bit faster too: 1.6 TB finished in 4 hours (roughly 400 GB per hour, or about 110 MB/s) :)

Wondering:
Why does excluding SCSI (0:0), which is the C:\ system drive of the Exchange 2007 virtual machine, and including only the vRDM SCSI disks, let the process pass successfully without a VSS error or timeout?

What is the restore impact of backing up only those drives where the Exchange databases reside, without the C:\ drive SCSI (0:0)?

Would it be possible to configure another job against the Exchange VM to back up only the C:\ drive SCSI (0:0) without the rest of the drives? And when I want to restore, would I restore the job which contains the C:\ drive first, followed by the other job which backed up the database drives?

Update:
I can confirm that when I left All Disks selected to process under the Disk Exclusions, the job failed. When I amended the selection and explicitly included SCSI (0:0), which contains the virtual machine's system drive, along with the other disks, the backup was successful.
I think I now understand why it doesn't work when the All Disks radio button is selected, but does work when the SCSI disks are selected explicitly, including SCSI (0:0) with the system drive.

A virtual machine is limited to 60 virtual SCSI disks (four controllers with 15 targets each), and when Veeam Backup initiates the job, it appears to loop through the SCSI targets of the VM: any disk presented on a SCSI target is added to the backup. Exchange is sensitive to how long the freeze takes, and with the job walking every target from SCSI (0:0) all the way to SCSI (3:15), the last possible target, the VSS snapshot didn't complete within the time frame specified by Veeam, so the job failed.
But by explicitly selecting the SCSI targets/disks which are actually presented to the virtual machine, Veeam seems intelligent enough to pick up only those selected disks.
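To see exactly which SCSI targets actually have disks attached on the VM (and therefore which ones to tick in the job), something like the following pyVmomi sketch can list them. The vCenter address, credentials and VM name below are placeholders, and this is only an illustration of the idea, not anything Veeam itself requires.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details and VM name.
si = SmartConnect(host="vcenter.local", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXCHANGE-MBX")

# Map each SCSI controller's key to its bus number, then print (bus:unit) per disk.
controllers = {d.key: d.busNumber for d in vm.config.hardware.device
               if isinstance(d, vim.vm.device.VirtualSCSIController)}
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.controllerKey in controllers:
        print("SCSI (%d:%d)  %s" % (controllers[dev.controllerKey],
                                    dev.unitNumber, dev.deviceInfo.label))

Disconnect(si)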




Cannot use CBT: Soap fault. Error caused by file


Hello,

I have encountered this warning on Veeam Backup & Replication version 6.1.0.205 on one Exchange Server 2007 VM.

The issue started when the backup ran: Veeam was unable to freeze the guest machine to take a hot backup, and the snapshot was left inside the VM's datastore because Veeam didn't remove it. The job was configured to retry a failed run after 15 minutes, so each time it started, Veeam created another snapshot and, when the job failed again, didn't remove that one either. This filled up the datastore and left it without any free space :)

I manually consolidated all the snapshots and deleted them. When I then started the VM, it luckily came up fine.
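For the record, the snapshot cleanup can also be scripted instead of being clicked through in the vSphere Client; a minimal pyVmomi sketch is below (the vCenter details and VM name are placeholders). RemoveAllSnapshots_Task deletes every snapshot and consolidates the delta disks back into the base disks.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Hypothetical connection details and VM name.
si = SmartConnect(host="vcenter.local", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXCHANGE-MBX")

# Delete all snapshots and merge the accumulated delta files into the base disks.
if vm.snapshot is not None:
    WaitForTask(vm.RemoveAllSnapshots_Task())

Disconnect(si)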

But when I re-ran the Veeam job, it started but was very slow and gave the following warning: Cannot use CBT: Soap fault. Error caused by file.

Solution:

Edit the Veeam backup job and, in the Virtual Machines section, hit the Recalculate button. This recalculates the Exchange VM's disks and re-adds them to the Veeam/vCenter database. After that, the job ran as an incremental without the warning, and the full backup also ran without the warning and faster :)
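If Recalculate alone doesn't clear the warning, the usual fallback is to reset Changed Block Tracking on the VM: make sure it has no snapshots, disable CBT, cycle a snapshot so the change takes effect and the -ctk files are recreated, then re-enable it the same way. The pyVmomi sketch below is only an outline of that idea, with placeholder connection details and VM name.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Hypothetical connection details and VM name.
si = SmartConnect(host="vcenter.local", user="administrator", pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "EXCHANGE-MBX")

def set_cbt(vm, enabled):
    # Toggle changeTrackingEnabled, then create and remove a snapshot so the
    # setting takes effect and the change-tracking (-ctk) files are rebuilt.
    WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(changeTrackingEnabled=enabled)))
    WaitForTask(vm.CreateSnapshot_Task(name="cbt-reset", memory=False, quiesce=False))
    WaitForTask(vm.snapshot.currentSnapshot.RemoveSnapshot_Task(removeChildren=False))

set_cbt(vm, False)  # disable CBT
set_cbt(vm, True)   # re-enable CBT

Disconnect(si)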

Thanks,



Mounting Backup Item Failed with vPower NFS with Veeam Backup 6


When you try to mount a backup with Veeam Backup & Replication, you will be presented with an error: Failed on Checking if vPower NFS datastore is mounted on host.

In the vSphere Client, you will get a failed Create NAS Datastore task: "An error occurred during host configuration". The error in the task details is "Operation failed, diagnostics report: Unable to resolve hostname" for the Veeam server.

Error after second attempt:

Solution:

Since my vSphere infrastructure sits on a different subnet from my domain and from where the Veeam machine sits, I had to edit the /etc/hosts file on the ESX hosts as well as the hosts file on the Veeam machine.
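As an illustration only (the names and addresses below are made up, not my real ones), the entry on each ESX host's /etc/hosts simply maps the Veeam server's name to its IP, and the Veeam server's own hosts file (C:\Windows\System32\drivers\etc\hosts) gets a matching line for each host:

# /etc/hosts on each ESX host (hypothetical name and address)
192.168.10.25   veeam01.mydomain.local   veeam01

# hosts file on the Veeam server, one line per ESX host
192.168.20.11   esx01.mydomain.local     esx01
192.168.20.12   esx02.mydomain.local     esx02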

After that, ping by name and by IP was successful, vPower NFS successfully mounted the backup image to the ESX host, and everything started working fine.

Note: I used Veeam 5 with ESX 4.1 and never faced this kind of issue. Since upgrading to Veeam 6 and ESXi 5 U1, I have started facing it.



“Virtual Disk ‘hard disk 0’ is a mapped direct access LUN and its not accessible”


Problem:

vMotion is not possible; when I attempt to vMotion a VM, I get the error
“Virtual Disk ‘hard disk 0’ is a mapped direct access LUN and its not accessible”

This error is generated due to a LUN ID mismatch and a vml.xxx LUN signature mismatch.

Even after I matched the LUN IDs on both hosts, I was still presented with the error when attempting to vMotion the VM.

What’s wrong?

Both the LUN ID and the WWN are matched on both ESX hosts, and the vml.xxx identifier also matches each LUN correctly on each host. At this point at least cold vMotion should work, but in my case it doesn't: when I attempt to cold vMotion the VM I again get “Virtual Disk ‘hard disk 0’ is a mapped direct access LUN and its not accessible”.

ESX01:
LUN Name:  Staff-DB1-H
LUN ID:    5
NAA ID:    naa.6006016086f0250054536426c29ce011
vml ID:    vml.02000500006006016086f0250054536426c29ce011524149442035

ESX02:
LUN Name:  Staff-DB1-H
LUN ID:    5
NAA ID:    naa.6006016086f0250054536426c29ce011
vml ID:    vml.02000500006006016086f0250054536426c29ce011524149442035
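A side note that helped me make sense of these identifiers: the device's NAA ID is embedded inside the vml.xxx string, and the digits near the front appear to carry the LUN number (the 05 here lines up with LUN ID 5). The first part is trivial to verify with the values above:

vml = "vml.02000500006006016086f0250054536426c29ce011524149442035"
naa = "naa.6006016086f0250054536426c29ce011"
print(naa[4:] in vml)  # True: the NAA device ID sits inside the vml identifier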

I found that in the VM Properties > Mapped Raw LUN disk > Physical LUN and Datastore Mapping File view, the LUN signature / vml.xxxx is wrong and does not refer to any LUN among all the presented LUNs.

Hmmm, something strange is going on here!!

This issue happened due to using an existing RDM mapper file :) This Exchange VM was running on ESX 4.0, with all the LUNs mapped to it as RDMs in virtual compatibility mode. I upgraded one of the hosts in the same cluster and added it to another cluster managed by vCenter 4.1 (latest build).

The new host has access to all the LUNs accessed by the old host. I shut down the VM, removed all the RDMs, and removed the VM from the old vCenter inventory. Then, on the new host, I browsed into the datastore where the .vmx file is located and added the VM to the inventory of the new vCenter 4.1. When the VM came up normally, I started adding the RDMs again, but this time as existing disks from the datastore which holds the mapper files.

The problem is exactly this part: adding an existing rdm.vmdk mapper file that points to a LUN as it was presented to another host. Here is the result which created all the hassle.

This shows up in the VM, which is now running on the new host under the new vCenter. But the Physical LUN and Datastore Mapping File still points to / references the old vml.xxx signature from when the VM was running on the ESX 4.0 host.

ESX01:
LUN Name:  Staff-DB1-H
LUN ID:    5
NAA ID:    naa.6006016086f0250054536426c29ce011
vml ID:    vml.02000500006006016086f0250054536426c29ce011524149442035

ESX02:
LUN Name:  Staff-DB1-H
LUN ID:    5
NAA ID:    naa.6006016086f0250054536426c29ce011
vml ID:    vml.02000500006006016086f0250054536426c29ce011524149442035

If you look at the output above, the same LUN “6006016086f0250054536426c29ce011” points to the new vml.02000500006006016086f0250054536426c29ce011524149442035, but in the VM properties it shows a different vml.xxx ID, vml.0200000000060, which doesn't reference any presented LUN :)

Bottom line, the possible solutions for this vary:

  1. Matching the LUN IDs across all the hosts won't solve the problem if the vml.xxx ID is different.
  2. Matching the LUN IDs across all the hosts along with the vml.xxx signature might or might not solve the problem. The datastore which holds the RDM mapper file should also match the LUN ID and vml.xxx seen by the other hosts.
  3. The only solution that resolved this issue for me was:
    1. Dismount the Exchange databases to avoid any unpredictable issue :)
    2. Stop all Exchange services and disable them.
    3. Shut down the VM.
    4. Remove the RDM LUNs and choose "Remove from virtual machine and delete files from disk" (this step only deletes the mapper file; it won't delete the actual data on the NTFS LUN).
    5. Boot the VM and make sure it can be vMotioned with only the VMDK which holds the OS.
    6. Re-add the RDMs to the VM and make sure the WWN and vml.xxx match across all the hosts in the cluster (a quick way to compare them is sketched after this list).
    7. Start the Exchange services.
    8. Mount the Exchange databases if they don't mount by themselves.
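To double-check that last point across hosts, I find it easiest to capture ls -l /vmfs/devices/disks from each host (the vml.xxx entries are symlinks to the naa.xxx devices there) and compare the mappings. The rough Python sketch below assumes the listings have been saved to files with hypothetical names; it simply flags any vml ID that resolves to a different naa device on different hosts.

import re
from collections import defaultdict

def vml_map(listing_path):
    # Parse "vml.xxx -> naa.yyy" symlink lines from a saved "ls -l" listing.
    mapping = {}
    with open(listing_path) as f:
        for line in f:
            m = re.search(r"(vml\.\S+)\s+->\s+\S*?(naa\.\S+)", line)
            if m:
                mapping[m.group(1)] = m.group(2)
    return mapping

# Hypothetical capture files, one per host.
hosts = {"ESX01": "esx01-disks.txt", "ESX02": "esx02-disks.txt"}

seen = defaultdict(dict)
for host, path in hosts.items():
    for vml, naa in vml_map(path).items():
        seen[vml][host] = naa

for vml, per_host in seen.items():
    if len(set(per_host.values())) > 1:
        print("MISMATCH %s: %s" % (vml, per_host))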

Result:

Task:                  Migrate virtual machine
Target:                MAIL001
Status:                Completed
Initiated by:          Administrator
Requested start time:  26/07/2011 20:51:36
Start time:            26/07/2011 20:51:36
Completed time:        26/07/2011 20:53:13

