Archive for category VMware vSphere

Unable to allocate processing resources. Error: No backup proxy is able to backup this VM. Check processing mode settings on proxies.


Hello,
Are you seeing this error when you try to back up your VMs with Veeam Backup & Replication 6.5 in Direct SAN Access mode? Can your proxy servers actually see the VMFS LUNs that hold the VMs you are backing up?

This error is generated when the job is configured to back up VMs via Direct SAN Access ("Using source proxy VMware Backup Proxy [san]") but the proxy does not have access to the VMFS LUNs.

Solution:

To sort out this issue:
1. Make sure your proxy server can see the VMFS LUNs in Windows Disk Management.
2. Make sure the proxy is configured for Direct SAN Access.
3. Make sure the Direct SAN Access proxy is selected in the backup job configuration, under the Storage section.
4. If you are still getting the error and the backup still fails, open the proxy configuration under Backup Infrastructure -> Backup Proxies, right-click the desired proxy and select Properties. In the selected datastores, manually select the VMFS LUNs that contain the VMs to be backed up and try again.
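As a quick sanity check for step 1, you can list the disks the proxy sees from PowerShell instead of opening Disk Management. This is only a minimal sketch; run it on the proxy server itself:

# List the physical disks the proxy can see; the SAN LUNs should show up here
# with their array model names (for example an EMC or IBM device).
Get-WmiObject -Class Win32_DiskDrive |
    Select-Object Index, Model, InterfaceType, @{n='SizeGB';e={[math]::Round($_.Size/1GB,1)}} |
    Sort-Object Index |
    Format-Table -AutoSize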

Hope it helps..


Scheduling Veeam Backup Jobs for Daily Incremental and Weekly Full using PowerShell


As every Veeam Backup & Replication user/administrator knows, only one schedule can be configured for each job, and that single schedule covers the first full backup, the active full backups and the daily increments.

A daily incremental takes less time than a full backup, so you can comfortably configure the jobs to run one after another with a 20–30 minute gap between them. Within the same job you are restricted to choosing either a synthetic full (which transforms all the previous incrementals into one full *.VBK file) or an active full.

A synthetic full keeps the incremental runs short, but if you don't have good CPU/memory resources and disk I/O, the transformation process can take a long time or you might end up with the VM freezing completely.

But if you use active fulls and schedule the weekly active full backup on, say, Friday, then when Friday comes the job runs as a full in the same time slot as the daily incremental and takes much longer to finish. The second job then starts before the first one has finished, so you end up with all the jobs running at once, which slows down the read/write performance of the disks and of the Veeam server.

Since Friday is the day off and the servers/systems are barely utilized, I have the whole day to run my backups. But can I configure the active full backup to run on a different day, or at a different time, than the one configured in the backup job's scheduler?

Currently there is no way to configure this from the Veeam GUI; it would be a fantastic feature if Veeam considered adding a separate schedule for the active full backup.

The only way to make sure your jobs run without overlapping each other and running into performance issues is a very simple Veeam PowerShell script plus a Windows batch script that calls the PS1 file; together they make sure all jobs run exactly when you need them to.
In my environment I'm backing up 44 VMs divided into 5 jobs; I created scripts for each job and disabled the built-in backup job schedule:

Critical VMs-1

Critical VMs-2

Critical VMs-3

Critical VMs-4

Critical VMs-5

I have created two scripts for each job, as follows:

PowerShell script "Critical VMs-1.PS1":

Add-PSSnapIn -Name VeeamPSSnapIn -ErrorAction SilentlyContinue
Get-VBRJob -Name "Critical VMs-1" | Start-VBRJob

This PowerShell script finds the job named "Critical VMs-1" via the Get-VBRJob cmdlet and starts it via Start-VBRJob.

And a Windows batch script to call the Critical VMs-1.PS1 script and run it in PowerShell:

Windows batch script "RunCritical-1.bat":

PowerShell.exe -File "G:\Scripts\Critical VMs-1.PS1"

I repeated the same scripts for each job. Then, in Task Scheduler on Windows Server 2008 R2, I configured two schedules for each backup job: one runs RunCritical-1.bat from Sunday to Thursday for the daily incrementals, and the other runs the same script on Friday at a different time than the daily schedule. This gives me a large enough window to run my backup jobs from early morning on Friday, and since the Veeam backup job is already configured with an active full on Friday, it is automatically triggered and runs as a full.
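If you would rather not juggle separate start times at all, a variation of the same script can chain the jobs so that each one starts only after the previous one has finished. This is a minimal sketch, under the assumption that Start-VBRJob, when called without -RunAsync, waits for the job to complete before returning:

Add-PSSnapIn -Name VeeamPSSnapIn -ErrorAction SilentlyContinue

# Job names from my environment; adjust them to yours.
$jobNames = "Critical VMs-1", "Critical VMs-2", "Critical VMs-3", "Critical VMs-4", "Critical VMs-5"

foreach ($name in $jobNames) {
    $job = Get-VBRJob -Name $name
    if ($job) {
        # Without -RunAsync the cmdlet should block until the job finishes,
        # so the next job only starts once this one is done.
        Start-VBRJob -Job $job
    }
}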

Hope it helps;


Validation of Virtual Machines Backup using VMWare vSphere and Veeam Backup


Validation of Virtual Machines Backup using Veeam Backup & Replication 5.0 / 6.0

 Introduction:

The objective of this document is to validate the backup of the virtual machines in my organization's production environment, to be restored at another site or in a test environment.

This validation report focuses on the Veeam technology that makes the restoration of all the applications running on Veeam-backed-up virtual machines possible and successful.

With Veeam Backup, any successful virtual machine backup can be restored to any virtual environment, or the VMs can be run directly from the backup disk image or in a Virtual Lab, for instance Exchange servers or a domain controller.

Download it from the below link.

Validation of Virtual Machines Backup using VMWare vSphere and Veeam Backup


Mounting Backup Item Failed with vPower NFS with Veeam Backup 6


When you try to mount a backup with Veeam Backup & Replication, you are presented with the error "Failed on Checking if vPower NFS datastore is mounted on host".

In the vSphere Client you get a failed "Create NAS Datastore" task with "An error occurred during host configuration"; the error in the details view is "Operation failed, diagnostics report: Unable to resolve hostname" for the Veeam server.

Error after second attempt:

Solution:

Since my vSphere infrastructure sits on a different subnet than my domain and the Veeam machine, I had to edit /etc/hosts on the ESXi hosts as well as the hosts file on the Veeam machine.
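For reference, here is a minimal sketch of the Veeam-side change in PowerShell (run it elevated; the IPs and host names below are made up for illustration), with the matching ESXi-side entry shown as a comment:

# Add the ESXi hosts to the local hosts file on the Veeam server.
# Hypothetical names/IPs - replace with your own.
$entries = @(
    "10.10.5.11`tesx01.mydomain.local`tesx01",
    "10.10.5.12`tesx02.mydomain.local`tesx02"
)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value $entries

# On each ESXi host, /etc/hosts needs the reverse entry for the Veeam server, e.g.:
# 192.168.10.25   veeam01.mydomain.local   veeam01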

After that, pinging by name and by IP was successful, vPower NFS mounted the backup image to the ESXi host, and everything started working fine.

Note: With Veeam 5 and ESX 4.1 I didn't face this kind of issue; since upgrading to Veeam 6 and ESXi 5 U1 I have started facing it.


Cannot reach iSCSI target after enabling iSCSI port binding in ESXi5.0


Today I encountered a funny issue that took me quite a while to figure out. In my environment, my ESX servers are connected to an EMC AX4-5i, and I had configured iSCSI port binding to achieve Round Robin load balancing.

I received a new IBM DS3512, and recently put in the second controller, to replace the EMC SAN storage. In IBM Storage Manager I configured the host profile, created a test LUN and presented it to one of the ESX hosts. I confirmed everything was correct: I could ping the target IP addresses and the ESX VMkernel IP address, but when I did a rescan, nothing was added. I was pulling my hair out trying to figure out what was wrong.

It turned out that the existing iSCSI port binding was the issue. After removing both VMkernel port groups that were bound to vmhba41 and doing a rescan, the IBM iSCSI target was detected and the LUN appeared.

In vSphere Client:

  1. Click Configuration > Storage Adapters > iSCSI Software Initiator > Properties > Configure.
  2. Select the Network Configuration tab.
  3. Select the vmk which is iSCSI compliant, then click OK.

Or you can do it from the command line with esxcli:

esxcli iscsi networkportal add -A vmhbaX -n vmkY

Where X is the vmhba device number, and Y is the vmkernel port configured to access iSCSI storage.
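If memory serves, the same namespace also lets you check and undo bindings, which is handy when an old binding is blocking discovery, as it was here (syntax as I recall it on ESXi 5.0, so double-check on your build), followed by a rescan of the adapter:

esxcli iscsi networkportal list -A vmhbaX
esxcli iscsi networkportal remove -A vmhbaX -n vmkY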


Integrating Veeam Backup with Symantec Backup Exec 2010


Integrating Veeam Backup with Symantec Backup Exec 2010

Introduction:

This document highlights the steps required to set up a successful integration of Veeam Backup with Symantec Backup Exec 2010 for tape backup and off-site storage.

Steps to a successful integration:

Veeam B&R v6.0.0.181 is installed on a virtual machine running Windows 2008 R2 64-bit as the guest operating system, with 8 GB of RAM and 2 virtual CPU sockets with 4 cores per socket. The resources the Veeam server needs depend on the environment and the workload placed on it during backup operations.

All backup repositories are mapped through the iSCSI initiator with MPIO. The sizing of the repositories also depends on each environment: how much data needs to be restorable from disk and for how long. The retention period plays a big role in how big the repository storage needs to be.

In my environment, I have multiple targets:

No   Repository Target       Size
1    IBM iSCSI – DS3500      2.0 TB
2    IBM iSCSI – DS3500      745 GB
3    EMC iSCSI – AX4-5i      800 GB
4    OpenFiler iSCSI 2.3     1.4 TB
5    IBM iSCSI – DS3500      2.0 TB
     Total                   6.9 TB

Veeam Backup Configuration:

All of the above repositories are added to Veeam Backup & Replication v6.0.0.181, and backup jobs are configured to send each VM to one of them.

Due to the lack of disk space in the repositories, which limits the number of days the backed-up VMs can remain on disk, all the backup jobs are configured as follows.

Categorizing the backup jobs based on the roles of the virtual machines helps a lot, since it saves you from backing up static virtual machines that don't change very often as aggressively as the dynamic ones.

No   Backup Category                Description
1    Critical Virtual Machines      Virtual machines that change dynamically, such as the mail server, file server, archive server, SharePoint, etc.
2    Non-Critical Virtual Machines  Virtual machines that do not change on a daily basis, such as antivirus, proxy servers, WSUS, deployment servers.

Critical Virtual Machines Job Settings:

Since these virtual machines change dynamically and require daily, weekly and monthly backups, the backup job settings are configured based on the available disk space and how far back we need to be able to restore from disk.

Restore points to keep on disk: 5

Deleted VMs data retention period: 5

The backup mode is set to incremental with a weekly active full backup on Friday, and the backup is scheduled to run Sunday through Friday.

These settings result in one full backup (.vbk file) on Friday, with incrementals from Sunday through Thursday giving five .vib files.

Note: Since the deleted VMs data retention period is set to 5, once CheckDateValue() reaches 5 days the previous full backup chain is deleted from disk, and the chain continues with new .vib files against the latest .vbk file.

Example:

The full backup runs on 27th April 2012, and its chain has incrementals from Sunday through Thursday.

Week #1

No   Backup Date        File                           Day        Backup Mode
1    04/27/2012 02:43   VM001012012-04-27T143033.vbk   Friday     Full
2    04/29/2012 02:37   VM001012012-04-29T143033.vib   Sunday     Incremental
3    04/30/2012 09:42   VM001012012-04-30T093346.vib   Monday     Incremental
4    05/01/2012 02:37   VM001012012-05-01T143039.vib   Tuesday    Incremental
5    05/02/2012 02:37   VM001012012-05-02T143047.vib   Wednesday  Incremental
6    05/03/2012 02:38   VM001012012-05-03T143029.vib   Thursday   Incremental

Week #2

No   Backup Date        File                           Day        Backup Mode
1    05/05/2012 11:18   VM001012012-05-04T143026.vbk   Friday     Full
2    05/06/2012 02:36   VM001012012-05-06T143052.vib   Sunday     Incremental
3    05/07/2012 02:36   VM001012012-05-07T143038.vib   Monday     Incremental
4    05/08/2012 02:36   VM001012012-05-06T143052.vib   Tuesday    Incremental
5    CheckDateValue()
6    –

When CheckDateValue() reaches Wednesday of the next week, the previous backup chain is deleted entirely.
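If you want to see what such a chain actually looks like on the repository disk, a quick directory listing shows the .vbk and .vib files in order. This is a minimal PowerShell sketch; the path is hypothetical, so point it at your own repository folder:

# List the backup chain files for one job, oldest first.
Get-ChildItem -Path "E:\Backups\Critical VMs-1" -Recurse -Include *.vbk, *.vib |
    Sort-Object LastWriteTime |
    Select-Object LastWriteTime, Length, Name |
    Format-Table -AutoSize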

Non-Critical Virtual Machines:

The same applies to the non-critical virtual machine backup jobs, except that both values are set to 1: restore points to keep on disk = 1 and deleted VMs data retention period = 1.

This will allow only one restore point on disk.

Symantec Backup Exec 2010 Configuration:

In all traditional backup software the well-known backup methods are full, incremental and differential. With the Symantec Backup Exec 2010 and Veeam integration, some caution is needed when choosing which backup method to use for backing up the Veeam image files (*.VBK or *.VIB).

When a Symantec full backup is configured to back up the Veeam images, it backs up everything on the drive and resets the files' archive bits.

When Veeam Backup runs again and creates a new VIB or VBK file, and Symantec then runs an incremental against the Veeam files, only the files that changed since the last run are backed up.

But when a Symantec full backup runs against the Veeam repository, it backs up everything on the drive, meaning all the full *.VBK files along with all the *.VIB files. This makes the job likely to take a very long time to finish and effectively backs the data up twice.

The best option here for backing up the Veeam images with Symantec is to set all the backup methods, daily, weekly and monthly, to incremental with reset archive bit.

Also, since Veeam Backup & Replication doesn't have an option for a monthly backup schedule, the last Wednesday of the month can be treated as the monthly backup; for that job the backup method has to be Copy, otherwise nothing will be written to tape because the archive bit has already been reset.

In my example, Wednesday is the day CheckDateValue() falls on in the Veeam backup jobs, so that is when the previous chain gets deleted.



IBM DS3500 Multipathing / vSphere MPIO


Below I'm summarizing how I configured the IBM DS3500 SAN storage for my DR site.

Each iSCSI host port on the IBM DS3500 is on a different subnet:

  • Port-3 10.10.10.1
  • Port-4 10.10.20.1
  • Port-5 10.10.30.1
  • Port-6 10.10.40.1 /24

 

The storage processor ports are connected to two Dell 2724 switches: the 10.x subnet goes into port 1 of pSwitch1, the 20.x subnet into port 2 of pSwitch1, the 30.x subnet into port 1 of pSwitch2 and the 40.x subnet into port 2 of pSwitch2.

On the ESX side, one vSwitch is attached to two vmnics, vmnic2 and vmnic3. In this vSwitch I created four VMkernel port groups:

  • iSCSI-01 10.10.10.30
  • iSCSI-02 10.10.20.30
  • iSCSI-03 10.10.30.30
  • iSCSI-04 10.10.40.30

If all the NICs are active at the same time and you vmkping the storage processor, you get duplicate (DUP!) replies from one of the subnets; that ping then stops working, while the ping to the other storage processor keeps working.

The way to overcome this is to make vmnic2 active for iSCSI-01 and iSCSI-02 with vmnic3 unused for those two port groups, and likewise vmnic3 active only for iSCSI-03 and iSCSI-04 with vmnic2 unused for them. After that the ping is steady, without any (DUP!).

iSCSI port binding is then done on the software iSCSI adapter so that it includes all of the VMkernel port groups: esxcli swiscsi nic add -n vmk# -d vmhba33.

After that, all four paths appeared, and I changed the path selection policy to Round Robin to have them all Active (I/O) at the same time.
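For what it's worth, the Round Robin change can also be scripted with PowerCLI instead of clicking through every datastore. This is only a rough sketch; the vCenter name, host name and naa prefix are placeholders, so adjust the filter to match your own array's devices:

Connect-VIServer -Server vcenter01.mydomain.local

# Set Round Robin on the array LUNs presented to one host.
Get-VMHost "esx01.mydomain.local" |
    Get-ScsiLun -LunType disk |
    Where-Object { $_.CanonicalName -like "naa.600a0b80*" } |
    Set-ScsiLun -MultipathPolicy RoundRobin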

Testing was done by switching off pSwitch1: the connection to the SAN storage stayed alive and the number of paths changed from four to two. Turning off pSwitch2 instead, the connection to the SAN storage also stayed alive and the number of paths again changed from four to two.

As per IBM, this is not the recommended way of configuring it and it is unsupported, though I don't know the reason why. According to the IBM engineer who installed this SAN storage, the only supported method is to have just two paths when only one controller is installed; you cannot use four connections. He was surprised by what he saw in my configuration :). As per him, I can use 8 paths only if the second controller is installed, which I can't find any logical answer for.


“Virtual Disk ‘hard disk 0’ is a mapped direct access LUN and its not accessible”


Problem:

vMotion is not possible; when I attempt to vMotion a VM I get the error:
"Virtual Disk 'hard disk 0' is a mapped direct access LUN and its not accessible"

This error is generated due to a LUN ID mismatch and a vml.xxx LUN signature mismatch.

Even after I made the LUN IDs match on both hosts, I was still presented with the error when attempting to vMotion the VM.

What’s wrong?

Both the LUN ID and the WWN match on both ESX hosts, and the vml.xxx identifier also matches each LUN on each host correctly. At this point at least cold vMotion should work, but in my case it doesn't, and when I attempt to cold-vMotion the VM I again get "Virtual Disk 'hard disk 0' is a mapped direct access LUN and its not accessible".

Host    LUN Name      LUN ID   Device / Signature
ESX01   Staff-DB1-H   5        naa.6006016086f0250054536426c29ce011
                               vml.02000500006006016086f0250054536426c29ce011524149442035
ESX02   Staff-DB1-H   5        naa.6006016086f0250054536426c29ce011
                               vml.02000500006006016086f0250054536426c29ce011524149442035

I found that in the VM properties, under the mapped raw LUN disk > Physical LUN and Datastore Mapping File, the LUN signature / vml.xxxx is wrong and does not refer to any of the presented LUNs.

Hmmm, something strange is going on here!!

This issue happened because an existing RDM mapping file was reused. This Exchange VM was running on ESX 4.0, with all the LUNs mapped to it as RDMs in virtual compatibility mode. I upgraded one of the hosts that was in the same cluster and added it to another cluster managed by the latest build of vCenter 4.1.

The new host has access to all the LUNs the old host could access. I then shut down the VM, removed all the RDMs, and removed the VM from the old vCenter inventory. After that, on the new host, I browsed the datastore where the .vmx file is located and added the VM to the inventory of the new vCenter 4.1. Once the VM came up normally, I started adding the RDMs again, but this time as existing disks from the datastore that holds the mapping files.

The problem is exactly this part: adding an existing rdm.vmdk mapping file that points to a LUN as it was presented to another host. That is what created all the hassle.

This shows up in the VM, which is now running on the new host under the new vCenter01, but the Physical LUN and Datastore Mapping File still references the old vml.xxx signature from when the VM was running on the ESX 4.0 host.

Host    LUN Name      LUN ID   Device / Signature
ESX01   Staff-DB1-H   5        naa.6006016086f0250054536426c29ce011
                               vml.02000500006006016086f0250054536426c29ce011524149442035
ESX02   Staff-DB1-H   5        naa.6006016086f0250054536426c29ce011
                               vml.02000500006006016086f0250054536426c29ce011524149442035

If you look at the listing above, the same LUN naa.6006016086f0250054536426c29ce011 points to vml.02000500006006016086f0250054536426c29ce011524149442035, but in the VM properties it shows a different vml.xxx ID, vml.0200000000060, which doesn't reference anything.
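A quick way to compare what each host reports with what the VM's mapping files actually reference is a bit of PowerCLI. This is only a sketch: the vCenter, host and VM names are placeholders, and, if I recall correctly, the DeviceName property of an RDM disk is the vml identifier its mapping file points to:

Connect-VIServer -Server vcenter01.mydomain.local

# How each host sees its LUNs (naa and runtime names)
Get-VMHost "esx01.mydomain.local", "esx02.mydomain.local" |
    Get-ScsiLun -LunType disk |
    Select-Object VMHost, CanonicalName, RuntimeName

# The RDM disks attached to the VM and the device their mapping files reference
Get-VM "MAIL001" |
    Get-HardDisk -DiskType RawVirtual, RawPhysical |
    Select-Object Name, Filename, ScsiCanonicalName, DeviceName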

Bottom line, the solution for this varies:

  1. Matching the LUN IDs across all the hosts won't solve the problem if the vml.xxx ID is different.
  2. Matching the LUN IDs across all the hosts along with the vml.xxx signature might or might not solve the problem. The datastore that holds the RDM mapping file should also match the LUN ID and vml.xxx seen by the other hosts.
  3. The only solution that resolved this issue was:
    1. Dismount the Exchange databases to avoid any unpredictable issues.
    2. Stop all Exchange services and disable them.
    3. Shut down the VM.
    4. Remove the RDM LUNs, choosing "Remove from virtual machine and delete files from disk" (this step does not delete the actual data on the NTFS LUN).
    5. Boot the VM and make sure it can be vMotioned using only the VMDK that holds the OS.
    6. Re-add the RDMs to the VM and make sure the WWN and vml.xxx match on all the hosts in the cluster.
    7. Start the Exchange services.
    8. Mount the Exchange databases if they don't mount by themselves.

Result:

Task: Migrate virtual machine
Target: MAIL001
Status: Completed
Initiated by: Administrator
Requested start time: 26/07/2011 20:51:36
Start time: 26/07/2011 20:51:36
Completed time: 26/07/2011 20:53:13
