XenServer: Internal error: another frontend device is already connected to this domain

During our testing, this error has popped up a few times when trying to start VMs after unplugging a blade without shutting it down correctly (simulating a failure scenario).

NOTE: This is the error that is shown in XenCenter while trying to start the VM.

4/05/2011 3:33:15 PM Error: Starting VM '36' - Internal error: another frontend device is already connected to this domain (frontend (domid=0 | kind=vbd | devid=51712); backend (domid=0 | kind=vbd | devid=51712))

It usually fixes itself if you wait a few hours or days and try starting the VM again. This, however, is not ideal, as it means your VM is down until the error disappears.

I fixed this in one instance (I have not tested it again, as I’m waiting for the error to pop up once more) by doing the following. This is for a VM with the name-label “36”:

[root@blXXX lib]# xe vbd-list vm-name-label=36
uuid ( RO)             : f84cfbef-1054-f8e8-5d95-43587b63060a
          vm-uuid ( RO): b7fa54cf-4cc7-ec1e-0310-27b629a83d30
    vm-name-label ( RO): 36
         vdi-uuid ( RO): <not in database>
            empty ( RO): true
           device ( RO): xvdd

uuid ( RO)             : b4ff5293-46f6-6944-2b51-f5a0ea0aa92c
          vm-uuid ( RO): b7fa54cf-4cc7-ec1e-0310-27b629a83d30
    vm-name-label ( RO): 36
         vdi-uuid ( RO): 996b046b-1960-4315-bad7-48830254d4c3
            empty ( RO): false
           device ( RO): xvda

[root@blXXX lib]#

 

As you can see, the first VBD entry for this VM has:

vdi-uuid ( RO): <not in database>

Removing that VBD entry seems to fix the problem:

[root@blXXX lib]# xe vbd-destroy uuid=f84cfbef-1054-f8e8-5d95-43587b63060a
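
If the VM has several disks, you can narrow the listing down to just the fields that matter, which makes orphaned entries easier to spot (a sketch using standard xe list parameters):

[root@blXXX lib]# xe vbd-list vm-name-label=36 params=uuid,vdi-uuid,device,empty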

I’ve seen some places recommend using xenstore-ls and xenstore-rm, but the above method seems to work better, as the xenstore-rm command did not seem to address the issue.
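
For reference, those suggestions generally boil down to something like the following, where <domid> is a placeholder for the affected frontend domain ID and the devid (51712 here) comes from the error message above – the exact xenstore paths vary by host, which may be why this approach is hit and miss:

[root@blXXX lib]# xenstore-ls /local/domain/0/backend/vbd
[root@blXXX lib]# xenstore-rm /local/domain/0/backend/vbd/<domid>/51712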

UPDATE:

Upon further testing we found that if there are multiple entries with “vdi-uuid ( RO): <not in database>” in the vbd-list output for the VM – remove all of them.
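
To make that less tedious, here is a small shell sketch that removes every such VBD for one VM in a single pass. It assumes xe reports orphaned VBDs as “<not in database>” (or as an empty value with --minimal) on your XenServer version, and VM_NAME is a stand-in for your VM’s name-label – comment out the vbd-destroy line first if you want a dry run:

#!/bin/bash
# Destroy every VBD on one VM whose VDI is missing from the database.
# Assumption: "xe vbd-list ... params=vdi-uuid --minimal" returns either an
# empty string or "<not in database>" for such VBDs - verify on your host.
VM_NAME="36"   # stand-in: replace with your VM's name-label

for vbd in $(xe vbd-list vm-name-label="$VM_NAME" params=uuid --minimal | tr ',' ' '); do
    vdi=$(xe vbd-list uuid="$vbd" params=vdi-uuid --minimal)
    if [ -z "$vdi" ] || [ "$vdi" = "<not in database>" ]; then
        echo "Destroying orphaned VBD $vbd"
        xe vbd-destroy uuid="$vbd"
    fi
done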


 



  • trx

    Same happened to us.
    It seems all VMs got one “<not in database>” VBD, no matter how many VBDs they had before.
    It was discovered right after applying XS56EFP1006.xsupdate.