Monthly Archives: May 2016

vSphere Client timeout after replacing Machine SSL Certificate

The last few days I have been troubleshooting a very strange issue with the C# vSphere Client on a new vSphere 6.0 install for a customer. The vSphere Client was initially working fine, until I replaced the Machine SSL Certificate for vCenter. After the Machine SSL Certificate was replaced, the vSphere Client would time out on connection. The issue occurred only when connecting to vCenter; connecting the vSphere Client directly to hosts worked fine.

If I reverted back to VMCA-signed certs, the vSphere Client would begin working again. Stranger still, sometimes the client would actually connect, but it would take upwards of 60 seconds to do so.

This particular customer is using an externally published CA. To clarify, the vSphere Web Client was working; it was just the C# client that was causing issues.

The error shown by the vSphere Client on login is as follows:


To begin troubleshooting, I used BareTail, an excellent free tool, to tail the vi-client logs whilst the vSphere Client was connecting.

I created a filter to highlight lines containing the word “Error” in red and “Warning” in yellow, and opened the vi-client log located in the following directory:

C:\Users\user_name\AppData\Local\VMware\vpx\viclient-x.log
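The same filter can be sketched in a few lines of Python, flagging Error and Warning lines with ANSI colour codes, mirroring the BareTail setup (the sample log lines below are illustrative only):

```python
import re

# Keywords to flag, mirroring the BareTail filter:
# "Error" in red, "Warning" in yellow (ANSI colour codes).
SEVERITY = {
    "Error": "\033[31m",
    "Warning": "\033[33m",
}
RESET = "\033[0m"

def highlight(line: str) -> str:
    """Colour a log line if it contains a severity keyword."""
    for keyword, colour in SEVERITY.items():
        if re.search(keyword, line, re.IGNORECASE):
            return f"{colour}{line}{RESET}"
    return line

if __name__ == "__main__":
    # Illustrative log lines only -- point this at your own viclient log.
    for line in [
        "[viclient:Error :W: 6] RMI Error Vmomi.ServiceInstance.RetrieveContent",
        "[viclient:Warning:W: 6] slow response from server",
        "[viclient:Info  :W: 6] connected",
    ]:
        print(highlight(line))
```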

The following log snippet shows a socket error whilst the client is connecting, just before the connection fails:


Relevant text from the log is below. I have masked the name of the customer’s vCenter server.

[viclient:Error :W: 6] 2016-05-28 17:48:19.743 RMI Error Vmomi.ServiceInstance.RetrieveContent - 1
<Error type="VirtualInfrastructure.Exceptions.RequestTimedOut">
<Message>The request failed because the remote server 'SERVER FQDN' took too long to respond. (The command has timed out as the remote server is taking too long to respond.)</Message>
<InnerException type="System.Net.WebException">
<Message>The command has timed out as the remote server is taking too long to respond.</Message>
<Title>Connection Error</Title>
<InvocationInfo type="VirtualInfrastructure.MethodInvocationInfoImpl">
<StackTrace type="System.Diagnostics.StackTrace">
<Target type="ManagedObject">ServiceInstance:ServiceInstance [SERVER FQDN]</Target>

To dig deeper into why I was getting a socket error, I fired up Procmon from Sysinternals to find out what the client was doing when it failed. In Procmon I created a filter to show only activity generated by vpxclient.exe.

Procmon filter

Whilst Procmon was running, I noticed a TCP Reconnect happening to an Akamai address.

Procmon error

Notice the time difference of seven seconds between the two TCP Reconnects. This TCP Reconnect would recur multiple times until the vSphere Client timed out and subsequently failed.

I was curious about the status of this TCP connection, so I started another great Sysinternals tool called Process Explorer. Process Explorer allows you to check a process’s network status, including remote addresses and ports, along with the status of each connection. Selecting vpxclient.exe in Process Explorer showed the following under TCP/IP.

SYN Sent

You can see the same remote connection to Akamai in Process Explorer. The status of the connection is SYN_SENT, yet the connection is never established.
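Process Explorer’s TCP/IP view can also be approximated from the command line by parsing `netstat -ano` output. A rough Python sketch, using made-up sample output and a hypothetical PID:

```python
def syn_sent_connections(netstat_output: str, pid: str):
    """Return (local, remote) pairs stuck in SYN_SENT for a given PID.

    Expects lines in Windows `netstat -ano` format:
      proto  local_addr  remote_addr  state  pid
    """
    stuck = []
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) == 5 and fields[3] == "SYN_SENT" and fields[4] == pid:
            stuck.append((fields[1], fields[2]))
    return stuck

# Made-up sample output: one half-open connection, one established.
sample = """\
  TCP    10.0.0.5:49211    23.0.160.15:80    SYN_SENT       4312
  TCP    10.0.0.5:49180    10.0.0.10:443     ESTABLISHED    4312
"""
print(syn_sent_connections(sample, "4312"))
# -> [('10.0.0.5:49211', '23.0.160.15:80')]
```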

I was certain this external connection was causing the vSphere Client to time out. Since the customer is using a third-party-issued cert, the client checks the CRL of the cert on the internet. This is why I did not see the error with the self-signed VMCA vCenter Machine SSL cert. You can see the cert is using an external CRL distribution point in the screenshot below.


I ran an NSLOOKUP on the CRL distribution point hostname, and the address matched the Akamai address space, with a CNAME pointing to the CRL.
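Both checks can be scripted. As a sketch, the CRL distribution point can be pulled out of `openssl x509 -noout -text` output with a simple regex (the CA hostname below is made up), and each hostname can then be resolved with `socket.gethostbyname_ex`, which reports CNAME aliases much like NSLOOKUP:

```python
import re
import socket

def crl_urls(cert_text: str):
    """Pull CRL distribution point URIs out of `openssl x509 -text` output."""
    return re.findall(r"URI:(\S+)", cert_text)

# Illustrative fragment of `openssl x509 -noout -text` output;
# the CA hostname is made up.
sample = """\
    X509v3 CRL Distribution Points:
        Full Name:
          URI:http://crl.example-ca.com/issuing.crl
"""
print(crl_urls(sample))  # -> ['http://crl.example-ca.com/issuing.crl']

if __name__ == "__main__":
    # Resolving the CRL hostname shows any CNAME chain (network required).
    try:
        name, aliases, addrs = socket.gethostbyname_ex("crl.example-ca.com")
        print(name, aliases, addrs)
    except OSError as exc:
        print("lookup failed:", exc)
```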

After all this, I began troubleshooting why the vSphere Client could not connect to the CRL distribution point. It turns out the corporate proxy was not configured in Internet Explorer, so the management servers where the vSphere Client was installed could not access the CRL address for the certs.

Once I had the details for the proxy and configured it in Internet Explorer, the vSphere Client successfully created a TCP connection to the CRL on login and then connected to vCenter with no timeout. This only seemed to need to be configured once: I removed the proxy for subsequent logins and the vSphere Client connected fine.
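Whether the CRL endpoint is reachable through a given proxy can also be tested outside Internet Explorer. A minimal Python sketch using only the standard library (the proxy address and CRL URL are placeholders):

```python
import urllib.request

def opener_via_proxy(proxy: str) -> urllib.request.OpenerDirector:
    """Build an opener that routes HTTP/HTTPS requests through a proxy."""
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler)

if __name__ == "__main__":
    # Placeholder proxy and CRL URL -- substitute real values.
    opener = opener_via_proxy("http://proxy.corp.example:8080")
    try:
        resp = opener.open("http://crl.example-ca.com/issuing.crl", timeout=5)
        print("CRL reachable, HTTP status", resp.status)
    except OSError as exc:
        print("CRL unreachable:", exc)
```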

My recommendation, if you do replace vSphere certificates, is to use an internally managed enterprise CA with a certificate revocation list that can be accessed internally. Also, add a copy of Procmon, Process Explorer and BareTail to your troubleshooting toolkit if you don’t already have them. They are all great tools that have helped me many times in the past.







Error while installing vCenter 6.0 “Failed to Register Service in Component Manager”

SPOILER: Check your time sync between the Platform Services Controller and vCenter Server

I had this strange error while standing up two new vCenter instances with a single external Platform Services Controller.

vCenter Server version: vCenter Server 6.0 Update 1b, build 3343019
Operating system: Windows Server 2012 R2
Deployment topology: vCenter Server with an external Platform Services Controller

The Platform Services Controller installed fine; however, whilst installing the two vCenter Servers I ran into the same error on both of them:

"Unable to call to Component Manager: Failed to register service in Component Manager; url:http://localhost:18090/cm/sdk/?hostid=75fc9250-0c07-11e6-ac93-000c290481b4, id:b2646ddb-aa7a-4f2d-a2aa-37be891d6e49"

Unable to call to component manager

After that error, I would get the following:

"Installation of component VCSServiceManager failed with error code '1603'. Check the logs for more details."


Reviewing the installation logs showed the following:

2016-04-27T09:35:25.317+10:00 [main ERROR com.vmware.cis.cli.CisReg] Operation failed
com.vmware.cis.cli.exception.ComponentManagerCallException: Failed to register service in Component Manager; url:http://localhost:18090/cm/sdk/?hostid=75fc9250-0c07-11e6-ac93-000c290481b4, id:b2646ddb-aa7a-4f2d-a2aa-37be891d6e49
at com.vmware.cis.cli.util.CmUtil.cmRegister(
at com.vmware.cis.cli.CisReg.registerService(
at com.vmware.cis.cli.CisReg.doMain(
at com.vmware.cis.cli.CisReg.main(
Caused by: java.util.concurrent.ExecutionException: ( {
faultCause = null,
faultMessage = null,
errorCode = 0,
errorMessage = UNKNOWN
at com.vmware.vim.vmomi.core.impl.BlockingFuture.get(
at com.vmware.cis.cli.util.CmUtil.cmRegister(
... 3 more
Caused by: ( {
faultCause = null,
faultMessage = null,
errorCode = 0,
errorMessage = UNKNOWN
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

The logs weren’t exactly clear on why the installation was failing, but after some troubleshooting I noticed the time sync between the Platform Services Controller and the vCenter Servers was slightly out. Once the time sync issues were resolved, the installation completed successfully.

In my case, I temporarily fixed the time sync issues by moving both the vCenter Servers and the PSC to a single host, and configuring VMware Tools to sync time with the host, which was itself configured to point to an external NTP server.
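The skew itself is easy to measure. Below is a minimal SNTP query sketched in Python; the server name is a placeholder, and the sketch ignores round-trip delay, so it is only good for spotting skew of seconds or more:

```python
import socket
import struct
import time

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_EPOCH_OFFSET = 2208988800

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP timestamp (seconds since 1900) to Unix time."""
    return ntp_seconds - NTP_EPOCH_OFFSET

def sntp_time(server: str = "pool.ntp.org", timeout: float = 5.0) -> float:
    """Query an NTP server and return its transmit timestamp as Unix time."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    transmit_secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds field
    return float(ntp_to_unix(transmit_secs))

if __name__ == "__main__":
    # Placeholder server -- run this on both the PSC and vCenter hosts
    # and compare the reported skew.
    try:
        skew = time.time() - sntp_time()
        print(f"Local clock skew: {skew:+.1f} seconds")
    except OSError as exc:
        print("NTP query failed:", exc)
```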