Sorry for the late reply. I got the beta driver from Intel. But the 1.6.6 versioning was some sort of internal versioning. It was officially released as 1.4.28 and can be found here:
Thanks for the feedback. I thought this was the case, and I can't understand why the PM1725 drives are on the vSAN HCL.
You can add the new connection to vRO from the vRO configuration page ("Add a new connection"). This is the same in the vRO Infoblox IPAM 4.0 plugin.
Mine freeze at bootup, probably on average 40% of the time. Sometimes they freeze 2 or 3 times in a row before working.
Not sure how that could be related to sleep.
Vendors supply hardware to us (VMware) and then we run it through a series of tests. We test SSDs for use as cache in AF and hybrid configurations, and as capacity in AF configurations. As long as they pass all of our tests, we publish them. That's my understanding of the process. It's worth noting that SSDs of this model are only supported as cache in AF configurations, not hybrid. That tells me the drive can't keep up with enough reads and writes to front-end HDDs. It should work fine, but if you have the ability and the money to use MLC, go for it.
OK, thanks... I hadn't even thought about that. Guess I will do it over the weekend.
Thank you. That is precisely the response I was looking for!
It's worth noting that we retest hardware with every new version of vSAN, so all of these drives will be retested for compatibility with 6.5 and the VCG may change. Not all devices will be approved before 6.5 goes GA.
Happy to help!
We have a 2 TB VM with one snapshot. The datastore suddenly ran out of space because of snapshot growth, and the VM became inaccessible.
We had no way to expand the datastore right away, so we started deleting the snapshot. The deletion took 7 hours, and only after it finished could we power the VM back on. Is there a faster way to get the VM powered on in this situation?
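The underlying lesson here is that a snapshot delta can grow until it fills the datastore. A minimal pre-flight check can be sketched in Python; the helper name and the 20% safety margin are illustrative assumptions, not a VMware recommendation:

```python
def snapshot_headroom_ok(vm_provisioned_gb, datastore_free_gb,
                         safety_fraction=0.2):
    """Return True if the datastore keeps a safety margin for snapshot growth.

    A snapshot delta can, in the worst case, grow toward the size of the
    base disk, so this sketch requires free space of at least
    `safety_fraction` of the VM's provisioned size before taking one.
    The 20% default is an assumption for illustration only.
    """
    required_gb = vm_provisioned_gb * safety_fraction
    return datastore_free_gb >= required_gb

# Example: a 2 TB (2048 GB) VM, as in the post above.
print(snapshot_headroom_ok(2048, 100))   # False - not enough headroom
print(snapshot_headroom_ok(2048, 600))   # True  - enough headroom
```

In practice a check like this would run (e.g. via monitoring or a PowerCLI script) before snapshots are created, so the datastore never gets into the full/inaccessible state described above.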
Hi gurus,
At this point I am setting up a Horizon 7 platform. Everything is already in place: security servers, connection servers, and vCenter servers, so all is good in my world. I just need a little guidance on the best load balancer solution to put in place. What has been your experience, and what would you recommend?
Thanks a bunch gurus
Yes, it was cleared
Anybody out there who can help me on this?
Hello - We use floating pools (all Win 7) with Persona and force a refresh on log out. Our hosts are at 5.5.0 (3343343) and the connection servers are version 6.2.1-328346. After logging out and waiting 5+ minutes some of my users occasionally find Persona has not pushed any of their files/config/settings to their desktop. It's as though they are logging in for the first time to a new desktop. If they log out then back on Persona usually starts working and they see their normal desktop. Has anyone ever experienced this?
Any help is appreciated.
Pete
Can we get free ESXi host information?
We've used the Horizon View Client within XenApp since version 3.0, with decent success. We're still running the ICA protocol, so performance is marginal at best in comparison to non-virtualized/hosted installs. Most of our use cases are external, out-of-band, users though. Internal usage might perform decent enough to be an alternative to full-installs.
I have also run the View Client within a locally-hosted Workstation VM for testing package installation and features. It didn't seem to pose any issues that I noticed with View Client 3.3 and above. I would assume 3.0 would work just as well.
We could not get Windows 10 to play well with any View Client prior to 3.5.2. Multiple issues with desktop sizing, display performance, and general usage being slow/buggy. Currently running 4.2.0, which seems to function quite well in comparison to the earlier builds. I don't believe you'd have a great user-experience with 3.0 or anything prior to 3.5.2.
Hopefully that helps!
Here is what I happened upon:
While regenerating some Instant Clones in the working pool, I noticed errors with some of them failing to regenerate.
Those errors led me to 2 of the Hosts in the Cluster.
I put them in Instant Clone Maintenance (which deletes the Parent VMs from them) and regenerated the Instant Clones again. NO ERRORS.
I then updated my test pool with a new Golden Master Image and it successfully published (through the other Hosts in the Cluster).
Finally, I put one problem Host fully in Maintenance Mode and restarted the Host. Once up and out of Maintenance Mode, I did the same with the other problem Host.
Then I took both out of Instant Clone Maintenance and updated the main Instant Clone pool with our new Win10 Golden Master Image.
It completed successfully.
On a first look, I noticed a difference in uplink (vmnic) speed: it shows 10G (10000) on ESXi3 but only 1G (1000) on ESXi1 and ESXi2.
Also, check the MTU size on the vSwitch and vmk ports. The MTU size should be the same across the environment.
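The two checks above (uplink speed and MTU consistency) can be sketched as a small Python helper. The record format here is hypothetical; in practice you would feed it values gathered from each host, e.g. from `esxcli network nic list` and the vmk/vSwitch MTU settings:

```python
from collections import Counter

def find_mismatches(records):
    """Flag NICs whose uplink speed or MTU differs from the rest.

    `records` is a list of dicts like
    {"host": "ESXi1", "nic": "vmnic0", "speed_mb": 1000, "mtu": 1500}.
    Returns the records that do not match the most common
    (speed, mtu) pair seen across the environment.
    """
    common, _ = Counter(
        (r["speed_mb"], r["mtu"]) for r in records
    ).most_common(1)[0]
    return [r for r in records if (r["speed_mb"], r["mtu"]) != common]

# Values mirroring the situation described above: ESXi3 links at 10G
# while ESXi1/ESXi2 link at 1G.
nics = [
    {"host": "ESXi1", "nic": "vmnic0", "speed_mb": 1000,  "mtu": 1500},
    {"host": "ESXi2", "nic": "vmnic0", "speed_mb": 1000,  "mtu": 1500},
    {"host": "ESXi3", "nic": "vmnic0", "speed_mb": 10000, "mtu": 1500},
]
for r in find_mismatches(nics):
    print(f'{r["host"]}/{r["nic"]}: speed={r["speed_mb"]} MTU={r["mtu"]}')
# prints: ESXi3/vmnic0: speed=10000 MTU=1500
```

Note the "most common wins" heuristic is just a convenient way to surface the odd host out; whether 1G or 10G is the intended speed is for the admin to decide.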
Cheers!
-Shivam
Which SDK fling version are you using? Can you try Fling 6, which we pushed yesterday (html-client-sdk-6.5.0-4507438.zip)? We fixed some dialog size issues in it.
If it is still not working with Fling 6, please describe the problem in detail. Thanks!