Automatically degrade CPU capabilities when there are missing extensions
@ebassi
Submitted by Emmanuele Bassi
Link to original bug (#748100)
Description
After an upgrade, a set of microcode changes dropped two capabilities from my host; this caused VMs created before the upgrade to fail to launch, because the <cpu> node in the libvirt description of the VM no longer matched the capabilities chosen when the VM was created.
I was notified by a deeply unhelpful "Failed to launch VM" error in the gnome-boxes UI, which required me to hunt down the actual error in the journal (see bug 748043), then ask on IRC, and finally find an entry in the Red Hat Bugzilla that detailed what had happened, along with a series of commands I had to run to fix the issue: it boiled down to manually entering virsh and editing the XML to drop the mismatched node.
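For context, the mismatch lives in the <cpu> element of the domain XML. The excerpt below is a hypothetical illustration of what `virsh edit <vm-name>` exposes; the model and feature names are placeholders, not necessarily the capabilities my host lost:

```xml
<!-- Hypothetical excerpt of a domain definition opened with
     `virsh edit <vm-name>`; model and feature names are illustrative. -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Haswell</model>
  <!-- If the host microcode stops exposing a required feature, libvirt
       refuses to start the guest. The manual fix boils down to deleting
       the offending <feature> lines (or the whole <cpu> element). -->
  <feature policy='require' name='hle'/>
  <feature policy='require' name='rtm'/>
</cpu>
```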
This was an all-around terrible user experience. I felt patronised and completely lost, because nothing told me what was going on or how to fix it.
In this particular case, Boxes has all the information needed to recognise the error and act accordingly:
• it gets notified by libvirt that the CPU capabilities of the host and the guest are mismatched
• it can modify, via libvirt, the CPU capabilities of the guest (a sketch follows this list)
• it can inform the user of the potential performance degradation inherent in dropping capabilities and using a more generic CPU emulation
• it can ask the user what kind of performance degradation is acceptable and act accordingly
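As a sketch of the first two points, here is roughly what detection and repair could look like through the libvirt Python bindings. The domain name, the regex-based XML handling, and the blunt fallback to host-model are illustrative assumptions, not what Boxes (which talks to libvirt through libvirt-glib) actually does:

```python
# Minimal sketch with the libvirt Python bindings; "my-vm" and the
# regex-based XML surgery are hypothetical placeholders (a real
# implementation would use an actual XML parser).
import re
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-vm")  # hypothetical domain name

# Read the persistent (inactive) definition of the guest.
xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE)
match = re.search(r"<cpu\b.*?</cpu>", xml, re.S)

if match is not None:
    # compareCPU checks a CPU description against the host CPU.
    if conn.compareCPU(match.group(0), 0) == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
        # Relax the guest CPU to whatever the current host provides.
        patched = xml.replace(match.group(0), "<cpu mode='host-model'/>")
        conn.defineXML(patched)  # persist the repaired definition
```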
If we don't want to involve the user (which is acceptable, up to a certain point, as long as the user has the option to opt in to their involvement), then Boxes can automatically patch up the guest description to something resembling the requirements set by the user when the VM was created; one possible mechanism for this is sketched below.
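For the "something resembling the original requirements" part, libvirt already offers a baseline operation that computes a CPU definition supported by all of its inputs. A hedged sketch, again with the Python bindings and placeholder CPU descriptions (in practice the host CPU would be extracted from conn.getCapabilities() and the requested CPU from the stored domain XML):

```python
# Hedged sketch: compute the closest CPU model the degraded host can
# still provide, rather than silently falling back to a fully generic
# CPU. Both CPU descriptions below are placeholders.
import libvirt

conn = libvirt.open("qemu:///system")

host_cpu = """<cpu>
  <arch>x86_64</arch>
  <model>Haswell-noTSX</model>
</cpu>"""  # in practice: taken from conn.getCapabilities()

requested_cpu = """<cpu>
  <arch>x86_64</arch>
  <model>Haswell</model>
</cpu>"""  # in practice: the CPU chosen when the VM was created

# baselineCPU returns a <cpu> definition supported by every input, i.e.
# the best approximation of the original request that still runs here.
fallback = conn.baselineCPU(
    [host_cpu, requested_cpu],
    libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES,
)
print(fallback)
```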
In all cases, though, Boxes should not just drop the ball and refuse to run the guest VM.