How to manage virtual machine security at scale
Who looks after the security of VMs?
This can depend on how large the organisation is, but it’s usually overseen by security teams, working in cooperation with IT departments.
The security team manages operations, defines the security baseline requirements, and oversees the day-to-day monitoring of compliance.
The security team defines what is needed, and how these needs will be addressed. They generally tell the architects how the master images should be configured from a security perspective – what hardening is needed, which endpoint protection and VPN solutions to use, and other security-related software considerations.
The cloud architects then take this guidance and are responsible for the configuration and creation of the image templates. There are different ways this can be achieved – read our quick guide to comparing the different image lifecycle management solutions.
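As an illustration of what an image template boils down to (this is not ImageFactory’s actual interface – all names here are hypothetical), the baseline can be thought of as a declarative definition that a build tool turns into an immutable artefact, with a hash tying the artefact to the exact configuration it was built from:

```python
import hashlib
import json

# Hypothetical security baseline defined by the security team.
# In a real pipeline this would live in version control and be
# consumed by an image-build tool; the structure is illustrative.
BASELINE = {
    "os": "ubuntu-22.04",
    "packages": ["endpoint-protection-agent", "vpn-client"],
    "hardening": ["disable-root-ssh", "enable-disk-encryption"],
}

def build_image(baseline: dict) -> dict:
    """Turn a baseline definition into an immutable image artefact record."""
    # Hash the baseline so the resulting artefact is verifiably tied
    # to the exact configuration it was built from.
    digest = hashlib.sha256(
        json.dumps(baseline, sort_keys=True).encode()
    ).hexdigest()
    return {"image_id": f"img-{digest[:12]}", "baseline_sha256": digest}

artifact = build_image(BASELINE)
print(artifact["image_id"])
```

Because the artefact ID is derived from the baseline itself, any change to the hardening requirements produces a new, distinguishable image – which is what makes the audit trail in the next step possible.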
And finally, the cloud engineers are the users of the images. They’re not involved in configuration; they use the image artefact in the knowledge that everything they do is secured.
Using ImageFactory to create and deliver the custom images means that, from top to bottom – from security to end users – the defined standards are carried through all processes.
It means infrastructure and development teams can build and optimise, safe in the knowledge that they’re within the guardrails and in line with best practices.
So, for example, when an audit is carried out, there is evidence that all processes are compliant: the documentation shows that the baseline enforced by the security team has been carried through to every instance where the secured image has been used.
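A sketch of what such an audit check could look like in practice – the image IDs, VM inventory, and function name here are all hypothetical, but the principle is simply that every running VM must trace back to an approved, security-baselined image:

```python
# Hypothetical approved list maintained by the security team.
APPROVED_IMAGES = {"img-3f9a12", "img-77bc04"}

# Hypothetical VM inventory; in practice this would come from the
# cloud provider's API, which records each VM's source image.
vms = [
    {"name": "web-01", "source_image": "img-3f9a12"},
    {"name": "db-01", "source_image": "img-77bc04"},
    {"name": "legacy-01", "source_image": "img-manual-build"},
]

def audit(vms, approved):
    """Return the names of VMs whose source image is not approved."""
    return [vm["name"] for vm in vms if vm["source_image"] not in approved]

non_compliant = audit(vms, APPROVED_IMAGES)
print(non_compliant)  # the only machine not built from an approved image
```

The check is trivial precisely because the provenance is recorded at creation time – which is the point: with manually built servers there is no `source_image` field to audit against.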
How does automation help day-to-day?
The most challenging part of server management lies in the manual processes involved.
The security team tends to produce a lot of documentation around how a server should be configured, how to install software, how to license it, and various other manual tasks.
Cloud engineers then follow this documentation but, when there’s a human element involved, mistakes inevitably happen – particularly when working at scale.
It means that once servers are built and running, the security team has no concrete evidence of what standards they were configured to, or what processes were used.
Using an automated tool eradicates this issue. When engineers use images produced by ImageFactory, the evidence is clear, as everything needed is pre-baked into the image. So it’s essentially secure and compliant out of the box.
So engineers save time, don’t need to configure anything manually, and can scale in a secure way.
And how do teams manage updates?
While ImageFactory helps spin up new environments in a compliant way, a solution is still needed to manage these lifecycles at scale. Essentially, ImageFactory alone only builds the image artefact; it doesn’t continually manage the instances created from that image.
For managing updates to servers, an automated patching solution can orchestrate updates across operating systems, ensuring continued security and compliance.
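Orchestrating patches across mixed estates mostly comes down to grouping servers by OS family and running the right update mechanism against each group. A minimal sketch – the commands and hostnames are illustrative, and a real tool would also handle maintenance windows, reboots, and status reporting:

```python
from collections import defaultdict

# Illustrative per-OS update commands; a real orchestrator would
# run these via its own agents, not raw shell strings.
UPDATE_COMMANDS = {
    "windows": "Install-WindowsUpdate -AcceptAll",
    "rhel": "dnf update -y",
    "ubuntu": "apt-get update && apt-get upgrade -y",
}

# Hypothetical server inventory spanning several operating systems.
servers = [
    {"host": "web-01", "os": "ubuntu"},
    {"host": "dc-01", "os": "windows"},
    {"host": "app-01", "os": "rhel"},
    {"host": "web-02", "os": "ubuntu"},
]

def plan_patch_run(servers):
    """Map each OS family to its update command and the hosts needing it."""
    groups = defaultdict(list)
    for s in servers:
        groups[s["os"]].append(s["host"])
    return {os_name: (UPDATE_COMMANDS[os_name], hosts)
            for os_name, hosts in groups.items()}

for os_name, (cmd, hosts) in plan_patch_run(servers).items():
    print(os_name, hosts, "->", cmd)
```

The value of automating this isn’t the grouping itself but the consistency: every Ubuntu server gets exactly the same treatment, every patch run leaves a record, and nothing depends on an engineer remembering which OS does things which way.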
And a managed service provider can take on the burden of day-to-day lifecycle management, with update management, patching, and 24/7 monitoring and response to any complexities or security issues.
What are the risks of not using an automated lifecycle tool?
The risk really lies in carrying out these manual development cycles at scale. Often this means thousands of VMs, usually across several different operating systems. Even different versions of the same operating system can have different requirements, and the security guidelines differ accordingly, making things more complex.
So, if an engineer is building a Windows server one week, the next week they might be working on Red Hat, doing things in quite a different way. This makes it difficult to maintain a consistently high quality of delivery.
Perhaps an engineer can configure a few servers a week, but has been asked to create 1,000 servers in a few months. At, say, five servers a week, 1,000 servers would take the best part of four years – so corners get cut and, eventually, a mistake is inevitable.
And the risk is that, once it’s up and running in a production environment, nobody may realise that the server wasn’t configured to a secure standard.
So there’s a risk of exposing data. If something isn’t correctly in place on a server, it’s easy to exploit, and in the worst case a misconfigured server can lead to an attacker gaining highly privileged access. From there, it can be easy to reach other parts of the internal network and confidential data.
How does ImageFactory help?
ImageFactory delivers secure, hardened images, providing out-of-the-box starting points for all servers. Security and compliance requirements are baked into the image from the start, so there’s no possibility of removing or missing something during installation. The images are immutable.
So if an engineer has created a virtual machine based on an ImageFactory image, there is concrete, guaranteed evidence that the correct software and configurations were there from the start.
And the security team has the evidence that the VM was created from an approved ImageFactory artefact, so there’s absolutely no doubt that everything needed was pre-installed.
Peace of mind
Security teams have a lot to worry about, and can often be seen as a bottleneck when it comes to businesses fully realising the potential flexibility, agility and scalability of cloud. Having a solution in place that allows security the peace of mind that engineers are spinning up servers in a guaranteed secure way is a big win.
Automating the delivery of hardened images saves a lot of manual work, since each image no longer has to be configured by hand. So, engineers don’t have to spend days writing scripts for different servers, or double-checking that everything has been configured correctly.
The total cost of a solution like ImageFactory is actually lower than building your own solution in-house or using marketplace solutions. And the money saved by not dedicating a team of engineers to these manual tasks is substantial, while also freeing up your engineering talent for more valuable development work.