Based on lessons learned and experience from across the cloud industry, we should consider the following best practices for your organization’s planning.
As an organization plans for transitioning to a cloud service or deploying a private or hybrid cloud, the first step from a security standpoint is to consider what IT systems, applications, and data should or must remain within a legacy enterprise datacenter. Here are some considerations:
- Perform assessments of each application and data repository to determine its security posture, suitability, and priority for transitioning to the cloud. Match security postures to the cloud architecture, controls, and target security compliance requirements.
- Work with application and business owners to determine which applications and data you can move easily to a cloud and which you should evaluate further or delay moving to a cloud. Repeat this assessment on all key applications and data repositories to develop a priority list with specific notations on the desired sensitivity, regulatory, or other security classifications.
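As one illustration, a priority list like the one described above can be produced from simple per-application scores. The field names and weights below are hypothetical; a real assessment would use your organization's own criteria and classifications.

```python
from dataclasses import dataclass

# Hypothetical scoring model: field names and weights are illustrative only.
@dataclass
class AppAssessment:
    name: str
    sensitivity: int    # 1 (public data) .. 5 (highly sensitive)
    regulated: bool     # subject to regulatory compliance (e.g., PII, financial)
    cloud_ready: int    # 1 (needs rework) .. 5 (lifts and shifts cleanly)

def migration_priority(apps):
    """Return apps ordered from easiest/safest to hardest to migrate."""
    def score(a):
        # Lower sensitivity and no regulation raise priority; readiness raises it too.
        return a.cloud_ready - a.sensitivity - (2 if a.regulated else 0)
    return sorted(apps, key=score, reverse=True)

apps = [
    AppAssessment("public-website", sensitivity=1, regulated=False, cloud_ready=5),
    AppAssessment("hr-payroll",     sensitivity=5, regulated=True,  cloud_ready=2),
    AppAssessment("team-wiki",      sensitivity=2, regulated=False, cloud_ready=4),
]
ranked = migration_priority(apps)
print([a.name for a in ranked])  # ['public-website', 'team-wiki', 'hr-payroll']
```

The output is the priority list itself: easy, low-risk candidates first, with regulated or sensitive workloads flagged for further evaluation or delay.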
- Consider the cloud model(s) to be procured or deployed internally:
- Although public cloud services provide solid infrastructure security, they often do not offer the level of security or customization you may need.
- A private cloud can be heavily customized to meet security or feature requirements, but you need to control costs, scope creep, and over-building your initial deployment.
- Determine who the consumers of the cloud services will be. If multiple departments or peer agencies will be using the cloud service, determine which security team or organization controls the standards for the overall cloud or each application workload:
- Adopt a baseline security posture so that individual consumers or peer agencies can be more involved in setting the security standards for their unique applications and mission-critical workloads.
- Publish the security operational processes, ownership, and visibility of statistics, events, and reports to ensure acceptance of the cloud by consuming agencies and departments.
Most clouds use software-based access controls and permissions to isolate customers from one another in a multitenant cloud environment. Hardware isolation is an option for private clouds and some virtual private clouds, but at additional cost.
- Understand how multitenancy is configured so that each consuming organization is isolated from all the others. In a public cloud, the use of software-based access controls, role-based permissions, and storage and hypervisor separation is commonplace. If more isolation or separation of workloads and data between customers is required, other options such as a virtual private cloud or a private cloud are often more appropriate.
- Implement or connect an enterprise identity management system such as Active Directory, LDAP, or SAML. Some cloud providers and management platforms can optionally connect to multiple directory or LDAP services, one for each consuming organization.
AUTOMATION IN A CLOUD
The first rule in an automated cloud is to plan and design a cloud system with as few manual processes as possible. This might be contrary to ingrained principles of the past, but you must avoid any security processes or policies that delay or prevent automation. Here are some considerations:
- Adopt the theme “relentless pursuit of automation.”
- Eliminate any legacy security processes that inhibit rapid provisioning and automation.
Experience has shown that traditional security processes have tended to be manual approvals, after-provisioning audits, and slow methodical assessments, tendencies that must change when building or operating a cloud. Pre-certify everything to allow automated deployment—avoid forcing any manual security assessments in the provisioning process.
- Have IT security teams pre-certify all “gold images” or templates that can be launched within new VMs. Certification of gold images is not just an initial step when using or deploying a new cloud; it is an ongoing process.
- Have security experts perform scans and assessments of every new or modified gold image before loading it into the cloud management platform and presenting it for customers to order.
- Understand that when a new gold image is accepted and added to the cloud, the cloud operational personnel (provider or support contractor, depending on contractual terms) might now be responsible for all future patches, upgrades, and support of the image.
- Have security pre-certify all applications and future updates that will be available on the cloud. You should configure applications as automated installation packages whereby any combination of application packages can be ordered and provisioned on top of a VM gold image. Additional packages for upgrades and patching of the OS and apps should also be deployed in an automated fashion to ensure efficiency, consistency, and configuration management.
- Realize that this pre-certification is not an especially difficult task, but it will be an ongoing effort as new applications and update packages are introduced to the cloud often and continuously. Finally, understand that more complex multitiered applications (e.g., multitiered PaaS applications) will require significantly more security assessment and involvement during initial application onboarding.
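A minimal sketch of this pre-certification gate might look like the following. The image and package names are invented; a real cloud management platform would enforce this check inside its automated provisioning workflow.

```python
# Illustrative pre-certification registry; image and package names are made up.
CERTIFIED_IMAGES = {"ubuntu-22.04-gold-v3", "win2022-gold-v7"}
CERTIFIED_PACKAGES = {"nginx-1.24-pkg", "postgres-15-pkg", "java17-pkg"}

def validate_order(image, packages):
    """Allow automated provisioning only for pre-certified components.

    Anything not pre-certified is rejected up front instead of being
    routed through a manual security review during provisioning.
    """
    if image not in CERTIFIED_IMAGES:
        return False, f"image {image!r} is not pre-certified"
    uncertified = [p for p in packages if p not in CERTIFIED_PACKAGES]
    if uncertified:
        return False, f"packages not pre-certified: {uncertified}"
    return True, "approved for automated provisioning"

ok, reason = validate_order("ubuntu-22.04-gold-v3", ["nginx-1.24-pkg"])
print(ok, reason)  # True approved for automated provisioning
```

Because every component in the catalog is certified in advance, no manual security step sits in the provisioning path; adding a new image or package to the registry is where the security assessment happens.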
It is common for customers to request additional network configurations or opening of firewall ports. These can be handled through a manual vetting, approval, and configuration process, but you might want to charge extra for this service. Here are some things to keep in mind:
- Segment the network so that each customer (not each VM, which is often overkill), at a minimum, has its own virtual network. This is preferable to a physical network for each customer, which is difficult to automate and more expensive.
- You can offer additional network segmentation as an option for each tenant or customer organization by using virtual firewalls to isolate networks. Applications that need to be Internet-facing should be further segmented and firewalled from the rest of the production cloud VMs and networks.
- Avoid overdoing the default segmentation of networks, because this only complicates the offerings and usefulness of the cloud environment and increases operational management. Stick with a basic level of network segmentation, such as one virtual network per customer by default, and then offer upgrades only when necessary to create additional virtual networks.
- Consider pre-certifying a pool of additional VLANs, firewall port rules, load balancers, and storage options and make these available to cloud consumers via the self-service control panel.
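The pre-certified pool idea can be sketched as a simple allocator. The VLAN ID range below is illustrative; a real implementation would live behind the self-service portal and drive the actual network configuration.

```python
# Sketch of a pre-certified resource pool; VLAN IDs are illustrative.
class VlanPool:
    def __init__(self, vlan_ids):
        self._free = set(vlan_ids)   # pre-certified, ready to assign
        self._assigned = {}          # vlan_id -> tenant

    def allocate(self, tenant):
        """Hand out a pre-approved VLAN with no manual change request."""
        if not self._free:
            raise RuntimeError("pool exhausted; certify and add more VLANs")
        vlan = min(self._free)       # deterministic pick for the example
        self._free.remove(vlan)
        self._assigned[vlan] = tenant
        return vlan

    def release(self, vlan):
        """Return a VLAN to the pool when a tenant deprovisions."""
        tenant = self._assigned.pop(vlan)
        self._free.add(vlan)
        return tenant

pool = VlanPool(range(100, 105))
vlan = pool.allocate("tenant-a")
print(vlan)  # 100
```

Because every VLAN in the pool was certified in advance, self-service requests can be fulfilled instantly; the security review happens once, when the pool is stocked.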
ASSET AND CONFIGURATION MANAGEMENT
The key to success is to also automate the updating of asset and configuration databases. This means that you configure the cloud management platform, which controls and initiates automation, to immediately log the new VM, application, or software upgrade into the asset and configuration databases. Here are some considerations:
- Reconsider all manual approval processes and committees that are contrary to cloud automation and rapid provisioning (which includes routine software updates).
- Update the legacy change control process by preapproving new application patches, upgrades, gold images, and so on so that the cloud automation system can perform rapid provisioning.
- Integrate the cloud management system to automatically update the configuration log/database in real time as any new systems are provisioned and launched. These automated configuration changes, which are based on preapproved packages or configurations, should be marked as “automatically approved” in the change control system.
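A sketch of that integration, using an in-memory list as a stand-in for the change control database (the record fields and status strings are illustrative):

```python
import datetime

change_log = []   # stand-in for the change control database

def record_provisioning(resource_id, kind, preapproved=True):
    """Log an automated provisioning event into change control.

    Changes built from preapproved packages are marked
    'automatically approved' so no human gate delays provisioning.
    """
    entry = {
        "resource": resource_id,
        "kind": kind,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": "automatically approved" if preapproved else "pending review",
    }
    change_log.append(entry)
    return entry

entry = record_provisioning("vm-0042", "IaaS VM")
print(entry["status"])  # automatically approved
```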
MONITORING AND DETECTION OUTSIDE YOUR NETWORK PERIMETER
Traditional datacenter and IT security focused on monitoring for threats and attacks on the private network, the datacenter, and everything inside your perimeter. Cloud providers should increase the radius of monitoring and detection to find threats before they even reach your network. Here are some things to keep in mind:
- Traditional web hosting services and content delivery networks (CDNs) are a good fit to host, protect, and cache static web content, but many of these providers do not protect dynamic web content (logons, database queries, searches). All an inbound attacker needs to do is perform a repetitive search every millisecond, and your CDN can do little about it because it must forward all such requests to your backend application or database servers.
- Consider a third-party network hosting service in which all data traffic to your cloud infrastructure first goes through the provider’s network and filters. This provider will absorb attacks from the Internet and forward only legitimate traffic to your network. A significant number of configurable filtering and monitoring options are available from these providers. In addition, consider using these providers for all outbound traffic from your cloud, thus truly hiding all network addresses and services from the public Internet.
- Consider a third-party provider of secure DNS services that has the necessary security and denial-of-service protections in place. Because this provider hosts your DNS services, your internal DNS servers are no longer the attack vector; the third-party DNS provider takes the brunt of an attack and forwards only legitimate traffic.
CONSOLIDATED DATA IN THE CLOUD
Many customers are concerned that data consolidated and hosted in the cloud might be less secure. The truth is that having centralized cloud services hosted by a cloud provider or your own IT organization enables a consolidation of all the top-level security personnel and security tools. Most organizations would rather have this concentration of expertise and security tools than a widely distributed group of legacy or mediocre tools and skillsets. Here are some considerations:
- Technically, a cloud service has no extra vulnerabilities compared to a traditional datacenter, given the same applications and use cases. The cloud might represent a bigger target because data is more consolidated, but you can offset this by deploying the newest security technologies and skilled security personnel.
- Continuous monitoring is the key to good security. Continuous monitoring in the cloud might mean protecting and monitoring multiple cloud service providers as well as numerous network zones and segments.
- Focus monitoring and protections not only at your network or cloud perimeter, but begin protections before your perimeter. Don’t forget to monitor your internal network, because a significant number of vulnerabilities still originate from internal sources.
- Focus on zero-day attacks and potential threats rather than relying solely on pattern- or signature-based security that only covers past threats. Sophisticated attackers know that the best chance of success is to find a new vector into your network, not an older vulnerability that you’ve probably already patched.
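To make the distinction concrete, here is a toy baseline-deviation detector, the opposite of signature matching: it flags behavior that departs from learned normal activity rather than comparing against a list of known-bad patterns. The metric and thresholds are illustrative only.

```python
import statistics

def build_baseline(samples):
    """Learn normal behavior from observed samples (mean and spread)."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, n_sigmas=3):
    """Flag values far outside the learned baseline, signature-free."""
    mean, stdev = baseline
    return abs(value - mean) > n_sigmas * stdev

# Baseline: requests per minute observed during normal operation (made-up data).
normal_rpm = [98, 102, 101, 99, 100, 103, 97, 100]
baseline = build_baseline(normal_rpm)

print(is_anomalous(100, baseline))   # False: within normal range
print(is_anomalous(480, baseline))   # True: never seen before, flag it
```

A signature-based tool would miss the spike unless it matched a known attack pattern; the baseline approach flags any sufficiently novel behavior, which is why it is relevant against zero-day activity.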
As soon as new systems are brought online and added to the asset and configuration management databases (as described earlier), the security management systems should immediately be triggered to launch any system scans and start routine monitoring. There should be little or no delay between a new system being provisioned in the cloud and the beginning of security scans and continuous monitoring. Monitoring of the automated provisioning, customer orders, system capacity, system performance, and security is critical in a 24-7, on-demand cloud environment. Here are some considerations:
- All new applications, servers/virtual servers, network segments, and so on should be automatically registered to a universal configuration database and trigger immediate scans and monitoring. Avoid manually adding new applications or servers to the security, capacity, or monitoring tools to ensure that continuous monitoring begins immediately when services are brought online through the automation process.
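One way to sketch this trigger mechanism, with hypothetical callback names standing in for real scanning and monitoring tools:

```python
# Minimal event-hook sketch: registering an asset fires scan and monitoring
# callbacks immediately, with no manual hand-off. All names are illustrative.
asset_db = {}
_hooks = []

def on_register(callback):
    """Subscribe a callback to run whenever a new asset is registered."""
    _hooks.append(callback)
    return callback

def register_asset(asset_id, info):
    asset_db[asset_id] = info
    for hook in _hooks:   # e.g., vulnerability scan, monitoring enrollment
        hook(asset_id, info)

triggered = []

@on_register
def start_scan(asset_id, info):
    triggered.append(f"scan:{asset_id}")

@on_register
def start_monitoring(asset_id, info):
    triggered.append(f"monitor:{asset_id}")

register_asset("vm-007", {"type": "IaaS VM", "tenant": "agency-b"})
print(triggered)  # ['scan:vm-007', 'monitor:vm-007']
```

The point of the design is that scanning and monitoring are side effects of registration itself, so there is no window in which a provisioned system exists but is unmonitored.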
- Monitoring of automated provisioning and customer orders is critical in an on-demand cloud environment. Particularly during the initial months of a private cloud launch, numerous tweaks and improvements to the automation tools and scripts will be needed to continuously remove manual processes and improve error handling and resource management.
- Clouds often support multiple tenants or consuming organizations. Monitoring and security tools often consolidate or aggregate statistics and system events to a centralized console, database, and support staff. When tracking, resolving, and reporting events and statistics, the data must be segmented and reported back to each tenant so that each sees only its own private information; the software tools used by the cloud provider often have limitations in maintaining the sovereignty of customer reports across multiple tenants.
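A minimal sketch of tenant-scoped reporting over a shared event store (tenant names and events are made up):

```python
# Centralized event store shared across tenants; records are illustrative.
events = [
    {"tenant": "agency-a", "event": "failed_login", "count": 12},
    {"tenant": "agency-b", "event": "port_scan",    "count": 3},
    {"tenant": "agency-a", "event": "port_scan",    "count": 1},
]

def tenant_report(all_events, tenant):
    """Return only the requesting tenant's events; never leak other tenants'."""
    return [e for e in all_events if e["tenant"] == tenant]

report_a = tenant_report(events, "agency-a")
print(len(report_a))  # 2
```

The aggregation stays centralized for the provider's operations staff, while each consuming organization's report is filtered down to its own records before delivery.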
There are three key tenets of continuous monitoring:
1) Aggregate diverse data
Combine data from multiple sources generated by different products/vendors and organizations in real time.
2) Maintain real-time awareness
Utilize real-time dashboards to identify and track statistics and attacks. Use real-time alerting for anomalies and system changes.
3) Create real-time data searches
Develop and automate searches across unrelated datasets, transforming raw data into actionable intelligence; for example, analyze the data to identify the specific IP addresses from which attacks originate so that hostile traffic can be terminated.
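The first and third tenets can be sketched together: aggregate records from multiple sources and search them for repeat-offender source IPs. The log formats and IP addresses below are invented for illustration.

```python
from collections import Counter

# Illustrative records from two different tools/vendors (normally ingested live).
firewall_log = [{"src": "203.0.113.9"}, {"src": "198.51.100.4"}, {"src": "203.0.113.9"}]
ids_alerts   = [{"src": "203.0.113.9"}, {"src": "192.0.2.77"}]

def top_attack_sources(*sources, threshold=2):
    """Aggregate events across unrelated datasets and flag repeat-offender IPs."""
    counts = Counter(rec["src"] for log in sources for rec in log)
    return [ip for ip, n in counts.most_common() if n >= threshold]

blocklist = top_attack_sources(firewall_log, ids_alerts)
print(blocklist)  # ['203.0.113.9']
```

An IP that looks unremarkable in any single tool's log stands out once the datasets are combined, which is exactly the point of aggregating diverse data before searching it.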
Denial-of-Service (DoS) attacks are so common that it is a matter of when and how often, not if, your cloud is attacked. Here are some recommendations:
- Try to isolate your inbound and outbound network traffic behind a third-party provider that has DoS protections, honeypots, and dark networks that can absorb an attack and effectively hide your network addresses and services from the public Internet.
- Have a plan for when a DoS attack against your network occurs. Perhaps you will initiate further traffic filters or blocks to try to redirect or block the harmful traffic. Maybe you have another network or virtual private network (VPN) that employees and partners can revert to during the attack and still access your cloud-based services. Remember that the time to find a solution for a DoS attack is before one occurs; after you are experiencing a DoS attack, your network and services are already so disrupted that it is much more difficult to respond.
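As one example of a filter you might pre-stage as part of such a plan, here is a simple per-source sliding-window rate limiter. The thresholds are illustrative, and real DoS mitigation belongs at the provider and network layer; this only sketches the logic of one traffic-filtering rule.

```python
from collections import defaultdict, deque

class RateLimiter:
    """Per-source sliding-window rate limiter (illustrative thresholds)."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)   # src_ip -> request timestamps

    def allow(self, src_ip, now):
        q = self.hits[src_ip]
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop hits outside the window
        if len(q) >= self.max_requests:
            return False                 # block: source exceeded its budget
        q.append(now)
        return True

rl = RateLimiter(max_requests=3, window_seconds=1.0)
decisions = [rl.allow("203.0.113.9", t) for t in (0.0, 0.1, 0.2, 0.3, 1.5)]
print(decisions)  # [True, True, True, False, True]
```

The fourth request is rejected because three requests already landed inside the one-second window; once the window slides past them, the source is admitted again.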
GLOBAL THREAT MONITORING
Consider implementing security tools, firewalls, and intrusion detection systems that subscribe to a reputable worldwide threat management service or matrix. These services detect new and zero-day attacks that might start somewhere across the globe and then immediately transmit the patch, fix, or mitigation for that new threat to all worldwide subscribers. Thus, everyone subscribed to the service is “immediately” immune to the attack even before the attack or intrusion attempt is ever made against your specific network. These services utilize some of the world’s best security experts to identify and mitigate threats. No individual cloud provider or consuming organization can afford the quantity and level of skills that these providers maintain.
Legacy change control processes need to evolve in an automated cloud environment. When each new cloud service is ordered and automated provisioning is completed, an automated process should also handle the change controls, which can in turn feed or be monitored by security operations. Here are some recommendations:
- Avoid all manual processes that might slow or inhibit the automated ordering and provisioning capabilities of the cloud.
- When new IaaS VMs are brought online, for example, configure the cloud management platform to automatically enter the change into the organization’s change control system as an “automatic approval.” This immediately adds the change to the database and can be used to trigger further notifications to appropriate operational staff or to trigger automatic security or inventory scanning.
- Utilize preapproved VM templates, applications, and network configurations for all automatically provisioned cloud services. Avoid manual change control processes and approvals in the cloud ordering process.
- Remember to record all VM, OS, and application patches, updates, and restores in the change control database. Finally, remember that the change control and inventory databases should also be immediately updated when a cloud service is stopped or a subscription is canceled.