Operational Considerations when Moving Applications to the Cloud


 

There are many application ‘modernization’ strategies, and you can pick the methods you believe are best for modernizing your applications. However, modernization also raises operational issues, including support, maintenance, upgrades, automation, and network utilization.

For example, if we migrate applications to a public cloud, we still need to worry about database maintenance, code updates, and other routine administrative support. Applications hosted as a PaaS or SaaS offering might need less support because the cloud provider takes responsibility for updating and patching operating systems and software components, but we still own the data and its integrity, along with high availability (HA), resiliency, and other cloud characteristics. Of course, in a private enterprise cloud, your staff (or a hired contractor) continue to provide this support, but hopefully in a more automated manner than in a traditional datacenter.

Consider your current staffing levels, outsourced support, and overall IT organizational structure. Most organizations that have already deployed a private cloud or use some public cloud have not reduced their IT staffing levels; however, they have changed the skillsets and team structures to better accommodate a more service-oriented model that is best suited to support a cloud ecosystem.

 

Key areas to consider:

 

Application monitoring

When it comes to mission-critical applications that are core to your organization’s customers and livelihood, you might keep these applications hosted within a private enterprise cloud or with a secure public provider. In either situation, you should still be concerned with monitoring performance and the user experience (UX). Private and public cloud management tools provide some level of VM utilization monitoring, and perhaps limited application-level monitoring, but this is usually not adequate for truly mission-critical applications (it is likely fine for normal business productivity systems). So, regardless of whether your mission-critical apps are hosted in a public or private cloud, you should still use your own application monitoring tools and techniques, including synthetic transactions, event logging, utilization threshold alerts, and more advanced simulated-logon UX tools.
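To make this concrete, here is a minimal sketch of a synthetic-transaction probe in Python. The endpoint URL, latency threshold, and probe interval are illustrative assumptions, not values from any particular monitoring product.

    # Minimal synthetic-transaction probe: times an HTTP GET against a
    # hypothetical application endpoint and alerts when latency or status
    # falls outside defined thresholds.
    import logging
    import time
    import urllib.request

    APP_URL = "https://app.example.com/health"  # hypothetical health endpoint
    LATENCY_THRESHOLD_S = 2.0                   # alert when slower than this

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    def probe(url: str) -> None:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                elapsed = time.monotonic() - start
                if resp.status != 200:
                    logging.error("probe failed: HTTP %s from %s", resp.status, url)
                elif elapsed > LATENCY_THRESHOLD_S:
                    logging.warning("slow response: %.2fs from %s", elapsed, url)
                else:
                    logging.info("ok: %.2fs from %s", elapsed, url)
        except OSError as exc:
            logging.error("probe error for %s: %s", url, exc)

    if __name__ == "__main__":
        while True:      # one probe per minute, like a very simple scheduler
            probe(APP_URL)
            time.sleep(60)

A real deployment would run probes from several locations and feed the results into an alerting system rather than a log.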

 

Service levels

Consider the service-level agreements (SLAs) for applications hosted in the cloud. Many public cloud providers offer a default level of service guarantee and support that is insufficient for mission-critical applications. In some cases, the public cloud provider does not even guarantee that it will back up your data, provide credit, or accept liability for data loss. Be careful how cloud providers word their SLAs: some count only network availability in their uptime calculations rather than PaaS or SaaS platform service levels. Other vendors claim extensive routine maintenance windows (in other words, potential outages) that are likewise excluded from the SLA.
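A quick way to make SLA percentages concrete is to convert advertised uptime into allowed downtime per month, as in this Python sketch (the tiers shown are common examples, used here only for illustration):

    # Convert an advertised SLA percentage into allowed monthly downtime,
    # so vendor guarantees can be compared concretely.
    HOURS_PER_MONTH = 30 * 24  # 720 hours, a common billing-month basis

    for sla in (99.0, 99.9, 99.95, 99.99):
        downtime_min = HOURS_PER_MONTH * 60 * (1 - sla / 100)
        print(f"{sla}% uptime -> up to {downtime_min:.1f} minutes down per month")

Comparing those minutes against a vendor’s claimed maintenance windows quickly shows how much real outage exposure the SLA leaves uncovered.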

 

Federated authentication

Consider user authentication and access controls for cloud-hosted applications. You might want to federate an enterprise user directory and authentication system (e.g., Microsoft Active Directory or LDAP) to the cloud for an always up-to-date and consistent user logon experience. A preferred method is to use a vendor-agnostic industry standard for authentication, such as SAML, especially when federation and SSO are required.
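For illustration, the following Python sketch builds a SAML 2.0 AuthnRequest and encodes it per the standard HTTP-Redirect binding (raw DEFLATE, then base64, then URL-encoding). The entity IDs and URLs are hypothetical, and a production service provider should use a maintained SAML library rather than hand-assembled XML.

    # Sketch of the SAML 2.0 HTTP-Redirect binding: an AuthnRequest is
    # deflated, base64-encoded, and URL-encoded onto the IdP's SSO URL.
    import base64
    import datetime
    import urllib.parse
    import uuid
    import zlib

    IDP_SSO_URL = "https://idp.example.com/sso"        # hypothetical IdP endpoint
    SP_ENTITY_ID = "https://app.example.com/metadata"  # hypothetical SP entity ID
    ACS_URL = "https://app.example.com/acs"            # where assertions are returned

    issue_instant = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ")
    authn_request = f"""<samlp:AuthnRequest
        xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
        xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
        ID="_{uuid.uuid4().hex}" Version="2.0" IssueInstant="{issue_instant}"
        AssertionConsumerServiceURL="{ACS_URL}">
      <saml:Issuer>{SP_ENTITY_ID}</saml:Issuer>
    </samlp:AuthnRequest>"""

    # Negative wbits produces the raw DEFLATE stream the binding requires.
    compressor = zlib.compressobj(wbits=-15)
    deflated = compressor.compress(authn_request.encode()) + compressor.flush()
    encoded = base64.b64encode(deflated).decode()
    redirect_url = IDP_SSO_URL + "?" + urllib.parse.urlencode({"SAMLRequest": encoded})
    print(redirect_url)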

 

Scalability

When migrating applications to IaaS- or PaaS-based cloud services, you might gain scalability features that were difficult, expensive, or simply unavailable in the legacy enterprise environment.

 

Scale out

Scale out is the dynamic or automated addition of VMs, also known as elasticity. Ideally, the application is cloud native or cloud enabled so that it can detect peak utilization and trigger scale out on its own. For legacy applications moved to the cloud, you can have the hypervisor and cloud infrastructure measure utilization against defined thresholds that trigger scale out, even though the application is unaware of these events. Scaling back down after peak utilization subsides is just as important as scaling out. Again, cloud-native applications that handle this automatically are more efficient and faster to react than legacy applications that rely on the hypervisor to scale.
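The sketch below shows such a threshold-driven elasticity loop in simplified Python. The metric source is stubbed with random values, and the thresholds, VM limits, and evaluation interval are illustrative assumptions.

    # Simplified threshold-driven autoscaler of the kind a hypervisor or
    # cloud platform applies on behalf of a scale-unaware legacy application.
    import random
    import time

    SCALE_OUT_AT = 80.0   # average CPU % that triggers adding a VM
    SCALE_IN_AT = 30.0    # average CPU % that triggers removing a VM
    MIN_VMS, MAX_VMS = 2, 10

    def average_cpu_percent() -> float:
        return random.uniform(10, 100)  # stand-in for a real metrics API

    def run_autoscaler(vm_count: int) -> None:
        while True:
            cpu = average_cpu_percent()
            if cpu > SCALE_OUT_AT and vm_count < MAX_VMS:
                vm_count += 1
                print(f"cpu {cpu:.0f}% -> scale out to {vm_count} VMs")
            elif cpu < SCALE_IN_AT and vm_count > MIN_VMS:
                vm_count -= 1
                print(f"cpu {cpu:.0f}% -> scale back to {vm_count} VMs")
            time.sleep(60)  # evaluation interval; also acts as a crude cooldown

    run_autoscaler(vm_count=MIN_VMS)

Production autoscalers add longer cooldown periods and averaging windows so that a brief spike does not cause VMs to thrash up and down.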

Scale up

Scaling up an application refers to increasing the size of a server or, more commonly in cloud computing, a VM, giving it more memory and processors to handle an increasing workload. Whereas scale out launches new VMs to handle peak utilization, scale up enlarges the configuration of the same physical server or VM(s) running your applications (up to the maximum processor and memory capacity of that particular physical server or VM). A downside of scale up is that you often need to reboot the VM for it to recognize the new processor and memory configuration. However, the need for this additional step will likely recede, because some hypervisor platforms are beginning to support dynamically adding processors and memory to a running VM.

 

Finally, consider scalability of your applications in terms of geographic access and performance. You might want to deploy your application in multiple cloud datacenters or in different regions of the world, so that end users of your applications are automatically routed to the closest and fastest datacenter. Be aware, however, that many cloud providers charge additional fees for data replication, geo-redundancy, bandwidth, scale up/scale out, and load-balancing capabilities. For an enterprise private cloud, these geo-redundant communication circuits are often cost prohibitive.
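As a simple illustration of how closest-datacenter routing can work, this Python sketch probes hypothetical regional endpoints and picks the fastest responder; in practice this job is handled by DNS-based or global load-balancing services.

    # Illustrative latency check across candidate regions; the endpoints
    # are hypothetical placeholders.
    import time
    import urllib.request

    REGION_ENDPOINTS = {
        "us-east": "https://us-east.app.example.com/health",
        "eu-west": "https://eu-west.app.example.com/health",
        "ap-south": "https://ap-south.app.example.com/health",
    }

    def measure_latency(url: str) -> float:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=5):
                return time.monotonic() - start
        except OSError:
            return float("inf")  # unreachable regions sort last

    latencies = {name: measure_latency(url) for name, url in REGION_ENDPOINTS.items()}
    best = min(latencies, key=latencies.get)
    print(f"route traffic to {best}: {latencies[best]:.3f}s round trip")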

 

Application performance and benchmarking

When you migrate applications to the cloud, keep in mind that most public cloud providers do not guarantee the performance of your custom applications, nor of any customer-managed PaaS databases or platforms. The cloud provider is simply trying to avoid the argument over who is at fault if an application, particularly one that the provider didn’t create or manage, is not performing the way the customer believes it should. Hosting your own cloud keeps you in complete control of your applications and their performance.

 

Poor application performance in the cloud is often an indicator of a legacy application that was ported without optimization. Just because an application can be copied “as is,” with little or no modification, to a technically faster cloud does not mean it will perform well. Performance testing, using live test users and possibly load-testing tools, is recommended for all applications before and after porting them to the cloud.
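A bare-bones load-testing sketch in Python follows. The target URL, request count, and concurrency level are illustrative assumptions; a dedicated load-testing tool would be used for real benchmarking.

    # Fires concurrent requests at a hypothetical endpoint and reports
    # latency percentiles, useful for the before/after baseline above.
    import statistics
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "https://app.example.com/"  # hypothetical application URL
    REQUESTS = 200
    CONCURRENCY = 20

    def timed_request(_: int) -> float:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=30):
                return time.monotonic() - start
        except OSError:
            return float("inf")  # count failures as worst-case latency

    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        samples = sorted(pool.map(timed_request, range(REQUESTS)))

    print(f"median: {statistics.median(samples):.3f}s")
    print(f"p95:    {samples[int(len(samples) * 0.95)]:.3f}s")
    print(f"max:    {samples[-1]:.3f}s")

Running the same script before and after migration gives directly comparable percentile figures.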

 

Having the original application performance baseline measured before any transition to the cloud will give you valuable data to determine expected performance levels. You might be able to use the scale-up or scale-out techniques described earlier to improve performance and meet acceptable levels without redesigning the entire application.

 

Network bandwidth

Most public cloud providers do not charge for uploading or importing data, but they do charge transaction, metered-bandwidth, or storage input/output fees for network bandwidth. When your applications move to the cloud for the first time, it is often very difficult to estimate the bandwidth they will consume over the Internet or another network circuit, which can result in a bit of a surprise at the end of the first month the application is in production use.
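A back-of-envelope estimate before migration can reduce that end-of-month surprise. In this Python sketch, every figure (the traffic profile and the per-GB rate) is an illustrative assumption to be replaced with your own measurements and your provider’s published pricing.

    # Rough estimate of monthly metered-bandwidth (egress) charges.
    requests_per_day = 500_000
    avg_response_kb = 120          # average payload returned per request
    price_per_gb = 0.09            # sample metered-bandwidth rate in USD

    gb_per_month = requests_per_day * 30 * avg_response_kb / (1024 * 1024)
    print(f"estimated egress: {gb_per_month:,.0f} GB/month")
    print(f"estimated cost:   ${gb_per_month * price_per_gb:,.2f}/month")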

 

This bandwidth issue is much less of a problem for private clouds, because they are usually hosted within your organization’s datacenters or reached via private network circuits. Some public cloud providers offer an optional direct-connection service whereby you pay for a private circuit into the provider’s network, bypassing most of the normal variable bandwidth fees in favor of a fixed direct-connect fee. This is well worth it for high-utilization, high-bandwidth needs.