A baseline for understanding how these two cloud architecture approaches are being used by major enterprises.

An article by Neal Matthews.

Although it’s been a long road, enterprise IT has finally reached at least a general awareness of the benefits of cloud computing. But while a picture of cloud is emerging in strategic planning, the path between the “here and now” and that rosy cloud future tends to be murky. These companies’ future cloud environments are variously described as multi-cloud, hybrid cloud, cloud bursting, distributed cloud and even fog computing. Any one of these could fill a whole series of articles, so let’s focus on the two terms that are most often confused, yet are likely to be the most important over the next few years: hybrid cloud and multi-cloud.

Hybrid Cloud

No large enterprise, no matter how well prepared, can simply leap to the cloud in one fell swoop, even if the goal is to migrate completely to a public cloud provider such as AWS, Google Cloud Platform, or Microsoft Azure. There will necessarily be a transition period during which some resources, systems and workloads have been migrated to public cloud while others remain in enterprise data centers or colocation facilities. Operating those two sides together is the most common example of a hybrid cloud.

Unless an organization is literally “born in the cloud” (built on the public cloud for essential infrastructure and product/service delivery, plus supporting SaaS services such as web-based email, Salesforce and Zendesk), every enterprise’s cloud journey must include preparation for simultaneously supporting a cloud infrastructure and a legacy infrastructure. This requires conscious decisions about the level of integration vs. isolation that will be achieved between the data center side and the cloud side.

For many organizations, it may be tempting to simply graft a separate cloud environment alongside their traditional data centers, minimizing disruption to existing internal operations and avoiding the introduction of new tools into established environments. However, this path leads to increasing complexity as more and more functions have to be performed in parallel in multiple environments. So while hybrid cloud architectures vary, it is a best practice to anticipate the need to develop and deploy integrated platforms and architectures wherever practical.

Here are some characteristics that are typical of successful hybrid cloud environments:

  • A centralized identity infrastructure that applies across multiple environments (see the federation sketch after this list)
  • Persistent, secure high-speed connectivity between the enterprise and the cloud environment
  • Integrated networking that securely extends the corporate network, creating a segmented but single overall network infrastructure
  • Unified monitoring and resource management
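
To make the first point concrete, the sketch below shows one common federation pattern: an on-premises workload exchanges a token issued by the central corporate identity provider for short-lived cloud credentials, so the same identities work on both sides of the hybrid environment. This is only a minimal sketch, assuming AWS and boto3; the role ARN, session name and token source are hypothetical placeholders, and the other major providers offer equivalent federation mechanisms.

    # Minimal sketch: exchange a token from the central corporate IdP (OIDC) for
    # temporary AWS credentials, so on-premises identities can reach cloud
    # resources without a second, cloud-only identity silo.
    # The role ARN and token source below are hypothetical placeholders.
    import boto3

    def get_cloud_session(oidc_token: str, role_arn: str) -> boto3.Session:
        """Trade a token from the central identity provider for short-lived AWS keys."""
        sts = boto3.client("sts", region_name="us-east-1")  # adjust region as needed
        resp = sts.assume_role_with_web_identity(
            RoleArn=role_arn,                    # role trusted by the corporate IdP
            RoleSessionName="onprem-batch-job",  # hypothetical session name
            WebIdentityToken=oidc_token,         # token issued by the central IdP
            DurationSeconds=3600,
        )
        creds = resp["Credentials"]
        return boto3.Session(
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )

    # Example usage (retrieving the token from the IdP is out of scope here):
    # session = get_cloud_session(token, "arn:aws:iam::123456789012:role/onprem-federated")
    # session.client("s3").list_buckets()

The design point is that authorization decisions stay anchored to the single corporate identity source, which is what keeps the two environments feeling like one.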

Multi-cloud

This term seems relatively self-explanatory: deploy cloud infrastructure on more than one public cloud provider, with or without an existing private cloud. However, WHY companies might consider multi-cloud approaches and architectures is where things get interesting.

Risk Reduction (“Don’t put all your eggs in one basket!”)

When organizations decide to go to public cloud, a typical concern is the perceived risk of depending on a single external firm such as Amazon, Google or Microsoft. In response, it is common to wonder whether it makes sense to minimize that perceived risk by using more than one cloud provider, maintaining a complete and separate environment in each. This provides an additional option in case the relationship with one provider becomes untenable for some reason and, in theory, makes it possible to maintain services in the event of an outage at one provider. There is obvious, instinctive logic to this approach; however, there are also some realities that argue against it.

The first challenge is the complexity of maintaining an additional complete set of architectures and operational relationships, one for each provider. Given that most companies will already be operating in a hybrid cloud, that makes a total of three environments to maintain and operate. This doesn’t make multiple cloud providers impossible, but the cost needs to be understood. Note that there are third-party vendors offering valuable products and services that provide standardized abstraction layers, theoretically minimizing the complexity of managing multiple cloud providers. A good example is Pivotal Cloud Foundry, especially known for enabling applications to run on multiple clouds.

An important caveat: as soon as you depend on an “abstraction” provider, you have re-created the single-provider failure point you were trying to avoid in the first place. In addition, there is nearly always a lag between a cloud provider releasing a feature and the abstraction provider being able to support it, which creates an agility penalty. Given that enterprise agility and time-to-market for new products and features are critical motivations for moving to the cloud in the first place, giving away some of that agility is counter-productive. Finally, because the goal is to support duplicate environments with consistent capabilities regardless of which cloud provider is operating underneath the abstraction layer, the approach requires that each provider offer the same underlying capabilities. That leads to the next challenge.

The second (and larger) challenge to the “distribute your eggs” approach is the drive to the lowest common denominator. By definition, if the goal is to operate duplicate environments, then every capability that is relied upon must exist in both. While the three main cloud providers offer services and features that overlap significantly, they are not even remotely close to identical. The result? Any full-fledged implementation of the “don’t put all your eggs in one basket” multi-cloud approach is by definition limited to the lowest common denominator of features shared by the chosen providers. This again results in an agility penalty: when new cloud provider features and services are being considered, it is necessary to wait until BOTH providers offer the feature or service before it can be used in this form of multi-cloud implementation.

Architectural Similarity (“Like for Like”)

It’s not uncommon to find different technology stacks in different divisions or departments because of acquisitions or high levels of autonomy among groups. One division might be heavily built out on the Microsoft ecosystem with SQL Server, .NET and C#, while another has a history of Linux, Java and other open source technologies. We sometimes see a pattern where individual departments extend workloads into the public cloud based on ease of migration. For example, Azure makes migrating Microsoft workloads relatively easy, so it’s not uncommon for one department to choose Azure for that reason while another chooses AWS.

It’s important to note that this pattern is not usually consistent with best practice. While it offers some cloud benefits (e.g., OpEx over CapEx, scalability and agility), it creates two or more separate public cloud footprints, adding operational complexity and limiting the ability to achieve a cohesive view of costs. In the end, it essentially becomes two or more parallel environments to run and govern.

Feature Availability (“Best of Breed”)

While the multi-cloud approaches above are fairly common today, we believe a different model is likely to be more successful going forward. This promising multi-cloud architecture can be thought of as “Best of Breed.” The mindset here is that the agility lost by insisting on duplicating every feature across environments costs the enterprise more than the stability theoretically gained by deploying two fully interchangeable cloud provider feature sets. The guiding principle is that reaping the full benefits of cloud depends on being able to take advantage of the best service and feature advances as they arrive.

A good best of breed approach starts with selecting a primary cloud vendor. That vendor is the center of gravity for cloud operations, with identity and security designs centralized around it. It is of course straightforward to adopt new services and features from the primary vendor, but the enterprise also explicitly leaves open the possibility of reaching across to another cloud vendor for a specific service, capability or feature that is either not available from the primary vendor or does not meet requirements as well.

A reasonable question might be: doesn’t that add complexity? The answer is yes, it does. However, under the right circumstances, the benefit is worth the complexity. In this model, the architecture assessment process explicitly considers the option of using a second (or third!) cloud provider, provided the value for the use case justifies the extra effort. Such scenarios can include the following:

  • Reaching out from the primary cloud provider to use an API-driven service on the second provider. Because authentication can be handled at the individual request level, an entire duplicate identity infrastructure on the second cloud provider isn’t required.
  • Utilizing a particular query-friendly data store in the second cloud provider, populated via messaging queues or object storage originating in the first cloud provider. (This can be effective if egress data volumes aren’t too high; a minimal sketch follows this list.)
  • Performing machine learning training on a second cloud provider, especially if the source data is publicly available, then bringing the resulting models back to the primary cloud to build and deploy real-time scoring applications.
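
To make the second scenario concrete, here is a minimal sketch, assuming AWS is the primary provider and Google BigQuery is the query-friendly store on the second provider: records landed in an S3 bucket are streamed into a BigQuery table, authenticating to the second cloud with a narrowly scoped service-account credential supplied at the client level rather than a full duplicate identity infrastructure. The bucket, object key, table and credential file names are hypothetical placeholders.

    # Minimal sketch: populate a query-friendly store on a second cloud (BigQuery)
    # from object storage on the primary cloud (S3). All names are placeholders.
    import json

    import boto3
    from google.cloud import bigquery
    from google.oauth2 import service_account

    S3_BUCKET = "primary-cloud-events"           # hypothetical bucket on the primary cloud
    OBJECT_KEY = "exports/events.json"           # hypothetical newline-delimited JSON export
    BQ_TABLE = "analytics-project.events.raw"    # hypothetical table on the second cloud

    def copy_events_to_bigquery() -> None:
        # Read the export from the primary cloud's object storage, relying on the
        # ambient AWS credentials/region of the environment this runs in.
        s3 = boto3.client("s3")
        body = s3.get_object(Bucket=S3_BUCKET, Key=OBJECT_KEY)["Body"].read().decode("utf-8")
        rows = [json.loads(line) for line in body.splitlines() if line.strip()]

        # Authenticate to the second provider with a scoped service account supplied
        # per-client, so no duplicate identity infrastructure is needed there.
        creds = service_account.Credentials.from_service_account_file("bq-writer.json")
        bq = bigquery.Client(credentials=creds, project=creds.project_id)

        # Stream the rows into the query-friendly store; surface any per-row errors.
        errors = bq.insert_rows_json(BQ_TABLE, rows)
        if errors:
            raise RuntimeError(f"BigQuery insert errors: {errors}")

As the second bullet notes, this pattern only pays off when egress volumes (and therefore cross-cloud data-transfer charges) stay modest.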

While the above are only three examples, they illustrate how this model provides a balanced approach, avoids the lowest common denominator problem, and provides access to the latest cloud innovations, all while keeping complexity in check.

Of course, every enterprise is different, and there may be compelling reasons and priorities in a specific case that point to a different approach. There are certainly architectures and options available other than those just discussed, but this should provide a solid baseline for understanding how two major architectural approaches to cloud are being used by major enterprises. Maybe yours can use them as well!

 
