Google Cloud Platform an Architect's Guide
Ebook · 674 pages · 12 hours

About this ebook

 

New 2021 Edition with 12 new hands-on labs

Learn fundamental to advanced GCP architectural techniques using 30+ real-world use cases.

The 'Google Cloud Platform an Architect's Guide' is a comprehensive handbook that covers everything you need to know, from GCP fundamentals to advanced cloud architecture topics. The book covers what you need to understand to pass the Google certification exams, but goes far further and deeper as it explores real-world use cases and business scenarios. You don't need to be an IT expert, as the book is designed to cater for both beginners and those experienced in other cloud or on-premises environments. To that end, the book is split into distinct parts that cater for all levels of expertise. Part 1 is aimed at the novice, someone new to a cloud architecture environment who needs to become familiar with the fundamentals of cloud architecture and industry best practices, so the more experienced reader may wish to skip this section. Part 2 takes a far deeper dive into GCP theory and practice, as well as providing real-world use cases and practical tips that are beneficial for architects at all levels. Part 3 delves much deeper into GCP practical theory on elasticity, scalability and resilience. It also covers Kubernetes in greater detail and touches on High-Performance Computing and IoT designs. The book closes with a final part dealing with cloud-native design practices; as such it covers design, monitoring, notification and remediation techniques to ensure best practice in cloud-native application design, deployment, stabilisation and commissioning.

 

Language: English
Release date: Jun 24, 2021
ISBN: 9781393875536

    Book preview

    Google Cloud Platform an Architect's Guide - Alasdair Gilchrist

    Part I – Google Cloud Platform Fundamentals

    Chapter 1 - An Introduction to the Google Cloud Platform

    In this opening chapter of ‘Google Cloud Platform an Architect’s Guide’ we present a brief overview of the Google Cloud Platform. In the following chapters of Part 1 we will build upon this by introducing the four main categories of cloud services: compute, storage, Big Data and machine learning. However, we must not forget cloud security and networking, as these are the essential glue that holds everything together in a robust and low-latency manner.

    Moreover, we will explain how GCP is flexible enough to provide you with options for computing and hosting, where you can choose a platform to:

    ●  Work in a serverless environment – Cloud Functions

    ●  Use a managed application platform – App Engine

    ●  Leverage container technologies to gain lots of flexibility – Kubernetes Engine

    ●  Build your own cloud-based infrastructure to have the most control and flexibility – Compute Engine

    When you consider what technologies constitute Google Cloud Platform you should also contemplate the areas of shared responsibility. You can imagine a spectrum where, at one end, you retain most of the responsibility for resource management, which is Compute Engine. At the other end of the spectrum is Cloud Functions, where Google retains most of those responsibilities as it is a fully managed service, often called a serverless environment. Regardless, GCP gives you the option either to manage resources hands-on or to take advantage of Google’s managed services; it’s up to you.

    Figure 1

    In this chapter we will start by focusing on the VM infrastructure within the Compute and Storage services together with the topic of networking. Networking is important to us as we cannot effectively use any of the cloud’s resources without using the Cloud Networking services. However, cloud networking is often the least understood component of cloud computing as it differs in several ways from the traditional on-premises networking that you may already be familiar with. 

    As a result, over the course of this book we will build upon the overview presented in this chapter by providing deeper and wider discussions of cloud networking in subsequent chapters. The same can be said of security: we will first introduce some basic security concepts that pertain to the cloud before going into greater detail later in the book. But for now, let us start with some basic concepts of cloud computing.

    Cloud Computing Concepts

    Cloud computing is about the seamless delivery of computing services—including infrastructure such as virtual servers, large-scale storage, databases and networking, as well as platforms for Big Data, advanced analytics and artificial intelligence—all over the Internet (the cloud). These services provide you with the means for faster innovation through flexible infrastructure and rapid deployment of resources, with the benefits of economies of scale. You typically pay only for the cloud services you use; this helps you lower your capital expenditure, run your infrastructure and operations more cost-efficiently, and scale quickly to meet unpredictable demand or volatile business requirements.

    Cloud computing platforms have become very popular over the last decade due to some fundamental principles:

    ●  First, the cloud can be, with diligent planning, a very cost-effective and highly efficient place to rapidly develop, deploy and run your applications and store your app data. This is mainly because it reduces any upfront capital investment in servers and other infrastructure, so you do not need to over-provision at the start of a project.

    ●  The cloud greatly reduces the operational burden, if not the overall cost, because you are no longer saddled with time-consuming tasks such as provisioning servers and then maintaining and backing them up; it frees you from a lot of housekeeping chores.

    ●  Cloud services can scale dynamically to meet the current operational demands by growing or shrinking resource consumption as required. This type of dynamic or elastic scaling of resources can result in reasonably priced access to high performance and almost infinite infrastructure resources.

    ●  Cloud computing provides truly global access which enables the delivery of your services and products closer to your customers across the cloud provider’s own global-scale infrastructure. This vast network inherently provides global load balancing and the distribution of content closest to the customer resulting in lower latency and economy of scale.

    ●  Cloud computing makes data backup, disaster recovery, and business continuity easier and less expensive because data can be mirrored at multiple redundant sites on the cloud provider’s network at a fraction of the cost of an on-premises solution.

    ●  Last, but not least, productivity: on-premises data centres typically require a lot of manual hardware installation, application and stack provisioning on top of hardware configuration, software and security patching, and many other support duties. Cloud computing removes the need for many of these tasks, so IT teams can spend their time on more innovative and business-oriented goals.

    Those are the main features and benefits of cloud computing in general, but how is Google’s platform organized and what differentiates it from the competition?

    Google Cloud Platform

    As we have just seen, the main drivers behind cloud computing that enthuse customers to migrate their IT systems to remote service providers relate to cheaper and more efficient availability of resources. This also provides for disaster recovery and business continuity solutions that come at a fraction of the cost of redundant data centres. However, although this explains the attraction for companies with limited budgets, scarce technical resources, little in-house IT skill, or only a local presence, it doesn’t address the needs of SMEs or larger organizations where IT skills are in abundance and where there are multiple locations for business continuity. Furthermore, for these large enterprises it is control, security and management that are the key concerns. Indeed, for many large organisations the cost savings that are so apparent to the smaller organisation and the start-up may prove neutral or even negative. This is simply because the cloud financial model shifts the burden away from one-time capital expenditure (CapEx) onto recurring operational costs (OpEx). Consequently, a large organisation with thousands of applications and servers may find that the cloud is not the cost-effective nirvana it has been led to expect. Thus, to understand why the cloud is still so attractive and compelling to these types of organizations we have to consider some history.

    Colocation

    The fact is that many medium to large organizations have been on the path to the cloud for many years; they just didn’t realize it. For instance, colocation has been around for decades as companies strove to reduce data centre costs by sharing facilities. Indeed, many large companies would offset the huge costs of operating their data centres by providing colocation services to medium-sized companies, a very cost-effective solution that served both parties’ needs. Consequently, many smaller, non-technology-based companies would much rather colocate than start investing scarce capital into running their own power-hungry and operationally intensive data centres. Instead, they could simply rent space in another large organization’s shared facility, and this was known as colocation.

    For many businesses, running data centres is not a core business goal, so colocation frees up capital for more business-centric initiatives such as research and development or improving their products and services, rather than pouring all their hard-earned cash into paying huge cooling and electricity bills.

    However, for the large enterprise colocation was also very attractive as it contributed to the upkeep of their essential data centres. Indeed, we can look at colocation as being just one of the stages along the road to cloud computing.

    Virtualization

    Virtualization appeared on the scene in the early 2000s. Server virtualization was hugely popular as it addressed the inefficiencies of single servers running a single application, which was considered a common best practice at the time. That practice, however, led to the proliferation of small servers that ran at barely a fraction of their capacity, which resulted in escalating energy costs and huge inefficiencies. IT departments therefore adopted server virtualization as a priority, as it brought much-needed efficiencies and superior provisioning times. Virtualization allowed IT to do a lot more with fewer servers and other infrastructure components. They could deploy it with little trouble, as a virtualized data centre matches the architecture of a physical data centre. But now there was a problem: virtualized environments were more complex to operate and manage.

    Virtualization lets us all use resources more efficiently and, just like colocation, it lets us be more flexible too. However, it made troubleshooting an individual application vastly more difficult. Before, with the one-application-one-server model, the tiresome question of whether a fault was a software or a hardware issue was easily sorted. With virtualization you make efficient use of shared server resources by running many low-resource applications on the same server, in order to reduce waste and the proliferation of physical servers. But there was a downside: data centre support staff needed to know exactly how virtualization worked. They also needed to know the exact requirements of each application, so maintaining the infrastructure became far more complex. Thus, virtualisation resulted in lower capital and energy costs, but that benefit was often wiped out by higher operational costs, such as wages for experienced support staff.

    Moreover, capacity planning had previously been considered an unattractive but relatively straightforward task. With virtualisation it became an essential part of server and network management. After all, with virtualisation it’s even more important that you determine how many resources each application may need and how each might scale, and you’ll need to predetermine all these factors before you deploy them on a host server. One solution was to run a single VM on a server, but that negated the benefits of virtualisation. Running several VMs on a single server, however, requires a detailed understanding of each VM’s operational requirements. Indeed, before deployment of an application it was often the case that you had to perform detailed capacity planning and then present the results to the change control board or the project management team. This negated much of the flexibility and rapid-provisioning benefit.

    Software Defined Networks (SDN)

    The advent of the internet in the early ’90s and its subsequent global proliferation in the early 2000s made the huge internet providers aware that virtualization was not a solution in itself, as the ability to scale became paramount. Network issues, and in particular the latency introduced by traditional layer-3 routing, made them look towards Software Defined Networks (SDN) and a shift away from virtualization towards container-based architectures.

    As a result, these internet-scale businesses like Google understood that traditional networking and server architecture with virtualisation would not scale to their global demands. Consequently, large internet-scale organisations switched to a container-based architecture built upon an automated, elastic SDN cloud. With this model all the services are automatically and proactively provisioned and seamlessly configured so that the infrastructure is optimized to run stacks, environments and popular applications.

    But what was most important is that the pioneers of cloud computing such as Google had no vendor roadmap to follow when developing their revolutionary SDN networks, so they created their own global SDN networks and developed their own routers, switches and servers to meet their own worldwide criteria.

    Moreover, some of the pioneers realized that these new networks could support not just their own global ambitions and scope but also the requirements of others. What Amazon in particular realised early on was that their vast pool of resources could now be shared reliably, efficiently and cheaply over the internet, just like colocation. The Amazon cloud made colocation and in-house virtualisation look like poor alternatives, and the cloud became very attractive for small organizations that lacked Amazon’s global scale.

    Service Delivery (XaaS)

    Further, around the late 2000s the cloud delivered a much easier way to manage applications through Software as a Service (SaaS). This way of delivering cloud-hosted software over the internet was fantastic, as it freed on-premises technicians from having to be application experts. Indeed, all you needed was a robust internet connection to benefit from an enterprise-class hosted software solution.

    The CRM platform Salesforce was one of the pioneers, and it dislocated IT in many respects from desktop and application support. But SaaS was also the instigator of Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), as these offerings soon followed.

    IaaS, though hard to comprehend at the time, provided raw compute, storage and network services organized in ways that are far cheaper to provision and more efficient than on-premises servers. In contrast, the PaaS offerings were more suitable for developers that needed a platform on which to run their application code. Moreover, PaaS provides developers with software libraries that give easy access to run-times, drivers and interfaces that run seamlessly on a transparent, underlying managed infrastructure.

    Whereas SaaS had been a huge success, the introduction of IaaS by cloud providers was met with far more reticence. Many larger companies thought that giving up control and management of their infrastructure was not an attractive proposition. Indeed, the larger organisations were already running their own version of the cloud via colocation in their large data centres. Hence, they didn’t see any benefits of moving operations to the cloud; all they saw were the potential hazards, mainly regarding loss of control and security.

    Security was the main obstacle to cloud adoption for larger organisations. Indeed, even with SaaS there had been concerns regarding the multi-tenant model and data segregation between cloud customers. Another concern for big companies was the added complexity of running multi, hybrid or private clouds, as there was no single solution for businesses operating proprietary technologies. This can be demonstrated through the understandable concerns of moving mission-critical databases or applications to a single cloud provider. It simply couldn’t be done; for example, the easiest and generally best solution would be the migration of proprietary solutions such as Oracle, IBM and Microsoft databases and applications to their respective vendor clouds. This is because for large enterprises these databases and applications are not just mission-critical, they also tend to be huge. That makes them unsuitable for running within a VM due to their scale and performance requirements. But of course a multi-cloud solution raises more issues, as these applications and databases often need to communicate with one another, which adds another layer of network latency and security complexity.

    On the other hand, for small companies migrating to the cloud made perfect sense once their security concerns could be allayed. They could migrate their Oracle, IBM Db2 and SQL Server solutions easily, as they could run them intact within VMs on the same cloud. This mitigated many of the security and inter-communication latency issues. But for the enterprise there were still significant hurdles to be overcome, such as decomposing vast monolithic applications and huge databases, as this would take a lot of time and money with little in the way of obvious rewards. The major issue was that organisations thought of moving to the cloud simply as a technology upgrade, and an expensive upgrade at that. They did not see it as a business or cost differentiator as there were no barriers to entry; if you could do it, so could anyone else. Furthermore, the much-heralded cost efficiencies didn’t really work at their scale.

    The re-emergence of interest around 2010 in SDN, microservices and containers changed the enterprise IT mindset. Now the focus was on moving away from the 3-tier data centre structure towards a faster layer-2 architecture. The former was how data centres were previously designed and client/server applications built and deployed. This model supported monolithic applications and huge databases on massive server clusters. However, the 3-tier model also added latency between the layers, as traffic had to pass through physical routers and often firewalls at the layer boundaries. The layer-2 model was faster as there were no inter-layer communications; everything was on the same flat Ethernet network. This meant everything could communicate directly with everything else – if it was permitted – on a common network, with the security and virtual firewall rules enforced centrally by the SDN network.

    The obvious improvement in on-premises performance and reduced latency between applications encouraged the use of microservices, which are single-function modules of code. Microservices encouraged the decomposition of monolithic applications, as they brought the benefits of faster development and the promise of agile development, continuous improvement through frequent updates, and rapid deployment. Now large organisations’ business teams were beginning to pay attention.

    Next came containers, and this is where the modern cloud technologies and large enterprise data centre technologies would eventually converge on a common architecture. Don’t get me wrong, though: the introduction of microservices, containers and flat networks has made cloud migration feasible, but it is still not easy. This is because decomposing a mission-critical application into microservices or sharding vast databases into manageable components takes time and effort. However, the rewards are now scalability, flexibility, speed, redundancy, centralised security, global reach, multi-regional caching, network efficiency and infrastructure cost-effectiveness, amongst many others. But the drivers for modernisation, or as it is now called digital transformation, came from the business teams and the C-suite. This turnaround from an IT push to a C-suite and business-team pull came about because the need to modernise to a more pervasively digital, flexible, agile and rapid business environment was now deemed to be business critical. The business teams could now translate the cloud benefits into real business objectives such as wider market visibility, better customer identification and personalisation, informed and dynamic decision-making, faster time to market and advanced capability-building.

    Framing the ROI

    Consequently, when framing any question regarding return on investment for moving to the cloud, for all but the smallest of companies or start-ups we should avoid stressing the cost-saving benefits. The reason for this is twofold: first, it may not actually be true for larger or well-established organizations; second, it is better to concentrate on a composite ROI whereby you take a three-tier approach. The lower tier concerns the infrastructure Total Cost of Ownership (TCO) and capital investment in modernisation. For a start-up the value generated at this tier can be significant, but for most larger organizations the CapEx and OpEx ratio will typically be neutral or even negative over 3-5 years. The second tier concerns Human Resources (HR) with regard to operations and maintenance, and this is where organisations can make some positive ROI by offloading much of the operations, management and maintenance burden and thus reducing staffing, training and administration requirements. However, even at this HR tier the ROI is typically only around 5%. It is at the third tier, which measures business value, that we will typically see a significant return on investment. Indeed, when we include business benefits in the ROI, this is where we start to see ROI in excess of 50%.

    Therefore, when considering the ROI of moving to the cloud you should take a holistic approach by considering infrastructure, HR and business value to reach a composite value. This will require involving the business teams in technology decision-making, as well as elevating IT to a business function with responsibility for creating business value. After all, this is what digital transformation is all about.

    Spoilt for Choice

    In the last section we discussed how and why the ROI and the benefits of moving to the cloud can vary depending on the size and maturity of the organisation. Indeed, we learned that cost savings are not always the predominant driver and that increased business flexibility, increased business capability and speed to market are often the key drivers. However, for smaller businesses, and certainly for a start-up where preserving precious seed capital is essential to survival, reducing infrastructure CapEx is hugely important. So, switching perspective, we can now view the benefits from an SMB or start-up point of view.

    In years gone by, several internet providers filled the market demand for outsourcing the data centre, meeting the needs of financial managers looking to offset huge IT investments by outsourcing the service. Indeed, there were internet service providers offering hosted web services that delivered many of the advantages we have already mentioned: touch-of-a-button provisioning and a set monthly subscription. There was no need for on-premises experts, as these were managed services aimed at SMBs and small companies with little technical experience, but they charged the same every month even if you had barely used the service.

    On the other hand, today’s cloud providers like Google provide a service that is billed based upon the resources that you use. Indeed, Google introduced billing by the second, which can reduce operational expense significantly. Thus, some companies can make considerable savings using cloud services, which deliver a variety of functions and applications for running a network, managing the infrastructure and getting the best possible value from capturing, storing and processing data.

    The problem was that when cloud computing came to the attention of business the terms were not really intuitive; even IT, which certainly embraced the more pervasive SaaS, didn’t always understand the difference between IaaS and PaaS.

    To get a better understanding of these cloud services we need to contemplate the IaaS model at a high level. The concept of IaaS is that you pay for the use of cloud-hosted infrastructure, but only for the resources that you use or reserve and no more. This is very advantageous for business as it means you can forget about detailed capacity planning: should you get the forecasting wrong and your product or service goes viral, you will still scale to meet demand but only pay for the resources your application uses. When using on-premises infrastructure, by contrast, you would have had to predetermine the resources needed to scale and over-provision the servers and network bandwidth with a safety margin in order to reach your most ambitious and affordable goals. IaaS is far more forgiving as it is dynamically scalable, expanding and shrinking to meet the current resource requirements of the application.

    This is also true of the PaaS model, in so much as you pay for what you use. Again, this must be considered preferable to having to forecast your capacity and usage beforehand and pay up front for resources you may not use. Cloud computing has become very popular with SMBs because of this pay-as-you-use model. Further, cloud computing has eventually become dominant in many SME IT environments due to the shift towards automated configuration, provisioning and deployment, all with no human intervention. It may seem counterintuitive, but the IT leaders that first railed against cloud computing eventually came around to finding that autonomous managed infrastructure and automated managed services were a good thing.

    Thus, we are now seeing more enterprises moving their IT operations to the cloud, but the old question persists: which cloud? The question is difficult to answer as it depends on circumstances. As we discussed earlier, if you are heavily dependent on a particular technology, say Oracle, IBM or VMware, then it makes sense to lift-and-shift your operations straight up to the respective vendor’s cloud. Likewise, for the vast number of Microsoft business clients it makes more sense to deploy to the Microsoft Azure cloud. This is simply because the respective vendors know their products better than anyone else. For SMBs and SMEs with more general requirements and open-source solutions, Amazon AWS is by far the most popular and largest cloud environment. AWS has the largest global reach and number of points of presence; it also has by far the largest portfolio of cloud services. So, why would an SMB, SME or large enterprise want to migrate their operations specifically to Google Cloud Platform?

    The Google Cloud Platform

    Google has built one of the largest fibre-optic networks in the world, investing billions of dollars over the years to build a network that meets its own global requirements. As a result, the network has been designed to deliver the highest throughput with the lowest possible latencies for Google’s internet-based applications. Meeting those specifications required Google to design an SDN network unlike anything on any vendor’s roadmap at the time.

    The global scope of Google’s cloud network may not be as large as AWS’s, but Google is investing heavily and catching up quickly. For example, the global scale of the network requires interconnects at more than 90 Internet exchanges, which ensures fast, efficient connections with low latency. What is more, Google has more than 100 points of presence worldwide that are geographically and strategically placed. This global distribution of PoPs (Points of Presence) ensures redundancy and high availability. Thus, if an Internet user sends traffic to a Google resource that is temporarily unavailable, Google will detect the request and respond via the closest edge network location. This method provides the lowest latency and the best customer experience.

    In addition, Google’s edge-caching network provides geographical caching of content so that content is always delivered as close to the end user as possible. A distributed cache minimizes latency and improves the customer experience by reducing connection and download times, as well as providing a measure of business continuity.

    Although Google may be considered to have been slow to benefit from sharing their global network, they have tried over the last few years to catch up with Amazon’s AWS cloud platform. Consequently, they have invested billions of dollars building their own cloud platform and global network fibre links to ensure its networks are low latency, resilient and highly efficient. 

    Today, Google strives to make its cloud platform attractive, affordable and available to its customers. This change in strategy appears to have come about because Google, like Amazon, has come to believe that data is, and will remain, integral to all businesses, regardless of size or industry. This was not always the case, as even quite recently IT was often considered a service function of a business. After all, manufacturing or product development, sales, marketing and finance were the core profit-generating business units; IT was simply a service department that generated no new money or profit. But as technology has become integrated and almost inseparable from all these traditional business functions, cloud providers have come to realize that the way a company will differentiate itself from its competitors will be through leveraging cost-effective advanced IT technology to create new business value from its data. Google’s belief is that future companies will largely deal in data, through advanced analytics, Artificial Intelligence and its subset, machine learning.

    Hence, Google’s focus is on providing cloud services for businesses that focus on data and analytics, but also for companies that need to reimagine their data centres in the cloud to extend their scale and global reach. In the future, companies must have the ability to service customers around the globe, so customers that build on-premises solutions will be at a severe disadvantage as the costs are simply prohibitive. Worse, it is crucial capital that is potentially wasted, when it should be spent on core business functions that generate profit.

    Thus, Google believes that every company will need to rely on data analytics as well as have a global presence, and this will require a global partner that can provide the infrastructure to scale to their ambitions. Ultimately, this strategy will allow companies to release their cash and energy to be spent on developing digital processes that permeate the entire organisation and its value and supply chains. It is by following this roadmap, with its emphasis on affordable, scalable, global and highly advanced technology, that organisations will be able to differentiate themselves and become digital leaders and creators of innovative business value.

    Cloud Native Architecture

    Google’s network is designed to support its eight key web-based products, each of which has in excess of one billion users, so it must be designed to be robust, scalable and efficient. But how does it differ from traditional network design?

    At a high level, we can consider cloud-native architecture to mean a design that adapts to take advantage of the many new possibilities—but also the very different set of architectural constraints—offered by the cloud compared to traditional on-premises infrastructure. Consider some of the high-level elements that you as a network architect are trained to consider, such as:

    ●  The functional requirements of a system

    ●  The non-functional requirements

    ●  The architectural constraints

    What we actually will find is that the cloud is not much different from the on-premises data centre. This is simply because the cloud is made of the same fabric of servers, disks and networks that makes up traditional infrastructure. This means that almost all of the principles of good architectural design still apply for cloud-native architecture.

    While many of the functional aspects remain the same whether on-premises or in the cloud, the latter is often more challenging when it comes to the non-functional aspects, and these do often result in design constraints. Indeed, some of the fundamental assumptions about how the infrastructure fabric performs change when you’re in the cloud. The differences are in speed and flexibility. For instance, provisioning a replacement server can take weeks in traditional environments, and even in VM environments it still takes a day, whereas in the cloud it takes seconds. Hence, in the cloud it’s much easier to provision a full application stack with fully automated scaling and fail-over scenarios than on-premises. However, to take advantage of these properties the cloud provider needs many application and vendor partners. To this end Google Cloud Platform supports a very large partner network, which allows you to provision both proprietary and open-source solutions with a single click via the Cloud Launcher service.

    Google’s SDN network facilitates a cloud-native design, which also tends to focus on key metrics such as resilience, horizontal scaling, distributed processing, and automation. So, we need to take these characteristics into account when evaluating other cloud-native architectures that are not based on Google’s own SDN cloud network.

    Google Software Defined Network

    The Google Cloud Platform is organized as a software-defined network. That might sound confusing, but it’s not really, and we will consider the architecture later in some detail. For now, all you need to know is that Google’s global network is defined by segments, and its smallest entity is a zone, which can be defined as a deployment area for Google Cloud Platform resources. It could be simplified to mean a data centre, but that is not always the case, as a zone could have several points of presence within a local area. However, a zone works as one large switch operating within a common subnet, so entities within a zone communicate at layer 2, the Ethernet data-link layer – which we will cover later – meaning latency is minimal and configuration negligible.

    When you launch a virtual machine in GCP using Compute Engine, for example, you will be prompted to select a zone. This is the local area where you wish the application or service to run, so you would select the zone closest to where you want it to run. This ensures that the associated compute, storage and network resources are local and close to the area they serve, delivering low latency and high performance. However, zones, as we have seen, can be large geographical areas that contain several data centres many kilometres apart, which allows for a measure of redundancy and business continuity planning. For instance, you could mirror your infrastructure across two zones perhaps 100 km apart to give you a measure of mitigation for disaster recovery or business continuity should one zone’s data centre suffer a natural catastrophe.
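    To make the zone selection concrete, here is a minimal sketch using the google-cloud-compute Python client library; the project ID, zone, machine type and image are illustrative placeholders rather than recommendations, and the exact client surface may differ slightly between library versions.

    from google.cloud import compute_v1

    def create_vm(project_id: str, zone: str, name: str) -> None:
        # Assemble a small VM; the zone in the machine-type path determines
        # where the instance and its boot disk physically live.
        instance = compute_v1.Instance(
            name=name,
            machine_type=f"zones/{zone}/machineTypes/e2-small",
            disks=[
                compute_v1.AttachedDisk(
                    boot=True,
                    auto_delete=True,
                    initialize_params=compute_v1.AttachedDiskInitializeParams(
                        source_image="projects/debian-cloud/global/images/family/debian-11"
                    ),
                )
            ],
            network_interfaces=[
                compute_v1.NetworkInterface(network="global/networks/default")
            ],
        )
        operation = compute_v1.InstancesClient().insert(
            project=project_id, zone=zone, instance_resource=instance
        )
        operation.result()  # wait for the zonal operation to finish

    # Hypothetical values: a project called "my-project" and a London zone.
    create_vm("my-project", "europe-west2-a", "demo-vm")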

    Zones are grouped into regions

    As we have seen, zones are segments within larger, independent geographic areas called regions. This means that you can select which zone and region your GCP resources reside in. Importantly, all the zones within a region have very fast, very low-latency network connectivity among them, as they operate much like an Ethernet switch at layer 2 of the data communications protocol stack. Intercommunication between zones within a region will typically have round-trip network latencies of under five milliseconds, which is very fast when you consider the devices may be 100 km apart or more.

    What this means is that, from a data network perspective, the zones within a region share a single broadcast domain, while each zone can be considered a single failure domain. This is important when you contemplate building a fault-tolerant application, as you can easily spread the application’s resources across multiple zones within a region. Because they share a common subnet, fail-over from one server to another will be seamless. Sharing a common layer-2 domain helps protect against unexpected failures and greatly assists real-time synchronization or replication of data between cluster peers or master/slave models, despite the fact they may be hundreds of miles apart.

    Furthermore, you can run synchronised or mirrored resources in different regions if you want greater security against natural disasters. However, it is not just for business continuity or disaster recovery that you may want to mirror infrastructure across regions as many organizations choose to do this in order to deliver content to customers at the most convenient point of presence.  Thus, you may choose to use regions to deliver content closer to your customer base. For example, this is how Google brings their applications closer to their users wherever they may be in the world. Diversifying across regions also protects against the loss of an entire region, say, due to a submarine fibre cable being severed or some natural disaster. 

    You will find that some Google Cloud Platform services support placing resources in what we call a multi-regional configuration. Currently GCP supports 24 regions with 73 zones in total. There are typically three zones per region, with the US-Central1 region having four zones. What this means is that data may well be stored across regions within the parent multi-region. For example, data may be redundantly stored across several geographically diverse locations within the US multi-region. In practice there is a copy of the data stored in at least two geographic locations, separated by at least 160 kilometres, within the US. You may want to look into this if you are located in Europe or Asia, for example, as some nations, despite being in the same broad geographical region, may not wish or be allowed to store data outside their own borders. If that is the case then you can easily stipulate the regions and zones in which you wish your data to be backed up so as to meet your nation’s regulatory compliance.
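    As a brief illustration of controlling where data lives, the following sketch uses the google-cloud-storage Python client to create a bucket pinned to the EU multi-region; the project and bucket names are placeholders, and the same call accepts a single region such as "europe-west2" if data must stay within one location.

    from google.cloud import storage

    client = storage.Client(project="my-project")  # hypothetical project ID
    # The location constraint decides where Cloud Storage keeps the data;
    # "EU" is a multi-region, so objects are stored redundantly across
    # EU locations only, which helps with data-residency requirements.
    bucket = client.create_bucket("my-eu-archive-bucket", location="EU")
    print(bucket.location)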

    Sustainable Cloud Networks

    In the era of the new green deal and the awakening of public awareness that demands a future based upon sustainable and carbon neutral industry, enterprise data centres have come under the spotlight. It is hardly surprising as computer networks consume vast quantities of energy and produce equal amounts of wasted energy in the form of heat. Indeed, the internet is built on physical infrastructure, and all those servers consume vast amounts of energy and produce heat as a by-product that has to be controlled or dissipated. Around the world today data centres consume around two percent of the world's electricity supply; hence there is a responsibility for the major data centre providers to provide sustainable and eco-friendly energy plans. To this end Google strives to make their internet-scale data centres run as energy efficiently as possible. 

    As a result of these efforts, Google was the first cloud provider to achieve ISO 14001 certification for its data centres, the gold standard for improving resource efficiency and reducing waste. In pursuit of this certification Google has become one of the world’s largest corporate purchasers of wind and solar energy, which means that Google has been 100 percent carbon neutral since 2007 and will shortly reach 100 percent renewable energy sources for its data centres.

    Today many of Google’s customers pursue environmental goals for sustainable and green business practices as part of their mission statements, and they can responsibly live up to those ideals by running their workloads on GCP.

    Cloud Efficiency Model

    The most commonly used list of benefits of adopting cloud computing is the one created by the US National Institute of Standards and Technology. In its definition, cloud computing is considered to be a model for efficiency in remotely hosting and running IT services, and it typically has five equally important components.

    First, Google Cloud Platform provides you with access to online remote computing resources; these are delivered on an on-demand, self-service model. Hence, GCP is ideal for small and medium businesses (SMBs), small and medium enterprises (SMEs) or, for that matter, companies of any size. This is because all you have to do is use a simple interface to access all the processing power, storage and network resources that you require. Importantly, you have access to all the infrastructure, services and resources without the need for any of your own infrastructure or in-house skills, as access to most GCP managed services requires no human intervention at all.

    Second, because GCP resources are distributed around the globe and accessible on the internet, you can access them from anywhere you have an internet connection. Furthermore, your customers can also access your applications or services from anywhere in the world with low latency and high availability. As a result, neither you nor your customers need to know or care about the exact physical location of those resources. What this means is that you can mirror your IT environments not only across zones but also across regions and even continents to ensure business continuity in the case of a local or regional disruption of service. Previously, that would have required vast expense in establishing multiple data centres and, moreover, in keeping them synchronised.

    Third, Google has its own internet-scale network infrastructure, which provides a vast pool of resources that are allocated dynamically to its clients on a pay-as-you-use basis. That model allows Google to dynamically serve each customer with only the resources they require at any given moment, so Google achieves vast efficiencies by balancing its pool of resources across its demand-driven network of customers. The elastic nature of resource allocation means that should you quickly need more resources, such as CPU, memory, network bandwidth or storage, you will be allocated them seamlessly and rapidly without any human intervention being required. However, if demand falls you can then scale back. The consequence is that you only pay for what you use or reserve, so if you no longer use a resource, you will no longer be charged for it.

    Fourth, security: Google supports a wide array of vendor software and hardware, but it also uses its own securely built servers. These servers incorporate Google’s own Titan security chips and secure-boot processes, which make Google’s cloud infrastructure highly secure.

    Fifth, Google, like other internet-scale providers, takes advantage of economies of scale by buying and building infrastructure, licences and services in bulk and then passing the savings on to customers. Google also has a very large partner network, which manifests itself in a wide array of open-source and proprietary software that is pre-provisioned and available at a click. Thus there are considerable ease-of-use advantages when provisioning servers and application stacks or environments. What is more, each will have the correct versions, dependencies and libraries installed automatically, doing away with a lot of the troublesome tasks when deploying stacks. For example, a LAMP (Linux, Apache, MySQL, and PHP) stack can be provisioned in just a click of a button, and the same is true of other environments such as Java. In addition, networking becomes easier as it is configured automatically in most cases, meaning in-house expert-level networking skills are no longer a requirement.

    GCP Charging Model

    A key reason for migrating IT to the Google Cloud Platform is to reduce costs and typically GCP offers several ways to reduce operational spending and overall costs. For start-ups and Greenfield companies cloud computing greatly reduces the capital expenditure required as there are no upfront investments in servers and network equipment. Established businesses however can also benefit by reducing maintenance, administration and operational costs of running a data room/centre. This is primarily due to cloud's typical pay-as-you-use model, rapid provisioning of servers, and reduced time to profit in product/service development, which translates to tremendous savings if managed correctly. 

    Indeed, all cloud providers follow this pay-as-you-use model but it was Google that first introduced billing by the second rather than rounding up to the minute for the virtual machines that you configure as part of the Compute Engine service. 

    Initiatives such as charging by the second can have a large impact on monthly billing if you are running thousands of VM instances. However, per-second billing is not restricted to virtual machines in Compute Engine; it is also available for several other services, such as:

    ●  Kubernetes Engine, which is GCP’s managed Kubernetes container service,

    ●  Cloud Dataproc, which offers the open-source Big Data framework Hadoop as a managed service,

    ●  App Engine flexible environment, which is GCP’s Platform as a Service, and

    ●  Cloud Spanner, which is GCP’s massively scalable relational SQL database.
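    To see why billing granularity matters at scale, here is a back-of-the-envelope sketch in Python; the hourly rate is an assumed illustrative figure, not a published GCP price, and the comparison simply contrasts per-second billing with rounding every run up to the next full minute.

    import math

    HOURLY_RATE = 0.05        # assumed $/hour for one small VM (illustrative)
    runs_per_month = 10_000   # short-lived batch VMs launched per month
    run_seconds = 90          # each VM lives for 90 seconds

    per_second = runs_per_month * run_seconds / 3600 * HOURLY_RATE
    per_minute = runs_per_month * math.ceil(run_seconds / 60) * 60 / 3600 * HOURLY_RATE

    print(f"billed per second:           ${per_second:,.2f}")   # $12.50
    print(f"rounded up to whole minutes: ${per_minute:,.2f}")   # $16.67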

    Moreover, GCP has several other cost-reducing or cost management initiatives that you should be aware of and take into consideration when planning deployments of services. For example, Compute Engine offers you discounts based upon consistent use of a VM resource. These discounts are automatically applied to reward sustained usage, i.e., these are automatic discounts for running a virtual machine for more than 25% of the billing month. 

    This discount translates to a significant saving for every incremental minute you use it over the 25% threshold. 

    Savings can also be made through diligent sizing of the VMs that you create. When you initially provision a VM you will select, among other things, the memory size and the number of virtual CPUs you require. In most cases customers will pick a preconfigured VM from a drop-down menu, but you can also customize your VM. Because Compute Engine allows you to select the required memory and virtual CPUs when provisioning the VM, you can fine-tune the machine and tailor the pricing to match your workloads.

    Moreover, if you are in development you might want to take advantage of the micro VMs that GCP offers within Compute Engine. These are minimal VMs that are ideal for building prototypes or proof-of-concept models, and by using them in development you can reduce costs significantly. Another way to reduce costs is to use preemptible VMs. A preemptible VM is one that Google can reclaim at any time, so they are only suitable for certain workloads, but the pricing is significantly reduced.
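    Building on the earlier Compute Engine sketch, the fragment below shows where these cost levers sit in the API; the names and values are illustrative, and the exact field names may vary between versions of the google-cloud-compute library.

    from google.cloud import compute_v1

    instance = compute_v1.Instance(
        name="dev-prototype",
        # A shared-core micro machine type keeps prototype costs low; a custom
        # type such as "custom-2-4096" (2 vCPUs, 4096 MB) lets you right-size.
        machine_type="zones/us-central1-a/machineTypes/e2-micro",
        # Preemptible VMs can be reclaimed by Google at any time but are billed
        # at a much lower rate, which suits fault-tolerant development work.
        scheduling=compute_v1.Scheduling(preemptible=True),
    )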

    Avoiding Vendor Lock-In

    Earlier in this chapter we highlighted some of the reasons why customers are moving their IT operations to the cloud, and to Google Cloud in particular. However, not every IT customer was initially enamoured with the prospect of potentially losing control of their data’s security and integrity. That was the major factor in customers not migrating operations to the cloud, but there was another common reason: the fear of vendor lock-in.

    Many companies were concerned that if they migrated their workloads to the cloud they might become so dependent on the provider’s service that they would be effectively locked into a particular vendor. This is a common concern amongst IT leaders.

    Google recognizes this concern and provides customers with the ability to run their applications elsewhere should the GCP no longer be the most suitable service provider. 

    As an example of how GCP allows customers to avoid being locked in, Google has designed GCP services to be compatible with open-source products.

    Consider GCP’s Cloud Bigtable, a massively scalable database which we’ll discuss in detail later. For now, what is important is that Cloud Bigtable uses the same interface as the open-source database Apache HBase, which gives customers the benefit of code portability. Another example is GCP’s Cloud Dataproc, which offers the same open-source Big Data environment, Hadoop, as a managed service.
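    To give a flavour of that data model, here is a minimal sketch using the google-cloud-bigtable Python client; the project, instance and table IDs are placeholders. The HBase-style row key, column family and column qualifier are what make the access pattern portable to Apache HBase.

    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    table = client.instance("my-instance").table("user-events")

    # Write a single cell: row key "user#42", column family "events",
    # column qualifier "last_login".
    row = table.direct_row(b"user#42")
    row.set_cell("events", b"last_login", b"2021-06-24")
    row.commit()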

    In addition, Google is committed to the open-source community and projects and regularly publishes key elements of technology using open-source licenses. By doing so Google participates in creating and supporting open-source ecosystems. These open-source communities and systems provide customers with options other than Google. 

    For example, TensorFlow, an open-source software library for machine learning, was originally developed inside Google but now sits at the heart of a robust open-source ecosystem.

    In addition to Google’s commitment to the open-source paradigm they are making efforts to integrate open-source projects into GCP in order to provide maximum interoperability.

    For example, Kubernetes provides GCP customers with the tools to mix and match microservices running on-premises or even on other clouds. There is also the Google Operations Suite, GCP’s monitoring and management application, which is built around open-source tooling and allows customers to monitor workloads across multiple cloud providers, such as Amazon’s AWS.
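    The portability argument rests on the Kubernetes API being identical wherever the cluster runs. As a minimal sketch, assuming the open-source kubernetes Python client and an existing kubectl context, the code below lists deployments and would behave the same against GKE, an on-premises cluster or another cloud.

    from kubernetes import client, config

    config.load_kube_config()  # reads the active kubectl context (GKE or otherwise)
    apps = client.AppsV1Api()

    for dep in apps.list_namespaced_deployment(namespace="default").items:
        print(dep.metadata.name, dep.status.ready_replicas)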

    Summary

    In this chapter, ‘An Introduction to the Google Cloud Platform’, we described the benefits of cloud computing in general and of the Google Cloud Platform specifically.
