Understanding The Cloud

For the last couple of years, the IT industry has been getting excited and energized about Cloud. Large IT companies and consultancies have spent, and are spending, billions of dollars, pounds, and yen investing in Cloud technologies. So, what's the deal?

While Cloud is generating a lot more heat than light, it is, nonetheless, giving us all something to think about and something to sell our customers. In some respects, Cloud isn’t new; in other respects, it’s ground-breaking and will make an undeniable change in how business provides users with applications and services.

Beyond that, and it is already happening, users will, at last, be able to provision their own Processing, Memory, Storage, and Network (PMSN) resources at one level and, at other levels, receive applications and services anywhere, anytime, using (almost) any mobile technology. In short, Cloud can liberate users, make remote working more feasible, ease IT management, and move a business from CapEx towards more of an OpEx situation. If a business receives applications and services from Cloud, depending on the type of Cloud, it may not need a data center or server room anymore. All it will need to pay for is the applications and services that it uses. Some in IT may perceive this as a threat, others as a liberation.

So, what is Cloud?

To understand Cloud, you need to understand the base technologies, principles, and drivers that support it and that have given many the impetus to develop it.

Virtualization

For the last decade, the industry has been super-busy consolidating data centers and server rooms from racks of tin boxes to fewer racks of fewer tin boxes. At the same time, the number of applications able to exist in this new and smaller footprint has been increasing.

Virtualization: why do it?

Servers hosting a single application typically run at utilization levels of around 15%. That means the server is ticking over and highly under-utilized. The cost of data centers full of servers running at 15% is a financial nightmare. Server utilization of 15% can't return anything on the initial investment for many years, if ever. Servers have a lifecycle of about three years and depreciate by about 50% as soon as they come out of the box. After three years, the servers are worth next to nothing in corporate terms.

Today we have refined tool-sets that enable us to virtualize pretty much any server. In doing that, we can create clusters of virtualized servers that can host multiple applications and services. This has brought many benefits. Higher densities of Application servers hosted on fewer Resource servers enable the data center to deliver more applications and services.

It’s Cooler; It’s Greener

Besides reducing the number of individual hardware systems through the expeditious use of virtualization, data center designers and hardware manufacturers have introduced other methods and technologies to reduce the amount of power required to cool the systems and the data center halls. These days, servers and other hardware systems have directional airflow. A server may have front-to-back or back-to-front directional fans that drive the heated air in a particular direction to suit the airflow design of the data center. Airflow is the new science in the IT industry. It is becoming common to have a hot-aisle and cold-aisle matrix across the data center hall. Having systems that can respond to and participate in that design can produce considerable savings in power requirements. The choice of where to build a data center is also becoming more important.

There is also the Green agenda. Companies want to be seen to be engaging with this new and popular movement. The amount of power needed to run large data centers is in the Megawatt region and hardly Green. Large data centers will always require high levels of power. Hardware manufacturers are attempting to bring down the power requirements of their products, and data center designers are making a big effort to make more use of (natural) air-flow. Taken together, these efforts are making a difference. If being Green is going to save money, then it’s a good thing.

Downsides

High utilization of hardware introduces higher levels of failure caused, for the most part, by heat. In the case of a 1:1 ratio (one application per server), the server is idling, cool, and under-utilized, costing more money than necessary (in terms of ROI) but providing a long lifecycle. With virtualization, driving higher levels of utilization per Host generates a lot more heat. Heat damages components (degradation over time) and shortens MTTF (Mean Time To Failure), which affects TCO (Total Cost of Ownership = the bottom line) and ROI (Return on Investment). It also raises the cooling requirement, which in turn increases power consumption. When Massively Parallel Processing is required, and this is very much a Cloud technology, cooling and power step up another notch. Massively Parallel Processing can use tens of thousands of servers/VMs and large storage environments, along with complex and large networks. This level of processing will increase energy requirements. Basically, you can't have it both ways.

Another downside to virtualization is VM density. Imagine 500 hardware servers, each hosting 192 VMs: that's 96,000 Virtual Machines. The number of VMs per Host server is limited by the vendor-recommended number of VMs per CPU. If a server has 16 CPUs (cores) and you can create approximately 12 VMs per core (this is entirely dependent on what the VM will be used for), the arithmetic is simple: 16 x 12 = 192 VMs per Host, and 500 x 192 = 96,000 Virtual Machines. Architects take all this into account when designing large virtualization infrastructures and ensuring that Sprawl is kept strictly under control. However, the danger exists.
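
To make that arithmetic concrete, here is a minimal Python sketch. The figures (500 hosts, 16 cores per host, 12 VMs per core) are the illustrative ones used above; real ratios depend on the hypervisor vendor's guidance and on what each VM will actually do.

# Rough VM-density arithmetic for a virtualized estate.
# The ratios are illustrative, not a vendor recommendation.

def estate_capacity(hosts: int, cores_per_host: int, vms_per_core: int) -> int:
    """Theoretical maximum number of VMs the estate could hold."""
    vms_per_host = cores_per_host * vms_per_core  # e.g. 16 x 12 = 192
    return hosts * vms_per_host                   # e.g. 500 x 192 = 96,000

print(estate_capacity(hosts=500, cores_per_host=16, vms_per_core=12))  # 96000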

Virtualization: the basics of how to do it

Take a single computer, a server, and install software that enables the abstraction of the underlying hardware resources: Processing, Memory, Storage, and Networking. Once you've configured this virtualization-capable software, you can use it to fool various operating systems into thinking that they are being installed into a familiar environment that they recognize. This is achieved by the virtualization software, which (should) contain all the necessary drivers used by the operating system to talk to the hardware.

At the bottom of the virtualization stack is the Hardware Host. Install the hypervisor on this machine. The hypervisor abstracts the hardware resources and delivers them to the virtual machines (VMs). On each VM, install the appropriate operating system, and then install the application/s. A single hardware Host can support several Guest operating systems, or Virtual Machines, depending on the purpose of the VMs and the number of processing cores in the Host. Each hypervisor vendor has its own permutation of the VMs-to-cores ratio, but it is also necessary to understand exactly what the VMs will support in order to calculate the provisioning of the VMs. Sizing/provisioning virtual infrastructures is the new black art in IT, and there are many tools and utilities to help carry out that crucial task. Despite all the helpful gadgets, part of the art of sizing is still down to informed guesswork and experience. This means that the machines haven't taken over yet!

Hypervisor

The hypervisor can be installed in two formats:

1. Install an operating system that has within it some code that constitutes a hypervisor. Once the operating system is installed, click a couple of boxes and reboot the operating system to activate the hypervisor. This is called Host Virtualisation because there is a Host operating system, such as Windows 2008 or a Linux distribution, as the foundation and controller of the hypervisor. The base operating system is installed in the usual way, directly onto the hardware/server. A modification is made, and the system is rebooted. Next time it loads, it will offer the hypervisor configuration as a bootable choice.

2. Install a hypervisor directly onto the hardware/server. Once installed, the hypervisor will abstract the hardware resources and make them available to multiple Guest operating systems via Virtual Machines. VMware's ESXi and Xen are this type of hypervisor (the on-the-metal hypervisor).

The two most popular hypervisors are VMware ESXi and Microsoft's Hyper-V. ESXi is a stand-alone hypervisor that is installed directly onto the hardware. Hyper-V is part of the Windows 2008 operating system: Windows 2008 must be installed first to be able to use the hypervisor within the operating system. Hyper-V is an attractive proposition, but it does not reduce the footprint or the disk overhead to a level as low as ESXi's (Hyper-V is about 2 GB on disk, whereas ESXi is about 70 MB).

Managing virtual environments requires further applications. VMware offers vCenter Server, and Microsoft offers System Center Virtual Machine Manager. There is also a range of third-party tools available to enhance these activities.

Which hypervisor to use?

The choice of which virtualization software to use should be based on informed decisions. Sizing the Hosts, provisioning the VMs, choosing the support toolsets and models, and a whole raft of other questions need to be answered to make sure that money and time are spent effectively and that what has been implemented works and doesn't need massive change for a couple of years (wouldn't that be nice?).

What is Cloud Computing?

Look around the Web, and there are myriad definitions. Here's mine: "Cloud Computing is billable, virtualized, elastic services."

Cloud is a metaphor for enabling users to access applications and services using the Internet and the Web.

Everything from the Access layer to the bottom of the stack is located in the data center and never leaves it.

Within this stack are many other applications and services that monitor the Processing, Memory, Storage, and Network; that monitoring data can then be used by chargeback applications to provide metering and billing.

Cloud Computing Models

There are two kinds of Cloud Computing model: the Deployment Model and the Delivery Model.

Deployment Model

– Private Cloud
– Public Cloud
– Community Cloud
– Hybrid Cloud

Private Cloud Deployment Model

For most businesses, the Private Cloud Deployment Model will be the Model of choice. It provides a high level of security. For those companies and organizations that have to take compliance and data security laws into consideration, Private Cloud will be the only acceptable Deployment Model.

Note: There are companies (providers) selling managed hosting as Cloud. They rely on the hype and confusion about what Cloud actually is. Check exactly what is on offer, or it may turn out that the product is not Cloud and cannot offer the attributes of Cloud.

Public Cloud Deployment Model

Amazon EC2 is a good example of the Public Cloud Deployment Model. Users, in this case, are, by and large, the Public, although more and more businesses are finding Public Cloud a useful addition to their current delivery models.

Small businesses can take advantage of the Public Cloud’s low costs, particularly where security is not an issue. Even large enterprises, organizations, and government institutions can find advantages in utilizing Public Cloud. It will depend on legal and data security requirements.

Community Cloud Deployment Model

This model is created by users allowing their personal computers to be used as resources in a P2P (Peer-to-Peer) network. Given that modern PCs/workstations have multiple processors, a good chunk of RAM, and large SATA storage disks, it is sensible to utilize these resources to enable a Community of users, each contributing PMSN and sharing the applications and services made available. Large numbers of PCs and, possibly, servers can be connected into a single subnet. Users are the contributors and consumers of computing resources, applications, and services via the Community Cloud.

The advantage of the Community Cloud is that it’s not tied to a vendor and not subject to the business case. That means the community can set its own costs and prices. It can be a completely free service and run as a co-operative.

Security may not be as critical, but the fact that each user has access at a low level might introduce security breaches and consequent bad blood amongst the group.

While user communities can benefit from vendor detachment, vendors don’t need to be excluded. Vendor/providers can also deliver Community Cloud at a cost.

Large companies that may share certain needs can also participate using Community Cloud. Community Cloud can be useful where a major disaster has occurred, and a company has lost services. If that company is part of a Community Cloud (car manufacturers, oil companies, etc.), those services may be available from other sources within that Cloud.

Hybrid Cloud Deployment Model

The Hybrid Cloud is used where it is useful to have access to the Public Cloud while maintaining certain security restrictions on users and data within a Private Cloud. For instance, a company has a data center from which it delivers Private Cloud services to its staff. It needs to have some method of delivering ubiquitous services to the public or users outside its own network. The Hybrid Cloud can provide this kind of environment. Companies using Hybrid Cloud services can take advantage of the massive scalability of the Public Cloud delivered from Public Cloud providers while still maintaining control and security over critical data and compliance requirements.

Federated Clouds

While this is not a Cloud deployment or delivery model per se, it will become an important part of Cloud Computing services in the future.

As the Cloud market increases and enlarges across the world, the diversity of provision will become more and more difficult to manage or even clarify. Many Cloud providers will be hostile to each other and may not be keen to share across their Clouds. Businesses and users will want to diversify and multiply their choices of Cloud delivery and provision. Having multiple Clouds increases the availability of applications and services.

A company may find that it is good to utilize multiple Cloud providers to enable data to be used in differing Clouds for differing groups. The problem is how to control and manage this multi-headed delivery model. IT can take control back by acting as the central clearing house for the multiple Clouds. Workloads may require different levels of security, compliance, performance, and SLAs across the entire company. Using multiple Clouds to fulfill each requirement for each workload is a distinct advantage over the one-size-fits-all principle that a single Cloud provider brings to the table. Federated Cloud also answers the question of "How do I avoid vendor lock-in?" However, multiple Clouds require careful management, and that's where the Federated Cloud comes in.

So, what is stopping this from happening? Mostly it's the differences between operating systems and platforms. The other reason is that moving a VM can be difficult when that VM is 100 GB in size. If you imagine thousands of those being moved around simultaneously, you can see why true Cloud federation is not yet with us, although some companies are out there trying to make it happen. Right now, you can't move a VM out of EC2 into Azure or OpenStack.

True federation is where disparate Clouds can be managed seamlessly and VMs can be moved between Clouds.

Abstraction

The hypervisor abstracts the physical layer resources to provide an environment for the Guest operating systems via the VMs. The appropriate vendor virtualization management tools manage this layer of abstraction (in VMware's case, vSphere vCenter Server and its APIs). The Cloud Management Layer (vCloud Director in VMware) is an abstraction of the Virtualisation Layer. It takes the VMs, applications, and services (and users) and organizes them into groups. It can then make them available to users.

Using the abstracted virtual layer, it is possible to deliver IaaS, PaaS, and SaaS to Private, Public, Community, and Hybrid Cloud users.

Cloud Delivery Models

IaaS-Infrastructure as a Service (Lower Layer)

When a customer buys IaaS, it will receive the entire compute infrastructure, including Power/Cooling, Host (hardware) servers, storage, networking, and VMs (supplied as servers). The customer’s responsibility is to install the operating systems, manage the infrastructure, and patch and update as necessary. These terms can vary depending on the vendor/provider and the individual contract details.

PaaS-Platform as a Service (Middle Layer)

PaaS delivers a particular platform or platforms to a customer. This might be a Linux or Windows environment. Everything is provided, including the operating systems, ready for software developers (the main users of PaaS) to create and test their products. Billing can be based on resource usage over time. There are several billing models to suit various requirements.

SaaS-Software as a service (Top Layer)

SaaS delivers a complete computing environment along with applications ready for user access. This is the standard offer in the Public Cloud. A typical example of such an application is Microsoft's Office 365. In this environment, the customer has no responsibility to manage the infrastructure.

Cloud Metering & Billing

Metering

Billing is derived from the chargeback information (Metering) gleaned from the infrastructure. Depending on the service ordered, the billing will include the resources outlined below.

Billable Resource Options: (Courtesy Cisco)

Virtual machine: CPU, Memory, Storage capacity, Disk and network I/O
Server blade: Options will vary by the type and size of the hardware
Network services: Load balancer, Firewall, Virtual router
Security services: Isolation level, Compliance level
Service-level agreements (SLAs): Best effort (Bronze), High availability (Silver), Fault-tolerant (Gold)
Data services: Data encryption, Data compression, Backups, Data availability, and redundancy
WAN services: VPN connectivity, WAN optimization

Billing

Pay-as-you-Go: Straightforward payment based on billing from the provider. Usually, customers are billed for CPU and RAM usage only when the server is actually running. Billing can be Pre-Paid or Pay-as-you-Go. For servers (VMs) in a non-running (stopped) state, the customer pays only for the storage that the server is using. If a server is deleted, there are no further charges. Pay-as-you-Go can combine a variety of metered information billed as a single item. For instance, network usage can be charged for each hour that a network or networks are deployed. Outbound and inbound bandwidth can also be charged: NTT America, for example, charges only for outbound traffic leaving a customer network or Cloud Files storage environment, whereas inbound traffic may or may not be billed. It all comes down to what the provider offers and what you have chosen to buy.
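
As a rough sketch of that logic (the rates and the exact rules below are invented for illustration, not any provider's actual tariff), a Pay-as-you-Go charge for a single VM might be calculated like this:

# Illustrative Pay-as-you-Go charging for one VM.
# Rates and rules are assumptions, not a real provider's tariff.

RATE_CPU_RAM_PER_HOUR = 0.05      # charged only while the VM is running
RATE_STORAGE_PER_GB_MONTH = 0.10  # charged even when the VM is stopped
RATE_OUTBOUND_PER_GB = 0.12       # inbound traffic assumed free here

def monthly_charge(state: str, hours_running: float,
                   storage_gb: float, outbound_gb: float) -> float:
    """Return the month's charge for one VM under this illustrative model."""
    if state == "deleted":
        return 0.0                                    # no further charges
    charge = storage_gb * RATE_STORAGE_PER_GB_MONTH   # storage always billed
    if state == "running":
        charge += hours_running * RATE_CPU_RAM_PER_HOUR
    charge += outbound_gb * RATE_OUTBOUND_PER_GB
    return charge

print(monthly_charge("running", hours_running=300, storage_gb=50, outbound_gb=20))
print(monthly_charge("stopped", hours_running=0, storage_gb=50, outbound_gb=0))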

Pre-Allocated

Some current Cloud models use pre-allocation, such as a server instance or a compute slice, as the basis for pricing. Here, the resource a customer is billed for has to be allocated first, which allows for predictability and pre-approval of the expenditure. However, the term instance can be defined in different ways. If the instance is simply a chunk of processing time on a server equal to 750 hours, that equates to a full month (a 31-day month is 744 hours). If the size of the instance is linked to a specific hardware configuration, the billing appears to be based on hours of processing but, in fact, reflects access to a specific server configuration for a month. As such, this pricing structure doesn't differ significantly from traditional server hosting.

Reservation or Reserved

Amazon, for instance, uses the term Reserved Instance billing. This refers to the usage of VMs over time. The customer purchases a number of Reserved Instances in advance. There are three levels of Reserved Instance billing: Light, Medium, and Heavy. If the customer's usage of an instance rises above the reserved rate, Amazon will charge a higher rate for the excess. That's not an exact description, but it's close enough.
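
As a hedged illustration of the general idea (all prices below are invented, not Amazon's actual rates), a reservation typically trades an upfront fee for a lower hourly rate, so it only saves money above a break-even number of hours:

# Illustrative reserved vs on-demand pricing; all figures are invented.

ON_DEMAND_PER_HOUR = 0.10
RESERVED_UPFRONT = 300.0    # one-off fee for the reservation term
RESERVED_PER_HOUR = 0.04    # discounted hourly rate once reserved

def yearly_cost(hours: float, reserved: bool) -> float:
    """Cost of running one instance for the given hours in a year."""
    if reserved:
        return RESERVED_UPFRONT + hours * RESERVED_PER_HOUR
    return hours * ON_DEMAND_PER_HOUR

hours = 6000                                 # break-even here is 5,000 hours
print(yearly_cost(hours, reserved=False))    # 600.0
print(yearly_cost(hours, reserved=True))     # 540.0 -> the reservation pays off
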
Cloud billing is not as straightforward and simple as vendors would have us believe. Read the conditions carefully and try to stick rigidly to the prescribed usage levels, or the bill could come as a shock.

The Future of Cloud

Some say Cloud has no future and that it’s simply another trend. Larry Ellison (of Oracle) made a statement a few years ago that Cloud was an aberration or fashion generated by an industry that was looking desperately for something, anything, new to sell (paraphrased). Others say that Cloud is the future of IT and IS delivery. The latter seems to be correct. Cloud is the topical subject on the lips of all IT geeks and gurus. It’s also true that the public at large is becoming Cloud-savvy and, due to the dominance of mobile computing, the public and business will continue to demand on-tap utility computing (John McCarthy, speaking at the MIT Centennial in 1961, forecast that computing would become a public utility), via desktops, laptops, netbooks, iPads, iPhones, Smartphones and gadgets yet to be invented. Cloud can provide that ubiquitous, elastic, and billable utility.