
How clouds impact the environment



The digital economy requires a massive infrastructure

Today’s digital economy comprises services such as Uber, Office 2019 and Netflix, built on top of the data we store in the cloud. But what is the cloud? Storing, processing and transporting information across all the digital services we use requires a massive infrastructure. The cloud is nothing more than a vast number of data centers scattered around the world, covering large masses of land and together operating millions of servers that process data on behalf of those services, in turn serving billions of users.


Aerial view of a large scale data center in San Diego (Source: WordPress)

The need to store and process this exponentially increasing amount of data has led to a golden period of development in the data center industry. The number of hyperscale data centers powering the cloud has increased by 200% in the last 5 years alone. Today there are 8 million data centers and 500 hyperscale data centers in the world, but the number of hyperscale data centers needed to manage the expected amounts of data in the years to come is three times today’s count.

Why is that a problem?

There are 5 billion Google searches performed each day, each activating 6-8 servers. In terms of greenhouse gases, each search is equivalent to about 0.2 grams of CO2. Despacito going viral, reaching 6.3 billion views, alone burned as much energy as roughly 40,000 US homes use in a year.
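A quick back-of-envelope calculation puts the search figure in perspective. The per-search footprint and the daily search volume are the numbers quoted above; everything else is plain arithmetic:

```python
# Rough daily and yearly CO2 from Google searches, using the figures above.
SEARCHES_PER_DAY = 5e9          # ~5 billion searches per day
CO2_PER_SEARCH_G = 0.2          # ~0.2 g CO2 per search

daily_tonnes = SEARCHES_PER_DAY * CO2_PER_SEARCH_G / 1e6   # grams -> tonnes
yearly_tonnes = daily_tonnes * 365

print(f"~{daily_tonnes:,.0f} tonnes of CO2 per day")    # ~1,000 tonnes/day
print(f"~{yearly_tonnes:,.0f} tonnes of CO2 per year")  # ~365,000 tonnes/year
```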

Add on top of this all the other services powered by the cloud, such as internet banking, ticketing services, Netflix, Spotify, Uber and Office 365, and you’ll soon realize that the data centers powering the cloud have become as mission critical and essential as water and electricity.

We depend on it at work to send and receive email and to share and collaborate on documents, code, schematics and files. We use it to buy shoes and other merchandise online, and we communicate with our colleagues, friends and family both near and far via services that run on the cloud. We’ve become dependent on the cloud to perform tasks both at work and at home, and when cloud services go down, the world halts. This raises a number of questions with regard to environmental impact.

What is the problem?

In addition to the actual housing of the data centers, you need to fill each data center with lots and lots of hardware, hundreds of miles of cable, refrigerant and water for cooling, batteries and diesel generators to mitigate power outages, and last but not least – an enormous amount of energy to operate the hardware. I’ll describe some of the issues.

Hardware

It is hard to imagine how much data 250 ZB represents, but if you were able to store it on BluRay discs, the resulting stack would reach far beyond the moon. To store it on hard drives, you would need billions of drives, as the back-of-envelope sketch below illustrates. Imagine the resources necessary to manufacture only the disk drives, and you will begin to realize that data centers consume a lot of resources.
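A rough sketch of that arithmetic; the disc capacity, disc thickness and drive capacity here are my own assumptions, not figures from the article:

```python
# Back-of-envelope: how much physical media would 250 ZB need?
# Assumptions: 50 GB dual-layer Blu-ray discs, 1.2 mm per disc,
# 16 TB hard drives, 384,400 km average distance to the moon.
TOTAL_BYTES = 250e21            # 250 zettabytes
BLURAY_BYTES = 50e9             # 50 GB per disc
DISC_THICKNESS_M = 1.2e-3       # 1.2 mm per disc
HDD_BYTES = 16e12               # 16 TB per drive
MOON_DISTANCE_KM = 384_400

discs = TOTAL_BYTES / BLURAY_BYTES
stack_km = discs * DISC_THICKNESS_M / 1000
drives = TOTAL_BYTES / HDD_BYTES

print(f"Blu-ray discs needed: {discs:.1e}")
print(f"Stack height: {stack_km:,.0f} km "
      f"(~{stack_km / MOON_DISTANCE_KM:.0f}x the distance to the moon)")
print(f"16 TB hard drives needed: ~{drives / 1e9:.0f} billion")
```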

Since data centers require so much hardware, it is natural to assess the lifecycle impact of the hardware used in them. For data center operations, hardware needs to be manufactured, shipped, installed, operated and decommissioned. Each of these processes requires resources. The manufacturing process for computer electronics, for instance, requires the mining and extraction of both regular and rare earth metals such as neodymium, which is used in traditional hard drives, and terbium, which is used in solid state electronics. Although rare earth metals is a somewhat misleading term, the extraction and processing of ore to produce concentrates usually involves lots of heavy machinery and chemicals, and leaves significant impacts on the environment, not least via the imprint the open-pit mines leave on the geography.

Hardware also usually has a fairly limited life span; it is typically replaced every 3-5 years to keep up with performance requirements and reliability concerns. Decommissioning hardware entails dealing with hazardous materials such as the refrigerants used in cooling systems, as well as electronic waste and the recycling of electronics. Recycling in particular has a big impact on the life cycle assessment of the hardware.

Real estate

Data centers require a lot of space. Hyperscale data centers require hundreds of acres of land in order to accommodate the hardware. One of the issues is that cloud providers usually want data centers to be as close as possible to most end users, in order to provide the best performance and lowest latency. This has led to a competition for real estate between data centers and humans, and in Amsterdam, which houses a large share of Europe’s hyperscale data centers, it led to a temporary ban on building new ones.

Energy

Servers in data centers are on 24/7/365. As each data center can have hundreds of thousands of servers, they naturally consume vast amounts of energy. According to estimates from the International Energy Agency, data centers worldwide account for around 200 TWh of electricity consumption per year. This is roughly 1% of total worldwide electricity consumption.

1% might not sound like a big deal, but that is comparable to the total electricity consumption of Indonesia, a country of roughly 270 million citizens and the 4th most populous country in the world. What is more important, the source of this energy might be fossil fuels such as coal. If all this energy were produced by coal plants, it would result in an annual emission of 1.2 billion metric tonnes of CO2. In comparison, the entire aviation industry emits roughly 0.9 billion metric tonnes of CO2.

Although many cloud providers have pledged to decarbonize their data centers, none have ditched fossil fuels entirely, and most of them rely on renewable energy credits rather than directly utilizing renewable energy sources such as solar or wind power. Greenpeace has been following up on the cloud providers’ pledges, and there are big differences, as laid out in this recent Wired article.

Some alarmist predictions indicate that, due to the increasing number of data centers in the world, the total energy consumption of data centers could rise to as much as 8% of total world energy consumption by 2030.

That’s why the environmental impact of the cloud is first and foremost dictated by the amount and source of its energy.
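To make that point concrete, here is a small converter showing how the same annual electricity figure translates into very different CO2 totals depending on the energy source. The consumption figure and the lifecycle emission factors are illustrative assumptions on my part (roughly in line with commonly cited lifecycle estimates), not numbers taken from this article:

```python
# Convert an annual electricity figure (TWh) into CO2 emissions for
# different energy sources. Emission factors are approximate lifecycle
# values in g CO2e per kWh and are assumptions, not article figures.
EMISSION_FACTORS_G_PER_KWH = {
    "coal": 1000,
    "natural gas": 490,
    "solar": 45,
    "wind": 11,
}

def annual_emissions_mt(twh: float, g_per_kwh: float) -> float:
    """Million tonnes of CO2e for an annual consumption of `twh` terawatt-hours."""
    kwh = twh * 1e9                      # 1 TWh = 1e9 kWh
    return kwh * g_per_kwh / 1e12        # grams -> million tonnes

DATA_CENTER_TWH = 200                    # assumed figure, same order as the IEA estimate above
for source, factor in EMISSION_FACTORS_G_PER_KWH.items():
    mt = annual_emissions_mt(DATA_CENTER_TWH, factor)
    print(f"{source:>12}: ~{mt:,.0f} Mt CO2e per year")
```

The spread between the coal and wind rows is the whole argument in one screenful: the footprint of a data center is dominated by what feeds its grid.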

Is there some light at the end of the tunnel?

Maybe. In order to answer that, we need to look at what has been done until now.

Increased energy efficiency on component level

Although various sources report increased energy consumption due to the growing number of data centers, the International Energy Agency refutes this, claiming that the total energy consumption of data center operations worldwide will remain stable for at least the next three years, despite strong projected growth in data center traffic and an 80% increase in data center workloads.

This requires some explanation. Enter Moore’s Law. For the last five decades, driven by manufacturing process improvements, we’ve been able to shrink transistors at a steady exponential rate, roughly doubling transistor density every two years. In turn, that has resulted in an exponentially larger number of transistors in each new processor generation, substantially improving computing power. The same scaling has also reduced power consumption per computation at a comparable rate. So, even though we’ve seen a rapid increase in the number of data centers, these technology improvements have been offsetting the growth in power consumption.
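To get a feel for how powerful that compounding is, here is a tiny illustration using the commonly cited two-year doubling period (a rule of thumb, not a figure from this article):

```python
# Compounding effect of doubling transistor density every ~2 years.
DOUBLING_PERIOD_YEARS = 2

def density_factor(years: float) -> float:
    """How many times denser chips become after `years` of steady doubling."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for years in (10, 20, 50):
    print(f"after {years:>2} years: ~{density_factor(years):,.0f}x the transistors per area")
```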

However, these gains have lately come to a halt. We are getting close to the physical boundaries of how small each transistor can be manufactured. Today’s leading-edge transistors measure 7 nm; in comparison, the human DNA helix is about 2.5 nm wide. The processor industry has responded by increasing the number of cores in each processor and improving computing power through parallelization. Today you can find processors with up to 64 cores, and GPUs with hundreds of cores, enabling parallelization of specific tasks. But not all computations are parallelizable, and this too will hit a ceiling in the future.
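That ceiling is usually formalized as Amdahl’s law (my framing, not the article’s): if only a fraction p of a task can be parallelized, n cores can never speed it up by more than 1 / ((1 - p) + p / n). A minimal sketch:

```python
# Amdahl's law: overall speedup is limited by the serial fraction of the work.
def speedup(parallel_fraction: float, cores: int) -> float:
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

for cores in (2, 8, 64, 1024):
    # Even with 95% parallelizable work, the speedup flattens out quickly.
    print(f"{cores:>5} cores: {speedup(0.95, cores):5.1f}x speedup")
```

With a 5% serial portion, the speedup can never exceed 20x no matter how many cores you throw at the problem.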

A third technology improvement has been dynamic scaling based on demand. Since servers must always be on, the ability to automatically scale down processor speed can have a tremendous impact on power consumption when the need for computing is low. Most processors and servers nowadays can idle or throttle down when not in use, consuming only a fraction of the energy.
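As a rough illustration of why throttling helps so much, dynamic CPU power is commonly approximated as P ≈ C · V² · f, so lowering both voltage and clock frequency pays off more than linearly. The formula is a standard approximation and the example numbers are my own, not figures from the article:

```python
# Approximate dynamic CPU power: P ~ C * V^2 * f.
def dynamic_power(capacitance: float, voltage: float, frequency_ghz: float) -> float:
    return capacitance * voltage ** 2 * frequency_ghz

busy = dynamic_power(1.0, 1.2, 3.5)    # full voltage and clock speed
idle = dynamic_power(1.0, 0.8, 1.2)    # throttled-down state (voltage/frequency scaling)

print(f"throttled power is ~{idle / busy:.0%} of full power")
```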

Lastly, replacing older hard drives with SSD drives reduces the energy consumption by half.

Increased energy efficiency on data center level

Energy is the single largest expense for a data center operation; this is particularly true for hyperscale operators of public clouds. These companies have invested heavily in improving their infrastructure in order to reduce power bills. A standard measure used in the industry is the power usage effectiveness (PUE) of a data center – the ratio of the total power required to run the entire facility to the power consumed directly by compute and storage. While smaller data centers are still measured with PUE values greater than 2, large hyperscale cloud data centers have decreased this value over the past 10 years, beginning to record PUE values of 1.1 or less, which is very close to the theoretically perfect PUE of 1.0.
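A minimal sketch of what that ratio means for the power bill, using illustrative numbers rather than figures from any real facility:

```python
# PUE = total facility power / IT equipment power.
# Everything above 1.0 is overhead: cooling, power distribution, lighting, etc.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

IT_LOAD_KW = 10_000                      # hypothetical IT load

small_dc_total = IT_LOAD_KW * 2.0        # PUE ~2.0: one watt of overhead per watt of compute
hyperscale_total = IT_LOAD_KW * 1.1      # PUE ~1.1: only 0.1 watt of overhead

print(f"small data center PUE: {pue(small_dc_total, IT_LOAD_KW):.2f}")
print(f"hyperscale PUE:        {pue(hyperscale_total, IT_LOAD_KW):.2f}")
print(f"overhead saved: {small_dc_total - hyperscale_total:,.0f} kW "
      f"for the same {IT_LOAD_KW:,} kW of useful compute")
```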

What will the future bring?

In the past decade, manufacturing improvements and targeted efforts to improve energy efficiency at both the component level and the data center level, via reductions in PUE, have kept the total energy consumption of data centers in check, despite strong growth in both hyperscale and regular data centers.

No more low-hanging fruits

However, now that we are starting to hit physical and theoretical limits, these low-hanging fruits are gone. The shift away from small, inefficient data centers towards much larger cloud and hyperscale data centers seems evident. The Lawrence Berkeley National Laboratory estimated that if 80 percent of servers in the US were moved over to optimized hyperscale facilities, this would result in a roughly 25 percent drop in their energy usage. According to the IEA, this trend is already under way, as illustrated in the chart below.

IEA, “Global data center energy demand by data center type”, IEA, Paris, https://www.iea.org/data-and-statistics/charts/global-data-centre-energy-demand-by-data-center-type

Continued efforts on improving energy efficiency

Meanwhile, these hyperscale operators continue to innovate. Google, for instance, entered into a collaboration with DeepMind to improve data center cooling via machine learning, and recently launched a fully automated solution for their data centers, pushing the PUE of certain facilities even closer to 1.0.

A typical day of PUE (power usage effectiveness) with ML turned on and off. Source: DeepMind

The cloud vendors also continue to improve the runtimes, virtualization, compression and software that run our workloads on top of their hardware, improving overall computation density. For instance, Google recently launched a new task scheduler which assigns resources dynamically, increasing hardware utilization in massively parallel environments. Microsoft has done substantial work to improve performance in their .NET Core libraries for the same reasons.

Why is this important and what can you do?

Resistance to both the cloud and the Borg is futile. Trying to stop viral videos, Google searches or people’s use of online services is obviously pointless.

However, for those of us in the industry of building such services, there lies a responsibility to inform and acquaint our leaders, customers and decision makers with the environmental impact of their decisions and of where our workloads run.

As part of this advent calendar, our CTO wrote an article about private PaaS being considered harmful. He was mainly arguing the benefits of public clouds versus private clouds and data centers. I hope this article has contributed a new perspective: private clouds are not only harmful from an innovation perspective, but from an environmental one as well, since hyperscale clouds continue to innovate not only on the breadth of services, but also on energy efficiency at scale.

However, there lies a responsibility on all of us developers as well: to use libraries, coding techniques and compression algorithms that consume less storage and less energy. Mobile developers are well aware of the power restrictions imposed by batteries. It’s time the rest of us followed and contributed to lowering the data storage and processing needs of our workloads. The benefit? Reducing storage, memory and CPU for your workloads also has an economic benefit – you pay less money to the cloud vendor. Win-win.
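As a trivial illustration of the kind of low-effort win available, here is a sketch showing how transparently compressing a verbose payload (a hypothetical JSON dump, not anything from the article) shrinks what you store and ship:

```python
import gzip
import json

# A verbose, repetitive payload of the kind services often store or ship as-is.
records = [{"user_id": i, "status": "active", "plan": "enterprise"} for i in range(10_000)]
raw = json.dumps(records).encode("utf-8")

compressed = gzip.compress(raw)

print(f"raw:        {len(raw) / 1024:,.0f} KiB")
print(f"compressed: {len(compressed) / 1024:,.0f} KiB")
print(f"ratio:      {len(raw) / len(compressed):.1f}x smaller on disk and on the wire")
```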

