Energy Internet and eVehicles Overview

Governments around the world are wrestling with the challenge of how to prepare society for inevitable climate change. To date most attention has focused on how to reduce greenhouse gas emissions, but there is now growing recognition that, regardless of what we do to mitigate climate change, the planet is going to be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, etc. As such we need to invest in solutions that provide a more robust and resilient infrastructure to withstand this environmental onslaught, especially for our electrical and telecommunications systems, and at the same time reduce our carbon footprint.

Linking renewable energy with high speed Internet using fiber to the home, combined with autonomous eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users, one that is far more robust and resilient in the face of climate change than today's centralized command and control infrastructure. These new energy architectures will also significantly reduce our carbon footprint. For more details please see:

Using autonomous eVehicles for Renewable Energy Transportation and Distribution: http://goo.gl/bXO6x and http://goo.gl/UDz37

Free High Speed Internet to the Home or School Integrated with solar roof top: http://goo.gl/wGjVG

High level architecture of Internet Networks to survive Climate Change: https://goo.gl/24SiUP

Architecture and routing protocols for Energy Internet: http://goo.gl/niWy1g

How to use Green Bond Funds to underwrite costs of new network and energy infrastructure: https://goo.gl/74Bptd

Friday, December 21, 2007

Future Internet could reduce today's PSTN CO2 emissions by 40%


[The ITU has put out an excellent report called ICTs and Climate Change. Highly recommended reading, and further support for my belief that the ICT industry can not only reduce its own emissions to zero but also enable other traditionally carbon-heavy sectors of society to reduce their carbon footprint through "bits and bandwidth for carbon" trading schemes such as free fiber to the home, free mobile telephony, and other free eProducts and eServices. Some excerpts --BSA]

http://www.itu.int/ITU-T/newslog/PermaLink,guid,9ba8aa93-e90d-4e9b-859c-b94b6d57c424.aspx


Information and Communication Technologies (ICTs) are undoubtedly part of the cause of global warming as witnessed, for instance, by the millions of computer screens that are left switched on overnight in offices around the world.

But ICTs can also be part of a solution. This Technology Watch briefing report looks at the potential role that ICTs play at different stages of the process, from contributing to global warming (section 1), to monitoring it (2), to mitigating its impact on the most vulnerable parts of the globe (3), to developing long term solutions, both directly in the ICT sector and in other sectors like energy, transport, buildings etc (4). The final sections look at what ITU-T is already doing in this field (5) strategic options (6), and the campaign for a climate-neutral UN (7).

A major focus of ITU's work in recent years has been on Next-Generation Networks (NGN), which are expected by some commentators to reduce energy consumption by 40 per cent compared to today's PSTN.

The telecommunications industry is currently undergoing a major revolution as it migrates from today's separate networks (for voice, mobile, data, etc.) to a single, unified IP-based next-generation network. The savings will be achieved in a number of ways:

• A significant decrease in the number of switching centres required. For instance, BT's 21st Century Network (21CN) will require only 100-120 metropolitan nodes compared with its current 3,000 locations;

• More tolerant climatic range specifications for switching locations, which are raised from 35 degrees (between 5 and 40°C) to 50 degrees (between -5 and 45°C). As a result, the switching sites can be fresh-air cooled in most countries rather than requiring special air conditioning.



Wednesday, December 19, 2007

A carbon negative Internet - Freedom to Connect Conference

[I encourage all those who are interested in the issues of global warming and how the Internet can help mitigate the greatest challenge of our lifetime to attend the upcoming Freedom to Connect Conference in Washington DC --BSA]

http://freedom-to-connect.net/

Announcing F2C: Freedom to Connect 2008!
March 31 & April 1, 2008, Washington, DC

The theme of F2C: Freedom to Connect 2008 is "The NetHeads Come to Washington."

This year there will be a second theme at F2C, "A Carbon-Negative Internet." We will devote at least one session, and perhaps a half day, to exploring the impacts of applications like user monitored edge-based control of energy usage, cloud routing of compute-intensive operations to geographical locations with renewable energy, peer-to-peer automobile traffic optimization, and the putative trade-off between physical presence and virtual presence.

Conventional wisdom is that NetHeads have sharply different interests than telephone companies and cable companies. This is mostly true, yet both need a robust, sustainable Internet. It is in the long-term interests of neither to kill the 'Net's success factors. Further, conventional wisdom is that NetHeads are represented by public advocacy groups like Free Press, Public Knowledge, and the New America Foundation and aligned with Internet companies like Google, Amazon, and eBay. Again this is directionally correct, but the diversity of the NetHead community ensures divergence on key issues.

Biology teaches that diversity is good. Most business practices teach the opposite. Washington hears much from the telcos and cablecos, and much from the Internet companies and the public advocacy groups, but way too little from the NetHeads themselves. F2C 2008 will provide a platform for NetHead voices and a forum for dialog among all parties with a stake in the future of an open, sustainable, state-of-the-art Internet.

So far (this is changing rapidly so check back here often) F2C speakers include:

* Tim Wu, Professor, Columbia Law School, Author of Wireless Carterfone (2007)
* Tom Evslin, founder ITXC, founder AT&T WorldNet, blogger, author, telecom activist
* Reed Hundt, former chairman of the FCC
* Andrew Rasiej, co-founder, Personal Democracy Forum
* Bill St. Arnaud, Chief Research Officer CANARIE and green-broadband blogger
* Brad Templeton, Chairman, Electronic Frontier Foundation
* Katrin Verclas, former Exec. Director NTEN, MobileActive blogger.
* Robin Chase, founder of ZipCar, entrepreneuse and environmentalist.

Tuesday, December 18, 2007

New undersea cable to Iceland to enable zero carbon data centres


[Hibernia Atlantic is planning to build a cable from Ireland to Iceland to capture the data centre opportunity that cheap geothermal and hydro Icelandic power presents. To my mind this is a classic example of the new business opportunities that are possible for first-mover countries and companies who want to address the challenge of global warming. Newfoundland and Labrador in Canada is similarly well poised with its new undersea fiber networks to Nova Scotia and Greenland, combined with the presence of renewable hydroelectric energy at Churchill Falls. Newfoundland and Iceland could be the logical locations for new zero-carbon data centers for North America and Europe. Thanks to Rod Beck for this pointer --BSA]


http://www.hiberniaatlantic.com/documents/8607-IcelandPR-JSAFinal.pdf

HIBERNIA ATLANTIC WILL CONSTRUCT A NEW
SUBMARINE FIBER OPTIC CABLE CONNECTING ICELAND
DIRECTLY TO NORTH AMERICA AND EUROPE
THIS HISTORIC NETWORK BUILD MARKS ANOTHER “INDUSTRY FIRST”
FOR THE DIVERSE TRANS-ATLANTIC CABLE PROVIDER
BOSTON, MA & NEW YORK, NY – August 9, 2007

– Hibernia Atlantic, the only diverse
TransAtlantic submarine transport cable provider, today announces its plan to construct a brand new undersea fiber optic cable system connecting Iceland to its northern Atlantic submarine cable system. Hibernia Atlantic will deploy a branching unit off its existing northern cable, giving Iceland direct connectivity to North America, Ireland, London, Amsterdam and the rest of continental Europe. The new cable link will provide connectivity to Iceland at 192 X 10 Gbps Ethernet wavelengths, the only one of its kind in the region. This allows for communications traffic from Iceland to go either East or West, with direct access to 42 cities and 52 network Points of Presence (PoPs) and the ability to steer traffic around major metropolitan areas and bypass traditional backhaul routes. Hibernia Atlantic projects the system will become fully operational for customer traffic in the Fall of 2008.

“Many server-intensive customers who require reliable and inexpensive power for collocation services are looking to Iceland as their most cost-effective solution,” states Ken Peterson, Chairman of Hibernia Atlantic’s Board of Directors and the Chairman of Columbia Ventures Corporation, Hibernia’s parent company. “Iceland has an abundance of inexpensive geothermal and hydroelectric power that makes it attractive for many industries. The country is also committed to one day becoming entirely reliant on renewable energy sources, thereby making it an attractive and fertile place to do business.”

“Over a hundred years ago, Iceland marked a milestone in the history of its telecommunications,” continues Bjarni K. Thorvardarson, Hibernia Atlantic’s CEO and Icelandic native. “A submarine telegraph cable was laid from Scotland through the Faroe Islands to the East Coast of Iceland. That same year, a telegraph and telephone line was laid to the capital Reykjavik, thereby ending the country's isolation. Today, more than a century later, Hibernia is proud to announce its plans to build an upgraded submarine cable providing 10 Gbps Ethernet connectivity to Iceland, a major improvement on current capacity, and the addition of yet another key location in the growing list of Hibernia Atlantic operations and Points of Presence. We are pleased and excited to add this segment to our already healthy cable system.”

This new cable provides Iceland much needed diversity from its existing infrastructure. Currently, the only cable with available capacity is Farice, a submarine cable system connecting Iceland and the Faroe Islands to Scotland. Upon completion of the new Hibernia Atlantic cable, which will offer 192 X 10 Gbps wavelengths, Hibernia Atlantic will supply Iceland with a major upgrade in capacity, efficiency, reliability and first-to-market Ethernet services. Hibernia Atlantic will also serve as another redundant option to connect to North America, Ireland and other major European cities.

For the complete Hibernia Atlantic network map and service offerings, videocasts and the Hibernia Atlantic Blog, please visit www.hiberniaatlantic.com. If you have additional questions on network capacity, please email eric.gutshall@hiberniaatlantic.com.
# # #
About Hibernia Atlantic:
Hibernia Atlantic is a privately held, US-owned, TransAtlantic submarine cable that provides “Security through Diversity” to European and US customers. Hibernia offers wholesale capacity prices, unparalleled support, flexibility and service while delivering customized solutions for its customers. Hibernia Atlantic’s redundant rings include access to Dublin, Manchester, London, Amsterdam, Brussels, Frankfurt, Paris, New York City, White Plains, Stamford, Newark, Ashburn, Boston, Albany, Halifax, Montreal and more. Hibernia provides dedicated Ethernet and optical-level service up to GigE, 10G and LanPhy wavelengths and traditional SONET/SDH services. Hibernia Atlantic’s cutting-edge network technology offers enterprise customers, carriers and wholesale customers reliable, next-generation bundled services at affordable prices. For more information or a complete network map, please visit www.hiberniaatlantic.com. For Hibernia Atlantic media enquiries, please contact: Jaymie Scotto & Associates 866.695.3629 pr@jaymiescotto.com

Sunday, December 16, 2007

Cloud Routing, Cloud Computing, Global Warming and Cyber-Infrastructure

[To my mind "cloud computing" and "cloud routing" are technologies that will not only radically alter cyber-infrastructure but also enable the Internet and ICT community to address the serious challenges of global warming.

Cloud computing allows us to locate computing resources anywhere in the world. No longer does the computer (whether it is a PC or supercomputer) have to be collocated with a user or institution. With high bandwidth optical networks it is now possible to collocate cloud computing resources with renewable energy sites in remote locations.

Cloud routing will change the Internet in much the same way as cloud computing has changed computation and cyber-infrastructure. Today's Internet topologies are largely based on locating routers and switches with the shortest geographical reach to end users. But once again low cost high bandwidth optical networks allow us to distribute routing and forwarding to renewable energy sites at remote locations. In effect we are scaling up something that we routinely do today on the Internet with such concepts as remote peering and backhauling. By breaking up the Internet forwarding table into small blocks on /16 or finer boundaries we can also distribute the forwarding and routing load across a "cloud" of many thousands of PCs instead of specialized routers.
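As a rough illustration of this idea (not a description of any deployed system), the short Python sketch below partitions forwarding responsibility on /16 boundaries and maps each destination address to one node in a hypothetical pool of commodity forwarding boxes. The node names and the simple modulo placement policy are invented for the example.

import ipaddress

# Hypothetical pool of commodity forwarding nodes, e.g. PCs located at
# renewable energy sites. Names are placeholders for the illustration.
FORWARDING_NODES = ["node-hydro-01", "node-wind-02", "node-solar-03", "node-geo-04"]

def block_for_address(dst_ip: str) -> int:
    """Return the /16 block index (0..65535) that a destination address falls in."""
    addr = int(ipaddress.IPv4Address(dst_ip))
    return addr >> 16  # the top 16 bits identify the /16 block

def node_for_address(dst_ip: str) -> str:
    """Map a destination address to the node responsible for its /16 block."""
    block = block_for_address(dst_ip)
    return FORWARDING_NODES[block % len(FORWARDING_NODES)]

if __name__ == "__main__":
    for dst in ["192.0.2.10", "198.51.100.7", "203.0.113.99"]:
        print(dst, "->", node_for_address(dst))

In practice the mapping from address block to forwarding node would be driven by the routing system and the optical topology rather than a fixed modulo rule; the point is simply that once the table is split on /16 boundaries, each node only ever needs to know a small slice of it.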

The other big attraction of "cloud" services, whether routing or computational, is their high resiliency. This is essential if you want to collocate these services at remote renewable energy sites around the world. Renewable energy sites, by their very nature, are going to be far less reliable and stable, so "highly disruption-tolerant" routing and data replication services are essential.

Some excerpts from postings on Gordon Cook's Arch-econ list --BSA]

For more information on cloud routing:

http://green-broadband.blogspot.com/2007/12/new-internet-architectures-to-reduce.html

http://www.canarie.ca/canet4/library/recent/BELnet_Technical_presentation_Dec_11_2007.ppt
For more information on this item please visit my blog at
http://green-broadband.blogspot.com/ or http://billstarnaud.blogspot.com
-------------------------------------------

For more information on Next Generation Internet and reducing Global Warming http://green-broadband.blogspot.com



http://www.businessweek.com/magazine/toc/07_52/B4064magazine.htm

Google and the Wisdom of Clouds
A lofty new strategy aims to put incredible computing power in the hands of many by Stephen Baker

[...]
What is Google's cloud? It's a network made of hundreds of thousands, or by some estimates 1 million, cheap servers, each not much more powerful than the PCs we have in our homes. It stores staggering amounts of data, including numerous copies of the World Wide Web. This makes search faster, helping ferret out answers to billions of queries in a fraction of a second. Unlike many traditional supercomputers, Google's system never ages. When its individual pieces die, usually after about three years, engineers pluck them out and replace them with new, faster boxes. This means the cloud regenerates as it grows, almost like a living thing.

A move towards clouds signals a fundamental shift in how we handle information. At the most basic level, it's the computing equivalent of the evolution in electricity a century ago when farms and businesses shut down their own generators and bought power instead from efficient industrial utilities. Google executives had long envisioned and prepared for this change. Cloud computing, with Google's machinery at the very center, fit neatly into the company's grand vision, established a decade ago by founders Sergey Brin and Larry Page: "to organize the world's information and make it universally accessible and useful."

ONE-WAY STREET
For small companies and entrepreneurs, clouds mean opportunity-a leveling of the playing field in the most data-intensive forms of computing. To date, only a select group of cloud-wielding Internet giants has had the resources to scoop up huge masses of information and build businesses upon it.

This status quo is already starting to change. In the past year, Amazon has opened up its own networks of computers to paying customers, initiating new players, large and small, to cloud computing. Some users simply park their massive databases with Amazon. Others use Amazon's computers to mine data or create Web services. In November, Yahoo opened up a cluster of computers-a small cloud-for researchers at Carnegie Mellon University. And Microsoft (MSFT) has deepened its ties to communities of scientific researchers by providing them access to its own server farms. As these clouds grow, says Frank Gens, senior analyst at market research firm IDC, "A whole new community of Web startups will have access to these machines. It's like they're planting Google seeds." Many such startups will emerge in science and medicine, as data-crunching laboratories searching for new materials and drugs set up shop in the clouds.

Many [scientists] were dying for cloud know how and computing power-especially for scientific research. In practically every field, scientists were grappling with vast piles of new data issuing from a host of sensors, analytic equipment, and ever-finer measuring tools. Patterns in these troves could point to new medicines and therapies, new forms of clean energy. They could help predict earthquakes. But most scientists lacked the machinery to store and sift through these digital El Dorados. "We're drowning in data," said Jeannette Wing, assistant director of the National Science Foundation.

All sorts of business models are sure to evolve. Google and its rivals could team up with customers, perhaps exchanging computing power for access to their data. They could recruit partners into their clouds for pet projects, such as the company's clean energy initiative, announced in November. With the electric bills at jumbo data centers running upwards of $20 million a year, according to industry analysts, it's only natural for Google to commit both brains and server capacity to the search for game-changing energy breakthroughs.

What will research clouds look like? Tony Hey, vice-president for external research at Microsoft, says they'll function as huge virtual laboratories, with a new generation of librarians-some of them human-"curating" troves of data, opening them to researchers with the right credentials. Authorized users, he says, will build new tools, haul in data, and share it with far-flung colleagues. In these new labs, he predicts, "you may win the Nobel prize by analyzing data assembled by someone else." Mark Dean, head of IBM's research operation in Almaden, Calif., says that the mixture of business and science will lead, in a few short years, to networks of clouds that will tax our imagination. "Compared to this," he says, "the Web is tiny. We'll be laughing at how small the Web is." And yet, if this "tiny" Web was big enough to spawn Google and its empire, there's no telling what opportunities could open up in the giant clouds.


================

December 13, 2007, 4:07PM EST

Online Extra: The Two Flavors of Google
A battle could be shaping up between the two leading software platforms for cloud computing, one proprietary and the other open-source by Stephen Baker

Why are search engines so fast? They farm out the job to multiple processors. Each task is a team effort, some of them involving hundreds, or even thousands, of computers working in concert. As more businesses and researchers shift complex data operations to clusters of computers known as clouds, the software that orchestrates that teamwork becomes increasingly vital. The state of the art is Google's in-house computing platform, known as MapReduce. But Google (GOOG) is keeping that gem in-house. An open-source version of MapReduce known as Hadoop is shaping up to become the industry standard.
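For readers unfamiliar with the programming model being discussed, here is a minimal, single-machine Python sketch of the map/shuffle/reduce pattern (a word count). It only illustrates the idea; it is not Google's MapReduce code nor Hadoop's actual API, both of which run these same phases across many machines.

from collections import defaultdict
from typing import Dict, Iterable, Iterator, List, Tuple

def map_phase(documents: Iterable[str]) -> Iterator[Tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    """Shuffle: group the intermediate values by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped: Dict[str, List[int]]) -> Dict[str, int]:
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

if __name__ == "__main__":
    docs = ["the cloud regenerates as it grows", "the cloud stores the web"]
    print(reduce_phase(shuffle(map_phase(docs))))

The distributed frameworks add what this toy leaves out: partitioning the input across machines, moving the intermediate (word, 1) pairs between them, and recovering when individual machines fail.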

This means that the two leading software platforms for cloud computing could end up being two flavors of Google, one proprietary and the other-Hadoop-open source. And their battle for dominance could occur even within Google's own clouds. Here's why: MapReduce is so effective because it works exclusively inside Google, and it handles a limited menu of chores. Its versatility is in question. If Hadoop attracts a large community of developers, it could develop into a more versatile tool, handling a wide variety of work, from scientific data-crunching to consumer marketing analytics. And as it becomes a standard in university labs, young computer scientists will emerge into the job market with Hadoop skills.

Gaining Fans
The growth of Hadoop creates a tangle of relationships in the world of megacomputing. The core development team works inside Google's rival, Yahoo! (YHOO). This means that as Google and IBM (IBM) put together software for their university cloud initiative, announced in October, they will work with a Google clone developed largely by a team at Yahoo. The tool is already gaining fans. Facebook uses Hadoop to analyze user behavior and the effectiveness of ads on the site, says Hadoop founder Doug Cutting, who now works at Yahoo.

In early November, for example, the tech team at The New York Times (NYT) rented computing power on Amazon's (AMZN) cloud and used Hadoop to convert 11 million archived articles, dating back to 1851, to digital and searchable documents. They turned around in a single day a job that otherwise would have taken months.

[...]
========================

December 13, 2007, 5:00PM EST

A Sea Change
Data from the deep like never before
Scientists knee-deep in data are longing for the storage capacity and power of cloud computing. University of Washington oceanographer John R. Delaney is one of many who are desperate to tap into it.

Delaney is putting together a $170 million project called Neptune, which could become the prototype for a new era of data-intensive research. Launching this year, Neptune deploys hundreds of miles of fiber-optic cable connected to thousands of sensors in the Pacific Ocean off the Washington coast. The sensors will stream back data on the behavior of the ocean: its temperature, light, life forms, the changing currents, chemistry, and the physics of motion. Microphones will record the sound track of the deep sea, from the songs of whales to the rumble of underwater volcanos.

Neptune will provide researchers with an orgy of information from the deep. It will extend humanity's eyes and ears-and many other senses-to the two-thirds of the planet we barely know. "We've lived on Planet Land for a long time," says Delaney, who works out of an office near Puget Sound. "This is a mission to Planet Ocean."

He describes the hidden planet as a vast matrix of relationships. Sharks, plankton, red tides, thermal vents spewing boiling water-they're all connected to each other, he says. And if scientists can untangle these ties, they can start to predict how certain changes within the ocean will affect the weather, crops, and life on earth. Later this century, he ventures, we'll have a mathematical model of the world's oceans, and will be able to "manage" them. "We manage Central Park now, and the National Forests," he says. "Why not the oceans?"

To turn Neptune's torrents of data into predictive intelligence, teams of scientists from many fields will have to hunt for patterns and statistical correlations. The laboratory for this work, says Delaney, will be "gigantic disk farms that distribute it all over the planet, just like Google (GOOG)." In other words, Neptune, like other big science projects, needs a cloud. Delaney doesn't yet know on which cloud Neptune will land. Without leaving Seattle, he has Microsoft (MSFT) and Amazon (AMZN), along with a Google-IBM (IBM) venture at his own university.

What will the work on this cloud consist of? Picture scientists calling up comparisons from the data and then posing endless queries. In that sense, cloud science may feel a bit like a Google search.



========================

December 13, 2007, 5:00PM EST

Online Extra: Google's Head in the Clouds
CEO Eric Schmidt talks about the powerful globe-spanning networks of computers known as clouds, and discovering the next big idea

Instead, think about Google as a star-studded collection of computer scientists who have access to a fabulous machine, a distributed network of data centers that behave as one. These globe-spanning networks of computers are known as "clouds." They represent a new species of global supercomputer, one that specializes in burrowing through mountains of random, unstructured data at lightning speed. Scientists are hungry for this kind of computing. Data-deluged businesses need it.

On cloud computing:

What [cloud computing] has come to mean now is a synonym for the return of the mainframe. It used to be that mainframes had all of the data. You had these relatively dumb terminals. In the PC period, the PC took over a lot of that functionality, which is great. We now have the return of the mainframe, and the mainframe is a set of computers. You never visit them, you never see them. But they're out there. They're in a cloud somewhere. They're in the sky, and they're always around. That's roughly the metaphor.

On Google's place in cloud computing:

Google is a cloud computing server, and in fact we are spending billions of dollars-this is public information-to build data centers, which are in one sense a return to the mainframe. In another sense, they're one large supercomputer. And in another sense, they are the cloud itself.

So Google aspires to be a large portion of the cloud, or a cloud that you would interact with every day. Why would Google want to do that? Well, because we're particularly good at high-speed data and data computation.

On Google's software edge:

Google is so fast because more than one computer is working on your query. It farms out your question, if you will, to on the order of 25 computers. It says, "You guys look over here for some answers, you guys look over here for some answers." And then the answers come back very quickly. It then organizes it to a single answer. You can't tell which computer gave you the answer.
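What Schmidt is describing is a scatter/gather pattern: a front end fans the query out to many machines, each searches only its own slice of the index, and the partial answers are merged into a single ranked result. The hypothetical Python sketch below shows the mechanics with threads standing in for separate computers; the shard contents and scoring are invented for the illustration.

from concurrent.futures import ThreadPoolExecutor

# Invented index shards; in a real system each shard would live on its own machine.
SHARDS = {
    "shard-a": {"solar": 3, "wind": 1},
    "shard-b": {"solar": 5, "tidal": 2},
    "shard-c": {"wind": 4, "geothermal": 6},
}

def search_shard(shard_name: str, query: str) -> list:
    """Each worker scores only the documents held in its own shard."""
    index = SHARDS[shard_name]
    return [(shard_name, term, score) for term, score in index.items() if term == query]

def search(query: str) -> list:
    """Scatter the query to all shards in parallel, then gather and rank the answers."""
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = list(pool.map(lambda name: search_shard(name, query), SHARDS))
    merged = [hit for partial in partials for hit in partial]
    return sorted(merged, key=lambda hit: hit[2], reverse=True)

if __name__ == "__main__":
    print(search("solar"))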

On the size of cloud computing:

There's no limit. The reason Google is investing so much in very-high-speed data is because we see this explosion, essentially digital data multimedia explosion, as infinitely larger than people are talking about today. Everything can be measured, sensed, tracked in real time.

On applications that run on a cloud:

Let's look at Google Earth. You can think of the cloud and the servers that provide Google Earth as a platform for applications. The term we use is location-based services. Here's a simple example. Everyone here has cell phones with GPS and a camera. Imagine if all of a sudden there were a mobile phone which took picture after picture after picture, and posted it to Google Earth about what's going on in the world. Now is that interesting, or will it produce enormous amounts of noise? My guess is that it'll be a lot of noise.

So then we'll have to design algorithms that will sort through to find the things that are interesting or special, which is yet another need for cloud computing. One of the problems is you have these large collections coming in, and they have relatively high noise to value. In our world, it's a search problem.

On Google becoming a giant of computing:

This is our goal. We're doing it because the applications actually need these services. A typical example is that you're a Gmail user. Most people's attachments are megabytes long, because they're attaching everything plus the kitchen sink, and they're using Gmail for transporting random bags of bits. That's the problem of scale. But from a Google perspective, it provides significant barriers to entry against our competitors, except for the very well-funded ones.

I like to think of [the data centers] as cyclotrons. There are only a few cyclotrons in physics and every one of them is important, because if you're a top flight physicist you need to be at the lab where that cyclotron is being run because that's where history's going to be made, that's where the inventions are going to come from. So my idea is that if you think of these as supercomputers that happen to be assembled from smaller computers, we have the most attractive supercomputers, from a science perspective, for people to come work on.

On the Google-IBM education project:

Universities were having trouble participating in this phenomenon [cloud computing] because they couldn't afford the billions of dollars it takes to build these enormous facilities. So [Christophe Bisciglia] figured out a way to get a smaller version of what we're doing into the curriculum, which is clearly positive from our perspective, because it gets the concepts out. But it also whets the appetite for people to say, "Hey, I want 10,000 computers," as opposed to 100.



Wednesday, December 12, 2007

High Speed Internet Helps Cool the Planet

[Lightreading has been carrying a very useful blog on the Future of the Internet. Your faithful correspondent has been making some contributions regarding how the Internet and ICT in general can contribute to reducing CO2 emissions. This can be done in 3 ways:

(a) The Internet and ICT industry has the tools today to reduce its own global carbon emissions to absolute zero by collocating routers and servers with renewable energy sites and using advanced data replication and re-routing techniques across optical networks. If the ICT industry alone produces 10% of global carbon emissions, this alone can have a significant impact

(b) Developing societal applications that promote use of the Internet as an alternative to carbon-generating activities such as tele-commuting, distance learning, etc., as outlined below

(c) Deploying "bits and bandwidth for carbon" trading programs as an alternative strategy to carbon taxes, cap and trade and/or carbon offsets, as for example in the green broadband initiative - http://green-broadband.blogspot.com

Thanks to Mr Roques for this pointer, posted on Lightreading --BSA]



Lightreading: The future of the Internet and Global Warming

http://www.internetevolution.com/messages.asp?piddl_msgthreadid=178018&piddl_msgid=151707#msg_151707



Study: High-speed Internet helps cool the planet http://www.news.com/8301-11128_3-9832021-54.html


Tempted to obsess over how another personal habit helps or hurts the Earth? Keep surfing with cable or DSL and you might save carbons in the process, according to the American Consumer Institute.

The world would be spared 1 billion tons of greenhouse gases within a decade if broadband Internet access were pervasive, the group's report (PDF) concluded in October.

Broadband is available to 95 percent of U.S. households but active in only half of them, the study said, noting that near-universal adoption of high-speed Internet would cut the equivalent of 11 percent of oil imports to the United States each year.

How would faster downloads and Web page loads curb the annual flow of globe-warming gases, and by how much? According to the report:

Telecommuting, a "zero emission" practice, eliminates office space and car commutes: 588 million tons.
E-commerce cuts the need for warehouses and long-distance shipping: 206 million tons.
Widespread teleconferencing could bring one-tenth of all flights to a halt: 200 million tons.
Downloading music, movies, newspapers, and books saves packaging, paper, and shipping: 67 million tons.

The Department of Energy estimates that the nation's emissions of carbon dioxide alone total 8 billion tons each year.
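As a quick check on the scale of these figures, the few lines of Python below simply add up the report's categories and compare them with the DOE estimate quoted above; spreading the savings evenly across the decade is an assumption made only for this illustration.

# All figures are taken from the article above (millions of tons of greenhouse gases).
savings_million_tons = {
    "telecommuting": 588,
    "e-commerce": 206,
    "teleconferencing": 200,
    "digital downloads": 67,
}

total_decade = sum(savings_million_tons.values())   # saved over roughly a decade
per_year = total_decade / 10                         # assumes savings accrue evenly
us_co2_per_year = 8_000                              # DOE: ~8 billion tons of CO2 per year

print(f"claimed savings: {total_decade} million tons over a decade (~1 billion)")
print(f"roughly {per_year:.0f} million tons per year, or about "
      f"{100 * per_year / us_co2_per_year:.1f}% of annual US CO2 emissions")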

A study released and funded by a major Australian telecom company in October also suggested that broader use of broadband could cut that country's carbons by 5 percent by 2015.

All it would take is for more people to use software to monitor shipping schedules, cut the flow of power to dormant gadgets and so forth, the study said.

[...]

Monday, December 3, 2007

The Inefficient Truth - ICT carbon emissions to surpass Aviation Industry


http://www.globalactionplan.org.uk/event_detail.aspx?eid=ef0cecc6-2621-4a3c-962c-e4758b8952f8


The 'Inefficient Truth'

Inefficient ICT Sector's Carbon Emissions set to Surpass Aviation Industry

An Inefficient Truth is the first research report produced by Global Action Plan on behalf of the Environmental IT Leadership Team. The Leadership Team is a unique gathering of major ICT users from a range of different sectors who are committed to taking practical action to cut carbon dioxide emissions.

The report contains four sections.

1. The first section assesses the environmental impact of the ICT sector which is virtually the equivalent of the aviation industry.
2. Section two analyses survey results from major ICT users and discovers how quickly and effectively the sector is responding to the environmental agenda.
3. The third section takes a snapshot look at some case studies illustrating how companies are implementing practical solutions that are reducing carbon emissions and saving them money.
4. Finally, there is a Call to Action from Global Action Plan setting out some of the challenges facing Government, vendors and users in order to move the sector towards a lower carbon future.

An Inefficient Truth is the first part of a longer journey which will see Global Action Plan using its position as an independent practical environmental charity to help cut carbon emissions from the ICT sector.

The environmental charity Global Action Plan today calls on the UK government to introduce legislation and tax incentives to support the adoption of sustainable ICT policies and strategy in British businesses.

The report includes a national survey that is the first to measure awareness of the link between the use of ICT in business and its contribution to the UK's carbon footprint; identify the proportion of companies seeking energy efficient strategies; and promote examples of best practice.

Key findings in the report include:

* 61% of UK data centres only have the capacity for two years of growth.
* 37% of companies are storing data indefinitely due to government policy.
* Nearly 40% of servers are underutilised by more than 50%.
* 80% of respondents do not believe their company's data policies are environmentally sustainable.

Trewin Restorick, director of Global Action Plan and chair of the EILT, comments, "ICT equipment currently accounts for 3-4% of the world's carbon emissions, and 10% of the UK's energy bill. The average server, for example, has roughly the same annual carbon footprint as an SUV doing 15 miles-per-gallon! With a carbon footprint now equal to the aviation industry, ICT, and how businesses utilise ICT, will increasingly come under the spotlight as governments seek to achieve carbon-cutting commitments."

The survey, which was completed by CIOs, IT directors and senior decision makers from 120 UK enterprises, found that over 60% of respondents consider time pressures and cost the biggest barriers to adopting sustainable ICT policies, and believe that recognised standards and tax allowances would provide the most valuable support towards reducing ICT's contribution to the UK's carbon emissions.

Restorick adds, "The survey illustrates that ICT departments have been slow off the mark to address their carbon footprint. Awareness is now growing but to turn this into action, ICT departments need help. They need vendors to give them better information rather than selling green froth, they need Government policies to become more supportive and less contradictory, and they need more support from within their organisations."

Logicalis, international ICT provider and sponsor of 'An Inefficient Truth', agrees that legislation and tax incentives are important, but, first and foremost, businesses must evaluate the efficiency of existing ICT infrastructure, citing server under-utilisation and the data centre as prime examples of energy abuse. Tom Kelly, managing director for Logicalis UK, comments:

"The government's draft climate change bill proposes a 60% cut in emissions by 2050. In this environment, a flabby business that guzzles budget and energy is likely to be a prime target for impending legislation.

"CIOs have a responsibility to ensure their ICT infrastructure can support a lean and dynamic business, yet as this survey demonstrates, many ICT departments are unsure if and how they can maximise their existing assets. With data centre capacity at a premium, and energy bills escalating, CIOs are well advised to look inward for energy saving initiatives and to instigate cultural change throughout the business. In short, efficient IT equals green IT."

As a result of the survey Global Action Plan is calling on ICT vendors and the government to provide businesses with the support and tools to implement ICT best practice. These demands include:

* Government to provide incentives to help companies reduce the carbon footprint of their IT activities
* Government to ensure that there is a sufficient supply of energy for data centre needs in the future
* Government to review its policies on long-term data storage to take into account the carbon implications
* ICT vendors to significantly improve the quality of their environmental information
* ICT departments to be accountable for the energy costs of running and cooling ICT equipment
* Companies to ensure ICT departments are fully engaged in their CSR and environmental policies
* Companies to ensure that their ICT infrastructure meets stricter efficiency targets

Gary Hird, Technical Strategy Manager for John Lewis Partnership and member of the EILT comments: "Green Computing is an opportunity for us all to clearly demonstrate IT's value in helping our companies tackle an urgent, and global, issue. It is vital that we do a good job collectively and that means being open about the specific problems we're facing and the solutions we're pursuing. The Global Action Plan survey provides a 'current state' understanding of companies' green IT initiatives and the obstacles we must overcome to help them succeed."

Carbon dioxide emissions from ICT industry equal those of aviation industry


[Here is a fascinating news clip from Sky News that puts the carbon emissions of the ICT industry in perspective. It claims that carbon emissions from the global ICT community equal those of the worldwide aviation industry and are growing much faster. One small computer server generates as much carbon dioxide as an SUV with a fuel efficiency of 15 miles per gallon. The ICT industry in the UK consumes as much electricity as is produced by four nuclear reactors. The aviation industry is already going to great lengths to mitigate its carbon footprint, but to date few comparable efforts have been undertaken by the ICT industry. And yet the ICT industry, in my opinion, is in the best position of any sector in society to reduce its carbon footprint to nearly zero and beyond.

Thanks to Conal Henry for this posting on Gordon Cook's Arch-econ list --BSA]

http://news.sky.com/skynews/video/videoplayer/0,,31200-1295311,.html

New Internet architectures to reduce carbon emissions

[This is another posting as part of my own evolving thinking on how the Internet, and in particular research and education networks, can help reduce carbon dioxide emissions: firstly by re-engineering the network, and secondly by deploying applications and services that will encourage others to use the Internet in novel ways in order to minimize their own carbon footprint.

First of all I would like to thank all those people who sent me e-mails with additional suggestions, comments and ideas on how ICT technologies, in particular the Internet and broadband can be used to mitigate the impact of global warming. Given the large number of e-mails I have received on the subject I apologize if I have not been able to reply to some of you directly.

I want to assure you that none of my ideas, and those of others that have been posted here, are in any way cast in stone or anywhere close to deployment. Many of these ideas come from my own fevered brain and may never survive close scrutiny by experts or validation in the marketplace. The purpose of this e-mail and my blog is to stimulate some creative thinking in the Internet community, and especially within R&E networks, on ways we can collectively design "green" Internet solutions. This is a community that is used to rapid change and has many of the most innovative people in business or academia. Hopefully my blog, in some small way, will stimulate others to develop more robust and scalable solutions that help address what, in my opinion, is the biggest challenge of this generation and of this decade - global warming.


In today's Internet one of the biggest energy sinks, and consequently a significant source of carbon emissions because of its electrical and cooling requirements, is the core router.

Internet routers are custom-designed pieces of computing equipment which must operate at very high speeds in order to do fast lookups in the forwarding table and process packets at line speed. The need for fast lookups is further compounded by the continued growth of routing tables over the past few years.

To process packets at wire-line speeds, modern routers usually have multiple ASICs on the forwarding card. Each ASIC handles only a subset of the forwarding table, which is split up between the various ASICs on /8, /16 (or finer-grained) address boundaries.

An alternative to big core routers with multiple ASICs is to deploy networks of multiple virtual routers, with each network of virtual routers assigned an address block. All virtual routers for a given address block are linked together by a dedicated lightpath network, independent of the parallel virtual routers and networks for the other address blocks.

Each address range or block would have a global set of virtual routers dedicated to forwarding and routing for that address block, and the optical connections between the virtual routers can be traffic engineered to optimize flows for that block. As well, separate OSPF (or IS-IS) networks can be deployed for each address block. At inter-domain boundaries these separate address block networks can be aggregated into a single connection to a neighbouring AS, or arrangements can be made to advertise separate BGP networks with parallel ASes for each address block network.

At first blush this seems to be an incredible waste of resources. Not only would separate routing tables and networks have to be maintained, but multiple copies of filtering policies, etc., would have to be deployed for each network address block.

However, breaking up the forwarding table into multiple (roughly) parallel forwarding networks, where each network is assigned a specific address block, allows us to deploy much less expensive commoditized routers using off-the-shelf open source routing engines like Vyatta.

Because these routers don't have to do lookups on the entire forwarding table, they can be built from inexpensive commodity components. In effect we are trading off large forwarding tables implemented in ASICs against commodity virtual routers with multiple parallel optical networks, one for each address block.
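To make this trade-off concrete, the hypothetical Python sketch below slices a full forwarding table down to just the prefixes that belong to one /8 address block and then does a longest-prefix match within that much smaller per-block table. The prefixes and "lightpath" next hops are invented for the illustration; a production virtual router would of course run a real routing engine such as Vyatta rather than this toy lookup.

import ipaddress

# A stand-in for the full Internet forwarding table (invented prefixes/next hops).
FULL_TABLE = {
    "203.0.0.0/10": "lightpath-to-site-A",
    "203.64.0.0/12": "lightpath-to-site-B",
    "198.51.0.0/16": "lightpath-to-site-C",
    "192.0.2.0/24": "lightpath-to-site-D",
}

def slice_for_block(table: dict, block: str) -> dict:
    """Keep only the prefixes that fall inside this virtual router's address block."""
    block_net = ipaddress.ip_network(block)
    return {prefix: next_hop for prefix, next_hop in table.items()
            if ipaddress.ip_network(prefix).subnet_of(block_net)}

def lookup(block_table: dict, dst_ip: str) -> str:
    """Longest-prefix match run only against the small per-block table."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [ipaddress.ip_network(p) for p in block_table if dst in ipaddress.ip_network(p)]
    if not matches:
        return "default"
    best = max(matches, key=lambda net: net.prefixlen)
    return block_table[str(best)]

if __name__ == "__main__":
    table_203 = slice_for_block(FULL_TABLE, "203.0.0.0/8")
    print(len(table_203), "of", len(FULL_TABLE), "prefixes held by the 203/8 virtual router")
    print(lookup(table_203, "203.64.5.9"))  # falls in 203.64.0.0/12 -> lightpath-to-site-B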

More importantly, these low-cost (and low-energy, hence low-carbon-emission) devices can now be collocated near renewable energy sites. Not every site needs to support all virtual routers or carry the entire routing table. Instead, address block networks can be engineered with different topologies, linking together independent renewable energy sites that support alternate nodes for the various address block networks.

Because we have also broken down the Internet into many (roughly) parallel networks aligned along each address block, outages and re-routing can be more easily handled, especially as the routing nodes are located at renewable energy sites such as windmills and solar power farms.

Users would be backhauled over dedicated optical links to two or more virtual router renewable energy sites. The assumption is that an all-optical backhaul network has much lower carbon emissions than an energy-consuming electronic local router or stat-mux switch.

This architecture would be ideal for R&E networks, as they generally have a very small number of directly connected organizations such as universities and research centers. These organizations can even pre-classify their outgoing packets along the address block boundaries and send them out over separate parallel optical channels to the nearest renewable energy site(s) supporting the virtual routers for each address block.

Companies like Google are also well positioned to take advantage of this architecture, as they have a worldwide distributed network of low-cost servers and are rumored to be deploying custom-developed 10 GbE switches on their own private optical network. The same principles that Google used for its network of search engines could be applied to a virtual routed network as described here.

Optical networks are much better suited for this application than MPLS and PBT networks, which require electronic devices to do the forwarding and label switching. Optical networks can be significantly more energy efficient than electronic networks, though unquestionably far less efficient at multiplexing packets. Tools like Inocybe's Argia can be used to do the traffic engineering of the various optical paths assigned to each address block.

For more information on these architectural concepts please see my blog or presentations at http://green-broadband.blogspot.com

