Energy Internet and eVehicles Overview

Governments around the world are wrestling with the challenge of how to prepare society for inevitable climate change. To date most attention has focused on how to reduce greenhouse gas emissions, but there is now growing recognition that, regardless of what we do to mitigate climate change, the planet is going to be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, etc. As such we need to invest in solutions that provide a more robust and resilient infrastructure to withstand this environmental onslaught, especially for our electrical and telecommunications systems.

Linking renewable energy with high-speed Internet using fiber to the home, combined with eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users, one that is far more robust and resilient in the face of climate change than today's centralized command-and-control infrastructure. For more details please see:

Using eVehicles for Renewable Energy Transportation and Distribution: and

Free High Speed Internet to the Home or School Integrated with solar roof top:

High level architecture of Internet Networks to survive Climate Change:

Architecture and routing protocols for Energy Internet

Sunday, December 16, 2007

Cloud Routing, Cloud Computing, Global Warming and Cyber-Infrastructure

[To my mind "cloud computing" and "cloud routing" are technologies that will not only radically alter cyber-infrastructure but also enable the Internet and ICT community to address the serious challenges of global warming.

Cloud computing allows us to locate computing resources anywhere in the world. No longer does the computer (whether it is a PC or supercomputer) have to be collocated with a user or institution. With high bandwidth optical networks it is now possible to collocate cloud computing resources with renewable energy sites in remote locations.

Cloud routing will change the Internet in much the same way as cloud computing has changed computation and cyber-infrastructure. Today's Internet topologies are largely based on locating routers and switches with the shortest geographical reach to end users. But once again low cost high bandwidth optical networks allow us to distribute routing and forwarding to renewable energy sites at remote locations. In effect we are scaling up something that we routinely do today on the Internet with such concepts as remote peering and backhauling. By breaking up the Internet forwarding table into small blocks on /16 or finer boundaries we can also distribute the forwarding and routing load across a "cloud" of many thousands of PCs instead of specialized routers.
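The prefix-splitting idea above can be sketched in a few lines. This is a hypothetical illustration, not any real router's code: addresses are bucketed by their /16 block, and each block is assigned to one of many commodity forwarding nodes (the node count and the modulo mapping are assumptions chosen for the sketch).

```python
# Hypothetical sketch of distributing a forwarding table across a
# "cloud" of commodity PCs by /16 prefix blocks, as described above.
import ipaddress

NUM_NODES = 1000  # assumed size of the forwarding cloud

def block_of(ip: str) -> int:
    """Return the /16 block index (top 16 bits) of an IPv4 address."""
    return int(ipaddress.IPv4Address(ip)) >> 16

def node_for(ip: str) -> int:
    """Map an address's /16 block to one of the forwarding nodes."""
    return block_of(ip) % NUM_NODES

# Each node holds only the routes for the blocks mapped to it, so the
# full Internet forwarding table is spread over the whole cloud.
print(node_for("8.8.8.8"))
```

A real deployment would need consistent hashing or an explicit block-to-node directory so that blocks can move when nodes fail, which matters if the nodes sit at intermittently powered renewable energy sites.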

The other big attraction of "cloud" services, whether routing or computational, is their high resiliency. This is essential if you want to collocate these services at remote renewable energy sites around the world. Renewable energy sites, by their very nature, are going to be far less reliable and stable, so highly disruption-tolerant routing and data replication services are essential.

Some excerpts from postings on Gordon Cook's Arch-econ list -- BSA]

For more information on cloud routing:
For more information on this item please visit my blog.

For more information on Next Generation Internet and reducing Global Warming

Google and the Wisdom of Clouds
A lofty new strategy aims to put incredible computing power in the hands of many by Stephen Baker

What is Google's cloud? It's a network made of hundreds of thousands, or by some estimates 1 million, cheap servers, each not much more powerful than the PCs we have in our homes. It stores staggering amounts of data, including numerous copies of the World Wide Web. This makes search faster, helping ferret out answers to billions of queries in a fraction of a second. Unlike many traditional supercomputers, Google's system never ages. When its individual pieces die, usually after about three years, engineers pluck them out and replace them with new, faster boxes. This means the cloud regenerates as it grows, almost like a living thing.

A move towards clouds signals a fundamental shift in how we handle information. At the most basic level, it's the computing equivalent of the evolution in electricity a century ago when farms and businesses shut down their own generators and bought power instead from efficient industrial utilities. Google executives had long envisioned and prepared for this change. Cloud computing, with Google's machinery at the very center, fit neatly into the company's grand vision, established a decade ago by founders Sergey Brin and Larry Page: "to organize the world's information and make it universally accessible and useful."

For small companies and entrepreneurs, clouds mean opportunity: a leveling of the playing field in the most data-intensive forms of computing. To date, only a select group of cloud-wielding Internet giants has had the resources to scoop up huge masses of information and build businesses upon it.

This status quo is already starting to change. In the past year, Amazon has opened up its own networks of computers to paying customers, introducing new players, large and small, to cloud computing. Some users simply park their massive databases with Amazon. Others use Amazon's computers to mine data or create Web services. In November, Yahoo opened up a cluster of computers, a small cloud, for researchers at Carnegie Mellon University. And Microsoft (MSFT) has deepened its ties to communities of scientific researchers by providing them access to its own server farms. As these clouds grow, says Frank Gens, senior analyst at market research firm IDC, "A whole new community of Web startups will have access to these machines. It's like they're planting Google seeds." Many such startups will emerge in science and medicine, as data-crunching laboratories searching for new materials and drugs set up shop in the clouds.

Many [scientists] were dying for cloud know how and computing power-especially for scientific research. In practically every field, scientists were grappling with vast piles of new data issuing from a host of sensors, analytic equipment, and ever-finer measuring tools. Patterns in these troves could point to new medicines and therapies, new forms of clean energy. They could help predict earthquakes. But most scientists lacked the machinery to store and sift through these digital El Dorados. "We're drowning in data," said Jeannette Wing, assistant director of the National Science Foundation.

All sorts of business models are sure to evolve. Google and its rivals could team up with customers, perhaps exchanging computing power for access to their data. They could recruit partners into their clouds for pet projects, such as the company's clean energy initiative, announced in November. With the electric bills at jumbo data centers running upwards of $20 million a year, according to industry analysts, it's only natural for Google to commit both brains and server capacity to the search for game-changing energy breakthroughs.

What will research clouds look like? Tony Hey, vice-president for external research at Microsoft, says they'll function as huge virtual laboratories, with a new generation of librarians, some of them human, "curating" troves of data, opening them to researchers with the right credentials. Authorized users, he says, will build new tools, haul in data, and share it with far-flung colleagues. In these new labs, he predicts, "you may win the Nobel prize by analyzing data assembled by someone else." Mark Dean, head of IBM's research operation in Almaden, Calif., says that the mixture of business and science will lead, in a few short years, to networks of clouds that will tax our imagination. "Compared to this," he says, "the Web is tiny. We'll be laughing at how small the Web is." And yet, if this "tiny" Web was big enough to spawn Google and its empire, there's no telling what opportunities could open up in the giant clouds.


December 13, 2007, 4:07PM EST

Online Extra: The Two Flavors of Google
A battle could be shaping up between the two leading software platforms for cloud computing, one proprietary and the other open source, by Stephen Baker

Why are search engines so fast? They farm out the job to multiple processors. Each task is a team effort, some of them involving hundreds, or even thousands, of computers working in concert. As more businesses and researchers shift complex data operations to clusters of computers known as clouds, the software that orchestrates that teamwork becomes increasingly vital. The state of the art is Google's in-house computing platform, known as MapReduce. But Google (GOOG) is keeping that gem in-house. An open-source version of MapReduce known as Hadoop is shaping up to become the industry standard.
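The MapReduce model the article describes can be sketched in miniature. This single-process word-count example, with invented helper names, only illustrates the programming model; real MapReduce and Hadoop distribute the map, shuffle, and reduce phases across thousands of machines.

```python
# Minimal single-process sketch of the MapReduce pattern: a map step
# emits key/value pairs, a shuffle groups them by key, and a reduce
# step combines each group.
from collections import defaultdict

def map_phase(doc: str):
    # Emit (word, 1) for every word in a document.
    for word in doc.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Combine all counts emitted for one word.
    return key, sum(values)

def mapreduce(docs):
    groups = defaultdict(list)
    for doc in docs:                      # map + shuffle
        for k, v in map_phase(doc):
            groups[k].append(v)
    return dict(reduce_phase(k, vs) for k, vs in groups.items())

print(mapreduce(["the cloud", "the web"]))  # word counts across documents
```

The appeal of the model is that the map and reduce functions are pure and independent per key, so the framework can parallelize and restart them freely when individual machines fail.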

This means that the two leading software platforms for cloud computing could end up being two flavors of Google, one proprietary and the other, Hadoop, open source. And their battle for dominance could occur even within Google's own clouds. Here's why: MapReduce is so effective because it works exclusively inside Google, and it handles a limited menu of chores. Its versatility is an open question. If Hadoop attracts a large community of developers, it could develop into a more versatile tool, handling a wide variety of work, from scientific data-crunching to consumer marketing analytics. And as it becomes a standard in university labs, young computer scientists will emerge into the job market with Hadoop skills.

Gaining Fans
The growth of Hadoop creates a tangle of relationships in the world of megacomputing. The core development team works inside Google's rival, Yahoo! (YHOO). This means that as Google and IBM (IBM) put together software for their university cloud initiative, announced in October, they will work with a Google clone developed largely by a team at Yahoo. The tool is already gaining fans. Facebook uses Hadoop to analyze user behavior and the effectiveness of ads on the site, says Hadoop founder Doug Cutting, who now works at Yahoo.

In early November, for example, the tech team at The New York Times (NYT) rented computing power on Amazon's (AMZN) cloud and used Hadoop to convert 11 million archived articles, dating back to 1851, into digital and searchable documents. In a single day they turned around a job that would otherwise have taken months.


December 13, 2007, 5:00PM EST

A Sea Change
Data from the deep like never before
Scientists knee-deep in data are longing for the storage capacity and power of cloud computing. University of Washington oceanographer John R. Delaney is one of many who are desperate to tap into it.

Delaney is putting together a $170 million project called Neptune, which could become the prototype for a new era of data-intensive research. Launching this year, Neptune deploys hundreds of miles of fiber-optic cable connected to thousands of sensors in the Pacific Ocean off the Washington coast. The sensors will stream back data on the behavior of the ocean: its temperature, light, life forms, the changing currents, chemistry, and the physics of motion. Microphones will record the sound track of the deep sea, from the songs of whales to the rumble of underwater volcanoes.

Neptune will provide researchers with an orgy of information from the deep. It will extend humanity's eyes and ears, and many other senses, to the two-thirds of the planet we barely know. "We've lived on Planet Land for a long time," says Delaney, who works out of an office near Puget Sound. "This is a mission to Planet Ocean."

He describes the hidden planet as a vast matrix of relationships. Sharks, plankton, red tides, thermal vents spewing boiling water: they're all connected to each other, he says. And if scientists can untangle these ties, they can start to predict how certain changes within the ocean will affect the weather, crops, and life on earth. Later this century, he ventures, we'll have a mathematical model of the world's oceans, and will be able to "manage" them. "We manage Central Park now, and the National Forests," he says. "Why not the oceans?"

To turn Neptune's torrents of data into predictive intelligence, teams of scientists from many fields will have to hunt for patterns and statistical correlations. The laboratory for this work, says Delaney, will be "gigantic disk farms that distribute it all over the planet, just like Google (GOOG)." In other words, Neptune, like other big science projects, needs a cloud. Delaney doesn't yet know on which cloud Neptune will land. Without leaving Seattle, he has Microsoft (MSFT) and Amazon (AMZN), along with a Google-IBM (IBM) venture at his own university.

What will the work on this cloud consist of? Picture scientists calling up comparisons from the data and then posing endless queries. In that sense, cloud science may feel a bit like a Google search.


December 13, 2007, 5:00PM EST

Online Extra: Google's Head in the Clouds
CEO Eric Schmidt talks about the powerful globe-spanning networks of computers known as clouds, and discovering the next big idea

Instead, think about Google as a star-studded collection of computer scientists who have access to a fabulous machine, a distributed network of data centers that behave as one. These globe-spanning networks of computers are known as "clouds." They represent a new species of global supercomputer, one that specializes in burrowing through mountains of random, unstructured data at lightning speed. Scientists are hungry for this kind of computing. Data-deluged businesses need it.

On cloud computing:

What [cloud computing] has come to mean now is a synonym for the return of the mainframe. It used to be that mainframes had all of the data. You had these relatively dumb terminals. In the PC period, the PC took over a lot of that functionality, which is great. We now have the return of the mainframe, and the mainframe is a set of computers. You never visit them, you never see them. But they're out there. They're in a cloud somewhere. They're in the sky, and they're always around. That's roughly the metaphor.

On Google's place in cloud computing:

Google is a cloud computing server, and in fact we are spending billions of dollars-this is public information-to build data centers, which are in one sense a return to the mainframe. In another sense, they're one large supercomputer. And in another sense, they are the cloud itself.

So Google aspires to be a large portion of the cloud, or a cloud that you would interact with every day. Why would Google want to do that? Well, because we're particularly good at high-speed data and data computation.

On Google's software edge:

Google is so fast because more than one computer is working on your query. It farms out your question, if you will, to on the order of 25 computers. It says, "You guys look over here for some answers, you guys look over here for some answers." And then the answers come back very quickly. It then organizes them into a single answer. You can't tell which computer gave you the answer.
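The query fan-out Schmidt describes is a scatter/gather pattern, which can be sketched as below. The shard contents and scores are invented for illustration; the point is that each worker searches only its own slice of the index and a coordinator merges the results into one answer.

```python
# Illustrative scatter/gather sketch: a query goes to many index
# shards in parallel, and the best-scoring hit is returned.
from concurrent.futures import ThreadPoolExecutor

# Made-up index shards mapping query terms to relevance scores.
SHARDS = [
    {"cloud computing": 0.9, "mainframe": 0.4},
    {"cloud computing": 0.7, "supercomputer": 0.8},
    {"data center": 0.6, "cloud computing": 0.5},
]

def search_shard(shard, query):
    # Each worker looks only at its own slice of the index.
    return shard.get(query)

def search(query):
    # Scatter the query to all shards in parallel, then gather.
    with ThreadPoolExecutor() as pool:
        hits = pool.map(lambda s: search_shard(s, query), SHARDS)
    scores = [h for h in hits if h is not None]
    return max(scores) if scores else None  # merge into a single answer

print(search("cloud computing"))
```

Because every shard answers independently, adding machines adds both capacity and speed, and the caller never knows which machine produced the winning result, exactly the property Schmidt points to.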

On the size of cloud computing:

There's no limit. The reason Google is investing so much in very-high-speed data is because we see this explosion, essentially a digital multimedia data explosion, as infinitely larger than people are talking about today. Everything can be measured, sensed, tracked in real time.

On applications that run on a cloud:

Let's look at Google Earth. You can think of the cloud and the servers that provide Google Earth as a platform for applications. The term we use is location-based services. Here's a simple example. Everyone here has cell phones with GPS and a camera. Imagine if all of a sudden there were a mobile phone which took picture after picture after picture, and posted it to Google Earth about what's going on in the world. Now is that interesting, or will it produce enormous amounts of noise? My guess is that it'll be a lot of noise.

So then we'll have to design algorithms that will sort through to find the things that are interesting or special, which is yet another need for cloud computing. One of the problems is you have these large collections coming in, and they have relatively high noise to value. In our world, it's a search problem.

On Google becoming a giant of computing:

This is our goal. We're doing it because the applications actually need these services. A typical example is that you're a Gmail user. Most people's attachments are megabytes long, because they're attaching everything plus the kitchen sink, and they're using Gmail for transporting random bags of bits. That's the problem of scale. But from a Google perspective, it provides significant barriers to entry against our competitors, except for the very well-funded ones.

I like to think of [the data centers] as cyclotrons. There are only a few cyclotrons in physics and every one of them is important, because if you're a top flight physicist you need to be at the lab where that cyclotron is being run because that's where history's going to be made, that's where the inventions are going to come from. So my idea is that if you think of these as supercomputers that happen to be assembled from smaller computers, we have the most attractive supercomputers, from a science perspective, for people to come work on.

On the Google-IBM education project:

Universities were having trouble participating in this phenomenon [cloud computing] because they couldn't afford the billions of dollars it takes to build these enormous facilities. So [Christophe Bisciglia] figured out a way to get a smaller version of what we're doing into the curriculum, which is clearly positive from our perspective, because it gets the concepts out. But it also whets the appetite for people to say, "Hey, I want 10,000 computers," as opposed to 100.
