Energy Internet and eVehicles Overview

Governments around the world are wrestling with how to prepare society for inevitable climate change. To date most attention has focused on reducing greenhouse gas emissions, but there is growing recognition that, regardless of what we do to mitigate climate change, the planet will be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, etc. We therefore need to invest in solutions that make our infrastructure, especially our electrical and telecommunications systems, more robust and resilient to this environmental onslaught, while at the same time reducing our carbon footprint.

Linking renewable energy with high-speed Internet using fiber to the home, combined with autonomous eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users, one that is far more robust and resilient in the face of climate change than today's centralized command-and-control infrastructure. These new energy architectures will also significantly reduce our carbon footprint. For more details please see:

Using autonomous eVehicles for Renewable Energy Transportation and Distribution: and

Free High Speed Internet to the Home or School Integrated with solar roof top:

High level architecture of Internet Networks to survive Climate Change:

Architecture and routing protocols for Energy Internet:

How to use Green Bond Funds to underwrite costs of new network and energy infrastructure:

Thursday, May 8, 2008

Clouds, Grids and Resources for Green Cyber-Infrastructure

[Increasingly, universities and research centers around the world are recognizing that pursuing scientific research without thinking about power consumption or environmental impact is no longer an option. Cyber-infrastructure, and eScience in particular, is placing huge new demands on campus power systems. In a growing number of situations, high-energy-consuming HPC and instrumentation systems need to be located off campus, ideally at zero-carbon data centers. Even the quintessential cyber-infrastructure project - the Large Hadron Collider at CERN - is now looking to offload computational tasks to other sites around the world because of power limitations and costs at CERN. Researchers also need to move their computational requirements to grids and clouds (whose underlying servers are also located at zero-carbon data centers) in order to reduce the power consumption load on their campuses (and in my opinion, it will also improve their eScience capabilities). Here is a list of some resources I have compiled that may help those researchers who are serious about reducing their carbon footprint -- BSA]

CyberInfrastructure 2.0 Blog

BCnet Workshop on Green Cyber-Infrastructure
May 22, Vancouver

CLS workshop on web services for remote instrumentation

The tools being developed by researchers to allow remote access to scientific instruments, such as undersea sensors or remote beam lines, will serve as a model for future "green" cyber-infrastructure. The huge power demands of new big-science instruments and computers, combined with the growing shortage of power at our existing research centers, mean that these facilities will increasingly have to be located at remote zero-carbon, renewable-energy science centers, with instruments and computation accessed remotely.
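As a rough illustration of the web-services pattern described above, an instrument can be exposed to remote researchers through a simple RPC interface. The sketch below uses Python's standard XML-RPC modules; the service methods and readings are hypothetical placeholders, not the actual CLS API (a real beam line would wrap its control system behind methods like these).

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Hypothetical instrument facade -- stands in for a real
# beam-line control system exposed as a web service.
class BeamLine:
    def get_status(self):
        # Returns a struct of instrument readings (illustrative values).
        return {"shutter": "open", "energy_keV": 12.4}

    def set_exposure(self, seconds):
        # Acknowledge a remote configuration request.
        return "exposure set to %s s" % seconds

def serve(port=8765):
    # Run the instrument service in a background thread.
    server = SimpleXMLRPCServer(("localhost", port), logRequests=False)
    server.register_instance(BeamLine())
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = serve()
    # The researcher's client, possibly on another continent:
    beam = ServerProxy("http://localhost:8765")
    print(beam.get_status()["shutter"])
    print(beam.set_exposure(2.5))
    server.shutdown()
```

The point is that once the instrument is behind a standard web-service interface, it no longer matters whether it sits on campus or at a remote zero-carbon facility.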

Green House and Green Computing at Notre Dame

Clouds over Chicago
Integration of Grids and Clouds

4th International IEEE Computer Society Technical Committee on Scalable Computing eScience 2008 Conference

The conference's organizing committees are now accepting papers, as well as proposals for tutorials; posters, exhibits, and demos; and workshops and special sessions.

Topics of interest cover applications and technologies related to e-Science and grid and cloud computing. They include, but are not limited to, the following:

* Application development environments
* Autonomic, real-time, and self-organizing grids
* Cloud computing and storage
* Collaborative science models and techniques
* Enabling technologies: Internet and Web services
* e-Science for applications including physics, biology, astronomy, chemistry, finance, engineering, and the humanities
* Grid economy and business models
* Problem-solving environments
* Programming paradigms and models
* Resource management and scheduling
* Security challenges for grids and e-Science
* Sensor networks and environmental observatories
* Service-oriented grid architectures
* Virtual instruments and data access management
* Virtualization for technical computing
* Web 2.0 technology and services for e-Science

NSF Cluster Exploratory Project

In an open letter to the academic computing research community, Jeannette Wing, the assistant director at NSF for CISE, said that the relationship will give the academic computer science research community access to resources that would be unavailable to it otherwise.

"Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications," Wing said. "It can also serve as a tool for educating the next generation of scientists and engineers."

"Google is proud to partner with the National Science Foundation to provide computing resources to the academic research community," said Stuart Feldman, vice president of engineering at Google Inc. "It is our hope that research conducted using this cluster will allow researchers across many fields to take advantage of the opportunities afforded by large-scale, distributed computing."

"Extending the Google/IBM academic program with the National Science Foundation should accelerate research on Internet-scale computing and drive innovation to fuel the applications of the future," said Willy Chiu, vice president of IBM Software Strategy and High Performance On Demand Solutions. "IBM is pleased to be collaborating with the NSF on this project."

In October of last year, Google and IBM created a large-scale computer cluster of approximately 1600 processors to give the academic community access to otherwise prohibitively expensive resources. Fundamental changes in computer architecture and increases in network capacity are encouraging software developers to take new approaches to computer-science problem solving. In order to bridge the gap between industry and academia, it is imperative that academic researchers are exposed to the emerging computing paradigm behind the growth of "Internet-scale" applications.
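The emerging paradigm referred to here is widely associated with MapReduce-style data-parallel processing, the programming model behind many "Internet-scale" applications of the time. As a hedged sketch (the word-count task is the standard textbook illustration, not part of the NSF announcement), the three phases can be modeled on a single machine in a few lines of Python:

```python
from collections import defaultdict

# Map phase: each "mapper" turns a chunk of input into (key, value) pairs.
def map_words(chunk):
    return [(word.lower(), 1) for word in chunk.split()]

# Shuffle phase: group intermediate pairs by key, as the framework
# would do over the network between mappers and reducers.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: each "reducer" folds one key's values into a result.
def reduce_counts(groups):
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    chunks = ["internet scale computing", "internet scale applications"]
    pairs = [pair for chunk in chunks for pair in map_words(chunk)]
    print(reduce_counts(shuffle(pairs)))
```

On a real cluster the map and reduce functions are distributed across hundreds of processors, which is exactly the kind of experience the CluE cluster was meant to give academic researchers.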

This new relationship with NSF will expand access to this research infrastructure to academic institutions across the nation. In an effort to create greater awareness of research opportunities using data-intensive computing, the CISE directorate will solicit proposals from academic researchers. NSF will then select the researchers to have access to the cluster and provide support to the researchers to conduct their work. Google and IBM will cover the costs associated with operating the cluster and will provide other support to the researchers. NSF will not provide any funding to Google or IBM for these activities.

While the timeline for releasing the formal request for proposals to the academic community is still being developed, NSF anticipates being able to support 10 to 15 research projects in the first year of the program, and will likely expand the number of projects in the future.

Information about the Google-IBM Academic Cluster Computing Initiative can be found at
