Energy Internet and eVehicles Overview
Governments around the world are wrestling with how to prepare society for inevitable climate change. To date most attention has focused on reducing greenhouse gas emissions, but there is growing recognition that, regardless of what we do to mitigate climate change, the planet is going to be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, and so on. As such we need to invest in solutions that provide a more robust and resilient infrastructure to withstand this environmental onslaught, especially for our electrical and telecommunications systems.
Linking renewable energy with high speed Internet using fiber to the home, combined with eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users, one that is far more robust and resilient in the face of climate change than today's centralized command and control infrastructure. For more details please see:
Free High Speed Internet to the Home or School Integrated with solar roof top: http://goo.gl/wGjVG
High level architecture of Internet Networks to survive Climate Change: http://goo.gl/juWdH
Architecture and routing protocols for Energy Internet http://goo.gl/niWy1g
Thursday, May 29, 2008
[An excellent summary of the economic benefits of zero carbon data centers. Green data centers also have the advantage of using renewable power that is uneconomical to develop for other industry sectors. Many renewable energy sites are not being developed because of the high cost of electrical transmission lines and the fees that utilities charge for wheeling of power. But connecting these sites with low cost fiber optic networks makes them ideal sites for green data centers. Some excerpts from ArsTechnica--BSA]
Renewable energy and the future of the datacenter
For the governors who attended the recent climate conference at Yale, one of the biggest political selling points for renewable energy sources was their long-term stability. In contrast to carbon-based fuels, where extraction mostly occurs overseas and for finite time periods, the sources of many forms of renewable power—solar, wind, geothermal, etc.—are widely distributed and indefinitely available. Several officials at the conference summarized these benefits of renewable power as the ability to create permanent jobs that don't get outsourced.
One point that state officials didn't raise, however, is that renewable power sources provide an additional opportunity for economic development when paired with the rise of the datacenter facility. Power consumption, and the costs that it entails, are now significant factors in locating and managing IT facilities. And though the present price of power is a major consideration, it isn't the only one; for long-term planning, the stability of a power source's cost per watt and the long-term availability of that source have to be factored in.
So in contrast to fossil fuel-based power, renewable sources represent power capacity whose cost of extraction will probably drop over time due to technological innovation, which means that, even as demand rises, the price may remain relatively stable. Renewable sources are also much less likely to suffer from disruptions in supply due to political events. These features make renewable energy sources incredibly appealing for a power- and cost-sensitive activity like running a server farm, and they're already beginning to influence decision making within the IT community. [..] The companies that build datacenters have responded to this reality not just by changing what kind of hardware they purchase, but by adopting the old real estate adage—location, location, location—in their quest for ever more cost-efficient datacenters. Google, Microsoft, and Yahoo have all purchased land in the Pacific Northwest in order to guarantee access to the region's cheap hydropower. Other companies are arranging for similar centers in upstate New York in order to get access to power from Hydro-Québec.
As the examples above indicate, hydroelectric power is currently the main player in the renewable energy game. Because hydroelectric is one of the cheapest forms of renewable power, it's easy to dismiss its current popularity as not really indicative of the prospects for renewable energy in general. But other forms of power appear to be catching up, fast. Iceland is advertising that its combination of hydro and geothermal power makes it an appealing location for data centers. Even in the US, companies are already offering renewable-power based hosting. The webhost AISO happily proclaims that it's willing to accept slightly lower profit margins in order to run its servers on solar power. Green House Data, in contrast, claims that a carefully chosen location combined with energy-conscious building techniques allows it to run off wind power for less than the cost of a typical datacenter's electric bill.
All told, the power from these renewable energy sources is competitively priced relative to non-renewable sources and, perhaps more significantly, its price and availability should remain relatively stable, simplifying long-term planning for companies. Moving forward, it's difficult to imagine any of these factors changing. The success of companies like Google and Amazon, and the continued emphasis on cloud computing means that datacenter expansion is unlikely to slow down any time soon. And the companies that build datacenters are likely to make stable and cheap energy a major focus of their construction decisions.
In the absence of federal action on carbon emissions, many states are enacting climate plans and attempting to increase the use of renewable power within their borders. If they can sell renewable power to their constituents as providing additional economic development through its ability to attract the high-tech sector, it may be easier to set these policies in motion.
Wednesday, May 28, 2008
[The provincial government in British Columbia has been a world trend setter in setting new standards and legislation to reduce GHG emissions. They are the first government in North America to introduce a carbon tax. They also have mandated all public sector institutions such as universities, schools and hospitals to be zero carbon by 2010. It is expected that other governments in Canada and around the world will soon follow BC's lead and implement similar policies, as public sector institutions should be seen as leaders in addressing the challenges of global warming.
This will have a major impact on universities, as eScience and cyber-infrastructure are very energy intensive, which can result in a significant increase in GHG emissions if the power comes from fossil fuel plants. Moreover, the power demand and concomitant GHG emissions of computers and cyber-infrastructure are expected to double in the next 4 years.
Networks, grids and virtualization linked with zero carbon data centers will play a critical role in helping universities meet their carbon neutral targets. These technologies may even help universities earn additional dollars to support research and infrastructure through carbon offset trading. Kudos to BCnet and the BC university CIOs who are already arranging strategy meetings to address this initiative. As I have always argued, the jurisdictions that are first to address this major environmental challenge will be the real winners in creating the new jobs and business opportunities of a zero carbon society. More details on my blog. Some excerpts from the BC legislation--BSA]
Targets for carbon neutral public sector
5 (1) Each public sector organization must be carbon neutral for the 2010 calendar year and for each subsequent calendar year.
(2) The Provincial government must be carbon neutral for the 2008 and 2009 calendar years in relation to its PSO greenhouse gas emissions that are directly related to public officials travelling on public business for which the travel expenses are covered by the consolidated revenue fund.
(3) In advance of the obligation under subsection (1), for the 2008 and 2009 calendar years, each public sector organization must pursue actions to minimize its PSO greenhouse gas emissions.
Requirements for achieving carbon neutral status
6 (1) In order to be carbon neutral for a calendar year, a public sector organization must
(a) pursue actions to minimize its PSO greenhouse gas emissions for the calendar year,
(b) determine its PSO greenhouse gas emissions for that calendar year in accordance with the regulations, and
(c) no later than the end of June in the following calendar year, apply emission offsets in accordance with the regulations to net those emissions to zero.
[Another brilliant concept by Branson and Dutch Postcode Lottery. If the Internet and ICT can help countries reach 90% of their Kyoto commitments I think this is a great opportunity for Next Generation Internet researchers, as well as those involved with grid and virtualization research to develop architectures, business models and technologies that reduce carbon footprint--BSA]
Contest seeks pioneering ideas for climate change breakthrough
PICNIC Green Challenge ‘08, fuelled by the Dutch Postcode Lottery, has started
Amsterdam, May 13th 2008 – The PICNIC Green Challenge ‘08, the international creative competition of the Dutch Postcode Lottery and cross-media event PICNIC, starts today. The PICNIC Green Challenge urges people to send in creative and innovative ideas to reduce greenhouse gas emissions. Like last year, the best idea will win €500,000 to execute the winning plan. The prize money, provided by the Dutch Postcode Lottery, will be awarded at PICNIC ’08 in Amsterdam on the 25th of September.
Kicking off May 9th, the PICNIC Green Challenge is aimed at creative, innovative people who can instigate change. The contest is looking for products and services that contribute to an eco-friendly lifestyle, directly reducing greenhouse gas emissions while scoring well on convenience, quality and design too.
The contest closes on the 31st of July 2008. From the entries received, the preliminary jury will select 3 to 5 finalists. These nominees will present their ideas on September the 25th at PICNIC ’08. The best entrant wins €500,000 to market the winning idea. The winner will also be introduced to potential clients and business partners.
Last year the chairman of the jury, Sir Richard Branson, presented the €500,000 to finalist Igor Kluin of Qurrent and his Qbox. The Qbox enables people to generate their own energy locally from renewable sources. Other finalists presented a solar lamp, carbon reduced goods transport, online green initiatives and climate friendly clubbing. The PICNIC Green Challenge ’07 received 439 green ideas.
Thursday, May 22, 2008
[The OECD will be webcasting their entire workshop on ICTs and the Environment. I congratulate the OECD for making this workshop available by webcast, as not only does this save on CO2 emissions from air travel, but I also believe this will be a seminal workshop on the topic. It will tie in closely with the OECD Broadband report released earlier this week and also provide input to the upcoming OECD ministerial conference in Seoul. In addition the ITU will be hosting a similar workshop in London in June, followed by the Green Telco congress in Paris in January. Lots of conferences on this very important topic - but soon it will be time for action --BSA]
We are pleased to announce that the Workshop presentations will be web-cast via a link on the Workshop web-pages at these URLs:
Or directly accessible here: http://itst.media.netamia.net/green-ict/
ITU Symposia on ICTs and Climate Change http://www.itu.int/ITU-T/worksem/climatechange/index.html
Green Telco World Summit in Paris, January 2009 http://www.upperside.fr/greentelco2009/greentelco2009intro.htm
Wednesday, May 14, 2008
[Excerpts from Guardian article. Universities and their funding bodies around the world should take note --BSA]
Why the future's green for IT
The preliminary findings of a survey into ways in which colleges and universities can make computing greener and more sustainable are about to be published.
Higher Education Environment Performance Improvement (Heepi) and SustainIT, an NGO set up to focus on the environmental and social impact of IT, are researching how sustainable further and higher education IT is, and how education best practice compares with the private sector.
The report, being written for the Joint Information Systems Committee (Jisc), says green IT is best achieved through the collaboration of IT and estates management. It finds that increased energy and computing costs can be offset by technologies such as grid computing and virtualisation. The need to reduce the carbon footprint is behind a cull of wasteful IT practices.
The author of the report, Peter James, who is also part-time professor of environmental management at Bradford University and associate director of SustainIT, says: "Eighty to 90% of a computer's capacity is wasted.
"By linking PCs together we can run complex computing tasks broken down into manageable chunks when the computers are not in normal classroom use."
Virtualisation offers much more dramatic savings. "This is one component of grid computing that's really going mainstream," says Berry. "Many servers set up to run a single application are running at less than 10% capacity. By using virtualisation you can bring several applications onto one server and use less energy for IT, power and cooling."
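The consolidation arithmetic behind the quotes above can be sketched in a few lines. All figures here (idle and peak draw, target utilization) are illustrative assumptions, not measurements from the report:

```python
import math

def consolidation_savings(n_servers, utilization, idle_watts, peak_watts,
                          target_utilization=0.8):
    """Rough estimate of power before and after consolidating lightly
    loaded servers onto fully packed virtual hosts."""
    # A mostly idle server still draws close to its peak power.
    per_server = idle_watts + utilization * (peak_watts - idle_watts)
    before = n_servers * per_server
    # Pack the same aggregate load onto hosts run near target utilization.
    hosts = math.ceil(n_servers * utilization / target_utilization)
    after = hosts * (idle_watts + target_utilization * (peak_watts - idle_watts))
    return before, after

# 20 servers at 10% utilization, each idling at 200 W with a 300 W peak.
before, after = consolidation_savings(20, 0.10, 200, 300)
print(f"before: {before:.0f} W  after: {after:.0f} W")
```

Even with these crude assumptions, consolidating near-idle machines cuts the power draw several-fold, which is why the savings flow through to cooling and facility costs as well.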
Meanwhile, Cardiff University has come up with an innovative solution to the cost of running supercomputers for research projects by centralising departments' IT budgets and transferring byte-hungry number-crunching to clusters of smaller high-performance computers. The project is called Arcca (advanced research computing at Cardiff).
"Before Arcca, departments ran their own computers for their own researchers," says Dr Hugh Beedie, Cardiff's chief technology officer, who was personally charged with reducing IT costs throughout the institution. "When they weren't online the computers were idle. Now we manage things centrally and any researcher can access our supercomputer cluster." [...]
Thursday, May 8, 2008
The Virtual Data Center - http://www.rackforce.com/blog/?p=59
Is Your Data Center Green Enough - http://www.rackforce.com/blog/?p=49
GigaCenter: Where We are Going - http://www.rackforce.com/blog/?p=48
The Best Place to Build a Data Center in North America http://www.cio.com/article/183256/The_Best_Place_to_Build_a_Data_Center_in_North_America
It's Kelowna, British Columbia, says IBM, which is working with Rackforce to open a huge data center in this small city far from earthquake and flood zones, close to cheap power sources and just a short flight from Vancouver.
But what most tourist brochures don't mention is that the Okanagan also is becoming known in IT spheres for something else: data processing and storage.
Thanks to its seismic stability, cheap and accessible power and a talented workforce, the Okanagan recently has seen a proliferation of data services vendors and has attracted interest from at least one major international corporation to build one of the biggest data centers in the world.
When it opens later this year, this $100 million data center—appropriately dubbed the Gigacentre—will total 85,000 square feet and will have the capacity to store nearly 35,000 terabytes of data. In terms of power density, the Gigacentre will support more than 700 watts per square foot, while most data centers currently support a maximum of 300 watts per square foot.
The Gigacentre is a joint venture between IBM and Rackforce, a local hosting service provider. It will be IBM's first data center in British Columbia and is powered by hydroelectric energy from the Columbia River.
Brian Fry, vice president and cofounder of Rackforce, says the center, expected to open by this summer, will cement the Okanagan's position as the new data capital of the West—a position that could be particularly intriguing for U.S. companies who are looking to keep mission-critical in [...]
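The floor-area and watts-per-square-foot figures quoted above imply a very large total facility load. A quick sanity check, assuming for illustration that the quoted density applies to the full 85,000 square feet (real facilities devote only part of that to raised floor, so these are upper bounds):

```python
# Rough arithmetic on the power-density figures quoted in the article.
area_sqft = 85_000

gigacentre_mw = area_sqft * 700 / 1e6   # claimed density, 700 W/sq ft
typical_mw = area_sqft * 300 / 1e6      # typical density, 300 W/sq ft

print(f"Gigacentre: {gigacentre_mw:.1f} MW, typical: {typical_mw:.1f} MW")
```

At that scale the facility would need tens of megawatts of capacity, which is why proximity to cheap hydroelectric power matters so much in siting decisions.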
[Increasingly, universities and research centers around the world are recognizing that the pursuit of scientific research without thinking about the consequences of power consumption, or the impact on the environment, is no longer an option. Cyber-infrastructure and eScience in particular are placing huge new demands on campus power systems. In a growing number of situations, high energy consuming HPC and instrumentation systems need to be located off campus, ideally at zero carbon data centers. Even the quintessential cyber-infrastructure project - the Large Hadron Collider at CERN - is now looking to offload computational tasks to other sites around the world because of power limitations and costs at CERN. Researchers also need to move their computational requirements to grids and clouds (whose underlying servers are also located at zero carbon data centers) in order to reduce the power consumption load on their campuses (and, in my opinion, this will also improve their eScience capabilities). Here is a list of some resources I have compiled that may help those researchers who are serious about reducing their carbon footprint -- BSA]
CyberInfrastructure 2.0 Blog
BCnet Workshop on Green Cyber-Infrastructure
May 22 Vancouver
CLS workshop on web services for remote instrumentation http://www.lightsource.ca/medsi-sri2008/workshops.php#remote
The tools being developed by researchers to allow remote access to scientific instruments, such as those under the ocean or on remote beam lines, will serve as a model for future "green" cyber-infrastructure. The huge power demands of new big science instruments and computers, combined with the increasing shortage of power at our existing research centers, mean that these facilities will increasingly have to be located at remote zero carbon, renewable energy science centers. Instruments and computation will need to be accessed remotely.
Green House and Green Computing at Notre Dame http://ianfoster.typepad.com/blog/2008/04/greenhouse-and.html
Clouds over Chicago http://ianfoster.typepad.com/blog/2008/04/clouds-over-chi.html
Integration of Grids and Clouds
4th International IEEE Computer Society Technical Committee on Scalable Computing eScience 2008 Conference http://escience2008.iu.edu
Organizing committees of the 4th International IEEE Computer Society Technical Committee on Scalable Computing eScience 2008 Conference are now accepting papers and proposals for tutorials; posters, exhibits, and demos; and workshops and special sessions.
Topics of interest cover applications and technologies related to e-Science and grid and cloud computing. They include, but are not limited to, the following:
* Application development environments
* Autonomic, real-time, and self-organizing grids
* Cloud computing and storage
* Collaborative science models and techniques
* Enabling technologies: Internet and Web services
* e-Science for applications including physics, biology, astronomy, chemistry, finance, engineering, and the humanities
* Grid economy and business models
* Problem-solving environments
* Programming paradigms and models
* Resource management and scheduling
* Security challenges for grids and e-Science
* Sensor networks and environmental observatories
* Service-oriented grid architectures
* Virtual instruments and data access management
* Virtualization for technical computing
* Web 2.0 technology and services for e-Science
NSF Cluster Exploratory Project http://www.nsf.gov/news/news_summ.jsp?cntn_id=111186
In an open letter to the academic computing research community, Jeannette Wing, the assistant director at NSF for CISE, said that the relationship will give the academic computer science research community access to resources that would be unavailable to it otherwise.
"Access to the Google-IBM academic cluster via the CluE program will provide the academic community with the opportunity to do research in data-intensive computing and to explore powerful new applications," Wing said. "It can also serve as a tool for educating the next generation of scientists and engineers."
"Google is proud to partner with the National Science Foundation to provide computing resources to the academic research community," said Stuart Feldman, vice president of engineering at Google Inc. "It is our hope that research conducted using this cluster will allow researchers across many fields to take advantage of the opportunities afforded by large-scale, distributed computing."
"Extending the Google/IBM academic program with the National Science Foundation should accelerate research on Internet-scale computing and drive innovation to fuel the applications of the future," said Willy Chiu, vice president of IBM Software Strategy and High Performance On Demand Solutions. "IBM is pleased to be collaborating with the NSF on this project."
In October of last year, Google and IBM created a large-scale computer cluster of approximately 1600 processors to give the academic community access to otherwise prohibitively expensive resources. Fundamental changes in computer architecture and increases in network capacity are encouraging software developers to take new approaches to computer-science problem solving. In order to bridge the gap between industry and academia, it is imperative that academic researchers are exposed to the emerging computing paradigm behind the growth of "Internet-scale" applications.
This new relationship with NSF will expand access to this research infrastructure to academic institutions across the nation. In an effort to create greater awareness of research opportunities using data-intensive computing, the CISE directorate will solicit proposals from academic researchers. NSF will then select the researchers to have access to the cluster and provide support to the researchers to conduct their work. Google and IBM will cover the costs associated with operating the cluster and will provide other support to the researchers. NSF will not provide any funding to Google or IBM for these activities.
While the timeline for releasing the formal request for proposals to the academic community is still being developed, NSF anticipates being able to support 10 to 15 research projects in the first year of the program, and will likely expand the number of projects in the future.
Information about the Google-IBM Academic Cluster Computing Initiative can be found at http://www.google.com/intl/en/press/pressrel/20071008_ibm_univ.html
[One of the world's first zero carbon data centers has been built in Cheyenne, taking advantage of the natural cooling afforded by its location in the northern US. Several more of these zero carbon data centers are being deployed around the world, such as Bastionhost.com in Nova Scotia. To my mind zero carbon data centers are more important than targeting energy efficiency as a way of reducing the impact of the Internet on global warming. There are many thousands of untapped renewable energy sites around the world which are uneconomical for traditional power companies to develop because of the cost of transmission lines, etc. But rather than bringing power to the data centers in major urban areas, it would be much easier to move the data centers to the renewable power sites with relatively low cost optical networks. As well, the power will be essentially free, because no other industry sector can compete for this power because of its remoteness -- BSA]
100% Renewable Energy
Wind Energy
The entire facility is powered by wind generated renewable power. The company will own several wind turbines to the north of its facility and purchase any additional energy needed from the local power company's wind farm or through grid-tied green-e tags.
The facility will represent the largest wind powered public data center in the nation, with over 10,000 square feet of raised floor computing facilities. Don't be alarmed by the 100% renewable energy: reliability is a must. The company ensures this by being tied to the main power grid, with contracts to purchase supplemental renewable (wind) energy from the local power company.
Energy Efficiency Built In
Green House Data is working with MKK and APC to build out the green data center. As part of its highly refined green efforts, Green House Data will operate its facility at approximately 60% greater energy efficiency than the average data center.
The Data Center will leverage the following attributes to gain the efficiencies:
* Water-Side Economizers - free cooling from Cheyenne's average annual temperature of 45.6°F.
* Server Side Cooling - cooling directly at the source of the heat for managed services.
* Modular Scalable Data Center - matching maximum efficiencies without overbuilding and waste.
* Efficient Floor Layout and Design - aligning hot aisles/cold aisles for heat capture and efficient cooling.
* Ground Source Heat Pumps - provide up to 25% more energy efficient cooling than traditional HVAC equipment.
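One way to see what a "60% greater energy efficiency" claim means in practice is to express it as facility overhead using Power Usage Effectiveness (PUE = total facility power / IT power). The PUE values and IT load below are assumptions chosen to illustrate the gap, not figures published by Green House Data:

```python
HOURS_PER_YEAR = 8760
it_load_kw = 1000.0   # hypothetical 1 MW of IT equipment

def annual_overhead_mwh(pue, it_kw):
    """Non-IT (cooling, power-distribution) energy per year, in MWh."""
    return (pue - 1.0) * it_kw * HOURS_PER_YEAR / 1000.0

typical_pue = 2.0                            # assumed baseline facility
green_pue = 1.0 + (typical_pue - 1.0) * 0.4  # 60% less overhead energy

print(annual_overhead_mwh(typical_pue, it_load_kw))  # 8760.0 MWh/yr
print(annual_overhead_mwh(green_pue, it_load_kw))
```

On these assumptions, a 1 MW IT load avoids several thousand megawatt-hours of overhead energy per year, which is where free cooling and efficient layout pay off.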
[This is a good, comprehensive report, but it surprisingly misses the entire subject of the Khazzoom-Brookes postulate (aka the Jevons paradox), which challenges the whole argument for energy efficiency. Khazzoom and Brookes effectively demonstrated that improved efficiency can actually result in increased energy consumption, as it decreases the overall cost of a product or service and therefore increases demand.
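The rebound mechanism can be sketched as a toy model: efficiency lowers the effective price of a service, and demand responds with some price elasticity. The elasticity values below are illustrative assumptions, not empirical estimates:

```python
def energy_after_efficiency_gain(baseline, gain, elasticity):
    """Net energy use after an efficiency gain, with constant-elasticity
    demand. gain is the fraction of energy saved per unit of service;
    elasticity is the % rise in demand per % drop in effective cost."""
    cost_ratio = 1.0 - gain                    # the service gets cheaper
    demand_ratio = cost_ratio ** (-elasticity) # so demand for it rises
    return baseline * cost_ratio * demand_ratio

# With inelastic demand, a 30% efficiency gain still saves energy overall;
# with elasticity above 1 ("backfire"), total consumption actually rises.
low = energy_after_efficiency_gain(100.0, 0.30, 0.3)
high = energy_after_efficiency_gain(100.0, 0.30, 1.5)
print(f"inelastic: {low:.1f}  backfire: {high:.1f}")
```

The Khazzoom-Brookes argument is essentially that, economy-wide, the effective elasticity is high enough that efficiency gains alone do not reduce total consumption.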
For more details please see my blog on energy efficiency and data centers, as well as a paper prepared in part for the OECD on this subject. The OECD is also holding a workshop in Denmark, May 22-23, where this subject will be discussed. --BSA]
Paper on Khazzoom-Brookes postulate and datacenters http://docs.google.com/Doc?id=dgbgjrct_2767dxpbdvcf
Some excerpts from the McKinsey report
For many industries, data centers are one of the largest sources of greenhouse gas (GHG) emissions. As a group, their overall emissions are significant, on a scale comparable with industries such as the airlines. Even with immediate efficiency improvements (and the adoption of new technologies), enterprises and their equipment providers will face increased scrutiny, given the projected quadrupling of data-center GHG emissions by 2020.
Significant failings in asset management (6% average server utilization, 56% facility utilization). Up to 30% of servers are dead [i.e. not being used at all, but consuming power nevertheless].
Data center facilities spend (CapEx and OpEx) is a large, quickly growing and very inefficient portion of the total IT budget in many technology intensive industries, such as financial services and telecommunications. Some intensive data center users will face meaningfully reduced profitability if current trends continue.
True costs are often 4-5x the cost of the server alone over the 5-10 year lifetime of a server.
Incremental US demand for data center energy between now and 2010 is the equivalent of 10 new power plants.
EPA has advocated use of separate energy meters for large data centers
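The "4-5x the cost of the server alone" figure above can be reproduced with rough lifetime arithmetic. Every dollar figure, the electricity rate, and the per-watt build-out cost below are assumptions for the sketch, not numbers from the McKinsey report:

```python
HOURS_PER_YEAR = 8760

server_capex = 3000.0     # purchase price, USD (assumed)
server_watts = 300.0      # average draw (assumed)
pue = 2.0                 # facility overhead multiplier (assumed)
rate = 0.08               # electricity, USD per kWh (assumed)
years = 5
facility_capex = 4500.0   # amortized share of build-out, ~$15 per IT watt
admin_per_year = 600.0    # share of staff, space, network (assumed)

energy_cost = server_watts * pue / 1000 * HOURS_PER_YEAR * years * rate
total = server_capex + energy_cost + facility_capex + admin_per_year * years
print(f"lifetime cost ${total:,.0f} = {total / server_capex:.1f}x server capex")
```

Note that under these assumptions the facility build-out and power, not the server itself, dominate the lifetime cost, which is the report's central point about facility spend.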