Energy Internet and eVehicles Overview
Governments around the world are wrestling with the challenge of how to reduce carbon dioxide emissions. The current preferred approaches are to impose carbon taxes and implement various forms of cap and trade. However, another approach to help reduce carbon emissions is to directly "reward" those who reduce their carbon footprint, in a way that complements their existing lifestyle. One possible reward system is to provide homeowners with free fiber to the home, or free wireless products and other electronic services, if they deploy micro renewable energy sources for their ICT equipment and use eVehicles for energy transportation. Not only does the consumer benefit, but this business model also provides new revenue opportunities for small businesses, network operators, and eCommerce application providers.
Linking renewable energy with the Internet using eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users. For more details please see:
Free High Speed Internet to the Home: http://goo.gl/wGjVG
High level architecture of Building Zero Carbon Networks: http://goo.gl/juWdH
Monday, November 30, 2009
Green IT Conferences for research community
Thanks to Jordi Torres, Barcelona Supercomputing Center for sending a link to http://www.greenit-conferences.org/.
New web page for Green Computing research community
A group of outstanding researchers has set up a simple new web page to make it easier for the research community to find up-to-date information about emerging conferences in Green Computing, the next wave in computing.
The www.greenIT-conferences.org site includes a list of research conferences focused on green computing and energy-aware computer and network technologies. The site has been designed to make it easier for researchers to find information about new conferences (and conference tracks) in the area. Hopefully the page will serve to improve research in this important area.
Monday, November 23, 2009
http://green-broadband.blogspot.com/ or http://billstarnaud.blogspot.com
[Doug Alder of Rackforce has put together an excellent in-depth analysis of the impact that cap and trade (with carbon at $20/tonne) will have on web and computer servers that are located in jurisdictions dependent on coal-based power. While the pending cap and trade bills in the US Congress will mitigate most of the costs for consumers, industry and institutions will not be similarly protected. The EPA estimates that cap and trade will raise the cost of electricity for these organizations by an "average" of 60%, with significantly higher prices in states dependent on coal-powered electricity. To put this in context, cap and trade will cost an organization at least an additional $65-$150 per year per server (200 W) if those servers are located in a coal-powered state or province versus a state or province that is powered by renewable energy such as hydro-electricity. Considering that most businesses and universities have thousands of servers, the aggregate bill could be gigantic. Some excerpts from his excellent blog -- BSA]
Power Sources and Their Coming Importance To Your Business
Do you have your own website? If you do, it's hosted on a server. Do you know where that server is located? Do you know what kind of carbon footprint that server has where it is hosted? Do you care? If you do, pay attention and you'll learn something.
Let's look first at energy, and at why the source of that energy matters when considering the carbon footprint of a data center.
Let's look at an example of two data centers, one in West Virginia and the other in British Columbia. Based on the data from Stats Canada, Environment Canada, and the US Department of Energy that I researched, I was able to build a spreadsheet showing the likely carbon cost of operating a server in each province and state (click on image for a readable version). (Terms: gCO2eq/kWh = grams of CO2 equivalent per kilowatt hour; mTCO2eq/MWh = metric tonnes of CO2 equivalent per megawatt hour; PUE = Power Usage Effectiveness, a way of measuring how efficiently a data center uses the incoming power, that is, the ratio of total power used by the data center to the power required to operate the ICT [Information and Communications Technology] equipment (servers, switches, routers); 1:1 would be perfect but is basically impossible.)
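The kind of spreadsheet calculation described above can be sketched as follows. The grid carbon intensities here are illustrative assumptions (a coal-heavy grid versus a hydro grid), not the spreadsheet's exact inputs, and the function name is mine:

```python
# Sketch of the per-server carbon cost calculation behind the spreadsheet
# described above. Grid intensities are illustrative assumptions, not the
# exact spreadsheet inputs; the $20/tonne price is the cap-and-trade scenario.

HOURS_PER_YEAR = 8760
CARBON_PRICE_PER_TONNE = 20.0   # $/tonne CO2eq

def annual_carbon_cost(server_watts, grid_g_co2_per_kwh, pue=1.0):
    """Annual carbon cost ($) of one server, including data-center overhead (PUE)."""
    kwh = server_watts / 1000 * HOURS_PER_YEAR * pue
    tonnes = kwh * grid_g_co2_per_kwh / 1_000_000
    return tonnes * CARBON_PRICE_PER_TONNE

coal = annual_carbon_cost(200, 938)   # ~938 gCO2eq/kWh: coal-heavy grid (assumed)
hydro = annual_carbon_cost(200, 15)   # ~15 gCO2eq/kWh: hydro grid (assumed)
print(f"coal: ${coal:.2f}/yr, hydro: ${hydro:.2f}/yr")
```

Whatever the exact intensities, the point of the spreadsheet stands: the carbon cost of a server scales linearly with the grid's gCO2eq/kWh, so a coal-grid server costs roughly 60 times what a hydro-grid server does.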
Now let’s see what that could mean to your business.
Say each data center has 120,000 sq. ft. of raised floor (not at all unusual). Now allow a standard 32 sq. ft. per cabinet. That gives you a maximum of 3,125 racks (120,000/32), and each rack can hold a maximum of 42U worth of gear (a standard rack), but some of that will be taken by the data center's power distribution units and likely some of its networking gear too, so in general you will get around 36U of usable space. Assume you put 36 1U 200 W servers in those slots. That gives you 112,500 servers in those 3,125 racks. In BC, each of those servers would cost you an additional $1.06 per year. In West Virginia it would be an extra $65.72 per server (the actual results would be higher, though, as a 120K sq. ft. data center would use at least 20% of that space on aisles and the various components needed to run a data center), which translates to an extra $2,365.92 per rack per year instead of $38.16. How will you justify that extra $2,327.76 per rack per year to your shareholders?
The calculations above, though, were theoretical. They were based on a data center with perfect utilization of energy; that is, for every watt of power required to run the ICT equipment in that data center, only one watt of incoming power was used. Sadly, that is not the case: the average data center today has a Power Usage Effectiveness (PUE) rating of 2.5 (and many are much worse; that is an average). That means they need to purchase 2.5 watts of power for every watt they deliver to their customers. Now go back to the last paragraph and multiply those final numbers by 2.5. Your extra cost is now $5,819.40 per rack.
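The per-rack arithmetic above can be sketched as follows. Only the per-server dollar figures, the 36 servers per rack, and the PUE of 2.5 come from the text; the helper function is mine:

```python
# Sketch of the per-rack cap-and-trade cost estimate described above.
# Inputs from the text: 36 usable 1U slots per rack, per-server annual
# carbon costs of $1.06 (BC, hydro) vs $65.72 (West Virginia, coal),
# and an average PUE of 2.5.

SERVERS_PER_RACK = 36
PUE = 2.5

def extra_cost_per_rack(cost_per_server_yr, pue=1.0):
    """Annual carbon-related extra cost per rack, scaled by PUE."""
    return SERVERS_PER_RACK * cost_per_server_yr * pue

bc = extra_cost_per_rack(1.06, PUE)    # British Columbia (hydro)
wv = extra_cost_per_rack(65.72, PUE)   # West Virginia (coal)
print(f"BC: ${bc:,.2f}/rack/yr, WV: ${wv:,.2f}/rack/yr, gap: ${wv - bc:,.2f}")
```

With PUE = 1 this reproduces the $38.16 versus $2,365.92 per-rack figures; scaling both by 2.5 puts the computed gap at about $5,819 per rack per year.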
If your company is a public company, then as carbon taxes and/or carbon cap and trade become legislated, you will have a fiduciary responsibility to your shareholders to seek out the option that has you paying the least amount of taxes in order to maximize your returns. If you are a private company, you still need to consider the source of what powers your servers, lest your competition beat you to it and gain a substantial edge in costs over you.
Small Windpower Can Make a Difference in Remote Telecom Facilities
In the spirit of James Burke, it is always fun to follow the leads and find the connections. In this case, we start with a USA Today article, "Wind backs up Honolulu airport power." Hawaii and clean tech are personal interests of mine. The crux of the story is how the Hawaii Department of Transportation (DOT) has supplemented the power consumed with 16 small 1 kW wind turbines. Nothing remarkable about a 16 kW system; 16 kW would be fine to offset daily power use for a utility building (in this case, the backup power for the Honolulu airport). How these small turbines were mounted is what drew my attention.
The system is a state Department of Transportation pilot project and data is being gathered to determine the system's cost savings and energy output. It was installed at the end of June and cost about $100,000. Photos by RICHARD AMBO | The Honolulu Advertiser
We've seen many different wind systems which take advantage of a building's real estate, but the leading rooftop edge has particularly interesting aerodynamic benefits. Buildings have interesting aerodynamic effects; it is a whole specialty realm of engineering, one currently focused on physical stress loads on the building's structure.
AeroVironment, the maker of the small, modular wind turbines installed at Honolulu's airport, is on to something which could have a significant impact on the way we look at structures. AeroVironment is a revolutionary aviation company. They understand aerodynamics from a flight perspective. Yet, with their Architectural Wind Services, they are applying that knowledge to leverage "the natural acceleration in wind speed resulting from the building's aerodynamic properties. This accelerated wind speed can increase the turbines' electrical power generation by more than 50% compared to the power generation that would result from systems situated outside of the acceleration zone." Imagine what would happen if the expertise from AeroVironment was combined with a company like Force Technologies. What could be gained by mindfully designing a building to capitalize on the natural wind dynamics, and using the changes the building imposes on those dynamics to recoup energy?
As a minimum today, we can see telecom buildings in remote rural areas use AeroVironment's small wind technology to cost-effectively offset power utilization. The list price for 12 units ranges between $134,000 and $180,000. In most areas of the US with commercial electrical rates, that would be roughly a 5-year payback on the investment. Given that most telecommunications facilities have lifecycles which last decades, this is an interesting investment in energy offsets. Move this to a developing-country installation, where you have higher electricity rates, fuel costs (for generators), and unpredictable power, and the attractiveness increases. Then add the utilization of space: AeroVironment's installation on the building does not interfere with other roof-mounted solar installations or pole/antenna-mounted wind systems. So this specific design can be used as a local power-producing suite, offsetting the electrical cost of the telecommunications facility while opening the door to feed-in tariffs for any excess (if there are feed-in tariffs).
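The payback reasoning above can be sketched as a simple-payback calculation. The only figures taken from the text are the $134,000-$180,000 price range and the ~5-year claim; the functions are mine:

```python
# Simple-payback sketch for the rooftop wind installation described above.
# Inputs from the text: $134,000-$180,000 list price for 12 units and a
# claimed ~5-year payback. Everything else is generic arithmetic.

def simple_payback_years(capital_cost, annual_savings):
    """Years to recoup the capital cost from annual electricity savings."""
    return capital_cost / annual_savings

def implied_annual_savings(capital_cost, payback_years):
    """Annual savings required to hit a target payback period."""
    return capital_cost / payback_years

low, high = 134_000, 180_000
print(implied_annual_savings(low, 5), implied_annual_savings(high, 5))
```

Inverting the claim this way shows what it assumes: a 5-year payback implies the 12 turbines must offset roughly $26,800 to $36,000 of electricity per year.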
Sunday, November 22, 2009
World on course for catastrophic 6°C rise, reveal scientists
The world is now firmly on course for the worst-case scenario in terms of climate change, with average global temperatures rising by up to 6°C by the end of the century, leading scientists said yesterday. ...
Friday, November 20, 2009
E.U. to Mandate 'Nearly Zero' Power Use by Buildings
Most significantly, the European Union directive will require that nearly all buildings, including large houses, constructed after 2020 include stark efficiency improvements or generate most of their energy from renewable sources, coming close to "nearly zero" energy use.
European countries will also be required to establish a certification system to measure buildings' energy efficiency. These certificates will be required for any new construction or buildings that are sold or rented to new tenants. Existing buildings will also have to, during any major renovation, improve their efficiency if at all feasible.
Buildings are responsible for about 36 percent of Europe's greenhouse gas emissions, and stricter efficiency requirements have been sought for the past several years as absolutely necessary for the bloc to meet its goal of cutting emissions 20 percent from 1990 levels by 2020. Other regions should take note, said Andris Piebalgs, the E.U. energy commissioner, in a statement.
"By this agreement, the E.U. is sending a strong message to the forthcoming climate negotiations in Copenhagen," Piebalgs said. "Improving the energy performance of buildings is a cost effective way of fighting against climate change and improving energy security, while also boosting the building sector and the E.U. economy as a whole."
"Gartner Says More Than 30 Percent of ICT Energy Use Is Generated by PCs and Associated Peripherals," Gartner news release, April 20, 2009.
Electricity consumption by consumer electronics exceeds that of traditional appliances in many homes
Thursday, November 19, 2009
The National Center for Atmospheric Research (NCAR) and its managing organization, the University Corporation for Atmospheric Research (UCAR), is building a new supercomputing center in Wyoming. The current NCAR data center in Mesa has outgrown the facility's capacity, and a new facility that can accommodate future expansion is needed. The Wyoming facility will contain some of the world's most powerful supercomputers dedicated to improving scientific understanding of climate change, severe weather, air quality, and other vital atmospheric science and geoscience topics. The center will also house a premier data storage and archival facility that holds irreplaceable historical climate records and other information.
NCAR is probably the world's premier research facility for undertaking climate modeling and research. So it is very bizarre that such an organization would undertake to build a new data center in a state where almost 100% of the electricity comes from coal-fired generating plants. What is even more outrageous is that one of the principal partners in the project, Cheyenne Light Fuel and Power, is leading a campaign to stop cap and trade - http://www.cheyennelight.com/cap-and-trade/.
NCAR's strategy to build a data center in Wyoming also highlights the absurdity of claims to build an energy-efficient data center with a low PUE in a LEED-qualified building. These claims are meaningless when all of the electricity is coal-generated. If NCAR was genuinely concerned about the environment, a much smarter move would have been to locate the data center a few hundred kilometers west in Idaho, where almost all of the electricity is generated from hydro. Relocating to Idaho would do more for the environment than even the most stringent energy efficiency measures and LEED-qualified buildings. It would also send an important message that new jobs and business opportunities are only going to occur in those jurisdictions that provide clean, renewable energy.
I suspect NCAR is being seduced to locate its new data center in Wyoming because of the low price of electricity that comes from coal fired plants. But that strategy may backfire on them as Cheyenne Light Fuel and Power claims that their electricity prices will increase 73% with cap and trade.
They also plan to earn carbon offsets by going carbon neutral. Some excerpts -- BSA]
Australian ISP goes carbon-neutral
While most carriers are reluctant even to set targets for reducing their carbon footprint, Australian ISP Internode has already been carbon-neutral for a year.

The company, which has over 170,000 subscribers Australia-wide, sources 100% of its electricity needs from renewable energy, and has molded its equipment upgrade purchasing decisions towards energy efficiency and sustainability.

The company has also started to invest in its own renewable energy infrastructure, choosing to run a number of remote sites via solar cells. With operators forced to pay a premium for piping power to remote areas - and to provide expensive, long-lasting battery backups - it is becoming cost-competitive to run these sites on solar, Lindsay said.

Becoming carbon-neutral is "not as expensive an undertaking as most people looking at it would imagine," Lindsay said. In South Australia, green power costs around 20% more than traditional forms of power, and that is the dominant cost.

The positive publicity benefits of the decision likely outweigh any extra financial burden, he added.

"Any telecom company can do what we've done," Lindsay said. "It's not as big a challenge as it looks. It comes down to the fundamental question: do the shareholders of the business care more about the dividend this year, or about the long-term impact of people on the planet?"
Tuesday, November 17, 2009
[Many organizations are starting to realize that cyber-infrastructure may soon have a significant impact on the environment because of its huge electrical consumption and the resultant CO2 emissions if the electricity that powers these systems comes from coal-fired electrical plants. As I mentioned in a previous blog, the UK Meteorological Office's new supercomputer is one of the single biggest sources of CO2 emissions (Scope 2) in the UK. Paradoxically, this is the same computer that is being used for climate modeling in that country. Thanks to a pointer from Steve Goldstein we learn that even America's spy agency, the NSA, is also running into energy issues and as such is building huge new data centers in Utah and Texas, both of which will probably use dirty coal-based electricity as well. There are also rumors that NCAR is building a new cyber-infrastructure center in Wyoming (presumably one that will also use coal-based electricity), which rather undermines its own credibility as America's leading climate research institute.
I suspect very shortly, with all the new announcements of grids and supercomputers from OSG to Jaguar, that cyber-infrastructure collectively in the US will be one of the top sources of CO2 emissions, as it is now in the UK. This is an unsustainable path and will come back to haunt those cyber-infrastructure organizations, particularly if Congress passes a cap and trade bill. Cap and trade will increase the price of electricity for institutions and businesses by an "average" of 60% according to the EPA, but electrical prices will be substantially higher in states that are totally dependent on coal-fired electrical generation. Not only that, under the proposed cap and trade bills any organization that emits over 25,000 tons of CO2 per year (which includes most universities and research institutions) will be required to purchase emission allowances or offsets if they want to exceed their current level of emissions. It is not only traditional power generators, cement plants or manufacturers that will be affected by cap and trade; most US higher-ed and cyber-infrastructure research facilities will be similarly affected. However, there is some good news: cyber-infrastructure, if done right, can be a powerful tool for reducing CO2 emissions. Larry Smarr and I recently gave a talk on this topic at Educause, which is now available per the links below -
Cyber-Infrastructure in a Carbon Constrained World
See also article in Educause Review
Slides are available on Slideshare
Weather supercomputer used to predict climate change is one of Britain's worst polluters

The Met Office has caused a storm of controversy after it was revealed their £30million supercomputer designed to predict climate change is one of Britain's worst polluters. The massive machine - the UK's most powerful computer with a whopping 15 million megabytes of memory - was installed in the Met Office's headquarters in Exeter, Devon. It is capable of 1,000 billion calculations every second to feed data to 400 scientists and uses 1.2 megawatts of energy to run - enough to power more than 1,000 homes.
New NSA data centers in Utah and Texas
"..."As strange as it may sound," he writes, "one of the most urgent problems facing NSA is a severe shortage of electrical power." With supercomputers measured by the acre and estimated $70 million annual electricity bills for its headquarters, the agency has begun browning out, which is the reason for locating its new data centers in Utah and Texas. And as it pleads for more money to construct newer and bigger power generators, Aid notes, Congress is balking.

"The issue is critical because at the NSA, electrical power is political power. In its top-secret world, the coin of the realm is the kilowatt. More electrical power ensures bigger data centers. Bigger data centers, in turn, generate a need for more access to phone calls and e-mail and, conversely, less privacy. The more data that comes in, the more reports flow out. And the more reports that flow out, the more political power for the agency.
"Uranium mines provide us with 40,000 tons of uranium each year. Sounds like that ought to be enough for anyone, but it comes up about 25,000 tons short of what we consume yearly in our nuclear power plants. The difference is made up by stockpiles, reprocessed fuel and re-enriched uranium — which should be completely used up by 2013. And the problem with just opening more uranium mines is that nobody really knows where to go for the next big uranium lode. Dr. Michael Dittmar has been warning us for some time about the coming shortage (PDF) and has recently uploaded a four-part comprehensive report on the future of nuclear energy and how socioeconomic change is exacerbating the effect this coming shortage will have on our power consumption. Although not quite on par with zombie apocalypse, Dr. Dittmar's final conclusions paint a dire picture, stating that options like large-scale commercial fission breeder reactors are not an option by 2013 and 'no matter how far into the future we may look, nuclear fusion as an energy source is even less probable than large-scale breeder reactors, for the accumulated knowledge on this subject is already sufficient to say that commercial fusion power will never become a reality.'"
Dr Dittmar's study:
Monday, November 2, 2009
[Dan Reed, one of the principal investigators and chief architect for the NSF TeraGrid, recently gave a great presentation on the Future of Cyber-Infrastructure at a SURA meeting. You can see a copy of his presentation at http://www.sura.org/news/2009/it_matsf.html
His basic thesis is that the bulk of academic computing will probably move to commercial clouds. Although there will still remain some very high-end, closely coupled applications that need dedicated supercomputers, the majority of academic computing can be done with clouds. Despite the presence of grids and HPC on our campuses, most academic applications still run on small clusters in closets or on stand-alone servers. Moreover, the challenge with academic grids is building robust, high-quality middleware for distributed systems and solving the myriad political problems of sharing computation resources across different management domains. As well, the ever-increasing costs of energy, space and cooling will soon force researchers to start looking for computing alternatives. Clouds are a solution to many of these problems and in many ways represent the commercialization of the original vision for grids.
Dan also ruminates about the possibility of building a "follow the sun/follow the wind" cloud architecture on his blog, which of course is music to my ears:
**Geo-dispersion: The Other Alternative**
If it were possible to replicate data and computation across multiple, geographically distributed data centers, one could reduce or eliminate UPS costs, and the failure of a single data center would not disrupt the cloud service or unduly affect its customers. Rather, requests to the service would simply be handled by one of the service replicas at another data center, perhaps with slightly greater latency due to time of flight delays. This is, of course, more easily imagined than implemented, but its viability is assessable on both economic and technical grounds.
In this spirit, let me begin by suggesting that we may need to rethink our definition of broadband WANs. Today, we happily talk of deploying 10 Gb/s lambdas, and some of our fastest transcontinental and international networks provision a small number of lambdas (i.e., 10, 40 or 100 Gb/s). However, a single-mode optical fiber has much higher total capacity with current dense wavelength division multiplexing (DWDM) technology, and typical multistrand cables contain many fibers. Thus, the cable has an aggregate bandwidth of many terabits, even with current DWDM.
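The aggregate-capacity argument above can be made concrete with a back-of-envelope calculation. The channel and fiber counts here are illustrative assumptions typical of the era, not figures from the post:

```python
# Back-of-envelope aggregate cable capacity, per the reasoning above.
# Channel counts and fiber counts are illustrative assumptions, not
# vendor specifications.

lambda_rate_gbps = 10      # per-wavelength rate (10 Gb/s lambdas, as in the text)
lambdas_per_fiber = 80     # typical C-band DWDM channel count (assumed)
fibers_per_cable = 144     # a common multistrand cable size (assumed)

per_fiber_gbps = lambda_rate_gbps * lambdas_per_fiber
cable_tbps = per_fiber_gbps * fibers_per_cable / 1000
print(f"per fiber: {per_fiber_gbps} Gb/s, cable: {cable_tbps} Tb/s")
```

Even with these conservative assumptions, a single cable carries on the order of a hundred terabits, while a "fast" network of the day provisioned only a handful of 10 Gb/s lambdas.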
Despite the aggregate potential bandwidth of the cables, we are really provisioning many narrowband WANs across a single fiber. Rarely, if ever, do we consider bonding all of those lambdas to provision a single logical network. What might one do with terabits of bandwidth between data centers? If one has an indefeasible right of use (IRU) or owns the dark fiber, one need only provision the equipment to exploit multiple fibers for a single purpose.
Of course, exploiting this WAN bandwidth would necessitate dramatic change in the bipartite separation of local area networks (LANs) and WANs in cloud data centers. Melding these would also expose the full bisection bandwidth of the cloud data center to the WAN and its interfaces, simplifying data and workload replication and moving us closer to true geo-dispersion and geo-resilience. There are deep technical issues, related to on-chip photonics among others, to make this a reality.
In the end, these technical questions devolve to risk assessment and economics. First, the cost of replicated, smaller data centers without UPS must be less than that of a larger, non-replicated data center with UPS. Second, the wide area network (WAN) bandwidth, its fusion with data center LANs and their cost must be included in the economic calculus. These are interesting technical and economic questions, and I invite economic analyses and risk assessments. I suspect, though, that it is time we embraced the true melding of high-speed networking and put our eggs in multiple baskets.