Energy Internet and eVehicles Overview

Governments around the world are wrestling with how to prepare society for inevitable climate change. To date most attention has focused on reducing greenhouse gas emissions, but there is growing recognition that, regardless of what we do to mitigate climate change, the planet is going to be significantly warmer in the coming years, with all the attendant problems of more frequent droughts, flooding, severe storms, etc. As such we need to invest in solutions that provide a more robust and resilient infrastructure to withstand this environmental onslaught, especially for our electrical and telecommunications systems, while at the same time reducing our carbon footprint.

Linking renewable energy and high-speed Internet using fiber to the home, combined with autonomous eVehicles and dynamic charging, where a vehicle's batteries are charged as it travels along the road, may provide a whole new "energy Internet" infrastructure for linking small distributed renewable energy sources to users. Such an infrastructure would be far more robust and resilient in the face of climate change than today's centralized command and control systems, and these new energy architectures would also significantly reduce our carbon footprint. For more details please see:

Using autonomous eVehicles for Renewable Energy Transportation and Distribution: and

Free High Speed Internet to the Home or School Integrated with solar roof top:

High level architecture of Internet Networks to survive Climate Change:

Architecture and routing protocols for Energy Internet:

How to use Green Bond Funds to underwrite costs of new network and energy infrastructure:

Wednesday, September 30, 2009

Understanding impact of cap and trade (Waxman-Markey) on IT departments and networks

[For further in-depth analysis on this subject please see the upcoming Educause Review special publication on this topic, and presentations by Dr. Larry Smarr and yours truly at the Educause summit in Denver in November.

There has been a lot of discussion about climate change and what IT departments should do to reduce energy consumption. Most of this is being driven by corporate social responsibility. But a few organizations are undertaking processes to understand the impact of cap and trade on the bottom line of their IT and network operations. When the real cost of cap and trade starts to be felt a lot of organizations will be looking at their IT departments as the low hanging fruit in terms of reducing energy consumption and concomitant GHG emissions.

Only marginal energy reductions are possible with traditional electricity hogs such as lighting, heating, air conditioning, etc. IT holds out the promise of much more significant savings because of its inherent flexibility and the intelligence to support "smart" solutions. Several studies indicate that ICT represents at least 30% of the energy consumption in most organizations, and it is estimated at as much as 50% within certain sectors such as telecoms, IT companies themselves and research universities. Hard, quantifiable data is difficult to find, but CANARIE is funding 3 research projects to do a more detailed analysis of actual electrical consumption by ICT and cyber-infrastructure for at least one sector of our society: research universities. (Preliminary results are already pretty scary!)

To date the various cap and trade systems have had little impact because either emission permits have effectively been given away, or the underlying price of carbon has had a negligible impact on the cost of electricity. This is all about to change, first with the Waxman-Markey bill (HR 2454) now before the Senate, and second with the move to auction permits in the European Trading System (ETS). Even if the Waxman-Markey bill fails to pass in the Senate, there are several regional cap and trade initiatives that will be implemented by US states and Canadian provinces in the absence of federal leadership. So, no matter which way you cut it, electrical costs for IT equipment and networks are projected to jump dramatically in the next few years because of cap and trade. On top of that there may be energy shortages as utilities move to shut down old coal plants where it does not make economic sense to install carbon capture and sequestration (CCS) systems to comply with the requirements of these cap and trade systems.

The US Environmental Protection Agency (EPA) has done some extensive modeling and economic analysis of the impact of the Waxman-Markey bill. It is probably the best source for a general understanding of how various cap and trade systems around the world are going to affect IT operations. Even though some of the particulars of the bill may change in the US Senate, the broad outline of this bill as well as those of other cap and trade systems will remain essentially the same. Details of the EPA analysis can be found here:

Surprisingly there has been little analysis by the IT industry itself of the impact of cap and trade on this sector. IT may be the most significantly affected because of its rapid growth and the overwhelming dependency on it in several key sectors of society such as university research, banking, hospitals, education, etc. Although IT overall consumes only 5-8% of all electricity, depending on which study you use, and contributes 2-3% of global CO2 emissions, IT electrical consumption is over 30% in most businesses and even greater at research universities. What is of particular concern is that IT electrical consumption is doubling every 4-6 years, and the next generation broadband Internet alone could consume 5% of the world's electricity. Data centers are projected to consume upwards of 12% of the electricity in the US.
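To see what a 4-6 year doubling time means in practice, here is a quick compound-growth sketch (Python used purely for illustration; the starting share and horizon are rough figures taken from the text, not a forecast):

```python
# Rough projection of IT's share of electricity consumption, assuming
# the 4-6 year doubling time cited above holds and nothing else changes.
# All inputs are illustrative, not data.

def projected_share(current_share_pct, years, doubling_time_years):
    """Compound growth: share doubles every `doubling_time_years`."""
    return current_share_pct * 2 ** (years / doubling_time_years)

# Starting from roughly the midpoint of today's 5-8% estimate:
for doubling in (4, 6):
    share = projected_share(6.5, 11, doubling)  # ~11 years out from 2009
    print(f"doubling every {doubling} yrs -> ~{share:.0f}% of all electricity")
```

Of course IT's share cannot literally grow unbounded, but the sketch shows why unchecked doubling quickly dominates total consumption.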

There are a number of important highlights in the Waxman-Markey bill that will be significant for IT departments and networks:

1. The proposed cap reduces GHG emissions to 17% below 2005 levels by 2020 and 83% by 2050.

2. Most of the GHG reduction will be from the electricity sector and purchase of international offsets in almost equal portions.

3. GHG emissions from the electricity sector represent the largest source of domestic reductions. Although transportation accounts for 28% of emissions in the US, only about 5% of the proposed reductions will come from that sector, and the bill is expected to raise gasoline prices by only a paltry $.13 in 2015, $.25 in 2030 and $.69 in 2050 (much to the relief of the oil industry, Canada's tar sands and owners of SUVs).

4. The share of low or zero carbon primary energy rises substantially, to 18% of primary energy in 2020, 26% by 2030 and 38% by 2050, although this is premised on a significant increase in nuclear power and CCS. True renewables make up only 8% in 2015, 12% in 2020, and 20% in 2030.

5. Increased energy efficiency and reduced energy demand simultaneously reduce primary energy needs by 7% in 2020, 10% in 2030, and 12% in 2050.

As you can imagine there are many uncertainties and controversial assumptions that affect the economic impacts of H.R. 2454 and many other cap and trade bills. Briefly these are some of them:

(a) The degree to which new nuclear power and CCS are technically and politically feasible. HR 2454 assumes a dramatic increase in nuclear power and deployment of CCS. If either fails to materialize then the GHG reduction targets will not be met. The assumption of growth in nuclear power is particularly suspect, as any new nuclear plants in the foreseeable future will first be needed to replace the many aging plants now at the end of their operating life.

(b) The availability of international offset projects. Given the controversy that already exists over international offsets, many question the assumption that this volume of offsets can be purchased, particularly when every other country with a cap and trade system will be pursuing the same market.

(c) The amount of GHG emissions reductions achieved by the energy efficiency provisions. In the IT sector in particular, growth in IT products and services may simply outweigh any gains made in efficiency.

Although the impact of HR 2454 on consumer electrical costs will be minimal, it's a different story for business and industrial users. The EPA estimates that the "average" price of electricity will increase by 66% for commercial users. But there will be huge regional variances in these prices depending upon the amount of electricity that is produced from coal without CCS. In regions largely dependent on coal-generated electricity, the cost increase will be almost entirely determined by the market price of carbon.

If your electricity is mostly generated by coal, as in most of the mid-west USA and western Canada, then a rough rule of thumb is that 1000 g of CO2 is produced for every kilowatt-hour of electricity, which gives a convenient one-to-one conversion of annual consumption in MWh to metric tonnes of CO2. A typical research university has a 40 MW load, which translates into about 350,000 MWh of annual consumption. This would result in 350,000 mTCO2e. If carbon trades at $25/ton then the increased cost to the institution will be in excess of $8 million per year.
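The rule-of-thumb arithmetic above can be written out in a few lines (Python used purely for illustration; the 1 tonne/MWh factor and the 40 MW load are the rough figures from the text):

```python
# Back-of-the-envelope carbon cost for a coal-heavy grid, using the
# ~1000 g CO2 per kWh (i.e. 1 tonne per MWh) rule of thumb above.
# All inputs are illustrative.

HOURS_PER_YEAR = 8760
TONNES_CO2_PER_MWH = 1.0  # coal-generated electricity rule of thumb

def annual_carbon_cost(avg_load_mw, carbon_price_per_tonne):
    mwh = avg_load_mw * HOURS_PER_YEAR       # annual consumption in MWh
    tonnes = mwh * TONNES_CO2_PER_MWH        # annual emissions in tCO2e
    return tonnes * carbon_price_per_tonne

# A 40 MW research university at $25/tonne:
cost = annual_carbon_cost(40, 25)
print(f"{cost / 1e6:.2f} million dollars per year")  # prints 8.76
```

Note the same formula scales linearly: at the $100/tonne Stern figure quoted below, the bill would be roughly four times larger.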

However, if many of the assumptions in the Waxman-Markey bill fail to come to pass, particularly the availability of international offsets, then the cost of carbon could jump dramatically. (To protect against this the US Senate is proposing a "collar" to limit variability in the price of carbon.) The EPA analysis has various projections for carbon, and depending on the scenario the cost could go as high as $350 per ton if the objectives of 17% GHG reductions by 2020 and 83% by 2050 are to be achieved. The Nicholas Stern report in the UK suggests that carbon must trade at $100 a ton to achieve meaningful GHG reductions.

One of the main concerns with the Waxman-Markey bill is that it is too little, too late. More and more evidence points to much more rapid warming of the planet than even the most pessimistic computer models have forecast. Although we had a wet and cool summer in eastern North America, average global sea temperatures set a new record high this year. The latest study from the UK Meteorological Office, which incorporates CO2 feedback cycles for the first time, suggests that the US could warm by 13-18F and the Arctic by 27F by 2060. The bottom line is that Waxman-Markey is just a starting point for probably much more stringent GHG reduction policies. The IT sector needs to prepare for this worst case eventuality; if nothing else it should be part of any disaster planning scenario. This will be the mother of all disaster planning scenarios: unlike other natural disasters that might affect IT operations, it will be long term, if not effectively permanent.

However there is some good news for the ICT sector. The Waxman-Markey bill (Title I, Subtitle A, Sec. 101) requires retail electricity providers to meet a minimum share of sales with electricity savings and qualifying renewable generation, funded through the purchase of offsets or other credits. Nominal targets begin at 6% in 2012 and rise to 20% by 2020. The ICT sector is probably the best positioned to take advantage of these requirements by adopting follow-the-wind/follow-the-sun architectures and relocating, as much as possible, computers and databases to renewable energy sources. The key to taking advantage of these opportunities is to start planning now. Several papers from MIT and Rutgers indicate that savings of up to 45% in electrical costs are possible with such a strategy. These savings will be even more significant with the advent of cap and trade.
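The core of a follow-the-wind/follow-the-sun architecture is simply routing deferrable work to wherever power is currently cheapest or greenest. A minimal scheduling sketch, assuming illustrative site names and prices (none of these appear in the MIT or Rutgers papers; they are hypothetical):

```python
# Minimal "follow the wind/follow the sun" site selection: route
# deferrable batch work to the data centre with the cheapest effective
# power right now. Site names and $/MWh prices below are hypothetical.

def pick_site(sites):
    """sites: mapping of site name -> current effective $/MWh
    (energy price plus any carbon cost). Returns the cheapest site."""
    return min(sites, key=sites.get)

current_prices = {
    "hydro-dc": 35.0,   # hypothetical renewable-backed site
    "coal-dc": 60.0,    # hypothetical coal-region site, carbon cost included
    "wind-dc": 28.0,    # hypothetical site where the wind is blowing now
}
print(pick_site(current_prices))  # prints wind-dc
```

A real system would also weigh data-transfer costs and latency, but under cap and trade the carbon component of each site's price becomes a first-class input to this decision.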

-- BSA]

Tuesday, September 29, 2009

UK Met Office: Catastrophic climate change, 13-18°F over most of the U.S. and 27°F in the Arctic, by 2060

[Excerpts from the Climate Progress blog. Although this summer has been cool in many regions of North America, the oceans overall have reached record high temperatures. More evidence that we are on an unsustainable path -- BSA]

UK Met Office: Catastrophic climate change, 13-18°F over most of U.S. and 27°F in the Arctic, could happen in 50 years, but “we do have time to stop it if we cut greenhouse gas emissions soon.”
September 28, 2009

Finally, some of the top climate modelers in the world have done a “plausible worst case scenario,” as Dr Richard Betts, Head of Climate Impacts at the Met Office Hadley Centre, put it today in a terrific and terrifying talk (audio here).

No, I’m not talking about a simple analysis of what happens if the nation and the world just keep on our current emissions path. We’ve known about that end-of-century catastrophe for a while (see “M.I.T. doubles its 2095 warming projection to 10°F — with 866 ppm and Arctic warming of 20°F“). I’m talking about running a high emissions scenario (i.e. business as usual) in one of the few global climate models capable of analyzing strong carbon cycle feedbacks. This is what you get [temperature in degrees Celsius; multiply by 1.8 for Fahrenheit]:

The key point is that while this warming occurs between 1961-1990 and 2090-2099 for the high-end scenarios without carbon cycle feedbacks, in about 10% of Hadley’s model runs with the feedbacks, it occurs around 2060. Betts calls that the “plausible worst case scenario.” It is something the IPCC and the rest of the scientific community should have laid out a long time ago.

As the Met Office notes here, “In some areas warming could be significantly higher (10 degrees [C = 15F] or more)”:

* The Arctic could warm by up to 15.2 °C [27.4 °F] for a high-emissions scenario, enhanced by melting of snow and ice causing more of the Sun’s radiation to be absorbed.
* For Africa, the western and southern regions are expected to experience both large warming (up to 10 °C [18 °F]) and drying.
* Some land areas could warm by seven degrees [12.6 F] or more.
* Rainfall could decrease by 20% or more in some areas, although there is a spread in the magnitude of drying. All computer models indicate reductions in rainfall over western and southern Africa, Central America, the Mediterranean and parts of coastal Australia.
* In other areas, such as India, rainfall could increase by 20% or more. Higher rainfall increases the risk of river flooding.

Large parts of the inland United States would warm by 15°F to 18°F, even worse than what the NOAA-led 13-agency impacts report found: “Our hellish future: Definitive NOAA-led report on U.S. climate impacts warns of scorching 9 to 11°F warming over most of inland U.S. by 2090 with Kansas above 90°F some 120 days a year — and that isn’t the worst case, it’s business as usual!”


Friday, September 18, 2009

The fallacy of tele-commuting, video conferencing and virtual meetings to reduce CO2

[A lot of equipment vendors are promoting video conferencing as a solution for reducing CO2 emissions, especially from air travel. But as this article shows, there is not a lot of compelling evidence that video conferencing is that effective. At first blush it would seem that the savings from air travel would be significant. But most analyses of video conferencing fail to take into account that the video conference system is often left on 24 hours a day, seven days a week. The cumulative power consumption and resultant CO2 emissions may outweigh the gains made in eliminating air travel. Video conferencing should be part of an overall portfolio for reducing CO2 emissions, but on its own it is about as effective as caulking the windows to seal out drafts while leaving the front door wide open --BSA]

By now, you’ve probably heard the following claim: Video conferencing, when done right, can offer companies significant benefits when it comes to travel. By eliminating the need to send employees to on-site meetings, companies can cut both the cost and the nasty carbon emissions bill associated with such journeys.

That’s the message used to help market next-best-thing-to-real-life video conferencing services like Cisco’s TelePresence collaboration service — that virtual meetings can save both money and the planet. But look beyond the headlines and the soundbites, and you’re likely to find a somewhat less verdant tale.

Digging into the data

Those may sound like some big numbers, but if you look at the actual research, not just the press releases and marketing tie-ins, they start to shrink. The study from Australia? It goes on to say that those 2.4 million metric tons of emissions are just 0.43 percent — less than half a percent — of the country’s total. (To be fair, GreenBiz also notes this fact.)

The impact of video conferencing in the BCG/Climate Group study was equally lukewarm, if not more so. Emissions reductions from “dematerialization,” the category under which teleworking and video conferencing fall, account for just 0.9 percent of the total potential emissions reduction in its scenario, while video conferencing on its own accounts for just 0.15 percent of the total potential.

What’s more, the actual travel-replacement effects of video conferencing aren’t exactly carved in stone. According to the WWF study, some research indicates that video conferencing may actually have a neutral or negative impact on employee travel, because travel time and budgets associated with internal meetings are shifted to strategic meetings with contacts outside the organization. That may be good for business, but it doesn’t do much for the polar ice caps.

Wednesday, September 16, 2009

80% of green ICT initiatives don’t have measurable targets!

The OECD has just published a great summary of the various green ICT programs around the world. As this article points out, many of these initiatives have no way of measuring whether they are actually reducing GHG emissions. This is quite a common problem with most energy efficiency initiatives. Unless a program undertakes a valid carbon verification and audit process such as ISO 14064, there is no way to really determine if these programs are succeeding. In fact many energy efficiency initiatives may be making the problem worse because of the Jevons paradox. This is why the CANARIE Green IT pilot insists that applicants go through an ISO 14064 process to ensure that their low carbon Internet architecture actually reduces GHG emissions.

A new report published by the Organisation for Economic Co-operation and Development (OECD) reveals that only one in five green ICT programs run by governments and industry organisations has any type of measurable target, or any way of measuring whether it is working as planned.

According to the report, authored by consultant Christian Reimsbach Kounatze and presented to the Working Party on the Information Economy, while most programs have some form of broad objective, only one-fifth of all government programs and industry association initiatives have measurable targets and indicators to measure whether these targets are being achieved.

Of the government initiatives, all have set objectives, but only 17 out of 50 have measurable targets, the report said. Of these, only 10 actually have formalised assessment and evaluation. More astonishing is the fact that the report found only two out of the 42 green ICT programs by industry associations had any measurable targets.

At the same time, the report found that while there are many approaches to green ICT, as illustrated by the 92 programs, and each program has its own objectives, the majority, or two thirds, are focused on improving the direct environmental impact of the use of ICT, thus neglecting the greater benefits of using green ICT to lower the impact of society in general.

Only one third of the programs actually focused on “using ICTs across the economy and society in areas where there is a major potential to dramatically improve performance, for example in “smart” urban, transport and power distribution systems, despite the fact that this is where ICT have the greatest potential to improve environmental performance,” the report said.


OECD report

Assessing Policies and Programmes on ICT and the Environment

Integrating Cyber-infrastructure with Smart Grids and energy management

How to Use Open-Source Hadoop for the Smart Grid

At first glance it’s hard to see how the open-source software framework Hadoop, which was developed for analyzing large data sets generated by web sites, would be useful for the power grid — open-source tools and utilities don’t often mix. But that was before the smart grid and its IT tools started to squeeze their way into the energy industry. Hadoop is in fact now being used by the Tennessee Valley Authority (TVA) and the North American Electric Reliability Corp. (NERC) to aggregate and process data about the health of the power grid, according to this blog post from Cloudera, a startup that’s commercializing Hadoop.

The TVA is collecting data about the reliability of electricity on the power grid using phasor measurement unit (PMU) devices. NERC has designated the TVA system as the national repository of such electrical data; it subsequently aggregates info from more than 100 PMU devices, including voltage, current, frequency and location, using GPS, several thousand times a second. Talk about information overload.

But TVA says Hadoop is a low-cost way to manage this massive amount of data so that it can be accessed at all times. Why? Because Hadoop has been designed to run on large numbers of cheap commodity computers, and its two distributed components (the HDFS distributed file system and the MapReduce processing framework) make the system more reliable and easier to use for running processes on large data sets.

The Smart Grid and Big Data: Hadoop at the Tennessee Valley Authority (TVA)

For the last few months, we’ve been working with the TVA to help them manage hundreds of TB of data from America’s power grids. As the Obama administration investigates ways to improve our energy infrastructure, the TVA is doing everything they can to keep up with the volumes of data generated by the “smart grid.” But as you know, storing that data is only half the battle. In this guest blog post, the TVA’s Josh Patterson goes into detail about how Hadoop enables them to conduct deeper analysis over larger data sets at considerably lower costs than existing solutions. -Christophe
The Smart Grid and Big Data

At the Tennessee Valley Authority (TVA) we collect phasor measurement unit (PMU) data on behalf of the North American Electric Reliability Corporation (NERC) to help ensure the reliability of the bulk power system in North America. TVA is a federally owned corporation in the United States, created by congressional charter in May 1933 to provide flood control, electricity generation, and economic development in the Tennessee Valley. NERC is a self-regulatory organization, subject to oversight by the U.S. Federal Energy Regulatory Commission and governmental authorities in Canada. TVA has been selected by NERC as the repository for PMU data nationwide. PMU data is considered part of the measurement data for the generation and transmission portion of the so-called “smart grid”.

PMU Data Collection

There are currently 103 active PMU devices placed around the Eastern United States that actively send data to TVA, and new PMU devices come online regularly. A PMU samples high voltage electric system busses and transmission lines at a substation several thousand times a second; the results are then reported for collection and aggregation. PMU data is a GPS time-stamped stream of those power grid measurements, transmitted 30 times a second, each sample consisting of a timestamp and a floating point value. The types of information a PMU point can contain are:

* Voltage (A,B, C phase in positive, negative, or zero sequence) magnitude and angle
* Current (A,B, C phase in positive, negative, or zero sequence) magnitude and angle
* Frequency
* dF/dt (change in frequency over time)
* Digitals
* Status flags

Commonly just positive sequence voltages and currents are transmitted, but all three phases are possible. There can be several measured voltage and current phasors per PMU (each phasor having a magnitude and an angle value), a variable number of digitals (typically 1 or 2), and one of each of the remaining 3 types of data; on average around 16 total measurements are sent per PMU. Should a company wish to send all three phases, or a combination of positive, negative, or zero sequence data, then the number of measurements obviously increases.
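The figures above imply a substantial aggregate data rate. A quick back-of-the-envelope check, using only the numbers given in the text:

```python
# Aggregate measurement rate implied by the figures above:
# 103 PMUs, ~16 measurements each, reported 30 times a second.

pmus = 103
measurements_per_pmu = 16
samples_per_second = 30

values_per_second = pmus * measurements_per_pmu * samples_per_second
print(values_per_second)              # prints 49440 measured values/s
print(values_per_second * 86400)      # prints 4271616000 (~4.3 billion values/day)
```

Roughly 50,000 values arriving every second, around the clock, is what drives the architectural demands described next.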

The amount of this time-series data created by even a regional area of PMU devices provides a unique architectural demand on the TVA infrastructure. The flow of data from measurement device to TVA is as follows:

1. A measurement device located at the substation (the PMU) samples various data values, timestamps them via a GPS clock, and sends them over fiber or other suitable lines to a central location.
2. For some participant companies this may be a local concentrator or it may be a direct connection to TVA itself. Communication between TVA and these participants is commonly a VPN tunnel over a LAN-to-LAN connection but several partners utilize a MPLS connection for more remote regions.
3. After a few network hops the data is sent to a TVA developed data concentrator termed the Super Phasor Concentrator (or SPDC) which accepts these PMUs’ input, ordering them into the correct time-aligned sequence - compensating for any missing data or delay introduced by network congestion or latency.
4. Once organized by the SPDC, its modular architecture allows this data to be operated on by third party algorithms via a simple plug-in layer.
5. The entirety of the stream (currently involving 19 companies, 10 different manufacturers of PMU devices, and 103 PMUs, each reporting an average of 16 measured values at a rate of 30 samples a second, with a possibility of 9 different encodings, and this only from the Eastern United States) is passed to one of three servers running an archiving application, which writes the data to size-optimized fixed-length binary files on disk.
6. A real-time data stream is simultaneously forwarded to a server program hosted by TVA which passes the conditioned data in a standard phasor data protocol (IEEE C37.118-2005) to client visualization tools for use at participant companies.
7. An agent moves PMU archive files into the Hadoop cluster via an FTP interface.
8. Alternatively, regulators such as NERC or approved researchers can directly request this data over secure VPN tunnels for operation at their remote location.

TVA currently has around 1.5 trillion points of time-series data in 15 TB of PMU archive files. The rate of incoming PMU data is growing very quickly as more and more PMU devices come online. We expect to have around 40 TB of PMU data by the end of 2010, and 5 years' worth of PMU data is estimated at half a petabyte (500 TB).
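Those archive figures also imply a per-point storage cost, which is a useful sanity check on the compactness of the fixed-length binary format (a quick derivation from the two numbers in the text, nothing more):

```python
# Sanity check: 1.5 trillion points stored in 15 TB implies roughly
# 10 bytes per point (a timestamp plus a floating point value in a
# size-optimized fixed-length binary record).

points = 1.5e12
terabytes = 15

bytes_per_point = terabytes * 1e12 / points
print(bytes_per_point)  # prints 10.0
```

At that density, the projected 500 TB five-year archive corresponds to on the order of 50 trillion points.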

The Case For Hadoop At TVA

Our initial problem was how to reliably store PMU data and make it available at all times. There are many brand name solutions in the storage world that come with a high price tag and the assumption of reliable hardware. With large amounts of data spanning many disks, even at a high mean time to failure (MTTF) a system will experience hardware failures quite frequently. We liked the idea of being able to lose whole physical machines and still have an operational file system thanks to Hadoop's aggressive replication scheme. The more we talked with other groups using HDFS, the more we came away with the impression that HDFS worked as advertised and shone even with amounts of data that "reliable hardware" struggled with. Our discussions and findings also indicated that HDFS was quite good at moving data and included multiple ways to interface with it out of the box. In the end, Hadoop is a good fit for this project in that it allows us to employ commodity hardware and open source software at a fraction of the price of proprietary systems, achieving a much more manageable expenditure curve as our repository grows.
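A small combinatorial sketch of why replication makes losing whole machines survivable: with HDFS's default replication factor of 3, a block is lost only if all three nodes holding its replicas fail at once. This assumes independent simultaneous failures and random placement, which simplifies away HDFS's rack-aware placement policy, so treat it as an intuition aid rather than a model of HDFS itself:

```python
# Probability that a given block is lost when k of N nodes fail at once,
# given r replicas on r distinct nodes. Simplified: assumes random
# placement and independent failures (HDFS is actually rack-aware).

from math import comb

def block_loss_probability(total_nodes, failed_nodes, replication=3):
    """P(all `replication` replicas land among the failed nodes)."""
    if failed_nodes < replication:
        return 0.0
    return (comb(total_nodes - replication, failed_nodes - replication)
            / comb(total_nodes, failed_nodes))

# Even losing 3 of 100 nodes simultaneously rarely hits all 3 replicas:
print(block_loss_probability(100, 3))  # ~6.2e-06
```

With fewer failed nodes than replicas, loss of any given block is impossible, which is exactly the property that let us stop assuming reliable hardware.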

The other side of the equation is that eventually NERC and its designated research institutions are to be able to access the data and run operations on it. The concept of "moving computation to the data" with map-reduce made Hadoop an even more attractive choice, especially given its price point. The proposed uses of our PMU data range from simple pattern scans to complex data mining operations, and the types of analysis and algorithms we want to run aren't well suited to SQL. It became obvious that we were in the market for a batch processing system such as map-reduce rather than a large relational database system. We were also impressed with the very robust open source ecosystem that Hadoop enjoys; many projects built on Hadoop are under active development, such as:

* Hive
* HBase
* Pig

This thriving community was very interesting to us, as it gives TVA a wealth of quality tools with which to analyze PMU data using analysis techniques that are native to "big data". After reviewing the factors above, we concluded that employing Hadoop at TVA kills two birds with one stone: it solves our storage issues with HDFS and provides a robust computing platform with map-reduce for researchers around North America.

PMU Data Analysis at TVA

Currently our analysis needs and wants are evolving along with our nascent ideas on how best to use PMU data. Current techniques and algorithms on the drawing board or in beta include:

* Washington State’s Oscillation Monitoring System
* Basic averages and standard deviation over frequency data
* Fast Fourier transform filters, including:
  * Wiener filter
  * Kalman filter
  * Low pass filter
  * High pass filter
  * Band pass filter
* Indexing of power grid anomalies
* Various visualization rendering techniques such as creating power grid map tiles to watch the power grid over time and in history

We are currently writing map-reduce applications to crunch far greater amounts of power grid information than has previously been possible. Using traditional techniques, calculating something as simple as an average frequency over time can be an extremely tedious process because of the need to traverse terabytes of information; map-reduce allows us not only to parallelize the operation but also to get much higher disk read speeds by moving the computation to the data. As we evolve our analysis techniques we plan to expand from simple scans to more complex data mining techniques, to better understand how the power grid reacts to fluctuations and how anomalies previously thought to be discrete may, in fact, be interconnected.
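The average-frequency job described above has the classic map-reduce shape: map each record to a (key, value) pair, then aggregate per key. A toy sketch, with plain Python standing in for the Hadoop framework and made-up sample records (the real jobs run against the binary PMU archives, not Python tuples):

```python
# Toy map-reduce style average frequency per PMU. Plain Python stands
# in for Hadoop here; record values are made up for illustration.

from collections import defaultdict

def map_phase(records):
    """records: (pmu_id, timestamp, frequency_hz) tuples.
    Emit (key, value) pairs, as a Hadoop mapper would."""
    for pmu_id, _ts, freq in records:
        yield pmu_id, freq

def reduce_phase(pairs):
    """Group values by key and average them, as a reducer would per key."""
    sums, counts = defaultdict(float), defaultdict(int)
    for key, value in pairs:
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}

records = [("pmu-1", 0, 59.98), ("pmu-1", 1, 60.02), ("pmu-2", 0, 60.01)]
print(reduce_phase(map_phase(records)))
```

In the real cluster each mapper reads a local slice of the archive, which is where the disk-read speedup from moving computation to the data comes from.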

Additionally, we are adding other devices such as Frequency Disturbance Recorders (FDRs, a.k.a. F-NET devices, developed at Virginia Tech) to our network. Although these devices send samples at a third of the rate of PMU devices, with a reduced measurement set, there is the potential for many hundreds of these less expensive meters to come online, which would effectively double our storage requirements. The FDR data would be interesting in that it would allow us to create a more complete picture of the power grid and its behavior. Hadoop would allow us to continue scaling up to meet the extra demand, not only for storage but for processing with map-reduce as well. Hadoop gives us the flexibility and scalability to meet future demands placed upon the project with respect to data scale, processing complexity, and processing speed.

Looking Forward With Hadoop

As we move forward with Hadoop, there are a few areas we'd like to see improved. Security is a big deal in our field, especially given the nature of the data and the agencies involved, and we would like to see it continue to improve in the Hadoop community as a whole. Security, internal and external, is a big part of what we do, so we are always examining our production environment to make sure we fulfill our requirements. We are also looking at ways to allow multiple research projects to coexist on the same system, sharing the same infrastructure but able to queue up their own jobs and download results from their own private account areas, while only having access to the data their project allows. Research can be a competitive business, and we are looking for ways to let researchers work with the same types of data while feeling comfortable that their specific work remains private. Additionally, we are required to maintain the privacy of all the data providers: researchers will only be allowed to access a filtered set of measurements, as allowed by the data providers or as deemed available for research by NERC.

In our first discussions about whether or not we would explore cloud computing as an option for processing our PMU data, we wanted to know if there was a “Redhat-like” entity in the space that could answer questions and provide support for Hadoop. Cloudera has definitely stepped up to the plate to fulfill this role for Hadoop. Cloudera provides exceptional support in a very dynamic space, a space in which many companies have no experience and many consulting firms can provide no solid advice. Cloudera was quick to make sure that Hadoop was right for us and then provided extremely detailed answers to all of our questions and what-if scenarios. Their whole team was exceptionally adept in getting back to us on a myriad of details most sales or “front line support” teams would be stymied by. Cloudera’s distribution for Hadoop and guidance on hardware acquisition helped in saving us money and getting our evaluation of Hadoop off the ground in a very short amount of time.

Tuesday, June 2nd, 2009 at 10:00 am by Christophe Bisciglia, filed under community, guest, hadoop


More on Climate as a Service - cyber-infrastructure grand challenge

The Earth System Grid (ESG) integrates supercomputers with large-scale data and analysis servers located at numerous national labs and research centers to create a powerful environment for next generation climate research. This portal is the primary point of entry into the ESG.

Ian Foster reports that the Earth System Grid provides access to PCMDI's archives. We are now working to expand ESG, in collaboration with others in Europe and elsewhere, to address the challenges of next-generation models.


Of course, one of the big challenges for ESG and other climate modeling HPC systems is to ensure that they do not themselves become part of the problem, as, for example, the UK's new climate modeling supercomputer:

Weather supercomputer used to predict climate change is one of Britain's worst polluters

The Met Office has caused a storm of controversy after it was revealed their £30million supercomputer designed to predict climate change is one of Britain's worst polluters.

The massive machine - the UK's most powerful computer with a whopping 15 million megabytes of memory - was installed in the Met Office's headquarters in Exeter, Devon.

It is capable of 1,000 billion calculations every second to feed data to 400 scientists and uses 1.2 megawatts of energy to run - enough to power more than 1,000 homes.
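A quick sanity check on the article's figures, assuming an average household draw of roughly 1 kW (an assumption; actual UK household averages vary):

```python
# Check the "enough to power more than 1,000 homes" claim.
supercomputer_watts = 1.2e6  # 1.2 MW, from the article
avg_home_watts = 1.0e3       # assumed ~1 kW average household demand

homes_equivalent = supercomputer_watts / avg_home_watts
print(homes_equivalent)      # → 1200.0, consistent with "more than 1,000 homes"
```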


Thursday, September 10, 2009

Climate as a Service - a cyber-infrastructure grand challenge

At the recent World Climate Conference, a lot of the discussion was around providing “climate services” and coordinating them globally. Climate services involve the provision of climate information relevant for adaptation to climate change and climatic swings, long-term planning, and facilitating early warning systems against rapid extreme climate change. Most importantly, climate services provide information on the regional and local scale implications of climate change. Most computer modeling of climate change today is done on a global basis, which masks many significant regional differences. For example, Canada’s far north is expected to warm by as much as 11°C, even though the global average temperature increase may be only 2-4°C. Please see for more details.

See also

Climate services will have a major impact on the research and education community and their corresponding networks. Global, national and regional climate models will now need to be integrated. In addition, tracking and satellite data must be distributed to numerous computational facilities around the world. The scale of this challenge is evidenced by the new network capabilities of the Department of Energy’s network, which now sees climate data volumes comparable to high energy physics data. These data volumes are expected to grow even more significantly as the reality of climate change sets in and policy makers demand more accurate long-range predictions of the impact of climate change on their regions.

“The study of global climate change is a critical research area where the amount of data being created and accessed is growing exponentially. For example, an archive of past, present and future climate modeling data maintained by the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory contains more than 35 terabytes of data and is accessed by more than 2,500 users worldwide. However, the next-generation archive is expected to contain at least 650 terabytes, and the larger distributed worldwide archive will be between 6 petabytes to 10 petabytes.”

Microsoft intends to cure server huggers

[The bane of many universities and businesses is the plethora of servers and clusters scattered throughout the institution, in just about every broom closet and under every desk. According to a Gartner report, over 30% of an institution's electrical bill is attributable to PCs and peripherals, not counting all these servers. If institutions intend to be carbon neutral they have to address the challenge of server huggers --BSA]

Microsoft wants the engineers in its labs to manage their servers remotely, and is moving development servers from a bevy of computer rooms in labs to a new green data center about 8 miles from its Redmond campus. "I see today as a real transition point in our culture," said Rob Bernard, chief environmental strategist at Microsoft, who acknowledged that the change will be an adjustment for veteran developers but will save money and energy. Microsoft expects its customers to run their apps remotely in data centers, and clearly expects the same of its employees.

Wednesday, September 2, 2009

Computing for the Future of the Planet- follow the sun/follow the wind research program

[There is another exciting research initiative at the University of Cambridge, called Computing for the Future of the Planet, that is very much in line with similar initiatives at MIT, Rutgers/Princeton and the CANARIE Green IT program. I have summarized these various initiatives below --BSA]

Computing for the Future of the Planet

Computing (computers, communications, applications) will make a major and crucial contribution to ensuring a sustainable future for society and the planet. Computing is an important tool that will enable developing societies to improve their standard of living without undue impact on the environment. At the same time, it will enhance the ability of developed societies to maintain their economic success while reducing their use of natural resources. The greater wealth generated using computing may reduce population growth and its problematic impact on the physical world.

Cost- and Energy-Aware Load Distribution Across Data Centers
Geographical distribution of data centers often exposes opportunities for optimizing energy consumption and costs by intelligently distributing the computational workload. By leveraging green data centers, brown energy consumption can be decreased by 35% at only a 3% cost increase.
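A minimal sketch of the idea, assuming invented data-center parameters and a simple greedy heuristic: prefer renewable-powered ("green") centers even at a small cost premium, then fall back to brown capacity. The paper's actual optimization framework is more sophisticated than this.

```python
# Greedy cost- and energy-aware load placement across data centers.
# All parameters below are invented, illustrative values.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    green: bool            # powered by renewable ("green") energy?
    cost_per_unit: float   # relative cost per unit of work
    capacity: int          # units of work the center can absorb

def place_load(total_load, centers):
    """Prefer green centers, then cheapest, subject to capacity limits."""
    plan, remaining = {}, total_load
    for dc in sorted(centers, key=lambda d: (not d.green, d.cost_per_unit)):
        take = min(dc.capacity, remaining)
        if take:
            plan[dc.name] = take
            remaining -= take
    return plan, remaining

centers = [
    DataCenter("brown_east", green=False, cost_per_unit=1.00, capacity=100),
    DataCenter("green_west", green=True,  cost_per_unit=1.03, capacity=60),
]
plan, unplaced = place_load(80, centers)
print(plan)  # green capacity is filled first despite the ~3% cost premium
```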

Cutting the Electric Bill for Internet-Scale Systems
Companies with many data centers can take advantage of cheap bandwidth, smart software and fluctuating hourly energy prices to shift computing power to a data center located where it is off-peak time of day and energy prices are low, saving a substantial margin (45% maximum savings).
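The hour-by-hour price-following idea can be sketched as follows; the regions and spot prices are invented examples, and a real system would also weigh bandwidth cost and latency before migrating work:

```python
# Follow-the-price routing: each hour, send deferrable work to whichever
# data center currently has the lowest spot electricity price.

hourly_prices = {   # $/MWh at a given hour, by region (assumed examples)
    "us_east": 55.0,
    "us_west": 31.0,  # off-peak on the west coast
    "europe":  72.0,
}

def cheapest_site(prices):
    """Return the region with the lowest current energy price."""
    return min(prices, key=prices.get)

site = cheapest_site(hourly_prices)
print(site)  # → us_west
```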

Overview and background on CANARIE Green IT program

PROMPT Green Next Generation Internet Program (look under new initiatives)
