
Green Computing – Multifaceted Challenge of Improving Efficiencies

Green computing has become a focal topic on many discussion boards these days. Many articles focus on CPU scaling and the ability to tailor the power consumption of systems to the load applied to them. The Linux kernel team is working on CPU scaling to reduce idle power consumption. Pacific Gas and Electric recently initiated a data center consolidation project with IBM to shrink the footprint of its physical data centers and, through the application of virtualization technology, hopefully cut the associated energy costs by 80%. Intel, IBM, EDS, Sun, Google, Red Hat, and several other companies have formed a Climate Savers initiative to reduce the power consumption of desktop and server environments by 50 and 33 percent respectively, an effort expected to cut the carbon footprint by 55 million tons per year. Google has recently begun construction of a solar array to power its corporate headquarters in an attempt to reduce its carbon footprint. All of these activities are to be applauded – they reduce the consumption of electricity and apply cleaner energy sources to improve the utilization of power within the data center.

But do these approaches truly encompass the complete breadth of the solution space – or could there be something more, something that could have an even more significant impact on the consumption of power and the degradation of our environment?

Application Performance Optimization

As recently as the late 1990s, the IT industry was focused on building out applications with a vast range of functionality within the limits imposed by the capabilities of the hardware and infrastructure, which forced efficiency in the use of resources. More recently, we have become accustomed to a surplus of capacity and have grown less mindful of how conservatively we apply it, since it is more than ample in most cases to power even the most inefficient solutions. This was the cost of portability as Java and other applications based on a portable runtime platform gained wider adoption. It was commonplace to attribute poor scalability to runtime performance and write it off as an operational cost incurred against the benefits of portability. Over time, these runtime environments have been optimized to the point where they can claim parity in some respects with natively compiled code. Somewhere along the way, however, the focus on wringing performance from these applications through repeatable engineering processes has slipped away.

This is the problem – compute cycles are often not treated as a requirement to be addressed in the construction of many of today’s applications. When I started writing software, I built a system to solve a network routing problem. The optimization approach I first pursued would have consumed over 1GB of RAM – at a time when a workstation with 64MB of RAM cost upwards of $25K. That solution also took the better part of a minute to chew through all of the system memory, pegging the CPU at 100% for the duration. In the end, I built a less greedy algorithm based on a heuristic I designed specifically to model the business realities rather than a theoretical optimum. The new solution peaked at 2MB of memory and completed a typical run in 3-5 seconds – roughly a 95% reduction in CPU time and a 99.8% reduction in memory utilization. The question is, with the cost of memory and CPU cycles today, would I have taken the additional month to optimize the solution’s performance? By extension, should you take the time to look into performance optimizations that could reduce your energy costs by 50-75%?
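For what it’s worth, here is a quick back-of-the-envelope check of those numbers as a TypeScript snippet. The one-minute baseline and four-second typical run are my own rounding of the figures quoted above.

```typescript
// Back-of-the-envelope check of the resource savings quoted above.
// Baseline: ~60 s of CPU and 1 GB (1024 MB) of RAM; heuristic: ~4 s and 2 MB.
const baselineCpuSeconds = 60;
const heuristicCpuSeconds = 4;   // midpoint of the 3-5 second range, rounded
const baselineMemoryMB = 1024;   // 1 GB
const heuristicMemoryMB = 2;

const cpuReduction = 1 - heuristicCpuSeconds / baselineCpuSeconds;
const memReduction = 1 - heuristicMemoryMB / baselineMemoryMB;

console.log(`CPU time reduction:   ${(cpuReduction * 100).toFixed(1)}%`); // ~93%, in line with the roughly-95% figure above
console.log(`Memory use reduction: ${(memReduction * 100).toFixed(1)}%`); // ~99.8%
```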

Green Power Supplies

Your average server power supply is rated at upwards of 1,000 watts and can draw a significant fraction of that even when the system is idle. Recent advances in power supply technology step back consumption when the computer is not fully loaded by as much as 80%.
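To put that 80% figure in perspective, here is a rough, illustrative calculation. The 250-watt idle draw and the assumption of a server sitting idle around the clock are mine, for illustration only – not measured values.

```typescript
// Illustrative only: estimate annual energy saved by an 80% idle step-back.
// The 250 W idle draw and 24/7 idle assumption are hypothetical figures.
const idleDrawWatts = 250;        // assumed idle draw of an older supply
const stepBackFraction = 0.8;     // the 80% reduction cited above
const hoursPerYear = 24 * 365;

const savedWatts = idleDrawWatts * stepBackFraction;        // 200 W
const savedKWhPerYear = (savedWatts * hoursPerYear) / 1000; // ~1,752 kWh

console.log(`Per idle server: ~${savedKWhPerYear.toFixed(0)} kWh saved per year`);
```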

Power to the People

The average PC today is more powerful than that $25K workstation I mentioned earlier, but have we changed the way we design applications to leverage that capacity? Today’s web applications still build pages on the server, dynamically assembling HTML and images and streaming them across the internet to the client. The client machine must then parse that HTML and render it in a viewable form in the browser. Rather than transform data into a structured text-based protocol that requires additional parsing on the client side, why don’t we leverage the power of the client machine more directly and render the data from a set of instructions that can be managed separately? Most browsers now support XSL transformation, although this has not always been the case. An XSL transformation within the browser is more efficient than a transformation to HTML on the server followed by a parse and render of the HTML stream on the client. Of course, there are far fewer engineers who understand XSL well enough to maintain this kind of architecture.
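As a rough sketch of what client-side transformation can look like, here is a minimal TypeScript example using the browser’s XSLTProcessor API. The /data.xml and /view.xsl URLs are placeholders, not real endpoints.

```typescript
// Minimal sketch: fetch raw XML data plus an XSL stylesheet and let the
// browser do the rendering work, instead of shipping pre-built HTML.
async function renderOnClient(dataUrl: string, stylesheetUrl: string): Promise<void> {
  const parser = new DOMParser();

  // Fetch the raw data and the presentation rules separately.
  const [dataText, xslText] = await Promise.all([
    fetch(dataUrl).then(r => r.text()),
    fetch(stylesheetUrl).then(r => r.text()),
  ]);

  const xmlDoc = parser.parseFromString(dataText, "application/xml");
  const xslDoc = parser.parseFromString(xslText, "application/xml");

  // The browser applies the XSL transformation locally.
  const processor = new XSLTProcessor();
  processor.importStylesheet(xslDoc);
  const fragment = processor.transformToFragment(xmlDoc, document);

  document.body.replaceChildren(fragment);
}

// Placeholder URLs for illustration only.
renderOnClient("/data.xml", "/view.xsl");
```

The point of the design is that the data and the rendering instructions travel (and are cached) separately, and the client’s otherwise idle CPU does the assembly.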

Scale and Distribute Processing with SOA

Service Oriented Architectures have been hot for a few years now. C-level execs are all familiar with the benefits of SOA in scaling and rationalizing their IT infrastructures. Through extensive leverage of SOA, many enterprises have consolidated common application functionality into a single app that provides that functionality as a service to the enterprise, but at what cost?

SOA is not a panacea for curing enterprise software maintenance ills. In point of fact, SOA incurs a performance penalty in marshalling and unmarshalling the data when calling and responding to a service. From both a compute time and a resource utilization perspective, SOA can often be less efficient than local processing within the application. In such cases, other options might be explored to improve the maintainability of common functionality – such as the creation of shared libraries.
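To make the trade-off concrete, here is a hedged TypeScript sketch contrasting the two approaches. The tax calculation, the service URL, and the function names are hypothetical, and the HTTP call simply stands in for whatever transport a given SOA stack actually uses.

```typescript
// Hypothetical example: the same business function exposed two ways.
interface TaxRequest  { amount: number; region: string; }
interface TaxResponse { tax: number; }

// Option 1: shared library. The call is a plain in-process function call --
// no serialization, no network hop.
export function calculateTax(req: TaxRequest): TaxResponse {
  const rate = req.region === "CA" ? 0.0725 : 0.05; // illustrative rates only
  return { tax: req.amount * rate };
}

// Option 2: SOA-style service call. Every invocation pays for marshalling
// (JSON.stringify), a network round trip, and unmarshalling (response.json()).
export async function calculateTaxViaService(req: TaxRequest): Promise<TaxResponse> {
  const response = await fetch("https://tax-service.example.com/calculate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),                     // marshalling cost
  });
  return (await response.json()) as TaxResponse;   // unmarshalling cost
}
```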

In the end, it inevitably comes down to balancing the short- and long-term costs of building, maintaining, and operating the solution. While SOA can reduce testing, deployment, and some repeated development costs, it incurs penalties in network performance, processing power, and memory consumption, and in some cases a marginal impact on end-user productivity. Selecting the right solution is a question of the scope of impact and the scale of operations.


About David Picard

David is the COO of Beacon BPM Solutions and the President and Founder of PSInd. He has been working in the consulting sector for the banking, financial services, insurance, transportation and telecommunications industries for over 20 years. David began work as an operations consultant after completing his initial tour of duty as an active duty US Army officer with responsibility for operations planning and oversight for site and movement security of nuclear weapons. He has spent considerable time working with Pegasystems building the PRPC BPMS offering and deploying successful BPM implementations on that platform.
