Green Supercomputing

Sorry for the double-posting, but I don’t want Professor Gus to have to read up on our blog over Winter Break, and I really feel this subject should be put forth to the community.

When an industry consumes large amounts of any resource, it soon finds itself in the crosshairs of environmentalist groups, as well it should. Cyberscience is no exception, as data centers absorb staggering amounts of power. In class we have already covered the progressive response of many companies: moving to “greener” energy sources. That is a very useful step to take, but until solar, wind, and water power become efficient enough to run all of the world’s data centers, it cannot be the only step taken. Luckily, many companies share my view on that.

There is a ranking of energy-efficient supercomputers called the Green500 List, and on it I was surprised to see many government-owned machines. I knew the government needs many data centers, but our nation isn’t the most environmentally friendly country out there. Apparently, some scientists have worked very hard to make these computing centers as eco-friendly as possible. So I decided to probe further, and it turned out I didn’t have to look far.

If you were to google “green supercomputing” right now, the first result would be the Green500 list. The very next link is a concise yet well-written article about NASA’s use of green supercomputing. The center it describes is called “Pleiades”. The first thing the article mentions is its power efficiency. NASA speaks proudly of its #54 spot on the Green500, but even more proudly of its #5 spot in efficiency (232 megaflops per watt). The entire system was built from the ground up with efficiency in mind, and that initial thoughtfulness has proven very beneficial in the long run. NASA also runs software that idles unused machines within the system, and performs constant maintenance to keep the whole system operating at full efficiency.
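To put that 232 megaflops-per-watt figure in perspective, here is a quick back-of-the-envelope sketch. Note that the 500-teraflop sustained workload below is an invented number purely for illustration, not an actual Pleiades spec:

```python
def power_draw_kw(sustained_mflops, mflops_per_watt):
    """Estimate power draw in kilowatts from sustained performance
    and a Green500-style efficiency rating in megaflops per watt."""
    watts = sustained_mflops / mflops_per_watt
    return watts / 1000.0

# Illustrative only: a hypothetical 500-teraflop sustained workload
# at the efficiency figure quoted above (232 Mflops/W).
sustained_mflops = 500e6            # 500 Tflops = 5e8 Mflops
kw = power_draw_kw(sustained_mflops, 232)
print(f"{kw:.0f} kW")               # prints "2155 kW"
```

Even at a chart-topping efficiency, a big sustained workload still draws megawatts of power, which is why the "built efficient from the ground up" approach matters so much.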

This system is a great example of green supercomputing’s ability to be every bit as powerful as conventional supercomputing. Its efficiency has saved NASA a great deal of money (don’t worry, they will find somewhere else to spend it) and, more importantly, has kept its carbon footprint extremely low (for a data center).

Google “green supercomputing” if you want to view the articles referenced in this post; they are results 1 and 2.


To the Cloud! Cyberscience’s mentality makes its way into our daily lives

In the formative years of computer technology, a computer was something you came to when you had a task to get done; computation happened in a centralized location. With the rapid evolution of computing technology, personal computers became the norm for most computing. There has always remained a need for large-scale computing centers, though, and the rapid evolution of these centers has made such things as “The Cloud” possible. Now computation can be done off-site, with results displayed on-site, for everything from playing a video game to truly high-throughput computing.

“The Cloud” is a very ambiguous term, generally referring to anything involving off-site data storage or computation. The simplest implementation is something along the lines of Google Docs, which stores documents and lets you edit them in the browser. Documents aren’t very computationally or data intensive, but the mentality of not having to store things on one’s own computer is the main point. Many internet sites offer off-site data storage services as well. Then there are programs like VNC (Virtual Network Computing), which lets one set up one’s own personal computer to be remotely accessed by another device over the internet. Cloud computing has even advanced to the point where one company has found it preferable to distributing physical gaming consoles: OnLive lets people play the latest videogames over broadband internet. No, this isn’t an ad for OnLive; personally, I like the option of playing games without requiring a broadband hookup. However, one cannot deny the merits of a system that can (according to some reviews I read) play advanced games seamlessly while being controlled from many distant locations.

In summation, the technology behind “The Cloud” is quite obviously improving at a rapid rate. One might foresee a situation in which we come full circle, in a way, back to data being computed in centralized places. With the ever-lowering cost of technology and the explosive growth of “Cloud” services, coupled with the ever-expanding availability of broadband internet, it seems entirely possible that all personal computers could become bare-bones machines with only enough functionality to connect to an off-site location. If it were cost-effective, who wouldn’t want a light-as-air machine that runs for extended periods of time and has limitless computing potential?


Cyberscience and the Humanities: Not Just Dan Katz

I know that at this point Google Books is well-established, but this article I read in the New York Times reminded me of our class and gave me some new interesting thoughts about the fourth paradigm and the humanities:

In 500 Billion Words, a New Window on Culture

The article offers a glimpse at how the ability to search billions of words from an astounding number of documents can influence research about human culture. Using Google’s database, researchers of “culturomics” can look at how culture has changed, in areas ranging from language to how long fame lasts. It presents the idea that large databases can influence our understanding of humanistic topics much more deeply and quickly than previous means, and that this approach is beginning to catch on throughout academia.
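For a sense of what a basic “culturomics” query looks like computationally, here is a toy sketch of tracking a word’s relative frequency over time. The three-sentence “corpus” is invented purely for illustration; the real work runs against Google’s 500-billion-word database:

```python
from collections import defaultdict

def ngram_trend(corpus, word):
    """Relative frequency of `word` per year across a (year, text)
    corpus -- a toy version of the kind of query culturomics
    researchers run against Google's book database."""
    counts = defaultdict(int)
    totals = defaultdict(int)
    for year, text in corpus:
        tokens = text.lower().split()
        totals[year] += len(tokens)
        counts[year] += tokens.count(word)
    return {y: counts[y] / totals[y] for y in sorted(totals)}

# Tiny invented corpus, just to show the shape of the result.
corpus = [
    (1900, "the telegraph was the marvel of the age"),
    (1950, "the television replaced the radio in many homes"),
    (2000, "the internet changed how culture itself is studied"),
]
print(ngram_trend(corpus, "telegraph"))
```

The plot of that dictionary over time is exactly the kind of rise-and-fall curve the article uses to measure things like how quickly fame fades.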

When Dan Katz came to class, I think we were all intrigued by how influential cyberscience can be on the humanities. I think we all saw it as something of a rare occurrence. However, this article offers another window into how rapidly the fourth paradigm is changing even the humanities.

The most striking point Dan Katz made was that there are researchers at the Law School still leafing through books one page at a time, one pair of eyes at a time, which seems a very silly way to go about research involving so many words and ideas. This article discusses a very similar issue: research that until recently took months for very few results now has a high-tech avenue that can pay huge dividends.

I don’t doubt that we will be seeing many similar articles in the future, as the fourth paradigm seems to keep popping up in the most interesting (and sometimes surprising) places.


Computational science at home

Recently I came across a program called BOINC. BOINC is a program you can install on a home computer; it fetches data from a central server, uses your home computer to do computations on that data, and sends the results back when it is done. BOINC is a way of using people’s home computers to sort through data instead of requiring a supercomputer to perform quick calculations on massive amounts of data. Currently I have BOINC running on a Linux home server for the cosmology@home project.

Clearly BOINC has its disadvantages, but for something like this to be available to underfunded scientists shows massive potential. This is a big part of computational science in regard to infrastructure. Instead of spending a lot of money or waiting a long time for access to a supercomputer, scientists can use a network of home computers whose processing power is given freely. The upside of BOINC is that a computer is often sitting around doing nothing, so why not dedicate that unused processing power to a positive benefit? Programs like this help expand the field of cyberscience and create new paths for the field to move into.
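The pattern at the core of volunteer computing is simple: a central server hands out independent work units, volunteers compute on them, and the results flow back. Here is a minimal sketch of that pattern, with threads standing in for volunteer machines and squaring a number standing in for real science; actual BOINC adds validation, redundancy, and checkpointing on top of this:

```python
import queue
import threading

# Toy sketch of the volunteer-computing pattern: a shared "server"
# queue of work units, and several "volunteer" workers that pull
# units, compute, and report results back.

work_units = queue.Queue()
results = queue.Queue()

for n in range(20):
    work_units.put(n)

def volunteer():
    """One simulated home computer: pull work, compute, report back."""
    while True:
        try:
            n = work_units.get_nowait()
        except queue.Empty:
            return                  # no work left; volunteer goes idle
        results.put((n, n * n))     # stand-in for a real computation

threads = [threading.Thread(target=volunteer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.queue))        # all 20 results, from 4 "volunteers"
```

The key property is that the work units are independent, so it doesn’t matter which volunteer handles which unit, or in what order the results come back.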

 

http://boinc.berkeley.edu/

http://www.cosmologyathome.org/


Flocks, Swarms, and Computing

After watching all of our class’s presentations last week, as well as agonizing over my own (in a good way 😀 ), I must say I found the ‘Theoretical Ecological Modeling’ group’s topic rather interesting, as it made me think about how the behavior of living things can be modeled and simulated by computers. Although we normally don’t notice it, the social structures that other organisms on earth form are amazingly complex and remain understudied. From the flocks of birds that carpet-bomb the Diag on their annual migrations to ant colonies that somehow build underground cities, many animals ‘know’ how to perform feats we humans would consider extraordinary given their lack of intelligence and technology. Finding a way to model and analyze these complex behaviors through computing is a fascinating new area of science.

How individually simple organisms such as insects come together to form an intelligent, goal-oriented society or ‘hive mind’ remains a mystery. Though we know that many animals communicate via powerful chemical signals, or pheromones, the complexity involved in organizing hundreds to millions of individuals is mind-boggling. It would be cool to see a computer model show how a bee colony organizes workers to seek out plants to pollinate, care for its young, and defend against intruders. I’ve also wondered how bird flocks or schools of fish migrate thousands of miles through treacherous environments in one piece without any explicit planning or strategy of the sort a ‘smart’ human would use. Perhaps as computers become more powerful, we could simulate variables such as local climate and food supply in a realistic model of animal migration.
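Flocking is actually one of the classic demonstrations that simple local rules can produce convincing collective behavior (Craig Reynolds’ “boids” model). Below is a minimal sketch of the idea; the rule weights and flock size are arbitrary choices for illustration, not taken from any published model:

```python
import random

# Minimal "boids"-style flocking step: each simulated bird steers by
# three rules -- move toward the group (cohesion), match the group's
# heading (alignment), and avoid crowding neighbors (separation).
# No individual is smart, yet the flock pulls together.

def step(flock, cohesion=0.01, alignment=0.05, separation=0.05):
    new = []
    cx = sum(b[0] for b in flock) / len(flock)   # flock center x
    cy = sum(b[1] for b in flock) / len(flock)   # flock center y
    avx = sum(b[2] for b in flock) / len(flock)  # average velocity x
    avy = sum(b[3] for b in flock) / len(flock)  # average velocity y
    for x, y, vx, vy in flock:
        # steer toward the flock's center and its average velocity
        vx += (cx - x) * cohesion + (avx - vx) * alignment
        vy += (cy - y) * cohesion + (avy - vy) * alignment
        # push away from any boid that gets too close
        for ox, oy, _, _ in flock:
            if 0 < abs(ox - x) + abs(oy - y) < 1.0:
                vx += (x - ox) * separation
                vy += (y - oy) * separation
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(1)
flock = [(random.uniform(0, 20), random.uniform(0, 20), 0.0, 0.0)
         for _ in range(15)]
for _ in range(100):
    flock = step(flock)
```

After a hundred steps, birds that started scattered across the whole area end up moving as one loose cluster, which is exactly the point: the “hive mind” can emerge from purely local rules, with no leader and no plan.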

The prospect of computers modeling the adaptive behavioral patterns of animals in nature also reminds me of Michael Crichton’s sci-fi novel Prey. Admittedly, it was a cheesy and scientifically absurd story about a laboratory accidentally releasing a swarm of intelligent nanorobots into the desert. The nanorobots were programmed with a ‘predator-prey’ algorithm that gave the swarm a collective intelligence: the ability to learn, reproduce, and evolve, eventually giving rise to predatory behavior against humans. The thought of computers becoming more like living organisms is both an interesting and a frightening possibility as computing power grows exponentially. The ability to make large numbers of individually insignificant, simple ‘bots’ work together on large-scale projects, from medical treatments to space exploration, may one day become reality. However, as artificial intelligence becomes more and more in tune with the vicious and relentless nature of natural selection and predator-prey relationships, this technology may become dangerous.

Either way it makes for somewhat entertaining science fiction.

 


Reaction to Computational Legal Studies

After Dan Katz’s presentation about Computational Legal Studies, the magnitude of the possibilities hit me. So far in legal studies, paper and brainpower have been the driving technology. In general, law requires much more thought than computation, so it makes sense that computers would not have much of a place. The problem is that humans have two inherent limitations: they are never perfect, and there is a limit to how fast they can process information. A computer can read a book in a fraction of a second, while it takes a human several hours at least. Computers also almost never make mistakes; of course they can fail, but it is almost always the fault of the human who wrote the program.

Computers aren’t the solution yet; they still have one major shortfall: the ability to think. AI has always been out of reach, but with major advances in technology, it is not out of sight. There may never be a true AI, but once computers can make basic logical decisions, they could be a very powerful tool in law. Imagine a machine that can read in a speech made by an opposing lawyer, analyze the point of the argument, cross-reference it against a database of laws and precedents, and come up with specific counterarguments in a matter of seconds. A lawyer could then use these to refute the opposition’s point. Nowadays, it is up to the lawyer to memorize all of the precedents and laws and prepare for the case beforehand, but with technology like this, useless mass memorization could be avoided and lawyers would be able to focus on the actual logic of the argument.

My guess is as good as anyone’s as to what the future holds, but the possibilities are almost endless. One thing is certain, though: computers will, one day, influence the legal process.
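As a toy illustration of the retrieval half of that idea: given an argument, rank a database of precedents by how much vocabulary they share. None of this is from Katz’s actual work; the mini “precedents” and the word-overlap scoring are invented for the sketch, and real legal-informatics systems use far more sophisticated retrieval:

```python
# Toy precedent lookup: score each precedent in a small invented
# "database" by how many words it shares with an opposing argument,
# then rank the precedents by that score.

def tokenize(text):
    """Lowercase a text and split it into a set of bare words."""
    return set(text.lower().replace(",", "").replace(".", "").split())

def rank_precedents(argument, precedents):
    """Return precedent names sorted by shared-vocabulary score."""
    arg_words = tokenize(argument)
    scored = [(len(arg_words & tokenize(text)), name)
              for name, text in precedents.items()]
    return [name for score, name in sorted(scored, reverse=True)]

# Invented mini-database, purely for illustration.
precedents = {
    "Case A": "contract breach requires proof of damages",
    "Case B": "free speech protections cover political pamphlets",
    "Case C": "damages for breach of contract are limited",
}
argument = "My client owes no damages for this contract breach."
print(rank_precedents(argument, precedents))
```

Even this crude version surfaces the contract cases ahead of the irrelevant free-speech case; the hard (and still open) part is the reasoning layer on top, which is exactly the “ability to think” gap described above.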


Racetrack Memory

A team at Texas A&M University has found a way to improve the speed and efficiency of racetrack memory by “using a series of current pulses rather than DC, AC, or a combination of the two.” While this new model of memory storage is still far from replacing hard disk drives or solid state drives, this is a step closer to making the new technology practical.

Racetrack memory stores data on a thin wire as magnetic domains that can be pushed along the wire with pulses of current. If it becomes a practical alternative to hard disk drives, it will improve supercomputing with its faster read and write speeds. Since it also has a higher data density, it will make the current deluge of data take up less space in our data centers. And because it involves no moving parts, it is more energy efficient and produces less heat, making data storage greener.
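Conceptually, a racetrack behaves like a magnetic shift register: each current pulse marches the whole train of domains one position past a fixed read/write head. Here is a toy model of that shifting behavior; the circular wire is a simplification so no bits fall off the end of the sketch:

```python
# Toy shift-register model of a racetrack memory wire: bits are stored
# as domain orientations along the wire, and each current pulse shifts
# the whole train of domains past a fixed read head at position 0.

class Racetrack:
    def __init__(self, bits):
        self.wire = list(bits)   # domain orientations along the wire
        self.head = 0            # fixed read/write head position

    def pulse(self):
        """One current pulse: every domain shifts one position along
        the wire (circularly here, as a simplification)."""
        self.wire = self.wire[1:] + self.wire[:1]

    def read(self):
        return self.wire[self.head]

track = Racetrack([1, 0, 1, 1, 0])
out = []
for _ in range(5):               # pulse the full word past the head
    out.append(track.read())
    track.pulse()
print(out)                       # prints [1, 0, 1, 1, 0]
```

The design point this captures is that the head never moves; only the bits do, which is why racetrack memory needs no moving parts, unlike the spinning platter and seeking arm of a hard disk.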

More about how racetrack memory works
