Friday, May 22, 2009

Computers as number-crunchers. Oh yeah. (IBM System S)

If nothing else, the Wolfram Alpha discussion/hype has reminded us that computers can be used as computational devices. We've been so focused on avatars and real-time feeds and all of that junk that we (or at least I) sometimes forget that computers can actually be used to compute. But IBM, even though it's recast itself as a service company, hasn't forgotten that fact:

New software from I.B.M. can suck up huge volumes of data from many sources and quickly identify correlations within it. The company says it expects the software to be useful in analyzing finance, health care and even space weather....

I.B.M....spent close to six years working on the software and has just moved to start selling a product based on it called System S....

Most computers, of course, can digest large stores of information if given enough time. But I.B.M. has succeeded in performing very quick analyses on larger hunks of combined data than most companies are used to handling.


More here. Or you can see what IBM says:

The Exploratory Stream Processing Systems team at the T.J. Watson Research Center conducts research on advanced topics in highly scalable stream processing applications and systems. Most of the research efforts fall under the umbrella of the System S project, which spans several teams at Watson.

As the amount of data available to enterprises and other organizations dramatically increases, more and more companies are looking to turn this data into actionable information and knowledge. Addressing these requirements requires systems and applications that enable efficient extraction of knowledge and information from potentially enormous volumes and varieties of continuous data streams.

System S provides an execution platform and services for user-developed applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams. It supports the composition of new applications in the form of stream processing graphs that can be created on the fly, mapped to a variety of hardware configurations, and adapted as requests come and go and relative priorities shift. System S is designed to scale from systems that acquire, analyze, interpret, and organize continuous streams on a single processing node, to high-performance clusters of hundreds of processing nodes.
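To make the "stream processing graph" idea a bit more concrete, here's a minimal sketch in Python of the same pattern: a source is ingested, filtered, analyzed, and correlated through composable stages. This is only an illustrative analogy on my part; System S applications are built with IBM's own tooling and languages, and none of the names or data below come from that product.

```python
import random
from itertools import islice

def ingest(n_events, seed=0):
    """Source stage: emit a stream of (symbol, price) ticks (synthetic data)."""
    rng = random.Random(seed)
    symbols = ["AAPL", "IBM", "GOOG"]
    for _ in range(n_events):
        yield rng.choice(symbols), round(rng.uniform(90.0, 110.0), 2)

def filter_symbol(stream, wanted):
    """Filter stage: keep only ticks for one symbol."""
    for symbol, price in stream:
        if symbol == wanted:
            yield symbol, price

def moving_average(stream, window=5):
    """Analyze stage: attach a rolling average over the last `window` prices."""
    buf = []
    for symbol, price in stream:
        buf.append(price)
        if len(buf) > window:
            buf.pop(0)
        yield symbol, price, sum(buf) / len(buf)

def correlate(stream, threshold=2.0):
    """Correlate stage: flag ticks that deviate sharply from their own average."""
    for symbol, price, avg in stream:
        if abs(price - avg) > threshold:
            yield f"ALERT {symbol}: price {price} vs. rolling avg {avg:.2f}"

# Compose the stages into a small processing graph and pull a few results through it.
pipeline = correlate(moving_average(filter_symbol(ingest(1000), "IBM")))
for alert in islice(pipeline, 5):
    print(alert)
```

The generators here stand in for the "created on the fly" composition the IBM description talks about; the part a toy like this can't show is System S distributing those operators across hundreds of nodes and rebalancing them as priorities shift.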