
Microsoft bids for role in cluster computing

Microsoft isn't exactly known as a supercomputing superpower, nor is it a superhero in the scientific research community, but an alliance with Dell, Intel and Cornell could open those worlds to Windows.

Raw computing power -- most researchers get it from supercomputers that run proprietary software. Many of the government agencies and universities that run those platforms believe that, right or wrong, that's the way things should be.

    But a research program led by Cornell University and funded by the triumvirate of Microsoft Corp., Dell Computer Corp. and Intel Corp. is designed to develop high-performance clustered applications that could be used not just by scientists and academics, but also by mainstream industry customers such as financial institutions and pharmaceutical companies.

    Cornell officials said recently that, over the course of four years, the three companies will kick in resources valued at about $60 million, though the Ithaca, N.Y., university chose not to break down the contribution of each vendor.

    For its part, Microsoft is providing Cornell with some development tools and technical support from afar. Cornell staff will receive training at Microsoft, said Thomas Coleman, Cornell Theory Center director and a computer scientist at the university.

    Microsoft has no presence in scientific computing for a variety of reasons. The realm of supercomputing is largely the domain of the Unix crowd, which is also a collaborative community. A culture of people writing their own code is the antithesis of Microsoft, which is a "trust us and we will do everything for you" shop, according to Dave Passmore, research director at The Burton Group, a Sterling, Va.-based consulting firm.

    Microsoft's focus on business applications and its marriage to the Intel processor family have kept it out of scientific circles because supercomputers use a variety of proprietary processor hardware.

    Microsoft's reputation in the scientific community also suffers from the notion, which may or may not be justified, that Windows cannot scale as well as Unix. But Cornell's Coleman is confident that Windows can be just as flexible in high-performance computing environments as Unix.

    "There is nothing that prevents Windows from scaling other than inertia," Coleman said. "People make investments in certain types of software, and they don't want to change. I think the reputation of Windows being unstable applies more to the desktop."

    But in an era when everyone is looking to cut costs, the promise of lower prices, thanks mainly to the use of standard hardware, drives this effort. At the launch, Russ Holt, vice president of the enterprise systems group at Dell, predicted that the Microsoft/Intel/Dell products would cost about one-tenth as much as today's supercomputing technologies.

    Coleman said Cornell has already replaced one of its 64-processor supercomputers, which had cost about $15 million, with annual maintenance costs of about $1 million. The university chose Windows over Linux because it wanted to attract industry users and not just the hard-core scientific community: "those who already use Windows on their desktops and want to keep doing so," Coleman said.

    Coleman said Microsoft's move to the .NET architecture will be helpful because its load balancing features will help IT managers identify a set of processors to cluster based on their current workloads.

    Cornell will work with the three vendors out of a university facility in New York, which is geographically close to some large banks that have expressed an interest in this project, Coleman said.

    Though Microsoft may not work for every scientific environment, Cornell's effort is appreciated. John Noe, manager of scientific computing for Sandia National Laboratories in Albuquerque, N.M., wrote in an e-mail that, while the collaboration doesn't appear to be aimed at the type of scientific simulation that is Sandia's specialty, any investment of capital and human ingenuity in this field will benefit everyone. Sandia already has its own working arrangements with Dell and Intel to develop cluster-based systems for simulation.

    As part of its research, Cornell will use Dell's PowerEdge servers and Intel's Xeon and Itanium processors and tools. It will run Microsoft's server software. The university will double the size of an existing 425-server Microsoft, Dell and Intel cluster and will provide other users with its own research data.

    Coleman isn't worried that other users will think Cornell's results are skewed simply because the three vendors are paying for the research. "I think it's not so much the results of the research versus illustrating the different problems that can be solved on different machines," he said. "We will show them how that's done and help them get their applications running."
