Show simple item record

dc.contributor.author: Miller, Barton P. [en_US]
dc.contributor.author: Roth, Philip [en_US]
dc.contributor.author: Arnold, Dorian [en_US]
dc.date.accessioned: 2012-03-15T17:18:07Z
dc.date.available: 2012-03-15T17:18:07Z
dc.date.created: 2004 [en_US]
dc.date.issued: 2004
dc.identifier.citation: TR1503
dc.identifier.uri: http://digital.library.wisc.edu/1793/60394
dc.description.abstract: MRNet is an infrastructure that provides scalable multicast and data aggregation functionality for distributed tools. While evaluating MRNet's performance and scalability, we learned several important lessons about benchmarking large-scale, distributed tools and middleware. First, automation is essential for a successful benchmarking effort, and should be leveraged whenever possible during the benchmarking process. Second, microbenchmarking is invaluable not only for establishing the performance of low-level functionality, but also for design verification and debugging. Third, resource management systems need substantial improvements in their support for running tools and applications together. Finally, the most demanding experiments should be attempted early and often during a benchmarking effort to increase the chances of detecting problems with the tool and experimental methodology. [en_US]
dc.format.mimetype: application/pdf [en_US]
dc.publisher: University of Wisconsin-Madison Department of Computer Sciences [en_US]
dc.title: Benchmarking the MRNet Distributed Tool Infrastructure: Lessons Learned [en_US]
dc.type: Technical Report [en_US]
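The abstract's lessons on automation and microbenchmarking can be illustrated with a minimal, hypothetical sketch of an automated timing harness; the `microbenchmark` helper and the payload sweep below are illustrative assumptions, not code from the report:

```python
import statistics
import time

def microbenchmark(fn, *, warmup=10, iters=100):
    """Time fn() over many iterations; return (min, median) in seconds.

    Reporting the minimum and median rather than the mean reduces the
    influence of scheduler noise on shared benchmarking machines.
    """
    for _ in range(warmup):  # warm-up runs absorb cold-cache effects
        fn()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return min(samples), statistics.median(samples)

# Automated sweep over payload sizes, in the spirit of varying message
# size or fan-out when benchmarking an aggregation layer (hypothetical
# workload, not MRNet's actual benchmark).
for size in (1 << 10, 1 << 14, 1 << 18):
    payload = b"x" * size
    lo, med = microbenchmark(lambda: payload * 2)
    print(f"{size:>8} B  min={lo:.2e}s  median={med:.2e}s")
```

Scripting the sweep rather than timing runs by hand is what the abstract's first lesson argues for: automation makes it cheap to rerun every configuration after each change.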


This item appears in the following Collection(s)

  • CS Technical Reports
    Technical Reports Archive for the Department of Computer Sciences at the University of Wisconsin-Madison
