BSDCan 2011

BSDCan 2011 was great. The problem with a conference that’s routinely great is that great becomes routine, and hence boring. Several presentations struck me as notably interesting for a variety of reasons, and I wanted to comment on three of them. These are only my personal opinions, of course. BSDCan had three tracks, and I could only be in one talk at a time.

Mark Linimon’s talk, “How not to build a lights-out facility,” discussed the FreeBSD Project’s efforts to mirror its core infrastructure in datacenter space donated by New York Internet. As a chronicle of lessons learned and things that should be done differently next time, it’s valuable listening for anyone who thinks that building heavy-duty project infrastructure is easy.

I’m not going to name the people, the projects, or the code involved in the second talk, because the talk itself is less important than what happened during it. A committer from one large BSD project presented on a new piece of infrastructure he had developed. The audience included people associated with a variety of BSD projects. At the end of the talk, a senior developer from a different BSD project asked a few questions. The presenter and the developer had several rounds of completely civil back-and-forth technical discussion, and at the end the presenter agreed that the developer had some strong points and that some parts of his infrastructure needed additional work. I’m told that this happened in more than one talk. Despite the disagreements between the various BSD projects, it’s clear that technical correctness still comes first.

The presentation I found most technically interesting was Randall Stewart’s work on data center congestion control. Stewart tested congestion control with ECN and SCTP on real hardware and presented his results. It wasn’t until hours later that I realized exactly why I found the talk so interesting: he had essentially done “Mythbusters” for a specific part of TCP/IP. He’d bought a bunch of $50 servers on eBay, repeatedly adjusted SCTP’s response to packets with ECN set, and graphed the results. This was real-world work, suspiciously close to academic research, done in a basement. And it’s the sort of research that almost anyone could do. Lots of claims are made for our network stacks, but very few people actually experiment to measure performance with their workloads.
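To make that concrete, here’s a minimal sketch of what a Mythbusters-style stack experiment can look like. This is not Stewart’s harness: it flips TCP ECN on and off (a simpler stand-in for his SCTP tuning) and compares throughput. The sysctl name is FreeBSD’s TCP ECN knob as I understand it, and the server address and iperf3 JSON field names are assumptions to verify on your own machines.

#!/usr/bin/env python3
"""Sketch: toggle ECN, measure throughput, compare. Verify names locally."""
import json
import statistics
import subprocess

SERVER = "192.0.2.10"                    # placeholder iperf3 server on the test LAN
ECN_SYSCTL = "net.inet.tcp.ecn.enable"   # FreeBSD's TCP ECN sysctl (assumption; check your version)
RUNS = 5

def set_ecn(enabled: bool) -> None:
    # Requires root; sysctl(8) exists on all the BSDs, but knob names vary.
    subprocess.run(["sysctl", f"{ECN_SYSCTL}={1 if enabled else 0}"], check=True)

def one_run() -> float:
    # -J asks iperf3 for JSON output so the numbers can be parsed reliably.
    out = subprocess.run(["iperf3", "-c", SERVER, "-t", "10", "-J"],
                         check=True, capture_output=True, text=True).stdout
    report = json.loads(out)
    return report["end"]["sum_received"]["bits_per_second"]

for ecn in (False, True):
    set_ecn(ecn)
    rates = [one_run() for _ in range(RUNS)]
    print(f"ECN {'on' if ecn else 'off'}: "
          f"median {statistics.median(rates) / 1e6:.1f} Mbit/s over {RUNS} runs")

Crude, yes. But run it enough times, change one variable at a time, and you have exactly the kind of graphs Stewart showed.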

I’m glad to see open source projects learning lessons. I’m glad to see different BSD camps politely testing their ideas against each other, creating better software for everyone. But I’m really, really happy to see real-world experiments.

I see all sorts of claims for different BSDs’ network stacks, disk performance, and so on. Please, put them to the test. Make changes. Measure the results. While this work requires real hardware rather than virtualization, it’s something that anyone can do. You know your workload. Read about benchmarking. While naive benchmarks aren’t useful, it’s not that hard to design valid ones. Buy used hardware and run your own tests. Make changes, and test again. Measure and document everything. Capture packets, and keep the pcap files so that you can go back and answer interesting questions. Publish your results. You’ll get interest. Perhaps your results will be as you expect. Maybe they won’t. But you’ll never know until you try.
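As a starting point, here’s a minimal sketch of such a test loop: capture packets with tcpdump for the length of each run, keep the pcap, and log the raw output so every run is documented. The interface name and server address are placeholders, and iperf3 is only a stand-in for your real workload.

#!/usr/bin/env python3
"""Sketch: run a workload, keep the pcap, log everything."""
import datetime
import subprocess
import time

IFACE = "em0"            # placeholder network interface
SERVER = "192.0.2.10"    # placeholder test server
RUNS = 3

for i in range(RUNS):
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    pcap = f"run-{i}-{stamp}.pcap"

    # Start tcpdump in the background; -w writes raw packets to a file.
    capture = subprocess.Popen(["tcpdump", "-i", IFACE, "-w", pcap,
                                "host", SERVER])
    time.sleep(1)        # give tcpdump a moment to start capturing
    try:
        # Your real workload goes here; iperf3 is just a stand-in.
        result = subprocess.run(["iperf3", "-c", SERVER, "-t", "30"],
                                capture_output=True, text=True)
    finally:
        capture.terminate()   # stop the capture even if the test fails
        capture.wait()

    # Append the raw output to a log so you can revisit every run later.
    with open("results.log", "a") as log:
        log.write(f"=== run {i} at {stamp}, pcap {pcap} ===\n")
        log.write(result.stdout + "\n")

Months later, when someone asks an interesting question about your results, those pcap files will let you answer it.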

As a BSDCan committee member, I would love to see more work like this. I can’t guarantee that your paper would be accepted, but I can say I’m much more likely to vote for a paper with a real investigation than yet another talk on well-understood features. Even if your results say “Yes, the fooBSD disk I/O system works exactly as expected,” it’s still interesting. And if you discover weak spots, you’ll have the evidence developers need to improve performance.
