Are there any budget concerns here, or is it simply a case of 'build the machine that will parse this data in the shortest possible time'?
If you're mirroring off-site data to an on-site network drive, network setup is going to be a crucial factor, and local storage should probably be optimized for speed rather than capacity. A striped (RAID 0) array of Western Digital Raptors will give you high sequential throughput, and the Raptors' 10,000 RPM spindles keep seek times low, which should be helpful. Feed the system through a gigabit ethernet connection to minimize the network bottleneck.
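For what it's worth, software striping under Linux is dead simple with mdadm. Something along these lines would do it (the device names and mount point are just placeholders for wherever your Raptors actually show up):

# stripe two drives into a single RAID 0 array
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
# put a filesystem on it and mount it
mkfs.ext3 /dev/md0
mount /dev/md0 /data

Just remember RAID 0 has no redundancy at all; since you're mirroring from off-site anyway, that's probably an acceptable trade for the speed.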
Linux is your OS of choice if you're writing your own software, which I imagine you've already concluded. I'm not convinced Debian or SUSE is the best distro for this, though; you're probably better off with a minimalist distro like Slackware. I like Debian (and am writing this on a box running Ubuntu), but from the sounds of it you don't really need the heavyweight package management or all the extras, so why not go with a distro that skips all that? Less is more here.
Linux is fully capable of multithreading, but bear in mind that your software will need to be written to take advantage of it. From the scheduler's standpoint, two or four cores on one processor look the same as two or four single-core processors. I've never done any serious coding in this vein myself, but the standard threading library on Linux is POSIX threads (pthreads), and there's plenty of documentation on the interwebs. I'd research that; once you have an idea of how difficult it will be to optimize your code to take full advantage of multiple cores, you'll have a better handle on how beneficial a dual or quad core system will be. If one ignores cost as a factor, though, the Intel processors are far superior to the AMD equivalents at raw data crunching.
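Just to give you a flavour of what the pthreads side looks like, here's a minimal sketch. The thread count and the summing loop are purely illustrative (the sum is a stand-in for whatever your real per-record parsing does); the point is the pattern of chopping the data into independent slices, one per core:

/* Minimal pthreads sketch: split a buffer across N worker threads.
 * Compile with: gcc -std=gnu99 -O2 -pthread example.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 4        /* match this to your core count */
#define DATA_LEN    1000000

typedef struct {
    const int *data;         /* this thread's slice of the buffer   */
    size_t     len;
    long long  result;       /* partial result, written by the thread */
} work_t;

static void *worker(void *arg)
{
    work_t *w = arg;
    long long sum = 0;
    size_t i;
    for (i = 0; i < w->len; i++)
        sum += w->data[i];   /* replace with the real parsing work */
    w->result = sum;
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    work_t    work[NUM_THREADS];
    size_t    i, chunk = DATA_LEN / NUM_THREADS;
    long long total = 0;
    int       t;
    int      *data = malloc(DATA_LEN * sizeof *data);

    for (i = 0; i < DATA_LEN; i++)  /* dummy data to chew on */
        data[i] = (int)(i % 100);

    /* hand each thread an independent slice -- no locking needed,
     * because no two threads ever touch the same memory */
    for (t = 0; t < NUM_THREADS; t++) {
        work[t].data   = data + t * chunk;
        work[t].len    = (t == NUM_THREADS - 1)
                       ? DATA_LEN - t * chunk  /* last thread takes the remainder */
                       : chunk;
        work[t].result = 0;
        pthread_create(&tid[t], NULL, worker, &work[t]);
    }

    for (t = 0; t < NUM_THREADS; t++) {
        pthread_join(tid[t], NULL);  /* wait, then fold in the partial sum */
        total += work[t].result;
    }

    printf("total = %lld\n", total);
    free(data);
    return 0;
}

The design point worth noting: each thread gets its own slice and its own result field, so there's no locking at all. If your parsing needs shared state between threads, you'll be into mutexes and condition variables, and that's where the optimisation effort (and the debugging pain) really starts.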
As for a motherboard, my initial thought is something built around an X38 chipset, such as the Asus P5E3. I'm going to have to do some research into the architecture before I can be completely sure on that, though.