Jonathan Dursi


Understanding Partial Order Alignment for Multiple Sequence Alignment

Over at the Simpson Lab blog, I have an explainer on Understanding Partial Order Alignment, an under-appreciated method for multiple sequence alignment; I hope the explanation there (and explanatory implementation) is useful to those exploring graph-based approaches to alignment.
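The full explainer (and the explanatory implementation) lives at the link above, but the kernel of the idea fits in a few lines: instead of aligning a sequence against a flat string, POA aligns it against a directed acyclic graph of bases, using the usual Needleman-Wunsch recurrence but looking back along graph edges rather than to a single previous row. Below is a minimal, self-contained C sketch of that core step; the node layout, scoring values, and toy graph are illustrative assumptions of mine, not the Simpson Lab code.

```c
/* A hedged sketch of the core of partial order alignment: score a
   sequence against a DAG of bases with dynamic programming, where each
   cell looks back along graph edges. Scores and sizes are arbitrary. */
#include <stdio.h>
#include <string.h>

#define MAXPRED  4
#define MATCH    1
#define MISMATCH -1
#define GAP      -1

/* One graph node: a base plus the indices of its predecessor nodes.
   Nodes are assumed to be stored in topological order. */
typedef struct { char base; int npred; int pred[MAXPRED]; } Node;

static int max2(int a, int b) { return a > b ? a : b; }

/* Global-alignment score of seq against a topologically ordered DAG.
   Assumes the graph and sequence are small (< 64 nodes/characters). */
static int poa_score(const Node *g, int n, const char *seq)
{
    int m = (int)strlen(seq);
    /* S[i][j]: best score aligning the graph up to node i (1-based)
       with the first j characters of seq; row 0 is a virtual start. */
    static int S[64][64];

    for (int j = 0; j <= m; j++)
        S[0][j] = j * GAP;

    for (int i = 1; i <= n; i++) {
        const Node *v = &g[i - 1];
        for (int j = 0; j <= m; j++) {
            int best = -1000000;
            /* A node with no predecessors hangs off the virtual start. */
            int np = v->npred ? v->npred : 1;
            for (int k = 0; k < np; k++) {
                int p = v->npred ? v->pred[k] + 1 : 0;
                best = max2(best, S[p][j] + GAP);        /* gap in seq   */
                if (j > 0) {
                    int sub = (v->base == seq[j - 1]) ? MATCH : MISMATCH;
                    best = max2(best, S[p][j - 1] + sub); /* (mis)match  */
                }
            }
            if (j > 0)
                best = max2(best, S[i][j - 1] + GAP);    /* gap in graph */
            S[i][j] = best;
        }
    }
    /* For simplicity, read the score at the last node; a fuller version
       would maximize over all sink nodes. */
    return S[n][m];
}

int main(void)
{
    /* Toy graph encoding "GAT" with an alternative "C" branch:
       G -> A -> T and G -> C -> T, nodes in topological order. */
    Node g[] = {
        { 'G', 0, {0} },     /* node 0 */
        { 'A', 1, {0} },     /* node 1 */
        { 'C', 1, {0} },     /* node 2 */
        { 'T', 2, {1, 2} },  /* node 3 */
    };
    printf("score(GAT) = %d\n", poa_score(g, 4, "GAT"));
    printf("score(GCT) = %d\n", poa_score(g, 4, "GCT"));
    return 0;
}
```

Both test sequences score a perfect 3 here, which is exactly the point: the graph holds both variants of the multiple alignment at once.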

Continue...

HPC+MPI on RCE Podcast

In the latest episode of the RCE podcast, Jeff Squyres, Brock Palen, and I spoke about the HPC and MPI series of blog posts and the community reaction. It was a really interesting discussion; Brock has worked closely with an enormous variety of researchers and helps run an HPC centre, while Jeff deeply understands HPC networking, from getting ones and zeros onto the wires at the lowest levels of the hardware up to being an extremely active member of the MPI Forum. I was really pleased that...

Continue...

Coarray Fortran Goes Mainstream: GCC 5.1

This past week’s release of GCC 5.1 contains at least two new features that are important to the big technical computing community: OpenMP 4/OpenACC offloading to Intel Phi/NVIDIA accelerators, and compiler support for Coarray Fortran, with the communications layer provided by the OpenCoarrays Project. While I don’t want to downplay the importance or technical accomplishment of the OpenMP 4 offloading now being available, I think it’s important to highlight the widespread availability, for the first time, of a tried-and-tested post-MPI programming model for HPC; and one...
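Coarray examples are, of course, Fortran; but the other headline feature can be sketched in a few lines of C. The snippet below is a minimal illustration of the OpenMP 4 target construct of the kind GCC 5.1 can now offload, not anything taken from the release itself; the array size and contents are arbitrary choices of mine.

```c
/* Minimal OpenMP 4 offloading sketch: the target construct maps data
   to an attached accelerator, runs the loop there, and maps the result
   back (falling back to the host if no device is present). */
#include <stdio.h>

#define N 1024

int main(void)
{
    float a[N], b[N], c[N];

    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * i;
    }

    /* Copy inputs to the device, compute there, copy the result back. */
    #pragma omp target map(to: a, b) map(from: c)
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[42] = %f\n", c[42]);
    return 0;
}
```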

Continue...

In Praise of MPI Collectives and MPI-IO

While I have a number of posts I want to write on other topics and technologies, there is one last followup I want to make to my MPI post. Having said what I think is wrong about MPI (the standard, not the implementations, which are of very high quality), it’s only fair to say something about what I think is very good about it. And why I like these parts gives the lie to one of the most common pro-MPI arguments I’ve been hearing for years;...
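To make concrete what a collective buys you, here is a minimal sketch assuming nothing beyond a standard MPI installation: a single MPI_Allreduce call states the intent — combine everyone’s partial sums and hand everyone the result — and leaves the choice of algorithm and the exploitation of the network topology to the implementation, rather than to a hand-rolled tree of point-to-point sends and receives. The partial-sum computation is an arbitrary illustrative choice.

```c
/* Minimal MPI collective sketch: each rank contributes one value and
   MPI_Allreduce combines them, with the algorithm left to the library. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank's (arbitrary) partial result. */
    double local = (double)(rank + 1);
    double total = 0.0;

    /* One call replaces a hand-coded reduction tree, and every rank
       gets the combined answer. */
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, total);

    MPI_Finalize();
    return 0;
}
```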

Continue...

Objections, Continued

Thanks for all of the comments about my HPC and MPI post, on the post itself, on twitter, or via email. While many of the comments and discussions were positive, it won’t surprise you to learn that there were objections, too; so I thought I’d keep updating the Objections section in a new post. I’ve also posted one (hopefully last) followup. But do keep sending in your objections! Further Objections: You’re saying we’d have to rewrite all our code! If someone had suggested I...

Continue...

HPC is dying, and MPI is killing it

Pictured: The HPC community bravely holds off the incoming tide of new technologies and applications. Via the BBC. This should be a golden age for High Performance Computing. For decades, the work of developing algorithms and implementations for tackling simulation and data analysis problems at the largest possible scales was obscure, if important, work. Then, suddenly, in the mid-2000s, two problems — analyzing internet-scale data, and interpreting an incoming flood of genomics data — arrived on the scene with data volumes and performance requirements which...

Continue...