DOE CODE: The US Department of Energy’s New Software Services and Search Platform

I often enjoy thinking about how much software has changed our lives, how much software exists in the world, and how much is being written. I also like to consider how quickly the rate at which we write software is changing and the implications that this has for society. This is especially important for science, where publications tend to summarize work done from some perspective, but the real record of the work may be the software itself. But what do we really do today to preserve and, if you will, curate collections of software, especially scientific software and the business software that supports science?

About two years ago I was asked to join an effort by the US Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) that, in part, looked at this question. The effort was to develop a new software services and search platform for the DOE’s vast – and I do mean vast – software collection, including both open and closed source projects. This effort came to be known as DOE CODE and the Alpha version of the platform was released in November 2017.

How vast is vast?

DOE CODE is the latest in a long line of names for a software center that has supported the scientific community since 1960, when it was started by Margaret Butler at Argonne National Laboratory. At the time it was called the Argonne Code Center, and it later became the National Energy Software Center. In 1991, the center moved from Argonne National Laboratory to OSTI headquarters in Oak Ridge, Tennessee, and was renamed the Energy Science and Technology Software Center (ESTSC). The ESTSC website was launched in 1997, and the effort to develop DOE CODE as the new public-facing platform for the software center started in 2017. In the 58 years since the center was founded, over 3600 software products have been submitted by national laboratories and DOE grantees, many of which are still active. Each record includes all of the metadata about the software, as described by DOE Order 241.4, as well as the code, in either binary or source form.

3600 software packages is a truly vast collection of software. However, when we started the project, we noticed while searching around on GitHub that many projects supported by DOE funds were not cataloged. How many? Well, based on the fact that a GitHub search for “Department of Energy” returned over one million hits at the time, and using the assumption that a file or class would be between one hundred and one thousand lines, we estimated that the number of DOE software packages on GitHub alone that were not in the existing catalog was between one thousand and ten thousand packages. Further investigation by Ian Lee from LLNL suggested that the true count was closer to the lower of the two numbers. This does not include projects on other sites such as BitBucket, but if we assume that there are roughly as many packages on those sites, then our estimate of the total number of DOE software packages is somewhere between 4000 and 7000. While we may never catalog all of those packages, it is clear that open source software is a very important part of the DOE’s software development community and that the effort to redevelop the software services and search platform needed to strongly consider this point.


It became clear over the course of initial requirements gathering exercises that the DOE needed a new software services and search platform that could simultaneously meet the needs of both open and closed source projects. The platform also needed to assist with OSTI’s continuing mission to collect, preserve, and disseminate software, which is considered by the DOE to be another type of scientific and technical information. (This post will not address the topic of limited and/or classified software.) Figuring out exactly how the DOE community worked with open source software would be a challenge on its own, but establishing a balance between the needs of both open and closed software projects required significantly more effort. This new effort and the new service it would spawn were distinct enough that a new name was warranted, thus the adoption of the much simpler “DOE CODE” over previous names.

DOE CODE supports OSTI’s efforts to collect, preserve, and disseminate software artifacts by acting as a single point of entry for those who need to discover, submit, or create projects. Instead of mandating that all DOE software exist in one place, DOE CODE embraces the reality that most projects exist somewhere on the internet and are generally accessible in one way or another. DOE CODE reaches out to these repositories directly or, in the case of GitHub and BitBucket, integrates with their programming interfaces. DOE CODE itself exposes a programming interface so it can be used the same way by libraries or similar services around the world.

Users can provide their own repositories or use repositories hosted by OSTI through GitLab or through a dedicated DOE CODE GitHub community. DOE CODE also centralizes information on software policy for the DOE and links to developer resources from, for example, the Better Scientific Software project. The platform can also mint Digital Object Identifiers (DOIs) for software projects, which was a big request from the community early in development. To date, many of the projects in OSTI’s full catalog have been migrated to DOE CODE, and many of these projects have been assigned DOIs as well.

All of this is on top of an interface that is streamlined and easy to use. Adding project metadata is often as simple as providing the repository address and letting DOE CODE do the rest to scrape it from the repo!

These features combine to provide an experience that is focused on enabling social coding while simultaneously integrating software, publications, data, and researcher details to create a holistic picture of DOE development activities. Part of this includes embracing social media and allowing users to share what they find through their favorite social media platform.

Searching is as simple as using the search bar, but advanced options such as language and license are also available. The Alpha release of DOE CODE contained about 700 open source software packages, and the total has since grown to 874, which works out to about 1.5 new additions per day since the launch.

Custom Deployments

My favorite feature of DOE CODE, as its lead architect, is that it is open source itself. It is, in the words of my nephew, “epically meta” to build a service like DOE CODE that can list itself as an open source project. In fact, throughout the development process we used the DOE CODE repo on GitHub as our primary test case for working with source code repositories.

The open source nature of DOE CODE is my favorite feature because it means that the code can be reused and that this level of software project curation can be adopted, modified, and explored wherever it is needed. OSTI’s deployment of DOE CODE fits into their existing infrastructure as a plugin of sorts. It feeds information to their ELink service, which ingests the metadata and executes a number of data processing workflows in the background to process the information according to a number of DOE orders, policies, business rules, and basic technical requirements. ELink then publishes this information to the main site and provides some additional metadata to DOE CODE. It doesn’t have to work that way, though. Oak Ridge National Laboratory (ORNL) is in the process of deploying a DOE CODE clone, called ORNL Code, that leaves out the backend processing, restyles the site, and adds Single Sign-On (SSO) authentication to integrate with ORNL’s other applications.

What we have found with the deployment of ORNL Code, which I am also leading, is that it is relatively straightforward to do custom deployments of DOE CODE. That was by design, but it is always good to verify it! We are also taking the next step at ORNL by putting ORNL Code in the cloud on Amazon Web Services. I remain hopeful that other organizations will try this too.

Building the Platform with Strong Community Backing

The effort to build DOE CODE was one of the most vibrant and fast-paced projects I’ve worked on in my time in the national laboratories. Yes, I have worked on projects that took less than the sixteen months DOE CODE needed from conception to Alpha launch, but I have rarely worked on projects with such a large amount of engagement and scope that still launched on time. The key to this success, in my opinion, was that we engaged as many people from the DOE community as possible and we kept every possible line of communication open. Part of this included, as previously discussed, releasing DOE CODE itself as an open source project.

Early in our development process we established about eighteen separate requirements teams that we used throughout development for guidance and testing. I lost count of the number of people that we interviewed when it was around eighty-eight, and that was early in January 2017. These teams were composed of members from various communities of interest within the US national laboratories. Each team started with about five to eight people, but some quickly swelled to eight to ten. One team went from eight people to twenty-seven, which was the phone call where I learned the consequences of saying “Sure, invite your friends!” We also had good community interactions on the GitHub site, on Twitter, and at conferences during the development cycle. I personally presented a talk on the project many times, sometimes multiple times in a single day. By the end of the year, we had presented thirteen invited talks on DOE CODE, which is the most invited talks I have ever presented in a single year.

To say that the DOE CODE team is grateful and indebted to the broader DOE community is an understatement, but it is a good start. We certainly could not have built the platform without their help and the many great people behind the scenes at OSTI and ORNL as well.

Getting Involved

If you are interested in getting involved or learning more, you should check out the DOE CODE site or the GitHub community. You can also reach out on Twitter: OSTI maintains an active Twitter account (@OSTIgov) and I am always available (@jayjaybillings).

What does a good code sample look like?

Here’s an example of a good code sample for entry-level programmers.

I review a lot of code samples, both for work and for pleasure. Grady Booch once tweeted that good coders read code, and I think he is absolutely correct. How can we improve our skills as developers if we don’t look around at other code to learn what’s new and helpful? So, I read samples, examples, tutorials, production code, research code, test cases, code in languages I don’t know, code from “top coder challenges,” and any other type of code I can find. I also review code samples from job applicants.

Inevitably, the responses I receive when I ask for a code sample are “Really?” or “I don’t have a code sample. Can we skip to the interview?” I wish that were the worst of it: over half of my job applicants ghost me when I request the code sample. This is especially funny since my job posts include a special section that says “We will request a code sample from you.” I believe this happens because people are busy balancing priorities or don’t know what a good code sample looks like. So here are my suggestions for writing a good one:

Distribute only code you’re allowed to share. Never give someone a sample of proprietary code that you wrote for work. Don’t even suggest it. Take a half hour or hour to write a good sample, push it to GitHub, and then finish watching Future Man, Star Trek: Discovery, or the ballgame that you had on while you slung it together. On the other hand, if you work on open-source projects for a living, feel free to submit one of your open-source codes and suggest reviewers run Gitstats or something to see your contributions.

Include five or more classes that demonstrate a good design philosophy. Show off how well you understand object-oriented concepts such as inheritance, realization, and delegation by creating a simple-but-thorough design with distinct classes. This shows not only how well you understand the concepts but also that you can design things well. Plus it demonstrates that you know basic things like how to call functions.
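For instance, a Java sample might demonstrate all three concepts in a handful of small classes. The classes below are purely hypothetical, just to show the shape of such a design:

```java
// A tiny, hypothetical design showing realization (an interface),
// inheritance (an abstract base class), and delegation in one place.

// Realization: concrete greeters implement this interface.
interface Greeter {
	String greet(String name);
}

// Inheritance: concrete greeters share behavior from an abstract base.
abstract class BaseGreeter implements Greeter {
	protected String salutation() {
		return "Hello";
	}

	@Override
	public String greet(String name) {
		return salutation() + ", " + name + "!";
	}
}

class FormalGreeter extends BaseGreeter {
	@Override
	protected String salutation() {
		return "Good day";
	}
}

// Delegation: Receptionist holds a Greeter and forwards calls to it.
class Receptionist {
	private final Greeter greeter;

	Receptionist(Greeter greeter) {
		this.greeter = greeter;
	}

	String welcome(String name) {
		return greeter.greet(name);
	}
}

public class Main {
	public static void main(String[] args) {
		Receptionist desk = new Receptionist(new FormalGreeter());
		System.out.println(desk.welcome("Ada")); // prints "Good day, Ada!"
	}
}
```

Even a toy design like this shows an interface being realized, a base class being extended, and a call being delegated, which covers most of what a reviewer is looking for.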

Do some I/O. Showing some input and output operations in your code demonstrates that you know not only how to handle those all-important functions but also how to use logic and loops. Both are very important language constructs that you will almost always be asked about.

Show some tests. Whether or not the job will require a lot of software testing, showing how much you know about testing only makes you look awesome. If you don’t know about testing, then “treat yo’ self” to the Wikipedia articles on software testing and unit testing.

Use Git. I don’t know how many teams use Git these days, but I’m willing to bet most of them do. As with testing, this is a technology you want to show you know something about regardless of whether or not the team uses it, but especially if they do.

Include a build system. This might not seem like the most obvious thing to include in your code sample, but it’s important because I need to know that you know how to build your code. To steal and adapt a line from the movie “Three Amigos:”

Well, you told me your code sample has a build system. And I just would like to know if you know what a build system is. I would not like to think that a person would tell someone he has a build system, and then find out that that person has no idea what it means to have a build system.

Is it ok to use an auto-generated build system from an IDE? If it can be used without starting the IDE, like a Makefile or Maven script, then yes. If not, no.

Document your sample. The single biggest problem I see with code samples is that there is no documentation. I don’t care how “self-documenting” you think your code is: I don’t have a clue what it’s supposed to do, and you won’t in five years either. More importantly, API-level documentation and skills with Javadoc and Doxygen are crucial in modern development shops. No user off the street is going to know how to use char ** getVal(int a, const int & a0, const double & a1) const; correctly! Furthermore, did you include a README to tell me what functionality your code offers, who wrote it, and how to contact the author?
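As a sketch of what API-level documentation buys you, here is a hypothetical Javadoc-documented method; a reader can call it correctly without ever opening the implementation:

```java
public class Main {
	/**
	 * Scales a value into the interval [min, max].
	 *
	 * @param value the raw value to scale, assumed to lie in [0, 1]
	 * @param min   the lower bound of the target interval
	 * @param max   the upper bound of the target interval
	 * @return the scaled value, min + value * (max - min)
	 */
	public static double scale(double value, double min, double max) {
		return min + value * (max - min);
	}

	public static void main(String[] args) {
		// The Javadoc alone tells a caller what to pass and what to expect.
		System.out.println(scale(0.5, 0.0, 10.0)); // prints 5.0
	}
}
```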

Make it pretty. Presentation matters. You’re probably worried about being dressed nicely to make a good first impression—shouldn’t your code also be “dressed nicely” to make a good first impression? Clean code is more readable than messy code and makes the review process easier.

Be transparent. Don’t lie or hide things about your code sample; put them out in the open, be transparent, and own it. If reviewers don’t like something in your code sample, adopt a growth mentality and ask them if they’ll let you fix it and submit it for re-evaluation. If you do something in your code sample that you know is wrong, document why you did it with something such as “A realistic implementation would replace this version that scales a random number from rand(), but this is sufficient for my code sample.”

I hope this helps you write a better code sample. I wrote a simple code sample that you can check out on the ORNL Training GitHub page. If you have questions or want to complain about either this article or the samples, drop me a Tweet at @jayjaybillings.

Dynamic Visitor Pattern in Java

The Visitor Pattern is a great way to extend the functionality of a class without extending the class per se. The gory details are very well explained on Wikipedia, but it works by a visiting class, the “Visitor,” calling a second class that “accepts” the Visitor. Once the Visitor is accepted, the second class reveals its type by calling a specific, typed method on the Visitor. Now that the Visitor knows the type of the second class, it can behave in a highly specific way that, in effect, extends the behavior of the second class. The Wikipedia page has good examples of the basic pattern implemented in multiple languages, including Java.
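For reference, the basic pattern can be sketched in a few lines of Java. The shape classes below are my own toy example rather than the Wikipedia version, but the double dispatch is the same:

```java
// A minimal sketch of the classic Visitor pattern: each element "accepts"
// the visitor and reveals its concrete type by calling a typed visit() back.
interface ShapeVisitor {
	void visit(Circle circle);
	void visit(Square square);
}

interface Shape {
	void accept(ShapeVisitor visitor);
}

class Circle implements Shape {
	@Override
	public void accept(ShapeVisitor visitor) {
		// Double dispatch: this call resolves to visit(Circle).
		visitor.visit(this);
	}
}

class Square implements Shape {
	@Override
	public void accept(ShapeVisitor visitor) {
		visitor.visit(this);
	}
}

// Extends the behavior of Circle and Square without modifying either class.
class NamingVisitor implements ShapeVisitor {
	String lastName;

	@Override
	public void visit(Circle circle) {
		lastName = "circle";
	}

	@Override
	public void visit(Square square) {
		lastName = "square";
	}
}

public class Main {
	public static void main(String[] args) {
		NamingVisitor visitor = new NamingVisitor();
		Shape shape = new Circle();
		shape.accept(visitor);
		System.out.println(visitor.lastName); // prints "circle"
	}
}
```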

The Visitor Pattern has one major downside: the exact list of classes that can be visited is completely specified on the Visitor interface. This presents a problem if a new class needs to be visited that is not available on the interface. The easiest solution is to add the new class to the interface, but that is not possible with third-party code – downstream developers in general do not modify the APIs that they call.

Dynamic Visitors for New Types

So how can developers allow visitation for other classes or subclasses without requiring extension of the primary Visitor interface? The trick is to separate the act of accepting the Visitor from the final visitation by way of a delegate. The code looks something like the following, which is taken from the Eclipse January project where I committed it yesterday:

/**
 * A simple, templated Visitor interface as part of the Visitor pattern. This
 * interface is implemented by classes that work with IVisitHandlers to
 * dynamically and generically extend the visitation capabilities of the forms
 * package. Using this interface in place of the IComponentVisitor interface
 * allows clients to create custom Components or other structures and visit
 * them dynamically, whereas the IComponentVisitor interface is static and does
 * not allow extension outside the basic components.
 *
 * @author Jay Jay Billings
 */
public interface IVisitor<T> {

	/**
	 * This operation directs the visitor to visit the provided data element.
	 *
	 * @param element The data element that should be visited.
	 */
	public void visit(T element);

}

/**
 * This is a simple interface for registering visitors under a generic visitor
 * pattern. It is designed such that implementers should use the IVisitors that
 * are registered with the set() operation to visit the objects passed to the
 * visit() operation. This allows run-time registration of generic visitation
 * callbacks without the need for a verbose, static interface such as
 * IComponentVisitor. Registration is as simple as associating a Class with an
 * implementation of IVisitor<T>.
 *
 * This class should not be used in general for all the data types in Forms. It
 * is better to implement IComponentVisitor or extend SelectiveComponentVisitor
 * in those cases because it minimizes the code and avoids bugs. This class and
 * the IVisitor interface are meant to be used only for classes that are not
 * already available on those two entities.
 *
 * @author Jay Jay Billings
 */
public interface IVisitHandler {

	/**
	 * This operation associates an IVisitor with a Class.
	 *
	 * @param classType The Class that should be associated with the Visitor
	 * @param visitor The IVisitor that will be invoked for the given class.
	 */
	public void set(Class classType, IVisitor visitor);

	/**
	 * This operation uses the registered IVisitor to visit the injected
	 * object.
	 *
	 * @param objectToVisit The object that should be visited.
	 */
	public void visit(Object objectToVisit);

}

/**
 * This interface defines a visitation contract where visitation requests are
 * granted through a delegate provided by an IVisitHandler. This interface is an
 * alternative to IVisitable for classes that may need to execute visitation code
 * for classes not available on the IComponentVisitor interface.
 *
 * @author Jay Jay Billings
 */
public interface IGenericallyVisitable {

	/**
	 * This operation will accept a visit handler instead of a typed visitor
	 * that will then be called as a delegate for direct visitation.
	 *
	 * @param visitHandler The delegate that handles the actual visitation.
	 */
	public void accept(IVisitHandler visitHandler);

}

This works by providing a second interface – IGenericallyVisitable – that classes can realize to accept a delegate – the IVisitHandler – instead of directly accepting the Visitor. This allows the IVisitHandler to smoothly direct visitation to a generic IVisitor<T> that is typed specifically for a new class that is not part of the original Visitor interface.
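To make this concrete, here is a minimal sketch of how the pieces might be wired together, with the three interfaces repeated so it stands alone. The map-based handler and the CustomComponent class are my own illustration and are simpler than January’s actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// The three interfaces from the post, repeated so this sketch stands alone.
interface IVisitor<T> {
	void visit(T element);
}

interface IVisitHandler {
	void set(Class classType, IVisitor visitor);
	void visit(Object objectToVisit);
}

interface IGenericallyVisitable {
	void accept(IVisitHandler visitHandler);
}

// A minimal map-based handler: it looks up the IVisitor registered for the
// runtime class of the object and delegates the visit to it.
class MapVisitHandler implements IVisitHandler {
	private final Map<Class, IVisitor> visitors = new HashMap<>();

	@Override
	public void set(Class classType, IVisitor visitor) {
		visitors.put(classType, visitor);
	}

	@Override
	@SuppressWarnings("unchecked")
	public void visit(Object objectToVisit) {
		IVisitor visitor = visitors.get(objectToVisit.getClass());
		if (visitor != null) {
			visitor.visit(objectToVisit);
		}
	}
}

// A new class that was never added to any static visitor interface.
class CustomComponent implements IGenericallyVisitable {
	String label = "custom";

	@Override
	public void accept(IVisitHandler visitHandler) {
		visitHandler.visit(this);
	}
}

public class Main {
	public static void main(String[] args) {
		MapVisitHandler handler = new MapVisitHandler();
		// Upfront registration of a typed visitor for the new class.
		handler.set(CustomComponent.class,
				(IVisitor<CustomComponent>) c -> System.out.println("Visited " + c.label));
		new CustomComponent().accept(handler);
	}
}
```

One subtlety in this sketch: getClass() only matches exact runtime types, so subclasses would need their own registrations.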

The January repo has a good example showing how this works. All of the code there and here is licensed under the Eclipse Public License version 1.0.


There are a couple of downsides to this approach that may keep it from working for all projects. First and foremost, the compile-time type checking provided by the full visitor interface is not present for a generic visitor interface, and run-time ClassCastExceptions can occur when the proper visit() operation is not available. In that case the error looks like the following:

java.lang.ClassCastException: cannot be cast to
at org.eclipse.january.form.BasicVisitHandler.visit(
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(


In C++, this can be handled a little differently than in Java because explicit specialization of function templates is available… which would have been so much nicer to have than falling back to a generic handler built on the Object class.

Second, the exact means by which the delegation to the IVisitHandler happens most likely requires either upfront registration or hardwiring in IVisitHandler implementations. In this case, January goes with upfront registration, which actually isn’t too bad.

Finally, delegation always comes with a performance hit if it isn’t handled efficiently.

Comments are welcome to my Twitter account! @jayjaybillings


NVIDIA CUDA 7.5 on Fedora 23 with NVIDIA Optimus Technology

I have a Dell XPS 15 that has both Intel and NVIDIA graphics using NVIDIA’s Optimus Technology:

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)

Getting CUDA 7.5 to work was straightforward using Bumblebee, as described on the Fedora Bumblebee page. I used the managed proprietary option to get the latest NVIDIA driver for my system. Both optirun and primusrun work; however, they report different numbers for glxgears:

$ optirun glxgears
10162 frames in 5.0 seconds = 2032.270 FPS

$ primusrun glxgears
292 frames in 5.0 seconds = 58.370 FPS
primus: warning: dropping a frame to avoid deadlock
primus: warning: timeout waiting for display worker

I installed the CUDA Toolkit using the dnf repository provided by NVIDIA. It is important to install only the Toolkit from this repository, not the full CUDA package that it also provides, since Bumblebee has already configured an appropriate Optimus-ready NVIDIA driver. So, instead you can just run

sudo dnf install cuda-toolkit-7-5

Additional configuration works as specified in the User’s Manual and Getting Started guides. I had trouble getting the examples to compile because Fedora 23, which I’m on, defaults to gcc 5.3.1 and, unfortunately, CUDA 7.5 requires gcc 4.9 or earlier. I compiled GCC 4.9 from scratch and then manually modified the makefiles of each sample I wanted to compile to point to “g++49” instead of “g++”. You’ll notice that I told the gcc configuration at build time to add the 49 suffix with the --program-suffix=49 option. Aside from that, compilation and execution were simple enough, with two catches:

  • It was necessary to override the library path to point to Bumblebee’s version of the CUDA shared library instead of a 32-bit version that somehow ended up in /usr/lib.
  • Contrary to what the Bumblebee documentation suggests, you must use optirun or primusrun to execute the samples. I’m not sure if this is required on more basic programs yet.

So, that looks something like this for the nbody example:

LD_LIBRARY_PATH=/usr/lib64/nvidia-bumblebee primusrun ./nbody
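For reference, the out-of-tree GCC build that produces the suffixed compilers looks something like the following. The version number and configure flags here are examples rather than the exact commands I ran:

```shell
# Build GCC 4.9 out of tree with a "49" suffix so it can coexist with
# the system gcc. Version and flags are illustrative; adjust to taste.
tar xf gcc-4.9.4.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.9.4/configure --program-suffix=49 \
    --enable-languages=c,c++ --disable-multilib
make -j"$(nproc)"
sudo make install   # installs gcc49, g++49, etc.
```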