The Top 10 things I want in a new Stargate series

Here’s a list of some things that I think would be really cool to see in a new Stargate series. Lots of spoilers here though, so be warned!

One Villain

Can we have one villain? Like, just a hardcore over-planner who starts wrecking the peace and prosperity built after the other shows ended, precisely because he’s been planning for so long. No superpowers or special abilities beyond being able to outmaneuver everyone with minimal resources and less technology.

Furlings

Furlings were one of the Four “original” Races that formed a great alliance across galaxies, which also included the Ancients, the Asgard, and the Nox. They never made an appearance in SG-1 (except for the parody in “200”), and it would be awesome to see them in a new series. In fact, it could be really cool for a Furling to be the villain. Perhaps a rogue Furling general doesn’t buy the Tau’ri as the Fifth Race?

No Replicators

Please no more replicators in any galaxy. Let’s just say that the Asurans are gone and that the Dakara superweapon was really, really successful. Admittedly a Lego version in a museum or for sale in stores would be a funny easter egg.

More General Carter and Walter

I think this goes without explanation to the diehard fans out there. It would be great to see lots of familiar faces since there would probably not be a time jump, but definitely these two!

More aliens

Revive the Asgard and Tollans. Say the Asgard secretly hid their consciousnesses on a back-up system and transmitted it through the gate while the planet blew up so that the replicators couldn’t track it. Years later, the Tau’ri find them. The Tollans had a colony that was destroyed, but maybe they got tricky and some of them evacuated from the colony, and perhaps even from the new Tollan homeworld? It would also be nice to see the Tok’ra and Jaffa thriving, maybe even with a super strong alliance 10+ years later. It would be great to see more of the Nox, Travelers, Vanir, and other alien races too.

The Tok’ra should get something cool though. Those poor bastards deserve a break!

And did I mention the Furlings already?

Finish the war with the Wraith

So the superhive is gone, but there are a whole bunch of Wraith back in Pegasus munching on humans. It would be really nice to see the end of the Wraith war because otherwise one must assume that they’ll pay the Milky Way a visit sometime. There’s also the possibility that one of them gets the ZPM idea like Todd did. Cheeky bugger.

Main gate room is in Atlantis and on the Moon

I think it would be really cool to stretch a bit and do some local colonization. Moon bases and colonies on Mars are very contemporary topics, so why not say the SGC was moved out of the Cheyenne Mountain Complex in 2007 and relocated to Atlantis, which was moved from the SF Bay to, say, the Moon?

Putting the gate on the Moon is a great idea because it keeps all the attention away from Earth, which for a very long time bore the brunt of many a nasty attack. Instead, plug three ZPMs into Atlantis, park it on the Moon, and keep all the alien attention there. You could do some really cool stuff with Tau’ri-made rings that transport people from sites on Earth to the Moon, fly shuttles, etc.

This doesn’t exactly line up with the books that were published after SGA ended, but I don’t know if those books are canonical. If they are, then I’m sure we could find another Lantean cityship hanging out unused. Maybe there really is one submerged at the bottom of the Atlantic! 😉

Ba’al

Sure, I saw Stargate Continuum… but are they really sure they got all the clones? I mean, look at this message from Lord Ba’al in 2015.

Retrieve the Destiny, study it, and forget about it

I was not the biggest fan of SGU. I enjoyed the format of SG-1 and Atlantis much more. I would like to see the crew of the Destiny be rescued and for the ship itself to get “the ol’ Daniel Jackson treatment.” And… then it would be great to not see it or anything else from that series anymore. However, it would be really cool for a new SG team to explore gates in some of the galaxies that the Destiny seeded.

A new “X-50x” line of Tau’ri ships

I think one of the coolest things possible in a new Stargate series would be a new line of Tau’ri ships that combine the best lessons and technology from all the races previously encountered. This could be a ship with a hull as strong as the Superhive, shields and drones like Lantean ships, Asgard plasma cannons, multiple types of support ships, self-healing abilities like a Wraith ship, and redundancy and hyperdrive technology from the Destiny.

I always loved that Tau’ri ships were small compared to those of other races. That always took me back to O’Neill’s statement that Jaffa staffs were weapons of fear, but the P-90 was a weapon of war. Small Tau’ri ships are not fearsome warships, but they are warships nonetheless.

That’s it! Hope you like some of the stuff on this list and are joining the #StargateNow tweetstorm tonight. If you have any thoughts, I’m @jayjaybillings on Twitter.

DOE CODE: The US Department of Energy’s New Software Services and Search Platform

I often enjoy thinking about how much software has changed our lives, how much software exists in the world, and how much is being written. I also like to consider how quickly the rate at which we write software is changing and the implications that this has for society. This is especially important for science, where publications tend to summarize work done from some perspective, but the real record of the work may be the software itself. But what do we really do today to preserve and, if you will, curate collections of software, especially scientific software and the business software that supports science?

About two years ago I was asked to join an effort by the US Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) that, in part, looked at this question. The effort was to develop a new software services and search platform for the DOE’s vast – and I do mean vast – software collection, including both open and closed source projects. This effort came to be known as DOE CODE and the Alpha version of the platform was released in November 2017.

How vast is vast?

DOE CODE is the latest in a long line of names for a software center that has supported the scientific community since 1960 and was started by Margaret Butler at Argonne National Laboratory. At the time it was called the Argonne Code Center and later became the National Energy Software Center. In 1991, the center moved from Argonne National Laboratory to OSTI headquarters in Oak Ridge, Tennessee, and was renamed the Energy Science and Technology Software Center (ESTSC). The ESTSC website was launched in 1997, and the effort to develop DOE CODE as the new public-facing platform for the software center started in 2017. In the 58 years since the center was founded, over 3600 software products have been submitted by national laboratories and DOE grantees, many of which are still active. Each record includes all of the metadata about the software, as described by DOE Order 241.4, as well as the code, either in binary or source form.

3600 software packages is a truly vast collection of software. However, when we started the project, we noticed while searching around on GitHub that many projects supported by DOE funds were not cataloged. How many? Well, based on the fact that a GitHub search for “Department of Energy” returned over one million hits at the time, and using the assumption that a file or class would be between one hundred and one thousand lines, we estimated that the number of DOE software packages on GitHub alone that were not in the existing catalog was between one thousand and ten thousand. Further investigation by Ian Lee from LLNL suggested that it was closer to the lower of the two numbers. This does not include projects on other sites such as BitBucket or SourceForge.net, but if we assume that there are roughly as many packages on those sites, then our estimate of the total number of DOE software packages is somewhere between 4000 and 7000. While we may never catalog all of those packages, it is clear that open source software is a very important part of the DOE’s software development community and that the effort to redevelop the software services and search platform needed to strongly consider this point.

DOE CODE

It became clear over the course of initial requirements gathering exercises that the DOE needed a new software services and search platform that could simultaneously meet the needs of both open and closed source projects. The platform also needed to assist with OSTI’s continuing mission to collect, preserve, and disseminate software, which is considered by the DOE to be another type of scientific and technical information. (This post will not address the topic of limited and/or classified software.) Figuring out exactly how the DOE community worked with open source software would be a challenge on its own, but establishing a balance between the needs of both open and closed software projects required significantly more effort. This new effort and the new service it would spawn were distinct enough that a new name was warranted, thus the adoption of the much simpler “DOE CODE” over previous names.

DOE CODE supports OSTI’s efforts to collect, preserve, and disseminate software artifacts by acting as a single point of entry for those who need to discover, submit, or create projects. Instead of mandating that all DOE software exist in one place, DOE CODE embraces the reality that most projects exist somewhere on the internet and are generally accessible one way or another. DOE CODE reaches out to these repos directly or, in the case of GitHub, BitBucket, and SourceForge.net, integrates directly with their programming interfaces. DOE CODE itself exposes a programming interface so it can be used the same way by libraries or similar services around the world.

Users can provide their own repositories or use repositories hosted by OSTI through GitLab or through a dedicated DOE CODE GitHub community. DOE CODE also centralizes information on software policy for the DOE and links to developer resources from, for example, the Better Scientific Software project. The platform can also mint Digital Object Identifiers (DOIs) for software projects, which was a big request from the community early in development. To date, many of the projects in OSTI’s full catalog have been migrated to DOE CODE, and many of these projects have been assigned DOIs as well.

All of this is on top of an interface that is streamlined and easy to use. Adding project metadata is often as simple as providing the repository address and letting DOE CODE do the rest to scrape it from the repo!

These features combine to provide an experience that is focused on enabling social coding while simultaneously integrating software, publications, data, and researcher details to create a holistic picture of DOE development activities. Part of this includes embracing social media and allowing users to share what they find through their favorite social media platform.

Searching is as simple as using the search bar, but advanced options such as language and license are also available. The Alpha release of DOE CODE contained about 700 open source software packages, and the total number has since grown to 874, which is about 1.5 new additions per day since the launch.

Custom Deployments

My favorite feature of DOE CODE, as its lead architect, is that it is open source itself. It is, in the words of my nephew, “epically meta” to build a service like DOE CODE that can list itself as an open source project. In fact, throughout the development process we used the DOE CODE repo on GitHub as our primary test case for working with source code repositories.

The open source nature of DOE CODE is my favorite feature because it means that the code can be reused and that this level of software project curation can be adopted, modified, and explored wherever it is needed. OSTI’s deployment of DOE CODE fits into their existing infrastructure as a plugin of sorts. It feeds information to their ELink service, which ingests the metadata and executes a number of data processing workflows in the background to process the information according to a number of DOE orders, policies, business rules, and basic technical requirements. ELink then publishes this information to the main OSTI.gov site and provides some additional metadata to DOE CODE. It doesn’t have to work that way though. Oak Ridge National Laboratory (ORNL) is in the process of deploying a DOE CODE clone, called ORNL Code, that leaves out the backend processing, restyles the site, and adds Single Sign-On (SSO) authentication to integrate with ORNL’s other applications.

What we have found with the deployment of ORNL Code, which I am also leading, is that it is relatively straightforward to do custom deployments of DOE CODE. That was by design, but it is always good to verify it! We are also taking the next step at ORNL by putting ORNL Code in the cloud on Amazon Web Services. I remain hopeful that other organizations will try this too.

Building the Platform with Strong Community Backing

The effort to build DOE CODE was one of the most vibrant and fast-paced projects I’ve worked on in my time in the National Laboratories. Yes, I have definitely worked on projects that were shorter than sixteen months from conception to Alpha launch, but I have rarely worked on projects with such a large amount of engagement and scope that launched on time sixteen months later. The key to this success, in my opinion, was that we engaged as many people from the DOE community as possible and we kept every possible line of communication open. Part of this included, as previously discussed, releasing DOE CODE itself as an open source project.

Early in our development process we established about eighteen separate requirements teams that we used throughout development for guidance and testing. I lost count of the number of people that we interviewed when it was around eighty-eight, and that was early in January 2017. These teams were composed of members from various communities of interest within the US national laboratories. Each team started with about five to eight people, but some of the teams quickly swelled to eight to ten. One team went from eight people to twenty-seven, which was the phone call where I learned the consequences of saying “Sure, invite your friends!” We also had good community interactions on the GitHub site, on Twitter, and at conferences during the development cycle. I personally presented a talk on the project many times, sometimes multiple times in a single day. By the end of the year, we had presented thirteen invited talks on DOE CODE, which is the most invited talks I have ever presented in a single year.

To say that the DOE CODE team is grateful and indebted to the broader DOE community is an understatement, but it is a good start. We certainly could not have built the platform without their help and the many great people behind the scenes at OSTI and ORNL as well.

Getting Involved

If you are interested in getting involved or learning more, you should check out the DOE CODE site or the GitHub community. You can also reach out on Twitter: OSTI maintains an active Twitter account (@OSTIgov) and I am always available (@jayjaybillings).

What does a good code sample look like?

Here’s an example of a good code sample for entry-level programmers.

I review a lot of code samples, both for work and for pleasure. Grady Booch once tweeted that good coders read code, and I think he is absolutely correct. How can we improve our skills as developers if we don’t look around at other code to learn what’s new and helpful? So, I read samples, examples, tutorials, production code, research code, test cases, code in languages I don’t know, code from “top coder challenges,” and any other type of code I can find. I also review code samples from job applicants.

Inevitably, the responses I receive when I ask for a code sample are “Really?” or “I don’t have a code sample. Can we skip to the interview?” I wish that was the worst of it: Over half of my job applicants ghost me when I request the code sample. This is especially funny since my job posts include a special section that says “We will request a code sample from you.” I believe this happens because people are busy balancing priorities or don’t know what a good code sample looks like. So here are my suggestions for writing a good code sample:

Distribute only code you’re allowed to share. Never give someone a sample of proprietary code that you wrote for work. Don’t even suggest it. Take a half hour or hour to write a good sample, push it to GitHub, and then finish watching Future Man, Star Trek: Discovery, or the ballgame that you had on while you slung it together. On the other hand, if you work on open-source projects for a living, feel free to submit one of your open-source codes and suggest reviewers run Gitstats or something to see your contributions.

Include five or more classes that demonstrate a good design philosophy. Show off how well you understand object-oriented concepts such as inheritance, realization, and delegation by creating a simple-but-thorough design with distinct classes. This shows not only how well you understand the concepts but also that you can design things well. Plus it demonstrates that you know basic things like how to call functions.

Do some I/O. Showing some input and output operations in your code demonstrates that you know not only how to handle those all-important functions but also how to use logic and loops. Both are very important language constructs that you will almost always be asked about.

Show some tests. Whether or not the job will require a lot of software testing, showing how much you know about testing only makes you look awesome. If you don’t know about testing, then “treat yo’ self” to the Wikipedia articles on software testing and unit testing.
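To make the tests tip concrete, here is a minimal, framework-free sketch of what “showing some tests” can look like. Everything here is hypothetical (the AdderTest class and its add() method are invented for illustration); a real sample would more likely use JUnit, but even plain checks show you think about typical and edge cases.

```java
// A minimal, framework-free test sketch. In a real code sample you would
// more likely use JUnit, but even plain checks demonstrate test thinking.
public class AdderTest {

    /** The (hypothetical) unit under test. */
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Typical case
        check(add(2, 3) == 5, "2 + 3 should be 5");
        // Edge cases: zero and negative operands
        check(add(0, 0) == 0, "0 + 0 should be 0");
        check(add(-2, 2) == 0, "-2 + 2 should be 0");
        System.out.println("All tests passed");
    }

    /** Fails loudly with a message when a condition does not hold. */
    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }
}
```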

Use Git. I don’t know how many teams use Git these days, but I’m willing to bet most of them do. As with testing, this is a technology you want to show you know something about regardless of whether or not the team uses it, but especially if they do.

Include a build system. This might not seem like the most obvious thing to include in your code sample, but it’s important because I need to know that you know how to build your code. To steal and adapt a line from the movie “Three Amigos:”

Well, you told me your code sample has a build system. And I just would like to know if you know what a build system is. I would not like to think that a person would tell someone he has a build system, and then find out that that person has no idea what it means to have a build system.

Is it ok to use an auto-generated build system from an IDE? If it can be used without starting the IDE, like a Makefile or Maven script, then yes. If not, no.

Document your sample. The single biggest problem I see with code samples is that there is no documentation. I don’t care how “self-documenting” you think your code is, I don’t have a clue what it’s supposed to do and you won’t in five years either. More importantly, API-level documentation and skills with Javadoc and Doxygen are crucial in modern development shops. No user off the street is going to know how to use char ** getVal(int a, const int & a0, const double & a1) const; correctly! Furthermore, did you include a README.txt or README.md file to tell me what functionality your code offers, who wrote it, and how to contact the author?
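As an illustration of the documentation tip, here is a sketch of the kind of Javadoc that makes a reviewer’s life easier. The ValueStore class and its methods are hypothetical examples invented for this post, not code from any real project.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * A hypothetical key-value store used to illustrate API-level documentation.
 * Compare this to an undocumented signature like getVal(int, ...): the
 * Javadoc tells a reader what to pass and what to expect back.
 */
public class ValueStore {

    /** Backing map from keys to their recorded values. */
    private final Map<String, double[]> values = new HashMap<>();

    /**
     * Retrieves the values recorded for the given key.
     *
     * @param key the identifier under which the values were stored
     * @return the stored values, or an empty array if none exist
     */
    public double[] getValues(String key) {
        double[] result = values.get(key);
        return result != null ? result : new double[0];
    }

    /**
     * Stores values under the given key, replacing any previous entry.
     *
     * @param key the identifier to store the values under
     * @param vals the values to record
     */
    public void setValues(String key, double[] vals) {
        values.put(key, vals);
    }
}
```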

Make it pretty. Presentation matters. You’re probably worried about being dressed nicely to make a good first impression—shouldn’t your code also be “dressed nicely” to make a good first impression? Clean code is more readable than messy code and makes the review process easier.

Be transparent. Don’t lie or hide things about your code sample; put them out in the open, be transparent, and own it. If reviewers don’t like something in your code sample, adopt a growth mentality and ask them if they’ll let you fix it and submit it for re-evaluation. If you do something in your code sample that you know is wrong, document why you did it with something such as “A realistic implementation would replace this version that scales a random number from rand(), but this is sufficient for my code sample.”

I hope this helps you write a better code sample. I wrote a simple code sample that you can check out on the ORNL Training GitHub page. If you have questions or want to complain about either this article or the samples, drop me a Tweet at @jayjaybillings.

Getting my Twitter feed under control

Does anyone else out there feel like Twitter just headed south, what with all the politics, and Nazis, and unfortunate negativity towards #MeToo, and the Alt-Right, and the general lack of emotional intelligence, and #<insert-your-complaint-here>? I’m not able to deal with all of that right now, and I strongly considered quitting the service. I even took a break for a week or two. However, I really like micro-blogging, I have a good size following, and I love the different types of information I can access on Twitter, even though I primarily use it for keeping up with business contacts. That last bit is especially true for getting my #StarTrek fix when the work day is done! Twitter’s Trek community is amazing!

Here’s the list of things that I’ve done to clean up my Twitter feed, and make it worth my time. So far it is looking really great and interesting again. If you’re in the same boat as me, I hope this helps you out.

  1. Unfollow some folks – I started unfollowing some folks, mostly those who are inactive or rabble rousers. I especially unfollowed feeds that just depressed me, no matter how “valuable” some might find their feeds. (Here’s looking at you @SenBobCorker! #byebye)
  2. Mute people – Actually @Mrs_Billings put me onto this. Wait… did my wife mute me!? At any rate, muting someone you follow prevents their Tweets from showing up in your timeline, but you still see notifications and direct messages from them. It is a good way to keep following your friends even if you don’t want to see all their My Little Pony tweets, for example.
  3. Use more lists – If neither of the above are a great fit, putting feeds on lists is a good way to only check in when you want. I’m in the process of moving everyone I follow related to #StarTrek to a list because sometimes in the middle of the day I cannot afford to get side tracked by #AshIsVoq, even though I would really love to clear my schedule for it!
  4. Adjust your interests – Twitter tracks your “interests” for its own uses and for companies. You can set which of these auto-generated interests that you would like to see in your feed in the data settings.
  5. Mark uninteresting tweets – I’ve started telling Twitter when I don’t like tweets. I’m not sure how well this feature works since they keep posting certain types of content that I don’t like into my feed, but I’ll keep it up for a few days and see.

There you go. I’ll let you know how well this works in a few weeks. Sorry if I unfollowed, muted, or listed you, and you find that offensive. I promise I won’t be offended if you do the same to me too because I now know just how hard it is to get a Twitter feed under control!

Dynamic Visitor Pattern in Java

The Visitor Pattern is a great way to extend the functionality of a class without extending the class per se. The gory details are very well explained on Wikipedia, but it works by a visiting class, the “Visitor,” calling a second class that “accepts” the Visitor. Once the Visitor is accepted, the second class reveals its type by calling a specific, typed method on the Visitor. Now that the Visitor knows the type of the second class, it can behave in a highly specific way that, in effect, extends the behavior of the second class. The Wikipedia page has good examples of the basic pattern implemented in multiple languages, including Java.

The Visitor Pattern has one major downside: the exact list of classes that can be visited is completely specified on the Visitor interface. This presents a problem if a new class needs to be visited that is not available on the interface. The easiest solution is to add the new class to the interface, but that is not possible with third-party code – downstream developers generally do not modify APIs that they call.

Dynamic Visitors for New Types

So how can developers allow visitation for other classes or subclasses without requiring extension of the primary Visitor interface? The trick is to separate the act of accepting the Visitor from the final visitation by way of a delegate. The code looks something like the following, which is taken from the Eclipse January project, where I committed it yesterday:

/**
 * A simple, templated Visitor interface as part of the Visitor pattern. This
 * interface is implemented by classes that work with IVisitorHandlers to
 * dynamically and generically extend the visitation capabilities of the forms
 * package. Using this interface in place of the IComponentVisitor interface
 * allows clients to create custom Components or other structures and visit
 * them dynamically, wherease the IComponentVisitor interface is static and does
 * not allow extension outside the basic components.
 *
 * @author Jay Jay Billings
 *
 */
public interface IVisitor<T> {

	/**
	 * This operation directs the visitor to visit the provided data element.
	 * @param element The data element that should be visited.
	 */
	public void visit(T element);

}

/**
 * This is a simple interface for registering visitors under a generic visitor
 * pattern. It is designed such that implementers should use the IVisitors that
 * are registered with the set() operation to visit the objects passed to the
 * visit() operation. This allows run-time registration of generic visitation
 * callbacks without the need for a verbose, static interface such as
 * IComponentVisitor. Registration is as simple as associating a Class with an
 * implementation of IVisitor<T>.
 *
 * This class should not be used in general for all the data types in Forms. It
 * is better to implement IComponentVisitor or extend SelectiveComponentVisitor
 * in those cases because it minimizes the code and avoids bugs. This class and
 * the IVisitor interface are meant to be used only for classes that are not
 * already available on those two entities.
 *
 * @author Jay Jay Billings
 */
public interface IVisitHandler {

	/**
	 * This operation associates an IVisitor with a Class.
	 * @param classType The Class that should be associated with the Visitor
	 * @param visitor The IVisitor that will be invoked for the given class.
	 */
	public <T> void set(Class<T> classType, IVisitor<T> visitor);

	/**
	 * This operation uses the registered IVisitor to visit the injected
	 * object.
	 * @param objectToVisit The object that should be visited.
	 */
	public void visit(Object objectToVisit);

}

/**
 * This interface defines a visitation contract where visitation requests are
 * granted through a delegate provided by an IVisitHandler. This interface is an
 * alternative to IVisitable for classes that may need to execute visitation code
 * for classes not available on the IComponentVisitor interface.
 *
 * @author Jay Jay Billings
 */
public interface IGenericallyVisitable {

	/**
	 * This operation will accept a visit handler instead of a typed visitor
	 * that will then be called as a delegate for direct visitation.
	 * @param visitHandler
	 */
	public void accept(IVisitHandler visitHandler);

}

This works by providing a second interface – IGenericallyVisitable – that classes can realize to accept a delegate – the IVisitHandler – instead of directly accepting the Visitor. This allows the IVisitHandler to smoothly direct visitation to a generic IVisitor<T> that is typed specifically for a new class that is not part of the original Visitor interface.
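To see how the pieces fit together, here is a minimal, self-contained sketch of the pattern. The three interfaces mirror the January code above (with a generic set() for type safety), while MapVisitHandler and Widget are hypothetical stand-ins for January’s actual handler implementation and a client class that was never added to the static visitor interface.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal sketch of the dynamic visitor machinery. The interfaces mirror
// the January code above; MapVisitHandler and Widget are hypothetical.
public class DynamicVisitorDemo {

    // Generic visitor, as in the IVisitor<T> interface above.
    interface IVisitor<T> {
        void visit(T element);
    }

    // Delegate that associates classes with visitors and dispatches visits.
    interface IVisitHandler {
        <T> void set(Class<T> classType, IVisitor<T> visitor);
        void visit(Object objectToVisit);
    }

    // Classes realize this instead of accepting a statically typed visitor.
    interface IGenericallyVisitable {
        void accept(IVisitHandler visitHandler);
    }

    // A simple map-backed handler: upfront registration of class -> visitor.
    static class MapVisitHandler implements IVisitHandler {
        private final Map<Class<?>, IVisitor<?>> visitors = new HashMap<>();

        @Override
        public <T> void set(Class<T> classType, IVisitor<T> visitor) {
            visitors.put(classType, visitor);
        }

        @Override
        @SuppressWarnings("unchecked")
        public void visit(Object objectToVisit) {
            // Look up the visitor registered for the runtime class. This is
            // where an unregistered or mismatched type surfaces at run time
            // instead of compile time.
            IVisitor<Object> visitor =
                    (IVisitor<Object>) visitors.get(objectToVisit.getClass());
            if (visitor == null) {
                throw new IllegalStateException(
                        "No visitor registered for " + objectToVisit.getClass());
            }
            visitor.visit(objectToVisit);
        }
    }

    // A new class that is not part of any original Visitor interface.
    static class Widget implements IGenericallyVisitable {
        String name = "widget";

        @Override
        public void accept(IVisitHandler visitHandler) {
            // Delegate instead of calling a typed visit() directly.
            visitHandler.visit(this);
        }
    }

    public static void main(String[] args) {
        MapVisitHandler handler = new MapVisitHandler();
        // Upfront registration: associate Widget with its visitor.
        handler.set(Widget.class, w -> System.out.println("Visited " + w.name));
        new Widget().accept(handler);
    }
}
```

Note that the cast inside MapVisitHandler.visit() is exactly the spot where the run-time ClassCastException discussed below can appear if a visitor is registered against the wrong class.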

The January repo has a good example showing how this works. All of the code there and here is licensed under the Eclipse Public License version 1.0.

Drawbacks

There are a couple of downsides to this approach that may keep it from working for all projects. First and foremost, the compile-time type checking provided by the full Visitor interface is not present for a generic visitor interface, and it is possible that run-time cast exceptions could occur because the proper visit() operation is not available. In that case the error looks like the following:

java.lang.ClassCastException: org.eclipse.ice.datastructures.test.TestClass2 cannot be cast to org.eclipse.ice.datastructures.test.TestClass
at org.eclipse.ice.datastructures.test.TestVisitor3.visit(BasicVisitHandlerTester.java:1)
at org.eclipse.january.form.BasicVisitHandler.visit(BasicVisitHandler.java:48)
at org.eclipse.ice.datastructures.test.TestClass2.accept(BasicVisitHandlerTester.java:24)
at org.eclipse.ice.datastructures.test.BasicVisitHandlerTester.testVisit(BasicVisitHandlerTester.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at ...

In C++, this can be handled a little differently than in Java because function templates support Explicit Specialization… which would have been so much nicer to have than resorting to a generic handle via the Object class.

Second, the exact means by which the delegation to the IVisitHandler happens most likely requires either upfront registration or hardwiring in IVisitHandler implementations. In this case, January goes with upfront registration, which actually isn’t too bad.

Finally, delegation always comes with a performance hit if it isn’t handled efficiently.

Comments are welcome on my Twitter account! @jayjaybillings

NVIDIA CUDA 7.5 on Fedora 23 with NVIDIA Optimus Technology

I have a Dell XPS 15 that has both Intel and NVIDIA graphics using NVIDIA’s Optimus Technology:

00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)

Getting CUDA 7.5 to work was straightforward using Bumblebee, as described on the Fedora Bumblebee Page. I used the managed proprietary option to get the latest NVIDIA driver for my system. Both optirun and primusrun work; however, they report different numbers for glxgears:

$ optirun glxgears
10162 frames in 5.0 seconds = 2032.270 FPS

$ primusrun glxgears
292 frames in 5.0 seconds = 58.370 FPS
primus: warning: dropping a frame to avoid deadlock
primus: warning: timeout waiting for display worker

I installed the CUDA Toolkit using the dnf repository provided by NVIDIA. It is important to install only the Toolkit from this repository, not the NVIDIA driver it provides, since Bumblebee has already configured an appropriate Optimus-ready NVIDIA driver. So, instead you can just run

sudo dnf install cuda-toolkit-7-5

Additional configuration works as specified in the User’s Manual and Getting Started Guides. I had trouble getting the examples to compile because Fedora 23, which I’m on, defaults to gcc 5.3.1 and, unfortunately, CUDA 7.5 requires gcc 4.9 or earlier. I compiled GCC 4.9 from scratch and then manually modified the makefiles of each sample I wanted to compile to point to “g++49” instead of “g++”. (I told the gcc configuration at build time to add the 49 suffix with the --program-suffix=49 option.) Aside from that, compilation and execution were simple enough, with two catches:

  • It was necessary to override the library path to point to Bumblebee’s version of the CUDA shared library instead of a 32-bit version that somehow ended up in /usr/lib.
  • Contrary to what the Bumblebee documentation suggests, you must use optirun or primusrun to execute the samples. I’m not sure if this is required on more basic programs yet.

So, that looks something like this for the nbody example:

LD_LIBRARY_PATH=/usr/lib64/nvidia-bumblebee primusrun ./nbody
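
The makefile tweak described above can be sketched like this. The Makefile below is a hypothetical stand-in for a real CUDA sample’s, and g++49 is the suffixed binary produced by the --program-suffix=49 build:

```shell
# Hypothetical minimal Makefile standing in for a CUDA sample's real one.
mkdir -p /tmp/cuda-sample
printf 'CXX := g++\nall:\n\t$(CXX) -o nbody nbody.o\n' > /tmp/cuda-sample/Makefile

# Point the build at g++49 (built with --program-suffix=49) instead of
# the system g++ 5.3.1 that CUDA 7.5's nvcc rejects.
sed -i 's/g++/g++49/g' /tmp/cuda-sample/Makefile

# Confirm the substitution took.
grep 'g++49' /tmp/cuda-sample/Makefile
```

In practice, inspect each sample’s Makefile before running a blanket substitution like this, since the real makefiles are larger and reference the host compiler in more than one place.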

[Image: cudaNBody_cropped]

Family Rocking Chairs

We picked up our family rocking chairs from C & S Refinishing off of Old Broadway in Knoxville today! We’ve been so excited about these chairs and could barely wait to get them home. Needless to say, the boys at C & S did an excellent job and we are thoroughly impressed! Here they are!

[Photo: 20160817_165929]

You can find more pictures of the refinished chairs at Rocking Chairs – Refinished Album and the chairs in their original condition at Rocking Chairs – Original Album. If you check out the original pictures, be sure to look at the damage to the arms of both chairs, as well as at the joints in a few places.

The Garage

These chairs are from the Billings side of our household and have quite a bit of sentimental value to us. Mrs. Billings and I recovered them from my Dad’s garage on June 17, 2016. They were so covered with mildew and so damaged that I nearly took them to Goodwill, but as Mrs. Billings and I looked at them more and more, we thought that maybe we could clean them up and refinish them, especially since they mean so much to the family. Both chairs are shown below. Each chair has been in my family for a few decades, and many, many babies have been rocked in both of them. My parents were more than happy to provide details on each.

The Upholstered Chair

The upholstered chair on the left is a Depression-era (1930s) nursing rocker. The frame is made of several different types of wood, and the chair was probably reupholstered in the 60s or 70s. The yellow upholstery was leathery, but Mrs. Billings quickly and correctly pointed out that it was a type of faux leather. My parents bought this chair at the estate sale of Early Wampler in Rural Retreat, Virginia, sometime around late 1984. My mother was rocking little baby me in it when the auctioneer turned to it and put it up for bid. Dad did not like the idea that Mom would have to move and wake me up, so he bought the chair! Once they took it home, the chair was used for me, my sister, and my little brother. At first this was in Rural Retreat, VA, but it later moved with us to our winter home in Watertown, TN. By the time Josh was born, this chair had been retired to the formal “living room” where all the furniture covered in plastic lived. Mom was very worried that one of us would damage the chair, so we were not allowed to sit in it. I would regularly sneak into the room, curl up in this chair with a Hardy Boys or Nancy Drew book, fall asleep, and wake up to a stern “Uh hmm” when Mom found me.

The chair’s history after that is pretty simple. It moved around with the family, pushed ever further away from the common sitting areas in an effort to preserve it. It would remain rarely used until I came home, found it, and fell asleep in it with another good book. Eventually, not having a good place to store it, Dad put it in his garage.

[Photo: 20160619_155353]

The Youth Rocker

The second chair is from the 1950s. My Uncle Bill (William Billings) gave this chair to my Granny Billings (Virginia) sometime between then and the late 80s, when I first remember it. It is a comfortable, all-wooden youth rocker that is big enough for adults too. Mom would pack us into the car every weekend or two and take us to see Granny in Mountain City, TN, who would give us good food and very interesting books to read if “her shows” (soap operas) were not on. I would curl up in this chair and flip through magazines or sleep, as would my baby sister. This chair has been used by most of my family members below the age of fifty at one point or another. The history of this youth rocker after that is pretty simple. Granny had a stroke in 1991 at 89 years old, and the chair was left in her house. It was still there a decade later when a buddy and I visited the house, long after Granny had passed. About five years after that, my Dad recovered the chair, cleaned it up, and used it as part of his grandfatherly duties to rock my sister’s children. I was shocked to find it in the living room when I came home from college one Christmas. As you might suspect, my plan was simple: read a book in it and fall asleep! A few years later, this chair joined the upholstered chair in the garage when Dad moved to his new place, which is where Mrs. Billings and I found them both.

[Photo: 20160619_155703]

The Refurbishing

After we decided to keep them and brought them home, Mrs. Billings and I decided that the damage to these chairs was significant enough that we should contact C & S to refurbish them. The biggest thing to us was that the chairs were covered in mildew and their prior finish had been compromised. We were worried that if we did the work ourselves, we might not get everything properly cleaned, and mold and mildew would eventually get under the new finish. We asked C & S to simply refinish the youth rocker, but to completely reupholster, refinish, and repair the upholstered chair. We chose a high-quality, lovely blue paisley fabric, as shown in the first picture. We are very pleased with the work!

Closing

We haven’t figured out where the chairs will sit permanently in the house, but in front of the fireplace was a good start. If you guessed that I’ve already fallen asleep reading something in them in the few hours they’ve been here, well, you’re right.

[Photo: 20160817_171508]

Catch you next time! Feel free to direct questions to @jayjaybillings on Twitter or send an email to beingsocial<at> jayjaybillings <dot> org.