
This past June I went to Open Repositories 2017 in Brisbane, Australia, where I presented a workshop on how to get started with Ansible and Serverspec. The workshop slides and materials are available. It was a great experience, and I'm happy to report that I survived the virtual machine I brought for the workshop refusing to run on my own notebook… thanks to the help of my pal Kim Shepherd, who loaned me his notebook to run the machine. (It worked fine for everyone else.) When I returned from Australia, I rewrote the Vagrant configuration to make a sturdier VM for future workshops, calling it Workshop-o-matic. It's useful for anyone who wants to follow along with the workshop slides, so if you're interested, please do.

This blog post is very tardy; I'm sorry about that. Immediately after the conference, I took my wife on a vacation around northern Australia. We spent a wonderful week driving around in a rented campervan. It was a great time, and since then it has been a non-stop whirlwind of catching up with work, life, and everything else that piles up after a vacation and at the start of a new school year. So, anyway, enough excuses, on with this recap.

The keynote, "Perverse incentives: how the reward structures of academia are getting in the way of scholarly communication and good science," was given by Sir Timothy Gowers. There is a video recording. Incidentally, all the filmed conference sessions are also available (note that only the general session tracks in the main ballroom were recorded).

Two things Sir Timothy said really struck a chord with me, and they have held my attention through the conference and ever since:

The current culture doesn’t really favor sharing [incomplete ideas].

An obvious thought is that if we did all start sharing our little scribblings, we could end up with a complete mess.

After hearing this, my mind started racing, because we open source developers do exactly this: we already share our work in progress. And it hit me that I'd been thinking about this problem for a while. I even had a phrase for my half-baked idea of how to approach it:

Are you going to eat that, mate?

And I filled a page with scribbly notes, and found a team to pitch this idea as part of the Ideas Challenge. Alas, it didn't win, but we had great fun making the slides.


So, my main takeaway from this conference is that I need to figure out how network data analysis works, and I need to tackle this challenge on my own, because I'm convinced there are a lot of really great ideas, almost-finished code, just out there on GitHub, waiting for us to find them and ask that question: Hey, if you're not using this code, can we use it?

Pardon this digression, but after the conference my pal Kim sent me a note on Slack, saying he'd been fiddling with a citation database and ended up finding this article:

Nora McDonald, Kelly Blincoe, Eva Petakovic, and Sean Goggins. "Modeling Distributed Collaboration on GitHub." Advances in Complex Systems, vol. 17, issue 07n08 (December 2014): 1450024.

And one author's name leapt out at me: Sean Goggins. Hey, I think I know that guy! We have friends in common, and we go to the same neighborhood pool. So we became Facebook friends. I still haven't taken Sean out to lunch, but he's working on this really interesting project:

CHAOSS

From the governance page, the mission of the project is to:

  1. produce integrated, open source software for analyzing software development, and definition of standards and models used in that software in specific use cases;
  2. establish implementation-agnostic metrics for measuring community activity, contributions, and health; and
  3. optionally produce standardized metric exchange formats, detailed use cases, models, or recommendations to analyze specific issues in the industry/OSS world.

That's not quite what I want to do, but it works with the same data set, to help foster the health of open source development communities. And goal 3 would at least help me with my own goal, which is essentially to build a recommendation engine for work in progress on GitHub.

Now, back to my OR17 recap. Here are some of the cool tools I found out about at the conference, the things I want to check out later:

In Dev Track 2, Conal Tuohy presented on mining linked data from text, and mentioned a tool I want to check out: XProc, a W3C recommendation for an XML transformation language that uses XML pipelines. There's a book and a tutorial I found; I'll check them out later.

Also, Peter Sefton presented a static repository builder tool, Calcyte, which builds extremely high-performance, inexpensive data repositories out of static HTML.

The real draw of this session for me was the presentation "Visualizing Research Graph using Neo4j and Gephi" by Dr. Amir Aryani and Hao Zhang. I knew after the keynote that I really needed to find out more about graph data, and I knew Neo4j and Gephi would be tools I'd need to be familiar with, so I ended up in Dev Track 2 to see this presentation and be inspired, and it did not disappoint. I came out of it convinced that I could use the network graph data in GitHub to build the recommendation engine I wanted to build. And even if I never build a full-fledged tool, at a minimum I should be able to explore this data on my own, using Neo4j and Gephi.
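To make that concrete for myself, here's the sort of first experiment I have in mind. This is a minimal sketch of my own, not anything from the presentation: it assumes a local Neo4j 3.x instance with its HTTP transactional endpoint on port 7474, and a hypothetical toy graph where (:User)-[:FORKED]->(:Repo) relationships have already been loaded from GitHub data.

    # Sketch: run a Cypher query against a local Neo4j 3.x instance over
    # its HTTP transactional endpoint. The (:User)-[:FORKED]->(:Repo)
    # schema and the 'mylogin' login are hypothetical placeholders.
    require 'net/http'
    require 'json'
    require 'uri'

    # Find users who forked the same repositories I did: candidate
    # "friends of friends" signals for a recommendation engine.
    cypher = <<~CYPHER
      MATCH (me:User {login: $login})-[:FORKED]->(r:Repo)<-[:FORKED]-(other:User)
      RETURN other.login AS login, count(r) AS shared_repos
      ORDER BY shared_repos DESC LIMIT 10
    CYPHER

    uri = URI('http://localhost:7474/db/data/transaction/commit')
    request = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
    request.basic_auth('neo4j', ENV.fetch('NEO4J_PASSWORD', 'neo4j'))
    request.body = {
      statements: [{ statement: cypher, parameters: { login: 'mylogin' } }]
    }.to_json

    response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
    result = JSON.parse(response.body)['results'].first
    result['data'].each { |row| puts row['row'].inspect }

Gephi could then pull the same data for visualization; the point is just that a few lines of Cypher get you surprisingly far.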

I'm not ashamed to admit that I sought out Dr. Aryani's next presentation on the following day, in General Track 10, "Research Graph: Building a Distributed Graph of Scholarly Works using Research Data Switchboard". It was really interesting to find out how a distributed graph works, and why one would use it: it's a way to produce a larger data set from more than one shared dataset, by connecting the graph data across disparate repositories. Doing so allows each partner institution to retain "ownership" of its own data, while still maintaining access to the shared whole of the larger dataset. Distributed graph databases also share a bit of the computational load of running large-scale queries, which helps the entire data set scale and remain usable.

My other takeaway from OR17 is that I really need to keep better tabs on a particular colleague of mine, Andrea Schweer, as she often puts interesting code up on GitHub, and every time I look at her code I'm blown away by its quality and by how much of it I can immediately put to use. Don't believe me? Look at this collection of cool stuff.

That’s the kind of thing I hope to be able to find with my future fiddling with the GitHub network graph. How many other developers have huge collections of interesting bits of code, maybe just half-finished, but still amazingly useful, waiting for us to discover, and use, and build communities around?

I’m really excited about this, and hope to be able to help make it happen.

UPDATE (12/15/2017):
Just to give you a tiny taste of what's possible, GitHub has added a couple of recommendation-engine features. If you have a GitHub account, head on over to GitHub Discover and GitHub Explore, which are both giant rabbit holes of fun. Happy hunting! NOTE: neither of these features is what I had in mind; they're just basic "you like these projects and follow these people, have you seen these projects?" or "hey, everyone else is excited about this, you should be, too" kinds of things. I'd like to focus in on branches in forks of a project, find the ones that have been pulled a lot, and mix that in with other social data (friends of friends, etc.).
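For a taste of the raw material such a tool could chew on, here's a little sketch (mine, and definitely not how GitHub's features work) that asks GitHub's GraphQL API for the most-starred forks of a repository. The repository name is just an example, and you'd need your own personal access token:

    # Sketch: list the most-starred forks of a repository via GitHub's
    # GraphQL API. Needs a personal access token in the GITHUB_TOKEN
    # environment variable; DSpace/DSpace is just an example repo.
    require 'net/http'
    require 'json'
    require 'uri'

    query = <<~GRAPHQL
      query {
        repository(owner: "DSpace", name: "DSpace") {
          forks(first: 10, orderBy: {field: STARGAZERS, direction: DESC}) {
            nodes {
              nameWithOwner
              stargazers { totalCount }
              pushedAt
            }
          }
        }
      }
    GRAPHQL

    uri = URI('https://api.github.com/graphql')
    request = Net::HTTP::Post.new(uri)
    request['Authorization'] = "bearer #{ENV.fetch('GITHUB_TOKEN')}"
    request.body = { query: query }.to_json

    response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
      http.request(request)
    end

    forks = JSON.parse(response.body).dig('data', 'repository', 'forks', 'nodes')
    forks.each do |f|
      puts "#{f['nameWithOwner']}  stars: #{f.dig('stargazers', 'totalCount')}  pushed: #{f['pushedAt']}"
    end

From there it's a short hop to walking each fork's branches, which is the data I'd want to mix with the social signals.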


So, this is a bit of a long story. As a background, one of my projects is building a DSpace repository for 3D architectural models. These models are stored in a format that is unique to the tool the researchers use, which means they are big binary files. There is interesting text data in these files, so a full text index would be helpful for people searching through them all, to find something they might have missed. I mean, we’ll have good metadata for these files, but not all data is surfaced by even the best metadata. The typical approach for DSpace repositories is to convert the data formats to something DSpace can index, like a PDF, or a CSV file. But these data files are not the kind of thing you just convert. And no text extraction tool exists for this data format. Until today. :-)

I've been thinking about this problem for a few months. I've even run the files through the very handy strings program. Unfortunately, strings is very exuberant and pulls all kinds of gibberish out of a binary file. So… not very useful for a full-text index. I shelved the idea. Until yesterday.

Yesterday, during a meeting, I had occasion to talk about a tool I've wanted to play with for a long time: the Data Science Toolkit (DSTK). I pulled up the URL so I could talk meaningfully about it, but didn't get to make more than a passing mention. After the meeting, before closing the tab I had open, I noticed the Text to Sentences API for the DSTK, and I tinkered with it. It had never occurred to me that you could filter out the gibberish and retrieve meaningful text. This was the missing piece! I poked around a bit, looking for code, but ran out of time to do any more investigation, so I instead shouted out to the void (i.e. I tweeted):
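If you want to poke at it yourself, here's roughly what my tinkering looked like. A word of caution: this is a sketch from memory, and it assumes the public DSTK endpoint accepts a plain-text POST body the way its other APIs do; the filename is a placeholder for one of our model files.

    # Sketch: shove a pile of extracted text at the Data Science Toolkit's
    # Text to Sentences API and print whatever comes back. Assumes the
    # public endpoint accepts a plain-text POST, like DSTK's other APIs.
    require 'net/http'
    require 'uri'

    text = `strings big-data-file.3d`  # gibberish-laden dump of a binary file

    uri = URI('http://www.datasciencetoolkit.org/text2sentences')
    response = Net::HTTP.post(uri, text, 'Content-Type' => 'text/plain')
    puts response.body  # JSON holding the parts that look like real sentences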


and, amazingly, I got a response:


So, I grabbed a copy of cruftstripper.rb, uncommented the last three lines, renamed the script "sentences", pointed strings at a data file, and piped that gibberish through my new "sentences" script. Voilà:

strings big-data-file.3d | sentences

And I got useful data, which I can't share with you because I don't have permission to do so. But you'll have to trust me: it's beautiful.
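I can share a stand-in, though. This is not Pete Warden's cruftstripper.rb, just a minimal sketch of the same idea as I understand it: keep only the lines on STDIN that plausibly look like English sentences.

    #!/usr/bin/env ruby
    # Sketch of a "sentences"-style filter: read STDIN, print only lines
    # that plausibly look like English sentences. A rough stand-in for
    # cruftstripper.rb, not the real thing.
    STDIN.each_line do |line|
      line = line.strip
      next if line.length < 16                 # too short to be a sentence
      next unless line.match?(/\A[A-Z]/)       # starts with a capital letter
      next unless line.match?(/[.!?]\z/)       # ends with sentence punctuation
      words = line.split
      next if words.length < 4                 # needs a few words
      # Most "words" in a real sentence are plain alphabetic; binary cruft isn't.
      alpha = words.count { |w| w.match?(/\A[A-Za-z',.;:!?-]+\z/) }
      puts line if alpha >= words.length * 0.9
    end

Save it as "sentences" somewhere on your PATH, make it executable, and the pipeline above works as written.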

And the really cool thing about this approach is that it works for any mysterious data format a repository might receive.

Now, what I have left to do is wire this mess up into a media filter script for DSpace, which will be pretty easy. The tricky bit will be figuring out a way to do it so that the code can be merged into the DSpace application by default, because that's what I really want to happen here: give DSpace the ability to produce a full-text index from any file.