
Code4Lib 2019 Recap

Mar 30, 12:11 PM

Code4Lib 2019 was held in San Jose, CA from February 19 to February 22, 2019. The theme of this conference, for me, was service: as in volunteering, and giving back to my community. There were plenty of jobs that needed doing, and my usual method in these cases is to needle people I know into volunteering to do them. However, the people I know in the Code4Lib community were already very deeply involved in this conference… asking them to do more was a terrible idea (I’m embarrassed to admit that it took me a while to figure this one out). So, I decided the best thing to do was to volunteer for as many things as I could conceivably take on. This turned out to be great fun, and I’ll do it again. However, the conference week itself was a bit of a whirlwind. I cannot even imagine what it must be like for anyone more involved than I was (and I was, frankly, barely involved at all). Let’s just leave it at: Code4Lib is an incredible community, and the yearly conference is a labor of love and devotion that we are seldom privileged to see anywhere. If you get a chance to go to this conference, please do consider volunteering for as much as possible… it’s worth it to see, up close, everyone else giving their all to pull this conference off.


OK, this is a recap, so I will recap. The first day was for workshops. I had a workshop approved for the conference, but it was cancelled due to space constraints at the venue (and not many people signed up for it, either). So, after asking Dre if it was OK, I was able to go to his legendary Fail4Lib workshop. So many interesting questions and observations came up during this workshop! I jotted a few of them down, and I’ll run through them here:

The opening discussion centered around some assigned reading that Dre sent out before the workshop. I will mention this reading at the end, but the general topic was the idea of megaprojects, how they are funded, and why they fail. Here are a few snippets from my notes:

  • Is an economic lens the best lens to use to evaluate the success of a project?
  • “Not only do planners typically underestimate the time, difficulties and costs involved in completing projects, they also overestimate the likely benefits – including the ease with which people will come up with ways to overcome obstacles.” —Flyvbjerg, professor at the University of Oxford’s Saïd Business School; here’s an interesting article that also mentions this quote.
  • Wealthy people want a monument, how can we make the monument useful?
  • The political use case is still a use case; it’s OK to include it if it unblocks other stories.
  • Make the trade-offs more visible to decision makers
  • Keep in mind who you are talking to, decision makers are often more responsive to political/visionary/branding/impact language.
  • Listen: they will usually tell you, or at least give you clues about, what their priorities are.
  • It’s easier to evaluate a system from the outside of it. Being part of the system can blind you to important aspects of the system.

There followed a series of lightning talks from workshop attendees. I will not name names, but here are a few things from my notes which stand out to me now.

  • waiting even six months would have opened up possibilities
  • praise blinds you to that nagging voice that would normally have given you a heads up
  • seek out an honest opinion from someone outside of the project, avoid situations where no one will check your work
  • vendors are vendors, not your friends
  • retrospectives and honest evaluation are invaluable, and can help strengthen relationships, even new ones
  • postmortems after failures of any sort are healthy, and should help prevent similar failures in the future
  • testing all aspects of a design is important
  • communicating a failure in progress is difficult, but once you get past that first admission of a problem, things will get better; stakeholders understand the problem space and the risks better than you think, and are on your team, so keep them in the loop
  • have measures for what you’re doing, so you know whether you’ve done it

During the wrap-up, the group discussed tactics for surviving failure. The kick-off question was so interesting, I wrote it down: “How do you prepare collaborators for unexpected outcomes?” Here are a few things that came up during the discussion:

  • Admit that the work is an experiment
  • Agree on definitions: “prototype” is risky, because there is a desire to throw “working prototypes” into production, even if they aren’t “real”. “Beta” and “Alpha” don’t have inherent meaning, so spell out exactly what you mean by these deliverables.
  • Is failure an endpoint?
  • Is failure OK?
  • What can we do to make failure OK?
  • When is failure unavoidable? Useful? Desirable?

At the end of the wrap-up, someone brought up the article Blameless Postmortems in a Just Culture; it’s worth a read. We follow this practice at UCLA Library, and have seen positive outcomes from employing it.

After a quick jog across the street for lunch (a vegetarian burrito, yum!), I made it back in time for Jon Weisman’s afternoon workshop, Library Apps and the Modern Dev Workflow. Jon walked us through using OpenAPI/Swagger to develop an API spec, then developing an actual application that delivers a service based on that spec, and deploying it to Heroku. Here are Jon’s notes on the workshop; they are worth taking the time to follow. We are using these tools in my team’s work at UCLA Library, and I am grateful to have had a chance to get my feet wet at this workshop.
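To give a flavor of the spec-first approach, here’s a minimal OpenAPI 3.0 sketch of the kind you might start from; the API title, path, and field names are my own invention, not from Jon’s workshop:

```yaml
# A hypothetical, minimal OpenAPI 3.0 document, just to show the shape
# of a spec-first workflow. None of these names come from the workshop.
openapi: "3.0.0"
info:
  title: Example Library API
  version: "1.0.0"
paths:
  /items/{id}:
    get:
      summary: Fetch a single item record
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The item record
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  title:
                    type: string
```

From a spec like this, the OpenAPI tooling can generate documentation and server/client stubs, which is what makes the spec-first workflow attractive.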

Day 1

Sarah Roberts delivered the opening keynote (here’s a recording). The themes from this keynote ran through many of the presentations during the conference: a concern with ethics in computing, a concern with equitable labor arrangements, and an awareness of the realities of what sorts of labor are required to facilitate many modern online environments and services. Dr. Roberts’ keynote refocused my attention on a problem that is all too easy to forget: there are real people doing the job of content moderation, and powering other online services like Mechanical Turk. They often work in harsh conditions, and very often the work they are asked to do carries a significant psychological toll. Just knowing this fact is important: the topic of using Mechanical Turk does come up in library circles, and knowing the conditions these workers face will help us plan for a more equitable arrangement for this labor. I hope. Of course, the difficult part, for us technology workers, will be to speak up during project planning and be the squeaky wheel, to be sure this unseen labor is seen. I will try to rise to this challenge.

Dominique Luster from the Carnegie Museum of Art challenged us to “do the work of cultural competency” in her talk entitled Machine Learning and Metadata with the Charles “Teenie” Harris Collection (here’s a recording). In part it is a cautionary tale about the dangers of overly detailed cataloging of images, and how that detail can obscure what’s actually important and significant about a collection… but the talk is not merely “metadata shaming”; it is a deeper dive into how machine learning can be utilized to tackle a large-scale metadata cleaning project. Luster is an engaging speaker; if you only watch one of the talks from this conference, this should be the one.

Better late than never

Oh my, it’s been over a year, and this article has been in draft the whole time… I don’t think I’ll ever finish it, so I will post the draft instead. This year, I will have to get back to this blog.

Samvera Connect 2018 Recap

Mar 15, 10:04 AM

Samvera Connect 2018 was hosted by the University of Utah’s J. Willard Marriott Library. For me, the theme of this conference was testing. I registered for two different workshops on testing. The first was run by Carolyn Cole, and based on the excellent Rails Bridge training. [Carolyn’s workshop materials] The second was run by Tom Johnson, and was a slightly more Samvera-specific workshop on testing topics. [Tom’s workshop materials] I have several pages of notes from both workshops, but I’ll try to mention a few bullet-point takeaways from them:

From Carolyn’s workshop:

  • try to write tests that make sense to you
  • tests are really only useful if you understand how they work
  • generated code doesn’t always have tests, or what tests it does have are incomplete
  • to quote Carolyn: “Yes there are smart people writing this stuff, but it’s important to understand what you’re using.”
  • use Coveralls to find places where you are testing too much (an unexpected, but good, tip)
  • name your tests in such a way that, when you see them fail, the failure message makes sense… you’ll see your tests fail right away, if you’re doing TDD correctly, and it helps a lot if you can understand what the failure message is telling you
  • a thing to read about later: Shameless Green from Sandi Metz.
  • Feature specs are expensive, but that’s because they are complete… since big tests require a long time investment, ensure there is a big payoff for running them, especially if you’re running them all the time as part of some automated process
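To make the test-naming point concrete, here’s a tiny self-contained sketch. I’m using Minitest (which ships with Ruby) so it runs anywhere, though the workshop itself used RSpec; the Book class and its behavior are invented purely for illustration:

```ruby
# A sketch of descriptive test naming: the goal is that a failing test's
# name reads like a plain-English statement of what broke.
# (Book is an invented example class; the workshop used RSpec, but the
# naming idea is the same in Minitest.)
require "minitest/autorun"

class Book
  attr_reader :title

  def initialize(title)
    @title = title
  end

  # A book can be shelved only if it has a non-empty title.
  def shelvable?
    !@title.nil? && !@title.empty?
  end
end

class BookTest < Minitest::Test
  # On failure, the name itself tells you what went wrong:
  # "BookTest#test_a_book_with_an_empty_title_is_not_shelvable"
  def test_a_book_with_an_empty_title_is_not_shelvable
    refute Book.new("").shelvable?
  end

  def test_a_book_with_a_title_is_shelvable
    assert Book.new("Moby-Dick").shelvable?
  end
end
```

If you’re doing TDD correctly you’ll see these fail first, and a name like the one above makes the failure message immediately understandable.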

From Tom’s workshop:

  • Are your tests using too-specific dependencies?
  • Do your tests “know too much”?
  • Don’t just “rebuild the implementation” in the test
  • Complicated tests should worry you, they are a warning sign
  • this snippet is golden: it { require 'pry'; binding.pry } … see this article for more details

Tom also had lots to say about mock objects, and other such things. For sure check out both Carolyn’s and Tom’s workshop materials, and if you’re doing any sort of development work with Samvera, you should actually go through both of those workshops and follow along with the exercises. It’s good experience; you’ll learn a lot.
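A quick sketch of the “don’t rebuild the implementation in the test” idea from Tom’s workshop: instead of asserting on a collaborator’s internals, hand the object under test a stand-in that just records what it was asked to do. All class names here are my own invention, not from the workshop:

```ruby
# Illustrating testing an interaction rather than an implementation.
# The test cares only that the notifier is asked to notify, not how
# the message is assembled internally. (Invented example classes.)

class Checkout
  def initialize(notifier)
    @notifier = notifier
  end

  def complete(user)
    @notifier.notify(user, "checkout complete")
    true
  end
end

# A minimal hand-rolled test double that just records calls to it.
class FakeNotifier
  attr_reader :calls

  def initialize
    @calls = []
  end

  def notify(user, message)
    @calls << [user, message]
  end
end

fake = FakeNotifier.new
Checkout.new(fake).complete("mattron")

# Assert on the interaction, not the internals:
raise "expected one notification" unless fake.calls.length == 1
```

A full mocking library (like rspec-mocks) gives you nicer syntax for this, but the principle is the same: a test that re-derives the collaborator’s behavior “knows too much” and will break for the wrong reasons.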

OK, workshops done, on to the main conference. Here are some highlights from my notes: The Code Stability Working Group’s Recommendations is something I should read. Glancing at it again, I see there are a few interesting links from that page… I need to set aside some time to explore all of that info more thoroughly. Along those same lines, the Hyrax Roadmap is something anyone who works with Hyrax should be aware of, and read. Also, Hyrax doesn’t just travel down that road all on its own, there is likely lots of work to be done, and if you’re in a position to help, there will be Hyrax pull requests waiting for review. Have some spare time? Make yourself useful and pitch in!

Another theme of this conference was the community coming to terms with what bringing in Valkyrie will actually mean, and how that work will proceed. And while that discussion might have been more timely many months ago, proceed they have! :-) Valkyrie is now up to version 1.5.1, with version 2.0 expected in the coming weeks. The project has been promoted to the core Samvera repository, and has been added as a dependency of Hyrax as of February 2.

During the Hyrax Working Group, Valkyrie discussion dominated. At some point, someone mentioned Martin Fowler’s Branch by Abstraction article … I need to read that.

Now I’ll just list off the talks I got the most out of, in the order they appear in my notes (likely chronological order).

David Schober from Northwestern presented a talk on DevOps [slides]
…really useful from-the-trenches information on running containers in AWS, and using Terraform to do it. This is an area my team at UCLA Library is trying to develop expertise in, and these slides will be useful for us. I particularly like their DevStack tool for setting up dev environments for the various services they build and maintain.

Kate Lynch from University of Pennsylvania Libraries presented a talk about a workflow they’ve written to manage backup of content to Amazon Glacier, which they call “Guardian Workflow”. [slides]
… really cool stuff, and worth a look if you want to do something similar. Even if you’re not interested in backing up to Glacier, it’s worth seeing how they’ve managed to tackle it, as the approach might be useful for other work you have to do.

Justin Coyne from Stanford University Library spoke about deploying with AWS Elastic Container Service. [slides] From my notes: they’re using CloudWatch to pull all their log files together, and auto-scaling will save you money, since you can turn off the service when you don’t need it.

James Griffin from Princeton University Library presented on Synchronizing Samvera. [slides] Lots of helpful information in this talk about how to get Samvera to talk to other web services. This is information that will help anyone who is building microservices alongside a Samvera repository, or anyone who needs to integrate a Samvera repository into a larger IT ecosystem.

There was a session about IIIF, and my notes from this session are pretty sparse, but I did jot down that Simeon Warner from Cornell mentioned they use something called “Art Store” to handle moving files, and I think it’s actually Archival Storage Ingest… which looks pretty cool, and useful for my team.

There was a Batch Ingest Working Group meeting which I attended, but the facilitator couldn’t make it. My colleague Lisa McAulay jumped in to lead the conversation, and we ended up walking through the working group’s docs on the wiki and adding new information from those present at this meeting, so hopefully we ended up helping. Those notes have moved on the wiki since this meeting; I think this is their current location.

I, along with my colleague from UCSD, Jon Robinson, facilitated an unconference session we called the “DevOps Sandbox Swap Meet”; here are the [notes from the session].

My colleague from UCLA Library, Stephen Gurnick, and I presented a poster on things we’ve learned about making DevOps work at UCLA Library. [our poster]

That’s about it. Sorry these notes are so late.