
Code4Lib 2019 Recap

Mar 30, 12:11 PM

Code4Lib 2019 was held in San Jose, CA from February 19 to February 22, 2019. The theme of this conference, for me, was service. As in volunteering, and giving back to my community. There were plenty of jobs that needed doing, and my usual method in these cases is to needle people I know into volunteering to do them. However, the people I know in the Code4Lib community were already very deeply involved in this conference… asking them to do more was a terrible idea (I’m embarrassed to admit that it took me a while to figure this one out). So… I decided the best thing to do was to volunteer for as many things as I could conceivably take on. This turned out to be great fun, and I’ll do it again. However, the conference week itself was a bit of a whirlwind. I cannot even imagine what it must be like for anyone more involved than I was (and I was, frankly, barely involved at all). Let’s just leave it at: Code4Lib is an incredible community, and the yearly conference is a labor of love and devotion that we are seldom privileged to see anywhere. If you get a chance to go to this conference, please do consider volunteering for as much as possible… it’s worth it to see, up close, everyone else giving their all to pull this conference off.


OK, this is a recap, so I will recap. The first day was for workshops. I had a workshop approved for the conference, but it was cancelled due to space constraints at the venue (and not many people signed up for it, either). So, after asking Dre if it was OK, I was able to attend his legendary Fail4Lib workshop. So many interesting questions and observations came up during this workshop! I jotted some of them down, and I’ll run through a few here:

The opening discussion centered around some assigned reading that Dre sent out before the workshop. I will mention this reading at the end, but the general topic was the idea of megaprojects, how they are funded, and why they fail. Here are a few snippets from my notes:

  • Is an economic lens the best lens to use to evaluate the success of a project?
  • “Not only do planners typically underestimate the time, difficulties and costs involved in completing projects, they also overestimate the likely benefits – including the ease with which people will come up with ways to overcome obstacles.” —Flyvbjerg, professor at the University of Oxford’s Saïd Business School, here’s an interesting article that also mentions this quote.
  • Wealthy people want a monument, how can we make the monument useful?
  • The political use case is still a use case; it’s OK to include it if it unblocks other stories.
  • Make the trade-offs more visible to decision makers.
  • Keep in mind who you are talking to; decision makers are often more responsive to political/visionary/branding/impact language.
  • Listen: they will usually tell you what their priorities are, or give you clues.
  • It’s easier to evaluate a system from the outside of it. Being part of the system can blind you to important aspects of the system.

There followed a series of lightning talks from workshop attendees. I will not name names, but here are a few things from my notes which stand out to me now.

  • waiting even six months would have opened up possibilities
  • praise blinds you to that nagging voice that would normally have given you a heads up
  • seek out an honest opinion from someone outside of the project, avoid situations where no one will check your work
  • vendors are vendors, not your friends
  • retrospectives and honest evaluation are invaluable, and can help strengthen relationships, even new ones
  • postmortems after failures of any sort are healthy, and should help prevent similar failures in the future
  • testing all aspects of a design is important
  • communicating a failure in progress is difficult, but once you get past that first admission of a problem, things get better; stakeholders understand the problem space and the risks better than you think, and they are on your team, so keep them in the loop
  • have measures for what you’re doing, so you know whether you’ve done it

During the wrap-up, the group discussed tactics for surviving failure. The kick-off question was so interesting, I wrote it down: “How do you prepare collaborators for unexpected outcomes?” Here are a few things that came up during the discussion:

  • Admit that the work is an experiment
  • Agree on definitions: “prototype” is risky, because there is a desire to throw “working prototypes” into production, even if they aren’t “real”. “Beta” and “alpha” don’t have inherent meaning, so spell out exactly what you mean by these deliverables.
  • Is failure an endpoint?
  • Is failure OK?
  • What can we do to make failure OK?
  • When is failure unavoidable? Useful? Desirable?

At the end of the wrap-up, someone brought up the article Blameless Postmortems in a Just Culture; it’s worth a read. We follow this practice at UCLA Library and have seen positive outcomes from employing it.

After a quick jog across the street for lunch (a vegetarian burrito, yum!), I made it back in time for Jon Weisman’s afternoon workshop, Library Apps and the Modern Dev Workflow. Jon walked us through using OpenAPI/Swagger to develop an API spec, then building an actual application that delivers a service based on that spec, and deploying it to Heroku. Here are Jon’s notes on the workshop; they are worth taking the time to follow. We are using these tools in my team’s work at UCLA Library, and I am grateful to have had a chance to get my feet wet at this workshop.
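To give a flavor of the spec-first workflow: an OpenAPI document describes the API’s paths, parameters, and responses before any server code is written, and tooling can then generate stubs and documentation from it. Here is a minimal, hypothetical OpenAPI 3.0 sketch for a single library-item endpoint (the path, field names, and schema are my own invention for illustration, not from Jon’s workshop):

```yaml
openapi: "3.0.0"
info:
  title: Item Lookup API        # hypothetical service name
  version: "1.0.0"
paths:
  /items/{id}:
    get:
      summary: Fetch a single item record by its identifier
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested item record
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  title:
                    type: string
        "404":
          description: No item with that identifier
```

A spec like this can be pasted into the Swagger Editor for validation and interactive docs, which is roughly the starting point of the spec-then-implement workflow the workshop covered.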

Day 1

Sarah Roberts delivered the opening keynote (here’s a recording). The themes from this keynote ran through many of the presentations during the conference: a concern with ethics in computing, a concern with equitable labor arrangements, and an awareness of the realities of what sorts of labor are required to facilitate many modern online environments and services. Dr. Roberts’ keynote refocused my attention on a problem that is all too easy to forget: there are real people doing the job of content moderation and other online services like Mechanical Turk. They work in harsh conditions, and the work they are asked to do very often carries a significant psychological toll. Just knowing this fact is important: the topic of using Mechanical Turk does come up in library circles, and knowing the conditions these workers face will help us plan for a more equitable arrangement for this labor. I hope. Of course, the difficult part for us technology workers will be to speak up during project planning and be the squeaky wheel, to make sure this unseen labor is seen. I will try to rise to this challenge.

Dominique Luster from the Carnegie Museum of Art challenged us to “do the work of cultural competency” in her talk Machine Learning and Metadata with the Charles “Teenie” Harris Collection (here’s a recording). In part a cautionary tale about the dangers of overly detailed cataloging of images, and how that detail can obscure what’s actually important and significant about a collection, the talk is not merely “metadata shaming” but a deeper dive into how machine learning can be utilized to tackle a large-scale metadata cleaning project. Luster is an engaging speaker; if you only watch one of the talks from this conference, this should be the one.

Better late than never

Oh my, it’s been over a year; this article has been in draft that whole time… I don’t think I’ll ever finish it, so I will post the draft instead. This year, I will have to get back to this blog.

I was recently convinced that I need to look into Node.js to build simple webapps and tools, the sort of thing a programmer in a library is asked to do pretty much every day of the week. Yesterday, my friend Kevin Clarke from UCLA reminded me that he had mentioned Vert.x in the past, and that it is “like Node.js for Java.” He’s right. I do think Vert.x is at the top of my “things to play with more” stack. Thanks, Kevin!