
Instant Pot Sushi Rice

Jun 7, 10:32 PM

You know how you’re looking for a recipe, and you find one, but the author goes on and on and on, and you’re just looking for the ingredients? I have a story to tell, but I’m going to give you the recipe first.

Instant Pot Sushi Rice

(makes enough for about 3 or 4 rolls, double it if you plan to make more)
1 cup short grain brown rice
1 3/4 cup water
1 Tb rice vinegar
1 Tb agave nectar
3 dashes of salt

Dump it all in the cooker, stir, cook for 22 minutes, then let it natural release for 15 minutes. It'll come out a bit on the wet side; that's OK. Put it in a lidded glass container (I have a Pyrex bowl that's perfect for this) and pop it in the fridge for 30 minutes to an hour, and it'll be ready to use.

It's OK to refrigerate it and use it as needed; I estimate it'll keep about 3 days. Ha, fat chance: you'll burn through this so fast. When you want to use it for sushi, scoop out enough for however many rolls you want to make into a microwave-safe bowl, and nuke it for 45 seconds. Maybe splash a bit of rice vinegar on it before or after nuking; up to you, I don't.

Story time

Many years ago, at a training in Michigan, I met someone who told me her husband makes sushi for lunch every day, a habit he had acquired as a child. For years this has stuck with me, the idea of being able to casually make sushi for lunch. Easy enough for even a kid to do it. Today, I tried to “wing it” and make sushi with leftover brown rice for my daughter's school lunch. Suffice it to say, it turned into a “sushi burrito” because the rice was not the right consistency. So, I went looking for an Instant Pot sushi rice recipe, and found one, here. Then, I thought, I should see if I can actually refrigerate sushi rice and re-use it. Turns out, you can. You have to dig through the responses until you find one that says you can. I know, this is probably an abomination, but I don’t care, I just want a way to make sushi every day. And I’m here to tell you, I think this recipe is what it’ll take to do it. I made two rolls for my oldest, a surly teen with a voracious appetite. It was really fun to be able to make one roll after another, until he declared himself full.


Code4Lib 2019 Recap

Mar 30, 12:11 PM

Code4Lib 2019 was held in San Jose, CA from February 19 to February 22, 2019. The theme of this conference, for me, was service. As in volunteering, and giving back to my community. There were plenty of jobs that needed doing, and my usual method in these cases is to needle people I know into volunteering to do them. However, the people I know in the Code4Lib community were already very deeply involved in this conference… asking them to do more was a terrible idea (I’m embarrassed to admit that it took me a while to figure this one out). So… I decided the best thing to do was to volunteer for as many things as I could conceivably take on. This turned out to be great fun, and I’ll do it again. However, the conference week itself was a bit of a whirlwind. I cannot even imagine what it must be like for anyone more involved than I was (and I was, frankly, barely involved at all). Let’s just leave it at this: Code4Lib is an incredible community, and the yearly conference is a labor of love and devotion that we are seldom privileged to see anywhere. If you get a chance to go to this conference, please do consider volunteering for as much as possible… it’s worth it to see, up close, everyone else giving their all to pull this conference off.

Workshops

OK, this is a recap, so I will recap. The first day was for workshops. I had a workshop approved for the conference, but it was cancelled due to space constraints at the venue (and not many people signed up for it, either). So, after asking Dre if it was OK, I went to his legendary Fail4Lib workshop. So many interesting questions and observations came up during this workshop! I jotted a few of them down, and I’ll run through a few of them here:

The opening discussion centered around some assigned reading that Dre sent out before the workshop. I will mention this reading at the end, but the general topic was the idea of megaprojects, how they are funded, and why they fail. Here are a few snippets from my notes:

  • Is an economic lens the best lens to use to evaluate the success of a project?
  • “Not only do planners typically underestimate the time, difficulties and costs involved in completing projects, they also overestimate the likely benefits – including the ease with which people will come up with ways to overcome obstacles.” —Flyvbjerg, professor at the University of Oxford’s Saïd Business School, here’s an interesting article that also mentions this quote.
  • Wealthy people want a monument, how can we make the monument useful?
  • The political use case is still a use case; it’s OK to include it if it unblocks other stories.
  • Make the trade-offs more visible to decision makers
  • Keep in mind who you are talking to, decision makers are often more responsive to political/visionary/branding/impact language.
  • Listen: they will usually tell you, or give you clues about, what their priorities are
  • It’s easier to evaluate a system from the outside of it. Being part of the system can blind you to important aspects of the system.

There followed a series of lightning talks from workshop attendees. I will not name names, but here are a few things from my notes which stand out to me now.

  • waiting even six months would have opened up possibilities
  • praise blinds you to that nagging voice that would normally have given you a heads up
  • seek out an honest opinion from someone outside of the project, avoid situations where no one will check your work
  • vendors are vendors, not your friends
  • retrospectives and honest evaluation are invaluable, and can help strengthen relationships, even new ones
  • postmortems after failures of any sort are healthy, and should help prevent similar failures in the future
  • testing all aspects of a design is important
  • communicating a failure in progress is difficult, but once you get past that first admission of a problem, things will get better; stakeholders understand the problem space and the risks better than you think, and they are on your team, so keep them in the loop
  • have measures for what you’re doing, so you know whether you’ve done it

During the wrap-up, the group discussed tactics for surviving failure. The kick-off question was so interesting, I wrote it down: “How do you prepare collaborators for unexpected outcomes?” Here are a few things that came up during the discussion:

  • Admit that the work is an experiment
  • Agree on definitions: “prototype” is risky, because there is a desire to throw “working prototypes” into production, even if they aren’t “real”. “Beta” and “Alpha” don’t have inherent meaning, so spell out exactly what you mean by these deliverables.
  • Is failure an endpoint?
  • Is failure OK?
  • What can we do to make failure OK?
  • When is failure unavoidable? Useful? Desirable?

At the end of the wrap-up, someone brought up the article Blameless Postmortems in a Just Culture; it’s worth a read. We follow this practice at UCLA Library, and have seen positive outcomes from employing it.

After a quick jog across the street for lunch (a vegetarian burrito, yum!), I made it back in time for Jon Weisman’s afternoon workshop, Library Apps and the Modern Dev Workflow. Jon walked us through using OpenAPI/Swagger to develop an API spec, then building an actual application that delivers a service based on that spec, and deploying it to Heroku. Here are Jon’s notes on the workshop; they are worth taking the time to follow. We are using these tools in my team’s work at UCLA Library, and I am grateful to have had a chance to get my feet wet at this workshop.
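To give a feel for the spec-first workflow the workshop covered, here is a minimal sketch in Python. The spec fragment, the `/items/{id}` endpoint, and the `validate` helper are all invented for illustration; they are not from Jon's workshop materials. The idea is simply that the OpenAPI document describes the shape of a response, and the application can be checked against that shape:

```python
# Illustrative sketch: a tiny OpenAPI-style spec fragment, plus a minimal
# structural check of a response payload against the spec's schema.
# All names here (the /items/{id} path, the fields) are hypothetical.

SPEC = {
    "openapi": "3.0.0",
    "paths": {
        "/items/{id}": {
            "get": {
                "responses": {
                    "200": {
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "required": ["id", "title"],
                                    "properties": {
                                        "id": {"type": "integer"},
                                        "title": {"type": "string"},
                                    },
                                }
                            }
                        }
                    }
                }
            }
        }
    },
}

# Map OpenAPI primitive type names to Python types (subset, for the sketch).
TYPE_MAP = {"integer": int, "string": str, "object": dict}

def validate(payload, schema):
    """Check a JSON payload against a schema fragment: top-level type,
    required fields, and the types of any declared properties."""
    if not isinstance(payload, TYPE_MAP[schema["type"]]):
        return False
    for field in schema.get("required", []):
        if field not in payload:
            return False
    for field, sub in schema.get("properties", {}).items():
        if field in payload and not isinstance(payload[field], TYPE_MAP[sub["type"]]):
            return False
    return True

# Pull the response schema out of the spec, the same way a framework would.
schema = (SPEC["paths"]["/items/{id}"]["get"]["responses"]["200"]
          ["content"]["application/json"]["schema"])

print(validate({"id": 1, "title": "A book"}, schema))  # valid payload
print(validate({"id": 1}, schema))                     # missing "title"
```

In practice a tool like Swagger UI or a spec-aware framework does this validation for you; the point of the sketch is only the direction of the workflow, where the spec is written first and the code conforms to it.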

Day 1

Sarah Roberts delivered the opening keynote (here’s a recording). The themes from this keynote ran through many of the presentations during the conference: a concern with ethics in computing, a concern with equitable labor arrangements, and an awareness of what sorts of labor are actually required to facilitate many modern online environments and services. Dr. Roberts’ keynote refocused my attention on a problem that is all too easy to forget: there are real people doing the job of content moderation, and other online services like Mechanical Turk. They often work in harsh conditions, and very often the work they are asked to do carries a significant psychological toll. Just knowing this fact is important; as the topic of using Mechanical Turk does come up in Library circles, knowing the conditions these workers face will help us plan for a more equitable arrangement for this labor. I hope. Of course, the difficult part, for us technology workers, will be to speak up during project planning, and be the squeaky wheel, to be sure this unseen labor is seen. I will try to rise to this challenge.

Dominique Luster from Carnegie Museum of Art challenged us to “do the work of cultural competency” in her talk entitled Machine Learning and Metadata with the Charles “Teenie” Harris Collection (here’s a recording). In part a cautionary tale about the dangers of overly detailed cataloging of images, and how that detail can obscure what’s actually important and significant about a collection, the talk is not merely “metadata shaming” but a deeper dive into how machine learning can be used to tackle a large-scale metadata cleaning project. Luster is an engaging speaker; if you only watch one of the talks from this conference, this should be the one.

Better late than never

Oh my, it’s been over a year; this article has been in draft that whole time. I don’t think I’ll ever finish it, so I will post the draft instead. This year, I will have to get back to this blog.