Highlights from Curry On 2019

I had the privilege to attend Curry On in London earlier this week, and it was a great conference. The idea of the conference is to get people from both academia and industry together to meet and talk about programming languages and related technologies.

For those who weren’t able to attend, I thought I’d summarize my favorite talks. Recordings are also available; I link to the video for each highlighted talk below.

Simon Marlow: Glean: Facts About Code

(video)

Simon Marlow (of Haskell fame) talked about the system Facebook is building to host a repository of information about their code, called Glean. The idea is to put all information about the code into one place: the code itself, type/static analysis information, test results, bugs, and more.

The central organizing idea of Glean is that it stores “facts” about the code. Whenever the codebase changes (e.g. someone commits a change), new facts are computed, and those new facts are stored as “diffs” against the old facts (this reminds me a lot of how I understand git works). Thus, Glean is an incremental database.
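
To make the diff idea concrete, here is a minimal Python sketch of an incremental fact store. The names and representation here are entirely hypothetical; Glean’s real data model and query language are far more sophisticated.

```python
# Hypothetical sketch of an incremental fact store in the spirit of Glean.
# Each commit is recorded as a diff (facts added, facts removed) against the
# previous generation, rather than re-storing everything.

class FactStore:
    def __init__(self):
        self.diffs = []  # one (added, removed) pair per generation

    def commit(self, added, removed=frozenset()):
        """Record a new generation as a diff against the previous one."""
        self.diffs.append((frozenset(added), frozenset(removed)))

    def facts_at(self, generation):
        """Replay diffs up to a generation to materialize its facts."""
        facts = set()
        for added, removed in self.diffs[: generation + 1]:
            facts -= removed
            facts |= added
        return facts

store = FactStore()
store.commit(added={("function", "foo"), ("calls", "foo", "bar")})
store.commit(added={("function", "baz")}, removed={("calls", "foo", "bar")})
print(store.facts_at(1))  # {('function', 'foo'), ('function', 'baz')}
```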

They’re currently using Glean at Facebook for dead-code detection, code navigation, and other miscellaneous IDE features. It sounds, however, like they want to push this idea to the limits and see how much value they can get out of making a centralized knowledge base for their code. I personally think our current tools don’t do nearly enough for us in terms of helping us understand our code, so I’m curious to see what comes out of this in the future.

Frank Luan and Celeste Barnaby: Using ML for Code Discovery at Facebook

(video)

Frank and Celeste presented on some of the ways Facebook is using machine learning to give developers more information about their code (and once again I’m happy to see more people working on this). My major takeaway was about their “neural search” work. The idea is that a process crawls over the entire Facebook codebase and records all “words” found in the code (where a “word” can either be a word found in comments, or a part of a variable or function name). Then after passing that information through some sort of neural network, they have a system that can determine what snippets of code are likely relevant to what words (even words that might not appear around that snippet, just based on related words found elsewhere in the codebase).
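
As a toy illustration of just the indexing half of that pipeline, here is a Python sketch that splits identifiers into “words” and ranks snippets by word overlap with a query. The real system replaces this naive overlap scoring with a learned model, and all of the names and data below are made up.

```python
import re

def words_of(code):
    """Extract 'words' from code: comment words plus the camelCase and
    snake_case pieces of identifiers, lowercased."""
    return {w.lower() for w in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", code)}

def search(corpus, query):
    """Rank snippets by word overlap with the query (toy scoring only)."""
    q = words_of(query)
    return max(corpus, key=lambda snippet: len(q & words_of(snippet)))

corpus = [
    "def open_tcp_socket(host, port): ...",
    "def parse_json(text): ...",
]
print(search(corpus, "how do I open a TCP socket?"))
# -> "def open_tcp_socket(host, port): ..."
```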

In the end, they have an automated system that can respond to Stack-Overflow-style queries such as “how do I open a TCP socket?” with a relevant snippet of code. From what I recall, they said the snippet is relevant about half the time. That’s not something you’d want to rely on exclusively, but I could see such a system complementing the human-sourced answers on Stack Overflow with automated answers of its own.

Edwin Brady: Idris 2: Type-Driven Development of Idris

(video)

I’ve heard Edwin’s talks are great, but this was my first chance to see one. He didn’t disappoint. This is a guy who clearly just really enjoys working on this stuff, and that energy and sense of fun really comes across in his talks.

The talk at Curry On was about Idris 2, the successor to his dependently typed language Idris. The big feature he focused on was type-driven code synthesis: if you have a function that takes as input some argument of an algebraic data type (say, a List type with the usual Cons/Nil definition), the underlying Idris 2 system can generate the case split for you, then search for an expression that fits the expected return type. In certain cases, depending on the type, the resulting code might be exactly what you want.
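
Since dependent types don’t fit in a blog snippet, here is only a loose Python analogue of the case-split-then-search pattern: it enumerates a hand-picked (and entirely hypothetical) candidate expression per case and keeps the first combination consistent with some examples. Idris 2 drives this same kind of search with types rather than examples.

```python
# Toy "case split, then search": synthesize a list function by trying one
# candidate body per case (nil / cons) against input-output examples.

def synthesize(examples):
    nil_candidates = [lambda: 0, lambda: 1]
    cons_candidates = [
        lambda head, rec: rec,         # ignore the element
        lambda head, rec: 1 + rec,     # count the element
        lambda head, rec: head + rec,  # sum the elements
    ]

    def make(nil, cons):
        def candidate(xs):
            return nil() if not xs else cons(xs[0], candidate(xs[1:]))
        return candidate

    for nil in nil_candidates:
        for cons in cons_candidates:
            candidate = make(nil, cons)
            if all(candidate(xs) == out for xs, out in examples):
                return candidate
    return None

length = synthesize([([], 0), ([7], 1), ([7, 8, 9], 3)])
print(length([1, 2, 3, 4]))  # 4
```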

According to Edwin, this pattern of case-split-then-search-for-code is quite common. He’d like to see if he can automate much of this process into a kind of “programmable tactics” for writing code. Hard to say if that would go anywhere or not at this point, but it’s an intriguing idea that sounds valuable if it works out.

He admits that the programs he’s demoing are completely toy programs, so who knows if this approach would work for programming larger programs, or even moderately sized ones. It was a really fun talk, though, and one that gave me yet another good reason to learn about dependently typed programming.

Adelbert Chang: Microservice Architecture: A Programming Languages Perspective

(video)

Adelbert started off by talking about the kinds of problems one must handle when deploying and managing services in a microservice architecture. For example, how do you make sure all of the dependencies for your service are ready to go when you deploy your service? How do you handle service maintenance, or network partitions? How do you update a service’s message format without breaking the services that rely on the previous format?

The solution he’s been using is a project called Nelson, which is a kind of deployment management layer/runtime for your whole microservice system. Whenever you bring up your system, Nelson knows about the dependencies between different services and can bring them up in the right order. Similarly, when you want to upgrade a service, it can bring the service dependencies up/down in the proper order without the programmer having to think about it. If a service dependency is no longer being used, the management layer notices and shuts it down automatically.

Adelbert pointed out that much of this system is similar to what you get from a language runtime like the JVM. For instance, the part about removing unused services is just like garbage collection. The philosophy here as I understand it (and which I agree with) is that we can get a lot of leverage by treating the entire “system” as one big program and applying analogous techniques from the language-implementation world, like garbage collection, packages, and type checking.
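
To illustrate those two behaviors, here is a hypothetical Python sketch (this is not Nelson’s actual model or API): bring services up in dependency order, then “garbage collect” services no longer reachable from any entry point.

```python
from graphlib import TopologicalSorter

deps = {  # service -> services it depends on (all names hypothetical)
    "web": {"auth", "catalog"},
    "auth": {"db"},
    "catalog": {"db"},
    "db": set(),
    "legacy-cache": set(),  # nothing depends on this anymore
}

# Bring-up order: dependencies come first.
print(list(TopologicalSorter(deps).static_order()))
# e.g. ['db', 'legacy-cache', 'auth', 'catalog', 'web']

def reachable(roots):
    """Mark phase: everything transitively reachable from the entry points."""
    seen, stack = set(), list(roots)
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(deps[s])
    return seen

# Sweep phase: anything unreachable can be shut down automatically.
live = reachable({"web"})
print([s for s in deps if s not in live])  # ['legacy-cache']
```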

Adelbert is currently looking into interface description languages, or “IDLs”, to add better notions of type-checking between services to these systems, including subtyping (which is crucial for upgrading services). He also mentioned a paper, “Verifying Interfaces Between Container-Based Components”, based on some of these ideas, and he’s working with some of the authors of that paper to take the ideas further.

I’m really excited about this space. Reasoning about distributed systems using techniques from programming languages was the general theme of my dissertation, so I’m really glad to see other people working on these problems. I’m excited to see what they’ll come out with next.

Lin Clark & Till Schneidereit: Bringing WebAssembly Outside the Web with WASI

(video)

Lin and Till capped off the first day with a keynote on the WebAssembly System Interface, or WASI. The thesis here is that two of the biggest benefits of WebAssembly are portability and the sandboxed security model. Those benefits aren’t restricted to the web alone, though: desktop applications can benefit, too. This is the idea of WASI: it’s a simple interface for running WebAssembly programs outside the context of a web browser and allowing supervised access to operating-system components like the file system or network connections.

I’m especially glad to see the object capability model taking hold in this space. For those not familiar, the idea of object capabilities is that to access something like, say, the file system, you must hold a capability token that grants you that access. Typically, the user running the system grants the program certain capabilities (much like the permissions granted to an app on an Android phone), and the program then has to hand off those tokens to any dependencies that need to access the relevant systems. This avoids the “ambient authority” problem, where a malicious component gains access to systems it shouldn’t touch simply because it runs as part of a program executing with some particular user’s authority.
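
Here is a sketch of the pattern itself in Python, with hypothetical names and paths. Of course, Python can’t actually enforce this discipline (any module can import os and bypass it); WebAssembly’s sandbox is what makes the pattern enforceable in practice.

```python
import pathlib

class DirCapability:
    """Grants read access to files under one directory, and nothing else."""
    def __init__(self, root):
        self.root = pathlib.Path(root).resolve()

    def read(self, name):
        path = (self.root / name).resolve()
        if self.root not in path.parents:
            raise PermissionError(f"{name} is outside {self.root}")
        return path.read_text()

def render_config(cap):
    # A dependency can only read what its capability allows; it holds no
    # ambient authority to open arbitrary paths like "../../etc/passwd".
    return cap.read("app.conf")

# The user (or runtime) decides what to grant at the top level:
config_dir = DirCapability("/etc/myapp")  # hypothetical path
print(render_config(config_dir))
```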

One slogan I found useful was that we should be able to “read the label” on a piece of software in order to know what it does. Today, the only way to truly know what a program might try to do is typically to read its code, which doesn’t scale. The nutrition-label analogy sounds like a good approach and gets the idea across.

Aside from the technical content of the talk, I continue to be astounded by Lin’s skill at conveying complex technical information. Having spent much of the last two years writing about my research and having put together several different research/technical talks during grad school, I have a deep understanding of how difficult it can be to find a way to communicate the ideas you might have in your head in a simple way. Lin’s slides are beautifully simple and tell the story using a narrative and “characters” that just about anyone can connect to. She’ll be a role model for me the next time I have to give a talk, and I recommend everyone check out more of her cartoons on web technology at Mozilla Hacks.

Cynthia Solomon: Logo, A Computer Language to Grow With

(video)

Cynthia started off day 2 with a fascinating keynote on the language Logo and some of the works it went on to inspire. A large part of the idea of designing Logo (and the curriculum around it) was to focus on the general underlying ideas of computing and ways to teach them rather than surface issues of a particular language: things like procedural thinking, debugging, project-based learning, talking about what you are doing, having objects to play with, and anthropomorphic thinking. Logo was not just a programming language, but a culture: a way of thinking about computers and learning how to talk about what you’re thinking about.

If you haven’t heard of Logo before, the general idea is that you control a little robotic “turtle” and tell it to walk forward, turn, walk backwards, etc., while also giving commands to raise or lower a pen so that the turtle is drawing images as it goes. What I had never realized, but I suppose should have been obvious in retrospect, was that the turtle was an actual, physical object! Cynthia showed videos of the turtle, which looked like a big upside-down bucket on wheels, drawing a picture on a piece of paper while the students who wrote the program watched their results come to life. I had always assumed the turtle was just a metaphor and/or something you saw on-screen in a simulator, so it was literally a revelation to see videos of the actual turtle running around.
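
If you want to play with the idea yourself, Python’s standard-library turtle module is a direct descendant of Logo’s turtle. Something like the classic Logo square, REPEAT 4 [FD 100 RT 90], becomes:

```python
import turtle

t = turtle.Turtle()
t.pendown()         # lower the pen so the turtle draws as it moves
for _ in range(4):
    t.forward(100)  # walk forward 100 units
    t.right(90)     # turn 90 degrees clockwise
turtle.done()       # keep the window open until it's closed
```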

Cynthia went through a lot of the history around Logo, both before and after. I don’t have time to recap it all, but I recommend watching the video if you’re interested. Some of the projects that Logo went on to inspire were a “Lego Logo” robotics system from Lego (this must have either been Mindstorms or some precursor), ButtonBox which gave 3-year-olds a board of buttons to use to program a turtle robot, and the popular learn-to-program language Scratch.

Tomas Petricek: The Search for Fundamental Software Engineering Principles

(video)

Spoiler alert: this was my favorite talk of all of Curry On. Tomas asked the question “what should we be teaching students about software engineering that will still be relevant in 100 years?”. It’s a hard question. In computer science, we at least have some fundamental ideas of computing like Turing machines, the halting problem, and Big-O analysis that underpin much of the field, but when it comes to actually doing something with that knowledge (i.e. software engineering), we lack the same kind of fundamental ideas. In fact, it’s hard to say anything concrete about a piece of software at all: how long will it take to build? How much will it cost? Will it produce the right answers? If not, how often will it be wrong? Can it crash? Most of these are not “formal” questions in the sense that questions about science or math are formal, so they’re much harder to answer definitively.

As a result, to the extent that we teach students anything in school about software engineering, it tends to be based on trends that may or may not be around in 20-30 years (e.g. design patterns, tools like git, the agile methodology, language peculiarities). I would quibble a little with this and say that many schools at least try to teach some more fundamental principles, but since none of us really knows what those are, about the best we can do is teach what we have found from experience to be “best practices” in industry.

Tomas’ suggestion is that we start by studying the context in which many of these best practices developed. In many cases, the decisions made sense only in that context, so we should understand it before adapting the practices to our own situation.

Tomas then got into the complexities of software, and where they come from. An insightful question he presented was “which is more complex: building a chess program that can defeat a grandmaster, or writing an accounting system that can calculate VAT in the UK and France?”. Research often focuses on the former, because although it’s a hard problem it’s easier to specify, while the latter is more common in our day-to-day work. Much of the work that goes into software engineering is about managing this complexity, and finding ways to mitigate the risk it entails. My overall sense was that Tomas sees the way forward as finding ways to tame this complexity and risk to a manageable point, rather than completely eliminating it. He seems to believe (and I’m generally convinced) that software engineering will likely never be a problem that can be cleanly defined like the principles of computer science mentioned above, but we can study what kinds of issues happen in the aggregate (e.g. broken deadlines, security breaches) and find ways to deal with them.

Overall, I’d say his thesis was that we should be studying software engineering more like a social science than a physical one, and using the techniques from those fields. In particular, he emphasizes understanding the history and trying to find useful metaphors from other fields like urban planning.

Reflecting on this later (and talking to Tomas after the talk), I wondered whether we already have any software engineering principles that will always be relevant. My thought was that abstraction is such a principle. Time and again, over several generations of programming languages, we see abstraction as the key mechanism for reasoning about software without understanding every tiny detail. Abstraction is what allows software to scale without collapsing in on itself, so I believe we’ll still be teaching it 100 years from today.

All in all, this talk was near and dear to my heart. I’ve long felt that teaching people to be software engineers by having them major in computer science is like asking mechanical engineers to major in physics. Yes, you need to understand the underlying science to do your work, but the work requires a much better knowledge of how to apply the ideas of that science in practice, and what can go wrong in the real world. I hope more researchers follow Tomas’ lead and come up with better answers for us.

Tudor Girba: Moldable Development

(video)

Tudor gave a description and demo of the Glamorous Toolkit that he and his team use for what he calls “moldable development”. The idea is that as software engineers, much of our time is spent not on the writing of code per se, but trying to understand what has already been written. The only method most development tools provide to do that is by reading the code, line-by-line. According to Tudor, this doesn’t scale. Instead, we need a moldable development environment: one that allows us to write the tools we need to understand our particular programs in a quick and easy way.

Tudor showed several examples of this, such as a color picker for visualizing different colors, a tree diagram, and a graph + arrows diagram for visualizing the flow of messages. Key to all of this is that it’s easy and lightweight to create all of these tools. The environment is built on top of a Smalltalk environment, which is built with this kind of idea in mind. The video of Tudor’s talk gives a much better idea of the kinds of tools and development methodology he’s proposing than I can describe in words alone.
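
As a toy analogue of the idea (the Glamorous Toolkit is Smalltalk-based and far richer than this), one could imagine a registry of lightweight per-type views, so that inspecting a Color object shows a swatch rather than raw fields. Everything below is hypothetical Python, just to convey the “cheap custom tool” flavor:

```python
views = {}

def view(cls):
    """Decorator that registers a custom view for one type."""
    def register(fn):
        views[cls] = fn
        return fn
    return register

def inspect(obj):
    """Show an object using its registered view, falling back to repr."""
    print(views.get(type(obj), repr)(obj))

class Color:
    def __init__(self, r, g, b):
        self.r, self.g, self.b = r, g, b

@view(Color)
def color_view(c):
    # A tiny "color picker" view: render a hex swatch, not raw fields.
    return f"#{c.r:02x}{c.g:02x}{c.b:02x} ████"

inspect(Color(255, 128, 0))  # #ff8000 ████
```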

I like the idea, but I’m skeptical that this approach can really make the understanding of code that much faster. Most of his demos seemed to be about visualizing data, but when I’m trying to understand a piece of code, I find that it’s usually the algorithm that’s hard to understand. Algorithms don’t usually have an obvious visual representation in the way data does, so I’m not convinced the moldable approach works there. That said, (a) I’d be happy to be proven wrong, and (b) this idea would still be great for visualizing lots of different kinds of data.

Alastair Donaldson: GraphicsFuzz: Metamorphic Testing for Graphics Shader Compilers

(video)

Alastair gave a talk on metamorphic testing, a term I had heard before but had never understood. It starts with the problem that it’s often difficult to come up with an extensive list of test cases for something like a compiler, especially if you’re really trying to eliminate every bug. Metamorphic testing offers a complementary approach: instead of coming up with test programs and expected results manually, start with some known program, then modify it in a way that should produce an “equivalent” result. This might mean wrapping a block of code in an “if” test that’s always true, or expanding a number into an expression that computes that number. If you compile and run both the original program and the mutated one and they produce different results, you have a bug in your compiler.
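
Here is a minimal Python sketch of the idea. A real harness would drive an actual shader compiler; here Python’s eval stands in for “compile and run”, and the two mutations mirror the examples above.

```python
import random

def always_true_wrap(expr):
    """Wrap an expression so its value is unchanged: an opaque 'if true'."""
    return f"({expr}) if (1 + 1 == 2) else 0"

def expand_constant(expr):
    """Replace the constant 3 with an expression that computes 3."""
    return expr.replace("3", "(10 - 7)")

def metamorphic_test(expr, mutations, trials=100):
    """Check that random 'equivalent' mutants evaluate to the same result."""
    for _ in range(trials):
        mutated = random.choice(mutations)(expr)
        if eval(expr) != eval(mutated):
            return f"BUG: {expr!r} != {mutated!r}"
    return "ok"

print(metamorphic_test("3 * 5 + 1", [always_true_wrap, expand_constant]))
```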

The benefit of this approach is that it’s easy to generate lots of such “equivalent” programs randomly, so a computer can easily produce a large number of test cases to stress-test your compiler. Alastair and his team used this approach to test Android shader compilers from a number of different vendors, and they found many bugs.

I liked that this approach offers an easy way to explore many corner cases of programs that manual generation of test cases would otherwise overlook. The approach isn’t necessarily applicable to all scenarios, and I don’t know that I have a way to use it in my work today, but it’s a tool worth keeping in my testing toolbox.

Nate Foster: Hyperproperties

This was actually a talk at the associated Papers We Love event the day before the conference itself started. Nate gave a great talk on a relatively recent (2010) idea called “hyperproperties”. The idea is a generalization of the notion of “properties” from the world of model checking. If you’ve heard of safety properties and liveness properties before, then you’ve got the right idea.

Suppose we can model the execution of some process as a (possibly infinite) sequence of states. Think of each state like the state your program is in when you run it in a debugger and click the “next step” button. I’ll call this state sequence a “trace”. We might want to specify what it means for a trace to be “good”: e.g., whether it always avoids deadlock between two threads, or whether every incoming request is eventually processed. For every property like that we might come up with, there is a corresponding set of traces that have that property. Thus, a trace is “good” if it’s in the set described by that property, and so the set of traces effectively is the property.

Researchers have studied this framework for describing properties for the last several decades, and it has proved useful for verifying certain kinds of program behavior. In particular, the most commonly used kinds of properties are safety properties that say that some particular bad thing never happens (e.g. “this program will never crash”), and liveness properties that say that some particular good thing always happens (e.g. “Function foo always terminates”).

Some things we’d like to say about our programs are not expressible as trace properties, though. For example, maybe you have a latency requirement that 99% of all requests are handled within 100 ms. No single trace can satisfy or violate that requirement on its own. Instead, this kind of behavior is defined in terms of a set of traces. This is the idea of hyperproperties: a hyperproperty is defined as the set of all sets of traces that have some kind of behavior.
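
To make the distinction concrete, here is a deliberately simplified Python model in which each trace is just a list of per-request latencies (the data is made up): an ordinary trace property judges one trace at a time, while the p99 requirement can only be judged against the whole set.

```python
import math

def all_under_a_second(trace):
    """An ordinary trace property: judged one trace at a time."""
    return max(trace) < 1000

def p99_latency(traces, limit_ms=100, quantile=0.99):
    """A hyperproperty: judged only against the whole set of traces.
    No single trace can satisfy or violate it on its own."""
    latencies = sorted(ms for trace in traces for ms in trace)
    cutoff = latencies[math.ceil(quantile * len(latencies)) - 1]
    return cutoff <= limit_ms

traces = [[12, 40, 95], [30, 70], [85, 99, 101]]  # hypothetical runs
print(all(all_under_a_second(t) for t in traces))  # True
print(p99_latency(traces))  # False: with only 8 requests, the single
                            # 101 ms request already lands at p99
```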

This work is still at a relatively early stage. Most of the existing work, as I understand it, is around formalizing the idea, developing theories of hyperproperties, and thinking about how they might be used. For example, a classic result about trace properties says that every property is the intersection of a safety property and a liveness property, and it turns out that a similar result holds for hyperproperties.

I’m curious to explore hyperproperties further and see whether there might be interesting applications, though. I’m working at Google now, and a big concern in the distributed system I work on is the SLOs (“service level objectives”) that have a similar flavor to the 99% latency requirement above. Perhaps there could be some value in thinking of these SLOs as hyperproperties, although it could equally be the case that the work is still too young to be applicable in such a scenario.

Conclusion

I had a great time at Curry On, as well as the colocated Papers We Love event. The conference had the perfect mix of academic and industry talks and participants, and I saw and heard a lot about how to make software engineering better. We may not have solved every issue, but I hope there are many more events like this in the future so those of us who care deeply about these issues can get together, talk, and try to improve the situation a little bit at a time.
