Lionel C. Briand

Christian is a Senior Security Engineer at Mozilla Corporation – the company behind the Firefox browser. Christian’s speciality is “grammar-based fuzzing”: feeding programs such as the Firefox JavaScript interpreter with inputs that are syntactically correct, yet sufficiently unusual to trigger uncontrolled behavior. In 2012, the LangFuzz tool he wrote as part of his Master’s thesis found more than 105 security vulnerabilities in Firefox, netting him 53,000 US$ in bug bounties.
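In its simplest form, grammar-based fuzzing derives inputs by randomly expanding the nonterminals of a formal grammar. The sketch below is a deliberately tiny, hypothetical illustration of that idea in Python – LangFuzz itself works quite differently, recombining code fragments mined from real test cases:

```python
import random

# Toy grammar for expression-like inputs: each nonterminal maps to a
# list of productions, each production being a list of symbols.
# (Hypothetical example -- far simpler than a real JavaScript grammar.)
GRAMMAR = {
    "expr": [["expr", "+", "expr"], ["(", "expr", ")"], ["term"]],
    "term": [["x"], ["1"], ["[]"], ["'a'"]],
}

def generate(symbol, depth=0, max_depth=8):
    """Expand `symbol` into a concrete string by random derivation."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit verbatim
    productions = GRAMMAR[symbol]
    # Beyond the depth limit, always take the last (simplest)
    # production so that the derivation is guaranteed to terminate.
    prod = random.choice(productions) if depth < max_depth else productions[-1]
    return "".join(generate(s, depth + 1) for s in prod)

if __name__ == "__main__":
    random.seed(0)
    for _ in range(5):
        print(generate("expr"))
```

Every derived string is syntactically valid by construction, yet random expansion quickly produces combinations no human test writer would think of – exactly the "correct but unusual" inputs described above.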

Christian holds a Google V8 Achievement Award, 20 Chromium Security Rewards, and 12 Mozilla Security Bug Bounty Awards. To date, his fuzzing work has found and closed more than 4,000 bugs in the JavaScript interpreter. He is giving a keynote at ISSTA.

Christian – What is it that inspired LangFuzz? Had you been working with fuzzers before? What was it you wanted to improve upon?

Prior to LangFuzz, I hadn’t worked with any fuzzers, but I was well aware of the concepts. We learned about basic fuzzing in one of our security lectures at university, and I found the idea interesting. My initial idea wasn’t even to improve anything; I wanted to try this technique to find real issues and then see where to go from there.

Why JavaScript, and why Firefox?

In our lecture, we learned about the usual fuzzing targets, like text documents or images. I figured that these targets are already well tested and was looking for something new. Then I had the idea to fuzz a language interpreter instead and JavaScript is one of the few places where a bug has security implications, due to processing untrusted input from the web. The choice for Firefox was quickly made because the code base and the bug database are open and in previous research, we had already worked with both. From a scientific standpoint, such openness is a great opportunity for evaluation.

How were your initial reports received by the Firefox and Chrome teams?

In general, all of the reports were received well and got the necessary attention. Sometimes, developers were unaware of the testing method and asked why one would produce such inputs, but the situation was quickly resolved. Both teams also quickly informed me which additional testing options I should use to find more bugs, and what is most valuable to them.

In the first three months of running LangFuzz, you earned 53,000 US$ in bug bounties. Did you actually get that money? Does it come as a big check?

Both companies of course paid all of the bounties according to their programs. Bounties were paid on a weekly basis so there wasn’t an opportunity to receive a big check.

What is your role today in Mozilla?

I work as a Senior Security Engineer in the Platform Fuzzing Team. Our task is to develop fuzzing tools and apply them to the platform code, which forms the core of Firefox. My current tasks are of course centered around JavaScript testing, but I am also working on scaling tools (FuzzManager), improving existing tools (e.g. AFLFuzz) and other experimental things. Many of these projects are open source and available on GitHub.

What are the most frequent errors that LangFuzz discovers? Any common patterns that developers should avoid?

The JavaScript engine is a very complex piece of code and while there are some patterns that we keep seeing, they are probably very specific to our implementation. There are for example many problems with garbage collection and pointer ownership. I think every sufficiently complex software project has its own recurring patterns and it’s an important step for developers and security engineers to identify those.

How do developers react when you send them yet another bug? Do you fix things yourself?

Most of our developers are generally thankful for our bug reports. The JavaScript team in particular considers our fuzzing invaluable for the project’s success. However, we also sometimes hit roadblocks with other developers who don’t see the immediate use that fuzzing brings them. There are a lot of psychological aspects to one team testing another team’s code. Many of these problems don’t have technical solutions, but need education (realizing the use) and good management (having the resources) to be tackled successfully.

I previously fixed simple bugs myself, but in general, fixing bugs in such a complex part of our code is beyond my knowledge.

How many bugs has LangFuzz discovered so far? Do you apply LangFuzz to other JavaScript interpreters?

To date, LangFuzz has discovered about 2300 bugs in Mozilla code. Currently, the tool is also being used by Google to test the V8 engine on their compute grid (called ClusterFuzz), where it has also found numerous bugs.

Do you plan to release langfuzz to the public? Isn’t there a risk that it falls into the wrong hands?

Open source is one of the most valuable principles that we have at Mozilla. This also applies to our fuzzing tools. Some of our tools are currently private, because we keep finding too many critical bugs with them and our scaling isn’t yet large enough to guarantee that we find those bugs first. However, in general, we aim to release all of our fuzzing tools, including LangFuzz, to the public. Of course there is a risk involved with that, but we believe that the benefits of receiving contributions (which possibly allow us to find new bugs) outweigh these risks. Also, skilled attackers are very likely capable of creating such tools themselves, maybe even better ones. One should not forget that exploiting the bugs found with these tools also requires a lot of knowledge, and people with that knowledge are typically also capable of building such tools. Putting the full power of the community against that is one of the ways to be faster in that arms race.

During your studies, you worked in a group that focuses on formal system verification. Do you see constructive measures that would make fuzz testing obsolete?

In my experience, the current language, performance, and system requirements often make formal verification infeasible. While verification on the protocol level can help identify potential problems (e.g. with TLS), code-level verification seems impossible to me for a project like Firefox, or even just the JavaScript engine. Maybe in the future, with different languages, we will have the opportunity to verify isolated systems, but I currently doubt that fuzz testing will be obsolete any time soon.

Where do you see the future of fuzz testing?

In my opinion, the most important future aspects for fuzzing are guided fuzzing based on feedback and, associated with that, automated learning for fuzzers. Projects like AFLFuzz have started to show us the value of coverage feedback for fuzzers, allowing them to find bugs involving complex data structures while having no initial knowledge of the code or format used. I think this is just the beginning and there is a lot more to investigate in that area: Ideally, fuzzers would be able to learn about the code and structures they are testing, allowing quicker deployment of these tools and making them even easier to use.
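The coverage-feedback idea can be illustrated with a toy loop: mutate inputs at random, run them against the target, and keep only those inputs that exercise new branches. This is a hypothetical, minimal sketch in Python – real tools like AFL instrument compiled code and track edge coverage far more efficiently:

```python
import random

def target(data):
    """Toy program under test: returns the set of branch ids it hits.
    It only goes 'deeper' as the input matches more of a magic string."""
    cov = set()
    for i, ch in enumerate("FUZZ"):
        if len(data) > i and data[i] == ch:
            cov.add(i)
        else:
            break
    return cov

def mutate(s):
    """Replace or insert one random printable character."""
    pos = random.randrange(len(s) + 1)
    ch = chr(random.randrange(33, 127))
    if pos < len(s) and random.random() < 0.5:
        return s[:pos] + ch + s[pos + 1:]   # replace one character
    return s[:pos] + ch + s[pos:]           # insert one character

def fuzz(seed="A", rounds=20000):
    corpus, seen = [seed], set(target(seed))
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        cov = target(candidate)
        if cov - seen:          # new branch reached: keep this input
            seen |= cov
            corpus.append(candidate)
    return corpus, seen
```

Without the coverage check, blind mutation would almost never guess the full magic string; with feedback, the fuzzer climbs one branch at a time, with no prior knowledge of the input format – the property the answer above highlights.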

Thanks a lot. See you at ISSTA!

Thank you, I’m very much looking forward to it!
