keynote at ISSTA
Christian, what was it that inspired LangFuzz? Had you been working with fuzzers before? What did you want to improve upon?
Prior to LangFuzz, I hadn’t worked with any fuzzers, but I was well aware of the concepts. We learned about basic fuzzing in one of our security lectures at university, and I found the idea interesting. My initial goal wasn’t even to improve anything; I wanted to try to use this technique to find real issues and then see where to go from there.
How were your initial reports received by the Firefox and Chrome teams?
In general, all of the reports were received well and got the necessary attention. Sometimes, developers were unaware of the testing method and asked why one would produce such inputs, but the situation was quickly resolved. Both teams also quickly let me know which additional testing options I should use to find more bugs, and what is most valuable to them.
In the first three months of running LangFuzz, you earned US$53,000 in bug bounties. Did you actually get that money? Does it come as a big check?
Both companies, of course, paid all of the bounties according to their programs. Bounties were paid out on a weekly basis, so there was never an opportunity to receive one big check.
What is your role today in Mozilla?
What are the most frequent errors that langfuzz discovers? Any common patterns that developers should avoid?
How do developers react when you send them yet another bug? Do you fix things yourself?
I previously fixed simple bugs myself, but in general, fixing them is beyond my knowledge in this complex part of our code.
To date, LangFuzz has discovered about 2300 bugs in Mozilla code. Currently, Google also uses the tool on its compute grid (called ClusterFuzz) to test the V8 engine, where it has likewise found numerous bugs.
Do you plan to release LangFuzz to the public? Isn’t there a risk that it falls into the wrong hands?
Open source is one of the most valuable principles that we have at Mozilla. This also applies to our fuzzing tools. Some of our tools are currently private, because we keep finding critical bugs with them so frequently that our scaling isn’t large enough yet to guarantee that we find those bugs first. In general, however, we aim to release all of our fuzzing tools, including LangFuzz, to the public. Of course there is a risk involved, but we believe that the benefits of receiving contributions (which may allow us to find new bugs) outweigh it. Also, skilled attackers are very likely capable of creating such tools themselves, maybe even better ones. One should not forget that exploiting the bugs found with these tools also requires a lot of knowledge, and people with that knowledge are typically also capable of building such tools. Putting the full power of the community against that is one of the ways to be faster in that arms race.
During your studies, you worked in a group that focuses on formal system verification. Do you see constructive measures that would make fuzz testing obsolete?
Where do you see the future of fuzz testing?
In my opinion, the most important future aspects for fuzzing are guided fuzzing based on feedback and, associated with that, automated learning for fuzzers. Projects like AFLFuzz have started to show us the value of coverage feedback for fuzzers, allowing them to find bugs involving complex data structures while having no initial knowledge of the code or format used. I think this is just the beginning and there is a lot more to investigate in that area: Ideally, fuzzers would be able to learn about the code and structures they are testing, allowing quicker deployment of these tools and making them even easier to use.
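The coverage-feedback idea described above can be illustrated with a minimal sketch: a toy "instrumented" target reports which branches an input exercises, and a mutant is kept in the corpus only when it reaches a branch no previous input did. All names and the target function here are illustrative assumptions, not AFL's or LangFuzz's actual implementation.

```python
import random

def target(data: bytes) -> set:
    """Toy instrumented target: returns the set of 'branch IDs' the
    input exercises (a stand-in for real coverage instrumentation)."""
    branches = set()
    if data:
        branches.add("nonempty")
        if data[0] == ord("F"):
            branches.add("magic-F")
            if len(data) > 1 and data[1] == ord("U"):
                branches.add("magic-FU")
    return branches

def mutate(data: bytes) -> bytes:
    """Random byte-level mutation: flip, insert, or delete one byte."""
    buf = bytearray(data)
    op = random.choice(("flip", "insert", "delete"))
    if op == "flip" and buf:
        buf[random.randrange(len(buf))] = random.randrange(256)
    elif op == "insert":
        buf.insert(random.randrange(len(buf) + 1), random.randrange(256))
    elif op == "delete" and buf:
        del buf[random.randrange(len(buf))]
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 50000) -> list:
    """Coverage-guided loop: keep only mutants that hit new branches."""
    corpus = [seed]
    seen = target(seed)
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = target(candidate)
        if cov - seen:          # new coverage -> add input to corpus
            corpus.append(candidate)
            seen |= cov
    return corpus

random.seed(0)
corpus = fuzz(b"A")
```

Even this crude loop finds the "magic" byte sequence without any knowledge of the target, because each partial success (reaching `magic-F`) is kept as a stepping stone for further mutation; that stepping-stone effect is what makes coverage feedback so powerful.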
Thanks a lot. See you at ISSTA!
Thank you, I’m very much looking forward to it!