Lionel Briand is one of the most prolific (and most cited) researchers in Software Engineering. His research covers a wide range of software V&V problems, often addressed in collaboration with industry. In the last 15 years, his work has focused on model-based testing and verification, often using search-based approaches.
Lionel heads the Software Verification and Validation Lab at the SnT centre, University of Luxembourg. He is an IEEE Fellow, has received the IEEE Harlan Mills Award, and recently received an ERC Advanced Grant, Europe’s most prestigious research funding for individuals. Lionel is a keynote speaker at ISSTA 2016.
Lionel – you work on the verification and validation of “software-intensive systems”. What does this mean?
I guess I don’t need to repeat standard definitions. In practice, most of the time, it means removing latent, critical faults from software systems in the most cost-effective way possible, under time and resource constraints. In industry, the focus is often on testing and verification in the large. I believe this is where the most acute research challenges lie, though it is only addressed by a relatively small subset of the published research.
Can you give some examples of the systems you work with?
I have worked with more than 30 industry partners over the last 20 years, in domains including energy, avionics, satellite systems, automotive, government, and finance. For example, I currently work on systems processing credit card transactions, controllers in automotive power-train systems, and satellite ground-control systems.
How can you ensure the dependability of these systems?
For most systems, including all the ones I have worked with, we can never be completely sure about dependability. Engineers know and accept that. The main goal in practice is to decrease risks as much as possible within time constraints. Solutions vary a great deal across contexts and domains. However, in many situations, I have noticed a certain pattern where scalable solutions typically rely on metaheuristic search, based on information provided at least in part by models, and often in combination with machine learning. Testing and verification problems can often be re-expressed as a search for worst-case scenarios in a large input space. This approach has the advantage of being scalable, very versatile and adaptable to many different verification and testing problems.
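The idea of re-expressing a testing problem as a search for worst-case scenarios can be illustrated with a minimal sketch. Everything here is an illustrative assumption rather than Briand's actual tooling: `simulated_cost` stands in for a model of the system and its environment (in practice a simulation), and a simple restarted hill climb stands in for the metaheuristic search that looks for the input maximizing a risk-related fitness, here a simulated execution time.

```python
import random

# Stand-in "system under test": a toy cost model mapping a 2-D input to a
# simulated execution time. In practice this would wrap a simulation of the
# system and its environment; this concrete function is purely illustrative.
def simulated_cost(x, y):
    return 10_000 - (x - 70) ** 2 - (y - 12) ** 2

def neighbours(x, y):
    # Unit steps in the 2-D integer input space (8-connected moves).
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def hill_climb(seed, bounds=(0, 100), steps=1_000):
    """Greedy local search maximizing simulated_cost from a random start."""
    random.seed(seed)
    lo, hi = bounds
    best = (random.randint(lo, hi), random.randint(lo, hi))
    for _ in range(steps):
        candidates = [(nx, ny) for nx, ny in neighbours(*best)
                      if lo <= nx <= hi and lo <= ny <= hi]
        nxt = max(candidates, key=lambda p: simulated_cost(*p))
        if simulated_cost(*nxt) <= simulated_cost(*best):
            break  # local optimum: no neighbour has a higher (worse) cost
        best = nxt
    return best

# Restarting from several seeds reduces the risk of stopping at a poor
# local optimum; the best result across restarts is the candidate
# worst-case scenario to prioritize for testing on the real system.
worst = max((hill_climb(s) for s in range(10)),
            key=lambda p: simulated_cost(*p))
print(worst)  # → (70, 12), where the toy cost model peaks
```

The point of the sketch is the overall shape, not the specific search operator: the model makes each candidate scenario cheap to evaluate, so the search can explore a large input space and surface the highest-risk scenarios for expensive testing of the actual system.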
„Testing and verification in the large … is where the most acute research challenges lie“
You just won an ERC advanced grant on the concept of model testing; and I understand that this is also the main subject of your keynote. Can you give us the core ideas?
My keynote focuses on one out of several challenges that we plan to investigate during the ERC grant. The challenge that I will talk about has to do with what we call dynamic or time-dependent properties in cyber-physical systems. Broadly speaking, my ERC grant is geared towards generalizing to cyber-physical systems what is current practice in control engineering. More precisely, we want to raise the level of abstraction at which most of the system testing is done by developing executable and testable abstractions that are more adapted to how cyber-physical systems are designed and implemented in practice. We want to perform testing on models of a system and its environment in such a way that we can identify the highest risk scenarios. These scenarios can then be prioritized for testing the actual system with hardware and humans in the loop, which is very expensive in general but particularly so for cyber-physical systems. An important factor that we should account for when finding and prioritizing high-risk scenarios is the inherent uncertainty that exists in practical settings.
„Proving correctness (at least in the large) is an illusion“
In my own first semester, I learned that testing can never show the absence of errors. Is it sufficient to validate only the most critical aspects of these systems?
This statement about testing is obviously true. This being said, we need to strike a meaningful balance between how thorough we can be in our validation on the one hand, and the complexity of modern systems on the other. Proving correctness may be possible, in certain contexts, when we are focused on algorithms (e.g., various communication protocols for distributed systems), or on individual modules. But such an approach does not scale well, cannot deal with certain types of properties such as the ones I mentioned above in cyber-physical systems, and anyway relies on models that cannot be proven to be correct or consistent with the implemented system. I therefore believe that proving correctness, at least in the large, is an illusion. The best we can do, like in most other engineering disciplines, is to minimize the risk with available resources.
There is a reason why formal verification techniques – despite considerable resources that have been devoted to research on the topic over the last forty years – have rarely been used in practice. Little credible evidence is available regarding their applicability and benefits for large systems. Our lack of focus on scalable techniques has not served us well, as the main challenge in practice is testing and verification in the large (integration, system, and acceptance testing). Most of the time, V&V activities in the small are not an acute pain point in practice. Imperfect technology, for example regarding unit testing, can be compensated for by the developers’ individual expertise. This does not work anymore on a larger scale.
How do you see the interplay between testing and formal verification?
It depends on how we define “formal verification”. Early techniques to verify requirements, architectures, and design can be very beneficial in certain contexts, if they scale and are practical. It is in general not advisable to start verifying systems with testing late in the development process. However, the application of techniques that attempt to prove correctness or to build systems that are correct by construction is going to be limited to very small problems. Though SAT/SMT/CSP solvers have been shown to be effective in other domains (e.g., circuit design), in the foreseeable future, verification relying on them won’t be very effective at proving software correctness in realistic situations. You read many claims here and there about formal verification, but as I said earlier, there is little credible and verifiable evidence. Even in a large avionics company I visited recently, where safety certification is a highly expensive and crucial activity, their use of such techniques is very limited. They rely mostly on simulation. New forms of verification, which do not provide guarantees but are scalable, are emerging though; but these are based on metaheuristic search rather than on solvers of the kinds mentioned earlier. Combining search with exhaustive solvers has been successful in a few cases and such hybrid approaches should be further investigated in the future.
Should practitioners adopt more formal methods for their software design? For validation?
Again, we have to define what we mean by “formal methods”. This phrase is used in various ways by different people. If you ask whether practitioners should be more rigorous in their software development practice, I would answer yes, in many cases. Should they use more systematic approaches for requirements, architecture and design, and testing, I would also answer yes, in many situations. However, I strongly believe that in software engineering, everything is a matter of trade-off as software development is a socio-technical activity. One must therefore find the right balance, in a given context and domain, between the resources spent on various activities. Model-driven engineering, which is in my opinion a form of “formal methods”, has made a great deal of progress in the last two decades and many techniques and tools are now usable and are actually being used, including by some of my lab’s industry partners. However, if we refer to the formal verification techniques we were discussing earlier, I would say that, in most cases, I would be hesitant to recommend them for industrial use in their current state of maturity.
In a recent Facebook post, you criticized software engineering conferences for becoming less and less relevant to practitioners. Do you think that the current testing and analysis research in the community is sufficiently oriented towards actual or perceived needs?
Definitely not, and it saddens me. We have so many smart and talented people, and our impact as a research community is so limited. Given the importance of software in society and industry, this is a missed opportunity for our research community. Software practitioners are desperate for better solutions and technologies. An engineering research discipline with limited impact will inevitably have limited access to funding and resources in the future.
„Acquiring a good understanding of the problems in industrial domains and the constraints within which engineers operate is extremely important.“
There are several obstacles to making the research in our community more impactful. The quantification of research output and the obsession with conference publishing do not help: why, then, would we expect the younger members of our community to work on real, hard problems? However, that is not the whole story. First, there is often a confusion that leads people to believe that long-term, fundamental software engineering research must necessarily be blue-sky research, disconnected from any application or domain. This is wrong, at least in our discipline. There is no inverse correlation between novelty and depth on the one hand, and impact or practicality on the other. I like to define research in terms of timeline and risk. Some of our research will be longer term and higher risk, and that is fine. Risk is an inherent component of research. However, even in such situations, acquiring a good understanding of the problems in industrial domains and the constraints within which engineers operate is extremely important. Shorter term, more applied research in closer connection with industry is also perfectly legitimate and actually immensely useful. Basic research can be informed by applied research, which itself is informed by technology transfer and practice.
What are the central problems that you would like to see more researchers focus upon?
There are obviously many important problems, many of which I admittedly have little or no experience with. Most of the practical problems people face are not new, though they have never been satisfactorily resolved by research. Certain things come to mind, although I am obviously biased towards my more recent experiences. We need more flexible and at the same time scalable techniques to deal with natural-language requirements in different forms. We need practical and scalable verification techniques based on reasonable modeling requirements and that are risk-driven, even if they may not be able to provide full guarantees. With respect to testing, there are of course many different challenges. The effective automation of penetration testing, for example in the context of web services, still needs a great deal of work and experimentation. The rise and increased complexity of cyber-physical systems also bring about new challenges, including the need to cope with the time-dependent and dynamic properties of the physical world. Beyond testing, to address dependability once the system is deployed, we need truly practical and scalable run-time monitoring and verification approaches. As you can see, the words “scalable” and “applicable” come back often in my statements: this is where the key challenges lie. In general, regardless of the specific topic at hand, we need to perform more industrial case studies. When done with rigor, these case studies are very informative. We should remember that software engineering is a socio-technical field. We cannot fully reproduce the phenomena we are studying in our labs.
On an organizational level, what would you change to reorient research towards these goals?
„The `number game’ makes it very difficult for academics to work on real, hard problems“
First, our emphasis on conferences as the primary publication venues does not help. Besides being at odds with every scientific field other than computer science, reporting industry- or demand-driven research within the space constraints of a conference paper is not easy. Also, such papers tend to be rejected more frequently by program committees, as their relationship with existing work is often more complex than for a paper motivated by weaknesses or challenges in the scientific literature. Since the possibility of dialog with the reviewers is more limited with conference rebuttals, when they exist at all, than for a journal, this is detrimental to more industry-oriented papers.
Second, but this is a long discussion we won’t have here, the “number game” as Parnas called it – the race to maximize the number of publications on a CV – makes it very difficult for academics to work on real, hard problems. Such work inevitably leads to fewer publications than research driven by opportunity and purely academic problems, which can have serious consequences for people’s careers in some environments. The American CRA recommendations for hiring, tenure, and promotion – which emphasize quality over quantity – might help address the problem in the future, but I am not yet sure how influential they will be at an international level. Funding agencies also have a role to play by encouraging public-private partnerships with generous funding.
Last, and this is perhaps the hardest factor to address, our mindset must change. We must learn to be more appreciative of papers tackling hard, industrial problems. We must be willing to accept papers that make a contribution, however imperfect, on such problems as opposed to academically-perfect papers on secondary problems that are easier to investigate.
Where do you see the future of the ISSTA conference, and its community?
ISSTA is a leading and key conference for the software V&V community. It publishes top-quality research but needs to be more encompassing of applied and industrial research. It also needs to widen its horizons and attract a larger portion of the research community. Given how many people are interested in this topic, ISSTA should be able to attract several hundred participants. I am glad to see that steps have already been taken in this direction this year.
Thanks a lot! Looking forward to your keynote!
I also look forward to delivering it! Thank you for the invitation.