Abstracts for Invited Speakers
Why are there so few successful "real-world" programming and testing tools
based on academic research? This talk focuses on program analysis tools,
and proposes a surprisingly simple explanation with interesting
ramifications.
For a tool aimed at developers or testers to be successful, people must use
it - and must use it to help accomplish their existing tasks, rather than as
an end in itself. If the tool does not help them get their job done, or the
effort to learn and/or use the tool is too great, users will not perceive
enough value; the tool will not get significant usage, even if it is free.

This talk focuses on the often-overlooked consequences of this seemingly
basic statement in two major areas: program analysis, and the work beyond
core analysis that must be done to make a successful tool. Examples will
be drawn from tools that have been successfully used in industry (sold
commercially, and developed for internal use).

Ever since formal verification was first proposed in the late sixties, the
idea of being able to definitively determine if a program meets its
specifications has been an appealing, but elusive, goal. Although
verification systems based on theorem proving have improved considerably
over the years, the underlying problem remains undecidable, and such
systems still require significant guidance from mathematically astute
users. The human effort
required for formal verification is so significant that it is usually only
applied to the most critical software components.

Alternative approaches to theorem-proving-based verification have also been
under development for some time. These approaches usually restrict the
problem domain in some way, such as focusing on hardware descriptions,
communication protocols, or a limited specification language. These
restrictions allow the problem to be solved by using reasoning algorithms
that are guaranteed to terminate and by representing the problem with a
finite state model, and thus these approaches have been called finite state
verification. Systems based on these approaches are starting to be
effectively applied to interesting software systems, and there is
increasing optimism that such approaches will become widely applicable.

In this presentation, I will overview some of the different approaches to
finite state verification. In particular I will describe symbolic model
checking, integer necessary constraints, and incremental data flow analysis
approaches. The strengths and weaknesses of these approaches will be
described. In addition, I will outline the major challenges that must be
addressed before finite state verification will become a common tool for
the typical well-trained software engineer.

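
To make the finite-state idea concrete, the sketch below is an invented toy
illustration (the class name and the protocol are my own assumptions, not any
of the systems surveyed in the talk). It builds the complete finite state
space of a flawed lock protocol, in which each of two processes checks that a
lock is free and then sets it in two separate, non-atomic steps, and runs a
breadth-first search over that model to decide whether mutual exclusion can
ever be violated.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

// Explicit-state reachability over a finite model.  State = {pc0, pc1, lock},
// where each program counter is 0 (idle), 1 (saw lock free), or 2 (in the
// critical section), and lock is 0 or 1.
public class FiniteStateDemo {

    // Encode a state as a single int; the encoding is injective because
    // pcs range over 0..2 and lock over 0..1.
    static int encode(int[] s) {
        return (s[0] * 3 + s[1]) * 2 + s[2];
    }

    // Enqueue the successor produced by process p taking one step, if any.
    // The bug: "check lock is free" and "set lock" are two separate steps,
    // so both processes can pass the check before either sets the lock.
    static void step(int[] s, int p, ArrayDeque<int[]> work) {
        if (s[p] == 0 && s[2] == 0) {        // observe the lock free
            int[] n = s.clone();
            n[p] = 1;
            work.add(n);
        } else if (s[p] == 1) {              // set the lock and enter the CS
            int[] n = s.clone();
            n[p] = 2;
            n[2] = 1;
            work.add(n);
        }
    }

    // Breadth-first search of the reachable states; returns
    // {number of reachable states, 1 if mutual exclusion is violated}.
    static int[] search() {
        Set<Integer> seen = new HashSet<>();
        ArrayDeque<int[]> work = new ArrayDeque<>();
        work.add(new int[]{0, 0, 0});
        boolean violation = false;
        while (!work.isEmpty()) {
            int[] s = work.poll();
            if (!seen.add(encode(s))) continue;   // already explored
            if (s[0] == 2 && s[1] == 2) violation = true;
            step(s, 0, work);
            step(s, 1, work);
        }
        return new int[]{seen.size(), violation ? 1 : 0};
    }

    public static void main(String[] args) {
        int[] r = search();
        System.out.println("states explored: " + r[0]);
        System.out.println("mutual exclusion violated: " + (r[1] == 1));
    }
}
```

Because the state space is finite (at most 3 x 3 x 2 = 18 states), the search
is guaranteed to terminate: it visits 9 reachable states and finds the
interleaving in which both processes end up in the critical section. This
termination guarantee, bought by restricting the model, is what distinguishes
finite state verification from general theorem proving.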
Software components enable practical reuse of software "parts" and
amortization of investments over multiple applications. Composition is the
key technique by which systems of software components are constructed.
However, composition yields the greatest challenges when it comes to
testing, as a component-based application may be composed out of parts that
were never tested together. Thus the most useful and reliable parts are
those that have been tested independently in as many ways as possible.

The Component Applications Group in Microsoft Research is developing tools,
techniques, and a large component library to enable the development of
sophisticated office, home and web-based applications. This talk will
highlight our composition and specification methods, processes, and tools
with a particular emphasis on our test harness and our testing results, as
well as how we made use of existing testing technology and ideas. A
discussion of the last few bugs found in each of several projects should
prove interesting. Some comparisons will be made with other projects inside
and outside Microsoft.

Automatic state exploration tools (model checkers) have had some
success when applied to protocols and hardware designs, but there are
fewer success stories about software. This is unfortunate, since the
software problem is worsening even faster than the hardware and
protocol problems. Model checking of concurrent programs is
especially interesting, because they are notoriously difficult to
test, analyze, and debug by other methods.

This talk describes our initial efforts to check Java programs using a
model checker. The model checker supports
dynamic allocation, thread creation, and recursive procedures
(features that are not necessary for hardware verification), and
has some special optimizations and checks tailored to multi-threaded
Java programs. I will also discuss some of the challenges for
future efforts in this area.
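
As a small illustration of why systematic exploration pays off for
multi-threaded code (an invented toy, not the checker described in the talk):
each of two threads performs a non-atomic increment, a load followed by a
store, and the sketch below enumerates every interleaving of those four steps,
uncovering the lost-update schedule that a lucky test run can easily miss.

```java
import java.util.Set;
import java.util.TreeSet;

// Exhaustively interleave two threads that each do a non-atomic increment
// of a shared counter: step 0 loads the counter into a thread-local
// temporary, step 1 stores temporary + 1 back.  A model checker explores
// these schedules systematically instead of relying on chance.
public class RaceDemo {

    // pc[t] is thread t's program counter (0 = about to load, 1 = about to
    // store, 2 = finished); tmp[t] is its local copy of the counter.
    static void explore(int[] pc, int[] tmp, int counter, Set<Integer> finals) {
        boolean done = true;
        for (int t = 0; t < 2; t++) {
            if (pc[t] == 2) continue;         // thread t has finished
            done = false;
            int[] pc2 = pc.clone();
            int[] tmp2 = tmp.clone();
            int c2 = counter;
            if (pc2[t] == 0) {
                tmp2[t] = counter;            // load the shared counter
            } else {
                c2 = tmp2[t] + 1;             // store possibly-stale value + 1
            }
            pc2[t]++;
            explore(pc2, tmp2, c2, finals);   // branch on who runs next
        }
        if (done) finals.add(counter);        // all steps taken: record result
    }

    public static void main(String[] args) {
        Set<Integer> finals = new TreeSet<>();
        explore(new int[]{0, 0}, new int[]{0, 0}, 0, finals);
        System.out.println("possible final counters: " + finals);
    }
}
```

Exploring all interleavings shows that the final counter can be 1 as well as
2: if both threads load before either stores, one increment is lost. This is
exactly the kind of schedule-dependent behavior that makes concurrent
programs so difficult to test and debug by conventional means.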