The following topics all relate to the testing of systems, to software metrics, or to the visualisation of data and logic.
To investigate the relationship between testing effort, requirements, and metrics in both object-oriented (OO) and procedural large-scale code.
To study the errors in distributed OO systems and to develop techniques to reduce the number of potential failures that escape detection due to non-deterministic event ordering.
To investigate static analysis of distributed systems for detecting faults, deadlock, or starvation.
To investigate dynamic testing with error seeding to estimate the number of faults in OO code, to compare the results with procedural-language experiments, and to determine effective test strategies.
To apply Mutation Analysis (a strict form of dynamic testing) to OO code.
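Mutation analysis can be illustrated with a minimal sketch. The `price` function, its test suite, and the operator-swap mutations below are all hypothetical, and real mutation tools work at the AST or bytecode level rather than on source strings.

```python
# Minimal mutation-analysis sketch (hypothetical function and tests).
SOURCE = "def price(total, discount): return total - discount"

# Illustrative mutation operators: swap the arithmetic operator.
MUTATIONS = [("-", "+"), ("-", "*")]

def run_tests(fn):
    """A mutant 'survives' if every test still passes against it."""
    try:
        assert fn(10, 3) == 7
        assert fn(5, 0) == 5
        return True
    except AssertionError:
        return False

killed = 0
for old, new in MUTATIONS:
    namespace = {}
    exec(SOURCE.replace(old, new, 1), namespace)  # build the mutant
    if not run_tests(namespace["price"]):
        killed += 1  # a failing test detected (killed) the mutant

print(f"mutation score: {killed}/{len(MUTATIONS)}")
```

A mutation score of 1.0 means every seeded mutant was detected; surviving mutants point at weaknesses in the test suite.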
To determine rules for test data adequacy for OO code.
To apply and evaluate boundary testing and partition testing techniques on OO code.
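As a sketch of the two techniques, assuming a hypothetical integer input domain [lo, hi]: boundary testing picks values at and beside the edges of the domain, while partition testing picks one representative per equivalence class.

```python
def boundary_values(lo, hi):
    """Classic five-point boundary selection: min, min+1, nominal, max-1, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def partition_values(lo, hi):
    """One representative per equivalence class: below, inside, above the domain."""
    return [lo - 1, (lo + hi) // 2, hi + 1]

print(boundary_values(1, 100))   # [1, 2, 50, 99, 100]
print(partition_values(1, 100))  # [0, 50, 101]
```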
To develop a mathematical model for fault prediction in procedural and OO code.
To study the faults associated with the use of inheritance, encapsulation and polymorphism and to determine test methods and guidelines for their detection.
To study the limitations of the state graph mechanism for OO testing.
To enumerate the unreachable paths in OO code and to derive rules for identifying them.
To study procedural testing using basis cycles and to determine whether they are sufficient as a test adequacy criterion.
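The size of a basis set of paths is bounded by the cyclomatic number of the control-flow graph. A sketch, using a hypothetical graph for an if/else followed by a loop:

```python
# Control-flow graph as an edge list (hypothetical if/else then a loop).
edges = [
    ("entry", "if"), ("if", "then"), ("if", "else"),
    ("then", "loop"), ("else", "loop"),
    ("loop", "body"), ("body", "loop"), ("loop", "exit"),
]
nodes = {n for edge in edges for n in edge}

# McCabe's cyclomatic number V(G) = E - N + 2P, with P = 1 component here.
v_g = len(edges) - len(nodes) + 2
print(f"V(G) = {v_g}")  # 8 edges, 7 nodes -> a basis of 3 independent paths
```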
To determine information loss from the use of metrics.
To develop maintenance metrics based on class alterations.
To evaluate test suite adequacy for OO systems and generate rules for achieving levels of test adequacy.
To analyse the problems of inheritance with respect to portability.
To manipulate class libraries efficiently, recognising clusters and patterns for reuse.
To investigate the testing of agents and to categorise the possible interactions between agent levels.
To manipulate agent scripts and determine information loss and logic holes.
To investigate the visualisation of the testing process, using metrics to indicate where testing should be focused.
To investigate State-Based Testing on clusters of classes.
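A single-class sketch of the idea, using a hypothetical `Door` class: each arc of the state graph becomes a test that drives the object into the source state, fires the event, and checks the resulting state.

```python
class Door:
    """Hypothetical class under test with a two-state state graph."""
    def __init__(self):
        self.state = "closed"
    def open(self):
        assert self.state == "closed", f"invalid event in state {self.state}"
        self.state = "open"
    def close(self):
        assert self.state == "open", f"invalid event in state {self.state}"
        self.state = "closed"

# Expected (source state, event, target state) arcs from the state graph.
TRANSITIONS = [("closed", "open", "open"), ("open", "close", "closed")]

for source, event, target in TRANSITIONS:
    d = Door()
    d.state = source           # drive the object into the source state
    getattr(d, event)()        # fire the event
    assert d.state == target   # check the resulting state
print("all transitions verified")
```

Extending this to clusters of classes means composing the individual state graphs and testing the inter-class transitions, which is where state explosion becomes the central difficulty.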
To look at viewpoints on design for different users.
To use regular expressions to describe valid inter-object message sequences and to generate test cases from them.
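A sketch, assuming a hypothetical protocol in which each letter abbreviates an inter-object message (o = open, r = read, w = write, c = close); exhaustively enumerating short sequences against the expression yields both valid and invalid test cases.

```python
import re
import itertools

# Valid sequences: open, then one or more reads/writes, then close.
VALID = re.compile(r"o[rw]+c")
MESSAGES = "orwc"

valid, invalid = [], []
for length in range(1, 5):
    for seq in map("".join, itertools.product(MESSAGES, repeat=length)):
        (valid if VALID.fullmatch(seq) else invalid).append(seq)

print(f"{len(valid)} valid, {len(invalid)} invalid sequences of length <= 4")
```

The valid sequences exercise the intended behaviour; the invalid ones make natural negative test cases.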
To use a relational database to store I/O information for message sequences and to join the tuples to form valid (and invalid) test sequences.
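A sketch using SQLite, with a hypothetical `io` table recording each message's required pre-state and resulting post-state; joining rows where one message's post-state matches the next message's pre-state yields candidate two-step test sequences, and the complementary join yields invalid ones.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE io (msg TEXT, pre TEXT, post TEXT)")
db.executemany("INSERT INTO io VALUES (?, ?, ?)", [
    ("open",  "closed", "ready"),
    ("read",  "ready",  "ready"),
    ("close", "ready",  "closed"),
])

# Valid pairs chain state: a.post = b.pre; swapping = for <> gives invalid pairs.
pairs = db.execute(
    "SELECT a.msg, b.msg FROM io a JOIN io b ON a.post = b.pre"
).fetchall()
print(pairs)
```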
To use utility functions to describe the usefulness of a test case and to focus on failure scenarios.
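A sketch of one possible utility function; the indicators and weights below are purely illustrative assumptions.

```python
def utility(case):
    """Score a test case; higher scores suggest likelier failure scenarios."""
    return (3 * case["past_failures"]   # history of failures nearby
            + 2 * case["complexity"]    # complexity of the code exercised
            + case["novelty"])          # how unusual the input is

cases = [
    {"id": "t1", "past_failures": 0, "complexity": 1, "novelty": 2},
    {"id": "t2", "past_failures": 2, "complexity": 3, "novelty": 0},
    {"id": "t3", "past_failures": 1, "complexity": 1, "novelty": 1},
]
ranking = sorted(cases, key=utility, reverse=True)
for case in ranking:
    print(case["id"], utility(case))
```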
To test Java scripts automatically.
To develop adequacy measures for Java systems.
To develop agent-driven visualisation of the test process.
To visualise data holes or logic holes in classes and large-scale code, identifying problematic constructs or regions.
To visualise the test process, or the adequately tested regions of code.
To visualise large-scale crop data, determining useful traits in plant dynamics.
To determine plant connectivity from density charts.
To measure the testing adequacy of computer games, considering both gameplay and code-complexity issues.