In 2004, 113 papers published in conference proceedings and journals were submitted by IBM Research authors worldwide for consideration for the 2004 Pat Goldberg Memorial CS/EE/Math Best Paper Awards. The 2005 Best Paper Award winners will be announced in August 2006.
The following three 2004 papers were selected (in alphabetical order):
Composing Schema Mappings: Second-Order Dependencies to the Rescue
Ronald Fagin, Phokion G. Kolaitis, Lucian Popa, Wang-Chiew Tan (UC Santa Cruz)
The paper addresses an important and persistent problem in information integration from heterogeneous sources: the composition of schema mappings, where data structured under one schema must be transformed into data structured under another. Although first-order logic is a straightforward formalism for specifying such mappings, mappings specified by arbitrary first-order formulas do not compose easily; in particular, the paper shows that an important case of schema-mapping composition is not definable by any set of source-to-target tuple-generating dependencies, nor by formulas of fixed-point logic. This limitation is serious because, given a chain of schema mappings, it is not easy to compute a single mapping that simulates the chain; the individual mapping steps must instead be computed sequentially, which typically also requires storing temporary results. The paper introduces second-order tuple-generating dependencies (SO tgds), a restricted class of second-order formulas with existentially quantified function symbols, as a language for composing schema mappings. The paper shows that SO tgds suffice to define the composition of any finite number of mappings specified by finite sets of source-to-target tuple-generating dependencies, and that SO tgds themselves are closed under composition. Furthermore, the paper shows that SO tgds possess good properties for data exchange and query answering. The contribution on SO tgds will have an important impact on the problem of metadata management and forms the basis of the schema-mapping language in IBM's Clio/Criollo and WebSphere Information Integration mapping tools. The paper was one of three invited from ACM Principles of Database Systems (PODS) 2004 to submit a full version to ACM Transactions on Database Systems (TODS).
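To give a flavor of the formalism (an illustrative sketch in the spirit of the paper's examples; the relation names here are hypothetical), suppose a first mapping sends each employee Emp(e) to a managed-by fact with an unnamed manager, and a second mapping copies those facts and flags self-managed employees. The composed mapping is not expressible by first-order source-to-target tgds, but an SO tgd captures it by existentially quantifying over a function f that names each employee's manager:

```latex
% S1: Emp(eid);  S3: Mgr1(eid, mid), SelfMgr(eid)  -- hypothetical schemas
\exists f \,\Big(
    \forall e \,\big(\mathrm{Emp}(e) \rightarrow \mathrm{Mgr1}(e, f(e))\big)
    \;\wedge\;
    \forall e \,\big(\mathrm{Emp}(e) \wedge (e = f(e)) \rightarrow \mathrm{SelfMgr}(e)\big)
\Big)
```

The equality e = f(e) between a variable and a function term is exactly the kind of condition that no finite set of first-order tgds can express, which is why the second-order function symbol is needed.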
First-Order Incremental Block-Based Statistical Timing Analysis
Chandu Visweswariah, Kaushik Ravindran (UC Berkeley, summer intern), Kerim Kalafala (STG, Fishkill), Steven G. Walker, Sambasivan Narayan (STG, Burlington)
Design Automation Conference
This paper deals with the important question of verifying IC chip timing in the presence of process and manufacturing uncertainty. It adopts a probabilistic framework for timing verification that contrasts with the deterministic worst-case analysis now in practice. While this framework is not new, this paper contains the following three important innovations:
1) Linear-time algorithm for the propagation of the timing quantities: This algorithm is based on a first-order model for timing quantities (arrival and switching times, required arrival and switching times) in terms of the parameters describing the sources of process and manufacturing variation. The model allows for both global correlations and independent local variability. The algorithm is block-based in that it takes into account the probabilistic delay and transition-time models for each gate in the chip, rather than just for some "critical" logic paths. The first-order model and the block-based nature of the algorithm enable the propagation of the statistics of timing quantities throughout the full chip in time that is linear in the number of blocks and the number of sources of variation.
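The two operations that make block-based propagation linear-time are sketched below. This is a minimal illustration using standard techniques from the statistical-timing literature, not the paper's implementation: each timing quantity is assumed to be a first-order form t = a0 + Σ ai·ΔXi + ar·ΔR over unit-normal global sources ΔXi and an independent local term ΔR; addition through a gate just adds coefficients, and MAX at a path merge uses Clark's Gaussian approximation via the "tightness probability" that one input dominates.

```python
import math

PHI = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))       # normal CDF
pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi) # normal PDF

class FirstOrder:
    """Timing quantity in canonical first-order form (illustrative sketch)."""
    def __init__(self, mean, sens, local):
        self.mean = mean         # a0: nominal value
        self.sens = list(sens)   # a1..an: sensitivities to global sources
        self.local = local       # ar: independent local-variation term

    def var(self):
        return sum(a * a for a in self.sens) + self.local ** 2

    def __add__(self, other):
        # Propagation through a gate: coefficients add term by term.
        return FirstOrder(self.mean + other.mean,
                          [a + b for a, b in zip(self.sens, other.sens)],
                          math.hypot(self.local, other.local))

    def max(self, other):
        # Clark's approximation of max(self, other), re-expressed in the
        # same first-order form so propagation can continue downstream.
        cov = sum(a * b for a, b in zip(self.sens, other.sens))
        theta = math.sqrt(max(self.var() + other.var() - 2.0 * cov, 1e-12))
        x = (self.mean - other.mean) / theta
        T = PHI(x)               # tightness probability that `self` dominates
        mean = T * self.mean + (1.0 - T) * other.mean + theta * pdf(x)
        sens = [T * a + (1.0 - T) * b for a, b in zip(self.sens, other.sens)]
        # Local term blended heuristically here; a production tool would
        # choose it to match Clark's variance formula exactly.
        local = math.hypot(T * self.local, (1.0 - T) * other.local)
        return FirstOrder(mean, sens, local)

# Example: two converging paths with two shared global variation sources.
a = FirstOrder(10.0, [1.0, 0.5], 0.3)   # arrival time on path A
b = FirstOrder(12.0, [0.8, 0.6], 0.2)   # arrival time on path B
merged = a.max(b)                        # statistical arrival at the merge
```

Because both operations produce another object in the same canonical form, a single topological sweep over the netlist suffices, giving the linear runtime in the number of blocks and variation sources described above.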
2) Incremental timing strategy: The creation of robust chip designs requires that designers address timing violations wherever they occur, using logic and circuit optimization methods. To enable this optimization process, the timing-verification engine must be incremental, in the sense that it should answer timing queries after local design changes without re-timing the whole chip. Incremental timing has been used at IBM and in the industry for quite some time, but only in the deterministic, worst-case context. The second innovation of this paper is an incremental timing algorithm that is compatible with the statistical framework needed to address process and manufacturing uncertainty.
3) Implementation: This paper goes far beyond a good academic conference paper in that it describes a full implementation of the algorithm within IBM's design-automation environment. The implementation is demonstrated on industrial ASIC chips, three of which have in excess of one million gates; the largest has slightly more than 2 million gates and 51 clock domains. The runtime of the algorithm on the latter chip was about 45 minutes, a remarkable performance in the context of statistical timing analysis.
This paper won the best paper award at the Design Automation Conference in June 2004. Its impact is already being felt in academia and in industry.
The Random Oracle Methodology, Revisited
Ran Canetti, Oded Goldreich (Weizmann Institute), and Shai Halevi
Journal of the ACM
The Random Oracle Methodology is useful in the design of provably secure cryptographic protocols: it allows a security proof for an abstract system in which all parties have access to a public random oracle, and provides a bridge to a secure implementation, in which the oracle is replaced by a carefully chosen concrete function. The assumption so far has been that if a design is proven secure using a random oracle, the implementation will also be secure. This journal paper shows that this does not necessarily hold: it exhibits counterexamples, namely schemes that are provably secure in the random-oracle model yet become insecure under every possible instantiation of the oracle by a concrete function. At best, then, the paper shows that the Random Oracle Methodology is incomplete, and further research is needed to determine why there are instances of apparently successful implementations; at worst, it shows that the Methodology is deeply flawed. In any case, we expect that this paper will have a significant impact on the advancement of cryptography by spurring new efforts to either 'fix' the Methodology or find an alternative. While the authors published a conference paper on this work several years ago, the Journal of the ACM paper contains complete proofs of the results and has already received over 175 citations according to Google Scholar.
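The gap the paper exposes can be made concrete with a small sketch (illustrative only, not from the paper). In proofs, the random oracle is usually realized by lazy sampling: each fresh query receives an independent uniformly random answer, and repeated queries are answered consistently. An implementation replaces this oracle with a fixed, publicly computable function such as a cryptographic hash; the counterexamples exploit precisely the fact that an adversary can use the code of the concrete function, whereas no such code exists for a truly random oracle.

```python
import hashlib
import os

class RandomOracle:
    """Idealized public random function, realized by lazy sampling."""
    def __init__(self, out_len=32):
        self.table = {}          # remembered answers for consistency
        self.out_len = out_len

    def query(self, x: bytes) -> bytes:
        if x not in self.table:  # fresh query: sample a uniform answer
            self.table[x] = os.urandom(self.out_len)
        return self.table[x]

def instantiated_oracle(x: bytes) -> bytes:
    # A concrete instantiation: unlike the ideal oracle, its complete
    # description (the SHA-256 code) is available to the adversary.
    return hashlib.sha256(x).digest()
```

A protocol proven secure against the `RandomOracle` abstraction above may still fail when `instantiated_oracle` is substituted, because the security proof never accounted for the adversary's ability to evaluate, or reason about, the function's fixed description.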