Currently the most important shortcoming of the Web as a client-server platform is the difficulty of interfacing with data in relational or other kinds of databases. The Web's Common Gateway Interface (CGI) is limited and needs augmenting. GWB is a template-based language for assisting the task of developing Web/database CGI applications. The language is based on HTML, with a few extension tags introduced to bring database connection and processing logic into a template. Using GWB, the task consists of just two steps: 1) design and write an HTML form; 2) compose an HTML-like template file that combines GWB and HTML tags to generate an application. From such templates, GWB generates ODBC (Open Database Connectivity) compliant WWW applications capable of accessing data in heterogeneous RDBMSs.
RainbowScheme is a visual stepping system that presents a Scheme program's run-time state using icons, colored environment trees, and colored Scheme code. A Scheme program can be executed step by step in RainbowScheme using a set of visual transformation rules called "visualcode". Being able to visualize the intermediate steps of a running program makes each individual step, and the entire execution process, easier to comprehend. In addition to describing visualcode with examples, this talk presents RainbowScheme's user interface, implementation techniques, and the results of evaluating its use in introductory programming courses.
Additional information can be found at http://w3.yuntech.edu.tw/tungsh
We propose a method to obtain divide & conquer equations from sequential recursive functions. Traditionally, most parallelization methods are based on schematic rules which attempt to match each given program to a set of program schemes that have parallel counterparts. Instead of relying on ad-hoc program schemes, we propose a new approach to parallel derivation based on the elementary but powerful fold/unfold rules.
Recovering parallelism from sequential programs often requires an induction step. To achieve this, our method applies a second-order generalisation step to selected instances of the sequential equations before an inductive derivation procedure. The new approach is systematic enough to be semi-automated, and, most importantly, it is quite widely applicable.
Full paper is also available.
This talk is about the implementation of logic programming languages, especially Prolog and perhaps some CLP (constraint logic programming) languages. We survey the major analysis and optimization techniques as well as the problems with these methods. We shall also look at several recent implementations, including their compilers and analyzers. Finally, the question of efficiency compared with imperative languages will be discussed. Some of the ideas and results given here may also be of interest to people working on the implementation of functional programming and other paradigms.
The lexical analyzer of a compiler usually adopts the longest-match rule to resolve ambiguities when deciding the next token in the input stream. However, that rule is not applicable in all situations. Because the longest-match rule is so widely used, a language designer or a compiler implementor might overlook its subtle implications. The consequence is either a flawed language design or a deficient implementation. We propose a method that automatically checks the applicability of the longest-match rule and identifies the situations in which the rule is not applicable. The method is useful to both language designers and compiler implementors. The crux of the method consists of two algorithms: one computes the regular set of token sequences produced by a nondeterministic Mealy automaton as it processes elements of an input regular set; the other determines, by means of a set of equations, whether a regular set and a context-free language have a nontrivial intersection.
Additional Key Words and Phrases: compiler, context-free grammar, finite-state automaton, Mealy automaton, Moore automaton, parser, regular expression, scanner.
Full paper is available at http://cissun51.cis.nctu.edu.tw/~wuuyang/LMR96.ps .
We present a criterion for semantic constraints in distributed systems where asynchronous processes interact via synchronous constructs, usually called multiparty interactions. We show that if a semantic constraint violates the criterion, then no deterministic algorithm for scheduling multiparty interactions can satisfy the constraint. Conversely, an implementation is possible if the criterion is obeyed. Thus, the criterion is both sufficient and necessary to guarantee the implementability of all possible semantic constraints. To our knowledge, this is the first such criterion to appear in the literature.
We then use this criterion to examine several important semantic constraints, including strong interaction fairness, strong process fairness, weak process fairness, U-fairness, and hyperfairness. All except weak process fairness fail to pass the criterion.
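For reference, the strongest of these constraints is commonly stated as follows in the fairness literature (standard phrasing, not quoted from the paper): an interaction that is enabled infinitely often must be executed infinitely often,

```latex
\forall I:\quad \Box\Diamond\,\mathit{enabled}(I)\ \rightarrow\ \Box\Diamond\,\mathit{executed}(I).
```

It is this kind of liveness requirement that the criterion shows to be unimplementable by any deterministic scheduler.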
We propose and solve a general parametric analysis problem that allows parameter variables in both the real-time automata and the specifications. We extend the TCTL model-checking problem to a parametric analysis problem for real-time systems and develop new techniques for solving it. Our algorithm accepts timed transition system descriptions and Parametric CTL formulas with parameter variables of unknown size, and returns general linear equations over the timing parameter variables whose solutions make the system satisfy the specification.
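As a schematic illustration (ours, not taken from the talk), a parametric bounded-response property with an unknown deadline $\theta$ might be written

```latex
\forall\Box\,\bigl(\mathit{req}\ \rightarrow\ \forall\Diamond_{\le\theta}\,\mathit{ack}\bigr),
```

for which a parametric analyzer would return a linear constraint on $\theta$ (for instance $\theta \ge c$ for some system-dependent constant $c$) characterizing exactly the parameter values under which the system satisfies the property.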
Optimizing and parallelizing compilers depend on ambitious data-flow analyses and optimizations to generate efficient code. The precision of alias information is crucial to the precision of these analyses and the effectiveness of the optimizations, especially for languages with pointer data types. In this talk, we introduce an interprocedural analysis for computing the aliases induced via pointers. We have implemented a prototype of the analysis for the C language and have applied the alias information to interprocedural def-use association analysis and interprocedural constant propagation. A set of preliminary experiments has been performed for these analyses and optimizations.
With the support of distributed pointers and class abstractions in parallel C++-like languages, application programmers can now build their own pointer-based aggregate objects to provide a global name space in distributed environments. However, a critical question remains: can the compiler understand the distribution pattern of pointer-based distributed objects built by application programmers, and perform optimizations as effectively as an HPF compiler does with distributed arrays? In this talk, we address this challenging issue. We first present a parallel programming model that allows application programmers to build pointer-based distributed objects at the application level. Next, we propose a distribution analysis algorithm that automatically summarizes the distribution pattern of such objects. To the best of our knowledge, ours is the first work to address this open issue. Our distribution analysis framework employs Feautrier's parametric integer programming as the basic solver, and always obtains precise distribution information for programs with static control written in our parallel programming model. Experimental results on a 16-node IBM SP-2 machine show that, with the help of the distribution analysis algorithm, the compiler can significantly improve the performance of pointer-based distributed programs.
(Part of this work submitted to PLDI '97)