Expert systems are computer programs that are derived from a branch of computer science research called Artificial Intelligence (AI). AI's scientific goal is to understand intelligence by building computer programs that exhibit intelligent behavior. It is concerned with the concepts and methods of symbolic inference, or reasoning, by a computer, and how the knowledge used to make those inferences will be represented inside the machine.
Of course, the term intelligence covers many cognitive skills, including the ability to solve problems, learn, and understand language; AI addresses all of those. But most progress to date in AI has been made in the area of problem solving -- concepts and methods for building programs that reason about problems rather than calculate a solution.
AI programs that achieve expert-level competence in solving problems in task areas by bringing to bear a body of knowledge about specific tasks are called knowledge-based or expert systems. Often, the term expert systems is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge gathered from textbooks or non-experts. More often than not, the two terms, expert systems (ES) and knowledge-based systems (KBS), are used synonymously. Taken together, they represent the most widespread type of AI application. The area of human intellectual endeavor to be captured in an expert system is called the task domain. Task refers to some goal-oriented, problem-solving activity. Domain refers to the area within which the task is being performed. Typical tasks are diagnosis, planning, scheduling, configuration and design. An example of a task domain is aircraft crew scheduling, discussed in Chapter 2.
Building an expert system is known as knowledge engineering and its practitioners are called knowledge engineers. The knowledge engineer must make sure that the computer has all the knowledge needed to solve a problem. The knowledge engineer must choose one or more forms in which to represent the required knowledge as symbol patterns in the memory of the computer -- that is, he (or she) must choose a knowledge representation. He must also ensure that the computer can use the knowledge efficiently by selecting from a handful of reasoning methods. The practice of knowledge engineering is described later. We first describe the components of expert systems.
Every expert system consists of two principal parts: the knowledge base; and the reasoning, or inference, engine.
The knowledge base of expert systems contains both factual and heuristic knowledge. Factual knowledge is that knowledge of the task domain that is widely shared, typically found in textbooks or journals, and commonly agreed upon by those knowledgeable in the particular field.
Heuristic knowledge is the less rigorous, more experiential, more judgmental knowledge of performance. In contrast to factual knowledge, heuristic knowledge is rarely discussed, and is largely individualistic. It is the knowledge of good practice, good judgment, and plausible reasoning in the field. It is the knowledge that underlies the 'art of good guessing.'
Knowledge representation formalizes and organizes the knowledge. One widely used representation is the production rule, or simply rule. A rule consists of an IF part and a THEN part (also called a condition and an action). The IF part lists a set of conditions in some logical combination. The piece of knowledge represented by the production rule is relevant to the line of reasoning being developed if the IF part of the rule is satisfied; consequently, the THEN part can be concluded, or its problem-solving action taken. Expert systems whose knowledge is represented in rule form are called rule-based systems.
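As an illustration only (the rule content below is invented, not drawn from any system described in this report), a production rule can be represented quite directly as data in a conventional programming language. In this minimal Python sketch, facts are strings and a rule fires when every condition in its IF part is present among the known facts.

# Illustrative only: production rules represented as data.
# Facts are strings; a rule fires when all of its IF conditions are present.
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: list   # the IF part
    conclusion: str    # the THEN part

rules = [
    Rule(["engine cranks", "engine does not start"], "suspect fuel system"),
    Rule(["suspect fuel system", "fuel gauge reads empty"], "fuel tank is empty"),
]

facts = {"engine cranks", "engine does not start", "fuel gauge reads empty"}

for rule in rules:
    if all(c in facts for c in rule.conditions):   # IF part satisfied
        facts.add(rule.conclusion)                 # THEN part concluded
print(facts)

A rule-based system simply holds many such rules and lets the inference engine decide which ones to apply and in what order.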
Another widely used representation, called the unit (also known as frame, schema, or list structure) is based upon a more passive view of knowledge. The unit is an assemblage of associated symbolic knowledge about an entity to be represented. Typically, a unit consists of a list of properties of the entity and associated values for those properties.
Since every task domain consists of many entities that stand in various relations, the properties can also be used to specify relations, and the values of these properties are the names of other units that are linked according to the relations. One unit can also represent knowledge that is a 'special case' of another unit, or some units can be 'parts of' another unit.
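A minimal sketch of the unit (frame) idea, using an invented vehicle domain: each unit is a set of property/value pairs, and a property value may name another unit to express an 'is-a' (special case) or 'part-of' relation. Property lookup follows the 'is-a' link when a unit does not hold the property itself.

# Illustrative only: units (frames) as property/value pairs, with 'is-a' and
# 'part-of' links that name other units.
units = {
    "vehicle": {"self-propelled": True},
    "car":     {"is-a": "vehicle", "wheels": 4, "parts": ["engine", "chassis"]},
    "engine":  {"part-of": "car", "cylinders": 6},
}

def lookup(unit, prop):
    """Find a property on a unit, inheriting along the 'is-a' chain if absent."""
    while unit is not None:
        if prop in units[unit]:
            return units[unit][prop]
        unit = units[unit].get("is-a")
    return None

print(lookup("car", "wheels"))          # 4, stored directly on the car unit
print(lookup("car", "self-propelled"))  # True, inherited from the vehicle unit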
The problem-solving model, or paradigm, organizes and controls the steps taken to solve the problem. One common but powerful paradigm involves chaining of IF-THEN rules to form a line of reasoning. If the chaining starts from a set of conditions and moves toward some conclusion, the method is called forward chaining. If the conclusion is known (for example, a goal to be achieved) but the path to that conclusion is not known, then reasoning backwards is called for, and the method is backward chaining. These problem-solving methods are built into program modules called inference engines or inference procedures that manipulate and use knowledge in the knowledge base to form a line of reasoning.
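The two chaining regimes can be sketched over the same toy rule set. The Python below is illustrative only; the rules are invented, and real inference engines add pattern variables, conflict resolution, and uncertainty handling.

# Illustrative only: forward and backward chaining over the same toy rules.
rules = [
    (["a", "b"], "c"),   # IF a and b THEN c
    (["c"], "d"),        # IF c THEN d
]

def forward_chain(facts, rules):
    """Start from known facts and apply rules until nothing new is concluded."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conds, concl in rules:
            if concl not in facts and all(c in facts for c in conds):
                facts.add(concl)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Start from a goal and work backwards: the goal holds if it is a known
    fact, or if some rule concludes it and all of that rule's conditions can
    themselves be established (assumes no circular rules)."""
    if goal in facts:
        return True
    return any(concl == goal and all(backward_chain(c, facts, rules) for c in conds)
               for conds, concl in rules)

print(forward_chain({"a", "b"}, rules))        # {'a', 'b', 'c', 'd'}
print(backward_chain("d", {"a", "b"}, rules))  # True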
The knowledge base an expert uses is what he learned at school, from colleagues, and from years of experience. Presumably the more experience he has, the larger his store of knowledge. Knowledge allows him to interpret the information in his databases to advantage in diagnosis, design, and analysis.
Though an expert system consists primarily of a knowledge base and an inference engine, a couple of other features are worth mentioning: reasoning with uncertainty, and explanation of the line of reasoning.
Knowledge is almost always incomplete and uncertain. To deal with uncertain knowledge, a rule may have associated with it a confidence factor or a weight. The set of methods for using uncertain knowledge in combination with uncertain data in the reasoning process is called reasoning with uncertainty. An important subclass of methods for reasoning with uncertainty is called 'fuzzy logic,' and the systems that use them are known as 'fuzzy systems.'
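The text does not prescribe a particular calculus for combining confidence factors, so the sketch below shows just one widely known scheme, MYCIN-style certainty factors on a 0-to-1 scale, purely as an illustration of how rule weights and uncertain evidence might be combined. The weights and evidence values are invented.

# Illustrative only: MYCIN-style certainty factors (one scheme among several).
def rule_cf(rule_weight, condition_cfs):
    """Confidence contributed by one rule: its weight attenuated by the
    weakest piece of evidence in its IF part."""
    return rule_weight * min(condition_cfs)

def combine(cf1, cf2):
    """Combine support for the same conclusion from two independent rules."""
    return cf1 + cf2 * (1.0 - cf1)

cf_from_rule_1 = rule_cf(0.8, [0.9, 0.7])   # 0.56
cf_from_rule_2 = rule_cf(0.6, [1.0])        # 0.60
print(round(combine(cf_from_rule_1, cf_from_rule_2), 2))  # 0.82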
Because an expert system uses uncertain or heuristic knowledge (as we humans do) its credibility is often in question (as is the case with humans). When an answer to a problem is questionable, we tend to want to know the rationale. If the rationale seems plausible, we tend to believe the answer. So it is with expert systems. Most expert systems have the ability to answer questions of the form: 'Why is the answer X?' Explanations can be generated by tracing the line of reasoning used by the inference engine (Feigenbaum, McCorduck et al. 1988).
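As an illustration of tracing, the sketch below records which rule produced each conclusion during a forward-chaining pass and then replays that record to answer a 'Why is the answer X?' question. The rules and facts are invented.

# Illustrative only: answering "Why is the answer X?" by replaying a recorded
# trace of which rule produced each conclusion.
rules = {
    "r1": (["fever", "cough"], "suspect flu"),
    "r2": (["suspect flu", "short of breath"], "recommend chest exam"),
}
facts = {"fever", "cough", "short of breath"}
trace = {}   # conclusion -> (rule name, conditions it used)

for name, (conds, concl) in rules.items():
    if all(c in facts for c in conds):
        facts.add(concl)
        trace[concl] = (name, conds)

def explain(conclusion):
    if conclusion not in trace:
        return f"{conclusion}: given as input data."
    name, conds = trace[conclusion]
    lines = [f"{conclusion}: concluded by rule {name} from {', '.join(conds)}"]
    lines += [explain(c) for c in conds]
    return "\n".join(lines)

print(explain("recommend chest exam"))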
The most important ingredient in any expert system is knowledge. The power of expert systems resides in the specific, high-quality knowledge they contain about task domains. AI researchers will continue to explore and add to the current repertoire of knowledge representation and reasoning methods. But in knowledge resides the power. Because of the importance of knowledge in expert systems and because the current knowledge acquisition method is slow and tedious, much of the future of expert systems depends on breaking the knowledge acquisition bottleneck and in codifying and representing a large knowledge infrastructure.
Knowledge engineering is the art of designing and building expert systems, and knowledge engineers are its practitioners. Gerald M. Weinberg said of programming in The Psychology of Computer Programming: 'Programming, like loving, is a single word that encompasses an infinitude of activities' (Weinberg 1971). Knowledge engineering is the same, perhaps more so. We stated earlier that knowledge engineering is an applied part of the science of artificial intelligence which, in turn, is a part of computer science. Theoretically, then, a knowledge engineer is a computer scientist who knows how to design and implement programs that incorporate artificial intelligence techniques. The nature of knowledge engineering is changing, however, and a new breed of knowledge engineers is emerging. We'll discuss the evolving nature of knowledge engineering later.
Today there are two ways to build an expert system. They can be built from scratch, or built using a piece of development software known as a 'tool' or a 'shell.' Before we discuss these tools, let's briefly discuss what knowledge engineers do. Though different styles and methods of knowledge engineering exist, the basic approach is the same: a knowledge engineer interviews and observes a human expert or a group of experts and learns what the experts know, and how they reason with their knowledge. The engineer then translates the knowledge into a computer-usable language, and designs an inference engine, a reasoning structure, that uses the knowledge appropriately. He also determines how to integrate the use of uncertain knowledge in the reasoning process, and what kinds of explanation would be useful to the end user.
Next, the inference engine and facilities for representing knowledge and for explaining are programmed, and the domain knowledge is entered into the program piece by piece. It may be that the inference engine is not quite right, that the form of knowledge representation is awkward for the kind of knowledge needed for the task, or that the expert decides some pieces of knowledge are wrong. All of these problems are discovered and corrected as the expert system gradually gains competence.
The discovery and cumulation of techniques of machine reasoning and knowledge representation is generally the work of artificial intelligence research. The discovery and cumulation of knowledge of a task domain is the province of domain experts. Domain knowledge consists of both formal, textbook knowledge, and experiential knowledge -- the expertise of the experts.
Compared to the wide variation in domain knowledge, only a small number of AI methods are known that are useful in expert systems. That is, currently there are only a handful of ways in which to represent knowledge, or to make inferences, or to generate explanations. Thus, systems can be built that contain these useful methods without any domain-specific knowledge. Such systems are known as skeletal systems, shells, or simply AI tools.
Building expert systems by using shells offers significant advantages. A system can be built to perform a unique task by entering into a shell all the necessary knowledge about a task domain. The inference engine that applies the knowledge to the task at hand is built into the shell. If the program is not very complicated and if an expert has had some training in the use of a shell, the expert can enter the knowledge himself.
Many commercial shells are available today, ranging in size from shells on PCs, to shells on workstations, to shells on large mainframe computers. They range in price from hundreds to tens of thousands of dollars, and range in complexity from simple, forward-chained, rule-based systems requiring two days of training to those so complex that only highly trained knowledge engineers can use them to advantage. They range from general-purpose shells to shells custom-tailored to a class of tasks, such as financial planning or real-time process control.
Although shells simplify programming, in general they don't help with knowledge acquisition. Knowledge acquisition refers to the task of endowing expert systems with knowledge, a task currently performed by knowledge engineers. The choice of reasoning method, or a shell, is important, but it isn't as important as the accumulation of high-quality knowledge. The power of an expert system lies in its store of knowledge about the task domain -- the more knowledge a system is given, the more competent it becomes.
The fundamental working hypothesis of AI is that intelligent behavior can be precisely described as symbol manipulation and can be modeled with the symbol processing capabilities of the computer.
In the late 1950s, special programming languages were invented that facilitate symbol manipulation. The most prominent is called LISP (LISt Processing). Because of its simple elegance and flexibility, most AI research programs are written in LISP, but commercial applications have moved away from LISP.
In the early 1970s another AI programming language, PROLOG (PROgramming in LOGic), was invented in France.
PROLOG consists of English-like statements which are facts (assertions), rules (of inference), and questions. Here is an inference rule: 'If object-x is part-of object-y then a component-of object-y is object-x.'
Programs written in PROLOG have behavior similar to rule-based systems written in LISP. PROLOG, however, did not immediately become a language of choice for AI programmers. In the early 1980s it was given impetus with the announcement by the Japanese that they would use a logic programming language for the Fifth Generation Computing Systems (FGCS) Project. A variety of logic-based programming languages have since arisen, and the term prolog has become generic.
THE APPLICATIONS OF EXPERT SYSTEMS
The spectrum of applications of expert systems technology to industrial and commercial problems is so wide as to defy easy characterization. The applications find their way into most areas of knowledge work. They range from helping salespersons sell modular factory-built homes to helping NASA plan the maintenance of a space shuttle in preparation for its next flight.
Applications tend to cluster into seven major classes.
Diagnosis and Troubleshooting of Devices and Systems of All Kinds
This class comprises systems that deduce faults and suggest corrective actions for a malfunctioning device or process. Medical diagnosis was one of the first knowledge areas to which ES technology was applied (for example, see Shortliffe 1976), but diagnosis of engineered systems quickly surpassed medical diagnosis. There are probably more diagnostic applications of ES than any other type. The diagnostic problem can be stated in the abstract as: given the evidence presenting itself, what is the underlying problem/reason/cause?
Planning and Scheduling
Systems that fall into this class analyze a set of one or more potentially complex and interacting goals in order to determine a set of actions to achieve those goals, and/or provide a detailed temporal ordering of those actions, taking into account personnel, materiel, and other constraints. This class has great commercial potential, which has been recognized. Examples involve airline scheduling of flights, personnel, and gates; manufacturing job-shop scheduling; and manufacturing process planning.
Configuration of Manufactured Objects from Subassemblies
Configuration, whereby a solution to a problem is synthesized from a given set of elements related by a set of constraints, is historically one of the most important of expert system applications. Configuration applications were pioneered by computer companies as a means of facilitating the manufacture of semi-custom minicomputers (McDermott 1981). The technique has found its way into use in many different industries, for example, modular home building, manufacturing, and other problems involving complex engineering design and manufacturing.
Financial Decision Making
The financial services industry has been a vigorous user of expert system techniques. Advisory programs have been created to assist bankers in determining whether to make loans to businesses and individuals. Insurance companies have used expert systems to assess the risk presented by the customer and to determine a price for the insurance. A typical application in the financial markets is in foreign exchange trading.
Knowledge Publishing
This is a relatively new, but also potentially explosive area. The primary function of the expert system is to deliver knowledge that is relevant to the user's problem, in the context of the user's problem. The two most widely distributed expert systems in the world are in this category. The first is an advisor which counsels a user on appropriate grammatical usage in a text. The second is a tax advisor that accompanies a tax preparation program and advises the user on tax strategy, tactics, and individual tax policy.
Process Monitoring and Control
Systems falling in this class analyze real-time data from physical devices with the goal of noticing anomalies, predicting trends, and controlling for both optimality and failure correction. Examples of real-time systems that actively monitor processes can be found in the steel making and oil refining industries.
Design and Manufacturing
These systems assist in the design of physical devices and processes, ranging from high-level conceptual design of abstract entities all the way to factory floor configuration of manufacturing processes.
BENEFITS TO END USERS
Primarily, the benefits of ESs to end users include:
A speed-up of human professional or semi-professional work -- typically by a factor of ten and sometimes by a factor of a hundred or more.
Within companies, major internal cost savings. For small systems, savings are sometimes in the tens or hundreds of thousands of dollars; but for large systems, often in the tens of millions of dollars and as high as hundreds of millions of dollars. These cost savings are a result of quality improvement, a major motivation for employing expert system technology.
Improved quality of decision making. In some cases, the quality or correctness of decisions, evaluated after the fact, has shown a ten-fold improvement.
Preservation of scarce expertise. ESs are used to preserve scarce know-how in organizations, to capture the expertise of individuals who are retiring, and to preserve corporate know-how so that it can be widely distributed to other factories, offices or plants of the company.
Introduction of new products. A good example of a new product is a pathology advisor sold to clinical pathologists in hospitals to assist in the diagnosis of diseased tissue.
THE EXPERT SYSTEMS BUSINESS
Selling consulting services is a vigorous part of the expert system business. It's fair to say that the technology of expert systems has had a far greater impact than the expert systems business. Expert system technology is widespread and deeply embedded.
Current Business Trends
As expert system techniques matured into a standard information technology in the 1980s, the increasing integration of expert system technology with conventional information technology -- data processing or management information systems -- grew in importance. Conventional technology is mostly the world of IBM mainframes and IBM operating systems. More recently, this world has grown to include distributed networks of engineering workstations. However, it's also the world of a wide variety of personal computers, particularly those running the MS DOS operating system.
Early in its history, commercial expert systems tools were written primarily in LISP and PROLOG, but more recently the trend has been to conventional languages such as C. Commercial companies dedicated to one language or the other (e.g., Symbolics, Lisp Machines Inc., Quintus Prolog) have gone into bankruptcy or have been bought out by other companies.1
Finally, the connection of expert systems to the databases that are managed by conventional information technology methods and groups is essential and is now a standard feature of virtually all expert systems.
1 Interestingly, this trend away from LISP and PROLOG is being reversed in some commercial computing systems. Apple Computer's new personal digital assistant, the Newton, has an operating system (Dylan) written in a dialect of LISP, and one of the most popular systems for computer-aided design (AUTOCAD) is written in a LISP dialect.
The basic categories of research in knowledge-based systems include: knowledge representation, knowledge use (or problem-solving), and knowledge acquisition (i.e., machine learning and discovery).
In knowledge representation, the key topics are concepts, languages, and standards for knowledge representation. There are many issues involved in scaling up expert systems: defining the problems encountered in the pursuit of large knowledge bases; developing the infrastructure for building and sharing large knowledge bases; and actually accumulating a large body of knowledge, for example, common sense knowledge or engineering and technical knowledge.
Knowledge use, or problem-solving, research efforts involve the development of new methods for different kinds of reasoning, such as analogical reasoning, reasoning based on probability theory and decision theory, and reasoning from case examples.
The first generation of expert systems was characterized by knowledge bases that were narrow and, hence, performance that was brittle. When the boundary of a system's knowledge was traversed, the system's behavior went from extremely competent to incompetent very quickly. To overcome such brittleness, researchers are now focusing on reasoning from models, principles and causes. Thus, the knowledge-based system will not have to know everything about an area, as it were, but can reason with a broader base of knowledge by using the models, the principles, and the causation.
The quest for a large knowledge base boils down to the problem of access to distributed knowledge bases involving multiple expert systems and developers. The effort to develop the infrastructure needed to obtain access is a research area called knowledge sharing. The goal of the knowledge sharing research is to overcome the isolation of first-generation expert systems, which rarely interchanged any knowledge. Hence, the knowledge bases that were built for expert systems in the 1980s did not accumulate.
A major issue of expert systems research involves methods for reasoning with uncertain data and uncertain knowledge. One of the most widely adopted methods is called 'fuzzy logic' or 'fuzzy reasoning,' especially in Japan.
More recently, the research topic of neural networks has come on the scene -- networks of distributed components operating in parallel to make classification decisions. The links between neural network technology and expert system technology are being explored.
Finally, research is underway to explore the use of new parallel computing methods in the implementation of expert systems and advanced knowledge-based systems. The new wave of computing is multi-processor technology. The question is, what will be the impact of such high-performance parallel computing activities on expert system techniques?
DESIGN OF THE JTEC STUDY GROUP ON KNOWLEDGE-BASED SYSTEMS AND THE SELECTION OF JAPANESE SITES
Sponsors of this JTEC study defined the dimensions of the study as follows:
Business sector applications of expert systems
Advanced knowledge-based systems in industry
Advanced knowledge-based systems research in universities
Government laboratories, ICOT, the laboratory of the Japanese Fifth Generation Computer Project
EDR -- research and development on electronic dictionaries (lexical knowledge base) for natural language processing
Finally, we were asked to observe the fuzzy systems work being done in Japan.
Half of this study effort has been aimed at applications of expert systems in the business sector. Knowledge-based system research in industry comprises fifteen percent of the effort, and knowledge-based system research in universities another fifteen percent. Two national laboratories (ICOT and EDR) each account for five percent of the total. The remaining ten percent focuses on miscellaneous topics.
During the week of our visit, the JTEC team visited 19 Japanese sites. Applications of expert systems to business sector problems and industrial knowledge-based system research together accounted for 12 of the 19 visits. University knowledge-based systems research accounted for three, ICOT and EDR accounted for two, and other visits accounted for the remaining two.
We chose the industrial sites to be visited based on the following criteria:
Computer manufacturing companies that were known to be very active in KBS applications and research
Non-computer companies at which there was at least one well-known expert system application
Selected companies from certain industry groups that were known to be active and highly competent in building expert systems applications (for example, the steel, construction, electric power and communications industries)
Visits to university professors were selected on the basis of the panel members' personal knowledge of the leaders in academic knowledge-based system research in Japan.
Finally, we scheduled a special visit with the editor and the staff of Nikkei AI Newsletter to check facts that we believed we had accumulated and impressions that we had. Nikkei AI is the leading Japanese news publication in the field of knowledge-based systems applications and research.
Chapter 2
APPLICATIONS OF KNOWLEDGE-BASED SYSTEMS IN JAPAN
Edward Feigenbaum
Peter E. Friedland
Bruce B. Johnson
Howard Shrobe
INTRODUCTION
A major purpose of this JTEC study was to survey the use of knowledge-based (KB) systems in Japanese industry. Our goal was to determine the breadth (how many) and depth (how important to the company) of KB systems as well as to analyze the methods and tools used to develop the systems. This chapter will first survey general trends of expert systems (ES) development and use within Japanese companies, based on annual surveys by the publication Nikkei AI. We will then illustrate some of the best of Japanese industrial utilization of KB systems technology through three case studies which we believe have had a great impact upon the business of the industrial user. Next, the breadth of applications that we observed will be presented, with descriptions of application types and type-specific features. Finally, we compare and contrast our view of KB systems utilization in Japanese industry with that in the United States.
TRENDS IN AI APPLICATIONS IN JAPAN
Each year, Nikkei AI publishes a survey of expert systems in Japan.
The number of fielded systems reported in the 1992 survey was analyzed according to the type of application, as shown in Figure 2.2, along with a comparison to previous years. Diagnostic systems have traditionally been the most popular type of ES. They were first developed 20 years ago (starting with MYCIN; Shortliffe 1976), and now have a well-understood methodology for their construction. The recent upswing in diagnostic systems is due to two factors: (1) the entry of new ES-user companies; and (2) the application of diagnostic techniques in new areas, such as 'help desk' and management analysis.
Figure 2.1. Growth of ESs in Japan
The percentage of ES applications in the planning and design areas is declining, although the absolute number of systems in these areas is increasing because the total number of systems is increasing at a faster rate. Figure 2.2 shows that diagnosis still represents the largest class of applications, although planning (including scheduling) is growing rapidly in importance. Design and control applications are also high enough in numbers to be accorded mention as separate categories. Three examples from our own review of Japanese ESs in scheduling, design and control (all of which have had a high impact on the organizations that use them) are discussed in the next section.
Figure 2.2. Types of ES Applications (Source: Nikkei AI 1992)
The Nikkei AI survey also looked at ES development from the point of view of organizational structure, i.e., who actually builds the systems. Here we see some contrast with the situation in the United States.
Figure 2.3. Locus of Development of ESs
(Source: Nikkei AI 1991)
Some other general conclusions from the Nikkei AI survey, for which we found support in our own visits, include:
a growing interest and involvement in related technologies, e.g., fuzzy logic, neural networks, object-oriented databases;
a steady move away from specialized computers (e.g., LISP machines) toward UNIX-based workstations;
a recognition of the value of KB systems in improving individual decision making, working procedures, and timeliness;
a recognition, by some companies, of operational problems such as obsolescence, maintenance and completeness of the KB. (One would expect similar problems with database systems.)
1 Neither Nikkei AI nor the JTEC panel knows whether the non-respondents have fielded systems. We must assume that some percentage of non-respondents have developed and/or fielded knowledge-based systems. Therefore, the figures quoted in the following discussion must represent minima.
2 The JTEC panel's best guess
of the number of fielded systems in
CASE STUDIES OF HIGH-IMPACT SYSTEMS
The impact of a new technology like KB systems can be measured by both the depth and the breadth of its impact. Later in this chapter we discuss the breadth of KB systems applications that we observed in Japan. In this section we present three case studies that we believe illustrate the depth of impact on the organizations that use them.
Case 1: Modular House Configuration (Sekisui Heim)
The Company. Sekisui Heim is a housing division of Sekisui Chemical Company. Begun in 1947, Sekisui Chemical was the first company to develop plastic molds in Japan.
In 1971, Sekisui Chemical created the Heim division to build modular houses. Unlike prefabricated houses, modular houses are semi-custom houses designed and constructed out of modules. Sekisui Heim promises to have a house ready for occupancy within two months of the time a customer signs a contract with a design in hand. Of the two months, 40 days are allocated for on-site work -- from ground breaking to completion -- with a five-man crew. More than 80 percent of the house is built in a factory. Sekisui has six highly automated factories, each of which can complete enough components for a house every 40 minutes. The six factories are distributed throughout Japan.
The Problem. A class of houses is designed by the Sekisui architects. Instances of the house can be constructed out of modules called units. The units come in six sizes (1.2 m and 2.4 m widths, and 3.6 m, 4.5 m, and 5.4 m lengths) that are designed to be transportable on trucks. The units can be subdivided into smaller compartments, placed side by side to form a larger compartment, or stacked to form multiple stories. Within the constraints imposed for a particular house class (e.g., a particular roof design may be required for a particular class), the customer can design a house to suit his or her desires and needs. A room can be of any size as long as it can be configured from the units; the exterior walls can be made from a variety of different materials such as stucco, hardwood, and cement; the openings between rooms can be of many different sizes; and so on.
The most popular class of houses is called Parfait, of which more than 45,000 units have been sold. It is a contemporary looking two-story house with a flat roof. For a house with a floor space of 1600 square feet, the average cost is about $69,000 -- a unit cost of less than $45/sq. ft.
After the house has been designed, all the necessary parts must be identified and delivered to the factory floor in a timely manner. An average house requires about 5000 unique parts. Sekisui has 300,000 different parts in stock. Prior to the installation of an expert system (to be described in the next section), every time a new house design was introduced, there was an error rate in parts selection of about 30 percent for the first six months. As time went on the error rate decreased to about 5 percent. The high initial error rate made the introduction of new products highly problematic, and the steady-state error rate of 5 percent cut into profits. Some kind of computer-based help was needed to make Heim a viable business.
In 1984 Sekisui heard about expert systems and made a strategic decision to invest in the technology. Since the company was not familiar with the technology, it bought a 70 percent share in a small start-up company named ISAC (International Sekisui AI Corporation). ISAC was started by the developer of a PROLOG language called K-PROLOG, currently the best-selling PROLOG in Japan.
Sekisui Heim formulated a long-range plan for the development of computer programs for its business. Three kinds of capabilities were envisioned which were to be integrated into a tool kit to help with the complete process from design to production of modular houses:
An intelligent CAD tool to help with the development of new designs
A room layout consultant and other consultation systems to help in sales
Production control and other expert systems to help with the actual manufacture of units
The first expert system to be put in routine use, HAPPS, identifies and selects the necessary parts for a given house design and schedules the delivery of the parts to the right place on the factory floor at the right time. Sekisui Heim has also developed an expert system for sales personnel to help customers with room layouts, but it is experiencing limited acceptance by the sales force.
The HAPPS System. HAPPS is an expert system that selects and schedules the delivery to the factory floor of the correct parts for steel frame houses. There are two other versions for wood frame and high-rise houses, called TAPPS and MAPPS. Once the house design is finalized by the customer, it is passed on to the factory. There, various types of information about the house are entered into the system. The interface uses a number of menu-driven windows that display the 'legal' options available for different parts of the house. Figure 2.4 shows the different options for stairs. The display shows the layout of the rooms and available options as icons, making the specification easy and clear.
Figure 2.4. User Interface For Selecting Some Design Options
The necessary information for parts selection is gathered in the following order:
House class name, house name, name code, address, etc.
Sizes of the various units and their arrangements
Eaves for first and second floor: type and color, position of gutter pipes
Entry way and stairs: the shape and location of the entry way, the shape and position of the stairways, step-down location, if any; type of windows, rooms to be soundproofed, etc.
Interface between units (such as doorways) and corridors
Other information, such as the order in which the units are to be manufactured
The input information basically describes the compositional relationships among the rooms composed of modules. For example, 'an object of type x is located in unit y on floor z.' From the specifications of the final product, HAPPS must determine all the parts needed to create the product. This process is called parts explosion. In order to do the parts explosion, rules of the form, 'If parts x and y are joined using z method, then use part s,' are applied to all the objects specified in the design. The rule base is searched using K-PROLOG's pattern match and backtracking facilities.
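HAPPS itself is written in K-PROLOG (and later MethodLog), so the following Python sketch is only an illustration of the parts-explosion idea; the part names, join methods, and design data below are invented, not Sekisui's.

# Illustrative only: parts explosion driven by join rules of the form
# "if parts x and y are joined using method z, then part s is required."
join_rules = {
    # (part_x, part_y, join_method) -> required extra part
    ("wall panel", "floor unit", "bolted"): "L-bracket set",
    ("wall panel", "wall panel", "sealed"): "joint gasket",
    ("stair unit", "floor unit", "bolted"): "stair anchor plate",
}

# Joints taken from a (toy) house specification produced at design time.
design_joints = [
    ("wall panel", "floor unit", "bolted"),
    ("wall panel", "wall panel", "sealed"),
]

def explode(joints):
    """Collect every extra part implied by the joints in the design."""
    return [join_rules[j] for j in joints if j in join_rules]

print(explode(design_joints))   # ['L-bracket set', 'joint gasket']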
The heart of the parts explosion process is the object hierarchy describing the parts and their interrelationships. Figure 2.5 shows the major classes and their position in the class hierarchy. Table 2.1 explains the individual classes.
HAPPS was developed on the Sun workstation. The compiled production version is downloaded and runs on the Hitachi Engineering Workstation 2050.
Figure 2.5. Parts Class Structure in HAPPS
Table 2.1
Explanation of the Classes
HAPPS was originally developed using K-PROLOG. However, the maintenance (including additions and modifications of rules and data) of the PROLOG system became a real problem when new product introduction increased to four times per year. To overcome this problem ISAC developed an object-oriented PROLOG called MethodLog, and HAPPS has been implemented in this language. In object-oriented systems, the data description and the procedures associated with the data are encapsulated into entities called objects. The proximity of all the relevant information about a piece of data facilitates maintenance.
The size of HAPPS is about 14,000 lines of PROLOG and 6,000 lines of C code (used primarily for writing the interface and data base access functions). It took a team consisting of four domain experts, two knowledge engineers, and 14 programmers two years to develop HAPPS.
The Payoff. The decision to build HAPPS, TAPPS, and MAPPS was a strategic decision for Sekisui Heim. It enabled the company to put itself in a profit-making position and to expand its product lines. The expert systems reduce the cost of making modular houses, improve the quality of products and services, and reduce the error rate. The steady-state five percent error in parts selection has been reduced to near zero.
The three systems cost approximately 450 million yen ($3.5 million at approximately 128 yen/$) to build. The cost figure was calculated using the following formula: 1.5 million yen x manpower x months. Sekisui claims that the savings have been 1 billion yen ($8 million) annually.
Case 2: Aircraft Crew Scheduling (JAL)
The Company. Our next example comes from Japan Airlines (JAL).
The Problem. Before the development of the KB scheduling system, called COSMOS/AI, about 25 human schedulers were involved in solving the crew allocation problem. The hardest schedule (for JAL 747s) took 20 days to prepare (with a great deal of overtime). Moreover, the schedulers needed about a year to become expert in the problem. A related, very important issue was maintenance of scheduling knowledge, that is, updating information on planes, crews, government regulations, etc. In the summer of 1986, JAL decided to investigate various automated approaches for improving its solution to the crew scheduling problem. The airline developed two automated approaches to the problem: one a traditional operations research-based scheduling system, and the other a knowledge-based system.
The System. Testing of both systems began in the summer of 1988. The KB system was easily the system of choice for two major reasons: first, it produced better schedules because it was far better able to represent complex, yet informal constraints on crew preferences and other related factors. Second, it was much easier to maintain.
Technically, the JAL scheduling system uses straightforward heuristic scheduling methods. It builds crew pattern blocks that include pre-flight rest time, flight time, mandatory time at the destination, and a return flight. These blocks are then placed on the flight schedule, along with other time allocations like crew testing, vacations, and training, in order of most-constrained allocations first. When a problem occurs, time allocations are shuffled, moving the least constrained blocks first. The most complicated problems (the 747s) take two to three hours for a single scheduling run. Constraining factors which must be considered in producing the schedule are shown in Figure 2.6. The human experts still make final adjustments to the schedule.
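The Python sketch below illustrates the 'most constrained first' placement with simple backtracking described above; the crew blocks, slots, and feasibility sets are invented toy data, not JAL's.

# Illustrative only: place the most constrained block first, backtrack on
# dead ends (a toy stand-in for the schedule shuffling described above).
blocks = {
    "pattern A": {1, 2},        # feasible start slots for each block
    "pattern B": {2},
    "training":  {1, 2, 3},
}

def schedule(remaining, used):
    if not remaining:
        return {}
    # Most constrained block first: fewest feasible slots still available.
    name = min(remaining, key=lambda b: len(blocks[b] - used))
    for slot in sorted(blocks[name] - used):
        rest = schedule(remaining - {name}, used | {slot})
        if rest is not None:            # success further down the search
            rest[name] = slot
            return rest
    return None                         # dead end: caller tries another slot

print(schedule(set(blocks), set()))
# {'training': 3, 'pattern A': 1, 'pattern B': 2}  (one feasible assignment)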
The Payoff. The KB system became fully operational in February, 1990. It has reduced the number of human schedulers from 25 to 19 in a time when JAL operations increased by five to ten percent. Moreover, these 19 schedulers are now also assisting in other crew management tasks, reducing the actual scheduling manpower effectively to the equivalent of 14. The aforementioned 747 schedule now takes a maximum of 15 days to produce, with no overtime required, compared with 20 or more days, including overtime, to do the job previously. Training time has been reduced to two months. Overall, scheduling productivity has approximately doubled.
Two nice features of the JAL system are an excellent human interface and full integration with a mainframe-based corporate database that provides constant updates to the KB system. An example of the system's clear output, showing allocation of crew members over a 30-day period, is shown in Figure 2.7. The scheduling system itself is distributed among workstations specialized to the different aircraft types, although the workstations themselves share information on crews trained on multiple aircraft types. The system cost about 500 million yen ($3.9 million at 128 yen/$) to build and paid for itself in direct cost savings in about 18 months. JAL views the harder-to-measure savings in increased crew satisfaction and ease of maintenance as equally important.
Figure 2.6. Sources of Knowledge and Constraints Used for JAL Crew Scheduling
Figure 2.7. Example of Output, Showing Crew Assignments For Each Day of the Month.
Case 3: Blast Furnace Control (NKK)
The Company. As early as 1986, NKK Steel Company's Fukuyama Works developed an expert system to predict abnormal conditions within its blast furnace. A blast furnace is a complex, distributed, non-linear process. Conventional mathematical modeling techniques have never been able to predict future dynamic states of the furnace with enough accuracy to support automated control. The system became operational in 1987. NKK and other Japanese steel companies have since developed other knowledge-based blast furnace control systems.
The Problem. Because the blast furnace feeds all other processes in the steel mill, any instability in the operation of the furnace is compounded by the impact on other processes further down the production line. Avoiding unstable operation of the furnace requires characterizing the current state of the furnace and projecting the conditions which will occur over the next several hours while there is still time to make adjustments. Training a skilled blast furnace operator takes many years. Fewer young people want this type of career. Codifying the skill of experienced furnace operators reduces the training requirements.
Several factors contribute to the complexity of modeling a blast furnace. Material within it coexists in all three phases -- solid, liquid and gas. The large size of the furnace leads to long lag times (six to eight hours) before a change in raw-material charging takes effect. The device is inherently three-dimensional -- there are no symmetries to simplify the geometric modeling. Moreover, the flow of material inside the furnace is itself a complex process. The thermal state of the furnace cannot be measured directly, but must be inferred from various sensor measurements. The challenge for the furnace controller is to minimize the uncertainty in the operating temperature. The smaller the uncertainty, the lower the overall temperature needed to produce the pig iron (see Figure 2.8), resulting in very large fuel savings.
The System. An expert system has been developed which successfully models the current state, predicts future trends with sufficient accuracy to make control decisions, and actually makes the control decisions. These decisions can be implemented automatically or the operator can take manual control while still operating through the expert system's interfaces.
The system as a whole consists of three components: (1) a process computer gathers input data from various sensors in the furnace, maintains a process database and generates furnace control information; (2) the 'AI processor' provides the knowledge and reasoning for assessing and interpreting the sensor data, hypothesizing the internal state of the furnace, and determining appropriate control actions; and (3) a distributed digital controller uses the furnace control data from the process computer to control the actual blast furnace. A schematic of the system is shown in Figure 2.9. The system is implemented in LISP with FORTRAN used for data preprocessing. The knowledge in the AI processor is contained in 400 rules, 350 frames, and 200 LISP procedures; fuzzy theory is employed in its inference engine. The FORTRAN preprocessor contains 20,000 procedural steps. The system has a cycle time of 20 minutes, compared to the furnace time constant of six to eight hours. Fuzzy set membership is used to relate the temperatures inferred from the instruments to actual temperatures. The membership functions are revised from time to time to tune the performance of the system.
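The sketch below illustrates only the fuzzy-membership idea mentioned above: a sensor-derived temperature indicator is mapped to degrees of membership in linguistic states, which in a fuzzy rule base would weight the applicable control rules. The breakpoints and state names are invented, not NKK's.

# Illustrative only: triangular fuzzy membership for an inferred
# furnace-temperature indicator.
def triangular(x, left, peak, right):
    """Degree of membership (0..1) in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def furnace_state(temp_index):
    return {
        "too cool": triangular(temp_index, -1.0, 0.0, 0.5),
        "normal":   triangular(temp_index, 0.0, 0.5, 1.0),
        "too hot":  triangular(temp_index, 0.5, 1.0, 2.0),
    }

print(furnace_state(0.65))
# {'too cool': 0.0, 'normal': 0.7, 'too hot': 0.3} -- partial membership in
# two states at once, which is how the uncertainty is represented.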
At any time, the operator can select either manual mode or automatic mode. The system continues to make inferences about the state of the furnace even in manual mode. Thus, the operator may manually change a set-point and the system will evaluate the influence of that action and make further inferences.
The Payoff. The blast furnace control application is noteworthy for many reasons. The problem had not been solved previously by other techniques. It was developed by systems engineers, not knowledge engineers. It is in daily operation now at two plants and will soon be installed in two more. The company reports an estimated annual savings of $6 million, a reduction in staff of four people, and an improvement in the quality of the furnace output because of reduced fluctuations in furnace temperature.
The benefits of the expert system, however, have not been separately established. It is considered an integral part of a new furnace control system that was installed the last time the blast furnace was shut down for relining. The JTEC team found that this was a common characteristic of expert systems used as closed loop controllers, viz. benefits are not traced to the component level. This suggests that expert systems have taken their place among the suite of techniques available to the controls engineer and do not require the special attention sometimes afforded new technologies.
Figure 2.8. Fuel Cost Savings
Figure 2.9. Blast Furnace Expert System
TYPES OF APPLICATIONS
As in the United States, applications in Japan tend to cluster into the major classes described in Chapter 1. The remainder of this section will discuss techniques employed for some of these classes of application and give a flavor for their prominence and diversity in Japan.
Diagnosis and Troubleshooting
Definition. Diagnostic systems comprise a broad range of applications that deduce faults and suggest corrective actions for a malfunctioning system or process. Numerous examples can be found in physical (electronic, mechanical, hydraulic, etc.) and biological domains, as well as in abstract domains such as organizations and software.
The functions of a diagnostic system typically include:
Collection of measurement data from instruments and/or the user
Characterization of the state of the system
If the system is in an abnormal state, an attempt to classify the problem
Use of a shallow-knowledge, heuristic, or statistical process to attempt to classify the problem
Use of a deep-knowledge, causal, or first-principles model to predict which failures could cause the observed symptoms
Suggestion of confirming tests or measurements
Refinement of diagnosis using the additional information
Presentation of diagnosis to human operator and, if desired, explanation of the line of reasoning
Suggested corrective or reparative actions
Receipt of confirming or corrective feedback and refinement of knowledge base for future use (learning)
Report of erroneous diagnoses or prescriptions to the knowledge base maintainer for manual corrections
Most diagnostic systems automate only a subset of these functions, leaving the other functions to humans or other types of information systems. It is common, for example, for a process control computer to capture measurement data over time and reduce these raw data to metrics. These metrics are then used by that part of the diagnostic system that is implemented with knowledge processing techniques. The capability of learning from past experience is rarely found. Diagnostic systems or classification systems made up a high proportion of the earliest knowledge systems. While the emphasis has now shifted toward planning and scheduling systems which often produce greater benefits, diagnostic systems still comprise 35-40 percent of the total number of fielded systems (Nikkei AI 1992). It is likely that diagnostic systems will continue to represent a substantial proportion of knowledge system applications because tool vendors have produced a number of task-specific shells for diagnostic applications. This will make the development task easier and will broaden the base of people capable of developing diagnostic applications.
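A minimal sketch of the shallow-knowledge classification step only (matching observed symptoms against stored fault associations and suggesting a confirming test before committing to a diagnosis); the faults, symptoms, and tests are invented examples, not drawn from any system named in this report.

# Illustrative only: shallow symptom-to-fault association with a suggested
# confirming test.
fault_table = {
    "clogged filter": {"symptoms": {"low flow", "high inlet pressure"},
                       "confirming test": "inspect filter element"},
    "pump wear":      {"symptoms": {"low flow", "motor current normal"},
                       "confirming test": "measure pump discharge pressure"},
}

def diagnose(observed):
    # Rank candidate faults by how many of their symptoms are present.
    ranked = sorted(fault_table.items(),
                    key=lambda kv: len(kv[1]["symptoms"] & observed),
                    reverse=True)
    fault, info = ranked[0]
    missing = info["symptoms"] - observed
    if missing:
        return f"Possible {fault}; to confirm: {info['confirming test']}"
    return f"Diagnosis: {fault}"

print(diagnose({"low flow", "high inlet pressure"}))  # Diagnosis: clogged filter
print(diagnose({"low flow"}))                         # asks for a confirming test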
Applications. Electromechanical systems are probably the most common domain of diagnostic applications. The JTEC team learned about major applications in electrical distribution system fault isolation, nuclear power plant diagnosis, telephone cross bar switch diagnosis and subway air conditioning diagnosis. Each of these is a large system with substantial benefits. One application at Nippon Steel handles 500 different kinds of electromechanical equipment comprising 25,000 component types (Minami and Hirata 1991).
Japanese steel producers have developed a number of diagnostic expert systems. One important application is the determination of the current state and trends in blast furnaces (see Case 3 earlier in this chapter). While not a complete diagnostic system, the characterization of 'state' is the first step in a diagnostic system.
Several applications in the domain of computer systems were mentioned by the Japanese computer manufacturers. These included determining the best means to recover from system failures, and offering advice on software debugging.
In construction, determining the cause of concrete cracking has been automated (Obayashi Corp.). Applications in medical diagnosis and finance were mentioned, but not detailed.
The benefits from diagnostic applications include reduced down time, safe recovery from failures, and accumulation or preservation of knowledge. In the latter case, it was specifically mentioned that the more educated young Japanese do not want to make their careers in certain operational jobs. Capturing knowledge of experienced workers today is essential to future operations using less educated labor. In one case, 'the cross bar switch diagnostician,' the application avoids the necessity of training replacement personnel for this obsolete equipment.
A number of task-specific shells have been developed for diagnostic applications. These are identified and described in Chapter 3. Research and development in support of diagnostic tasks has resulted in the many task-specific shells reported in that chapter. This work is continuing with emphasis on new forms of diagnostic reasoning. One particularly interesting research effort is a Toshiba project to diagnose unanticipated faults. A conventional heuristic expert system receives data about the system and diagnoses anticipated faults in a conventional way (using shallow knowledge that directly associates symptoms with likely faults). When it is unable to identify the problem, the knowledge system defers to a model-based reasoning subsystem which attempts to reason from first principles. That subsystem in turn utilizes a qualitative, fuzzy simulator to try out hypotheses and further reason from the simulated results.
Planning and Scheduling
A strong emphasis on this class of systems was apparent at many of the sites we visited. Using corporate estimates, somewhere between 30 and 50 percent of fielded KB systems in Japan fall into this class. Reported results include:
Crew scheduling time at JAL was reduced from 20 to 15 days
Scheduling time for a Toshiba paper mill was reduced from three days to two hours
A Fujitsu printed circuit board assembly and test planner reduced the scheduling task by a man-year each calendar year
All of the systems we were able to examine in detail used straightforward heuristic scheduling methods. As in the JAL application described above, constraints were described in some formal manner and used to guide an initial placement of tasks, with the most constrained tasks usually scheduled first. Then a variety of straightforward backtracking methods (sometimes called schedule shuffling) were used until a complete and correct schedule was found. In most cases, a complete and correct schedule was good enough; there was little emphasis on optimization of schedules.
While several sites mentioned a need for reactive re-scheduling methods, we did not observe a currently operational system with those capabilities. However, the elevator-group control system described below may be considered an example of a highly reactive scheduling system.
Configuration of Manufactured Objects from Subassemblies
The JTEC team saw very little application of ESs to configuration-type problems such as those that occur in design or manufacturing. The most prominent examples were the systems developed at Sekisui Heim for modular housing, discussed above, and a system developed at NTT for design of private telecommunication networks. Fujitsu is planning expert systems for computer integrated manufacturing (CIM), but did not elaborate. NEC has investigated rule-based and algorithmic approaches to LSI design and developed EXLOG, a system for synthesizing customized LSI circuits and gate arrays (Iwamoto, Fujita et al. 1991).
Process Monitoring and Control
The most significant example of a knowledge-based system for control was the one installed in 1987 at the NKK blast furnace, as described above. Since that time, the steel and construction industries in Japan have developed a number of similar knowledge-based monitoring and control systems.
A second excellent example of a control system is one developed by Mitsubishi Electric for controlling a group of elevators. The AI-2100 Elevator-Group Control System, as it is called, uses a fuzzy rule base, divided into off-line and on-line types. Off-line rules are used as a sort of default set, independent of hall calls (i.e., the signals generated when passengers waiting in the halls push the up or down buttons). Off-line rules, for example, determine the number of elevators that should be near the ground floor, near the top, or near the middle of the building, depending on the time of day. On-line rules are invoked in response to hall calls, and aim to prevent bunching of cars in the same locations, thereby minimizing the average waiting time for passengers. Results of an extended simulation of a group of four elevators servicing a 15-story building revealed that the average waiting time was reduced by about 15 percent over a conventional elevator control system. The percentage of waits over 60 seconds dropped by almost a factor of two (Ujihara and Tsuji 1988). Since the degree of irritation felt by waiting passengers increases nonlinearly with waiting time, the reduction in psychological waiting time is quite significant. An optional feature of the system is a learning function, which allows the control system to adapt to changing traffic patterns and predict future conditions.
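The sketch below is a much-simplified, crisp (non-fuzzy) illustration of the off-line/on-line split described above: off-line rules park idle cars by time of day, and an on-line rule answers a hall call while penalizing bunching. The floors, weights, and rules are invented, not Mitsubishi's.

# Illustrative only: off-line default positioning plus an on-line hall-call
# assignment that penalizes bunching of cars.
def offline_positions(hour):
    """Default parking floors for four cars, depending on time of day."""
    if 7 <= hour <= 9:            # morning up-peak: keep cars near the lobby
        return [1, 1, 2, 8]
    return [1, 5, 10, 15]         # otherwise spread through the building

def assign_car(call_floor, car_floors):
    """On-line rule: pick the car minimizing distance plus a bunching penalty."""
    def cost(i):
        distance = abs(car_floors[i] - call_floor)
        bunching = sum(1 for j, f in enumerate(car_floors)
                       if j != i and abs(f - car_floors[i]) <= 1)
        return distance + 2 * bunching
    return min(range(len(car_floors)), key=cost)

cars = offline_positions(hour=8)                    # [1, 1, 2, 8]
print(assign_car(call_floor=6, car_floors=cars))    # index 3: the car at floor 8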
Examples from the construction industry of process monitoring and control KBSs are Obayashi Corporation's systems for automatic direction control of a shield tunneling machine and the swing cable control system (see site reports in Appendix E for more details).
Software Engineering
Applications of knowledge-based techniques to software engineering were one of the areas this JTEC panel was asked to cover. Improving the process of developing software is potentially one of the most highly leveraged applications for new technology.
Several Japanese companies indicated that 5-10 percent of their applications were in the development, testing or management of software. However, the panel's site visits were not structured to explore software engineering applications in any depth. We did not visit any software factories and we saw only three brief demonstrations of software-related applications. This in itself is an interesting outcome since several of the industrial research laboratories we visited had titles such as Software Engineering Research. Most research in those laboratories is focused upon new forms of software, as opposed to the use of knowledge-based techniques to support the development of conventional procedural software.
Applications. The examples the JTEC team saw included the generation of procedural programs from a state/event matrix and from a problem model. Both of these applications utilize transformation technology to transform a declarative specification into procedural code. Both operate at the individual program level, rather than the system level.
Another example was NEC's application of case-based reasoning to the retrieval of software quality improvement ideas. The case base has been developed over many years, which in itself is a unique contribution. Historically, the case base, which is company confidential, has been published in a book that is updated annually. However, the book has now become too large to be effective and an electronic case library has been established. The library indexes 130 attributes, and there are some clever techniques to minimize the number of index items which the user confronts. The JTEC team did not view this application as a convincing example of case-based reasoning, as there was no repair logic in the application. However, the main goal of the project was reformulation of the corporate knowledge to permit effective machine access, and the goal appears to have been successfully achieved.
Comparative Assessment with the United States
COMPANY-SPECIFIC APPLICATIONS
This section is a survey of some of the applications that were described to the panel, either by oral presentation or in answers to our questionnaire. Most of this material is abstracted from the site visit reports in Appendix E. The companies that we visited that are not listed below, namely Japan Air Lines and Sekisui Chemical, basically had one major application each, and that application has already been discussed.
Fujitsu
Fujitsu Laboratories reports that it has built about 240 systems for internal use. The company also has knowledge of about 250 systems built by its customers, but cannot categorize any of them in terms of operationality. Fujitsu estimates that about 20 percent of the projects started get as far as a complete prototype, and of those, 20 percent reach operational status. A best overall guess is that five percent of the reported systems are in routine use. The current success rate in fielding expert systems is considerably better than that historical five percent: because of the experience base it now has, Fujitsu is better able to select problems which are solvable by this technology, and the success rate is now somewhere between 75 and 95 percent.
Planning/Scheduling. The largest percentage of systems developed for internal use are for planning or scheduling. The most successful is PSS, a production support system for planning assembly and test of printed circuit boards. The application is relatively small, built on ESHELL, and runs on a mainframe. The system, which is in daily use, saves about one person-year per year by speeding up the planning time. A workstation version is under development.
Fujitsu stresses integration of KBS and conventional systems. It now has ES tools written in COBOL (YPS/KR) and in FORTRAN (FORTRAN/KR), to support such integration (see Chapter 3).
At Fujitsu, 60-75 percent of the development cost of a system goes into the graphical user interface (GUI). Better GUIs are needed, and that need has stimulated work on a GUI called GUIDEPOWER.
In addition to the need for better GUIs, Fujitsu also pointed to other problems with the existing technology. Knowledge changes rapidly in the real world (e.g., in banking), and hence the maintenance of the KB is too costly using existing techniques. A more automated means of knowledge acquisition/revision is needed. Another problem is the relative paucity of development tools, such as for testing a system. Our Fujitsu hosts expressed the view that the ES and CASE worlds are not well matched -- in general, expert systems are best suited to ill-structured problems, whereas CASE tools are better suited to well-structured problems.
Finally, Fujitsu worked with NKK on the blast furnace system described earlier. It is one of the largest applications with which Fujitsu has been associated.
Construction. One of
Process Scheduling. Another highly successful system is for process scheduling in chemical plants. Use of the system has resulted in reducing the costs of raw materials and labor by billions of yen annually.
Initially, most of
Toshiba
Approximately 500 expert systems have been developed at Toshiba for both internal and external use, with about 10 percent in routine use. Design and planning/scheduling are the major growth application areas. Within design, the principal tasks are LSI and PCB design.
Paper Production. The most successful expert system is a paper production scheduling system for the Tomakomai mill of Ohji Paper Co., Ltd. The system uses 25 kinds of pulp, which are combined in 10 papermaking machines to produce 200 different paper products. There are hundreds of constraints to be satisfied. The system employs a top-down hierarchical scheduling strategy, starting with scheduling product groups, then individual products, and then line balancing. This application has reduced the time required to produce a monthly schedule from three days to two hours.
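The top-down strategy can be pictured as nested passes. The sketch below uses toy data rather than Ohji Paper's actual products and constraints: product groups are ordered first, then products within each group, leaving line balancing to a final pass.

    # Toy sketch of top-down hierarchical scheduling: product groups are ordered
    # first, then products within each group; a real system would add line
    # balancing and constraint checking at each level.
    groups = {
        "newsprint": {"demand": 500, "products": {"N-30": 300, "N-45": 200}},
        "coated":    {"demand": 800, "products": {"C-60": 500, "C-80": 300}},
    }

    def schedule(groups):
        plan = []
        # Level 1: order product groups by demand (largest first).
        for name, group in sorted(groups.items(), key=lambda g: -g[1]["demand"]):
            # Level 2: order products inside the group the same way.
            for product, tons in sorted(group["products"].items(), key=lambda p: -p[1]):
                plan.append((name, product, tons))
        return plan   # Level 3 (line balancing) would assign machines to these slots.

    for slot in schedule(groups):
        print(slot)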
Microwave Circuit Design. Toshiba also reported data on a microwave circuit design system, called FIRE, built with an internally developed tool called Debut. FIRE captures the design process for highly parametric design problems. The system runs on a workstation, is C-based, and interfaces with microwave circuit simulators and a mechanical CAD system. The primary benefits of the system are speed-up of problem solving and accumulation of design knowledge.
A fault diagnosis system developed for Kyushu Electric Company is representative, and is in routine use by the utility.
Toshiba also reported on a diagnostic system for a subway station facility, called SMART-7, which was built for the Tokyo Eidan 7th line. The system was built with a diagnostic knowledge acquisition support tool called DiKAST. SMART-7 is implemented as a support module that detects malfunctions in the air conditioning facilities. The system contains 1600 frames, and runs on a workstation. It was built by three system engineers in three months.
Electric Subassembly. Another expert system is used for placing electronic components on printed circuit boards. The knowledge base consists of about 70 rules and 8500 functions, and was built on top of the ASIREX tool. The ES is integrated with a PCB CAD tool called BoardMATE, a commercial product developed by Toshiba. The system took three years to develop, with an estimated labor cost of three man-years. The system has sped up problem solving by a factor of 10.
DSS. A small knowledge system (110 rules, 32K lines of C code) that Toshiba sells is MARKETS-I, a decision support system to determine the suitability of opening a convenience store at a particular site. Estimation accuracy is improved with the use of this system.
Banking. ESCORT is a banking operations advisor system that is used in Mitsui Bank. It plans the most appropriate procedure to get the computer banking system back on line following an accident. The system has about 250 rules and 900 frames, and was built using a LISP-based expert system shell called ExPearls. The GUI was written in C. The system runs on the AS3000 workstation.
Software Engineering. In the area of software engineering, Toshiba has developed an automatic programming system for sequence control. This system generates a control program for a steel plant from high-level specifications. It analyzes and refines the specification, generates code, and retrieves program modules. This is a fairly large system: 2,900 frames, 320 rules, and a library of 190 program modules. It was written in LISP, using an internally developed frame-based knowledge representation language with object oriented facilities. Twenty person-years went into its development, over a four-year span. The system has resulted in cost reduction and an improvement in the quality of the sequence control program designs. Test and verification are performed manually.
Reasoning Methodologies. One of the most advanced applications that was described to JTEC combines model-based and heuristic reasoning. The system is used for control of a manufacturing or processing plant (Suzuki, Sueda et al. 1990). The shallow reasoner uses knowledge in the form of a heuristic control sequence (see Figure 2.10). When unforeseen events occur, for which the shallow knowledge is inadequate, the system can resort to deep knowledge, in the form of a model of the plant, to reason about an appropriate control sequence. The deep knowledge includes the structure of the plant, the function of the plant devices, causal relations among plant components and principles of operation. The system combines several advanced technologies: model-based reasoning, knowledge compilation, and qualitative reasoning.
The following approach is used: If the shallow knowledge is insufficient, go to the deep model. In the deep model, (1) use the causal model to deduce the abnormality; (2) find the operations that would bring the plant to a desired state; (3) find the conditions under which an operation should be performed; (4) simulate to test the hypotheses (the simulator uses qualitative reasoning and fuzzy control techniques). The knowledge estimator checks to see if the simulation indicates any unforeseen side effects. If the answer is yes, then the system determines what supplementary operations should be performed. If the simulation result is satisfactory, the knowledge is stored in rule form for future use. This process is called knowledge compilation.
Figure 2.10. A Plant Control System Using Deep and Shallow Knowledge
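The control loop described above can be summarized in a few lines of Python; the function names and the stand-in plant model below are invented for illustration and merely approximate the model-based components shown in Figure 2.10.

    # Sketch of the deep/shallow control strategy described above. Everything
    # other than the cached rule table is a hypothetical stand-in.
    shallow_rules = {}   # symptom -> control sequence, grows via knowledge compilation

    def deep_reasoner(symptom):
        # Stand-in for causal-model reasoning and operation search; returns a
        # control sequence believed to restore the plant to a desired state.
        return ["isolate_section", "open_bypass", "restart_pump"]

    def simulate_ok(sequence):
        # Stand-in for the qualitative/fuzzy simulator and the knowledge estimator.
        return True   # assume no unforeseen side effects in this toy example

    def control(symptom):
        if symptom in shallow_rules:                 # shallow knowledge suffices
            return shallow_rules[symptom]
        sequence = deep_reasoner(symptom)            # fall back on the deep model
        if not simulate_ok(sequence):
            sequence = sequence + ["supplementary_operation"]
        shallow_rules[symptom] = sequence            # knowledge compilation: cache as a rule
        return sequence

    print(control("low_drum_level"))   # derived from the deep model, then compiled
    print(control("low_drum_level"))   # second call is answered by the new shallow rule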
The techniques of model-based reasoning and knowledge compilation have also been employed in an innovative system, called CAD-PC/AI, for automatically generating sequence control programs for programmable controllers. (For further details see Mizutani, Nakayama et al. 1992).
Assessment. Toshiba systems do not now use multiple sources of expertise, but they are trying to do so in their newer systems. Many ESs are implemented with a combination of a shell/tool plus a programming language such as C or LISP. The company has several training courses, ranging from a one-day basic course, to a two- to three-week application development course, to a multi-week advanced topics course. About 10 percent of research funds go into training. An important element of Toshiba methodology is to use task-specific shells, such as PROKAST or DiKAST.
ESs selected for implementation are chosen by a systems engineer or researcher. This technology is used only when conventional DP doesn't work. The prespecified selection criteria are performance and practical value. An economic justification is also sought. Usually the same people are used in all phases of application selection, development, insertion into the operational activity, maintenance and redesign.
Toshiba's largest project to date is a 5,000 rule system for diagnosis and control of an electrical power generator.
NEC
NEC has developed about 1,000 ES applications, of which 10 percent are in routine operation. The major task areas are diagnosis, scheduling, design, and software development aids. NEC's biggest success is the crew scheduling system, COSMOS/AI, developed with Japan Air Lines, discussed previously. Other applications include a software debugging advisor; FUSION, an LSI logic design tool; and a system called SOFTEX for synthesizing C/C++ programs from specifications (represented as state transition tables and/or flowcharts). SOFTEX is a 300-rule system built with EXCORE (see Chapter 3), developed by professional programmers and domain experts. The system is still about six months to one year from routine use. In order to make the system appeal to programmers, it has been necessary to incorporate in SOFTEX the functionality to enable customization, so that the generated program can fit each programmer's style.
Assessment. NEC admits to some unsuccessful projects, and attributes the lack of success to a number of reasons, including the knowledge acquisition bottleneck, the difficulty of integration with existing systems, and knowledge base maintenance. Unfortunately, these problems tended to get detected late rather than early.
Future ES applications at NEC are expected to employ technologies such as model-based diagnosis, case-based reasoning for scheduling and for software synthesis, and combining ES methods with algorithmic methods (model-based reasoning is one example).
NTT
NTT currently has 36 expert systems under development. Half of these systems perform diagnosis on various components of a communications system. Their most successful ES performs failure diagnosis and support of crossbar switching equipment. A task that typically takes four hours has been reduced to five minutes using the system. However, the main motivations for developing the system were the planned phase-out of crossbars and the need to avoid training new people on an obsolete device. Thus, the expert system's main value is in capturing and preserving expertise.
Nippon Steel
Nippon Steel has developed 100-130 expert systems for internal use (very few for external use). About 30 percent of these are now in routine use. Although most of the current applications are diagnostic/troubleshooting systems, the fastest growing area is in planning and scheduling.
Diagnostic/Process Control. Nippon Steel selected three representative applications to present to the JTEC panel, two of them diagnostic and one for process control. The first is a system for process diagnosis, used in the
The second representative system provides supervision of blast furnace control. This is a large system, with 5,000-6,000 production rules. It was built on top of
Design. The third system is a design expert system for designing shaped beams. This is a large system, containing 3000 production rules and 500,000 lines of code, in LISP, FORTRAN and C. The system was built using ART (from Inference Corp.) and runs on networked Sun workstations. Twenty technical people (4 domain experts, 16 knowledge engineers) developed the system over an 18 month period. The principal payback is in reduction of the design cycle time by 85 percent and an increase in design accuracy of 30 percent. The estimated economic return is $200,000 annually. The system is felt to be too expensive, requiring a copy of ART at each new site.
Assessment. When developing systems that use multiple sources of knowledge (experts), the people at Nippon Steel have adopted a structured development method, much the same as is used in conventional software development, which they feel is necessary to avoid confusion. For diagnostic systems, they use their in-house tool, ESTO, which is designed to accommodate incomplete and inconsistent knowledge from multiple knowledge sources.
Nearly all systems are integrated with other systems, e.g., data processing, and Nippon Steel is working to establish an inter-factory LAN to facilitate this integration. Of 28 systems developed within the past two years, 60 percent can be characterized as using a combination of a rule-based inference engine plus added code written in a conventional language, typically C. A few systems integrate rule-based, fuzzy and neural network methods. Commercial tools were found to be inadequate in the way they permit access to external functions written in conventional languages, which motivated Nippon Steel to develop its own tools. Problems with current technology include slow execution speed for large-scale systems, high cost in time and effort of knowledge maintenance, lack of transparency of the inference process, tedious integration with existing software, and the general problem of knowledge acquisition.
The company cited many reasons for selecting an expert system application, among which are to acquire know-how, to capture and distribute expertise, to improve revenues and to reduce costs.
Most of the systems that have been developed are small (under 100 rules). It was found that the time to develop large projects increases more than linearly with the number of rules. The in-house consultants therefore advise developers to keep their knowledge bases to within 300 rules and, if more are needed, to segment the KB into modules of no more than 300 rules each.
Looking ahead several years, Nippon Steel envisions new expert systems that perform planning and scheduling over multiple factories or production lines. Future diagnostic and process control systems will employ model-based and case-based methods for more in-depth problem description and for recovery following diagnosis, with a capability for closed-loop control. Future planning/scheduling systems will be fully automatic and able to plan with multiple objectives. Future tools (infrastructure) will require knowledge engineers, be task-specific within a general problem solving paradigm, use a standard knowledge representation, and have a better user interface.
NKK
NKK has 25 ESs in routine operation, and five more in the field testing stage. Of the 37 systems that have been built or are in some stage of development, 16 combine the functions of advising, process control and diagnosis; 20 combine the functions of advising, planning/scheduling, and management integration aid. The two major applications are the blast furnace expert system discussed earlier in this chapter and a steelmaking scheduling system (Tsunozaki, Takekoshi et al. 1987; Takekoshi, Aoki et al. 1989).
All of the fully developed systems are integrated with conventional systems, and also use some high-level language (LISP, FORTRAN, PL-1) in addition to the ES shell (ESHELL for the blast furnace, KT for the planning/scheduling systems).
Rather than emphasize training in knowledge engineering and expert system development, NKK has chosen to train its systems engineers in systems analysis and modeling, which are more important skills for total system development. Expert systems techniques in themselves are of relatively small significance, in NKK's view. On the other hand, the company has developed expert system design tools, used in the Engineering and Construction Division, which embody a methodology for developing ESs. These tools, called NX-7 and NX-8, run on Xerox LISP machines and Sun workstations, and have been applied in developing ESs for operations support of refuse incinerators.
NKK often introduces an ES at the same time as it refurbishes its entire computer system (which itself may be just a part of a larger renewal project), making it difficult to evaluate the impact of the introduction of the ES. However, ESs are introduced only when conventional programming techniques fail to solve the problem at hand.
Regarding future development, NKK sees more use of in-the-loop control, moving from mainframes to engineering workstations and providing intelligent assistance on more advanced tasks of engineering and management. The company sees several problems with current technology: AI tools that are difficult to learn and use; relatively high difficulty in system maintenance; inadequate processing speed; the need to obtain knowledge automatically from data; and the need to solve problems (e.g., scheduling) by using previous cases.
Mitsubishi Electric
The JTEC team visited the Industrial Electronics and Systems Lab (IESL), a small group especially focused on power industry (electrical) problems. Thus we saw a very small part of the total ES activity at Mitsubishi Electric.
Mitsubishi Electric's single most successful application has been the ES for elevator group control, discussed earlier. Another success story is a fuzzy logic control system for metal machining, which became part of an electron-beam cutting machine that began selling three or four years ago.
Diagnosis. IESL has built three systems for internal use for finance, diagnosis, and planning (all are prototypes). The diagnosis system employs qualitative process modeling to determine problems with a boiler system. It started out as a 200-rule system, but when implemented with DASH is only 70-80 rules (some component knowledge is reusable). The system is fielded at Kansai Electric Power but is not in routine use yet. It reduces the time to make a diagnosis from three to four minutes down to one minute.
Energy Management. IESL is most interested in ES for energy management of electric power distribution networks. It envisions the technology used for diagnosis, restorative operation, reliability assessment, dispatching control, and operations planning. There are currently three energy management systems (EMS) (one from Mitsubishi Electric) in practical use in Japan.
Assessment. Mitsubishi finds that Japanese power companies are very eager to use ES technology. They were led to believe that
IESL has a great deal of experience with network diagnostic problems, so it has had few failures in this area. In general, where an ES is built as an integral part of a larger system, the failure rate is very low. Mitsubishi considered integration from the very beginning and thus did not experience problems of integrating stand-alone ESs after they were built.
We were given a breakdown of types of ES applications in the power industry world-wide: diagnosis, 25 percent; operations, 25 percent; monitoring, 15 percent; control, 15 percent; planning 10 percent; others (simulators, maintenance, design, system analysis), 10 percent.
Tokyo Electric Power Company (TEPCO)
TEPCO has developed 30 systems, of which 11 are in routine use. The application domains for these 11 include design, consultation, control, prediction, planning and scheduling, fault location, hot-line service, and computer operations. Three systems are in field test, 14 are in the prototyping stage, and two are in the feasibility stage.
Forecasting. The most successful system is the daily maximum load forecasting system. Measures of success have been user satisfaction, a reduction in the absolute forecasting error rate from 2.2 percent to 1.5 percent, and a four-fold speedup in forecast generation. The system is actually quite small, with only about 100 rules, and was built using Toshiba's TDES3 tool. It runs on a Toshiba minicomputer and also on Toshiba workstations (Sun workstation compatible). The forecasting system is one component of, and integrated with, a much larger load forecasting system called ELDAC. The system was developed at a cost of approximately $2 million over a period of about 20 months. Two researchers and two experts at TEPCO designed the system and three system engineers from Toshiba built it. It is now used routinely by load dispatchers. Although the ROI is difficult to estimate, the use of the system precludes the need for a standby generator at a power station.
Assessment. About 50 percent of TEPCO's ES projects advance from the prototype stage to an operational system. The company's AI Technology Department is actively pursuing fuzzy logic, neural networks, genetic algorithms and computer graphics in addition to expert systems. Our TEPCO hosts made it clear that to them 'AI' means not only Artificial Intelligence but also Advanced Information Technology.
Obayashi Corporation
Obayashi has built 25 expert systems for internal use and one for external use. Of these, six are in routine operation and nine more are at the field testing stage. Most of the systems (14) are classified as advisory systems.
Direction Control. Obayashi's most successful system is an automatic direction control system for shield tunneling (a method of tunnel construction first used in England).
Assessment. Obayashi representatives report that 70 percent of ES projects that are started get as far as a prototype, and 30 percent actually get to an operational system. Using existing tools, they can envision building systems up to the size of a few thousand rules. Future systems planned by the corporation include other automatic control systems and intelligent CAD. The primary perceived problems with present technology are: knowledge acquisition; constructing design systems; incorporating model-based and case-based reasoning; and machine learning.
OBSERVATIONS AND CONCLUSIONS
It should first be noted that this JTEC panel's sample set for KB systems applications in
We were also impressed by the number of high impact applications that we found. In addition to the ones detailed above, almost every site we visited seemed to have at least one KB system that had made a significant change in an important aspect of the company's business.
From a technology standpoint, the JTEC team saw very little that differed from first-generation KB systems applications in the United States.
Chapter 3
TOOLS AND INFRASTRUCTURE FOR KNOWLEDGE-BASED SYSTEMS
H. Penny Nii
INTRODUCTION
In this chapter we focus on tools for building expert systems, and on the associated R&D infrastructure. The types of tools currently on the market are indicative of the current technology available to the end user community, while tools under development can provide clues about the kinds of applications one can expect in the future. The level of activity of tool development also indicates the value being placed on the future of this technology. We get at this, in part, by profiling some key ES development tools and leading-edge players in the market.
OBSERVATIONS AND CONCLUSIONS
In general, Japanese tool vendors are optimistic about ES technology.
The majority of Japanese ES tools are developed, sold, and applied by computer companies. They have the resources to conduct research, develop new products, and persist in the business.
Because of the close relationship between industrial research, system development, and sales personnel in Japanese companies, solutions to customer problems are identified cooperatively, and then quickly find their way into ES tools.
Many Japanese tools under development are at about the same level of sophistication as American tools. Although many new ideas originate in
The predominant application areas have been equipment diagnosis, planning and scheduling, design, and process control. As in the
All the major Japanese computer companies conduct research in knowledge-based systems. Most of the research is in applying or integrating new techniques to customer problems. The industrial research laboratories serve as technology transfer agents for both imported and internally developed techniques and methodologies. At the same time, as in consumer products, Japanese companies are spending research money on improving and refining ideas and products.
On the negative side, the Japanese suffer from a proliferation of tools that reflects their computing industry: (1) there are several large computer manufacturers whose hardware products are incompatible; (2) customer loyalty keeps end users from shopping around; (3) customers tend to desire custom systems; and (4) there does not appear to be any movement towards standardization. However, with the move toward open systems architectures, these patterns may be significantly altered, and one or two dominant players may appear.
Chapter 4
ADVANCED KNOWLEDGE-BASED SYSTEMS RESEARCH
Edward Feigenbaum
Peter E. Friedland
UNIVERSITY RESEARCH
Although the best research in
An analysis of the most recent three IJCAIs (Australia in 1991, Detroit in 1989, and Italy in 1987) reveals the following results. There were 37 Japanese single- or co-authored publications over that time span, compared to 387 American publications. Of those publications, 18 came from academia (nine from
While in
Our host at
Prof. Nishida's particular specialty is fundamental work on the mix of qualitative and quantitative modeling of dynamic systems. He represents systems in the form of differential equations and then symbolically represents change in those systems in the form of flow diagrams in a phase space. He has developed a flow grammar to allow representation of a variety of complex processes and a simplification method to allow prediction of some forms of device and process behavior under change (IJCAI-91 and AAAI-87 papers). He also believes that large systems can be decomposed into smaller, more understandable systems. His goal is to build what he calls a 'knowledgeable community,' a library of component modules that can be combined to express the behavior of large systems.
The visit to Professor Nishida's laboratory confirmed several important observations on the structure of traditional academic research in Japan.
Our visit to
Professor Mizoguchi's laboratory is conducting research in four areas of knowledge-based systems work. The first is in the role of deep knowledge in next-generation expert systems. His focus is on knowledge compilation -- the automatic generation of shallow knowledge (like experiential diagnosis rules) from deep knowledge (i.e., structure-function models of complex devices). His group is building and testing a system called KCII in the domain of automobile diagnosis.
The second area of research is knowledge acquisition. Prof. Mizoguchi's goal is to build an interviewing system capable of automatically constructing an expert system for a particular task with no intervention of a knowledge engineer. A system called MULTIS (Multi-task Interview System) has been built which attempts to relate a proposed problem-solving task to prior tasks in a case library.
The third area of research is large-scale, re-usable and shareable knowledge bases. Prof. Mizoguchi's laboratory is conducting fundamental work into building ontologies for both tasks and domains. To date, the work seems mainly theoretical, although Prof. Mizoguchi authored a report for the Advanced Software Technology and Mechatronics Research Institute of Kyoto detailing both a theoretical and empirical research plan for the area. He told us that Prof. Okuno of
The final area of research discussed was intelligent tutoring systems. Prof. Mizoguchi's laboratory has designed a formal student modeling language (SMDL), based on PROLOG, but with a four-valued logic (true, false, unknown, and fail). Several prototype applications have been built, but none seemed to be in the formal testing stage at the time of our visit.
RCAST (Research Center for Advanced Science and Technology, University of Tokyo)
Interdisciplinary studies
International cooperation
Mobility and flexibility of staff and research areas
Openness to the public and to other organizations
All of these foci are regarded as weaknesses of the traditional Japanese university system.
RCAST has five focus areas for interdisciplinary studies:
Advanced materials
Advanced devices
Advanced systems
Knowledge processing and transfer
Socio-technological systems
The group the JTEC team visited is in the fourth of these areas and is headed by Professor Setsuo Ohsuga and Associate Professor Koichi Hori. Professor Ohsuga, a former president of the Japanese AI society, is the director of RCAST. The lab has 18 graduate students, five of whom are non-Japanese, and five research staff members, two of whom are foreign visiting scholars. Much of the work in this lab is conducted in conjunction with industry consortia. The lab appeared rich in computer equipment: in addition to a dozen or more UNIX workstations of both
Professor Ohsuga's research interests have centered on knowledge representation for many years. The current work is particularly focused on knowledge representation for intelligent computer aided design applications across a variety of domains. The lab's research covers the following areas:
Knowledge representation
The integration of knowledge bases and databases
Intelligent CAD systems:
Knowledge-based design systems for feedback control of industrial plants
Design systems for aircraft wings
Chemical knowledge information processing systems
CASE; emphasis on specification development and conceptual modeling
Articulation problems
This last area is the special concern of Professor Hori and deals with the problem of transforming vague conceptualizations into representations that can be manipulated by a knowledge-based system.
The central tool of Professor Ohsuga's group is a representation and reasoning tool called KAUS (Knowledge Acquisition and Utilization System), which has been under development since the mid-1980s. KAUS is a logic-based system and is an implementation of a logical system called Multi-Level Logic (MLL) developed by Professor Ohsuga. This is a many-sorted first-order logic, in which data structures are the terms of the logic. Data structures are formally developed from axiomatic set theory. KAUS has a meta level for control of the base level reasoner. One component of this involves the use of procedural attachments (in much the same spirit as Weyhrauch's FOL). Certain predicates, called procedural type atoms (PTAs), are treated specially by the logic; an expression involving such a predicate is evaluated by fetching a procedure associated with the PTA and applying that procedure to the arguments. The returned result is treated as the logical value of the expression. One particularly useful PTA is EXEC, which calls the UNIX EXEC routine on its arguments; this makes any procedure accessible through UNIX a part of KAUS. This mechanism is used to access conventional database systems, which essentially transforms any normal database into a deductive database.
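The procedural-attachment idea can be imitated in a few lines. The sketch below is written in Python with invented predicate names and is not KAUS itself; it treats certain predicates as procedural type atoms whose value is computed by an attached procedure, including one that shells out to the operating system in the spirit of the EXEC PTA (assuming a UNIX-like environment).

    import subprocess

    # Sketch of procedural type atoms (PTAs): predicates whose value is computed
    # by an attached procedure rather than looked up in the knowledge base. The
    # predicate names here are invented.
    def exec_pta(*args):
        # Run an external command and return its output as the "logical value".
        return subprocess.run(list(args), capture_output=True, text=True).stdout.strip()

    ptas = {
        "GREATER": lambda x, y: x > y,
        "EXEC": exec_pta,
    }

    def evaluate(expression):
        predicate, *args = expression
        if predicate in ptas:                  # procedural attachment
            return ptas[predicate](*args)
        raise LookupError(f"{predicate} must be resolved by ordinary deduction")

    print(evaluate(("GREATER", 5, 3)))                     # True, computed procedurally
    print(evaluate(("EXEC", "echo", "hello from UNIX")))   # output of the external command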
KAUS is used in most of the research projects conducted in the lab. One project of note has been a collaboration with chemists around Japan.
The following is a list of ongoing research projects in the lab:
Problem model design/transformation-based program development
Knowledge discovery and management in integrated use of knowledge bases and databases
A method for acquiring problem decomposition strategy from traces
Framework for connecting several knowledge-based systems under a distributed environment
Primitive-based representation of adjectives and figurative language understanding
Structured connectionist studies in natural language processing
Interactive music generation system
Study of the development of neural networks in the context of artificial life
An approach to aid large-scale design problems by computer
Aiding FEM preprocessing with knowledge engineering
Supporting the development of intelligent CAD systems on KAUS
A method to assist the acquisition and expression of subjective concepts and its application to design problems
A study of computer aided thinking -- mapping text objects in metric spaces
AIST,
An additional item of potential importance to the field of knowledge-based systems research was briefly mentioned during our visit to Prof. Nishida's laboratory. This is the creation of two new graduate schools, called AIST and JAIST, Hokuriku with campuses in
As stated above, it appears that current Japanese basic research efforts in AI could be characterized as good quality, but small in number. The total IJCAI output of the entire Japanese community for the last three meetings was less than that of CMU or Stanford over the same period. RCAST was the first attempt to significantly expand the scope, and JAIST is a much more ambitious attempt to do the same.
The sponsors of this JTEC study requested that the panel investigate AI-related research activities and achievements at major national laboratories. To that end, the panel visited the Electronic Dictionary Research Project; ICOT, which is the laboratory of the Japanese Fifth Generation Computer Systems (FGCS) Project; and LIFE, which is the Laboratory for International Fuzzy Engineering. The panel also looked into a new national project, called Real World Computing, which is a successor to the FGCS project.
ELECTRONIC DICTIONARY RESEARCH (EDR) PROJECT
History and Goals
EDR was spun out from ICOT in 1986 with a nine-year charter to develop a large-scale, practical electronic dictionary system that could be used in support of a variety of natural language tasks, such as translation between English and Japanese, natural language understanding and generation, speech processing, and so forth.
EDR was established as a consortium in collaboration with eight member corporations, each of which supports a local EDR research group. About 70 percent of EDR funding comes from the
The EDR conception of an electronic dictionary is quite distinct from the conventional online dictionaries that are now common. These latter systems are designed as online reference books for use by humans. Typically, they contain the exact textual contents of a conventional printed dictionary stored on magnetic disk or CD-ROM. Usage is analogous to a conventional dictionary, except that the electronic version may have hypertext-like features for more rapid and convenient browsing.
EDR's primary goal, in part, is to capture: 'all the information a computer requires for a thorough understanding of natural language' (JEDRI, 1990). This includes: the meanings (concepts) of words; the knowledge needed for the computer to understand the concepts; and the information needed to support morphological analysis and generation, syntactic analysis and generation, and semantic analysis. In addition, information about word co-occurrences and listings of equivalent words in other languages are necessary. In short, the system is intended to be a very large but shallow knowledge base about words and their meanings.
The goals for the EDR research are to produce a set of electronic dictionaries with broad coverage of linguistic knowledge. This information is intended to be neutral in that it should not be biased towards any particular natural language processing theory or application; extensive in its coverage of common general purpose words and of words from a corpus of technical literature; and comprehensive in its ability to support all stages of linguistic processing, such as morphological, syntactic and semantic processing, the selection of natural wording, and the selection of equivalent words in other languages.
Accomplishments
Component Dictionaries. EDR is well on the way to completing a set of component dictionaries which collectively form the EDR product. These include word dictionaries for English and Japanese, the Concept Dictionary, co-occurrence dictionaries for English and Japanese, and two bilingual dictionaries: English to Japanese and Japanese to English. Each of these is extremely large scale, as follows:
Word Dictionaries
  General Vocabulary:
    English: 200,000 words
    Japanese: 200,000 words
  Technical Terminology:
    English: 100,000 words
    Japanese: 100,000 words
Concept Dictionary: 400,000 concepts
  Classification
  Descriptions
Co-Occurrence Dictionaries
  English: 300,000 words
  Japanese: 300,000 words
Bilingual Dictionaries
  English-Japanese: 300,000 words
  Japanese-English: 300,000 words
The several component dictionaries serve distinct roles. All semantic information is captured in the Concept Dictionary, which is surface-language independent. The Concept Dictionary is a very large semantic network capturing pragmatically useful concept descriptions. (We will return to this later). The word dictionaries capture surface syntactic information (e.g., pronunciation, inflection) peculiar to each surface language. Each word entry consists of the 'headword' itself, grammatical information, and a pointer to the appropriate concept in the Concept Dictionary. The co-occurrence dictionaries capture information on appropriate word combinations. Each entry consists of a pair of words coupled by a co-occurrence relation (there are several types of co-occurrence relations). Strength of a co-occurrence relation is shown by the certainty factor, whose value ranges from 0 to 255. Zero means that the words cannot be used together. This information is used to determine that 'he drives a car' is allowable, that 'he drives a unicycle' isn't and that 'he rides a unicycle' is. This can be used to determine the appropriate translation of a word. For example, the Japanese word naosu corresponds to several English words, e.g., 'modify,' 'correct,' 'update,' and 'mend.' However if the object of naosu is 'error,' then the appropriate English equivalent is 'correct.' Finally, the bilingual dictionaries provide surface information on the choice of equivalent words in the target language as well as information on correspondence through entries in the concept dictionary.
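The flavor of the co-occurrence mechanism can be conveyed with a small sketch. The entries and certainty values below are invented (only the 'drive a car'/'unicycle' and naosu/'error' examples come from the text), following the 0-255 scheme described above.

    # Toy sketch of co-occurrence lookup and translation selection.
    cooccurrence = {
        ("drive", "car"): 220,
        ("drive", "unicycle"): 0,        # 0 means the words cannot be used together
        ("ride", "unicycle"): 200,
    }

    # Invented object-to-equivalent table for the Japanese verb 'naosu'.
    naosu_equivalents = {"error": "correct", "clothes": "mend", "schedule": "update"}

    def allowed(verb, obj, threshold=1):
        return cooccurrence.get((verb, obj), 0) >= threshold

    def translate_naosu(obj):
        # Choose the English equivalent of 'naosu' from the object it co-occurs with.
        return naosu_equivalents.get(obj, "fix")

    print(allowed("drive", "car"))        # True
    print(allowed("drive", "unicycle"))   # False
    print(translate_naosu("error"))       # 'correct'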
From the point of view of KBS, the Concept Dictionary is the most interesting component of the EDR project. As mentioned above, this is a very large semantic network capturing roughly 400,000 word meanings. What is most impressive about this is the sheer breadth of coverage. Other than the Cyc project at MCC, there is simply no other project anywhere in the world which has attempted to catalogue so much knowledge.
The Concept Dictionary has taken the approach of trying to capture pragmatically useful definitions of concepts. This differs from much of the work in the United States.
EDR describes the Concept Dictionary as a 'hyper-semantic network,' by which is meant that a chunk of the semantic network may be treated as a single node that enters into relationships with other nodes. Entries in the Concept Dictionary (for binding relations) are triples of two concepts (nodes) joined by a relationship (arc). For example, '<eat> - agent --> <bird>' says that birds can be the agents of an eating action (i.e., birds eat). The relationship can be annotated with a certainty factor; a 0 certainty factor plays the role of negation. For restrictive relations the entry consists of a pair of a concept and an attribute (with an optional certainty factor); e.g., 'walking' is represented by '<walk> -- progress /1 -->.'
The 'hyper' part of this hyper-semantic network is the ability of any piece of network to be aggregated and treated as a single node. For example, 'I believe birds eat' is represented by building the network for 'birds eat,' then treating this as a node which enters into an object relationship with 'believe.' Quantification is indicated by use of the 'SOME' and 'ALL' attributes; the scope of the quantification is indicated by nesting of boxes.
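A minimal rendering of these ideas (invented data structures, not EDR's internal format) is shown below: relations are stored as concept-relation-concept triples with certainty factors, and any set of triples can be wrapped up as a single node so that it can itself enter into further relations, as in 'I believe birds eat.'

    # Toy sketch of a hyper-semantic network: triples with certainty factors, plus
    # aggregation of a subnetwork into a node that can enter into new relations.
    class Node:
        def __init__(self, label, triples=()):
            self.label = label
            self.triples = list(triples)   # non-empty for aggregated subnetworks

    def triple(head, relation, tail, certainty=255):
        return (head, relation, tail, certainty)

    bird, eat, i, believe = Node("bird"), Node("eat"), Node("I"), Node("believe")

    birds_eat = [triple(eat, "agent", bird)]             # <eat> -agent-> <bird>
    birds_eat_node = Node("birds-eat", birds_eat)        # aggregate: treat it as one node

    belief = triple(believe, "object", birds_eat_node)   # 'I believe birds eat'
    belief_agent = triple(believe, "agent", i)

    for head, rel, tail, cf in birds_eat + [belief, belief_agent]:
        print(f"<{head.label}> -{rel}/{cf}-> <{tail.label}>")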
EDR has identified a set of relations which seem to be sufficient; these are shown below. In our discussions, the EDR people seemed to be relatively convinced that they had reached closure with the current set.
Relation Labels
Agent: Subject of action
Object: Object affected by action or change
Manner: Way of action or change
Implement: Tool/means of action
Material: Material or component
Time: Time event occurs
Time-from: Time event begins
Time-to: Time event ends
Duration: Duration of event
Location: Objective place of action
Place: Place where event occurs
Source: Initial position of subject or object of event
Goal: Final position of subject or object of event
Scene: Objective range of action
Condition: Conditional relation of event/fact
Co-occurrence: Simultaneous relation of event/fact
Sequence: Sequential relation of event/fact
Quantity: Quantity
Number: Number
Semantic Relation Labels
Part-of: Part-whole relation
Equal: Equivalence relation
Similar: Synonymous relation
Kind-of: Super concept relation
This framework for representation is similar to semantic network formalisms used in the U.S.; it is most similar to those which are developed for language understanding tasks (e.g., Quillian's work in the late 1960s; OWL; SNePS; and Conceptual Dependency). However, the EDR framework seems to be a bit richer and more mature than most of this research -- particularly so, since this particular thread of research seems to have waned in the U.S.
EDR has tried to develop a systematic approach to the construction of the Concept Dictionary. This involves a layering of the inheritance lattice of concepts as well as a layering of the network structures which define concepts.
Use of Hierarchies. As in most knowledge representation work, the use of Kind Of (IsA) hierarchies plays an important role, capturing commonalities and thereby compressing storage. In the EDR Concept Dictionary a top-level division is made between 'Object' and 'Non-Object' concepts; the second category is further divided into 'Event' and 'Feature.' Under this top-level classification, EDR has tried to structure the inheritance hierarchy into three layers. In the upper section, 'Event' is broken into 'Movement,' 'Action,' and 'Change,' which are further refined into 'Physical-Movement' and 'Movement-Of-Information.'
The refinement of these concepts is governed by the way they bind with the subdivision of the 'Object' concept. A concept is further divided if the sub-concepts can enter into different relations with subconcepts of the 'Object' hierarchy. The middle section of the hierarchy is formed by multiple inheritance from concepts in the first layer. Again, concepts are subdivided to the degree necessary to correspond to distinct relationships with the subconcepts of 'Object.'
The third section consists of individual concepts. The subdivision of the 'Object' hierarchy is constructed in the same manner, guided by the distinctions made in the 'Non-Object' part of the network. The second layer is built by multiple inheritance from the first layer and bottoms out at concepts that need no further division to distinguish the binding relationships with parts of the Non-Object network (e.g., bird). The third layer consists specifically of instances of concepts in the second section (e.g., Swallow, Robin).
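A compressed picture of this layering is sketched below. Only the labels named in the text are taken from EDR; the middle-layer and leaf entries are illustrative placeholders, and the toy walks a single-inheritance chain where the real dictionary uses multiple inheritance.

    # Toy sketch of the layered Kind-Of hierarchy described above.
    hierarchy = {
        "Concept": ["Object", "Non-Object"],
        "Non-Object": ["Event", "Feature"],
        "Event": ["Movement", "Action", "Change"],
        "Movement": ["Physical-Movement", "Movement-Of-Information"],
        "Object": ["Living-Thing"],            # illustrative middle layer
        "Living-Thing": ["bird"],
        "bird": ["Swallow", "Robin"],          # third layer: instances
    }

    def ancestors(concept, hierarchy):
        # Walk upward through the Kind-Of links (single inheritance in this toy).
        parents = {child: parent for parent, children in hierarchy.items() for child in children}
        chain = []
        while concept in parents:
            concept = parents[concept]
            chain.append(concept)
        return chain

    print(ancestors("Swallow", hierarchy))   # ['bird', 'Living-Thing', 'Object', 'Concept']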
Network Structures and Methods. The definition section of the network has six layers:
Section A contains concepts corresponding to entries in the word dictionaries; these 'define themselves'.
Section B contains semantic relations (part-of, kind-of, etc.) relating the concepts in Section A.
Section C contains concept descriptions describing the relations between defined concepts. These consist of non-semantic relations between entries of Section A (e.g., <a bird flies> [<to fly in space> - agent -> <bird>]).
Section D contains semantic relations between entries of Sections A and C. (For example, 'carouse' might be a concept in section A, 'to drink liquor' a description in section C in the form <drink> -object-> <liquor> and the kind-of link is in section D).
Section E contains compound concepts formed from two or more concepts from entries in section C.
Section F contains semantic relations between A, C, and E.
EDR's minimal goal is to finish sections A through C; they believe that this is achievable within their time-scale and will also be pragmatically useful.
EDR has adopted a pragmatic, example-driven approach to construction of the whole dictionary system. Although they have constructed computer tools to help in the process, people are very much in the loop. The work began with the selection and description of the dictionary contents of 170,000 vocabulary items in each language. Each of these is entered as a 'Headword' in the word dictionary. Next, the corresponding word in the other language is determined and entered in the word dictionary if not present. A bilingual correspondence entry is then made in the bilingual dictionaries. A corpus of 20 million sentences was collected from newspapers, encyclopedias, textbooks and reference books and a keyword in context (KWIC) database was created indicating each occurrence of each word in the corpus. Each word is checked to see whether it is already present in the word dictionaries; if not it is entered. Automated tools perform morphological analysis; the results are output on a worksheet for human correction or approval. For each morpheme, the appropriate concept is selected from the list of those associated with the word. Syntactic and semantic analysis is performed by computer on the results of the morphological analysis and output on worksheets. The results are verified by individuals. The relations between the concepts are determined and filled out on worksheets. Once corrected and approved, the morphological information is added to the word dictionaries. The parse trees and extracted concept relations, once approved, are stored away in the EDR Corpus. Co-occurrence information is extracted from the parse trees and entered into the co-occurrence dictionary. Concept information extracted in this process is compared to the existing concept hierarchy and descriptions. New descriptions are entered into the Concept Dictionary.
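The flavor of this corpus-driven loop can be conveyed in a few lines. The sketch below uses a toy English corpus and whitespace tokenization in place of real Japanese morphological analysis: it builds a keyword-in-context (KWIC) index and flags words missing from the word dictionary for human review.

    # Toy sketch of the corpus-driven workflow: build a KWIC index and flag words
    # not yet in the word dictionary. Real morphological analysis is replaced by
    # whitespace tokenization for illustration.
    corpus = [
        "the swallow flies over the river",
        "the robin eats seeds in winter",
    ]
    word_dictionary = {"the", "swallow", "flies", "river", "robin", "eats"}

    kwic = {}
    missing = set()
    for line_no, sentence in enumerate(corpus):
        words = sentence.split()
        for position, word in enumerate(words):
            kwic.setdefault(word, []).append((line_no, position))
            if word not in word_dictionary:
                missing.add(word)        # goes onto a worksheet for human approval

    print("occurrences of 'the':", kwic["the"])
    print("words to review for the dictionary:", sorted(missing))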
Milestones. In January 1991, EDR published the first edition of the dictionary interface. By December 1991, EDR had begun to organize an external evaluation group. In November 1992, the preliminary versions of the Word and Concept Dictionaries were distributed to twelve universities and research institutes, including two American universities, for evaluation. In March of 1993, EDR was scheduled to release for commercial use the second edition of the Word Dictionaries, the first edition of the Concept Dictionary, the first edition of the Bilingual Dictionaries and the first edition of the Co-occurrence Dictionaries. The second edition of the Dictionary Interface was also scheduled for release at this time. EDR plans to offer the dictionaries in commercial form world-wide; the commercial terms are planned to be uniform across national borders, although price has not yet been set. Discussions concerning academic access to the EDR dictionaries in the U.S. are ongoing.
Evaluation
Strengths and Weaknesses. During our visit to EDR, we asked several times what facilities were provided for consistency maintenance or automatic indexing. Our EDR hosts answered that there were a variety of ad hoc tools developed in-house, but that they had no overall approach to these problems. They told us that it was very likely that a concept might appear more than once in the dictionary because there weren't really very good tools for detecting duplications. In general, their approach has been to write simple tools that can identify potential problems and then to rely on humans to resolve the problems. Although they seemed quite apologetic about the weakness of their tools, they also seemed reasonably secure that success can be achieved by the steady application of reasonable effort to the problem. The chief indicator of this is a feeling that closure is being reached, that most words and concepts currently encountered are found in the dictionary. In short, there is a feeling that as the mass of the dictionary grows, the mass itself helps to manage the problem.
In contrast, research in the U.S. has placed a much stronger emphasis on strong theories and tools which automatically index a concept within the semantic network, check it for consistency, identify duplication, etc. For example, a significant sector of the knowledge representation community in the U.S. has spent the last several years investigating how to limit the expressiveness of representation languages in order to guarantee that indexing services can be performed by tractable computations (the clearest examples are in the KL-One family of languages). However, no effort in the U.S. has managed to deal with 'largeness' as an issue. With the exception of Cyc, knowledge bases in the U.S. are typically much smaller than the EDR dictionary system. Also, the theoretical frameworks often preclude even expressing a variety of information which practitioners find necessary.
Comparison with U.S. Research. In the U.S., we know of only three efforts which approach the scale of the EDR effort. The Cyc project is attempting to build a very large system that captures a significant segment of common sense 'consensus reality.' At present the number of named terms in the Cyc system is still significantly smaller than the EDR Concept Dictionary, although the knowledge associated with each term is much deeper. A second project is the 'lexicon collaborative' at the University of Arizona, which is attempting to build a realistic scale lexicon for English natural language processing. To our knowledge this work is not yet as extensive as EDR's. Finally, NIH has sponsored the construction of a medical information representation system. A large part of the information in this system comes from previously existing medical abstract indexing terms, etc. This system is intended to be quite large scale, but specialized to medical information.
In short, we might characterize the large knowledge base community in the U.S. as long on theories about knowledge representation and use of deep knowledge but short on pragmatism: the number of surface terms in its knowledge bases is relatively small. EDR, on the other hand, has no particular theory of knowledge representation, but has built a practical knowledge base with several hundred thousand concepts.
The JTEC team found EDR's pragmatic approach refreshing and exciting. Although U.S. researchers have spent significantly more time on theoretical formulations, they have not yet succeeded in building any general knowledge base of significant size. EDR's pragmatic approach (and ability to enlist a large number of lexicographers in a single national project) has allowed it to amass a significant corpus of concepts with significant coverage of the terms of natural language. The organizing ideas of the EDR dictionary are not particularly innovative; they have been in play since Quillian (mid 1960s) and Schank (late 1970s). While stronger theoretical work and better tools are both necessary and desirable, there is no substitute for breadth of coverage and the hard work necessary to achieve it. EDR has accomplished this breadth and this is an essentially unique achievement. EDR's work is among the most exciting in Japan (or anywhere else).
Next Steps: The Knowledge Archives Project
Background and Organization. Dr. Yokoi, general manager of EDR, mentioned at the time of our visit that he has been working with others to prepare a plan for a new effort to be called the Knowledge Archives Project. At the time of our visit, the only information he had available was extremely preliminary and vague. We have since been sent a more complete draft entitled A Plan for the Knowledge Archives Project, March 1992, with the following affiliations: The Economics Research Institute (ERI); Japan Society for the Promotion of Machine Industry (JPSMI); and Systems Research and Development Institute of Japan (SR&DI). An introductory note says that the project is called NOAH (kNOwledge ArcHives). 'The knowledge archives,' it states, 'is the Noah's ark which will overcome the chaos of the turn of the century and build a new foundation for the 21st century' (ERI 1992).
The goal of the Knowledge Archives Project is to amass very large knowledge bases to serve as the 'common base of knowledge for international and interdisciplinary exchange in research and technology communities.' This will be achieved primarily through the use of textual knowledge sources (i.e., documents), being (semi-)automatically processed into very large scale semantic networks.
Achieving the goal will require the development of a variety of new technologies. As the draft plan states:
The technology in which the acquisition and collection of vast amounts of knowledge are automated (supported); the technology in which knowledge bases are self-organized so that substantial amounts of knowledge can be stored systematically; the technology which supports the creation of new knowledge by using vast amounts of existing knowledge and by developing appropriate and applicable knowledge bases which fulfill the need for various knowledge usage; and the technology which translates and transmits knowledge to promote the interchange and common use of knowledge. In addition, development of a basic knowledge base which can be shared by all applications will be necessary. (ERI 1992)
The proposal is explicitly multi-disciplinary. There is a strong emphasis on multi-media technologies, the use of advanced databases (deductive and object oriented, such as the Quixote system at ICOT), and the use of natural language processing technology (such as that developed at EDR). An even more interesting aspect of the proposal is that there is a call for collaboration with researchers in the humanities and social sciences as well as those working in the media industry.
The proposal calls for a new national project to last for eight years beginning in the Japanese fiscal year 1993 and running through fiscal year 2000. There are three phases to the proposed project: (1) A basic research stage (1993-1994); (2) initial prototype development (1995-1997); and (3) final prototype (1998-2000). The output of the project is not imagined as a completed system, but rather as a usable system and a sound starting point for continuing research. The basic organization will comprise one research center and eight to ten research sites. Staffing will consist of transferred employees from research institutes and participating corporations.
Research Themes. A variety of research themes are associated with the archive project, including:
Knowledge-grasping semantics. It is important to be able to understand meanings, to have computers interact with users at levels deeper than syntax. This theme is stated several times throughout the report and is often mentioned in the context of 'reorganizing information processing systems from the application side.' However, the proposal also notes that 'AI has limited the range of objects and tried to search for their meaning in depth. Now it is important to deal with objects in a broader range but deal with the meaning in a shallow range' (ERI 1992).
The importance of being very large. 'Handling of small amounts of slightly complicated knowledge would not be enough. Computers have to be capable of dealing with very large-scale knowledge and massive information.' Largeness in and of itself is also inadequate. 'Totally new technologies are required to automatically acquire and store massive knowledge as efficiently as possible. The technologies are exactly what the Knowledge Archive Project has to tackle' (ERI 1992). Reference is also made to memory-based (case-based) reasoning, and neural network technologies also may be relevant.
Necessity for diverse knowledge representation media. Humans represent knowledge in a variety of ways: textually, graphically, through images, sounds, etc. 'Representation media for humans will play the leading role in knowledge representation, and those for computers will be considered as media for `computer interface.' Knowledge and software will no longer be represented and programmed for computers, but will be represented and programmed for the combination of humans and computers' (ERI 1992).
Necessity for research based on the ecology of knowledge. 'Knowledge is varied, diversified and comes in many forms. Knowledge is made visible as one document represented by a representation media. The key technology of knowledge processing for the Knowledge Archives automatically generates, edits, transforms, stores, retrieves, and transmits these documents as efficiently as possible' (ERI 1992).
Necessity for a shared environment for the reuse of knowledge. 'The first step is to standardize knowledge representation media [Editor's note: It's significant that media is plural] and make them commonly usable. Logic and logic programming have to be considered as the basis of knowledge representation languages, that is the medium for computers' (ERI 1992). The mechanisms for storing large bodies of knowledge and retrieving knowledge on request form a key part of this agenda. Next-generation databases (as opposed to current AI knowledge bases) are seen as the best initial technology for this goal.
Progress of separate technologies. The goal of the NOAH project requires a variety of technologies which the authors believe have matured enough for the effort to begin. However, a key goal will be to integrate these technologies and to foster their further development. The technologies identified are: natural language processing, knowledge engineering, multimedia, and software engineering.
The accumulation of information and the progress of related technologies. Although Japan has lagged behind the West in online text processing (because of the particular challenges of its language and writing system), a vast amount of information is now becoming available in electronic form, much of it online and accessible through networks. The publishing industry, libraries, and others traditionally involved in the management of large amounts of information are seen both as prospective users of the fruits of the project and as prospective collaborators in its development.
Fruitful results of various projects. The Fifth Generation Computer Project has produced new constraint logic programming languages which are seen as potential knowledge representation languages. The Parallel Inference Machines (PIMs) (or some follow-on) are expected to function as high-performance database machines. EDR is seen as having produced robust natural language processing technology and 'has made natural language the kernel language of knowledge representation media' (ERI 1992). The EDR dictionary is cited as a very large knowledge base of lexical knowledge. Finally, the various machine translation projects around Japan are cited as further examples of natural language processing technology.
What Will Be Done
Although attention is paid to the notion that human knowledge has many representation media, the proposal singles out two media for special attention: natural language, in particular modern Japanese, as the medium to be used by humans, and the system knowledge representation language that will be used internally by the Knowledge Archives system.
Documents are seen as the natural unit of knowledge acquisition. Knowledge documents written in the system knowledge representation language form the knowledge base of the Knowledge Archives. Words, sentences, texts, and stories are examples of the various levels of aggregation which make up knowledge objects. Knowledge objects are mutually related to one another in a semantic network which enables them to be defined and to have attached attributes. Both surface structure and semantic objects are stored in the Knowledge Archives. Surface representations are retained because the semantic objects are not required to capture all the meanings of the surface documents.
Relationships connect knowledge objects. These include: links between surface structure objects, e.g., syntactic relationships and inclusion relationships; semantic links, e.g., case frame links between words, causality links between sentences; correspondence links between surface and semantic objects; equality and similarity links, etc. Corresponding to each type of link will be a variety of inference rules that allow deductions to be drawn from the information. Also, it is a goal to develop mechanisms for learning and self-organization.
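To make this structure concrete, the following is a minimal, purely illustrative sketch (in Python; the class names, link types, and inference rule are hypothetical and not taken from the proposal) of knowledge objects connected by typed links, with one simple inference rule that follows causality links transitively:

    # Illustrative sketch only; names are hypothetical, not from the ERI proposal.
    from collections import defaultdict

    class KnowledgeObject:
        """A node in the semantic network: a word, sentence, text, or story."""
        def __init__(self, identifier, level, surface_text=None):
            self.identifier = identifier      # unique name of the object
            self.level = level                # e.g. "word", "sentence", "text", "story"
            self.surface_text = surface_text  # surface form kept alongside the semantics

    class KnowledgeArchive:
        """A toy store of knowledge objects and typed links between them."""
        def __init__(self):
            self.objects = {}
            self.links = defaultdict(set)     # link_type -> set of (source, target)

        def add_object(self, obj):
            self.objects[obj.identifier] = obj

        def add_link(self, link_type, source, target):
            # link_type might be "syntactic", "case_frame", "causality",
            # "correspondence", "similarity", etc.
            self.links[link_type].add((source, target))

        def infer_causes(self, target):
            """One simple inference rule: follow 'causality' links backwards,
            transitively, to collect everything that (indirectly) causes target."""
            causes, frontier = set(), {target}
            while frontier:
                nxt = {s for (s, t) in self.links["causality"] if t in frontier}
                frontier = nxt - causes
                causes |= nxt
            return causes

    # Usage: two sentence-level objects linked by a causality link.
    archive = KnowledgeArchive()
    archive.add_object(KnowledgeObject("s1", "sentence", "The furnace overheated."))
    archive.add_object(KnowledgeObject("s2", "sentence", "The sensor failed."))
    archive.add_link("causality", "s2", "s1")   # s2 causes s1
    print(archive.infer_causes("s1"))           # {'s2'}

In the proposal itself, each link type would carry its own family of inference rules, and the network would also retain correspondence links back to the surface documents rather than the single rule shown here.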
Initially, much of the work will be done manually or in a semi-automated fashion; however, it is hoped that the system can take on an increasingly large part of the processing over time. To make the task tractable in the initial phases, documents will not be processed as they are; rather, summary documents will be prepared. These will be analyzed by the computer and attached to the complete document with synonym links.
The envisioned sources of documents for processing are at the moment quite broad, encompassing collaboration with humanities research groups such as the Opera Project (chairman, Seigou Matsuoka), newspaper articles, scientific and technical literature (primarily information processing), patents, legal documents, manuals, and more.
Summary and Analysis. Much of the archives proposal seems vague, although the general spirit is clear. It should be noted that the proposal is the initial edition of the plan. EDR is currently working on more concrete details for it. Using the technologies developed by EDR, machine translation projects, and to a lesser degree ICOT, the project will attempt to build a very large-scale knowledge base. Documents, particularly textual documents, will form the core knowledge source with natural language processing technology converting natural language text into internal form. At least initially, the work will be at best semi-automated, but as the technologies get better the degree of automation will increase. The knowledge base will support a variety of knowledge storage, retrieval and transformation tasks. Largeness is seen as the core opportunity and challenge in the effort. Cyc is the U.S. project most similar to the one proposed here, although it is important to note the difference in perspective (Lenat and Guha 1990). In Cyc, it is the job of human knowledge engineers to develop an ontology and to enter the knowledge into it. In the Knowledge Archives Project, the source of knowledge is to be existing textual material and the ontology should (at least somewhat) emerge from the self-organization of the knowledge base.
The proposal explicitly mentions the intention of collaborating with researchers outside Japan and of encouraging the formation of similar efforts in other countries. The proposal has not yet been approved, but its progress should be followed.
Chapter 6
INTEGRATION OF ES WITH CONVENTIONAL DATA PROCESSING SYSTEMS
Bruce B. Johnson
INTRODUCTION
Most early industrial applications of expert systems were self-contained. They typically used a general-purpose expert system shell, ran on a single computer, and did not interface with other types of processing. Several years ago, a general movement toward integration with conventional data management systems began. Shells were redesigned to run within conventional architectures and to interface with relational databases.
Many of the expert systems we saw or discussed, and virtually all of the high-value systems, were interfaced to data management systems as their input and output mechanism. The trend is the same in the U.S.
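As a rough illustration of this interfacing pattern (not any particular vendor's shell; the table layout and the single IF-THEN rule below are invented for the example), a rule-based component might read its facts from a relational database and write its conclusions back:

    # Minimal, illustrative sketch of the interfacing pattern described above:
    # a rule-based component reads its facts from a relational database and
    # writes its conclusions back.  Table names and the rule are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE readings (machine TEXT, temperature REAL)")
    cur.execute("CREATE TABLE diagnoses (machine TEXT, finding TEXT)")
    cur.executemany("INSERT INTO readings VALUES (?, ?)",
                    [("press-1", 78.0), ("press-2", 112.5)])

    # A single production rule: IF temperature > 100 THEN conclude 'overheating'.
    def apply_rules(row):
        machine, temperature = row
        if temperature > 100.0:
            return (machine, "overheating")
        return None

    for row in cur.execute("SELECT machine, temperature FROM readings").fetchall():
        conclusion = apply_rules(row)
        if conclusion is not None:
            cur.execute("INSERT INTO diagnoses VALUES (?, ?)", conclusion)

    conn.commit()
    print(cur.execute("SELECT * FROM diagnoses").fetchall())  # [('press-2', 'overheating')]

The same read-facts, apply-rules, write-conclusions cycle applies whether the rule component runs on a mainframe, a workstation, or a PC, as long as it can reach the database.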
However, the Japanese have moved well beyond interfacing with data management systems into several other types of integration, which we discuss in this chapter.
Chapter 7
BUSINESS PERSPECTIVE
Herbert Schorr
HISTORY AND TRENDS
The pursuit of expert systems by Japanese companies was initially technology-driven. That is, many companies pursued expert systems because it looked like a technology they should not miss (technology push), rather than as the solution to a real business need (demand pull). The Japanese focused primarily on knowledge-based systems at first and often chose a diagnostic problem as a first application. Despite its limited usefulness, this is a well-understood entry point into AI and into the methodology of knowledge-based system (KBS) development. Learning the technology has been accomplished largely through on-the-job training rather than by hiring people with formal training in AI or KBS.
When knowledge-based systems were a new area of information processing, it was not clear at first where the high-payoff applications would be found. The search for such applications was similar to 'wildcatting' in the oil industry, with an occasional big strike and many dry holes. Overall, there was a low initial success rate in finding and deploying applications, ranging from 5 to 40 percent for the four computer companies that the JTEC team visited. However, with several years of experience in selecting applications, the process has become much more reliable. Fujitsu, for example, reported that its current success rate in bringing a new application to operational status is between 75 and 90 percent; its initial success rate was about 5 percent.
Several important business trends came across in our interviews:
Japanese computer manufacturers now produce task-specific shells, especially for diagnostics and planning. These new shells are intended to allow end users to write their own applications more easily than general-purpose shells allow (see Chapter 3 for more details). Although task-specific shells have been developed and marketed in the U.S. for financial planning and risk assessment (among others), the trend is more pronounced in Japan.
The need to integrate KBS applications with conventional data processing systems has led to a complete rewrite of shell products in C. This same trend has been in evidence in the U.S. for at least five years.
There is a steady migration from mainframes to PCs and engineering workstations for running KBS applications (though both often access mainframe databases). A parallel trend exists in the U.S.
The technology of knowledge-based systems has been assimilated within Japanese companies (in contrast to the U.S., where an outside consultant is often used to write such an application), and is part of the tool kit used to solve complex system problems. In fact, in many applications the KBS is just one part of an overall system, as in the NKK blast furnace. We believe that companies which have developed this in-house capability are in a good position to gain competitive advantage. For example, the steel industry seems to be using the technology systematically throughout (NKK and Nippon Steel each have about 25 applications in routine use and more under development); to our knowledge, this surpasses anything being done in the steel industry in the U.S.
On the basis of our site visits, plus additional data gathered by Nikkei AI, we can draw a number of conclusions about the state-of-the-art of expert system applications within the business sector in Japan.
The technology of expert systems has now been mastered by the Japanese. Since the early 1980s, when they first entered this field, they have completely caught up with the United States. They can apply the technology to any problem within the state of the art. Their best applications are equal to the best elsewhere in the world. Their use of the technology is not confined to niches, but is widespread across many business categories.
Japanese computer manufacturers (JCMs) play a dominant role in the technology and business of expert systems. The JCMs have mastered and absorbed expert system technology as a core competence. They tend to use systems engineers rather than knowledge engineers to build systems. Consequently, integration with conventional information technology poses no special problem for them, and is handled routinely and smoothly, without friction. These large computer companies also build many application systems for their customers; small firms play only a minor role in applications building, in contrast with the United States.
Within the computer manufacturing companies, there is a close coupling between activities in the research laboratories, the system development groups, and the sales departments. The development and sales groups work closely together to develop custom systems for clients. The results are fed back to the research lab to provide requirements for the next generation of ES tools.
Viewed as a technology (rather than as a business), the field of expert systems is doing well in Japan, as it is in the U.S. As in the U.S., the experimentation phase is over, and the phase of mature applications is in progress. Following a normal learning curve, the ratio of successful deployments of expert systems to projects initiated has risen sharply, from about 5 percent in the early years to about 75 percent in recent years. Japanese practitioners make eclectic use of AI techniques, most of which originated in the U.S. or Europe. As in the U.S., expert systems technology is often just a component of a bigger system -- expert systems are just another tool in the software toolkit. The Japanese do not attempt to analyze payoff at the component level, but look at the system level; thus they do not measure the return on investment of these embedded expert systems. However, there are many applications in which the expert system is the main technology.
Viewed as a business, the expert systems field in Japan did not take off in any exceptional way compared to the U.S. or Europe. Although the overall level of activity is significant and important, there is no evidence of exponential growth. Components of the business consist of expert system tools, consulting, and packaged knowledge systems. Hitachi's expert system business seems the most viable. Other major players, such as Fujitsu and CSK, have had limited business success.
With respect to ES tools, Japanese tools are similar in sophistication to those sold and used in the U.S. Techniques and methodology developed in the U.S. have been, and continue to be, made into products quickly.
Japan has more experience than the U.S. in applications of KBS technology to heavy industry, particularly the steel and construction industries.
Aside from a few exceptions, the Japanese and U.S. ES tool markets follow similar trends: vertical, problem-specific tools; a move towards open systems and workstations; and an emphasis on integration of ESs with other computational techniques.
The number of fielded applications in Japan is somewhere between 1,000 and 2,000, including PC-based applications. The number of U.S. applications is probably several times that of Japan.
Fuzzy control systems (not counted in the above tally) have had a big impact in consumer products (e.g., camcorders, automobile transmissions and cruise controls, televisions, air conditioners, and dozens of other products).
The JTEC panel saw continued strong efforts by Japanese computer companies and industry-specific companies (e.g., Nippon Steel) to advance their KBS technology and business. This situation contrasts with that in the U.S., where we see a declining investment in knowledge-based systems technology: lack of venture capital, downsizing of computer company efforts, and few new product announcements. It is a familiar story, and a cause for concern, as this trend may lead to Japanese superiority in this area relatively soon.
KNOWLEDGE-BASED SYSTEMS RESEARCH IN JAPAN
Our conclusions in this area are summarized as follows:
A survey of three years of working papers of the Special Interest Group on Knowledge-Based Systems of the Japan Society for AI shows a wide range of research topics, touching on most of the subjects of current interest in the U.S.
The quality of research at a few top-level universities in Japan is in the same range as that at top-level U.S. universities and research institutes. However, in the remainder of the Japanese university system, the quality of research is not equal to that at first- or second-tier U.S. research centers. The quantity of research (in terms of number of projects and/or number of publications) is considerably smaller than in the U.S.
LIFE is the world leader in applying fuzzy logic concepts to classic AI core problems.
Japanese industrial laboratories appear to be doing advanced development that is tightly coupled to application or product development. JCMs and some other Japanese high technology companies are carrying out some knowledge-based systems research, but most non-computer companies do none. We saw, essentially, a thin layer of excellent work at Hitachi, Toshiba, NEC, Fujitsu and NTT, and (on previous visits) at IBM Japan and Sony. The most basic and deepest work is at Hitachi's Advanced Research Laboratory, which is conducting advanced research in model-based reasoning and machine learning.