WebLS is being used to implement web-based technical support for Amzi! The following example is taken from that application.
First, an HTML form is used to gather general information about the problem. In Figure 1 the trouble is with Delphi under Windows 95.
Figure 1: Amzi!'s Initial Tech Support Form
This is followed by one or more additional questions to determine the exact problem. In Figure 2, WebLS only needs to know what version of Delphi is being used.
Typically, between one and three additional sets of questions should be enough.
Figure 2: An Additional WebLS Question
In Figure 3, knowing the user is working with Delphi 2.0 and the Amzi! Jan96 release is enough to determine the problem and the solution. The solution can be a couple of paragraphs, or it can be another web document. Here a paragraph describes how to fix the declarations to stdcall. Notice that it also includes direct links for downloading the updated files.
Figure 3: WebLS Provides an Answer
A Problem Solver will not be able to answer 100% of your users' questions. A good goal is to answer most of the commonly asked questions.
The first task in designing a Problem Solver is to make a list of each of the problems and their symptoms. A trivial example might be 'If Error Message is Y then Problem is X'. More complex examples will have multiple symptoms, for example 'If Program is A and Command is B and Environment is C then Problem is Z'. Start with a handful of the most common problems, and you can build up the system from there.
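Before writing WebLS rules, it can help to prototype the problem/symptom list as plain data. The following Python sketch is illustrative only (it is not WebLS syntax); the problem IDs and fact names are borrowed from examples that appear later in this article.

```python
# Prototype of a problem/symptom catalog as plain data (not WebLS syntax).
# Problem IDs and fact names are borrowed from examples in this article.
PROBLEMS = [
    {"problem": "srcbufTooSmall",
     "symptoms": {"errorMessage": "Code too long to load"}},
    {"problem": "cLargeModelRequired",
     "symptoms": {"language": "C/C++", "applicationMode": "16-bit"}},
]

def matching_problems(facts):
    """Return every problem whose listed symptoms all appear in the facts."""
    return [p["problem"] for p in PROBLEMS
            if all(facts.get(k) == v for k, v in p["symptoms"].items())]

print(matching_problems({"errorMessage": "Code too long to load"}))
# -> ['srcbufTooSmall']
```

Once the catalog stabilizes, each entry translates naturally into one WebLS rule.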
You will notice some repeating elements in your symptom lists. Typically these are facts such as the language or tool in use, the operating environment, and the error message received.
These repeating symptoms form the basis of the first set of questions to ask the user (see Figure 1).
The purpose of the first set of questions is to gather general symptoms of the problem, in the hope of determining a direction for further inquiry. Try to keep to a single page of questions, and think carefully about the default answer each question gets if the user doesn't select one. The default answers will become facts in your logic-base, and you don't want unexpected results because the user skipped over a couple of questions. Also, if you can gather information (like an error number or message) that leads directly to an answer, try to fit that in as well. You will find that the questions on the first form change a lot as you develop your logic-base.
For each problem you will need to write some text for the answer. For example, here is a simple answer:
answer(cLargeModelRequired, [
    text = [
        $16-bit C/C++ applications require the large memory model. $,
        $Failure to use it leads to immediate GPFs.$ ]
    ]).
Now you can start writing your rules. Start by using one of the facts on your first form. Here's the basic structure:
if fact = value then problem = problemID.
Rules can either directly determine the problem, or they can determine other facts (distilled facts). Here is a simple rule:
if errorMessage = 'Code too long to load' then problem = srcbufTooSmall.
It says if the fact 'errorMessage' has the value specified then the problem is 'srcbufTooSmall'. Here's another rule:
if languageTool = 'Visual C++'
    or languageTool = 'Borland C++'
    or languageTool = 'Watcom C++'
then language = 'C/C++'.
This rule distills the fact 'language' from the user-provided fact 'languageTool'. This allows us to simplify rules such as the following:
if language = 'C/C++'
    and apiFunction = 'lsInit'
    and ((errorMessage = 'GPF (General Protection Fault)'
          and environmentNameVer = 'Windows 3.x')
         or environmentNameVer = 'DOS')
then problem = cLargeModelRequired.
Note here that more of the power of the rule language becomes evident. We can use 'and', 'or' and parentheses to combine tests on facts. Also available, though not shown here, is the ability to check whether a fact is not equal to a particular value. You can use these comparators: =, \= (not equal), >, <, >= and <=. For checking multivalued facts use 'include' or 'exclude' as follows:
if (languageTool = 'Borland C++' or languageTool = 'Borland C')
    and symptoms include 'Errors linking with the Logic Server libraries'
    and applicationMode = '16-bit'
    and releaseDate <= '19960302'
then problem = borlandStatic16.
Notice that rules that determine an answer conclude with 'problem ='.
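To make the rule semantics concrete, here is a hedged Python rendering of two ideas from the rules above: distilling the 'language' fact from 'languageTool', and evaluating the borlandStatic16 conditions. The multivalued 'symptoms' fact is modeled as a list, so 'include' becomes a membership test. This is an illustrative sketch, not WebLS itself.

```python
# Illustrative Python sketch of the rules above; not WebLS itself.

def distill(facts):
    """Derive the 'distilled' fact language from the user-supplied languageTool."""
    if facts.get("languageTool") in ("Visual C++", "Borland C++", "Watcom C++"):
        facts["language"] = "C/C++"
    return facts

def borland_static16(f):
    """The borlandStatic16 conditions, rendered as plain Python tests."""
    return (f.get("languageTool") in ("Borland C++", "Borland C")
            # 'include' on the multivalued symptoms fact is a membership test
            and "Errors linking with the Logic Server libraries"
                in f.get("symptoms", [])
            and f.get("applicationMode") == "16-bit"
            # YYYYMMDD strings compare correctly as strings; the default
            # "99999999" makes an unknown release date fail the test
            and f.get("releaseDate", "99999999") <= "19960302")

facts = distill({
    "languageTool": "Borland C++",
    "symptoms": ["Errors linking with the Logic Server libraries"],
    "applicationMode": "16-bit",
    "releaseDate": "19960115",
})
print(facts["language"], borland_static16(facts))
# -> C/C++ True
```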
The most important performance enhancement you can make to your logic-base is to group related facts together. This is done as follows:
question(environmentNameVer, [
    prompt = $What environment are you running under?$,
    ask = menu(['Windows 3.x', 'Windows 95', 'Windows NT', 'DOS', 'Linux']),
    related = [memSize, processorType]
    ]).
This says that 'memSize' and 'processorType' are related to 'environmentNameVer'. When the rules are executed, if WebLS needs to ask the user for environmentNameVer, it will ask for the other two facts as well. This reduces the number of times WebLS is invoked and greatly reduces the time your users spend answering questions before obtaining a resolution to their problem.
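As a hedged sketch (this is not the WebLS implementation, and the prompts for memSize and processorType are invented placeholders), the batching behavior amounts to:

```python
# Sketch of related-question batching. The memSize/processorType prompts
# are invented placeholders; this is not the actual WebLS implementation.
QUESTIONS = {
    "environmentNameVer": {
        "prompt": "What environment are you running under?",
        "related": ["memSize", "processorType"],
    },
    "memSize": {"prompt": "How much memory does the machine have?", "related": []},
    "processorType": {"prompt": "What processor is it running on?", "related": []},
}

def question_batch(fact):
    """When `fact` must be asked, put it and all its related facts on one form."""
    return [fact] + QUESTIONS[fact]["related"]

print(question_batch("environmentNameVer"))
# -> ['environmentNameVer', 'memSize', 'processorType']
```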
If you find that you are putting the same text in multiple answers, you might want to use the notes facility in WebLS. This allows you to define a list of notes to accompany the answer. Notes are output after the answer, separated from it by a horizontal rule. Notes are defined like text answers as follows:
note(debugEmbed, [
    text = [
        $For more information on debugging embedded Prolog modules $,
        $we suggest you see $,
        $<A HREF="ftp://ftp.amzi.com/pub/articles/APIDEBUG.TXT">Debugging Hints</A>.$ ]
    ]).
Notes are used by adding a notes list to the answer. For example:
answer(cLargeModelRequired, [
    text = [
        $16-bit C/C++ applications require the large memory model. $,
        $Failure to use it leads to immediate GPFs.$ ],
    note = [debugEmbed, cLibraries]
    ]).
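A sketch of how an answer and its notes might be assembled. The note text is abbreviated, the cLibraries text is a placeholder, and the exact markup WebLS emits is an assumption; the point is only that each note follows the answer, separated by a horizontal rule.

```python
# Sketch of assembling an answer followed by its notes, each separated by
# a horizontal rule. The exact markup WebLS emits is an assumption, and
# the cLibraries note text is a placeholder.
NOTES = {
    "debugEmbed": "For more information on debugging embedded Prolog modules "
                  "we suggest you see Debugging Hints.",
    "cLibraries": "(text of the cLibraries note)",
}

def render_answer(answer_text, note_ids):
    """Join the answer and its notes with <HR> separators."""
    return "\n<HR>\n".join([answer_text] + [NOTES[n] for n in note_ids])

page = render_answer(
    "16-bit C/C++ applications require the large memory model.",
    ["debugEmbed", "cLibraries"])
print(page.count("<HR>"))
# -> 2
```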
Ongoing maintenance of the logic-base is very straightforward. The Amzi! tech support logic-base is organized by problem area: a section based on error message alone, then sections on the Amzi! samples, the Amzi! IDE, Prolog predicates, and Logic Server API calls, and then sections for each of the tools and languages that Amzi! Logic Servers can be embedded into. The questions, rules, and answers follow the same organization.
Every time tech support writes up an answer for a customer, it is forwarded to the logic-base developer, who incorporates it into the rules in the appropriate section and then adds the answer definition for the problem. If the rule requires new facts, question definitions (or rules) are created for each one.
The most important consideration when adding new rules is to use related facts lists wherever possible. This reduces the number of interactions with the users, resulting in less frustration for them and less load on the web server.
The processing of rules is called inferencing, and different rule engines use different inferencing strategies. The WebLS inference engine is designed to determine the solution to the goal problem. WebLS is especially optimized for Internet use in that questions are batched on the forms, rather than presented one at a time as needed. This makes for a quicker interaction between user and system.
Now that you have questions, rules and answers, how are the rules executed to ask the questions and output the answers? First, you start with a set of rules as shown above. There are also the definitions of the questions, that is, how to format and output the HTML that asks for the value of a fact. And there are the definitions of the answers, that is, how to format and output the HTML that displays the advice paragraph.
To get some facts for the rules to reason over, we output the questions designated to appear on the first form (see Figure 1). These initial facts are the basis of the entire execution of the rules.
Having gathered some initial facts as shown on the left side of the figure above, the rules are now executed for the first time. Usually a conclusion is not reached on this first pass. Instead, WebLS identifies a set of hypotheses: rules that have been neither proven nor disproven, because every fact on the if-side whose value is known matches, but the values of some facts are still unknown.
This leads to the next step, which is to gather more facts. For each unknown fact in the hypotheses, WebLS outputs the corresponding question and all its related questions. After the user answers these questions, the rules run again. The result shown above has changed as follows:
The gathering of facts is repeated until a solution to the goal problem is found. WebLS tries to find as many answers as it can, although in practice a problem solver usually yields one or two. So again, after asking the user more questions, we have more facts, more conclusions and fewer hypotheses, as above.
Finally, we have either proven or disproven every rule. The result is shown above: there are no more hypotheses, only conclusions. All the conclusions that are answers (goals) are output as separate paragraphs (see Figure 3).
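The cycle just described, run the rules, collect hypotheses, ask for the unknown facts, and repeat until every rule is proven or disproven, can be sketched in Python. This is a simplified model, not the WebLS engine: conditions are equality tests only, and related-question batching is omitted. The two rules are taken from the examples earlier in this article.

```python
# Simplified model of the inference loop described above; not the WebLS
# engine. Conditions are equality tests only; related-fact batching omitted.
RULES = [
    ({"errorMessage": "Code too long to load"}, "srcbufTooSmall"),
    ({"language": "C/C++", "applicationMode": "16-bit"}, "cLargeModelRequired"),
]

def solve(facts, ask):
    """Run the rules, asking ask(fact) for unknown facts, until none remain."""
    while True:
        conclusions, unknown = [], set()
        for conditions, problem in RULES:
            if all(facts.get(k) == v for k, v in conditions.items()):
                conclusions.append(problem)               # rule proven
            elif all(facts.get(k) in (v, None) for k, v in conditions.items()):
                # hypothesis: nothing contradicted, but some facts unknown
                unknown.update(k for k in conditions if k not in facts)
        if not unknown:
            return conclusions                            # all proven or disproven
        for fact in unknown:                              # "ask the user"
            facts[fact] = ask(fact)

answers = {"errorMessage": "(no error message)", "applicationMode": "16-bit"}
print(solve({"language": "C/C++"}, ask=lambda fact: answers[fact]))
# -> ['cLargeModelRequired']
```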
Copyright ©1996 Amzi! inc. All Rights Reserved.