While at LynuxWorks I decided to leverage some code that I had previously developed (see Clearquest Daemon) that uses a client/server model to provide a service that interrogates a database and returns information. Again this database happens to be a defect-tracking database residing on another machine.
The daemon opens the database and then listens on a socket for requests, in this case a defect ID. It obtains the detailed information about the defect and returns it to the caller in the form of a Perl hash. This avoids the overhead associated with opening and closing the database or otherwise connecting to the datastore. The daemon runs continually in the background listening for and servicing requests (ecrd source code).
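The idea can be sketched as follows. This is an illustrative Python rendering, not the real ecrd (which is written in Perl): the in-memory dict stands in for the ClearQuest database connection, and the port number, field names, and line-oriented JSON protocol are all assumptions made for the sketch.

```python
# Sketch of the ecrd idea: open the defect database once at startup,
# then serve lookups over a TCP socket so each request avoids the cost
# of reconnecting to the datastore.

import json
import socketserver

# Stand-in for the defect database, opened once when the daemon starts.
# Real ecrd talks to ClearQuest on another machine.
DEFECTS = {
    "142": {"owner": "adefaria", "state": "Open",
            "description": "Example one line description"},
}

def lookup(defect_id):
    """Fetch the detail record for a defect. The returned dict plays
    the role of the Perl hash that ecrd sends back to the caller."""
    return DEFECTS.get(defect_id, {"error": f"No such defect {defect_id}"})

class EcrHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Protocol sketch: one defect ID per line in, one JSON record out.
        defect_id = self.rfile.readline().decode().strip()
        self.wfile.write(json.dumps(lookup(defect_id)).encode() + b"\n")

def main(port=7070):
    # Run continually in the background, servicing requests.
    with socketserver.TCPServer(("", port), EcrHandler) as server:
        server.serve_forever()
```

Calling `main()` starts the daemon loop; the key point is that `lookup` never pays a per-request database-open cost.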
The caller, or client, can then process the information in any way it sees fit. Often the caller is a Perl or PHP script that outputs the information into a nicely formatted web page, but it can just as easily be a command line tool that spits out the answer to a question. For example:
$ ecrc 142 owner
adefaria
uses a command line client to display the owner of defect 142 (ecrc source code).
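A minimal client along these lines might look like the sketch below. Again this is illustrative Python rather than the real Perl ecrc, and the host, port, and JSON-over-socket wire format are hypothetical stand-ins for the actual ecrd protocol.

```python
# Sketch of an ecrc-style client: send a defect ID to the daemon,
# read back the record, and print one requested field.

import json
import socket
import sys

def query(defect_id, host="localhost", port=7070):
    """Ask the daemon for a defect record and decode the reply.
    One ID goes out per line; one JSON record comes back per line."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(defect_id.encode() + b"\n")
        reply = sock.makefile().readline()
    return json.loads(reply)

def field_of(record, field):
    """Pick a single field (e.g. 'owner') out of the returned record."""
    return record.get(field.lower(), f"No such field: {field}")

if __name__ == "__main__" and len(sys.argv) == 3:
    # Usage sketch: client.py 142 owner
    print(field_of(query(sys.argv[1]), sys.argv[2]))
```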
As PHP is a nice language for writing dynamic web pages, I then developed a PHP API library to act as a client to ecrd, which is written in Perl. This allowed me to call the daemon to get information about a defect and then format whatever web page I wanted (ecrc.php API source code).
For example, here is a web page describing a specific defect. Notice that the ECR (LynuxWorks defect tracking system) displays the one line description as well as other fields such as State, Status, Severity and Fixed info. Additionally the long description is displayed as well as parsed for references to other ECRs or auxiliary files, courtesy of PHP.
The link to ECR 22979 will not work unless you are within the LynuxWorks Intranet.
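The reference parsing is essentially a pattern match over the long description. Here is a sketch of the idea in Python (the real code is PHP), with a hypothetical `ecr.php?id=` URL scheme standing in for the actual display page:

```python
# Sketch of the description parsing: scan the long description for
# references like "ECR 22979" and turn each one into an HTML link to
# that ECR's display page.

import re

# Matches "ECR" followed by a defect number; group 1 is the number.
ECR_REF = re.compile(r"\bECR\s+(\d+)\b")

def linkify(description):
    """Replace ECR references in a description with HTML links."""
    return ECR_REF.sub(r'<a href="ecr.php?id=\1">ECR \1</a>', description)
```

The same approach extends to auxiliary file references: add a second pattern for file paths and substitute links into the network-accessible area.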
Tying it into HtDig
Since ECRs and their full text descriptions are now available via a web link, it was relatively trivial to hook this up to HtDig to enable full text searching on all ECRs and their descriptions. All that was needed was to produce a web page listing all ECRs, each linked to the web page of its description. HtDig would then crawl through and index everything. Additionally, since the ECR descriptions were scanned for references to certain auxiliary files (files not necessarily in the defect database but on a network accessible area and used to further support the ECR in question), HtDig would crawl through and index them too. This resulted in a very flexible and powerful internal search facility.
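The seed page HtDig needs is just a list of links, one per ECR. A sketch, again in Python with the same hypothetical `ecr.php?id=` URL scheme (the defect IDs shown are examples, not real data):

```python
# Sketch of the HtDig "seed" page: one page listing every ECR, each
# linked to its description page. Pointing the crawler at this page
# lets it reach and index every defect description.

def index_page(defect_ids):
    """Build an HTML page linking each ECR to its display page."""
    rows = "\n".join(
        f'<li><a href="ecr.php?id={ecr}">ECR {ecr}</a></li>'
        for ecr in sorted(defect_ids))
    return f"<html><body><ul>\n{rows}\n</ul></body></html>"
```

Regenerating this page periodically (e.g. from a nightly query of all defect IDs) keeps the index current as new ECRs are filed.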