This is the first of an occasional series of reviews I intend to write to illustrate some important general traits of ontologies. In each review I will dissect an ontology and examine why it succeeded or failed. In this essay I mention concepts that are defined in my previous essay, Judging the Likely Success of an Ontology. This first review covers an ontology called the Common Basic Specification (CBS), which was designed in the late 1980s to bring much-needed standardization and rationalization to the fragmented information management processes of the British National Health Service (NHS). It persisted in various forms until the late 1990s, when it was finally abandoned. This is my explanation of why it failed.

I have been living in the US now for six years and have come to recognize a common reaction among Americans whenever the NHS is mentioned. They all try to hide it, but a patronizing look comes over them. It doesn't matter how much you tell them that doctors in the NHS wash before they go into surgery, and even use anesthetics these days; Americans just can't help feeling sorry for anyone who has to suffer under a third-world healthcare system. So before I explain the CBS it is probably worth describing the NHS.

Background

The NHS is the largest employer in Europe and is the main provider of healthcare for the entire United Kingdom (pop. 60 million). Its annual budget is about 5 to 6 percent of the GDP of the United Kingdom. In 2002 that amounted to about 64 billion GBP, which at current exchange rates is approximately 115 billion USD. By contrast the US spends about 15 percent of its GDP on healthcare, and Americans live on average one year longer than the British. A more detailed breakdown of staffing and budget allocation for the NHS is available here. Primary healthcare in the UK is delivered through about 10,000 general practices and about 2,000 hospitals, most with fewer than 100 beds. There are a few large hospitals with up to 1,000 beds; these larger hospitals are usually teaching hospitals. All these numbers are approximate but they give a general idea of the scale of the NHS.

In the late 1980s it became apparent that the same information management problems were being encountered over and over again in hospital after hospital throughout the UK. Not only that, but they were being solved poorly time and time again. This was obviously wasteful. Hospitals had no reason to compete in the area of information management and every reason to cooperate. If these common problems could be solved correctly and the solutions reused, significant savings might be realized. At the same time the digital patient record became the holy grail of healthcare computing. In this vision of the future any patient's complete medical history would be available anywhere it was needed and could be passed from GP practice to hospital and back. It could even follow a patient as s/he moved around the country. The Common Basic Specification was suggested as the best way to ensure consistency across multiple solutions and thus enable standardization and portability of the digital patient record.

The Common Basic Specification

The Common Basic Specification (CBS) is a conceptual generic model of the activity of health care delivery.

[IT Standards Handbook - NHS Data Standards]

Definition of the NHS Data Model was started in 1986 and continued for several years. It was eventually renamed the Common Basic Specification (CBS). By 1992 a CBS Generic Model had been published. This was in effect a top level ontology (similar to the SUMO currently under development). It was developed for healthcare service delivery but, as can be seen from the diagram below, it was universally applicable to any kind of service delivery.

Common Basic Specification

Class descriptions for the figure (a rough sketch of a few of these classes in code follows the list)

  • Act for subject: An “Activity” that is directed towards a “Subject”.
  • Activity: Purposeful and intentional “Event”
  • Activity class: A kind of “Activity”.
  • Agent: Role assumed by a “Subject” enabling it to act purposefully
  • Authorised to perform: The recording of the fact that an “Agent” may perform certain classes of “Activity”.
  • Category: Abstraction on the basis of common properties.
  • Concept: “Object” which is a unit of thought.
  • Event: Something which happens.
  • Incident: “Event” occurring without known volition.
  • Knowledge concept: A collection of “Concepts”, the relationships between them and the reasons for them.
  • Located at: A “Location” for a “Subject”.
  • Location: Point or piece of space.
  • Object: Part of the conceivable or perceivable universe
  • Percept: Perceived or inferred to exist
  • Reason for activity: The identification of a “Subject property” as the reason for performing or planning an “Activity for subject”.
  • Reference point: Point or piece of time or space.
  • Responsible for: The responsibility that an “Agent” has for a “Subject”.
  • Results in: A means of establishing that an “Activity for subject” has resulted in a “Subject property”.
  • Subject: “Percept” which is one or more physical objects.
  • Subject property: Anything that describes a subject: location, identity, characteristic, etc.
  • Timepoint: Point or span of time.
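To give a more concrete feel for the generic model, here is a minimal sketch of how a few of the classes above might be rendered in code. The class names come from the list; the attributes, relationships and the clinical example are my own invention and are not part of the CBS.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Event:
    """Something which happens."""
    description: str


@dataclass
class Subject:
    """A 'Percept' which is one or more physical objects, e.g. a patient."""
    identifier: str


@dataclass
class Agent:
    """Role assumed by a Subject enabling it to act purposefully, e.g. a clinician."""
    subject: Subject
    role: str


@dataclass
class Activity(Event):
    """Purposeful and intentional Event."""
    performed_by: Optional[Agent] = None


@dataclass
class ActForSubject(Activity):
    """An Activity that is directed towards a Subject."""
    directed_at: Optional[Subject] = None


# Hypothetical usage: a clinician (an Agent) performs an activity on a patient (a Subject).
patient = Subject(identifier="hospital-number-1234")
surgeon = Agent(subject=Subject(identifier="staff-number-42"), role="surgeon")
operation = ActForSubject(
    description="appendectomy",
    performed_by=surgeon,
    directed_at=patient,
)
```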

The initial publication of the CBS was found to be too high level. The model was hurriedly reworked and republished in 1993 as a series of CBS Application Views, one of which was later renamed the CBS Clinical View. This view was a slightly lower level ontology. After nearly 5 years and 5 million GBP had been invested in the model it was decided to use it as the foundation for a series of “demonstrator projects”. This work began in 1992/3. I was involved in the largest of these projects, where the CBS was to be used as the foundation for an entire Hospital Information Support System. This system was to cover every aspect of hospital information management: patient master index, inpatient and outpatient management, orders and results reporting, maternity, pharmacy, and many other ancillary activities including clinical laboratories, laundry, and facilities management. By using the CBS as a conceptual model the final system was intended to be reusable across the NHS and be fully “future-proofed”. The system took almost 3 years to build and, while it was successful in the first hospital, it was only reused once.

In 1998, 3 years after the Hospital Information Support System was completed, the model was redefined yet again and renamed the NHS Healthcare Model (HcM). But by this point it was too late. No one believed in the blue fairy of future-proofing anymore and the model was abandoned. It is no longer even available online from any official NHS website. However, last year I downloaded a copy in anticipation of writing this article. So here is The Common Basic Specification (CBS) a.k.a. The NHS Healthcare Model (HcM). Take a look around and try drilling down on some of the diagrams to get a real flavor of the model. (If you want a complete copy just send me an email and I will send a zipped file. Please don't overstress my poor machine by downloading the whole thing.)

As ontologies go the Common Basic Specification is large and complex, which is not surprising given its scope and the length of time it took to develop. The CBS was reworked and refined until it became a clean conceptual model for generic service delivery. It is coherent – logical in the relationship of its parts; generic enough to handle any healthcare delivery use case; and mature – the corners have been knocked off. So why did it fail?

Reasons for Failure of the Common Basic Specification

Of the millions of pounds spent on developing the CBS very little was spent on articulating the benefits of implementing a common standard or training people to use the model. Many potential beneficiaries of the model did not see the value of training staff to understand it. Implementing an ontology is a political activity. It requires persuasion, coercion and sometimes direct threats. Small specialized groups can be persuaded to agree on large complex ontologies, but large groups find such agreement difficult and often impossible. It is almost as if there were a law governing the adoption of ontologies.

The size and complexity of an ontology is inversely proportional to the size and complexity of the community of agents that can be persuaded to adopt it.

But there is a more fundamental problem with top level ontologies that is not political. At higher levels of abstraction every conceptual model is subjective; there is always another way to model reality. It can be advantageous to deliberately take a contrary view precisely because it will lead to different conclusions that may offer competitive advantage over others. The field of Healthcare delivery is large enough to accommodate many different world views. The CBS failed because it was the fossilization of a single world view.

Finally, the Common Basic Specification took a top-down reductionist approach to a fundamentally bottom-up emergent problem. This same fundamental error is made by all top level ontologies. It is being made today by the developers of SUMO, an ontology doomed to the same fate as the CBS. As a professional body the IEEE ought to know better. A top-down reductionist approach is useful for constrained problem domains but it is the wrong strategy for broad areas of knowledge. Robert Graves understood the tradeoff between a top-down and a bottom-up approach and explained it better than I ever could in his poem In Broken Images.

In Broken Images

He is quick, thinking in clear images; I am slow, thinking in broken images.

He becomes dull, trusting to his clear images; I become sharp, mistrusting my broken images,

Trusting his images, he assumes their relevance; Mistrusting my images, I question their relevance.

Assuming their relevance, he assumes the fact, Questioning their relevance, I question the fact.

When the fact fails him, he questions his senses; When the fact fails me, I approve my senses.

He continues quick and dull in his clear images; I continue slow and sharp in my broken images.

He in a new confusion of his understanding; I in a new understanding of my confusion.

Robert Graves

The debate about the promised value of the Semantic Web seems to me to be missing a dispassionate examination of the success, or otherwise, of existing ontology-based solutions. Clay Shirky is obviously right when he states that a single monolithic ontology will never work. His critics are equally right when they claim the Semantic Web will only work if it is a melange of multiple interoperable ontologies. What is missing from the debate is a more detailed explanation of what ontologies are good at, how they interoperate, and why systems based on ontologies succeed or fail. From my perspective as a systems designer this last point is the most significant. Debates about theory are nice, but examples of real solutions are more instructive. This essay will begin to examine this question by attempting to define the anatomy of an ontology. I will use this structure in later essays to examine the reasons for success and failure of individual ontologies.

Ontologies are nice, in theory, but difficult to extract value from, in practice. They fail for a variety of reasons: the fundamental assumptions on which they are based are often unsound. Sometimes they are inflexible and unable to adapt to new circumstances. Or, conversely, while attempting to make them adaptable their designers make them too abstract for the people who are then burdened with their implementation. Frequently they are inadequately specified, leaving vital areas open to interpretation and thus negating their usefulness. And just as frequently they are designed without consideration for interoperability with other ontologies – ironic, considering their basic purpose. Despite all these opportunities for failure, successful ontologies can be spectacularly powerful.

To analyze the success or failure of example ontologies it will be necessary to first define the anatomy, or architecture, of an ontology. By defining the roles and responsibilities of the component parts of an ontology it will be possible to explain the success or failure of various examples in terms of these components. Ontologies can vary widely in complexity. Some contain all the components listed below but many do not. The anatomy I develop below is based on practical experience of what has worked in systems I have designed, reviewed, or studied. This anatomy may not be consistent with textbooks, agree with the latest theory, or reflect current best practice. It does however work for me. It may work for you, but I make no promises.

Ontologies in Theory

Tom Gruber of Stanford University defines the word Ontology as :-

A specification of a conceptualization. That is, an ontology is a description (like a formal specification of a program) of the concepts and relationships that can exist for an agent or a community of agents.

An Ontology can be thought of as a contract shared between agents that intend to exchange information. The contract takes the form of a model for structuring and interpreting exchanged data and a vocabulary that constrains these exchanges. Using a relatively small ontology agents can exchange vast quantities of data and consistently interpret it to extract information. Furthermore they can, in principle, infer new information by applying logical rules allowed, and sometimes explicitly specified, by the Ontology.

It is worth remembering that every non-trivial ontology will allow statements it cannot adjudicate. As Gödel pointed out in his incompleteness theorem, in any consistent axiomatic system rich enough to express arithmetic there are propositions that can neither be proved nor disproved within the system. This does not negate the usefulness of ontologies, just as it does not negate the usefulness of mathematics. However, it does mean that ontologies, like everything else, have their limitations.

The sense of the word “Ontology” defined above was coined by the Artificial Intelligence research community after they stole the word from the field of philosophy. More recently it has been adopted to describe components of the Semantic Web. In the AI community the ability to infer new information from existing data is of fundamental importance, and this is sometimes misinterpreted as a defining feature of an ontology. In fact many ontologies support this capability only weakly, if at all. The word is also sometimes narrowly defined to mean hierarchical taxonomies or constrained vocabularies; this usage is too narrow, since an ontology also contains assertions about how data can be structured and interpreted, and these assertions are missing from taxonomies and constrained vocabularies.

The following brief summary taken from the essay What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model? by Johannes Ernst provides a good description of the various frameworks often classified as ontologies.

Taxonomies and Thesauri may relate terms in a controlled vocabulary via parent-child and associative relationships, but do not contain explicit grammar rules to constrain how to use controlled vocabulary terms to express (model) something meaningful within a domain of interest. A meta-model is an ontology used by modelers. People make commitments to use a specific controlled vocabulary or ontology for a domain of interest

For the purpose of this essay I will use a broad definition for the word ontology:

Ontology :- A specification of a conceptualization used by a community of agents to support the exchange and consistent use of information.

Ontologies in Practice

Ontologies, far from being an unproven new concept, are already in practical daily use. They form the foundation of classification systems, databases, and object oriented software applications. In a few notable cases ontologies have persisted and even evolved over many decades. What is new is the realization that all these seemingly different systems can be compared from an ontological point of view. With the rise of the Internet and the more recent global adoption of the Web the desire to discover and exchange information, rather than mere data, has grown. Developing methods to allow the interoperation of existing and new ontologies has become imperative – hence the efforts being expended on the development of the Semantic Web.

Ontologies in Context

The value of an ontology can only be judged by its ability to support the exchange of information between agents. An ontology considered outside its context of use is a meaningless abstraction. It is only when a number of agents agree to use the same ontology to constrain their interactions that it gains any value. Most ontologies, especially successful ones, are less than perfect. Every useful ontology is a compromise between the conflicting needs of different agents. To fully appreciate the ability of an ontology to successfully support the exchange of information it is necessary to examine the instance data that is exchanged. All too often ontologies are presented in isolation as if they were the end of the story when in fact they are only the beginning of the dialog. The exchange is what is important not the ontology. For this reason I also include the instance data in the discussion below even though it is strictly not part of an ontology. It is worth noting that use cases are an essential part of ontology design. An ontology provided without supporting use cases is likely to be a failure.

General Purpose and Special Purpose Ontologies

There are two basic types of ontology :- general purpose and special purpose. A general purpose ontology is analogous to a Universal Turing Machine in that it is capable of defining any other arbitrary ontology, just as a Universal Turing Machine is capable of defining any other arbitrary Turing machine. General purpose ontologies are also capable of defining themselves. Self-definition is a significant capability because it provides for auto-discovery by both programmers and programs.

For example, a program or programmer that can read and understand XML Schema is theoretically capable of examining the definition of the XML Schema language (itself written in XML) and auto-discovering new features should they be introduced at some point in the future. Thus programmers and programs need only be taught one ontology definition language if that language is auto-defining. Codd understood this need for self-definition when he specified rule 4 of his 12 rules.
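Before turning to Codd's rule below, here is a rough sketch of what such auto-discovery might look like in practice, using Python's standard xml.etree module. The schema snippet is invented for the example; the point is only that the schema is itself XML and can be inspected with exactly the same tooling used for the data it describes.

```python
import xml.etree.ElementTree as ET

# A toy schema, invented for this example. Because the schema is itself an XML
# document, a program can inspect it with ordinary XML tooling and discover
# newly declared elements without being rewritten.
SCHEMA = """\
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="patient"/>
  <xs:element name="admission"/>
  <xs:element name="discharge"/>
</xs:schema>
"""

XS = "{http://www.w3.org/2001/XMLSchema}"

root = ET.fromstring(SCHEMA)
declared = [el.get("name") for el in root.iter(XS + "element")]
print(declared)  # ['patient', 'admission', 'discharge']
```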

Codd's Rule 4 – Active online catalog based on the relational model

The system is required to support an online, inline, relational catalog that is accessible to authorized users by means of their regular query language.

And if you don’t believe relational databases are ontologies you should read this.
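As a small, concrete illustration of rule 4, the sketch below uses Python's built-in sqlite3 module: the catalog (sqlite_master) is just another table, so a program can discover the structure of a database it has never seen before using the regular query language. The table and columns are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, surname TEXT NOT NULL)")

# The catalog is itself relational and is queried with ordinary SQL.
for name, sql in conn.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'"):
    print(name)  # patient
    print(sql)   # the CREATE TABLE statement, recoverable by any authorized program
```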

Self-definition is only one of the benefits of general purpose ontologies. The other is the ability to define new, arbitrarily complex ontologies. Paul Rendell has used John Conway's Game of Life (a set of rules defining a simple cellular automaton) to implement a Turing machine. This mind-boggling example proves that the Game of Life is Turing complete and neatly illustrates how massive complexity can arise from a very simple specification. Another similar example is the XSLT Turing machine. In this example XSLT, which is itself defined in XML, is used to define a Turing machine.

Self-definition and the ability to define arbitrarily complex ontologies are more than a party trick. Consider the C programming language. The first C compiler was written in a programming language called B by Dennis Ritchie. The first thing he wrote in C was another C compiler, which he compiled using the compiler written in B. He then created another executable compiler from the same C source code, this time using the new pure C compiler. Since he now had the source code for a C compiler written in C and an executable version of that same code, he was free of the B language forever.

General purpose ontologies capable of self-definition have existed for a century at the most and have only had any practical application outside mathematics and philosophy since the widespread adoption of the computer in the 1960s and 70s. Special purpose ontologies are a different matter. There are many successful special purpose ontologies that are not specified in any formal language (yet) but are nevertheless rigorously defined. I will examine several of these in later entries, but for the time being two examples will illustrate their value.

The periodic table is a classic, widely used taxonomy. It becomes an ontology when it is combined with the following assertions and constants. (Please forgive me for any errors in these rules; I am not a chemist.)

  • A molecule is the smallest quantity of a compound composed of chemically bonded elements
  • A chemical reaction occurs when reactants (elements and/or molecules) combine to produce products (elements and/or molecules) of different composition
  • Mass is conserved in a chemical reaction
  • Elements cannot be transmuted in a chemical reaction
  • A mole of any compound contains the Avogadro number of molecules, and its mass in grams is the sum of the atomic masses of its constituent atoms

This classic chemistry ontology has withstood the attacks of scientists and the abuse of schoolchildren for 200 years. It allows chemists to test the plausibility of any possible chemical reaction and predict the quantities of reaction products. The system is not perfect: it cannot predict whether reactions are thermodynamically likely, and it cannot predict some element properties – for example why mercury is a liquid – but it can rule out many implausible chemical reactions. Most spectacularly it has been used to infer the existence of undiscovered elements and compounds.
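A minimal sketch of the kind of check this ontology licenses: because elements cannot be transmuted and mass is conserved, a proposed reaction whose two sides do not contain the same atoms can be ruled out as implausible. The reaction and the way it is encoded below are purely illustrative.

```python
from collections import Counter


def atoms(side):
    """Total atom counts for one side of a reaction, given as (formula, coefficient) pairs."""
    total = Counter()
    for formula, coefficient in side:
        for element, n in formula.items():
            total[element] += n * coefficient
    return total


# 2 H2 + O2 -> 2 H2O, written out as explicit atom counts for simplicity.
reactants = [({"H": 2}, 2), ({"O": 2}, 1)]
products = [({"H": 2, "O": 1}, 2)]

# Elements (and therefore mass) must balance, otherwise the reaction is implausible.
print(atoms(reactants) == atoms(products))  # True
```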

The Benesh Movement Notation is an ontology with a very different purpose. It is a system for recording any form of human movement. Expert choreologists can use the notation to record entire ballets with multiple, interacting performers so that other choreologists can recreate the ballet in high fidelity without the original choreographer being present. The system records human movement symbolically on a musical stave so that the movements can be synchronized with a musical accompaniment. The system is superior to video since it can record the intentions of the original choreographer rather than the individual interpretation created by a particular performer.

Special purpose ontologies are common. Some are widely used, some are very specialized, and many are successful. All these ontologies will gradually be defined by general purpose ontology definition languages so they can be made to interoperate with other similar or overlapping ontologies. By studying these successful and in some cases long-lived ontologies we can learn a great deal about how to make the Semantic Web successful. Even interoperability of ontologies is not a new problem (more on interoperability in a later entry).

Anatomy of an Ontology. A Five Layer Model

It may be claimed that the model presented here is really a three layer model and that layers 0 and 4 are not strictly part of an ontology. This is true but it is necessary to consider all five layers when evaluating an ontology since they all affect the fitness for purpose of the complete solution.

Layer 0. Ontology Definition Language

The days of informally defined special purpose ontologies are over. From now on anyone taking the trouble to define an ontology will use a formal specification language. All the existing special purpose ontologies will gradually be re-defined using one of the available languages. The question is: which one? There are legitimate reasons for having more than one language – different languages bestow different qualities on the ontologies they define. It is still early days for many of these languages and today there are few, if any, experts who can make truly informed choices. A few general observations can be made.

Ontology specification languages are not a mature technology. They are still evolving. It is likely that there will be several generations of these languages, just as there were several generations of programming languages. The languages in use today will be replaced by “better” languages tomorrow. We can already see this happening: SGML has largely given way to XML, and XML may in turn give way to RDF or OWL, at least for certain uses. A well designed ontology has the potential to retain its usefulness for a long time and so may need to be migrated from one specification language to another. Languages that are better able to support interoperability will be a safer choice, since ontologies specified with them will be easier to migrate to new “better” languages.

In choosing an ontology specification language it should be remembered that the relational data model and SQL have been the default choice for the past 25 years. This approach has been phenomenally successful. There is no doubt that it works, and there is plenty of support for the model in terms of tools and expertise. It is no coincidence, however, that new ontology specification languages are emerging just as the web has made the Internet ubiquitous. Relational databases work well on isolated servers, where encapsulation of tightly coupled data and functionality is a benefit and applications interacting with the database can be strictly controlled. But smearing a relational database across multiple unreliably networked machines is not a good idea. It can be done, but it isn't pretty. The new languages (XML, RDF, OWL, etc.) are designed to support exactly this kind of distribution. When functionality and data are distributed, loosely coupled, and independently controlled, an ontology specification language will be a much better choice.

Layer 1. Data Structures

All ontologies specify data structures. Depending on the specification language selected these could be tables containing columns, classes with slots, statements of the form subject–predicate–object, or one of several other basic formats. Whatever language is chosen, a conceptual model must be developed that defines a set of data structures that is fit for the intended use. This process is the most influential factor in determining the quality of any solution subsequently designed to use the ontology. Support for flexibility, reliability, maintainability and many other qualities is either designed into the ontology or neglected at this point. Desirable qualities such as these are frequently weakened during later stages by poor development practices, but even good development practices will not put these qualities back in if they were never there in the first place.
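To make this concrete, here is the same trivial fact sketched in three of the basic formats just mentioned. The fact and all the names are invented for illustration and do not come from any particular ontology.

```python
# The same fact -- "patient 1234 is located on ward 7" -- in three basic formats.

# 1. A table containing columns (one row of a hypothetical patient_location table).
row = {"patient_id": 1234, "ward": 7}

# 2. A class with slots (an object in an object model).
class PatientLocation:
    def __init__(self, patient_id, ward):
        self.patient_id = patient_id
        self.ward = ward

obj = PatientLocation(1234, 7)

# 3. A statement of the form subject, predicate, object (a triple).
triple = ("patient:1234", "locatedOn", "ward:7")
```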

In a previous essay on system design reviews I defined a set of axioms for good conceptual modeling. I have reproduced them in summary form below. They apply equally to ontology design, since an ontology is merely a specification of a conceptualization.

  1. Everything in moderation and nothing to excess
  2. A good system design is based on a sound conceptual model (Architecture)
  3. A sound conceptual model accounts for all system requirements at a reasonable level of abstraction
    1. A conceptual model is sufficiently generalized when it can account for all significant use cases in a concise way that reduces complexity by consolidating similar features
    2. A conceptual model is sufficiently specific when it is possible to demonstrate how a system design based on the model will achieve measurable targets for required system attributes
  4. A good conceptual model is easy to communicate

    1. A conceptual model is easier to understand and communicate if it is coherent – logical in the relationship of its parts – aesthetically consistent.
    2. A conceptual model is easier to understand and communicate if it is analogous to a commonly experienced, tangible, real world system
    3. A conceptual model is easier to understand and communicate if it is anthropomorphized – made to mimic human behavior, characteristics and modes of interaction

Layer 2. Assertions and Constraints

Data structures only implement part of a conceptual model. An ontology also contains a set of assertions and constraints. These assertions and constraints define rules concerning the relationships between data structures and the way the data they contain can be used.

  • Integrity Constraints define what it means for operational data to be well formed or well structured. These constraints define the rules controlling the validity of data, such as the uniqueness of objects, records, or statements, and the cardinality and optionality of allowed relationships. Simpler constraints define the data types for individual items like dates, numbers and specially formatted strings. (A brief sketch of such constraints in SQL follows this list.)
  • Inference Constraints define how operational data can be combined and manipulated to produce new information – inferences. In most software applications these rules are static, embedded in the code, and not directly accessible to other applications. However, one of the features of ontology specification languages designed to support Artificial Intelligence systems is their ability to explicitly specify these rules so that external systems can learn them.
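Here is the brief sketch promised above, expressing a few typical integrity constraints in SQL (via Python's sqlite3 module) purely as an illustration; the tables, columns and rules are invented and are not taken from any real ontology.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.executescript("""
CREATE TABLE ward (
    ward_id INTEGER PRIMARY KEY,                 -- uniqueness
    name    TEXT NOT NULL UNIQUE                 -- mandatory (optionality) and unique
);
CREATE TABLE admission (
    admission_id INTEGER PRIMARY KEY,
    ward_id      INTEGER NOT NULL REFERENCES ward(ward_id),          -- cardinality: exactly one ward per admission
    admitted_on  TEXT NOT NULL CHECK (admitted_on LIKE '____-__-__') -- a crude data type / format rule
);
""")

# This insert would now be rejected: ward 99 does not exist, so referential
# integrity is violated.
# conn.execute("INSERT INTO admission VALUES (1, 99, '2004-01-15')")
```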

Layer 3. Reference Data

Many ontologies specify reference data in the form of constrained vocabularies, taxonomies, or thesauri. These data are used as components and classifiers of the operational data that is exchanged between agents. Agents agree on the meaning of reference data beforehand and thus can interpret exchanged messages or statements without ambiguity. Implicit in this use of reference data is the assumption that changes to the reference data will be infrequent, as they will likely require version changes of the entire ontology. This is not a trivial event. Modification to a previously agreed constrained vocabulary may require many agents to change the way they interpret and process data and should be avoided if possible. Significant effort should be expended on considering the consequences of changes to reference data. It is too easy to ignore such issues and assume someone else will deal with problems should they arise.
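A trivial sketch of reference data in use: the agents share a small, versioned constrained vocabulary agreed in advance and classify operational data against it. The codes, descriptions and version number below are invented for illustration.

```python
# A shared constrained vocabulary, agreed by all agents before any data is exchanged.
ADMISSION_METHOD = {
    "version": "1.0",
    "codes": {
        "11": "Elective: from waiting list",
        "21": "Emergency: via A&E",
        "31": "Maternity",
    },
}


def interpret(code):
    """Interpret an exchanged code against the agreed vocabulary."""
    try:
        return ADMISSION_METHOD["codes"][code]
    except KeyError:
        # An unknown code usually means the sender is using a newer version of the
        # vocabulary -- exactly the versioning problem described above.
        raise ValueError(f"code {code!r} is not in vocabulary version {ADMISSION_METHOD['version']}")


print(interpret("21"))  # Emergency: via A&E
```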

Layer 4. Operational Data

Operational data, sometimes also called instance data, is supported by its ontology but is not part of it. The purpose of the ontology is to provide structure for the operational data. As a result it is necessary to consider the operational data when evaluating an ontology. It is my experience that there are two general types of operational data.

  • Configuration Data. This data supports the required degree of flexibility in the solution by allowing certain features to be reconfigured. Systems that make a special feature of this type of flexibility are often called data driven. In a banking solution the types of bank account and the data relating to the interest rates associated with each account could be considered configuration data. This is not the same as reference data. Configuration data is expected to change and must not require a version change of the ontology. In a hospital solution configuration data may describe the wards in the hospital and the bed complement on each ward. All agents in such a system expect these things to change over time and must be capable of reconfiguring the way they process data accordingly.

  • Activity Data. Management of activity data is the main reason for the existence of any ontology. Activity data includes the actual exchanges of messages and statements between agents that have agreed to use the ontology. Without activity data everything that goes before is pointless. In a banking environment activity data could describe the opening or closing of accounts or the actual deposits and withdrawals. In a health care setting activity data could describe clinical interventions: the x-rays and blood tests performed on a patient, or at a slightly higher level the inpatient stays and diagnoses. (A small sketch contrasting configuration and activity data follows this list.)
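Here is the small sketch promised above, contrasting the two types of operational data using the banking example; the structures and values are invented for illustration.

```python
# Configuration data: expected to change without a version change of the ontology.
account_types = {
    "SAVINGS":  {"interest_rate": 0.031},
    "CHECKING": {"interest_rate": 0.001},
}

# Activity data: the actual exchanges the ontology exists to support.
activity_log = [
    {"event": "open_account", "account_id": "A-1001", "type": "SAVINGS"},
    {"event": "deposit",      "account_id": "A-1001", "amount": 250.00},
    {"event": "withdrawal",   "account_id": "A-1001", "amount": 75.00},
]

# Agents interpret activity data against the current configuration.
rate = account_types[activity_log[0]["type"]]["interest_rate"]
print(rate)  # 0.031
```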

Summary

The success or failure of any ontology should be judged primarily by its ability to support the exchange of operational activity data between agents. This can only be confirmed after the ontology is implemented, by assessing how the system performs in the context of use. To reduce the risk of failure, in the early stages of specification the various components of the ontology should be assessed individually and collectively in terms of their ability to support the required use cases for operational activity data. The design of any ontology should be assessed in terms of the following components:

  1. Ontology Definition Language
  2. Data Structures
  3. Assertions and Constraints
    1. Integrity Constraints
    2. Inference Constraints
  4. Reference Data
  5. Operational Data
    1. Configuration Data
    2. Activity Data

When the history of early software development is written it will be a travesty. Few historians will have the ability, and even fewer the inclination, to learn long dead programming languages. History will be derived from the documentation, not the source code. Alan Turing's perplexed, handwritten annotation "How did this happen?" on a cutting of Autocode taped into his notebook will remain a mystery.

How did this happen? Annotation of a program bug by Alan Turing

What kind of bug would stump Alan Turing? Was it merely a typo that took a few hours to find? A simple mistake, maybe? Or did the discipline of the machine expose a fundamental misconception and thereby teach him a lesson? The only way to know would be to learn Autocode.

Page from Alan Turing's notebook showing an annotated program with a bug

The first stored program to be successfully executed was written by Tom Kilburn and executed on Monday 21st June 1948 at Manchester University, England. It is said that this was the first and last program that Kilburn ever wrote. The program found the highest factor of a number and took 1 minute to complete on its first run. A second run with a different number took 2 minutes and 52 seconds. Unfortunately no one thought to document the program until Geoffrey C. Tootill wrote an amended version in his notebook a month later, on the 18th July 1948. The original has been lost. Below is a copy of Tootill's version.

Page from Geoffrey C. Tootill's notebook showing his amended version of the first, successfully executed, stored program

Via The National Archive for the History of Computing

Background

Linksys BEFW11S4 Wireless-B Broadband Router

A few months ago I had to setup a home office and decided I would take the opportunity to upgrade my home network. My Linksys BEFSR41 Etherfast Cable / DSL Router had never given me any problems and so I decided to upgrade to the Linksys BEFW11S4 Wireless-B broadband Router. I now have everything working reliably but getting to this happy state and resolving the problems took a lot of luck and in the end the solution was far from obvious. Judging by the bad reviews on Amazon and elsewhere it appears that many people have been unable to fix similar problems with this device. Below is my description of the problem and a solution that worked for me. Hopefully this will help others, but as always, your mileage may vary!

Configuration

I have two Macs running OS X on my home network; both machines have static IP addresses and are wired to the router. I wanted to add a wireless Windows laptop with a DHCP-assigned IP address. I purchased a Linksys BEFW11S4 Wireless-B Broadband Router and at the same time I bought a Proxim ORiNOCO Gold 802.11a/b/g ComboCard for the laptop, which came with a Proxim client utility. Installation and setup of the laptop card was very easy: simply plug in the card, insert the disc, and install the driver. The card immediately picked up my neighbor's open network and I was on the Internet. Setting up the router was a bit more challenging; I copied the configuration from my previous device and enabled DHCP to start issuing IP numbers outside the range used by my two static machines. Everything worked and I was connected to the Internet via my new router.

Symptoms

The wireless network seemed to work fine for several days but then it "hung". The only fix was to turn off the router and turn it back on again. This hang affected all machines connected to the router. Whenever they attempted any network operations they would time out. However the Proxim client utility and the laptop claimed the wireless network was still running! The network hung once or twice a week – annoying, but tolerable. Then I upgraded the laptop from Windows 2000 to XP and things got worse! The network would hang one or two times a day. At first I did not connect the OS upgrade with the router problems; besides, I had another bigger issue.

I connect to work via a Cisco VPN client, which seemed to be working OK, but all of a sudden (in fact immediately after the XP upgrade) Microsoft Exchange slowed to glacial speed. It would take hours to sync with the Exchange Server. This was not acceptable. I had to fix things. I called our company technical support and got through to the guy who manages the VPN. He said "What order did you install the wireless card driver and the VPN client? Because they don't play nice together and you must install the wireless card driver first and then the VPN client."

Solution

  1. Download and install the latest firmware upgrade from Linksys. This is not enough to fix the problem on its own. I tried this first and the network still hung. But it’s a good idea and this solution may not work without it.
  2. Uninstall the Cisco VPN client and the Proxim client utility from the laptop
  3. Reinstall the Proxim client utility on the laptop
  4. Reinstall the Cisco VPN client on the laptop

Outcome

The network has been running for 10 days without a single hitch.

Cause

I'm still not certain what the cause was, but this is what I suspect. The VPN client and the Proxim client utility share something in common – DLLs, configuration, or something! When installed in the wrong order things get messed up, and in unusual circumstances the laptop sends network traffic that is somehow malformed. This affects the router and causes it to hang. Basically the router appears to be intolerant of glitches in low level network messages and this leads to low reliability. Not a great explanation, I know, and it may be completely spurious, but my network now works reliably so I'm happy!

Some time between 1934 and 1950 the first modern computer was created. Pinning down exactly when that event occurred is not easy. It depends on how you define the term computer and what you think is more important: the concept, the design, the first successful test, or the first time the machine solved a real problem. In those early days it usually took years for a team to progress from concept through design to working machine. There were many such teams, working mainly in the US and UK. These teams competed and cooperated; sometimes they shared ideas and designs, and they sent representatives to visit each other's laboratories. On one famous occasion in the summer of 1946 almost all the leaders in the field got together at the Moore School for an 8 week long series of lectures. In short, the story of the emergence of the modern computer is a complex one that involves both direct and indirect contributions from many people.

There are many Computer History Timelines in existence, but all of these suffer from the same flaws: they are incomplete, and their linear nature fails to capture the complex web of influence that was the hallmark of computer development.

The Evolution of Early Computers

Downloadable files available here

In an effort to visualize this web of interaction, I have started to develop a graphical representation of the evolution of the modern computer. Fortunately AT&T have kindly released a package called Graphviz which is capable of drawing complex directed graphs. The graph above is produced by Graphviz from a text file.

The text file contains a detailed description of my approach, the classification I have used, and lists all the machines and the references to the data sources I used. I have not duplicated that information here because the whole point of the exercise is to gather all the data in one place.
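To show roughly how such a text file drives Graphviz, the snippet below writes a tiny fragment of an influence graph in the DOT language and renders it, assuming Graphviz is installed and its dot command is on the PATH. The machines and edges shown are only a made-up fragment, not the actual contents of my file.

```python
import subprocess

# A tiny, illustrative fragment of an influence graph in Graphviz's DOT language.
dot_source = """\
digraph early_computers {
    rankdir=LR;
    "EDVAC design (1945)" -> "Manchester Mk I Prototype (1948)";
    "Manchester Mk I Prototype (1948)" -> "Ferranti Mark I (1951)";
}
"""

with open("early_computers.dot", "w") as f:
    f.write(dot_source)

# Render to PNG; requires Graphviz's `dot` command to be installed and on the PATH.
subprocess.run(["dot", "-Tpng", "early_computers.dot", "-o", "early_computers.png"], check=True)
```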

I have licensed this file with an Attribution-ShareAlike Creative Commons license, so please feel free to download and improve what I have started. If you do make changes please send me a copy and I will share the updates on this page.

For the record, I believe that the Manchester Mk I Prototype was the first computer in the modern sense. But the text file is not intended to prove that this or any other machine was first. It is only intended to record the known dates and influences for computing machines designed between 1934 and 1950. I believe that the graph is complex enough to support many interpretations.