I've Been Thinking

Artificial Intelligence - A State of the Art
    For many years philosophers, scientists, engineers, and people with nothing better to do have been trying to develop a machine that displays some of the characteristics of human intelligence. These people believe that a suitable combination of hardware and software can be made to behave like an intelligent human being.

    Some time ago I decided to jump on the bandwagon and study the cognitive processes of mankind and the duplication of the amazing feats of thought that we humans can achieve.

    The starting point was a study of the literature on Artificial Intelligence. My co-workers and I discovered that, in spite of many claims by workers in the field, the ultimate goal of a thinking machine has not yet been attained.

    We then conducted some research into Artificial Intelligence and discovered that the reason this technology has failed to achieve any really useful results is that the problem has been tackled in the wrong way. Artificially Intelligent machines should be the ultimate goal, not the first goal. It is a mistake to attempt to achieve the ultimate objective in a single step. It is rather like the Wright Brothers saying "OK, we got it off the ground for twenty feet, now for the Atlantic".

    We have decided to take a radically new approach to the problem. We believe that it is necessary to define a set of sub-goals and attempt the attainment of each in turn. This might eventually allow us to achieve the difficult aim of Artificial Intelligence.

    These sub-goals have been defined, along with the criteria that each must meet. We have also indicated where machines of each type might be utilised.

    Artificial Ignorance

    The first sub-goal defined was Artificial Ignorance. Ignorance here is defined as the ability to always get the answer wrong. An Artificially Ignorant Machine would consistently reply with incorrect information. Some of my co-workers believe that a computer that has not been switched on already exhibits this behaviour, but the general consensus is that this is cheating and that we need to create a machine that actively tries to get the right answer but is wrong. This is different from being wrong by omission or inaction.

    A machine of this type could be used in weather forecasting, in predicting the national budget or by television stations when covering general elections.

    Artificial Stupidity

    The second sub-goal defined was Artificial Stupidity. In order to duplicate this behaviour we must be able to model a class of behaviour epitomised by someone who believes the promises of politicians at election time, project managers at review meetings or computer software salesmen. Stupidity is different from ignorance in that the facts are known but the actions are not logical.

    If one of these machines were asked whether it was willing to take on the job of creating a computer system whose design was specified by an end user, and to do it on time and within budget, it would answer yes. This is obviously a stupid answer. It is also the type of behaviour often displayed by management consultants.

    Another example of stupid behaviour that we must be able to duplicate is shown by people who believe that the computer system they have just specified will help to make their job easier or (even worse) be useful. We were tempted to put this behaviour into a special category called "When I get my new system, everything will be wonderful", but decided to leave it in Stupid.

    There has been much discussion as to whether a very large spreadsheet, used to make important company decisions, displays Artificial Ignorance or Artificial Stupidity. We have decided that this is a special case and should be labelled "Really Stupid".

    Artificial Incompetence

    Sub-goal three is Artificial Incompetence. We believe that the test for this is to train the machine in a particular skill, test it in another, and have it believe that it can perform the second skill as well as the first. This is rather like an accountant building computer systems or an engineer running a marketing company.

    Another example of incompetent behaviour is that shown by a previously intelligent person standing for public office. There is a hierarchy involved here. It is believed that standing for Federal Parliament demonstrates less incompetence than standing for State Parliament. Local councils are near the bottom of the tree, with the ACT government being rock bottom.

    Our first Artificial Incompetence machine would be used to help in writing tenders for government departments. The process would go something like this.

    1. Write the tender.
    2. Feed the tender into the Artificial Incompetence machine.
    3. If the machine rejects the tender, re-write it and go back to step 2.
    4. When the machine accepts the tender, submit it.

    In this way we could duplicate the evaluation procedure that seems to go on in government departments. This should give us a competitive advantage over other companies who rely on logic and common sense when responding to such tenders.
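    In the same spirit, the tender-writing loop above can be sketched in a few lines of Python. This is purely a joke sketch under assumed names: `machine_rejects`, `rewrite` and `prepare_tender` are invented stand-ins, and no real Artificial Incompetence machine is implied.

```python
# A playful sketch of the tender-writing loop above. All names are
# illustrative inventions; nothing in the article defines them.

def machine_rejects(tender):
    # Hypothetical stand-in for the Artificial Incompetence machine:
    # it accepts only tenders padded with enough buzzwords.
    return tender.count("synergy") < 3

def rewrite(tender):
    # Each re-write makes the tender a little less competent.
    return tender + " synergy"

def prepare_tender(draft):
    tender = draft                   # step 1: write the tender
    while machine_rejects(tender):   # steps 2-3: feed it in, re-write on rejection
        tender = rewrite(tender)
    return tender                    # step 4: the machine accepts; submit it

print(prepare_tender("We will deliver on time and within budget."))
```

    As in the article, the loop only terminates once the tender has become incompetent enough to pass the evaluation.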

    Artificial Intelligence

    The final sub-goal is Artificial Intelligence itself. Unfortunately, we have been unable to find many examples of true intelligence that we can model. Because of this, determining the test for Artificial Intelligence has been very difficult. Work is continuing in this area.

    Expert Systems

    There has also been heated debate on the topic of Expert Systems. We have been trying to determine into which of the four sub-goals this type of system might fit. Some felt that, while such systems do not exhibit intelligence, they do go some way towards attaining the ultimate goal. Others thought that a special category should be created, along the lines of "I'm not letting that machine diagnose my illness. I bet it can't even spell schizophrenia".

    The conclusion was that Expert Systems themselves are examples of Artificial Stupidity, but that the people who claim that these systems are intelligent fall into the "Who are they fooling?" class.

    Post Script. After a lot of deep thought, late nights and many bottles of wine, we have now decided to abandon the whole project. We have finally seen the light. We realised that the thought that we could actually duplicate intelligence is a demonstration of our naivety. Not only that, but naivety was not even included in our sub-goals, neither real nor artificial.

    We also had the thought that, if it were possible to create an intelligent machine, what would become of us humans? We would all probably be retrained as Chartered Accountants while the machines did the interesting jobs and had all the fun. We have decided to go back to creating incompetent computer systems, a demonstration of Real Stupidity. When we achieve intelligence ourselves, we might try again.

    Then again we might not.

    Bernard Robertson-Dunn, May 2010
