31Y7 : Computing Science Options: Biologically Inspired Computation and Computability

Wednesday 16 December 1998 1400 - 1700 hours

Answer TWO questions from Section A and TWO questions from Section B. Write your answers to each section in separate answer books.

All questions carry equal marks.

The distribution of marks among the parts of each question is indicated.





It is essential that you write your registration number on the front of each answer book.

Also, when you have completed the examination, the number of answer books that you have used must be prominently written on the front of one book.

Section A


a) Both computers and real neurons process inputs to produce outputs.

Discuss the differences between

i) The input to a neuron and the input to a computer

ii) The way processing takes place in a neuron, and in the CPU of a computer

iii) The output of a neuron, and the output of a computer.


Considering your answers to the above, discuss whether there is any similarity between neurons and computers. [3,4,3,4]

b) McCulloch-Pitts neurons (that is, simple linear threshold neurons) are the simplest form of model neuron.

i) Describe the operation of a McCulloch-Pitts model neuron.

ii) Explain what is meant by a bias input, and why it is unnecessary for a unit to have both a bias weight and a variable threshold.

iii) Describe how a single McCulloch-Pitts neuron (with either a bias input or a threshold) may be used to implement a 2-input AND gate. [3,4,4]
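One possible concrete answer to part iii) can be sketched in Python; the particular weights and threshold below are an illustrative choice, not the only one the question admits:

```python
# A minimal sketch of a McCulloch-Pitts (linear threshold) unit.
# The weights [1, 1] and threshold 1.5 are our illustrative choice:
# only the input (1, 1) gives a weighted sum (2) reaching the threshold.

def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of the inputs reaches the threshold."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

def and_gate(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=1.5)
```

Equivalently, the threshold could be replaced by a bias weight of -1.5 on a constant input of 1, with the firing condition becoming "weighted sum >= 0" (the point of part ii).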




a) The perceptron learning rule and the Delta rule perform similar tasks using similar networks.

i) Write down the formula for the perceptron learning rule, explaining what each term is.

ii) The perceptron learning rule and the Delta rule are superficially identical, yet applying the Delta rule and the perceptron learning rule to the same training data will generally lead to different results. Discuss the reasons for this.

iii) The value used for the learning rate is more critical in the Delta rule than it is in the perceptron learning rule. Explain the reason for this. [4,4,4]
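The update in question can be sketched as w <- w + eta * (t - y) * x. Below is an illustrative Python implementation trained on the (linearly separable) 2-input AND function; the data, learning rate and stopping rule are our choices, not prescribed by the question:

```python
# Perceptron learning rule: w <- w + eta * (t - y) * x, where t is the
# target, y the unit's thresholded output, and eta the learning rate.
# A bias weight w[0] on a constant input of 1 replaces a variable threshold.

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
T = [0, 0, 0, 1]                      # AND targets
w = [0.0, 0.0, 0.0]                   # bias weight plus one weight per input
eta = 0.1

def predict(x):
    s = w[0] + w[1] * x[0] + w[2] * x[1]   # bias input fixed at 1
    return 1 if s >= 0 else 0

for _ in range(100):                  # convergence is guaranteed for
    errors = 0                        # separable data such as AND
    for x, t in zip(X, T):
        y = predict(x)
        if y != t:
            errors += 1
            delta = eta * (t - y)     # the (t - y) error term
            w[0] += delta             # bias input is 1
            w[1] += delta * x[0]
            w[2] += delta * x[1]
    if errors == 0:
        break
```

Note that updates occur only when the thresholded output is wrong, which is one root of the difference from the Delta rule, where the error is computed on the raw (unthresholded) activation.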


a) The Back-propagated Delta rule is an extension to the Delta rule.

i) Explain why this extension is important.

ii) Describe the usual topology (or structure) of the network used.

iii) Explain why the neurons inside the network need to be non-linear.

iv) Briefly describe how a set of data which has been collected may be used to train a Back-propagated Delta rule network. [2,3,3,5]
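As an illustration of the training procedure in part iv), here is a minimal batch-trained two-layer sigmoid network on XOR (a problem the plain Delta rule cannot solve). The architecture (3 hidden units), learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)        # XOR targets

W1 = rng.normal(0, 1, (2, 3)); b1 = np.zeros(3)  # input -> hidden
W2 = rng.normal(0, 1, (3, 1)); b2 = np.zeros(1)  # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)    # hidden layer: must be non-linear (part iii)
    Y = sigmoid(H @ W2 + b2)    # output layer
    return H, Y

eta = 1.0
_, Y = forward(X)
loss_before = np.mean((Y - T) ** 2)
for _ in range(5000):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)          # output-layer error (Delta rule)
    dH = (dY @ W2.T) * H * (1 - H)      # error back-propagated to hidden layer
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)
_, Y = forward(X)
loss_after = np.mean((Y - T) ** 2)
```

With linear hidden units the two layers would collapse into a single linear map, so the back-propagated error terms above would gain nothing over a one-layer network.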





a) i) Explain what is meant by a self-organising neural network.

ii) Explain why it is difficult to find clusters in high-dimensional data by inspection, and discuss how self-organised neural networks can assist in finding clusters in high-dimensional data.

iii) Briefly explain the differences between Vector Quantisation and the Kohonen feature mapping algorithm. [3,6,4]
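As background for part iii), here is a sketch of plain Vector Quantisation by winner-take-all competitive learning; the data, codebook size and learning rate are our illustrative choices. The Kohonen algorithm would, in addition, arrange the units on a grid and update the winner's topological neighbours as well:

```python
import random
random.seed(1)

# Two well-separated 2-D clusters (synthetic data, for illustration only).
data = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)] + \
       [(random.gauss(5, 0.1), random.gauss(5, 0.1)) for _ in range(50)]

codebook = [[1.0, 1.0], [4.0, 4.0]]    # two codebook (prototype) vectors

def nearest(x):
    """Index of the codebook vector closest to input x (the 'winner')."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], x)))

eta = 0.1
for _ in range(20):
    for x in data:
        w = codebook[nearest(x)]       # winner takes all
        for d in range(2):             # move only the winner towards the input
            w[d] += eta * (x[d] - w[d])
```

After training, each codebook vector has migrated to the centre of one cluster, which is how such networks reveal cluster structure that is hard to see by inspection in high dimensions.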


b) Genetic algorithms are an optimisation technique inspired by Darwinian selection. Briefly describe the operation of a genetic algorithm, explaining terms such as generation, fitness function, crossover and mutation. [12]
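The operation described in b) can be sketched on the toy "OneMax" problem (maximise the number of 1-bits in a string); the problem, representation and all parameter values below are our illustrative choices:

```python
import random
random.seed(42)

L, POP, GENS = 20, 30, 40             # string length, population, generations

def fitness(bits):                    # fitness function: count of 1-bits
    return sum(bits)

def crossover(a, b):                  # single-point crossover of two parents
    p = random.randrange(1, L)
    return a[:p] + b[p:]

def mutate(bits, rate=0.02):          # mutation: flip each bit with small prob.
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
best_initial = max(map(fitness, pop))
for _ in range(GENS):                 # one loop iteration = one generation
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]          # truncation selection: fitter half breeds
    pop = parents[:1] + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(POP - 1)]   # keep one elite unchanged
best_final = max(map(fitness, pop))
```

Keeping the single best individual unchanged each generation (elitism) guarantees the best fitness never decreases; it is a common refinement rather than a defining feature of the algorithm.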



Section B


a) Consider the following problem:

Given a string s consisting of a number of "(" characters followed by a number of ")" characters, answer "yes" if the number of "(" characters is the same as the number of ")" characters, and answer "no" otherwise.

Prove that it is not possible to construct a finite state machine to solve this problem. [10]
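For intuition only (this is not a substitute for the pigeonhole-style proof the question asks for), note that a recogniser must in effect remember the unbounded count of "(" characters, whereas a finite state machine has only finitely many states. A Python sketch with an unbounded counter:

```python
def equal_brackets(s):
    # The input is promised to be of the form "("^m ")"^n;
    # answer "yes" iff m == n.  The count m is unbounded, which is
    # exactly what a finite state machine cannot remember: with k states,
    # two different prefixes "("^i and "("^j (i != j <= k) must land in
    # the same state, so the machine answers identically on
    # "("^i ")"^i and "("^j ")"^i -- a contradiction.
    return "yes" if s.count("(") == s.count(")") else "no"
```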


b) State the components of a Turing machine. [4]


c) Construct a Turing machine to solve the following problem:

Given a string s consisting of "(" and ")" characters (in any combination), answer "yes" if each ")" matches a previous "(", i.e. the machine should accept only strings of matched nested brackets, and answer "no" otherwise. [7]
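One marking strategy such a machine can use is simulated below in Python (rather than given as a tuple set): repeatedly find the leftmost un-cancelled ")", cancel it against the nearest unmarked "(" to its left by overwriting both with a marker symbol, and accept only if everything cancels. The marker "X" is our choice of auxiliary tape symbol:

```python
def matched(s):
    """Simulate a Turing-machine tape-marking strategy for nested brackets."""
    tape = list(s)
    while True:
        try:
            j = tape.index(")")          # leftmost un-cancelled ")"
        except ValueError:
            break                        # no ")" left on the tape
        i = j - 1
        while i >= 0 and tape[i] != "(":
            i -= 1                       # scan left for an unmarked "("
        if i < 0:
            return "no"                  # ")" with no matching "("
        tape[i] = tape[j] = "X"          # cancel the pair
    return "no" if "(" in tape else "yes"   # leftover "(" means unmatched
```

For part d), the first rejecting position is already in hand when the machine rejects: either the offending ")" (index j) or the leftmost surviving "(".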

d) Sketch informally how you might modify the Turing machine of part c) so that, in the case where the answer is "no", the machine outputs the original string, indicating the position at which the fault occurred.

You should not give the set of tuples for the new machine. [4]







a) State the Halting Problem for Turing machines, and discuss the significance of the unsolvability of this problem. [4]


b) Given the unsolvability of the Halting Problem, show the following problem is also unsolvable.

Given a Turing machine M and an input tape t, does the symbol "@" appear on the output tape at any point in the computation (assuming that the alphabet of symbols includes "@")? [9]


c) State the Church-Turing thesis. Discuss this statement, indicating its implications for designers and programmers of real hardware and software. How does the study of computational models provide evidence to support the Church-Turing thesis? Why can’t we prove the Church-Turing thesis holds? [12]





a) State the components of a Deterministic Finite Automaton (DFA). [3]


b) What does it mean if we say that a DFA M accepts a language L? The Deterministic Finite Automata, as a class of machines, accept a class of languages (called L_DFA). L_DFA can also be characterised in terms of language constructors as the class L_RE. Explain in more detail what L_RE is, and how it is constructed. [5]


c) Consider the class of Finite Automata obtained by adding nondeterminism (Nondeterministic Finite Automata — NFAs). Sketch the proof that the classes of languages accepted by DFAs and NFAs are the same. [10]
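The heart of that proof is the subset construction: build a DFA whose states are sets of NFA states. A Python sketch, on an illustrative NFA of our choosing (accepting strings over {a, b} ending in "ab", with no epsilon-moves for simplicity):

```python
# NFA: states 0, 1, 2; accepting state 2; missing entries mean "no move".
nfa_delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'b'): {2}}
nfa_start, nfa_accept = 0, {2}

def nfa_accepts(s):
    states = {nfa_start}                 # track every state the NFA could be in
    for c in s:
        states = set().union(*(nfa_delta.get((q, c), set()) for q in states))
    return bool(states & nfa_accept)

def to_dfa():
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start = frozenset({nfa_start})
    delta, todo, seen = {}, [start], {start}
    while todo:
        S = todo.pop()
        for c in 'ab':
            T = frozenset().union(*(nfa_delta.get((q, c), set()) for q in S))
            delta[(S, c)] = T
            if T not in seen:
                seen.add(T); todo.append(T)
    # A subset-state accepts iff it contains some accepting NFA state.
    accept = {S for S in seen if S & nfa_accept}
    return start, delta, accept

def dfa_accepts(s):
    start, delta, accept = to_dfa()
    S = start
    for c in s:
        S = delta[(S, c)]
    return S in accept
```

The other direction of the proof is immediate: every DFA is already an NFA whose transition relation happens to be single-valued.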


d) Define the problem classes P, NP and NP-complete. Discuss the relationship between these problem classes and the significance of the question "does P=NP?". [4]


e) Consider the problem EC, which is known to be NP-complete. What conclusion can be drawn if:

i) it were shown that EC is not in P?

ii) it were shown that EC is in P?

iii) it were shown that P=NP?