Variants of Turing machines

CSC 4170
Theory of Computation
Variants of Turing machines
Section 3.2
3.2.a
Turing machines with the “stay put” option
A transition of this type of machine may have either L (move left),
or R (move right), or S (stay put).
q1 --(a → b, S)--> q2
“Replace a with b and go to state q2 without moving the head”
This does not increase the power of the machine, because the above
transition can be simulated by an ordinary TM: replace a with b and move
right into a fresh intermediate state, then, on whatever symbol is read
there, move back left into q2.
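In Python, a minimal sketch of this simulation (our own illustration; state names such as mid_0 are made up) that rewrites a whole transition table with S-moves:

```python
# Convert a TM with "stay put" (S) moves into an equivalent ordinary TM.
# delta maps (state, symbol) -> (new_state, written_symbol, move in {"L","R","S"}).
def remove_stay_put(delta, tape_alphabet):
    new_delta, fresh = {}, 0
    for (q, a), (q2, b, move) in delta.items():
        if move != "S":
            new_delta[(q, a)] = (q2, b, move)
            continue
        # Simulate the S-move: write b, step right into a fresh state,
        # then step back left (leaving that cell unchanged) into q2.
        mid = f"mid_{fresh}"
        fresh += 1
        new_delta[(q, a)] = (mid, b, "R")
        for c in tape_alphabet:
            new_delta[(mid, c)] = (q2, c, "L")
    return new_delta

# Example: the transition "in q1, replace a with b, stay put, go to q2".
print(remove_stay_put({("q1", "a"): ("q2", "b", "S")}, {"a", "b", "-"}))
```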
3.2.b
Multitape Turing machines
A multitape TM has k tapes, each with its own read/write head.
Initially, the input is written on the first tape, and all the other tapes
are blank, with each head at the beginning of the corresponding tape.
For a 3-tape TM, a transition will look like
q1 --(0,1,1 → 1,0,1, R,R,L)--> q2
“If you are in state q1 and see 0 on Tape1, 1 on Tape2 and 1 on Tape3,
• type 1 on Tape1, 0 on Tape2 and 1 on Tape3;
• move Head1 right, Head2 right and Head3 left;
• go to state q2.”
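In code form (a hedged illustration with our own naming, not the textbook's notation), such a transition is one entry of a table keyed by the current state and the symbols under the k heads:

```python
# One entry of a 3-tape transition table:
# (state, symbols under the 3 heads) -> (new state, symbols to write, head moves).
delta = {
    ("q1", ("0", "1", "1")): ("q2", ("1", "0", "1"), ("R", "R", "L")),
}

new_state, writes, moves = delta[("q1", ("0", "1", "1"))]
print(new_state, writes, moves)   # q2 ('1', '0', '1') ('R', 'R', 'L')
```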
3.2.c
Multitape Turing machines: Example
Design a fragment of a 2-tape TM that swaps the contents of the tapes,
starting from the position where the heads are at the beginning, and
assuming that no blank is followed by a non-blank symbol. Tape alphabet: {0,1,-}
State Swap loops to itself on:
0,0 → 0,0, R,R
0,1 → 1,0, R,R
0,- → -,0, R,R
1,0 → 0,1, R,R
1,1 → 1,1, R,R
1,- → -,1, R,R
-,0 → 0,-, R,R
-,1 → 1,-, R,R
and goes to state Done on:
-,- → -,-, L,L
Example tape contents:   Tape1: 1 1 0 - …   Tape2: 0 0 - - …
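A small Python simulation of this fragment (our own sketch; the two tapes are finite lists padded with the blank symbol -):

```python
# Simulate the 2-tape Swap/Done fragment: in state Swap, exchange the symbols
# under the two heads and move both heads right; on two blanks, go to Done.
def swap_tapes(tape1, tape2):
    n = max(len(tape1), len(tape2)) + 1            # pad with blanks ("-")
    t1 = list(tape1) + ["-"] * (n - len(tape1))
    t2 = list(tape2) + ["-"] * (n - len(tape2))
    i, state = 0, "Swap"
    while state == "Swap":
        if t1[i] == "-" and t2[i] == "-":
            state = "Done"                         # -,- -> -,-, L,L : stop swapping
        else:
            t1[i], t2[i] = t2[i], t1[i]            # x,y -> y,x, R,R
            i += 1
    return "".join(t1).rstrip("-"), "".join(t2).rstrip("-")

print(swap_tapes("110", "00"))   # ('00', '110')
```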
3.2.d
Simulating multitape TM with ordinary TM
Theorem 3.13: Every multitape TM M has an equivalent single-tape TM S
Proof idea: The tape contents and the head positions of M can be
represented on the single tape of S and updated correspondingly, as
shown in the following example for a 3-tape M; S follows the steps of
M and accepts iff M accepts.
M:   Tape1: 0 0 1 - …    Tape2: - - - - …    Tape3: - - - - …    (each head on its leftmost cell)
S (in state q0):   # 0̇ 0 1 # -̇ # -̇ # …
(a dot over a symbol on S's tape marks the position of the corresponding head of M)
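The same encoding can be sketched in Python (our illustration of the proof idea, following the figure's marking convention):

```python
# Encode a multitape configuration on S's single tape, as in the proof idea:
# the tapes are separated by "#" and each virtual head position is marked
# by a dot attached to the symbol it points at.
def encode(tapes, heads):
    pieces = ["#"]
    for tape, h in zip(tapes, heads):
        cells = list(tape)
        cells[h] = cells[h] + "."       # dotted symbol = virtual head position
        pieces.append(" ".join(cells))
        pieces.append("#")
    return " ".join(pieces)

# M's configuration from the figure: Tape1 holds 001, Tapes 2 and 3 are blank,
# all heads on the leftmost cell.
print(encode(["001", "-", "-"], [0, 0, 0]))   # # 0. 0 1 # -. # -. #
```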
3.2.e
Nondeterministic Turing machines
A nondeterministic TM is allowed to have more than one transition
for a given state and tape symbol:
q1 --(a → b, R)--> q2    and    q1 --(a → c, L)--> q3
A string is accepted if at least one branch of the computation reaches
the accept state.
Theorem 3.16: Every nondeterministic TM has an equivalent
deterministic TM.
Proof omitted. Idea: Simulate every possible branch of computation
in a breadth-first manner.
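A minimal Python sketch of this breadth-first simulation (ours, not the textbook's construction; successors and accepting are assumed helpers describing the nondeterministic machine):

```python
from collections import deque

# Breadth-first simulation of a nondeterministic TM: configurations are explored
# level by level, so an accepting branch is found whenever one exists.
# successors(config) returns the list of next configurations (assumed helper);
# accepting(config) says whether a configuration is accepting (assumed helper).
def bfs_accepts(start_config, successors, accepting, max_steps=10_000):
    queue = deque([start_config])
    for _ in range(max_steps):
        if not queue:
            return False               # every branch has halted without accepting
        config = queue.popleft()
        if accepting(config):
            return True
        queue.extend(successors(config))
    return None                        # undecided within the step budget
```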
3.2.f
Enumerators
An enumerator is a TM with two tapes: the work tape and the printer.
• Initially both tapes are blank.
• On the work tape it can read, write and move in either direction just as an ordinary TM.
• On the printer, it can only print one of the symbols of the language alphabet, or #,
serving as the “end of string” symbol, which is not in the language alphabet.
Transitions thus look like
q1 --(a → b, D ; print c)--> q2
meaning “if, in state q1, you see an a on the work tape, replace it with b, move in the
direction D (D=L or D=R), go to state q2 and print c on the printer”.
Every time a symbol is printed on the printer, the printer head moves right.
If c is absent, nothing happens on the printer.
• There is a start state, but there are no accept or reject states. Entering a configuration
from which there is no transition causes the enumerator to halt.
• The language enumerated by an enumerator is the set of strings (separated with #)
that it prints on the printer.
It is OK if some of the strings are printed more than once.
3.2.g
An example of an enumerator
[Figure: the state diagram of an enumerator with states start, "go to the beginning", and print, shown together with its work tape and printer, both initially blank. Its transitions use the work-tape symbols $ and 0; as they suggest, the machine keeps extending a block of 0s on the work tape, returns to the beginning of the block, and prints its contents followed by #, so the printer receives longer and longer strings of 0s separated by #.]
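Under this reading of the figure, a rough functional analogue in Python (our sketch, not a literal transcription of the state diagram) is:

```python
# A rough analogue of the example enumerator: keep a growing block of 0s
# on the "work tape" and print each block followed by "#" on the "printer".
def enumerate_zeros(rounds):
    printer = []
    work_tape = ""
    for _ in range(rounds):
        work_tape += "0"             # extend the block of 0s on the work tape
        printer.append(work_tape)    # go to the beginning and print the block
    return "#".join(printer) + "#"

print(enumerate_zeros(4))   # 0#00#000#0000#
```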
3.2.h
Enumerability vs Turing recognizability
Definition: A language is said to be (recursively) enumerable iff
some enumerator enumerates it.
Theorem 3.21: A language is Turing recognizable iff it is enumerable.
Proof sketch.
(⇐): Suppose E enumerates L. Construct a TM M that works as follows:
M = “On input w:
1. Simulate E. Every time E prints a new string, compare it with w.
2. If w is ever printed, accept.”
(⇒): Suppose M recognizes L. Let s1, s2, s3, … be the lexicographic list of all strings
over the alphabet of L. Construct an enumerator E that works as follows:
E = “ 1. Repeat the following for i=1,2,3,…
2. Simulate M for i steps on each of the inputs s1,s2,…,si.
3. If any computations accept, print out the corresponding sj.”
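A hedged Python sketch of the (⇒) construction; run(m, s, steps) is a hypothetical helper that reports whether m accepts s within the given number of steps, and all_strings() yields s1, s2, s3, …:

```python
# Sketch of the (=>) direction: turning a recognizer M into an enumerator.
# run(m, s, steps) is a hypothetical helper: True iff m accepts s within `steps` steps.
# all_strings() yields s1, s2, s3, ... in lexicographic order.
def enumerate_language(m, run, all_strings, rounds):
    printed = []
    for i in range(1, rounds + 1):
        first_i = [s for _, s in zip(range(i), all_strings())]   # s1, ..., si
        for s in first_i:
            if run(m, s, i):          # M accepts s within i steps
                printed.append(s)     # "print" s; repetitions are allowed
    return printed

# Toy usage: a "machine" accepting strings over {0,1} that start with 1.
def binaries():
    from itertools import count, product
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

print(enumerate_language(None, lambda m, s, i: s.startswith("1") and len(s) <= i,
                         binaries, rounds=4))   # ['1', '1']
```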
3.2.i
Turing machines with an output
(not in the textbook!)
The only difference from an ordinary TM is that a TM with an output
(TMO) has a state halt instead of accept and reject; if and when such a
machine reaches the halt state, the contents of the tape (up to the first
blank cell) will be considered its output.
Example: Design a machine that, for every input w, returns w0.
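One possible design (our sketch, not necessarily the one the course has in mind): sweep right over the input without changing it, write 0 on the first blank, and halt. As a transition table plus a tiny simulator in Python:

```python
# A TMO computing f(w) = w0: scan right over the input unchanged,
# write 0 on the first blank, and halt.
delta = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "-"): ("halt", "0", "R"),
}

def run_tmo(delta, w, blank="-"):
    tape, i, state = list(w) + [blank], 0, "q0"
    while state != "halt":
        state, tape[i], move = delta[(state, tape[i])]
        i += 1 if move == "R" else -1
        if i == len(tape):
            tape.append(blank)                 # extend the tape with blanks as needed
    out = "".join(tape)
    return out[:out.index(blank)]              # output = tape contents up to first blank

print(run_tmo(delta, "01"))   # 010
```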
3.2.j
Computable functions
A function g: Σ* → Σ* is said to be computable iff there is a TMO
M such that, for every input w ∈ Σ*, M returns the output u with u = g(w).
In this case we say that M computes g.
Example: Let f: {0,1}* → {0,1}* be the function defined by
f(w) = w0,
so that f(ε) = 0, f(0) = 00, f(1) = 10, f(00) = 000, f(01) = 010, etc.
Then f is computable as we saw on the previous slide.
The graph of such a function is the language
{(w,u) | w ∈ Σ*, u = g(w)}.
E.g., the graph of the above function f is
{(ε,0), (0,00), (1,10), (00,000), (01,010), …},
i.e.
{(w,u) | w ∈ {0,1}*, u = w0}.
3.2.k
Computability vs decidability
Theorem: A function is computable iff its graph is decidable.
Proof sketch. Let g: Σ* → Σ* be a function.
(⇒): Suppose C is a TMO that computes g. Construct a TM D that works as follows:
D = “On input t:
1. If t does not have the form (w,u), where w,u ∈ Σ*, then reject. Otherwise,
2. Simulate C for input w. If it returns u, accept; otherwise reject.”
(⇐): Suppose a TM D decides the graph of g. Let s1, s2, s3, … be the lexicographic list of
all strings over the alphabet Σ. Construct a TMO C that works as follows:
C =“On input t:
Simulate D for each of the inputs (t,s1), (t,s2),(t,s3),… until you find si
such that (t,si) is accepted, and return this si.”
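A hedged Python sketch of this (⇐) direction, with decides_graph(w, u) standing in for the decider D and strings() yielding s1, s2, s3, …:

```python
from itertools import count, product

# Sketch of the (<=) direction: compute g(t) by searching for the unique u
# with (t, u) in the graph, using the decider D (here decides_graph).
def compute(t, decides_graph, all_strings):
    for s in all_strings():            # s1, s2, s3, ... in lexicographic order
        if decides_graph(t, s):        # D accepts (t, s)
            return s                   # this s is g(t); the search halts because
                                       # some such s exists (g is total)

def strings():                         # all strings over {0,1}, shortest first
    yield ""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# Toy usage with g(w) = w0: the "decider" just checks whether u = w0.
print(compute("01", lambda w, u: u == w + "0", strings))   # 010
```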