Lecture 4

Computing Systems
Basic arithmetic for computers
claudio.talarico@mail.ewu.edu
Numbers

Bit patterns have no inherent meaning
 conventions define relationship between bits and numbers

Numbers can be represented in any base
 value = Σ (i = 0 to n) digit_i × base^i

Binary numbers (base 2)
 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001...
 decimal: 0 ... 2^n – 1 (for n bits)

Of course it gets more complicated:
 negative numbers
 fractions and real numbers
 numbers are finite (overflow)

How do we represent negative numbers?
i.e., which bit patterns will represent which numbers?
Possible Representations
Sign Magnitude    One's Complement    Two's Complement
  000 = +0          000 = +0            000 = +0
  001 = +1          001 = +1            001 = +1
  010 = +2          010 = +2            010 = +2
  011 = +3          011 = +3            011 = +3
  100 = -0          100 = -3            100 = -4
  101 = -1          101 = -2            101 = -3
  110 = -2          110 = -1            110 = -2
  111 = -3          111 = -0            111 = -1
 Issues: balance, number of zeros, ease of operations
 Which one is best? Why?
Two’s complement format
[Figure: number wheel for 4-bit two's complement. Counting clockwise from 0000 = 0: 0001 = 1, 0010 = 2, ... 0111 = 7, then 1000 = –8, 1001 = –7, ... 1111 = –1, wrapping back to 0. The most negative value (–8) sits next to the most positive one (7).]
Two’s complement operations

Negating a two’s complement number: invert all bits and add 1
 remember: “negate” and “invert” are quite different !
 The sum of a number and its inverted representation must be 111…111two, which represents –1:
  x + x̄ = –1, so x + x̄ + 1 = 0, hence –x = x̄ + 1

Converting n-bit numbers into numbers with more than n bits:
 copy the most significant bit (the sign bit) into the new upper bits
  0010 → 0000 0010
  1010 → 1111 1010
 Referred to as “sign extension”
 MIPS 16-bit immediates get converted to 32 bits for arithmetic
Two’s complement
 Two’s complement gets its name from the rule that the unsigned sum of an n-bit number and its negative is 2^n
 Thus, the negation of a number x is 2^n – x
 Adding a number x and its negation, treating the bit patterns as unsigned:
   x + x̄ + 1 = (x + x̄) + 1 = (2^n – 1) + 1 = 2^n
  since x + x̄ = 111…11two = 2^n – 1
 Negative integers in two’s complement notation look like
large numbers in unsigned notation
MIPS word is 32 bits long
 32-bit signed numbers:

 0000 0000 0000 0000 0000 0000 0000 0000two =               0ten
 0000 0000 0000 0000 0000 0000 0000 0001two =             + 1ten
 0000 0000 0000 0000 0000 0000 0000 0010two =             + 2ten
 ...
 0111 1111 1111 1111 1111 1111 1111 1110two = + 2,147,483,646ten
 0111 1111 1111 1111 1111 1111 1111 1111two = + 2,147,483,647ten  (maxint = 2^31 – 1)
 1000 0000 0000 0000 0000 0000 0000 0000two = – 2,147,483,648ten  (minint = – 2^31)
 1000 0000 0000 0000 0000 0000 0000 0001two = – 2,147,483,647ten
 1000 0000 0000 0000 0000 0000 0000 0010two = – 2,147,483,646ten
 ...
 1111 1111 1111 1111 1111 1111 1111 1101two = – 3ten
 1111 1111 1111 1111 1111 1111 1111 1110two = – 2ten
 1111 1111 1111 1111 1111 1111 1111 1111two = – 1ten

 msb = bit 31, lsb = bit 0
Addition and subtraction
 Just like in grade school (carry/borrow 1s)
   0111          0111          0110
 + 0110        - 0110        - 0101
 ------        ------        ------
   1101          0001          0001
 Two's complement operations easy
 subtraction using addition of negative numbers
   0111   (+7)
 + 1010   (–6)
 ------
   0001   (+1; the carry out of the top bit is discarded)
 Overflow (result too large for finite computer word):
 adding two n-bit numbers does not yield an n-bit number
   0111
 + 0001
 ------
   1000
Detecting overflow
 No overflow when adding a positive and a negative
number
 No overflow when signs are the same for subtraction
 Overflow occurs when the value affects the sign:
 overflow when adding two positives yields a negative
 or, adding two negatives gives a positive
 or, subtract a negative from a positive and get a
negative
 or, subtract a positive from a negative and get a
positive
 Consider the operations A + B and A – B
  Can overflow occur if B is 0 ? No: adding or subtracting 0 leaves A unchanged
  Can overflow occur if A is 0 ? Yes: 0 – B negates B, and negating the most negative number overflows
Effects of overflow
 An exception (interrupt) occurs
 Control jumps to predefined address for exception
 Interrupted address is saved for possible resumption
 Details based on software system / language
 Don't always want to detect overflow
 new MIPS instructions: addu, addiu, subu
 unsigned integers are commonly used for memory
addresses where overflows are ignored
note: addiu still sign-extends!
note: sltu, sltiu for unsigned comparisons
Floating point numbers (a brief look)
 We need a way to represent
 numbers with fractions, e.g., 3.1416
 very small numbers, e.g., .000000001
 very large numbers, e.g., 3.15576 x 109
 solution: scientific notation
  sign, exponent, significand:
   (–1)^sign × significand × 2^E
  more bits for significand gives more accuracy
  more bits for exponent increases range
 A number in scientific notation that has no leading 0s is called normalized:
   (–1)^sign × 1.xxx…xxxtwo × 2^E
  where the significand 1.xxx…xxx is (1 + fraction)
Floating point numbers
 A floating point number represents a number in which the binary point is not fixed
 IEEE 754 floating point standard:
  single precision (32 bits):
   1 bit sign, 8 bit exponent, 23 bit fraction
  double precision (64 bits):
   1 bit sign, 11 bit exponent, 52 bit fraction
IEEE 754 floating-point standard
 Leading “1” bit of significand is implicit
 Exponent is “biased” to make sorting easier
 all 0s is smallest exponent all 1s is largest (almost)
 bias of 127 for single precision and 1023 for double
precision
 summary: (–1)^sign × (1 + fraction) × 2^(exponent – bias)
 Example:
  decimal: –0.75 = – ( ½ + ¼ )
  binary: –0.11two = –1.1two × 2^–1
  floating point:
   exponent field = E + bias = –1 + 127 = 126 = 0111 1110two
  IEEE single precision:

   sign = 1
   exponent = 01111110
   fraction = 100 0000 0000 0000 0000 0000 (23 bits)
IEEE 754 encoding of floating points
 Number               Single precision       Double precision
                      Exponent  Fraction     Exponent  Fraction
 0                    0         0            0         0
 +/– denormalized     0         nonzero      0         nonzero
 +/– floating point   1 – 254   anything     1 – 2046  anything
 +/– infinity         255       0            2047      0
 NaN                  255       nonzero      2047      nonzero
Floating point complexities
 Operations are somewhat more complicated (see text)
 In addition to overflow we can have “underflow”
 Overflow: a positive exponent too large to fit in the exponent field
 Underflow: a negative exponent too large to fit in the exponent field
 Accuracy can be a big problem
 IEEE 754 keeps two extra bits, guard and round
 four rounding modes
 positive divided by zero yields “infinity”
 zero divided by zero yields “not a number” (NaN)
 other complexities
 Implementing the standard can be tricky
 Not using the standard can be even worse: Pentium bug!
Floating point addition/subtraction
To add or subtract two floating point numbers:
 Step 1. Align the binary point of the number with the smaller exponent:
  compare the two exponents
  shift the significand of the smaller number to the right by the difference of the two exponents
 Step 2. Add/subtract the significands
 Step 3. The result may not be in normalized scientific notation, so we need to adjust it:
  either shift the significand right and increment the exponent, or shift it left and decrement the exponent
 Step 4. Round the number:
  add 1 to the least significant bit if the first bit thrown away is 1
 Step 5. If necessary, re-normalize
Floating point addition/subtraction
[Figure: floating-point addition hardware. A small ALU computes the exponent difference, which controls how far the smaller number's significand is shifted right; a big ALU adds the significands; normalization hardware shifts the sum right or left while incrementing or decrementing the exponent; an overflow/underflow check can raise an exception; rounding hardware rounds the significand and, if the result is no longer normalized, loops back to the normalization step before the final sign, exponent, and fraction are assembled.]
Accuracy of floating point arithmetic
 Floating-point numbers are often approximations for a
number they can’t really represent
 Rounding requires the hardware to include extra bits in
the calculation
 IEEE 754 always keeps 2 extra bits on the right during
intermediate additions called guard and round
respectively
Common fallacies
 Fallacy: floating point addition is associative, x + (y + z) = (x + y) + z
 Example, with x = –1.5ten × 10^38, y = 1.5ten × 10^38, z = 1.0:
  x + (y + z) = –1.5ten × 10^38 + (1.5ten × 10^38 + 1.0) = –1.5ten × 10^38 + 1.5ten × 10^38 = 0.0
  (x + y) + z = (–1.5ten × 10^38 + 1.5ten × 10^38) + 1.0 = 0.0 + 1.0 = 1.0
  (1.0 is too small to change 1.5ten × 10^38 at this precision, so the two orders disagree)
 Just as a left shift can replace an integer multiply by a power of 2, a right shift is the same as an integer division by a power of 2 (true for unsigned, but not for signed, even when we sign extend)
 Example (shift right by two = divide by 4ten):
  –5ten = 1111 1111 1111 1111 1111 1111 1111 1011two
  shifted right by two with sign extension: 1111 1111 1111 1111 1111 1111 1111 1110two = –2ten
  It should be –1ten
Concluding remarks
 Computer arithmetic is constrained by limited precision
 Bit patterns have no inherent meaning (side effect of the
stored-program concept) but standards do exist
 two’s complement
 IEEE 754 floating point
 Computer instructions determine “meaning” of the bit patterns
 Performance and accuracy are important so there are many
complexities in real machines
 Algorithm choice is important and may lead to hardware
optimizations for both space and time (e.g., multiplication)