Electronics Module 0: Introduction and Physics Background

I. Basic Physics Review

We start from some basic quantities in physics:

U ~ energy, units of Joules (J)
F ~ force, units of Newtons (N)
Q ~ charge, units of Coulombs (C)

1 C = 6.25 x 10^18 elementary charges <-> e = 1.6 x 10^-19 C

For example, the force between two charges Q and q separated by a distance r is given by Coulomb's law, F = kQq/r^2, where k = 1/(4πε₀) is Coulomb's constant (more on this when we talk about capacitors). Now, here's the cool thing: if we're only interested in how that first charge, Q, affects everything around it, we can make an abstraction which eliminates the need for the second charge q, and say that Q creates an electric field that fills all space, which the second charge q (or any other unfortunate charge that wanders anywhere near Q) experiences. We do this by defining

F = qE

where E is the electric field, measured in units of Newtons/Coulomb (or, as we'll see later, equivalently in Volts/meter), which for a point charge is given by:

E = kQ/r^2

Similarly, there is a potential energy due to two charges separated by a distance r. One crucial point to keep in mind is that potential energy is only meaningful as a difference, i.e. the difference in potential energy between two different configurations. There's no such thing as an "absolute" potential energy: since it represents the potential to do work when a system changes from one state to another, it is undefined unless you are comparing two distinct states.
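The force and field formulas above are easy to check numerically. Here is a quick sketch in Python; the specific charge values and the 10 cm separation are made up for illustration:

```python
# Coulomb's law and the field abstraction, checked numerically.
# The charge values and separation below are illustrative, not from the text.
k = 8.99e9    # Coulomb's constant, N*m^2/C^2
e = 1.6e-19   # elementary charge, C

Q = 1e-6      # source charge, C (1 microcoulomb)
q = 1e-6      # test charge, C
r = 0.10      # separation, m

F = k * Q * q / r**2   # Coulomb's law: force between the two charges, N
E = k * Q / r**2       # field of Q alone, at distance r, N/C

# The field abstraction gives the same answer back: F = qE.
assert abs(F - q * E) < 1e-12

print(F)      # force in Newtons (~0.9 N here)
print(1 / e)  # elementary charges in one Coulomb, ~6.25e18
```

Note that the field E exists whether or not the test charge q is there to feel it; that is the whole point of the abstraction.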
In physics classes, we usually define the state where the charges have infinite separation to have U = 0, and find that as the two charges are moved towards each other, the potential energy (strictly, the potential energy difference relative to U = 0 at infinity) is given as:

U = -∫ (from ∞ to r) F·ds = kQq/r

where the · represents the usual dot product of two vectors, and the "cosine" factor drops out since the path of the integral and the direction of the electric force both lie along the radial direction. We can make the same abstraction as before, "removing" the second charge q, and saying that q experiences a potential difference (without the word "energy") due to the effects of charge Q, which causes its potential energy to change as:

U = qV

where V is the "potential," "potential difference," or "voltage" (hence V), measured in units of Joules/Coulomb ("energy per unit charge"). But since voltage appears so frequently, it has its own unit, the volt: 1 V = 1 J/C (with the quantity and its associated unit named after the Italian physicist Alessandro Volta). For a point charge, the potential at any point a radius r away is given as

V = U/q = -∫ (from ∞ to r) E·ds = kQ/r

where again, we remember that only the potential difference matters, and for convenience, we've taken the "physics convention" of V = 0 at r → ∞. In actual circuits, the "V = 0 at ∞" convention is of limited use. Why? For starters, it's kind of hard to get there when you're doing an experiment. Theorists can come up with all sorts of crazy ideas, but experimentalists have to deal with the complications of the real world, where infinity is out of reach. For that reason, in a circuit we'll just pick some point and consider it to be at V = 0.
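The freedom to pick the zero of potential can be seen numerically: shifting every potential by the same constant changes nothing measurable. A quick sketch (the charge value, the two distances, and the arbitrary 123 V offset are all made up for illustration):

```python
# Only potential DIFFERENCES are physical; the reference point is arbitrary.
k = 8.99e9   # Coulomb's constant, N*m^2/C^2
Q = 1e-6     # source charge, C (illustrative)

def V(r, offset=0.0):
    """Potential a distance r from Q, with an arbitrary reference offset."""
    return k * Q / r + offset

# Potential difference between r = 0.5 m and r = 1.0 m, under two different
# choices of where V = 0 is:
d1 = V(0.5) - V(1.0)                              # physics convention: V = 0 at infinity
d2 = V(0.5, offset=123.0) - V(1.0, offset=123.0)  # some other "zero" choice
assert abs(d1 - d2) < 1e-9   # identical: the offset cancels

# The energy to move a test charge q between the two points is U = q * dV:
q = 2e-6
print(q * d1)   # Joules; positive, since pushing like charges together costs energy
```

In a circuit, "picking the offset" is exactly what choosing a reference point at V = 0 does.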
This is known as the "circuit ground." Oftentimes the circuit actually is connected to the ground (or to the frame of the building, which in turn connects to the ground), in which case "ground" actually does mean "ground" (sometimes specified as "earth ground" in electronics parlance). In other cases, "ground" is the voltage level of the device's chassis or frame (whether or not it's connected to anything larger), and so it's more of a "common bus" to which parts of the circuit can be connected to ensure that they are at the same relative potential as other, physically isolated, parts of the device. This is sometimes referred to as "chassis ground." The two take-home points here:

1. Potential, or voltage (and potential energy), is only defined as a difference between two points or two states. This is why your multimeter has two leads, and why you must always use both of them!

2. Since infinity is unreachable, we usually pick some part of the circuit in question and call it "ground." All choices of "ground" are equally valid, as long as you're consistent. However, some choices are more useful than others.

Okay, that's what we need to know about charges at rest. Next, we look at the case where charges are moving. If we count the charges moving past a particular point in a circuit, we can define the current I:

I = dq/dt

where I (mnemonic: I = current Intensity) is measured in Coulombs/second. However, since current arises so frequently, it too has its own unit, the Ampere (amp, named after the French physicist André-Marie Ampère), with 1 A = 1 C/s. (In fact, for reasons relating to the practical accuracy of measurements, in the SI system the Ampere and the second are fundamental units, and the Coulomb, since charge cannot be measured nearly as accurately as current or time, is a derived unit, defined as the amount of charge that flows in one second at a current of 1 ampere.
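That definition is easy to play with numerically. A quick sketch (the 1 A current is just an example value):

```python
# I = dq/dt: at a steady current, charge passed = current * time.
# How many elementary charges flow past a point each second at 1 A?
e = 1.6e-19   # elementary charge, C

I = 1.0       # current, A (= C/s); illustrative value
t = 1.0       # elapsed time, s
q = I * t     # charge that has flowed past the point, C

print(q / e)  # ~6.25e18 elementary charges per second at 1 A
```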
Similarly, the second is a fundamental unit since it can be measured so accurately, the speed of light is taken as a defined constant, and the meter is defined as the distance light travels in 1/299,792,458 of a second. But I digress.) Finally, in cases where charges are moving, it makes sense not just to talk about work and energy, but also about the rate at which work is being performed, or energy is being transferred from one form to another. Starting from the definition U = qV, we can write down the equation for power, which is the rate at which energy is consumed (taking V constant):

P = dU/dt = d(qV)/dt = V dq/dt = VI

Hence, in an electric circuit, P = VI is the rate at which electrical energy is consumed. (However, when we study capacitors, inductors and AC circuits, we'll learn that the truth is a little more complicated than the simple P = VI formula, since the current and voltage can be "out of phase" with one another, so that energy is periodically stored and released instead of consumed. But forget that I said anything about this for now...) Now, what happens to the energy that is consumed? In most cases it is converted to heat, which we know from thermodynamics is where all useful energy eventually goes to die, and the Second Law of Thermodynamics places very strict limits on anything useful we can get back out of it. (This is why we call thermodynamics "the dismal science.") But in certain devices, dissipated electrical energy can be converted to light (and other electromagnetic waves such as radio), sound or mechanical work. These devices are called transducers, and arise very frequently in electronics. Light bulbs, light-emitting diodes, speakers, piezoelectric crystals, radio antennas and other devices are all examples of transducers.

II. Units and Prefixes

Every physical quantity we measure has some unit attached to it. That is, everything is measured with respect to some set of units. Your height may be measured in feet, or inches, or centimeters or cubits, all of which are units of length.
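Before getting into units, here is the power bookkeeping from the previous section as a quick numerical sketch (the 12 V / 0.5 A operating point is made up for illustration):

```python
# P = V * I: the rate at which a circuit consumes electrical energy.
# Supply voltage and current below are illustrative values.
V = 12.0   # volts (J/C)
I = 0.5    # amperes (C/s)

P = V * I       # power, watts (J/s)
U = P * 3600.0  # energy dissipated (mostly as heat) over one hour, J

print(P)  # 6.0 W
print(U)  # 21600.0 J
```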
And the number you report ("my height is 175 centimeters") only has meaning when you state what units you are using (i.e. "my height is 175" means nothing, since we don't know whether you mean cm, inches, feet or miles). In your mechanics course, the numbers you dealt with were of the "order of magnitude" (or within a few orders of magnitude) of one. In other words, you typically would not deal with a force of 10^7 Newtons, or an object whose length was 10^-13 meters. This is no accident, and occurs because the sizes of the units were chosen to be reasonably close to the sizes of measurements that were typically made. That is why all our typical units of length have reasonable sizes on human scales. In electronics, however, most of the units are derived from more fundamental units, and these derived units are often of inconvenient sizes. The Farad (capacitance) and the Henry (inductance), for example, are ridiculously large units: even a dumpster full of capacitors may have a total capacitance of well under 1 F, and a 1 H inductor is the type of thing you might need a forklift to move around. The Ohm (resistance), on the other hand, is a rather small unit; the resistors you will work with typically have resistances of thousands of ohms. In the past, you worked around this problem using scientific notation, by writing a large (or small) number as a mantissa (a number between 1 and 10) multiplied by 10 raised to some power, i.e. 47,000,000 = 4.7 x 10^7. In electronics, for various reasons, scientific notation typically isn't used. Instead, a form of "engineering notation" is used, in which prefixes representing factors of 1,000 are attached to units. The prefixes we'll use most are:

Prefix   Abbreviation   Value
Tera     T              10^12
Giga     G              10^9
Mega     M              10^6
Kilo     k              10^3
Milli    m              10^-3
Micro    µ              10^-6
Nano     n              10^-9
Pico     p              10^-12

So, a voltage of 5,600 V would be written as 5.6 kV. A resistor with a resistance of 220,000 Ω would be written as 220 kΩ.
A capacitor with a capacitance of 2.5 x 10^-10 F would be written as 250 pF. A current of 0.065 A would be written as 65 mA. So, typically, values are expressed as a number between 1 and 1000, and a unit prefaced by the appropriate prefix. (One historical oddity: for capacitances, the prefixes µ and p are the only ones typically used; it's unusual to see a capacitance listed in nF or mF. So a capacitor of 2.6 x 10^-7 F is given as 0.26 µF rather than the 260 nF you would otherwise use. And in the really old days, before the "pico" prefix was in wide use (it's only recently that we've been able to work routinely with things so small!), picofarad capacitances were listed as "micromicrofarads," or µµF. This is not really important to know, but useful if you want to really sound like an "electronics person.")

III. Parting Shot: The Sign Convention

Voltage and current, as defined above, represent the potential energy and flow direction, respectively, of positive charges. (For negative charges, a minus sign on the energy, or a 180 degree shift in flow direction, must be added.) This is a convention that dates back to the 1700s and the time of Ben Franklin (who is a "big guy" in Philadelphia history!). When it was realized that electric charges must come in two varieties, positive and negative (since the Coulomb force can be either attractive or repulsive), and that electric charges could also flow as currents, it was assumed that the carriers of electric current were positive, even though it was essentially a coin flip in the first place to decide which charge should be considered positive and which negative. For this reason, it was taken as "convention" that electric current is to be treated as the flow of positive charges from the + terminal of a battery toward the – terminal.
But for all intents and purposes of that time, everything measurable would have been exactly the same whether electrons had positive or negative charges, so there was really no way of knowing. It was not until much later (the late 1800s and early 1900s) that it became apparent that the electron is in fact negatively charged (with the largely-stationary nuclei holding the positive charges), and that in most materials the actual flow of current is the flow of negatively-charged electrons in the direction opposite the "conventional" current direction (i.e. from the negative terminal of the battery to the positive one). Surprisingly, even in this modern era it turns out to make very little difference whether you choose "conventional flow" or "electron flow," as long as you are rigorously consistent about which one you use; there are only a relatively small number of cases where you have to be aware of which convention you are using (the Hall effect being the most prominent example). Therefore, in order to avoid confusion (and to stick with practice observed elsewhere in the field), for the remainder of this course we will use "conventional flow" (from positive to negative). If I do use "electron flow" (which from time to time is useful), I will be very explicit in stating so. Finally, the mid-to-late 20th Century addendum to the debate: it is also worth pointing out that in some materials (particularly semiconductors), electrons are not the actual carriers of electrical current! In these materials, sometimes it's the lack of an electron (known as a "hole") which actually carries current. I know that this makes no sense to you (even allowing for something as ridiculous as a "hole" to exist, an electron moving one way should be identical to a hole moving the other way), but when we talk about diodes I will discuss some solid state physics (band theory) in which it actually does make a difference.
This is because the electrons and holes are in different energy bands with different conduction properties (and indeed, different “effective masses”), so it may be much easier (or harder) to move an electron than it is to move a hole! But this is another of those things I’ll ask you to forget about for now, and we’ll return to it later.
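As a closing aside, circling back to Section II: the engineering-notation rules there are mechanical enough to automate. Here is a sketch (the function name eng, and the choice to clamp values outside the Tera-to-pico range, are my own, not a standard):

```python
import math

# SI prefixes for powers-of-1000 "engineering notation" (see Section II).
PREFIXES = {12: "T", 9: "G", 6: "M", 3: "k", 0: "",
            -3: "m", -6: "µ", -9: "n", -12: "p"}

def eng(value, unit):
    """Format value as a mantissa in [1, 1000) plus an SI-prefixed unit."""
    if value == 0:
        return f"0 {unit}"
    exp3 = 3 * math.floor(math.log10(abs(value)) / 3)  # nearest lower multiple of 3
    exp3 = max(-12, min(12, exp3))                     # clamp to the table above
    mantissa = value / 10.0**exp3
    return f"{mantissa:g} {PREFIXES[exp3]}{unit}"

print(eng(5600, "V"))      # 5.6 kV
print(eng(220000, "Ω"))    # 220 kΩ
print(eng(2.5e-10, "F"))   # 250 pF
print(eng(0.065, "A"))     # 65 mA
```

(The µF-or-pF-only convention for capacitors is a human habit, not a rule, so this sketch doesn't try to encode it.)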