Modelling Colored Noise under Large-Signal Conditions

by Nikhil M. Kriplani

A dissertation submitted to the Graduate Faculty of North Carolina State University in partial satisfaction of the requirements for the Degree of Doctor of Philosophy in Electrical Engineering.

Raleigh, 2005

Approved by: Dr. Griff L. Bilbro, Dr. W. Rhett Davis, Dr. Michael B. Steer (Chair of Advisory Committee), Dr. Douglas W. Barlage

ABSTRACT

KRIPLANI, NIKHIL M. Modelling Colored Noise under Large-Signal Conditions. (Under the direction of Professor Michael B. Steer.)

A time-domain simulation approach to modelling colored noise in electrical circuits is described. The approach places minimal restrictions on the magnitude and the nature of the noise present in a circuit, in an effort to capture the effects of nonlinear interactions between signal and noise. It draws on the mathematical theory of nonlinear dynamics and chaos to produce stochastic-looking series from simple deterministic iterative rules, or maps. The characteristics of these series can be modified easily to produce a large range of spectral characteristics. The advantage of the chaotic-maps approach is that modifying the spectral characteristics usually requires tweaking only a small number of parameters. This is in contrast to more traditional time-series-based approaches to noise generation, which require a large number of parameters to accurately model the characteristics of common sources of noise found in electrical circuits. The validity of this modelling approach is tested by implementing a unified deterministic and stochastic framework of equations in a high dynamic range simulator. The resulting stochastic system of equations describing a nonlinear noisy network is set up and solved assuming the Stratonovich interpretation. Simulated results are compared with measured results using two representative circuits.
The first circuit is a varactor-tuned voltage-controlled oscillator; simulated phase noise at its output is compared with measured values. The second circuit is a low-noise X-band MMIC power amplifier, for which the effect of noise on amplification is investigated. Gain versus input power curves are generated in simulation when the circuit is fed with large levels of input noise and are contrasted with measurement. Both cases demonstrate that this approach to modelling large levels of noise is valid, and perhaps even essential, for accurately predicting the effects of non-negligible levels of noise in an electronic circuit.

To my parents, who provided me with the opportunity to make something of my life. To my fate, for allowing it to happen.

Biography

Nikhil M. Kriplani was born in the city of Mumbai, India in 1977. He obtained his Bachelor's degree in Electronics and Telecommunications Engineering from the Maharashtra Institute of Technology in Pune, India in 1999 and his Master's degree in Electrical Engineering from North Carolina State University in Raleigh, NC, U.S.A. in 2002. His (unordered) interests include sports, mathematics, acoustics, philosophy, writing, healthy living and searching for the purpose of his existence.

Acknowledgements

I would like to thank Dr. G. Bilbro, Dr. D. W. Barlage, Dr. Wm. R. Davis and Dr. M. B. Steer for serving on my committee. In particular, I would like to thank Dr. M. B. Steer for the doses of motivation, guidance and funding for as long as I have been in the Ph.D. program. It still does seem quite amazing that it has all worked out this way, and needless to say, it would have been impossible without his generous influence.
I would like to acknowledge the camaraderie among my fellow graduate students, Aaron Walker, Sonali Luniya, Mark Buff, Jayesh Nath, Frank Hart, Alan Victor and Wonhoon Jang, and thank them for all the light moments, the seemingly endless discussions on unrelated, seemingly meaningless but seemingly interesting topics and, most importantly, for sharing the experience of sailing in this same boat together. Finally, I am grateful to my wife, Larisa, for giving up a part of her life and supporting me during the final stages of my work.

Contents

List of Figures
List of Tables
List of Abbreviations

1 Introduction
  1.1 Motivations and Objectives
  1.2 Unleashing Chaos
  1.3 Establishing a Framework
  1.4 Circuit Simulator Noise Analysis Review
  1.5 Original Contributions
  1.6 Overview
  1.7 Publications

2 Literature Review
  2.1 Introduction
  2.2 Mathematical Preliminaries
  2.3 Thermal Noise
  2.4 Shot Noise
  2.5 1/f (Flicker) Noise
    2.5.1 Scale Invariance
    2.5.2 Stationarity and Gaussianity
    2.5.3 The Empiricism of Hooge
    2.5.4 Number Fluctuations of Charge and Mobility
    2.5.5 Surface Effect or Bulk Effect
    2.5.6 Dependence on Mean Voltage, Current and Resistance
    2.5.7 The Effect of Temperature
  2.6 1/f (Flicker) Noise Models
    2.6.1 Surface Trapping Model
    2.6.2 Transmission Line 1/f Noise
    2.6.3 Pole Placement
    2.6.4 Fractional Noises
    2.6.5 Power-Law Shot Noise
    2.6.6 1/f Noise from Chaos
    2.6.7 Self-Organized Criticality (and others)
    2.6.8 Phase Noise
  2.7 Simulation of Noise in Circuits
  2.8 Summary

3 Stochastic Differential Equations
  3.1 Introduction
  3.2 Basic Theory
  3.3 Scalar SDEs
    3.3.1 A Simple Example
    3.3.2 Equivalence of the Itô and Stratonovich Forms
  3.4 Vector SDEs
  3.5 Itô v/s Stratonovich Forms
  3.6 Summary
4 Nonlinear Dynamics, Chaos and Intermittency
  4.1 Introduction
  4.2 Nonlinear Dynamics and Chaos
    4.2.1 Fixed Points
    4.2.2 Periodic Points
    4.2.3 Neutral Fixed Points
    4.2.4 Bifurcation Theory
    4.2.5 Sliding into Chaos
  4.3 Generating White Noise
  4.4 Intermittency and Flicker Noise
    4.4.1 Basic Theory
    4.4.2 Characteristics of Intermittency
    4.4.3 Nonlinear Intermittent Functions
  4.5 The Logarithmic Intermittent Map
  4.6 Summary

5 Implementation of Stochastic Framework
  5.1 Introduction
  5.2 An Acquaintance with fREEDA
  5.3 Transient Analysis in fREEDA
    5.3.1 Linear Network
    5.3.2 Nonlinear Network
    5.3.3 Formulation of the Error Function
    5.3.4 Conversion to Algebraic Form
  5.4 Implementation of Transient Noise Analysis
    5.4.1 Enhanced Device Models
    5.4.2 Nonlinear Maps
    5.4.3 Noisy Error Function
  5.5 Summary

6 Noise in a Voltage-Controlled Oscillator
  6.1 Introduction
  6.2 A Varactor Voltage-Controlled Oscillator
  6.3 Simulation and Validation
  6.4 Summary

7 Noise and Amplification
  7.1 Introduction
  7.2 Setup and Verification
  7.3 Further Investigations
  7.4 Summary

8 Conclusions and Future Work
  8.1 Summary
  8.2 Further Research

Bibliography

A Essential Itô and Stratonovich

B Implementing Infinite R-C Transmission Line Models

C Source Code
  C.1 Noise-enabled npn-BJT
  C.2 Noise-enabled p-n Junction Diode
  C.3 Noise-enabled Curtice-Cubic MESFET
  C.4 Noise-enabled Resistor
  C.5 White Noise Voltage Source
  C.6 The Parker-Skellern Model
  C.7 The OML MESFET Model
  C.8 The Ziggurat Technique
  C.9 The Logistic Map Noise Generator
  C.10 The Logarithmic Map Noise Generator
D fREEDA Netlists
  D.1 The Varactor-tuned VCO Circuit
  D.2 The X-band MMIC Circuit Netlist

List of Figures

2.1 Thermal noise equivalent circuits.
2.2 Thermal noise voltages and currents in equilibrium at T0.
2.3 Lumped RC transmission line excited by a white noise current source, from [28].
2.4 A 1/f^γ noise generator, from [65].
2.5 Digital 1/f^γ noise generator, from [66].
2.6 1D Brownian motion realization.
2.7 1D fBm realization with H = 0.2.
2.8 1D fBm realization with H = 0.8.
2.9 A general shot noise generator, from [80].
2.10 A power law impulse response function with β = 1/2.
2.11 A general 1-D dynamical system consisting of a nonlinear function and a recursive loop, from [32].
2.12 The bifurcation diagram for the logistic map.
2.13 Output spectrum with g = 0.925.
2.14 Output spectrum with g = 0.975.
2.15 Output spectrum with g = 0.9975.
2.16 PSD for f1(x), f2(x) and f3(x).
2.17 SOC power law of the distribution of cluster sizes.
2.18 Uncertainty in carrier frequency due to phase noise.
2.19 A typical oscillator configuration.
2.20 A typical plot of the phase noise of an oscillator versus offset from the carrier.
3.1 Differences in the solution of the SDE in Eqn. (3.21) assuming the Itô and Stratonovich interpretations, with X0 = 0, k = 2 and α = 1.
4.1 The logistic map, with λ = 4.
4.2 An example orbit of the logistic map, with x0 = 0.2.
4.3 The logistic map with λ = 4 and its fixed points, as indicated.
4.4 The quadratic map with c = 0 and its fixed points.
4.5 The quadratic map with c = −1 and a prime-period 2 orbit.
4.6 Different types of neutral fixed points.
4.7 Quadratic map with c = 0.4 and with no fixed points.
4.8 Quadratic map with c = 0.25 and with one fixed point.
4.9 Quadratic map with c = 0.0 and with two fixed points, one attracting and one repelling.
4.10 Bifurcation diagram for the quadratic map.
4.11 Correlation plot of the logistic map with λ = 4.
4.12 Spectrum of the logistic map with λ = 4.
4.13 The logarithmic map, with β = 0.000005.
4.14 Sample realization of the logarithmic map, with β = 0.000005.
4.15 Correlation plot of the logarithmic map with β = 0.000005.
4.16 Spectrum of the logarithmic map with β = 0.000005.
5.1 Connections between a heterogeneous collection of elements.
5.2 A partitioned network of linear and nonlinear elements and sources.
5.3 Large-signal Gummel-Poon BJT circuit.
5.4 The Gummel-Poon BJT model, along with noise sources.
5.5 Large-signal model for a p-n junction diode.
5.6 Noise-enabled p-n junction diode model.
5.7 Noiseless Curtice Cubic large-signal model.
5.8 Noise-enabled Curtice Cubic large-signal model.
5.9 Partitioned network which now contains contributions from transient noise sources.
6.1 Simulated output of the VCO at terminal (B) indicating a frequency of oscillation of 45 MHz.
6.2 Varactor-tuned VCO schematic, from [144].
6.3 Dependence of VCO oscillation frequency on bias.
6.4 Degradation of phase noise at the varactor v/s bias.
6.5 Phase noise comparison between data and experiment with bias voltage at 0 V.
6.6 Phase noise comparison between data and experiment with bias voltage at 6 V.
6.7 Phase noise comparison between data and experiment with bias voltage at 12 V.
7.1 Layout of the two-stage X-band MMIC.
7.2 Measurement setup for the X-band MMIC amplifier.
7.3 Comparison between measured curves of gain with no input noise and noise maintained at −20 dBc.
7.4 Power transferred from the center frequency into sidebands.
7.5 Comparison between simulated curves of gain with no input noise and noise maintained at −20 dBc.
7.6 Degradation of the output sinusoid due to noise.
7.7 Comparing simulated gain obtained with a 20 ps delay and no delay with measured gain.
7.8 Distorted output voltage with a 20 ps delay.

List of Tables

5.1 BJT model parameters in fREEDA.
5.2 BJT model parameters in fREEDA, continued.
5.3 Noisy diode model parameters in fREEDA.
5.4 Noisy Curtice Cubic model parameters in fREEDA.
5.5 Noisy resistor model parameters in fREEDA.
5.6 White noise voltage source model parameters in fREEDA.
6.1 BJT model parameter values as used in the VCO circuit.
6.2 Diode model parameter values as used in the VCO circuit.
7.1 MESFET model parameters for the X-band MMIC.
7.2 Parker-Skellern model parameters in fREEDA.
7.3 Parker-Skellern model parameters used in the X-band MMIC netlist.
7.4 OML model parameters used in the X-band MMIC netlist.
List of Abbreviations

AR      Auto-Regressive
ARIMA   Auto-Regressive Integrated Moving Average
ARMA    Auto-Regressive Moving-Average
BJT     Bipolar Junction Transistor
BM      Brownian Motion Process
CAD     Computer-Aided Design
CDF     Cumulative Distribution Function
fBm     fractional Brownian Motion
HOT     Highly Optimized Tolerance
IC      Integrated Circuit
JFET    Junction Field-Effect Transistor
KCL     Kirchhoff's Current Law
KVL     Kirchhoff's Voltage Law
LPTV    Linear Periodically Time-Varying
LSSS    Large Signal Small Signal
LTI     Linear Time-Invariant
LTV     Linear Time-Varying
MA      Moving-Average
MMIC    Microwave Monolithic Integrated Circuit
MNAM    Modified Nodal Admittance Matrix
MOSFET  Metal Oxide Semiconductor Field-Effect Transistor
OO      Object-Oriented
PDF     Probability Distribution Function
pHEMT   pseudomorphic High-Electron-Mobility Transistor
PMF     Probability Mass Function
PSD     Power Spectral Density
RF      Radio Frequency
SDE     Stochastic Differential Equation
SDIC    Sensitive Dependence on Initial Conditions
SOC     Self-Organized Criticality
SPICE   Simulation Program with Integrated Circuit Emphasis
WSS     Wide-Sense Stationary

Chapter 1

Introduction

1.1 Motivations and Objectives

Noise presents a practical limit on the performance of electrical circuits and systems. The sources of noise are numerous and, in most cases, their origins are well known; several models for these sources exist in the literature. In all cases, the models representing noise are random in nature and the only characterization of these sources of noise is statistical. Noise sources can be broadly classified as thermal, shot and flicker type. Thermal noise is associated with the random motion of carriers in a material, and the extent of the motion is proportional to the resistance of the material and its temperature.
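The standard quantitative form of this proportionality is the Johnson-Nyquist formula, v_n = sqrt(4kTRB). The short helper below is an illustration of the formula only (the function name and default values are not from the dissertation):

```python
import math

def thermal_noise_vrms(R, T=290.0, bandwidth=1.0):
    """RMS open-circuit thermal (Johnson-Nyquist) noise voltage of a
    resistor R (ohms) at temperature T (kelvin) over the given
    bandwidth (Hz): v_n = sqrt(4*k*T*R*B)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return math.sqrt(4.0 * k * T * R * bandwidth)

# A 1 kOhm resistor at 290 K over a 1 Hz bandwidth gives roughly 4 nV,
# and the noise voltage doubles when the resistance is quadrupled.
vn = thermal_noise_vrms(1e3)
```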
Shot noise is generally found in junction semiconductors, although it was originally observed in vacuum tubes; its existence is attributed to the motion of charges across a junction formed by joining two semiconductor materials with opposite charge concentrations. The origin of flicker noise, also referred to as 1/f noise where f is the frequency, is still somewhat ambiguous in the sense that there is no general consensus on it. Flicker noise is a more general form of power-law noise, or 1/f^α noise, where α is considered to vary between 0 and 2. Its various manifestations are found not just in electrical circuits but across a wide spectrum of scientific situations, which is perhaps the biggest reason there is no unified explanation of the origins of such a source of noise. Although there are several other types of noise sources one can encounter, this work deals with the modelling of the aforementioned sources of noise in the time domain. Traditionally, noise analysis is performed in the frequency domain and is commonly referred to as AC noise analysis. Computer-Aided Design (CAD) tools such as the circuit simulator SPICE, which are invaluable aids in linear Integrated Circuit (IC) design, all implement an AC noise analysis. Most linear ICs operate under small-signal conditions, which requires the establishment of an operating point that, once fixed, is not expected to change during operation. This allows the development of Linear Time-Invariant (LTI) network models of the circuit elements. This approach makes it convenient to analyze such circuits in the frequency domain using the rich and well-established numerical analysis theory of linear algebra. Accurate and efficient implementation of these techniques is the major reason for the adoption of SPICE and SPICE-like simulators by every analog and RF engineer.
In spite of the widespread use of SPICE, there are limitations when it comes to simulating circuits that cannot be described by an LTI or small-signal representation. One example of such a circuit is an electrical oscillator, which not only has variable operating points and large signal swings, but requires nonlinear effects to start and sustain oscillation. Another example is a high-power amplifier, which not only has large signal levels, but in which interactions with out-of-band noise have pronounced effects on performance. The LTI approach is inadequate for modelling noise in such circuits. Also, increasing density in ICs and reduced supply voltages result in a decrease in signal-to-noise ratio, and the interaction of signal and noise becomes more important. The only way to model these effects is by performing a transient noise analysis without restrictions on signal level, nonlinearity and operating point. This work concentrates on modelling noise in the time domain with the use of Stochastic Differential Equations (SDEs) and efficient chaotic noise generators which can model transient noise with a wide variety of spectral characteristics. It requires a modification of the system of equations describing the circuit to include noisy, random additive and multiplicative terms; a circuit simulator with a high dynamic range to be able to satisfactorily test these concepts; and example real-world circuits to check the accuracy and efficiency of the implementation. The noise analysis has been implemented in the circuit simulator fREEDA™ and two examples have been used to verify the accuracy of modelling and simulation: a varactor-tuned voltage-controlled oscillator which has large-signal excursions, and an X-band low-noise Microwave Monolithic Integrated Circuit (MMIC) power amplifier which has a significant amount of nonlinear interaction between signal and noise at high power levels.
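To give a flavor of what a transient (time-domain) noise analysis involves, the following is a minimal sketch, not fREEDA's actual implementation: a single RC node driven by additive white noise, integrated with the Euler-Maruyama scheme for SDEs. All element values and the noise amplitude are illustrative:

```python
import math, random

def noisy_rc_transient(R=1e3, C=1e-9, dt=1e-9, steps=10000,
                       noise_amp=1e-3, seed=1):
    """Euler-Maruyama integration of dv = -(v/(R*C))*dt + noise_amp*dW:
    the deterministic circuit equation plus a Wiener-process increment
    at every time step. Illustrative only."""
    rng = random.Random(seed)
    v = 0.0
    trace = []
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Wiener increment ~ N(0, dt)
        v += -(v / (R * C)) * dt + noise_amp * dW
        trace.append(v)
    return trace
```

The key point is that the noise enters the discretized circuit equations directly at each time step, so no small-signal or operating-point assumption is needed.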
Most importantly, the experimental validation indicates that the physical nature of noise, in particular flicker noise, is adequately modelled using the approach developed here.

1.2 Unleashing Chaos

Effective modelling of noise requires noise sources that can efficiently capture the characteristics of noise as they appear in measurable phenomena. As mentioned in the previous section, the three main sources of noise in electrical circuits are thermal noise, shot noise and flicker noise. Thermal and shot noise have white spectral characteristics, meaning that all their spectral components are assigned equal weight. Time-domain models for white noise are abundant and either use extremely efficient pseudo-random noise generators (such as those at http://www.random.org) or use variations of time series that have Auto-Regressive (AR), Moving-Average (MA) or Auto-Regressive Moving-Average (ARMA) forms, as found in the classic work by Box, Jenkins and Reinsel [1]. Flicker noise has characteristics that are more "interesting". There have been several approaches to try and find a model that can sufficiently mimic the various and sometimes contradictory properties of flicker noise with exponent one, also called 1/f noise. For example, 1/f noise has equal power in all decades of frequency. This has led researchers to question how low in frequency the flicker effect can persist and whether there should be infinite power at zero frequency. Another question often raised is whether a 1/f process is a Gaussian process. Some studies have claimed that it must be Gaussian, while others have shown it cannot be so because a Gaussian process would imply a flattening of the frequency response at low frequencies. This cannot be true if the process is to have equal power for all frequency decades. Issues of this nature and several others are explored in the next chapter.
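The equal-power-per-decade property follows directly from integrating a 1/f power spectral density: the power between f1 and f2 is ln(f2/f1), which depends only on the ratio of the band edges. A quick numerical check (illustrative code, not part of the dissertation):

```python
def power_in_band(f1, f2, n=100000):
    """Numerically integrate the PSD S(f) = 1/f from f1 to f2 using
    the trapezoidal rule; the exact answer is ln(f2/f1)."""
    total = 0.0
    for i in range(n):
        a = f1 + (f2 - f1) * i / n
        b = f1 + (f2 - f1) * (i + 1) / n
        total += 0.5 * (1.0 / a + 1.0 / b) * (b - a)
    return total

# Any two decades carry the same power, ln(10):
p1 = power_in_band(1.0, 10.0)
p2 = power_in_band(100.0, 1000.0)
```

Summing over infinitely many decades toward f = 0 therefore diverges, which is exactly the "infinite power at zero frequency" puzzle raised above.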
Just as with white noise, variations of AR, MA and ARMA models, also called Auto-Regressive Integrated Moving Average (ARIMA) models, do exist for flicker noise. The problem with using these models for flicker noise is often associated with the concept of memory. Intuitively, the memory of a stochastic process is a measure of the inter-relatedness of a sample of the process at a particular instant with every other sample of the process. Often this is characterized by generating the correlation plot of the process. Flicker noise with exponent zero (white noise) is termed memoryless, and the correlation plot of ideal white noise has a single impulse at the origin. This implies a complete lack of correlation between the samples of a white noise sequence: each sample exists independent of every other sample of the process. Flicker noise with exponent two (brown noise) is termed a low-memory process. This means that although there is some memory associated with the samples of a brown noise process, the amount of correlation is small and the correlation plot in this instance fades exponentially. AR, MA and ARMA models generally tend to produce such exponentially decaying behavior. Flicker noise with exponent one (1/f noise) exhibits what is termed long memory. In this case, the correlation plot decays at a rate that is slower than exponential and, for ideal 1/f noise, it never decays completely. Examples of slow decay rates are polynomial rates and logarithmic rates. ARIMA time series models can be made to have slower decay rates than the exponential rate, but this comes at a cost. They typically require a large number of parameters in order to get a "sufficiently slow" rate of decay, and this number tends to infinity as the stochastic process approaches ideal 1/f noise. Regardless, ARIMA models have been very popular and are widely used in physics, economics and engineering.
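The exponential correlation decay of AR-type models can be seen in a minimal AR(1) sketch (parameter values and helper names are illustrative, not from the text): for x[t] = phi*x[t-1] + e[t], the autocorrelation at lag k is phi**k, an exponential in k, so no single-parameter AR model can deliver the slower-than-exponential decay that 1/f noise requires.

```python
import random

def ar1_series(phi=0.9, n=100000, seed=42):
    """Generate an AR(1) process x[t] = phi*x[t-1] + e[t] with
    standard Gaussian innovations e[t]."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def autocorr(x, lag):
    """Sample autocorrelation of the sequence x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var

xs = ar1_series()
r10 = autocorr(xs, 10)  # close to the theoretical value 0.9**10
```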
The mathematical theory of chaos is associated with nonlinear dynamical systems and, in its simplest form, it uses nonlinear 1-D iterative functions, or maps, that are characterized by a very small number of parameters, typically fewer than four. Nonlinear dynamics is a theory that essentially studies how these nonlinear systems evolve with time. It has been observed that it is relatively straightforward to produce rich and complex dynamical behavior from a very simple set of underlying rules that repeat in an iterative fashion. Different sets of behavior can be obtained with different values of the parameters and initial conditions of the map. One can imagine this arrangement as being akin to a feedback network, where the processing starts at some initial condition and the output processed at the current point in time serves as the input at the next instant of time. The chaos phenomenon is generally associated with what is popularly called "sensitive dependence on initial conditions" (SDIC), and although the accepted definition is more wide-ranging (see Chapter 4), SDIC is a convenient way to understand the concept. SDIC is a compact way of saying that no matter how close two unequal initial conditions are, when passed through a nonlinear chaotic network the corresponding outputs will continue to diverge from each other. Viewing output sequences that result from chaotic networks (examples are provided in Chapter 4) provides a feel for the complexity that is possible using basic deterministic rules in a repetitive fashion. The single most important advantage of chaotic maps is that one can produce a large array of stochastic behavior using a small number of parameters. This is a frugal approach to modelling complicated behavior, and the economy in the number of parameters allows easier parameter tweaking in order to accurately approximate the desired real-life characteristic.
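The iterative-map idea can be made concrete with the logistic map x -> λx(1−x), which is discussed at length in Chapter 4. The short sketch below (parameter values chosen for illustration) demonstrates SDIC: two initial conditions differing by 10^-10 diverge to order-one separation within a few dozen iterations.

```python
def logistic(x, lam=4.0):
    """One iteration of the logistic map x -> lam*x*(1-x)."""
    return lam * x * (1.0 - x)

def orbit(x0, n, lam=4.0):
    """Iterate the map n times starting from initial condition x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1], lam))
    return xs

# Sensitive dependence on initial conditions: two nearby starting
# points produce orbits that eventually bear no resemblance.
a = orbit(0.2, 50)
b = orbit(0.2 + 1e-10, 50)
```

Despite the entirely deterministic rule and a single parameter λ, the resulting sequence looks stochastic, which is exactly the economy the chaotic-maps approach exploits.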
As it turns out, there is a certain class of maps called intermittent maps which, under the right conditions, can produce stochastic behavior that closely matches the behavior of 1/f noise. The intermittency phenomenon is essentially the presence of bursty behavior in the output sequence and is characterized by alternating periods of low activity and high activity. The length of these periods is random and, if allowed to run for a long time, an intermittent sequence can be shown to possess long memory - the essential requirement of 1/f noise. An example of such a map is Holland's logarithmic map [137], whose memory characteristics have a logarithmic decay and which is controlled by a single parameter. The features of this map and the requirements for any intermittent map to display long-memory properties are explained in detail in Chapter 4.

1.3 Establishing a Framework

Traditional nonlinear circuit simulation is performed in the time domain and usually requires setting up integro-differential equations, using a combination of KCL and KVL, that suitably describe the circuit under consideration. This system of equations describing the circuit models is then discretized in time and integrated numerically; there exists a large collection of algorithms for determining the solution of this system at the next time step based on information present at the current step. This is the approach used in the circuit simulator SPICE and it has proved to be numerically robust. This approach, however, is not without its limitations. The device models are intrinsically tied to the routines that analyze the circuits they are included in. This means that making even a minor change to an analysis routine would require a corresponding change in every device supported in the simulator. With a large number of devices in the simulator catalog and with increasing device complexity, this process tends to become cumbersome and error-prone.
The circuit simulator fREEDA™ is one of the few circuit simulators that use a modular approach to modelling and simulation. It relies heavily on Object-Oriented (OO) concepts, which enable it to form a clean separation between data and algorithm. In other words, the device models are no longer a part of the simulator analysis routine; both analysis and model exist as individual entities. These entities are tied together by an abstract framework that makes use of several key OO principles such as container classes, encapsulation and overloading, as detailed in [2]. The establishment of this framework is an important building block for incorporating sources of noise into the simulator. The separation of element and analysis facilitates rapid incorporation and testing of models for noise and allows the model designer to focus almost exclusively on the modelling process. Noise sources can either be treated as separate voltage or current sources, in which case they serve as individual elements in a circuit, or they can be embedded inside a pre-existing noiseless model. In either case, the basic solution algorithm requires no updates, and the insertion of these stochastic elements transforms the ordinary differential system of equations into a stochastic differential system of equations. It can be argued that modelling any real-life phenomenon must include some unpredictability, and with the appropriate framework for simulation in place, the majority of the intellectual focus can be exerted on handling the intricacies related to the development, deployment and performance of the underlying stochastic models. This framework flexibility permits a parallel or simultaneous simulation of both deterministic and stochastic phenomena while preserving the advantages of having an unmodified list of recipes, or solution routines, to solve this enhanced system of equations.
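The element/analysis separation can be caricatured in a few lines. The sketch below is illustrative structure in Python, not fREEDA's actual C++ class hierarchy or API (the names `Element`, `kcl_residual`, etc. are hypothetical): the analysis routine sees only an abstract element interface, so a stochastic element drops in without any change to the solver.

```python
# Illustrative sketch (NOT fREEDA's actual API) of separating device
# models from analysis routines: the solver iterates over an abstract
# Element interface, so adding a noisy element needs no solver changes.
from abc import ABC, abstractmethod
import random

class Element(ABC):
    @abstractmethod
    def current(self, v, t):
        """Current contribution at node voltage v, time t."""

class Resistor(Element):
    def __init__(self, r):
        self.r = r
    def current(self, v, t):
        return v / self.r

class NoisyCurrentSource(Element):
    """Stochastic element: drops into the same container unchanged."""
    def __init__(self, sigma):
        self.sigma = sigma
    def current(self, v, t):
        return random.gauss(0.0, self.sigma)

def kcl_residual(elements, v, t):
    # Analysis routine: iterates the container, ignorant of model internals.
    return sum(e.current(v, t) for e in elements)

elems = [Resistor(50.0), NoisyCurrentSource(1e-3)]
print(kcl_residual(elems, 1.0, 0.0))
```

The point of the design is visible in the last two lines: the deterministic and stochastic elements live in the same container, and the residual computation (standing in for the unmodified solution routine) is untouched by the addition of noise.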
On account of these changes, the simulator must now be able to track a larger variation of signals at each node in the circuit and therefore must have a high dynamic range. This was shown to be true for fREEDA™ in [147].

1.4 Circuit Simulator Noise Analysis Review

This section presents a qualitative review of the popular techniques for simulating noise in circuit simulators. The essential assumptions and methodologies behind each technique are explained, and their strengths and weaknesses are highlighted. The LTI approach to simulating noise in circuits is a frequency-domain approach. It constructs small-signal equivalent networks for the devices in the circuit and their sources of noise. With this method, a nonlinear circuit is assumed to have a time-invariant steady-state operating point which is maintained for the entire duration of the analysis. The nonlinear circuit is then linearized about this operating point and an LTI transfer function describing the circuit is determined. The noise sources in the network are assumed uncorrelated, and the effect of each of these sources on the output is individually computed for each output frequency of interest. The noise analysis is nothing but a computation of the transfer functions between the input noise sources and the output for the frequencies of interest. This method relies on the theory of interreciprocal adjoint networks and was introduced in [102]. Precise details of this technique are provided in Section 2.7. A common technique for analyzing noise in circuits in the frequency domain with a periodically varying operating point is the Large Signal Small Signal (LSSS) technique, or conversion matrix technique [87]. Here the nonlinear circuit is assumed to be driven by a large sinusoidal signal and another, smaller signal, and the response of the circuit to this smaller signal is desired. The circuit is first analyzed assuming the large-signal input only, usually by the harmonic balance method.
The nonlinear elements are then linearized about the steady-state solution, small-signal time-varying equivalents of the circuit are obtained and a small-signal analysis is performed. The frequency-domain state variables in a time-varying circuit element are determined by finding the so-called conversion matrix for the network. The conversion matrix converts the input frequency into corresponding spectral lines, or sidebands, at the output, thereby providing a way to determine the contribution of each input signal component to the output spectrum. If the small signal is assumed to be noise, then the conversion matrix technique can be used to determine the contribution of each noise component to the output spectrum [88]. This technique, however, requires the number of output sidebands to be fixed, and increasing the number of sidebands sends the system closer to instability [90]. In RF circuits there are typically a large number of nonlinear elements represented by complex semiconductor device equations; in this case harmonic balance simulation techniques can become unreliable [91]. There are several techniques for simulating noise in the time domain, and the underlying motivation for all of them is to be able to simulate circuits that are not operating under LTI conditions. Instead, the circuits are assumed to be operating under Linear Periodically Time-Varying (LPTV) conditions, and it has been found that it is possible to predict the response of time-variant circuits, such as electrical oscillators, more accurately with this procedure. Using this procedure, a nonlinear circuit is assumed to be forced with periodic excitations and to have a periodic steady-state solution. This solution can be obtained by different techniques [92]. The nonlinear circuit is then linearized about this periodic solution and a time-varying transfer function of the circuit is set up in order to enable the computation of the output variables.
This procedure assumes that the noise processes are cyclostationary, in other words that the moments of the stochastic processes describing noise are periodic. With these assumptions, [93] formed discretized versions of the time-varying transfer function, associated a fixed number of time points with each output frequency of interest and approximated the derivatives numerically. As compared with the LTI case, in which the time-invariant system of equations must be solved only once at each frequency of interest, the time-varying system of equations must now be solved for every time point corresponding to each output frequency. Speed and problem-size optimizations associated with this technique have been shown to produce better results [94]. The disadvantage of these methods is that they require the signal and noise to be periodic in nature, and it is only feasible to calculate the first-order moment of the cyclostationary output; evaluation of higher-order moments is prohibitive. In [95], a harmonic-balance-based frequency-domain method was introduced which calculates not just the first but also the second-order moment of the cyclostationary output. The method calculates the steady-state output with periodic excitations using harmonic balance, and the LPTV system of equations is obtained after linearization about this steady-state solution. The Fourier coefficients of the periodically time-varying transfer function from the noise sources to the output of the system are calculated by assuming a fixed number of harmonics and solving the linear system of equations with preconditioned iterative techniques. This technique is similar to the LSSS noise analysis except that the authors focus on solving the problem with numerically efficient techniques. The approach used in [96] can be thought of as the time-domain version of the approach in [95], with the difference that it can handle a more general Linear Time-Varying (LTV) network configuration.
The authors numerically calculate the mean and autocorrelation functions, which are time-domain characterizations, of the output variables of the circuit, thereby obtaining a second-order characterization of the random processes that constitute the variables in the circuit. The equations describing the nonlinear circuit are set up in the time domain and are treated as SDEs, but the sources of noise are assumed to have an additive effect only; multiplicative effects of noise and state-dependent circuit variables are not considered. Performing a Fourier transform on the autocorrelation of these output variables provides, in general, a time-varying power spectral density. The author in [101] focuses on the problem of phase noise in oscillators and treats the equation for an oscillator in the time domain with sources of noise. The equation is linearized in the time domain and the transversal and tangential components of the limit cycle corresponding to the periodic solution of the nonlinear differential equation are computed. The transversal component contributes amplitude noise and the tangential component contributes phase noise. A transient transfer function is then set up and the contribution of these noise components to the output is determined. The technique in [97] takes a similar approach of obtaining the phase noise of a linearized oscillator equation but uses an Impulse Sensitivity Function (ISF). This function aids in the calculation of the contribution of each source of noise present in the circuit to the phase noise at the output of the system. The authors in [97] show how to derive the ISF for a simple oscillator circuit, but the ISF will differ for different circuit configurations and it is not clear how this approach can be extended to an arbitrary configuration.
For both methods in [97] and [101], the approach of considering that the effect of noise sources on the output phase noise can be decomposed into orthogonal components is shown to be inadequate [98], in the sense that they do not account for the fact that phase deviations can grow indefinitely with time and that amplitude deviations cause phase deviations. A similar argument can be made against the technique proposed in [104], which generates pseudo-random time-domain noise generators and uses them to build models for circuit elements. The noise sources take the form of a sum of a fixed number of sinusoids with random amplitude and phase components, each of which is a coefficient that must be determined by empirical means. Once the coefficients have been determined, a traditional transient analysis is performed on the circuit. It is also unclear how many components for the amplitude and phase coefficients must be selected and what their respective values should be. The work in [98] is based on the work in [101] but improves the modelling descriptions of the effects of perturbation on a steady-state periodic circuit. In particular, the perturbed oscillator state can have amplitude and phase deviations and, while the amplitude deviations are always assumed small, the phase deviations can grow without bound. As before, there is still a separation between the amplitude and phase components. However, while amplitude can still be handled using linear perturbative analysis, the phase deviations are represented by a nonlinear differential equation that cannot, in general, be linearized. The authors show that the phase deviation increases linearly with time and develop numerical techniques to solve for such conditions. This work can be seen as bringing together the techniques in [95] and [96] and extending them to include linearly increasing phase error.
A recent analytical review of all the time- and frequency-based approaches to treating phase noise is provided in [106]. There has been tremendous improvement in the understanding of the effects of noise in circuits, but some gaps still remain. Most of the techniques highlighted above apply to the phase noise problem in periodic circuits, where the noise is assumed to be small in the sense that it does not affect the steady-state operation (either time-invariant or time-variant) of the circuit. Also, effects of multiplication of noise with random state-dependent variables are neglected. This can be an important consideration because an instantaneous value of noise, as in the case of flicker noise, can be large enough that, when multiplied with a deterministic signal, a shift in the bias point of the circuit can result. Another issue is the modelling of sources of colored noise in a circuit simulator. Several techniques for modelling these sources have been developed in theory and in practical circuits (see Section 2.6 for a more detailed exposition), but implementing them as sources of colored noise in a circuit simulator can be a challenge. An example of the difficulty is quantified in Appendix B.

1.5 Original Contributions

This work represents a first attempt at using chaotic maps to model colored noise in electronic circuits. The chaotic noise generators are shown to be able to produce spectral characteristics that are very similar to white and 1/f noise, and they require the tuning of only a single parameter. This is a parsimonious approach to modelling complex stochastic behavior and, as is shown in the later chapters of this work, it works quite well when applied to real circuits. The descriptions and properties of the white noise generating map are provided in Section 4.3 and those of the flicker noise generating map in Section 4.4.
The procedure for obtaining the noise sequences is based on iterative deterministic rules, and the SDIC property of chaotic sequences ensures that two chaotic sequences starting at initial values that differ by arbitrarily small amounts will be different from each other at nearly every point. The use of this technique enables one to effectively harness the accuracy of the transient simulation approach and avoid the problems associated with random number generators in a Monte Carlo type simulation. Another contribution of this work is the implementation of a transient noise analysis in a full-fledged electronic circuit simulator. It requires the extension of the robust simulation framework found in fREEDA™ to allow for a concurrent deterministic and stochastic analysis of circuits that contain transient sources of colored noise, details of which can be found in Section 5.4. It neither imposes a restriction on the nature of the noise itself nor assumes that the noise will have a specific form of interaction with the deterministic elements. In other words, multiplicative effects of noise and signal are considered in addition to additive effects, and the fact that instantaneous magnitudes of the noise components, when multiplied with signal, can be large enough to cause changes in the bias point of the circuit is accounted for. In addition, this work interprets the system of SDEs in the sense of Stratonovich [115], and it is argued in Section 3.5 that this interpretation is the only appropriate one when multiplicative effects are to be modelled appropriately. Previous attempts at using SDEs to model noise, [96], [99], [100], [101], all make use of the Itô assumption. The Stratonovich approach is found to be accurate when modelling circuits that have significant components of flicker noise and non-negligible levels of nonlinear interaction between the deterministic and stochastic terms.
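The practical difference between the two interpretations is easy to exhibit on the scalar test equation dX = σX dW (a standard textbook experiment, not a circuit from this work): under Itô, E[X_t] = X_0, while under Stratonovich, E[X_t] = X_0·e^{σ²t/2}. The Euler-Maruyama scheme converges to the Itô solution; a Heun-type predictor-corrector scheme converges to the Stratonovich one. A sketch:

```python
# dX = sigma * X dW: the Ito mean stays at X0, while the Stratonovich
# mean grows as X0 * exp(sigma**2 * t / 2). Euler-Maruyama targets Ito;
# the Heun predictor-corrector (reusing the same dW) targets Stratonovich.
import math
import random

random.seed(1)
sigma, T, N, paths = 1.0, 1.0, 200, 5000
dt = T / N

def mean_endpoint(stratonovich):
    total = 0.0
    for _ in range(paths):
        x = 1.0
        for _ in range(N):
            dw = random.gauss(0.0, math.sqrt(dt))
            if stratonovich:
                xp = x + sigma * x * dw             # predictor (Euler step)
                x += 0.5 * sigma * (x + xp) * dw    # corrector, same dw
            else:
                x += sigma * x * dw                 # Euler-Maruyama
        total += x
    return total / paths

ito = mean_endpoint(False)
strat = mean_endpoint(True)
# ito is near 1.0 and strat near exp(0.5), up to Monte Carlo error
print(ito, strat, math.exp(0.5 * sigma**2 * T))
```

Reusing the same Wiener increment in predictor and corrector is what selects the Stratonovich (midpoint) evaluation of the diffusion term; evaluating it only at the left endpoint of each step gives the Itô integral.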
The intention is to develop and make use of a modelling process that approaches the workings of a physical circuit. Chapter 6 presents comparisons of simulated and measured phase noise for a circuit that contains large signal swings at several of its nodes. Different levels of bias conditions are considered. The circuit is a varactor-tuned voltage-controlled oscillator and contains active nonlinear elements. It is shown that once the flicker noise parameters for the active devices are set for one level of bias, the same parameters can be used to predict the phase noise for other bias levels. A match is also obtained between simulation and measurement for a higher level of bias that causes the varactor in the circuit to approach the breakdown condition. For this match, it is shown that a scaling coefficient larger than unity is required for the shot noise of the varactor. Chapter 7 contains simulation runs that investigate the effect of large levels of input noise on the power gain of an X-band high-power low-noise amplifier. The simulation approach developed in this work is found to be well suited to analyzing such circuits, as indicated by a comparison of simulation runs with measured results. This chapter also presents, for the first time, measured results that indicate a reduction in power gain in the presence of large levels of noise. These levels of noise are generated digitally at baseband and upconverted to the carrier frequency at X-band in the signal generator. With further simulations that include appropriate values of time delays inside the device models, it is demonstrated that the reduction in power gain in the presence of noise can be approximated more accurately.

1.6 Overview

Chapter 2 is a concise review of a large body of literature on noise in systems. The attempts of various researchers to provide physical explanations for the noise phenomenon have been explored and a brief overview of these efforts is provided.
Also reviewed are the approaches to modelling noise and, in particular, flicker noise. The chapter ends with a mathematical explanation of how noise is traditionally analyzed in simulators. Chapter 3 is an introduction to the theory of SDEs and deals with common forms of modelling with, and interpreting, SDEs. In particular, the chapter explains the origins of the Itô form and the modified Stratonovich form for interpreting SDEs. It illustrates the differences in results that can occur when using the two forms to solve the same differential equation and also suggests a solution to this quandary by providing an intuitive look at some of the assumptions behind using each form of SDE. Chapter 4 highlights the theory of nonlinear dynamics and chaos by first providing the appropriate background to the theory and embellishing the theory with small doses of intuition and graphics. The chapter ends by explaining how a certain property of chaotic functions, namely intermittency, can be used to generate noisy sequences with spectral characteristics that resemble colored noise as found in electronic circuits and elsewhere. Chapter 5 provides details of the implementation of the parallel deterministic and stochastic framework to analyze transient noise in the circuit simulator fREEDA™. It details the changes that were made to the linear and nonlinear device elements in fREEDA™ to include white and flicker sources of noise and the effects of these changes on the existing simulator framework. Chapter 6 simulates a varactor-tuned voltage-controlled oscillator with a resonant frequency of 45 MHz. Simulated phase noise is compared with measured results at different levels of bias conditions in an effort to verify the validity of this modelling approach.
Flicker noise parameters for the nonlinear devices are determined by fitting simulation to measurement for one value of bias and, using the same values, it is shown that it is possible to predict the phase noise response at another bias level. Chapter 7 presents the modelling of the effect on gain of a high-power low-noise MMIC amplifier when fed with a composite signal consisting of a carrier at 10 GHz and large levels of noise. Curves of gain versus input power are generated and simulated results are compared with results obtained from measurement. The MMIC amplifier consists of a pair of pseudomorphic High-Electron-Mobility Transistors (pHEMTs) connected back to back and is modelled in simulation using the Curtice model [148] for a MESFET. In order to model the distortive effects of noise at high levels of input power more accurately, the delay-enabled Curtice model is used in simulation and preliminary comparisons with measurement are presented. Finally, Chapter 8 summarizes this work and provides suggestions for future research in this interesting field of nonlinear circuit analysis and computer-aided design.

1.7 Publications

1. N. M. Kriplani, S. R. Luniya and M. B. Steer, “Capturing the Effect of Noise on Amplification,” Microwave Comp. Lett., submitted, Nov. 2005.

2. N. M. Kriplani, D. P. Nackashi, C. J. Amsinck, N. H. Di Spigna, M. B. Steer, P. D. Franzon, R. L. Rick, G. C. Solomon and J. R. Reimers, “Physics-Based Molecular Device Model in a Transient Circuit Simulator,” Jnl. Chem. Phys., submitted Sep. 2005.

3. Frank P. Hart, Nikhil M. Kriplani, Sonali R. Luniya, Carlos E. Christoffersen and Michael B. Steer, “Streamlined Circuit Model Development with fREEDA™ and ADOL-C,” The 4th International Conference on Automatic Differentiation, July 2004.

4. M. B. Steer, C. Christoffersen, S. Velu and N. Kriplani, “Global Modeling of RF and Microwave Circuits,” Mediterranean Microwave Conference Digest, June 2002.
Chapter 2

Literature Review

2.1 Introduction

This chapter is a survey of the vast amount of literature available on the history of the understanding of noise and noise processes. The chapter is by no means comprehensive, but it makes an attempt to mention a fair number of references on the subject, ranging from the historical origins of noise to some more recent approaches to noise modelling. Section 2.2 introduces mathematical definitions and formulas related to random variables and random processes that will be helpful in reading the rest of the chapter. Section 2.3 is a brief introduction to thermal noise based on thermodynamical principles, followed by Section 2.4, which is an introduction to shot noise. Section 2.5 and Section 2.6 respectively survey theories of the physical origin of flicker noise and various mathematical models to describe flicker noise processes. Finally, Section 2.7 reviews the original method of analyzing noise in electronic circuit simulators.

2.2 Mathematical Preliminaries

A random process is a family of random variables {X(t), t ∈ T} defined on a given probability space, indexed by the parameter t, denoting time, where t varies over the index set T. The Cumulative Distribution Function (CDF) of this random process at a fixed time t_1 is defined as

F_X(x_1; t_1) = P{X(t_1) ≤ x_1}    (2.1)

where P(·) denotes the probability function from the sample space Ω into the unit interval on the real line. F_X(x_1; t_1) forms the first-order distribution of X(t). Likewise, given t_1 and t_2, the joint CDF or second-order distribution of the random process is given by

F_X(x_1, x_2; t_1, t_2) = P{X(t_1) ≤ x_1, X(t_2) ≤ x_2}.    (2.2)

In general, for an n-th order distribution,

F_X(x_1, …, x_n; t_1, …, t_n) = P{X(t_1) ≤ x_1, …, X(t_n) ≤ x_n}.    (2.3)

The Probability Mass Function (PMF) for a discrete random process is given by

p_X(x_1, …, x_n; t_1, …, t_n) = P{X(t_1) = x_1, …, X(t_n) = x_n}    (2.4)

and the Probability Density Function (PDF) for a continuous random process is expressed in terms of its CDF as

f_X(x_1, …, x_n; t_1, …, t_n) = ∂^n F_X(x_1, …, x_n; t_1, …, t_n) / (∂x_1 … ∂x_n).    (2.5)

The mean of a random process, or its first-order moment, is defined by

μ_X(t) = E[X(t)] = Σ_x x p_X(x, t)          (discrete random variable)
μ_X(t) = E[X(t)] = ∫_{−∞}^{∞} x f_X(x, t) dx    (continuous random variable)    (2.6)

provided the sum and the integral exist, i.e. are absolutely convergent. Likewise, the n-th order moment is defined as

E[X^n(t)] = Σ_x x^n p_X(x, t)          (discrete random variable)
E[X^n(t)] = ∫_{−∞}^{∞} x^n f_X(x, t) dx    (continuous random variable)    (2.7)

The autocorrelation function is a way to determine the relationship between values of a function separated by different instants of time. For a random process it is given by

R_X(t, s) = E[X(t)X(s)].    (2.8)

When the random variable is discrete, the one-dimensional autocorrelation function of a random sequence of length N is expressed as

R_X(i) = Σ_{j=0}^{N−1} x_j x_{j+i}    (2.9)

where i is the so-called lag parameter. It is a way of expressing the relationship between values of the random sequence for different values of the lag parameter. The autocovariance function of X(t) is given by

K_X(t, s) = E[{X(t) − μ_X(t)}{X(s) − μ_X(s)}] = R_X(t, s) − μ_X(t)μ_X(s)    (2.10)

while the variance or dispersion is given by

σ_X²(t) = E[{X(t) − μ_X(t)}²] = K_X(t, t).    (2.11)

A random process X(t) is stationary in the strict sense if

F_X(x_1, …, x_n; t_1, …, t_n) = F_X(x_1, …, x_n; t_1 + τ, …, t_n + τ)    (2.12)

∀ t_i ∈ T, i ∈ N. If the random process is Wide-Sense Stationary (WSS), then it is stationary to order 2. This means that only its first and second moments, i.e. the mean and autocorrelation, are independent of time. More precisely, we have

E[X(t)] = μ,   R_X(t, s) = E[X(t)X(s)] = R_X(|s − t|) = R_X(τ).    (2.13)

Note that the mean is constant for a WSS process and the autocorrelation depends only on the time difference τ. A random process that is not stationary to any order is nonstationary and its moments are explicitly dependent on time. A Gaussian random process is a continuous random process with a PDF of the form

f_X(x, t) = (1/√(2πσ(t)²)) exp(−(x − μ(t))² / (2σ(t)²))    (2.14)

where μ(t) represents the mean of the random process and σ(t)² represents the variance. A normal random process is a special case of a Gaussian random process in that it has a mean of zero and a variance of unity. A Poisson random process is a discrete random process with parameter λ(t) > 0 and has a PMF given by

p_X(k) = P(X(t) = k) = e^{−λt} (λt)^k / k!    (2.15)

where λ(t) is generally time dependent; the mean and variance of a Poisson random process are both λ(t). So, for a Poisson random process,

μ_X = E[X(t)] = λ(t)    (2.16)
σ_X² = Var(X(t)) = λ(t).    (2.17)

A concise introduction to random variables and processes can be found in [6]. For a classical and near-complete treatment, refer to [7].

2.3 Thermal Noise

According to the theorem of Nyquist, thermal noise is a result of the random motion of free electrons in a conductor, which are in a state of constant thermal agitation at temperature T. These random fluctuations result in a random current and a random voltage across the terminals of the conductor, shown schematically in Fig. 2.1.

Figure 2.1: Thermal noise equivalent circuits.

The Power Spectral Density (PSD) of V_n(t) and I_n(t) are related by

S_{V_n}(f) = R² S_{I_n}(f).    (2.18)

One can derive the PSD of the voltage spectrum using the principles of thermodynamics in accordance with the development of Nyquist in 1928 [3], which is outlined below. Consider a pair of resistances R_1 and R_2 whose values are independent of frequency and which are in equilibrium at temperature T. They are arranged as in the circuit shown in Fig.
2.2 and thermal noise voltages and currents are indicated therein.

Figure 2.2: Thermal noise voltages and currents in equilibrium at T_0.

Assuming the self-inductance and capacitance to be negligible at all frequencies, the equilibrium state of the two resistors allows one to invoke the second law of thermodynamics, which requires that there be no net exchange of energy between the two resistors. Hence, for an arbitrary frequency range (f, f + Δf), the power transferred from R_1 to R_2 must be the same as that transferred from R_2 to R_1, independent of the location of (f, f + Δf). Therefore the mean power P̄_12 = P̄_21, where P̄_ij is the mean power transferred from R_i to R_j in the bandwidth Δf. For voltages v_1, v_2 and currents i_1, i_2 in the circuit over the bandwidth Δf, we can write

i_1 = v_1/(R_1 + R_2),   i_2 = v_2/(R_1 + R_2),   P̄_12 = ī_1² R_2,   P̄_21 = ī_2² R_1.    (2.19)

Since P̄_12 = P̄_21, v̄_1² and v̄_2² can be written in terms of the voltage spectra S_{v_1}(f), S_{v_2}(f), giving

S_{v_1}(f) R_2 = S_{v_2}(f) R_1.    (2.20)

Setting R_1 = R_2 shows that the PSD has to be independent of the specifics of the resistors and the method of conduction. What remains is to find the PSD as a function of voltage, S_v(f). On coupling two equal resistances R together with the help of an ideal lossless transmission line of characteristic impedance Z_0 = R, such that the line is matched and reflectionless at either end, a certain mean amount of electromagnetic energy is stored in the line over the bandwidth Δf in the equilibrium state. This mean energy is U = Δk(U_H + U_E), where U_H is the mean magnetic energy per mode, U_E is the mean electric energy per mode and Δk = 2LΔf/v is the number of modes in the frequency range Δf for a line L cm long with velocity of propagation v. Invocation of the Equipartition Theorem permits one to assign an energy of amount k_B T/2 to each of the electric and magnetic energies per mode.
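As a quick numeric check of this equipartition bookkeeping (standard constants, with an arbitrary line length that cancels out of the result), the stored energy and the power delivered into one matched termination can be computed directly; the power per unit bandwidth comes out to k_B·T, roughly 4 × 10⁻²¹ W/Hz at T = 290 K:

```python
# Equipartition bookkeeping for the matched line: dk = 2*L*df/v modes in
# bandwidth df, each carrying U_H + U_E = k_B*T, so U = 2*k_B*T*L*df/v.
# Half the energy travels each way; power into one end is (U/2)/(L/v).
kB = 1.380649e-23    # Boltzmann constant, J/K
T = 290.0            # standard noise temperature, K
df = 1.0             # bandwidth, Hz
v = 3.0e8            # assumed propagation velocity, m/s
L = 1.0              # line length, m (cancels from the power)

dk = 2.0 * L * df / v          # number of modes in df
U = dk * kB * T                # stored energy in bandwidth df
P = 0.5 * U / (L / v)          # power into one termination = kB*T*df
print(P)                       # ~4.0e-21 W in a 1 Hz bandwidth
```

Note that L and v drop out of P entirely, which is the point of the argument: the exchanged power depends only on temperature and bandwidth.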
The equipartition assignment gives U_H + U_E = k_B T, and so

U = Δk(U_H + U_E) = 2 k_B T L Δf / v    (2.21)

which is the one-dimensional case of the Rayleigh-Jeans law for black-body radiation. Since half the energy goes from R_1 to R_2 and vice versa, the mean power (energy per second) from R_1 into R_2 is

P̄_12 = (1/2) U/(L/v) = Δf k_B T    (2.22)

with an identical expression for P̄_21. From Eqn. (2.19) we have

P̄_12 = ī_1² R_2 = v̄_1² R_2/(R_1 + R_2)² = S_{v_1}(f) Δf R_2/(R_1 + R_2)² = k_B T Δf    (2.23)

and if R_1 = R_2 = R, then we immediately obtain

S_v(f) = 4 k_B T R.    (2.24)

The PSD of the current, from Eqn. (2.18), is

S_I(f) = 4 k_B T / R.    (2.25)

This result is an ideal one in that it is valid for all frequencies, in other words it is frequency independent. However, it would also result in infinite power if all frequencies were considered; there must therefore be a limit in frequency beyond which this equation is not valid. To account for this, the thermal energy k_B T is replaced by Planck's expression hf(e^{hf/k_B T} − 1)^{−1}, which gives

S_v(f) = 4Rhf / (e^{hf/k_B T} − 1).    (2.26)

The critical frequency at which the spectrum rolls off is determined by hf_0 = k_B T, i.e. f_0 = k_B T/h ≈ 2.1 × 10^{10} T Hz; at room temperature this is roughly 6 × 10^{12} Hz. Nyquist's theorem applies not only to resistors but to general linear passive elements. For a more thorough examination of this, refer to Nyquist's original work [3]. For more recent revisits of these concepts, see [4] and [5].

2.4 Shot Noise

Shot noise was first observed by Schottky in 1926 [8] in vacuum tubes, as the effect of electrons incident on the anode at random intervals, the incidence of each electron on the anode being an individual event. A random superposition of these individual, non-overlapping events forms shot noise. In a conventional semiconductor p-n junction, the heavily doped p and n regions make a very small thermal noise contribution and it is the emission of carriers into the depletion region that contributes to shot noise.
Every carrier that crosses the depletion region generates a pulse. The arrival of these pulses can be modelled by a Poisson arrival process of rate λ. At a specific time instant t, the total charge that has crossed the depletion layer is

Q(t) = qN(t)    (2.27)

where q is the charge associated with each carrier and N(t) is the number of carriers that have crossed until time t. The time derivative of this equation provides the expression for the current, which can also be represented as a superposition of charges arriving at randomly distributed intervals:

I(t) = dQ(t)/dt = Σ_i q δ(t − T_i)    (2.28)

where the T_i represent the random crossing instants. Assuming that the T_i have a Poisson distribution, the pulse train I(t) is a Poisson pulse train. Note that the implicit assumption is that the charge crossing is instantaneous and that successive pulses are non-overlapping. In a more general case, one can imagine passing the pulse train through a system having an impulse response h(t) which depends on the device under consideration. This gives

I(t) = Σ_i q h(t − T_i).    (2.29)

The mean of the pulse train is

Ī(t) = E[I(t)] = 2q ∫_{−∞}^{∞} λ h(t − s) ds = 2qλ ∫_0^{∞} h(s) ds = 2qλ    (2.30)

and, defining the noise current to be the difference between the total current and the mean, we get

Ĩ(t) = I(t) − Ī(t).    (2.31)

The mean of this noise current is obviously zero and its autocorrelation can be computed as

R_Ĩ(t, τ) = E[Ĩ(t + τ/2) Ĩ(t − τ/2)]
          = 4q²λ ∫_0^{∞} h(t + τ/2 − s) h(t − τ/2 − s) ds
          = 4q²λ ∫_0^{∞} h(s + τ/2) h(s − τ/2) ds
          = R_Ĩ(τ)    (2.32)

which shows that shot noise is a WSS process. To calculate the PSD, we use the standard form

S_Ĩ(f) = 4λq² |H(f)|².    (2.33)

From Eqn. (2.30) we have

λ = Ī / (2q)    (2.34)

which gives for the PSD

S_Ĩ(f) = 2qĪ |H(f)|²    (2.35)

which is an expression for shot noise as a function of the mean current across the barrier.
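Eqn. (2.35) with ideal delta-function pulses (|H(f)|² = 1) is the familiar Schottky formula S = 2qĪ, and it can be checked numerically by binning Poisson carrier arrivals into time slots (a sketch with an illustrative carrier rate, not a model of any particular junction):

```python
# Monte Carlo check of the Schottky formula S(f) = 2*q*I_mean for ideal
# (delta) pulses: bin Poisson arrivals into slots of width dt. The slot
# current is q*count/dt, and for white noise the current variance equals
# S * B with noise bandwidth B = 1/(2*dt), i.e. var(I) = q*I_mean/dt.
import math
import random

random.seed(2)
q = 1.602176634e-19   # elementary charge, C
rate = 1.0e19         # illustrative mean carrier arrival rate, 1/s
dt = 1.0e-9           # bin width, s
nbins = 50000
lam = rate * dt       # expected arrivals per bin (= 10 here)

def poisson(lam):
    # Knuth's multiplication method; adequate for small lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

currents = [q * poisson(lam) / dt for _ in range(nbins)]
i_mean = sum(currents) / nbins
var = sum((i - i_mean) ** 2 for i in currents) / nbins
psd_est = 2.0 * dt * var              # one-sided PSD estimate, var / B
print(psd_est / (2.0 * q * i_mean))   # ratio close to 1
```

Because the Poisson counts have variance equal to their mean, the estimated PSD tracks the mean current exactly as the Schottky formula predicts, independent of the bin width chosen.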
If one considers the pulses across the barrier to be ideal Dirac delta functions, then |H(f)|² = 1 and the PSD becomes

S_Ĩ(f) = 2q Ī (2.36)

which is the PSD of a white noise process and is independent of frequency. This derivation can be refined by assuming a time-varying rate λ(t) for the Poisson process and allowing successive pulses a degree of overlap, but in all cases it can be shown that the shot noise process has a white PSD and is Gaussian [75].

2.5 1/f (Flicker) Noise

One way of judging the importance and impact of 1/f noise is by considering the numerous references on the subject and the vast number of situations in which this phenomenon manifests itself. Apart from confirming the frequency-independent nature of shot noise originally proposed by W. Schottky, J. B. Johnson observed another type of noise whose PSD increased with decreasing frequency. Schottky [8] suggested that this effect is independent of the shot noise effect and is a consequence of irregularities in the properties of the surface of the cathode which result in a "flicker" of the thermionic current. Since then, this type of low-frequency noise has been observed in a variety of engineering and scientific situations, and perhaps the only thing that is universally agreed upon is the ubiquity of the phenomenon of 1/f noise.

The statistics of white noise (to be shown later), as seen in its autocorrelation function, indicate a process that has no memory. On a correlation plot (a plot which shows the degree of correlation between samples of a time sequence), future values of a memoryless process are completely independent of past values. In the frequency domain, this implies a PSD that is independent of frequency. Although no physical process is truly white over an infinite range of frequencies, white noise is "white" for all practical purposes. The slope of the PSD of white noise is zero, so it can also be referred to as 1/f^0 noise.
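As a concrete illustration of a memoryless, spectrally flat process, the shot-noise pulse train of Section 2.4 can be simulated directly. The sketch below (the crossing rate and time step are illustrative values, not taken from the text) bins Poisson carrier crossings, forms the zero-mean noise current and compares the level of its periodogram with the value 2qĪ of Eqn. (2.36):

```python
import numpy as np

rng = np.random.default_rng(1)

# Carriers cross the barrier as a Poisson process, so the number of
# crossings in a bin of width dt is Poisson(lam * dt).
q = 1.602e-19          # carrier charge (C)
lam = 1e12             # mean crossing rate (1/s), illustrative value
dt = 1e-15             # time step (s)
n_samples = 2**16

counts = rng.poisson(lam * dt, n_samples)   # crossings per bin
current = q * counts / dt                   # instantaneous current (A)
noise = current - current.mean()            # zero-mean shot-noise component

# One-sided periodogram of the noise current
psd = 2 * dt * np.abs(np.fft.rfft(noise))**2 / n_samples
freqs = np.fft.rfftfreq(n_samples, dt)

# Compare the measured flat level with S = 2 q I_bar (Eqn. 2.36)
i_bar = q * lam
print(psd[1:].mean(), 2 * q * i_bar)
```

The average periodogram level (DC bin excluded) should agree with 2qĪ to within the statistical scatter of the estimate, and it shows no trend with frequency.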
On the other hand, brown noise is associated with a Brownian Motion process (BM), defined later, and like white noise has very little long-term memory. Although there is a high correlation between successive samples of a BM, the correlation plot decays exponentially, indicating negligible long-term memory. The PSD of brown noise has a slope of two and it is also referred to as 1/f² noise. Lying between these extremes is "true" 1/f noise, some properties of which are explored beginning with the next subsection. For now, it must be emphasized that a true 1/f noise process exhibits long-term memory and its correlation plot, in general, never decays. The remainder of this work will refer to all types of noise that follow a power law as 1/f noise, irrespective of the precise value of the slope of the PSD. In general, any 1/f^γ noise with 0 ≤ γ ≤ 2 is 1/f noise. Examples of the ubiquity of 1/f noise or long-memory processes include vacuum tubes [8], the height of the floods of the river Nile [9], analysis of fractal music [10], the size distribution of meteorites that annually hit the earth [11], self-organized criticality and sandpile slides [12], fragmentation processes which show a power-law distribution at a critical value of a tuning parameter [13], the PSD of fluctuations in the audio power of many musical selections such as Bach's Brandenburg Concertos [14], fluctuations in neuro-membranes [15], sunspot numbers over an 11- or 22-year period of the solar cycle [16], the distribution of numbers in continued fraction expansions [17] and the realizations of nowhere-differentiable functions like the Weierstrass-Mandelbrot function [18], to name a few, all of which exhibit some sort of power-law behavior. A large and varied collection of such examples can be found in [19].

2.5.1 Scale Invariance

A 1/f spectrum, as mentioned above, implies a more general form, i.e. 1/f^γ, where the exponent γ typically lies between 0 and 2.
This makes white noise (γ = 0) and Brownian motion or brown noise (γ = 2) subsets of the general set of 1/f^γ type noises. However, a true 1/f spectrum is one in which the exponent γ = 1 and it is characterized by the PSD

S_X(f) = c/f (2.37)

where c is independent of frequency. Integrating Eqn. (2.37) gives a power that tends to infinity as frequency tends to zero. There is a fair amount of discussion in the literature about the existence of a low-frequency limit of this noise, and it would seem likely that there should be a plateau in the graph of a measured 1/f process. However, measurements on operational amplifiers were conducted down to 0.5 µHz in [20], which corresponds to one cycle in three weeks, and no leveling of the spectrum was found. A variance analysis of a 1/f noise source was carried out in [21] down to 3.3 µHz on a pair of noisy carbon resistors inserted in a Wheatstone bridge arrangement. The spectrum of the noise source was still found to be approximately 1/f and the variance was found to increase logarithmically. These results extended similar variance analysis results reported in [22] to lower frequencies. A high-frequency limit for 1/f noise depends on its intensity because at higher frequencies the 1/f noise intensity eventually falls below the thermal and shot noise intensities, which represent a lower limit of the noise present in the system. For both the low- and high-frequency limits there exist plausible physical explanations as to why one should not expect infinite noise power: some form of low-pass filtering in the system sets the high-frequency limit, and discovery of the low-frequency limit requires allocating ever larger amounts of time per measurement. In general it is safe to assume that there exist reasonable upper and lower bounds in frequency. Thus the integrated PSD, or the power over some finite frequency range, is given by

P_X = (1/2π) ∫_{f1}^{f2} S_X(f) df.
(2.38)

An interesting perspective on this question of the low-frequency and high-frequency limits of 1/f noise is provided in [23], which is based on the total number of decades of frequency that could conceivably exist in a 1/f spectrum. The lower limit is 10^{−17} Hz, based on the age of the universe, and the upper limit is 10^{23} Hz, which corresponds to the time it takes for light to travel the classical radius of the electron. This gives a span of around 40 decades, which is currently beyond the range of any measurement equipment. The level of 1/f noise, even at extremely low frequencies, is quite low, and even in the unlikely case of the 40-decade span, the noise power would be insignificant compared to the DC power used in the biasing of a general circuit.

2.5.2 Stationarity and Gaussianity

The answer to whether 1/f noise is stationary or nonstationary is strongly related to whether there exists a lower limit to the frequency at which the noise process exhibits 1/f behavior. It is generally believed that if a lower cutoff does exist, then there will be a leveling off of the 1/f process, as discussed earlier. Experimental evidence provided in [22], [24] and [25] indicates statistical fluctuations unlike those one would find in a stationary system. This was, however, followed by a theoretical analysis of the variance of Gaussian random noise superimposed with a 1/f spectrum and an experimental analysis of 1/f noise in carbon resistors and bipolar transistors, as reported in [26]. The theory was developed with an inherent assumption of stationarity, and experimental results were found to match closely with theoretical predictions, thereby validating the underlying stationarity assumption. The authors in [27] take up the same issue and argue that the autocorrelation of a stationary random process is dependent only on the time difference, as indicated in Eqn. (2.13).
The PSD is defined as the Fourier transform of the autocorrelation function and is given by

S_X(f) = ∫_{−∞}^{∞} R_X(τ) exp(−j2πfτ) dτ. (2.39)

To determine the asymptotic behavior of S_X(f) as f → 0, one uses Leibniz's theorem to take the derivative of Eqn. (2.39) in the limit as

lim_{f→0} dS_X(f)/df = lim_{f→0} ∫_{−∞}^{∞} ∂/∂f [R_X(τ) exp(−j2πfτ)] dτ
                     = lim_{f→0} ∫_{−∞}^{∞} (−j2πτ) R_X(τ) exp(−j2πfτ) dτ
                     = −j2π ∫_{−∞}^{∞} τ R_X(τ) dτ. (2.40)

Since R_X(τ) is necessarily even, ∫_{−∞}^{∞} τ R_X(τ) dτ = 0. This means that the derivative dS_X(f)/df vanishes as f → 0, which would result in a flattening of the PSD curve. This contradicts the measurements in [20] and [21] discussed above. This leads to the suggestion that 1/f noise is the result of a nonstationary process, and the authors in [27] proceed to derive the time-dependent mean and autocorrelation functions for a general nonstationary stochastic process. Perhaps the best way to resolve this is to accept the analysis of Keshner in [28], wherein he shows that the nonstationary autocorrelation function produced by his 1/f noise model can be written as the sum of two terms: one dependent on time and the other dependent only on the time difference τ. If the underlying assumption is that the time of observation is much shorter than the total elapsed time since the process began, then the correlation function can be considered "almost stationary".

Another important question is whether 1/f noise is a Gaussian process. A large number of random processes are Gaussian or are assumed Gaussian in some limiting sense. Gaussian random processes require only up to second-order statistics (mean and autocorrelation) for complete characterization. For non-Gaussian random processes, one needs arbitrarily higher-order moments for a complete characterization of the process.
The author in [29] investigated the amplitude distribution of 1/f noise from a carbon resistor in the frequency range 8 Hz to 10 kHz for various sample lengths and found that they all exhibited Gaussian behavior. His analysis also showed that the variances of sample sets taken at different times followed an exponential distribution and that the 1/f noise process displayed a weaker form of stationarity. A more comprehensive treatment can be found in [30], wherein the author performed measurements on five different sources of 1/f noise: A. current noise near threshold in a MOSFET, B. a 1 MΩ carbon resistor, C. current noise in the base-collector junction of a reverse-biased n-p-n Si transistor, D. voltage fluctuation at the output of a common-emitter n-p-n transistor and E. current noise in a reverse-biased p-n diode. The PSDs in the range 0.03 Hz to 5 kHz were determined and were all found to be of the 1/f type. In the case of sources A, B and C, the probability density functions were found to be Gaussian in nature. Source D showed some deviation from a Gaussian structure, while source E was significantly different, perhaps due to differences in the mechanisms generating the noise in these five cases. Linearity of the noise waveforms produced was measured in the following way: a string of 2N + 1 noise values was digitized, producing a sequence X_n, −N ≤ n ≤ N. The digitized values were scaled by a fixed amount to produce v_n in some range about zero. The time behavior of this subset was averaged over the entire ensemble, giving <v_n | v_0 = V_0>, where <> indicates averaging. Note that this quantity, <v_n | v_0 = V_0>, is related to the autocorrelation; hence for a linear process it should be independent of v_0, while for a nonlinear process it should depend on v_0.
It was found that the linearity degraded going from source A to E, which led to the interpretation that the more Gaussian a noise source is, the more independent it is of its starting point v_0. This does not necessarily allow one to deduce the linearity or nonlinearity of the system, as pointed out in [31], wherein it was noted that linearity or nonlinearity can only be attributed to the underlying equations that describe the physical property and not necessarily to the physical process itself. A more recent analysis was made in [32], wherein artificially generated 1/f noise-like waveforms produced by some elementary nonlinear dynamical systems were considered, and it was found that the probability density functions for all these systems were distinctly non-Gaussian.

2.5.3 The Empiricism of Hooge

As a departure from the theories of the existence of 1/f noise prevailing at the time, the authors in [42] showed that it is possible to get 1/f noise in metal films, in particular thin gold films. Based on his experiments and others reported in the literature, Hooge also proposed in [43] that 1/f noise in all homogeneous materials can be represented by an empirical formula given by

S_X(f)/R_o² = α_H / (N_tot f). (2.41)

Here N_tot represents the total number of carriers in the specimen, R_o represents its mean resistance and S_X(f) the PSD of the resistance fluctuations. The parameter α_H is the "universal" Hooge parameter and was claimed to be equal to 2 × 10^{−3}. This equation satisfied, in an empirical way, several cases of resistors showing 1/f noise at room temperature. The parameter has been modified several times since, each time representing a seemingly smaller subset of materials which exhibit 1/f noise. Discrepancies of more than a factor of 10 have since been found in the experiments in [40], where an increase in the parameter value with increasing temperature was observed.
In [44] and [45], the authors took films made of the same metal, having roughly the same resistivity and made with the same technique, and found that the parameter varied by more than an order of magnitude. Hooge's hypotheses were based on the presence of 1/f noise in metals and could not satisfactorily provide parameter values for liquids. In an experiment on ionic solutions, the authors in [46] found that the parameter α_H varied according to the ionic concentration of the liquid. Although Hooge's parameter was a grand attempt at unifying theories of 1/f noise, it is now treated mostly as a historical anecdote, which, in itself, is a grand achievement.

2.5.4 Number Fluctuations of Charge and Mobility

Fluctuations in resistance that show a 1/f distribution arise due to a fluctuation in some number: either the number of charge carriers or the mobility of the charge carriers. As seen in Eqn. (2.41), Hooge's law suggests an inverse dependence of the noise on the number of charge carriers. This suggests that a possible mechanism for 1/f noise in continuous homogeneous metals is a number fluctuation of the carriers. The mechanism of carrier trapping, in which carriers are trapped in a material for varying finite time intervals, could plausibly cause a fluctuation in the number of carriers. However, in metals the number of traps is very low and the carriers are unlikely to encounter a large number of traps, which eliminates number fluctuations as a plausible mechanism of 1/f noise in metals; this, of course, directly contradicts Hooge's law, which requires an inverse dependence on the number of carriers. One can therefore exclude trap mechanisms as a source of carrier-number fluctuations in metals. The situation is different for semiconductors, where it is possible to find a large number of traps which can provide a range of lifetimes sufficient to account for 1/f noise.
This is the basis of the model proposed by McWhorter, which is outlined later in this chapter. The alternative to fluctuation in the number of carriers is fluctuation in mobility. In [47], 1/f noise in the open-circuit thermo-electromotive force of intrinsic germanium and of extrinsic germanium and silicon was observed. Based on the calculations therein, it was concluded that mobility fluctuations could be the only source of the 1/f fluctuations in the Hall coefficient of the material and would also support Hooge's law, provided the mobility fluctuations of the carriers are independent. This has been explored further in [48]. However, independent mobility fluctuations associated with individual carriers as a source of 1/f noise would require the mobility fluctuations to exhibit long characteristic times: much longer than the relaxation times actually observed, which are of the order of picoseconds. This places a cloud of uncertainty over the theory of mobility fluctuations.

2.5.5 Surface Effect or Bulk Effect

The conductance near the surface of a semiconductor depends on the magnitude of the charge near its surface. Fluctuations in this charge naturally lead to fluctuations in the value of the conductance. This makes it possible to modulate the conductance of the surface region of a semiconductor by changing the amount of charge near its surface. When Hooge proposed his law in [43] to explain 1/f noise in metals, he believed that 1/f noise had to be an effect of the bulk of the metal. However, papers claiming the opposite appeared, for example [49], wherein after measuring 1/f noise on GaAs resistors it was concluded that 1/f noise is a surface effect. This argument is lent credence by experiments in which surface treatments have been found to vary the levels of 1/f noise, and the absence of surface effects in a JFET also correlates with the low levels of 1/f noise present there.
There still remains speculation about whether 1/f noise is the result of a surface effect or a bulk effect. In [50] it is suggested that both models are valid and that one type of noise will dominate the other depending on the device under study.

2.5.6 Dependence on Mean Voltage, Current and Resistance

From the time that voltage spectra with 1/f fluctuations were first observed in the presence of a steady current, the dependence of 1/f noise on the mean voltage or current has been queried. For a uniform conductor, we know that the voltage spectral density S_V(f) is proportional to V² and the current spectral density S_I(f) is proportional to I² in the region where Ohm's law is obeyed. A long-standing viewpoint is that noise, or fluctuation, is always present in a resistor even in the absence of a mean current; the current in the device serves simply to reveal this fluctuation, not to cause it. The authors in [37] performed experiments on continuous metal films at room temperature and found that samples of pure metals and bismuth had comparable amounts of 1/f voltage noise, with the power spectrum proportional to the square of the voltage measured across the sample. They observed that the PSD of the white noise in these samples is proportional to the resistance of the sample. The resistances of the samples themselves fluctuate, and the white noise fluctuates at the same frequencies. They measured the PSD of the low-frequency fluctuations of the white noise, itself a fluctuating quantity; in effect, they measured the noise of the noise and found that its spectrum was proportional to 1/f. Since this is essentially the same result one would obtain using conventional measurement techniques, i.e. using a nonzero current, they concluded that 1/f noise is indeed produced only by fluctuation in the resistance of the sample and that the current serves to "show" this fluctuation. This experiment was also carried out in [38] on carbon resistors and the results were confirmed.
What the above experiments illustrate is that resistance fluctuation is a possible source of 1/f noise. When a current at RF flows through a resistor which shows 1/f noise in the presence of a direct current, a noise similar to 1/f noise is produced in the sidebands on each side of the center frequency. This is commonly referred to as phase noise and it scales in proportion to the mean-square value of the high-frequency current. This implies that there should be a high degree of correlation between the components of this noisy sideband and the low-frequency 1/f noise. This was confirmed by the authors of [39], who estimated the correlation coefficient to be as high as 0.95. Their experiment was performed on a carbon resistor and they concluded that, because of the high degree of correlation, phase noise can be attributed to resistance fluctuations.

2.5.7 The Effect of Temperature

In [40], the authors performed measurements on the temperature dependence of 1/f noise by analyzing thin films of Cu and Ag on a sapphire substrate. Their observations were over the frequency range 0.2 Hz to 200 Hz and they used a high current density over this band to maximize the ratio of 1/f noise to thermal noise. They found an inverse relationship between the exponent γ of the 1/f noise and temperature: γ increased by roughly 20% at 150 K compared to its value at 390 K, and at a given temperature the thickness of the sample did not affect the amount of 1/f noise present. The effect of the substrate on the temperature dependence was investigated in [41], wherein the same experimental setup as in [40] was used but with both quartz and sapphire substrates. It was discovered that above room temperature the substrate had little effect on the spectra of 1/f noise, but below room temperature the Cu on quartz showed a flattening effect below 300 K and did not consistently reduce with reducing temperature, as was the case with Cu on sapphire.
In the case of Ag, the results were unchanged from those of [40]. This led them to propose that there are two types of 1/f noise: a Type-A noise which depends weakly on temperature and a Type-B noise which is strongly temperature dependent. Hence in Ag the Type-B noise dominated to such an extent that the substrate material did not matter to the eventual noise-temperature dependence, whereas in Cu on quartz the Type-A noise was high, because of which the Type-B noise was only apparent within a specific temperature range. Excellent references on the various resolved and mostly unresolved properties of 1/f noise can be found in several reviews: [51], [52], [53], [54], [55] and [58].

2.6 1/f (Flicker) Noise Models

There are several theoretical models in the literature that have been proposed for 1/f noise, all mathematical in nature. While some of them use physical intuition to explain the phenomenon, others use a broad array of mathematical functions to produce a model for 1/f noise. Several such models are discussed below.

2.6.1 Surface Trapping Model

This model is based on the idea of traps present in a semiconductor. A free carrier is immobilized, or trapped, when it falls into a recombination center (trap). When several such carriers are trapped, they are not available for conduction and, as a result, the resistance of the semiconductor is modulated. If, in the simplest case, a single trap is considered, then the kinetics of the fluctuation are characterized by a single relaxation time or time constant τ_z. If this trapping process obeys Poissonian statistics, then the correlation of the process is purely exponential and its spectrum is Lorentzian. To see this, first note that the PMF of a Poisson random process of rate λ is given by

p_X(k) = e^{−λt} (λt)^k / k! (2.42)

where the k's are integers. Consider a Poisson random process X(t) with rate λ and consider the random process Y(t) = (−1)^{X(t)}.
Thus the process Y(t) starts at Y(0) = 1 and flips back and forth between −1 and +1 at times t_i which are Poisson in distribution. This is also known as a semi-random telegraph signal. Following the steps in [75] we have

Y(t) = +1 if X(t) is even, −1 if X(t) is odd.

Thus we have for the probabilities

P[Y(t) = +1] = P[X(t) is an even integer]
             = e^{−λt} [1 + (λt)²/2! + ...]
             = e^{−λt} cosh(λt) (2.43)

and

P[Y(t) = −1] = P[X(t) is an odd integer]
             = e^{−λt} [λt + (λt)³/3! + ...]
             = e^{−λt} sinh(λt). (2.44)

The mean of this process is given by

μ_Y(t) = E[Y(t)] = (1)P[Y(t) = 1] + (−1)P[Y(t) = −1]
       = e^{−λt} (cosh(λt) − sinh(λt)) = e^{−2λt}. (2.45)

To find the autocorrelation of Y(t), we note that Y(t)Y(t+s) = 1 if there is an even number of events in the interval (t, t+s) and Y(t)Y(t+s) = −1 if there is an odd number of events in that interval. So we have

R_Y(t, t+s) = E[Y(t)Y(t+s)]
           = (1) Σ_{n even} e^{−λs} (λs)^n/n! + (−1) Σ_{n odd} e^{−λs} (λs)^n/n!
           = e^{−λs} Σ_{n=0}^{∞} (−λs)^n/n!
           = e^{−λs} e^{−λs} = e^{−2λs}. (2.46)

Thus we see that the autocorrelation is exponential and the process is WSS. The PSD is evaluated using the one-sided Wiener-Khintchine theorem as

S_Y(f) = 4 ∫_0^∞ R_Y(t, t+s) cos(2πfs) ds
       = 4 ∫_0^∞ e^{−2λs} cos(2πfs) ds
       = 8λ / ((2πf)² + 4λ²). (2.47)

If the rate λ of the Poisson process is related to the relaxation time of the trap by λ = 1/(2τ_z), then the PSD above can be re-written as

S_Y(f) = 4τ_z / (1 + (2πf)² τ_z²). (2.48)

This shows that the PSD is Lorentzian: it falls off as 1/f² at high frequencies and levels off at low frequencies. Considering a large number of traps, each with an associated time constant, one can assume that the spread of these time constants follows some probability distribution which must satisfy

∫_0^∞ p(τ_z) dτ_z = 1 (2.49)

and the PSD of the entire fluctuation N(t) is then given as

S_N(f) = 4 μ_N(t) ∫_0^∞ [τ_z p(τ_z) / (1 + (2πf)² τ_z²)] dτ_z. (2.50)
It was suggested in [59] that if p(τ_z) ∝ 1/τ_z in some interval τ_1 ≤ τ_z ≤ τ_2 and is zero outside this interval, then S_N(f) ∝ 1/f in the range τ_2^{−1} ≤ f ≤ τ_1^{−1}. It was shown in [60] and [61] that 1/f noise would result from a superposition of several such processes with different time constants, each depending exponentially on inverse temperature: τ_z = τ_0 exp(E/k_B T), where E is the activation energy of the process, k_B is Boltzmann's constant, T is the temperature and τ_0^{−1} represents the rate of attempts made to overcome the energy barrier E. The distribution of these activation energies, F_E(E), was assumed almost constant, and since p(τ_z) dτ_z = F_E(E) dE, one obtains p(τ_z) = F_E(E)/(dτ_z/dE) = k_B T F_E(E)/τ_z. Then p(τ_z) ∝ 1/τ_z if F_E(E) is constant. Instead of activation, this form of p(τ_z) can also be obtained by assuming a tunnelling process, as McWhorter did in [62]. He assumed that charge tunnels from the surface of the semiconductor to traps located in the oxide and that the tunnelling time depends exponentially on the tunnelling distance d as

τ_z = τ_0 exp(d/δ) (2.51)

where δ and τ_0 are constants and McWhorter assumed δ ≈ 0.1 nm. For a distribution of time constants between τ_1 and τ_2, it follows that

p(τ_z) dτ_z = dτ_z / (τ_z ln(τ_2/τ_1)). (2.52)

Substituting this equation into Eqn. (2.50), the PSD is found to be

S_N(f) = [4 μ_N(t) / ln(τ_2/τ_1)] ∫_{τ_1}^{τ_2} dτ_z / (1 + (2πf)² τ_z²)
       = [4 μ_N(t) / ln(τ_2/τ_1)] [arctan(2πf τ_2) − arctan(2πf τ_1)] / (2πf) (2.53)

which closely approximates a 1/f spectrum over the specified spread of time constants. Extensions of the original McWhorter form have since appeared in the literature. For example, [63] modified the WSS autocorrelation function of Eqn. (2.46) to

R_Y(t, t+s) = (1/4) e^{−2λs} (1 − e^{−4λt}) (2.54)

which is now dependent on the time t and is hence nonstationary.
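The superposition mechanism of Eqns. (2.46)-(2.53) can be illustrated numerically. The sketch below (the trap count, time constants and sampling values are arbitrary illustrative choices, not taken from the references) sums random telegraph signals whose time constants are spread uniformly in log τ_z, i.e. p(τ_z) ∝ 1/τ_z, and checks the slope of the resulting spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 2**16, 1.0

# Superpose random telegraph signals with time constants spread
# uniformly in log(tau), i.e. p(tau) ~ 1/tau, as in Eqn. (2.52).
taus = np.logspace(1, 4, 30)          # time constants tau_1 ... tau_2
total = np.zeros(n)
for tau in taus:
    lam = 1.0 / (2.0 * tau)           # switching rate of this trap
    # Poisson(lam*dt) switching events per step; an odd cumulative
    # count flips the state, so track the parity of the running sum.
    parity = rng.poisson(lam * dt, n).cumsum() % 2
    total += np.where(parity == 0, 1.0, -1.0)

psd = np.abs(np.fft.rfft(total - total.mean()))**2
f = np.fft.rfftfreq(n, dt)

# Fit the log-log slope inside the band 1/tau_2 << f << 1/tau_1
band = (f > 1e-3) & (f < 1e-2)
slope = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)[0]
print(slope)   # close to -1 for a 1/f spectrum
```

With about ten traps per decade the individual Lorentzian corners merge into a smooth power law, and the fitted slope comes out near −1 over the band between the corner frequencies of the slowest and fastest traps.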
They found that using this modification provided a more accurate estimate of noise in switched MOSFET circuits. They used three examples to demonstrate this: a periodically switched transistor, a ring oscillator and a source follower. The authors in [64] considered heterojunction bipolar transistors and formed a more generic framework of McWhorter's theory which allows any form of the distribution of carrier lifetimes. More specifically, the distribution function was also a function of the position of the carrier in the base region, allowing the density of traps to vary with respect to energy and distance. In [68], the author suggested a generalized model to produce 1/f noise which he called a mechanical model. He perturbed his system with random time-dependent perturbations which had a certain mean-square amplitude and lifetime probability density. This is eventually equivalent to the models detailed above.

2.6.2 Transmission Line 1/f Noise

The author in [28] introduced a simple RC transmission line model which, when fed with white noise, produces 1/f noise at its output. The circuit is shown in Fig. 2.3.

Figure 2.3: Lumped RC transmission line excited by a white noise current source, from [28].

The circuit consists of an infinitely long transmission line fed with a white noise current source of magnitude I at its input. The impedance of this line is

Z(f) = sqrt(R / (j2πf C)) (2.55)

where R and C are the resistance and capacitance per unit length respectively. The PSD of the voltage at the input of the line is

S_X(f) = I² |Z(f)|² = I² R / (2πf C). (2.56)

For a line of infinite length, S_X(f) is proportional to 1/f down to zero frequency. On the other hand, if the line is of finite length, then there exists a lower frequency below which S_X(f) is white, and its value is

f_low = 1 / (2π R C ℓ²) (2.57)

with ℓ being the length of the line.
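Keshner's lumped line is easy to explore numerically. The sketch below (the per-section component values and section count are arbitrary illustrative choices, not taken from [28]) computes the input impedance of a finite N-section RC ladder by folding it back from the far end; driven by a unit white noise current, the voltage PSD is |Z(f)|², which falls as 1/f in the mid band and flattens below f_low as described above:

```python
import numpy as np

# Input impedance of a finite N-section lumped RC ladder (cf. Fig. 2.3).
# Per-section values are illustrative only.
R, C, N = 1e3, 1e-9, 200

def ladder_impedance(f):
    """Fold the ladder back from its far end: series R, shunt C per section."""
    s = 2j * np.pi * f
    z = 1.0 / (s * C)              # innermost shunt capacitor
    for _ in range(N - 1):
        z = R + z                  # series resistance of the next section
        z = z / (1.0 + s * C * z)  # shunt capacitor in parallel with the rest
    return R + z                   # series resistance seen at the input

f = np.logspace(1, 6, 300)
S = np.abs(ladder_impedance(f))**2   # voltage PSD for a unit white current drive

# In the mid band the PSD should fall as 1/f (log-log slope close to -1)
band = (f > 1e3) & (f < 1e4)
slope = np.polyfit(np.log(f[band]), np.log(S[band]), 1)[0]
print(slope)
```

With these values f_low ≈ 4 Hz, so the 1/f region spans several decades above it before the lumped approximation breaks down near the per-section corner frequency.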
Keshner went on to derive an autocorrelation function for this model and showed that it consists of a sum of two terms, a nonstationary one and a stationary one, making the overall function nonstationary. Keshner also showed that if the time of observation is much smaller than the time elapsed since the system was turned on, then this autocorrelation function can be considered almost stationary.

2.6.3 Pole Placement

1/f^γ noise generators were developed in [65] by using a generic summation model that sums N Lorentzian spectra. If the poles are placed such that their density is uniformly distributed with respect to the logarithm of frequency, then S_X(f) represents a 1/f spectrum. The resultant power spectrum is

S_X(f) = (2σ/π) Σ_{h=1}^{N} f_H / (f_H² + f²) (2.58)

where σ is the variance of the considered quantity and f_H represents the frequency of the poles. If the discrete sum is replaced by an integral, the poles f_H assume a continuum of values and the PSD is expressed as

S_X(f) = (2σ/π) ∫_0^∞ [f_H / (f_H² + f²)] D_{f_H} df_H (2.59)

where D_{f_H} represents the distribution of poles. If D_{f_H} ∝ 1/f_H, then a 1/f spectrum is obtained. The motive behind their work was to find the appropriate number of poles (Lorentzian spectra) and the locations required to generate a 1/f distribution. They used the circuit shown in Fig. 2.4, which consists of several parallel RC circuits connected in series to feed the input of an amplifier. Each resistor in the parallel RC combinations contributes thermal (white) noise.

Figure 2.4: A 1/f^γ noise generator from [65].

The PSD of the noise voltage at the output of the RC circuits, S_X(f), is of the form in Eqn. (2.58), with each f_H = 1/(2πC R_H). If e_n, i_n and Y_i are all zero, then the amplifier output noise spectrum would be S_V(f) = A_o² S_X(f), where A_o is the frequency-independent voltage gain of the amplifier, which is the desired form.
However, there are the R_i and C_i frequency-dependent terms that have to be accounted for. If

R_i ≫ Σ_{h=1}^{N} R_h (2.60)

and

C_i ≪ C/N (2.61)

then the effect of R_i and C_i can be neglected and it is possible to get a 1/f response over a wide frequency range.

In [66] the finite form of the model in [28] was considered, which is nothing but a cascade of N first-order filters having a transfer function of the form

H_a(s) = A ∏_{i=1}^{N} (s − s_{oi}) / ∏_{i=1}^{N} (s − s_{pi}) (2.62)

for an appropriate gain term A. The N poles s_{pi} are uniformly distributed with respect to log f. The output spectrum of this filter with a white noise input is

S_o(f) = |H_a(f)|² S_i(f) ∝ |H_a(f)|². (2.63)

The more poles per frequency decade, the greater the accuracy of the 1/f spectrum. Corsini and Saletti went on to realize this analog model in the z-domain to produce a digital 1/f noise generator. The transfer function in the z-domain is

H(z) = A ∏_{i=1}^{N} [1 − z^{−1} exp(s_{oi} T)] / ∏_{i=1}^{N} [1 − z^{−1} exp(s_{pi} T)] (2.64)

which follows from the mapping of a pole/zero in the analog domain to a pole/zero in the z-domain given by s − s_X → 1 − z^{−1} exp(s_X T), where T is the sampling interval. The digital filter implementing this 1/f sequence is shown in Fig. 2.5.

Figure 2.5: Digital 1/f^γ noise generator from [66].

In [67] the RC transmission line model was considered for two special cases depending on the impedance of the load: a) an open-circuited load and b) a short-circuited load. For the open-circuit case with reflection coefficient Γ = 1, the input impedance of the line can be written as

Z_I^open(s) = Z_0(s) cosh(ℓ√(sRC)) / sinh(ℓ√(sRC)) (2.65)

where s is the usual Laplace operator and ℓ is the length of the line, while for the short-circuit case the input impedance with reflection coefficient Γ = −1 can be written as

Z_I^short(s) = Z_0(s) sinh(ℓ√(sRC)) / cosh(ℓ√(sRC)). (2.66)
Using rational product approximations for the sinh and cosh terms, the respective input impedances can be re-expressed as

    Z_I^open(s) = [1/(ℓCs)] ∏_{n=1}^{∞} [1 + 4ℓ²sRC/((2n−1)²π²)] / ∏_{n=1}^{∞} [1 + ℓ²sRC/(n²π²)]    (2.67)

and

    Z_I^short(s) = ℓR ∏_{n=1}^{∞} [1 + ℓ²sRC/(n²π²)] / ∏_{n=1}^{∞} [1 + 4ℓ²sRC/((2n−1)²π²)].    (2.68)

The open-circuit model acts like an integrator and hence its PSD varies as 1/f². The short-circuit model does show a 1/f response, but only within a limited frequency range; if the frequency becomes too high or too low, the characteristic tends to flatten out.

2.6.4 Fractional Noises

Fractional noises or fractional Brownian motions (fBM) are random processes that form a special class of Gaussian processes, typically characterized by a parameter 0 < H < 1. They differ from typical Markov processes in the sense that long spans of these random sequences show strong dependence, as made precise in [73]. A random process X(t), t ≥ 0 is called a Wiener process or Brownian motion (BM) if the following conditions are satisfied:

1b. With probability 1, X(0) = 0, i.e. the process starts at the origin, and X(t) is a continuous function of t.

2b. ∀t ≥ 0, s > 0, the increment X(t + s) − X(t) is normally distributed with mean µ = 0 and variance σ² = s. Thus,

    P(X(t + s) − X(t) ≤ x) = [1/√(2πs)] ∫_{−∞}^{x} exp(−u²/2s) du.    (2.69)

3b. If 0 ≤ t₁ ≤ t₂ ≤ … ≤ t_{2n}, the increments X(t₂) − X(t₁), X(t₄) − X(t₃), …, X(t_{2n}) − X(t_{2n−1}) are independent.

4b. The increments X(t + s) − X(t) are stationary, which means they have distributions independent of t.

For t > 0, s > 0 the autocovariance of a BM can be derived as follows:

    Var[X(t) − X(s)] = E[(X(t) − X(s) − E[X(t) − X(s)])²]
                     = E[((X(t) − E[X(t)]) − (X(s) − E[X(s)]))²]
                     = E[(X(t) − E[X(t)])²] − 2E[(X(t) − E[X(t)])(X(s) − E[X(s)])] + E[(X(s) − E[X(s)])²]
                     = Var[X(t)] − 2Cov[X(t), X(s)] + Var[X(s)].
Rearranging gives

    Cov[X(t), X(s)] = (1/2)(Var[X(t)] + Var[X(s)] − Var[X(t) − X(s)]).    (2.70)

For the autocovariance this gives

    K_X(t, s) = (1/2)[t + s − (t − s)] = s,  t > s
    K_X(t, s) = (1/2)[t + s − (s − t)] = t,  s > t

or, compactly,

    K_X(t, s) = min(t, s).    (2.71)

The autocorrelation is equal to the autocovariance for a zero-mean process. Thus for a BM the autocorrelation is

    R_X(t, s) = K_X(t, s) = min(t, s)    (2.72)

which is time-dependent. A random process is said to have a mean-square derivative X′(t) if

    lim_{ε→0} E{[(X(t + ε) − X(t))/ε − X′(t)]²} = 0.    (2.73)

It can be shown [74] that the mean-square derivative of X(t) exists if ∂²R_X(t, s)/∂t∂s exists. If so, then the mean and autocorrelation of X′(t) are

    E[X′(t)] = (d/dt) E[X(t)] = µ′_X(t)    (2.74)

and

    R_{X′}(t, s) = ∂²R_X(t, s)/∂t∂s.    (2.75)

Using Eqn. (2.72), we can compute

    ∂R_X(t, s)/∂s = U(t − s) = { 1, t > s;  0, t < s }

where U(t − s) is the unit step function and is discontinuous at t = s. Thus ∂²R_X(t, s)/∂t∂s does not exist and one can say that a BM does not have a mean-square derivative. However, in the generalized sense we can still write

    ∂²R_X(t, s)/∂t∂s = (∂/∂t) U(t − s) = δ(t − s)    (2.76)

where δ(t − s) is the Dirac delta function. Thus the autocorrelation of the mean-square derivative of a BM is a delta function. This implies a complete lack of correlation between adjacent samples of the process, which is nothing but white noise. To see this in the frequency domain, the PSD of the mean-square derivative of the BM is

    S_{X′}(f) = ∫_{−∞}^{∞} R_{X′}(τ) e^{−j2πfτ} dτ = ∫_{−∞}^{∞} δ(τ) e^{−j2πfτ} dτ = 1    (2.77)

which is independent of frequency. Note that t − s has been replaced by τ above. This establishes the important observation that white noise can be thought of as the derivative of a BM. White noise is a 1/f^α process with exponent α = 0. Calculating the PSD of a BM itself is not so trivial, since one look at Eqn. (2.72) will convince that the condition

    ∫_{−∞}^{∞} |R_X(τ)| dτ < ∞    (2.78)

will not be satisfied, and this is a necessary condition for calculating the spectrum using the Wiener–Khintchine theorem.
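The autocovariance result of Eqn. (2.71) is easy to verify numerically: simulating many Brownian paths as cumulative sums of Gaussian increments and averaging X(t)X(s) recovers min(t, s). The path count, step size and the chosen t, s below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 100_000, 50, 1.0
dt = T / n_steps

# Each row is one sample path of BM on [0, T] starting at 0 (condition 1b)
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)

t_idx, s_idx = 39, 19              # sample index j corresponds to time (j+1)*dt
# t = 0.8, s = 0.4; for a zero-mean process Cov = E[X(t) X(s)]
cov = np.mean(paths[:, t_idx] * paths[:, s_idx])
```

With 100 000 paths the Monte Carlo estimate lands within about ±0.005 of the exact value min(0.8, 0.4) = 0.4.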
However, it can be numerically shown that the spectrum of a BM falls off as 1/f², i.e. the exponent is 2. An example of a sample realization of a 1-D BM is shown in Fig. 2.6.

Figure 2.6: 1D Brownian motion realization.

Brownian motion is considered scale invariant. Intuitively this means that a sample path realization such as that in Fig. 2.6 looks the same, in a probabilistic sense, under any resolution: if the same simulation is run with a different timestep, the resulting sample path will have identical properties. Since one can only simulate/measure with finite precision, the normal distribution of condition 2b above can be reformulated as

    P(X(t + ds) − X(t) ≤ x) = [1/√(2π ds)] ∫_{−∞}^{x} e^{−u²/(2ds)} du    (2.79)

where ds represents the width of the interval of observation of the BM. The result is that the probability distribution function of BM is invariant in scale. For d > 0, replacing s by ds and x by d^{0.5}x does not change the value of the right-hand side of Eqn. (2.69):

    P(X(t + s) − X(t) ≤ x) = P(X(dt + ds) − X(dt) ≤ d^{0.5}x).    (2.80)

Thus changing the temporal scale by a factor d and the spatial scale by a factor d^{0.5} results in a process that is indiscernible from the original, giving statistical self-similarity. In terms of PDFs, this works out as

    f_X(x′ = d^{0.5}x, t′ = dt) = d^{−0.5} f(x, t).    (2.81)

Brownian motion can be quite restrictive for modelling purposes, mainly because it imposes stationary and independent increments which are normally distributed. In [73] the concept of fractional Brownian motion (fBM) was introduced, which dispenses with the condition of independence. fBM with index H (0 < H < 1) is defined as a Gaussian process X(t) satisfying the following conditions:

1f. With probability 1, X(0) = 0, i.e. the process starts at the origin, and X(t) is a continuous function of t.

2f.
∀t ≥ 0, s > 0, the increment X(t + s) − X(t) is normally distributed with mean µ = 0 and variance σ² = s^{2H}. Thus,

    P(X(t + s) − X(t) ≤ x) = [1/√(2πs^{2H})] ∫_{−∞}^{x} exp(−u²/(2s^{2H})) du.    (2.82)

3f. The increments X(t + s) − X(t) are stationary, which means they have distributions independent of t.

In comparison to BM, note the difference in the variance and the absence of condition 3b (independent increments). An example of a sample realization of a 1-D fBM process with H = 0.2 is shown in Fig. 2.7 and one with H = 0.8 is shown in Fig. 2.8. Both plots were generated using the "wfbm" command in MATLAB.

Figure 2.7: 1D fBm realization with H = 0.2.

Brownian motion is a special case of fBM with H = 0.5; indeed, functions defined by fBM cannot have independent increments except in this Brownian motion case. The autocovariance of the process is given by

    K_X(t, s) = (1/2)(t^{2H} + s^{2H} − |t − s|^{2H}).    (2.83)

Again, as a generalization of Eqn. (2.81),

    f_X(x′ = d^H x, t′ = dt) = d^{−H} f(x, t).    (2.84)

Figure 2.8: 1D fBm realization with H = 0.8.

Just as white noise is the derivative of Brownian motion, a generic 1/f^α noise x(t) can be defined by the fractional differential equation

    d^{α/2} x(t) / dt^{α/2} = w(t)    (2.85)

where w(t) is a white noise process and α is any number between 0 and 2. From the theory in [76], it can be shown that the solution to the fractional differential equation in Eqn. (2.85) is given by

    x(t) = [1/Γ(α/2)] ∫_0^t (t − τ)^{α/2−1} w(τ) dτ    (2.86)

where Γ(·) is the Gamma function. This is equivalent to a linear system driven by white noise with an impulse response function

    h(t) = t^{α/2−1} / Γ(α/2)    (2.87)

whose Laplace transform is 1/s^{α/2}, thereby giving a noise with an arbitrary value of the exponent. This was suggested in [77]. There are also digital models which generate Brownian motion in the z-domain by starting out with a difference equation describing a certain process.
The difference equation

    X_{k+1} = X_k + w_k    (2.88)

describes a discretized random process X_k, where w_k represents an independent and identically distributed (IID) random variable with variance K∆k. This is the general random-walk formula as shown in [75]. Taking its z-transform yields

    H(z) = 1/(1 − z^{−1})    (2.89)

which results in the PSD

    S_X(f) = K∆k / [2 sin(πf∆k)]².    (2.90)

This approximates a BM at low frequencies, since sin(ω) ≈ ω for small ω. Hence at low frequencies the PSD is

    S_X(f) ≈ (K/∆k) / (2πf)² ∝ 1/f²    (2.91)

which indicates a BM. In [78], the author proposed a fractional extension of this concept and used a digital model defined by

    H_f(z) = 1/(1 − z^{−1})^{α/2}    (2.92)

and proceeded to derive the discrete autocorrelation and autocovariance functions of fBM analogous to the continuous versions discussed earlier. Hence the PSD is also an extension of the BM case and can be described as

    S_X(f) = K∆k / [2 sin(πf∆k)]^α    (2.93)

which for small frequencies can be approximated by

    S_X(f) ≈ K∆k^{1−α} / (2πf)^α    (2.94)

giving a noise model with arbitrary exponent α. In [79], the author takes a similar approach by taking a power-series expansion of Eqn. (2.92), given as

    H(z) = 1 + (α/2) z^{−1} + [(α/2)(α/2 + 1)/2!] z^{−2} + …    (2.95)

where the coefficients of this series give the pulse response of the transfer function. The k-th coefficient in Eqn. (2.95) can be expressed as

    h_k = ∏_{n=1}^{k} (α/2 + n − 1)/n    (2.96)

and the pulse response values can be evaluated using the recursive algorithm

    h_0 = 1,  h_k = [(α/2 + k − 1)/k] h_{k−1}.    (2.97)

Another variation of Brownian motion is a Lévy process, which does not assume a Gaussian distribution although the increments are independent and stationary. A Lévy process is said to be stable if d^{−1/γ} X(dt) and X(t) have the same distribution for all d > 0 and for some γ. This idea has been used to generate 1/f noise and is covered in Section 2.6.5.
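The recursion of Eqn. (2.97) is straightforward to implement; convolving the resulting coefficients with white noise gives an approximate 1/f^α sequence. The sequence length is an illustrative choice. For α = 1 the series of Eqn. (2.95) gives the binomial coefficients of (1 − z^{−1})^{−1/2}: 1, 1/2, 3/8, 5/16, ….

```python
import numpy as np

def fractional_coeffs(alpha, n):
    """Pulse response of H(z) = 1/(1 - z^{-1})^{alpha/2} via Eqn. (2.97)."""
    h = np.empty(n)
    h[0] = 1.0
    for k in range(1, n):
        h[k] = (alpha / 2.0 + k - 1.0) / k * h[k - 1]
    return h

h = fractional_coeffs(alpha=1.0, n=5)

# Shaping white noise with the truncated pulse response approximates 1/f noise
rng = np.random.default_rng(1)
w = rng.normal(size=4096)
x = np.convolve(w, fractional_coeffs(1.0, 512))[: w.size]
```

Truncating the pulse response at a finite length limits how low in frequency the 1/f^α behavior extends, analogous to using finitely many poles in the analog models.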
2.6.5 Power-Law Shot Noise

The authors in [80] revisited the method of generation of shot noise, which can be described in the time domain by a Poisson pulse generator of rate µ feeding a linear time-invariant filter with a particular impulse response; the filter output is shot noise. The process is demonstrated graphically in Fig. 2.9.

Figure 2.9: A general shot noise generator from [80].

The amplitude distribution of shot noise approaches a Gaussian distribution as the rate of the Poisson process increases [81]. If this rate is much greater than the reciprocal of the characteristic time duration of the impulse response function, then the amplitude distribution of the resulting noise will approach a Gaussian form. An example of such an impulse response is a decaying exponential, and a shot noise constructed from such an impulse response function tends to have a Gaussian amplitude distribution as the rate µ increases. Shot noise can be expressed as an infinite sum of such impulse response functions as follows:

    I(t) = Σ_{k=−∞}^{∞} h(t − t_k)    (2.98)

where the times t_k are random events with a Poisson distribution of rate µ. The impulse response functions can be either deterministic or stochastic, although in the stochastic case it is unlikely that accurate results can be obtained for higher than first-order statistics. When the impulse response function is a decaying power law, the characteristic time can become arbitrarily large or small, as a consequence of which the amplitude distribution ceases to be Gaussian. Shot noise generated using such an impulse response function is called power-law shot noise. The impulse response can be described as

    h(K, t) = K t^{−β},  A ≤ t < B
    h(K, t) = 0,  otherwise.

An example of a power-law impulse response function, with β = 1/2, is shown in Fig. 2.10.
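The construction of Eqn. (2.98) can be sketched by binning Poisson event times onto a time grid and convolving with a discretized power-law impulse response. The rate µ, the constants K and β, and the window [A, B) below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, K, beta = 50.0, 1.0, 0.5            # Poisson rate, power-law parameters
A, B, T, dt = 0.1, 10.0, 100.0, 0.01    # impulse window [A, B), total time, step

# Discretized impulse response h(K, t) = K t^{-beta} on [A, B), zero elsewhere
t = np.arange(0.0, B, dt)
h = np.where((t >= A) & (t < B), K * np.clip(t, A, None) ** (-beta), 0.0)

# Poisson point process on [0, T]: event count then uniform event times
n_events = rng.poisson(mu * T)
t_k = rng.uniform(0.0, T, n_events)
train = np.zeros(int(T / dt))
np.add.at(train, (t_k / dt).astype(int), 1.0)

I = np.convolve(train, h)[: train.size]  # shot noise I(t) as in Eqn. (2.98)
```

After the initial transient (one impulse-response length), the mean of I(t) settles near Campbell's-theorem value µ∫h(t)dt, which for these parameters is µ · 2K(√B − √A).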
The authors then derive statistical properties for generic power-law shot noise and find that the amplitude probability density function, the autocorrelation function and the PSD all follow a power law.

Figure 2.10: A power law impulse response function with β = 1/2.

Since a power-law dependence indicates the presence of all time scales, they conclude that this implies fractal behavior. The amplitude probability density function follows a Lévy-stable form for exponent β > 1. If the parameters A and B are equal to 0 and ∞ respectively, then it can be shown [80] that the PSD of the power-law shot noise is

    S_I(f) ∝ 1/(2πf)^α    (2.99)

where α = 2(1 − β). If α = 1, then this becomes a model for 1/f noise. In [82] an impulse response of the form h(t) = t^{−1/2} was used as a model for 1/f noise; the power-law shot noise approach above is a generalization of that, with the special case α = 1 being a model for 1/f noise.

2.6.6 1/f Noise from Chaos

Figure 2.11: A general 1-D dynamical system consisting of a nonlinear function and a recursive loop from [32].

The authors in [32] use elementary concepts of chaotic dynamical systems to generate sequences of 1/f noise (referred to as colored noise in [32]). They use a generic discrete nonlinear function with a recursive loop providing unit-delay feedback, as shown in Fig. 2.11. The assumption is that the nonlinear function has no delay and the output of the function is obtained iteratively as

    x_{n+1} = g_n f(x_n)    (2.100)

where the subscript n denotes discrete time and g_n is a gain factor. After a unit delay, x_{n+1} is transferred back to the input of the nonlinearity, which instantaneously yields x_{n+2} = g_{n+1} f(x_{n+1}). The function f(·) is chosen carefully so as to satisfy the conditions for chaos (Chap.
10 in [33]), one of the features of which is that a chaotic function can generate stochastic-looking sequences using completely deterministic rules, see [34]. A popular example of a chaotic nonlinear function is the logistic map, expressed as

    f_λ(x) = λx(1 − x)    (2.101)

where λ is the parameter of this function and typically takes the value 4. The plot of the values of this function on the ordinate versus the parameter values on the abscissa is called a bifurcation diagram; for the logistic map it is shown in Fig. 2.12.

Figure 2.12: The bifurcation diagram for the logistic map.

The fixed points of any dynamical system f_c(x) with parameter c are the solutions of the equation f_c(x) = x. For the logistic map in Eqn. (2.101), the fixed points are x = 0 and x = 1 − 1/λ. By fixed point it is meant that once an orbit hits a fixed point at time n ≥ 0, it stays at that point for all time n′ > n. If the orbit hits x = 1 at time n, then it is sent to the fixed point x = 0 at time n + 1, making x = 1 an eventually fixed point. The authors consider the special case λ = 4g, including a constant gain term, and restrict the range of x to the interval [0, 1] along the lines of [35]; the fixed points are then 0 and 1 − 1/4g and the eventually fixed point is 1. At roughly λ = 3, a pair of paths on the bifurcation diagram opens up, and this corresponds to a period-doubling bifurcation (g > 0.75), as in [32]. As the parameter is increased still further, there is a seemingly endless cascade of period-doubling bifurcations until the period-3 window is reached, corresponding to the parameter λ′ = 1 + √8, i.e. g′ = (1 + √8)/4. A famous theorem in [36] proves that the existence of a period-3 window in the bifurcation diagram implies chaos. For values of the parameter on one side of λ′ the sequence of values of the map shows a period of 3, while on the other side the periodicity disappears irrespective of the length of the sequence.
The spectrum of such long sequences (10⁷ terms) was computed in [32] by performing an FFT, giving X(f); the power spectrum was found by averaging |X(f)|² over many runs, each having a different initial condition that is not a fixed point and lies inside the interval [0, 1]. This is expressed more precisely as

    S_X(f) = (1/N) Σ_{m=1}^{N} |X_m(f)|²    (2.102)

where N different values of the initial condition were chosen. The averaged PSD for the logistic map for gain values of 0.925, 0.975 and 0.9975 is shown in Fig. 2.13, Fig. 2.14 and Fig. 2.15 respectively.

Figure 2.13: Output spectrum with g=0.925.

Figure 2.14: Output spectrum with g=0.975.

Figure 2.15: Output spectrum with g=0.9975.

The authors then made two modifications to the standard logistic function. The first modification can be written as

    f₁(x) = h(x(4.5 − x)/3.5),  h(x) = 4x(1 − x).    (2.103)

The second modification is

    f₂(x) = h(x(6 − x)/5),  h(x) = 4x(1 − x)    (2.104)

while f₃(x) is the original logistic map. The gain was now chosen as a uniform random variable distributed on [0, 1] and was allowed to vary at each step, resulting in three new time sequences. The PSDs of all three sequences were computed as before and are plotted in Fig. 2.16. All the spectra show some sort of 1/f low-frequency behavior, although their slopes are not exactly 1/f. The authors also computed the probability distributions of these variable-gain sequences and found them to be significantly non-Gaussian.

2.6.7 Self-Organized Criticality (and others)

In [12], it was found that certain dynamical systems with several degrees of freedom evolve naturally into what are called self-organized critical states (a phenomenon termed Self-Organized Criticality, SOC).
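The chaotic-map generator of Eqns. (2.100)–(2.102) can be sketched directly: iterate the logistic map with a constant gain and average the squared FFT magnitude over several random initial conditions. The sequence length and number of runs below are much smaller than the 10⁷-term sequences of [32] and are purely illustrative.

```python
import numpy as np

def logistic_sequence(x0, g, n):
    """Iterate x_{n+1} = g * 4 * x_n * (1 - x_n), i.e. Eqn. (2.100) with lambda = 4g."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = g * 4.0 * x[i - 1] * (1.0 - x[i - 1])
    return x

rng = np.random.default_rng(2)
n, g, runs = 4096, 0.9975, 20
psd = np.zeros(n // 2)
for _ in range(runs):
    # Random initial condition inside (0, 1), avoiding the fixed points
    x = logistic_sequence(rng.uniform(0.01, 0.99), g, n)
    X = np.fft.rfft(x - x.mean())[1 : n // 2 + 1]
    psd += np.abs(X) ** 2 / runs        # ensemble average, Eqn. (2.102)
```

Since 4g ≤ 4, the iterates remain trapped in [0, 1], mirroring the restriction on the range of x noted in the text.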
Figure 2.16: PSD for f₁(x), f₂(x) and f₃(x).

Self-organized critical states are states which are just barely stable. As an example of a system exhibiting self-organized criticality (SOC), they considered a simple sandpile model. If the slope of the pile of sand is too large, then the pile is far from equilibrium and will collapse soon after the slope reaches a critical value. At this critical value, the system is marginally, or critically, stable with respect to small perturbations. It was argued that the phenomenon of 1/f noise is the dynamical response of this critically stable sandpile to small random perturbations. Numerical simulations were performed on cellular automata whose evolution followed pre-defined rules. A 2-D case was considered in which an integer variable z, indexed by the spatial co-ordinates, is updated synchronously according to

    z(x, y) → z(x, y) − 4,
    z(x ± 1, y) → z(x ± 1, y) + 1,
    z(x, y ± 1) → z(x, y ± 1) + 1    (2.105)

where a particular site topples if z exceeds a critical value, say K, at that site. A topple not only reduces the site by 4 units, but adds one unit to each of its neighbors. Fixed boundary conditions were used, i.e. z = 0 at the boundaries. The system starts at some random condition z ≫ K and simply evolves according to the rules above until it stops, i.e. all the z values are less than K. Once this critical state is reached, the dynamics of the system are probed by applying small local perturbations. Depending on the size of the perturbation, clusters of different sizes form. If the size of a cluster is taken to be the number of occupied sites, then a log–log plot of the distribution D(s) of cluster sizes, or avalanches, versus s follows a power law, in other words representing a 1/f process.
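A minimal sketch of the sandpile automaton of Eqn. (2.105): a site with z > K topples, losing 4 grains and giving one to each neighbour, with grains lost across the fixed boundary. The grid size, threshold K = 3 and the single-grain perturbation scheme are illustrative choices, not the exact setup of [12].

```python
import numpy as np

def relax(z, K=3):
    """Topple until every site satisfies z <= K; return the avalanche size
    (number of distinct sites that toppled, i.e. the cluster size)."""
    toppled = np.zeros_like(z, dtype=bool)
    while True:
        over = np.argwhere(z > K)
        if over.size == 0:
            break
        for x, y in over:
            z[x, y] -= 4                    # Eqn. (2.105): site loses 4 grains
            toppled[x, y] = True
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < z.shape[0] and 0 <= ny < z.shape[1]:
                    z[nx, ny] += 1          # each in-grid neighbour gains 1
    return int(toppled.sum())

rng = np.random.default_rng(3)
z = rng.integers(0, 4, size=(20, 20))       # start from a stable random state
sizes = []
for _ in range(500):                        # small local perturbations
    x, y = rng.integers(0, 20, size=2)
    z[x, y] += 1
    sizes.append(relax(z))
```

A histogram of `sizes` on log-log axes approximates the power-law avalanche distribution D(s) described in the text.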
A simulation was run on a 50×50 array and the distribution of avalanche sizes was found to resemble a power law to a first-order approximation. This is shown in Fig. 2.17.

Figure 2.17: SOC power law of the distribution of cluster sizes (simulated avalanche data with a first-order polynomial fit of slope −1.2).

This idea has generated a lot of inter-disciplinary interest in the scientific community and has resulted in further refinement. The concept of evolution into a self-organized critical state is called emergence when applied to the theory of complex systems. In other words, a large dynamical system starting out from some initial condition evolves according to a simple set of rules and finally organizes itself into what is called a complex system. A generalized explanation of this is provided in [70]. To illustrate a cross-disciplinary application of this idea, the theory of complexity applies to the evolution of biological systems, as expounded in [71], wherein it is claimed that species have not evolved into their current state purely by Darwinian natural selection [72] but have undergone a process of evolution which, in a simple case, resembles the evolution of a cellular automaton to reach a critical state, only to be further refined by natural selection. The role of natural selection in the evolution of biological systems is thus reassessed, and it is suggested that emergence and natural selection have proceeded hand-in-hand as biological systems evolved to their current state. A recent alternative explanation for the occurrence of power laws can be found in the work in [85], wherein power-law distributions are generated by a mechanism called "Highly Optimized Tolerance" or HOT. It is shown how systems which have evolved due to mechanisms such as natural selection (biological systems), or by virtue of good engineering practices, exhibit power laws.
These systems, which have evolved to a robust state, show good tolerance to external perturbative effects. The key difference between SOC and HOT is that in SOC the system occupies a state that is close to its non-equilibrium state, also referred to as the "edge of chaos", and can be sent into non-equilibrium by a small random perturbation, while in HOT the system is further away from the critical point and is therefore resistant to small perturbative effects. Such systems are designed as a tradeoff between factors such as yield, performance, resource overhead and sensitivity to risk.

2.6.8 Phase Noise

The phenomenon of phase noise is encountered in circuits containing oscillators and mixers and is characterized as a frequency-domain phenomenon. Phase noise represents the uncertainty in the determination of the carrier frequency: instead of appearing as a perfect delta function at the carrier frequency in the frequency domain, the output occupies a band of frequencies about the carrier whose PSD exhibits a power-law characteristic. This is represented graphically in Fig. 2.18, where df represents the band of frequencies about the carrier fc.

Figure 2.18: Uncertainty in carrier frequency due to phase noise.

A typical oscillator is shown in Fig. 2.19 and its instantaneous output can be represented by the equation

    V_output(t) = V₀ cos(2πf_c t + φ(t)).    (2.106)

The function φ(t) affects the phase of the output signal and its random nature gives rise to phase noise. The PSD of the oscillator output can be related to the PSD of the phase noise according to

    S_output(f) ≈ (P/2) [S_φ(f + f_c) + S_φ(f − f_c)]    (2.107)

where P is the power of the carrier. What now remains is a model for the nature of the PSD of φ(t), S_φ(f). The author in [86] considered an LC tank as a feedback circuit having a bandwidth df. At frequencies close to the carrier, phase errors at the input result in frequency uncertainty at the output.
This is a function of the noise sources of the amplifier and the response of the feedback network. The author quantified the fluctuations due to phase noise in terms of the single-sideband noise PSD, defined as

    L(df) = 10 log [ P_sideband(f_c + df, 1 Hz) / P_carrier ]    (2.108)

where P_sideband(f_c + df, 1 Hz) represents the single-sideband power at a frequency offset df from the carrier, measured in a 1 Hz bandwidth. Note that this representation assumes that variations in amplitude are negligible compared to the variations in phase.

Figure 2.19: A typical oscillator configuration.

Leeson's LTI phase noise model for tank oscillators predicts the following behavior for L(df):

    L(df) = 10 log { (2FkT/P_s) [1 + (f_c/(2Q_L df))²] [1 + df_{1/f³}/|df|] }    (2.109)

where F is an empirical parameter called the device excess noise number, k is Boltzmann's constant, T is the absolute temperature, P_s is the average power dissipated in the tank, f_c is the center frequency of oscillation of the tank, Q_L is the loaded Q, df is the frequency offset from the carrier and df_{1/f³} is the corner frequency between the 1/f³ and 1/f² regions. The typical form of L(df) is shown in Fig. 2.20.

Figure 2.20: A typical plot of the phase noise of an oscillator versus offset from the carrier, showing the 1/f³ region, the 1/f² region and the noise corner between them.

The author hypothesized that this PSD has two major noise contributions: 1) white noise and 2) flicker noise in the amplifier, which are up-converted about the carrier by nonlinearities in the oscillator loop, producing the 1/f² and 1/f³ regions respectively.

2.7 Simulation of Noise in Circuits

The simulation technique behind most circuit simulators, the most popular and widely used being SPICE and its variants, is a formulation of Kirchhoff's Laws, i.e. Kirchhoff's Current Law (KCL) and, less frequently, Kirchhoff's Voltage Law (KVL).
Resistors, capacitors, inductors and voltage-controlled current sources form the basic set of network elements used in circuit simulations. Semiconductor devices are modelled as components made up of an interconnection of these basic elements. Given an interconnection of several such elements in a circuit, a set of differential equations describing the network is set up using KCL and KVL. The most popular formulation technique is called Modified Nodal Analysis, and a typical system of equations can be represented as

    I(x, t) + (d/dt) Q(x) = 0    (2.110)

where I(x, t) represents the memoryless elements and the independent sources, while Q(x) represents the reactive elements, L and C. The state variables are represented by x and consist mainly of nodal voltages. When there are stochastic forcing terms in this equation, the state variables will also be stochastic, which means that to perform a noise analysis one must simulate the statistics of the state variables: the mean is a first-order statistic while the autocorrelation and PSD are second-order statistics. When the stochastic terms are Gaussian, statistics up to second order suffice for complete characterization.

For a SPICE-like noise simulation, first the time-invariant steady-state solution of the nonlinear circuit (the DC solution) is determined, and the nonlinear circuit is linearized about this point. Eqn. (2.110) is then modified and expressed as

    GX + C (d/dt) X = 0    (2.111)

where G and C are n × n matrices and X ∈ ℝⁿ. Adding a current noise source to the above equation gives

    GX + C (d/dt) X + TN(t) = 0    (2.112)

where N(t) represents random noise and T is the incidence matrix mapping the noise source to the appropriate node. If the PSD of the noise source is known (usually assumed white), the PSD of the state variables can be determined from Eqn. (2.112) via

    GH(f) exp(j2πft) + C (d/dt)[H(f) exp(j2πft)] + T exp(j2πft) = 0    (2.113)
where H(f) is the Fourier transform of the impulse response h(t) of the system and denotes a vector of transfer functions from the noise source to X. This leads to

    GH(f) + j2πfC H(f) + T = 0  ⇒  (G + j2πfC) H(f) = −T.    (2.114)

This is a system of linear equations to be solved at each frequency by standard linear algebra routines such as LU decomposition. The matrix of PSDs can then be calculated as

    S_X(f) = H(f) S_N(f) H^T(f)*    (2.115)

where H^T(f)* is the conjugate transpose of H(f). Usually the PSD of a single output is desired, for example the voltage difference between two nodes. This output can be written as a linear combination of the components of the state variables as

    Y(t) = d^T X(t)    (2.116)

where d is a vector of constants; if only the difference between the voltages at two nodes is desired, then d contains only two nonzero entries, a 1 and a −1. The PSD of Y is given by

    S_Y(f) = d^T S_X(f) d = d^T H(f) S_N(f) H^T(f)* d    (2.117)

and one needs to calculate only the vector d^T H(f) instead of the entire matrix H(f). From Eqn. (2.114) we have

    H(f) = −(G + j2πfC)^{−1} T    (2.118)

and

    d^T H(f) = −d^T (G + j2πfC)^{−1} T = V^T(f) T    (2.119)

where the vector V(f) is the solution of the equation

    (G + j2πfC)^T V(f) = −d.    (2.120)

Thus one needs to perform, at every frequency point, a single LU decomposition of the matrix (G + j2πfC)^T with the right-hand side set to −d. This is the classical form of the AC noise analysis, first proposed in [102] and used in the SPICE circuit simulator. Although this method is efficient in calculating the noise response of a circuit with time-invariant excitations, it requires considerably more work if time-varying excitations exist. In the case of the existence of a periodic steady state, the nonlinear circuit can be linearized about this periodic steady state, leading to a time-variant modification of Eqn.
(2.112), which can be written as

    G(t)X + C(t) (d/dt) X + TN(t) = 0    (2.121)

where it is also assumed that the noise sources are cyclostationary. By cyclostationarity it is meant that a random variable has periodic statistics, i.e. its mean, autocorrelation and higher-order statistics are periodic. If only the mean and the autocorrelation are periodic, then the process is said to be wide-sense cyclostationary. Continuing the analysis of Eqn. (2.121) along the lines of the time-invariant case gives a time-varying PSD of the state variable X; the transfer functions are also time-varying. Methods to generate the Fourier series coefficients of the time-varying PSD have been proposed in [103].

2.8 Summary

This chapter presented a summary of the common sources of noise in electronic circuits: thermal, shot and flicker noise. The derivation of the form of thermal noise was provided based on thermodynamical principles. Shot noise was assumed to be a superposition of several random processes that follow Poissonian statistics, and its spectral density was found to be white. In the case of flicker noise, an extensive review of the various theories of its origins was presented and explorations of the properties of flicker noise were highlighted. These include the scale invariance of flicker noise, the nonstationarity of a flicker noise process, the distribution of a flicker noise process, claims about whether flicker noise is caused by surface effects or bulk effects, the dependence of flicker noise on the current, voltage and resistance of the material under investigation, and the effect of temperature on flicker noise.
This was followed by a collection of approaches to modelling flicker noise, including infinite analog and digital transmission-line models, fractionally differenced Brownian motion processes, shot noise shaped by appropriate transfer functions to produce 1/f spectral characteristics, and the generation of flicker-like sequences by iterative action of nonlinear functions. The original theory of the important phenomenon of phase noise in electrical oscillators was reviewed, followed by a description of the common SPICE-like approach to noise analysis in the frequency domain, based on the use of linear time-invariant models for circuit elements.

Chapter 3

Stochastic Differential Equations

3.1 Introduction

Stochastic differential equations (SDEs) extend ordinary differential equations (ODEs) to accommodate the presence of random terms that model unpredictable real-life phenomena. Such random phenomena are found in a variety of places, such as population growth models, option pricing in finance, electrical noise in analog circuits and stochastic control problems, to name a few. In general, the difference between an SDE and an ODE lies in the presence of a random term with specific characteristics that match (in a statistical sense) the characteristics of the real-life process under observation. There is no restriction either on the effect of the insertion of this random term in the equation, i.e. it can be purely additive or it may multiply some deterministic term, or on its amplitude. Section 3.2 is an elementary mathematical introduction to the theory of SDEs and highlights the difference between an ODE and an SDE. Section 3.3 introduces the theory of scalar SDEs more thoroughly and explains the two common interpretations of a general SDE, i.e. the Itô and the Stratonovich forms.
Depending on the interpretation assumed, one can in general obtain different results while solving the same SDE, as illustrated in Subsection 3.3.1, while Section 3.4 extends the theory of scalar SDEs to the vector case. Finally, Section 3.5 provides a resolution on choosing between the Itô and Stratonovich forms when trying to model general real-life phenomena.

3.2 Basic Theory

Langevin was among the first to consider SDEs for modelling the dynamics of Brownian motion [107]. Instead of an ODE such as

dx/dt = a(t, x)        (3.1)

he considered a noisy differential equation of the form

dXt/dt = a(t, Xt) + b(t, Xt)ξt.        (3.2)

The term a(t, Xt) is the deterministic drift coefficient while the term b(t, Xt)ξt represents the stochastic perturbative effect. The term b(t, Xt) is an intensity factor and the ξt are random processes. Based on observations, if the random processes ξt are found to have a constant spectral density, then they are referred to as white noise processes. Since this means that equal weight is assigned to each of the Fourier frequency components of the process, the covariance must be a constant multiple of the Dirac delta function δ(t). Substituting a = 0 and b = 1 in the above equation implies that ξt is the pathwise derivative of a Brownian motion process Bt. Hence one can write

dBt/dt = ξt.        (3.3)

Although the sample paths of the Brownian motion process are nowhere differentiable and are of unbounded variation, the Japanese mathematician K. Itô [112], [113], [114] was able to provide a way out of this problem by introducing a new stochastic process, the Itô stochastic integral, which enables Eqn. (3.2) to be written in differential form as

dXt = a(t, Xt) dt + b(t, Xt) dBt        (3.4)

or in integral form as

Xt(ω) = X0(ω) + ∫_0^t a(s, Xs(ω)) ds + ∫_0^t b(s, Xs(ω)) dBs        (3.5)

where the second integral is not a conventional Riemann integral but an Itô stochastic integral and is evaluated with respect to Brownian motion.
The stochastic calculus so developed requires a modified chain rule formula, and convergence exists not in the pathwise sense but only in the mean-square sense. The following sections express these ideas in some detail and also illustrate common usage and examples of SDEs.

3.3 Scalar SDEs

As mentioned already, a generic SDE can be written in differential form as

dXt = a(t, Xt) dt + b(t, Xt) dBt        (3.6)

and is a symbolic representation of the stochastic integral equation

Xt = X0 + ∫_{t0}^t a(s, Xs) ds + ∫_{t0}^t b(s, Xs) dBs        (3.7)

where Bt is the Brownian motion process (Wiener process). Recall that the Brownian motion process is a random process whose increments are Gaussian and whose initial value is zero. It has zero mean, E[Bt] = 0, a mean square given by E[Bt²] = t for t > 0, and independent increments expressed as E[(B_{t4} − B_{t3})(B_{t2} − B_{t1})] = 0 for all 0 ≤ t1 ≤ t2 ≤ t3 ≤ t4. Although the sample paths of Brownian motion are continuous, they are nowhere differentiable and are not of bounded variation. Thus the second integral in Eqn. (3.7) does not exist in the Riemann sense and is a stochastic integral. The first integral, however, is completely deterministic and is of the conventional Riemann form. For a suitable class of random functions h : [0, T] → ℝ and partitions 0 = t0 < t1 < t2 < . . . < tN = T with maximum spacing Δ, the Itô integral is defined as the mean-square (m.s.) limit of Itô sums in which the integrand is evaluated at the lower end point tj of each sub-interval [tj, tj+1]. In other words,

∫_0^T h(t, ω) dBt = m.s.-lim_{Δ→0} Σ_{j=0}^{N−1} h(tj, ω)(B_{tj+1} − B_{tj}),        (3.8)

where m.s.-lim_{Δ→0} denotes the limit in the mean-square sense. Note the difference in this case with conventional calculus: there is equality here only in the mean-square sense. The limit can also be evaluated at points other than the starting point of each sub-interval; arbitrary evaluation points within each sub-interval can be chosen.
For instance, if h(t, ω) = Bt(ω) and the evaluation point is τj = (1 − λ)tj + λtj+1 with 0 ≤ λ ≤ 1, the mean-square limit is equal to (1/2)B_T² − (1/2 − λ)T. The Itô integral corresponds to the case λ = 0. R. L. Stratonovich proposed an alternative stochastic integral [115] wherein the integrand is evaluated at the midpoint of each sub-interval, i.e. τj = (tj + tj+1)/2. This is called the Stratonovich integral, which corresponds to λ = 1/2. So,

∫_0^T h(t, ω) ∘ dBt = m.s.-lim_{Δ→0} Σ_{j=0}^{N−1} h((tj + tj+1)/2, ω)(B_{tj+1} − B_{tj})        (3.9)

and the Stratonovich SDE is written in differential form as

dXt = a(t, Xt) dt + b(t, Xt) ∘ dBt.        (3.10)

The symbol ∘ is used to indicate that the equation under consideration is being interpreted in the Stratonovich sense, and the Stratonovich integral equation becomes

Xt = X0 + ∫_{t0}^t a(s, Xs) ds + ∫_{t0}^t b(s, Xs) ∘ dBs.        (3.11)

The solution of the Itô SDE in Eqn. (3.7) is a diffusion process with transition probability density p = p(s, x; t, y) that satisfies the Fokker-Planck equation [116]

∂p/∂t + ∂(ap)/∂y − (1/2) ∂²(σp)/∂y² = 0        (3.12)

with σ = b². The chain rule for Itô stochastic calculus differs from deterministic calculus because of the inclusion of an additional term, which arises because the expectation of the squared increment of Brownian motion, E[(ΔB)²], on a time interval of length Δt is equal to Δt; it is therefore a first-order term. Letting Yt = U(t, Xt), where U : [t0, T] × ℝ → ℝ has continuous second-order partial derivatives and Xt is a solution of the Itô SDE (Eqn. (3.7)), the differential of Yt satisfies the equation

dYt = L⁰U(t, Xt) dt + L¹U(t, Xt) dBt.        (3.13)

The partial differentiation operators are defined as

L⁰U = ∂U/∂t + a ∂U/∂x + (1/2) b² ∂²U/∂x²        (3.14)

and

L¹U = b ∂U/∂x        (3.15)

with all functions evaluated at (t, x). The Itô formula of Eqn. (3.13) can be interpreted as the Itô stochastic integral equation

Yt = Yt0 + ∫_{t0}^t L⁰U(s, Xs) ds + ∫_{t0}^t L¹U(s, Xs) dBs        (3.16)

for t ∈ [t0, T].
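The dependence of the stochastic integral on the evaluation point λ can be checked numerically. The sketch below (Python with NumPy; all names are illustrative, not from the text) forms the λ = 0 (Itô) sum of h(t, ω) = Bt over one simulated Brownian path and compares it with the closed form (1/2)B_T² − (1/2)T; the λ = 1/2 sum, using the endpoint average as the usual discrete proxy for the midpoint rule, telescopes to the Stratonovich value (1/2)B_T².

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 4000
dt = T / N

# one Brownian path sampled on a uniform grid
dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.concatenate(([0.0], np.cumsum(dB)))

# Ito sum: integrand evaluated at the LEFT endpoint of each sub-interval
ito_sum = np.sum(B[:-1] * dB)
ito_exact = 0.5 * B[-1] ** 2 - 0.5 * T            # closed form for lambda = 0

# endpoint-average sum, a discrete proxy for the midpoint rule (lambda = 1/2)
strat_sum = np.sum(0.5 * (B[:-1] + B[1:]) * dB)
strat_exact = 0.5 * B[-1] ** 2                    # closed form for lambda = 1/2

print(ito_sum - ito_exact)      # small: vanishes in mean square as dt -> 0
print(strat_sum - strat_exact)  # telescopes: zero up to round-off
```

The Itô discrepancy is (1/2)(T − Σ(ΔB)²), which shrinks like √Δt, illustrating why the convergence is only in the mean-square sense.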
In the case of Stratonovich calculus, the chain rule happens to be exactly the same as for the conventional (deterministic) case. As before, if Yt = U(t, Xt) and Xt is the solution of the Stratonovich SDE in Eqn. (3.10), then

dYt = L⁰U(t, Xt) dt + L¹U(t, Xt) ∘ dBt,        (3.17)

where the partial differential operators are defined as

L⁰U = ∂U/∂t + a ∂U/∂x        (3.18)

and

L¹U = b ∂U/∂x.        (3.19)

This is exactly the same form as the chain rule for ODEs. A more rigorous explanation of the difference between the Itô and Stratonovich forms is provided in Appendix A.

3.3.1 A Simple Example

As an example to illustrate the difference between the Itô and Stratonovich forms of the solution, the SDE

dXt/dt = a_t Xt        (3.20)

is solved exactly. Here a_t = k + αξt, where ξt is a white noise process and α, k are constants. Writing this equation in differential form gives

dXt = kXt dt + αXt dBt        (3.21)

or

dXt/Xt = k dt + α dBt.        (3.22)

Integrating this equation and using the initial condition B0 = 0, we get

∫_0^t dXs/Xs = kt + αBt.        (3.23)

Itô's formula (Eqn. (3.13)) is now used to evaluate the integral above. Choosing Yt = U(t, Xt) = ln Xt, we obtain

d(ln Xt) = (1/Xt) dXt + (1/2)(−1/Xt²)(dXt)²
         = dXt/Xt − (1/(2Xt²)) α²Xt² dt
         = dXt/Xt − (1/2) α² dt.

Hence

dXt/Xt = d(ln Xt) + (1/2) α² dt.

Using this result in the integral equation (Eqn. (3.23)) we get

ln(Xt/X0) = (k − (1/2)α²)t + αBt

or equivalently

Xt = X0 exp((k − (1/2)α²)t + αBt).        (3.24)

This form of the solution corresponds to the Itô form. The Stratonovich form of the solution is exactly what one would expect from classical calculus. Interpreting the SDE in Eqn. (3.21) as being of Stratonovich form, we can rewrite it as

dXt = kXt dt + αXt ∘ dBt.        (3.25)

The solution of this equation is

Xt = X0 exp(kt + αBt)        (3.26)

which serves to emphasize two points made earlier.
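The two closed forms can be checked by direct simulation. The sketch below (Python/NumPy; names are illustrative, and X0 = 1 is used so that the solutions are non-trivial) integrates Eqn. (3.21) with the Euler-Maruyama scheme, which converges to the Itô solution (3.24), and with the stochastic Heun predictor-corrector scheme, which converges to the Stratonovich solution (3.26); the ratio of the two closed forms is the deterministic factor exp((1/2)α²t), which is exactly the drift difference.

```python
import numpy as np

rng = np.random.default_rng(1)
k, alpha, X0 = 2.0, 1.0, 1.0
T, N = 1.0, 20000
dt = T / N
dB = rng.normal(0.0, np.sqrt(dt), N)
B = np.cumsum(dB)
t = dt * np.arange(1, N + 1)

# closed-form solutions along the same Brownian path
x_ito = X0 * np.exp((k - 0.5 * alpha**2) * t + alpha * B)   # Eqn. (3.24)
x_strat = X0 * np.exp(k * t + alpha * B)                    # Eqn. (3.26)

# Euler-Maruyama converges to the Ito solution
x_em = X0
for db in dB:
    x_em += k * x_em * dt + alpha * x_em * db

# stochastic Heun (predictor-corrector) converges to the Stratonovich solution
x_h = X0
for db in dB:
    pred = x_h + k * x_h * dt + alpha * x_h * db
    x_h += 0.5 * (k * x_h + k * pred) * dt + 0.5 * (alpha * x_h + alpha * pred) * db

# the interpretation difference is purely a drift factor
ratio = x_strat / x_ito        # equals exp(0.5 * alpha**2 * t)
```

The same Brownian path drives both schemes, so the divergence between the two trajectories reflects only the choice of interpretation, as in Fig. 3.1.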
One, depending on the interpretation of the SDE, two markedly different outcomes can result; and two, the Stratonovich form gives results that conform with classical calculus. The difference between the two forms is demonstrated graphically in Fig. 3.1, wherein the values X0 = 1, k = 2 and α = 1 have been used. Note that both results are of the form Xt = X0 exp(µt + αBt) and the difference lies in the drift coefficient µ. This equivalence is taken up in the next subsection. A process of this type is called a geometric Brownian motion and is widely used in biology and economics.

Figure 3.1: Differences in the solution of the SDE in Eqn. (3.21) assuming the Itô and Stratonovich interpretations, with X0 = 1, k = 2 and α = 1.

3.3.2 Equivalence of the Itô and Stratonovich forms

As noted in the earlier subsection, the Itô and Stratonovich forms provide, in general, different results for the same SDE. The difference lies in the drift coefficient, as was noted in the example above. This makes it possible to transform the solution from one form to another by modifying the drift coefficient appropriately. For example, starting out with the Itô SDE

dXt = a(t, Xt) dt + b(t, Xt) dBt,        (3.27)

the corresponding Stratonovich SDE is

dXt = [a(t, Xt) − (1/2) b(t, Xt) ∂b(t, Xt)/∂x] dt + b(t, Xt) ∘ dBt.        (3.28)

On the other hand, starting with the Stratonovich SDE

dXt = p(t, Xt) dt + q(t, Xt) ∘ dBt,        (3.29)

the corresponding Itô SDE is

dXt = [p(t, Xt) + (1/2) q(t, Xt) ∂q(t, Xt)/∂x] dt + q(t, Xt) dBt.        (3.30)

This means it is possible to alternate between the two forms, choosing whichever form is suitable for the application under consideration.

3.4 Vector SDEs

All the above arguments carry over to vector-valued SDEs. An N-dimensional SDE and an M-dimensional Brownian motion process Bt = (Bt¹, . . .
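The drift correction in Eqns. (3.28) and (3.30) is easy to mechanize. The short sketch below (Python; illustrative helper names, not from the text) converts an Itô drift into the equivalent Stratonovich drift and back, and reproduces the drift k − (1/2)α² found for the geometric Brownian motion example.

```python
def ito_to_strat_drift(a, b, db_dx):
    """Stratonovich drift equivalent to the Ito SDE dX = a dt + b dB (Eqn. 3.28)."""
    return lambda t, x: a(t, x) - 0.5 * b(t, x) * db_dx(t, x)

def strat_to_ito_drift(p, q, dq_dx):
    """Ito drift equivalent to the Stratonovich SDE dX = p dt + q o dB (Eqn. 3.30)."""
    return lambda t, x: p(t, x) + 0.5 * q(t, x) * dq_dx(t, x)

# geometric Brownian motion: a = k*x, b = alpha*x, so db/dx = alpha
k, alpha = 2.0, 1.0
a_bar = ito_to_strat_drift(lambda t, x: k * x,
                           lambda t, x: alpha * x,
                           lambda t, x: alpha)
# a_bar(t, x) = (k - 0.5*alpha**2) * x, matching the exponent in Eqn. (3.24)
print(a_bar(0.0, 1.0))   # -> 1.5

# converting back recovers the original Ito drift k*x
back = strat_to_ito_drift(a_bar, lambda t, x: alpha * x, lambda t, x: alpha)
print(back(0.0, 3.0))    # -> 6.0
```

The round trip confirms that the two conversions are inverses, i.e. that one can alternate freely between the forms.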
, Bt^M) can be expressed in vector form as

dXt = a(t, Xt) dt + Σ_{j=1}^{M} b^j(t, Xt) dBt^j        (3.31)

which can be written component-wise as

dXt^i = a^i(t, Xt) dt + Σ_{j=1}^{M} b^{i,j}(t, Xt) dBt^j        (3.32)

where i = 1, . . . , N. The coefficient b^{i,j} is the (i, j)th component of the N × M matrix B = [b^1 | . . . | b^M], where the b^j are column vectors. The Itô chain rule for this system of equations now becomes

dYt = L⁰U(t, Xt) dt + Σ_{j=1}^{M} L^j U(t, Xt) dBt^j        (3.33)

where the partial differential operators L⁰, L^1, . . . , L^M are given by

L⁰ = ∂/∂t + Σ_{k=1}^{N} a^k ∂/∂x^k + (1/2) Σ_{k,l=1}^{N} Σ_{j=1}^{M} b^{k,j} b^{l,j} ∂²/(∂x^k ∂x^l)        (3.34)

and

L^j = Σ_{k=1}^{N} b^{k,j} ∂/∂x^k        (3.35)

for j = 1, . . . , M. The corresponding Stratonovich SDE in component form is

dXt^i = ā^i(t, Xt) dt + Σ_{j=1}^{M} b^{i,j}(t, Xt) ∘ dBt^j        (3.36)

which has the same solutions as the Itô SDE with a modified drift coefficient. That is,

ā^i(t, Xt) = a^i(t, Xt) − (1/2) Σ_{k=1}^{N} Σ_{j=1}^{M} b^{k,j}(t, Xt) ∂b^{i,j}(t, Xt)/∂x^k        (3.37)

where i = 1, . . . , N. Just as in the scalar case, the system of Stratonovich equations can be solved using classical ODE methods.

3.5 Itô vs. Stratonovich Forms

As noted earlier, both the Itô and the Stratonovich forms are mathematically accurate, but interpreting an SDE in either form will give different end results. This apparent paradox must be resolved if one is to accept the results of the stochastic differential framework with confidence. This is not the first instance in which this conundrum has been addressed; several approaches have been used to provide a justification for choosing an interpretation. Here, we refer to the work of West et al. in [117], where they trace the origin of the SDE back to Langevin's equation, Eqn. (3.2). This equation in its original form is linear in the dependent variable of the system and the stochastic terms are purely additive.
The probability distribution function of this dependent variable can be determined by the so-called Fokker-Planck equation of evolution [116]. Obtaining this Fokker-Planck equation requires the use of the ordinary rules of calculus. If the Langevin equation is linear and the stochastic term is purely additive, then the final form of the Fokker-Planck equation obtained using Itô's rules of calculus is identical to the form obtained using the ordinary rules of calculus, i.e. the form proposed by Stratonovich. However, when the fluctuations are non-additive, as in the more general case, the two interpretations produce different results, although as shown earlier one can go from one form to the other. The key to understanding the right approach to use requires an understanding of the underlying assumptions behind each approach. In [117], West et al. perform a "systematic analysis of correlations." When the SDE involves a multiplication between the deterministic term and the stochastic term, the differences between the two interpretations become immediately evident. One form assumes that there is a finite amount of correlation between the deterministic and stochastic terms; this is the form used by Stratonovich. The other form, Itô's classical form, considers zero correlation between the deterministic and stochastic components. As shown in [117], the probability density function obtained with the Itô assumption is not normalizable and is therefore not a valid probability density function. Starting out with the Stratonovich assumption will, on the other hand, produce a probability density function that is normalizable. The result remains unchanged even if one assumes a non-zero correlation between the stochastic and deterministic terms which goes to zero in the limiting case, as demonstrated by the Wong-Zakai theorem [119].
The conclusion one can draw is that when trying to model a "physical" system, there will in general always be a finite amount of correlation between the deterministic and stochastic terms. The solution of such a model can then only be appropriately determined in the Stratonovich sense. The essence of these conclusions is shown to be unchanged using other methods of rationalization, such as the use of a master equation in [120]. This has also been validated experimentally in [118], wherein the authors consider physical and chemical systems described by Langevin models with non-additive fluctuations and find the Stratonovich assumption to be accurate. Further experimental validation can be found in [121]. To allow for consideration of multiplicative effects, our approach has therefore made use of the Stratonovich assumption.

3.6 Summary

This chapter provided a brief overview of the theory of SDEs starting from the original Langevin form. It highlighted the concepts behind the formulations of the Itô and Stratonovich forms and showed how one may obtain different results depending on the interpretation used. The vector SDE case was also touched upon and shown to be an extension of the scalar case. The "controversy" between the Itô and the Stratonovich forms was also brought up. The resolution of this controversy depends on the nature of the process that is being modelled with the SDE. If the interactions between the stochastic and deterministic terms in the equation cannot be neglected, then it is reasoned, based on both prior theoretical and practical work, that the Stratonovich form is the appropriate one.

Chapter 4

Nonlinear Dynamics, Chaos and Intermittency

4.1 Introduction

The mathematical theory of nonlinear dynamics attempts to explain the evolution of processes by using nonlinear equations. Every system is intrinsically dependent on time. Some systems show more variation with time than others. Some systems are periodic, i.e.
they exhibit repetitive behavior, while other systems tend to be less affected by the march of time. However, any system viewed long enough will show a change in its properties or characteristics. Technically speaking, the time-dependent characteristics of every system fall under the rather wide umbrella of dynamical systems. A subset of these systems have linear responses in time, while the vast majority are influenced by nonlinear mechanisms. It is the aim of nonlinear dynamics to describe this seemingly boundless variety of systems within a clear and precise mathematical framework. Popular examples of nonlinear dynamical systems include population dynamics, stock market trends, pendulum motion and weather patterns.

4.2 Nonlinear Dynamics and Chaos

The types of systems typically considered here are nonlinear, discrete and iterative in nature and are also referred to as maps. To illustrate this, consider an example of such a system. A classic example for modelling the dynamics of population growth is the logistic map. Herein, if n denotes discrete time and xn denotes the fraction of the total population at time n, then the logistic map expresses xn+1, the fraction of the total population at the next unit of time, as

xn+1 = λxn(1 − xn).        (4.1)

Here λ is a constant that depends on environmental conditions and, for the logistic map, 0 < λ ≤ 4. Watching how this system works, or monitoring its evolution, is achieved by starting off with an initial condition x0 and computing x1 using the formula above; that completes iteration 1. For iteration 2, this x1 value is used to compute a value for x2. This iterative process produces a stream of values which are collectively called the orbit of the map. This is reminiscent of the feedback system in Fig. 2.11. The logistic map, by virtue of its construction, takes values that lie between 0 and 1. The map is shown graphically in Fig. 4.1.
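The iteration just described takes only a few lines of code. The sketch below (Python; the helper name is illustrative) generates the orbit of the logistic map of Eqn. (4.1) starting from x0 = 0.2 with λ = 4.

```python
def logistic_orbit(x0, lam, n):
    """Return the orbit [x0, x1, ..., xn] of the logistic map x -> lam*x*(1-x)."""
    xs = [x0]
    for _ in range(n):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

orbit = logistic_orbit(0.2, 4.0, 100)
print([round(x, 4) for x in orbit[:3]])   # -> [0.2, 0.64, 0.9216]
# by construction every iterate stays inside [0, 1]
assert all(0.0 <= x <= 1.0 for x in orbit)
```

Each new value feeds back as the input of the next iteration, exactly like the feedback structure mentioned above.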
An example of an orbit that starts at x0 = 0.2 and undergoes 100 iterations is shown in Fig. 4.2. A preliminary impression of this figure is of complex behavior arising from a simple deterministic iterative rule. Indeed, it is this fascinating property of chaotic maps that is used in the development of electronic device noise models, as will be explained later.

4.2.1 Fixed Points

There are several types of orbits possible for a dynamical system. A fixed point x* is a point that satisfies the equation f(x*) = x* for a map f(x). For example, finding the fixed points of the logistic map requires solving the equation λx(1 − x) = x, or λx(1 − x) − x = 0, for x. This gives the fixed points of the logistic map as either x* = 0 or x* = (λ − 1)/λ. Fixed points can also be determined geometrically by finding the points of intersection of the map with the y = x line, or the 45° line. So, for the logistic map with λ = 4, the y = x line intersects the map at x = 0 and at x = 0.75, which can be analytically verified to be the fixed points of this map. This graphical procedure is made clearer in Fig. 4.3. A fixed point is called attracting if an orbit tends to approach it, whereas it is called repelling if an orbit moves further away from it with increasing iteration count. Mathematically, the nature of a fixed point can be determined by the magnitude of the derivative of the map evaluated at the fixed point. So, if x* is a fixed point, then it is an attracting fixed point if |f′(x*)| < 1 and it is a repelling fixed point if |f′(x*)| > 1.

Figure 4.1: The logistic map, with λ = 4.

Figure 4.2: An example orbit of the logistic map, with x0 = 0.2.

Figure 4.3: The logistic map with λ = 4 and its fixed points, as indicated.
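The fixed points and the derivative test are simple to verify numerically. The sketch below (Python; illustrative names) computes the fixed points of the logistic map at λ = 4 and classifies them using f′(x) = λ(1 − 2x).

```python
def logistic(x, lam=4.0):
    return lam * x * (1.0 - x)

def logistic_deriv(x, lam=4.0):
    return lam * (1.0 - 2.0 * x)     # f'(x) for f(x) = lam*x*(1-x)

lam = 4.0
fixed_points = [0.0, (lam - 1.0) / lam]          # x* = 0 and x* = (lam-1)/lam = 0.75
for xstar in fixed_points:
    assert abs(logistic(xstar, lam) - xstar) < 1e-12   # f(x*) = x*
    # at lam = 4 both fixed points satisfy |f'(x*)| > 1, so both are repelling
    assert abs(logistic_deriv(xstar, lam)) > 1.0

print([logistic_deriv(x, lam) for x in fixed_points])   # -> [4.0, -2.0]
```

Both fixed points being repelling at λ = 4 is consistent with the orbit of Fig. 4.2 never settling down.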
4.2.2 Periodic Points

Another kind of orbit is a periodic orbit. A point xp is periodic if f^k(xp) = xp for some positive integer k. The smallest value of k for which this is true is the prime period of the orbit. As another example of a nonlinear map, consider the quadratic map given by

xn+1 = xn² + c        (4.2)

where c is the constant parameter of the map. Fig. 4.4 shows the map for c = 0, with fixed points at x = 0 and x = 1. A prime-period-2 orbit for the quadratic map with c = −1 is shown in Fig. 4.5. The orbit starts at x0 = 0.

Figure 4.4: The quadratic map with c = 0 and its fixed points.

Figure 4.5: The quadratic map with c = −1 and a prime-period-2 orbit.

4.2.3 Neutral Fixed Points

A neutral fixed point is a fixed point that is neither attracting nor repelling. Therefore, if x* is a neutral fixed point, the magnitude of the derivative of the map at the neutral fixed point is unity; in other words, |f′(x*)| = 1. Although it is neither attracting nor repelling, a map having a neutral fixed point can behave in more complicated ways in the vicinity of that point. This behavior can be ascertained by finding higher-order derivatives of the map evaluated at the neutral fixed point, typically the second and third-order derivatives. Graphically this is depicted in Fig. 4.6, which shows general nonlinear maps wherein the intersection of the map and the y = x line represents the neutral fixed point. The orbits behave differently in the vicinity of this fixed point, as shown in the figure.

Figure 4.6: Different types of neutral fixed points.
The orbits can be either attracting from one end and repelling from the other (Types I and II), repelling from both ends (Type III) or attracting from both ends (Type IV).

4.2.4 Bifurcation Theory

The word bifurcation means splitting into two. In the context of nonlinear dynamics, the theory of bifurcation is an analytical and visual technique for observing how the dynamics of a nonlinear map change as the parameter of the map is varied. As a visual example, consider once again the quadratic map with parameter c, expressed by the equation fc(x) = x² + c. When c > 0.25, there are no fixed points, since the map (the parabola) does not intersect the y = x line. This situation is depicted in Fig. 4.7, where c = 0.4; the figure also shows a divergent orbit originating at x = 0.

Figure 4.7: The quadratic map with c = 0.4 and with no fixed points.

As the value of c reduces to 0.25, a fixed point is formed as the y = x line becomes a tangent to the map. This is shown in Fig. 4.8. As c decreases below 0.25, the fixed point splits (bifurcates) into two fixed points. Thus a bifurcation is said to have occurred. In the case of this map, the bifurcation is of the "tangent" type or the "saddle-node" type, the precise definition of which follows below. There are other forms of bifurcation possible but, apart from the tangent bifurcation, only the so-called period-doubling bifurcation will be defined later. The two fixed points produced as a result of the tangent bifurcation have different properties: one of them is an attracting fixed point and the other is repelling. This is shown in Fig. 4.9. One of the orbits shown in the figure eventually converges to the attracting fixed point at x = 0 and the other orbit moves away from the repelling fixed point at x = 1 and diverges to infinity.
Figure 4.8: The quadratic map with c = 0.25 and with one fixed point.

The dynamics of this quadratic map are more complicated than presented so far. However, the presentation so far is meant to provide a feel for the richness in characteristics that is possible when the parameter of a map as seemingly simple as the quadratic map is varied.

Figure 4.9: The quadratic map with c = 0.0 and with two fixed points, one attracting and one repelling.

Tangent Bifurcation

For a one-parameter family of nonlinear maps fλ(x) with continuous partial derivatives, a tangent bifurcation is said to have occurred at parameter value λ0, with fixed point e satisfying fλ0(e) = e, if there is an open interval I and a δ > 0 such that:

1. For λ0 − δ < λ < λ0, the map has no fixed points in I.
2. f′λ0(e) = 1, i.e. the fixed point is neutral.
3. f″λ0(e) ≠ 0.
4. (∂/∂λ) fλ(e)|λ=λ0 ≠ 0.
5. For λ0 < λ < λ0 + δ, the map has two fixed points in I, one attracting and one repelling.

Intuitively, this means that the map has no fixed points in I for values of λ slightly less than λ0, one fixed point when λ = λ0 and two fixed points, one attracting and one repelling, for λ slightly greater than λ0. Condition 3 indicates that the map is concave-up or concave-down near the fixed point, so that there is only one fixed point when λ = λ0 and x = e.

Period-Doubling Bifurcation

Another common type of bifurcation is the period-doubling bifurcation, which occurs in a one-parameter family of nonlinear maps fλ(x) with continuous partial derivatives at parameter λ = λ0 and at x = e if there is an open interval I and a δ > 0 such that:

1. For all λ ∈ [λ0 − δ, λ0 + δ], there is a unique fixed point e for the map in I.
2. f′λ0(e) = −1, i.e. the fixed point is neutral.
3. For λ0 − δ < λ ≤ λ0, the map has no period-2 cycles in I and e is attracting (resp. repelling).
4.
For λ0 < λ < λ0 + δ, there is a unique period-2 cycle {d1, d2} in I with fλ(d1) = d2 and fλ(d2) = d1. The period-2 cycle is attracting (resp. repelling) and the fixed point e is repelling (resp. attracting).
5. All the periodic orbits tend to the fixed point as λ → λ0.
6. f‴λ0(e) ≠ 0.
7. (∂/∂λ)(fλ²)′(e)|λ=λ0 ≠ 0, where fλ² denotes the second iterate of the map.

Intuitively, as the parameter of the map changes, a fixed point may go from attracting to repelling and give rise to an attracting period-2 cycle. Conversely, a fixed point may go from repelling to attracting and give rise to a repelling period-2 cycle. It is important to note that a period-2 cycle may itself undergo a period-doubling bifurcation and give rise to a period-4 cycle. This process can continue indefinitely and give rise to cycles whose periods are powers of two. A bifurcation diagram is a convenient way to represent all forms of bifurcation for a particular map in a two-dimensional plot, the λ-x plane. For each value of λ, it plots all the possible values that the map can take. The bifurcation diagram (the c-x plane) for the quadratic map is shown in Fig. 4.10, with the tangent and period-doubling bifurcations associated with this map appropriately annotated. The lone tangent bifurcation represents the formation of a fixed point, as was explained earlier, and a successive cascade of period-doubling bifurcations follows. A striking feature of this map is its self-similar structure. This is highlighted by the square boxes in the figure: the areas inside the boxes are scaled versions of each other and indeed one can observe this repeating trend at smaller and smaller scales. This is a feature of a typical bifurcation diagram. The period-doubling regions abruptly end and give rise to what is known as the period-3 window, which is then followed by a dense, seemingly structureless region called the chaotic region.

Figure 4.10: Bifurcation diagram for the quadratic map.
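For the quadratic map the tangent bifurcation can be checked by hand, since the fixed points solve x² + c = x, i.e. x = (1 ± √(1 − 4c))/2. The sketch below (Python; illustrative names) reproduces the sequence of Figs. 4.7-4.9: no fixed points for c > 1/4, one neutral fixed point at c = 1/4 and an attracting/repelling pair for c < 1/4. It also confirms the prime-period-2 orbit {0, −1} of Fig. 4.5 for c = −1.

```python
import math

def fixed_points(c):
    """Real solutions of x**2 + c = x for the quadratic map."""
    disc = 1.0 - 4.0 * c
    if disc < 0.0:
        return []
    r = math.sqrt(disc)
    return sorted({(1.0 - r) / 2.0, (1.0 + r) / 2.0})

def deriv(x):
    return 2.0 * x               # f'(x) = 2x for f(x) = x**2 + c

assert fixed_points(0.4) == []                 # c > 1/4: no fixed points (Fig. 4.7)
assert fixed_points(0.25) == [0.5]             # tangency: one fixed point (Fig. 4.8)
assert abs(deriv(0.5)) == 1.0                  # |f'(e)| = 1, the neutral condition

lo, hi = fixed_points(0.0)                     # c < 1/4: the pair (Fig. 4.9)
assert abs(deriv(lo)) < 1.0 < abs(deriv(hi))   # attracting at x = 0, repelling at x = 1

# period-2 check for c = -1 (Fig. 4.5): f(0) = -1, f(-1) = 0, and f(x) != x
quad = lambda x: x * x - 1.0
for x in (0.0, -1.0):
    assert quad(quad(x)) == x and quad(x) != x
```

Condition 3 of the tangent bifurcation (f″ ≠ 0) holds trivially here since f″(x) = 2 everywhere.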
The significance of the period-3 window and the chaotic region is discussed next.

4.2.5 Sliding into Chaos

The bifurcation diagram depicts the path, or route, to chaos. It enables an investigation into the types of bifurcations that a map undergoes until it finally reaches the chaotic regime. In the quadratic map considered earlier, there are successions of period-doubling bifurcations before the relatively sparse region called the period-3 window is reached. As the name suggests, this region is characterized by the existence of an orbit having a period of 3. Li and Yorke in [36], as a special case of the theorem of Sarkovskii, showed that a map having an orbit of prime period 3 must exhibit chaos. This theorem provides a way to easily determine whether a map is chaotic: all that is required is to find an orbit of period 3. Inside the period-3 window of the quadratic map, further period-doubling bifurcations occur, resulting in a period-6 orbit, and so on. Following this period-3 window is a region that is seemingly formless. This is the chaotic region, and for every map there is a critical value of its parameter beyond which this regime is reached. Although chaos may seem to defy definition, several mathematical definitions of it are available. Given next is the definition according to Devaney in [33]. Before that, one needs to understand the meaning of a dense set. If X and Y are sets and Y is a subset of X, then the set Y is dense in X if, for any point x ∈ X, there is a point y ∈ Y that is arbitrarily close to x. The measure of closeness is provided by some sort of distance function. According to the definition by Devaney, a dynamical system f(x) is chaotic if:

1. The periodic points of the dynamical system are dense in the set of all points.
2. The dynamical system is transitive, meaning that for any two points x and y and any ε > 0, there is a third point z within ε of x whose orbit comes within ε of y.
3.
The dynamical system exhibits Sensitive Dependence on Initial Conditions (SDIC), which means that there exists a β > 0 such that for any x and any ε > 0, there is a y within ε of x and an integer k such that the distance between f^k(x) and f^k(y) is at least β. This is another way of saying that no matter how close together two orbits start off, after k iterations they will end up at least β apart.

This is the most "popular" definition of chaos. Chaos manifests itself in a variety of ways, and one way to look at it, apart from the bifurcation plot, is by finding a histogram plot or density plot of a dynamical system in the chaotic regime. For a large enough number of iterations, the histogram will always contain points in every bin, irrespective of the width of the bin. Manifestations of chaos have been observed in engineering, for example in Chua's circuit [122]. Applications of this theory are also emerging, for example in the field of communications theory in the work of Pecora and Carroll in [123], Cuomo, Oppenheim and Strogatz in [124], Chen and Yao in [125] and Kocarev and Parlitz in [126], in image watermarking in the work of Nikolaidis and Pitas in [127], and in the modelling of internet packet traffic flows in the work of Mondragon, Pitts and Arrowsmith in [128] and Erramilli, Singh and Pruthi in [129].

4.3 Generating White Noise

There are several chaotic functions that have been used to successfully generate a white noise sequence, i.e. a sequence with delta correlation. The Chebyshev map was used in [130], the Bernoulli map in [131], and several other examples are found in [34]. In this work the logistic map with parameter λ = 4 is used as a source of white noise, and a realization of the map produces a discrete sequence of numbers that are meant to represent a white noise process. This map was defined in Eqn. (4.1). To demonstrate that this map is a suitable generator of white noise, the correlation plot of a sequence generated by this map is shown in Fig. 4.11.
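The delta-correlation behaviour of Fig. 4.11 can be reproduced in a few lines. The sketch below (Python/NumPy; illustrative names) iterates the logistic map at λ = 4, centres the resulting sequence and estimates the normalized autocorrelation for the first few lags; the lag-zero value is unity while the remaining values sit near zero, as a good white noise generator requires.

```python
import numpy as np

def logistic_sequence(x0, n, lam=4.0):
    """Iterate the logistic map of Eqn. (4.1) n times from x0."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = lam * x * (1.0 - x)
        xs[i] = x
    return xs

xs = logistic_sequence(0.2, 50_000)
xs = xs - xs.mean()                   # centre the sequence before correlating

def autocorr(x, lag):
    """Normalized sample autocorrelation at the given lag."""
    if lag == 0:
        return 1.0
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

print([round(autocorr(xs, k), 3) for k in range(4)])   # first value 1.0, rest near 0
```

With 50,000 samples the non-zero-lag estimates fluctuate at roughly the 1/√N level, consistent with the delta-like correlation plot of Fig. 4.11.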
From the definition in Eqn. (2.9), the correlation plot is a measure of the relationship between different values of a random variable for different values of lag. This figure clearly shows the delta-correlation property that one expects from a good white noise generator. The spectrum of this noise source is obtained by performing the Fourier transform operation on the autocorrelation sequence, and the result is shown in Fig. 4.12. Excellent techniques for generating white noise with pseudo-random numbers with large periods also exist, for example [132]. Such pseudo-random generators have been used inside some of the circuit models in fREEDA™ as white noise generators. These details are provided in the next chapter.

Figure 4.11: Correlation plot of the logistic map with λ = 4.

Figure 4.12: Spectrum of the logistic map with λ = 4.

4.4 Intermittency and Flicker Noise

The phenomenon of intermittency was reported by Pomeau and Manneville in [133] while observing turbulence in fluids, wherein sudden transitions between stable periodic states and chaotic states were observed. A feature of any intermittent function is that it exhibits regular laminar phases separated by intermittent bursts, and simple nonlinear iterative deterministic maps of this kind can be used to produce complex stochastic-looking series. In the subsections that follow, a single-parameter intermittent map is used to generate sequences with long-memory characteristics, but before that, some basic definitions and concepts are reviewed. An overview of the theory of intermittency can be found in the review articles [134] and [135]. The next few subsections present the mathematical basics of intermittent functions, describe their typical properties and explore maps that display intermittency. In particular, the logarithmic map uses only a single parameter but still has very desirable memory characteristics.
This makes it suitable for modelling noise in electronic devices and circuits.

4.4.1 Basic Theory

Let fλ(x) : J → J be a parametric family of maps with parameter λ representing a chaotic nonlinear process, where J is a closed interval, generally [0, 1]. A realization of the discrete-time sequence produced by iteration of the nonlinear map is denoted by xn, n = 0, 1, . . ., where n denotes units of time. Let the parameter λ have a particular value λe at which the map fλ(x) has a neutral fixed point at xe. The neighborhood of the neutral fixed point xe is given by

    Nδx(xe) = {x : x ∈ [xe − δ, xe + δ]}    (4.3)

and the analogous neighborhood of the parameter value λe corresponding to the neutral fixed point is given by

    Nδλ = {λ : λ ∈ [λe − δ, λe + δ]}.    (4.4)

Recall that for any map, a fixed point is a solution of the equation fλ(x) = x. A fixed point is said to be attracting if the following condition holds:

    |f′λ(x)| at (λe, xe) < 1.    (4.5)

For a fixed point to be repelling,

    |f′λ(x)| at (λe, xe) > 1,    (4.6)

and for a fixed point to be neutral,

    |f′λ(x)| at (λe, xe) = 1.    (4.7)

Neutral fixed points can be broadly classified into three categories: a) weakly attracting, b) weakly repelling and c) attracting/repelling. By weakly repelling we mean that the orbit of the map in the vicinity of the fixed point gradually tends to drift away from the fixed point. More precisely, a fixed point is weakly repelling if there exists a δ > 0 such that for all x ∈ Nδx, the map fλ(x) can be represented as

    fλ(x) = x + F(x),    (4.8)

where F(x) satisfies the following properties:

1. F(x) is positive on the right-hand side of the neutral fixed point, excluding the neutral fixed point.
2. F(x) is negative on the left-hand side of the neutral fixed point, excluding the neutral fixed point.
3. F(x) is differentiable on Nδx(xe), and
4. F(x) = o(|x − xe|) as x → xe.
For the generation of long-memory intermittent sequences, only maps with a weakly repelling fixed point can be used [136].

4.4.2 Characteristics of Intermittency

According to [136], a Type-II repelling-fixed-point intermittency for a parametric family of maps fλ(x) with a unique weakly repelling fixed point xe at parameter value λe satisfies the following conditions:

(1) There exists a parameter value λ = λe such that the map fλe(x) exhibits a unique neutral fixed point xe.

(2) There exists a collection {Ij : j = 1, . . . , m} with m > 1 of closed intervals with disjoint interiors satisfying J = ∪_{j=1}^m Ij; for all (λ, x) ∈ Nδλ × J, the map is a continuous function of x and λ in the interior of each Ij; and for λ = λe, the map satisfies the irreducibility condition, which means that for each i = 1, . . . , m one can find a p(i) such that f^{p(i)}λ(Ii) ⊃ J. The irreducibility condition is another way of saying that the iterates of each subinterval eventually cover all of J.

(3) There exists a j* and a δ > 0 such that

(i) xe ∈ Ij*,
(ii) Nδx(xe) ⊂ Ij*,
(iii) for all λ ∈ Nδλ(λe) and x ∈ Nδx(xe), fλ(x) can be written as

    fλ(x) = (λ − λe) + x + F(λ, x),    (4.9)

(iv) F(λ, x) > 0 if x > xe and F(λ, x) < 0 if x < xe,
(v) F(λ, x) is differentiable for all (λ, x) ∈ Nδλ(λe) × Nδx(xe), with

    ∂F(λ, x)/∂λ at (λe, xe) = 0    (4.10)

and

    F(λ, x) = o(|x − xe|)    (4.11)

as x → xe.

(4) For all x ∉ Nδx(xe) and all λ ∈ Nδλ(λe) the map is uniformly expanding, meaning that |f′λ(x)| > 1.

(5) For λ ≠ λe, there exists a repelling fixed point in Nδx(xe) but no attracting fixed point.

In [137], Eqn. (4.9) takes the form

    fλ(x) = x + x²F(λ, x)    (4.12)

with lim_{x→0} xF(λ, x) = 0 and lim_{x→0} F(λ, x) = ∞.

4.4.3 Nonlinear Intermittent Functions

In accordance with Eqn.
(4.9), an example of a map that exhibits intermittency is the polynomial map defined on J = [0, 1] as

    fλ(x) = x(1 + 2^a x^a)   if 0 ≤ x ≤ 1/2
    fλ(x) = 2x − 1           if 1/2 < x ≤ 1.    (4.13)

The long-memory behavior of the map can be ascertained by computing its correlation function. In this case, although the correlation function cannot be found exactly, a bound for it has been found in [138]. It is given as

    R_{f,g}(n) ≤ B n^{1−1/a}.    (4.14)

It can be shown that this map satisfies all the conditions of the definition in the previous section and that it shows long memory if a ∈ (1/2, 1). In accordance with Eqn. (4.12), an example of a map that exhibits intermittency is the logarithmic map defined on J = [0, 1] as

    fβ(x) = x(1 + Y(β) x [log(1/x)]^{1+β})   if 0 ≤ x ≤ 1/2
    fβ(x) = 2x − 1                            if 1/2 < x ≤ 1    (4.15)

where Y(β) = 2(log 2)^{−(1+β)} is chosen to ensure that lim_{x→1/2⁻} fβ(x) = 1. It can be shown [137] that the rate of decay of the correlation of this map is bounded as

    R(n) ≤ B(log n)^{−β}.    (4.16)

This is said to be a logarithmic mixing rate, which can be made as slow as desired by varying the value of β. The logarithmic mixing rate is “slower” than the polynomial rate and this makes the map a desirable generator of 1/f noise.

4.5 The Logarithmic Intermittent Map

The intermittent map of Eqn. (4.15) is used to generate a chaotic sequence with power-law (long-memory) properties. The map over the unit interval is shown in Fig. 4.13. Iterations of this map produce an intermittent time sequence that shows distinct regions of laminar and chaotic behavior, emphasizing its intermittent structure. This is shown in Fig. 4.14. The output of this map can be thought of as a “signal” with low-frequency content, a 1/f signal. The map generates a 1/f-like frequency-domain response with the single parameter β. For the value of β used here (0.000005), the covariance of the map has been shown [137] to decay at a logarithmic rate, which is slower than an exponential rate.
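A minimal sketch of the logarithmic map of Eqn. (4.15) follows (our illustration, not fREEDA™ code; the seed 0.3 and the orbit length are arbitrary choices). The normalization Y(β) makes the left branch reach 1 exactly at x = 1/2, and the neutral fixed point at the origin produces the long laminar stretches near 0:

```python
import math

def logarithmic_map(x, beta=0.000005):
    """One iteration of the logarithmic intermittent map, Eqn. (4.15)."""
    if x > 0.5:
        return 2.0 * x - 1.0
    if x == 0.0:
        return 0.0  # neutral fixed point at the origin
    y = 2.0 * math.log(2.0) ** (-(1.0 + beta))  # Y(beta)
    return x * (1.0 + y * x * math.log(1.0 / x) ** (1.0 + beta))

# A short realization: long laminar stretches near 0 interrupted by
# chaotic bursts, as in the sample realization of Fig. 4.14.
x, orbit = 0.3, []
for _ in range(2000):
    x = logarithmic_map(x)
    orbit.append(x)
```

Changing the single parameter beta changes the mixing rate, and hence the slope of the resulting 1/f-like spectrum, which is the advantage the chapter highlights over AR/ARMA-based noise generation.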
The correlation plot of this map is shown in Fig. 4.15 and makes the slow-decay property evident, particularly in comparison with the white-noise case of Fig. 4.11. The spectrum of this signal is shown in Fig. 4.16, where a million points were used to generate the corresponding time sequence. This slow rate, or long memory, translates to a 1/f response in the frequency domain. It is much harder to obtain a long-memory characteristic in the frequency domain with the exponential rates that are conventionally produced by auto-regressive (AR) or auto-regressive moving-average (ARMA) sequences. In both cases, a large number of parameters are required to obtain the desired response. In the case of the logarithmic map, a 1/f-like response is produced with a single parameter, and a large variety of responses can be produced by tweaking β. This is one of the most important advantages of using chaotic maps. When compared with the infinite transmission-line models of Section 2.6.2, this approach is also better suited to implementation in a circuit simulator. This is detailed in Appendix B.

Figure 4.13: The logarithmic map, with β = 0.000005.
Figure 4.14: Sample realization of the logarithmic map, with β = 0.000005.
Figure 4.15: Correlation plot of the logarithmic map with β = 0.000005.
Figure 4.16: Spectrum of the logarithmic map with β = 0.000005.

4.6 Summary

This chapter presented a brief introduction to nonlinear dynamics and the meaning of chaos in a mathematical and graphical framework.
After mentioning the various types of possible fixed points, bifurcation diagrams were used to understand the common types of bifurcations found in dynamical systems. The phenomenon of intermittency was introduced and its properties were described. In particular, a class of intermittent maps having certain properties was highlighted and the logarithmic map was introduced. This map has the desirable property of producing a long-memory sequence, or a 1/f-like response in the frequency domain, using a single parameter. It is also amenable to implementation in a circuit simulator environment.

Chapter 5

Implementation of Stochastic Framework

5.1 Introduction

The theories of Chapter 3 and Chapter 4 are brought to life by implementing them in the circuit simulator fREEDA™, the details of which are provided in this chapter. Section 5.2 is a brief introduction to fREEDA™ and touches upon the main underlying principles on which fREEDA™ is built, along with some of the support libraries it requires and the model catalog it supports. Section 5.3 is an introduction to the basic fixed-timestep routine in fREEDA™ and explains how a general circuit is split into its linear and nonlinear parts and how the error function is formulated and analyzed. Section 5.4 extends the basic transient analysis to include the noise-generator components present in a circuit and highlights the differences in the error function that result from these additions.

5.2 An Acquaintance with fREEDA™

The circuit simulator fREEDA™ is a state-variable-based simulator developed on object-oriented principles [139] that allow for a clean separation of data and algorithm. This permits a device-model developer to be ignorant of the details of the implementation of the algorithm used to analyze the circuit. Likewise, the algorithm developer can be fairly uninvolved in the model-creation process and still produce a robust analysis routine.
fREEDA™ uses several off-the-shelf libraries to perform computing and mathematical tasks. As of this writing, fREEDA™ uses a combination of the BLAS and SuperLU libraries to perform linear algebra and sparse matrix operations respectively. It uses the FFTW library to perform fast Fourier transform operations and the NNES library to solve nonlinear systems of equations. It uses the package ADOL-C to automatically compute derivatives for its device models with a minimal execution penalty and a high degree of accuracy. This reduces the size of the model code considerably and allows the developer to focus on the numerical characteristics of the model rather than be bogged down manually programming and debugging derivatives, which is known to be a time-consuming process.

fREEDA™ has a long list of models to choose from. These include conventional electronic models such as resistors, capacitors and inductors, voltage and current sources, semiconductor diodes, BJTs, JFETs and several levels of MOSFET transistors. There are also microwave elements such as gyrators, circulators, microwave diodes and MESFETs, as well as support for behavioral models such as analog mixers, filters and Foster's canonical N-port formulation. There are some exotic models, such as models for molecular devices, in particular molecular diodes. fREEDA™ can also read in table-based models such as the IBIS model or Y-matrix port information. New models are constantly under heavy development and this catalog is ever-expanding. fREEDA™ supports several analysis types, ranging from a basic fixed-time-step analysis, which is explained in detail in the next section, to more complex SPICE-like variable-time-step routines. There is also support for conventional operating-point (DC) analysis as well as the steady-state frequency-domain-based Harmonic Balance analysis.
The approach taken in fREEDA™ can be thought of as equation-based process modelling, as opposed to the older sequential modular approach to process modelling in the sense of [140]. This enables fREEDA™ to implement models that are not restricted to electronic devices and to become a universal circuit simulator with a generic model formulation system that can use any appropriate variable as a state variable. This allows variables of a heterogeneous nature to be present in a single error function; in other words, it allows the simulation of different types of devices connected together in useful configurations. A generic example is shown in Fig. 5.1, which connects spatially-distributed elements to linear and nonlinear electronic and thermal elements.

Library URLs: BLAS: www.netlib.org/blas; SuperLU: http://crd.lbl.gov/~xiaoye/SuperLU/; FFTW: www.fftw.org; NNES: www.netlib.org/opt; ADOL-C: http://www.math.tu-dresden.de/~adol-c/.

Figure 5.1: Connections between a heterogeneous collection of elements (spatially distributed circuit, linear network, nonlinear circuit network and thermal network).

5.3 Transient Analysis in fREEDA™

fREEDA™ implements a fixed-time-step transient analysis routine, tran2. The idea is to convert the differential equations describing the circuit into an algebraic system of nonlinear equations using time-marching integration methods. The circuit is partitioned into separate groups, with the linear elements and sources in one group and the nonlinear elements in the other. This is represented schematically in Fig. 5.2. To formulate the error function, the nonlinear elements are replaced by nonlinear voltage or current sources. For every nonlinear element, one terminal is assumed to be the reference and the element is replaced by a set of sources connected between the remaining terminals and the reference terminal. In general, current sources are preferred over voltage sources because they are amenable to nodal analysis.
5.3.1 Linear Network

To formulate the MNAM of the linear section, two matrices G and C of equal size nm are created, where nm represents the total number of non-reference nodes in the circuit plus the number of additional required variables [141]. The time-dependent contributions from the fixed sources and the nonlinear elements are inserted into a vector s of length nm on the right-hand side. All conductances and frequency-independent MNAM stamps arising in the formulation are entered in G, whereas capacitor and inductor values and other values associated with dynamic elements are stored in the matrix C. The linear system so obtained can be written as

    G u(t) + C du(t)/dt = s(t).    (5.1)

Here u is the vector of nodal voltages and required currents. The source vector s can be split into two parts, sf and sv, such that s = sf + sv. The sf vector contains contributions from the independent sources in the circuit while the sv vector has the currents that are injected into the linear network from the nonlinear network.

Figure 5.2: A partitioned network of linear and nonlinear elements and sources.

5.3.2 Nonlinear Network

The nonlinear voltages and currents are expressed as functions of the state variable x as

    vNL(t) = u[x(t), dx(t)/dt, . . . , d^m x(t)/dt^m, xD(t)]    (5.2)

    iNL(t) = w[x(t), dx(t)/dt, . . . , d^m x(t)/dt^m, xD(t)].    (5.3)

The error function of an arbitrary circuit is developed using connectivity information, which is described by an incidence matrix, and the constitutive relations describing the nonlinear elements. This connectivity information is placed inside the incidence matrix T.
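To make the MNAM construction of Eqn. (5.1) concrete, the following sketch (ours, not fREEDA™ code; the element values are arbitrary illustrations) assembles G and C for a small linear network — a current source and resistor R1 at node 1, a resistor R2 to node 2, and a capacitor at node 2 — and time-marches it with backward Euler:

```python
import numpy as np

# Node 1: current source Is to ground and resistor R1 to ground.
# Node 2: connected to node 1 through R2, capacitor Cv to ground.
R1, R2, Cv, Is = 1e3, 1e3, 1e-6, 1e-3
g1, g2 = 1.0 / R1, 1.0 / R2

# MNAM matrices of Eqn. (5.1): G holds the conductance stamps,
# C holds the capacitor stamp.
G = np.array([[g1 + g2, -g2],
              [-g2,      g2]])
C = np.array([[0.0, 0.0],
              [0.0, Cv]])
s = np.array([Is, 0.0])

# Backward-Euler time marching: (G + C/h) u_n = s + (C/h) u_{n-1}.
h = 1e-4
u = np.zeros(2)
A = G + C / h
for _ in range(500):
    u = np.linalg.solve(A, s + (C / h) @ u)
# At steady state du/dt = 0, so G u = s and both node voltages
# settle to Is * R1 = 1 V.
```

This is the linear half of the partition of Fig. 5.2; in the full formulation the right-hand side would also carry the injected nonlinear currents sv.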
If the number of columns of T is nm and the number of rows is equal to the number of state variables ns, then in each row a +1 is entered in the column corresponding to the positive terminal of the nonlinear element for that row and a −1 in the column corresponding to its negative terminal. This means that each row of T has at most two nonzero elements and the total number of nonzero elements cannot exceed 2ns. Let vL(t) be the vector of port voltages of the nonlinear elements, which can be calculated from the nodal voltages of the linear network as

    vL(t) = T u(t).    (5.4)

Also, the current inserted into the linear network is given by

    sv(t) = Tᵀ iNL(t).    (5.5)

5.3.3 Formulation of the Error Function

The error function is formulated at the interface between the linear elements and the nonlinear elements/sources. In general, this involves a reduced number of variables compared with having to solve for every state variable in the circuit. The error function combines the contributions from the linear and nonlinear sides as

    G u(t) + C du(t)/dt = sf(t) + Tᵀ iNL(t).    (5.6)

The error function itself is defined as

    f(t) = vL(t) − vNL(t) = T u(t) − vNL(t) = 0.    (5.7)

The size of the error function is equal to the number of state variables present in the nonlinear part of the circuit, and this represents the minimum number of variables required to solve this equation. In the case of most microwave circuits, there are usually comparable numbers of linear and nonlinear elements, and this approach reduces the number of variables to be solved for compared with the case when the voltage at every node is a variable.

5.3.4 Conversion to Algebraic Form

The differential equations above are converted into nonlinear algebraic voltage and current equations, expressed in discretized time as

    vNL(xn) = u[xn, x′n, . . . , x(m)n, xD,n]    (5.8)

    iNL(xn) = w[xn, x′n, . . .
, x(m)n, xD,n]    (5.9)

where xn = x(tn), x′n = x′(tn), (xD,n)i = xi(tn − τi), tn is the current value of time and τi is a fixed time delay. Eqn. (5.6) can be written in discretized form as

    G un + C u′n = sf,n + Tᵀ iNL(xn).    (5.10)

Using time-marching integration approximations, the vector u′n can be calculated as

    u′n = a un + bn−1    (5.11)

where bn−1 and un have the same dimensions. This value of u′n can now be used in Eqn. (5.10) to obtain

    G un + C[a un + bn−1] = sf,n + Tᵀ iNL(xn).    (5.12)

Solving for un gives

    un = [G + aC]⁻¹ [sf,n − C bn−1 + Tᵀ iNL(xn)].    (5.13)

Comparing with Eqn. (5.7), the discretized version of the error function can be written as

    f(xn) = ssv,n + Msv iNL(xn) − vNL(xn) = 0    (5.14)

where

    ssv,n = T[G + aC]⁻¹ [sf,n − C bn−1]    (5.15)

and

    Msv = T[G + aC]⁻¹ Tᵀ.    (5.16)

This error function can be written more compactly as

    f(xn) = vL(xn) − vNL(xn) = 0.    (5.17)

The above equation represents a system of equations of order ns, where ns is the number of common nodes between the linear and nonlinear parts of the circuit, or equivalently the number of state variables in the nonlinear portion of the circuit.

5.4 Implementation of Transient Noise Analysis

This section provides details of the changes that were required in order to simulate noise in fREEDA™. In particular, changes to the linear and nonlinear models were made which enhanced the large-signal deterministic models into models that contain transient sources of noise. The details of these changes are provided in this section. The effects of these changes are reflected in the error function, which is now dependent on stochastic state variables describing the circuit under consideration. As a result, the error function is a noisy error function, as shown in Subsection 5.4.3.
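The structure of the error function f(x) = vL(x) − vNL(x) in Eqn. (5.17) can be illustrated on the smallest possible case. fREEDA™ solves its nonlinear systems with the NNES library; the following sketch (ours, with illustrative element values) instead uses a plain Newton iteration on a one-state-variable circuit — a Thevenin source Vs with series resistance R driving a diode — where the state variable x is the diode junction voltage, so vNL(x) = x and vL(x) = Vs − R·iNL(x):

```python
import math

def solve_error_function(Vs=5.0, R=1e3, Is=1e-12, Vt=0.02585,
                         x0=0.5, tol=1e-9, max_iter=100):
    """Newton solution of f(x) = vL(x) - vNL(x) = 0 (cf. Eqn. (5.17))
    for a source Vs and resistor R driving a diode whose state
    variable x is its junction voltage."""
    x = x0
    for _ in range(max_iter):
        i_nl = Is * (math.exp(x / Vt) - 1.0)    # nonlinear current iNL(x)
        f = (Vs - R * i_nl) - x                 # vL(x) - vNL(x)
        if abs(f) < tol:
            break
        df = -R * Is / Vt * math.exp(x / Vt) - 1.0  # df/dx
        x -= f / df
    return x
```

Here the reduction the section describes is visible directly: the circuit has several node voltages, but the error function has only one unknown, the single nonlinear state variable.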
5.4.1 Enhanced Device Models

Several linear and nonlinear device models in fREEDA™ were modified to include random voltage and current terms based on the known sources of noise associated with each device. What follows are descriptions of the changes made to the npn-BJT model, the p-n junction diode, the MESFET model, the resistor and a white-noise-generating voltage source.

Gummel-Poon npn-BJT

The BJT model in fREEDA™ is based on the Gummel-Poon model [142], which is similar to the large-signal model in the SPICE circuit simulator. The Gummel-Poon model addressed drawbacks of the original Ebers-Moll model, the most important of which are second-order effects such as the decrease in current gain at low current levels and high-level injection into the base region. The low-current drop in gain results from increased base current due to recombination effects. The effect of base-width modulation, which results in a change of the collector-base and emitter-base currents, was included. High-level injection effects, which account for the injection of a significant amount of minority carriers into the base region, were also considered, as was the dependence of base resistance on current [56]. The large-signal equivalent circuit of the BJT is shown in Fig. 5.3.

Figure 5.3: Large-signal Gummel-Poon BJT circuit.

In fREEDA™, this model consists of 56 parameters and is implemented as a four-terminal device with 3 state variables: the voltage of the base with respect to the emitter (Vbe), the voltage of the base with respect to the collector (Vbc) and the voltage of the collector with respect to the substrate (Vcs). The outputs associated with the model are 3 voltages, namely the collector, base and emitter voltages with respect to the substrate, and 3 currents, namely the currents flowing in the collector, base and emitter terminals.
The parameters used in the model, along with their default values, are listed in Table 5.1 and Table 5.2. What follows is a summary of the equations that model the characteristics of the device. The parameters of the model, as they appear in the equations, are written in a fixed-width font.

    Parameter  Description                                        Default    Units
    is         transport saturation current                       1e-16      A
    bf         ideal maximum forward beta                         100        -
    nf         forward current emission coefficient               1          -
    vaf        forward Early voltage                              0          V
    ikf        corner for forward-beta high-current roll-off      0          A
    ise        base-emitter leakage saturation current            0          A
    ne         base-emitter leakage emission coefficient          1.5        -
    br         ideal maximum reverse beta                         1          -
    nr         reverse current emission coefficient               1          -
    var        reverse Early voltage                              0          V
    ikr        corner for reverse-beta high-current roll-off      0          A
    isc        base-collector leakage saturation current          0          A
    nc         base-collector leakage emission coefficient        2          -
    re         emitter ohmic resistance                           0          Ω
    rb         zero-bias base resistance                          0          Ω
    rbm        minimum base resistance                            0          Ω
    irb        current at which rb falls to half of rbm           0          A
    rc         collector ohmic resistance                         0          Ω
    eg         bandgap voltage                                    1.11       eV
    cje        base-emitter zero-bias p-n capacitance             0          F
    vje        base-emitter built-in potential                    0.75       V
    mje        base-emitter p-n grading factor                    0.33       -
    cjc        base-collector zero-bias p-n capacitance           0          F
    vjc        base-collector built-in potential                  0.75       V
    mjc        base-collector p-n grading factor                  0.33       -
    xcjc       fraction of cbc connected internal to rb           1          -
    fc         forward-bias depletion capacitance coefficient     0.5        -
    tf         ideal forward transit time                         0          s
    xtf        transit-time bias dependence coefficient           0          -
    vtf        transit-time dependency on vbc                     0          V
    itf        transit-time dependency on ic                      0          A
    tr         ideal reverse transit time                         0          s
    xtb        forward and reverse beta temperature coefficient   0          -
    xti        is temperature-effect exponent                     3          -
    tre1       re temperature coefficient (linear)                0          -
    tre2       re temperature coefficient (quadratic)             0          -
    trb1       rb temperature coefficient (linear)                0          -
    trb2       rb temperature coefficient (quadratic)             0          -

    Table 5.1: BJT model parameters in fREEDA™.

    Parameter  Description                                        Default    Units
    trm1       rbm temperature coefficient (linear)               0          -
    trm2       rbm temperature coefficient (quadratic)            0          -
    trc1       rc temperature coefficient (linear)                0          -
    trc2       rc temperature coefficient (quadratic)             0          -
    tnom       nominal temperature                                300        K
    t          operating temperature                              300        K
    cjs        collector-substrate capacitance                    0          F
    mjs        substrate junction exponential factor              0          -
    vjs        substrate junction built-in potential              0.75       V
    area       current multiplier                                 1          -
    ns         substrate p-n coefficient                          1          -
    iss        substrate saturation current                       0          A
    beta       exponent for chaotic map                           0.000005   -
    kf         scaling factor for flicker noise                   1          -
    alpha      power for dependence of flicker noise on current   2          -

    Table 5.2: BJT model parameters in fREEDA™, continued.

The model works in both forward and reverse modes. The base current consists of two components — a base-emitter current and a base-collector current — expressed as

    ib = ibe + ibc.    (5.18)

This can also be written as

    ib = ibf/bf + ile + ibr/br + ilc.    (5.19)

The ideal forward diffusion current is expressed as

    ibf = is [exp(Vbe/(nf·vth)) − 1]    (5.20)

where vth = kT/q, k is Boltzmann's constant, T is the operating temperature of the device and q is the charge of an electron. The base-emitter recombination current ile is expressed as

    ile = ise [exp(Vbe/(ne·vth)) − 1].    (5.21)

The reverse diffusion current is a function of the base-collector voltage and is written as

    ibr = is [exp(Vbc/(nr·vth)) − 1]    (5.22)

and the base-collector recombination current is written as

    ilc = isc [exp(Vbc/(nc·vth)) − 1].    (5.23)

The equation for the collector-emitter current can be determined from the above terms as

    Ice = [(Ibe − Ile)·bf − (Ibc − Ilc)·br] / KQB    (5.24)

where the term KQB models the effect of forward or reverse current roll-off, which can occur due to base-width modulation and high-level current injection into the base. KQB can take two forms depending on whether the parameter ikf or ikr is set.
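The diffusion and recombination currents of Eqns. (5.19)–(5.23) can be sketched as follows (our illustration, not the fREEDA™ source; default parameter values are taken from Table 5.1, and the emission coefficients ne, nr and nc are used for the leakage and reverse terms as in the equations above):

```python
import math

def gp_base_currents(vbe, vbc, T=300.0,
                     is_=1e-16, ise=0.0, isc=0.0,
                     nf=1.0, ne=1.5, nr=1.0, nc=2.0,
                     bf=100.0, br=1.0):
    """Diffusion and recombination currents of Eqns. (5.20)-(5.23)
    and the total base current of Eqn. (5.19)."""
    vth = 1.380649e-23 * T / 1.602176634e-19   # kT/q
    ibf = is_ * (math.exp(vbe / (nf * vth)) - 1.0)  # forward diffusion
    ile = ise * (math.exp(vbe / (ne * vth)) - 1.0)  # b-e recombination
    ibr = is_ * (math.exp(vbc / (nr * vth)) - 1.0)  # reverse diffusion
    ilc = isc * (math.exp(vbc / (nc * vth)) - 1.0)  # b-c recombination
    ib = ibf / bf + ile + ibr / br + ilc            # Eqn. (5.19)
    return ibf, ile, ibr, ilc, ib
```

In the forward active region (Vbe > 0, Vbc < 0) the forward diffusion term dominates, and with the default zero leakage saturation currents the base current reduces to ibf/bf plus a tiny reverse-saturation term.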
If ikf is set (forward diffusion), then

    KQB = [0.5/(1 − TVAF − TVAR)] · [1 + sqrt(1 + 4(Ibe − Ile)·bf/ikf)]    (5.25)

whereas if ikr is set (reverse diffusion), then

    KQB = [0.5/(1 − TVAF − TVAR)] · [1 + sqrt(1 + 4(Ibc − Ilc)·br/ikr)].    (5.26)

In the above, TVAF = Vbc/vaf and TVAR = Vbe/var. The resistance of the base is non-ideal and is a function of KQB. It is written as

    Rbb = (1/area)·[rbm + (rb − rbm)/KQB].    (5.27)

The model in fREEDA™ also includes the contribution to the output currents from the capacitors shown in Fig. 5.3, but those equations are not included here. To include noise sources in the above model, thermal, shot and flicker noise sources are considered. Based on the development in [55] and [56], when the transistor is in the forward active region, minority carriers diffuse and drift across the base into the base-collector region. They undergo acceleration when they enter the collector-base depletion region because of the field existing there and are swept into the collector. This is a random process and is a source of shot noise in the collector. Recombination effects in the base-emitter region and carrier injection from the base into the emitter are also random processes, contributing a shot effect in the base current and emitter current respectively. The parasitic resistors in the BJT contribute thermal noise modelled as current noise sources. For the collector, base and emitter resistances respectively, these noise sources are expressed as

    it,rc = sqrt(2kT/rc)·ξc    (5.28)

    it,rb = sqrt(2kT/Rbb)·ξb    (5.29)

    it,re = sqrt(2kT/re)·ξe    (5.30)

where ξc, ξb and ξe are sequences of white noise generated by the Logistic map of Section 4.3. Three sources of shot-noise current are considered, proportional to the collector-emitter, base-emitter and base-collector currents respectively. These are written as

    is,ce = sqrt(q|ice|)·ξce    (5.31)

    is,be = sqrt(q|ibe|)·ξbe    (5.32)

    is,bc = sqrt(q|ibc|)·ξbc
    (5.33)

Although there is more than one known source of flicker noise in BJTs [83], it has been shown that the dominant source of flicker noise can be represented by a single current source between the base and emitter terminals and is a function of the base-emitter recombination current [84]. The flicker-noise current source can be expressed as

    if,be = sqrt(kf·|ile|^alpha)·ξf    (5.34)

where ξf is a flicker-noise sequence generated by the Logarithmic map of Section 4.4 and alpha controls the dependence of the flicker-noise component on the non-ideal base current. By default, alpha is set to 2.

Apart from alpha, these changes add two additional parameters to the BJT element. The first parameter, beta, allows control over the slope of the flicker characteristic in the frequency domain, and its default value is set to 0.000005. The other parameter, kf, allows for scaling of the amplitude of the flicker-noise process, and its default value is set to 0.001. Adding these noise sources to the noiseless Gummel-Poon BJT model produces the noisy equivalent large-signal circuit shown in Fig. 5.4. The non-ideal diodes of Fig. 5.3 are omitted there for simplicity.

Figure 5.4: The Gummel-Poon BJT model, along with noise sources.

The underlying assumption of the model in fREEDA™ is that all the sources of noise are uncorrelated with one another. This can be justified by considering each intrinsic noise-generating process within the device to be an independent process. The fREEDA™ source code for the noise-enabled npn-BJT model is provided in Appendix C.1.

Diode

The diode model in fREEDA™ is based on the SPICE diode and is implemented as a two-terminal device with one state variable, the voltage across the device. The large-signal model of the noiseless diode consists of a current source in parallel with a capacitance, both of which are in series with a parasitic resistance.
This is shown in Fig. 5.5. The model consists of 15 parameters, which are listed in Table 5.3.

Figure 5.5: Large-signal model for a p-n junction diode.

The current flowing through the diode, Id, is modelled as an exponential function of the voltage Vd across it. It is expressed as

    Id = is·(e^{Vd/(n·Vth)} − 1) − Ib    (5.35)

where Vth = kT/q. The current Ib is the current in the diode in breakdown mode. It is written as

    Ib = 0                            if Vd ≥ −bv
    Ib = ibv·e^{−(Vd + bv)/(n·Vth)}   if Vd < −bv.    (5.36)

    Parameter  Description                                          Default    Units
    is         saturation current                                   1e-14      A
    n          emission coefficient                                 1          -
    ibv        current magnitude at the reverse breakdown voltage   1e-10      A
    bv         breakdown voltage                                    0          V
    fc         coefficient for forward-bias depletion capacitance   0.5        -
    cj0        zero-bias depletion capacitance                      0          F
    vj         built-in junction potential                          1          V
    m          p-n junction grading coefficient                     0.5        -
    tt         transit time                                         0          s
    area       area multiplier                                      1          -
    charge     use charge-conserving model                          true       -
    rs         series resistance                                    0          Ω
    beta       exponent for chaotic map                             0.000005   -
    kf         scaling factor for chaotic noise                     1          -
    alpha      power for dependence of flicker noise on current     1          -

    Table 5.3: Noisy diode model parameters in fREEDA™.

The diode model in fREEDA™ calculates the charge between the diode terminals and computes the resulting current as the derivative of the charge. It also has an option (the charge parameter) which, if unset, uses the original diffusion capacitance of the SPICE model to compute time-varying currents. These equations are not provided here. The noise model for the diode adds a thermal current source for the parasitic resistance, a shot-noise current source and a flicker-noise current source that depends on the current flowing through the diode [56]. The enhanced diode model including noise sources is shown in Fig. 5.6.
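The large-signal diode current of Eqns. (5.35) and (5.36) can be sketched as follows (our illustration, not fREEDA™ code; bv = 4 V is an arbitrary value chosen so the breakdown branch is visible, and the sign convention in the breakdown exponential follows the reconstruction above):

```python
import math

def diode_current(vd, is_=1e-14, n=1.0, ibv=1e-10, bv=4.0, T=300.0):
    """Diode current of Eqn. (5.35) with the breakdown term of Eqn. (5.36).
    bv = 4 V is an illustration value; the default in Table 5.3 is 0."""
    vth = 1.380649e-23 * T / 1.602176634e-19   # kT/q
    if vd >= -bv:
        ib = 0.0                               # not in breakdown
    else:
        ib = ibv * math.exp(-(vd + bv) / (n * vth))  # breakdown current
    return is_ * (math.exp(vd / (n * vth)) - 1.0) - ib
```

Forward bias gives the usual exponential turn-on, reverse bias below breakdown gives approximately −is, and beyond −bv the breakdown term dominates and the current grows rapidly negative.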
The thermal, shot and flicker noise models respectively are written as

    it,rs = sqrt(2kT/rs)·ξt    (5.37)

    is,d = sqrt(q·Id)·ξs    (5.38)

    if,d = sqrt(kf·Id^alpha)·ξf    (5.39)

where the parameter kf is the scaling coefficient for flicker noise and alpha controls the dependence of the flicker-noise component on the current in the diode. The parameter beta controls the slope of the generated 1/f characteristic. As before, the random variables ξt and ξs, representing thermal and shot-noise processes, are generated using the Logistic map, and ξf, representing a flicker-noise process, is generated using the Logarithmic map.

Figure 5.6: Noise-enabled p-n junction diode model.

The source code for this diode model in fREEDA™ is provided in Appendix C.2.

MESFET

There are several MESFET models in fREEDA™, namely the Curtice cubic model [148], the Materka model [149], the Parker-Skellern model [150] and the model proposed by Ooi, Ma and Leong in [151], henceforth referred to as the OML model. The Curtice cubic model was enhanced by including sources of noise as described below. It has two state variables, the voltage across the gate-source junction and the voltage across the gate-drain junction. The large-signal equivalent circuit without noise sources for the Curtice cubic MESFET is shown in Fig. 5.7. The noise-enabled model consists of 32 parameters, as listed in Table 5.4. The Curtice cubic model computes the drain-source current Ids as a polynomial function of the input voltage Vin. This current is expressed as

    Ids = (a0 + a1·Vin + a2·Vin² + a3·Vin³)·tanh(gama·Vds)    (5.40)

where a third-order polynomial with coefficients a0, a1, a2 and a3 is used. The values of these coefficients are generally obtained from measured data. To be able to include the effect
To be able to include the effect of the increase of pinch-off voltage with drain-source voltage, the transit time (parameter td) associated with the FET is considered in the formulation of the input voltage Vin as

    Vin = Vgs(t - td) [1 + beta(vds0 - Vds)]    (5.41)

Curtice in [148] found that this time delay is dependent on the value of the input voltage to the device.

Figure 5.7: Noiseless Curtice Cubic large-signal model.

Table 5.4: Noisy Curtice Cubic model parameters in fREEDA.

Parameter | Description                                          | Default  | Units
a0        | drain saturation current for VGS = 0                 | 0.1      | A
a1        | coefficient for Vin                                  | 0.05     | A/V
a2        | coefficient for Vin^2                                | 0        | A/V^2
a3        | coefficient for Vin^3                                | 0        | A/V^3
beta      | Vin dependence on Vds                                | 0        | 1/V
vds0      | Vds at which beta was measured                       | 0.4      | V
gama      | slope of drain characteristic in the linear region   | 1.5      | 1/V
vt0       | voltage at which the channel current is forced to 0  | 1e-10    | V
cgs0      | gate-source barrier capacitance for VGS = 0          | 0        | F
cgd0      | gate-drain barrier capacitance for VGS = 0           | 0        | F
is        | diode saturation current                             | 0        | A
n         | diode ideality factor                                | 1        | -
ib0       | breakdown current parameter                          | 0        | A
nr        | breakdown ideality factor                            | 10       | -
td        | channel transit time                                 | 0        | s
vbi       | built-in junction potential                          | 0.8      | V
fcc       | forward-bias depletion capacitance coefficient       | 0.5      | V
vbd       | breakdown voltage                                    | 1e10     | V
tnom      | reference temperature                                | 293      | K
avt0      | pinch-off voltage linear temperature coefficient     | 0        | 1/K
bvt0      | pinch-off voltage quadratic temperature coefficient  | 0        | 1/K^2
tbet      | beta power-law temperature coefficient               | 0        | 1/K
tm        | Ids linear temperature coefficient                   | 0        | 1/K
tme       | Ids power-law temperature coefficient                | 0        | 1/K^2
eg        | barrier height at 0 K                                | 0.8      | eV
m         | grading coefficient                                  | 0.5      | -
xti       | diode saturation current temperature exponent        | 2        | -
tj        | junction temperature                                 | 293      | K
area      | area multiplier                                      | 1        | -
map beta  | exponent for chaotic map                             | 0.000005 | -
kf        | scaling factor for flicker noise                     | 1        | -
af        | flicker noise exponent                               | 1        | -
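The drain-current evaluation of Eqns (5.40)-(5.41) can be sketched as follows. The function name `curtice_ids` is hypothetical, and the bracketed term in Vin uses the reconstructed form [1 + beta(vds0 - Vds)], an assumption about the garbled original; the default coefficients are the Table 5.4 defaults.

```python
from math import tanh

def curtice_ids(vgs_delayed, vds, a=(0.1, 0.05, 0.0, 0.0),
                beta=0.0, vds0=0.4, gama=1.5):
    """Curtice Cubic drain-source current, Eqns (5.40)-(5.41).
    vgs_delayed is Vgs(t - td), i.e. the gate-source voltage td
    seconds in the past.  Illustrative sketch, not the fREEDA model."""
    vin = vgs_delayed * (1.0 + beta * (vds0 - vds))          # (5.41)
    a0, a1, a2, a3 = a
    return (a0 + a1 * vin + a2 * vin ** 2
            + a3 * vin ** 3) * tanh(gama * vds)              # (5.40)
```

The tanh factor gives the linear-to-saturation transition in Vds, while the polynomial in Vin shapes the transfer characteristic.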
In fREEDA, this delay effect is modelled and is a parameter in the netlist describing the circuit under consideration. Once this parameter is set to a non-zero value, the data structure associated with the input voltage stores not just the value at the current time instant but also the value at the assigned delay in the past. This allows memory effects to be included in the simulation framework. The nonlinear capacitors associated with the MESFET, shown in Fig. 5.7, are also implemented in the model as described in [148] and are not repeated here.

To model the effects of noise in the MESFET, the main sources of noise considered ([56], [57]) are thermal noise associated with the parasitics of the MESFET, thermal noise associated with the channel resistance of the device, and flicker noise, which is a function of the drain-source current. These sources of noise are shown in Fig. 5.8. The shot noise associated with a MESFET is assumed to be generated from the gate-leakage current [56] and is not modelled here.

Figure 5.8: Noise-enabled Curtice Cubic large-signal model.

The thermal noise associated with the drain and source resistances respectively is modelled as

    i_{t,rd} = sqrt(2kT/rd) xi_t    (5.42)
    i_{t,rs} = sqrt(2kT/rs) xi_t    (5.43)

where xi_t represents white noise generated by the Logistic map of Section 4.3. The model for the thermal noise of the channel is as found in [57] and is expressed as

    I_{t,ch} = sqrt( 4kT beta (Vgs - vt0) (1 + eta + eta^2) / (3(1 + eta)) ) xi_t    (5.44)

where

    eta = 1 - Vds/(Vgs - vt0)    for Vds <= Vgs - vt0
    eta = 0                      for Vds > Vgs - vt0    (5.45)

and the flicker noise of the device is modelled as

    i_{f,ch} = sqrt(kf Ids^af) xi_f    (5.46)

where kf is an amplitude scaling coefficient for flicker noise and af is the flicker noise exponent. The xi_f are random variables representing a flicker noise process as generated in Section 4.4, and the parameter map beta controls the slope of the generated 1/f characteristic.
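The channel thermal and flicker noise of Eqns (5.44)-(5.46) can be evaluated as below. This is a sketch built on the reconstructed equations, not the fREEDA source; the function name is hypothetical, the nonzero `beta` default is purely illustrative, and the code assumes the device is above pinch-off (Vgs > vt0).

```python
from math import sqrt

def channel_noise_current(vgs, vds, ids, xi_t, xi_f,
                          beta=0.1, vt0=-1.2, kf=1.0, af=1.0, temp=300.0):
    """Channel thermal noise, Eqns (5.44)-(5.45), and flicker noise,
    Eqn (5.46), for the noise-enabled Curtice Cubic MESFET.
    Assumes vgs > vt0 so the gate overdrive is positive."""
    k = 1.380649e-23                  # Boltzmann constant, J/K
    vp = vgs - vt0                    # gate overdrive Vgs - vt0
    # (5.45): eta falls from 1 in the triode region to 0 in saturation
    eta = 1.0 - vds / vp if vds <= vp else 0.0
    i_th = sqrt(4.0 * k * temp * beta * vp
                * (1.0 + eta + eta ** 2) / (3.0 * (1.0 + eta))) * xi_t  # (5.44)
    i_fl = sqrt(kf * ids ** af) * xi_f                                  # (5.46)
    return i_th, i_fl
```

In saturation (eta = 0) the bracketed factor reduces to 1/3, recovering the familiar saturated-channel thermal noise form.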
The source code for the noise-enabled Curtice Cubic MESFET element is available in Appendix C.3.

Resistor

The resistor model in fREEDA is a linear model that consists of a single parameter, the value of the resistance. The noise-enabled resistor was implemented as a nonlinear element with its state variable being the voltage across its terminals. In the case of the resistor, the only noise source considered is thermal noise. Therefore, in addition to the deterministic output of the resistor, there is a temperature-dependent thermal noise current contribution expressed as

    i_{t,r} = sqrt(2kT/R) xi_r    (5.47)

where R is the value of the resistance, T is the temperature and k is Boltzmann's constant. The parameters associated with this resistor are shown in Table 5.5. The random processes xi_r are assumed white and are generated using the Logistic map of Section 4.3. The source code for the resistor is provided in Appendix C.4.

Table 5.5: Noisy resistor model parameters in fREEDA.

Parameter | Description               | Default | Units
res       | Resistance value          | 1000    | Ohm
temp      | Temperature               | 300     | K
kth       | Noise scaling coefficient | 1       | -

White Noise Generating Voltage Source

A white noise voltage generator is implemented in fREEDA as a linear source of Gaussian-distributed random variables. The parameters for this element are listed in Table 5.6. The model can generate Gaussian random variables at a DC offset, provide a user-specified delayed sequence and different scales of random variables, and allows for non-normalized Gaussian random variables by adjusting the mean and variance to desired values. The white noise sequences are generated using the Ziggurat technique [132]. The essential source code to generate a white noise sequence with the Ziggurat technique has been taken from [132] but modified to fit into the fREEDA framework. The source code for the voltage source is listed in Appendix C.5.
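The behavior of the white-noise voltage source can be sketched as below, using the Table 5.6 parameters. The function name `white_noise_voltage` is hypothetical, and `random.gauss` merely stands in for the Ziggurat generator actually used in fREEDA; both produce Gaussian samples.

```python
import random

def white_noise_voltage(n, vo=0.0, mean=0.0, variance=1.0, kn=1.0, seed=None):
    """Sequence of n samples from the white-noise voltage source of
    Table 5.6: Gaussian samples with the requested mean and variance,
    scaled by kn and shifted by the DC offset vo.  Sketch only; the
    Ziggurat method of [132] is replaced here by random.gauss."""
    rng = random.Random(seed)
    return [vo + kn * rng.gauss(mean, variance ** 0.5)
            for _ in range(n)]
```

Setting a seed makes the sequence reproducible from run to run, which is useful when comparing noisy transient simulations.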
The Ziggurat implementation in fREEDA is provided in Appendix C.8.

Table 5.6: White noise voltage source model parameters in fREEDA.

Parameter | Description                     | Default | Units
vo        | Offset value                    | 0       | V
td        | Delay time                      | 0       | s
mean      | Mean of the random variable     | 0       | -
variance  | Variance of the random variable | 1       | -
kn        | Scaling coefficient             | 1       | -

5.4.2 Nonlinear Maps

The nonlinear chaotic maps from Section 4.3 and Section 4.4, namely the Logistic map and the Logarithmic map, are implemented in fREEDA as separate classes which can be instantiated inside any element. These maps work in conjunction with a transient analysis only, and during instantiation, information about the time step and stop time must be passed. Each map uses this information to dynamically allocate memory large enough to hold a sequence as long as the transient simulation itself. After instantiation, the map chooses a random starting point in its domain and proceeds to generate an orbit whose characteristics are controlled by a single parameter. For the Logistic map, this parameter is lambda and its default value is 4. For the Logarithmic map, the parameter is beta and its default value is 0.000005. Each element generates one map for each intrinsic noise source. The sensitive dependence on initial conditions (SDIC) property of chaos ensures that noise sources realized in this way will have different instantaneous values, and hence that the different noise sources of the devices are uncorrelated. The source code for the Logistic map generator is provided in Appendix C.9 and that for the Logarithmic map in Appendix C.10.

5.4.3 Noisy Error Function

The examples above show how changes to individual linear and nonlinear elements can be made. On account of the object-oriented structure of fREEDA, making a change to an element requires no changes to the numerical solver routines.
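The Logistic map orbit described above can be generated in a few lines. This is a sketch of the Section 4.3 generator, not the fREEDA class; the function name `logistic_orbit` is hypothetical.

```python
def logistic_orbit(n, lam=4.0, x0=0.3):
    """Orbit of the Logistic map x_{k+1} = lam * x_k * (1 - x_k).
    With lam = 4 (the default in fREEDA) the orbit is chaotic and
    serves as a white-noise-like sequence on [0, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = lam * x * (1.0 - x)
        xs.append(x)
    return xs
```

The SDIC property is easy to observe: two orbits started from seeds that differ by one part in a billion diverge within a few dozen iterations, which is what guarantees that the per-source map instances in fREEDA produce effectively uncorrelated sequences.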
The analysis routines are not aware of specific details about the elements under consideration; they strive to minimize the error function presented to them by an abstract framework implemented in the simulator. The partitioning of a generic circuit in fREEDA happens just as in the noiseless case, but the contributions of the noise-enabled elements now appear at interfacial nodes, as shown in Fig. 5.9. As is apparent from the figure, the nonlinear elements now include contributions from internal noisy current or voltage sources. Effects of linear noisy voltage sources also appear at the interface, as in the case of the white noise generating voltage source model.

Figure 5.9: Partitioned network which now contains contributions from transient noise sources.

The error function is now transformed to

    f(x_{xi,n}) = v_L(x_{xi,n}) - v_NL(x_{xi,n})    (5.48)

and contains a mixture of deterministic terms and stochastic noise terms that can be both additive and multiplicative. These stochastic terms are not necessarily small-signal, as they may produce significant instantaneous components upon interaction with large deterministic signals in a circuit. If the resulting system of stochastic differential equations is interpreted in the Stratonovich sense, it can be solved by the same techniques used to solve the system of equations represented in Eqn. (5.17). This allows for easy implementation of a concurrent deterministic and stochastic framework and enables the use of pre-existing, robust and established numerical solution techniques.

It is also possible to modify the transient analysis routine to include random terms directly in the error function instead of modifying individual element models.
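The structure of Eqn (5.48) — a linear-side contribution minus a nonlinear-side contribution that now carries a noise term — can be illustrated on a single node: a source and resistor driving a diode with an injected noise current. This toy Newton solve is not the fREEDA solver; the function name and the step-damping constant are illustrative choices.

```python
from math import exp

def solve_noisy_node(vsrc, r, i_noise, is_=1e-14, n=1.0, vth=0.02585,
                     v0=0.5, tol=1e-12, max_iter=200):
    """Newton iteration on a one-node error function with the shape of
    Eqn (5.48): f(v) = (linear-side current) - (nonlinear-side current),
    where the nonlinear side carries an additive noise current i_noise.
    Toy illustration of solving the noisy residual with a standard
    deterministic technique."""
    v = v0
    for _ in range(max_iter):
        f = (vsrc - v) / r - is_ * (exp(v / (n * vth)) - 1.0) - i_noise
        df = -1.0 / r - (is_ / (n * vth)) * exp(v / (n * vth))
        dv = -f / df
        dv = max(-0.1, min(0.1, dv))   # damp steps so exp() cannot overflow
        v += dv
        if abs(dv) < tol:
            break
    return v
```

Injecting a nonzero noise current shifts the solved node voltage, exactly the mechanism by which element-level noise sources perturb the transient solution at every time step.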
This procedure was considered earlier and, although quicker to deploy in practice, was discarded in favor of enabling noise in individual device elements. The reason is that modifying the analysis directly is too generic and unintuitive and does not correspond to sources of noise as they are known in electronic devices. To determine the noise characteristics correctly, it is essential to use noise models that are particular to the devices in the circuit under consideration.

5.5 Summary

This chapter provides implementation details of the stochastic framework in the circuit simulator fREEDA. The implementation keeps the basic transient analysis framework of fREEDA unchanged and essentially extends the deterministic framework to include random sources of noise. This approach is made possible by the object-oriented structure of fREEDA and, in particular, by the clear distinction between element routines and analysis routines. It also permits the resulting stochastic system of differential equations to be analyzed with existing integration techniques, as a consequence of using the Stratonovich interpretation. In the event that handling stochastic signals requires minor modifications to the convergence parameters of the solver, or a change of the solver routine itself, this can be accomplished while preserving the existing data structures.

Chapter 6

Noise in a Voltage-Controlled Oscillator

6.1 Introduction

This chapter compares simulation runs with measured results for a varactor-tuned voltage-controlled oscillator circuit, the details of which are presented in Section 6.2. The circuit uses nonlinear devices such as bipolar junction transistors and varactors, the latter modelled as reverse-biased diodes.
Section 6.3 explains the procedure required to extract flicker noise parameters for the nonlinear devices in the circuit and presents comparisons between simulation and measurement for different values of input bias.

6.2 A Varactor Voltage-Controlled Oscillator

This section presents a varactor-tuned voltage-controlled oscillator circuit designed to oscillate at a resonant frequency between 45 and 55 MHz. The schematic of the oscillator is shown in Fig. 6.2. The point marked (A) is the output of the resonator, and the transformers shown inside a box represent the feedback path. The transistors Q1 and Q2 are emitter-coupled npn BJTs chosen to provide sufficient gain to maintain oscillation. The voltage source VSS biases the varactor, which provides the requisite voltage-dependent capacitance to the resonator. The voltage swing at the resonator is fairly large and can be as high as 40 V p-p. The output is measured at point (B). The output at this terminal is a square-wave-like signal, and a snapshot of the simulated output voltage versus time is shown in Fig. 6.1. This circuit has been taken from [144].

Figure 6.1: Simulated output of the VCO at terminal (B) indicating a frequency of oscillation of 45 MHz.

6.3 Simulation and Validation

As detailed in [144], the oscillator phase noise, which is a single-sided power spectral density of phase fluctuation, was measured using the Agilent Model 4352S phase noise test set. External sources of noise were prevented from having a major impact on the measurements by appropriate shielding, and the circuits were operated on batteries. Single-sided phase noise measurements were taken at frequency offsets varying from 100 Hz to 100 kHz at bias conditions of 0 V, 6 V and 12 V. According to measurement, the frequency of oscillation of the VCO is dependent on bias.
This dependence is reproduced from [144] and is shown in Fig. 6.3. A measurement was also made of the degradation of phase noise observed at the varactor terminal with respect to the varactor bias voltage. The purpose of these measurements is to observe the point at which the varactor enters the breakdown region and the effect this has on phase noise at the varactor terminals. This plot, reproduced from [144], is shown in Fig. 6.4. As is evident from the figure, once the bias voltage increases above 5 V, the phase noise at the varactor begins to degrade, and beyond the 10 V mark there is a noticeable increase of phase noise indicative of the onset of the breakdown mechanism. Once the bias voltage crosses 18 V, the breakdown effect is apparent, as indicated by the sudden vertical spike in the graph of Fig. 6.4.

Figure 6.2: Varactor-tuned VCO schematic, from [144].

Figure 6.3: Dependence of VCO oscillation frequency on bias.

The models for the elements used in the circuit were modified in fREEDA to include sources of noise as detailed in Section 5.4. In particular, the noise-enabled npn BJT and resistor elements were used together with the reverse-biased p-n junction diode that models the varactor in the circuit. The important parameter values used for the BJT model are shown in Table 6.1, while the diode parameters are shown in Table 6.2. The fREEDA-compatible netlist used for this simulation is provided in Appendix D.1. To perform the simulation, values were required for the flicker noise scaling parameter kf of both the BJT and the diode.
This is because the mechanism of 1/f noise generation in a semiconductor device is not unique, unlike thermal and shot noise, which makes it difficult to set a global value of kf that will work for any device. Numerous simulations were performed to find values of kf for the BJT and the diode that fit well to experimentally measured values of phase noise.

Figure 6.4: Degradation of phase noise at the varactor versus bias.

Table 6.1: BJT model parameter values as used in the VCO circuit.

bf = 255.9, br = 6.092, cjc = 7.306e-12, cje = 22.01e-12, ikf = 0.2847, is = 14.34e-15, ise = 14.34e-15, itf = 0.6, mjc = 0.3416, mje = 0.377, ne = 1.307, nf = 1.0, nr = 1.0, rb = 10.0, rc = 1.0, tf = 411.1e-12, tr = 46.91e-9, vaf = 74.03, vtf = 1.7, xtb = 1.5, xtf = 3.0, kf = 1e-2, beta = 0.000005

Table 6.2: Diode model parameter values as used in the VCO circuit.

is = 1.365p, n = 1, cj0 = 14.93e-12, m = 0.4261, fc = 0.5, bv = 25, ibv = 10.0e-6, vj = 0.75, rs = 1, kf = 1e-4, beta = 0.000005

Figure 6.5: Phase noise comparison between simulation and measurement with bias voltage at 0 V.

Initially, a value was determined by fitting to the phase noise measurements at 0 V bias. Once this value was determined and a suitable match to measurement was obtained, the same values of kf for the BJT and diode were used in a simulation at the 6 V bias condition to obtain a match to the corresponding phase noise measurements at 6 V. The value of kf for the BJT at 0 V bias was determined to be 1e-2, while for the diode kf = 1e-4 was used. It has been shown previously [145] that varactors contribute minimal 1/f noise to the overall phase noise response, which explains the low value of kf used for the diode. The main contribution of flicker noise in this circuit comes from the active elements.
The BJT and diode contribute thermal noise associated with their resistive parasitics and shot noise associated with their junctions, and the external resistors in the circuit contribute thermal noise. The inductors and capacitors are assumed ideal and hence have no internal sources of noise [146]. The comparisons between simulated and measured curves for the 0 V and 6 V bias conditions are shown in Fig. 6.5 and Fig. 6.6 respectively.

Figure 6.6: Phase noise comparison between simulation and measurement with bias voltage at 6 V.

Figure 6.7: Phase noise comparison between simulation and measurement with bias voltage at 12 V.

The same approach was attempted for the 12 V bias condition, but the match was initially not good. To understand why, it is informative to look at Fig. 6.4, which reveals an increased level of phase noise associated with the varactor at 12 V. Further simulations were run, this time with the shot noise scaling component of the varactor, K_sh, increased from unity, and after several attempts a value of 1e4 was found to provide an improved match between simulated runs and measured curves. This is shown in Fig. 6.7. The correctness of this latter empirical fitting procedure for shot noise is questionable given the known fundamental nature of shot noise. The simulation was still performed and the results presented because considering increased levels of shot noise in the varactor as it approaches breakdown is plausible and is a possible approach for matching simulated curves from fREEDA to measured data.
Although this does not necessarily validate the simulation approach or change fundamental concepts about shot noise, it might open new avenues for modelling noise in devices that are on the threshold of breakdown.

6.4 Summary

This chapter describes the simulation of phase noise in a circuit that has nonlinear interactions between large values of signal and noise. The approach does not require the magnitude of the noise to be negligibly small compared to the signal. A varactor-tuned VCO circuit is set up for simulation with appropriate time-domain noise models for the devices in the circuit. Simulations of phase noise are carried out and compared with measured results. Parameter values for the flicker noise scaling coefficients of the BJT and the diode are determined by performing several simulations at 0 V bias and matching simulated runs to measured data. Once these parameter values are found, they remain unchanged for the simulation at 6 V bias, and a close match between simulation and measurement is obtained. It is found that phase noise can be determined quite accurately over a fairly large range of frequency offsets. Higher values of bias drive the varactor close to breakdown, which causes increased levels of phase noise. A match between simulation and experiment is obtained in this case as well, but it requires artificial adjustments to the amplitude of the shot noise. It is reasoned that this scaling is justified since the varactor is on the threshold of breakdown when the reverse bias is set at 12 V.

Chapter 7

Noise and Amplification

7.1 Introduction

This chapter explores the effect of high levels of noise, superimposed on a carrier, on the gain-compression characteristics of a linear X-band MMIC driver amplifier. A series of measurements is performed, each with increasing levels of input power, and the corresponding gain-compression curves are generated. These curves are compared with simulated results obtained from fREEDA.
Even though the total noise power may be large, perturbations from time point to time point can be very small, and high simulation precision is required to track signals and separate stochastic from deterministic results after each time step and each nonlinear iteration. This is possible because of the high dynamic range of fREEDA [147] and the existence of a large catalog of nonlinear models.

7.2 Setup and Verification

The power amplifier considered is an X-band pHEMT MMIC amplifier (Filtronic LMA411) whose layout diagram is shown in Fig. 7.1. The MMIC operates between 8.5 and 14 GHz and has an output power of +14 dBm at 1 dB gain compression and +17 dBm at 3 dB gain compression. At 10 GHz, the noise figure is 2 dB. The circuit consists of two cascaded pHEMT stages, each in a class-A common-source configuration. The device was biased at a DC voltage of 6 V and a current of 94 mA.

Figure 7.1: Layout of the two-stage X-band MMIC.

Figure 7.2: Measurement setup for the X-band MMIC amplifier.

The measurement setup is shown in Fig. 7.2. A white noise sequence was generated digitally and transferred via a GPIB bus to the Agilent E8267C signal generator, which was set to a carrier frequency of 10 GHz. The signal generator can accept externally generated signals in a band of 100 MHz about its center frequency. This procedure creates a composite signal consisting of a 10 GHz sinusoid and a synthesized 100 MHz wide band-limited white noise sequence. The white noise sequence was generated using the randn function in MathWorks's MATLAB, which currently uses the Ziggurat technique [132] for generating white sequences [157]. This approach generates white noise samples with a very large period, roughly 2^1492, ensuring that the random numbers so generated are effectively random for almost all lengths of datasets of interest.
The amplitude of the composite signal was varied from -20 dBm to 5 dBm at 10 GHz in steps of 1 dBm, and the gain of the amplifier was measured at 10 GHz under two input conditions: one with no noise at the input, and a second with the signal-to-noise ratio (SNR) maintained at -20 dBc. A plot of measured gain versus input power with no input noise is compared with the measured gain with the noise level maintained at -20 dBc in Fig. 7.3.

Figure 7.3: Comparison between measured curves of gain with no input noise and noise maintained at -20 dBc.

In the plot, the total noise power in the 100 MHz noise bandwidth is specified. High levels of noise suppress the effective power gain of the amplifier with respect to the carrier. The suppressed power at the center frequency is dissipated into frequency bands around 10 GHz; this is represented schematically in Fig. 7.4. Note that a noise level of -20 dBc means that the total input noise is maintained at a constant level 20 dB below the carrier. A similar plot of simulated power gain versus input power with no input noise is compared with the simulated power gain with the noise level maintained at -20 dBc in Fig. 7.5.

Figure 7.4: Power transferred from the center frequency into sidebands.

Table 7.1: MESFET model parameters for the X-band MMIC.

Device 1: a0 = 0.09910, a1 = 0.08541, a2 = -0.0203, a3 = -0.015, beta = 0.01865, gama = 0.8293, vds0 = 6.494, vt0 = -1.2, vbi = 0.8, cgd0 = 3f, cgs0 = 528.2f, is = 3e-12, nr = 1.2, t = 1e-12, vbd = 12, kf = 1e-9

Device 2: a0 = 0.1321, a1 = 0.1085, a2 = -0.04804, a3 = -0.03821, beta = 0.03141, gama = 0.7946, vds0 = 5.892, vt0 = -1.2, vbi = 1.5, cgd0 = 4e-15, cgs0 = 695.2f, is = 4e-12, n = 1.2, t = 1e-12, vbd = 12, kf = 1e-9

Just as in the case of MATLAB, the white noise sequence was generated in fREEDA using the Ziggurat technique, the source code of which is provided in Appendix C.8. The pHEMTs were modelled using the Curtice Cubic model [148], and the polynomial coefficients for the cubic model were obtained from Filtronic Corporation (http://www.filtronic.co.uk). These parameters are listed in Table 7.1. The simulation uses the noise-enabled version of the Curtice Cubic MESFET as described in Section 5.4. The netlist used for this simulation is provided in Appendix D.2.

Figure 7.5: Comparison between simulated curves of gain with no input noise and noise maintained at -20 dBc.

Figure 7.6: Degradation of the output sinusoid due to noise.

To observe the degrading effect of noise on the purity of the output waveform, a snapshot of the transient output with the noise level maintained at -20 dBc is shown in Fig. 7.6. This figure shows distortion in the amplitude and shape of the output sinusoid with the input power level set at 0 dBm. It is important to note that this distortion is due to both saturation effects and noise interference. The simulated power gain curve in the high-noise case shows a comparatively smaller degree of compression than the measured results of Fig. 7.3. It is important to understand the reason for this discrepancy. The procedure used to uncover the reason for this difference is the topic of the next section and represents an interesting avenue for future research.

7.3 Further Investigations

As seen in the previous section, simulated curves of power gain versus input power do not show as much compression at high power levels in the presence of noise as measured curves do.
This section presents a summary of the attempts made to resolve this problem. The first attempt focuses on improvements to the Curtice Cubic model. Since the release of the Curtice Cubic model, several newer MESFET models have appeared in the literature that aim to capture large-signal behavior in the saturation and cutoff regions more accurately and to better model the knee region of the large-signal characteristics. Some examples are the Parker-Skellern (PS) model [150], TriQuint's own model (TOM, www.triquint.com) and, more recently, the MESFET model by Ooi, Ma and Leong (OML) [151]. The TOM model already exists in fREEDA, while the PS and OML models were implemented. The PS model takes a different approach from the Curtice Cubic model in that it models the drain current as a power-law function of effective gate and drain voltages. The parameter table for the model is provided in Table 7.2. The drain-source current is expressed as

    Ids = id / (1 + delta P)    (7.1)

where

    id = beta Vgt^q [1 - (1 - Vdt/Vgt)^p]    (7.2)

Table 7.2: Parker-Skellern model parameters in fREEDA.

Parameter | Description                             | Default | Units
acgam     | capacitance modulation                  | 0       | -
beta      | linear region transconductance scale    | 1e-4    | A.V^-q
cgs       | zero-bias gate-source capacitance       | 0       | F
cds       | zero-bias drain-source capacitance      | 0       | F
delta     | thermal reduction coefficient           | 0       | 1/W
fc        | forward-bias capacitance parameter      | 0.5     | -
hfeta     | high-frequency vgs feedback parameter   | 0       | -
hfe1      | hfeta modulation by vgd                 | 0       | 1/V
hfe2      | hfeta modulation by vgs                 | 0       | 1/V
hfgam     | high-frequency vgd feedback parameter   | 0       | -
hfg1      | hfgam modulation by vsg                 | 0       | 1/V
hfg2      | hfgam modulation by vdg                 | 0       | 1/V
ibd       | gate-junction breakdown current         | 0       | A
is        | gate-junction saturation current        | 1e-14   | A
lfgam     | low-frequency feedback parameter        | 0       | -
lfg1      | lfgam modulation by vsg                 | 0       | 1/V
lfg2      | lfgam modulation by vdg                 | 0       | 1/V
mvst      | sub-threshold modulation                | 0       | 1/V
n         | gate-junction ideality factor           | 1       | -
p         | linear region power-law exponent        | 2       | -
q         | saturated region power-law exponent     | 2       | -
rs        | source ohmic resistance                 | 0       | Ohm
rd        | drain ohmic resistance                  | 0       | Ohm
taud      | relaxation time for thermal reduction   | 0       | s
taug      | relaxation time for gamma feedback      | 0       | s
vbd       | gate-junction breakdown voltage         | 1       | V
vbi       | gate-junction potential                 | 1       | V
vst       | sub-threshold potential                 | 0       | V
vto       | threshold voltage                       | -2.0    | V
xc        | capacitance pinch-off reduction factor  | 0       | -
xi        | saturation knee potential factor        | 1000    | -
z         | knee transition parameter               | 0.5     | -
tj        | device temperature                      | 300     | K
tnom      | nominal temperature                     | 300     | K
afac      | gate width scale factor                 | 1       | -
lam       | channel length modulation               | 0       | -

and P is the power dissipated at the drain-source junction, expressed as

    P = id Vds    (7.3)

Near pinch-off, the drain current reduces exponentially with respect to gate potential and increases when gate breakdown occurs. To model this effect correctly, the gate potential Vgt is expressed as

    Vgt = vst(1 + mvst Vds) ln[ 1 + exp( vgst / (vst(1 + mvst Vds)) ) ]    (7.4)

where

    vgst = Vgs - vto - gamma_l V'gd - gamma_h (Vgd - V'gd) - eta_h (Vgs - V'gs)    (7.5)

and

    gamma_l = lfgam - lfg1 V'gs + lfg2 V'gd    (7.6)
    gamma_h = hfgam - hfg1 V'gs + hfg2 V'gd    (7.7)
    eta_h   = hfeta - hfe1 V'gd + hfe2 V'gs    (7.8)
    V'gs = Vgs - taug dVgs/dt                  (7.9)
    V'gd = Vgd - taug dVgd/dt                  (7.10)

The intrinsic drain terminal potential Vdt is controlled by the parameter z, which ensures a smoother knee region of the large-signal input-output characteristic, and is expressed as

    Vdt = (1/2) sqrt( (vdp sqrt(1+z) + Vsat)^2 + z Vsat^2 )
        - (1/2) sqrt( (vdp sqrt(1+z) - Vsat)^2 + z Vsat^2 )    (7.11)

This formulation introduces an effective drain potential term vdp, which has a range of [0, infinity) and is mapped into [0, Vsat], the range of Vdt. When vdp = 0, Vdt is approximately 0, and for large vdp, Vdt = Vsat. The term vdp is itself expressed as a function of the parameters p and q as

    vdp = (p/q) Vds ( Vgt / (vbi - vto) )^(p - q)    (7.12)

and the drain voltage in saturation is written as

    Vsat = xi (vbi - vto) Vgt / ( xi (vbi - vto) + Vgt )    (7.13)

The capacitance model used is the one proposed in [152] and is not reproduced here.
The source code for this model is provided in Appendix C.6.

Table 7.3: Parker-Skellern model parameters used in the X-band MMIC netlist.

Device 1: beta = 0.01865, vbi = 0.8, vbd = 12, cgd = 3f, cgs = 528.2f, q = 1.2, vt0 = -1.2, is = 3e-12
Device 2: beta = 0.03141, vbi = 1.5, vbd = 12, cgd = 4e-15, cgs = 695.2f, q = 1.2, vt0 = -1.2, is = 4e-12

This model was substituted for the Curtice Cubic in the netlist for the MMIC circuit (Appendix D.2). An example set of parameters for this model is provided in [150], and although these do not exactly reproduce the output characteristics obtained with the Curtice model, the results are still comparable. The non-default parameter values are provided in Table 7.3. Upon running the netlist with the new PS model, the results, both with and without noise, unfortunately do not indicate any changes compared to simulations with the original Curtice model. In other words, the model does not predict any noticeable reduction in gain when the circuit is fed with noise as compared to when it is fed with a sinusoidal input alone.

The PS model was then replaced with the OML model [151], which shares many characteristics with the Curtice model. It also uses a third-order polynomial to describe the output characteristics of a MESFET power amplifier. The difference lies in the fact that while the Curtice model uses a polynomial function of the gate-source voltage, the OML model uses a polynomial function of the effective gate-source voltage, Veff. The output current according to the OML model is then given as

    Ids = a1 Veff + a2 Veff^2 + a3 Veff^3    (7.14)

The parameters a1, a2 and a3 and the effective voltage are expressed as

    a1 = b1 Vds                              (7.15)
    a2 = b4 Vdseff / sqrt(1 + Vdseff^2)      (7.16)
    a3 = b5 Vdseff / sqrt(1 + Vdseff^2)      (7.17)

Table 7.4: OML model parameters used in the X-band MMIC netlist.

Devices 1, 2: b1 = -0.7437, b2 = 2.8974, b3 = 4.4187, b4 = 15.329, b5 = 21.8151, g = 0.2339, gamma = 0.12, delta = 0.04, vto = -1.2
V_{dseff} = \frac{b_2 V_{ds} + b_3 V_{ds}^2}{1 + g\,V_{gst}} (7.18)

V_{eff} = \frac{1}{2}\left(V_{gst} + \sqrt{V_{gst}^2 + delta^2}\right) (7.19)

V_{gst} = V_{gs} - vto + gamma\,V_{ds} (7.20)

where b1, b2, b3, b4, b5, g, gamma, delta and vto are parameters of the model. The capacitance equations of the Curtice model were retained in this model, as were the related parameters in the netlist. The source code for this model is available in Appendix C.7. Using the parameter values indicated in Table 7.4, the Curtice model was replaced by the OML model. However, just as in the case of the PS model, there is no significant change between simulations with a pure sinusoidal input and a sinusoidal input superimposed with noise.

On the assumption that higher-order polynomials are required to accurately predict saturation effects, the fREEDA™ model VccsPoly was considered. The VccsPoly model is a behavioral model that formulates output current as a polynomial function of arbitrary order N of the input voltage. It requires the coefficients of the polynomial to be supplied by the user and contains an optional gain parameter. The method used to select these coefficients and the order of the polynomial was somewhat arbitrary and required a large number of trials. The basic methodology was to start at the minimum polynomial order of 3 and use decreasing magnitudes for the higher-order coefficients. For orders of three and greater, the even-order coefficients were set to zero and consecutive odd-order terms had opposite signs. There is no apparent technique for choosing the values of these coefficients, and polynomials of up to order 11 with varying gain constants were considered. The results were unsatisfying: there was still no discernible difference between the compression characteristics in the absence and presence of input noise. This suggests that there is perhaps another mechanism at work causing the reduction in power in the presence of noise.
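As a numerical cross-check, the OML equations can be sketched with the Table 7.4 parameter values. The reconstruction of the garbled equations, in particular the assignment of the polynomial exponents in (7.14), is assumed from the text above, and the bias points chosen are illustrative.

```python
import math

# OML model parameters from Table 7.4 (both devices)
B1, B2, B3, B4, B5 = -0.7437, 2.8974, 4.4187, 15.329, 21.8151
G, GAMMA, DELTA, VTO = 0.2339, 0.12, 0.04, -1.2

def ids_oml(vgs, vds):
    """Drain current of the OML MESFET model, Eqs. (7.14)-(7.20)."""
    vgst = vgs - VTO + GAMMA * vds                        # (7.20)
    veff = 0.5 * (vgst + math.sqrt(vgst**2 + DELTA**2))   # (7.19): smooth max(vgst, 0)
    vdseff = (B2 * vds + B3 * vds**2) / (1.0 + G * vgst)  # (7.18)
    k = vdseff / math.sqrt(1.0 + vdseff**2)               # saturating drain factor
    a1, a2, a3 = B1 * vds, B4 * k, B5 * k                 # (7.15)-(7.17)
    return a1 * veff**3 + a2 * veff**2 + a3 * veff        # (7.14)

# More gate drive gives more current; below pinch-off the current collapses.
print(ids_oml(-0.6, 3.0), ids_oml(-1.0, 3.0), ids_oml(-2.0, 3.0))
```

The negative b1 on the cubic term provides the only compression mechanism in the model, which is consistent with the observation that it fails to reproduce the additional noise-induced compression.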
Figure 7.7: Comparing simulated gain obtained with a 20 ps delay and no delay with measured gain (power gain in dBm versus input power in dBm).

The fREEDA™ framework makes it possible to handle time delays inside the model of an element. The Curtice Cubic model in fREEDA™ has a parameter that allows the user to set a specific value of time delay in the netlist. This time delay, τ, makes the Curtice model compute the output current as a function of the voltage τ time units in the past. Choosing the appropriate time delay depends on the channel transit time of the active devices in the circuit and the elements surrounding them. Although it is possible to obtain a fairly accurate estimate of the time delay [153], a first attempt at selecting an appropriate value was made with a trial-and-error approach. For time delay values of less than 20 ps there was no appreciable decrease in gain between the noiseless and noisy cases, but there was a noticeable difference with the delay set at 20 ps for both amplifiers. A plot showing the drop in gain, with input power varying from 0 to 5 dBm and the delay set at 20 ps, was compared with the measured gain characteristics in the presence of noise shown in Fig. 7.3 and the simulated gain characteristics in the presence of noise shown in Fig. 7.5. This plot is shown in Fig. 7.7. As seen in this figure, the power gain dropoff obtained with a delay of 20 ps is quite sharp. At 0 dBm it is comparable with the gain of the simulated case with no delay, but at 5 dBm it is comparable with the measured result. At powers lower than 0 dBm, simulations using this delay value produce results that can differ quite significantly from the measured results.
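The delayed-voltage mechanism can be sketched as follows; the history interpolation and the stand-in nonlinearity f are illustrative assumptions, not the actual fREEDA™ Curtice Cubic implementation.

```python
# Sketch of a fixed time delay inside a transient device evaluation: the
# output current at time t is computed from the controlling voltage at
# time t - tau, linearly interpolated from the stored time-step history.
# The cubic below is a stand-in nonlinearity, not the actual Curtice model.

def delayed_current(times, voltages, t, tau, f):
    """Evaluate i(t) = f(v(t - tau)) by interpolating past samples."""
    td = t - tau
    if td <= times[0]:
        return f(voltages[0])  # before recorded history: hold the first value
    for k in range(1, len(times)):
        if times[k] >= td:
            w = (td - times[k - 1]) / (times[k] - times[k - 1])
            return f(voltages[k - 1] + w * (voltages[k] - voltages[k - 1]))
    return f(voltages[-1])

ts = [0.0, 1e-12, 2e-12, 3e-12]   # past time points (s)
vs = [0.0, 0.1, 0.3, 0.2]         # stored controlling voltage (V)
cubic = lambda v: v - 0.5 * v**3  # stand-in i/v nonlinearity

# With tau = 0 the delayed evaluation reduces to the instantaneous one.
print(delayed_current(ts, vs, 3e-12, 0.0, cubic))    # equals cubic(0.2)
print(delayed_current(ts, vs, 3e-12, 1e-12, cubic))  # equals cubic(0.3)
```

Because the current at time t mixes in voltage values from an earlier point on the waveform, the nonlinearity sees a phase-shifted drive, which is what allows a delay to change the compression behaviour.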
Figure 7.8: Distorted output voltage with a 20 ps delay (output voltage in volts versus time, 8.0 to 8.8 ns).

This might lead one to suggest that the effect of delay mechanisms, if validated, is dependent on input power. This is not unreasonable and was originally reported by Curtice [148]. A snapshot of the output voltage taken with the input power set at 0 dBm shows considerably more distortion than the output of Fig. 7.6, and is shown in Fig. 7.8. It is known that power amplifiers can exhibit strong delays and that this can have serious effects on performance, particularly at high input powers, as measured in [154], quantified in [155] and modelled in [156]. Although the investigations in this section are rudimentary, they present an interesting avenue for future research into the effects of the interaction of noise with large input signals on the RF performance of amplifiers.

7.4 Summary

The major result of this chapter is verification that the effects of high levels of noise can be modelled in a transient circuit simulator. The major change required is to separately track the deterministic and stochastic signals at each node in a circuit. Then, at each node, instead of solving an ordinary differential equation, a stochastic differential equation, discretized in the conventional manner, must be solved. Since stochastic signals have very small variations relative to the expected deterministic signal levels, it is crucial that the transient simulator achieve high dynamic range. The approach is verified by using nonlinear elements for the circuit devices in a high dynamic range simulator. Measured and simulated plots of the gain of the amplifier versus input power level are compared for the cases of no input noise and high-level input noise.
The results also demonstrate an increased level of compression in the presence of larger levels of noise, and this effect has also been captured by simulation, albeit not to the same extent as in measurement. Further investigations were carried out to determine an effective way to model this discrepancy, and it was found that advanced FET models and higher-order polynomial models did not have any impact on the simulated results in the presence of noise. Including fixed time delays in the element descriptions was found to approximately capture the reduction of power gain in the presence of noise. However, the lack of accuracy leaves several unanswered questions, such as whether the amount of delay is dependent on input power and whether the simulation including delays effectively captures the underlying distortive mechanism in the presence of noise. Exploring these questions is an important undertaking in order to characterize the effects of amplifiers operating under large-signal conditions in the presence of noise.

Chapter 8

Conclusions and Future Work

This work details the modelling of sources of colored noise in the time domain in a high dynamic-range circuit simulator. Colored noise sources were generated using nonlinear iterative chaotic maps with special properties and were inserted as elements of the circuit simulator. The existing deterministic simulator framework was then extended to a stochastic one and, with appropriate justifications, was solved assuming the Stratonovich interpretation for SDEs. The approach was verified by comparing simulated and measured results for real circuits operating under large-signal conditions.

8.1 Summary

Modelling sources of noise in electrical circuits, and the effects of interactions between noise and large signals, is important from the point of view of the design of RF circuits that are constantly shrinking in size and increasing in density.
This places constraints on signal-to-noise ratios and increases the required dynamic range. It is no longer adequate to describe noise sources in circuits in the frequency domain, where the levels of noise are small and the effects of noise can be assumed to be additive only. This work is an attempt to model sources of noise in the time domain and to include not just additive effects but the effects of multiplication of noise with large values of the deterministic signal. Instantaneous levels of noise multiplied with a large signal can have an impact on the performance of the circuit, and capturing these effects adequately is essential. As was shown in Chapter 7, large levels of noise combined with high input power levels can have a serious impact on the performance of a power amplifier operating at X-band. The modelling approach should therefore place minimal restrictions on the levels of the noise present in the circuit and on the nature of the interactions between noise and signal. The first main requirement for modelling noise in the time domain is that the circuits be represented by stochastic differential equations. While the use of SDEs is not new, previous attempts at using SDEs to model noise use the Itô form to interpret SDEs. This form is more suitable for the derivation of mathematical proofs related to SDEs on account of its non-anticipatory nature, but requires a new calculus to solve a system of SDEs. This form is adequate when the sources of noise are purely additive and the noise processes are perfectly white. The Stratonovich form, on the other hand, is more suitable for modelling noise processes that can be colored and for cases where there are multiplicative effects between noise and state-dependent deterministic terms. While the Stratonovich form is not as amenable to derivations of mathematical proofs, numerical solutions of SDEs with the Stratonovich interpretation can be determined with the conventional rules of calculus.
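The fact that Stratonovich SDEs obey the conventional chain rule is what predictor-corrector schemes exploit. The sketch below uses the Heun (stochastic trapezoidal) scheme, which converges to the Stratonovich solution; the scheme and the multiplicative test case are illustrative and are not the discretization used in the simulator.

```python
import math
import random

def heun_stratonovich(a, b, x0, t_end, n, seed=0):
    """Integrate dx = a(x) dt + b(x) dW in the Stratonovich sense using the
    Heun (stochastic trapezoidal) predictor-corrector scheme."""
    rng = random.Random(seed)
    dt = t_end / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))
        xp = x + a(x) * dt + b(x) * dw  # Euler predictor
        x += 0.5 * (a(x) + a(xp)) * dt + 0.5 * (b(x) + b(xp)) * dw
    return x

# Multiplicative test case: the Stratonovich SDE dx = sigma x dW has the
# classical-calculus solution x(t) = x0 exp(sigma W(t)), with no Ito drift
# correction term.
sigma = 0.1
x = heun_stratonovich(lambda v: 0.0, lambda v: sigma * v, 1.0, 1.0, 2000)
print(x)
```

The same driving increments reproduce the exact exponential solution to within the discretization error, which is the sense in which ordinary calculus carries over to the Stratonovich interpretation.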
These factors make the use of the Stratonovich interpretation very desirable when solving circuit simulator problems that involve SDEs. The second main requirement for simulating noise in circuit simulators is the availability of statistically accurate time-domain noise generators. The mathematical theory of chaos provides examples of such generators that use simple deterministic iterative rules to generate noise sequences using a small number of parameters. This frugal use of parameters is desirable from both an implementation and a usage point of view: it requires little effort for the model developer to include sources of noise in an electronic device model, and the user can quickly deploy the model without spending too long tuning the characteristics of the noise generator. Any chaotic sequence exhibits sensitive dependence on initial conditions, which ensures that two sequences starting at initial conditions that differ by an arbitrarily small amount will have different values at almost every point of the sequence. Using chaotic maps (the Logistic map for white noise and the Logarithmic map for flicker noise) ensures that the different noise sequences generated will be unique. This differs from traditional pseudo-random number generators, which do not produce unique random numbers for every sequence length. The addition of these noise sources into the simulation framework of the fREEDA™ simulator required modifications to the existing noiseless linear and nonlinear device elements. The object-oriented nature of the existing framework allows for both the addition of stochastic terms to current deterministic models and the creation of new sources of noise without having to make changes to other parts of the simulator. Making changes to the elements allows the modelling of noise in a device based on the physics of the device.
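The sensitive dependence property mentioned above is easy to demonstrate with the Logistic map in its fully chaotic regime (r = 4); the sketch below is illustrative and is not the exact generator implementation used in the simulator.

```python
def logistic_sequence(x0, n, r=4.0):
    """Iterate the Logistic map x_{k+1} = r x_k (1 - x_k) on (0, 1)."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# Sensitive dependence on initial conditions: two seeds differing by 1e-12
# produce sequences that fully decorrelate after a few dozen iterations,
# so every noise sequence generated from a perturbed seed is unique.
a = logistic_sequence(0.123456789, 100)
b = logistic_sequence(0.123456789 + 1e-12, 100)
print(max(abs(u - v) for u, v in zip(a, b)))
```

The early iterates of the two sequences are indistinguishable, but the separation grows roughly as 2^k for this map until it saturates at order one, at which point the two noise sequences are effectively independent.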
The simulator framework then considers these noise-enabled elements, sets up a stochastic system of equations and minimizes a noisy error function assuming the Stratonovich interpretation. The validity of this modelling approach is demonstrated with two circuit configurations. The first configuration is a varactor-tuned voltage-controlled oscillator, which has high signal-level excursions and nonlinear interactions between signal and noise. Simulated runs are fitted to measured phase noise results for one value of bias and the flicker coefficients for the devices in the circuit are obtained. Using these coefficients, a match is obtained between simulated runs and measured phase noise for another level of bias. The second configuration models a low-noise X-band MMIC amplifier fed with a carrier at 10 GHz and large levels of white noise. Simulated gain plots are compared with measurement to observe the effect of noise on amplification, and it is found that noise has a discernible effect on the amplification properties at high input power levels. In particular, there appear to be larger levels of gain suppression in the presence of noise. The reasons for this discrepancy warrant further research; preliminary investigations suggest that the distortive effect of noise can be more effectively captured if the amplifier model includes significant, possibly power-dependent, amounts of time delay.

8.2 Further Research

This work is a first attempt at using chaotic maps to model sources of noise in electrical circuits in a full-fledged electronic circuit simulator using SDEs and the Stratonovich interpretation. This presents a wide array of possibilities for future research. For example, such simulation techniques could be used to treat noise as a more integral part of the basic circuit during the design phase.
Models for circuit devices usually contain deterministic terms, and the noise sources added to the circuit are usually assumed to be white or linear, or are considered to have values small enough that they can be separated from the deterministic terms. With noise sources now intrinsically tied to the devices under consideration, a more physical approach to modelling is permitted. For example, when designing an oscillator circuit as in Chapter 6, it is now easier to get a truer estimate of the phase noise at any node in the circuit. This allows for an incremental approach to design based on both device operation and noise interactions. Not only does this apply to a wider range of circuits, it also provides the designer with a guideline as to which part of the circuit contributes most to the overall noise response. Another avenue exists in the development of device models that can take advantage of this noise analysis setup and reproduce results associated with highly nonlinear behavior. As was mentioned in Chapter 7, one of the important reasons that the effect of large levels of noise could not be completely captured was the lack of accurate delay mechanisms in the Curtice Cubic model of a MESFET. Being able to model these nonlinearities more effectively will allow one to extract the maximum benefit from these transient noise simulation abilities. With the use of classical calculus to solve stochastic problems on account of the Stratonovich assumption, more attention can be devoted to finding numerical algorithms based on classical arguments that work well in the stochastic environment. A drawback of time-domain numerical integration is that for certain problems the amount of time required to find a solution can become quite large.
While the extent of this problem is reduced by improvements in processor speed and newer computer architectures, it can also be alleviated with improved time-stepping algorithms that reduce the time required to find a solution. With an understanding of the memory characteristics of flicker noise, or in general of any power-law noise, there are several newer approaches that can be investigated to model these phenomena using a small number of parameters while still successfully capturing the behavior of the underlying complex noise mechanisms. Efforts such as Self-Organized Criticality and Highly-Optimized Tolerance have already been mentioned in Chapter 2. Promising research avenues for noise generation lie, among others, in the fields of cellular automata [158], wavelets [159] and neural networks [160], all of which can be thought of as complex systems [161]. It will be interesting to see whether these approaches can add value to the current state of the art or whether they will produce a paradigm shift in the understanding and modelling of real-world systems.

Bibliography

[1] G. Box, G. M. Jenkins and G. Reinsel, Time Series Analysis: Forecasting and Control, Prentice Hall, 3rd edition, 1994.

[2] C. E. Christoffersen, U. A. Mughal and M. B. Steer, “Object Oriented Microwave Circuit Simulation,” Int. Journal of RF and Microwave Computer-Aided Engineering, Vol. 10, Issue 3, p. 164, 2000.

[3] H. Nyquist, “Thermal Agitation of Electric Charge in Conductors,” Phys. Rev., Vol. 32, p. 110, 1928.

[4] D. Middleton, An Introduction to Statistical Communication Theory, IEEE Press, 1996.

[5] M. S. Gupta, “Thermal Noise in Nonlinear Resistive Devices and its Circuit Representation,” Proc. IEEE, Vol. 70, no. 8, p. 788, Aug. 1982.

[6] Y. A. Rozanov, Probability Theory: A Concise Course, Dover Publications, New York, 1969.

[7] W. Feller, An Introduction to Probability Theory and Its Applications, John Wiley and Sons, New York, Vol. 1 (1968) and Vol. 2 (1971).

[8] W.
Schottky, “Small-Shot Effect and Flicker Effect,” Phys. Rev., Vol. 28, p. 74, Jul. 1926. [9] M. Gardner, “White and brown music, fractal curves and one-over-f fluctuations,” Scientific American, Vol. 238, no. 4, p. 16, Apr. 1978. [10] M. Bulmer, “Music from Fractal Noise,” Proc. Math. Festival, Melbourne, p. 10, Jan. 2000. 138 [11] G. S. Hawkins, “Interplanetary Debris Near the Earth,” Annu. Rev. Astron. Astrophys., Vol. 2, p. 149, 1964. [12] P. Bak, C. Tang and K. Wiesenfeld, “Self-Organized Criticality: An Explanation of 1/f Noise,” Phys. Rev. Lett., Vol. 59, no. 4, p. 381, Jul. 1987. [13] A. J. Mekjian, “Model of a Fragmentation Process and Its Power-Law Behavior,” Phys. Rev. Lett., Vol. 64, no. 8, p. 2125, Apr. 1990. [14] R. F. Voss and J. Clarke, “1/f noise in music: Music from 1/f noise,” J. Acoust. Soc. Am., Vol. 63, p. 258, Jan. 1978. [15] T. Musha and M. Yamamoto, “1/f Fluctuations in Biological Systems,” Proc. 19th Intl. Conf. IEEE/EMBS, p. 2692, 1997. [16] D. H. Press, “Flicker Noises in Astronomy and Elsewhere,” Comments Astrophys., Vol. 7, no. 4, p. 103, 1978. [17] M. Planat, “1/f noise, the Measurement of Time and Number Theory,” Fluc. Noise Letters, Vol. 1, no. 1, p. R65, 2001. [18] M. V. Berry and Z. V. Lewis, “On the Weierstrass-Mandelbrot fractal function,” Proc. R. Soc. Lond., Vol. A 370, p. 459, 1980. [19] M. R. Schroeder, Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise, W H Freeman and Co. New York, 1991. [20] M. A. Caloyannides, “Microcycle Spectral Estimates of 1/f noise in semiconductors,” J. Appl. Phys., Vol. 45, no. 1, p. 307, Jan. 1974. [21] A. H. De Kuijper and T. G. M. Kleinpenning, “1/f Noise in the Micro-Hertz Range,” Noise in Physical Systems and 1/f Noise, A. D’Amico and P. Mazzetti (editors), p. 441, 1985. [22] J. J. Brophy, “Low-Frequency Variance Noise,” J. Appl. Phys., Vol. 41, no. 7, p. 2913, Jun. 1970. [23] I. Flinn, “Extent of the 1/f Noise Spectrum,” Nature, Vol. 219, p. 1356, 1968. 139 [24] R. A. Dell, M. 
Epstein and C. R. Kannewurf, “Experimental study of 1/f noise stationarity by digital techniques,” J. Appl. Phys., Vol. 44, no. 1, p. 472, Jan. 1973. [25] W. E. Purcell, “Variance Noise Spectra of 1/f Noise,” J. Appl. Phys., Vol. 43, no. 6, p. 2890, Jun. 1972. [26] M. Stoisiek and D. Wolf, “Recent investigations on the stationarity of 1/f noise,” J. Appl. Phys, Vol. 47, no. 1, p. 362, Jan. 1976. [27] J. L. Tandon and H. R. Bilger, “1/f noise as a nonstationary process: Experimental evidence and some analytical conditions,” J. Appl. Phys., Vol. 47, no. 4, p. 1697, Apr. 1976. [28] M. S. Keshner, “1/f Noise,” Proc. IEEE, Vol. 70, no. 3, p. 212, Mar. 1982. [29] J. J. Brophy, “Statistics of 1/f Noise,” Phys. Rev., Vol. 166, no. 3, p. 827. [30] R. F. Voss, “Linearity of 1/f Noise Mechanisms,” Phys. Rev. Lett., Vol. 40, no. 14, p. 913, Apr. 1978. [31] A.-M. S. Tremblay and M. Nelkin, “Equilibrium resistance fluctuations,” Phys. Rev. B, Vol. 24, no. 5, p. 2551, Sep. 1981. [32] A. R. Murch and R. H. T. Bates, “Colored Noise Generation Through Deterministic Chaos,” IEEE Trans. Cir. Sys., Vol. 37, no. 5, p. 608, May 1990. [33] R. L. Devaney, A First Course in Chaotic Dynamical Systems, Perseus Books, 1992. [34] H. G. Schuster, Deterministic Chaos, VCH Publishers, 1989. [35] M. J. Feigenbaum, “Qualitative universality for a class of nonlinear transformations,” J. Stat. Phys., Vol. 19, no. 1, p. 25, 1978. [36] T-Y. Li and J. A. Yorke, “Period Three Implies Chaos,” Amer. Math. Monthly, Vol. 82, no. 10, p. 985, Dec. 1975. [37] R. F. Voss and J. Clarke, “Flicker (1/f ) noise: Equilibrium temperature and resistance fluctuations,” Phys. Rev. B, Vol. 13, no. 2, p. 556, Jan. 1976. [38] H. G. E. Beck and W. P. Spruit, “1/f noise in the variance of Johnson noise,” J. Appl. Phys., Vol. 49, no. 6, p. 3384, Jun. 1978. 140 [39] B. K. Jones and J. D. Francis, “Direct correlation between 1/f noise and other noise sources,” J. Phys. D, Vol. 8, no. 11, p. 1172, Jul. 1975. [40] J. W. 
Eberhard and P. M. Horn, “Temperature Dependence of 1/f Noise in Silver and Copper,” Phys. Rev. Lett., Vol. 39, no. 10, p. 643, Sep. 1977. [41] P. Dutta, J. W. Eberhard and P. M. Horn, “1/f noise in metal films: The role of the substrate,” Solid State Commun., Vol. 27, no. 12, p. 1389, Sep. 1978. [42] F. N. Hooge and A. M. H. Hoppenbrouwers, “1/f noise in continuous thin gold films,” Physica, Vol. 45, p. 386, 1969. [43] F. N. Hooge, “1/f noise is no surface effect,” Phys. Lett., Vol. 29A, no. 3, p. 139, Apr. 1969. [44] D. M. Fleetwood and N. Giordano, “Resistivity dependence of 1/f noise in metal films,” Phys. Rev. B, Vol. 27, no. 2, p. 667, Jan. 1983. [45] D. M. Fleetwood and N. Giordano, “Effect of strain on the 1/f noise in metal films,” Phys. Rev. B, Vol. 28, no. 6, p. 3625, Jan. 1983. [46] F. N. Hooge and J. L. M. Gaal, “Experimental study of 1/f noise in thermo E.M.F.,” Phillips Res. Rep., Vol. 26, p. 345, 1971. [47] Th. G. M. Kleinpenning, “1/f noise in thermo emf of extrinsic and intrinsic semiconductors,” Physica, Vol. 77, p. 78, 1974. [48] Th. G. M. Kleinpenning and D. A. Bell, “Hall effect noise: Fluctuation in number or mobility?” Physica, Vol. 81B, p. 301, 1976. [49] A. Mircea, A. Roussel and A. Mittoneau, “1/f Noise: still a surface effect,” Phys. Lett., Vol. 41A, no. 4, p. 345, Oct. 1972. [50] A. van der Ziel, “Flicker noise in semiconductors: Not a true bulk effect,” Appl. Phys. Lett., Vol. 33, no. 10, p. 883, Sep. 1978. [51] P. Dutta and P. M. Horn, “Low-frequency fluctuations in solids: 1/f noise,” Rev. Mod. Phys., Vol. 53, no. 3, p. 497, Jul. 1981. 141 [52] A. van der Ziel, “Unified Presentation of 1/f Noise in Electronic Devices: Fundamental 1/f Noise Sources,” Proc. IEEE, Vol. 76, no. 3, p. 233, Mar. 1988. [53] M. B. Weissman, “1/f noise and other slow, nonexponential kinetics in condensed matter,” Rev. Mod. Phys., Vol. 60, no. 2, p. 537, Apr. 1988. [54] F. N. Hooge, “1/f Noise Sources,” IEEE Trans. Elec. Dev., Vol. 41, no. 11, p. 1926, Nov. 
1994. [55] M. J. Buckingham, Noise in Electronic Devices and Systems, Ellis Horwood series in electrical and electronic engineering, 1983. [56] P. Antognetti and G. Massobrio, Semiconductor Device Modeling with SPICE, McGraw-Hill, New York, 1988. [57] Y. Tsividis, Operation and Modeling of the MOS Transistor, McGraw Hill, 2nd ed., 1999. [58] Sh. Kogan, Electronic noise and fluctuations in solids, Cambridge Univ. Press, 1996. [59] M. Surdin, “Fluctuations de courant thermoionique et le ‘flicker effect’,” J. Phys. Radium, Vol. 10, p. 188, 1939. [60] A. van der Ziel, “On the noise spectra of semi-conductor noise and of flicker effect,” Physica, Vol. 16, no. 4, p. 359, Apr. 1950. [61] F. K. Du Pré, “A Suggestion Regarding the Spectral Density of Flicker Noise,” Phys. Rev., Vol. 78, no. 5, p. 615, Jun. 1950. [62] A. L. McWhorter, “1/f Noise and germanium surface properties,” Semiconductor Surface Physics, ed. R. H. Kingston, Univ. of Philadelphia Press, Philadelphia, 1957. [63] H. Tian and A. El Gamal, “Analysis of 1/f Noise in Switched MOSFET Circuits,” IEEE Trans. Cir. and Sys.-II: Analog and Dig. Sig. Proc., Vol. 48, no. 2, p. 151, Feb. 2001. [64] S. Mohammadi and D. Pavlidis, “A Nonfundamental Theory of Low-Frequency Noise in Semiconductor Devices,” IEEE Trans. Elec. Dev., Vol. 47, no. 11, p. 2009, Nov. 2000. 142 [65] B. Pellegrini, R. Saletti, B. Neri and P. Terreni, “1/f γ Noise Generators,” Noise in Physical Systems and 1/f Noise, A. D’Amico and P. Mazzetti (editors), p. 425, 1985. [66] G. Corsini and R. Saletti, “A 1/f γ Power Spectrum Noise Sequence Generator,” IEEE Trans. Instr. Meas., Vol. 37, no. 4, p. 615, Dec. 1988. [67] P. Gruber, “1/f -Noise Generator,” Noise in Physical Systems and 1/f Noise, A. D’Amico and P. Mazzetti (editors), p. 357, 1985. [68] “A General Mechanical Model for fα Spectral Density Random Noise with Special Reference to Flicker Noise 1/f ,” Proc. IEEE, Vol. 56, no. 3, p. 251, Mar. 1968. [69] J. 
Bernamont, “Fluctuation in the resistance of thin films,” Proc. Phys. Soc., Vol. 49, p. 138, 1937.

[70] P. Bak and M. Paczuski, “Complexity, Contingency and Criticality,” Proc. Natl. Acad. Sci., Vol. 92, p. 6689, Jul. 1995.

[71] S. Kauffman, At Home in the Universe: The Search for Laws of Self-Organization and Complexity, Oxford Univ. Press, 1996.

[72] C. Darwin, The Origin of Species, Bantam Books, 1999. (Reprint of 1st ed. 1859).

[73] B. B. Mandelbrot and J. W. Van Ness, “Fractional Brownian motions, fractional noises and applications,” Soc. Ind. Appl. Math. Rev., Vol. 10, no. 4, p. 422, 1968.

[74] P. G. Hoel, S. C. Port and C. J. Stone, Introduction to Stochastic Processes, Waveland Press, 1987.

[75] A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes, McGraw-Hill, 1994.

[76] K. B. Oldham and J. Spanier, The Fractional Calculus, Theory and Applications of Differentiation and Integration to Arbitrary Order, New York: Academic, 1974.

[77] J. A. Barnes and D. W. Allan, “A statistical model of flicker noise,” Proc. IEEE, Vol. PROC-54, p. 176, Feb. 1966.

[78] J. R. M. Hosking, “Fractional differencing,” Biometrika, Vol. 68, no. 1, p. 165, 1981.

[79] N. J. Kasdin, “Discrete Simulation of Colored Noise and Stochastic Processes and 1/f^α Power Law Noise Generation,” Proc. IEEE, Vol. 83, no. 5, p. 802, May 1995.

[80] S. B. Lowen and M. C. Teich, “Power-Law Shot Noise,” IEEE Trans. Infor. Th., Vol. 36, no. 6, p. 1302, Nov. 1990.

[81] W. B. Davenport, Jr. and W. L. Root, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill, 1958.

[82] A. van der Ziel, “Flicker noise in electronic devices,” Adv. in Elec. and Electron. Phys., Vol. 49, p. 225, 1979.

[83] A. van der Ziel, X. Zhang and A. H. Pawlikiewicz, “Location of 1/f Noise Sources in BJT’s and HBJT’s – I. Theory,” IEEE Trans. Elec. Dev., Vol. ED-33, no. 9, p. 1371, 1986.

[84] C. T. Green and B. K. Jones, “1/f Noise in bipolar transistors,” J. Phys. D: Appl.
Phys., Vol. 18, p. 77, 1985.

[85] J. M. Carlson and J. Doyle, “Highly optimized tolerance: A mechanism for power laws in designed systems,” Phys. Rev. E, Vol. 60, no. 2, p. 1412, Aug. 1999.

[86] D. B. Leeson, “A simple model of feedback oscillator noise spectrum,” Proc. IEEE, Vol. 54, p. 329, Feb. 1966.

[87] S. A. Maas, Nonlinear Microwave and RF Circuits, Artech House, 2003.

[88] F. Bonani and G. Ghione, Noise in Semiconductor Devices, Springer-Verlag, 2001.

[89] V. Rizzoli, F. Mastri and D. Masotti, “General Noise Analysis of Nonlinear Microwave Circuits by the Piecewise Harmonic-Balance Technique,” IEEE Trans. Microwave Th. Tech., Vol. 42, no. 5, p. 807, May 1994.

[90] N. B. de Carvalho and J. C. Pedro, “Non-linear Circuit Simulation of Complex Spectra in the Frequency Domain,” Int. Conf. Elec., Cir. and Sys., Vol. 1, p. 129, Sep. 1998.

[91] A. Dunlop, A. Demir, P. Feldmann, S. Kapur, D. Long, R. Melville and J. Roychowdhury, “Tools and Methodology for RF IC Design,” Proc. Assoc. Computing Mach. IEEE Int. Conf. Computer-Aided Design, p. 414, 1998.

[92] K. S. Kundert, J. K. White and A. Sangiovanni-Vincentelli, Steady-State Methods for Simulating Analog and Microwave Circuits, Kluwer Academic Press, 1990.

[93] M. Okumura, H. Tanimoto, T. Itakura and T. Sugawara, “Numerical Noise Analysis for Nonlinear Circuits with a Periodic Large Signal Excitation Including Cyclostationary Noise Sources,” IEEE Trans. Cir. Sys. - I: Fund. Theor. Appl., Vol. 40, no. 9, p. 581, Sep. 1993.

[94] R. Telichevesky, K. Kundert and J. White, “Efficient AC and noise analysis of two-tone RF circuits,” Proc. Design Automation Conf., p. 292, 1996.

[95] J. S. Roychowdhury and P. Feldmann, “A new linear-time harmonic balance algorithm for cyclostationary noise analysis in RF circuits,” Proc. Asia and South Pacific Design Automation Conf., p. 483, 1997.

[96] A. Demir, E. W. Y. Liu and A. L.
Sangiovanni-Vincentelli, “Time-Domain Non-Monte Carlo Noise Simulation for Nonlinear Dynamic Circuits with Arbitrary Excitations,” IEEE Trans. Computer-Aided Design of Integrated Cir. Sys., Vol. 15, no. 5, p. 493, May 1996. [97] A. Hajimiri and T. H. Lee, “A General Theory of Phase Noise in Electrical Oscillators,” IEEE Jnl. of Solid-State Cir., Vol. 33, no. 2, p. 179, Feb. 1998. [98] A. Demir, A. Mehrotra and J. Roychowdhury, “Phase Noise in Oscillators: A Unifying Theory and Numerical Methods for Characterization,” IEEE Trans. Cir. and Sys. - I: Fund. Theory and Appl., Vol. 47, no. 5, p. 655, May 2000. [99] R. Winkler, “Stochastic DAEs in Transient Noise Simulation,” Proc. Scientific Computing in Elec. Engg., Jun. 2002, Eindhoven, p. 408. [100] O. Schein and G. Denk, “Numerical Solution of Stochastic Differential-Algebraic Equations with Applications to Transient Noise Simulation of Microelectronic Circuits,” Jnl. Comput. Appl. Math., Vol. 100, p. 77, 1998. [101] F. X. Kaertner, “Analysis of White and f−α Noise in Oscillators,” Int. Jnl. Cir. Th. Appl., Vol. 18, p. 485, 1990. 145 [102] R. Rohrer, L. Nagel, R. G. Meyer and L. Weber, “Computationally Efficient Electronic Circuit Noise Calculation,” IEEE Jnl. of Solid-State Cir., Vol. SC-6, no. 4, p. 204, Aug. 1971. [103] R. W. Freund and P. Feldmann, “Efficient Small-Signal Circuit Analysis and Sensitivity Computations with the PVL algorithm,” Proc. Assoc. Computing Mach. IEEE Int. Conf. Computer-Aided Design, Nov. 1994. [104] P. Bolcato and R. Pujois, “A New Approach for Noise Simulation in Transient Analysis,” Proc. IEEE Intl. Symp. on Cir. and Sys., May 1992. [105] M. M. Gourary, S. G. Rusakov, S. L. Ulyanov, M. M. Zharov and B. J. Mulvaney, “A New Numerical Method for Transient Noise Analysis of Nonlinear Circuits,” Proc. Asia South Pacific Design Automation Conference, p. 165, 1999. [106] A. Suárez, S. Sancho, S. Ver Hoeye and J. 
Portilla, “Analytical Comparison Between Time- and Frequency-Domain Techniques for Phase-Noise Analysis,” IEEE Trans. Microwave Th. Tech., Vol. 50, no. 10, p. 2353, Oct. 2002.

[107] N. G. Van Kampen, “Stochastic Differential Equations,” Phys. Reports, Vol. 24, no. 3, p. 171, 1975.

[108] B. Oksendal, Stochastic Differential Equations, Springer-Verlag, 6th ed., 2003.

[109] K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, Birkhauser, 1990.

[110] S. S. Artemiev and T. A. Averina, Numerical Analysis of Systems of Ordinary and Stochastic Differential Equations, VSP, 1997.

[111] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag, 1999.

[112] K. Itô, “Stochastic Integral,” Proc. Imp. Acad. Tokyo, Vol. 20, no. 8, p. 519, 1944.

[113] K. Itô, “On a stochastic integral equation,” Proc. Imp. Acad. Tokyo, Vol. 22, no. 2, p. 32, 1946.

[114] K. Itô, “Stochastic differential equations in a differentiable manifold,” Nagoya Math. Jnl., Vol. 1, p. 35, 1950.

[115] R. L. Stratonovich, “A new representation for stochastic integrals and equations,” SIAM Jnl. Control, Vol. 4, p. 362, 1966.

[116] G. E. Uhlenbeck and L. S. Ornstein, “On the Theory of the Brownian Motion,” Phys. Rev., Vol. 36, p. 823, 1930.

[117] B. J. West, A. R. Bulsara, K. Lindenberg, V. Seshadri and K. E. Shuler, “Stochastic Processes with Non-Additive Fluctuations - I. Itô and Stratonovich Calculus and the Effects of Correlations,” Physica A, Vol. 97, p. 211, 1979.

[118] A. R. Bulsara, K. Lindenberg, V. Seshadri, K. E. Shuler and B. J. West, “Stochastic Processes with Non-Additive Fluctuations - II. Some Applications of Itô and Stratonovich Calculus,” Physica A, Vol. 97, p. 234, 1979.

[119] E. Wong and M. Zakai, “On the Convergence of Ordinary Integrals to Stochastic Integrals,” Ann. Math. Stat., Vol. 36, no. 5, p. 1560, Oct. 1965.

[120] N. G. van Kampen, “Itô Versus Stratonovich,” J. Stat. Phys., Vol. 24, no. 1, p. 175, 1981.

[121] J.
Smythe and F. Moss, “Ito Versus Stratonovich Revisited,” Phys. Lett., Vol. 97A, no. 3, p. 95, 1983.
[122] L. O. Chua, C. W. Wu, A. Huang and G.-Q. Zhong, “A Universal Circuit for Studying and Generating Chaos - Part I: Routes to Chaos,” IEEE Trans. Cir. Sys.-I: Fund. Th. Appl., Vol. 40, no. 10, p. 732, Oct. 1993.
[123] L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Phys. Rev. Lett., Vol. 64, p. 821, 1990.
[124] K. M. Cuomo, A. V. Oppenheim and S. H. Strogatz, “Synchronization of Lorenz-based Chaotic Circuits with Applications to Communications,” IEEE Trans. Cir. Sys. - II, Vol. 40, p. 634, Oct. 1993.
[125] C.-C. Chen and K. Yao, “Stochastic-Calculus-Based Numerical Evaluation and Performance Analysis of Chaotic Communication Systems,” IEEE Trans. Cir. Sys. - I, Vol. 47, no. 12, p. 1663, Dec. 2000.
[126] L. Kocarev and U. Parlitz, “General approach for chaotic synchronization with applications to communication,” Phys. Rev. Lett., Vol. 74, no. 25, p. 5028, 1995.
[127] A. Nikolaidis and I. Pitas, “Comparison of Different Chaotic Maps with Application to Image Watermarking,” IEEE Int. Sym. Cir. Sys., p. V-509, 2000.
[128] R. J. Mondragon, J. M. Pitts and D. K. Arrowsmith, “Chaotic intermittency-sawtooth map model of aggregate self-similar traffic streams,” Elec. Lett., Vol. 26, no. 2, p. 184, Jan. 2000.
[129] A. Erramilli, R. P. Singh and P. Pruthi, “Modeling Packet Traffic with Chaotic Maps,” Royal Institute of Technology, ISRN KTH/IT/R-94/18–SE, Stockholm-Kista, Sweden, Aug. 1994.
[130] A. F. Goloubentsev, V. M. Anikin and Y. A. Barulina, “Chaotic Maps Generating White Noise,” PhysCon., Vol. 2, p. 452, 2003.
[131] S. A. Talwalkar and S. M. Kay, “Realization of Correlated Chaotic Signals,” Int. Conf. Acoustics, Speech and Signal Proc., Vol. 2, p. 1360, May 1995.
[132] G. Marsaglia and W. W. Tsang, “The Ziggurat Method for Generating Random Variables,” Jnl. Stat. Soft., Vol. 5, no. 8, 2000.
[133] Y. Pomeau and P.
Manneville, “Intermittent Transition to Turbulence in Dissipative Dynamical Systems,” Commun. Math. Phys., Vol. 74, p. 189, 1980.
[134] J. E. Hirsch, B. A. Huberman and D. J. Scalapino, “Theory of Intermittency,” Phys. Rev. A, Vol. 25, no. 1, p. 519, Jan. 1982.
[135] T. Kohyama and Y. Aizawa, “Theory of the Intermittent Chaos - 1/f Spectrum and the Pareto-Zipf Law,” Prog. Theor. Phys., Vol. 71, no. 5, p. 917, May 1984.
[136] R. J. Bhansali, M. P. Holland and P. S. Kokoszka, “Chaotic Maps with Slowly Decaying Correlations and Intermittency,” To appear in Fields Inst. Comm., 2005.
[137] M. P. Holland, “Slowly mixing systems and intermittency maps,” To appear in Erg. Th. Dyn. Sys., 2005.
[138] L. S. Young, “Recurrence times and rates of mixing,” Isr. J. of Math., Vol. 110, p. 153, 1999.
[139] G. Booch, Object-Oriented Analysis and Design with Applications, Addison-Wesley Professional, 1993.
[140] https://pse.cheme.cmu.edu/ascend/ftp/pdfPapersRptsSlides/processModeling.pdf.
[141] J. Vlach and K. Singhal, Computer Methods for Circuit Analysis and Design, Van Nostrand Reinhold, 1994.
[142] H. K. Gummel and H. C. Poon, “An Integral Charge Control Model of Bipolar Transistors,” Bell Syst. Tech. J., Vol. 49, p. 827, 1970.
[143] M. B. Steer and C. E. Christoffersen, “Generalized circuit formulation for the transient simulation of circuits using wavelet, convolution and time-marching techniques,” Proc. of the 15th European Conf. on Cir. Th. and Design, p. 205, Aug. 2001.
[144] A. Victor, J. Nath, D. Ghosh, B. Boyette, J-P. Maria, M. B. Steer, A. I. Kingon and G. T. Stauf, “Noise Characteristics of an Oscillator with a Barium Strontium Titanate (BST) Varactor,” Proc. IEE Part H. Microwave Ant. and Prop., In Press, 2005.
[145] V. Güngerich, F. Zinkler, W. Anzill and P. Russer, “Noise Calculations and Experimental Results of Varactor Tunable Oscillators with Significantly Reduced Phase Noise,” IEEE Trans. Microwave Th. Tech., Vol. 43, no. 2, p. 278, Feb. 1995.
[146] J. R.
Pierce, “Physical Sources of Noise,” Proc. IRE, Vol. 44, p. 601, May 1956.
[147] S. Luniya, M. B. Steer and C. Christoffersen, “High Dynamic Range Transient Simulation of Microwave Circuits,” IEEE Radio and Wireless Conference, p. 487, Sep. 2004.
[148] W. R. Curtice and M. Ettenberg, “A Nonlinear GaAs FET Model for Use in the Design of Output Circuits for Power Amplifiers,” IEEE Trans. Microwave Th. Tech., Vol. MTT-33, no. 12, p. 1383, Dec. 1985.
[149] T. Kacprzak and A. Materka, “Compact dc Model of GaAs FET's for Large-Signal Computer Calculation,” IEEE Jnl. Solid-State Cir., Vol. SC-18, no. 2, p. 211, Apr. 1983.
[150] A. E. Parker and D. J. Skellern, “A Realistic Large-Signal MESFET Model for SPICE,” IEEE Trans. Microwave Th. Tech., Vol. 45, no. 9, p. 1563, Sep. 1997.
[151] B. L. Ooi, J. Y. Ma and M. S. Leong, “A New MESFET Nonlinear Model,” Microwave Optical Tech. Lett., Vol. 29, no. 4, p. 226, May 2001.
[152] H. Statz, P. Newman, I. W. Smith, R. A. Pucel and H. A. Haus, “GaAs FET device and circuit simulation in SPICE,” IEEE Trans. Electron Devices, Vol. ED-34, p. 160, Feb. 1987.
[153] G. L. Heiter, “Characterization of Nonlinearities in Microwave Devices and Systems,” IEEE Trans. Microwave Th. Tech., Vol. 21, p. 797, 1973.
[154] J. H. K. Vuolevi, T. Rahkonen and J. P. A. Manninen, “Measurement technique for characterizing memory effects in RF power amplifiers,” IEEE Trans. Microwave Th. Tech., Vol. 49, p. 1383, 2001.
[155] H. Ku, M. D. McKinley and J. S. Kenney, “Quantifying Memory Effects in RF Power Amplifiers,” IEEE Trans. Microwave Th. Tech., Vol. 50, no. 12, p. 2843, 2002.
[156] H. Ku and J. S. Kenney, “Behavioral Modeling of Nonlinear RF Power Amplifiers Considering Memory Effects,” IEEE Trans. Microwave Th. Tech., Vol. 51, no. 12, p. 2495, 2003.
[157] C. Moler, Numerical Computing with MATLAB, Society for Industrial and Applied Mathematics, 2004.
[158] S. Wolfram, A New Kind of Science, Wolfram Media, 2002.
[159] G. W.
Wornell, “Wavelet-Based Representations for the 1/f Family of Fractal Processes,” Proc. IEEE, Vol. 81, no. 10, p. 1428, Oct. 1993.
[160] M. Usher and M. Stemmler, “Dynamic Pattern Formation Leads to 1/f Noise in Neural Populations,” Phys. Rev. Lett., Vol. 74, p. 326, 1995.
[161] Y. Bar-Yam, Dynamics of Complex Systems (Studies in Nonlinearity), Westview Press, 2003.
[162] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics, 2001.

Appendix A

Essential Itô and Stratonovich

This appendix provides the mathematical theory to explain the difference between solutions of an SDE when interpreted in the sense of Itô and Stratonovich. The main focus is to develop an understanding of the meaning of the stochastic integral

\int_0^T g \, dB    (A.1)

where g represents a wide class of stochastic processes and B is the BM (Brownian motion) process. For all possible sample paths, the BM process has infinite variation and is non-differentiable. Therefore it is not possible to treat this integral as an ordinary integral, i.e. an integral in the Riemann sense. This means that a new procedure is required to study stochastic integrals with possibly random integrands. It is still desirable to use the Riemann sum approach to treat these stochastic integrals, or in other words, to construct a Riemann sum approximation and then pass to limits. This procedure is explained using the example

\int_0^T B \, dB.    (A.2)

Definitions

1. For the interval [0, T], a partition P of [0, T] is a finite collection of points in [0, T]:

P := \{0 = t_0 < t_1 < \ldots < t_m = T\}.    (A.3)

2. The mesh size ∆ of P is the maximum width of any subinterval:

\Delta := \max_{0 \le k \le m-1} |t_{k+1} - t_k|.    (A.4)

3. For a fixed 0 ≤ λ ≤ 1 and a given partition P, let

\tau_k := (1 - \lambda) t_k + \lambda t_{k+1}, \quad k = 0, \ldots, m-1.    (A.5)

The Riemann sum approximation of the integral in (A.2) is written as

R = R(P, \lambda) := \sum_{k=0}^{m-1} B(\tau_k) (B(t_{k+1}) - B(t_k)).    (A.6)

It remains to find out what happens to this sum when ∆ → 0.
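Before the limit is derived formally, the λ-dependence of R(P, λ) can be previewed numerically. The following sketch is illustrative C++ and is not part of the original derivation: it assumes a uniform partition, simulates the BM path with Gaussian increments, and reports the average deviation of R(P, λ) from B(T)²/2, which settles near (λ − 1/2)T.

```cpp
// Illustrative sketch (assumptions: uniform partition, simulated BM path):
// evaluate the Riemann sum R(P, lambda) along Brownian-motion paths and
// measure its deviation from the conventional-calculus answer B(T)^2/2.
#include <cassert>
#include <cmath>
#include <random>

// One sample of R(P, lambda) - B(T)^2/2 on [0, T] with m equal steps.
// A Brownian increment over an interval of width dt is N(0, dt).
double riemann_sum_deviation(double lambda, double T, int m, std::mt19937& gen)
{
    std::normal_distribution<double> incr(0.0, 1.0);
    const double dt = T / m;
    double R = 0.0;
    double B = 0.0;                       // B(t_k)
    for (int k = 0; k < m; ++k) {
        // Split the increment at tau_k = (1 - lambda) t_k + lambda t_{k+1}
        double dB1 = incr(gen) * std::sqrt(lambda * dt);         // B(tau_k) - B(t_k)
        double dB2 = incr(gen) * std::sqrt((1.0 - lambda) * dt); // B(t_{k+1}) - B(tau_k)
        R += (B + dB1) * (dB1 + dB2);     // B(tau_k) * (B(t_{k+1}) - B(t_k))
        B += dB1 + dB2;                   // advance to B(t_{k+1})
    }
    return R - 0.5 * B * B;               // deviation from B(T)^2/2
}

// Average the deviation over many paths; Lemma 2 predicts (lambda - 1/2) T.
double mean_deviation(double lambda, double T, int m, int paths)
{
    std::mt19937 gen(12345);              // fixed seed for repeatability
    double sum = 0.0;
    for (int p = 0; p < paths; ++p)
        sum += riemann_sum_deviation(lambda, T, m, gen);
    return sum / paths;
}
```

With T = 1, the λ = 0 (Itô) choice gives a mean deviation near −1/2, while the mid-point λ = 1/2 (Stratonovich) choice gives a mean deviation near zero, matching the limits derived below.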
Lemma 1 (Quadratic Variation) Let [a, b] be an interval in [0, ∞). Suppose

P^n := \{a = t_0^n < t_1^n < \ldots < t_{m_n}^n = b\}

are partitions of [a, b] with ∆^n → 0 as n → ∞. Then

\sum_{k=0}^{m_n - 1} (B(t_{k+1}^n) - B(t_k^n))^2 \to b - a

in L²(·) as n → ∞.

Proof: Let Q^n := \sum_{k=0}^{m_n-1} (B(t_{k+1}^n) - B(t_k^n))^2. Then

Q^n - (b - a) = \sum_{k=0}^{m_n-1} [(B(t_{k+1}^n) - B(t_k^n))^2 - (t_{k+1}^n - t_k^n)].

Squaring both sides and taking expected values,

E[(Q^n - (b - a))^2] = \sum_{j=0}^{m_n-1} \sum_{k=0}^{m_n-1} E([(B(t_{k+1}^n) - B(t_k^n))^2 - (t_{k+1}^n - t_k^n)][(B(t_{j+1}^n) - B(t_j^n))^2 - (t_{j+1}^n - t_j^n)]).

A BM process has independent increments, so for k ≠ j the expected value of the cross terms goes to 0. For k = j,

E[(Q^n - (b - a))^2] = \sum_{k=0}^{m_n-1} E[(Y_k^2 - 1)^2 (t_{k+1}^n - t_k^n)^2]

where

Y_k = Y_k^n := \frac{B(t_{k+1}^n) - B(t_k^n)}{\sqrt{t_{k+1}^n - t_k^n}}

is a Normal random variable with zero mean and a variance of unity. For some constant C,

E[(Q^n - (b - a))^2] \le C \sum_{k=0}^{m_n-1} (t_{k+1}^n - t_k^n)^2 \le C \Delta^n (b - a) \to 0

as n → ∞. □

This shows that a stochastic integral can be approximated by a Riemann sum only in the mean-square sense. Given this result, what remains is to determine the limit of these Riemann sums. This is the subject of the next Lemma.

Lemma 2 For partitions P^n of [0, T] and 0 ≤ λ ≤ 1 fixed, define

R^n := \sum_{k=0}^{m_n-1} B(\tau_k^n)(B(t_{k+1}^n) - B(t_k^n)).

Then, in the limit in L²(·),

\lim_{n \to \infty} R^n = \frac{B(T)^2}{2} + \left(\lambda - \frac{1}{2}\right) T.

That is,

E\left[\left(R^n - \frac{B(T)^2}{2} - \left(\lambda - \frac{1}{2}\right) T\right)^2\right] \to 0.

Proof: Starting with

R^n = \frac{B^2(T)}{2} - \underbrace{\frac{1}{2} \sum_{k=0}^{m_n-1} (B(t_{k+1}^n) - B(t_k^n))^2}_{:=X} + \underbrace{\sum_{k=0}^{m_n-1} (B(\tau_k^n) - B(t_k^n))^2}_{:=Y} + \underbrace{\sum_{k=0}^{m_n-1} (B(t_{k+1}^n) - B(\tau_k^n))(B(\tau_k^n) - B(t_k^n))}_{:=Z},

we use the Lemma of Quadratic Variation to obtain X → T/2 and Y → λT as n → ∞. For the term Z, let J = \sum_{k=0}^{m_n-1} (B(t_{k+1}^n) - B(\tau_k^n))(B(\tau_k^n) - B(t_k^n)). Then

E[J^2] = \sum_{k=0}^{m_n-1} E[(B(t_{k+1}^n) - B(\tau_k^n))^2] \, E[(B(\tau_k^n) - B(t_k^n))^2]
       = \sum_{k=0}^{m_n-1} (1 - \lambda)(t_{k+1}^n - t_k^n) \, \lambda (t_{k+1}^n - t_k^n)
       \le \lambda (1 - \lambda) \Delta^n T \to 0

and hence Z → 0 as n → ∞.
The property of independent increments of BM has been used. Combining the expressions for X, Y and Z establishes the Lemma. □

The importance of this Lemma lies in the fact that it proves that the final result of evaluating a stochastic integral depends on λ. Itô used λ = 0, which corresponds to the start of each interval, whereas Stratonovich used λ = 1/2, which corresponds to the mid-point of each interval. Itô's definition gives the solution

\int_0^T B \, dB = \frac{B^2(T)}{2} - \frac{T}{2}    (A.7)

which is not what one would expect from conventional calculus. Using Stratonovich's definition gives the solution

\int_0^T B \, dB = \frac{B^2(T)}{2}    (A.8)

which is the expected result if one were to consider the solution according to conventional calculus. In the general case, it is therefore apparent that using different definitions will result in different solutions to a stochastic integral, and the Stratonovich form is the only form that will always provide a result consistent with conventional calculus.

Appendix B

Implementing Infinite R-C Transmission Line Models

A popular way of synthesizing 1/f noise with lumped circuit elements was considered in Section 2.6.2, [28], and is a circuit implementation of the model originally proposed in [69]. It consists of a cascade of a number of R-C sections, each having a different time constant, driven by a white noise source. The idea behind this model is to include a large number of time constants of different orders to effectively capture the long-memory behavior associated with a flicker noise process. Attempts were made in [65] and [67], among others, to implement digital versions of the infinite R-C transmission line by taking a finite number of sections, where each section contributes a pole to the rational transfer function of the line. The number of sections to consider depends on the maximum desired error from an ideal 1/f characteristic and the number of decades over which the 1/f sequence is to be generated.
Difficulties with implementation in a circuit simulator have been mentioned in [98] and [105], and the focus of this appendix is to quantify those difficulties by implementing these R-C sections in a circuit simulator. Based on the approach in [67], effectively modelling a 1/f response over two decades with a relative error of 5% requires 14 R-C sections. For the same relative error, modelling the response over two and a half decades requires 100 sections. For the remainder of this appendix, the number of sections is taken as N = 20. Although implementation of the model in [28] requires fewer sections, the model in [67] has the advantage of not requiring modification of the values of the existing R and C components when more sections are added. According to [67], the locations of the poles corresponding to each section are given by

p_n = -\frac{(2n-1)^2 \pi^2}{4\tau}    (B.1)

where τ is the largest time constant of the sections and n varies from 1 to N. Three cases are considered here, each with different values of R and C, and the synthesized sections are inserted into the varactor-tuned VCO circuit of Chapter 6. Case 1 fixes the value of each capacitor at 1 nF, which is comparable to most of the capacitors in the VCO circuit. Based on this capacitor value and the pole locations, the resistor values are calculated and found to vary between approximately 400 MΩ and 250 kΩ. Inserting these sections at the input of the VCO circuit causes a problem with the factoring of the admittance matrix. This occurs when the matrix is ill-conditioned, which can result when a large range of values is present in the matrix entries. Ill-conditioning is a serious problem since it makes it impossible to factor the matrix, which is an essential requirement for solving any linear algebra problem. Ill-conditioning is quantified by the condition number of a matrix, which should generally be small, i.e. in the range of 1–10, [162].
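The Case 1 element values quoted above follow directly from (B.1). The sketch below is illustrative C++, where the largest time constant τ = 1 s is an assumption chosen to be consistent with the quoted 400 MΩ figure: it computes the section time constants τ_n = 1/|p_n| and the resistors R_n = τ_n/C for fixed C = 1 nF, and shows that the resistor values span more than three decades, which is the spread that ill-conditions the admittance matrix.

```cpp
// Illustration only: reproduce the Case 1 element spread from the pole
// locations in (B.1), p_n = -(2n-1)^2 pi^2 / (4 tau). The largest time
// constant tau = 1 s is an assumed value consistent with the ~400 MOhm
// figure quoted in the text; it is not taken from the original.
#include <cmath>
#include <vector>

std::vector<double> case1_resistors(int N, double tau, double C)
{
    const double pi = 3.14159265358979323846;
    std::vector<double> R(N);
    for (int n = 1; n <= N; ++n) {
        // |p_n| from (B.1); the section time constant is tau_n = 1/|p_n|
        double pole = (2.0 * n - 1.0) * (2.0 * n - 1.0) * pi * pi / (4.0 * tau);
        double tau_n = 1.0 / pole;
        R[n - 1] = tau_n / C;             // R_n = tau_n / C for fixed C
    }
    return R;
}
```

With N = 20, τ = 1 s and C = 1 nF this yields R_1 ≈ 405 MΩ and R_20 ≈ 267 kΩ, a ratio of about 1500, matching the range quoted in the text.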
After inserting this 20-section line into the VCO circuit, the admittance matrix cannot be factored by the numerical algebra library used in fREEDA™ and the condition number is computed to be roughly 10^40. It is important to note that this number is approximate; different routines for factoring the admittance matrix will yield slightly different values, but such large numbers come about due to the disparity between the smallest and largest components in the circuit. Case 2 fixes the value of the resistors at 100 Ω, which is comparable to the values of the other resistors in the VCO circuit, and based on the pole locations computes the capacitor values to vary between a maximum of roughly 4 mF and a minimum of roughly 2 µF. Inserting this into the VCO circuit again produces an ill-conditioned system with a condition number of approximately 10^40. While Case 1 and Case 2 are extreme cases, a middle approach to selecting the values of the R and C components is possible. Case 3 sets the upper limit for the R element to 1 kΩ and the C element to 10 nF, corresponding to the time constant of 1 second. For successively smaller time constants, the R and C values keep reducing. The initial values of R and C are chosen so that most of the successive R-C values will be comparable to the R and C values in the VCO circuit, in the hope of obtaining a smaller condition number. The minimum value obtained for the resistors is roughly R = 25 Ω while that for the capacitors is roughly C = 0.2 nF. Inserting these values into the VCO circuit produces a condition number of approximately 10^39, which once again emphasizes the ill-conditioned nature of the admittance matrix. Under some circumstances it might be possible to find an optimum set of values for the elements of the sections, but this is not straightforward and is dependent on the circuit under consideration.
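For comparison, a chaotic-map flicker-noise generator needs only a simple deterministic iterative rule. The sketch below is a minimal intermittency map of the Pomeau-Manneville type [133]; it is illustrative only and is not the exact map implemented by the fChaos class in Appendix C, whose normalization and parameterization may differ. Long laminar episodes near the origin give the generated series the long memory associated with 1/f behaviour, and the exponent s tunes the low-frequency characteristics.

```cpp
// Minimal sketch (illustrative, not the exact fChaos implementation) of a
// Pomeau-Manneville-type intermittency map: x_{n+1} = x_n + x_n^(1+s) (mod 1).
// Iterates slow down near x = 0 (laminar phases), producing long-memory,
// flicker-like time series from a purely deterministic rule.
#include <cmath>
#include <vector>

std::vector<double> intermittency_series(double s, double x0, int npts)
{
    std::vector<double> x(npts);
    double v = x0;
    for (int n = 0; n < npts; ++n) {
        x[n] = v;
        v = v + std::pow(v, 1.0 + s);     // deterministic update
        if (v >= 1.0) v -= 1.0;           // reinjection (mod 1); v < 2 always
    }
    return x;
}
```

No matrix entries are added to the circuit formulation, so this route avoids the ill-conditioning entirely: the generator only supplies a precomputed sequence of samples to the noise sources.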
In the case of chaotic noise generators, setting up the flicker noise process is considerably simpler, does not produce ill-conditioning, and is independent of the circuit under consideration. As a final note, this appendix has considered the effect of adding a single R-C section line to an existing circuit; a circuit with several sources of flicker noise requires the insertion of several such lines. This appendix does not consider the effect of the larger number of unknowns in the circuit that results from these additional R-C sections, but it is reasonable to assume that the effect will be detrimental as the size of the problem keeps expanding.

Appendix C

Source Code

C.1 Noise-enabled npn-BJT

//----------
// BjtnpnN.h
//----------
// Spice BJT model with charge conservation model
// with transient noise sources
//
//            0 NCollector
//            |
//            |
//          | /
//          |/
// NBase 0----|
//            |
//            |
//            |
//            |
//            0 NEmitter
//
// Author: Nikhil Kriplani
//

#ifndef BjtnpnN2_h
#define BjtnpnN2_h 1

#include "../analysis/TimeDomainSV.h"
#include "../fRandn.h"
#include "../fChaos.h"
#include "../fLogis.h"

class BjtnpnN2 : public AdolcElement
{
public:
  BjtnpnN2(const string& iname);
  ~BjtnpnN2();

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // Do some local initialization
  virtual void init() throw(string&);

private:
  // Generic state variable evaluation routine
  virtual void eval(adoublev& x, adoublev& vp, adoublev& ip);

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  // Parameter variables
  double is, bf, nf, vaf, ikf, ise, ne, br, nr, var, ikr, isc;
  double nc, re, rb, rbm, irb, rc, eg, cje, vje, mje, cjc, vjc, mjc;
  double xcjc, fc, tf, xtf, vtf, itf, tr, xtb, xti;
  double tre1, tre2, trb1, trb2;
  double trm1, trm2, trc1, trc2, tnom, t, cjs, mjs;
  double vjs, area, ns, iss;
  double ctime;
  double tstep, tstop, kth, ksh, beta, kf, alpha;

  fChaos * fch;
  fLogis * flg;

  // temporary variables to store random numbers
  double tmpThermal1, tmpThermal2, tmpThermal3;
  double tmpThermal4, tmpThermal5, tmpThermal6;
  double tmpShot1, tmpShot2, tmpShot3;
  double tmpFlicker;

  // Parameter information
  static ParmInfo pinfo[];
};

#endif

//------------
// BjtnpnN2.cc
//------------

#include "../network/ElementManager.h"
#include "../network/AdolcElement.h"
#include "BjtnpnN2.h"

const unsigned BjtnpnN2::n_par = 57;

ItemInfo BjtnpnN2::einfo =
{
  "bjtnpnn2",
  "Gummel-Poon model with noise",
  "Original model by Senthil Velu, modified by Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/Bjtnpn2.h.html",
  "2005_07_31"
};

// Parameter Information
ParmInfo BjtnpnN2::pinfo[] =
{
  {"is", "Transport saturation current (A)", TR_DOUBLE, false},
  {"bf", "Ideal maximum forward beta", TR_DOUBLE, false},
  {"nf", "Forward current emission coefficient", TR_DOUBLE, false},
  {"vaf", "Forward Early voltage (V)", TR_DOUBLE, false},
  {"ikf", "Forward-beta high current roll-off knee current (A)", TR_DOUBLE, false},
  {"ise", "Base-emitter leakage saturation current (A)", TR_DOUBLE, false},
  {"ne", "Base-emitter leakage emission coefficient", TR_DOUBLE, false},
  {"br", "Ideal maximum reverse beta", TR_DOUBLE, false},
  {"nr", "Reverse current emission coefficient", TR_DOUBLE, false},
  {"var", "Reverse Early voltage (V)", TR_DOUBLE, false},
  {"ikr", "Corner for reverse-beta high current roll off (A)", TR_DOUBLE, false},
  {"isc", "Base-collector leakage saturation current (A)", TR_DOUBLE, false},
  {"nc", "Base-collector leakage emission coefficient", TR_DOUBLE, false},
  {"re", "Emitter ohmic resistance (W)", TR_DOUBLE, false},
  {"rb", "Zero bias base resistance (W)", TR_DOUBLE, false},
  {"rbm", "Minimum base resistance (W)", TR_DOUBLE, false},
  {"irb", "Current at which rb falls to half of rbm (A)", TR_DOUBLE, false},
  {"rc", "Collector ohmic resistance (W)", TR_DOUBLE, false},
  {"eg", "Bandgap voltage (eV)", TR_DOUBLE, false},
  {"cje", "Base emitter zero bias p-n capacitance (F)", TR_DOUBLE, false},
  {"vje", "Base emitter built in potential (V)", TR_DOUBLE, false},
{"mje","Base emitter p-n grading factor", TR_DOUBLE, false}, {"cjc","Base collector zero bias p-n capacitance (F)", TR_DOUBLE, false}, {"vjc","Base collector built in potential (V)", TR_DOUBLE, false}, {"mjc","Base collector p-n grading factor", TR_DOUBLE, false}, {"xcjc","Fraction of cbc connected internal to rb", TR_DOUBLE, false}, {"fc","Forward bias depletion capacitor coefficient", TR_DOUBLE, false}, {"tf","Ideal forward transit time (S)", TR_DOUBLE, false}, {"xtf","Transit time bias dependence coefficient", TR_DOUBLE, false}, {"vtf","Transit time dependency on vbc (V)", TR_DOUBLE, false}, {"itf","Transit time dependency on ic (A)", TR_DOUBLE, false}, {"tr","Ideal reverse transit time (S)", TR_DOUBLE, false}, {"xtb","Forward and reverse beta temperature coefficient", TR_DOUBLE, false}, {"xti","IS temperature effect exponent", TR_DOUBLE, false}, {"tre1","RE temperature coefficient (linear)", TR_DOUBLE, false}, {"tre2","RE temperature coefficient (quadratic)", TR_DOUBLE, false}, {"trb1","RB temperature coefficient (linear)", TR_DOUBLE, false}, {"trb2","RB temperature coefficient (quadratic)", TR_DOUBLE, false}, {"trm1","RBM temperature coefficient (linear)", TR_DOUBLE, false}, {"trm2","RBM temperature coefficient (quadratic)", TR_DOUBLE, false}, {"trc1","RC temperature coefficient (linear)", TR_DOUBLE, false}, {"trc2","RC temperature coefficient (quadratic)", TR_DOUBLE, false}, {"tnom","Nominal temperature (K)", TR_DOUBLE, false}, {"t","temperature (K)", TR_DOUBLE, false}, {"cjs", "Collector substrate capacitance", TR_DOUBLE, false}, {"mjs", "substrate junction exponential factor", TR_DOUBLE, false}, 161 {"vjs", "substrate junction built in potential", TR_DOUBLE, false}, {"area","Current multiplier", TR_DOUBLE, false}, {"ns","substrate p-n coefficient",TR_DOUBLE,false}, {"iss","Substrate saturation current",TR_DOUBLE,false}, {"tstep", "time step for transient analysis", TR_DOUBLE, true}, {"tstop", "stop time for transient analysis", TR_DOUBLE, true}, {"kth", 
"thermal noise scaling factor", TR_DOUBLE, false}, {"ksh", "shot noise scaling factor", TR_DOUBLE, false}, {"beta", "exponent for chaotic map", TR_DOUBLE, false}, {"kf", "scaling factor for chaotic noise", TR_DOUBLE, false}, {"alpha", "power for dependence of flicker noise on current", TR_DOUBLE, false} }; BjtnpnN2::BjtnpnN2(const string& iname) : AdolcElement(&einfo, pinfo, n_par, iname) { // Set default parameter values paramvalue[0] = &(is = 1e-16); paramvalue[1] = &(bf = 100.); paramvalue[2] = &(nf = one); paramvalue[3] = &(vaf = zero); paramvalue[4] = &(ikf = zero); paramvalue[5] = &(ise = zero); paramvalue[6] = &(ne = 1.5); paramvalue[7] = &(br = one); paramvalue[8] = &(nr = one); paramvalue[9] = &(var = zero); paramvalue[10] = &(ikr = zero); paramvalue[11] = &(isc = zero); paramvalue[12] = &(nc = 2.); paramvalue[13] = &(re = zero); paramvalue[14] = &(rb = zero); paramvalue[15] = &(rbm = zero); paramvalue[16] = &(irb = zero); paramvalue[17] = &(rc = zero); paramvalue[18] = &(eg = 1.11); paramvalue[19] = &(cje = zero); paramvalue[20] = &(vje = 0.75); paramvalue[21] = &(mje = 0.33); paramvalue[22] = &(cjc = zero); paramvalue[23] = &(vjc = 0.75); paramvalue[24] = &(mjc = 0.33); paramvalue[25] = &(xcjc = one); 162 paramvalue[26] = paramvalue[27] = paramvalue[28] = paramvalue[29] = paramvalue[30] = paramvalue[31] = paramvalue[32] = paramvalue[33] = paramvalue[34] = paramvalue[35] = paramvalue[36] = paramvalue[37] = paramvalue[38] = paramvalue[39] = paramvalue[40] = paramvalue[41] = paramvalue[42] = paramvalue[43] = paramvalue[44] = paramvalue[45] = paramvalue[46] = paramvalue[47] = paramvalue[48] = paramvalue[49] = paramvalue[50] = paramvalue[51] = paramvalue[52] = paramvalue[53] = paramvalue[54] = paramvalue[55] = paramvalue[56] = setNumTerms(4); &(fc = 0.5); &(tf = zero); &(xtf = zero); &(vtf = zero); &(itf = zero); &(tr = zero); &(xtb = zero); &(xti = 3.); &(tre1 = zero); &(tre2 = zero); &(trb1 = zero); &(trb2 = zero); &(trm1 = zero); &(trm2 = zero); &(trc1 = 
zero); &(trc2 = zero); &(tnom = 300.); &(t = 300.); &(cjs = zero); &(mjs = zero); &(vjs = 0.75); &(area = one); &(ns = one); &(iss = zero); &(tstep = 1.0e-9); &(tstop = 1.0e-6); &(kth = 1.0); &(ksh = 1.0); &(beta = 0.005); &(kf = 1.0); &(alpha = 2.); //Set Flags setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN); setNumberOfStates(3); // initialize the random no. generators fch = new fChaos; flg = new fLogis; } void BjtnpnN2::init() throw(string&) { 163 //create tape IntVector var2(3); var2[0] = 0; var2[1] = 1; var2[2] = 2; IntVector novar; DoubleVector nodelay; createTape(var2, var2, var2, novar, nodelay); // initialize the random number generators int points = (int)(tstop/tstep) + 1; fch->setSize(points); fch->setBeta(beta); fch->generateMap(); flg->setSize(points); flg->generateMap(); // variable to hold the current time ctime = getCurrentTime(); } BjtnpnN2::~BjtnpnN2() { delete flg; delete fch; } //evaluate function void BjtnpnN2::eval(adoublev& x, adoublev& vp, adoublev& ip) { //x[vbe,vbc,vcjs,dvbe,dvbc,dvcjs,d2vbe,d2vbc,d2vcjs] //x[0] : Vbe //x[1] : Vbc //x[2] : Vcjs //x[3] : dvbe/dt //x[4] : dvbc/dt //x[5] : dvcjs/dt //x[6] : d2vbe/dt //x[7] : d2vbc/dt //x[8] : d2vcjs/dt //vp[0] : Vcjs ip[0] : Ic //vp[1] : Vbjs ip[1] : Ib 164 //vp[2] : Vejs ip[2] : Ie adouble Ibe, Ibc, Ice, Ibf, Ile, Ibr; adouble Ilc, kqb, kqbtemp; adouble cbej, cbet, cbcj, cbct, ibe, ibc; adouble vth = (kBoltzman * tnom) / eCharge; if (!isSet(&rbm)) rbm = rb; //dc current equations Ibf = is * (exp(x[0]/nf/vth) - one); Ile = ise * (exp(x[0]/ne/vth) - one); Ibr = is * (exp(x[1]/nr/vth) - one); Ilc = isc * (exp(x[1]/nc/vth) - one); Ibe = Ibf / bf + Ile; Ibc = Ibr / br + Ilc; adouble Ibf1 = (Ibe - Ile) * bf; adouble Ibr1 = (Ibc - Ilc) * br; adouble kqbtem = zero; if (ikf) kqbtem = 4.*(Ibf1/ikf); if (ikr) kqbtem += 4.*(Ibr1/ikr); kqbtemp = sqrt(one + kqbtem); adouble tempvaf = zero; if (vaf) tempvaf = x[1] / vaf; adouble tempvar = zero; if (var) tempvar = x[0] / var; kqb = 0.5 * (one / (one - 
tempvaf - tempvar)) * (one + kqbtemp); Ice = (Ibf1 - Ibr1) / kqb; // Charge base-collector condassign(cbcj, x[1]-fc*vjc, cjc*pow(one-fc, -one-mjc) * (one-fc*(one + mjc) + mjc*x[1]/vjc), cjc*pow(one-x[1]/vjc, -mjc)); cbct = tr * is * exp(x[1]/nr/vth)/(nr*vth); ibc = area * (cbct + xcjc*cbcj) * x[4]; //Current Base-Emitter 165 condassign(cbej,x[0]-fc*vje, cjc*pow(one-fc,-one-mjc)*(one-fc*(one+mje)+mje*x[0]/vje), cje*pow(one-x[0]/vje,-mje)); adouble tZ = one; if (vtf) tZ = exp(.69*x[1]/vtf); cbet=is*exp(x[0]/nf/vth)*tf*(one+xtf*tZ)/(nf*vth); ibe = area * (cbet + cbej) * x[3]; adouble Tibe,Tibc; Tibe = Ibe + ibe; Tibc = Ibc + ibc; adouble rb1; adouble vbx,cbx,ibx; rb1 = (rbm + (rb - rbm) / kqb) / area; vbx = x[1] + (Tibe + Tibc) * rb; condassign(cbx,vbx-fc*vjc, (one-xcjc)*cjc*pow(one-fc,-one-mjc)*(one-fc*(one+mjc)+mjc*vbx/vjc), (one-xcjc)*cjc*pow(one-vbx/vjc,-mjc)); adouble dibc,dibe,dIbc,dIbe; dibc = area*(cbct+xcjc*cbcj) * x[7]; dibe = area*(cbet+cbej) * x[6]; dIbe = is/bf * exp(x[0]/nf/vth) * (x[3]/nf/vth) + ise * exp(x[0]/ne/vth) * (x[3]/ne/vth); dIbc = is/br * exp(x[1]/nr/vth) * (x[4]/nr/vth) + isc * exp(x[1]/nc/vth) * (x[4]/nc/vth); ibx = cbx * (x[3] + rb*(dibc+dibe+dIbc+dIbe)); adouble Ijs; adouble cjjs, ijs; Ijs = area * iss * (exp(x[2]/ns/vth) - one); condassign(cjjs, x[2] - zero, cjs*(one+mjs*x[2]/vjs), cjs*pow(one-x[2]/vjs,-mjs)); ijs = Ijs + area*(cjjs * x[5]); 166 adouble iShot1, iShot2, iShot3; adouble iFlicker; adouble iThermal1, iThermal2, iThermal3; adouble vThermal1, vThermal2, vThermal3; double TEMP = 300.0; ip[0] = Ice - Tibc - ibx - ijs; // collector current ip[1] = ibx + Tibc + Tibe; // base current ip[2] = -(Ice + Tibe - ijs); // emitter current vp[0] = x[2] + rc*ip[0]; // Collector Substrate Voltage vp[1] = x[2] + x[1] + ip[1]*rb; // Base Substrate Voltage vp[2] = x[1] - x[0] + x[2] + ip[2]*re; // Emitter Substrate Voltage // If we are the next time step, then get new // random values if (getCurrentTime() > ctime) { ctime = getCurrentTime(); int 
mav = (int)(ctime / tstep); // calculate the shot noise component and the // flicker noise component //double xiShot1 = frn->RNOR(0,1); double xiShot1 = flg->getMapValue(mav); double xiShot2 = flg->getMapValue(mav); double xiShot3 = flg->getMapValue(mav); tmpShot1 = xiShot1; tmpShot2 = xiShot2; tmpShot3 = xiShot3; // shot noise component Ice iShot1 = ksh * sqrt(eCharge * fabs(Ice)) * xiShot1; // shot noise component Ibe iShot2 = ksh * sqrt(eCharge * fabs(Ibe)) * xiShot2; // shot noise component ibc iShot3 = ksh * sqrt(eCharge * fabs(Ibc)) * xiShot3; double xiFlicker = fch->getMapValue(mav); tmpFlicker = xiFlicker; iFlicker = kf * sqrt(pow(fabs(Ile),alpha)) * xiFlicker; // rc, rb, re contributes thermal noise components //double xiThermal1 = frn->RNOR(0,1); double xiThermal1 = flg->getMapValue(mav); 167 double double double double double xiThermal2 xiThermal3 xiThermal4 xiThermal5 xiThermal6 = = = = = flg->getMapValue(mav); flg->getMapValue(mav); flg->getMapValue(mav); flg->getMapValue(mav); flg->getMapValue(mav); tmpThermal1 = xiThermal1; tmpThermal2 = xiThermal2; tmpThermal3 = xiThermal3; tmpThermal4 = xiThermal4; tmpThermal5 = xiThermal5; tmpThermal6 = xiThermal6; iThermal1 = kth * sqrt(2.0 * kBoltzman * TEMP / rc) * xiThermal1; iThermal2 = kth * sqrt(2.0 * kBoltzman * TEMP / rb1) * xiThermal2; iThermal3 = kth * sqrt(2.0 * kBoltzman * TEMP / re) * xiThermal3; vThermal1 = kth * sqrt(2.0 * kBoltzman * TEMP * rc) * xiThermal4; vThermal2 = kth * sqrt(2.0 * kBoltzman * TEMP * rb1) * xiThermal5; vThermal3 = kth * sqrt(2.0 * kBoltzman * TEMP * re) * xiThermal6; } else if ((getCurrentTime() == ctime) && (ctime > 0.0)) // use the old random values { iShot1 = ksh * sqrt(eCharge * fabs(Ice)) * tmpShot1; iShot2 = ksh * sqrt(eCharge * fabs(Ibe)) * tmpShot2; iShot3 = ksh * sqrt(eCharge * fabs(Ibc)) * tmpShot3; iFlicker = kf * sqrt(pow(fabs(Ile),alpha)) * tmpFlicker; iThermal1 iThermal2 iThermal3 vThermal1 vThermal2 vThermal3 = = = = = = kth kth kth kth kth kth * * * * * * 
sqrt(2.0 sqrt(2.0 sqrt(2.0 sqrt(2.0 sqrt(2.0 sqrt(2.0 * * * * * * kBoltzman kBoltzman kBoltzman kBoltzman kBoltzman kBoltzman * * * * * * TEMP TEMP TEMP TEMP TEMP TEMP / / / * * * rc) * tmpThermal1; rb1) * tmpThermal2; re) * tmpThermal3; rc) * tmpThermal4; rb1) * tmpThermal5; re) * tmpThermal6; } // thermal current contributions adouble dr = rc*(rb1 + re) + rb1*re; adouble inc = (re*vThermal1 - (rc + re)*vThermal2 - rc*vThermal3)/dr; adouble inb = (re*vThermal2 - (rb1 + re)*vThermal1 - rb1*vThermal3)/dr; // Final collector current ip[0] += iShot1 + inc - iShot3; // Final base current 168 ip[1] += iShot2 + iShot3 + inb + iFlicker; // Final emitter current ip[2] += -iFlicker - iShot1 - iShot2; } C.2 Noise-enabled p-n Junction Diode //-----// DN2.h //-----// Spice Diode model with transient noise elements // // // |\ | // 0 o------| >|------o 1 // |/ | // anode cathode // // // Author: // Carlos E. Christoffersen // Mete Ozkar // Noise terms added by Nikhil Kriplani // #ifndef _DN2_h #define _DN2_h 1 #include #include #include #include "../analysis/TimeDomainSV.h" "../fRandn.h" "../fChaos.h" "../fLogis.h" class DN2 : public AdolcElement { public: DN2(const string& iname); ~DN2(); static const char* getNetlistName() { 169 return einfo.name; } // Do some local initialization virtual void init() throw(string&); private: virtual void eval(adoublev& x, adoublev& vp, adoublev& ip); // Some constants double v1; double ctime; // Element information static ItemInfo einfo; // Number of parameters of this element static const unsigned n_par; // Parameter variables double is, n, ibv, bv, fc, cj0, vj, m, tt, area, rs; bool charge; double tstep, tstop, kth, ksh, beta, kf, alpha; //fRandn * frn; fChaos * fch; fLogis * flg; // temporary variables to store random numbers double tmpThermal1, tmpShot, tmpFlicker; // Parameter information static ParmInfo pinfo[]; }; #endif //------// DN2.cc //------#include "../network/CircuitManager.h" 170 #include "../network/AdolcElement.h" #include 
"DN2.h" // Static members const unsigned DN2::n_par = 19; // Element information ItemInfo DN2::einfo = { "dn2", "Spice diode model (conserves charge)with transient noise elements", "Carlos E. Christoffersen / Nikhil Kriplani", DEFAULT_ADDRESS"elements/DN2.h.html", "2005_07_28" }; // Parameter information ParmInfo DN2::pinfo[] = { {"is", "Saturation current (A)", TR_DOUBLE, false}, {"n", "Emission coefficient", TR_DOUBLE, false}, {"ibv", "Current magnitude at the reverse breakdown voltage (A)", TR_DOUBLE, false}, {"bv", "Breakdown voltage (V)", TR_DOUBLE, false}, {"fc", "Coefficient for forward-bias depletion capacitance", TR_DOUBLE, false}, {"cj0", "Zero-bias depletion capacitance (F)", TR_DOUBLE, false}, {"vj", "Built-in junction potential (V)", TR_DOUBLE, false}, {"m", "PN junction grading coefficient", TR_DOUBLE, false}, {"tt", "Transit time (s)", TR_DOUBLE, false}, {"area", "Area multiplier", TR_DOUBLE, false}, {"charge", "Use charge-conserving model", TR_BOOLEAN, false}, {"rs", "Series resistance (ohms)", TR_DOUBLE, false}, {"tstep", "time step for transient analysis", TR_DOUBLE, true}, {"tstop", "stop time for transient analysis", TR_DOUBLE, true}, {"kth", "thermal noise scaling factor", TR_DOUBLE, false}, {"ksh", "shot noise scaling factor", TR_DOUBLE, false}, {"beta", "exponent for chaotic map", TR_DOUBLE, false}, {"kf", "scaling factor for chaotic noise", TR_DOUBLE, false}, {"alpha", "power for dependence of flicker noise on current", TR_DOUBLE, false} }; DN2::DN2(const string& iname) : 171 AdolcElement(&einfo, pinfo, n_par, iname) { // Set default parameter values paramvalue[0] = &(is = 1e-14); paramvalue[1] = &(n = one); paramvalue[2] = &(ibv = 1e-10); paramvalue[3] = &(bv = zero); paramvalue[4] = &(fc = .5); paramvalue[5] = &(cj0 = zero); paramvalue[6] = &(vj = one); paramvalue[7] = &(m = .5); paramvalue[8] = &(tt = zero); paramvalue[9] = &(area = one); paramvalue[10] = &(charge = true); paramvalue[11] = &(rs = zero); paramvalue[12] = &(tstep = 1.0e-9); 
  paramvalue[13] = &(tstop = 1.0e-6);
  paramvalue[14] = &(kth = one);
  paramvalue[15] = &(ksh = one);
  paramvalue[16] = &(beta = 0.000005);
  paramvalue[17] = &(kf = one);
  paramvalue[18] = &(alpha = one);

  // Set flags
  setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN);

  // initialize the random no. generators
  fch = new fChaos;
  flg = new fLogis;
}

void DN2::init() throw(string&)
{
  if (charge)
  {
    // Add one terminal
    Circuit* cir = getCircuit();
    unsigned tref_id = getTerminal(1)->getId();
    unsigned term_id1 = cir->addTerminal(getInstanceName() + ":extra");
    cir->connect(getId(), term_id1);
    // Connect an external 1K resistor (not really needed if using the
    // augmentation network)
    unsigned newelem_id = cir->addElement("res", getInstanceName() + ":res");
    cir->connect(newelem_id, tref_id);
    cir->connect(newelem_id, term_id1);
    Element* elem = cir->getElement(newelem_id);
    double res = 1e3;
    elem->setParam("r", &res, TR_DOUBLE);
    elem->init();
    // Set the number of terminals
    setNumTerms(3);
    // Set number of states
    setNumberOfStates(2);
    // create tape
    IntVector var(2);
    var[0] = 0;
    var[1] = 1;
    IntVector dvar(1,1);
    createTape(var, dvar);
    // initialize the random number generators
    int points = (int)(tstop/tstep) + 1;
    fch->setSize(points);
    fch->setBeta(beta);
    fch->generateMap();
    flg->setSize(points);
    flg->generateMap();
    // variable to hold the current time
    ctime = getCurrentTime();
  }
  else
  {
    // Set the number of terminals
    setNumTerms(2);
    // Set number of states
    setNumberOfStates(1);
    // create tape
    IntVector var(1,0);
    createTape(var, var);
    // initialize the random number generators
    int points = (int)(tstop/tstep);
    fch->setSize(points);
    fch->setBeta(beta);
    fch->generateMap();
    // variable to hold the current time
    ctime = getCurrentTime();
  }
}

DN2::~DN2()
{
  delete flg;
  delete fch;
}

void DN2::eval(adoublev& x, adoublev& vp, adoublev& ip)
{
  double alfa = eCharge / n / kBoltzman / 300.; // tnom = 300K
  double v1 = log(5e8 / alfa) / alfa; // normal is .5e9
  double k3 = exp(alfa * v1);
  // x[0]: x as in Rizzoli's equations
  adouble vd, id;
  // Diode voltage
  condassign(vd, v1 - x[0],
             x[0] + zero,
             v1 + log(one + alfa*(x[0]-v1))/alfa);
  // Static current
  condassign(id, v1 - x[0],
             is * (exp(alfa * x[0]) - one),
             is * k3 * (one + alfa * (x[0] - v1)) - is);
  // subtract the breakdown current
  ip[0] = id - ibv * exp(-alfa * (vd + bv));

  adouble iShot = 0.0;
  adouble iFlicker = 0.0;
  adouble iThermal = 0.0;
  adouble vThermal = 0.0;
  double TEMP = 300.0;

  // If we are the next time step, then get new
  // random values
  if (getCurrentTime() > ctime)
  {
    ctime = getCurrentTime();
    int mav = (int)(ctime / tstep);
    // calculate the shot noise component and the
    // flicker noise component
    double xiShot = flg->getMapValue(mav);
    tmpShot = xiShot;
    iShot = ksh * sqrt(eCharge * fabs(ip[0])) * xiShot;
    double xiFlicker = fch->getMapValue(mav);
    tmpFlicker = xiFlicker;
    iFlicker = kf * sqrt(pow(fabs(ip[0]),alpha)) * xiFlicker;
    // rs contributes a thermal noise component
    double xiThermal1 = flg->getMapValue(mav);
    tmpThermal1 = xiThermal1;
    iThermal = kth * sqrt(2.0 * kBoltzman * TEMP / rs) * xiThermal1;
  }
  else if ((getCurrentTime() == ctime) && (ctime > 0.0))
  // use the old random values
  {
    iShot = ksh * sqrt(eCharge * fabs(ip[0])) * tmpShot;
    iFlicker = kf * sqrt(pow(fabs(ip[0]),alpha)) * tmpFlicker;
    iThermal = kth * sqrt(2.0 * kBoltzman * TEMP / rs) * tmpThermal1;
  }
  // add the noise current contributions
  ip[0] += iThermal + iShot + iFlicker;

  if (charge)
  {
    // x[1]: q
    // x[2]: dq/dt
    // Form the additional error function
    adouble qvj;
    double km;
    if (isSet(&cj0))
    {
      condassign(qvj, fc * vj - vd,
                 vj * cj0 / (one - m) * (one - pow(one - vd / vj, one - m)),
                 cj0 * pow(one - fc, - m - one) *
                 (((one - fc * (one + m)) * vd + .5 * m * vd * vd / vj) -
                  ((one - fc * (one + m)) * vj * fc + .5 * m * vj * fc * fc)) +
                 vj * cj0 / (one - m) * (one - pow(one - fc, one - m)));
      km = vj * cj0 / (one - m) * 1e1;
    }
    else
    {
      qvj = zero;
      km = 1e-12;
    }
    if (isSet(&tt))
    {
      qvj += tt * ip[0];
      km += tt * 1e-2;
    }
    // Add capacitor current
    ip[0] += x[2] * km;
    ip[1] = - ip[0];
    vp[1] = qvj / km - x[1];
    vp[0] = vd + ip[0] * rs + vp[1];
    // scale the current according to area.
    ip[0] *= area;
    ip[1] *= area;
  }
  else
  {
    adouble dvd_dx;
    condassign(dvd_dx, v1 - x[0],
               one,
               one / (one + alfa*(x[0]-v1)));
    adouble cd;
    if (isSet(&cj0))
      condassign(cd, fc * vj - vd,
                 cj0 * pow(one - vd / vj, -m),
                 cj0 * pow(one - fc, - m - one) *
                 ((one - fc * (one + m)) + m * vd / vj));
    else
      cd = zero;
    if (isSet(&tt))
      cd += alfa * tt * ip[0];
    // Add capacitor current
    ip[0] += cd * dvd_dx * x[1];
    vp[0] = vd + ip[0] * rs;
    // scale the current according to area.
    ip[0] *= area;
  }
}

C.3 Noise-enabled Curtice-Cubic MESFET

//-----------
// MesfetCN.h
//-----------
// Curtice cubic MESFET model
// with noise elements (transient analysis only)
//
//                  Drain 2
//                  o
//                  |
//                  |
//              |---+
//              |
// Gate 1 o-----|
//              |
//              |---+
//                  |
//                  |
//                  o
//                  Source 3
//
//
// Author: Carlos Christoffersen, Nikhil Kriplani

#ifndef MesfetCN_h
#define MesfetCN_h 1

#include "../analysis/TimeDomainSV.h"
#include "../fRandn.h"
#include "../fChaos.h"

class MesfetCN : public AdolcElement
{
public:
  MesfetCN(const string& iname);
  ~MesfetCN() {}

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // Do some local initialization
  virtual void init() throw(string&);

private:
  virtual void eval(adoublev& x, adoublev& vp, adoublev& ip);

  // Some constants
  double k2, k3;
  double delta_T, tn, Vt, k1, k4, k5, k6, Vt0, Beta, Ebarr, EbarrN, Nn;
  double Is, Vbi;

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  // Parameter variables
  double a0, a1, a2, a3, beta, vds0, gama, vt0;
  double cgs0, cgd0, is, n, ib0, nr;
  double t, vbi, fcc, vbd, area;
  double tnom, avt0, bvt0, tbet, tm, tme, eg, m, xti, tj;
  double tmpFlicker, tmpThermal1;
  double ctime;
  double tstep, tstop, kth, ksh, map_beta, kf, af;
  fRandn * frn;
  fChaos * fch;

  // Parameter information
  static ParmInfo pinfo[];
};

#endif

//------------
// MesfetCN.cc
//------------
#include "../network/ElementManager.h"
#include "../network/AdolcElement.h"
#include "MesfetCN.h"

// Static members
const unsigned MesfetCN::n_par = 34;

// Element information
ItemInfo MesfetCN::einfo =
{
  "mesfetcn",
  "Intrinsic noisy MESFET using Curtice-Ettenberg cubic model",
  "Carlos E. Christoffersen, Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/MesfetCN.h.html",
  "2005_12_14"
};

// Parameter information
ParmInfo MesfetCN::pinfo[] =
{
  {"a0", "Drain saturation current for Vgs=0 (A)", TR_DOUBLE, false},
  {"a1", "Coefficient for V1 (A/V)", TR_DOUBLE, false},
  {"a2", "Coefficient for V1^2 (A/V^2)", TR_DOUBLE, false},
  {"a3", "Coefficient for V1^3 (A/V^3)", TR_DOUBLE, false},
  {"beta", "V1 dependence on Vds (1/V)", TR_DOUBLE, false},
  {"vds0", "Vds at which BETA was measured (V)", TR_DOUBLE, false},
  {"gama", "Slope of drain characteristic in the linear region (1/V)", TR_DOUBLE, false},
  {"vt0", "Voltage at which the channel current is forced to be zero for Vgs<=Vto (V)", TR_DOUBLE, false},
  {"cgs0", "Gate-source Schottky barrier capacitance for Vgs=0 (F)", TR_DOUBLE, false},
  {"cgd0", "Gate-drain Schottky barrier capacitance for Vgd=0 (F)", TR_DOUBLE, false},
  {"is", "Diode saturation current (A)", TR_DOUBLE, false},
  {"n", "Diode ideality factor", TR_DOUBLE, false},
  {"ib0", "Breakdown current parameter (A)", TR_DOUBLE, false},
  {"nr", "Breakdown ideality factor", TR_DOUBLE, false},
  {"t", "Channel transit time (s)", TR_DOUBLE, false},
  {"vbi", "Built-in potential of the Schottky junctions (V)", TR_DOUBLE, false},
  {"fcc", "Forward-bias depletion capacitance coefficient (V)", TR_DOUBLE, false},
  {"vbd", "Breakdown voltage (V)", TR_DOUBLE, false},
  {"tnom", "Reference Temperature (K)", TR_DOUBLE, false},
  {"avt0", "Pinch-off voltage (VP0 or VT0) linear temp. coefficient (1/K)", TR_DOUBLE, false},
  {"bvt0", "Pinch-off voltage (VP0 or VT0) quadratic temp. coefficient (1/K^2)", TR_DOUBLE, false},
  {"tbet", "BETA power law temperature coefficient (1/K)", TR_DOUBLE, false},
  {"tm", "Ids linear temp. coeff. (1/K)", TR_DOUBLE, false},
  {"tme", "Ids power law temp. coeff. (1/K^2)", TR_DOUBLE, false},
  {"eg", "Barrier height at 0 K (eV)", TR_DOUBLE, false},
  {"m", "Grading coefficient", TR_DOUBLE, false},
  {"xti", "Diode saturation current temperature exponent", TR_DOUBLE, false},
  {"tj", "Junction Temperature (K)", TR_DOUBLE, false},
  {"area", "Area multiplier", TR_DOUBLE, false},
  {"tstep", "time step for transient analysis", TR_DOUBLE, true},
  {"tstop", "stop time for transient analysis", TR_DOUBLE, true},
  {"map_beta", "exponent for chaotic map", TR_DOUBLE, false},
  {"kf", "scaling factor for chaotic noise", TR_DOUBLE, false},
  {"af", "flicker noise exponent", TR_DOUBLE, false}
};

MesfetCN::MesfetCN(const string& iname) :
  AdolcElement(&einfo, pinfo, n_par, iname)
{
  // Set default parameter values
  paramvalue[0] = &(a0 = .1);
  paramvalue[1] = &(a1 = .05);
  paramvalue[2] = &(a2 = zero);
  paramvalue[3] = &(a3 = zero);
  paramvalue[4] = &(beta = zero);
  paramvalue[5] = &(vds0 = 4.);
  paramvalue[6] = &(gama = 1.5);
  paramvalue[7] = &(vt0 = -1e10);
  paramvalue[8] = &(cgs0 = zero);
  paramvalue[9] = &(cgd0 = zero);
  paramvalue[10] = &(is = zero);
  paramvalue[11] = &(n = one);
  paramvalue[12] = &(ib0 = zero);
  paramvalue[13] = &(nr = 10.);
  paramvalue[14] = &(t = zero);
  paramvalue[15] = &(vbi = .8);
  paramvalue[16] = &(fcc = .5);
  paramvalue[17] = &(vbd = 1e10);
  paramvalue[18] = &(tnom = 293.);
  paramvalue[19] = &(avt0 = zero);
  paramvalue[20] = &(bvt0 = zero);
  paramvalue[21] = &(tbet = zero);
  paramvalue[22] = &(tm = zero);
  paramvalue[23] = &(tme = zero);
  paramvalue[24] = &(eg = .8);
  paramvalue[25] = &(m = .5);
  paramvalue[26] = &(xti = 2.);
  paramvalue[27] = &(tj = 293.);
  paramvalue[28] = &(area = one);
  paramvalue[29] = &(tstep = 1e-12);
  paramvalue[30] = &(tstop = 10e-9);
  paramvalue[31] = &(map_beta = 0.000005);
  paramvalue[32] = &(kf = one);
  paramvalue[33] = &(af = one);

  // Set the number of terminals
  setNumTerms(3);
  // Set flags
  setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN);
  // Set number of states
  setNumberOfStates(2);

  frn = new fRandn;
  fch = new fChaos;
}

void MesfetCN::init() throw(string&)
{
  k2 = cgs0 / sqrt(one - fcc);
  k3 = cgd0 / sqrt(one - fcc);
  delta_T = tj - tnom;
  tn = tj / tnom;
  Vt = kBoltzman * tj / eCharge;
  k5 = n * Vt;
  k6 = nr * Vt;
  Vt0 = vt0 * (one + (delta_T * avt0) + (delta_T * delta_T * bvt0));
  if (tbet)
    Beta = beta * pow(1.01, (delta_T * tbet));
  else
    Beta = beta;
  Ebarr = eg - .000702 * tj * tj / (tj + 1108.);
  EbarrN = eg - .000702 * tnom * tnom / (tnom + 1108.);
  Nn = eCharge / 38.696 / kBoltzman / tj;
  Is = is * exp((tn - one) * Ebarr / Nn / Vt);
  if (xti)
    Is *= pow(tn, xti / Nn);
  Vbi = vbi * tn - 3. * Vt * log(tn) + tn * EbarrN - Ebarr;
  k1 = fcc * Vbi;
  k4 = 2. * Vbi * (one - fcc);

  // initialize the random number generators
  int points = (int)(tstop/tstep) + 1;
  fch->setSize(points);
  fch->setBeta(map_beta);
  fch->generateMap();
  // variable to hold the current time
  ctime = getCurrentTime();

  // create tape
  IntVector var(2);
  var[0] = 0;
  var[1] = 1;
  IntVector dvar(2);
  dvar[0] = 0;
  dvar[1] = 1;
  IntVector d2var;
  IntVector tvar(1,0);
  DoubleVector delay(1, t);
  createTape(var, dvar, d2var, tvar, delay);
}

void MesfetCN::eval(adoublev& x, adoublev& vp, adoublev& ip)
{
  // x[0]: vgs
  // x[1]: vgd
  // x[2]: dvgs/dt
  // x[3]: dvgd/dt
  // x[4]: vgs(t-tau)
  // vp[0]: vgs , ip[0]: ig
  // vp[1]: vds , ip[1]: id

  // Assign known output voltages
  vp[0] = x[0];
  vp[1] = x[0] - x[1];

  adouble ids, igd, igs, itmp, vx, cgs, cgd;

  // static igs current
  igs = Is * (exp(x[0] / k5) - one) - ib0 * exp(-(x[0] + vbd) / k6);
  // Calculate cgs, including temperature effect.
  condassign(cgs, k1 - x[0],
             cgs0 / sqrt(one - x[0] / Vbi),
             k2 * (one + (x[0] - k1) / k4));
  cgs *= (one + m * (0.0004 * delta_T + one - Vbi / vbi));
  // Calculate the total current igs = static + dq_dt
  igs += cgs * x[2];

  // static igd current
  igd = Is * (exp(x[1] / k5) - one) - ib0 * exp(-(x[1] + vbd) / k6);
  // Calculate cgd, including temperature effect.
  condassign(cgd, k1 - x[1],
             cgd0 / sqrt(one - x[1] / Vbi),
             k3 * (one + (x[1] - k1) / k4));
  cgd *= (one + m * (0.0004 * delta_T + one - Vbi / vbi));
  // Calculate the total current igd = static + dq_dt
  igd += cgd * x[3];

  // Calculate ids. Include temperature effects.
  vx = x[4] * (one + Beta * (vds0 - vp[1]));
  itmp = (a0 + vx*(a1 + vx*(a2 + vx * a3))) * tanh(gama * vp[1]);
  condassign(ids, (itmp * vp[1]) * (x[0] - Vt0), itmp, zero);
  if (tme && tm)
    ids *= pow((1 + delta_T * tm), tme);

  adouble eta, iFlicker, iThermal1;
  condassign(eta, vp[1] - x[0] - Vt0,
             zero,
             1 - vp[1]/(x[0] - Vt0));

  // If we are the next time step, then get new
  // random values
  if (getCurrentTime() > ctime)
  {
    ctime = getCurrentTime();
    int mav = (int)(ctime / tstep);
    // calculate the flicker noise component
    double xiFlicker = fch->getMapValue(mav);
    tmpFlicker = xiFlicker;
    iFlicker = kf * sqrt(pow(fabs(ids),af)) * xiFlicker;
    // Thermal noise is associated with channel resistance
    double xiThermal1 = frn->RNOR(0,1);
    tmpThermal1 = xiThermal1;
    iThermal1 = sqrt(4.0/3.0 * kBoltzman * tj * Beta * (x[0] - Vt0) *
                     (1 + eta + eta*eta)/(1 + eta)) * xiThermal1;
  }
  else if ((getCurrentTime() == ctime) && (ctime > 0.0))
  // use the old random values
  {
    iFlicker = kf * sqrt(pow(fabs(ids),af)) * tmpFlicker;
    iThermal1 = sqrt(4.0/3.0 * kBoltzman * tj * Beta * (x[0] - Vt0) *
                     (1 + eta + eta*eta)/(1 + eta)) * tmpThermal1;
  }
  ids += iThermal1 + iFlicker;

  // Calculate the output currents
  ip[0] = (igd + igs) * area;
  ip[1] = (ids - igd) * area;
}

C.4 Noise-enabled Resistor

//------
// RN2.h
//------

#ifndef RN2_h
#define RN2_h 1

#include "../analysis/TimeDomainSV.h"
#include "../fLogis.h"
#include "../network/AdolcElement.h"

class RN2 : public AdolcElement
{
public:
  RN2(const string& iname);
  ~RN2();

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // Do some local initialization
  virtual void init() throw(string&);

private:
  virtual void eval(adoublev& x, adoublev& vp, adoublev& ip);

  // Some constants
  double v1;
  double ctime;

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  // Parameter variables
  double res, temp, tstep, tstop, kth;
  fLogis * flg;

  // temporary variables to store random numbers
  double tmpThermal1;

  // Parameter information
  static ParmInfo pinfo[];
};

#endif

//-------
// RN2.cc
//-------
#include "../network/ElementManager.h"
#include "../network/Element.h"
#include "../network/AdolcElement2.h"
#include "../network/CircuitManager.h"
#include "../analysis/TimeMNAM.h"
#include "../analysis/TimeDomainSV.h"
#include "RN2.h"

// Static members
const unsigned RN2::n_par = 5;

// Element information
ItemInfo RN2::einfo =
{
  "rn2",
  "RN2",
  "Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/RN2.h.html",
  "2005_09_22"
};

// Parameter information
ParmInfo RN2::pinfo[] =
{
  {"res", "Resistance value (Ohms)", TR_DOUBLE, false},
  {"temp", "Temperature (K)", TR_DOUBLE, false},
  {"tstep", "time step for transient analysis", TR_DOUBLE, true},
  {"tstop", "stop time for transient analysis", TR_DOUBLE, true},
  {"kth", "thermal noise scaling factor", TR_DOUBLE, false}
};

RN2::RN2(const string& iname) :
  AdolcElement(&einfo, pinfo, n_par, iname)
{
  // Set parameters
  paramvalue[0] = &(res);
  paramvalue[1] = &(temp = 300.0);
  paramvalue[2] = &(tstep = 1.0e-9);
  paramvalue[3] = &(tstop = 1.0e-6);
  paramvalue[4] = &(kth = 1.0);

  // Set the number of terminals
  setNumTerms(2);
  // Set number of states
  setNumberOfStates(1);
  // Set flags
  setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN);

  // initialize the random no. generator
  flg = new fLogis;
}

void RN2::init() throw(string&)
{
  // create tape
  IntVector var(1,0);
  createTape(var, var);

  int points = (int)(tstop/tstep) + 1;
  flg->setSize(points);
  flg->generateMap();
  ctime = getCurrentTime();
}

RN2::~RN2()
{
  delete flg;
}

void RN2::eval(adoublev& x, adoublev& vp, adoublev& ip)
{
  // x[0]: resistor voltage
  vp[0] = x[0];
  ip[0] = x[0]/res;

  adouble iThermal = 0.0;

  if (getCurrentTime() > ctime)
  {
    ctime = getCurrentTime();
    int mav = (int)(ctime / tstep);
    // res contributes a thermal noise component
    double xiThermal1 = flg->getMapValue(mav);
    tmpThermal1 = xiThermal1;
    iThermal = kth * sqrt(2.0 * kBoltzman * temp / res) * xiThermal1;
  }
  else if ((getCurrentTime() == ctime) && (ctime > 0.0))
  // use the old random values
  {
    iThermal = kth * sqrt(2.0 * kBoltzman * temp / res) * tmpThermal1;
  }
  // add the noise current contributions
  ip[0] += iThermal;
}

C.5 White Noise Voltage Source

//------
// Vwn.h
//------
// This is a gaussian-distributed random noise source
// (transient analysis only)
//
//  n1   +   ---   n2
//   o-----(     )-----o
//            ---
// This element behaves as a short circuit for AC/HB analysis.
#ifndef Vwn_h
#define Vwn_h 1

#include "../fRandn.h"

class Vwn : public Element
{
public:
  Vwn(const string& iname);
  ~Vwn();

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // This element adds equations to the MNAM
  virtual unsigned getExtraRC(const unsigned& eqn_number, const MNAMType& type);
  virtual void getExtraRC(unsigned& first_eqn, unsigned& n_rows) const;

  // fill MNAM
  virtual void fillMNAM(FreqMNAM* mnam);
  virtual void fillMNAM(TimeMNAM* mnam);
  virtual void fillSourceV(TimeMNAM* mnam);

  // State variable transient analysis
  virtual void svTran(TimeDomainSV *tdsv);
  virtual void deriv_svTran(TimeDomainSV *tdsv);

private:
  // row assigned to this instance by the FreqMNAM
  unsigned my_row;

  // Time for the integer number of periods before current time
  double int_per_time;

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  // Parameter variables
  double vo, td, mean, variance, kn;

  // Parameter information
  static ParmInfo pinfo[];

  // Derived variables initialized at runtime
  fRandn * frn;
};

#endif

//-------
// Vwn.cc
//-------
#include "../network/ElementManager.h"
#include "../analysis/FreqMNAM.h"
#include "../analysis/TimeMNAM.h"
#include "../analysis/TimeDomainSV.h"
#include "Vwn.h"

// Static members
const unsigned Vwn::n_par = 5;

// Element information
ItemInfo Vwn::einfo =
{
  "vwn",
  "White Noise voltage source",
  "Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/Vwn.html",
  "2005_10_19"
};

// Parameter information
ParmInfo Vwn::pinfo[] =
{
  {"vo", "Offset value (V)", TR_DOUBLE, false},
  {"td", "Delay time (s)", TR_DOUBLE, false},
  {"mean", "Mean of the white noise random variable", TR_DOUBLE, false},
  {"variance", "Variance of the white noise random variable", TR_DOUBLE, false},
  {"kn", "Scaling coefficient", TR_DOUBLE, false}
};

Vwn::Vwn(const string& iname) :
  Element(&einfo, pinfo, n_par, iname)
{
  // Set default parameter values
  paramvalue[0] = &(vo = zero);
  paramvalue[1] = &(td = zero);
  paramvalue[2] = &(mean = zero);
  paramvalue[3] = &(variance = 1.0);
  paramvalue[4] = &(kn = 1.0);

  // Set the number of terminals
  setNumTerms(2);
  // Set flags
  setFlags(LINEAR | ONE_REF | TR_TIME_DOMAIN | SOURCE);
  // Set number of states
  setNumberOfStates(1);

  my_row = 0;
  frn = new fRandn;
}

Vwn::~Vwn()
{
  delete frn;
}

unsigned Vwn::getExtraRC(const unsigned& eqn_number, const MNAMType& type)
{
  // Keep the equation number assigned to this element
  my_row = eqn_number;
  // Add one extra RC
  return 1;
}

void Vwn::getExtraRC(unsigned& first_eqn, unsigned& n_rows) const
{
  assert(my_row);
  first_eqn = my_row;
  n_rows = 1;
}

void Vwn::fillMNAM(FreqMNAM* mnam)
{
  assert(my_row);
  // Ask my terminals the row numbers
  mnam->setOnes(getTerminal(0)->getRC(), getTerminal(1)->getRC(), my_row);
  // This element behaves as a short circuit for AC/HB analysis.
  // Since the source vector is assumed to be initialized to zero,
  // we do not need to fill it if freq != frequency.
  return;
}

void Vwn::fillMNAM(TimeMNAM* mnam)
{
  assert(my_row);
  // Ask my terminals the row numbers
  mnam->setMOnes(getTerminal(0)->getRC(), getTerminal(1)->getRC(), my_row);
  return;
}

void Vwn::fillSourceV(TimeMNAM* mnam)
{
  const double& ctime = mnam->getTime();
  double e = zero;
  double xiWhite = frn->RNOR(mean,variance);
  if (ctime < td)
    e = vo;
  else
    e = vo + kn * xiWhite;
  mnam->setSource(my_row, e);
  return;
}

void Vwn::svTran(TimeDomainSV* tdsv)
{
  // Calculate voltage
  double& e = tdsv->u(0);
  if (tdsv->DC())
    e = vo;
  else
  {
    const double & ctime = tdsv->getCurrentTime();
    double xiWhite = frn->RNOR(mean, variance);
    if (ctime < td)
      e = vo;
    else
      e = vo + kn * xiWhite;
  }
  // Scale state variable for numerical stability
  tdsv->i(0) = tdsv->getX(0) * 1e-2;
}

void Vwn::deriv_svTran(TimeDomainSV* tdsv)
{
  tdsv->getJu()(0,0) = zero;
  tdsv->getJi()(0,0) = 1e-2;
}

C.6 The Parker-Skellern Model

//-----------
// MesfetPS.h
//-----------
// MESFET PS model - The Parker-Skellern model
//
//                  Drain 2
//                  o
//                  |
//                  |
//              |---+
//              |
// Gate 1 o-----|
//              |
//              |---+
//                  |
//                  |
//                  o
//                  Source 3
//
//
// Author: Nikhil M. Kriplani

#ifndef MesfetPS_h
#define MesfetPS_h 1

class MesfetPS : public AdolcElement
{
public:
  MesfetPS(const string& iname);
  ~MesfetPS() {}

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // Do some local initialization
  virtual void init() throw(string&);

private:
  virtual void eval(adoublev& x, adoublev& vp, adoublev& ip);

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  double Vt;

  // Parameter variables
  double acgam, area, beta, cgd, cgs, delta;
  double fc, hfeta, hfe1, hfe2, hfgam, hfg1, hfg2;
  double ibd, is, lfgam, lfg1, lfg2, mvst, n, p, q;
  double rs, rd, taud, taug, vbd, vbi, vst, vto, xc, xi, z;
  double tj, tnom, afac, lam;

  // Parameter information
  static ParmInfo pinfo[];
};

#endif

//------------
// MesfetPS.cc
//------------
#include "../network/ElementManager.h"
#include "../network/AdolcElement.h"
#include "MesfetPS.h"

// Static members
const unsigned MesfetPS::n_par = 37;

// Element information
ItemInfo MesfetPS::einfo =
{
  "mesfetps",
  "Intrinsic MESFET using the Parker-Skellern model",
  "Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/MesfetPS.h.html",
  "2005_12_15"
};

// Parameter information
ParmInfo MesfetPS::pinfo[] =
{
  {"acgam", "Capacitance modulation", TR_DOUBLE, false},
  {"area", "Area multiplier", TR_DOUBLE, false},
  {"beta", "Linear region transconductance scale", TR_DOUBLE, false},
  {"cgd", "Zero-bias gate-drain capacitance", TR_DOUBLE, false},
  {"cgs", "Zero-bias gate-source capacitance", TR_DOUBLE, false},
  {"delta", "Thermal reduction coefficient", TR_DOUBLE, false},
  {"fc", "Forward bias capacitance parameter", TR_DOUBLE, false},
  {"hfeta", "high-frequency vgs feedback parameter", TR_DOUBLE, false},
  {"hfe1", "HFGAM modulation by vgd", TR_DOUBLE, false},
  {"hfe2", "HFGAM modulation by vgs", TR_DOUBLE, false},
  {"hfgam", "High-frequency vgd feedback parameter", TR_DOUBLE, false},
  {"hfg1", "HFGAM modulation by vsg", TR_DOUBLE, false},
  {"hfg2", "HFGAM modulation by vdg", TR_DOUBLE, false},
  {"ibd", "Gate-junction breakdown current", TR_DOUBLE, false},
  {"is", "Gate-junction saturation current", TR_DOUBLE, false},
  {"lfgam", "Low-frequency feedback parameter", TR_DOUBLE, false},
  {"lfg1", "LFGAM modulation by vsg", TR_DOUBLE, false},
  {"lfg2", "LFGAM modulation by vdg", TR_DOUBLE, false},
  {"mvst", "Sub-threshold modulation", TR_DOUBLE, false},
  {"n", "gate-junction ideality factor", TR_DOUBLE, false},
  {"p", "linear region power law exponent", TR_DOUBLE, false},
  {"q", "saturated region power law exponent", TR_DOUBLE, false},
  {"rs", "source ohmic resistance", TR_DOUBLE, false},
  {"rd", "drain ohmic resistance", TR_DOUBLE, false},
  {"taud", "relaxation time for thermal reduction", TR_DOUBLE, false},
  {"taug", "relaxation time for gamma feedback", TR_DOUBLE, false},
  {"vbd", "gate junction breakdown voltage", TR_DOUBLE, false},
  {"vbi", "gate junction potential", TR_DOUBLE, false},
  {"vst", "sub-threshold potential", TR_DOUBLE, false},
  {"vto", "threshold voltage", TR_DOUBLE, false},
  {"xc", "Capacitance pinch-off reduction factor", TR_DOUBLE, false},
  {"xi", "Saturation knee potential factor", TR_DOUBLE, false},
  {"z", "knee transition parameter", TR_DOUBLE, false},
  {"tj", "device temperature", TR_DOUBLE, false},
  {"tnom", "nominal temperature", TR_DOUBLE, false},
  {"afac", "gate width scale factor", TR_DOUBLE, false},
  {"lam", "channel length modulation", TR_DOUBLE, false}
};

MesfetPS::MesfetPS(const string& iname) :
  AdolcElement(&einfo, pinfo, n_par, iname)
{
  // Set default parameter values
  paramvalue[0] = &(acgam = zero);
  paramvalue[1] = &(area = one);
  paramvalue[2] = &(beta = 1.0e-4);
  paramvalue[3] = &(cgd = zero);
  paramvalue[4] = &(cgs = zero);
  paramvalue[5] = &(delta = zero);
  paramvalue[6] = &(fc = 0.5);
  paramvalue[7] = &(hfeta = zero);
  paramvalue[8] = &(hfe1 = zero);
  paramvalue[9] = &(hfe2 = zero);
  paramvalue[10] = &(hfgam = zero);
  paramvalue[11] = &(hfg1 = zero);
  paramvalue[12] = &(hfg2 = zero);
  paramvalue[13] = &(ibd = zero);
  paramvalue[14] = &(is = 1.0e-14);
  paramvalue[15] = &(lfgam = zero);
  paramvalue[16] = &(lfg1 = zero);
  paramvalue[17] = &(lfg2 = zero);
  paramvalue[18] = &(mvst = zero);
  paramvalue[19] = &(n = one);
  paramvalue[20] = &(p = 2.0);
  paramvalue[21] = &(q = 2.0);
  paramvalue[22] = &(rs = zero);
  paramvalue[23] = &(rd = zero);
  paramvalue[24] = &(taud = zero);
  paramvalue[25] = &(taug = zero);
  paramvalue[26] = &(vbd = one);
  paramvalue[27] = &(vbi = one);
  paramvalue[28] = &(vst = zero);
  paramvalue[29] = &(vto = -2.0);
  paramvalue[30] = &(xc = zero);
  paramvalue[31] = &(xi = 1000);
  paramvalue[32] = &(z = 0.5);
  paramvalue[33] = &(tj = 300.0);
  paramvalue[34] = &(tnom = 300.0);
  paramvalue[35] = &(afac = one);
  paramvalue[36] = &(lam = zero);

  // Set the number of terminals
  setNumTerms(3);
  // Set flags
  setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN);
  // Set number of states
  setNumberOfStates(2);
}

void MesfetPS::init() throw(string&)
{
  // account for area scaling
  beta *= area;
  cgd *= area;
  cgs *= area;
  ibd *= area;
  is *= area;
  delta = delta/area;
  rs = rs/area;
  rd = rd/area;

  Vt = kBoltzman * tj / eCharge;

  // create tape
  IntVector var(2);
  var[0] = 0;
  var[1] = 1;
  IntVector novar;
  DoubleVector nodelay;
  createTape(var, var);
}

void MesfetPS::eval(adoublev& x, adoublev& vp, adoublev& ip)
{
  // x[0]: vgs
  // x[1]: vgd
  // x[2]: dvgs/dt
  // x[3]: dvgd/dt
  // x[4]: vgs(t-tau)
  // vp[0]: vgs , ip[0]: ig
  // vp[1]: vds , ip[1]: id

  // Assign known output voltages
  vp[0] = x[0];
  vp[1] = x[0] - x[1];

  // Parker-Skellern MESFET model
  adouble vgs_bar = x[0] - taug * x[2];
  adouble vgd_bar = x[1] - taug * x[3];
  adouble gamma_lf = lfgam - lfg1*vgs_bar + lfg2*vgd_bar;
  adouble gamma_hf = hfgam - hfg1*vgs_bar + hfg2*vgd_bar;
  adouble eta_hf = hfeta - hfe1*vgd_bar + hfe2*vgs_bar;
  adouble vgst = x[0] - vto - gamma_lf*vgd_bar
                 - gamma_hf*(x[1] - vgd_bar)
                 - eta_hf*(x[0] - vgs_bar);
  adouble Vst = vst * (1.0 + mvst * vp[1]);
  adouble vgt = Vst * log(1.0 + exp(vgst / Vst));
  adouble vsat = xi*(vbi - vto)*vgt / (xi*(vbi - vto) + vgt);
  adouble vdp = vp[1] * (p/q) * pow(vgt/(vbi - vto), p-q);
  adouble vdt = 0.5 * sqrt(pow(vdp*sqrt(1.0 + z) + vsat, 2) + z*vsat*vsat)
              - 0.5 * sqrt(pow(vdp*sqrt(1.0 + z) - vsat, 2) + z*vsat*vsat);
  adouble id = afac * beta * pow(vgt, q) * (1.0 - pow(1.0 - vdt/vgt, q));
  adouble P = id * vp[1];

  // Thermal modulation effects neglected;
  adouble ids = id / (1.0 + delta/afac * P);

  // Capacitance model
  adouble igs, igd, Cgs, Cgd, Cds, Cm, m;
  double alpha = xi*(vbi - vto)/(2.0 * (xi + 1.0));
  m = 0.5 * (1.0 - vp[1] / sqrt(vp[1]*vp[1] + alpha*alpha));
  adouble ve = x[0] + m * sqrt(vp[1]*vp[1] + alpha*alpha) + acgam*vp[1];
  adouble vn = ve + 0.5*((ve - vto)*(xc - 1.0) +
                         sqrt(pow(ve - vto, 2) * pow(xc - 1.0, 2) + 0.04));
  adouble qgd = afac*cgd*(x[1] - m*sqrt(vp[1]*vp[1] + alpha*alpha) + acgam * vp[1]);
  adouble qgs;
  condassign(qgs, vn - fc*vbi,
             afac*cgs*vbi*(2.0*(1.0-sqrt(1.0-fc)) + (vn/vbi-fc)/sqrt(1.0-fc) +
                           pow(vn/vbi-fc,2)/(4.0*pow(1-fc,1.5))),
             2.0*afac*cgs*vbi*(1.0-sqrt(1.0 - vn/vbi)));

  double c_gd = afac * cgd;
  adouble cgs0;
  adouble dvn_dve = 0.5 * (xc + 1.0 +
                           (pow(1.0-xc,2)*(ve-vto))/(pow(1.0-xc,2) * pow(ve-vto,2) + 0.04));
  condassign(cgs0, vn - fc*vbi,
             afac*cgs/sqrt(1.0-fc) * (1.0 + 0.5*(vn/vbi -fc)/(1.0-fc)) * dvn_dve,
             afac*cgs/sqrt(1.0 - vn/vbi) * dvn_dve);

  Cgd = c_gd + m*(cgs0 - c_gd) - acgam*(cgs0 + c_gd);
  Cgs = cgs0 + m*(c_gd - cgs0) - acgam*(cgs0 + c_gd);
  Cds = acgam*((1.0 - m)*cgs0 + m*c_gd) +
        m*(m - 1.0)*(cgs0 + c_gd + 2.0*(qgd-qgs)/sqrt(vp[1]*vp[1] + alpha*alpha));
  Cm = -acgam * (cgs0 + c_gd);

  igs = afac*(is*(exp(x[0]/Vt)-1.0) - ibd*(exp(-x[0]/vbd) - 1.0));
  igs += Cgs * x[2];
  igd = afac*(is*(exp(x[1]/Vt)-1.0) - ibd*(exp(-x[1]/vbd) - 1.0));
  igd += Cgd * x[3];

  // Calculate the output currents
  ip[0] = (igd + igs) * area;
  ip[1] = (ids - igd) * area;
}

C.7 The OML MESFET model

//------------
// MesfetOML.h
//------------
// MESFET OML model (from Microwave Optical Tech Letters,
// Vol 29, no 4, p. 226, 2001.)
//
//                  Drain 2
//                  o
//                  |
//                  |
//              |---+
//              |
// Gate 1 o-----|
//              |
//              |---+
//                  |
//                  |
//                  o
//                  Source 3
//
//
// Author: Nikhil M. Kriplani

#ifndef MesfetOML_h
#define MesfetOML_h 1

class MesfetOML : public AdolcElement
{
public:
  MesfetOML(const string& iname);
  ~MesfetOML() {}

  static const char* getNetlistName()
  {
    return einfo.name;
  }

  // Do some local initialization
  virtual void init() throw(string&);

private:
  virtual void eval(adoublev& x, adoublev& vp, adoublev& ip);

  // Some constants
  double k2, k3;
  double delta_T, tn, Vt, k1, k4, k5, k6, Vt0, Beta, Ebarr, EbarrN, Nn;
  double Is, Vbi;

  // Element information
  static ItemInfo einfo;

  // Number of parameters of this element
  static const unsigned n_par;

  // Parameter variables
  double b1, b2, b3, b4, b5, gamma;
  double gee, vt0, delta, cgs0, cgd0;
  double is, n, ib0, nr, t, vbi, fcc;
  double vbd, tnom, avt0, bvt0, tm, tme;
  double eg, m, xti, tj, area;

  // Parameter information
  static ParmInfo pinfo[];
};

#endif

//-------------
// MesfetOML.cc
//-------------
#include "../network/ElementManager.h"
#include "../network/AdolcElement.h"
#include "MesfetOML.h"

// Static members
const unsigned MesfetOML::n_par = 29;

// Element information
ItemInfo MesfetOML::einfo =
{
  "mesfetoml",
  "Intrinsic MESFET using OML model",
  "Nikhil Kriplani",
  DEFAULT_ADDRESS"elements/MesfetOML.h.html",
  "2005_12_15"
};

// Parameter information
ParmInfo MesfetOML::pinfo[] =
{
  {"b1", "fitting parameter 1", TR_DOUBLE, false},
  {"b2", "fitting parameter 2", TR_DOUBLE, false},
  {"b3", "fitting parameter 3", TR_DOUBLE, false},
  {"b4", "fitting parameter 4", TR_DOUBLE, false},
  {"b5", "fitting parameter 5", TR_DOUBLE, false},
  {"gamma", "Vds dependence on pinch-off potential", TR_DOUBLE, false},
  {"gee", "Dependence of gate-bias on knee voltage", TR_DOUBLE, false},
  {"vt0", "Voltage where the channel current is forced to 0", TR_DOUBLE, false},
  {"delta", "dependence of Veff on Vgs", TR_DOUBLE, false},
  {"cgs0", "Gate-source Schottky barrier capacitance for Vgs=0 (F)", TR_DOUBLE, false},
  {"cgd0", "Gate-drain Schottky barrier capacitance for Vgd=0 (F)", TR_DOUBLE, false},
  {"is", "Diode saturation current (A)", TR_DOUBLE, false},
  {"n", "Diode ideality factor", TR_DOUBLE, false},
  {"ib0", "Breakdown current parameter (A)", TR_DOUBLE, false},
  {"nr", "Breakdown ideality factor", TR_DOUBLE, false},
  {"t", "Channel transit time (s)", TR_DOUBLE, false},
  {"vbi", "Built-in potential of the Schottky junctions (V)", TR_DOUBLE, false},
  {"fcc", "Forward-bias depletion capacitance coefficient (V)", TR_DOUBLE, false},
  {"vbd", "Breakdown voltage (V)", TR_DOUBLE, false},
  {"tnom", "Reference Temperature (K)", TR_DOUBLE, false},
  {"avt0", "Pinch-off voltage (VP0 or VT0) linear temp. coefficient (1/K)", TR_DOUBLE, false},
  {"bvt0", "Pinch-off voltage (VP0 or VT0) quadratic temp. coefficient (1/K^2)", TR_DOUBLE, false},
  {"tm", "Ids linear temp. coeff. (1/K)", TR_DOUBLE, false},
  {"tme", "Ids power law temp. coeff. (1/K^2)", TR_DOUBLE, false},
  {"eg", "Barrier height at 0 K (eV)", TR_DOUBLE, false},
  {"m", "Grading coefficient", TR_DOUBLE, false},
  {"xti", "Diode saturation current temperature exponent", TR_DOUBLE, false},
  {"tj", "Junction Temperature (K)", TR_DOUBLE, false},
  {"area", "Area multiplier", TR_DOUBLE, false}
};

MesfetOML::MesfetOML(const string& iname) :
  AdolcElement(&einfo, pinfo, n_par, iname)
{
  // Set default parameter values
  paramvalue[0] = &(b1 = -0.7437);
  paramvalue[1] = &(b2 = 2.8974);
  paramvalue[2] = &(b3 = 4.4187);
  paramvalue[3] = &(b4 = 15.329);
  paramvalue[4] = &(b5 = 21.8151);
  paramvalue[5] = &(gamma = 0.0378);
  paramvalue[6] = &(gee = 0.2339);
  paramvalue[7] = &(vt0 = -1.2262);
  paramvalue[8] = &(delta = 0.1222);
  paramvalue[9] = &(cgs0 = zero);
  paramvalue[10] = &(cgd0 = zero);
  paramvalue[11] = &(is = zero);
  paramvalue[12] = &(n = one);
  paramvalue[13] = &(ib0 = zero);
  paramvalue[14] = &(nr = 10.);
  paramvalue[15] = &(t = zero);
  paramvalue[16] = &(vbi = .8);
  paramvalue[17] = &(fcc = .5);
  paramvalue[18] = &(vbd = 1e10);
  paramvalue[19] = &(tnom = 293.);
  paramvalue[20] = &(avt0 = zero);
  paramvalue[21] = &(bvt0 = zero);
  paramvalue[22] = &(tm = zero);
  paramvalue[23] = &(tme = zero);
  paramvalue[24] = &(eg = .8);
  paramvalue[25] = &(m = .5);
  paramvalue[26] = &(xti = 2.);
  paramvalue[27] = &(tj = 293.);
  paramvalue[28] = &(area = one);

  // Set the number of terminals
  setNumTerms(3);
  // Set flags
  setFlags(NONLINEAR | ONE_REF | TR_TIME_DOMAIN);
  // Set number of states
  setNumberOfStates(2);
}

void MesfetOML::init() throw(string&)
{
  k2 = cgs0 / sqrt(one - fcc);
  k3 = cgd0 / sqrt(one - fcc);
  delta_T = tj - tnom;
  tn = tj / tnom;
  Vt = kBoltzman * tj / eCharge;
  k5 = n * Vt;
  k6 = nr * Vt;
  Vt0 = vt0 * (one + (delta_T * avt0) + (delta_T * delta_T * bvt0));
  Ebarr = eg - .000702 * tj * tj / (tj + 1108.);
  EbarrN = eg - .000702 * tnom * tnom / (tnom + 1108.);
  Nn = eCharge / 38.696 / kBoltzman / tj;
  Is = is * exp((tn - one) * Ebarr / Nn / Vt);
  if (xti)
    Is *= pow(tn, xti / Nn);
  Vbi = vbi * tn - 3. * Vt * log(tn) + tn * EbarrN - Ebarr;
  k1 = fcc * Vbi;
  k4 = 2. * Vbi * (one - fcc);

  // create tape
  IntVector var(2);
  var[0] = 0;
  var[1] = 1;
  IntVector novar;
  DoubleVector nodelay;
  createTape(var, var);
}

void MesfetOML::eval(adoublev& x, adoublev& vp, adoublev& ip)
{
  // x[0]: vgs
  // x[1]: vgd
  // x[2]: dvgs/dt
  // x[3]: dvgd/dt
  // x[4]: vgs(t-tau)
  // vp[0]: vgs , ip[0]: ig
  // vp[1]: vds , ip[1]: id

  // Assign known output voltages
  vp[0] = x[0];
  vp[1] = x[0] - x[1];

  adouble ids, igd, igs, itmp, vx, cgs, cgd;

  // static igs current
  igs = Is * (exp(x[0] / k5) - one) - ib0 * exp(-(x[0] + vbd) / k6);
  // Calculate cgs, including temperature effect.
  condassign(cgs, k1 - x[0],
             cgs0 / sqrt(one - x[0] / Vbi),
             k2 * (one + (x[0] - k1) / k4));
  cgs *= (one + m * (0.0004 * delta_T + one - Vbi / vbi));
  // Calculate the total current igs = static + dq_dt
  igs += cgs * x[2];

  // static igd current
  igd = Is * (exp(x[1] / k5) - one) - ib0 * exp(-(x[1] + vbd) / k6);
  // Calculate cgd, including temperature effect.
condassign(cgd, k1 - x[1], cgd0 / sqrt(one - x[1] / Vbi), k3 * (one + (x[1] - k1) / k4)); cgd *= (one + m * (0.0004 * delta_T + one - Vbi / vbi)); // Calculate the total current igd = static + dq_dt igd += cgd * x[3]; // Calcutate Ids as a function of Veff adouble a1 = b1 * vp[1]; adouble Vgst = x[0] - vt0 + gamma * vp[1]; adouble Vdseff = (b2 * vp[1] + b3 * vp[1]*vp[1]) / (1 + gee * Vgst); adouble a2 = b4 * (Vdseff / sqrt(1 + Vdseff*Vdseff)); adouble a3 = b5 * (Vdseff / sqrt(1 + Vdseff*Vdseff)); adouble Veff = 0.5 * (Vgst + sqrt(Vgst*Vgst + delta*delta)); 205 itmp = a1*Veff*Veff*Veff + a2*Veff*Veff + a3*Veff; condassign(ids, (itmp * vp[1]) * (x[0] - Vt0), itmp, zero); if (tme && tm) ids *= pow((1 + delta_T * tm), tme); // Calculate the output currents ip[0] = (igd + igs) * area; ip[1] = (ids - igd) * area; } C.8 The Ziggurat Technique This is a technique to produce gaussian random variables with a user specified mean and variance. This code has been slightly modified from its original form in [132] to fit into the f REEDATM framework and is provided below. //--------// fRandn.h //--------#ifndef _FRANDN_H_ #define _FRANDN_H_ class fRandn { public: fRandn(); ~fRandn() { }; unsigned long SHR3(); double UNI(); double nfix(); void zigset(unsigned long jsrseed); double RNOR(double mean, double st_dev); private: }; #endif 206 //---------// fRandn.cc //---------#include "fRandn.h" #include <iostream> using namespace std; #include <cmath> #include <ctime> #include <cstdlib> static unsigned long jz, jsr=123456789; static long hz; static unsigned long iz, kn[128]; static double wn[128], fn[128]; const int N = 128; // Initialize the random no. in the generator. // A fREEDA element will initialize a variable of the // fRandn class in it’s own init() function. 
fRandn::fRandn()
{
  srand(time(0));
  unsigned long seed = rand();
  zigset(seed);
}

unsigned long fRandn::SHR3()
{
  jz = jsr;
  jsr ^= (jsr << 13);
  jsr ^= (jsr >> 17);
  jsr ^= (jsr << 5);
  return jz + jsr;
}

// create a Uniform r.v.
double fRandn::UNI()
{
  return 0.5 + (signed) SHR3() * .2328306e-9;
}

double fRandn::nfix()
{
  const float r = 3.442620f;
  double x, y;

  for(;;)
  {
    x = hz * wn[iz];

    // iz==0 handles the base strip
    if(iz == 0)
    {
      do
      {
        x = -log(UNI()) * 0.2904764; // .2904764 is 1/r
        y = -log(UNI());
      } while (y + y < x*x);
      return (hz > 0) ? r+x : -r-x;
    }

    // iz > 0, handle the wedges of other strips
    if(fn[iz] + UNI() * (fn[iz-1] - fn[iz]) < exp(-.5*x*x))
      return x;

    // initiate, try to exit the for(;;) loop
    hz = SHR3();
    iz = hz & (N-1);
    if(abs(hz) < kn[iz])
      return (hz * wn[iz]);
  }
}

// set a seed for the ziggurat generator
void fRandn::zigset(unsigned long jsrseed)
{
  const double m1 = 2147483648.0;
  double dn = 3.442619855899;
  double tn = dn;
  double vn = 9.91256303526217e-3;
  double q;
  int i;

  jsr ^= jsrseed;
  q = vn / exp(-.5*dn*dn);
  kn[0] = (dn/q)*m1;
  kn[1] = 0;
  wn[0] = q/m1;
  wn[N-1] = dn/m1;
  fn[0] = 1.;
  fn[N-1] = exp(-.5*dn*dn);

  for(i = N-2; i >= 1; i--)
  {
    dn = sqrt(-2.*log(vn/dn + exp(-.5*dn*dn)));
    kn[i+1] = (dn/tn)*m1;
    tn = dn;
    fn[i] = exp(-.5*dn*dn);
    wn[i] = dn/m1;
  }
}

// get a single normal random variable
// Call this function inside the eval function of
// every element, which is called at every time step.
// This gives you a randn variable at every time step.
double fRandn::RNOR(double mean = 0, double st_dev = 1)
{
  hz = SHR3();
  iz = hz & (N-1);
  double x = (abs(hz) < kn[iz]) ? hz*wn[iz] : nfix();
  return x*st_dev + mean;
}

C.9 The Logistic Map Noise Generator

//---------
// fLogis.h
//---------
#ifndef _FLOGIS_H_
#define _FLOGIS_H_

// The class which generates an intermittent chaotic time
// series that has flicker characteristics
class fLogis
{
public:

  fLogis();
  ~fLogis();

  inline void setSize(int sz) { size = sz; }
  inline double getMapValue(int i) { return Map[i]; }
  void generateMap();

private:

  double * Map;
  int size;
};

#endif

//----------
// fLogis.cc
//----------
#include "fLogis.h"
#include <iostream>
using namespace std;
#include <ctime>
#include <cmath>
#include <cstdlib>

fLogis::fLogis()
{
  // Do nothing
}

void fLogis::generateMap()
{
  srand(time(0));
  double xt = (double)rand()/RAND_MAX;
  const int iter = size; // 1e7
  Map = new double[iter];

  // generate the map
  for (int j = 0; j < iter; j++)
  {
    Map[j] = 4.0*xt*(1-xt);
    xt = Map[j];
  }

  // make the sequence component's mean about zero:
  // subtract 0.5 from every value
  for (int i = 0; i < iter; i++)
    Map[i] -= 0.5;
}

fLogis::~fLogis()
{
  delete [] Map;
}

C.10 The Logarithmic Map Noise Generator

//---------
// fChaos.h
//---------
#ifndef _FCHAOS_H_
#define _FCHAOS_H_

// The class which generates an intermittent chaotic time
// series that has flicker characteristics
class fChaos
{
public:

  fChaos();
  ~fChaos();

  inline void setSize(int sz) { size = sz; }
  inline void setBeta(double bt) { beta = bt; }
  inline double getMapValue(int i) { return Map[i]; }
  void generateMap();

private:

  double * Map_temp;
  double * Map;
  double beta;
  int size;
};

#endif

//----------
// fChaos.cc
//----------
#include "fChaos.h"
#include <iostream>
using namespace std;
#include <ctime>
#include <cmath>
#include <cstdlib>

fChaos::fChaos()
{
  // Do nothing
}

void fChaos::generateMap()
{
  srand(time(0));
  double xt = (double)rand()/RAND_MAX;
  const int iter = 2*size; // 1e7
  //const int iter = 2*nsteps;
  const int need = size;

  // iterate the vector Map_temp and finally
  // save the required values in Map
  Map_temp = new double[iter];
  Map = new double[need];

  // generate the map
  for (int j = 0; j < iter; j++)
  {
    if (xt <= 0.5)
      Map_temp[j] = xt + 2*pow(log(2.0),(beta-1))
                    * xt*xt * pow(abs(log(xt)),(beta+1));
    else
      Map_temp[j] = 2*xt - 1;
    xt = Map_temp[j];
  }

  // save the number of values specified by the value of "need",
  // counting back from the last element (iter - 1 - i keeps the
  // index within the bounds of Map_temp)
  for (int i = 0; i < need; i++)
    Map[i] = Map_temp[iter - 1 - i];

  // make the flicker component's mean about zero:
  // subtract 0.5 from every value
  for (int i = 0; i < need; i++)
    Map[i] -= 0.5;
}

fChaos::~fChaos()
{
  delete [] Map;
  delete [] Map_temp;
}

Appendix D

fREEDA Netlists

D.1 The Varactor-tuned VCO circuit

* The Varactor-tuned VCO circuit
.options method=3 maxit=1000000 jupdm=2
vsource:vs 1 0 vdc=0.0
vsource:vin1 7 0 vdc=12.0
vsource:vin2 12 0 vdc=12.0
dn2:d1 3 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
dn2:d2 0 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
dn2:d3 3 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
dn2:d4 0 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
dn2:d5 3 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
dn2:d6 0 2 tstop=10m tstep=1n
+ is=1.365p rs=1.0 n=1.0 cj0=14.93e-12
+ m=0.4261 vj=0.75 fc=0.5 bv=25.0 ibv=10.0e-6
+ ksh=1 kth=1 kf=1e-4 beta=0.000005
bjtnpnn2:q1 8 5 9 0 tstop=10m tstep=1n
+ bf=255.9 br=6.092 cjc=7.306e-12 cje=22.01e-12
+ ikf=0.2847 is=14.34e-15 ise=14.34e-15 itf=0.6
+ mjc=0.3416 mje=0.377 nf=1.0 ne=1.307 nr=1.0
+ rb=10.0 rc=1.0 tf=411.1e-12 tr=46.91e-9 vaf=74.03
+ vtf=1.7 xtb=1.5 xtf=3.0 ksh=1 kth=1 kf=1e-2
+ beta=0.000005
bjtnpnn:q2 10 6 9 0 tstop=10m tstep=1n
+ bf=255.9 br=6.092 cjc=7.306e-12 cje=22.01e-12
+ ikf=0.2847 is=14.34e-15 ise=14.34e-15 itf=0.6
+ mjc=0.3416 mje=0.377 nf=1.0 ne=1.307 nr=1.0
+ rb=10.0 rc=1.0 tf=411.1e-12 tr=46.91e-9 vaf=74.03
+ vtf=1.7 xtb=1.5 xtf=3.0 ksh=1 kth=1 kf=1e-2
+ beta=0.000005
rn2:r1 6 0 res=1k tstop=10m tstep=1n kth=1
rn2:r2 6 7 res=3.9k tstop=10m tstep=1n kth=1
rn2:r3 10 12 res=50 tstop=10m tstep=1n kth=1
rn2:r4 13 0 res=1k tstop=10m tstep=1n kth=1
rn2:r5 11 0 res=47 tstop=10m tstep=1n kth=1
c1 1 0 c=1000p
c2 3 4 c=47p
c3 4 0 c=22p
c4 6 0 c=1000p
c5 12 0 c=1000p
c6 10 13 c=100p
l1 5 6 l=30n
l2 4 5 l=200n
l3 12 8 l=15n
l4 1 2 l=20u
l5 3 0 l=20u
l6 9 11 l=20u
* mutual inductance
k:k1 0 coupling=0.9 l1="l1" l2="l3"
k:k2 0 coupling=0.9 l1="l2" l2="l3"
k:k3 0 coupling=1.0 l1="l1" l2="l2"
.tran2 tstop=10m tstep=1n im=0 opt=1 nst=9.99999m
.out plot term 13 vt in "vco_output.out"
.end

D.2 The X-band MMIC Circuit Netlist

NETLIST FOR FILTRONIC SOLID STATE MMIC LNA
.options f0=10e9 jupdm=0
.model m_line1 tlinp4 (z0mag=95.7 k=7.55 fscale=10e9 alpha=773
+ nsect=20 fopt=10e9 tand=0.006)
.model m_line2 tlinp4 (z0mag=81.9 k=7.73 fscale=10e9 alpha=78
+ nsect=20 fopt=10e9 tand=0.006)
.model m_line3 tlinp4 (z0mag=76.2 k=7.82 fscale=10e9 alpha=156
+ nsect=20 fopt=10e9 tand=0.006)
c1 2 3 6e-12
tlinp4:t1 3 0 0 0 model="m_line1" length=1194u
tlinp4:t2 3 0 4 0 model="m_line2" length=183u
mesfetcn:m1 42 51 62 A0=0.09910 A1=0.08541 A2=-0.0203 A3=-0.015
+ BETA=0.01865 GAMA=0.8293 VDS0=6.494 VT0=-1.2 VBI=0.8
+ CGD0=3f CGS0=528.2f IS=3e-12 NR=1.2 T=1e-12 vbd=12
+ kf=1e-9 tstep=1ps tstop=10ns
mesfetcn:m2 172 192 182 A0=0.1321 A1=0.1085 A2=-0.04804 A3=-0.03821
+ BETA=0.03141 GAMA=0.7946 VDS0=5.892 VT0=-1.2 VBI=1.5
+ CGD0=4e-15 CGS0=695.2f IS=4e-12 N=1.2 T=1e-12 vbd=12
+ kf=1e-9 tstep=1ps tstop=10ns
rn:rg1 41 42 res=0.83 tstep=2ps tstop=20ns
rn:rd1 5 51 res=0.83 tstep=2ps tstop=20ns
rn:rs1 61 62 res=0.33 tstep=2ps tstop=20ns
l:lg1 4 41 l=7e-12
l:ls1 6 61 l=11e-12
tlinp4:t3 6 0 8 0 model="m_line2" length=391u
tlinp4:t4 6 0 7 0 model="m_line2" length=401u
c:c_via4 7 0 c=17e-12
rn:r_via4 7 0 res=6 tstep=2ps tstop=20ns
c:c_via3 8 0 c=17e-12
tlinp4:t5 5 0 9 0 model="m_line2" length=102u
tlinp4:t6 9 0 10 0 model="m_line1" length=368u
rn:r1 10 11 res=10.53 tstep=2ps tstop=20ns
rn:r2 11 12 res=24.93 tstep=2ps tstop=20ns
c:c_s6 11 0 c=17e-12
tlinp4:t7 9 0 13 0 model="m_line2" length=33u
c:c2 13 14 c=2e-12
tlinp4:t8 14 0 15 0 model="m_line1" length=705u
tlinp4:t9 14 0 0 0 model="m_line2" length=419u
tlinp4:t10 14 0 17 0 model="m_line2" length=58u
rn:rg2 171 172 res=0.63 tstep=2ps tstop=20ns
rn:rd2 191 192 res=0.63 tstep=2ps tstop=20ns
rn:rs2 181 182 res=0.25 tstep=2ps tstop=20ns
l:lg2 17 171 l=16e-12
l:ld2 19 191 l=11e-12
l:ls2 18 181 l=11e-12
c:c_via8 18 0 c=17p
rn:r_via8 18 0 res=5 tstep=2ps tstop=20ns
tlinp4:t11 19 0 20 0 model="m_line1" length=138u
c:cfb 20 21 c=4.28e-12
rn:rfb 21 22 res=237.4 tstep=2ps tstop=20ns
.ref 0
l:lfb 22 15 l=1.268n int_res=9.55
tlinp4:t12 19 0 23 0 model="m_line1" length=313u
l:lp 23 24 l=1.268n int_res=9.55
rn:rpad 24 29 res=24 tstep=2ps tstop=20ns
vsource:v2 29 0 vdc=6
c:c_via12 24 0 c=17e-12
tlinp4:t13 19 0 25 0 model="m_line3" length=229u
c:cload 25 26 c=6e-12
rn:r50 26 0 res=50 tstep=2ps tstop=20ns
vsource:vin 666 0 f=10.0e9 vdc=0.0 vac=0.1 phase=-90
vwn:vwhite 2 666 kn=0.01
vsource:v1 12 0 vdc=6
.tran2 tstep=1ps tstop=10ns im=0 nst=4n
.out plot term 26 vt in "mmic.out"
.end