
VLSI

What is the difference between Verification Intellectual Property (VIP) and Intellectual
Property (IP) in VLSI?
 An Intellectual Property (IP) core in VLSI is a reusable unit of logic, functionality, a cell, or a layout design that is normally developed with the idea of licensing it to multiple vendors for use as a building block in different chip designs.
In today's era of IC design, more and more system functionality is getting integrated into single chips (System on Chip / SOC designs). In these SOC designs, pre-designed IP cores/blocks are becoming more and more important. This is because most SOC designs have a standard microprocessor and a lot of system functionality that is standardized and hence, if designed once, can be reused across several designs.
Refer to the following diagram (reference: Wikipedia), and you can see that a lot of the components are standardized protocols and designs, e.g. the ARM bus protocols like AHB and APB, and designs like the Ethernet, SPI, USB, and UART cores. All of these can be designed as stand-alone IP cores/blocks and licensed to multiple design houses for different designs.
With the increased trend of IP-based designs, there also came the need for Verification Intellectual Properties (VIPs).
Similar to design IPs, Verification IPs are pre-defined functional blocks that can be
inserted into the testbenches used for verifying a design.
Verification of a large SOC design typically takes more than 50% of the overall project life cycle and is done at multiple stages: verifying smaller logical blocks, verifying a group of logic components at the sub-system level, and then verifying the entire SOC chip.
VIP blocks can help at all these levels of verification as simulation models for the actual design IP. VIP blocks normally consist of bus functional models, stimulus generators, protocol monitors, and functional coverage blocks. Since industry design testbenches follow different languages (like SystemVerilog, C, and Specman) and methodologies (OVM, UVM), these VIPs are generally designed as configurable components that can be easily integrated into different verification environments.
Most complex SOC designs now follow this trend of an IP-core-based design flow and a VIP-based verification environment for successful products and a shorter time to market.
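The structure described above (a stimulus generator driving a bus functional model, with a passive monitor checking behavior and collecting coverage) can be sketched conceptually. The following is a minimal, hypothetical Python sketch of those roles, not any vendor's VIP API; real VIPs are typically written in SystemVerilog with UVM, and all class and field names here are illustrative assumptions.

```python
import random

# Hypothetical sketch of the pieces a VIP typically bundles: a stimulus
# generator, a bus functional model (BFM), and a protocol monitor with
# simple functional coverage. The "bus" is just a shared list here.

class StimulusGenerator:
    """Produces randomized write transactions for the testbench."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def next_txn(self):
        # Illustrative transaction: 4-bit address, 8-bit data.
        return {"addr": self.rng.randrange(0, 16),
                "data": self.rng.randrange(0, 256)}

class BusFunctionalModel:
    """Drives transactions onto the toy bus."""
    def __init__(self, bus):
        self.bus = bus

    def drive(self, txn):
        self.bus.append(txn)

class ProtocolMonitor:
    """Passively observes the bus, checks a rule, collects coverage."""
    def __init__(self, bus):
        self.bus = bus
        self.covered_addrs = set()

    def observe(self):
        for txn in self.bus:
            # A stand-in "protocol rule": data must fit in 8 bits.
            assert 0 <= txn["data"] < 256, "protocol violation"
            self.covered_addrs.add(txn["addr"])

bus = []
gen = StimulusGenerator(seed=42)
bfm = BusFunctionalModel(bus)
mon = ProtocolMonitor(bus)

for _ in range(100):
    bfm.drive(gen.next_txn())
mon.observe()
print(f"coverage: {len(mon.covered_addrs)}/16 addresses exercised")
```

The same separation of roles (generation, driving, monitoring, coverage) is what lets a real VIP be reconfigured across testbenches without touching the stimulus or checking logic.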
What is the difference between soft IP and hard IP in VLSI?
 IP cores in VLSI are generally licensed as either Soft IP cores or Hard IP cores.
Soft IP cores are IP blocks generally offered as synthesizable RTL models. These are developed in a hardware description language like SystemVerilog or VHDL. Sometimes IP cores are also synthesized and provided as a generic gate-level netlist, which can then be mapped to any process technology; these also fall under Soft IP cores. The advantage of Soft IP cores is that a consumer can customize them in the back-end placement and routing flow to map them to any process technology.
Hard IP cores, on the other hand, are offered as layout designs in a layout format like GDS, already mapped to a specific process technology, and can be directly dropped by a consumer into the final layout of the chip. These cores cannot be customized for different process technologies.
Generally, digital logic cores are developed and licensed as Soft IP cores, e.g. a DRAM controller IP, an Ethernet MAC IP, AMBA bus protocol IPs, etc.
Analog and mixed-signal designs such as SerDes, PLLs, ADCs or DACs, and PHY-layer logic for DDR, PCIe, etc. are generally developed and licensed as Hard IP cores.
Is Analog VLSI going to lose out to digital VLSI?
 Well, I am not sure from which perspective the question is being asked, but I will still try my best to answer it. Being an analog engineer in the largest digital company in the world, I got the best flavour of both worlds. It's true that as the technology node shrinks, it is getting more and more difficult for analog designers to make pure analog blocks, so now all the analog IPs are made as mixed-signal IPs, which is called digitally assisted analog. But that doesn't mean that analog is losing out; rather, analog designers have to have a strong digital base as well. They should be able to code RTL along with designing circuits.
Now coming to the question: is the significance of analog blocks coming down? The answer is no. In fact, as high-speed digital is coming up, the supporting analog infrastructure needs to be very accurately designed. For example, high-speed analog circuits like data converters and clock generators are gaining more importance. Each and every analog block design has become more challenging and critical.
To conclude, the job of analog designers has become more difficult, so the industry needs skilled analog designers who have a good understanding of both analog and digital at both the IP and the system level. As the VLSI industry continues to grow, the requirement for analog designers will grow too. So there is no question of losing out to digital: to support good digital infrastructure, we need good analog support, and vice versa.
What are HVT, SVT & LVT cells? How do they determine power consumption and timing?
Digital hardware ASIC design involves not just complex logic implementation; it also aims to reduce power consumption and the time taken for an operation. Here I try to explain this in a simplified way from a firmware programmer's perspective. An ASIC contains a large number of logic cells. Each logic cell can be used in a design to implement various functions (they are like bricks used to build a house). Each logic cell is made up of multiple transistors (switching elements), so the power and time taken by these cells determine the overall characteristics of the final product. In this context it is essential to control the switching elements' power consumption and timing to get the final result.
Power consumption involves both dynamic power and static power. Dynamic power is the power consumed during the switching of transistors. Static power is due to the leakage current that flows when the transistor is powered on but in the logical OFF state. Here we focus on leakage current to understand HVT, SVT, and LVT cells. The threshold voltage of a transistor is designed in such a way that if the gate voltage is below this threshold voltage, the transistor goes to the OFF state. Even in the OFF state, there is still some leakage current. If the gate voltage goes above the threshold voltage, the transistor goes to the ON state.
Look at the figure shown below, which shows the various leakage currents in a transistor.
Threshold Voltage and Leakage Current
The broad classifications are,
1. Sub-threshold current – this can be controlled by narrowing the junction area between the transistor and the substrate, which in turn controls the threshold voltage of the transistor. A high threshold voltage (high Vt) causes less leakage current but, on the other hand, slower switching. A low threshold voltage (low Vt) causes higher leakage current and quicker switching.
2. Gate leakage current
3. Reverse bias current
Now in this context let us see HVT, SVT and LVT.
HVT – High Threshold Voltage: causes less power consumption, but switching timing is not optimized. It is used in power-critical functions.
LVT – Low Threshold Voltage: causes more power consumption, but switching timing is optimized. It is used in timing-critical functions.
SVT – Standard Threshold Voltage: offers a trade-off between HVT and LVT, i.e., moderate delay and moderate power consumption.
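The exponential sensitivity of sub-threshold leakage to threshold voltage can be illustrated numerically. This is a rough sketch using the standard relation I_sub ∝ exp(−Vt / (n·vT)), where vT = kT/q ≈ 26 mV at room temperature; the slope factor n ≈ 1.5 and the specific Vt values for the three cell flavors are illustrative assumptions, not numbers from any real standard-cell library.

```python
import math

# Sub-threshold leakage scales roughly as exp(-Vt / (n * vT)), where
# vT = kT/q ~ 26 mV at room temperature and n is the sub-threshold
# slope factor (assumed ~1.5 here for illustration).
vT = 0.026   # thermal voltage at ~300 K, in volts
n = 1.5      # assumed sub-threshold slope factor

def relative_leakage(vt):
    """Leakage relative to a hypothetical Vt = 0 device."""
    return math.exp(-vt / (n * vT))

# Illustrative threshold voltages for the three cell flavors.
cells = {"LVT": 0.30, "SVT": 0.40, "HVT": 0.50}

base = relative_leakage(cells["SVT"])
for name, vt in cells.items():
    ratio = relative_leakage(vt) / base
    print(f"{name} (Vt = {vt:.2f} V): {ratio:.2f}x the leakage of an SVT cell")
```

With these assumed numbers, a 100 mV shift in Vt changes leakage by roughly an order of magnitude in each direction, which is why mixing HVT cells into non-critical paths is such an effective static-power lever.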
BASICS OF UART COMMUNICATION
 Remember when printers, mice, and modems had thick cables with those huge clunky
connectors? The ones that literally had to be screwed into your computer? Those
devices were probably using UARTs to communicate with your computer. While USB has
almost completely replaced those old cables and connectors, UARTs are definitely not a
thing of the past. You’ll find UARTs being used in many DIY electronics projects to
connect GPS modules, Bluetooth modules, and RFID card reader modules to your
Raspberry Pi, Arduino, or other microcontrollers.
UART stands for Universal Asynchronous Receiver/Transmitter. It’s not a
communication protocol like SPI and I2C, but a physical circuit in a microcontroller, or a
stand-alone IC. A UART’s main purpose is to transmit and receive serial data.
One of the best things about UART is that it only uses two wires to transmit data
between devices. The principles behind UART are easy to understand, but if you haven’t
read part one of this series, Basics of the SPI Communication Protocol, that might be a
good place to start.
INTRODUCTION TO UART COMMUNICATION

In UART communication, two UARTs communicate directly with each other. The
transmitting UART converts parallel data from a controlling device like a CPU into
serial form, transmits it in serial to the receiving UART, which then converts the
serial data back into parallel data for the receiving device. Only two wires are
needed to transmit data between two UARTs. Data flows from the Tx pin of the
transmitting UART to the Rx pin of the receiving UART:


UARTs transmit data asynchronously, which means there is no clock signal to
synchronize the output of bits from the transmitting UART to the sampling of bits by
the receiving UART. Instead of a clock signal, the transmitting UART adds start and
stop bits to the data packet being transferred. These bits define the beginning and
end of the data packet so the receiving UART knows when to start reading the bits.
When the receiving UART detects a start bit, it starts to read the incoming bits at a
specific frequency known as the baud rate. Baud rate is a measure of the speed of
data transfer, expressed in bits per second (bps). Both UARTs must operate at about the same baud rate. The baud rate between the transmitting and receiving UARTs can typically only differ by a few percent before the timing of bits gets too far off.
Both UARTs must also be configured to transmit and receive the same data packet structure.
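Why the two baud rates must stay close can be sketched with a quick timing calculation: the receiver re-synchronizes on the start-bit edge and then samples each following bit at its nominal center, so any rate mismatch accumulates across the frame. The 10-bit frame length and the criterion that the sampling point must stay within half a bit period of the bit center are illustrative assumptions in this sketch.

```python
# The receiver re-synchronizes on the start bit, then samples each
# subsequent bit at its nominal center. A baud-rate mismatch makes the
# sampling point drift by (mismatch) bit-periods per bit, so the error
# accumulates across the frame and is worst at the last bit.

def drift_at_last_bit(mismatch, bits_per_frame=10):
    """Cumulative sampling drift (in bit periods) at the final bit.

    mismatch: fractional baud-rate difference between the two UARTs,
    e.g. 0.02 for 2%. A 10-bit frame (start + 8 data + stop) is an
    illustrative assumption.
    """
    return mismatch * bits_per_frame

for pct in (0.01, 0.02, 0.06):
    drift = drift_at_last_bit(pct)
    ok = "still inside the bit" if drift < 0.5 else "past the bit edge"
    print(f"{pct:.0%} mismatch -> {drift:.2f} bit periods of drift ({ok})")
```

Under these assumptions, a small mismatch leaves the last sample comfortably inside its bit, while a larger one walks the sampling point past the bit boundary, corrupting the stop bit and causing a framing error.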
HOW UART WORKS
The UART that is going to transmit data receives the data from a data bus. The data
bus is used to send data to the UART by another device like a CPU, memory, or
microcontroller. Data is transferred from the data bus to the transmitting UART in
parallel form. After the transmitting UART gets the parallel data from the data bus, it
adds a start bit, a parity bit, and a stop bit, creating the data packet. Next, the data
packet is output serially, bit by bit at the Tx pin. The receiving UART reads the data
packet bit by bit at its Rx pin. The receiving UART then converts the data back into
parallel form and removes the start bit, parity bit, and stop bits. Finally, the receiving
UART transfers the data packet in parallel to the data bus on the receiving end:
UART transmitted data is organized into packets. Each packet contains 1 start bit, 5
to 9 data bits (depending on the UART), an optional parity bit, and 1 or 2 stop bits:
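The framing steps described above can be sketched in software. Below is a minimal Python sketch assuming an 8N1 configuration (1 start bit, 8 data bits, no parity, 1 stop bit) with LSB-first data transmission, which is the usual UART convention; it is a conceptual model, not driver code for any particular UART.

```python
def uart_frame(byte):
    """Frame one data byte as a list of line levels (idle high).

    8N1 assumed: 1 start bit (0), 8 data bits LSB first, 1 stop bit (1).
    """
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]

def uart_deframe(bits):
    """Recover the data byte from a 10-bit 8N1 frame, checking framing."""
    if len(bits) != 10 or bits[0] != 0 or bits[-1] != 1:
        raise ValueError("framing error: bad start or stop bit")
    byte = 0
    for i, b in enumerate(bits[1:9]):  # reassemble LSB first
        byte |= b << i
    return byte

frame = uart_frame(0x55)
print(frame)                      # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(hex(uart_deframe(frame)))   # 0x55
```

The transmitting UART performs the `uart_frame` step in hardware (parallel load, serial shift-out on Tx), and the receiving UART performs the `uart_deframe` step (serial shift-in on Rx, parallel output), which is exactly the parallel-to-serial-to-parallel flow described above.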