Monday 27 August 2012

EUV

Extreme UV:

1) Found on a semiconductor blog; it gives a good idea about device scaling and the associated issues.

EUV is the great hope for avoiding having to go to triple (and more) patterning if we have to stick with 193nm light. There were several presentations at Semicon about the status of EUV. Here I'll discuss the issues with EUV lithography and in a separate post discuss the issues about making masks for EUV.

It is probably worth being explicit and pointing out that the big advantage of EUV, if and when it works, is that it is a single-patterning technology (for the foreseeable future), with just one mask and one photolithography step per layer.

First up was Stephan Wurm, the director of litho for Sematech (it's their 25th anniversary this year, seems like only yesterday...). He talked about where EUV is today. Just a little background about why EUV is so difficult. First, at these wavelengths, the photons won't go through lenses or even air. So we have to switch from refractive optics (lenses) to reflective optics (mirrors) and put everything in a vacuum. The masks have to be reflective too, but I'll talk about that in the next blog. Obviously we need a different photoresist than we use for 193nm. And, most critically, we need a light source that generates EUV light, which is around 13.5nm wavelength, so by the time EUV is inserted into production it will already be close to the feature size (whereas we've got pretty good at making small features with long-wavelength light).

The status of the resist is that we now have chemically amplified resist (CAR) with adequate resolution for a 22nm half pitch (22nm lines with 22nm spaces), and it seems to be OK down to 15nm. A big issue is sensitivity: it takes too much light to expose the resist, which reduces throughput. We have had sensitivity problems in the past, but they were not as severe and were solved earlier in the technology cycle. Line width roughness (LWR) continues to be a problem and will need to be addressed with non-lithographic cleanup. Contact holes also continue to be a problem. Stephan discussed mask blank defect and yield issues but, as I said, that comes in the next blog.

Next up was Hans Meiling from ASML (with wads of Intel money sticking out of his back pocket). They have already shipped 6 NXE:3100 pre-production tools to customers so they can start doing technology development, and they have 7 NXE:3300 scanners being built.

You can't get EUV out of a substance in its normal state; you need a plasma. So you don't just plug in an EUV bulb like you do for visible light. You take little droplets of tin, zap them with a very high-powered CO2 laser, and get a brief flash of light. They have run sources like this for 5.5 hours continuously. It takes a power input of 30kW to get 30W of EUV light, so it is not the most efficient process.

Contamination of mirrors is one challenge: putting everything in a vacuum and using a metal plasma is, after all, how we deposit interconnect metal, and we certainly don't want to coat the mirrors with tin. ASML found problems with the collecting optics not staying clean after 10M pulses, which sounds like a lot until you realize it is about 1 week of operation in a fab running the machine continuously. They now have 3 or 4 times that lifetime, but there is clearly more progress to be made.

Reflectivity of the mirrors is a problem. These are not the sort of mirrors you have in your bathroom; they are Mo/Si multilayers that form a Bragg reflector, reflecting light through multilayer interference. Even with really good mirrors, only about 70% of the EUV light is reflected from each mirror, and since the optics require 8 or more mirrors to focus the light first on the mask and then on the wafer, very little of the light you start with (maybe 4%) ends up hitting the photoresist. Some of these mirrors are grazing-incidence mirrors, which bend the light gradually along their length, a bit like a pinball machine curving the path of the ball, and can be used to focus a beam.
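A rough back-of-the-envelope calculation shows how quickly the light is lost. The 70% per-mirror reflectivity is the figure quoted above; the exact mirror count varies by tool design, so this is only an illustration:

# Rough transmission estimate for an EUV optical train.
# Assumes ~70% reflectivity per mirror, as quoted above; real multilayer
# reflectivities and mirror counts vary by tool design.
reflectivity = 0.70

for mirrors in (8, 9, 10):
    transmission = reflectivity ** mirrors
    print(f"{mirrors} mirrors -> {transmission * 100:.1f}% of source EUV reaches the resist")

# 8 mirrors -> 5.8%, 9 -> 4.0%, 10 -> 2.8%: consistent with the ~4% figure above.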

Currently they are managing to get 5-7W and have demonstrated up to 30W. For high throughput the source needs to be 200W, so this is still something that seems out of reach from just tweaking the current technology.

The light source power issue is the biggest issue in building a high-volume EUV stepper. Intel is betting that a few billion dollars and ASML will solve it.

Double Patterning

Refer below:

1) http://www.techdesignforums.com/eda/guides/double-patterning/

2) Taken from a semiconductor blog, quoted below:
"Cadence has a new white paper out about the changes in IC design that are coming at 20nm. One thing is very clear: 20nm is not simply "more of the same". All design, from basic standard cells up to huge SoCs, faces several new challenges to go along with all the old ones that we had at 45nm and 28nm.

I should emphasize that the paper is really about the problems of 20nm design and not a sales pitch for Cadence. I could be wrong but I don't think it mentions a single Cadence tool. You don't need to be a Cadence customer to profit from reading it.

The biggest change, and the one that everyone has heard the most about, is double patterning. This means that for those layers that are double patterned (the fine-pitch ones), two masks are required. Half the polygons on the layer go on one mask and the other half on the other mask. The challenge is that no two patterns on the same mask can be too close to each other, so during design the tools need to ensure that it is always possible to divide the polygons into two such sets (for example, you can never have three polygons that are all at minimum distance from each other, since there is no way to split them legally into two masks). Since this is algorithmically a graph-coloring problem, this is often referred to as coloring the polygons.
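The coloring check itself is just a test for whether the conflict graph is two-colorable. Here is a minimal sketch, with hypothetical polygon names and spacing-conflict pairs; a real decomposition tool works on actual layout geometry and also handles stitching:

from collections import deque

def two_color(polygons, conflicts):
    """Try to assign each polygon to mask 0 or mask 1 so that no two
    polygons closer than the single-mask spacing limit share a mask.
    Returns a {polygon: mask} dict, or None if the conflict graph has
    an odd cycle (e.g. three mutually too-close polygons)."""
    adj = {p: [] for p in polygons}
    for a, b in conflicts:          # each pair is "too close for one mask"
        adj[a].append(b)
        adj[b].append(a)

    mask = {}
    for start in polygons:
        if start in mask:
            continue
        mask[start] = 0
        queue = deque([start])
        while queue:
            p = queue.popleft()
            for q in adj[p]:
                if q not in mask:
                    mask[q] = 1 - mask[p]   # opposite mask from its neighbor
                    queue.append(q)
                elif mask[q] == mask[p]:
                    return None             # odd cycle: not double-pattern legal
    return mask

# Three polygons that are all mutually at minimum spacing cannot be split:
print(two_color(["A", "B", "C"], [("A", "B"), ("B", "C"), ("A", "C")]))  # None
# Remove one conflict and a legal two-mask assignment exists:
print(two_color(["A", "B", "C"], [("A", "B"), ("B", "C")]))  # {'A': 0, 'B': 1, 'C': 0}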



Place and route obviously needs to be double pattern aware and not create routing structures that are not manufacturable. Less obvious is that even if standard cells are double pattern legal, when they are placed next to each other they may cause issues between polygons internal to the cells.

Some layers at 20nm will require 3 masks, two to lay down the double patterned grid and then a third "cut mask" to split up some of the patterns in a way that wouldn't have been possible to manufacture otherwise.

Another issue with double patterning is that most patterning is not self-aligned, meaning that there is variation between polygons on the two masks that is greater than the variation between two polygons on the same mask (which are self-aligned by definition). This means that verification tools need to be aware of the patterning and, in some cases, designers need to be given tools to assign polygons to masks where it is important that they end up on the same mask.



Design rules at 20nm are incredibly complicated. Cadence reckons that of 5,000 design rules, only 30-40 are for double patterning. There are layout direction and orientation rules, and even voltage-dependent design rules. Early experience of people I've talked to is that the design rules are now beyond human comprehension and you need to have DRC running essentially continuously while doing layout.

The other big issue with 20nm is layout-dependent effects (LDEs). The performance of a transistor or a gate no longer depends just on its layout in isolation but also on what is near it. Almost every line on the layout, such as the edge of a well, has some non-local effect on the silicon, causing performance changes in active areas nearby. At 20nm the performance can vary by as much as 30% depending on the layout context.

A major cause of LDEs is mechanical stress. Traditionally this was addressed by guardbanding critical paths but at 20nm this will cause too much performance loss and instead all physical design and analysis tools will need to be LDE aware.

Of course, in addition to these two big new issues (double patterning, LDEs), there are all the old issues that just get worse: design complexity, clock tree synthesis, and so on.

Based on numbers from Handel Jones's IBS, 20nm fabs will cost from $4-7B (depending on capacity), process R&D will be $2.1-3B on top of that, and mask costs will range from $5-8M per design. And design costs: $120-500M. You'd better want a lot of die when you get to production."



SSTA

Product binning

Product binning can be done based on many operational parameters such as voltage, frequency, disabled IPs, operating temperature, etc. The purpose is to increase the usable yield. The links below give a basic idea about product binning:

http://en.wikipedia.org/wiki/Product_binning

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6212859&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6212859
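A toy illustration of frequency (speed) binning follows; the bin thresholds and measured fmax values are made up, and real binning uses tester data and many more parameters:

# Hypothetical speed-binning of tested dies by measured maximum frequency (fmax).
# Thresholds and measurements are illustrative only.
speed_bins = [(3.2, "3.2 GHz part"), (2.8, "2.8 GHz part"), (2.4, "2.4 GHz part")]

def assign_bin(fmax_ghz):
    for threshold, label in speed_bins:        # highest bin first
        if fmax_ghz >= threshold:
            return label
    return "reject / salvage (e.g. disable IPs, sell as a lower-end SKU)"

for die, fmax in {"die1": 3.31, "die2": 2.95, "die3": 2.41, "die4": 2.1}.items():
    print(die, "->", assign_bin(fmax))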

Wednesday 22 August 2012

Logic Synthesis

What is Logic Synthesis?


“Logic synthesis is the process of converting a high-level description of the design into an optimized gate-level representation, given a standard cell library and certain design constraints.”

Why Perform Logic Synthesis?


1. Automatically manages many details of the design process:

• Fewer bugs

• Improves productivity 



2.  Abstracts the design data (HDL description) from any particular implementation technology

• Designs can be re-synthesized targeting different chip technologies;

E.g.: first implement in FPGA then later in ASIC


3. In some cases, leads to a more optimal design than could be achieved by manual means (e.g. logic optimization)


Logic Synthesis Flow: RTL TO GATES





RTL description:

The design is written at a high level using RTL constructs.


Translation:

The synthesis tool converts the RTL description to an un-optimized internal representation (Boolean form).


Un-optimized Intermediate Representation:

The design is held internally by the logic synthesis tool in terms of internal data structures.


Logic Optimization:

The logic is optimized to remove redundant logic (a small example follows below).
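As a small illustration of redundancy removal (not any particular tool's algorithm): the expression a·b + a·b' is equivalent to just a, and an exhaustive truth-table check confirms the simplification:

from itertools import product

# Redundant form and its optimized equivalent:  a&b | a&~b  ==  a
original  = lambda a, b: (a and b) or (a and not b)
optimized = lambda a, b: a

# Exhaustive equivalence check over all input combinations.
assert all(bool(original(a, b)) == bool(optimized(a, b))
           for a, b in product([False, True], repeat=2))
print("a&b | a&~b simplifies to a")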

Technology Mapping and Optimization:

The synthesis tool takes the internal representation and implements it in gates, using the cells provided in the technology library.



Technology Library:

It contains library cells that can be basic gates or macro cells.


The cell description contains information about the following:


• Functionality of the cell.

• Area of the cell layout.

• Timing information about the cell.

• Power information about the cell.





Design Constraints:

1. Area:

  • The designer can specify an area constraint and the synthesis tool will optimize for minimum area.
  • Area can be optimized by using fewer cells and by replacing multiple cells with a single cell that combines their functionality.

2. Timing:

  • The designer specifies the maximum delay between a primary input and a primary output (see the small slack-calculation sketch after this constraints list).
There are four types of critical paths:
I. Path from a primary input to a primary output.
II. Path from a primary input to a register.
III. Path from a register to a primary output.
IV. Path from a register to another register.


3. Power:

  • The growth of hand-held devices has led to smaller batteries and hence a need for low-power systems.
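As a rough sketch of how the timing constraint in item 2 above is checked, the gate delays and the constraint value below are hypothetical; a real static timing analyzer also handles interconnect delay, clocks, and derates:

# Hypothetical input-to-output path: list of (cell, delay_ns) pairs.
path = [("INV_X1", 0.12), ("NAND2_X1", 0.18), ("NOR2_X1", 0.20), ("BUF_X2", 0.15)]
max_delay_constraint = 0.60   # ns, set by the designer on this input -> output path

arrival = sum(delay for _, delay in path)
slack = max_delay_constraint - arrival
print(f"arrival = {arrival:.2f} ns, slack = {slack:+.2f} ns",
      "(met)" if slack >= 0 else "(violated)")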


Points to note about synthesis 



  • For very big circuits, vendor technology libraries may yield non-optimal results.
  • Translation, logic optimization and technology mapping are done internally in the logic synthesis tool and are not visible to the designer.
  • The timing analyzer built into the synthesis tool has to account for interconnect delays in the total delay calculation.


Saturday 18 August 2012

Timing Optimization Techniques


Timing Optimization Techniques are as follows:

1. Mapping:

Mapping converts primitive logic cells found in a netlist to technology-specific logic gates from the library, on the timing-critical paths.

2. Unmapping:
Unmapping converts the technology-specific logic gates in the netlist back to primitive logic gates on the timing-critical paths.

3. Pin Swapping:
Pin-swapping optimization examines the slacks on the inputs of gates on the worst timing paths and improves timing by swapping the nets attached to the input pins, so that the net with the least slack is put on the fastest path through the gate, without changing the logic function.
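A simplified sketch of the idea follows; the pin-to-output delays and net arrival times are hypothetical, and a real tool also verifies that the function is preserved and re-checks timing after the swap:

# Hypothetical 3-input gate: per-pin delay to the output, and arrival time of
# the net currently available to drive each pin.
pin_delay   = {"A1": 0.05, "A2": 0.08, "A3": 0.12}   # A1 is the fastest pin
net_arrival = {"n_late": 0.90, "n_mid": 0.60, "n_early": 0.30}

# Put the latest-arriving (most critical) net on the fastest pin, and so on.
pins_fast_first = sorted(pin_delay, key=pin_delay.get)
nets_late_first = sorted(net_arrival, key=net_arrival.get, reverse=True)
assignment = dict(zip(pins_fast_first, nets_late_first))

for pin, net in assignment.items():
    print(f"{net} -> {pin}, output arrival {net_arrival[net] + pin_delay[pin]:.2f} ns")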

4. Buffering:

Buffers are inserted in the design to drive a load that is too large for a logic cell to drive efficiently.
If a net is too long, it is broken up and buffers are inserted to improve the transition, which ultimately improves timing on the data path and reduces setup violations.
To reduce hold violations, buffers are inserted to add delay on data paths.

5. Cell Sizing:

Cell sizing is the process of assigning a drive strength for a specific cell in the library to a cell instance in the design. If there is a low-drive-strength cell on the timing-critical path, it is replaced by a higher-drive-strength cell to reduce the timing violation.

6. Cloning:

Cell cloning is a method of optimization that decreases the load on a very heavily loaded cell by replicating it. Replication is done by connecting an identical cell to the same inputs as the original cell, dividing the fanout load between the two and thereby improving timing.

7. Logic Restructuring:

Logic restructuring means rearranging logic to meet timing constraints on the critical paths of the design.

Advanced OCV

What is Advanced OCV -

AOCV uses context-specific derating instead of a single global derate value, thus reducing excessive design margins and leading to fewer timing violations. This represents a more realistic and practical method of margining, alleviating the concerns of overdesign, reduced design performance, and longer timing-closure cycles.
Advanced OCV determines derate values as a function of logic depth and/or cell and net location. These two variables provide further granularity to the margining methodology by determining how much a specific path in a design is impacted by process variation.

There are two kinds of variations.
1) Random Variation
2) Systematic Variation

Random Variation-
Random variation is proportional to the logic depth of each path being analyzed.
The random component of variation occurs lot-to-lot, wafer-to-wafer, die-to-die, and on-die. Examples of random variation are variations in gate-oxide thickness, implant doses, and metal or dielectric thickness.


Systematic Variation-
Systematic variation is proportional to the cell location of the path being analyzed.
The systematic component of variation is predicted from the location on the wafer or the nature of the surrounding patterns. These variations relate to proximity effects, density effects, and the relative distance of devices. Examples of systematic variation are variations in gate length or width and interconnect width.

Take the example of random variation: given the buffer chain shown in Figure 1, with a nominal cell delay of 20, the nominal path delay at stage N is N * 20. In a traditional OCV approach, timing derates are applied to scale the path delay by a fixed percentage, e.g. set_timing_derate -late 1.2; set_timing_derate -early 0.8.


Figure 1: Depth-Based Statistical Analysis
Statistical analysis shows that the random variation is smaller for deeper timing paths, because not all cells are simultaneously fast or slow. Using statistical HSPICE models, Monte Carlo analysis can be performed to measure the delay variation accurately at each stage. Advanced OCV derate factors can then be computed as a function of cell depth, to apply accurate, less pessimistic margins to the path.
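A toy comparison of the flat derate with a depth-dependent one, using the buffer chain above: the depth-indexed derate values here are made up for illustration, whereas real AOCV tables come from Monte Carlo and silicon characterization.

nominal_cell_delay = 20.0   # from the Figure 1 buffer chain

# Flat OCV: one late derate for every path, regardless of depth.
flat_late_derate = 1.2

# Hypothetical AOCV table: late derate as a function of path depth.
# Deeper paths see proportionally less random variation, so the derate shrinks.
aocv_late_derate = {1: 1.20, 2: 1.14, 4: 1.10, 8: 1.07, 16: 1.05}

def late_derate(depth):
    # Use the entry for the largest tabulated depth <= path depth.
    key = max(d for d in aocv_late_derate if d <= depth)
    return aocv_late_derate[key]

for depth in (1, 2, 4, 8, 16):
    nominal = depth * nominal_cell_delay
    print(f"depth {depth:2d}: nominal {nominal:6.1f}  "
          f"flat OCV {nominal * flat_late_derate:6.1f}  "
          f"AOCV {nominal * late_derate(depth):6.1f}")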

Figure 2a shows an example of how PrimeTime Advanced OCV would determine the path depth for both launch and capture. These values index the derate table to select the appropriate derate values.
                                  Fig 2a-Depth Based Advanced OCV


Analysis of systematic variation shows that paths composed of cells in close proximity exhibit less variation relative to one another. Using silicon data from test chips, Advanced OCV derate factors based on relative cell location are then applied to further improve accuracy and reduce pessimism on the path. Advanced OCV computes the length of the diagonal of the bounding box of the path's cells, as shown in Figure 2b, to select the appropriate derate value from the table.
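A minimal sketch of the distance metric follows; the cell coordinates and the distance-indexed derates are hypothetical, and a real distance-based derate table would again come from silicon data:

from math import hypot

# Hypothetical placement (x, y) in microns of the cells on one timing path.
cell_locations = [(10.0, 12.0), (14.0, 15.0), (22.0, 18.0), (25.0, 30.0)]

xs, ys = zip(*cell_locations)
diagonal = hypot(max(xs) - min(xs), max(ys) - min(ys))   # bounding-box diagonal

# Illustrative distance-indexed late derates: tighter paths get smaller margins.
distance_late_derate = {50.0: 1.05, 200.0: 1.08, 1000.0: 1.12}
derate = 1.15   # fallback for very spread-out paths (illustrative)
for d, v in sorted(distance_late_derate.items()):
    if diagonal <= d:
        derate = v
        break
print(f"bounding-box diagonal = {diagonal:.1f} um -> late derate {derate}")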


Fig 2b - Distance-Based Advanced OCV



PrimeTime Advanced OCV Flow -
PrimeTime internally computes depth and distance metrics for every cell arc and net arc in the design. It picks conservative values of depth and distance, thus bounding the worst-case path through a cell.

Fig-3