6. CMOS
Having explored the powerful combinational device abstraction as a model for our logical building blocks, we turn to the search for a practical technology whose realistic implementations closely match our model. Our technology wish list includes:
- High gain and nonlinearity, as discussed in Section 5.6, to maximize noise immunity.
- Low power consumption. Some power will be used as changing signal levels cause current to flow into and out of parasitic capacitances, but an ideal technology will consume no power when signals are static.
- High speed. Of course, we'd like our devices to maximize performance (implying that $t_{PD}$ propagation delays be minimized and physical sizes be small).
- Low cost.
One technology, CMOS (for Complementary Metal-Oxide Semiconductor), has emerged as the logic technology of choice, getting high marks in each dimension of our wish list. We, like most of industry, will focus on CMOS as our choice for implementing logic devices.
6.1. MOSFETs
The nonlinear circuit element in CMOS is the MOSFET or Metal Oxide Semiconductor Field Effect Transistor, whose physical structure is sketched below.
MOSFET physical layout
An important feature of MOSFETs is that no steady-state current flows between the gate terminal and the source and drain: the gate behaves as if it is coupled to the other terminals through a capacitor. Current flows into and out of the gate terminal only when the gate voltage changes, and then only until the charge on the gate reaches an equilibrium that matches the voltage drop. This allows MOSFETs to draw no gate current while quiescent (i.e., when logic values are constant); if we design our CMOS circuits so that the source and drain currents of quiescent circuits are zero as well, we can achieve the goal of zero quiescent power dissipation. We will shortly adopt a cookbook formula that offers this major power advantage.
6.1.1. NFETs and PFETs
A key to CMOS circuit design is a manufacturing technology that allows fabrication of two different types of MOSFET on a single silicon surface. The two flavors of transistor are the N-channel and P-channel FET, which perform complementary functions in CMOS devices (hence the C in CMOS). Each FET is a four-terminal device; however, our stylized use of NFETs and PFETs involves connecting the $B$ (Bulk) terminal to ground or $V_{DD}$ respectively, and using them as three-terminal circuit components.

NFET switching behavior

PFET switching behavior
6.1.2. Transistor Sizing
Design parameters for each transistor include its physical dimensions, which affect its current- and power-handling capacity. It is common to parameterize FETs by the width and length of the channel between the source and drain, in scaled distance units: length is the distance between source and drain, while width is the length of the channel/source and channel/drain boundaries. Of particular interest is the ratio of channel width to length, which determines the "on" resistance and hence the current-carrying capacity of the source-drain connection when the transistor is "on". In general, the drain-source current $I_{DS}$ is proportional to the $Width/Length$ ratio.

While we will largely ignore transistor sizing issues in our subsequent circuits, this parameter can play an important role in the optimization of performance and energy consumption of a digital circuit. A device whose output drives a long wire or heavy capacitive load, for example, may benefit from tuning of transistor sizes to provide higher-current output.
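The proportionality between drive current and the $Width/Length$ ratio can be sketched numerically. Here is a minimal Python sketch assuming a simplified square-law transistor model; the process constant and all numeric values are hypothetical, not parameters of any real fabrication process.

```python
# Illustrative sketch: drive current scales with W/L (hence "on" resistance
# scales with L/W). The process constant K_PRIME below is a made-up assumption.

K_PRIME = 100e-6  # A/V^2, hypothetical process transconductance parameter

def drive_current(width, length, v_overdrive):
    """Saturation drain current, proportional to W/L (simplified square-law model)."""
    return 0.5 * K_PRIME * (width / length) * v_overdrive ** 2

# Doubling the channel width doubles the available current
# (equivalently, halves the "on" resistance):
i1 = drive_current(width=2.0, length=1.0, v_overdrive=0.5)
i2 = drive_current(width=4.0, length=1.0, v_overdrive=0.5)
assert abs(i2 / i1 - 2.0) < 1e-12
```

This is the sense in which a designer "sizes up" transistors driving long wires: a larger $W/L$ supplies more current to charge the load capacitance, at the cost of area and input capacitance.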
6.2. Pullup and Pulldown Circuits
Each of the NFET and PFET devices behaves as a voltage-controlled switch, enabling or disabling a connection between its source and drain terminals depending on whether its gate voltage is above or below a fixed threshold. Our intention is to select logic mapping parameters as described in Section 5.4.1 that place the FET switching thresholds reliably in the forbidden zone, so that valid logic levels applied to gate terminals of NFETs and PFETs will open or close their source/drain connections. By using logic levels to open and close networks of switches, we can selectively drive output terminals of logic devices to ground ($0$ volts) or $V_{DD}$ (power supply voltage), which represent maximally valid 0s and 1s respectively.

Pullup/Pulldown circuits
NFETs are well suited for use in pulldown circuits since they can connect the output terminal to ground potential via a small "on" resistance but without a threshold voltage drop between the output voltage and ground; for similar reasons, PFETs are a nearly ideal choice for use in pullup circuits. Consequently CMOS combinational logic devices (commonly called "gates", not to be confused with the gate terminal of a MOSFET) use only PFETs in pullup circuits and NFETs in pulldowns. The main cleverness required in the design of a CMOS gate is to come up with pullup and pulldown circuits that are properly complementary -- that is, to design the circuits so that for every combination of logical inputs, either the pullup or the pulldown is active but not both.

CMOS
Inverter

CMOS Inverter VTC
Recall that the voltage transfer curve represents an equilibrium output voltage for each input voltage.

VTC Test Setup
6.2.1. Series and Parallel Connections
Pullup and pulldown circuits for logic functions of multiple inputs involve configuring series and parallel connections of PFETs (in pullups) or NFETs (in pulldowns), again with each gate terminal connected to one logical input.

A quick review of parallel and series connection of "switching" circuits:
![]() |
Assume that the notation shown to the left represents a circuit that conducts when some circumstance we call $A$ is true, and otherwise presents an open circuit between its two terminals. |
![]() |
Then the series circuit shown to the left conducts only when both $A$ and $B$ are true: if either $A$ or $B$ is false, the corresponding circuit will open and no current will flow. This allows us to effect a logical AND between the conditions $A$ and $B$. |
![]() |
Similarly the parallel circuit shown to the left represents a circuit that conducts when either $A$ or $B$, or both $A$ and $B$, are true. This effects a logical OR between the conditions $A$ and $B$. |
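The series-is-AND and parallel-is-OR correspondence above can be captured in a few lines of Python, modeling each switch as a Boolean condition (the representation is our own illustrative choice):

```python
# Minimal model of switch networks: each switch conducts when its condition is true.

def series(a, b):
    """A series connection conducts only when both switches conduct (logical AND)."""
    return a and b

def parallel(a, b):
    """A parallel connection conducts when either switch conducts (logical OR)."""
    return a or b

# Check the correspondence over all input combinations:
for a in (False, True):
    for b in (False, True):
        assert series(a, b) == (a and b)
        assert parallel(a, b) == (a or b)
```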
6.2.2. Complementary circuits
The remaining ingredient in our CMOS formula is the requirement that we connect each output to complementary pullups and pulldowns -- that is, that the pullup on an output is active on exactly those combinations of input values for which the pulldown on that output is not active.

Circuit | Complement | Description |
---|---|---|
![]() |
![]() |
Notation: The bar over the condition A indicates a circuit that is active when $A$ is not true; thus the two circuits to the left are complementary. |
![]() |
![]() |
Given a series connection of circuits active on $A$ and $B$, we can construct its complement by the parallel connection of complementary circuits $\bar A$ and $\bar B$. |
![]() |
![]() |
Given a parallel connection of circuits active on $A$ and $B$, we can construct its complement by the series connection of complementary circuits $\bar A$ and $\bar B$. |
![]() |
![]() |
Finally, we observe that an NFET connected to an input $A$ is the complement of a PFET connected to the same input: the NFET presents a closed circuit when $A$ is a logical 1, while the PFET closes when $A=0$ as we observed in the CMOS inverter. |
Because of the electrical characteristics of PFETs and NFETs, CMOS gates use only PFETs in pullup circuits and NFETs in pulldown circuits. Thus each output of a CMOS gate is connected to a pullup circuit containing series/parallel connected PFETs, as well as a complementary pulldown circuit containing parallel/series connected NFETs.
It is worth observing that
- For every PFET-based pullup circuit, there is a complementary NFET-based pulldown circuit and vice versa.
- Given a pullup or pulldown circuit, one can mechanically construct the complementary pulldown/pullup circuit by systematically replacing series with parallel connections, parallel with series connections, PFETs with NFETs, and NFETs with PFETs. The resulting circuits are duals of one another, and are active under complementary input combinations.
- Our restriction to PFETs in pullups and NFETs in pulldowns makes single CMOS gates naturally inverting: a logical 1 on an input can only turn on an NFET and turn off a PFET. This limits the set of functions we can implement as single CMOS gates: certain functions require multiple CMOS gates in their implementation.
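The mechanical dual construction described in the second bullet can be sketched in Python, representing a network as a nested tuple (an illustrative encoding of our own, not a standard netlist format):

```python
# Sketch: represent a pullup/pulldown network as a nested tuple and construct its
# dual by swapping series <-> parallel and NFET <-> PFET.

def dual(net):
    """Return the complementary network: series<->parallel, 'nfet'<->'pfet'."""
    kind = net[0]
    if kind in ('nfet', 'pfet'):
        other = 'pfet' if kind == 'nfet' else 'nfet'
        return (other, net[1])          # the same input gates the complementary FET
    swap = 'parallel' if kind == 'series' else 'series'
    return (swap,) + tuple(dual(sub) for sub in net[1:])

# NAND pulldown: NFETs gated by A and B in series; its dual is the NAND pullup:
pulldown = ('series', ('nfet', 'A'), ('nfet', 'B'))
pullup = dual(pulldown)
# pullup == ('parallel', ('pfet', 'A'), ('pfet', 'B'))
```

Note that applying `dual` twice returns the original network, reflecting the fact that the pullup and pulldown are duals of one another.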
6.3. The CMOS Gate Recipe

In the typical CMOS gate,
- The pullup and pulldown circuits are duals of each other, in the sense described in Section 6.2.2.
- Each input connects to one or more PFETs in the pullup, and to an equal number of NFETs in the dual pulldown circuit. An exception is the degenerate case of a CMOS gate whose output is independent of one or more inputs; in this case, the ignored inputs are not connected to anything.
- For the above reasons, a single CMOS gate typically comprises an even number of transistors, with equal numbers of NFETs and PFETs.
In general, we can approach the design of a CMOS gate for some simple function F by either
- Designing a pulldown circuit that is active for those input combinations for which $F=0$, or
- Designing a pullup circuit that is active for those input combinations for which $F=1$
6.3.1. Common 2-input gates

NAND gate
From the NAND truth table, we observe that the gate output should be zero if and only if both input values are 1, dictating a pulldown that is active (conducting) when the A and B inputs both carry a high voltage.

CMOS NAND gate

NOR gate

CMOS NOR gate
The CMOS implementation of a 2-input NAND gate can be easily extended to NAND of 3, 4, or more inputs by simply extending the series pulldown chain and parallel pullup with additional FETs whose gate terminals connect to the additional logical inputs. Similarly, the 2-input NOR gate can be extended to 3 or more inputs by adding an NFET/PFET pair for each additional logical input. Electrical considerations (such as delays due to parasitics) usually limit the number of inputs of a single CMOS gate (the so-called fan-in) to something like 4 or 5; beyond this size, a circuit consisting of several CMOS gates is likely to offer better cost/performance characteristics.
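As a sanity check on the recipe, a short Python sketch models k-input NAND and NOR by evaluating their pulldown and pullup networks and confirming that exactly one of the two is active for every input combination (the modeling style is our own, not a standard simulator):

```python
# Sketch: evaluate a CMOS gate from its pullup and pulldown networks.
# The output is 1 when the pullup conducts, 0 when the pulldown conducts.

def nand(*inputs):
    """k-input NAND: the series NFET pulldown conducts only when all inputs are 1."""
    pulldown_on = all(inputs)                 # series chain of NFETs
    pullup_on = any(v == 0 for v in inputs)   # parallel PFETs
    assert pulldown_on != pullup_on           # complementary: exactly one is active
    return 0 if pulldown_on else 1

def nor(*inputs):
    """k-input NOR: the parallel NFET pulldown conducts when any input is 1."""
    pulldown_on = any(inputs)                 # parallel NFETs
    pullup_on = all(v == 0 for v in inputs)   # series chain of PFETs
    assert pulldown_on != pullup_on
    return 0 if pulldown_on else 1

assert nand(1, 1, 1) == 0 and nand(1, 0, 1) == 1
assert nor(0, 0, 0) == 1 and nor(0, 1, 0) == 0
```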
6.4. Properties of CMOS gates
The CMOS gate recipe of an output node connected to complementary PFET-based pullup and NFET-based pulldown circuits offers a nearly ideal realization of the combinational device abstraction, since
- For each input combination, it will drive its output to an equilibrium voltage of 0 (ground) or $V_{DD}$, generating ideal representations of logical $0$ or $1$.
- At equilibrium, the FETs used in the pullup/pulldown circuits draw zero gate current. This implies that CMOS gates draw zero input current once parasitic capacitances have been charged to equilibrium; hence CMOS circuits have zero static power dissipation.
6.4.1. CMOS gates are inverting
It may seem peculiar that the example CMOS gates chosen for Section 6.3.1 include NAND and NOR but not the conceptually simpler logical operations AND and OR. In fact, this choice reflects a fundamental constraint of CMOS: only certain logical functions -- inverting functions -- can be implemented as a single CMOS gate. Other functions may be implemented using CMOS, but they require interconnections of multiple CMOS gates.

To understand this restriction, consider the effect of an input to a CMOS gate on the voltage on its output node. A logical $0$ -- a ground potential -- can only affect the output by turning off NFETs in the pulldown and turning on PFETs in the pullup, forcing the output to a logical $1$. Conversely, an input $1$ can only affect the output voltage by driving it to a logical $0$. This constraint is consistent, for example, with a k-input NOR operation, where a $1$ on any input forces the output to become $0$; however, it cannot represent a logical OR, where a $1$ on any input must force the output to become a $1$. Thus we can implement k-input NOR as a single CMOS gate, but to implement k-input OR we use a k-input NOR followed by an inverter.
We can determine whether a particular function $F$ can be implemented as a single CMOS gate by examining pairs of rows of its truth table that differ in only one input value. Changing an input from $0$ to $1$ can only effect a $1$-to-$0$ change on the output of a CMOS gate, or may leave the output unchanged; it cannot cause a $0$-to-$1$ output change. Nor can a $1$-to-$0$ input change cause a $1$-to-$0$ output change. Thus the observation that $F(0,1,1)=1$ but $F(0,0,1)=0$ causes us to conclude that the function $F$ cannot be implemented as a single CMOS gate.
To generalize slightly, suppose that we know that some 3-input function $F$ is implemented as a single CMOS gate, and that $F(0,1,1)=1$. Since changing either of the input $1$s to $0$ can only force the output to become $1$, that implies $F(0,x,y)=1$ for every choice of $x$ and $y$, an observation that we might denote as $F(0,*,*)=1$ using $*$ to denote an unspecified "don't care" value. Similarly, knowing that $F(1,0,1)=0$ implies that $F(1,*,1)=0$ since changing the second argument from $0$ to $1$ can only force the already-$0$ output to $0$.
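The truth-table test described above can be mechanized: a function is implementable as a single CMOS gate only if raising a single input from $0$ to $1$ never raises the output from $0$ to $1$. A Python sketch of this check follows (the function name is our own):

```python
from itertools import product

def single_gate_possible(f, n):
    """Return True if f (an n-input 0/1 function) could be a single CMOS gate:
    raising one input from 0 to 1 must never raise the output from 0 to 1."""
    for bits in product((0, 1), repeat=n):
        for i in range(n):
            if bits[i] == 0:
                raised = bits[:i] + (1,) + bits[i + 1:]
                if f(*bits) == 0 and f(*raised) == 1:
                    return False   # a 0->1 input change caused a 0->1 output change
    return True

nor2 = lambda a, b: 0 if (a or b) else 1
or2 = lambda a, b: 1 if (a or b) else 0
assert single_gate_possible(nor2, 2)       # NOR is inverting: a single gate suffices
assert not single_gate_possible(or2, 2)    # OR requires multiple CMOS gates
```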
Knowing that the output of a CMOS gate is $0$ for some particular set of input values assures us that the $1$s among its inputs are turning on a set of NFETs in its pulldown that connects its output node to ground: its output will be zero so long as these input $1$s persist, independently of the values on the other inputs. Similarly, knowing that a CMOS gate output is $1$ for some input combination assures us that the $0$s among the inputs enable a pullup path between the output and $V_{DD}$, independently of the other inputs.
Taking these observations to the extreme, a CMOS gate whose output is $1$ when all inputs are $1$ can only be the degenerate gate whose output is always $1$ independently of its inputs, and similarly for the gate whose output is $0$ given only $0$ input values.
6.4.2. CMOS gates are lenient
We can take the observations of the previous section one step further to show that CMOS gates are lenient combinational devices as described in Section 5.9.2. Recall that lenience implies the additional guarantee of output validity when a subset of the inputs have been stable and valid for $t_{PD}$, so long as that input subset is sufficient to determine the output value. For example, if the truth table of a 3-input lenient gate $G$ specifies that $G(0, 0, 1) = 1 = G(0, 1, 1)$, which we abbreviate as $G(0, *, 1) = 1$, then $G$'s lenience assures that its output will be $1$ whenever its first input has been $0$ and its third input has been $1$ for at least $t_{PD}$, independently of the voltage (or validity) of its second input.

If $G$ is implemented as a single CMOS gate, this lenience property follows automatically. In our example, the $G(0, *, 1) = 1$ property of the truth table dictates a pullup with a path that connects the output to $V_{DD}$ whenever the first input is $0$, viz. a PFET gated by the first input between the output node and $V_{DD}$. This connection is independent of the second input as dictated by the truth table, and is in fact independent of the third input, since PFET paths can only be turned on by $0$ inputs. Because of the complementary nature of the pulldown circuitry, a $0$ value on $G$'s first input must turn off an NFET in every path between the output node and ground, disabling the pulldown. Thus a $0$ on the first input electrically connects the output node to $V_{DD}$ independently of the voltages on the other two inputs: they may represent valid $0$, valid $1$, or be invalid (in the forbidden zone).
An analogous argument can be made for a CMOS gate $G$ whose value is $0$ (rather than $1$) for some subset of input values, e.g. if $G(0, *, 1) = 0$. In this case, $G$'s output must be connected to ground via an NFET in the pulldown gated by the third input, and that connection is assured by the $(*, *, 1)$ input pattern independently of the voltages on the first two inputs.

NOR gate

NOR glitch


Lenient NOR timing
Generalizing, if we consider the various paths through the pullup and pulldown circuits of a CMOS gate, we can systematically construct the rows of a lenient truth table (containing don't-care inputs, written as $*$). Each path between the output and ground through the pulldown circuit determines a set of inputs (those gating the NFETs along the path) capable of forcing the output to $0$; similarly, each path through the pullup circuit determines a set of inputs capable of forcing a $1$ output. Whenever the inputs along a pullup path are all $0$ the gate output will be $1$, and whenever the inputs along a pulldown path are all $1$ the gate will output a $0$. Each of the pullup paths corresponds to a truth table line whose inputs are $0$s and $*$s and whose output is $1$; each of the pulldown paths corresponds to a line with $1$s and $*$s as inputs and a $0$ output. In general, the behavior of every single CMOS gate can be described by a set of such rules, and conforms to our definition of a lenient combinational device.
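This path-to-rule construction can be sketched for the 2-input NOR gate, whose pulldown has two single-NFET paths and whose pullup is a single two-PFET series path (the data representation here is an illustrative assumption of our own):

```python
# Sketch: derive the lenient truth-table rows of a 2-input NOR gate from its
# pullup and pulldown paths, writing don't-care inputs as '*'.

INPUTS = ('A', 'B')
pulldown_paths = [{'A'}, {'B'}]      # parallel NFETs: either input alone grounds the output
pullup_paths = [{'A', 'B'}]          # series PFETs: both inputs must be 0 to pull up

def row(path, level):
    """One lenient rule: inputs along the path forced to `level`, others don't-care."""
    return tuple(str(level) if name in path else '*' for name in INPUTS)

# Pulldown paths give rows with 1s/*s and output 0; pullup paths give 0s/*s and output 1.
rules = [(row(p, 1), 0) for p in pulldown_paths] + \
        [(row(p, 0), 1) for p in pullup_paths]
# rules == [(('1', '*'), 0), (('*', '1'), 0), (('0', '0'), 1)]
```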
It is important to note that, while every single CMOS gate is lenient, combinational devices constructed as circuits whose components are CMOS gates are not necessarily lenient. This topic will be revisited in Section 7.7.1.
6.4.3. CMOS gate timing
As combinational devices, the timing of a CMOS gate is characterized by two parameters: its propagation delay $t_{PD}$, an upper bound on the time taken to produce valid outputs given valid inputs, and its contamination delay $t_{CD}$, a lower bound on the time elapsed between an input becoming invalid and the consequent output invalidity. The physical basis for these delays is the time required to change voltage on circuit nodes due to capacitance, including the unavoidable parasitic capacitance discussed in Section 4.3.1.


6.4.4. Delay specifications
The combinational device abstraction of Chapter 5, based as it is on the idealized circuit-theoretic model of components connected by equipotential nodes, requires that we bundle the timing behavior discussed in the prior section into the $t_{PD}$ and $t_{CD}$ timing specifications of the devices driving each digital output node.

The contamination delay $t_{CD}$ specifies a lower bound on the time a previously-valid output will remain valid after an input change; we choose this parameter to conservatively reflect the minimum interval required for an input change to propagate to affected transistors and begin their turn-on/turn-off transition. Often we specify contamination delay as zero, which is always a safe bound; in certain critical situations, we choose a conservative non-zero value to guarantee a small delay between input and output invalidity.
Since the propagation delay $t_{PD}$ specifies an upper bound on the time between an input stimulus and a valid output level, it should be chosen to conservatively reflect the maximum delay (including transistor switching times and output rise/fall times) we anticipate. We might make this selection after testing many samples under a variety of output loading and temperature circumstances, extrapolating our worst-case observations to come up with a conservative $t_{PD}$ specification.
The $t_{PD}$/$t_{CD}$ timing model gives our digital abstraction simple and powerful analysis tools, including the ability to define provably reliable combinational circuits (the subject of Section 5.3.2) as well as sequential logic (Chapter 8). However, it squarely confronts the major dilemma previewed in Section 4.5.2: the abstraction of space from the circuit-theoretic model of signal timing. In practice, the timing of the output of a CMOS gate depends not only on intrinsic properties of the gate itself, but also on the electrical load placed on its output by connected devices and wiring. The delay between an input change to an inverter and its consequent valid output cannot realistically be bounded by an inverter-specific constant: it depends on the length and routing of connected wires and other device inputs. The fact that wire lengths and routing are determined late in the implementation process distinguishes these factors as implementation details rather than properties of a specified digital circuit, a fact that simply violates the constraints of our digital circuit abstraction.
Fortunately, the properties of CMOS as an implementation technology offer opportunities to reach a workable compromise between the simplicity and power of our propagation-delay timing model and the realistic physics of digital systems. In particular, the fact that CMOS devices draw no steady-state input current implies that the output of a CMOS gate will eventually reach its valid output value; it's simply not practical to bound the time this will take without consideration of wiring and loading details. Thus we can design circuits using "nominal" $t_{PD}$ specifications -- chosen assuming light loading -- that will operate properly but whose overall timing may turn out to be optimistic once wiring is taken into account. In Section 8.3.3, we describe an engineering discipline for designing systems with a single timing parameter -- a clock period -- that controls the tradeoff between computational performance and the amount of time allowed for signals on every circuit node to settle to their target values. Systems we design using this discipline will be guaranteed to work for some (sufficiently slow) setting of this parameter, as sketched in Section 4.5.2.1.
We will revisit this issue in later chapters. For now, we will assign nominal propagation delays to devices, and design and analyze our circuits using the fictitious promise made on their behalf by the combinational device abstraction.
6.5. Power Consumption
An attractive feature of CMOS technology for digital systems is the fact that, once a circuit reaches a static equilibrium and all circuit nodes have settled to valid logic levels, no current flows and consequently no power is dissipated. This property implies that the power consumption of a CMOS circuit is proportional to the rate at which signals change logic levels, hence -- ideally -- to the rate at which useful computation is performed.

The primary cause of current flow within a CMOS circuit is the need to charge and discharge parasitic capacitances distributed among the nodes of that circuit, and the primary mechanism for energy consumption is the dissipation of energy (in the form of heat) due to ohmic losses as these currents flow through the incidental resistances in wires and transistors.

A subsequent $1 \rightarrow 0$ transition on $V_{IN}$ will open the pulldown and close the pullup, charging the output capacitance back to $V_{DD}$ and dissipating another $C\cdot V_{DD}^2/2$ joules by ohmic losses from the necessary current flow. The total energy dissipation from this $0 \rightarrow 1 \rightarrow 0$ cycle is $C \cdot V_{DD} ^2$ joules.
Similar energy losses occur at each node of a complex CMOS circuit, at rates proportional to the frequency at which their logic values change. If we consider a system comprising $N$ CMOS gates cycling at a frequency of $f$ cycles/second, the resulting power consumption is on the order of $f \cdot N \cdot C \cdot V_{DD} ^ 2$ watts. As a representative example, a CMOS chip comprising $10^8$ gates driving an average output capacitance of $10^{-15}$ farads using $V_{DD} = 1$ volt and operating at a gigahertz ($10^9$) frequency would consume about $10^8 \cdot 10^9 \cdot 10^{-15} \cdot 1^2 = 100$ watts of power, all converted to heat that must be conducted away from the chip.
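The back-of-envelope estimate above is easy to reproduce (the function name is our own):

```python
# Reproducing the chapter's dynamic power estimate: P = f * N * C * Vdd^2.

def cmos_power(freq_hz, n_gates, cap_farads, vdd_volts):
    """Approximate dynamic power of a CMOS chip, in watts."""
    return freq_hz * n_gates * cap_farads * vdd_volts ** 2

# 10^8 gates, 10^-15 F per output, Vdd = 1 V, operating at 1 GHz:
watts = cmos_power(1e9, 1e8, 1e-15, 1.0)
assert abs(watts - 100.0) < 1e-9   # the 100 W figure from the text
```

The quadratic dependence on $V_{DD}$ is visible here: doubling the supply voltage at fixed frequency quadruples the power, which is why lowering operating voltage has been such a priority.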
The constraint of power (and consequent cooling) costs has been a prime motivator for technology improvements. Historic trends have driven the number of gates/chip higher (due to Moore's law growth in transistors per chip), and -- until recently -- operating frequencies have increased with each generation of technology. These trends have been partially offset by scaling of CMOS devices to smaller size, with proportionate decreases in capacitance. Lowering operating voltage has been a particular priority (due to the quadratic dependence of power on voltage), but the current 1 volt range may be close to the lower limit for reasons related to device physics.
6.5.1. Must computation consume power?
The costs of energy consumption (and the related cost of cooling) have emerged as a primary constraint on the assimilation of digital technologies, and are consequently an active area of research.

A landmark in the exploration of the fundamental energy costs of computation was the 1961 observation by Rolf Landauer that the act of destroying a single bit of information requires at least $k \cdot T \cdot \ln 2$ joules of energy, where $k$ is the Boltzmann constant ($1.38 \cdot 10^{-23}$ J/K), $T$ is the absolute temperature in kelvins, and $\ln 2$ is the natural logarithm of $2$. The attachment of this lower limit to the destruction of information implies that the cost might be avoided by building systems that are lossless from an information standpoint: if information loss implies energy loss, a key to energy-efficient computation is to make it preserve information.
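The Landauer bound is easily evaluated; at room temperature it amounts to only a few zeptojoules per erased bit (the helper name below is our own):

```python
import math

# The Landauer limit: minimum energy to erase one bit is k * T * ln 2.

BOLTZMANN = 1.38e-23  # J/K, the value quoted in the text

def landauer_limit(temp_kelvin):
    """Minimum energy (joules) to destroy one bit of information at temperature T."""
    return BOLTZMANN * temp_kelvin * math.log(2)

# At room temperature (~300 K), roughly 2.9e-21 J per bit -- many orders of
# magnitude below the switching energies of practical CMOS circuits:
e_bit = landauer_limit(300.0)
assert 2.8e-21 < e_bit < 3.0e-21
```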


Landauer and his disciples observed that such lossless computation can be made reversible: since no information is lost in each step of the computation, it may be run backward as well as forward. Although construction of a completely reversible computer system presents some interesting challenges (e.g. input/output operations), it is conceptually feasible to perform arbitrarily complex computations reversibly. Imagine, for example, an intricate and time-consuming analysis that produces a valuable single-bit result. We start this computation on a reversible computer in some pristine initial state, perhaps with a large memory containing only zeros. As the computation progresses using reversible operations, extraneous outputs are saved in the memory, which gradually fills with uninteresting data. During this process, we may pump energy into the computer to effect its state changes, but theoretically none of this energy is dissipated as heat. When we have computed our answer, we write it out (irreversibly) at a tiny energy cost. We then run our computation in reverse, using the stored extraneous data to step backwards in our computation; in the process, we get back any energy we spent during the forward computation. When the backward phase is completed, the machine state has been returned to its pristine initial state (ready for another computation), we have our answer, and the only essential energy cost has been the tiny amount needed to output the answer.
Such thought experiments in reversible computation seem (and are) somewhat far-fetched, but provide important conceptual models for understanding the relation between laws of physics and those of information. Reversible computing plays an important role in contemporary research, for example in the area of quantum computing.
6.6. Further Reading
- Moore, G., "Cramming More Components onto Integrated Circuits", Electronics, v 36 no 8, April 19, 1965. 1965 paper observing that the optimal transistor count for a chip seemed to double every two years, and predicting that growth rate to continue for "at least 10 years". This conservative prediction predates the popularity of CMOS (and many other subsequent technological breakthroughs), and has come to be known as Moore's Law.
- Landauer, Rolf, "Irreversibility and heat generation in the computing process", IBM J. Research and Development, 1961. 1961 paper exploring the theoretical limit of energy consumption of computation. Establishes a lower bound on the energy cost of erasing a bit of information, and hence of irreversible computation.
6.7. Chapter Summary
The CMOS technology described in this chapter has become the tool of choice for implementing large digital systems, for a number of good reasons:
- Effective manufacturing techniques allow economical production of reliable devices containing billions of logic elements;
- Quiescent CMOS circuits have virtually zero power dissipation;
- CMOS gates are naturally lenient combinational devices.
- The use of complementary transistor types, NFETs and PFETs, within each CMOS gate;
- Each CMOS output is a circuit node connected to an NFET-based pulldown circuit as well as a PFET-based pullup circuit, where for every combination of inputs either the pullup connects the output to $V_{dd}$ or the pulldown connects the output to ground;
- Each single CMOS gate is naturally inverting: positive input transitions can cause only negative output transitions, and vice versa. Hence certain logic functions cannot be implemented as a single CMOS gate and require multiple-gate implementations;
- High gain in the active region allows large noise margins, hence good noise immunity.
- Transitions take time: pumping charge into or out of an output node is work (in the physical sense) and cannot happen instantaneously. This leads to finite rise and fall times, which we accommodate in our $t_{PD}$ specifications.