
For those interested in Computer Architecture...


ygolo

I had to fix some "bugs" in the micro-architecture before proceeding.
1) I changed the opcode bus to an opcode + constant bus, making it 12 bits wide instead of the 4 bits it originally was.
2) I added an 8-bit data bus from the execute unit to the fetch/decode unit.

Hopefully, you can follow why I needed to make those changes. I'll leave it as an opportunity for the reader to test their understanding :D

So now, I'll show what happens on all of the buses, during a few cycles so that you get the gist of what is happening in the micro-architecture, as well.

First cycle:
Curr. PC Bus: 0x00
Instruction Bus: 0x8000B
PAC Bus: (Addr:0, Source:0, Target 0)
RAC Bus: (Addr1:0,Source1:0,Target1:1,Addr2:0,Source2:0,Target2:0)
MAC Bus: (Addr:0,Source1:0,Target:0)
Op.Code+Constant Bus: (OpCode: 0x8, Constant: 0x0B)
Data Bus: 0x00
PRD Bus: 0x00
RRD Bus: 0x0000
MRD Bus: 0x00
PWD Bus: 0x00
RWD Bus: 0x0B
MWD Bus: 0x00
New PC Bus: 0x01

Second cycle:
Curr. PC Bus: 0x01
Instruction Bus: 0x80101
PAC Bus: (Addr:0, Source:0, Target 0)
RAC Bus: (Addr1:1,Source1:0,Target1:1,Addr2:0,Source2:0,Target2:0)
MAC Bus: (Addr:0,Source1:0,Target:0)
Op.Code+Constant Bus: (OpCode: 0x8, Constant: 0x01)
Data Bus: 0x00
PRD Bus: 0x00
RRD Bus: 0x0000
MRD Bus: 0x00
PWD Bus: 0x00
RWD Bus: 0x01
MWD Bus: 0x00
New PC Bus: 0x02

Third cycle:
Curr. PC Bus: 0x02
Instruction Bus: 0x30001
PAC Bus: (Addr:0, Source:0, Target 0)
RAC Bus: (Addr1:0,Source1:1,Target1:1,Addr2:1,Source2:1,Target2:0)
MAC Bus: (Addr:0,Source1:0,Target:0)
Op.Code+Constant Bus: (OpCode: 0x3, Constant: 0x01)
Data Bus: 0x00
PRD Bus: 0x00
RRD Bus: 0x0B01
MRD Bus: 0x00
PWD Bus: 0x00
RWD Bus: 0x0A
MWD Bus: 0x00
New PC Bus: 0x03

Fourth cycle:
Curr. PC Bus: 0x03
Instruction Bus: 0xA0000
PAC Bus: (Addr:0, Source:0, Target 0)
RAC Bus: (Addr1:0,Source1:1,Target1:0,Addr2:0,Source2:1,Target2:0)
MAC Bus: (Addr:0xA,Source1:0,Target:1)
Op.Code+Constant Bus: (OpCode: 0xA, Constant: 0x00)
Data Bus: 0x0A
PRD Bus: 0x00
RRD Bus: 0x0A0A
MRD Bus: 0x00
PWD Bus: 0x00
RWD Bus: 0x0A
MWD Bus: 0x0A
New PC Bus: 0x04

Fifth cycle:

Curr. PC Bus: 0x04
Instruction Bus: 0xC0000
PAC Bus: (Addr:0, Source:0, Target 1)
RAC Bus: (Addr1:0,Source1:0,Target1:0,Addr2:0,Source2:1,Target2:0)
MAC Bus: (Addr:0xA,Source1:0,Target:0)
Op.Code+Constant Bus: (OpCode: 0xC, Constant: 0x00)
Data Bus: 0x0A
PRD Bus: 0x00
RRD Bus: 0x0A0A
MRD Bus: 0x00
PWD Bus: 0x0A
RWD Bus: 0x0A
MWD Bus: 0x0A
New PC Bus: 0x05

The lines in red represent buses that changed in the current cycle; the lines in green are the ones that have changed at all so far.
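
If you want to recompute which buses changed between cycles yourself, here is a minimal Python sketch (the values are just the first two cycles copied from the tables above, with the PAC/RAC/MAC control fields left out for brevity; nothing beyond what is shown there is assumed):

```python
# Represent each cycle's bus values as a dict and diff consecutive
# snapshots to recover the "changed this cycle" information.
cycle1 = {
    "Curr. PC": 0x00, "Instruction": 0x8000B, "OpCode": 0x8, "Constant": 0x0B,
    "Data": 0x00, "PRD": 0x00, "RRD": 0x0000, "MRD": 0x00,
    "PWD": 0x00, "RWD": 0x0B, "MWD": 0x00, "New PC": 0x01,
}
cycle2 = {
    "Curr. PC": 0x01, "Instruction": 0x80101, "OpCode": 0x8, "Constant": 0x01,
    "Data": 0x00, "PRD": 0x00, "RRD": 0x0000, "MRD": 0x00,
    "PWD": 0x00, "RWD": 0x01, "MWD": 0x00, "New PC": 0x02,
}

def changed(prev, curr):
    """Return the names of the buses whose value differs from the previous cycle."""
    return [name for name in curr if curr[name] != prev[name]]

print(changed(cycle1, cycle2))
# ['Curr. PC', 'Instruction', 'Constant', 'RWD', 'New PC']
```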
 

millerm277

@ygolo, very interesting so far, I'll be paying attention to this thread, as I currently do some work for a company that does GaAs chips. (Electrical Engineering Intern)
 

ygolo

I got rather ill today, so there will be a short break from the concrete examples, since those take a bit of concentration.

Though we don't want to go too far into the circuit design/device physics side of things (incredibly interesting stuff in its own right), a certain amount needs to be understood before I specify an implementation of our micro-architecture.

Although there are a lot of circuit styles, the most popular one for a long time has been CMOS (Complementary Metal Oxide Semiconductor) technology. The reason for the popularity is the generally low power dissipation.

For our purposes, we can think of MOS transistors as simple switches. There are two types, often referred to as NMOS and PMOS.

The device physics is interesting and perhaps we can discuss it in another thread. Pictured below is the NMOS version.
[Image: cross-section of an n-channel (NMOS) MOSFET, showing the gate, source, drain, and bulk]


Anyway, MOS transistors are 4-terminal devices. The terminals are called drain, gate, source, and bulk. For our purposes, we can consider the NMOS bulk terminals to be connected to ground, and the PMOS bulk terminals to be connected to the supply voltage of the circuit.

What happens is that when the appropriate voltage is applied to the gate terminal, the source and drain terminals become electrically connected. It is actually more subtle than this, but we can save that for another thread.

So we can think of the NMOS and PMOS transistors in the following way:

When a high voltage is applied to the gate of an NMOS, the source and drain become electrically connected.
When a low voltage is applied to the gate of a PMOS, the source and drain become electrically connected.

One catch to this is that (due to the threshold voltages needed to keep the transistors on) NMOS transistors don't pass high voltages well between drain and source, and PMOS transistors don't pass low voltages well.

For this reason, both types need to be used in a complementary way. There is a lot that goes into the design of these circuits (it's what I currently do for work), but for now I'll just show a few basic circuits (from which many other circuits can be built).

First is a simple inverter, which outputs the inverted sense of the input. A 0 (low-voltage) input creates a 1 (high-voltage) output, and vice versa.
[Image: CMOS inverter schematic (PMOS on top connected to Vdd, NMOS on the bottom connected to ground)]


Please see if you can see how the inversion function is implemented by this particular configuration of PMOS and NMOS.
The NMOS transistors have arrows going into their bulks and/or no bubbles at their gates. The PMOS transistors have arrows going out of their bulks and/or bubbles at their gates.

In order to keep more complex circuits looking less complicated, inverters tend to be replaced with the following symbol.

[Image: inverter schematic symbol]


Another important function (from which all other logical functions can theoretically be built) is a NAND gate.

[Image: CMOS 2-input NAND gate schematic, with transistors labeled Q1 through Q4]


If both the a and b inputs are 1 (high voltage), then the output is 0 (low voltage). In all other cases (when the inputs are valid 1s and 0s), the output is 1 (high voltage).

Please see if you can see how the NAND function is implemented by this particular configuration.

Again, to make more complicated circuits more readable, the NAND gates are represented in schematics with a symbol like this:
[Image: NAND gate schematic symbol]


So far, all the basic circuits I've mentioned have had no "memory." In order to make the flip-flops and other memory elements needed, we employ a little bit of positive feedback.

Here is a high-level schematic of a simple D-flip-flop:
[Image: schematic of a transmission-gate D flip-flop, built from two latches clocked by CL and its complement CL#]


The boxes with the lines in the center are called "transmission gates." They are composed of one PMOS and one NMOS in parallel. Notice that every signal that goes to a transmission gate is accompanied by its inverted version as well. That is because one is needed for the PMOS and the other for the NMOS.

There are two distinguishable states, "transparent" and "opaque." In the transparent state, the drains and sources are electrically connected. In the opaque state, the drains are electrically isolated from the sources.

The end result is that the D input passes to the output when the CLK signal transitions from high to low. At all other times, there are positive feedback mechanisms that keep the output at the value it was last.

Please see if you can see this function from the configuration.
Hint: The flip-flop is actually composed of two "latches," each with its own set of transmission gates and feedback loop that are opaque or transparent based on the level of the clock.

I really suggest you spend some time puzzling out the basic circuits to see how they create the functions described. I will answer questions if you get stuck. Once done, I dare say, you will have much more knowledge than most "lay-people," even those in other technical fields (that is, other than Electrical/Computer Engineering or Science).
 

ygolo

Been a little distracted. Has everyone who's interested puzzled out the circuits?

The inverter and the NAND gate should have been really easy.

The flip-flop may have been tricky, but I think the hint should have been enough.

Was sick for most of the week, and there is the meet-up this weekend, but after that, I'll get cranking on explanations again. Just wanted to see if people are still interested.

Not planning too far ahead, but what's left for our little design is the actual circuits (crude ones) to implement the bus traffic specified earlier. I may have one more generic post on memories and multiplexers before that.

After that, I was thinking of using a more complicated example program that "polls" the switch bank...which would then segue well into the creation of interrupts (and we could go over an abstract implementation of those).

Then I was thinking we could go over the concepts of "timing" and "clocking" in more detail in order to introduce the motivation for "pipelining." Once we "pipeline," we can introduce the concept of "hazards," various ways to handle them, and a light intro to "precise interrupts."

Then there are two possible tracks we could follow:
1) Stay focused on the processor and cover things like super-scalar architectures leading to out-of-order execution (where we revisit precise interrupts). That has a whole bunch of stuff, like the Tomasulo algorithm/instruction scheduling, branch prediction/error recovery, register renaming, and more. Then there are vector processors (or vector instructions), stream processors, and heterogeneous processors like CELL and various GPUs (nVidia's Tesla is the likely candidate). Then there are the VLIW- and EPIC-type instruction sets as well. After which we can go on to 2) without having to worry about returning to the guts of the processor too often.

2) The other path is to step away from the processor itself for a little while and take a more system-level view, where we look at workloads and benchmarking and decide what architectural improvements are actually worth doing for particular workloads: memory hierarchies, caching, paging, segmentation, snooping and directories, etc. We'll look at virtual machines, offloading, and parallel processing in a lot of different models (SMP/CMP shared memory vs. message passing), and the various techniques and traps programmers can fall into when using multiple processors. We'll cover reliability, availability, bandwidth, and latency of clusters, look at some basic compiler techniques, and evaluate parallelism in general from an abstract perspective. This gives adequate context for understanding the trade-offs we make when we cover the things in path 1). This is my preferred path.

Anyway, someone let me know if they're still interested (breaks have a way of making people lose interest). It does take me some time, so I'd rather not post to the void.
 

Athenian200

I got rather ill today, so there will be a short break from the concrete examples, since those take a bit of concentration.

Though we don't want to go too far into the circuit design/device physics side of things (incredibly interesting stuff in its own right), a certain amount needs to be understood before I specify an implementation of our micro-architecture.

Although there are a lot of circuit styles, the most popular one for a long time has been CMOS (Complementary Metal Oxide Semiconductor) technology. The reason for the popularity is the generally low power dissipation.

For our purposes, we can think of MOS transistors as simple switches. There are two types, often referred to as NMOS and PMOS.

The device physics is interesting and perhaps we can discuss it in another thread. Pictured below is the NMOS version.


Anyway, MOS transistors are 4-terminal devices. The terminals are called drain, gate, source, and bulk. For our purposes, we can consider the NMOS bulk terminals to be connected to ground, and the PMOS bulk terminals to be connected to the supply voltage of the circuit.

What happens is that when the appropriate voltage is applied to the gate terminal, the source and drain terminals become electrically connected. It is actually more subtle than this, but we can save that for another thread.

So we can think of the NMOS and PMOS transistors in the following way:

When a high voltage is applied to the gate of an NMOS, the source and drain become electrically connected.
When a low voltage is applied to the gate of a PMOS, the source and drain become electrically connected.

One catch to this is that (due to the threshold voltages needed to keep the transistors on) NMOS transistors don't pass high voltages well between drain and source, and PMOS transistors don't pass low voltages well.

For this reason, both types need to be used in a complementary way. There is a lot that goes into the design of these circuits (it's what I currently do for work), but for now I'll just show a few basic circuits (from which many other circuits can be built).

My first thought when reading this was, "Why do they need to have two kinds of gates that work in precisely opposite ways, instead of just one that can work either way?" But then I realized that they might need to do that because if they didn't, the state of the gate would be the same under either high or low voltage conditions, and would not have any effect on the circuit. Is that it?
First is a simple inverter, which outputs the inverted sense of the input. A 0 (low-voltage) input creates a 1 (high-voltage) output, and vice versa.


Please see if you can see how the inversion function is implemented by this particular configuration of PMOS and NMOS.
The NMOS transistors have arrows going into their bulks and/or no bubbles at their gates. The PMOS transistors have arrows going out of their bulks and/or bubbles at their gates.

The bulk just means the main part of the circuit, right? And the NMOS doesn't need bubbles because it's already connected to ground, but PMOS does because it's connected to the voltage? I'm guessing, here.
In order to keep more complex circuits looking less complicated, inverters tend to be replaced with the following symbol.

It looks like the same idea as represented by the inversion circuit preceding it symbolically, but without the details of how it works.


Another important function (from which all other logical functions can theoretically be built) is a NAND gate.

If both the a and b inputs are 1 (high voltage), then the output is 0 (low voltage). In all other cases (when the inputs are valid 1s and 0s), the output is 1 (high voltage).

Please see if you can see how the NAND function is implemented by this particular configuration.

All I can see is that somehow or other, Q3 takes the voltage-state of Input A into the circuit, Q4 takes the voltage-state of Input B into the circuit, and then their states are compared with what I assume are some kind of static positive voltage references in Q1 and Q2, which if met result in the current that normally causes the circuit to read as on being blocked/stifled. Is that it?

Abstractly, the idea of a NAND gate seems to be similar to this:

If both conditions A and B are true, this statement is false. Otherwise, this statement is true.

So... is this thing attempting to implement logic in the form of an electronic circuit?
Again, to make more complicated circuits more readable, the NAND gates are represented in schematics with a symbol like this:

It appears to be a representation of how the circuit works on an abstract level (two inputs and one output, with a body doing something with it), but without the details of how it works. It looks like a straightforward representation of it if you already know how it works.
So far, all the basic circuits I've mentioned have had no "memory." In order to make the flip-flops and other memory elements needed, we employ a little bit of positive feedback.

Here is a high-level schematic of a simple D-flip-flop:

The boxes with the lines in the center are called "transmission gates." They are composed of one PMOS and one NMOS in parallel. Notice that every signal that goes to a transmission gate is accompanied by its inverted version as well. That is because one is needed for the PMOS and the other for the NMOS.

There are two distinguishable states, "transparent" and "opaque." In the transparent state, the drains and sources are electrically connected. In the opaque state, the drains are electrically isolated from the sources.

The end result is that the D input passes to the output when the CLK signal transitions from high to low. At all other times, there are positive feedback mechanisms that keep the output at the value it was last.

Please see if you can see this function from the configuration.
Hint: The flip-flop is actually composed of two "latches," each with its own set of transmission gates and feedback loop that are opaque or transparent based on the level of the clock.

I really suggest you spend some time puzzling out the basic circuits to see how they create the functions described. I will answer questions if you get stuck. Once done, I dare say, you will have much more knowledge than most "lay-people," even those in other technical fields (that is, other than Electrical/Computer Engineering or Science).

I'm not sure I understand this exactly, but here's what it seems to be doing. If the blue lines are one state of the clock, and the red lines are the other... it seems as if the circuit is set up so that depending on which state the clock is in (red or blue), one part of whatever it is that started at D goes through to the end (and something happens to it) while the other is held in a loop of some kind. What goes through, and what is kept in a loop, seems to alternate between whether red or blue is the current state of the clock. Does it have something to do with volatile memory needing to be constantly self-refreshed in order to avoid being wiped out? Anyway, if the clock cycles frequently, it seems as if the circuit is set up so that all the stuff coming in at D can go through eventually, but ensures that only one thing at a time can do so.

I'm sorry I haven't been commenting as you've been writing... I tend to absorb passively, keep my thoughts to myself. I had some thoughts on your earlier posts, but I wasn't sure anyone would be interested. When I figured out that you were losing motivation because no one was commenting on it, I decided I should share some of my thoughts.
 

Bear Warp

I'm still interested, ygolo. I've just been distracted with college stuff lately. I have plenty of time to look over your last two posts today, though.

Just let me get some coffee first...
 

Bear Warp

(Noob) Questions!

What's Vdd?

What exactly are the drain, gate, source and bulk terminals?

What are bubbles?
 

ygolo

My first thought when reading this was, "Why do they need to have two kinds of gates that work in precisely opposite ways, instead of just one that can work either way?" But then I realized that they might need to do that because if they didn't, the state of the gate would be the same under either high or low voltage conditions, and would not have any effect on the circuit. Is that it?

The reason they need two kinds of transistors is that the NMOS is not able to transfer a logic 1 from drain to source faithfully; it drops the voltage level by a threshold voltage (the voltage needed to keep the transistor on). The PMOS transistor, however, cannot transfer a logic 0 faithfully. It transfers a voltage that is a threshold voltage above ground.

The bulk just means the main part of the circuit, right? And the NMOS doesn't need bubbles because it's already connected to ground, but PMOS does because it's connected to the voltage? I'm guessing, here.

The bulk terminal is what is connected to "the substrate" or "the well" of the transistor. The device physics is interesting, but going too far down that path will lead us well off topic. For now, just think of it as a terminal you always have to connect to ground on an NMOS, and to the voltage supply (usually called Vdd by convention) on a PMOS.

On the circuit diagrams, they are the arrows on the transistors in the middle. In many transistor symbols, the bulk is actually omitted, and you are to infer that the connections are as they should be.

The bubble usually indicates an inversion of the signal connected to it before getting to the device. However, in many symbols of PMOS transistors the bubble is omitted. That is actually a bit annoying to me, but it is done. The conventions are weird. Sometimes they don't really make sense, but once the convention is learned, it doesn't matter too much.

Transistor symbols tend to vary a lot. The most proper symbols for an enhancement-mode (meaning the applied voltage enhances the conductance of the channel) MOSFET (Metal Oxide Semiconductor Field Effect Transistor) are like the ones given in the inverter, but with the second vertical line split into three segments.

Again, there is a lot that can be said about FETs and the differences between them. But as long as you can identify NMOS and PMOS in a standard CMOS circuit, we will assume they are enhancement mode (don't worry if this distinction is not understood right now); that should be enough to proceed for our purposes.

The inverter works because when the input is a logic 1 (a high voltage), then the NMOS transistor on the bottom pulls the output node down to ground (a logic 0). But if the input is a logic 0 (a low voltage) then the PMOS transistor on the top pulls the output node to Vdd (a logic 1).
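
If it helps to see that switch model in code, here is a minimal Python sketch of the inverter behavior just described (the names VDD, GND, and cmos_inverter are mine, and this is purely the on/off switch abstraction, not a real circuit simulation):

```python
VDD, GND = 1, 0  # logic 1 = supply voltage, logic 0 = ground

def cmos_inverter(a):
    """Switch-level model: the NMOS (on when the input is 1) pulls the output
    down to ground; the PMOS (on when the input is 0) pulls it up to Vdd."""
    if a == 1:      # NMOS on, PMOS off
        return GND
    else:           # input is 0: PMOS on, NMOS off
        return VDD

print(cmos_inverter(0), cmos_inverter(1))  # prints: 1 0
```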

It looks like the same idea as represented by the inversion circuit preceding it symbolically, but without the details of how it works.

Yup. That is the idea.

All I can see is that somehow or other, Q3 takes the voltage-state of Input A into the circuit, Q4 takes the voltage-state of Input B into the circuit, and then their states are compared with what I assume are some kind of static positive voltage references in Q1 and Q2, which if met result in the current that normally causes the circuit to read as on being blocked/stifled. Is that it?

Not quite. For digital CMOS circuits like these you don't need to worry about references or voltage comparisons (very much). Just treat the transistors like switches.

Here are a couple of things to note:
In order to connect the output node to ground, you need both Q3 and Q4 to connect their respective drains to their sources. So you need both their inputs to be logic 1s.
In order to connect the output node to Vdd, you need either Q1 or Q2 (or both) to connect their respective drains to their sources. This happens when either (or both) of their inputs are logic 0s.
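
The same two notes, written as a minimal Python sketch (treating Q1-Q4 purely as switches; the helper names are mine, and this is not how circuits are actually simulated):

```python
VDD, GND = 1, 0

def cmos_nand(a, b):
    """Switch-level model of the 2-input CMOS NAND described above."""
    pull_down = (a == 1) and (b == 1)   # Q3 and Q4 in series: both must be on
    pull_up = (a == 0) or (b == 0)      # Q1 and Q2 in parallel: either one suffices
    return GND if pull_down else (VDD if pull_up else None)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))
```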

Abstractly, the idea of a NAND gate seems to be similar to this:

If both conditions A and B are true, this statement is false. Otherwise, this statement is true.

That's the idea. Yes.

So... is this thing attempting to implement logic in the form of an electronic circuit?

Yup. Until other means of computing become efficient, the logic is implemented electronically.

The thing about the NAND gate is that it is "logically complete" by itself. That means you can implement all possible logical functions using just 2-input NAND gates. You can make one into an inverter by connecting its inputs together and/or connecting one of the inputs to 1. You can create an AND function by inverting the output of a NAND gate (using a NAND as the inverter). You can get an OR gate by inverting the inputs to a NAND gate (again using NANDs as inverters).
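
That completeness claim is easy to sanity-check in code. Here is a minimal Python sketch that builds NOT, AND, and OR out of nothing but a 2-input NAND, exactly as described (the function names are mine):

```python
def nand(a, b):
    """2-input NAND on 0/1 values: output is 0 only when both inputs are 1."""
    return 0 if (a == 1 and b == 1) else 1

def not_(a):            # inverter: tie both NAND inputs together
    return nand(a, a)

def and_(a, b):         # AND: invert the output of a NAND
    return not_(nand(a, b))

def or_(a, b):          # OR: invert both inputs of a NAND
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "| NAND:", nand(a, b), "AND:", and_(a, b), "OR:", or_(a, b))
```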

It appears to be a representation of how the circuit works on an abstract level (two inputs and one output, with a body doing something with it), but without the details of how it works. It looks like a straightforward representation of it if you already know how it works.

Yes. Circuits can look really complicated without the use of symbols. Generally, we use a box with the inputs and outputs, and none of the internals, as the symbol for a generic circuit. So we'd have to actually look at the schematic behind the symbol to understand what is going on. For standard circuits, like the inverter, the NAND, and others, there are standard symbols, so we don't use a box.

I'm not sure I understand this exactly, but here's what it seems to be doing. If the blue lines are one state of the clock, and the red lines are the other... it seems as if the circuit is set up so that depending on which state the clock is in (red or blue), one part of whatever it is that started at D goes through to the end (and something happens to it) while the other is held in a loop of some kind. What goes through, and what is kept in a loop, seems to alternate between whether red or blue is the current state of the clock.

That is basically it (though it is more black vs. colored rather than red vs. blue). Can you see why what was on D gets transferred to the red transmission gate when CL is low, and then transferred from the red transmission gate to the Q node when CL is high?

Does it have something to do with volatile memory needing to be constantly self-refreshed in order to avoid being wiped out?
It is volatile memory. But it is a "static" type. The feedback loops in the latches keep the values stored (as long as the circuit has power) even when there is no external circuit driving the D input.
Refreshes (and "pre-charges") are usually associated with "dynamic" circuits, because in these the logic values are stored capacitively, not with the use of a static feedback loop.
This is also interesting stuff, but more in the realm of circuit design.

Anyway, if the clock cycles frequently, it seems as if the circuit is set up so that all the stuff coming in at D can go through eventually, but ensures that only one thing at a time can do so.
That is the essential idea. The clock cycles on a periodic basis, and these flip-flops will be used in various parts of a circuit to synchronize other signals to the clock.
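
To tie the latch/flip-flop behavior to code, here is a minimal behavioral Python sketch of the two-latch idea (the class names and the enable polarities are my own choices, picked so that Q picks up D on the high-to-low clock transition described earlier; it says nothing about the actual transmission-gate timing):

```python
class Latch:
    """Level-sensitive latch: transparent (output follows d) while enable is 1,
    opaque (output held, standing in for the feedback loop) while enable is 0."""
    def __init__(self):
        self.q = 0
    def update(self, d, enable):
        if enable:
            self.q = d
        return self.q

class DFlipFlop:
    """Two latches back to back; Q takes on D when the clock falls from 1 to 0."""
    def __init__(self):
        self.master = Latch()
        self.slave = Latch()
    def update(self, d, clk):
        m = self.master.update(d, enable=clk)        # follows D while clk is high
        return self.slave.update(m, enable=1 - clk)  # passes it on while clk is low

dff = DFlipFlop()
for clk, d in [(1, 1), (0, 1), (1, 0), (0, 0), (1, 1), (0, 1)]:
    print(f"clk={clk} d={d} q={dff.update(d, clk)}")
```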


I'm sorry I haven't been commenting as you've been writing... I tend to absorb passively, keep my thoughts to myself. I had some thoughts on your earlier posts, but I wasn't sure anyone would be interested. When I figured out that you were losing motivation because no one was commenting on it, I decided I should share some of my thoughts.

I like it when people make comments; that way I know where I am losing people.

I'm still interested, ygolo. I've just been distracted with college stuff lately. I have plenty of time to look over your last two posts today, though.

Just let me get some coffee first...

That's cool. Hopefully, my responses to Athenian will help you as well.
I just wasn't sure, since I was sick for a while, whether people had just moved on.
 

ygolo

(Noob) Questions!

What's Vdd?

What exactly are the drain, gate, source and bulk terminals?

What are bubbles?

Vdd is the conventional name for the power supply. The bubbles are the little circles on the diagrams (on the symbols for the PMOS in the inverter, the symbol for the inverter, and the symbol for the NAND)--they usually indicate an inversion.

[Image: the same NMOS cross-section as above, with the terminals labeled]

The drain is marked with d or D, the source with S or s, the gate with G or g, and the bulk with B or b (or it is left unmarked or omitted entirely) in the diagrams provided.

So (again at a very high level), what is happening in a transistor is that when a certain voltage is applied to the gate, the source and the drain get connected electrically.

Are you asking for more specifics of transistor operation?
 

Bear Warp

What I want to know is what is meant by drain, what is meant by source, and so on.
 

ygolo

Oh. A lot of the names come about for historical reasons.
The enhancement-mode MOSFET was not the first field effect transistor (FET), nor was a FET the first transistor. Bipolar Junction Transistors (BJTs) came first. Before that there were vacuum tubes.

A BJT's useful terminals are the emitter, collector, and base (which made some sense based on the physical flow of charge carriers, but I wouldn't be surprised if the names were left over from the vacuum-tube days). I think source and drain were meant to sound like emitter and collector. In an enhancement-mode MOSFET, the source and drain are physically the same due to the symmetry of the structure. You may be able to think of it like the charge carriers starting from the source and "draining" into the drain, but that is not really accurate. The gate you can think of as "gating" the transistor operation. Open the "gate" to let the current flow? The "bulk" may be an allusion to the fact that we have "bulk" silicon before any of the processing is done to make it an integrated circuit.

I'm speculating. I've gotten so used to calling them what they are conventionally called, I am not sure I can give accurate answers.

With this stuff, (and probably a lot of EE/CS related things) the naming tends to be highly arbitrary and based on the history of development. IMO, it is better to learn the meaning of terms through their use and context, since they often have little intrinsic value to their name.

The names are often just labels. That's my opinion. But I haven't really been tripped up by not knowing the etymology of technical terms. Why is the mouse called a "mouse" (perhaps it resembles one)? Why are errors in design or construction called "bugs?" (a hold-over from the vacuum-tube days?). There probably are reasons for the names, but many times they are lost to history.
 

Athenian200

The reason they need two kinds of transistors is that the NMOS is not able to transfer a logic 1 from drain to source faithfully; it drops the voltage level by a threshold voltage (the voltage needed to keep the transistor on). The PMOS transistor, however, cannot transfer a logic 0 faithfully. It transfers a voltage that is a threshold voltage above ground.

Ah, so using two transistors is the way they work around the problem of the threshold voltage leakage? Is this the same "gate leakage" I hear talked about so much in connection with processor reviews?

Again, there is a lot that can be said about FETs and the differences between them. But as long as you can identify NMOS and PMOS in a standard CMOS circuit, we will assume they are enhancement mode (don't worry if this distinction is not understood right now); that should be enough to proceed for our purposes.

Yes, there's quite a bit to learn, I imagine.
The inverter works because when the input is a logic 1 (a high voltage), then the NMOS transistor on the bottom pulls the output node down to ground (a logic 0). But if the input is a logic 0 (a low voltage) then the PMOS transistor on the top pulls the output node to Vdd (a logic 1).

That seems to be exactly how inversion should work.


Not quite. For digital CMOS circuits like these you don't need to worry about references or voltage comparisons (very much). Just treat the transistors like switches.

Here are a couple of things to note:
In order to connect the output node to ground, you need both Q3 and Q4 to connect their respective drains to their sources. So you need both their inputs to be logic 1s.
In order to connect the output node to Vdd, you need either Q1 or Q2 (or both) to connect their respective drains to their sources. This happens when either (or both) of their inputs are logic 0s.

Oh. So it's about circuits being completed, or not being completed under certain conditions. That's actually simpler. And I think I knew what it was doing on a higher level, so I probably have it now.




Yup. Until other means of computing become efficient, the logic is implemented electronically.

The thing about the NAND gate is that it is "logically complete" by itself. That means you can implement all possible logical functions using just 2-input NAND gates. You can make one into an inverter by connecting its inputs together and/or connecting one of the inputs to 1. You can create an AND function by inverting the output of a NAND gate (using a NAND as the inverter). You can get an OR gate by inverting the inputs to a NAND gate (again using NANDs as inverters).

Wow. Come to think of it, I can almost imagine all the early x86 instructions being implemented in this way, if the person were creative enough with circuit design to get the logic to work out from a combination of these (explains why there were so many NTs in that field). That is, prior to Pentium class machines or possibly the advent of math co-processors. I bet it got complicated then. I'm still impressed that someone could implement logic in the form of a physical device. It makes one question the idea that human reasoning is irreducibly complex (You can ignore this, I know it's a bit off topic).

That is basically it (though it is more black vs. colored rather than red vs. blue). Can you see why what was on D gets transferred to the red transmission gate when CL is low, and then transferred from the red transmission gate to the Q node when CL is high?

Because... for some reason or other, the value can go through the first inverter positioned between the red and blue transmission gates when CL is low, and then can go through another one positioned later in the circuit (but in a similar yet opposite position) when CL is high?

It is volatile memory. But it is a "static" type. The feedback loops in the latches keep the values stored (as long as the circuit has power) even when there is no external circuit driving the D input.
Refreshes (and "pre-charges") are usually associated with "dynamic" circuits, because in these the logic values are stored capacitively, not with the use of a static feedback loop.
This is also interesting stuff, but more in the realm of circuit design.


That is the essential idea. The clock cycles on a periodic basis, and these flip-flops will be used in various parts of a circuit to synchronize other signals to the clock.


Aha. That explains some of the limitations of circuit design, and why speeding up the "clock" is the easiest way to speed up a processor, but also causes it to generate more heat/friction.
 

ygolo

Ah, so using two transistors is the way they work around the problem of the threshold voltage leakage? Is this the same "gate leakage" I hear talked about so much in connection with processor reviews?

The two transistor types are a way to make sure that logic 1s are always Vdd (unless you choose to do it differently, but we'll save that for later), while logic 0s are always ground. In engineering in general, we leave a lot of assumptions in the wake of the design process. Because of this, we have to make sure those assumptions are met reasonably well. In CMOS circuit design, we assume voltages of Vdd or ground for the inputs, so we make sure the outputs reach those levels too (there is a concept of noise margining we can get to later, if it becomes relevant).

Gate leakage is a different issue. Process technology has made the gate oxide (the "O" in MOS) thinner, leading to more tunneling (you know, the quantum kind--actually 2 kinds, but I digress), which produces current through the oxide (which sits under the gate).


Wow. Come to think of it, I can almost imagine all the early x86 instructions being implemented in this way, if the person were creative enough with circuit design to get the logic to work out from a combination of these (explains why there were so many NTs in that field). That is, prior to Pentium class machines or possibly the advent of math co-processors. I bet it got complicated then. I'm still impressed that someone could implement logic in the form of a physical device. It makes one question the idea that human reasoning is irreducibly complex (You can ignore this, I know it's a bit off topic).

In theory you can make ALL the logic functions including the Pentium instructions and math co-processors with 2 input CMOS NAND gates. However, that is not done because we can actually make things much faster with other techniques.

Because... for some reason or other, the value can go through the first inverter positioned between the red and blue transmission gates when CL is low, and then can go through another one positioned later in the circuit (but in a similar yet opposite position) when CL is high?

I'll post the image again for reference:
[Image: the transmission-gate D flip-flop schematic from earlier, reposted for reference]


What happens is that when CLK is high, CL# (my notation for CL with the bar over it) is low, and in turn CL is high (note the use of inverters). The opposite happens when CLK is low.

Ignoring the small time delays, CLK is always the same level as CL which is always the opposite level of CL#.

A transmission gate (all the boxes here are transmission gates) is set up so that when the signal connected to the top is high (and the one connected to the bottom is low), the left and right sides are connected.

I'll let you try and puzzle it out from there since you seem really close.

Aha. That explains some of the limitations of circuit design, and why speeding up the "clock" is the easiest way to speed up a processor, but also causes it to generate more heat/friction.

I see your Ni is working well. You've anticipated what I wanted to talk about in a few posts. There is a fundamental limit to how fast you can cycle the clock without resorting to other techniques (of which pipelining is the most natural). These other techniques tend to take up yet more and more power.

I won't really be going into the heat generation, since that is quite off-tangent, and it is proportional to the power consumption anyway, which is fairly readily understood in terms of current (and our voltage supply).
 

Bear Warp

Oh. A lot of the names come about for historical reasons....

...The names are often just labels. That's my opinion. But I haven't really been tripped up by not knowing the etymology of technical terms. Why is the mouse called a "mouse" (perhaps it resembles one)? Why are errors in design or construction called "bugs?" (a hold-over from the vacuum-tube days?). There probably are reasons for the names, but many times they are lost to history.

OK then:nice:

I feel I'm close to having a strong grip on the inverter and NAND gate circuits, and I'm getting there on the flip-flop.

If I do some drawings, I'm sure it'll all come to me relatively quickly.
 

Athenian200

The two transistor types are a way to make sure that logic 1s are always Vdd (unless you choose to do it differently, but we'll save that for later), while logic 0s are always ground. In engineering in general, we leave a lot of assumptions in the wake of the design process. Because of this, we have to make sure those assumptions are met reasonably well. In CMOS circuit design, we assume voltages of Vdd or ground for the inputs, so we make sure the outputs reach those levels too (there is a concept of noise margining we can get to later, if it becomes relevant).

That's an interesting explanation of how a processor distinguishes 1 from 0 meaningfully... I always wondered how it did that.
Gate leakage is a different issue. Process technology has made the gate oxide (the "O" in MOS) thinner, leading to more tunneling (you know, the quantum kind--actually 2 kinds, but I digress), which produces current through the oxide (which sits under the gate).

That would screw up the whole circuit. It looks like they don't know how to account for the quantum level yet, or if they can compensate for what happens there.

In theory you can make ALL the logic functions including the Pentium instructions and math co-processors with 2 input CMOS NAND gates. However, that is not done because we can actually make things much faster with other techniques.

But if someone were trying to make a processor with the bare minimum of components, that might be done? In other words, if space and component cost became the primary issue instead of speed?

I'll post the image again for reference:
[Image: the transmission-gate D flip-flop schematic from earlier, reposted for reference]


What happens is that when CLK is high, CL# (my notation for CL with the bar over it) is low, and in turn CL is high (note the use of inverters). The opposite happens when CLK is low.

Ignoring the small time delays, CLK is always the same level as CL which is always the opposite level of CL#.

A transmission gate (all the boxes here are transmission gates) is set up so that when the signal connected to the top is high (and the one connected to the bottom is low), the left and right sides are connected.

I'll let you try and puzzle it out from there since you seem really close.

Ah, that's what I was missing. The black ones with vertical lines allow pass-through when CL# is high and CL is low (one state of the clock cycle), and the ones with horizontal colored lines allow pass-through when CL# is low and CL is high. When one is in the opposite state, the inverters hold it in the feedback loop until the clock cycles.

I see your Ni is working well. You've anticipated what I wanted to talk about in a few posts. There is a fundamental limit to how fast you can cycle the clock without resorting to other techniques (of which pipelining is the most natural). These other techniques tend to take up yet more and more power.

I won't really be going into the heat generation, since that is quite off-tangent, and it is proportional to the power consumption anyway, which is fairly readily understood in terms of current (and our voltage supply).

Yeah, it looks like we might be reaching the point where more speed is no longer worth the power cost.
 

ygolo

That's an interesting explanation of how a processor distinguishes 1 from 0 meaningfully... I always wondered how it did that.

Perhaps it's a bit philosophical, but I don't believe the processors really distinguish between 1s and 0s. The processor designers do. The processor simply does what it was designed to.

You know, in retrospect, the reason I gave you for the use of two types of transistors was misleading. You could replace all the PMOS transistors in a basic CMOS circuit with a single resistor (though it would no longer be a CMOS circuit, but an NMOS circuit). Basically, the resistor would pull the output node up whenever the NMOS circuit didn't pull it down (the NMOS circuit would have to be stronger than the resistor, since the resistor is always pulling up). CMOS has MUCH lower power dissipation because once the output switches, no more current is needed until the next time the output changes.

You could alternatively use NMOS where the PMOS were and feed the PMOS-replacement NMOS transistors with inverted signals. But here, the voltage would never get pulled all the way up to Vdd, and the PMOS-replacement NMOS's would have to keep sinking current even after the switch from ground to Vdd-Vth (where Vth is the threshold voltage).

This is a more complicated story (if you don't understand it, then just ignore it). But explaining "why" we design things in certain ways is always complicated because there are many alternatives, and reasons for not using them.

That would screw up the whole circuit. It looks like they don't know how to account for the quantum level yet, or if they can compensate for what happens there.

The leakage currents are small (like 8-10 orders of magnitude smaller than normal dynamic currents). Although leakage has increasingly become problematic, it is more of an issue for the functionality of analog circuits. For digital circuits, it is a problem because it is always there, burning power, even when the circuits aren't switching.

But if someone were trying to make a processor with the bare minimum of components, that might be done? In other words, if space and component cost became the primary issue instead of speed?

You could certainly make a NAND-gate-based processor. But there are area and power reasons for not doing this, too. Also, a lot of the input/output circuits are analog in nature, because the printed circuit board traces (used to connect the chips together) behave like little transmission lines even at MHz frequencies.

Besides, integrated circuits can integrate a LOT. If you have a design which has its area limited by the number of pins that connect the processor to other circuits instead of the size of the circuits themselves, you are wasting die-area (which is directly related to cost).

Ah, that's what I was missing. The black ones with vertical lines allow pass-through when CL# is high and CL is low (one state of the clock cycle), and the ones with horizontal colored lines allow pass-through when CL# is low and CL is high. When one is in the opposite state, the inverters hold it in the feedback loop until the clock cycles.
Yeah, it looks like we might be reaching the point where more speed is no longer worth the power cost.

Glad to see things are making sense, now.
 

Lateralus

I would have been a lot more interested in this about 10 years ago. I took a couple microprocessor design courses, but it started to get too tedious for my taste.
 

ygolo

Circuits and Boolean Equations

So I decided I am going to do smaller chunks than initially planned, because there is a lot of writing involved.

More “Basic” Circuits

Puzzling out circuits earlier was not just for the purposes of understanding those circuits, but because I am about to hit you with A LOT of circuits, and they should be relatively easy to follow now.

So there are other logic gates that are often used besides the 2-input NAND and the inverter.

There are multiple-input NAND gates, which use the same symbol as the 2-input NAND gate but with more inputs feeding the gate. The logical function outputs a logic 1 in all cases except when all the inputs are logic 1 (in which case the output is a logic 0).

Here is an 8-input NAND gate:
[Image: 8-input NAND gate symbol]


If you remove the bubble from the NAND symbol, you get an AND gate. The logical function of an AND gate is to output a logic 0 in all cases except when the inputs are all logic 1s (in which case the output is a logic 1).

Here is the symbol for an 8-input AND gate.
[Image: 8-input AND gate symbol]


An AND gate is constructed by adding an inverter to the output of a NAND gate.

Another common function is a NOR gate. The function of a NOR gate is to output a logic 0 if ANY of the inputs is a logic 1, and only output a logic 1 if all the inputs are logic 0.

A 2-input NOR gate is constructed in the following manner:

[Image: CMOS 2-input NOR gate schematic]


The symbol for a 2-input NOR is:

[Image: 2-input NOR gate symbol]


Multiple input NOR gates have similar symbols:

An 8-input NOR gate:
[Image: 8-input NOR gate symbol]


Invert the output of a NOR gate and you get an OR gate. The output of an OR gate is a logic 1 if and only if at least one of its inputs is a logic 1. Otherwise, if all the inputs are logic 0, the output is a logic 0.

Here is an 8-input OR gate:

[Image: 8-input OR gate symbol]



Multiple Input NAND Gates and NOR gates

Earlier, we saw how to implement 2-input NAND and NOR gates directly from transistors. You can make higher-input NAND and NOR gates in a similar fashion: simply add more NMOS in series and more PMOS in parallel for NAND gates, or more NMOS in parallel and more PMOS in series for NOR gates. Hopefully, you can see how this works logically. However, the gates cannot get very large because of the increase in output node capacitance (from the PMOS drains, even if many are shared) and pull-down resistance (each NMOS has a small resistance, and these add up).

But there is another logical trick that can be used to make higher-input NAND and NOR gates from lower-input ones.

In the NAND case, you simply take the output of a NAND with x inputs, send it through an inverter to one of the inputs of a 2-input NAND gate, and then send the remaining input to the other input of that 2-input NAND. Now you have a NAND gate with x+1 inputs. This is a logically correct construction because the first input to the 2-input NAND is only a logic 1 if all the inputs to the NAND with x inputs are logic 1 (and 0 otherwise). Also, the output of the 2-input NAND is only a logic 0 if both its inputs are logic 1. So we can see that the only way this configuration will output a logic 0 is if all the inputs are logic 1. Otherwise, one of the two inputs to the 2-input NAND gate will be 0, and therefore the output will be a logic 1.

A similar construction works for larger NOR gates. Simply feed the output of a smaller NOR through an inverter to a 2-input NOR that gets the last input.
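
Here is that construction as a minimal Python sketch (logic only, nothing transistor-level; the function names are mine). It builds an N-input NAND recursively out of a smaller NAND, an inverter, and a 2-input NAND, as described above:

```python
from itertools import product

def nand2(a, b):
    """2-input NAND on 0/1 values."""
    return 0 if (a == 1 and b == 1) else 1

def inv(a):
    return 1 - a

def nand_n(*inputs):
    """N-input NAND: NAND the first n-1 inputs, invert that result,
    and feed it together with the last input into a 2-input NAND."""
    if len(inputs) == 2:
        return nand2(*inputs)
    return nand2(inv(nand_n(*inputs[:-1])), inputs[-1])

# Check against the definition: the output is 0 only when all inputs are 1.
for bits in product((0, 1), repeat=4):
    assert nand_n(*bits) == (0 if all(bits) else 1)
print("4-input NAND built from 2-input NANDs checks out")
```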

Various Ways to Describe/Specify General Logic Functions

Hopefully, in the constructions given above it was intuitive to see how the particular functions were built up, and what they were specified to do.

However, in many cases, a more rigorous and organized approach is needed.

Truth Tables

One very brute-force, but rather effective, way to specify a logical function is through what is known as a truth table. This is simply an enumeration of all possible input combinations with a specification of what the output should be.

The truth table for a 3-input NAND is:
A B C|Out
0 0 0|1
0 0 1|1
0 1 0|1
0 1 1|1
1 0 0|1
1 0 1|1
1 1 0|1
1 1 1|0
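
If you'd rather generate a table like that than write it out by hand, here is a minimal Python sketch that prints the 3-input NAND truth table above:

```python
from itertools import product

def nand(*inputs):
    """N-input NAND: 0 only when every input is 1."""
    return 0 if all(inputs) else 1

print("A B C|Out")
for a, b, c in product((0, 1), repeat=3):
    print(f"{a} {b} {c}|{nand(a, b, c)}")
```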

Boolean Equations

More compactly, we can generally specify a function through a Boolean equation.

They will look something like Y=A#*B+C, where A# means the inverted version, otherwise known as the “complement,” of A. The “*” indicates an AND of what is on the left and the right, while a “+” indicates an OR of what is on the left and the right.

Generally, the order of operations is to do all # first, then all *, then all +.
Parentheses can change the order.

Y=A#*B+C is the same as saying Y=(A#*B)+C. However, Y=A#*(B+C) is different.

The “=” can be used in subtly different ways. It can mean that a particular signal is defined a particular way, or it can mean that what is on both sides is logically equivalent.

Manipulating Boolean equations should be rather straightforward once you understand what they are. Seeing a direct implementation using logic gates should be just as easy.

See if you can verify that the following identities are true (use truth tables if needed), and at the same time see if you can picture the circuits each side of the equation would yield directly (a brute-force check in code is sketched after the list):

A*0=0
A+1=1
A#+A=1
A#*A=0
A##=A
A*B=B*A
A+B=B+A
A*A=A
A+A=A
A*(B*C)=(A*B)*C
A+(B+C)=(A+B)+C
A*(B+C)=A*B+A*C
A+B*C=(A+B)*(A+C)
(A+B)#=A#*B#
(A*B)#=A#+B#
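
Here is the brute-force check mentioned above, as a minimal Python sketch (using 1 - x for the complement #, & for *, and | for +; each pair of lambdas is one identity from the list, in order):

```python
from itertools import product

identities = [
    (lambda A, B, C: A & 0,          lambda A, B, C: 0),
    (lambda A, B, C: A | 1,          lambda A, B, C: 1),
    (lambda A, B, C: (1 - A) | A,    lambda A, B, C: 1),
    (lambda A, B, C: (1 - A) & A,    lambda A, B, C: 0),
    (lambda A, B, C: 1 - (1 - A),    lambda A, B, C: A),
    (lambda A, B, C: A & B,          lambda A, B, C: B & A),
    (lambda A, B, C: A | B,          lambda A, B, C: B | A),
    (lambda A, B, C: A & A,          lambda A, B, C: A),
    (lambda A, B, C: A | A,          lambda A, B, C: A),
    (lambda A, B, C: A & (B & C),    lambda A, B, C: (A & B) & C),
    (lambda A, B, C: A | (B | C),    lambda A, B, C: (A | B) | C),
    (lambda A, B, C: A & (B | C),    lambda A, B, C: (A & B) | (A & C)),
    (lambda A, B, C: A | (B & C),    lambda A, B, C: (A | B) & (A | C)),
    (lambda A, B, C: 1 - (A | B),    lambda A, B, C: (1 - A) & (1 - B)),
    (lambda A, B, C: 1 - (A & B),    lambda A, B, C: (1 - A) | (1 - B)),
]

for lhs, rhs in identities:
    assert all(lhs(A, B, C) == rhs(A, B, C)
               for A, B, C in product((0, 1), repeat=3))
print("all", len(identities), "identities hold")
```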
 