View Poll Results: So Please Choose:

  • I want to read ygolo's brain dump on Computer Architecture (and perhaps do some dumping of my own) - 12 votes (54.55%)
  • I explicitly don't want a thread like this. - 5 votes (22.73%)
  • I want to click on something. - 5 votes (22.73%)

Voters: 22

Results 31 to 39 of 39

  1. #31
    Senior Member Bear Warp's Avatar
    Join Date
    May 2008
    MBTI
    epyT
    Posts
    145

    Default

    What I want to know is what is meant by drain, what is meant by source, and so on.

  2. #32

    Default

    Oh. A lot of the names come about for historical reasons.
The enhancement MOSFET was not the first field effect transistor (FET), nor was a FET the first transistor. Bipolar Junction Transistors (BJTs) came first, and before those there were vacuum tubes.

A BJT's useful terminals were the emitter, collector, and base (which made some sense based on the physical flow of charge carriers, but I wouldn't be surprised if the names were left over from the vacuum-tube days). I think source and drain were meant to sound like emitter and collector. In an enhancement MOSFET, the source and drain are physically the same due to the symmetry of the structure. You may be able to think of the charge carriers as starting from the source and "draining" into the drain, but that is not really accurate. The gate you can think of as "gating" the transistor's operation: open the "gate" to let the current flow. The "bulk" may be an allusion to the fact that we have "bulk" silicon before any of the processing is done to make it an integrated circuit.

    I'm speculating. I've gotten so used to calling them what they are conventionally called, I am not sure I can give accurate answers.

With this stuff (and probably a lot of EE/CS-related things), the naming tends to be highly arbitrary and based on the history of development. IMO, it is better to learn the meaning of terms through their use and context, since the names themselves often carry little intrinsic meaning.

    The names are often just labels. That's my opinion. But I haven't really been tripped up by not knowing the etymology of technical terms. Why is the mouse called a "mouse" (perhaps it resembles one)? Why are errors in design or construction called "bugs?" (a hold-over from the vacuum-tube days?). There probably are reasons for the names, but many times they are lost to history.

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  3. #33
    Protocol Droid Athenian200's Avatar
    Join Date
    Jul 2007
    MBTI
    INFJ
    Enneagram
    4w5
    Posts
    8,828

    Default

    Quote Originally Posted by ygolo View Post
The reason they need two kinds of transistors is that the NMOS is not able to transfer a logic 1 from drain to source faithfully; it drops the voltage level by a threshold voltage (the minimum gate voltage needed to turn the transistor on). The PMOS transistor, however, cannot transfer a logic 0 faithfully. It transfers a voltage that is a threshold voltage above ground.
    Ah, so using two transistors is the way they work around the problem of the threshold voltage leakage? Is this the same "gate leakage" I hear talked about so much in connection with processor reviews?

Again, there is a lot that can be said about FETs and the differences between them. But as long as you can identify NMOS and PMOS in a standard CMOS circuit, we will assume they are enhancement mode (don't worry if this distinction is not clear right now); that should be enough to proceed for our purposes.
    Yes, there's quite a bit to learn, I imagine.
    The inverter works because when the input is a logic 1 (a high voltage), then the NMOS transistor on the bottom pulls the output node down to ground (a logic 0). But if the input is a logic 0 (a low voltage) then the PMOS transistor on the top pulls the output node to Vdd (a logic 1).
    That seems to be exactly how inversion should work.


    Not quite. For digital CMOS circuits like these you don't need to worry about references or voltage comparisons (very much). Just treat the transistors like switches.

    Here are a couple of things to note:
    In order to connect the output node to ground, you need both Q3 and Q4 to connect their respective drains to their sources. So you need both their inputs to be logic 1s.
In order to connect the output node to Vdd, you need either Q1 or Q2 (or both) to connect their respective drains to their sources. This happens when either (or both) of their inputs are logic 0s.
    Oh. So it's about circuits being completed, or not being completed under certain conditions. That's actually simpler. And I think I knew what it was doing on a higher level, so I probably have it now.




    Yup. Until other means of computing become efficient, the logic is implemented electronically.

The thing about the NAND gate is that it is a "logically complete" set of gates by itself. That means you can implement all possible logical functions just by using 2-input NAND gates. You can make it into an inverter by connecting the inputs together, or by connecting one of the inputs to 1. You can create an AND function by inverting the output of the NAND gate (using a NAND as the inverter). You can get an OR gate by inverting the inputs (using NANDs as inverters) to a NAND gate.
    Wow. Come to think of it, I can almost imagine all the early x86 instructions being implemented in this way, if the person were creative enough with circuit design to get the logic to work out from a combination of these (explains why there were so many NTs in that field). That is, prior to Pentium class machines or possibly the advent of math co-processors. I bet it got complicated then. I'm still impressed that someone could implement logic in the form of a physical device. It makes one question the idea that human reasoning is irreducibly complex (You can ignore this, I know it's a bit off topic).

That is basically it (though it is more black vs. colored rather than red vs. blue). Can you see why what was on D gets transferred to the red transmission gate when CL is low, and then transferred from the red transmission gate to the Q node when CL is high?
    Because... for some reason or other, the value can go through the first inverter positioned between the red and blue transmission gates when CL is low, and then can go through another one positioned later in the circuit (but in a similar yet opposite position) when CL is high?

It is volatile memory. But it is a "static" type. The feedback loops in the latches keep the values stored (as long as the circuit has power) even when there is no external circuit driving the D-input.
    Refreshes (and "pre-charges") are usually associated with "dynamic" circuits, because in these the logic values are stored capacitively, not with the use of a static feedback loop.
    This is also interesting stuff. But more in the realm of circuit design.


    That is the essential idea. The clock cycles on a periodic basis, and these flip-flops will be used in various parts of a circuit to synchronize other signals to the clock.

    Aha. That explains some of the limitations of circuit design, and why speeding up the "clock" is the easiest way to speed up a processor, but also causes it to generate more heat/friction.

  4. #34

    Default

    Quote Originally Posted by Athenian200 View Post
    Ah, so using two transistors is the way they work around the problem of the threshold voltage leakage? Is this the same "gate leakage" I hear talked about so much in connection with processor reviews?
Using the two transistor types is a way to make sure that logic 1's are always Vdd (unless you choose to do it differently, but we'll save that for later), while logic 0 is always ground. In engineering in general, we leave a lot of assumptions in the wake of the design process. Because of this, we have to make sure these assumptions are met reasonably well. In CMOS circuit design, we assume voltages of Vdd or ground for the inputs, so we make sure the outputs reach those levels too (there is a concept of Noise Margining we can get to later, if it becomes relevant).

Gate leakage is a different issue. Process technology has made the gate oxide (the "O" in MOS) thinner, leading to more tunneling (you know, the quantum kind; actually two kinds, but I digress), which produces current through the oxide under the gate.


    Quote Originally Posted by Athenian200 View Post
    Wow. Come to think of it, I can almost imagine all the early x86 instructions being implemented in this way, if the person were creative enough with circuit design to get the logic to work out from a combination of these (explains why there were so many NTs in that field). That is, prior to Pentium class machines or possibly the advent of math co-processors. I bet it got complicated then. I'm still impressed that someone could implement logic in the form of a physical device. It makes one question the idea that human reasoning is irreducibly complex (You can ignore this, I know it's a bit off topic).
    In theory you can make ALL the logic functions including the Pentium instructions and math co-processors with 2 input CMOS NAND gates. However, that is not done because we can actually make things much faster with other techniques.
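To make that concrete, here is a small Python sketch (Python standing in for hardware, and the function names are my own) that builds NOT, AND, and OR out of nothing but a 2-input NAND:

```python
# A minimal sketch of NAND's logical completeness: every gate below is
# built only from the 2-input NAND function.

def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):            # tie both NAND inputs together
    return nand(a, a)

def and_(a, b):         # invert the NAND output
    return not_(nand(a, b))

def or_(a, b):          # invert both inputs to a NAND (De Morgan)
    return nand(not_(a), not_(b))

# Quick check against the usual truth tables
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == 1 - a
```

Real designs do not do this, for exactly the speed and area reasons above, but it is a nice way to see that the 2-input NAND really is enough.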

    Quote Originally Posted by Athenian200 View Post
    Because... for some reason or other, the value can go through the first inverter positioned between the red and blue transmission gates when CL is low, and then can go through another one positioned later in the circuit (but in a similar yet opposite position) when CL is high?
I'll post the image again for reference:

[Image: D flip-flop built from transmission gates and inverters, clocked by CL and CL#]
What happens is that when CLK is high, CL# (my notation for CL with the bar over it) is low, and in turn CL is high (note the use of inverters). The opposite happens when CLK is low.

Ignoring the small time delays, CLK is always at the same level as CL, which is always the opposite level of CL#.

A transmission gate (all the boxes here are transmission gates) is set up so that when the signal connected to the top is high (and the one connected to the bottom is low), the left and right sides are connected.

    I'll let you try and puzzle it out from there since you seem really close.
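(If it helps while you puzzle it out, here is a rough behavioral sketch in Python of what the two latches do. It models the latches as simple assignments rather than transmission gates and inverters, and the class and variable names are mine.)

```python
# Behavioural model of the master/slave flip-flop: the master latch is
# transparent while CL is low, the slave while CL is high, so Q only picks
# up a new value around the rising edge of the clock.

class DFlipFlop:
    def __init__(self):
        self.master = 0   # value held behind the first (red) transmission gate
        self.q = 0        # the Q output node

    def tick(self, d, cl):
        if cl == 0:
            self.master = d        # master follows D while the clock is low
        else:
            self.q = self.master   # slave copies the master while the clock is high
        return self.q

ff = DFlipFlop()
for cl, d in [(0, 1), (1, 1), (0, 0), (1, 0)]:
    print(f"CL={cl} D={d} -> Q={ff.tick(d, cl)}")   # Q changes only when CL goes high
```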

    Quote Originally Posted by Athenian200 View Post
    Aha. That explains some of the limitations of circuit design, and why speeding up the "clock" is the easiest way to speed up a processor, but also causes it to generate more heat/friction.
I see your Ni is working well. You've anticipated what I wanted to talk about in a few posts. There is a fundamental limit to how fast you can cycle the clock without resorting to other techniques (of which pipelining is the most natural). These other techniques tend to take up yet more and more power.

I won't really be going into the heat generation, since that is quite tangential, and it is proportional to the power consumption anyway, which is fairly readily understood in terms of current (and our voltage supply).

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  5. #35
    Senior Member Bear Warp's Avatar
    Join Date
    May 2008
    MBTI
    epyT
    Posts
    145

    Default

    Quote Originally Posted by ygolo View Post
    Oh. A lot of the names come about for historical reasons....

    ...The names are often just labels. That's my opinion. But I haven't really been tripped up by not knowing the etymology of technical terms. Why is the mouse called a "mouse" (perhaps it resembles one)? Why are errors in design or construction called "bugs?" (a hold-over from the vacuum-tube days?). There probably are reasons for the names, but many times they are lost to history.
    OK then

    I feel I'm close to having a strong grip on the inverter and NAND gate circuits, and I'm getting there on the flip-flop.

    If I do some drawings, I'm sure it'll all come to me relatively quickly.

  6. #36
    Protocol Droid Athenian200's Avatar
    Join Date
    Jul 2007
    MBTI
    INFJ
    Enneagram
    4w5
    Posts
    8,828

    Default

    Quote Originally Posted by ygolo View Post
Using the two transistor types is a way to make sure that logic 1's are always Vdd (unless you choose to do it differently, but we'll save that for later), while logic 0 is always ground. In engineering in general, we leave a lot of assumptions in the wake of the design process. Because of this, we have to make sure these assumptions are met reasonably well. In CMOS circuit design, we assume voltages of Vdd or ground for the inputs, so we make sure the outputs reach those levels too (there is a concept of Noise Margining we can get to later, if it becomes relevant).
    That's an interesting explanation of how a processor distinguishes 1 from 0 meaningfully... I always wondered how it did that.
Gate leakage is a different issue. Process technology has made the gate oxide (the "O" in MOS) thinner, leading to more tunneling (you know, the quantum kind; actually two kinds, but I digress), which produces current through the oxide under the gate.
    That would screw up the whole circuit. It looks like they don't know how to account for the quantum level yet, or if they can compensate for what happens there.

    In theory you can make ALL the logic functions including the Pentium instructions and math co-processors with 2 input CMOS NAND gates. However, that is not done because we can actually make things much faster with other techniques.
    But if someone were trying to make a processor with the bare minimum of components, that might be done? In other words, if space and component cost became the primary issue instead of speed?

I'll post the image again for reference:

[Image: D flip-flop built from transmission gates and inverters, clocked by CL and CL#]
What happens is that when CLK is high, CL# (my notation for CL with the bar over it) is low, and in turn CL is high (note the use of inverters). The opposite happens when CLK is low.

Ignoring the small time delays, CLK is always at the same level as CL, which is always the opposite level of CL#.

A transmission gate (all the boxes here are transmission gates) is set up so that when the signal connected to the top is high (and the one connected to the bottom is low), the left and right sides are connected.

    I'll let you try and puzzle it out from there since you seem really close.
    Ah, that's what I was missing. The black ones with vertical lines allow pass-through when CL# is high and CL is low (one state of the clock cycle), and the ones with horizontal colored lines allow pass-through when CL# is low and CL is high. When one is in the opposite state, the inverters hold it in the feedback loop until the clock cycles.

I see your Ni is working well. You've anticipated what I wanted to talk about in a few posts. There is a fundamental limit to how fast you can cycle the clock without resorting to other techniques (of which pipelining is the most natural). These other techniques tend to take up yet more and more power.

I won't really be going into the heat generation, since that is quite tangential, and it is proportional to the power consumption anyway, which is fairly readily understood in terms of current (and our voltage supply).
    Yeah, it looks like we might be reaching the point where more speed is no longer worth the power cost.

  7. #37

    Default

    Quote Originally Posted by Athenian200 View Post
    That's an interesting explanation of how a processor distinguishes 1 from 0 meaningfully... I always wondered how it did that.
Perhaps it's a bit philosophical, but I don't believe the processors really distinguish between 1's and 0's. The processor designers do. The processor simply does what it was designed to do.

You know, in retrospect, the reason I gave you for the use of two types of transistors was misleading. You could replace all the PMOS transistors in a basic CMOS circuit with a single resistor (though it would no longer be a CMOS circuit, but an NMOS one). Basically, the resistor would pull the output node up whenever the NMOS circuit didn't pull it down (the NMOS circuit would have to be stronger than the resistor, since the resistor is always pulling up). CMOS has MUCH lower power dissipation because once the output switches, no more current is needed until the next time the output changes.

You could alternatively use NMOS transistors where the PMOS transistors were, and feed those replacement NMOS transistors with inverted signals. But here, the output would never get pulled all the way up to Vdd, and the replacement NMOS transistors would have to keep sinking current even after the output switches from ground to Vdd-Vth (where Vth is the threshold voltage).

    This is a more complicated story (if you don't understand it, then just ignore it). But explaining "why" we design things in certain ways is always complicated because there are many alternatives, and reasons for not using them.
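To put made-up numbers on the Vdd-Vth point (the values below are purely illustrative, not from any real process):

```python
# Why an NMOS-only pull-up degrades the high level: an NMOS pulling up can
# only reach Vdd - Vth before its gate-source voltage falls to the threshold
# and it stops conducting.  The values here are assumptions for illustration.

vdd = 1.2   # supply voltage, volts
vth = 0.4   # threshold voltage, volts

pmos_high = vdd         # a PMOS pull-up can deliver the full supply
nmos_high = vdd - vth   # an NMOS pull-up tops out a threshold below it
print(f"PMOS pull-up high level: {pmos_high:.1f} V")
print(f"NMOS pull-up high level: {nmos_high:.1f} V")
```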

    Quote Originally Posted by Athenian200 View Post
    That would screw up the whole circuit. It looks like they don't know how to account for the quantum level yet, or if they can compensate for what happens there.
The leakage currents are small (like 8-10 orders of magnitude smaller than normal dynamic currents). Although leakage has increasingly become problematic, it is more of an issue for the functionality of analog circuits. For digital circuits, the problem is that the leakage is always there, burning power, even when the circuits aren't switching.

    Quote Originally Posted by Athenian200 View Post
    But if someone were trying to make a processor with the bare minimum of components, that might be done? In other words, if space and component cost became the primary issue instead of speed?
You could certainly make a NAND-gate-based processor. But there are area and power reasons for not doing this as well. Also, a lot of the input/output circuits are analog in nature, because the printed circuit board traces (used to connect the chips together) behave like little transmission lines even at MHz frequencies.

    Besides, integrated circuits can integrate a LOT. If you have a design which has its area limited by the number of pins that connect the processor to other circuits instead of the size of the circuits themselves, you are wasting die-area (which is directly related to cost).

    Quote Originally Posted by Athenian200 View Post
    Ah, that's what I was missing. The black ones with vertical lines allow pass-through when CL# is high and CL is low (one state of the clock cycle), and the ones with horizontal colored lines allow pass-through when CL# is low and CL is high. When one is in the opposite state, the inverters hold it in the feedback loop until the clock cycles.
    Quote Originally Posted by Athenian200 View Post
    Yeah, it looks like we might be reaching the point where more speed is no longer worth the power cost.
    Glad to see things are making sense, now.

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  8. #38
    Senior Member Lateralus's Avatar
    Join Date
    May 2007
    MBTI
    ENTJ
    Enneagram
    3w4
    Posts
    6,276

    Default

    I would have been a lot more interested in this about 10 years ago. I took a couple microprocessor design courses, but it started to get too tedious for my taste.

  9. #39

    Default Circuits and Boolean Equations

So I have decided to do smaller chunks than I initially planned, because there is a lot of writing involved.

    More “Basic” Circuits

    Puzzling out circuits earlier was not just for the purposes of understanding those circuits, but because I am about to hit you with A LOT of circuits, and they should be relatively easy to follow now.

    So there are other logic gates that are often used besides the 2-input NAND and the inverter.

There are multiple-input NAND gates, which use the same symbol as the 2-input NAND gate but with more inputs feeding the gate. The logical function outputs a logic 1 in all cases except when all the inputs are logic 1 (in which case the output is a logic 0).

    Here is an 8-input NAND gate:
[Image: 8-input NAND gate symbol (nand8.gif)]

If you remove the bubble from the NAND symbol, you get an AND gate. The logical function of an AND gate is to output a logic 0 in all cases except when the inputs are all logic 1s (in which case the output is a logic 1).

    Here is the symbol for an 8-input AND gate.
[Image: 8-input AND gate symbol (and8.gif)]

    An AND gate is constructed by adding an inverter to the output of a NAND gate.

    Another common function is a NOR gate. The function of a NOR gate is to output a logic 0 if ANY of the inputs is a logic 1, and only output a logic 1 if all the inputs are logic 0.

A 2-input NOR gate is constructed in the following manner:

[Image: transistor-level schematic of a 2-input NOR gate]

The symbol for a 2-input NOR is:

[Image: 2-input NOR gate symbol (nor2.gif)]

    Multiple input NOR gates have similar symbols:

    An 8-input NOR gate:
[Image: 8-input NOR gate symbol (nor8.gif)]

Invert the output of a NOR gate and you get an OR gate. The output of an OR gate is a logic 1 if and only if at least one of its inputs is a logic 1. Otherwise, if all the inputs are logic 0, then the output is a logic 0.

    Here is an 8-input OR gate:

[Image: 8-input OR gate symbol (or8.gif)]
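To pull the four gate definitions above into one place, here is a small Python sketch (the function names are mine) that works for any number of inputs:

```python
# NAND, AND, NOR, OR for an arbitrary number of inputs, written directly
# from the word definitions given above.

def and_gate(*inputs):
    return 1 if all(inputs) else 0

def nand_gate(*inputs):        # AND with the output inverted
    return 1 - and_gate(*inputs)

def or_gate(*inputs):
    return 1 if any(inputs) else 0

def nor_gate(*inputs):         # OR with the output inverted
    return 1 - or_gate(*inputs)

# An 8-input NAND outputs 0 only when all eight inputs are 1:
print(nand_gate(1, 1, 1, 1, 1, 1, 1, 1))   # -> 0
print(nand_gate(1, 1, 1, 0, 1, 1, 1, 1))   # -> 1
```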


    Multiple Input NAND Gates and NOR gates

Earlier, we saw how to implement 2-input NAND and NOR gates directly from transistors. You can make higher-input NAND and NOR gates in a similar fashion: simply add more NMOS in series and more PMOS in parallel for NAND gates, or more NMOS in parallel and more PMOS in series for NOR gates. Hopefully, you can see how this works logically. However, the gates cannot get very large because of the increase in output node capacitance (from the PMOS drains, even if many are shared) and pull-down resistance (each NMOS has a small resistance that adds up).
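Here is a switch-level way to see that series/parallel idea, sketched in Python (a purely logical model: it treats each transistor as an ideal switch and ignores the capacitance and resistance problems just mentioned):

```python
# An NMOS conducts when its gate is 1, a PMOS when its gate is 0.  A series
# stack conducts only if every transistor in it conducts; a parallel group
# conducts if any one of them does.

def nand_switch_level(*inputs):
    pull_down = all(v == 1 for v in inputs)   # NMOS in series to ground
    pull_up = any(v == 0 for v in inputs)     # PMOS in parallel to Vdd
    assert pull_up != pull_down               # exactly one network wins
    return 1 if pull_up else 0

def nor_switch_level(*inputs):
    pull_down = any(v == 1 for v in inputs)   # NMOS in parallel to ground
    pull_up = all(v == 0 for v in inputs)     # PMOS in series to Vdd
    assert pull_up != pull_down
    return 1 if pull_up else 0

print(nand_switch_level(1, 1, 1))   # -> 0
print(nor_switch_level(0, 0, 0))    # -> 1
```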

But there is another logical trick that can be used to make higher-input NAND and NOR gates from lower-input ones.

In the NAND case, you simply take the output of a NAND with x inputs, send it through an inverter to one of the inputs of a 2-input NAND gate, then take the remaining input and send it to the other input of the 2-input NAND. Now you have a NAND gate with x+1 inputs. This is a logically correct construction because the first input to the 2-input NAND is only a logic 1 if all the inputs to the NAND with x inputs are logic 1 (and 0 otherwise). Also, the output of the 2-input NAND is only a logic 0 if both its inputs are logic 1. So we can see that the only way this configuration will output a logic 0 is if all the inputs are logic 1. Otherwise, one of the two inputs to the 2-input NAND gate will be 0, and therefore the output will be a logic 1.

    A similar construction works for larger NOR gates. Simply feed the output of a smaller NOR through an inverter to a 2-input NOR that gets the last input.
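If you want to convince yourself that the construction works, here is a quick Python sketch (the names are mine) that builds an n-input NAND this way and checks it against the direct definition:

```python
from itertools import product

def nand2(a, b):
    return 0 if (a and b) else 1

def inv(a):
    return 1 - a

def nand_n(*inputs):
    """n-input NAND built only from 2-input NANDs and inverters."""
    if len(inputs) == 2:
        return nand2(*inputs)
    # NAND the first x inputs, invert that, and NAND it with the remaining input.
    return nand2(inv(nand_n(*inputs[:-1])), inputs[-1])

# Check every 4-input combination against the direct definition of NAND.
for bits in product((0, 1), repeat=4):
    assert nand_n(*bits) == (0 if all(bits) else 1)
print("4-input NAND built from 2-input NANDs checks out.")
```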

    Various Ways to Describe/Specify General Logic Functions

    Hopefully, in the constructions given above it was intuitive to see how the particular functions were built up, and what they were specified to do.

    However, in many cases, a more rigorous and organized approach is needed.

    Truth Tables

One very brute-force, but rather effective, way to specify a logical function is through what is known as a truth table. This is simply an enumeration of all possible input combinations, with a specification of what the output should be.

The truth table for a 3-input NAND is:
    A B C|Out
    0 0 0|1
    0 0 1|1
    0 1 0|1
    0 1 1|1
    1 0 0|1
    1 0 1|1
    1 1 0|1
    1 1 1|0
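The same table can be generated mechanically. Here is a short Python sketch (purely illustrative) that enumerates every input combination, which is all a truth table really is:

```python
from itertools import product

def nand3(a, b, c):
    return 0 if (a and b and c) else 1

print("A B C | Out")
for a, b, c in product((0, 1), repeat=3):
    print(f"{a} {b} {c} |  {nand3(a, b, c)}")
```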

    Boolean Equations

More compactly, we can generally specify a function through a Boolean equation.

They will look something like: Y=A#*B+C, where A# means the inverted version, otherwise known as the “complement”, of A. The “*” indicates an AND of what is on its left and right, while a “+” indicates an OR of what is on its left and right.

    Generally, the order of operations is to do all # first, then all *, then all +.
    Parentheses can change the order.

    Y=A#*B+C is the same as saying Y=(A#*B)+C. However, Y=A#*(B+C) is different.
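A quick Python check (the inv() helper and the particular input values are mine) showing that the parentheses really do change the result:

```python
def inv(a):
    return 1 - a

A, B, C = 1, 0, 1
y1 = (inv(A) and B) or C    # Y = A#*B + C  (the default order of operations)
y2 = inv(A) and (B or C)    # Y = A#*(B + C)
print(y1, y2)               # -> 1 0, so the two expressions differ
```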

The “=” can be used in subtly different ways. It can mean that a particular signal is defined a particular way, or it can mean that what is on both sides is logically equivalent.

Manipulating Boolean equations should be rather straightforward once you understand what they are. Seeing a direct implementation using logic gates should be just as easy.

See if you can show that the following are true (use truth tables if needed), and at the same time see what circuit each side of the equation would yield directly:

    A*0=0
    A+1=1
    A#+A=1
    A#*A=0
    A##=A
    A*B=B*A
    A+B=B+A
    A*A=A
    A+A=A
    A*(B*C)=(A*B)*C
    A+(B+C)=(A+B)+C
    A*(B+C)=A*B+A*C
A+B*C=(A+B)*(A+C)
    (A+B)#=A#*B#
    (A*B)#=A#+B#
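If you would rather not grind through fifteen truth tables by hand, here is a brute-force Python check of the identities above (writing # as inv(), * as &, and + as |):

```python
from itertools import product

def inv(x):
    return 1 - x

for A, B, C in product((0, 1), repeat=3):
    assert (A & 0) == 0
    assert (A | 1) == 1
    assert (inv(A) | A) == 1
    assert (inv(A) & A) == 0
    assert inv(inv(A)) == A
    assert (A & B) == (B & A)
    assert (A | B) == (B | A)
    assert (A & A) == A
    assert (A | A) == A
    assert (A & (B & C)) == ((A & B) & C)
    assert (A | (B | C)) == ((A | B) | C)
    assert (A & (B | C)) == ((A & B) | (A & C))
    assert (A | (B & C)) == ((A | B) & (A | C))
    assert inv(A | B) == (inv(A) & inv(B))      # De Morgan
    assert inv(A & B) == (inv(A) | inv(B))      # De Morgan

print("All fifteen identities hold for every combination of A, B, C.")
```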

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

