Results 111 to 120 of 129

  1. #111
    Occasional Member Evan's Avatar
    Join Date
    Nov 2007
    MBTI
    INFJ
    Enneagram
    1
    Posts
    4,223

    Default

    Quote Originally Posted by CaptainChick View Post
    I am a living thing made up of nonliving parts; it is a perplexing thing to think about.

    I am no cognitive-neuroscientist, but I am quite sure that a computer is a non-living thing made up of non-living parts.
    The real question is, since both humans and computers are made up of non-living parts, how can you think of a non-gray boundary between the two? They're fundamentally the same thing.

    Our concept of "life" is an emergent property of a certain kind of processing system. It's not anything about the physical implementation of that system.

    So basically, a brain is a specific kind of computer (with certain emergent properties that we call consciousness, the mind, feelings, etc.). A computer is not necessarily a brain, obviously. But we could program a computer and find ourselves looking at a system with similar (or equivalent) emergent properties. A properly programmed computer could literally "think".

    Because, if a human can think, and a human is made up of non-living functional parts, why wouldn't a computer be able to?

  2. #112
    Senior Member
    Join Date
    Jun 2008
    Posts
    681

    Default

    Quote Originally Posted by Nocapszy View Post
    And in response to your last post to me: I don't have to present anything in a thorough or even respectable way. I'm not here to convince anyone. I'm here to post my ideas. The computer thing is irrefutable. Dissonance doesn't know how to present it and I don't feel any sense of responsibility to explain it, so if you really want to know the truth, you'll look it up.

    Otherwise, it's clear that you're not interested in finding out about anything -- just defending CC, or knocking down dis.

    For the record, he and I aren't even the first two people to consider it, and we're not the only ones to believe it now.

    In my experience, the only people who don't see how a brain is a computer are people who don't really know the fundamental workings of either.

    If you read my posts and understood them (and the implications they make), then you would know why I object to your brash oversimplification of this matter, that is, why your analogy is not justified.

    It's a pretty simple idea that the brain works like a computer, but it's an oversimplified analogy that "Brain = computer", and I've explained many ways in which they are very different.

    It's not like I don't understand the idea, Nocap; I've thought of it myself very often, but I go a few steps further in my reasoning and take more things into account instead of ignoring them to make my idea fit 'just right'.

    Because you have failed to defend your idea, all it shows me is that you don't know how to do it; instead you try to take the high-and-mighty chair by saying that your idea is irrefutable and that you just don't care to prove it, that you only present your ideas and that we're the ones who are supposed to research them. I'm sorry, but that's not how arguments work. If your idea were so irrefutable, you would be able to explain it with ease, because it would be perfectly logical; and were it irrefutable and easily reachable by intuition, then I would find myself agreeing with you.

    Quote Originally Posted by dissonance View Post
    Meh. I do have problems describing my Ni "vision". My Intuition AND Thinking are based on the internal standard, so while my views are logically sound, it's somewhat unnatural for me to translate them. Especially because I don't have the luxury of non-verbal communication (tone, gesture, etc.).

    I never have any problem explaining my ideas to INTPs in real life, but I seem to be constantly misunderstood on the internet by INTPs especially.

    It's funny, because I've talked about all of these ideas (the analogy program:computer::mind:brain for example) to all of my professors who all understand exactly what my viewpoint is. One of them is INTP, one ENTP, one INTJ. They always agree with me too...

    Gah, these threads are so frustrating because it's taken me like 50 times the effort to get my point across (and people are still misunderstanding) than in real life.

    I hope nocapszy helps me out here because he at least gets what I'm sayin... (although people WANT him to be wrong, because he doesn't care about tact, heh.)

    Anyways, I'm done with this thread because I've presented my ideas in like 10 different ways and lots of people still don't get it.
    Appealing to authority doesn't make you any more right in my eyes

    I understand your idea dissonance, trust me, I do, but I see flaws; that's what I do. As I just mentioned in responding to nocap, the idea makes sense, but only because it's too simple, and the oversimplification of it makes it inaccurate; take the ideas that I've presented as evidence for their (the brain and computer's) differences and you'll see that the oversimplified analogy "Brain = Computer" is slightly ignorant...

  3. #113
    Occasional Member Evan's Avatar
    Join Date
    Nov 2007
    MBTI
    INFJ
    Enneagram
    1
    Posts
    4,223

    Default

    Quote Originally Posted by Didums View Post
    Appealing to authority doesn't make you any more right in my eyes
    I'm not trying to appeal to authority. I'm trying to explain how frustrating this is for me. You SHOULDN'T believe me more because other people do...but maybe you should believe that there's validity, and try to think of different ways to interpret my points until you find one that works better than your current one.

    I understand your idea dissonance, trust me, I do, but I see flaws; that's what I do.
    As do I, and there are none here. The flaws you're seeing are flaws in interpretation (maybe in my writing?), not in the idea.

    As I just mentioned in responding to nocap, the idea makes sense, but only because it's too simple, and the oversimplification of it makes it inaccurate; take the ideas that I've presented as evidence for their (the brain and computer's) differences and you'll see that the oversimplified analogy "Brain = Computer" is slightly ignorant...
    I don't see how any of your ideas disprove anything I've said. Humans have the subjective experience of "feeling" or "pain" or whatever standard argument you want to use, therefore we are fundamentally different than computers? Pretty weak, if you ask me...

    It's obvious that we are made out of non-living parts. It's obvious a computer is made out of non-living parts. In that sense, saying the two systems aren't capable of implementing the same function is like saying vacuum-tube computers can't do the same things as other sorts of Turing machines...

    You can make a computing machine out of Legos. Would you call the output of a Lego calculator a fundamentally different sort of thing than the output of some other desk calculator? No.

    So, sure, the exact materials making up the brain and making up computers are different. But to say that one is so fundamentally different from the other seems short-sighted. A computer is just an information processing machine. What does the brain do that isn't information processing?

    A properly programmed computer can think/feel/whatever (we are exactly those computers). To say the programming could not even hypothetically be instantiated on another system doesn't make sense.
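    The Lego-calculator point is substrate independence, and it can be sketched in a few lines: the same boolean function (XOR, here) realized by ordinary arithmetic and realized purely out of NAND gates (the kind of primitive a Lego or vacuum-tube machine could implement) produces identical outputs. The function names below are invented for illustration.

```python
def xor_arithmetic(x, y):
    """Implementation A: plain arithmetic."""
    return (x + y) % 2

def nand(x, y):
    """A single NAND 'gate', the kind of primitive any physical substrate could realize."""
    return 1 - (x & y)

def xor_gates(x, y):
    """Implementation B: the same function built entirely from NAND gates."""
    n = nand(x, y)
    return nand(nand(x, n), nand(y, n))

# Functionally indistinguishable: same inputs, same outputs.
for x in (0, 1):
    for y in (0, 1):
        assert xor_arithmetic(x, y) == xor_gates(x, y)
```

    Nothing in the outputs reveals which implementation produced them, which is the sense in which the physical realization doesn't matter.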


    A computer is a device that accepts information (in the form of digitalized data) and manipulates it for some result based on a program or sequence of instructions on how the data is to be processed. Complex computers also include the means for storing data (including the program, which is also a form of data) for some necessary duration. A program may be invariable and built into the computer (and called logic circuitry as it is on microprocessors) or different programs may be provided to the computer (loaded into its storage and then started by an administrator or user). Today's computers have both kinds of programming.
    ^What is computer? - a definition from Whatis.com
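    The quoted definition (accept data, manipulate it per a stored sequence of instructions, keep storage for results) can be made concrete with a toy machine. This is a sketch with an invented three-opcode instruction set, not any real architecture:

```python
def run(program, data):
    """A toy 'computer' per the quoted definition: input data, a stored
    sequence of instructions, working storage, and a result."""
    store = {"acc": 0}                  # storage, including an accumulator
    for op, arg in program:             # the program: a sequence of instructions
        if op == "load":
            store["acc"] = data[arg]    # accept information (input data)
        elif op == "add":
            store["acc"] += data[arg]   # manipulate it
        elif op == "save":
            store[arg] = store["acc"]   # store the result
    return store

# A program that adds its two inputs:
result = run([("load", 0), ("add", 1), ("save", "out")], [2, 3])
```

    Note that the program itself is just data handed to `run`, matching the definition's point that programs are a form of data.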

  4. #114

    Default

    I think CC makes a good point by bringing up complex adaptive systems.

    I accept the computer:program::brain:mind analogy.

    But there are differences between a computer and a brain. Whether or not they are relevant to induction vs. deduction is a different matter.

    Can we extend the analogy to neurons:axons+dendrites::transistors:wires?

    Computers are designed with relatively few connections between basic components compared to the brain. A transistor handles only a single input when used digitally (and has only four terminals in general), and can drive only a limited number of outputs. Neurons accept many inputs and produce many outputs. Practical computers have a limited number of "global" signals, and those are designed in ahead of time. Brains have many neurotransmitters (I've seen at least thirteen).

    Network theory tells us that "more is different." Networks will exhibit much more complex behaviour when the average number of links increases even a little bit.
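    That claim has a precise form in random-graph theory: as the average number of links per node crosses roughly one, a giant connected component abruptly appears (the Erdős–Rényi phase transition). A rough stdlib-only sketch, with a fixed seed for reproducibility:

```python
import random
from collections import Counter

def largest_component(n, avg_degree, seed=0):
    """Size of the largest connected component of an Erdos-Renyi random
    graph with n nodes and the given average degree."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)        # edge probability yielding that average degree
    parent = list(range(n))         # union-find forest over the nodes

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)   # merge the two components

    return max(Counter(find(i) for i in range(n)).values())

# Below average degree 1: small fragments. Above it: one giant component.
sparse = largest_component(300, 0.5)
dense = largest_component(300, 2.0)
```

    A small change in average connectivity produces a qualitative change in global structure, which is the "more is different" point in miniature.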

    Can we build a computer that matches the complexity of the brain? Can the human brain understand itself well enough to create a functioning version? How is that understanding stored? There is at least one scientist who believes that the cortical algorithm is actually very simple.

    I think we also need to discuss the difference between a mechanism and deduction. Do we think of a scale model as "deducing" what will happen in a full-size version? Do we consider simulations as "deducing" what will happen? Usually we ascribe the intelligence to the person who created the simulation or model, not to the model itself.

    When I deduce something, I create a deductive argument, but does the argument itself "deduce" the conclusion? If I give the argument written down to someone else, then they may be able to deduce the same conclusion.

    Similarly, when I induce knowledge, one could say I collected enough evidence to convince myself. But does the evidence collected actually "induce" the knowledge?

    That is a subtle distinction, but I think crucial to understanding if deduction and induction are the same thing.

    I just want to say that we can already produce machines that both "deduce" and "induce" in essentially the same way our arguments "deduce" or "induce." But I think there is a difference between the deduction and induction used by the human creator and user of the machines/arguments/evidence and the machines/arguments/evidence that do the "deduction" and "induction."

    Accept the past. Live for the present. Look forward to the future.
    Robot Fusion
    "As our island of knowledge grows, so does the shore of our ignorance." John Wheeler
    "[A] scientist looking at nonscientific problems is just as dumb as the next guy." Richard Feynman
    "[P]etabytes of [] data is not the same thing as understanding emergent mechanisms and structures." Jim Crutchfield

  5. #115
    Occasional Member Evan's Avatar
    Join Date
    Nov 2007
    MBTI
    INFJ
    Enneagram
    1
    Posts
    4,223

    Default

    Quote Originally Posted by ygolo View Post
    I think CC makes a good point by bringing up complex adaptive systems.

    I accept the computer:program::brain:mind analogy.

    But there are differences between a computer and a brain. Whether or not they are relevant to induction vs. deduction is a different matter.

    Can we extend the analogy to neurons:axons+dendrites::transistors:wires?
    It doesn't really extend that far. It's like Marr's Three Levels: he said it's useful to look at each issue in cognition from three perspectives -- computational (the problem itself, the bounds, etc.), algorithmic (the steps it would take to map inputs to outputs), and implementation (the physical realization of the system).

    The computer:program::brain:mind analogy only really works on the computational and algorithmic levels. The point is, there are always multiple possible implementations for one algorithm (and multiple algorithms for one computation).

    You could have the algorithmic level mapped out for humans, and then instantiate the system in some random physical way, and still get a functionally equivalent system to the mind.
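    The multiple-realizability point can be illustrated one level up as well: a single computational-level problem ("sum the integers 1..n") admits distinct algorithms, and each algorithm in turn admits many physical implementations. A minimal sketch:

```python
def sum_iterative(n):
    """One algorithm: accumulate term by term."""
    total = 0
    for k in range(1, n + 1):
        total += k
    return total

def sum_closed_form(n):
    """A different algorithm for the same computation: Gauss's formula."""
    return n * (n + 1) // 2

# Same computational level, different algorithms, identical input-output mapping.
assert all(sum_iterative(n) == sum_closed_form(n) for n in range(50))
```

    From the outside the two are the same function, which is why identity at the computational level doesn't force identity at the algorithmic or implementation levels.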

    I think we also need to discuss the difference between a mechanism and deduction. Do we think of a scale model as "deducing" what will happen in a full-size version? Do we consider simulations as "deducing" what will happen? Usually we ascribe the intelligence to the person who created the simulation or model, not to the model itself.

    When I deduce something, I create a deductive argument, but does the argument itself "deduce" the conclusion? If I give the argument written down to someone else, then they may be able to deduce the same conclusion.

    Similarly, when I induce knowledge, one could say I collected enough evidence to convince myself. But does the evidence collected actually "induce" the knowledge?

    That is a subtle distinction, but I think crucial to understanding if deduction and induction are the same thing.

    I just want to say that we can already produce machines that both "deduce" and "induce" in essentially the same way our arguments "deduce" or "induce." But I think there is a difference between the deduction and induction used by the human creator and user of the machines/arguments/evidence and the machines/arguments/evidence that do the "deduction" and "induction."
    Yeah, I think you just pointed out the problem with my terms that's been confusing everyone. I was thinking of deduction in a more loose way -- anything that maps from assumptions/starting points/premises/inputs to an output is basically a deductive process. "Deductive reasoning" is somewhat of a different term. Should've made that more explicit probably.

    My argument is basically saying everything is a function.

  6. #116

    Default

    Also, the belief that we can make a computer/machine that functions like a human brain remains very speculative.

    I am not going to say it is impossible (since I see no clear arguments for that), but I can say it has not been done. There is no existence proof for this.

    Note, there are some things we simply cannot make. We cannot write a program that will analyze all programs to determine whether they stop or not--this is known as the halting problem. We cannot make a perpetual motion machine, nor can we make a machine that is 100% efficient.
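    The halting problem's unsolvability comes from a diagonal construction: from any claimed halting decider you can build a program the decider must misjudge. A sketch (the decider below is a deliberately naive stand-in; the construction is the point):

```python
def counterexample(halts):
    """Given any claimed halting decider `halts`, build a program it gets wrong."""
    def troublemaker():
        if halts(troublemaker):
            while True:          # decider said "halts", so loop forever
                pass
        return "halted"          # decider said "loops", so halt immediately
    return troublemaker

# Any concrete decider is refuted. For example, one that always answers "loops":
always_loops = lambda program: False
t = counterexample(always_loops)
# The decider predicts that t loops forever, yet t() returns immediately,
# so the prediction is wrong. A decider answering "halts" fails symmetrically.
```

    Since the construction works against every candidate decider, no correct one can exist.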

    These are limitations to human creativity--just to show that there are limits.

    There are potential stumbling blocks to building a machine that does what a human does in terms of "reasoning":

    1) The mind/brain is connected to the body. A lot of what the brain does is based on its inputs from the rest of the body, and its ability to make decisions based on perceived abilities of that body.
    2) An individual is embedded in society, and much of what we call deduction and induction are done for particular social purposes.
    3) Although we have been quite successful at making machines that mimic or exceed human capabilities in the mechanistic aspects of reasoning, we have yet to produce one that actually seems intelligent. I've played chess computers that beat me most of the time. I don't consider them intelligent, though I consider their designers to be quite intelligent.

    I could expand on those ideas, but I don't believe deduction is necessarily accurately characterized as following a mechanistic procedure.

    From mathematical proofs to physical theories to ideas for chess games, the intellectual act is not what is done mechanistically, but what is done to correctly feed the mechanistic processes.

    Yes, doing the mechanistic part of deduction properly is important--that is why we created computers in the first place, to do that part better (i.e. faster and more accurately) than we could.


  7. #117
    Occasional Member Evan's Avatar
    Join Date
    Nov 2007
    MBTI
    INFJ
    Enneagram
    1
    Posts
    4,223

    Default

    Quote Originally Posted by ygolo View Post
    Note, there are some things we simply cannot make. We cannot write a program that will analyze all programs to determine whether they stop or not--this is known as the halting problem. We cannot make a perpetual motion machine, nor can we make a machine that is 100% efficient.
    Yeah, I mentioned the halting problem before (or the universal debugger problem). It has to apply to human perception as well, though.

    These are limitations to human creativity--just to show that there are limits.

    There are potential stumbling blocks to building a machine that does what a human does in terms of "reasoning":

    1) The mind/brain is connected to the body. A lot of what the brain does is based on its inputs from the rest of the body, and its ability to make decisions based on perceived abilities of that body.
    Right, which is why, if we actually wanted to make a "conscious" computer, we'd have to think of the body as part of the computer too, and make input/output systems that included all of that stuff (and time should be thought of as an output, too).

    2) An individual is embedded in society, and much of what we call deduction and induction are done for particular social purposes.
    Meh. A person raised just by their parents is still capable of inductive and deductive reasoning. We would have to build the computer such that we still had to "teach" it language and certain reasoning skills. We would have to build in strong constraints, though (as Chomsky pointed out).

    3) Although we have been quite successful at making machines that mimic or exceed human capabilities in the mechanistic aspects of reasoning, we have yet to produce one that actually seems intelligent. I've played chess computers that beat me most of the time. I don't consider them intelligent, though I consider their designers to be quite intelligent.
    Yes. I wasn't making the point that it HAS happened, or even WILL happen. I was making the point that it COULD happen.

    I think we should all question our ideas of what constitutes "thought" and "consciousness", etc. If we define those things well, such that they apply to all humans we consider to think, then programmers can hypothetically think of ways to replicate certain functions that would fit the constraints in the definition.

    It's not that wacky of an idea -- again, there's something in philosophy called the "zombie argument" (or something like that). It states that we don't really know if anyone around us is "thinking" or if they are just well-built automatons that fool us. We draw the conclusion that they think based only on visible evidence (which isn't enough to get at their internal states, obviously). Therefore, if we were fooled by a computer, we'd call it "conscious". In fact, we could be fooled by computers all around us all the time.

    We can't define the word "consciousness" in any way that could really separate the limits of computers out and keep all humans in.

    I could expand on those ideas, but I don't believe deduction is necessarily accurately characterized as following a mechanistic procedure.

    From mathematical proof to physical theory, to ideas for chess games. The intellectual act is not what is done mechanistically, but what is done to correctly feed the mechanistic processes.

    Yes, doing the mechanistic part of deduction properly is important--that is why we created computers in the first place, to do that part better (i.e. faster and more accurately) than we could.
    Meh, I still am not getting my point across about deduction, I guess.

    All I'm trying to say is that humans are systems built up out of (and only out of) little functions. Same with computers. (Same with everything, even).

    Anyway, we can control the functions we put on a computer. So if we replicated the right ones, we would get the same emergent trait of consciousness.

  8. #118

    Default

    Quote Originally Posted by dissonance View Post
    Yeah, I mentioned the halting problem before (or the universal debugger problem). It has to apply to human perception as well, though.
    Not that I disagree, but what evidence is there that it applies to human perception as well? Is it based on categorizing human beings as mechanistic things? Or is there experimental evidence?

    Quote Originally Posted by dissonance View Post
    We can't define the word "consciousness" in any way that could really separate the limits of computers out and keep all humans in.
    This is my main point of contention.

    What if, as we learn more about the mathematical properties of consciousness as a function, we find out that its Kolmogorov complexity is such that no amount of consciousness can be applied to consciously design another consciousness? --That is highly self-referential, but I think you'll get what I am saying.
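    Kolmogorov complexity is uncomputable in general, but compressed length gives a computable upper bound, which makes the idea tangible: a string with a short description compresses far better than one without. A rough sketch using zlib as a stand-in compressor:

```python
import random
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Compressed length: a computable upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

patterned = b"ab" * 500     # 1000 bytes with a very short description ("ab, 500 times")
random.seed(0)              # fixed seed so the example is reproducible
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 bytes with no obvious short description

# The patterned string compresses to a tiny fraction; the noisy one barely compresses.
```

    The question above is whether consciousness-as-a-function is closer to the noisy case: a description too long for any conscious designer to hold.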

    Note, the distinction is about consciously designing another consciousness. We can already create other consciousnesses; it's called reproduction. We need little understanding to do this; we've done it since the caveman days--that's why we are here.

    Quote Originally Posted by dissonance View Post
    Meh, I still am not getting my point across about deduction, I guess.

    All I'm trying to say is that humans are systems built up out of (and only out of) little functions. Same with computers. (Same with everything, even).

    Anyway, we can control the functions we put on a computer. So if we replicated the right ones, we would get the same emergent trait of consciousness.
    I think I do understand your point. But some functions cannot be defined--what if consciousness is one of those functions that cannot be defined?


  9. #119
    Occasional Member Evan's Avatar
    Join Date
    Nov 2007
    MBTI
    INFJ
    Enneagram
    1
    Posts
    4,223

    Default

    Quote Originally Posted by ygolo View Post
    Not that I disagree, but what evidence is there that it applies to human perception as well? Is it based on categorizing human beings as mechanistic things? Or is there experimental evidence?
    Well, the evidence is that the problem is literally unsolvable (proven). We aren't made up out of anything that sets us apart from the kind of stuff that can't solve the problem...

    If we can solve it, then computers can solve it. Because if we explain the solution to someone, we're basically "programming" their understanding.

    This is my main point of contention.

    What if, as we learn more about the mathematical properties of consciousness as a function, we find out that its Kolmogorov complexity is such that no amount of consciousness can be applied to consciously design another consciousness? --That is highly self-referential, but I think you'll get what I am saying.
    I see what you're saying, sure. I just don't think "consciousness" is something that can be defined clearly. It's an elusive concept. It's gray, and if you make it a set of features, you can program it. But if it stays gray, I guess you can't.

    But it's worthless in conversation in that case.

    Note, the distinction is about consciously designing another consciousness. We can already create other consciousnesses; it's called reproduction. We need little understanding to do this; we've done it since the caveman days--that's why we are here.



    I think I do understand your point. But some functions cannot be defined--what if consciousness is one of those functions that cannot be defined?
    Well it definitely can be defined if we go bottom-up. Like, all the way bottom. It's just too complex for our computer systems right now.

  10. #120
    no clinkz 'til brooklyn Nocapszy's Avatar
    Join Date
    Jun 2007
    MBTI
    ENTP
    Posts
    4,516

    Default

    Dude didums...

    What the hell do you think? It's not an analogy. Brain isn't = computer.
    Brain is a type of computer. If you disagree, then you need to revise your idea of what a computer is.
    The brain does compute, ergo, it is a computer.
    It's a matter of linguistics. You can't deny that. Your stubborn refusal to appropriately examine the fundamental aspects of the matter is a display of nothing more than being deliberately obtuse.

    Read this very carefully.
    I never said that a brain could be plugged into a wall and we could run windows on it.
    I never said that you could go down to CompUSA and buy a brain.
    I said, a brain computes.
    Sure, it doesn't use ASCII as we know it. So what? Does that change the fact that ideas are generated by ordered impulses, and conclusions the same way?

    No, it doesn't. You attempt to prove me wrong by saying I don't argue effectively. Your exact words were "proof by assertion," which you so ignorantly assumed was my intention.
    In response, you attempted disproof by desertion.

    Doesn't work that way either, but at least I'm not being hypocritical.
    we fukin won boys
