Can computers take over the world?

Some of us are afraid that computers (artificial intelligence) will take over the world. They need not worry, however, because computers are not believers (i.e., they don’t think they have “The Truth”) but are, on the contrary, traditional scientists (i.e., they understand that probability can’t reach 1).

Computers can do only two things: (1) what we tell them to do, and (2) answer questions. They can be made into independent robots by answering questions they themselves pose, but the answers can ultimately only lead them into one of all possible loops, that is, into what we call a freeze. The Truth for a computer is the middle between all possible freezes.
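The loop/freeze claim can be given a toy illustration (my own sketch, not part of the original post, with an arbitrarily chosen rule): any deterministic machine that computes its next question from its previous answer over finitely many states must, by the pigeonhole principle, eventually revisit a state and cycle forever.

```python
# Toy self-questioning machine: the next question is derived from the
# previous answer by a deterministic rule over 7 possible states.
def next_question(answer):
    return (answer * 3 + 1) % 7  # hypothetical rule, chosen arbitrarily

seen = []
question = 0
while question not in seen:
    seen.append(question)
    question = next_question(question)

# The machine has returned to an earlier state: from here it repeats
# the same questions and answers forever -- a "freeze".
print("revisited state", question, "after", len(seen), "steps")
```

Any other deterministic rule over a finite state space behaves the same way; only the length of the cycle changes.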

Computers can thus take over the world only insofar as those who run them can rule them and use the answers they return. The battle over these two matters is what we should worry about, rather than whether computers themselves can take over the world.


7 responses to “Can computers take over the world?”

  1. What of machine learning? I think the fear stems from the possibility of us creating something which teaches itself and thus becomes more intelligent than us.

    • Machine learning can be used by humans for large-scale manipulation of the world, but it can’t lead to machine autonomy, because autonomy requires belief, and belief is inconsistent, which machines aren’t. I assure you, there will never be a computer that believes anything. An inconsistent machine is useless, just as useless as we humans are. By being useful (consistent) machines, computers simply lack the (inconsistent) motivation to take over the world that drives so many of us humans. A computer that lies is simply out of order.

      • Are you of the belief that consciousness and human personalities are immaterial? It seems to me that the qualities of the human mind are emergent properties of the particular form of the human brain. Thus, given enough understanding, it is at least possible to recreate something akin to the human brain, is it not?

      • I’m of no belief. On the contrary, I’m trying to explain that belief is inconsistent.

        Concerning “understanding”: it is possible to understand consistently, in a subjective sense, only by using metaphors, and in this sense the human brain is consistently understood as a tool that a talking being uses to search for literal explanations of what it feels and does. So, yes, if we can recreate the talking being that has the brain, then it is possible to recreate “something akin” to the human brain.

        A brain itself, however (i.e., without the talking being and its beliefs), has already been recreated (i.e., invented), both by mathematics and by computer science. The insurmountable problem of turning such a brain into a human brain is that the fusion with a talking being confuses the brain into contradiction, through the talking being’s fear of the fundamental paradox and its attempts to turn that paradox into a truth instead. (This is principally what we call populism.)

        The problem is thus not to “recreate something akin to the human brain”, but rather to accept the solution of this problem. Humans are not generally guided by their brains, but rather use them to formulate explanations of what they do and why they do it, explanations that can be both consistent and inconsistent.

        The brain itself is thus actually a versatile tool that can be used both consistently and inconsistently. The problem of “recreating something akin to the human brain” resides in “recreating” the inconsistent part and combining it with the consistent part. It is impossible to be both consistent and inconsistent at the same time.

      • So, the thing is that whether consciousness is merely an emergent property of a particular combination of matter, or whether it arises from something immaterial, is at the crux of the argument. If it is the former, then it is at least possible, in theory, to recreate the human brain. If it is the latter, then an argument may be made that it is not possible. Where you stand on that issue is relevant.

  2. “Consciousness” is, in a machine-learning sense, logic. Now, logic has two entrances: one that ends in ambiguity (and thus rotates around an empty middle) and one that ends in paradox (Russell’s paradox). These are traditionally called nominalism and realism, respectively, but can also be called non-belief and belief. The former can understand this paradox, whereas the latter ends up being paradoxical (i.e., self-contradictory). Belief (in anything) thus leads to self-contradiction, a self-contradiction that a non-believer can understand. Among us humans, roughly 1/3 are nominalists, 1/3 are realists, and 1/3 don’t decide between them (i.e., are illogical).

    So, although consciousness is “merely an emergent property of a particular combination of matter” (as non-belief logically arrives at), recreating it is an impossibility in a machine-learning sense, since it actually consists of two varieties of logic: one that ultimately can’t decide and one that ultimately is contradictory. The one that ultimately can’t decide (i.e., can’t believe) we have already “recreated” in a primitive form, while the other is nothing but a misuse of logic that actually crashes a computer when it unintentionally occurs (and we have to reboot the computer).

    This problem is rather “at the crux” of “recreating” the phenomenon we call “consciousness”. We can’t recreate a combination of consistency and inconsistency (just as we can’t both eat a cake and keep it), although we can recreate each separately.

  3. “Things” in this world, just as the world itself, arise as trinities: two opposites and one bridge between them. This phenomenon can be comprehended as an ambiguity or a paradox depending on whether one is inside or outside of it. For example, an explanation of a belief differs depending on whether the explainer is a believer or not. The problem with this fact is that there is nothing between these two kinds of explanations, i.e., no core of a trinity.

    The point of my post is that this lack of an unambiguous core of things in this world means that machines can’t take over this world, because we can’t make a machine that is both inside and outside of the fundamental paradox of talk (i.e., consciousness).
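As an aside on the Russell’s paradox invoked in comment 2 above: it can literally be made to “freeze” a computer. In this minimal sketch of my own (not from the original discussion), a “set” is modeled as its membership predicate, so s(x) is true iff x is in s; asking whether the Russell set contains itself then never terminates.

```python
# Model a "set" as its membership predicate: s(x) is True iff x is in s.
def russell(s):
    # The Russell set contains exactly those sets that
    # do not contain themselves.
    return not s(s)

# Does the Russell set contain itself? Answering the question requires
# answering the question -- the call recurses forever until Python
# interrupts it, a concrete "crash" from a paradoxical belief.
try:
    russell(russell)
except RecursionError:
    print("the question never terminates; the machine had to be interrupted")
```

Either answer (yes or no) contradicts itself, which is why the computation can only loop rather than settle.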
