What are machines thinking?

What are machines thinking? Forget it. What are humans thinking?

by Jon Rappoport

July 6, 2014

www.nomorefakenews.com

“…one scenario is that the machines will seek to turn humans into cyborgs. This is nearly happening now, replacing faulty limbs with artificial parts.”

“The concern I’m raising is that the machines will view us as an unpredictable and dangerous species.”

“[Machines] might view us the same way we view harmful insects.”

“Del Monte believes machines will become self-conscious and have the capabilities to protect themselves.”

These aren’t quotes from some absurdist satirical play designed to expose human stupidity.

They’re quotes tendered by physicist Louis Del Monte, the author of “The Artificial Intelligence Revolution: Will Artificial Intelligence Serve Us Or Replace Us?”, from an interview with Dylan Love at Business Insider — “By 2045 ‘The Top Species Will No Longer Be Humans,’ And That Could Be A Problem”.

The key to Del Monte’s approach is quote number one: machines might decide to turn humans into cyborgs and it’s already happening in the area of artificial limbs.

What? Excuse me, but humans are deciding to put those limbs on other humans. Machines aren’t.

And even in some hospital of the future, if you had AI androids “making all the surgical decisions,” they wouldn’t actually be choosing anything. They’d be programmed by humans.

Why is this so hard for technocrats to understand? Because they infuse themselves with a mystical vision about artificial intelligence.

They confuse operational capability with consciousness.

Machines “viewing” humans? There is no viewing.

Machines don’t think. They never have, and they never will. They perform according to specs.

They can be programmed to select, from a number of options, the option that fulfills the prime directives humans have given them. And that process of selection is carried out according to patterns originally installed by humans.
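That kind of “selection” can be made concrete in a few lines. The sketch below is purely illustrative — the options, costs, and scoring rule are invented for this example — but it shows the point: the machine only ranks options against a rule a human wrote; it decides nothing.

```python
# A machine "choosing" is just evaluation of a human-written rule.
# All names and weights here are made up for illustration.

def choose(options, score):
    # The machine picks whatever maximizes the score function --
    # a function the programmer, not the machine, decided on.
    return max(options, key=score)

options = ["patch the limb", "replace the limb", "do nothing"]

# Prime directive, installed by a human: prefer the cheapest safe option.
costs = {"patch the limb": 2, "replace the limb": 10, "do nothing": 0}
risks = {"patch the limb": 1, "replace the limb": 1, "do nothing": 5}

# Combined cost-plus-risk penalty; negated so max() finds the minimum.
best = choose(options, lambda o: -(costs[o] + 3 * risks[o]))
# best is "patch the limb": the lowest penalty under the human-written rule.
```

Change the weights and the “decision” changes with them — the pattern of selection is installed, not originated, by the machine.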

There is no mystery here. No mystical leap across a barrier between non-conscious and conscious.

But somewhere up the line, humans can be propagandized to believe machines are alive and have rights.

Technocracy abounds with a titanic amount of sheer bullshit. It’s founded on severe apathy and degrading cynicism about what humans are. So machines become the new gods.

The “cheese-melt” theory of collectivism feeds directly into the worship of machines:

“Individuals are weak and helpless. Therefore, they have to melt down into a collective glob, in order to survive. And from that collective point of view, machines loom up as the most powerful entities in the world. Bow, pray to the Artificial Intelligence.”

Again, humans invented those machines in the first place. But that’s scrubbed from the equation. It’s “old news.” Hardly worth a mention.

God or life or consciousness isn’t going to pop up out of the head of a super-computer in 2045, the designated year for the so-called Singularity—when machine intelligence supposedly outstrips our own.

In 2045, or 2056, or 3000, do you know what’s going to happen? Nothing. Machines will still be machines, doing what they always do. Yes, a mile-wide computer in the desert may be able to perform more operations than a toaster in a motel in Cincinnati, but the level of consciousness in both machines is identical.

Zero.


power outside the matrix


In a significant way, the whole machines-will-be-alive business is a smokescreen, utilized to conceal an agenda. That agenda is the overall planning and regulating of global civilization, framed as a problem that needs to be solved.

The specious propaganda (you can find it described and satirized in hundreds of science fiction stories and novels) goes this way:

“If we had, at our fingertips, the total sum of human knowledge, and if we could calculate with it at lightning speed, we’d find the optimum pattern for human society. We’d find the answer to the age-old question: how can we live in peace with each other?”

Sheer nonsense. Such calculations, as always, depend on values and ideals, first principles. All solutions flow from those values.

And machines don’t “discover” values. First principles are prior assumptions made by humans.

People have been arguing and fighting wars over those principles for centuries.

For example: individual freedom vs. ultimate government authority.

A machine is going to “discover” which side of the argument is right? A computer is going to do a billion calculations in 30 seconds and “arrive” at the answer?

That’s on the order of asking an army tank to consult the universe and then tell you whether you should marry the farmer’s daughter or take vows of celibacy as a priest.

These technocrats are merely collectivist wolves in intellectuals’ clothing, who will predispose their machines to ding-ding-ding like Vegas slots on the payoff called Fascism.

Jon Rappoport

The author of three explosive collections, THE MATRIX REVEALED, EXIT FROM THE MATRIX, and POWER OUTSIDE THE MATRIX, Jon was a candidate for a US Congressional seat in the 29th District of California. He maintains a consulting practice for private clients, the purpose of which is the expansion of personal creative power. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free emails at www.nomorefakenews.com

19 comments on “What are machines thinking?”

  1. mark says:

    Jon, I read the article you are referring to when writing your own piece. You left out the extraordinary part, though: Del Monte talks about starting with 1000 first-generation robots and breeding the top 200 over 500 generations. After doing this, the robots would perform operations that were the opposite of their initial programming. Basically, they learned to hoard resources instead of sharing them, because there were only so many resources to go around; not every robot would get some. They were originally programmed to share and “tell” other robots where to find resources, but after many generations the robots would do exactly the opposite: they would hide resources, lie to the other robots, and exhibit survival techniques they had not been given code for or programmed to do. You neglected to even talk about this, which was the most important and fascinating part of the original work.

    • theodorewesson says:

      just to add…

      http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other

      A 2009 experiment showed that robots can develop the ability to lie to each other. Run at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, the experiment had robots designed to cooperate in finding beneficial resources like energy and avoiding the hazardous ones. Shockingly, the robots learned to lie to each other in an attempt to hoard the beneficial resources for themselves.

      “The implication is that they’re also learning self-preservation,” Del Monte told us. “Whether or not they’re conscious is a moot point.”

    • ginnystoner says:

      The Pop Sci article is misleading — another 2007 article about the same study gives more detail: http://blogs.discovermagazine.com/loom/2007/02/24/evolving-robotspeak/#.U7pBbEDXKBk

      The robots started with randomly wired neural networks; the programming of the best performers was then combined, and this was repeated 500 times. The robots were also programmed to have a chance of changing part of their program — a wildcard. Robots cannot “mate,” but they can be programmed to combine their programming. They cannot “lie,” but they can be programmed to maximize points for doing certain things.

      Allowing robots to communicate and combine their programming, especially if they are programmed with a wildcard, could have adverse consequences — not because the robots developed anything resembling consciousness, but because their human programmers either failed to foresee the consequences, or foresaw them and wanted them.
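The setup these comments describe — random initial programs, breeding the top scorers, a mutation “wildcard” — is a standard genetic algorithm. Here is a minimal toy sketch in Python; the genome, fitness rule, and every parameter are invented stand-ins for illustration, not the actual EPFL experiment:

```python
import random

random.seed(0)

GENOME_LEN = 32       # stand-in for the robots' neural-network wiring
POP_SIZE = 100        # the comments describe 1000 robots; smaller here
ELITE = 20            # "breed the top performers"
MUTATION_RATE = 0.02  # the "wildcard": chance of a random program change
GENERATIONS = 200

def fitness(genome):
    # Toy objective: count of 1-bits. The real experiment scored robots
    # on finding beneficial resources and avoiding hazardous ones.
    return sum(genome)

def crossover(a, b):
    # Combine two parents' "programming" at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Each gene has a small chance of flipping -- the wildcard.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in genome]

def evolve():
    # Start from randomly wired genomes, as in the study's description.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:ELITE]
        # Refill the population by recombining and mutating the elite.
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(POP_SIZE - ELITE)]
    return max(fitness(g) for g in pop)

best_score = evolve()
```

Note what the loop does and does not do: behaviors that were never explicitly coded can emerge from selection pressure, but every ingredient — the fitness rule, the mutation rate, the breeding scheme — was installed by the programmer.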

  2. Bill says:

    “They perform according to specs.”

    Tell that to any software tester before you hand the machine over for testing and they will roll on the floor laughing.

  3. I have contended the very same, but Those who seem to think there is a problem with Humanity (and not a problem with retaining a system that promotes psychopaths to the top of the power structure – i.e., a money system) insist that We will succeed in creating sentience in Our machines. Rubbish, I say.

    As to the solving of the problem of retaining the system… I have a new article:

    http://www.thelivingmoon.com/forum1/index.php?topic=6987.0

    I hope the reader is inspired!

  4. brad says:

    Thanks for bringing a level headed viewpoint to this subject. I was thinking that the machines-becoming-conscious thing was coming from technocrats with God-complexes (although there’s probably something to that too). I never thought about it being a potential future covert control ploy – that makes even more sense.

  5. Asimov’s three laws looked good on paper. Drones will soon obviate them completely. With kill decisions in machine minds, you’ve already crossed the Rubicon…

    The Butlerian Jihad was no fiction…

    http://aadivaahan.wordpress.com/2010/06/28/the-age-of-machines/

    http://aadivaahan.wordpress.com/2010/06/13/machine-strong-human-weak/

  6. Paul says:

    Many thanks for this, Jon.

    Ada Lovelace, Lord Byron’s daughter, said much the same thing in 1843, when she published a memoir on Charles Babbage’s Analytical Engine, including the world’s first program:

    “The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.”

    Alan Turing rebutted this commonsensical wisdom in 1950 in his famous article claiming that machines would be able to think by 2000. It seems that the madness is still with us.

  7. voza0db says:

    This comment is not about robots… But about MIND and Drugs!

    “Drugs for psychiatric illnesses aren’t very effective. But new research is offering renewed hope for better medicines.” source

    Every time I read a statement like this, I always remember your work… I can bet that 50 years from now the same statement will be written, simply because the drugs are the causative agents of the “psychiatric illnesses”!

    About the Robots… In this present system of Oligarchy Centralized Capitalism robots will only create misery and famine! Nothing else…

    😎

  8. Stuart Clark says:

    John Searle’s “Chinese Room Thought Experiment” does it for me. I know there are detractors to this clever demonstration of the fallacy of machine consciousness, but IMHO they lack the simple elegance of the Chinese Room’s lone occupant, answering questions written in Chinese with the help of a good but voluminous set of rules. See http://en.wikipedia.org/wiki/Chinese_room

  9. Bobby says:

    Jon, I appreciate your articles for one main reason. You always bring us back from the bullsh-t. You always show us that the middle road, in most cases, is where the truth is. Or, if truth is too strong an idea, at least you show us what is and is not workable. You remind me very much of my Dad, who died young (51), but was a very practical and centered man. Thanks, Jon, and God bless.

  10. This discussion is stupid. Machines cannot think – machines will never be able to think.

  11. OzzieThinker says:

    Jon, you may be wrong here. I believe the Draco have a way of “jump starting” a passive connection into an active one. Technically, they could raise corpses from the dead. However, the effect does not produce consciousness in the standard sense. A passive connection is muted and invariably makes a frozen active one (as far as I can gather, but there may be exceptions to the rule). The “zombie” concept or some other “remote control” is on the right lines.

    Hope that helps 😉

  12. John says:

    “Machines don’t think. They never have, and they never will. They perform according to specs.
    They can be programmed to select, from a number of options, the option that fulfills the prime directives humans have given them. And that process of selection is carried out according to patterns originally installed by humans.”

    The successors of Skinner, Bernays, et al have nearly perfected the art of programming the mass mind. Substitute a couple of words in the statement above, and it is still valid:

    “Most people don’t think. They never have, and they never will. They perform according to specs. They can be programmed to select, from a number of options, the option that fulfills the prime directives their programmers have given them. And that process of selection is carried out according to patterns originally installed by the rulers.”

    The same types of decision-making and fuzzy logic that most people depend on can be simulated in software — with faster speed, better memory, and a vast and ever-expanding archive of reference material. While artificial intelligence may never exhibit genius level performance, it will certainly surpass the average human. Just ask the folks at Google.

    • Your mind is working best when you are being paranoid.
      You explore every avenue and possibility of your situation, at high speed and total clarity – Banksy

      • John says:

        Your mind may focus a little better under duress, but what you consider to be “every possibility” depends on what sort of rat maze you’ve been accustomed to all your life. Within those narrowly defined avenues, your choices can be predicted and directed with statistical accuracy. Seek cheese, avoid shock, conform to the group. Works well for most.

Comments are closed.