The fatal flaw in the human-machine interface

by Jon Rappoport

March 21, 2013

www.nomorefakenews.com

There is a great deal of research going on in the area of artificial intelligence (AI) merging with the brain.

Exuberant cheerleaders like Ray Kurzweil are quite confident that we are approaching a moment when a computer will exhibit all the power of the human brain.

The definition of “power” in this context is fuzzy. But Kurzweil and others are sure we’re about to uncover the “algorithm” that underlies all brain activity.

They couldn’t be more wrong. Neuroscience has barely scratched the surface of understanding how the brain operates. Cracking the code is not on the horizon.

This fact reflects a much deeper problem. PR is not science. Predictions about what is imminent are not the same thing as verified research results.

PR is not information.

In exactly the same way, even if a human-computer interface of awesome capability were endowed with access to a hundred galaxies of stored data, it would run up against the problem of vast, chronic misinformation in those cosmic warehouses.

This is not something that can be deleted with a program or a committee tasked with making corrective changes.

For example, and this is just one area, medical science is so rife with fraud, at so many levels, as I’ve demonstrated over and over again for the past 10 years, that it would take humans decades to expose a significant part of it. And AI wouldn’t even know where or how to begin looking, because…who would set the parameters of such an investigation?

There is an inherent self-limiting function in AI. It uses, accesses, collates, and calculates with false information. Not just here and there or now and then, but on a continuous basis.

Think about all the entrenched institutions and monopolies in our society. Each one of them proliferates false information like a Niagara.

No machine can correct that. Indeed, AI machines are victims of it. They in turn emanate more falsities based on the information they are utilizing. I’m sure someone can make a little model of the exponential expansion of this disaster.

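Such a model is easy to sketch. Below is a minimal toy simulation in Python; the repetition and correction rates are invented purely for illustration, not measured from anything, and the point is only the shape of the curve.

# Toy model (hypothetical numbers): how many false claims are in circulation
# after successive news cycles, if each claim is repeated by
# repeats_per_cycle downstream sources while only corrections_per_cycle
# claims get retracted in the same period.
def false_claims_over_time(initial=10, repeats_per_cycle=3,
                           corrections_per_cycle=5, cycles=8):
    in_circulation = initial
    history = [in_circulation]
    for _ in range(cycles):
        spawned = in_circulation * repeats_per_cycle
        in_circulation = max(0, spawned - corrections_per_cycle)
        history.append(in_circulation)
    return history

print(false_claims_over_time())
# [10, 25, 70, 205, 610, 1825, 5470, 16405, 49210]

Once the repetition rate outruns the correction rate, no fixed correction effort catches up; the totals grow geometrically.
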
Each and every false datum generates a wider and wider stream of lies, and the streams, becoming rivers, overlap and produce exceptionally large numbers of contaminated eddies, pools, and rapids.

When personal computers entered the marketplace, people began a clamor about the Age of Information.

There were cultural reasons for this enthusiasm. They could all be summed up by the fact that we are living in a technological society, and technology goes hand in glove with information.

But as the messianic postulations and predictions reached new heights, and the drive began to marry machine and human brain, the gaping holes and rips in the utopian fabric of dreams loomed up for any intelligent person to observe.

When a corporation or government expands to a certain size, it dedicates itself to survival, not of its principles, not of its original mission, but of Itself as an entity. Therefore, it spins lies.

As Dr. Peter Breggin and I discussed on his radio show yesterday, when it comes to the newly announced federal brain-mapping project (B.A.M.), the scientists will very rapidly begin drowning in their own ignorance about the very organ they are investigating.

But that won’t do. This billion-dollar project is supposed to produce results, and the project must survive. Therefore, the researchers will cook up models to demonstrate their progress. These models will make assertions which are patently false.

Pharmaceutical companies will develop new drugs based on the false assertions about the brain, knowing full well they are operating in a swamp of deception, and caring not one whit about it.

It is the same with the vaunted AI-human brain interface. It will gobble up and deploy untold numbers of lies already told by other institutions to defend and protect their own survival.

The complexity, on various levels, of false information will make the heralded AI-brain collaboration resemble an intelligence agency:

It lies about other lies, and then it lies about that.

The mathematics is packed with functions that automatically spiral out realities even Lewis Carroll’s Mad Hatter would find frivolous and repellent.

The field of information theory is about handling quantity of data and making that data readable. It’s not about the quality of the data.

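Shannon’s framework makes the point concrete: its measure of information depends only on the statistics of the symbols, never on whether the message is true. Here is a minimal sketch in Python (the two example sentences are invented purely for illustration) computing the character-level entropy of a true claim and a false one; the numbers come out essentially the same.

import math
from collections import Counter

def shannon_entropy(text):
    # Bits per character, computed from character frequencies alone;
    # nothing in the formula asks whether the text is true.
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

true_claim = "water boils at 100 degrees celsius at sea level"
false_claim = "water boils at 150 degrees celsius at sea level"

print(shannon_entropy(true_claim))   # nearly identical values:
print(shannon_entropy(false_claim))  # truth never enters the calculation

The measure registers quantity and compressibility; the question of quality has to be settled somewhere else.
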
AI can work successfully in engineering projects, but when the human interface is added, we are no longer merely talking about engineering. The whole purpose of the interface is supposed to be about somehow making humans better.

How can that happen when the hugely expanded access to data runs into billions or trillions of bits of false information?


The Matrix Revealed

One of the two bonuses in THE MATRIX REVEALED is my complete 18-lesson course, LOGIC AND ANALYSIS. This is a new way to teach logic, the subject that has been missing from schools for decades.


I’ve been making notes for my second, more advanced logic course. The purpose of the course is to provide better ways of handling the flood of information we deal with every day. The first challenge is going beyond the rules and principles of classical logic, in order to analyze the quality of the data we are digesting and using.

There is no pat system for doing that. Certainly, accepting data based on the notion that “recognized authorities” are reliable would be a disaster. But that is exactly where the human-AI interface is heading, like a team of horses being driven toward the edge of a cliff.

The human-AI engineers are already fatally compromised. In journalistic terms, they are the mainstream reporters obeying the parameters laid down by their editors and corporate owners. They write their stories inside a bubble of illusory context. They go back, again and again, to the same sources, and those sources are permanently biased against popping the bubble and journeying out to where the truth exists.

Actually, an AI machine could write most of the articles that appear on the front page of the NY Times every day. It would save time and cut expenses. But the result would be the same: absurdly limited context, false information, deception, fatuous presumption of authority.

If, instead, you want to look for a program that would discount such a presumption and would reject institutional secrecy, a program that would undertake a relentless investigation of the quality of data, there is a potential candidate.

It’s called a human being. And it’s not a program.

Jon Rappoport

The author of an explosive collection, THE MATRIX REVEALED, Jon was a candidate for a US Congressional seat in the 29th District of California. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free emails at www.nomorefakenews.com

12 comments on “The fatal flaw in the human-machine interface”

  1. Gerry Frederics says:

    The difference between the most powerful computer in the world and a man with an IQ of 90 is that the computer cannot make up its mind whether it wants coffee or tea for breakfast. At this time, we know practically NOTHING about the brain; that is the reason psychiatry is only one step above Voodoo. We don’t even understand the reproductive system totally and cannot explain Telegony; therefore, the system denies it exists. Gerry Frederics

  2. Mike says:

    Some people have been born without a brain and some of those people have lived nearly normal lives. One exception is all it takes to destroy the assumption that the brain is the mind.

  3. hybridrogue1 says:

    “The field of information theory is about handling quantity of data and making that data readable. It’s not about the quality of the data.”

    Quality/Quantity

    This is a point that Jacques Ellul drove home brilliantly in his masterpiece: THE TECHNOLOGICAL SOCIETY; a deep and vast investigation into the idea and concept of “technique” itself, and the ‘belief system’ built on the worship of “artifact”. The ideas developed are certainly based on the recognition of the theological aspects of man to technology, the MEANINGS of it all, the center of “Why” – going beyond the mere “What” of things.

    In a sense, it can be said that “technique” is ‘Entity’, it lives as a parasite to the human host, it is “Alive” via proxy, like the psychological aspect of a “Demon” of the subconscious. It is a ‘Devil’ whispering into the human mind that he can be as ‘the gods’ in the space/time continuum.

    A reading of ART AND ARTIST, by Otto Rank, shows this same context of the human psyche. Rank speaks to the ‘neurosis’ caused by the regimentation within societies that blunts the ‘artist’ in all human beings, the creative forces that drive living impulses.

    And by “regimented societies” we are not talking about modern or postmodern societies. We are dealing with constructs growing from the “extended family” systems that are termed, “tribal”, and the advent of “the chief” and “the shaman/priest” and their acquisition of individual creativity for the “community” – of imposing “authority” for the excuse of the “good of the community” when in fact it was to strengthen the personal authority of the ambitious men who would rule.

    Ultimately both of these works hint toward the idea of Personal Responsibility, and the complex morals based on empathy. And to make this as short as I can; it points to the practical considerations of the limits of empathy when creative genius is sucked dry by the society of neurotics who depend on that genius. Can the creative men be said to be responsible to the well being of the whole society?

    Is it practical, or ‘realistic’, to expect this to work? Bear in mind as you contemplate that captivity smothers creativity, and that is when counterfeit creativity – so prevalent today – arises.

    \\][//

  4. jim says:

    Computers only became known to ordinary people during the early 1980s, and thirty years later it is popular to talk about Humanity being a computer simulation. Was Humanity speculated to be a radio broadcast one hundred years ago? That would make as much sense. All this atheist babble about a robot brain acting like a human brain, one that can somehow solve complex problems that human brains cannot, is absurd, since the human brain would have to invent and build the robot brain to begin with.

    If the human brain could build a better robot brain, then that implies something can be generated from nothing, which works if you are a theoretical physicist, but not in the real world. One does not even have to argue about “Being”, which is formless and timeless and therefore cannot be manifested in the physical world. Humans have Being and Free Will, and both transcend the material plane.

  5. Anonymous says:

    Are you familiar with the Canadian Broadcasting Corporation’s radio show SPARK? It consistently checks information-age hype against the street level. http://www.cbc.ca/spark/

    Don’t be afraid to hurt my feelings because I’m a Canadian… we are all in the same canoe.

  6. Texe Marrs says:

    Very insightful, Jon. I have been reading your work on a number of websites. You have the ability to cut through the swamp and answer key questions.

  7. Rusty Mason says:

    “Actually, an AI machine DOES write most of the articles that appear on the front page of the NY Times every day. It saves time and cuts expenses. AND the result IS the same: absurdly limited context, false information, deception, fatuous presumption of authority.”

    There, I fixed it for you.

  8. myrthryn says:

    Reblogged this on After his Image and commented:
    Like they say: Garbage In Garbage Out

  9. Walt D! says:

    Working in the financial industry, I am amazed daily that most of my colleagues treat the government’s official statistics as the gospel truth when it is so blatantly obvious that they are false. This invariably leads to wrong conclusions and bad decisions. The same thing happens in virtually every field.

    It seems that we are already using too much “artificial intelligence” for our own good!

  10. Anonymous says:

    Dear Mr. Rappoport,

    The real race in the scientific world of brain mapping is to understand and map the nature of human consciousness, the very complex entity in us that makes us human. If you have already read Roger Penrose’s brilliant book The Emperor’s New Mind, a book written for non-scientists in a narrative style, you already know that the problem of computing and predicting human behavior based on consciousness is the sound barrier of human intelligence. To reach it we must be able to construct performing quantum computers. To do that we must first master the Holy Grail of physics, the Theory of Everything, which is bullshit because we don’t possess the mind capacity of a god. I know Buddha tried to convince everybody that one can become a god on one’s own – he probably suffered from Munchausen syndrome.

  11. Rich says:

    Asimov (The Last Question), Clarke (Sentinel/Space Odyssey) and Heinlein (The Moon is a Harsh Mistress) each grappled with this – with Multivac, HAL, and Mike, respectively – and concluded the only solution was creativity…
