The Fatal Flaw in the Human-Machine Interface


There is a great deal of research going on in the area of artificial intelligence (AI) merging with the brain.

Exuberant cheerleaders like Ray Kurzweil are quite confident that we are approaching a moment when a computer will exhibit all the power of the human brain.

The definition of “power” in this context is fuzzy. But Kurzweil and others are sure we’re about to uncover the “algorithm” that underlies all brain activity.

They couldn’t be more wrong. Neuroscience has barely scratched the surface of understanding how the brain operates. Cracking the code is not on the horizon.

This fact reflects a much deeper problem. PR is not science. Predictions about what is imminent are not the same thing as verified research results.

PR is not information.

In exactly the same way, a human-computer interface of awesome capability, endowed with access to a hundred galaxies of stored data, would run up against the problem of vast, chronic misinformation in those cosmic warehouses.

This is not something that can be deleted with a program or a committee tasked with making corrective changes.

For example, and this is just one area, medical science is so rife with fraud, at so many levels, as I’ve demonstrated over and over again for the past 10 years, that it would take humans decades to expose a significant part of it. And AI wouldn’t even know where or how to begin looking, because…who would set the parameters of such an investigation?

There is an inherent self-limiting function in AI. It accesses, collates, and calculates with false information. Not just here and there or now and then, but on a continuous basis.

Think about all the entrenched institutions and monopolies in our society. Each one of them proliferates false information like a Niagara.

No machine can correct that. Indeed, AI machines are victims of it. They in turn emanate more falsities based on the information they are utilizing. I’m sure someone can make a little model of the exponential expansion of this disaster.

Each and every false datum generates a wider and wider stream of lies, and the streams, becoming rivers, overlap and produce exceptionally large numbers of contaminated eddies, pools, and rapids.
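The "little model" invited above can in fact be sketched in a few lines. The following is a toy branching model of my own devising; the seed count, branching factor, and cycle count are illustrative assumptions, not measured values.

```python
# Toy model: every false datum in circulation spawns derivative
# falsehoods each cycle, so the total grows geometrically.
# All parameters below are illustrative assumptions.

def false_data_growth(seed_falsehoods, branching_factor, cycles):
    """Return the cumulative count of false data after each cycle,
    assuming each falsehood spawns `branching_factor` new derivative
    falsehoods per cycle (and the originals stay in circulation)."""
    counts = [seed_falsehoods]
    for _ in range(cycles):
        counts.append(counts[-1] * (1 + branching_factor))
    return counts

growth = false_data_growth(seed_falsehoods=10, branching_factor=2, cycles=5)
print(growth)  # [10, 30, 90, 270, 810, 2430]
```

Even with a modest branching factor, the count multiplies every cycle, which is the "streams becoming rivers" picture in numeric form.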

When personal computers entered the marketplace, people began a clamor about the Age of Information.

There were cultural reasons for this enthusiasm. They could all be summed up by the fact that we are living in a technological society, and technology walks hand in glove with information.

But as the messianic postulations and predictions reached new heights, and the drive began to marry machine and human brain, the gaping holes and rips in the utopian fabric of dreams loomed up for any intelligent person to observe.

When a corporation or government expands to a certain size, it dedicates itself to survival, not of its principles, not of its original mission, but of Itself as an entity. Therefore, it spins lies.

As Dr. Peter Breggin and I discussed on his radio show yesterday, when it comes to the newly announced federal brain-mapping project (B.A.M.), the scientists will very rapidly begin drowning in their own ignorance about the very organ they are investigating.

But that won’t do. This billion-dollar project is supposed to produce results, and the project must survive. Therefore, the researchers will cook up models to demonstrate their progress. These models will make assertions which are patently false.

Pharmaceutical companies will develop new drugs based on the false assertions about the brain, knowing full well they are operating in a swamp of deception, and caring not one whit about it.

It is the same with the vaunted AI-human brain interface. It will gobble up and deploy untold numbers of lies already told by other institutions to defend and protect their own survival.

The complexity, on various levels, of false information will make the heralded AI-brain collaboration resemble an intelligence agency:

It lies about other lies, and then it lies about that.

The mathematics are packed with functions that automatically spiral out realities even Lewis Carroll’s Mad Hatter would find frivolous and repellent.

The field of information theory is about handling quantity of data and making that data readable. It’s not about the quality of the data.
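The point that information theory measures quantity rather than quality can be made concrete with Shannon entropy, which scores a message purely by its symbol statistics. The example strings below are my own illustrations; entropy assigns them values with no regard for whether either claim is true.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Shannon entropy of a string in bits per character --
    a pure measure of symbol statistics, blind to meaning or truth."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

claim_a = "the drug was not tested"   # suppose this one is accurate
claim_b = "the drug was well tested"  # and this one is false

# Entropy sees only character frequencies; it cannot tell the
# accurate statement from the inaccurate one.
print(shannon_entropy(claim_a), shannon_entropy(claim_b))
```

A string of pure repetition scores zero bits and a maximally varied one scores high, but a lie and a truth of similar composition score nearly the same, which is exactly the limitation the paragraph above describes.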

AI can work successfully in engineering projects, but when the human interface is added, we are no longer merely talking about engineering. The whole purpose of the interface is supposed to be about somehow making humans better.

How can that happen when the hugely expanded access to data runs into billions or trillions of bits of false information?

I’ve been making notes for my second, more advanced logic course. The purpose of the course is to provide better ways of handling the flood of information we deal with every day. The first challenge is going beyond the rules and principles of classical logic, in order to analyze the quality of the data we are digesting and using.

There is no pat system for doing that. Certainly, accepting data based on the notion that “recognized authorities” are reliable would be a disaster. But that is exactly where the human-AI interface is heading, like a team of horses being driven toward the edge of a cliff.

The human-AI engineers are already fatally compromised. In journalistic terms, they are the mainstream reporters obeying the parameters laid down by their editors and corporate owners. They write their stories inside a bubble of illusory context. They go back, again and again, to the same sources, and those sources are permanently biased against popping the bubble and journeying out to where the truth exists.

Actually, an AI machine could write most of the articles that appear on the front page of the NY Times every day. It would save time and cut expenses. But the result would be the same: absurdly limited context, false information, deception, fatuous presumption of authority.

If, instead, you want to look for a program that would discount such a presumption and would reject institutional secrecy, a program that would undertake a relentless investigation of the quality of data, there is a potential candidate.

It’s called a human being. And it’s not a program.

Jon Rappoport

The author of an explosive collection, THE MATRIX REVEALED, Jon was a candidate for a US Congressional seat in the 29th District of California. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free emails at
