Does AI exist? If not, why do technology companies insist that it does?

Once upon a time, I was a dedicated engineering student who occasionally had to spend time in philosophy classes. I remember a lecture in which the discussion turned to a famous scientist. This was a long time ago so the memory has faded a bit, but I’m almost certain it was Isaac Newton—arguably the greatest physicist of all time. Newton did not consider human cognition to be a purely material phenomenon, and he based this belief partially on the scientifically inexplicable bridge between thought and physical motion. An immaterial entity forms in the mind—“I’m going to move my arm”—and then something translates that mere intention into a moving arm.

A student in the class, perhaps somewhat uncomfortable with the elusive implications of this idea, disputed Newton’s interpretation. “It could all be an illusion,” he said. We think that the intention led to the action, but how do we know? If the human body is nothing more than a highly sophisticated machine, all it does is follow instructions—instincts, we call them.

But machines don’t have instincts. They have software. Is your brain programmed in C++ or Python?

What Is Artificial Intelligence?

To answer the question posed in the title of this article, we need to first address the question posed in the title of this section. In my opinion, the existence or non-existence of AI dangles from a tenuous thread—namely, the interpretation that we assign to the term itself.

  1. If we define “intelligence” as “the ability to perform complex mathematical operations,” and if we define “artificial” as “not performed by a biological agent,” then yes—AI absolutely does exist and has existed for many years.
  2. If we change the definition of the second word to “having no connection with or dependence on biological agents,” the situation is not so straightforward. Where did the device’s intelligence originate? How did the device “learn” to understand speech, or predict the movements of a nearby vehicle, or identify fraudulent credit card transactions?
  3. If we define “intelligence” as involving fundamental actions that are not inherently mathematical, AI starts to look a bit suspicious. At the lowest level of computational activity, do processors really do anything beyond math and data transfer? (The short sketch after this list makes the point concrete.)
  4. And, finally, if we assert that human intelligence is more than the biochemical reactions that occur in the brain, AI becomes science fiction—surely not even the most brilliant engineering team would claim that when loading code into a DSP they also endow the processor with some sort of immaterial existence.
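
To make point 3 concrete, here is a minimal sketch—plain Python, with the function name and the example numbers invented for illustration—of what evaluating one “neuron” of a neural network actually asks a processor to do: fetch values, multiply, add, and compare.

```python
# One "neuron" as the processor sees it: loads, multiplies, adds,
# and a comparison (a switch that is either on or off).

def neuron_output(weights, inputs, bias):
    """Multiply-accumulate followed by a simple threshold."""
    acc = bias
    for w, x in zip(weights, inputs):
        acc += w * x                      # ordinary arithmetic
    return 1.0 if acc > 0.0 else 0.0      # ordinary comparison

# Invented example values: three inputs and three weights.
print(neuron_output([0.4, -0.2, 0.1], [1.0, 0.5, 2.0], bias=-0.1))
```

Scaling this up to millions of such multiply-accumulate-and-compare steps changes the size of the computation, not its kind.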

Why Split Hairs? Why Does This Matter?

Because there are many people in this world who do not have enough familiarity with electrical and computer engineering to understand that “artificial intelligence” is, perhaps, nothing more than marketing hype. How many non-technical folks understand that the latest Intel processor is not fundamentally different from a room-sized vacuum-tube computer? In both cases, you have storage elements and switches that turn on and off. Switches that turn on and off! Can this really be intelligence? Was Shakespeare made of tubes or MOSFETs?

Intelligence is an impressive thing; who could possibly look with indifference on the endless list of humanity’s artistic and technological triumphs? Thus, if a marketing team decides to attach the term “artificial intelligence” to a new product or service, people might subconsciously—or not so subconsciously—associate it with the awe-inspiring accomplishments of human intelligence. The trouble is, at the end of the day all you’re getting is software.

Software vs. AI

Nowadays it would be comical to create an advertisement in which a high-tech product is hailed as “having software.” You might draw a little more attention if you say that it has “advanced software” or “sophisticated software,” but advanced software is everywhere these days.

So let’s up the ante and say that it has “artificial intelligence.” Now we’re listening. You mean, it thinks? It learns? Like a person? Actually, even better—like a person that rarely forgets, and doesn’t complain, and can stay up all night, and never drinks too many glasses of wine.

The trouble is, how do you draw the line between software and AI?

Software has been doing complicated things for a long time—is there some “complexity threshold” at which you have the right to call your code artificial intelligence instead of plain old software? Twenty years from now, when software is even more sophisticated than it is today, what will we call it? Artificial brilliance? Artificial omniscience? In my opinion, we should call it what it is: software, i.e., instructions written by human beings and carried out by processors.

Not-So-Artificial Intelligence

I’m going to go a step further and say that the term “artificial intelligence” is fundamentally flawed. It’s composed of two words, an adjective and a noun. The adjective “artificial” is describing the noun “intelligence.” Has anyone else noticed how inaccurate this is? The “intelligence” inside these devices is not artificial—it all comes from software written by human beings! The artificiality is related to the means of utilizing and manifesting this intelligence; the processor is like a pencil that gives material form to the intelligence of an architect.

There is no doubt that electronic systems can “learn” to function more effectively without direct intervention from humans; neural networks do this by processing training data. But in the end, the system is merely following the very specific instructions created by the intelligent individual who designed the network.
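
To make that point concrete, here is a minimal, self-contained sketch—plain Python; the toy AND-gate dataset, the learning rate, and the loop count are invented for illustration—of a single linear “neuron” being nudged toward a target behavior by gradient descent.

```python
# A single linear "neuron" trained toward a logical AND gate.
# Every step of the "learning" below is an explicit instruction
# written in advance by a person.

training_data = [              # (inputs, desired output) for an AND gate
    ((0.0, 0.0), 0.0),
    ((0.0, 1.0), 0.0),
    ((1.0, 0.0), 0.0),
    ((1.0, 1.0), 1.0),
]

w1, w2, b = 0.0, 0.0, 0.0      # parameters start with no "knowledge"
learning_rate = 0.1

for _ in range(1000):                        # repeat the human-written rule
    for (x1, x2), target in training_data:
        prediction = w1 * x1 + w2 * x2 + b   # multiply-accumulate again
        error = prediction - target
        w1 -= learning_rate * error * x1     # gradient-descent updates,
        w2 -= learning_rate * error * x2     # specified entirely by the
        b  -= learning_rate * error          # person who wrote this loop

for (x1, x2), target in training_data:
    print((x1, x2), "->", round(w1 * x1 + w2 * x2 + b, 2), "target:", target)
```

The parameters end up producing outputs closer to the targets than where they started, yet at no point did anything happen beyond arithmetic and data movement dictated, line by line, by the person who wrote the loop.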

Your Thoughts

What does “artificial intelligence” mean to you? Is there any scientifically robust way to use this term, considering that a machine has never demonstrated any ability to possess the sort of intelligence that we associate with humans or even with higher animals? Is there a clearly defined technical distinction between software and AI, or does the distinction exist primarily within the perception of those who are designing, marketing, or buying the product?

Comments

  • MA321 2018-07-20

    Man, it’s not “Intelligence”, it is “Artificial Intelligence”! The difference between “Intelligence” and “Artificial Intelligence” is like the difference between “Analogue” and “Digital”. If you increase the bit resolution, you get a more accurate result, but you can never be as accurate as the “Analogue” data. (Regardless of AI, though, nowadays “Digital” is much faster, simpler, and accurate enough for most processing.)

    By the way, what you simply call “fundamentally flawed” is something that some software and hardware engineers have spent much of their lives learning, researching, and working on.

    • RK37 2018-07-21

      I said that the _term_ “artificial intelligence” is fundamentally flawed, not the technological or mathematical systems that are associated with the term. Engineers spend most of their time learning about, researching, and working on systems, devices, algorithms, etc.—not terminology.

      • MA321 2018-07-22

        Actually, I’m not a specialist in AI, but I have written some programs using what they call “AI algorithms”.

        AI divides into three different parts:

        The first part is logical calculations (or what you call “the ability to perform complex mathematical operations”).

        The second part is predicting the future (in AI, “the future” basically means only a few seconds later).

        And the third part is “self-training” (which you probably think is impossible).

        Yesterday I was watching a NASCAR competition. I can easily claim that a computer program could win a competition like that (as in computer games; that is a kind of AI program covering the first and second parts), but what separates “AI” from “non-AI” is whether the “AI car”, when it participates in that competition the next time, will improve on its own record; in other words, whether it will do better than another “non-AI” car.

        My limited experience in AI doesn’t let me claim that this is possible, but in my opinion it logically could be, because a computer can record everything that happens in the competition, process it, and avoid repeating its mistakes (though it may make some new ones). Do you think it is possible or not?

        • Jai garg 2018-07-26

          Don’t you think that with the development of qubits and nanoelectronics we may be approaching a self-correcting mechanism, just like humans?
          But whether machines can create instincts formed from unknown subconscious questions…laced with infinite options is a matter only time can tell!
          Nice, informative write-up.

  • Kumavat Ajay 2018-07-21

    Informative post.

  • stephen30 2018-07-22

    Statistics has been hijacked by Machine Learning/Artificial Intelligence. The power of computation has increased tremendously, and the amount of data we have on things is enormous, so almost anything can be optimized or a better version can be created. I believe this is what makes Artificial Intelligence different from software.

  • StompBoxer 2018-07-24

    Artificial Intelligence (AI) as a Cognitive Science concept is altogether different from AI as an Engineering concept. The purpose of AI in the Cognitive Sciences is to understand human (not alien or hypothetical) cognition. The better a computational model simulates actual human behavior, the more it is regarded as a successful model. Most of the commercial versions of AI are intended as engineering marvels that outperform some aspect of human behavior. These are complete failures as far as what matters in the Cognitive Sciences is concerned. I tend to think of “Artificial Intelligence” as a term specific to the Cognitive Sciences, and I’m all in favor of Software Engineers getting their own term.

  • lmatte 2018-07-25

    Artificial intelligence (which, by the terms I use below, perhaps doesn’t exist yet) would have the power to evolve beyond its original form, in a way that doesn’t follow its original organization of processing (“thinking”). That would let an ‘artificial’ form of life - one made by us, on intent, not produced by a chaotic and casual evolutionary process - develop what we call ‘intelligence’. Very challenging.
    The species Homo sapiens developed that way, very differently from the other Homo species at the time of its appearance.
    The fact that electronic components are governed by physics and math isn’t a valid argument against their developing intelligence. After all, a human brain is made of the same chemical elements that, bound by physics, can also be just a soda drink or a rock. If you instead think that there’s a metaphysical basis for ‘intelligence’, then the discussion is over, as that is antithetical to scientific reasoning.
    Oh, and by the way, 99% of the AI mentioned daily is just marketing fuss - translated to “CI”, customer ignorance.

  • DKWatson 2018-07-26

    Something posted in a similar discussion:
    “The field of artificial intelligence/machine learning, just now, is like teenage sex:
    Everybody is talking about it,
    Nobody really knows much about it,
    Everybody believes that everybody else is doing it, so
    Everybody claims to be doing it too.”

  • Grovby 2018-07-27

    Many scientists are currently wondering: is there a danger that, after a while, computer technologies will surpass the capabilities of the human mind and artificial intelligence will be able to “capture” the world? Stephen Hawking believes that underestimating the threat from artificial intelligence could be the biggest mistake in the history of mankind.

    Will computers outdo us in reasoning? If robots are self-learning neural networks and they have reached a level of development that allows them to learn faster and more efficiently than we do, then it is logical to assume that over time they will surpass us in reasoning. Physicists are still trying to understand the fundamental laws that underlie artificial intelligence, while for mathematicians and computer scientists the creation of a thinking machine is only a matter of time.

    After a while, computer technology will surpass the capabilities of the human mind. Computers are subject to Moore’s law: their speed and complexity double every 18 months. This growth will continue until computers equal the human brain in complexity. The danger that they will be able to develop artificial intelligence and capture the world is real.
    For example, there are sites where you can compare devices, and artificial intelligence is the core of such a site. It compares them itself.
    Here it is - http://www.china-prices.com/

  • gimpo 2018-07-27

    When I was young I was literally eating every book about AI I could find.
    In the ‘80s they told me: “the only problem is computational power; computers get faster and faster every day. In 10 years they will be able to do this, and this, and this…”
    10 years later they repeated the same mantra…
    20 years later they repeated the same mantra…
    30 years later they repeated the same mantra…
    Today I hear/read the same stuff again, and I still haven’t seen anything that really impresses me.

    I strongly suspect that, after I have died, some intelligent software will still be trying to invoice me for zero voice calls over my telephone line…

    • graberhl 2018-08-04

      But at least you can expect to have a longer life, now that you have stopped eating books about AI.

  • programify 2018-07-27

    The term AI is dead. It has been beaten to death by sales and marketing. It has lost all useful meaning, since any device that dynamically alters its output based on sensory inputs seems to win the “AI inside” badge. Google, Uber, and Facebook are the worst offenders. Essentially we should replace “AI” with “BFC” - Big Fast Calculator - since technically that’s all their derived work amounts to.

    Today I prefer the term “Virtual Cognition”, where cognitive processes take place on a virtualized platform such as a PC as opposed to a biological cortex. That would be my offering for those who want to distance themselves from the AI murder scene.

  • gyro222 2018-07-27

    There was a post on Facebook not long ago saying that some computers were shut down because they developed their own language to communicate with each other that the techs couldn’t understand. I say that’s bordering on AI. This is where things get interesting; is this the beginning of the end? I’m sorry, Dave, but you may not pull that circuit breaker.

  • jpthing 2018-07-27

    Artificial intelligence IS an oxymoron. Either it is intelligent or it is not.
    I quote Marvin Minsky: “Artificial intelligence is the mimicking of processes usually associated with cognition. It does not always or even usually emulate the workings of the human mind.”
    Back in the 1960s, when the term AI was first used, it mostly involved heuristic search: things like the minimax algorithm from linear programming used in, say, chess and other two-player board games.
    I remember the 1980s, when logic programming was the latest craze. It turned out that logic is too weak to describe events in the real world. In the 1990s we had the AI winter, when expectations from the ‘80s were not met and a feeling of disillusionment was setting in.
    Then around 2000 things started to change. New approaches to modelling intelligent behavior emerged; I can mention Bayesian statistics, neural nets, and genetic algorithms. As in physics, these create models and use them to estimate outcomes. Instead of yes/no answers, you use trails of past results (Markov chains) to estimate results. The strengths of the three modelling techniques mentioned are approximately equivalent (based on the analysis in the book “Learning from Data” by Cherkassky and Mulier).
    Today machine learning is an established field, and computer vision is having moderate success in robotics. These systems are not ‘programmed’; you train them on large sets of data. They are thus more akin to values set up in large tables.
    To me the interesting question is: “Is machine learning the same as intelligence?” Clearly there is something missing. In a nutshell, they lack the ability to make an internal model of the world they live in and to change and improve it as they learn more. From a mathematical point of view, I choose to call this an auto-epistemic (self-referring) belief model. If they had this, they would approach what we call self-awareness, which is an essential part of intelligence.
    The correct term for this is Machine Intelligence, NOT Artificial Intelligence, as there is no such thing as fake intelligence.

    • graberhl 2018-08-04

      The term AI would be an oxymoron if “artificial” meant “fake.” But it doesn’t. It means “man-made.” Have you never seen, for example, an artificial lake?

  • Philmnut 2018-07-29

    I have read through this article and most of the comments. Compared to human thought, AI is incomplete and in its infancy, but it has the ability to grow and learn. Most arguments here are valid, but I find one element missing (or I missed it).
    Humans are raised with values taught to them by family, friends, and the environment. We build on these values as time passes.

    Software can collect, log, and organize data, which is great for future reference and response. This may make the hardware appear to respond in a fashion that is learning, but it is merely responding based on the collected data for best results.

    To me, AI should have the ability to sense, learn, and alter its response, partly through the collection of data but also by rewriting its own code, changing and correcting the outcome.

    Just like human thought, AI needs to alter its own instruction set; this may be a desired response, or it may take humanity down a darker path. Humans are far more complicated, with all their differences right down to our DNA. I don’t think we have to worry about computers and hardware taking over the world for a few thousand more years.

  • newbrain 2018-07-30

    Really interesting discussion. As an old microprocessor developer who grew up with the Altair and TRS80, my initial reaction is marketing hype. In the ‘80s we said, “AI is the technology of the future and always will be.” But now I have to wonder whether neural networks might be fundamentally different from sequential, Turing-machine-based processors.

    They are trained, not programmed, and the resulting output is not fully predictable. When Go master Ke Jie was forced to resign against AlphaGo, his reaction was shock: in years of play he had never envisioned the kind of moves made by this machine. Could this be a kind of instinct?