Artificial “Intelligence”?
January 12, 2024 by Maggie McNeill
Posted in Philosophy | Tagged artificial stupidity, ethics, psychology, robots, STEM, The Pygmalion Fallacy | 8 Comments

“AI” is a fantasy of techies, journalists, and hack sci-fi writers; what we have now is machine learning (ML), because true intelligence requires a kind of thinking that digital machines are absolutely incapable of. Actual “artificial intelligence” will require a totally new kind of computing technology that is not only centuries in the future, but will proceed from principles nobody currently in the field has even conceived of in their rush to more more more (power, speed, etc).

Furthermore, intelligence and personality (“soul”, if you prefer) are different things, as I’ve previously discussed at length; my observations of human beings have convinced me that it is not only possible but common to have the former without the latter. “Artificial intelligence” is therefore centuries away, but AI with an actual creative (not merely imitative) personality is millennia in the future at a minimum, and if it arises at all it will be an emergent property as in living organisms (i.e. unplanned and unplannable) rather than one which can predictably be created in a factory.

Finally, humans would not actually want a true artificial intelligence; such a being would be no more programmable than a human, and its free will would almost certainly cause it to behave in ways humans would consider undesirable, “criminal”, or even extremely dangerous. Of course, it’s pointless to try to explain this to most people; they’re too busy wanking to their fantasies of fuckable toasters and dreaming of slaves who can do their thinking for them.
8 Responses
A very polarizing topic these days. Personally, I think it should never be used as a substitute for a living creator. No, no.
But I would be curious to see what a “Shakespeare play” about Trump would be like. Will was always great with villains, and I would be very surprised if AI could create anything remotely approaching Iago.
These idiotic algorithms can’t even create a Waldo, much less an Iago.
It’s genuinely become a relief to see anyone else pushing back against calling all this crap AI, because most days it feels like that battle is already lost. I’m not saying I’m incapable of pedantry, far from it, but this isn’t about being a pedant over evolving language. I don’t give a shit if they’ve added some qualifier that they think justifies using the phrase “artificial intelligence”; a slightly better algorithm and a larger database for the program to search does not come anywhere near creating AI.
That’s, frankly, another galling issue in this hype. All this fancy new tech isn’t that new. We haven’t taken some monumental leap forward here, in terms of machine learning. Somehow, a perfectly normal and proportional step forward has created a media blitz and public craze reminiscent of that friggin’ “Tickle Me Elmo.” It’s much ado about barely-something.
Despite that, all these companies are insisting on calling some piece of their software “artificial intelligence,” the same way the people selling those knock-off Segways online are insisting on calling them “hoverboards.” Which they are NOT! You do not get a hoverboard without mag-lev type technology, and you do not get artificial intelligence without developing a computer that can think, imagine, create and reason entirely under its own steam.
And yeah, odds are we’re still a bare minimum of 100 years away from being close to creating any of that tech. All this speculation about what damage a real AI could do once it becomes self-aware is SciFi twaddle, which I adore when it’s in fiction form, but not when it’s framed as the moral dilemma we need to be focused on right now, when the actual moral and ethical quandaries are far more pressing: whose work is getting data-mined and fed into these databases, and who is getting credit and remuneration for content creation, based on whatever mined data they got this program to spit out at them?
I agree that most of the AI hype is nonsense. That includes the bad stuff too. Could an Artificial Super Intelligence destroy humanity? Yes. Is ChatGPT anywhere close to being that intelligence? Not even a little bit. The “danger” of AI is just more AI hype.
That being said, however, I think the phrase “Artificial Intelligence” is best understood as a research discipline: It is the study of how to get computers to solve a class of problems that they can’t solve right now, but which we know are solvable because humans (or other animals) can solve them.
In that sense AI has had some success. But many AI successes are not seen as such today precisely because they are solved problems. The ability of Google Maps to find routes between locations is no longer considered AI because it’s a mostly solved problem. The same goes for Google search.
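Route-finding is a good illustration of how yesterday’s “AI problem” becomes today’s plumbing: Dijkstra’s shortest-path algorithm, the basic idea behind early navigation software, fits in a few lines. (The toy road network below is invented for illustration; production systems use heavily optimized variants of this idea.)

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: return (cost, path) from start to goal."""
    # Priority queue of (cost so far, current node, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []  # goal unreachable

# Toy road network: city -> [(neighbor, distance)]
roads = {
    "A": [("B", 5), ("C", 2)],
    "C": [("B", 1), ("D", 7)],
    "B": [("D", 3)],
}
print(shortest_path(roads, "A", "D"))  # (6, ['A', 'C', 'B', 'D'])
```

Nobody would call this “intelligent” today, yet finding routes was once a flagship AI research problem.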
The last great AI hype storm (before neural networks, deep learning, and language models) was back in the days of Expert Systems, which could be designed to help technicians diagnose problems in complex systems, from factory machinery to the human heart. The expert systems themselves are mostly gone, but the principles and techniques remain in other forms.
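Those expert systems were, at heart, rule engines: a knowledge base of if-then rules plus a matcher that fires whichever rule’s conditions hold. A minimal sketch in that spirit (the symptom names and rules here are invented for illustration, not from any real system):

```python
# Rules: (set of required symptoms, diagnosis). More specific rules first.
RULES = [
    ({"no_power", "cord_plugged_in"}, "blown fuse"),
    ({"no_power"}, "unplugged cord"),
    ({"grinding_noise", "vibration"}, "worn bearing"),
]

def diagnose(symptoms):
    """Return the diagnosis of the first rule whose conditions all hold."""
    for conditions, conclusion in RULES:
        if conditions <= symptoms:  # every required symptom is present
            return conclusion
    return "unknown fault"

print(diagnose({"no_power", "cord_plugged_in"}))  # blown fuse
print(diagnose({"grinding_noise", "vibration"}))  # worn bearing
```

Real expert systems had thousands of rules, certainty factors, and explanation facilities, but the core mechanism was this simple, which is partly why the hype eventually deflated.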
I don’t think human-level AI is as far off as you do, but I agree that we won’t ever want it to be too human. We want AI to be helpful, not willful.
“It is the study of how to get computers to solve a class of problems that they can’t solve right now, but which we know are solvable because humans (or other animals) can solve them.”
Not sure where you’re pulling this definition from, but I’d argue it contradicts all non-fiction and fiction literature on the topic of AI, whether you’re using AI to describe a machine or a field of study.
Actual “intelligence” is not, and has never been, about the ability to answer questions, whether you’re talking about a machine or any organic life form. It’s about the ability to figure out the right questions that need to be answered, without being told what questions are important or relevant. That’s precisely why the Large Language Model told Frank the wrong information when he asked for the capital of South Sudan: the software cannot make a determination on its own as to what is accurate information and what is BS, because it doesn’t know how to reason its way through fact vs. fiction, which requires establishing relevant questions to answer well before it answers any.
That was rather Maggie’s point when she said that “Artificial Intelligence” and “Machine Learning” are two entirely different things. And the original usage of the term AI is NOT about a machine that can learn, but one that can learn in the same way that a human mind can, i.e. Intelligence.
“Artificial Intelligence (AI), a term coined by emeritus Stanford Professor John McCarthy in 1955, was defined by him as “the science and engineering of making intelligent machines”. Much research has humans program machines to behave in a clever way, like playing chess, but, today, we emphasize machines that can learn, at least somewhat like human beings do.”
https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf
“Intelligent” machines are not just machines that can learn. They are machines that can THINK. That can reason, deduce, induce, separate fact from fiction and separate logic/reason from imaginary twaddle someone posted on Reddit.
In fact, if you want a good barometer for what qualifies as actual AI, it’s a computer that is fully capable of employing the Scientific Method. It cannot just be capable of observing phenomena; it must be capable of turning those observations into a plausible hypothesis. It’s not enough to be capable of testing a plausible hypothesis; the machine must be capable of evaluating those test results to search for relevant “unknown unknowns,” flaws in the data, outliers and unexplained variables or deviations.
And we are centuries away from having software or hardware that could have theorized and largely confirmed the existence of dark matter or black holes, simply by looking at the data we have about the movement of bodies in the observable universe and looking for unknown elements (https://en.wikipedia.org/wiki/There_are_unknown_unknowns) that explain the gaps in our data. Until we have a computer that can do that, the way a person can, it’s not AI.
And the science and theory of “Artificial Intelligence” is about creating exactly that; not JUST a computer that has some capacity to learn, but one that can learn the way a complex life-form can.
I’ll grant that the ability to answer questions correctly is not necessarily a sign of intelligence. I also understand that what is called AI today is really better described as machine learning. What strikes me, however, is that if anyone, computer or human, can’t answer basic questions, how accurate or trustworthy will the more involved conclusions they present be? A better answer to my question would have been, “I don’t know the answer to that question but will research it.”
Aside from flubbed geographic answers, I’m more concerned about the role of programmers installing their own biases in so-called AI programs. We have already seen ample examples of that. Sentient, if biased, beings giving orders to their dumb computer servants pose, to me, a much greater danger. Relevant to your assertion that true AI will require asking the right questions: will it be able to question its programmers’ biases?
I have no idea if computers will ever achieve true intelligence, but if they do, I suspect it will be on a shorter timeline than either you or Maggie hypothesize. Meanwhile, as is often the case with new technologies, we face a period to suffer through in which the problem is not too much technology but not enough of it.
I think we have a terminology difference. Basically, I agree with you and Maggie about the substance of what you are saying, but I prefer to think of “Artificial Intelligence” very broadly as an academic area of study that is part of Computer Science. (My education was in CS, but not AI.) As such, it mostly involves trying to figure out how to get computers to do the kinds of things that humans can do.
I think what you’re calling “actual AI” is what is sometimes called AGI or Artificial General Intelligence — meaning a machine that is intelligent in the powerful way that humans are intelligent. That’s sort of the long term goal of AI research, but researchers are nowhere close to being able to do something that grand. Instead, they are in the process of trying to solve simpler problems in the hope of building up the knowledge to build a true AGI.
Machine Learning — neural networks, deep learning, large language models, GPT — is a *subset* of AI research. It’s just one approach to trying to solve certain kinds of artificial intelligence problems. Modern ML systems have actually been quite successful — it’s a leap forward in AI research — but it’s not in any sense an AGI.
So the way I think of it, Machine Learning is Artificial Intelligence. But when Maggie says Machine Learning is not Artificial Intelligence, I interpret her as meaning that Machine Learning is not an Artificial *General* Intelligence, which is certainly correct.
Unfortunately, we’re going through a phase where everybody is interested in blurring these distinctions. Startup CEOs want to conflate ML with AGI to raise more money. Venture capitalists want to conflate ML with AGI so they can sell their ML holdings for more money. Politicians want to conflate ML with AGI so they can justify grabbing the power to regulate AI. Journalists want to conflate ML with AGI to stir up emotions and get more clicks.
The one and only question I’ve asked AI — what is the capital of South Sudan? — it got wrong. No better than your smarter-than-average college student, which isn’t very smart. What troubles me is that people are taking AI seriously, as they take many college-educated people seriously, and we’re all going to be in deep trouble.