There has been a lot of hype recently in the media about the potential of AI (Artificial Intelligence) to solve all our problems. AI is based on the concept that a computer program with a large enough database and machine learning capabilities can provide answers to a wide range of questions. The assumption is that an AI program with proper training can gain enough insight to answer questions on its own, without human prompting, guidance or intervention.
It’s an interesting concept, but from what I’ve seen to date, AI is nowhere near capable of reliably answering simple factual questions, let alone tackling complex auto repair problems. Basically, if you ask an AI program a simple question like “What day is this?”, the program checks its own database to see if a similar question has already been asked and answered, and if it finds nothing, it quickly scours the internet for keyword-related information that may help it answer your question.
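To make that flow concrete, here is a deliberately oversimplified Python sketch of the lookup-then-search behavior just described. Every function in it is a hypothetical placeholder made up for illustration; no real AI system exposes an API like this.

```python
# A toy sketch of the lookup-then-search flow described above. Every
# function here is a made-up placeholder, not any real AI system's API.

def keywords(question):
    """Crude keyword extraction: keep only the longer words (illustrative)."""
    return [w.strip("?.,").lower() for w in question.split() if len(w) > 3]

def web_search(terms):
    """Stand-in for a search-engine call; returns canned snippets here."""
    return [f"page snippet mentioning '{t}'" for t in terms]

def summarize(snippets):
    """Stand-in for the generation step: stitch the hits into a reply."""
    return " ... ".join(snippets)

def answer(question, knowledge_base):
    # 1. Check whether a similar question was already asked and answered.
    cached = knowledge_base.get(question)
    if cached is not None:
        return cached
    # 2. Otherwise fall back to a keyword search of the open internet.
    #    Note that nothing in this step checks whether a snippet is TRUE.
    return summarize(web_search(keywords(question)))

print(answer("What day is this?", {}))
```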
As for the question of what day it is, the AI program needs to recognize several things before it can give you an accurate answer. It needs to know (1) your location, because halfway around the world is the International Date Line that separates today from tomorrow, (2) your time zone, because you may be right on the line between, say, 11:59 pm and 12:00 am the next day, and (3) it needs access to a calendar with the days and dates so it can regurgitate the correct date for you.
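Points (1) and (2) are easy to demonstrate with a few lines of Python using only the standard library. At any given instant, two real locations on opposite sides of the date line can legitimately disagree on what day it is:

```python
# Why "what day is this?" has no single answer: the same instant falls
# on different calendar dates on opposite sides of the date line.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

now_utc = datetime.now(timezone.utc)  # one fixed instant in time

# Two real time zones that straddle the International Date Line:
for zone in ("Pacific/Kiritimati", "Pacific/Honolulu"):
    local = now_utc.astimezone(ZoneInfo(zone))
    print(f"{zone:20} {local.strftime('%A, %Y-%m-%d %H:%M')}")
```

Kiritimati runs a full 24 hours ahead of Honolulu, so the two always show the same clock time on different dates.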
So far so good. But let’s ask AI another seemingly simple question, such as “What is the curb weight of a 2024 Ford F-150 truck?”
There is no single correct answer, because vehicle curb weights can vary by several hundred pounds depending upon the engine and powertrain, body options and variations, even the size of the wheels and tires. So the correct answer should be a list of the possible curb weights based on the available factory configurations for the year, make and model of the vehicle in question. But what AI typically generates in response to this type of question is a single answer, which may or may not be correct based on the limited information you provided. And even if you give a very detailed description of the vehicle powertrain and other options that affect weight, AI may or may not pick the right answer.
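Here is how I’d expect an honest lookup to behave, sketched in Python with made-up placeholder numbers (these are NOT real Ford specifications): given incomplete details, it should return every matching configuration rather than guess at one.

```python
# Hypothetical sketch: the honest answer to a curb weight question is a
# list of matching configurations, not one number. All weights below are
# made-up placeholders, NOT actual Ford specifications.
CONFIGURATIONS = [
    # (cab, drivetrain, engine, curb weight in lbs) -- illustrative only
    ("Regular Cab", "4x2", "3.3L V6",       4000),
    ("Regular Cab", "4x4", "2.7L turbo V6", 4450),
    ("Crew Cab",    "4x4", "3.5L turbo V6", 4950),
    ("Crew Cab",    "4x4", "hybrid V6",     5500),
]

def curb_weight(cab=None, drivetrain=None, engine=None):
    """Return every configuration consistent with the details given.
    Missing details should widen the answer, not force a single guess."""
    return [c for c in CONFIGURATIONS
            if (cab is None or c[0] == cab)
            and (drivetrain is None or c[1] == drivetrain)
            and (engine is None or c[2] == engine)]

# A vague question honestly answered is a range, not a single "fact":
for config in curb_weight(drivetrain="4x4"):
    print(config)
```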
One of the biggest problems I see with AI in its current state of development is its inability to discern the accuracy and credibility of the information it is gleaning from online sources. Much like people who believe anything they read on the internet (because they saw it on the internet), AI can pick up the wrong information, misinformation and conflicting information. To make matters worse, it lacks the discernment to distinguish one from the other. Consequently, the answer it generates may or may not be accurate, and it sometimes even contains conflicting information from different sources in the same answer. In short, AI can generate some very misleading (and dumb) answers to relatively simple questions.
A case in point: A friend of mine who has worked for Salesforce for a number of years has considerable experience with the company’s various certification tests. Currently there are 40 different certification tests that cover every aspect of what Salesforce does (it’s business management software, for those who are unfamiliar with Salesforce).
If you work for Salesforce or want to advance your career there, you need to prove your competency in various areas of Salesforce. To do that, you have to take online study courses and then pass an online exam, much like the ASE certification tests for automotive technicians.
So to put AI to the test, my friend asked ChatGPT to answer some Salesforce exam questions. Answering an exam question correctly involves more than just regurgitating a canned response. It takes a solid understanding of what exactly is being asked and how any relevant circumstances or additional information may affect the answer to that question.
I’m using the Salesforce exam as an example because it is similar to asking a car repair question. The most accurate answer to a car repair question may depend on the year, make and model of the vehicle, its mileage, its repair history, and any weather or other operating conditions that may cause a problem or a warning light to come on. Related sounds, smells, vibrations and other inputs may also be needed to provide additional clues as to what might be the source of a problem.
To make matters even more complicated, the “right” answer to a car repair question won’t necessarily be the same for every vehicle. Some car problems only affect very specific model years, makes, models, engines, transmissions and VIN ranges, and even then only under very specific operating conditions (cold start, hot start, extremely hot, cold or damp weather, turning left or turning right, accelerating or braking, etc.).
The correct answer may also involve searching for and summarizing a very application-specific Technical Service Bulletin (TSB) published by the vehicle manufacturer, or finding the appropriate software upgrade for the vehicle’s onboard computers.
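How narrow can “applicable” get? This hypothetical Python sketch (the bulletin numbers, serial ranges and symptoms are all invented) shows that matching a TSB means every criterion has to line up, which is exactly the kind of precise filtering a loose keyword search gets wrong.

```python
# Hypothetical sketch of how narrowly a TSB applies. Every bulletin
# number, range and symptom below is invented for illustration only.
BULLETINS = [
    {"number": "TSB-24-001", "years": range(2021, 2024),
     "engine": "2.0L turbo", "serial_from": 10000, "serial_to": 49999,
     "symptom": "cold-start rattle"},
    {"number": "TSB-23-087", "years": range(2019, 2022),
     "engine": "3.5L V6", "serial_from": 50000, "serial_to": 99999,
     "symptom": "hesitation when accelerating"},
]

def matching_bulletins(vehicle):
    """Return only bulletins where EVERY applicability test passes;
    a single mismatch (wrong year, engine, serial range or symptom)
    rules a bulletin out."""
    return [b for b in BULLETINS
            if vehicle["year"] in b["years"]
            and vehicle["engine"] == b["engine"]
            and b["serial_from"] <= vehicle["serial"] <= b["serial_to"]
            and vehicle["symptom"] == b["symptom"]]

print(matching_bulletins({"year": 2022, "engine": "2.0L turbo",
                          "serial": 12345, "symptom": "cold-start rattle"}))
```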
Back to Salesforce. So how well did ChatGPT answer some of the Salesforce certification exams?
It totally flunked the tests!
My friend said that about half the answers AI generated to the Salesforce questions were totally wrong.
The canned responses AI generated were basically copied from example tests it found online. The answers it copied were probably correct for the example questions, but not the actual questions on the real test. In other words, AI flunked the exam because it lacked real understanding and discernment about what was being asked.
The issue of AI being able to discern good information from bad information is a HUGE one in my opinion. As it stands today, anyone can post anything on the internet. There are no obvious distinctions between what is fact, what is fiction and what is opinion. There are no editors or human gatekeepers to fact check much of the information that is being published online. And that is a serious issue for human as well as machine intelligence.
Computer search algorithms claim to rank websites based on their perceived authority, but it’s mostly BS. Just because a website has a lot of inbound links or a lot of subscribers does not mean the information that is being published on that website is true, accurate or worthwhile.
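To see why link-based “authority” is such a weak proxy, consider this deliberately oversimplified sketch (the sites and numbers are made up, and real search engines use many more signals): the algorithm can count links, but accuracy never enters the scoring at all.

```python
# Deliberately simplified sketch of link-count "authority" ranking.
# (Real search engines use many more signals.) Sites and numbers are
# made up. The point: popularity is measurable; accuracy is not.
sites = {
    "viral-car-hacks.example":  {"inbound_links": 9400, "accurate": False},
    "oem-service-info.example": {"inbound_links": 310,  "accurate": True},
    "busy-diy-forum.example":   {"inbound_links": 2100, "accurate": None},  # mixed bag
}

# Rank purely by inbound links, the way a naive authority metric would:
for name, info in sorted(sites.items(),
                         key=lambda kv: kv[1]["inbound_links"],
                         reverse=True):
    print(f"{info['inbound_links']:>6} links  {name}  accurate={info['accurate']}")
```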
When it comes to authoritative sources of automotive repair information, I would rank Vehicle Manufacturer Service Information Websites as the most accurate and dependable authority for up-to-date repair information. And sometimes even the OEM service information is wrong or out-of-date. But it’s the best source we have.
The problem is that OEM service information websites are not free public-access sites. Users have to pay a subscription fee that typically ranges from $20 to $30 for one day of access, up to hundreds or thousands of dollars for yearly access. This means AI can’t just go into an OEM service information website and glean the information it needs to answer an auto repair question.
The same goes for aftermarket repair information websites such as Alldata. Alldata compiles repair information from all the vehicle manufacturers and organizes it into a common, easy-to-search format. But Alldata has to pay big fees to the OEMs for this information, and it takes a lot of human labor to reorganize the OEM information into Alldata’s format. So like the vehicle manufacturers, Alldata, Mitchell and other online repair information services all charge subscription fees to access their information. Because of this, AI can’t get into their databases either.
So when you ask AI a car repair question, where does AI search for possible answers? It scours public automotive forums for similar questions. This seems like a logical approach, but most forums contain as much misinformation as they do helpful information. Read a typical post on a forum and you’ll get all kinds of conflicting responses.
How does AI discern good information from bad? It doesn’t, and it can’t, because it lacks the ability to tell information written by people who know what they are talking about from information written by people who think they know what they are talking about but don’t.
Many answers in public automotive forums are nothing but guesses, speculation, hearsay or rumor, or are out-of-date or totally wrong. Don't get me wrong: there are also a lot of excellent responses and insights that can't be found elsewhere. But how do you tell the good information from the bad? Having a competent forum moderator who can separate the wheat from the chaff helps, but AI lacks that ability.
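Here’s a tiny Python sketch (with invented replies) of what happens when forum posts are aggregated naively: the most-repeated guess becomes the “answer,” and the one expert reply is simply outvoted.

```python
# Hypothetical sketch: a naive scraper aggregating forum replies by
# frequency. The most-repeated answer "wins" whether or not it is
# correct; nothing here measures expertise. Replies are invented.
from collections import Counter

forum_replies = [
    "replace the O2 sensor",   # popular guess, repeated in many threads
    "replace the O2 sensor",
    "replace the O2 sensor",
    "smoke-test for vacuum leaks before replacing any parts",  # expert reply
    "it's always the gas cap", # hearsay
]

consensus, votes = Counter(forum_replies).most_common(1)[0]
print(f"'Consensus' answer ({votes} votes): {consensus}")
```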
In other words, a lot of public forum information can't be trusted. Yet AI picks it up as if it were the Gospel truth and uses it to formulate an answer. That’s a scary way of doing things because some repair answers may suggest doing repair or test procedures that are potentially dangerous to the person who is doing them, or may result in damage to a vehicle (crossing up or jumping the wrong wires, for example, or doing something that risks igniting fuel vapors or exposing yourself to dangerous fumes, dust particles or physical injury).
Given that AI is constantly evolving and improving, the hope is that it will eventually get better and better at coming up with correct answers to all kinds of questions. I don’t see that happening for a long, long time, not until developers figure out a way to give AI better discernment in how it gleans information from the internet, how it chooses authoritative information over questionable information, and how it resolves conflicting information. Until it gets to that point, I wouldn’t trust AI to answer anything but the most basic questions. Even then, I’d take any advice with a very large grain of salt.