By Brian Roberts
In the demanding field of engine rebuilding, precision is paramount. An engine rebuild requires strict adherence to manufacturer specifications, whether torque settings, surface roughness, or clearance tolerances. Traditionally, machinists have relied on printed manuals and individual expertise, but this approach is not without its flaws. Human error, outdated materials, and the sheer volume of data can all compromise accuracy. Enter Artificial Intelligence (AI): a game-changing technology poised to revolutionize specification retrieval and redefine the accuracy of engine rebuilds. Or so we thought!
The Accuracy Challenge in Engine Rebuilds
Rebuilding an engine is not a one-size-fits-all task. Each engine model comes with unique requirements, and even the slightest inaccuracy, such as a mismatched bolt torque or an improper machining tolerance, can lead to catastrophic engine failure. Accessing accurate and relevant data quickly is a longstanding challenge, especially for complex and modern engines.
AERA utilizes service manuals and various software to obtain engine specifications. When a spec is not published by the manufacturer, we use “Reported” specs obtained from members who share the measurements they are getting on a given component; these can be used as reference specs. The accuracy of the specs depends on the accuracy of the service information and the accuracy of the individual entering the data. Occasionally a member will question a spec, and upon investigation it is found that the decimal point is in the wrong place, digits are reversed, or the metric-to-imperial conversion is off.
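Each of those entry errors, a shifted decimal point, reversed digits, or a bad metric-to-imperial conversion, is mechanically detectable whenever both the metric and imperial values are on record. Below is a minimal sketch of such a cross-check in Python; the function and the bore values are illustrative assumptions, not AERA's actual tooling:

```python
# Sanity-check that a published inch value agrees with its metric value.
# Illustrative only: the values and tolerance are made up for this example.

MM_PER_INCH = 25.4

def check_conversion(spec_mm: float, spec_in: float, tol: float = 0.001) -> bool:
    """Return True if the inch value matches the metric value within tol."""
    return abs(spec_mm / MM_PER_INCH - spec_in) <= tol

# A bore listed as 68.0 mm should convert to roughly 2.677 inches.
print(check_conversion(68.0, 2.677))   # True  -> entries are consistent
print(check_conversion(68.0, 26.77))   # False -> decimal point shifted
print(check_conversion(68.0, 2.767))   # False -> digits transposed
```

A check this simple catches exactly the failure modes members report, which is one reason recording a spec in both unit systems pays off.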
Complicating this is the introduction of AI tools that scan the internet for data. AERA is aware of these tools, and upon testing we have found that they leave members open to errors in the results they receive. Those errors can prove expensive and can damage a shop's reputation.
Artificial Intelligence (AI) has become a transformative force across industries, from healthcare and finance to manufacturing and entertainment. However, as powerful as AI is, its effectiveness is fundamentally governed by a simple, timeless principle: “Garbage In, Garbage Out” (GIGO). This concept, originating in the early days of computing, highlights a fundamental truth about AI systems—they are only as good as the data they are trained on and the inputs they receive.
Understanding the GIGO Principle
GIGO (Garbage In, Garbage Out) refers to the idea that flawed, inaccurate, or incomplete input data will invariably produce flawed, inaccurate, or incomplete outputs, regardless of the sophistication of the system processing it. In the context of AI, this principle underscores the importance of high-quality data and well-defined processes for ensuring reliable and meaningful results. Such limitations are to be expected when there is no control over how results are verified.
One such limitation is the phenomenon of “hallucinations.” In the context of AI, hallucinations refer to instances when an AI system generates information that is incorrect, nonsensical, or entirely fabricated. Understanding hallucinations is vital for both developers and users as AI becomes more embedded in daily life.
What Are AI Hallucinations?
AI hallucinations occur when an AI system produces outputs that deviate from reality. Unlike humans, who can hallucinate due to sensory or psychological factors, AI hallucinations arise from the way algorithms process and generate data. Essentially, an AI “hallucination” happens when the system extrapolates, or “guesses,” information incorrectly based on gaps or ambiguities in its training data or input.
For example, during testing of AI tools for specification retrieval, I found that many of the incorrect results listed the same spec repeatedly across different engines. To me, this is the hallucination effect at work: the AI tool could not find the spec it was asked for and simply reused a published spec from another engine. These incorrect specs carried no link to a source, because no source existed. This is a real risk for machine shops, and they need to be aware of and understand AI limitations to avoid pitfalls that could cost them financially and damage their reputation.
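That pattern, identical values repeated across unrelated engines with no citable source, is easy to flag automatically. The sketch below shows one hypothetical heuristic; the query results and the URL are invented for illustration:

```python
from collections import Counter

# Hypothetical AI query results: (engine, returned spec, source URL or None).
results = [
    ("Engine A", "54.000 mm", None),
    ("Engine B", "54.000 mm", None),
    ("Engine C", "54.000 mm", None),
    ("Engine D", "56.515 mm", "https://example.com/service-manual-d"),
]

# Flag a result if the same value shows up for several different engines
# (possible reuse of another engine's spec) or if it arrives unsourced.
counts = Counter(spec for _, spec, _ in results)
for engine, spec, source in results:
    repeated = counts[spec] > 1
    unsourced = source is None
    if repeated or unsourced:
        print(f"{engine}: {spec} -> verify against the service manual "
              f"(repeated={repeated}, unsourced={unsourced})")
```

Neither flag proves a spec is wrong, but either one is reason to check the service manual before cutting metal.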
Here is a chart of the accuracy of AI on specification retrieval. The results are shocking and concerning. The test consisted of taking 20 random engines, spanning old to new and gas to diesel, and asking AI for the specifications in each of the categories in the chart. This yielded a best result of 40% correct for connecting rod big end diameter and a lowest result of 6% for valve stem height. In an industry where accuracy is critical to success, anything that is not near 100% should be a concern.
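For readers who want to run a similar test on their own engines, the grading is straightforward: compare each AI answer to the verified manufacturer spec and count the share that lands within tolerance. Here is a small sketch of that scoring, using invented numbers rather than the actual test data:

```python
def grade(ai_answers: dict, verified: dict, tol: float = 0.0005) -> float:
    """Fraction of AI answers within tol (inches) of the verified spec."""
    correct = sum(
        1 for engine, value in ai_answers.items()
        if engine in verified and abs(value - verified[engine]) <= tol
    )
    return correct / len(verified)

# Invented single-category example (not the article's actual test data):
verified   = {"Engine 1": 2.2495, "Engine 2": 2.1250, "Engine 3": 2.3740}
ai_answers = {"Engine 1": 2.2495, "Engine 2": 2.2495, "Engine 3": 2.3745}

print(f"{grade(ai_answers, verified):.0%} correct")  # 67% in this toy sample
```

Note the middle answer: the AI returned Engine 1's value for Engine 2, exactly the reuse pattern described above.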
Why Do Hallucinations Happen?
AI hallucinations can stem from various factors, including:
- Data Gaps or Errors – AI systems rely on massive datasets for training. If the training data is incomplete, inaccurate, or biased, the AI may produce incorrect or fabricated information.
- Overconfidence in Probabilities – AI models generate responses based on probabilities. When uncertain, the system might still produce an answer that seems plausible, even if it is incorrect, rather than admitting its limitations (a behavior sketched in code after this list).
- Task Complexity – Complex or ambiguous queries may force an AI to “fill in the blanks” using patterns it has learned, potentially leading to hallucinated information.
- Lack of Context – AI systems, particularly large language models, process text without intrinsic understanding of the real world. This lack of deeper comprehension can result in outputs that are plausible on the surface but logically flawed or factually incorrect.
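The overconfidence point above is worth making concrete. A language model scores candidate answers and emits the most probable one even when the distribution is nearly flat, that is, when it is effectively guessing. A toy Python illustration follows; the candidate clearances and scores are invented:

```python
import math

# Invented model scores (logits) for candidate answers to a spec query.
candidates = {
    "0.0015 in": 2.1,
    "0.0020 in": 2.0,
    "0.0025 in": 1.9,
    "not found": 0.5,
}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(s) for s in candidates.values())
probs = {ans: math.exp(s) / total for ans, s in candidates.items()}

# The top answer wins even though the model is barely more sure of it
# than of the alternatives -- and "not found" almost never gets said.
best = max(probs, key=probs.get)
print(f"answer: {best} (probability {probs[best]:.0%})")
```

Run this and the model answers 0.0015 in with only about 34% probability; the user sees a single confident-looking number and never sees the uncertainty behind it.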
While hallucinations are a limitation of AI, they also highlight the importance of transparency and accountability in AI development. Understanding that AI is not infallible is crucial for developers, users, and policymakers alike. As AI technologies continue to advance, efforts to address hallucinations will remain a priority in making these systems more reliable and trustworthy. In the end, AI hallucinations serve as a reminder that while these systems are powerful tools, they are not a substitute for human judgment, expertise and experience.
AERA and the Technical Services department strive to make sure that the information provided to members is as accurate as possible. The AERA tech staff works diligently every day, researching and verifying engine specifications to make sure the information provided is correct. In this industry, you can’t complete your work with an average of 21% accuracy; you need to be 100% accurate. This is why AERA is The Source for Information!
Read this article with all images in the digital issue of Engine Professional magazine https://engineprofessional.com/2025EPQ3/#p=72

