Welcome, knowledge seeker! Do you feel dazzled and awed by the great potential of artificial intelligence (AI)? Perhaps hesitant or lost when terms like convolution, deep learning, or autoencoder are thrown around? Well, fear not, for you’ve come to just the right place. You don’t need to be a computer wizard or a super-genius to understand how using AI can affect your system and, more importantly, whether using it is worth the investment.
This is the first of a multipart series aimed at helping you practically evaluate each step of investing in and using an AI-enhanced system or tool in an industrial application. We’ll address the fundamentals of what industrial AI (IAI) is and how it differs from your everyday AI. We’ll discuss its development, considerations for training, and concepts for practical evaluation in a live or test environment. Although we won’t be covering everything you’ll need to know, you’ll get a good sense of the what, when, and why of assessing your IAI.
Finding the best application for AI
Let’s take a look at a few hypothetical scenarios where AI might be applied in industrial applications:
• Sam is considering purchasing an AI-enabled part-tracking system that will cost hundreds of thousands of dollars to install and maintain. Is it worth it?
• Robin wants to protect and monitor the expensive computer numerical control (CNC) milling machine in their startup production facility. Should they buy the newer, more expensive system with AI adaptric proto interegatrix technology, or can they save some money and buy the older model with standard built-in audio monitoring?
• Casey wants to use the latest machine learning (ML) technology to develop an AI-driven box that can determine the health of a planetary gearbox from observing 100-plus hours of operations data. How can he sell this idea to his supervisor?
I’ve been in the fields of reliability engineering and AI analytics for nearly 20 years. During that time, I’ve seen some ingenious and inexplicable applications of AI to problems, from the mundane to the bizarre. These have led me to create 10 basic questions that everyone should ask when considering or evaluating an AI-driven tool:
1. Why is AI needed for this problem?
2. Has this technology been proven on a (sufficiently) comparable system or problem?
3. Are the training data relevant to the task?
4. Do the training data provide sufficient coverage/characterization of the problem?
5. Is the model optimizing and training for the things you think it is?
6. Has the model had sufficient training?
7. Are any model or domain assumptions being violated?
8. Are the relationships that are being captured sensible?
9. Is this model overfitting the data? (A quick check is sketched after this list.)
10. Are you looking at the appropriate performance metrics?
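To make one of these concrete, consider question 9. A minimal Python sketch of the most basic overfitting check, using scikit-learn with synthetic data as a stand-in for a real industrial dataset, might look like this:

```python
# Probe question 9 (overfitting): compare training and validation error.
# The data, the model, and the factor-of-2 flag are hypothetical illustrations.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeRegressor().fit(X_train, y_train)  # unconstrained depth invites overfitting

train_err = mean_absolute_error(y_train, model.predict(X_train))
val_err = mean_absolute_error(y_val, model.predict(X_val))
print(f"train MAE: {train_err:.2f}, validation MAE: {val_err:.2f}")

# A near-zero training error next to a much larger validation error is the
# classic overfitting signature.
if val_err > 2 * train_err:
    print("Large train/validation gap: possible overfitting.")
```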
AI will be with us in the long run, and that’s not a bad thing. It can make our lives simpler, easier, and more efficient in many ways. If we subscribe to the axiom that the simplest tool for the job is the best one to maintain and use, then any additional complexity added by an AI system should bring a proportional level of additional benefit.
Although this axiom could be considered true in a broad sense, it becomes especially important when considering high-risk or high-value assets in industrial or economically high-stakes settings. In these settings, we must ask ourselves some rendition of the following questions:
• How do we ensure that the AI is doing what we think it is?
• How can we tell when the AI is worth the added price, complexity, or risk?
• How do we know we need AI at all?
What is IAI?
Industrial AI is the intersection of rules-based decision making, machine learning, and human insight. Importantly, IAI typically adheres to three fundamental principles:
1. IAI systems and models are made to solve a known problem or provide some explicit benefit.
2. Solutions that fulfill requirements and are easier to understand, verify, and maintain are preferred unless there’s a known reason to do otherwise.
3. Justifications for modeling choices come from the greater context of the application.
These principles have some implications that will help determine what philosophical goals we should address.
Providing performance benefits
The first principle implies that any evaluations must be directed toward the desired outcome of the larger system. Simple accuracy or precision measures might not be enough to determine the true effect of an AI-enhanced product without the context of the system in which it’s applied. A successful AI-enhanced product should deliver benefits in terms of system-level performance or financial measures.
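To illustrate what a financial, system-level evaluation might look like, here is a back-of-the-envelope calculation in Python for a hypothetical AI fault detector; every figure is invented, so substitute your own failure rates, costs, and detector performance:

```python
# A toy value model for an AI fault detector. All numbers are
# hypothetical placeholders, not industry benchmarks.
failures_per_year = 4          # costly failures without the tool
detection_rate = 0.80          # fraction of failures the tool catches early
savings_per_catch = 40_000     # avoided downtime/repair cost per early catch, in dollars

false_alarms_per_year = 12
cost_per_false_alarm = 1_500   # wasted inspection and investigation time
annual_tool_cost = 60_000      # licensing, integration, and maintenance

benefit = failures_per_year * detection_rate * savings_per_catch        # $128,000
cost = false_alarms_per_year * cost_per_false_alarm + annual_tool_cost  # $78,000
print(f"expected annual net value: ${benefit - cost:,.0f}")             # $50,000
```

Even a crude model like this makes the trade-off explicit: the tool earns its keep only if the avoided failure costs outweigh the alarm and maintenance burden.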
Furthermore, solutions that worked last year might not meet today’s needs. Problems can arise because the distribution of inputs to the AI model may change over time. For example, a model that estimates HVAC use from historical weather data may deteriorate if the underlying weather patterns, the sensing systems, or even the building’s use patterns or internal environment change. Categorically, AI systems lose efficacy when the learned relationships between their inputs and outputs no longer produce the required results. The clear solution is to evaluate both the system and your AI tool as a continual or periodic process.
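As a sketch of what periodic drift monitoring could look like, here is a minimal example that compares a model’s training inputs against recent live inputs using a two-sample Kolmogorov-Smirnov test from SciPy; the temperature data and the alert threshold are synthetic illustrations, not recommendations:

```python
# Minimal input-drift check: has the live input distribution shifted
# away from the training distribution? Data here are synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_temps = rng.normal(loc=20.0, scale=5.0, size=1_000)  # historical inputs
live_temps = rng.normal(loc=24.0, scale=5.0, size=200)        # recent inputs, shifted

stat, p_value = ks_2samp(training_temps, live_temps)
if p_value < 0.01:  # the threshold is a judgment call, not a standard
    print(f"Input drift detected (KS stat={stat:.2f}, p={p_value:.1e}); "
          "re-evaluate, and possibly retrain, the model.")
```

Run on a schedule, a check like this flags when the world has moved on from the data the model learned.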
Solutions that fulfill requirements
The second principle provides us with the perspective that practical use for the end user is the highest priority. A solution might be technically correct, but if it doesn’t perform the task the user wants, when and how they want it, it can fail to engage the user and become worthless. Instead, the task should be performed in a manner that users understand and feel confident guiding. Providing a solution that is simple to understand, maintain, and operate is the most time-tested and reliable way to ensure the end user is willing and able to use the AI.
Justifying model choices
Our last principle highlights that solutions aren’t applied in a vacuum. Correspondingly, they must not be developed or evaluated in a way that’s agnostic to the domain and specific application. The application or use case dictates so much about the inputs, assets, and requirements for an AI tool that it would be misguided to ignore that information when developing or assessing tools for it.
10 common pitfalls of IAI development
Without assessment and evaluation against these philosophical waypoints, many IAI applications fail in their task or design. Time and again, I’ve encountered 10 elementary mistakes that ultimately contribute to some failure of the IAI, everything from adding unnecessary complexity to a well-understood problem to setting unreasonable performance expectations. More insidious and subtle are the occasions where nothing seems wrong, but the application spits out nonsensical or trivial output.
The 10 most common pitfalls in IAI applications can be summarized as:
1. Creating an AI system that’s technically correct but functionally useless or unnecessarily complex
2. Not learning long enough
3. Basing expectations on cases that are insufficiently comparable to the target environment
4. Every model has assumptions: If you don’t know your assumptions, keep asking
5. Learning from the wrong information
6. Egregious anomalies aren’t the only bad actors in data
7. Learning in too small a space
8. Claims of 100% accuracy 100% of the time are 100% inaccurate, 100% of the time
9. Learning to solve the wrong problem
10. No single number can tell a complete story (see the short illustration after this list)
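Pitfalls 8 and 10 are easy to demonstrate. On imbalanced data, a “model” that never predicts a fault can still post an impressive accuracy score. The synthetic example below, using scikit-learn metrics, shows why one number is never enough:

```python
# Why a single accuracy number misleads: with rare faults (~2% of samples),
# a "detector" that never fires looks ~98% accurate while catching nothing.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

rng = np.random.default_rng(1)
y_true = (rng.random(1_000) < 0.02).astype(int)  # ~2% true faults (synthetic)
y_pred = np.zeros_like(y_true)                   # model that never predicts a fault

print(f"accuracy:  {accuracy_score(y_true, y_pred):.1%}")                  # ~98%, looks great
print(f"recall:    {recall_score(y_true, y_pred, zero_division=0):.1%}")   # 0%, catches nothing
print(f"precision: {precision_score(y_true, y_pred, zero_division=0):.1%}")
```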
Ask about the key elements of your IAI system
Knowing the key elements of your system lets you break down any model or tool that performs a function:
• The need for the tool: What is the task or goal?
• What the tool uses: What are the inputs?
• How the tool works: Are the internal parts doing something you can understand?
• What the tool does: What outputs does it give, and how reasonable are they?
If we treat this evaluation as an assessment of a closed box (also known as a black box), we can examine it in three stages (a minimal code sketch follows the list):
1. What is going into the box? You’ll need to know the characteristics of the training data, what preprocessing has been or needs to be done, and what (if any) parameters/hyperparameters the model needs.
2. What’s inside the box? You’ll need to know the model assumptions, what type of inputs the model is looking for, how the model trains, and what types of relationships the model is capturing or re-creating.
3. What’s the box giving back? You’ll need to know the input range over which the model is reliable, the units being reported, and the scenarios that can cause it to fail.
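Here is a minimal sketch of what such a three-stage audit could look like in code; the data, model, and checks are hypothetical placeholders for your own system:

```python
# A toy three-stage "closed box" audit. Data, model, and limits are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 100, size=(500, 3))   # e.g., temperature, load, speed
y_train = X_train @ np.array([0.5, 1.2, -0.3]) + rng.normal(0, 2, 500)
model = LinearRegression().fit(X_train, y_train)

X_live = rng.uniform(-10, 120, size=(50, 3))   # live data strays out of range

# Stage 1: what's going into the box? Compare live inputs with training coverage.
lo, hi = X_train.min(axis=0), X_train.max(axis=0)
frac_outside = np.mean(np.any((X_live < lo) | (X_live > hi), axis=1))
print(f"{frac_outside:.0%} of live samples fall outside the training input range")

# Stage 2: what's inside the box? Inspect the relationships the model learned.
print("learned coefficients:", model.coef_)  # do the signs and magnitudes make sense?

# Stage 3: what's the box giving back? Sanity-check outputs against known limits.
y_live = model.predict(X_live)
print(f"outputs span [{y_live.min():.1f}, {y_live.max():.1f}]: check units and plausibility")
```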
As we progress through this article series, we’ll address each of these stages to examine the risks and benefits of an IAI application, and we’ll build a practical philosophy for assessing any IAI tool you may come across, including understanding when an IAI system is appropriate and adds value to the situation.
Many of the ideas and concepts presented here apply beyond AI or machine learning applications. But our focus will be on any tool that ingests data or information and outputs some form of decision support or control action. Equipped with the appropriate information and tools, you’ll be in a better position to make the best decisions about adding IAI to your operations.
Published Nov. 13, 2024, in the NIST Manufacturing Innovation Blog.