Historically, the sensitive nature of personal and company proprietary information held in life sciences quality management systems (QMS) has contributed to quality management teams’ reluctance to adopt AI. Add to that the complex global regulatory environment and the penalties of noncompliance, and this disinclination only grows as these teams work to reduce risk.
However, as the capabilities and benefits of AI and related technologies such as machine learning (ML), generative AI (GenAI), and large language models (LLMs) become more compelling, quality management teams are considering them to improve productivity and efficiency, reduce errors and duplication of effort, and empower industry professionals in their day-to-day activities.
An initial resistance
Initially, quality management teams were reluctant to adopt AI due to concerns about handling sensitive personal information. Two significant factors contributed to this.
First, there was a general distrust of AI technology in its early forms, particularly among C-suite executives and decision-makers. While AI is widely accepted for tactical applications like chatbots, credit scoring, and sales assistance, industry professionals resisted its use for critical decisions related to product lines, compliance issues, and direct communication with global regulators.
Second, the difficulty of validating AI systems against regulatory requirements in healthcare was a concern. Traditional validation methods fell short for complex AI algorithms, and the opaque nature of certain “black box” AI models posed significant challenges. Additionally, AI systems could produce unexpected or false “hallucinated” results, further compounding validation concerns. The regulatory environment and potential severe penalties for noncompliance, including fines, imprisonment, and product-line shutdowns, also slowed the adoption of AI in QMS.
There was a cultural aspect as well, with experienced and risk-averse quality assurance professionals prioritizing human oversight and proven methods over unproven technological solutions like AI QMS, especially when dealing with sensitive information or regulatory implications.
Regulators initially struggled to provide guidance on how to approach AI systems, exacerbating the challenges faced by quality management teams. It took time for regulatory bodies such as the U.S. Food and Drug Administration (FDA) and ISO to develop frameworks and guidelines for validating AI in regulated environments.
Proof of concept is paramount, especially given an understandable aversion to risk stemming from the severe consequences of noncompliance, which can jeopardize not only an organization’s finances but its very existence.
But despite initial trepidation, several emerging AI applications present opportunities within a QMS to enhance responsiveness, reduce cycle times, improve accuracy of “right first time” activities, and drive productivity gains.
Recommendation engines
The increasing adoption of recommendation engines enables the integration of AI with personal experience, thereby augmenting human intelligence. These engines gather relevant data and keep humans in the loop, allowing for proactive identification and correction of potential AI hallucinations by experienced quality and regulatory professionals. This synergy enhances responsiveness, reduces cycle times, improves accuracy and efficiency, and drives significant productivity gains within the QMS.
Recommendation engines present users with options for coding records based on system data, including the probability of accuracy for each option. However, they can’t account for undocumented, subjective aspects based on years of experience; this is where human expertise becomes crucial. Humans review the recommendations and underlying data, and apply experiential knowledge for optimal decision-making. Over time, if humans consistently select options other than the top AI recommendation, the engines can learn and refine their calculations accordingly. Human experience provides critical context that engines lack. Recommendation engines augment intelligence by curating data and top options, allowing humans to focus on applying valuable insights rather than routine coding tasks. The true value lies in this synergy: Engines operate on data while humans provide abstract thinking and subjective elements that machines can’t comprehend.
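This feedback loop, in which the engine ranks coding options and adjusts when reviewers consistently choose differently, can be sketched in a few lines. This is a minimal illustration, not a real product: the class name, codes, scores, and learning rate are all hypothetical.

```python
from collections import defaultdict

class CodingRecommender:
    """Hypothetical sketch of a human-in-the-loop recommendation engine:
    rank record-coding options by score, and learn from reviewer
    overrides of the top recommendation."""

    def __init__(self, base_scores):
        # base_scores: {code: probability}, assumed to come from an
        # upstream classifier (not shown here)
        self.base = dict(base_scores)
        self.adjust = defaultdict(float)  # learned corrections per code

    def recommend(self, top_n=3):
        # Combine the model's probability with learned adjustments
        ranked = sorted(self.base, key=lambda c: self.base[c] + self.adjust[c],
                        reverse=True)
        return [(c, round(self.base[c] + self.adjust[c], 3)) for c in ranked[:top_n]]

    def record_choice(self, chosen, learning_rate=0.05):
        # If the reviewer picks something other than the top option,
        # nudge the scores so the engine refines future rankings
        top = self.recommend(top_n=1)[0][0]
        if chosen != top:
            self.adjust[chosen] += learning_rate
            self.adjust[top] -= learning_rate

# After repeated overrides, the reviewer's preferred code rises to the top:
rec = CodingRecommender({"A01": 0.62, "B14": 0.30, "C09": 0.08})
for _ in range(4):
    rec.record_choice("B14")
```

The key design point is that the human choice is the training signal: the engine curates and ranks, while the reviewer's experiential judgment gradually reshapes the ranking.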
Content generation
GenAI provides significant value in content generation, particularly with the availability of private engines within the life sciences industry. Quality and regulatory professionals frequently need to review multiple records, execute tasks, and write summaries. GenAI excels at consolidating data from various sources and generating concise summaries, crucial for consistency in quality management practices while improving productivity and efficiency. By leveraging GenAI within secure, private environments, organizations can harness this technology’s power while maintaining strict confidentiality and protecting intellectual property. The evolution of private generative AI and GPT models facilitates human-like text generation, enabling expanded investment and applications in predictive and preventive analytics within quality management.
Breaking down silos: AI-driven, proactive quality management
Another notable trend is leveraging AI-enhanced connectivity to break down silos between quality departments and other functional areas. By working across a company’s infrastructure, including quality management, regulatory systems, enterprise resource planning (ERP), and product life cycle management (PLM), AI tools can proactively identify potential product and process issues and reduce the effect of a critical event by shortening the time it takes to identify an escalation. This cross-functional collaboration enabled by AI allows a shift from reactive to proactive problem-solving. For example, AI can aid in creating content that bridges different domains, such as regulatory submissions that combine data from regulatory affairs, clinical affairs, and quality management systems, streamlining compilation to meet the varying requirements of different target markets.
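The cross-system, early-warning idea can be illustrated with a toy correlation between QMS complaint records and ERP lot data. Everything here, the field names, the threshold, and the shape of both data sets, is a hypothetical simplification of what a real integration would involve.

```python
from collections import Counter

def flag_escalations(qms_complaints, erp_lots, threshold=0.02):
    """Hypothetical sketch: correlate QMS complaint records with ERP
    lot sizes to surface lots whose complaint rate crosses a threshold,
    so a potential escalation is visible before formal review.

    qms_complaints: list of dicts with a 'lot' key (illustrative schema)
    erp_lots: {lot_id: units_shipped}
    """
    counts = Counter(c["lot"] for c in qms_complaints)
    flagged = []
    for lot, units_shipped in erp_lots.items():
        rate = counts.get(lot, 0) / units_shipped
        if rate >= threshold:
            flagged.append((lot, round(rate, 4)))
    # Worst offenders first, so reviewers see the likely escalation at the top
    return sorted(flagged, key=lambda pair: -pair[1])
```

The point of the sketch is the data joining, not the statistics: the signal only exists because complaint records and shipment volumes live in different systems that the tool can read together.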
As these types of AI engines continuously learn, they become adept at contextualizing information and generating tailored content that bridges departmental boundaries, enhancing efficiency and accuracy of end-to-end quality management processes. Thus, they augment human-in-the-loop quality, giving the regulatory professional more time to focus on critical patient safety and market-access activities.
Addressing challenges
Although AI offers significant benefits, there are challenges that must be recognized and addressed. It’s important to start with a customer-centric approach, then understand the underpinning regulations and standards for the process and its associated deliverables, examine the data sets available for that process, and finally select the right AI technology for the specific use case. An ill-suited AI tool, such as a general-purpose LLM applied to a record-coding problem, may not yield optimal results, especially if the company lacks the relevant data needed to train, validate, and run it.
AI engines are only as effective as the quality and quantity of data they’re trained on, so to deliver accurate outputs they require access to abundant, high-quality data sources. This underscores the importance of thorough due diligence to ensure that an adopted AI technology has been trained on relevant and comprehensive datasets. When dealing with sensitive personal information such as protected health information (PHI) or personally identifiable information (PII), it’s crucial to ensure that the AI tool being deployed adheres to stringent security and privacy standards. Automated sourcing of training data from the internet, even when it doesn’t allow sensitive data to escape, can lead to inaccurate or hallucinated outputs, where the AI generates fictional content based on its training data, compromising the integrity of the system’s recommendations.
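One common control at this boundary is redacting obvious PHI/PII tokens before text leaves the QMS for an external AI service. The patterns below are a deliberately minimal sketch; a real deployment would use a validated de-identification pipeline covering far more identifier types than three regexes.

```python
import re

# Hypothetical patterns for illustration only; validated de-identification
# (e.g., covering all HIPAA Safe Harbor identifiers) goes far beyond this.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace recognizable PHI/PII tokens with labeled placeholders
    before the text crosses the QMS boundary to an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Placeholder labels (rather than blank deletions) keep the redacted text readable for the downstream model while removing the sensitive values themselves.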
Hallucination is a known issue that can affect large language models and other AI tools, but organizations can take proactive measures to counteract it. Combining the power of AI with human experience and expertise allows hallucinations to be identified and corrected proactively, before they’ve been published, incorporated into records, or communicated to patients or end users. Catching hallucinations beforehand is crucial to maintaining data integrity and ensuring the reliability of AI-assisted processes within a QMS.
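One cheap automated backstop before the human review step is checking that every record an AI-generated draft cites actually exists in the source system; a reference to a nonexistent CAPA is a classic hallucination. The ID formats and function below are illustrative assumptions, not a real QMS interface.

```python
import re

def verify_citations(draft, source_record_ids):
    """Hypothetical pre-release check: every record ID cited in an
    AI-generated draft (e.g., 'CAPA-1234') must exist in the source
    system. Anything unverified is flagged for the human reviewer
    as a possible hallucinated reference."""
    cited = set(re.findall(r"\b(?:CAPA|NC|DEV)-\d+\b", draft))
    unverified = sorted(cited - set(source_record_ids))
    return unverified  # an empty list means the draft may proceed to review
```

A check like this doesn't replace the human in the loop; it narrows the reviewer's attention to the claims most likely to be fabricated.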
Another concern is the “black box” nature of many AI models, which obscures the decision-making process and makes it difficult to understand how the AI arrived at a particular output or recommendation. This lack of transparency can be problematic, especially in regulated environments like quality management, where interpretation and explanation are critical.
To address these challenges, there’s an increasing demand for “white box” models that provide visibility into the AI’s decision-making process—enabling users to understand the context and data that informed a specific recommendation. This level of transparency is essential for building trust in AI systems and ensuring that they’re operating as intended, particularly in high-stakes domains like quality management, where regulatory compliance is paramount.
Sufficient computing power is also necessary to get the most from AI, because performance limitations, such as constraints on available memory, can be problematic, especially when AI systems need to process and analyze vast amounts of data simultaneously.
Real-world successes
While there are many exciting use cases of AI in quality management processes, some real-world examples stand out as particularly compelling success stories. One such example is the implementation of “next best action” systems, where AI guides users through workflows based on policies and procedures, rather than relying on predefined, rigid processes. This intelligent workflow automation has proven to be a game-changer for many organizations.
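A “next best action” system can be thought of as policy-driven rules evaluated against the current state of a record, rather than a fixed step sequence. The record fields, rule set, and wording below are hypothetical, chosen only to make the pattern concrete.

```python
def next_best_action(record):
    """Hypothetical sketch of a 'next best action' engine: suggest the
    next workflow step from the record's current state per policy,
    instead of forcing every record through a rigid sequence.
    Field names ('investigation_done', 'risk', etc.) are illustrative."""
    rules = [
        # (condition on record state, suggested action), in priority order
        (lambda r: not r.get("investigation_done"), "Complete investigation"),
        (lambda r: r.get("risk") == "high" and not r.get("capa_opened"),
         "Open CAPA (high-risk finding)"),
        (lambda r: not r.get("approved"), "Route for QA approval"),
    ]
    for condition, action in rules:
        if condition(record):
            return action
    return "Close record"
```

Because the rules express policy rather than sequence, a low-risk record can skip straight to approval while a high-risk one is routed to a CAPA, which is the flexibility the article credits these systems with.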
Another area where AI demonstrates significant value is with chatbots. When deployed for the appropriate audience, such as end users who require frequent training or assistance, chatbots can significantly reduce the time and resources dedicated to user support and education. However, it’s crucial to ensure that chatbots are implemented judiciously, targeting the right user groups to maximize their effectiveness while avoiding potential slowdowns for more advanced users.
Perhaps the most exciting real-world examples of successful AI implementation in quality management processes are AI engines for coding appropriate regulatory reports, and generative AI for content creation. These applications have the potential to save organizations countless hours of human effort each day while maintaining or even improving accuracy and compliance with global regulations and standards. The ability of AI engines to efficiently code regulatory reports by accurately identifying and extracting relevant information from various data sources is extraordinary. This not only streamlines the reporting process but also reduces the risk of errors and ensures compliance with regulatory requirements.
Furthermore, the advent of generative AI for content creation has opened up new possibilities for seamless collaboration between quality management and other departments, such as regulatory affairs. By leveraging AI to generate interconnected content that bridges departmental silos, organizations can foster a more integrated approach to problem-solving and identifying issues proactively.
As these real-world examples illustrate, successfully implementing AI in quality management processes can yield significant benefits, including increased efficiency, improved accuracy, enhanced collaboration, and better compliance with regulatory requirements. However, it’s crucial to deploy AI judiciously, addressing potential challenges such as data quality, model interpretability, and regulatory compliance to ensure its responsible and effective adoption.
The road ahead
As AI advances, engines are becoming smarter and more accurate, capable of understanding context and translating information across languages. This enhanced contextual awareness and translation capability holds immense value for global organizations, enabling improved visibility and collaboration across diverse markets and regulatory landscapes.
AI systems are demonstrating remarkable prowess in recognizing patterns, classifying information, and transforming content, positioning AI as a powerful tool for augmenting decision-making processes and offering insights that can inform future strategies.
One promising area is the continued evolution of content creation as AI systems become more adept at generating contextually relevant and regulatory-compliant documentation, streamlining processes and enhancing efficiency within quality management teams. Furthermore, as AI systems become more adept at recognizing and adapting to varying regulatory standards across different countries and markets, they can play a crucial role in ensuring compliance and facilitating seamless operations in diverse global environments.
Final thoughts
The use of AI in QMS activities supports industry professionals and companies in delivering safe and effective products to global markets, which has a direct effect on patient outcomes.
As the road map for AI adoption in a QMS continues to unfold, teams must stay informed about emerging technologies and trends. AI integration isn’t a trivial undertaking, and a well-planned, strategic approach is essential. A key consideration is to start by clearly identifying the specific problem to be solved and selecting an AI technology well-suited to address that problem. Not all AI technologies are equally effective for every use case, and quality and regulatory affairs teams must verify that the chosen technology is fit for purpose and has access to the necessary data. It’s also crucial to ensure that the data governance and security measures in place are appropriate for the task at hand and the selected AI technology. Thorough research and due diligence are essential to ensure that the chosen solution meets the organization’s specific needs and regulatory requirements.
Finally, before committing to an AI implementation, it’s prudent to conduct a final financial viability assessment to evaluate whether the potential value and cost savings generated by solving the identified problem outweigh the investment required for the AI solution. By following these best practices—clearly defining the problem, selecting the appropriate AI technology, ensuring data quality and governance, and conducting a thorough financial analysis—organizations can increase their chances of successfully integrating AI into their QMS while mitigating risks and addressing challenges proactively. Using AI within a company’s QMS could offer a company a significant advantage in bringing safe and effective products to global markets, and ultimately benefiting patients and healthcare as a whole.