During the last 15 years, machine vision technology has matured substantially, becoming a very important -- and in some cases, indispensable -- tool for manufacturing automation. Today, machine vision applications crop up in many industries, including semiconductor, electronics, pharmaceuticals, packaging, medical devices, automotive and consumer goods.
Machine vision systems offer a noncontact means of inspecting and identifying parts, accurately measuring dimensions, or guiding robots or other machines during pick-and-place and other assembly operations.
Historically, machine vision has been most successful in applications where it was integrated into the production process -- for example, guiding machines or closing a control loop. But while vision guidance has proved its worth in placing surface-mount components on printed circuit boards, most users would hesitate before investing in a machine vision inspection station to catch defective parts on an existing production line.
However, continuous improvements in cost, performance, algorithmic robustness and ease of use have encouraged vision systems' use in general manufacturing automation. Further advances in these areas will characterize the future of machine vision and result in more vision systems on manufacturing floors during the next few years.
What characteristics will describe future vision systems? To be useful in most manufacturing industries, they must meet three requirements. First, they must be fast enough to keep up with ever-increasing production rates. Second, they must be intuitive and easy to use. Finally, they must be intelligent enough to handle part-to-part and other process variations.
While vision technology might not have reached this point yet, recent advances in the vision industry are already making manufacturing vision applications easier and faster to develop and deploy.
Faster hardware
Since its inception, the machine vision industry has been characterized by a continually improving price/performance ratio. This trend has followed a similar one in the semiconductor industry, which has seen desktop PCs move from yesterday's 8 MHz 8086 CPUs to today's 300+ MHz Pentium IIs. The industry projects future 64-bit processors that will run at clock rates in the gigahertz range.
Higher vision-processing hardware speeds have been key to both faster parts-per-minute throughput and greater robustness in individual vision tools. Even in manufacturing processes where mechanical considerations limit production rates, higher-speed vision processing leaves a larger reserve of processing power. That reserve enables more intelligent vision tools that can help deal with process variations and simplify a system's programming.
Formerly, machine vision systems allowed only binary processing of low-resolution, black-and-white images. Today, sophisticated image processing and analysis performed on high-resolution, grayscale images is commonplace. Computationally intensive image-preprocessing operations such as mathematical morphology are widely used. In addition, vision hardware now allows not one but many processing passes through an image during a single frame time.
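To give a concrete, present-day flavor of the grayscale preprocessing mentioned above, the short Python sketch below applies a morphological opening and top-hat to an inspection image. OpenCV and the file name are assumptions made for illustration; this is not the hardware-accelerated pipeline of the systems discussed.

```python
import cv2

# Load an inspection image as 8-bit grayscale (file name is illustrative).
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# A 5x5 elliptical structuring element defines the morphology neighborhood.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Grayscale opening (erosion followed by dilation) suppresses small bright
# speckles while preserving the shape of larger features.
cleaned = cv2.morphologyEx(image, cv2.MORPH_OPEN, kernel)

# A top-hat (original minus opening) isolates the small bright details the
# opening removed -- a common preprocessing step before defect detection.
tophat = cv2.morphologyEx(image, cv2.MORPH_TOPHAT, kernel)
```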
Other advances have brought about process improvements. For example, older systems' slow hardware couldn't rotate images to compensate for part rotation; the operation required either very expensive hardware or approximations that introduced artifacts. State-of-the-art vision-processing hardware permits full-frame image rotation in less than a frame time, which in turn supports additional processing in real time.
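As a rough modern analogue of that full-frame rotation, the sketch below builds a rotation matrix about the image center and resamples the frame with bilinear interpolation. OpenCV, the measured angle and the input file are illustrative assumptions, not details from the article.

```python
import cv2

# Illustrative input; assumed to load successfully as an 8-bit grayscale frame.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
height, width = image.shape
center = (width / 2.0, height / 2.0)

# Suppose an upstream locating step measured the part rotated by 12.5 degrees;
# build a 2x3 affine matrix that rotates about the image center (no scaling).
matrix = cv2.getRotationMatrix2D(center, 12.5, 1.0)

# Resample the whole frame with bilinear interpolation so downstream tools
# can operate on an upright image.
upright = cv2.warpAffine(image, matrix, (width, height), flags=cv2.INTER_LINEAR)
```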
Until recently, the standard broadcast TV frame rate of 30 Hz was considered "real time." However, digital machine vision cameras, which run at higher rates, require processing at a faster real-time rate than conventional video. New vision-processing hardware readily supports image acquisition from such nonstandard cameras. It also can process the increased data contained in higher-resolution images, as well as conventional-resolution images acquired at much higher frame rates.
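To make the bandwidth argument concrete, the back-of-the-envelope calculation below compares the raw pixel data rate of a standard 640 x 480, 8-bit camera at 30 frames per second with a hypothetical 1024 x 1024 digital camera at 60 frames per second. The figures are illustrative, not specifications from the article.

```python
def pixel_rate_mb_per_s(width, height, bits_per_pixel, frames_per_s):
    """Raw pixel data rate in megabytes per second."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * frames_per_s / 1e6

# Conventional video digitized at 640 x 480, 8 bits per pixel, 30 frames/s.
standard = pixel_rate_mb_per_s(640, 480, 8, 30)     # about 9.2 MB/s

# A hypothetical high-resolution digital camera: 1024 x 1024, 8 bits, 60 frames/s.
digital = pixel_rate_mb_per_s(1024, 1024, 8, 60)    # about 62.9 MB/s

print(f"standard video: {standard:.1f} MB/s, digital camera: {digital:.1f} MB/s")
```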
Continuing advances in semiconductor technology during the last decade have enabled custom vision-processing systems to shrink continuously -- from board-filled cabinets to single boards to custom silicon chips. In fact, during the last few years, VLSI design tools and processes have matured to the point where vision vendors can develop truly high-performance vision hardware.
Today, users perform vision-processing tasks at substantially faster rates, using hardware that requires far less electrical power and costs much less per unit than conventional PC CPUs. Custom vision-processing hardware also brings robust vision functionality, in less complex and lower-cost configurations, closer to the manufacturing process.
PC-based vision
During the last 10 years, PC-based vision systems have become more widely accepted due to the ever-increasing speed of standard PC CPUs.
Key advantages of PC-based vision systems include the ability of both the vision supplier and the user to leverage third-party hardware and software. Also important is the PC's wide acceptance on both desktops and factory floors. Today, PCs running under the Microsoft Windows NT operating system are becoming a dominant platform for delivering factory-floor monitoring and control applications.
Numerous low-cost frame grabbers and image-processing software packages allow individual users to build vision applications themselves. Although feasible, this option is not without potential problems, which include conventional multimedia frame grabbers' limitations, nondeterministic performance and, occasionally, excessive development, installation and support costs.
New technology that addresses current PC-based vision systems' limitations includes next-generation, single-board, PC plug-in vision engines and powerful, component-based software environments for vision application development and deployment.
Plug-in vision engines
This latest generation of vision engines incorporates a complete vision system on a single PCI board. Users can thus offload all vision-related processing from the host PC, freeing the PC for other tasks such as production monitoring, control or user interfacing. Because all high-bandwidth, image-capture operations are internal to the board, this configuration offloads the host PCI bus in addition to freeing up the CPU.
Furthermore, because the vision board fits completely inside a host PC used for other purposes, plug-in vision engines offer a zero-footprint solution -- a key consideration in many original equipment manufacturer (OEM) or clean-room applications. By plugging multiple boards into a single PC, users can further leverage the single PC host over multiple vision-engine boards, each dedicated to a different inspection task.
In addition to on-board, high-performance CPUs and custom vision-processing hardware, such next-generation vision engines typically run under a real-time multitasking operating system, which allows deterministic performance in all image-acquisition, vision-processing and input/output operations. This offers a distinct advantage over conventional PC-based systems, which run under a nonreal-time Windows operating system.
In contrast to multimedia frame grabbers, vision engine boards support machine-vision input devices ranging from conventional analog video to nonstandard digital cameras. Designed specifically for machine vision applications, these products offer options such as strobing and channel-switching between successive frames, which conventional multimedia frame grabbers don't always provide. On-board display capabilities present images and graphics on an optional dedicated display or in a picture-in-picture fashion on the host PC/Windows display.
With on-board input/output communications controlled by the on-board, real-time operating system, plug-in systems also allow straightforward integration with other equipment as well as control of peripherals such as lighting without relying on the host PC and its nonreal-time behavior.
Finally, on-board network connectivity simplifies deployment and ensures participation in factory-floor networks and intranets. It also supports innovative remote monitoring and diagnostics options.
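As a rough sketch of what remote monitoring over such a network connection might look like, the Python snippet below polls a hypothetical vision engine for inspection statistics over TCP. The address, port and line-oriented text protocol are all invented for illustration and do not describe any particular product.

```python
import socket

ENGINE_ADDRESS = ("192.168.0.50", 5025)   # hypothetical vision-engine address/port

def query_statistics():
    """Ask a (hypothetical) vision engine for its current inspection counters."""
    with socket.create_connection(ENGINE_ADDRESS, timeout=2.0) as conn:
        conn.sendall(b"GET STATS\n")              # invented text command
        reply = conn.recv(4096).decode("ascii")   # e.g. "inspected=1200 failed=7"
        return dict(field.split("=") for field in reply.split())

stats = query_statistics()
print(f"inspected: {stats['inspected']}, failed: {stats['failed']}")
```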
Ease of use and deployment
Many users might be involved with the development, deployment and day-to-day support of vision applications. Users range from factory-floor operators and engineers -- who typically would set up, install or modify vision applications -- to system integrators or OEMs, who would create custom vision applications or even develop new vision tools.
Ease of use no longer implies just a top-level, point-and-click graphical user interface but also multilevel, comprehensive access for all expected system users and skill levels. Newer systems' intelligence reduces the need for inordinate vision-processing expertise and minimizes application development and deployment time.
Most state-of-the-art vision systems include built-in graphical user interfaces and comprehensive run-time or monitoring environments. These allow operators to select jobs, start and stop inspections, adjust inspection parameters, access reports and statistics, and capture and log failed-part images or other data.
At one level down, manufacturing engineers who set up or configure vision applications do so in an intuitive environment, using high-level tools as opposed to low-level image-processing and analysis operations.
Describing a vision application program as a sequence of steps and using high-level, application-oriented tools -- as opposed to low-level, image-processing or analysis operations in a conventional programming language -- is now a widely accepted approach. Implementing such tools on high-performance platforms has increased these tools' robustness and intelligence, while limiting the need for users' vision expertise. This trend certainly will continue in the future.
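To suggest what "a sequence of steps built from high-level tools" can look like in practice, the sketch below strings together three hypothetical tools -- locate, measure, decide -- into an inspection job. The tool names, stubbed results and tolerance band are invented for illustration and do not represent any vendor's actual API.

```python
# Hypothetical high-level tools; each hides the low-level image processing.
def locate_part(image):
    """Find the part and return its position and rotation (stubbed here)."""
    return {"x": 312.0, "y": 188.0, "angle": 1.7}

def measure_gap(image, pose):
    """Measure a critical gap, in millimeters, relative to the located pose."""
    return 0.48

def inspect(image):
    """An inspection 'job' expressed as a sequence of high-level steps."""
    pose = locate_part(image)
    gap_mm = measure_gap(image, pose)
    passed = 0.40 <= gap_mm <= 0.55          # illustrative tolerance band
    return {"pose": pose, "gap_mm": gap_mm, "passed": passed}
```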
At the system's lowest level, system integrators or OEM users can develop customized user interfaces quickly and, in some cases, add to the system's functionality by working in industry-standard, software-development environments. One recent development in this area, which promises faster application deployment for system integrators as well as faster time to market for OEMs, is component-based software. Such software encapsulates core vision system functions required to develop and deploy vision applications.
In a Microsoft Windows environment, these components are developed as ActiveX controls (formerly OCXs or OLE custom controls). A vision application deployment component, for example, would encapsulate all functions related to training or trying out a job. Another deployment component would encapsulate all functions related to loading a job, and starting, stopping and monitoring an inspection. It's now possible to create or customize vision applications without knowing much about vision system internals by dropping such basic building blocks inside a custom graphical user interface developed using Visual Basic or Visual C++.
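For readers more comfortable in Python than Visual Basic or Visual C++, the sketch below shows the same idea of driving an ActiveX deployment component through COM automation. The ProgID, method names and job file are hypothetical stand-ins, since the article does not name a specific control; only the pywin32 Dispatch call itself is real.

```python
import win32com.client  # pywin32; requires Windows and a registered control

# "VisionVendor.Deployment" is a made-up ProgID standing in for a real
# ActiveX deployment component; the methods below are likewise illustrative.
engine = win32com.client.Dispatch("VisionVendor.Deployment")

engine.LoadJob("gasket_inspection.job")   # hypothetical job file
engine.StartInspection()

# ... later, from the same custom graphical user interface ...
engine.StopInspection()
```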
The future is now
The hardware and software trends highlighted above will continue and even intensify in the future. Faster hardware, more intelligent tools and better application software development and deployment environments all will enable a broader and deeper proliferation of machine vision in manufacturing.
However, recent advances in price, performance, robustness and ease of use have already brought vision technology very close to what the vision industry and marketplace projected as a distant promise only a few years ago.
At the same time, the last 15 or 20 years of vision applications on the factory floor have educated manufacturers about where vision systems work best, and those application boundaries continue to expand. Manufacturers now consider machine vision not a research curiosity but a mature tool for manufacturing automation.
Although potential users may want to wait for the future's inevitable new technology -- including faster hardware and more intelligent software -- the recent vision technology developments mentioned in this article imply that the future is now, and it's an exciting time for vision users and suppliers.
About the author
John E. Agapakis is vice president of research and development for RVSI Acuity CiMatrix, which is based in Canton, Massachusetts. He can be reached by telephone at (603) 598-8400 or e-mail at jagapakis@qualitydigest.com.