AdeptSight 2.0 Online Help
March 2007
AdeptSight 2.0 User Guide
Welcome to AdeptSight 2.0
AdeptSight 2.0 is a powerful vision package that integrates into Adept robotic systems. It allows quick
development of robust and accurate vision-guided and inspection applications thanks to a simple
graphical user interface.
Start with AdeptSight Tutorials
If you are unfamiliar with the AdeptSight environment, we recommend that you follow the Getting
Started with AdeptSight tutorial for a quick introduction to and tour of the AdeptSight software.
• Getting Started with AdeptSight
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Conveyor Tracking Tutorial (Requires CX Controller and conveyor belt)
• AdeptSight Standalone C# Tutorial
Start Building a Vision Application
The basic steps for building an application are:
1. Create a new vision sequence. See Using the Sequence Manager.
2. Set up system devices. See Setting Up System Devices.
3. Calibrate the system. See Calibrating System Devices.
4. Add vision tools to the vision sequence. See Adding Tools to the Sequence.
Learn about the AdeptSight 2.0 Environment and Setup
• What is AdeptSight?
• Installing AdeptSight Software
• Setting Up System Devices
• Starting AdeptSight
• Calibrating System Devices
• Calibrating the Camera
Explore Support Files
The support files included on the AdeptSight CD include two sample applications, code examples, and images.
To open sample applications from the Windows Start menu:
1. Select one of the following:
• Programs > Adept Technology > AdeptSight 2.0
• Programs > Adept Technology > AdeptSight 2.0 > Example > Multithread
• Programs > Adept Technology > AdeptSight 2.0 > Tutorial > Hook Inspection
To browse for support files:
Support files are copied to the AdeptSight 2.0 program folder during the software installation.
Support files include:
• AdeptSight ReadMe file
• A 'Tutorial' folder containing images, code examples, and project files for use with
AdeptSight tutorials, including the completed C# Tutorial
• An 'Example' folder containing images, code examples, and project files for use with the
Multithread example application, as well as the code and Visual Studio project files for the
example application.
What is AdeptSight?
AdeptSight is an easy-to-use, standalone vision guidance and inspection package that runs on a PC,
and comes complete with all the necessary hardware including camera, lens and accessories.
Incorporated into the Adept DeskTop development environment, it integrates vision guidance with
Adept robots. AdeptSight can also be used as a standalone vision inspection product, within the
Windows .NET development environment.
The AdeptSight vision software provides a suite of high-performance vision tools, integrated with Adept
robotic systems. AdeptSight 2.0 vision software features:
• High-accuracy, robust, and fast part-finding and part-inspection capability.
• Interaction with robots & controllers from the PC and development of robotic applications in
V+/MicroV+ within Adept DeskTop.
• Development of standalone applications through the AdeptSight Framework.
• Built-in calibration wizards to ensure accurate part finding and robot guidance.
• An extensive set of inspection and image processing tools.
• AdeptSight Tutorials, Online Help, and support files that will assist you in learning and using
AdeptSight.
How it Works
In AdeptSight, the PC acts as a vision server. Using images from one or more cameras that are linked to
the PC, AdeptSight executes vision processes such as part finding, part inspection, and image
processing. Vision results and locations are sent to the controller. Conveyor belts, which are also
controlled from the PC, can be added to AdeptSight vision applications.
Overview of an AdeptSight Vision Project
An AdeptSight application is called a Vision Project. Each Vision Project contains the configuration of the
vision tools and the configuration of devices related to the vision application. Through the AdeptSight
vision project you can also calibrate the cameras, and calibrate the cameras to the other devices used
by the vision application: robots, controllers, and conveyor belts.
Figure 1 illustrates the contents and relationships of an AdeptSight Vision Project.
Figure 1 Overview of an AdeptSight Vision Project
The figure shows the two parts of a Vision Project:
• System Devices (hardware and communication environment): Cameras (Basler, DirectShow, Emulation virtual camera), Controllers (CX, AIB), and Conveyor Belts.
• Vision Sequences (vision software, tools, and processes): Image Acquisition Tool, Locator, Finder Tools, Inspection Tools, Image Processing Tools, Color Tools, Frame Builder, Clearance Tool, Results Inspection Tool, and User Tool.
Installing AdeptSight Software
Before Installing
• Install the USB protection key (dongle) that came with AdeptSight. This dongle is required
and must be present at all times to ensure the proper functioning of AdeptSight.
• Configure the PC to ensure that the hardware protection key will be properly detected at all
times.
• Uninstall any previous Adept DeskTop and AdeptSight versions.
• Uninstall any existing HexSight versions.
See the section Configuring the PC to Detect the Adept USB
Protection Key for more information.
Configuring the PC to Detect the Adept USB Protection Key
Power management options of the PC, such as low-power modes ("System Standby" and "Hibernate")
or the screen saver, may prevent the computer from properly detecting and reading the USB hardware
key (dongle), even though it is properly installed. The error message that appears in such a case is:
Hardware protection key not found, the software will run in demonstration mode.
Solution
To ensure that the computer correctly detects the USB protection key (dongle), please follow the
procedures below to change the power options of the PC and to disable the screen saver.
Change the 'save power' option for the USB root hubs on the computer:
1. In the Device Manager, open the Properties window for the USB Root Hub on which the device
is connected.
2. In the Power Management tab, disable the option: 'Allow the computer to turn off this device to
save power'.
3. Repeat the above steps for all USB Root Hubs that may be used for the AdeptSight USB
hardware key.
Disable screen saver and power management options:
1. Right-click on the Windows Desktop and select: 'Properties'.
2. In the Display Properties window, select the Screen Saver tab.
3. Next to 'Screen saver' select: (none).
4. Still in the Screen Saver tab, under 'Monitor Power', click the 'Power...' button.
5. Next to 'Turn off monitor', select: 'Never'.
6. Next to 'Turn off hard disks', select: 'Never'.
7. Next to 'System standby', select: 'Never'.
8. Next to 'System hibernates', select: 'Never'.
AdeptSight 2.0 - User Guide
5
9. Select the Hibernate tab, and disable the 'Enable hibernation' check box.
10. Click OK or Apply. You may then need to reboot the computer to enable the reading of the
USB hardware key.
Installing the Software
1. Launch the installation from the AdeptSight CD-ROM.
2. Follow the instructions of the installation program.
3. The installation will install and/or update:
• The driver for the Safenet Sentinel USB hardware key (dongle),
• The Basler camera driver (BCAM 1394 Driver),
• Microsoft .NET Framework 2.0, if not already installed.
4. The installation program will install the correct Adept DeskTop version that is required for
AdeptSight.
5. After installation, reboot the computer before using Adept DeskTop and AdeptSight.
Related Topics
AdeptSight 2.0 License Options
Starting AdeptSight
AdeptSight 2.0 License Options
The following describes the licenses that are available for AdeptSight 2.0.
Depending on the license that is installed on the system, you may not have access to some functions
and tools, or you may have only limited (demo mode) access to certain vision tools.
Licenses are encoded on the hardware key (dongle) that is required to run AdeptSight. Multiple licenses
can be encoded on a single hardware key.
AdeptSight 2.0 Base License
Supports:
• Connection to a single CX controller or a single Cobra i-Series Robot
• 2 cameras
• Execution in Adept DeskTop or standalone
Conveyor Tracking License
Enables the use of conveyor-related functions and tools, such as:
• Belt Calibration Wizard
• Motion-related tools: Communication Tool and Overlap Tool
• Robot and Belt Latching functionality
Multiple Camera License
Adds support for 2 additional cameras.
Additional Controller License
Adds support for an additional controller. Can be added multiple times.
Color License
Adds support for color processing.
Starting AdeptSight
Vision applications are built within the AdeptSight control, which is accessible from Adept DeskTop.
The hardware protection key (USB dongle) provided with the
AdeptSight package must be installed to properly run AdeptSight.
To start AdeptSight from Adept DeskTop:
1. Open Adept DeskTop.
2. From the Adept DeskTop menu, select View > AdeptSight, or click the 'Open AdeptSight' icon in the
Adept DeskTop toolbar.
3. If you have more than one controller license on your system, the Controller Information dialog
opens. Select the type of controller you will use.
Figure 2 Selecting a controller in the Controller Information dialog
4. The Vision Project manager opens, similar to Figure 3.
Vision applications are built and configured through the Vision Project window, also called the Vision
Project manager.
A Vision Project consists of one or more sequences of tools, as well as the configuration data for the
system devices that are used by the vision guidance application.
• The Sequence Manager enables you to manage and edit the tool sequences that are in the vision
project. There must be at least one sequence in a project. Sequences are created and edited in the
Sequence Editor. See Using the Sequence Editor.
• The System Devices Manager enables you to set up and configure the devices that are required for
the vision project.
Figure 3 The Vision Project Manager Window, showing the Sequence Manager and the System Devices Manager
Building a New Vision Application
The basic steps for building an application are:
1. Open a Vision Project Window. See Using AdeptSight.
2. Add a vision sequence to the project. See Using the Sequence Manager.
3. Set up system devices. See Setting Up System Devices.
4. Calibrate the system. See Calibrating System Devices.
5. Add vision tools to the vision sequence. See Adding Tools to the Sequence.
Using AdeptSight
AdeptSight software enables you to create vision applications called vision projects, which are
configured and managed through the Vision Project interface.
The Vision Project Interface is divided into two sections: The Sequence Manager and the System Devices
Manager. Through the Vision Project interface you create and configure a vision project that can contain
any number of vision sequences and devices.
This section introduces the basic use of the AdeptSight Vision Project interface.
What is a Vision Project?
A vision project consists of one or more vision sequences as well as the configuration of the system
devices that enable the vision project to be carried out.
A sequence is a series of processes that are carried out by vision tools. When you execute a sequence,
each tool in the sequence executes in order. You add, remove, and edit tools in the Sequence Editor.
See Using the Sequence Editor.
A system device can be a camera, a controller or a conveyor belt. See Setting Up System Devices.
Opening the Vision Project Interface
The Vision Project interface opens when you start AdeptSight.
To start AdeptSight from Adept DeskTop:
1. Open AdeptSight from the Adept DeskTop menu: View > AdeptSight.
Or, in the Adept DeskTop toolbar, click the 'Open AdeptSight' icon.
2. You can dock the Vision Project window anywhere in the Adept DeskTop window. See Starting
AdeptSight if you need help.
Figure 4 Vision Project Interface
The interface comprises the Vision Project toolbar, the Sequence Manager, the System Devices Manager, and the Vision Project status bar.
• Connection status: green = connected; red = not connected.
• Calibration status: warning symbol = not calibrated; check mark = calibrated.
The Vision Project Toolbar
The functions available from the upper toolbar are related to managing the project and managing
sequences in the vision project. For information on the lower toolbar, under System Devices, see Using
the System Devices Manager.
Create Project
Starts a new vision project - Clears all the current sequences and all
System Devices settings from the Vision Project.
Load Project
Loads a vision project from a Vision Project file (*.hsproj), replacing all
the current sequences and System Devices settings in the Vision Project.
Save Project
Saves all the current sequences and System Devices settings to a
Vision Project file (*.hsproj).
Add Sequence
Adds a new, blank sequence to the Vision Project.
Remove Sequence
Removes the selected sequence from the Vision Project.
Edit Sequence
Opens the Sequence Editor for the selected sequence. This enables
you to add and configure the vision tools in the sequence.
Execute Sequence
Runs the vision sequence. Depending on the state of the Continuous
Loop icon, Execute Sequence runs a single iteration, or runs the
sequence continuously until stopped.
Stop Sequence
Stops the execution of a sequence that is running.
Continuous Loop
Runs the selected sequence in loop mode (continuous execution),
until the execution is stopped by the Stop Sequence icon.
Project Properties
Opens the Environment Options dialog, for viewing and editing of
properties that apply to the AdeptSight environment.
Help
Opens the AdeptSight online help.
Saving Vision Projects
All sequences in the Sequence Manager and all System Devices configurations are saved when you save
a vision project. Vision Project files are saved with the extension 'hsproj'.
Saved projects can be reloaded later with the 'Load Project' icon.
To save a vision project:
1. Click the 'Save Project' icon to save the current vision project to a file.
2. Provide the filename and destination for your project file (*.hsproj).
To load a vision project:
1. Click the 'Load Project' icon:
2. Loading a Vision Project will clear and erase all current settings in the Vision Project. You will be
prompted to save changes to the current project, if required.
3. Provide the filename and location of project file (*.hsproj).
To create a new vision project:
1. Click the 'Create Project' icon:
2. Creating a new Vision Project will clear and erase all current settings in the Vision Project. You
will be prompted to save changes to the current project, if required.
Adding and Managing Vision Sequences
When you open a new AdeptSight session without an 'auto-loaded' project, or when you create a
new Vision Project, the Sequence Manager contains a blank sequence named New Sequence.
• If there is no sequence in the list you must create at least one sequence to start creating a
vision project.
• See Using the Sequence Manager for information on adding and managing the sequences in a
vision project.
Adding and Managing System Devices
A Vision Project requires at least one camera device. By default, AdeptSight adds an Emulation device
(virtual camera) and any cameras that are currently connected and detected by the computer.
• If there is no camera in the Cameras tab of the System Devices list, you must add at least one
camera to start creating a vision project.
• Other devices and robots must be added as required, for your vision application.
• See Using the System Devices Manager for information on configuring and calibrating
cameras, controllers, robots, and conveyor belts.
Related Topics
Using the Sequence Manager
Using the Sequence Editor
Using the System Devices Manager
Setting Up System Devices
Environment Settings
The Environment Settings dialog allows you to configure and save preferences.
Startup Options
Auto Load Project
When Auto Load is enabled, AdeptSight automatically opens with the settings and parameters saved in
the specified project file.
• Select Enabled to activate automatic loading (Auto Load) of a project.
• Enter the path and file name of the required project file (*.hsproj). This file will be loaded
automatically whenever you open AdeptSight with Auto Load enabled.
• You cannot create a new project file here, only load an existing file. See Saving a Sequence
for more information on saving project files.
• The AdeptSight installation provides example project files you can load.
Figure 5 Opening the Environment Settings Dialog
Check Enabled to activate Auto Load; the specified project file is loaded automatically whenever AdeptSight opens.
Enabled
Activates the automatic loading of a specified file whenever AdeptSight is opened.
Disable messages initiated by a framework operation
This option is useful for standalone (framework) applications that must run automatically, without
operator supervision. Enabling this check box will disable error messages that may pause or stop the
running of the vision system.
Note: A standalone (framework) application is an AdeptSight application that is not running from within
the Adept DeskTop environment.
Splash Screen Enabled
Enables the display of the splash screen when AdeptSight is opened.
Log Options
The Log tab contains options that define which types of events AdeptSight writes to the log file. There
are four levels of event logging:
Level 1: Errors
Level 1 logs contain Error events only. For example:
Error 4/27/2005 3:58:52 PM ActiveVBridge 130 username
You must be connected to the robot to use this function.
Level 2: Warnings
Level 2 logs contain Error events and Warning events. For example:
Warning 4/27/2005 3:50:58 PM Locator 2000 username
<unnamed>.Messages[4]= 4502-No hypotheses were generated.
Level 3: Information
Level 3 logs contain Error, Warning and Information events. For example:
Information 2005-04-27 15:47:50 ActiveVBridge 122 username OnDone event
processing.
Level 4: Verbose
Level 4 Logs contain Errors, Warnings, Information, and "Verbose" events. Verbose events give added
information. For example:
Verbose 4/27/2005 3:51:01 PM Locator 1500 username Execution Started.
Verbose 4/27/2005 3:51:01 PM Locator 1501 username Execution Ended.
Figure 6 AdeptSight Environment Settings - Log Tab
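The four levels are cumulative: each level includes all events from the levels below it. As an illustration only (this is a conceptual sketch, not AdeptSight code), the filtering rule can be expressed in Python, using the level names from the examples above:

```python
# Conceptual sketch of cumulative log levels
# (illustration only, not part of the AdeptSight API).
LEVELS = {"Error": 1, "Warning": 2, "Information": 3, "Verbose": 4}

def should_log(event_level: str, configured_level: int) -> bool:
    """An event is written to the log file if its level number
    does not exceed the configured logging level."""
    return LEVELS[event_level] <= configured_level

# At Level 2 (Warnings), Errors and Warnings are logged,
# but Information and Verbose events are not.
print(should_log("Error", 2))        # True
print(should_log("Warning", 2))      # True
print(should_log("Information", 2))  # False
```

For example, a Level 3 (Information) setting captures everything except "Verbose" events.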
Communication Options
The AdeptSight Server Communication mode is the default communication mode between
AdeptSight and a SmartController. This communication is carried out through the TCP/IP protocol.
• The AdeptSight Server Communication mode opens three tasks on the server. These tasks
must remain open at all times.
• The AdeptSight server provides the fastest communication mode with the controller and can
perform requests in parallel.
• If you disable this mode, or if you are connected to an AIB controller, communication with the
server is carried out through the proprietary VPlink serial protocol.
After you make any change to the communication mode, by enabling or
disabling the AdeptSight Server Communication checkbox, you must reload
your project for the communication change to take effect in the current
session.
Figure 7 AdeptSight Environment Settings - Communication Options
Color Options
Color options allow you to modify the colors of markers and items that appear in the display.
• Colors preferences are saved to the user preferences folder in Windows.
• There is no undo action or option available for reverting to a previous or initial color setting.
Figure 8 AdeptSight Environment Settings - Color Options
To modify color options:
1. In the Category list, select the item for which you want to modify the color.
2. Click Change to open the Color selection dialog.
3. Choose a color (or create and select a custom color) for the item, and click OK.
If you set 'Scene' type markers (such as 'OutlineScene') to black (R,G,B = 0,0,0), the
markers will not be visible against the black background of scene displays.
Note: A scene is a vectorized representation of contours, outlines, and features that
are found in an image.
To reset default colors:
1. Click Reset. This resets ALL colors to their default values.
About Options
The About tab displays version information for the AdeptSight software and plug-ins, as illustrated in
Figure 9.
The About button opens a window containing additional information on the current AdeptSight version
and license options.
Figure 9 AdeptSight Environment Settings - About Tab
Using the Sequence Manager
The Sequence Manager is the area of the Vision Project interface that allows you to manage and edit the
sequences that are part of the Vision Project.
What is a Sequence?
A sequence is a series of processes, also called tools. When you run or execute a sequence, each of
these tools executes in the order in which it appears in the sequence. A Vision Project can contain one
or more sequences, which are managed and executed from the Sequence Manager section of the Vision
Project interface. Sequences are built and configured through the Sequence Editor interface. See Using
the Sequence Editor.
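Conceptually, a sequence is an ordered list of tools executed one after another, each tool working on the output of the previous one. The Python sketch below illustrates only this execution model; the class name and the stand-in tools are hypothetical and are not the AdeptSight Framework API:

```python
# Conceptual model of a vision sequence
# (hypothetical names, not the AdeptSight Framework API).
class Sequence:
    def __init__(self, name):
        self.name = name
        self.tools = []          # tools execute in list order

    def add_tool(self, tool):
        self.tools.append(tool)

    def execute(self, data=None):
        """Run each tool in order; each tool receives the
        output of the previous tool."""
        result = data
        for tool in self.tools:
            result = tool(result)
        return result

# A minimal two-step sequence: acquire an image, then locate parts.
seq = Sequence("New Sequence")
seq.add_tool(lambda _: "image")                    # stands in for image acquisition
seq.add_tool(lambda img: f"parts found in {img}")  # stands in for the Locator
print(seq.execute())  # parts found in image
```

This ordering is why tool placement in the Sequence Editor matters: a tool that consumes an image must come after the tool that produces it.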
The Sequence Manager Interface
The Sequence Manager is the top area of the Vision Project control that opens when you start
AdeptSight.
Figure 10 Vision Manager Toolbar and Status Icons
The toolbar in the Sequence Manager area of the Vision Project interface provides the following icons:
• Create Project: starts a new vision project, clearing all the current sequences from the Vision Project.
• Add Sequence: adds a new sequence to the project.
• Remove Sequence: deletes the selected sequence from the project.
• Edit Sequence: edits the selected sequence in the Sequence Editor.
• Execute/Stop Sequence: runs or stops the selected sequence.
• Continuous Loop: runs the sequence in loop mode (continuously, until the stop command).
Adding a New Sequence
When you open a new AdeptSight session, and have not defined any 'auto-loaded' projects, the
Sequence Manager contains a sequence named New Sequence. If there is no sequence in the list you
must create at least one sequence to start creating a vision project.
• After adding a sequence, you can edit the sequence in the Sequence Editor.
• Before editing and adding tools to the sequence, you should set up and calibrate the system
devices. See Basic Procedure for Setting Up System Devices.
To add a vision sequence:
1. Click the 'Add Sequence' icon to add a sequence to the list:
2. To rename the sequence, select it in the list, left-click once on the name, then type the new
name of the sequence.
To delete a vision sequence:
1. Select a sequence in the list.
2. Click the 'Remove Sequence' icon:
Saving a Sequence
All sequences in the Sequence Manager are saved when you save the vision project; however, sequences
can also be saved to file individually. Sequence files are saved with the extension 'hsseq'.
Saved sequences can be loaded as a new sequence in the Vision Project, or as a replacement for a
selected sequence.
To save a vision sequence:
1. Select a sequence in the list. Right-click on the selected sequence name.
2. From the dropdown context menu, select: Save Sequence, as illustrated in Figure 11.
3. Provide the filename and destination for your sequence file (*.hsseq).
To load a vision sequence:
1. To replace a sequence that is currently in the Vision Project with a saved sequence, right-click
on the name of the sequence you want to replace.
To load the saved sequence as a new sequence in the project, click in a blank area of the
Sequence Manager list.
2. From the dropdown context menu, select: Load Sequence.
3. Provide the filename and location of the sequence file (*.hsseq).
Figure 11 Context Menu for Loading or Saving a Vision Sequence
Editing a Sequence
Sequences are edited in a window called the Sequence Editor. In the Sequence Editor, you add the tools
that make up the vision sequence.
The most basic vision application carries out these two processes:
• Acquire an image of the workspace, with the Acquire Image tool.
• Locate parts in the workspace with the Locator tool.
To edit a sequence:
1. In the Sequence Manager, select a sequence in the list.
2. In the toolbar, click the 'Edit Sequence' icon to open the Sequence Editor:
3. See Using the Sequence Editor to set up and configure a sequence.
Related Topics
Using the Sequence Editor
Using the System Devices Manager
Setting Up System Devices
Using the System Devices Manager
The System Devices Manager is the area of the Vision Project interface that allows you to manage and
edit the devices that are part of the Vision Project.
A system device is any device, such as a camera, a robot controller, or a conveyor belt, that is used by
AdeptSight to carry out the operations defined by a Vision Project.
• The type and number of devices that can be added to a vision project may be restricted by the
type of AdeptSight license.
• Devices are defined relative to a selected camera, and the order in which the devices are
assigned to a camera is important.
• Calibration of all devices can be launched and managed through the System Devices manager.
Using the System Devices Manager
The System Devices Manager is in the lower frame of the Vision Project window that opens when you
start AdeptSight from Adept DeskTop. The interface provides three tabs for the different types of system
devices: Cameras, Belts, and Controllers.
Through the System Devices manager you can:
• Add devices.
• Assign Controllers and Belts to a camera.
• Launch Calibration Wizards for devices.
Figure 12 System Devices Manager Showing the Cameras Tab
Adding System Devices to a Vision Project
The only device absolutely required for a vision project is a camera. A vision project with only a camera
can be created to prototype or test a vision application without being connected to a controller.
The number of devices of any kind is limited by the type of AdeptSight license. For example, conveyor
belts can be added to a vision project only if an active Conveyor Tracking license is present.
The suggested order for adding system devices is:
1. In the Cameras tab: Add the required camera(s). Configure the camera if required, then
calibrate the camera.
2. In the Controllers tab: Add the controllers required for the application.
3. In the Belts tab: Add conveyor belt(s) if required. Belts can only be added if an active Conveyor
Tracking license is present.
4. In the Cameras tab: Assign devices to camera(s).
5. Calibrate the system by launching a Vision-to-Robot calibration wizard.
AdeptSight vision projects are said to be camera-centric. Controllers, robots, and belt devices must be
assigned to a specific camera. Any conveyor belt devices must be assigned to a camera before assigning
robot devices to the camera.
System devices must be assigned to a camera in the following order:
1. Assign required conveyor belt(s) to the selected camera (requires a Conveyor Tracking license).
2. Assign required controller(s) to the selected camera.
The System Devices Toolbar
Icons for managing system devices and calibration are available from the System Devices toolbar;
additional status icons appear in the System Devices list.
Edit Camera Properties
Opens the properties window for the selected camera. If the selected
camera is an emulation device, this icon opens the Emulation Properties
dialog.
Live Display
Displays the live images provided by the selected camera. This is useful
for visualizing the effect of camera settings, such as brightness, focus,
aperture, white balance, etc.
Add Camera
Opens the Add a Camera dialog, allowing you to select and name the
camera to add.
Remove Camera
Removes the selected camera from the vision project.
Add Belt
In the Belts tab: Adds a belt to the list.
In the Cameras tab: Opens the Select a Belt dialog, allowing you to
select and assign a conveyor belt to a camera.
Remove Belt
Removes the selected belt from the vision project.
Add Robot
Opens the Select a Robot dialog, allowing you to select and assign a
robot to a camera.
Remove Robot
Removes the selected robot from the camera.
Add Controller
Opens a dialog that allows you to select and add a controller to the
devices list or to assign a controller to a selected conveyor belt.
Remove Controller
Removes the selected controller from the vision project.
Calibrate Camera
Launches the 2D Vision Calibration Wizard, to calibrate the camera for
perspective and lens distortion.
Calibrate Color Camera
Launches the Color Calibration Wizard, to calibrate the colors rendered
by a color camera.
Calibrate Vision-to-Robot
Launches the Vision-to-Robot Calibration Wizard, which enables you to
calibrate the devices that are set up for your camera: cameras, robots,
controllers, and conveyor belts.
"Not Calibrated"
status icon
The device is not calibrated.
"Calibrated" status
icon
The device is calibrated.
"Color Not
Calibrated" status
icon
The camera color is not calibrated.
"Color Calibrated"
status icon
The camera color is calibrated for color.
Connection state icons for system devices
Green: The device is connected.
Red: The device is not connected.
Related Topics
Setting Up System Devices
Calibrating the Camera
Calibrating System Devices
Using the Sequence Manager
Setting Up System Devices
The following explains the basic steps for adding devices to a vision application. Depending on your
system, you may need to add additional robots, controllers, cameras, and conveyor belts.
• AdeptSight vision applications are camera-centric. The system devices, and their calibration in
the vision system, are defined with respect to a selected camera.
• Before configuring the system you must have at least one camera present. If you do not have
a camera, you can use the Emulation device to simulate camera input.
You can create and test vision sequences using only a camera and
then configure system devices later.
Basic Procedure for Setting Up System Devices
The following describes the typical order of actions for setting up the devices required by a vision
application.
To set up system devices follow this sequence:
1. Add required camera(s). Once cameras are detected and added you can create and test vision
sequences without being connected to a controller.
2. Calibrate the camera(s).
3. Add the controllers required for the application.
4. Add conveyor belt(s) if required. Belts can only be added if an active Conveyor Tracking license
is present.
5. Assign conveyor belt(s) to the camera.
6. Assign robot(s) to the camera.
In conveyor-tracking applications, belt devices MUST be assigned to a camera before the robots
are assigned to the camera.
7. Calibrate the system by launching a Vision-to-Robot calibration wizard.
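Because vision projects are camera-centric, the assignment order in steps 5 and 6 matters: belts must be assigned to a camera before robots. The sketch below illustrates only this ordering rule; the class and method names are hypothetical and do not come from the AdeptSight API:

```python
# Sketch of the camera-centric device-assignment rule
# (hypothetical names, not the AdeptSight API).
class CameraSetup:
    def __init__(self, name, conveyor_tracking_license=False):
        self.name = name
        self.license = conveyor_tracking_license
        self.belts = []
        self.robots = []

    def assign_belt(self, belt):
        if not self.license:
            # Belts can only be added with an active Conveyor Tracking license.
            raise RuntimeError("Belts require a Conveyor Tracking license")
        if self.robots:
            # In conveyor-tracking applications, belt devices MUST be
            # assigned before any robot is assigned to the camera.
            raise RuntimeError("Assign belts to the camera before robots")
        self.belts.append(belt)

    def assign_robot(self, robot):
        self.robots.append(robot)

cam = CameraSetup("Camera 1", conveyor_tracking_license=True)
cam.assign_belt("Belt 1")    # correct order: belt first...
cam.assign_robot("Robot 1")  # ...then robot
```

Reversing the calls (robot first, then belt) would violate the rule stated above, which the sketch models as an error.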
Adding a Camera
If you are using a single camera that is correctly installed, AdeptSight automatically detects the
camera by default and displays it in the list of devices, under the Cameras tab. If the camera has been
deleted from the list, or if you wish to add additional cameras, follow the procedure below.
To add a camera:
1. In the System Devices manager, select the Cameras tab. Cameras detected by the system, as
well as an Emulation device, may already be present in the list.
2. Click the 'Add Camera' icon. This opens the Add a camera dialog shown in Figure 13.
3. Enter a Name that will represent the camera.
4. Select a camera from the Device drop-down list. You can also add an Emulation device.
5. Click OK to add the camera and return to the System Devices manager. The added camera
will now appear in the Device list. A warning symbol beside the camera indicates that the
camera is not calibrated. See Calibrating the Camera for more details.
Figure 13 Adding a Camera to the System Devices Manager
To remove a camera:
1. In the System Devices Manager, select the Camera tab.
2. In the Device list, select the camera you want to remove.
3. Click the Remove Camera icon.
Adding a Controller
If you are using AdeptSight from Adept DeskTop, a controller is automatically added to the vision
project.
If you are using AdeptSight as a standalone application (outside Adept DeskTop) you must add a
controller to the System Devices manager. Depending on your license, you may be able to add multiple
controllers to the application.
When the required controller is present and connected, you can assign the robot(s) associated to the
controller to a camera.
To add a controller in AdeptSight/Adept Desktop:
1. In the System Devices manager, select the Controllers tab.
2. A controller already appears in the list. The type of controller depends on the controller that
was specified when you opened the AdeptSight control in Adept DeskTop.
In the State column, a green icon indicates an open connection to the controller. A red icon
indicates a closed connection.
3. To connect to the controller, go to the Adept DeskTop menu and select File > Connect.
4. Connect to the controller. Refer to Adept DeskTop online help if needed.
To add a controller in a standalone AdeptSight application:
1. In the System Devices manager, select the Controllers tab.
2. Click the Add Controller icon. This opens the Add Controller dialog shown in Figure 14.
3. Click checkboxes to select the required controllers. When a controller is selected in the list, you
can connect to the controller by clicking the Connect button.
4. Use the Rescan button to detect available controllers on the network.
5. Click OK to add the selected controllers and return to the System Devices manager.
The added controller appears in the Device list. A warning symbol beside the camera indicates
that the vision system (vision-to-robot) is not calibrated.
Figure 14 Adding a Controller to the Vision Application
Connecting to a Controller
If you are using AdeptSight within Adept DeskTop, you must connect to the controller through the Adept
DeskTop interface.
To connect to a controller in Adept DeskTop:
1. In the Adept DeskTop menu, select File > Connect.
2. Follow the procedure for connecting to the required controller. For help on this subject, refer to
the Adept DeskTop online help.
If you are using AdeptSight outside Adept DeskTop, you can directly connect to the controller from
AdeptSight.
To connect to a controller in a standalone AdeptSight application:
1. When you add a controller, in the Controller Selection form, click Connect. See Figure 14.
2. Alternatively, in the Controllers or Cameras tab of the Vision Project window, click on the
State icon to connect or disconnect a selected controller.
Figure 15 Connecting to a Controller from the System Devices Manager
Assigning a Robot to Camera
The robot(s) required for a vision project must be assigned to the camera that provides the images for
the application. To assign a robot, the controller for the robot must be present and the controller must
be connected.
In a conveyor tracking application, any belt device must be
assigned to the camera BEFORE any robots are assigned to the
camera.
To assign a robot to a camera:
1. In the Cameras tab, select the required camera.
2. In the toolbar, click the 'Add Robot' icon.
3. In the Robot Selection dialog, select the required robot.
If no controller is present, close the dialog and add a controller. If no robot is present, the
controller is probably disconnected, or no robot is available for the selected controller.
Adding a Belt Device
For conveyor tracking applications, one or more belt devices must first be added to the vision project.
Next, each belt must be assigned to a camera.
• Conveyor tracking is not supported on Cobra i-series robots.
• Belt devices will only function if a valid Conveyor Tracking license is present on the system.
Outside of AdeptSight you must also configure V+ to define which signal will latch the encoder, using the
config_c utility.
Figure 16 Belt and its Associated Controller in the System Devices Manager
To add a belt device:
1. In the System Devices manager, select the Belts tab.
2. In the toolbar, click the 'Add Belt' icon. A belt is added to the list. The first belt is automatically
named Belt1.
3. Select the belt in the list and click the 'Add Controller' icon.
4. In the Select a Controller dialog, select the appropriate controller for the selected belt.
5. Click OK. The controller associated to the belt now appears in the list, as shown in Figure 16.
6. Verify the encoder (belt tracker) number. If it is incorrect, double-click in the Encoder column
and select or type in the correct number. The encoder number corresponds to the device that
sends the belt tracking data to the selected controller.
The encoder value depends on the configuration of the connection
between the controller and the belt as well as the encoder signals
configured in the V+ Configuration Utility: config_c utility.
Please refer to the CX Controller User Guide and the camera
documentation for more information on connecting/wiring the
controller, the belt, and the camera.
Assigning a Belt Device to Camera
Any belt device required for a vision project must be assigned to the camera that provides the images
for the application.
NOTE: In a conveyor tracking application, any belt device must be assigned to the camera BEFORE
any robots are assigned to the camera.
To assign a belt device to a camera:
1. In the Cameras tab, select the required camera.
2. In the toolbar, click the 'Add Belt' icon.
3. In the Belt Selection dialog, select the required belt.
4. Click OK. The belt should appear under the camera, as illustrated in Figure 17.
Figure 17 Belt Device Assigned to a Camera
Related Topics
Using the System Devices Manager
Calibrating System Devices
Using the Sequence Manager
Configuring the Camera
Configuring the Camera
Camera parameters can be configured within AdeptSight. You can access camera parameters from the
Vision Project window.
Accessing Camera Properties
You can access camera properties from the Vision Project window or from the Sequence Editor.
To access camera properties from the Vision Project window:
1. In the Vision Project manager, select the Cameras tab.
2. In the Devices list, select a camera.
3. Click the 'Camera Properties' icon to open the camera properties window. Figure 18 shows the
properties window for a Basler camera.
If you have added an Acquire Image tool to the sequence, you can access the camera properties from
the tool interface, in the Sequence Editor window.
To access camera properties from the Sequence Editor:
1. In an Acquire Image tool interface, select the required camera from the drop-down list.
2. Click the 'Camera Properties' icon.
Figure 18 Basler Camera Properties Window
Configuring Camera Properties
For more information on configuring camera properties, consult the documentation for the camera.
For information on configuring the properties for an Emulation device, see Using the Emulation Device.
Saving and Importing Camera Properties
When you save a Vision Project, the camera properties data is saved in the project file. Camera
properties can also be saved separately to file, and reloaded to a camera in the vision project.
Import camera properties into a camera only if the camera is the identical model as the camera
from which the properties were saved.
To save camera properties:
1. In the System Devices manager, select the Camera tab.
2. In the list of Devices, select the camera from which you want to save the properties.
3. Right-click on the name of the device. This displays the context menu.
4. From the context menu, select Camera Properties > Export.
5. Specify the name and destination for the camera properties file and save the file. Files are
saved with an hscam extension.
Related Topics
Acquiring Images in AdeptSight
Using the Emulation Device
Calibrating the Camera
In AdeptSight you should first calibrate the camera before you create any object models with the Locator
tool. The basic camera calibration is a "spatial" calibration that corrects for perspective and distortion
and defines the relationship between the size of camera pixels and real-world dimensions.
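In the simplest case (no perspective or lens distortion), the pixel-to-millimeter relationship that this calibration establishes reduces to a linear 2D mapping. The sketch below uses a made-up 0.25 mm/pixel scale and offset for illustration only; it is not real calibration output.

```python
import numpy as np

# Minimal pixel-to-world mapping with no lens distortion:
# world = A @ pixel + t. The 0.25 mm/pixel scale and the offset are
# made-up example values, not real calibration output.
A = np.array([[0.25, 0.0],     # 0.25 mm per pixel along x
              [0.0, 0.25]])    # 0.25 mm per pixel along y
t = np.array([-80.0, -60.0])   # place world origin at image center

def pixel_to_world(px, py):
    """Convert a pixel coordinate to real-world millimeters."""
    return A @ np.array([px, py]) + t

center_mm = pixel_to_world(320, 240)   # center of a 640x480 image
```

A real calibration additionally models perspective and lens distortion, which is why the wizard uses a dot-grid target rather than a single scale factor.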
This calibration can be carried out through the 2D Vision Calibration wizard or through a Vision-to-Robot
calibration wizard.
• Calibrating the camera through the 2D Vision Calibration wizard is the recommended method
if you need high accuracy for part handling and inspection. See Vision Calibration.
• Calibrating the camera through the Vision-to-Robot Calibration can provide adequate accuracy
for most pick-and-place applications. See Vision-to-Robot Calibrations. However, if your
application requires very high accuracy for part finding and part inspection, you should
calibrate the camera separately, through the 2D Vision Calibration Wizard, before carrying out
the Vision-to-Robot calibration.
• Applications that require the use and processing of color images should be calibrated through
the Color Calibration Wizard.
• Calibrations can be saved to file. See Saving and Importing Camera Calibrations.
Calibrating the vision (camera) separately, before the Vision-to-Robot calibration, is
necessary if there is significant lens distortion; otherwise, the lens distortion
may cause the Vision-to-Robot calibration to fail.
Before Calibrating
Before starting this calibration, make sure that the entire area covered by the camera field of view is
within the robot’s work range.
The camera calibration requires a grid-of-dots target. For demonstration or learning purposes you can
print and use one of the sample dot targets that are provided in the AdeptSight installation, in the
AdeptSight 2.0/Tutorials/Calibration folder.
The sample target is intended for teaching purposes only; it is not a genuine, accurate
vision target.
Launching Camera Calibration
All calibration wizards are launched from the System Devices manager interface.
To start a camera calibration wizard:
1. In the System Devices manager, select the Camera tab.
2. In the list of Devices, select the camera you want to calibrate.
3. Click the icon for the required calibration: 'Calibrate Camera' or 'Calibrate Color'.
4. Alternatively, right-click on the camera in the Device list and select the required calibration
from the context menu, as illustrated in Figure 19.
5. The Calibration Wizard opens, beginning the calibration process.
6. Follow the instructions in the wizard. If you need help during the calibration process, click
Help in the Calibration Wizard window.
Figure 19 Starting a Camera Calibration from the Vision Manager
Viewing the Camera Calibration Status
Icons in the Device list indicate, at a glance, if a device has been calibrated. These icons are called
calibration status icons. Figure 20 illustrates status icons in the Cameras tab.
The status icons are:
The device has not been calibrated, either through the 2D Vision
Calibration or through a vision-to-robot calibration.
The device has been calibrated, either through the 2D Vision
Calibration or through a vision-to-robot calibration.
The color has not been calibrated.
The color has been calibrated through the Color Calibration
Wizard.
Figure 20 Calibration Status Icons in the System Devices List
Saving and Importing Camera Calibrations
When you save a Vision Project, the calibration data of the system devices is saved in the project file. If
you have not made changes to the robot and camera installation, then you do not have to recalibrate the
system when you load an existing project.
Calibrations can also be saved separately to file, and reloaded to a camera in the vision project.
Import calibrations into a camera only if:
• The camera is the identical model as the camera with which the calibration
was created, and
• The camera is in the same physical position in the environment as it was at
the time the calibration was created.
To save a camera calibration:
1. In the System Devices manager, select the Camera tab.
2. In the list of Devices, select the camera containing the calibration that you want to save.
3. Right-click on the name of the device. This displays the context menu.
4. From the menu, select Camera Calibration > Export, or Color Calibration > Export.
5. Specify the destination for the calibration file and save the file. Files are saved with an hscal
extension.
Calibrating System Devices
AdeptSight's built-in Calibration Wizards enable you to calibrate the vision system to ensure accurate
performance and results. Calibration Wizards walk you through the steps required to calibrate:
• The camera: Vision Calibration.
• All the system devices used by a vision application: Vision-to-Robot Calibrations
Vision Calibration
Vision calibration, also called camera calibration, calibrates the camera to real world coordinates. This
calibration corrects for image errors to ensure the accuracy of your application.
The 2D Vision Calibration Wizard will guide and assist you through the steps required for the vision
(camera) calibration.
• For optimal accuracy, you should calibrate the camera with the 2D Vision Calibration wizard
before carrying out the Vision-to-Robot calibration.
• The vision system can optionally be calibrated through one of the Vision-to-Robot calibration
wizards. However, you will obtain better accuracy if you initially calibrate the vision before the
Vision-to-Robot calibration.
A vision-to-robot calibration does NOT compensate for lens distortion. If there is
significant lens distortion, the vision-to-robot calibration may fail. This can be
solved by calibrating the vision with the 2D Vision Calibration Wizard BEFORE
carrying out the vision-to-robot calibration.
See Calibrating the Camera for details on the 2D Vision Calibration.
Robot Calibration
For instructions on calibrating the robot, refer to the User’s Guide for the robot.
If the robot has not been calibrated, the robot will be calibrated during the Vision-to-Robot calibration.
Vision-to-Robot Calibrations
AdeptSight 'Vision-to-Robot' calibrations calibrate the devices associated to a camera, to ensure that a
robot will accurately move to parts that are seen by the camera.
AdeptSight provides Calibration Wizards adapted to various setups, depending on the devices that are
associated to a camera. For example, if the setup includes a belt, the wizard includes the steps
needed to calibrate the belt.
All calibrations start with the Calibration Interview wizard that will determine which calibration scenario
is required for your application.
The basic stages of Vision-to-Robot calibration are:
• Determine the correct calibration scenario for your environment.
• Verify if the robot is calibrated.
• Verify if the camera is calibrated.
• Set robot parameters.
• Set an 'outside field of view point'.
• Create Object Model for the calibration.
• Execute calibration. Depending on the calibration scenario, the wizard may require that you
move the robot to different points in the workspace.
• Test the calibration.
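Conceptually, the outcome of the stages above is a transform that converts a location found by the vision system into robot coordinates. The sketch below applies a hypothetical 2D rigid transform with made-up values; in practice the transform is produced by the calibration wizard, not hand-entered.

```python
import math

# Hypothetical result of a vision-to-robot calibration: a 2D rigid
# transform (rotation theta, translation tx, ty) from the vision frame
# to the robot frame. All values are made up for illustration.
theta = math.radians(90.0)
tx, ty = 100.0, 50.0

def vision_to_robot(x, y):
    """Map a point located by vision (mm) into robot coordinates."""
    xr = math.cos(theta) * x - math.sin(theta) * y + tx
    yr = math.sin(theta) * x + math.cos(theta) * y + ty
    return xr, yr

# A part found at (10, 0) mm in the vision frame:
part_in_robot = vision_to_robot(10.0, 0.0)   # approximately (100.0, 60.0)
```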
When do I Calibrate?
The Vision-to-Robot calibration needs to be carried out once for a specific setup. If you make changes to
the setup, more specifically to the robot or camera position, parameters or configuration, then you must
recalibrate the new setup.
How do I Start the Calibration?
To launch the Vision-to-Robot calibration:
1. In the System Devices manager, select the Cameras tab.
2. In the list of devices, select the camera that will be calibrated.
3. Select the Camera Calibration icon as shown in Figure 21.
4. The Interview Wizard opens, beginning the Vision-to-Robot calibration process.
5. Follow the instructions in the wizard.
Figure 21 Launching the Vision-to-Robot Calibration
Saving and Importing Vision-To-Robot Calibrations
When you save a Vision Project, the calibration data of the system devices is saved in the project file. If
you have not made changes to the robot and camera installation, then you do not have to recalibrate the
system when you load an existing project.
Calibrations can also be saved separately to file, and reloaded to a device in the vision project.
Import calibrations into a device and vision project only if you are sure that this
calibration is valid.
Otherwise this may cause hazardous and unexpected behavior of devices in the
workcell, which may lead to equipment damage or bodily injury.
To save a vision-to-robot calibration:
1. In the System Devices manager, select the Camera tab.
2. In the list of Devices, select the robot or belt for which you want to save the calibration.
3. Right-click on the name of the device. This displays the context menu.
4. From the menu, select Vision to Robot Calibration > Export, or Belt to Robot Calibration
> Export.
5. Specify the destination for the calibration file and save the file. Files are saved with an hscal
extension.
Related Topics
Calibrating the Camera
Using the Sequence Editor
A sequence is a series of vision processes that are executed by tools. These tools are added and
configured within the Sequence Editor.
Before editing a sequence, you should first calibrate the system:
• Calibrate the camera: This will ensure that object models are accurate for part-finding. The
camera can be calibrated separately or calibrated during a Vision-to-Robot calibration.
• Calibrate the vision and the robot: AdeptSight Vision-to-Robot calibration wizards will guide
you through the calibration that is adapted to your setup.
To open the Sequence Editor:
1. In the Sequence Manager, select a sequence in the list and click the Edit Sequence icon.
2. The Sequence Editor appears as shown in Figure 22.
3. If this is a new, unedited sequence, there are no tools in the Process Manager frame and 'Drop
Tools Here' appears in the frame.
Figure 22 AdeptSight Sequence Editor Window and its Components
Adding Tools to the Sequence
Vision tools are added, managed, and configured in the Process Manager area of the Sequence Editor.
The order of the tools is important: tools in a vision sequence are executed in order when you execute the
sequence.
• The first tool should be an Acquire Image tool, to provide the input images for the other vision
tools.
• Tools can receive input only from tools that are above (before) them in the sequence.
• For example, a frame-based tool (that is positioned relative to a frame) must be placed below
the tool that is providing the frame of reference. See Frame-Based tool positioning.
To add a tool to the vision sequence:
1. In the Toolbox, select a tool and drag it into the Process Manager area.
2. If the toolbox is not visible, click the Toolbox icon:
3. Alternatively you can right-click in the Process Manager area to select a tool from the context
menu, as shown in Figure 23.
Figure 23 Adding Vision Tools to a Sequence
The Sequence Editor Toolbar
The functions available from the toolbar are:
Execute Sequence
Runs the vision sequence. Depending on the state of the Continuous
Loop icon, Execute Sequence runs a single iteration, or runs the
sequence continuously until stopped.
Stop Sequence
Stops the execution of the vision sequence.
Continuous Loop
Runs the vision sequence in loop mode (continuous execution), until
the execution is stopped by the Stop Sequence or Reset Sequence
icon.
Reset Sequence
Stops the execution of the vision sequence and resets the Overlap
Tool, the Communication Tool, and the Acquire Image tool.
Toolbox
Shows/Hides the toolbox. Tools can be added to the sequence by
dragging them from the toolbox to the Process Manager area. Tools
can also be added from the context menu that is displayed by right-clicking in the Process Manager area.
Help
Opens the AdeptSight online help.
Collapse All
Collapses all tool interfaces in the Sequence Editor.
Executing Vision Sequences
You can execute a vision sequence from the Sequence Editor. This executes only the sequence of tools
that is in the Sequence Editor. A sequence can also be executed from the Vision Project interface;
however, executing the Vision Project also executes all other sequences that may be in the project.
Executing the sequence executes all the tools in the order in which they appear in the sequence.
To execute the sequence in continuous loop mode:
1. In the toolbar, click the 'Continuous Loop' icon to enable continuous running of the
application:
2. Click the 'Execute Sequence' icon in the toolbar:
3. To stop the execution of the sequence, click the 'Stop Sequence' icon:
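The single-iteration versus continuous-loop behavior can be sketched as follows. Names are illustrative only; in AdeptSight the loop is controlled by the toolbar icons, and `stop_after` stands in for clicking 'Stop Sequence'.

```python
import itertools

# Sketch of single-shot vs. continuous-loop execution. The stop_after
# argument stands in for clicking the 'Stop Sequence' icon.
def execute_sequence(tools, continuous=False, stop_after=None):
    iterations = 0
    runs = itertools.count() if continuous else range(1)
    for _ in runs:
        for tool in tools:             # tools run in sequence order
            tool()
        iterations += 1
        if stop_after is not None and iterations >= stop_after:
            break                      # 'Stop Sequence' clicked
    return iterations

log = []
tools = [lambda: log.append("acquire"), lambda: log.append("locate")]
single = execute_sequence(tools)                            # one pass
looped = execute_sequence(tools, continuous=True, stop_after=3)
```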
Keyboard shortcuts in the Sequence Editor
Table 1 presents the keyboard shortcuts that you can use in the Sequence Editor.
Table 1 Keyboard Shortcuts in the Sequence Editor
Key
Action
F2
Rename the tool.
Delete
If no tool selected: Deletes all tools.
If tool selected: Deletes selected tool.
In Model Edition: Deletes selected feature.
Esc
Deselects single tool (selects all) to show results of all tools.
F5
If tool selected: Executes selected tool.
If no tool selected: Executes entire sequence.
Page Up / Page Down
Scrolls up/down through the Process Manager area (tool area).
Scrolling is done in jumps: 6 jumps scroll through the entire list.
Home
Go to top of sequence - to first tool in the list.
End
Go to bottom of sequence - to last tool in the list.
Arrow up
Move tool selection up (select previous tool.)
Arrow down
Move tool selection down (select next tool in the sequence.)
Arrow left / Arrow right
Arrow left: Same action as Arrow up.
Arrow right: Same action as Arrow down.
Acquiring Images in AdeptSight
In an AdeptSight application, the Acquire Image Tool provides images that will be used by other vision
tools in the sequence. This tool should always be the first process in any vision sequence.
Each Acquire Image tool acquires images from a specified camera, or from an Emulation device, which
simulates acquiring images from a camera. Emulation allows you to use stored images in the
same manner as images being provided by a live camera.
• See Using the Acquire Image Tool for information on configuring and using the Acquire Image
tool in AdeptSight applications.
• Any number of Acquire Image tools can be added to a vision sequence. For example, in
multiple-camera applications, an Acquire Image tool can be created for each camera.
Figure 24 Acquire Image Tool Displaying Live Images from a Camera
Viewing Images Provided by the Acquire Image Tool
Images provided by the camera or an Emulation device appear in the display area of the Sequence
Editor window. There are two modes for viewing images:
• Live Mode displays the live, continuous images that are being acquired by the camera.
• Preview Mode displays a single static image at a time. Each time you click the 'Preview
Image' icon, a new image is shown in the display.
Executing the Acquire Images Tool
When an Acquire Image tool is executed it retrieves the images taken by the camera device and makes
the images available to other tools in a vision sequence. For example, in Figure 24, the Locator tool is
configured to receive input images from the Acquire Image tool.
Related Topics
Using the Acquire Image Tool
Displaying Images
Using the Emulation Device
Using the Emulation Device
An Emulation device acts as a virtual camera to simulate image acquisition, using a database of
images, referred to here as an emulation file. The Acquire Image tool can use images provided by an
Emulation device in the same manner as it uses images provided by a live camera.
• An Emulation device can be particularly useful for creating models, setting up, and testing a
new application, or analyzing and verifying performance with images from a real application.
• To familiarize yourself with the AdeptSight software without installing a camera setup, use
one of the image databases provided with the examples, in the AdeptSight program folder.
• Emulation devices are not counted in the limit of cameras allowed by an AdeptSight license.
Any number of Emulation devices can be added to an AdeptSight application.
Figure 25 Emulation Properties Window
Using the Emulation Device Database
An Emulation file is typically built by grabbing images of the same objects with various poses or
orientations in the workspace. Emulation files allow you to develop and test an application away
from the factory or work environment.
Use the following commands and options to add or remove images from the emulation database.
Delete
Delete removes the currently selected image from the emulation database. Images can be deleted only
one at a time. To delete all images, use the Delete all command from the context menu.
Load
Load clears the existing database and loads a selected database file (*.hdb). Image databases can be
created by importing images with the Import command and saving the images with the Save command.
Save
The Save command saves all the images currently in the emulation database to an image file with an
hdb extension.
Import
Import allows you to add image files into the current database of images. Valid image formats are png,
jpg, tiff, and bmp.
Export
Export allows you to export the currently selected image to file. Supported image formats are png,
jpg, tiff, and bmp.
Set as Current
Set as Current sets the currently selected image as the next image that will be acquired by the
Acquire Image tool. The current image is indicated by a yellow arrow.
Delete All
Delete All removes all images currently in the emulation database. This is useful if you want to start
building a new image database from acquired images.
Append from Camera
Append allows you to add images from a currently detected camera to the emulation database. Each
Append command adds the last image grabbed by the camera to the database.
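The commands above amount to managing an ordered list of images with a pointer to the next image to acquire. The sketch below mimics that behavior with hypothetical class and method names; it is not the real .hdb format or the AdeptSight API.

```python
# Hypothetical sketch of an emulation image database supporting the
# commands described above. Not the real .hdb format or AdeptSight API.
class EmulationDB:
    def __init__(self):
        self.images = []    # ordered list of stored images
        self.current = 0    # index of the next image to acquire

    def import_image(self, path):          # 'Import'
        self.images.append(path)

    def append_from_camera(self, frame):   # 'Append from Camera'
        self.images.append(frame)          # always added to the end

    def set_as_current(self, index):       # 'Set as Current'
        self.current = index

    def delete(self, index):               # 'Delete' (one at a time)
        del self.images[index]

    def delete_all(self):                  # 'Delete All'
        self.images.clear()

    def acquire(self):
        """Act like a camera: return the current image, then advance."""
        img = self.images[self.current]
        self.current = (self.current + 1) % len(self.images)
        return img

db = EmulationDB()
db.import_image("part_pose1.png")
db.import_image("part_pose2.png")
db.set_as_current(1)             # yellow arrow on the second image
```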
Building an Emulation File
An emulation file can be built by:
• Importing images from external files.
• Adding images taken by a camera that is currently connected and detected by AdeptSight.
For example, the emulation file illustrated in Figure 25 consists of a single object with various poses.
Lighting conditions were set up to obtain well-contrasted and strongly detailed images. The images were
acquired by a connected camera and appended to the emulation file.
To grab images from a camera:
1. Click the collapse icon to show Append from Camera settings.
2. Select the camera that will provide the image.
3. Click Append. This will add the latest image taken by the camera to the end of the list of
images. Figure 26 shows a live image being appended to the list of emulation images.
4. Repeat steps 2 and 3 as needed to grab the required number of images.
5. If needed, you can delete selected images from the list by clicking the Delete button.
To add images from image files:
1. Click the properties icon to display the dropdown menu.
2. In the drop-down menu, select Import.
3. In the Open dialog, browse to the folder that contains the images to add.
4. Select one or more images and click Open. The selected images are added to the end of the list
of images.
5. Repeat steps 2 to 4 as needed to continue adding files to the image database.
6. If needed, you can delete selected images from the list by clicking the Delete button.
To save the image database:
1. Click the properties icon to display the dropdown menu.
2. In the drop-down menu, select Save.
3. In the Save As dialog, select destination and enter a filename for the database and click Save.
The file will be saved with an hdb extension. You can later load this file with the Load command
from the drop-down menu.
Figure 26 Camera image being appended to the emulation images
Related Topics
Acquiring Images in AdeptSight
Displaying Images
The Sequence Editor display is a multipurpose display that:
• Displays the images provided by the Acquire Image tool.
• Displays live input from the camera, as continuous images or as a single, static image.
• Displays a visual representation of tool results.
• Provides an interactive interface for editing or configuring certain tools. For example: building
and editing models, selecting color areas for the Color Matching editor, building and editing
patterns, etc.
• Allows the user to manually position the region of interest of tools (Location parameters).
Figure 27 Using the AdeptSight Display Interface
Using the Display Interface
In any mode, the display toolbar and context menu can assist in viewing images and working with
display objects, such as bounding boxes.
The Display Toolbar
The functions available from the toolbar are:
Calibrated
Toggles between calibrated and non-calibrated display.
In calibrated mode, units in the display are expressed in mm. In non-calibrated mode, units are expressed in pixels.
Zoom In
Zooms the display 2x the current view.
Zoom Out
Zooms the display 0.5x the current view.
Zoom Selection
Provides a dropdown list of zoom factors for the display.
Zoom
In this mode, each click in the display zooms the display.
You can also drag an area in the display to zoom the image to the
contents of the dragged area.
Pan
In this mode you can move in the image without having to use the
scroll bars.
Selection
In this mode you can select and interact with objects in the display.
The Display Status Bar
The display status bar provides image data at the position of the cursor (mouse). The information displayed is:
• The X-Y coordinates of the current position of the cursor. If the display is in calibrated mode, the units are in millimeters; if the display is in uncalibrated mode, the units are in pixels.
• The greylevel value at the current cursor position. This information can be useful when configuring Location tools and Inspection tools.
Overview of AdeptSight Tools
AdeptSight provides an extensive set of vision tools for basic to complex applications.
Tools are added to vision applications in an arrangement called a sequence. A vision application can contain any number of sequences. Within a sequence, tools are executed in order. The order of the tools in the sequence is important because the output of a given tool can be used as input by another tool.
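The ordering rule above can be sketched in code. The sketch below is purely illustrative: the `Tool` class and the two example tools are made-up stand-ins, not the AdeptSight API.

```python
# Illustrative sketch of ordered sequence execution (not the AdeptSight API).
# Each tool receives the output of the tool that precedes it in the sequence.

class Tool:
    def __init__(self, name, func):
        self.name = name
        self.func = func          # the tool's processing step

    def execute(self, data):
        return self.func(data)

def run_sequence(tools, image):
    """Execute tools in list order; each output feeds the next tool."""
    data = image
    results = {}
    for tool in tools:
        data = tool.execute(data)
        results[tool.name] = data
    return results

# Hypothetical two-tool sequence: acquire an image, then count bright pixels.
acquire = Tool("Acquire Image", lambda _: [10, 200, 30, 220])   # greylevels
count   = Tool("Count Bright",  lambda img: sum(v > 128 for v in img))

results = run_sequence([acquire, count], None)
print(results["Count Bright"])   # 2 pixels above the threshold
```

Because execution is strictly in list order, a tool placed before its input provider would receive nothing to work on; this is the same constraint AdeptSight imposes on sequence order.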
Image Acquisition
Every sequence starts with an image acquisition tool, which provides the input images for other vision
tools. There can be more than one image acquisition tool in a sequence.
Acquire Image Tool
The Acquire Image tool provides images that are acquired from a
compatible camera, or from a database of images provided by an
Emulation device.
For conveyor-tracking applications, this tool provides latched
acquisition parameters allowing for 'soft' or 'hard' belt tracking, and
vision-on-the-fly position latching.
Motion Tools
Motion tools provide the functionality required for communication between the vision application and the
motion devices: controller, robot, conveyor belt.
Overlap Tool
The Overlap Tool filters instances that have already been found by
the Locator tool, so that the controller and robot do not attempt to
pick/inspect/handle an object more than once.
Communication Tool
The Communication Tool manages and sends instances found by
the Locator tool, to a queue on the controller.
Locator and Finder Tools
The Locator and Finder Tools create a vectorized description of objects, or object features. These tools
are faster, more reliable, and more accurate than grey-scale inspection tools in most situations.
Locator
The Locator finds and locates instances of model-defined objects.
Models characterize object types and are created and edited
through the Locator's Model Editor. The Locator is the ideal frame-provider tool for positioning inspection tools.
Arc Finder
The Arc Finder finds and locates circular features on objects and
returns the coordinates of the center of the arc, the start and end
angles, and the radius.
Line Finder
The Line Finder finds and locates linear features on objects and
returns the line angle and point coordinates.
Point Finder
The Point Finder finds and locates point features on objects and
returns the angle as well as the coordinates of the found point.
Color Tools
Color Matching Tool
The Color Matching tool filters and analyzes areas of specified
color, or color ranges in RGB images.
Image Processing Tools
Image processing tools provide various operations and functions for the analysis and processing of
images.
Image Processing Tool
The Image Processing Tool processes grey-scale images by
applying arithmetic, assignment, logical, filtering, morphological or
histogram operators. Users can define custom filtering operators.
Image Sharpness Tool
The Image Sharpness Tool computes the sharpness of
preponderant edges in a user-defined region of interest.
Image Histogram Tool
The Image Histogram tool computes greylevel statistics within a
user-defined region of interest.
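As an illustration of the kind of statistics such a tool computes over a rectangular region of interest, consider the sketch below. The function and its names are illustrative only, not the AdeptSight implementation.

```python
# Illustrative greylevel statistics over a rectangular region of interest
# (not the AdeptSight implementation).

def roi_histogram_stats(image, x, y, width, height):
    """image: 2D list of greylevels (rows of pixels); ROI given in pixels."""
    pixels = [image[r][c]
              for r in range(y, y + height)
              for c in range(x, x + width)]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return {"min": min(pixels), "max": max(pixels),
            "mean": mean, "stddev": variance ** 0.5}

image = [[0, 0, 0, 0],
         [0, 100, 200, 0],
         [0, 100, 200, 0],
         [0, 0, 0, 0]]
stats = roi_histogram_stats(image, 1, 1, 2, 2)
print(stats["mean"])   # 150.0: only the 2x2 ROI contributes
```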
Sampling Tool
The Sampling Tool is used to extract an area of an image and
output it as a separate image.
Inspection Tools
Inspection tools are commonly used in vision applications to inspect objects and parts, typically found by a
Locator tool. Inspection tools rely on the analysis of pixel information, and do not create vector
descriptions of objects, as do the Locator and Finder tools.
Blob Analyzer
The Blob Analyzer finds and locates blobs, and returns various
results for each blob.
Caliper
The Caliper finds and locates one or more edge pairs and
measures distances between the two edges within each pair.
Arc Caliper
The Arc Caliper finds and locates one or more edge pairs on an
arc-shaped or circular area and measures distances between the
two edges within each pair.
Edge Locator
The Edge Locator finds and locates an edge or a set of edges that
meet user-defined criteria.
Arc Edge Locator
The Arc Edge Locator finds and locates an edge or a set of edges
in an arc-shaped or circular area.
Pattern Locator
The Pattern Locator finds and locates instances of a greyscale
pattern.
Other Tools
Result Inspection Tool
The Results Inspection tool filters results from other tools that
meet specific criteria. Logical operators AND and OR are applied
to a set of conditions that apply to the results of other tools in a
vision sequence.
Frame Builder Tool
The Frame Builder Tool allows the user to create custom
reference frames that can be used in AdeptSight vision
applications.
Using AdeptSight Vision Tools
This section explains basic functionality and concepts that apply to AdeptSight vision tools.
Vision Tool Interface
The tool interface allows the user to configure the tool and execute the tool individually. Figure 28
illustrates the main elements of a vision tool interface.
• All tools are executed when the vision sequence is executed. It is also possible to execute a tool
individually. This is useful for testing the configuration of a tool: you can repeatedly execute
the tool on the same image to view the effect of a change in parameters.
• Tools can also be saved individually and imported into other AdeptSight applications.
• Tool results can be saved to a log file for later analysis.
Figure 28 Elements of a Vision Tool Interface. The tool title bar contains the 'Execute Tool' and collapse icons; a warning icon indicates that the tool failed, for example if an input image or frame is missing. The Frame Input specifies the frame-provider for frame-based positioning, the Location button opens the Location dialog for positioning the tool, a results log can be output for most AdeptSight tools, and Advanced Parameters are specific to each tool.
Adding and Managing Tools in a Vision Sequence
Tools are added to a vision sequence in the Process Manager area of the Sequence Editor.
To add a tool:
1. From the toolbox, select a tool and drag it into the Process Manager (blue) area of the Sequence
Editor. If the toolbox is not visible, click the Toolbox icon in the toolbar.
2. Alternatively, you can right-click in the Process Manager (blue) area and select the tool from
the context menu. See Figure 29.
Figure 29 Adding a tool to the vision sequence (right-click in the Process Manager area to display the context menu)
To delete a tool:
1. Select the tool to remove.
2. Right-click beside the tool name and select Delete Tool from the context menu. See Figure 30.
Figure 30 Deleting a tool in the vision sequence (the selected tool is indicated by blue letters)
To change the order of a tool in the sequence:
1. Select a tool.
2. Drag the tool to the desired position in the sequence. The tool must be positioned below the
tool that provides its Input.
To save a tool:
Tool configurations are saved in the project file: all tool configurations in a sequence are saved when you
save a sequence. Only individual tools can be saved from the Sequence Editor interface (*.hstool);
sequences must be saved from the Sequence Manager interface (*.hsproj).
1. Select the tool to save.
2. Right-click beside the tool name and select Save Tool from the context menu. See Figure 30.
3. The file is saved with an hstool extension.
To load a saved tool into the sequence:
1. Right-click in the Process Manager (blue) area.
2. From the context menu, select Load Tool. See Figure 29.
To change the name of a tool:
1. Double-click on the tool name to enable editing of the name.
2. Alternatively, right-click beside the tool name and select Rename from the context menu. See
Figure 30. Each tool must have a unique name.
To show/hide (collapse) the tool interface:
1. Click the grey arrow icon at the right of the tool to collapse (hide) or show the tool interface.
• When collapsed, only the title bar is visible, containing the tool name, the 'Execute Tool' icon,
and the 'Collapse' icon.
• When expanded, the entire tool interface is displayed.
2. To collapse all tool interfaces, click the 'Collapse All' icon:
Tool Input
Most tools require an input provided by another tool in the sequence. This is the Input parameter. The
tool that provides Input must be "higher" in the sequence than the tool receiving the input. This is
because AdeptSight sequences execute tools in the order in which they appear in the Sequence Editor.
Most tools require Input images, which are provided by an Acquire Image tool, or by other tools that
output images such as the Image Processing tool.
Some tools do not process images, and instead require a frame for the Input parameter. Tools that
provide input frames are called frame-providers. For example, the Frame Builder and the Results
Inspection tools require Input frames.
Many tools have an Input Frame parameter that is used for positioning the tool relative to a frame of
reference provided by another tool in the sequence. Most often, the frame-provider tool is a Locator tool,
but frames can also be provided by any tools that output frame results. See Frame-Provider Tools for
more information.
Positioning the Tool Region of Interest
The area of the image in which the tool carries out its process or action is called the region of interest,
ROI, or area of interest.
A tool can be positioned to execute on the entire image. However, most inspection tools are usually
applied to a specific part of the image, or to a specific area relative to an object.
• To position a tool relative to another tool, select the tool that will provide the frame of
reference, in the Frame Input dropdown box. For more details, see Using Frame-Based
Positioning.
• To configure a tool region of interest, click the Location button. This will allow you to define
the tool region of interest in the display and in the Location dialog.
Figure 31 Tools can be positioned in the Location dialog and in the display (drag and resize the bounding box to position the tool)
Results Log
Enabling Results Log allows you to save the results of a tool to a file.
The results log saves results of all tool executions, whether the tool is executed as part of a sequence, or
executed individually. See Saving Tool Results to a Log File for more details.
Executing Tools
When a sequence is executed, all tools in the sequence are executed in order. However, tools can also be
executed individually, to assist in tool configuration.
• To execute a tool, click the 'Execute Tool' icon at right of the tool name:
• Executing the tool individually executes the tool process only, not the sequence.
Viewing Tool Results
Tool results are displayed in the results area of the Sequence Editor and graphically represented in the
display. See Viewing Tool Results for more details.
Related Topics
Viewing Tool Results
Saving Tool Results to a Log File
Overview of AdeptSight Tools
Using Frame-Based Positioning
Location parameters define where the region of interest of the tool is positioned. The region of interest
of the tool is the area in an image in which the tool carries out its process. There are two modes for
positioning a tool: frame-based and image-based.
Frame-Based Positioning
With frame-based positioning, the tool is positioned relative to a frame result provided by another tool,
called the frame-provider.
This type of positioning is dynamic: Each time the sequence is executed, the tool is repositioned, relative
to the frame results output by the frame-provider. If no instance of the frame-provider is present in an
image, no instance of the tool is applied.
Image-Based Positioning
In this mode the tool region of interest is always placed on the same area of the image. This type of
positioning is static: the tool remains positioned on a fixed portion of the image.
When to Use Frame-Based Positioning
The frame-based positioning mode is ideal for applications that require the inspection of randomly
oriented parts.
Frame-based positioning also reduces the amount of code needed to carry out the inspection task:
frame-based tools are automatically positioned relative to the frame-provider tool each time a new
image is acquired.
The Locator tool is the frame-provider recommended for most applications. See Frame-Provider Tools
for a list of other tools that can be used as frame-providers.
• A frame is a result output by a tool. A frame consists of an X,Y position and an orientation
(X, Y, Theta).
• The tool being positioned relative to another tool is said to be frame-based. The tool that
provides the reference frame is called the frame-provider.
• Figure 32 shows a Caliper tool that was automatically applied to each instance of an object
found by the Locator tool, through frame-based positioning.
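The repositioning described above is a 2D rigid transform of the tool's configured offset by the frame (X, Y, Theta). The sketch below uses illustrative names only; AdeptSight performs this repositioning internally.

```python
import math

# Illustrative sketch of frame-based positioning (not the AdeptSight API).
# A frame result is (x, y, theta); a tool's region of interest, defined at
# an offset from that frame, is rotated and translated along with it.

def position_tool(frame, tool_offset):
    """Map a tool offset (dx, dy), expressed in the frame's own coordinate
    system, into image coordinates using the frame (x, y, theta_deg)."""
    fx, fy, theta = frame
    dx, dy = tool_offset
    t = math.radians(theta)
    x = fx + dx * math.cos(t) - dy * math.sin(t)
    y = fy + dx * math.sin(t) + dy * math.cos(t)
    return (x, y, theta)   # the tool inherits the frame's orientation

# Object instance found at (100, 50), rotated 90 degrees; the tool sits
# 10 units ahead of the object origin along the object's own X axis.
x, y, theta = position_tool((100.0, 50.0, 90.0), (10.0, 0.0))
print(round(x, 3), round(y, 3))   # 100.0 60.0
```

Run once per frame result, this is why a frame-based tool lands on every found instance, however the part is oriented, and why no tool instance is applied when the frame-provider finds nothing.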
Figure 32 Example of Frame-Based Positioning
Frame-Provider Tools
A frame-provider is the tool that provides the Frame Input to a frame-based tool.
The ideal frame-provider is the Locator tool, which outputs frames called instances. Other, but not all,
AdeptSight tools also output frames that can be used to position frame-based tools. Table 2 provides
information on using other AdeptSight tools as frame-providers.
Depending on the tool, an output frame can contain many instances. An instance is an occurrence of an
object or a result. For example, a Blob Analyzer tool can find many blobs in its region of interest. Each
blob occurrence is an instance. Each of these instances can subsequently be used as a frame for other
tools.
Table 2 Tools that can be Frame-Providers

Locator
Comment: The Locator is the ideal frame-provider. It is the most robust tool for providing frames to other tools.
Output Frames and Instances: A frame output by the Locator is called an instance. A single Locator can output multiple frames (multiple instances).

Frame Builder
Comment: The Frame Builder builds and outputs any number of frames that are based on user-defined locations.
Output Frames and Instances: Frames output by the Frame Builder tool are built relative to frames provided by another tool, or relative to the input image origin.

Line Finder
Comment: Reliable. Like the Locator, results output by the Line Finder are vectorized descriptions that are robust to noise, occlusions, and changes in lighting.
Output Frames and Instances: A Line Finder tool outputs a single instance per frame.

Arc Finder
Comment: Reliable. Like the Locator, results output by the Arc Finder are vectorized descriptions that are robust to noise, occlusions, and changes in lighting.
Output Frames and Instances: An Arc Finder tool outputs a single instance per frame.

Point Finder
Comment: Not recommended for most applications. The Point Finder result does not contain an orientation (rotation). A frame-based tool will be positioned at the correct X,Y offset but with an orientation of 0 (zero).
Output Frames and Instances: A Point Finder tool outputs a single instance per frame.

Blob Analyzer
Comment: To be used as a frame-provider, the IntrinsicInertiaResult and ExtrinsicInertiaResult parameters must be set to True. Because this tool uses grey-scale correlation, its results are affected by lighting conditions, noise, and occlusions. This can affect the success, accuracy, and positioning of a dependent frame-based tool.
Output Frames and Instances: A Blob Analyzer tool can output multiple instances per frame. Each found blob is an instance.

Caliper and Arc Caliper
Comment: If the tool is configured to find more than one caliper pair, the positioning of the frame-based tool varies (relative to the parent object) when one or more caliper pairs are not detected. Because these tools use grey-scale correlation, results are affected by lighting conditions, noise, and occlusions. This can affect the success, accuracy, and positioning of a dependent frame-based tool.
Output Frames and Instances: A Caliper or Arc Caliper tool can output multiple instances per frame. Each found caliper pair is an instance.

Edge Locator and Arc Edge Locator
Comment: If the tool is configured to find more than one edge, the positioning of the frame-based tool varies (relative to the parent object) when one or more edges are not detected. Because these tools use grey-scale correlation, results are affected by lighting conditions, noise, and occlusions. This can affect the success, accuracy, and positioning of a dependent frame-based tool.
Output Frames and Instances: An Edge Locator or Arc Edge Locator tool can output multiple instances per frame. Each found edge is an instance.

Pattern Locator
Comment: Not recommended for most applications. Frames are positioned correctly relative to the found pattern; however, patterns are generally not stable features on the parent object. Also, results can be significantly slower than using the Locator as a frame-provider.
Output Frames and Instances: A Pattern Locator tool can output multiple instances per frame. Each found pattern is an instance.

Results Inspection Tool
Comment: Useful for creating a PASS/FAIL filter. The Results Inspection tool filters results and outputs frames that meet defined criteria.
Output Frames and Instances: Does not create a frame, but outputs a frame created by another tool if the frame passes conditions set by the Results Inspection tool. Any number of frames and instances can be output by this tool, depending on the input tool and the configuration of filters.
Saving Tool Results to a Log File
The results of a tool process can be saved to a text file called the Results Log.
Creating a Results Log
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Figure 33 Saving AdeptSight tool results to a log file (the 'Browse' icon sets the filename and path for the log file; the 'Delete Log' icon deletes the current log file)
Contents of the Results Log
The Results Log saves the results of each execution of the tool process, whether or not it has found any
valid instances. The results are the same as those that appear in the grid of results, below the
display in the Sequence Editor. See Viewing Tool Results for more details.
Figure 34 shows an example of Locator results in a log file.
Figure 34 Example of the contents of a Results Log
Clearing the Log File
To clear the current log file, you must delete the existing file and start a new one.
To clear the result log file:
1. Click the 'Delete Log' icon.
2. If you leave the Results Log check box enabled, the next time the sequence is executed, a
new results log will be started, with the name and file path that are currently shown in the text
box.
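The delete-and-restart behavior described above can be sketched as follows. The line layout in this sketch is made up for illustration; it is not the actual AdeptSight log format shown in Figure 34.

```python
import os

# Illustrative sketch of a results log that is appended to on every tool
# execution and cleared by deleting the file (made-up line format).

LOG_PATH = "results.log"

def log_results(tool_name, results, path=LOG_PATH):
    """Append one line per tool execution, found instances or not."""
    with open(path, "a") as f:
        f.write(f"{tool_name}\t{len(results)}\t{results}\n")

def clear_log(path=LOG_PATH):
    """Mimic the 'Delete Log' icon: remove the file; the next execution
    starts a fresh log at the same path."""
    if os.path.exists(path):
        os.remove(path)

clear_log()
log_results("Locator", [(12.5, 3.1), (40.0, 7.7)])   # two instances found
log_results("Locator", [])                           # none found: still logged
with open(LOG_PATH) as f:
    print(len(f.readlines()))   # 2 lines, one per execution
clear_log()
```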
Viewing Tool Results
The Sequence Editor can display results of all tools in a vision sequence or the results of a single tool in
the sequence.
The Sequence Editor provides results in the following manner:
• The display shows a visual representation of the results. For example, in Figure 35, the results
of a histogram tool are represented by a green box marking the tool region of interest,
and an ID number, called the Frame ID, unique to each tool instance.
• The grid of results, below the display, shows values of the tool results, usually as numeric
values.
• The status bar shows the execution time of the entire sequence or of a single tool.
• Results logs can be output for all inspection and image processing tools.
Figure 35 Displaying the Results of a Single Vision Tool (the selected tool is shown with a blue title, and results are displayed for the selected tool)
Displaying Tool Results
By default, the Sequence Editor displays the results of the currently active (selected) tool.
To display the results for a single tool in the sequence:
1. In the Process Manager (Sequence) click on the tool name to select it.
2. The selected (active) tool name becomes blue, as shown in Figure 35.
• The grid of results shows the results for the last execution of the tool.
• The display shows a representation of the results of the last tool execution.
To display the results for all tools in a sequence:
1. To view the results of all tools, no tools in the list should be selected (active).
2. If a tool is currently selected in the sequence (displayed in blue), click once on the name of the
tool that is currently selected.
3. The tool name becomes black as shown in Figure 36.
• The grid of results shows the results for the last execution of all the tools in the sequence.
• The display shows the representation of the last tool in the sequence.
Figure 36 Displaying the Results of All Tools in a Vision Sequence (all tool results are displayed when no tool is selected)
Coordinate Systems
Location and inspection tools can return results with respect to one of four coordinate systems. This
allows you to use the set of references best adapted to the requirements of your applications, by
selecting the coordinate system in which results will be displayed and returned.
The four coordinate systems are the World coordinate system, the Image coordinate system, the Object
coordinate system, and the Tool coordinate system.
This chapter describes the four coordinate systems and the advantages of each for building applications.
• Relationship between Coordinate Systems
• World Coordinate System
• Image Coordinate System
• Object Coordinate System
• Tool Coordinate System
Relationship between Coordinate Systems
The coordinate systems can be described as layered in physical space. The World coordinate system
occupies the highest level, since it physically contains the other coordinate systems.
• There can only be one World coordinate system, which can contain more than one instance of
the other coordinate systems. The relationship between the World coordinate system and the
Image coordinate system is defined by the calibration procedure.
• Though there is usually only one Image coordinate system, there can be more than one, for
example when two independent camera devices are used for an application.
• The Object coordinate system is unique for each Model defined. The Object coordinate system
is defined with respect to the World coordinate system of the image from which the Model is
built.
Figure 37 Relationship Between AdeptSight Coordinate Systems
World Coordinate System
The World coordinate system is especially useful for guidance applications. The origin is set during the
calibration process, approximately at the center of the image.
This coordinate system can also be useful for applications where more than one camera is used: objects
found by each camera are thus located and defined within the same coordinate system.
Image Coordinate System
This coordinate system is useful for image-processing applications. The origin of the Image coordinate
system depends on the type of calibration and is set during the calibration process. Units are always
expressed in pixels.
• If the camera is calibrated with the 2D Vision Calibration Wizard, the origin (0,0) is at the
center of the image. If the camera is calibrated only through a Vision to Robot calibration, the
origin is the same as the Robot origin (Robot frame of reference).
• Figure 38 shows a typical, left-handed coordinate system and the image origin at the center of
the image.
Figure 38 Image Coordinate System (a left-handed coordinate system with the image origin at the center of the image)
Object Coordinate System
The origin of the Object coordinate system is defined by the user during the creation of a model, in
Model edition mode.
• In an application, all instances of the same type of object use the same origin for defining the
position of model-based inspection tools. This is useful when you must determine a feature’s
location in relation to other features in the object.
• This coordinate system is useful for quality control applications in which features must be
inspected at a specific point on the object.
Figure 39 Object Coordinate System (the position and orientation of the Object coordinate system is set during model edition)
Tool Coordinate System
Tools carry out their action within a region of interest that is bounded by a rectangle or, in the case of
arc tools, by a sector. When the tool area is bounded by a rectangle, the origin of the Tool coordinate
system is the center point of the rectangle. If the tool area is bounded by a sector (arc tools), the origin
of the coordinate system is the origin of the sector.
• The origin of the Tool coordinate system is fixed relative to the tool's region of interest.
• The Tool coordinate system is useful for measuring a feature that does not need to be defined
or located with respect to a specific location on the object. For example: a caliper measure on
an object, found and measured by a Caliper tool.
Figure 40 Tool Coordinate System (frame-based tools positioned relative to the result of a Locator tool, each with its own Tool coordinate system)
Color Support and Functionality in AdeptSight
AdeptSight provides support for acquiring and using color images. Additionally, the optional AdeptSight
Color license adds color capabilities to various AdeptSight tools and processes, and is required to enable
full color support in AdeptSight.
This section provides an overview of color support in AdeptSight, described under the following topics:
• Color Image Acquisition
• Color Calibration
• Color Locator
• Color Processing with Inspection Tools
• Color Matching Tool
Color Image Acquisition
The Acquire Image tool accepts and outputs color images provided by supported color cameras.
Moreover, AdeptSight provides an optional color calibration that can ensure that the Acquire Image tool
provides accurate color images.
Figure 41 AdeptSight Application with Color Locator
Color Calibration
In applications where accurate color is required, color calibration ensures that the camera will provide
images that contain accurate color information. AdeptSight includes a Color Calibration Wizard that
steps you through color calibration, using a standard GretagMacbeth Color Checker target.
Figure 42 AdeptSight Color Calibration Wizard (the wizard guides the user through the quick calibration process, using a standard color target)
Color Locator
The Locator can be configured to differentiate between objects based on their color. In the model-building
process, a custom color shading area can be defined for each object. This shading area allows the
Locator to use color information when locating objects.
For more information on configuring a color Locator, see Configuring a Color Locator Tool.
Figure 43 Custom Shading Area Enables Finding of Parts According to Color (the custom shading area in the model defines the color of the object)
Color Processing with Inspection Tools
Most inspection and finder tools can process images on the basis of color. An example of this is the
detection of edges based on color values.
Because color processing can increase execution time, you can enable grey-scale processing to improve
execution time when color information is not required.
Color processing requires a Color License.
AdeptSight 2.0 - User Guide
64
Color Support and Functionality in AdeptSight
The following inspection tools support color processing:
• Edge Tools: The Edge Locator, Arc Edge Locator, Caliper, and Arc Caliper tools use color
information to extract edges.
• Finder Tools: The Arc Finder, Line Finder, and Point Finder tools use color information to
extract edges for finding geometric entities (arcs, lines, points).
Figure 44 Color Processing with Finder and Edge Tools: edges detected at color transitions in the color image are not detected in the grey-scale version of the same image.
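The effect shown in Figure 44 can be illustrated with a small sketch: two colours of near-equal luminance produce no greylevel transition, while their colour difference remains large. The functions below are illustrative only, not the AdeptSight implementation; the grey conversion uses the common Rec. 601 luma weights.

```python
# Illustrative sketch (not the AdeptSight implementation) of why a colour
# edge can vanish in grey-scale processing.

def to_grey(rgb_image):
    """Convert RGB pixels to greylevels using the common Rec. 601 weights."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def edge_strength(row):
    """Gradient magnitude between neighbouring greylevels along one row."""
    return [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]

def color_diff(p, q):
    """Per-pixel colour distance: sum of absolute channel differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Red-to-green transition chosen so both colours have the same luminance.
image = [[(255, 0, 0), (255, 0, 0), (0, 129, 0), (0, 129, 0)]]
grey = to_grey(image)
print(grey[0])                               # [76, 76, 76, 76]
print(edge_strength(grey[0]))                # [0, 0, 0]: no grey-scale edge
print(color_diff(image[0][1], image[0][2]))  # 384: strong colour edge
```

This is also why enabling grey-scale processing is safe only when the features of interest differ in brightness, not just in hue.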
In AdeptSight 2.0, the Blob Analyzer and Pattern Locator tools do not provide color processing.
Color Matching Tool
The Color Matching tool allows you to easily define filters to extract and analyze color areas in an input
image.
The Color Matching tool can identify the presence or absence of defined colors, and analyze the
predominance of colors in images: for example, to inspect and differentiate similar objects of different
colors.
The Color Matching tool requires a Color License.
For more information on this tool, see Using the Color Matching Tool.
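As a rough illustration of colour-presence analysis, the sketch below counts pixels within a tolerance of a reference colour. The matching rule is made up for illustration; it is not the AdeptSight Color Matching algorithm.

```python
# Illustrative colour-presence filter (not the AdeptSight Color Matching
# tool): measure what fraction of an image lies near a reference colour.

def match_ratio(image, reference, tolerance):
    """Fraction of pixels whose maximum channel deviation from
    `reference` is within `tolerance` greylevels."""
    pixels = [p for row in image for p in row]
    hits = sum(1 for p in pixels
               if max(abs(a - b) for a, b in zip(p, reference)) <= tolerance)
    return hits / len(pixels)

image = [[(250, 10, 10), (255, 0, 0)],
         [(0, 255, 0), (10, 10, 240)]]
# Is "red" present? Half of the pixels match (255, 0, 0) within 20 levels.
print(match_ratio(image, (255, 0, 0), 20))   # 0.5
```

A ratio near zero would indicate the colour is absent; comparing ratios for several reference colours is one simple way to tell apart similar objects of different colours.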
Figure 45 Example of the Color Matching Tool interface
Using the Acquire Image Tool
In an AdeptSight application, the Acquire Image tool provides images that will be used by other vision
tools in the sequence. This tool should always be the first process in any vision sequence.
The Acquire Image tool acquires images from a camera, or from a database of images, through the
Emulation device.
This tool also supports latched image acquisition. Latching options are provided for applications that
require robot latching and belt encoder tracking. See Configuring Latching Parameters.
Figure 46 Acquire Image Tool Interface. The preview icons do not acquire images; they only preview images provided by the camera. A separate icon saves the current image to file.
Selecting a Camera
The Camera drop-down list lets you select the camera that will provide the images for the current
sequence of tools.
This list contains the available cameras that can provide input images. For example:
• The Basler camera that is provided with AdeptSight, identified by model name and ID, as
shown in Figure 46.
• Any other compatible camera, identified by its name and ID.
• An Emulation device, which mimics image acquisition by using images from a database of
images instead of live images from a camera. See Importing Images to the Emulation Device.
Accessing Camera Properties
To access camera properties:
1. Select a camera from the drop-down list.
2. Click the Camera Properties icon to access the parameters of the selected camera.
3. As needed, consult the documentation for the camera.
4. If you select the Emulation device instead of a camera, this will open the Emulation
Properties window. See Figure 47. Emulation mimics image acquisition by using images from
a database of images instead of live images from a camera.
Viewing Images Provided by the Acquire Image Tool
Images provided by the camera or the Emulation mode appear in the display area of the Sequence
Editor window. There are two modes for previewing images:
• Live Mode displays the live, continuous images that are being acquired by the camera.
• Preview Mode displays a single static image at a time. Each time you click the 'Grab Single
Image' icon, a new image is shown in the display.
• Previewing does not execute the Acquire Image tool. To acquire images from the camera for
use by other tools, you must execute the sequence, or execute the Acquire Image tool.
Saving Images
Images can be saved to file directly from the Acquire Image interface. Various standard file formats are
supported, as well as the hig format that is exclusive to Adept.
The hig format saves the calibration information in the image file. Files with this format can be reused in
AdeptSight applications, through an Emulation device.
To save the current image:
1. Click the 'Save Current Image' icon
2. Specify the destination where the file will be saved.
3. Specify the file format. The .hig format is an AdeptSight format that stores calibration data in
the image file.
Renaming the Acquire Image Tool
Acquire Image tools, which are named automatically, can be renamed with more meaningful
names. For example, rename tools named: Acquire Image, Acquire Image2, and Acquire Image3 to:
Camera 1, Camera 2, and Camera 3.
To rename the tool:
1. Right-click on the Acquire Image title bar.
2. From the context menu, select Rename Tool.
Alternatively, you can double-click the title. When the cursor appears, modify or replace the
name.
Importing Images to the Emulation Device
You can provide images to the Emulation device by loading an image database, or by importing images
that are in a compatible format (bmp, png, etc).
See Using the Emulation Device for more information on creating and using images provided from image
databases and image files.
To import images from a database:
Click Import to select the file containing images. The AdeptSight installation provides image files
that you can use in emulation mode. These image files have an "hdb" extension.
Figure 47 Importing Images in Emulation Mode
Acquire Images Icons
The functions available from the icons are:
Save Current Image
Saves the current image to a specified file and image format. Various
standard formats are available. An additional format, hig, allows you
to store the calibration data in the image, for reuse in other
AdeptSight applications.
Camera Properties
Opens the properties window for the selected camera. If the selected
camera is an emulation device, this opens the Emulation Properties
dialog.
Live Mode
Provides a continuous display of live images being taken by the
camera. This is useful for visualizing the effect of camera settings,
such as brightness, focus, aperture, white balance, etc.
Image Preview
Provides a single image preview of an image taken by the camera. A
new image is displayed each time the icon is clicked.
Execute Tool
Executes the Acquire Image tool, and makes images available for
other tools in the sequence. Does not execute other tools in the
sequence.
Grab Failed
Indicates that tool execution failed. Often, this occurs because the
selected camera is unavailable, either because it is disconnected,
non-existent, or incorrectly configured.
Related Topics
Configuring Latching Parameters
Using the Emulation Device
Configuring Latching Parameters
Latching Parameters provide options for latching both robot and conveyor belt locations. Two types of
latching are described below: hard latching and soft latching.
Hard Latching
Hard Latching refers to a setup in which a cable transmits analog signals from the camera to the
controller. Each time the camera grabs an image, a signal is sent to the controller. The controller reads
the location (of a device) at the time the image is grabbed, and saves this information to a buffer. The
latching configuration must be set up on the controller, through the CONFIG_C utility.
Soft Latching
Soft Latching refers to a setup in which there is no true latching of a location. At the moment that the
Acquire Image tool acquires an image, it asks the controller for the location of a device (robot or
conveyor belt). Because there is a delay between the time the image is acquired and the time that the
position of the device is read, the device (robot or belt) should not be moving during the execution of
the Acquire Image tool.
Figure 48 Acquire Image Tool - Latching Parameters
Belt Latching
Belt latching parameters define the mode that will be used to latch encoder signals for belt tracking
applications. Read Latched Value is the default and recommended mode for most situations.
Read Latched Value
This enables true latching, also called hard latching, as described above. This mode should always be
used for conveyor tracking applications, except for specific situations, such as described below under
Read Value.
Read Value
Read Value sets a soft latching mode, in which the belt location is not truly latched. Instead, the
Acquire Image tool requests the location of the conveyor belt from the controller at the moment an
image is acquired.
However, because there is a delay between the time that the image is acquired and the time the
Acquire Image tool receives the belt location, the location may no longer be valid because the belt has
moved. The location is only valid if the belt was stationary when the image was taken.
• The advantage of using this mode is that there is no cable required for latching the signal.
• The disadvantage is that it can only be used in cases where the conveyor belt comes to a full
stop when the camera is taking an image, and/or when the conveyor belt movement is very
slow and a slight error in the location is not critical to the application.
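The size of the soft-latching error is simple arithmetic: the belt speed multiplied by the delay between image acquisition and the location read. A minimal sketch of this estimate (the belt speeds and the 30 ms delay are illustrative assumptions, not AdeptSight measurements):

```python
def soft_latch_error_mm(belt_speed_mm_s, read_delay_s):
    """Estimate the belt-position error incurred by soft latching.

    With Read Value (soft latching), the belt location is read some
    time after the image is grabbed, so the belt has moved
    belt_speed * delay millimeters in the meantime.
    """
    return belt_speed_mm_s * read_delay_s

# A moving belt makes soft latching inaccurate:
print(soft_latch_error_mm(200.0, 0.030))  # 200 mm/s belt, 30 ms delay -> 6.0 mm off

# A stopped belt incurs no error, which is why Read Value is only
# acceptable when the belt halts (or barely moves) during the grab:
print(soft_latch_error_mm(0.0, 0.030))    # -> 0.0 mm
```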
Robot Latching
Robot latching parameters define the mode that will be used to latch the location of the robot. Read
Latched Value is the default and recommended mode for most situations.
Read Latched Value
Read Latched enables true latching (hard latching) of the robot location at the moment the image is
taken.
This mode should be enabled for most applications, and is required in the following cases:
• This mode MUST be enabled for applications where the camera is arm-mounted or tool-mounted.
• This mode is required for "on-the-fly" inspection with an upward-facing camera, or when parts
are moved (without stopping) through the camera field of view to be inspected.
Read Value
Read Value sets a soft latching mode in which the robot location is not truly latched. Instead, the
Acquire Image tool requests the robot location at the moment an image is acquired. However, because
there is a delay between the time that the image is acquired and the time the Acquire Image tool
receives the robot location (30 ms, for example), the position may no longer be valid because the robot
has moved. The robot location is only valid if the robot was stationary, or moving very slowly, when the
image was taken.
• The advantage of using this mode is that there is no cable required for latching the signal.
• The disadvantage is that it can only be used in cases where the robot comes to a full stop
when the camera is taking an image, and/or when the robot movement is very slow and a
slight error in the location is not critical to the application.
Related Topics
Using the Emulation Device
Using the Locator Tool
The Locator finds and locates objects based on models, which describe the geometry of objects.
• Because of its speed, accuracy, and robustness, the Locator is the ideal "frame-provider" for
AdeptSight inspection tools.
• A Locator can also be frame-based. A frame-based Locator requires the input of another tool
in the application, preferably another Locator. A frame-based Locator can be used to precisely
locate features, "sub-features", or "sub-parts" on a parent object.
• The Locator offers color-based processing on systems that have an AdeptSight Color License.
For more information see Configuring a Color Locator Tool.
Basic Steps for Configuring a Locator Tool
1. Select the tool that will provide input images to the Locator.
2. Position the Locator tool.
3. Create (or add) the models that will be used by the Locator to find and locate objects.
4. Configure Search parameters, as needed.
5. Execute the tool and verify results.
6. Configure Advanced Parameters, as needed.
Locator Tool Interface
The Locator interface is subdivided into six sections:
• Input: Selects the tool that provides input images. See Input.
• Location: Positions the tool’s region of interest for the search process. See Location.
• Models: Manages models of the parts to be located and provides a Model Edition mode for
creating and editing models. See Managing Models in AdeptSight.
• Search: Sets basic parameters used by the Locator. See Configuring Locator Search
Parameters.
• Results Log: Enables the saving of tool results to a log file. See Saving Tool Results to a Log
File and Locator Tool Results.
• Advanced Parameters: Provides advanced parameters for configuring the Locator. See
Configuring Advanced Locator Parameters.
Figure 49 Locator Tool Interface
Input
The Locator requires the input of an image, typically provided by the Acquire Image tool.
Input images can also be provided by other AdeptSight tools that output image results, such as an
Image Processing Tool.
Location
Location parameters define where the region of interest of the tool is positioned. The region of interest
of the tool is the area in an image in which the tool carries out its process.
There are two types of positioning for a tool:
• Image-based: The tool region of interest is always placed on the same area of the image.
Often, this area is the Entire Image. This type of positioning is static: the tool is always
positioned on a fixed portion of the image.
• Frame-based: The tool is positioned relative to a frame (a result instance) provided by another
tool, called the frame-provider. This type of positioning is dynamic: the position depends on
the position of the frame-provider. If no instance of the frame-provider is present in an image,
no instance of the tool is applied.
In most cases a Locator tool is image-based. A frame-based Locator requires a frame
input from another Locator tool (recommended), or from another frame provider tool.
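Conceptually, frame-based positioning places the tool's region of interest by composing its offset with each frame result. A hypothetical 2D sketch of the transform involved (the function, coordinates, and angle are invented for illustration; AdeptSight performs this internally):

```python
import math

def place_roi(frame_x, frame_y, frame_rot_deg, roi_x, roi_y, roi_rot_deg):
    """Position a tool region of interest relative to a frame result.

    The ROI offset (roi_x, roi_y, roi_rot_deg) is expressed in the
    frame-provider's coordinate system; the returned pose is in image
    coordinates. This mirrors how a frame-based tool follows each
    instance found by its frame-provider.
    """
    a = math.radians(frame_rot_deg)
    x = frame_x + roi_x * math.cos(a) - roi_y * math.sin(a)
    y = frame_y + roi_x * math.sin(a) + roi_y * math.cos(a)
    return x, y, frame_rot_deg + roi_rot_deg

# An ROI defined 10 mm along the frame X axis follows a part
# found at (50, 20) and rotated 90 degrees:
print(place_roi(50.0, 20.0, 90.0, 10.0, 0.0, 0.0))  # -> (50.0, 30.0, 90.0)
```

If no frame instance is found in an image, no ROI is produced, which matches the behavior described above.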
To position an image-based Locator tool:
1. In the Location section, set Frame Input to (none).
2. If the Locator must find parts in the entire area of the input image, check the Entire Area
check box.
3. If the Locator region of interest must be constrained to a specific area of the image (for
example, if the image covers an area larger than the robot workspace), click Location.
4. Use both the Location window and the Display to position the tool.
• Enter or select values of Location parameters to set the position of the tool region of interest.
Changes to these values are dynamically represented in the display.
• You can manually position the bounding box in the display. To resize the box, drag the sides of
the box with the mouse. To rotate the box, move the X-axis marker with the mouse.
To position a frame-based Locator tool:
1. Execute the sequence once.
2. In the Frame Input drop-down list, select the tool that will provide the frame.
The tool that provides the Frame Input is called the frame-provider. The Frame Input can
only be provided by a tool that is 'above' the current tool in the sequence.
3. By default, the All Frames check box is enabled. Leave this check box enabled if the current
tool should be applied to all the frame results that are output by the frame-provider tool. This is
the recommended setting for most situations.
4. If the current tool must be applied only to a specific frame, disable the All Frames check box
and select the required frame. The default value is 0; numbering of frames is 0-based.
Figure 50 Positioning a Frame-Based Locator tool
5. Click Location. This opens the Location window and positions the tool bounding box relative to
a frame of reference in the current image. The bounding box X-Y positions and rotation are
relative to the frame of reference.
In Figure 50, the tool is frame-based, and the Frame Input is provided by a Locator tool. The
Locator instance (result) that is the frame of reference is indicated by a blue X-Y axes marker.
6. Enter or select values for Location parameters to set the location of the region of interest.
Changes to these values are dynamically represented in the Display.
• You can manually position the bounding box in the display by dragging the sides of the box
with the mouse. To rotate the box, move the X-axis marker with the mouse.
Once the Locator is correctly positioned, one or more models must be added to the Locator. See related
topics on creating models and configuring the Locator.
Related Topics
Creating Models in AdeptSight
Configuring Locator Search Parameters
Creating Models in AdeptSight
The Locator tool finds and locates objects that are defined by a Model. Models that are created from
calibrated images can be used across different AdeptSight applications.
The Locator provides a Model Edition mode in which the user can create new models and modify existing
ones.
• Models can be built quickly through the Basic model edition mode. Models built through the
basic edition mode are typically satisfactory for many vision applications.
• The Expert model edition mode provides additional functionality to edit, refine, or customize
models.
Launching Model Edition
The Model Edition mode opens when you add a new model or edit an existing model.
To create a new model:
1. Click the Add Model icon in the Models section:
2. The Model Editor opens. See Basic Model Edition Mode for details on creating models.
To edit an existing model:
1. Select a model in the list.
2. Click Edit.
3. The Model Editor opens in basic model edition mode, as shown in Figure 51. See Expert
Model Edition Mode for information on editing an existing model.
Basic Model Edition Mode
The basic Model Edition mode allows you to quickly teach a Model from an object in the image.
Use the display options to zoom, select, and pan. The display shows the Model bounding box, the
coordinate system marker, and Outline Model features displayed in green.
Figure 51 Model Editor Display
To teach the model:
1. If the current image is not satisfactory, click the 'Grab Single Image' icon until you acquire a
satisfactory image.
2. Position the green bounding box marker around the object for which you want to create the
Model.
3. Position the yellow coordinate system marker. This marker sets the coordinate system (frame
of reference) of the model. To rotate the axes marker drag the arrow ends of the marker.
4. As needed, you can extend the axes of the coordinate system marker to help you visually
position the coordinate system marker.
5. Click Done.
6. The Model now appears in the list of Models. To rename the model, double-click on its name in
the list.
Expert Model Edition Mode
To edit a model, or change parameters that affect model building, you must access the Expert model
edition mode.
Figure 52 illustrates a model that contains features that are not part of the object. In expert mode you
can remove these features or adjust parameters that may allow you to automatically remove unwanted
features.
In the display, features added to the model are shown in magenta at the Outline Level and in green at
the Detail Level. Blue lines display detected edges that were not selected as features.
Figure 52 Expert Model Edition Mode
Choosing Features for the Model
Any number of features can be selected and added to a Model, at the Outline Level, the Detail Level,
or both.
The following considerations should be taken into account when selecting features used to create the
Model.
• Select features that are stable: features that remain fixed or stable on all occurrences of the
part or object you are modeling.
• Select features that distinguish the part from the background and from other similar parts or
objects that will be processed within the same application.
• Select features that characterize the object and set it apart from other somewhat similar
objects or the background, while being stable with respect to lighting.
• Favor features that are long and rich in their variation of curvature. This allows for more
robust recognition and more precise positioning.
• Non-distinctive and unstable features at the Outline Level can negatively impact the ability of
the location process to recognize the Model.
• Blurry and unstable features at the Detail Level can negatively impact the location and
positioning accuracy.
Adding Features to the Model
You can manually select and add features to a Model after it is created. Features can be added to the
model at the Detail Level, the Outline Level, or at both levels. This is useful when a specific feature on
the object must be present to disambiguate this object from other very similar objects.
To add features to the model:
1. In Model edition mode, select Advanced, to enter advanced model edition.
2. Under Show, select the level in which you will create the feature: Outline or Detail.
3. Select the feature by clicking on the feature in the display.
• To select the entire contour, double-click on the contour.
• To select only a section of a contour, left-click a starting point on the contour. Hold the Ctrl key
and click the end point of the portion you want to select.
• To modify (add or subtract) from the selected contour section, hold the Ctrl key while clicking
elsewhere on the line segment.
4. Press the Insert key to add the feature to the model.
5. Click Apply to make this change definitive in the Model.
To edit a feature:
You cannot edit an existing feature. You must first delete the feature and then create a new feature from
the required portion of contour.
For example, to keep only a section of a contour that was entirely added as a feature:
1. Select and delete the feature as explained below.
2. Add a section of the contour as explained in Adding Features to the Model.
Removing Features from a Model
You can manually remove features from a Model after it is created. Features can be removed from the
Detail Level, the Outline Level, or from both levels.
To remove features from the Model:
1. Under Show, select the level from which you will remove the feature: Outline or Detail.
2. Select the feature you want to remove: the feature appears in the display as a bold red
contour.
3. Press the Delete key to remove the feature.
4. Click Apply to make this change definitive in the Model.
Setting Contour Detection in the Model Editor
The default Contour Detection values are recommended.
Automatic Levels
When the Automatic Levels check box is enabled, the system automatically optimizes the Outline Level
and Detail Level values. Automatic Levels is the recommended mode. To manually set Outline Level
and Detail Level parameters you must disable the Automatic Levels check box.
Outline Level
The Outline Level provides a coarser level of contours than the Detail Level. The location process uses
Outline Level contours to rapidly identify and roughly locate potential instances of the object.
• The higher the setting (maximum 16), the coarser the contour resolution.
• The Outline Level value can only be higher than, or equal to the Detail Level.
Detail Level
The location process uses Detail Level contours to confirm the identification of an object instance and
refine its location within the image.
• The lower the setting (minimum 1), the finer the contour resolution.
• The Detail Level value is always lower than, or equal to the Outline Level.
Contrast Threshold
Contrast Threshold sets the sensitivity to contrast that is used to generate contours. Adaptive settings
automatically adjust the numerical value according to the input image.
• The default Adaptive Normal Sensitivity is recommended for most applications.
• Adaptive High Sensitivity accepts lower contrasts and therefore results in a greater number
of source contours.
• Adaptive Low Sensitivity retains high-contrast contours and removes lower contrast
contours, such as those caused by noise and shadows.
• Fixed Value requires that you manually set the Contrast Threshold value, expressed in terms
of a step in greylevel values that ranges from 0 to 255. Higher values reduce sensitivity to
contrast, resulting in fewer, high-contrast contours. Conversely lower values increase
sensitivity and add a greater amount of source contours.
• The Fixed Value mode is useful for controlled environment applications in which only
contours above a predefined threshold should be processed.
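The effect of a Fixed Value threshold can be illustrated on a single scanline of greylevel values: a contour point is kept only where the greylevel step between neighboring pixels exceeds the threshold. A toy sketch only (the pixel values and the threshold of 40 are invented for illustration; AdeptSight's actual contour detection operates on full 2D images):

```python
def contour_points(scanline, contrast_threshold):
    """Return indices where the greylevel step (0-255 scale) between
    adjacent pixels exceeds a fixed Contrast Threshold.

    Higher thresholds keep only high-contrast edges; lower thresholds
    also pick up weak edges such as noise and shadows.
    """
    return [i for i in range(1, len(scanline))
            if abs(scanline[i] - scanline[i - 1]) > contrast_threshold]

row = [30, 32, 31, 200, 198, 60, 58, 57]  # one bright object on a dark row
print(contour_points(row, 40))  # strong edges only -> [3, 5]
print(contour_points(row, 1))   # a low threshold also keeps noise
```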
Feature Selection
Use the Feature Selection slider to increase or decrease the number of features that are selected from
the source contours when you create the model with the Build Model command.
• If you want to manually select all the features for a Model, set Feature Selection to none,
teach the Model with the required Contour Detection parameters to detect the source
contours, then manually add features.
Show
There are two levels of contours that are detected in an image and added to a Model. Use the Show
radio buttons to toggle between these two levels.
• Outline Level features are used by the Locator tool to rapidly identify and roughly locate
potential instances of the object. The Outline Level should contain at least a minimum amount
of features that allow the Locator to reliably generate hypotheses.
• Detail Level features are used by the Locator tool to refine the pose and location of
an object. The Detail Level should contain non-blurry and stable features that can be used to
accurately locate the part.
Setting Advanced Properties in the Model Editor
Use Custom Shading Area
Enabling the Use Custom Shading Area check box allows you to manually define an area of the model
that the Locator will use for Shading Consistency analysis.
• The Locator analyzes shading consistency by comparing the custom area in the Model to the
corresponding area on a found instance.
• A Custom Shading Area is used by the Locator when the Instance Ordering parameter is set
to Shading Consistency. If Use Custom Shading Area is not enabled and Instance
Ordering is set to Shading Consistency, the Locator uses the entire Model area for shading
analysis.
• Shading Consistency must be enabled to create Models that are based on color. In such a
case, the shading consistency analysis can help to discriminate between objects that are very
similar in color. For details on creating color models and configuring a color Locator, see
Configuring a Color Locator Tool.
To set a custom shading area:
1. Enable the Use Custom Shading Area check box. This will display a yellow bounding box in
the display.
2. Use the mouse to drag the shading area bounding box to the appropriate area on the model.
The bounding box cannot be rotated, only displaced and resized in the X-Y directions.
Figure 53 Applying a Custom Shading Area to Add Color Information to a Model
Center Coordinate System
When a feature is selected on the Model, clicking Center Coordinate System moves the coordinate
system marker to the center of gravity of the selected feature.
Commands
Build Model
Clicking Build Model initiates a new model creation process. It clears the Model that is currently in the
Model Editor, re-detects the source contours using the new settings, selects features from the source
contours according to the setting of the Feature Selection slider and teaches a new Model. Build Model
does not make any changes to the coordinate system.
Revert
Revert undoes all modifications made to the model since the last call to Apply.
Apply
Apply definitively applies current modifications made to the Model.
After you have created a Model, we recommend that you create or assign the proper
Gripper Offset correction for the model. The robot may not correctly handle parts
corresponding to the model if a Gripper Offset calibration is not carried out for the
model.
Keyboard Shortcuts in the Model Edition Display
Table 3 presents the keyboard shortcuts that you can use in the Model Editor.
Table 3 Keyboard shortcuts in the Model Edition Display
• Double-click: Selects an entire line segment to be added as a feature.
• Single-click: Selects an entire feature, or a small portion of a line segment (dark blue) to be
added as a feature.
• Ctrl+click: Adds or removes sections of the selected line segment.
• Insert: Adds the selected feature to the model.
• Delete: Deletes the selected feature from the model.
• Arrow up: Zooms out (display).
• Arrow down: Zooms in (display).
• Page Up: Scrolls up in the display.
• Page Down: Scrolls down in the display.
• Home: Scrolls left in the display.
• End: Scrolls right in the display.
Related Topics
Creating Models in AdeptSight
Calibrating the Gripper Offset for Models
Managing Models in AdeptSight
The Models box in the Locator interface provides tools and options for managing the Models used by the
current Locator tool. Models are stored temporarily in a runtime database called a Models database.
Such a database can contain any number of Models.
• A Models database should contain Models that are similar in nature, size and calibration, such
as models for parts or objects commonly used within the same application.
• A large number of Models can significantly affect the search speed.
• Created Models, as well as modifications made to existing Models, will exist only in memory
and will be lost once the application is closed, unless the changes are saved to the current
database, or to a new models database.
• Models in a database can be individually enabled or disabled as needed for the current
application.
• Models can be saved to file and re-imported into a Locator tool.
Figure 54 Managing Models from the Locator Interface
List of Models
The list of models displays the models that are currently available for use in the sequence. The
thumbnail display shows the Model that is currently selected in the list.
• You can enable or disable individual Models in the database using the associated check boxes.
• Deactivating a model by disabling its check box does not delete or remove the Model. It
simply indicates to the Locator process that it should ignore this model during the search
process.
• The icon to the right of a model indicates whether or not a Gripper Offset has been calibrated
and assigned to the model. See Calibrating the Gripper Offset for Models for more information
on the importance of gripper offsets.
Adding Models
To create and add a new model:
1. In the Models section of the Locator tool, click the '+' icon.
2. This opens the Model Editor for creation of a new model. See Creating Models in AdeptSight for
details on creating models.
To add a model by importing from a file:
1. In the Models section of the Locator tool, click the 'Model Options' icon. See Figure 54.
2. From the drop-down menu, select Import Models. Browse to or specify the file (*.hdb) that
contains the models to be added.
To delete a model:
1. Select a model in the list.
2. Click the Remove Model icon ('minus' symbol). See Figure 54.
3. Alternatively, you can click the 'Model Options' icon and select Delete All Models.
Saving Models
Models can be saved to a model database file (*.hdb) for reuse in AdeptSight.
To save models:
1. In the Models section of the Locator tool, click the 'Model Options' icon. See Figure 54.
2. From the drop-down menu, select Export Models.
3. This will save all the models that are currently in the model list, whether they are enabled or
disabled.
Enabling and Disabling Models
Models that are shown in the models list can be enabled or disabled for use by the Locator tool. When
the Locator tool is executed, it searches only for parts that match the Models that are enabled.
To enable or disable models:
1. Click in the check box to the left of the model name.
2. Alternatively, you can click the 'Model Options' icon and Enable All or Disable All models.
Related Topics
Creating Models in AdeptSight
Expert Model Edition Mode
Calibrating the Gripper Offset for Models
Calibrating the Gripper Offset for Models
For each object model, you must carry out a Gripper Offset calibration that will enable AdeptSight to
correctly pick up or move to objects.
What is the Gripper Offset?
• A Gripper Offset is a transform, expressed as: (x, y, z, yaw, pitch, roll).
• The Gripper Offset Calibration teaches the robot where it must grip a specified object.
• You must create and/or assign at least one gripper offset to a Model to enable the robot to
handle the part.
• More than one Gripper Offset can be added to a Model, for example if there are different
positions on an object to which the robot can move to handle the object.
If you do not carry out the Gripper Offset calibration:
• The VLOCATION parameter will return the position in the image (vision) frame of reference.
• The robot may not be able to manipulate the found object.
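To illustrate why a Gripper Offset is needed, a 2D slice of the idea can be sketched as composing the found part pose with the taught offset to get the grip location. This is a hypothetical illustration only; the actual Gripper Offset is a full (x, y, z, yaw, pitch, roll) transform created and assigned by the calibration wizard:

```python
import math

def grip_pose(part_x, part_y, part_rot_deg, off_x, off_y, off_rot_deg):
    """Compose a located part pose with a gripper offset (2D slice).

    The offset is expressed in the model's coordinate system, which is
    why the offset must be recalibrated whenever the model's frame of
    reference changes.
    """
    a = math.radians(part_rot_deg)
    gx = part_x + off_x * math.cos(a) - off_y * math.sin(a)
    gy = part_y + off_x * math.sin(a) + off_y * math.cos(a)
    return gx, gy, part_rot_deg + off_rot_deg

# A grip point taught 5 mm along the model X axis: when the part is
# found at (100, 40) rotated 180 degrees, the grip point flips with it:
print(grip_pose(100.0, 40.0, 180.0, 5.0, 0.0, 0.0))  # -> (95.0, 40.0, 180.0)
```

Without this correction, the robot would move to the model origin in the vision frame rather than to the taught grip point, which is why an uncalibrated model may not be handled correctly.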
When to Calibrate the Gripper Offset
Important:
• You must calibrate the gripper offset for every model you add to an application.
• You must recalibrate the gripper offset for a model whenever you make any changes to the
model’s coordinate system (frame of reference).
• You must recalibrate the gripper offset if you change the robot/controller pair used for the
application.
Launching the Gripper Offset Calibration
A Gripper Offset indicator appears to the right of each model in the list of models, as shown in Figure 55.
• An 'Information' icon indicates that no Gripper Offset has been calculated for the Model.
• A check mark indicates that at least one Gripper Offset has been calculated for the Model.
To launch the Gripper Offset calibration:
1. Select the Model from the list of models.
2. Select the 'Model options' icon. See Figure 55.
3. From the menu, select Gripper Offset > Wizard.
4. The Gripper Offset Manager opens, as shown in Figure 56.
5. In the Gripper Offset Manager select and apply existing gripper offsets to the Model, or create
one or more offsets for the object. See Using the Gripper Offset Manager for details.
6. You must run the Gripper Offset at least once for every Model you create.
Figure 55 Gripper Offset Indicator for Models (check mark: gripper offset calibrated; information icon: gripper offset not calibrated; the Gripper Offset Wizard is launched from the 'Model options' icon)
Carrying Out the Gripper Offset Wizard
The Gripper Offset Calibration is presented as a Wizard that walks through the steps required for
assigning Gripper offsets to a Model.
Before starting the Gripper Offset Wizard: Make sure you have on hand one or more objects of the type
defined by the Model.
To carry out a Gripper Offset calibration:
1. Launch the Wizard from the Gripper Offset Manager by clicking a 'Wizard' icon. See Figure 56.
2. Follow the instructions in the Wizard.
3. Once the Gripper Offset calibration is complete, the Gripper Offset indicator will display a check
mark beside the calibrated model in the Models list.
4. Repeat the Gripper Offset calibration for each model in the application.
Using the Gripper Offset Manager
The Gripper Offset Manager allows you to assign gripper offsets to a selected Model and launch the
Gripper Offset Calibration wizard.
The Gripper Offset Manager interface contains:
• A toolbar for carrying out various actions. See Figure 56 and the Gripper Offset Manager Toolbar section.
• The list of gripper offsets assigned to the current Model.
• A list of all gripper offsets defined in the system. You can add, remove, and change the order
of gripper offsets assigned.
Figure 56 Gripper Offset Manager (toolbar; list of all gripper offsets in the system; list of gripper offsets for the selected Model; Up/Down arrows change the order of items; Left/Right arrows move items between lists)
Gripper Offset Manager Toolbar
The functions available from the toolbar are:
• Create New Gripper Offset: Adds a new blank gripper offset to the global list of offsets.
• Calibrate New Gripper Offset: Adds a new blank gripper offset to the Model and starts the Gripper Offset Calibration.
• Recalibrate Gripper Offset: Starts the Gripper Offset Calibration for the selected gripper offset.
• Global Gripper Offset Manager: Opens the Global Gripper Offset Manager.
Gripper Offset Editor
The Gripper Offset Editor allows you to edit and create gripper offsets.
Figure 57 Gripper Offset Editor
Global Gripper Offset Manager
The Global Gripper Offset Manager displays the list of all Gripper offsets defined in the system,
and their values.
• Any existing gripper offsets can be assigned to existing models.
• From this window you can edit, remove and create Gripper Offsets.
Figure 58 Global Gripper Offset Manager
Related Topics
Creating Models in AdeptSight
Expert Model Edition Mode
Configuring a Color Locator Tool
The Locator can be configured to find and differentiate objects based on their color information. This is
useful when locating objects that are similar in shape but of different colors. The color that distinguishes
an object is defined when creating a model for the object: a custom shading area must be defined in the
Model. The Locator search process must also be configured to find the objects based on their shading,
through the Shading Consistency mode of the Instance Ordering parameter.
Basic Steps for Configuring a Color Locator
1. Create a Model.
2. In the Model, define a custom shading area to define the color of the object.
3. In the Advanced Parameters
• Verify that Processing Format is set to hsNative.
• Set Instance Ordering to hsShadingConsistency.
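The idea behind shading-based ordering can be sketched as follows. This is an illustrative approximation, not the AdeptSight implementation: candidate instances are ranked by how closely the mean color sampled in their shading area matches the color stored with the Model. All names below are hypothetical.

```python
def mean_color(pixels):
    """Average an iterable of (R, G, B) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def shading_distance(model_color, instance_pixels):
    """Euclidean distance between the model's stored shading color and the
    mean color sampled from an instance's shading area."""
    c = mean_color(instance_pixels)
    return sum((a - b) ** 2 for a, b in zip(model_color, c)) ** 0.5

# The model's shading area is red; instance A is reddish, instance B is
# blue, so A is ordered before B.
model = (200.0, 20.0, 20.0)
a = [(190, 25, 25), (205, 15, 20)]
b = [(20, 20, 200), (25, 15, 190)]
ranked = sorted([("A", a), ("B", b)], key=lambda t: shading_distance(model, t[1]))
print([name for name, _ in ranked])  # → ['A', 'B']
```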
Configuring a Color Shading Area in a Model
To enable the Locator to differentiate between objects based on their color, a custom color shading area
can be defined for each object in the model-edition process. This is done by defining an area in the
model called a "custom shading area". The color information in this area is stored as part of the model
and enables the Locator to use color information when locating objects.
To configure a color for a specific object:
1. Follow normal steps to create a model.
2. In Model Edition mode, open the Expert model editor parameters.
3. Enable (check) Use Custom Shading area.
4. Drag and resize the custom shading area bounding box, as shown in Figure 59.
5. The bounding box should cover an area of the typical, distinctive color of the object.
For example, avoid including shadows or light reflections in this area.
6. Apply settings to the Model, complete any other required parameters, and exit the Model
Editor.
Figure 59 Setting the Custom Shading Area in a Model (the custom shading area in the model defines the color of the object)
To configure the Locator to search for models based on their color:
1. Expand the Advanced Parameters section of the Locator interface.
2. Under Configuration > Processing Format, make sure that hsNative is selected.
3. Under Search parameters, set InstanceOrdering to hsShadingConsistency.
Configuring Advanced Parameters for Color
The Locator can be configured to differentiate between objects based on their color. In the model-building
process, a custom color shading area can be defined for each object. This shading area allows the
Locator to use color information when locating objects.
Shading Consistency
Shading Consistency orders instances according to the custom shading area created in the model. If no
Custom Shading Area is defined in the model, the Locator uses the entire model area for shading
analysis.
To enable color processing of models (i.e. models are recognized on the basis of their color) you must
set this parameter to hsShadingConsistency, as illustrated in Figure 60.
Processing Format
The Processing Format defines the format applied to process images provided by the camera. To process
color images, the ProcessingFormat parameter must be set to hsNative. This ensures that the
Locator is processing color images in their native color format, not in grey-scale mode.
Figure 60 Setting the Shading Consistency Mode in Advanced Locator Parameters
Configuring Locator Search Parameters
Search parameters provide constraints to restrict the Locator's search process, for example to a
specific range of poses or a specific number of instances to be located.
Scale
The scale of objects to be located can be set at a fixed Nominal value (default), or as a Range of scale
values.
• Enable Nominal to search for a specific scale factor. When a Nominal value is used, the
Locator tool does not compute the actual scale of instances. Instances are positioned using
the nominal value and the scale value returned for all found instances is the nominal value.
• The default setting for the scale factor is a Nominal value of 1, which applies to most
situations.
• If a Nominal value is used with objects that present a slight variation in scale, the objects
may possibly be recognized and positioned with reduced quality because their true scale
factor will not be measured. In such a case it is preferable to configure a narrow scale range,
such as +/- 2%, instead of a nominal value.
Using a wide range in scale can significantly slow down the search process. This range
should be configured to include only the scale factors that are actually expected for a
given application. The scale factor range is one of the parameters that has the biggest
impact on search speed.
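The narrow range suggested above can be expressed as minimum and maximum scale factors. A minimal sketch (the helper name is ours, not an AdeptSight property):

```python
def scale_range(nominal, tolerance_pct):
    """Return (min, max) scale factors for a nominal scale with a
    symmetric percentage tolerance, e.g. +/- 2%."""
    frac = tolerance_pct / 100.0
    return nominal * (1.0 - frac), nominal * (1.0 + frac)

# A narrow +/- 2% range around the default nominal scale of 1
print(scale_range(1.0, 2.0))
```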
Rotation
The rotation (orientation) of objects to be located can be set as a Range of orientation values (default)
or set at a fixed, Nominal orientation value.
• By default Range is selected, with a full search range from -180 to 180. This means that the
Locator will search for objects at all orientations.
• The rotation range spans counterclockwise from the specified minimum angle to the
maximum specified angle. Figure 61 illustrates the impact of selecting a minimum and
maximum angle.
• Enable Nominal to search for objects at a specific angle of rotation. When a Nominal value is
applied, the Locator tool does not compute the actual rotation of instances; instead the
instances are positioned using the Nominal rotation value.
• If you want to search for an instance at a Nominal rotation but need to compute its actual
rotation, disable the Nominal check box and enter a small range such as 89 to 91.
• If a nominal value is used with objects that present a slight variation in rotation, the objects
may possibly be recognized and positioned with reduced quality because their true rotation
will not be measured. In such a case it is preferable to configure a narrow rotation range of +/
- 1 degree instead of a nominal value.
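The counterclockwise minimum-to-maximum convention can be checked with a few lines of code. This is an illustrative sketch, not part of AdeptSight; note that a full -180 to 180 range would need special-casing, since the modular span below would wrap to zero.

```python
def in_rotation_range(angle, minimum, maximum):
    """True if angle (degrees) lies in the counterclockwise span running
    from the minimum angle to the maximum angle, with wraparound at 180."""
    span = (maximum - minimum) % 360.0
    return (angle - minimum) % 360.0 <= span

# Minimum 45, maximum 135 accepts 90 but not 0.
print(in_rotation_range(90.0, 45.0, 135.0))    # → True
print(in_rotation_range(0.0, 45.0, 135.0))     # → False
# Minimum 135, maximum 45 spans counterclockwise through 180 and accepts 180.
print(in_rotation_range(180.0, 135.0, 45.0))   # → True
```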
Figure 61 Rotation Range — Minimum to Maximum Angle (left: minimum 45°, maximum 135°; right: minimum 135°, maximum 45°; in each case the valid range of rotation spans counterclockwise from the minimum to the maximum angle)
Instances to Find
Instances to Find sets the maximum number of instances that can be searched for.
• You can set a numerical value from 1 to 19 or set to ALL.
• If you select ALL, the number of instances that can be found is theoretically unlimited.
However, to optimize search time, you should set this value to no more than the expected
number of instances.
• When the actual number of instances exceeds the Instances to Find value, the Locator tool
stops once it attains the set value.
Minimum Model Recognition
Min Model Recognition sets the minimum amount of matched contour required for the Locator
process to accept a valid object instance.
• Lowering this parameter can increase recognition of occluded instances but can also lead to
false recognitions.
• A higher value can help eliminate instances in which objects overlap.
Related Topics
Configuring Advanced Locator Parameters
Locator Tool Results
Configuring Advanced Locator Parameters
The Advanced Parameters section of the Locator tool interface provides access to advanced Locator
parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Locator uses the native format of images output
by the camera.
• hsGreyScale: When hsGreyScale is enabled, the Locator processes only the grey-scale
information in the input image regardless of the format in which the images are provided. This
can reduce the execution time when color processing is not required.
Display Results
The Display Results parameters specify how the results of the Locator are represented in the display area.
• The display is automatically refreshed when display settings are modified.
• The colors used to represent items in the display can be modified through the Environment
Options form.
Display Frames
When DisplayFrames is set to True, the frame for each located instance is represented by an X-Y axes
marker in the Results display.
Display Instances
When DisplayInstances is set to True, each located instance is represented in the Results display. An
instance is represented by the contour of the model that corresponds to the instance.
Display Grey-Scale Image
When DisplayGreyScaleImage is set to True, the last input grey-scale image is shown in the Results
display.
Display Detail Scene
When DisplayDetailScene is set to True, all detail-level contours that were found in the input image
are shown in the Results display. Detail level contours are the contours that are candidates for the
matching of detail-level features of a model.
Display Outline Scene
When DisplayOutlineScene is set to True, all outline-level contours that were found in the input image
are shown in the Results display. Outline level contours are the contours that are candidates for the
matching of contour-level features of a model.
Edge Detection
The Locator detects edges in the input images then uses the edges to generate a vectorized description
of the image, called a Scene. The Locator tool generates source contours on two coarseness levels:
Outline and Detail.
Edge Detection parameters modify the quality and quantity of contours that are generated from the
input Image.
Contrast Threshold
ContrastThreshold sets the minimum contrast needed for an edge to be detected in the input image.
The threshold value is expressed as the step in grey-level values required to detect contours.
This value can be set manually only when ContrastThresholdMode is set to FixedValue.
• Higher values reduce sensitivity to contrast. This reduces noise and the number of low-contrast edges.
• Lower values increase sensitivity and detect a greater number of edges, at the expense of adding
more noise. This may generate false detections and/or slow down the search process.
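A simplified 1-D sketch of what the threshold controls. Real edge detection works on 2-D image gradients; this is only an illustration of the grey-level step idea, and the function name is ours.

```python
def detect_edges(scanline, contrast_threshold):
    """Return indices where the grey-level step between neighboring pixels
    meets the contrast threshold (a 1-D stand-in for edge detection)."""
    return [i for i in range(1, len(scanline))
            if abs(scanline[i] - scanline[i - 1]) >= contrast_threshold]

line = [10, 12, 11, 200, 198, 90, 88]
print(detect_edges(line, 50))    # → [3, 5]  (two strong edges)
print(detect_edges(line, 150))   # → [3]     (higher threshold keeps only one)
```

Raising the threshold suppresses the weaker step, mirroring how higher ContrastThreshold values reduce the number of low-contrast edges.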
Contrast Threshold Mode
ContrastThresholdMode defines how the contrast threshold is set. The contrast threshold is the level of
sensitivity that is applied to the detection of contours in the input image. The contrast threshold can be
either Adaptive, Fixed or based on Models.
Adaptive thresholds set a sensitivity level based on image content. This provides flexibility to variations
in image lighting conditions and variations in contrast during the Search process.
• AdaptiveLowSensitivity sets a low sensitivity, adaptive threshold for detecting contours.
AdaptiveLowSensitivity detects strongly defined contours and eliminates noise, at the risk of
losing significant contour segments.
• AdaptiveNormalSensitivity sets a default sensitivity threshold for detecting contours.
• AdaptiveHighSensitivity detects a great amount of low-contrast contours and noise.
• FixedValue sets an absolute value for the sensitivity to contrast. A typical situation for the
use of a fixed value is a setting in which there is little variance in lighting conditions.
Model-based thresholds allow you to base the contrast threshold on active Models.
• ModelsMode sets the most sensitive mode, selected from the currently active Models.
• ModelsValue sets the most sensitive value, selected from the currently active Models.
• ModelValuesAndMode sets a choice of either the most sensitive mode, or the most sensitive
value, selected from the currently active Models.
Detail Level
The Detail Level is used to confirm recognition and refine the position of valid instances. Its coarseness
setting ranges from 1-16.
• Values can be set manually only if ParametersBasedOn is set to Custom.
• The higher the setting, the coarser the contour resolution.
• The DetailLevel value cannot exceed the OutlineLevel.
Outline Level
The Outline Level is used to rapidly identify potential instances of the object. Its coarseness setting
ranges from 1-16.
• Values can be set manually only if ParametersBasedOn is set to Custom.
• The higher the setting, the coarser the contour resolution.
• The OutlineLevel value must be higher than, or equal to, the DetailLevel.
Parameters Based On
For most applications, the ParametersBasedOn property should be set to AllModels. Custom contour
detection should only be used when the default values do not work correctly.
• When set to AllModels, the contour detection parameters are optimized by analyzing the
parameters that were used to build all the models that are currently active.
• When set to Custom, the contour detection parameters are set manually to an integer value.
Location
The Locator tool carries out its processes within a specific region of interest. By default, this region of
interest is the entire input image. However, the region of interest can be reduced to a smaller area, to
exclude certain areas of the workspace and/or to reduce the execution time.
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters section
gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Locator region of interest.
Rotation
Angle of rotation of the Locator region of interest.
UseEntireImage
When set to True, the entire input image is used for the search. When set to False, the tool searches
only within the user-defined region of interest.
Width
Width of the Locator region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 62 Location Properties of the Locator Region of Interest (X, Y center; Width; Height; Angle of Rotation)
Model
Model Disambiguation Enabled
When ModelDisambiguationEnabled is set to True (default), the Locator applies disambiguation to
discriminate between similar models and between similar hypotheses of a single object. When set to
False, the Locator does not apply disambiguation.
Model Optimizer Enabled
ModelOptimizerEnabled specifies if the models can be optimized interactively using the Model
Optimizer. When set to True, the models can be optimized. When set to False, the models cannot be
modified.
Percentage Of Points To Analyze
PercentageOfPointsToAnalyze sets the percentage of points on a model contour that are actually
used for the optimization process.
For example, when PercentageOfPointsToAnalyze is set to the default 50% value, one out of two
points are used. Increasing this value can increase the accuracy of the optimized model but incurs a
longer optimization time.
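The "one out of two points" behavior can be sketched as a simple subsampling step. The helper below is illustrative, not the AdeptSight implementation.

```python
def sample_contour_points(points, percentage):
    """Keep roughly `percentage` percent of the contour points by taking
    every n-th point, as PercentageOfPointsToAnalyze does conceptually."""
    step = max(1, round(100.0 / percentage))
    return points[::step]

contour = list(range(10))                        # ten contour points
print(len(sample_contour_points(contour, 50)))   # → 5  (one point out of two)
```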
Model-Based
A model-based Locator can be useful for precisely locating a feature or small part on a parent object. A
model-based Locator requires the input from another Locator tool in the application, called the "initial"
Locator. When the Locator is model-based, both scale and rotation ranges can be provided here instead
of configuring ranges in the Locator's Search parameters.
Scale Parameters
The defined scale range should normally be the same as the Scale constraint defined in the Search of
the initial Locator tool that is providing the Instance Scene. Providing these range values allows relative
use of the current tool's Scale constraints to achieve better performance.
If your application must find a sub-feature, on an instance of a "parent" object that may vary in scale,
the range specified in the Relative mode enables the model-based Locator to search for a sub-feature at
a nominal scale corresponding to the scale of the currently selected instance. For example, if the initial
Locator is configured to locate instances ranging from 0.5 to 2.0 in scale, and the Scale constraint of the
model-based Locator is enabled to a Nominal value of 1.0, a specified instance having a scale value of
0.75 will make the tool search only for sub-features of 0.75 in scale. If instead, the Scale constraint is
set to a range from 0.9 to 1.1, for the same 0.75 scale instance, the Locator will search for sub-features
scaled between 0.675 and 0.825.
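The worked example above can be reproduced with a small helper (illustrative only, not an AdeptSight call):

```python
def relative_scale_range(instance_scale, min_factor, max_factor):
    """Scale range searched for a sub-feature when
    ModelBasedScaleFactorMode is hsRelative: the configured factors are
    applied relative to the scale of the parent instance."""
    return instance_scale * min_factor, instance_scale * max_factor

# A parent instance at scale 0.75 with a configured range of 0.9 to 1.1
# yields a search range of 0.675 to 0.825, as in the example above.
print(relative_scale_range(0.75, 0.9, 1.1))
```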
Model Based Scale Factor Mode
ModelBasedScaleFactorMode selects the method used to manage the scale parameters of the
Locator's search.
• The hsAbsolute mode has no effect on the scale parameters of the Locator's search process.
It is useful for positioning objects, based on the position of an object found by an initial
Locator tool.
• The hsRelative mode optimizes search speed and robustness when you need to accurately
position sub-parts of an object, based on the position of the source object. In the hsRelative
mode, the Locator's Learn phase applies ModelBasedMinimumScaleFactor and
ModelBasedMaximumScaleFactor as the allowed scale range.
Model Based Maximum Scale Factor
ModelBasedMaximumScaleFactor sets the maximum scale allowed for the model-based Locator,
when ModelBasedScaleFactorMode is set to hsRelative.
Model Based Minimum Scale Factor
ModelBasedMinimumScaleFactor sets the minimum scale allowed for the model-based Locator, when
ModelBasedScaleFactorMode is set to hsRelative.
Rotation Parameters
The defined rotation range should normally be the same as the Rotation constraint defined in the Search
of the initial Locator tool that is providing the Instance Scene. Providing these range values allows
relative use of the current tool's Rotation constraints to achieve better performance.
If your application must find a sub-feature on a "parent" object that may vary in rotation, the range
specified in the Relative mode enables the model-based Locator to search for a sub-feature at a nominal
rotation corresponding to the rotation of the currently selected instance. For example, if the initial Locator is
configured to locate instances ranging from -180 to 180, and the Rotation constraint of the model-based
Locator is enabled to a Nominal value of 0, a specified instance at a rotation of 45 will make the tool
search only for sub-features that are rotated to 45 degrees. If instead the Rotation constraint is set to a
range from -10 to 10, for the same instance rotated at 45 degrees, the Locator will search for sub-features rotated between 35 and 55 degrees.
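The rotation example can be reproduced the same way (again, an illustrative helper rather than an AdeptSight call):

```python
def relative_rotation_range(instance_rotation, min_rot, max_rot):
    """Rotation range searched for a sub-feature when
    ModelBasedRotationMode is hsRelative: the configured range is applied
    around the rotation of the parent instance."""
    return instance_rotation + min_rot, instance_rotation + max_rot

# A parent instance rotated 45 degrees with a configured range of -10 to 10
# yields a search range of 35 to 55 degrees.
print(relative_rotation_range(45.0, -10.0, 10.0))  # → (35.0, 55.0)
```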
Model Based Rotation Mode
ModelBasedRotationMode selects the method used to manage the rotation parameters of the
Locator's search.
• The hsAbsolute mode has no effect on rotation parameters of the Locator's search process.
It is useful for positioning objects, based on the position of an object found by an initial
Locator tool.
• The hsRelative mode optimizes search speed and robustness when you need to accurately
position sub-parts of an object, based on the position of the source object. In the hsRelative
mode, the Locator's Learn phase applies ModelBasedMinimumRotation and
ModelBasedMaximumRotation as the allowed rotation range.
Model Based Maximum Rotation
ModelBasedMaximumRotation sets the maximum angle of rotation allowed for the model-based
Locator, when ModelBasedRotationMode is set to hsRelative.
Model Based Minimum Rotation
ModelBasedMinimumRotation sets the minimum angle of rotation allowed for the model-based
Locator, when ModelBasedRotationMode is set to hsRelative.
Output
Output Instance Scene Enabled
When OutputInstanceSceneEnabled is set to True, the instance scene is output in the runtime
database.
Output Detail Scene Enabled
When OutputDetailSceneEnabled is set to True, the Detail Contour Scene is output to the runtime
database.
Output Model Enabled
When OutputModelEnabled is set to True, the models enabled in the models database are output in
the runtime database.
For normal searches using both the Outline and Detail levels, the models at the Detail level are output in
the model view. When SearchBasedOnOutlineLevelOnly is True, the models at the Outline level are
output.
Output Outline Scene Enabled
When OutputOutlineSceneEnabled is set to True, the Outline Contour Scene is output to the runtime
database.
Output Mode
OutputMode sets the mode that is used to output object instances in the Instance Scene.
• Setting this property to hsMatchedModel or hsTransformedModel will usually increase the
processing time. It should be set to hsNoGraphics for optimal performance.
• For normal searches using both the Outline and Detail levels, the models at the Detail level
are used to draw the instances. When SearchBasedOnOutlineLevelOnly is True, the models at
the Outline level are used.
Results
Most Locator results can be viewed in the user interface, in the grid of results. Results can also be saved to
a text file called the results log. See Viewing Tool Results for more details.
Coordinate System
The Locator can output results in one of four coordinate systems: World, Object, Image, or Tool.
• hsWorld selects the World Coordinate System. Results are output and displayed in the
selected units with respect to the World coordinate system of the input image. If the current
camera was calibrated with the 2D Vision Calibration Wizard, the origin (0,0) is at the center
of the image. If the camera was calibrated through the Vision to Robot calibration, the origin
is the same as the Robot origin (Robot frame of reference).
• hsObject selects the Object Coordinate System. Results are expressed in calibrated units
with respect to the Object coordinate system, if and only if the tool is frame-based. The
Object coordinate system is the coordinate system of the Model.
• hsImage selects the Image Coordinate System. Results are output and displayed in pixels.
No scale, rotation, quality, or symmetry results are calculated in this reference system.
• hsTool selects the Tool Coordinate System. In this system of reference position is expressed
with respect to the Locator region of interest, where (0,0) is the center of the bounding box
that defines the region of interest. Results are returned in pixel values. No scale, rotation,
quality, or symmetry results are calculated in this reference system.
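The Tool frame convention can be illustrated with a minimal conversion helper. This sketch ignores the rotation of the region of interest and is not part of the AdeptSight API.

```python
def image_to_tool(px, py, roi_center_x, roi_center_y):
    """Convert pixel coordinates in the image frame to the Tool frame,
    whose origin (0, 0) is the center of the Locator region of interest.
    An axis-aligned ROI is assumed; a rotated ROI would also need a
    rotation of the offset vector."""
    return px - roi_center_x, py - roi_center_y

# A point at pixel (320, 240) inside an ROI centered at (300, 200):
print(image_to_tool(320, 240, 300, 200))  # → (20, 40)
```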
Instance Count
InstanceCount is the number of instances found by the Locator. This is the number of results,
and therefore the number of Frames, output by the Locator.
Learn Time
LearnTime is the time required by the Locator tool to learn, or to re-learn, models and parameters.
There is a Learn process at the first execution of the Locator tool after models are added or modified, or
after a modification of certain parameters.
Search Time
SearchTime is the time required to locate all object instances found by the Locator tool.
Messages
The Messages result provides information on the search process. Messages are provided as a message
number followed by a text description. To view messages, click the Browse button (...) to open the
messages window, such as the one shown in Figure 63.
Figure 63 The Messages Window
Search
Search parameters are constraints that restrict the Locator's search process.
Conformity Tolerance Constraints
Conformity Tolerance
ConformityTolerance defines the maximum allowable local deviation of instance contours from the
expected model contours. Its value corresponds to the maximum distance in calibrated units by which a
matched contour can deviate from either side of its expected position.
Portions of the contour that are not within the Conformity Tolerance range are not considered to be
valid. Only the contours within Conformity Tolerance are recognized and calculated for the Minimum
Model Recognition search constraint.
Figure 64 Conformity Tolerance (the conformity tolerance zone, in grey, applies to both sides of the model contour, in red; portions of the found object contour, in blue, that fall outside the tolerance zone are not valid)
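How Conformity Tolerance and Minimum Model Recognition interact can be sketched as follows. This is an illustrative model, not the actual Locator algorithm; the function and variable names are ours.

```python
def instance_is_valid(deviations, conformity_tolerance, min_recognition_pct):
    """Accept an instance if enough of its contour matches the model.

    `deviations` holds, for each contour point, its distance (in calibrated
    units) from the expected model contour.  Points within the conformity
    tolerance on either side count as matched; the instance is valid when
    the matched fraction reaches the Minimum Model Recognition percentage.
    """
    matched = sum(1 for d in deviations if abs(d) <= conformity_tolerance)
    return 100.0 * matched / len(deviations) >= min_recognition_pct

# 8 of 10 points deviate by 0.1 mm or less; with a 0.2 mm tolerance and a
# 70% minimum recognition, the instance is accepted.
devs = [0.05, 0.1, -0.1, 0.0, 0.08, -0.05, 0.1, 0.02, 0.5, -0.4]
print(instance_is_valid(devs, 0.2, 70.0))  # → True
```

Raising the minimum recognition to 90% would reject this partially occluded instance, matching the trade-off described above.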
To manually set ConformityTolerance, you must first set
UseDefaultConformityTolerance to False.
Default Conformity Tolerance
DefaultConformityTolerance is a read-only value that is computed by the Locator tool by analyzing
the calibration, the contour detection parameters, and the search parameters.
Use Default Conformity Tolerance
Disabling UseDefaultConformityTolerance allows you to manually modify the ConformityTolerance
value.
Conformity Tolerance Range
The ConformityToleranceRange defines the upper and lower limits for the Conformity Tolerance.
These limits are set by the parameters MinimumConformityTolerance and
MaximumConformityTolerance.
Instance Output Constraints
Instance Ordering
The InstanceOrdering parameter sets the order in which object instances are output.
To enable color processing of models (i.e. models are recognized on the basis of their
color) you must set this parameter to hsShadingConsistency.
• At the default Evidence setting, the instances are ordered according to their hypothesis
strength.
• Instances can be output in the order they appear in the image: LeftToRight, RightToLeft,
TopToBottom, and BottomToTop. This feature is particularly useful for pick-and-place
applications in which parts that are farther down a conveyor must be picked first.
• The Quality setting orders instances according to their MatchQuality. Instances having the
same MatchQuality are subsequently ordered by their FitQuality. This setting can
significantly increase the search time because the Locator tool cannot output instance results
until it has found and compared all instances to determine their order. The time required to
output the first instance corresponds to the total time needed to search the image and
analyze all the potential instances. The time for additional instances is zero since the search
process is already complete.
• ImageDistance orders instances according to their proximity to the point defined by
InstanceOrderingX and InstanceOrderingY. The X,Y coordinates of the point are
expressed in pixels.
• WorldDistance orders instances according to their proximity to the point defined by
InstanceOrderingX and InstanceOrderingY. The X,Y coordinates of the point are
expressed in the selected Length units.
• ShadingConsistency orders instances according to the custom shading area created in the
model. If no Custom Shading Area is defined in the model, the Locator uses the entire model
area for shading analysis.
This mode must be selected for the Locator to use color processing for models.
This mode is useful when the shading information, in addition to the normal contour
information, can assist in discriminating between very similar hypotheses. This is a
requirement for color processing of models and is also often used for BGA applications, as
illustrated in Figure 65.
Figure 65 Instance Ordering - Shading Consistency (with a custom shading area created in the Model, hypothesis A rates higher than hypothesis B with reference to the Model when sorting by shading consistency)
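A few of the ordering modes can be sketched with plain sorting. This is an illustrative approximation using hypothetical instance records, not the AdeptSight API.

```python
def order_instances(instances, mode, point=(0.0, 0.0)):
    """Order found instances as the InstanceOrdering modes would (a
    simplified sketch; each instance is a dict with 'x', 'y' and
    'evidence' keys)."""
    if mode == "Evidence":
        return sorted(instances, key=lambda i: -i["evidence"])
    if mode == "LeftToRight":
        return sorted(instances, key=lambda i: i["x"])
    if mode == "ImageDistance":
        px, py = point
        return sorted(instances,
                      key=lambda i: (i["x"] - px) ** 2 + (i["y"] - py) ** 2)
    raise ValueError(mode)

parts = [{"x": 30, "y": 5, "evidence": 0.7},
         {"x": 10, "y": 5, "evidence": 0.9}]
# For pick-and-place, output the parts in the order they appear in the image:
print([p["x"] for p in order_instances(parts, "LeftToRight")])  # → [10, 30]
```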
Output Symmetric Instances
The Output Symmetric Instances setting determines how the Locator process will handle symmetrical,
or nearly symmetrical, objects.
• When set to False, the search process will output only the first best quality instance of a
symmetric object.
• When set to True, the search process will output the results for all possible symmetries of
a symmetric object. This can significantly increase execution time when there are many
possible symmetries of an object; for example if the object is circular.
Timeout
Timeout sets a limit to the tool execution time. When the Timeout value is attained, the Locator tool
outputs only those instances that it has found up to that moment.
Minimum Clear Percentage
Minimum Clear Percentage sets the minimum percentage of the model bounding box area that must
be free of obstacles to consider an object instance as valid. To enable this property, Minimum Clear
Percentage Enabled must be set to True (1). Enabling Minimum Clear Percentage may significantly
increase the search time; it is intended for use in pick-and-place applications. When enabled, Minimum
Clear Percentage also activates the computation of the Clear Quality result for each instance.
General Search Constraints
Contrast Polarity
ContrastPolarity indicates the change in polarity between an object and its background. The reference
polarity is the polarity in the model image.
Figure 66 Contrast Polarity. The Model image defines the "Normal" polarity; the reverse polarity shown is caused by a change in background color.
• Select Normal to search for objects having the same contrast polarity as the model and its
background.
• Select Reverse if the polarity between object instances and the background is the inverse of
the polarity in the Model image; for example, dark object instances on a light background,
using a model created from a light object on a dark background.
• Select Normal & Reverse to enable the Locator to search for all cases that present either
Normal or Reverse polarity. This does not take into account cases where polarity is reversed at
various locations along the edges of an object.
• Select the Don't Care mode only in cases where there are local changes in polarity along an
object contour. A typical example is a case in which the background is unevenly colored:
striped, checkered, or spotted, as illustrated in Figure 67.
Figure 67 Contrast Polarity - 'Don't Care' Mode. Changes in polarity occur along the same edge of the object; Contrast Polarity must be set to Don't Care for this object to be detected.
Positioning Level
The PositioningLevel parameter allows you to modify the positioning accuracy. The default setting of 5
is the optimized and recommended setting for typical applications.
• PositioningLevel has only a slight impact on the execution speed.
• In applications where accuracy is not critical, decreasing the value can provide a slight
improvement in speed.
You can increase the positional accuracy of found instances by increasing the value towards 10.
Recognition Level
RecognitionLevel allows you to slightly modify the level of recognition effort. The default setting of 5 is
the optimized and recommended setting for typical applications.
• When changing the recognition effort, test your application to find the optimum speed at
which the process will still find all necessary objects within the image.
• If recognition effort is too low (quick) some instances may be ignored.
• If recognition effort is too high (exhaustive), your application will run in less than optimal
time.
• Recognition speed does not affect positioning accuracy.
Search Based On OutlineLevel Only
When SearchBasedOnOutlineLevelOnly is enabled, the Locator tool searches for object instances
using only the Outline Model and Outline Level contours.
SearchBasedOnOutlineLevelOnly should be enabled only for applications that do not require a high
positioning accuracy.
Locator Tool Results
The Locator outputs two types of results: a frame of reference, called an Instance, and Results that
provide information on each of the found Instances.
• Instances, which are a type of Frame result, can be used by other AdeptSight tools for
frame-based positioning. Instances found by the Locator are represented in the display interface.
• Results for each Instance found by the Locator tool are shown in the grid of results, below the
display, as illustrated in Figure 68.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the
performance of each tool. At each execution of the tool, the time, date, and results are appended to
the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
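The append-only behavior of the results log can be sketched as follows. The tab-separated line format and function names are illustrative only, not AdeptSight's actual .log format.

```python
import datetime

def format_log_line(results, stamp=None):
    # Each execution contributes one tab-separated line: date, time, results.
    if stamp is None:
        stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    return stamp + "\t" + "\t".join(str(r) for r in results)

def append_results(log_path, results):
    # "a" mode appends, so earlier executions are never overwritten.
    with open(log_path, "a") as f:
        f.write(format_log_line(results) + "\n")
```

Each call to `append_results` adds one timestamped line, mirroring how a new row is appended at every execution of the sequence.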
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents each found instance by its related model, as illustrated in Figure 68.
Figure 68 Locator Results showing Results for Multiple Objects. Each found object instance is represented by its related Model and coordinate system; the Model name, Frame# and ID# identify instances in the display and in the results grid.
Grid of Results
The grid of results presents the results for each instance found by the Locator tool. These results can be
saved to file by enabling the Results Log.
Description of Locator Results
The Locator outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Locator.
Frame
Frame identifies the number of the frame (or instance from another Locator tool) that provides the
positioning of the instance. This value is always 0 when the Locator is not frame-based.
ID
ID identifies instances in the order in which they were found by the Locator. This order is based on the
value of the Instance Ordering parameter.
Model
Model identifies the name of the model upon which the Locator tool based the recognition of the object
instance.
Scale
Scale is the ratio of the observed object size to its corresponding model size.
Rotation
The rotation of the object instance.
X
The X coordinate of the instance.
Y
The Y coordinate of the instance.
Fit Quality
The Fit Quality score ranges from 0 to 1, with 1 being the best quality. This value is the normalized
average error between the matched model contours and the actual contours detected in the input
image.
A value of 1 means that the average error is 0. Conversely, a value of 0 means that the average
matched error is equal to Conformity Tolerance.
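The two endpoint cases above suggest a linear normalization of the average error against the Conformity Tolerance. The sketch below assumes that linear relationship; the exact interpolation used by AdeptSight is not stated here.

```python
def fit_quality(average_error, conformity_tolerance):
    """Normalized fit score: 1.0 when the average matched error is 0,
    0.0 when it equals the Conformity Tolerance (assumed linear between)."""
    return max(0.0, 1.0 - average_error / conformity_tolerance)
```

An average error of half the tolerance would then yield a Fit Quality of 0.5.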
Match Quality
Match Quality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that 100% of the
model contours were successfully matched to the actual contours detected in the input image.
Clear Quality
Clear Quality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that the area of the
model bounding box corresponding to the found instance is completely clear of obstacles.
Symmetry of
The Symmetry of index value is the index number of the instance of which the given instance is a
symmetry.
Time
The Time result provides the time that was needed to recognize and locate a given instance. The time
needed to locate the first instance is usually longer because it includes all of the low-level image
preprocessing.
Using the Caliper Tool
The Caliper tool finds, locates, and measures the gap between one or more edge pairs on an object.
The Caliper uses pixel greylevel values within the region of interest to build the projections needed for edge
detection.
After the Caliper detects potential edges, the Caliper determines which edge pairs are valid by applying
the constraints that are configured for each edge pair. Finally, the Caliper scores and measures each
valid edge pair.
Basic Steps for Configuring a Caliper
1. Select the tool that will provide input images. See Input.
2. Position the Caliper tool. See Location.
3. Configure Pair Settings for each edge pair. See Configuring Caliper Settings.
4. Test and verify results. See Caliper Results.
5. Configure Advanced properties if required. See Configuring Advanced Caliper Parameters.
Input
The Input required by the Caliper is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Caliper.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 69 Positioning the Caliper Tool relative to a Frame. Position the bounding box so that edges are parallel to the Y-Axis of the box, and position the tool relative to the frame identified by a blue marker.
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Caliper is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Caliper must be placed on all frames output by the frame-provider tool, enable the All
Frames check box.
4. If the Caliper must only be applied to a single frame (output by the frame-provider tool), disable
the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Caliper.
Positioning the Caliper
Positioning the tool defines the region of interest in which the tool will find and measure edge pairs.
To position the Caliper:
1. Click Location. The Location dialog opens as shown in Figure 69. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. If the tool is frame-based, a blue marker indicates the frame provided by the frame-provider
tool (Frame Input). If there is more than one object in the image, make sure that you are
positioning the bounding box relative to the object identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
4. Important: Position the bounding box so that the Y-Axis is parallel to the edges that must be
detected. To rotate the bounding box, drag the X-Axis marker. To skew the bounding box, drag
the Y-Axis marker.
Before configuring the Caliper, execute the tool (or sequence) at least once
and verify in the display that the tool is being positioned correctly in the
image.
The display represents the Caliper as a green rectangle, with found edges
and caliper measure represented in red.
Related Topics
Configuring Caliper Settings
Configuring Caliper Settings
The Caliper can measure any number of edge pairs. When the Caliper is executed, it first applies
edge detection parameters to the entire region of interest. Then, the tool applies pair settings
constraints to determine which edge pairs are valid. Results are then calculated for each edge pair as well as for
individual edges in each edge pair.
As shown in Figure 70, the Pairs section contains a list of all the pairs that are configured for the current
Caliper tool. This list always contains at least one pair, which by default is called Pair(0).
From the Pairs list, you can:
• Access the configuration parameters for each pair.
• Add and remove edge pairs.
• Rename edge pairs.
Figure 70 Pairs List in the Caliper Interface
To access configuration parameters for an edge pair:
1. In the Pairs list, click on a pair to select it.
2. Click Edit. This opens the Pair Settings window for the selected pair.
3. See Configuring Pair Settings for details.
To add an edge pair:
1. Under the Pairs list, click the 'Add Pair' icon.
2. A pair is added with the default name: Pair(n).
3. The Pairs Settings window opens, ready for editing the new edge pair.
To remove an edge pair:
1. In the Pairs list, select the pair that must be removed.
2. Click the 'Remove Pair' icon.
To rename an edge pair:
1. In the Pairs list, double-click on the name of the pair to be renamed.
2. Type a new name for the edge pair. This will not affect the configuration parameters of the pair.
Configuring Pair Settings
When the Caliper is executed, the Caliper first applies edge detection constraints to the entire region of
interest. Then, the tool applies edge scoring constraints to determine which edges are valid for the
caliper measure. If only one valid edge is found, no caliper measure is output.
Pair Settings parameters set how the tool detects edges and determines which edge pairs are valid.
Before configuring the Caliper, execute the tool (or sequence) at least once
and verify in the display that the tool is being positioned correctly in the
image.
The display represents the Caliper as a green rectangle, with found edges
and caliper measure represented in red.
To configure edge pair settings:
1. Under the Pairs section of the interface, select a pair name in the list. The default name for a
first pair is Pair(0).
2. Click Edit.
3. The Pair Settings window opens, as shown in Figure 71. This window provides parameters for
each edge of the Caliper edge pair, named: First Edge and Second Edge.
4. Configure settings for each edge. Refer to sections below for help on configuring Pair
Settings, and using the display and function editor.
Figure 71 Configuring Pair Settings. Constraints are set individually for each edge; right-click in the display to show edge detection values; the graphical Function Editor is used to set Position constraints and Threshold constraints.
If the display in the Pair Settings window is blank, or the edges are not
properly placed, close the window and verify the following:
Are the Location parameters correct? The Y-axis of the tool must be parallel
to the edges you want to detect.
Was the tool executed after positioning? Execute the tool or
sequence at least once before opening the Pair Settings window.
Pair Settings
There are two basic types of constraints that affect the choice of valid edges: Polarity, and edge-score
constraints, which are based on the position and magnitude of the edges.
Polarity
Polarity corresponds to the change in light values, moving from left to right in the display, along the
X-Axis of the region of interest. The Caliper applies the Polarity constraint before applying edge-score
constraints.
Polarity does not affect the edge score; however, only edges that meet the selected Polarity
constraint are retained as valid edges, regardless of their scores.
• Dark to Light will only accept edges occurring at transitions from a dark area to a light area.
• Light to Dark will only accept edges occurring at transitions from a light area to a dark area.
• Either will accept any edge, regardless of its polarity.
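The filtering behavior of these three modes can be sketched as below, assuming polarity is judged from the mean greylevel on either side of a transition; the function names and mode strings are hypothetical, not AdeptSight identifiers.

```python
def edge_polarity(left_grey, right_grey):
    """Classify a transition from the mean greylevel left of the edge to
    the mean greylevel right of it (moving along the tool X-axis)."""
    if right_grey > left_grey:
        return "dark_to_light"
    if right_grey < left_grey:
        return "light_to_dark"
    return "none"

def passes_polarity(left_grey, right_grey, mode):
    # 'either' accepts any edge regardless of its polarity.
    return mode == "either" or edge_polarity(left_grey, right_grey) == mode
```

A transition from greylevel 40 to 200 passes the Dark to Light and Either modes but is rejected by Light to Dark, no matter how strong the edge is.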
Figure 72 Edge Polarity. The slope of the blue projection curve indicates changes in polarity (light to dark versus dark to light).
Constraints
There are two types of constraints: Position and Magnitude. You can set the Caliper to use only one
constraint type, or both. A graphical function editor is provided for viewing and setting each type of
constraint.
• If only one constraint is selected, edges are scored based only on the selected constraint.
• If both constraints are selected, each constraint accounts for 50% of the edge score.
Magnitude Constraint
The Magnitude constraint is based on edge values relative to the Magnitude Threshold, which is
represented in the display by two red lines. Edges having a magnitude equal to, or exceeding, the
Magnitude Threshold are attributed a score of 1. Edges with values below the Magnitude Threshold
receive a score ranging from 0 to 0.999, according to a manually set magnitude constraint function.
The Magnitude Threshold value can be modified in the Advanced Parameters section of the tool
interface. See Magnitude Constraint.
• A Magnitude constraint must be defined individually for each edge.
• Figure 73 shows examples of two different setups for a magnitude constraint function.
To set a Magnitude Constraint:
1. In the drop-down list above the function editor, select First Edge Magnitude Constraints or
Second Edge Magnitude Constraints.
2. In the Function Editor, use the mouse to drag handles and set the magnitude limits. See
examples in Figure 73.
Figure 73 Setting the Magnitude Constraint in the Function Editor. Two example setups:
• Edge Score = 1.0 if Magnitude > 95; Edge Score = 0.0 if Magnitude < 95
• Edge Score = 1.0 if Magnitude > 130; Edge Score = [0.01 to 0.99] for 130 > Magnitude > 50; Edge Score = 0.0 if Magnitude < 50
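The second setup in Figure 73 amounts to a piecewise-linear scoring function. The sketch below assumes a straight ramp between the two handles; in the actual function editor you can shape this curve freely by dragging the handles.

```python
def magnitude_score(magnitude, low, high):
    """Piecewise-linear magnitude constraint: 1.0 at or above 'high',
    0.0 below 'low', linear ramp in between (assumed shape)."""
    if magnitude >= high:
        return 1.0
    if magnitude < low:
        return 0.0
    return (magnitude - low) / (high - low)
```

With `low=50` and `high=130`, a magnitude of 90 falls midway up the ramp and scores 0.5.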
Position Constraint
The Position constraint restricts the Caliper's search for edges to a specific zone of the region
of interest.
• It is possible to graphically set a position constraint function when the approximate position of an
edge is known beforehand. This is useful for scoring an edge based on its offset from the expected
position.
• Values in the constraint Function Editor indicate relative distance in the region of interest, where
0.0 is the leftmost position and 1.0 is the rightmost position.
To set a Position Constraint:
1. In the drop-down list above the function editor, select First Edge Position Constraints or
Second Edge Position Constraints.
2. In the Function Editor, use the mouse to drag handles and set the position limits. See examples
in Figure 74.
The physical position in the function editor corresponds to the same physical position in the
display.
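A position constraint that scores edges by their offset from an expected relative position could look like the sketch below. The triangular shape and the tolerance parameter are illustrative choices only, since the function editor allows arbitrary curves.

```python
def position_score(x, expected, tolerance):
    """Score an edge by its offset from an expected relative position.

    x, expected: relative positions in the region of interest
    (0.0 = leftmost, 1.0 = rightmost). 'tolerance' is the offset at
    which the score falls to 0 (hypothetical triangular constraint).
    """
    offset = abs(x - expected)
    return max(0.0, 1.0 - offset / tolerance)
```

An edge exactly at the expected position scores 1.0, and the score decays linearly to 0 as the edge drifts away.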
Figure 74 Setting the Position Constraint in the Function Editor. Physical position in the display maps directly to physical position in the function editor; a value of 0.4 represents 40% of the distance from the left edge of the region of interest.
Score Threshold
The score threshold sets the minimum acceptable score for a valid edge. The Caliper will disregard edges
that obtain a score lower than the Score Threshold.
• Scores attributed by the Caliper for constraints range from 0 to 1.
• If both Position and Magnitude constraints are enabled, each constraint accounts for 50%
of the total edge score.
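The weighting and threshold test described above can be summarized in a short sketch; the function names are hypothetical.

```python
def edge_score(position_score, magnitude_score,
               use_position=True, use_magnitude=True):
    """Combine constraint scores: each enabled constraint contributes
    equally, so with both enabled each accounts for 50% of the score."""
    scores = []
    if use_position:
        scores.append(position_score)
    if use_magnitude:
        scores.append(magnitude_score)
    return sum(scores) / len(scores)

def is_valid_edge(score, score_threshold):
    # Edges scoring below the Score Threshold are disregarded.
    return score >= score_threshold
```

For example, a perfect position score (1.0) combined with a weak magnitude score (0.5) gives an overall score of 0.75, which would pass a threshold of 0.7 but fail 0.8.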
Related Topics
Configuring Advanced Caliper Parameters
Caliper Results
The Caliper outputs two types of results: Frames and Results that provide information on each of the
found edges.
• Frames output by the Caliper can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for edges found by the Caliper tool are shown in the grid of results, below the display,
as illustrated in Figure 75.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the
performance of each tool. At each execution of the tool, the time, date, and results are appended to
the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents each frame output by the Caliper, as well as the Caliper measure, edge
pair results and results for each edge in an edge pair.
Figure 75 Representation of Caliper Results in Display and Results Grid. The rectangle represents the output frame; red lines represent the caliper measures.
Grid of Results
The grid of results presents the results for all caliper measures found by the Caliper tool. Results include
the score and position for each edge in an edge pair. These results can be saved to file by enabling the
Results Log.
Description of Caliper Results
The Caliper outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Caliper. Elapsed Time is not visible in the results
grid, but it is output to the results log for each iteration of the Caliper.
Frame
Frame identifies the number of the frame output by the Caliper tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Pair
The name of the edge pair, as it appears in the Pairs list. Each pair instance outputs a frame that can be
used by a frame-based tool for which the Caliper is a frame-provider.
Score
Score is the calculated score, between 0 and 1, for the edge pair. The score is calculated according to
the constraint functions defined for the pair. If both Position and Magnitude constraints are enabled,
each constraint accounts for 50% of the score.
Each edge of the pair is also scored individually, in a similar manner. See Edge1/Edge2 results below.
Size
Size is the Caliper measure, which is the calculated distance between the pair of edges.
Position X
Position X is the X coordinate of the center point of the caliper measure, at the midpoint of the edge
pair.
Position Y
Position Y is the Y coordinate of the center point of the caliper measure, at the midpoint of the edge
pair.
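Given the midpoints of the two edges of a pair, the Size, Position X and Position Y results relate as in this sketch (hypothetical helper, standing in for the tool's internal geometry):

```python
import math

def caliper_measure(edge1, edge2):
    """Size and center of a caliper measure from two edge midpoints.

    edge1, edge2: (x, y) midpoints of the two edges of the pair.
    Returns (size, center_x, center_y): size is the distance between
    the edges; the center is the midpoint of the measure.
    """
    (x1, y1), (x2, y2) = edge1, edge2
    size = math.hypot(x2 - x1, y2 - y1)       # Euclidean gap between edges
    return size, (x1 + x2) / 2.0, (y1 + y2) / 2.0
```

Edges at (0, 0) and (3, 4) give a Size of 5.0 with the measure centered at (1.5, 2.0).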
Rotation
The angle of rotation for the edge pair.
Edge 1/Edge 2 Score
The score of the individual edge, calculated according to the defined constraints.
Edge 1/Edge 2 Position X
The X coordinate of the edge, at the midpoint of the edge segment.
Edge 1/Edge 2 Position Y
The Y coordinate of the edge, at the midpoint of the edge segment.
Edge 1/Edge 2 Rotation
The angle of rotation for the edge.
Edge 1/Edge 2 Position Score
Position score for the edge, calculated according to the Position constraint function.
Edge 1/Edge 2 Magnitude
The calculated Magnitude value for the edge.
Edge 1/Edge 2 Magnitude Score
Magnitude score for the edge, calculated according to the Magnitude constraint function.
Configuring Advanced Caliper Parameters
The Advanced Parameters section of the Caliper tool interface provides access to advanced Caliper
parameters and properties.
Configuration Parameters
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Caliper processes images in the format in which
they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Caliper processes only the grey-scale
information in the input image, regardless of the format in which the images are provided.
This can reduce the execution time when color processing is not required.
Edge Detection Parameters
Edge Detection settings configure the parameters that the Caliper will use to find potential edges in the
area of interest. The display represents the Caliper region of interest and provides information to assist
in configuring Edge Detection parameters.
Edge Magnitude Threshold
EdgeMagnitudeThreshold sets the acceptable magnitude value for potential edges. This value is
expressed as an absolute value; there are two magnitude lines: an upper (positive) threshold and lower
(negative) threshold.
Edge Magnitude expresses the strength of a potential edge. The (green) magnitude curve represents
magnitude values across the area of interest. Potential edges must have a magnitude above the upper
threshold, or below the lower threshold. See Figure 76.
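Because the threshold is an absolute value, a point on the magnitude curve qualifies whether it crosses the upper (positive) or the lower (negative) line. A sketch over a sampled 1-D magnitude curve (hypothetical helper name):

```python
def potential_edges(magnitude_curve, threshold):
    """Indices on the magnitude curve that qualify as potential edges.

    The threshold is applied as an absolute value, so points above
    +threshold and points below -threshold both qualify.
    """
    return [i for i, m in enumerate(magnitude_curve) if abs(m) >= threshold]
```

For the curve [5, 120, -130, 40] with a threshold of 100, both the positive peak at index 1 and the negative peak at index 2 are potential edges.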
Figure 76 Interpreting the Magnitude Threshold in the display area. The magnitude curve runs across the area of interest between the upper and lower magnitude thresholds; potential edges are shown as yellow dotted lines.
Filter Half-Width
The filtering process attenuates peaks in the magnitude curve that are caused by noise.
EdgeFilterHalfWidth should be set to a value approximately equivalent to the width of the edge, in
pixels. An incorrect value can cause edges to be incorrectly detected.
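One way to picture the half-width parameter is as a moving-average window of 2 × half-width + 1 samples over the magnitude curve. The actual filter used by the tool is not documented here, so this is only an analogy for how widening the window attenuates isolated noise peaks:

```python
def smooth(curve, half_width):
    """Average each sample with its neighbours over a window of
    2*half_width + 1 samples, attenuating isolated noise peaks.
    Windows are clipped at the ends of the curve."""
    n = len(curve)
    out = []
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        window = curve[lo:hi]
        out.append(sum(window) / len(window))
    return out
```

A lone spike of 9 in an otherwise flat curve is spread to a height of 3 by a half-width of 1, making it less likely to cross the magnitude threshold.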
Frame Transform Parameters
The Scale To Instance parameter is applicable only to a Caliper that is frame-based, and for which the
Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is configured to
locate parts of varying scale, the Scale To Instance parameter determines the effect of the scaled
instances on the Caliper.
Scale To Instance
When ScaleToInstance is True, the Caliper region of interest is resized and positioned relative to the
change in scale of the input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Caliper ignores the scale and builds the frame relative to the input frame
without adapting to the change in scale.
Location Parameters
Tool Position Parameters
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters section
gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Caliper region of interest.
Width
Width of the Caliper region of interest.
Rotation
Angle of rotation of the Caliper region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 77 Location Properties of the Caliper Region of Interest (X, Y, Width, Height, Angle of Rotation)
Tool Sampling Parameters
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting SamplingStepCustomEnabled to True
and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies whether bilinear interpolation is used to sample the image before it is
analyzed by the tool.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
True (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the input image area that is bounded by the tool region of
interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default), the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
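The speed/precision tradeoff can be pictured by counting samples: since the sampling step is the height and width of a sampled pixel, a larger step means fewer samples (faster, less precise) and a smaller step means more samples (slower, more precise). The helper below is illustrative, not an AdeptSight API.

```python
def sampled_size(roi_width, roi_height, sampling_step):
    """Approximate number of samples taken across the region of interest
    for a given sampling step (each sample covers step x step units)."""
    return (int(roi_width // sampling_step), int(roi_height // sampling_step))
```

Doubling the sampling step over a 100 x 50 region cuts the sample grid from 100 x 50 to 50 x 25, roughly quartering the work.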
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Edge Count
EdgeCount indicates the number of valid edges that were found.
Using the Edge Locator Tool
The Edge Locator tool finds, locates, and measures the position of one or more edges on an object.
The Edge Locator uses pixel greylevel values to detect edges found within the region of interest. Once
potential edges have been located, the Edge Locator applies the constraints to determine which edges
are valid.
The Edge Locator determines the position of one or more edges; it does not measure the length of lines
detected in the region of interest. To extrapolate and measure a line on an object, use the Edge Finder
tool.
Basic Steps for Configuring an Edge Locator
1. Select the tool that will provide input images. See Input.
2. Position the Edge Locator tool. See Location.
3. Configure edge detection settings. See Configuring Edge Locator Settings.
4. Test and verify results. See Edge Locator Results.
5. Configure Advanced properties if required. See Configuring Advanced Edge Locator Parameters.
Input
The Input required by the Edge Locator is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Edge Locator.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 78 Positioning the Edge Locator Tool relative to a Frame. Position the bounding box so that edges are parallel to the Y-Axis of the box, and position the tool relative to the frame identified by a blue marker.
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Edge Locator is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Edge Locator must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Edge Locator must be applied to only a single frame (output by the frame-provider tool),
disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Edge Locator.
Positioning the Edge Locator
Positioning the tool defines the area of the image that will be processed by the Edge Locator. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Edge Locator:
1. Click Location. The Location dialog opens as shown in Figure 78. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
4. Important: Position the bounding box so that its Y-axis is parallel to the edges that must be
detected. To rotate the bounding box, drag the X-axis marker. To skew the bounding box, drag
the Y-axis marker.
Before configuring the Edge Locator, execute the tool (or sequence) at least
once and verify in the display that the tool is being positioned correctly in the
image.
The display represents the Edge Locator as a green rectangle, with found
edges represented in red.
Related Topics
Configuring Edge Locator Settings
Configuring Edge Locator Settings
When the Edge Locator is executed, it first applies edge detection parameters to the entire region of
interest. Then, the tool applies edge scoring constraints to determine which edges are output as valid
edges.
Edge Settings parameters set how the tool detects edges and determines which edges are valid.
Before configuring the Edge Locator, execute the tool (or sequence) at least
once and verify in the display that the tool is being positioned correctly in the
image.
The display represents the Edge Locator as a green rectangle, with found
edges represented in red.
To configure edge detection parameters:
1. Under the Edges section of the interface, click Configure.
2. The Edge Settings window opens, as shown in Figure 79. This window provides edge detection
settings and constraints, as well as visual aids for configuring edge location settings.
3. Refer to sections below for help on configuring edge settings, and using the display and
function editor.
Figure 79 The Edge Settings Window (right-click in the display to show edge detection values; the graphical function editor is used for setting Position constraints and Threshold constraints)
If the display in the Edge Settings window is blank, or the edges are not
properly placed, close the window and verify the following:
Are the Location parameters correct? The Y-axis of the tool must be parallel
to the edges you want to detect.
Was the tool executed after positioning it? Execute the tool or
sequence at least once before opening the Edge Settings window.
Edge Detection
Edge Detection settings configure the parameters that the Edge Locator will use to find potential edges
in the area of interest. The display represents the Edge Locator region of interest and provides
information to assist in configuring Edge Detection parameters.
Magnitude Threshold
The Magnitude Threshold sets the acceptable magnitude value for potential edges. This value is
expressed as an absolute value; there are two magnitude lines: an upper (positive) threshold and lower
(negative) threshold.
Edge Magnitude expresses the strength of a potential edge. The (green) magnitude curve represents
magnitude values across the area of interest. Potential edges must have a magnitude greater than the
upper threshold, or lower than the lower threshold. See Figure 80.
Figure 80 Interpreting the Magnitude Threshold in the display area (the magnitude curve crosses the upper and lower magnitude thresholds; potential edges are shown as yellow dotted lines)
Filter Half-Width
The filtering process attenuates peaks in the magnitude curve that are caused by noise. Filter Half-Width
should be set to a value approximately equivalent to the width of the edge, in pixels. An incorrect value
can cause edges to be incorrectly detected.
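The interaction between the magnitude curve, the Magnitude Threshold, and the Filter Half-Width can be sketched as follows. This is an illustrative Python sketch of the general approach, not AdeptSight code; the function name, the simple box filter, and the peak test are assumptions for illustration only.

```python
def find_potential_edges(magnitude, threshold, half_width):
    """Smooth the magnitude curve with a box filter, then keep local
    peaks whose absolute magnitude exceeds the threshold (upper or
    lower magnitude threshold line)."""
    n = len(magnitude)
    # Box filter of width 2*half_width + 1 attenuates noise peaks.
    smoothed = []
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = magnitude[lo:hi]
        smoothed.append(sum(window) / len(window))
    # A potential edge is a local extremum beyond either threshold line.
    edges = []
    for i in range(1, n - 1):
        m = smoothed[i]
        is_peak = abs(m) >= abs(smoothed[i - 1]) and abs(m) >= abs(smoothed[i + 1])
        if is_peak and abs(m) > threshold:
            edges.append((i, m))  # position along the X-axis, signed magnitude
    return edges

curve = [0, 2, 1, 40, 80, 35, 0, -3, -60, -90, -50, -2, 0]
print(find_potential_edges(curve, threshold=50, half_width=1))
```

Increasing the half-width smooths away narrow noise peaks, at the cost of blurring edges that are narrower than the filter, which is why the half-width should roughly match the edge width in pixels.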
Edge Score
The Edge Locator scores potential edges according to the constraints set for edges. The scoring method
restricts the Edge Locator’s search so that only results for valid edges are returned.
Two basic types of constraints affect the choice of valid edges: Polarity, and edge-score
Constraints, which are based on the position and magnitude of the edges.
Polarity
Polarity corresponds to the change in light values, moving from left to right in the display, along the X-axis in the region of interest. The Edge Locator applies the Polarity constraint before applying edge-score Constraints.
Polarity does not affect the edge score; however, only edges that meet the selected Polarity
constraint are output as valid edges, regardless of their scores.
• Dark to Light will only accept edges occurring at transitions from a dark area to a light area.
• Light to Dark will only accept edges occurring at transitions from a light area to a dark area.
• Either will accept any edge, regardless of its polarity.
Figure 81 Edge Polarity (the slope of the blue projection curve indicates changes in polarity: light to dark, and dark to light)
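The Polarity constraint described above can be sketched as a simple filter. This is an illustrative Python sketch, not AdeptSight code; the assumption that a positive magnitude corresponds to a dark-to-light (rising) transition, and negative to light-to-dark, is for illustration only.

```python
def filter_by_polarity(edges, polarity):
    """Keep only edges whose transition direction matches the Polarity
    constraint. Each edge is (position, signed_magnitude); a positive
    magnitude is assumed to mean a dark-to-light transition (rising
    projection curve), and a negative one a light-to-dark transition."""
    if polarity == "either":
        return list(edges)
    if polarity == "dark_to_light":
        return [e for e in edges if e[1] > 0]
    if polarity == "light_to_dark":
        return [e for e in edges if e[1] < 0]
    raise ValueError(f"unknown polarity: {polarity}")

edges = [(12, 85.0), (47, -120.0), (63, 60.0)]
print(filter_by_polarity(edges, "light_to_dark"))
```

Note that this filtering happens before scoring: rejected edges never receive a score at all, which matches the rule that Polarity overrides the edge score.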
Constraints
There are two types of constraints: Position and Magnitude. You can set the Edge Locator to use only
one constraint type or both. The graphical function editor is provided for viewing and setting each type
of constraint.
• If only one constraint is selected, edges are scored based only on the selected constraint.
• If both constraints are selected, each constraint accounts for 50% of the edge score.
Magnitude Constraint
The Magnitude constraint is based on edge values relative to the Magnitude Threshold. Edges having
a magnitude equal to, or exceeding the Magnitude Threshold, are attributed a score of 1. Edges with
values below the Magnitude Threshold receive a score ranging from 0 to 0.999, according to a
manually set magnitude constraint function.
• The Magnitude Constraint is applied globally to all edges detected by the Edge Locator.
• Figure 82 shows two different setups for a magnitude constraint function.
To set the Magnitude Constraint:
1. In the drop-down list above the function editor, select Magnitude Constraints.
2. In the Function Editor, use the mouse to drag handles and set the Magnitude limits. See
examples in Figure 82.
Figure 82 Setting the Magnitude Constraint in the Function Editor. Two example setups:
• Edge Score = 1.0 if Magnitude > 95; Edge Score = 0.0 if Magnitude < 95
• Edge Score = 1.0 if Magnitude > 130; Edge Score = [0.01 to 0.99] for 130 > Magnitude > 50; Edge Score = 0.0 if Magnitude < 50
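A constraint function of this kind behaves like a piecewise-linear interpolation between the handle points dragged in the function editor. The following Python sketch is illustrative, not AdeptSight code; the helper name and the (x, score) handle representation are assumptions.

```python
def constraint_score(value, points):
    """Evaluate a piecewise-linear constraint function defined by
    (x, score) handle points, similar to the handles dragged in the
    function editor. Values beyond the outermost handles take the
    nearest handle's score."""
    points = sorted(points)
    if value <= points[0][0]:
        return points[0][1]
    if value >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)

# A ramp like the second setup in Figure 82: 0 below 50, 1 above 130.
magnitude_constraint = [(50, 0.0), (130, 1.0)]
print(constraint_score(90.0, magnitude_constraint))  # halfway up the ramp: 0.5
```

The same shape of function applies to the Position constraint, except that its x-axis is the relative position (0.0 to 1.0) across the region of interest rather than an edge magnitude.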
Position Constraint
The Position constraint restricts the Edge Locator’s search for edges to a specific zone of the region
of interest.
• It is possible to graphically set a position constraint function when the approximate position of an
edge is known beforehand. This is useful for scoring an edge based on its offset from the expected
position.
• Values in the constraint function editor indicate relative distance in the region of interest, where
0.0 is the leftmost position and 1.0 is the rightmost position.
To set the Position Constraint:
1. In the drop-down list above the function editor, select Position Constraints.
2. In the Function Editor, use the mouse to drag handles and set the Position limits. See examples
in Figure 83.
• The position in the function editor corresponds to the same position in the display.
Figure 83 Setting the Position Constraint in the Function Editor (physical position in the display directly maps to physical position in the function editor; the handle value shown represents 87.5% of the distance from the left edge of the region of interest)
Score Threshold
The score threshold sets the minimum acceptable score for a valid edge. The Edge Locator will disregard
edges that obtain a score lower than the Score Threshold.
• Scores attributed by the Edge Locator for constraints range from 0 to 1.
• If both Position and Magnitude constraints are enabled, each constraint accounts for 50%
of the total edge score.
Sort Results
You can enable the Sort Results check box to sort the located edges in descending order of score values.
By default, Sort Results is not enabled and edges are output in the same left-to-right order in which
they appear on the projection curve.
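The scoring, thresholding, and sorting behavior described above can be sketched as follows. This illustrative Python sketch is not AdeptSight code; the dictionary representation of an edge and the function name are assumptions.

```python
def score_and_select(edges, score_threshold, sort_results=False):
    """Combine the enabled constraint scores (50% each when both are
    enabled), drop edges below the Score Threshold, and optionally
    sort the survivors by descending score. Each edge is a dict
    carrying the per-constraint scores already computed by the tool."""
    valid = []
    for edge in edges:
        scores = [edge[k] for k in ("position_score", "magnitude_score")
                  if k in edge]
        edge["score"] = sum(scores) / len(scores)  # 50/50 split if both present
        if edge["score"] >= score_threshold:
            valid.append(edge)
    if sort_results:
        valid.sort(key=lambda e: e["score"], reverse=True)
    return valid

edges = [
    {"x": 10, "position_score": 0.9, "magnitude_score": 0.5},
    {"x": 55, "position_score": 1.0, "magnitude_score": 1.0},
    {"x": 80, "position_score": 0.2, "magnitude_score": 0.1},
]
for e in score_and_select(edges, score_threshold=0.5, sort_results=True):
    print(e["x"], e["score"])
```

With Sort Results disabled, the surviving edges would instead keep their left-to-right order along the projection curve.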
Edge Locator Results
The Edge Locator outputs two types of results: Frames and Results that provide information on each of
the found edges.
• Frames output by the Edge Locator can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for edges found by the Edge Locator tool are shown in the grid of results, below the
display, as illustrated in Figure 84.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance
of each tool. At each execution of the tool, the time, date, and results are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
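The append-only logging behavior described in the steps above can be sketched as follows. This is an illustrative Python sketch, not the AdeptSight implementation; the column layout and file format are assumptions for illustration.

```python
from datetime import datetime

def append_results(log_path, results):
    """Append a timestamped block of edge results to the results log.
    Each result is a (frame, edge, score, x, y) tuple; the tab-separated
    column layout here is only an assumption for illustration."""
    with open(log_path, "a", encoding="ascii") as log:
        log.write(datetime.now().strftime("%Y-%m-%d %H:%M:%S") + "\n")
        for frame, edge, score, x, y in results:
            log.write(f"{frame}\t{edge}\t{score:.3f}\t{x:.3f}\t{y:.3f}\n")

append_results("edge_locator.log",
               [(0, 0, 0.981, 12.402, 35.118),
                (0, 1, 0.874, 12.391, 48.277)])
```

Because the file is opened in append mode, each execution adds a new timestamped block rather than overwriting earlier results, matching the behavior described above.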
Viewing Results
The results for each execution of the tool are represented in the display window and in the grid of results.
Results Display
The Results display represents each frame output by the Edge Locator, as well as the edges found in
each frame.
Figure 84 Representation of Edge Locator Results in Display and Results Grid (green rectangles represent output frames; red lines represent found edges; the frame number helps to identify edge results in the grid, which lists results for all valid edges found by the Edge Locator)
Grid of Results
The grid of results presents the results for all edges found by the Edge Locator tool. Results include the score
and position for each edge. These results can be saved to file by enabling the Results Log.
Description of Edge Locator Results
The Edge Locator outputs the following results.
Elapsed Time
The Elapsed Time is the total execution time of the Edge Locator. Elapsed Time is not visible in the
results grid, but it is output to the results log for each iteration of the Edge Locator.
Frame
Frame identifies the number of the frame output by the Edge Locator tool. If the tool is frame-based,
this number corresponds to the input frame that provided the positioning.
Edge
Identification number of the edge. Enabling Edge Sort affects the order of edge numbering. Each edge
outputs a frame that can be used by a frame-based tool for which the Edge Locator is a frame-provider.
Score
Score is the calculated score, between 0 and 1, for each edge. The score is calculated according to the
defined constraint functions. If both Position and Magnitude constraints are enabled, each constraint
accounts for 50% of the score.
Position X
The X coordinate of the center point for each edge segment.
Position Y
The Y coordinate of the center point for each edge segment.
Rotation
Rotation shows the angle for the edge.
Position Score
Position Score for the edge, calculated according to the Position Constraint function.
Magnitude
The Magnitude of the edge indicates its peak value in the magnitude curve.
Magnitude Score
Magnitude Score for the edge, calculated according to the Magnitude Constraint function.
Configuring Advanced Edge Locator Parameters
The Advanced Parameters section of the Edge Locator tool interface provides access to advanced
Edge Locator parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Edge Locator processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Edge Locator processes only the greyscale information in the input image, regardless of the format in which the images are
provided. This can reduce the execution time when color processing is not required.
Frame Transform
The Scale To Instance parameter is applicable only to an Edge Locator that is frame-based, and for
which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Edge Locator.
Scale to Instance
When ScaleToInstance is True, the Edge Locator region of interest is resized and positioned relative to
the change in scale of the Input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Edge Locator ignores the scale and builds the frame relative to the input
frame without adapting to the change in scale.
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Edge Locator region of interest.
Width
Width of the Edge Locator region of interest.
Rotation
Angle of rotation of the Edge Locator region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 85 Location Properties of the Edge Locator Region of Interest (Width, Height, X-Y center position, and Angle of Rotation)
Tool Sampling
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to achieve the required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting SamplingStepCustomEnabled to True
and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
True (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where the speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the input image that is bounded by the tool region of
interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default), the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
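The interaction between these sampling parameters can be sketched as follows. This illustrative Python sketch is not AdeptSight code; the parameter names mirror the properties described above, and the sample-count model is a simplification for illustration.

```python
def effective_sampling_step(step_default, step_custom, custom_enabled):
    """Return the sampling step the tool would apply: the custom value
    when SamplingStepCustomEnabled is True, otherwise the computed
    default (SamplingStepDefault)."""
    return step_custom if custom_enabled else step_default

def sample_count(width, height, step):
    """Approximate number of samples taken over the region of interest:
    a larger step means fewer samples, so faster but less precise
    execution; a smaller step means the opposite."""
    return int(width / step) * int(height / step)

step = effective_sampling_step(step_default=0.25, step_custom=0.5,
                               custom_enabled=True)
print(step, sample_count(width=40.0, height=10.0, step=step))
```

This makes the speed/precision tradeoff concrete: doubling the sampling step roughly quarters the number of samples processed.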
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Edge Count
EdgeCount indicates the number of valid edges that were found.
Sort Results Enabled
SortResultsEnabled specifies if edges are sorted in descending order of score values. When set to
False (default) edges are sorted in order of their location within the region of interest. When True, edges
are sorted in the order of their score, from highest to lowest.
Using the Arc Caliper Tool
The Arc Caliper tool finds, locates, and measures the gap between one or more edge pairs on a circular
object. Edges can be disposed in a radial or an annular orientation.
The Arc Caliper uses pixel greylevel values within the region of interest to build projections needed for edge
detection.
After the Arc Caliper detects potential edges, the Arc Caliper determines which edge pairs are valid by
applying the constraints that are configured for each edge pair. Finally, the Arc Caliper scores and
measures each valid edge pair.
Basic Steps for Configuring an Arc Caliper
1. Select the tool that will provide input images. See Input.
2. Position the Arc Caliper tool. See Location.
3. Configure Pair Settings for each edge pair. See Configuring Arc Caliper Settings.
4. Test and verify results. See Arc Caliper Results.
5. Configure Advanced properties if required. See Configuring Advanced Arc Caliper Parameters.
Input
The Input required by the Arc Caliper is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Arc Caliper.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 86 Positioning the Arc Caliper Tool relative to a Frame (position the tool relative to the frame that is identified by a blue marker)
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Arc Caliper is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Arc Caliper must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Arc Caliper must be applied to only a single frame (output by the frame-provider tool),
disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning and Modifying the Arc Caliper Region of Interest.
Positioning and Modifying the Arc Caliper Region of Interest
Positioning the tool defines the area of the image that will be processed by the Arc Caliper. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest in the image
display.
The tool’s region of interest is bounded by a Sector defined by the parameters Position X, Position Y,
Opening, Thickness, Rotation, and Radius.
To position and resize the sector region of interest in the display:
1. Click Location. The Location dialog opens as shown in Figure 86. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding sector.
2. If the tool is frame-based, a blue marker indicates the frame provided by the frame-provider
tool (Frame Input). If there is more than one object in the image, make sure that you are
positioning the bounding box relative to the object identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding sector in the
display. Values are relative to the origin of the frame-provider tool (blue marker) if the tool is
frame-based. If the tool is image-based, values are relative to the image origin.
• To move the sector, drag its border or its origin, which is located at the intersection of its two
bounding radii, shown with dotted lines. Handle A in Figure 87.
• To adjust the radius, drag the center that is located at the intersection between the bisector
and the median annulus. Handle F in Figure 87.
• To set the thickness, drag any of the four resizing handles located at the intersections between
the two bounding radii and annuli, or drag the two resizing handles located at the
intersections of the bisector and the two bounding annuli. Handles B, D, E, G, H, and J in
Figure 87.
• To set the opening, drag any of the four resizing handles located at intersections between
its two bounding radii and annuli. Handles B, D, H, and J in Figure 87.
• To set the rotation, drag either of the intersection points between the median annulus and
the two bounding radii. Handles C, and I in Figure 87.
Figure 87 Illustration of Location Parameters for a Sector Region of Interest (green letters A through J indicate points where the sector can be dragged with the mouse; labeled parameters include the X-Y Position (origin), Radius, Rotation, Thickness, Opening (angle), the bisector, the annuli, and the Y-axis of the reference frame)
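The geometric relationship between the sector parameters can be sketched as follows. This illustrative Python sketch is not AdeptSight code; it assumes the radius is measured to the median annulus and that rotation and opening are expressed in degrees, and computes the four corner handles (B, D, H, and J in Figure 87).

```python
import math

def sector_corners(x, y, radius, rotation, opening, thickness):
    """Corner points of a sector region of interest: the intersections
    of the two bounding radii with the inner and outer annuli. The
    radius is assumed to reach the median annulus, so the annuli sit
    at radius +/- thickness / 2; rotation and opening are in degrees,
    with the opening spread symmetrically about the rotation angle."""
    corners = []
    for angle in (rotation - opening / 2, rotation + opening / 2):
        a = math.radians(angle)
        for r in (radius - thickness / 2, radius + thickness / 2):
            corners.append((x + r * math.cos(a), y + r * math.sin(a)))
    return corners

for cx, cy in sector_corners(x=0.0, y=0.0, radius=50.0,
                             rotation=90.0, opening=60.0, thickness=20.0):
    print(round(cx, 2), round(cy, 2))
```

Dragging a corner handle in the display amounts to changing one or more of these parameters while the others are held fixed.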
Before configuring the Arc Caliper, execute the tool (or sequence) at least
once and verify in the display that the tool is being positioned correctly in the
image.
The display represents the Arc Caliper region of interest in green and any
found edges in red.
Related Topics
Configuring Arc Caliper Settings
Configuring Arc Caliper Settings
The Arc Caliper can measure any number of pairs. When the Caliper is executed, the Arc Caliper first
applies edge detection parameters to the entire region of interest. Then, the tool applies pair settings
constraints to determine which edge pairs are valid. Results are then calculated for each edge pair as well as for
individual edges in each edge pair.
As shown in Figure 88, the Pairs section contains a list of all the pairs that are configured for the current
Caliper tool. This list always contains at least one pair, which by default is called Pair(0).
From the Pairs list, you can:
• Access the configuration parameters for each pair.
• Add and remove edge pairs.
• Rename edge pairs.
Figure 88 Pairs List in the Arc Caliper Interface (right-click on a pair name to edit the name)
To access configuration parameters for an edge pair:
1. In the Pairs list, click on a pair to select it.
2. Click Edit. This opens the Pair Settings window for the selected pair.
3. See Configuring Pair Settings for details.
To add an edge pair:
1. Under the Pairs list, click the 'Add Pair' icon.
2. A pair is added with the default name: Pair(n).
3. The Pairs Settings window opens, ready for editing the new edge pair.
To remove an edge pair:
1. In the Pairs list, select the pair that must be removed.
2. Click the 'Remove Pair' icon.
To rename an edge pair:
1. In the Pairs list, double-click on the name of the pair to be renamed.
2. Type a new name for the edge pair. This will not affect the configuration parameters of the pair.
Configuring Pair Settings
When the Arc Caliper is executed, the Arc Caliper first applies edge detection constraints to the entire
region of interest. Then, the tool applies edge scoring constraints to determine which edges are valid for
the caliper measure. If only one valid edge is found, no caliper measure is output.
Pair Settings parameters set how the tool detects edges and determines which edge pairs are valid.
Before configuring the Arc Caliper, execute the tool (or sequence) at least once and
verify in the display that the tool is being positioned correctly in the image.
The display represents the Arc Caliper as a green rectangle, with found edges and
caliper measure represented in red.
To configure edge pair settings:
1. Under the Pairs section of the interface, select a pair name in the list. The default name for a
first pair is Pair(0).
2. Click Edit.
3. The Pair Settings window opens, as shown in Figure 89. This window provides parameters for
each edge of the Caliper edge pair, named: First Edge and Second Edge.
4. Configure settings for each edge. Refer to sections below for help on configuring Pair
Settings, and using the display and function editor.
Figure 89 Configuring Pair Settings (constraints are set individually for each edge; right-click in the display to show edge detection values; the display is a mapped representation of the sector-shaped region of interest; the graphical function editor is used for setting Position constraints and Threshold constraints)
If the display in the Pair Settings window is blank, or the edges are not
properly placed, close the window and verify the following:
Is the correct Projection Mode enabled in the Advanced Parameters section?
Choose between Annular (hsAnnular) and Radial (hsRadial).
Was the tool executed after positioning the tool? Execute the tool or
sequence at least once before opening the Pair Settings window.
Pair Settings
Two basic types of constraints affect the choice of valid edges: Polarity, and edge-score
Constraints, which are based on the position and magnitude of the edges.
Polarity
Polarity corresponds to the change in light values, moving from left to right in the display, along the X-axis in the region of interest. The Arc Caliper applies the Polarity constraint before applying edge-score
Constraints.
Polarity does not affect the edge score; however, only edges that meet the selected Polarity
constraint are retained as valid edges, regardless of their scores.
• Dark to Light will only accept edges occurring at transitions from a dark area to a light area.
• Light to Dark will only accept edges occurring at transitions from a light area to a dark area.
• Either will accept any edge, regardless of its polarity.
Figure 90 Edge Polarity (the slope of the blue projection curve indicates changes in polarity: light to dark, and dark to light)
Constraints
There are two types of constraints: Position and Magnitude. You can set the Arc Caliper to use only
one constraint type or both. The graphical function editor is provided for viewing and setting each type
of constraint.
• If only one constraint is selected, edges are scored based only on the selected constraint.
• If both constraints are selected, each constraint accounts for 50% of the edge score.
Magnitude Constraint
The Magnitude constraint is based on edge values relative to the Magnitude Threshold, which is
represented in the display by two red lines.
Edges having a magnitude equal to, or exceeding the Magnitude Threshold, are attributed a score of 1.
Edges with values below the Magnitude Threshold receive a score ranging from 0 to 0.999, according to
a manually set magnitude constraint function.
The Magnitude Threshold value can be modified in the Advanced Parameters section of the tool
interface. See Magnitude Constraint.
• A Magnitude constraint must be defined individually for each edge.
• Figure 91 shows examples of two different setups for a magnitude constraint function.
To set a Magnitude Constraint:
1. In the drop-down list above the function editor, select First Edge Magnitude Constraints or
Second Edge Magnitude Constraints.
2. In the Function Editor, use the mouse to drag handles and set the magnitude limits. See
examples in Figure 91.
Figure 91 Setting the Magnitude Constraint in the Function Editor (two example setups: a step function where Edge Score = 1.0 if Magnitude > 95 and 0.0 if Magnitude < 95; and a ramp where Edge Score = 1.0 if Magnitude > 130, between 0.01 and 0.99 for 50 < Magnitude < 130, and 0.0 if Magnitude < 50)
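The ramp-style setup in Figure 91 can be sketched as a piecewise-linear function. The shape and the threshold values below mirror the figure; the function name is hypothetical and this is not the AdeptSight implementation:

```python
def magnitude_score(magnitude, low=50.0, high=130.0):
    """Piecewise-linear magnitude constraint in the spirit of the
    second setup in Figure 91: score 0 below `low`, 1 at or above
    `high`, rising linearly in between."""
    if magnitude >= high:
        return 1.0
    if magnitude <= low:
        return 0.0
    return (magnitude - low) / (high - low)

print(magnitude_score(90.0))   # halfway up the ramp: 0.5
print(magnitude_score(140.0))  # above the upper limit: 1.0
```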
Position Constraint
The Position constraint restricts the Arc Caliper’s search for edges to a specific zone of the region of interest.
• It is possible to graphically set a position constraint function when the approximate position of an
edge is known beforehand. This is useful for scoring an edge based on its offset from the expected
position.
• Values in the Constraint function Editor indicate relative distance in the region of interest where
0.0 is the leftmost position and 1.0 is the rightmost position.
To set a Position Constraint:
1. In the drop-down list above the function editor, select First Edge Position Constraints or
Second Edge Position Constraints.
2. In the Function Editor, use the mouse to drag handles and set the position limits. See examples
in Figure 92.
The physical position in the function editor corresponds to the same physical position in the
display.
Figure 92 Setting the Position Constraint in the Function Editor (physical position in the display maps directly to physical position in the function editor; in this example the Score = 1 region is set around a value representing 40% of the distance from the left edge of the region of interest)
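Scoring an edge by its offset from an expected relative position, as described above, can be sketched with a triangular function. The expected position of 0.4 echoes the example in Figure 92; the function shape and names are hypothetical, not AdeptSight's:

```python
def position_score(pos, expected=0.4, tolerance=0.2):
    """Triangular position constraint: full score at the expected
    relative position (0.0 = leftmost, 1.0 = rightmost in the region
    of interest), falling linearly to 0 at +/- tolerance."""
    offset = abs(pos - expected)
    if offset >= tolerance:
        return 0.0
    return 1.0 - offset / tolerance

print(position_score(0.4))  # edge exactly at the expected position: 1.0
print(position_score(0.9))  # far outside the tolerance band: 0.0
```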
Score Threshold
The score threshold sets the minimum acceptable score for a valid edge. The Arc Caliper will disregard
edges that obtain a score lower than the Score Threshold.
• Scores attributed by the Arc Caliper for constraints range from 0 to 1.
• If both Position and Magnitude constraints are enabled, each constraint accounts for 50%
of the total edge score.
Related Topics
Positioning and Modifying the Arc Caliper Region of Interest
Configuring Advanced Arc Caliper Parameters
Arc Caliper Results
The Arc Caliper outputs two types of results: Frames and Results that provide information on each of the
found edges.
• Frames output by the Arc Caliper can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for edges found by the Arc Caliper tool are shown in the grid of results, below the
display, as illustrated in Figure 93.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance of each tool. At each execution of the tool, the time, date, and results are appended to the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
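AdeptSight writes the results log itself; the sketch below only illustrates the append-per-execution behavior the steps above describe. The field layout and function name are hypothetical, not AdeptSight's log format:

```python
import datetime
import os
import tempfile

def append_results(log_path, results):
    """Append one execution's time, date, and result values to a log
    file; each execution adds lines rather than overwriting the file."""
    stamp = datetime.datetime.now().isoformat(sep=" ", timespec="seconds")
    with open(log_path, "a", encoding="utf-8") as log:
        for name, value in results:
            log.write(f"{stamp}\t{name}\t{value}\n")

# Two executions append to, and never overwrite, the same log file.
fd, log_path = tempfile.mkstemp(suffix=".log")
os.close(fd)
append_results(log_path, [("Score", 0.92), ("Size", 14.7)])
append_results(log_path, [("Score", 0.88), ("Size", 14.9)])
```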
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents each frame output by the Arc Caliper, as well as the Arc Caliper measure,
edge pair results and results for each edge in an edge pair.
Figure 93 Representation of Arc Caliper Results in Display and Results Grid (a cursor icon illustrates the XY position of the Caliper result)
Grid of Results
The grid of results presents the results for all caliper measures found by the Arc Caliper tool. Results
include the score and position for each edge in an edge pair. These results can be saved to file by
enabling the Results Log.
Description of Arc Caliper Results
The Arc Caliper outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Arc Caliper. Elapsed Time is not visible in the results grid but it is output to the results log for each iteration of the Arc Caliper.
Frame
Frame identifies the number of the frame output by the Arc Caliper tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Pair
The name of the edge pair, as it appears in the Pairs list. Each pair instance outputs a frame that can be
used by a frame-based tool for which the Arc Caliper is a frame-provider.
Score
Score is the calculated score, between 0 and 1, for the edge pair. The score is calculated according to the defined constraint functions. If both Position and Magnitude constraints are enabled, each constraint accounts for 50% of the score.
Each edge of the pair is also scored individually, in a similar manner. See Edge1/Edge2 results below.
Size
Size is the Caliper measure, which is the calculated distance between the pair of edges.
Position X
Position X is the X coordinate of the center point of the caliper measure, at the midpoint of the edge
pair.
Position Y
Position Y is the Y coordinate of the center point of the caliper measure, at the midpoint of the edge
pair.
Rotation
The angle of rotation for the edge pair.
Edge 1/Edge 2 Score
The score of the individual edge, calculated according to the defined constraints.
Edge 1/Edge 2 Position X
The X coordinate of the edge, at the midpoint of the edge segment.
Edge 1/Edge 2 Position Y
The Y coordinate of the edge, at the midpoint of the edge segment.
Edge 1/Edge 2 Rotation
The angle of rotation for the edge.
Edge 1/Edge 2 Position Score
Position score for the edge, calculated according to the Position constraint function.
Edge 1/Edge 2 Magnitude
The calculated Magnitude value for the edge.
Edge 1/Edge 2 Magnitude Score
Magnitude score for the edge, calculated according to the Magnitude constraint function.
Configuring Advanced Arc Caliper Parameters
The Advanced Parameters section of the Arc Caliper tool interface provides access to advanced Arc
Caliper parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Arc Caliper processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Arc Caliper processes only the grey-scale
information in the input image, regardless of the format in which the images are provided.
This can reduce the execution time when color processing is not required.
Frame Transform
The Scale to Instance parameter is applicable only to an Arc Caliper that is frame-based, and for which
the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Arc Caliper.
Scale To Instance
When ScaleToInstance is True, the Arc Caliper region of interest is resized and positioned relative to the change in scale of the Input frame. This is the recommended setting for most cases. When ScaleToInstance is False, the Arc Caliper ignores the scale and builds the frame relative to the input frame without adapting to the change in scale.
Edge Detection
Edge Detection settings configure the parameters that the Arc Caliper will use to find potential edges in
the area of interest. The display represents the Arc Caliper region of interest and provides information to
assist in configuring Edge Detection parameters.
Magnitude Threshold
The Magnitude Threshold sets the acceptable magnitude value for potential edges. This value is
expressed as an absolute value; there are two magnitude lines: an upper (positive) threshold and lower
(negative) threshold.
Edge Magnitude expresses the strength of a potential edge. The (green) magnitude curve, represents
magnitude values across the area of interest. Potential edges must have a magnitude above the upper
threshold, or below the lower threshold. See Figure 94.
Figure 94 Interpreting the Magnitude Threshold in the display area (the magnitude curve is shown between the upper and lower magnitude thresholds; potential edges are shown as yellow dotted lines)
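The two-sided threshold test described above can be sketched as follows. This is an illustrative Python fragment with hypothetical names, not the tool's internal edge detection:

```python
def candidate_edge_indices(magnitudes, threshold=40.0):
    """Indices along the projection where the magnitude curve rises
    above the upper (+threshold) line or falls below the lower
    (-threshold) line, as described for the Magnitude Threshold."""
    return [i for i, m in enumerate(magnitudes)
            if m >= threshold or m <= -threshold]

# One light-to-dark peak (negative) and one dark-to-light peak (positive):
curve = [2.0, -5.0, -60.0, -8.0, 3.0, 55.0, 4.0]
print(candidate_edge_indices(curve))  # [2, 5]
```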
Filter Half-Width
The filtering process attenuates peaks in the magnitude curve that are caused by noise. Filter HalfWidth should be set to a value approximately equivalent to the width of the edge, in pixels. An incorrect
value can cause edges to be incorrectly detected.
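The effect of such filtering can be illustrated with a simple moving average whose window is derived from a half-width. The tool's actual filter is not documented here, so this stand-in is purely illustrative:

```python
def smooth(curve, half_width=2):
    """Moving-average filter of window 2*half_width + 1, an
    illustrative stand-in for the noise filtering described above."""
    out = []
    for i in range(len(curve)):
        lo = max(0, i - half_width)
        hi = min(len(curve), i + half_width + 1)
        window = curve[lo:hi]
        out.append(sum(window) / len(window))
    return out

spike = [0.0, 0.0, 100.0, 0.0, 0.0]  # a one-sample noise peak
print(smooth(spike))                 # the spike's center drops from 100 to 20
```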
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
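The unit-selection behavior above amounts to an optional pixel-to-millimeter conversion. The sketch below is illustrative; the mm-per-pixel factor and function name are hypothetical, and the real conversion comes from the system calibration:

```python
def result_value(pixels, mm_per_pixel=0.05, calibrated=True):
    """Express a tool result in millimeters (CalibratedUnitsEnabled
    True, the default) or raw pixels (False)."""
    return pixels * mm_per_pixel if calibrated else pixels

print(result_value(200.0))                    # 10.0 (millimeters)
print(result_value(200.0, calibrated=False))  # 200.0 (pixels)
```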
Opening
Angle between the two bounding radii of the tool's sector.
Radius
The radius of the tool corresponds to the radius of the median annulus of the tool’s sector.
Thickness
Distance between its two bounding annuli of the tool sector.
Rotation
Angle of rotation of the Arc Caliper region of interest.
Width
Width of the Arc Caliper region of interest.
X
X coordinate of the origin of the Tool.
Y
Y coordinate of the origin of the Tool.
Figure 95 Illustration of Tool Position for a Sector-type Region of Interest (labeled features: X-Y Position (origin), Radius, Thickness, Opening angle, Rotation, Bisector, Annulus, the Y axis of the reference frame, and handle points A through J)
Tool Sampling
Sampling refers to the procedure used by the tool for gathering values within the portion of the input image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and precision.
For specific applications where a more appropriate tradeoff between speed and precision must be established, the sampling step can be modified by setting SamplingStepCustomEnabled to True and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies whether bilinear interpolation is used to sample the image before it is analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to true (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in applications where speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in calibrated units, of a pixel in the image. This default sampling step is usually recommended. SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step. To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True, enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default) the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
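The interaction of the three sampling-step parameters above can be summarized in one line of selection logic. This is a hypothetical helper for illustration, not the AdeptSight API:

```python
def effective_sampling_step(default_step, custom_step, custom_enabled=False):
    """Select the sampling step per the parameters above: the optimal
    SamplingStepDefault unless SamplingStepCustomEnabled is True, in
    which case SamplingStepCustom is applied."""
    return custom_step if custom_enabled else default_step

print(effective_sampling_step(1.0, 2.0))                       # 1.0 (default step)
print(effective_sampling_step(1.0, 2.0, custom_enabled=True))  # 2.0 (coarser: faster, less precise)
```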
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool (hsTool).
Edge Count
EdgeCount indicates the number of valid edges that were found.
Using the Arc Edge Locator Tool
The Arc Edge Locator tool finds, locates, and measures the position of one or more edges on a circular object. Edges can lie in a radial or an annular arrangement.
The Arc Edge Locator uses pixel greylevel values within the region of interest to build the projections needed for edge detection. Once potential edges have been detected, the tool applies the configured constraints to determine which edges are valid. Finally, the Arc Edge Locator scores and measures each valid edge.
The Arc Edge Locator determines the position of one or more edges; it does not measure the length of lines detected in the region of interest. To extrapolate and measure a line on an object, use the Edge Finder tool.
Basic Steps for Configuring an Arc Edge Locator
1. Select the tool that will provide input images. See Input.
2. Position the Arc Edge Locator tool. See Location.
3. Configure edge detection settings. See Configuring Arc Edge Locator Settings.
4. Test and verify results. See Arc Edge Locator Results.
5. Configure Advanced properties if required. Configuring Advanced Arc Edge Locator
Parameters.
Input
The Input required by the Arc Edge Locator is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Arc Edge
Locator.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 96 Positioning the Arc Edge Locator Tool relative to a Frame (position the tool relative to the frame that is identified by a blue marker)
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Arc Edge Locator is positioned relative to a frame of
reference provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Arc Edge Locator must be placed on all frames output by the frame-provider tool, enable
the All Frames check box.
4. If the Arc Edge Locator must only be applied to a single frame (output by the frame-provider
tool), disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning and Modifying the Arc Edge Locator Region of Interest.
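The frame-selection logic in the steps above can be sketched as follows. This is an illustrative Python fragment with hypothetical names, not the AdeptSight API:

```python
def frames_to_process(provider_frames, all_frames=False, frame_index=0):
    """Apply the tool to every frame output by the frame-provider
    (All Frames enabled) or to one frame selected by its 0-based
    index, per the steps above."""
    if all_frames:
        return list(provider_frames)
    return [provider_frames[frame_index]]

frames = ["frame 0", "frame 1", "frame 2"]
print(frames_to_process(frames, all_frames=True))  # all three frames
print(frames_to_process(frames))                   # ['frame 0'] (default index 0)
```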
AdeptSight 2.0 - User Guide
155
Using the Arc Edge Locator Tool
Positioning and Modifying the Arc Edge Locator Region of Interest
Positioning the tool defines the area of the image that will be processed by the Arc Edge Locator.
Location parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest in the image
display.
The tool’s region of interest is bounded by a Sector defined by the parameters Position X, Position Y,
Opening, Thickness, Rotation, and Radius.
To position and resize the sector region of interest in the display:
1. Click Location. The Location dialog opens as shown in Figure 96. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding sector.
2. If the tool is frame-based, a blue marker indicates the frame provided by the frame-provider
tool (Frame Input). If there is more than one object in the image, make sure that you are
positioning the bounding box relative to the object identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding sector in the
display. Values are relative to the origin of the frame-provider tool (blue marker) if the tool is
frame-based. If the tool is image-based, values are relative to the image origin.
• To move the sector, drag its border or its origin, which is located at the intersection of its two
bounding radii, shown with dotted lines. Handle A in Figure 97.
• To adjust the radius, drag the center that is located at the intersection between the bisector
and the median annulus. Handle F in Figure 97.
• To set the thickness, drag any of the four resizing handles located at the intersections between
the two bounding radii and annuli, or drag the two resizing handles located at the
intersections of the bisector and the two bounding annuli. Handles B, D, E, G, H, and J in
Figure 97.
• To set the opening, drag any of the four resizing handles located at intersections between
its two bounding radii and annuli. Handles B, D, H, and J in Figure 97.
• To rotate the sector, drag either of the intersection points between the median annulus and
the two bounding radii. Handles C, and I in Figure 97.
Figure 97 Illustration of Location Parameters for a Sector Region of Interest (labeled features: X-Y Position at A, Radius, Thickness, Opening angle, Rotation, Bisector, Annulus, and the Y axis of the reference frame; letters A through J indicate points where the sector can be dragged with the mouse in the display)
Before configuring the Arc Edge Locator, execute the tool (or sequence) at
least once and verify in the display that the tool is being positioned correctly
in the image.
The display represents the Arc Edge Locator in green and found edges in red.
Related Topics
Configuring Arc Edge Locator Settings
Configuring Arc Edge Locator Settings
When the Arc Edge Locator is executed, it first applies edge detection parameters to the entire region of interest. Then, the tool applies edge scoring constraints to determine which edges are output as valid edges.
Edge Settings parameters set how the tool detects edges and determines which edges are valid.
Before configuring the Arc Edge Locator, execute the tool (or sequence) at least once
and verify in the display that the tool is being positioned correctly in the image.
To configure edge detection parameters:
1. Under the Edges section of the interface, click Configure.
2. The Edge Settings window opens, as shown in Figure 98. This window provides edge detection
settings and constraints, as well as visual aids for configuring edge location settings.
3. Refer to sections below for help on configuring edge settings, and using the display and
function editor.
Figure 98 The Edge Settings Window (right-click in the display to show edge detection values; a graphical Function Editor is provided for setting Position constraints and Threshold constraints)
If the display in the Edge Settings window is blank, or the edges are not
properly placed, close the window and verify the following:
Is the correct Projection Mode enabled in the Advanced Parameters section?
Choose between Annular (hsAnnular) and Radial (hsRadial).
Was the tool executed after positioning it? Execute the tool or
sequence at least once before opening the Edge Settings window.
Edge Detection
Edge Detection settings configure the parameters that the Arc Edge Locator will use to find potential
edges in the area of interest. The display represents the Arc Edge Locator region of interest and provides
information to assist in configuring Edge Detection parameters.
Magnitude Threshold
The Magnitude Threshold sets the acceptable magnitude value for potential edges. This value is
expressed as an absolute value; there are two magnitude lines: an upper (positive) threshold and lower
(negative) threshold.
Edge Magnitude expresses the strength of a potential edge. The (green) magnitude curve, represents
magnitude values across the area of interest. Potential edges must have a magnitude greater than the
upper threshold, or lower than the lower threshold. See Figure 99.
Figure 99 Interpreting the Magnitude Threshold in the display area (the magnitude curve is shown between the upper and lower magnitude thresholds; potential edges are shown as yellow dotted lines)
Filter Half-Width
The filtering process attenuates peaks in the magnitude curve that are caused by noise. Filter HalfWidth should be set to a value approximately equivalent to the width of the edge, in pixels. An incorrect
value can cause edges to be incorrectly detected.
Edge Score
The Arc Edge Locator scores potential edges according to the constraints set for edges. The scoring method restricts the Arc Edge Locator’s search so that only results for valid edges are returned.
There are two basic types of constraints that affect the choice of valid edges: Polarity and edge-score constraints, which are based on the position and magnitude of the edges.
Polarity
Polarity corresponds to the change in light values, moving from left to right in the display, along the X-Axis in the region of interest. The Arc Edge Locator applies the Polarity constraint before applying edge-score constraints.
Polarity does not affect the Edge Score; however, only edges that meet the selected Polarity constraint are output as valid edges, regardless of their scores.
• Dark to Light will only accept edges occurring at transitions from a dark area to a light area.
• Light to Dark will only accept edges occurring at transitions from a light area to a dark area.
• Either will accept any edge, regardless of its polarity.
Figure 100 Edge Polarity (the slope of the blue projection curve indicates changes in polarity, distinguishing light-to-dark from dark-to-light transitions)
Constraints
There are two types of constraints: Position and Magnitude. You can set the Arc Edge Locator to use only
one constraint type or both. The graphical function editor is provided for viewing and setting each type
of constraint.
• If only one constraint is selected, edges are scored only based on the selected constraint.
• If both constraints are selected, then each constraint accounts for 50% of the edge score.
Magnitude Constraint
The Magnitude constraint is based on edge values relative to the Magnitude Threshold. Edges having
a magnitude equal to, or exceeding the Magnitude Threshold, are attributed a score of 1. Edges with
values below the Magnitude Threshold receive a score ranging from 0 to 0.999, according to a
manually set magnitude constraint function.
• The Magnitude Constraint is applied globally to all edges detected by the Arc Edge Locator.
• Figure 101 shows two different setups for a magnitude constraint function.
To set the Magnitude Constraint:
1. In the drop-down list above the function editor, select Magnitude Constraints.
2. In the Function Editor, use the mouse to drag handles and set the Magnitude limits. See
examples in Figure 101.
Figure 101 Setting the Magnitude Constraint in the Function Editor (two example setups: a step function where Edge Score = 1.0 if Magnitude > 95 and 0.0 if Magnitude < 95; and a ramp where Edge Score = 1.0 if Magnitude > 130, between 0.01 and 0.99 for 50 < Magnitude < 130, and 0.0 if Magnitude < 50)
Position Constraint
The Position constraint restricts the Arc Edge Locator’s search for edges to a specific zone of the region of interest.
• It is possible to graphically set a position constraint function when the approximate position of an
edge is known beforehand. This is useful for scoring an edge based on its offset from the expected
position.
• Values in the Constraint Function Editor indicate relative distance in the region of interest where
0.0 is the leftmost position and 1.0 is the rightmost position.
To set the Position Constraint:
1. In the drop-down list above the function editor, select Position Constraints.
2. In the Function Editor, use the mouse to drag handles and set the Position limits. See examples
in Figure 102.
• The position in the function editor corresponds to the same position in the display.
Figure 102 Setting the Position Constraint in the Function Editor (physical position in the display maps directly to physical position in the function editor; in this example the Score = 1 region is set around a value representing 87.5% of the distance from the left edge of the region of interest)
Score Threshold
The score threshold sets the minimum acceptable score for a valid edge. The Arc Edge Locator will disregard edges that obtain a score lower than the Score Threshold.
• Scores attributed by the Arc Edge Locator for constraints range from 0 to 1.
• If both Position and Magnitude constraints are enabled, each constraint accounts for 50%
of the total edge score.
Sort Results
You can enable the Sort Results check box to sort the located edges in descending order of score values.
By default, Sort Results is not enabled and edges are output in the same left to right order as they
appear on the projection curve.
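The two output orders described above can be sketched as follows. This is an illustrative Python fragment, not the AdeptSight API; the tuple layout is hypothetical:

```python
# Each found edge as (relative position along the projection, score).
edges = [(0.10, 0.62), (0.45, 0.91), (0.80, 0.75)]

def output_edges(edges, sort_results=False):
    """Left-to-right order by default; descending score order when
    Sort Results is enabled, as described above."""
    if sort_results:
        return sorted(edges, key=lambda e: e[1], reverse=True)
    return sorted(edges, key=lambda e: e[0])

print(output_edges(edges))                     # left-to-right order
print(output_edges(edges, sort_results=True))  # best-scoring edge first
```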
Arc Edge Locator Results
The Arc Edge Locator outputs two types of results: Frames and Results that provide information on each
of the found edges.
• Frames output by the Arc Edge Locator can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for edges found by the Arc Edge Locator tool are shown in the grid of results, below the
display, as illustrated in Figure 103.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance of each tool. At each execution of the tool, the time, date, and results are appended to the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents each frame output by the Arc Edge Locator, as well as the edges found in
each frame.
Figure 103 Representation of Arc Edge Locator Results in Display and Results Grid (red lines represent found edges; the grid lists results for all valid edges found by the Arc Edge Locator)
Grid of Results
The grid of results presents the results for all edges found by the Arc Edge Locator tool. These results can
be saved to file by enabling the Results Log.
Description of Arc Edge Locator Results
The Arc Edge Locator outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Arc Edge Locator. Elapsed Time is not visible in the results grid but it is output to the results log for each iteration of the Arc Edge Locator.
Frame
Frame identifies the number of the frame output by the Arc Edge Locator tool. If the tool is frame-based, this number corresponds to the input frame that provided the positioning.
Edge
Identification number of the edge. Enabling Sort Results affects the order of edge numbering.
Score
Score is the calculated score, between 0 and 1, for each edge. The score is calculated according to the defined constraint functions. If both Position and Magnitude constraints are enabled, each constraint accounts for 50% of the score.
Position X
The X coordinate of the center point for each edge segment.
Position Y
The Y coordinate of the center point for each edge segment.
Rotation
Rotation shows the angle for the edge.
Position Score
Position Score for the edge, calculated according to the Position Constraint function.
Magnitude
The Magnitude of the edge indicates its peak value in the magnitude curve.
Magnitude Score
Magnitude Score for the edge, calculated according to the Magnitude Constraint function.
Configuring Advanced Arc Edge Locator Parameters
The Advanced Parameters section of the Arc Edge Locator tool interface provides access to advanced
Arc Edge Locator parameters and properties.
Configuration Parameters
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Arc Edge Locator processes images in the format
in which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Arc Edge Locator processes only the grey-scale information in the input image, regardless of the format in which the images are provided. This can reduce the execution time when color processing is not required.
Frame Transform Parameters
The Scale to Instance parameter is applicable only to an Arc Edge Locator that is frame-based, and for
which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Arc Edge Locator.
Scale to Instance
When ScaleToInstance is True, the Arc Edge Locator region of interest is resized and positioned
relative to the change in scale of the Input frame. This is the recommended setting for most cases.
When ScaleToInstance is False, the Arc Edge Locator ignores the scale and builds its frame relative to
the input frame without adapting to the change in scale.
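The effect of ScaleToInstance can be sketched as follows. This is an illustrative Python function with hypothetical names, not the actual AdeptSight property; rotation handling is omitted for brevity.

```python
def place_roi(base_center, base_size, frame_scale, scale_to_instance=True):
    """Position a tool region of interest relative to a located frame.

    With scale_to_instance True, both the ROI offset and its size follow
    the scale reported for the located instance; with False, the ROI keeps
    its nominal size and offset relative to the input frame.
    """
    if scale_to_instance:
        cx, cy = base_center[0] * frame_scale, base_center[1] * frame_scale
        w, h = base_size[0] * frame_scale, base_size[1] * frame_scale
    else:
        cx, cy = base_center
        w, h = base_size
    return (cx, cy), (w, h)
```

For an instance located at twice its nominal scale, the region of interest doubles in size and its offset from the frame origin doubles as well.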
Location Parameters
Tool Position parameters
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Opening
Angle between the two bounding radii of the tool's sector.
Radius
The radius of the tool corresponds to the radius of the median annulus of the tool’s sector.
Thickness
Distance between the two bounding annuli of the tool's sector.
Rotation
Angle of rotation of the Arc Edge Locator region of interest.
Width
Width of the Arc Edge Locator region of interest.
X
X coordinate of the origin of the tool.
Y
Y coordinate of the origin of the tool.
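Because the Radius is defined on the median annulus, the two bounding annuli of the sector can be derived directly from the Radius and Thickness. The following is an illustrative calculation, not part of AdeptSight itself.

```python
def sector_annuli(radius, thickness):
    """Derive the inner and outer bounding annuli of the tool's sector.

    The Radius parameter is the radius of the median annulus, so the
    Thickness extends half inward and half outward from it.
    """
    inner = radius - thickness / 2.0
    outer = radius + thickness / 2.0
    return inner, outer
```

For example, a Radius of 50 with a Thickness of 20 gives bounding annuli of radius 40 and 60.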
[Figure: a sector-type region of interest, annotated with the Thickness, Bisector, Opening (angle), bounding Annulus, X-Y Position (origin), Radius, and Rotation relative to the Y axis of the reference frame]
Figure 104 Illustration of Tool Position for a Sector-type Region of Interest
Tool Sampling Parameters
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting SamplingStepCustomEnabled to True
and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
true (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where the speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the Image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
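The speed side of this tradeoff is easy to quantify: the number of samples the tool must process shrinks with the square of the sampling step. The sketch below is illustrative Python, not AdeptSight code.

```python
import math

def sample_count(roi_width, roi_height, sampling_step):
    """Estimate how many samples are taken over a region of interest.

    The sampling step is the width and height of one sampled pixel, so a
    larger step means fewer samples (faster, less precise) and a smaller
    step means more samples (slower, more precise).
    """
    cols = math.ceil(roi_width / sampling_step)
    rows = math.ceil(roi_height / sampling_step)
    return cols * rows
```

Doubling the sampling step over a 100 x 50 region cuts the sample count from 5000 to 1250, a factor of four.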
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True, enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default) the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
Results Parameters
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Edge Count
EdgeCount indicates the number of valid edges that were found.
Sort Results Enabled
SortResultsEnabled specifies if edges are sorted in descending order of score values. When set to
False (default) edges are sorted in order of their location within the region of interest. When True, edges
are sorted in the order of their score, from highest to lowest.
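The two orderings can be sketched as follows; this is illustrative Python in which edge records are simplified to dictionaries, not the AdeptSight result objects.

```python
def sort_edges(edges, sort_results_enabled):
    """Order edge results as the SortResultsEnabled parameter would.

    Each edge is a dict with a 'position' along the region of interest
    and a 'score'. False (default): order of location within the region
    of interest; True: descending order of score.
    """
    if sort_results_enabled:
        return sorted(edges, key=lambda e: e["score"], reverse=True)
    return sorted(edges, key=lambda e: e["position"])
```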
Using the Blob Analyzer Tool
The Blob Analyzer tool processes pixel information within the region of interest and uses these pixel
values to apply image segmentation algorithms used for blob detection. User-defined criteria then
restrict the Blob Analyzer’s search for valid blobs.
The Blob Analyzer returns an array of numerical results for each valid blob that has been found and
located. Blob results include geometric, topological and greylevel blob properties.
Basic Steps for Configuring a Blob Analyzer
1. Select the tool that will provide input images. See Input.
2. Position the Blob Analyzer region of interest. See Location.
3. Configure parameters and image subsampling if required. See Configuring Blob Analyzer
Settings.
4. Test and verify results. See Blob Analyzer Results.
5. Configure Advanced Parameters properties if required. Configuring Advanced Blob Analyzer
Parameters.
Input
The Input required by the Blob Analyzer is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Blob Analyzer.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
[Figure: the Location dialog, with the tool positioned relative to the frame that is identified by a blue marker]
Figure 105 Positioning the Blob Analyzer Tool relative to a Frame
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Blob Analyzer is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Blob Analyzer must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Blob Analyzer must only be applied to a single frame (output by the frame-provider tool),
disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Blob Analyzer.
Positioning the Blob Analyzer
Positioning the tool defines the area of the image in which the tool will search for blobs. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Blob Analyzer:
1. Click Location. The Location dialog opens as shown in Figure 105. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Before configuring the Blob Analyzer, execute the tool (or sequence) at least
once and verify in the display that the tool is being positioned correctly in the
image.
The display represents any found blobs as regions of green pixels.
Related Topics
Configuring Blob Analyzer Settings
Configuring Blob Analyzer Settings
The Blob Analyzer tool determines which area of an image will be output as a blob by applying Image
Segmentation constraints and Area Constraints.
• The Blob Analyzer can find any number of blobs in a single image.
• Because the Blob Analyzer relies on differences in pixel greylevel values to divide the region of
interest into blob and non-blob areas, efficient blob detection depends on the appropriate
choice of a segmentation mode.
Before configuring the Blob Analyzer, execute the tool (or sequence) at least once and
verify in the display that the tool is being positioned correctly in the image.
To configure Blob Settings:
1. Under the Blob Settings section of the interface, click Configure.
2. The Blob Settings window opens, as shown in Figure 106. This window provides blob
constraint parameters and a graphical editor for configuring image segmentation thresholds.
To configure a Blob Analyzer tool, you will most often have to work back and forth between the
Threshold function editor, and verifying the effects of selected thresholds and segmentation
modes in the results display and results grid.
3. Set the Minimum Area and Maximum Area constraints.
4. Select an Image Segmentation mode. See Selecting the Image Segmentation Mode for
details.
5. Configure the Threshold function for the selected segmentation mode. See Configuring
Thresholds for details.
[Figure: the Blob Settings window, showing the threshold function editor. Drag the handles with the mouse to configure image segmentation thresholds; the display represents the histogram of pixel values within the tool region of interest]
Figure 106 The Blob Settings Window
Blob Settings
Setting Area Constraints
Area constraints define the Minimum Area and Maximum Area required for valid blobs.
Area constraints are useful for separating potential blobs from background regions, or from other blobs
having similar pixel values.
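Applying the area constraints amounts to a simple range filter over the candidate blobs. The snippet below is an illustrative Python sketch, not the AdeptSight API.

```python
def filter_blobs_by_area(blob_areas, minimum_area, maximum_area):
    """Keep only blobs whose area lies within [minimum_area, maximum_area].

    Blobs smaller than the minimum (noise, background speckle) and larger
    than the maximum (background regions) are rejected as invalid.
    """
    return [a for a in blob_areas if minimum_area <= a <= maximum_area]
```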
Selecting the Image Segmentation Mode
The Blob Analyzer applies Image segmentation to separate pixels within the region of interest into two
categories: blob and non-blob areas. The segmentation mode you choose depends on the nature of the
images and the relationship between blob data and background data.
Dark Segmentation
The Dark segmentation mode is used to extract dark blobs on a light background. This is the inverse
function of the Light segmentation mode.
• All pixels with values to the left of the threshold function are potential blob regions.
• All pixels with values to the right of the threshold are non-blob regions.
• Blobs include all pixels with a value equal to the threshold value.
Light Segmentation
The Light segmentation mode is used to extract light blobs on a dark background.
• All pixels values to the right of the threshold function are potential blob regions.
• All pixels with values to left of the threshold function are non-blob regions.
• Blobs include all pixels with a value equal to the threshold value.
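The Dark and Light modes with a hard threshold can be sketched as a per-pixel classification; this is illustrative Python, not AdeptSight code, and pixels equal to the threshold are counted as blob pixels in both modes, as described above.

```python
def segment(pixels, threshold, mode):
    """Classify pixels as blob (1) or non-blob (0) with a hard threshold.

    'dark' keeps values at or below the threshold; 'light' keeps values
    at or above it, mirroring the Dark and Light segmentation modes.
    """
    if mode == "dark":
        return [1 if p <= threshold else 0 for p in pixels]
    if mode == "light":
        return [1 if p >= threshold else 0 for p in pixels]
    raise ValueError("mode must be 'dark' or 'light'")
```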
Inside Segmentation
The Inside segmentation mode applies a dual threshold function. This mode is used to extract grey
(neither dark nor light) blobs from a background containing dark and light areas.
Examples:
• The region of interest contains a grey blob on a light object or part, with a dark image
background.
• The region of interest contains a grey blob on a dark object or part, with a light image
background.
Inside Segmentation should always be configured using Soft Threshold Functions. In
most cases the Dynamic Inside segmentation mode will provide better flexibility and
better results than the Inside segmentation mode.
Outside Segmentation
The Outside segmentation mode applies a dual threshold function and is the inverse of the Inside
mode. This mode is used for extracting dark and light blobs from a grey background.
Cases for using outside segmentation are not frequent because it is best to analyze dark and light blobs
within an image by creating two Blob Analyzer tools: one for dark blobs and one for light blobs.
Dynamic Dark
The Dynamic Dark mode sets a percentage of the pixel distribution that is valid for the detection of
dark blobs.
This mode is similar to the Dark Segmentation mode to which a dynamic threshold mode is applied.
• A dynamic threshold is an adaptive threshold that varies according to changes in lighting in
the input images. See Dynamic Threshold Functions for more information.
• Either hard or soft thresholds can be applied to this mode.
Dynamic Light
The Dynamic Light mode sets a percentage of the pixel distribution that is valid for the detection of
light blobs.
This mode is similar to the Light Segmentation mode to which a dynamic threshold mode is applied.
• A dynamic threshold is an adaptive threshold that varies according to changes in lighting in
the input images. See Dynamic Threshold Functions for more information.
• Either hard or soft thresholds can be applied to this mode.
Dynamic Inside
The Dynamic Inside mode sets a percentage of the pixel distribution that is valid for the detection of
grey (neither dark nor light) blobs on a background containing dark and light areas.
This mode is similar to the Inside Segmentation mode to which a dynamic threshold mode is applied.
• A dynamic threshold is an adaptive threshold that varies according to changes in lighting in
the input images. See Dynamic Threshold Functions for more information.
• Either hard or soft thresholds can be applied to this mode.
Dynamic Outside
The Dynamic Outside mode sets a percentage of the pixel distribution that is valid for the detection of
dark and light blobs on a grey (neither dark nor light) background.
This mode is similar to the Outside Segmentation mode to which a dynamic threshold mode is applied.
• A dynamic threshold is an adaptive threshold that varies according to changes in
lighting in the input images. See Dynamic Threshold Functions for more information.
• Either hard or soft thresholds can be applied to this mode.
Configuring Thresholds
Threshold functions set values at which image segmentation takes place. Depending on the
segmentation mode selected, there may be either a single threshold or a double threshold. There are
three types of threshold functions: hard, soft and dynamic.
• Thresholds are modified in the threshold function Editor, using the mouse.
• The threshold function varies depending on the selected Image Segmentation mode.
Hard Threshold Functions
Hard thresholds produce a blob image in which pixels have only two possible states: blob or non-blob.
• Background or non-blob pixels are each attributed a value of 0 in the Blob Image.
• Blob pixels are each attributed a value of 1 in the Blob Image.
• Hard thresholding is sometimes referred to as binary thresholding since all pixels can be
considered as having one of two states: blob or background.
Hard thresholding assumes that changes in data values occur at the boundary between pixels, without
allowing a variation in pixels values across blob boundaries. Since this is rarely the case, soft
thresholding is more often used for applications.
[Figure: a hard threshold separating blob pixels from background pixels]
Figure 107 Hard Thresholding Example
Soft Threshold Functions
Soft thresholds let you use blob detection in cases where boundaries of a blob region span a few pixels
in width, with varying greylevels between the blob and the background.
A soft threshold is sloped and covers a range of pixel values that become weighted pixels once they are
processed.
• Weighted pixels are used in the calculation of the blob’s center of mass in proportion to their
weighted value within the soft threshold range.
• Weighted pixels are shown as completely included within the Blob Image.
[Figure: a soft threshold. Non-weighted pixels contribute entirely to blob results; weighted pixels contribute to blob results in proportion to their weight]
Figure 108 Soft Thresholding Example
Dynamic Threshold Functions
Dynamic threshold modes provide the same functionality as other segmentation modes with the added
advantage of an adaptive threshold. A dynamic threshold is particularly useful when there are lighting
variations from one image to another because the threshold defines a percentage of the pixel
distribution, not a range of light values.
• A dynamic threshold is set as a percentage of the total pixels in the Image.
• To properly set a dynamic threshold, initially use an image that provides an "ideal blob" to
determine what percentage of the image contains blob pixel values.
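The idea of a percentage-based cut-off can be sketched as follows; this is an illustrative Python function (a simple percentile over the pixel histogram), not necessarily the exact computation AdeptSight performs.

```python
def dynamic_threshold(pixels, blob_percentage, mode="light"):
    """Find the greylevel cut-off that keeps a fixed share of the histogram.

    For dynamic light segmentation, the brightest blob_percentage of
    pixels are retained as blob pixels, so the cut-off adapts when the
    overall lighting shifts; 'dark' would retain the darkest share.
    """
    ordered = sorted(pixels, reverse=(mode == "light"))
    keep = max(1, round(len(ordered) * blob_percentage / 100.0))
    return ordered[keep - 1]  # greylevel of the last retained pixel
```

Note how the returned cut-off tracks a uniform shift in lighting: darkening every pixel by 10 greylevels lowers the cut-off by 10 but retains the same pixels as blob pixels.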
[Figure: dynamic light segmentation with a soft threshold. The proportion of the histogram retained as blob pixels separates blob pixels from background pixels along the greylevel axis]
Figure 109 Dynamic Threshold Example
Blob Analyzer Results
The Blob Analyzer outputs two types of results: Frames and Results that provide information on each of
the found blobs.
• Frames output by the Blob Analyzer can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for found blobs appear in the grid of results, below the display, as illustrated in Figure
110.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window and in the grid of results.
Results Display
The display window provides a visual representation of each blob, as illustrated in Figure 110.
[Figure: green pixels represent pixels in the region of interest that correspond to the selected image segmentation mode; each found blob is identified by its Frame and Instance number]
Figure 110 Representation of Blob Analyzer Results in Display and Results Grid
Grid of Results
The grid of results presents the results for all blob instances found by the Blob Analyzer. These results
can be saved to file by enabling the Results Log.
Enabling Blob Analyzer Results
Because of the large number of results that can be calculated and output by the Blob Analyzer, only
General Results are output by default.
To enable the output of other types of results, the output must be configured in the Advanced
Parameters section.
To optimize the tool execution time, you should enable only the results that
you need for your application.
Description of Blob Analyzer Results
Results are presented below, by group, in the order in which they are output to the results log.
• General Results
• Perimeter Results
• Extrinsic Inertia Results
• Intrinsic Inertia Results
• Extrinsic Box Results
• Intrinsic Box Results
• Chain Code Results
• Greylevel Results
• Topological Results
General Results
The Blob Analyzer outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Blob Analyzer. Elapsed Time is not visible in the
results grid but it is output to the results log for each iteration of the Blob Analyzer.
Frame
Frame identifies the number of the frame output by the Blob Analyzer tool. If the tool is frame-based,
this number corresponds to the input frame that provided the positioning.
Instance
Identification number of the found blob. Each blob found and output by the Blob Analyzer tool is a blob
instance. Each instance outputs a frame that can be used by a frame-based tool for which the Blob
Analyzer is a frame-provider.
Area
Area is the surface area of the blob. In the case of soft thresholding, weighted pixels contribute to the
area in proportion to their weight.
Position X
The X coordinate of the center of mass of the blob. The center of mass is defined by the average position
of the pixels in the blob and takes into account the effect of weighted pixel values, in the case of soft
thresholding.
Position Y
The Y coordinate of the center of mass of the blob. The center of mass is defined by the average position
of the pixels in the blob and takes into account the effect of weighted pixel values, in the case of soft
thresholding.
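The weighted center of mass described above can be sketched as follows; this is illustrative Python over a simplified pixel list, not the AdeptSight computation itself.

```python
def center_of_mass(pixels):
    """Weighted center of mass of a blob.

    Each entry is (x, y, weight); hard-thresholded pixels have weight 1,
    while soft thresholding gives boundary pixels fractional weights that
    contribute proportionally to the average position.
    """
    total = sum(w for _, _, w in pixels)
    cx = sum(x * w for x, _, w in pixels) / total
    cy = sum(y * w for _, y, w in pixels) / total
    return cx, cy
```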
Perimeter Results
Roundness
Roundness quantifies the degree of similarity between the blob and a circle. The roundness of a
perfectly circular blob is 1.
Convex Perimeter
The Convex perimeter is calculated from the average projected diameter of the blob and is more stable
and accurate than the raw perimeter for convex shapes, including rectangular forms.
Raw Perimeter
The raw perimeter of a blob is defined as the sum of the pixel edge lengths on the contour of the blob.
Because the raw perimeter is sensitive to the orientation of the blob with respect to the pixel grid, results
may vary greatly. For convex blobs, including rectangular forms, convex perimeter results provide greater accuracy.
Intrinsic Inertia Results
Intrinsic Moments of Inertia
The intrinsic moments of inertia measure the inertial resistance of the blob to be rotated about its
principal axes. Since their orientation depends on the coordinate system in which the blob is
represented, the principal axes, major and minor, are defined in the section on extrinsic blob properties.
Elongation
The elongation expresses the degree of dispersion of all pixels belonging to the blob around its center of
mass. The elongation of the blob is calculated as the square root of the ratio of the moment of inertia,
about the minor axis, to the moment of inertia about the major axis.
Elongation = sqrt( InertiaMaximum / InertiaMinimum )
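As a worked example of the formula above (illustrative Python, not AdeptSight code):

```python
import math

def elongation(inertia_maximum, inertia_minimum):
    """Elongation of a blob: the square root of the ratio of the moment of
    inertia about the minor axis (the larger moment) to the moment of
    inertia about the major axis (the smaller moment)."""
    return math.sqrt(inertia_maximum / inertia_minimum)
```

For moments of 16.0 and 4.0, the elongation is sqrt(4) = 2; a perfectly symmetric blob, with equal moments, has an elongation of 1.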
Extrinsic Inertia Results
A moment of inertia of the blob is a measure of the inertial resistance of the blob to be rotated about a
given axis. Extrinsic moments of inertia measure the moment of inertia about the x-y axes of the Tool
coordinate system.
[Figure: a blob's major axis, minor axis, center of mass, and the rotation of the principal axes relative to the selected coordinate system]
Figure 111 Illustration of Extrinsic Inertia Results
Extrinsic Moments of Inertia
A moment of inertia of the blob is a measure of the inertial resistance of the blob to be rotated about a
given axis. Extrinsic moments of inertia measure the moment of inertia about the x-y axes of the Tool
coordinate system.
Principal Axes
Principal Axes designates a reference system that is constituted of the major axis and the minor axis.
The major axis (X) is the axis about which the moment of inertia is smallest. Conversely, the minor axis
(Y) is the axis about which the moment of inertia of the blob is the greatest.
The principal axes are orthogonal and are identified by the angle between the X-axis of the region of
interest and the major axis of the blob.
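The angle of the major axis can be recovered from second-order central moments with the standard textbook formula theta = 0.5 * atan2(2*mu11, mu20 - mu02); this sketch is illustrative Python and not necessarily the exact computation AdeptSight performs.

```python
import math

def principal_axis_angle(pixels):
    """Counterclockwise angle from the X-axis to a blob's major axis.

    pixels is a list of (x, y) blob pixel coordinates; the angle comes
    from the second-order central moments of the pixel distribution.
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels)
    mu02 = sum((y - cy) ** 2 for _, y in pixels)
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

A horizontal row of pixels yields an angle of 0, while a 45-degree diagonal yields pi/4.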
Inertia X-Axis
The moment of inertia about the X-axis of the Tool coordinate system.
Inertia Y-Axis
The moment of inertia about the Y-axis of the Tool coordinate system.
Rotation of the Principal Axes
The rotation of the Principal Axes reference system is the counterclockwise angle between the X-axis of
a selected coordinate system and the major axis.
Principal Axes Rotation
The angle of axis of the smallest moment of inertia with respect to the X-axis of the selected coordinate
system.
Intrinsic Box Results
The intrinsic bounding box, which is aligned with the principal axes, defines the smallest rectangle
enclosing the blob. The principal axes are defined with the minor axis and the major axis. Extents
measure the distance between a blob's center of mass and the four sides of the bounding box.
[Figure: the intrinsic bounding box aligned with the major and minor axes, with the left, right, top, and bottom extents measured from the blob's center of mass]
Figure 112 Intrinsic Bounding Box and Extents
Intrinsic Extents
Intrinsic extents are the distances between a blob’s center of mass and the four sides of the intrinsic
bounding box.
Rotation of the Intrinsic Bounding Box
The rotation of the intrinsic bounding box corresponds to the counter-clockwise angle between the X-axis of the bounding box (major axis) and the X-axis of the selected coordinate system.
Principal Axes Rotation
The angle of axis of the smallest moment of inertia with respect to the X-axis of the selected coordinate
system.
Chain Code Results
A chain code is a sequence of direction codes that describes the boundary of a blob.
[Figure: a chain code traced around a blob boundary in the Tool coordinate system, with example values Start X: 5, Start Y: 1, Delta X: 1, Delta Y: 1, Length: 20]
Figure 113 Illustration of Chain Code results
Chain Code Start X /Chain Code Start Y
Designate the start position of the chain code, which corresponds to the position of the first pixel
associated with the chain code.
Chain Code Delta X and Delta Y
Designate the horizontal and vertical length of a boundary element in the chain code.
Chain Code Length
The length of the chain code corresponds to the number of boundary elements in the chain code.
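Following a chain code step by step can be sketched as below. This is illustrative Python; the 4-connected direction-code convention (0 = right, 1 = up, 2 = left, 3 = down) is an assumption, not AdeptSight's documented encoding.

```python
# Direction code -> (delta_x, delta_y), assuming a 4-connected code.
STEPS = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def trace_chain(start_x, start_y, codes):
    """Follow a chain code around a blob boundary.

    Returns the list of visited positions; a closed boundary ends back
    at the start, and the chain code length is the number of elements.
    """
    x, y = start_x, start_y
    path = [(x, y)]
    for c in codes:
        dx, dy = STEPS[c]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```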
Extrinsic Box Results
An Extrinsic box is a bounding box that defines the smallest rectangle, aligned with the Tool coordinate
system, that can enclose the blob.
Extents of a blob are the distances between the center of mass and the four sides of the extrinsic
bounding box.
[Figure: the extrinsic bounding box aligned with the Tool coordinate system, showing the center of mass, the bounding box center, and the (X left, Y bottom) and (X right, Y top) corners]
Figure 114 Illustration of Extrinsic Box results
Left
Left is the leftmost coordinate of the bounding box aligned with respect to the X-axis of the Tool
coordinate system.
Bottom
Bottom is the bottommost coordinate of the bounding box, aligned with respect to the Y-axis of the Tool
coordinate system.
Right
Right is the rightmost coordinate of the bounding box aligned with respect to the X-axis of the Tool
coordinate system.
Top
Top is the topmost coordinate of the bounding box, aligned with respect to the Y-axis of the Tool
coordinate system.
Center X and Center Y
Center X and Center Y are the X-Y coordinates of the center of the bounding box.
Height
Height is the height of the bounding box with respect to the Y-axis of the Tool coordinate system.
Greylevel Results
In all cases, greylevel properties apply to pixels included in the blob regardless of weight values
attributed by soft thresholding.
Mean Grey Level
The mean greylevel is the average greylevel of the pixels belonging to the blob.
Minimum Grey Level
The minimum greylevel is the lowest greylevel pixel found in the blob.
Maximum Grey Level
The maximum greylevel is the highest greylevel pixel found in the blob.
Grey Level Range
The greylevel range is the difference between the highest and the lowest greylevel found in the blob.
Standard Deviation Grey Level
The standard deviation of greylevels for the pixels belonging to the blob.
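The greylevel results above reduce to simple statistics over the blob's pixel values. The sketch below is illustrative Python; treating the standard deviation as a population (rather than sample) standard deviation is an assumption.

```python
import math

def greylevel_stats(values):
    """Mean, minimum, maximum, range, and standard deviation of blob
    pixel greylevels. Weights from soft thresholding are ignored, since
    greylevel results apply to all pixels included in the blob."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance
    return {
        "mean": mean,
        "minimum": min(values),
        "maximum": max(values),
        "range": max(values) - min(values),
        "std_dev": math.sqrt(var),
    }
```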
Topological Results
Hole Count
The Hole Count property returns the number of holes found in each blob. The holes in smaller blobs that
are contained within a larger blob are not included in the Hole Count. In other words, the Hole Count
does not take into account the hierarchical relationship between blobs. Figure 115 illustrates such a case
where Blob#1 returns a Hole Count of three, not four.
[Figure: Blob #1 containing three holes, and a smaller Blob #2 inside it that itself contains Hole #4. Blob #1: 3 holes; Blob #2: 1 hole; shown in the image coordinate system]
Figure 115 Illustration of Topological Results
Configuring Advanced Blob Analyzer Parameters
The Advanced Parameters section of the Blob Analyzer tool interface provides access to advanced
Blob Analyzer parameters and properties.
Configuration
Hole Filling Enabled
When HoleFillingEnabled is enabled (True), all background pixels within the perimeter of a given blob
become included in the blob. All smaller blobs within a larger blob are also included in the "filled" larger
blob. Both the background and smaller blobs are then considered as part of the filled blob.
[Figure: the same blob with Hole Filling Enabled (True), shown as a solid region, and with Hole Filling Disabled (False), shown with its holes preserved]
Figure 116 Illustration of the Hole Filling Parameter
Clear Output Blob Image Enabled
ClearOutputBlobImageEnabled specifies if the image output by the tool will be cleared in the next
execution of the tool.
ClearOutputBlobImageEnabled is enabled (True) by default and ensures that only the last blob
image remains in the Blob Image output. When ClearOutputBlobImageEnabled is disabled (False), the resulting blob
images for each execution of the application remain in the Blob Output Image.
Output Blob Image Enabled
OutputBlobImageEnabled specifies if a blob image will be output after the blob segmentation and
labelling process.
The blob image is an overlay image, represented by default as green pixels in the results display. The
color of the blob overlay image can be modified in the AdeptSight Environment Settings dialog.
This image provides useful visual information when developing applications, especially for verifying the
effect of image segmentation and thresholds configuration.
It is recommended to set OutputBlobImageEnabled to False (disable
output blob images) during the runtime of the actual application because
displaying the image can create a significant increase in processing time.
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
In AdeptSight 2.0, the Blob Analyzer does not process color images. Input color images are converted
internally by the tool to grey-scale and processed as grey-scale images. This conversion process may
increase the execution time. To improve execution time, Processing Format can be set to hsGreyScale.
• hsNative: When hsNative is selected, the Blob Analyzer processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Blob Analyzer processes only the greyscale information in the input image, regardless of the format in which the images are
provided. This can reduce the execution time when color processing is not required.
Frame Transform Parameters
The Scale to Instance parameter is applicable only to a Blob Analyzer that is frame-based, and for
which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Blob Analyzer.
Scale to Instance
When ScaleToInstance is True, the Blob Analyzer region of interest is resized and positioned relative
to the change in scale of the Input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Blob Analyzer ignores the scale and builds the frame relative to the input
frame without adapting to the change in scale.
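As a sketch of this behaviour, the Python function below (hypothetical names; this is not the AdeptSight API) positions a region of interest relative to an input frame, either following or ignoring the instance scale.

```python
import math

def place_roi(frame_x, frame_y, frame_rotation_deg, frame_scale,
              roi_offset_x, roi_offset_y, roi_width, roi_height,
              scale_to_instance=True):
    """Illustrative sketch: position a tool region of interest relative to a
    frame provided by a Locator. When scale_to_instance is True, the ROI
    offset and size follow the instance scale; when False, scale is ignored."""
    s = frame_scale if scale_to_instance else 1.0
    a = math.radians(frame_rotation_deg)
    # Rotate and scale the ROI offset into the frame, then translate.
    cx = frame_x + s * (roi_offset_x * math.cos(a) - roi_offset_y * math.sin(a))
    cy = frame_y + s * (roi_offset_x * math.sin(a) + roi_offset_y * math.cos(a))
    return cx, cy, roi_width * s, roi_height * s

# An instance found at 2x scale: the ROI offset and size double with it.
print(place_roi(100.0, 50.0, 0.0, 2.0, 10.0, 0.0, 20.0, 8.0))
# With scale_to_instance=False, the same instance leaves the ROI unscaled.
print(place_roi(100.0, 50.0, 0.0, 2.0, 10.0, 0.0, 20.0, 8.0, scale_to_instance=False))
```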
Location Parameters
Tool Position Parameters
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Blob Analyzer region of interest.
Width
Width of the Blob Analyzer region of interest.
Rotation
Angle of rotation of the Blob Analyzer region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 117 Location Properties Blob Analyzer Region of Interest
Tool Sampling Parameters
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
Bilinear Interpolation
BilinearInterpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
Bilinear interpolation is crucial for obtaining accurate Blob Analyzer results. To ensure subpixel
precision in blob results, BilinearInterpolation should always be set to True (enabled).
If the Blob Analyzer is used in a frame-based mode, the tool region of interest, and the blobs found
within it, are rarely aligned with the pixel grid, resulting in jagged edges on blob borders. Therefore
interpolated pixel values provide a more true-to-life representation of blob contours. As illustrated in
Figure 118, a detail from a non-interpolated image shows a blob's contour as being very jagged and
irregular.
Figure 118 Effect of Bilinear Interpolation on Blob Detection
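Bilinear interpolation itself is straightforward: the value at a non-integer position is a weighted average of the four surrounding pixels. A minimal Python sketch of the sampling step, not the tool's actual implementation:

```python
def bilinear_sample(img, x, y):
    """Sample a grey-scale image (list of rows) at a non-integer position
    using bilinear interpolation. Illustrative sketch only."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    # Weighted average of the four neighbouring pixels.
    top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
    bottom = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

img = [[0, 100],
       [100, 200]]
print(bilinear_sample(img, 0.5, 0.5))  # 100.0 -- midway between all four pixels
```

Because the interpolated values vary smoothly between pixel centers, blob contours sampled this way lose the jagged, pixel-grid appearance shown in the non-interpolated detail of Figure 118.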
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the Image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
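The speed side of this tradeoff is easy to see: the number of samples taken in the region of interest shrinks roughly quadratically as the step grows. A rough sketch, with hypothetical units:

```python
def sample_count(roi_width, roi_height, sampling_step):
    """Rough sketch: the sampling step is the height and width of a sampled
    pixel, so the number of samples in a region of interest falls off
    quadratically as the step grows."""
    return int(roi_width / sampling_step) * int(roi_height / sampling_step)

# Doubling the step cuts the sampling work roughly by four.
print(sample_count(64.0, 32.0, 1.0))  # 2048
print(sample_count(64.0, 32.0, 2.0))  # 512
```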
Results Parameters
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Sort Results Enabled
SortResultsEnabled enables the sorting of blob instances, as they appear in the results log and results
grid.
• When False (default), blob instances are sorted in the order in which they are found by the
Blob Analyzer.
• When set to True, blobs are sorted according to the value of the result that is selected by the
SortBlobsBy property.
• Blob results are always sorted in descending order.
Sort Blobs By
SortBlobsBy selects a blob result that will be used as basis for sorting the blob instances, as they
appear in the results log and results grid.
To sort blobs:
1. Set the SortResultsEnabled to True.
2. In the SortBlobsBy list, select the blob property that will serve as basis for the sorting order.
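The sorting behaviour above can be sketched as follows; the dictionaries and key names are illustrative, not the AdeptSight results API.

```python
def sort_blobs(blobs, sort_by, sort_enabled=True):
    """Sketch of SortResultsEnabled / SortBlobsBy: when sorting is enabled,
    blob results are ordered in descending order of the selected result;
    otherwise they keep the order in which they were found."""
    if not sort_enabled:
        return list(blobs)
    return sorted(blobs, key=lambda b: b[sort_by], reverse=True)

found = [{"instance": 0, "Area": 120.0},
         {"instance": 1, "Area": 310.5},
         {"instance": 2, "Area": 87.2}]
by_area = sort_blobs(found, "Area")
print([b["instance"] for b in by_area])  # [1, 0, 2] -- largest area first
```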
Output Results
The Blob Analyzer can output a wide choice of blob results. These results are grouped by families or types of
results. By default only the General results such as Position and Area are output to the results log and
the results grid. Other Blob results can be enabled by enabling the Output Result categories mentioned
below.
To optimize the tool execution time, you should enable only the results that
you need for your application.
Perimeter Results
Setting PerimeterResultsEnabled to True enables the output of the following results. For more details
see the Perimeter Results section.
• Roundness
• Convex Perimeter
• Raw Perimeter
Intrinsic Inertia Results
Setting IntrinsicInertiaResultsEnabled to True enables the output of the following results. For more details
see the Intrinsic Inertia Results section.
• Inertia Minimum
• Inertia Maximum
• Elongation
Extrinsic Inertia Results
Setting ExtrinsicInertiaResultsEnabled to True enables the output of the following results. For more details
see the Extrinsic Inertia Results section.
• Inertia X-Axis
• Inertia Y-Axis
• Principal Axes Rotation
Intrinsic Box Results
Setting IntrinsicBoxResultsEnabled to True enables the output of the following results. For more
details see the Intrinsic Box Results section.
• Intrinsic Bounding Box Center X
• Intrinsic Bounding Box Center Y
• Intrinsic Bounding Box Height
• Intrinsic Bounding Box Width
• Intrinsic Bounding Box Left
• Intrinsic Bounding Box Right
• Intrinsic Bounding Box Top
• Intrinsic Bounding Box Bottom
• Intrinsic Bounding Box Rotation
• Intrinsic Extent Left
• Intrinsic Extent Right
• Intrinsic Extent Top
• Intrinsic Extent Bottom
Extrinsic Box Results
Setting ExtrinsicBoxResultsEnabled to True enables the output of the following results. For more
details see the Extrinsic Box Results section.
• Bounding Box Center X
• Bounding Box Center Y
• Bounding Box Height
• Bounding Box Width
• Bounding Box Left
• Bounding Box Right
• Bounding Box Top
• Bounding Box Bottom
• Bounding Box Rotation
• Extent Left
• Extent Right
• Extent Top
• Extent Bottom
Chain Code Results
Setting ChainCodeResultsEnabled to True enables the output of the following results. For more details
see the Chain Code Results section.
• Chain Code Length
• Chain Code Start X
• Chain Code Start Y
• Chain Code Delta X
• Chain Code Delta Y
Greylevel Results
Setting GreylevelResultsEnabled to True enables the output of the following results. For more details
see the Greylevel Results section.
• Greylevel Mean
• Greylevel Range
• Greylevel StdDev
• Greylevel Minimum
• Greylevel Maximum
Topological Results
Setting TopologicalResultsEnabled to True enables the output of the following results. For more
details see the Topological Results section.
• Hole Count
Using the Pattern Locator Tool
The Pattern Locator finds and locates point-type features on objects and returns the coordinates of the
found point.
The Pattern Locator is best suited for applications that require the detection of low contrast and/or small
features such as letters, numbers, symbols and logos on a part. Patterns that can provide high-contrast
and well-defined contours should be modeled and found by a Locator tool.
Typical cases for using the Pattern Locator include:
• Detecting the presence/absence of a grey-scale pattern on a modeled object (Locator).
• Disambiguating objects having the same contours by their grey-scale features.
The Pattern Locator does not support rotated patterns and should generally
be used as a model-based inspection tool for detecting the presence of small
grey-scale patterns on small areas in the image or on an object.
What is a Pattern?
A pattern is defined as a grid of pixels having a specific arrangement of greylevel values. A sample pattern
must be created for each Pattern Locator tool in the vision sequence.
Figure 119 Pattern Examples (A: number ‘5’ pattern; B: ‘quadrant’ pattern)
Basic Steps for Configuring a Pattern Locator
1. Select the tool that will provide input images. See Input.
2. Position the Pattern Locator tool. See Location.
3. Create and edit a Pattern. See Creating and Editing Patterns
4. Test and verify results. See Pattern Locator Results.
5. Configure Advanced properties if required. See Configuring Advanced Pattern Locator
Parameters.
Input
The Input required by the Pattern Locator is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Pattern Locator.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 120 Positioning the Pattern Locator Tool
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Pattern Locator is positioned relative to a frame of
reference provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Pattern Locator must be placed on all frames output by the frame-provider tool, enable
the All Frames check box.
4. If the Pattern Locator must be only be applied to a single frame, (output by frame-provider
tool) disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Pattern Locator.
Positioning the Pattern Locator
Positioning the tool defines the area of the image that will be processed by the Pattern Locator. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Pattern Locator tool:
1. Click Location. The Location dialog opens as shown in Figure 120. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. If the Pattern Locator is frame-based, a blue marker indicates the frame provided by the
frame-provider tool (Frame Input). If there is more than one object in the image, make sure that
you are positioning the bounding box relative to the object identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Creating the Pattern
Each Pattern Locator tool in the sequence can store a single sample pattern. This pattern will be saved
when you save the sequence or save the tool.
The Pattern Locator searches for the sample pattern within the tool region of interest but does not
search for rotated patterns.
The sample pattern can be created on any image that contains the required pattern.
• The pattern does not have to be created from a pattern that lies within the tool region of interest.
• The sample pattern can be created on any image that contains an instance of the pattern.
• The rotation (orientation) and size of the sample pattern affect the success of the pattern
finding process.
Creating a pattern "destroys" an existing pattern.
To erase the current pattern and create a new one, click New.
To edit or reposition an existing sample pattern, click Edit.
To create and position a sample pattern:
1. Click New. This opens Pattern Edition mode.
2. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
3. Important: Correct size and rotation are critical to ensure successful finding of patterns:
• The bounding box should be just large enough to encompass the pattern.
• The X-Y axes marker defines the orientation of the pattern. Make sure the XY axes of the
Pattern region of interest are aligned in the correct orientation with respect to the Pattern
Locator region of interest. See Figure 121.
Figure 121 Setting the rotation of the sample pattern (the pattern is found relative to the alignment
of the Pattern ROI with the Pattern Locator ROI)
Editing the Pattern
Once the pattern is created, it is temporarily saved to memory. The sample pattern will be saved when
you save the tool or the vision sequence. Changes to the sample pattern can be made at any time.
To edit or modify the sample pattern:
1. Under the Pattern section, click Edit. This opens the existing sample pattern in Pattern Edition mode.
2. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
3. To change the orientation of the pattern, rotate the X-Y axes marker or enter values in the
Rotation text box.
Correctly Orienting Sample Patterns
The Pattern Locator finds patterns that are aligned with the Pattern Locator region of interest.
• The axes marker of the Pattern region of interest sets the orientation of the pattern. When the
tool searches for pattern instances, it searches only for patterns with X-Y axes that are aligned
with the X-Y axes of the tool region of interest.
• Only a slight range of rotation is supported. Only patterns that are rotated by less than
+/- 20 degrees can be found within the area of interest.
• Figure 122 illustrates a correctly oriented pattern, as well as the effect of the pattern rotation
relative to the tool rotation on Pattern Locator results.
Figure 122 Correct orientation of sample patterns (patterns A and C, whose X-Y axes are aligned with
the Pattern Locator region of interest, are found; rotated pattern B is not found)
Correctly Sizing Patterns
The size of the bounding box sets the size of the pattern. The bounding box should be just large enough
to contain the pattern.
• Patterns that are too large can unnecessarily increase processing time
• Patterns that are too small can often result in false detections.
• The minimum size of a pattern is fixed at 3 x 3 pixels.
Related Topics
Configuring Advanced Pattern Locator Parameters
Pattern Locator Results
The Pattern Locator outputs two types of results: Frames, and Results that provide information on each
of the pattern instances.
• Results for found instances appear in the grid of results, below the display, as illustrated in
Figure 123.
• Frames output by the Pattern Locator are represented in the display, and numbered, starting
at 0.
Using the Pattern Locator as a frame-provider for other tools is not generally
recommended. This is because patterns are often small and/or low contrast
features that do not provide an accurate or repeatable position.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance
of each tool. At each execution of the tool, the time, date, and results are appended to the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents only a single sampled image when the display is in "non-calibrated"
mode.
When the Pattern Locator outputs more than one sampled image, all the sample images can be viewed
only when the display is in "calibrated" mode, as shown in Figure 123.
Figure 123 Representation of Pattern Locator results (multiple pattern instances found in a single
region of interest; each found pattern is identified by frame and instance number; Rotation is that of
the Pattern Locator region of interest, not the rotation of the pattern)
Grid of Results
The grid of results presents the statistical results for the region of interest analyzed by the Pattern
Locator. These results can be saved to file by enabling the Results Log.
Description of Pattern Locator Results
Results are presented below in the order in which they are output to the results log.
Elapsed Time
The Elapsed Time is not visible in the results grid but is output to the results log for each iteration of
the Pattern Locator.
Frame
Frame identifies the number of the frame output by the Pattern Locator. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Instance
Index number of the located pattern instance, starting at 0. Each pattern instance outputs a frame that
can be used by a frame-based tool for which the Pattern Locator is a frame-provider.
Match
The Match value ranges from 0 to 1, with 1 being the best quality. A value of 1 means that 100% of the
reference pattern was successfully matched to the found pattern instance.
Position X
X coordinate of the center of the Pattern region of interest, with respect to the selected Coordinate
System.
Position Y
Y coordinate of the center of the Pattern region of interest, with respect to the selected Coordinate
System.
Rotation
The rotation is that of the Pattern Locator region of interest, with respect to the selected Coordinate
System. Rotation IS NOT calculated for individual patterns.
Configuring Advanced Pattern Locator Parameters
The Advanced Parameters section of the Pattern Locator interface provides access to advanced
Pattern Locator parameters and properties.
Frame Transform Parameters
The Scale to Instance parameter is applicable only to a Pattern Locator that is frame-based, and for
which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Pattern Locator.
Scale to Instance
When ScaleToInstance is True, the Pattern Locator region of interest is resized and positioned relative
to the change in scale of the Input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Pattern Locator ignores the scale and builds frame relative to the input
frame without adapting to the change in scale.
Location Parameters
Tool Position Parameters
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Pattern Locator region of interest.
Width
Width of the Pattern Locator region of interest.
Rotation
Angle of rotation of the Pattern Locator region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 124 Illustration of Tool Position for a Sector-type Region of Interest
Tool Sampling Parameters
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting SamplingStepCustomEnabled to True
and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
True (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the Image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default) the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
Pattern Location Parameters
Pattern Position parameters can be set through the Pattern section of the tool interface. These are the
parameters that define the location of the pattern in an input image.
Height
Height of the pattern region of interest.
Width
Width of the pattern region of interest.
Rotation
Angle of rotation of the pattern region of interest.
X
X coordinate of the center of the pattern region of interest.
Y
Y coordinate of the center of the pattern region of interest.
Results Parameters
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Match Count
MatchCount is the number of pattern matches found. Read only.
Search Parameters
The Pattern Locator searches for patterns by applying a multi-resolution search strategy. The Pattern
Locator carries out a search at each defined resolution level. The coarser resolution levels (Search
Coarseness) are used to generate hypotheses; the higher resolution levels (Positioning Coarseness) are
used to refine the position of validated pattern instances. Only those patterns that meet the Match
Threshold are retained.
Auto Coarseness Selection Enabled
When AutoCoarsenessSelectionEnabled is set to True (default), the Pattern Locator automatically
determines and sets the values for Search Coarseness and Positioning Coarseness.
To manually set the Search and Positioning coarseness values you must set
AutoCoarsenessSelectionEnabled to False.
Match Threshold
MatchThreshold sets the minimum Match strength required for a Pattern to be recognized as valid. A
perfect match value is 1.
• If the match threshold is too high, many pattern instances may be rejected.
• If the match threshold is too low, too many false pattern instances may be detected.
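How a 0-to-1 match value can drive this threshold test can be sketched with a normalized correlation score. The actual metric used by the Pattern Locator is not documented here; the function below is only an illustration, with the pattern and window flattened to lists of greylevels.

```python
def match_strength(pattern, window):
    """Illustrative 0-to-1 match metric (normalized correlation) between a
    sample pattern and an image window of the same size. Not the Pattern
    Locator's actual metric."""
    n = len(pattern)
    mp = sum(pattern) / n
    mw = sum(window) / n
    # Correlate the mean-removed pattern and window.
    num = sum((p - mp) * (w - mw) for p, w in zip(pattern, window))
    dp = sum((p - mp) ** 2 for p in pattern) ** 0.5
    dw = sum((w - mw) ** 2 for w in window) ** 0.5
    if dp == 0 or dw == 0:
        return 0.0
    return max(0.0, num / (dp * dw))

pattern = [10, 200, 10, 200]
exact = match_strength(pattern, [10, 200, 10, 200])   # perfect match: 1.0
noisy = match_strength(pattern, [20, 190, 15, 205])   # close but imperfect
threshold = 0.8
print(exact >= threshold, noisy >= threshold)
```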
Maximum Instance Count
MaximumInstanceCount sets the maximum number of instances that the Pattern Locator will search
for. You should set this value to no more than the expected number of instances.
Positioning Coarseness
PositioningCoarseness levels are used to confirm pattern hypotheses and refine their pose. The
Positioning Coarseness value ranges from 1 (Accurate) to 4 (Fast).
Search Coarseness
SearchCoarseness levels are used to generate pattern hypotheses in the input image. The Search
Coarseness value ranges from 1 (Exhaustive) to 32 (Fast).
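The coarse levels of such a multi-resolution search are typically built by downsampling the input image; a Search Coarseness of 4 might, for example, correspond to generating hypotheses on an image reduced by a factor of 4 in each dimension. The sketch below shows one pyramid level by average pooling; it is illustrative, not AdeptSight's implementation.

```python
def downsample(img, factor):
    """Average-pool a grey-scale image (list of rows) by an integer factor,
    producing one coarse level of a multi-resolution pyramid."""
    h, w = len(img) // factor, len(img[0]) // factor
    out = []
    for r in range(h):
        row = []
        for c in range(w):
            # Average the factor x factor block of source pixels.
            block = [img[r * factor + i][c * factor + j]
                     for i in range(factor) for j in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

img = [[(r * 16 + c) % 256 for c in range(8)] for r in range(8)]
coarse = downsample(img, 4)  # 2x2 image: cheap to scan for hypotheses
```

Hypotheses found on the coarse image are then confirmed and their positions refined on the finer levels, which is the division of labour between Search Coarseness and Positioning Coarseness described above.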
Using the Image Processing Tool
The Image Processing Tool processes images by applying arithmetic, assignment, logical, filtering,
morphological or histogram operations. Users can also define and apply custom filtering operations.
Each Image Processing Tool in an application performs a selected operation on an image called the input
image. An image processing operation can also involve another image or a constant, as well as a set of
processing parameters.
Image Types
The tool can accept unsigned 8-bit, signed 16-bit, and signed 32-bit images as input. The processing is
usually performed in the deeper of the input and operand image types, or in a promoted type
(signed 16-bit) if needed.
The Image Processing Tool output is of the same type as the input image, unless the user overrides the
type by setting another value, or an output image already exists.
What is an Image Processing Operation?
An image processing operation is a process carried out by the Image Processing Tool on an input image.
The result of an operation is an output image that can be used by other AdeptSight tools.
Image Processing operations are typically applied to images before they are processed by other vision
tools. Some complex image processing applications may require a sequence of two or more Image
Processing Tools.
Some common uses of an image processing tool are:
• Inverting images (negative image)
• Creating a binary image, using a threshold operation
• Sharpening or averaging an image to improve quality.
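Two of these common uses, inversion and thresholding, can be sketched in a few lines of plain Python. This is illustrative only; the Image Processing Tool performs the corresponding operations internally on its input image.

```python
def invert(img):
    """Negative of an unsigned 8-bit grey-scale image (list of rows)."""
    return [[255 - p for p in row] for row in img]

def threshold(img, level):
    """Binary image: pixels at or above `level` become 255, others 0 --
    a sketch of a histogram-threshold style operation."""
    return [[255 if p >= level else 0 for p in row] for row in img]

img = [[10, 130],
       [200, 60]]
print(invert(img))          # [[245, 125], [55, 195]]
print(threshold(img, 128))  # [[0, 255], [255, 0]]
```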
Figure 125 Example of an Image Processed by a Histogram Threshold Operation
Basic Steps for Configuring an Image Processing Tool
1. Select the tool that will provide Input images. See Input Image.
2. Select the tool that provides an Operand Image, if required. Many operations do not require
an operand image.
3. Select the Operation that will be performed by the tool.
4. In the Advanced Parameters, configure the parameters for the selected operation. See Advanced
Image Processing Tool Parameters.
5. Test and verify results. See Image Processing Tool Results.
Input Image
The Input Image required by the Image Processing Tool is an image provided by another tool in the
sequence. The Input image is the image that will be processed and modified by the Image Processing
Tool.
The Image Processing Tool cannot be frame-based, and the tool’s region of interest is always the entire
Input image. Therefore, this tool does not have any Location (positioning) parameters.
The Image Processing Tool processes grey-scale images only. If the Input
Image is a color image, the Image Processing Tool may fail to execute, or
may execute and output invalid results.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Image
Processing Tool.
Operand Image
Some image processing operations require an Operand Image. This operand image is provided by
another tool, either another Image Processing Tool in the sequence, or an Acquire Image tool.
• Some operations require a constant as a second operand. This constant must be defined in
the Advanced Parameters section of the tool interface.
• The Operand Image must be set to (none) if a constant must be applied as an operand.
Otherwise any selected Operand Image will override the selected constant.
Operation
The selected Operation corresponds to the process that will be applied to the input image by the Image
Processing Tool. Each Image Processing Tool can apply a single operation.
Once an operation is selected, parameters related to the operation such as clipping, scale, constant
(operand), and others, must be configured in the Advanced Parameters section.
Table 4 provides a list and short description of the available operations. For more information on a
specific operation, see the Image Processing Operations section.
Table 4 List of Available Operations
Name
Description
hsArithmeticAddition
Operand value (constant or Operand Image pixel) is added to the
corresponding pixel in the input image.
hsArithmeticSubtraction
Operand value (constant or Operand Image pixel) is subtracted
from the corresponding pixel in the input image.
hsArithmeticMultiplication
The input image pixel value is multiplied by the Operand value
(constant or corresponding Operand Image pixel).
hsArithmeticDivision
The input image pixel value is divided by the Operand value
(constant or corresponding Operand image pixel). The result is
scaled and clipped, and finally written to the output image.
hsArithmeticLightest
The Operand value (constant or Operand Image pixel) and
corresponding pixel in the input image are compared to find the
maximal value.
hsArithmeticDarkest
The Operand value (constant or Operand Image pixel) and
corresponding pixel in the input image are compared to find the
minimal value.
hsAssignmentInitialization
All the pixels of the output image are set to a specific constant
value. The height and width of the output image must be
specified.
hsAssignmentCopy
Each input image pixel is copied to the corresponding output
image pixel.
hsAssignmentInversion
The input image pixel value is inverted and the result is copied to
the corresponding output image pixel.
hsLogicalAnd
AND operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
hsLogicalNAnd
NAND operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
hsLogicalOr
OR operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
hsLogicalXOr
XOR operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
hsLogicalNOr
NOR operation is applied using the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
hsFilteringCustom
Applies a Custom filter.
hsFilteringAverage
Applies an Average filter.
hsFilteringLaplacian
Applies a Laplacian filter.
hsFilteringHorizontalSobel
Applies a Horizontal Sobel filter.
hsFilteringVerticalSobel
Applies a Vertical Sobel filter.
hsFilteringSharpen
Applies a Sharpen filter.
hsFilteringSharpenLow
Applies a SharpenLow filter.
hsFilteringHorizontalPrewitt
Applies a Horizontal Prewitt filter.
hsFilteringVerticalPrewitt
Applies a Vertical Prewitt filter.
hsFilteringGaussian
Applies a Gaussian filter.
hsFilteringHighPass
Applies a High Pass filter.
hsFilteringMedian
Applies a Median filter.
hsMorphologicalDilate
Sets each pixel in the output image as the largest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
hsMorphologicalErode
Sets each pixel in the output image as the smallest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
hsMorphologicalClose
Has the effect of removing small dark particles and holes within
objects.
hsMorphologicalOpen
Has the effect of removing peaks from an image, leaving only the
image background.
hsHistogramEqualization
Equalization operation enhances the Input Image by flattening the histogram of the Input Image.
hsHistogramStretching
Stretches (increases) the contrast in an image by applying a
simple piecewise linear intensity transformation based on the
histogram of the Input Image.
hsHistogramLightThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
minimum representable value. Otherwise, it is set to the maximum
representable value.
hsHistogramDarkThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
maximum representable value. Otherwise, it is set to the minimum
representable value.
hsTransformFFT
Converts and outputs a frequency description of the input image
by applying a Fast Fourier Transform (FFT).
hsTransformDCT
Converts and outputs a frequency description of the input image
by applying a Discrete Cosine Transform (DCT).
Related Topics
Image Processing Operations
Advanced Image Processing Tool Parameters
Image Processing Tool Results
Image Processing Operations
Each Image Processing Tool in an application performs a selected operation on an image, called the
input image. An image processing operation can also involve another image or a constant, as well as a
set of processing parameters.
Image Types
The tool can accept unsigned 8-bit, signed 16-bit and signed 32-bit images as input. The processing is
usually performed in the more defined type, based on input or operand image, or in a promoted type
(signed 16-bit) if needed.
The Image Processing Tool output is of the same type as the input image unless:
• the user overrides the type by setting another value or
• an output image already exists: the output image type remains the same unless otherwise
specified.
Elements of an Operation
An image processing operation requires at least one operand that is acted upon by an operation. For the
Image Processing Tool, this first operand is always the Input image. Some operations require a second
operand. This Operand can be an image, called the Operand image, or a constant. The basic elements
of an operation are illustrated in Figure 126. Furthermore, some operations involve other parameters
such as clipping, scaling and filters. Such parameters are discussed under the category of operation to
which they apply.
Input Image (required)  [Operation]  Operand (required for some operations: an Operand image
or a constant)  =  Output Image (an output image is produced at each iteration of the Image
Processing Tool)
Figure 126 Basic Elements of an Image Processing Operation
Input Image
An Input image is required as the first operand. The only type of operation that does not require an
Input Image is an Assignment operation.
Operation
The available operations are described in greater detail under the following sections: Arithmetic
Operations, Assignment Operations, Transform Operations, Logical Operations, Filtering Operations,
Morphological Operations, and Histogram Operations.
Operand
Operations that require a second operand can use either an Operand image or a constant.
Operand Image
An Operand Image is used by an operation that acts on two images. If an Operand image is specified it
will override the use of the constant specified for the operation.
• The Image Processing Tool applies logical and arithmetic operators, first to the Input image,
secondly to the Operand image.
• The Image Processing Tool can accept unsigned 8-bit, signed 16-bit ,and signed 32-bit images
as Operand image.
Constant
Any constant value specified for an operation will be overridden by an Operand image that has been
defined for the operation.
Output Image
The output image is the image resulting from an image processing operation.
• The user can specify the type of output images as either unsigned 8-bit, signed 16-bit or
signed 32-bit images.
• AdeptSight processes other than the Image Processing Tool can only take unsigned 8-bit
images as input.
Arithmetic Operations
Arithmetic operations are performed by promoting the input values of the source pixels (from the Input
image) and the Operand values to the more defined type, based on the Input Image, Operand image or
desired output image type. The results of the operation are converted according to the following rules:
• Destination pixel value = ClipMode (Result * Scale)
• Destination pixel value is truncated as necessary
Clipping Modes
Two clipping modes are available for arithmetic operations: normal and absolute.
Normal Clipping Mode
Normal Clipping mode forces the value of a destination pixel to a value from 0 to 255 for unsigned 8-bit
images, to a value from -32768 to 32767 for signed 16-bit images, or to a value from -2,147,483,648
to 2,147,483,647 for signed 32-bit images. Values that are less than the specified minimum value are
set to the minimum value. Values greater than the specified maximum value are set to the maximum
value.
Absolute Clipping Mode
The absolute clipping mode takes the absolute value of the result and clips it using the same algorithm
as for Normal Clipping mode.
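The two clipping modes can be sketched as follows. This is a minimal illustration in Python for an unsigned 8-bit result; the function name is ours, not part of the AdeptSight API.

```python
def clip(value, mode="normal", lo=0, hi=255):
    """Clip an operation result as described above.

    mode="normal"   -> clamp the value into [lo, hi]
    mode="absolute" -> take the absolute value first, then clamp
    """
    if mode == "absolute":
        value = abs(value)
    return max(lo, min(hi, value))

print(clip(-40))              # normal clipping: -40 -> 0
print(clip(-40, "absolute"))  # absolute clipping: |-40| -> 40
print(clip(300))              # above range: 300 -> 255
```

For a signed 16-bit image, the same logic applies with lo = -32768 and hi = 32767.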
Arithmetic Operation Modes
There are two Arithmetic operation modes. In the first, the operation is applied to every pixel of an input
image and the corresponding pixel in the Operand image. The result is written in the output image.
In the second mode, the operand is a constant, and it is used on every pixel of the input image and the
result is written in the output image.
The Image Processing Tool supports the following arithmetic operations: Addition, Subtraction,
Multiplication, Division, Lightest and Darkest.
Addition
The operand value (constant or Operand image pixel) is added to the corresponding pixel in the input
image. The result is scaled and clipped, and finally written to the output image.
Subtraction
The operand value (constant or Operand image pixel) is subtracted from the corresponding pixel in the
input image. The result is scaled and clipped, and finally written to the output image.
Division
The input image pixel value is divided by the operand value (constant or corresponding Operand image
pixel). The result is scaled and clipped, and finally written to the output image.
Multiplication
The input image pixel value is multiplied by the operand value (constant or corresponding Operand
image pixel). The result is scaled and clipped, and finally written to the output image.
Lightest (Maximum)
The operand value (constant or Operand image pixel) and corresponding pixel in the input image are
compared to find the maximal value. The result is scaled and clipped, and finally written to the output
image.
Darkest (Minimum)
The operand value (constant or Operand image pixel) and corresponding pixel in the input image are
compared to find the minimal value. The result is scaled and clipped, and finally written to the output
image.
Assignment Operations
Assignment operations promote the input values of the source pixels and the Operand values to the
more defined type, based on the input image, the Operand image or the desired output image type.
This type of operation does not support scaling or clipping. The Image Processing Tool provides the
following assignment operations: Initialization, Copy and Inversion.
Initialization
All the pixels of the output image are set to a specific constant value. The height and width of the output
image must be specified.
Copy
Each input image pixel is copied to the corresponding output image pixel.
Inversion
The input image pixel value is inverted and the result is copied to the corresponding output image pixel.
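For an unsigned image, Inversion reflects each pixel value around the full range. A small sketch (the function name is ours):

```python
def invert(pixel, bits=8):
    """Invert a pixel of an unsigned image: max value minus pixel."""
    return (2 ** bits - 1) - pixel

row = [0, 64, 200, 255]
print([invert(p) for p in row])  # [255, 191, 55, 0]
```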
Transform Operations
Transform operations convert and output a frequency description of the input image. The available
operations are a Fast Fourier Transform (FFT) and a Discrete Cosine Transform (DCT). These transforms
can be output as 1D Linear, 2D Linear, 2D Logarithmic or Histogram.
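What a "frequency description" of an image means can be seen with a toy one-dimensional DFT. This naive sketch is for illustration only; the tool's FFT is a fast, two-dimensional implementation.

```python
import cmath

def dft(samples):
    """Naive 1-D discrete Fourier transform (O(n^2), illustration only)."""
    n = len(samples)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, s in enumerate(samples))
            for k in range(n)]

# The zero-frequency (DC) term of the spectrum is the sum of the samples,
# so a bright image has a large DC component.
spectrum = dft([10, 20, 30, 40])
print(round(abs(spectrum[0])))  # 100
```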
Logical Operations
There are two logical operation modes. In the first, the operation is applied to every pixel of an input
image and the corresponding pixel in the Operand image. The result is written in the output image. In
the second mode, the operand is a constant, and it is used on every pixel of the input image and the
result is written in the output image. No scaling or clipping is supported for logical operations.
AND
The logical AND operation is applied using the operand value (constant or Operand image pixel) and the
corresponding pixel in the input image. The result is written to the output image.
NAND
The logical NAND operation is applied using the operand value (constant or Operand image pixel) and
the corresponding pixel in the input image. The result is written to the output image.
NOR
The logical NOR operation is applied using the operand value (constant or Operand image pixel) and
the corresponding pixel in the input image. The result is written to the output image.
OR
The logical OR operation is applied using the Operand value (constant or Operand image pixel) and the
corresponding pixel in the input image. The result is written to the output image.
XOR
The logical XOR operation is applied using the Operand value (constant or Operand image pixel) and
the corresponding pixel in the input image. The result is written to the output image.
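All five logical operations reduce to bitwise operators applied to the pixel values. A sketch for one pair of 8-bit values (the NAND and NOR results are masked back to 8 bits):

```python
a, operand = 0b11001100, 0b10101010

print(format(a & operand, "08b"))            # AND  -> 10001000
print(format(a | operand, "08b"))            # OR   -> 11101110
print(format(a ^ operand, "08b"))            # XOR  -> 01100110
print(format(~(a & operand) & 0xFF, "08b"))  # NAND -> 01110111
print(format(~(a | operand) & 0xFF, "08b"))  # NOR  -> 00010001
```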
Filtering Operations
A filtering operation can be described as the convolution of an input image using a square, rectangular
or linear kernel. The Image Processing Tool provides a set of defined filters as well as a custom filtering
operation that applies a user-defined kernel.
The predefined filters are: Average, Gaussian, Horizontal Prewitt, Vertical Prewitt, Horizontal Sobel,
Vertical Sobel, High Pass, Laplacian, Sharpen, SharpenLow and Median.
Figure 127 Example Of Image After Some Common Filtering Operations (panels: Initial Image,
Sharpen Low Filter, High Pass Filter)
Normal Clipping Mode
The Normal Clipping mode forces the destination pixel value to a value from 0 to 255 for unsigned 8-bit
images, to a value from -32768 to 32767 for signed 16-bit images, and so on.
Values less than the specified minimum value are set to the minimum value. Values greater than the
specified maximum value are set to the maximum value.
Absolute Clipping Mode
The Absolute Clipping mode takes the absolute value of the result and clips it using the same algorithm
as for Normal Clipping mode.
Creating A Custom Filter
AdeptSight enables the creation of a Custom Kernel for use in the Image Processing Tool.
Figure 128 Custom Kernel Dialog of the Image Processing Tool
To create a custom filter:
1. In the Image Processing Tool interface, expand the Advanced Parameters list.
2. Under Configuration, select the Operation parameter. In the right-hand column, select
hsFilteringCustom. This enables the tool to apply the custom filter that you will create in the
next steps.
3. Under Filtering, select the FilteringCustomKernel parameter.
4. In the right-hand column, click the Browse (...) icon. This opens the Custom Filter Properties
dialog, as illustrated in Figure 128.
5. In the Dimensions box, enter values for Width and Height of the kernel. Grid boxes in white
indicate kernel elements.
6. Enter the required value in each box of the kernel grid.
7. In the Anchor box, enter the X and Y positions of the kernel anchor, with respect to the defined
kernel. The box indicating the anchor position is identified by a different color.
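A custom kernel defined this way is applied by convolving it across the Input image. The following pure-Python sketch shows that mechanic; the function name is ours (not an AdeptSight call), and border pixels the kernel cannot cover are simply skipped for brevity.

```python
def apply_kernel(image, kernel, scale=1.0):
    """Correlate a small kernel over an image (lists of lists of pixels)."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - kh + 1):
        row = []
        for x in range(w - kw + 1):
            # Weighted sum of the neighborhood covered by the kernel.
            acc = sum(kernel[j][i] * image[y + j][x + i]
                      for j in range(kh) for i in range(kw))
            # Scale, truncate, then clip to the unsigned 8-bit range.
            row.append(max(0, min(255, int(acc * scale))))
        out.append(row)
    return out

# A 3x3 Average filter is the all-ones kernel scaled by 1/9.
image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
average = [[1] * 3] * 3
print(apply_kernel(image, average, scale=1 / 9))  # [[50]]
```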
Average Filter
The Average operation sets each pixel in the output image as the average of all the input image pixels
in the neighborhood defined by the selected kernel size. This has the effect of blurring the image,
especially edges.
The average filters are designed to remove noise. The kernel size can be 3, 5 or 7. The kernels used by
the Image Processing Tool are shown in Figure 129.
3x3 kernel (all elements 1):
 1 1 1
 1 1 1
 1 1 1
5x5 kernel: all 25 elements are 1.
7x7 kernel: all 49 elements are 1.
Figure 129 Average Filtering Kernels
Gaussian Filter
The Gaussian operation acts like a low pass filter. This has the effect of blurring the image. Gaussian
filters are designed to remove noise. The kernel size can be 3, 5 or 7. The kernels used by the Image
Processing Tool are shown in Figure 130.
3x3 kernel:
 1 2 1
 2 4 2
 1 2 1
5x5 kernel:
  2   7  12   7   2
  7  31  52  31   7
 12  52 127  52  12
  7  31  52  31   7
  2   7  12   7   2
7x7 kernel:
 1 1 2  2 2 1 1
 1 3 4  5 4 3 1
 2 4 7  8 7 4 2
 2 5 8 10 8 5 2
 2 4 7  8 7 4 2
 1 3 4  5 4 3 1
 1 1 2  2 2 1 1
Figure 130 Gaussian Filtering Kernels
Horizontal Prewitt Filter
The Horizontal Prewitt operation acts like a gradient filter. This has the effect of highlighting horizontal
edges (gradients) in the image. The kernel size used is 3 and it is shown in Figure 131. The absolute
clipping method is usually used with this filtering operation.
  1  1  1
  0  0  0
 -1 -1 -1
Figure 131 Horizontal Prewitt Filtering Kernel
Vertical Prewitt Filter
The Vertical Prewitt operation acts like a gradient filter. This has the effect of highlighting vertical
edges (gradients) in the image. The kernel size used is 3 and it is shown in Figure 132. The absolute
clipping method is usually used with this filtering operation.
 -1  0  1
 -1  0  1
 -1  0  1
Figure 132 Vertical Prewitt Filtering Kernel
Horizontal Sobel Filter
The Horizontal Sobel operation acts like a gradient filter. This has the effect of highlighting horizontal
edges (gradients) in the image. The kernel size can be 3, 5 or 7. The absolute clipping method is usually
used with this filtering operation. The kernels used by the Image Processing Tool are shown in Figure
133.
3x3 kernel:
  1  2  1
  0  0  0
 -1 -2 -1
5x5 kernel:
  1   4   7   4   1
  2  10  17  10   2
  0   0   0   0   0
 -2 -10 -17 -10  -2
 -1  -4  -7  -4  -1
7x7 kernel:
  1   4   9  13   9   4   1
  3  11  26  34  26  11   3
  3  13  30  40  30  13   3
  0   0   0   0   0   0   0
 -3 -13 -30 -40 -30 -13  -3
 -3 -11 -26 -34 -26 -11  -3
 -1  -4  -9 -13  -9  -4  -1
Figure 133 Horizontal Sobel Filtering Kernels
Vertical Sobel Filter
The Vertical Sobel operation acts like a gradient filter. This has the effect of highlighting vertical edges
(gradients) in the image. The kernel size can be 3, 5 or 7. The absolute clipping method is usually used
with this filtering operation. The kernels used by the Image Processing Tool are shown in Figure 134.
3x3 kernel:
 -1  0  1
 -2  0  2
 -1  0  1
5x5 kernel:
 -1  -2  0   2   1
 -4 -10  0  10   4
 -7 -17  0  17   7
 -4 -10  0  10   4
 -1  -2  0   2   1
7x7 kernel:
  -1  -3  -3   0   3   3   1
  -4 -11 -13   0  13  11   4
  -9 -26 -30   0  30  26   9
 -13 -34 -40   0  40  34  13
  -9 -26 -30   0  30  26   9
  -4 -11 -13   0  13  11   4
  -1  -3  -3   0   3   3   1
Figure 134 Vertical Sobel Filtering Kernels
High Pass
The High Pass operation acts like a circular gradient (high pass) filter. It essentially extracts high
frequency detail. This has the effect of highlighting all edges (gradients) in the image. The kernel size
can be 3, 5 or 7. The absolute clipping method is usually used with this filtering operation. The kernels
used by the Image Processing Tool are shown in Figure 135.
Each kernel contains -1 in every position except the center:
3x3 kernel: center value 8, all other elements -1.
5x5 kernel: center value 24, all other elements -1.
7x7 kernel: center value 48, all other elements -1.
Figure 135 High Pass Filtering Kernels
Laplacian Filter
The Laplacian operation also acts like a circular gradient filter. This has the effect of highlighting all
edges (gradients) in the image. The kernel size can be 3, 5 or 7. The absolute clipping method is usually
used with this filtering operation. The kernels used by the Image Processing Tool are shown in Figure
136.
3x3 kernel:
 -1 -1 -1
 -1  8 -1
 -1 -1 -1
5x5 kernel:
 -1 -3 -4 -3 -1
 -3  0  6  0 -3
 -4  6 20  6 -4
 -3  0  6  0 -3
 -1 -3 -4 -3 -1
7x7 kernel:
 -2 -3 -4 -6 -4 -3 -2
 -3 -5 -4 -3 -4 -5 -3
 -4 -4  9 20  9 -4 -4
 -6 -3 20 36 20 -3 -6
 -4 -4  9 20  9 -4 -4
 -3 -5 -4 -3 -4 -5 -3
 -2 -3 -4 -6 -4 -3 -2
Figure 136 Laplacian Filtering Kernels
Sharpen Filter
The Sharpen operation subtracts the average of the input image pixels, in the neighborhood defined by
the selected kernel size, from an amplified copy of each input pixel. This has the effect of
sharpening the image, especially edges. The kernel size can be 3, 5 or 7. The kernels used by the Image
Processing Tool are shown in Figure 137.
Each kernel contains -1 in every position except the center:
3x3 kernel: center value 9, all other elements -1.
5x5 kernel: center value 25, all other elements -1.
7x7 kernel: center value 49, all other elements -1.
Figure 137 Sharpen Filtering Kernels
SharpenLow Filter
The SharpenLow operation has the effect of sharpening and smoothing the image at the same time. The
kernel size can be 3, 5 or 7. The kernels used by the Image Processing Tool are shown in Figure 138.
3x3 kernel:
(1/8) *
 -1 -1 -1
 -1 16 -1
 -1 -1 -1
5x5 kernel:
 -1 -3 -4 -3 -1
 -3  0  6  0 -3
 -4  6 40  6 -4
 -3  0  6  0 -3
 -1 -3 -4 -3 -1
7x7 kernel:
 -2 -3 -4 -6 -4 -3 -2
 -3 -5 -4 -3 -4 -5 -3
 -4 -4  9 20  9 -4 -4
 -6 -3 20 72 20 -3 -6
 -4 -4  9 20  9 -4 -4
 -3 -5 -4 -3 -4 -5 -3
 -2 -3 -4 -6 -4 -3 -2
Figure 138 SharpenLow Filtering Kernels
Median
The Median operation sets each pixel in the output image as the median luminance of all the input image
pixels in the neighborhood defined by the selected kernel size. This has the effect of reducing impulsive
image noise without degrading edges or smudging intensity gradients.
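The median filter's behavior, removing impulses while preserving edges, is easiest to see in one dimension. A sketch (the tool itself operates on a 2-D neighborhood):

```python
def median3(row):
    """1-D median-of-3: each output value is the middle of a 3-value window."""
    return [sorted(row[i - 1:i + 2])[1] for i in range(1, len(row) - 1)]

# One impulsive noise spike (200) followed by a genuine edge (10 -> 90).
noisy_edge = [10, 10, 200, 10, 10, 90, 90]
print(median3(noisy_edge))  # [10, 10, 10, 10, 90]
```

The spike is removed, while the step from 10 to 90 survives unsmoothed.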
Morphological Operations
Morphological operations are used to eliminate or fill small and thin holes in objects, break objects at
thin points or connect nearby objects. These operations generally smooth the boundaries of objects
without significantly changing their area. The Image Processing Tool provides the following predefined
morphological operations, each of which can only be applied to a 3x3 neighborhood: Dilate, Erode, Close
and Open.
Figure 139 Example Of Image After Some Common Morphological Operations (panels: Initial
Image, Open, High Pass Filter)
Dilate
The Dilate operation sets each pixel in the output image as the largest luminance value of all the input
image pixels in the neighborhood defined by the selected kernel size. (Currently fixed to 3x3)
Erode
The Erode operation sets each pixel in the output image as the smallest luminance value of all the input
image pixels in the neighborhood defined by the selected kernel size. (Fixed to 3x3)
Close
The Close operation is equivalent to a Dilate operation followed by an Erode operation. This has the
effect of removing small dark particles and holes within objects.
Open
The Open operation is equivalent to an Erode operation followed by a Dilate operation. This has the
effect of removing peaks from an image, leaving only the image background.
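All four operations reduce to min/max selection over a 3x3 neighborhood, with Close and Open as compositions of the first two. A minimal sketch (illustrative only; interior pixels only, so each pass shrinks the image by a 1-pixel border):

```python
def morph(image, pick):
    """One 3x3 morphological pass: pick=min gives Erode, pick=max gives Dilate."""
    h, w = len(image), len(image[0])
    return [[pick(image[y + j][x + i] for j in range(3) for i in range(3))
             for x in range(w - 2)] for y in range(h - 2)]

img = [[0, 0,   0, 0, 0],
       [0, 0, 255, 0, 0],   # a single bright speck (peak)
       [0, 0,   0, 0, 0],
       [0, 0,   0, 0, 0],
       [0, 0,   0, 0, 0]]

opened = morph(morph(img, min), max)   # Open = Erode then Dilate
print(opened)  # [[0]] : the isolated peak has been removed
```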
Histogram Operations
The action of a histogram operation depends on the histogram of the Input Image. The Image
Processing Tool provides the following histogram operations, each of which can only be applied to an
unsigned 8-bit image: Equalization, Stretching, Light Threshold and Dark Threshold.
Figure 140 Example Of Image After Some Common Histogram Operations (panels: Initial Image,
Light Threshold, Dark Threshold)
Equalization
The Equalization operation enhances the Input Image by flattening the histogram of the Input Image.
Stretching
The Stretching operation stretches (increases) the contrast in an image by applying a simple piecewise
linear intensity transformation, based on the histogram of the input image.
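One simple form of such a stretch maps the darkest and lightest values found in the input onto the full 8-bit range. This sketches the idea only, not the tool's exact transformation:

```python
def stretch(pixels):
    """Linear contrast stretch: map [min, max] of the input onto [0, 255]."""
    lo, hi = min(pixels), max(pixels)
    span = hi - lo or 1          # avoid division by zero on flat images
    return [round((p - lo) * 255 / span) for p in pixels]

print(stretch([100, 110, 120]))  # [0, 128, 255]
```

A low-contrast band of values (100 to 120) now spans the whole greylevel range.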
Light Threshold
The Light Threshold operation changes each pixel value depending on whether it is less than or greater
than the specified threshold. If an input pixel value is less than the threshold, the corresponding output
pixel is set to the minimum representable value. Otherwise, it is set to the maximum representable value.
Dark Threshold
The Dark Threshold operation changes each pixel value depending on whether it is less than or greater
than the specified threshold. If an input pixel value is less than the threshold, the corresponding output
pixel is set to the maximum representable value. Otherwise, it is set to the minimum representable value.
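The two thresholding operations are mirror images of one another, as this sketch shows (illustrative Python, assuming unsigned 8-bit values):

```python
def threshold(pixels, level, mode="light"):
    """Light: values at or above the threshold go to 255, the rest to 0.
    Dark: the opposite mapping."""
    lo, hi = (0, 255) if mode == "light" else (255, 0)
    return [hi if p >= level else lo for p in pixels]

row = [12, 130, 250, 90]
print(threshold(row, 128, "light"))  # [0, 255, 255, 0]
print(threshold(row, 128, "dark"))   # [255, 0, 0, 255]
```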
Related Topics
Advanced Image Processing Tool Parameters
Image Processing Tool Results
Image Processing Tool Results
The Image Processing Tool outputs images that can be used by other vision tools. This tool does not
output frame results.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the processed image.
Figure 141 Representation of Image Processing Tool Results in Display and Results Grid
Grid of Results
The grid of results displays information on the processed image.
Description of Image Processing Tool Results
The Image Processing Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Image Processing Tool. Elapsed Time is not visible
in the results grid but it is output to the results log for each iteration of the Image Processing Tool.
Frame
ID of the output frame. This is always 0 because the Image Processing tool only outputs a single Image
result per execution.
Last Operation
Operation applied by the last execution of the Image Processing Tool.
Last Output Type
Type of the image output at the last execution of the Image Processing Tool.
Advanced Image Processing Tool Parameters
The Advanced Parameters section of the Image Processing Tool interface provides access to advanced
Image Processing Tool parameters and properties.
Arithmetic Parameters
Use this section to set the parameters for an arithmetic operation.
Arithmetic operations are applied in the following manner, depending on the type of operand.
• If the operand is an Operand Image, the operation is applied to every pixel of an input image
and the corresponding pixel in the Operand Image. The result is written in the output image.
• If the operand is a constant, the constant is applied to every pixel of the input image and
the result is written in the output image.
ArithmeticClippingMode
ArithmeticClippingMode sets the clipping mode applied by an arithmetic operation.
hsClippingNormal is the default mode.
• hsClippingNormal mode forces the destination pixel value to a value from 0 to 255 for
unsigned 8-bit images, to a value from -32768 to 32767 for signed 16-bit images, and so on.
Values that are less than the specified minimum value are set to the minimum value. Values
greater than the specified maximum value are set to the maximum value.
• hsClippingAbsolute mode takes the absolute value of the result and clips it using the same
algorithm as for the hsClippingNormal mode.
ArithmeticConstant
ArithmeticConstant defines a constant that is applied as an operand by an arithmetic operation. This constant is
applied only when no valid Operand Image is specified.
Arithmetic Scale
ArithmeticScale sets the scaling factor applied by an arithmetic operation. After the operation has
been applied, the value of each pixel is multiplied by the ArithmeticScale value.
Assignment Parameters
Use this section to set the parameters for an assignment operation.
Assignment operations promote the input values of the source pixels and the Operand values to the
more defined type, based on the input image, the Operand image or the desired output image type. This
type of operation does not support scaling or clipping.
Arithmetic Constant
ArithmeticConstant defines the constant that is applied as an operand by an arithmetic operation. This
constant is applied only when no valid Operand Image is specified.
Assignment Height
AssignmentHeight is a constant value that sets the height, in pixels, of the output image.
Assignment Width
AssignmentWidth is a constant value that defines the width, in pixels, of the output image.
Configuration Parameters
Use this section to set the operation applied by the tool as well as parameters related to the type of
image output by the Image Processing Tool.
By default an output image is of the same type as the input image, unless an
output image of another type already exists. The output image type remains
the same unless otherwise specified.
OverrideType
OverrideType sets the type of the image output by the tool, applied when the OverrideTypeEnabled
property is set to True. Supported image types are unsigned 8-bit, signed 16-bit, and signed 32-bit
images.
Override Type Enabled
Setting the OverrideTypeEnabled to True enables the tool to apply the value set by the
OverrideType parameter.
Filtering Parameters
Use this section to set the parameters for a filtering operation.
• Filtering operations do not apply an operand.
• For more information on filters and on creating custom filters, see the Filtering Operations
section.
Filtering Clipping Mode
FilteringClippingMode sets the clipping mode applied by a filtering operation. Typically, the
hsClippingAbsolute mode is used for filter operations.
• hsClippingNormal mode forces the destination pixel value to a value from 0 to 255 for
unsigned 8-bit images, to a value from -32768 to 32767 for signed 16-bit images, and so on.
Values that are less than the specified minimum value are set to the minimum value. Values
greater than the specified maximum value are set to the maximum value.
• hsClippingAbsolute mode takes the absolute value of the result and clips it using the same
algorithm as for the hsClippingNormal mode.
FilteringCustomKernel
FilteringCustomKernel displays the size of a defined custom kernel and provides access to the Custom
Kernel Properties dialog, in which you configure a kernel for a custom filtering operation.
For more information on this subject see the Creating A Custom Filter section.
Filtering Kernel Size
FilteringKernelSize sets the size of the kernel applied by a fixed (predefined) filtering operation.
Filtering Scale
FilteringScale sets the scaling factor applied by a filtering operation. After the operation has been
applied, the value of each pixel is multiplied by the FilteringScale value.
Histogram Parameters
Use this section to set the parameters for a histogram operation.
Histogram operations can only be applied to an unsigned 8-bit image.
Histogram Threshold
HistogramThreshold sets the threshold value applied by a histogram thresholding operation.
Logical Parameters
Use this section to set the constant operand for a logical operation.
Logical Constant
LogicalConstant defines the constant that is applied as an operand by a logical operation. This constant is
applied only when no valid Operand Image is specified.
Morphological Parameters
Use this section to set the parameters for a morphological operation.
Morphological operations are used to eliminate or fill small and thin holes in objects, break objects at
thin points or connect nearby objects. These operations generally smooth the boundaries of objects
without significantly changing their area.
Morphological Neighborhood Size
MorphologicalNeighborhoodSize sets the size of the neighborhood applied by a morphological operation. This value is
currently fixed at 3x3. No other values are allowed.
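As an illustration of how a fixed 3x3 neighborhood drives a morphological operation, the following sketch implements a binary 3x3 erosion in plain Python; it is a conceptual example, not the AdeptSight implementation.

```python
def erode3x3(img):
    """Binary 3x3 erosion: a pixel survives only if its whole
    3x3 neighborhood is set (border pixels are cleared)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = min(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out
```

Dilation is the dual operation (replace min with max); alternating the two smooths object boundaries and removes small holes, as described above.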
Results Parameters
Use this section to get or view results parameters.
LastOperation
Operation applied by the last execution of the Image Processing Tool.
LastOutputType
Type of the image output at the last execution of the Image Processing Tool.
Transform Parameters
Transform operations convert and output a frequency description of the input image. The available
operations are a Fast Fourier Transform (FFT) and a Discrete Cosine Transform (DCT). These transforms
can be output as 1D Linear, 2D Linear, 2D Logarithmic or Histogram.
TransformFlags
TransformFlags sets the flag used by a transform operation, either FFT or DCT.
Using the Image Histogram Tool
The Image Histogram Tool computes image statistics and provides the distribution of all the pixel values
contained in the tool’s region of interest. Pixels can be excluded from the distribution by thresholds or
tail functions. The final histogram ignores pixels that have been excluded.
This tool is typically used in applications where a greylevel distribution needs to be compared against an
ideal known distribution.
Typical cases for using the Image Histogram Tool include:
• Verifying, based on histogram results, that the zone around an object is clear of clutter and
that the object can therefore be gripped by a robot. Figure 142 illustrates an example in which
the tool is configured to examine the clear space between objects.
• Verifying and validating the camera iris adjustment or the lighting setup of an application.
Figure 142 Example of an Image Histogram Tool
Basic Steps for Configuring an Image Histogram Tool
1. Select the tool that will provide input images. See Input.
2. Position the Image Histogram Tool region of interest. See Location.
3. Configure parameters and image subsampling if required. See Configuring Image Histogram
Parameters.
4. Test and verify results. See Image Histogram Tool Results.
5. Configure Advanced Parameters if required. See Advanced Image Histogram Tool
Parameters.
Input
The Input required by the Image Histogram Tool is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Image
Histogram Tool.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 143 Positioning the Image Histogram Tool relative to a Frame (the tool is positioned relative to the frame identified by a blue marker)
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Image Histogram Tool is positioned relative to a frame of
reference provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Image Histogram Tool must be placed on all frames output by the frame-provider tool,
enable the All Frames check box.
4. If the Image Histogram Tool must only be applied to a single frame (output by the frame-provider tool), disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Image Histogram Tool.
Positioning the Image Histogram Tool
Positioning the tool defines the area of the image that will be processed by the Image Histogram Tool.
Location parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Image Histogram Tool:
1. Click Location. The Location dialog opens as shown in Figure 143. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Before configuring the Image Histogram Tool, execute the tool (or
sequence) at least once and verify in the display that the tool is being
positioned correctly in the image.
The display represents the region of interest of the Image Histogram
Tool as a green box.
Related Topics
Configuring Image Histogram Parameters
Configuring Image Histogram Parameters
The Image Histogram Tool calculates greylevel statistics for a selected region of interest. The final
histogram, for which the tool calculates the statistics, ignores pixels that have been excluded by
thresholds or tails.
Thresholds
Thresholds exclude a range of pixel values from the histogram, according to their greylevel value.
Black Threshold
The Black threshold excludes dark pixels, having a greylevel value lower than the threshold value. The
excluded pixels are not used to calculate histogram results.
Figure 144 Illustration of pixels excluded by a black threshold (dark pixels on the border of the image, with values lower than the black threshold value, are excluded from histogram results)
White Threshold
The White threshold excludes light pixels, having a greylevel value higher than the threshold value. The
excluded pixels are not used to calculate histogram results.
Tails
A tail specifies an amount of pixels to be removed from the dark or light end of the initial histogram. This
value is expressed as a percentage of the total number of pixels in the histogram before tails are removed.
Black Tail
A Black tail specifies the amount of dark pixels to exclude from the histogram, starting from the dark
end of the histogram distribution (0). The amount of pixels to exclude is expressed as a percentage of
the total number of pixels in the tool's region of interest.
White Tail
A White tail specifies the amount of light pixels to exclude from the histogram, starting from the light
end of the histogram distribution (255). The amount of pixels to exclude is expressed as a percentage of
the total number of pixels in the tool's region of interest.
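How thresholds and tails prune a 256-bin histogram before statistics are computed can be sketched as follows; the helper is hypothetical, not part of the AdeptSight API.

```python
def prune_histogram(hist, black_thresh=0, white_thresh=255,
                    black_tail_pct=0.0, white_tail_pct=0.0):
    """Return a pruned copy of a 256-bin histogram (hypothetical helper)."""
    # Thresholds: drop bins below the black or above the white threshold.
    h = [n if black_thresh <= g <= white_thresh else 0
         for g, n in enumerate(hist)]
    total = sum(hist)  # tails are a percentage of the pre-tail pixel count
    # Black tail: remove pixels starting from the dark end (0) upward.
    to_drop = int(total * black_tail_pct / 100)
    for g in range(256):
        take = min(h[g], to_drop)
        h[g] -= take
        to_drop -= take
    # White tail: remove pixels starting from the light end (255) downward.
    to_drop = int(total * white_tail_pct / 100)
    for g in range(255, -1, -1):
        take = min(h[g], to_drop)
        h[g] -= take
        to_drop -= take
    return h
```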
Image Subsampling
The image subsampling function coarsely resamples the image in the tool's region of interest.
Use the Image Subsampling slider to set a subsampling level from 1, which means no subsampling, to 8,
where a subsampled pixel represents a tile of 8 by 8 pixels in the original image.
Image subsampling can significantly improve execution speed but should be
used only in cases where the image does not have high-frequency transitions
or textures and in which the averaging process does not significantly affect
the statistics.
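The tile averaging behind subsampling can be sketched as follows: at level k, each subsampled pixel is the mean of a k-by-k tile of the original image. This is a conceptual illustration, not the AdeptSight implementation.

```python
def subsample(img, k):
    """Average each k-by-k tile of a greylevel image into one pixel."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy][x + dx] for dy in range(k) for dx in range(k))
             // (k * k)                      # integer mean of the tile
             for x in range(0, w - k + 1, k)]
            for y in range(0, h - k + 1, k)]
```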
Image Histogram Tool Results
The Image Histogram Tool outputs read-only results that provide statistical and general
information. This tool does not output frame results.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the region of interest of each instance of an Image Histogram Tool. If the
tool is frame-based, the frame numbers correspond to the frames that provided the positioning.
Figure 145 Representation of Image Histogram Tool Results in Display and Results Grid (green rectangles represent the region of interest of applied histogram tools)
Grid of Results
The grid of results presents the statistical results for the region of interest analyzed by the Image
Histogram Tool. These results can be saved to file by enabling the Results Log.
Description of Image Histogram Tool Results
The Image Histogram Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Image Histogram Tool. Elapsed Time is not visible
in the results grid but is output to the results log for each iteration of the Image Histogram Tool.
Frame
Frame identifies the number of the frame output by the Image Histogram Tool. If the tool is frame-based, this number corresponds to the input frame that provided the positioning.
Mean
The Mean of the greylevel distribution in the histogram.
Median
The Median of the greylevel distribution in the histogram.
Variance
The Variance of the greylevel distribution in the histogram.
Standard Deviation
The Standard Deviation of the greylevel distribution in the histogram.
Mode
The Mode of the greylevel distribution, which corresponds to the greylevel value for which there is the
highest number of pixels.
Mode Pixel Count
The Mode Pixel Count is the number of pixels in the histogram that corresponds to the mode value of
the greylevel distribution.
Minimum Greylevel
The Minimum Greylevel Value is the lowest greylevel value found in the histogram.
Maximum Greylevel
Maximum Greylevel Value is the highest greylevel value found in the histogram.
Greylevel Range
The Greylevel Range specifies the range of greylevel values in the histogram; this is equal to
[Maximum Greylevel Value - Minimum Greylevel Value + 1].
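The statistical results above can be computed directly from the final histogram, as in this sketch (illustrative only; the tool computes these results internally):

```python
def histogram_stats(hist):
    """Statistics for a 256-bin greylevel histogram."""
    n = sum(hist)
    mean = sum(g * c for g, c in enumerate(hist)) / n
    variance = sum(c * (g - mean) ** 2 for g, c in enumerate(hist)) / n
    mode = max(range(256), key=lambda g: hist[g])
    present = [g for g, c in enumerate(hist) if c]
    # Median: greylevel at which the cumulative count reaches half the pixels
    acc, median = 0, 0
    for g, c in enumerate(hist):
        acc += c
        if acc * 2 >= n:
            median = g
            break
    return {
        "Mean": mean, "Median": median, "Variance": variance,
        "Standard Deviation": variance ** 0.5,
        "Mode": mode, "Mode Pixel Count": hist[mode],
        "Minimum Greylevel": present[0], "Maximum Greylevel": present[-1],
        "Greylevel Range": present[-1] - present[0] + 1,  # max - min + 1
    }
```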
Tail Black Greylevel
The Tail Black Greylevel Value represents the darkest greylevel value that remains in the histogram
after a Black tail is removed.
Tail White Greylevel
The Tail White Greylevel Value represents the lightest greylevel value that remains in the histogram
after a White tail is removed.
Histogram Data
Numerical value that identifies the index number of the Histogram.
Histogram Pixel Count
Total number of pixels in the histogram. This is equal to the Image Pixel Count minus the pixels excluded
from the histogram by Thresholds or Tail constraints.
Image Pixel Count
Number of pixels in the tool region of interest. This is equal to Image Height x Image Width.
Image Width
X-axis length in pixels of the tool region of interest.
Image Height
Y-axis length in pixels of the tool region of interest.
Advanced Image Histogram Tool Parameters
The Advanced Parameters section of the Image Histogram Tool interface provides access to advanced
Image Histogram Tool parameters and properties.
Frame Transform Parameters
The Scale To Instance parameter is applicable only to an Image Histogram Tool that is frame-based,
and for which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the
Locator is configured to locate parts of varying scale, the Scale to Instance parameter determines the
effect of the scaled instances on the Image Histogram Tool.
Scale to Instance
When ScaleToInstance is True, the Image Histogram Tool region of interest is resized and positioned
relative to the change in scale of the Input frame. This is the recommended setting for most cases.
When ScaleToInstance is False, the Image Histogram Tool ignores the scale and positions its region of interest
relative to the input frame without adapting to the change in scale.
Location Parameters
Tool Position Parameters
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters section
gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Image Histogram Tool region of interest.
Width
Width of the Image Histogram Tool region of interest.
Rotation
Angle of rotation of the Image Histogram Tool region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 146 Location Properties of the Image Histogram Tool Region of Interest (Width, Height, X, Y, and Angle of Rotation)
Tool Sampling
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting the CustomSamplingStepEnabled to True
and modifying the CustomSamplingStep value.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
true (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To apply a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True, enables the tool to apply a custom sampling step
defined by SamplingStepCustom. When set to False (default) the tool applies the default, optimal
sampling step defined by SamplingStepDefault.
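The speed/precision tradeoff can be illustrated numerically: the number of samples the tool must gather shrinks roughly with the square of the sampling step. A simplified sketch, not the actual AdeptSight computation:

```python
def sample_count(roi_width, roi_height, step):
    """Approximate number of samples gathered over the ROI for a given
    sampling step (a sampled pixel is step units high and wide)."""
    return (roi_width // step) * (roi_height // step)

print(sample_count(200, 100, 1))  # 20000 samples
print(sample_count(200, 100, 2))  # 5000 samples: ~4x faster, less precise
```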
Using the Image Sharpness Tool
The Image Sharpness Tool computes the sharpness of preponderant edges in a user-defined region of
interest.
• A typical use of the Image Sharpness Tool is the verification or validation of the sharpness of
an image before it is processed by other tools.
• This tool can also be used as a building block for implementing an auto-focus procedure, which
consists of a motorized focus lens and uses the sharpness value to close the loop.
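The auto-focus idea can be sketched as a simple search over focus positions; get_sharpness and move_focus are hypothetical stand-ins for reading the tool's Sharpness result and driving a motorized lens, not AdeptSight API calls.

```python
def autofocus(get_sharpness, move_focus, positions):
    """Step through focus positions and settle on the sharpest one."""
    best_pos, best_sharpness = None, -1.0
    for pos in positions:
        move_focus(pos)             # hypothetical lens command
        s = get_sharpness()         # 0 (very blurry) .. 1000 (very sharp)
        if s > best_sharpness:
            best_pos, best_sharpness = pos, s
    move_focus(best_pos)            # return to the best position found
    return best_pos, best_sharpness
```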
Basic Steps for Configuring an Image Sharpness Tool
1. Select the tool that will provide input images. See Input.
2. Position the Image Sharpness Tool region of interest. See Location.
3. Test and verify results. See Image Sharpness Tool Results.
4. Configure Advanced Parameters if required. See Advanced Image Sharpness Tool
Parameters.
Input
The Input required by the Image Sharpness Tool is an image provided by another tool in the sequence.
• Input should be provided by an Acquire Image tool when the purpose of the tool is to analyze
the sharpness quality of camera images.
• Images can be provided by other tools such as an Image Processing tool if the purpose of the
tool is to analyze the sharpness of processed image.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Image
Sharpness Tool.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 147 Positioning the Image Sharpness Tool
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Image Sharpness Tool is positioned relative to a frame of
reference provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Image Sharpness Tool must be placed on all frames output by the frame-provider tool,
enable the All Frames check box.
4. If the Image Sharpness Tool must only be applied to a single frame (output by the frame-provider tool), disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Image Sharpness Tool.
Before configuring the Image Sharpness Tool, execute the tool (or sequence)
at least once and verify in the display that the tool is being positioned
correctly in the image.
The display represents the region of interest of the Image Sharpness Tool as
a green box.
Positioning the Image Sharpness Tool
Positioning the tool defines the area of the image that will be processed by the Image Sharpness Tool.
To position the Image Sharpness Tool:
1. Click Location. The Location dialog opens as shown in Figure 147. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Related Topics
Image Sharpness Tool Results
Image Sharpness Basics
Advanced Parameters
Image Sharpness Tool Results
The Image Sharpness Tool outputs read-only results that provide statistical and general information.
This tool does not output frame results.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the region of interest of each instance of the Image Sharpness Tool. If
the tool is frame-based, the frame numbers correspond to the frames that provided the positioning.
Figure 148 Representation of Image Sharpness Tool Results in Display and Results Grid
Grid of Results
The grid of results presents the statistical results for the region of interest analyzed by the Image
Sharpness Tool. These results can be saved to file by enabling the Results Log.
Description of Image Sharpness Tool Results
The Image Sharpness Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Image Sharpness Tool. Elapsed Time is not visible
in the results grid but is output to the results log for each iteration of the Image Sharpness Tool.
Frame
Frame identifies the number of the frame output by the Image Sharpness Tool. If the tool is frame-based, this number corresponds to the input frame that provided the positioning.
Sharpness
Sharpness is the average sharpness value calculated for the current instance. The sharpness value
ranges from a maximum of 1000, indicating a very sharp image, to 0 indicating a very blurry image.
Measurement Points
MeasurementPointsCount is the number of points actually used to measure the average sharpness
for the current region of interest. This can be less than the number of Candidate Points set in the
Configuration panel.
Measurement Points are the points used to calculate the average Sharpness result. Only the Candidate
Points that meet the Standard Deviation Threshold are retained as measurement points.
Sharpness History Size
SharpnessHistorySize is the size of the array used to store the history of previous sharpness values,
which are used to calculate the SharpnessPeak. When the number of tool executions exceeds the
history size, earliest results are dropped and newer results are added to the array.
Sharpness Peak
SharpnessPeak is the maximum average sharpness value computed by the tool since the history was
reset.
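The history behavior can be sketched with a fixed-size buffer: when the buffer is full, the earliest value is dropped as newer values arrive. This is illustrative only; the class and property names are not the AdeptSight API.

```python
from collections import deque

class SharpnessHistory:
    """Fixed-size history of sharpness values (hypothetical sketch)."""
    def __init__(self, size):
        self.values = deque(maxlen=size)  # oldest entries fall off the left

    def add(self, sharpness):
        self.values.append(sharpness)

    @property
    def peak(self):
        # Maximum sharpness seen in the stored history (0.0 when empty)
        return max(self.values) if self.values else 0.0

    def reset(self):
        self.values.clear()
```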
Advanced Image Sharpness Tool Parameters
Image Sharpness Basics
The Image Sharpness Tool operates by first identifying a set of points with high local grey-scale
variation and then applying an autocorrelation method to calculate an average sharpness factor from
these points.
Candidate Points
Candidate Points are the points with the highest local grey-scale variation in the region of interest. These
points are candidates at which a sharpness measurement will be made if the local variation is sufficient.
The number of candidate points is, by default, automatically set by the tool, based on the size of the
region of interest.
When the tool is executed, it first scans the region of interest and identifies a number of candidate
points where the local standard deviation is the highest. It then evaluates the sharpness at each
candidate location that has a local standard deviation above the Standard Deviation Threshold. The
locations where the sharpness is actually measured become Measurement Points.
• Candidate points are by default set automatically by the tool. When the default, and
recommended, Automatic setting is enabled, the tool uses 500 Candidate Points for a region
of interest 320x240 pixels or larger in size. If the area is smaller than 320x240, the number of
Candidate Points is equal to: width x height x (500 / (320 x 240)).
• The number of candidate points can be set manually by entering a value for the Candidate
Point Count parameter.
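The automatic rule above can be written out as follows (integer arithmetic is used here for clarity; a sketch, not the exact AdeptSight computation):

```python
def candidate_count(width, height):
    """500 points for a 320x240 (or larger) ROI; proportionally fewer below."""
    if width * height >= 320 * 240:
        return 500
    return (width * height * 500) // (320 * 240)

print(candidate_count(320, 240))  # 500
print(candidate_count(160, 120))  # 125 (a quarter of the reference area)
```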
Standard Deviation Threshold
Standard Deviation Threshold sets the minimum standard deviation required for a Candidate
Point to be used as a Measurement Point for calculating the average image sharpness.
When the tool is executed, it scans the region of interest and identifies a number of candidate locations,
set by Candidate Points Count, where the local standard deviation is the highest. Points having a
standard deviation equal to or above the threshold are used by the tool as the measurement points for
calculating the average image sharpness.
Sharpness Operator
The Sharpness Operator is a processing operation that evaluates the blurriness at a Measurement Point
using a local autocorrelation method.
The default kernel size of 5 should be appropriate in typical applications. However, the kernel should be
larger than the number of pixels over which a typical contrast is spread.
A larger kernel may be used for blurrier images, for example in the case where the blurriness of the
contrast is larger than the default kernel value. This is illustrated in Figure 315, where the contrast is
about 6-8 pixels wide. Note the difference in values obtained with different kernel sizes. With a 7X7
kernel, all the candidate points are used as measurement points. Larger kernels subsequently have
almost no impact on the Sharpness result.
A smaller kernel size, 2 or 3 for example, may be helpful for images with fine details or for images
constituted of fine high-frequency textures.
Advanced Parameters
The Advanced Parameters section of the Image Sharpness Tool interface provides access to advanced
Image Sharpness Tool parameters and properties.
Configuration Parameters
Automatic Candidate Count Enabled
When AutomaticCandidateCountEnabled is True the number of candidate measurement points is
automatically determined according to the dimension of the tool's region of interest. When
AutomaticCandidateCountEnabled is False, the number of candidate measurement points is set
manually through the CandidatePointsCount property.
Candidate Points Count
CandidatePointsCount sets the maximum number of points that can be used by the tool to calculate
the image sharpness.
• When AutomaticCandidateCountEnabled is set to True, the CandidatePointsCount value
is determined and set by the tool, according to the dimension of the tool's region of interest.
This is the recommended setting.
• When AutomaticCandidateCountEnabled is set to False, you must manually
provide a value.
Kernel Size
KernelSize sets the size of the kernel of the operator for the sharpness process. The default setting of
5 (a 5X5 kernel) is generally sufficient for most cases.
Sharpness History Scale
SharpnessHistoryScale sets the range of values that can be displayed.
• When hsScaleAutomatic is selected, the tool automatically sets the scale.
• Other settings allow you to select a scale. All results are clipped back to the maximum scale
value. For example, when hsScale0100 is selected, all Sharpness results over 100 are
displayed as a value of 100.
Standard Deviation Threshold
StandardDeviationThreshold sets the minimum allowable Standard Deviation for a candidate point
to be accepted as a measurement point.
Frame Transform Parameters
The Scale To Instance parameter is applicable only to an Image Sharpness Tool that is frame-based,
and for which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the
Locator is configured to locate parts of varying scale, the Scale To Instance parameter determines the
effect of the scaled instances on the Image Sharpness Tool.
Scale To Instance
When ScaleToInstance is True, the Image Sharpness Tool region of interest is resized and positioned
relative to the change in scale of the Input frame. This is the recommended setting for most cases.
When ScaleToInstance is False, the Image Sharpness Tool ignores the scale and positions its region of interest
relative to the input frame without adapting to the change in scale.
Figure 149 Effect of Scale To Instance Parameter (tool ROI on scaled and non-scaled objects, with Scale to Instance enabled vs. disabled)
Location Parameters
Tool Position Parameters
Most tool Location parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters section
gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Image Sharpness Tool region of interest.
Width
Width of the Image Sharpness Tool region of interest.
Rotation
Angle of rotation of the Image Sharpness Tool region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 150 Location Properties of the Image Sharpness Tool Region of Interest (Width, Height, X, Y, Angle of Rotation)
Tool Sampling Parameters
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool to sample the image. This
default sampling step is usually recommended. SamplingStepDefault is automatically used by the tool
if SamplingStepCustomEnabled is True.
Sampling Step
SamplingStep is the sampling step used by the tool in the last execution. A default SamplingStep is
computed by the tool, based on the average size, in calibrated units, of a pixel in the input image.
Sampling Step Custom
SamplingStepCustom allows you to set a sampling step value other than the default sampling step. To
set a custom sampling step, SamplingStepCustomEnabled must be set to False.
SamplingStepCustomEnabled
If SamplingStepCustomEnabled is True, the SamplingStepCustom value is used to sample the image instead of SamplingStepDefault.
Using the Sampling Tool
The Sampling Tool is used to extract an area of an image and output it as a separate Image.
Figure 151 Example of a Sampling Tool (input image and output sampled image)
Basic Steps for Configuring a Sampling Tool
1. Select the tool that will provide input images. See Input.
2. Position the Sampling Tool region of interest. See Location.
3. Test and verify results. See Sampling Tool Results.
4. Configure Advanced Parameters if required. See Configuring Advanced Sampling Tool
Parameters.
Input
The Input required by the Sampling Tool is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Sampling Tool.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 152 Positioning the Sampling Tool
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Sampling Tool is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
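The difference between the two positioning modes can be illustrated with a short Python sketch (hypothetical helper, not the AdeptSight API): in frame-based mode the ROI position is transformed by the frame-provider's origin and rotation, while in image-based mode it is taken directly in image coordinates.

```python
import math

def roi_center_in_image(roi_x, roi_y, frame=None):
    """Compute the ROI center in image coordinates.

    frame is (origin_x, origin_y, rotation_deg) from a frame-provider tool
    (frame-based positioning), or None for image-based positioning, in
    which case the ROI coordinates are already in the image frame of
    reference.
    """
    if frame is None:          # image-based: fixed area of the image
        return (roi_x, roi_y)
    fx, fy, deg = frame        # frame-based: follow the located object
    a = math.radians(deg)
    return (fx + roi_x * math.cos(a) - roi_y * math.sin(a),
            fy + roi_x * math.sin(a) + roi_y * math.cos(a))

print(roi_center_in_image(10.0, 0.0))                    # (10.0, 0.0)
print(roi_center_in_image(10.0, 0.0, (50.0, 20.0, 90)))  # ~(50.0, 30.0)
```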
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Sampling Tool must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Sampling Tool must only be applied to a single frame (output by the frame-provider tool), disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Sampling Tool.
Positioning the Sampling Tool
Positioning the tool defines the area of the image that will be processed by the Sampling Tool. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Sampling Tool:
1. Click Location. The Location dialog opens as shown in Figure 152. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
4. If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Related Topics
Sampling Tool Results
Configuring Advanced Sampling Tool Parameters
Sampling Tool Results
The Sampling Tool outputs read-only results as well as an image result.
Note: The image output by the Sampling Tool can be used as an image input by other tools ONLY if the Sampling Tool is NOT frame-based.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
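The append behaviour of the results log can be sketched as follows (illustrative Python only; the file name and result fields are examples, not the tool's actual log format):

```python
from datetime import datetime

def append_results(log_path, results):
    """Append one execution's date, time, and results to a log file,
    mimicking the results-log behaviour described above: each execution
    adds one timestamped, tab-separated line."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    line = stamp + "\t" + "\t".join(str(v) for v in results)
    with open(log_path, "a") as f:
        f.write(line + "\n")

# Example: frame number, a coordinate, and a contrast value.
append_results("results.log", [0, 12.5, 3.4])
```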
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents only a single sampled image when the display is in "non-calibrated" mode.
When the Sampling Tool outputs more than one sampled image, all the sampled images can be viewed only when the display is in "calibrated" mode, as shown in Figure 153.
Figure 153 Representation of multiple frame-based Sampling Tool results (click the 'Calibrated' icon to display multiple sampled images)
Grid of Results
The grid of results presents the statistical results for the region of interest analyzed by the Sampling Tool.
These results can be saved to file by enabling the Results Log.
Description of Sampling Tool Results
The Sampling Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Sampling Tool. Elapsed Time is not visible in the results grid, but it is output to the results log for each iteration of the Sampling Tool.
Frame
Frame identifies the number of the frame output by the Sampling Tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Pixel Width
PixelWidth is the calibrated width of a single pixel in the sampled image, expressed in mm.
Pixel Height
PixelHeight is the calibrated height of a single pixel in the sampled image, expressed in mm.
Calibrated Image Width
CalibratedImageWidth is the total calibrated width of the sampled Image, expressed in mm.
Calibrated Image Height
CalibratedImageHeight is the total calibrated height of the sampled Image, expressed in mm.
Image Width
ImageWidth is the uncalibrated width of the sampled Image in pixels.
Image Height
ImageHeight is the uncalibrated height of the sampled Image in pixels.
Image Bottom Left X
ImageBottomLeftX is the X coordinate of the bottom left corner of the sampled Image's bounding box
with respect to the selected coordinate system, expressed in mm.
Image Bottom Left Y
ImageBottomLeftY is the Y coordinate of the bottom left corner of the sampled Image's bounding box
with respect to the selected coordinate system, expressed in mm.
Image Bottom Right X
ImageBottomRightX is the X coordinate of the bottom right corner of the sampled Image's bounding box with respect to the selected coordinate system, expressed in mm.
Image Bottom Right Y
ImageBottomRightY is the Y coordinate of the bottom right corner of the sampled Image's bounding box with respect to the selected coordinate system, expressed in mm.
Image Top Left X
ImageTopLeftX is the X coordinate of the top left corner of the sampled Image's bounding box with
respect to the selected coordinate system, expressed in mm.
Image Top Left Y
ImageTopLeftY is the Y coordinate of the top left corner of the sampled Image's bounding box with
respect to the selected coordinate system, expressed in mm.
Image Top Right X
ImageTopRightX is the X coordinate of the top right corner of the sampled Image's bounding box with
respect to the selected coordinate system, expressed in mm.
Image Top Right Y
ImageTopRightY is the Y coordinate of the top right corner of the sampled Image's bounding box with respect to the selected coordinate system, expressed in mm.
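The pixel results and the calibrated results above are related: the calibrated image size is the pixel size multiplied by the calibrated size of a pixel. A minimal sketch (hypothetical helper, not the AdeptSight API):

```python
def calibrated_size(image_w_px, image_h_px, pixel_w_mm, pixel_h_mm):
    """Relate the pixel results to the calibrated results:
    CalibratedImageWidth  = ImageWidth  * PixelWidth
    CalibratedImageHeight = ImageHeight * PixelHeight
    """
    return (image_w_px * pixel_w_mm, image_h_px * pixel_h_mm)

# A 640 x 480 pixel sampled image with 0.25 mm square pixels
# covers 160 x 120 mm in calibrated units.
print(calibrated_size(640, 480, 0.25, 0.25))  # (160.0, 120.0)
```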
Figure 154 Illustration of Sampling Tool Results (bounding-box corners shown in the Tool, Object, Image, and World coordinate systems)
Configuring Advanced Sampling Tool Parameters
The Advanced Parameters section of the Sampling Tool interface provides access to advanced
Sampling Tool parameters and properties.
Configuration Parameters
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Sampling Tool processes images in the format in which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Sampling Tool processes only the grey-scale information in the input image, regardless of the format in which the images are provided. This can reduce the execution time when color processing is not required.
Frame Transform Parameters
The Scale To Instance parameter is applicable only to a Sampling Tool that is frame-based, and for
which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale To Instance parameter determines the effect of
the scaled instances on the Sampling Tool.
Scale To Instance
When ScaleToInstance is True, the Sampling Tool region of interest is resized and positioned relative to the change in scale of the Input frame. This is the recommended setting for most cases. When ScaleToInstance is False, the Sampling Tool ignores the scale and builds its frame relative to the input frame without adapting to the change in scale.
Figure 155 Effect of Scale To Instance Parameter (tool ROI on scaled and non-scaled objects, with Scale to Instance enabled vs. disabled)
Location Parameters
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Sampling Tool region of interest.
Width
Width of the Sampling Tool region of interest.
Rotation
Angle of rotation of the Sampling Tool region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 156 Illustration of Tool Position for a Sector-type Region of Interest (Width, Height, X, Y, Angle of Rotation)
Tool Sampling
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to create a required tradeoff between speed and
precision.
Bilinear Interpolation
In the sampled region of interest, the axes are rarely aligned with the grid of pixels that constitute the input image, especially in frame-based positioning mode. Without interpolation, any given pixel within the region of interest is assigned the value of the image pixel closest to the sampled pixel's center. This results in jaggedness and loss of precision. The bilinear interpolation function smoothes out the
jaggedness within the sampled image by attributing to each pixel a value interpolated from values of
neighboring pixels.
When subpixel precision is required in an inspection application, bilinear interpolation should always be
enabled for the sampling process. For applications where the speed requirements are more critical than
precision, non-interpolated sampling can be used.
Figure 157 Effect of Bilinear Interpolation (bilinear interpolation enabled vs. disabled)
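Bilinear interpolation itself is a standard technique; the following sketch (illustrative Python, not the AdeptSight implementation) shows how a fractional sample position blends the four neighbouring pixel values:

```python
def bilinear(image, x, y):
    """Sample `image` (a list of rows of grey values) at fractional (x, y)
    by bilinear interpolation of the four neighbouring pixels."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    x1 = min(x0 + 1, len(image[0]) - 1)   # clamp at the image border
    y1 = min(y0 + 1, len(image) - 1)
    top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
    bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # 100.0: the average of the four neighbours
print(bilinear(img, 0.0, 0.0))  # 0.0: on a pixel center, no blending occurs
```

Disabling interpolation corresponds to simply taking `image[round(y)][round(x)]`, which is faster but produces the jaggedness described above.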
Sampling Step
In the sampled region of interest, all pixels are the same size and are square. The sampling step defines
the height and width in calibrated units of each of the pixels in the tool’s region of interest.
• A default SamplingStep is computed by the tool, based on the average size, in calibrated units, of a pixel in the input image. This default sampling step is usually recommended.
• For specific applications where a more appropriate tradeoff between speed and precision must be established, the sampling step can be modified by setting SamplingStepCustomEnabled to True and modifying the SamplingStepCustom value.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the execution time. Undersampling can be useful in applications where an approximate measure is sufficient.
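The speed/precision tradeoff follows directly from how many sampled pixels the ROI produces. A minimal sketch (hypothetical helper, not the AdeptSight API):

```python
def sampled_grid_size(roi_w_mm, roi_h_mm, sampling_step_mm):
    """Number of sampled pixels in the ROI: each sampled pixel is a square
    of side sampling_step_mm, so a larger step yields fewer pixels (faster,
    less precise) and a smaller step yields more pixels (slower, more
    precise)."""
    cols = max(1, round(roi_w_mm / sampling_step_mm))
    rows = max(1, round(roi_h_mm / sampling_step_mm))
    return cols, rows

print(sampled_grid_size(20.0, 10.0, 0.1))  # (200, 100): fine sampling
print(sampled_grid_size(20.0, 10.0, 0.5))  # (40, 20): coarse, ~25x fewer pixels
```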
Results Parameters
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool (hsTool).
Image Height
ImageHeight is the height of the tool's region of interest expressed in pixels. Read-only.
Image Width
ImageWidth is the width of the tool's region of interest expressed in pixels. Read-only.
Using the Point Finder Tool
The Point Finder finds and locates point-type features on objects and returns the coordinates of the
found point.
Basic Steps for Configuring a Point Finder
1. Select the tool that will provide input images. See Input.
2. Position the Point Finder tool. See Location.
3. Test and verify results. See Point Finder Results.
4. Configure Advanced properties if required. See Configuring Advanced Point Finder
Parameters.
Input
The Input required by the Point Finder is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Point Finder.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 158 Positioning the Point Finder Tool (position the bounding box so that the point to find is close to the Guideline)
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Point Finder is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Point Finder must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Point Finder must only be applied to a single frame (output by the frame-provider tool), disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Point Finder.
Positioning the Point Finder
Positioning the tool defines the area of the image that will be processed by the Point Finder. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Point Finder:
1. Click Location. The Location dialog opens as shown in Figure 158. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
4. The orange Guideline marker can be displaced along the X-axis. The Guideline acts as both a
visual guide for positioning the tool and as a constraint for the tool's Search Mode. Guideline
Offset is the offset from the tool's X-axis.
Important: The tool searches for a point on an edge that is parallel to the Y-Axis, moving through the region of interest in a negative-to-positive direction relative to the X-Axis.
Best results are generally obtained when the Guideline is placed on, or very close to, the point to be found.
Use Search and Edge Detection parameters (Advanced Parameters) to further
configure and refine the finding of the correct point entity.
Related Topics
Configuring Advanced Point Finder Parameters
Point Finder Results
The Point Finder outputs two types of results: Frames and Results for each point found by the tool.
• Frames output by the Point Finder do not provide an orientation, and therefore this tool is not recommended for frame-based positioning of other AdeptSight tools. The output frames are represented in the display and numbered, starting at 0.
• Results for each point found by the Point Finder tool are shown in the grid of results, below the display, as illustrated in Figure 159.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents each frame output by the Point Finder, as well as the edges found in each
frame.
Figure 159 Representation of Point Finder Results in Display and Results Grid (green rectangles represent output frames; a red dot represents the found point entity)
Grid of Results
The grid of results presents the results for all points found by the Point Finder tool. Results include the score and position for each edge. These results can be saved to file by enabling the Results Log.
Description of Point Finder Results
The Point Finder outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Point Finder. Elapsed Time is not visible in the results grid, but it is output to the results log for each iteration of the Point Finder.
Frame
Frame identifies the number of the frame output by the Point Finder tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Position X
The X coordinate of the center point for each edge segment.
Position Y
The Y coordinate of the center point for each edge segment.
Average Contrast
The average contrast of the edges used to calculate the point entity.
Configuring Advanced Point Finder Parameters
The Advanced Parameters section of the Point Finder tool interface provides access to advanced Point
Finder parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Point Finder processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Point Finder processes only the grey-scale
information in the input image, regardless of the format in which the images are provided.
This can reduce the execution time when color processing is not required.
Edge Detection
Finder tools detect edges in the input images then use edges to generate a vectorized description called
an entity.
• Edge Detection parameters modify the quality and quantity of edges that are generated from
the input image.
• Edges are detected parallel to the Y-Axis, moving through the region of interest in a negative-to-positive direction relative to the X-Axis.
Figure 160 Positioning the Point Finder Tool
Contrast Threshold
ContrastThreshold sets the minimum contrast needed for an edge to be detected in the input image. The threshold value expresses the step in light values required to detect edges.
• This value can be set manually only when ContrastThresholdMode is set to FixedValue.
• Higher values reduce sensitivity to contrast. This reduces noise and the amount of low-contrast edges.
• Lower values increase sensitivity and detect a greater amount of edges, at the expense of adding more noise. This may generate false detections and/or slow down the search process.
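The effect of the contrast threshold can be illustrated on a one-dimensional row of grey values (illustrative Python sketch with a hypothetical helper, not the AdeptSight edge detector):

```python
def detect_edges(row, threshold):
    """Mark edge positions in a 1-D row of grey values: an edge is
    detected wherever the step between neighbouring pixels meets the
    contrast threshold. A higher threshold keeps only strong edges; a
    lower threshold also admits weak edges and noise."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) >= threshold]

row = [10, 12, 11, 200, 198, 60, 61]
print(detect_edges(row, 50))  # [3, 5]: only the two strong transitions
print(detect_edges(row, 1))   # every small fluctuation also triggers
```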
ContrastThresholdMode
Contrast Threshold Mode defines how contrast threshold is set. Contrast threshold is the level of
sensitivity that is applied to the detection of edges in the input image. The contrast threshold can be
either Adaptive, or Fixed.
Adaptive thresholds set a sensitivity level based on image content. This provides flexibility to variations
in image lighting conditions and variations in contrast during the Search process.
• AdaptiveLowSensitivity uses a low sensitivity adaptive threshold for detecting edges.
AdaptiveLowSensitivity detects strongly defined edges and eliminates noise, at the risk of
losing significant edge segments.
• AdaptiveNormalSensitivity sets a default sensitivity threshold for detecting edges.
• AdaptiveHighSensitivity detects a great amount of low-contrast edges and noise.
• FixedValue sets an absolute value for the sensitivity to contrast. A typical situation for the
use of a fixed value is a setting in which there is little variance in lighting conditions.
Subsampling Level
SubsamplingLevel sets the subsampling level used to detect edges that are used by the tool to
generate hypotheses.
• High values provide a coarser search with a shorter execution time.
• Lower values can provide a more refined search with slower execution time.
• A higher subsampling value may help improve accuracy in blurry images.
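As an illustration of the speed/refinement tradeoff, the sketch below subsamples a row of grey values; the mapping of level to step (a power of two) is an assumption for illustration only, not the documented AdeptSight behaviour:

```python
def subsample(row, level):
    """Subsample a 1-D row of grey values. Higher levels keep fewer
    samples (coarser, faster to scan); lower levels keep more samples
    (finer, slower). Assumes step = 2**level for illustration."""
    step = 2 ** level
    return row[::step]

row = list(range(16))
print(len(subsample(row, 0)))  # 16: full resolution
print(len(subsample(row, 2)))  # 4: coarse, about 4x fewer samples to scan
```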
Frame Transform
The Scale to Instance parameter is applicable only to a Point Finder that is frame-based, and for which
the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Point Finder.
Scale to Instance
When ScaleToInstance is True, the Point Finder region of interest is resized and positioned relative to the change in scale of the Input frame. This is the recommended setting for most cases. When ScaleToInstance is False, the Point Finder ignores the scale and builds its frame relative to the input frame without adapting to the change in scale.
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Guideline Offset
The Guideline Offset is the offset from the tool's X-axis. The Guideline marker can be displaced along
the X-axis. This marker acts as both a visual guide for positioning the tool and as a constraint for the
tool's Search Mode.
Height
Height of the Point Finder region of interest.
Width
Width of the Point Finder region of interest.
Rotation
Angle of rotation of the Point Finder region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 161 Location Properties for the Point Finder Region of Interest (Width, Height, X, Y, Guideline offset from the X axis, Angle of Rotation)
Output
Output Entity Enabled
OutputEntityEnabled specifies whether a found entity will be output to the runtime database.
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool (hsTool).
Found
Found specifies if an entity was found. If True, then at least one point entity was found in the current
image.
Search
Connectivity
Connectivity sets the minimum number of connected edges required to generate a point hypothesis.
By default, Connectivity is disabled. When enabled, you can set the minimum number of connected
edges that are required to generate a point hypothesis.
Connectivity Enabled
When ConnectivityEnabled is set to True, the tool uses the value of the Connectivity property to
generate a point hypothesis.
Interpolate Position
InterpolatePositionMode sets the mode used by the tool to compute a point hypothesis. By default, position interpolation is disabled. When enabled, you can select one of the following modes:
• Corner: The tool will compute a hypothesis that fits a corner point to interpolated lines from
connected edges.
• Intersection: The tool will compute a hypothesis that is an intersection between the search
axis and connected edges of an interpolated line.
Interpolate Position Mode Enabled
When InterpolatePositionModeEnabled is set to True, the tool uses the value set by the
InterpolatePositionMode parameter to compute a point hypothesis. Otherwise, point hypothesis
coordinates are taken directly from a specific found edge that satisfies search constraints.
Polarity Mode
PolarityMode sets the mode that will apply to the search for entities. Polarity identifies the change in
greylevel values along the tool’s X axis, in the positive direction.
The available modes are:
• Dark To Light: The Point Finder searches only for point instances occurring at a dark to light
transition in greylevel values.
• Light To Dark: The Point Finder searches only for point instances occurring at a light to dark
transition in greylevel values.
• Either: The Point Finder searches for point instances occurring at either a light to dark or a dark to light transition in greylevel values. This mode will increase processing time.
• Don’t Care: The Point Finder searches for point instances occurring at any transition in greylevel values, including reversals in contrast, for example on an unevenly colored background.
Positioning Level
PositioningLevel sets the effort level of the instance positioning process. A value of 0 will provide
coarser positioning and lower execution time. Conversely, a value of 10 will provide high accuracy
positioning of Point instances.
Search Mode
SearchMode sets the mode used by the tool to generate and select a hypothesis.
The available modes are:
• Point Closest To Guideline: Selects the point hypothesis closest to the Guideline.
• Point With Maximum Negative X Offset: Selects the point hypothesis closest to the region
of interest boundary that is at maximum negative X offset.
• Point With Maximum Positive X Offset: Selects the point hypothesis closest to the region
of interest boundary that is at maximum positive X offset.
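As a sketch of how these selection rules behave, the following Python snippet (illustrative only, not the AdeptSight API; the hypothesis list, mode names, and guideline offset are invented here) picks one point hypothesis per mode:

```python
# Illustrative sketch (not the AdeptSight API): selecting a point
# hypothesis according to the Point Finder search modes described above.
# Each hypothesis is an (x, y) coordinate in the tool's frame; the
# guideline lies at a given X offset.

def select_point(hypotheses, mode, guideline_offset=0.0):
    """Select one point hypothesis according to the search mode."""
    if not hypotheses:
        return None
    if mode == "closest_to_guideline":
        # Smallest distance between the point's X coordinate and the guideline.
        return min(hypotheses, key=lambda p: abs(p[0] - guideline_offset))
    if mode == "max_negative_x_offset":
        # Closest to the region-of-interest bound at maximum negative X.
        return min(hypotheses, key=lambda p: p[0])
    if mode == "max_positive_x_offset":
        return max(hypotheses, key=lambda p: p[0])
    raise ValueError(f"unknown search mode: {mode}")

points = [(-3.0, 1.0), (0.5, 2.0), (4.0, 0.0)]
print(select_point(points, "closest_to_guideline"))   # (0.5, 2.0)
print(select_point(points, "max_negative_x_offset"))  # (-3.0, 1.0)
```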
Using the Line Finder Tool
The Line Finder finds and locates linear features on objects and returns the angle and point coordinates
of the found line.
Basic Steps for Configuring a Line Finder
1. Select the tool that will provide input images. See Input.
2. Position the Line Finder tool. See Location.
3. Test and verify results. See Line Finder Results.
4. Configure Advanced properties if required. See Configuring Advanced Line Finder Parameters.
Input
The Input required by the Line Finder is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Line Finder.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 162 Positioning the Line Finder Tool. Position the bounding box so that the line to find is parallel to the Y-axis.
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Line Finder is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Line Finder must be placed on all frames output by the frame-provider tool, enable the
All Frames check box.
4. If the Line Finder must be applied to only a single frame (output by the frame-provider tool),
disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Line Finder.
Positioning the Line Finder
Positioning the tool defines the area of the image that will be processed by the Line Finder. Location
parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Line Finder tool:
1. Click Location. The Location dialog opens as shown in Figure 162. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
4. The orange Guideline marker can be displaced along the X-axis. The Guideline acts as both a
visual guide for positioning the tool and as a constraint for the tool's Search Mode. Guideline
Offset is the offset from the tool's X-axis.
Important: The tool searches for a line that is parallel to the Y-Axis, moving through the
region of interest in a negative-to-positive direction relative to the X-Axis.
Best results are generally obtained when the Guideline is placed on, or very close to
the line to be found.
Use Search and Edge Detection parameters (Advanced Parameters) to further
configure and refine the finding of the correct line entity.
Related Topics
Configuring Advanced Line Finder Parameters
Line Finder Results
The Line Finder outputs two types of results: Frames and Results for each found line.
• Frames output by the Line Finder can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for each line found by the Line Finder tool are shown in the grid of results, below the display, as
illustrated in Figure 163.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
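The logging behavior described above can be sketched as follows (illustrative Python, not AdeptSight code; the file name and result fields are invented):

```python
# Illustrative sketch: appending a timestamped record of results to a
# log file, as the Results Log does on each execution of a tool.
# The file name and the result fields are hypothetical.
from datetime import datetime

def append_results(path, results):
    """Append one timestamped, tab-separated results record per execution."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a") as log:
        log.write(stamp + "\t" + "\t".join(str(r) for r in results) + "\n")

# Two executions: frame, X, Y, angle (invented values)
append_results("line_finder.log", [0, 12.5, 3.7, 45.0])
append_results("line_finder.log", [1, 13.1, 3.9, 44.2])
```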
Viewing Results
The results for each execution of the tool are represented in the display window and in the grid of results.
Results Display
The Results display represents each frame output by the Line Finder, as well as the edges found in each
frame.
Figure 163 Representation of Line Finder Results in Display and Results Grid. Green rectangles represent output frames; red lines represent the found line entities.
Grid of Results
The grid of results presents the results for all lines found by the Line Finder tool. Results include the score and
position for each edge. These results can be saved to file by enabling the Results Log.
Description of Line Finder Results
The Line Finder outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Line Finder. Elapsed Time is not visible in the
results grid but it is output to the results log for each iteration of the Line Finder.
Frame
Frame identifies the number of the frame output by the Line Finder tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Vector Position X
X coordinate of the point of intersection between the line and the X axis of the tool bounding box. See
Figure 164. Exceptionally, when the line exits the bounding box without covering the entire bounding
box height, the returned Vector Point may be located outside the bounding box boundary.
Vector Position Y
Y coordinate of the point of intersection between the line and the X axis of the tool bounding box.
Exceptionally, when the line exits the bounding box without covering the entire bounding box height,
the returned Vector Point may be located outside the bounding box boundary.
Start Position X
X coordinate of the point at the start of the line segment.
Start Position Y
Y coordinate of the point at the start of the line segment.
End Position X
X coordinate of the point at the end of the line segment.
End Position Y
Y coordinate of the point at the end of the line segment.
Angle
Angle of the found line. A line is defined as the line passing through the point coordinates Vector Point X
and Vector Point Y at the given Angle.
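As an illustration of this definition, the following snippet uses plain trigonometry (not AdeptSight code; the values are invented) to compute a point at a given distance along a line defined by a vector point and an angle:

```python
import math

def point_on_line(vector_x, vector_y, angle_deg, t):
    """Point at signed distance t along the line passing through
    (vector_x, vector_y) at the given angle, in degrees."""
    a = math.radians(angle_deg)
    return (vector_x + t * math.cos(a), vector_y + t * math.sin(a))

# A line through vector point (10.0, 0.0) at 90 degrees runs parallel
# to the Y axis, so moving along it changes only the Y coordinate:
x, y = point_on_line(10.0, 0.0, 90.0, 5.0)
print(round(x, 6), round(y, 6))  # 10.0 5.0
```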
Average Contrast
Average greylevel contrast between light and dark pixels on either side of the found line.
Fit Quality
FitQuality is the normalized average error between the calculated line and the actual edges matched to
the found line. Fit quality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that the
average error is 0. Conversely, a value of 0 means that the average matched error is equal to the
Conformity Tolerance.
Match Quality
MatchQuality corresponds to the percentage of edges actually matched to the found line.
MatchQuality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that edges were
matched for every point along the found line. Similarly, a value of 0.2 means edges were matched to
20% of the points along the found line.
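The two quality scores can be sketched as follows (an illustrative Python reading of the definitions above, not the tool's actual implementation):

```python
# Illustrative computation of the two quality scores described above,
# from a list of matched-edge errors and the number of line points that
# found a matching edge. Not AdeptSight source code.

def fit_quality(matched_errors, conformity_tolerance):
    """1.0 when the average matched error is 0; 0.0 when the average
    error equals the conformity tolerance."""
    if not matched_errors:
        return 0.0
    avg = sum(matched_errors) / len(matched_errors)
    return max(0.0, 1.0 - avg / conformity_tolerance)

def match_quality(matched_count, total_points):
    """Fraction of points along the found line that matched an edge."""
    return matched_count / total_points

print(fit_quality([0.0, 0.0], 2.0))  # 1.0 (average error is 0)
print(fit_quality([1.0, 3.0], 2.0))  # 0.0 (average error equals tolerance)
print(match_quality(20, 100))        # 0.2 (20% of points matched)
```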
Figure 164 Line Finder Results. The figure shows the Vector Point, Start Point, End Point, Angle, and search area relative to the tool's X and Y axes.
Configuring Advanced Line Finder Parameters
The Advanced Parameters section of the Line Finder tool interface provides access to advanced Line
Finder parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Line Finder processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Line Finder processes only the grey-scale
information in the input image, regardless of the format in which the images are provided.
This can reduce the execution time when color processing is not required.
Edge Detection
Finder tools detect edges in the input images then use edges to generate a vectorized description called
an entity.
• Edge Detection parameters modify the quality and quantity of edges that are generated from
the input image.
• Edges are detected parallel to the Y-Axis, moving through the region of interest in a negative-to-positive direction relative to the X-Axis.
Contrast Threshold
ContrastThreshold sets the minimum contrast needed for an edge to be detected in the input
image. The threshold value expresses the step in light values required to detect edges.
• This value can be set manually only when ContrastThresholdMode is set to FixedValue.
• Higher values reduce sensitivity to contrast. This reduces noise and the amount of low-contrast edges.
• Lower values increase sensitivity and add a greater number of edges at the expense of adding
more noise. This may generate false detections and/or slow down the search process.
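A minimal one-dimensional sketch of threshold-based edge detection, assuming a fixed contrast threshold (illustrative only; the real tool operates on 2-D images and this is not its implementation):

```python
# Minimal 1-D sketch of threshold-based edge detection along the tool's
# X axis: an edge is reported where the greylevel step between adjacent
# pixels meets the contrast threshold. A simplification, not the
# Finder tools' actual algorithm.

def detect_edges(row, contrast_threshold):
    """Return (index, polarity) pairs; polarity is the direction of the
    greylevel change in the positive X direction."""
    edges = []
    for i in range(len(row) - 1):
        step = row[i + 1] - row[i]
        if abs(step) >= contrast_threshold:
            edges.append((i, "dark_to_light" if step > 0 else "light_to_dark"))
    return edges

row = [10, 12, 200, 198, 40, 41]
print(detect_edges(row, 50))   # [(1, 'dark_to_light'), (3, 'light_to_dark')]
print(detect_edges(row, 200))  # [] (threshold too high: no edges detected)
```

Raising the threshold, as in the second call, suppresses both noise and genuine low-contrast edges, matching the trade-off described above.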
ContrastThresholdMode
Contrast Threshold Mode defines how contrast threshold is set. Contrast threshold is the level of
sensitivity that is applied to the detection of edges in the input image. The contrast threshold can be
either Adaptive, or Fixed.
Adaptive thresholds set a sensitivity level based on image content. This provides flexibility to variations
in image lighting conditions and variations in contrast during the Search process.
• AdaptiveLowSensitivity uses a low sensitivity adaptive threshold for detecting edges.
AdaptiveLowSensitivity detects strongly defined edges and eliminates noise, at the risk of
losing significant edge segments.
• AdaptiveNormalSensitivity sets a default sensitivity threshold for detecting edges.
• AdaptiveHighSensitivity detects a great amount of low-contrast edges and noise.
• FixedValue sets an absolute value for the sensitivity to contrast. A typical situation for the
use of a fixed value is a setting in which there is little variance in lighting conditions.
Subsampling Level
SubsamplingLevel sets the subsampling level used to detect edges that are used by the tool to
generate hypotheses.
• High values provide a coarser search with a shorter execution time.
• Lower values can provide a more refined search with slower execution time.
• A higher subsampling value may help improve accuracy in blurry images.
Frame Transform
The Scale To Instance parameter is applicable only to a Line Finder that is frame-based, and for which
the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Line Finder.
Scale To Instance
When ScaleToInstance is True, the Line Finder region of interest is resized and positioned relative to
the change in scale of the Input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Line Finder ignores the scale and builds frames relative to the input frame
without adapting to the change in scale.
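The effect of this setting can be sketched as follows (illustrative Python; the function and parameter names here are invented, not the AdeptSight API):

```python
# Sketch of the Scale To Instance behavior (hypothetical names): when
# enabled, the region of interest is resized by the scale factor of the
# located instance; when disabled, the instance scale is ignored.

def roi_for_instance(base_width, base_height, instance_scale, scale_to_instance):
    if scale_to_instance:
        return (base_width * instance_scale, base_height * instance_scale)
    return (base_width, base_height)

# A part located at 1.5x scale, with a 40 x 10 region of interest:
print(roi_for_instance(40.0, 10.0, 1.5, True))   # (60.0, 15.0)
print(roi_for_instance(40.0, 10.0, 1.5, False))  # (40.0, 10.0)
```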
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
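A sketch of the conversion this setting controls, assuming a hypothetical millimeters-per-pixel calibration factor (illustrative only, not AdeptSight code):

```python
# Illustrative conversion behind CalibratedUnitsEnabled: results in
# pixels are mapped to millimeters through the camera calibration.
# The mm-per-pixel value here is invented.

def to_output_units(value_pixels, mm_per_pixel, calibrated_units_enabled):
    return value_pixels * mm_per_pixel if calibrated_units_enabled else value_pixels

print(to_output_units(320.0, 0.05, True))   # 16.0 (millimeters)
print(to_output_units(320.0, 0.05, False))  # 320.0 (pixels)
```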
Guideline Offset
The Guideline Offset is the offset from the tool's X-axis. The Guideline marker can be displaced along
the X-axis. This marker acts as both a visual guide for positioning the tool and as a constraint for the
tool's Search Mode.
Height
Height of the Line Finder region of interest.
Width
Width of the Line Finder region of interest.
Rotation
Angle of rotation of the Line Finder region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 165 Location Properties for the Line Finder Region of Interest. The figure shows the Width, Height, center (X, Y), Guideline offset from the X axis, and Angle of Rotation.
Output
Output Entity Enabled
OutputEntityEnabled specifies if a found entity will be output to the runtime database.
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), Tool (hsTool).
Found
Found specifies if an entity was found. If True, then at least one line entity was found in the current
image.
Search
Search parameters are constraints that restrict the tool's search process, for example to a specific range
of poses or to a specific number of instances.
Conformity Tolerance Parameters
Conformity Tolerance
Conformity Tolerance corresponds to the maximum distance in calibrated units by which a matched edge
can deviate from either side of its expected position on the line.
• To manually set Conformity Tolerance you must first set UseDefaultConformityTolerance
to False.
• If you set a value lower than the MinimumConformityTolerance value, the
ConformityTolerance value will be automatically reset to the minimum valid value.
• If you set a value higher than the MaximumConformityTolerance value, the
ConformityTolerance value will be automatically reset to the maximum valid value.
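The clamping behavior described above can be sketched as follows (illustrative Python, not AdeptSight code):

```python
# Sketch of the ConformityTolerance rules described above: when the
# default is not used, a manually set value is reset to the nearest
# valid bound if it falls outside [minimum, maximum].

def set_conformity_tolerance(value, minimum, maximum, use_default, default):
    if use_default:
        return default
    return min(max(value, minimum), maximum)

print(set_conformity_tolerance(0.01, 0.1, 5.0, False, 1.2))  # 0.1 (reset to minimum)
print(set_conformity_tolerance(9.0, 0.1, 5.0, False, 1.2))   # 5.0 (reset to maximum)
print(set_conformity_tolerance(9.0, 0.1, 5.0, True, 1.2))    # 1.2 (default used)
```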
Default Conformity Tolerance
DefaultConformityTolerance is a read-only value that is computed by the tool by analyzing the
calibration, the edge detection parameters, and the search parameters.
Use Default Conformity Tolerance
Disabling UseDefaultConformityTolerance allows you to manually modify the ConformityTolerance
value.
Maximum Conformity Tolerance
MaximumConformityTolerance is a read-only value that expresses the maximum value allowed for
the ConformityTolerance property.
Minimum Conformity Tolerance
MinimumConformityTolerance is a read-only value that expresses the minimum value allowed for the
ConformityTolerance property.
Other Search Parameters
Maximum Angle Deviation
MaximumAngleDeviation sets the maximum accepted deviation in angle between a hypothesis and the
line found by the tool. By default, the Line Finder accepts a 20 degree deviation. The tool uses the
defined MaximumAngleDeviation value to test hypotheses and refine the pose of the found line.
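A sketch of such an angle-deviation test (illustrative only; the wrap-around handling is an assumption, not documented AdeptSight behavior):

```python
# Illustrative angle-deviation test: a line hypothesis is kept only if
# its angle differs from the expected angle by no more than
# MaximumAngleDeviation degrees. Wrap-around at 360 degrees is an
# assumption made for this sketch.

def within_angle_deviation(hypothesis_angle, expected_angle, max_deviation=20.0):
    # Shortest angular distance between the two angles, in degrees.
    deviation = abs((hypothesis_angle - expected_angle + 180.0) % 360.0 - 180.0)
    return deviation <= max_deviation

print(within_angle_deviation(95.0, 90.0))   # True (5 degrees off)
print(within_angle_deviation(125.0, 90.0))  # False (35 degrees off)
```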
Minimum Line Percentage
MinimumLinePercentage sets the minimum percentage of line contours that need to be matched for a
line hypothesis to be considered as valid.
Polarity Mode
PolarityMode sets the mode that will apply to the search for entities. Polarity identifies the change in
greylevel values along the tool’s X axis, in the positive direction.
The available modes are:
• Dark To Light: The Line Finder searches only for lines occurring at a dark to light transition
in greylevel values.
• Light To Dark: The Line Finder searches only for lines occurring at a light to dark transition
in greylevel values.
• Either: The Line Finder searches only for lines occurring either at a light to dark or dark to
light transition in greylevel values. This mode will increase processing time.
• Don’t Care: The Line Finder searches for lines occurring at any transition in greylevel
values, including reversals in contrast, for example on an unevenly colored background.
Positioning Level
PositioningLevel sets the effort level of the instance positioning process. A value of 0 will provide
coarser positioning and lower execution time. Conversely, a value of 10 will provide high accuracy
positioning of lines.
Search Mode
SearchMode specifies the method used by the tool to generate and select a hypothesis.
The available methods are:
• Best Line: Selects the best line according to hypotheses strengths. This mode will increase
processing time.
• Line Closest To Guideline: Selects the line hypothesis closest to the Guideline.
• Line With Maximum Negative X Offset: Selects the line hypothesis closest to the
Rectangle bound that is at maximum negative X offset.
• Line With Maximum Positive X Offset: Selects the line hypothesis closest to the Rectangle
bound that is at maximum positive X offset.
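These selection methods can be sketched as follows (illustrative Python; each hypothesis is represented here by invented fields for strength, distance to the guideline, and X offset, not the AdeptSight API):

```python
# Illustrative selection among line hypotheses, following the search
# modes listed above. Each hypothesis is a tuple of hypothetical fields:
# (strength, guideline_distance, x_offset).

def select_line(hypotheses, mode):
    if not hypotheses:
        return None
    if mode == "best_line":
        return max(hypotheses, key=lambda h: h[0])       # strongest hypothesis
    if mode == "closest_to_guideline":
        return min(hypotheses, key=lambda h: abs(h[1]))  # nearest the guideline
    if mode == "max_negative_x_offset":
        return min(hypotheses, key=lambda h: h[2])
    if mode == "max_positive_x_offset":
        return max(hypotheses, key=lambda h: h[2])
    raise ValueError(f"unknown search mode: {mode}")

lines = [(0.9, 4.0, -2.0), (0.6, 0.5, 1.0), (0.8, 2.0, 3.0)]
print(select_line(lines, "best_line"))             # (0.9, 4.0, -2.0)
print(select_line(lines, "closest_to_guideline"))  # (0.6, 0.5, 1.0)
```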
Using the Arc Finder Tool
The Arc Finder finds and locates circular features on objects and returns the coordinates of the center of
the arc, the start and end angles, and the radius.
Basic Steps for Configuring an Arc Finder
1. Select the tool that will provide input images. See Input.
2. Position the Arc Finder tool. See Location.
3. Test and verify results. See Arc Finder Results.
4. Configure Advanced properties if required. See Configuring Advanced Arc Finder Parameters.
Input
The Input required by the Arc Finder is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Arc Finder.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
Figure 166 Positioning the Arc Finder Tool. Position the bounding sector so that the arc to find is parallel to the (orange) Guideline marker.
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Arc Finder is positioned relative to a frame of reference
provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Arc Finder must be placed on all frames output by the frame-provider tool, enable the All
Frames check box.
4. If the Arc Finder must be applied to only a single frame (output by the frame-provider tool),
disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Arc Finder.
Positioning the Arc Finder
Positioning the tool defines the location in the image where the tool will be placed and the size of the
area of interest in which the tool will carry out its process.
To position the Arc Finder tool:
1. Click Location. The Location dialog opens as shown in Figure 166. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding sector.
2. If the Arc Finder is frame-based, a blue marker indicates the frame provided by the frame-provider tool (Frame Input). If there is more than one object in the image, make sure that
you are positioning the bounding sector relative to the object identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
4. The orange Guideline marker can be displaced along the X-axis. The Guideline acts as both a
visual guide for positioning the tool and as a constraint for the tool's Search Mode. Guideline
Offset is the offset from the tool's X-axis.
Important: The tool searches for an arc that is parallel to the Guideline, moving
through the region of interest in a negative-to-positive direction relative to the X-Axis.
Best results are generally obtained when the Guideline is placed on, or very close to
the arc to be found.
Use Search and Edge Detection parameters (Advanced Parameters) to further
configure and refine the finding of the correct arc entity.
Related Topics
Configuring Advanced Arc Finder Parameters
Arc Finder Results
The Arc Finder outputs two types of results: Frames and Results for each found arc.
• Frames output by the Arc Finder can be used by other AdeptSight tools for frame-based
positioning. The output frames are represented in the display, and numbered, starting at 0.
• Results for each arc found by the Arc Finder tool are shown in the grid of results, below the display, as
illustrated in Figure 167.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window and in the grid of results.
Results Display
The Results display represents each frame output by the Arc Finder, as well as the edges found in each
frame.
Figure 167 Representation of Arc Finder Results in Display and Results Grid. Green sectors represent output frames; red lines represent the found arc entities.
Grid of Results
The grid of results presents the results for all arcs found by the Arc Finder tool. Results include the score
and position for each edge. These results can be saved to file by enabling the Results Log.
Description of Arc Finder Results
The Arc Finder outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Arc Finder. Elapsed Time is not visible in the
results grid but it is output to the results log for each iteration of the Arc Finder.
Frame
Frame identifies the number of the frame output by the Arc Finder tool. If the tool is frame-based, this
number corresponds to the input frame that provided the positioning.
Center Position X
The X coordinate of the arc Center Point. See Figure 168.
Center Position Y
The Y coordinate of the arc Center Point.
Start Position X
The X coordinate of the arc Start Point. On an x-to-y-axis trajectory, the arc Start Point is the first point
encountered.
Start Position Y
The Y coordinate of the arc Start Point. On an x-to-y-axis trajectory, the arc Start Point is the first point
encountered.
End Position X
The X coordinate of the arc End Point. On an x-to-y-axis trajectory, the arc End Point is the last point
encountered.
End Position Y
The Y coordinate of the arc end point. On an x-to-y-axis trajectory, the arc end point is the last point
encountered.
Start Angle
Angle of the arc radius at the Start Point of the found arc.
End Angle
Angle of the arc radius at the End Point of the found arc.
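The relationship between these results is plain circle geometry; as a sketch (illustrative Python, not AdeptSight code, with invented values), each endpoint lies on the circle at its angle from the Center Point:

```python
import math

# Trigonometric sketch of how the Start Point and End Point relate to
# the Center Point, Radius, and the start/end angles: each endpoint
# lies on the circle at its angle from the center.

def arc_point(center_x, center_y, radius, angle_deg):
    a = math.radians(angle_deg)
    return (center_x + radius * math.cos(a), center_y + radius * math.sin(a))

start = arc_point(50.0, 50.0, 10.0, 0.0)   # Start Angle 0 degrees
end = arc_point(50.0, 50.0, 10.0, 90.0)    # End Angle 90 degrees
print(round(start[0], 6), round(start[1], 6))  # 60.0 50.0
print(round(end[0], 6), round(end[1], 6))      # 50.0 60.0
```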
Average Contrast
Average greylevel contrast between light and dark pixels on either side of the found arc.
Fit Quality
Normalized average error between the calculated arc and the actual edges matched to the found arc. Fit
quality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that the average error is 0.
Conversely, a value of 0 means that the average matched error is equal to conformity tolerance.
Match Quality
Percentage of edges actually matched to the found arc. Match Quality ranges from 0 to 1, with 1 being
the best quality. A value of 1 means that edges were matched for every point along the found arc.
Similarly, a value of 0.2 means edges were matched to 20% of the points along the found arc.
Figure 168 Arc Finder Results. The figure shows the Center Point, Radius, Start Point, Start Angle, End Point, and End Angle.
Configuring Advanced Arc Finder Parameters
The Advanced Parameters section of the Arc Finder tool interface provides access to advanced Arc
Finder parameters and properties.
Configuration
Processing Format
ProcessingFormat defines the format applied to process images provided by the camera.
• hsNative: When hsNative is selected, the Arc Finder processes images in the format in
which they are output by the camera - either grey-scale or color.
• hsGreyScale: When hsGreyScale is enabled, the Arc Finder processes only the grey-scale
information in the input image, regardless of the format in which the images are provided.
This can reduce the execution time when color processing is not required.
Edge Detection
Finder tools detect edges in the input images then use edges to generate a vectorized description called
an entity.
• Edge Detection parameters modify the quality and quantity of edges that are generated from
the input image.
• Edges are detected parallel to the Y-Axis, moving through the region of interest in a negative-to-positive direction relative to the X-Axis.
Contrast Threshold
ContrastThreshold sets the minimum contrast needed for an edge to be detected in the input
image. The threshold value expresses the step in light values required to detect edges.
• This value can be set manually only when ContrastThresholdMode is set to FixedValue.
• Higher values reduce sensitivity to contrast. This reduces noise and the amount of low-contrast edges.
• Lower values increase sensitivity and add a greater number of edges at the expense of adding
more noise. This may generate false detections and/or slow down the search process.
ContrastThresholdMode
Contrast Threshold Mode defines how contrast threshold is set. Contrast threshold is the level of
sensitivity that is applied to the detection of edges in the input image. The contrast threshold can be
either Adaptive, or Fixed.
Adaptive thresholds set a sensitivity level based on image content. This provides flexibility to variations
in image lighting conditions and variations in contrast during the Search process.
• AdaptiveLowSensitivity uses a low sensitivity adaptive threshold for detecting edges.
AdaptiveLowSensitivity detects strongly defined edges and eliminates noise, at the risk of
losing significant edge segments.
• AdaptiveNormalSensitivity sets a default sensitivity threshold for detecting edges.
• AdaptiveHighSensitivity detects a great amount of low-contrast edges and noise.
• FixedValue sets an absolute value for the sensitivity to contrast. A typical situation for the
use of a fixed value is a setting in which there is little variance in lighting conditions.
Subsampling Level
SubsamplingLevel sets the subsampling level used to detect edges that are used by the tool to
generate hypotheses.
• High values provide a coarser search with a shorter execution time.
• Lower values can provide a more refined search with slower execution time.
• A higher subsampling value may help improve accuracy in blurry images.
Frame Transform
The Scale To Instance parameter is applicable only to an Arc Finder that is frame-based, and for which
the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale To Instance parameter determines the effect of
the scaled instances on the Arc Finder.
Scale To Instance
When ScaleToInstance is True, the Arc Finder region of interest is resized and positioned relative to
the change in scale of the Input frame. This is the recommended setting for most cases. When
ScaleToInstance is False, the Arc Finder ignores the scale and builds its frame relative to the input frame
without adapting to the change in scale.
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Guideline Offset
The Guideline Offset is the offset from the tool's X-axis. The Guideline marker can be displaced along
the X-axis. This marker acts as both a visual guide for positioning the tool and as a constraint for the
tool's Search Mode.
Opening
Opening of the Arc Finder region of interest.
Radius
Radius of the Arc Finder region of interest.
Rotation
Angle of rotation of the Arc Finder region of interest.
Thickness
Thickness of the Arc Finder region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 169 Location Properties for the Arc Finder Region of Interest (Thickness, Guideline, Radius, Rotation, Opening, Position X, Position Y)
Output
Output Entity Enabled
OutputEntityEnabled specifies if a found entity will be output to the runtime database.
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Found
Found specifies if an entity was found. If True, then at least one arc entity was found in the current
image.
Search
Search parameters are constraints that restrict the tool's search process, for example to a specific range
of poses or to a specific number of instances.
Conformity Tolerance Parameters
Conformity Tolerance
Conformity Tolerance corresponds to the maximum distance in calibrated units by which a matched edge
can deviate from either side of its expected position on the arc.
• To manually set Conformity Tolerance you must first set UseDefaultConformityTolerance
to False.
• If you set a value lower than the MinimumConformityTolerance value, the
ConformityTolerance value will be automatically reset to the minimum valid value.
• If you set a value higher than the MaximumConformityTolerance value, the
ConformityTolerance value will be automatically reset to the maximum valid value.
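The automatic reset described above behaves like a simple range clamp. The sketch below is a language-agnostic illustration, not AdeptSight API code, and the numeric bounds are hypothetical:

```python
def effective_conformity_tolerance(requested, minimum, maximum):
    # A manually set ConformityTolerance outside the valid range is
    # automatically reset to the nearest valid bound.
    return max(minimum, min(requested, maximum))

# Hypothetical bounds of 0.25 and 4.0 calibrated units:
print(effective_conformity_tolerance(0.10, 0.25, 4.0))  # reset to 0.25
print(effective_conformity_tolerance(9.00, 0.25, 4.0))  # reset to 4.0
print(effective_conformity_tolerance(1.50, 0.25, 4.0))  # accepted as 1.5
```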
Default Conformity Tolerance
DefaultConformityTolerance is a read-only value that is computed by the tool by analyzing the
calibration, the edge detection parameters, and the search parameters.
Use Default Conformity Tolerance
Disabling UseDefaultConformityTolerance allows you to manually modify the ConformityTolerance
value.
Maximum Conformity Tolerance
MaximumConformityTolerance is a read-only value that expresses the maximum value allowed for
the ConformityTolerance property.
Minimum Conformity Tolerance
MinimumConformityTolerance is a read-only value that expresses the minimum value allowed for the
ConformityTolerance property.
Other Parameters
Arc Must Be Totally Enclosed
By default ArcMustBeTotallyEnclosed is enabled, which means that the tool will find an arc only if
both its start and end points are located on the radial bounding sides of the Arc search area.
When disabled, the tool can find an arc that enters and/or exits the Arc at the inner or outer annular
bounds of the Arc search area.
Fit Mode
FitMode specifies the mode used by the tool to calculate and return values for the found arc.
There are three modes for fitting hypotheses to a valid arc entity.
• Both: The Arc Finder calculates and returns both the arc center and the arc radius. This is the
default mode, which will typically provide the most accurate results.
• Center: The Arc Finder calculates the arc center. The returned Radius value is the tool's
radius (i.e. Arc Radius).
• Radius: The Arc Finder calculates the arc radius. The returned Center Point values are the
tool's center (i.e. the Arc PositionX and PositionY).
Maximum Angle Deviation
Maximum Angle Deviation is the maximum allowable deviation in angle between the arc hypothesis and
the arc found by the tool.
The deviation is calculated from the tangent angle of the arc at the points where edges are matched to
the arc.
Minimum Arc Percentage
MinimumArcPercentage sets the minimum percentage of arc contours that need to be matched for an
arc hypothesis to be considered valid.
Polarity Mode
PolarityMode sets the mode that will apply to the search for entities. Polarity identifies the change in
greylevel values along the tool’s X axis, in the positive direction.
The available modes are:
• Dark To Light: The Arc Finder searches only for arcs occurring at a dark to light transition in
greylevel values.
• Light To Dark: The Arc Finder searches only for arcs occurring at a light to dark transition in
greylevel values.
• Either: The Arc Finder searches for arcs occurring at either a light to dark or a dark to light
transition in greylevel values. This mode will increase processing time.
• Don't Care: The Arc Finder searches for arcs occurring at any transition in greylevel
values, including reversals in contrast, for example on an unevenly colored background.
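Polarity can be pictured as the sign of the greylevel change along the tool's positive X direction. This is only an illustration of the concept, not AdeptSight code:

```python
def transition_polarity(grey_before, grey_after):
    # Compare greylevels on either side of an edge, moving along
    # the tool's positive X direction.
    if grey_after > grey_before:
        return "DarkToLight"
    if grey_after < grey_before:
        return "LightToDark"
    return "NoTransition"

print(transition_polarity(40, 200))   # DarkToLight
print(transition_polarity(200, 40))   # LightToDark
```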
Positioning Level
PositioningLevel sets the effort level of the instance positioning process. A value of 0 will provide
coarser positioning and lower execution time. Conversely, a value of 10 will provide high accuracy
positioning of arcs.
Search Mode
SearchMode specifies the mode used by the tool to generate and select a hypothesis.
The available modes are:
• Best Arc: Selects the best arc according to hypothesis strength. This mode will increase
processing time.
• Arc Closest To Guideline: Selects the arc hypothesis closest to the Guideline.
• Arc Closest To Inside: Selects the arc hypothesis closest to the inside of the tool Arc
(closest to the tool center).
• Arc Closest To Outside: Selects the arc hypothesis closest to the outside of the tool Arc
(farthest from the tool center).
Using the Color Matching Tool
The Color Matching Tool analyzes images to find areas of color that match user-defined filters.
Typically, this tool is used to analyze an area on an object to verify whether the object meets defined
color criteria.
Figure 170 Example of a Color Matching Tool
Basic Steps for Configuring a Color Matching Tool
1. Select the tool that will provide input images. See Input.
2. Position the Color Matching Tool region of interest. See Location.
3. Configure Subsampling if required. See Setting the Image Subsampling Mode.
4. Configure parameters and image subsampling if required. See Creating and Configuring Color
Filters.
5. Test and verify results. See Color Matching Tool Results.
6. Configure Advanced Parameters if required. See Configuring Advanced Color Matching Tool
Parameters.
To ensure that color results obtained with the Color Matching Tool are accurate, the
input images should be calibrated for color. Calibrate the color camera that provides
the images through the Color Calibration wizard.
Input
The Input required by the Color Matching Tool is an image provided by another tool in the sequence.
• Typically, the Input is provided by an Acquire Image tool.
• Input can also be provided by other AdeptSight tools that output images, such as the Image
Processing Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input image.
3. If the required tool does not appear in the dropdown list, make sure that the required tool
(Acquire Image or other) has been added to the Sequence Manager, above the Color Matching
Tool.
Location
Location parameters define the position of the tool’s region of interest in which the tool carries out its
process.
The region of interest can be positioned relative to another tool (frame-based) or relative to a fixed area
in the input image (image-based). The positioning mode is defined by the Frame Input parameter.
When the tool is frame-based, the region of interest is positioned relative to the frame identified by a
blue marker, as shown in Figure 171.
Figure 171 Positioning the Color Matching Tool relative to a Frame
Frame Input
The Frame Input defines whether the tool will be frame-based or image-based.
• Frame-Based positioning is the recommended mode for applications in which the tool needs
to be repeatedly applied to a feature on an object, or to a specific area relative to an object.
With frame-based positioning, the Color Matching Tool is positioned relative to a frame of
reference provided by another tool, called the frame-provider.
• Image-Based positioning is applied when the tool is not frame-based. In this mode, the tool
region of interest is always positioned on the same area of the image, relative to the frame of
reference of the image.
To set image-based positioning, set the Frame Input value to (none).
To set the Frame Input:
1. From the Frame Input dropdown list, select the frame-provider tool. Selecting a tool in the list
enables frame-based positioning.
The ideal frame-provider tool is a Locator. See Frame-Provider Tools for more details on using
other tools as frame-providers.
2. If the tool must be positioned to a static area on all images (image-based) select (none) in the
Frame Input dropdown list.
3. If the Color Matching Tool must be placed on all frames output by the frame-provider tool,
enable the All Frames check box.
4. If the Color Matching Tool must be applied to only a single frame output by the frame-provider
tool, disable the All Frames check box and select the required frame.
The default value is 0; the numbering of frames is 0-based.
5. Click Location to position the tool region of interest relative to the frame provider tool. See
Positioning the Color Matching Tool
Before configuring the Color Matching Tool, execute the tool (or sequence) at least once
and verify in the display that the tool is being positioned correctly in the image.
The display represents the region of interest of the Color Matching Tool as a green box.
Positioning the Color Matching Tool
Positioning the tool defines the area of the image that will be processed by the Color Matching Tool.
Location parameters define the position of the tool region of interest.
Location
The Location button opens the Location dialog and displays the tool region of interest as a bounding
box in the image display. The bounding box can be configured in both the display area and in the
Location dialog.
To position the Color Matching Tool:
1. Click Location. The Location dialog opens as shown in Figure 171. This dialog defines the size
and position of the tool region of interest. The display represents the region of interest as a
green bounding box.
2. A blue marker indicates the frame provided by the Frame Input tool. If there is more than one
object in the image, make sure that you are positioning the bounding box relative to the object
identified by a blue axes marker.
3. Enter values in the Location dialog, or use the mouse to configure the bounding box in the
display.
If the tool is frame-based, Location values are relative to the origin of the frame-provider tool
(blue marker). If the tool is image-based, values are relative to the origin of the image frame of
reference.
Setting the Image Subsampling Mode
By default, Image Subsampling is set to 1, which means that the image is not subsampled, and each
pixel in the image is processed by the Color Matching Tool, for every filter.
• Increasing the subsampling level reduces the number of pixels and the quantity of information
analyzed by the tool. With a subsampling factor of 2, the image in the region of interest is
subsampled in tiles of 2x2 pixels. With a subsampling factor of 3 the image is subsampled in
tiles of 3x3 pixels and so forth.
• Increasing the Image Subsampling may reduce the execution time but affects the accuracy of
color matching results. Increasing subsampling may be advantageous in some applications,
particularly when the region of interest is quite large, when many filters are configured for the
tool, and when color results are acceptable at resampled levels.
• Figure 172 illustrates the results of a color filter at different subsampling levels.
Figure 172 Effect of Image Subsampling (results shown for Image Subsampling = 1, 2, and 6)
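The pixel reduction can be put in rough numbers. This sketch assumes, for illustration only, that partial tiles at the edges of the region of interest count as full tiles:

```python
import math

def subsampled_tile_count(roi_width, roi_height, factor):
    # With a subsampling factor of n, the ROI is processed in n x n tiles,
    # so the number of values analyzed shrinks roughly by n squared.
    return math.ceil(roi_width / factor) * math.ceil(roi_height / factor)

print(subsampled_tile_count(640, 480, 1))  # 307200 (every pixel processed)
print(subsampled_tile_count(640, 480, 2))  # 76800 tiles of 2x2
print(subsampled_tile_count(640, 480, 6))  # 8560 tiles of 6x6
```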
Related Topics
Creating and Configuring Color Filters
Creating and Configuring Color Filters
The Color Matching Tool analyzes the region of interest by applying all the defined filters to the image
within the region of interest. Any number of filters can be added to the Color Matching Tool.
The Filters section contains a list of all the filters that are configured for the current tool. This list
always contains at least one filter, which by default is named Filter(0).
From the Filters list, you can:
• Add and remove color filters.
• Rename the color filters
• Activate/Deactivate color filters.
To add a filter:
1. Under the Filters list, click the 'Add Filter' icon.
2. A filter is added with the default name: Filter(n).
To remove a filter:
1. In the Filters list, select the filter that must be removed.
2. Click the 'Remove Filter' icon.
To rename a filter:
1. In the Filters list, double-click the name of the filter to be renamed.
2. Type a new name for the filter. This will not affect the configuration parameters of the filter.
To edit a filter:
1. In the Filters list, select the filter.
2. Click Edit. This opens the Filter Editor window for the selected filter.
3. Configure the filter using the display or by entering values. See Configuring Color Filters in the
Filter Editor for more details.
Configuring Color Filters in the Filter Editor
Color filters can be configured and edited in the Filter Editor window, illustrated in Figure 173. A filter
contains a color definition, displayed as both RGB and HSL values, and tolerances for variation in hue,
saturation, and luminance with respect to the defined color.
The Color and the Tolerances can be set by entering values or by using the display and selection tools
that are in the Filter Editor.
To configure the color filter:
1. Set an initial color in one of the following manners:
• Pick a specific color in the display: Under Selection Tools, select the 'Color Selection' icon.
Using the mouse, move the cursor in the image display and click once to select the color of
the pixel where the cursor is placed.
• Pick an average color in the display: Under Selection Tools, select the 'Range Selection' icon.
Using the mouse, drag an area cursor in the image display. This calculates and selects the
average color in the selected area.
• Set the color RGB or HSL values: Enter values under R, G, and B, or under H, S, and L. The
single color box above the 'values' area provides a preview of the defined color.
2. Define Tolerances in one of the following manners:
• Enter values for H, S, and L, in the Tolerances section.
• In the Filter Editor display, resize the bounding boxes to set tolerance values. As illustrated in
Figure 173, the bounding box in the multicolored area sets tolerance ranges for hue and
saturation. The bounding box in the greylevel area sets a tolerance range for luminance.
3. Click OK to confirm changes and return to the Sequence Editor.
Figure 173 Configuring a Color Filter in the Filter Editor Window
Color Values
The value of a filter can be configured either by its HSL values or its RGB values.
Table 5 lists a few colors with their corresponding RGB and HSL values.
Table 5 RGB and HSL Values for some Common Colors
Color Name     RGB Values      HSL Values
White          255,255,255     0,0,255
Black          0,0,0           0,0,0
Middle Grey    127,127,127     0,0,128
Red            255,0,0         0,255,128
Green          0,255,0         85,255,128
Blue           0,0,255         170,255,128
Defining a Color by its RGB Values
RGB refers to a mode of describing, or quantifying, a color by its red (R), green (G), and blue (B)
values.
• Values for R, G, and B range from 0 (no color) to 255 (maximum color).
• Pure white is defined by (R,G,B) = (255,255,255).
• Pure black is defined by (R,G,B) = (0,0,0).
• Table 5 shows some common colors expressed in RGB and HSL.
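A standard RGB-to-HSL conversion, rescaled to the 0-255 range (red at H=0, blue at H=170), reproduces these values. This Python sketch uses the standard-library colorsys module, which is not part of AdeptSight; rounding may differ by one unit from the values a given tool reports:

```python
import colorsys

def rgb_to_hsl_255(r, g, b):
    # colorsys works in the 0.0-1.0 range and returns (hue, luminance, saturation).
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 255), round(s * 255), round(l * 255)

print(rgb_to_hsl_255(255, 0, 0))    # red   -> (0, 255, 128)
print(rgb_to_hsl_255(0, 255, 0))    # green -> (85, 255, 128)
print(rgb_to_hsl_255(0, 0, 255))    # blue  -> (170, 255, 128)
```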
Defining a Color by its HSL Values
HSL refers to a mode of describing, or quantifying, colors by their Hue, Saturation, and Luminance
values.
Hue - H
Hue is the quality of color that we perceive as the color itself, for example: red, green, yellow. The hue
is determined by the perceived dominant wavelength, or the central tendency of combined wavelengths,
within the visible spectrum.
• Hue values range from 0 to 255. These values correspond to a displacement along the color
spectrum starting from red =0.
• At H=0, the color is a shade of red; at H=85, the color is a shade of green; at H=170, the
color is a shade of blue.
Saturation - S
Saturation is what we perceive as the purity of the color, or the amount of grey in a color. For example,
a high saturation value produces a very pure, intense color. Reducing the saturation value adds grey to
the color.
Luminance - L
Luminance is perceived as the brightness of the color, or the amount of white contained in the color. As
the value increases the color becomes lighter and tends towards white. As the luminance value
decreases the color is darker and tends towards black.
Color Tolerances
The color filter accepts any color values that are within the defined tolerances. Tolerance values can
only be expressed as HSL values. The tolerance range for each of H, S, and L is applied to the defined
color.
A defined tolerance value is distributed equally above and below the color value to which it applies.
For example, if the Color luminance (L) value is 200 and the Tolerance luminance (L) value is 20, the
filter will accept pixels within a range of luminance values equal to [190, 210].
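The per-channel acceptance test can be sketched as follows. This is a simplified illustration, not AdeptSight code; in particular, hue wrap-around at the 0/255 boundary is ignored here:

```python
def pixel_matches_filter(pixel_hsl, color_hsl, tolerance_hsl):
    # Each tolerance is distributed equally above and below the defined value.
    # Simplification for this sketch: hue wrap-around at 0/255 is not handled.
    return all(c - t / 2 <= p <= c + t / 2
               for p, c, t in zip(pixel_hsl, color_hsl, tolerance_hsl))

# Luminance 200 with a luminance tolerance of 20 accepts [190, 210]:
print(pixel_matches_filter((0, 0, 190), (0, 0, 200), (255, 255, 20)))  # True
print(pixel_matches_filter((0, 0, 211), (0, 0, 200), (255, 255, 20)))  # False
```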
Color Matching Tool Results
The Color Matching Tool outputs read-only results.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance
of each tool. At each execution of the tool, the time, date, and results are appended to the results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents only a single sampled image when the display is in "non-calibrated"
mode.
When the Color Matching Tool outputs more than one sampled image, all the sampled images can be
viewed only when the display is in "calibrated" mode, as shown in Figure 174.
Figure 174 Representation of Color Matching Tool results
Grid of Results
The grid of results presents the statistical results for the region of interest analyzed by the Color Matching
Tool. These results can be saved to file by enabling the Results Log.
Description of Color Matching Tool Results
The Color Matching Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Color Matching Tool. Elapsed Time is not visible in
the results grid, but it is output to the results log for each iteration of the Color Matching Tool.
Frame
Frame identifies the number of the frame output by the Color Matching Tool. If the tool is frame-based,
this number corresponds to the input frame that provided the positioning.
Best Filter Name
BestFilterName is the name of the filter for which the greatest number of pixels were found. This is the
name of the filter as it appears in the filters list of the tool interface.
Best Filter Index
BestFilterIndex is the index number of the filter for which the greatest number of pixels were found.
This is the index of the filter as it appears in the filters list of the tool interface.
Image Pixel Count
ImagePixelCount is the number of pixels in the tool region of interest. This is equal to Image Height x
Image Width.
Image Width
X-axis length, in pixels, of the tool region of interest.
Image Height
Y-axis length, in pixels, of the tool region of interest.
Filter (n) Name
Name of the filter. This result is output for each filter, starting at Filter 0.
Filter (n) Match Pixel Count
Number of pixels that match the conditions set by the filter. This result is output for each filter, starting
at Filter 0.
Filter (n) Match Quality
Filter Match Quality is a percentage value of pixels matched to the specified filter. This value is equal to
the number of matched pixels (Filter (n) Match Pixel Count), divided by the total number of pixels in
the region of interest (Image Pixel Count). This result is output for each filter, starting at Filter 0.
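The relationship between these results can be written out directly. This is an illustrative sketch (shown here as a percentage), not AdeptSight code, and the pixel counts are hypothetical:

```python
def filter_match_quality(match_pixel_count, image_width, image_height):
    # Match quality = matched pixels / total pixels in the region of interest,
    # where ImagePixelCount = ImageWidth x ImageHeight.
    image_pixel_count = image_width * image_height
    return 100.0 * match_pixel_count / image_pixel_count

# 1200 matched pixels in a 100 x 40 pixel region of interest:
print(filter_match_quality(1200, 100, 40))  # 30.0 (percent)
```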
Configuring Advanced Color Matching Tool Parameters
The Advanced Parameters section of the Color Matching Tool interface provides access to advanced
Color Matching Tool parameters and properties.
Configuration
Output Filter Image Enabled
OutputFilterImageEnabled specifies if an image will be output to display the filter results of the Color
Matching tool.
• When OutputFilterImageEnabled is set to True, the Color Matching tool generates an
output image that displays the results of the color filters.
• Generating an image increases the execution time, and should typically be used as a visual aid
when configuring the application. If displaying the results is not needed during runtime, set
OutputFilterImageEnabled to False.
Frame Transform
The Scale to Instance parameter is applicable only to a Color Matching Tool that is frame-based, and
for which the Input Frame is provided by a Locator. Otherwise this parameter is ignored. If the Locator is
configured to locate parts of varying scale, the Scale to Instance parameter determines the effect of the
scaled instances on the Color Matching Tool.
Scale to Instance
When ScaleToInstance is True, the Color Matching Tool region of interest is resized and positioned
relative to the change in scale of the Input frame. This is the recommended setting for most cases.
When ScaleToInstance is False, the Color Matching Tool ignores the scale and builds frame relative to
the input frame without adapting to the change in scale.
Location
Tool Position
Most tool position parameters can be set through the Location section of the tool interface. These are
the parameters that define the tool’s region of interest. Additionally, the Advanced Parameters
section gives access to the CalibratedUnitsEnabled parameter.
Calibrated Units Enabled
When CalibratedUnitsEnabled is set to True (default value), the tool results are returned in
millimeters. When set to False, tool results are returned in pixels.
Height
Height of the Color Matching Tool region of interest.
Width
Width of the Color Matching Tool region of interest.
Rotation
Angle of rotation of the Color Matching Tool region of interest.
X
X coordinate of the center of the tool region of interest.
Y
Y coordinate of the center of the region of interest.
Figure 175 Illustration of Tool Position of a Tool Region of Interest (Width, Height, X, Y, Angle of Rotation)
Tool Sampling
Sampling refers to the procedure used by the tool for gathering values within the portion of the input
image that is bounded by the tool’s region of interest. Two sampling parameters, the Sampling Step
and Bilinear Interpolation, can be used as necessary to achieve the required tradeoff between speed and
precision.
For specific applications where a more appropriate tradeoff between speed and precision must be
established, the sampling step can be modified by setting SamplingStepCustomEnabled to True
and modifying the SamplingStepCustom value.
Bilinear Interpolation
Bilinear Interpolation specifies if bilinear interpolation is used to sample the image before it is
analyzed.
To ensure subpixel precision in inspection applications, Bilinear Interpolation should always be set to
True (enabled). Non-interpolated sampling (Bilinear Interpolation disabled) should only be used in
applications where the speed requirements are more critical than precision.
Sampling Step Default
SamplingStepDefault is the best sampling step computed by the tool, based on the average size, in
calibrated units, of a pixel in the Image. This default sampling step is usually recommended.
SamplingStepDefault is automatically used by the tool if SamplingStepCustomEnabled is False.
Sampling Step
SamplingStep is the step used by the tool to sample the area of the input image that is bounded by the
tool region of interest. The sampling step represents the height and the width of a sampled pixel.
Sampling Step Custom
SamplingStepCustom enables you to set a sampling step value other than the default sampling step.
To set a custom sampling step, SamplingStepCustomEnabled must be set to True.
• Increasing the sampling step value reduces the tool's precision and decreases the execution
time.
• Reducing the sampling step can increase the tool's precision but can also increase the
execution time.
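The tradeoff can be put in rough numbers. This sketch assumes only what the text states, that the sampling step is the side length of a sampled pixel in calibrated units; the ROI size and step values are hypothetical:

```python
def samples_per_axis(roi_length_mm, sampling_step_mm):
    # Approximate number of samples along one axis of the region of interest.
    return max(1, round(roi_length_mm / sampling_step_mm))

# Doubling the sampling step roughly quarters the total number of samples:
print(samples_per_axis(10.0, 0.05) ** 2)  # 200 x 200 = 40000 samples
print(samples_per_axis(10.0, 0.10) ** 2)  # 100 x 100 = 10000 samples
```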
SamplingStepCustomEnabled
Setting SamplingStepCustomEnabled to True enables the tool to apply the custom sampling step
defined by SamplingStepCustom. When set to False, the tool uses the default sampling step
(SamplingStepDefault).
Results
Coordinate System
The CoordinateSystem parameter sets the coordinate system used by the tool to express results. The
available coordinate systems are: Image (hsImage), World (hsWorld), Object (hsObject), and Tool
(hsTool).
Image Height
ImageHeight is the height of the tool's region of interest expressed in pixels. Read only.
Image Width
ImageWidth is the width of the tool's region of interest expressed in pixels. Read only.
Using the Results Inspection Tool
The Result Inspection Tool can be used to filter results that meet specific conditions. This tool filters
frame results of tools in the vision sequence by applying one or more conditions.
• The Result Inspection Tool outputs a frame result and can therefore be a frame provider for
other vision tools.
• A typical use of the Result Inspection Tool is to filter the results of a vision inspection tool so that
only the objects that pass certain inspection criteria are picked by a robot, while
objects that fail inspection are ignored.
Example of a Simple Results Inspection Tool
In this example, a Locator tool finds objects that are then measured by a Caliper tool. Objects pass the
inspection if the Caliper measure is between 8.6 and 8.9 millimeters. Only objects that pass inspection
are picked by a robot. Objects that fail are ignored.
The two following filters are configured in the Result Inspection Tool:
• CaliperSize ≥ 8.6
• CaliperSize ≤ 8.9
The AND Global Operator is applied because only objects that meet both filters (conditions) pass the
inspection criteria.
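Combined with the AND operator, the two filters amount to a simple range test. This sketch is an illustration of the logic, not tool output:

```python
def passes_inspection(caliper_size_mm):
    # Both filters must Pass: CaliperSize >= 8.6 AND CaliperSize <= 8.9.
    return caliper_size_mm >= 8.6 and caliper_size_mm <= 8.9

print(passes_inspection(8.75))  # True  -> object is picked
print(passes_inspection(9.00))  # False -> object is ignored
```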
Figure 176 illustrates an image in which only one object passes inspection. The Result Inspection Tool
outputs a result frame only for the valid object, illustrated by a red marker in the display.
Figure 176 Example of results output by the Results Inspection Tool
Basic Steps for Configuring a Result Inspection Tool
1. Select the tool that will provide input frames. See Defining the Input for the Result Inspection
Tool.
2. Select a Global Operator.
3. Define one or more Filters.
4. Test and verify results. See Result Inspection Tool Results.
5. Configure Advanced properties if required. See Configuring Advanced Edge Locator Parameters.
Defining the Input for the Result Inspection Tool
The Input for the Result Inspection Tool consists of a frame, not an image. This frame can be provided
by any tool in the vision sequence that outputs a result frame. As shown in the preceding example, input
is typically provided by a Locator tool.
The choice of the Input frame affects:
• The choice of Operands, in the filter(s) that will be added.
• The frames output by the tool.
To set the Input:
1. From the Input dropdown list, select the tool that will provide the output frame.
2. If the required tool does not appear in the dropdown list, make sure that the required tool has
been added to the Sequence Manager, above the Result Inspection Tool.
Configuring the Result Inspection Tool
The configuration of the Result Inspection Tool consists of adding one or more filters that will filter
results from other tools.
Each filter requires two operands and an operator. When the tool executes, each filter is executed and
then the Global Operator (AND/OR) is applied to all the filters. Each filter returns either a Pass (true)
result or a Fail (false) result.
Choosing a Global Operator
The Global Operator is a logical operator (AND or OR) that is applied to all the filters configured in the
tool. The selected operator is applied to all filter results; this operation outputs the Global Result.
The Global Result is returned as a Pass (true) or a Fail (false).
• If the global operator is AND, all filters must return a Pass (true) for the Global Result to
return a Pass (true).
• If the global operator is OR, at least one filter must return a Pass (true) for the Global
Result to return a Pass (true).
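The AND/OR combination of filter results can be sketched as follows. This is an illustration of the logic only, not AdeptSight code:

```python
def global_result(filter_results, global_operator):
    # filter_results: list of per-filter Pass (True) / Fail (False) values.
    if global_operator == "AND":
        return all(filter_results)
    if global_operator == "OR":
        return any(filter_results)
    raise ValueError("global operator must be 'AND' or 'OR'")

print(global_result([True, True, False], "AND"))  # False
print(global_result([True, True, False], "OR"))   # True
```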
Creating Filters
Any number of filters can be created and added to the list of filters. Filters are defined in the Filter Editor,
as illustrated in Figure 177.
• Each filter consists of a Left Operand, an Operator, and a Right Operand.
• The first operand must be a tool that is in the vision sequence.
• Important: The tools selected as operands must be "based on" the Input tool.
• Each filter will return one of two results: Pass or Fail.
To create a filter in the Filter Editor:
1. Click the Add Filter icon in the Filters section:
2. The Filter Editor opens, as illustrated in Figure 177.
3. Under First Operand, use the dropdown list to select a Tool.
This tool must be based on, or have a direct relationship with, the tool that was selected as
Input. See Defining the Input for the Result Inspection Tool.
Under First Operand, use the dropdown list to select a Result. This is a result from the tool
selected above.
4. Select an Operator from the drop-down list.
5. Under Right Operand select either a Constant (the most commonly used option) or a Tool and
Result.
6. Click OK to exit the Filter Editor.
Figure 177 Filter Editor
Choosing Operands and Operator
The proper choice of Operands and an Operator must take into account which tool was chosen to
provide the Input frame for the Result Inspection Tool.
The choice of operands also has an impact on the possible number of results output by the Result
Inspection Tool.
Left Operand
The Left Operand is composed of two elements: a tool that exists in the vision sequence, and a result
that is output by the selected tool.
The Left Operand MUST have a direct relationship with the Input tool. It must be either
the same as the Input tool or it must be a tool that is "based" on the Input tool.
For example, if the Input tool is a Locator, an acceptable operand would be a frame-based tool for which the Locator is the frame-provider.
Operator
An operator is a mathematical function that will be applied to the selected operand values.
Right Operand
In most applications the right operand is a constant that specifies a fixed numerical value. Figure 178
illustrates a filter that will Pass only object instances on which exactly 3 blobs were found.
Figure 178 Example of filter with a constant as an operand
The right operand can also be a combination of a Tool and Result in the same manner as the Left
Operand. In this case the same restrictions apply: the operand must have a direct relationship with the
Input tool.
Figure 179 illustrates a filter with two tools as operands. In this case, the filter will Pass only objects on
which the radius of a first Arc Caliper is greater than the radius of a second Arc Caliper.
Figure 179 Example of filter with two tools as operands
Result Inspection Tool Results
By default, the Result Inspection Tool outputs results only for iterations in which the result of the global
operator is Pass (true). However, this can be modified in the Advanced Parameters, by modifying the
value of OutputFrames and OutputResults parameters.
The number and numbering of frames output by the Results Inspection tool depends on:
• The number of frames provided by the input tool. The number of result frames is directly
related to the Input tool. The Result Tool will output at most the same number of frames that
are provided by the input tool.
• The results for each Filter Result and for the Global Filter: Fail or Pass.
• The properties of the OutputFrames and OutputResults advanced parameters.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance
of each tool. At each execution of the tool, the time, date, and results are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
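Conceptually, the results log grows by one timestamped entry per execution. The sketch below illustrates that append-on-each-execution behavior; the entry layout and function name are illustrative, not the exact AdeptSight log format.

```python
from datetime import datetime

# Append one timestamped entry to the results log. Each execution of the
# tool adds the date, time, and the results of that execution.
def append_log_entry(path: str, results: dict) -> None:
    with open(path, "a") as log:
        log.write(datetime.now().isoformat(sep=" ", timespec="seconds") + "\n")
        for name, value in results.items():
            log.write(f"  {name}: {value}\n")

append_log_entry("results.log", {"Global Result": "Pass", "Filter Result 0": "Pass"})
```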
Viewing Results
The display and result grid provide information on the Results. The number of frames and results
depends on the number of frames or instances provided by the input tool.
Figure 180 Representation of the Result Inspection Tool Results in Display and Results Grid
Description of Results
The Result Inspection Tool outputs the following results:
Elapsed Time
The Elapsed Time is the total execution time of the Result Inspection Tool. Elapsed Time is not visible
in the results grid but it is output to the results log for each iteration of the Result Inspection Tool.
Output Frame
Index number of the result frame output by the Result Inspection Tool.
Input Frame
Index number of the frame provided by the Input tool.
X
The X coordinate of the result frame.
Y
The Y coordinate of the result frame.
Rotation
The rotation of the result frame.
Global Result
Global Result returns the overall success of the tool, with respect to the global operator (AND/OR) and the
Filter Results.
Filter Result 0,1,..n
A Filter Result is output for each filter operation defined in the tool. The global operator AND or OR is
applied to the entire set of Filter Results to determine the success of the Results Inspection tool.
Table 6 shows the effect of the AND and OR operator on the same set of Filter Results.
Table 6 Example of results output by the Result Inspection Tool
Operator   Filter0   Filter1   Filter2   Global Result
AND        Pass      Fail      Fail      Fail
OR         Pass      Fail      Pass      Pass
Using the Frame Builder
The Frame Builder Tool allows the user to create custom reference frames that can be used by other
vision and motion tools. The Frame Builder builds frames of reference from each frame definition
configured in the tool.
Figure 181 Example of a Frame Builder Tool Positioned Relative to a Frame
How it Works
Whenever a Frame Builder tool is executed, it generates (builds) frames based on the frame definitions
that are configured in the Frame Builder tool. Any number of frame definitions can be configured in a
Frame Builder tool.
• A frame definition can generate multiple frames, depending on the parameters configured in
the definition.
• Frame definitions are configured in the Frame Editor.
• A Frame definition can be configured to build frames in one of two modes: relative to another
frame result, provided by another tool, or relative to a fixed position in an input image.
Basic Steps for Configuring a Frame Builder tool
1. Add a Frame to the list of Frames. See Adding Frames
2. Edit the created frame in the Frame Editor:
• Select the Mode used to position the frame: relative to an image or relative to a frame.
• Provide the source of Input, either images or frames provided by another tool.
• Test and verify results. See Description of Frame Builder Results.
Adding Frames
Frame definitions are created by adding them to the Frames list.
An unlimited number of frame definitions can be created and configured in a single Frame Builder tool.
These frame definitions can be renamed, added and deleted through the Frames list.
From the Frames list, you can:
• Add and remove frames.
• Rename the frames, which by default are named Frame(0), Frame(1), and so forth.
To add a frame:
1. Under the Frames list, click the 'Add Frame' icon.
2. A frame is added with the default name: Frame(n).
To remove a frame:
1. In the Frames list, select the frame that must be removed.
2. Click the 'Remove Frame' icon.
To rename a frame:
1. In the Frames list, double-click on the name of the frame to be renamed.
2. Type a new name for the frame. This will not affect the configuration parameters of the frame.
To edit a frame:
1. In the Frames list, select the frame.
2. Click Edit. This opens the Frame Editor window for the selected frame.
3. Configure the frame using the display or by entering values. See Configuring Frame Definitions
in the Frame Editor for more details.
Configuring Frame Definitions in the Frame Editor
The Frame Editor defines the positioning parameters for a selected frame definition. Depending on the
selected parameters, frames built by the Frame Builder will be automatically positioned relative to an
image or relative to frames provided by other vision tools.
Figure 182 The Frame Editor Dialog
To configure a frame definition in the Frame Editor:
1. Select a Mode. Relative To Image will build frames relative to a static position in the input
image. Relative To Frame will build frames positioned relative to input frames.
2. Select the Input tool. If the Mode is Relative To Frame, the Input must be a tool that outputs
frame results, for example a Locator tool.
If the selected Mode is Relative To Image, the Input must be a tool that outputs image results,
for example, an Acquire Image tool.
3. If the Mode is Relative To Frame but only a specific input frame must be used to build frames,
disable the All Frames checkbox and specify the index of the required input frame in the Frame
Index box.
4. By default, Scale To Instance is enabled. However, if the position of built frames must not
change when an input frame, provided by a Locator, is of a varying scale, disable Scale To
Instance.
5. Define the Location parameters. Depending on the selected Mode, Location parameters are
expressed relative to an Input image or relative to an Input frame.
Configuring Frame Editor Parameters
Mode
The selected Mode parameter determines the type of reference frame and input that will be used to
build and generate frames.
• The Relative To Image mode builds frames relative to the frame of reference of the Input
image.
• The Relative To Frame mode builds frames relative to the Input frame of reference.
Input
Selects the tool that provides the input image or the input frame. This depends on the selected Mode.
• In Relative To Image mode, the Input must be provided by a tool that outputs images, such
as an Acquire Image tool or an Image Processing tool.
• In Relative To Frame mode, the required input is a frame, provided by a frame-provider tool.
All Frames
If the Frame Builder must build a frame for each frame output by the frame-provider tool, enable
the All Frames check box. This is the default behavior. If the Frame Builder must only be applied to
a single frame, disable the All Frames check box and select the required frame with the Frame Index
parameter.
Frame Index
When All Frames is disabled, Frame Index sets the index of the input frame that will be used to build
frames output by the Frame Builder.
Scale To Instance
The Scale To Instance parameter is applicable to frame definitions that are built relative to a Locator
frame of reference. If the Locator is configured to locate parts of varying scale, the Scale To Instance
parameter determines the effect of the change in scale on frames output by the Frame Builder.
When Scale to Instance is enabled, the Frame Builder adapts the Location parameters relative to the
change in scale on the Input frame. When Scale to Instance is disabled, the Frame Builder ignores the
scale and builds frame relative to the input frame without adapting to the change in scale.
Figure 183 illustrates the effect of the Scale To Instance parameter.
Figure 183 Effect of Scale To Instance Parameter
Location
Location parameters define the transform relative to the selected input image or frame.
• In Relative To Image mode, parameters are expressed relative to the origin of the Input
image.
• In Relative To Frame mode, parameters are expressed relative to the origin of the Input
reference frame, provided by a frame-provider tool.
In this mode, the display window provides markers that identify frame definition and the Input
frame. The frame definition marker can be positioned manually in the display.
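In Relative To Frame mode, building a frame amounts to composing the Location offset with the pose of the input frame, with Scale To Instance determining whether the offset is scaled by the instance scale. The sketch below illustrates this composition; the function name, tuple layouts, and values are hypothetical, not AdeptSight's API.

```python
import math

# A frame definition stores Location parameters (x, y, rotation) expressed
# in the input frame of reference. Building a frame rotates and translates
# that offset into image space; with Scale To Instance enabled, the offset
# is also scaled by the instance scale reported by the Locator.
def build_frame(input_frame, location, scale_to_instance=True):
    fx, fy, frot, fscale = input_frame   # pose + scale of the input frame
    lx, ly, lrot = location              # Location parameters of the definition
    s = fscale if scale_to_instance else 1.0
    cos_r = math.cos(math.radians(frot))
    sin_r = math.sin(math.radians(frot))
    x = fx + s * (lx * cos_r - ly * sin_r)  # rotate offset into image space
    y = fy + s * (lx * sin_r + ly * cos_r)
    return (x, y, (frot + lrot) % 360)

# Offset (10, 0) from an instance at (100, 50), rotated 90 degrees, scale 2:
print(build_frame((100, 50, 90, 2.0), (10, 0, 0)))   # ~(100, 70, 90)
# Same instance with Scale To Instance disabled: the offset is not scaled.
print(build_frame((100, 50, 90, 2.0), (10, 0, 0), scale_to_instance=False))  # ~(100, 60, 90)
```

This is why, when Scale To Instance is disabled, the built frame keeps the same distance from the input frame origin regardless of the instance scale.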
Frame Builder Results
The Frame Builder outputs a frame result that can be used as frame input by other tools.
The results of a tool process can be saved to a text file. This can be useful for analyzing performance of
each tool. At each execution of the tool, time, date and results for each execution are appended to the
results log.
Figure 184 Representation of Frame Builder Results in Display and Results Grid
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the region of interest of each instance of a Frame Builder. If the tool is
frame-based, the frame numbers correspond to the frames that provided the positioning.
Grid of Results
The grid of results presents the results for each frame output by the Frame Builder. These results can be
saved to file by enabling the Results Log.
Description of Frame Builder Results
The Frame Builder outputs the following results.
Elapsed Time
The Elapsed Time is the total execution time of the Frame Builder. Elapsed Time is not visible in the
results grid but it is output to the results log for each iteration of the Frame Builder.
Frame
Index of the Frame, defined in the list of Frames of the Frame Builder. The first frame defined in the list
is 0.
Output Frame
Index of the Frame result output by the Frame Builder tool.
X
X position of the output frame.
Y
Y position of the output frame.
Rotation
Rotation of the output frame.
Using the Overlap Tool
The Overlap Tool is a motion tool for conveyor tracking applications. The purpose of the Overlap Tool is
to make sure that parts moving on the belt are recognized only once.
Because a part found by the Locator (or another input tool) may be present in many images acquired by
the camera, the Overlap Tool ensures that the robot is not instructed to pick up the same part more
than once.
Conveyor tracking requires a CX controller. Motion Tools (Overlap
Tool and Communication Tool) require a valid Conveyor Tracking
License.
How It Works
The Overlap Tool filters results. If an instance in the image is a new instance (Pass result) it is passed on
to the next tool in the sequence. If an instance is already known it is rejected (Fail result) and is not sent
to other tools in the sequence, to avoid "double-picking" or "double-processing" of the object.
Requirements for Using the Overlap Tool
The Overlap Tool will only function in a conveyor tracking environment. This tool executes correctly only
if:
• The camera, robot, and conveyor belt are calibrated.
• The connection to the controller is active.
• The tool is receiving latched values from the Acquire Images tool.
• The conveyor belt and the controller have been correctly assigned to a camera, in the
AdeptSight vision project. See Setting Up System Devices.
• A valid Conveyor Tracking License is installed.
Order of the Overlap Tool in a Vision Sequence
The Overlap Tool should be placed near the beginning of a sequence, just under a Locator tool (or other
Input Tool) and before any inspection tools in the sequence. This ensures that the same instance is not
processed many times by the inspection tools in the sequence.
Basic Steps for Configuring the Overlap Tool
1. Select the tool that will provide the Input. This is typically a Locator tool. See Input.
2. Test the Overlap Tool by executing the sequence and verifying results. See Overlap Tool Results.
3. If required, configure Advanced Parameters. See Advanced Overlap Tool Parameters.
Figure 185 Overlap Tool Interface
Input
The Input required by the Overlap Tool is typically provided by the Locator tool. This input consists of
instances (frames) output by the Locator. It is also possible for the Input to be provided by a Blob
Analyzer or a Results Inspection Tool.
To set the Input:
1. Execute the sequence once to make sure that an input image is available.
2. From the Input dropdown list, select the tool that will provide the input instances.
Related Topics
Overlap Tool Results
Advanced Overlap Tool Parameters
Using the Communication Tool
Overlap Tool Results
The Overlap Tool outputs 2 sets of results. These results are displayed in the grid of results under the
names: Overlap Results and Overlap Results Debug.
Overlap Results
The results designated as Overlap Results provide a Pass/Fail result for each instance, as well as
other instance results that are passed on by the Locator, or other input tool.
If the Input tool is not a Locator (a Blob Tool for example), results that are exclusive to the Locator Tool,
such as MatchQuality or Fit Quality, will not be valid.
Overlap Results Debug
These results are provided only for troubleshooting purposes, such as determining the cause of a
malfunction, for example missed instances or "double-picking".
Overlap Results Debug values provide information on known instances, recognition tolerance, and
management of known instances.
Saving Results
The results of a tool process can be saved to a text file. This can be useful for analyzing the performance
of each tool. At each execution of the tool, the time, date, and results are appended to the
results log.
To create and store results to a log file:
1. Enable the check box under Results Log.
2. Click the 'Browse' icon.
3. Set the name of the file (*.log) and the location where the file will be saved.
4. The next time the sequence is executed, a new results log will be started, with the name and
file path that are currently shown in the text box.
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the results of the Overlap Tool in the input image.
• When an object instance is found by the Locator, for the first time, the Overlap Tool displays
the new instance in blue.
• Instances that have been recognized in previous images are displayed in red. These
"Known Instances" receive a Fail result, and are not passed on to the following tools in the
sequence.
Grid of Results
The grid of results presents the results for each frame output by the Overlap Tool. These results can be
saved to file by enabling the Results Log.
Description of Overlap Tool Results
Basic Overlap Tool results are displayed in the top half of the results grid, or in the top half of the results
log for each iteration of the tool. Most of these results, except for Pass/Fail, are generated by the Input
tool. Pass/Fail is the only result generated by the Overlap Tool.
Elapsed Time
The Elapsed Time is the total execution time of the Overlap Tool. Elapsed Time is not visible in the
results grid but it is output to the results log for each iteration of the Overlap Tool.
Pass/Fail
New instances receive a Pass result and these instances are passed on to the following tool in the
sequence which is typically a Communication Tool. Instances that already have been recognized in
previous images receive a Fail result. These instances, which are identified as Known Instances, have
already been seen and are not passed on to the following tools in the sequence. Failed Instances also
receive a Fail result. These are instances that are within the limits of the Recognition Tolerance parameter.
Overlap Tool Debug Results
ID
Type of the instance, followed by the index number. Instance types are: New Instance, Known Instance,
and Rejected Instance.
Model ID
Index number of the Model upon which this instance is based. If the Input tool is not a Locator, this
result is not relevant.
Known Instance Index
If the instance is a Known Instance, this is the ID of the (original) instance, when it was first
recognized. If this is a New Instance, this value is -1.
Original Belt Encoder Value
Value of the belt encoder when the instance was recognized for the first time (New Instance).
Belt Encoder Value
Value of the belt encoder when the Known Instance was recognized.
Belt Translation X
X position, in the Belt frame of reference of a Known Instance.
Belt Translation Y
Y position, in the Belt frame of reference of a Known Instance.
Diagonal
Internal value used by the Overlap Tool to determine when an instance has moved out of the image and
no longer needs to be examined or tracked.
Position Tolerance
Position Tolerance calculated by comparing the current location of a Known Instance to its
expected location. An instance is a Known Instance if this value is less than or equal to the Recognition
Tolerance parameter.
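The known-instance check described above can be sketched conceptually: predict where a previously seen instance should now be from the belt-encoder displacement, then compare the prediction against the newly found position. This is a hypothetical sketch, not the Overlap Tool's implementation; the function name, the assumption that the belt moves along X, and the use of the instance diagonal as the reference size are illustrative.

```python
import math

# An instance is treated as a Known Instance if the discrepancy between its
# expected position (original position + belt travel) and its newly found
# position, expressed as a percentage of a reference size (here the instance
# diagonal), is within the Recognition Tolerance.
def is_known_instance(original_xy, original_encoder, found_xy, encoder,
                      mm_per_tick, diagonal, tolerance_pct=25.0):
    travel = (encoder - original_encoder) * mm_per_tick   # belt motion in mm
    expected = (original_xy[0] + travel, original_xy[1])  # belt assumed along X
    error = math.dist(expected, found_xy)
    return (error / diagonal) * 100.0 <= tolerance_pct

# Same part seen again after the belt advanced 500 ticks at 0.1 mm/tick:
print(is_known_instance((20, 5), 1000, (70.5, 5.2), 1500,
                        mm_per_tick=0.1, diagonal=30))  # True -> Known (Fail)
```

Raising the tolerance makes the check more forgiving (fewer double-picks, more risk of merging distinct parts); lowering it does the opposite, which matches the trade-off described for the Recognition Tolerance parameter.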
Related Topics
Advanced Overlap Tool Parameters
Using the Communication Tool
Advanced Overlap Tool Parameters
The Advanced Parameters section of the Overlap Tool tool interface provides access to advanced
Overlap Tool parameters.
Encoder Units
Sets the units used by the controller to calculate the position of instances on the conveyor belt. When the
conveyor belt is calibrated, the calibration calculates the scale factor that converts the number of
encoder ticks (encoder signals) to millimeters. Encoder Ticks is the default and recommended mode.
The Millimeters mode is reserved for special cases, and requires configuring V+ to return results in
millimeters (through SETDEVICE).
Recognition Tolerance
Recognition Tolerance sets the allowed tolerance for recognizing a known instance beyond the expected
position of the object. Tolerance is the percentage of displacement of the X,Y position of the instance.
The size and shape of objects influences the optimal setting for this parameter.
Increasing the Recognition Tolerance can help reduce double-picking but may increase the number of
missed instances. Conversely, decreasing the Recognition Tolerance may increase the number of
"double-picks".
The default value of 25% is the recommended value for
Recognition Tolerance. This value should be suitable for most
applications.
Do not modify the default value unless the Overlap Tool fails to
function as expected.
Using the Communication Tool
The Communication Tool is a motion tool for conveyor tracking applications. The purpose of the
Communication Tool is to provide instructions to the controller for the handling of objects that must be
picked or manipulated by a robot.
Conveyor tracking requires a CX controller. Motion Tools (Overlap
Tool and Communication Tool) require a valid Conveyor Tracking
License.
How It Works
The Communication Tool receives instances from an Overlap Tool, which eliminates "duplicate"
occurrences of the same instance. The Communication Tool then processes the input instances by
applying its parameters and its region of interest. The Communication Tool then acts as a filter:
• Instances that are successfully processed by the Communication Tool are sent to the
controller. These are Queued Instances.
• Instances that are not output to the controller, because they are outside the Communication
Tool region of interest, or because the queue is full, are Outputted Instances. These
"rejected" instances can be passed to the following tools in the sequence.
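The queued/outputted split can be sketched as follows. This is an illustrative sketch, not the AdeptSight API; the function name, ROI tuple layout, and example coordinates are hypothetical.

```python
# The Communication Tool queues instances that fall inside its region of
# interest until the queue is full; everything else becomes an Outputted
# Instance, available to subsequent tools in the sequence.
def dispatch(instances, roi, queue_size):
    x0, y0, w, h = roi
    queued, outputted = [], []
    for (x, y) in instances:
        inside = x0 <= x <= x0 + w and y0 <= y <= y0 + h
        if inside and len(queued) < queue_size:
            queued.append((x, y))     # sent to the controller queue
        else:
            outputted.append((x, y))  # rejected: outside ROI or queue full
    return queued, outputted

# ROI covers the left half of a 640x480 image; the queue holds 2 instances:
q, out = dispatch([(100, 50), (200, 60), (250, 70), (500, 80)],
                  roi=(0, 0, 320, 480), queue_size=2)
print(q)    # [(100, 50), (200, 60)]
print(out)  # [(250, 70), (500, 80)]
```

In the example, the third instance is inside the ROI but the queue is already full, and the fourth is outside the ROI; both become Outputted Instances that a second Communication Tool could pick up.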
Requirements for Using the Communication Tool
The Communication Tool will only function in a conveyor tracking environment. This tool executes correctly only
if:
• The camera, robot, and conveyor belt are calibrated.
• The connection to the controller is active.
• The conveyor belt and the controller have been correctly assigned to a camera, in the
AdeptSight vision project. See Setting Up System Devices.
• A valid Conveyor Tracking License is installed.
Order of the Communication Tool in a Vision Sequence
In a simple pick-and-place application, one or more Communication Tools are placed at the end of the
sequence, just after the Overlap Tool.
In a sequence that requires inspection of parts before they are picked by a robot, the Communication
tool must be placed after one or more inspection tools, which may be followed by a Results Inspection tool. In
such a case, the Results Inspection tool filters the results of the inspection tools and provides valid instances
(parts that have passed inspection) to the Communication Tool.
Figure 186 illustrates the order of a Communication Tool in two types of conveyor-tracking applications.
Simple Pick-and-Place Application:
Acquire Images Tool > Locator Tool > Overlap Tool > Communication Tool

Pass/Fail Inspection Followed by Pick-and-Place:
Acquire Images Tool > Locator Tool > Overlap Tool > Inspection Tools > Results Inspection Tool >
Communication Tool (PASS instances; FAIL instances are rejected objects)
Figure 186 Position of the Communication Tool in a Vision Sequence
Multiple Communication Tools
In many applications it may be useful to add one or more Communication Tools. For example:
• Two Communication Tools handling either side of a conveyor belt. Each Communication Tool
sends instances to a robot that picks parts on one side of the belt only.
• Two (or more) Communication Tools, so that the second tool may catch instances that were
rejected by the first tool because the queue was full.
• Sorting instances for good and bad parts (Pass/Fail). This requires using the Results
Inspection Tool.
Basic Steps for Configuring a Communication Tool
1. Select the tool that will provide input images. See Input.
2. Select the Robot that will handle or pick the instance output by the Communication Tool.
3. Position the Communication Tool region of interest. See Location.
4. Test and verify results.
5. Configure Advanced Parameters if required. See Configuring Advanced Communication Tool
Parameters.
Figure 187 Communication Tool Interface
Input
The Input required by the Communication Tool is typically provided by an Overlap Tool. The Input can
also be provided by other tools that output instances, such as a Results Inspection tool or a Locator.
To set the Input:
1. From the Input dropdown list, select the tool that will provide the input instances.
2. If the required tool does not appear in the dropdown list, make sure that the system devices
are correctly set up and configured in the System Devices Manager. See Setting Up System
Devices.
Robot
The Robot parameter selects the robot that will handle or pick the instances output by the
communication tool.
Coordinate
The Coordinate parameter selects the coordinate system in which instance locations will be expressed
when they are sent to the controller. For most applications, the Belt frame of reference is the
recommended coordinate system. The Auto mode automatically expresses locations in the Belt frame
of reference when the belt has been calibrated. The Robot mode expresses instance locations in the
Robot frame of reference.
Location
Location parameters define the position of the tool’s region of interest, in which the tool carries out its
process. This region of interest can be the entire image, or a portion of the input image.
Figure 188 Positioning the Communication Tool
By default the tool region of interest is set to Entire Image. If only a portion of the image needs to be
processed by the Communication Tool, you can modify Location parameters to define the region of
interest on a fixed area of the input image.
Modifying the region of interest is useful for applications in which two or more robots pick or handle
objects on different sides of the belt. For example: a first Communication Tool configured to output
objects on the right side of the belt to Robot A and a second Communication Tool configured to output
instances on the left side of the belt to Robot B.
To position the Communication Tool on a portion of the input image:
1. Disable the Entire Area check box.
2. Click Location. The region of interest is represented in the display by a green bounding box.
3. Resize the bounding box in one of the following manners:
• Enter or select values for the Location parameters: Position X, Position Y, Width, and
Height.
• Resize the bounding box directly in the display.
• Click Location to open the ROI dialog shown in Figure 188. Rectangles represent proportions
of the tool ROI (region of interest). Drag the mouse across the rectangles in the ROI dialog to
select the portion of the image that should be included in the region of interest. Selected areas
are shown in blue. For example, in Figure 188, the tool region of interest will cover two thirds
of the input image, on the left side of the image.
Queue and Gripper Parameters
Queue Index
This is the index number that identifies the queue to which instances will be sent.
Only one AdeptSight communication tool can write to a specific queue on a controller. If there are
multiple Communication Tools, either on the same PC or on different PCs, each tool must be assigned a specific
queue index.
Queue Size
Specifies the number of instances that can be written to the queue. The maximum value is 100. The ideal
queue size varies greatly. It may require some trial and error to optimize this value for a specific
application and environment.
Gripper Offset Index
The Index value of the Gripper Offset assigned to the instance.
Related Topics
Configuring Advanced Communication Tool Parameters
Communication Tool Results
Using the Overlap Tool
Overlap Tool Results
Communication Tool Results
The Communication Tool outputs 2 sets of results. These results are displayed in the grid of results as
Queued Instances and Outputted Instances. Queued instances are sent to the controller, so they
can be picked or handled by a robot. Outputted Instances are either lost, or can be passed to
subsequent tools in the sequence, typically one or more additional Communication Tools.
Figure 189 Illustration of Queued and Outputted Instances in Communication Tool Results
Viewing Results
The results for each execution of the tool are represented in the display window, and the grid of results.
Results Display
The Results display represents the results of the Communication Tool in the input image.
Queued Instances are displayed in red. These instances are sent to the controller.
Outputted Instances are displayed in blue. These instances are "rejected" either because they are
outside the region of interest of the tool, because the queue stack was full, or because the tool is not
receiving latched values. Outputted Instances are passed to the next tool in the sequence. In such a
case, one or more Communication Tools can be added to the sequence to handle the queue overflow, or
to handle instances that are in another area of the conveyor belt.
For example, Figure 189 shows the tool region of interest (green bounding box) placed over the image
of the tool results. Only objects within this region of interest are sent to the controller. Instances in the
rest of the image could be "found" and queued by a second communication tool.
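The split illustrated in Figure 189 — instances inside the tool's region of interest are queued, the rest are output for a subsequent tool — can be sketched as a simple partition (the function and its rectangular-ROI test are illustrative assumptions, not the AdeptSight implementation):

```python
def partition_instances(instances, roi):
    """Split instances into (queued, outputted) according to a rectangular ROI.

    instances: list of (x, y) positions returned by the Locator.
    roi: (x_min, y_min, x_max, y_max) region of interest of the tool.
    """
    x_min, y_min, x_max, y_max = roi
    queued, outputted = [], []
    for (x, y) in instances:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            queued.append((x, y))      # sent to the controller
        else:
            outputted.append((x, y))   # available to the next Communication Tool
    return queued, outputted

# ROI covering only the left half of a 640 x 480 image:
queued, outputted = partition_instances([(100, 200), (500, 200)],
                                        (0, 0, 320, 480))
```

A second Communication Tool taking the outputted instances as input would apply the same logic with its own region of interest.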
Grid of Results
The grid of results presents the results for all instances that were passed by the Input tool to the
Communication Tool. These results are grouped into Queued Instances and Outputted Instances.
The results in the grid are the results for each instance, as passed on by the Locator (or other Input) tool.
Configuring Advanced Communication Tool Parameters
The Advanced Parameters section of the Communication Tool interface provides access to advanced
Communication Tool parameters and properties.
Update Queue Head
Update Queue Head sets the frequency with which the Communication Tool updates the queue head
pointer on the controller. When After Every Instance is enabled, the queue head pointer is updated
and sent by the PC to the controller after each new instance is written. When After the Last
Instance is enabled, the queue head pointer is only updated on the controller once all instances have
been written to the queue by the PC.
The recommended default mode is After Every Instance; however, it can slow throughput. The
disadvantage of the After the Last Instance mode is that the robot remains inactive while the PC is
writing instances to the queue.
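The difference between the two modes can be sketched by counting how often the head pointer is sent to the controller (illustrative Python, not actual AdeptSight code):

```python
def write_instances(instances, update_after_every_instance):
    """Count how many times the queue head pointer is sent to the controller.

    A sketch of the two 'Update Queue Head' modes: one controller update per
    written instance, versus a single update after the whole batch.
    """
    queue, head_updates = [], 0
    for inst in instances:
        queue.append(inst)
        if update_after_every_instance:
            head_updates += 1          # robot can act on this instance right away
    if not update_after_every_instance:
        head_updates = 1               # single update once all instances are written
    return head_updates

# Five instances: five controller updates in one mode, one in the other.
per_instance = write_instances(range(5), True)
per_batch = write_instances(range(5), False)
```

The per-instance mode lets the robot start on the first instance immediately, at the cost of more PC-to-controller traffic; the batch mode minimizes traffic but delays the robot until the last write.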
Set Soft Signal
Sets the soft signal that can be used by V+ to synchronize the controller and the PC. This signal informs
the controller that all instances detected by the Locator (or other input tool) have been sent to the
controller.
AdeptSight 2.0 Online Tutorials
March 2007
AdeptSight Tutorials
AdeptSight Tutorials help you learn how to use AdeptSight by walking you through setting up and
running some basic, functional vision/robot applications.
Welcome to AdeptSight Tutorials
We recommend that you start with the Getting Started with AdeptSight tutorial, to familiarize yourself
with the AdeptSight environment.
Getting Started with AdeptSight
This tutorial explains how to build a basic vision application. This tutorial requires a PC with AdeptSight,
and a camera. You do not need to connect to a controller to complete this tutorial.
AdeptSight Pick-and-Place Tutorial
This tutorial explains how to create a basic pick-and-place application. It requires an Adept Cobra robot
connected to a CX controller, or an i-Series Cobra with AIB. Familiarity with V+/MicroV+ is
recommended, but not necessary.
AdeptSight Conveyor Tracking Tutorial
This tutorial requires a CX Controller and a conveyor belt. It explains how to create a basic conveyor-tracking application. Knowledge of V+ is recommended.
AdeptSight Upward-Looking Camera Tutorial
This tutorial explains how to create an application for an upward-looking camera. This tutorial applies to
a system where the robot picks objects and brings each object into the field of view of an upward-facing
camera. Each part is then placed by the robot in a location called the 'Place Location'.
AdeptSight Standalone C# Tutorial
This tutorial explains how to create a vision application in Visual Studio, using the AdeptSight
Framework. It also provides an introduction to some often-used vision tools. Familiarity with the Visual
Studio environment and with the C# programming language is recommended.
Getting Started with AdeptSight
Getting Started with AdeptSight
This tutorial explains the basics of the AdeptSight software and guides you through the creation of a
simple vision application.
Tutorial Overview
This tutorial does not require a connection to a controller and robot. You can also follow this tutorial
without a camera, by importing images such as those provided in the Tutorials folder in the
AdeptSight installation folder.
• Installing AdeptSight Hardware
• Installing the Software
• Starting AdeptSight
• Adjusting the Camera
• Calibrating the Camera
• Creating a Vision Sequence
• Adding Tools to the Vision Sequence
• Acquiring Images
• Adding a Locator Tool
• Creating a Model
• Editing the Model
• Configuring Search Parameters
• Running and Testing the Locator
• Adding a Frame-Based Vision Tool
• Configuring Frame-Based Positioning of a Vision Tool
• Completing the Vision Application
Additional Tutorials
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Conveyor Tracking Tutorial
• AdeptSight Standalone C# Tutorial
System Requirements
• PC running Windows 2000 SP4 or Windows XP
• PC with an OHCI-compliant 1394 (FireWire) Bus Controller
• Microsoft .NET Framework. If it is not present on your computer, it will be installed during the
software installation.
AdeptSight 2.0 - AdeptSight Tutorials
2
The type of PC processor will influence the execution speed of the vision applications.
Next: Installing AdeptSight Hardware
Installing AdeptSight Hardware
Installing the Robot
This tutorial does not require that you be connected to a controller and a robot. However, this is a
requirement for the other AdeptSight Tutorials. For details on installing the robot, refer to the
installation instructions in the manuals that came with the robot and controller.
Figure 2 illustrates a setup for a robotic vision guidance application with AdeptSight.
Figure 2 Overview of a Robotic Setup with AdeptSight
Installing the Lens
1. Locate the lens that came with the package.
2. Install the lens on the camera. Do not over-tighten.
3. Do not set the lock screws, since you will need to adjust the lens aperture and focus later.
Installing the Camera
1. Mount the camera in the workcell using the camera mount brackets.
2. Locate the IEEE 1394 (FireWire) cable that is included in the shipment box.
3. Connect one end of the cable to the 1394 port on the camera. See Figure 2.
4. Connect the other end of the cable to a 1394 FireWire port on the PC. A hub may be required if
the PC (laptop) has a 4-pin port.
5. Typically, you should mount the camera so that it is perpendicular to the workspace.
6. Make sure that the installed camera clears the top of the robot and the entire work envelope.
Figure 4 Basler Camera connection ports (IEEE 1394 port on the back of the camera)
In Windows 2000, you may get a "Found New Hardware" popup. In that case, click
Cancel: the Basler camera driver will be installed with the AdeptSight software.
Next: Installing the Software
Installing the Software
Before Installing
• Install the USB hardware key (dongle) that came with AdeptSight. This dongle is required and
must be present at all times to ensure the proper functioning of AdeptSight.
• Uninstall any previous Adept DeskTop versions that are on the PC.
• Uninstall any previous AdeptSight versions.
• Uninstall any existing HexSight versions.
Installing the Software
1. Launch the installation from the AdeptSight CD-ROM.
2. Follow the instructions of the installation program.
3. The installation program will install the correct Adept DeskTop version that is required for
AdeptSight.
4. The installation will install and/or update:
• The driver for the Safenet Sentinel USB hardware key (dongle)
• The Basler camera driver (BCAM 1394 Driver)
• Microsoft .NET Framework 2.0
Next: Starting AdeptSight
Starting AdeptSight
Vision applications are created and managed in the Vision Project window in AdeptSight.
This tutorial explains how to open and use AdeptSight within Adept DeskTop. To follow this tutorial you
can also open AdeptSight in one of the example programs provided in the support files that were
installed with AdeptSight.
To start AdeptSight from Adept DeskTop:
1. Open Adept DeskTop.
2. From the Adept DeskTop menu, select View > AdeptSight, or click the 'Open AdeptSight' icon
in the Adept DeskTop toolbar.
3. If you have more than one controller license on your system, the Controller Information
dialog opens. Select the type of controller you will use.
Figure 5 Select Controller
4. The Vision Project window opens, similar to Figure 6.
Vision applications are built and configured through the Vision Project window, also called the
Vision Project manager.
Figure 6 The Vision Project Window (the Sequence Manager lets you manage and edit the sequences that make up a vision application; the System Devices Manager lets you manage and set up the devices used in a vision application; the Vision Project control can be docked anywhere in the Adept DeskTop window)
Using the Vision Project Window
A vision project consists of one or more vision sequences, which are managed and run from the
Sequence Manager section of Vision Project interface.
From the Sequence Manager you open the Sequence Editor to add and configure vision tools. This is
covered later in this tutorial in the section Creating a Vision Sequence.
From the System Devices Manager you add, manage and configure the cameras, controllers, robots
and conveyor belts needed for your application. This is explained in other tutorials: AdeptSight Pick-and-Place Tutorial and AdeptSight Conveyor Tracking Tutorial.
Before creating a new vision application, you will have to adjust the camera, set up the devices used by
the application, and calibrate the system.
Next: Adjusting the Camera
Adjusting the Camera
The Cameras tab in the System Devices manager should display the cameras detected on the system.
To adjust camera focus and contrast, you will need to open a live display of the images being grabbed
by the camera.
Displaying Live Camera Images
1. Select the Cameras tab in the System Devices manager, as shown in Figure 7.
2. In the list, select the Basler Camera (A601F…).
3. Click the 'Live Display' icon:
4. The Live Display window opens. Use this display to adjust the camera position, as well as focus
and aperture, as explained below.
Figure 7 Opening a Live Display Window (the 'Live Display' icon opens the Live Display window for a detected camera; Emulation acts as a virtual camera that uses a database of images)
Adjusting Lens Focus and Aperture
When the Live Display window is open, you can use it to guide you in adjusting the lens aperture and
focus.
To adjust camera focus and aperture:
1. Place one or more objects in the field of view.
2. If needed, use zoom options by right-clicking in the display window. See Figure 8.
3. Adjust the focus until objects in the display are sharp.
4. Once you have obtained the best possible focus, adjust the lens aperture (f-stop) until you
obtain a well-contrasted image. If the image is too highly contrasted (saturated), you will lose detail.
Figure 8 Live Display Window (right-click in the display window for display options)
You can now optionally adjust camera parameters, although the default camera parameters should be
satisfactory for this tutorial.
Next: Adjusting Camera Properties (optional), or skip to Calibrating the Camera
Adjusting Camera Properties (optional)
If you want to adjust camera parameters, follow the steps below. Otherwise go to the next module.
Opening the Camera Properties Window
1. Select the Basler camera in the list of Available Cameras (A601F…).
2. Click on the 'Camera Properties' icon. See Figure 9.
Figure 9 Opening the Camera Properties
Configuring the Camera Properties
In the Camera Properties window:
1. Select the Stream Format tab and set the following properties:
• Format: select Format 0.
• Frame Rate: select 60 fps.
• Mode: select 640 x 480, Mono 8
2. Select the Video Format tab and set the following properties by moving the sliders or directly
typing in the values:
• Shutter: set to 600.
• Gain: set to 10.
• Brightness: set to 400.
3. Leave other parameters at their default settings, and click OK to close the camera properties
window.
Figure 10 Camera Properties Window
Next: Calibrating the Camera
Calibrating the Camera
Calibrating the camera/vision system increases the accuracy of your results by correcting image errors such as lens distortion.
• The camera calibration requires a grid of dots target. For the purpose of this tutorial, you can
print out one of the sample calibration targets that is provided in the Tutorials/Calibration
folder, installed in the AdeptSight program folder.
• Sample targets are intended for teaching purposes only. Targets printed on paper are not
accurate calibration targets. See Why is Vision Calibration Important? for information on
creating accurate dot targets and the importance of calibrating the vision system.
• You can skip this step, in which case the camera will be calibrated during the Vision-to-Robot
calibration. This is explained in the next module of this tutorial. However, calibrating the
camera separately, with a grid of dots target, can provide higher accuracy to your application
than the camera calibration that is done through the Vision-to-Robot calibration.
• If you do not calibrate the camera first, and there is strong lens distortion, this may cause
the Vision-to-Robot calibration to fail.
Figure 11 Example of a Grid of Dots Calibration Target
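One reason a grid-of-dots target enables accurate calibration is that the known dot spacing fixes the image scale, while deviations of the imaged dots from a perfect grid reveal lens distortion. A rough Python sketch of the scale computation (illustrative only; AdeptSight's Vision Calibration Wizard does considerably more, including distortion correction):

```python
def pixels_per_mm(dot_centers_px, dot_spacing_mm):
    """Estimate image scale from the centers of one row of calibration dots.

    dot_centers_px: x coordinates (pixels) of equally spaced dots in one row.
    dot_spacing_mm: known physical spacing of the dots on the target.
    """
    gaps = [b - a for a, b in zip(dot_centers_px, dot_centers_px[1:])]
    mean_gap_px = sum(gaps) / len(gaps)
    return mean_gap_px / dot_spacing_mm

# Dots imaged 40 px apart on a target with 5 mm dot spacing:
scale = pixels_per_mm([100, 140, 180, 220], 5.0)
```

This also shows why a paper printout is unsuitable for real calibration: any error in the printed dot spacing feeds directly into the computed scale.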
Calibrating the Camera with a Calibration Target
1. In the Cameras tab of the System Devices manager, select the camera you want to calibrate
in the Devices list.
2. Click the 'Calibrate Camera' icon:
3. The Vision Calibration Wizard opens, beginning the vision (camera) calibration process.
4. Follow the instructions in the wizard, then return to this tutorial once the calibration is finished.
5. If you need help during the calibration process, click Help in the Calibration Wizard.
Figure 12 Starting the Camera Calibration Wizard (the 'Calibrate Camera' icon launches the camera calibration wizard; a warning symbol indicates a non-completed calibration)
Next: Creating a Vision Sequence
Creating a Vision Sequence
A sequence is a series of tasks that are executed by vision tools. When you execute a sequence, each
tool in the sequence executes in order. You add, remove, and edit the vision tools in the Sequence
Editor.
Saving a Sequence
All sequences in the Sequence Manager are saved when you save the vision project.
• Sequences are saved as part of the project, not individually.
• Project files are saved with the extension hsproj.
To save the vision sequence:
1. Click the 'Save Project' icon:
2. Save the project as GettingStarted.hsproj
Opening the Sequence Editor
By default, there is already a first sequence in the application.
1. Select the first sequence in the list.
2. In the Sequence Manager, click the 'Edit Sequence' icon:
3. The Sequence Editor window opens. Continue to the next step of the tutorial for information on
the Sequence Editor.
Figure 13 New vision sequence in the Sequence Manager (click the 'Edit Sequence' icon to open the Sequence Editor; to edit the sequence name, left-click once on the name and type in the new name)
Next: Adding Tools to the Vision Sequence
Adding Tools to the Vision Sequence
A vision sequence is built by adding vision tools to the sequence. These tools are added in the Sequence
Editor Window.
In this module you will add an image acquisition tool, called Acquire Image and an object finding tool,
called Locator.
The Sequence Editor Window
Figure 14 The Sequence Editor interface (Process Manager, display area, Toolbox, and grid of results area; if the Toolbox is not visible, click the Toolbox icon)
When you first open the Sequence Editor, it is empty, as illustrated in Figure 14. Tools are added by
dragging them from the Toolbox into the Process Manager area labeled "Drop Tools Here".
Add an Acquire Image Tool
Acquire Image is the first tool to add because it supplies images to other tools in the sequence.
1. In the Toolbox, select Acquire Image and drag it to the area marked "Drop Tools Here".
2. The Process Manager should look like the image shown in Figure 15.
3. You are now ready to acquire images for the application.
Figure 15 Acquire Image Tool Added to the Editor (the camera model and ID appear in the tool interface)
Next: Acquiring Images
Acquiring Images
The Acquire Image tool provides the images to subsequent tools in the sequence, such as the Locator
tool that you will add later.
Displaying Images
Executing the Acquire Image tool acquires and displays an image. You can also preview images as a continuous live
display or as single static images.
Figure 16 Live Display in the Sequence Editor
1. Select the Basler camera in the list if it is not already selected. If you are not using a camera
for this tutorial, see the note below.
2. To grab an image, click the 'Execute tool' icon:
3. The display should now contain an image.
4. Place an object in the field of view of the camera. To assist you in positioning the object, use
the Live display mode, by clicking the 'Live Mode' icon:
5. In the Live display mode, the word Live appears at the top left of the display, as shown in Figure 16.
6. If you are not satisfied with the image quality, click the 'Camera Properties' icon to access and
edit the camera parameters:
7. To exit the Live display mode, click again on the 'Live Mode' icon.
If you are following this tutorial without a camera, select Emulation in the Camera
dropdown list. The properties icon opens the Emulation dialog from which you can
import images, such as those contained in the Tutorials folder of the AdeptSight
program folder.
Next: Adding a Locator Tool
Adding a Locator Tool
The Locator tool searches for the objects you have defined in your application and returns results on the
location of the objects it finds.
1. In the Toolbox, select Locator and drag it into the Process Manager area, below the Acquire
Image tool.
Figure 17 Locator Tool added to the vision sequence (in the Process Manager area, the Acquire Image tool supplies images to the Locator tool; object models will be added in the Models section; 'Search' parameters appear below)
2. A Locator tool should now appear in the Process Manager area, as shown in Figure 17.
Beside Input, select Acquire Image. Input defines the tool that provides images to the
Locator tool.
3. Under Location, leave the Entire Image check box enabled. This ensures that the search
process will look for objects in the entire image provided by the camera.
4. Execute the sequence at least once to ensure that the Locator has an image available before
continuing to the next step.
To execute the sequence, click the execute sequence icon in the toolbar:
5. You are now ready to create a model for the object that you want to find with this application.
Next: Creating a Model
Creating a Model
To find a specific part or object, AdeptSight must have an active model of the part. You will now create
the model for the part you want to locate with this application.
Figure 18 Basic Model Edition mode provides quick model-building
Creating a New Model
To create a model:
1. Place the object in the camera field of view and acquire an image by executing the sequence.
To execute the sequence, click the 'Execute Sequence' icon:
2. In the Models section, click the '+' icon to create a new model. The display is now in Model
Edition mode as illustrated in Figure 18.
3. The Model's bounding box appears in the image as a green rectangle.
4. Drag and resize the green bounding box to completely enclose the entire object. Green outlines
show the contours that have been selected to create the model.
5. Ignore the yellow axes marker for now; it will be covered in the next module of the tutorial.
6. Click Done to complete the creation of the model. This new model now appears as Model0 in
the list of Models.
Next: Editing the Model
Editing the Model
Each modeled part has a frame of reference called the Object coordinate system. The origin of an
object's coordinate system is the position that is returned in the results when an instance of this object
is found.
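The role of the origin can be illustrated with a small 2D rigid transform: any point defined in the object frame maps to image coordinates through the (x, y, rotation) pose reported for a found instance. A Python sketch (the function name and pose convention are assumptions for illustration, not an AdeptSight API call):

```python
import math

def object_to_image(point_obj, instance_pose):
    """Map a point defined in the object frame to image coordinates.

    point_obj: (x, y) in the object's coordinate system.
    instance_pose: (x, y, rotation in degrees) reported for a found instance,
                   i.e. the pose of the object frame in the image.
    """
    px, py = point_obj
    ox, oy, theta = instance_pose
    t = math.radians(theta)
    x = ox + px * math.cos(t) - py * math.sin(t)
    y = oy + px * math.sin(t) + py * math.cos(t)
    return x, y

# With zero rotation the mapping is a pure translation:
x, y = object_to_image((10.0, 0.0), (200.0, 100.0, 0.0))
```

Moving the coordinate system marker therefore shifts the reported position of every found instance by the same object-frame offset, which is why the marker is usually placed on a physically meaningful feature such as a grip point.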
In this module you will edit the model to reposition the coordinate system.
Figure 19 List of Models with the newly created model (an information icon indicates that no Gripper Offset has been associated with this model)
1. Select Model0 in the list of models.
2. Click Edit to enter the Model Edition mode.
3. Reposition the yellow axes marker that indicates the position and orientation of the object’s
coordinate system:
• To rotate the marker, click on the arrow of the X or the Y-axis and drag with the mouse.
• To move the marker, click the origin of the X and Y axes and drag with the mouse.
• You can also drag the arrow end of the axes to stretch the marker to help align the marker
over long features.
4. Once you have finished positioning the marker, click Done to apply the changes made to the
Model and close the Model Editor.
The Model you have created should be satisfactory and ready to use. If it needs further editing, see the
next step of the tutorial: Editing the Model in Expert Mode (optional).
• Typically the next step is to calibrate the gripper offset for the model, which is required for
robot handling of parts. The gripper offset calibration is not required for this tutorial; it is
explained in other AdeptSight tutorials.
• Optionally, you can continue to edit the model, as explained in the following module.
Figure 20 Editing the coordinate system of the model (bold lines show the features that make up the model; the coordinate system marker sets the frame of reference for this object)
Next: Editing the Model in Expert Mode (optional), or skip to Configuring Search Parameters
Editing the Model in Expert Mode (optional)
If you want to edit model features, follow the steps below. Otherwise, go to the next module: Configuring
Search Parameters.
In this module you will edit model features with the Expert model-edition mode.
1. If you are not in model edition mode, select Model0 in the list of models and click Edit to open
the Model Edition mode.
2. Click Expert to enter the advanced model-edition mode.
3. Under Show, select Outline to display the outline-level features of the Model. Outline
features are coarser than Detail features. The model contains features at both the Outline and
Detail levels; you can edit features at either level.
4. Once you have finished editing the Model, click Done to exit the Model Editor.
Sections below explain a few basic model-edition tasks. More extensive information on
editing models is available in the online User Guide.
To build/rebuild a model
1. Click Build to build a new model. This rebuilds the model using the current 'Expert' mode
parameters.
2. Use the Feature Selection slider to modify the number of features that are added when you
Build the model.
3. The model is rebuilt each time you click Build. This will undo any manual changes made to the
model such as adding or removing features.
4. Click Apply to save the modifications to the model and Done to exit model edition mode.
To delete a feature:
1. Select a feature by clicking or double-clicking the feature. The selected feature is displayed as
a bold red contour.
2. Press the Delete key to remove the feature. The contour now appears in blue.
3. Click Apply to save the modifications to the model.
To add a feature:
1. Double-click a blue contour to select it.
2. Press the Insert key to add the selected contour to the model features.
3. Click Apply to save the modifications to the model.
Figure 21 Advanced Model Editing in Expert Mode (models contain both Detail-level and Outline-level features)
Next: Configuring Search Parameters
Configuring Search Parameters
Search parameters set basic constraints for the search process. This module introduces you to editing
the basic search parameters.
Basic Search Parameters
Figure 22 Configuring Search Parameters
1. Under Search, locate Min Model Recognition (%).
2. Replace the default value (50) with 75. This instructs the Locator to search only for objects that
contain at least 75% of the feature contours that were defined in the Model.
3. Leave other parameters at their default settings. For more information on this subject, consult
the online User Guide.
You can experiment later with other Search constraints:
• Scale: Select Range, then select a scale range to find objects of varying scale.
• Rotation: Select Nominal, then enter a rotation value to find only those objects that are
positioned at the defined angle of rotation.
• Instances to Find: Restricts the number of objects that the Locator can find in an input
image.
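The combined effect of these constraints can be sketched as a filter over candidate matches (the dictionary fields and sorting rule are illustrative assumptions; the Locator's actual search is far more sophisticated):

```python
def filter_candidates(candidates, min_recognition, scale_range, max_instances):
    """Keep candidates meeting the basic search constraints, best ones first.

    candidates: list of dicts with 'recognition' (%) and 'scale' (1.0 = nominal).
    """
    lo, hi = scale_range
    kept = [c for c in candidates
            if c["recognition"] >= min_recognition and lo <= c["scale"] <= hi]
    kept.sort(key=lambda c: c["recognition"], reverse=True)
    return kept[:max_instances]

found = filter_candidates(
    [{"recognition": 80, "scale": 1.0},
     {"recognition": 60, "scale": 1.0},     # below the 75% recognition threshold
     {"recognition": 90, "scale": 1.6}],    # outside the scale range
    min_recognition=75, scale_range=(0.9, 1.1), max_instances=10)
```

Tightening any constraint trades recall for speed and robustness: a higher Min Model Recognition rejects partial or occluded instances, while a narrow scale range rejects objects imaged at an unexpected distance.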
Next: Running and Testing the Locator
Running and Testing the Locator
Now that you have created a Model and configured search parameters, you will verify that the Locator
tool finds the object in images similar to the one that was used to create the model.
Figure 23 Display and Results of Found Objects (the execution time appears in the status bar; results appear in the grid of results)
1. Click the 'Execute Sequence' icon in the toolbar:
2. When an object is found, it is shown with a blue contour. The results for the object appear in
the grid below the display. See Figure 23.
3. If results do not appear in the grid, click on the Locator tool interface. The name of the tool
('Locator') will be highlighted in blue, as shown in Figure 23.
4. Move the object, or add more objects in the field of view.
5. The results for the found instances are updated every time you press the 'Execute Sequence'
button.
Test in Continuous Mode
1. To start a continuous running mode, click the 'Continuous Loop' icon in the toolbar:
2. Click the 'Execute Sequence' icon. The application should run in continuous mode.
3. Exit the continuous running mode by clicking 'Stop Sequence':
Next: Adding a Frame-Based Vision Tool
Adding a Frame-Based Vision Tool
In this module, you will add a tool to the vision application and configure it to use
AdeptSight’s frame-based positioning feature. You will add an Image Histogram tool that is positioned
relative to the Locator tool’s frame of reference.
Adding the Image Histogram Tool
1. From the Toolbox, select and drag an Image Histogram tool below the Locator tool.
2. Alternatively, you can right-click in the Process Manager and select the tool from the context
menu.
3. The Histogram Tool should appear below the Locator tool as illustrated in Figure 24.
4. In the Input box, select: 'Acquire Image'. This instructs the histogram tool to use input images
provided by the Acquire Image tool.
Figure 24 Image Histogram Tool added to the Vision Sequence (click the collapse/expand control to collapse or expand a tool interface; right-click in the blue area to display the context menu)
In the following module, you will configure the Image Histogram tool.
Next: Configuring Frame-Based Positioning of a Vision Tool
Configuring Frame-Based Positioning of a Vision Tool
When a tool is frame-based, it is positioned relative to another tool, which is the "frame provider".
• When the vision sequence executes, the frame-based tool is automatically positioned relative
to the results provided by the frame-provider tool.
• In this tutorial, the Locator tool will be the frame provider.
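Conceptually, frame-based positioning composes the tool's fixed offset with the pose of each frame returned by the frame provider. A minimal Python sketch, assuming a simple (x, y, rotation in degrees) pose convention (the names are illustrative, not the AdeptSight API):

```python
import math

def place_tool(frame_pose, tool_offset):
    """Compose a Locator frame with the tool's relative location.

    frame_pose:  (x, y, rotation_deg) of a frame found by the frame provider.
    tool_offset: (x, y, rotation_deg) of the tool's ROI relative to that frame.
    Returns the tool's ROI pose in image coordinates.
    """
    fx, fy, f_rot = frame_pose
    tx, ty, t_rot = tool_offset
    t = math.radians(f_rot)
    x = fx + tx * math.cos(t) - ty * math.sin(t)
    y = fy + tx * math.sin(t) + ty * math.cos(t)
    return x, y, f_rot + t_rot

# The same offset follows each found instance:
pose_a = place_tool((100.0, 100.0, 0.0), (20.0, 0.0, 0.0))
pose_b = place_tool((300.0, 50.0, 0.0), (20.0, 0.0, 0.0))
```

This is why enabling All Frames applies the Histogram tool once per found instance: each frame yields its own composed ROI pose.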
Positioning the Histogram Tool
1. In the Input Frame text-box, select Locator.
2. Enable the All Frames check-box.
3. Click the Location button. This opens the tool’s Location dialog, as shown in Figure 25.
• The Location parameters define the area of interest to which the tool will be applied.
• This area of interest is shown as a green bounding box in the image display.
4. Enter values in the Location dialog, or manually set the bounding box in the display. The X
and Y positions and Rotation are expressed relative to the frame of reference represented in
blue in the display area.
5. To modify the bounding box in the display area, click the Selection icon then use your mouse
to drag handles, or to rotate the X-axis marker.
6. Position the bounding box over an area in the image. As shown in Figure 25, an Image Histogram
tool can be placed to analyze the area just beyond the edges of an object, so that the results
can be used to determine whether the area is free of obstacles.
7. Once the bounding box is positioned, click OK to apply the parameters.
Figure 25 Frame-Based Positioning of the Image Histogram Tool (enable the 'Selection' icon to resize and rotate the bounding box in the display; the blue axes marker represents a Locator frame; the green box shows the region of interest of the Histogram tool)
Next: Testing the Image Histogram Tool
Testing the Image Histogram Tool
For this tutorial, ignore other parameters and settings of the Histogram tool.
To execute tool and view results:
1. Execute the sequence by clicking the 'Execute Sequence' icon in the toolbar:
2. When an object is found by the Locator, the Histogram tool is applied to the area that was
defined in the Location window. The Histogram tool is represented by a green rectangle, as
shown in Figure 26.
3. Verify the histogram results in the grid below the display.
4. If the Image Histogram results do not appear in the display or grid of results, click on the
'Image Histogram' title. The tool title should be blue; other tool titles will be displayed in
black. See Figure 26.
5. Results are updated every time the sequence is executed. See step 1.
6. If you enable the Results Log and define a log file you can save these results for further
statistical analysis.
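The obstacle check described earlier — using the Image Histogram results to decide whether the area beside an object is clear — can be sketched as follows (the function, threshold values, and decision rule are illustrative assumptions, not AdeptSight's):

```python
def area_is_clear(pixels, background_level=200, min_fraction=0.95):
    """Decide whether a region is obstacle-free from its grayscale histogram.

    pixels: grayscale values (0-255) inside the tool's region of interest.
    The area is considered 'clear' when at least min_fraction of the pixels
    are at least as bright as the background level.
    """
    histogram = [0] * 256
    for p in pixels:
        histogram[p] += 1
    total = sum(histogram)
    bright = sum(histogram[background_level:])
    return total > 0 and bright / total >= min_fraction

# A mostly bright region is clear; a region containing a dark object is not.
clear = area_is_clear([230] * 98 + [40] * 2)
blocked = area_is_clear([230] * 80 + [40] * 20)
```

Logging the histogram statistics over many executions, as the Results Log allows, would let you tune such thresholds from real data rather than guesswork.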
To test in continuous mode:
1. To start a continuous running mode, click the 'Continuous Loop' icon in the toolbar:
2. Click 'Execute Sequence' icon. The application will run in continuous mode.
3. Exit the continuous running mode by clicking the 'Stop Sequence' icon:
Blue letters indicate
that tool results are
displayed for this tool
Figure 26 Histogram Tool Results
To complete the tutorial, you will save and run the sequence from the Sequence Manager.
Next: Completing the Vision Application
Completing the Vision Application
Return to the Sequence Manager window by clicking on the Sequence Manager window in the Adept DeskTop interface.
Renaming the Vision Sequence
Unless you previously modified the name of the sequence, it is still named 'New Sequence'.
To rename the sequence:
1. In the Sequence Manager, left-click once on 'New Sequence' to select the sequence.
2. Left-click once again on the sequence to enter the 'edit/rename' mode.
3. Type in a new name for the sequence, for example 'Histogram Inspection'.
Saving the Vision Project
Saving the vision project saves the vision sequence and the configuration of the vision tools, including
all models.
To save the vision project:
1. In the Sequence Editor toolbar, click the 'Save Project' icon:
2. Save the project as GettingStarted.hsproj.
Executing the Vision Application
Executing the vision project from the Sequence Manager runs all of the sequences in the application.
To execute the vision application:
1. Select the sequence in the list.
2. In the toolbar, click the 'Continuous Loop' icon to enable continuous running of the application:
3. Click the 'Execute Sequence' icon in the toolbar:
The flag beside the sequence is now green, indicating that the sequence is running.
4. To stop the application, click the 'Stop Sequence' icon:
You have completed the tutorial!
Continue learning about AdeptSight in the following tutorials:
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Conveyor Tracking Tutorial
AdeptSight Pick-and-Place Tutorial
This tutorial will walk you through the creation of a basic pick-and-place application for a single robot
and single camera.
This tutorial assumes you have a basic knowledge of Adept robotic systems, MicroV+, and Adept DeskTop. If you are new to AdeptSight 2.0, we recommend that you start with the Getting Started with AdeptSight tutorial.
This tutorial uses as an example a system with:
• AdeptSight 2.0 running from Adept DeskTop
• A Basler camera (A601f or A631f)
• A Cobra iSeries robot with AIB controller
Some steps may differ from the tutorial if your system uses a CX controller or if you are not working in Adept DeskTop.
Tutorial Overview
• Overview of the Pick and Place System
• Start an AdeptSight Vision Project
• Verify Camera Calibration and Configuration
• Connect to the Controller
• Assign the Robot to the Camera
• Calibrate the Vision System to the Robot
• Create a Vision Sequence
• Add Tools to the Vision Sequence
• Acquire Image
• Add a Locator Tool
• Create a Model
• Calibrate the Gripper Offset for the Model
• Configure Locator Search Parameters
• Run and Test the Locator
• Integrate AdeptSight with a MicroV+ Program
System Requirements
• A PC running AdeptSight and Adept DeskTop software
• The camera provided with AdeptSight, or another DirectShow-compatible IEEE 1394 camera.
• A Cobra iSeries robot with AIB controller, or an Adept robot controlled by a CX controller.
Before Starting the Tutorial
You will need a few identical objects that you will pick with the robot. These same objects can be used to calibrate the system during the Vision-to-Robot Calibration Wizard.
Before starting this tutorial you should:
1. Install the camera.
2. Install the software.
3. Calibrate the camera.
Please refer to the Getting Started with AdeptSight tutorial if you need help with any
of these preliminary steps.
Next: Overview of the Pick and Place System
Overview of the Pick and Place System
In this tutorial you will set up a system that picks parts from a work surface.
• AdeptSight acts as a vision server that provides vision guidance to the controller.
• Vision applications are created on the PC.
• The MicroV+ (or V+) program on the controller, through Adept DeskTop, integrates the vision application with the motion control.
• Easy-to-use AdeptSight Calibration Wizards allow you to calibrate the entire system to ensure
accurate part finding.
Camera
PC
AdeptSight
(vision server)
Controller
Robot
Figure 27 Overview of a Robotic Setup with AdeptSight - Data Flow Schema
Next: Start an AdeptSight Vision Project
Start an AdeptSight Vision Project
Vision applications are created and managed in the Vision Project manager window in AdeptSight.
Opening AdeptSight
1. From the Adept DeskTop menu, select View > AdeptSight.
2. AdeptSight opens in the Vision Project window, similar to Figure 28.
Sequence Manager
Allows you to manage and edit the sequences
that make up a vision application
System Devices Manager
Allows you to manage and set up the devices that
are used in a vision application
Figure 28 The Vision Project manager window
The Vision Project Interface
A vision project consists of one or more vision sequences, which are managed and run from the Sequence Manager, part of the Vision Project interface.
• From the Sequence Manager you open the Sequence Editor to add and configure vision
tools.
• From the System Devices Manager you add, manage and configure the camera, controllers,
robots and conveyor belts needed for your application.
Create and Name the New Sequence
You will now create and name the vision sequence that you will use for this tutorial.
1. By default, the Vision Project list contains a sequence named NewSequence.
If the list is empty, create a new sequence by clicking the 'Create Project' icon:
2. Select NewSequence in the list, then left-click once on the name to edit it.
3. Name the sequence PickAndPlace. The project now contains one vision sequence named
PickAndPlace.
4. Click the 'Save Project' icon to save the vision project now:
5. Save the project as PickAndPlace.hsproj.
Sequence Manager toolbar
Rename the vision sequence here
Figure 29 Renaming a new vision sequence
Next you will verify the camera that you will use for the application.
Next: Verify Camera Calibration and Configuration
Verify Camera Calibration and Configuration
When the camera is correctly installed and recognized by the system it appears in the System Devices
manager, in the Cameras tab, as shown in Figure 30.
System Devices toolbar
Warning icon indicates
that camera is not calibrated
Detected camera
Green icon indicates that
camera is 'ON' (active)
and ready to grab images
Figure 30 Verifying Camera State and Calibration Status
Camera Calibration
If you have not previously calibrated the camera, a warning symbol appears to the right of the camera
State icon, as shown in Figure 30.
Choose a camera calibration method:
1. Calibrate the camera now, by launching the 2D Vision Calibration Wizard from the toolbar.
This requires a "grid of dots" calibration target.
Sample calibration targets are provided in the AdeptSight support files, in the AdeptSight
installation folder: ...\AdeptSight 2.0\Tutorials\Calibration.
2. Calibrate the camera later, through the vision to robot calibration, as explained later in this
tutorial. This will provide acceptable accuracy in most cases. However, a separate vision
calibration can provide increased accuracy to your application.
Calibrating the camera only through the vision-to-robot calibration will not correct for lens distortion. In some cases, strong lens distortion may cause the vision-to-robot calibration to fail if you do not calibrate the vision first.
For more details on this subject, see Why is Vision Calibration Important?
Camera Configuration
If you have not yet verified the quality of the images provided by the camera, you can verify and
configure the camera now.
To verify camera images:
1. In the Devices list, select the camera.
2. In the System Devices toolbar, click the 'Live Display' icon:
3. Use the Live Display window to set camera focus and lens aperture.
To modify camera properties:
1. If you need to configure other camera settings, click the 'Camera Properties' icon:
2. Refer to the camera documentation for information on setting/changing camera properties and
parameters.
You are now ready to add devices to the application.
Next: Connect to the Controller
Connect to the Controller
You will now start to set up the devices that will be used by the vision guidance application.
Adding the Controller from Adept DeskTop
If you are using AdeptSight from within Adept DeskTop, a controller device is present in the Controllers
tab, as shown in Figure 31.
You must connect to the controller to continue setting up this application.
If you have a multiple-controller license, or are creating the application outside the Adept DeskTop environment, you may have to add a controller in the Controllers tab. Consult the online User Guide if you need help adding a controller.
Red icon indicates that
controller is not connected
Figure 31 AIB Controller device displayed in the System Devices Manager
Connecting to the Controller
1. From the Adept DeskTop menu, select File > Connect...
2. Connect to the controller. Refer to the Adept DeskTop online help if needed.
3. When the controller is connected, the State icon for the controller becomes green, and a robot
is attached to the controller, as shown in Figure 32.
Controller is connected
Robot attached to the controller
Figure 32 AdeptSight connected to controller with robot
Next: Assign the Robot to the Camera
Assign the Robot to the Camera
You must now assign the robot that will be used for the vision application to the camera. Later in this tutorial you will calibrate the camera and robot together in the Vision-to-Robot calibration.
1. Select the Cameras tab.
2. Select the camera you will use for the vision guidance application.
3. In the System Devices toolbar, select the 'Add Robot' icon:
4. The Select a Robot window opens as shown in Figure 33.
Figure 33 Assigning a Robot to the Camera
5. From the list, select the robot that you will use for the vision guidance application and click OK.
6. The robot is now assigned to the selected camera in the Devices List, as shown in Figure 34.
Figure 34 Robot Assigned to the Camera
You will now need to calibrate the system using a Vision-to-Robot calibration wizard.
Next: Calibrate the Vision System to the Robot
Calibrate the Vision System to the Robot
Vision-to-Robot calibration ensures that the robot will accurately move to parts that are seen by the
camera.
The calibration enables AdeptSight to accurately transform coordinates in the camera frame of reference
to coordinates in the robot frame of reference.
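Conceptually, the calibration produces a mapping from camera coordinates to robot coordinates. The sketch below illustrates the idea with a plain 2D similarity transform; the transform model, function name, and numbers are assumptions for illustration, and the actual calibration accounts for more than this (including, when a vision calibration is done, lens distortion).

```python
import math

def vision_to_robot(pt, tx, ty, theta_deg, scale=1.0):
    """Apply a 2D similarity transform (scale, rotation, translation),
    such as a vision-to-robot calibration might produce, to map a point
    from the camera frame into the robot frame."""
    t = math.radians(theta_deg)
    x, y = pt
    rx = tx + scale * (x * math.cos(t) - y * math.sin(t))
    ry = ty + scale * (x * math.sin(t) + y * math.cos(t))
    return rx, ry

# A part seen at (50, 0) in the camera frame, with the camera frame
# translated to (300, 120) and rotated 180 degrees in the robot frame:
print(vision_to_robot((50.0, 0.0), 300.0, 120.0, 180.0))
```

The calibration wizard estimates such a transform for you by observing the same object from both frames of reference; you never enter these numbers by hand.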
To start the calibration wizard:
1. In the System Devices Manager, select the Cameras tab.
2. In the list of devices, select the robot (Robot1).
3. Click the 'Calibrate Vision to Robot' icon, as shown in Figure 35.
4. The Calibration Interview Wizard opens, starting the Vision-to-Robot calibration process. Questions in the Interview Wizard determine the type of calibration required for your system.
To carry out the calibration:
1. Follow the instructions in the wizard, then return to this tutorial once the calibration is
complete.
2. If you need help during the Calibration process, click the Help button in the Calibration Wizard.
Robot Calibration icon launches
Vision-to-Robot calibration wizard
Check mark indicates that camera is calibrated
Warning icon indicates that vision-to-robot
calibration has not been performed
Figure 35 Starting Vision-to-Robot Calibration from the Vision Manager
Next: Create a Vision Sequence
Create a Vision Sequence
A sequence is a series of tasks that are executed by vision tools. When you execute a sequence, each
tool in the sequence executes in order. You add, remove, and edit the vision tools in the Sequence
Editor.
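The execution model can be pictured as a simple pipeline: each tool runs in order and passes its output downstream. This Python sketch uses stand-in classes, not the AdeptSight object model, purely to illustrate the idea.

```python
class AcquireImage:
    """Stand-in for the Acquire Image tool: supplies an image to later tools."""
    def execute(self, context):
        context["image"] = "frame-0001"

class Locator:
    """Stand-in for the Locator tool: consumes the image, produces instances."""
    def execute(self, context):
        assert "image" in context, "Locator needs an upstream Acquire Image tool"
        context["instances"] = [{"x": 10.0, "y": 20.0, "rotation": 0.0}]

def execute_sequence(tools):
    """Run each tool in order, sharing results through a common context,
    mirroring how a vision sequence executes top to bottom."""
    context = {}
    for tool in tools:
        tool.execute(context)
    return context

result = execute_sequence([AcquireImage(), Locator()])
print(result["instances"])
```

This ordering is why the Acquire Image tool is always added first: tools lower in the sequence depend on results produced above them.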
Saving a Sequence
All sequences in the Sequence Manager are saved when you save the vision project.
• Sequences are saved as part of the project, not individually.
• Project files are saved with the extension "hsproj".
Click the Save icon to save changes you have made up to now:
Opening the Sequence Editor
To open the Sequence Editor:
1. In the Sequence Manager, select the PickAndPlace sequence.
2. In the toolbar click the 'Edit Sequence' icon. See Figure 36.
Click Edit Sequence icon
to open Sequence Editor
To edit the sequence name,
left-click once on the name and
type in the new name
Figure 36 PickAndPlace sequence in the Sequence Manager
Next: Add Tools to the Vision Sequence
Add Tools to the Vision Sequence
A vision sequence is built by adding vision tools to the sequence. These tools are added to the Process Manager area of the Sequence Editor interface. See Figure 37.
In this module you will add an image acquisition tool, called the Acquire Image tool.
The Sequence Editor Window
The Toolbox contains the tools available for building sequences.
Add tools by dragging them from the Toolbox into the Process Manager area, labeled "Drop Tools
Here".
Process Manager
Toolbox
Display area
Grid of results area
Figure 37 The Sequence Editor Interface
Adding an Acquire Image Tool
Acquire Image is the first tool to add because it supplies images to other tools in the sequence.
To add the Acquire Image tool:
1. In the Toolbox, select Acquire Image and drag it into the Process Manager area, which reads 'Drop Tools Here'.
2. The Process Manager (blue area) now contains the Acquire Image tool. See Figure 38.
3. You are now ready to acquire images.
Execute tool icon
acquires images
Camera that will provide the images
Figure 38 Acquire Image Tool Added to the Editor
Next: Acquire Image
Acquire Image
The Acquire Image tool will provide the images taken by the camera to the Locator tool.
Displaying Images
Images acquired by the tool appear in the display, as illustrated in Figure 39.
Drag here to resize image
Figure 39 Live Display of camera images in the Sequence Editor
To display acquired images:
1. In the toolbar, click the 'Execute Sequence' icon:
Alternatively, you can execute only the Acquire Image tool by clicking the 'Execute Tool'
icon:
2. If the Acquire Image tool is unable to get images from the camera, the 'Status Failed' icon
appears in the tool title bar:
In such a case, return to the Vision project window and verify that the camera is active and is
detected by the system.
3. If objects are not correctly positioned in the field of view, or if you need to adjust the camera
position, focus or aperture, open the Live display mode by clicking the 'Live Mode' icon:
4. To exit the Live display, click the 'Live Mode' icon.
5. To preview single images, click the 'Image Preview' icon:
You will now add the Locator tool to your application.
Next: Add a Locator Tool
Add a Locator Tool
The Locator tool searches for the objects you have defined in your application and returns results on the location of the objects it finds.
To add the Locator tool:
1. In the Toolbox, select Locator and drag it into the Process Manager frame, below the
Acquire Image tool, as shown in Figure 40.
Acquire Image tool
supplies image to the
Locator tool
Object models will be
added here
'Search' properties
Figure 40 Locator Tool added to the vision sequence
2. Under Location, leave the Entire Image check box enabled. This ensures that the search
process will look for objects in the entire image provided by the camera.
You are now ready to create a model for the object that will be handled by the application.
Next: Create a Model
Create a Model
To find a specific part or object, AdeptSight must have an active model of the part. You will now create
the model for the part you want to locate with this application.
Figure 41 Basic Model-Edition Mode Provides Quick Model-Building
Creating a New Model
To create a model:
1. Place the object in the camera field of view and acquire an image by executing the sequence.
2. In the Models section, click the '+' icon to create a new model. The display is now in Model
Edition mode as illustrated in Figure 41.
3. Drag and resize the green bounding box to completely enclose the object. Green outlines show the features that have been selected to add to the model.
4. Drag and rotate the yellow axes marker to position the coordinate system of the model.
5. If you need to edit the model, for example add or remove features, click Expert to enter the
Expert Model Edition mode. Refer to the online User Guide or the 'Getting Started' tutorial if
you need help for this feature.
6. Click Done to complete the creation of the model. This new model now appears as Model0 in
the list of Models.
Next: Calibrate the Gripper Offset for the Model
Calibrate the Gripper Offset for the Model
For each object model, you must carry out a gripper offset calibration to ensure that parts of this type
will be correctly gripped.
• The Gripper Offset Calibration teaches AdeptSight the position on an object to which the robot
will move to manipulate the object. If you do not carry out the Gripper Offset Calibration, the
robot may not be able to manipulate the found object.
• This calibration needs to be carried out at least once for each model.
• You must recalibrate the gripper offset for a model if the model coordinate system is modified,
or if the camera-robot setup is modified.
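The idea behind the gripper offset is a little pose arithmetic: the taught offset, expressed in the model's coordinate system, is rotated into the found part's orientation and added to its position. The helper below is hypothetical and purely illustrative; AdeptSight computes the grip target for you from the offset taught in the wizard.

```python
import math

def apply_gripper_offset(part_x, part_y, part_rot_deg,
                         off_x, off_y, off_rot_deg):
    """Compose a taught gripper offset with a located part's pose to get
    the robot's grip target. The offset is expressed in the model frame,
    so it must first be rotated by the part's orientation."""
    t = math.radians(part_rot_deg)
    gx = part_x + off_x * math.cos(t) - off_y * math.sin(t)
    gy = part_y + off_x * math.sin(t) + off_y * math.cos(t)
    return gx, gy, (part_rot_deg + off_rot_deg) % 360.0

# Part found at (100, 50) with no rotation; grip point 5 mm along the
# model's X-axis, gripper turned 90 degrees relative to the model.
print(apply_gripper_offset(100.0, 50.0, 0.0, 5.0, 0.0, 90.0))
```

This also makes clear why the offset must be recalibrated if the model's coordinate system changes: the offset is meaningless without the frame it was taught in.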
Gripper Offset indicator
- Check mark icon: Gripper Offset is calibrated
- Info icon: Gripper Offset NOT calibrated
Launch Gripper Offset Wizard from here
Figure 42 Gripper Offset Indicator for Models
Launching the Gripper Offset Calibration
A Gripper Offset indicator appears to the right of models in the list of models, as shown in Figure 42. The
Gripper Offset Calibration wizard walks you through the process of calibrating the Gripper Offset for the
model.
To start the calibration wizard:
1. Select the model from the list of models.
2. Select the 'Model Options' icon:
3. From the menu, select Gripper Offset > Manager as shown in Figure 42. This opens the
Gripper Offset Manager shown in Figure 43.
4. In the Gripper Offset Manager, select the wizard icon, as shown in Figure 43.
Launch Gripper Offset Wizard
from here
Figure 43 Gripper Offset Manager
Carrying out the Gripper Offset Calibration
The Gripper Offset Calibration is presented as a Wizard that walks you through the steps required for assigning Gripper Offsets to a Model.
1. Follow the instructions in the Wizard to complete the process. Click 'Help' for assistance during
the calibration.
2. Once the calibration is complete, the Gripper Offset is added to the Gripper Offset Manager.
3. Click Close to return to the Locator tool interface. A check mark icon indicates that the
calibration has been completed for the model:
4. Repeat the Gripper Offset Calibration for each model you create.
Before starting the Gripper Offset Wizard: Make sure you have an object that is
identical to the object used to create the model.
Next: Configure Locator Search Parameters
Configure Locator Search Parameters
Search parameters set basic constraints for the search process. This module introduces editing of the basic search parameters.
Figure 44 Configuring Search Constraints
You can leave the Search parameters at their default values and continue the tutorial. However, you may need to make some changes to the basic search parameters. Below is a brief description of these parameters. Refer to the online User Guide for more details on Search parameters.
• Instances to Find determines the maximum number of instances that can be searched for,
regardless of Model type. To optimize search time you should set this value to no more than
the expected number of instances.
• Scale: If you want to find parts that vary in scale, select Range (instead of Nominal) then
select the scale range of the objects to find.
• Rotation: If you want to find only objects positioned at a specific orientation, select Nominal (instead of Range) and set the required value. The Locator will search only for parts positioned at the defined angle of rotation.
• Min Model Recognition: Select the percentage of object contours that are required to locate
a valid object instance.
Lowering this parameter can increase recognition of occluded instances but can also lead to
false recognitions. A higher value can help eliminate instances in which objects overlap.
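The interplay of Instances to Find and Min Model Recognition can be pictured as a filter-and-cap step on candidate matches. This is a simplified sketch, not the Locator's actual algorithm, and the field names are invented for the example.

```python
def filter_instances(candidates, instances_to_find, min_recognition):
    """Keep the best candidates that meet the Min Model Recognition
    threshold, capped at Instances to Find."""
    valid = [c for c in candidates if c["recognition"] >= min_recognition]
    valid.sort(key=lambda c: c["recognition"], reverse=True)
    return valid[:instances_to_find]

candidates = [
    {"id": 1, "recognition": 0.92},
    {"id": 2, "recognition": 0.55},  # heavily occluded: rejected
    {"id": 3, "recognition": 0.81},
]
print(filter_instances(candidates, instances_to_find=2, min_recognition=0.70))
```

The sketch shows why lowering the threshold admits more occluded instances (candidate 2 would pass at 0.50) at the cost of possible false recognitions, and why capping Instances to Find at the expected count saves search time.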
Next: Run and Test the Locator
Run and Test the Locator
Now that you have created a Model and configured search parameters, you will verify that the Locator
tool finds the object in images similar to the one that was used to create the model.
Markers and ID numbers identify instances found by the Locator
Grid of results
Execution time in
status bar
Figure 45 Display and Results of Found Objects
1. Click the 'Execute Sequence' icon at the top of the window:
2. When an object is found, it is shown with a blue contour. The results for the object appear in
the grid below the display. See Figure 45.
3. Verify in the grid of results that the instance was located correctly.
4. Move the object or add more objects in the field of view.
5. The results for the found instances are updated every time you press the 'Execute Sequence'
button.
Test in Continuous Mode
1. To start a continuous running mode, click the 'Continuous Loop' icon in the toolbar:
2. Click the 'Execute Sequence' icon. The application should run in continuous mode.
3. Exit the continuous running mode by clicking 'Stop Sequence':
Next: Integrate AdeptSight with a MicroV+ Program
Integrate AdeptSight with a MicroV+ Program
To enable the robot to handle the objects found by the vision application, you now need to create or add
a MicroV+ program.
For this tutorial we have provided a sample application that instructs the robot to pick up a model-defined object at whatever position and angle it has been found in the field of view.
If your robot has a pointer-type tool instead of a gripper, the robot will point to the
located objects.
V+ Code Library
Program assigned to
Task 0
Figure 46 Adding and assigning the Micro V+ program
1. In the Adept DeskTop window, open the Code Library tab. If it is not visible, open the Code Library from the menu: select View > Code Library.
2. In the list of code examples, also called clips, select Vision-guided pick-and-place in the
AdeptSight Examples folder. See Figure 46.
3. Right-click on Vision-guided pick-and-place and select New Program.
4. In the New Program window, click Create.
5. The program is added to the Program Manager.
You must now assign the program to a task in the program execution tool.
1. Select tutorial in the Program Manager list. See Figure 46.
2. Drag it to Task 0 in the Task Manager list.
3. Execute the program.
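The control flow that the MicroV+ clip implements can be outlined as follows. This Python sketch is only a conceptual outline: the callables stand in for AdeptSight vision results and V+ motion instructions, and none of these names exist in MicroV+.

```python
def pick_and_place_cycle(get_instances, move_to, grip, release, place_pose):
    """Conceptual pick-and-place loop: for each located instance, move to
    its grip target, pick it up, move to the place pose, and release."""
    for inst in get_instances():
        move_to(inst["x"], inst["y"], inst["rotation"])
        grip()
        move_to(*place_pose)
        release()

# Drive the loop with stubs and record the resulting command stream.
log = []
pick_and_place_cycle(
    get_instances=lambda: [{"x": 10.0, "y": 20.0, "rotation": 45.0}],
    move_to=lambda *pose: log.append(("move", pose)),
    grip=lambda: log.append(("grip",)),
    release=lambda: log.append(("release",)),
    place_pose=(300.0, 0.0, 0.0),
)
print(log)
```

In the real system, the vision results arrive from AdeptSight on the PC and the motion commands are issued by the MicroV+ program running on the controller, as shown in the data-flow schema of Figure 27.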
You have completed the tutorial!
Continue learning about AdeptSight in the following tutorials:
• AdeptSight Conveyor Tracking Tutorial
• AdeptSight Standalone C# Tutorial
AdeptSight Conveyor Tracking Tutorial
This tutorial will walk you through the creation of a basic vision application that uses conveyor tracking.
This tutorial assumes you have a working knowledge of Adept robotic systems, V+, and Adept DeskTop.
If you are new to AdeptSight, we recommend that you start with the Getting Started with AdeptSight
tutorial.
System Requirements for this Tutorial
• A PC running AdeptSight and Adept DeskTop software.
• A conveyor tracking license.
• The camera provided with AdeptSight, or another DirectShow compatible camera.
• An Adept robot controlled by a CX controller
• A conveyor belt with an encoder
This tutorial illustrates a system with a Basler camera, a Cobra s-Series robot, and AdeptSight running in Adept DeskTop. Steps may differ if you are using another type of camera, or running AdeptSight as a standalone application outside Adept DeskTop.
Tutorial Overview
• Overview of the Conveyor-Tracking System
• Start an AdeptSight Vision Project
• Verify Camera Calibration and Configuration
• Connect to the Controller
• Add the Conveyor Belt
• Configure V+ to Define the Latching Signal
• Assign the Conveyor Belt to the Camera
• Assign the Robot to the Camera
• Calibrate the Vision System to the Robot and Belt
• Create a Vision Sequence
• Add the Acquire Image Tool
• Configure Latching Parameters
• Add a Locator Tool
• Create a Model
• Calibrate the Gripper Offset for the Model
• Add an Overlap Tool
• Add a Communication Tool
• Integrate AdeptSight with a V+ Program
Before Starting the Tutorial
You will need a few identical objects that you will pick with the robot. These same objects can be used to
calibrate the system, during the Belt Calibration Wizard.
Before starting this tutorial you should:
1. Install the camera. Make the required connections between the camera and controller to enable
belt encoder latching. Consult the camera documentation for more information.
For the Basler A601f and A631f connections see Table 1.
2. Install the software.
3. Calibrate the camera.
Table 1 Basler Camera Connections to the CX Controller

Connect:                                     To:
---------------------------------------------------------------------------
CX Controller XDIO Connector                 Basler Camera Pinout
24 V Output                                  Pin 7 (Out VCC Comm)
Pin 41, 42, 43, or 44

CX Controller XDIO Connector                 Basler Camera Pinout
Input Signal 1001, 1002, 1003, or 1004       Pin 4 - Output 0
Pin 1, 3, 5, or 7                            (integrate enable output)

CX Controller XDIO Connector                 CX Controller XDIO Connector
Return Signal 1001, 1002, 1003, or 1004      Ground
Pin 2, 4, 6, or 8                            Pin 47, 48, 49, or 50
Figure 47 Illustration of CX Controller /Basler Camera Pinout
Please refer to the Getting Started with AdeptSight tutorial if you need
help with any of these preliminary steps.
AdeptSight 2.0 - AdeptSight Tutorials
53
AdeptSight Conveyor Tracking Tutorial - Overview of the Conveyor-Tracking System
Overview of the Conveyor-Tracking System
In this tutorial you will set up a system that picks parts from a moving conveyor-belt.
• AdeptSight acts as a vision server that provides vision guidance to the controller.
• Vision applications are created on the PC.
• V+ programs on the controller, via Adept DeskTop, integrate the vision application with the motion control.
• AdeptSight Motion tools ensure correct part handling and communication between the vision
results and the motion control system.
• Easy-to-use AdeptSight Calibration Wizards allow you to calibrate the entire system, including
the conveyor belt, to ensure accurate part finding.
Camera
PC
AdeptSight
(vision server)
Controller
Encoder
Robot
Figure 48 Overview of a Conveyor Tracking Setup with AdeptSight - Data Flow Schema
Next: Start an AdeptSight Vision Project
Start an AdeptSight Vision Project
Vision applications are created and managed in the Vision Project manager window in AdeptSight.
Opening AdeptSight
1. From the Adept DeskTop menu, select View > AdeptSight.
2. AdeptSight opens in the Vision Project window, similar to Figure 49.
Sequence Manager
Allows you to manage and edit the sequences
that make up a vision application
System Devices Manager
Allows you to manage and set up the devices that
are used in a vision application
Figure 49 The Vision Project manager window
The Vision Project Interface
A vision project consists of one or more vision sequences, which are managed and run from the Sequence Manager, part of the Vision Project interface.
• From the Sequence Manager you open the Sequence Editor to add and configure vision
tools.
• From the System Devices Manager you add, manage and configure the camera, controllers,
robots and conveyor belts needed for your application.
Create and Name the New Sequence
You will now create and name the vision sequence that you will use for this tutorial.
1. By default, there is a blank vision sequence named 'NewSequence' in the Vision Project.
If the list is empty, create a new sequence by clicking the 'Create Project' icon:
2. Left-click once on 'NewSequence' to edit the name.
3. Name the sequence ConveyorTracking. The project now contains one vision sequence named
ConveyorTracking.
4. Click the 'Save Project' icon to save the vision project now:
5. Save the project as ConveyorTracking.hsproj.
Sequence Manager toolbar
Rename the vision sequence here
Figure 50 Renaming a new vision sequence
Next you will verify the camera that you will use for the application.
Next: Verify Camera Calibration and Configuration
Verify Camera Calibration and Configuration
When the camera is correctly installed and recognized by the system it appears in the System Devices
manager, in the Cameras tab, as shown in Figure 51.
System Devices toolbar
Warning icon indicates
that camera is not calibrated
Detected camera
Green icon indicates that
camera is 'ON' (active)
and ready to grab images
Figure 51 Verifying Camera State and Calibration Status
Camera Calibration
If you have not previously calibrated the camera, a warning symbol appears to the right of the camera
State icon, as shown in Figure 51.
Choose a camera calibration method:
1. Calibrate the camera now, by launching the 2D Vision Calibration Wizard from the toolbar.
This requires a "grid of dots" calibration target.
Sample calibration targets are provided in the AdeptSight support files, in the AdeptSight
installation folder: ...\AdeptSight 2.0\Tutorials\Calibration.
2. Calibrate the camera later, through the vision to robot calibration, as explained later in this
tutorial. This will provide acceptable accuracy in most cases. However, a separate vision
calibration can provide increased accuracy to your application.
Calibrating the camera only through the vision-to-robot calibration will not correct
for lens distortion. In some cases, strong lens distortion may cause the vision-to-robot calibration to fail if you do not calibrate the vision first.
For more details on this subject, see Why is Vision Calibration Important?
Camera Configuration
If you have not yet verified the quality of the images provided by the camera, you can verify and
configure the camera now.
To verify camera images
1. In the Devices list, select the camera.
2. In the System Devices toolbar, click the 'Live Display' icon:
3. Use the Live Display window to set camera focus and lens aperture.
To modify camera properties
1. If you need to configure other camera settings, click the 'Camera Properties' icon:
2. Refer to the camera documentation for information on setting/changing camera properties and
parameters.
You are now ready to add devices to the application.
Next: Connect to the Controller
Connect to the Controller
You will now start to set up the devices that will be used by the vision guidance application.
Adding the Controller from Adept DeskTop
If you are using AdeptSight from within Adept DeskTop, a controller device is present in the Controllers
tab, as shown in Figure 52.
You must connect to the controller to continue setting up this application.
If you have a multiple-controller license, or are creating the application outside the
Adept DeskTop environment, you may have to add a controller in the Controllers tab.
Consult the Adept DeskTop online help for assistance in adding a controller.
Red 'State' icon indicates that
controller is not connected
Figure 52 AIB Controller device displayed in the System Devices Manager
Connecting to the Controller
1. From the Adept DeskTop menu, select File > Connect...
2. Connect to the controller. Refer to the Adept DeskTop online help if needed.
3. When the controller is connected, the State icon for the controller becomes green, and a robot
is attached to the controller, as shown in Figure 53.
Controller is connected - State icon is green
Robot attached to the controller
Figure 53 AdeptSight connected to controller with robot
Next: Add the Conveyor Belt
Add the Conveyor Belt
In this step you will add the conveyor belt, and assign a controller to the belt.
To add the belt that will be used for the vision application:
1. In the System Devices manager, select the Belts tab.
2. In the System Devices toolbar, click the 'Add Belt' icon.
3. The belt is added to the list, as shown in Figure 54.
Figure 54 Conveyor Belt added to the System Devices list
You must now assign a controller to the belt and set the encoder signal.
To assign a controller and encoder to the belt:
1. Select the newly-created Belt, in the Device list.
2. In the toolbar, click the 'Add Controller' icon. This opens the Select a controller window.
3. Select the controller that will be used for the application then click OK.
4. The controller now appears in the list as shown in Figure 55.
Belt and
associated controller
Figure 55 Controller assigned to a conveyor belt
To set the encoder signal:
1. In the Device list, select the controller that is assigned to the belt.
2. Double-click in the Encoder column to edit the Encoder value. See Figure 56.
3. This tutorial uses 1 as the value of the belt encoder index.
Set Encoder
value here
Figure 56 Selecting a belt encoder value
The Encoder value depends on the configuration of the connection
between the controller and the belt, and on the encoder value defined
in the config_c utility.
Please refer to the CX Controller User Guide for more information.
Next: Configure V+ to Define the Latching Signal
Configure V+ to Define the Latching Signal
You now need to configure V+ to define which signal will latch the encoder. This is done with the V+ Configuration Utility: the config_c utility.
To open the CONFIG_C Utility:
1. In Adept DeskTop menu, select View > Debug Tools > Monitor Terminal.
2. In the Monitor Terminal window, execute CONFIG_C by typing the following:
cd util
load config_c
execute 1 a.config_c
3. The CONFIG_C utility opens in the Monitor terminal window.
To configure the CONFIG_C belt statement:
1. In the *** Adept System Configuration Program *** page, select '2' (2 => V+ Configuration System).
2. In the *** Controller Configuration Editor *** page, select '2' (2 => Edit System Configuration).
3. In the *** V+ System Configuration Editor *** page, select '9' (9 => Change ROBOT configuration).
4. The current V+ statements of the ROBOT section are then displayed, as shown in Figure 57.
5. Add a Belt definition if no belt is configured, or edit the statement if required. Refer to the
Config_C documentation if you need help for this step.
• For POS_LATCH [1], select the signal that will latch the encoder.
• For POS_LATCH [2], select '1' (1 => None).
• For LATCH_BUFFER enter '1'.
Figure 57 Editing the Belt statement in the CONFIG_C Utility
Next: Assign the Conveyor Belt to the Camera
Assign the Conveyor Belt to the Camera
In this step you will assign the conveyor belt to the camera.
The belt must be assigned to the camera before assigning the robot or other devices.
If a robot has already been assigned to the camera you must remove the assigned
robot. Once the belt is added, you can assign the robot, as explained later in this tutorial.
1. In the System Devices manager, select the Cameras tab.
2. Select the camera you will use for the vision guidance application.
3. In the System Devices toolbar, click the 'Add Belt' icon.
4. The Select a Belt dialog opens. In this dialog, select the belt and click OK.
5. The added belt now appears, assigned (attached) to the camera in the Device list as shown in
Figure 58
Figure 58 Conveyor belt assigned to the camera
Next: Assign the Robot to the Camera
Assign the Robot to the Camera
You must now assign to the camera the robot that will be used for the vision application. Later in this
tutorial you will calibrate the camera and robot together in the Vision-to-Robot calibration.
1. Select the Cameras tab.
2. Select the camera you will use for the vision guidance application.
3. In the System Devices toolbar, select the 'Add Robot' icon.
4. The Select a Robot window opens as shown in Figure 59.
Figure 59 Assigning a Robot to the Camera
5. From the list, select the robot that you will use for the vision guidance application and click OK.
6. The robot is now assigned to the selected camera in the Devices List, as shown in Figure 60.
Figure 60 Robot Assigned to the Camera
You will now need to calibrate the system using a Vision-to-Robot calibration wizard.
Next: Calibrate the Vision System to the Robot and Belt
Calibrate the Vision System to the Robot and Belt
In this module you will calibrate the system: the camera, the robot and the belt, with the appropriate
Vision-to-Robot calibration wizard.
To calibrate this application you will use the Belt Calibration Wizard.
To start the calibration wizard:
1. In the System Devices Manager, select the Cameras tab.
2. In the list of devices, select the robot that is assigned to the camera (Robot1).
3. Click the 'Calibrate Vision to Robot' icon:
4. The Calibration Interview Wizard opens, beginning the Vision-to-Robot calibration process.
Questions in the Interview Wizard determine the type of calibration required for your system.
To carry out the calibration:
1. In the Interview wizard, make sure you select options that specify that you are using a
conveyor belt, as shown in Figure 61. This will select the Belt Calibration Wizard.
2. Follow the instructions in the wizard, then return to this tutorial once the calibration is
complete.
3. If you need help during the calibration process, click the Help button in the Calibration Wizard.
Figure 61 Selecting Belt-Conveyor option in the Calibration Interview Wizard
Next: Create a Vision Sequence
Create a Vision Sequence
A sequence is a series of tasks that are executed by vision tools. When you execute a sequence, each
tool in the sequence executes in order. You add, remove, and edit the vision tools in the Sequence
Editor.
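The execution model described above can be sketched in a few lines of Python. This is an illustration of the concept only; the names below are not part of the AdeptSight API:

```python
def run_sequence(tools, grab_image):
    """Execute each vision tool in order; each tool's output feeds the next."""
    data = grab_image()       # e.g. the Acquire Image step supplies the image
    for tool in tools:
        data = tool(data)     # e.g. Locator, then Overlap Tool, then Communication Tool
    return data
```

Each tool consumes the result of the tool above it in the sequence, which is why tool order in the Sequence Editor matters.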
Saving a Sequence
All sequences in the Sequence Manager are saved when you save the vision project.
• Sequences are saved as part of the project, not individually.
• Project files are saved with the extension "hsproj".
Click the 'Save Project' icon to save changes you have made up to now:
Opening the Sequence Editor
To open the Sequence Editor:
1. In the Sequence Manager, select the ConveyorTracking sequence.
2. In the toolbar click the 'Edit Sequence' icon:
3. The Sequence Editor opens, similar to Figure 62.
Figure 62 New vision sequence in the Sequence Manager
In this tutorial you will add the following tools to the sequence:
• Acquire Image
• Locator
• Overlap Tool
• Communication Tool
Next: Add the Acquire Image Tool
Add the Acquire Image Tool
The Acquire Image tool is the first tool to add because it supplies images to other tools in the
sequence.
The Acquire Image tool will provide the images taken by the camera to the Locator tool and define the
latching parameters for the conveyor belt.
To add the Acquire Image tool:
1. In the Toolbox, select Acquire Image and drag it into the Process Manager area that reads
'Drop Tools Here'.
2. The Process Manager (blue area) now contains the Acquire Image tool. See Figure 63.
3. You are now ready to acquire images.
To display acquired images:
1. In the toolbar, click the 'Execute Sequence' icon.
2. Alternatively, you can execute only the Acquire Image tool by clicking the 'Execute Tool'
icon:
To preview live images grabbed by the camera:
1. Click the 'Live Mode' icon:
To exit the Live display, click the 'Live Mode' icon again.
2. To preview single images grabbed by the camera, click the 'Image Preview' icon:
Live Mode and Image Preview modes do not execute the Acquire Image tool; they only display
input from the camera.
Figure 63 Live Display of camera images in the Sequence Editor
You will now configure latching parameters.
Next: Configure Latching Parameters
Configure Latching Parameters
In a previous step, you configured V+ to define which signal will latch the encoder, using the config_c
utility.
You must now set the corresponding latching parameter in the Acquire Image Tool.
To set the latching parameters:
1. Under Latching Parameters there are two sections: Robot and Belt.
2. Under Belt, select the Read Latched Value option (add a check mark).
3. Under Robot, do not select either option (leave the check marks blank).
Figure 64 Latching Parameters of the Acquire Image tool
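To see why the latched encoder value matters, consider how a conveyor-tracking application works out where a part is now from where vision saw it. The sketch below shows the idea only; the function name and the mm-per-tick value are illustrative, not AdeptSight internals:

```python
def part_position_now(x_at_latch_mm, latched_ticks, current_ticks, mm_per_tick):
    """Shift a vision result downstream by the belt travel since the image was latched.

    The encoder value latched at image-acquisition time ties the vision result
    to a known point on the belt; the rest is simple arithmetic.
    """
    travel_mm = (current_ticks - latched_ticks) * mm_per_tick
    return x_at_latch_mm + travel_mm
```

For example, a part seen at 100 mm when the encoder latched 1000 ticks sits at 150 mm once the encoder reads 1500 ticks, at 0.1 mm per tick.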
You will now add the Locator tool to your application.
Next: Add a Locator Tool
Add a Locator Tool
The Locator tool searches for the objects you have defined in your application and returns results on
the location of the objects it finds.
Acquire Image tool
supplies image to the
Locator tool
Object models will be
added here
Figure 65 Locator Tool added to the vision sequence
To add the Locator tool:
1. In the Toolbox, select Locator and drag it into the Process Manager frame, below the
Acquire Image tool, as shown in Figure 65.
2. Under Location, leave the Entire Image check box enabled. This ensures that the search
process will look for objects in the entire image provided by the camera.
You are now ready to create a model for the object that will be handled by the application.
Next: Create a Model
Create a Model
To find a specific part or object, AdeptSight must have an active model of the part. You will now create
the model for the part you want to locate with this application.
Figure 66 Basic Model-Edition Mode Provides Quick Model-Building
To create a model:
1. Place the object in the camera field of view and acquire an image by executing the Acquire
Image tool.
2. In the Models section, click the '+' icon to create a new model. The display is now in Model
Edition mode as illustrated in Figure 66.
3. Drag and resize the green bounding box to completely enclose the object. Green outlines
show the features that have been selected to add to the model.
4. Drag and rotate the yellow axes marker to position the coordinate system of the model.
5. If you need to edit the model, for example add or remove features, click Expert to enter the
Expert Model Edition mode. Refer to the online User Guide or the 'Getting Started' tutorial if
you need help for this feature.
6. Click Done to complete the creation of the model. This new model now appears as Model0 in
the list of Models.
Next: Calibrate the Gripper Offset for the Model
Calibrate the Gripper Offset for the Model
For each object model, you must carry out a gripper offset calibration that will enable AdeptSight to
correctly pick up or move to the part.
• The Gripper Offset Calibration teaches AdeptSight the position on an object to which the robot
will move to manipulate the object. If you do not carry out the Gripper Offset Calibration, the
robot may not be able to manipulate the found object.
• This calibration needs to be carried out at least once for each model.
• You must recalibrate the gripper offset for a model if the model frame of reference is modified
or the camera-robot setup is modified.
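Conceptually, the gripper offset is a fixed transform from the model's frame of reference to the grip point, so it rotates with the part. A minimal 2D sketch of the idea (illustrative names, not the AdeptSight implementation):

```python
import math

def apply_gripper_offset(found_x, found_y, found_deg, off_x, off_y, off_deg):
    """Compose a found-object pose with a gripper offset expressed in the model frame."""
    t = math.radians(found_deg)
    # Rotate the offset into the world frame before adding it to the found position.
    grip_x = found_x + off_x * math.cos(t) - off_y * math.sin(t)
    grip_y = found_y + off_x * math.sin(t) + off_y * math.cos(t)
    return grip_x, grip_y, found_deg + off_deg
```

Because the offset is defined relative to the model's frame, moving that frame (or changing the camera-robot setup) invalidates the stored offset, which is why recalibration is required in those cases.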
Gripper Offset indicator
- check mark icon: Gripper Offset is calibrated
- "info icon": Gripper Offset NOT calibrated
Launch Gripper Offset Wizard from here
Figure 67 Gripper Offset Indicator for Models
A Gripper Offset indicator appears to the right of models in the list of models, as shown in Figure 67. The
Gripper Offset Calibration wizard walks you through the process of calibrating the Gripper Offset for the
model.
To start the calibration wizard:
1. Select the model from the list of models.
2. Select the 'Model Options' icon:
3. From the menu, select Gripper Offset > Manager as shown in Figure 67. This opens the
Gripper Offset Manager shown in Figure 68.
4. In the Gripper Offset Manager, select the wizard icon, as shown in Figure 68.
Launch Gripper Offset Wizard
from here
Figure 68 Gripper Offset Manager
Carrying out the Gripper Offset Calibration
The Gripper Offset Calibration is presented as a Wizard that walks through the steps required for
assigning Gripper offsets to a Model.
1. Follow the instructions in the Wizard to complete the process. Click 'Help' for assistance during
the calibration.
2. Once the calibration is complete, the Gripper Offset is added to the Gripper Offset Manager.
3. Click Close to return to the Locator tool interface. A check mark icon indicates that the
calibration has been completed for the model.
4. Repeat the Gripper Offset Calibration for each model you create.
Before starting the Gripper Offset Wizard: Make sure you have an object that is
identical to the object used to create the model.
Next: Configure Locator Search Parameters
Configure Locator Search Parameters
Search parameters set basic constraints for the search process. This module introduces you to editing of
the basic search parameters.
Figure 69 Configuring Search Constraints
You can leave the Search parameters at their default values and continue the tutorial.
However, you may need to make some changes to the basic search parameters. Below is a brief
description of these parameters. Refer to the online User Guide for more details on Search parameters.
• Instances to Find determines the maximum number of instances that can be searched for,
regardless of Model type. To optimize search time you should set this value to no more than
the expected number of instances.
• Scale: If you want to find parts that vary in scale, select Range (instead of Nominal) then
select the scale range of the objects to find.
• Rotation: If you want to find only objects positioned at a specific orientation, select Nominal
(instead of Range) and set the required value. The Locator will search only for parts
positioned in the defined angle of rotation.
• Min Model Recognition: Select the percentage of object contours that are required to locate
a valid object instance.
Lowering this parameter can increase recognition of occluded instances but can also lead to
false recognitions. A higher value can help eliminate instances in which objects overlap.
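The trade-off behind Min Model Recognition can be expressed as a simple coverage test. This is a sketch of the concept only, not the Locator's actual algorithm:

```python
def passes_recognition(matched_contour_px, model_contour_px, min_recognition_pct):
    """Accept an instance only if enough of the model contour was matched in the image."""
    coverage = 100.0 * matched_contour_px / model_contour_px
    return coverage >= min_recognition_pct

# A half-occluded part (50% of its contour visible) is accepted at a 40%
# threshold but rejected at a 70% threshold.
```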
Next: Run and Test the Locator
Run and Test the Locator
Now that you have created a Model and configured search parameters, you will verify that the Locator
tool finds the object in images similar to the one that was used to create the model.
Grid of results
Execution time in
status bar
Figure 70 Display and Results of Found Objects
1. Click the 'Execute Sequence' icon at the top of the window:
2. When an object is found, it is shown with a blue contour. The results for the object appear in
the grid below the display. See Figure 70.
3. Verify in the grid of results that the instance was located correctly.
4. Move the object or add more objects in the field of view.
5. The results for the found instances are updated every time you press the 'Execute Sequence'
button.
Test in Continuous Mode
1. To start a continuous running mode, click the 'Continuous Loop' icon in the toolbar:
2. Click 'Execute Sequence' icon. The application should run in continuous mode:
3. Exit the continuous running mode by clicking the 'Stop Sequence' icon:
Next: Add an Overlap Tool
Add an Overlap Tool
You will now add an Overlap tool to the sequence. The purpose of the Overlap Tool is to make sure that
parts moving on the belt are recognized only once.
• Because a part found by the Locator may be present in many images acquired by the camera,
the Overlap tool ensures that the robot is not instructed to pick up the same part more than
once.
• The Overlap tool requires input from the Locator tool.
Acquire Image
provides the input for
the Locator tool
Locator provides the input
for the Overlap Tool
Figure 71 Adding an Overlap Tool to the sequence
1. In the Toolbox, under Motion Tools, select Overlap Tool and drag it into the Process
Manager frame, below the Locator tool, as shown in Figure 71.
2. Under Input, leave Locator as the input provider.
3. Under Advanced Parameters, leave Encoder Ticks selected.
4. Test the Overlap Tool by executing the sequence:
• When an instance is found by the Locator for the first time, the object is highlighted in blue in
the image.
• In the following executions, the instances recognized by the Overlap Tool are highlighted in
red.
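The deduplication the Overlap Tool performs can be sketched as follows. Positions are expressed in a belt-relative frame, so the same physical part keeps roughly the same coordinates from image to image; the function and tolerance below are illustrative, not AdeptSight internals:

```python
def filter_new_instances(candidates, seen, tol=50.0):
    """Keep only instances that were not already reported from an earlier image.

    candidates: (x, y) belt-frame positions found in the latest image
    seen: belt-frame positions of parts already reported (updated in place)
    tol: distance under which two detections count as the same physical part
    """
    new = []
    for cx, cy in candidates:
        if not any(abs(cx - sx) < tol and abs(cy - sy) < tol for sx, sy in seen):
            new.append((cx, cy))
            seen.append((cx, cy))
    return new
```

A part detected at (100, 5) in one image and (110, 6) in the next is reported once; a genuinely new part further down the belt is reported normally.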
Next you will add the Communication Tool.
Next: Add a Communication Tool
Add a Communication Tool
You will now add a Communication Tool to the vision sequence. The Communication Tool provides
instructions to the controller for handling the vision results provided by the Overlap tool.
Acquire Image tool
provides the input for the Locator tool
Locator provides the input
for the Overlap Tool
Overlap Tool provides the input
for the Communication Tool
Select the robot that is used
by the application
Figure 72 Adding a Communication Tool to the sequence
1. In the Toolbox, under Motion Tools, select Communication Tool and drag it into the
Process Manager frame, below the Overlap tool, as shown in Figure 72.
2. In the Input text box, leave Overlap Tool as the input provider.
3. In the Robot text box, select the robot used by this application, as shown in Figure 72.
Next: Integrate AdeptSight with a V+ Program
Integrate AdeptSight with a V+ Program
To enable the robot to handle the objects found by the vision application, you now need to create or add
a V+ program.
For this tutorial we have provided a sample application that instructs the robot to pick up a model-defined object on a conveyor belt.
Select and open
new program with
the Vision-guided belt
tracking V+ example
belt_demo program
assigned to Task 0
Figure 73 Adding and assigning the Micro V+ program
1. In the Adept DeskTop window, open the Code Library tab. If it is not visible, open the Code
Library from the menu: select View > Code Library.
2. In the list of code examples, also called clips, select Vision-guided conveyor tracking from
the AdeptSight Examples folder. See Figure 73.
3. Right-click on Vision-guided belt tracking and select New Program.
4. In the New Program window, click Create.
5. This adds the belt_demo and other dependent programs to the Program Manager.
6. Important: Before continuing, in the new program, set the value of $myip to the IP
address of the vision server (the PC on which AdeptSight is running).
You must now assign the program to a task in the program execution tool.
1. In the Program Manager list, select the belt_demo program. See Figure 73.
2. Drag belt_demo to Task 0 in the Task Manager list.
3. Execute the program.
You have completed the tutorial!
Continue learning about AdeptSight in the following tutorials:
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Standalone C# Tutorial
AdeptSight Upward-Looking Camera Tutorial
This tutorial will walk you through the creation of a basic vision application on a system in which the
camera faces upwards, and in which the robot picks up objects and moves them to the camera. In this
example, once an object is recognized by the vision application, the robot places the object in a pallet-type
array.
This tutorial assumes you have a working knowledge of Adept robotic systems, V+, and Adept DeskTop.
If you are new to AdeptSight, we recommend that you start with the Getting Started with AdeptSight
tutorial.
System Requirements for this Tutorial
• A PC running AdeptSight and Adept DeskTop software.
• The camera provided with AdeptSight, or another DirectShow-compatible IEEE 1394 camera.
• The camera must face upwards.
• An Adept robot controlled by a CX controller or a Cobra i-Series robot with AIB controller.
• An end-effector that can pick up and move objects into the camera field of view.
This tutorial illustrates a system with a Basler camera, a Cobra s-Series robot, and
AdeptSight running in Adept DeskTop. Steps may differ if you are using another type of
camera, or running Adept DeskTop from a standalone application.
Tutorial Overview
• Start an AdeptSight Vision Project
• Calibrating the Camera
• Connect to the Controller
• Assign the Robot to the Camera
• Calibrate the Vision System to the Robot
• Create a Vision Sequence
• Add the Acquire Image Tool
• Configure Latching Parameters
• Add a Locator Tool
• Create a Model
• Calibrate the Place Location for the Model
• Integrate AdeptSight with a V+ Program
Before Starting the Tutorial
You will need an object that you will pick up with the robot. This object will also be used to calibrate the
system, during the Object-Attached-To-Robot Calibration Wizard.
Before starting this tutorial you should:
1. Install the camera. Make the required connections between the camera and controller to enable
belt encoder latching. Consult the camera documentation for more information.
2. Install the software.
Please refer to the Getting Started with AdeptSight tutorial or the
AdeptSight Online Help for assistance with any of these preliminary steps.
Start an AdeptSight Vision Project
You will now create the vision project for this tutorial in the AdeptSight Vision Project manager window.
Opening AdeptSight
1. From the Adept DeskTop menu, select View > AdeptSight.
2. AdeptSight opens in the Vision Project window, similar to Figure 74.
Sequence Manager
Allows you to manage and edit the sequences
that make up a vision application
System Devices Manager
Allows you to manage and set up the devices that
are used in a vision application
Figure 74 The Vision Project manager window
Create and Name the New Sequence
You will now create and name the vision sequence that you will use for this tutorial.
1. By default, there is a blank vision sequence named NewSequence in the Vision project.
If the list is empty, create a new sequence by clicking the 'Create Project' icon:
2. Left-click once on 'NewSequence' to edit the name.
3. Name the sequence UpwardFacing. The project now contains one vision sequence named
UpwardFacing.
4. Click the 'Save Project' icon to save the vision project now:
5. Save the project as UpwardFacing.hsproj.
Sequence Manager toolbar
Rename the vision sequence here
Figure 75 Renaming a new vision sequence
Next you will verify the camera that you will use for the application.
Next: Calibrating the Camera
Calibrating the Camera
When the camera is correctly installed and recognized by the system it appears in the System Devices
manager, in the Cameras tab, as shown in Figure 76.
System Devices toolbar
Warning icon indicates
that camera is not calibrated
Green icon indicates that
camera is 'ON' (active)
and ready to grab images
Detected camera
Figure 76 Verifying Camera State and Calibration Status
If you have not previously calibrated the camera, a warning symbol appears to the right of the camera
State icon, as shown in Figure 76.
Choose a camera calibration method:
1. Calibrate the camera now, by launching the 2D Vision Calibration Wizard from the toolbar.
This requires a "grid of dots" calibration target.
Sample calibration targets are provided in the AdeptSight support files, in the AdeptSight
installation folder: ...\AdeptSight 2.0\Tutorials\Calibration.
2. Calibrate the camera later, through the vision to robot calibration, as explained later in this
tutorial. This will provide acceptable accuracy in most cases. However, a separate vision
calibration can provide increased accuracy to your application.
Calibrating the camera only through the vision-to-robot calibration will not correct
for lens distortion. In some cases, strong lens distortion may cause the vision-to-robot calibration to fail if you do not calibrate the vision first.
For more details on this subject, see Why is Vision Calibration Important?
Next: Calibrating the Camera With the 2D Vision Calibration,
or skip and continue to Connect to the Controller
Calibrating the Camera With the 2D Vision Calibration
To calibrate an upward-looking camera, the calibration target must be placed at the end of the robot end-effector, parallel to the camera.
To correctly position a calibration target for the camera calibration:
1. Fix the target at the end of the end effector so that it is stable and parallel to the camera field
of view, with the calibration pattern facing towards the camera.
2. Make sure the camera-to-target height is equal to the camera-to-object height. The camera-to-object height is the distance between the camera and an object at the moment that the camera
takes an image of the object.
Before starting the calibration make sure:
1. You have available one or more objects that will be used as calibration objects.
2. You know the camera-to-object height that will be used during the real-time application for
which you are calibrating the system.
The camera-to-object height that you will set during the calibration must be the same as the
distance that will be used during the real-time application.
Calibration Target
Camera-to-target distance
must be same as
Camera-to-object distance
Camera
Figure 77 Camera-to-Target Distance for Calibrating an Upwards-Facing Camera
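The height requirement follows from the pinhole-camera model: the millimetres-per-pixel scale on the object plane is proportional to the camera-to-object distance, so calibrating at one height and grabbing images at another introduces a proportional scale error. A quick illustration (the focal length and heights are made-up numbers):

```python
def mm_per_pixel(focal_px, height_mm):
    """Object-plane scale of a pinhole camera: mm per image pixel at a given distance."""
    return height_mm / focal_px

# Calibrating at 300 mm but grabbing images at 330 mm gives a 10% scale error:
scale_calibrated = mm_per_pixel(focal_px=1000.0, height_mm=300.0)
scale_actual = mm_per_pixel(focal_px=1000.0, height_mm=330.0)
error_pct = 100.0 * (scale_actual - scale_calibrated) / scale_calibrated
```

Every position the vision system reports would then be off by the same 10%, which is why the target must be fixed at the working height.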
To start the camera calibration:
1. In the Cameras tab of the System Devices manager, select the camera you want to calibrate in
the Devices list.
2. Click the 'Calibrate Camera' icon:
3. The 2D Vision Calibration Wizard opens, beginning the vision (camera) calibration process.
4. Follow the instructions in the wizard, then return to this tutorial once the calibration is finished.
5. If you need help during the calibration process, click Help in the Calibration Wizard.
Next: Connect to the Controller
Connect to the Controller
You will now start to set up the devices that will be used by the vision guidance application.
Adding the Controller from Adept DeskTop
If you are using AdeptSight from within Adept DeskTop, a controller device is present in the Controllers
tab, as shown in Figure 78.
You must connect to the controller to continue setting up this application.
If you have a multiple-controller license, or are creating the application outside the
Adept DeskTop environment, you may have to add a controller in the Controllers tab.
Consult the Adept DeskTop online help for assistance in adding a controller.
Red 'State' icon indicates that
controller is not connected
Figure 78 AIB Controller Device Displayed in the System Devices Manager
Connecting to the Controller
1. From the Adept DeskTop menu, select File > Connect...
2. Connect to the controller. Refer to the Adept DeskTop online help if needed.
3. When the controller is connected, the State icon for the controller becomes green, and a robot
is attached to the controller, as shown in Figure 79.
Controller is connected - State icon is green
Robot attached to the controller
Figure 79 AdeptSight connected to controller with robot
Next: Assign the Robot to the Camera
Assign the Robot to the Camera
You must now assign to the camera the robot that will be used for the vision application. Later in this
tutorial you will calibrate the camera and robot together in the Vision-to-Robot calibration.
1. Select the Cameras tab.
2. Select the camera you will use for the vision guidance application.
3. In the System Devices toolbar, select the 'Add Robot' icon.
4. The Select a Robot window opens as shown in Figure 80.
Figure 80 Assigning a Robot to the Camera
5. From the list, select the robot that you will use for the vision guidance application and click OK.
6. The robot is now assigned to the selected camera in the Devices List, as shown in Figure 81.
Figure 81 Robot Assigned to the Camera
You will now need to calibrate the system using a Vision-to-Robot calibration wizard.
Next: Calibrate the Vision System to the Robot
Calibrate the Vision System to the Robot
In this module you will calibrate the system. To calibrate this application you will use the Object-Attached-to-Robot Calibration Wizard.
To start the calibration wizard:
1. In the System Devices manager, select the Cameras tab.
2. In the list of devices, select the robot that is assigned to the camera (Robot1).
3. Click the 'Calibrate Vision to Robot' icon:
4. The Calibration Interview Wizard opens, beginning the Vision-to-Robot calibration process.
Questions in the Interview Wizard determine the type of calibration required for your system.
In the Calibration Interview Wizard, make sure you select the option that specifies that the calibration object
will be attached to the robot tool (end-effector).
Figure 82 Selecting Object-Attached-To-Robot options in the Calibration Interview Wizard
To select the Object-Attached-to Robot Calibration Wizard:
1. In the Calibration Interview Wizard, at the Choose Interview Mode step, select: I wish to
select the correct calibration options from a list.
2. Under section 1, select: The camera is field-mounted relative to the robot base.
3. Under section 2, select: The workspace is not a conveyor belt.
4. Under section 3, select: The calibration object will be attached to the robot tool.
5. Under section 4, select: The robot is equipped with a tool that can pick up and move an object.
6. Under section 5, select the appropriate option.
• If the robot is able to move freely in the workspace, an Automated calibration will be selected.
In this case the robot will automatically carry out part of the calibration.
• Otherwise, a Manual calibration will be selected. In this case you need to manually move the
robot to various parts of the camera field of view to gather calibration data.
7. Under section 6, select the appropriate option.
• If the end-effector is centered on the tool flange, the calibration process will be quicker
because the calibration wizard will not have to calculate the offset between the end-effector and
the robot tool flange.
• If the end effector is not centered on the tool flange, the calibration process will have to
correct for the offset between the end-effector gripper and the robot tool flange. If the
calibration is a Manual calibration (see step 6.), you will have to manually move the robot to
gather the several calibration points required for the offset correction.
Using the Object-Attached-to Robot Calibration Wizard
1. Follow the instructions in the wizard, then return to this tutorial once the calibration is
complete.
2. If you need help during the Calibration process, click the Help button in the Calibration Wizard.
Next: Create a Vision Sequence
Create a Vision Sequence
A sequence is a series of tasks that are executed by vision tools. When you execute a sequence, each
tool in the sequence executes in order. You add, remove, and edit the vision tools in the Sequence
Editor.
Saving a Sequence
All sequences in the Sequence Manager are saved when you save the vision project.
• Sequences are saved as part of the project, not individually.
• Project files are saved with the extension "hsproj".
Click the 'Save Project' icon to save changes you have made up to now:
Opening the Sequence Editor
To open the Sequence Editor:
1. In the Sequence Manager, select the UpwardFacing sequence.
2. In the toolbar click the 'Edit Sequence' icon:
3. The Sequence Editor opens, similar to Figure 83.
Figure 83 New vision sequence in the Sequence Manager
In this tutorial you will add the following tools to the sequence:
• Acquire Image: This tool acquires images from the camera and outputs the acquired images
for use by other tools in the sequence.
• Locator: The Locator will find an object in input images, based on the object model that you
will create for the object.
Next: Add the Acquire Image Tool
Add the Acquire Image Tool
The Acquire Image tool is the first tool to add because it supplies images to other tools in the
sequence.
The Acquire Image tool will provide the images taken by the camera to the Locator tool and define the
latching parameters for the conveyor belt.
To add the Acquire Image tool:
1. In the Toolbox, select Acquire Image and drag it into the Process Manager area that reads
'Drop Tools Here'.
2. The Process Manager (blue area) now contains the Acquire Image tool. See Figure 84.
3. You are now ready to acquire images.
To display acquired images:
1. In the toolbar, click the 'Execute Sequence' icon. Alternatively, you can execute only the
Acquire Image tool by clicking the 'Execute Tool' icon:
Figure 84 Live Display of camera images in the Sequence Editor
You will now configure latching parameters.
Next: Configure Latching Parameters
Configure Latching Parameters
You must now set a robot-latching parameter in the Acquire Image tool. This latching parameter enables
the V+ system to latch the location of the robot when an image is grabbed.
To set the latching parameters:
1. Under Latching Parameters there are two sections: Robot and Belt.
2. Under Robot, enable Read Value if:
The robot comes to a full stop when the camera is taking an image, and/or the robot
movement is very slow and a slight error in the location is not critical to the application.
This latching mode does not require a cable for latching the signal.
3. Under Robot, enable Read Latched Value if:
The robot is in continuous movement when it passes the inspected object through the camera
field of view (on-the-fly inspection).
This latching mode requires a cable for latching the signal.
Figure 85 Latching Parameters of the Acquire Image tool
You will now add the Locator tool to your application.
Next: Add a Locator Tool
Add a Locator Tool
The Locator tool searches for the objects you have defined in your application and returns results on
the location of the objects it finds.
The Acquire Image tool supplies the image to the Locator tool. The object model will be added here.
Figure 86 Locator Tool added to the vision sequence
To add the Locator tool:
1. In the Toolbox, select Locator and drag it into the Process Manager frame, below the Acquire
Image tool, as shown in Figure 86.
2. Under Location, leave the Entire Image check box enabled. This ensures that the search
process will look for objects in the entire image provided by the camera.
You are now ready to create a model for the object that will be handled by the application.
Next: Create a Model
Create a Model
To find a specific part or object, AdeptSight must have an active model of the part. You will now create
the model for the part you want to locate with this application.
Figure 87 Basic Model-Edition Mode Provides Quick Model-Building
To create a model:
1. With the robot, grip the object that you will handle with this application.
2. Move the robot so that the object is in the field of view of the camera.
3. Execute the sequence to acquire an image, by clicking the 'Execute Sequence' icon:
4. In the Models section, click the '+' icon to create a new model. The display is now in Model
Edition mode as illustrated in Figure 87.
5. Drag and resize the green bounding box to completely enclose the entire object. Green outlines
show the features that have been selected to add to the model.
6. Drag and rotate the yellow axes marker to position the coordinate system of the model.
7. If you need to edit the model, for example to add or remove features, click Expert to enter the
Expert Model Edition mode. Refer to the online User Guide or the 'Getting Started' tutorial if
you need help for this feature.
8. Click Done to complete the creation of the model. This new model now appears as Model0 in
the list of Models.
Next: Calibrate the Place Location for the Model
Calibrate the Place Location for the Model
For each object model, you must carry out a place location calibration that will enable the application to
correctly place the part in its final place location after it has been found. This is done through the Place
Location Wizard.
• The Place Location Wizard teaches AdeptSight the location in the workspace where the robot
will place the object after it has been correctly found and identified in the camera image.
• If you do not carry out the Place Location calibration, the robot may not be able to precisely
and accurately place the part in its correct location; in that case you will need to manage and
define the place location with a V+/MicroV+ program.
• This calibration must be carried out at least once for each model.
• You must recalibrate the place location for a model if the model frame of reference is modified
or the camera-robot setup is modified.
Gripper Offset indicator
- check mark icon: Gripper Offset is calibrated
- "info icon": Gripper Offset NOT calibrated
Launch Gripper Offset Wizard from here
Figure 88 Gripper Offset Indicator for Models
To start the Place Location Wizard:
1. Select the model from the list of models.
2. Select the 'Model Options' icon:
3. From the menu, select Gripper Offset > Manager as shown in Figure 88. This opens the
Gripper Offset Manager shown in Figure 89.
4. In the Gripper Offset Manager, select the wizard icon, as shown in Figure 89.
5. This will automatically choose the Place Location Wizard because AdeptSight will detect that the
system was calibrated for an upwards-facing camera.
Launch Gripper Offset Wizard from here
Figure 89 Gripper Offset Manager
Carrying out the Place Location Calibration
The Place Location Wizard walks you through the steps required for assigning a place location to a model.
1. Follow the instructions in the Wizard to complete the process. Click 'Help' for assistance during
the calibration.
2. Once the calibration is complete, the Place Location is added to the Gripper Offset Manager.
3. Click Close to return to the Locator tool interface. A check mark icon indicates that the
calibration has been completed for the model.
4. Repeat the Place Location Wizard for each model you create.
Before starting the Gripper Offset Wizard: Make sure you have an object that is
identical to the object used to create the model.
Next: Integrate AdeptSight with a V+ Program
Integrate AdeptSight with a V+ Program
To enable the robot to handle the objects found by the vision application, and to test the application,
you now need to create or add a V+ program that will run the application.
Refer to the AdeptSight Pick-and-Place Tutorial for information on integrating this application with V+.
The program used for the Pick-and-Place tutorial can be adapted to the upward-facing application by
switching the OPEN and CLOSE commands.
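As a rough illustration only, the place portion of such an adapted program might look like the following sketch. The program name and the location variable (part.loc) are hypothetical placeholders, not names from the tutorial; adapt the actual program supplied with the Pick-and-Place Tutorial.

```
; Hypothetical V+ sketch of the place motion for the upward-facing
; application. part.loc stands for the place location computed from
; the AdeptSight results.
.PROGRAM place.part()
    APPRO part.loc, 50   ; approach 50 mm above the place location
    MOVES part.loc       ; move straight to the place location
    OPENI                ; release the part (CLOSEI in the pick-and-place version)
    DEPART 50            ; retract 50 mm
.END
```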
You have completed the tutorial!
Continue learning about AdeptSight in the following tutorials:
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Standalone C# Tutorial
AdeptSight Standalone C# Tutorial
Welcome to the AdeptSight Standalone C# Tutorial.
This tutorial will guide you through the development of a standalone AdeptSight application in Visual C#.
As you follow the steps for the tutorial you will build an object location application to which you will add
and configure a full range of inspection tools.
This tutorial uses as an example a system with Microsoft Visual Studio 2005 and AdeptSight 2.0.
Tutorial Overview
• Create the AdeptSight Vision Project
• Build the Program Interface
• Add Basic Code to the Application Form
• Create a Vision Inspection Sequence
• Create a Model with the Locator Tool
• Add Code for the Locator
• Test the Application
• Add and Configure a Display Interface
• Add a Caliper Tool
• Add Code for the Caliper
• Add a Blob Analyzer Tool
• Add Code for the Blob Analyzer
• Add a Pattern Locator Tool
• Add Code for the Pattern Locator
• Add Two Edge Locator Tools
• Add Code for the Edge Locators
System Requirements
• PC running Windows 2000 SP4 or Windows XP SP2
• Microsoft Visual Studio 2005
The type of PC processor will influence the execution speed of the vision
applications. This tutorial presumes you have a basic knowledge of
Microsoft Visual C#.
Create the AdeptSight Vision Project
This first step shows you how to build a basic AdeptSight application that will locate a model-defined
object, at whatever angle and displacement the object appears.
Creating the Project
In this section, you will create the Project of the application, add basic lines of code to interact with the
interface, and add an AdeptSight VisionProjectControl, which you will edit to build the application.
Constructing the Project
You will now build the project that will allow you to specify which type of program you want.
1. Start Microsoft Visual Studio .NET 2005.
2. Create a new Visual C# Windows Application (exe) project.
3. Name it HookInspection and specify a working folder in the location field.
4. Click OK.
You can already compile this basic project, but you must add some lines of code and create the interface
for your application to make this a useful project.
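For reference, the project created by the template already contains an entry point similar to the following sketch. The class and form names shown are the Visual Studio 2005 defaults and may differ in your project, particularly after you rename the form later in this tutorial.

```csharp
using System;
using System.Windows.Forms;

static class Program
{
    /// <summary>
    /// The main entry point for the application,
    /// as generated by the Visual Studio 2005 template.
    /// </summary>
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault( false );
        Application.Run( new Form1() );
    }
}
```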
Next: Build the Program Interface
Build the Program Interface
In this step you will build the interface that will allow you to interact with the application and visualize
results.
Figure 90 Visual C# Hook Inspection Application Form
1. Click on the Solution Explorer tab, select the file named Form1.cs, and rename it to
HookInspection.cs. Then double-click on the file to edit the form template.
2. Resize the form template to approximately 810 x 570 pixels.
3. On the form template, remove all existing dialog items and add the appropriate controls to
make the form template look like the one in Figure 90.
4. The top left control is the main AdeptSight control. It is called the VisionProjectControl. It
can be added to the form by following these steps:
a. From the Toolbox context menu, select the Add/Remove Items … command.
b. From the .NET Framework Components tab, select VisionProjectControl component.
c. Click OK to accept selection.
d. Select VisionProjectControl from the Toolbox and paste one instance on the form.
e. From Properties, rename the newly created control to mVisionProjectControl.
5. The top right control is also an AdeptSight control. It is called the Display. It can be added to
the form by following these steps:
a. From the Toolbox context menu, select Add/Remove Items … command.
b. From the .NET Framework Components tab, select the Display component.
c. Click OK to accept the selection.
d. Select Display from the Toolbox and paste one instance on the form.
e. From Properties, rename the newly created control to mDisplay.
6. The other components on the form are standard GroupBox, Label, TextBox, CheckBox and
Button components, pasted from the Toolbox. Below is the list of controls you need to add,
and the name to give to each control.
Control to add        Name of the control
--------------        -------------------
Type                  mType
Scale                 mScale
Rotation              mRotation
Translation X         mTranslationX
Translation Y         mTranslationY
Width                 mWidth
Height                mHeight
Diameter              mDiameter
Offset                mOffset
Part Label            mPartLabel
Part Width            mPartWidth
Elapsed Time          mTime
Continuous Mode       mCheckContinuous
Execute Inspection    mExecuteButton
7. The interface is now complete. Save your work.
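For reference, the designer-generated code behind the form will contain field declarations matching the names above, along the lines of this partial sketch. The exact control types depend on which components you pasted; the types below are assumptions based on the list of standard components.

```csharp
// Sketch of designer-generated fields (partial); types are assumed.
private System.Windows.Forms.TextBox mType;
private System.Windows.Forms.TextBox mScale;
private System.Windows.Forms.TextBox mRotation;
private System.Windows.Forms.TextBox mTranslationX;
private System.Windows.Forms.TextBox mTranslationY;
private System.Windows.Forms.TextBox mTime;
private System.Windows.Forms.CheckBox mCheckContinuous;
private System.Windows.Forms.Button mExecuteButton;
```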
Add Basic Code to the Application Form
You need to add basic code to be able to interact with your application. As you will add new features
throughout the other tutorial sections, you will also add lines to this code.
1. Select HookInspection.cs [Design] window.
2. From Properties, rename form to HookInspectionForm.
3. Add a Closing event handler by double-clicking in the field to the right of the Closing event in the Events view.
4. Modify the created event handler code to look like this:
/// <summary>
/// Form Closing event handler.
/// </summary>
private void HookInspectionForm_Closing(
    object sender, System.ComponentModel.CancelEventArgs eventArguments )
{
    WaitForCompletion();
}
5. Add the private method, shown below, that will be called at appropriate places in the code to
ensure that continuous mode is deactivated before a critical operation, such as an application
closing event.
/// <summary>
/// Utility method to ensure continuous mode stopping before any critical operation.
/// </summary>
private void WaitForCompletion()
{
    if ( mCheckContinuous.Checked )
    {
        mCheckContinuous.Checked = false;
    }
}
6. Return to the HookInspection.cs [Design] window.
7. Select the 'Execute Inspection' button.
8. Add a Click event handler by typing ExecuteButton_Click in the edit field to the right of event
from Events view.
9. Modify the created event handler code to look like this:
/// <summary>
/// Execute button event handler.
/// </summary>
private void ExecuteButton_Click(object sender, System.EventArgs e)
{
    try
    {
        mExecuteButton.Enabled = false;
        do
        {
            mVisionProjectControl.VisionProject.Sequences[0].Loop = false;
            mVisionProjectControl.VisionProject.Sequences[0].Execute();
            Application.DoEvents();
        }
        while ( mCheckContinuous.Checked );
        mExecuteButton.Enabled = true;
    }
    catch
    {
        mExecuteButton.Enabled = true;
    }
}
This is the main loop of the inspection application. It executes the first defined sequence using the
Execute() method and loops until Continuous Mode is disabled in the interface.
You have now completed the code needed to use your application. Run and test the application. When
you are sure that everything functions well, save your work and go on to the next section.
Next: Create a Vision Inspection Sequence
Create a Vision Inspection Sequence
To create and name the vision sequence that you will use for this tutorial, do the following:
1. Run the application.
2. From the VisionProjectControl interface, rename the first sequence to Hook Inspection.
3. The project now contains one vision sequence.
4. Click the 'Save Project' icon to save the vision project now. Save the project as
HookInspection.hsproj.
5. Click the 'Project Properties' icon to change AdeptSight environment settings.
6. From the Startup tab, enable Auto Load Project and select the previously saved project.
7. Click Close to accept changes and quit the application.
You have now created the vision project file for this tutorial. The application has been configured to
automatically load the vision project when the application starts.
In this step you will start adding the necessary tools to perform a vision inspection.
1. Restart the application.
2. In the Sequence Manager, select the Hook Inspection sequence.
3. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
4. The Toolbox contains the tools available for building sequences.
5. Select Acquire Image and drag it into the frame that reads 'Drop Tools Here'.
6. Select Emulation and click the 'Camera Properties' icon to show the Emulation Properties
dialog.
7. Load the Hook.hdb emulation database from the Tutorial/Images folder, in the files distributed
with the AdeptSight installation.
8. The Sequence Editor should now look like Figure 91.
9. Click Done to accept changes and close the Sequence Editor.
10. Click the 'Save Project' icon to save the vision project now.
Figure 91 Configuring Acquire Image Tool to use an Emulation device
You have now begun building the vision inspection sequence. The first step was to add a tool for image
acquisition.
The next step is to create a model from an acquired image and then use this model to locate all
instances of a specific object in any acquired image.
Next: Create a Model with the Locator Tool
Create a Model with the Locator Tool
In this step you will add a Locator tool to the sequence and build a model of an object.
Add the Locator
1. Double-click on the Hook Inspection sequence to start the Sequence Editor.
2. Click the 'Execute Sequence' icon to execute the sequence once and acquire a first image.
3. The Toolbox contains the tools available for building sequences.
4. From the Toolbox, select Locator and drag it just below the Acquire Image tool.
5. Under Location, leave the Entire Image check box enabled to ensure that the Locator will
search the entire image.
You are now ready to create a model for the object that you want to find with this application.
Create a Model
1. In the Models section, click the '+' icon to create a new model.
2. The display is now in Model Edition mode.
3. The Model’s bounding box appears in the image as a green rectangle.
4. Drag and resize the green bounding box to completely enclose the entire object. Green outlines
show the contours that currently represent the model.
5. Drag and rotate the yellow axes marker to position the coordinate system. The Sequence Editor
should now look like Figure 92.
6. Click Done to accept new model changes.
7. Rename the newly added model from Model0 to Hook.
8. Close the Sequence Editor.
9. Click the 'Save Project' icon to save the vision project.
Figure 92 Creating a Model in the Model Edition Mode
You have now completed creation of the tools required for this first step of the tutorial. Before continuing
to add code, run the application once and execute it on all images from the emulation database to verify
that the Locator tool appropriately locates all instances in all images.
To see the Locator results in the Sequence Editor display and the results grid, simply select the tool as
the active one by clicking in its title bar.
Next: Add Code for the Locator
Add Code for the Locator
In this section, you will add code to output the properties of the instance found by the Locator in your
application interface. Before adding any lines of code, the appropriate reference must be added.
1. From the Solution Explorer, select the Add Reference … command from the context menu.
2. From the displayed dialog, click on Browse … button.
3. Move to [Common Files]\Adept Technology\AdeptSight\PlugIns\Tool.
4. Select LocatorPlugIn.dll file and click OK twice to accept adding new reference.
5. Select the newly added reference and from Properties, change Copy Local to false.
Now that appropriate references have been added, code referencing the Locator can be added.
1. Select HookInspection.cs window.
2. At the top, add appropriate 'using' directives, as follows:
using Adept.iSight.Tools;
using Adept.iSight.Forms;
3. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
    Locator lLocator = null;
    mExecuteButton.Enabled = false;
The above code simply defines a new variable ready to reference a Locator tool.
4. In the existing do loop, add the lines of code shown in bold:
do
{
    mVisionProjectControl.VisionProject.Sequences[0].Loop = false;
    mVisionProjectControl.VisionProject.Sequences[0].Execute();

    // Retrieving / Showing Locator results
    lLocator = mVisionProjectControl.VisionProject.Sequences[0][1] as Locator;
    if ( lLocator.GetInstanceCount( 0 ) > 0 )
    {
        // An instance of the object is found
        // Output the properties of the located instance
        mType.Text = lLocator.GetInstanceModelName( 0, 0 );
        mScale.Text = lLocator.GetInstanceScaleFactor( 0, 0 ).ToString( "0.00" );
        mRotation.Text = lLocator.GetInstanceRotation( 0, 0 ).ToString( "0.00" );
        mTranslationX.Text = lLocator.GetInstanceTranslationX( 0, 0 ).ToString( "0.00" );
        mTranslationY.Text = lLocator.GetInstanceTranslationY( 0, 0 ).ToString( "0.00" );
    }
    else
    {
        // No instance of the object found
        mType.Text = "";
        mScale.Text = "";
        mRotation.Text = "";
        mTranslationX.Text = "";
        mTranslationY.Text = "";
    }

    // Retrieving / Showing sequence execution timing
    mTime.Text = mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString( "0.00 ms" );

    Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first creates a reference to the Locator tool to enable specific programmatic access.
From this reference, results are retrieved and displayed in the corresponding controls. Finally, the
execution time is retrieved and displayed before the next execution occurs.
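The calls above read only the first found instance (indices 0, 0). As a hedged sketch built from the same Locator methods used in this tutorial, the loop could be extended to list every instance of the first model; the index convention (model index, instance index) is assumed from the calls above.

```csharp
// Hypothetical extension: enumerate all instances of model 0,
// using only the Locator methods shown in this tutorial.
int lCount = lLocator.GetInstanceCount( 0 );
for ( int i = 0; i < lCount; i++ )
{
    string lInfo = string.Format(
        "{0}: x={1:0.00} y={2:0.00} rot={3:0.00}",
        lLocator.GetInstanceModelName( 0, i ),
        lLocator.GetInstanceTranslationX( 0, i ),
        lLocator.GetInstanceTranslationY( 0, i ),
        lLocator.GetInstanceRotation( 0, i ) );
    System.Diagnostics.Debug.WriteLine( lInfo );
}
```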
5. Coding is now completed. Save your work and move on to the next step.
Next: Test the Application
Test the Application
You are ready to test your application.
1. To start the running mode, press the F5 key. Click Execute Inspection a few times.
The properties of the found instance should be updated after each inspection.
2. Enable the Continuous Mode check box and click Execute Inspection. The application
should run in continuous mode. The application should now look like Figure 93.
Figure 93 Application Interface after first execution of the vision sequence
This concludes this tutorial step. After debugging, save your work and move on to the next tutorial step
where you will add a display to your application.
Next: Add and Configure a Display Interface
Add and Configure a Display Interface
This second step shows you how to interface with the Display in your application interface. A Display
allows you to view images and scenes processed by an AdeptSight vision sequence. In the previous
step, you already added a Display object to your application interface.
Adding Code for the Display
Code must now be added to update the Display output every time the inspection loop executes.
1. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
    Locator lLocator = null;
    mDisplay.Markers.Clear();
    mExecuteButton.Enabled = false;
This code simply removes all overlay graphic markers that could have been added to the Display from a
previous execution.
2. In the existing do loop, add the lines of code shown in bold:
do
{
    mVisionProjectControl.VisionProject.Sequences[0].Loop = false;
    mVisionProjectControl.VisionProject.Sequences[0].Execute();

    // Showing input image in Display
    mDisplay.Images[0].SetImageDatabase(
        mVisionProjectControl.VisionProject.Sequences[0].Database.Handle );
    mDisplay.Images[0].SetImageViewName(
        mVisionProjectControl.VisionProject.Sequences[0][0].Name );
    mDisplay.Images[0].SetImageName( "Image" );

    // Retrieving / Showing Locator results
    lLocator = mVisionProjectControl.VisionProject.Sequences[0][1] as Locator;
    if ( lLocator.GetInstanceCount( 0 ) > 0 )
    {
        // An instance of the object is found
        // Output the properties of the located instance
        mType.Text = lLocator.GetInstanceModelName( 0, 0 );
        mScale.Text = lLocator.GetInstanceScaleFactor( 0, 0 ).ToString( "0.00" );
        mRotation.Text = lLocator.GetInstanceRotation( 0, 0 ).ToString( "0.00" );
        mTranslationX.Text = lLocator.GetInstanceTranslationX( 0, 0 ).ToString( "0.00" );
        mTranslationY.Text = lLocator.GetInstanceTranslationY( 0, 0 ).ToString( "0.00" );

        // Showing instance scene in Display
        mDisplay.Scenes[0].SetSceneDatabase(
            mVisionProjectControl.VisionProject.Sequences[0].Database.Handle );
        mDisplay.Scenes[0].SetSceneViewName(
            mVisionProjectControl.VisionProject.Sequences[0][1].Name );
        mDisplay.Scenes[0].SetSceneName( lLocator.GetOutputInstanceSceneName( 0 ) );
        mDisplay.Scenes[0].SetSceneColor( Color.Blue );
        mDisplay.Scenes[0].SetScenePenWidth( PenWidth.Thin );
    }
    else
    {
        // No instance of the object found
        mType.Text = "";
        mScale.Text = "";
        mRotation.Text = "";
        mTranslationX.Text = "";
        mTranslationY.Text = "";
        mDisplay.Scenes[0].SetScenePenWidth( PenWidth.None );
    }

    // Retrieving / Showing sequence execution timing
    mTime.Text = mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString( "0.00 ms" );

    mDisplay.RefreshDisplay();
    Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first shows how to set up the Display to show the image provided by the Acquire Image
tool. Then appropriate Display settings are modified to show the Output Instance Scene
provided by the Locator tool. Finally, a RefreshDisplay() call is issued to show the current
selections before the next execution occurs. Details about Display capabilities can be found in the
documentation.
3. Coding is now completed. If you execute the application, you should now see an image in which
the current instance is highlighted. Save your work and move on to the next step.
Next: Add a Caliper Tool
Add a Caliper Tool
In this tutorial step, you will learn how to set up, configure and use a Caliper tool to precisely measure
the distance between parallel edges on an object.
Figure 94 Positioning the Caliper Tool
Positioning the Caliper
In the previous step, you created the vision project file for this tutorial. The application has been
configured to automatically load the vision project when the application starts.
In this step you will modify the vision sequence to include a Caliper tool.
1. Restart the application.
2. In the Sequence Manager, select the Hook Inspection sequence.
3. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
4. Execute once by clicking the 'Execute' icon.
5. From the context menu (in the frame where tools are created), select Add > Caliper.
6. Double-click on the tool title and rename tool from 'Caliper' to 'Width Caliper.'
7. Acquire Image is automatically selected as the tool that will provide the Input image.
8. Under Location, select Locator as Frame Input and click on the Location button.
9. In the Location dialog, set the frame-based position of the tool as shown in Figure 94. You can
either enter values manually or edit the location bounding box in the display, with the mouse.
The Caliper detects edges that are parallel to its Y-axis: adjust the rotation to
best match the inclination of the edges you want to measure.
10. Click OK to apply the Location parameters and execute the tool once by clicking the 'Execute
Tool' icon.
The Caliper must always be as perpendicular as possible to the edges to be
measured. For the current example, the tool had to be rotated to 90 degrees and
the skew was left at its default value of 0 degrees.
Configuring the Caliper
You are now ready to set properties for the pair of edges you want to identify.
1. Under Pairs, rename the default pair from 'Pair0' to 'Width Measurement'. Double-click on
'Pair0' to enable renaming.
2. Click on Edit to change pair properties.
3. Set First Edge Polarity to Light to Dark and Second Edge Polarity to Dark to Light.
4. Enable Position Constraint for both edges.
5. Set edge position constraints with the bottom controls, as shown in Figure 95.
6. Click OK to apply pair properties and return to the Sequence Editor.
7. Leave default values for all other Caliper parameters.
8. Close the Sequence Editor.
9. Click the 'Save Project' icon to save changes made to the tutorial vision project.
Figure 95 Setting Edge Pair properties for the Caliper tool (callouts: configure first edge;
configure second edge; configure the graph below to include an area on each side of an
expected edge; use the mouse to drag and configure the position constraint graph)
Now that the Caliper is properly configured to measure the rectangular hole feature, you will want to
test and observe the Caliper on other instances of the Object. Code must now be added to your
application in order to highlight Caliper results.
Next:
Add Code for the Caliper
Add Code for the Caliper
You will complete this tool by adding code to your Visual C# application that will display the
measurement of the Caliper in the interface. You will also add a line marker on the display to show the
measurement.
1. From the Solution Explorer context menu, select the Add Reference … command.
2. In the displayed dialog, click the Browse … button.
3. Move to [Common Files]\Adept Technology\AdeptSight\PlugIns\Tool.
4. Select CaliperPlugIn.dll file and click OK twice to accept adding new reference.
5. Select newly added reference and from Properties, change Copy Local to false.
6. Now that the appropriate reference has been added, code referencing the Caliper can be
added.
7. Select HookInspection.cs window.
8. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
Locator lLocator = null;
Caliper lWidthCaliper = null;
mExecuteButton.Enabled = false;
This code simply defines a new variable ready to reference a Caliper tool.
9. In the existing do loop, add the lines of code shown in bold:
do
{
...
// Retrieving / Showing Calipers results
lWidthCaliper = mVisionProjectControl.VisionProject.Sequences[0][2] as Caliper;
if ( lLocator.GetInstanceCount( 0 ) > 0 && lWidthCaliper.GetPairScore( 0, 0 ) > 0 )
{
// Output the caliper results
mWidth.Text = lWidthCaliper.GetPairSize( 0, 0 ).ToString( "0.00" );
// Showing width marker in Display
MarkerLine lMarker = new MarkerLine(
"Width",
lWidthCaliper.GetEdge1PositionX( 0, 0 ),
lWidthCaliper.GetEdge1PositionY( 0, 0 ),
lWidthCaliper.GetEdge2PositionX( 0, 0 ),
lWidthCaliper.GetEdge2PositionY( 0, 0 ),
true );
mDisplay.Markers.Add( lMarker );
lMarker.AnchorStyle = MarkerAnchorStyle.CROSS;
lMarker.Constraints = LineMarkerConstraints.LineNoEdit;
lMarker.PenWidth = PenWidth.Thin;
lMarker.Color = HSColor.Red;
}
else
{
mWidth.Text = "";
}
// Retrieving / Showing sequence execution timing
mTime.Text =
mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString(
"0.00 ms" );
mDisplay.RefreshDisplay();
Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first creates a reference to the Caliper tool to enable specific programmatic access.
From this reference, results are retrieved and displayed in the corresponding controls. If the
first pair of the Caliper (index 0) is found (score greater than 0), its measurement is displayed.
Using the World coordinate system, the calibrated X-Y position of the first and second edge of
the pair is used to draw a non-editable line marker on the Display.
10. Coding is now completed. Save your work and test the application. You should see the
measurement in the Inspection Width text box of your application. You should also see the line
marker you added on the Display highlighting the measurement.
Adding a Second Caliper
Using the previous steps, try adding a second Caliper that will measure the height of the rectangular
hole in the hook.
1. Add a second caliper tool and rename it Height Caliper.
2. In the Location dialog, place the caliper to measure the height of the rectangular hole in the
hook.
3. Rename Pair0 of the second caliper to Height Measurement.
4. Edit the edge pair settings similarly to first caliper tool (Width Measurement pair).
5. Add the appropriate code to display the result in the appropriate text box on your application
form. Also add the code to draw another line marker on the Display.
Testing the Application
You are ready to test your application.
1. To start the running mode, press the F5 key. Click Execute Inspection a few times.
2. The measurements on the found instance should be updated after each inspection.
3. Enable the Continuous Mode check box and click Execute Inspection.
4. The application should run in continuous mode.
5. The application should now look like Figure 96.
Figure 96 Application Interface showing Width and Height Measurements
Add a Blob Analyzer Tool
In this tutorial step, you will learn how to set up, configure and use a Blob Analyzer tool to find and
analyze irregular shaped features on the part.
Positioning the Blob Analyzer
In the previous steps, you created and modified the vision project file for this tutorial. The application
has been configured to automatically load the vision project when the application starts.
In this step you will modify the vision sequence to include a Blob Analyzer tool.
1. Restart the application.
2. In the Sequence Manager, select the Hook Inspection sequence.
3. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
4. Execute once by clicking the 'Execute' icon.
5. From the context menu, select Add > Blob Analyzer.
6. Double-click on the tool title and rename tool from Blob Analyzer to Hole Blob.
7. Acquire Image is by default selected as the tool that will provide the Input image.
8. Under Location, select Locator as Frame Input and click on Location button.
9. Set the frame-based position of the tool as shown in Figure 97.
Figure 97 Setting Location of the Blob Analyzer Tool
Configuring the Blob Analyzer
You will now set up constraints that will allow the Blob Analyzer to find only valid blobs, that is, those
that meet the criteria you have determined. These constraints can be set from Blobs parameters.
1. Under Blobs, click Configure.
2. Set Minimum Area to 0 and Maximum Area to 100.
3. For Image Mode, select the Dark segmentation mode.
4. In the histogram window, use the mouse to drag and set segmentation limits: top to 90 and
bottom to 110, as shown in Figure 98.
This instructs the process to keep pixels with a greylevel lower than 90 and reject those with a
greylevel higher than 110. Pixels between these values will receive a weight between 0 and 1.
Alternatively, you can set the segmentation limits in the Advanced Parameters grid by setting
Segmentation Dark Top and Bottom to 90 and 110.
5. Click OK to close the Blob Settings window.
6. In the Advanced Parameters grid, under Results, select Object Coordinate System.
7. Keep default values for all other Advanced Parameters.
8. Close the Sequence Editor.
9. Click the 'Save Project' icon to save changes made to the tutorial vision project.
Figure 98 Configuring Blob detection parameters
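The weighting rule described in step 4 can be sketched as a simple ramp function. The C# sketch below is illustrative only: the linear interpolation between the two segmentation limits is an assumption, not the documented AdeptSight implementation.

```csharp
using System;

class Segmentation
{
    // Dark segmentation weighting (illustrative): pixels at or below 'top'
    // are fully kept (weight 1), pixels at or above 'bottom' are rejected
    // (weight 0), and pixels in between receive a weight between 0 and 1.
    // The linear ramp used here is an assumption.
    public static double DarkWeight(int greylevel, int top, int bottom)
    {
        if (greylevel <= top) return 1.0;
        if (greylevel >= bottom) return 0.0;
        return (double)(bottom - greylevel) / (bottom - top);
    }

    static void Main()
    {
        Console.WriteLine(Segmentation.DarkWeight(80, 90, 110));  // kept: 1
        Console.WriteLine(Segmentation.DarkWeight(100, 90, 110)); // partial: 0.5
        Console.WriteLine(Segmentation.DarkWeight(120, 90, 110)); // rejected: 0
    }
}
```

With the tutorial values (Top = 90, Bottom = 110), a greylevel of 80 is fully kept and a greylevel of 120 is fully rejected.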
Now that the Blob Analyzer is properly configured to inspect the circular hole feature, you will want to
test and observe the Blob Analyzer results on other instances of the Object. Code must now be added to
your application in order to highlight Blob Analyzer results.
Next:
Add Code for the Blob Analyzer
Add Code for the Blob Analyzer
You will finish up by adding code to your Visual C# application that will display the measurements of the
Blob Analyzer in the interface. You will also add a target marker on the display to show the position and
diameter of the hole.
1. From the Solution Explorer context menu, select the Add Reference … command.
2. From the displayed dialog, click on the Browse … button.
3. Move to [Common Files]\Adept Technology\AdeptSight\PlugIns\Tool.
4. Select BlobAnalyzerPlugIn.dll file and click OK twice to accept adding new reference.
5. Select newly added reference and from Properties, change Copy Local to false.
Now that the appropriate reference has been added, code referencing the Blob Analyzer can be
added.
6. Select the HookInspection.cs window.
7. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
Locator lLocator = null;
Caliper lWidthCaliper = null;
Caliper lHeightCaliper = null;
BlobAnalyzer lHoleBlob = null;
mExecuteButton.Enabled = false;
This code simply defines a new variable ready to reference a Blob Analyzer tool.
8. In the existing do loop, add the lines of code shown in bold:
do
{
...
// Retrieving / Showing Blob results
lHoleBlob = mVisionProjectControl.VisionProject.Sequences[0][4] as BlobAnalyzer;
if ( lLocator.GetInstanceCount( 0 ) > 0 && lHoleBlob.GetBlobCount( 0 ) > 0 )
{
// Output the blob results
double lDiameter = Math.Sqrt( 4 * lHoleBlob.GetBlobArea( 0, 0 ) / 3.14159 );
mDiameter.Text = lDiameter.ToString( "0.00" );
double lOffset = Math.Sqrt(
lHoleBlob.GetBlobPositionX( 0, 0 ) *
lHoleBlob.GetBlobPositionX( 0, 0 ) +
lHoleBlob.GetBlobPositionY( 0, 0 ) *
lHoleBlob.GetBlobPositionY( 0, 0 ) );
mOffset.Text = lOffset.ToString( "0.00" );
// Showing blob marker in Display
MarkerTarget lMarker = new MarkerTarget(
"Hole",
lHoleBlob.GetBlobPositionXWorld( 0, 0 ),
lHoleBlob.GetBlobPositionYWorld( 0, 0 ),
(float) (lDiameter / 2.0),
true );
mDisplay.Markers.Add( lMarker );
lMarker.Constraints = TargetMarkerConstraints.TargetNoEdit;
lMarker.Color = HSColor.Red;
}
else
{
mDiameter.Text = "";
mOffset.Text = "";
}
// Retrieving / Showing sequence execution timing
mTime.Text =
mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString(
"0.00 ms" );
mDisplay.RefreshDisplay();
Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first creates a reference to the Blob Analyzer tool to enable specific programmatic
access. From this reference, results are retrieved and displayed in the corresponding controls.
If the blob is found, the offset from the origin of the Object coordinate system and the diameter
of the hole are computed and displayed. Using the World coordinate system, the calibrated X-Y
position of the blob is used to draw a non-editable target marker on the Display.
9. Coding is now completed. Save your work and test the application. You should see the
measurement in the Inspection Diameter and Offset text boxes of your application. You should
also see the target marker you added on the Display highlighting the hole measurements.
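The diameter and offset values computed in the code above are plain geometry: the hole is treated as a circle of equal area, and the offset is the distance of the blob centroid from the Object origin. A minimal standalone sketch of that arithmetic (no AdeptSight types involved; the tutorial code approximates pi as 3.14159, while Math.PI is used here):

```csharp
using System;

class BlobMath
{
    // Equivalent-circle diameter: area = pi * (d/2)^2  =>  d = sqrt(4 * area / pi)
    public static double EquivalentDiameter(double area)
    {
        return Math.Sqrt(4.0 * area / Math.PI);
    }

    // Distance of the blob centroid (x, y) from the coordinate system origin
    public static double Offset(double x, double y)
    {
        return Math.Sqrt(x * x + y * y);
    }

    static void Main()
    {
        // A blob of area pi has an equivalent diameter of exactly 2
        Console.WriteLine(BlobMath.EquivalentDiameter(Math.PI).ToString("0.00")); // 2.00
        Console.WriteLine(BlobMath.Offset(3.0, 4.0).ToString("0.00"));            // 5.00
    }
}
```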
Testing the Application
You are ready to test your application.
1. To start the running mode, press the F5 key. Click Execute Inspection a few times.
The properties of the found instance should be updated after each inspection.
2. Enable the Continuous Mode check box and click Execute Inspection.
3. The application should run in continuous mode.
4. The application should now look like Figure 99.
Figure 99 Application Interface showing Hole Diameter and Offset Measurements
This concludes this tutorial step. After debugging, save your work and move on to the next tutorial step
where you will add another inspection tool to your application.
Next:
Add a Pattern Locator Tool
Add a Pattern Locator Tool
In this tutorial step, you will learn how to set up, configure and use a Pattern Locator tool to find and
locate instances of the HS label printed on the part.
Positioning the Pattern Locator
In the previous steps, you have created and modified the vision project file for this tutorial. The
application has been configured to automatically load the vision project when the application starts.
In this step you will modify the vision sequence to include a Pattern Locator tool.
1. Restart the application
2. In the Sequence Manager, select the Hook Inspection sequence.
3. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
4. Execute the sequence once by clicking 'Execute' icon.
5. From the context menu, select Add > Pattern Locator.
6. Double-click on the tool title and rename tool from Pattern Locator to Label Locator.
7. Acquire Image is by default selected as the tool that will provide the Input image.
8. Under Location, select Locator as Frame Input and click on Location button.
9. Set the frame-based position of the tool as shown in Figure 100.
Figure 100 Setting the Location of the Pattern Locator tool
Creating the Pattern Image
You will now create the pattern that you want the tool to search for in the defined Location area. The
pattern can be defined from any input image.
1. Under Pattern, click Create to start a pattern creation process, from the current input image.
2. Set the location of the pattern as shown in Figure 101.
Figure 101 Setting the Location of the Pattern Image.
Configuring Pattern Locator Constraints
You will now set up constraints that will allow the Pattern Locator to match the pattern within the
defined frame-based location. These constraints can be changed in the Advanced Parameters property grid.
1. In Advanced Parameters, under Results, set Coordinate System to Object.
2. Under Search, set Match Threshold to 0.4.
3. Keep default values for all other Advanced Parameters.
4. Close the Sequence Editor.
5. Click the 'Save Project' icon to save changes made to the tutorial vision project.
Now that the Pattern Locator is properly configured to check for the presence of the HS label, you will
want to test and observe the Pattern Locator results on other instances of the Object. Code must now
be added to your application in order to highlight Pattern Locator results.
Next:
Add Code for the Pattern Locator
Add Code for the Pattern Locator
You will finish up by adding code to your Visual C# application that will display the result of the pattern
search in the interface. You will also add a point marker on the display to show the position of the label
located by the Pattern Locator.
1. From the Solution Explorer context menu, select the Add Reference … command.
2. In the displayed dialog, click the Browse … button.
3. Move to [Common Files]\Adept Technology\AdeptSight\PlugIns\Tool.
4. Select PatternLocatorPlugIn.dll file and click OK twice to accept adding new reference.
5. Select the newly added reference and in Properties, change Copy Local to false.
6. Now that the appropriate reference has been added, code referencing the Pattern Locator can be
added.
7. Select the HookInspection.cs window.
8. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
Locator lLocator = null;
Caliper lWidthCaliper = null;
Caliper lHeightCaliper = null;
BlobAnalyzer lHoleBlob = null;
PatternLocator lLabelLocator = null;
mExecuteButton.Enabled = false;
This code simply defines a new variable ready to reference a Pattern Locator tool.
9. In the existing do loop, add the lines of code shown in bold:
do
{
...
// Retrieving / Showing Pattern results
lLabelLocator =
mVisionProjectControl.VisionProject.Sequences[0][5] as PatternLocator;
if ( lLocator.GetInstanceCount( 0 ) > 0 && lLabelLocator.GetMatchCount( 0 ) > 0 )
{
// Output the pattern results
mPartLabel.Text = "Present";
// Showing pattern marker in Display
MarkerPoint lMarker = new MarkerPoint(
"HSLabel",
lLabelLocator.GetMatchPositionX( 0, 0, false ),
lLabelLocator.GetMatchPositionY( 0, 0, false ),
true );
mDisplay.Markers.Add( lMarker );
lMarker.Color = HSColor.Blue;
}
else
{
mPartLabel.Text = "Absent";
}
// Retrieving / Showing sequence execution timing
mTime.Text =
mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString(
"0.00 ms" );
mDisplay.RefreshDisplay();
Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first creates a reference to the Pattern Locator tool to enable specific programmatic
access. From this reference, the Match Count is retrieved and the appropriate status is displayed
in the corresponding control. If a match is found, the calibrated X-Y position of the match, in the
World coordinate system, is used to draw a non-editable point marker on the Display.
10. Coding is now completed. Save your work and test the application. You should see the match
status in the Inspection Part Label text box of your application. You should also see the point
marker you added on the Display highlighting the match position.
Testing the Application
You are ready to test your application.
1. To start the running mode, press the F5 key. Click Execute Inspection a few times.
2. The measurements on the found instance should be updated after each inspection.
3. Enable the Continuous Mode check box and click Execute Inspection.
4. The application should run in continuous mode.
5. The application should now look as shown in Figure 102.
Figure 102 Application Interface showing Part HS Label match
This concludes this tutorial step. After debugging, save your work and move on to the next tutorial step
where you will add other inspection tools to your application.
Next:
Add Two Edge Locator Tools
Add Two Edge Locator Tools
In this tutorial step, you will learn how to set up, configure and use two Edge Locator tools to measure
non-parallel edges on the part.
Positioning the First Edge Locator
In the previous steps, you have created and modified the vision project file for this tutorial. The
application has been configured to automatically load the vision project when the application starts.
In this step you will modify the vision sequence to include two Edge Locator tools.
1. Restart the application.
2. In the Sequence Manager, select the Hook Inspection sequence.
3. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
4. Execute once by clicking the 'Execute' icon in the toolbar.
5. From the context menu, select the Add > Edge Locator command.
6. Double-click on the tool title and rename the tool from Edge Locator to Part Left Edge.
7. Acquire Image is automatically selected as the Input image provider.
8. Under Location, select Locator as Frame Input and click on the Location button.
9. Set the frame-based position of the tool as shown in Figure 103. You can either enter values
manually or edit the location bounding box in the display, with the mouse.
The Edge Detector detects edges that are parallel to its Y-axis: adjust the rotation to best
match the inclination of the edges you want to measure.
Figure 103 Setting the Location of the Edge Locator Tool
Configuring First Edge Locator Constraints
You will now set up constraints that will allow the Edge Locator to find edges in the defined
frame-based location. These constraints can be changed in the Advanced Parameters property grid.
1. In Advanced Parameters, under Results, set Coordinate System to Object.
2. Under Edge Constraints, set Constraints to Position.
3. Under Edge Constraints, set Polarity Mode to Dark To Light.
4. Under Constraints, set Position Constraint to 0, 1, 0.509, 0.560.
5. Keep default values for all other Advanced Parameters.
6. Close the Sequence Editor.
7. Click the 'Save Project' icon to save changes made to the tutorial vision project.
Now that the first Edge Locator is properly configured, the second one must be added.
Positioning the Second Edge Locator
The vision sequence must now be modified to include a second Edge Locator tool.
1. In the toolbar, click the 'Edit Sequence' icon to start the Sequence Editor.
2. Execute once by clicking the 'Execute' icon from toolbar.
3. From context menu select Add > Edge Locator command.
4. Double-click on tool title, rename tool from Edge Locator to Part Right Edge.
5. Acquire Image is automatically selected as the Input image provider.
6. Under Location, select Locator as Frame Input and click on Location button.
7. Set the frame-based position of the tool as shown in Figure 104. You can either enter values
manually or edit the location bounding box in the display, with the mouse.
The Edge Detector detects edges that are parallel to its Y-axis: adjust the rotation to
best match the inclination of the edges you want to measure. Skew can also be modified
to configure the tool to match the orientation of the edge.
For the current example, the skew was set to -18 degrees to adapt to the edge slope in
the inspected area. The same result could be obtained with Rotation=72 and Skew=0
degrees.
Figure 104 Setting Location of the Second Edge Locator (callout: adjust the skew angle so that
the edge is parallel to the Y-axis)
Configuring Second Edge Locator Constraints
You will now set up constraints that will allow the Edge Locator to find edges in the defined
frame-based location. These constraints can be changed in the Advanced Parameters property grid.
1. In Advanced Parameters, under Results, set Coordinate System to Object.
2. Under Edge Constraints, set Constraints to Position.
3. Under Edge Constraints, set Polarity Mode to Light to Dark.
4. Under Constraints, set Position Constraint to 0, 1, 0.497, 0.497.
5. Keep default values for all other Advanced Parameters.
6. Close the Sequence Editor.
7. Click the 'Save Project' icon to save changes made to the tutorial vision project.
Now that the Edge Locator tools are properly configured to measure part width, you will want to test and
observe the Edge Locator results on other instances of the Object. Code must now be added to your
application in order to highlight Edge Locator results.
Add Code for the Edge Locators
You will finish up by adding code to your Visual C# application that will display the measurement
computed from the Edge Locator tools in the interface. You will also add a line marker on the display to
show the measurement.
1. From the Solution Explorer context menu, select the Add Reference … command.
2. In the displayed dialog, click the Browse … button.
3. Move to [Common Files]\Adept Technology\AdeptSight\PlugIns\Tool.
4. Select the EdgeLocatorPlugIn.dll file and click OK twice to accept adding the new reference.
5. Select the newly added reference and in Properties, change Copy Local to false.
6. Now that the appropriate references have been added, code referencing the Edge Locator tools can
be added.
7. Select the HookInspection.cs window.
8. Locate the ExecuteButton_Click method and insert the lines of code shown in bold:
try
{
Locator lLocator = null;
Caliper lWidthCaliper = null;
Caliper lHeightCaliper = null;
BlobAnalyzer lHoleBlob = null;
PatternLocator lLabelLocator = null;
EdgeLocator lPartLeftEdge = null;
EdgeLocator lPartRightEdge = null;
mExecuteButton.Enabled = false;
This code simply defines new variables ready to reference the Edge Locator tools.
9. In the existing do loop, add the lines of code shown in bold:
do
{
...
// Retrieving / Showing EdgeLocators results
lPartLeftEdge =
mVisionProjectControl.VisionProject.Sequences[0][6] as
EdgeLocator;
lPartRightEdge =
mVisionProjectControl.VisionProject.Sequences[0][7] as EdgeLocator;
if ( lLocator.GetInstanceCount( 0 ) > 0 &&
lPartLeftEdge.GetEdgeCount( 0 ) > 0 &&
lPartRightEdge.GetEdgeCount( 0 ) > 0 )
{
// Output the computed width results
float lPartWidth =
lPartRightEdge.GetEdgePositionY( 0, 0 ) -
lPartLeftEdge.GetEdgePositionY( 0, 0 );
mPartWidth.Text = lPartWidth.ToString( "0.00" );
// Showing part width marker in Display
MarkerLine lMarker = new MarkerLine(
"Part Width",
lPartRightEdge.GetEdgePositionXWorld( 0, 0 ),
lPartRightEdge.GetEdgePositionYWorld( 0, 0 ),
lPartLeftEdge.GetEdgePositionXWorld( 0, 0 ),
lPartLeftEdge.GetEdgePositionYWorld( 0, 0 ),
true );
mDisplay.Markers.Add( lMarker );
lMarker.AnchorStyle = MarkerAnchorStyle.CROSS;
lMarker.Constraints = LineMarkerConstraints.LineNoEdit;
lMarker.PenWidth = PenWidth.Thin;
lMarker.Color = HSColor.Red;
}
else
{
mPartWidth.Text = "";
}
// Retrieving / Showing sequence execution timing
mTime.Text =
mVisionProjectControl.VisionProject.Sequences[0].ElapsedTime.ToString(
"0.00 ms" );
mDisplay.RefreshDisplay();
Application.DoEvents();
}
while ( mCheckContinuous.Checked );
This code first creates references to the Edge Locator tools to enable specific programmatic
access. From these references, results are retrieved and the part width is computed and displayed
in the corresponding control. Using the World coordinate system, the calibrated X-Y positions of
the first and second edges are used to draw a non-editable line marker in the Display.
10. Coding is now completed. Save your work and test the application. You should see the
measurement in the Inspection Part Width text box of your application. You should also see the
line marker you added, highlighting the measurement in the Display.
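The part width computed above is simply the difference of the two edge Y positions, both reported in the Object coordinate system (which is why Coordinate System was set to Object for both Edge Locators). A minimal standalone sketch of that arithmetic, with illustrative values and no AdeptSight types:

```csharp
using System;

class WidthMath
{
    // Part width as the signed difference of two edge Y positions measured
    // in the same (Object) coordinate system. Example values are illustrative.
    public static double PartWidth(double rightEdgeY, double leftEdgeY)
    {
        return rightEdgeY - leftEdgeY;
    }

    static void Main()
    {
        Console.WriteLine(WidthMath.PartWidth(12.50, 4.25).ToString("0.00")); // 8.25
    }
}
```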
Testing the Application
You are ready to test your application.
1. To start the running mode, press the F5 key. Click Execute Inspection a few times.
2. The measurements on the found instance should be updated after each inspection.
3. Enable the Continuous Mode check box and click Execute Inspection.
4. The application should run in continuous mode.
5. The application should now look as shown in Figure 105.
Figure 105 Application Interface showing all Measurements
You have completed the tutorial!
Continue learning about AdeptSight in the following tutorials and online help topics:
• AdeptSight Pick-and-Place Tutorial
• AdeptSight Conveyor Tracking Tutorial
• Setting Up System Devices
• Managing Models in AdeptSight
AdeptSight 2.0 Online Help
March 2007
AdeptSight Reference Guide
The AdeptSight reference contains the following topics:
• AdeptSight Properties Reference for V+ and MicroV+
• AdeptSight V+ and MicroV+ Keywords
• AdeptSight Quick Reference
AdeptSight V+ and MicroV+ Keywords
The following keywords are required for programming AdeptSight applications in MicroV+ or V+.
Click on the links below to go to the keyword descriptions.
VLOCATION (transformation function)
VPARAMETER (program instruction)
VPARAMETER (real-valued function)
VRESULT (real-valued function)
VRUN (program instruction)
VSTATE (real-valued function)
VTIMEOUT (system parameter)
VWAITI (program instruction)
VLOCATION
transformation function
Syntax
MicroV+ VLOCATION (sequence_id, tool_id, instance_id, result_id, index_id, frame_id)
V+ VLOCATION ($ip, sequence_id, tool_id, instance_id, result_id, index_id, frame_id)
Description
Returns a Cartesian transform result of the execution of the specified vision sequence. The returned
value is a transform result: x, y, z, yaw, pitch, roll.
Parameters
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
sequence_id
Index of the vision sequence. 1-based.
tool_id
Index of the tool in the sequence. 1-based.
instance_id
Index of the instance for which you want the transform. 1-based.
result_id
Identifier of the result. Typically this value = 1311.
For gripper offset location this value can be set to 1400 and incremented by 1
for each additional gripper offset. The maximum value is 1499. See Example
2.
index_id
Reserved for internal use. Value is always '1'.
frame_id
Index of the frame that contains the specified instance.
Details
Parameters sequence_id, tool_id, instance_id, index_id, and frame_id are optional. These
parameters are 1-based. If no value is provided for these parameters, they default to 1.
In V+ the vision server is the PC on which the AdeptSight vision software is running.
To retrieve specific values:
• Global values: sequence_id = -1, tool_id = -1
• Camera values: sequence_id = -1, tool_id = cameraIndex
• Camera-relative-to-robot values: sequence_id = -1, tool_id = cameraIndex, index_id = robotIndex
• Sequence values: sequence_id = sequenceIndex, tool_id = -1
To retrieve Belt Calibration related values (read only):

Property         sequence_id  tool_id      instance_id  result_id  index_id    frame_id
Frame            -1           cameraIndex  n/a          10000      robotIndex  n/a
UpstreamLimit    -1           cameraIndex  n/a          10001      robotIndex  n/a
DownstreamLimit  -1           cameraIndex  n/a          10002      robotIndex  n/a
VisionOrigin     -1           cameraIndex  n/a          10050      robotIndex  n/a
Examples
Example 1
In this example, the 1311 result ID indicates using the first gripper offset. This is equivalent to
using the 1400 result ID.
; Retrieve the location of a found instance
; instance location = 1311
SET location = VLOCATION(1, 2, 1, 1311)
Example 2
; Retrieve the 1st gripper offset location
; 1st gripper offset location = 1400
SET location = VLOCATION (1,2,1,1400)
; Retrieve the 2nd gripper offset location
SET location = VLOCATION (1,2,1,1401)
...
; Retrieve the 6th gripper offset location
SET location = VLOCATION (1,2,1,1405)
Example 3
; Retrieve the location of the Belt frame
; BeltCalibrationFrame index is 10000
VLOCATION ($ip, -1, cameraIndex, ,10000, robotIndex)
; Retrieve the location of the Vision origin
; VisionOrigin index is 10050
VLOCATION ($ip, -1, cameraIndex, ,10050, robotIndex )
VPARAMETER
program instruction
Syntax
MicroV+ VPARAMETER (sequence_id, tool_id, parameter_id, index_id, object_id) = value
V+ VPARAMETER (sequence_id, tool_id, parameter_id, index_id, object_id) $ip = value
Description
Sets the current value of a vision tool parameter.
Parameters
sequence_id
Index of the vision sequence. First sequence is '1'
tool_id
Index of the tool in the sequence.
parameter_id
Identifier (ID) of the parameter. Refer to the AdeptSight Quick Reference
tables to find the ID for the required parameter.
index_id
Some parameters require an index. For example, the index of a model, of an
edge pair, or of a blob.
object_id
Some parameters require an object index to access specific values in an array.
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
Details
Parameters sequence_id, tool_id, parameter_id, index_id, and object_id are optional. These
parameters are 1-based. If no value is provided for these parameters, they default to 1.
In V+ the vision server is the PC on which the AdeptSight vision software is running.
Example
; Set a Locator to find
; a maximum of 4 object instances
; MaximumInstanceCount = 519
VPARAMETER(1,2,519) = 4
VPARAMETER
real-valued function
Syntax
MicroV+ value = VPARAMETER (sequence_id, tool_id, parameter_id, index_id, object_id)
V+ value = VPARAMETER ($ip, sequence_id, tool_id, parameter_id, index_id, object_id)
Description
Gets the current value of a vision tool parameter.
Parameters
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
sequence_id
Index of the vision sequence. First sequence is '1'
tool_id
Index of the tool in the sequence.
parameter_id
Identifier (ID) of the parameter. Refer to the AdeptSight Quick Reference
tables to find the ID for the required parameter.
index_id
Some parameters require an index. For example, the index of a model, of an
edge pair, or of a blob.
object_id
Some parameters require an object index to access specific values in an array.
Details
Parameters sequence_id, tool_id, parameter_id, index_id, and object_id are optional. These
parameters are 1-based. If no value is provided for these parameters, they default to 1.
To retrieve specific values
To retrieve global values: sequence_id = -1, tool_id = -1
To retrieve camera values: sequence_id = -1, tool_id = cameraIndex
To retrieve sequence values: sequence_id = sequenceIndex, tool_id = -1

To retrieve Belt-Calibration-related values (read only):
Scale (10004): sequence_id = -1, tool_id = cameraIndex, index_id = robotIndex, object_id = n/a

To retrieve sequence-related values:
Mode (10200): sequence_id = sequenceIndex, tool_id = -1, index_id = n/a, object_id = n/a

Example
; Get the Scale value for the Belt Calibration
value = VPARAMETER ($ip, -1, cameraIndex, 10004, robotIndex)
VRESULT
real-valued function
Syntax
MicroV+ VRESULT (sequence_id, tool_id, instance_id, result_id, index_id, frame_id)
V+ VRESULT ($ip, sequence_id, tool_id, instance_id, result_id, index_id, frame_id)
Description
Returns a specified result of a vision tool, or returns the status of a specified tool.
Parameters
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
sequence_id
Index of the vision sequence.
tool_id
Index of the tool in the sequence.
instance_id
Index of the instance for which you want the result. 1-based.
result_id
Identifier (ID) of the result. Refer to the AdeptSight Quick Reference tables
to find the ID for the required result.
index_id
Reserved for internal use. Value is always '1'.
frame_id
Index of the frame that contains the specified instance.
Details
Parameters sequence_id, tool_id, instance_id, index_id, and frame_id are optional. These parameters are
1-based. If no value is provided for these parameters, they default to 1.
In V+ the vision server is the PC on which the AdeptSight vision software is running.
The Status property (result_id=1002) retrieves the status of a specified tool.
Example
The following illustrates how to retrieve a specific tool result.
; Get the number of instances found by a Locator
; instance count = 1310
instance_count = VRESULT(1, 2, 1, 1310)
VRUN
program instruction
Syntax
Micro V+ VRUN sequence_id
V+ VRUN $ip, sequence_id
Description
Initiates the execution of a vision sequence.
Parameters
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
sequence_id
Index of the vision sequence. Optional. 1-based; if unspecified defaults to '1'.
Details
In V+ the vision server is the PC on which the AdeptSight vision software is running.
Example
; Execute the first sequence
VRUN 1
VSTATE
real-valued function
Syntax
MicroV+ VSTATE (sequence_id)
V+ VSTATE ($ip, sequence_id)
Description
Returns the state of the execution of a sequence.
Parameters
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
sequence_id
Index of the vision sequence. Optional. 1-based; if unspecified defaults to '1'.
Details
In V+ the vision server is the PC on which the AdeptSight vision software is running.
Return
Return values are different for V+ and MicroV+:

MicroV+
Value   Description
0       Running
1       This value is currently unused.
2       Completed
3       Error

V+
Value   Description
0       Idle
1       Running
2       Paused
3       Done
4       Error
5       Starting
Example
; Get the state of the first sequence
state = VSTATE(1)
VTIMEOUT
system parameter
Syntax
MicroV+ PARAMETER VTIMEOUT = value
V+ PARAMETER VTIMEOUT = value
Description
Sets a timeout value so that an error message is returned if no answer is received following a vision
command. The timeout value is expressed in seconds; for example, value = 0.15 corresponds to 150 ms.
The default value is 0, which is an infinite timeout.
Details
It is important to set a value other than the default value of 0.
VTIMEOUT = 0 sets the timeout value to 'infinite'. In this case the operation will wait indefinitely
for an answer, and no error message is returned.
Example
; Get error message if no answer after 200ms
PARAMETER VTIMEOUT = .20
VWAITI
program instruction
Syntax
MicroV+ VWAITI (sequence_id) type
V+ VWAITI (sequence_id) $ip, type
Description
Waits efficiently until the specified vision sequence reaches the state specified by the type parameter.
Call VWAITI after VRUN. In a V+ conveyor-tracking application, the absence of a VWAITI
instruction can interfere with the Acquire Image tool and the Communication tool, and cause a delay in
the execution of the application.
Parameters
sequence_id
Index of the vision sequence. 1-based; if unspecified defaults to '1'
$ip
IP address of the vision server.
Standard IP address format. For example 255.255.255.255.
This parameter applies to V+ syntax only.
type
0 Wait for full completion (default)
1 Wait for partial completion
Details
Parameters sequence_id and type are optional.
In V+, the vision server is the PC on which the AdeptSight vision software is running.
Example
; Execute the first sequence
VRUN 1
; Wait for completion of first sequence
VWAITI (1) 0
AdeptSight Properties Reference for V+ and MicroV+
This reference guide provides details on all AdeptSight properties and their use in V+ and MicroV+.
• All properties are described in alphabetical order in the following pages.
• To find a property by name, by tool, or by ID, click a link below.
Global Tables
All properties by name or ID number:
Search for Properties by Name
Search for Properties by ID
Framework Properties
Properties required for configuring standalone vision applications.
AdeptSight Framework Properties
Tool Properties
Properties that apply to the selected vision tool.
Acquire Image Tool Properties
Arc Caliper Properties
Arc Edge Locator Properties
Arc Finder Properties
Blob Analyzer Properties
Color Matching Tool Properties
Communication Tool Properties
Edge Locator Properties
Frame Builder Tool Properties
Image Histogram Tool Properties
Image Processing Tool Properties
Image Sharpness Tool Properties
Locator Tool Properties
Line Finder Properties
Overlap Tool Properties
Pattern Locator Properties
Point Finder Properties
Result Inspection Tool Properties
Sampling Tool Properties
Abort
VPARAMETER
5500
Abort stops the execution of the specified Acquire Image tool.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5500, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5500, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5500, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5500, index, object)
Type
Long
Range
Value   Description
1       Aborts the execution of the Acquire Image tool.
0       No effect.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the Acquire Image tool in the vision sequence. First tool is '1'.
ID
5500: the value used to reference this property
index
N/A
object
N/A
ArcMustBeTotallyEnclosed
VPARAMETER
5141
When ArcMustBeTotallyEnclosed is True, the start and end points of the arc must be located on the
radial bounding sides of the Search Area. When set to False, the found arc can enter and/or exit the
Search Area at the inner or outer annular bounds of the Search Area.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5141, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5141, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5141, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5141, index, object)
Type
Boolean
Range
Value   Description
1       Start and end points of the arc must be located on the sides of the bounding area.
0       Start and end points of the arc can be anywhere inside or outside of the bounding area.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5141: the value used to reference this property
index
N/A
object
N/A
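Example
A sketch in the style of the other examples in this reference; the sequence and tool indices (1 and 2) are placeholders, assuming an Arc Finder is the second tool of the first sequence:

```
; Require the found arc to start and end on the radial bounds
; ArcMustBeTotallyEnclosed = 5141
VPARAMETER(1,2,5141) = 1
```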
ArithmeticClippingMode
VPARAMETER
5360
Clipping mode applied by an arithmetic operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5360, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5360, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5360, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5360, index, object)
Remarks
hsClippingNormal mode forces the destination pixel value to a value from 0 to 255 for unsigned
8-bit images, to a value from -32768 to 32767 for signed 16-bit images, and so on. Values that are
less than the specified minimum value are set to the minimum value. Values greater than the
specified maximum value are set to the maximum value.
hsClippingAbsolute mode takes the absolute value of the result and clips it using the same
algorithm as the hsClippingNormal mode.
Type
Long
Range
Value   Image Processing Clipping Mode   Description
0       hsClippingNormal                 Normal clipping method is used.
1       hsClippingAbsolute               Absolute clipping method is used.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5360: the value used to reference this property
index
N/A
object
N/A
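Example
A sketch in the style of the other examples; the sequence and tool indices (1 and 2) are placeholders, assuming an Image Processing tool is the second tool of the first sequence:

```
; Apply absolute clipping to the arithmetic operation
; ArithmeticClippingMode = 5360, hsClippingAbsolute = 1
VPARAMETER(1,2,5360) = 1
```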
ArithmeticConstant
VPARAMETER
5361
Constant applied by an arithmetic operation when no valid operand image is specified.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5361, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5361, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5361, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5361, index, object)
Type
long
Range
Unlimited.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5361: the value used to reference this property
index
N/A
object
N/A
ArithmeticScale
VPARAMETER
5362
Scaling factor applied by an arithmetic operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5362, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5362, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5362, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5362, index, object)
Type
double
Range
Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5362: the value used to reference this property
index
N/A
object
N/A
AssignmentConstant
VPARAMETER
5365
Constant applied by an assignment operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5365, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5365, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5365, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5365, index, object)
Type
long
Range
Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5365: the value used to reference this property
index
N/A
object
N/A
AssignmentHeight
VPARAMETER
5366
Constant value that defines the height of the output image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5366, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5366, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5366, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5366, index, object)
Type
long
Range
Unlimited but values from 1 to 2048 are preferable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5366: the value used to reference this property
index
N/A
object
N/A
AssignmentWidth
VPARAMETER
5367
Constant value that defines the width of the output image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5367, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5367, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5367, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5367, index, object)
Type
long
Range
Unlimited but values from 1 to 2048 are preferable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5367: the value used to reference this property
index
N/A
object
N/A
AutoCoarsenessSelectionEnabled
VPARAMETER
5421
When AutoCoarsenessSelectionEnabled is True, the values of SearchCoarseness and
PositioningCoarseness are automatically determined by the Pattern Locator process when the pattern is
learned.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5421, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5421, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5421, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5421, index, object)
Type
long
Range
Value   Description
1       The Coarseness levels are automatically determined and set by the tool.
0       The Coarseness levels (search and positioning) are set by the user.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5421: the value used to reference this property
index
N/A
object
N/A
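Example
A sketch in the style of the other examples; the sequence and tool indices (1 and 2) are placeholders, assuming a Pattern Locator is the second tool of the first sequence:

```
; Let the Pattern Locator determine the Coarseness levels
; AutoCoarsenessSelectionEnabled = 5421
VPARAMETER(1,2,5421) = 1
```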
AutomaticCandidateCountEnabled
VPARAMETER
5301
When AutomaticCandidateCountEnabled is True the number of candidate measurement points is
automatically determined according to the dimension of the tool's region of interest. When
AutomaticCandidateCountEnabled is False, the number of candidate measurement points is set
manually through the CandidatePointsCount property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5301, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5301, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5301, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5301, index, object)
Type
long
Range
Value   Description
1       The number of candidate measurement points is set automatically.
0       The number of candidate measurement points is set manually.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5301: the value used to reference this property
index
N/A
object
N/A
AverageContrast
VRESULT
1801
Average contrast between light and dark pixels on either side of the found entity (point, line, or arc),
expressed in greylevel values. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1801, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1801, index, frame)
Type
double
Range
Minimum: Greater than 0.
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1801: the value used to reference this property
index
N/A
frame
Frame that contains the entity for which you want the result.
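Example
A sketch in the style of the other examples; the sequence, tool, and instance indices are placeholders, assuming a finder tool is the second tool of the first sequence:

```
; Get the average contrast of the first found entity
; AverageContrast = 1801
contrast = VRESULT(1, 2, 1, 1801)
```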
BeltCalibrationDownstreamLimit
VLOCATION
10002
The downstream limit of the belt, defined during the Belt Calibration. Expressed as a transform. Read
only.
Syntax
V+ VLOCATION ($ip, sequence, tool, instance, 10002, index , frame)
MicroV+ Not applicable. Conveyor-tracking supported only in V+ on CX controller.
Type
Location
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
N/A
instance
Index of the instance for which you want the transform. 1-based.
location
10002. The value used to reference this property.
index
Reserved for internal use. Value is always '1'.
frame
Index of the frame that contains the specified instance.
BeltCalibrationFrame
VLOCATION
10000
The belt frame of reference, defined during the Belt Calibration. Expressed as a transform. Read only.
Syntax
V+ VLOCATION ($ip, sequence, tool, instance, 10000, index , frame)
MicroV+ Not applicable. Conveyor-tracking supported only in V+ on CX controller
Type
Location
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence
Index of the vision sequence. 1-based.
tool
Index of the tool in the sequence. 1-based.
instance
Index of the instance for which you want the transform. 1-based.
location
10000. The value used to reference this property.
index
Reserved for internal use. Value is always '1'.
frame
Index of the frame that contains the specified instance.
BeltCalibrationScale
VPARAMETER
10004
The scale factor between encoder counts and millimeters, defined during the Belt Calibration. This is
the number of millimeters that the belt advances for each encoder count. Read only.
Syntax
V+ VPARAMETER (sequence, tool, 10004, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 10004, index, object)
MicroV+ Not applicable. Conveyor-tracking supported only in V+ on CX controller
Type
Double
Parameters
sequence
Index of the vision sequence. First sequence is '1'.
tool
N/A
parameter
10004. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server. Applies to V+ syntax only.
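Example
A sketch modeled on the VPARAMETER example earlier in this reference; cameraIndex and robotIndex are placeholders for the camera and robot indices:

```
; Get the belt scale (millimeters per encoder count)
; BeltCalibrationScale = 10004
value = VPARAMETER ($ip, -1, cameraIndex, 10004, robotIndex)
```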
BeltCalibrationUpstreamLimit
VLOCATION
10001
The upstream limit of the belt, defined during the Belt Calibration. Expressed as a transform. Read
only.
Syntax
MicroV+ Not applicable. Conveyor-tracking supported only in V+ on CX controller
V+ VLOCATION (sequence, tool, instance, 10001, index , frame)
Type
Location
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
N/A.
instance
Index of the instance for which you want the transform. 1-based.
location
10001. The value used to reference this property.
index
Reserved for internal use. Value is always '1'.
frame
Index of the frame that contains the specified instance.
BilinearInterpolationEnabled
VPARAMETER
120
Specifies if bilinear interpolation is used to sample the input image. By default, bilinear interpolation is
enabled because it ensures subpixel accuracy.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 120, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 120, index, object)
V+ VPARAMETER (sequence_index, tool_index, 120, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 120, index, object)
Remarks
Bilinear interpolation is crucial for obtaining accurate results with inspection tools. When a tool is
positioned in frame-based mode, the tool region of interest is rarely aligned with the pixel grid,
resulting in jagged object edges. Bilinear interpolation smooths out this jaggedness within the
sampled image by attributing to each pixel a value interpolated from the values of neighboring
pixels, providing a more true-to-life representation of contours, as illustrated in Figure 1.
Uninterpolated sampling may provide a small increase in speed but provides less accurate results.
[Figure 1: Effect of Bilinear Interpolation - bilinear interpolation enabled vs. disabled]
Type
Boolean
Range
Value   Description
1       Bilinear interpolation is enabled. Recommended default setting.
0       Bilinear interpolation is disabled.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
120: the value used to reference this property
index
N/A
object
N/A
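Example
A sketch in the style of the other examples; the sequence and tool indices (1 and 2) are placeholders, assuming an inspection tool is the second tool of the first sequence:

```
; Disable bilinear interpolation to trade accuracy for speed
; BilinearInterpolationEnabled = 120
VPARAMETER(1,2,120) = 0
```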
BlobArea
VRESULT
1611
Area of the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1611, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1611, index, frame)
Type
double
Range
Minimum: MinimumBlobArea
Maximum: MaximumBlobArea
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1611: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
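Example
A sketch in the style of the other examples; the sequence, tool, and blob indices are placeholders, assuming a Blob Analyzer is the second tool of the first sequence:

```
; Get the area of the first blob
; BlobArea = 1611
area = VRESULT(1, 2, 1, 1611)
```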
BlobBoundingBoxBottom
VRESULT
1648
The bottommost coordinate of the bounding box aligned with respect to the X-axis of the Tool
coordinate system. This value is returned with respect to the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1648, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1648, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1648: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxCenterX
VRESULT
1624
X-coordinate of the center of the bounding box aligned with the Tool coordinate system. This value is
returned with respect to the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1624, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1624, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1624: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxCenterY
VRESULT
1625
Y-coordinate of the center of the bounding box aligned with the Tool coordinate system. This value is
returned with respect to the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1625, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1625, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1625: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxHeight
VRESULT
1626
Height of the bounding box with respect to the Y-axis of the Tool coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1626, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1626, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1626: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxLeft
VRESULT
1645
The leftmost coordinate of the bounding box aligned with respect to the X-axis of the Tool coordinate
system. This value is returned with respect to the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1645, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1645, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1645: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxRight
VRESULT
1646
The rightmost coordinate of the bounding box aligned with respect to the X-axis of the Tool coordinate
system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1646, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1646, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1646: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxRotation
VRESULT
1649
Rotation of the bounding box with respect to the X-axis of the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1649, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1649, index, frame)
Type
double
Range
Minimum: 0
Maximum: 360
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1649: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxTop
VRESULT
1647
The topmost coordinate of the bounding box aligned with respect to the X-axis of the Tool coordinate
system. This value is returned with respect to the selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1647, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1647, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1647: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobBoundingBoxWidth
VRESULT
1627
Width of the bounding box with respect to the X-axis of the Tool coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1627, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1627, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1627: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
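As an illustration, the bounding-box results above can be related to raw pixel coordinates with a short sketch. The axis orientation and the exact width/height convention are assumptions about the tool, and the blob pixels are hypothetical:

```python
# Sketch only: relates the bounding-box results to raw pixel coordinates.
# The axis orientation, and whether widths include a +1 pixel term, are
# assumptions about the tool's conventions; the blob pixels are hypothetical.
def bounding_box(pixels):
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    return {
        "left": min(xs), "right": max(xs),
        "bottom": min(ys), "top": max(ys),
        "width": max(xs) - min(xs),   # coordinate span along X
        "height": max(ys) - min(ys),  # coordinate span along Y
    }

print(bounding_box([(1, 2), (4, 2), (2, 5)]))
```

In AdeptSight itself these values are read back through VRESULT with the IDs listed in each entry, not computed by the application.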
BlobChainCode
VRESULT
1656
Direction of a given boundary element associated with the chain code. This direction can only be
expressed with respect to the Tool coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1656, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1656, index, frame)
Type
long
Range
Value
Name
Description
0
hsDirectionRight
Right direction
1
hsDirectionTop
Top direction
2
hsDirectionLeft
Left direction
3
hsDirectionBottom
Bottom direction
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1656: the value used to reference this property
index
Index of the selected boundary element.
frame
Frame containing the blob for which you want the result.
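Together with BlobChainCodeStartX/Y and BlobChainCodeLength, the direction codes in the table above describe the blob boundary. A sketch of decoding such a chain back into boundary coordinates (the chain data here is hypothetical; real values come from VRESULT):

```python
# Direction codes from the table: 0 = right, 1 = top, 2 = left, 3 = bottom.
# The (dx, dy) steps assume Y increases toward the top, per the Tool frame.
DELTAS = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def trace_boundary(start_x, start_y, chain):
    """Walk the chain code from the start pixel, returning boundary points."""
    points = [(start_x, start_y)]
    x, y = start_x, start_y
    for code in chain:
        dx, dy = DELTAS[code]
        x, y = x + dx, y + dy
        points.append((x, y))
    return points

# The smallest closed chain (a single pixel) is right, top, left, bottom,
# consistent with BlobChainCodeLength being at least 4.
print(trace_boundary(0, 0, [0, 1, 2, 3]))
```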
BlobChainCodeDeltaX
VRESULT
1659
Horizontal length of a boundary element associated with the chain code. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1659, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1659, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1659: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobChainCodeDeltaY
VRESULT
1660
Vertical length of a boundary element associated with the chain code. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1660, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1660, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1660: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobChainCodeLength
VRESULT
1655
Number of boundary elements in the chain code of the blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1655, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1655, index, frame)
Type
long
Range
Minimum: Greater than 4
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1655: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobChainCodeStartX
VRESULT
1657
X position of the first pixel associated with the chain code, expressed in the Tool coordinate system.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1657, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1657, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1657: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobChainCodeStartY
VRESULT
1658
Y position of the first pixel associated with the chain code, expressed in the Tool coordinate system.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1658, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1658, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1658: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobConvexPerimeter
VRESULT
1614
Convex perimeter of the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1614, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1614, index, frame)
Type
double
Range
Minimum: Greater than 0.0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1614: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobCount
VRESULT
1610
Number of blobs detected by the tool. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1610, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1610, index, frame)
Type
long
Range
Minimum: 0
Maximum: 65534
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1610: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobElongation
VRESULT
1616
Ratio of the moment of inertia about the blob’s minor axis (BlobInertiaMaximum) to the moment of
inertia about the blob’s major axis (BlobInertiaMinimum). Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1616, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1616, index, frame)
Remarks
No units.
Type
double
Range
Minimum: 1.0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1616: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
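The ratio above can be sketched from raw pixel coordinates by diagonalizing the 2x2 inertia tensor. Treating each pixel as a unit point mass is an assumption about AdeptSight's discretization:

```python
import math

# Sketch only: BlobElongation as BlobInertiaMaximum / BlobInertiaMinimum,
# computed from raw pixel coordinates. Treating each pixel as a unit point
# mass is an assumption about AdeptSight's discretization.
def principal_inertias(pixels):
    """Return (minimum, maximum) moments of inertia about the principal axes."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    ixx = sum((y - cy) ** 2 for _, y in pixels)   # inertia about the X-axis
    iyy = sum((x - cx) ** 2 for x, _ in pixels)   # inertia about the Y-axis
    ixy = sum((x - cx) * (y - cy) for x, y in pixels)
    # Eigenvalues of the 2x2 inertia tensor are mean +/- deviation.
    mean = (ixx + iyy) / 2
    dev = math.hypot((ixx - iyy) / 2, ixy)
    return mean - dev, mean + dev

def elongation(pixels):
    i_min, i_max = principal_inertias(pixels)
    return i_max / i_min

# A 4x2 pixel rectangle is elongated; a 2x2 square is not.
rect = [(x, y) for x in range(4) for y in range(2)]
square = [(x, y) for x in range(2) for y in range(2)]
print(elongation(rect), elongation(square))
```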
BlobExtentBottom
VRESULT
1653
Distance along the Y-axis between the blob’s center of mass and the bottom side of the bounding box.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1653, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1653, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1653: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobExtentLeft
VRESULT
1650
Distance along the X-axis between the blob’s center of mass and the left side of the bounding box.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1650, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1650, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1650: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobExtentRight
VRESULT
1651
Distance along the X-axis between the blob’s center of mass and the right side of the bounding box.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1651, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1651, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1651: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobExtentTop
VRESULT
1652
Distance along the Y-axis between the blob’s center of mass and the top side of the bounding box. Read
only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1652, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1652, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1652: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
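The four extent results above decompose the bounding box around the blob's center of mass. A sketch from raw pixel coordinates (the Y-up orientation is an assumption about the Tool frame):

```python
# Sketch only: the extent results as distances between the blob's center of
# mass and the sides of the axis-aligned bounding box, from raw pixel
# coordinates. The Y-up orientation is an assumption about the Tool frame.
def blob_extents(pixels):
    xs = [x for x, _ in pixels]
    ys = [y for _, y in pixels]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return {
        "left": cx - min(xs),    # BlobExtentLeft
        "right": max(xs) - cx,   # BlobExtentRight
        "top": max(ys) - cy,     # BlobExtentTop
        "bottom": cy - min(ys),  # BlobExtentBottom
    }

# For a 4x2 pixel rectangle the centroid sits at (1.5, 0.5).
rect = [(x, y) for x in range(4) for y in range(2)]
print(blob_extents(rect))
```

Note that left + right spans the bounding-box width and top + bottom its height.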
BlobGreyLevelMaximum
VRESULT
1622
Highest greylevel value of the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1622, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1622, index, frame)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1622: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobGreyLevelMean
VRESULT
1618
Mean greylevel value in the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1618, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1618, index, frame)
Type
double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1618: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobGreyLevelMinimum
VRESULT
1621
Lowest greylevel value in the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1621, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1621, index, frame)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1621: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobGreyLevelRange
VRESULT
1619
Range of the greylevel values in the selected blob. The range is calculated as [BlobGreyLevelMaximum
- BlobGreyLevelMinimum + 1]. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1619, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1619, index, frame)
Type
long
Range
Minimum: 1
Maximum: 256
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1619: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobGreyLevelStdDev
VRESULT
1620
Standard deviation of the greylevel values in the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1620, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1620, index, frame)
Type
double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1620: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
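The greylevel results above are plain statistics over the blob's pixel values. A sketch follows; the sample values are hypothetical, and the population form of the standard deviation is an assumption:

```python
import statistics

# Sketch only: the greylevel results as plain statistics over a blob's pixel
# values. The sample values are hypothetical, and the population form of the
# standard deviation (statistics.pstdev) is an assumption.
def grey_level_stats(values):
    lo, hi = min(values), max(values)
    return {
        "minimum": lo,                        # BlobGreyLevelMinimum
        "maximum": hi,                        # BlobGreyLevelMaximum
        "mean": sum(values) / len(values),    # BlobGreyLevelMean
        "range": hi - lo + 1,                 # documented as max - min + 1
        "stddev": statistics.pstdev(values),  # BlobGreyLevelStdDev
    }

print(grey_level_stats([10, 12, 12, 14, 20]))
```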
BlobHoleCount
VRESULT
1654
The number of holes found in the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1654, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1654, index, frame)
Type
long
Range
Minimum: 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1654: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobInertiaMaximum
VRESULT
1633
Moment of inertia about the minor axis, which corresponds to the highest moment of inertia. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1633, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1633, index, frame)
Type
double
Range
Minimum: Greater than 0.0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1633: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobInertiaMinimum
VRESULT
1632
Moment of inertia about the major axis, which corresponds to the lowest moment of inertia. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1632, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1632, index, frame)
Type
double
Range
Minimum: Greater than 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1632: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobInertiaXAxis
VRESULT
1634
Moment of inertia about the X-axis of the Tool coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1634, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1634, index, frame)
Type
double
Range
Minimum: Greater than 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1634: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobInertiaYAxis
VRESULT
1635
Moment of inertia about the Y-axis of the Tool coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1635, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1635, index, frame)
Type
double
Range
Minimum: Greater than 0.0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1635: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxBottom
VRESULT
1639
The bottommost coordinate of the bounding box aligned with the Y-axis (minor axis) of the principal
axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1639, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1639, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1639: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxCenterX
VRESULT
1628
X-coordinate of the center of the bounding box with respect to the X-axis (major axis) of the principal
axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1628, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1628, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1628: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxCenterY
VRESULT
1629
Y-coordinate of the center of the bounding box with respect to the Y-axis (minor axis) of the principal
axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1629, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1629, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1629: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxHeight
VRESULT
1630
Height of the bounding box with respect to the Y-axis (minor axis) of the principal axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1630, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1630, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1630: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxLeft
VRESULT
1636
The leftmost coordinate of the bounding box aligned with respect to the X-axis (major axis) of the
principal axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1636, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1636, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1636: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxRight
VRESULT
1637
The rightmost coordinate of the bounding box aligned with the X-axis (major axis) of the principal axes.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1637, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1637, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1637: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxRotation
VRESULT
1640
Rotation of the intrinsic bounding box with respect to the X-axis of the selected coordinate system.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1640, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1640, index, frame)
Type
double
Range
Minimum: -180
Maximum: 180
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1640: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxTop
VRESULT
1638
The topmost coordinate of the bounding box aligned with the Y-axis (minor axis) of the principal axes.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1638, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1638, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1638: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicBoundingBoxWidth
VRESULT
1631
Width of the bounding box with respect to the X-axis (major axis) of the principal axes. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1631, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1631, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1631: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicExtentBottom
VRESULT
1644
Distance along the minor axis between the blob's center of mass and the bottom side of the intrinsic
bounding box. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1644, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1644, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1644: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicExtentLeft
VRESULT
1641
Distance along the major axis between the blob's center of mass and the left side of the intrinsic
bounding box. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1641, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1641, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1641: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicExtentRight
VRESULT
1642
Distance along the major axis between the blob's center of mass and the right side of the intrinsic
bounding box. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1642, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1642, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1642: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobIntrinsicExtentTop
VRESULT
1643
Distance along the minor axis between the blob's center of mass and the top side of the intrinsic
bounding box. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1643, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1643, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1643: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobPositionX
VRESULT
1612
X coordinate of the center of mass of a given blob in the currently selected coordinate system. Read
only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1612, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1612, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1612: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobPositionY
VRESULT
1613
Y coordinate of the center of mass of a given blob in the currently selected coordinate system. Read
only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1613, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1613, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1613: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobPrincipalAxesRotation
VRESULT
1617
Angle of the axis of smallest moment of inertia (the major axis) with respect to the X-axis of the
selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1617, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1617, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1617: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
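A common way to obtain the angle of the axis of least second moment is from the blob's central second-order moments. The sketch below uses that standard formulation in Python; it is a generic illustration, not AdeptSight's internal implementation:

```python
import math

# Angle of the principal axis (axis of smallest moment of inertia) computed
# from the central second-order moments mu20, mu02 and mu11.
def principal_axis_angle(pixels):
    pts = list(pixels)
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts)
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    mu11 = sum((x - cx) * (y - cy) for x, y in pts)
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)

# A horizontal row of pixels is elongated along the x-axis: angle 0.
print(principal_axis_angle([(0, 0), (1, 0), (2, 0)]))
```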
BlobRawPerimeter
VRESULT
1615
Raw perimeter of the selected blob. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1615, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1615, index, frame)
Type
double
Range
Minimum: Greater than 0.
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1615: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
BlobRoundness
VRESULT
1623
The degree of similarity between the blob and a circle. The roundness is 1 for a perfectly circular blob.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1623, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1623, index, frame)
Remarks
No units.
Type
double
Range
Minimum: Greater than 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
1623: the value used to reference this property
index
N/A
frame
Frame containing the blob for which you want the result.
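AdeptSight does not spell out its roundness formula on this page; a widely used definition with the same behavior (1.0 for a perfect circle, smaller for elongated or ragged shapes) is 4πA/P², sketched here purely for illustration:

```python
import math

def roundness(area, perimeter):
    # 4*pi*A / P^2: equals 1.0 for a perfect circle, < 1.0 otherwise.
    # This is the common isoperimetric definition, assumed for illustration.
    return 4.0 * math.pi * area / perimeter ** 2

# A circle of radius 3: area = pi*r^2, perimeter = 2*pi*r -> roundness 1.0.
print(roundness(math.pi * 9.0, 2.0 * math.pi * 3.0))
```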
CalibratedImageHeight
VRESULT
1703
Height of the sampled image. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1703, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1703, index, frame)
Remarks
This property is equal to ImageHeight * PixelHeight and is therefore subject to the same validity
conditions as the PixelHeight property.
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1703: the value used to reference this property
index
N/A
frame
N/A
CalibratedImageWidth
VRESULT
1702
Width of the sampled image. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1702, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1702, index, frame)
Remarks
This property is equal to ImageWidth * PixelWidth and is therefore subject to the same validity
conditions as the PixelWidth property.
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1702: the value used to reference this property
index
N/A
frame
N/A
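As the Remarks for both properties state, the calibrated dimensions are simply the pixel dimensions scaled by the calibrated pixel size. A one-line Python illustration (parameter names are illustrative):

```python
def calibrated_size(image_width_px, image_height_px, pixel_width_mm, pixel_height_mm):
    # CalibratedImageWidth  = ImageWidth  * PixelWidth
    # CalibratedImageHeight = ImageHeight * PixelHeight
    return image_width_px * pixel_width_mm, image_height_px * pixel_height_mm

# A 640 x 480 image with 0.05 mm pixels is 32 mm x 24 mm.
print(calibrated_size(640, 480, 0.05, 0.05))
```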
CalibratedUnitsEnabled
VPARAMETER
103
When CalibratedUnitsEnabled is set to True, the dimensions of the tool are expressed in millimeters.
Otherwise tool dimensions are expressed in pixels.
Syntax
MicroV+ VPARAMETER (sequence, tool, 103, index, object) = value
value =VPARAMETER (sequence, tool, 103, index, object)
V+ VPARAMETER (sequence, tool, 103, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 103, index, object)
Type
Boolean
Range
Value
Description
1
Dimensions are expressed in millimeters. (Default)
0
Dimensions are expressed in pixel units.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
103: the value used to reference this property
index
N/A
object
N/A
CandidatePointsCount
VPARAMETER
5300
Sets the number of candidate locations where the tool tries to evaluate the sharpness. When the tool is
executed, it scans the region of interest and identifies a number of candidate locations (equal to
CandidatePointsCount) where the local standard deviation is the highest. The local sharpness is then
evaluated at each of the candidate location that has a local standard deviation above
StandardDeviationThreshold. The number of locations where the sharpness is actually measured is
returned by the MeasurementPointsCount property. When the AutomaticCandidateCountEnabled
property is True, the number of candidate measurement points is determined automatically according
to the size of the region of interest and CandidatePointsCount.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5300, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5300, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5300, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5300, index, object)
Type
long
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5300: the value used to reference this property
index
N/A
object
N/A
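The selection process described above can be sketched as follows; the windowing scheme and data layout are assumptions for illustration, not the tool's actual implementation:

```python
import statistics

def candidate_points(windows, candidate_points_count, std_threshold):
    """windows: dict mapping a candidate location (x, y) to its local pixel
    greylevels. Rank locations by local standard deviation, keep the top
    candidate_points_count, then drop those at or below std_threshold."""
    ranked = sorted(windows.items(),
                    key=lambda kv: statistics.pstdev(kv[1]),
                    reverse=True)
    top = ranked[:candidate_points_count]
    return [pos for pos, pix in top if statistics.pstdev(pix) > std_threshold]

wins = {(0, 0): [10, 10, 10], (1, 0): [0, 128, 255], (2, 0): [50, 60, 70]}
print(candidate_points(wins, 2, 5.0))  # keeps the two highest-variation windows
```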
ChainCodeResultsEnabled
VPARAMETER
1607
Enables the computation of the blob chain code properties: BlobChainCode, BlobChainCodeDeltaX,
BlobChainCodeDeltaY, BlobChainCodeLength, BlobChainCodeStartX and BlobChainCodeStartY.
Figure 2 Illustration of Chain Code Results (chain code Start point and Delta X / Delta Y steps along the tool X-axis and Y-axis)
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1607, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 1607, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1607, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1607, index, object)
Type
Boolean
Range
Value
Description
1
Chain Code Results are output by the tool.
0
Chain Code Results are not output.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1607: the value used to reference this property
index
N/A
object
N/A
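To illustrate how chain-code results describe a blob contour, the sketch below decodes a Freeman-style 8-direction chain code starting from a start point. The direction numbering here is an assumption for illustration and is not taken from AdeptSight documentation:

```python
# Assumed 8-direction step table (Freeman chain code style): 0 = +x,
# proceeding counter-clockwise. AdeptSight's actual numbering may differ.
STEPS = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
         4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def decode_chain(start, codes):
    """Walk the contour from the start point, one step per chain-code element."""
    x, y = start
    path = [(x, y)]
    for c in codes:
        dx, dy = STEPS[c]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

print(decode_chain((0, 0), [0, 2, 4, 6]))  # walks around a unit square
```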
ClearOutputBlobImageEnabled
VPARAMETER
31
Specifies if the image output by the tool will be cleared in the next execution of the tool.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 31, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 31, index, object)
V+ VPARAMETER (sequence_index, tool_index, 31, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 31, index, object)
Type
Boolean
Range
Value
Description
0
The output image of the blob will not be cleared.
1
The output image of the blob will be cleared.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
31: the value used to reference this property
index
N/A
object
N/A
ColorFilterBestMatchIndex
VRESULT
2500
The index number of the filter having the best match quality. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2500, filter_index,
frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2500, filter_index,
frame)
Type
long
Range
Minimum: 0
Maximum: none
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
2500: the value used to reference this property
filter_index
Index of the filter for which you want the result. First Filter is 0.
frame
Frame for which you want the results.
ColorFilterCount
VPARAMETER
5700
ColorFilterCount indicates the number of filters that are defined for the Color Matching tool. Read only.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5700, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5700, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5700, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5700, index, object)
Remarks
ColorFilterCount reports the number of filters that are defined in the tool, and that appear in the
Filters list in the interface. This value is not affected by the number of filter results in an image.
Type
long
Range
Minimum: 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5700: the value used to reference this property
index
N/A.
object
N/A
ColorFilterEnabled
VPARAMETER
5701
Specifies if the selected filter is enabled, which means that the Color Matching tool will apply this filter
to process the image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5701, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5701, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5701, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5701, filter_index, object)
Type
Boolean
Range
Value
State
Description
1
TRUE
The selected color filter is enabled and will be applied by the Color Matching tool to process the image.
0
FALSE
The selected color filter is disabled and will not be applied by the Color Matching tool.
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5701: the value used to reference this property
filter_index
Index of the filter to enable/disable. First Filter is 0.
object
N/A
ColorFilterMatchPixelCount
VRESULT
2502
Number of pixels that match the conditions set by the filter. This result is output for each filter, starting
at Filter 0.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2502, filter_index,
frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2502, filter_index,
frame)
Type
long
Range
Minimum: 0
Maximum: ImagePixelCount
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
2502: the value used to reference this property
filter_index
Index of the filter for which you want the result. First Filter is 0.
frame
Frame for which you want the results.
ColorFilterMatchQuality
VRESULT
2501
ColorFilterMatchQuality is the percentage of pixels matched to the specified filter. This value is equal to
the number of matched pixels (Filter (n) Match Pixel Count), divided by the total number of pixels in
the region of interest (Image Pixel Count).
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2501, filter_index,
frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2501, filter_index,
frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the blob for which you want the result.
ID
2501: the value used to reference this property
filter_index
Index of the filter for which you want the result. First Filter is 0.
frame
Frame for which you want the results.
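The relationship stated above (matched pixels divided by total pixels in the region of interest) is straightforward to express; the function name is illustrative:

```python
def match_quality(match_pixel_count, image_pixel_count):
    # ColorFilterMatchQuality = MatchPixelCount / ImagePixelCount
    return match_pixel_count / image_pixel_count

print(match_quality(1200, 4800))  # -> 0.25
```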
ConformityTolerance
VPARAMETER
556
Maximum local deviation between the expected model contours of an instance and the contours
actually detected in the input image. It corresponds to the maximum distance by which a matched
contour can deviate from either side of its expected position in the model. This property can only be set
when UseDefaultConformityTolerance is set to false. Read only otherwise.
Syntax
MicroV+ VPARAMETER (sequence, tool, 556, index, object) = value
value =VPARAMETER (sequence, tool, 556, index, object)
V+ VPARAMETER (sequence, tool, 556, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 556, index, object)
Remarks
This property can be set to any positive value if ConformityToleranceRangeEnabled is set to False.
Type
Double
Range
Minimum: MinimumConformityTolerance
Maximum: MaximumConformityTolerance
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
556: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
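The tolerance test described above can be sketched as a simple distance check; this is a minimal illustration of the concept, not the Locator's actual matching code:

```python
def conforms(expected, detected, tolerance):
    """True if the detected contour point lies within ConformityTolerance
    of its expected position on the model contour (either side)."""
    dx = detected[0] - expected[0]
    dy = detected[1] - expected[1]
    return (dx * dx + dy * dy) ** 0.5 <= tolerance

print(conforms((10.0, 10.0), (10.3, 10.4), 0.6))  # deviation 0.5 -> True
```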
ConformityToleranceRangeEnabled
VPARAMETER
553
When ConformityToleranceRangeEnabled is set to True, the allowable range of values for
ConformityTolerance is set by the read-only MinimumConformityTolerance and
MaximumConformityTolerance properties. When set to False, ConformityTolerance can be set to any
positive value.
Syntax
MicroV+ VPARAMETER (sequence, tool, 553, index, object) = value
value =VPARAMETER (sequence, tool, 553, index, object)
V+ VPARAMETER (sequence, tool, 553, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 553, index, object)
Remarks
Disabling the conformity tolerance range can be useful for finding deformable objects, which
requires a high conformity tolerance value for a better match.
Type
Boolean
Range
Value
Description
0
ConformityToleranceRange is disabled.
1
ConformityToleranceRange is enabled.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
553: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
Connectivity
VPARAMETER
5120
Defines a minimum number of connected edges required to generate a point hypothesis from a
specific found edge that satisfies search constraints.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5120, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5120, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5120, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5120, index, object)
Type
Long
Range
Minimum: 1
Maximum: 20
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5120: the value used to reference this property
index
N/A
object
N/A
ConnectivityEnabled
VPARAMETER
5121
When ConnectivityEnabled is set to True, the tool uses the value of the Connectivity property to
generate a point hypothesis.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5121, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5121, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5121, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5121, index, object)
Type
Boolean
Range
Value
Description
0
Connectivity is disabled.
1
Connectivity is enabled.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5121: the value used to reference this property
index
N/A
object
N/A
Constraints
VPARAMETER
5220
Defines the edge detection constraints of an Arc Locator tool or an Edge Locator tool. Constraints can
be set for position and/or magnitude and are used to score edges.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5220, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5220, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5220, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5220, index, object)
Type
long
Range
Value
Constraint Name
Description
0
hsNone
No constraint.
1
hsPosition
Position constraint.
2
hsMagnitude
Magnitude constraint.
3
hsAllConstraints
Magnitude and position constraints.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5220: the value used to reference this property
index
N/A
object
N/A
ContrastPolarity
VPARAMETER
522
Selects the type of polarity accepted for object recognition. Contrast polarity identifies the direction of
change in greylevel values between an object and its surrounding area. Polarity is always defined with
respect to the initial polarity in the image on which the Model was created.
Figure 3 Contrast Polarity (the model image defines the "Normal" polarity; reverse polarity here is caused by a change in background color)
Syntax
MicroV+ VPARAMETER (sequence, tool, 522, index, object) = value
value =VPARAMETER (sequence, tool, 522, index, object)
V+ VPARAMETER (sequence, tool, 522, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 522, index, object)
Type
Long
Range
Value
hsContrastPolarity
Description
1
hsContrastPolarityNormal
The Locator accepts only instances having the same
polarity as that of the model and does not recognize local
changes in polarity.
2
hsContrastPolarityReverse
The Locator accepts only instances having the inverse
polarity as that of the model and does not recognize local
changes in polarity.
3
hsContrastPolarityNormalAndReverse
The Locator accepts only instances having a polarity that is either the same or the inverse of the model's polarity but does not recognize local changes in polarity.
4
hsContrastPolarityDontCare
Accepts any polarity for the object, INCLUDING local
changes in polarity.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
522: the value used to reference this property
index
N/A
object
N/A
ContrastThreshold
VPARAMETER
303
Defines the minimum contrast needed for an edge to be detected in the input image and used for arc
computation. This threshold is expressed in terms of a step in greylevel values. The property is read
only except when ContrastThresholdMode is set to hsContrastThresholdFixedValue.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 303, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 303, index, object)
V+ VPARAMETER (sequence_index, tool_index, 303, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 303, index, object)
Type
Integer
Range
Minimum: 1
Maximum: 255
Remarks
By default, the tool selects a ContrastThresholdMode based on image content to provide flexibility to
variations in image lighting conditions and contrast. Adaptive threshold modes are generally
recommended. A fixed-value contrast threshold should only be used when adaptive values do not
provide satisfactory results.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
303: the value used to reference this property
index
N/A
object
N/A
Related Properties
ContrastThresholdMode
ContrastThresholdMode
VPARAMETER
302
Selects the method used to compute the threshold used for detecting edges in the input image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 302, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 302, index, object)
V+ VPARAMETER (sequence_index, tool_index, 302, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 302, index, object)
Type
Long
Remarks
By default, the tool selects a ContrastThresholdMode based on image content to provide flexibility
to variations in image lighting conditions and contrast. Adaptive threshold modes are generally
recommended. A fixed-value contrast threshold should only be used when adaptive values do not
provide satisfactory results.
Range
The valid range for this property is as follows:
Value Contrast Threshold Mode Name
Description
0
hsContrastThresholdAdaptiveLowSensitivity
Uses a low sensitivity adaptive threshold for
detecting edges. Adaptive Low Sensitivity
reduces the amount of noisy edges but may
also cause significant edges to be undetected.
1
hsContrastThresholdAdaptiveNormalSensitivity
Uses a normal sensitivity adaptive threshold for detecting edges.
2
hsContrastThresholdAdaptiveHighSensitivity
Uses a high sensitivity adaptive threshold
for detecting edges. Adaptive High Sensitivity can help detect weak-contrast edges but
also increases the amount of noisy edges.
3
hsContrastThresholdFixedValue
Uses a fixed value threshold for detecting
edges.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
302: the value used to reference this property
index
N/A
object
N/A
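The difference between the fixed and adaptive modes can be sketched as follows; deriving the adaptive threshold from the standard deviation of gradient magnitudes, and the sensitivity scaling, are assumptions made for illustration only, not AdeptSight's algorithm:

```python
import statistics

def edge_threshold(gradient_magnitudes, mode, fixed_value=30, sensitivity=1.0):
    if mode == "fixed":
        # hsContrastThresholdFixedValue: a constant greylevel step (1..255).
        return fixed_value
    # "adaptive" (illustrative): derive the threshold from the image's own
    # contrast statistics instead of a constant, scaled by sensitivity.
    return statistics.pstdev(gradient_magnitudes) / sensitivity

grads = [2, 3, 2, 40, 45, 3, 2]
print(edge_threshold(grads, "fixed"))    # -> 30
print(edge_threshold(grads, "adaptive"))
```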
CoordinateSystem
VPARAMETER
1000
Coordinate system used to express the results.
Syntax
MicroV+ VPARAMETER (sequence, tool, 1000, index, object) = value
value =VPARAMETER (sequence, tool, 1000, index, object)
V+ VPARAMETER (sequence, tool, 1000, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 1000, index, object)
Type
Long
Range
Value
Coordinate System Name
Description
0
hsCoordinateSystemImage
Results are expressed in pixel units with respect to the Image coordinate system.
1
hsCoordinateSystemWorld
Results are expressed in millimeters with respect to the World coordinate system.
2
hsCoordinateSystemObject
Results are expressed in millimeters with respect to the Object coordinate system.
3
hsCoordinateSystemTool
Results are expressed in pixels with respect to the Tool coordinate system. This coordinate system is positioned at the center of the Search Area of the Locator.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1000: the value used to reference this property
index
N/A
object
N/A
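To make the Image vs World distinction concrete, the sketch below converts a pixel-space result to millimeters with a simple scale-and-offset model; real calibrations may also include rotation and lens-distortion terms, so treat this only as an illustration:

```python
def image_to_world(px, py, origin_mm, mm_per_pixel):
    """Map a pixel coordinate to world millimeters (illustrative calibration:
    uniform scale plus translation, no rotation or distortion)."""
    ox, oy = origin_mm
    return ox + px * mm_per_pixel, oy + py * mm_per_pixel

print(image_to_world(100, 50, (0.0, 0.0), 0.1))  # pixel (100, 50) -> mm
```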
DefaultConformityTolerance
VPARAMETER
552
Default value for ConformityTolerance computed by the Locator by analyzing the calibration, the
contour detection parameters, and the search parameters. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 552, index, object) = value
value =VPARAMETER (sequence, tool, 552, index, object)
V+ VPARAMETER (sequence, tool, 552, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 552, index, object)
Remarks
This default value is used for ConformityTolerance when UseDefaultConformityTolerance is set to
True.
Type
Double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
552: the value used to reference this property
index
N/A
object
N/A
DetailLevel
VPARAMETER
301
The coarseness of the contours at the Detail level. This property can only be set when
ParametersBasedOn is set to hsParametersCustom. Read only otherwise.
Syntax
MicroV+ VPARAMETER (sequence, tool, 301, index, object) = value
value =VPARAMETER (sequence, tool, 301, index, object)
V+ VPARAMETER (sequence, tool, 301, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 301, index, object)
Remarks
For most applications, the ParametersBasedOn property should be set to hsParametersAllModels.
Custom contour detection should only be used when the default values do not work correctly.
Type
Double
Range
Minimum: 1
Maximum: 16
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
301: the value used to reference this property
index
N/A
object
N/A
Edge1Constraints
VPARAMETER
5221
Defines the detection constraints for the first edge of the selected pair. Constraints can be set for
position and/or magnitude and are used to score edges.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5221, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5221, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5221, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5221, index, object)
Type
Long
Range
Value
Constraint Name
Description
0
hsNone
No constraint
1
hsPosition
Position constraint
2
hsMagnitude
Magnitude constraint
3
hsAllConstraints
Magnitude and position constraints.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5221: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1]
object
N/A
Edge1Magnitude
VRESULT
1940
Magnitude of the first edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1940, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1940, index, frame)
Type
double
Range
Minimum: -255
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1940: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1MagnitudeConstraint
VPARAMETER
5227
Indexed property used to set the magnitude constraint function. Two points are used: Base and Top.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5227, pair_index,
constraint_index) = value
value =VPARAMETER (sequence_index, tool_index, 5227, pair_index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5227, pair_index, constraint_index) $ip =
value
value = VPARAMETER ($ip, sequence_index, tool_index, 5227, pair_index, constraint_index)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5227: the value used to reference this property
pair_index
Index of the edge pair. Range [1, PairCount -1].
constraint_index
One of the two points of the magnitude constraint function
(hsMagnitudeConstraintIndex)
1: Base point
2: Top point
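The documentation defines the constraint by its Base and Top points but not the exact scoring curve; a plausible reading, used here only for illustration, is a ramp that scores 0.0 at or below Base, 1.0 at or above Top, and interpolates linearly in between:

```python
# Assumed interpretation of the Base/Top magnitude constraint function:
# a linear ramp between the two points (not taken from AdeptSight docs).
def magnitude_score(magnitude, base, top):
    if magnitude <= base:
        return 0.0
    if magnitude >= top:
        return 1.0
    return (magnitude - base) / (top - base)

print(magnitude_score(50, 0, 100))  # -> 0.5
```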
Edge1MagnitudeScore
VRESULT
1942
Magnitude score of the first edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1942, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1942, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1942: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1PolarityMode
VPARAMETER
5211
Selection criterion of the first edge of the selected pair. The grey-scale transition of the edge must
respect the polarity set by this property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5211, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5211, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5211, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5211, index, object)
Type
long
Range
Value
Polarity Mode Name
Description
0
hsDarkToLight
The greylevel value must go from dark to light when crossing an
edge.
1
hsLightToDark
The greylevel value must go from light to dark when crossing an
edge.
2
hsEitherPolarity
The change in greylevel value is not a criterion for locating an
edge.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5211: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
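The polarity criterion amounts to checking the sign of the greylevel step across the edge. A minimal sketch (the mode names are reused from the table above; sampling a single greylevel on each side of the edge is an illustrative simplification):

```python
def polarity_ok(grey_before, grey_after, mode):
    """Check whether the greylevel transition across an edge satisfies
    the selected polarity mode."""
    if mode == "hsEitherPolarity":
        return grey_before != grey_after  # any transition direction accepted
    if mode == "hsDarkToLight":
        return grey_after > grey_before
    if mode == "hsLightToDark":
        return grey_after < grey_before
    raise ValueError(mode)

print(polarity_ok(20, 200, "hsDarkToLight"))  # -> True
```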
Edge1PositionConstraint
VPARAMETER
5224
Indexed property used to set the position constraint function of the first edge of the selected pair. Four
points are used: Base Left, Top Left, Top Right, Base Right.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5224, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5224, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5224, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5224, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5224: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
Edge1PositionScore
VRESULT
1944
Position score of the first edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1944, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1944, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1944: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
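As a sketch of reading this result in Micro V+, assuming hypothetical indices (sequence 1, tool 1, instance 1, edge pair 1, frame 1):

```
; Read the position score of the first edge of pair 1
score = VRESULT (1, 1, 1, 1944, 1, 1)
TYPE "Edge1 position score: ", score
```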
Edge1PositionX
VRESULT
1946
X coordinate of the center of the first edge of the selected pair in the currently selected coordinate
system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1946, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1946, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1946: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1PositionY
VRESULT
1947
Y coordinate of the center of the first edge of the selected pair in the currently selected coordinate
system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1947, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1947, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1947: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1Radius
VRESULT
1954
Radius of the first edge of the selected pair. ToolPositionX and ToolPositionY are at the center of the circular
arc described by the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1954, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1954, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1954: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1Rotation
VRESULT
1950
Rotation of the first edge of the selected pair in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1950, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1950, index, frame)
Remarks
The rotation is defined as the angle between the X-axis of the active coordinate system (specified by
the CoordinateSystem property) and the selected edge.
Type
double
Range
Minimum: -180
Maximum: 180
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1950: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1Score
VRESULT
1952
Score of the first edge of the selected pair. The score is computed according
to the constraints set by the Edge1Constraints property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1952, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1952, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1952: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge1ScoreThreshold
VPARAMETER
5241
Minimum score to accept an edge as the first edge of the selected pair. The score of the first edge is
returned by the Edge1Score property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5241, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5241, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5241, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5241, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5241: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
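A minimal Micro V+ sketch for this threshold, assuming sequence 1, tool 1, and edge pair 1 (hypothetical indices; the last argument is assumed unused and passed as 0):

```
; Accept a first edge for pair 1 only if its score is at least 0.75
VPARAMETER (1, 1, 5241, 1, 0) = 0.75
```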
Edge2Constraints
VPARAMETER
5222
Defines the detection constraints for the second edge of the selected pair. Constraints can be set for
position and/or magnitude and are used to score edges.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5222, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5222, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5222, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5222, index, object)
Type
long
Range
Value
Name
Description
0
hsNone
No constraint
1
hsPosition
Position constraint
2
hsMagnitude
Magnitude constraint
3
hsAllConstraints
Magnitude and position constraints.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5222: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
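A minimal Micro V+ sketch, assuming sequence 1, tool 1, and edge pair 1 (hypothetical indices; object passed as 0):

```
; Score the second edge of pair 1 on both magnitude and position
VPARAMETER (1, 1, 5222, 1, 0) = 3      ; 3 = hsAllConstraints
```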
Edge2Magnitude
VRESULT
1941
Magnitude of the second edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1941, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1941, index, frame)
Type
double
Range
Minimum: -255
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1941: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2MagnitudeConstraint
VPARAMETER
5228
Indexed property used to set the magnitude constraint function of the second edge of the selected pair.
Two points are used: Base and Top.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5228, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5228, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5228, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5228, index, object)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5228: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
Edge2MagnitudeScore
VRESULT
1943
Magnitude score of the second edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1943, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1943, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1943: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2PolarityMode
VPARAMETER
5212
Selection criterion of the second edge of the selected pair. The grey-scale transition of the edge must
respect the polarity set by this property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5212, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5212, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5212, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5212, index, object)
Type
long
Range
Value
Name
Description
0
hsDarkToLight
The greylevel value must go from dark to light when crossing an edge.
1
hsLightToDark
The greylevel value must go from light to dark when crossing an edge.
2
hsEitherPolarity
The change in greylevel value is not a criterion for locating an edge.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5212: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
Edge2PositionConstraint
VPARAMETER
5225
Indexed property used to set the position constraint function of the second edge of the selected pair.
Four points are used: Base Left, Top Left, Top Right, Base Right.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5225, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5225, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5225, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5225, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5225: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
Edge2PositionScore
VRESULT
1945
Position score of the second edge of the selected pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1945, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1945, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1945: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2PositionX
VRESULT
1948
X coordinate of the center of the second edge of the selected pair in the currently selected coordinate
system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1948, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1948, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1948: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2PositionY
VRESULT
1949
Y coordinate of the center of the second edge of the selected pair in the currently selected coordinate
system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1949, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1949, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1949: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2Radius
VRESULT
1955
Radius of the second edge of the selected pair. ToolPositionX and ToolPositionY are at the center of the
circular arc described by the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1955, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1955, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1955: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2Rotation
VRESULT
1951
Rotation of the second edge of the selected pair in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1951, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1951, index, frame)
Remarks
The rotation is defined as the angle between the X-axis of the active coordinate system (specified by
the CoordinateSystem property) and the selected edge.
Type
double
Range
Minimum: -180
Maximum: 180
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1951: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2Score
VRESULT
1953
Score of the second edge of the selected pair. The score is computed
according to the constraints set by the Edge2Constraints property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1953, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1953, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1953: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
frame
Index of the frame containing the edge pair.
Edge2ScoreThreshold
VPARAMETER
5242
Minimum score to accept an edge as the second edge of the selected pair. The score of the second edge
is returned by the Edge2Score property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5242, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5242, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5242, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5242, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5242: the value used to reference this property
index
Index of the edge pair. Range [1, PairCount -1].
object
N/A
EdgeCount
VRESULT
1900
Number of edges detected by the tool. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1900, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1900, index, frame)
Type
long
Range
Minimum: 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1900: the value used to reference this property
index
N/A
frame
N/A
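A Micro V+ sketch that combines this result with the EdgePositionX (1904) and EdgePositionY (1905) results, assuming sequence 1, tool 1, instance 1, and frame 1 (hypothetical indices; unused arguments passed as 0):

```
; Count the edges found by the tool, then read the center of each one
count = VRESULT (1, 1, 1, 1900, 0, 0)
FOR i = 1 TO count
    x = VRESULT (1, 1, 1, 1904, i, 1)
    y = VRESULT (1, 1, 1, 1905, i, 1)
    TYPE "Edge ", i, ": (", x, ", ", y, ")"
END
```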
EdgeFilterHalfWidth
VPARAMETER
5203
Half-width of the convolution filter used to compute the edge magnitude curve from which actual edges
are detected. The filter approximates the first derivative of the projection curve. The half width of the
filter should be set in order to match the width of the edge in the projection curve (the extent of the
grey-scale transition, expressed in number of pixels).
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5203, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5203, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5203, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5203, index, object)
Type
long
Range
Minimum: 1
Maximum: 25
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5203: the value used to reference this property
index
N/A
object
N/A
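A minimal Micro V+ sketch, assuming sequence 1, tool 1, and a grey-scale transition roughly 3 pixels wide in the projection curve (hypothetical indices; unused arguments passed as 0):

```
; Match the filter half-width to the width of the edge transition
VPARAMETER (1, 1, 5203, 0, 0) = 3
```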
EdgeMagnitude
VRESULT
1901
Magnitude of the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1901, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1901, index, frame)
Type
long
Range
Minimum: -255
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1901: the value used to reference this property
index
Index of the edge.
frame
Index of the frame containing the edge.
EdgeMagnitudeScore
VRESULT
1902
Magnitude score of the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1902, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1902, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1902: the value used to reference this property
index
Index of the edge.
frame
Index of the frame containing the edge.
EdgeMagnitudeThreshold
VPARAMETER
5201
The magnitude threshold is used to find edges on the magnitude curve. A subpixel peak detection
algorithm is applied to the region of every minimum or maximum of the curve that exceeds this
threshold, in order to locate edges.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5201, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5201, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5201, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5201, index, object)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5201: the value used to reference this property
index
N/A
object
N/A
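A minimal Micro V+ sketch, assuming sequence 1, tool 1, and an illustrative threshold of 40 grey levels (hypothetical values; unused arguments passed as 0):

```
; Ignore magnitude peaks weaker than 40 grey levels
VPARAMETER (1, 1, 5201, 0, 0) = 40
```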
EdgePolarityMode
VPARAMETER
5210
Edge selection criterion. The grey-scale transition of the edge must respect the polarity set by this
property.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5210, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5210, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5210, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5210, index, object)
Type
long
Range
Value
Name
Description
0
hsDarkToLight
The greylevel value must go from dark to light when crossing an edge.
1
hsLightToDark
The greylevel value must go from light to dark when crossing an edge.
2
hsEitherPolarity
The change in greylevel value is not a criterion for locating an edge.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5210: the value used to reference this property
index
N/A
object
N/A
EdgePositionScore
VRESULT
1903
Position score of the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1903, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1903, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1903: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgePositionX
VRESULT
1904
X coordinate of the center of the selected edge in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1904, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1904, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1904: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgePositionY
VRESULT
1905
Y coordinate of the center of the selected edge in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1905, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1905, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1905: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgeRadius
VRESULT
1908
Radius of the selected edge, ToolPositionX and ToolPositionY being the center of the circular arc
described by the selected edge. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1908, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1908, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1908: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgeRotation
VRESULT
1906
Rotation of the selected edge with respect to the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1906, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1906, index, frame)
Type
double
Range
Minimum: -180
Maximum: 180
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1906: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgeScore
VRESULT
1907
Score of the selected edge. The score is computed according to the constraints set by the Constraints
property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1907, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1907, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1907: the value used to reference this property
index
Index of the edge for which you want the results.
frame
Index of the frame that contains the selected edge.
EdgeSortResultsEnabled
VPARAMETER
5243
Specifies whether edges are sorted in descending order of score values.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5243, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5243, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5243, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5243, index, object)
Type
Boolean
Range
Value
Description
1
The edges are sorted in descending order of score values.
0
The edges are not sorted.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5243: the value used to reference this property
index
N/A
object
N/A
ExtrinsicBoxResultsEnabled
VPARAMETER
1606
Enables the computation of bounding box and extent properties: BlobBoundingBoxBottom,
BlobBoundingBoxCenterX, BlobBoundingBoxCenterY, BlobBoundingBoxHeight, BlobBoundingBoxLeft,
BlobBoundingBoxRight, BlobBoundingBoxRotation, BlobBoundingBoxTop, BlobBoundingBoxWidth,
BlobExtentBottom, BlobExtentLeft, BlobExtentRight and BlobExtentTop.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1606, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 1606, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1606, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1606, index, object)
Type
Boolean
Range
Value
Description
1
The extrinsic bounding box properties will be computed
0
No extrinsic bounding box properties will be computed
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1606: the value used to reference this property
index
N/A
object
N/A
ExtrinsicInertiaResultsEnabled
VPARAMETER
1604
Enables the computation of the following blob properties: BlobInertiaXAxis, BlobInertiaYAxis and
BlobPrincipalAxesRotation.
Figure 4: Illustration of Extrinsic Inertia Results (shows the major axis, the minor axis, the rotation of
the principal axes, the center of mass, and the selected coordinate system).
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1604, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 1604, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1604, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1604, index, object)
Type
Boolean
Range
Value
Description
1
The extrinsic inertia properties will be computed
0
No extrinsic inertia properties will be computed
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1604: the value used to reference this property
index
N/A
object
N/A
ElapsedTime
VRESULT
1001
Total time elapsed (in milliseconds) during the last execution of the Locator tool. This time includes the
time for the learn process, the time for the search process and the overhead required to create and
output the results structures. Read only.
Syntax
Micro V+ VRESULT (sequence, tool, instance, 1001, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1001, index, frame)
Remarks
This property gives the total elapsed time, not the CPU time used.
Type
Double
Range
Minimum: 0.0
Maximum: unlimited
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
ID
1001: the value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
FilterBlueValue
VPARAMETER
5712
Value of the Blue component, in the RGB colorspace, for the selected filter. This value may be modified
if any changes are made to the HSL values of the filter.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5712, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5712, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5712, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5712, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
RGB values are defined by properties: FilterRedValue, FilterGreenValue, and FilterBlueValue.
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5712: the value used to reference this property
filter_index
Index of the filter to which the value applies. The first filter is '0'.
object
N/A
FilterCount
VPARAMETER
5601
Number of filters applied by tool. Read only.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5601, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5601, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5601, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5601, index, object)
Type
Long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence_index
Index of the vision sequence. The first sequence is '1'.
tool_index
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
5601. The parameter value used to reference this property.
index
N/A
object
N/A
FilterGreenValue
VPARAMETER
5711
Value of the Green component, in the RGB colorspace, for the selected filter. This value may be
modified if any changes are made to the HSL values of the filter.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5711, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5711, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5711, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5711, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
RGB values are defined by properties: FilterRedValue, FilterGreenValue, and FilterBlueValue.
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5711: the value used to reference this property
filter_index
Index of the filter to which the value applies. The first filter is '0'.
object
N/A
FilterHalfWidth
VPARAMETER
5202
Half-width of the convolution filter used by the tool to compute the edge magnitude curve from which
edges are detected. Set this value approximately equal to the width of the edge, in pixels, as it
appears in the image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5202, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5202, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5202, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5202, index, object)
Type
Long
Range
Minimum: 1
Maximum: 25
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5202: the value used to reference this property
index
N/A
object
N/A
FilterHueTolerance
VPARAMETER
5716
Value of the tolerance allowed for the Hue value defined by FilterHueValue, for the selected filter.
The FilterHueTolerance value is distributed equally above and below the FilterHueValue.
For example, if FilterHueValue = 200 and FilterHueTolerance = 20, the filter will accept pixels
with hue values in the range [190, 210].
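Because the tolerance is split equally above and below the center value, the accepted range for any of the HSL tolerances can be computed as in this Python sketch (for illustration):

```python
def accepted_range(center, tolerance):
    # The tolerance is distributed equally above and below the center
    # value, so half of it extends in each direction.
    half = tolerance / 2.0
    return (center - half, center + half)
```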
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5716, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5716, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5716, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5716, filter_index, object)
Remarks
When FilterHueTolerance = 1, no tolerance (variation) in hue is accepted. The filter will
only accept pixels with a hue value equal to FilterHueValue.
Type
long
Range
Minimum: 1
Maximum: 128
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5716: the value used to reference this property
filter_index
Index of the filter to which the value applies. The first filter is '0'.
object
N/A
FilterHueValue
VPARAMETER
5713
Value of the Hue component, in the HSL colorspace, for the selected filter. This value may be modified
if any changes are made to the RGB values of the filter.
Hue is the quality of color that is perceived as the color itself and is commonly expressed by the color
name, for example: red, green, yellow. Hue is determined by the perceived dominant wavelength, or
the central tendency of combined wavelengths, within the visible spectrum.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5713, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5713, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5713, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5713, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
HSL values are defined by properties: FilterHueValue, FilterLuminanceValue, and
FilterSaturationValue.
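Since a filter can be configured in either colorspace, the RGB/HSL correspondence can be sketched with Python's standard colorsys module. The 0-255 scaling below matches the documented ranges, but whether AdeptSight uses exactly this conversion is an assumption.

```python
import colorsys

def rgb_to_hsl_bytes(r, g, b):
    # Convert 0-255 RGB components to 0-255 Hue, Saturation, and
    # Luminance components (illustrative scaling).
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return round(h * 255), round(s * 255), round(l * 255)
```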
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5713: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FilteringClippingMode
VPARAMETER
5370
FilteringClippingMode sets the clipping mode applied by a filtering operation. Typically, the
hsClippingAbsolute mode is used for filter operations.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5370, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5370, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5370, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5370, index, object)
Remarks
hsClippingNormal mode forces the destination pixel value into the valid range for the image
type: 0 to 255 for unsigned 8-bit images, -32768 to 32767 for signed 16-bit images, and so on.
Values that are less than the specified minimum value are set to the minimum value. Values
greater than the specified maximum value are set to the maximum value.
hsClippingAbsolute mode takes the absolute value of the result and clips it using the same
algorithm as for the hsClippingNormal mode.
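The two clipping modes can be sketched as follows (Python, for illustration; the unsigned 8-bit range is shown):

```python
def clip(value, lo=0, hi=255, absolute=False):
    # hsClippingAbsolute first takes the absolute value of the result,
    # then clips with the same algorithm as hsClippingNormal.
    if absolute:
        value = abs(value)
    # hsClippingNormal: force the result into the valid range for the
    # image type (0..255 here, for unsigned 8-bit images).
    return max(lo, min(hi, value))
```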
Range
  0   hsClippingNormal     Normal clipping method is used.
  1   hsClippingAbsolute   Absolute clipping method is used.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5370: the value used to reference this property
index
N/A
object
N/A
FilteringCustomKernelAnchorX
VPARAMETER
5373
The horizontal position of the kernel anchor for a custom filtering operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5373, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5373, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5373, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5373, index, object)
Type
Long
Range
[1,2,3,4,5,6,7,8,9]
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5373: the value used to reference this property
index
N/A
object
N/A
FilteringCustomKernelAnchorY
VPARAMETER
5374
The vertical position of the kernel anchor for a custom filtering operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5374, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5374, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5374, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5374, index, object)
Type
Long
Range
[1,2,3,4,5,6,7,8,9]
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5374: the value used to reference this property
index
N/A
object
N/A
FilteringCustomKernelHeight
VPARAMETER
5375
Height of the kernel applied by a custom filtering operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5375, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5375, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5375, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5375, index, object)
Type
Long
Range
[1,2,3,4,5,6,7,8,9]
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5375: the value used to reference this property
index
N/A
object
N/A
FilteringCustomKernelValue
VPARAMETER
5377
Sets/gets the value at the specified location in the matrix that defines a custom kernel.
Range
  Column   Integer, 1 to 9   Column of the custom filtering kernel.
  Line     Integer, 1 to 9   Line of the custom filtering kernel.
FilteringCustomKernelWidth
VPARAMETER
5376
Width of the kernel applied by a custom filtering operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5376, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5376, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5376, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5376, index, object)
Type
Long
Range
[1,2,3,4,5,6,7,8,9]
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5376: the value used to reference this property
index
N/A
object
N/A
FilteringKernelSize
VPARAMETER
5371
Kernel size applied by a fixed (predefined) filtering operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5371, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5371, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5371, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5371, index, object)
Remarks
The kernel size applied by a custom filter is defined by FilteringCustomKernelValue.
Type
Long
Range
Valid sizes are 3, 5, and 7.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5371: the value used to reference this property
index
N/A
object
N/A
FilteringScale
VPARAMETER
5372
Scaling factor applied by a filtering operation. After the operation has been applied, the value of each
pixel is multiplied by the FilteringScale value.
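How the custom-kernel parameters (kernel values, anchor, FilteringScale, clipping mode) combine for a single output pixel can be sketched as follows. This is an illustrative Python sketch, not AdeptSight's internal implementation.

```python
def filter_pixel(image, x, y, kernel, anchor_x, anchor_y, scale=1.0, absolute=True):
    # Applies a custom kernel at pixel (x, y). anchor_x/anchor_y give the
    # 1-based kernel cell aligned with the output pixel, as in
    # FilteringCustomKernelAnchorX/Y.
    acc = 0.0
    for ky, row in enumerate(kernel, start=1):
        for kx, coeff in enumerate(row, start=1):
            acc += coeff * image[y + ky - anchor_y][x + kx - anchor_x]
    acc *= scale                       # FilteringScale multiplies each result
    if absolute:                       # hsClippingAbsolute behavior
        acc = abs(acc)
    return max(0, min(255, int(acc)))  # clip to the unsigned 8-bit range
```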
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5372, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5372, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5372, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5372, index, object)
Type
Double
Range
Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5372: the value used to reference this property
index
N/A
object
N/A
FilterLuminanceTolerance
VPARAMETER
5718
Value of the tolerance allowed for the Luminance value defined by FilterLuminanceValue, for the
selected filter. The FilterLuminanceTolerance value is distributed equally above and below the
FilterLuminanceValue.
For example, if FilterLuminanceValue = 200 and FilterLuminanceTolerance = 20, the filter will
accept pixels with luminance values in the range [190, 210].
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5718, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5718, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5718, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5718, filter_index, object)
Remarks
When FilterLuminanceTolerance = 1, no tolerance (variation) in luminance is accepted. The filter
will only accept pixels with a luminance value equal to FilterLuminanceValue.
Type
long
Range
Minimum: 1
Maximum: 128
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5718: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FilterLuminanceValue
VPARAMETER
5715
Value of the Luminance component, in the HSL colorspace, for the selected filter. This value may be
modified if any changes are made to the RGB values of the filter.
Luminance is perceived as the brightness of the color, or the amount of white contained in the color.
When FilterLuminanceValue = 0, the color is completely black (RGB= 0,0,0). When
FilterLuminanceValue = 255, the color is almost completely white.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5715, filter_index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5715, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5715, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5715, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
HSL values are defined by properties: FilterHueValue, FilterLuminanceValue, and
FilterSaturationValue.
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5715: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FilterRedValue
VPARAMETER
5710
Value of the red component, in the RGB colorspace, for the selected filter. This value may be modified
if any changes are made to the HSL values of the filter.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5710, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5710, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5710, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5710, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
RGB values are defined by properties: FilterRedValue, FilterGreenValue, and FilterBlueValue.
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5710: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FilterResult
VRESULT
2301
Result of a specified filter, for a specified output instance. The Results Tool can contain any number of
filters. The global result of all the filters, after application of the global Operator, is returned by the
Result property.
Remarks
By default, the Result and FilterResult properties return results for output frames that receive a
Pass result. This behavior can be modified in the tool interface, through the OutputFrames and
OutputResults (advanced) parameters.
Range
  1   Pass
  0   Fail
Related Topics
Result
IntermediateFilterResult
IntermediateResult
IntermediateResultCount
FilterSaturationTolerance
VPARAMETER
5717
Value of the tolerance allowed for the Saturation value defined by FilterSaturationValue, for the
selected filter. The FilterSaturationTolerance value is distributed equally above and below the
FilterSaturationValue.
For example, if FilterSaturationValue = 200 and FilterSaturationTolerance = 20, the filter will
accept pixels with saturation values in the range [190, 210].
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5717, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5717, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5717, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5717, filter_index, object)
Remarks
When FilterSaturationTolerance = 1, no tolerance (variation) in saturation is accepted. The filter will
only accept pixels with a saturation value equal to FilterSaturationValue.
Type
long
Range
Minimum: 1
Maximum: 128
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5717: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FilterSaturationValue
VPARAMETER
5714
Value of the Saturation component, in the HSL colorspace, for the selected filter. This value may be
modified if any changes are made to the RGB values of the filter.
Saturation is perceived as the purity of the color, or the amount of grey in the color. When
FilterSaturationValue = 0, the color appears as middle grey (RGB = 126,126,126).
When FilterSaturationValue = 255, the color is said to be saturated.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5714, filter_index, object) =
value
value =VPARAMETER (sequence_index, tool_index, 5714, filter_index, object)
V+ VPARAMETER (sequence_index, tool_index, 5714, filter_index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5714, filter_index, object)
Remarks
The value of a filter can be configured either by its HSL values or its RGB values. The Tolerance in a
color filter can only be expressed in HSL values.
HSL values are defined by properties: FilterHueValue, FilterLuminanceValue, and
FilterSaturationValue.
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5714: the value used to reference this property
filter_index
Index of the filter to which this value applies.
object
N/A
FitMode
VPARAMETER
5140
Specifies the mode used by the tool to calculate and return values for the found arc.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5140, index, object) = value
value =VPARAMETER (sequence, tool, 5140, index, object)
V+ VPARAMETER (sequence, tool, 5140, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5140, index, object)
Type
Long
Range
  0   hsBoth     The Arc Finder calculates and returns both the arc center and the arc radius.
  1   hsRadius   The arc radius is calculated; the arc center returned is the value of the tool's center.
  2   hsCenter   The arc center is calculated; the radius returned is the value of the tool's radius.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5140: the value used to reference this property
index
N/A
object
N/A
FitQuality
VRESULT
1803
Normalized average error between the calculated arc or line entity and the actual edges matched to the
found entity. Fit quality ranges from 0 to 1, with 1 being the best quality. A value of 1 means that the
average error is 0. Conversely, a value of 0 means that the average matched error is equal to
Conformity Tolerance. Read only.
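One consistent reading of this normalization, as a Python sketch (illustrative, not AdeptSight's internal code):

```python
def fit_quality(average_error, conformity_tolerance):
    # Returns 1.0 when the average matched error is 0, and 0.0 when the
    # average error equals the conformity tolerance.
    return 1.0 - average_error / conformity_tolerance
```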
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1803, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1803, index, frame)
Type
Double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1803. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
Found
VRESULT
1800
Found specifies if an entity was found. If True, then at least one entity (point, line or arc) was found in
the current image.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1800, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1800, index, frame)
Type
Long
Range
  0   False   No entity was found.
  1   True    An entity was found.
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1800: the value used to reference this property
index
N/A
frame
N/A
FrameCount
VRESULT
2410
Number of frames output by the tool. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 2410, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 2410, index, frame)
Type
Long
Range
Greater than or equal to 0.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
2410. The value used to reference this property.
index
N/A
frame
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
FrameIntrinsicBoundingBox
VRESULT
2420
Returns the coordinates of the intrinsic bounding box that defines a frame. The intrinsic bounding
box is the smallest box that can enclose the frame.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 2420, bounding_index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 2420, bounding_index, frame)
Type
Double
Range
Boundaries of the input image.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
2420. The value used to reference this property.
bounding_index
1 to 8: Index of the XY coordinates that define corners of the intrinsic
bounding box:
1: X coordinate of the corner
2: Y coordinate of the corner
3: X coordinate of the corner
4: Y coordinate of the corner
5: X coordinate of the corner
6: Y coordinate of the corner
7: X coordinate of the corner
8: Y coordinate of the corner
frame
Index of frame.
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
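Reading the eight returned values back as four corner points can be sketched in Python; treating them as consecutive X,Y pairs is an assumption about their pairing:

```python
def bounding_box_corners(values):
    # values holds the eight coordinates returned for bounding_index 1..8,
    # taken here as X,Y pairs for the four corners.
    return [(values[i], values[i + 1]) for i in range(0, 8, 2)]
```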
FrameMode
VPARAMETER
5650
Specifies the mode applied to the positioning of the selected frame. When the RelativeToImage
mode is enabled, the frame is positioned relative to the input image. When the RelativeToFrame
mode is enabled, the location of the frame is relative to a frame provided by a specified tool.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5650, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5650, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5650, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5650, index, object)
Remarks
The location of the selected frame is defined by FrameRotation, FrameTranslationX, and
FrameTranslationY.
Range
  0   RelativeToFrame   The frame location is defined relative to a frame provided by another tool.
  1   RelativeToImage   The frame location is defined relative to the input image origin.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5650: the value used to reference this property
index
The index of the frame for which you want to set the mode.
object
N/A
FrameRotation
VRESULT
2402
The rotation of the specified output frame.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2402, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2402, index, frame)
Type
Double
Range
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
2402: the value used to reference this property
index
The index of the output frame for which you want the result.
frame
N/A
FrameTranslationX
VRESULT
2400
The X coordinate of the origin of the specified output frame. If the camera is calibrated, units are
expressed in millimeters. Otherwise they are expressed in pixels.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2400, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2400, index, frame)
Type
Double
Range
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
2400: the value used to reference this property
index
The index of the output frame for which you want the result.
frame
N/A
FrameTranslationY
VRESULT
2401
The Y coordinate of the origin of the specified output frame. If the camera is calibrated, units are
expressed in millimeters. Otherwise they are expressed in pixels.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2401, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2401, index, frame)
Type
Double
Range
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
2401: the value used to reference this property
index
The index of the output frame for which you want the result.
frame
N/A
GreylevelRange
VRESULT
1508
Range of greylevel values of the pixels in the tool's region of interest that are included in the final
histogram. Pixels removed from the histogram by tails or thresholds are not included in this calculation.
The range is equal to MaximumGreylevelValue - MinimumGreylevelValue + 1. Read only.
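The stated formula, as a Python sketch over the pixels remaining in the final histogram:

```python
def greylevel_range(pixels):
    # Range = MaximumGreylevelValue - MinimumGreylevelValue + 1.
    return max(pixels) - min(pixels) + 1
```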
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1508, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1508, index, frame)
Type
long
Range
Minimum: 0
Maximum: 256
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1508: the value used to reference this property
index
N/A
frame
N/A
GreyLevelResultsEnabled
VPARAMETER
1608
Enables the computation of the following blob greylevel properties: BlobGreyLevelMaximum,
BlobGreyLevelMean, BlobGreyLevelMinimum, BlobGreyLevelRange, and BlobGreyLevelStdDev.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1608, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 1608, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1608, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1608, index, object)
Type
Boolean
Range
  1   The greylevel blob properties are computed.
  0   The greylevel blob properties are not computed.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1608: the value used to reference this property
index
N/A
object
N/A
GripperOffset
VLOCATION
10100
The transform that defines the selected gripper offset.
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 10100, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 10100, index, frame)
Type
Location
Remarks
To calculate the GripperOffset index, add 1 to the Global Offset ID that appears in the Global
Gripper Offset Editor. This is necessary because this value is 0-based in AdeptSight and 1-based
in V+/MicroV+.
For example, to reference the second gripper offset that appears in Figure 5, the index number is
2 (i.e. GripperOffsetID + 1).
Figure 5 Global Gripper Offset Editor
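The remark above is a simple +1 conversion. As an illustrative sketch (Python, not V+ code; the function name is hypothetical):

```python
def gripper_offset_vlocation_index(global_offset_id):
    """Convert a 0-based AdeptSight Global Offset ID to the 1-based
    index expected by the V+/MicroV+ VLOCATION keyword."""
    if global_offset_id < 0:
        raise ValueError("Global Offset ID must be >= 0")
    return global_offset_id + 1

# The second gripper offset in the editor has Global Offset ID 1,
# so the VLOCATION index is 2.
```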
Parameters
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in sequence is '1'.
parameter
10100. The value used to reference this property.
index
See remarks.
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
GripperOffsetIndex
VPARAMETER
5510
The index of the gripper offset assigned to an instance output to the controller by the Communication
Tool.
Type
Long
Remarks
To calculate the GripperOffset index, add 1 to the Global Offset ID that appears in the Global Gripper
Offset Editor. This is necessary because this value is 0-based in AdeptSight and 1-based in V+/
MicroV+.
For example, to reference the second gripper offset that appears in Figure 6, the index number is 2
(i.e. GripperOffsetID + 1).
Figure 6 Global Gripper Offset Editor
Histogram
VRESULT
1511
Histogram of the greylevel values of the pixels in the tool's region of interest that are included in the
final histogram. Pixels removed from the histogram by tails or thresholds are not included in this
calculation. The histogram comprises 256 bins, one for each of the 256 possible greylevel values; each
bin contains the number of pixels in the region of interest with the corresponding greylevel value.
Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1511, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1511, index, frame)
Type
long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1511: the value used to reference this property
index
N/A
frame
N/A
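The binning rule described above can be sketched in Python (hypothetical pixel values; this is an illustration, not AdeptSight code):

```python
def grey_histogram(pixels):
    """Build a 256-bin histogram: bin i holds the number of pixels
    whose greylevel value is exactly i (0-255)."""
    bins = [0] * 256
    for p in pixels:
        bins[p] += 1
    return bins

roi = [0, 10, 10, 255, 128, 10]   # hypothetical region-of-interest pixels
hist = grey_histogram(roi)
# hist[10] counts the three pixels with greylevel 10,
# and the bins sum to the number of pixels in the region.
```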
HistogramPixelCount
VRESULT
1512
Total number of pixels in the histogram. The number of pixels in the histogram is equal to
ImagePixelCount minus the pixels excluded from the Histogram by any threshold or tail functions, set
by the ThresholdBlack, ThresholdWhite, TailWhite, or TailBlack properties. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1512, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1512, index, frame)
Type
long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1512: the value used to reference this property
index
N/A
frame
N/A
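The relationship between ImagePixelCount and HistogramPixelCount can be sketched as follows (a Python illustration with hypothetical threshold values; whether the threshold values themselves are kept or excluded is an assumption here, not confirmed AdeptSight behavior):

```python
def histogram_pixel_count(pixels, threshold_black, threshold_white):
    """Count the pixels that survive thresholding. In this sketch,
    greylevels at or below threshold_black or at or above
    threshold_white are excluded from the histogram."""
    return sum(1 for p in pixels if threshold_black < p < threshold_white)

roi = [0, 5, 100, 128, 200, 250, 255]     # hypothetical pixels
image_pixel_count = len(roi)              # 7 pixels in the ROI
kept = histogram_pixel_count(roi, 5, 250) # excludes 0, 5, 250, 255
```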
HistogramThreshold
VPARAMETER
5385
Threshold value applied by a histogram thresholding operation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5385, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5385, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5385, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5385, index, object)
Type
Long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5385: the value used to reference this property
index
N/A
object
Index of the frame containing the edge pair.
HoleFillingEnabled
VPARAMETER
5002
Enables the filling of the holes in each blob.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5002, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5002, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5002, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5002, index, object)
Type
Boolean
Range
Value
Description
1
All holes will be filled.
0
No hole will be filled.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5002: the value used to reference this property
index
N/A
object
N/A
ImageBottomLeftX
VRESULT
1704
X coordinate of the bottom left corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1704, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1704, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1704: the value used to reference this property
index
N/A
frame
N/A
ImageBottomLeftY
VRESULT
1705
Y coordinate of the bottom left corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1705, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1705, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1705: the value used to reference this property
index
N/A
frame
N/A
ImageBottomRightX
VRESULT
1706
X coordinate of the bottom right corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1706, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1706, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1706: the value used to reference this property
index
N/A
frame
N/A
ImageBottomRightY
VRESULT
1707
Y coordinate of the bottom right corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1707, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1707, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1707: the value used to reference this property
index
N/A
frame
N/A
ImageHeight
VRESULT
1021
Height of the tool's region of interest expressed in pixels. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1021, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1021, index, frame)
Type
long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1021: the value used to reference this property
index
N/A
frame
N/A
ImageOriginBelt
VLOCATION
10053
Origin of the Image frame of reference, expressed as a transform relative to the Belt frame of
reference. Read only.
[Figure: the field of view contains the Vision frame of reference (origin VisionOrigin, units in mm) and
the Image frame of reference (origin ImageOrigin at pixel (0,0), units in pixels), shown relative to the
Belt frame of reference.]
Figure 7 Illustration of ImageOrigin and VisionOrigin Properties
Syntax
V+ VLOCATION ($ip, sequence, tool, instance, 10053, index, frame)
MicroV+ Not applicable. Conveyor tracking is supported only in V+.
Type
Location
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in sequence is '1'.
instance
Index of the instance for which you want the transform. 1-based.
location
10053. The value used to reference this property.
index
Reserved for internal use. Value is always '1'.
frame
Index of the frame that contains the specified instance.
Related Properties
ImageOriginRobot
VisionOriginRobot
VisionOriginBelt
ImageOriginRobot
VLOCATION
10051
Origin of the Image frame of reference. Expressed as a transform relative to the Robot frame of
reference. Read only.
[Figure: the field of view contains the Vision frame of reference (origin VisionOrigin, units in mm) and
the Image frame of reference (origin ImageOrigin at pixel (0,0), units in pixels), shown relative to the
Robot frame of reference.]
Figure 8 Illustration of ImageOrigin and VisionOrigin Properties
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 10051, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 10051, index, frame)
Type
Location
Parameters
$ip
IP address of the vision server. Applies to V+ syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in sequence is '1'.
instance
Index of the instance for which you want the transform. 1-based.
location
10051. The value used to reference this property.
index
Reserved for internal use. Value is always '1'.
frame
Index of the frame that contains the specified instance.
Related Properties
ImageOriginBelt
VisionOriginRobot
VisionOriginBelt
ImagePixelCount
VRESULT
1513
Number of pixels in the tool's region of interest. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1513, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1513, index, frame)
Type
long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1513: the value used to reference this property
index
N/A
frame
N/A
ImageSubsampling
VPARAMETER
5324
Factor used to subsample the grey-scale image in the tool's region of interest. With a subsampling
factor of 1, the grey-scale image is not subsampled. With a subsampling factor of 2, the grey-scale
image is subsampled in tiles of 2x2 pixels. With a subsampling factor of 3 the grey-scale image is
subsampled in tiles of 3x3 pixels and so forth.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5324, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5324, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5324, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5324, index, object)
Remarks
Color Matching Tool
Increasing the subsampling level reduces the number of pixels and the quantity of information
analyzed by the tool. Increasing the image subsampling may reduce the execution time but affects
the accuracy of color matching results.
Image Histogram
Using a higher subsampling factor speeds up the generation of the histogram but slightly reduces
the accuracy of the statistics computed from the histogram. The pixel properties computed by the
Image Histogram tool are normalized with respect to the subsampling factor (HistogramPixelCount,
ImageHeight, ImagePixelCount and ImageWidth). So for instance, the total number of pixels in the
histogram should remain the same at any subsampling factor. Note that there might be slight
differences in the values of these properties when either of the width or the height of the region of
interest is not a multiple of the subsampling factor used.
Type
Long
Range
1 (no subsampling), 2, 3, 4, 5, 6, 7, 8
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5324: the value used to reference this property
index
N/A
object
N/A
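The normalization remark above can be illustrated with a rough Python sketch. The tiling model below (one sample per factor x factor tile, scaled back by factor squared) is an assumption chosen for illustration, not the actual AdeptSight algorithm:

```python
def subsampled_sample_count(width, height, factor):
    """Number of tile samples taken when subsampling the region of
    interest in factor x factor tiles (partial tiles discarded)."""
    return (width // factor) * (height // factor)

def normalized_pixel_count(width, height, factor):
    """Scale the sample count back so values are comparable across
    subsampling factors, as the normalization remark describes."""
    return subsampled_sample_count(width, height, factor) * factor * factor

# When width and height are multiples of the factor, the normalized
# count equals the full pixel count; otherwise it differs slightly,
# matching the note about non-multiple region dimensions.
```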
ImageTopLeftX
VRESULT
1708
X coordinate of the top left corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1708, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1708, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1708: the value used to reference this property
index
N/A
frame
N/A
ImageTopLeftY
VRESULT
1709
Y coordinate of the top left corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1709, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1709, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1709: the value used to reference this property
index
N/A
frame
N/A
ImageTopRightX
VRESULT
1710
X coordinate of the top right corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1710, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1710, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1710: the value used to reference this property
index
N/A
frame
N/A
ImageTopRightY
VRESULT
1711
Y coordinate of the top right corner of the sampled image expressed with respect to the coordinate
system specified by the CoordinateSystem property. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1711, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1711, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1711: the value used to reference this property
index
N/A
frame
N/A
ImageWidth
VRESULT
1020
Width of the tool's region of interest expressed in pixels. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1020, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1020, index, frame)
Type
long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1020: the value used to reference this property
index
N/A
frame
N/A
InstanceClearQuality
VRESULT
1319
Measure of the unencumbered area surrounding the specified object instance. Clear quality ranges from
0 to 1, with 1 being the best quality. A value of 1 means that the instance is completely free of
obstacles. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1319, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1319, index, frame)
Type
Double
Remarks
In MicroV+/V+, the frame parameter is required. Its value is the index of the frame that contains the
specified instance. The range of the frame parameter is [1, ResultCount -1].
Range
Minimum: 0.0
Maximum: 1.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1319. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceCount
VRESULT
1310
Number of object instances found by the Locator tool. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1310, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1310, index, frame)
Remarks
The frame parameter is the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Greater than or equal to 0.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1310. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceFitQuality
VRESULT
1317
Normalized average error between the matched model contours of the selected object instance and the
actual contours detected in the input image. Fit quality ranges from 0 to 1, with 1 being the best
quality. A value of 1 means that the average error is 0. Conversely, a value of 0 means that the
average matched error is equal to the conformity tolerance. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1317, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1317, index, frame)
Remarks
The frame parameter is the index of the frame that contains the specified instance. Range: [1, ResultCount -1]
Type
Double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1317. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
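The stated endpoints (quality 1 at zero average error, 0 when the error equals the conformity tolerance) suggest the following relationship. This Python sketch assumes linear interpolation between the two endpoints, which the text does not explicitly confirm:

```python
def fit_quality(average_error, conformity_tolerance):
    """1.0 when the average matched error is 0; 0.0 when it equals
    the conformity tolerance. Linearity in between is an assumption."""
    q = 1.0 - average_error / conformity_tolerance
    return max(0.0, min(1.0, q))

# With a conformity tolerance of 2.0 (hypothetical), an average
# error of 0.0 gives quality 1.0 and an error of 2.0 gives 0.0.
```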
InstanceIntrinsicBoundingBox
VRESULT
1330
Returns the coordinates of the intrinsic bounding box of an instance. The intrinsic bounding box
is the smallest box that can enclose the instance.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1330, bounding_index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1330, bounding_index, frame)
Type
Double
Range
Boundaries of the input image.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1330. The value used to reference this property.
bounding_index
1 to 8: Index of the XY coordinates that define the four corners of the
intrinsic bounding box:
1: X coordinate of the first corner
2: Y coordinate of the first corner
3: X coordinate of the second corner
4: Y coordinate of the second corner
5: X coordinate of the third corner
6: Y coordinate of the third corner
7: X coordinate of the fourth corner
8: Y coordinate of the fourth corner
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
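Pairing the eight bounding_index values into corner coordinates can be sketched as follows (a Python illustration with hypothetical values; this is not AdeptSight code):

```python
def corners_from_vresults(values):
    """Pair the eight VRESULT values (bounding_index 1..8) into four
    (x, y) corner coordinates of the intrinsic bounding box."""
    if len(values) != 8:
        raise ValueError("expected 8 coordinate values")
    return [(values[i], values[i + 1]) for i in range(0, 8, 2)]

# Hypothetical values read for bounding_index 1 through 8:
box = corners_from_vresults([0.0, 0.0, 50.0, 0.0, 50.0, 20.0, 0.0, 20.0])
# box holds the four corners as (x, y) pairs.
```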
InstanceLocation
VLOCATION
1311
InstanceLocation returns the location of the selected instance, in the frame of reference of the
specified robot. If a gripper offset has been assigned to the instance, it is automatically applied to the
location. If no robot-to-vision calibration has been carried out, InstanceLocation returns the location
in the Vision frame of reference.
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 1311, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 1311, index, frame)
Remarks
If there is a single gripper offset, InstanceLocation (1311) is the same as
InstanceLocationGripperOffsetMinimum (1400). If multiple gripper offsets can be applied to the
instance, use ID 1400 for the location with the first gripper offset, 1401 for the location with the
second gripper offset, and so forth for additional gripper offsets.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in the sequence is '1'.
instance
Index of the instance for which the location is required.
parameter
1311. The value used to reference this property.
index
Index of the robot.
frame
Index of the frame in which the instance is found. Typically this is '0' (i.e. the
Locator is not frame-based).
Related Properties
InstanceRobotLocation
InstanceLocationGripperOffsetMinimum
InstanceLocationGripperOffsetMaximum
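The Remarks above amount to a simple ID calculation, sketched here in Python for illustration (the helper name is hypothetical; in V+/MicroV+ these IDs are passed to VLOCATION, not computed by a function):

```python
def gripper_offset_location_id(offset_number):
    """VLOCATION ID for the location computed with the Nth gripper
    offset (1-based): 1400 for the first, 1401 for the second, ..."""
    if offset_number < 1:
        raise ValueError("offset_number is 1-based")
    return 1400 + (offset_number - 1)

# First gripper offset -> 1400 (InstanceLocationGripperOffsetMinimum);
# second gripper offset -> 1401, and so forth.
```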
InstanceLocationGripperOffsetMaximum
VLOCATION
1499
InstanceLocationGripperOffsetMaximum is the maximum number of gripper offsets.
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 1499, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 1499, index, frame)
Type
Location
Remarks
If there is a single gripper offset, InstanceLocation (1311) is the same as
InstanceLocationGripperOffsetMinimum (1400). If multiple gripper offsets can be applied to the
instance, use ID 1400 for the location with the first gripper offset, 1401 for the location with the
second gripper offset, and so forth for additional gripper offsets.
Range
Minimum: Greater than or equal to InstanceLocationGripperOffsetMinimum
Maximum: 100
Parameters
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in sequence is '1'.
parameter
1499. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
Related Properties
InstanceLocation
InstanceLocationGripperOffsetMinimum
InstanceLocationGripperOffsetMinimum
VLOCATION
1400
InstanceLocationGripperOffsetMinimum is the minimum number of gripper offsets.
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 1400, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 1400, index, frame)
Type
Location
Remarks
If there is a single gripper offset, InstanceLocation (1311) is the same as
InstanceLocationGripperOffsetMinimum (1400). If multiple gripper offsets can be applied to the
instance, use ID 1400 for the location with the first gripper offset, 1401 for the location with the
second gripper offset, and so forth for additional gripper offsets.
Range
Minimum: Greater than or equal to 0
Maximum: Greater than or equal to InstanceLocationGripperOffsetMaximum
Parameters
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in the sequence is '1'.
parameter
1400. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
Related Properties
InstanceLocation
InstanceLocationGripperOffsetMaximum
InstanceMatchQuality
VRESULT
1318
Amount of matched model contours for the selected object instance. Match quality ranges from 0 to 1,
with 1 being the best quality. A value of 1 means that 100% of the model contours were successfully
matched to the actual contours detected in the input image. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1318, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1318, index, frame)
Remarks
The frame parameter is the index of the frame that contains the specified instance. Range: [1, ResultCount -1]
Type
Double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1318. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceModel
VRESULT
1312
Index of the model associated to the selected object instance. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1312, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1312, index, frame)
Remarks
The frame parameter is the index of the frame that contains the specified instance. Range: [1, ResultCount -1]
Type
Double
Range
Minimum: 0
Maximum: Number of Models - 1
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1312. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceOrdering
VPARAMETER
530
Order in which the instances are processed and output.
Syntax
MicroV+ VPARAMETER (sequence, tool, 530, index, object) = value
value =VPARAMETER (sequence, tool, 530, index, object)
V+ VPARAMETER (sequence, tool, 530, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 530, index, object)
Remarks
With the hsDistanceImage and hsDistanceWorld modes, the reference coordinate used to compute
the distance is set with the InstanceOrderingReferenceX and InstanceOrderingReferenceY
properties.
Type
Double
Range
Value
Mode Name
Description
1
hsEvidence
Instances are processed and output according to their
hypothesis strength, beginning with the strongest
hypothesis.
2
hsLeftToRight
Instances are processed and output in the order they
appear in the search area, from left to right.
3
hsRightToLeft
Instances are processed and output in the order they
appear in the search area, from right to left.
4
hsTopToBottom
Instances are processed and output in the order they
appear in the search area, from top to bottom.
5
hsBottomToTop
Instances are processed and output in the order they
appear in the search area, from bottom to top.
6
hsQuality
All the instances are first processed and then they are
output according to their Quality, beginning with the highest quality.
7
hsDistanceImage
Instances are processed and output according to their
distance from a reference image coordinate, beginning
with the closest.
8
hsDistanceWorld
Instances are processed and output according to their
distance from a reference world coordinate, beginning
with the closest.
9
hsShadingConsistency
Instances are processed and output according to their
shading consistency with respect to the Model, beginning with the strongest hypothesis.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
530. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
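The two distance-ordering modes (hsDistanceImage and hsDistanceWorld) can be sketched in Python (hypothetical instance coordinates; this is an illustration, not AdeptSight code):

```python
import math

def order_by_distance(instances, ref_x, ref_y):
    """Sort (x, y) instances by Euclidean distance from the reference
    coordinate, closest first, as in the hsDistanceImage and
    hsDistanceWorld ordering modes."""
    return sorted(instances,
                  key=lambda p: math.hypot(p[0] - ref_x, p[1] - ref_y))

# Hypothetical located instances, ordered from a reference at (0, 0):
found = [(10.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
nearest_first = order_by_distance(found, 0.0, 0.0)
# The instance at (1.0, 1.0) is output first.
```

In AdeptSight the reference coordinate itself is set with the InstanceOrderingReferenceX and InstanceOrderingReferenceY properties, as the Remarks above note.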
InstanceOrderingReferenceX
VPARAMETER
531
Reference X coordinate used to compute the distance when either of the hsDistanceImage or
hsDistanceWorld ordering modes are enabled using the InstanceOrdering property.
Syntax
MicroV+ VPARAMETER (sequence, tool, 531, index, object) = value
value =VPARAMETER (sequence, tool, 531, index, object)
V+ VPARAMETER (sequence, tool, 531, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 531, index, object)
Type
Double
Range
Not applicable.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
531. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceOrderingReferenceY
VPARAMETER
532
Reference Y coordinate used to compute the distance when either of the hsDistanceImage or
hsDistanceWorld ordering modes are enabled using the InstanceOrdering property.
Syntax
MicroV+ VPARAMETER (sequence, tool, 532, index, object) = value
value =VPARAMETER (sequence, tool, 532, index, object)
V+ VPARAMETER (sequence, tool, 532, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 532, index, object)
Type
Double
Range
Not applicable.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
532. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceReferencePointCount
VRESULT
1340
Number of reference points of the selected object instance. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1340, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1340, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Greater than or equal to zero.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1340. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceReferencePointPositionX
VRESULT
1341
X coordinate of the selected reference point of the selected object instance, with respect to the
coordinate system set by the CoordinateSystem property. Expressed in calibrated units when the
CalibratedUnitsEnabled property is set to True. Expressed in pixels otherwise. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1341, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1341, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1341. The value used to reference this property.
index
Index of the reference point.
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceReferencePointPositionY
VRESULT
1342
Y coordinate of the selected reference point of the selected object instance, with respect to the
coordinate system set by the CoordinateSystem property. Expressed in calibrated units when the
CalibratedUnitsEnabled property is set to True. Expressed in pixels otherwise. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1342, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1342, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1342. The value used to reference this property.
index
Index of the reference point.
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
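Example
A hedged MicroV+ sketch: sequence 1, tool 1, instance 1, frame 1, and a 1-based reference point
index are assumptions made for illustration.
; Read the number of reference points (1340), then the X (1341) and
; Y (1342) coordinates of each reference point of instance 1.
count = VRESULT (1, 1, 1, 1340, 0, 1)
FOR i = 1 TO count
    ref.x = VRESULT (1, 1, 1, 1341, i, 1)
    ref.y = VRESULT (1, 1, 1, 1342, i, 1)
    TYPE "Point ", i, ": ", ref.x, ", ", ref.y
END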
InstanceRobotLocation
VLOCATION
1371
InstanceRobotLocation returns the location of the selected instance, in the frame of reference of the
specified robot. No offset transformations are applied to the location. If a gripper offset has been
assigned to the instance, it is ignored. If no vision-to-robot calibration has been carried out, the system
returns an error.
Syntax
MicroV+ VLOCATION (sequence, tool, instance, 1371, index, frame)
V+ VLOCATION ($ip, sequence, tool, instance, 1371, index, frame)
Remarks
This differs from InstanceLocation, which applies any calculated offset, and returns coordinates in the
vision frame of reference if there is no robot-to-vision calibration.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. First sequence is '1'.
tool
Index of the tool in the vision sequence. First tool in the sequence is '1'.
instance
Index of the instance for which the location is required.
parameter
1371. The value used to reference this property.
index
Index of the robot.
frame
Index of the frame in which the instance is found. Typically this is '0' (i.e. the
Locator is not frame-based).
Related Properties
InstanceLocation
InstanceLocationGripperOffsetMinimum
InstanceLocationGripperOffsetMaximum
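Example
A minimal V+ sketch, assuming a completed vision-to-robot calibration, sequence 1, tool 1,
instance 1, robot 1, and a non-frame-based Locator (frame 0):
; Retrieve the robot-frame location of instance 1 and move to it.
; No gripper offset is applied to this location.
SET part.loc = VLOCATION ($ip, 1, 1, 1, 1371, 1, 0)
MOVES part.loc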
InstanceRotation
VRESULT
1314
Angle of rotation of the Object coordinate system of the selected object instance, with respect to the
coordinate system set by the CoordinateSystem property. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1314, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1314, index, frame)
Remarks
When the NominalRotationEnabled property is True, the rotation of the object instance is always
equal to NominalRotation.
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Minimum: MinimumRotation or NominalRotation
Maximum: MaximumRotation or NominalRotation
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1314. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceScaleFactor
VRESULT
1313
Scale factor of the selected object instance, giving its relative size with respect to its associated model.
Available only when the CoordinateSystem property is set to hsLocatorWorld. Unavailable otherwise.
Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1313, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1313, index, frame)
Remarks
When the NominalScaleFactorEnabled property is True, the scale factor of the object instance is
always equal to NominalScaleFactor.
In MicroV+/V+, you must provide the index of frame that contains the specified instance. Range: [1,
ResultCount -1]
Type
Double
Range
Minimum: MinimumScaleFactor or NominalScaleFactor
Maximum: MaximumScaleFactor or NominalScaleFactor
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1313. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceSymmetry
VRESULT
1320
Index of the object instance of which the selected object instance is a symmetry. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1320, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1320, index, frame)
Remarks
If OutputSymmetricInstances is set to False, InstanceSymmetry is always equal to the instance's
index.
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Minimum: 0
Maximum: InstanceCount -1
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1320. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceTime
VRESULT
1322
Time needed to recognize and locate the selected object instance, expressed in milliseconds. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1322, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1322, index, frame)
Remarks
The time needed to locate the first object instance is usually longer because it includes all of the
low-level image preprocessing.
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1322. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
InstanceTranslationX
VRESULT
1315
X translation of the Object coordinate system of the selected object instance, with respect to the
coordinate system set by the CoordinateSystem property. Expressed in calibrated units when the
CalibratedUnitsEnabled property is set to True. Expressed in pixels otherwise. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1315, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1315, index, frame)
Type
Double
Range
Not applicable.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1315. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
InstanceTranslationY
VRESULT
1316
Y translation of the Object coordinate system of the selected object instance, with respect to the
coordinate system set by the CoordinateSystem property. Expressed in calibrated units when the
CalibratedUnitsEnabled property is set to True. Expressed in pixels otherwise. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1316, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1316, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Not applicable.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1316. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
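Example
An illustrative MicroV+ sketch; sequence 1, tool 1, instance 1, and frame 1 are assumed values:
; Read the pose of instance 1: rotation (1314), X translation (1315)
; and Y translation (1316).
rot = VRESULT (1, 1, 1, 1314, 0, 1)
x = VRESULT (1, 1, 1, 1315, 0, 1)
y = VRESULT (1, 1, 1, 1316, 0, 1)
TYPE "Instance at ", x, ", ", y, " rotated ", rot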
InstanceVisible
VRESULT
1321
The percentage of the instance that was found in the image. If the found instance was partially outside
the image (outside the field of view), the percentage is less than 100. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1321, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1321, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Parameters
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1321. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
IntermediateFilterResult
VRESULT
2312
Value of a specified filter, for a specified Input Frame. The IntermediateFilterResult
is useful for determining why an Input Frame does not Pass the global operator applied by the Results
Inspection tool.
Remarks
The Index (ID) numbers of input frames are not related to the Index (ID) numbers of their associated
output frames. The input frame ID comes from the tool providing the input frames, for example a
Locator tool. The output frame ID is generated by the Results Inspection tool. Figure 9 illustrates the
numbering of Input Frames and Output Frames.
Figure 9 Example of Filter Results
Range
Value  Result Name
1      Pass
0      Fail
Related Topics
FilterResult
Result
IntermediateResult
IntermediateResultCount
IntermediateResult
VRESULT
2311
Value of the Global Result for a specified input frame. IntermediateResult is useful for determining
which input frames did not Pass, and therefore were not output by the Results Inspection Tool.
Remarks
The Index (ID) numbers of input frames are not related to the Index (ID) numbers of their associated
output frames. The input frame ID comes from the tool providing the input frames, for example a
Locator tool. The output frame ID is generated by the Results Inspection tool. Figure 10 illustrates the
numbering of Input Frames and Output Frames.
Figure 10 Example of Filter Results
Range
Value  Result Name
1      Pass
0      Fail
Related Topics
FilterResult
Result
IntermediateFilterResult
IntermediateResultCount
IntermediateResultCount
VRESULT
2310
The number of input frames processed by the Results Inspection tool. This number is equal to or
greater than the ResultCount. If the tool is frame-based, the IntermediateResultCount is equal to the
number of frames provided by the input tool (the frame provider).
Remarks
The Index (ID) numbers of Input Frames are not related to the Index (ID) numbers of their associated
Output Frames. The Input Frame ID comes from the tool providing the input frames, for example a
Locator tool. The Output frame ID is generated by the Results Inspection tool. Figure 11 illustrates the
numbering of Input Frames and Output Frames.
Figure 11 Example of Filter Results
Range
The number of frame instances provided by the Input tool.
Related Topics
FilterResult
Result
IntermediateFilterResult
IntermediateResult
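Example
A hedged MicroV+ sketch: it assumes the Results Inspection tool is tool 2 of sequence 1, and that
the input-frame index is passed through the frame argument; check the argument layout against
your installation before relying on it.
; Read the Global Result (2311) of each input frame; a value of 0
; (Fail) means the frame was not output by the Results Inspection tool.
count = VRESULT (1, 2, 1, 2310, 0, 0)
FOR i = 0 TO count-1
    IF VRESULT (1, 2, 1, 2311, 0, i) == 0 THEN
        TYPE "Input frame ", i, " failed"
    END
END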
InterpolatePositionMode
VPARAMETER
5122
Sets the mode used by the Point Finder tool to compute a point hypothesis.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5122, index, object) = value
value = VPARAMETER (sequence_index, tool_index, 5122, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5122, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5122, index, object)
Type
long
Range
Value  Name            Description
0      hsCorner        The tool will compute a hypothesis that fits a corner point to interpolated
                       lines from connected edges.
1      hsIntersection  The tool will compute a hypothesis that is an intersection between the
                       search axis and connected edges of an interpolated line.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5122: the value used to reference this property
index
N/A
object
N/A
InterpolatePositionModeEnabled
VPARAMETER
5123
When InterpolatePositionModeEnabled is set to True, the Point Finder tool uses the value set by the
InterpolatePositionMode property to compute a point hypothesis. Otherwise, point hypothesis
coordinates are taken directly from a specific found edge that satisfies search constraints.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5123, index, object) = value
value = VPARAMETER (sequence_index, tool_index, 5123, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5123, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5123, index, object)
Type
Long
Range
Value  Name   Description
1      True   The Point Finder tool uses the value set by the InterpolatePositionMode
              property to compute a point hypothesis.
0      False  The Point Finder tool calculates the point hypothesis directly from a specific
              found edge that satisfies search constraints.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5123: the value used to reference this property
index
N/A
object
N/A
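Example
An illustrative MicroV+ sketch, assuming the Point Finder is tool 2 of sequence 1 and passing 0
for the unused index and object arguments:
; Enable interpolated point computation (5123 = True), then select
; corner fitting (5122 = 0, hsCorner).
VPARAMETER (1, 2, 5123, 0, 0) = 1
VPARAMETER (1, 2, 5122, 0, 0) = 0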
IntrinsicBoxResultsEnabled
VPARAMETER
1605
Enables the computation of the following intrinsic bounding box and intrinsic extent properties:
BlobIntrinsicBoundingBoxBottom, BlobIntrinsicBoundingBoxCenterX, BlobIntrinsicBoundingBoxCenterY,
BlobIntrinsicBoundingBoxHeight, BlobIntrinsicBoundingBoxLeft, BlobIntrinsicBoundingBoxRight,
BlobIntrinsicBoundingBoxRotation, BlobIntrinsicBoundingBoxTop, BlobIntrinsicBoundingBoxWidth,
BlobIntrinsicExtentBottom, BlobIntrinsicExtentLeft, BlobIntrinsicExtentRight, and
BlobIntrinsicExtentTop.
Figure 12 Illustration of Intrinsic Box Results (major and minor axes, left/right/top/bottom extents,
center of mass, and bounding box)
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1605, index, object) = value
value = VPARAMETER (sequence_index, tool_index, 1605, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1605, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1605, index, object)
Type
Boolean
Range
Value  Description
1      The intrinsic box properties will be computed.
0      No intrinsic box properties will be computed.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1605: the value used to reference this property
index
N/A
object
N/A
IntrinsicInertiaResultsEnabled
VPARAMETER
1603
Enables the computation of the following blob properties: BlobInertiaMinimum, BlobInertiaMaximum
and BlobElongation.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1603, index, object) = value
value = VPARAMETER (sequence_index, tool_index, 1603, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1603, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1603, index, object)
Type
Boolean
Range
Value  Description
1      The intrinsic inertia properties will be computed.
0      No intrinsic inertia properties will be computed.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1603: the value used to reference this property
index
N/A
object
N/A
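Example
An illustrative MicroV+ sketch; the Blob Analyzer as tool 2 of sequence 1 and the 0 values for
the unused arguments are assumptions:
; Enable intrinsic box (1605) and intrinsic inertia (1603) results
; before executing the sequence.
VPARAMETER (1, 2, 1605, 0, 0) = 1
VPARAMETER (1, 2, 1603, 0, 0) = 1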
InverseKinematics
VLOCATION
10060
For a robot with a tool-mounted or an arm-mounted camera, InverseKinematics retrieves the location
to which to move the robot so that the camera sees a specific point in the workspace (robot frame of
reference) at a specific point in the image (image frame of reference). The X-Y coordinates of the point
in the workspace are defined by RobotXPosition and RobotYPosition and X-Y coordinates of the point in
the image are defined by VisionXPosition and VisionYPosition.
If the tool is arm-mounted, there are two possible solutions for positioning the robot so you must
specify the robot configuration: RIGHTY or LEFTY, using the RobotConfiguration property.
If the camera is tool mounted, there are an infinite number of solutions for positioning the robot so you
must specify the angle of rotation between the Vision X Axis and the Robot X axis, using the
VisionRotation property.
Type
Location
Example
This example illustrates the use and relation of the following properties: InverseKinematics,
RobotXPosition, RobotYPosition, VisionXPosition, VisionYPosition, RobotConfiguration, and
VisionRotation.
.PROGRAM demo()
; This program moves the robot so that a given point in the
; Robot frame of reference can be seen in a given point in the vision
; Coordinate system (Calibrated)
; This defines the point in the robot coordinate system that should be visible
; in the camera
robot_x = 300
robot_y = 0
; This is the point where the robot point should be seen in the camera coordinate
; System. These units are mm (Calibrated Image).
; When they are set to (0,0), it means the center of the image.
; Vision_rot only applies for a ToolMountedCamera
vision_x = 0
vision_y = 0
vision_rot = 0
; This is used when the camera is arm-mounted on a Cobra robot.
; It does not apply for a tool-mounted camera.
; 0 means Righty robot configuration
; 1 means Lefty robot configuration
robot_config = 1
; Tell AdeptSight the chosen values
; for configuration and vision points.
VPARAMETER(0, 1, 10400, 1) = robot_config
VPARAMETER(0, 1, 10401, 1) = vision_x
VPARAMETER(0, 1, 10402, 1) = vision_y
VPARAMETER(0, 1, 10403, 1) = vision_rot
WHILE TRUE DO
; Tell AdeptSight the chosen values for the robot point.
VPARAMETER(0, 1, 10404, 1) = robot_x
VPARAMETER(0, 1, 10405, 1) = robot_y
; Ask AdeptSight where to move the robot in order to make
; robot point seen in vision point
SET loc = VLOCATION(0, 1, , 10060, 1)
; For ToolMounted Camera
MOVES loc
BREAK
; For ArmMounted Camera
DECOMPOSE loc2[] = loc
HERE #lhere
DECOMPOSE lhere2[] = #lhere
SET #pos = #PPOINT(loc2[0],loc2[1],lhere2[2],lhere2[3])
MOVE #pos
BREAK
END
.END
Related Properties
RobotXPosition
RobotYPosition
RobotConfiguration
VisionXPosition
VisionYPosition
VisionRotation
KernelSize
VPARAMETER
5304
KernelSize sets the size of the kernel of the operator for the sharpness process. The default setting of 5
(a 5x5 kernel) is generally sufficient for most cases.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 5304, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 5304, index, frame)
Type
Long
Range
Minimum: 2
Maximum: 16
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pair for which you want the result.
ID
5304: the value used to reference this property
index
N/A
frame
Frame containing the pair.
LastOperation
VRESULT
2200
Operation applied by the Image Processing tool at the last iteration. Read Only.
Type
long
Range
Value
Name
Description
0
hsArithmeticAddition
Operand value (constant or Operand Image pixel) is added to the
corresponding pixel in the input image.
1
hsArithmeticSubtraction
Operand value (constant or Operand Image pixel) is subtracted
from the corresponding pixel in the input image.
2
hsArithmeticMultiplication
The input image pixel value is multiplied by the Operand value
(constant or corresponding Operand Image pixel).
3
hsArithmeticDivision
The input image pixel value is divided by the Operand value (constant or corresponding Operand image pixel). The result is scaled
and clipped, and finally written to the output image.
4
hsArithmeticLightest
The Operand value (constant or Operand Image pixel) and corresponding pixel in the input image are compared to find the maximal value.
5
hsArithmeticDarkest
The Operand value (constant or Operand Image pixel) and corresponding pixel in the input image are compared to find the minimal value.
6
hsAssignmentInitialization
All the pixels of the output image are set to a specific constant
value. The height and width of the output image must be specified.
7
hsAssignmentCopy
Each input image pixel is copied to the corresponding output
image pixel.
8
hsAssignmentInversion
The input image pixel value is inverted and the result is copied to
the corresponding output image pixel.
9
hsLogicalAnd
AND operation is applied to the Operand value (constant or Operand image pixel) and the corresponding pixel in the input image.
10
hsLogicalNAnd
NAND operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
11
hsLogicalOr
OR operation is applied to the Operand value (constant or Operand image pixel) and the corresponding pixel in the input image.
12
hsLogicalXOr
XOR operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
13
hsLogicalNOr
NOR operation is applied using the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
14
hsFilteringCustom
Applies a Custom filter.
15
hsFilteringAverage
Applies an Average filter.
16
hsFilteringLaplacian
Applies a Laplacian filter.
17
hsFilteringHorizontalSobel
Applies a Horizontal Sobel filter.
18
hsFilteringVerticalSobel
Applies a Vertical Sobel filter.
19
hsFilteringSharpen
Applies a Sharpen filter.
20
hsFilteringSharpenLow
Applies a SharpenLow filter.
21
hsFilteringHorizontalPrewitt
Applies a Horizontal Prewitt filter.
22
hsFilteringVerticalPrewitt
Applies a Vertical Prewitt filter.
23
hsFilteringGaussian
Applies Gaussian filter.
24
hsFilteringHighPass
Applies High Pass filter.
25
hsFilteringMedian
Applies a Median filter.
26
hsMorphologicalDilate
Sets each pixel in the output image as the largest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
27
hsMorphologicalErode
Sets each pixel in the output image as the smallest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
28
hsMorphologicalClose
Has the effect of removing small dark particles and holes within
objects.
29
hsMorphologicalOpen
Has the effect of removing peaks from an image, leaving only the
image background.
30
hsHistogramEqualization
Equalization operation enhances the Input Image by flattening
the histogram of the Input Image.
31
hsHistogramStretching
Stretches (increases) the contrast in an image by applying a simple piecewise linear intensity transformation based on the histogram of the Input Image.
32
hsHistogramLightThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
minimum representable value. Otherwise, it is set to the maximum
representable value.
33
hsHistogramDarkThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
maximum representable value. Otherwise, it is set to the minimum
representable value.
34
hsTransformFFT
Converts and outputs a frequency description of the input image
by applying a Fast Fourier Transform (FFT).
35
hsTransformDCT
Converts and outputs a frequency description of the input image
by applying a Discrete Cosine Transform (DCT).
LastOutputType
VRESULT
2201
Type of the image output by the Image Processing tool at the last iteration. Read Only.
Type
long
Range
Value  Name          Description
1      hsType8Bits   Unsigned 8-bit image.
10     hsType16Bits  Signed 16-bit image.
7      hsType32Bits  Signed 32-bit image.
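Example
A hedged MicroV+ sketch, assuming the Image Processing tool is tool 2 of sequence 1 and that
instance 1 and 0 values are accepted for the unused arguments:
; Read the operation (2200) and output image type (2201) applied at
; the last iteration of the Image Processing tool.
op = VRESULT (1, 2, 1, 2200, 0, 0)
outtype = VRESULT (1, 2, 1, 2201, 0, 0)
IF outtype == 1 THEN
    TYPE "Unsigned 8-bit output image"
END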
LearnTime
VRESULT
1302
Time elapsed (in milliseconds) for the Learn process during the last execution of the Locator tool. If the
Learn process was not required, the LearnTime is 0. A Learn process is required, and automatically
called on the next execution of the Locator, after modifications to models and/or changes in some
search parameters. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1302, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1302, index, frame)
Remarks
In MicroV+/V+, you must provide the index of the frame that contains the specified instance.
Range: [1, ResultCount -1]
Type
Double
Range
Greater than 0.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1302. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
LoadBeltCalibration
VPARAMETER
10305
Loads the selected belt calibration file to the selected belt device. The valid format for a belt calibration
file is hscal.
Remarks
Import calibrations into a vision project only if you are sure that the calibration is valid. Otherwise,
this may cause hazardous and unexpected behavior of devices in the workcell, which may lead to
equipment damage or bodily injury.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveBeltCalibration
LoadCameraSettings
VPARAMETER
10306
Loads the selected camera settings file to the selected camera device. The valid format for a camera
settings file is hscam.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveCameraSettings
LoadColorCalibration
VPARAMETER
10302
Loads the selected color calibration file to the selected camera device.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveProject
LoadProject
VPARAMETER
10300
Loads the selected vision project file (*.hsproj). This clears all the sequences, settings and data,
including calibration data, that are currently in the vision project.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveProject
LoadRobotCalibration
VPARAMETER
10304
Loads the selected robot calibration file to the selected robot device. The valid format for a robot
calibration file is hscal.
Remarks
Import calibrations into a vision project only if you are sure that the calibration is valid. Otherwise,
this may cause hazardous and unexpected behavior of devices in the workcell, which may lead to
equipment damage or bodily injury.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveRobotCalibration
LoadSequence
VPARAMETER
10301
Loads the selected vision sequence file to the current vision project. The valid format for a vision
sequence file is hsseq.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
LoadVisionCalibration
VPARAMETER
10303
Loads the selected camera calibration file to the selected camera device. The valid format for a camera
calibration file is hscal.
Remarks
Import calibrations into a vision project only if you are sure that this calibration
is valid.
Otherwise this may cause hazardous and unexpected behavior of devices in the
workcell, which may lead to equipment damage or bodily injury.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveVisionCalibration
LogicalConstant
VPARAMETER
5380
Constant applied by a logical operation when no valid operand image is specified.
Type
long
Range
Unlimited
MagnitudeConstraint
VPARAMETER
5226
Indexed property used to set the magnitude constraint function for edge detection. Two points are
used: Base and Top.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5226, pair_index,
constraint_index) = value
value =VPARAMETER (sequence_index, tool_index, 5226, pair_index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5226, pair_index, constraint_index) $ip =
value
value = VPARAMETER ($ip, sequence_index, tool_index, 5226, pair_index, constraint_index)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5226: the value used to reference this property
index
N/A
constraint_index
One of the two points of the magnitude constraint function
(hsMagnitudeConstraintIndex)
1: Base point
2: Top point
MagnitudeThreshold
VPARAMETER
5200
Magnitude threshold sets the threshold used to find edges on the magnitude curve. A subpixel peak
detection algorithm is applied on the region of every minimum or maximum of the curve that exceeds
this threshold in order to locate edges.
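The thresholding step can be sketched as follows. This is an illustrative Python sketch only: the real tool applies a subpixel peak detection algorithm, whereas this sketch merely flags whole-pixel extrema of the magnitude curve that exceed the threshold.

```python
def candidate_edges(magnitude, threshold):
    """Return indices of local extrema of the magnitude curve whose
    absolute value exceeds the threshold (illustrative sketch only;
    the real tool refines each peak to subpixel precision)."""
    peaks = []
    for i in range(1, len(magnitude) - 1):
        m = magnitude[i]
        if abs(m) <= threshold:
            continue
        # local maximum or local minimum of the curve
        if (m >= magnitude[i - 1] and m > magnitude[i + 1]) or \
           (m <= magnitude[i - 1] and m < magnitude[i + 1]):
            peaks.append(i)
    return peaks

curve = [0, 10, 120, 40, 5, -90, -200, -60, 0]
print(candidate_edges(curve, 100))  # [2, 6]: one rising and one falling edge
```

Raising the threshold rejects weak edges; lowering it admits more candidates for the peak-detection step.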
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5200, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5200, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5200, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5200, index, object)
Type
Double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5200: the value used to reference this property
index
N/A
object
N/A
MatchCount
VRESULT
2100
Number of matched patterns found. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2100, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2100, index, frame)
Type
long
Range
Minimum: 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pattern instance for which you want the result.
ID
2100: the value used to reference this property
index
N/A
frame
Index of the frame containing the pattern instance for which you want the
result.
MatchPositionX
VRESULT
2102
X coordinate of a matched pattern in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2102, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2102, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pattern instance for which you want the result.
ID
2102: the value used to reference this property
index
N/A
frame
Index of the frame containing the pattern instance for which you want the
result.
MatchPositionY
VRESULT
2103
Y coordinate of a matched pattern in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2103, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2103, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pattern instance for which you want the result.
ID
2103: the value used to reference this property
index
N/A
frame
Index of the frame containing the pattern instance for which you want the
result.
MatchQuality
VRESULT
1802
Percentage of edges actually matched to the found entity (point, arc, or line). MatchQuality ranges from
0 to 1, with 1 being the best quality. A value of 1 means that edges were matched for every point along
the found entity. Similarly, a value of 0.2 means edges were matched to 20% of the points along the
found entity.
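The quality value is simply the matched fraction of points along the found entity; a minimal sketch of that arithmetic:

```python
def match_quality(matched_points, total_points):
    """Fraction of points along the found entity for which an edge
    was matched (0.0 = none, 1.0 = every point)."""
    if total_points == 0:
        return 0.0
    return matched_points / total_points

print(match_quality(20, 100))  # 0.2 -> edges matched at 20% of the points
```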
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1802, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1802, index, frame)
Type
Double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1802. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MatchRotation
VRESULT
2104
Rotation of a matched pattern in the currently selected coordinate system. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2104, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2104, index, frame)
Type
double
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pattern instance for which you want the result.
ID
2104: the value used to reference this property
index
N/A
frame
Index of the frame containing the pattern instance for which you want the
result.
MatchStrength
VRESULT
2101
Strength of the match for the selected matched pattern. MatchStrength ranges from 0 to 1, with 1
being the best quality. A value of 1 means that 100% of the reference pattern was successfully
matched to the found pattern instance. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 2101, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 2101, index, frame)
Type
Double
Range
Minimum: MatchThreshold
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pattern instance for which you want the result.
ID
2101: the value used to reference this property
index
N/A
frame
Index of the frame containing the pattern instance for which you want the
result.
MatchThreshold
VPARAMETER
5420
Sets the minimum match strength required for a pattern to be recognized as valid. A perfect match
value is 1.
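The acceptance rule can be sketched as a single comparison, assuming `strength` is the value reported by MatchStrength for a pattern instance:

```python
def is_valid_match(strength, match_threshold):
    """A pattern instance is recognized as valid only when its match
    strength meets or exceeds MatchThreshold (sketch of the rule)."""
    return strength >= match_threshold

print(is_valid_match(0.85, 0.70))  # True: strong enough to be reported
print(is_valid_match(0.50, 0.70))  # False: instance is rejected
```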
Syntax
MicroV+ VPARAMETER (sequence, tool, 5420, index, object) = value
value =VPARAMETER (sequence, tool, 5420, index, object)
V+ VPARAMETER (sequence, tool, 5420, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5420, index, object)
Type
Double
Range
Minimum: 0.0 (weak match)
Maximum: 1.0 (strong match)
Parameters
$ip
IP address of the vision server.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
ID
5420: The value used to reference this property.
index
N/A
object
N/A
MaximumAngleDeviation
VPARAMETER
5102
Maximum deviation in angle allowed for a detected edge to be used to generate an entity hypothesis.
Remarks
For an arc entity the deviation is calculated between the tangent angle of the arc at points where the
edge is matched to the arc. For a line entity, the Line Finder accepts a 20 degree deviation by
default. However, the tool uses the defined MaximumAngleDeviation value to test the hypothesis
and refine the pose of the found line.
Type
double
Range
Minimum: 0 degrees
Maximum: 20 degrees
MaximumBlobArea
VPARAMETER
5001
Maximum area for a blob. This validation criterion is used to filter out unwanted blobs from the results.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5001, index, object) = value
value =VPARAMETER (sequence, tool, 5001, index, object)
V+ VPARAMETER (sequence, tool, 5001, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5001, index, object)
Remarks
If CalibratedUnitsEnabled is set to True, this property is expressed in millimeters squared.
Otherwise, this property is expressed in pixels squared.
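Together with MinimumBlobArea, this property acts as a band-pass filter on blob area; a minimal sketch (areas are in mm² or px² depending on CalibratedUnitsEnabled):

```python
def filter_blobs(areas, min_area, max_area):
    """Keep only blobs whose area lies inside [min_area, max_area].
    Values are mm^2 when CalibratedUnitsEnabled is True, px^2 otherwise."""
    return [a for a in areas if min_area <= a <= max_area]

# noise speck (5.0) and background region (4800.0) are filtered out
print(filter_blobs([5.0, 120.0, 4800.0], 10.0, 1000.0))  # [120.0]
```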
Type
Double
Range
Minimum: MinimumBlobArea
Maximum: Area of the input image
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
5001. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MaximumConformityTolerance
VPARAMETER
555
Maximum value allowed for the ConformityTolerance property. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 555, index, object) = value
value =VPARAMETER (sequence, tool, 555, index, object)
V+ VPARAMETER (sequence, tool, 555, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 555, index, object)
Remarks
This property is computed by the Locator by analyzing the calibration, the contour detection
parameters, and the search parameters. See also ConformityTolerance, DefaultConformityTolerance,
and ConformityToleranceRangeEnabled.
Type
Double
Range
Greater than MinimumConformityTolerance
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
555. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MaximumGreylevelValue
VRESULT
1507
Highest greylevel value of all pixels in the tool's region of interest that are included in the final
histogram. Pixels removed from the histogram by tails or thresholds are not included in this calculation.
Read only.
Type
long
Range
Minimum: 0
Maximum: 255
MaximumInstanceCount
VPARAMETER
519
Maximum number of object instances that are searched for in the input greyscale image. All of the
object instances respecting the search constraints are output, up to a maximum of
MaximumInstanceCount. They are ordered according to the InstanceOrdering property.
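The output rule amounts to order-then-truncate. In this sketch the strength-descending sort key is only an assumption standing in for the configured InstanceOrdering setting:

```python
def output_instances(instances, maximum_count, key):
    """Order all instances that passed the search constraints, then
    output at most maximum_count of them."""
    return sorted(instances, key=key, reverse=True)[:maximum_count]

found = [{"id": 1, "strength": 0.6}, {"id": 2, "strength": 0.9},
         {"id": 3, "strength": 0.8}]
best = output_instances(found, 2, key=lambda i: i["strength"])
print([i["id"] for i in best])  # [2, 3]: the two strongest instances
```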
Syntax
MicroV+ VPARAMETER (sequence, tool, 519, index, object) = value
value =VPARAMETER (sequence, tool, 519, index, object)
V+ VPARAMETER (sequence, tool, 519, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 519, index, object)
Remarks
This property is applicable only if the MaximumInstanceCountEnabled property is set to True.
Type
Long
Range
Minimum: 1
Maximum: 2000
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
519. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MaximumInstanceCountEnabled
VPARAMETER
518
When MaximumInstanceCountEnabled is True, the search is limited to the number of instances set
by the MaximumInstanceCount property.
Syntax
MicroV+ VPARAMETER (sequence, tool, 518, index, object) = value
value =VPARAMETER (sequence, tool, 518, index, object)
V+ VPARAMETER (sequence, tool, 518, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 518, index, object)
Type
Boolean
Range
Value
Description
1
Search is limited to number of instances specified by MaximumInstanceCount.
0
Search is not limited to a set number of instances.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
518. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server.
MaximumRotation
VPARAMETER
517
Maximum angle of rotation allowed for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence, tool, 517, index, object) = value
value =VPARAMETER (sequence, tool, 517, index, object)
V+ VPARAMETER (sequence, tool, 517, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 517, index, object)
Remarks
This property is applicable only if the NominalRotationEnabled property is set to False. When
MaximumRotation is lower than MinimumRotation, the search range is equivalent to
MinimumRotation to (MaximumRotation + 360 degrees).
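The wraparound rule in the remarks can be written as an angle-membership test (degrees, assuming angles normalized to [-180, +180]):

```python
def in_rotation_range(angle, minimum, maximum):
    """True when angle falls in the allowed search range. When
    maximum < minimum the range wraps, i.e. it is equivalent to
    [minimum, maximum + 360 degrees]."""
    if minimum <= maximum:
        return minimum <= angle <= maximum
    # wrapped range: e.g. min=170, max=-170 covers 170..190 (= -170)
    return angle >= minimum or angle <= maximum

print(in_rotation_range(175, 170, -170))  # True: inside the wrapped range
print(in_rotation_range(0, 170, -170))    # False: outside it
```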
Type
Double
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
517. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MaximumScaleFactor
VPARAMETER
513
Maximum scale factor allowed for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence, tool, 513, index, object) = value
value =VPARAMETER (sequence, tool, 513, index, object)
V+ VPARAMETER (sequence, tool, 513, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 513, index, object)
Remarks
This property is applicable only if the NominalScaleFactorEnabled property is set to False.
Type
Double
Range
Minimum: 0.1
Maximum: 10.0
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
513. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
Mean
VRESULT
1500
Mean of the greylevel distribution of the pixels in the tool's region of interest that are included in the
final histogram. Pixels removed from the histogram by tails or thresholds are not included in this
calculation. Read only.
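The exclusion rule can be sketched as: keep only the pixels inside the threshold window, then average. The `low`/`high` window below is an illustrative stand-in for the tool's tails and thresholds:

```python
def histogram_mean(pixels, low=0, high=255):
    """Mean greylevel of the pixels kept in the final histogram; pixels
    outside the [low, high] window (removed by tails or thresholds)
    are excluded from the calculation."""
    kept = [p for p in pixels if low <= p <= high]
    return sum(kept) / len(kept) if kept else 0.0

# 0 and 255 fall outside the window and do not affect the mean
print(histogram_mean([0, 50, 100, 255], low=10, high=200))  # 75.0
```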
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1500, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1500, index, frame)
Type
Double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1500: the value used to reference this property
index
N/A
object
N/A
MeasurementPointsCount
VRESULT
1506
The number of points where the local sharpness is evaluated. When the Image Sharpness tool is
executed, it scans the region of interest and identifies a number of candidate locations (equal to
CandidatePointsCount) where the local standard deviation is the highest. The local sharpness is then
evaluated at each of the candidate locations that has a local standard deviation above
StandardDeviationThreshold.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1506, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1506, index, frame)
Type
long
Range
Minimum: 0
Maximum: CandidatePointsCount
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1506: the value used to reference this property
index
N/A
object
N/A
Median
VRESULT
1501
Median of the greylevel distribution of the pixels in the tool's region of interest that are included in the
final histogram. Pixels removed from the histogram by tails or thresholds are not included in this
calculation. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1501, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1501, index, frame)
Type
Double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1501: the value used to reference this property
index
N/A
object
N/A
MessageCount
VRESULT
1300
The number of messages issued during the last execution. The identification number of each message
is retrieved using the MessageNumber property. Read only.
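A typical pattern is to read MessageCount once, then index MessageNumber for each message. The `vresult` callable below is a hypothetical stand-in for the VRESULT keyword (same argument order as the V+ syntax, minus $ip), and the 1-based message index is an assumption:

```python
def read_message_numbers(vresult, sequence, tool, instance, frame):
    """Collect the identification number of every message issued
    during the last execution."""
    # MessageCount (result ID 1300): total number of messages issued
    count = int(vresult(sequence, tool, instance, 1300, 0, frame))
    # MessageNumber (result ID 1301): one identification number per index
    return [int(vresult(sequence, tool, instance, 1301, i, frame))
            for i in range(1, count + 1)]
```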
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1300, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1300, index, frame)
Type
Double
Range
Minimum: 0
Maximum: Unlimited
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1300. The value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MessageNumber
VRESULT
1301
The identification number of a message issued during the last execution. MessageCount returns the
total number of messages issued. Read only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1301, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1301, index, frame)
Type
Double
Range
Not applicable.
Parameters
sequence_index
Index of the vision sequence. The first sequence is '1'.
tool_index
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
ID
1301. The value used to reference this property.
index
Index of the message.
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MinimumArcPercentage
VPARAMETER
5142
Minimum percentage of arc contours that need to be matched for an arc hypothesis to be considered as
valid.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5142, index, object) = value
value =VPARAMETER (sequence, tool, 5142, index, object)
V+ VPARAMETER (sequence, tool, 5142, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5142, index, object)
Type
Double
Range
Minimum: Greater than 0.
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5142: the value used to reference this property
index
N/A
object
N/A
MinimumBlobArea
VPARAMETER
5000
Minimum area for a blob. This validation criterion is used to filter out unwanted blobs from the results.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5000, index, object) = value
value =VPARAMETER (sequence, tool, 5000, index, object)
V+ VPARAMETER (sequence, tool, 5000, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5000, index, object)
Remarks
If CalibratedUnitsEnabled is set to True, this property is expressed in millimeters squared.
Otherwise, this property is expressed in pixels squared.
Type
Double
Range
Minimum: Greater than 0.
Maximum: MaximumBlobArea
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
5000. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
MinimumClearPercentage
VPARAMETER
559
When MinimumClearPercentageEnabled is set to True, MinimumClearPercentage sets the minimum
percentage of the model bounding box area that must be free of obstacles to consider an object
instance as valid.
Syntax
MicroV+ VPARAMETER (sequence, tool, 559, index, object) = value
value =VPARAMETER (sequence, tool, 559, index, object)
V+ VPARAMETER (sequence, tool, 559, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 559, index, object)
Type
Double
Range
Minimum: Greater than 0.
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
559: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumClearPercentageEnabled
VPARAMETER
558
When MinimumClearPercentageEnabled is set to True, the MinimumClearPercentage constraint is
applied to the search process.
Syntax
MicroV+ VPARAMETER (sequence, tool, 558, index, object) = value
value =VPARAMETER (sequence, tool, 558, index, object)
V+ VPARAMETER (sequence, tool, 558, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 558, index, object)
Type
Boolean
Range
Value
Description
1
The MinimumClearPercentage constraint is enabled and applied to the Search process.
0
The MinimumClearPercentage constraint is not enabled.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
558: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumConformityTolerance
VPARAMETER
554
Minimum value allowed for the ConformityTolerance property. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 554, index, object) = value
value =VPARAMETER (sequence, tool, 554, index, object)
V+ VPARAMETER (sequence, tool, 554, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 554, index, object)
Remarks
This property is computed by the Locator by analyzing the calibration, the contour detection
parameters, and the search parameters. See also ConformityTolerance, DefaultConformityTolerance,
and ConformityToleranceRangeEnabled.
Type
Double
Range
Minimum: Greater than zero.
Maximum: Lower than MaximumConformityTolerance.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
554: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumGreylevelValue
VRESULT
1506
Lowest greylevel value of all pixels in the tool's region of interest that are included in the final
histogram. Pixels removed from the histogram by tails or thresholds are not included in this calculation.
Read only.
Type
long
Range
Minimum: 0
Maximum: 255
MinimumLinePercentage
VPARAMETER
5130
Minimum percentage of line contours that need to be matched for a line hypothesis to be considered as
valid.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5130, index, object) = value
value =VPARAMETER (sequence, tool, 5130, index, object)
V+ VPARAMETER (sequence, tool, 5130, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5130, index, object)
Type
Double
Range
Minimum: Greater than 0.
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5130: the value used to reference this property
index
N/A
object
N/A
MinimumModelPercentage
VPARAMETER
557
Minimum percentage of model contours that need to be matched in the input image in order to consider
the object instance as valid.
Syntax
MicroV+ VPARAMETER (sequence, tool, 557, index, object) = value
value =VPARAMETER (sequence, tool, 557, index, object)
V+ VPARAMETER (sequence, tool, 557, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 557, index, object)
Type
Double
Range
Minimum: Greater than 0.
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
557: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumRequiredFeatures
VPARAMETER
560
Minimum percentage of required features that must be recognized in order to consider the object
instance as valid.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 560, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 560, index, object)
V+ VPARAMETER (sequence_index, tool_index, 560, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 560, index, object)
Type
Double
Range
Minimum: Greater than zero.
Maximum: 100.0
Remark(s)
The minimum percentage of required features is expressed in terms of the number of required
features in a model without considering the amount of contour each required feature represents in
the model. For example, if the model contains 3 required features and
MinimumRequiredFeatures is set to 50%, an instance of the object will be considered valid as
long as 2 out of 3 required features are recognized.
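The remark's 3-feature example can be written out directly; the round-up to a whole feature count is an assumption consistent with that example:

```python
import math

def instance_valid(recognized, required_total, minimum_percentage):
    """Valid when the recognized count reaches the given percentage of
    the number of required features (contour length is ignored)."""
    needed = math.ceil(required_total * minimum_percentage / 100.0)
    return recognized >= needed

# 3 required features at 50%: 1.5 rounds up, so 2 of 3 must be found
print(instance_valid(2, 3, 50.0))  # True
print(instance_valid(1, 3, 50.0))  # False
```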
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
560: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumRotation
VPARAMETER
516
Minimum angle of rotation allowed for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 516, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 516, index, object)
V+ VPARAMETER (sequence_index, tool_index, 516, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 516, index, object)
Remarks
This property is applicable only if the NominalRotationEnabled property is set to False. When
MaximumRotation is lower than MinimumRotation, the search range is equivalent to
MinimumRotation to MaximumRotation + 360 degrees.
Type
Double
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
516: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
MinimumScaleFactor
VPARAMETER
512
Minimum scale factor allowed for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence, tool, 512, index, object) = value
value =VPARAMETER (sequence, tool, 512, index, object)
V+ VPARAMETER (sequence, tool, 512, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 512, index, object)
Remarks
This property is applicable only if the NominalScaleFactorEnabled property is set to False.
Type
Double
Range
Minimum: 0.1
Maximum: 10.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
512: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
Mode
VRESULT
1503
Mode of the greylevel distribution of the pixels in the tool's region of interest that are included in the
final histogram. Pixels removed from the histogram by tails or thresholds are not included in this
calculation. The mode is the greylevel value which corresponds to the histogram bin with the highest
number of pixels. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1503, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1503, index, frame)
Type
double
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1503: the value used to reference this property
index
N/A
object
N/A
ModelAutomaticLevels
VPARAMETER
410
When set to True, this property specifies that the Outline and Detail coarseness levels were
automatically optimized while building the model. When False, the Outline and Detail coarseness levels
were configured manually before building the model. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 410, index, object) = value
value = VPARAMETER (sequence, tool, 410, index, object)
V+ VPARAMETER (sequence, tool, 410, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 410, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Boolean
Range
Value
Description
1
Outline and Detail coarseness levels were automatically optimized while building the model.
0
Outline and Detail coarseness levels were manually configured before building the model.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
410: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBasedMaximumRotation
VPARAMETER
225
Maximum angle of rotation allowed when ModelBasedRotationMode is set to hsRelative.
Syntax
MicroV+ VPARAMETER (sequence, tool, 225, index, object) = value
value = VPARAMETER (sequence, tool, 225, index, object)
V+ VPARAMETER (sequence, tool, 225, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 225, index, object)
Type
Double
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
225: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBasedMaximumScaleFactor
VPARAMETER
222
Maximum scale allowed when ModelBasedScaleFactorMode is set to hsRelative.
Syntax
MicroV+ VPARAMETER (sequence, tool, 222, index, object) = value
value = VPARAMETER (sequence, tool, 222, index, object)
V+ VPARAMETER (sequence, tool, 222, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 222, index, object)
Remarks
See ModelBasedScaleFactorMode property for more details.
Type
Double
Range
Minimum: 0.1
Maximum: 10.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
222: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBasedMinimumRotation
VPARAMETER
224
Minimum angle of rotation allowed when ModelBasedRotationMode is set to hsRelative.
Syntax
MicroV+ VPARAMETER (sequence, tool, 224, index, object) = value
value = VPARAMETER (sequence, tool, 224, index, object)
V+ VPARAMETER (sequence, tool, 224, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 224, index, object)
Remarks
See ModelBasedRotationMode property for more details.
Type
Double
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
224: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBasedMinimumScaleFactor
VPARAMETER
221
Minimum scale allowed when ModelBasedScaleFactorMode is set to hsRelative.
Syntax
MicroV+ VPARAMETER (sequence, tool, 221, index, object) = value
value = VPARAMETER (sequence, tool, 221, index, object)
V+ VPARAMETER (sequence, tool, 221, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 221, index, object)
Remarks
See ModelBasedScaleFactorMode property for more details.
Type
Double
Range
Minimum: 0.1
Maximum: 10.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
221: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBasedRotationMode
VPARAMETER
223
Selects the method used to manage the rotation parameters of the Locator's search (MinimumRotation,
MaximumRotation and NominalRotation).
Syntax
MicroV+ VPARAMETER (sequence, tool, 223, index, object) = value
value = VPARAMETER (sequence, tool, 223, index, object)
V+ VPARAMETER (sequence, tool, 223, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 223, index, object)
Remarks
hsAbsolute has no effect on rotation parameters of the Locator's search process. It is useful for
positioning objects, based on the position of an object found by an initial Locator tool. hsRelative
optimizes search speed and robustness when you need to accurately position sub-parts of an object,
based on the position of the source object. In the hsRelative mode, the Locator's Learn phase
applies ModelBasedMinimumRotation and ModelBasedMaximumRotation as the allowed rotation
range.
Type
Long
Range
Value
Method
Description
0
hsAbsolute
Default value. The search directly applies the rotation parameters specified for the Locator search.
1
hsRelative
The Locator's Learn phase applies ModelBasedMinimumRotation and ModelBasedMaximumRotation as the allowed rotation range. This mode optimizes search speed and robustness when you need to accurately position sub-parts of an object, based on the position of the source object.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
223: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
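For illustration, a MicroV+ sketch that selects hsRelative and bounds the rotation range for model 1. All indices are hypothetical placeholders; 224 and 225 are the ModelBasedMinimumRotation and ModelBasedMaximumRotation IDs documented in this reference:

```
; Search within +/-15 degrees of the source object's rotation
VPARAMETER (1, 1, 223, 1, 0) = 1       ; ModelBasedRotationMode = hsRelative
VPARAMETER (1, 1, 224, 1, 0) = -15.0   ; ModelBasedMinimumRotation
VPARAMETER (1, 1, 225, 1, 0) = 15.0    ; ModelBasedMaximumRotation
```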
ModelBasedScaleFactorMode
VPARAMETER
220
Selects the method used to manage the scale properties used by the Locator's search
(MinimumScaleFactor, MaximumScaleFactor and NominalScaleFactor), when the Locator is ModelBased.
Syntax
MicroV+ VPARAMETER (sequence, tool, 220, index, object) = value
value = VPARAMETER (sequence, tool, 220, index, object)
V+ VPARAMETER (sequence, tool, 220, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 220, index, object)
Remarks
hsAbsolute has no effect on the scale parameters of the Locator's search process. It is useful for
positioning objects, based on the position of an object found by an initial Locator tool. hsRelative
optimizes search speed and robustness when you need to accurately position sub-parts of an object,
based on the position of the source object. In the hsRelative mode, the Locator's Learn phase
applies ModelBasedMinimumScaleFactor and ModelBasedMaximumScaleFactor as the allowed scale range.
Type
Long
Range
Value
Method
Description
0
hsAbsolute
Default value. The search directly applies the scale parameters specified for the Locator search.
1
hsRelative
The Locator's Learn phase applies ModelBasedMinimumScaleFactor and ModelBasedMaximumScaleFactor as the allowed scale range.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
220: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBoundingAreaBottom
VPARAMETER
417
The Y position in the image of the bottom of the model's bounding box. Unlike other parameters of a
model such as its origin or reference points, the bounding box is not a calibrated property. It is used as
an indication of the image area in which the contours used to construct the model were detected.
ModelBoundingAreaBottom is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 417, index, object) = value
value = VPARAMETER (sequence, tool, 417, index, object)
V+ VPARAMETER (sequence, tool, 417, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 417, index, object)
Type
Double
Range
Minimum: 0
Maximum: Height of the image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
417: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBoundingAreaLeft
VPARAMETER
419
The X position in the image of the left side of the model's bounding box. Unlike other parameters of a
model such as its origin or reference points, the bounding box is not a calibrated property. It is used as
an indication of the image area in which the contours used to construct the model were detected.
ModelBoundingAreaLeft is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 419, index, object) = value
value = VPARAMETER (sequence, tool, 419, index, object)
V+ VPARAMETER (sequence, tool, 419, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 419, index, object)
Type
Long
Range
Minimum: 0
Maximum: Width of the image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
419: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBoundingAreaRight
VPARAMETER
420
The X position in the image of the right side of the model's bounding box. Unlike other parameters of a
model such as its origin or reference points, the bounding box is not a calibrated property. It is used as
an indication of the image area in which the contours used to construct the model were detected.
ModelBoundingAreaRight is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 420, index, object) = value
value = VPARAMETER (sequence, tool, 420, index, object)
V+ VPARAMETER (sequence, tool, 420, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 420, index, object)
Type
Long
Range
Minimum: 0
Maximum: Width of the image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
420: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelBoundingAreaTop
VPARAMETER
418
The Y position in the image of the top of the model's bounding box. Unlike other parameters of a model
such as its origin or reference points, the bounding box is not a calibrated property. It is used as an
indication of the image area in which the contours used to construct the model were detected.
ModelBoundingAreaTop is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 418, index, object) = value
value = VPARAMETER (sequence, tool, 418, index, object)
V+ VPARAMETER (sequence, tool, 418, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 418, index, object)
Type
Long
Range
Minimum: 0
Maximum: Height of the image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
418: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
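Because the four bounding-box properties share the same access pattern, they can be read together. A hypothetical MicroV+ sketch (indices are placeholders; 417 through 420 are the Bottom, Top, Left and Right IDs from this reference):

```
; Bounding box of model 1, in pixels (not a calibrated property)
box.bot = VPARAMETER (1, 1, 417, 1, 0)
box.top = VPARAMETER (1, 1, 418, 1, 0)
box.lft = VPARAMETER (1, 1, 419, 1, 0)
box.rgt = VPARAMETER (1, 1, 420, 1, 0)
TYPE "Model area: ", box.rgt-box.lft, " x ", box.bot-box.top
```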
ModelContrastThreshold
VPARAMETER
414
The contrast threshold value used to detect the contours from which the model features were selected.
Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 414, index, object) = value
value = VPARAMETER (sequence, tool, 414, index, object)
V+ VPARAMETER (sequence, tool, 414, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 414, index, object)
Type
Long
Range
Minimum: 1
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
414: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelContrastThresholdMode
VPARAMETER
413
The method used to compute the threshold used for detecting contours in the model image. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 413, index, object) = value
value = VPARAMETER (sequence, tool, 413, index, object)
V+ VPARAMETER (sequence, tool, 413, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 413, index, object)
Type
Long
Range
Value
Method Name
Description
1
hsContrastThresholdAdaptiveLowSensitivity
Uses a low sensitivity adaptive threshold for detecting contours.
2
hsContrastThresholdAdaptiveNormalSensitivity
Uses a normal sensitivity adaptive threshold for detecting contours.
3
hsContrastThresholdAdaptiveHighSensitivity
Uses a high sensitivity adaptive threshold for detecting contours.
4
hsContrastThresholdFixedValue
Uses a fixed value threshold for detecting contours.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
413: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelCount
VPARAMETER
404
Number of models in the models database. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 404, index, object) = value
value = VPARAMETER (sequence, tool, 404, index, object)
V+ VPARAMETER (sequence, tool, 404, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 404, index, object)
Type
Long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
404: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
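A hypothetical MicroV+ sketch that reads the model count and lists each model's enabled state. The sequence and tool indices are placeholders; 405 is the ModelEnabled ID from this reference:

```
; Report each model's enabled state
n.models = VPARAMETER (1, 1, 404, 0, 0)
FOR i = 1 TO n.models
    TYPE "Model ", i, " enabled = ", VPARAMETER (1, 1, 405, i, 0)
END
```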
ModelDatabaseModified
VPARAMETER
402
When True, indicates that the current models database has been modified. Modifications include editing an existing model, adding a new model to the database, and deleting an existing model from the database. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 402, index, object) = value
value = VPARAMETER (sequence, tool, 402, index, object)
V+ VPARAMETER (sequence, tool, 402, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 402, index, object)
Type
Boolean
Range
Value
Description
1
The current models database has been modified
0
The current models database has not been modified
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
402: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelDetailLevel
VPARAMETER
412
The coarseness of the contours used to build the model at the Detail level. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 412, index, object) = value
value = VPARAMETER (sequence, tool, 412, index, object)
V+ VPARAMETER (sequence, tool, 412, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 412, index, object)
Type
Long
Range
Minimum: 1
Maximum: 16
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
412: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
Related Properties
ModelOutlineLevel
ModelDisambiguationEnabled
VPARAMETER
403
When set to True (default), the Locator applies disambiguation to discriminate between similar models
and between similar hypotheses of a single object. When set to False, the Locator does not apply
disambiguation.
Syntax
MicroV+ VPARAMETER (sequence, tool, 403, index, object) = value
value = VPARAMETER (sequence, tool, 403, index, object)
V+ VPARAMETER (sequence, tool, 403, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 403, index, object)
Type
Boolean
Range
Value
Description
1
Locator applies disambiguation.
0
Locator does not apply disambiguation.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
403: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelEnabled
VPARAMETER
405
Specifies if the selected model is enabled, which means that the Locator will search for this model when
the Locator is executed.
Syntax
MicroV+ VPARAMETER (sequence, tool, 405, index, object) = value
value = VPARAMETER (sequence, tool, 405, index, object)
V+ VPARAMETER (sequence, tool, 405, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 405, index, object)
Type
Boolean
Range
Value
Description
1
The model is enabled. The Locator will search for instances of this model.
0
The model is disabled. The Locator will not search for instances of this model.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
405: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
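For example, a hypothetical MicroV+ sketch that restricts the search to a single model (the sequence, tool and model indices are placeholders):

```
; Search only for model 1; skip model 2
VPARAMETER (1, 1, 405, 1, 0) = 1
VPARAMETER (1, 1, 405, 2, 0) = 0
```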
ModelFeatureSelection
VPARAMETER
416
The mode used to select model features from the detected contours at both the Outline and Detail
levels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 416, index, object) = value
value = VPARAMETER (sequence, tool, 416, index, object)
V+ VPARAMETER (sequence, tool, 416, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 416, index, object)
Type
Long
Range
Value
Mode
Description
1
hsModelFeatureSelectionNone
No features were automatically selected from the contours.
2
hsModelFeatureSelectionLess
Fewer features than the optimal set were automatically selected from the contours.
3
hsModelFeatureSelectionNormal
The optimal features were automatically selected from
the contours.
4
hsModelFeatureSelectionMore
More features than the optimal set were automatically
selected from the contours.
5
hsModelFeatureSelectionAll
All of the contours were automatically selected as features.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
416: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelOriginPositionX
VPARAMETER
421
The X position in the world coordinate system of the selected model's origin. The origin, defined by the
ModelOriginPositionX, ModelOriginPositionY, and ModelOriginRotation properties, is used by the Locator
to express the pose of instances of the model. The translation of the instance represents the position of
the model's origin in the coordinate system selected by the CoordinateSystem property. The model's
origin is also used as the pivot point around which the rotation of the instance is measured. This origin
also defines the object coordinate system that can be used to express results of model-based
inspection tools. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 421, index, object) = value
value = VPARAMETER (sequence, tool, 421, index, object)
V+ VPARAMETER (sequence, tool, 421, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 421, index, object)
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
421: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelOriginPositionY
VPARAMETER
422
The Y position in the world coordinate system of the selected model's origin. The origin, defined by the
ModelOriginPositionX, ModelOriginPositionY, and ModelOriginRotation properties, is used by the Locator
to express the pose of instances of the model. The translation of the instance represents the position of
the model's origin in the coordinate system selected by the CoordinateSystem property. The model's
origin is also used as the pivot point around which the rotation of the instance is measured. This origin
also defines the object coordinate system that can be used to express results of model-based
inspection tools. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 422, index, object) = value
value = VPARAMETER (sequence, tool, 422, index, object)
V+ VPARAMETER (sequence, tool, 422, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 422, index, object)
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
422: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelOriginRotation
VPARAMETER
423
The rotation in the world coordinate system of the selected model's origin. The origin, defined by the
ModelOriginPositionX, ModelOriginPositionY, and ModelOriginRotation properties, is used by the
Locator to express the pose of instances of the model. The translation of the instance represents the
position of the model's origin in the coordinate system selected by the CoordinateSystem property. The
model's origin is also used as the pivot point around which the rotation of the instance is measured.
This origin also defines the object coordinate system that can be used to express results of model-based inspection tools.
Syntax
MicroV+ VPARAMETER (sequence, tool, 423, index, object) = value
value = VPARAMETER (sequence, tool, 423, index, object)
V+ VPARAMETER (sequence, tool, 423, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 423, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Double
Range
Minimum: -180.0
Maximum: 180.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
423: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelOutlineLevel
VPARAMETER
411
The coarseness of the contours used to build the model at the Outline level. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 411, index, object) = value
value = VPARAMETER (sequence, tool, 411, index, object)
V+ VPARAMETER (sequence, tool, 411, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 411, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Long
Range
Minimum: 1
Maximum: 16
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
411: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
See Also
ModelDetailLevel
ModelReferencePointCount
VPARAMETER
424
Number of reference points defined on the selected model. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 424, index, object) = value
value = VPARAMETER (sequence, tool, 424, index, object)
V+ VPARAMETER (sequence, tool, 424, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 424, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Long
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
424: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModelReferencePointPositionX
VPARAMETER
425
X coordinate of the selected reference point on the selected model.
Syntax
MicroV+ VPARAMETER (sequence, tool, 425, index, object) = value
value = VPARAMETER (sequence, tool, 425, index, object)
V+ VPARAMETER (sequence, tool, 425, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 425, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
425: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
See Also
ModelReferencePointPositionY
ModelReferencePointPositionY
VPARAMETER
426
Y coordinate of the selected reference point on the selected model.
Syntax
MicroV+ VPARAMETER (sequence, tool, 426, index, object) = value
value = VPARAMETER (sequence, tool, 426, index, object)
V+ VPARAMETER (sequence, tool, 426, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 426, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Double
Range
Not applicable.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
426: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
See Also
ModelReferencePointPositionX
ModelShadingAreaBottom
VPARAMETER
427
The Y position of the bottom of the shading area that is bounded by ModelShadingAreaBottom,
ModelShadingAreaLeft, ModelShadingAreaRight and ModelShadingAreaTop. The shading area is used
for shading consistency analysis when the InstanceOrdering property is set to hsShadingConsistency.
Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 427, index, object) = value
value = VPARAMETER (sequence, tool, 427, index, object)
V+ VPARAMETER (sequence, tool, 427, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 427, index, object)
Type
Long
Range
Minimum: 0
Maximum: Height of the image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
427: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
See Also
ModelShadingAreaLeft
ModelShadingAreaRight
ModelShadingAreaTop
ModelShadingAreaLeft
VPARAMETER
429
The X position of the left side of the shading area that is bounded by ModelShadingAreaBottom, ModelShadingAreaLeft, ModelShadingAreaRight and ModelShadingAreaTop. The shading area is used for shading consistency analysis when the InstanceOrdering property is set to hsShadingConsistency. ModelShadingAreaLeft is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 429, index, object) = value
value = VPARAMETER (sequence, tool, 429, index, object)
V+ VPARAMETER (sequence, tool, 429, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 429, index, object)
Type
Long
Range
Minimum: 0
Maximum: Width of the image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
429: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
See Also
ModelShadingAreaBottom
ModelShadingAreaRight
ModelShadingAreaTop
ModelShadingAreaRight
VPARAMETER
430
The X position of the right side of the shading area that is bounded by ModelShadingAreaBottom, ModelShadingAreaLeft, ModelShadingAreaRight and ModelShadingAreaTop. The shading area is used for shading consistency analysis when the InstanceOrdering property is set to hsShadingConsistency. ModelShadingAreaRight is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 430, index, object) = value
value = VPARAMETER (sequence, tool, 430, index, object)
V+ VPARAMETER (sequence, tool, 430, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 430, index, object)
Type
Long
Range
Minimum: 0
Maximum: Width of the image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
430: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
Related Properties
ModelShadingAreaBottom
ModelShadingAreaLeft
ModelShadingAreaTop
ModelShadingAreaTop
VPARAMETER
428
The Y position of the top of the shading area that is bounded by ModelShadingAreaBottom, ModelShadingAreaLeft, ModelShadingAreaRight and ModelShadingAreaTop. The shading area is used for shading consistency analysis when the InstanceOrdering property is set to hsShadingConsistency. ModelShadingAreaTop is expressed in pixels. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 428, index, object) = value
value = VPARAMETER (sequence, tool, 428, index, object)
V+ VPARAMETER (sequence, tool, 428, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 428, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Long
Range
Minimum: 0
Maximum: Height of the image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
428: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
Related Properties
ModelShadingAreaBottom
ModelShadingAreaLeft
ModelShadingAreaRight
ModelTrackingInertia
VPARAMETER
415
The tracking inertia setting used to detect the contours of the model. Read only.
Syntax
MicroV+ VPARAMETER (sequence, tool, 415, index, object) = value
value = VPARAMETER (sequence, tool, 415, index, object)
V+ VPARAMETER (sequence, tool, 415, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 415, index, object)
Remarks
In MicroV+/V+, the index parameter specifies the index of the model. Range: [1, ModelCount -1]
Type
Long
Range
Minimum: 0
Maximum: 1
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
415: the value used to reference this property
index
Index of the model. Range: [1, ModelCount -1]
object
N/A
ModePixelCount
VRESULT
1505
Number of pixels in the histogram bin which corresponds to the Mode of the greylevel distribution of all
pixels in the tool's region of interest that are included in the final histogram. Pixels removed from the
histogram by tails or thresholds are not included in this calculation. The mode is the greylevel value
which corresponds to the histogram bin with the highest number of pixels. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1505, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1505, index, frame)
Type
Double
Range
Greater than or equal to 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1505: the value used to reference this property
index
N/A
object
N/A
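The relationship between the histogram, the mode, and ModePixelCount can be sketched as follows. This is an illustrative calculation only, not AdeptSight code; it assumes a histogram from which any pixels removed by tails or thresholds are already absent.

```python
# Illustrative sketch only (not the AdeptSight API): how ModePixelCount
# relates to a greylevel histogram. The histogram is assumed to already
# exclude any pixels removed by tails or thresholds.

def mode_pixel_count(histogram):
    """Return (mode_greylevel, pixel_count) for the fullest histogram bin."""
    mode = max(range(len(histogram)), key=lambda g: histogram[g])
    return mode, histogram[mode]

# Toy 4-bin histogram: greylevel 2 has the most pixels (5),
# so the mode is 2 and ModePixelCount is 5.
print(mode_pixel_count([0, 3, 5, 2]))  # (2, 5)
```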
MorphologicalNeighborhoodSize
VPARAMETER
5390
Size of the neighborhood applied by a morphological operation.
Type
long
Range
Fixed value: 3
NominalRotation
VPARAMETER
515
Required angle of rotation for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence, tool, 515, index, object) = value
value =VPARAMETER (sequence, tool, 515, index, object)
V+ VPARAMETER (sequence, tool, 515, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 515, index, object)
Remarks
This property is applicable only if the NominalRotationEnabled property is set to True.
Type
Long
Range
Minimum: -180.0 degrees
Maximum: +180.0 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
515: the value used to reference this property
index
N/A
object
N/A
NominalRotationEnabled
VPARAMETER
514
Specifies whether the rotation of a recognized instance must fall within the range set by
MinimumRotation and MaximumRotation or be equal to the nominal value set by the NominalRotation
property. When NominalRotationEnabled is set to True, the nominal value is applied, otherwise the
range is used.
Syntax
MicroV+ VPARAMETER (sequence, tool, 514, index, object) = value
value =VPARAMETER (sequence, tool, 514, index, object)
V+ VPARAMETER (sequence, tool, 514, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 514, index, object)
Type
Boolean
Range
Value
Description
1
Locator searches for instances that meet NominalRotation constraint
0
Locator searches for instances within range set by MinimumRotation and
MaximumRotation.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
514: the value used to reference this property
index
N/A
object
N/A
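The selection logic described above can be sketched as follows. This is a hedged illustration, not AdeptSight code; in particular, the use of exact equality for the nominal case is an assumption about how the constraint is applied.

```python
# Hedged sketch of the NominalRotationEnabled logic (not AdeptSight code).
# When the flag is 1, an instance must match the nominal angle (exact
# equality is an assumption here); when 0, the range set by
# MinimumRotation/MaximumRotation applies.

def rotation_accepted(angle, nominal_enabled, nominal, minimum, maximum):
    if nominal_enabled:
        return angle == nominal
    return minimum <= angle <= maximum

print(rotation_accepted(15.0, False, 0.0, -30.0, 30.0))  # True  (in range)
print(rotation_accepted(15.0, True, 0.0, -30.0, 30.0))   # False (not nominal)
```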
NominalScaleFactor
VPARAMETER
511
Required scale factor for an object instance to be recognized.
Syntax
MicroV+ VPARAMETER (sequence, tool, 511, index, object) = value
value =VPARAMETER (sequence, tool, 511, index, object)
V+ VPARAMETER (sequence, tool, 511, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 511, index, object)
Remarks
This property is applicable only if the NominalScaleFactorEnabled property is set to True.
Type
Long
Range
Minimum: 0.1
Maximum: 10.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
511: the value used to reference this property
index
N/A
object
N/A
NominalScaleFactorEnabled
VPARAMETER
510
Specifies whether the scale factor of a recognized instance must fall within the range set by
MinimumScaleFactor and MaximumScaleFactor or be equal to the nominal value set by the
NominalScaleFactor property. When NominalScaleFactorEnabled is set to True, the nominal value is
applied, otherwise the range is used.
Syntax
MicroV+ VPARAMETER (sequence, tool, 510, index, object) = value
value =VPARAMETER (sequence, tool, 510, index, object)
V+ VPARAMETER (sequence, tool, 510, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 510, index, object)
Type
Boolean
Range
Value
Description
1
Locator searches for instances that meet NominalScaleFactor constraint
0
Locator searches for instances within range set by MinimumScaleFactor and
MaximumScaleFactor.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
510: the value used to reference this property
index
N/A
object
N/A
Operation
VPARAMETER
5355
Operation applied by the Image Processing tool.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5355, index, object) = value
value =VPARAMETER (sequence, tool, 5355, index, object)
V+ VPARAMETER (sequence, tool, 5355, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5355, index, object)
Type
Long
Range
Value
Name
Description
0
hsArithmeticAddition
Operand value (constant or Operand Image pixel) is added to the
corresponding pixel in the input image.
1
hsArithmeticSubtraction Operand value (constant or Operand Image pixel) is subtracted
from the corresponding pixel in the input image.
2
hsArithmeticMultiplication
The input image pixel value is multiplied by the Operand value
(constant or corresponding Operand Image pixel).
3
hsArithmeticDivision
The input image pixel value is divided by the Operand value (constant or corresponding Operand image pixel). The result is scaled
and clipped, and finally written to the output image.
4
hsArithmeticLightest
The Operand value (constant or Operand Image pixel) and corresponding pixel in the input image are compared to find the maximal value.
5
hsArithmeticDarkest
The Operand value (constant or Operand Image pixel) and corresponding pixel in the input image are compared to find the minimal value.
6
hsAssignmentInitialization
All the pixels of the output image are set to a specific constant
value. The height and width of the output image must be specified.
7
hsAssignmentCopy
Each input image pixel is copied to the corresponding output
image pixel.
8
hsAssignmentInversion
The input image pixel value is inverted and the result is copied to
the corresponding output image pixel.
9
hsLogicalAnd
AND operation is applied to the Operand value (constant or Operand image pixel) and the corresponding pixel in the input image.
10
hsLogicalNAnd
NAND operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
11
hsLogicalOr
OR operation is applied to the Operand value (constant or Operand image pixel) and the corresponding pixel in the input image.
12
hsLogicalXOr
XOR operation is applied to the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
13
hsLogicalNOr
NOR operation is applied using the Operand value (constant or
Operand image pixel) and the corresponding pixel in the input
image.
14
hsFilteringCustom
Applies a Custom filter.
15
hsFilteringAverage
Applies an Average filter.
16
hsFilteringLaplacian
Applies a Laplacian filter.
17
hsFilteringHorizontalSobel
Applies a Horizontal Sobel filter.
18
hsFilteringVerticalSobel Applies a Vertical Sobel filter.
19
hsFilteringSharpen
Applies a Sharpen filter.
20
hsFilteringSharpenLow
Applies a SharpenLow filter.
21
hsFilteringHorizontalPrewitt
Applies a Horizontal Prewitt filter.
22
hsFilteringVerticalPrewitt
Applies a Vertical Prewitt filter.
23
hsFilteringGaussian
Applies Gaussian filter.
24
hsFilteringHighPass
Applies High Pass filter.
25
hsFilteringMedian
Applies a Median filter.
26
hsMorphologicalDilate
Sets each pixel in the output image as the largest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
27
hsMorphologicalErode
Sets each pixel in the output image as the smallest luminance
value of all the input image pixels in the neighborhood defined by
the selected kernel size.
28
hsMorphologicalClose
Has the effect of removing small dark particles and holes within
objects.
29
hsMorphologicalOpen
Has the effect of removing peaks from an image, leaving only the
image background.
30
hsHistogramEqualization
Equalization operation enhances the Input Image by flattening
the histogram of the Input Image
31
hsHistogramStretching
Stretches (increases) the contrast in an image by applying a simple piecewise linear intensity transformation based on the histogram of the Input Image.
32
hsHistogramLightThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
minimum representable value. Otherwise, it is set to the maximum
representable value.
33
hsHistogramDarkThreshold
Changes each pixel value depending on whether it is less than or
greater than the specified threshold. If an input pixel value is less
than the threshold, the corresponding output pixel is set to the
maximum representable value. Otherwise, it is set to the minimum
representable value.
34
hsTransformFFT
Converts and outputs a frequency description of the input image
by applying a Fast Fourier Transform (FFT).
35
hsTransformDCT
Converts and outputs a frequency description of the input image
by applying a Discrete Cosine Transform (DCT).
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5355: the value used to reference this property
index
N/A
object
N/A
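To make the table concrete, here is a per-pixel sketch of a few of the operations above for unsigned 8-bit pixels. This is illustrative Python, not the Image Processing tool's implementation; clipping arithmetic results to [0, 255] is an assumption consistent with 8-bit output.

```python
# Illustrative per-pixel semantics for a few Operation modes, assuming
# unsigned 8-bit pixels (not the AdeptSight implementation).

def clip8(v):
    """Clip a value to the unsigned 8-bit range [0, 255] (assumed behavior)."""
    return max(0, min(255, v))

def arithmetic_addition(pixel, operand):      # Operation value 0
    return clip8(pixel + operand)

def assignment_inversion(pixel):              # Operation value 8
    return 255 - pixel

def histogram_light_threshold(pixel, t):      # Operation value 32
    return 0 if pixel < t else 255

def histogram_dark_threshold(pixel, t):       # Operation value 33
    return 255 if pixel < t else 0

print(arithmetic_addition(250, 20))         # 255 (clipped)
print(assignment_inversion(10))             # 245
print(histogram_light_threshold(100, 128))  # 0
print(histogram_dark_threshold(100, 128))   # 255
```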
Operator
VPARAMETER
5600
Logical operator applied by the Results Inspection tool.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5600, index, object) = value
value =VPARAMETER (sequence, tool, 5600, index, object)
V+ VPARAMETER (sequence, tool, 5600, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5600, index, object)
Type
Long
Range
0 or 1
Value
State
Description
1
AND
AND operator is applied.
0
OR
OR operator is applied.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5600: the value used to reference this property
index
N/A
object
N/A
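The effect of the Operator setting can be sketched as follows; this is a hedged illustration of the AND/OR combination described above, not AdeptSight code, and the function name is hypothetical.

```python
# Hedged sketch (not AdeptSight code) of how the Operator setting combines
# individual inspection results: 1 applies AND, 0 applies OR.

def combine(results, operator):
    return all(results) if operator == 1 else any(results)

print(combine([True, True, False], 1))  # False (AND: one result fails)
print(combine([True, True, False], 0))  # True  (OR: one result passes)
```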
OutlineLevel
VPARAMETER
300
The coarseness of the contours at the Outline level. This property can only be set when
ParametersBasedOn is set to hsParametersCustom. Read only otherwise.
Syntax
MicroV+ VPARAMETER (sequence, tool, 300, index, object) = value
value =VPARAMETER (sequence, tool, 300, index, object)
V+ VPARAMETER (sequence, tool, 300, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 300, index, object)
Remarks
For most applications, the ParametersBasedOn property should be set to hsParametersAllModels.
Custom contour detection should only be used when the default values do not work correctly.
Type
Long
Range
Minimum: 1
Maximum: 16
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
300. The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OutputArcAngle
VRESULT
1841
Angle of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1841, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1841, index, frame)
Type
double
Range
Minimum: -180
Maximum: 180 degrees
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1841: the value used to reference this property
index
N/A
frame
N/A
OutputArcCenterPointX
VRESULT
1846
X coordinate of the center point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1846, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1846, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1846: the value used to reference this property
index
N/A
frame
N/A
OutputArcCenterPointY
VRESULT
1847
The Y coordinate of the center point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1847, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1847, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1847: the value used to reference this property
index
N/A
frame
N/A
OutputArcEndPointX
VRESULT
1844
The X coordinate of the end point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1844, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1844, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1844: the value used to reference this property
index
N/A
frame
N/A
OutputArcEndPointY
VRESULT
1845
The Y coordinate of the end point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1845, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1845, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1845: the value used to reference this property
index
N/A
frame
N/A
OutputArcRadius
VRESULT
1840
The radius of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1840, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1840, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1840: the value used to reference this property
index
N/A
frame
N/A
OutputArcStartPointX
VRESULT
1842
The X coordinate of the start point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1842, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1842, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1842: the value used to reference this property
index
N/A
frame
N/A
OutputArcStartPointY
VRESULT
1843
The Y coordinate of the start point of the specified arc entity.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1843, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1843, index, frame)
Type
double
Range
Unbounded
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1843: the value used to reference this property
index
N/A
frame
N/A
OutputBlobImageEnabled
VPARAMETER
30
Specifies if a blob image will be output after the blob segmentation and labelling process.
Syntax
MicroV+ VPARAMETER (sequence, tool, 30, index, object) = value
value =VPARAMETER (sequence, tool, 30, index, object)
V+ VPARAMETER (sequence, tool, 30, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 30, index, object)
Remarks
Generating a blob image considerably increases the tool’s execution time.
Type
Boolean
Range
Value
Description
0
The blob image will not be output.
1
The blob image will be output.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
30: The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server. Applies to V+ syntax only.
OutputEntityEnabled
VPARAMETER
35
Specifies if a found entity will be output to the runtime database.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 35, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 35, index, object)
V+ VPARAMETER (sequence_index, tool_index, 35, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 35, index, object)
Remarks
Outputting an entity may significantly increase the tool’s execution time.
Type
Long
Range
Value
Name
Description
1
True
Entity will be output to the runtime database
0
False
Entity will not be output to the runtime database
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
35: the value used to reference this property
index
N/A
object
N/A
OutputDetailSceneEnabled
VPARAMETER
22
When OutputDetailSceneEnabled is set to True, the Detail Contour Scene is output to the runtime
database.
Syntax
MicroV+ VPARAMETER (sequence, tool, 22, index, object) = value
value =VPARAMETER (sequence, tool, 22, index, object)
V+ VPARAMETER (sequence, tool, 22, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 22, index, object)
Type
Boolean
Range
Value
Description
1
Detail Contour Scene is output to runtime database.
0
Detail Contour Scene is not output to runtime database.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OutputInstanceSceneEnabled
VPARAMETER
23
When OutputInstanceSceneEnabled is set to True, the Instance Scene is output to the runtime
database.
Syntax
MicroV+ VPARAMETER (sequence, tool, 23, index, object) = value
value =VPARAMETER (sequence, tool, 23, index, object)
V+ VPARAMETER (sequence, tool, 23, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 23, index, object)
Type
Boolean
Range
Value
Description
1
Instance Scene is output to runtime database
0
Instance Scene is not output to runtime database
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OutputLineAngle
VRESULT
1820
Angle of the specified line entity.
Type
double
Range
Minimum: -180
Maximum: 180
OutputLineEndPointX
VRESULT
1823
X coordinate of the end point of the specified line entity.
Type
double
Range
Unbounded
OutputLineEndPointY
VRESULT
1824
Y coordinate of the end point of the specified line entity.
Type
double
Range
Unbounded
OutputLineStartPointX
VRESULT
1821
X coordinate of the start point of the specified line entity.
Type
double
Range
Unbounded
OutputLineStartPointY
VRESULT
1822
Y coordinate of the start point of the specified line entity.
Type
double
Range
Unbounded
OutputLineVectorPointX
VRESULT
1825
X coordinate of the vector point of the specified line entity.
Type
double
Range
Unbounded
OutputLineVectorPointY
VRESULT
1826
Y coordinate of the vector point of the specified line entity.
Type
double
Range
Unbounded
OutputMode
VPARAMETER
24
Mode used to output object instances in the Instance Scene.
Syntax
MicroV+ VPARAMETER (sequence, tool, 24, index, object) = value
value =VPARAMETER (sequence, tool, 24, index, object)
V+ VPARAMETER (sequence, tool, 24, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 24, index, object)
Remarks
For normal searches using both the Outline and Detail levels, the models at the Detail level are used
to draw the instances. When SearchBasedOnOutlineLevelOnly is True, the models at the Outline
level are used. Setting this property to hsMatchedModel or hsTransformedModel will usually increase
the processing time. It should be set to hsNoGraphics for optimal performance.
Type
Long
Range
Value
Mode
Description
0
hsNoGraphics
The output Scene does not contain any graphical representation
of object instances.
1
hsTransformedModel
In the output scene, an object instance is represented by transforming its associated model according to the pose computed by
the Locator.
3
hsMatchedModel
In the output scene, an object instance is represented by transforming the sections of its model that were matched to actual
contours, according to the pose computed by the Locator.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OutputOutlineSceneEnabled
VPARAMETER
21
When OutputOutlineSceneEnabled is set to True, the Outline Contour Scene is output to the runtime
database.
Syntax
MicroV+ VPARAMETER (sequence, tool, 21, index, object) = value
value =VPARAMETER (sequence, tool, 21, index, object)
V+ VPARAMETER (sequence, tool, 21, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 21, index, object)
Type
Boolean
Range
Value
Description
1
Outline Contour Scene is output to runtime database.
0
Outline Contour Scene is not output to runtime database.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OutputPointX
VRESULT
1810
X coordinate of the specified point entity.
Type
double
Range
Unbounded
OutputPointY
VRESULT
1811
Y coordinate of the specified point entity.
Type
double
Range
Unbounded
OutputSymmetricInstances
VPARAMETER
520
When set to True, all symmetric poses of an object instance are output. When set to False, only the
best-quality symmetric pose of the object instance is output.
Syntax
MicroV+ VPARAMETER (sequence, tool, 520, index, object) = value
value =VPARAMETER (sequence, tool, 520, index, object)
V+ VPARAMETER (sequence, tool, 520, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 520, index, object)
Remarks
See also InstanceSymmetry.
Type
Boolean
Range
Value
Description
1
Locator outputs all symmetrical poses of an instance.
0
Locator outputs only single best pose of an instance.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
OverrideModelImageEnabled
VPARAMETER
40
For a model-based tool only, setting OverrideModelImageEnabled to True makes it possible to define a
pattern from an image other than the model grey-scale image. For a non-model-based tool this
property is invalid.
Type
Long
Range
Value
State
1
True
0
False
OverrideType
VPARAMETER
5351
Output image type when OverrideTypeEnabled property is set to True. By default, the Image Processing
Tool outputs all resulting images as unsigned 8-bit images.
Type
Long
Range
Value
Name
Description
1
hsType8Bits
Unsigned 8-bit image.
10
hsType16Bits
Signed 16-bit image.
7
hsType32Bits
Signed 32-bit image
OverrideTypeEnabled
VPARAMETER
5350
Enables or disables the OverrideType property.
Type
Long
Range
Value
State
Description
1
Enabled
The OverrideType represents the wanted output image type.
0
Disabled
The output image type is automatically selected: if an output image already exists, its type is used; otherwise, the output image is created with the same type as the input image.
PairCount
VPARAMETER
1920
PairCount indicates the number of pairs that have been configured for the tool. Read only.
Syntax
Micro V+ VPARAMETER (sequence, tool, 1920, index, object) = value
value = VPARAMETER (sequence, tool, 1920, index, object)
V+ VPARAMETER (sequence, tool, 1920, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 1920, index, object)
Remarks
If an edge pair is not found, results for this edge pair appear as zero, but the PairCount property is
not affected. To get the number of pairs found by the tool use the ResultCount property.
Type
long
Range
Minimum: 0
Maximum: Unlimited
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1920: the value used to reference this property
index
N/A
frame
N/A
PairPositionX
VRESULT
1921
X coordinate of the center of the selected pair. The position of a pair is defined as the middle of the line
segment drawn from the X-Y coordinates of the first and the second edges of the pair. Read only.
Figure 13 Position of Arc Caliper and Caliper Pairs (panels: Arc Caliper - Radial Projection, Arc Caliper - Annular Projection, Caliper; illustration not reproduced)
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1921, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1921, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pair for which you want the result.
ID
1921: the value used to reference this property
index
N/A
frame
N/A
PairPositionY
VRESULT
1922
Y coordinate of the center of the selected pair. The position of a pair is defined as the middle of the line
segment drawn from the X-Y coordinates of the first and the second edges of the pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1922, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1922, index, frame)
Type
double
Range
Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pair for which you want the result.
ID
1922: the value used to reference this property
index
N/A
frame
N/A
PairRotation
VRESULT
1923
Angle of rotation of the selected pair in the currently selected coordinate system. The rotation of a
given pair is always the same as the rotation of its first and second edges. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1923, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1923, index, frame)
Type
Long
Range
Minimum: -180
Maximum: 180
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pair for which you want the result.
ID
1923: the value used to reference this property
index
N/A
frame
N/A
PairScore
VRESULT
1924
Score of the selected pair. The score of the pair is equal to the mean score of the two edges
(Edge1Score and Edge2Score) that comprise the pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1924, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1924, index, frame)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1924: the value used to reference this property
index
N/A
frame
N/A
PairSize
VRESULT
1925
Size of the selected pair. The size of a pair is the distance between its first and second edges, that is, the length of the line segment joining the two edges of the pair. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1925, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1925, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the pair for which you want the result.
ID
1925: the value used to reference this property
index
N/A
frame
Frame containing the pair.
ParametersBasedOn
VPARAMETER
304
Sets how the contour detection parameters are configured. When set to
hsContourParametersAllModels, the contour detection parameters are optimized by analyzing the
parameters used to build all the models. When set to hsContourParametersCustom, the contour
detection parameters are set manually. When set to a value greater than hsContourParametersCustom,
the contour detection parameters of a specific model are used. The contour detection parameters on
which this property has an effect are DetailLevel, OutlineLevel, ContrastThresholdMode, and
ContrastThreshold.
Syntax
MicroV+ VPARAMETER (sequence, tool, 304, index, object) = value
value =VPARAMETER (sequence, tool, 304, index, object)
V+ VPARAMETER (sequence, tool, 304, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 304, index, object)
Remarks
For most applications, the ParametersBasedOn property should be set to hsContourParametersAllModels.
Custom contour detection should only be used when the default values do not work correctly.
Type
Long
Range
Value
Detection Mode
Description
-2
hsContourParametersAllModels
The contour detection parameters are optimized by analyzing the parameters used to build all the models.
-1
hsContourParametersCustom
The contour detection parameters are set manually.
Integer greater than or equal to 0
Index of a specific model
The contour detection parameters of the specified model are used.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
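As a usage sketch, the property can be set as follows in V+. The sequence/tool indices, the IP address, and the use of -1 for the unused index and object arguments are assumptions for illustration.

```
; Let the Locator optimize contour detection over all models (-2),
; or base it on the contour parameters of a specific model instead
$ip = "192.168.145.68"
VPARAMETER (1, 1, 304, -1, -1) $ip = -2  ; hsContourParametersAllModels
;VPARAMETER (1, 1, 304, -1, -1) $ip = 0  ; parameters of model 0
```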
PatternHeight
VPARAMETER
5403
Height of the region of interest of the Pattern. This is the sample pattern for which the Pattern Locator
searches.
Figure 14 Illustration of Pattern Height and Width: the Width and Height of the region of interest, in the Pattern coordinate system.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5403, index, object) = value
value =VPARAMETER (sequence, tool, 5403, index, object)
V+ VPARAMETER (sequence, tool, 5403, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5403, index, object)
Remarks
This property is expressed in calibrated units when CalibratedUnitsEnabled is set to True. Otherwise,
it is expressed in pixels.
Type
Long
Range
Greater than or equal to three pixels. The minimum pattern size is 3 x 3 pixels.
Parameters
$ip
IP address of the vision server.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
ID
5403: The value used to reference this property.
index
N/A
object
N/A
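As a usage sketch, the pattern region of interest can be dimensioned as follows in V+. The indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Define a 100 x 50 (width x height) pattern region of interest;
; values are in calibrated units or pixels per CalibratedUnitsEnabled
$ip = "192.168.145.68"
VPARAMETER (1, 1, 5402, -1, -1) $ip = 100 ; PatternWidth
VPARAMETER (1, 1, 5403, -1, -1) $ip = 50  ; PatternHeight
```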
PatternPositionX
VPARAMETER
5400
X coordinate of the center of the pattern region of interest. This is the sample pattern for which the
Pattern Locator searches.
Figure 15 Illustration of the Pattern location: the (X,Y) position of the pattern region of interest, expressed in the reference coordinate system.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5400, index, object) = value
value =VPARAMETER (sequence, tool, 5400, index, object)
V+ VPARAMETER (sequence, tool, 5400, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5400, index, object)
Remarks
This property is expressed in calibrated units when CalibratedUnitsEnabled is set to True. Otherwise, it is expressed in pixels.
Type
Long
Range
Unbounded
Parameters
$ip
IP address of the vision server.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
ID
5400: The value used to reference this property.
index
N/A
object
N/A
PatternPositionY
VPARAMETER
5401
Y coordinate of the center of the pattern region of interest. This is the sample pattern for which the
Pattern Locator searches.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5401, index, object) = value
value =VPARAMETER (sequence, tool, 5401, index, object)
V+ VPARAMETER (sequence, tool, 5401, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5401, index, object)
Remarks
This property is expressed in calibrated units when CalibratedUnitsEnabled is set to True. Otherwise,
it is expressed in pixels.
Type
Long
Range
Unbounded
Parameters
$ip
IP address of the vision server.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
ID
5401: The value used to reference this property.
index
N/A
object
N/A
PatternRotation
VPARAMETER
5404
Angle of rotation of the pattern region of interest. This is the sample pattern for which the Pattern
Locator searches.
Figure 16 Illustration of Pattern Rotation: the angle of rotation of the Pattern coordinate system relative to the reference coordinate system.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5404, index, object) = value
value =VPARAMETER (sequence, tool, 5404, index, object)
V+ VPARAMETER (sequence, tool, 5404, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5404, index, object)
Type
Double
Range
Minimum: -180 degrees.
Maximum: +180 degrees.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
5404. The parameter value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server. Applies to V+ syntax only.
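As a usage sketch, the pattern position and rotation can be set together in V+. The coordinate values, indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Position the pattern region of interest at (120.5, 80.0),
; rotated 45 degrees relative to the reference coordinate system
$ip = "192.168.145.68"
VPARAMETER (1, 1, 5400, -1, -1) $ip = 120.5 ; PatternPositionX
VPARAMETER (1, 1, 5401, -1, -1) $ip = 80.0  ; PatternPositionY
VPARAMETER (1, 1, 5404, -1, -1) $ip = 45    ; PatternRotation
```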
PatternWidth
VPARAMETER
5402
Width of the pattern region of interest. This is the sample pattern for which the Pattern Locator searches.
Figure 17 Illustration of Pattern Height and Width: the Width and Height of the region of interest, in the Pattern coordinate system.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5402, index, object) = value
value =VPARAMETER (sequence, tool, 5402, index, object)
V+ VPARAMETER (sequence, tool, 5402, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5402, index, object)
Remarks
This property is expressed in calibrated units when CalibratedUnitsEnabled is set to True. Otherwise,
it is expressed in pixels.
Type
Long
Range
Greater than or equal to three pixels. The minimum pattern size is 3 x 3 pixels.
Parameters
$ip
IP address of the vision server.
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
ID
5402: The value used to reference this property.
index
N/A
object
N/A
PerimeterResultsEnabled
VPARAMETER
1602
Enables the computation of the following blob properties: BlobRawPerimeter, BlobConvexPerimeter and
BlobRoundness.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 1602, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 1602, index, object)
V+ VPARAMETER (sequence_index, tool_index, 1602, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 1602, index, object)
Type
Boolean
Range
Value
Description
1
The perimeter properties will be computed.
0
No perimeter properties will be computed.
Parameters
$ip
IP address of the vision server.
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
1602: the value used to reference this property
index
N/A
object
N/A
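As a usage sketch, perimeter computation can be enabled as follows in V+. The indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Enable perimeter computation so that BlobRawPerimeter,
; BlobConvexPerimeter and BlobRoundness are available as results
$ip = "192.168.145.68"
VPARAMETER (1, 1, 1602, -1, -1) $ip = 1
```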
PixelHeight
VRESULT
1701
Height of a pixel of the sampled image. Pixels in the sampled image are square, therefore PixelHeight is always equal to PixelWidth. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1701, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1701, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1701. The value used to reference this property
index
N/A
frame
N/A
PixelWidth
VRESULT
1700
Width of a pixel of the sampled image. Pixels in the sampled image are square, therefore PixelWidth is always equal to PixelHeight. Read only.
Syntax
Micro V+ VRESULT (sequence_index, tool_index, instance_index, 1700, index, frame)
V+ VRESULT ($ip, sequence_index, tool_index, instance_index, 1700, index, frame)
Type
double
Range
Greater than 0.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
instance_index
Index of the instance for which you want the result.
ID
1700. The value used to reference this property
index
N/A
frame
N/A
PolarityMode
VPARAMETER
5100
Selects the type of polarity accepted for finding an entity. Polarity identifies the change in greylevel
values from the tool center (inside) towards the outside.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5100, index, object) = value
value =VPARAMETER (sequence, tool, 5100, index, object)
V+ VPARAMETER (sequence, tool, 5100, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5100, index, object)
Type
Long
Range
Value
Mode
Description
0
hsDarkToLight
The tool searches only for arc instances occurring at a dark to light
transition in greylevel values.
1
hsLightToDark
The tool searches only for arc instances occurring at a light to dark
transition in greylevel values.
2
hsEither
The tool searches only for arc instances occurring either at a light to
dark or dark to light transition in greylevel values.
3
hsDontCare
The tool searches only for arc instances occurring at any transition in
greylevel values including reversals in contrast along the arc, for
example on an unevenly colored background.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5100: the value used to reference this property
index
N/A
object
N/A
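As a usage sketch, the polarity mode can be set as follows in V+. The indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Accept both dark-to-light and light-to-dark transitions (hsEither)
$ip = "192.168.145.68"
VPARAMETER (1, 1, 5100, -1, -1) $ip = 2
```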
PositionConstraint
VPARAMETER
5223
Indexed property used to set the position constraint function for edge detection. Four points are used:
Base Left, Top Left, Top Right, Base Right.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 5223, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5223, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5223, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5223, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5223: the value used to reference this property
index
N/A
object
N/A
PositioningCoarseness
VPARAMETER
5431
Subsampling level at which the hypothesis refinement ends. High values provide coarser positioning and lower execution time than lower values. Read only if AutoCoarsenessSelectionEnabled is True.
Remarks
Can only be less than or equal to the SearchCoarseness value.
Type
long
Range
[1,2,4]
Related Topics
SearchCoarseness
PositioningLevel
VPARAMETER
561
Configurable effort level of the instance positioning process. A value of 0 will provide coarser positioning
and lower execution time. Conversely, a value of 10 will provide high accuracy positioning of object
instances.
Syntax
MicroV+ VPARAMETER (sequence, tool, 561, index, object) = value
value =VPARAMETER (sequence, tool, 561, index, object)
V+ VPARAMETER (sequence, tool, 561, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 561, index, object)
Type
Long
Range
Minimum: 0
Maximum: 10
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
The value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
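As a usage sketch, the effort level can be traded between speed and accuracy in V+. The indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Favor speed with coarse positioning (0), or favor high-accuracy
; positioning at the cost of execution time (10)
$ip = "192.168.145.68"
VPARAMETER (1, 1, 561, -1, -1) $ip = 0   ; fastest, coarse
;VPARAMETER (1, 1, 561, -1, -1) $ip = 10 ; slowest, most accurate
```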
ProjectionMode
VPARAMETER
140
Projection mode used by the tool to detect edges.
Figure 18 Projection Modes (Radial and Annular) used by Arc Caliper and Arc Edge Caliper, showing the edge pair position between the two edges in each mode.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 140, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 140, index, object)
V+ VPARAMETER (sequence_index, tool_index, 140, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 140, index, object)
Type
Long
Range
Value
Projection Mode
Description
0
hsProjectionAnnular
Annular projection is used to find edges that are aligned with the
median annulus, such as arcs on concentric circles.
1
hsProjectionRadial
Radial projection is used to find edges aligned along radial projections, much like the spokes of a wheel.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
140: the value used to reference this property
index
N/A
object
N/A
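As a usage sketch, the projection mode can be selected as follows in V+. The indices, IP address, and placeholder -1 arguments are assumptions for illustration.

```
; Use radial projection (spoke-like) edge detection
$ip = "192.168.145.68"
VPARAMETER (1, 1, 140, -1, -1) $ip = 1 ; hsProjectionRadial
```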
RecognitionLevel
VPARAMETER
550
Configurable effort level of the search process. A low value leads to a faster search that may miss instances that are partly occluded. Conversely, a value of 10 is useful for finding partly occluded objects in cluttered or noisy images, or for models made up of small features at the Outline Level.
Syntax
MicroV+ VPARAMETER (sequence, tool, 550, index, object) = value
value =VPARAMETER (sequence, tool, 550, index, object)
V+ VPARAMETER (sequence, tool, 550, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 550, index, object)
Type
Long
Range
Minimum: 1
Maximum: 10
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
550: the value used to reference this property
index
N/A
object
N/A
Reset
VPARAMETER
5500
Resets the data currently stored for the tool.
Syntax
MicroV+ VPARAMETER (sequence, tool, 5500, index, object) = value
value =VPARAMETER (sequence, tool, 5500, index, object)
V+ VPARAMETER (sequence, tool, 5500, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 5500, index, object)
Type
Long
Range
Not applicable
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5500: the value used to reference this property
index
N/A
object
N/A
Result
VRESULT
2300
Global result of the Results Inspection Tool for the specified output frame. The Results Inspection Tool generates an output frame for each input frame that obtains a Pass result after application of the global Operator. The number of frames output by the tool is returned by the ResultCount property.
Remarks
By default, the Result and ResultFilter properties return results for Output Frames that receive a
Pass result. This behavior can be modified in the tool interface though the OutputFrames and
OutputResults (advanced) parameters.
Range
Value
Result Name
1
Pass
0
Fail
Related Topics
FilterResult
ResultCount
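As a usage sketch, ResultCount and Result can be combined to inspect every output frame in V+. The tool index, IP address, the placeholder -1 arguments, and the mapping of the frame argument to the output frame number are assumptions for illustration.

```
; Count the output frames, then read the global result of each
$ip = "192.168.145.68"
count = VRESULT($ip, 1, 3, 1, 1010, -1, -1)  ; ResultCount
FOR i = 1 TO count
    r = VRESULT($ip, 1, 3, 1, 2300, -1, i)   ; Result of frame i
    IF r == 1 THEN
        TYPE "Frame ", i, ": Pass"
    ELSE
        TYPE "Frame ", i, ": Fail"
    END
END
```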
ResultCount
VRESULT
1010
The number of frames that were output by the Results Inspection Tool.
Range
Less than or equal to IntermediateResultCount.
Related Topics
FilterResult
Result
RobotConfiguration
VPARAMETER
10400
Specifies a LEFTY or RIGHTY configuration required to define the InverseKinematics property for an
arm-mounted camera.
Type
Long
Range
Value
Definition
0
Sets a RIGHTY robot configuration.
1
Sets a LEFTY robot configuration.
Example
See the InverseKinematics property for an example of this property and related properties.
Related Properties
RobotXPosition
RobotYPosition
VisionXPosition
VisionYPosition
VisionRotation
InverseKinematics
RobotXPosition
VPARAMETER
10404
X coordinate of a location in the robot frame of reference required for the InverseKinematics property.
Type
Long
Example
See the InverseKinematics property for an example of this property and related properties.
Related Properties
RobotYPosition
RobotConfiguration
VisionXPosition
VisionYPosition
VisionRotation
InverseKinematics
RobotYPosition
VPARAMETER
10405
Y coordinate of a location in the robot frame of reference required for the InverseKinematics property.
Type
Long
Example
See the InverseKinematics property for an example of this property and related properties.
Related Properties
RobotXPosition
RobotConfiguration
VisionXPosition
VisionYPosition
VisionRotation
InverseKinematics
SamplingStep
VPARAMETER
122
SamplingStep defines the step used in the tool's last execution to sample the tool's region of interest from the input image. This step is expressed either in pixels or in millimeters, as defined by
CalibratedUnitsEnabled. All pixels in the sampled rectangle are square and of the same size. The
sampling step represents the height and the width of a sampled pixel. Read only.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 122, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 122, index, object)
V+ VPARAMETER (sequence_index, tool_index, 122, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 122, index, object)
Remarks
The sampling step can either be default or custom, depending on the value of the
SamplingStepCustomEnabled property.
Type
Single
Range
Minimum: Greater than zero.
Maximum: Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
122: the value used to reference this property
index
N/A
object
N/A
Related Properties
SamplingStepCustom
SamplingStepCustomEnabled
SamplingStepDefault
CalibratedUnitsEnabled
SamplingStepCustom
VPARAMETER
124
When SamplingStepCustomEnabled is True, defines the sampling step used to sample the region of
interest from the input image. When SamplingStepCustomEnabled is False, SamplingStepDefault
is used instead as the sampling step.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 124, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 124, index, object)
V+ VPARAMETER (sequence_index, tool_index, 124, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 124, index, object)
Remark
A custom sampling step is usually not recommended.
Type
Single
Range
Minimum: Greater than zero.
Maximum: Boundaries of the input image.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
124: the value used to reference this property
index
N/A
object
N/A
Related Properties
SamplingStepDefault
SamplingStepCustomEnabled
SamplingStepCustomEnabled
VPARAMETER
121
When enabled, the tool uses the user-defined sampling step (SamplingStepCustom) instead of the
default optimal sampling step to sample the region of interest from the input image.
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 121, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 121, index, object)
V+ VPARAMETER (sequence_index, tool_index, 121, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 121, index, object)
Remark
A custom sampling step is usually not recommended.
Type
Boolean
Range
Value
Description
0
The tool uses the default sampling step.
1
The default sampling step is overridden by SamplingStepCustom.
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
121: the value used to reference this property
index
N/A
object
N/A
Related Properties
SamplingStepCustom
SamplingStep
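As a usage sketch, a custom sampling step can be applied as follows in V+. The indices, IP address, step value, and placeholder -1 arguments are assumptions for illustration.

```
; Override the default sampling step (usually not recommended);
; the step is in pixels or millimeters per CalibratedUnitsEnabled
$ip = "192.168.145.68"
VPARAMETER (1, 1, 121, -1, -1) $ip = 1   ; SamplingStepCustomEnabled
VPARAMETER (1, 1, 124, -1, -1) $ip = 0.5 ; SamplingStepCustom
```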
SamplingStepDefault
VPARAMETER
123
Default optimal sampling step used by the tool to sample the region of interest from the input image.
This sampling step is used by the tool if SamplingStepCustomEnabled is False. Read only
Syntax
Micro V+ VPARAMETER (sequence_index, tool_index, 123, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 123, index, object)
V+ VPARAMETER (sequence_index, tool_index, 123, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 123, index, object)
Remark
A custom sampling step is usually not recommended.
Type
Single
Range
Minimum: Greater than zero
Maximum: Boundaries of the input grey-scale Image
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
123: the value used to reference this property
index
N/A
object
N/A
Related Properties
SamplingStepCustomEnabled
SamplingStepCustom
SamplingStep
SaveBeltCalibration
VPARAMETER
10325
Saves the calibration data for the specified conveyor belt to a calibration file (*.hscal).
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Properties
SaveRobotCalibration
LoadBeltCalibration
SaveCameraSettings
VPARAMETER
10326
Saves the settings and properties of the selected camera to a file (*.hscam).
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
LoadCameraSettings
SaveColorCalibration
VPARAMETER
10322
Saves the color calibration data for the specified camera to a calibration file (*.hscal).
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
SaveVisionCalibration
LoadColorCalibration
SaveImage
VPARAMETER
10327
Saves the current image to file. Various file formats are available, including the Adept hig file format.
The hig format saves the calibration information in the image file. Files with this format can be reused
in AdeptSight applications, through an Emulation device.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
SaveProject
VPARAMETER
10320
Saves the current vision project to a file (*.hsproj). The data saved to the project file includes the configuration of the sequences in the project and the system device settings, including the calibration data of the devices.
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
LoadProject
SaveRobotCalibration
VPARAMETER
10324
Saves the calibration data for the specified robot to a calibration file (*.hscal).
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Topics
SaveSequence
LoadRobotCalibration
SaveSequence
VPARAMETER
10321
Saves the specified sequence to a file (*.hsseq). All sequences in a specific vision project can be saved
to a vision project file. See SaveProject.
Type
Long
Example
The following example illustrates the use of all Load and Save properties in AdeptSight.
.PROGRAM aaa()
PARAMETER VTIMEOUT = 10
$pc_ip = "192.168.145.68"
load_project = 10300
load_sequence = 10301
load_color_cal = 10302
load_vision_cal = 10303
load_robot_cal = 10304
load_belt_cal = 10305
load_cam_settin = 10306
save_project = 10320
save_sequence = 10321
save_color_cal = 10322
save_vision_cal = 10323
save_robot_cal = 10324
save_belt_cal = 10325
save_cam_settin = 10326
empty_parameter = -1
camera_index = 1
robot_index = 1
$path = "C:\temp\ASLoading\"
; Load Test Project
CALL as.load(load_project, $path+"Project2.hsproj", $pc_ip, empty_parameter,
empty_parameter)
; Save Testing
CALL as.save(save_project, $path+"Project1.hsproj", $pc_ip, empty_parameter,
empty_parameter)
CALL as.save(save_project, $path+"Project2.hsproj", $pc_ip, empty_parameter,
empty_parameter)
CALL as.save(save_project, $path+"Project3.hsproj", $pc_ip, empty_parameter,
empty_parameter)
CALL as.save(save_sequence, $path+"seq1.xml", $pc_ip, 1, empty_parameter)
CALL as.save(save_sequence, $path+"seq2.xml", $pc_ip, 2, empty_parameter)
CALL as.save(save_cam_settin, $path+"CamSettings.xml", $pc_ip, camera_index,
empty_parameter)
CALL as.save(save_color_cal, $path+"ColorCalibration1.hscal", $pc_ip, camera_index,
empty_parameter)
CALL as.save(save_vision_cal, $path+"VisionCalibration1.hscal", $pc_ip,
camera_index, empty_parameter)
CALL as.save(save_robot_cal, $path+"RobotCalibration1.hscal", $pc_ip, camera_index,
robot_index)
CALL as.save(save_belt_cal, $path+"BeltCalibration1.hscal", $pc_ip, camera_index,
robot_index)
; Load an empty project
TYPE "Clear Project"
PAUSE
; Load Testing
CALL as.load(load_project, $path+"Project1.hsproj", $pc_ip, empty_parameter,
empty_parameter)
CALL as.load(load_project, $path+"Project4.hsproj", $pc_ip, empty_parameter,
empty_parameter)
TYPE "The previous fail is normal"
CALL as.load(load_project, $path+"Project2.hsproj", $pc_ip, empty_parameter,
empty_parameter)
; Load an empty project
TYPE "Clear calibrations, change camera settings and remove all sequences"
TYPE "keep the cameras, robots and controllers"
PAUSE
CALL as.load(load_sequence, $path+"seq1.xml", $pc_ip, 1, empty_parameter)
CALL as.load(load_sequence, $path+"seq2.xml", $pc_ip, 3, empty_parameter)
TYPE "The previous fail is normal"
CALL as.load(load_sequence, $path+"seq2.xml", $pc_ip, 2, empty_parameter)
CALL as.load(load_cam_settin, $path+"CamSettings.xml", $pc_ip, camera_index,
empty_parameter)
CALL as.load(load_color_cal, $path+"ColorCalibration1.hscal", $pc_ip, camera_index,
empty_parameter)
CALL as.load(load_vision_cal, $path+"VisionCalibration1.hscal", $pc_ip,
camera_index, empty_parameter)
CALL as.load(load_robot_cal, $path+"RobotCalibration1.hscal", $pc_ip, camera_index,
robot_index)
CALL as.load(load_belt_cal, $path+"BeltCalibration1.hscal", $pc_ip, camera_index,
robot_index)
.END
.PROGRAM as.load(load_type, $filename, $ip, parameter_index, object_index)
old_timeout = PARAMETER(VTIMEOUT)
PARAMETER VTIMEOUT = 10*old_timeout
file_index = 0
$as.filename[file_index] = $filename
TYPE "Loading... Please wait"
;V+
VPARAMETER(-1, -1, load_type, parameter_index, object_index) $ip = file_index
;uV+
;VPARAMETER(, , load_type, parameter_index, object_index) = file_index
WHILE TRUE DO
value = 4
value = VPARAMETER($ip, -1, -1, load_type, parameter_index, object_index)
IF (value == 3) THEN
TYPE "Load Succeeded"
EXIT
END
IF (value == 4) THEN
TYPE "Load Failed"
EXIT
END
END
PARAMETER VTIMEOUT = old_timeout
.END
.PROGRAM as.save(save_type, $filename, $ip, parameter_index, object_index)
old_timeout = PARAMETER(VTIMEOUT)
PARAMETER VTIMEOUT = 10*old_timeout
file_index = 0
$as.filename[file_index] = $filename
TYPE "Saving... Please wait"
;V+
VPARAMETER(-1, -1, save_type, parameter_index, object_index) $ip = file_index
;uV+
;VPARAMETER(, , save_type, parameter_index, object_index) = file_index
WHILE TRUE DO
value = 4
value = VPARAMETER($ip, -1, -1, save_type, parameter_index, object_index)
IF (value == 3) THEN
TYPE "Save Succeeded"
EXIT
END
IF (value == 4) THEN
TYPE "Save Failed"
EXIT
END
END
PARAMETER VTIMEOUT = old_timeout
.END
Related Properties
SaveProject
LoadSequence
SaveVisionCalibration
VPARAMETER
10323
Saves the vision calibration data for the specified camera to a calibration file (*.hscal).
Syntax
V+ VPARAMETER(-1, -1, save_type, parameter_index, object_index) $ip = file_index
MicroV+ VPARAMETER(, , save_type, parameter_index, object_index) = file_index
Type
Long
Example
See SaveSequence for an example of the syntax and the use of all Load and Save properties.
Related Properties
SaveSequence
SaveColorCalibration
LoadVisionCalibration
ScoreThreshold
VPARAMETER
5240
Minimum score required to accept an edge. The score of an edge is returned by the EdgeScore property.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5240, index, object) = value
value =VPARAMETER (sequence_index, tool_index, 5240, index, object)
V+ VPARAMETER (sequence_index, tool_index, 5240, index, object) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5240, index, object)
Type
double
Range
Minimum: 0.0
Maximum: 1.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5240: the value used to reference this property
index
N/A
object
Index of the frame containing the edge pair.
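This property can be adjusted directly from V+ code. The snippet below is a minimal sketch, not a definitive example: the sequence index (1), tool index (2), and the '1' values passed for the unused index argument and for the frame (object) argument are assumed placeholders.

; Sketch only: raise the minimum edge acceptance score to 0.75
; Sequence, tool, index and object values are assumed placeholders
VPARAMETER (1, 2, 5240, 1, 1) $ip = 0.75
value = VPARAMETER ($ip, 1, 2, 5240, 1, 1)
TYPE "ScoreThreshold is now ", value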
SearchBasedOnOutlineLevelOnly
VPARAMETER
521
When set to True, the Locator does not use the models at the Detail level for the positioning process.
This mode can improve speed when only a coarse positioning of object instances is required.
Syntax
MicroV+ VPARAMETER (sequence, tool, 521, index, object) = value
value = VPARAMETER (sequence, tool, 521, index, object)
V+ VPARAMETER (sequence, tool, 521, index, object) $ip = value
value = VPARAMETER ($ip, sequence, tool, 521, index, object)
Type
Boolean
Range
Value  Description
1      Only Outline Level models are used to position instances.
0      Both Outline Level and Detail Level models are used to position instances.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
parameter
521: the value used to reference this property.
index
N/A
object
N/A
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
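As a hedged sketch of toggling this property from V+ (the sequence and tool indexes, and the '1' placeholders for the unused index and object arguments, are assumptions):

; Sketch only: restrict positioning to Outline Level models for a faster, coarser search
VPARAMETER (1, 1, 521, 1, 1) $ip = 1
value = VPARAMETER ($ip, 1, 1, 521, 1, 1)
TYPE "SearchBasedOnOutlineLevelOnly = ", value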
SearchCoarseness
VPARAMETER
5430
Subsampling level used to find pattern match hypotheses. Higher values provide a coarser search and a
shorter execution time. Read only if AutoCoarsenessSelectionEnabled is True.
Remarks
Can only be higher than or equal to the PositioningCoarseness value.
Type
long
Range
[1,2,4,8,16,32]
Related Topics
PositioningCoarseness
SearchMode
VPARAMETER
5101
Specifies the method used by a Finder tool to select a hypothesis.
Type
long
Range
The range depends on the type of entity. For arcs (Arc Finder Tool), the range is:
Value  Search Mode                        Description
0      hsBestArc                          Selects the best arc according to hypothesis strength.
1      hsArcClosestToGuideline            Selects the arc hypothesis closest to the Guideline.
2      hsArcClosestToInside               Selects the arc hypothesis closest to the inside of the tool Search Area (closest to the tool center).
3      hsArcClosestToOutside              Selects the arc hypothesis closest to the outside of the tool Search Area (furthest from the tool center).
For lines (Line Finder Tool), the range is:
Value  Search Mode                        Description
0      hsBestLine                         Selects the best line according to hypothesis strength.
1      hsLineClosestToGuideline           Selects the line hypothesis closest to the Guideline.
2      hsLineWithMaximumNegativeXOffset   Selects the line hypothesis closest to the Search Area bound that is at maximum negative X offset.
3      hsLineWithMaximumPositiveXOffset   Selects the line hypothesis closest to the Search Area bound that is at maximum positive X offset.
For points (Point Finder Tool), the range is:
Value  Search Mode                        Description
1      hsPointClosestToGuideline          Selects the point hypothesis closest to the Guideline.
2      hsPointWithMaximumNegativeXOffset  Selects the point hypothesis closest to the Search Area bound that is at maximum negative X offset.
3      hsPointWithMaximumPositiveXOffset  Selects the point hypothesis closest to the Search Area bound that is at maximum positive X offset.
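For example, an Arc Finder can be switched from strongest-hypothesis selection to Guideline-based selection. The snippet below is a sketch; the sequence and tool indexes, and the '1' placeholders for the unused index and object arguments, are assumptions:

; Sketch only: select the arc hypothesis closest to the Guideline (mode 1)
VPARAMETER (1, 3, 5101, 1, 1) $ip = 1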
SearchTime
VRESULT
1303
Time elapsed (in milliseconds) for the search process during the last execution of the Locator tool. Read
only.
Syntax
MicroV+ VRESULT (sequence, tool, instance, 1303, index, frame)
V+ VRESULT ($ip, sequence, tool, instance, 1303, index, frame)
Type
Long
Range
Greater than 0.
Parameters
sequence
Index of the vision sequence. The first sequence is '1'.
tool
Index of the tool in the vision sequence. The first tool in the sequence is '1'.
instance
Index of the instance for which you want the result.
result
1303: the value used to reference this property.
index
N/A
frame
Index of frame that contains the specified instance.
Range: [1, ResultCount -1]
$ip
IP address of the vision server, in standard IP address format. Applies to V+
syntax only.
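A sketch of reading this result from V+ (all index values below are assumed placeholders; instance and frame must refer to an instance found during the last execution):

; Sketch only: report the search time of the last Locator execution
search.time = VRESULT ($ip, 1, 1, 1, 1303, 1, 1)
TYPE "Locator search time (ms): ", search.time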
SegmentationDark
VPARAMETER
5005
Indexed property used to access the Dark Segmentation function. Two points are available, from left to
right: Top and Bottom.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5005, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5005, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5005, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5005, index, constraint_index)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5005: the value used to reference this property
index
N/A.
constraint_index
Dark segmentation function point index
(hsSegmentationDarkPoint)
1: DarkTop point
2: DarkBottom point
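A sketch of setting both Dark segmentation points from V+ (the sequence and tool indexes and the '1' placeholder for the unused index argument are assumptions):

; Sketch only: set the Dark segmentation thresholds
VPARAMETER (1, 1, 5005, 1, 1) $ip = 40 ; DarkTop point
VPARAMETER (1, 1, 5005, 1, 2) $ip = 10 ; DarkBottom point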
SegmentationDynamicDark
VPARAMETER
5009
Indexed property used to access the Dynamic Dark Segmentation function. Two points are available,
from left to right: Top and Bottom.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5009, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5009, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5009, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5009, index, constraint_index)
Type
long
Range
Minimum: 0.0
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5009: the value used to reference this property
index
N/A.
constraint_index
Dynamic Dark segmentation function point index
(hsSegmentationDarkPoint)
1: DarkTop point
2: DarkBottom point
SegmentationDynamicInside
VPARAMETER
5010
Indexed property used to access the Dynamic Inside Segmentation function. Four points are available,
from left to right: Bottom Left, Top Left, Top Right and Bottom Right.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5010, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5010, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5010, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5010, index, constraint_index)
Type
long
Range
Minimum: 0.0
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5010: the value used to reference this property
index
N/A.
constraint_index
Dynamic Inside segmentation function point index.
(hsSegmentationInsidePoint)
0: hsInsideBottomLeft point
1: hsInsideTopLeft point
2: hsInsideTopRight point
3: hsInsideBottomRight point
SegmentationDynamicLight
VPARAMETER
5008
Indexed property used to access the Dynamic Light Segmentation function. Two points are available,
from left to right: Bottom and Top.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5008, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5008, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5008, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5008, index, constraint_index)
Type
long
Range
Minimum: 0.0
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5008: the value used to reference this property
index
N/A.
constraint_index
Dynamic Light segmentation function point index
(hsSegmentationLightPoint)
1: hsLightBottom point
2: hsLightTop point
SegmentationDynamicOutside
VPARAMETER
5011
Indexed property used to access the Dynamic Outside Segmentation function. Four points are
available, from left to right: Top Left, Bottom Left, Bottom Right and Top Right.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5011, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5011, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5011, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5011, index, constraint_index)
Type
long
Range
Minimum: 0.0
Maximum: 100.0
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5011: the value used to reference this property
index
N/A.
constraint_index
Dynamic Outside segmentation function point index.
(hsSegmentationOutsidePoint)
0: hsOutsideTopLeft point
1: hsOutsideBottomLeft point
2: hsOutsideBottomRight point
3: hsOutsideTopRight point
SegmentationInside
VPARAMETER
5006
Indexed property used to access the Inside Segmentation function. Four points are available: from left
to right, Bottom Left, Top Left, Top Right and Bottom Right.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5006, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5006, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5006, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5006, index, constraint_index)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5006: the value used to reference this property
index
N/A.
constraint_index
Inside segmentation function point index.
(hsSegmentationInsidePoint)
0: hsInsideBottomLeft point
1: hsInsideTopLeft point
2: hsInsideTopRight point
3: hsInsideBottomRight point
SegmentationLight
VPARAMETER
5004
Indexed property used to access the Light Segmentation function. Two points are available, from left to
right: Bottom and Top.
Syntax
MicroV+ VPARAMETER (sequence_index, tool_index, 5004, index, constraint_index) = value
value = VPARAMETER (sequence_index, tool_index, 5004, index, constraint_index)
V+ VPARAMETER (sequence_index, tool_index, 5004, index, constraint_index) $ip = value
value = VPARAMETER ($ip, sequence_index, tool_index, 5004, index, constraint_index)
Type
long
Range
Minimum: 0
Maximum: 255
Parameters
$ip
IP address of the vision server
sequence_index
Index of the vision sequence. First sequence is '1'.
tool_index
Index of the tool in the vision sequence. First tool is '1'.
ID
5004: the value used to reference this property
index
N/A.
constraint_index
Light segmentation function point index
(hsSegmentatio