SquidNet

The Basics
• Cross-platform (WINDOWS, Linux, OS X)
• Graphical User interface (GUI)
• Command line interface
• Software Development Kit (API)
• Application Path Translations
• Path translation management
• Mixed environment rendering
• more…
Job Management
• Application-specific job request templates
• Priority queue
• Job control (suspend, cancel, re-queue, etc…)
• Job status monitoring
• Email/SMS notifications
• SSH/SFTP/FTP Image Transfer
• Tile rendering
• Efficient load balancing
• Multiple queuing algorithms
• more…
Network Management
• Automatic node detection
• Client-Master-Slave configuration
• Node pool management
• Active/Inactive node assignments
• No configuration files
• Usage statistics (timeline, CPU, memory, etc…)
• more…
Power Management
• Automatic render farm shutdown
• Wake-On-LAN (WOL) management (see the sketch after this list)
• Remote management (shutdown, reboot, etc…)
• more…
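SquidNet's own WOL implementation isn't shown in this deck, but waking a node generally means broadcasting a standard "magic packet". A minimal Python sketch, assuming a placeholder MAC address for one of your render nodes:

```python
# Minimal Wake-on-LAN sketch: broadcasts a standard "magic packet"
# (6 x 0xFF followed by the target MAC repeated 16 times) over UDP.
# The MAC address below is a placeholder, not a real node's address.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")  # placeholder MAC for a render node
```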
[Diagram: a typical SquidNet farm. Four client workstations (WORKSTATION01–04, LAN 192.168.0.50–53/24), the MASTER (LAN 192.168.0.75/24), eight render nodes (RENDERNODE01–08, 192.168.0.100–107/24), and network storage (192.168.0.108/24) connect through a network switch; a modem (LAN 192.168.0.1/24, WAN 76.23.14.23) provides Internet access.]
• SquidNet Client Workstations: client workstations send render requests to the MASTER controller. They store ALL scene content on the NAS and retrieve render results from the NAS.
• SquidNet Master: the MASTER controller manages slave rendering operations (start, stop, etc…).
• SquidNet Slaves: slaves access scene content on the NAS and store rendered images on the NAS; they return render results to the MASTER controller.
NAS Storage
• Network folder: A directory on a computer or NAS that is available to all computers on the network.
• UNC Path: A reference to a folder that's accessible on the local network. For example, \\NAS-SERVER\maya-projects is a UNC path.
• Mapped Drive: A WINDOWS-only shortcut to a network folder. For example, local mapped drive M:\ can point to UNC path \\NAS-SERVER\maya-projects. In this case, local drive M:\ and UNC path \\NAS-SERVER\maya-projects both point to the same content (see the sketch below).
[Diagram: NAS box NAS-SERVER exports folder /Volume_1/maya-projects. All of these paths point to the same physical folder on the NAS:]
• WINDOWS: M:\myscene.mb or \\NAS-SERVER\maya-projects\myscene.mb
• OS X: /Volumes/Volume_1/maya-projects/myscene.mb
• Linux: /mnt/maya-projects/myscene.mb
Additional information: http://en.wikipedia.org/wiki/UNC_path#Uniform_Naming_Convention
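To make the mapped-drive/UNC equivalence concrete, here is a small Python sketch that resolves a mapped-drive path to its UNC form. The drive table mirrors the M:\ example above; it is an illustration, not part of SquidNet:

```python
# Sketch: resolving a WINDOWS mapped-drive path to its UNC equivalent,
# so the same file can be referenced consistently across nodes.
# The drive->share table mirrors the example above and is assumed.
DRIVE_MAP = {"M:": r"\\NAS-SERVER\maya-projects"}

def to_unc(path: str) -> str:
    drive = path[:2].upper()
    if drive in DRIVE_MAP:
        return DRIVE_MAP[drive] + path[2:]
    return path  # already a UNC path or not a mapped drive

print(to_unc(r"M:\myscene.mb"))  # \\NAS-SERVER\maya-projects\myscene.mb
```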
NAS Storage
• Make sure all network folders are created.
• Make sure all network folders (SAMBA, NAS, etc…) are accessible (read/write permissions) from all render farm nodes (a quick check is sketched below, after the WINDOWS notes).
For WINDOWS machines:
• All SquidNet installation accounts MUST have ADMINISTRATOR privileges.
• All nodes MUST have the same ADMIN account name AND the same password on each node.
• WARNING: WINDOWS (non-Server versions) limits the number of concurrent connections to network folders. If your farm has more than 5 nodes, it's recommended that you use a NAS for content storage.
• Check with your IT professional on configuration settings.
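A minimal sketch of the read/write check described above, in Python, runnable from each farm node. The share path is the example NAS share used throughout this deck:

```python
# Sketch: a quick read/write sanity check to run on each farm node,
# verifying the node can both write to and read from the shared folder.
import os, uuid

def check_share(share: str) -> bool:
    probe = os.path.join(share, f".squidnet-probe-{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        with open(probe) as f:
            readable = f.read() == "ok"
        os.remove(probe)
        return readable
    except OSError:
        return False  # missing folder or insufficient permissions

print(check_share(r"\\NAS-SERVER\maya-projects"))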
[Diagram: verify account permissions. All nodes need read/write access to the NAS storage.]
WINDOWS:
• Must be installed when logged in under an ADMIN account.
• During installation, enter login information for any ADMIN account. This ADMIN account must exist on all WINDOWS nodes and have the same password. It must also exist on the NAS server.
• SquidNet Server runs as a background service.
LINUX and OS X:
• Install from a shell under the root account.
• Standard tarball installation: untar and run the squidnet-install.sh script.
[Screenshots: use the root account for installation; enter the local node computer name and the name/password for the ADMIN account; use the DMG installer for installation (OS X).]
• On each render farm computer, SquidNet runs silently as a background process, waiting for commands from the local user interface, the SDK API, the command line interface, or another node on the farm (a toy illustration follows the diagram below).
• On WINDOWS, background processes are called services. On OS X and Linux they're called daemons. Generically, background processes are called "services" on any platform.
• On the MASTER node, the local UI communicates directly with the local SquidNet service.
• On client nodes, the local UI connects with the local SquidNet service AND with the MASTER service.
• The local UI on slave nodes only connects with the local service.
• It is never necessary to log in to the local node to get the SquidNet service running; the local service starts when the computer starts up.
SquidNet Background Service
[Diagram: the SquidNet background service accepts connections from the Graphical User Interface, the Command Line Interface, the SDK API interface, and remote SquidNet servers.]
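For illustration only (SquidNet's actual wire protocol, port, and API are not documented here), a toy Python service that waits for line-based commands the way a farm daemon might:

```python
# Illustrative only: a toy "service" loop that waits for line-based
# commands on a TCP port. A real farm daemon would dispatch commands
# to start/stop/status handlers; the port below is arbitrary.
import socketserver

class CommandHandler(socketserver.StreamRequestHandler):
    def handle(self):
        command = self.rfile.readline().decode().strip()
        self.wfile.write(f"ack: {command}\n".encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 9999), CommandHandler) as srv:
        srv.serve_forever()  # blocks, handling one command per connection
```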
• By default, user configuration settings (job profiles, application paths, etc…) are stored in the <install-path>\settings folder. If SquidNet is uninstalled and reinstalled, all user configuration settings will be lost. Therefore, it's recommended that the default folder location be changed in the preferences window.
• In a render farm where a single workstation will be used to submit jobs, the configuration path can be set to any local hard drive path (example: C:\Squidnet-config). Make sure to back up often.
• In multi-workstation environments, set the configuration path to a folder on a NAS box that all workstations have access to. This prevents having to duplicate the same settings on each workstation.
• The configuration settings folder is only used by submitting workstations. MASTER and SLAVE nodes do not need the configuration path set in their local user interface.
[Diagram: configuration path placement. A single workstation keeps its configuration path on a local drive; multiple workstations each point their configuration path at the same UNC path on NAS storage.]
4 different node types:
• PEER: The default node type when SquidNet is installed.
• CLIENT: Defines and submits job requests to the farm. Can process jobs at low priority, only when the user is logged out, or never.
• MASTER: Manages the render farm network. Can be configured to process jobs. Can also assign specific master-like permissions to client nodes.
• SLAVE: Processes job requests only.
[Diagram: Client-Master-Slave (CMS) layout with SquidNet clients, the SquidNet master, and SquidNet slaves.]
• When configuring a CMS setup, determine which node will be the MASTER first. Then set up the clients and slaves accordingly.
• To change the configuration, convert all CMS nodes back to PEERS, starting with the slaves and clients. Un-configure the MASTER node last.
• Render Farm Pool: A set of nodes on a render farm allocated to perform a specific task or specific operations.
• SquidNet has a default pool called "NETWORK" that all nodes are members of. By default, all jobs render to the "NETWORK" pool.
• Typical scenario: based on node performance, segment render farm nodes so that higher-priority jobs always get processed on faster machines (a sketch follows the diagram below).
[Diagram: pool management. Defined pools: NETWORK, NIGHTLY, STAFF, HIGH PERFORMANCE, and LOW PERFORMANCE. Available nodes receive pool assignments; RENDERNODE10, for example, belongs to several of these pools.]
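A minimal sketch of pool membership, assuming hypothetical assignments modeled on the pool and node names above; SquidNet manages this through its UI rather than code:

```python
# Sketch: pool membership as a simple mapping. Every node is a member
# of the default NETWORK pool; additional pools segment the farm by
# performance. Names follow the examples above; assignments are made up.
POOLS = {
    "NETWORK":          ["RENDERNODE01", "RENDERNODE02", "RENDERNODE10"],
    "HIGH PERFORMANCE": ["RENDERNODE01", "RENDERNODE02"],
    "LOW PERFORMANCE":  ["RENDERNODE10"],
}

def nodes_for_job(pool: str = "NETWORK") -> list[str]:
    # By default, jobs render to the NETWORK pool.
    return POOLS.get(pool, POOLS["NETWORK"])

print(nodes_for_job("HIGH PERFORMANCE"))  # faster machines only
```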
• In order to process job requests, SquidNet needs to know where applications are installed on each node.
• Different versions of the same application can be installed on each node.
• Use the Application Path Manager to define "profiles" that contain absolute paths to a given rendering application on each render node (a sketch follows the diagram below).
• Create one profile for each application.
[Diagram: registering installation paths with the Application Path Manager. Each profile can have multiple entries, but only one per node. Examples: LightWave on RENDERNODE01 (C:\Program Files\...\lwsn.exe), Modo on RENDERNODE02 (C:\Program Files\...\modo_cl.exe), 3DSMAX on RENDERNODE03 (C:\Program Files\...\3dsmaxcmd.exe), Maya on RENDERNODE04 (C:\Program Files\...\render.exe).]
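A sketch of a profile as a per-node table of absolute executable paths, one entry per node, mirroring the diagram above. The truncated installation paths are placeholders:

```python
# Sketch: an application "profile" as a per-node table of absolute
# executable paths, one entry per node. Paths below are truncated
# placeholders, as in the diagram.
PROFILES = {
    "LightWave": {"RENDERNODE01": r"C:\Program Files\...\lwsn.exe"},
    "Modo":      {"RENDERNODE02": r"C:\Program Files\...\modo_cl.exe"},
    "Maya":      {"RENDERNODE04": r"C:\Program Files\...\render.exe"},
}

def renderer_path(app: str, node: str) -> str:
    # A node can only run jobs for an application its profile lists.
    return PROFILES[app][node]

print(renderer_path("Modo", "RENDERNODE02"))
```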
• Translation paths allow SquidNet to submit the same job to different platform types (WINDOWS, Linux, and OS X).
• Not needed if the same operating system platform is used throughout the farm.
• Each entry "maps" the same physical network share location to one translation path.
• Embed the $XPATH() macro in the template when substitution is required (a sketch follows the example below).
[Diagram: the same physical folder mapped to a single translation path: \\raid-server00\volume_1\SquidNet (WINDOWS), /mnt/raid/SquidNet (Linux), /Volumes/Volume_1/SquidNet (OS X).]
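A Python sketch of the translation idea, assuming the three platform roots above all map to the same share; the exact semantics of SquidNet's $XPATH() macro may differ:

```python
# Sketch: one shared folder, three platform-specific roots (from the
# example above). A job template would embed $XPATH() where the root
# must be swapped; SquidNet's exact macro semantics may differ.
ROOTS = {
    "windows": r"\\raid-server00\volume_1\SquidNet",
    "linux":   "/mnt/raid/SquidNet",
    "osx":     "/Volumes/Volume_1/SquidNet",
}

def translate(path: str, src: str, dst: str) -> str:
    # Swap the source platform's root for the destination's and
    # normalize separators (sufficient for WINDOWS -> POSIX).
    suffix = path[len(ROOTS[src]):].replace("\\", "/")
    return ROOTS[dst] + suffix

print(translate(r"\\raid-server00\volume_1\SquidNet\scenes\a.mb",
                "windows", "linux"))  # /mnt/raid/SquidNet/scenes/a.mb
```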
• Any object (maps, textures, etc…) embedded in a scene file MUST NOT be located on a local hard drive (C:\, D:\, etc…). It MUST be physically located on a network share (\\NAS-SERVER\maya-projects\maps…\...).
• If stored locally, jobs will render just fine on the node where the scene objects exist but WILL NOT render on remote nodes, because the objects are not present on those nodes' local drives (a checker is sketched after the examples below).
• Most applications will produce an error for any job that has inaccessible scene objects.
GOOD!! (network paths on NAS storage)
• \\NAS_SERVER\maya-projects\maps….\....
• \\NAS_SERVER\objects\textures\….\....
BAD!! (local references on local drives)
• C:\maya-projects\maps….\....
• D:\objects\textures\….\....
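A small checker in the spirit of the rule above: it flags drive-letter asset references as local (a remote node may not have the same drive or mapping) and accepts UNC paths. Illustrative only:

```python
# Sketch: flag scene asset references that live behind a drive letter
# (C:\, D:\, etc.) instead of a UNC network path, since a remote node
# may not have the same drive or mapping.
import re

LOCAL_DRIVE = re.compile(r"^[A-Za-z]:\\")  # matches C:\..., D:\..., etc.

def classify(path: str) -> str:
    if path.startswith("\\\\"):           # \\SERVER\share\... (UNC)
        return "GOOD (network path)"
    if LOCAL_DRIVE.match(path):
        return "BAD (drive-letter path; remote nodes may not see it)"
    return "unknown"

print(classify(r"C:\maya-projects\maps\rock.png"))
print(classify(r"\\NAS-SERVER\maya-projects\maps\rock.png"))
```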
• SquidNet uses a project-based framework to track job profiles.
• All submitted jobs are placed in specific project folders.
• At install time, a default project folder is created (SQUIDNET DEFAULT).
• Use the Project Manager to create new folders.
[Screenshot: the Project Manager, showing project folders and quick-launch buttons.]
• SquidNet job templates contain processing instructions for supported rendering and compositing applications.
• Each template contains application-specific and common fields that define how the job is to be processed.
• When a job is submitted, its template can be saved to a job profile. Job profiles can later be resubmitted with the same or altered processing parameters.
• Group job profiles according to project; use the Project Manager to define a new project.
[Screenshot: a job template with common fields and application-specific fields.]
• In render farms, a job queue is where rendering requests are stored for processing.
• Typically, jobs are processed in first-come, first-served (FIFO) order.
• With SquidNet, jobs are processed according to a user-defined priority level (0 through 24, with 0 being the highest).
• Client nodes submit jobs to the queue.
• The MASTER node manages the queue.
• Slaves get assigned jobs from the queue by the MASTER node.
[Diagram: the SquidNet job queue. Jobs n, n+1, n+2, … enter at IN and leave at OUT as they are scheduled.]
• Jobs at higher priority are always processed first (see the sketch below).
• Priority 0 (zero) is the highest priority; 24 is the lowest.
• Jobs with the same priority are processed on a first-come, first-served basis.
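These rules amount to a priority queue keyed on (priority, submission order). A minimal Python sketch, not SquidNet's scheduler:

```python
# Sketch: a priority queue where 0 is the highest priority and ties
# fall back to submission order (first-come, first-served), matching
# the scheduling rules described above.
import heapq, itertools

class JobQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker: submission order

    def submit(self, job: str, priority: int) -> None:
        assert 0 <= priority <= 24
        heapq.heappush(self._heap, (priority, next(self._order), job))

    def next_job(self) -> str:
        return heapq.heappop(self._heap)[2]

q = JobQueue()
q.submit("preview render", 10)
q.submit("client deadline", 0)
print(q.next_job())  # "client deadline" runs first
```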
• By default, SquidNet assigns one frame to each available processing node. The rendering application on each render node must load the scene file before any rendering operation can begin. For small-footprint scene files this is straightforward. However, for large-footprint scenes (200MB or larger) this can be extremely inefficient because of the time spent loading the scene file before processing. In some cases, loading the scene file can take considerably more time than rendering the actual frame.
• For multi-frame render jobs, SquidNet supports the concept of job slices. Job slices allow you to determine how many frames will be rendered each time an application loads a scene file.
• Setting the job slice count to a value that evenly distributes the farm load reduces render times considerably.
• For example, in an extreme case: processing a 500MB scene file on a 10-node farm using a slice count of 10 (each render node loads the scene once and processes 10 complete frames) is by far more efficient than using a slice count of 1 (the default), where each node loads the scene file 10 times (once per frame). See the sketch after the diagram below.
[Diagram: a 30-frame scene split into 10-frame job slices in the job slice queue. Each render node loads the scene once and renders 10 frames at a time.]
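The slice arithmetic is easy to check in code. A sketch that splits a frame range into slices, using the 30-frame example above:

```python
# Sketch: splitting a frame range into job slices. With 30 frames and
# 10 frames per slice, each slice costs one scene load instead of one
# load per frame.
def slices(first: int, last: int, per_slice: int):
    return [(f, min(f + per_slice - 1, last))
            for f in range(first, last + 1, per_slice)]

print(slices(1, 30, 10))      # [(1, 10), (11, 20), (21, 30)] -> 3 scene loads
print(len(slices(1, 30, 1)))  # 30 slices -> 30 scene loads for the same job
```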
SquidNet's processing pipeline is as follows:
• Prepare the scene: verify the scene is properly formatted (object file paths, etc…).
• Submit the job: set up the SquidNet application job template with processing parameters and submit the job to the render farm.
• Monitor the job queue for status.
• Monitor the network queue for resource usage.
• Verify the output content.
[Diagram: pipeline workflow. Prepare scene → submit job → monitor job queue → monitor network → verify output.]
• Monitor queued jobs in the network job queue. The job queue view shows the following:
  • Status of the job (pending, processing, complete, etc…)
  • Position in the queue
  • Percentage complete
  • A job log showing detailed activity
  • and more…
• Monitor job slices using the job slice view, which shows the following (a status-record sketch follows the diagram below):
  • The status of each job slice (pending, processing, complete, etc…)
  • The render node currently processing each job slice
  • Completion status
  • A job slice log showing detailed activity
  • and more…
[Screenshot: the job queue and the job slice queue, each with a detailed log (JOB LOG, JOB SLICE LOG).]
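As a rough picture of the data behind these views, a hypothetical status record in Python; the field names are illustrative, not SquidNet's schema:

```python
# Sketch: the kind of per-job record the job queue view displays.
# Field names are illustrative, not SquidNet's actual schema.
from dataclasses import dataclass, field

@dataclass
class JobStatus:
    name: str
    state: str              # pending, processing, complete, ...
    queue_position: int
    percent_complete: float
    log: list[str] = field(default_factory=list)

job = JobStatus("myscene.mb", "processing", 1, 42.0)
job.log.append("slice 2/3 assigned to RENDERNODE04")
print(f"{job.name}: {job.state} ({job.percent_complete:.0f}%)")
```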
• Use the network view to monitor all active nodes.
• Use the network work queue view to see:
  • which jobs each node is processing
  • the current status of each job slice
  • the number of node resources allocated
[Screenshot: the network view with a per-node log (NODE LOG).]