Data Sheet
GigaVUE-VM
Product Description
The Gigamon® GigaVUE-VM Visibility Fabric™ node provides intelligent filtering technology that allows virtual
machine (VM) traffic flows of interest to be selected, forwarded, and delivered to the monitoring infrastructure
centrally attached to the GigaVUE® platforms, thereby eliminating traffic blind spots in enterprise private
clouds and service provider NFV deployments.
Table 1: Features and Benefits
Visibility into VM traffic: Intelligent selection, filtering, and forwarding of VM traffic to the monitoring and tool infrastructure; extends the reach and leverage of existing tools to monitor the virtual network infrastructure; on-boards virtual traffic visibility for n-tier application clusters.

Multi-hypervisor support: Supports the most popular private cloud hypervisors: VMware ESXi, VMware NSX-V, and KVM/OpenStack.

Virtual-switch-agnostic solution: Supports VMware vSS/vDS, Cisco Nexus 1000V, and any virtual switch on OpenStack/KVM.

Automated visibility for VMware NSX: Use VMware NSX Dynamic Service Insertion to associate visibility policies with security groups, thereby providing continuous and automated traffic visibility for applications as they scale up.

Centralized management: Manage and monitor the physical and virtual fabric nodes using GigaVUE-FM while also configuring the traffic policies to access, select, transform, and deliver the traffic to the tools.

Support for packet slicing: Conserve production network backhaul and optimize monitoring infrastructure processing by slicing VM traffic at a required offset before forwarding it for analysis.

Tunneling support (standards-based L2 GRE encapsulation): Leverage the production network to tunnel and forward the filtered virtual traffic from the hypervisor to the GigaVUE platforms; tenant-based IP tunneling facilitates isolation, privacy, and compliance of monitoring traffic. Simplified virtual traffic policy creation identifies and selects the physical tunnel termination end point where the filtered and transformed virtual workload traffic is to be delivered (see the sketch following this table).

Optimized traffic delivery: Tunneled traffic can be marked with DSCP values for per-hop behavior so it receives preferential treatment on the production network. If changing the MTU size in the network is an issue, fragmentation can be enabled to transport the packets using standard MTU sizes; these packets are then re-assembled at the Visibility Fabric nodes before further analysis.

Support for vMotion and Live Migration: Ensure the integrity of visibility and monitoring policies in a dynamic infrastructure, adjust the monitoring and security posture to virtual network changes in real time, and respond to disasters and failures without losing NOC insight and control.

Hotspot monitoring: Proactively monitor and troubleshoot GigaVUE-VM nodes by elevating Top-N and Bottom-N virtual traffic policies to the centralized dashboards.
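To make the tunneling and delivery-optimization entries above concrete, the following Python/Scapy snippet sketches what a mirrored VM frame looks like after standards-based L2 GRE encapsulation with a DSCP marking, plus slicing at a fixed offset. It is illustrative only, not Gigamon code; all addresses, the DSCP value, and the slice offset are example assumptions.

    # Illustrative sketch only (not Gigamon's implementation): a mirrored VM frame
    # encapsulated in standards-based L2 GRE with a DSCP marking, built with Scapy.
    from scapy.all import Ether, IP, TCP, GRE, raw

    # A VM packet selected by a traffic policy (example addresses).
    mirrored = (Ether(src="00:50:56:aa:bb:cc", dst="00:50:56:dd:ee:ff") /
                IP(src="10.1.1.10", dst="10.1.1.20") /
                TCP(sport=443, dport=55000))

    # Optional packet slicing: keep only the first 64 bytes (example offset)
    # before backhauling, to conserve production-network bandwidth.
    sliced = raw(mirrored)[:64]

    # Outer delivery header: hypervisor tunnel source to the GigaVUE tunnel
    # termination end point. tos=0xB8 encodes DSCP EF (46) for per-hop priority.
    # GRE proto 0x6558 is Transparent Ethernet Bridging (an L2 frame inside GRE).
    tunneled = (IP(src="192.168.10.5", dst="192.168.20.9", tos=0xB8) /
                GRE(proto=0x6558) /
                mirrored)

    print(len(raw(tunneled)), "bytes on the wire after encapsulation")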
Having an end-to-end solution that provides traffic visibility into both the physical and virtualized infrastructures empowers infrastructure administrators and operators with the insight needed to ensure service quality, maintain security compliance, and preserve business continuity.
VMware ESX Integration
• Deployed as a vSphere guest VM, the light-footprint GigaVUE-VM fabric node is installed without the need for special software, kernel modules, or changes to the hypervisor
• GigaVUE-FM (Fabric Manager), Gigamon’s centralized management application, tightly integrates with VMware vCenter to facilitate simplified bulk onboarding of GigaVUE-VM fabric nodes and configuration of VM-level traffic monitoring policies
• Leveraging vCenter APIs, GigaVUE-FM can track vMotion events across Distributed Resource Scheduler (DRS) and high-availability (HA) cluster environments, enabling visibility policies to be tied to the monitored VMs and to migrate with them as they move across physical hosts; this automation provides Active Visibility into an agile and dynamic SDDC (see the sketch after this list)
• GigaVUE-VM is auto-pinned to its host, so DRS does not impact continuous traffic visibility
• In addition to the ESXi hypervisor, GigaVUE-VM also extends traffic visibility to VMs deployed on VMware NSX-V, a network virtualization platform that delivers the operational model of a hypervisor for the network
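The vCenter event API that underlies this kind of vMotion tracking can be sketched with pyVmomi. This is an illustrative sketch only, not GigaVUE-FM’s internal implementation; the vCenter host name and credentials are placeholders.

    # Illustrative sketch only: reading completed vMotion events (VmMigratedEvent)
    # from vCenter with pyVmomi. Host and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.local",
                      user="readonly@vsphere.local",
                      pwd="********",
                      sslContext=ctx)

    # Ask the event manager for vMotion completion events.
    spec = vim.event.EventFilterSpec(eventTypeId=["VmMigratedEvent"])
    for event in si.content.eventManager.QueryEvents(spec):
        # Each event identifies the VM and the hosts involved, which is enough to
        # re-associate a monitoring policy with the VM's new location.
        print(event.createdTime, event.vm.name, "->", event.host.name)

    Disconnect(si)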
Figure: GigaVUE-VM integrated with the Unified Visibility Fabric and VMware vCenter. GigaVUE-FM’s vCenter integration provides bulk GigaVUE-VM onboarding, virtual traffic policy creation, and automatic migration of monitoring policies; filtered VM traffic is tunneled from the hypervisors across the production network to the Visibility Fabric and on to application performance, network management, and security tools and analytics.
VMware NSX Integration
• Automate traffic visibility for securing the micro-segmented SDDC
• Enable SecOps and NetOps teams to automate the selection, filtering, and forwarding of the ever-growing east-west virtual traffic for security and monitoring analytics
• Leverage the power of the NSX network virtualization platform and its distributed service insertion framework for automated deployment of virtual components in the GigaSECURE® Security Delivery Platform, while also enabling dynamic provisioning of visibility traffic policies within the customer’s software-defined data center
• Insert a Visibility Service using the GigaSECURE platform’s virtual visibility component, GigaVUE-VM
• Define security or traffic policies that select, filter, and forward a tenant’s virtual traffic to security and monitoring tools for analysis (see the sketch after this list)
• Auto-update this service and its traffic policies as new tenants come on board or existing tenants’ security groups scale dynamically
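As a sketch of the security-group association described above, the snippet below lists NSX-V security groups over the NSX Manager REST API, so that a visibility policy could be keyed to each group’s objectId. It is illustrative only and not Gigamon’s integration; the manager address and credentials are placeholders, and the endpoint path is an assumption to verify against the NSX API guide for your version.

    # Illustrative sketch only: enumerating NSX-V security groups over the NSX
    # Manager REST API. Host, credentials, and the endpoint path are assumptions
    # to verify against the NSX API guide; this is not Gigamon's implementation.
    import requests
    import xml.etree.ElementTree as ET

    NSX_MANAGER = "https://nsx-manager.example.local"

    session = requests.Session()
    session.auth = ("admin", "********")
    session.verify = False                    # lab use only

    # Security groups defined at the global scope (NSX-V 6.x style endpoint).
    resp = session.get(f"{NSX_MANAGER}/api/2.0/services/securitygroup/scope/globalroot-0")
    resp.raise_for_status()

    for sg in ET.fromstring(resp.content).iter("securitygroup"):
        # A visibility traffic policy could be associated with each group's objectId,
        # so member VMs gain monitoring automatically as the group scales.
        print(sg.findtext("objectId"), sg.findtext("name"))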
Figure: GigaVUE-VM on VMware NSX integrated with the GigaSECURE Security Delivery Platform. The NetOps/SecOps admin uses GigaVUE-FM to register the Gigamon Traffic Visibility Service and traffic policies with NSX Manager, the Traffic Visibility Service VM (GigaVUE-VM) is deployed on the NSX cluster, and traffic policies are associated with security groups (SG1, SG2, SG3); vCenter and NSX APIs supply inventory, security groups, and events, and copied packets are filtered and forwarded to the GigaSECURE Security Delivery Platform for tools such as APM, SIEM, and IDS.
Use Cases with VMware NSX
Secure the SDDC with GigaSECURE – Dynamic Service Insertion of GigaVUE-VM
Figure: vRealize Automation (vRA) deploys new tenants and applications and applies a “Visibility” policy; through NSX service insertion APIs and vCenter inventory and event APIs, GigaVUE-VM and GigaVUE® nodes (with TAPs) deliver filtered and sliced virtual traffic to the GigaSECURE Security Delivery Platform (metadata engine, Application Session Filtering, SSL decryption, inline bypass), which feeds inline IPS, inline anti-malware, data loss prevention, intrusion detection system, forensics, and email threat detection tools.
Tenant level Traffic Visibility for Monitoring – Dynamic Service Insertion of GigaVUE-VM
Figure: vRealize Automation (vRA) deploys new tenants and applications and applies a “Visibility” policy via REST APIs (Software-Defined Visibility); GigaVUE-FM uses vCenter and NSX Manager APIs (inventory, events, service insertion) to provision GigaVUE-VM, which forwards filtered and sliced virtual traffic over VXLAN to the Visibility Fabric, where GigaSMART® functions (SSL decryption, NetFlow/IPFIX generation, Adaptive Packet Filtering, header stripping, Application Session Filtering, VXLAN de-capsulation) feed centralized security tools (anti-malware, DLP, IDS, network forensics, APT monitoring) and performance tools (application performance, network performance, NetFlow/IPFIX, customer experience).
OpenStack/KVM-powered Private Cloud
The OpenStack software was designed from the ground up for multi-tenancy, where a common set of physical compute and network resources is used to create tenant domains that provide isolation and security. Characteristics of a typical OpenStack deployment include:
• VMs belonging to different tenants may be placed on the same host
• Tenants are unaware of the physical hosts on which their VMs are running
• A tenant can have several virtual networks, which may span multiple hosts
In a multi-tenant OpenStack/KVM cloud, where tenant isolation is critical, the Gigamon solution extends visibility into one tenant’s workloads without impacting others:
• Supports tenant-wide monitoring domains: a tenant may monitor any and all interfaces on its VMs
• Honors tenant isolation boundaries: no traffic leakage from one tenant to any other tenant during monitoring
• Monitors traffic without needing cloud-admin privileges (no requirement to create port-mirror sessions, etc.)
• Traffic monitoring activity of one tenant does not adversely affect other tenants
• Multi-tenant traffic visibility management with a single instance of GigaVUE-FM
• The solution integrates with OpenStack and is deployed by the tenant owner as follows (see the sketch after this list):
– GigaVUE-FM integrates with the OpenStack/Nova controller to identify tenant VMs
– A tiny-footprint user-space agent (G-vTAP) is loaded in each tenant VM selected for monitoring
  » Traffic policy filters are configured to mirror the target VM’s interface traffic to GigaVUE-VM
  » The filtered traffic can be sampled at configured rates to reduce backhaul to the monitoring tools
– GigaVUE-VM optimizes the traffic (complex filters and slicing) and delivers it to the physical Visibility Fabric nodes, where additional GigaSMART® traffic intelligence can be applied before the traffic reaches the monitoring tools
– Based on the number of tap points (vNICs) being monitored, GigaVUE-FM auto-deploys the requisite number of GigaVUE-VM nodes
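As a sketch of the discovery step referenced above, the snippet below uses the openstacksdk library with the tenant’s own credentials (no cloud-admin privileges) to list the tenant’s VMs and their vNICs, i.e., the candidate virtual tap points. It is illustrative only; the cloud name is a placeholder read from the tenant’s clouds.yaml, and GigaVUE-FM performs its own discovery internally.

    # Illustrative sketch only: enumerating a tenant's VMs and vNICs (candidate
    # virtual tap points) with openstacksdk, using tenant-scoped credentials.
    import openstack

    # Credentials are read from clouds.yaml / environment for the tenant project.
    conn = openstack.connect(cloud="tenant-project")

    for server in conn.compute.servers():                 # only VMs visible to this tenant
        print(f"VM {server.name} ({server.id})")
        for port in conn.network.ports(device_id=server.id):
            # Each Neutron port is a vNIC that a traffic policy could select for
            # mirroring to GigaVUE-VM.
            ips = [ip["ip_address"] for ip in port.fixed_ips]
            print(f"  vNIC {port.id} mac={port.mac_address} ips={ips}")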
The deployment workflow proceeds as follows:
1. OpenStack: Horizon/Nova deploys tenant VMs that are packaged with Gigamon Virtual Taps (G-vTAP)
2. GigaVUE-FM: Discovers the tenant VMs from the OpenStack/Nova controller
3. GigaVUE-FM: Deploys GigaVUE-VM (Virtual Visibility Node)
4. GigaVUE-FM: Configures traffic policies on the G-vTAPs and GigaVUE-VMs
5. G-vTAP: Filters and replicates traffic to GigaVUE-VM
6. GigaVUE-VM: Provides additional filtering/slicing of traffic to the Visibility Fabric
7. GigaVUE-FM: Configures traffic policies (GigaSMART tunneling) to forward to the right tools
8. Visibility Fabric: Optimizes and forwards traffic to the right tools

Figure: GigaVUE-VM and G-vTAP on OpenStack/KVM integrated with the Visibility Fabric, delivering traffic to APM, NPM, security, and CEM tools.
Table 2: Hardware Requirements
Hypervisor:
• VMware vSphere 5.0, 5.1, 5.5, and 6.0
• VMware NSX-V (vSphere NSX) 6.1.x, 6.2.x
• VMware NSX-V 6.2.3 and above for the Automated Traffic Visibility integration
• KVM with OpenStack (Icehouse, Juno, Kilo, Liberty releases)
CPU:
• One or more 64-bit x86 CPUs with virtualization assist (Intel VT or AMD-V) enabled
Network:
• At least one 1 Gbps NIC
The following table lists the virtual computing resources that the VMware ESXi server must provide for each GigaVUE-VM fabric
node instance.
Table 3: Computing Requirements for GigaVUE-VM on VMware
Memory: Minimum 2 GB
Virtual CPU (vCPU): One (1)
Virtual storage for OS: 4 GB using virtual IDE
Virtual network interfaces: Maximum 10 network adapters
• Network Adapter 1: GigaVUE-VM Management Port
• Network Adapter 2: GigaVUE-VM Tunneling Port
• Network Adapters 3–10: GigaVUE-VM Network Ports
Table 4: Computing Requirements for Virtual Visibility with OpenStack/KVM
G-vTAP: Agent in the target VM that mirrors the selected vNIC traffic to GigaVUE-VM.
• vCPUs: see note
• Memory: 2 GB
• Disk space: N/A
• vNICs: 1 additional vNIC (for tunneling the traffic to GigaVUE-VM)
• Note: For optimal performance, the target VM should have at least 2 vCPUs.

GigaVUE-VM: Virtual Visibility Fabric node that terminates the traffic from G-vTAP, applies additional filters, and forwards the traffic to the physical fabric node.
• vCPUs: 1
• Memory: 2 GB
• Disk space: 4 GB
• vNICs: vNIC 1: Management Port; vNIC 2: Tunneling Port; vNIC 3: Network Port (traffic from G-vTAP)

G-vTAP-CTL: Controller node that proxies APIs to the G-vTAP agents; one per tenant.
• vCPUs: 2
• Memory: 2 GB
• Disk space: 10 GB
• vNICs: 1
Support and Services
Gigamon offers a range of support and maintenance services. For details regarding Gigamon’s Limited Warranty and its Product Support
and Software Maintenance Programs, visit www.gigamon.com/support-and-services/overview-and-benefits
Ordering Information
Table 5: GigaVUE-VM for VMware
GFM-VM010: GigaVUE-VM 10 Pack Bundle SW License Extension
GFM-VM050: GigaVUE-VM 50 Pack Bundle SW License Extension
GFM-VM100: GigaVUE-VM 100 Pack Bundle SW License Extension
GFM-VM250: GigaVUE-VM 250 Pack Bundle SW License Extension
GFM-VM1000: GigaVUE-VM 1000 Pack Bundle SW License Extension
GFM-VM-NSX: Add-on NSX Integration license for GFM-FM001, GFM-FM005, GFM-FM010, GFM-HW0-FM010. Note: The customer still needs to purchase VM packs for the number of hosts.
Table 6: For OpenStack Clouds (GigaVUE-VM is included as part of the solution below)
GFM-VTAP-100: Virtual monitoring in OpenStack deployments for up to 100 virtual tap points. A “virtual tap point” is any end point that can be monitored, for example, a vNIC in a VM.
GFM-VTAP-250: Virtual monitoring in OpenStack deployments for up to 250 virtual tap points. A “virtual tap point” is any end point that can be monitored, for example, a vNIC in a VM.
GFM-VTAP-1000: Virtual monitoring in OpenStack deployments for up to 1000 virtual tap points. A “virtual tap point” is any end point that can be monitored, for example, a vNIC in a VM.
For More Information
For more information about the Gigamon Unified Visibility Fabric or to contact your local representative, please visit:
www.gigamon.com
© 2013-2016 Gigamon. All rights reserved. Gigamon and the Gigamon logo are trademarks of Gigamon in the United States and/or
other countries. Gigamon trademarks can be found at www.gigamon.com/legal-trademarks. All other trademarks are the trademarks
of their respective owners. Gigamon reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
3300 Olcott Street, Santa Clara, CA 95054 USA | +1 (408) 831-4000 | www.gigamon.com
4022-11 07/16