Nexus 1000v – DMZ Private VLAN config example


Here is something I am often asked about but have not found many examples of on the internet: the actual configuration of 1000v port-profiles when used in a DMZ environment.

In my example environment, I am referencing a single ESXi host attached to upstream DMZ switches for external/internet-bound traffic.  These DMZ switches are already in use for standalone DMZ hosts, and the VLANs are already in place.  The host is also attached to internal switches/firewalls for internal and VMware kernel/management network traffic, including the 1000v system VLANs.  The host already has the 1000v VSMs installed.  If you do not have enough free NICs on your host, consider combining the internal and VM management uplinks.

Config:

On the upstream DMZ switches: a simple trunk of the DMZ VLANs

(this is just one example)
interface GigabitEthernet5/5
 description Trunk To 1000v host external NICs
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 199,200
 switchport mode trunk
 logging event link-status
 spanning-tree portfast trunk

Nexus Config Translation
 interface e101/1/1
  description Trunk to 1000v host external NICs
  switchport mode trunk
  switchport trunk allowed vlan 199-200
  spanning-tree port type edge trunk
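
If the upstream DMZ switch is a Nexus, you can sanity-check the trunk once the host NICs are cabled (the interface numbering here is just this example's):

switch# show interface ethernet 101/1/1 trunk

The output should show the port trunking with VLANs 199-200 allowed.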

On the upstream internal switches: another set of trunk ports

interface GigabitEthernet6/2
 description Trunk To 1000v host Internal NICs
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 55
 switchport mode trunk
 logging event link-status
 spanning-tree portfast trunk

interface GigabitEthernet7/2
 description Trunk To 1000v host VM Mgmt NICs
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 3000-3004
 switchport mode trunk
 logging event link-status
 spanning-tree portfast trunk
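
If the internal switches are Nexus as well, the translation follows the same pattern as the DMZ trunk above (the interface numbers here are only illustrative):

 interface e102/1/1
  description Trunk to 1000v host Internal NICs
  switchport mode trunk
  switchport trunk allowed vlan 55
  spanning-tree port type edge trunk

 interface e102/1/2
  description Trunk to 1000v host VM Mgmt NICs
  switchport mode trunk
  switchport trunk allowed vlan 3000-3004
  spanning-tree port type edge trunk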

On the 1000v:

First the VLANs:

vlan 199
  name DMZ-PVLAN-Primary
  private-vlan primary
  private-vlan association 200
vlan 200
  name DMZ-PVLAN-Isolated
  private-vlan isolated
vlan 55
  name Internal-Traffic
vlan 3000
  name VMotion
vlan 3001
  name VM-Service-Console
vlan 3002
  name 1000v-Packet
vlan 3003
  name 1000v-Control
vlan 3004
  name 1000v-Mgmt
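
Before building the port-profiles on top of these, confirm the private VLAN association took:

N1000v# show vlan private-vlan

VLAN 199 should show as the primary with 200 as its isolated secondary.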

Now the Uplink Ethernet Port-Profiles:

port-profile type ethernet External
  vmware port-group
  switchport mode private-vlan trunk promiscuous
  switchport private-vlan mapping trunk 199 200
  switchport private-vlan trunk allowed vlan 199,200
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
port-profile type ethernet Internal
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 55
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
port-profile type ethernet VM-Mgmt
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 3000-3004
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 3002-3003
  state enabled
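
Once the host NICs are attached to these uplink profiles from vCenter, the mac-pinning channel-groups are created automatically. A couple of quick checks:

N1000v# show port-channel summary
N1000v# show port-profile name External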

Lastly, the VM-facing vEthernet Port-Profiles:

port-profile type vethernet DMZ-External-Traffic
  vmware port-group
  switchport mode private-vlan host
  switchport private-vlan host-association 199 200
  no shutdown
  state enabled
port-profile type vethernet 1000v-Control
  vmware port-group
  switchport mode access
  switchport access vlan 3003
  system vlan 3003
  no shutdown
  state enabled
port-profile type vethernet 1000v-Packet
  vmware port-group
  switchport mode access
  switchport access vlan 3002
  system vlan 3002
  no shutdown
  state enabled
port-profile type vethernet 1000v-Mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 3004
  no shutdown
  state enabled
port-profile type vethernet VMotion
  vmware port-group
  switchport mode access
  switchport access vlan 3000
  no shutdown
  state enabled
port-profile type vethernet VM-Srv-Console
  vmware port-group
  switchport mode access
  switchport access vlan 3001
  no shutdown
  state enabled
port-profile type vethernet Internal-Traffic
  vmware port-group
  switchport mode access
  switchport access vlan 55
  no shutdown
  state enabled

What is the net result?  VMs in the DMZ-External-Traffic port-profile cannot communicate with each other, since they sit in the isolated secondary VLAN, while their traffic still reaches the outside world through the promiscuous uplink.  Traffic goes to the correct uplink port-profile as long as the switchport trunk statements are correct.  You may also consider a VMware vDS for the vMotion, service console, and 1000v management traffic.
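
To validate the isolation, check that the DMZ veth ports picked up the host association, then try pinging between two VMs in the DMZ port-profile; the pings should fail while traffic through the promiscuous uplink still flows. The vethernet number below is just a placeholder:

N1000v# show port-profile name DMZ-External-Traffic
N1000v# show interface vethernet 5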

This document provides more information on the 1000v in DMZ environments:

DMZ Virtualization Using VMware vSphere 4 and the Cisco
Nexus 1000V Virtual Switch

Here is another post discussing the 1000v and DMZs:

Two vSwitches are better than 1, right?

Sniffing VM traffic – using Nexus 1000v and a virtual sniffer


About 3 months ago I got a visit from one of our Windows server team members: “Umm… can we sniff a VM?”

Although we had never done it before, about 3 hours later both teams were analyzing the packet traces.  We had discussed the Nexus 1000v's SPAN capabilities while deploying it to our environment, but had yet to put together a plan for a virtual sniffing infrastructure.

Having the 1000v in place gave us a few options: configure ERSPAN and send the traffic to an external collector, build a virtual sniffer and collect from a SPAN port on the 1000v, or sniff externally to the blade enclosure that contained our ESX hosts.  As I discussed with the server team, this wouldn't be the last trace request we would get, so it would be worth the time to set up a virtual sniffer, get the process in place, and gain the ability to sniff VM-to-VM traffic, even on the same host.

Building the Virtual Sniffer

Our team already uses an array of self-built sniffing boxes on our non-VM network.  These are typically workstation-class machines running a Linux server build.  The thought of deploying a VM running the same OS was something of a joint epiphany between the server and network teams, arriving just as we were running through the details of our plans.  We already had a particular flavor of Linux that worked well for us (SLES) and a detailed sniffer build plan.  Why not just apply this to a VM?

  • You will want to consider the amount of traffic you plan to capture when sizing your VM.  Consider a high-I/O storage platform for the main volume you write to, and perhaps a larger but slower volume for storing previous traces.

The VM sniffer would use a 1000v port-profile to get a vEthernet port on the switch.  When configuring the SPAN/mirroring session, the source (the VM to be sniffed) and the destination (the sniffer's port) must be on the same “line card”; this just means you need to deploy the sniffer on the same ESX host the target VM currently resides on.  You may want to watch or edit your DRS rules to keep the VM from moving during the capture.  Our plan was to sniff all traffic coming and going from the server in question, but you could also sniff an entire VLAN or uplink port-channel.
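
If you are not sure which host a VM's vEthernet currently lives on, the 1000v can tell you before you place the sniffer:

N1000v# show interface virtual
N1000v# show module

The first lists each vEthernet port with its owning VM and module number, and the second maps module numbers to ESX hosts.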

Configuring the monitor session on the 1000v

After the sniffer was built, it was time to go through the 1000v SPAN port configuration.  The first step was logging into the 1000v switch and finding the vEthernet port of the server to be analyzed, along with the recently created vEthernet port the sniffer was using.

N1000v# sh int status module 3
--------------------------------------------------------------------------------
Port           Name               Status    Vlan      Duplex  Speed   Type
--------------------------------------------------------------------------------
Veth11         ServerName, Network  up       749       auto    auto    --
Veth22         vSniffer, Network    up       749       auto    auto    --

Veth11 will be the source and Veth22 the destination for our SPAN configuration.  Enter configuration mode to create the session:

N1000v# conf t
N1000v(config)# monitor session 1
N1000v(config-monitor)# description sniff ServerName
N1000v(config-monitor)# source interface vethernet 11 both
N1000v(config-monitor)# destination interface vethernet 22
N1000v(config-monitor)# no shut
N1000v(config-monitor)# end

After your configuration is done, show the monitor session to confirm it is configured correctly, then save the configuration:

N1000v# show monitor session 1
   session 1
---------------
description       : sniff ServerName
type              : local
state             : up
source intf       :
    rx            : Veth11         
    tx            : Veth11         
    both          : Veth11         
source VLANs      :
    rx            :
    tx            :
    both          :
source port-profile :
    rx            :
    tx            :
    both          :
filter VLANs      : filter not specified
destination ports : Veth22
destination port-profile :
N1000v# copy r s
[########################################] 100%

It’s time for the sniffer!

After the sniffer was deployed, it was time to begin collecting network traffic from the VM.  From a terminal session, I entered a simple tcpdump command, specifying the NIC that maps to the destination port in the SPAN session configured above; the full command follows the flag list below.

This particular command uses several flags, which you can customize to fit what works for you:

  • -ni eth1 – no name resolution, capturing on eth1, which happens to map to my 1000v destination port
  • -XX – include the link-level header
  • -s0 – capture full packets (a 65535-byte snaplen)
  • -C – set a 20MB limit for individual files
  • -W – limit the total number of files to be created
  • -w – write to files in the path specified
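
Putting those flags together, the command looked roughly like this; the -W count and the output path are illustrative, so adjust them to your environment:

tcpdump -ni eth1 -XX -s0 -C 20 -W 10 -w /captures/servername.pcap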

Analyze and cleanup tasks

Once you have collected your trace data, it's time to analyze it; there are plenty of resources on the internet for reference.  After you are done, you will want to go back, shut down, and delete the monitor session on the 1000v.

N1000v# conf t
N1000v(config)# monitor session 1
N1000v(config-monitor)# shut
N1000v(config-monitor)# exit
N1000v(config)# no monitor session 1
N1000v(config)# end
N1000v# show monitor session all
Note: There are no sessions configured
N1000v# copy r s
[########################################] 100%

Final Notes

If another VM needs to be sniffed in the future, you can simply vMotion the virtual sniffer to that particular ESX host, and begin the 1000v configuration again.

If you don't have the 1000v deployed, there are some options, but it looks like it will take vSphere 5 to have a good solution in place.  Thanks for reading my first post; I am interested in hearing from you about your own virtual-environment sniffing experiences.