How to use the tcpdump utility

The tcpdump command is very useful for capturing all of the data sent in either direction through a given port on the system. It lets you see exactly what data, at what time, and in what sequence any program using that port was sending and receiving.

Syntax:

tcpdump -s 0 -i <interface> -w <base filename> -W <file count> (-C <file size in millions of bytes> or -G <seconds>) port <port number (5038 is Asterisk's default AMI port)>


Example:

tcpdump -s 0 -i lo0 -w base.pcap_ -W 50 -C 5 port 5038
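
You can spot-check a rotated capture file later (the file names follow the rotation scheme described below) by reading it back with tcpdump itself, for example:

tcpdump -nn -r base.pcap_00 | head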

  • The -i option sets the interface to capture on. If you are not sure which interface to use, ifconfig lists the interfaces and the IP addresses assigned to them.
  • The -s 0 option tells tcpdump to capture complete packets. You can control how much of each packet is captured, but in general you want the whole packet, so -s 0 should be used.
  • -w base.pcap_ sets the base file name.
  • -W 50 says to create 50 files off the base name before overwriting files. The example would create files named base.pcap_00, then base.pcap_01, and so on up to base.pcap_49. Once the 50th file (_49) has been written, the _00 file is overwritten, and the cycle repeats.
  • -C sets the number of megabytes (millions of bytes) to put in each file before it rolls to the next one. -G rotates by time instead, rolling to a new file after the given number of seconds (see the example after this list).
  • The capture will include traffic both to and from the specified port (5038 in the above example).
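
If rotating by time rather than size, keep in mind that tcpdump only writes distinct files when the base file name contains strftime(3) time-format escapes; otherwise each rotation overwrites the same file. A rough sketch of hourly rotation, reusing the interface and port from the example above:

tcpdump -s 0 -i lo0 -w base_%Y-%m-%d_%H.%M.%S.pcap -G 3600 port 5038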

Clients should test the size and number-of-files options to see what works best for the amount of traffic on the machine/port being monitored. You don't want to keep a lot of big files containing only "good" data lying around, but you also want enough files captured that, even if the problem isn't reported for a while, the files showing the issue are still available.
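
As a rough worked example (the traffic rate here is only an illustration and should be measured on the actual system): the example settings above (-W 50 -C 5) keep at most about 50 x 5 MB = 250 MB of capture on disk. If the port averages roughly 1 MB of traffic per minute, that buffer covers about 250 minutes (a little over four hours) of history; at 10 MB per minute it covers only about 25 minutes.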


In general, files around 100 MB are okay to deal with. We want to avoid files of 500 MB or larger, as they become too hard to transfer and read. We also don't want many little files, because if the issue builds up over time we may need to see a fairly long stretch of the capture, which would mean combining dozens of files.
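
If several rotated files do need to be combined for analysis, one option (assuming the Wireshark command-line tools are installed) is mergecap, for example:

mergecap -w combined.pcap base.pcap_*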

One thing to remember is that when the problem is occurring there might be a reduced flow of traffic, so size-based files may roll more slowly than usual and each one may cover a longer time span than expected.
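
A quick way to see how far back the current capture window reaches is to list the rotated files by modification time and check the oldest one, for example:

ls -lt base.pcap_*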