
To capture request packets from the Client to the VIP and from the SNAT to a specific Pool Member


tcpdump -nni 0.0 "( src host <client> and dst host <vs> ) or ( src host <snat> and dst host <pool-member-1> )" -w /tmp/capture1.pcap

To capture request packets from the Client to the VIP and from the SNAT to two Pool Members
tcpdump -nni 0.0 "( src host <client> and dst host <vs> ) or ( src host <snat> and ( dst host <pool-member-1> or dst host <pool-member-2> ) )" -w /tmp/capture1.pcap
________________________________________________________________________________________________________________________________________________

To capture response packets from the VIP to the Client and from a specific Pool Member to the SNAT
tcpdump -nni 0.0 "( src host <vs> and dst host <client> ) or ( src host <pool-member-1> and dst host <snat> )" -w /tmp/capture1.pcap

To capture response packets from the VIP to the Client and from two Pool Members to the SNAT
tcpdump -nni 0.0 "( src host <vs> and dst host <client> ) or ( ( src host <pool-member-1> or src host <pool-member-2> ) and dst host <snat> )" -w /tmp/capture1.pcap
________________________________________________________________________________________________________________________________________________

To capture request and response packets between the Client and the VIP, and between the SNAT and a specific Pool Member
tcpdump -nni 0.0 "( host <client> and host <vs> ) or ( host <snat> and host <pool-member-1> )" -w /tmp/capture1.pcap

To capture request and response packets between the Client and the VIP, and between the SNAT and two Pool Members
tcpdump -nni 0.0 "( host <client> and host <vs> ) or ( host <snat> and ( host <pool-member-1> or host <pool-member-2> ) )" -w /tmp/capture1.pcap
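
For example, with hypothetical addresses (client 10.10.10.50, virtual server 10.10.10.100, SNAT 10.20.20.5, pool members 10.20.20.11 and 10.20.20.12), the last capture above would look like this:

tcpdump -nni 0.0 "( host 10.10.10.50 and host 10.10.10.100 ) or ( host 10.20.20.5 and ( host 10.20.20.11 or host 10.20.20.12 ) )" -w /tmp/capture1.pcap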

 

Switches


Anything prefixed with a '-' is called a switch. It is used to specify the options you want to set for a tcpdump capture.

Some switches can be combined, such as -nn, -ennv, or -nnvvv.

Some switches require a value; these can be appended to other switches or used on their own. Examples: -nni 1.1, -w /tmp/capture.pcap, -r /path/tcpdump.pcap

-nni 1.1 can also be expressed as -nn -i 1.1
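
As a sketch, these switches can be combined in a single capture command; the interface 1.1 and the host address below are placeholders:

tcpdump -nnvvvi 1.1 -w /tmp/capture.pcap host 10.1.1.10

This is equivalent to tcpdump -nn -vvv -i 1.1 -w /tmp/capture.pcap host 10.1.1.10, and the resulting file can be read back with tcpdump -nnr /tmp/capture.pcap.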


The tcpdump utility's interface option, -i, accepts only one value. This value may be a numbered interface or a named Virtual Local Area Network (VLAN).


To view traffic, use the -i flag as follows:

tcpdump -i <option>
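
For example, on a BIG-IP system you can capture on all VLANs with the special interface 0.0, or on a single named VLAN (the VLAN name internal below is an assumption and should match your configuration):

tcpdump -nni 0.0
tcpdump -nni internal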


Use of SNAT


SNAT is usually used when you need to translate the original client IP to a SNAT IP. It is also used to make sure that the return traffic passes back through the F5 instead of following an asymmetric route/path that bypasses the F5.
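
As a rough sketch, SNAT Automap can be enabled on an existing virtual server from tmsh; the virtual server name vs_app is hypothetical:

tmsh modify ltm virtual vs_app source-address-translation { type automap }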


L7 vs L4 Virtual server


FastL4

Profile: FastL4

Advantage: Accelerates packet processing

When to use: FastL4 is limited in functionality to socket-level decisions (for example, src_ip:port and dst_ip:port). Thus, use FastL4 only when socket-level information is all that the virtual server requires for each connection.

Limitations

  • No HTTP optimizations

  • No TCP optimizations for server offloading

  • SNAT/SNAT pools demote PVA acceleration setting level to Assisted

  • iRules limited to L4 events, such as CLIENT_ACCEPTED and SERVER_CONNECTED

  • No OneConnect

  • Limited persistence options:

    • Source address

    • Destination address

    • Universal

    • Hash (BIG-IP 9.x only)

  • No compression

  • No Virtual Server Authentication

  • No support for HTTP pipelining


FastL4 changes the destination IP (and possibly the destination port and source IP), but the TCP connection still spans end to end from the client to the server.

If you are unable to, or not interested in, performing any optimizations above Layer 4 on the traffic to a particular virtual server, it is more efficient and faster to use a FastL4 virtual server.


So, let's assume your server has an old Solaris TCP stack that supports neither window scaling nor Selective Acknowledgements. This is fine in a controlled environment, such as the local LAN, but there are better options when serving clients in the wild over a high-latency WAN link.

Therefore, whenever possible, use the L7 (standard) virtual server, which maintains a server-side TCP connection to the old server and handles the client-side connection with the BIG-IP's optimized TCP stack, ensuring the best data delivery and user experience even on a lossy WAN link (see the configuration sketch below).
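
As a rough configuration sketch (hypothetical names, addresses, and pool), the main difference between the two virtual server types is the profile chain applied to them:

tmsh create ltm virtual vs_fastl4 destination 10.10.10.100:80 ip-protocol tcp profiles add { fastL4 } pool pool_web
tmsh create ltm virtual vs_standard destination 10.10.10.101:80 ip-protocol tcp profiles add { tcp http } pool pool_web

The first virtual server forwards connections at L4; the second is a full proxy that terminates the client-side TCP connection and opens a separate server-side connection.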


Profile: Fast HTTP


Advantage: Faster than HTTP profile

When to use: The Fast HTTP profile is recommended when it is not necessary to use persistence and/or maintain source IP addresses. Fast HTTP also adds a subset of OneConnect features to reduce the number of connections opened to the back-end HTTP servers. The Fast HTTP profile requires that the clients' source addresses are translated; if an explicit secure network address translation (SNAT) or SNAT pool is not specified, the appropriate self IP address is used (see the configuration sketch after the limitations list below).

Note: Typically, server efficiency increases as the number of SNAT addresses that are available to the virtual server increases. At the same time, the increase in SNAT addresses that are available to the virtual server also decreases the likelihood that the virtual server will reach the point of ephemeral port exhaustion (65535 open connections per SNAT address).

Limitations

  • Requires client source address translation

  • Not compatible with persistence until version 10.0.0

  • Limited iRules support: L4 events only, plus a subset of HTTP header operations and pool/pool member selection

  • No compression

  • No virtual server authentication

  • No support for HTTP pipelining

  • No TCP optimizations

  • No IPv6 support
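
As a rough sketch, a Fast HTTP virtual server could look like the following; the names and address are hypothetical, and SNAT Automap is included because Fast HTTP requires client address translation:

tmsh create ltm virtual vs_fasthttp destination 10.10.10.102:80 ip-protocol tcp profiles add { fasthttp } source-address-translation { type automap } pool pool_web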


If the virtual server does not reference a OneConnect profile, Local Traffic Manager performs load balancing once for each TCP connection. Once the TCP connection is load balanced, the system sends all requests that are part of that connection to the same pool member.

For example, if the virtual server does not reference a OneConnect profile, and Local Traffic Manager initially sends a client request to node A in pool A, the system inserts a cookie for node A. Then, within the same TCP connection, if Local Traffic Manager receives a subsequent request that contains a cookie for node B in pool B, the system ignores the cookie information and incorrectly sends the request to node A instead. Using a OneConnect type of profile solves this problem.

 

 

If the virtual server references a OneConnect profile, Local Traffic Manager can perform load balancing for each request within the TCP connection. That is, when an HTTP client sends multiple requests within a single connection, Local Traffic Manager is able to process each HTTP request individually. Local Traffic Manager sends the HTTP requests to different destination servers if necessary.

For example, if the virtual server references a OneConnect profile and the client request is initially sent to node A in pool A, Local Traffic Manager inserts a cookie for node A. Then, within the same TCP connection, if Local Traffic Manager receives a subsequent request that contains a cookie for node B in pool B, the system uses that cookie information and correctly sends the request to node B.

The OneConnect feature works with HTTP Keep-Alives to allow the BIG-IP system to minimize the number of server-side TCP connections by making existing connections available for reuse by other clients.

For example, when a client makes a new connection to a BIG-IP virtual server configured with a OneConnect profile, the BIG-IP system parses the HTTP request, selects a server using the load-balancing method defined in the pool, and creates a connection to that server. When the client's initial HTTP request is complete, the BIG-IP system temporarily holds the connection open and makes the idle TCP connection to the pool member available for reuse.

When a new connection is initiated to the virtual server, if an existing server-side flow to the pool member is idle, the BIG-IP system applies the OneConnect source mask to the IP address in the request to determine whether it is eligible to reuse the existing idle connection. If it is eligible, the BIG-IP system marks the connection as non-idle and sends a client request over it. If the request is not eligible for reuse, or an idle server-side flow is not found, the BIG-IP system creates a new server-side TCP connection and sends client requests over it.

When a OneConnect profile is enabled for an HTTP virtual server, and an HTTP client sends multiple requests within a single connection, the BIG-IP system is able to process each HTTP request individually. The BIG-IP system sends the HTTP requests to different destination servers as determined by the load balancing method. Without a OneConnect profile enabled for the virtual server, the BIG-IP system performs load balancing only once for each TCP connection.

When a OneConnect profile is enabled for a TCP virtual server that does not have an HTTP profile applied, and a client sends multiple requests within a single connection, the BIG-IP system is able to process each request individually. The BIG-IP system sends the requests to different destination servers as determined by the load balancing method. Without a OneConnect profile enabled for the virtual server, the BIG-IP system performs load balancing only once for each TCP connection.

For HTTP traffic to be eligible for OneConnect connections, the web server must support HTTP Keep-Alive connections.

HTTP Keep-Alive connections are enabled by default in HTTP/1.1. With HTTP/1.1 requests, the server does not close the connection when the content transfer is complete, unless the client sends a Connection: close header in the request. Instead, the connection remains active in anticipation of the client reusing the same connection to send additional requests.

HTTP Keep-Alive connections are not enabled by default in HTTP/1.0. With HTTP/1.0 requests, the client typically sends a Connection: close header to close the TCP connection after sending the request. Both the server- and client-side connections that contain the Connection: close header are closed once the response is sent.

The OneConnect Transformations setting in the HTTP profile allows the BIG-IP system to perform HTTP header transformations so that HTTP/1.0 connections can be transformed into HTTP/1.1 requests on the server side, thus allowing those connections to remain open for reuse when they would not otherwise be. The default setting is enabled.

When the OneConnect Transformations setting is enabled in the HTTP profile, the BIG-IP system transforms Connection: close headers in HTTP/1.0 client-side requests into X-Cnection: close headers on the server side. This allows the BIG-IP system to make client requests containing the Connection: close header, such as HTTP/1.0 requests, eligible for connection reuse.
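
As a sketch, this setting can be checked or adjusted on a custom HTTP profile from tmsh; the profile name http_custom is hypothetical:

tmsh list ltm profile http http_custom oneconnect-transformations
tmsh modify ltm profile http http_custom oneconnect-transformations enabled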


The Fast HTTP profile uses an implementation of OneConnect that transforms Connection: close headers to X-Cnection: close.


When using OneConnect to optimize HTTP traffic, F5 recommends that you apply an HTTP profile to the virtual server. This allows the BIG-IP system to efficiently manage connection reuse without additional configuration. Failure to apply an HTTP profile may result in unexpected behavior, such as back-end server traffic being sent to the wrong client.
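
A minimal sketch of that recommendation, assuming a hypothetical virtual server vs_http and a custom OneConnect profile derived from the default (the 255.255.255.255 source mask limits reuse to connections coming from the same client IP):

tmsh create ltm profile one-connect oneconnect_custom defaults-from oneconnect source-mask 255.255.255.255
tmsh modify ltm virtual vs_http profiles add { http oneconnect_custom }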


Avoid using a OneConnect profile for non-HTTP virtual servers that process more complex transactions, such as FTP or RTSP. Doing so may result in traffic disruption and session failure. Even for simple non-HTTP protocols, an iRule may be required to manage connection reuse.


The OneConnect profile may be used with any TCP protocol, but only when applied to virtual servers that process simple request/response protocols where transaction boundaries are explicitly obvious, such as those in which each request and each response is contained within a single packet.

Avoid using a OneConnect profile for encrypted traffic that is passed through the virtual server to the destination resources in the encrypted state and is not terminated at the BIG-IP system.

WAN optimized vs LAN optimized


The proxy buffer high setting is the threshold at which the LTM stops advancing the receive window.

The proxy buffer low setting is a falling trigger (from the proxy buffer high setting) that re-opens the receive window.

Like the window, increasing the proxy buffer high setting increases the potential for additional memory utilization per connection.

LAN optimized profile:
Proxy Buffer High - 131072
Proxy Buffer Low - 98304
In the LAN optimized profile, the receive window for the server is not re-opened until fewer than 98304 bytes remain to be sent to the client.

WAN optimized profile:
Proxy Buffer High - 131072
Proxy Buffer Low - 131072
In the WAN optimized profile, the server receive window is re-opened as soon as any data is sent to the client.
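
These values can be checked from tmsh against the built-in profiles (or tuned on a custom profile derived from them), for example:

tmsh list ltm profile tcp tcp-lan-optimized proxy-buffer-high proxy-buffer-low
tmsh list ltm profile tcp tcp-wan-optimized proxy-buffer-high proxy-buffer-low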


bigd vs big3d


bigd  -  The bigd monitor daemon performs local health checks (monitoring) of nodes and pool members for BIG-IP LTM.

big3d  -  The big3d process is used by BIG-IP GTM and Enterprise Manager to collect statistics from remotely managed BIG-IP LTM devices. This process is also used by BIG-IP GTM for auto-discovery of objects.
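
To check whether these daemons are running on a BIG-IP system, the bigstart utility can be used from the shell, for example:

bigstart status bigd
bigstart status big3d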


GTM load balancing methods


Global Availability - BIG-IP GTM distributes DNS name resolution requests to the first available virtual server in a pool. BIG-IP GTM starts at the top of a manually configured list of virtual servers and sends requests to the first available virtual server in the list. Only when the virtual server becomes unavailable does BIG-IP GTM send requests to the next virtual server in the list. 


When to use - Use Global Availability when you have specific virtual servers that you want to handle most of the requests.


Ratio - BIG-IP GTM distributes DNS name resolution requests among the virtual servers in a pool or among pools in a multiple pool configuration using weighted round robin, a load balancing pattern in which requests are distributed among several resources based on a priority level or weight assigned to each resource.


When to use - Use Ratio when you want to send twice as many connections to a fast server and half as many connections to a slow server.


Topology - BIG-IP GTM distributes DNS name resolution requests using proximity-based load balancing. BIG-IP GTM determines the proximity of the resource by comparing location information derived from the DNS message to the topology records in a topology statement you have configured.


When to use - Use Topology when you want to send requests from a client in a particular geographic region to a data center or server located in that region.
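
As a rough sketch, the preferred load balancing method for a GTM pool can be set from tmsh; the pool name pool_dns_example is hypothetical, and on newer versions the pool is typed by record (for example, gtm pool a):

tmsh modify gtm pool a pool_dns_example load-balancing-mode topology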

