WRT to section 2.2 Sampling of Network Interface Statistics
>> Sonia Panchen wrote:
>> The "sampling" of network interface statistics is really local polling or
>> time-based sampling of the interface counters. This is probably more
>> understandable when read in conjunction with the specifics of the MIB
>> and datagram. For example the MIB (sFlowCounterSamplingInterval) allows
>> the polling interval to be set and the datagram (if_counters) defines
>> the counters which are read.
> Tanja Zseby wrote:
> I still have some questions regarding this section: In the sampling of
> switched flows you are estimating a packet proportion by using only a
> subset of packets. In the counter polling scenario I don't see where the
> sampling process takes place. What would be the parent population and what
> the sample? What is the parameter you are estimating? Do you use
> relative IF counters and estimate the total volume out of some samples of
> these relative counters (time-based sampling)? Or do you estimate the
> total load on the device by polling only a few selected IF counters?
> Is there a random process involved or does the counter polling follow
> a pre-defined schedule?
Tracking and trending interface counters for every link in the network is
very important for network management in general. sFlow provides an
efficient mechanism to stream the interface counters back with the flow
samples, avoiding the overhead of remotely polling the counters on every
interval. All of the interface counters defined in the sFlow datagram
format can be exported according to the configurable (local) polling
interval.
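To make the agent side concrete, here is a minimal sketch of a counter-polling loop in Python. The interval corresponds to sFlowCounterSamplingInterval; the function names and the placeholder counter reader are illustrative assumptions, not part of the sFlow specification.

```python
import time

def read_if_counters(ifindex):
    # Stand-in for reading the interface's MIB-style counters locally.
    # A real agent would read these from the switch/router hardware.
    return {"ifInOctets": 0, "ifInUcastPkts": 0, "ifOutOctets": 0}

def poll_counters(interfaces, interval, send, rounds=None):
    """Export each interface's counters every `interval` seconds by
    calling send(ifindex, counters). `rounds` bounds the loop so the
    sketch can be exercised without running forever."""
    n = 0
    while rounds is None or n < rounds:
        for ifindex in interfaces:
            send(ifindex, read_if_counters(ifindex))
        n += 1
        if rounds is None or n < rounds:
            time.sleep(interval)
```

The key point is that the polling is local to the device and the results are streamed out in the sFlow datagram, so no remote SNMP polling round-trips are needed.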
sFlow counter samples should be handled differently from flow samples.
We use the sFlow interface counters to maintain a minute-by-minute picture
of the status of all the interfaces. It is the change in counter values over
a specific time interval that is of interest. For example, you might want to
know how many packets were received on each interface every minute.
Differences are computed every time an sFlow counter sample arrives (by
comparing with the previous sample). Difference values are then accumulated
at the granularity of interest (e.g., every minute). In computing these
values, you can take into account that samples may arrive before or after
the interval boundary of interest.
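The delta-and-accumulate step described above can be sketched as follows. This is an illustrative Python sketch, not code from the sFlow specification: it assumes 32-bit counters for the wrap handling, and pro-rates a delta that spans a minute boundary by assuming the counter grew linearly between samples.

```python
WRAP_32 = 1 << 32  # assumed 32-bit counters; use 1 << 64 for 64-bit fields

def counter_delta(prev, curr, wrap=WRAP_32):
    """Difference between two successive counter readings, handling
    a single counter wrap between samples."""
    d = curr - prev
    return d if d >= 0 else d + wrap

def accumulate_per_minute(samples):
    """samples: time-ordered list of (timestamp_seconds, counter_value).
    Returns {minute_index: count}, splitting each inter-sample delta
    across the minute buckets the sample interval overlaps."""
    buckets = {}
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        delta = counter_delta(c0, c1)
        span = t1 - t0
        if span <= 0:
            continue
        # Walk the interval [t0, t1), crediting each minute bucket with
        # its time-proportional share of the delta.
        t = t0
        while t < t1:
            minute = int(t // 60)
            end = min((minute + 1) * 60, t1)
            buckets[minute] = buckets.get(minute, 0) + delta * (end - t) / span
            t = end
    return buckets
```

For example, a sample at t=30s reading 0 and a sample at t=90s reading 120 would credit 60 packets to minute 0 and 60 packets to minute 1, since the 60-second interval straddles the boundary evenly.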
Is this more understandable?
This archive was generated by hypermail 2b29 : 04/26/02 EDT