Reported by Guy Almes/Advanced Network and Services

Minutes of the Benchmarking Methodology Working Group (BMWG)



Introduction

At the Danvers IETF, there was a BOF held to consider the need for a set
of IP Provider Metrics (IPPM). Between the Danvers and Stockholm IETFs,
it was agreed (at least provisionally) to structure any IPPM effort
within the BMWG; Guy Almes would organize this effort as a new co-chair
of the BMWG. Accordingly, the BMWG met at Stockholm for the sole purpose
of initiating a new effort to produce IP Provider Metrics.  Guy chaired
the session.

The chair used a set of slides to organize the session.  These slides
are reproduced following these minutes.  The written minutes will
emphasize those points not in the slides or where there was significant
discussion on points that are present in the slides.  In the minutes,
the notation [slide k] will be used to refer to the kth slide.



Internet Growth

As context, it was noted that the Internet was growing in complexity
along a number of dimensions [slide 2].  Further, while the Internet was
once a consciously cooperative technical effort in which the operators
of Internet components would (comparatively) easily share information,
it has become a consciously competitive market in which operators of
Internet components are (comparatively) reluctant to share information
about the performance and reliability of their networks.



General Criteria for IPPM

General criteria for IP Provider Metrics were then discussed [slide 4].
It was stressed several times that we would need to be very careful to
avoid biased metrics, and that the metrics and methodologies that result
from our work should benefit both users and providers.  Scott Bradner
noted that, based on earlier BMWG experience, vendors will `build to
metrics', and that we should therefore keep this in mind as we define
our metrics (rather than naively wishing that providers will not `build
to metrics').

In all the examples of path performance metrics, Scott noted that there
were three important kinds of cases:


  1. Measures of the IP path between a pair of specific sites,

  2. Measures of the IP path from one specific site to a (conceptually)
     large number of Internet sites, and

  3. Measures of the IP paths among a specific set of sites, as when an
     organisation uses an IP service as a `virtual private data
     network'.


In a closely related comment, it was noted that Internet exchange points
might play a key role in allowing us to characterise IP paths in a way
that would allow paths to be composed of two or more IP path segments
(e.g., site to exchange point and exchange point to exchange point).  In
some cases, a metric might have the property that the metric for a
complete path could be estimated from metrics of the IP path segments.
This would enable interesting methodologies relating to some cases in
the previous paragraph.
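The composition idea above can be sketched in a few lines; this is an
illustrative example only, assuming an additive metric (delay) and
hypothetical site and exchange-point names with made-up segment values:

```python
# A sketch of composing a path metric from segment metrics, assuming the
# metric (delay) is additive over segments.  All names and delay values
# below are illustrative, not measured data.
segment_delay = {  # assumed one-way delays in milliseconds
    ("site-A", "exchange-1"): 12.0,
    ("exchange-1", "exchange-2"): 8.0,
    ("exchange-2", "site-B"): 15.0,
}

def compose_delay(path):
    # Estimate the complete-path delay as the sum over consecutive segments.
    return sum(segment_delay[(a, b)] for a, b in zip(path, path[1:]))

est = compose_delay(["site-A", "exchange-1", "exchange-2", "site-B"])
print(est)  # 35.0
```

Whether a given metric actually has this compositional property is
exactly the kind of question the group would need to settle per metric.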

It was also noted that the cost of performing a given measurement should
be considered as a criterion.


An Initial Set of Needs for Metrics

About half of the group's time was devoted to discussing an initial set
of needs for metrics.  The first set of needs concerns the performance
and reliability of paths through the Internet [slide 6].  It was noted
that in many cases, paths from a user site to key Internet exchange
points would be of great importance.  Of the path performance metrics
discussed, delay was comparatively easy to measure.
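The point that delay is comparatively easy to measure can be illustrated
with a simple timestamped-probe loop.  This is a sketch, not an agreed
methodology: it assumes a cooperating UDP echo responder (run locally
here so the example is self-contained), and the port and sample count
are arbitrary:

```python
# A minimal sketch of round-trip delay measurement using timestamped
# UDP probes.  A local echo responder stands in for a remote one.
import socket
import statistics
import threading
import time

def udp_echo_server(sock):
    # Echo each datagram back to its sender; stop on a sentinel.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"STOP":
            break
        sock.sendto(data, addr)

def measure_rtts(addr, samples=10):
    # Send timestamped probes and record round-trip delays in seconds.
    rtts = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(2.0)
        for i in range(samples):
            start = time.monotonic()
            s.sendto(b"probe-%d" % i, addr)
            s.recvfrom(1024)
            rtts.append(time.monotonic() - start)
    return rtts

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # any free port on loopback
addr = server.getsockname()
threading.Thread(target=udp_echo_server, args=(server,), daemon=True).start()

rtts = measure_rtts(addr)
print("median RTT: %.6f s" % statistics.median(rtts))
server.sendto(b"STOP", addr)           # shut the echo thread down
```

Even this toy loop shows why delay is tractable: a single probe yields a
sample, and no sustained load is placed on the network.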

Single-application and aggregate flow capacity are more difficult to
measure for several reasons:


   o Tests of flow capacity are hard to do without inducing congestion
     within the network, thus negatively affecting other users,

   o The flow capacity of a network depends both on the network's
     technical qualities and on the current load being placed on the
     network,

   o In some cases, users care about the expectation of flow capacity at
     a typical time; in other cases, they care about near-worst-case
     flow capacity; thus, measuring flow capacity in the presence of
     90th percentile congestion might be useful, and

   o In some cases (as with distributed interactive simulation)
     near-real-time flows might need to be sustained, while in others
     flow capacity measured over several seconds might suffice.

Thus, for a number of reasons, flow capacity is more difficult to
define and to measure than delay.


In addition to congestion affecting any measurement of flow capacity
through a network, it was pointed out that a network's robustness in the
presence of congestion might be an important property to measure.

Reliability is also important but hard to define/measure.

In discussions of several of these IP path performance metrics, Scott
asked us to consider whether we might include both black box metrics
(i.e., viewing the IP path only from the boundaries of IP `clouds') and
white box metrics (i.e., viewing also the state of routers and lines
within the `cloud').

In discussion, someone mentioned that any quantity a user could measure
from the border of an IP cloud could, of course, also be measured by the
provider of that cloud.  This would be valuable, since the provider
could then anticipate and avoid situations in which the performance of
the IP cloud would be, or appear to be, poor as seen by the user.  Guy
noted that, rather than taking this as an assumption, we should state it
as a desirable property of the path performance metrics that we design.

Gary Malkin observed that we should keep in mind that most users will be
new to Internet engineering and of limited technical sophistication.  In
discussion, it was noted that we would want simple tools usable directly
by naive users, as well as more sophisticated tools usable by
sophisticated users, by IP providers, and by organisations functioning
as advisers to naive (or sophisticated) users.

There was considerable discussion of routing stability [slide 7].  Some
felt that the dynamic properties of end-to-end availability, quickness
of recovery, and stability were merely technical causes of reliability,
and thus not a separate metric in and of themselves.

The discussion of DNS performance metrics struck a positive chord [slide
8].  In addition to performance (in the narrow sense), Ruediger Volk
suggested that the stability and reliability of a DNS service were
important.

Similarly, NTP service, Web-server service, and NetNews service were
thought to be value-added services that could be dealt with in a manner
similar to the treatment of DNS service (assuming that we could do a
good job of that).

On the other hand, several were pessimistic that useful metrics could
arise in the NOC responsiveness area.



Administrative Issues

At the conclusion of the session, we dealt with several administrative
issues [slide 13]:


   o We agreed that for the present we would operate within the BMWG.

   o Guy will work to reorganise the IPPM mailing list.  The list of
     those attending the present session will be added to the current
     IPPM list, and people will then be asked to confirm that they want
     to be on the list.

   o Scott Bradner stressed the importance of deliverables, specifically
     Internet-Drafts that would progress to Informational RFCs.  Example
     topics include:

      -  General metric criteria, and

      -  Descriptions of proposed specific metrics.


   o Guy said that an IPPM session might be called before the next IETF
     meeting.  Specifically, one might be called to meet in Pittsburgh
     in early September (adjacent in time to the coming NANOG meeting
     there).