Editor's note: These minutes have not been edited.

Minutes of the Routing Over Large Clouds (rolc) Working Group
Los Angeles IETF, 4 March 1996, 1300-1500 and 1530-1730
Reported by Howard Berkowitz, George Swallow, and Andrew Malis

The rolc WG met in two sessions at this IETF. There were 142 attendees.

Agenda:

* First Session
  1. Agenda Bashing
  2. ATM Forum report and liaison from LANE and MPOA (Swallow & Halpern)
  3. NHRP revision 08-beta (Luciani)
     draft-ietf-rolc-nhrp-08-beta (distributed via email)
  4. ATM Forum LAN Emulation Server Synchronization (McCloghrie)
  5. NHRP/ATMARP/MARS Server Cache Synchronization Protocol (SCSP) (Luciani)
     draft-luciani-rolc-scsp-01

* Second Session
  6. NHRP MIB Status (Luciani and Greene)
  7. NHRP Protocol Applicability Statement (Cansever)
     draft-ietf-rolc-nhrp-appl-02
  8. NHRP for Destinations off the NBMA Subnetwork (Rekhter)
     draft-ietf-rolc-r2r-nhrp-00
  9. Support for Sparse Mode PIM over ATM (Rekhter, Farinacci)
     draft-rekhter-pim-atm-00
  10. Multicast Inscalability over Large Cloud (Ohta)
      draft-ohta-mcast-large-cloud-00
  11. OSPF Cut-through Advertisements (Coltun)

First Session:

ATM Forum MPOA Report

George Swallow, the ATM Forum Multi-Protocol Over ATM (MPOA) Sub-Working
Group chair, reported that the Forum has met twice since Dallas and has
concentrated on integrating LAN Emulation (LANE) into the MPOA
architecture. The IETF's liaison on server synchronization was well
received, and a liaison was sent back to the IETF. LANE now supports
multiple servers, and has a slightly different problem space than NHRP or
ATMARP. MPOA is also building on NHRP and MARS.

NHRP 08-beta Specification (draft-ietf-rolc-nhrp-08-beta)

Jim Luciani spoke on the NHRP specification. Version 08-beta collects
several changes, but is not quite ready to be issued as an Internet
Draft. There are some open issues that Jim wanted to address.
The major changes from version 07 were wording changes, the addition of a
Don't Reply bit to the NHRP Purge Request, a correction to the LAG
discussion, the addition of a NAK code to the Resolution Reply, and the
re-institution of two error codes.

Open issues and discussion:

Should there be an error indication when the hop count is exceeded? It
was agreed that there should be, subject to the usual constraint of not
sending an error in response to an error message. It is reasonable to
send back the error indication because a loop in the direction of the
destination does not imply that a loop exists back to the source.

Should NHRP registration replies be routed by NBMA or protocol address?
In other words, in the case where a client is registering but is not
talking directly to the registration server, should a VC be opened from
the server back to the requester? The WG agreed that yes, the response
will go at the ATM layer to the NBMA address of the requester.

Should a router client be allowed to register an arbitrary station? The
current specification implies that a client can register itself or a
subnet of which it is a member. The WG agreed that for a router to
register a subnet behind it without also having an address on that
subnet, the router must be an NHS and serve that subnet. The text needs
to be clarified on this point.

The next step is to produce a real version 08 to become an Internet Draft
as soon as possible. Once there is a server synchronization draft to
reference in the NHRP spec, it will be possible to have a WG last call in
order to send the specification to the IESG. At this point, no other
changes (other than bug fixes and clarifications) are expected in the
draft.

Server Synchronization

Both the ipatm and rolc WGs have been working on multiple-server
synchronization. Both WGs realize that if a technology requires a server,
one server alone is not enough for robustness, load leveling, etc.
ATMARP will eventually transition to NHRP, so the two groups are
cooperating to ensure that the same synchronization mechanism is used for
NHRP, ATMARP, and MARS servers, and potentially other address resolution
servers as well. The synchronization mechanism must not be specific to
NHRP.

The Classic2 ipatm specification includes a server synchronization
protocol, which is similar to SCSP but has different implementation
characteristics. It has been agreed that the server synchronization part
of Classic2 will become a separate document. This allows the new
mechanism to be referenced by ipatm and rolc as needed. The SCSP and
Classic2 authors are collaborating to produce a single specification.

Presentation on LANE Server Synchronization (Keith McCloghrie)

Keith McCloghrie presented this to inform the rolc WG about LAN
Emulation's server synchronization protocol. LANE 1.0 specifies the LAN
Emulation Client (LEC) to Server (LES) interface; LANE 2.0 involves
server-to-server protocols as well.

Because of LAN Emulation's topology requirements -- a full mesh is useful
in small systems, but a tree of point-to-point links scales better -- the
LANE servers use a combination called a "peer tree." In a peer tree, each
node is either a complex node (of multiple LESs) or a single LES. A pure
tree has no complex nodes and no redundant links; a pure mesh has a
single complex node. Spanning tree is used to produce a loop-free tree.

Each node in the tree has all information for itself and the nodes below
it; therefore, the root and only the root has full information. New
registrations are sent toward the tree's root, with intermediate nodes
rejecting a registration if they find a conflict. Only the root responds
with success. Tradeoffs are possible; an optimistic LES assumes a
registration will succeed, but will revoke the LEC's registration if an
error is detected. Resolution requests are also sent toward the root. If
the answer is "not registered," the root floods the request.
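The registration flow just described -- new bindings forwarded toward the
root, intermediate nodes rejecting on conflict, only the root confirming
-- can be sketched roughly as follows. This is a simplified illustration,
not code from the LANE specification; all class and method names are
hypothetical.

```python
# Hedged sketch (not from the LANE spec): how a peer tree of LES nodes
# might propagate a new LEC registration toward the root. Intermediate
# nodes reject on conflict; only the root answers with success.

class LESNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # None at the tree's root
        self.registrations = {}       # LAN destination -> ATM address

    def register(self, lan_dest, atm_addr):
        """Forward a registration toward the root; reject on conflict."""
        existing = self.registrations.get(lan_dest)
        if existing is not None and existing != atm_addr:
            return False              # conflict: this node rejects
        # An "optimistic" LES records the binding before the root
        # confirms, revoking it if an ancestor reports a conflict.
        self.registrations[lan_dest] = atm_addr
        if self.parent is None:
            return True               # only the root responds with success
        ok = self.parent.register(lan_dest, atm_addr)
        if not ok:
            del self.registrations[lan_dest]   # revoke on failure
        return ok

root = LESNode("root")
leaf = LESNode("leaf", parent=root)
assert leaf.register("mac-A", "atm-1") is True
# A conflicting binding for the same destination is rejected:
assert leaf.register("mac-A", "atm-2") is False
```

The pessimistic variant discussed next would instead flood at each node
rather than assume success along the path.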
A pessimistic LES enhancement assumes failure and floods at each node.

Issues remain -- a caching mechanism is needed, with appropriate purging
mechanisms that do not cause momentary losses of connectivity. Another
issue involves healing after partition repair; conflicts then need to be
resolved.

The LANE scaling goal is 20+ servers and 2000+ clients. The LANE group
thinks it can scale beyond that.

Server Cache Synchronization Protocol (SCSP) (draft-luciani-rolc-scsp-01)

Jim Luciani presented his Server Cache Synchronization Protocol (SCSP)
draft. It is largely based on OSPF, principally to avoid reinventing the
wheel and to use well-known, reasonable-overhead mechanisms.

A Server Group (SG) is a set of synchronized servers bound together
through some commonality (e.g., membership in a LIS or LAG). All
statements about SCSP are made from the perspective of the protocol stack
in the Local Server (LS). Directly Connected Servers (DCSs) are one hop
away (e.g., through a VC). A Remote Server (RS) is neither an LS nor a
DCS but is still part of the SG.

Three basic messages are defined: a Hello, a Client State Update, and a
Cache Alignment. Each server has separate state machines for Hello and
Cache Alignment. The exact mechanism for the necessary preconfiguration
of the LS (i.e., who the DCSs are) is implementation specific.

All servers within an SG need to be synchronized, but these servers
emphatically do not keep synchronized with other SGs. A registration
within an SG is replicated to all other servers in the SG. Early
simulation results suggest that the performance requirement is modest,
considering there is no need for resource-intensive operations such as a
Dijkstra calculation.

Open issues:

Should cache alignment messages contain the database or a summary?
Currently, they contain summaries.

How should counters be wrapped? The WG favored a normal circular sequence
space.

Should a bit be added to Hello messages to allow point-to-multipoint? The
WG felt it was not warranted.
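The circular sequence space favored for wrapping counters works roughly
as sketched below. This is an illustration of the general serial-number
technique, not text from the SCSP draft; the 32-bit width is an
assumption chosen for the example.

```python
# Hedged sketch of a circular ("serial number") sequence space of the
# kind the WG favored for wrapping counters. A number is considered
# "newer" than another if it lies in the half of the space ahead of it,
# so comparisons keep working after the counter wraps past its maximum.

BITS = 32                 # illustrative width, not from the draft
MOD = 1 << BITS
HALF = 1 << (BITS - 1)

def seq_newer(a, b):
    """True if sequence number a is newer than b in circular space."""
    return a != b and ((a - b) % MOD) < HALF

def seq_next(a):
    """Increment a sequence number, wrapping modulo 2**BITS."""
    return (a + 1) % MOD

# Ordinary case: 5 is newer than 3.
assert seq_newer(5, 3)
# Wraparound: the successor of the maximum value is still "newer".
top = MOD - 1
assert seq_next(top) == 0
assert seq_newer(seq_next(top), top)
```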
The concept of a designated server has been removed from the
specification.

Second Session:

NHRP MIB

Maria Greene and Jim Luciani received the MIB from Avri Doria, the
previous editor. It is based in part on the Classical IP MIB, which will
aid the modeling of co-resident ATMARP and NHRP environments. Input is
welcome. The current plan is to submit it at the next IETF meeting in
June.

NHRP Protocol Applicability Statement

Derya Cansever submitted the current draft several weeks ago and received
some comments. It will be updated as appropriate. He asked for additional
questions or comments; there were none.

NHRP for Destinations off the NBMA Network

Yakov Rekhter issued a new draft with some text to deal with not replying
when routing is in a transient state.

Support for Sparse Mode PIM over ATM

Yakov presented a mechanism to eliminate extra layer-three hops across an
ATM network for multicast traffic. This technique is applicable to both
Sparse Mode Protocol Independent Multicast (PIM) and Core Based Trees
(CBT). The primary focus is on supporting sparsely populated multicast
groups, especially for flows that have sufficiently high volume or QoS
requirements. Yakov also discussed the relationship between multicast
shortcuts and RSVP, and concluded that they are orthogonal issues and
that a shortcut mechanism should not be a part of RSVP.

Multicast Inscalability over Large Clouds

Masataka Ohta discussed the inscalability of multicast over large clouds.
His assumption is that a "large cloud" is too large for centralized
servers to handle all hosts. He discussed how message rates and the
number of peer relationships are both issues. His proposal is to make the
link-layer entities able to recognize IP protocols, so that servers are
not necessary.

OSPF Cut-Through

Rob Coltun presented work in progress, with an Internet Draft
forthcoming. He discussed a mechanism for using information in OSPF to
eliminate some hops for NHRP requests.
It is not intended to replace NHRP, but is rather an optimization. His
goals are to decrease set-up (address resolution) time, facilitate
router-to-router NHRP, and increase IP routing's visibility of the NBMA
overlay.

Rob's proposal is to add the ability for routers (Next Hop Servers) to
advertise their NBMA subnetwork-layer addresses in OSPF. This
advertisement associates the NBMA address with the LIS, the area border,
and the AS boundary router. The information is used to establish direct
VCs to NHSs to short-cut the NHRP query, rather than requiring the query
to follow the routed path.

Following Rob's talk, the WG adjourned.

End of Minutes
__________________________________________________________________________
Andrew G. Malis                                  voice: +1 508 266-4522
Ascom Nexion                                     FAX:   +1 508 266-2300
289 Great Rd., Acton MA 01720 USA                malis@nexen.com