- <HTML>
- <HEAD>
- </HEAD>
- <BODY>
- <!--$v=0-->We can now move on and talk about some traffic management
- <!--$v=3389-->techniques within ATM.
- <!--$v=6137-->And in doing so,
- <!--$v=8611-->it's important to understand why we need traffic management,
- <!--$v=11268-->and then we'll talk about some traffic control techniques.
- <!--$v=14520-->And then we'll talk about those available bit rate congestion feedback
- <!--$v=18001-->mechanisms that I mentioned previously.
- <!--$v=20658-->So, why do we need traffic management in an ATM
- <!--$v=23956-->network? Actually in any network you want to have
- <!--$v=26566-->some level of traffic management, I guess.
- <!--$v=28994-->But basically, the goal in an ATM network is
- <!--$v=32200-->to prevent congestion from happening
- <!--$v=35636-->in the first place. So it's kind of like preventative maintenance
- <!--$v=38338-->on congestion. And then if congestion happens,
- <!--$v=41316-->being able to isolate it so that it only
- <!--$v=44110-->stays in one small part of the network rather than spreading to the rest
- <!--$v=47408-->of the network. Those are really the main reasons behind
- <!--$v=50706-->combating congestion.
- <!--$v=53271-->Also we have this concept that I have here written as
- <!--$v=55653-->provision for priority control. And what that means
- <!--$v=58767-->is that basically
- <!--$v=61561-->the buffer mechanisms and so forth will be a little bit different
- <!--$v=64264-->in an ATM switch, because now that we have
- <!--$v=67424-->this Quality of Service layer added to the whole equation
- <!--$v=70768-->it's no longer first-in, first-out of the cell switch,
- <!--$v=74204-->but rather a CBR cell which
- <!--$v=77502-->comes into the switch
- <!--$v=80845-->may have higher priority than maybe a, you know, UBR cell.
- <!--$v=83731-->So even though the UBR cell arrived first,
- <!--$v=86892-->the CBR cell would be the first
- <!--$v=90144-->one to actually traverse the switch even though it came in after
- <!--$v=93304-->that UBR cell, for example. So provisioning for priority
- <!--$v=96098-->control, and we'll talk about different mechanisms
- <!--$v=99351-->that are used for that.
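- To make that priority idea concrete, here is a minimal Python sketch of a strict-priority cell scheduler with one queue per service class. It is only an illustration of the "not first-in, first-out" point above; the queue-per-class structure and the CBR-over-VBR-over-ABR-over-UBR ordering are assumptions for the example, not any particular switch's design.
- <PRE>
from collections import deque

# Strict-priority scheduler sketch: one FIFO per service class instead of a
# single FIFO for the whole port. Ordering below is an illustrative assumption.
PRIORITY_ORDER = ["CBR", "VBR", "ABR", "UBR"]

class PrioritySwitchPort:
    def __init__(self):
        self.queues = {svc: deque() for svc in PRIORITY_ORDER}

    def enqueue(self, cell, service_class):
        self.queues[service_class].append(cell)

    def dequeue(self):
        # always serve the highest-priority non-empty queue first, so a CBR
        # cell that arrived after a UBR cell can still leave the switch first
        for svc in PRIORITY_ORDER:
            if self.queues[svc]:
                return self.queues[svc].popleft()
        return None

port = PrioritySwitchPort()
port.enqueue("ubr-cell-1", "UBR")
port.enqueue("cbr-cell-1", "CBR")
print(port.dequeue())   # cbr-cell-1 leaves first despite arriving second
- </PRE>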
- <!--$v=102099-->So when we talk about, you know, why do we need this
- <!--$v=105443-->traffic management, I think this is a good illustration of why it's needed.
- <!--$v=109015-->Essentially, the concept
- <!--$v=112039-->goes like this: if you have a 1500-byte Ethernet packet,
- <!--$v=115107-->that is chopped up into 32 53-byte
- <!--$v=118176-->cells. Now, as those cells are traversing
- <!--$v=121383-->the ATM network, if one of those cells is lost
- <!--$v=123902-->the rest are deemed useless when they reach the destination.
- <!--$v=126788-->So therefore we can get into this
- <!--$v=129765-->concept whereby we get this thing called congestion collapse.
- <!--$v=133338-->Whereby if
- <!--$v=135765-->cells are dropped randomly
- <!--$v=138147-->from different TCP/IP packets
- <!--$v=141400-->then all those TCP/IP packets
- <!--$v=144239-->when they get to the destination will be deemed useless,
- <!--$v=146850-->they will have to be retransmitted, that retransmission will
- <!--$v=150057-->cause more congestion, causing more drops, causing more
- <!--$v=153034-->re-transmissions, etc. So we get into this concept known as
- <!--$v=156607-->exponential congestion, or congestion collapse. That's
- <!--$v=159447-->the result of that. So
- <!--$v=162012-->cell loss is data's
- <!--$v=164806-->critical enemy when we're talking about traffic management.
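- To put a number on that, here is a small Python sketch of the AAL5 arithmetic behind the 1500-byte example above. The 8-byte AAL5 trailer and 48-byte cell payload figures are standard; the 1500-byte packet size is just the example's assumption.
- <PRE>
import math

AAL5_TRAILER = 8      # bytes of AAL5 trailer appended to the frame
CELL_PAYLOAD = 48     # bytes of payload in each 53-byte ATM cell

def cells_for_packet(packet_bytes):
    # AAL5 pads the frame plus trailer up to a whole number of 48-byte payloads
    return math.ceil((packet_bytes + AAL5_TRAILER) / CELL_PAYLOAD)

cells = cells_for_packet(1500)
print(cells)                                              # 32 cells
print(f"one lost cell wastes the other {cells - 1} cells of that packet")
- </PRE>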
- <!--$v=167783-->And we'll go into some different techniques - you know,
- <!--$v=171356-->we'll do a quick Cisco commentary and talk about some of our
- <!--$v=174929-->special traffic control techniques, and then we'll talk about
- <!--$v=178089-->some generic ones as well.
- <!--$v=181570-->So, when we look at traffic control techniques there's really three
- <!--$v=184594-->phases that happen. The first one is
- <!--$v=187250-->how to control traffic during the connection
- <!--$v=190090-->setup, or connection management. Really,
- <!--$v=192839-->connection management is kind of the acceptance of the call in the first place.
- <!--$v=195862-->And then we have traffic management, which has to do with policing. So,
- <!--$v=199434-->once the call is made and once the
- <!--$v=202137-->Quality of Service parameters were agreed on by both the network
- <!--$v=205435-->and the end user, making sure that
- <!--$v=208183-->the end user is actually abiding by that contract, if you will.
- <!--$v=211527-->So that's called policing, and we'll go into detail on that.
- <!--$v=214413-->And then finally we have traffic smoothing, or making the traffic
- <!--$v=217619-->look like something that it's not. So making it look smoother
- <!--$v=220505-->than it actually really is in order to abide
- <!--$v=223436-->by any traffic parameters that you have
- <!--$v=226963-->set in the network. So, when I think of this
- <!--$v=230215-->I think of the connection management as the
- <!--$v=232826-->acceptance, that's kind of like the judge, and traffic management is the policeman, and traffic
- <!--$v=236124-->smoothing would be kind of like the lawyer of these things.
- <!--$v=239102-->So, that's my little way to think of those.
- <!--$v=242491-->At any rate, this should look somewhat
- <!--$v=244873-->familiar, this traffic control techniques slide.
- <!--$v=248446-->And essentially, you know, bringing back into our talk here
- <!--$v=251973-->this concept of the end station making a contract with the
- <!--$v=255179-->ATM network using our traffic parameters that we described
- <!--$v=258202-->before. So
- <!--$v=260996-->what would happen here, for example, is we would have a workstation,
- <!--$v=264432-->who would be saying, "I want a virtual circuit.
- <!--$v=267180-->And I'll tell you what I want; I want X megabits per second, at Y delay, with
- <!--$v=270295-->Z cell loss."
- <!--$v=272722-->And he will ask that of the network.
- <!--$v=275333-->Well, then the network switches
- <!--$v=278036-->must talk amongst themselves, and they do this via
- <!--$v=281288-->a protocol known as CAC, or
- <!--$v=283670-->Connection Admission Control. And what they do is,
- <!--$v=286372-->based on the
- <!--$v=288754-->request that was made from the end stations,
- <!--$v=292281-->the switches talk amongst themselves, they check their different
- <!--$v=295121-->buffer levels, and their memory buffers,
- <!--$v=298144-->and the bandwidth, per-port bandwidth, and they
- <!--$v=300709-->answer yes or no, can I accommodate this connection? And if
- <!--$v=303961-->not, they will send back a negotiating value.
- <!--$v=306710-->And so, there's this negotiation between the end
- <!--$v=310099-->station and the network, which is known as CAC, or
- <!--$v=312939-->Call Admission Control, as I said before. So
- <!--$v=315596-->now the connection is made and the user is happy,
- <!--$v=318573-->and the application is going along, and then all of a sudden
- <!--$v=322146-->there's this rebel application that comes into the network,
- <!--$v=324757-->and all of a sudden the end station
- <!--$v=327643-->is not conforming to the contract that was made before.
- <!--$v=331078-->So now this is where that policing function steps in.
- <!--$v=334559-->And that policing function is also known
- <!--$v=337399-->as Usage Parameter Control, or UPC.
- <!--$v=339781-->And what that policing function does is it says, "Hey," -
- <!--$v=342529-->and it's residing on the ingress
- <!--$v=345507-->switch into the ATM cloud - and it says, "Hey, this end
- <!--$v=348301-->station is not keeping with the traffic contract
- <!--$v=351049-->that we had laid out before." So I have a decision
- <!--$v=353522-->to make. I can either just pass these cells randomly and just let them go,
- <!--$v=356866-->or I can start marking that CLP, or that clipper bit,
- <!--$v=360256-->the cell loss priority bit, which would say that if there is
- <!--$v=363096-->congestion down the road, drop this - this
- <!--$v=365569-->will be, you know, eligible for discard first.
- <!--$v=368501-->And then finally, the policing function could also say, "I'm
- <!--$v=371570-->just dropping these cells on the spot, because this
- <!--$v=374409-->person is causing too much congestion
- <!--$v=377387-->at my layer." So that would be an example of
- <!--$v=380135-->isolating the connection - the congestion at that one
- <!--$v=383296-->single point. So, in this
- <!--$v=386823-->scenario, we have our policeman up there, our UPC.
- <!--$v=389846-->And basically, as I said before, the
- <!--$v=393281-->policing function can either pass, or mark the clipper bit, or
- <!--$v=396487-->simply drop the cell on the spot if there is
- <!--$v=399602-->congestion. Generically, or I should say,
- <!--$v=403083-->in the public UNI space,
- <!--$v=405969-->this UPC functionality is also known as
- <!--$v=408443-->GCRA, or Generic Cell Rate Algorithm.
- <!--$v=411557-->So that's worth mentioning, because sometimes if you're
- <!--$v=414214-->negotiating with your public service provider, you may have
- <!--$v=417512-->to find out, you know, what the GCRA
- <!--$v=420443-->parameters are and things like that.
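- Here is a minimal Python sketch of the three-way policing decision just described: pass the cell, tag its CLP (clipper) bit, or drop it on the spot. The conformance test itself is abstracted into a flag here and is a placeholder for whatever the contract's GCRA parameters say; see the leaky bucket sketch later on.
- <PRE>
# UPC (policing) decision sketch: pass, tag CLP, or drop a non-conforming cell.
def police_cell(cell, conforming, tagging_allowed=True):
    if conforming:
        return "pass", cell                 # within the traffic contract
    if tagging_allowed:
        tagged = dict(cell, CLP=1)          # mark as eligible for discard later
        return "tag", tagged
    return "drop", None                     # discard on the spot at the ingress

print(police_cell({"vpi": 0, "vci": 100, "CLP": 0}, conforming=False))
- </PRE>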
- <!--$v=423054-->So moving along with traffic
- <!--$v=426444-->management, there's this concept of tail packet
- <!--$v=429650-->discard, and the way tail packet discard works is this:
- <!--$v=433223-->when cells are passing
- <!--$v=436063-->through the switch
- <!--$v=438674-->and when
- <!--$v=441376-->congestion starts to be seen, the switch will then
- <!--$v=444903-->make a decision to start dropping cells. Well, rather than just
- <!--$v=448018-->start dropping cells randomly, because we saw what can happen
- <!--$v=450492-->there with that congestion collapse scenario,
- <!--$v=452873-->the switch has to be equipped to be able
- <!--$v=456400-->to discard cells all from the same bad packet.
- <!--$v=459011-->So, that's the concept with
- <!--$v=461576-->tail packet discard, is dropping all the cells
- <!--$v=464737-->from the same bad packet, rather than just dropping them randomly.
- <!--$v=468264-->And - this is kind of a
- <!--$v=471653-->Cisco commercial here - there are a couple of Cisco
- <!--$v=474768-->ways to deal with traffic management in the LightStream 1010 line.
- <!--$v=478249-->We have two different mechanisms
- <!--$v=480998-->basically to deal with this. We have this
- <!--$v=483471-->concept of Intelligent Tail Packet Discard, or ITPD,
- <!--$v=486907-->which is essentially the tail packet discard
- <!--$v=490342-->which I just mentioned, which basically will drop all the
- <!--$v=492724-->cells from the same bad packet, but in addition, because
- <!--$v=495701-->this is AAL5 data traffic, it will take that last
- <!--$v=499045-->cell and it will turn the
- <!--$v=501839-->clipper bit to zero, send it to the destination
- <!--$v=504770-->so the destination knows that,
- <!--$v=507152-->okay, all the other cells that were involved in this stream were dropped
- <!--$v=510496-->previously. But now what you can do is put together what you got,
- <!--$v=512970-->and then go back to the source and have them
- <!--$v=515535-->resend what you need right away, rather than waiting for retransmission timers and so forth
- <!--$v=519016-->to time out and whatnot. So that's the concept
- <!--$v=522176-->of intelligent tail packet discard.
- <!--$v=524558-->And then secondly we have this concept known as early packet discard.
- <!--$v=528039-->And what early packet discard does is it allows you
- <!--$v=531566-->to set up, basically, policies within the network,
- <!--$v=534269-->and thresholds, which say, "If this
- <!--$v=537063-->threshold is met, this threshold of congestion is met,
- <!--$v=539765-->then based on this policy, I want to start dropping
- <!--$v=543201-->cells from this type of traffic."
- <!--$v=545903-->So, this is the concept of early packet discard.
- <!--$v=548652-->So early packet discard tries to
- <!--$v=551125-->deal with congestion before it becomes a problem, so
- <!--$v=554469-->proactively combating congestion, versus tail packet discard,
- <!--$v=558042-->which says, "Wow, there's a lot of congestion, I better
- <!--$v=561431-->kick into ITPD mode," if you will.
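- Here is a minimal Python sketch of how an early packet discard decision and an intelligent tail-of-packet behaviour might be combined for one virtual circuit carrying AAL5 traffic. The threshold value and the per-VC bookkeeping are illustrative assumptions, not the LightStream 1010 implementation.
- <PRE>
EPD_THRESHOLD = 400   # queue depth (in cells) above which whole new packets are refused

class VcDiscardState:
    """Per-VC discard state for AAL5 traffic. 'end_of_packet' stands in for the
    AAL5 end-of-frame indication carried in a cell's PTI field."""

    def __init__(self):
        self.in_packet = False        # are we in the middle of a packet?
        self.dropping = False         # are we discarding the current packet?

    def admit(self, end_of_packet, queue_depth):
        """Return True to queue the cell, False to discard it."""
        if not self.in_packet:
            # first cell of a new packet: the EPD decision is made here
            self.in_packet = True
            self.dropping = queue_depth > EPD_THRESHOLD
        keep = not self.dropping
        if end_of_packet:
            # ITPD-style twist: forward the last cell even of a dropped packet
            # so the receiver can delineate the frame and recover quickly
            keep = True
            self.in_packet = False
            self.dropping = False
        return keep
- </PRE>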
- <!--$v=564180-->So the combination
- <!--$v=566561-->of these concepts is known as what Cisco calls UBR+,
- <!--$v=569539-->or Unspecified Bit Rate Plus. Now
- <!--$v=572287-->this is a good illustration
- <!--$v=575127-->of how this works and why
- <!--$v=577646-->this is very important, this concept of, you know, intelligent packet discard
- <!--$v=581036-->and intelligent tail packet discard, and so forth.
- <!--$v=584334-->Essentially on the top here, we see that maybe we're getting more
- <!--$v=587540-->cells through the switch, however
- <!--$v=590288-->the problem is that when the cells get to the destination,
- <!--$v=592945-->how many are usable if, when you put them back into
- <!--$v=595922-->packet format, there are some cells missing? They're going to have to ask
- <!--$v=599083-->for a re-transmission. So essentially the bottom
- <!--$v=602473-->piece of this diagram here, which
- <!--$v=605450-->shows the switch with intelligent packet discard,
- <!--$v=608290-->simply dropped everything from that blue stream,
- <!--$v=611450-->and just tried to concentrate on the red and yellow stream. And incidentally,
- <!--$v=614840-->in the long run, what we've done is we've
- <!--$v=617359-->maximized what's called "goodput," or the number of
- <!--$v=620749-->useable packets that reach the destination. So
- <!--$v=623268-->incidentally, it's very important when looking at
- <!--$v=626429-->ATM vendors it's not just cell-in, cell-out, how fast can you switch cells,
- <!--$v=629726-->it's really a concept of being able to
- <!--$v=632383-->take advantage of these traffic management techniques
- <!--$v=635040-->to maximize goodput. So finally the
- <!--$v=637468-->converse of traffic
- <!--$v=640857-->policing would be traffic smoothing,
- <!--$v=643560-->or traffic shaping if you will. And essentially
- <!--$v=646812-->where you see traffic shaping is on the egress switch going from
- <!--$v=649331-->your private ATM network into the public
- <!--$v=652537-->ATM network. So essentially what you can do here is,
- <!--$v=656110-->you can take very bursty traffic
- <!--$v=658950-->that might be in your private ATM network, if you will,
- <!--$v=662294-->and you can shape it such that even though it's
- <!--$v=664676-->going crazy back here in the private arena,
- <!--$v=667653-->when it gets out into the public ATM network
- <!--$v=670081-->it's moved out there very smoothly in a very contiguous manner
- <!--$v=673516-->such that it can comply with whatever your public ATM network has
- <!--$v=676631-->given you as far as the
- <!--$v=679746-->generic cell rate algorithm and so forth goes. The
- <!--$v=682860-->method I want to mention very quickly by which
- <!--$v=685746-->the switch will actually
- <!--$v=689319-->enforce the traffic smoothing, the
- <!--$v=691930-->traffic shaping, the policing and so forth, is this concept of the
- <!--$v=695273-->leaky bucket algorithm. That is a standard,
- <!--$v=697747-->and you can find more information
- <!--$v=701136-->on that from the ATM Forum. It gets a little bit more in depth
- <!--$v=704663-->than I wanted to get in this talk, but
- <!--$v=708190-->essentially the leaky bucket algorithm
- <!--$v=710710-->is a way to use that - the peak
- <!--$v=713183-->cell rate, and the maximum burst size, and things like that,
- <!--$v=715657-->in order to make sure that traffic is passing through the
- <!--$v=718634-->switch at the specified rate.
- <!--$v=721932-->So that's the leaky bucket algorithm in a quick
- <!--$v=724451-->nutshell, and as I said before I have a listing at the end
- <!--$v=727749-->of this talk of a bunch of Web pages
- <!--$v=730314-->and that is mentioned on one of them.
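- For reference, here is a minimal Python sketch of the GCRA conformance test the talk alludes to, in its virtual-scheduling form (which is equivalent to the leaky bucket description). The increment corresponds to one cell time at the peak cell rate and the limit to the tolerance; the parameter values in the usage line are arbitrary assumptions.
- <PRE>
# GCRA / leaky bucket conformance sketch (virtual scheduling form).
# increment ~ 1/PCR (time between conforming cells); limit ~ the tolerance.
def make_gcra(increment, limit):
    tat = 0.0                                # theoretical arrival time of next cell
    def conforms(arrival_time):
        nonlocal tat
        if arrival_time < tat - limit:
            return False                     # too early: non-conforming (tag or drop)
        tat = max(arrival_time, tat) + increment
        return True                          # conforming: pass and advance the bucket
    return conforms

# usage: a peak rate of 10 cells per unit time (increment 0.1), tolerance 0.05
check = make_gcra(increment=0.1, limit=0.05)
print([check(t) for t in (0.0, 0.05, 0.2, 0.21)])   # [True, True, True, False]
- </PRE>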
- <!--$v=733612-->So we've talked a little bit about
- <!--$v=735994-->some generic traffic control techniques for packet discard,
- <!--$v=739567-->and now what I'd like to do is talk briefly about
- <!--$v=742224-->these available bit rate congestion
- <!--$v=745384-->feedback mechanisms that
- <!--$v=747949-->are in place in ATM networks.
- <!--$v=750881-->Basically, these congestion feedback mechanisms
- <!--$v=754087-->use what's called Resource Management
- <!--$v=757064-->cells, or RM cells, which are basically
- <!--$v=759996-->just cells that are dropped out onto the network as "control"
- <!--$v=763156-->cells. So in that PTI field, in that
- <!--$v=765630-->first bit, where we talked about whether or not it was user data
- <!--$v=768653-->or control data - if it's an RM cell,
- <!--$v=771951-->it would be indicated there as a control
- <!--$v=774882-->cell. So at any rate,
- <!--$v=777448-->there are basically four flavors of this feedback
- <!--$v=780150-->mechanism that we're going to talk about. There's
- <!--$v=783494-->EFCI marking,
- <!--$v=786288-->Explicit Forward Congestion Indicator marking,
- <!--$v=788853-->and then there's relative rate marking,
- <!--$v=791418-->and then there's explicit rate marking,
- <!--$v=793800-->and then there's this concept known as Virtual Source/Virtual Destination,
- <!--$v=797373-->also known as VS/VD. So we'll go through each of
- <!--$v=800671-->these right now. With regard
- <!--$v=803602-->to the first one that I mentioned, the EFCI
- <!--$v=806671-->marking, the Explicit Forward Congestion Indicator
- <!--$v=809145-->marking. What happens here is we have
- <!--$v=812580-->a source workstation which forwards a cell through the ATM
- <!--$v=815374-->network, and as that
- <!--$v=818397-->cell traverses the ATM network in a forward direction,
- <!--$v=821924-->if the switches along the way
- <!--$v=825176-->start to experience congestion, what they'll do is they'll
- <!--$v=828703-->set the
- <!--$v=831223-->congestion indicator -
- <!--$v=834017-->so the EFCI is set on those cells.
- <!--$v=837223-->As those cells then reach the destination
- <!--$v=840429-->workstation, if you will,
- <!--$v=843224-->that destination workstation has the
- <!--$v=845972-->ability to drop out those RM cells onto the network
- <!--$v=849499-->in a backward direction, so that
- <!--$v=852339-->now these resource management cells when
- <!--$v=854950-->they reach the source, the source will say, "Oh, the destination's telling me that there is
- <!--$v=857973-->a problem because there is congestion in the network."
- <!--$v=860904-->Well, then the source can start
- <!--$v=863378-->sending slower and whatnot, but as you can see here,
- <!--$v=866401-->this feedback loop is really large,
- <!--$v=869149-->and there's a lot of burden on the destination to drop out these
- <!--$v=871989-->resource management cells. So
- <!--$v=874371-->you know, as it's illustrated here, it looks good on
- <!--$v=877028-->paper, but I sometimes wonder
- <!--$v=879730-->how effective it actually is, because by the time the
- <!--$v=882845-->congestion indicators reach the source to slow down, that congestion
- <!--$v=886418-->problem could have gone away. So
- <!--$v=889029-->that's one of the shortcomings of EFCI marking, although
- <!--$v=891777-->you do see it used.
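- As a rough illustration of the forward-marking half of this, here is a Python sketch of a switch setting the EFCI ("congestion experienced") indication in a data cell's PTI field as it forwards it. The 3-bit PTI layout with the middle bit as EFCI is from the standard ATM cell format, but the queue-depth congestion test is a placeholder assumption.
- <PRE>
# EFCI marking sketch. PTI is the 3-bit Payload Type Indicator in the cell
# header; for user data cells the middle bit is the EFCI flag.
EFCI_BIT = 0b010

def forward_cell(pti, queue_depth, congestion_threshold=500):
    is_user_data = (pti & 0b100) == 0          # top PTI bit 0 => user data cell
    if is_user_data and queue_depth > congestion_threshold:
        pti |= EFCI_BIT                        # mark congestion experienced
    return pti

print(bin(forward_cell(0b000, queue_depth=800)))   # 0b10 -> EFCI now set
print(bin(forward_cell(0b000, queue_depth=100)))   # 0b0  -> unchanged
- </PRE>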
- <!--$v=894204-->Secondly, and a little more effectively, is this concept of
- <!--$v=897594-->relative rate marking. And the way that relative rate marking
- <!--$v=901167-->works is this: essentially if there is
- <!--$v=904419-->congestion experienced,
- <!--$v=907167-->either in the forward or the backward
- <!--$v=909732-->direction, the switches
- <!--$v=912206-->can send RM cells, they can just drop RM cells
- <!--$v=915550-->into this stream in a backward motion
- <!--$v=919122-->saying that there is congestion. So we're kind
- <!--$v=921962-->of shortening the feedback loop a little bit more. So now
- <!--$v=924711-->the onus is no longer on the destination to
- <!--$v=928100-->drop out the RM cells, but now the ATM switches in the cloud
- <!--$v=931169-->can actually drop them out for the source
- <!--$v=933734-->to be aware that there is congestion and it should slow down.
- <!--$v=936666-->So that's relative rate marking. And then finally we have
- <!--$v=940101-->explicit rate marking which takes
- <!--$v=942529-->us to a whole other level, which gets very fancy,
- <!--$v=945827-->whereby if there is congestion,
- <!--$v=948438-->you know, starting to be experienced in the network
- <!--$v=951781-->what can happen at this point is the
- <!--$v=954209-->ATM switches
- <!--$v=956591-->can now drop out RM cells in both the forward and the backward
- <!--$v=959935-->direction. And not only can they just
- <!--$v=963095-->send out these RM cells that have a generic, like, "Oh there's
- <!--$v=966210-->congestion," but they can say, "There's this much
- <!--$v=969416-->congestion experienced. You, Mr. Destination, need to slow down this much," or,
- <!--$v=971890-->"There's this
- <!--$v=974730-->much congestion, you, Mr. Source, need to slow down this much."
- <!--$v=978302-->So the switch is able to indicate exactly
- <!--$v=981234-->how much the source and destination need to slow down in order to
- <!--$v=984486-->maintain this flow control.
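- Here is a minimal Python sketch of the explicit-rate idea: each switch along the path reduces the explicit rate (ER) field in a passing RM cell to the rate it can support, and the source then caps its allowed cell rate at whatever comes back, never exceeding its peak cell rate. The fair-rate numbers and field names are assumptions for illustration, not the RM cell format.
- <PRE>
# Explicit rate (ER) marking sketch. Rates are in cells/second, made up.
def switch_process_rm(rm_cell, local_fair_rate):
    rm_cell["ER"] = min(rm_cell["ER"], local_fair_rate)
    return rm_cell

def source_update_acr(returned_rm, peak_cell_rate):
    # the source may never exceed its PCR, and must not exceed the returned ER
    return min(peak_cell_rate, returned_rm["ER"])

rm = {"ER": 400_000}                            # source asks for its peak rate
for fair_rate in (300_000, 120_000, 250_000):   # three switches on the path
    rm = switch_process_rm(rm, fair_rate)
print(source_update_acr(rm, peak_cell_rate=400_000))   # 120000
- </PRE>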
- <!--$v=986868-->And then finally, we'll see that this
- <!--$v=989616-->concept of congestion feedback
- <!--$v=992090-->really shortens that congestion feedback loop down when we get
- <!--$v=995296-->to VS/VD, Virtual Source/Virtual Destination.
- <!--$v=998686-->Essentially what happens here is the
- <!--$v=1002029-->feedback loop is broken down between
- <!--$v=1004457-->two switches within the ATM cloud. So in this case
- <!--$v=1007984-->the two ATM
- <!--$v=1010824-->switches in the middle will talk amongst themselves
- <!--$v=1013847-->to try to combat congestion at an even closer range.
- <!--$v=1017008-->Now, what I'd like to mention here is that
- <!--$v=1019619-->this congestion feedback mechanism for VS/VD, this is mainly for wide area networks
- <!--$v=1022596-->where you might have, you know,
- <!--$v=1025711-->a sprawling number of ATM switches.
- <!--$v=1028505-->VS/VD is actually supported on the
- <!--$v=1031803-->StrataCom line of Cisco ATM
- <!--$v=1034963-->switches, whereas the
- <!--$v=1037437-->relative rate marking and EFCI marking and so forth
- <!--$v=1040597-->are seen on the LightStream 1010
- <!--$v=1043804-->campus ATM switches, if you will. So, that's really the differentiator.
- <!--$v=1047239-->And then what always comes to my mind is, you know,
- <!--$v=1050079-->when you start looking at different vendors and overhead taken
- <!--$v=1053423-->it's kind of like well, do you get into a situation where you're
- <!--$v=1055850-->spending more time trying to proactively combat things
- <!--$v=1059057-->versus, you know, spending more CPU cycles on that, versus
- <!--$v=1062446-->spending it just on moving cells faster.
- <!--$v=1064920-->So there's this point of diminishing returns that, you know,
- <!--$v=1067897-->different vendors have to find.
- <!--$v=1070508-->So I would urge you that if you did want to use these congestion
- <!--$v=1073119-->feedback mechanisms, you know, no matter who the vendor is,
- <!--$v=1076188-->to make sure that
- <!--$v=1079394-->there's a good balance there of CPU versus
- <!--$v=1082600-->what you're trying to avoid in the first place.
- <!--$v=1085120-->Finally, with regard to traffic control
- <!--$v=1087960-->techniques, there is the concept of how buffers work in an ATM
- <!--$v=1090341-->network, and right here I just want to
- <!--$v=1093823-->quickly reiterate the difference between
- <!--$v=1096892-->a frame switch and a cell switch with regard to buffers.
- <!--$v=1100144-->Essentially, in a frame switch, or a packet switch,
- <!--$v=1103579-->buffers can be distributed on a per-port
- <!--$v=1106969-->basis and you just need big buffers to combat
- <!--$v=1109763-->congestion and whatnot,
- <!--$v=1112786-->and head of line blocking and that sort of thing.
- <!--$v=1115259-->With an ATM switch, however, usually
- <!--$v=1118053-->what you'll see is that the buffers are centrally located.
- <!--$v=1120848-->And the reason is that now when
- <!--$v=1123367-->a cell actually traverses a cell switch, the
- <!--$v=1126573-->buffers are not just FIFO buffers, where it's first-in first-out for example,
- <!--$v=1130009-->but rather it's this concept of being able
- <!--$v=1132757-->to take traffic mixes of CBR, and UBR, and VBR, and whatnot,
- <!--$v=1136284-->and being able to accommodate one priority of traffic
- <!--$v=1139628-->over another. And having a centralized
- <!--$v=1142147-->cell buffer pool, if you will, is one way of
- <!--$v=1145078-->doing that. So that's an important
- <!--$v=1147964-->thing to note with regard to buffering.
- <!--$v=1151400-->So moving along now,
- <!--$v=1154194-->we'll talk now about some of the ATM
- <!--$v=1156575-->transport standards that are available today.
- <!--$v=1159553-->And first we'll start talking about
- <!--$v=1162072-->the - we'll talk a little bit about the ATM Forum and where that started.
- <!--$v=1165599-->Then we'll talk about the ATM UNI.
- <!--$v=1168347-->There is UNI 3.0 and 3.1
- <!--$v=1171325-->and 4.0. And we'll talk about the ILMI
- <!--$v=1174577-->address management, and then finally we'll talk about
- <!--$v=1177188-->the ATM NNI.
- <!--$v=1179890-->So first just to take note,
- <!--$v=1182272-->the ATM Forum was actually founded in the fall of 1991,
- <!--$v=1185799-->and Cisco, along with several others,
- <!--$v=1188318-->was a co-founding member, if you will. It's now a worldwide
- <!--$v=1191891-->forum with over 700 members.
- <!--$v=1194365-->And the main works that they have come out with
- <!--$v=1197067-->in the past several years are the UNI
- <!--$v=1200457-->and NNI signaling,
- <!--$v=1202976-->known as UNI 3.0, 3.1 etc., and PNNI,
- <!--$v=1206503-->and LANE, LAN Emulation, and MPOA,
- <!--$v=1209892-->Multi-Protocol Over ATM. Those are examples of what the ATM
- <!--$v=1213282-->Forum has worked on. And
- <!--$v=1215664-->finally, on this slide we also see a listing
- <!--$v=1218779-->for the ATM Forum Web page, which I would urge you
- <!--$v=1221756-->to become, you know, become active on that page if you're
- <!--$v=1224367-->interested in the trends with regard to ATM.
- <!--$v=1227390-->There is a monthly newsletter that comes
- <!--$v=1230825-->out - they call it <I>53 Bytes</I>.
- <!--$v=1233253-->Go figure. And so the <I>53 Bytes</I> newsletter
- <!--$v=1235726-->comes out monthly and I would urge
- <!--$v=1239253-->you if you want to keep up with the specifications that are in
- <!--$v=1242414-->ballot right now and whatnot, that
- <!--$v=1245162-->is a good one-stop
- <!--$v=1248414-->publication to get.
- <!--$v=1251071-->So when we talk about ATM transport standards, let's
- <!--$v=1254644-->first talk about the UNI. We talked about the UNI in a very generic
- <!--$v=1258079-->way before - you know, a user workstation talking to an
- <!--$v=1261194-->ATM cloud, if you will, or a non-ATM
- <!--$v=1263713-->device talking into an ATM switch.
- <!--$v=1266645-->So, at any rate, we have the UNI 3.0
- <!--$v=1269576-->and 3.1 as two ways that this is
- <!--$v=1272737-->defined. It should be noted that 3.0 and 3.1 are not
- <!--$v=1275943-->interoperable. So if you're using,
- <!--$v=1278508-->obviously, 3.0 in your network, you want to make sure it's 3.0
- <!--$v=1281257-->across the board. And the reason that they're not
- <!--$v=1284142-->interoperable, and you'll hear bits and pieces on this,
- <!--$v=1287349-->but basically UNI 3.0
- <!--$v=1290509-->uses a data link signaling protocol known as Q.SAAL,
- <!--$v=1293807-->whereas the 3.1 standard uses
- <!--$v=1296510-->the data link signaling protocol known as SSCOP,
- <!--$v=1299441-->or the Service-Specific Connection-Oriented Protocol.
- <!--$v=1302144-->So the main thing to take away there
- <!--$v=1305167-->is that they are not interoperable.
- <!--$v=1307640-->When we talk about the ATM
- <!--$v=1310847-->Forum and we talk about the UNI,
- <!--$v=1313458-->one of the things that we really need to appreciate
- <!--$v=1316801-->about what the ATM Forum has done for us is this concept of
- <!--$v=1319458-->automatic address management. So,
- <!--$v=1322527-->here for example is an example of an ATM address,
- <!--$v=1326054-->and this is known as an NSAP address,
- <!--$v=1328482-->which is 20 bytes long, or 40 digits.
- <!--$v=1331184-->And Network Service Access Point is kind of
- <!--$v=1334574-->an OSI term that
- <!--$v=1337185-->kind of applies to the way that the ATM
- <!--$v=1340437-->address is built. But at any rate,
- <!--$v=1343093-->there are several types of ATM addresses out there. There's
- <!--$v=1346117-->this NSAP address which is used for private
- <!--$v=1349552-->ATM clouds, if you will. And then there's a public
- <!--$v=1352987-->address space known as E.164
- <!--$v=1355781-->addresses, which are 15 digits long.
- <!--$v=1358621-->So between the public
- <!--$v=1362057-->and the private sector we need some kind of mechanism to
- <!--$v=1365584-->tie them together. So there is a standard known as E.161
- <!--$v=1368882-->that will basically translate between our
- <!--$v=1371905-->40-digit NSAP addresses, and our 15-digit
- <!--$v=1375111-->E.164
- <!--$v=1378455-->addresses. So at any rate
- <!--$v=1381478-->moving right along here,
- <!--$v=1384043-->let's just get an idea of what a real NSAP
- <!--$v=1386654-->address looks like. In your network, what would a real NSAP address look like?
- <!--$v=1389952-->Well, this is what it would basically look like, and I've broken it down into
- <!--$v=1393296-->this ATM prefix and then the MAC address.
- <!--$v=1396136-->And
- <!--$v=1398517-->then there's the yellow area there that's known as the selector byte.
- <!--$v=1401678-->And incidentally the selector byte is used
- <!--$v=1404151-->differently by different vendors, and if you're using a Cisco
- <!--$v=1407312-->product, one thing to note is that the selector byte can be a good
- <!--$v=1410335-->indicator of what sub-interface on the router
- <!--$v=1413725-->switch is being used. So that's a good troubleshooting mechanism
- <!--$v=1417297-->there. But at any rate, you have this ATM
- <!--$v=1419725-->prefix, and then the MAC. And the MAC address is just really the burned-in MAC
- <!--$v=1422840-->address of the end station. So
- <!--$v=1425588-->when we look at this NSAP address,
- <!--$v=1429115-->and we think about how does that
- <!--$v=1432230-->NSAP address get to the end station? Well, essentially -
- <!--$v=1435803-->I myself would not want to
- <!--$v=1438826-->go to each end station and type that in, and I
- <!--$v=1441437-->don't think any of you would either - but essentially
- <!--$v=1444185-->what the ATM Forum has done is within the UNI
- <!--$v=1447712-->spec they have penciled in this
- <!--$v=1451056-->concept of ILMI automatic address management,
- <!--$v=1454170-->Interim Local Management Interface
- <!--$v=1456690-->auto address management. And the way that
- <!--$v=1459163-->this works is that the end station is aware of its MAC
- <!--$v=1462599-->address, its burned-in MAC address, and
- <!--$v=1465301-->the end station will then talk to
- <!--$v=1468004-->the ATM switch over a well-known VPI/VCI of 0/16,
- <!--$v=1471576-->and it will say, "Hey, Mister
- <!--$v=1474279-->ATM switch, I know my MAC address. Can you please send me
- <!--$v=1477760-->my ATM prefix?" The switch will then, via
- <!--$v=1481058-->ILMI, send back the ATM prefix to the end station.
- <!--$v=1484264-->The end station will put together the ATM
- <!--$v=1487654-->prefix with its MAC address, and then tag on the
- <!--$v=1490952-->selector byte, and then there you have it. You have the ATM
- <!--$v=1493563-->address configured on the end station without
- <!--$v=1497090-->manual configuration. So, the ATM Forum has
- <!--$v=1500067-->really saved us from having to go around and type in a
- <!--$v=1503319-->40-digit address per ATM end station.
- <!--$v=1506205-->That's very key in this whole development.
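- Here is a small Python sketch of how the 20-byte NSAP address just described is assembled from the 13-byte prefix learned from the switch over ILMI, the 6-byte burned-in MAC address (the ESI), and the 1-byte selector. The 13 + 6 + 1 byte split is the standard NSAP layout; the example values themselves are made up.
- <PRE>
# NSAP ATM address assembly sketch: prefix (from ILMI) + ESI (MAC) + selector.
def build_nsap(prefix_hex, mac_hex, selector_hex="00"):
    prefix = bytes.fromhex(prefix_hex)
    esi = bytes.fromhex(mac_hex)
    sel = bytes.fromhex(selector_hex)
    assert len(prefix) == 13 and len(esi) == 6 and len(sel) == 1
    return (prefix + esi + sel).hex().upper()      # 20 bytes = 40 hex digits

addr = build_nsap("47" + "00" * 12,   # made-up 13-byte prefix learned over ILMI
                  "00000c123456",     # made-up burned-in MAC address (ESI)
                  "01")               # selector byte
print(len(addr), addr)                # 40 digits total
- </PRE>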
- <!--$v=1509411-->One thing
- <!--$v=1512251-->that is worth mentioning here is that in UNI 3.X,
- <!--$v=1515366-->or 3.0 and 3.1, there are two
- <!--$v=1518023-->basic types of connections. There's a point-to-point
- <!--$v=1521046-->connection, it can be uni-directional or bi-directional, and that's all straightforward.
- <!--$v=1523794-->However,
- <!--$v=1526176-->UNI 3.X becomes a little convoluted when we start looking at point-to-multipoint
- <!--$v=1529428-->connections.
- <!--$v=1531993-->Because ATM is a connection oriented network,
- <!--$v=1535108-->this can become kind of a
- <!--$v=1537994-->problem point in an ATM network.
- <!--$v=1540650-->And with UNI 3.X we have a root or
- <!--$v=1544177-->a main device over here,
- <!--$v=1546605-->which has then several leaves
- <!--$v=1549033-->coming off of it, maybe in the case of a multicast.
- <!--$v=1551460-->And in this case, in UNI 3.X
- <!--$v=1554300-->applications, when a leaf
- <!--$v=1556819-->wants to join, there's no concept of group addressing,
- <!--$v=1559934-->so the root has to incrementally add
- <!--$v=1562408-->the leaves that want to be part of like a multicast,
- <!--$v=1565843-->if you will. So this can be -
- <!--$v=1568729-->this can present somewhat of a problem
- <!--$v=1571248-->and it is addressed in UNI 4.0.
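- To make that difference concrete, here is a small Python sketch contrasting a UNI 3.x style root that must add each leaf itself with a UNI 4.0 style leaf-initiated join. The class and method names are invented for illustration; they are not signaling message names.
- <PRE>
# Point-to-multipoint sketch: UNI 3.x has the root add every leaf itself,
# while UNI 4.0 also allows a leaf to ask to join. Names are illustrative.
class PointToMultipointCall:
    def __init__(self, root):
        self.root = root
        self.leaves = set()

    def root_add_party(self, leaf):
        # UNI 3.x model: only the root can grow the tree, one leaf at a time
        self.leaves.add(leaf)

    def leaf_initiated_join(self, leaf):
        # UNI 4.0 addition: a leaf can request to be grafted onto the call
        self.leaves.add(leaf)

call = PointToMultipointCall("root-switch")
for leaf in ("leaf-A", "leaf-B"):
    call.root_add_party(leaf)          # UNI 3.x: the root does all the work
call.leaf_initiated_join("leaf-C")     # UNI 4.0: a leaf asks to join on its own
print(sorted(call.leaves))
- </PRE>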
- <!--$v=1574042-->Essentially, UNI 4.0
- <!--$v=1577157-->is basically
- <!--$v=1579768-->an arm of another document known
- <!--$v=1582974-->as signaling 4.0.
- <!--$v=1585356-->So we have signaling 4.0, which contains UNI 4.0
- <!--$v=1588883-->and also our friend traffic management 4.0.
- <!--$v=1591906-->And so traffic management 4.0 would be things like
- <!--$v=1594975-->the CBR, VBR,
- <!--$v=1598136-->and that sort of thing, and peak cell rate,
- <!--$v=1600609-->etc. - the traffic descriptors. UNI 4.0
- <!--$v=1603861-->is another flavor of UNI, if you will,
- <!--$v=1606930-->that basically has
- <!--$v=1609404-->support for multicast. So we now
- <!--$v=1612793-->have a concept of group addressing, and a concept
- <!--$v=1615358-->of leaf initiated joins.
- <!--$v=1617740-->Additionally with UNI 4.0,
- <!--$v=1620305-->Quality of Service definitions will be able to
- <!--$v=1622687-->be applied at the virtual channel level.
- <!--$v=1625252-->So, better Quality of Service definitions will also happen with UNI 4.0.
- <!--$v=1628779-->And UNI 4.0
- <!--$v=1631665-->is a standard today, it is being used.
- <!--$v=1634276-->And it should be mentioned that UNI 4.0
- <!--$v=1637345-->is part of what the ATM Forum calls the Anchorage Accord.
- <!--$v=1640688-->And what that means is that other protocols,
- <!--$v=1644078-->other UNIs that will follow UNI 4.0 will be
- <!--$v=1647330-->backward compatible with UNI 4.0.
- <!--$v=1649758-->So, we've talked about UNI now and all
- <!--$v=1652827-->of its flavors. Now I'd like to talk about the NNI,
- <!--$v=1655941-->and we basically have two flavors
- <!--$v=1658323-->of the NNI protocol that were developed by the ATM Forum.
- <!--$v=1661621-->We have IISP, or the
- <!--$v=1665194-->Interim Inter-switch Signaling Protocol.
- <!--$v=1668400-->And that's also known as
- <!--$v=1671057-->PNNI Phase 0. And then we have the dynamic
- <!--$v=1674538-->routing protocol known as PNNI, just regular PNNI
- <!--$v=1677699-->Phase 1, which is more dynamic and
- <!--$v=1680814-->takes advantage of Quality of Service. So,
- <!--$v=1683837-->prior to talking about that, just to draw
- <!--$v=1687272-->a comparison again with our packet-based networks,
- <!--$v=1689654-->traditionally in a router-based network, shown here,
- <!--$v=1692906-->you might use protocols such as RIP, and IGRP,
- <!--$v=1696158-->and OSPF, and EIGRP, in order to
- <!--$v=1699044-->determine the best path through the network.
- <!--$v=1702113-->And each of those different routing protocols for
- <!--$v=1705228-->TCP/IP has different algorithms in place
- <!--$v=1707930-->and different
- <!--$v=1710541-->types of techniques that they use in order to keep track
- <!--$v=1713610-->of IP routes and update IP routes and whatnot. So,
- <!--$v=1716496-->if we were going to make a comparison with ATM,
- <!--$v=1719473-->really this IISP
- <!--$v=1722954-->and PNNI are really just different ways
- <!--$v=1725702-->to do ATM-based routing, if you will.
- <!--$v=1729275-->So I just want to
- <!--$v=1732527-->make that distinction, just so that we know
- <!--$v=1734955-->that these are really like ATM routing protocols. That's the way that
- <!--$v=1737795-->I think of it. When we talk about
- <!--$v=1740543-->first - IISP, or Interim Inter-switch Signaling Protocol,
- <!--$v=1744116-->it gets a little convoluted. And
- <!--$v=1747551-->basically, the way that IISP works is
- <!--$v=1750666-->you define static routes in the ATM switches, and you can have multiple
- <!--$v=1753689-->
- <!--$v=1756071-->routes, going between switches,
- <!--$v=1758453-->from your source A to your destination B.
- <!--$v=1761339-->And what then happens is
- <!--$v=1763812-->it is dynamic in that when the call is actually set up,
- <!--$v=1767293-->even though the connection
- <!--$v=1770408-->between the switches is an NNI connection,
- <!--$v=1773065-->they'll use this form of UNI 3.0 and 3.1
- <!--$v=1776409-->signaling in order to bring up a call based on the
- <!--$v=1779936-->static routes that were put in from the ATM perspective.
- <!--$v=1782775-->So, really this was just a way for the
- <!--$v=1786165-->ATM Forum to say, "You know, we're not going to be ready with the real PNNI for a while,
- <!--$v=1789555-->so let's take advantage of what we do have, which is UNI,
- <!--$v=1792898-->to build the routes - to actually build the
- <!--$v=1796334-->connections dynamically
- <!--$v=1798807-->but having the routes there statically." Incidentally,
- <!--$v=1802334-->IISP is very effective for
- <!--$v=1804991-->smaller networks and whatnot, and
- <!--$v=1807464-->also for networks that aren't yet ready to
- <!--$v=1810442-->take advantage of Quality of Service,
- <!--$v=1813465-->because IISP offers no Quality of Service.
- <!--$v=1816809-->With PNNI,
- <!--$v=1819374-->PNNI Phase 1, which is where we are today,
- <!--$v=1822901-->essentially PNNI is a routing protocol
- <!--$v=1826382-->in that it will find you the best route through the
- <!--$v=1829039-->network, but not only will it find you the best route
- <!--$v=1831787-->or, you know, be there for reachability
- <!--$v=1834214-->but it will also work as a signaling protocol.
- <!--$v=1836688-->And when I say it works as a signaling protocol what I mean is
- <!--$v=1840123-->that it can also gauge and route
- <!--$v=1843055-->cells through the network based on Quality of Service
- <!--$v=1846444-->definitions as well. So,
- <!--$v=1849559-->when we look at PNNI as a routing protocol
- <!--$v=1852903-->it looks very much like
- <!--$v=1856292-->OSPF. There are a couple of elements
- <!--$v=1858858-->of PNNI that are analogous to OSPF.
- <!--$v=1861926-->Starting with a PG, or a peer group -
- <!--$v=1864537-->a peer group is a set of switches that
- <!--$v=1867469-->interrelate in the same way that maybe an OSPF area
- <!--$v=1870171-->might relate in OSPF terms.
- <!--$v=1872645-->And then there's a concept of a peer group leader, or a PGL,
- <!--$v=1876126-->which is in charge of this area
- <!--$v=1879424-->talking to everyone else's area.
- <!--$v=1881806-->So the peer group leader could be
- <!--$v=1884508-->analogous to the designated router, if you will, within OSPF.
- <!--$v=1887440-->Or the
- <!--$v=1890188-->area border router, or the ABR, in OSPF.
- <!--$v=1893074-->So the other thing that
- <!--$v=1896051-->makes PNNI look very much like OSPF is just the
- <!--$v=1899303-->fact that it works with a link state database.
- <!--$v=1902052-->So for example, it's a link state routing protocol in that
- <!--$v=1905624-->each switch will keep a copy of
- <!--$v=1908648-->all the other switches' databases
- <!--$v=1911671-->within its own database such that it always
- <!--$v=1914373-->has a full map of the network, if you will.
- <!--$v=1917396-->Rather than hop by hop, which would be the case with RIP or
- <!--$v=1920190-->IGRP for example. So this is how PNNI -
- <!--$v=1923672-->this is, you know, a very rough
- <!--$v=1926374-->way that PNNI looks
- <!--$v=1928848-->and it can be very hierarchical, so it can be very
- <!--$v=1931825-->powerful. You can get something like a hundred different hierarchies
- <!--$v=1935031-->within PNNI, and the reason is
- <!--$v=1937505-->that the NSAP address is so long that you can
- <!--$v=1940299-->accommodate several different hierarchies within there.
- <!--$v=1943551-->So we've now talked about PNNI as a routing
- <!--$v=1946712-->protocol and how it really looks and smells like OSPF.
- <!--$v=1949185-->Now as a signaling protocol,
- <!--$v=1952162-->where Quality of Service is brought into the equation,
- <!--$v=1955277-->what we have is this concept
- <!--$v=1958529-->of call admission control, which I've mentioned
- <!--$v=1961873-->before, whereby if a connection
- <!--$v=1964713-->is made from the source to the destination
- <!--$v=1967919-->during the connection or the call setup phase
- <!--$v=1970530-->this call admission control will
- <!--$v=1972958-->happen, whereby the different switches in the network
- <!--$v=1975843-->will negotiate among themselves to make sure that they can not
- <!--$v=1979370-->only provide the route to the destination,
- <!--$v=1982119-->but that they can provide the Quality of Service that's also being
- <!--$v=1985279-->asked for by the source. So let's say we
- <!--$v=1988211-->reach a point in the B portion of this network,
- <!--$v=1991738-->B.3, and all of a sudden that switch says,
- <!--$v=1994303-->"I do not have enough buffers," or, "I
- <!--$v=1996685-->do not have enough port bandwidth to be able to handle this connection in this way."
- <!--$v=2000258-->Then what will happen is this process known as
- <!--$v=2003235-->crank-back. And what crank-back does is it will
- <!--$v=2005937-->crank back the connection that was made back to the
- <!--$v=2009327-->ingress point into that peer group B and it'll
- <!--$v=2012350-->reroute the call through
- <!--$v=2015740-->an alternate route. It will reconnect the call through an alternate route
- <!--$v=2019221-->that can accommodate the Quality of Service parameters being
- <!--$v=2022107-->asked for. So this concept of crank-back and rerouting
- <!--$v=2025313-->is very important, and this all happens during the
- <!--$v=2028565-->call setup portion of the call.
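- Here is a minimal Python sketch of the CAC-with-crank-back idea just described: try to extend the call hop by hop, and if a switch along the way cannot meet the requested bandwidth, crank back to the entry point and try an alternate route. The data structures, switch names, and numbers are invented for illustration, not PNNI message formats.
- <PRE>
# CAC + crank-back sketch. Each route is a list of switches with the spare
# bandwidth (Mbps) they can commit; values and names are illustrative only.
def try_route(route, requested_mbps):
    for hop, (switch, spare_mbps) in enumerate(route):
        if spare_mbps < requested_mbps:
            return hop                    # CAC fails at this hop: crank back
    return None                           # every hop accepted the call

def setup_call(routes, requested_mbps):
    for route in routes:                  # routes ordered by preference
        failed_hop = try_route(route, requested_mbps)
        if failed_hop is None:
            return [switch for switch, _ in route]
        # crank-back: release the partial setup and try the next route
    return None                           # no route can meet the request

routes = [
    [("A.1", 100), ("B.3", 20), ("B.1", 100)],   # B.3 can't give us 45 Mbps
    [("A.1", 100), ("B.2", 80), ("B.1", 100)],   # alternate route through B.2
]
print(setup_call(routes, requested_mbps=45))     # ['A.1', 'B.2', 'B.1']
- </PRE>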
- <!--$v=2031176-->So,
- <!--$v=2033649-->that was PNNI in a nutshell. And now what I'd like
- <!--$v=2036947-->to do is talk about the crux of some of the ways that we can
- <!--$v=2039650-->keep our legacy local area networks
- <!--$v=2042581-->in place, or our legacy campus networks in place
- <!--$v=2045055-->while still utilizing some of the powers of ATM
- <!--$v=2048261-->within the network. So,
- <!--$v=2050643-->we'll first talk about some of the challenges
- <!--$v=2053529-->with, you know, maintaining what you have and integrating
- <!--$v=2056048-->ATM into a solution, and then we'll
- <!--$v=2058888-->talk about some of the standards that are more
- <!--$v=2061636-->prevalent and more
- <!--$v=2064339-->noteworthy, such as LANE 1.0, LANE 2.0,
- <!--$v=2067224-->and then finally MPOA, or multi-protocol over ATM.
- <!--$v=2070797-->So first just to touch briefly on the
- <!--$v=2074233-->challenges that are involved.
- <!--$v=2076614-->Basically there are three challenges that I see.
- <!--$v=2079638-->The first one is there is no real API or application programming
- <!--$v=2083210-->interface that is made to accommodate
- <!--$v=2086188-->ATM traffic just yet. So there's no real,
- <!--$v=2089715-->you know, application program interface that is going to accommodate
- <!--$v=2093150-->ATM addressing, or is going to accommodate
- <!--$v=2095898-->Quality of Service and that sort of thing.
- <!--$v=2098280-->So that's the first kind of fallback, if you will, or the first hurdle
- <!--$v=2101807-->that we're going to see. The second one is that because
- <!--$v=2105014-->ATM is an NBMA, or a Non-Broadcast Multi-
- <!--$v=2108449-->Access network, and we're mainly working
- <!--$v=2111518-->in campus environments with chatty protocols such as IP and
- <!--$v=2114907-->IPX there has to be some kind of
- <!--$v=2117289-->mechanism in an ATM network to
- <!--$v=2119809-->be able to handle broadcasts and multicasts
- <!--$v=2122465-->very efficiently. So broadcast
- <!--$v=2124893-->handling is another hurdle that we need to overcome
- <!--$v=2128145-->when we start introducing ATM into our traditional campus networks.
- <!--$v=2131306-->And then finally a picture to kind of illustrate
- <!--$v=2134695-->that there are several layers of addressing
- <!--$v=2138039-->now at work within the network itself. So we have
- <!--$v=2141474-->our regular burned-in MAC addresses at some
- <!--$v=2143994-->point, and then we have our network addresses on our end stations, like IP,
- <!--$v=2147566-->or IPX internal network numbers, and then we have our ATM
- <!--$v=2150956-->addresses. So, we now need yet another layer of
- <!--$v=2154117-->address resolution, if you will. So these multiple
- <!--$v=2157552-->layers of addressing also need to be
- <!--$v=2161033-->addressed. And the way that these
- <!--$v=2163919-->problems are addressed is through several
- <!--$v=2166392-->standards. The IETF came up with two
- <!--$v=2169141-->standards - oh, more than two but
- <!--$v=2171614-->the two that are relevant to this talk are
- <!--$v=2174591-->RFC1483 and RFC1577.
- <!--$v=2177889-->RFC1483
- <!--$v=2180958-->is basically a
- <!--$v=2184439-->way to set up static routes within an ATM network
- <!--$v=2187325-->and it's used very widely today, actually,
- <!--$v=2190073-->in smaller routed networks, for
- <!--$v=2192455-->example, but we're not going to touch too much on that.
- <!--$v=2195524-->Secondly, there is
- <!--$v=2198089-->RFC1577, which is classical IP and ARP
- <!--$v=2201525-->over ATM. And this - classical IP and ARP over ATM,
- <!--$v=2204868-->or 1577, is very
- <!--$v=2208166-->useful in IP-only networks that are not
- <!--$v=2211648-->broadcast intensive and, I would say,
- <!--$v=2215175-->that don't need real high
- <!--$v=2218243-->network availability because, for example,
- <!--$v=2220763-->if a component of RFC1577 goes away like
- <!--$v=2224336-->something called an ARP server that does this IP
- <!--$v=2227267-->address to ATM address resolution,
- <!--$v=2229649-->if that goes away on the network there's no provision
- <!--$v=2232626-->for a backup, for example. So I've
- <!--$v=2235100-->decided to focus mainly on LANE, LAN Emulation, and
- <!--$v=2238352-->MPOA, or Multi-Protocol Over ATM,
- <!--$v=2240871-->in this talk today. So
- <!--$v=2243848-->starting out with LANE 1.0 which has been out there
- <!--$v=2246872-->for quite some time - there are a lot of proven networks using LANE today.
- <!--$v=2249986-->Essentially, what happens with LANE 1.0
- <!--$v=2253193-->is that you can have your legacy
- <!--$v=2255666-->network - maybe you'll have a, you know, a switch
- <!--$v=2258369-->with a bunch of Ethernet clients on it, like a Catalyst
- <!--$v=2261392-->5500 for example, and then you'll have one card
- <!--$v=2264140-->within that switch that is known as a LANE
- <!--$v=2266980-->card, and it may have an OC-3 pipe into an ATM switch
- <!--$v=2270507-->- so in other words,
- <!--$v=2273164-->the Ethernet
- <!--$v=2276233-->clients don't even know that ATM is running out there. So it's a really good way
- <!--$v=2279760-->to preserve your current infrastructure for example.
- <!--$v=2282737-->So when we talk about LANE 1.0
- <!--$v=2285577-->we have to make sure that there are LAN switches that have an ATM
- <!--$v=2288783-->adapter, if you will, that talk LANE, that
- <!--$v=2291394-->can do the translation for the Ethernet clients. Or if you
- <!--$v=2294097-->have a server that you want to speak LANE you would have to make
- <!--$v=2297349-->sure that the NIC card in the server supported LANE directly.
- <!--$v=2300647-->And, again, if you had a router that you wanted
- <!--$v=2303166-->to support LANE it would also have an ATM card
- <!--$v=2305548-->that would operate the LANE
- <!--$v=2308892-->protocol. So,
- <!--$v=2311319-->how does LANE actually work? Well, there's several
- <!--$v=2314663-->pieces of terminology, believe it or not, within LANE
- <!--$v=2317915-->that need to be understood first.
- <!--$v=2320389-->The first is this concept of a LAN Emulation
- <!--$v=2323091-->client. And a LAN Emulation client is
- <!--$v=2325565-->just any ATM-attached
- <!--$v=2329046-->device in the LANE cloud that requests
- <!--$v=2331977-->services throughout the network. And when we talk about the
- <!--$v=2334955-->services, that brings up the next
- <!--$v=2337611-->few pieces of LANE terminology.
- <!--$v=2340085-->The services are the
- <!--$v=2342558-->LECS, or the LAN Emulation Configuration Server.
- <!--$v=2345490-->And the LAN Emulation configuration server
- <!--$v=2348421-->is just a way for clients to be initialized
- <!--$v=2350895-->as they join a LANE network. And then we have
- <!--$v=2354239-->the LAN Emulation server. And the LAN Emulation
- <!--$v=2357674-->server's main job is to do the MAC
- <!--$v=2361109-->address to ATM address resolution.
- <!--$v=2364361-->And then finally we have the Broadcast and Unknown
- <!--$v=2367797-->Server, or the BUS - and usually the LES/BUS work together.
- <!--$v=2370637-->And the broadcast and unknown server is,
- <!--$v=2373980-->you guessed it, for broadcast
- <!--$v=2376729-->packets, and also for packets which
- <!--$v=2379798-->have an unknown destination. So that's the
- <!--$v=2382683-->broadcast and unknown server. Incidentally, when I was
- <!--$v=2386073-->talking about the LES the -
- <!--$v=2388684-->basically the job of the LES is to translate
- <!--$v=2391570-->MAC address to ATM address. This is what makes LANE
- <!--$v=2394593-->usable for multiple protocols
- <!--$v=2396975-->is because now we're not doing IP address to
- <!--$v=2399356-->ATM address resolution, but rather we're doing MAC
- <!--$v=2402792-->address to ATM address resolution. So,
- <!--$v=2405448-->just wanted to kind of clarify that. The best
- <!--$v=2408151-->way to talk about LANE is really to have an example.
- <!--$v=2410762-->And essentially what happens in a LANE
- <!--$v=2414106-->network is there are several virtual
- <!--$v=2416808-->channels that are created - or virtual channel connections that are
- <!--$v=2419602-->created, and this is all done automatically over SVCs.
- <!--$v=2423038-->SVCs, or Switched Virtual Circuits, are a
- <!--$v=2426015-->prerequisite for this technology, and essentially
- <!--$v=2429267-->what happens is, if a
- <!--$v=2432840-->client wants to establish a connection with another client what that client
- <!--$v=2436321-->will do is first it will talk to its LECS, or the
- <!--$v=2439115-->LAN Emulation Configuration Server, and that LAN Emulation configuration
- <!--$v=2442321-->server will say, "You,
- <!--$v=2444795-->Mr. End Station, need to be in this
- <!--$v=2447177-->emulated LAN, or ELAN, and this is
- <!--$v=2450337-->the address of your LAN Emulation server that you'll be working with today."
- <!--$v=2453269-->And then the
- <!--$v=2456796-->LAN Emulation client says, "Thank you very much," and it makes a
- <!--$v=2459269-->connection with the LAN Emulation server, and then as the LAN
- <!--$v=2462796-->emulation server is
- <!--$v=2465316-->doing the
- <!--$v=2468888-->MAC address to ATM address resolution, the connection
- <!--$v=2472003-->is also made to the broadcast and unknown server
- <!--$v=2474522-->to start getting packets forwarded out to
- <!--$v=2477179-->the destination until the resolution is made.
- <!--$v=2479744-->Once the resolution is made
- <!--$v=2482172-->we get to our end goal in LANE, which is basically
- <!--$v=2485195-->what's called a data direct VCC, or a
- <!--$v=2488172-->data direct Virtual Channel Connection, between
- <!--$v=2491104-->the source LEC and the destination LEC.
- <!--$v=2494356-->So essentially, all these
- <!--$v=2497104-->steps have to happen and there are all these different types of VCCs
- <!--$v=2499715-->that have to be available in order for just one client
- <!--$v=2503288-->to talk to another client. But within
- <!--$v=2505807-->ATM this is an important
- <!--$v=2508372-->specification when we start to look at preserving our current
- <!--$v=2511808-->network and our current Ethernet LANs, if you will.
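- Here is a minimal Python sketch of the LANE 1.0 flow just walked through: a client asks the LECS which ELAN and LES to use, asks the LES to resolve a MAC address into an ATM address (falling back to the BUS for unknowns in the meantime), and then opens a Data Direct VCC. The classes, tables, and return values are invented placeholders, not the actual LE_ARP message formats.
- <PRE>
# LANE 1.0 join/resolution sketch. All tables and names are illustrative.
class LECS:
    def configure(self, client_mac):
        return {"elan": "elan-engineering", "les_atm_addr": "LES-ATM-ADDR"}

class LES:
    def __init__(self, arp_table):
        self.arp_table = arp_table            # MAC address -> ATM address
    def le_arp(self, dest_mac):
        return self.arp_table.get(dest_mac)   # None means "unknown, use the BUS"

def connect(client_mac, dest_mac, lecs, les):
    lecs.configure(client_mac)                # step 1: learn our ELAN and LES
    atm_addr = les.le_arp(dest_mac)           # step 2: MAC -> ATM resolution
    if atm_addr is None:
        return ("via BUS", dest_mac)          # flood through the BUS meanwhile
    return ("data direct VCC", atm_addr)      # step 3: client-to-client shortcut

les = LES({"00:00:0c:aa:bb:cc": "DEST-ATM-ADDR"})
print(connect("00:00:0c:11:22:33", "00:00:0c:aa:bb:cc", LECS(), les))
- </PRE>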
- <!--$v=2515380-->So LANE 1.0 does have some
- <!--$v=2518312-->deficiencies, if you will, that are
- <!--$v=2521702-->addressed in what was just ratified as LANE 2.0.
- <!--$v=2524725-->And LANE 2.0 is made up of two components,
- <!--$v=2527656-->one of which is done now.
- <!--$v=2530084-->It's made up of the LUNI spec
- <!--$v=2532466-->and the LNNI spec. The
- <!--$v=2535443-->LUNI spec, or LAN Emulation User-to-Network Interface spec, has
- <!--$v=2538649-->been completed. It was completed in May of last year, right around the MPOA
- <!--$v=2542222-->spec, because this LUNI spec
- <!--$v=2545749-->is a prerequisite for MPOA.
- <!--$v=2548452-->And essentially
- <!--$v=2551566-->LNNI is still being worked out. But essentially what LANE 2.0 is giving
- <!--$v=2554956-->us is first of all LANE 2.0 is kind of
- <!--$v=2557521-->the backbone of MPOA, or one of the components of MPOA
- <!--$v=2560040-->I should say, not really the backbone. But in addition
- <!--$v=2562834-->what LANE 2.0 will give us is
- <!--$v=2565995-->this concept of better efficiency of virtual channels.
- <!--$v=2569568-->So whereas a couple of slides back we saw all these different channels had to be built -
- <!--$v=2572591-->we had all the spaghetti of
- <!--$v=2575385-->VCCs in the middle of the cloud while this data
- <!--$v=2578179-->direct VCC was trying to be made - the LANE
- <!--$v=2581111-->2.0 spec will make more efficient use of these virtual channels.
- <!--$v=2584134-->In addition,
- <!--$v=2586516-->the LANE 1.0 spec has no
- <!--$v=2589218-->component right now for Quality of Service.
- <!--$v=2591646-->So sometimes there's kind of a misconception in an ATM
- <!--$v=2594165-->network when someone chooses LANE over Gigabit Ethernet,
- <!--$v=2597692-->they say that they're doing it because they get Quality of Service.
- <!--$v=2600440-->Well, in a LANE 1.0 environment you're not getting Quality of Service.
- <!--$v=2603876-->But the idea is that in LANE 2.0 there will be
- <!--$v=2606441-->Quality of Service, so if you build it today you can think that
- <!--$v=2609189-->Quality of Service will be there tomorrow. And then there's a
- <!--$v=2612670-->concept of a Selective Multicast Server, or an
- <!--$v=2615281-->SMS, that is also another component of LANE 2.0,
- <!--$v=2618808-->whereby the dispersion of
- <!--$v=2622106-->multicast will be done in a more orderly format rather than just having
- <!--$v=2625267-->the BUS, or the Broadcast and Unknown Server, do all the
- <!--$v=2628519-->work. And then finally
- <!--$v=2631130-->there's this concept of server redundancy.
- <!--$v=2633695-->And essentially, if you think about our ATM
- <!--$v=2637130-->example, if our LECS, our
- <!--$v=2640474-->LAN Emulation Configuration Server,
- <!--$v=2643726-->went down, no other clients could join their
- <!--$v=2647161-->specified ELAN, because the
- <!--$v=2650368-->initialization server is gone. And
- <!--$v=2653712-->likewise, if the LES/BUS goes down, no two
- <!--$v=2656551-->stations would be able to talk to each other, because they can
- <!--$v=2659391-->no longer do the MAC address to ATM address resolution, and
- <!--$v=2662873-->broadcast and unknown-destination cells can no longer be
- <!--$v=2665850-->transferred across the network. So in other words, there is no
- <!--$v=2669148-->server redundancy built into LANE 1.0.
- <!--$v=2671575-->A quick commercial break for Cisco right now:
- <!--$v=2674232-->with LANE 1.0 we actually do offer a
- <!--$v=2677209-->protocol called SSRP, or Simple Server
- <!--$v=2680462-->Redundancy Protocol, that does provide
- <!--$v=2683256-->a backup mechanism, if you will, if the LECS or
- <!--$v=2685912-->the LES/BUS should fail. So Cisco, as a
- <!--$v=2688981-->semi-proprietary solution - although we do
- <!--$v=2691913-->interoperate with other vendors with SSRP -
- <!--$v=2695302-->does provide a mechanism for
- <!--$v=2698784-->redundancy in a LANE environment today. In addition,
- <!--$v=2701715-->the LANE 2.0 spec is also providing for server
- <!--$v=2705288-->redundancy and, incidentally,
- <!--$v=2707945-->the ATM Forum server redundancy
- <!--$v=2710739-->protocol looks very much like Cisco's SSRP. I'm not sure
- <!--$v=2714266-->why - it certainly couldn't be because
- <!--$v=2717289-->the chairman of the committee
- <!--$v=2719991-->works for Cisco, no, no.
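- Whichever flavor of server redundancy is in play, the core idea is a ranked
- list of backup servers that clients fall through in order. Here is a minimal
- sketch of that fallback logic; the names are invented and it says nothing about
- the actual SSRP or ATM Forum message formats.
- <PRE>
- # Rough sketch of the ranked-backup idea behind SSRP-style LECS
- # redundancy: a client walks a priority-ordered list of server
- # addresses and uses the first one that answers. Invented names;
- # not Cisco's actual implementation.
- 
- def connect_to_lecs(lecs_addresses, try_connect):
-     """lecs_addresses: list of ATM addresses, highest priority first.
-     try_connect: callable that returns a connection or raises on failure."""
-     for addr in lecs_addresses:
-         try:
-             return try_connect(addr)      # first reachable LECS wins
-         except ConnectionError:
-             continue                      # fall back to the next address
-     raise RuntimeError("no LECS reachable; clients cannot join their ELAN")
- </PRE>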
- <!--$v=2722511-->At any rate, moving right along, we said
- <!--$v=2725167-->before that LANE 2.0 - or the LUNI portion of LANE -
- <!--$v=2728511-->is used in Multi-Protocol over ATM.
- <!--$v=2731488-->Well, what is MPOA?
- <!--$v=2733870-->Essentially, what MPOA addresses is
- <!--$v=2736298-->this: if we were to look at a LANE environment,
- <!--$v=2739733-->in order to go from one ELAN or one virtual
- <!--$v=2743031-->LAN to another virtual LAN, those packets or
- <!--$v=2746192-->cells would still have to traverse the router
- <!--$v=2748757-->in order to make that Layer 3 decision
- <!--$v=2751185-->at the IP layer - the MAC
- <!--$v=2754437-->rewrite, the IP checksum recalculation, and the TTL decrement
- <!--$v=2756864-->would still have to happen at the router. So the
- <!--$v=2760071-->router obviously in this situation could present
- <!--$v=2762956-->somewhat of a bottleneck.
- <!--$v=2765384-->And so things like MPOA were developed such that
- <!--$v=2768041-->now we have this concept of cut-through routing.
- <!--$v=2771201-->And with MPOA
- <!--$v=2773904-->the way it works is this: there is a
- <!--$v=2776698-->Multi-Protocol over ATM Client,
- <!--$v=2780225-->known as the MPC, and a Multi-Protocol over
- <!--$v=2783798-->ATM Server, known as the MPS. And they work in tandem
- <!--$v=2786958-->in order to provide the cut-through routing.
- <!--$v=2790210-->So let's go over a little bit about what these components are and then we'll
- <!--$v=2793508-->go through a quick example. The
- <!--$v=2796302-->MPS, or the MPOA server, is actually -
- <!--$v=2799829-->don't tell anyone, but it's really just a router. Basically the
- <!--$v=2802578-->MPOA server, which can just be a regular Cisco
- <!--$v=2805372-->7500 router, if you will, will run
- <!--$v=2808807-->regular IP routing protocols such as IGRP or
- <!--$v=2811968-->OSPF or whatnot, and what it
- <!--$v=2814487-->does then is it will
- <!--$v=2816869-->basically provide the MPOA client with
- <!--$v=2820442-->viable routes through the network for destinations.
- <!--$v=2823785-->So the route server can just be there to provide
- <!--$v=2826580-->forwarding paths to the MPOA client,
- <!--$v=2829557-->or it can be there as what's called the default forwarder.
- <!--$v=2832717-->And essentially the reason you would want a default forwarder
- <!--$v=2836061-->is that, for example, if you had a short-lived flow in the network,
- <!--$v=2839634-->such as a ping or something to that
- <!--$v=2842886-->effect, or, you know, a DNS query, then in
- <!--$v=2846184-->that case you would want a
- <!--$v=2848978-->default forwarder such as a router in the picture in order to
- <!--$v=2851910-->quickly just take care of the
- <!--$v=2854475-->routing decision and the routing of the packet, without having to go through the
- <!--$v=2857590-->whole process of downloading a route from the
- <!--$v=2859971-->MPOA server to the MPOA client and so forth.
- <!--$v=2862995-->So that's the job of the default forwarder, and usually the
- <!--$v=2866155-->default forwarder will be disguised as a router.
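- One way to picture the division of labor is as a per-destination packet count
- in the MPOA client: short-lived flows just take the routed path through the
- default forwarder, and only when a flow crosses some threshold does the client
- bother asking the MPS to resolve a shortcut. The threshold value and all of the
- names below are illustrative assumptions on my part, not numbers from the MPOA
- spec.
- <PRE>
- # Illustrative sketch of MPOA-style flow detection in an MPC: send
- # short-lived flows through the default forwarder (the router) and
- # request a shortcut from the MPS only for busier flows.
- # FLOW_THRESHOLD and all names here are illustrative assumptions.
- 
- FLOW_THRESHOLD = 10   # packets to one destination before asking the MPS
- 
- class MpcForwarder:
-     def __init__(self, default_forwarder, mps):
-         self.default_forwarder = default_forwarder   # the router
-         self.mps = mps                               # the MPOA/route server
-         self.packet_counts = {}                      # dst IP -> packets seen
-         self.shortcuts = {}                          # dst IP -> shortcut VCC
- 
-     def forward(self, dst_ip, packet):
-         shortcut = self.shortcuts.get(dst_ip)
-         if shortcut is not None:
-             shortcut.send(packet)                    # cut-through path
-             return
-         count = self.packet_counts.get(dst_ip, 0) + 1
-         self.packet_counts[dst_ip] = count
-         if count >= FLOW_THRESHOLD:
-             # Busy flow: ask the MPS to resolve a shortcut for next time.
-             self.mps.resolution_request(dst_ip, reply_to=self._on_resolved)
-         # Until a shortcut exists, a ping or DNS query just takes the
-         # ordinary routed path through the default forwarder.
-         self.default_forwarder.send(packet)
- 
-     def _on_resolved(self, dst_ip, shortcut_vcc):
-         self.shortcuts[dst_ip] = shortcut_vcc
- </PRE>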
- <!--$v=2868629-->And then we have the MPOA client.
- <!--$v=2871377-->And usually the MPOA client would be
- <!--$v=2873804-->something like the 5500 LANE card
- <!--$v=2876324-->running the MPOA client
- <!--$v=2878935-->software. And so this edge device is
- <!--$v=2882278-->there such that when the Ethernet clients
- <!--$v=2885805-->want to get from source to destination -
- <!--$v=2888737-->what would normally be a Layer 3 hop through a router -
- <!--$v=2891623-->the MPOA client now will request
- <!--$v=2894188-->the routes from the server, the MPS,
- <!--$v=2896799-->such that it can make that Layer 3 decision
- <!--$v=2899913-->on the spot without having to forward packets to the
- <!--$v=2902387-->router. And the same thing happens within the MPOA
- <!--$v=2905547-->client. There is still a MAC rewrite. There's still a TTL
- <!--$v=2909120-->decrement. There's still an IP checksum. So there's
- <!--$v=2911823-->still IP routing going on there, but it's happening at the edge -
- <!--$v=2914296-->in the Cisco solution we're going to do it in hardware, and
- <!--$v=2917823-->by the time this video is out perhaps that
- <!--$v=2920434-->Cisco solution will already be out as well. But at any rate,
- <!--$v=2923274-->the MPOA client functionality will be done
- <!--$v=2926297-->in hardware, so it will be done very quickly.
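- Just to picture what those per-packet operations amount to, here is a small
- generic sketch - plain IPv4 header arithmetic, not Cisco's hardware datapath -
- of the Layer 3 work the edge device performs on each shortcut packet: decrement
- the TTL, recompute the header checksum, and put a new MAC header on the front.
- <PRE>
- # Generic sketch of the Layer 3 work an MPOA client still performs
- # per packet on the shortcut path: TTL decrement, IP header checksum
- # recomputation, and MAC rewrite.
- 
- def ipv4_checksum(header: bytes) -> int:
-     """One's-complement sum over 16-bit words; checksum field already zeroed."""
-     total = 0
-     for i in range(0, len(header), 2):
-         total += (header[i] << 8) | header[i + 1]
-     while total > 0xFFFF:
-         total = (total & 0xFFFF) + (total >> 16)
-     return ~total & 0xFFFF
- 
- def rewrite_for_shortcut(ip_packet: bytearray, new_src_mac: bytes,
-                          new_dst_mac: bytes) -> bytes:
-     ttl = ip_packet[8]
-     if ttl <= 1:
-         raise ValueError("TTL expired; the packet must be dropped")
-     ip_packet[8] = ttl - 1                      # TTL decrement
-     ip_packet[10:12] = b"\x00\x00"              # clear the old header checksum
-     ihl = (ip_packet[0] & 0x0F) * 4             # IP header length in bytes
-     ip_packet[10:12] = ipv4_checksum(bytes(ip_packet[:ihl])).to_bytes(2, "big")
-     # MAC rewrite: new Ethernet header (dst MAC, src MAC, EtherType 0x0800).
-     return new_dst_mac + new_src_mac + b"\x08\x00" + bytes(ip_packet)
- </PRE>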
- <!--$v=2929732-->Finally, also seen as a viable
- <!--$v=2932756-->option for an MPOA client would be an enterprise server.
- <!--$v=2935824-->So, for example, if a NIC vendor decided
- <!--$v=2938481-->to make their server able to talk MPC,
- <!--$v=2941917-->then that server could also work as an
- <!--$v=2944756-->MPOA client and also be kind of a pseudo
- <!--$v=2948146-->router in the network. So those are some different components of
- <!--$v=2951352-->MPOA. And then
- <!--$v=2953917-->finally, just to look at a quick example of
- <!--$v=2956620-->what MPOA would look like: MPOA
- <!--$v=2959093-->uses this concept of query and response. And what query
- <!--$v=2962437-->and response means, basically, is that if there's
- <!--$v=2965185-->an end station off of an emulated LAN on a
- <!--$v=2968117-->switch, and it wants to go from subnet A
- <!--$v=2971415-->to subnet B in this example, then the client - the
- <!--$v=2974988-->MPOA client, shown as the switch at the bottom of the diagram,
- <!--$v=2978423-->where emulated LAN A is - would actually
- <!--$v=2981171-->query the route server, or the router if you
- <!--$v=2984424-->will, to find out what the Layer
- <!--$v=2987492-->3 information is before it could
- <!--$v=2989874-->traverse the ATM cloud.
- <!--$v=2992806-->So, in essence the LUNI
- <!--$v=2996104-->spec is used in order for these
- <!--$v=2998990-->devices to talk amongst themselves and build the
- <!--$v=3001692-->route tables. And then in addition,
- <!--$v=3004715-->NHRP, or the Next Hop Resolution Protocol, is also used
- <!--$v=3007280-->in order for these routes to be forwarded through the network
- <!--$v=3010349-->to the edge devices. And really the result
- <!--$v=3013739-->is that the packets no longer have to
- <!--$v=3017312-->traverse a router when they want to go
- <!--$v=3020518-->from one subnet to another; instead there's this
- <!--$v=3023220-->direct cut-through virtual channel
- <!--$v=3025602-->that is built in order to accommodate that Layer 3
- <!--$v=3028121-->traffic.
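- Pulling the pieces together, the query-and-response exchange ends up looking
- roughly like the sequence sketched below: the ingress MPOA client asks its MPS
- for the Layer 3 information, the MPS resolves it (using NHRP toward the far
- side of the cloud), and the reply lets the client open the cut-through VCC.
- The function and field names are illustrative assumptions, not the actual MPOA
- or NHRP message formats.
- <PRE>
- # Illustrative walk-through of the MPOA query/response sequence that
- # ends in a cut-through VCC. Names and structures are invented for
- # illustration; see the MPOA and NHRP specs for the real formats.
- 
- def establish_shortcut(mpc, mps, dst_ip):
-     # 1. The ingress MPOA client asks its route server (the MPS) how
-     #    to reach dst_ip at Layer 3.
-     request = {"type": "resolution-request", "dst": dst_ip}
- 
-     # 2. The MPS resolves the query, using NHRP toward the egress side
-     #    of the ATM cloud, and returns the ATM address of the egress
-     #    device plus the rewrite information.
-     reply = mps.resolve(request)          # e.g. {"atm": ..., "dst_mac": ...}
- 
-     # 3. The MPC signals a direct VCC to that ATM address; later packets
-     #    for dst_ip bypass the router entirely.
-     vcc = mpc.open_vcc(reply["atm"])
-     mpc.install_shortcut(dst_ip, vcc, next_hop_mac=reply["dst_mac"])
-     return vcc
- </PRE>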
- <!--$v=3031099-->So, to recap:
- <!--$v=3033526-->we've talked about all the ATM fundamentals. We've talked about
- <!--$v=3036916-->UNI. We've talked about NNI. We've talked about the ATM cell header. We've talked
- <!--$v=3040443-->about PVCs and SVCs and all the
- <!--$v=3043833-->ATM service categories, and traffic management,
- <!--$v=3046214-->and the different transport standards with regard to
- <!--$v=3049329-->UNI 3.X and 4.X, and
- <!--$v=3052032-->PNNI, and some campus internetworking
- <!--$v=3055559-->terms. So it's been a very well-rounded session, if
- <!--$v=3058857-->you will, I think. And
- <!--$v=3061330-->essentially here are some ATM
- <!--$v=3063758-->references that you can refer to -
- <!--$v=3066277-->different
- <!--$v=3068888-->places you can look for further information. And
- <!--$v=3072003-->on this slide I would say that the Indiana
- <!--$v=3075575-->University Web page that's listed here
- <!--$v=3078324-->has a lot of good Q&A type -
- <!--$v=3080797-->question and answer type
- <!--$v=3083500-->forums that they allow you to send questions in
- <!--$v=3086294-->and things like that. So that's actually a really good
- <!--$v=3088951-->Web page, as well as the ATM Forum
- <!--$v=3091332-->Web page as I mentioned before.
- <!--$v=3093714-->So that will wrap up
- <!--$v=3096142-->the talk. Thank you very much for your time and if there are any
- <!--$v=3099348-->questions stemming from this, my e-mail address is
- <!--$v=3102646--><B>seast@cisco.com.</B> Thank you very much.
- <!--$v=3105715-->
- </BODY>
- </HTML>