RIPE 89
31 October 2024
IPv6 Working Group
9 a.m.


CHRISTIAN SEITZ: Good morning and welcome to the IPv6 Working Group. This time we do have the early slot, in parallel to the Connect Working Group that is in the side room. I am Chris, one of the three Working Group co‑chairs. We have Ray in the room and Nico online. Please remember to turn your mobile phones off and adhere to the Code of Conduct in the session.

And please also remember this session is being recorded. Thanks to the RIPE NCC for taking the minutes of the RIPE 88 session. Ray sent them to the mailing list some months ago, and there have been some small corrections. Are there any further questions or comments on the minutes?

Then the minutes are approved. This is a hybrid meeting. If you are remote, you can send in questions via Meetecho, and you can also do that if you are on site, but you can also use the microphone queues. Please don't forget to state your name and affiliation before asking your questions. And please don't forget to rate the presentations, so we know what you liked and what you didn't like about them; you can do that by logging into your RIPE account.

We start with the first presentation, which is a video from Geoff Huston. Geoff will be available for questions and answers after the video, and Geoff is presenting 'Why is this IPv6 transition taking so long?'. Welcome, Geoff.

GEOFF HUSTON: I am staying home and doing some healing work, which has to happen, so my apologies for all that, but here we are.

In the 25 minutes, or now about 24, that I have got here, I am going to talk a little bit about IPv6, and the question I think I'd like to pose to everyone is: what is going on with this transition? You know, this is taking forever. And you kind of go: it's okay, isn't it? At APNIC we do constant measurement of the amount of v6 that we see out there. We use an ad‑based measurement system; we sample around 25 to 30 million users a day with online ads from across the entire Internet, and quite frankly, the numbers that we get from that are actually not bad. You know, this is the last ten years of doing this, every single day, and as you see, it's a classic kind of up‑and‑to‑the‑right curve. The blue line is the amount of folk that will get to an IPv6‑only resource. The red line is, when you have been given a dual stack choice, the ones that will actually get there in v6. Both lines are tracking up and to the right, so what's your problem?

Well, the problem is in the scale. It's taken ten years since we started this measurement, since 2014, and if you really think about v6, it's been close to 30 years, yet the deployment level is still sitting at around 36%. It's not exactly impressive.

So, what's going on? You know, it's been more than a decade now, since 2011, since IANA handed out its last block of v4 addresses, and about a decade since most of the RIRs managed to get down to their last /8, or whatever the final policies were. And to be perfectly frank, we have been in a state of v4 address exhaustion for the last ten years. If the whole idea of v6 was to actually give us a path out of this address exhaustion state, then something is going wrong, because ten years later, the v6 uptake rate is a little over one third of the Internet. What's happening with the other two thirds?

This was of course completely unexpected. And if we take that data that I just showed you and project it forward, and here I am going to use the awesome power of mathematics and a simple linear fit to the curve, a linear fit gives us another two decades before this kind of comes to an end. That's ridiculous. Another two decades of this dual stack transition half‑life. Seriously?
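
(A rough back‑of‑the‑envelope version of that linear fit: about 36% adoption after ten years of measurement is a rate of roughly 36% / 10 yr ≈ 3.6% per year, so the remaining (100 − 36)% ÷ 3.6% per year ≈ 18 years, which is indeed close to another two decades.)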

That's crazy. That's not the plan. Or at least it wasn't the original plan. We were meant to have this done years ago. What's gone wrong?

Here is what we thought was the plan. And it's a gorgeous plan. As we sort of contemplated the end of the v4 address pool, which is in red on this graph, as it came down, the pool size, the remaining amount of addresses, we thought that this industry would act prudently, would take into account future risk and would up its v6 deployment. We thought the Internet would keep on growing, so this was the case of doing a transplant on the fly, or keeping everything up in the air, but the idea was that v6 deployment would track a little faster than the size of the Internet. So, we never actually needed to get to the last v4 address. By the time that sort of we were close, v6 would be fully deployed and once you get v6 everywhere, who needs v4?

And so, this was the plan. It was a good plan. The assumptions behind it didn't seem to have worked out. The first assumption was that we could drive this entire transition while there were still ample v4 addresses that sort of kept the situation normal. Whoops, that hasn't been true! Transition would be driven by individual local decisions to deploy dual stack. That's still true. It's individual decisions; it's any part of the supply line, including the ISPs, that makes the decision whether they go dual stack or not, and when.

So, there is no one in control. There is just a whole bunch of individual decisions.

But the third and critical assumption was that everything would come to an end: we'd finish the dual stack and be able to turn off v4 before we got to the tricky point of completely running the v4 address pools totally dry.

We definitely strayed off the plan. We strayed off the plan a long time ago: back in 2011, IANA handed out its last address pool blocks, and the RIRs were then left with their own pools, which they then ran through over the coming years. At that time, the industry looked at v6 and said: no, no, no.

0.3% of users were actually using v6 at that time according to Google's stats, and interestingly, only two thirds of those were using native dual stack; the other bits and pieces were actually running over Teredo, 6to4 and various other relay mechanisms. So if exhaustion, or the prospect thereof, was the impetus for v6 deployment, something didn't happen.

And so by 2012, a long time ago now, a dozen years, we were confronted with a new transition plan, which was trying to drive v6 deployment at the same time as the available address pool in v4 was pretty low. In fact, very low. We were eking it out, /24 by /24, one way or another, under our last /8 policy. And this has not been a success.

And you kind of go: why? And the answer is that if we dial the clock back a little bit further, to around 2003, 2004, 2005, when we were given the prospect that this v4 address plan was going to run out, we kind of went down two paths at once, and in fact there was a third path as well. The first path was to get rid of classful addressing, because in actual fact it was the class Bs that were going to run out, so the idea was that by going to classless addressing, we would buy time. The other plan, which wasn't really endorsed by a lot of the IETF, lots of whinging, but nevertheless: NATs were already there, network address translators, splitting the network into client and server and having the clients share addresses with each other, share them over time, share by TCP and UDP port numbers, and share based on handing an address back to a pool and reassigning it. And the longer term plan, the third plan, was v6.

The whole idea was to buy some time to get v6 some deployment momentum, and then we wouldn't need the short‑term measures any more. We didn't need them any more. The problem is that NATs have been extremely good, and even classless addressing bought us not just a few years; classless addressing bought us about 15 years of time, an extraordinary amount of time, and then, coupled with NATs, here we are today, and we are still not feeling the pressure.

NATs are certainly a very subtle low friction response to address exhaustion. Because in essence, what's going on with NATs is clients don't have to change their behaviour, their software or anything if they're behind the NAT. This is the same client, the same software. It makes no difference. Interestingly, servers make no changes if the client that comes to them is from a NAT or not from a NAT. It's just an IP address and a port, it's just a connection. It makes no difference.

Each network independently can figure out whether to deploy NATs or not. You can cascade NATs in sequence, and it just kind of works, as we have seen for the last few years. It cuts out a whole bunch of other things: multi‑party rendezvous, end‑to‑end, peer‑to‑peer; but the mainstream Internet, the public Internet, wasn't interested in that. It was really interested in these two‑party client/server transactions, and for that, NATs just got on with the job and did it.
It's been doing it now for around about 30 odd years, since 1994, since the days of dial‑up modems; NATs have been part and parcel of the landscape. If you are an application designer looking at a model to use: if your application is not NAT capable, in other words doesn't work in an environment where NATs are on the path, the application isn't going to fly. For 30 odd years we have been deploying systems and services that cope with NATs, that accommodate the NAT structure, and actually are not reliant on end‑to‑end addressing. And so the real question is: if it's lasted for 30 years, and it has, how much longer is that going to last? When do we get the pressure to go: well, we have reached the end with NATs, it's time to really think about v6 again. Let's get serious.

So the kind of question is: how long do NATs go? How long do we have to keep this running? At that point, how big will the Internet be? We're already somewhere between 15 billion and 24 billion devices connected to the Internet. What's too many? Because the silicon industry is constantly pumping out more and more embedded things, one way or another. And quite frankly, the pool of devices that NATs have to stretch across just keeps on getting bigger and bigger, but we're not seeing acute levels of stress.

What's the cost of all this NAT deployment? At what point does it get forbidding? Who pays that cost? What are we trying to do?

And last but not least, I'd like to bring up the issue that after 30 odd years, this is not a short‑term expedient, this is not something we're doing while getting v6 ready, this is not something we're doing while waiting for this transition to come to its next logical phase. We're really in the law of unintended consequences. After 30 odd years, this environment that we're in, with partial deployment of v6 and a massive, truly massive reliance on network address translation in v4, is now the Internet. This is the reality we're living with.

You see, I think the issue is we took our eye off the ball with this v4 to v6 transition for a couple of reasons.

The first reason was that the short‑term expedients were so good that, across the nineties, once we put them in place, it was kind of: oh, we don't have to worry about v6 any more, we'll just go and press on with whatever other business we were doing.

And by the time things got a bit more serious around 2010 or so, the money had already moved. And if we follow the money, we find a very, very curious story.

You see, in the classical Internet, it was the network and the ISPs that were the centre of the Internet economy. Networks were expensive. Distance communication cost. And it was the job of the ISP, basically, to be the allocator of that resource, and we used price as a means of arbitrating access across the various demands. Distance dominated the costs of these networks, and the role of the transit network providers was paramount. We spent all of our time in all these network operator meetings, and even Address Policy meetings, talking about peering and transit, talking about how we effectively modify and cope with these enormous costs of distance carriage. And because the ISPs were the brokers of the rationing of this scarce resource of carriage, they tended to accumulate power and influence because of that unique role. In the classical Internet, the network was the centre of the Internet economy.

But that's kind of a world that did not factor in Moore's Law. For years, since the late fifties, every couple of years the silicon industry pulled another rabbit out of the hat. The number of processors on a wafer of silicon doubled, and the price of computing, the price of storage, came down per unit by the same amount. It halved every couple of years. And so, from scarcity, scarcity of computing, scarcity of all kinds of things, we moved into an entirely different world. This is a world of abundance.

Communications is not immune to this. It might cost much the same amount of money to drop a fiber optic cable from where we are, in my case Australia, in your case Prague, to say New York; the dollars per kilometre may not have changed, but the number of bits I can put into that kilometre of fiber has changed dramatically, and that's not because it's better glass, it's the same old glass. What we're actually doing is refining the way we impress signal onto that glass and detect it at the other end. Those digital signal processors: back in 2013, we were using a 40 nanometre process, so we were able to get approximately 8 terabits a second out of a single fiber strand; we were doing 100 gigabits per second per lambda. Within a few years, by increasing the number of gates on those digital signal processors, by going to 16 nanometre processes, we were able to work with coherent signalling and amplitude modulation and actually get more bits into each baud. So, we were able to double it: 8 terabits becomes 16. And within another three years, going to a 7 nanometre process, we were able to improve the tuning of that digital signal processing and get a 32 terabit signal. A couple of years ago we were looking at just under a terabit per second per lambda, 800 gig, and a total fiber capacity of around 64 terabits per second. We were using 5 nanometre DSPs. As the chips get better, we cram more bits into the same piece of fiber, and that's continuing.

Compute power, you probably know this as well as I do; this is a big graph from 1970 to today, and what it's showing is that by the year 2020, we were doing 50 billion gates per chip. We're hammering away at getting closer and closer to 1 trillion. Don't forget that scale is not linear, it's a log scale. Moore's Law has been truly prodigious. And the other thing that I think it's done is that the cost of storage has plummeted; again the same kind of graph, it's unit cost and price, and don't forget that graph is exponential, it's a log scale, so it really has plummeted.

And what do you do in a world where computing, storage and communications are abundant? Well, you get rid of the choke point, and the choke point is distance communication. You get rid of distance. Instead of going over there to get the service, why can't we replicate that service 1,000 times, ten thousand times, and bring an instance close to every bunch of consumers? And that's exactly what we have done.
By moving distance out of the network and bringing things closer, we're actually able to get much, much bigger networks; we are scaling larger and faster, because now we're serving content and service by distributing that load close to every single user. We have gone parallel, highly parallel. And with high capacity mobile edges and mobile platforms with 5G etc., this trend continues. We now serve the network not from servers deep inside the network some distance away; we replicate the services and serve from the edge. This scales like we have never been able to scale before.

Distance is also a speed problem. The closer you can bring the two communicating parties, the better you can drive the transport protocols, the more speed you can pull out of the network and make it work. And so, as the packet miles, or over in Europe the packet kilometres, have shrunk, we're actually able to build edge‑to‑edge networks, or at least client‑to‑server networks, because those are the two edges here, the server edge and the client edge, that are much, much faster, and we re‑engineer the application to meet that criterion of a much faster response. We now want things like TLS 1.3 to cut down the number of messages that get exchanged before we bring up an encrypted service connection. We are trying to find ways to make this network even faster, and certainly the shorter the distance, the faster things go.

And now we're actually able to buy silicon at every edge and silicon in every client. So now encryption is not a luxury; these days encryption is the norm. And as we move forward with this, with TLS 1.3 to actually seal up that server name indication, with QUIC setting up transport controls that actually encrypt everything, and things like Apple Private Relay and other MASQUE‑like approaches which even isolate who is asking and what they are asking for from each other, we are able to build networks where, realistically, the applications insulate themselves from the network. The network structure is just ‑‑ the network infrastructure is just untrustable.

And all of this is not more expensive, it's less expensive. We're working in an industry where there are massive economies of scale, and by using the advertising market, a lot of this stuff comes through without cost to the consumer. So what was the luxury service decades ago is now just a uniformly affordable mass market commodity service.

If what you wanted was a cheaper larger Internet, you have got it.

In all of this, the network is not the critical resource any more. It's the service and content, the money has moved up the protocol stack.

What does that mean?

Well, I'm an ISP, I'd like to spend some money to deploy v6. Thoroughly laudable effort. Go for it. Who is going to pay me? Oh, will you pay me more if you use v6? Not really, they want it cheap. Well, will the content folk pay you to put in v6? Well, no, they are busy building their own content distribution networks. They don't care. So, who is going to pay you?

Well, no one. And indeed, what's happened is: why are you doing this? Are IP addresses scarce? Gee, that's a good question. Are they scarce?
What would tell you there is scarcity? Well, in a conventional market, the signal of scarcity is price. If a good is scarce, it is expensive, because many more people want it than can have it. That's what scarcity is, and so you normally find some kind of price escalation. Here is the history of v4 address pricing from 2014 until today. That spike that you see in 2021, 2022 seems to be a spike around the lack of market information being available to market players across Covid; it doesn't seem to be an intrinsic scarcity issue. For the last twelve months, late 2023 and 2024, the actual price of v4 addressing has been relatively constant.

If price is constant, supply and demand are equilibrating. That is not scarcity. That is actually a metastable form of equilibrium. The market price is stable, the dynamics of address movement appear to be stable. Scarcity, no.

Why the "no"?

I am going to argue that we have shifted the entire architecture of the network. We have moved into a different network. CDNs have actually changed the entire issue of what a network service is and how you identify it. Because, as a public network, those CDNs work on names. TLS is the service differentiator and the SNI part is actually the switch, and the entire security and service selection is based around naming systems, around the DNS. And you could actually argue that the DNS is now the new routing protocol, because if you look at things like Akamai, they use the DNS to find the closest instance of that service to you. It's not address space, it's not using routing, it's simply using the DNS.
And so if you look inside CDNs, most of them run a routing table where the average AS path length is close to 1; there is no routing. It doesn't route. All of a sudden we have moved from an end‑to‑end peer network into this client/server asymmetric world, where we have also managed to take single platform servers, the server‑plus‑network model, and move to replicated servers that don't actually need a service network; they bolt directly into the access networks. Clients don't need to have a unique public address. They are inside NATs. They are only locally identified. Once you get out into the public Internet, you just need to distinguish traffic from each other. IP addresses are merely ephemeral transport tokens; they are not identities any more. Identity is the DNS. And so we have moved the entire Internet from address‑based networks into name‑based services, and so we have followed the money and made the shift in the technology: we're actually using name systems.

So, the change is quite fundamental, and it's not v4 to v6. The pressures of scaling, coupled with the opportunities that Moore's Law has given us, have actually managed to make a network that's a billion times bigger than it was in the eighties, a billion items, too. But the way we have done this is to change the core of the network architecture from addresses as distinguishing identification tokens to names.

And in that perspective, just simply changing one set of address tokens wasn't really going to give you the leverage that we have managed to get by changing into the name space. There is no real benefit in replacing one 1980s architecture, v4, with another, v6. The real key was actually shifting the entire Internet across to a different basic architecture. So, in today's network, names matter, names matter a lot. The DNS really matters. Really, really matters.

Names are the core of the Internet. Addresses, not so much. Routing, not really, because we don't route any more. All these CDNs, all the content, abut all the access networks. Longer term, the money is still moving up the stack, and transmission infrastructure is actually an abundant commodity. Sharing networks doesn't matter. We are now able to afford long distance transit networks that are private: Google run their own, Amazon run their own, Meta run their own. Sharing doesn't matter any more. We don't need to share, so we don't. We have so much computing and networking out there that we no longer pull consumers towards the server; we push the service and the content out to the user.

So what happens now is that the application is the service itself, rather than a window to some other service.
Do networks matter? Well, I don't think they do. In our search for lower cost, higher speed, higher agility, we're pushing stuff out to the edge, and even off the network itself. It's just dumb commodity pipes, when you think about it. What's the public Internet? We all use the same addresses? That's not the public Internet. We all use the same transmission? That's not the public Internet.

I'd actually say that what defines the Internet these days is a common referential mechanism using a common name space. It's actually the name system that lies at the heart of the Internet. And I think where we're heading is a one‑way street. And what does that mean by implication? I think we're going to be stuck here for an awfully long time; as we see with the pricing data, there is no real impetus to change. Where we are, for better or worse, like it or leave it, is where we are. And I think the real trick is coming to terms with that and understanding that these are the market forces that got us here. It wasn't any malign intervention. It was simply the cumulative sum of everyone trying to optimise their own position inside this ecosystem. Hopefully I have left a few seconds for questions. Thank you very much.

(Applause)


RAYMOND JETTEN: Thanks. We are also sorry you're not here. There are a few questions in the queue; I'll take those first. This is from Waysing: "Does the RIPE NCC have specific programmes or resources aimed at supporting capacity building and technical training for new members, particularly those in smaller or emerging markets?"

I think the RIPE NCC actually has quite a few different trainings, so you can have a look at those on the RIPE NCC website.

Then we have another question: "As IPv4 resources continue to deplete, how does the RIPE NCC view the... markets, and are there ongoing discussions about refining transfer policies?" There are constantly discussions on this, yes. I'll pick one from the room first.

AUDIENCE SPEAKER: Kurt Kayser, I am representing the European rail operators in this case, and I am completely objecting to, let's say, comparing the fossil fuel rail problems and crashes with IPv6. In my opinion, the rail system is the only transport sector that has a future, because flights won't be available for everyone, and Formula 1 cars won't be available for everyone either. So, I would be looking at changing the analogies in your slides a little bit, and what's compared.
Thanks.

GEOFF HUSTON: Noted.

AUDIENCE SPEAKER: Steve Wallace. I'd argue that the about 2,600 ASes that make up the global research and education network community are a counter‑example: they are mostly end‑to‑end, and there are relatively few NATs. And that encourages me to explore whether that's a true assertion.

GEOFF HUSTON: V6 was designed in the light of v4, as a 1980s address‑based architecture. The issue was that we really underestimated the cost of uniqueness across a multi‑billion large client population, and NATs were one way of pushing that cost overboard. The only way we have built a network as big as it is, is by not actually making all those end devices have unique addresses, because we can't afford it; we actually can't figure out how to scale to that enormous dimension. Now, this is one place where the research networks, I think, and the commercial networks have kind of diverged down different paths. Scaling in the large public Internet is absolutely everything, it dominates the agenda, and the problem with massive systems, when you are trying to allocate unique tokens everywhere, is that it is just really expensive, and any way you can ditch that cost, you will. And what we actually found out was: let's not bother addressing clients, let's forget it, let them use their own addresses, let's NAT the lot and do distinguishing based on names, which are basically infinite one way or another, they are not a scarce resource, and naming is only on the server side. And so we sort of sought out the cheapest possible solution, which is what the Internet is all about, cost evasion, and the reality is where we are is just dirt cheap, and that's what we have got.
V6 is kind of a world of 40 years ago, and the scaling properties of address‑based architectures are really difficult and expensive, and that's, I think, the reality that we have learned almost subconsciously over the last few years. Now, I realise that in saying that to a v6 Working Group meeting, I'm spouting a little bit of heresy, and I am sorry, but in some ways we live in the real world, and costs and economics are what drive this business. Thanks.

RAYMOND JETTEN: I am going to have to close the queues.

AUDIENCE SPEAKER: Hello Geoff, Jen Linkova. Good evening. So, first of all, I'll start with a minor comment: I completely disagree that NAT is free, because, I don't know about you, but I do deal basically on a daily basis with users who are very unhappy about their traffic going through a relay and not directly. Because it is visible to users, and it does cost application developers a lot of resources. But it's a minor thing. But I agree that the problem of why we are stuck with the transition is human nature. Nobody does anything until it's really dead. So, for people who are not doing v6 yet, it probably just means they don't need it in the next three, four, five years, which is about how long an average engineer or decision‑maker stays with a company. I actually have a theory that maybe that's why Japan was one of the first countries, because people kind of see themselves in the same company in ten, fifteen years' time, right. So they do have reasons.
But, on the other hand, is it actually a bad thing? I don't have data, but where are we with migrating from TCP as an HTTP transport to QUIC? How much DNS traffic is going over QUIC now versus UDP port 53? Do we really need a hundred percent done, or do we need the people and companies who need v6 to do it? And if particular enterprises are happy to use IPX, AppleTalk, whatever protocol, or some behind‑the‑scenes v4 addressing, to do this, fine. Right. So I don't think we need a hundred percent. Not yet.

RAYMOND JETTEN: I am sorry we have to ‑‑

GEOFF HUSTON: Very quick response, I'll take a couple of seconds. Look, the whole reason behind this transition trying to get to completeness is that the only time you know you can turn off v4 is when it's not being used anywhere on the net. And so the idea was, in this dual stack world, we did the whole happy eyeballs etc., and the expectation was that, if you will, once v6 got to the point of being everywhere in dual stack, happy eyeballs meant you could just turn v4 off and no one noticed. As long as you still have v4 traffic, the dear old service providers have no idea if that's the important traffic that they can't stop supporting. We are stuck in this dual stack forever. What I'm saying is, in some sense the economics say there is not much momentum behind this transition, it's going to take an awfully long time, and it might even be that where we are is where we're going to stay. Thank you.

JEN LINKOVA: I did turn off v4 and I do not care if some sites and Internet still have it.

RAYMOND JETTEN: You can continue offline. We still have some questions in the queue. I am sorry we will not be able to take each and every one of them.

Rinse Kloek: "For us the IPv6 transition just started two years ago, by enabling v6 in our new access network. However we still have more than..." Where is the question really? I don't see a question mark, so I'm sorry, Rinse, I'll talk to you later.

At this moment, John asks: "What does it matter if not everyone deploys IPv6? Those who can, do benefit. What is the risk if adoption peaks at 60%?" Well, that's actually what was just discussed anyway.

All right. Thank you Geoff for joining us remotely. I'll see you in Lisbon, I hope.

GEOFF HUSTON: That's my plan and my bicycle notwithstanding ‑‑

RAYMOND JETTEN: Stay away from your bike. Take a taxi.

(Applause)

RAYMOND JETTEN: All right. The next talk we have is from our dear colleague Nico, and he will be joining us from not as far away as Geoff, but maybe you want to tell us a bit about why you are there and not here.

NICO SCHOTTELIUS: Hi everyone. I am in a bit of a particular situation. If I'm looking a little bit worn down: I just became a father about two or three weeks ago or so, it's all very blurry.

STENOGRAPHER: The world will never be the same again!!!!

NICO SCHOTTELIUS: My wife and I happened to be in South Korea when she got pregnant, so there is some logistics; while we are usually in Switzerland, everything is different.

STENOGRAPHER: You'll be coming to the RIPE meetings to get a break now!

(And some sleep).

NICO SCHOTTELIUS: I am going to talk a little bit about a completely different topic than Geoff did. It is IPv6 and routing with Kubernetes, which is a little bit strange. First off, I'm working at a company which is doing a lot of IPv6‑only deployments, so the reason why I am talking about this is also that a lot of our infrastructure changed to a Kubernetes base. Fun fact: this whole process was started around three to four years ago, and those of you who know Kubernetes, or had a look at it, or want to have a look at it: when you first start with it, it is a huge, long process, and I will come to some details on the way of why that is such a long process.

That said, why am I talking about this? I have seen a lot of different network deployments, from the extremely manual, like a random dude in the basement doing all the configuration, and if you actually see yourself in this, no offence please; a lot of people do networking very isolated, doing their stuff, very good, very manual, very, I would nowadays call it eighties style, but in a way still reliable. When it comes to IT, too, you know, I don't want to go too much into the whole history there, but as an IT sector we kind of evolved to version control, we went to automation, and nowadays we're starting to talk about GitOps, which is basically dropping a change into a repository and then magically everything changes for the better, hopefully.

Now, I know that there is a lot of variety in how you deploy a network, and using Kubernetes for deploying your network stack is probably not the most conventional thing, and that is also the reason why I wanted to talk to you about this. So, why do this? What is the advantage? You know, we have a lot of equipment already, why change it? One of the main reasons you go to Kubernetes is that you get somewhat structured, similar ways of operating, although when it comes to Kubernetes, there is a whole hell of different things and different ideas and different, let's say, patterns that you can apply. So, when everybody says "I deploy Kubernetes", it might look very different from the next Kubernetes. The good thing is that some of the objects that we're talking about, some of the things that you deploy in Kubernetes, they are still the same, such as a Deployment, a Service or a Pod; they are the same independent of what kind of Kubernetes cluster you are actually deploying, so you get a little bit of a base, but it would be wrong to say all Kubernetes clusters behave the same way.

Now, still, the motivation for us to go to Kubernetes was to make our routing stack more reproducible and, in the end, more reliable, because in the beginning we also had a lot of manual deployments, a bit of a router here, a bit of a router there. We changed to a configuration management system, it got better, but we moved a lot of things for application deployment to Kubernetes. The question was: can we do the same with Kubernetes?

In general, Kubernetes can be used as a deployment platform for a lot of different workloads, and the typical things are more in the web server, application server, maybe mail server area. But the whole design, if you look at Kubernetes as a network person, you see that they have load balancers, ingress; the network design, if you look at it, is really made for an RFC 1918 world, with everything inside small capsules, pod deployments, which are all nicely isolated from the network by means of NAT. Well, coming at it as a network person, it really feels wrong. And we have been probably one of the first organisations who did IPv6‑only deployments in production, IPv6‑only Kubernetes deployments. And when it comes to networking inside Kubernetes, here is also a little bit of a heads‑up, if you are inspired after this presentation to go down the same route. One of the most tricky things in setting up a new Kubernetes cluster is to choose what kind of networking you want, because Kubernetes has this amazing idea of just expecting you to choose. We have a CNI, a Container Network Interface: whatever you want to do with networks, you can just plug it in there, which is great, but if you start with Kubernetes, it's: what should I plug in where, and how does it actually work? And you are really stuck at the beginning.
So, this nice idea of abstraction has a very hard entrance barrier, a very high entrance barrier, and you can do anything you want with networking in Kubernetes: you can do bridging, tunnels, VXLAN, VLAN, WireGuard, you name it. You can even ‑‑ we have had a Kubernetes cluster, one cluster, deployed all over the world, ranging from Canada over Europe to New Zealand. One cluster. One Layer 2 network, which, you know, will make many of you sort of spin in your head saying: why? Anyway, that's not the point of this talk.

One of the drawbacks is that in Kubernetes, by default, your workload, the pod, as it's called, usually gets one interface. And until not long ago, this interface was IPv4 only. Then Kubernetes evolved and it was getting IPv4 or IPv6, then it evolved again and it was: okay, you can have one interface and you can have both v4 and v6 if you want to. But that is the default of Kubernetes, which doesn't sound terribly helpful if you want to route. You know, routing with one interface, one IP: a bit tricky.
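
To make that concrete, here is a minimal sketch, not taken from the slides: on a cluster whose CNI and pod/service CIDRs are configured for dual stack (Kubernetes 1.20 or later), both address families can be requested explicitly, for example on a Service. The names in it are made up.

    # Sketch: request both address families for a Service on a
    # dual-stack-capable cluster; name and selector are assumptions.
    apiVersion: v1
    kind: Service
    metadata:
      name: example
    spec:
      ipFamilyPolicy: RequireDualStack   # or PreferDualStack
      ipFamilies:
        - IPv6                           # primary family listed first
        - IPv4
      selector:
        app: example
      ports:
        - port: 80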

So, people were smart, there are always smart people on the planet, and because it's a CNI, a pluggable network interface, well, you can also write a CNI plugin which multiplexes other plugins. There is Multus, which allows you to add multiple CNIs into one pod, and well, if your head isn't already spinning, it gets spinning the more interfaces you want. What you do is you define a certain network type, it is called a network attachment definition, and you attach it to a certain workload, so you have this nice matrix where you can assign this; but it gets really complicated if you have a lot of different networks, because that's not really what Multus is made for. It is made for: you want to have one workload, maybe two interfaces, that's okay; three, four, five, six, seven, eight, like you have in a router, maybe not.
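
The pattern looks roughly like this; a minimal sketch, where the attachment name, the macvlan master interface and the image are assumptions:

    # Sketch: define an extra network, then attach it to a pod via an
    # annotation; Multus adds it next to the default CNI interface.
    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: upstream-net
    spec:
      config: '{
        "cniVersion": "0.3.1",
        "type": "macvlan",
        "master": "eth1",
        "ipam": { "type": "static" }
      }'
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: router
      annotations:
        k8s.v1.cni.cncf.io/networks: upstream-net   # second interface
    spec:
      containers:
        - name: bird
          image: example/bird:latest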

So, to the rescue comes something in Kubernetes that's called the host network, which in the case of routing is an amazing thing. You can actually just say: okay, I have a machine with tonnes of interfaces, that's a router, or can be a router, and you expose all of the physical interfaces to the workload, which is great. Obviously this doesn't fit so much into this nice concept of isolation, but then again, we need to get the job done, not just focus on the principles here.

So, the host network: really nice. We have three options. The first one is we use the CNI: one interface, one IP address per stack. With Multus we get multiple interfaces. Or we use the host network, which exposes all the interfaces of the actual node we're running on.
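
The host network option is essentially one line in the pod spec; a minimal sketch, with the image name and capabilities as assumptions:

    # Sketch: the pod shares the node's network namespace, so a routing
    # daemon inside it sees and can manage all physical interfaces.
    apiVersion: v1
    kind: Pod
    metadata:
      name: bird-router
    spec:
      hostNetwork: true
      containers:
        - name: bird
          image: example/bird:latest
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]   # needed to manipulate routes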

Now, usually, Kubernetes is a cluster. So we have a lot of different nodes. A lot can mean anything from one to thousands.

In terms of routers, I don't know about you, but for us, routers really form the backbone of the network, so you want to have them very, very reliable, and with the least amount of dependencies. So, what we do is we run single‑node Kubernetes clusters; yeah, we still call it a cluster for consistency, but in the end it's a single node, a router, a cluster. The advantage of doing this is that we still have the same primitives as when we are running bigger clusters, so we can have router definitions that can be used in a big cluster or on the single node.

Now, one of the tricky things in a Kubernetes world is that you run everything in a container, and a container needs some kind of image that you pull in. Now, if you are a router, you potentially don't have a route yet, because you establish it via some kind of routing protocol, be it BGP, OSPF, whatever. So, you might not even have a route for pulling your container image, which is something that affected us in the beginning. So you need to make sure that the image that actually does the routing can be pulled without the routing, or is already present on the system; you can also pull in the image via any kind of network beforehand. But this is something that, if you're not used to running routers with Kubernetes, can really bite you.

Another thing that can bite you is that some tools, such as kubeadm, assume all of the nodes in the cluster always have a default route, which in the case of a router isn't necessarily the case. There are a couple of workarounds for this, for instance adding a default route on the loopback interface; it's a bit hacky, but you can see that the mindset of developers working around Kubernetes is still a little bit in this, like I said initially, RFC 1918, closed‑network way of thinking. So, this is just a small hint.

Another question that we had to ask ourselves is: how do you actually roll out? The great thing about Kubernetes is, you can use GitOps. This means, again, you just write a configuration file, put it in a git repository, push it out, and some service will take in the changes on the fly. It's amazing: automatic rollouts, you can do thousands of services at the same time and they will all be individually upgraded. It's also amazing for a huge blackout: if you want to get rid of your infrastructure, just commit something wrong and it will apply everywhere.

So, what we do is actually a bit of a mixture; that is, every change for routers goes into a git repository, but is actually applied manually, for the reason that this way we can nicely control different clusters being updated on their own.

But it's still in git, so it's still ensured that we have it versioned and we can roll it back later.

Now I wanted to show a bit of a real‑world example. I don't want to go too much into the technical side, but if, after today, somebody wants to see a little bit of the real stuff, I can show you some. What we have is around 20 different single‑node Kubernetes clusters. Each of them is a machine, anything from small, in a very edge network, a 4 gig, 4 core machine, up to an 18 gig machine with 64 cores, which is doing routing plus something else. Additionally to that, we have quite some VPN instances that are also in the Kubernetes clusters, which are based on the same routing software; I'll come to this in a moment. Most of it is actually running on bare metal, a little bit on VMs, but that's not really the point.
How we are doing this is actually by using a Helm chart, which is a method of applying configuration to Kubernetes clusters, and an in‑house container. And because a lot of this was a little bit abstract, I wanted to show you what we actually do inside the Kubernetes cluster in terms of a container. What we do is we are actually using the Alpine operating system as a base for our container. We add BIRD, our routing daemon of choice; it could be FRR, whatever you like. We add a couple of supporting tools such as tcpdump, mtr, Jool for NAT64 translation, OpenVPN, WireGuard obviously as well, and by default, we start BIRD. This is the complete Dockerfile that we're using for creating the container. Obviously we add the configuration itself. Essentially, the thing I wanted to tell you about: as of today, it is easily possible to run routers with the same stack as everything else if you already have Kubernetes, and you can nicely work together with other teams that actually use it, if you have such an organisation. You do still see that it's not exactly the same as just deploying a random application, because those are more integrated into the architecture of Kubernetes. But using the host network, it works quite well.
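
A Dockerfile along those lines might look like this; a sketch rather than the original file, with the Alpine package names as assumptions (Jool also needs its kernel module available on the host):

    # Sketch of the router image described above, not the original file.
    FROM alpine:3.20

    # BIRD as the routing daemon, plus the supporting tools mentioned.
    RUN apk add --no-cache \
        bird \
        tcpdump \
        mtr \
        jool-tools \
        openvpn \
        wireguard-tools

    # Run BIRD in the foreground, with its config mounted into the container.
    CMD ["bird", "-f", "-c", "/etc/bird/bird.conf"]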

So that's it actually from my side. I hope I gave you a little bit of an appetite, and I hope not everybody thinks I am totally crazy. I am looking forward to your questions, if there are any. Thanks for listening.

RAYMOND JETTEN: All right, thanks Nico. We are already a bit over time. So, we'll keep the questions short.

AUDIENCE SPEAKER: Hello. Thank you. Jan Zorz. I guess you cannot do VPP/DPDK in these containers, right?

NICO SCHOTTELIUS: You can. You can basically do anything, because you do have access to kernel space outside of the containers. The Jool that you have actually seen is actually deployed in kernel space, but in a different namespace.

AUDIENCE SPEAKER: Have you explored the possibility of using SRv6 with Kubernetes, or is there no requirement for traffic engineering?

NICO SCHOTTELIUS: I haven't checked it out. It's an interesting topic; we had a look at it, but we didn't really have a use case for it. But I think it would be a similar case; it would be more isolated than in the host networking case, but otherwise I think it should work, generally speaking.

RAYMOND JETTEN: Okay, that's it. Thank you Nico, and congrats again, and see you in Lisbon.

Next up is Jens Link, and we are having an Advent of IPv6.


JENS LINK: So, one person at least on the stage. A couple of months ago I had a crazy idea, I think I put this to the Working Group mailing list already: Advent of IPv6.

I am a freelance consultant. I started with Linux when it was shipped on 35 floppy disks. I have been doing networking since 2002 and IPv6 since 2010, but that's not quite relevant.

It all started with the question: how do we test if a person knows IPv6? There are a lot of self‑proclaimed experts who can do anything; some of them even have a vendor certification, think $vendor certified network engineer or something like that. I talked to several people who went to the training and were told: we don't have time for IPv6, and this is only relevant to get 95% in the test. So, we don't do it.

Solution: Make a test where you actually have to do something. Like configure something or troubleshoot something.

So, at least as little multiple choice as possible.

And after that, a couple of years later, on my way home, I came up with the idea: let's make that fun. Maybe let people win something, like a T‑shirt. Borrow some ideas from other events. So, do you know Advent of Code? It's before Christmas, starting on 1 December, and you have a challenge every day, where you have to write some code based on some data you get, and then you have to provide the answer, and there is a list of who is fastest. And then there is Advent of Cyber; they are a little more sophisticated, they have a story of someone stealing and sabotaging Christmas, they have a back story, and you have to solve some challenges; you are the good guy, and see what the evil hacker is doing.

So, Ray told me to be fast. So I skip this.
It can be anything else; it can be the 20 days before the next RIPE meeting. So, I thought of just taking Advent of Code and Advent of Cyber as a basic idea. So, what is needed? We need ideas for tasks. I came up with a few, but not 24 of them, and I'm not sure if my ideas are good enough, or maybe they are too complicated! And should it be only Linux, or $vendor, or $cloud provider, or only Windows? Or Linux and Windows? With Windows, I would be out.

Tasks should go from easy to hard. So, let's do the hard stuff at the end, or maybe have two tasks! It should be accessible; that means you should be able to access it with a web browser or maybe SSH. No account with a Cloud provider, and no special vendor image for emulation.

We maybe need a back story for the whole thing, like somebody stealing Christmas or ‑‑ that idea is already taken. We need sponsors: if we run labs, we need some form of virtual machines, or containers, or something else. And if we want prizes, somebody has to pay for the T‑shirts, and for the shipping and the logistics.

Okay. Also, we need graphics and text and project management and all the stuff around it.

So, now it's your turn. Is this a stupid idea or... not? And if you say no it's not a stupid idea, would you be willing to help?

RAYMOND JETTEN: Okay. We can ‑‑

JENS LINK: There is the mailing list, so, Ray ‑‑ no, I am done.

RAYMOND JETTEN: Now he is done, okay. Sorry. We can have questions in the room, if there is any or any comments? No?

AUDIENCE SPEAKER: Andrei, from the RIPE NCC Learning and Development department. Great idea. We, as RIPE NCC Learning and Development, have content on IPv6. We don't have any public lab infrastructure. If it's something like Advent of Code, it has to be ready ahead of time, and it requires lots of hidden preparation. So, well, I cannot guarantee anything; we definitely don't have the capacity for all of that. But if there is something we can, let's say, cooperate on, or something we can reuse: first of all, we have lots of theoretical content in our Academy that, if you didn't know, is accessible to everyone; you don't have to be a member, it's open for everyone, it's well curated content. But it's not hands‑on experience, because that is much harder to make. So, in any case, if there is a potential to cooperate, drop us an e‑mail: academy [at] ripe [dot] net.

JENS LINK: It's not for this year's Christmas, because we can't do this in a month. Who thinks this is a stupid idea? Be honest! Okay, one. Who thinks it can be done? A lot. And now the trick question: who is willing to volunteer? Okay, yeah.

JEN LINKOVA: I think you don't have to have a single advent calendar for this, but if you want, I have a similar idea. I definitely think we can get 24 PCAPs to take a look at and tell what's wrong in them. I definitely have more than that, and I am sure I'm not alone.


JENS LINK: Yes, PCAPs were one idea, but no multiple choice. There is the Hurricane Electric IPv6 certification; I think you can do that in two days with just the RIPE website and a Google mail account. That would be too easy. So, nobody else? All right.

RAYMOND JETTEN: It looks like a good idea then. Of course we have the v6 Working Group mailing list as well, where you can put your ideas, and I am sure Jens will see them. So thank you.

(Applause)

RAYMOND JETTEN: Then, we have another of our experts in the field, Benedikt Stockebrand, with 'Where are we with IPv6?'.

BENEDIKT STOCKEBRAND: So, what I want to talk about is where we are with IPv6, what has happened, what still needs to happen, what's going to happen next?

And first off, no silly cats in this presentation. Sorry! If you need those, you have to ask Mark, for example, who I'm working with right now doing IPv6. And no PCAPs either. Two reasons for the PCAPs: first, they are covered by an NDA. Second, we have a download limit for presentations, and I could easily top Anna's presentation by about a factor of 15 or so with a single PCAP.

I have been involved with the SmartKit thing by route 128, which you may know from a couple of RIPE meetings ago; I'm not going to talk about that. I currently do IPv6 with fancy German cars, me of all people, but we do that. IPv6 is the smallest problem; we have other problems much bigger than v6 in there, surprisingly enough. So yes, it works, I am allowed to say that much. I have been doing IPv6 since 2003, and a lot of things have changed since, if you think about it.

And well everybody can tell somehow this guy, he must be a technical guy, but I have also been doing work as a non‑technical programme manager for IPv6 deployments, mostly in combination with both. And it's largely outside the ISP business, so, new customers to a lot of you.

What have I achieved so far? First off, the word is out. In 2003, when you tried to tell people you were doing IPv6, you got two responses: either they looked at you like you were talking about something totally strange they had never heard of, or they laughed right in your face.

These days, it's more like: either they look really embarrassed and walk away and try to find somebody else to talk to, or they laugh behind your back. That's the situation we have. But the important point to me is, we don't have to go out and try to explain to people that there is this thing IPv6 and you need to take care of it, because those people who still don't listen, and there are lots of them, simply won't, no matter what we do; it's a waste of time.

Most people only go to the dentist when they can't stand the pain any more, and that's exactly what's going to happen to them. The other thing is: yes, we saved the Internet. Sounds like big words, but what we have achieved is that the Internet still works, more or less. If we turned off IPv4 on the Internet these days, with what we have, you know, just like that, a lot of people would get rather panicky and my daily rates would probably go up a little bit. But we could keep things up and running, or get them back up and running. If we turned off IPv6, we'd kill IPv6 and IPv4 for all the people using DS‑Lite and that sort of stuff. So we managed to keep the Internet growing and staying alive with IPv6, admittedly through some strange means.

Where did we fail? Transition technologies. You heard Patrik's presentation on Monday doing IPv4 as a service using MAP‑T and whatnot, and it just sounded like you don't really want to do that. It's quite a bit of a pain, and I think that, as Geoff said, the well‑meant but not necessarily well‑done transition technologies just prolonged the pain.

So, that didn't work out quite as expected. And what is even more important to me: the key reason why I started with IPv6 in 2003 was because I wanted to get end‑to‑end connectivity back, because I thought it was a cool thing. I didn't get that done, really.

And I have a slide ‑‑ when Geoff had his presentation, he was talking mostly about what the content providers do, and then there are consumers consuming whatever the content providers provide. I think there is more to the Internet than that. At least as far as I am concerned.

So, the big question is: what's coming next? And there are a couple of developments that I find a bit disconcerting that I wanted to really talk about. So, we will have a divided Internet, basically. There is the IPv4 side, with the enterprises and whatnot who can afford to do IPv4. There'll be some more or less brutally crippled IPv4 for consumers, but enough to reach those enterprises or whatever. Unless you try ‑‑ remember when Covid hit and people started to work from home, a lot of people had a lot of fun with the VPN gateways, and some of it was because they didn't have enough hardware, but the other problem was that some of them really didn't work too well with DS‑Lite. And enabling IPv6 on them really made quite a big difference.

More important though, we will see changes to the default‑free zone, or the people in the default‑free zone. We have an increasing number of non‑ISPs becoming LIRs, which is good, in a way, as long as they know what that means, and they sometimes don't. We become an LIR, we get addresses and everything is fine. No, you need to get those sorted. That's what my ISP is doing for me. Not any more. You need to know about BGP, you need to know about reverse DNS, at least. And that's only from a technical point of view; having somebody reachable if there are problems, that's another thing. They frequently underestimate the situation, and it's important, because in the default‑free zone, if you really mess things up, you can cause a lot of damage. It's been ages ago, but when Pakistan decided they wanted to block YouTube using BGP, they didn't quite realise that BGP doesn't stop at their country's borders, and strange things happened. Ages ago, but these things shouldn't happen, and BGP is not that well equipped to deal with people not doing what they should do.

A related problem is we have exploding IPv6 routing tables. I haven't found it, but I do remember, back from when I started with IPv6, there was an estimate that a fully migrated IPv6‑only Internet would need about 16,000 routes in the default‑free zone. So, I guess we have grown a bit since then. But we also have a problem that people just dump their routes into the default‑free zone like it doesn't cost anybody any money. Or saying: it's not my money, so why would I care?

A year ago, I was told in Rome that somebody actually took a single allocation and turned it into 40,000 routes announced via BGP. Now, it doesn't take too many people doing that sort of stunt and we have a problem, and that's what I want to raise your awareness about. What should we do at this point? That's "should" as in RFC 2119, so we really should do something about that.

If you have customers who want to become an LIR, try to make sure that they have an idea what they are getting themselves into. Yeah, I have done dynamic routing; but we still use RIP: not good enough. And even if it's OSPF, BGP is something different.

If you are an ISP doing business customers, maybe consider how you can come up with products that deliver what businesses try to achieve by being an LIR, without them having to be an LIR.

Okay, monitor your BGP routing table sizes, big surprise. I guess that's completely new to all of you.
Prepare in advance for people coming up with that sort of stunt, because somebody might decide: 40,000? I'll go for 400,000. And your routers might have a problem. One thing I have learned about hardware is that TCAM is expensive, really expensive.
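
In BIRD 2, for example, such a guard is a per‑peer channel option; a sketch with made‑up ASNs, neighbor address and limit:

    # Sketch: cap what a single IPv6 peer can dump on us.
    protocol bgp peer_example {
        local as 64500;
        neighbor 2001:db8::1 as 64501;
        ipv6 {
            import limit 50000 action restart;  # drop the session past the cap
            import all;
            export none;
        };
    }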

Educate your customers: if you want to do this, you have to do so and so and so; and maybe help them, it might actually be something to make money with. Also, educate your corporate lawyers, you know, the people who do the contracts for your customers, so your customers don't say: yeah, but I have a contract, you have to route 2 million routes. And your sales people, who will sell this sort of stuff no matter what the contract says.

What else? What can we do when things really get ugly, if we are prepared? Eventually we are probably going to have to filter, one way or another. And ‑‑ this is what management is going to tell you anyway ‑‑ only filter where it's really, really necessary, where people are going really over the top. Going out and saying, okay, we're going to drop everything that's not an aggregated route according to the RIR databases, is probably a bit over the top at this point.

And if you filter, when you filter, keep in sync with your legal department, because there will be fallout: "just because we have announced, like, 40 million routes, they are censoring us". So you want to keep an eye on that. And if possible, keep in the loop with the people around here and at the other RIRs, so that there is some sort of consensus‑style response and you're not all by yourself. Okay.

Other than that, there is one more thing: RPKI is actually helpful to some degree, I hope. First off, you can restrict the size of the more specifics through the maxLength in the ROAs. That might be useful, at least for filtering. And of course, if you tell your customers, yeah, you have to do RPKI as well, you might actually make them realise there is a bit of a burden they have to deal with before they do this. Okay, that's it. So, as I said, no silly cats, and no PCAPs either, in case you hadn't noticed.
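
For what the maxLength mechanism buys you: a ROA covers a prefix up to a maximum length, and anything more specific than that becomes RPKI‑invalid, which is exactly a brake on deaggregation. A deliberately simplified sketch of the origin‑validation logic ‑‑ one ROA only, whereas real validation considers all covering ROAs, and the values are invented:

```python
# Simplified RPKI origin validation against a single ROA:
# a route is valid only if it is covered by the ROA prefix, no more
# specific than maxLength, and originated by the ROA's ASN.
import ipaddress

def roa_check(route, origin_asn, roa_prefix, roa_maxlen, roa_asn):
    net = ipaddress.ip_network(route)
    roa = ipaddress.ip_network(roa_prefix)
    if not net.subnet_of(roa):
        return "not covered"
    if origin_asn == roa_asn and net.prefixlen <= roa_maxlen:
        return "valid"
    return "invalid"

# Hypothetical ROA: 2001:db8::/32, maxLength 36, AS64500
print(roa_check("2001:db8::/34", 64500, "2001:db8::/32", 36, 64500))  # valid
print(roa_check("2001:db8::/48", 64500, "2001:db8::/32", 36, 64500))  # invalid: too specific
```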

Questions or comments?

RAYMOND JETTEN: There are no questions in the queue.

JEN LINKOVA: As someone who reviews and reads postmortems on a regular basis: the first action item you might have in your postmortem is "let's tell people not to do that". And it doesn't help. If you want to prevent something from happening, there is no point in telling people "please don't do this", because the people who would listen probably already know, and the rest either do it unintentionally or are simply not going to listen, right? So I kind of disagree with the advice "let's tell people to do the right thing". We need technical solutions to prevent bad things from happening.

BENEDIKT STOCKEBRAND: Yes.

JEN LINKOVA: I guess we could spend another couple of hours, or years, discussing those solutions. And I also have a question: I am wondering if you have numbers on the growing number of routes, because I looked at Geoff's website, right, and he is seeing about 1 million routes in v4 and about a third of that in IPv6, so it kind of matches the adoption rate, right. So I'm wondering what you see ‑‑ what makes you so concerned about the growing routing table, besides the occasional leaks, which we handle all the time in both protocols anyway?

BENEDIKT STOCKEBRAND: Okay, on Geoff's web page the images are broken right now, unfortunately, and those are the numbers that I know are public. What I have is anecdotal evidence only, and it usually comes from people who don't like to talk about what happens in their network, so I don't have any proper numbers. On top of that, I'm not actually a BGP expert; it's only part of what I do when I help people. But I do get to see these problems, and I have a reasonably reliable source, for example, for those numbers I talked about.

JEN LINKOVA: Yeah, let me clarify my question. I did see certain large operators ‑‑ one deaggregated their /32 into /64s and tried to advertise them over the peerings. But those things are mistakes, and I think any sane operator would actually follow the real technical advice about max‑prefix limits and so on; it's not your fault if your peers are doing stupid things. So I suggest maybe you want to go into more technical detail on how to protect your network when somebody else ‑‑ not if, but when ‑‑ does something stupid?

BENEDIKT STOCKEBRAND: My point was really just to raise awareness here. From what I have seen over the years ‑‑ and I have been doing more trainings over the last couple of years ‑‑ organisations are rather heterogeneous in the way they do things, so coming up with advice beyond "keep an eye on it, make sure you are prepared when this happens" is hard; I find it difficult to come up with something generalised. But you also make one point that I think is really important: people don't always do this intentionally. I guess there are very few people who intentionally want to break the Internet ‑‑ there are some who are politically motivated or whatever, where it is intentional ‑‑ but that doesn't mean we shouldn't prepare for these things to happen.

CHRISTIAN SEITZ: There is one question online: "In what sense have we failed with transition technologies in that they are not used enough or in that they are not good enough?"

BENEDIKT STOCKEBRAND: They are prolonging ‑‑ that they are prolonging the pain rather than dealing with the inevitable. Some of them are a real pain in the ass to use. Like MAP‑T, which is basically what used to be called A+P: we take some port bits and shift them towards the addresses. It costs a lot of money, causes a lot of pain, and nobody is really helped. That's what I mean when I say the transition technologies failed: they were well meant but not necessarily all that well done. And some of them, yes, we needed them back then and we still need them today.
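
For readers unfamiliar with A+P: the trick is that several subscribers share one IPv4 address, and a few port bits (the PSID) decide whose ports are whose. A deliberately simplified sketch ‑‑ real MAP‑T (RFC 7597) adds an offset so the well‑known ports stay excluded for every subscriber, and the parameters below are invented:

```python
# Simplified A+P port partitioning: the top PSID_BITS of the port
# number identify the subscriber sharing this IPv4 address.
PSID_BITS = 4    # 16 subscribers per shared address (hypothetical)
PSID = 0b0011    # this subscriber's ID (hypothetical)

ports = [p for p in range(65536) if (p >> (16 - PSID_BITS)) == PSID]
print(len(ports), "ports for this subscriber, starting at", ports[0])
# -> 4096 ports for this subscriber, starting at 12288
```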

GERT DÖRING: Actually, these transition technologies are extremely well done, because to the end user it looks like the dual stack Internet is just working perfectly ‑‑ so that is kind of a problem in itself.
That's not what I wanted to comment on, though.
On the routing and deaggregation side, I am sort of preaching to the choir here: filter your customers down to what they document and intend to announce ‑‑ ROAs, the RIPE database and so on. Do not just accept anything they announce to you, and if your system detects that you are building a filter with 60,000 entries, maybe talk to the customer before they actually start announcing it. This is something I think all transit networks should do: keep a good watch on what your customers do. I know that most of those in the room actually do, so this is sort of not the right forum, or not the perfect forum.
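
The workflow Gert describes ‑‑ build the filter from what the customer has registered, and stop for a human check when it balloons ‑‑ can be automated with the usual IRR toolchain. A sketch shelling out to bgpq4; the AS‑set and the threshold are placeholders, and the exact JSON output shape should be verified against your bgpq4 version:

```python
# Generate an IPv6 prefix filter for a customer AS-set via bgpq4 and
# flag it for human review when it is suspiciously large.
import json
import subprocess

AS_SET = "AS-EXAMPLE"        # hypothetical customer AS-set
REVIEW_THRESHOLD = 60_000    # the size mentioned as worth a phone call

# bgpq4 flags: -6 = IPv6, -j = JSON output, -l = name of the list
result = subprocess.run(
    ["bgpq4", "-6", "-j", "-l", "customer_in", AS_SET],
    capture_output=True, text=True, check=True,
)
prefixes = json.loads(result.stdout)["customer_in"]

if len(prefixes) > REVIEW_THRESHOLD:
    print(f"{AS_SET}: {len(prefixes)} entries -- talk to the customer first")
else:
    print(f"{AS_SET}: {len(prefixes)} entries, OK to deploy")
```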

BENEDIKT STOCKEBRAND: The key point, from what I have seen, is that we have a rush of organisations that are not ISPs entering this game, so we should prepare for some serious problems and serious impact there in the next couple of years, I would guess.

RAYMOND JETTEN: All right. Thank you. We have an AOB.

TORE ANDERSON: Ray from the RIPE NCC will have a short note.

SPEAKER: This will be quite short, I hope. It's about the Linux support for IPv6‑only and IPv6‑mostly networks. I have been contacted by the developers of NetworkManager, which is the thing that runs the network in most Linux distributions, and they said they do have plans to support this. They are working on it: they already have the support for getting PREF64 over router advertisements, they are working on an implementation of option 108 for DHCP, and they have an idea of using eBPF, which is a standard interface in the Linux kernel, so it doesn't depend on third‑party software. The reason I'm saying this here ‑‑ and it already helped us once with the issue with BGP routers and NetworkManager ‑‑ is that this development is not going as fast as it could, because paying customers have priority. So if there is a paying customer here who can request a feature in NetworkManager, please request that you would really like to have NetworkManager working in IPv6‑only networks. Talk to your vendor, that would be great. Thank you.
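
For context on the DHCP option mentioned here: option 108 (RFC 8925) is how a network tells a client "IPv6‑only is preferred", and on the wire it is just a 32‑bit wait timer. A tiny sketch of the encoding ‑‑ the 1800‑second value is an arbitrary example:

```python
# DHCPv4 option 108 (IPv6-Only Preferred, RFC 8925): the payload is a
# single 32-bit big-endian number of seconds (V6ONLY_WAIT) telling the
# client how long to stay off IPv4.
import struct

V6ONLY_WAIT = 1800  # example: retry IPv4 after 30 minutes
payload = struct.pack("!I", V6ONLY_WAIT)
option = bytes([108, len(payload)]) + payload  # code, length, value

print(option.hex())  # 6c0400000708
```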

JEN LINKOVA: It's like Christmas, I just can't believe this. Thank you very much.

RAYMOND JETTEN: It's Halloween actually, but never mind.

Okay, this was it for today. We had around 42 people online, and quite a lot in the room considering we are in the middle of the whiskey BoF ‑‑ so please go back there. And rate the talks. Good‑bye.

(Applause)

(Coffee break)