RIPE 89

29 October 2024
2 pm
Main Hall

Plenary Session

CLARA WADE: All right, good afternoon everyone. Hope you enjoyed lunch. We ended the morning on a post-quantum note, so I think it's appropriate that we continue on that after lunch. So here we have Jason from SandboxAQ and Peter from deSEC; they are going to present on field experiments on post-quantum DNSSEC.



(APPLAUSE.)



PETER THOMASSEN: Thank you very much for having us. So this is a joint project where we attempted measurements of DNSSEC with post-quantum signature schemes and looked at how it went. We published that at the IETF in Vancouver in July, and then RIPE was kind enough to ask us to repeat it here, and there are some new additions from Jason's work that I will show you later. Okay. So I suppose everyone is aware quantum computers might be a thing, and if they are a thing they will break cryptography and public key signing schemes, and that affects DNSSEC. I don't know how familiar everyone is with DNSSEC; it's a method of adding signatures to DNS records and having the recipient or resolver validate them, so it's harder to tamper with stuff in transit, and as usual the devil is in the detail. That's essentially what it is: it uses public key cryptography, and so it's affected by the post-quantum transition.

So, the TLS community has been looking at this problem for a while, and TLS is TCP based usually at least, so it's not too difficult to put new signature schemes in there — or cryptography schemes rather. The DNS is more constrained: there are very old devices, it's mostly UDP, there are weird NAT set-ups, so the situation is a little more complicated. Still, we need to secure DNSSEC against potential quantum computing stuff in the future as well, and that's why we figured let's look at it.

Ideally there would be just drop-in replacements for the current signing schemes, and we'll see if that would be viable or not. NIST is the American National Institute of Standards and Technology, and they have looked at the problem also; there are mainly three candidates, which are called Dilithium, Falcon and SPHINCS+, so we particularly looked at those and I will tell you more.

Okay. For DNSSEC specifically, I just mentioned there are size constraints in the DNS because it's UDP. UDP can carry large packets, but they can be abused: the source IP address is not easily verified, so people can send fake packets and have DNS questions seemingly come from somewhere else, and that seeming querier gets the response. If the response is very big, you can drown them in too much traffic, and that's why in the DNS, in practice, the packet size is limited.

That's a problem if the signatures or keys are very large, and that's exactly what people expect for the post-quantum stuff, so that's one concern. Another concern is that if those things are very large, perhaps one could split them across several questions so each answer would be as small as before, but that looks unusual to legacy middlebox things like your home router or something, and often such stuff gets dropped — at least that's the concern. So there are all kinds of implications that become very complicated in detail if you just drop post-quantum signatures into the DNS world. It might work and it might not work, and we wanted to figure out how often it works or doesn't work. Independently of those considerations, there's also stuff that applies per algorithm: each algorithm needs a certain amount of time for key pair generation, for signing one record, or for validating it, and it's for example not very useful if signing takes three hours, right. That all needs to be considered. There's a research agenda that explains the problem space that's linked here, and these are essentially the main points of it.

Okay, so I will not go too much into detail about this table of potential algorithms. I put it here because the reference at the bottom is a nice paper explaining the problem space of retrofitting post-quantum cryptography into existing internet protocols, and it has the three algorithms I mentioned: CRYSTALS-Dilithium, Falcon and SPHINCS+. Those are the three in this candidate table that have keys and signatures below 10 kilobytes; they are larger than the conventional ones but not crazy large, so that's why we considered those specifically. We also considered XMSS because it's a different type of signing scheme, and just recently Jason did experiments with Merkle tree based stuff he will tell you about later — I think just last week.

In 2022 we started doing this as a local experiment: no public internet measurements, just a Docker set-up. We used PowerDNS and implemented the Falcon signing scheme and validation, published a blog article about it, and things went reasonably well. So we thought maybe we should redo this experiment, not only with PowerDNS but with other software, and not only locally with somewhat unrealistic local network measurements, but also using the RIPE Atlas network, where you can do DNS queries and see how they come out — whether you get a response or not when those signatures are in place. We did all kinds of parametrisations: we tried UDP and TCP queries, and we tried including the DNSSEC OK (DO) bit in the query, which causes the signatures to actually be delivered to the client and not just validated by the resolver, so this increases the packet size in the last mile, which is why it's an interesting thing to try.
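
To give a concrete idea of what such a measurement looks like, here is a hedged sketch of creating one RIPE Atlas DNS measurement with the ripe.atlas.cousteau client. The option names mirror the Atlas DNS measurement API as I understand it, but the test zone name, the API key and the probe selection are placeholders, not the authors' actual set-up.

```python
from ripe.atlas.cousteau import AtlasCreateRequest, AtlasSource, Dns

# One parametrisation: A-record query over UDP with the DO bit set,
# asked through each probe's own resolver (placeholder zone name).
dns_query = Dns(
    af=4,
    description="PQC DNSSEC test: A over UDP with DO bit",
    query_class="IN",
    query_type="A",
    query_argument="exists.falcon-signed.example.net",
    use_probe_resolver=True,
    set_do_bit=True,        # repeat with False for the no-signature runs
    protocol="UDP",         # and with "TCP" for the TCP comparison
)

source = AtlasSource(type="area", value="WW", requested=500)

request = AtlasCreateRequest(
    key="YOUR-ATLAS-API-KEY",
    measurements=[dns_query],
    sources=[source],
    is_oneoff=True,
)
created, response = request.create()
```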

Also, DNSSEC responses differ significantly in size depending on whether the name you ask for exists or doesn't exist; if it doesn't exist, you get an NXDOMAIN response, and those responses are larger than the ones for existing names.

So we also tried that, and there are various variations of it. You see this KSK and CSK distinction: there's a way of using two keys per domain or one key per domain. We did the split case, which is the KSK one, and for PowerDNS we did the other case; both are used in the actual world.

And so we implemented this in BIND and PowerDNS as the authoritative name servers, and also in the corresponding BIND and PowerDNS resolvers — nobody uses our resolvers, obviously, but at least we could show that validation is feasible to implement.

Implementation was done with liboqs, an open-source library for the post-quantum algorithms, plus some technical details. We used RIPE Atlas and did the main study, which did not include the Merkle tree stuff yet, in May this year; it was about 2 million queries, I think, and we used zones that we deployed in public and signed with these algorithms. Of course public resolvers wouldn't be able to validate the responses, because they don't know how to interpret those signatures, but they should at least not get confused.

We then recorded the return code, which is called RCODE: for a successful response, for a name that exists, there would be a no-error response; for a non-existing domain it would be NXDOMAIN; when there's a refusal it's REFUSED, maybe you have seen it before; and there's also SERVFAIL and other kinds of failures. So we recorded that, and depending on whether that return value was what we expected it to be, we also recorded whether the response was correct or not — there's a correctness metric. We also recorded whether the response contained the AD bit, which means authenticated data and is set if the resolver validated the response; we expected it for none of the post-quantum responses, because obviously nobody supports them yet. And we also recorded response time.
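
As an illustration of what gets recorded per query, here is a minimal sketch using dnspython (the measurement tooling is not named in the talk, so the library choice is my assumption): it captures the RCODE, the AD flag and the response time for one query.

```python
import time

import dns.flags
import dns.message
import dns.query
import dns.rcode

def probe(resolver_ip: str, qname: str, use_tcp: bool = False, want_do: bool = False) -> dict:
    """Send one query and record RCODE, AD bit and response time."""
    query = dns.message.make_query(qname, "A", want_dnssec=want_do)  # want_dnssec sets the DO bit
    send = dns.query.tcp if use_tcp else dns.query.udp
    start = time.monotonic()
    response = send(query, resolver_ip, timeout=5)
    return {
        "rcode": dns.rcode.to_text(response.rcode()),   # NOERROR, NXDOMAIN, SERVFAIL, ...
        "ad": bool(response.flags & dns.flags.AD),      # resolver claims it validated
        "rtt_ms": (time.monotonic() - start) * 1000,
    }

# e.g. probe("192.0.2.53", "exists.falcon-signed.example.net", use_tcp=False, want_do=True)
```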

Then from that big dataset, we excluded a few things from analysis. In particular, we excluded resolvers that do not support conventional DNSSEC, because the interesting question is how we transition from conventional DNSSEC to modern or future post-quantum DNSSEC; the question is not how to support post-quantum stuff on resolvers that don't do DNSSEC today. We also excluded private resolvers on addresses like 192.168.x.x, because we wanted to do a UDP versus TCP comparison and TCP doesn't work there consistently, and we excluded network errors and timeouts — it would be interesting to look into the reasons for those. We were mainly concerned with the correctness of the responses that we got.

Now, these are the raw results. I am not going to talk about any of those numbers, but it's interesting that you can see, for example in the graph at the top right, which is a UDP graph where we ask for a signature with the DNSSEC OK bit, various columns and rows in various colours, and they correspond to the algorithms; the more blue it gets and the further to the bottom you go, the worse it gets, and that correlates with the success rates. We then looked at where the interesting spots are and zoomed into those. This is a zoom into the responses for queries with a valid question that did exist — for example ripe.net; we didn't have that zone, but it's a name that exists. On the left-hand side you can see the percentage of correct responses without asking for signatures specifically, and on the right-hand side the same thing if you do ask for the DNSSEC signatures. There are two lines for UDP and TCP, and on the horizontal axis are the algorithms. You can see that unsigned and RSA, which is the current stuff, always work — it was all pre-filtered, right — and then you can also see that for Falcon, for example, success rates over UDP and TCP were both very close to a hundred percent. It gets worse if you use Dilithium, which has larger — I keep confusing keys and signatures — larger responses anyway, so the delivery success rates go down, and it gets worse and worse as you go to even larger packets, which more or less correlates with the response sizes of the algorithms.

On the right-hand side specifically, when you ask for the signatures as well, you can see that over UDP success rates are lower than over TCP, because the message size constraint that I mentioned earlier kicks in.

The take away from this plot here is that Falcon looks pretty good actually.

Then here is a similar thing for non-existing labels, where we expected NXDOMAIN responses. It's essentially the same thing; there are twice as many values on the X axis because there are two ways of doing a non-existence proof, one called NSEC and the other NSEC3. In both plots you can see two chunks: the left-hand chunk is for the one method and the right-hand chunk is for the other. You can see that the left-hand chunk looks much like the previous slide I showed, and the right-hand one looks even worse; for example XMSS, at the very right, has a very low success rate, which is because the message size exceeds 64 kilobytes, the hard limit for a DNS message.

So anyway, it looks like there are combinations, or might be combinations, one could engineer such that things work.

So the transmission issues were real. As just explained, Falcon leads. Overall, if you use just one key and not two, which is the CSK scenario, things go a little better — 80% success on average rather than 70% — and if you use TCP, things improve by about 10% as well.

That's if you don't ask for the signatures; if you do ask for signatures, things go down to roughly 50%, because responses are much larger. Interestingly, also, about 9% of combinations claimed, by setting the AD bit in the response, that they did verify the signature, which confused us a lot.

Because we don't think anyone validates those signatures, so it's clear there are bugs in the deployed resolver base.

This is almost my final plot before we go to the Merkle stuff to get rid of those size constraints. Before we get there, here's a plot of PowerDNS timing benchmarks for the various algorithms. At the top are RSA and ECDSA, the red and green, which are the current conventional DNSSEC algorithms, plus the dark blue, and the columns have the times for key generation, signing and validation. Note that the X axis is logarithmic: each square is a factor of ten in time.

You can see, if you compare to the green and red, for example, which is RSA, that the algorithms at the bottom are mostly in the same area. So, for example, key generation for Falcon and Dilithium is faster than it is for RSA, and the same is true for signing; for validation I think RSA is a little faster, but Dilithium and Falcon compare roughly with the upper green. So that's not too bad. What's really bad, however, is XMSS key pair generation, which you can see at the bottom on the left, two squares out from the others — it's a factor of a hundred slower, so if you have dynamic deployment of stuff, it's not really practical.

Okay. I guess if there's questions we can go into more details.

We also have a public deployment of our stuff. So, as I mentioned before, we have the authoritative zones and authoritative name servers and also the corresponding resolvers; our BIND 9 resolver is here, and you can make queries there for our zones, for example the Dilithium-signed zone here. You will get a response which has an RRSIG, a DNSSEC signature, with algorithm 18, which is the Dilithium number we assigned, and it has the AD bit set in the flags at the top — the green thing — because our resolver implements that.

You can try this out yourself on this website; it's a JavaScript interface with DNS over HTTPS, and further down the page there's more documentation, raw data and stuff like that.

Yeah, so post-quantum is the future, and with that I will hand over to Jason.
JASON GOERTZEN: So thank you. Based on our initial findings that Peter discussed, the big take-away is that we want to use PQC but we need messages to be smaller, and one idea we had was that maybe we could use these things called Merkle trees to compress zones. So very briefly, what is a Merkle tree? It's a cryptographic binary tree that lets you associate data together: all your leaf nodes are hashes of your data, and each intermediate node is a hash of its child hashes, more or less. So, very quickly: if we want to validate that the bottom-left leaf node belongs to the tree, we would first hash the data, we would then be given H2, combine those two hashes and hash them, we would then be given H6, take both those hashes, combine them and hash, and then we would compare against H7, which is our root hash. If they match, then you are good. If they don't, someone is lying to you and you probably shouldn't trust the data. Can we apply this to DNS?

Probably, sure. What we can do is this: if you recall, there were these two keys as an option in DNSSEC, one being the KSK or key signing key. We would use a standardised DNSSEC or PQC algorithm for that, which provides us with authenticity and integrity, and then we would take the root hash of a Merkle tree that we have constructed and put that in our zone signing key, and that gets signed by our key signing key. Normally Merkle trees just give you integrity, but because we have the root hash signed by a digital signature scheme, it gets upgraded to have the authenticity property as well, and our signatures become this authentication path — the sibling hashes, more or less — which is really nice because your signature length grows logarithmically with how many things you put in the Merkle tree, in this case the RRsets.
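
A minimal sketch of the verification walk described above — hash the data, fold in the sibling hashes one by one, and compare against the signed root. The hash function and the left/right ordering convention here are illustrative assumptions, not the scheme's actual encoding.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify(data: bytes, path: list[tuple[bytes, str]], root: bytes) -> bool:
    """path holds the sibling hashes bottom-up, each tagged 'L' or 'R'."""
    node = h(data)
    for sibling, side in path:
        node = h(sibling + node) if side == "L" else h(node + sibling)
    return node == root  # mismatch means someone is lying to you

# Tiny 4-leaf tree to exercise it.
leaves = [h(x) for x in (b"rrset-a", b"rrset-b", b"rrset-c", b"rrset-d")]
h4, h5 = h(leaves[0] + leaves[1]), h(leaves[2] + leaves[3])
root = h(h4 + h5)
# Proving membership of "rrset-a": its siblings are H(rrset-b) and then h5, both on the right.
assert verify(b"rrset-a", [(leaves[1], "R"), (h5, "R")], root)
```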

And then there's also this really nice finding that some colleagues of mine at SandboxAQ made: you can cut hash sizes in half when you are using Merkle trees and still maintain the same level of security, because in this security model you can get away with only needing second pre-image resistance. I recommend checking out the publication there.

We do need to make some changes to DNSSEC for this to work. Namely, we have this circular signing issue: every time you add something to a Merkle tree, the root node will change, which means that your ZSK will change, and when that changes, the key tag changes, which affects what you provide as inputs to your signing and verifying functions.

So that means that we cannot — normally you would have your zone signing key also sign your DNSKEY set, but we have to make an exception here, because if you were to try to sign the DNSKEY set with the Merkle tree, it would then change the tree, and it's a nightmare; it would just continue to infinity and never work. The other exception is that normally you pass the key tag to your sign and verify functions; for Merkle trees, that's also not possible.

So in our experiment, we just zeroed out that field, just for Merkle trees; we are not saying get rid of it for everything.

So you get two nice wins by doing this. This is a plot of DNS messages signed by these algorithms for A records: you will see the classical crypto on the left, they are quite small; Falcon is okay; and then Dilithium 2, SPHINCS+ and XMSS blow up in size. The red line is what we can fit inside one MTU — or the average MTU — without having fragmentation, so we want to stay below that line as much as possible. When you apply a Merkle tree to these, you will see they both stay below that line, which is quite nice.

We also get much cheaper zone transfers in terms of bandwidth, because there's nothing private involved in building Merkle trees, so you can essentially send dummy signatures that are very, very small, and once the secondary name server has received them, it can rebuild the tree and make sure the root node matches what's in the ZSK; if it does, you are good to go. There's a nice trade-off between bandwidth and computation there; we don't have an implementation of that at the moment.

What happened when we used RIPE Atlas? Well, those of you with a good memory for numbers will notice that Falcon surprisingly behaved way worse. We have changed nothing in our testing set-up; it's just a time difference. Between May and last week, Falcon started failing about 40% more often. We thought it might be a resource issue, but we upgraded the machine to be well overkill, so we are still investigating; if anyone has any ideas what might have caused this sudden drop in success, please come speak to us.

Maybe it might be something weird with our cloud provider, I am not sure.

But the nice thing is, you will see that Dilithium still fails quite a bit, similar to the IETF results, but when you apply a Merkle tree to it, its success rate is upgraded to match that of Merkle Falcon and stock Falcon; even though Dilithium 2 has these larger signatures and keys, when you apply a Merkle tree to it, it succeeds a lot more. So the hypothesis is that once we figure out what this weird Falcon issue is, it would be upgraded to closer to 100%, once Falcon reaches 100% as well — that's just a theory, we don't have concrete evidence of that just yet. It's more or less the same story with non-existent labels.

We did have to change how we sign things a little bit; it's worth mentioning, and I will be very brief on this. Instead of just iterating over your RRsets once, you now have to iterate over them all to add them to the tree, and then you have to iterate over them all again to get each authentication path and your final root node hash. But the nice thing is, even though we are iterating a lot more, it still took half the time to sign the zone file — about a million RRsets — compared with EdDSA, so that's still nice.
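
Here is a rough sketch of that two-pass flow, purely for illustration (it is not the authors' implementation and glosses over wire-format details): pass one hashes every RRset into the tree, pass two extracts each RRset's authentication path, and the root goes into the ZSK.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """All levels of the tree, leaves first, root last (leaf count is a power of two here)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        level = levels[-1]
        levels.append([h(level[i] + level[i + 1]) for i in range(0, len(level), 2)])
    return levels

def auth_path(levels: list[list[bytes]], index: int) -> list[bytes]:
    """Sibling hashes from leaf to root for the leaf at `index`."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # the other child of the same parent
        index //= 2
    return path

rrsets = [b"rrset-%d" % i for i in range(8)]                 # stand-ins for wire-format RRsets
levels = build_tree([h(rr) for rr in rrsets])                # pass 1: hash everything into the tree
proofs = [auth_path(levels, i) for i in range(len(rrsets))]  # pass 2: one path per RRset
root = levels[-1][0]                                         # published via the ZSK, signed by the KSK
```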

If you were to use Merkle trees, there would need to be changes to the DNSSEC protocol. If you define it as its own algorithm ID, you can hot-swap it with any other algorithm without having to define a new Merkle-plus-Dilithium ID for each pairing you want to do, which is kind of nice. Zone updates are a bit limited as well, because every update changes the ZSK; there's some work by Verisign that they really like talking about — if you see anyone from Verisign, talk to them about it, I am sure they would love to — that might resolve that.

It's also worth mentioning that the actual DNSKEY query doesn't get compressed; the compression applies to all the other queries, and for those we did see an improvement in the deliverability of these large-signature zones. It's also worth mentioning that, unlike stateful hash-based signatures, which tend to use Merkle trees under the hood, you don't have to maintain a central secret state, because there's nothing secret about the Merkle trees themselves; everything can be public without losing any security.

So the biggest take-away, if you listened to this presentation, is that transitioning to PQC is non-trivial. This is just one example, so we need to start planning now.
PETER THOMASSEN: Okay. Yeah, I guess we said what we wanted to say: some stuff needs to change. We'll need to see how it looks, or might look, in the future; there are other ideas, like other fragmentation ideas such as ARRF, but I will not go into them.

Right. One thing to remember: if we don't take DNSSEC automation into account — updating the DS records in the parent that link the keys into the child — even the best PQC solution won't help very much, because all the domain owners would have to make the change manually. I think to make this a success, and not have DNSSEC fail even though there's a solution for PQC, we need to manage automated transitions. There's a mailing list at the IETF. In Dublin next week we'll have a side meeting on that on Thursday, so if anyone would like to come to that, that's cool.

Yeah I guess we'll open up for questions. Thank you.

(APPLAUSE.)


CLARA WADE: Okay, mics are open. We have three minutes for questions.

AUDIENCE SPEAKER: I have a question. Do you see any prospect for improving the situation with NSEC and NSEC3, or another mechanism for denial of existence?

PETER THOMASSEN: I don't think so, because we — as the internet community — would have to upgrade all existing deployments to replace the denial of existence proof mechanism. I think we'll have to engineer around it.

AUDIENCE SPEAKER: You have to update the protocol anyway, as you say. Why not also update this part?

PETER THOMASSEN: Because if you only add a new signing mechanism, all resolvers can just ignore it, whereas if you change the packet format, the chance of unrelated things breaking is much higher.

AUDIENCE SPEAKER: Thanks.

AUDIENCE SPEAKER: Just to give notice: you mentioned the DNSKEY set not being signed by the ZSK. You must be using BIND, because Knot doesn't use the ZSK for signing the DNSKEY set, so that has been solved. In fact, I don't know of any other DNS software that does that. Just a little correction.
Would it break it? No.

PETER THOMASSEN: I don't think it will break if you specify that the Merkle tree components of the zone which can be reconstructed don't go into the zone input, I think then it's fine.

AUDIENCE SPEAKER: Right, okay, thank you.

AUDIENCE SPEAKER: Shane Kerr from IBM. I know it wasn't the focus of your work, but you found resolvers that claimed the data was authenticated. Have you considered running just those probes and returning an invalid response, to investigate what they do?

PETER THOMASSEN: Yes, we have considered it but haven't done it yet.

AUDIENCE SPEAKER: I look forward to your future results.

AUDIENCE SPEAKER: Hi. Thanks for the insights from your tests. I was wondering, did you have a look at how doing this influenced the latency? Like the fallback to TCP — how did it play out in the real world?

PETER THOMASSEN: So, to investigate that, I think one would have to record traffic on the name server, see how that fallback actually happens and all of that. We didn't do that; we only did the RIPE Atlas measurement. That's an obvious extension, and I think a lot of other things might be interesting too, like clustering resolvers by whatever — by AS, say — to see where the problems lie. That would be really interesting, but we don't have the data now. I think we should extend this, but it was out of scope for this one.

AUDIENCE SPEAKER: Hi, thank you for all the effort. It seems to me that at the point when you decided to put the root of the Merkle tree into the zone signing key, you maybe shot yourself in the foot and introduced unnecessary limitations into this set-up. I think it would be more handy to put it somewhere else and remove those limitations, like needing to recompute the key tag after you compute the Merkle tree as a whole. Anyway, I would like to ask — it wasn't clear to me — if you sign the zone with the Merkle tree, what would a normal response to a query look like? Which records would be holding the parts of the Merkle tree, and how many of them would you need to give back to the querier for them to reconstruct it?

JASON GOERTZEN: So do you mean in the case of a zone transfer, or just a general query?

AUDIENCE SPEAKER: Just a general query against a zone signed with this Merkle tree.

JASON GOERTZEN: Sure. The authentication path — all those sibling hashes — would be located in the RRSIG; you shove that in as a general signature, and then your ZSK carries your root hash, and that's the idea.


AUDIENCE SPEAKER: Thank you.

PETER THOMASSEN: And to the other question, where to put the Merkle tree root: we just put it in the DNSKEY because it was an easy thing to do. It might be reasonable to put it elsewhere, totally; yeah, we have to try that.

CLARA WADE: Thank you Peter and Jason, that was great.


(APPLAUSE.)


JAN ZORZ: All right, the next one is a tale about a journey — the automation journey.

ANNA WILSON: Hi. I am Anna. Where have I been? Writing Python. I have worked for HEAnet forever, but my job has changed a lot over that time: I spent a while doing network stuff, then I project-managed the big changes to the network stuff, and for a while there I was service desk manager, so I was talking to clients about the stuff that was happening to the network. And then, from a lot of people's point of view, I kind of disappeared for a bit, and that's because I looked around and realised I wanted to work on the things that help us to deliver the network and changes to the network. I saw the kinds of tools that software developers have and I wanted to bring some of that back to my business.

Things like continuous integration, which is the kind of thing that lets us make more changes more quickly but with lower risk. Software developers have that nailed in a way that I think we haven't quite got yet in networking.

And an enormous part of that is how you provision and deprovision your equipment and services, right. So to kind of get our heads around this, we have two problems to solve, and the first problem is: do you buy your provisioning system and your orchestration system from a vendor, or do you try and build your own?

And this is where we found ourselves in HEAnet in about 2016, which is when we last made this journey. We had replaced every piece of hardware on the network — which for us in Ireland is around about 250 pieces of equipment — and at the time we bought an inventory system and an integrated services provisioning tool from the hardware vendor.

And that worked fine, you know. We did have to spend a lot of time and effort customising the system, because we wanted to build services the way that we want and the way our customers need, not the way some vendor somewhere imagines them in an abstract, spherical sense. And we also, whether we wanted to or not, had to spend time building a whole separate application in order to integrate this thing with all of our existing systems, like DNS and monitoring and so on.

And then last year, 2023, we were notified by the vendor that the thing we were using was going end of life. Okay, we've got two years, until 2025; we don't love this news, but it's manageable — two years is enough time to plan and to work out what we are going to do.

And then a few months later we upgraded the OS on some of our equipment and the provisioning broke. Bang! What do we do? It wasn't getting fixed — it was end of life, so it wasn't getting fixed — so what do we do?

Our network team were getting by with Ansible playbooks and YAML files and Git, stuff like that, with a bit of continuous integration. It works okay, but no one wants to be a senior YAML engineer, right; it gets tedious after a while.

So we looked at this and we went: okay, the choice is implementing our own solution or going out and buying again. Implementing our own solution is hard because software development is hard — it's really difficult — and especially so because it's not your organisation's primary expertise. We are an ISP; we have software developers, great software developers, on staff, but it's not what we are known for. And it's not just a one-and-done thing: you are doing more of this every time you get a new piece of equipment, every time you get a new vendor, anything like that.

On the other hand, if you are using vendor tools, there's very little out there that's cross-platform, and we have always had this cross-platform ethos; and it is a treadmill, as we just found out — you are on someone else's timescales for upgrading and replacing and buying new software.

And when I say buying, I mean money. The quotes we got for our, you know, decent sized but not huge network went anywhere from I think 100,000 to well over half a million.

And you still have to spend a ton of time building software anyway.

So, we kind of looked at this choice and went well building software can't be that hard, can it?

Let's go climb a mountain. I am Anna Wilson and this is HEAnet's automation journey.

Something really important here is that we were not starting from scratch. We needed a decent orchestration system; we needed something that would let us manage all the parts of provisioning, deprovisioning and inventory in a coordinated way.

And last year at TNC, the big NREN conference, we found there's a really good open source project we could start with, called Workflow Orchestrator, which SURF and ESnet and GÉANT have developed. I will get to the details in a moment, but the most important thing is that we still had to spend time working out our own products in this sort of language and wiring up our existing tools, things like Ansible and NetBox; even though there's a place to start, it's still a ton of work.

That's the first problem. Remember I said there were two problems to solve.

That's the first one. The second problem is that the unit of currency in HEAnet is the project. That's how you measure time, it's how you measure success, and it's how you get resources. This is in our DNA. Our whole history is in carefully deploying precisely specified infrastructure to a timeline, with static, inflexible amounts of money; it's what we are good at, what we have been rewarded for, and what we are expected to do. How do you manage a software project in this sort of environment?

Well, to get there, let me tell you about the thing we are trying to build here, right. Workflow Orchestrator has three big concepts to wrap our heads around, and that's the first one: products are things that you manipulate. They can be nodes or ports or point-to-point circuits; they can even be LIR assignments. When you have a product and I want to create an instance of it, you get back a thing called a subscription, so let's do that. You go to create one of those, and you get a nice form where you fill in the details needed. This isn't hand-coded; it's based on how you defined your product in the first place.

So this is doing things like pulling in lists of valid ports from our inventory management system, stuff like that. In this case we are using NetBox, so each one of these fields is fetching a list of available ports — not all ports, but available ports — from NetBox, which is a single source of truth, and then populating these drop-downs.

So later on we are going to send this data back to NetBox to say: right, these are the ports we are using now, please don't make them available in future, we are creating a pseudowire — but in the meantime we are pulling the data from it as well. And this is really, to my mind, the big shift away from manually editing variables in text files or YAML files or device configs; this is how we move away from the senior YAML engineer thing, and the reason we want to do that isn't just that it's tedious, it's also error-prone, and this removes a lot of that.
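
As a rough sketch of that single-source-of-truth lookup, using the pynetbox client (the NetBox URL, token and the exact filter that expresses "available" are assumptions for illustration):

```python
import pynetbox

nb = pynetbox.api("https://netbox.example.net", token="YOUR-API-TOKEN")

def available_ports(device_name: str) -> list[str]:
    """Port names on `device_name` that the form should offer in its drop-down."""
    # 'cabled=False' stands in for whatever filter your NetBox data model uses
    # to mean "free for a new service".
    return [iface.name for iface in nb.dcim.interfaces.filter(device=device_name, cabled=False)]

# The form offers these choices; after provisioning, the workflow writes the
# chosen ports back so NetBox stops offering them to the next run.
print(available_ports("edge-router-1"))
```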

Then, once your form has all the information it needs, you press a button and it kicks off the workflow. How does this thing make changes? Well, Workflow Orchestrator doesn't talk directly to your routers; whether you are using NSO or Ansible or something else, you will write a step for every action that needs to happen, in a given order, and when they work you get these nice green ticks to show they have succeeded. If something goes wrong with a step, that's okay, nobody panic. Let's say your PowerDNS API isn't available, or the zone doesn't exist yet, something like that. Just like an engineer working manually, you are working your way through these steps, you get to something that doesn't work, you stop. In this case the orchestrator is working through the steps and then stops; you go, you troubleshoot, you get the problem fixed, and you come back and resume from the same step where you left off once the problem is solved.

What's a step? What's a workflow? They are just Python. That's a workflow at the bottom: it's a nicely decorated function which is calling a whole bunch of other functions. What's a step? Again, it's a Python function with another decorator on it, and it does one thing — in this case it's a little bit of code that is calling out to an external service using a library. Its input is the state of the subscription, and its output is the state of the subscription with the changes you just made, and that's that. They are all quite small, bite-sized pieces of code we can manage individually. And that means most of what we are doing is designing products so they fit well with the overall set-up, and writing these workflows.
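
For a flavour of what that looks like, here is a minimal sketch in the style of the open-source Workflow Orchestrator: a step is a small decorated function that takes the subscription state and returns the updated state, and a workflow chains steps together. Treat the imports, decorator names and step contents as assumptions for illustration, not a verified API reference.

```python
from orchestrator.types import State
from orchestrator.workflow import begin, done, step, workflow

@step("Reserve ports in NetBox")
def reserve_ports(subscription: State) -> State:
    # Call out to the inventory system, then record what changed in the state.
    subscription["port_ids"] = ["et-0/0/1", "et-0/0/2"]  # placeholder result
    return subscription

@step("Push pseudowire config")
def push_config(subscription: State) -> State:
    # e.g. kick off an Ansible playbook here; if this raises, the workflow
    # pauses at this step and can be resumed once the problem is fixed.
    subscription["configured"] = True
    return subscription

@workflow("Create point-to-point service")
def create_point_to_point():
    return begin >> reserve_ports >> push_config >> done
```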

How do you keep track of that?

Well, in our case, by making a lot of mistakes. We went in on this with an agile-ish sort of approach and it worked pretty well for us. We set up a bunch of smart things at the start. This is one of our issue boards — it's our issue board, look at me — we just keep a big pile of issues and they go from left to right across the screen as we get them done.

And what we found was that, because we had put time in at the start — we already use GitLab in HEAnet as our Git repository server — by just using the tools that were available there, rather than going "oh, it's a project, so we use the existing project management tools", we went: no, we are doing software, we can use the same tools developers elsewhere do. Instead of having special work items in their own project, we have issues in GitLab; they are connected to merge requests and connected to commits, and that ties in very nicely with what developers are doing at their keyboards. The result was that a non-developer could look at our issue board and see the actual live state of the project — not a snapshot from a week and a half ago when the project manager last made their summary, but what's actually going on right now.

You need some context in order to understand the nouns, but you are seeing what was live, what was done and what had been merged in. So we did that, and we put a bunch of effort into things like getting Docker working so we could all work on our own independent systems. I felt I was out on a limb with that one — it was not an obvious thing for us to do, the way we usually work — but it paid off; we'll come back to that later.

And all that meant that on a long scale, we have really, really good visibility of what we are trying to do.

And on a short scale, we have enough detail to prioritise properly, and if we are going the wrong direction, we are probably going to spot it pretty quickly, because all of us on the team can keep the current state of the project in our heads with these tools; it's very straightforward to course-correct.

Until we look outside our bubble.

Software development is unpredictable, in both directions, sometimes it goes faster than you expect, very often slower than you expect.

We realised — I can give you an example. This project in HEAnet has been running since last October, and we were originally planning to deliver something in April. Around March we looked at it and went: okay, we are not there; we are not going to get this thing delivered the way we originally hoped for, we are not going to reach this peak.

What about this one? We can't get you everything in the project initiation document, not by April, not in time. But we can get something deployed — because of the way we got things set up, it's very straightforward for us to just deploy a thing — and it's something that the team can start using and we can start getting feedback on.

But the unit of currency in HEAnet is the project; that's what gets you resources. When you finish the project, there is a very good chance those resources go away.

What do I really mean when I say this is in our DNA? We specify what we want and we hold the supplier to precisely that; cut my arm off and you'll find "delivered to spec" written inside. It's what we do, it's what we are good at. If anyone here is involved in public procurement, you will know you have two tools to effect change: you have the spec at the beginning, and you have project sign-off at the end, where the supplier gets paid. And that's it. That's all you have got. So you have to make sure that whatever you are trying to do, you can make changes with those two tools.

Some years back, a couple of meetings ago, I spoke at a RIPE plenary about disruption theory and the idea that the very things that make organisations successful can also, in some circumstances, be a weakness.

And in this case it means our organisational incentives were working against us delivering in stages, and towards a big bang, which increases risk.

And another thing we see is that this focus on deliverable features subtly discourages spending time on fundamentals. An example here is that we spent the first few months of this project just getting our heads around it and understanding the concepts; it was a lot to take in, we really had to learn how to work, and we had to set up how we were going to work.

Our team on this project is one developer with pretty decent network experience, a few network engineers and former network engineers with varying degrees of Python experience, and a project manager who knows networks inside out but hadn't managed a software project before.

And the first thing we did was go to Utrecht for three days and spend time with SURF and GÉANT, and that gave us an enormous head start, it was a huge boost. And then we spent a lot of time on the fundamentals. Somebody had already set up a server with an instance of the software — I have sworn off running servers these days, I am sick of them, I don't want to have to manage them any more; I just want everything I run to come from immutable images that get deployed from version control.

So we spent some time kind of finding our rhythm; some of us weren't familiar with GitLab, branches, merge requests, continuous integration and continuous deployment, and that takes a bit of setting up.

Docker was really important. Again, it took time for everybody to get used to it but it meant that what we had running on our laptops was extremely close to what was going to be running in production.

And at the same time as we were learning the software and its concepts, we were developing our picture of how we wanted to adapt this to our environment. From the outside, none of this was legible; it looked like not a lot was happening. Over the next few months, we had what looked like this massive burst of activity from January to April: we had our fundamentals in place and we could work on features. We knew there were a lot of unknown unknowns lying around that we hadn't found yet; we knew we were making mistakes we would uncover later, and that's okay — it's an agile-ish sort of thing, we are making quick deployments and re-evaluating our priorities — but that wasn't giving the organisation what we had all been trained to work with. When we learned we weren't going to be commissioning this thing in April, it meant we didn't work on loading in live data. Like, why would we?

Why would we bother if the data and platform are going to change?

So the project stretched on into May and the summer and into the autumn, and it's only in the last few weeks that we have started loading in the live data, and it has been super illuminating. It's very counter-intuitive, and I wouldn't have been able to justify it back in April, but it probably would have saved us time overall — we would be further ahead now than we are if we had done this earlier — precisely because it uncovered the unknown unknowns and made concrete some of the abstract decisions we had been working with. With hindsight it's obvious. In the moment we were thinking: what's the fastest next thing to do? It wasn't this. Like I said, our unit of currency is the project. I don't know what your organisation's unit of currency is — it's probably something very different — but I know for a fact your organisation has one.
And if you are having trouble effecting change, there's a very good chance this is why.

So it's taken us a year. What did we get out of that? It's clear to me now, in a way that it wasn't before, that this isn't just about wiring our tools together and automating the network.
This is about encoding our business model in a tool. That's a big deal. That's a lot when you put it like that. An issue we are dealing with now is that our network team's picture of what a client is, is very subtly different from our finance team's picture of what a client is.

It's a thing that hasn't really mattered before, probably doesn't really matter day to day, but when two clients merge or if a client changes name, and believe me I know how important it is to get name changes right, suddenly these departments need to work in concert and need to co‑ordinate in a way they didn't have to before. Suddenly that's within scope.

And this is what we find as we climb this mountain: it ends up rewarding not just the network but the whole organisation. We are touching the network, the service desk, finance; we are finding out in some ways what we know, because we are coordinating our day-to-day data between different departments and learning skills as an organisation — not just how to understand and implement a network, not even just how to understand and implement the business faster and more consistently, but fundamentally better, with a better understanding of what's actually going on.

And we are finding out things about the way that we do things. And about what makes things legible across the organisation. If you can work out your unit of currency, it's worth spending the time because you can do that too.

Thank you very much indeed.

(APPLAUSE.)

JAN ZORZ: You certainly set a new standard for the presentations and I think we will have to update the upload size limit.

ANNA WILSON: 230 meg. The limit is 50!

JAN ZORZ: Brilliant!

ANNA WILSON: Thanks at the back, I really appreciate your help.

JAN ZORZ: Any questions? The mics are open. Okay.

AUDIENCE SPEAKER: Fiona from Webcom. I agree that automating the network is less about the network and more about the organisation and how the whole organisation works. Have you run into any trouble with other departments, and did people like the new system? What was the social interaction with the other teams like?

ANNA WILSON: Ah, what's it like interacting with other teams? This project is kind of made of other teams. I mean, I work in a different team to most of the other people in the group, and we knew early on we were going to have to be talking with other teams — I don't think we realised just how many. The culture in our place is that you can always talk this stuff out. Getting changes made can be slow, because we don't want to mess up someone's day, we don't want to mess up how they work, but we can always talk this stuff out. But it is tough, and the tough part is identifying — and sometimes you don't identify until quite late — where all your dependencies are.



AUDIENCE SPEAKER: Hello, Alexi. I am currently the automation evangelist in my company, and what I have learned from my side is that many people focus first on the tool to use and don't start from the beginning. You say you don't want to become a YAML engineer, but in the end the first thing you must think about is which contract you would like to have with your company, your team, to describe your business, and from this business model you make the configuration, with as little technical information as possible in the configuration. And in your presentation, it seems you are more focused on the tool than on the model you bring to this tool.
ANNA WILSON: I agree, and the one thing I'd say about that is that it's really hard, especially at the beginning when you don't know what you are doing yet. So much of this for us has been about learning as we go, and having to go back and get out of completion bias — in aviation they call it completion bias: "no, I need to get this thing on the ground" — and take that step back and go: actually, what's best for the organisation right now? And to take that time. That can be tough. Thank you.

AUDIENCE SPEAKER: Great presentation, thank you. I was particularly fascinated by your rediscovery that even the best laid software system never survives the encounter with real data. I want to know a bit more about that insight from you: would you recommend really starting with a snapshot of real data, very, very early on?
ANNA WILSON: That's a great question.

AUDIENCE SPEAKER: You deserved that!

ANNA WILSON: It's a bit of a cop-out answer, but the way I would put it, my advice to my younger self would be: do it sooner than you expect to need to. There's only so early you can do it, because you don't yet have the tools to process it, but once you have something in place that can plausibly take real data, find a way to get it in there. It's not as easy as it sounds, because we are not working in the abstract: it's not just loading stuff into a database, you are touching equipment, and if you are touching live equipment that's extremely scary and change control boards don't like you doing that. That's part of the reason — and I didn't get into why we made decisions that, standing up here, probably sound a bit obvious — that in practice you go: I am going to wait, because I have other things I can get on with before we get to that.

But the earlier you can do those, I found the more you get out of them.

AUDIENCE SPEAKER: Would it be beneficial to actually take the time to write sort of a simulated environment for the equipment, or is that one step too far, just from your experience?

ANNA WILSON: That's a good question. If you have the time, then it's worth taking the time. The way we ended up going about it was: we had that big burst of productivity, of getting things done, and we thought it was going to be better to get those out there where people could see them as fast as we could, rather than very slowly validate them, which is what this would have involved. I don't regret that, but both are valid ways of doing it. The risk is that if you spend time validating, it still might not match the real world — there might be something you missed in your validation — and that's the balance you have to try and handle.

AUDIENCE SPEAKER: Thank you.

JAN ZORZ: In the interests of time we are cutting the queue now, you are the last person.

AUDIENCE SPEAKER: Thank you very much, and welcome to the world of software development — that's how we live there. My question would be: what is the advice to your previous self that would be the most beneficial, from the point of view of where you are standing right now?

ANNA WILSON: There are places I could go with that; I don't think I will right now.

That's a great question. I would go back to legibility to the rest of the organisation. It's worth making a project group work the way we want to, but the more of that we can expose outside our bubble, the easier it becomes.

We will only get so far with that. In an organisation that is used to working in long-term, fixed-length projects, there's only so much you can do when it comes across as a strange, agile "I am not telling you what I'm going to deliver, just trust me" point of view — because that's how it's read, and it's not an entirely unreasonable criticism. But the more you can bring people in and really get across what we are trying to do, the more leeway we have and the more room we have to get better direction. A lot of what we were doing here was responding to what the organisation wanted, how we run projects, and, looking back, I'm like: I think the thing you asked for was not what you actually wanted, but I didn't know that at the time and neither did you. That's the kind of thing I try and get out there.



AUDIENCE SPEAKER: Thank you.

ANNA WILSON: Thanks so much.

(APPLAUSE.)


JAN ZORZ: Lightning talks.

CLARA WADE: Yeah, the fun part. So we have Johann, who has a very interesting interactive talk. Take it away.

JOHANN SCHLAMP: Hi, I am Johann, and I am here to raise awareness of a nasty little creature living in our network that most of you probably haven't heard of.

So, for the next couple of minutes, we are going to embark on an exciting simulated hunt in a hidden maze of caverns and twisting tunnels; we are going to seek out the lair of the Wumpus while avoiding perils along the way. That slogan is from a computer game that's 50 years old, which most of you probably haven't heard of either.

It was written by Gregory Yob; it's an interactive text adventure where you move through a cave of interconnected rooms, and you may shoot arrows that possibly go around corners, and you have to hunt down the Wumpus. That is your goal.

Or you may die facing dangerous hazards like bottomless pits — although actually I don't know how you die in a bottomless pit, but that's a different story — we have super bats that might relocate you from one room to another, and the Wumpus might kill you as well. We can play that game, and the interesting fact is that it takes place on a non-grid-like map: we have 20 rooms that are placed on the vertices of a dodecahedron. So what is that? It's one of the five Platonic solids, meaning it consists of 12 pentagons put together into a three-dimensional shape, and we have 20 vertices, which are rooms, and 30 tunnels between those vertices, and you can project it onto a two-dimensional plane using a Schlegel diagram, given on the right-hand side.

So, the original game was written in BASIC with roughly 200 lines of code, and it was published in a computer magazine back then, meaning we have access to the source code.

And we thought about reviving that game using a new front-end, namely standard traceroute on your very own console. For that, we modelled commands using forward DNS and pre-registered the complete game output in reverse DNS, and the official release of that game is now.

So how can you play that game? Using traceroute: we can do a traceroute to wumpus.quest and you get the entrance screen for the game. You won't have colours on the terminal, but the monochrome version plays just as well. What do we have to know about that game that's interesting? We have a screen of roughly 63 by 20 characters: 63 because no single part of a DNS name between two dots can be longer than 63 characters, and 20 because that leaves enough room for 12 hops in the default settings to get to the Wumpus network.

You can trim that screen if you don't like it on your console, for instance by using -f to increase the starting TTL, and if your traceroute sends multiple packets per hop, it is recommended to stabilise it with only one packet per hop, but it will work in any case.

There's a lot of interesting stuff to tell you here, from how we handle incoming packets on the Wumpus prefix, to tweaking our hash function in the equal-cost multi-path routing to fiddle with outgoing packets. Then we have ICMP rate limiting in the internet core, which is why I am sending a lot of ICMP packets for your requests — more than you requested, more than you can handle probably; that's the only way of getting rid of the artefacts of ICMP rate limiting.

So how can we play? We can send comments to the game engine by using sub domains, you can spend a traceroute to help and you can getting the help text. That way all those fancy texts and additional white spaces and colloquial language is not invented by me, so it's by the character the same as 50 years ago.

And we can send commands and play the game. For instance, start: play.wumpus.quest. We start in room 16 — it's a random location — and we are going to navigate the Wumpus lair.

We move to room 20, and we see super bats nearby. We always get a glimpse into neighbouring rooms through those tunnels, and we have to be aware: we don't want to meet one of those super bats. Let's go further down the tunnels, maybe to room 19. Now we smell a Wumpus; we don't want to bump into a Wumpus either. So let's shoot it from afar: we can shoot into room 18, and in this case, we got it.

So it's a thrilling cat and mouse game, it's far more complex than this because the Wumpus is moving and you have to get it.
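
For anyone who wants to script the hunt rather than type traceroutes by hand, here is a hedged sketch that just shells out to the system traceroute; the exact command-subdomain grammar under wumpus.quest is my assumption from the examples shown, and the -q 1 option follows the speaker's advice to send one packet per hop for a stable screen.

```python
import subprocess

def wumpus(command: str = "") -> None:
    """Issue one game command as an IPv6 traceroute and print the rendered screen."""
    target = f"{command}.wumpus.quest" if command else "wumpus.quest"
    result = subprocess.run(
        ["traceroute", "-6", "-q", "1", target],  # add -f N to trim the first hops
        capture_output=True, text=True,
    )
    print(result.stdout)

wumpus()        # entrance screen
wumpus("help")  # help text, character for character the 1970s original
```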

To assist you in finding the Wumpus, I added a hidden feature that wasn't part of the game back then: there's also a map of the Wumpus lair that you can access by tracerouting map.wumpus.quest, and you can bring it up at any time. And specifically for RIPE 89, I implemented a high-score list, which is a bit funny to be honest: we have that static output for the first eight hops, and then we have the top ten players that killed the Wumpus the fastest.

And to get those top ten players into your traceroute output, I am spoofing ICMP TTL-exceeded packets with your own IP address, and the DNS will resolve because it's your local machine. So you find yourself, if you are fast enough, inside the traceroute output that you issued from your machine.

So some of you might ask: why did we need to fork a 50-year-old game? Is there anything to learn from it? And the answer is yes. It's used in teaching — in academia for undergraduate projects and for PhD courses — but there's more to it, so if anybody is interested... One minute. If anyone is interested, there's a Dockerised version of it and you can run that in your organisation. But the most important part is that this game cannot be played on IPv4, because I had to use 10,000 IP addresses.



(APPLAUSE.)

Also, it doesn't work behind NAT; there's no possible way to distinguish between different traceroutes behind that. So, above all, this is a plea for using IPv6. Thank you.

(APPLAUSE.)


JAN ZORZ: Thank you, maybe there's time for one question. No? Yes? One. Okay. Please. Be fast.

AUDIENCE SPEAKER: Hello. Gus Caplain. Is the source code for this available somewhere?

JOHANN SCHLAMP: Since it's used in teaching, it's a bit problematic, because the solution cannot be provided — otherwise students won't... But the source code for the teaching project will be made public soon, yes.

JAN ZORZ: I have the sticker, "no IPv4" so you can have it.

(APPLAUSE.)



JAN ZORZ: Clara, who is next?

CLARA WADE: Next up is Andrew from ICANN; we have got an update on .internal.

ANDREW MCCONACHIE: Yes, I don't know if I can follow a game based on traceroute — that was pretty cool — but I am going to be talking about an exciting new TLD that doesn't exist and will never actually exist.

.internal has never existed in the DNS and never will, and that's the lightning bolt for this lightning talk. If you remember one thing, just remember that.

And that's it. I am done!

I don't have to do anything else.

(APPLAUSE.)



Just kidding. Sorry. Sorry.

So, it's memorialised in an ICANN board resolution, and this came about after a lot of community work — years of community work — to get to this point, to this resolution. The ICANN board has promised it will never appear in the DNS root zone, and I am here as part of that second sentence there: outreach and raising awareness.

So the purpose of .internal is really to provide a designated place for DNS private-use names. There have been a lot of different names used for this purpose over the years — everything from .lan onwards, there are so many — and the purpose of .internal is: let's try to get everybody to use the same one. Maybe it will work, maybe it won't; it's there now.

You can think of it generally as analogous to RFC 1918; it doesn't map perfectly, but that's the way we are thinking about it and how it has been talked about. It's similar, but it's DNS: it's not an IP address, it's not an AS number.

Okay. So one of the documents that led to this was SAC113, which came out of the ICANN Security and Stability Advisory Committee, and that's what eventually led to the resolution. Their advice, their kind of best practice, is that before you think about doing this, you should probably think first about just registering a normal domain and using a subdomain of that; if that doesn't work for your specific circumstances, then you could consider using .internal.

Because it will never appear in the DNS root zone, it doesn't really require any special handling; it will always fail. Any name under .internal that makes it up to the root will fail, and for any DNSSEC-validating resolver, if it's configured with the right KSK and whatnot, that failure is validated.
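A quick way to see that behaviour, only as a sketch and under assumptions added here (the Python dnspython package, a resolver that forwards to the public DNS, and a made-up example name), is to query a name under .internal and observe the NXDOMAIN:

    # Sketch: a .internal name that escapes to the public DNS gets NXDOMAIN,
    # because the TLD is not (and will never be) delegated in the root zone.
    import dns.resolver

    try:
        dns.resolver.resolve("printer.corp.internal", "A")  # hypothetical name
    except dns.resolver.NXDOMAIN:
        print("NXDOMAIN: .internal does not exist in the public DNS")

Inside an organisation, such names only work if its own resolvers answer for .internal; anything that leaks past them fails like this.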

So, some example use cases; this is just a short list. There are probably many more, and it's really going to be context-dependent for enterprises or organisations that want to make use of this.

And there's no good advice here that I could give, right, it really depends upon your specific circumstances.

This slide just has a short comparison to other similar TLDs, because we could say there are a few animals in the zoo and this is the bestiary: .internal is used in local contexts, which separates it from .alt, which is intended for non-DNS contexts, not using the DNS protocol, a completely separate protocol, and the guidance from the IETF is to use .alt for stuff like that; home.arpa is reserved for the home networking control protocol, and .local is reserved for multicast DNS.

And that's really the end.

That's actually the end this time. It's time for some questions.

JAN ZORZ: We have time for one question and there was a question in the question and answer that you already answered so.

ANDREW MCCONACHIE: Okay, cool.

AUDIENCE SPEAKER: I have only one question. The question is: how can we stop large ISPs from resolving this with NXDOMAIN rewriting? Large ISPs tend to monetise errors in URLs by responding to a non-existing name with their internal search engine and getting some advertising displayed to make money. And if you are using such an internal domain, for instance for a local company network, a small company, and using a VPN to come in, for instance the Microsoft VPN client, then you have the problem that you have two interfaces on that client, and the operating system is smart enough to use both for the resolution and take the first answer it gets back, which is usually the one from the ISP, not the one through the tunnel. That's why, if the provider is doing NXDOMAIN rewriting, you can't access your local network even if you have a VPN to it.

That's why the usual best operational practice for companies is to use their own domain name, adding an internal label under that domain, use that, and prevent the ISP from resolving it. How can we do this with .internal? Can we have something from ICANN's side ‑‑ I would suggest ICANN, for instance, could add some fines.

ANDREW MCCONACHIE: I don't know about that, but you do highlight an important point: there's no way to stop these names from leaking. If you are going to deploy it, you will have leakage. Assume you are going to set this up in your own environment, your own enterprise or whatever; you are going to have all of your clients pointed at some resolver which will handle .internal names specially, but as you say, you can't prevent someone taking a laptop home, using a different resolver, and that internal name getting to the ISP resolver. Yeah, that will happen; that's something you have to consider in your designs.

JAN ZORZ: Thank you. Anything happening in the chat? No, okay. Clara, what's next?

CLARA WADE:: Next we have got Ionnas Arakas with end of life software fun.

IONNAS ARAKAS: Hi, my name is Ionnas Arakas and today I will present "The Last of the Apaches", investigating the state of internet-facing end of life software. This work has been done by me, with my colleagues and my professors. So, what is end of life, and why does it matter? First of all, we call it EoL, and this is a software programme, or a version of a programme, that is not supported any more by the software vendor, for example Windows XP, Windows 7, Apache 2.0 and more. And why does it matter? Like any software, it has vulnerabilities, and those vulnerabilities will be exploited. And because the software is not supported any more, those vulnerabilities will never be patched. So let's start with some research questions we would like to answer today. Are servers hosting EoL software? If yes, how many IP addresses are hosting EoL software? Is that EoL software dangerous? Will it cause problems? And what are the factors ‑‑ sorry, where are these hosts located, and finally, what are the factors that contribute to the high percentage of EoL programmes?

Okay. Let's start with our methodology. We use Censys, which frequently scans the entire address range to find open ports and applications. The data could also be provided through Aquarius or Hunter's API, but Censys provides a better system and more consistent results, so we chose Censys. We also use endoflife.date; it's a website that has information for the software that we want, it covers over a hundred software products, operating systems and even some devices, and as you can see, here we have Apache, and we can see some versions of Apache, some of them no longer supported, and all the versions that we are going to use later.

Moving on, here you can see the list of 69 products that we chose to run our analysis on. We use those software packages because they are publicly visible, meaning Censys can scan the IPs and find those tools. Here you can see the latest version of those tools, and here is the EoL date of this latest version; if the date is unknown, it's because the vendor has not published when the software expires.

And then we have the EoL version and the date that this version actually expired.

We also chose those tools because they are available on endoflife.date, and finally because they cover a huge range of functionalities from HTTP servers to reverse proxies and databases and more.
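As a rough illustration of that lookup, and only as a sketch under assumptions added here (the public JSON API at endoflife.date/api/<product>.json, the Python requests package, and a hypothetical helper called is_eol), a version observed in a scan could be matched against its release cycle like this:

    # Sketch: decide whether an observed version of a product is past end of life,
    # using the endoflife.date JSON API (assumed shape: a list of release cycles
    # with "cycle" and "eol" fields, where "eol" is a date string or a boolean).
    import datetime
    import requests

    def is_eol(product: str, seen_version: str) -> bool:
        url = f"https://endoflife.date/api/{product}.json"
        cycles = requests.get(url, timeout=10).json()
        for c in cycles:
            if seen_version.startswith(str(c["cycle"])):   # crude cycle match
                eol = c["eol"]
                if isinstance(eol, bool):
                    return eol
                return datetime.date.fromisoformat(eol) < datetime.date.today()
        return False   # unknown cycle: treat as not (yet) EoL

    print(is_eol("apache", "2.4.41"))   # hypothetical observed version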

So let's start answering our questions. Are servers hosting EoL software? The answer is yes: out of the 46 million IP addresses that we scanned and that contained one of the selected products, 9.6 million IP addresses are hosting at least one EoL programme or software version.

Next question: do the servers host EoL software? Well, again the answer is yes: 4.2 million IP addresses out there are hosting a version that has expired, 2.4 million are using an end of life version of PHP, etc.

Next question: are these EoL applications dangerous? Well, they actually are. The most interesting result for me here is Apache: out of the 660,000 IPs that we saw here, all of them have at least one CVE with a very high score. We also found that one in four instances out there that are EoL contain a CVE with a very high score as well. And finally, 62% of OpenSSL ‑‑ 62% of the OpenSSL IPs that are out there contain a CVE.

Moving on, we can see the distribution of the EoL hosts out there; most of them are located in the United States, followed by China, then Europe and then the rest of the world. This doesn't mean that the United States and China are in a worse state, it just means that they have more EoL IPs; if you look at the overall percentage, they are actually doing a bit better than the rest of the world.

Moving on. What are the factors that contribute to the high percentage of EoL programmes? There are a lot of factors, but a very interesting one is default packages. Here we can see a table with famous operating systems, some versions of them, and their downloads that are publicly available. We can see here that there are over a million IP addresses that have Ubuntu 20; all of these operating systems are using OpenSSL, all of them come with a default version of OpenSSL, and all of those versions over here are expired, and as you saw earlier, this means a lot of CVEs. It is worth mentioning that, in the case of OpenSSL, the famous companies behind these OSes are doing backporting, meaning they are providing updates for CVEs, but it's not the best solution, I would say.

So, for example, Ubuntu 20, which is made by Canonical, has backporting to fix severe CVEs.
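To make that distinction concrete, here is a small sketch, only under assumptions added here (a Unix-like host with the openssl binary on the path, and an illustrative, not exhaustive, list of upstream-EoL branches), of spotting a distribution-default OpenSSL whose upstream branch is past end of life even though the distribution may still backport fixes:

    # Sketch: flag a default OpenSSL whose upstream branch is end of life.
    import subprocess

    EOL_BRANCHES = {"1.0.2", "1.1.0", "1.1.1"}   # upstream-EoL branches (illustrative)

    out = subprocess.run(["openssl", "version"], capture_output=True, text=True).stdout
    version = out.split()[1]   # output looks like "OpenSSL 1.1.1f  31 Mar 2020"
    if any(version.startswith(branch) for branch in EOL_BRANCHES):
        print(f"{version}: upstream branch is EoL (distro may still backport fixes)")
    else:
        print(f"{version}: upstream branch still supported")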

So, conclusions: more than 9 million IP addresses are running EoL software; all of the Apache instances that are EoL have a CVE with a high score; almost 800,000 instances are EoL and vulnerable with a high base score; and pre-installed software is being shipped with some components that are EoL. So, some recommendations: update your software, don't use pre-installed versions of a tool, always update it, and use only supported programmes for better patching. Thank you.


(APPLAUSE.)


JAN ZORZ: Cutting the queue right now, thank you.

AUDIENCE SPEAKER: I think that you give terrible advice in your last slide, instructing people not to use distribution packages. This is a terrible idea, because then people manually install their own software.

IONNAS ARAKAS: I meant to use the latest version.

AUDIENCE SPEAKER: It means either installing a new OS or manually installing the software, and if people manually install software, most of the time they will not update it. Anyway, I also want to provide some perspective as an operator, as a web hosting company who have a significant number of servers with PHP 5, or even older, at this time, because our customers give us money for that, and they are the most secure servers. These websites never get compromised. I do not actually remember seeing any customer server, well, very rarely do I remember seeing a customer server being compromised due to obsolete software, and when it happens, it happens immediately; after some time it just does not happen any more. We have countless servers with old OpenSSL and it's not a problem in real life. Having said that, yes, people should update their software, but most of the time obsolete and unsupported software does not cause practical problems, thank you.

IONNAS ARAKAS: Okay.

AUDIENCE SPEAKER: Sorry, the queue was cut.

AUDIENCE SPEAKER: This is Peter Thomasen. Your presentation is apparently based on that table at endoflife.date, which reflects the end of life statements at the relevant project websites, and then you also pointed out that Ubuntu has LTS for longer times and you don't recommend that. Why do you not recommend that? And the second question is: were you able to discern in your measurements how many of those deployments that you observed are actually on LTS operating systems and how many are not? I think it makes a big difference.

IONNAS ARAKAS: Can you repeat it louder?

AUDIENCE SPEAKER: The question essentially is: of the deployments that are not supported, as you found, can you distinguish how many are on Ubuntu long-term support and how many are not?

IONNAS ARAKAS: Almost all of them are on long-term support, because I think that is what people use; like, I am always using Ubuntu 20.04, not .9 or .0. But yeah, not all of them have OpenSSL 1.1; some of them have other versions of OpenSSL.

AUDIENCE SPEAKER: Can you distinguish how many use Ubuntu 18? Because that's not supported.

IONNAS ARAKAS: Ubuntu 18 versus Ubuntu 20.

AUDIENCE SPEAKER: Five years of support is over...

IONNAS ARAKAS: I have the number right here. Ubuntu ‑‑

AUDIENCE SPEAKER: Cool. Why do you not recommend using LTS versions?

IONNAS ARAKAS: I am not saying that.

AUDIENCE SPEAKER: You said orally that people should not use Ubuntu 20.04.

IONNAS ARAKAS: I am saying: be careful of the programmes that come by default on your computer, because some versions of those tools are end of life. Ubuntu 20.04 is still supported, but the OpenSSL version is not. Canonical tries to maintain OpenSSL 1.1.1f, but the OpenSSL project does not. As long as you are doing updates on Ubuntu, you are probably safe; if those start being discovered...

JAN ZORZ: Okay. So we have no more questions. Thank you everybody. Please rate the talks. Do we have anything else, Clara?

CLARA WADE:: Just a reminder that after the break we'll have the MAT Working Group in this room and then the Security Working Group will be in the side room.

JAN ZORZ: Rate the talks, consider running for the Programme Committee and consider volunteering for NomCom, thank you.

(APPLAUSE.)



(Coffee break)