RIPE 89
31 October 2024
Main hall
Database Working Group
PETER HESSLER: Welcome everybody to the Database Working Group session. In case you are unsure, this is the main room. We are Database. If you want to go to Open Source, that is next door. If you just need to find a place to have your laptop with power so you can play Fortnite, there are other locations around. We'd be delighted if you pay attention to us and interact with the presentations. We'll be starting in just a few minutes.
Welcome everybody, this is the Database Working Group. We are starting our session today. I'd like to welcome everybody here in the room. My name is Peter Hessler, I am one of the co‑chairs for the Database Working Group. Unfortunately our other two co‑chairs are unable to make it to this meeting, and we hope to see them again at the next one.
I want to say thank you to the NCC staff for helping with scribing and the chat and Q&A. Of course thank you very much to the stenographers. I would like to remind people that when you are speaking, to keep your pace reasonable, so the stenographers can understand and transcribe all the words that you are saying.
STENOGRAPHER: Thank you for that!
PETER HESSLER: Before the presentations, I am quickly going to go over the agenda. First we are going to have an update from Ed from the NCC. Then we are going to have a presentation from Lee Kent from beIN. Then an update about UTF‑8, an update about MNT 5, and I will be chairing the session on the NWI reviews. Then, if we have time, and I assume we will be nice and efficient, we will have time for any other business.
And so with that, I'd like to invite Ed up to give us an operational update.
ED SHRYANE: Thank you Peter. And good morning. My name is Ed Shryane. I work as a product owner at the RIPE NCC in the database team, and I'm presenting this update on behalf of the team, whose hard work has gone into this update over the last six months, since the last RIPE meeting.
So what have we been doing since RIPE 88?
We have had three different Whois releases. The first one was in June, where we added maximum reference validation to updates. Previously you could add as many references as you wanted to an object, and that caused operational issues. It's fixed. We made a second release in July, where we implemented policy proposal 2023‑04, to add the AGGREGATED-BY-LIR status to inetnum. We finished the client certificate authentication implementation. One remaining piece was that we needed to support authenticated lookups in order to update maintainers, so you can now completely use client certificate authentication for updates in the RIPE Database. Finally, we present a warning in the web application and in the REST API if there are conflicting ROAs on your route or route6 object, and we fixed an issue where we didn't take overlaps properly into account.
The final release was in September. To increase the resilience of the RIPE Database we added some features to block queries and updates by an individual IP. According to the acceptable use policy, we can block a misbehaving client, but previously this was quite a cumbersome process, so now it's built into Whois. So we can respond more quickly if there is a denial of service incident.
Similarly for updates, we have now implemented an automatic denial of service filter. So if you send updates too quickly we will temporarily block you and return a 429 response. The limit is relatively high, so normally nobody should be blocked, but this is, again, for operational resilience.
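As a rough illustration of the kind of per-IP rate limiting described here, the sketch below uses a token bucket. The class name, the limits and the bucket approach are all illustrative assumptions, not the RIPE NCC's actual implementation or thresholds.

```python
import time

class UpdateRateLimiter:
    """Toy per-IP token bucket: each client may make `capacity` updates,
    refilled at `rate` tokens per second. Constants are made up for the
    example; the real Whois limits are not published here."""

    def __init__(self, capacity=100, rate=1.0, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.clock = clock
        self.buckets = {}  # ip -> (tokens remaining, last refill time)

    def allow(self, ip):
        now = self.clock()
        tokens, last = self.buckets.get(ip, (self.capacity, now))
        # top the bucket back up in proportion to elapsed time
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[ip] = (tokens, now)
            return 429  # Too Many Requests: temporarily blocked
        self.buckets[ip] = (tokens - 1, now)
        return 200

# with rate=0 the bucket never refills, so the 4th rapid update is rejected
limiter = UpdateRateLimiter(capacity=3, rate=0.0)
codes = [limiter.allow("192.0.2.1") for _ in range(4)]
# codes == [200, 200, 200, 429]
```

A separate IP gets its own bucket, so one misbehaving client does not affect others.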
And finally, since we are planning to phase out passwords next year, and also given that using passwords in mail updates is insecure, we added a warning, so if users are doing this, from now on they will see that in their update response.
We had an issue, an outage, in June. Overnight between 16 and 17 June, and again on 24 June, we had some updates which contained a lot of references to other maintainers, something like 30 or 40,000 maintainers. It caused a denial of service because we validate every one of those maintainers to make sure that any one of them can successfully update the object. And it then also caused a high volume of update notifications to all of those maintainers when the update failed. So, apologies for the outages. It was caused by many updates with lots of references to mnt‑by. We mitigated it in a couple of different ways. We are now able to block by IP address, so we can deal with misbehaving clients more quickly. We are also now validating the maximum number of outgoing references in an object. I think we made it double the maximum number of references in any existing object, so it should not cause any operational issues for users, but we have a sensible limit now, so this should not happen again.
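The reference-count check described above might look roughly like the sketch below. The limit value and function names are hypothetical; the talk only says the real limit is about double the largest reference count in any existing object.

```python
# Hypothetical limit: the NCC chose roughly double the largest reference
# count seen in any existing object; 5000 here is an invented stand-in.
MAX_OUTGOING_REFERENCES = 5000

def count_outgoing_references(rpsl_object: str) -> int:
    """Count attribute values referencing other objects (for simplicity
    only mnt-by, the attribute behind the June incident)."""
    refs = 0
    for line in rpsl_object.splitlines():
        key, _, value = line.partition(":")
        if key.strip() == "mnt-by":
            # a single mnt-by line may list several maintainers
            refs += len(value.replace(",", " ").split())
    return refs

def validate(rpsl_object: str) -> str:
    refs = count_outgoing_references(rpsl_object)
    if refs > MAX_OUTGOING_REFERENCES:
        return f"error: {refs} references exceeds limit of {MAX_OUTGOING_REFERENCES}"
    return "ok"

# an update like the ones that caused the outage: thousands of mnt-by lines
obj = "person: Test Person\n" + "mnt-by: SOME-MNT\n" * 6000
```

Rejecting such an update up front avoids validating, and later notifying, tens of thousands of maintainers.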
And separately, we already have the mnt‑ref feature, so you can require authentication before anyone can reference your organisation, maintainer, person and role objects.
Some statistics on recently introduced features in Whois. We have added the ALLOCATED-ASSIGNED PA status to inetnum. It's for allocations where you can combine your allocation and your assignment in a single object. We now have 352 of those, and 130 of them are /24 allocations.
We have 93,000 abuse‑c addresses, and 17 percent of those are due to be validated. We aim to validate every abuse‑c address yearly, so we're on track to do that by the end of this year.
We now have 9 percent of our LIR organisations who are synchronising their admin users in the portal to the default maintainer. It makes it easier to manage the maintainer in the RIPE Database so you can maintain your users in the portal and they'll be synchronised across automatically as SSO accounts.
We now have 286 maintainer, irt, person and role objects with the mnt‑ref attribute, as I mentioned. We now have 52 inetnums with the AGGREGATED‑BY‑LIR status, since we implemented that earlier this year. That's out of 4 million assigned PA objects, so there is an opportunity here: it's not just for new assignments, you also have an opportunity to aggregate existing objects in the database to make maintenance of your data easier.
And finally, we introduced e‑mail into RDAP entity queries at the request of the Working Group. I had a concern that this would cause an issue for accounting, since we apply accounting to personal data, and e‑mail is considered personal data in entities. But happily only 0.6 percent of these queries are blocked by the daily limit. So it should not be causing an operational issue, and I think it was a good addition to RDAP.
A little bit of operational work to mention. We finally removed all of our old bare-metal servers. That's now fully complete, as part of the cost reduction that the company is doing to reduce the data centre footprint. So now we have a much smaller number of servers, and they are all identical. They are running the same application, it's the same database, and we have enough redundancy within data centres and across data centres.
Secondly, to support the modernising of infrastructure in the company, we are testing use of Kubernetes on premise.
RDAP: Leo Vegoda at the last RIPE meeting requested the NCC to document the differences between Whois and RDAP. I published that to the Working Group in October as the RDAP transparency report, and still RDAP usage is less than 10% of all data queries. It's mainly IRR data that's not included, and that includes fields like import/export in aut‑num objects, route objects and set objects. So the question I'd like to ask is: is the lack of IRR data holding back RDAP adoption, and is this something that the community would like to see? It turns out that George Michaelson did ask this question in an article in 2021. One drawback of supporting IRR would be that RPSL, as defined for IRR, is a complex language, and it's not a good fit for the data model in RDAP. So when RDAP was initially implemented it was left out of scope.
Secondly, aspects of the IRR function were under review for inclusion in the RPKI signed data model. Coincidentally, Job Snijders has a presentation planned for this afternoon at the Routing Working Group, I think it's titled something like "Out with IRR, in with RPKI". So I think it's very relevant to this question as well.
And I'd also like to mention that NLnet Labs has extensive documentation on RPKI, and that includes some reasoning why the Internet Routing Registry is not the answer for securing BGP.
So, we have an intern who is working on missing history in the RIPE Database, this is something I previously raised, but due to project priorities, we haven't been able to tackle it until now. But just to recap.
A small percentage of existing objects are missing some history: 23,000 objects. Also, some deleted object history is missing, which is not an issue for our community at the moment, as it's not visible in Whois, but that may change due to the numbered work item 2 proposal. Generally only the first, oldest version of an object is missing, but for data created between 2001 and 2004 there are gaps.
And just to confirm, the internal resource registry is not affected. That is consistent. It has complete data covering that time. It's just the RIPE Database.
So some of the impacts of this missing history. Version history queries: we now see about 90,000 of those every month. They would be incomplete if you have an object going back to that time period and you happen to be affected by the missing history.
In the Whois code and in the database structures we constantly have to work around these gaps internally; it's something we would like to see fixed to simplify the structure of the database. And it also affects the community history: who was responsible for what and when, which companies were involved. It's part of the community history, and I think it's a good thing to restore it.
And we have some progress to report. I restored 300,000 objects from the initial dump from when we migrated to Whois 3 back in 2001, so that initial import is now mostly intact. The remaining work is restoring the remaining gaps: we're going from September 2001 onwards towards 2004, so we can fill in the gaps incrementally.
Numbered work items. We have progress on one numbered work item, NWI‑12, NRTMv4. We had our first annual key rollover. The first deployment was last year, so we were due to do this. The key rollover was successful. Following the process in the proposal, we notified that the key was going to change, and a week later we rolled over the key. The client that is connecting is IRRd 4, and it successfully switched over to verify signatures using the new key.
We have added some documentation to our documentation website, so, our users can be aware of how we have implemented it and how to find it, if you want to use it.
We discovered earlier this year a race condition between the notification file and the signature: because they are separate HTTP requests, it's possible that the content will change in one or the other in between the requests. We now have a fix in mind: we're going to use JWS to return the signature in the initial response for the notification file. A fix for that is in progress.
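The race and the fix can be illustrated with a toy model. Real NRTMv4 uses JWS with public-key signatures; the HMAC "signature" and the class names below are stand-ins just to show why delivering the signature in the same response as the content removes the race.

```python
import hmac, hashlib, json

KEY = b"demo-key"  # stand-in for the server's signing key

def sign(payload: bytes) -> str:
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

class RacyServer:
    """Racy design: notification file and its signature are fetched in
    two separate HTTP requests; the file can change in between."""
    def __init__(self):
        self.notification = b'{"version": 1}'
    def get_notification(self):
        return self.notification
    def get_signature(self):
        return sign(self.notification)

class AtomicServer(RacyServer):
    """Fixed, JWS-style design: the signature travels in the same
    response as the content it covers, so no race is possible."""
    def get_signed_notification(self):
        payload = self.notification
        return json.dumps({"payload": payload.decode(),
                           "signature": sign(payload)})

racy = RacyServer()
body = racy.get_notification()
racy.notification = b'{"version": 2}'   # content changes between requests
stale = racy.get_signature()            # signs the *new* content
race_detected = stale != sign(body)     # verifying the old body fails

atomic = AtomicServer()
msg = json.loads(atomic.get_signed_notification())
atomic.notification = b'{"version": 2}' # too late: signature already bundled
ok = msg["signature"] == sign(msg["payload"].encode())
```

In the racy design the client ends up holding a body and a signature from two different versions; in the atomic design the pair is always consistent.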
And secondly, the RIPE NCC is also working on a second client implementation to complement the server side. It's going to be a bare‑bones client, and the intention is just to support mirroring of the RIPE Database through this protocol. We already have a client for version 3, so this will fill in the gap for version 4 as well.
So, I expect these two changes will be in the next Whois release.
Okay, upcoming changes:
So, as promised, in Q4 of this year we will finish the RDAP RIR search feature. We have already implemented the first half, which is basic searches. This aligns with what the other RIRs are also implementing.
The second half is to implement relation searches for resources; that will be implemented in Q4 this year. That's the plan. And that will allow RDAP users to do more specific and less specific queries on resources.
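A relation search might be invoked as sketched below. The path layout ("rirSearch1" and the relation names) follows my reading of the RDAP RIR Search extension draft and is an assumption; the endpoints the NCC actually ships may differ.

```python
# Hypothetical sketch of RDAP relation searches for IP resources. The
# "rirSearch1" path segment and relation names are assumptions based on
# the RIR Search draft, not a confirmed RIPE NCC API.
BASE = "https://rdap.db.ripe.net"

RELATIONS = {"up", "down", "top", "bottom"}

def relation_query(prefix: str, relation: str) -> str:
    """Build an RDAP URL asking for resources related to `prefix`.
    'up'/'top' walk towards less specific objects, 'down'/'bottom'
    towards more specific ones."""
    if relation not in RELATIONS:
        raise ValueError(f"unknown relation: {relation}")
    return f"{BASE}/ips/rirSearch1/{relation}/{prefix}"

less_specific = relation_query("192.0.2.0/24", "up")
more_specific = relation_query("192.0.2.0/24", "down")
```

This mirrors the Whois `-l`/`-L` style of less/more specific lookups, but over RDAP.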
And secondly, objects with administrative status are currently not returned. This is where IANA would have delegated space, or partially delegated space, to the RIPE region; currently we return a 404, which is not compliant. We should be returning an object with the administrative status. So we plan to fix that to be compliant with the RDAP spec.
Undeliverable mail. So, a question: can we still send e‑mail to a user after a bounce? We took a rather conservative approach. Currently, Whois stops sending mails completely to an e‑mail address if delivery fails even once. We took the view that a good reputation with the e‑mail services is important. We implemented this in March, ahead of an April deadline set by the large mail providers, who expect their senders to be compliant with their requirements. So, being conservative, we are not sending subsequent mails if we get even one bounce.
So, we notify the user by sending a warning on update responses, but the user has to notice it and contact database support themselves. Over the last six months, the consequence of that is that we now have something like 6,000 e‑mail addresses that we are not sending subsequent mails to. The number is small in relation to the overall number that we have, 900,000 e‑mail addresses, but still, these are e‑mail addresses that we have been sending mail to, and potentially should be sending mail to, and we are not. It turns out that half of these delivery failures are due to soft bounces: either a transient 400-class code, or, in the enhanced status codes, a persistent transient failure. So, we are planning to re‑enable delivery for soft bounces. I haven't found a best practice for this, but our plan is to wait at least 24 hours to give the other end a chance to recover the situation.
We check if the mail is undeliverable again due to a soft bounce, and then mark the address as deliverable again. And if it happens again, if the next e‑mail is also undeliverable, then we double the wait time. So we will back off and send less e‑mail over time as a mail address stays unresponsive. That's the plan. For now we will not re‑enable on permanent failures, because we just don't know the outcome there. If we're told that it's a permanent failure, we should take that as is for now.
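The doubling backoff described above can be sketched as follows. The 24-hour base comes from the talk; the function name and cap-free shape are illustrative.

```python
BASE_WAIT_HOURS = 24  # wait at least a day before retrying a soft bounce

def next_wait_hours(consecutive_soft_bounces: int) -> int:
    """Doubling backoff: 24h after the first soft bounce, 48h after the
    second, and so on, so an unresponsive address gets ever less mail.
    Sketch of the plan as described in the talk, not a specification."""
    if consecutive_soft_bounces < 1:
        return 0  # address is deliverable: no wait
    return BASE_WAIT_HOURS * 2 ** (consecutive_soft_bounces - 1)

schedule = [next_wait_hours(n) for n in range(1, 5)]
# schedule == [24, 48, 96, 192] hours between retries
```

A real implementation would presumably also reset the counter to zero after a successful delivery.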
And: confirming consent for publishing users' personal data. This is something that's been mentioned in the activity plan for next year and I'd like to draw your attention to it.
Currently, there is a lot of personal data in the RIPE Database that is not maintained by the RIPE NCC, and the responsibility is on the maintainer to keep it up to date. If it is personal data, maintainers are responsible for informing the relevant individuals and getting their consent, according to the terms and conditions. This is also spelled out in a series of RIPE Labs articles from 2018 that Legal wrote before GDPR came into force. The consequence of this is that we want to start verifying it, by receiving confirmation from the responsible parties that the relevant individuals have been informed and that they have given their consent to the processing of their personal data. This project is in an early phase; we are currently working on the requirements and implementation, but we wanted to give the community a heads up to let you know this is coming.
We will likely do this in phases because of the volume of data in the RIPE Database; we want to break it into smaller phases like we did for IP validation. 42% of all objects are connected to an LIR organisation, so we will likely tackle that first. The other 58 percent of objects are connected to an end user organisation.
Retiring ns.ripe.net. This is something that came up at RIPE 88 and also in the DNS Working Group. We are retiring the ns.ripe.net name server. It will affect approximately 1,000 domain objects. Details are in the Labs article, and it's been announced via the DNS Working Group. We are now notifying affected users by e‑mail. The final deadline for this is 15 January next year, so if you haven't already removed the ns.ripe.net name server attribute, we will be removing any remaining entries on that date.
And just to recap what's in the draft activity plan for next year for the database team.
The draft is available for review, and I think the final version will be published next month. There are some commitments for the RIPE Database in section 2.3; it's four bullets.
First, to remove MD5 passwords from the RIPE Database, to improve security. We also want to improve the resilience of the database query and update services; some of this work has been done in the recent Whois releases, and we'll be continuing with that.
We want to modernise the deployment of database applications. This is happening across the NCC, and it includes the work testing Kubernetes. And finally, we want to continue the ongoing work to implement standards and improve compliance, and that includes RDAP, finishing NRTMv4, and, across the company, ISO 27001 compliance.
We have internal procedures already, but this is to standardise them.
And that's my presentation. Any questions or comments? Thank you.
(Applause)
PETER HESSLER: As a reminder, if you would like to comment, we have microphones within the room. We also have a chat session available on Meetecho, where you can ask questions in the Q&A. I did want to sneak ahead of Gert really quick and ask a question myself, not as co‑chair but as somebody who has an e‑mail address registered. Your plan on soft bounces has rough edges, let's just say. On my e‑mail server I run an anti‑spam technique called greylisting, which returns error 451 on all incoming e‑mails from unknown servers. After 30 minutes or so it then allows you through, but if you don't resend the e‑mail within four hours it will not allow you. So retrying every 24 hours guarantees failure. So, we can talk in more detail about how you can detect this, what it would look like, and how you can resend. We can talk about that offline.
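Peter's greylisting setup can be modelled with a small sketch. The 30-minute and 4-hour windows come from his description; real greylisters key on the (client IP, envelope from, envelope to) triple, and the class here is only illustrative.

```python
# Toy greylisting: the first delivery attempt from an unknown sender
# triple gets a 451 temporary failure; a retry between 30 minutes and
# 4 hours later is accepted; after 4 hours the triple is forgotten.
MIN_RETRY = 30 * 60        # seconds before a retry is accepted
MAX_WINDOW = 4 * 60 * 60   # triple expires after this

class Greylister:
    def __init__(self):
        self.first_seen = {}  # triple -> time of first attempt

    def check(self, triple, now):
        seen = self.first_seen.get(triple)
        if seen is None or now - seen > MAX_WINDOW:
            self.first_seen[triple] = now  # (re)start the clock
            return 451  # temporary failure: please retry later
        if now - seen < MIN_RETRY:
            return 451  # retried too soon
        return 250  # known sender, accept

g = Greylister()
t = ("203.0.113.5", "noreply@example.net", "peter@example.org")
first = g.check(t, now=0)        # unknown triple -> 451
retry = g.check(t, now=1801)     # retry after ~30 min -> 250, accepted

g2 = Greylister()
g2.check(t, now=0)               # first attempt -> 451
expired = g2.check(t, now=86400) # retry 24h later: window expired -> 451 again
```

The `g2` case is exactly Peter's point: a sender that only retries every 24 hours always lands outside the 4-hour window and never gets through.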
ED SHRYANE: Thanks Peter. It's something I didn't mention, but the outgoing NCC mail server does support greylisting. So, yeah, that's taken care of transparently; we don't see those responses, the mail might just go out a bit slower, but we don't get a permanent failure back on those messages.
EMMANUEL KESSLER: Is the 451 only after the four or five days, or an error 400?
ED SHRYANE: The soft bounces would be things like mailbox full, something transient that can be fixed, but I haven't seen grey listed responses in those.
PETER HESSLER: Let's talk more detail offline, so we can ensure this is successful for everybody.
GERT DÖRING: Half the greylisters are happy if there are SPF records and all that, so maybe they are just not greylisting you.
Actually my comment is also on that slide. I do wonder how feasible it would be to map this to LIR accounts and then display something in the portal. Like: we have these five mail addresses in your LIR, on your maintainers, that cannot receive mail; go fix it. It might be totally unfeasible because it's too complex, but it's just an idea.
ED SHRYANE: Yeah, thank you Gert, absolutely. That's something that we didn't implement because we had time pressure to get this finished. But it's clearly an issue across the whole NCC, for other services as well; I think it was mentioned at the Services Working Group yesterday. It's something that I think would be very useful. We should improve our undeliverable mail process, and if we can make it more visible for the user, that would be very helpful, because if the only way they are notified that their mail is undeliverable is through e‑mail, it's not much help to them. So, any way we can improve this, I'd really welcome the feedback. Thank you.
PETER HESSLER: Are there any questions from online? Anything from Meetecho?
RUDIGER VOLK: Simple short question. Is your mail bounce handling just for responses from the database, or for all of the mail handling?
ED SHRYANE: Correct, just the database. We have implemented it just for the RIPE Database. It's something we were under time pressure to do, and this is our implementation. But clearly, across the RIPE NCC, other services will have the same issues with bounces.
RUDIGER VOLK: Well, for the other mail services I'm not sure what their bounce policies are. I have been thrown off mailing lists over the past decades in a lot of instances where I was very unhappy.
HANS PETTER HOLEN: This is clearly a problem across all the different services that we have, and different software implements this in different ways. My Mailman has its own opinion on when to unsubscribe somebody, and my Board members get unsubscribed from their internal Board list and they are not happy about that. So that's a thing. This also hits our member services with support calls. And one of the big challenges here, on the abstract level, is that this is your data in the database. We can't really just take it away because it's bouncing, but I don't really want to have inaccurate data in the database, or even the registry, so we need to find a way to get you to act on this. We can label it, or we could move it to remarks, or things like that. But really, we need to have a common understanding on what's the right thing to do here: database, registry, mailing lists, you name it. Ideally we would have a sort of master bounce list, so if somebody bounced on one service, we would take the same action across all of them, and that action can't be manual, because that many tickets would take too many people to handle.
So this is a big problem from an accuracy point of view, and also from a mail delivery point of view, since the big players like Microsoft and Google have started to enforce much stricter spam policies, so us trying to send e‑mails to addresses that don't want them has consequences for our other e‑mails. This is an increasing issue that we are looking at, and the database is probably the service that has come furthest in analysing it, but it has attention in all the other areas as well.
PETER HESSLER: It sounds like what you are suggesting is something we need to coordinate amongst all the senders of e‑mail in the RIPE NCC, to ensure that we have at least a general understanding of the solutions that we want to provide. This is probably a good thing for us to bring to NCC Services, so we can coordinate with the other groups on that.
ED SHRYANE: Just keep in mind, it would be a much bigger project to do that across the RIPE NCC. This was something that we could do in a short amount of time for the database, because we could make a Whois release. But of course it has these downsides of blocking on soft bounces. So there is more work to be done.
PETER HESSLER: Yes of course.
So, are there any other questions or comments? Is there anything online? No. All right. Then, thank you very much.
(Applause)
PETER HESSLER: So, as Lee Kent from beIN comes up to give his presentation, I would like to remind you all to please rate the talks; please let us know what you think of them. Your feedback helps us choose and better understand what you want to see.
LEE KENT: Good morning everyone. Some of you may have seen some of this presentation on Tuesday. I'd like to thank Peter and everyone for letting us talk to you today about how we use the database and some of the challenges that we find with it.
So, as I did on Tuesday, this is a request to the community. Some of the members have already mentioned the accuracy, or inaccuracies, of the database. Basically, all we want as a broadcaster is to be able to speak to the right people and have a sensible and reasonable level of communication with whoever is responsible for a resource. We don't want to have to go down several other routes, so having accurate information, or the ability to gain access to accurate information, is really important to us.
So, one particular flag here has been mentioned most of the week. The Seychelles has been commented on several times, and how lovely it is, and that's fine. I have never been; I am sure it is very nice. However, it does cause us a little bit of an issue in terms of locating businesses, as genuine business addresses there are few and far between, and I do have some examples later on.
Because of the way that these organisations have been set up, as an industry we have named them offshore, and the reason for that is, as it's explained here, they generally sit outside the jurisdiction and the addresses are very inaccurate, to say the least.
We always go to the RIPE Database and do our best to locate the resource and take whatever information is available in there. Sometimes it's ‑‑ there is loads, and that's great, and it does help us. On other occasions it's very limited, leaving us with nothing more than an e‑mail address. And as we have just been talking about, there are some issues with just sending an e‑mail. And I will come on to that in a moment.
So, when we do send e‑mails, they can be completely pointless; they never go anywhere, and we never get any response at all. And when we do look them up, we find that there are errors sitting on them. Now, abuse notices, take‑down notices, DMCA notices, whatever you want to call them, are a request to the resource owner to take some action against a domain, because it is infringing on copyright or involved in some other form of cyber abuse.
We do our best to go outside the RIPE Database and use whatever resources we have available to us in terms of open source, and we will keep following the chain, ultimately looking for websites and other business addresses and doing what we can. Even then we still come to complete dead ends, where there might be an e‑mail address or a domain named or available, but when we follow it, it's nonexistent; it just sits there and doesn't do anything. So again, we become very frustrated: how can we speak to the responsible party for the resource?
E‑mails. We have found on several occasions throw‑away e‑mail addresses being used. Again, that causes a bit of a headache, and we know that they are just completely blackholed and are never going to get any kind of response whatsoever. It makes our whole process inefficient and a waste of time.
Even when we do have a front‑end website and we find an address, they are back alleys, car parks and domestic addresses. Whilst it's accepted that in some jurisdictions domestic addresses are allowed to be used for business registration, knocking on John and Andrew's front door and saying can we speak to such and such a company, and for them to turn around and say, we have got no idea what you are talking about, the address being used is a fake address, can be quite frustrating. Unfortunately, with the very few bad actors that we're talking about, this is a common pattern that we see all the time.
Even when we do go down a legal route and file a subpoena, most of the information that we get back is also fake or inaccurate, and we can't take any action on that.
Again, though, fake e‑mail addresses are being used, making contact completely pointless. Billing information is fake, inaccurate or incomplete, which poses another question around the use of billing and credit cards when paying an operator for services as well.
We work very closely with the NCC, which helps and provides some education to us as well, to make sure that we are using the database correctly. We will pass on information where we feel it's misleading, and the guys at the NCC will work with us and help us either correct it, or send information to the operator to encourage them to contact us. It's not always successful though, and to be fair, having spent a few days with you guys, it shouldn't be the responsibility of the NCC to be our fallback. The information should be easily accessible to us through a number of processes, whether through the database or through a legal process.
So, I have raised these points mainly because I'm looking for the community to engage with us; as has been mentioned on a number of occasions, stakeholder and community engagement. During the last session, there were a few legitimate and reasonable questions. For the last few days I have been standing in the foyer trying to speak to people, hoping that people will come and talk to me. I encourage it. This is something that we want to work with you guys to achieve. There was a session next door with Max about blocking, and I had the opportunity to speak to some of the people who posed questions around that. I think they are legitimate points. When government organisations get it wrong in blocking, rights owners and broadcasters cringe. I think there was one point about the validation process. Certainly from beIN's point of view, we go out of our way to make sure that whatever we send through is true and accurate. We send proper evidence packages across. That may be e‑evidence packages, or it could be as simple as screenshots with a URL demonstrating the infringement.
If that process isn't correct, it takes the community, or whoever is responsible for the resource, to come back to us, point that out and have that conversation. With the organisations where we have engaged and have had really good conversations, we have identified poor processes, information received in the wrong way or in the wrong language, and we have corrected that. But it takes that communication, and if we can't talk, then how can we actually fix it? And with that, it's open to you guys for any Q&As.
PETER HESSLER: All right. Thank you very much.
(Applause)
I see somebody at the microphone.
AUDIENCE SPEAKER: I want to make it clear that I am just speaking for myself, not for anyone else.
I get the problem you are having, but also I want to highlight that consensus for a long time has been that the RIPE NCC is not the Internet police. If you want to get the information, you go through Dutch court orders. Information that's not public. And it's unreasonable for RIPE NCC to try to validate every address for everyone, and like especially if it's a residential one, how would you know if it's the right one or not? There is simply no way of doing that. So, I get your problems, but I don't really see what the RIPE community or the RIPE NCC could really do about this.
LEE KENT: We have just had a session on the challenges of validating thousands of e‑mail addresses and things like that. So the validation process: that's not our expectation of the RIPE NCC. The expectation is that if we do find an inaccuracy and we do find an issue, then there should be some process to rectify that. There should be a responsibility in the community to actually want to have accurate and contactable information. As a community, why would you want a very small minority of bad actors lurking in the shadows to facilitate nefarious activities? So, having the ability to reach out to somebody. At no point in time would I ever ask the RIPE NCC to police anything, and I'm not asking anybody to police anything; what we're asking for is the ability, and some kind of worked-through process, to have a reasonable conversation with whoever is ultimately responsible for that resource.
AUDIENCE SPEAKER: But that seems kind of like the same thing, like, how should they ensure that?
PETER HESSLER: I think the request is generally: if you see something that is wrong, please try to communicate with that network and have them update their database entries. Your other request is: if there is clearly false information in the database, what can be done to get that corrected? And I assume one of your requests is: if it is intentionally bad over time, then how can this be escalated?
LEE KENT: Exactly.
PETER HESSLER: I think we also need to be clear of what the RIPE NCC is capable of validating and what they are not capable of validating, and what they validate is the entity who receives the resources, so the members and the PI end users. And they generally don't validate the assignments that the holders issue.
LEE KENT: Yeah. But again, in some of the instances we have shown, it's the entity that's responsible for the resource that seems to be the enigma. And if we can get over that block, then that's another conversation and then we kind of move on from there. But as I said earlier, you know, given the amount of work that the NCC has to do, we don't expect the validation process to be perfect. But where we find inaccuracies it's about working that through.
AUDIENCE SPEAKER: Online question, from Brian Storey: "Does the RIPE NCC and the community recognise that this outreach, and any failure to provide adequate support, may represent evidence of what could be seen as a club failing to take such matters seriously, resulting in greater external oversight or threat of litigation?"
PETER HESSLER: I am willing to answer on behalf of the Working Group for this one. We are aware that there are multiple stakeholders that have conflicting opinions on certain topics. One potentially conflicting opinion comes from the many privacy organisations who intentionally hide who is using these IP addresses. There are many valid reasons for this. And of course you are highlighting one invalid reason for this.
LEE KENT: That's all we're interested in is the invalid reasons. People's privacy is paramount. So, it's only those bad actors that we're interested in.
PETER HESSLER: And I hesitate to bring this up, and I don't want us to go into this discussion. In the United States political system these days, there are certain topics that are completely legal that various jurisdictions are trying to make illegal and are illegally trying to influence this. So we also need to try and balance and push back against that sort of behaviour, so there is a lot of differing perspectives on this topic.
LEE KENT: Yeah, of course.
AUDIENCE SPEAKER: This is from the monitoring team: I would like to first thank you for bringing this topic to our attention, and I had like two small comments that I would like to highlight.
On slide 4 the term "unreliable" was used, but this doesn't mean it's invalid. So if the e‑mail address of the owner is valid and the mail has been received, that's okay, but it's up to the receiver whether to respond or not. And on slide 11, there were several comments; it looks like it's one e‑mail but it's from different e‑mails.
LEE KENT: It is, yeah.
AUDIENCE SPEAKER: Okay. Thank you. And just lastly, whenever there's a valid e‑mail address and it's been sent to us, we are searching and looking into that and following up.
LEE KENT: I completely agree, and that's been our experience to date. You know, Spencer knows that we work very closely with you guys to raise these points.
AUDIENCE SPEAKER: Alex from AMS‑IX. Maybe some background information. We have the DSA, and I think it works very well. But next to the DSA there is a working group at EU level on live streaming piracy, which basically has to deliver a report at the end of 2025. If this is an issue and the community does not participate in solving it, the EU will step in. We have no option of doing nothing.
PETER HESSLER: As a question for you, how would you recommend the community participate in this process? Is there something that we need to go to the EU level for?
AUDIENCE SPEAKER: You have to go to the EU, yes. But also, work with the rights holders. The fact that Lee is here doing two presentations, and most likely will be in Lisbon also, where we might do a special session, I think is helpful in making sure that the legislators see that the community is working with the rights holders to solve this. We might be limited; we might not be able to provide the silver bullet. But if the rights holders agree that we have done the maximum that's possible, the chances of legislation that will impact the community and the RIPE NCC negatively are, I think, lower.
PETER HESSLER: If I can ask you, can you send an e‑mail to the Database Working Group with a short summary of these comments, especially how we can participate and which groups we should go to?
AUDIENCE SPEAKER: Yes. I will contact the RIPE NCC and the idea is to have a combined session in Lisbon with the database, with security and with the lobby groups, I would say, to see if we can get something together.
PETER HESSLER: Yeah, because we definitely want to ‑‑ we want to keep the authority that we currently have and we want to be able to participate fairly, but we also need to know how we can appropriately participate. So, yes. Thank you.
LEE KENT: And if the community will have me back for Lisbon, then I will come back.
AUDIENCE SPEAKER: Yes, of course the community will have you back, we are open and inclusive. One of the things that I wanted to mention is that rights holders are not only intellectual property and copyright holders. Rights holders are people, ordinary people, that have privacy rights, freedom of expression and all sorts of other rights, and they are not present here. We need to acknowledge that, and we need to find the most proportionate way of solving this issue. But unfortunately, and this is my opinion, the intellectual property advocates use all of the mechanisms available to them very successfully, which harms the Internet and access to the Internet. In Max's presentation, we saw how IP blocking and DNS resolver blocking can be simply disproportionate and can affect access not just to content but to online services. And the database, in my opinion, is there to provide you with accurate information and that's it. But in bullet point number 3, you are asking for a mechanism for the RIPE NCC to resolve disputes, and I think that should not be done at the RIPE NCC.
LEE KENT: So, okay, I mean these are just suggestions, bringing them to you guys for open discussion. I have spoken to a few people here, and there's been a suggestion that maybe we should have a BoF, which is a new term I have learned, to open up that discussion. Ultimately, yes, this is a stepping stone, but let's have that conversation, let's discuss what other alternatives there are to this. Like everything, there has to be some kind of recourse, and there has to be a system in order to resolve some of these issues. So this is why we put the RIPE NCC forward as a mechanism to try and facilitate any changes we may find. If there is a better way, a slicker process, then brilliant, let's talk about it.
AUDIENCE SPEAKER: Wolfgang: I am doing abuse handling for my employer as well as for non‑profit groups. One practical thing that would be helpful: if a Whois request for an IP address leads to a sub‑allocated range, it's easy to overlook that, and it would be helpful to add a note to the answer from the Whois server on how to get to the real holder, the actual customer that holds that IP address. Thanks.
LEE KENT: That would be great.
PETER HESSLER: If I could ask real quick as a clarification. I believe that AFRINIC, in their inetnum and inet6num objects, has a parent allocation; is that what you are requesting? Or are you requesting a remark in the field, or the full chain to be returned?
AUDIENCE SPEAKER: A remark and a hint in the full chain would be sufficient, I think.
RUDIGER VOLK: A very short and light‑hearted remark. Looking at your last slide, the solution for the community is to go to an IPv6‑only network.
LEE KENT: To be fair I actually prefer Geoff's approach from the session this morning, and everything be based on DNS.
HANS PETTER HOLEN: I thought I'd try to summarise at least my takeaways from this.
As I mentioned yesterday, accuracy of the registry is one of our core focus areas for next year, and I gave some sneak peeks into some of the numbers that we're looking at. We have a much more detailed internal project right now to look at what data sets we have in the different systems and databases, and also what we mean by accuracy of those. So I gave you some numbers on accuracy of members' data, where the definition, the measurement I use, is whether I can match them to a business register. You are pointing out that this can lead to a PO box address; no, that doesn't fit in my definition, and I don't really see how we can verify that without, well, an additional travel budget for my team, so we can go and check in the Seychelles.
But I think that exercise is useful, and we will definitely come to the community with that. We would have to have a discussion on whether it's database or services or how we do that. I think a lot of this belongs in the Database Working Group, because it's our members' obligation to keep their delegations up to date, and that's really the huge volume. And I fully agree with the previous remark that there should be a way of seeing the chain of delegations and so on. I think that's actually possible in some of our interfaces; maybe we can make that even clearer. We're doing a lot of work on user interfaces and how to make things available, so we have a lot of tools here to make the information accurate: by agreeing on definitions of accuracy and making sure that when you see it, it's clearly understandable what exactly you see. So I think this is spot on for something that is on our roadmap, but we do need the help of the community, this Working Group and others, in order to address the members' responsibilities and get the definitions right. At least that's my takeaway now, and I'm looking forward to working with this excellent Working Group on getting this done. Thanks.
PETER HESSLER: Thank you. Just in the interests of time, I'm going to be cutting the queues. If there is any online Q&A you want to bring it up.
AUDIENCE SPEAKER: Tom Strickx, CloudFlare. Thanks for talking about this; I think it's a great counter‑perspective to the one Max gave us earlier. I do want to make sure that we're not going down the avenue of "oh, but the children", because part of this does tend to go that way: when it comes to the rights holders, it is all for the rights holders, while there are privacy concerns, and I think that's already been raised by some of the others in the community. So I think that's good.
Another comment for the rest of the community: I do highly encourage us to engage with the DSA on the European level, because if we don't, we get the situation that we had earlier this year and last year regarding forced arbitration and sender pays: arbitration and legislation that's trying to come through at the EU level, but also at the local level; look at France and Italy. So I highly recommend that the community engages on this. Otherwise we end up in the exact same position, with this legislation in Italy, in Portugal and in Spain, where it's a fairly unilateral process that we should not abide by as a community. So I highly recommend everyone to engage with this, because otherwise we do end up in a position where this might end up biting us in the ass, and I'd rather not. Thank you.
PETER HESSLER: Thank you very much.
(Applause)
ED SHRYANE: Apologies, I'll go through this first. There is some history to the request: there's been a lot of discussion about supporting UTF‑8 in the RIPE Database. There are approximately 77 countries listed on the RIPE NCC website as being in the service region, and it turns out that most of the official languages of the countries in this region are not supported by the RIPE Database.
So, there is a long history, a lot of discussion about UTF‑8 support. Work has already been done: NWI‑11 asked for support for IDNs, and that was implemented, but we had to convert to Punycode to fit in the character set. I performed an impact analysis in 2022, summarising previous discussion and coming to some conclusions. For example, UTF‑8 will allow for proper internationalisation of names and addresses, which we don't have right now. But, on the other hand, we must not impede the use of the RIPE Database to facilitate cooperation and interoperability, and we must also keep in mind the work of the RIPE NCC to support our members and community.
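To illustrate the NWI-11 approach mentioned above, here is a minimal sketch using Python's built-in IDNA codec; the domain name is made up for illustration and is not from the RIPE Database.

```python
# An internationalised domain name is converted to Punycode (ASCII)
# so it fits within the database's Latin-1 character set.
idn = "münchen.example"
punycode = idn.encode("idna").decode("ascii")
print(punycode)  # xn--mnchen-3ya.example

# The conversion is reversible, so no information is lost.
decoded = punycode.encode("ascii").decode("idna")
print(decoded)  # münchen.example
```

The trade-off, as discussed in the session, is that the stored ASCII form is unreadable to humans unless it is decoded back.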
Since then, earlier this year, we added support for a charset flag. So you can request a different character set on port 43, for example UTF‑8. So if you use a different script or terminal, you can now get your preferred character set out of the RIPE Database.
And in future, if the default ever changes on port 43, you can get back the current legacy behaviour with that flag as well, by specifying Latin 1.
Subsequently we made some changes. We have updated all index tables for queries, and we did that without any downtime. The encoding is still Latin 1, so there's been no change to the data itself, but we are now in a position to support UTF‑8.
And we have made some code changes, originally from 2022, to support UTF‑8 in the database objects. I want to confirm that there are no technical reasons why we should not support UTF‑8. We have done a lot of work in the background, so there is no technical reason why not.
It turns out that our existing interfaces already do support UTF‑8: REST API updates, mail updates, RDAP and NRTMv4 all support it, on both input and output. It's just encoded as Latin 1 within the database. Port 43 remains Latin 1, and NRTMv3 remains Latin 1.
What encoding is used elsewhere, by the other RIRs? Not to call out the other RIRs unfairly, but the situation is different in different regions; the requirements and the use of the database are different. It turns out that only two RIRs support UTF‑8 on both the update side and the query side. In the RIPE region, we do accept UTF‑8, but we translate it to Latin 1, so when you query for an object on port 43 you will get Latin 1.
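The effect of that translation can be sketched in Python; the organisation name below is made up, and the "?" substitution is an assumption used for illustration, not necessarily the exact server behaviour.

```python
# What happens when UTF-8 input is forced into Latin 1:
# a Latin-1 character survives, a Greek one cannot be represented.
name = "Ωmega Networks"  # hypothetical organisation name
latin1 = name.encode("latin-1", errors="replace")
print(latin1)  # b'?mega Networks' -- the Greek omega is lost

survives = "café".encode("latin-1")  # é is within Latin 1, so it round-trips
```

This lossiness is why querying on port 43 can return something different from what was submitted over the REST API.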
In summary, it's not a technical problem any more, but I would like to ask should UTF‑8 be supported in the RIPE Database?
Firstly, I think we should consider where in the RIPE Database we should support UTF‑8: at a high level, which object types and attribute types? Where would this be useful? For example, person names, role names, organisation names, addresses. Then individual attribute types: do we support non‑Latin 1 in existing attributes, or do we need additional attributes?
The second half: should the default character set change to UTF‑8 on the existing interfaces, so port 43, NRTMv3 and dump files? There are pros and cons to that too. So I'd invite some feedback, please. Thank you.
PETER HESSLER: Thank you, Ed. I see there are a lot of people coming to the line. Let's not have the entire debate right now, we only have a five‑minute slot, but please...
AUDIENCE SPEAKER: Lars Liman. What do you mean by UTF‑8?
ED SHRYANE: Are we going to have support for the entire character set or a subset? That's definitely something that needs to be discussed.
AUDIENCE SPEAKER: And what do you do about normalisation? There are characters ‑‑ I know at least three different ways to represent an "a" with a ring above it. So, I suggest you look at other people who have fought this problem. One place to look is ICANN and the TLD normalisation and label generation systems. There are certain limitations in there on what you can represent, but at least look at the very long and tedious process they have gone through to find out how this works, because the devil is deep in the details here.
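The point about multiple representations of the same character can be shown with Python's stdlib unicodedata module: the precomposed å and the sequence a plus a combining ring are different strings until they are normalised.

```python
import unicodedata

precomposed = "\u00e5"   # å as a single code point
combining = "a\u030a"    # "a" followed by COMBINING RING ABOVE

print(precomposed == combining)  # False: raw strings compare unequal

# After NFC normalisation the two representations become identical.
nfc = unicodedata.normalize("NFC", combining)
print(nfc == precomposed)  # True
```

Any UTF-8 support in the database would have to pick a normalisation form (NFC is the common choice on the web) or searches and duplicate checks will silently miss equivalent strings.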
ED SHRYANE: Thanks.
AUDIENCE SPEAKER: Peter Koch. Who could object to technical progress and internationalisation? Nobody. Okay. So I'm not going to object, I am going to ask some questions. One is ‑‑ well first of all, UTF‑8 is obviously perfectly technically interoperable, but it might pose some difficulties in the human interoperability. And you had an interesting table there on the slides where you compare the different RIRs. And if I'm not mistaken, of all choices APNIC is the one that sticks to Latin 1 and it would be probably very helpful to understand why that is. And I would not be surprised if the human interoperability has an issue there.
ED SHRYANE: I wouldn't want to call them out specifically, this is purely on port 43. Whether it's historical precedent or not, I don't know the reason, it's something I can certainly look into.
AUDIENCE SPEAKER: Fair enough. Whatever you do, normalisation is one thing, but transliteration, which makes it more human readable, at least for western humans, is exactly the other issue here. It's to be considered. But of course it's a good step forward.
LEO VEGODA: Firstly, thank you very much for doing all of this work.
Secondly, I am very strong supporter of people's names being presented as they would want them to be presented. And when it comes to company names, I always felt uncomfortable that these names were transliterated into Latin script ‑‑ and we just heard from people about how difficult it is to actually find out who is running services on the Internet. When you go and take something that was in Arabic or Russian or whatever and you go and put it into another character set, you are just going to make that much more difficult. So, I strongly support a company name as well as a person name being presented as it would be in that company registry. That's not going to solve all of the problems. But it's certainly going to be a step in the right direction. So thank you very much for taking that step.
AUDIENCE SPEAKER: Cynthia Revstrom, speaking for myself. I agree with what both Leo and Liman have said; normalisation is important. I mean, I can't put my actual legal name in the RIPE Database in the ordinary field currently, because that field is Latin 1. Anyway, I would suggest maybe taking a look at the IETF datatracker; I like how they have handled this, where you can put in your name and then your preferred Latinisation of the name. For example, a lot of German speakers might encode my last name by turning the umlaut into OE, while that's not really how you typically do it in Swedish, where it turns into an O. So, I think that makes more sense. Considering the history, I think it makes more sense to add a new field, like an org‑name Unicode field or whatever, but keep the current org‑name as the ASCII version alongside the non‑ASCII version. How to do this with right‑to‑left text, I don't know; that's a complicated one. But I think it makes sense to do this. And maybe it's also worth considering whether there should be something, like Punycode or I don't know what, to somehow encode the data so you can still get output when using Latin 1 and it doesn't get lost. Thank you.
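The last suggestion, encoding the data so it survives a Latin-1-only interface, could in principle reuse Punycode. A rough sketch with Python's raw punycode codec; how such a value would actually be carried in a database field is purely hypothetical.

```python
# Round-trip a non-Latin-1 value through an ASCII-safe encoding (Punycode),
# so a Latin-1-only client does not silently lose data.
value = "Ελλάδα"  # hypothetical non-Latin-1 field value
safe = value.encode("punycode").decode("ascii")

restored = safe.encode("ascii").decode("punycode")
print(restored == value)  # True: nothing was lost
```

Unlike a lossy Latin-1 transliteration, this representation is reversible, at the cost of being unreadable without decoding.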
AUDIENCE SPEAKER: Tom Strickx, CloudFlare. It's important to have people's names represented in the way they want them to be. My question would be, I guess, more to the Database Working Group community: would it be possible to add an additional field that indicates how the data was inserted into the database in the first place? If it was inserted in Latin 1, you get a field back, even if you are doing a query in UTF‑8, saying the insertion was done in Latin 1; if it was done in UTF‑8 encoding, you get the field back saying it was done in UTF‑8. So if you do a Latin 1 query and you get weird normalisation from UTF‑8 into Latin 1, you can at least partially figure out: oh, I am being an idiot and I am querying data that's UTF‑8 in Latin 1. I think that's maybe something for the wider community to discuss.
AUDIENCE SPEAKER: Stavros from AMS‑IX. I do support UTF‑8; I do think it makes sense, and I am happy to see there is no real problem in transitioning to UTF‑8. But I would like to support Leo's comment: maybe we should shrink the current options and not fully support it, because that might be an issue. For example, I come from Greece, so I know how to read Greek, but if somebody has an issue with a Greek operator and everything is written in Greek, good luck translating that if you are a Spanish, Portuguese or Scottish person. So maybe we should actually think about it and say: okay, what do we support from UTF‑8? Maybe you can come up with a proposal, put it on the mailing list and say: guys, in the company name, or in the mail addresses and things like that, we should not allow the full option, or we support only this subset, or whatever you decide. So maybe we first agree on that, then put it into action, and then bring it also to the new client software. Thank you.
ED SHRYANE: Can I suggest then, as you said, that I make a summary of all the topics that came up and write to the mailing list for discussion, and maybe the next step could be a problem statement?
PETER HESSLER: Yes, I think that would be fantastic. If I can also suggest, you could add the current fields that support Latin 1; it sounds like several of those were already mentioned as requests in the room, and if we already have that support, then that makes things much easier. And if I can ask the community to think through which additional features you may want, for example for the submitter to supply the preferred Latinisation of their name: see how other RIRs are doing this, see how other organisations are doing this, because we don't want to have a separate schema from all the other RIRs if we don't have to. So, thank you very much.
(Applause)
ED SHRYANE: Okay, so thank you. Last presentation, I promise.
Continuing on to MD5 hashed passwords. By way of introduction: as already mentioned at RIPE 88 and in the impact analysis, there are vulnerabilities in the MD5 hash algorithm, and we plan to discontinue support for MD5 next year.
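The core weakness is speed: MD5 digests are so cheap to compute that leaked hashes can be brute-forced quickly. The database's MD5-PW scheme is the salted MD5-crypt variant rather than a raw digest, but it inherits the same underlying speed; the raw-digest sketch below is only an illustration of that property.

```python
import hashlib

# A single MD5 digest: modern hardware computes billions of these per
# second, which is what makes offline guessing of leaked hashes practical.
digest = hashlib.md5(b"password").hexdigest()
print(digest)  # 5f4dcc3b5aa765d61d8327deb882cf99
```

Modern password hashes (bcrypt, scrypt, Argon2) are deliberately slow per guess, which is exactly what MD5 lacks.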
The impact analysis got some great feedback, so thank you for all of your comments so far. I'd like to present an alternative to passwords, hopefully a drop‑in replacement, called API keys; then talk about the less secure transfer of credentials, which I would also like to tackle; and then, finally, a draft migration plan.
So, firstly, API keys. What are they? In my opinion, a more secure alternative to passwords, with good usability. API keys, just to be clear, are for automated updates; for interactive updates I think we already have a very good solution, RIPE NCC Access: you go to the database website, you make your changes, and I think it works very well. So API keys are intended to be a drop‑in replacement for passwords: where you use a password now, you can replace it with the API key value. The plan is to add functionality for creating, listing and revoking API keys in the web application. The generated API key will have a public and a secret part; it was mentioned in a discussion that it would be helpful to have a public prefix so you can identify different keys in different places. The secret part will only be shown once to the user; they need to keep it safe and make a record of it. And then we will support standard HTTP basic authentication for updates. We won't be using a custom header or anything like that, so hopefully the implementation is straightforward for clients, as there is already good support for this standard.
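Since standard HTTP basic authentication is planned, using an API key from a client could look roughly like this sketch. The key format, and whether the key occupies the username or password position, are assumptions; check the RIPE NCC documentation once the feature ships.

```python
import base64

# Hypothetical API key: the real public/secret format is not yet published.
api_key = "examplepublicpart.examplesecretpart"

# HTTP Basic auth is base64("user:password"); here the whole key is assumed
# to act as the username, with an empty password.
token = base64.b64encode(f"{api_key}:".encode("ascii")).decode("ascii")
headers = {"Authorization": f"Basic {token}"}

# The headers dict would then be passed to any HTTP client, e.g.
# a PUT request against the RIPE Database REST API over HTTPS.
print(headers["Authorization"][:6])  # Basic
```

Because this is plain Basic auth over HTTPS, no custom client library is needed; curl, requests, or any HTTP client supports it out of the box.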
API keys are linked to a RIPE NCC Access account, and 80% of maintainers already have SSO in their maintainer object, so there will be no need to update your maintainer object; you'll simply be able to add an API key through the website. API keys will not be stored in the maintainer, and they won't be stored in the RIPE Database: there will be no more secrets to hide in the database.
The plan is to make this available in early 2025. And as also suggested on the mailing list, we'll be sure to include documentation including examples so you can transition over to the API keys. And our learning and development colleagues are going to produce some training materials to support use of API keys.
There are other alternatives; I need to mention those. You don't have to wait for API keys to stop using passwords. For example, PGP is in use, and if it fits your use case, it's a good alternative. X.509 signing is only used by a minority, but it is still a valid alternative. And something we finally added full support for this year is client certificate authentication. This is where you generate a certificate, publish your public key in a key‑cert object, and then use your client's support for presenting your certificate in the TLS handshake when you are setting up an HTTPS connection. You can use your own certificate for that, and that will authenticate an update.
So, it sounds complicated, but there are examples in the documentation and there is good client support as well. You can use this with curl, for example.
Moving on, I want to summarise the feedback that we got from the discussion on the mailing list. Firstly, shared credentials. We plan that API keys will be linked to an individual RIPE NCC Access account; they are not intended to be shared. Incidentally, this was the goal for passwords as well: even though they are all visible in the maintainer object, the plan was that, if a maintainer object is shared, a password would be used per individual. That's not really how it has panned out.
The drawback with sharing credentials is that if multiple people share the same credentials, it creates security risks. For example, it's more likely that a credential will be leaked in some way. It's also harder to account for the credentials if somebody leaves the company, and harder to clean up, because they are aware of credentials that are shared by other people and used in other places. And changes by an individual cannot be audited if credentials are shared, which makes accountability and compliance harder. So we will not encourage the sharing of RIPE NCC Access accounts for this purpose; it's better practice for individuals to manage their own credentials separately. The credentials will be in their SSO account rather than shared in a maintainer object.
And just to highlight: when somebody leaves, the idea is that the clean‑up would be as simple as removing their RIPE NCC Access account from the portal. If you are synchronising your maintainer with the RIPE Database, that SSO account would be automatically removed on the database side, so the user will no longer be able to make changes on behalf of your organisation, and any API keys that they generated will no longer be usable with that maintainer either.
Some feedback we also got on the mailing list was to make it more visible at organisational level how API keys are being used. So we will list users with API keys on the portal user accounts page, visible to any admin user for their own LIR account only: we will show an API key label for any users that are using API keys with the organisation maintainer.
And secondly, we will add a warning in the portal when you are removing a user or changing their role if they do have an API key, to let the administrator know, because if you remove the user, the automation might stop working when the API key becomes invalid.
The next big topic that came up was around API key expiry. We plan that API keys will have a mandatory expiry date. The user can choose the expiry date, but no longer than one year; this is a trade‑off between security and ease of use. Clearly, ease of use would argue for a longer expiry or no expiry, whereas the security consideration is that the longer the key is valid, the greater the chance it could be misused by somebody if it is unknowingly exposed.
An important aspect, the flip side of key expiry, is that we want to avoid downtime. The organisation needs to track the keys that they create, and there will need to be a procedure to roll over a key, no matter how long the validity period is. We will do our part by notifying the user in advance, by e‑mail and on our web interfaces, if any of their API keys are due to expire soon, and that includes the update notifications.
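A trivial sketch of the client-side rollover procedure described above: track each key's expiry date and rotate well before it passes. The 30-day margin and the dates are arbitrary assumptions for illustration.

```python
from datetime import date, timedelta

def needs_rotation(expiry: date, today: date, margin_days: int = 30) -> bool:
    """Return True when a key should be rotated before it expires."""
    return expiry - today <= timedelta(days=margin_days)

# Hypothetical key issued for the maximum one-year validity.
issued = date(2025, 3, 1)
expiry = issued + timedelta(days=365)  # 2026-03-01

print(needs_rotation(expiry, today=date(2025, 6, 1)))   # False: months left
print(needs_rotation(expiry, today=date(2026, 2, 15)))  # True: rotate now
```

A check like this in a cron job, alongside the RIPE NCC's own expiry e-mails, is what keeps automation from breaking when a key lapses.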
Future work:
So, I have explained how we plan API keys to work in the first phase, early next year. There are also plans for future work. We want to add finer‑grained limits on how you can use API keys. For example, you can limit a key to a specific maintainer or to multiple maintainers; restrict it to a specific environment, so you can have a test‑only API key; restrict by source prefix, which I think came up already; and limit by object types. There is support for read‑only or read/write there as well, so we can expose that.
Multiple people requested OAuth support, and we do plan on adding support for OAuth 2. This will allow users to automate key rollover. On the flip side, the implementation will be more complicated and it needs client support. API keys are being written in a generic way, not just for the RIPE Database, and we do plan to make use of this across other RIPE NCC services. So if API keys are useful, they don't have to be written again for another service: once is enough.
One thing I need to point out, which I don't think was sufficiently explained in the impact analysis: there is a difference with API keys, and likewise with client certificate authentication and with OAuth coming up in future. Currently, Whois allows multiple credentials to be used for a single update. For example, if the maintainer on an object is different from the maintainer of the maintainer object itself, you need to authenticate twice. We will only support a single API key in each update request. This is standards compliant, and it's also the case for client certificate authentication and OAuth 2. There will be an impact on database users, although we found that multiple passwords were used in less than 1 percent of updates. And there are alternatives: firstly, you can use PGP or X.509 to sign multiple times; secondly, you can use hierarchical authentication, by adding a maintainer in specific places so you can authenticate as that maintainer instead.
Sorry, Peter am I out of time?
PETER HESSLER: You can continue.
ED SHRYANE: Next section. While we are on the subject of security, there are some less secure ways that credentials are transmitted between the user and the RIPE Database.
And we plan to tackle those also.
First up is mail updates, where passwords are already deprecated; we are including a warning in mail updates if you include a password. We can't guarantee end‑to‑end security for e‑mail, that the content will be encrypted and not in clear text. Passwords are only used in 16 percent of mail updates, which is good news, but users will have to migrate away from using them. In future, only PGP or X.509 signed mail updates will be supported; API keys will not be supported in mail updates.
Secondly, sync updates. We still support updates over clear text HTTP for historical reasons. We have had a warning in place since September last year, but still 50% of updates are over clear text, including passwords. Thankfully, again, passwords are a minority of sync updates; they currently form 16 percent of all updates. API keys will be supported in sync updates, but only over HTTPS, since the credential is sent in an HTTP header.
And finally, the REST API. We already do not allow credentials to be sent over plain-text HTTP. Passwords will be deprecated; currently they are used in the vast majority of REST API requests, but API keys will replace them. We will support API keys over the REST API.
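As an illustration of the difference, and assuming API keys are presented with HTTP Basic authentication over HTTPS (the key value, object, and paths below are placeholders, not real credentials):

```shell
API_KEY='example-api-key'   # placeholder; a real key would come from the RIPE NCC portal

# Deprecated style: password as a query parameter, visible in the URL and in logs:
echo 'curl "https://rest.db.ripe.net/ripe/person/JD0000-RIPE?password=..."'

# API-key style: credential carried in an Authorization header, never in the URL:
echo "curl -u \"$API_KEY:\" https://rest.db.ripe.net/ripe/person/JD0000-RIPE"
```

Keeping the credential out of the URL means it cannot leak through access logs, proxies, or browser history, which is part of the motivation for retiring the password query parameter.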
So finally, the migration plan, and I'd like to stress this is provisional, a draft plan. It is in the interest of moving away from the vulnerable use of MD5 passwords as quickly as possible, but it's not a final plan, it's a proposal, so I welcome feedback from the community on it. I think it's very close to what I already suggested in the impact analysis, but I'd like to go over it again, and to point out how many maintainers are affected: how big of a job is it to move away from passwords?
It turns out that most maintainers do not use passwords. There are just over 62,000 maintainers, but only 29 percent of them have an MD5 hashed password, and only 3,000 maintainers use MD5 authentication exclusively.
And in the last year, about half of all updates used a password, and that involved almost one-and-a-half thousand maintainers. So there will be a lot of work, but a relatively small percentage of the total number of maintainers will have to do something.
So, a proposed migration plan. Firstly, we will notify the community in advance, as usual, by e-mail to the Database Working Group. We will e-mail all maintainers who are using MD5 hashed passwords in 2025, once there is an alternative, that is, once API keys are in production. We will warn maintainers that we plan to drop support for passwords in six months and ask them to switch to an alternative themselves.
We will then wait three months for them to take some action themselves. From then on, we will select batches of maintainers and proactively e-mail them, warning them that we will remove their passwords in one month, and assist anyone who asks for help. Then finally, we will remove MD5 hashed passwords in batches. I would like to do it in batches so that small numbers of maintainers are affected at a time, to support the registry team that will have to help maintainers with their questions; if anybody gets locked out, they will need to deal with the forgotten maintainer password procedure.
So the plan is, from six months after that, to completely remove support for MD5 hashed passwords, and for any maintainers that are left without other authentication methods to follow the forgotten maintainer password procedure if they get locked out.
So I have come to the end of my presentation. I welcome comments and questions. Thank you.
PETER HESSLER: Thank you. Unfortunately, due to time constraints we do not have time for questions in the room. If there are questions, we encourage you to bring those to the mailing list, or you can find Ed here in person.
ED SHRYANE: Yeah, no problem. Apologies I ran out of time and didn't have time for questions in the room.
(Applause)
PETER HESSLER: So, we have approximately 30 seconds left in our session, and two topics to go over, so I will be as quick as I can.
Over the summer, we started a review of the existing NWIs for the Working Group. There were two NWIs that we talked about, NWI 2 and NWI 17, on displaying history for database objects where available. The community consensus seems to be that both of these are still relevant, and they are related to each other, so we will continue work on them. For the other existing NWIs, the current plan is that the Chairs will send a list out to the mailing list in the next few weeks, asking for consensus on whether we should continue with them or not.
And then the last item, any other business, is something I unfortunately forgot at the beginning: the review of the minutes from the previous meeting, RIPE 88. We e-mailed them out to the list and received no comments, as is traditional. So we ask you in the room: does anyone have any last comments? I see no reaction. So ‑‑ Cynthia, please.
AUDIENCE SPEAKER (Cynthia): In the first sentence, it says "RIE Chair" rather than "RIPE Chair".
PETER HESSLER: Thank you, we will request that that be corrected. Any others? Okay, with that correction I declare the minutes final. With that, this brings us to the end of the Database Working Group session at RIPE 89. We encourage you to rate and review all the talks, not just in this session but in all the sessions you have attended; we want to know everything. If you are a member of the RIPE NCC, voting is open, so please vote. I believe that the RIPE PC election is also still open; that is available to everyone, not just members, so please vote there as well. Thank you very much.
(Applause)
(Lunch break)