RIPE 89


Tuesday, 29th October 2024.
9 a.m.
Main Hall ‑‑ Plenary



BRIAN NISBET: Okay. Hello, good morning, wonderful people of earth. Welcome to the Tuesday morning RIPE 89 plenary session; come in and take your seats, that would be fantastic, thank you very much.

So, I am Brian, along with Valerie; we're both from the Programme Committee, and we'll be chairing this session this morning. A couple of housekeeping notes, and then we'll talk briefly about the session. One thing the NCC asked me to let you know is that an attendee at the meeting yesterday tested positive for ‑‑ it hasn't gone away, you know ‑‑ Covid. This is also a reminder that if you are concerned, there are masks and also Covid tests available from the NCC desk. And a general piece of advice: if you are not feeling well, don't come to the meeting, folks!

Just be nice to your fellow human beings. A reminder that there is still time to nominate yourself or, with their permission, somebody else for the PC elections; the nomination phase finishes this afternoon. A general reminder to rate the talks, and we'll tell you that again at the end. And that's the housekeeping. In this session this morning, we have three excellent speakers, all of whom come from the RACI programme, the RIPE Academic Cooperation Initiative. These are usually, and in this case definitely, younger members of the community in academia, researching and working on really cool stuff, who come here to present their ideas and to interact with all of you people who operate real networks, and hopefully mutual learning will occur. Also, please tell your universities about the RACI programme, so we continue to get more wonderful people along to talk to the community.

So, with all that, I will invite Daniel to the stage. Yes, he is over there. This is Daniel Otten from Osnabrück University, and he will be talking about green segment routing. Daniel.

DANIEL OTTEN: Hi everyone, today I am going to present a research project worked on at Osnabrück University called green segment routing. Here we go. What is this project about? We all know about climate change, and our IT infrastructure is responsible for a lot of emissions; in fact, it's not even easy to find concrete numbers, but IT causes about as much emissions as India or Russia, we are on that level.

So in this project, we want to reduce the ecological impact of the IT infrastructure and more precisely of backbone networks.

And why should this even be possible? When we look at the utilisation of a backbone network ‑‑ here we can see an average day in a European tier one ISP ‑‑ we have a busy hour in the evening, but in the morning, at one o'clock, the traffic drops below 50% of the busy hour traffic.

So in fact, between 1 o'clock and 9 o'clock in the morning, major parts of the infrastructure are not needed, because the network is built to carry the traffic of the busy hour, not to transport the traffic of these low load periods.

And that's where we started. Next, we had to learn about the power consumption of the backbone network, and for that we collaborated with the BelWü network in southwest Germany. We learned that the power consumption of a backbone network is mostly made up of the power consumption of the routers, and the power consumption of the routers is mostly made up of the power consumption of the line cards.

So that's all we knew, and then we started to evaluate what could be a possible target function for a green traffic engineering algorithm, and we came up with three ideas. Maybe it's a good idea to turn off unneeded routers; maybe it's sufficient to turn off unneeded ports; or maybe we need to turn off, like, eight ports per router, in a way that lets us turn off a whole line card or some other hardware component.

And yeah, when we talked to the people from the BelWü network, we quickly learned that turning a router off isn't an option, because to turn a router off we need to steer traffic away from it, and it can no longer be used for peering, for customers and so on.

So the next thing we did is look into the power consumption of the active components. For that we had a cron job running for four weeks, capturing the power consumption and the amount of traffic the component had to process. On the upper side we can see the power consumption of the component, and below we can see the traffic the line card has to process, and both graphs follow a similar pattern. But when we look more closely, we see the power consumption hovers around 200 watts, varying only a little, while the amount of traffic varies from zero load to full load. So in fact the line card isn't very adaptive to traffic changes when it comes to power consumption. Here we learned: to reduce the power consumption, we really need to switch parts of the network off.

And then we tried to find load scenarios in which we could reduce the power consumption. We did that using a traffic generator, trying to figure out what happens to the power consumption under different loads. Just to give you a quick overview: we tried to distribute traffic over different ports and over different NPUs, and we found that no matter what, we could only reduce the power consumption by 4 to 5 watts. Even when we take the NPUs into account, there's nearly no effect on the power consumption. So, in the end, pretty disappointing. But we are an academic research project, so we simply assumed we can turn off a whole line card; let's see what we can achieve when we develop algorithms for that.

So we started living in this world: this is the network topology during the day, and at one o'clock in the morning we switch to a night mode, turning off unneeded line cards on every router, and thereby we reduce the power consumption of the whole network.

And how do we want to achieve this? To turn off a whole line card, we need to steer traffic away from that line card, and as our project is called green segment routing, we relied on segment routing. Segment routing is best explained by an example.

Here on the right side I have an example topology, with one flow entering the network at A and leaving the network via C. If we want to steer traffic away from, let's say, router B, we need to steer the traffic onto the upper path, and we do this simply by adding an intermediate target: when the traffic enters the network, we add the intermediate target, here for example G, and from there it is forwarded on to C.

And by doing so, we split the path of the flow into two segments; that's why we call this technique two‑segment routing, or 2‑SR. Many other researchers have shown that two‑segment routing is sufficient for nearly all traffic engineering tasks, so in this project we also relied on two‑segment routing.
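
To make that concrete, here is a minimal sketch of a two‑segment path in Python with networkx, using the example topology above; the code is illustrative, not the project's implementation:

```python
import networkx as nx

# Example topology from the talk: lower path A-B-C and upper path A-G-C.
topo = nx.Graph()
topo.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1),   # lower path through router B
    ("A", "G", 1), ("G", "C", 1),   # upper path through router G
])

def two_sr_path(g, src, dst, via):
    """A 2-SR path: the IGP shortest path src->via followed by via->dst."""
    first = nx.shortest_path(g, src, via, weight="weight")
    second = nx.shortest_path(g, via, dst, weight="weight")
    return first + second[1:]       # drop the duplicated 'via' node

# Adding the intermediate target G steers the A->C flow away from router B.
print(two_sr_path(topo, "A", "C", "G"))   # ['A', 'G', 'C']
```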

So this is how our general procedure looks: we take the topology and the traffic matrices, build an integer linear programme (ILP), and it gives us a reduced topology; that's the world we live in. The first thing we wanted to find out is how good 2‑SR is compared to a mathematically provable upper bound for the problem, so we implemented two versions of this algorithm. The first, the MCF‑LC algorithm, can reduce the number of active line cards in the network as long as the maximum link utilisation stays below 70%, but every path in the network is allowed, so this is clearly a theoretical upper bound; it's not implementable at all. The second algorithm is the 2‑SR version of the algorithm.

Here we simply limited the paths to segment‑routable paths, and again the upper bound for the link utilisation is set to 70%. We chose to keep the maximum link utilisation below 70% to have some spare capacity for unseen events like traffic spikes or link failures.
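
As a rough illustration of what such an ILP looks like, here is a toy line‑card minimisation model using PuLP; it assumes one line card per link and precomputed candidate 2‑SR paths, a drastic simplification of the MCF‑LC and 2‑SR models from the papers, with made‑up capacities and demands:

```python
import pulp

links = {("A", "B"): 100, ("B", "C"): 100, ("A", "G"): 100, ("G", "C"): 100}
demands = {("A", "C"): 40}                     # demand volume, same units as capacity
paths = {                                      # candidate 2-SR paths per demand
    ("A", "C"): [[("A", "B"), ("B", "C")],     # direct path via B
                 [("A", "G"), ("G", "C")]],    # detour via intermediate target G
}
MLU = 0.7                                      # maximum link utilisation bound

prob = pulp.LpProblem("green_sr_sketch", pulp.LpMinimize)
active = {l: pulp.LpVariable(f"on_{l[0]}{l[1]}", cat="Binary") for l in links}
use = {(d, i): pulp.LpVariable(f"use_{d[0]}{d[1]}_{i}", cat="Binary")
       for d in paths for i in range(len(paths[d]))}

prob += pulp.lpSum(active.values())            # minimise the number of active links
for d in paths:                                # every demand picks exactly one path
    prob += pulp.lpSum(use[d, i] for i in range(len(paths[d]))) == 1
for l, cap in links.items():                   # a used link must be on and under MLU
    load = pulp.lpSum(demands[d] * use[d, i]
                      for d in paths for i, p in enumerate(paths[d]) if l in p)
    prob += load <= MLU * cap * active[l]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("links kept on:", [l for l in links if active[l].value() == 1])
```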

Then we evaluated both approaches on two different datasets. The first is a publicly available dataset, the REPETITA framework, which consists of several backbone topologies, but we had to make two changes to it. The first change is that we had to reduce the amount of traffic by 50%, because the traffic in the REPETITA framework was made to mimic the traffic of the busy hour; as we were looking at the low load period, we needed to reduce the amount of traffic, so we just used half of it. And there is no hardware information included in the dataset, so we simply assumed one particular line card is used throughout the whole network; in effect, whenever we can turn off eight ports on a router, we can assume we can switch a line card off.

The second dataset we used consists of several real‑world instances given to us by a European ISP; it includes traffic data and topology data, and we took one snapshot from every month of 2020 and 2022. All snapshots were taken at one o'clock in the morning, which is when the low load period starts, and we made the same hardware assumptions as for the REPETITA dataset.

Okay, let's look at the results. On the left side you can see how good 2‑SR is compared to MCF, and we can see that out of 21 instances, 18 could be solved to the optimal value using 2‑SR; it's nearly as good as allowing every possible path in the network. Moreover ‑‑ here you can see the computation times of both algorithms; please note it's a logarithmic scale ‑‑ 2‑SR most of the time speeds things up massively. Here, for example, it's the difference between hours and minutes.

So, in this first algorithmic paper, we showed that 2‑SR approximates the MCF upper bound well and that it reduces the computation time of our algorithms.

The next thing we looked at is the real world dataset, and here we found it's not possible to evaluate the MCF approach any more, because the instances are much bigger than those in the REPETITA framework and MCF is very time consuming; so here it's not possible to obtain a mathematically provable upper bound. But 2‑SR is computable, so here we learned that the heuristic works on bigger instances, and we can turn off between 70 and 80% of all line cards.

But I have to make very clear that this isn't a solution you can take off the shelf and apply to your network; this is the theoretical upper bound of what could be achieved using 2‑SR.

The next thing we considered: when we reduce the number of line cards in the network, we clearly reduce the amount of spare capacity in the network.

So of course the network isn't as resilient to errors as it was before, and therefore we included error handling in our algorithm.

When you look here on the left side, we have two flows entering the network at point A; the blue flow heads towards C and the red one towards E. Assume the link between A and C fails: both flows then have to utilise the link between A and D, and if both are big enough, the link may become overutilised.

This could have been avoided by installing a second routing policy for the red flow: if we send the red flow over B, then both flows are separated again and the congested link isn't congested any more.

And this policy can stay active in the network even when the network is fully functional, because both flows are now separated no matter what, whether the link between A and C is active or not.

So we included this in our algorithm. In the first step we used nearly the same algorithm as before, with only slight changes. We kept the MLU below 50% to have more spare capacity in the network to, yeah, handle errors. Then we made sure no link is turned off completely: while in the first algorithm we wanted to find out the theoretical upper bound compared to a 2‑SR version, here we want to get rid of critical errors, and therefore we don't want to alter the topology too much by removing a whole link from the network.

In the second step, we took the solution of the first step and made it failure resilient: we simply checked our error cases, and if a link is overloaded, we add the corresponding constraint to our integer linear programme; then we repeat this process until we are done.

Theoretically, it would be possible to add all possible failure constraints in the first step, but you have to keep in mind that you need one failure constraint for every failure, for every ingress node, every egress node and every traffic demand, so the number of possible constraints is far too high to have a model that is computable. If we added all possible failure constraints in the first step, we would have a model that is not computable on any computer on earth; we needed this iterative process to solve the problem.
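
Structurally, this is the classic lazy constraint generation loop; the sketch below uses hypothetical helper names standing in for the ILP solve and the failure simulation described in the talk:

```python
def solve_reduced_topology(demands, failure_constraints):
    ...  # build and solve the ILP with only the failure constraints gathered so far

def overloaded_scenarios(topology, demands, scenarios):
    ...  # simulate each failure scenario on the reduced topology,
    # returning those that overload some link

def green_sr_with_failures(demands, scenarios):
    constraints = set()
    while True:
        topology = solve_reduced_topology(demands, constraints)
        violated = set(overloaded_scenarios(topology, demands, scenarios))
        new = violated - constraints
        if not new:          # no remaining failure scenario overloads a link
            return topology
        constraints |= new   # per the talk, only ~1% of all constraints ever get added
```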

Then we again evaluated our approach, on six different topologies taken from the same European tier one ISP, all snapshots taken at one o'clock in the morning and with the same hardware assumptions as before, and we included the following failure scenarios. On the one hand we had single link failures; that's a failure like I showed you before, where all links between two routers fail. And we included so‑called shared risk link group failures; that's a group of links that share a common risk, for example links that lie along the same route, and if a construction worker damages one cable, he damages all the cables, so when one link in that group fails, mostly the other links will fail too.

So these were the failure scenarios we included. Let's have a look at the evaluation: we see the computation process for all six instances, and we can see, first of all, that it worked.

After at most 21 iterations, we were done; there were no critical errors in the network any more. Moreover, we could show that our initial assumption, that keeping 30% spare capacity in the network is enough, doesn't work at all. Here you can see, especially for the shared risk link group failures, nearly 75% of them lead to a critical error somewhere in the network; they may disconnect participants from the network or overload other links.

It's also worth looking at the second iteration: here we are actually making it worse than before, because due to our iterative process we add a failure constraint only when it is overloading the network. So by fixing the first few errors, we may cause other errors.

But after the second step, the model is aware of these kinds of errors and removes them one by one, and after at most 21 iterations we were done and there were no critical errors in the network any more.

And to show you that the iterative structure of the algorithm is really needed, here we can see the number of failure constraints added compared to the number of possible failure constraints. We can see that at most we added 1% of all possible failure constraints to the model, so we could leave out over 99% of all failure constraints. So here we could show the problem is computable if you split the ILP into several smaller ILPs. And lastly, let's look at the number of line cards: the red line shows the results of the first experiment, and the black crosses show how much extra capacity is needed to get rid of all critical failures, and we can see it's not that much.

At most 10% more line cards are needed to get rid of all pre‑defined failure scenarios, so again we can see it's theoretically solvable to include errors in green traffic engineering.

So last but not least, let's look at the papers. The first paper, where we describe our energy model and our measurements, was published at the ICC one year ago, and it's called, yeah, 'On modelling the power consumption of a backbone network'. The second paper is the paper where they describe the REPETITA framework and the included instances, the third paper describes the 2‑SR versus MCF comparison, and the last is how we included error scenarios in our evaluation.

So, to wrap things up: we did a traffic analysis in this project, we did some measurements, and I omitted all the complexity analysis stuff; this was a pretty theoretical project, and we validated our work on real world instances. But in the future, of course, we have to reduce the number of policies; the algorithms currently tend to configure a lot of policies, I think more than one thousand, to fix for example the error scenarios, and we need to reduce that number to make the solution understandable for humans. And we need to consider additional constraints, like delay constraints and so on.

So, my time is up. Do you have any questions?

AUDIENCE SPEAKER: Marco from Seeweb. Firstly, congratulations for doing actual power measurements on routers, because your colleagues at past meetings, I feel, based their research on totally unwarranted assumptions. Then an operational note: no actual operator is going to care about this kind of work unless you also consider latency constraints, because customers are interested in latency, they care about that, so you should really factor that in as well. Thank you.

DANIEL OTTEN: Thank you for your feedback. We are currently working on that problem; the project is still going on and will be finished in October 2026, so we have time to include this.

VALERIE AURORA: We have got a question from Meetecho.

BRIAN NISBET: Christoph Chan from Allianz Technology SE: does regularly switching the line cards on or off have any significant impact on the life span of the line card, mean time between failures, etc.?

DANIEL OTTEN: Yes. This is a theoretical project, and I know line cards don't have a hibernate mode, and I also know the issues: when you turn a line card off, you are not sure it will turn on again when you want it to. So that has a massive impact on the life span of a line card.

AUDIENCE SPEAKER: Henry from Princeton University. If I heard you correctly, you remove unused ports on a router and assume you can turn off a line card. Did you also consider the distribution of ports across multiple line cards? Because you could have, you know, seven ports on this line card and then three ports could be on a different card.

DANIEL OTTEN: I think I forgot to point out that we assume we have a perfect port to line card mapping. So, as I told you, we compute the night topology, and we assume the line card to port mapping perfectly matches these night topologies; in fact, due to this line card to port mapping, we achieve mostly perfect night topologies.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: ... I am a developer of a controller for segment routing, so I am thinking about open standard solutions, open standards used for communication between controllers and routers. What extensions to our standards should there be, what should be standardised in order to achieve this with different vendors, not just Cisco but any other vendor? I assume that for BGP link state it needs to expose a line card ID, and there needs to be a way for the controller to instruct the router to shut down the line card.

DANIEL OTTEN: Yes, I think that needs to be standardised. We need a way to turn off components, and we need a way for the operator to be sure these components turn off and on again. On the segment routing side, I think it's sufficient to have a way to easily configure an intermediate target for some flows; it's fully sufficient if you can say that every flow entering at point A that heads towards C is first sent to G. Maybe you can use IP and IPR, something like that. Yeah.


VALERIE AURORA: All right, thank you very much.

(APPLAUSE.)



Our next talk is 'Characterising and mitigating phishing attacks at ccTLD scale', from Thomas Daniels of KU Leuven, the department of computer science.



THOMAS DANIELS: Hi, good morning everyone, I am Thomas, I am a PhD candidate, and together with researchers from other universities and from registries, we worked on a large scale study to characterise phishing websites: what do they look like, what is their lifecycle, and how are they mitigated? Our findings were presented in a paper at the Computer and Communications Security conference this month, and today I am going to present the results here as well.

So we think that phishing is a major threat on the internet; it has been identified by several security organisations as one of the most important security threats, and that's why we decided to do research on it. Phishing is also big business, because there have been instances of so‑called phishing‑as‑a‑service providers, which basically means that, say you are a criminal and you want to launch phishing attacks, all you need to do is pay them a couple of hundred dollars per month and they will provide you with hosting infrastructure, website templates, and so on. This makes it a lot more accessible for groups of criminals to do phishing.

So the takeaway there is that we have large groups of criminals scamming many vulnerable people.

Our study is the first that brings together three country code top level domain (ccTLD) registries to work on this topic. Specifically, we worked with the Netherlands' .nl, Ireland's .ie and Belgium's .be. We did this study over ten years, and the advantage of the ccTLD point of view is that we have a complete view over our domain name zones, and because there are three of us, we can compare the results and see the similarities and differences between us.

Our study is of larger scale than most earlier work on phishing, in terms of time, because we investigate four to ten years of data while many other studies are limited to one year.

Also in terms of the companies being impersonated: we found more than a thousand. And in terms of the number of phishing domains in our dataset: almost 30,000.

There are considerable differences between the three top level domains which are important to keep in mind when interpreting the results. First there is the size: .nl is by far the largest, with about six million domain names in its zone, and .ie is the smallest with 330,000; .be is in between. A second notable point is that everybody is free to register a .nl or .be, but .ie is limited to people with a connection to Ireland, and you have to prove this if you want to register a .ie domain name; this is manually verified before you get the domain.

As registries, we know about all the domains in our zones, but we still need to know which domains had phishing on them. For that, we are all subscribed to the blocklist services of Netcraft, which provides us with the data we need. Specifically, the dataset tells us on which domain names phishing was found, but also the actual URL, the timestamp at which it was found, and the company the phishing website tries to impersonate. We can use this dataset in conjunction with the registry database, which is the registration database with all the information about the registered domains, and also with measurements at the web and DNS level.

Here's a first plot, which shows the total number of phishing domains per month for all three TLDs over time. We counted unique second level domains; for clarity, a second level domain is a domain directly below the top level domain, like example.be. There is a slight decrease over time in the number of phishing domains.

So phishing tries to deceive users by mimicking companies, and we looked at the characteristics of the impersonated companies. Citizens tend to have trust in their country code top level domain, because it's used by governments and also by companies they are familiar with, so a question is: do attackers exploit this trust for phishing? The chart here makes the distinction between international and local phishing. By international phishing, I mean that the impersonated company is headquartered in another country than the one associated with the TLD; by local phishing, that the company is within the country.

And we see that most phishing is international; the purple bar is a lot bigger than the local phishing. This seems to indicate that attackers don't really care about the TLD they use. But is that really the case?

If we look at the domain names for international phishing and local phishing separately, we see a clear pattern, which is that international companies tend to be impersonated with domains that are years old, while national companies are impersonated with domains that are most of the time only a couple of days old.

By age of a domain, I mean specifically the time between the registration, which we as the registry know, and the time the phishing was detected, which comes from the Netcraft dataset.
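
In code, that age is just the difference between the two timestamps; a minimal sketch, where the seven‑day cut‑off for "new" is an illustrative assumption, not the study's exact threshold:

```python
from datetime import date

def domain_age_days(registered: date, detected: date) -> int:
    """Age = time between registration (registry data) and detection (blocklist)."""
    return (detected - registered).days

def likely_strategy(age_days: int, new_threshold: int = 7) -> str:
    # Heuristic split described in the talk: new domains tend to be
    # maliciously registered, old domains tend to be compromised.
    return ("maliciously registered (new)" if age_days <= new_threshold
            else "compromised (old)")

print(likely_strategy(domain_age_days(date(2024, 10, 1), date(2024, 10, 3))))
print(likely_strategy(domain_age_days(date(2019, 5, 1), date(2024, 10, 3))))
```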

So this seems to indicate two attack strategies: national companies are targeted using new domain names, and international companies using old domain names.

The ratio is about one fifth of the phishing domain names being new domains. But that's a rough number, because for .nl it's higher and for .be it's lower. Why do we have this difference?

The difference lies in how the attackers obtain the domain names. For the old domains, what happens most of the time is that those domain names are owned by legitimate people or businesses, and they have a website associated with them that can be exploited using security vulnerabilities. That's what the attackers do: they scan for such domain names, they exploit them and they host phishing on them.

In contrast, the new domain names are maliciously registered: they are registered by the phishers themselves. We see those old, compromised domains tend to be used for international phishing attacks, and maliciously registered domains for local phishing attacks; we see it on .nl and .be, where if it's a new .be domain, it usually targets a Belgian company.

The usage of maliciously registered domains allows the attackers to better mimic the company they want to target, because they have control over the domain name. So they can choose names like 'activate credit cards' or 'online verification', or names that look very close to the bank they are targeting, for example. We have also seen that those domain names are sometimes chosen in Dutch, which also emphasises the point that these tend to be used by local attackers. But registering the domain name yourself imposes a cost, because they need to pay for the registration and they also need to set up the hosting infrastructure.

In contrast, with the compromised domain names the attackers don't have control over the domain name; they just take whatever they can get, but there are no costs associated with registration, because they don't make the registration themselves; they take advantage of a registration that was done by someone else a long time ago. Here's an overview of the top ten companies impersonated in the .nl zone. Microsoft tops the list; it's a very popular target for phishing, and we also see that on .be and .ie. We see in this case it's a nice mix of international and local companies, and this table also highlights the difference between international and local phishing, because the international companies all have median domain ages of over a thousand days, while the domain names targeting the local companies have a median age of one or two days.

And we see in this case the local companies are all banks. It is also true in general that banks and financial services are very popular targets across all the TLDs we looked at, alongside sectors like technology or internet services; those are very popular as well.

Now, I have only talked about .nl and .be in this comparison, but what about .ie? Well, we have only found two new phishing domains on .ie, which makes sense because of the restricted registration policy I mentioned earlier. This policy puts up a very high barrier for attackers to register .ie names, because they would have to prove who they are and how they are connected to Ireland. There has been a case of attackers using fake IDs to register a bunch of .ie domain names, but this is more difficult than just registering a domain name that you are completely free to register. However, this policy of .ie does not protect against compromised domain names, because, as mentioned before, the compromise is not related to the domain registration itself.

Let's take a look at how the companies overlap between the different TLDs. Overall, we found more than a thousand companies in our dataset, of which 193 appear in all three TLDs, and these are the big, well known global companies such as Microsoft and Apple.

There's also a high overlap between .nl and .be specifically, 247 companies, which is a result that makes sense, because there are several ties between Belgium and the Netherlands, such as the fact that they share a language, and many banks operate in both countries. We have also seen domain names where both Belgian and Dutch companies are targeted on the same domain name.

So that also makes sense.

The other numbers mostly seem to be decided by the attack surface: because .nl has a larger domain name space, with six million domains, there is more potential for domain names to be compromised, and for .nl we have ten years of data while for the others we only have four years; that also translates into finding more unique companies on .nl.

Let's take a look at the lifecycle and mitigation of the phishing websites. One way to look at the activity is by measuring the DNS traffic at the authoritative server of the registry. Here's an example of a maliciously registered domain name: we see that shortly after the registration, the traffic spikes.

This is when the phishing campaign is launched by the attackers. Shortly after, it is detected and a notification is sent out, after which the activity just tends to decline, except for some peaks, and then it's mitigated at the DNS level.

Another example is a compromised domain name. This one belongs to a legitimate company which got hacked, and the phishers put up, well, a phishing website on it. In that case we see that the activity is steady and rather low until the moment the attackers launch their campaign, when a big spike happens; later on, the domain gets mitigated by the hosting provider, so the domain name stays with the original registrant and the website can continue operating.

The fact that in the first example the mitigation happened at the DNS level, and in the second case it was done by the hosting provider, indicates that phishing mitigation is not just one single event carried out by one single party.

There are many different parties involved that can mitigate the phishing website independently. Say at the DNS level: if you register a domain name, you don't purchase it from the registry directly but you make use of a registrar, so at the DNS level both the registry and the registrar could take down the domain, and it's also possible that you have a name server provider that is not the registrar, which could be another party that can do the mitigation. And at the web level, there is the hosting provider that could take down the website and make it inaccessible for visitors.

The names on the slides are examples; it could be a different situation for any other domain, except for the registry of course, because each TLD has its own registry.

If a registry takes action, they have the choice between different ways to do so. They can suspend the domain, which means they take it out of their zone file but it stays in the name space; they can delete the domain altogether, even from the name space; or they can keep the domain both in their zone and in the name space but change the name servers, so that visitors are pointed to a safe location.

Which action is taken by the registry, and the moment they do so, differs between registries, because, well, they have their own policies, and a summary of these policies is given on the slide ‑‑ well, not at the moment!

There it is. So a summary of these policies is on the slide; there is quite some difference between the registries, and it also shows in the results of how those domains are mitigated, which we also looked at. Here it is for the new domains, specifically the mitigation at the DNS level. We looked at the difference in who mitigates domain names, whether it is the registry or the registrar, and we see that for .nl, by far most of the mitigation is done by the registrar. That's because .nl notifies the registrars and the hosting providers, who then take action.

In contrast, .be suspends a lot more domain names themselves; they also notify the other parties, but they take action themselves as well.

The bars here don't add up to 100%, because this only shows mitigation at the DNS level; the rest of the domains are mitigated at the web level by the hosting provider.

So this was for the new domains, which are mostly maliciously registered. For the old domains, we see there is a lot less mitigation at the DNS level overall.

And that's because these domains tend to be compromised, so mitigation at the web level is preferred, so that the legitimate operation of the website that was originally there is not impaired too badly. An exception is when the original registrant leaves the domain hanging for some time and it ends up hosting a phishing attack, which is what is referred to as H domains.

We can also measure how fast DNS mitigation and web mitigation are compared to each other, because for .nl and .ie our Netcraft dataset has a timestamp of mitigation: when the first action happened that rendered the phishing website inaccessible. We can compare this with the DNS mitigation as seen from the registry point of view, and what we see is that within a day, about 50 to 60% of the domains are mitigated at the DNS level; but if you look at the same percentage at the web level, it is already reached after six hours, which means that web mitigation is generally faster than DNS mitigation.
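
In code, that comparison boils down to the fraction of domains mitigated within a given cutoff; a tiny sketch with made‑up delay values, not the study's data:

```python
def share_within(delays_hours, cutoff_hours):
    """Fraction of domains whose mitigation delay is at most the cutoff."""
    return sum(d <= cutoff_hours for d in delays_hours) / len(delays_hours)

dns_delays = [2, 30, 50, 5, 26]   # illustrative DNS-level mitigation delays (hours)
web_delays = [1, 4, 6, 3, 20]     # illustrative web-level mitigation delays (hours)

print("DNS, within 24h:", share_within(dns_delays, 24))
print("web, within 6h: ", share_within(web_delays, 6))
```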

Here's a case study of phishing against a French bank from a .nl domain name. This screenshot was taken by the in‑house crawler of .nl, which tries to visit new domains every couple of hours. So we have several measurements of the website, and based on that we can reconstruct the timeline of this website, which is shown on this chart. Each star at the bottom is a moment when the crawler visited the website and took a snapshot; the red line is when it was mitigated. The reconstruction from the crawler data is that before the mitigation, the crawler saw the phishing website, which is what the screenshot was from. Then it was mitigated by the hosting provider, after which it was unreachable for a couple of hours and then replaced by the default web page of the hosting provider.

What we can also learn from this graph is that even before the notification, which is the green line, the crawler had already visited the phishing website. This was analysed after the fact and was not known at the time, but it shows that in some cases the position of the registry, which knows about all domain names, could be leveraged to reduce the detection time of phishing websites. Of course this will not help in all instances, because the crawler can only try to visit the home page; if the phishing website is hidden behind some deeper URL structure, the crawler cannot know about it. But in some cases it may be an opportunity to detect the websites earlier, if it could be done automatically at the crawler.

So, to compare the two attack strategies: the new domains make up one fifth of our phishing domains, and they target a small share of the companies by leveraging the trust in the ccTLD; a restricted registration policy inhibits those registrations, and mitigation happens at both the DNS and the web level. In contrast, the old domains make up most of the domain names, and they also target the highest number of companies.

Restricted registration does not help with them, and mitigation happens mostly at the web level.

What we found is that most research on phishing is about new domain names. But we have seen that most phishing takes place on compromised domains, so we believe it would be helpful if more research were done on compromised domain names, in order to detect them faster. We also think it's a good idea for ccTLD registries to take a look at their registration and abuse policies, because as we have seen, a policy can make a big difference in how a domain is mitigated.

So, to wrap up the presentation. I will give a quick summary.

We collaborated with three European country code top level domain registries on the largest phishing characterisation study, in which we found two main attack strategies: national companies are targeted using new, maliciously registered domain names, and international companies are targeted using old, compromised domain names. Policy has a big impact on mitigation, because we see that .ie has almost no maliciously registered domain names, and between the others, DNS mitigation differs in who performs it, the registry or the registrar. Finally, we believe having more research done on compromised domains would be good. If you are interested in reading the full paper, visit the URL or scan the QR code, and I am happy to take any questions.



BRIAN NISBET: Cool, thank you very much.

(APPLAUSE.)

It's nice to hear that if you are being phished, it's probably not an Irish person doing it. Dodging the Belgians though, mm...
So, please.

AUDIENCE SPEAKER: My name is... I am from the Internet Society Pulse. I have a question: did you also have some indication of whether the tendency towards which TLD is being used for phishing is based on the domain space size, or on the dependency of local users on that ccTLD, for example?

THOMAS DANIELS: Well, in terms of domain name space, if your TLD has more domain names, there's more opportunity for domains to be exploited. In terms of the dependency of citizens on their TLD, I don't know, because I think that's pretty similar for .be and .nl, and .ie is a different case: in Ireland, .com is still very popular, but the registration policy is also different, so it's not a completely fair comparison. I think to make such a comparison you would have to find a top level domain in a country where .com is also more popular but where they have an open registration policy.

AUDIENCE SPEAKER: For .nl, the Dutch citizens really depend a lot on .nl websites, so maybe there's something there.

THOMAS DANIELS: I think they do so a bit more than for .be; in terms of population, .nl has more domain names per capita. I don't know if the difference is big enough, because we don't see a lot of difference there.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: Hello, Johanne from the University of Messina. I just had a comment on the compromised domains: I think you might have a way for applications to notify the TLDs directly about URLs, so it might be easy to detect compromised domains this way; it would be like, there's a phishing attempt using this URL and this domain name, and, I don't know, maybe TLDs can take a look at this.

THOMAS DANIELS: Yes, in fact that happens with Netcraft: if a domain is reported to them, they will notify the registry and maybe some other parties. So yeah, the blocklists will still play a big role, but it's protection after the fact, let's say, because by the time the blocklist entry gets made, there is still time needed for mitigation, and victims may have been made. It's not easy to detect compromised domains before they make victims, and it probably involves different parties that need to work together on it. But it would be interesting if progress could be made on that, because it would prevent a lot of victims.

AUDIENCE SPEAKER: Yes, thank you.

AUDIENCE SPEAKER: Thank you very much for the presentation. A very interesting question about the methodology: you seem to define a phishing domain as one that appears in this list of Netcraft. Have you checked what the definition of phishing is for Netcraft, have you assessed the quality of the Netcraft list, or maybe even compared it with the quality of other providers?

THOMAS DANIELS: We didn't check their definition as such, but we did use a different source for comparison: we compared it with the lists of the Anti‑Phishing Working Group, and we found that they have a lot fewer domains in their datasets, and the ones they had were almost all also in the Netcraft dataset. We also did some manual checks, like with the crawler data, and we did not find reasons not to trust the reports made by Netcraft.

AUDIENCE SPEAKER: Thank you very much.

MARCUS DE BRUIN: Marcus de Bruin. Thank you, that was very, very interesting and very relevant. Just a clarifying question: after the phishing campaign is launched, you saw a spike, and this spike is DNS requests for the second level domain at the TLD level?

THOMAS DANIELS: Yes, it's resolvers that make contact with the authoritative DNS server at the registry. It's not a complete view, but we can still see the patterns in it.

MARCUS DE BRUIN: Okay, and the notification happening afterwards, is that the TLD notifying the registrars and hosters? Is that correct?

THOMAS DANIELS: Well, it's a notification by Netcraft, but that can happen simultaneously, yes. It's basically the moment of detection and notification.

MARCUS DE BRUIN: Okay, so it's the TLD that is not notifying the hoster?

THOMAS DANIELS: Yes, they do that. I think for .nl and .ie, it's Netcraft that does it directly. So I guess you could consider detection and notification as being synonymous in our study.

MARCUS DE BRUIN: Okay, thank you.

BRIAN NISBET: Cool. Yes, I am on the policy advisory committee for .ie, and the Netcraft stuff has been very interesting, and this comparison also extremely interesting, so thank you very much for the talk. Yes, thank you very much.

(APPLAUSE.)


So our third presentation this morning is from Aniketh Girish from IMDEA Networks; he is talking about the fact that the attack is coming from inside the house, and this is 'Characterising local network communication and threats in smart homes'. Thank you.

ANIKETH GIRISH: Hi everyone, thank you for the intro, and I am pleased to be here. I work mostly on security and privacy threats in smart home and mobile ecosystems. In this talk, I will be talking about characterising local network communication and the threats associated with it in the smart home ecosystem, work done in collaboration with folks from Berkeley, NYU, the University of Calgary and AppCensus. So, straight off the bat: last year I got obsessed with making my home smart. It started by adding a small smart light bulb; I set it up, connected it to the internet, and I realised I could control it with my smartphone.

But it didn't end there. I wanted to go all in, and I decided to set up a smart TV, and then a camera, and then a voice assistant. And I found out these devices can now communicate with each other in my local network: for instance, I can watch my camera's live stream, or sync my home lighting with what I was watching on the TV. So it's pretty cool, right?

But the security researcher in me started to wonder: this network of communication exists and provides a handy set of features, but is it really safe? These devices enable continuous and seamless interaction, but is it privacy safe?

To set the scene with what has been understood so far: there has been a ton of research looking into the communication between the devices and the cloud, and into what happens from the app or the mobile device to the device itself.

But there's a lack of understanding of what happens within the local network, and what sort of local communication and threats exist within it.

One might argue that all of these devices are behind a NAT and protected from external threats, but that's not enough: local network privacy protections are broken or missing, so devices can actually broadcast sensitive information in the local network, assuming the local network is trusted, and such information can be exfiltrated by co‑located apps and devices and used for cross‑device tracking, unique household fingerprinting, or even inferring socio‑economic status. We found several companies, multiple parties, doing this; it's worth it for them to collect this information and send it to their cloud.

To understand this ecosystem better, we asked three research questions: what are the characteristics of smart home local network communication; what are the privacy and security threats associated with it; and is local network communication being abused for fingerprinting and tracking purposes?

To answer these questions, we built a test bed of 93 consumer IP devices; for this particular research, we ignored other wireless protocols. We captured all the local network traffic interactions of these devices during an idle period and during interactions, and to provoke more interactions and authentic responses, we set up a honeypot responding to the active scans issued by these devices. We also set up mobile apps and mobile devices to collect local network behaviour in the smart home: with Android phones, we performed both network and runtime analysis to capture unencrypted traffic from more than 2,000 mobile apps, both IoT specific ones and non‑IoT specific ones.

While the test bed gave us an in‑depth understanding of what happens in the local network, we also wanted to see what happens in the wild, so we crowd‑sourced IoT local network traffic from 12,000 devices across 3,000 households, covering 264 unique products.

With that set‑up, we started to investigate how these devices interact with each other. This network graph shows how the devices interacted among themselves in the local network, each node representing a device, with the protocols they used for communication: a simple line represents TCP communication, a dotted line represents UDP, and a thick line represents the usage of both.

With that, we found that they were using 35 different protocols for communication, and nearly half of them were communicating via unicast, for command and control services.

We also found that the 93 devices used broadcast, and 73% were using multicast, mostly for... features and whatnot.

Zooming into the more connected part of the graph, we found that there is intra‑vendor communication within the same ecosystem, such as Apple devices communicating with the Apple ecosystem, and inter‑vendor communication across devices offering interoperable features. With this, we established that these devices are actually super chatty in the local network, but we wanted to understand the privacy and security threats associated with that. We saw that these devices were broadcasting sensitive information into the local network: for instance, the Amazon Fire TV was using UPnP/SSDP to share a device name, and there was a TP‑Link bulb which was sending out geolocation through the TP‑Link protocol. All of this was in plain text; any app could listen to this and siphon it off. We also found more details about this ecosystem and what they collect; this work was done as part of academic research published last year, so feel free to check out the paper, which has the same title as the talk, and I will have the QR code on the last slide as well.
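
To illustrate how low the bar is for "any app could listen to this", here is a minimal SSDP listener in Python using plain UDP multicast sockets; on a typical home network, the NOTIFY announcements of nearby devices, including any device names they embed, arrive in plain text:

```python
import socket
import struct

SSDP_GROUP, SSDP_PORT = "239.255.255.250", 1900

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", SSDP_PORT))
# Join the SSDP multicast group so the kernel delivers the announcements.
mreq = struct.pack("4sl", socket.inet_aton(SSDP_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(4096)
    # NOTIFY headers arrive unencrypted: LOCATION, SERVER, USN, and so on.
    print(addr[0], data.decode(errors="replace").splitlines()[:4])
```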

So far, we have established that these devices are super chatty and send out sensitive information to the local network, and we wanted to understand whether the apps co‑located within this local network are extracting that information or not.

For that, we specifically turned our attention to Android apps and the third party SDKs embedded in them, and we found that Android apps can actually extract local network information without any permission or user consent.

So apps can actually scan the local network and collect local information without any runtime permissions that request your consent.

This essentially bypasses the permission model that exists in the Android ecosystem, to collect sensitive information such as the wifi SSID or BSSID, which can be a surrogate for geolocation, because there exist several databases with this information out there.

We also turned our attention to iOS, but luckily, local network scanning in iOS is a bit more constrained: you need explicit consent and approval from Apple to access multicast sockets, and explicit permission from the user as well to actually scan the local network.

So again, we established that these devices are super chatty in the local network, they are sending out information, and apps can actually collect this information. But is this actually happening in the wild? It turns out it is, and we found two cases that I am going to talk about today. One is where a device was collecting the wifi SSID and BSSID and using mDNS to broadcast this information, and apps which were scanning the local network were able to collect that information, bypass the permission model that Android set up, and send it over to their own infrastructure.

One example of this is AppDynamics, an analytics and profiling SDK, which was collecting UPnP messages that contained the user name, the MAC address and other sensitive information. Another instance we found was where apps were sending out malicious UPnP requests, and devices responded with several device PIIs, which included MAC addresses or wifi SSIDs and BSSIDs, and the apps collected them and sent them to their cloud infrastructure; this was an SDK called Umlaut crafting and collecting this information. And another example is NOSDK, a third party which, interestingly, was using NetBIOS to request every IP and the information related to it in the local network and send it to their endpoint. This library was later classified as malware, and after our report Google actually kicked them out of the Play Store; all the apps that contained it were removed.

Then we wanted to understand whether this local network and device information that apps or devices can collect can be used for some sort of household fingerprinting. To do so, we looked into the large scale dataset we collected and considered three types of identifiers: user names or device names, UUIDs, and MAC addresses. To understand if they can be used for fingerprinting, we used a particular metric, entropy: higher entropy indicates greater fingerprintability, and exposing all three identifiers makes your household highly distinctive, which means that with these three identifiers, a person can know who you are. From our dataset, we found 2,800 exposed UUIDs, and 94.2% of these can be uniquely identified.
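
A sketch of that metric: Shannon entropy over the observed identifier values, plus the share of households whose identifier combination is unique. The three example households below are made up; the study applied this to its crowd‑sourced dataset:

```python
from collections import Counter
import math

def shannon_entropy(values):
    """Shannon entropy in bits of the distribution of observed values."""
    counts = Counter(values)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Each household exposes (device name, UUID, MAC address) -- toy data.
households = [
    ("living-room-tv", "uuid-1", "aa:bb:cc:01"),
    ("living-room-tv", "uuid-2", "aa:bb:cc:02"),   # same name, distinct UUID/MAC
    ("office-cam",     "uuid-3", "aa:bb:cc:03"),
]

print(f"entropy of the full triple: {shannon_entropy(households):.2f} bits")
combos = Counter(households)
unique = sum(1 for c in combos.values() if c == 1) / len(households)
print(f"uniquely identifiable households: {unique:.0%}")
```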

After our analysis, we disclosed all our findings to the vendors and to Google, and reported the side channel to Google; they rewarded us with a bug bounty for our findings. We provided a list of misbehaving apps to Google, and we also reported all the IoT behaviours to the several IoT vendors that were doing this. We also contacted several regulators: in our jurisdiction, since our lab is in Spain, we contacted the AEPD, and we had collaborators from the US, so we contacted the FTC; they all acknowledged the problem and that this is a real threat. We also got several responses from the vendors saying they will be using new identifiers generated at random to replace the current ones, and Google has also been really great at engaging with us, acknowledging that it's a real issue and a harmful invasion of privacy; there are several mitigations, such as implementing a new permission model in their OS, including a new app review process, and including this in the general IoT standardisation efforts as well.

I must emphasise that this attack is not exclusive to Android. Other vectors, such as the ones I have listed here (other IoT devices, smart TV apps, or any visitors that come to your house and connect to your router and your network), can also compromise your information. More and continuous research and tooling is needed to understand in depth what happens, and IoT devices are really hard and expensive to test, so we need more research going on.

So with all of that in mind, we propose three lines of action for mitigating this. From the vendor side, we propose that all device metadata and identifiers be considered sensitive information. We propose privacy by design in the local network as well: promote end to end encryption in the local network as much as possible, provide transparent and usable interfaces and controls for users, include secure by design firmware and timely updates, and harden the supply chain so third parties cannot just hijack the data that is being sent over.

From the policy side, there was a lightning talk yesterday on the Cyber Resilience Act; we recommend that it, and similar regulation like the GDPR in Europe, be more considerate of this kind of threat, and that we include more third party auditing, along with standardisation from the IETF and from bodies like RIPE.

For researchers like me: we should look more into the security and privacy threats resulting from these integrations, we should develop more testing methodologies and assist vendors and independent auditors, and we should design more effective and usable security and privacy controls.

So, that being said, let me conclude my talk by stating that we are the first to characterise the local network communication of 93 smart home IoT devices and 2,000 apps. We found a lot of sensitive information being disseminated in the local network, we found that it can be used for fingerprinting and information harvesting, and we have responsibly disclosed all of these issues to the responsible parties. Thank you, and I am open to take any questions now.



(APPLAUSE.)



AUDIENCE SPEAKER: Wolfgang Tremmel, smart home enthusiast and private citizen. Did you also include open source IoT devices in your study?

ANIKETH GIRISH: No, they were all consumer devices we bought; we did not include open source ones.

AUDIENCE SPEAKER: Perhaps you should.

ANIKETH GIRISH: That should be the next one.

AUDIENCE SPEAKER: Have you discovered any attack vectors against solar panel invertors and batteries?

ANIKETH GIRISH: No, our focus was just on the smart home ecosystem and what happens within the local network, so we didn't look into solar panels; they are also more expensive to acquire on our academic budget.

AUDIENCE SPEAKER: Obviously the risk from hacking an inverter or battery is much higher than from hacking a smart TV.

ANIKETH GIRISH: Yes, it should be something that we should look into next.

AUDIENCE SPEAKER: Thank you.

AUDIENCE SPEAKER: I am curious: when you did your tests, did you try to implement full client isolation and see how many things would get broken? I am not sure you necessarily need to allow your clients to talk to each other all the time.

ANIKETH GIRISH: You can block them off, but that would be at the expense of certain features that these devices offer.

AUDIENCE SPEAKER: I was curious if you looked at how much you would actually sacrifice in this case; maybe everything would just work.

ANIKETH GIRISH: That's another problem: not all of the devices allow you to block that off. You need to be a nerd like me who sets up different VLANs for each device and filters them separately; a normal user doesn't have that sense of control within the device itself.

VALERIE AURORA: I just have a quick question ‑‑ Valerie Aurora ‑‑ and it's just your thoughts, not a challenge: comparing and contrasting with fingerprintability via other methods, in apps in particular. There are a lot of options available, and looking at the local network is one of them. Just your thoughts on: okay, we got rid of this vector, what percentage of fingerprintability techniques does that get rid of, compared to the total?

ANIKETH GIRISH: Can you repeat the question again?

VALERIE AURORA: I will give a quick example: an app can use a number of things on the local phone to create a unique identifier, so I'm curious how much of that attack surface getting rid of the IoT methods removes.

ANIKETH GIRISH: With phones, from a research standpoint, we can identify what information is being leaked, based on the fact that we can decrypt the traffic and figure out what is being sent over and what information is being collected; in the app world, most of what they collect is identifiers, or user behaviour around the phone or app. But with IoT it's harder to inspect, because it's a device without any user interface which would allow you to, like, instrument it, so it's harder to identify what sort of information and data they are collecting, except that at the network level you can identify certain protocols which have always been unencrypted; we found all of these threats from that. So, based on the information being collected over these particular types of communication or channels, that's where we found this information being leaked. Essentially, once you have that, and once you know what different types of identifiers and information are being leaked, you can understand whether a person can be fingerprinted, by applying the same metric used by the EFF to see how unique your household can be.

VALERIE AURORA: So do you think getting rid of this vector would be a big improvement, or a 5% improvement? Do you have a number?

ANIKETH GIRISH: I cannot put it in numbers, but there is also a usability factor: these devices also need certain identifiers to function properly. So it's up to the vendors and the standardisation policies how to actually set these identifiers, and whether they should be persistent, resettable, or generated at random.

VALERIE AURORA: Right.

ANIKETH GIRISH: I will be around if you guys want to have any questions, feel free.

VALERIE AURORA: Thank you very much.

(APPLAUSE.)


VALERIE AURORA: Just a quick reminder that the PC elections have various deadlines happening throughout this week. Don't forget to rate the talks; that helps us figure out what you all want to see next time, at the next meeting. And thank you to all of our RACI participants and submitters, we appreciate it. Enjoy your break.



(APPLAUSE.)

(Coffee break)