
  • My guess would be to force a hierarchy so as to distribute load. DNS is distributed, in a sense. There are the root name servers that know about all TLDs, and then each TLD has its own servers (in practice a single entity controls multiple TLDs and allocates as many servers as needed to answer all DNS requests for them). Those “TLD servers” know about the second level, and either they also know about the lower levels or those are further delegated.

    So fewer TLDs means the “root” DNS servers do not have to keep a huge “phonebook” (mapping each TLD to the IP addresses of the name servers responsible for it) and can therefore be efficient, which means that fewer of them are required. And fewer root servers means it’s easier to update them and keep them consistent. And if nearly everyone can only register second-level domains, then the root name servers do not need to be updated nearly as often.
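    You can actually watch that delegation happen with dig. A trimmed sketch (the exact servers, records and addresses you get back will vary):

    $ dig +trace example.com
    .              NS  a.root-servers.net.   # root servers delegate the com. TLD
    com.           NS  a.gtld-servers.net.   # TLD servers delegate the second level
    example.com.   NS  a.iana-servers.net.   # the domain’s own name servers
    example.com.   A   93.184.215.14         # which finally answer the query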


  • I don’t think that’s the issue. As the article says, the researchers found the flaw by reading the architecture documentation. So the flaw is in the design of the API the operating system uses to configure the CPU and related resources. This API is public (though not open source) so that operating system vendors can do their job. It usually comes with examples and pseudo code showing how some operations work. Here is an example (PDF).

    Knowing how this feature is actually implemented in hardware (if the hardware were open source) would not have helped much. I would argue you would be one level too low to properly understand the consequences of the implementation.

    From the vague description in the article, it actually looks like a Meltdown- or Spectre-like issue, where some code gets executed with inappropriate privileges. Such issues are inherent to complex designs, and no amount of open source will save you there. We need a cultural and maybe a paradigm shift in how we design CPUs to fully address those issues.


  • This looks like one of those WireGuard-based solutions like Tailscale or NetBird, though I’m not sure they are using WireGuard here. They all use a public relay for NAT penetration as well as client discovery and, in some instances when NAT penetration fails, traffic relaying. From the usage, this seems to be the case here as well:

    Share the local Minecraft server:

    $ holesail --live 25565 --connector "holesailMCServer420"

    On other computer(s):

    $ holesail "holesailMCServer420"

    So this would register “holesailMCServer420” on their relay server. The clients could then join this network just by knowing its name, and the relay would help them reach the host of the Minecraft server. I’m just extrapolating from the above commands, though. They could be using a DHT for client discovery. But I expect they’d need some form of relay for NAT penetration at the very least.

    As for exposing your local network securely, WireGuard-based solutions allow you to change the routing table of the peers, as well as the DNS server they use, so that domain names can be assigned to IPs that are only reachable from within another local network. In this instance, it works very much like a VPN, except that the connection to the VPN gateway is made through a P2P protocol rather than through a service directly exposed to the internet.
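    As a rough sketch of the plumbing this corresponds to (plain WireGuard set up by hand, not necessarily what holesail does; the interface name, key paths, addresses and relay endpoint below are made up for illustration):

    $ sudo ip link add wg0 type wireguard
    $ sudo wg set wg0 private-key /etc/wireguard/private.key \
          peer <PEER_PUBLIC_KEY> endpoint relay.example.net:51820 \
          allowed-ips 10.8.0.0/24,192.168.1.0/24
    $ sudo ip addr add 10.8.0.2/24 dev wg0
    $ sudo ip link set wg0 up
    $ sudo ip route add 192.168.1.0/24 dev wg0   # the remote LAN is now routed through the tunnel
    $ sudo resolvectl dns wg0 10.8.0.1           # use the remote network’s DNS for its internal names

    Tools like wg-quick automate exactly these steps from a single config file.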

    Though in the instance of holesail, I have heavy doubts about “securely”, as no authentication seems to be required to join a network: you just need to know its name. And there is no indication that choosing a fully random name is enough.
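    If the name really is the only secret, the least one can do is make it unguessable, e.g. (assuming the --connector flag accepts arbitrary strings):

    $ holesail --live 25565 --connector "$(openssl rand -hex 16)"

    Even then, whoever operates the relay presumably sees that name, so this is obscurity rather than actual authentication.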