How much do you need? Show your maths. I looked it up online for my post, and the website said 1747GB, which is completely in line with other models.
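For what it's worth, a figure in that range can be sanity-checked with a back-of-envelope calculation. This is only a sketch under an assumption: that the full model has roughly 671 billion parameters (the commonly cited size for R1), with real deployments needing extra memory on top for KV cache and runtime overhead.

```python
# Back-of-envelope estimate of the memory footprint of a large model.
# ASSUMPTION: ~671 billion parameters (commonly cited for R1 but not
# stated in this thread). Ignores KV cache and serving overhead.

def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Size of the raw weights in GB (using 1 GB = 10**9 bytes)."""
    return params_billion * 1e9 * bytes_per_param / 1e9

full_fp16 = weights_gb(671, 2)  # 2 bytes per parameter at 16-bit
print(f"FP16 weights alone: ~{full_fp16:.0f} GB")
```

The raw 16-bit weights come out around 1342GB, so a quoted ~1747GB plausibly includes cache and overhead on top of the weights.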
Can you link that post?
Running R1 locally isn’t realistic. But you can rent a server and run it privately on someone else’s hardware. That costs about 10 per hour; running it on CPU is a little cheaper. You need about 2TB of RAM.
If you want to run it at home, even quantized to 4-bit, you need 20 4090s. And since normal desktop mainboards only fit 4 per machine, that’s 5 whole computers, and you need to figure out the networking between them. A more realistic setup is probably running it on CPU with some layers offloaded to 4 GPUs. In that case you’ll need 4 4090s and 512GB of system RAM. Absolutely not cheap, and not what most people have, but technically still at the very top end of what you might have in a home computer. And remember, this is still the dumbed-down 4-bit configuration.
Edit: I double-checked, and 512GB of RAM is unrealistic; in fact anything above 192GB is. (High-end) AM5 mainboards support up to 256GB, but 64GB RAM sticks are much more expensive than 48GB ones, so most people will opt for 48GB or smaller sticks. To use 512GB you need a Threadripper. Very unlikely in a home computer, but maybe it makes sense if the machine also does something professional for you, in which case you might have 8 RAM slots, and such a person might then consider it reasonable to spend 3000 Euro on RAM. So: if you spent 15K Euro on your home computer, you might be able to run a reduced version of R1, very slowly.
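The 4-bit numbers above can be sketched the same way. Assumptions, loudly flagged: ~671B parameters (not stated in this thread, just the commonly cited size), 24GB of VRAM per 4090, and no allowance for KV cache or activations, which is why real setups need more cards than the bare minimum here.

```python
# Rough sketch of the 4-bit home-setup maths from the comments above.
# ASSUMPTIONS: ~671B parameters, 4 bits (0.5 bytes) per parameter,
# 24 GB per RTX 4090, zero headroom for KV cache or activations.

PARAMS_B = 671          # billions of parameters (assumed)
BYTES_PER_PARAM = 0.5   # 4-bit quantization
GPU_VRAM_GB = 24        # RTX 4090

weights_gb = PARAMS_B * BYTES_PER_PARAM      # ~335.5 GB of weights
gpus_min = -(-weights_gb // GPU_VRAM_GB)     # ceiling division
print(f"~{weights_gb:.0f} GB of weights -> at least {gpus_min:.0f} x 4090s")
```

That floor of ~14 cards is before any cache or overhead, which is how you land nearer the 20 GPUs mentioned above, or alternatively 4 GPUs (96GB VRAM) plus a few hundred GB of system RAM for the offloaded layers.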
Okay. But this method doesn’t address that the service doesn’t need the message to include the sender in order to know who the sender is. The sender (or their unique device) can be appended to the message by the server, with 100% accuracy, after it’s received. Even if we trust them on the parts that require trust, the setup as described by the blog does nothing to prevent social graphs from being derived, since the sender is identified at the start of every conversation.
If we trust them not to store any logs (unverifiable), then this method means they can’t know precisely how long a conversation lasted or how many messages were exchanged. But they can still know precisely when, and how many, messages each participant received; there’s just a chance those participants are talking to multiple people. Though if we’re trusting them not to store logs (unverifiable), then there shouldn’t be any data to cross-reference to begin with. So if we can’t trust them, why are we trusting them not to take note of the sender?
The upside is that if the message is leaked to a third party, there’s less info in it now. I’m ignoring the GitHub link, not because I don’t appreciate you finding it, but because I take the blog post to be the mission statement for the code, and the blog doesn’t promise a system that comprehensively hides the sender’s identity. I trust their code to do what is described.
I think Dessalines’ most recent comment is fair, even if it’s harsh. You need to understand the nature of a “national security letter” for context. The vast majority of (USA) government requests are NSLs because they involve the least red tape. When you receive one, it’s illegal to disclose that you have, and illegal not to comply. It requires you to share all the metadata you have, but they routinely ask for more.
Here’s an article that details the CIA connection https://www.kitklarenberg.com/p/signal-facing-collapse-after-cia
The concern doesn’t stem from the CIA funding. It’s inherent to all services operating in or hosted in the USA. They should be assumed compromised by default, since the laws of that country require them to be. Therefore, any app you trust has to be completely unable to spy on you. Signal understands this and uses it in their marketing. But it isn’t true: they’ve made decisions that allow them to spy on you, and they ask that you trust them not to. Matrix, XMPP and SimpleX cannot spy on you by design. (It’s possible those apps were implemented wrong and therefore allow spying, but that’s a different argument.)
How does that work? I wasn’t able to find this. Can you find documentation or code that explains how the client can obscure where it came from?
Your client talks to their server, and their server talks to your friend’s client. They don’t accept third-party apps. The server code is open source, not a secret, but that doesn’t rule out that what actually runs is 99% the open-source code with a few privacy-breaking changes. Or the server software runs exactly as published, but that’s moot because other software on the same servers intercepts the data.
We can’t verify that. They have a vested interest in lying, and they are occasionally barred from disclosing government requests. Still, taking it as evidence, as I suggested in my previous comment, we can make informed guesses about what data they can share. They can’t share the content of messages or calls – believable and assumed. But they mention nothing about what surrounds the message, such as whom it was sent to (and it is they who receive and forward the messages), when, or how big it was. They say they don’t have access to your contact book – also very likely true. But that isn’t the same as being unable to provide a social graph, since they know everyone you’ve spoken to, even if they don’t know what you’ve saved about those people on your device. They also don’t mention anything they might collect about the connection that isn’t directly relevant to providing the service, like device info.
Think about the feasibility of interacting with the feds in the manner they imply. No extra communication to explain that they can’t provide info they don’t have? Even though they feel the need to communicate exactly that to their customers. Of course this isn’t the full extent of the communication, or they’d be in jail. But they’re comfortable spinning narratives. Consider that their whole business depends on how they react to these requests. Do you think it’s likely that their account of how they handled it is half-truths?
Being used by a bunch of NATO armies isn’t the same as being promoted by or made by them. It just means they trust Element not to share their secrets. And that blog post is without merit. The author discredits Matrix because it supports unencrypted messaging. That’s not a negative, it’s just a nice feature for when it’s appropriate. Whereas Signal’s major drawbacks, requiring your government ID and forcing you onto their servers, actually are grounds to discredit a platform. Your post is the crossed-arms furry avatar equivalent of “I drew you as the soyjack”. The article has no substance on the cryptographic integrity of Matrix, because there’s nothing to criticise there.
Sure. You can trust your own fork. Just don’t use the official repos or their servers. The client isn’t where the danger is.
Your link lists all the things they don’t share. The only reasonable reading is that anything not explicitly mentioned is shared. It’s information they have, and they’re legally required to share what they have, as also shown in your link, in the documents underneath their comment.
Your data is routed through Signal servers to establish connections. Signal absolutely can, and does, provide social graphs, message frequency, message times, and message sizes. There’s also nothing stopping them from pushing a snooping build to a single user when that user is targeted by the NSA. That specific user would need to check every update against verified hashes. And if they’re on iOS, that’s not even an option, since the official iOS build hash already doesn’t match the repo.
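To make “check updates against verified hashes” concrete, here’s a minimal sketch of the idea: hash the downloaded build and compare it against a hash published through an independent channel (e.g. a reproducible build). The file name and the out-of-band hash are hypothetical placeholders, not anything Signal actually publishes in this form.

```python
# Minimal sketch: verify a downloaded build against a known hash.
# The file path and expected hash below are hypothetical placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# expected = "..."  # hash obtained out-of-band, e.g. a reproducible build
# assert sha256_of("update.apk") == expected, "build does not match!"
```

The catch, as above, is that this only helps on platforms where you can actually obtain the binary and an independently verifiable reference hash; on iOS you can do neither.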
It’s not my friends I want to hide my number from, it’s Signal.
Signal is USA government approved. Definitely don’t trust it. Use Matrix.
It only knows what you tell it. Just use it like any other website and follow the same rules you do everywhere: think about what you’re sharing, and only share what you’re okay with them knowing.
Facebook is for local things, so it’ll have to know where you live and who you are. So a VPN is kinda pointless. If you engage with three groups that are in the same village, you’re probably someone from that village, you know.
You also don’t need to clean cookies, because closing the browser clears them; that’s what private browsing is for. But even without private browsing, you should have a sensible global cookie policy that only accepts cookies from whitelisted sites and, for those sites, doesn’t let them see cookies they didn’t set.
On the last point: the most sensible and important thing to worry about here is fingerprinting. Using a different device for every service is an effective way to combat that. It’s not very practical, but using, say, the work phone you already use for other local services makes a lot of sense to me.
I’m really underwhelmed by the 32B Qwen DeepSeek R1 distill, both in its reasoning and its knowledge. I still haven’t needed it for maths, or tested it there, so maybe that’s where it really shines.
The 1.5B will run fast on just CPU, I think.
That’s cool! I’m really interested to know how many tokens per second you can get with a really good U.2 drive. My gut says it won’t actually beat the 24GB-VRAM + 96GB-RAM cache setup this user already tested, though.