• 0 Posts
  • 98 Comments
Joined 5 months ago
Cake day: February 17th, 2025

  • There was this recent attack on XZ Utils, which shows that more attention is needed on the code being merged and compiled.

    XZ was made possible largely because unaudited binary data was present: one part as test data in the repo, the other within the pre-built release tarballs. Bootstrapping everything from source would have required that these binaries have an auditable source, allowing public eyes to review the code and likely stopping the attack. Granted, reproducible builds almost certainly would have caught it too, unless the malware had been directly present in the source code.

    Pulled from here:

    Every unauditable binary also leaves us vulnerable to compiler backdoors as described by Ken Thompson in the 1984 paper Reflections on Trusting Trust and beautifully explained by Carl Dong in his Bitcoin Build System Security talk.

    It is therefore equally important that we continue towards our final goal: A Full Source bootstrap; removing all unauditable binary seeds.

    Sure, you might have the code that was fed into GCC to create the binary; sure, that code might be absolutely safe; and you can even compile it yourself to verify that you arrive at the same bit-for-bit binary as the official release. But was GCC itself safe? Did some other build dependency infect the compiled binary? Bootstrapping from an auditable seed can answer these questions.
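
    As a rough illustration of that bit-for-bit check (a minimal Python sketch; the file names are hypothetical), verifying reproducibility boils down to comparing cryptographic hashes of the official binary and one you built yourself:

        import hashlib

        def sha256_of(path: str) -> str:
            """Return the SHA-256 hex digest of a file, read in chunks."""
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Hypothetical paths: the official release binary vs. a local build.
        official = sha256_of("xz-official")
        local = sha256_of("xz-local-build")

        # Matching digests prove binary-matches-source only as far as the
        # toolchain is trusted; closing that gap is what a full-source
        # bootstrap is for.
        print("reproducible" if official == local else "MISMATCH")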



  • Most entry points are through various other ways…

    With encryption, the data is transformed so that only the key can decrypt it. If there are no encryption backdoors, the key is the attacker’s only target. Compare that with a physical lock: even if the lock itself were perfect, you would still need to secure the structure it locks.
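
    As a minimal sketch of that property (using the Python cryptography package; the message and variable names are illustrative), an AES-GCM ciphertext is useless without the key, and tampering with it is detected:

        import os
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        key = AESGCM.generate_key(bit_length=256)  # the one secret an attacker must obtain
        nonce = os.urandom(12)                     # unique per message, never reused with a key
        aesgcm = AESGCM(key)

        ciphertext = aesgcm.encrypt(nonce, b"meet at noon", None)

        # Only the key recovers the plaintext; AES-GCM also authenticates,
        # so a modified ciphertext raises InvalidTag instead of decrypting.
        assert aesgcm.decrypt(nonce, ciphertext, None) == b"meet at noon"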

    Most entry points are through various other ways, which is also why I find GrapheneOS for the average user stupid.

    I still appreciate defenses against the less common entry points, even though it’s easier to focus on the more common ones.

    Just because stuff is sandboxed and you have some Ad-Blockers on, doesn’t mean shit these days.

    Sandboxing and ad blockers are quite different. Sandboxing gives a program restricted permissions, so it has fewer tools with which to cause harm and less visibility into the system with which to violate privacy. An ad blocker only needs to stop an ad from displaying; any security or privacy gain comes from keeping you from clicking ads (since they’re blocked) or from preventing the ad’s resources from being fetched over the network in the first place.

    Sandboxing I would consider much better for security and privacy; that’s why it’s a valuable tool for security researchers.
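
    As a minimal sketch of what restricted permissions can mean in practice (Linux-specific Python; the limits and the program name are illustrative, and real sandboxes layer on namespaces and seccomp filters as well), a parent process can strip resources from a child before it runs:

        import resource
        import subprocess

        def restrict():
            """Runs in the child just before exec: a crude rlimit sandbox."""
            resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))  # 512 MiB of address space
            resource.setrlimit(resource.RLIMIT_CPU, (5, 5))                     # 5 seconds of CPU time
            resource.setrlimit(resource.RLIMIT_NOFILE, (16, 16))                # at most 16 open files

        # The untrusted program now has fewer tools available to cause harm.
        subprocess.run(["./untrusted-program"], preexec_fn=restrict)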








  • The solution is to have stronger privacy laws.

    Many people have the power to make certain privacy attacks impossible right now. I consider making that change better for those people than adding a law that can’t stop the behavior but only attaches a negative incentive to it.

    I wouldn’t wait around for the law to prosecute MITM attacks; I would use end-to-end encryption.
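
    A minimal sketch of why that works (using the Python cryptography package; the party names are illustrative, and defeating an active MITM additionally requires authenticating the public keys, e.g. out of band): the two endpoints derive a shared session key that never crosses the wire.

        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
        from cryptography.hazmat.primitives.kdf.hkdf import HKDF

        # Each endpoint generates its own keypair; private keys never leave the device.
        alice_priv = X25519PrivateKey.generate()
        bob_priv = X25519PrivateKey.generate()

        # Only public keys are exchanged; both sides compute the same shared secret.
        alice_shared = alice_priv.exchange(bob_priv.public_key())
        bob_shared = bob_priv.exchange(alice_priv.public_key())

        def session_key(shared: bytes) -> bytes:
            """Stretch the raw ECDH output into a 32-byte symmetric key."""
            return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                        info=b"e2ee session").derive(shared)

        # Same key on both ends, never transmitted: a wiretap sees only public keys.
        assert session_key(alice_shared) == session_key(bob_shared)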

    Choosing an esoteric system for yourself is a good way for a free people to protect their privacy, but it won’t scale.

    If this is referring to the use of a barely used system as a privacy or security protection, then I would regard that as poor protection.

    Everyone using GrapheneOS would be a net security upgrade. The protections in place wouldn’t just fade away once Facebook wanted to spy on that OS; they would still be there, and Facebook’s job would still be harder than it otherwise would be.



  • Yes. With overcommit mode 2, memory that is allocated but not yet written to still counts toward the commit limit, unlike in overcommit modes 0 or 1.

    The default is to hope that applications never actually use enough of the memory they have allocated to force the system OOM. You get more efficient use of memory, but I don’t like this approach.

    And as a bonus, with overcommit 2 you get access to vm.admin_reserve_kbytes, which lets you reserve memory for admin users only. Quite nice.
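
    A minimal sketch of the difference (Linux-specific Python; the 64 GiB figure is arbitrary and assumed to exceed the commit limit): an anonymous mapping reserves address space without writing to it, so the kernel’s overcommit mode decides whether the reservation itself is allowed.

        import mmap

        # Report the kernel's current commit accounting.
        with open("/proc/meminfo") as f:
            info = {line.split(":")[0]: line.split()[1] for line in f}
        print(f"CommitLimit: {info['CommitLimit']} kB, Committed_AS: {info['Committed_AS']} kB")

        try:
            # A 64 GiB anonymous mapping, allocated but never written to.
            # Mode 1 (always): succeeds regardless of size.
            # Mode 0 (heuristic): refuses only obviously impossible requests.
            # Mode 2 (never): fails with ENOMEM once Committed_AS would exceed CommitLimit.
            m = mmap.mmap(-1, 64 * 2**30)
            print("allocation accepted")
            m.close()
        except OSError as e:
            print(f"allocation refused: {e}")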

