
  • Nothing but effort. Nobody wants to constantly babysit a project just because someone else may change their code at a moment’s notice. Why would you comb through someone else’s HTML and obfuscated JavaScript to figure out how to grab some dynamically rendered data when there was a well-documented, publicly available API?

    Also, NewPipe breaks all the time. APIs are generally stable and can last years, if not decades, without changing at all. Meanwhile, NewPipe’s parsing breaks every few weeks to months, requiring programmer intervention. Just check the project’s issue tracker and you’ll see it’s constantly being patched to keep up with YouTube changes.


  • An API is an official interface for connecting to a service, usually designed to make it easy for one application to interact with another. It is typically kept stable and exposes only the information needed to serve the requesting application.

    A scraper is an application that extracts data from a human-readable source (e.g. a website) in order to obtain data from another application. Since website designs can change frequently, scrapers can break at any time and have to be updated in lockstep with the original site.

    Reddit clients interact with an API to serve requests, but NewPipe scrapes the YouTube webpage itself. So if YouTube changes their UI tomorrow, NewPipe could very easily break. No one wants to design their app around a fragile base while building a bunch of stuff on top of it (see the sketch below). It’s just way too much work for very little payoff.

    It’s like being able to enter my house through the door or the chimney. I’d always take the door, since it’s designed for human entry. I could technically use the chimney if there were no door, but if someone lights the fireplace, I’d be toast.
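
    To make the contrast concrete, here’s a minimal Python sketch. The service, endpoint, and HTML structure are all made up for illustration; the point is what each approach depends on.

    ```python
    import json
    import re
    import urllib.request

    # Option 1: the official API. One documented endpoint, a stable JSON schema.
    def get_video_title_api(video_id: str) -> str:
        url = f"https://example.com/api/v1/videos/{video_id}"  # hypothetical endpoint
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return data["title"]  # documented field; it won't silently move

    # Option 2: scraping. Depends on page markup that can change without notice.
    def get_video_title_scrape(video_id: str) -> str:
        url = f"https://example.com/watch?v={video_id}"  # hypothetical page
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8")
        # Breaks the moment the site renames the tag, changes the class,
        # or starts rendering the title with JavaScript instead.
        match = re.search(r'<h1 class="video-title">(.*?)</h1>', html)
        if match is None:
            raise RuntimeError("page layout changed; scraper needs fixing")
        return match.group(1)
    ```

    The API version depends on one documented field name; the scraper version silently depends on the site’s entire page layout.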




  • The argument is that processing data physically “near” where it is stored (known as near-data processing, or NDP, as opposed to traditional architectures where data is stored off-chip and shuttled to the processor) is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

    Personally, I’d say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes a CS master’s level of understanding of the architecture to write a program in the P4 language (which doesn’t allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it’s worthless if most programmers on the job market can’t work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it’s just way too costly. People want to buy new hardware, install it, compile their existing code, and see big numbers go up (or down, depending on which numbers).

    I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) attached as dedicated accelerators. Existing application code probably won’t be modified. However, the underlying libraries will be able to detect these accelerators (e.g. GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM if your machine has one. This way, your standard 9-to-5 programmer can keep working the way they always have and leave the fancy performance optimization to a few experts.
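
    As a toy illustration of that “libraries quietly offload” idea, here’s a Python sketch. The dma_engine module and its copy() call are entirely hypothetical; in reality the dispatch would live inside system libraries or the kernel, not application code.

    ```python
    import shutil

    try:
        # Hypothetical driver binding for an on-DIMM data mover.
        import dma_engine
        _HAVE_DMA = True
    except ImportError:
        _HAVE_DMA = False

    def copy_file(src: str, dst: str) -> None:
        """Application code calls this and never changes; the library decides
        whether a dedicated data mover or the CPU does the actual work."""
        if _HAVE_DMA:
            dma_engine.copy(src, dst)  # offloaded: the CPU is free for other work
        else:
            shutil.copyfile(src, dst)  # plain CPU fallback, same observable result
    ```

    Same call site either way; whether the accelerator exists is the library’s problem, not the programmer’s.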




  • I’m not advocating that teenagers should save no money. I’m just saying you don’t have to save “all” of it.

    Good financial planning isn’t just about not spending every cent you can; it’s also about figuring out how to get the most out of your money. There’s plenty of expensive stuff that I’ve spent thousands of hours with, which makes it totally worth the investment. There’s no way a teenager could figure that out without some trial and error.

    I’d say it’s better to get that out of the way now rather than later. If you make a bad purchase decision as a teenager, at most you’re out 200 dollars. Maybe that startup idea isn’t exactly what you imagined it to be, but at least you figured that out now rather than after sinking 20k into MLMs.


  • As a counterargument: spend your money. 200 dollars means a lot more to a teenager than to a college student (with an on-campus part-time job), and by the time you land your first full-time job you may find yourself spending 200 dollars like pocket change.

    As a result, you will most likely cherish what you buy now for 200 USD far more than anything you can buy down the line. That console you’d need to save up for six months right now? It becomes a lot less sentimental when you can afford one every other month. So spend your money on something you’d like right now. 200 dollars won’t change your life much in college, but it can change your life significantly today.



  • My suggestion is to get a device that can do the stuff kids want, but only just barely.

    As a kid, I probably spent more time tinkering with the family computer than anything else, just trying to get games way over its spec to run on it. Through that process I learned programming, hex editing, and some Linux system administration, which eventually led me to my current career.

    These days, it’s probably a lot easier to get started with a Raspberry Pi. But without something to motivate them, why would kids learn tech in the first place?




  • It doesn’t matter how many passwords you’re storing inside. What matters is the number of decryption cycles that have to be performed to unlock the vault. More cycles = more time.

    You can have an empty vault and it will still be slow to decrypt with a high KDF iteration count or an expensive algorithm.

    You can think of it as an old-fashioned safe with a hand crank. You put in the key and turn the crank. It doesn’t matter whether the safe is empty: as long as you need to turn the crank 1,000 times to open it, it WILL be slower than a safe that only needs 10 turns. Especially so if you have a 10-year-old (a less powerful device) turning the crank.
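
    If you want to see the crank-turning effect yourself, here’s a quick sketch using Python’s built-in PBKDF2 (not necessarily the exact KDF or parameters any given vault uses). The timing scales with the iteration count, not with how much is in the safe.

    ```python
    import hashlib
    import time

    password = b"correct horse battery staple"
    salt = b"some-random-salt"

    for iterations in (10_000, 100_000, 600_000):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
        elapsed = time.perf_counter() - start
        print(f"{iterations:>7} iterations: {elapsed:.3f}s")
    ```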


  • How many KDF iterations is your vault set to? I have mine at 600,000, so it definitely takes a moment (~3 s) to decrypt on older devices.

    The decryption being compute-heavy is by design. You only need to decrypt once to unlock your vault, but someone brute-forcing it would need to decrypt a billion-plus times. Increasing the compute needed for decryption makes brute-forcing your master password more expensive (some rough numbers below).

    In fact, LastPass made the mistake of setting their default iteration count to 1,000 before they got breached, and they took a ton of flak for it.
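
    Some back-of-the-envelope numbers on why the count matters once a vault is stolen. The attacker throughput here is a made-up ballpark, purely for illustration:

    ```python
    GUESSES = 1_000_000_000          # candidate master passwords to try
    HASHES_PER_SEC = 1_000_000_000   # assumed raw hash throughput of the attacker's rig

    for iterations in (1_000, 600_000):
        seconds = GUESSES * iterations / HASHES_PER_SEC
        print(f"{iterations:>7} iterations: ~{seconds:,.0f}s (~{seconds / 86_400:.2f} days)")
    ```

    Same number of guesses, 600x the cost; that difference is paid entirely by the attacker.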