• 0 Posts
  • 65 Comments
Joined 1 year ago
Cake day: December 29th, 2023




  • i’ve written bots that filter things for me, or convert content into machine-readable formats

    the most successful thing i’ve done is a bot that parses a web page, figures out the date/time in a standard format, geocodes the location if one is listed in the description, pulls a few other fields, and produces an ical for pretty much any page (rough sketch below)

    i think the important thing is that gen ai is good at low-risk tasks that reduce but don’t eliminate human effort - turning a pile of data entry into a quick skim for correctness
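    the glue code for something like that can be pretty small - here’s a minimal sketch, where `call_model` is a stand-in for whatever LLM api you use, and the prompt and JSON field names are just assumptions:

    ```python
    import json
    from datetime import datetime

    def call_model(prompt: str) -> str:
        """Stand-in for your LLM client; assumed to return a JSON string."""
        raise NotImplementedError

    def page_to_ical(page_text: str) -> str:
        # Ask the model for structured fields (key names here are assumptions).
        raw = call_model(
            "Extract the event from this page as JSON with keys "
            '"title", "start" (ISO 8601) and "location":\n' + page_text
        )
        event = json.loads(raw)

        # iCal wants timestamps like 20240131T190000.
        start = datetime.fromisoformat(event["start"]).strftime("%Y%m%dT%H%M%S")

        # Build a minimal VEVENT by hand; a real bot would also
        # geocode event["location"] and sanity-check the model's output.
        return "\r\n".join([
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "BEGIN:VEVENT",
            f"DTSTART:{start}",
            f"SUMMARY:{event['title']}",
            f"LOCATION:{event['location']}",
            "END:VEVENT",
            "END:VCALENDAR",
        ])
    ```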








  • Pup Biru@aussie.zone to Asklemmy@lemmy.ml · *Permanently Deleted* · 2 months ago

    and all of that requires organisation, and organisation isn’t free - in fact, the structures required to organise things like that often cost more than what’s actually spent on the problem itself… you don’t just up and build houses - that’s not how any of this works… ask anyone who’s built a house, and they weren’t even doing it at a scale where complexity goes up significantly, or distributing money in a way that requires them to justify every expenditure rather than just making decisions for themselves



  • Pup Biru@aussie.zone to Asklemmy@lemmy.ml · *Permanently Deleted* · 2 months ago

    most redirect less than 10% of what they receive towards the homeless

    this is a very, very bad way to think about charitable giving. if your aim is to get as much money as possible to solving homelessness, you actually want advertising and marketing campaigns, and you want people working on efficiency - but those people count as “overhead” in that metric, and the cost savings they produce show up as less money “making it to” the problem at hand. for example, a charity with 50% overhead that raises $10m still delivers $5m to the cause, while one with 5% overhead that raises $1m delivers only $950k

    this video does an excellent job of describing the problem

    https://youtu.be/bfAzi6D5FpM



  • i mean, mastodon has also been around for a while… i think there are other things people have raised - relays being expensive, etc - that make it less decentralised in practice, but even if you only had a single mastodon instance, that wouldn’t make mastodon not federated

    the potential is there for less centralisation than currently exists: they’ve been growing quickly and want to control the roll-out (which is why sign-ups were closed for ages)… i don’t think that necessarily makes it bad - we’ll have to see how things progress

    worth noting too that there’s Bridgy Fed, so if bsky becomes trash in the future, it should be far easier for people to move to AP

    it’s at least a step up, with enough of it open that it’ll be easier to convince people to make good (ActivityPub) choices in the future - probably when we stop complaining about everyone rushing to bsky and start fixing the UX issues with the fediverse that led them to not use mastodon etc in the first place





  • 2-pass encodes the file once while storing a log of information about what it did… on the 2nd pass it uses that log to make better decisions about where to spend more or less bitrate and where to place keyframes - honestly i’m not too sure of the specifics here

    now, it’s most often used to hit a particular file size rather than to increase quality per se, but i’d say keeping a file to a particular size means you’re using the space you have effectively

    looks like with ffmpeg you do need to run it twice - there’s a pass log option (rough sketch below)

    i mostly export from davinci resolve, so i’m not too well versed in ffmpeg flags etc

    doing a little more reading, the consensus seems to be that spending more time on encoding (ie a slower preset) will likely give a better outcome than 2-pass, unless you REALLY care about file size (like the file MUST be under, but as close as possible to, 100mb)
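    for reference, the two runs look roughly like this - a minimal sketch wrapping ffmpeg from python, where the file names and the 5M target bitrate are just placeholders:

    ```python
    import subprocess

    # Hypothetical file names and target bitrate - adjust to taste.
    SRC, OUT, BITRATE = "input.mp4", "output.mp4", "5M"

    # Pass 1: analysis only - audio dropped (-an), output discarded (-f null),
    # statistics written to ffmpeg2pass-0.log via -passlogfile.
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", "libx264", "-b:v", BITRATE,
        "-pass", "1", "-passlogfile", "ffmpeg2pass",
        "-an", "-f", "null", "/dev/null",
    ], check=True)

    # Pass 2: the real encode - ffmpeg reads the pass-1 log and
    # distributes the bitrate budget where the video needs it most.
    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-c:v", "libx264", "-b:v", BITRATE,
        "-pass", "2", "-passlogfile", "ffmpeg2pass",
        "-c:a", "copy", OUT,
    ], check=True)
    ```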


  • if you’re planning on editing it, you can record at a very high bitrate and re-encode after the fact… yes, re-encoding loses some quality, however you’re likely to end up with a far better video if you record at 2x your target h264 bitrate and then re-encode down to your final h265 (or av1) bitrate than if you record straight to h264 at the final bitrate

    another note on this: lots of streaming guides will say to use CBR (constant bitrate), which is true for streaming, however for a re-encode i think VBR (variable bitrate) with a multi-pass encode is the better trade-off - CBR makes sense live because the encoder can’t predict what’s coming up, but when you have the whole video up front it can shift bitrate up and down because it knows in advance where the complex sections are
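    a re-encode like that might look something like this - again a minimal sketch with placeholder names and bitrate; note that libx265 takes the pass number through -x265-params rather than the plain -pass flag:

    ```python
    import subprocess

    # Hypothetical names/bitrate: a high-bitrate h264 recording going
    # down to a 3 Mb/s h265 file.
    SRC, OUT, BITRATE = "recording.mp4", "final.mp4", "3M"

    # Pass 1: analysis only, video output discarded, stats written
    # to x265_2pass.log (the x265 default).
    subprocess.run([
        "ffmpeg", "-y", "-i", SRC,
        "-c:v", "libx265", "-b:v", BITRATE,
        "-x265-params", "pass=1",
        "-an", "-f", "null", "/dev/null",
    ], check=True)

    # Pass 2: the actual VBR encode, guided by the pass-1 statistics.
    subprocess.run([
        "ffmpeg", "-i", SRC,
        "-c:v", "libx265", "-b:v", BITRATE,
        "-x265-params", "pass=2",
        "-c:a", "copy", OUT,
    ], check=True)
    ```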