• 0 Posts
  • 127 Comments
Joined 5 years ago
Cake day: February 15th, 2021

  • It’s unclear what you are trying to say. The question was what switching the license would do. There are two scenarios: 1) either Google is really not making changes to the ffmpeg source internally right now… or 2) they are in fact making changes to it internally (perhaps for encoding with their own codecs, etc.) which they are not releasing back to the public (since the code is LGPL, and not AGPL)

    With situation 1, they can simply continue using ffmpeg even if it were to switch to AGPL. They would have no need/obligation to release anything, whether they decide to fund development or not. The way I see it, only in situation 2 would Google be affected by a license change. However, if they only use ffmpeg to drive their own encoders for specific codecs, they might as well stop using ffmpeg for that purpose and write their own program to work with their encoders. Most of the encoding work is already done in separately released encoding libraries (like libaom, which Google licensed under BSD-2).

    But even in the rare case that Google has made changes that (after a license change) they would suddenly be willing to share with the community, despite not having done so before… the whole problem with this bug-reporting mess is that most of the issues reported by the automated tools aren’t really that impactful or important; they are things that even Google would not be interested in fixing (why would Google need to fix a codec that only affects a video game cinematic from 1995?). These reports are just the result of automated & indiscriminate AI analysis. Slop.


  • AGPL is more “copyleft”, but not really more “permissive”, in the sense that AGPL adds the extra requirement of forcing server admins to provide the source code to the users of any service that internally makes use of AGPL code.

    It plugs a loophole in the other GPL licenses that allows companies not to share custom modifications as long as they don’t directly distribute the binaries (they can offer a service using internally modified binaries, but as long as they don’t distribute the binaries themselves they don’t have to share the source code for those modifications running on their private servers, even though they are GPL).

    However, I don’t think a license change would really solve this particular bug-reporting trouble. Most likely Google has not patched these vulnerabilities internally either, or at least the biggest chunk of them (since most of them are apparently edge cases that would most likely not apply to Google’s services anyway).


  • Sounds like a prioritization issue. They could configure the git bots to automatically flag all of these as “AI-reported” and filter them out of their TODO, considering them low priority by default, unless/until someone starts commenting on the ticket and bringing it to their attention / legitimizing it.

    EDIT: ok, I just read about the 90-day policy… I feel the problem then is not the reporting, but the further actions Google plans based on an automated tool that seems inadequate to judge the severity of each issue.
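    The flag-and-filter idea above could look something like this minimal sketch (the reporter names and label strings are made up for illustration, not any real bot’s API):

```python
# Minimal sketch of the triage rule described above: reports filed by known
# automated scanners get an "AI-reported" label and default to low priority,
# and are promoted only once a human engages with the ticket.
AUTOMATED_REPORTERS = {"fuzz-bot", "ai-scanner"}  # hypothetical names

def triage(issue):
    """Return the label set for an issue dict with 'reporter' and 'comments' keys."""
    labels = set(issue.get("labels", []))
    if issue["reporter"] in AUTOMATED_REPORTERS:
        labels.add("AI-reported")
        if not issue.get("comments"):
            # No human has engaged yet: keep it out of the main TODO.
            labels.add("low-priority")
        else:
            labels.discard("low-priority")
    return labels
```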


  • Sure, but if it wasn’t triaged, why consider it “medium impact”? I feel that when tight on resources, it’s best to default to “low priority” for all issues whose effect (i.e. on the end user, or on the software depending on it) isn’t clearly scoped and explained by the reporter. If the reporters (or those affected) haven’t done the work to make it easy to quickly see why a fix matters, then it’s probably not that important to them to have it fixed. Some projects even have bots that automatically close issues when there has been no activity for a certain time (though I’d prefer labeling/categorizing them as “low engagement” or something, so they can be filtered out when swamped, instead of simply being closed).

    About “public confidence”: I feel this would rather be “misplaced confidence” if it’s based on a number that is “massaged” to hide issues. Also, this is an open source project we are talking about; there isn’t an investment fund behind it or a need for people to have absolute loyalty or blind trust. The code is objectively there, and trust should never be blind. If there wasn’t a long list of reports, I’d be more suspicious of a project as popular, frequently updated & ubiquitous as ffmpeg, especially if the reports are (allegedly) not triaged. Anyone who decides to choose ffmpeg based on the number of open issues, without actually investigating how relevant that number actually is… well… they can go look for different software.


  • I agree… I mean, they are not forced to fix the issues; if an issue is obscure and not many people are affected, there’s no reason they can’t just mark it as “patches welcome” and leave it there. I feel this is a problem with the project’s prioritization policy, not really a problem with QA / issue reporting.

    For context:

    The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”

    To me, the problem shouldn’t be the report itself, but categorizing it as “medium impact” if they think fixing it isn’t “a valuable use of an assembly programmer’s time”.

    Also:

    the former maintainer of libxml2 […] recently resigned from maintaining libxml2 because he had to “spend several hours each week dealing with security issues reported by third parties. Most of these issues aren’t critical, but it’s still a lot of work.”

    Would it truly be better if the issues weren’t reported? What’s the difference between an issue not being reported and an issue not being fixed because it’s not seen as a priority?


  • Is the database publicly accessible somewhere? Is it limited to an extension, or can we simply browse it?

    This looks like it could work better if developed in the open / collaboratively. Though from their FAQ, it looks like they are working on an open-source platform:

    Our wonderful devs are currently working on an open-source website to replace and improve our current and temporary platform.

    In the meantime, we will continue to add and verify European brands to the database.


  • Yes! I mean, blame those who post AI-generated translations as if they were their own, or blame the AI scrapers that use those poorly generated pages for training, but it makes no sense to blame Wikipedia when the only thing it has done is exist and offer a platform for knowledge sharing.

    In fact, this problem is hardly exclusive to Wikipedia; every platform with crowdsourced content is to some degree susceptible to AI poisoning, which ultimately ends up feeding other AIs. The loop exists on all platforms. Though I understand wanting to highlight particularly the risk to endangered languages, since they have less content available, so the AI models have a smaller dataset, which makes them worse and more sensitive to bad data.


  • Ferk@lemmy.ml to Open Source@lemmy.ml: What’s up with FUTO?

    And even if they did somehow manage to get permission to switch the license, all previous versions would still be open in perpetuity, so a fork would come easily. Immich’s source isn’t only open, and not only GPL… it’s AGPL-3.0, which is as copyleft as you can get.


  • Yes, last time I tried Revolt it looked shamelessly like Discord’s UI.

    I feel that if they just wanted an app with the look and feel of Discord, it would have made more sense to make a Matrix (or maybe XMPP) client with that look and feel. I honestly don’t see much value in yet another protocol if the only distinctive feature is the look & feel of the UI, especially if they are not designing the stack to be at least as good as those other options from a security, privacy, feature-extensibility and decentralization standpoint.


  • Did they work on developing new web standards to unlock that potential on the web?

    Back then, HTML5 wasn’t even a thing; there was no concept of video/microphone/gyroscope/GPS access for webapps, notifications, web workers, WebSockets, offline PWA webapps, etc. It was not a viable idea unless they were actually willing to invest big, and they weren’t that committed. In Firefox OS even the dialer was a webapp; Mozilla brought forth a lot of innovative APIs to make it possible, many of which are in use today even after the OS was discontinued. And nowadays you even have things like WebAssembly, which lets you code in C or whatever low-level language you want.

    I feel Apple has always been more interested in their own ecosystem. Opening the web to have the same level of potential as the native apps from their walled garden goes against that strategy, so I don’t believe they were really serious about that approach, it’s always been more interesting for them to prioritize their native apps.


  • I wonder if resurrecting Firefox OS might still be an option. It was such an interesting idea having the webapps be first citizens.

    There’s the KaiOS fork, but the direction is not really the same, since it’s more targeted at low-power, keypad-based phones… and I believe they replaced much of the Gonk layer with a very stripped-down, low-level Android base which isn’t fully open source… maybe if they coordinated with the LibrePhone project and some hardware manufacturers (like EU-based Nokia) we’d get a fully free stack.


  • I expect it’s a combination of all the above in some sense. They state they want to build on LineageOS (an Android variant) and replace its binary blobs; I expect the result will be a new custom ROM targeting specific compatible hardware, with the goal of ultimately having usable phones running fully Free Software.

    What it’s not is a libre-hardware phone. I don’t think they are working on hardware, at least not anytime soon. Also, if by “Linux phones” you mean non-Android-based, that’s not necessarily a requirement (given that they mention LineageOS), but I expect the kernel will regardless be Linux without the blobs, and it’s entirely possible that they add support for installing their firmware on those “Linux phones”.

    I do kinda wish they’d focus on stuff that has a way bigger user impact 😅

    The thing is that, technically, we already have fully usable FOSS software at that user level. Using, for example, LineageOS with F-Droid as the only app store already gets you there. Whereas ensuring your phone is not spying on you, or doesn’t have some malicious functionality at the hardware/driver level, is something that currently is simply not possible.

    The FSF has always done the thankless job of championing the things that are harder and less rewarding to do, but that will most advance software freedom for those who do seek it, even when that thing is not necessarily the most popular/mainstream. I feel this has more of an impact on software freedom than, say, reinventing the wheel just to have their brand attached to it, and/or providing a slightly different UI for something other FOSS software already does.


  • There isn’t much concrete information, but my guess is that an OS/ecosystem is exactly what this project is, and that they are not talking about physical hardware, especially considering that they are putting the emphasis on free software (not hardware) and are involving a software developer. Making a phone’s hardware free would be an entirely different beast.

    In the afternoon, FSF executive director Zoë Kooyman announced an exciting new project: Librephone.

    Librephone is a new initiative by the FSF to bring full computing freedom to mobile computing environments. The LibrePhone Project is a partnership with Rob Savoye, a developer who has worked on free software (including the GNU toolchain) since the 1980s. “Since mobile phone computing is now so ubiquitous, we’re very excited about LibrePhone and think it has the potential to bring software freedom to many more users all over the world.”

    From the official FSF post about the event.


  • I’m just calling it a paradox because they are making it less secure by enforcing stricter security.

    It’s like how creating stricter regulation against drugs sometimes results in more problems with drugs than when the regulation was more relaxed. To me, that’s a paradox.

    Generally, a stricter security policy results in more security, but sometimes it has the opposite effect: the stricter policy triggers a trend that popularizes alternative methods that are actually less secure. There’s always the social factor, and that one is not easily predictable… in fact, it could be that I’m wrong and most devs will decide to register with Google, or simply stop supporting official Android firmware, instead of relying on insecure debug keys. We’ll see.


  • I feel that the only way out is gonna be using the debug key (i.e. the one with the public “androiddebugkey” alias, which the SDK uses for development builds), as this seems to be the only possibility Google is still allowing.

    This has the side effect that devs who want to remain Google-independent can no longer rely on the built-in protections in Android which prevent an app from being updated if the update hasn’t been signed with the same credentials… but well, that seems to be the only road Google is allowing for anyone who doesn’t wanna register with them.

    I mean… the other alternative would be to, essentially, fork Android / expect a custom AOSP to be installed… which might not be an option for all hardware out there.


  • But the thing is that they are not really making Android more secure with this policy.

    They are still allowing APKs signed with debug keys to work… so the only alternative now, for any developer who doesn’t want to register with Google, is gonna be using those debug credentials to sign their app releases. I expect shipping APKs with debug keys will become more common, resulting in an objectively less safe Android ecosystem.

    This is not gonna stop rogue APKs from outside Google’s store; it’s just gonna make them less secure, since being signed with a debug key means a malicious APK from a different source can pose as an “update” of the app and supplant the original.

    This is not gonna stop alternative stores either; in fact, it will make it more important to use stores (as opposed to installing APKs from GitHub or similar), since at least that way they can still implement alternative methods to check package authenticity before installing, even when using debug keys.


  • That’s why it’s a paradox. They claim to be doing this for security, when in actuality their stricter policies do the opposite. This move essentially renders the APK format’s built-in signing mechanism worthless. Android is now going down the path of being as insecure as MS Windows when it comes to app installation.

    This is not gonna stop rogue APKs from outside Google’s store, it’s just gonna make them less secure.

    This is not gonna stop alternative stores, it’s actually gonna make them more important for further security checks.

    This is not gonna give Google more control over Android, it’s gonna make it easier for abusers to gain control.

    I suspect a step Google could take is to start adding extra warnings and layers of confirmation when installing apps that use debug keys, to try and deter users from doing it… but this could then annoy developers, numb users to the warnings, and strengthen the case regarding anti-competitive behavior.


  • Paradoxically, this move towards trying to make things more secure is actually gonna make things LESS secure.

    Because it means that now the only way for people to continue using alternative apps is for those apps to be shipped with debug keys (the ones used during development), which are fundamentally insecure since they allow anyone to produce an APK that will be accepted as a valid update of the app…

    You can still release an APK that works by using a debug key… the problem is that debug keys have essentially “public” credentials. Until now, it was possible to use your own credentials and ensure the app is secure by protecting your own keys, which is what F-Droid was doing. Now this is no longer possible. I don’t think this is the end of F-Droid, but it’ll be the end of F-Droid using the verification mechanisms that used to be built into Android.

    But I expect F-Droid should be able to have its own system for verification, before installing, that is parallel to and independent of the APK signing process. They could ship signatures in a separate file, outside the APK. This also has the additional paradoxical result that, in order to ensure that the apps installed are safe, it’s MORE important now to have an alternative store app that you trust and that can implement alternative signing/verification methods.

    So… if anything, this move from Google makes Android less secure and makes key signatures within the APKs kind of moot for any store that isn’t Google-owned… however, it also means installing a non-Google-owned store with some level of security guarantees is much more important now.
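    A store-side check like the one described could be sketched as follows (a minimal illustration, assuming the store distributes a trusted digest for each package through its own separately verified index; all names and the placeholder digest are made up):

```python
import hashlib

# Hypothetical trusted digests, as they might appear in a store's own index
# file, distributed and verified separately from the APKs themselves.
TRUSTED_DIGESTS = {
    "org.example.app": "0" * 64,  # placeholder digest, for illustration only
}

def sha256_file(path):
    """Stream a file from disk and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_before_install(package_id, apk_path):
    """Refuse installation unless the APK matches the store's trusted digest."""
    expected = TRUSTED_DIGESTS.get(package_id)
    return expected is not None and sha256_file(apk_path) == expected
```

    This sidesteps the APK’s own signature entirely: the trust anchor moves from the key inside the package to the store’s index, which is why a trustworthy store app becomes more, not less, important under this scheme.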