IDK if Mastodon has a good way to port accounts, but I think it's good to have people first join a basic instance and then move to something more specialized once they get used to the platform.
I thought it was funny how Trump was just nodding along while Harris was saying that people often leave his rallies out of boredom. He also basically outright said that immigration has never happened in the history of the country, along with the other nonsensical things he's said.
it seems like the physical limits in the strength of cubes are probably becoming a problem lol
Those are some pretty beefy motors. It's interesting that they don't link to a product page for the motors in the video, since I assume the motors were the primary justification for the project.
Shutter Encoder: it has a ton of useful tools built in for quick video conversion, compression, trimming, etc., and it works very well for batch encoding lots of different video files.
Affine: it's a surprisingly feature-rich notes app (open source, but all cloud features are currently paid).
KopiaUI: an easy-to-use automatic backup program.
For a quick web-based downloader I use https://cobalt.tools/
People here are not understanding that language changes with location
I think they are pretty different. Could be useful for finding a good prompt though.
I used the UV project modifier to automatically project the image onto the model from the camera’s point of view while I made it. It’s not particularly hard, but it does take a fair amount of time to make the model.
There is also a tool I used called fSpy that extracts the 3D coordinate space from the image, so that Blender's axes align with those in the image.
There are a few AI models that try to estimate 3D space from a 2D image (MiDaS is the most popular), but none give nearly as good results as doing it yourself.
You only need to make a very rough model, just enough for some rough reflections, ambient occlusion, and occlusion behind objects in the image.
I then added some lights over the emissive parts of the image, and threw some random models in there.
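The camera-based projection the UV Project modifier performs can be sketched as a simple perspective projection of each vertex onto the image plane. A minimal Python sketch, not the Blender API; `project_to_uv` and its parameters are my own illustration, assuming the Blender camera convention of a camera at the origin looking down -Z:

```python
def project_to_uv(point, focal=1.0):
    """Project a 3D point (in camera space) to UV coordinates on the image.

    Assumes the camera sits at the origin looking down -Z (Blender's
    convention), with (0.5, 0.5) being the center of the image.
    """
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide: farther points land closer to the image center
    u = 0.5 + focal * x / -z
    v = 0.5 + focal * y / -z
    return (u, v)

# A point directly in front of the camera maps to the image center
print(project_to_uv((0.0, 0.0, -2.0)))  # (0.5, 0.5)
```

Doing this per vertex is what lets the photo "stick" to the rough geometry from the camera's point of view, so reflections and occlusion line up with the original image.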
I never even use emojis. I’m just saying that ambiguity isn’t an emoji-specific problem.
This should not be a surprise to anyone
People here are acting like regular words have no ambiguity or possibility of misinterpretation.
Some categories have solid alternatives, like DaVinci Resolve and Inkscape, while others, like Photoshop and Substance Painter, still lack adequate alternatives.
People need to see more of the world. Many are too isolated in their specific culture.
Halo too, and Minecraft
Never played it but Celeste?
Commit crimes but it’s a speedrun.
Edit: no one has it yet
How is that a spoiler? It's literally in the first minute of the game.
OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.
Theoretically, the best move for them would be to train their own, larger model using the same techniques (so as to still fully utilize their hardware), but this is easier said than done.