

The headline (as in AI replacing jobs) is as real as the CRAP (Computer Rendered Artificial Picture) it uses.



I think it would be very hard for AI-pilled leadership to back off from their claims and AI mandates; doing so would also undercut the hype they may be banking on or profiting from.


Since they’re using (now expensive) genAI and productivity isn’t improving (because it almost never does), they need to do something to appease stakeholders and offset the cost of genAI, of course.
That, and keeping employees’ perception of power and self-worth down, so they keep working their asses off in the hope of not being next in line.


“they will most likely know how to use it properly and ethically”
I’d argue that ethical use is not possible:


Andreas Kling’s Ladybird? I don’t wanna touch that with a 10-foot pole.
https://hyperborea.org/reviews/software/ladybird-inclusivity/


I have a service that pings the server:
cat <<EOF | sudo tee /etc/systemd/system/ping-smb.service
[Unit]
Description=Blocks until pinging 192.168.1.10 succeeds
# Wants= actually pulls in network-online.target; After= alone only orders against it
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=0

[Service]
Type=oneshot
ExecStart=/usr/bin/ping -c1 192.168.1.10
# Note: Restart= with Type=oneshot requires systemd >= 244
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl enable ping-smb.service
And then I make the fstab entry depend on it:
x-systemd.requires=ping-smb.service
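For context, here is a sketch of what the full fstab entry could look like; the share path, mount point, and credentials file are placeholders, not from the original:

```
# Hypothetical CIFS entry — server IP matches the ping unit above,
# share name and credentials path are made up for illustration.
//192.168.1.10/share  /mnt/smb  cifs  credentials=/etc/smb-credentials,x-systemd.requires=ping-smb.service,_netdev  0  0
```

The x-systemd.requires= option makes systemd generate a Requires= and After= dependency on ping-smb.service for the mount unit, so the mount is only attempted once the server answers a ping.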
Why?
No, really, why? If Google itself or their models didn’t discover the vulnerability, how would they know genAI was used in the discovery of the vulnerability and the weaponization (interestingly, not “creation”) of an exploit?