MrPeabody wrote: ↑April 13th, 2023, 8:50 pm
"The US military isn't stupid? They thought they could conquer Afghanistan. And keep in mind that GPT is only the first version, and it is going to improve exponentially. And we don't have any idea what the military AI is already capable of doing. 'Unintended consequences' is a reality with intelligent complex systems."

Well, the idea that the US military will hook all of its systems up to a giant AI is an old movie trope. If you're old enough, you may remember the movie WarGames.
That's the thing about the current state of AI: it cannot "improve exponentially". In fact, the opposite is true: each increasingly modest improvement will require more and more complexity and computing resources. It will improve logarithmically. What none of these AIs have is the capacity to reason. They can emulate the results of someone else's reasoning, including pre-canned, smart-ass answers to philosophical questions about life, consciousness and self-awareness (like that Sophia fraud), but they cannot express reasoning of their own, let alone consciousness.
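The diminishing-returns point can be illustrated with a toy power-law scaling curve. To be clear, the function and the exponent below are arbitrary illustrative assumptions, not measured values for any real model; the sketch only shows the general shape of the claim, that each 10x increase in compute buys a smaller absolute improvement than the last:

```python
def toy_loss(compute: float, alpha: float = 0.05) -> float:
    """Toy scaling law: loss falls as a small power of compute.

    alpha is an illustrative assumption, not a measured value.
    """
    return compute ** -alpha

# Measure the absolute improvement bought by each successive 10x of compute.
gains = []
prev = toy_loss(1.0)
for exponent in range(1, 5):
    cur = toy_loss(10.0 ** exponent)
    gains.append(prev - cur)
    prev = cur

# Each step costs 10x more compute yet yields a strictly smaller improvement.
assert all(g1 > g2 for g1, g2 in zip(gains, gains[1:]))
```

Under any curve of this shape, exponential growth in resources produces only linear-looking progress, which is the opposite of "improving exponentially".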
I know AI is a hot topic and those start-ups need to push their valuations to the stars. ChatGPT is the new Google, and it's an immense improvement for a lot of information- and knowledge-discovery tasks. However, the idea that it will suddenly become sentient and take over the world is just a sci-fi scenario.
Now, given the times we live in, if someone, maybe even the American government, wanted to use an "out-of-control AI" as cover for a false-flag operation to pursue a nasty agenda, THEN I would not be surprised at all. THIS scenario is not only possible, but quite plausible. Some researchers would swear governments will start blaming aliens, since blaming viruses isn't "in" anymore. Why not an evil AI?