SteveK
I know!
Absolute's AI just IG'd me suggesting I stick to belly shots.
Did you find the hidden cam on Alexa?
The objective of an objective is to reach some objective... i.e., parallel lines that eventually meet!
And that is straight from the Department of Redundancy Department.
I'm pretty sure my smart TV is watching me!
Did you repeat Department to show an example of redundancy?
When I purchased my last TV, a smart TV was suggested. Once I was told it listens and waits for your command, that settled it: no smart TV for me.
Watching you? Now that is creepy. Wonder if it is the little red light that blinks when you turn it on.
It's winking at you. I think it likes you, because you "turn it on!"
SteveK may not have listened to the Firesign Theater records in his formative years...
Correctomundo sir.
Agreed. I've tracked this thread. I don't think most folks grasp the difference between algorithmic response and cognitive response.
AI already trains itself. That's the definition of machine learning.
AI can already write code. It’s just a matter of time before AI will create more AI without human involvement.
So, no problem!
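For anyone wondering what "trains itself" actually means, here is a toy sketch in Python (the data points and variable names are made up for illustration, not from any real system). The program starts with two arbitrary parameters and repeatedly nudges them to fit a handful of data points. No human edits the model between steps; the update loop is the learning:

# Toy "machine learning": the program adjusts its own parameters from data.
# Fit y = w*x + b to a few points using plain gradient descent.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0   # arbitrary starting guesses
lr = 0.01         # learning rate (step size)

for step in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # The "self-training" part: the model updates itself from the data.
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near w=2, b=1

At bottom, that feedback loop is all "machine learning" is; the open question upthread is whether scaling it up ever turns an algorithmic response into a cognitive one.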
Greetings,
Mr. A. "...AI can be 'unplugged'." Read post #80 again, twice.
From my post #30: "Truly autonomous, self-aware AI IS coming and I would not be surprised if it arrives in the next 10 years, if not sooner... much sooner."
Please do understand... AI can be "unplugged" ...if... that becomes really required.
That's not actually obvious at all going forward. As a thought experiment, how would you unplug a system that spans millions of computers around the globe and that may have a sense of self-preservation? Using the example of HAL above, it wasn't that he was sentient and didn't want to die... it was that he perceived a threat to the mission.
Want to turn off the AI? Gotta get in the building... but suddenly your electronic key doesn't work. You try to turn off the power; suddenly the grid doesn't respond to your command. Call the Army to blow up the building? Suddenly the helicopter won't start and the radios don't work. And by the way... there are a hundred other buildings around the globe containing usable processing power, so you gotta deal with them too. You get the idea.
Seems kinda far-fetched but look at computer security today. We have a very good understanding of all the technical components in systems but we STILL have difficulty stopping nation-state actors. How the heck are we going to deal with the complexity of AI systems when we don't even fully understand how they work? Will we have to hire psychologists to validate AI systems?
To be clear, I don't think this is where we are now or will be in the near future. But it seems inevitable barring some catastrophic event that upsets civilization as we know it.
Or possibly it was before his time. I was quite a young lad when I heard it from an older gent.