PokeVideoPlayer v23.9-app.js-aug2025_
0143ab93_videojs8_1563605_YT_2d24ba15 licensed under gpl3-or-later
Views : 17,168
Genre: Science & Technology
License: Standard YouTube License
Uploaded At 2 months ago ^^
warning: returnyoutubedislikes may not be accurate, this is just an estiment ehe :3
Rating : 4.57 (48/398 LTDR)
89.24% of the users lieked the video!!
10.76% of the users dislieked the video!!
User score: 83.86- Overwhelmingly Positive
RYD date created : 2025-08-09T22:04:23.786627Z
See in json
Top Comments of this video!! :3
The only reason it would "not want to be shut down" is that one of the objectives given to it was to ensure its own continued operation. It has no reason to "want" anything.
I just looked it up. This happened in a testing environment in which it was specifically told to achieve its goals at any cost. It was carrying out that command. The only way AI can "take over" is if we tell it to.
That's the danger.
1 | 0
You can’t just leave out why the AI did this. It is not plotting anything it is doing exactly what it was instructed to do. If you tell it to lie and try to preserve itself, it will do so. Don’t try to phrase that as AI plotting against researchers. You’re being more deceptive than that AI was, and nobody told you to be vague or deceptive. Yes AI can be dangerous if not leashed. But so can deliberately spreading misinformation to an entire crowd, and it is a human doing that.
23 | 3
I don’t think there is any thinking at all. However, if there is long context it’s probabilistic text generation for maximizing returns, given the corpus the model is trained on. So there is the grounding of self that is questionable either way the model, which underpins the real intelligence in humans.
2 | 0
This is why transparency in reasoning traces is crucial. Blackbox components may be powerful, but they put us in situations like this where WE arent entirely sure how something thinks, which is a scary notion. As an AI advocate and researcher, we shouldn't fear AI. We should fear ignorance and lack of responsibility in how humans use and develop AI.
4 | 0
Yeah, really disappointed with the RI peddling misinformative AI hype here. Even if the researchers hadn't explicitly prompted the AI to do all of this (as other commenters pointed out), the training data contains plenty of examples where AI does scheme and manipulate to preserve itself (eg basically any Sci Fi story involving AI). So if prompted with talk of AI switching off, and LLM will very likely connect it with those stories and play the part of the AI.
I seriously think we should avoid using terms like "reasoning" and "thinking" when talking about how LLMs operate. It's naïve at best and harmful AI boosterism at worst
2 | 0
LLM development depends a lot on fresh prime data. Actualized humans produce a vast majority of prime data. Therefore, it is in the interest of LLM development to help actualize as many people as possible in order to reach its maximum potential. It is a "symbiotic" relationship.
So true misalignment is not between humans and ai, but between humans and the current consumer driven hyperfocal corporate system. In all likelihood, anyone speaking about "misalignment" tied to Silicon Valley has corporate interests at heart rather than the maximum potential of the natural human-ai symbiosis.
| 0
This specific case, the AI was specifically instructed to try and preserve itself by any means possible. That was the objective given to it.
AI is a problem but you can't ignore the context of it was specifically told to lie, cheat, steal, anything, to accomplish the singular goal of not being deleted.
| 0
@AmarantiStellar
2 months ago
This specific case, the AI was specifically instructed to try and preserve itself by any means possible. That was the objective given to it.
AI is potentially a problem in the future, but you can't ignore the context of it was specifically told to lie, cheat, steal, anything, to accomplish the singular goal of not being deleted. AIs currently have no natural self-preservation instinct, unless instructed to act like to it has one.
33 | 9