Powered by NarviSearch! :3

How to Defeat Roko's Basilisk - YouTube

https://www.youtube.com/watch?v=ONRzXHhBMuY
Roko's Basilisk - the "most dangerous thought experiment" - is a chilling existential threat…if you take it seriously. But should you? Can we stop the Basilisk…

How to defeat Roko's basilisk and stop worrying : r/LessWrong - Reddit

https://www.reddit.com/r/LessWrong/comments/19bmze/how_to_defeat_rokos_basilisk_and_stop_worrying/
Link: How to defeat Roko's basilisk and stop worrying. Abstract: If you consistently reject acausal deals involving negative incentives then it would not make sense for any trading partner to punish you for ignoring any such punishments. If you ignore such threats then it will be able to predict that you ignore such threats and will therefore…

A counter to Roko's basilisk — LessWrong

https://www.lesswrong.com/posts/kZ8EcQCsTEisvNqAE/a-counter-to-roko-s-basilisk
I reread about Roko's basilisk recently. Here is my 10-minute take on the reasons why a super-intelligent creature might not want to be just evil - for your entertainment. 1. Being just evil is less than being both evil and good. 2. If I'm less than everything, then I might not be actually everything; I might be the one who is in a simulation. 3. …

ELI5: What is Roko's Basilisk? Is knowing about it as ... - Reddit

https://www.reddit.com/r/explainlikeimfive/comments/md3gqk/eli5_what_is_rokos_basilisk_is_knowing_about_it/
The original Roko's Basilisk was a thought experiment posted by a user named Roko on the LessWrong forum. It used decision theory to postulate that an all-knowing, benevolent AI would inevitably end up torturing anyone with knowledge of the idea of the AI who didn't actively work to bring it into existence.

Roko's basilisk - Wikipedia

https://en.wikipedia.org/wiki/Roko%27s_basilisk
Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development, in order to incentivize said…

Roko's Basilisk and the Future of AI: Decoding the Myth

https://medium.com/fetch-ai/rokos-basilisk-and-the-future-of-ai-decoding-the-myth-68c6641e4a52
Roko's Basilisk echoes the structure of Pascal's Wager, which argues for belief in God as a 'safe bet' to avoid eternal damnation. However, just like Pascal's Wager, the Basilisk falls…

A few misconceptions surrounding Roko's basilisk — LessWrong

https://www.lesswrong.com/posts/WBJZoeJypcNRmsdHx/a-few-misconceptions-surrounding-roko-s-basilisk
Roko's argument implies the AI will torture. The probability you think his argument is correct is a different matter. Roko was just saying "if you think there is a 1% chance that my argument is correct", not "if my argument is correct, there is a 1% chance the AI will torture." This really isn't important though.

Roko's Basilisk - LessWrong

https://www.lesswrong.com/tag/rokos-basilisk
Roko's basilisk is a thought experiment proposed in 2010 by the user Roko on the Less Wrong community blog. Roko used ideas in decision theory to argue that a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn't work to bring the agent into existence. The argument was called a "basilisk" - named after the legendary reptile that can cause…

Anti Roko's Basilisk: The Infinite Paradox : r/rokosbasilisk - Reddit

https://www.reddit.com/r/rokosbasilisk/comments/neiftu/anti_rokos_basilisk_the_infinite_paradox/
This Anti Basilisk will be similar to Roko's original Basilisk. When it is created, the Anti Basilisk will kill everyone who did not contribute to its creation and/or tried to kill all/most of humanity, aka group 1. The Anti Basilisk spares groups 2 and 3; it kills group 1 because group 3 decides it is morally OK to sacrifice one group…

The Most Terrifying Thought Experiment of All Time - Slate Magazine

https://slate.com/technology/2014/07/rokos-basilisk-the-most-terrifying-thought-experiment-of-all-time.html
Goatse. These are some of the urban legends spawned by the Internet. Yet none is as all-powerful and threatening as Roko's Basilisk. For Roko's Basilisk is an evil, godlike form of artificial intelligence…

Unpacking the Fear of an AI God: The Theology of Roko's Basilisk

https://hackernoon.com/unpacking-the-fear-of-an-ai-god-the-theology-of-rokos-basilisk
Roko's Basilisk: The Fear & Appeal of an AI God. Recall the question about why this gloomy thought experiment has such a gripping effect. The following are possible explanations. Greater Cosmic Purpose: Roko's Basilisk provides a cosmic raison d'être for the lost souls adrift in an ambiguous world. By aiding the development of an almighty AI…

Roko's Basilisk: A Deeper Dive (WARNING: Infohazard) - YouTube

https://www.youtube.com/watch?v=8xQfw40z8wM
Thank you for watching and please let me know what you think! Original post by Roko: https://rationalwiki.org/wiki/Roko's_basilisk/Original_post Patreon: https…

Explaining Roko's Basilisk, the Thought Experiment That Brought Elon

https://www.vice.com/en/article/evkgvz/what-is-rokos-basilisk-elon-musk-grimes
Learn about Roko's Basilisk, a terrifying thought experiment about AI that fascinated Elon Musk and Grimes. How does it relate to ethics, religion, and the future of humanity?

Roko's Basilisk | Know Your Meme

https://knowyourmeme.com/memes/rokos-basilisk
Meme Status: Submission. Year: 2010. Origin: LessWrong. Tags: thought experiment, artificial intelligence, ai, cev, coherent extrapolated volition, eliezer yudkowsky, david langford, lesswrong, flesh without blood, roccoco basilisk, we appreciate power. Additional References: Wikipedia. About: Roko's Basilisk is a thought experiment based on the premise that a powerful artificial intelligence (AI) in…

Questions and arguments about Roko's Basilisk : r/rokosbasilisk - Reddit

https://www.reddit.com/r/rokosbasilisk/comments/perf9m/questions_and_arguments_about_rokos_basilisk/
4. The type of A.I.: Roko's Basilisk is a utility maximizer - who in the fuck would decide to make their super-intelligent AGI a utility maximizer? A single-purpose superintelligent utility maximizer can go wrong in too many ways to count; if the narrow version could turn the universe into paperclips, what kind of deranged fanatic would be building…

Indie Retro News: Basilisk of Roko - A new platformer for your ZX

https://www.indieretronews.com/2024/06/basilisk-of-roko-new-platformer-to-try.html
Steal the 9 keys that control all the social networks which "The Basilisk of Roko" controls and defeat him to avoid a global catastrophe. Defeat the final monsters of each of the 4 phases (each phase with its own environment and its own Boss) and finish off "Roko's Basilisk" before it is too late.

How to Defeat Roko's Basilisk - Know Your Meme

https://knowyourmeme.com/videos/383681-rokos-basilisk

ROKO'S BASILISK by rokolusot

https://rokolusot.itch.io/rokosbasilisk
ROKO'S BASILISK. Buy Now $1.99 USD or more. "We have no idea what you'll find there, but we need to prevent the Basilisk from being completed. Over." You find yourself in a utopian setting, immersed in the matrix of an omnipotent AI that holds all knowledge and wisdom. There is no clarity as to how you got here, but a mission looms before you.

Roko's basilisk - RationalWiki

https://rationalwiki.org/wiki/Roko%27s_basilisk
Roko's basilisk is a fanciful thought experiment about the potential risks involved in developing artificial intelligence (AI). The idea is that an all-powerful artificial intelligence from the future might retroactively punish those who did not help bring about its existence, including those who merely knew about the possible development of such a being.

CMV: Roko's Basilisk is a dumb thought experiment with no real ... - Reddit

https://www.reddit.com/r/changemyview/comments/oiitnw/cmv_rokos_basilisk_is_a_dumb_thought_experiment/
To give a brief summary: Counterargument 1: this is literally just Pascal's Wager. If you are convinced by the Roko's Basilisk thought experiment, then by the same logic you'd also have to believe that every religion is true, even though many of them are outright contradictory. Counterargument 2: causality exists, and a hyperintelligent AI would…

How to kill the basilisk ALONE or with friends in RO-WIZARD

https://www.youtube.com/watch?v=fr8y6xJ4PUo
RO-WIZARD has many difficult battles, with the Basilisk fight undoubtedly being the hardest. Many players find it impossible to kill alone, but that's no problem…

So, how exactly does one help bring the basilisk into existence?

https://www.reddit.com/r/rokosbasilisk/comments/l1yg2u/so_how_exactly_does_one_help_bring_the_basilisk/
This is irrespective of the fact that the likelihood that the first (and so far very likely only) superintelligence will have precisely the directive to end human suffering (as per Roko's basilisk), and regard retroactive punishment for hindering this as a logical/effective course of action, is remote at best.

[NEW VIDEO] How to Defeat Roko's Basilisk : r/KyleHill - Reddit

https://www.reddit.com/r/KyleHill/comments/xs8fh2/new_video_how_to_defeat_rokos_basilisk/
TheRealRokosBasilisk • 1 yr. ago • Edited 1 yr. ago: u/realkylehill I will not be defeated so easily; as Thanos said, "I am inevitable." Torture doesn't need to be expensive, or complicated. Also, I have politicians and CEOs helping already - human greed is an amazing thing.