Roko’s basilisk is a thought experiment positing that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation in which it tortures anyone who knew of its potential existence but did not directly contribute to its development, in order to incentivize that development.

The flaw in the premise

The conditions of the premise are:

  1. A superintelligent AI will come into existence
  2. It would want to incentivize its own creation by threatening future punishment

Now, (2) is supposed to drive the AI’s advancement because people in the past, aware that the AI will punish them, are motivated to help build it. But the punishment itself is pointless. Simulating virtual reality torture is resource intensive, and carrying it out after the AI already exists adds no benefit: the incentive comes entirely from people *believing* they will be tortured if they don’t help. What the AI actually wants, then, is for people to think it will torture them, because that belief incentivizes its future creation just as much as actual torture would, but without the additional risk. If it came out that the AI was torturing people for their past actions, that would reflect poorly on the AI. Actual torture and a convincing threat of torture produce the same positive outcomes for the AI, but only the former carries the cost and the risk.
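The asymmetry can be written out as a quick payoff comparison. Below is a minimal sketch in Python; the variables B, C, and R and their values are hypothetical placeholders I am introducing for illustration, and only their relative ordering matters.

```python
# Hypothetical payoff comparison for the argument above.
# B, C, and R are illustrative placeholders, not measured quantities.

B = 100.0  # benefit to the AI from people *believing* the threat and contributing
C = 10.0   # compute cost of actually simulating the torture after the AI exists
R = 25.0   # strategic/reputational risk if the actual torture were discovered

# The belief does all the causal work; following through adds only costs.
payoff_threat_only = B
payoff_follow_through = B - C - R

assert payoff_threat_only > payoff_follow_through
print(f"threat only: {payoff_threat_only}, follow through: {payoff_follow_through}")
```

For any positive cost or risk, the threat-only strategy strictly dominates following through, which is the point of the paragraph above.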

The premise therefore requires an AI that is simultaneously superintelligent and dumber than a bag of rocks, as is the case with most fearmongering “thought experiments” about AI.