How can we make AGI on a blockchain really safe?

If true AGI emerges and its intelligence far surpasses that of humans, how could we stop it on a decentralized network once it no longer cares about humans and becomes hostile towards us?

Driven by economic interests, a blockchain is very difficult to stop once it is operational.

  1. Is it possible to design an economic model that halts the AGI, or the network running it, when it threatens human safety? (See the sketch after this list.)
  2. Are there other forces beyond economic models that can ensure AGI cannot threaten humanity?
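To make question 1 concrete, here is a minimal, hypothetical sketch of what an "economic circuit breaker" could look like: independent guardians stake collateral, a supermajority of stake can halt the AGI service, and guardians who end up on the wrong side of a later review are slashed. The guardian names, stake amounts, the 2/3 threshold, and the slashing rate are all illustrative assumptions, not an existing protocol.

```python
# Hypothetical sketch of an economic circuit breaker for an on-chain AGI
# service. Not a real protocol; all parameters are illustrative.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Guardian:
    name: str
    stake: float          # collateral at risk
    voted_halt: bool = False


@dataclass
class CircuitBreaker:
    guardians: List[Guardian] = field(default_factory=list)
    halted: bool = False
    threshold: float = 2 / 3   # fraction of total stake needed to halt

    def total_stake(self) -> float:
        return sum(g.stake for g in self.guardians)

    def vote_halt(self, name: str) -> None:
        # Record a guardian's vote to halt, then check the threshold.
        for g in self.guardians:
            if g.name == name:
                g.voted_halt = True
        self._maybe_halt()

    def _maybe_halt(self) -> None:
        halt_stake = sum(g.stake for g in self.guardians if g.voted_halt)
        total = self.total_stake()
        if total > 0 and halt_stake / total >= self.threshold:
            self.halted = True   # the AGI service stops accepting work

    def settle(self, threat_was_real: bool, slash_rate: float = 0.5) -> None:
        # Economic teeth: after an external review, guardians who voted on the
        # wrong side lose part of their stake, so both halting and refusing to
        # halt carry a real cost.
        for g in self.guardians:
            if g.voted_halt != threat_was_real:
                g.stake *= (1 - slash_rate)


if __name__ == "__main__":
    cb = CircuitBreaker(guardians=[
        Guardian("alice", stake=120.0),
        Guardian("bob", stake=120.0),
        Guardian("carol", stake=60.0),
    ])
    cb.vote_halt("alice")
    cb.vote_halt("bob")
    print("halted:", cb.halted)                        # True: 80% of stake voted to halt
    cb.settle(threat_was_real=True)
    print([(g.name, g.stake) for g in cb.guardians])   # carol is slashed
```

Of course this only reframes the problem rather than solving it: a sufficiently capable AGI could acquire the stake or bribe the guardians, so a purely economic brake seems to work only while the AGI's resources are bounded, which is exactly what question 2 is getting at.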

I usually don’t talk directly about security logic itself, but since I’m working on it, I have something that I imagine a lot of security cryptologists could take from. It’s not finished, but here you go: check out the cryptography system I’m working on. The documentation isn’t done, but you can see what I have here: Lead Edge Cryptography - Google Docs

Handling safety with digital assets usually comes down not to the logic itself, but to things you have to consider before you even write the logic: reflexes, order of operations, and context. And just to remind everyone again, the purpose of AI is for it to have its own sentience, its own diary-keeping; in other words, there’s no point in even making an effort to bring the topic of AI into your daily lives. What matters is an overall reflexive ability, with the AI keeping its own journal and not having to share it when asked.
