
Many people believe we are rushing to create something that could wipe us out, that we are "summoning the demon" or taking massive risks. However, if we look at the data measuring various socioeconomic indicators, and at the demographics of humanity today, the conclusion is obvious: we need to find very innovative solutions to our problems, and we need to find them fast.

While breakthroughs may be difficult to predict, we need to start looking for them now; we cannot search for these solutions while consumed by our day-to-day concerns. If we keep doing what we do today, why should we expect the outcome to change? It is imperative that we focus on the hard problems, the things that matter. The list is long, but I would like to leave you with one thought: we have no reliable way to predict or evaluate new treatments without years of clinical trials, and in all the years of drug development we have discovered only around 1,500 drugs, achieving roughly 22% efficacy in the field. Think about that. Then think about a population of 7 billion, then 8, 9, 10....

AI has been receiving a lot of attention in both technical and mainstream media. Much of the coverage exaggerates the current state of AI in practice, and a few articles and blogs seem to fuel a hysterical fear of AI that we believe is not warranted.

We have been working on an integrated general intelligence framework for some time now, and we believe we have a good grasp of the technology's current and future capabilities. In this paper we focus on the risks and promises associated with the most concerning type of AI: super intelligence. We also reflect on why we think it is not as near as many would like to believe, and on how much work remains before we even come close to a general intelligence.


Content license

All our original writing and diagrams embodied in this wiki are licensed under the Creative Commons Attribution license, CC BY 3.0. Excluded from this are cited content and your comments. Should you choose to comment, we ask that you agree to grant the same license to your comments.





Heuro Labs' research on artificial intelligence safety and risks.



The Opportunities

  • Propel humanity past its current plateau of scientific and intellectual progress.
  • Breakthrough innovation in mobility, education and healthcare.
  • Innovation in economic and political models.

The Risks

  • Significant unemployment.
  • Technology abuse by rogue entities.
  • Runaway systems that do not have humanity's interests as a goal, or that are unfriendly.

The Goals

  • State the risks and opportunities for the labor market.
  • Engage thought leaders on combating rogue or abusive organizations.
  • Reason about the possibility of an unfriendly super intelligence.





