" What unintended consequences could spell disaster for humanity if a superintelligence did not share our values? "
— Nick Bostrom, SuperIntelligence, Paths, Dangers, Strategies
Objective of the project
For this exploratory project, we want to examine the possibility of limiting the evolution of a decision-making artificial intelligence. In collaboration with a research center, we will evaluate potential approaches to designing a decentralized network. It is imperative to find a solution to the problems that may arise. Could this solution be based on the concept of an alarm?
Stage of this project
We are currently in the analysis phase. The HydraLab team has compiled a list of requirements for a specific type of decision-making artificial intelligence. These requirements will be validated in collaboration with the participants in this project.
HydraLab is a hub where companies of different types collaborate to identify real needs. With co-development as the goal, we will release more details in 2020. The next phase is to initiate funding.