What are your views on superintelligent AI?
http://futureoflife.org/

There's lots of talk about whether we'll eventually get outsmarted by AI.

1. By what year do you guess there's at least a 50% chance that AI can outperform humans at all intellectual tasks?
   - 2018, 2019, 2020, 2021, 2022, 2023, 2024, 2025, 2026, 2027, 2028, 2029, 2030, 2035, 2040, 2045, 2050, 2060, 2070, 2080, 2090, 2100, 2150, 2200, 2300, 2400, 2500, 3000, or 4000
   - More than a few thousand years from now
   - Never

2. If such superhuman AI appears, will it be a good thing?
   - Definitely bad
   - Probably bad
   - Highly uncertain
   - Probably good
   - Definitely good

3. Do you want there to be superintelligent AI, i.e., general intelligence far beyond human level?
   - Yes
   - No
   - Unsure/it depends

4. If superintelligence arrives, what would you like to happen to humans?
   - Humans should continue to exist
   - Humans should be replaced by AI descendants
   - Humans should merge with machines into cyborgs
   - Humans should be uploaded

5. If superintelligence arrives, who should be in control?
   - Humans
   - Machines
   - Both together
   - It depends

6. If you one day get an AI helper, do you want it to be conscious, i.e., to have subjective experience (as opposed to being like a zombie that can at best pretend to be conscious)?
   - Yes, so it can enjoy having experiences
   - No, so I don't need to feel guilty about how I treat it
   - Depends on the circumstances
   - Unsure

7. What should a future civilization strive for?
   - Maximizing positive experiences
   - Minimizing suffering
   - Another goal I sympathize with
   - Let them pick any reasonable goal
   - Whatever they want, even if pointlessly banal
   - Unsure

8. Do you want life spreading into the cosmos?
   - Yes
   - No
   - Unsure

Max Tegmark's book Life 3.0 explores twelve scenarios for what might happen in the coming millennia depending on whether superintelligence is developed. Please rate how desirable you find each one.

9. Libertarian utopia: Humans, cyborgs, uploads, and superintelligences coexist peacefully thanks to property rights.

10. Benevolent dictator: Everybody knows that the AI runs society and enforces strict rules, but most people view this as a good thing.

11. Egalitarian utopia: Humans, cyborgs, and uploads coexist peacefully thanks to property abolition and guaranteed income.

12. Gatekeeper: A superintelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another superintelligence. As a result, helper robots with slightly subhuman intelligence abound and human-machine cyborgs exist, but technological progress is forever stymied.

13. Protector god: An essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control over our own destiny, and it hides well enough that many humans even doubt its existence.

14. Enslaved god: A superintelligent AI is confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on the human controllers.

15. Conquerors: AI takes control, decides that humans are a threat/nuisance/waste of resources, and gets rid of us by a method that we don't even understand.

16. Descendants: AIs replace humans but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who's smarter than them, who learns from them and then accomplishes what they could only dream of, even if they can't live to see it all.

17. Zookeeper: An omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.

18. 1984: Technological progress toward superintelligence is permanently curtailed not by an AI but by a human-led Orwellian surveillance state in which certain kinds of AI research are banned.

19. Reversion: Technological progress toward superintelligence is prevented by reverting to a pre-technological society in the style of the Amish.

20. Self-destruction: Superintelligence is never created because humanity drives itself extinct by other means (say, nuclear and/or biotech mayhem fueled by climate crisis).

21. Which scenario do you prefer overall?
   - Libertarian utopia
   - Benevolent dictator
   - Egalitarian utopia
   - Gatekeeper
   - Protector god
   - Enslaved god
   - Conquerors
   - Descendants
   - Zookeeper
   - 1984
   - Reversion
   - Self-destruction
   - I dislike them all
   - Undecided

22. What future do you want?

23. Please feel free to add any other thoughts that weren't adequately captured by the questions above.

24. Are you male or female?
   - Male
   - Female

25. What is your age?
   - 17 or younger
   - 18-20
   - 21-29
   - 30-39
   - 40-49
   - 50-59
   - 60 or older

26. What is the highest level of school you have completed or the highest degree you have received?
   - Less than a high school degree
   - High school degree or equivalent (e.g., GED)
   - Some college but no degree
   - Associate degree
   - Bachelor's degree
   - Graduate degree

27. Are you an AI researcher? (Students also count.)
   - Yes
   - No
   - Unsure

28. Which of the following recent AI-related books have you read?
   - "The Second Machine Age" by Erik Brynjolfsson & Andrew McAfee
   - "Superintelligence" by Nick Bostrom
   - "Life 3.0" by Max Tegmark

29. Optional: What is your name? (Won't be publicly shared.)

30. Would you like to subscribe to the monthly newsletter?
   - Yes
   - No

31. What is your email address?