Scenario Mapping: Potential Paths of Advanced Artificial Intelligence

This project aims to map the range of alternative future scenarios arising from the development of advanced artificial intelligence. The goal is to cover the full spectrum of risks, from minimal to existential, and to highlight structural risks that receive less attention.

This is not exactly a survey: the exercise is to rate each "condition" (an element of a possible scenario) on its potential impact on society and its plausibility. The results will be used in a general morphological analysis (GMA) model to outline the scenario space, identify relationships between dimensions, and present alternative futures and governance strategies. A second iteration with domain experts will refine the scenario space in more detail.

The research is for my MS thesis on AI risk and governance. I hope to publish the results and disseminate them to the appropriate audiences.

The survey shouldn't take more than 10 minutes. All questions are multiple choice via a drop-down menu, and responses will remain completely anonymous. For impact question clusters in which every option is positive, negative, or neutral, please either leave the rating at neutral or pick the "most" or "least" of the group.

Thank you very much for your participation! 
(General timeframe: 2045-2100, but ultimately timeline-agnostic.)

If you'd like further details on methods, definitions, or purpose, see below.


------------------------------------------------------------------------------------------
Details on measurement, assumptions, definitions, and purpose
How values are measured:

1) Impact. Impact identifies which "condition" of each dimension could have the most positive or negative outcome for civilization, on a scale from "high positive" through no change to "high negative." There are normative aspects to judging impact "positive" or "negative," so for question clusters that are all negative, positive, or neutral, please choose the most or least of the group.
2) Likelihood. For likelihood, think in terms of "plausibility" rather than probability, as these are highly uncertain conditions (very unlikely = 5-20%, unlikely = 20-40%, even chance = 40-60%, likely = 60-80%, very likely = 80-95%). For example: given your domain knowledge, do you believe this condition is "very likely" to occur, "likely," "even chance," or "very unlikely"?
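For readers curious how these ordinal answers might feed into the analysis, here is a minimal sketch in Python. The likelihood ranges come from the scale above; the variable names, the intermediate impact labels, and the use of range midpoints as point estimates are illustrative assumptions, not part of the survey design:

```python
# Hypothetical encoding of the survey's ordinal scales for analysis.
# The likelihood ranges come from the scale described above; the
# intermediate impact labels and the midpoint collapse are assumptions.
LIKELIHOOD_RANGES = {
    "very unlikely": (0.05, 0.20),
    "unlikely": (0.20, 0.40),
    "even chance": (0.40, 0.60),
    "likely": (0.60, 0.80),
    "very likely": (0.80, 0.95),
}

IMPACT_SCORES = {
    "high negative": -2,
    "negative": -1,
    "no change": 0,
    "positive": 1,
    "high positive": 2,
}

def likelihood_midpoint(label: str) -> float:
    """Collapse a likelihood label to the midpoint of its range."""
    low, high = LIKELIHOOD_RANGES[label]
    return (low + high) / 2

print(round(likelihood_midpoint("likely"), 2))  # 0.7
```

Keeping the full ranges rather than midpoints would be equally reasonable, given how uncertain these conditions are.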
Assumptions:
AI will continue to develop and receive investment, and economic disruptions or other global catastrophes will be limited. For several questions on capability, race dynamics, developer, and location, the assumption is that transformational AI will happen (or has happened).
Definitions:
"Advanced AI" will be defined as equivalent to "transformational AI" or "high-level machine intelligence" (HLMI). I'm defining this as a cluster of capabilities on a spectrum from transformational AI systems (perhaps not at human-level generality) to human-level AGI and superintelligence. 
Note on methods: 
The data will be used in a novel morphological model, built with general morphological analysis (GMA), to outline the scenario space and map the relationships and influence between dimensions and conditions. GMA hasn't been applied to AI risk in previous studies, so this should be a valuable addition to the field.
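For intuition about the mechanics, here is a minimal sketch in Python of how a morphological field can be enumerated. The dimension names are taken from questions 1-3 below; the single cross-consistency constraint is a hypothetical example, not a finding of this study:

```python
from itertools import product

# Morphological field: each dimension maps to its set of conditions.
# Dimensions and conditions here are taken from questions 1-3 below.
dimensions = {
    "takeoff_speed": ["slow", "moderate", "fast"],
    "system_power": ["low", "moderate", "high"],
    "distribution": ["wide", "moderate", "concentrated"],
}

# Hypothetical cross-consistency constraint (for illustration only):
# treat a fast takeoff as inconsistent with low system power.
def consistent(scenario: dict) -> bool:
    return not (scenario["takeoff_speed"] == "fast"
                and scenario["system_power"] == "low")

# The raw scenario space is the Cartesian product of all conditions;
# the cross-consistency assessment then prunes incompatible combinations.
names = list(dimensions)
scenarios = [dict(zip(names, combo)) for combo in product(*dimensions.values())]
viable = [s for s in scenarios if consistent(s)]

print(f"{len(scenarios)} raw combinations, {len(viable)} viable scenarios")
# 27 raw combinations, 24 viable scenarios
```

In a full GMA, cross-consistency is assessed pairwise across every pair of dimensions, so the viable scenario space is typically far smaller than the raw Cartesian product.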
1. Takeoff Speed - rate of change
Slow: status quo and potential AI winters; moderate: more rapid transformational change, but with time to normalize over years or decades; fast: the standard hard-takeoff scenario.
Impact / Likelihood
Slow takeoff (multiple decades)
Moderate takeoff (multiple years)
Fast takeoff (hours, days, months)
2. System power - capability and generality
System capability (power) and generality (learning across multiple domains), ranging from uneven capability (status quo) to transformational but limited, to AGI and superhuman.
Impact / Likelihood
Low power (status quo)
Moderate power (transformational, with limitations)
High power (AGI, ASI)
3. Distribution of advanced AI systems
Distribution measures how widely capabilities will be distributed across society: open source, only major companies, or a single laboratory.
Impact / Likelihood
Widely distributed 
Moderately distributed 
Concentrated in one lab
4. Timeline to Advanced AI
This is not a prediction. For timeline, please note what you think is most plausible: an unexpected hard takeoff, a moderate slow-moving train wreck, or incremental development over decades.
Impact / Likelihood
Over 50 years
Between 20 - 50 years
Less than 20 years
5. AI Race Dynamics
Under the assumption that AI will reach advanced capabilities, is it more plausible that we will see normal market competition, tech monopolies aggressively pursuing control of the sector, or acceleration into a government-led AI arms race?
Impact / Likelihood
Normal market competition
Tech monopolies race to control sector
Government-led AI "arms race"
6. Goals in developing advanced AI
What are the most plausible goals in the pursuit of advanced AI: intellectual interests, benefit to the world and humanity, economic interests, or technological dominance?
Impact / Likelihood
Intellectual and academic interests
Benefit to world and humanity
Economic interests
Technological dominance
7. Security risks with advanced AI systems
With the development of advanced systems, will the primary risks come from misuse (e.g., cyber attacks), from accidents and failure modes (e.g., misaligned goals), or from systemic risks (e.g., creeping normalization)?
Impact / Likelihood
Misuse (e.g., cyber, disinformation)
Accidents or failures
Systemic risks (e.g., fundamental changes to society)
8. Technological paradigm for advanced AI
As high-level capabilities near, which paradigm will get us there: the current deep learning paradigm, a new learning paradigm or architecture (e.g., quantum computing), or deep learning plus a new innovation?
Impact / Likelihood
Deep learning (current paradigm)
New discovery or innovation (e.g., quantum, new insight from neuroscience) 
Deep learning, plus new innovation
9. Potential accelerants
What could quickly accelerate AI to new capabilities: a compute overhang, increase, or bottleneck breach; feedback mechanisms through complementary technologies; or some radical new data type or training method?
Impact / Likelihood
Compute overhang, increase, or bottleneck
Technological complements or feedbacks
New type, use, or quantity of data
10. AI Safety techniques with advanced systems
To control high-level systems, which of the options below is most plausible? Will our current techniques scale to HLMI? Will new safety techniques need to be developed from the ground up? Or will custom methods be required for each new instantiation?
Impact / Likelihood
Current safety techniques scale to advanced AI
New techniques required for high-powered systems
Custom techniques needed for each new instantiation 
11. Primary AI Safety challenges at the time of advanced AI
As advanced capabilities near, what will plausibly remain the most difficult unsolved problem? Will goal alignment remain the most intractable? Will it be deception or power-seeking? Or will learned (mesa-)optimization be the problem (provided outer alignment is managed)?
Impact / Likelihood
Goal alignment 
Deception and influence-seeking 
Mesa-optimization
12. Developer of advanced AI
In your view, which entity will plausibly develop the first high-level machine intelligence: an allied group of countries (e.g., an Eastern bloc), an individual country, powerful corporations (e.g., Google, Tencent), or an individual developer?
Impact / Likelihood
Allied groups of states (e.g., Eastern bloc) 
Individual country
Corporation(s)
Individual developer
13. Mindset challenges that could lead to uncontrollable systems
Which mindset could most plausibly lead to uncontrollable systems: 1) the "we can control it" mindset, 2) the "we must remain economically competitive" mindset, or 3) the "if we don't build it, X country will" perspective?
Impact / Likelihood
"We can control it" (arrogance)
"We must stay economically competitive" (greed)
"If we don't build it, they y will" (prisoners dilemma)
14. International governance at the time of advanced systems
By the time advanced capabilities come online, is it more plausible that we'd have a ban on autonomous weapons, international norms of safe use, an international safety regime, or even treaties and verification measures between countries?
Impact / Likelihood
Autonomous weapons banned
International norms established
International safety regime (e.g., IAEA)
Multilateral treaties and verification 
15. Corporate governance at the time of advanced systems
By the time advanced capabilities come online, is it more plausible that leading companies will have only intercompany cooperation on safety methods, full commitments to set standards for safe and ethical use, or full-blown regulations and agreement on one common standard?
Impact / Likelihood
Intercompany cooperation on safety 
Commitments to safe and ethical standards 
Regulations and agreements on one common standard 
16. Developer location
In which region is it most plausible that high-level systems will be developed first?
Impact / Likelihood
USA or EU
Asia
Somewhere else
17. How familiar are you with AI safety or existential risk? (Required.)
18. Do you now work, or have you worked, in AI safety? (Required.)
19. How familiar are you with AI governance?
20. Please leave any comments or suggestions. Thank you!