Uncertainty, Structure, and Optimal Integration

The EA community and Future Fund risk wasting good analysis of propositions when considering long-term predictions that entail complex, compounding uncertainties. Specifically, by ignoring or over-simplifying the complex ways our propositions interrelate (i.e., how they are "structured"), we stand to make substantial errors in our predictions. Moreover, failing to appropriately "unpack" the structure behind our propositions risks substantial inefficiency in deciding which uncertainties (whether they stem from a lack of information or from substantive disagreement) are most critical to resolve.

In Sections 1 and 2 I provide a primer for the uninitiated on the nature and importance of unpacking and disentangling the uncertain propositions that comprise our prediction problems. In Sections 3 and 4 I lay out my analytical approach, which seeks to capture and resolve these issues. There, I provide a fully worked-through, illustrative decomposition of the misaligned AI X-Risk prediction problem, using a combination of Bayesian Network modelling and simulation techniques. My intent is to illustrate this analytical approach as a viable framework for a) making arguments and assumptions explicit and quantifiable in a unified manner, b) drawing mathematically optimal inferences (i.e., predictions) even in the face of multi-layered uncertainties, and c) laying the foundations for accessible computational methods that will assist further research. My assumptions and propositions within the model are predominantly grounded in extant estimates and analyses drawn from AI experts and fellow contributors. However, I also highlight a number of previously overlooked or underspecified propositions and conditional probabilities that affect our predictions but have previously remained hidden.
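To give a flavour of the Bayesian-network-plus-simulation approach described above, here is a minimal Python sketch that propagates uncertainty through a toy three-node chain (AGI arrives → AGI is misaligned → misalignment causes catastrophe). Every node name, distribution choice, and parameter value below is an illustrative assumption for exposition only, not an estimate from the model discussed in this post:

```python
import random

def simulate_xrisk(n_samples=100_000, seed=0):
    """Monte Carlo over a toy three-node conditional chain:
    P(catastrophe) = P(AGI) * P(misaligned | AGI) * P(catastrophe | misaligned).
    Each conditional probability is itself uncertain; here that second-order
    uncertainty is modelled with arbitrary, illustrative Beta distributions."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_samples):
        p_agi = rng.betavariate(4, 4)          # illustrative: uncertainty about AGI arriving
        p_misaligned = rng.betavariate(2, 6)   # illustrative: misalignment given AGI
        p_catastrophe = rng.betavariate(2, 8)  # illustrative: catastrophe given misalignment
        # Multiply along the chain to get one sampled end-to-end probability
        draws.append(p_agi * p_misaligned * p_catastrophe)
    mean = sum(draws) / len(draws)
    return mean, draws

mean, draws = simulate_xrisk()
```

The point of the sketch is that the output is a full distribution (`draws`), not a single number: one can then ask which node's uncertainty contributes most to the spread, which is exactly the "where should we focus" question raised above.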