IMPORTANT: Label your answers to correspond to the labeling of the questions.
Use ONLY the reading which I provide. Thanks!

Sharing the World with Digital Minds (continued)
Read §§3-4 of “Sharing the World with Digital Minds” (Shulman & Bostrom).
1. Why might the creation of super-beneficiary digital minds be catastrophic? Why might failing to create such minds be catastrophic? What are some considerations that are relevant to which sort of catastrophe would be worse?
2. What’s the person-affecting approach? On it, why might there be a reason to give scarce resources to a digital mind but no reason to create such a mind and then give resources to it? What’s an objection to the approach?
3. What is Shulman & Bostrom’s positive proposal regarding sharing the world with digital minds? How plausible do you find the proposal? Support your answer.
Artificial Replacement
Read “In Defense of Artificial Replacement” (Shiller)
4. Shiller seems to think that the evolutionary process is less likely to create beings that live good lives than an artificial design process would be. Why does he think this? Say why you (dis)agree.
5. What are the potential costs of causing the extinction of humans in favor of artificial life? Do you think that these costs would be small compared to the potential benefit for future generations? Explain.
6. Suppose it turned out that we could “perfect” humans to have lives as good as artificial life, at around the same time. Is there a reason to prefer a world with one type of life over the other? Why (not)?