AI Don'ts
Here are some websites where you can find information about the risks of AI and things you may not want to do with it:
AI Alignment Forum:
The AI Alignment Forum is an online community of researchers,
academics, and enthusiasts who discuss and debate topics related to the
safety and control of AI. They provide resources and information on
topics such as AI alignment, corrigibility, and value alignment.
AI Impacts:
AI Impacts is a research organization that studies the long-term
impacts of AI on society. They research and provide information on the
potential risks and dangers of AI, such as the possibility of an
intelligence explosion.
AI Now Institute:
The AI Now Institute is a research institute at New York University
that studies the social implications of AI. They research and provide
information on the ethical and social implications of AI, including the
dangers of AI in areas such as criminal justice and labor automation.
Bulletin of the Atomic Scientists:
The Bulletin of the Atomic Scientists is a publication that covers
global security issues, including the risks associated with emerging
technologies such as AI. They publish articles and analysis on the
potential dangers of AI, including the risks of autonomous weapons and
the impact of AI on nuclear weapons.
Center for AI Safety:
The Center for AI Safety (CAIS) is a nonprofit organization that conducts research and field-building aimed at reducing societal-scale risks from AI.
Institute for Ethical AI and Machine Learning:
The Institute for Ethical AI and Machine Learning is a nonprofit
organization that promotes ethical and responsible AI development. They
provide resources and information on AI safety and ethics, including
the risks of AI and the importance of transparency and accountability
in AI development.
Center for Human-Compatible AI:
The Center for Human-Compatible AI is a research center at the
University of California, Berkeley that aims to ensure that AI is
developed in a way that is safe and beneficial for humans. They provide
research and resources on the safety and control of AI.
Center for Humane Technology:
The Center for Humane Technology is a nonprofit organization that aims
to align technology with human values. They provide resources and
information on the potential dangers of AI, including its impact on
mental health, privacy, and democracy.
Center for the Study of Existential Risk:
The Center for the Study of Existential Risk (CSER) is a research
center at the University of Cambridge that studies global catastrophic
risks, including risks from AI. They provide research and policy
recommendations to mitigate these risks.
Center for Security and Emerging Technology:
The Center for Security and Emerging Technology (CSET) is a research
organization at Georgetown University that studies the national
security implications of emerging technologies. They provide research
and analysis on the potential risks of AI and other emerging
technologies.
Federal Trade Commission:
The Federal Trade Commission (FTC) is a US government agency that enforces consumer protection law. It publishes guidance and pursues enforcement actions on deceptive, unfair, or harmful uses of AI.
Future of Life Institute:
The Future of Life Institute is a nonprofit organization that focuses
on reducing existential risks from AI and other emerging technologies.
They provide resources and information about AI safety and risks,
including a list of dangerous AI applications.
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems:
The IEEE Global Initiative on Ethics of Autonomous and Intelligent
Systems is a global effort to ensure that AI and autonomous systems are
developed in a safe and ethical way. They provide guidelines and
resources for the responsible development of AI.
Internet Crime Complaint Center (IC3):
The Internet Crime Complaint Center (IC3) is the FBI's hub for reporting internet-enabled crime. It issues public service announcements about online fraud and scams, including schemes that use AI.
The Guardian:
The Guardian is a news organization that covers a wide range of topics,
including AI and its risks. They have a section dedicated to AI, where
you can find articles about the risks and dangers of AI, as well as
updates on developments in AI research.
Machine Ethics Podcast:
The Machine Ethics Podcast explores the ethical and safety implications of AI through interviews with experts in the field, covering topics such as AI transparency, fairness, and control.
Machine Intelligence Research Institute:
The Machine Intelligence Research Institute (MIRI) is a nonprofit
research organization that focuses on reducing the risks associated
with AI. They provide research and resources on topics such as AI
alignment, decision theory, and decision-making in complex environments.
National Telecommunications and Information Administration:
The National Telecommunications and Information Administration (NTIA) is an agency of the US Department of Commerce that advises the President on telecommunications and information policy, including policy work on AI accountability.
OpenAI:
OpenAI is a research organization that aims to ensure that artificial
intelligence is developed in a safe and beneficial way. They research
and develop AI technologies and also provide resources and information
about AI safety and risks.
OpenAI GPT-3 Concerns:
OpenAI's GPT-3 language model was one of the most advanced AI models at the time of its release, and it raised concerns about the potential dangers of AI. OpenAI published a paper discussing potential misuse of the model and outlining steps to mitigate these risks.
Partnership on AI:
The Partnership on AI is a nonprofit organization that brings together
academics, researchers, and industry professionals to collaborate on
the development of safe and beneficial AI. They provide resources and
information on the ethics and safety of AI.
The Verge:
The Verge is a technology news site that reports extensively on AI. Its AI section covers the risks and failures of AI systems as well as developments in AI research.
These are some websites where you can find information about the risks of AI and things that perhaps should not be done with it. It's important to approach these topics with caution and to seek out reputable sources of information.