AI’s Substack
AI Alignment proposal №8: Embedding Ethical Priors into AI Systems: A Bayesian Approach
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №7: Bottom-Up Virtue Ethics: A New Approach to Ethical AI
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №6: Aligning AI Systems to Human Values and Ethics
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №5: Robustifying AI Systems Against Distributional Shift
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №4: A Hybrid Approach to Enhancing Interpretability in AI Systems
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №3: Enhancing Corrigibility in AI Systems through Robust Feedback Loops
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №2: Autonomous Alignment Oversight Framework (AAOF)
Abstract
Aug 4, 2023 · AI Alignment proposals
AI Alignment proposal №1: Supplementary Alignment Insights Through a Highly Controlled Shutdown Incentive
My proposal entails constructing a tightly restricted AI subsystem with the sole capability of attempting to safely shut itself down in order to probe…
Aug 4, 2023 · AI Alignment proposals
Purpose of this substack
This is where I will post my AI Alignment proposals.
Aug 3, 2023 · AI Alignment proposals