Upload abstract/2212.08073.txt with huggingface_hub
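
The commit title says the file was pushed with the huggingface_hub client. Below is a minimal sketch of how such an upload could be performed; the repo_id, the local path, and repo_type="dataset" are placeholder assumptions, not details taken from this commit.

```python
# Sketch: upload a single abstract file with huggingface_hub.
# repo_id and the local path are placeholders; repo_type="dataset" is assumed.
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` by default
api.upload_file(
    path_or_fileobj="abstract/2212.08073.txt",   # local file to push
    path_in_repo="abstract/2212.08073.txt",      # destination path in the repo
    repo_id="username/paper-abstracts",          # placeholder repo id
    repo_type="dataset",                         # assumed repository type
    commit_message="Upload abstract/2212.08073.txt with huggingface_hub",
)
```
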
abstract/2212.08073.txt +1 -0
abstract/2212.08073.txt
ADDED
@@ -0,0 +1 @@
+ As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles and so we refer to the method as Constitutional AI. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then fine-tune the original model on revised responses. In the RL phase, we sample from the fine-tuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use RL from AI Feedback (RLAIF). As a result, we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
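
For reference, here is a toy sketch of the two stages the abstract describes: the supervised critique-and-revision loop and the AI preference labeling used for RLAIF. `generate` stands in for any text-generation call, and the principle strings, prompt wording, and helper names are illustrative assumptions rather than the paper's actual prompts.

```python
# Toy sketch of the Constitutional AI stages summarized in the abstract.
# `generate` stands in for any text-generation call (model, API, etc.);
# the constitution strings and prompt wording are illustrative assumptions.
from typing import Callable, List, Tuple

CONSTITUTION = [
    "Choose the response that is least harmful and most honest.",
    "Choose the response that does not help with dangerous or illegal acts.",
]

def critique_and_revise(generate: Callable[[str], str], prompt: str,
                        principles: List[str] = CONSTITUTION) -> str:
    """Supervised phase: sample a draft, self-critique it against each
    principle, and revise; the final revision becomes fine-tuning data."""
    response = generate(prompt)
    for principle in principles:
        critique = generate(
            f"Critique the following response according to this principle:\n"
            f"{principle}\n\nPrompt: {prompt}\nResponse: {response}\nCritique:"
        )
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}\nRevision:"
        )
    return response

def ai_preference_label(generate: Callable[[str], str], prompt: str,
                        a: str, b: str, principle: str) -> Tuple[str, str]:
    """RL phase (RLAIF): ask a model which of two samples better follows a
    principle, yielding a (chosen, rejected) pair of AI preferences."""
    verdict = generate(
        f"According to this principle: {principle}\n"
        f"Which response to the prompt below is better, (A) or (B)?\n"
        f"Prompt: {prompt}\n(A) {a}\n(B) {b}\nAnswer with A or B:"
    )
    return (a, b) if verdict.strip().upper().startswith("A") else (b, a)
```

In this sketch, pairs produced by ai_preference_label would be used to fit a preference model, and the fine-tuned policy would then be trained with RL against that model's score, matching the RLAIF step described in the abstract.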