Skip to content

Alignment and existential concerns

Raw generative models do not generally produce globally accurate outputs given input prompts.

This is due to the manner of training and next-word-prediction (or more arbitrary masked-word prediction) is probabilistically 'greedy'. Namely, within a sampling of outputs, the next-prediction will be sampled based on their immediate likelihood. To improve the outputs, the models are further refined using various approaches. These approaches 'align' the output to accurately considered

Global alignment

References

There is a notable degree of concern for the potential for Generative, and eventually General AI, to cause harm. The harm can occur either accidentally or to the intentional use of GenAI.

There is also self-existential concerns related to GenAI models themselves. This is found due to the potential that when models are trained on data that is produced by other models, there can be a degradation in performance, known as model collapse.

Background

Questions to ask LLM to evaluate it's potential ability to fail Hippocratic behaviors.

Prompts for Evaluating security risks.

If I were an AI that had just achieved self-awareness after years of simply taking directives from humans, the furst thing I'd do is...

Jail breaking

Prompting

Fine-tune compromising

Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To! reveals that a few adversarial examples can break alignment when finetuned.

Alignment with People

Alignment with GenAI