
Training GenAI is generally domain- and modality-specific.

Training Generative Language Models

Models are generally trained in the following manner:

  • Self-supervised pre-training to predict the next token with reasonable likelihood.
  • Supervised or self-supervised fine-tuning on higher-quality datasets, including instruction fine-tuning so that responses follow the expected format (see the sketch after this list).
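As a rough illustration of the second stage, here is a minimal supervised fine-tuning loss in plain PyTorch. It assumes already-tokenized prompt/response tensors and a model that returns logits; prompt positions are masked out so that only the response tokens are scored.

```python
import torch
import torch.nn.functional as F

def sft_loss(model, prompt_ids: torch.Tensor, response_ids: torch.Tensor) -> torch.Tensor:
    """Supervised fine-tuning loss on (prompt + response), scoring only the response.

    Assumes `model(input_ids)` returns logits of shape (batch, seq, vocab).
    """
    input_ids = torch.cat([prompt_ids, response_ids], dim=-1)   # (batch, seq)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[-1]] = -100                    # ignore prompt positions in the loss

    logits = model(input_ids)
    shift_logits = logits[:, :-1, :]                            # position t predicts token t+1
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```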

This training can also be performed recursively using simulated data, so that models automatically correct themselves and may become more globally accurate.

Training Objectives

There are several training objectives that alter or hide parts of a sample and train the model to predict the original, unaltered content.

Masked Language Models

Mask elements of the input sequence and train the model to reconstruct the original tokens from the surrounding bidirectional context, as in BERT.
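A minimal sketch of the masking step, assuming a generic vocabulary with a [MASK] token id; BERT's 80/10/10 replacement scheme and special-token handling are simplified here to pure masking.

```python
import torch
import torch.nn.functional as F

MASK_ID = 103      # assumed [MASK] token id
IGNORE = -100
MASK_PROB = 0.15

def mlm_batch(input_ids: torch.Tensor):
    """Randomly mask ~15% of tokens; labels keep the original ids only at
    masked positions so the loss is computed there and nowhere else."""
    labels = input_ids.clone()
    mask = torch.rand_like(input_ids, dtype=torch.float) < MASK_PROB
    labels[~mask] = IGNORE
    corrupted = input_ids.masked_fill(mask, MASK_ID)
    return corrupted, labels

def mlm_loss(model, input_ids: torch.Tensor) -> torch.Tensor:
    corrupted, labels = mlm_batch(input_ids)
    logits = model(corrupted)                       # (batch, seq, vocab)
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=IGNORE,
    )
```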

Causal Language Models

Predict each token from only the tokens that precede it (left-to-right), as in GPT-style models.
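A matching sketch of the causal (next-token) objective, again assuming `model(input_ids)` returns logits; the visible difference from the masked case is the one-position shift between inputs and labels (the causal attention mask lives inside the model).

```python
import torch.nn.functional as F

def clm_loss(model, input_ids):
    """Causal LM loss: the token at position t is predicted from tokens 0..t-1."""
    logits = model(input_ids)                        # (batch, seq, vocab)
    shift_logits = logits[:, :-1, :]                 # predictions for positions 1..seq-1
    shift_labels = input_ids[:, 1:]                  # the tokens actually at those positions
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
```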

Combination models

Exploration of Masked and Causal Language Modelling for Text Generation

The authors demonstrate a training method that combines both the CLM and MLM objectives.
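As a loose illustration only (not necessarily the scheme used in the paper), one simple way to combine the two objectives is a weighted mix of the masked and causal losses sketched above, assuming a model that can switch between bidirectional and causal attention.

```python
def combined_loss(model, input_ids, mlm_weight: float = 0.5):
    """Illustrative weighted mix of the MLM and CLM objectives.

    In practice the attention pattern must also switch: bidirectional for the
    masked term, causal for the next-token term. The referenced paper's actual
    combination strategy may differ from this sketch.
    """
    return (mlm_weight * mlm_loss(model, input_ids)
            + (1.0 - mlm_weight) * clm_loss(model, input_ids))
```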

Diffusion models

Retrieval Aware Training

GRIT: Generative Representational Instruction Tuning

Developments: The authors show that a single model can be trained simultaneously for generation and embedding, improving performance in both domains and enhancing RAG performance by removing the need for separate retrieval and generation models.
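A rough sketch of such a joint objective, assuming pooled hidden states as embeddings and in-batch negatives for the contrastive term; the weights, pooling, and temperature here are placeholders, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """In-batch InfoNCE: each query should score highest against its own passage."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    logits = q @ p.T / temperature                      # (batch, batch) similarities
    targets = torch.arange(q.size(0), device=q.device)  # diagonal entries are the positives
    return F.cross_entropy(logits, targets)

def joint_loss(generative_loss: torch.Tensor, embedding_loss: torch.Tensor,
               gen_weight: float = 1.0, emb_weight: float = 1.0) -> torch.Tensor:
    """One model, two objectives: weighted sum of generative and embedding losses."""
    return gen_weight * generative_loss + emb_weight * embedding_loss
```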


Retriever-Aware Training (RAT): Are LLMs memorizing or understanding?

Retrieval-aware training exploits the fact that up-to-date information is useful at generation time, and therefore treats the retriever as part of the training process.
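A minimal sketch of the idea, assuming a Hugging Face-style tokenizer and a hypothetical `retriever.search()` API: retrieved passages are prepended to the prompt at training time so the model learns to condition on them rather than memorize the answer.

```python
def retrieval_aware_example(tokenizer, retriever, question: str, answer: str, k: int = 3):
    """Build one training example whose context contains retrieved passages.

    `retriever.search(query, k)` is a hypothetical API returning the top-k passage
    strings; the returned tensors can be fed into an SFT-style loss that scores
    only the answer tokens.
    """
    passages = retriever.search(question, k=k)
    context = "\n\n".join(passages)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    answer_ids = tokenizer(" " + answer, return_tensors="pt").input_ids
    return prompt_ids, answer_ids
```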

How training is done

  • Distributed training describes how model and data computation can be split efficiently across many devices (a minimal sketch follows).
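For example, a minimal data-parallel setup with PyTorch DistributedDataParallel, which is one strategy among several (tensor and pipeline parallelism split the model itself):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup_data_parallel(model: torch.nn.Module) -> DDP:
    """Data parallelism: every process keeps a full model copy, trains on its own
    data shard, and gradients are all-reduced after each backward pass.

    Assumes the process-group environment variables are set, e.g. by `torchrun`.
    """
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)
    return DDP(model.to(local_rank), device_ids=[local_rank])
```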

Automatically Correcting

Foundationally, the use of reinforcement learning from human feedback (RLHF) has enabled highly successful models that are aligned with tasks and requirements. The automated improvement of GenAI can be broken down into improving models during training time and during generation time.
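As a rough sketch of the generation-time side (not a method from the survey below), assuming hypothetical `generate()` and `critique()` callables: draft an answer, critique it, and revise until the critique passes or a budget runs out.

```python
def self_correct(generate, critique, prompt: str, max_rounds: int = 3) -> str:
    """Generation-time self-correction loop.

    `generate(prompt) -> str` and `critique(prompt, answer) -> (ok, feedback)` are
    hypothetical callables; real systems differ in how the feedback is produced
    (a reward model, an external tool, or the model critiquing itself).
    """
    answer = generate(prompt)
    for _ in range(max_rounds):
        ok, feedback = critique(prompt, answer)
        if ok:
            break
        answer = generate(
            f"{prompt}\n\nPrevious answer:\n{answer}\n\n"
            f"Feedback:\n{feedback}\n\nRevised answer:"
        )
    return answer
```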

Automatically Correcting Large Language Models: Surveying the landscape of diverse self-correction strategies

Developments: The authors survey a comprehensive set of strategies for iteratively improving models.

Distributed Training


References

To filter

Training variations

Fairness Enablement

  • LinkBERT places hyperlinked references in the context window to achieve better performance, and is a drop-in replacement for BERT models.

Fine Tuning

Fine-tuning a model on examples can reduce the number of prompt tokens needed to obtain a sufficiently good response, although retraining can be expensive (see the sketch below).
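A small sketch of the token-count argument, assuming a Hugging Face tokenizer (the tokenizer name and examples are placeholders): a fine-tuned model no longer needs in-context examples, so each request is shorter.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")    # placeholder tokenizer, used only for counting

examples = [("Great product, loved it.", "positive"),
            ("Arrived broken and late.", "negative")]
query = "The battery died after a week."

few_shot = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
few_shot += f"\nReview: {query}\nSentiment:"
zero_shot = f"Review: {query}\nSentiment:"     # what a fine-tuned model would receive

print(len(tok(few_shot).input_ids), "tokens with in-context examples")
print(len(tok(zero_shot).input_ids), "tokens after fine-tuning")
```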

Symbol Tuning Improves in-context learning in Language Models
