Customize MLC Chat Config

mlc-chat-config.json is required at both compile time and runtime, serving two purposes:

  1. Specify how we compile a model (shown in Compile Model Libraries), and

  2. Specify conversation behavior in runtime.

This page focuses on the second purpose. We explain the components of a chat configuration and how to customize them by modifying the file. Additionally, the runtimes provide APIs to optionally override some of these configurations.
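For instance, the Python API lets you override sampling fields on a per-request basis (a minimal sketch; the model path and parameter values below are illustrative):

  from mlc_llm import MLCEngine

  # Illustrative model path; replace it with your own compiled model.
  model = "HF://mlc-ai/Llama-2-7b-chat-hf-q4f16_1-MLC"
  engine = MLCEngine(model)

  # Per-request overrides take precedence over the defaults
  # stored in mlc-chat-config.json.
  response = engine.chat.completions.create(
      messages=[{"role": "user", "content": "What is the meaning of life?"}],
      model=model,
      temperature=1.0,  # overrides "temperature" from the config
      top_p=0.95,       # overrides "top_p" from the config
  )
  print(response.choices[0].message.content)
  engine.terminate()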

At runtime, this file is stored under the directory of each compiled model (e.g. the RedPajama chat config).
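For example, a compiled model directory typically looks like the following (an illustrative layout; exact file names vary by model):

  dist/Llama-2-7b-chat-hf-q4f16_1-MLC/
    mlc-chat-config.json      # the file discussed on this page
    ndarray-cache.json
    params_shard_0.bin
    ...
    tokenizer.json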

Structure of MLCChat Configuration

Below is the mlc-chat-config.json file corresponding to the Llama-2 model:

// mlc-chat-config.json
{
  // 1. Metadata used to specify how to compile a model
  "model_type": "llama",
  "quantization": "q4f16_1",
  "version": "0.1.0",
  "model_config": {
    "hidden_size": 4096,
    "intermediate_size": 11008,
    // more fields here...
  },
  "vocab_size": 32000,
  "context_window_size": 4096,
  "sliding_window_size": -1,
  "prefill_chunk_size": 4096,
  "tensor_parallel_shards": 1,

  // 2. Tokenizer-related fields
  "pad_token_id": 0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "tokenizer_files": [
    // more files here...
  ],

  // 3. Conversation template related fields
  "conv_template": {
    "name": "llama-2",
    "system_template": "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n ",
    "system_message": "You are a helpful, respectful and honest assistant.",
    // more fields here...
  },

  // 4. Chat related fields that affect runtime behavior
  "mean_gen_len": 128,
  "max_gen_len": 512,
  "shift_fill_factor": 0.3,
  "temperature": 0.6,
  "repetition_penalty": 1.0,
  "top_p": 0.9
}

Fields in the first part of mlc-chat-config.json (e.g. context_window_size) are for compile-time only. Changing them at runtime may lead to unexpected behavior.

As shown above, the file is divided into four parts. We focus on the last two parts, which can be customized to change the behavior of the model at runtime.
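Since the file is plain JSON, one way to customize these fields is a few lines of Python (a generic sketch; the path below is an assumption):

  import json
  from pathlib import Path

  # Illustrative path; point this at your own compiled model directory.
  config_path = Path("dist/Llama-2-7b-chat-hf-q4f16_1-MLC/mlc-chat-config.json")

  config = json.loads(config_path.read_text())
  config["temperature"] = 0.8          # encourage more diverse outputs
  config["repetition_penalty"] = 1.1   # discourage repeated text
  config_path.write_text(json.dumps(config, indent=2))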



conv_template
The conversation template that this chat uses. For more information, please refer to the Conversation Structure section below.

Note: a legacy mlc-chat-config.json may specify a string for this field, which is used to look up a registered conversation template. This usage will be deprecated in the future; re-generate the config with the latest version of mlc_llm to make sure this field is a complete JSON object.


temperature
The temperature applied to logits before sampling. The default value is 0.7. A higher temperature encourages more diverse outputs, while a lower temperature produces more deterministic outputs.
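To see the effect, here is a standalone sketch (not MLC-LLM code) of how temperature reshapes the sampling distribution:

  import math

  def softmax_with_temperature(logits, temperature):
      # Scale logits by 1/T before normalizing; T < 1 sharpens the
      # distribution, T > 1 flattens it.
      scaled = [x / temperature for x in logits]
      m = max(scaled)
      exps = [math.exp(x - m) for x in scaled]
      total = sum(exps)
      return [e / total for e in exps]

  logits = [2.0, 1.0, 0.5]
  print(softmax_with_temperature(logits, 0.6))  # peaked: favors the top token
  print(softmax_with_temperature(logits, 1.5))  # flatter: more diverse sampling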


repetition_penalty
The repetition penalty controls the likelihood of the model generating repeated text. The default value is 1.0, indicating that no repetition penalty is applied. Increasing the value reduces the likelihood of repeated text generation; however, setting a high repetition_penalty may result in the model generating meaningless text. The ideal repetition penalty varies among models.

For more details on how repetition penalty controls text generation, please check out the CTRL paper.
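As a rough illustration of a CTRL-style penalty (a simplified sketch, not MLC-LLM's exact implementation), the logits of already-generated tokens are pushed down before sampling:

  def apply_repetition_penalty(logits, generated_token_ids, penalty):
      # Penalize tokens that already appeared in the output.
      # Dividing positive logits (and multiplying negative ones) by the
      # penalty lowers their probability after softmax.
      for token_id in set(generated_token_ids):
          if logits[token_id] > 0:
              logits[token_id] /= penalty
          else:
              logits[token_id] *= penalty
      return logits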


top_p
This parameter determines the set of tokens from which we sample during decoding. The default value is 0.95. At each step, we select tokens from the minimal set whose cumulative probability exceeds top_p.

For additional information on top-p sampling, please refer to this blog post.
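The following standalone sketch (not MLC-LLM code) shows the idea behind top-p (nucleus) sampling:

  import random

  def top_p_sample(probs, top_p=0.95):
      # Keep the smallest set of tokens whose cumulative probability
      # exceeds top_p, then renormalize and sample from that set.
      indexed = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
      nucleus, cumulative = [], 0.0
      for token_id, p in indexed:
          nucleus.append((token_id, p))
          cumulative += p
          if cumulative > top_p:
              break
      total = sum(p for _, p in nucleus)
      r = random.random() * total
      for token_id, p in nucleus:
          r -= p
          if r <= 0:
              return token_id
      return nucleus[-1][0]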


mean_gen_len
The approximate average number of tokens generated in each round. Used to determine whether the maximum window size would be exceeded.


max_gen_len
This parameter determines the maximum length of the generated text. If it is not set, the model generates text until it encounters a stop token.


shift_fill_factor
The fraction of the maximum window size to shift when it is exceeded.

Conversation Structure

MLC-LLM provides a set of pre-defined conversation templates, which you can use directly by specifying --conv-template [name] when generating the config. Below is an incomplete list of supported conversation templates:

  • llama-2

  • mistral_default

  • chatml

  • phi-2

Please refer to the conversation template definitions in the MLC-LLM codebase for the full list of supported templates and their implementations.
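For example, to generate a config that uses the llama-2 template (paths here are illustrative):

  mlc_llm gen_config ./dist/models/Llama-2-7b-chat-hf \
      --quantization q4f16_1 \
      --conv-template llama-2 \
      -o ./dist/Llama-2-7b-chat-hf-q4f16_1-MLC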

Below is a generic structure of a JSON conversation configuration (we use llama-2 as an example):

// mlc-chat-config.json
{
  // ...
  "conv_template": {
    "name": "llama-2",
    "system_template": "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n ",
    "system_message": "You are a helpful, respectful and honest assistant.",
    "roles": {
      "user": "[INST]",
      "assistant": "[/INST]",
      "tool": "[INST]"
    },
    "role_templates": {
      "user": "{user_message}",
      "assistant": "{assistant_message}",
      "tool": "{tool_message}"
    },
    "messages": [],
    "seps": [
      " "
    ],
    "role_content_sep": " ",
    "role_empty_sep": " ",
    "stop_str": [
      "[INST]"
    ],
    "stop_token_ids": [
      2
    ],
    "function_string": "",
    "use_function_calling": false
  }
}

name
Name of the conversation template.


system_template
The system prompt template. It optionally contains the {system_message} placeholder, which will be replaced with the system message below.


system_message
The content of the system prompt (without the template format).


system_prefix_token_ids
The system token ids to be prepended at the beginning of the tokenized prompt.


roles
The conversation roles, mapping each role (user, assistant, tool) to the string that denotes it in the prompt.


role_templates
The message template for each role. It optionally contains the default message placeholders (e.g. {user_message}), which are replaced with the actual message content.


messages
The conversation history messages. Each message is a pair of strings denoting (role, content); the content can be None.
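For example, to pre-seed the conversation with a one-turn history (an illustrative snippet, not part of the default template):

  "messages": [
    ["user", "What is the capital of France?"],
    ["assistant", "The capital of France is Paris."]
  ]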


seps
An array of strings indicating the separators to be used after a user message and a model message, respectively.


role_content_sep
The separator between the role and the content in a message.


role_empty_sep
The separator between the role and empty content.


stop_str
An array of strings; when one of them is encountered, the model stops generating output.


stop_token_ids
A list of token IDs that act as stop tokens.


function_string
The function calling string.


use_function_calling
Whether function calling is used; helps check the output message format in API calls.

Given a conversation template, the corresponding prompt is generated by stitching the fields above together.
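A simplified sketch of that assembly (illustrative only; the actual MLC-LLM implementation handles more cases, such as system_prefix_token_ids and function calling):

  def build_prompt(conv):
      # Simplified: system prompt first, then each history message with
      # its role tag, content separator, and turn separator.
      system = conv["system_template"].format(system_message=conv["system_message"])
      parts = [system]
      for i, (role, content) in enumerate(conv["messages"]):
          role_str = conv["roles"][role]
          sep = conv["seps"][i % len(conv["seps"])]
          if content is None:
              # An empty turn: generation starts from here.
              parts.append(role_str + conv["role_empty_sep"])
          else:
              parts.append(role_str + conv["role_content_sep"] + content + sep)
      return "".join(parts)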