ls_mlkit.model.decoder_tf package¶
Module contents¶
- class ls_mlkit.model.decoder_tf.CausalLanguageModel(vocab_size, embed_dim, num_head, dropout=0, num_block=3, max_pos_len=5000, batch_first=True)[source]¶
Bases: Module
- forward(x: Tensor, att_mask: Tensor = None, key_padding_mask: Tensor = None, need_weights: bool = True, average_attn_weights: bool = True, use_cache: bool = False, past_key_values: Tensor = None, is_causal: bool = True, need_hidden_states: bool = False)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
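A minimal usage sketch (the hyperparameter values and input shapes are illustrative assumptions, not defaults taken from the source):

    import torch

    from ls_mlkit.model.decoder_tf import CausalLanguageModel

    model = CausalLanguageModel(vocab_size=32000, embed_dim=1024, num_head=2)

    # With batch_first=True the expected input layout is (batch, seq_len).
    input_ids = torch.randint(0, 32000, (4, 16))
    outputs = model(input_ids, is_causal=True)  # return structure is model-specific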
- class ls_mlkit.model.decoder_tf.CausalLanguageModelConfig(vocab_size=32000, embed_dim=1024, num_head=2, dropout=0, num_block=3, max_pos_len=5000, batch_first=True, **kwargs)[source]¶
Bases: object
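For reference, the plain (non-Hugging-Face) config is an ordinary object holding the constructor arguments; a minimal sketch using values from the signature above:

    from ls_mlkit.model.decoder_tf import CausalLanguageModelConfig

    config = CausalLanguageModelConfig(vocab_size=32000, embed_dim=1024, num_head=2, num_block=3)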
- class ls_mlkit.model.decoder_tf.CausalLanguageModelConfigForAuto(vocab_size=30000, embed_dim=1024, num_head=2, dropout=0, num_block=3, max_pos_len=5000, batch_first=True, **kwargs)[source]¶
Bases: PretrainedConfig
- model_type: str = 'D-TF-no-PE'¶
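Because model_type is set, this config can plug into the Hugging Face Auto machinery. A sketch of the standard transformers registration pattern; that the package is meant to be registered this way is an assumption, not documented behavior:

    from transformers import AutoConfig, AutoModelForCausalLM

    from ls_mlkit.model.decoder_tf import (
        CausalLanguageModelConfigForAuto,
        CausalLanguageModelForAuto,
    )

    # Key the custom classes on the model_type string declared above.
    AutoConfig.register("D-TF-no-PE", CausalLanguageModelConfigForAuto)
    AutoModelForCausalLM.register(CausalLanguageModelConfigForAuto, CausalLanguageModelForAuto)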
- class ls_mlkit.model.decoder_tf.CausalLanguageModelForAuto(config: CausalLanguageModelConfigForAuto)[source]¶
Bases: PreTrainedModel, GenerationMixin
- base_model_prefix = 'zls_causal_tf'¶
- config_class¶
alias of CausalLanguageModelConfigForAuto
- forward(input_ids: LongTensor = None, attention_mask: Tensor | None = None, output_attentions: bool | None = True, average_attn_weights: bool = True, position_ids: LongTensor | None = None, past_key_values=None, inputs_embeds: FloatTensor | None = None, labels: LongTensor | None = None, use_cache: bool | None = False, output_hidden_states: bool | None = None, return_dict: bool | None = None, cache_position: LongTensor | None = None)[source]¶
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
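A minimal training-style call, assuming the usual Hugging Face causal-LM convention that passing labels makes the forward pass compute the language-modeling loss (the signature accepts labels, but the exact return structure is not shown in the source):

    import torch

    from ls_mlkit.model.decoder_tf import (
        CausalLanguageModelConfigForAuto,
        CausalLanguageModelForAuto,
    )

    config = CausalLanguageModelConfigForAuto(vocab_size=30000, embed_dim=1024, num_head=2)
    model = CausalLanguageModelForAuto(config)

    input_ids = torch.randint(0, 30000, (2, 8))
    attention_mask = torch.ones_like(input_ids)

    # labels=input_ids follows the HF convention of shifting labels internally;
    # this is an assumption, not confirmed by the signature alone.
    outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=input_ids)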
- get_input_embeddings()[source]¶
Returns the model’s input embeddings.
- Returns:
A torch module mapping vocabulary to hidden states.
- Return type:
nn.Module
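Continuing from the model instance above, the returned embedding module can be applied directly to token ids:

    embeddings = model.get_input_embeddings()
    hidden = embeddings(input_ids)  # (batch, seq_len, embed_dim)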
- prepare_inputs_for_generation(input_ids, past_key_values=None, attention_mask=None, **kwargs)[source]¶
Prepare the model inputs for generation. Notable steps include:
- selecting the correct input key and cloning it when appropriate;
- creating position_ids from the attention_mask when missing;
- slicing inputs and converting 2D attention masks to 4D for compilable caches;
- forwarding all additional keyword arguments unchanged to the model’s forward pass.
See the forward pass in the model documentation for expected arguments (different models might have different requirements for e.g. past_key_values). This function should work as is for most LLMs.
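In practice this method is invoked by GenerationMixin.generate() rather than called directly. A sketch continuing from the instance above (the generation arguments are illustrative):

    # generate() drives prepare_inputs_for_generation() under the hood.
    generated = model.generate(
        input_ids,
        attention_mask=attention_mask,
        max_new_tokens=20,
        do_sample=False,
    )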