PyTorch Hooks
=============

.. module:: structured_stochasticity.hooks

This module provides the core mechanism for injecting noise into a model's
internal representations via PyTorch's forward-hook system.

Hook Configuration
------------------

.. autoclass:: HookConfig
   :members:
   :show-inheritance:

Hidden State Hook
-----------------

.. autoclass:: HiddenStateHook
   :members:
   :show-inheritance:

Noisy Inference Wrapper
-----------------------

.. autoclass:: NoisyInferenceWrapper
   :members:
   :show-inheritance:

This is the main interface for running experiments. It handles:

- Identifying target layers in different model architectures
- Registering and removing hooks
- Generating multiple trajectories with different noise samples
- Aggregating results

Illustrative sketches of trajectory aggregation, the underlying hook
mechanism, and layer discovery follow the usage example below.

**Supported Architectures:**

- LLaMA / Llama 2 / Llama 3
- Mistral
- GPT-2 / GPT-Neo
- OPT
- Falcon
- Phi
- Qwen
- Gemma

**Example Usage:**

.. code-block:: python

    from transformers import AutoModelForCausalLM, AutoTokenizer

    from structured_stochasticity.hooks import NoisyInferenceWrapper

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
    input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

    wrapper = NoisyInferenceWrapper(
        model,
        injection_layers=[0, 1, 2],
        noise_scale=0.1,
    )

    # Generate 5 trajectories, each with an independent noise sample
    outputs = wrapper.generate_trajectories(input_ids, k=5)
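**Aggregating Trajectories:**

How :class:`NoisyInferenceWrapper` aggregates results internally is described
on the class above. As a minimal sketch, assuming ``outputs`` is a sequence of
``k`` generated token-id tensors (one per trajectory), a majority vote over
the decoded completions might look like this:

.. code-block:: python

    from collections import Counter

    # Assumption: ``outputs`` holds k token-id tensors of shape (1, seq_len),
    # one per noise trajectory (see generate_trajectories above).
    texts = [
        tokenizer.decode(ids[0], skip_special_tokens=True) for ids in outputs
    ]

    # One simple aggregation: majority vote over the decoded completions.
    text, count = Counter(texts).most_common(1)[0]
    print(f"{count}/{len(texts)} trajectories agree on: {text!r}")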
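**How the Injection Works:**

The actual injection logic lives in :class:`HiddenStateHook`; the sketch below
shows only the bare PyTorch forward-hook mechanism it builds on. The helper
``make_noise_hook``, the Gaussian noise model, and the LLaMA-style
``model.model.layers`` path are illustrative assumptions, not this library's
API:

.. code-block:: python

    import torch

    def make_noise_hook(noise_scale: float):
        """Build a forward hook that perturbs a layer's hidden states."""

        def hook(module, inputs, output):
            # Hugging Face decoder layers usually return a tuple whose
            # first element is the hidden-state tensor.
            hidden = output[0] if isinstance(output, tuple) else output
            noisy = hidden + noise_scale * torch.randn_like(hidden)
            return (noisy,) + output[1:] if isinstance(output, tuple) else noisy

        return hook

    # Register on one decoder layer and keep the handle so the hook can be
    # removed once generation finishes.
    handle = model.model.layers[0].register_forward_hook(make_noise_hook(0.1))
    try:
        noisy_output = model.generate(input_ids, max_new_tokens=20)
    finally:
        handle.remove()

Returning a value from a forward hook replaces the layer's output, which is
what lets the perturbation propagate through all downstream layers.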
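**Locating Decoder Layers:**

Layer resolution differs across the architectures listed above, and the
wrapper handles it internally. A hand-rolled version might simply probe the
attribute paths used by common Hugging Face model families (the function
``find_decoder_layers`` is a hypothetical helper, not part of this module):

.. code-block:: python

    def find_decoder_layers(model):
        """Probe common attribute paths for a model's list of decoder blocks."""
        candidate_paths = (
            "model.layers",          # LLaMA / Mistral / Phi / Qwen / Gemma
            "transformer.h",         # GPT-2 / GPT-Neo / Falcon
            "model.decoder.layers",  # OPT
        )
        for path in candidate_paths:
            obj = model
            try:
                for attr in path.split("."):
                    obj = getattr(obj, attr)
                return obj
            except AttributeError:
                continue
        raise ValueError(f"Unsupported architecture: {type(model).__name__}")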