
Conversation

@Guillaume-Cr Guillaume-Cr commented Nov 27, 2025

What this does

Adds LoRA to SmolVLA, with options to apply LoRA to the LM layers and/or the expert layers.

  • Added the corresponding configs
  • Modified the model init function
  • Updated the set_requires_grad function
  • Used PEFT for a straightforward LoRA implementation, in contrast to Pi0's (which uses JAX and therefore had to reimplement the LoRA modules from scratch)

Note: There is already PR #1411, but it provides less control over which components of the model LoRA is applied to.
Here, for example, we can apply LoRA to the LM layers while still training the full expert model if we wish.
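To make the mechanism concrete, here is a minimal, self-contained sketch of the low-rank update that PEFT-style LoRA adapters perform: the frozen base weight W is augmented by a scaled low-rank product (alpha / r) * B @ A, so only A and B need gradients during fine-tuning. This is an illustration in plain Python, not the PR's actual code; all shapes, values, and function names are hypothetical.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha, r):
    """Compute y = W x + (alpha / r) * B (A x).

    W stays frozen; only the low-rank factors A (r x d) and B (d x r)
    would receive gradients in an actual fine-tuning run.
    """
    base = matmul(W, x)
    low_rank = matmul(B, matmul(A, x))
    scale = alpha / r
    return [[b + scale * l for b, l in zip(brow, lrow)]
            for brow, lrow in zip(base, low_rank)]

# Toy example: d = 2, rank r = 1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity here)
A = [[1.0, 1.0]]               # trainable down-projection, r x d
B = [[0.5], [0.5]]             # trainable up-projection, d x r
x = [[2.0], [3.0]]             # input column vector

y = lora_forward(x, W, A, B, alpha=2.0, r=1)
print(y)  # → [[7.0], [8.0]]
```

Because the adapter adds only 2 * d * r parameters per layer, choosing which layers get an adapter (LM vs. expert, as in this PR) directly controls the trainable-parameter budget.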

How it was tested

Fine-tuned a model using this LoRA option.

How to checkout & try? (for the reviewer)

lerobot-train --lora_on_expert=true

