A fine-tuned Llama 3B model for stock market sentiment analysis and prediction.
This project fine-tunes a Llama 3B model using LoRA to predict stock market movements (Bullish/Bearish) based on trend data and news sentiment.
- Base Model: Llama 3B (4-bit quantized)
- Fine-tuning: LoRA with MLX
- Training Data: Stock news with trend and sentiment labels
.
├── data/ # Training data
│ ├── train.jsonl # Training examples
│ └── valid.jsonl # Validation examples
├── clean_adapters/ # Fine-tuned LoRA adapters
├── fused_clean_f16/ # Merged model (FP16)
├── Modelfile # Ollama configuration
└── README.md
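MLX-LM's LoRA trainer reads JSONL files from the data directory, one example per line. The exact schema used in `data/train.jsonl` isn't shown here; as a hypothetical illustration, a record in the common `{"text": ...}` format could look like this:

```python
import json

# Hypothetical training record -- the actual fields in data/train.jsonl may differ.
# mlx_lm's LoRA trainer accepts JSONL where each line carries a "text" field
# containing the full prompt-plus-label string.
record = {
    "text": (
        "Stock: NVDA. Trend: +2%. Sentiment: Strong AI chip demand. "
        "Prediction: Bullish"
    )
}

line = json.dumps(record)  # one line of train.jsonl
print(line)
```

The trend, sentiment, and `Prediction:` label are packed into a single string so the model learns to complete the prediction from the preceding context.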
# Install dependencies
pip install mlx-lm transformers huggingface_hub
# Fine-tune the model
mlx_lm lora \
--model ./models/llama-3b-4bit \
--train \
--data ./data \
--iters 600 \
--batch-size 4 \
--learning-rate 4e-5 \
--adapter-path ./clean_adapters

# Create Ollama model
ollama create stock-expert -f Modelfile
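The `Modelfile` referenced above isn't reproduced here; a minimal sketch, assuming it points Ollama at the merged FP16 model in `fused_clean_f16/` (the system prompt and temperature below are illustrative, not the project's actual settings):

```
FROM ./fused_clean_f16
SYSTEM "You are a stock market analyst. Classify the given stock news as Bullish or Bearish."
PARAMETER temperature 0.2
```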
# Run inference
ollama run stock-expert "NVDA is up 2% with strong AI chip demand"

# Load the fine-tuned model from the Hugging Face Hub
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("your-username/stock-expert-model")
tokenizer = AutoTokenizer.from_pretrained("your-username/stock-expert-model")
# Generate prediction
prompt = "Stock: NVDA. Trend: +2%. Sentiment: Strong AI chip demand."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0]))

License: MIT