A lightweight setup to ingest news, build a FAISS vector store, retrieve context via RAG, and (optionally) run a multi-agent research + paper-trading workflow.
- Install dependencies
```powershell
C:/Users/Admin/Desktop/Projects/FinHubPortfolio/.venv/Scripts/python.exe -m pip install -U -r requirements.txt
```

- Ingest news for a ticker (creates `rawdata/<TICKER>_articles_YYYYMMDD.csv`)
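The command below writes one dated CSV per ticker. As a sketch, that naming pattern could come from a small helper like the following (the helper name is hypothetical, not the script's actual code):

```python
from datetime import date
from pathlib import Path

def articles_path(ticker: str, day: date) -> Path:
    # Mirrors the rawdata/<TICKER>_articles_YYYYMMDD.csv pattern.
    return Path("rawdata") / f"{ticker.upper()}_articles_{day:%Y%m%d}.csv"
```

For example, `articles_path("aapl", date(2025, 1, 6))` yields `rawdata/AAPL_articles_20250106.csv`.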
```powershell
C:/Users/Admin/Desktop/Projects/FinHubPortfolio/.venv/Scripts/python.exe "scripts/ingest_news.py" AAPL --months_back 3 --min_articles 60
```

- Run a RAG query (optionally synthesize an answer with an LLM)
```powershell
# Context only
C:/Users/Admin/Desktop/Projects/FinHubPortfolio/.venv/Scripts/python.exe "scripts/rag_query.py" AAPL "market risks and regulation" --out output_context.md

# With LLM synthesis (requires OPENAI_API_KEY)
$env:OPENAI_API_KEY = "<your_api_key>"
C:/Users/Admin/Desktop/Projects/FinHubPortfolio/.venv/Scripts/python.exe "scripts/rag_query.py" AAPL "How do current risks affect pricing power?" --llm gpt-4o-mini
```

- Multi-agent research flow (optional)
```powershell
# Runs ingestion → quality → features → model → risk → policy → paper execution → narrative
C:/Users/Admin/Desktop/Projects/FinHubPortfolio/.venv/Scripts/python.exe "scripts/run_research.py"
```

- `src/rag/retriever.py`: RAG retriever over local news using FAISS + sentence-transformers
- `scripts/ingest_news.py`: Ingest Yahoo Finance articles into `rawdata/`
- `scripts/rag_query.py`: Retrieve top-k context and optionally synthesize an answer
- `scripts/run_research.py`: Bridge into the CrewAI research flow (kept separate)
- `Multi Agent Assistant/crew_projects/`: Existing multi-agent flows/tools (optional)
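The retriever is described as FAISS + sentence-transformers, and its core idea reduces to embed-then-nearest-neighbour search. A dependency-light sketch of the top-k step using plain NumPy cosine similarity (FAISS replaces this brute-force scan with an index; the names here are illustrative, not the module's API):

```python
import numpy as np

def top_k_cosine(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> list[int]:
    # Normalise so the dot product equals cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    # Indices of the k most similar documents, best first.
    return np.argsort(-scores)[:k].tolist()
```

With each row of `doc_vecs` holding an article-chunk embedding, this returns the rows whose text should be stuffed into the prompt; a FAISS `IndexFlatIP` over normalised vectors gives the same ranking at scale.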
- Embeddings use `langchain-huggingface` to avoid deprecation warnings.
- For LLM synthesis, install `langchain-openai` and set `OPENAI_API_KEY`.
- You can safely use RAG without any multi-agent components.
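The optional research flow above chains stages (ingestion → quality → features → model → risk → policy → paper execution → narrative). One way to picture the hand-off is a state dict threaded through stage functions; this is only an illustration of the shape, not `run_research.py`'s actual implementation (the real stages are CrewAI agents/tasks):

```python
from functools import reduce

# Toy stand-ins for the first few stages; each extends the shared state.
def ingestion(state): return {**state, "articles": ["headline A", "headline B"]}
def quality(state):   return {**state, "articles": [a for a in state["articles"] if a.strip()]}
def features(state):  return {**state, "n_docs": len(state["articles"])}

STAGES = [ingestion, quality, features]  # ... model, risk, policy, execution, narrative

def run_flow(ticker: str) -> dict:
    # Each stage receives the accumulated state and returns an extended copy.
    return reduce(lambda state, stage: stage(state), STAGES, {"ticker": ticker})
```

The point of the dict-threading design is that any stage can be dropped or swapped (e.g. running RAG-only, per the note above) without the others changing.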