# Destroy the Normal #
The code walkthrough was recorded; you can access the recording by clicking on me.
Welcome! This project is designed to teach you the fundamentals of Flask (a Python web framework) by building something actually useful: a Job Application Tracker.
This project is for every second-year student: it teaches you the fundamentals of Flask and how to connect each of these libraries with the rest in a single project, which, in the end, is at least the goal of the Advanced Programming Languages course. Do your best and trust in God, my friend; insha'Allah this repo helps you pass the practical exam 🙂
What does this app do?
- Scrapes job data (Python Developer roles in Cairo) from LinkedIn.
- Stores that data in a local SQLite database.
- Displays the jobs on a dashboard.
- Allows you to update the status (Applied, Interview, Offer) and add notes.
Before we start, you need to install the necessary libraries. Open your terminal and run:

```bash
pip install flask flask_sqlalchemy requests beautifulsoup4 pandas
```

🇪🇬 In plain terms:
Look, before we build the site, we need data to work with. Instead of entering jobs by hand, we'll use requests and BeautifulSoup to go to LinkedIn and scrape the jobs we want automatically. That leaves us with a complete list holding the data.
🇬🇧 Explanation: This script connects to LinkedIn, searches for jobs, and parses the HTML to extract details.
```python
import requests
from bs4 import BeautifulSoup as bs
import re

# 1. Define Search Parameters
Title = "Python Developer"
City = "Cairo"
Title_re = re.sub(r"\s", "%20", Title)  # URL-encode spaces (urllib.parse.quote also works)

# 2. Fetch the Search Page
list_url = f"https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search?keywords={Title_re}&location={City}&start=25"
resp = requests.get(list_url)

# 3. Parse HTML to find Job IDs
soup = bs(resp.text, "html.parser")
page_jobs = soup.find_all("li")
id_list = []
for job in page_jobs:
    base_card_div = job.find("div", {"class": "base-card"})
    if base_card_div:  # skip list items that are not job cards
        job_id = base_card_div.get("data-entity-urn").split(":")[3]
        id_list.append(job_id)

# 4. Visit each Job ID and Extract Details
job_list = []
for job_id in id_list:
    # ... (Requests code to fetch individual job details) ...
    # Stores results in a job_post dictionary
    # Appends the dictionary to job_list
    pass  # (Simplified for the README)
```

- `requests.get()`: goes to a URL and downloads the page content.
- `BeautifulSoup`: takes that raw HTML content and turns it into an object we can search through (like finding all `<div>` tags).
- Result: at the end, `job_list` is a Python list containing dictionaries of job data.
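The elided step 4 fetches each posting's own page and parses out the fields. Below is a minimal sketch of the parsing half only; the CSS class names, the sample HTML, and the dictionary keys are illustrative assumptions, not LinkedIn's guaranteed markup, which changes over time.

```python
from bs4 import BeautifulSoup as bs

# A tiny stand-in for the HTML a job-detail request might return
# (hypothetical markup; real LinkedIn pages differ).
sample_html = """
<div class="top-card-layout__card">
  <a class="topcard__org-name-link">Acme Corp</a>
  <h2 class="top-card-layout__title">Python Developer</h2>
</div>
"""

soup = bs(sample_html, "html.parser")
job_post = {
    "company_name": soup.find("a", {"class": "topcard__org-name-link"}).get_text(strip=True),
    "job_title": soup.find("h2", {"class": "top-card-layout__title"}).get_text(strip=True),
}
print(job_post)  # {'company_name': 'Acme Corp', 'job_title': 'Python Developer'}
```

The same `find(...).get_text(strip=True)` pattern works for any field, as long as you inspect the real page first to find the right tag and class.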
This file, `app.py`, is the brain of the project. We will break it down into blocks.
🇪🇬 In plain terms:
Here we start the engine (the Flask app). We have to tell the application that we'll use a database called `job.db`. SQLAlchemy is what lets us talk to the database from Python without writing complicated SQL code.
🇬🇧 Explanation: We create the Flask app instance and configure the connection to the SQLite database.
```python
from flask import Flask, render_template, request, redirect, url_for
from flask_sqlalchemy import SQLAlchemy as sql
from scraping import job_list  # Importing the data we scraped earlier

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///job.db'  # Where the DB file lives
db = sql(app)  # Linking the DB to the App
```

🇪🇬 In plain terms:
For the database to know what it will store, we have to build a "Model". The model is the "design" or "template" of the table. Every `Job` will have an `id`, a company name `company`, a status `status`, and so on.
🇬🇧 Explanation: In Flask-SQLAlchemy, a Model is a Python class that represents a table in your database.
```python
class Job(db.Model):
    id = db.Column(db.Integer, primary_key=True)  # Unique ID for every job
    company = db.Column(db.String(100), nullable=False)
    position = db.Column(db.String(150), nullable=False)
    # Tracking fields we will update later
    status = db.Column(db.String(50), default='Scraped')
    notes = db.Column(db.Text, nullable=True)

    def __repr__(self):
        return f'<Job {self.position} at {self.company}>'
```

🇪🇬 In plain terms:
The code above is just definitions. The database itself has not actually been created on the hard disk yet. To create the `job.db` file and its tables, you have to run a special command in the terminal.
🇬🇧 Explanation:
Writing the `Job` class doesn't create the table. You need to run a command to generate the `job.db` file based on your code.
How to run this in Terminal (do this once):
- Open your terminal in the project folder.
- Enter the Flask shell:

```bash
flask shell
```

- Inside the shell, run:

```python
from app import db
db.create_all()
exit()
```

Now you will see an instance/job.db or job.db file appear in your folder.
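For intuition, `db.create_all()` boils down to `CREATE TABLE` statements generated from your models. A rough stdlib-only sketch of what it produces for the `Job` model (column types approximated, using an in-memory database so nothing is written to disk):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use "job.db" to get a real file
conn.execute("""
    CREATE TABLE IF NOT EXISTS job (
        id       INTEGER PRIMARY KEY,
        company  VARCHAR(100) NOT NULL,
        position VARCHAR(150) NOT NULL,
        status   VARCHAR(50) DEFAULT 'Scraped',
        notes    TEXT
    )
""")
# sqlite_master is SQLite's catalog of everything in the database
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['job']
```

SQLAlchemy does exactly this kind of work for you, which is why you never write the SQL by hand in this project.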
🇪🇬 In plain terms:
Now we have an empty database, and we have the `job_list` from the scraper. This function checks whether each job already exists by comparing (company + job title). If it's new, it adds it to the database.
🇬🇧 Explanation: This function loops through the scraped list. It checks if the job exists to avoid duplicates. If it's new, it adds it to the session and commits (saves) it.
```python
def load_scraped_jobs():
    for job_data in job_list:
        # Check for duplicates
        existing_job = Job.query.filter_by(
            company=job_data.get('company_name'),
            position=job_data.get('job_title')
        ).first()
        if not existing_job:
            new_job = Job(
                company=job_data.get('company_name'),
                position=job_data.get('job_title'),
                # ... other fields
            )
            db.session.add(new_job)  # Stage the change
    db.session.commit()  # Save changes permanently
```

🇪🇬 In plain terms:
A route is what links a URL (like `/` or `/update`) to the function it runs.
- `index`: fetches all the jobs, computes the statistics (how many offers, how many interviews), and sends them to the HTML.
- `update`: receives data from the form (via a POST request) and changes the job's status in the database.
🇬🇧 Explanation:
The Home Page (/):
Fetches all jobs and calculates counts for the dashboard cards.
```python
@app.route('/')
def index():
    jobs = Job.query.order_by(Job.id).all()  # Get all jobs
    # Calculate Stats
    total_jobs = Job.query.count()
    offers_count = Job.query.filter_by(status='Offer').count()
    # Send data to HTML (pass the remaining stats the same way)
    return render_template('index.html', jobs=jobs, total_jobs=total_jobs)
```

The Update Page (`/update/<id>`):
Handles both showing the form (GET) and processing the form submission (POST).
```python
@app.route('/update/<int:job_id>', methods=['GET', 'POST'])
def update_job(job_id):
    job = Job.query.get_or_404(job_id)  # Find the job or show a 404 error
    if request.method == 'POST':
        # Update fields from form data
        job.status = request.form.get('status')
        job.notes = request.form.get('notes')
        db.session.commit()  # Save updates
        return redirect(url_for('index'))  # Go back home
    return render_template('update_job.html', job=job)
```

🇪🇬 In plain terms:
You already know this HTML, but to display the data coming from Python inside the HTML we use something called Jinja2. It lets us write code like `{% for %}` inside the HTML so we can repeat the rows of the table.
🇬🇧 Explanation: We don't explain standard HTML tags here, only the Flask-specific syntax.
This file holds the Navbar and the standard structure. Other pages "inherit" from this.
`{% block content %}`: This creates a placeholder hole that other pages will fill with their specific content.

- Looping:

```html
{% for job in jobs %}
<tr>
  <td>{{ job.company }}</td>
  <td>{{ job.status }}</td>
</tr>
{% endfor %}
```

This automatically creates a table row for every single job found in the database.

- Logic (If Statements):

```html
{% if job.status == 'Offer' %}
<span class="badge bg-success">Offer</span>
{% endif %}
```

This changes the color of the badge based on the job status.

- Dynamic Actions: when the `<form method="POST">` is submitted, it sends data back to the Python function `update_job`.
- Pre-filling values: `value="{{ job.notes }}"` ensures the form isn't empty; it shows what you previously wrote.
- Ensure you created the database (see "Critical Step" above).
- Run the app:

```bash
python app.py
```

- Open your browser and go to: http://127.0.0.1:5000
Mabrouk! 🎉 You now have a working Job Tracker.