Mish-Hackers-SCU/CSD342-advanced-py-2026

In the name of Allah, the Most Gracious, the Most Merciful

#๐”‡๐”ข๐”ฐ๐”ฑ๐”ฏ๐”ฌ๐”ถ ๐”ฑ๐”ฅ๐”ข ๐”‘๐”ฌ๐”ฏ๐”ช๐”ž๐”ฉ#

A video walkthrough of the code was recorded; the recording is linked from this line in the original repository page.

🎓 Flask Job Tracker & Scraper Project

Welcome! This project is designed to teach you the fundamentals of Flask (a Python web framework) by building something actually useful: a Job Application Tracker.

(Translated from Egyptian Arabic:) The goal of this project, for you as a third-year student, is to teach you the fundamentals of Flask and how to connect your understanding of this library with the other libraries in a single project — which is, first and last, the goal of the Advanced Programming Languages course. So put your trust in God, my friend, and hopefully this repo helps you ace the practical exam, God willing 😀

What does this app do?

  1. Scrapes job data (Python Developer roles in Cairo) from LinkedIn.
  2. Stores that data in a local SQLite database.
  3. Displays the jobs on a dashboard.
  4. Allows you to update the status (Applied, Interview, Offer) and add notes.

🛠️ Prerequisites & Setup (the tools of the trade)

Before we start, you need to install the necessary libraries. Open your terminal and run:

pip install flask flask_sqlalchemy requests beautifulsoup4 pandas

Part 1: The Scraper (scraping.py)

🇪🇬 In Egyptian Arabic (translated):

Look, before we build the site, we need some data to work with. Instead of entering jobs by hand, we will use requests and BeautifulSoup to go to LinkedIn and automatically scrape the jobs we want. That way we end up with a list we can fill with data.

🇬🇧 Explanation: This script connects to LinkedIn, searches for jobs, and parses the HTML to extract details.

import requests
from bs4 import BeautifulSoup as bs
import re

# 1. Define Search Parameters
Title = "Python Developer"
City = "Cairo"
Title_re = re.sub(r"\s", "%20", Title) # Clean text for URL

# 2. Fetch the Search Page
list_url = f"https://www.linkedin.com/jobs-guest/jobs/api/seeMoreJobPostings/search?keywords={Title_re}&location={City}&start=25"
resp = requests.get(list_url, timeout=10) # unauthenticated requests may be rate-limited or blocked

# 3. Parse HTML to find Job IDs
soup = bs(resp.text, "html.parser")
page_jobs = soup.find_all("li")

id_list = []
for job in page_jobs:
    base_card_div = job.find("div", {"class": "base-card"})
    if base_card_div is None:  # skip <li> items that are not job cards
        continue
    urn = base_card_div.get("data-entity-urn")  # e.g. "urn:li:jobPosting:<id>"
    if urn:
        job_id = urn.split(":")[3]
        id_list.append(job_id)

# 4. Visit each Job ID and Extract Details
job_list = []
for job_id in id_list:
    # ... (Requests code to fetch individual job details) ...
    # Stores results in job_post dictionary
    # Appends dictionary to job_list
    pass # (Simplified for readme)
  • requests.get(): Go to a URL and download the page content.
  • BeautifulSoup: Takes that raw HTML content and turns it into an object we can search through (like finding all <div> tags).
  • Result: At the end, job_list is a Python list containing dictionaries of job data.
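Two of the small string tricks above deserve a closer look: URL-encoding the search term and slicing the job ID out of the data-entity-urn attribute. Here is a stdlib-only sketch of both (the URN below is a made-up sample value, not real LinkedIn data):

```python
from urllib.parse import quote

# Same effect as re.sub(r"\s", "%20", Title) above, done the stdlib way
Title = "Python Developer"
Title_re = quote(Title)  # -> "Python%20Developer"

def extract_job_id(urn: str) -> str:
    """Slice the numeric ID out of a LinkedIn entity URN.
    Sample (hypothetical) value: "urn:li:jobPosting:3872541920" -> "3872541920"
    """
    return urn.split(":")[3]
```

quote() also handles characters beyond spaces (e.g. "&" or "#"), which the bare regex substitution would miss.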

Part 2: The Flask Application (app.py)

This is the brain of the project. We will break it down into blocks.

1. Initialization & Database Setup

🇪🇬 In Egyptian Arabic (translated):

Here we start the engine (the Flask app). We also tell the application that we will use a database called job.db. SQLAlchemy is what lets us talk to the database from Python without writing complicated SQL code.

🇬🇧 Explanation: We create the Flask app instance and configure the connection to the SQLite database.

from flask import Flask, render_template, request, redirect, url_for
from flask_sqlalchemy import SQLAlchemy as sql
from scraping import job_list # Importing the data we scraped earlier

app = Flask(__name__) 
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///job.db' # Where the DB file lives
db = sql(app) # Linking the DB to the App

2. The Database Model (The Schema)

🇪🇬 In Egyptian Arabic (translated):

For the database to understand what it will store, we have to define a "Model". The model is the design or template of the table. Every Job will have an id, a company name (company), a status, and so on.

🇬🇧 Explanation: In Flask-SQLAlchemy, a Model is a Python class that represents a table in your database.

class Job(db.Model):
    id = db.Column(db.Integer, primary_key=True) # Unique ID for every job
    company = db.Column(db.String(100), nullable=False)
    position = db.Column(db.String(150), nullable=False)
    
    # Tracking fields we will update later
    status = db.Column(db.String(50), default='Scraped') 
    notes = db.Column(db.Text, nullable=True)

    def __repr__(self):
        return f'<Job {self.position} at {self.company}>'
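Under the hood, Flask-SQLAlchemy turns this class into an ordinary SQL table. For intuition, here is roughly the equivalent using only the stdlib sqlite3 module — the exact DDL SQLAlchemy emits may differ slightly, and "Acme" is just a made-up company for the demo:

```python
import sqlite3

# Roughly the table Flask-SQLAlchemy generates from the Job model above
DDL = """
CREATE TABLE job (
    id       INTEGER PRIMARY KEY,
    company  VARCHAR(100) NOT NULL,
    position VARCHAR(150) NOT NULL,
    status   VARCHAR(50) DEFAULT 'Scraped',
    notes    TEXT
)
"""

conn = sqlite3.connect(":memory:")  # throwaway in-memory DB for the demo
conn.execute(DDL)
conn.execute("INSERT INTO job (company, position) VALUES (?, ?)",
             ("Acme", "Python Developer"))
row = conn.execute("SELECT status FROM job").fetchone()
print(row[0])  # the DEFAULT fires -> Scraped
```

Notice that because the INSERT omitted status, SQLite filled in the 'Scraped' default — the same behaviour `default='Scraped'` gives you in the model.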

🚨 VERY IMPORTANT: Creating the Database

🇪🇬 In Egyptian Arabic (translated):

The code above is just definitions. The database has not actually been created on disk yet. To create the job.db file and its tables, you must run a special command in the terminal.

🇬🇧 Explanation: Writing the class Job doesn't create the table. You need to run a command to generate the job.db file based on your code.

How to run this in Terminal (do this once):

  1. Open your terminal in the project folder.
  2. Enter the Flask Shell:
    flask shell
  3. Inside the shell, run:
    from app import db
    db.create_all()
    exit()

Now you will see an instance/job.db or job.db file appear in your folder.


3. Loading Data (Scraper -> Database)

🇪🇬 In Egyptian Arabic (translated):

Right now the database is empty, and we have the job_list from the scraper. This function checks whether a job already exists by matching (company + job title). If it is new, it adds it to the database.

🇬🇧 Explanation: This function loops through the scraped list. It checks if the job exists to avoid duplicates. If it's new, it adds it to the session and commits (saves) it.

def load_scraped_jobs():
    for job_data in job_list:
        # Check for duplicates
        existing_job = Job.query.filter_by(
            company=job_data.get('company_name'),
            position=job_data.get('job_title')
        ).first()

        if not existing_job:
            new_job = Job(
                company=job_data.get('company_name'),
                position=job_data.get('job_title'),
                # ... other fields
            )
            db.session.add(new_job) # Stage the change

    db.session.commit() # Save changes permanently
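Stripped of the database machinery, the duplicate check above is just "have we seen this (company, position) pair before?". A plain-Python sketch of the same logic — the dictionary keys follow the scraper's assumed field names:

```python
def dedupe_jobs(scraped: list[dict], existing: set[tuple]) -> list[dict]:
    """Keep only scraped jobs whose (company, position) pair is not already
    in `existing` -- the same check Job.query.filter_by(...).first() does."""
    fresh = []
    for job_data in scraped:
        key = (job_data.get("company_name"), job_data.get("job_title"))
        if key not in existing:
            fresh.append(job_data)
            existing.add(key)  # so duplicates within the same batch are skipped too
    return fresh
```

One subtlety the set version makes visible: adding each new key to `existing` also catches duplicates inside a single scrape batch, not just against the database.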

4. The Routes (The Pages)

🇪🇬 In Egyptian Arabic (translated):

A route is what links a URL (like / or /update) to the function that handles it.

  1. index: fetches all the jobs, calculates the statistics (how many offers, how many interviews), and sends them to the HTML.
  2. update: receives data from the form (via a POST request) and changes the job's status in the database.

🇬🇧 Explanation:

The Home Page (/): Fetches all jobs and calculates counts for the dashboard cards.

@app.route('/') 
def index():
    jobs = Job.query.order_by(Job.id).all() # Get all jobs
    
    # Calculate Stats
    total_jobs = Job.query.count()
    offers_count = Job.query.filter_by(status='Offer').count()

    # Send data to HTML
    return render_template('index.html', jobs=jobs, total_jobs=total_jobs, ...)

The Update Page (/update/<id>): Handles both showing the form (GET) and processing the form submission (POST).

@app.route('/update/<int:job_id>', methods=['GET', 'POST'])
def update_job(job_id):
    job = Job.query.get_or_404(job_id) # Find job or show 404 error

    if request.method == 'POST':
        # Update fields from form data
        job.status = request.form.get('status')
        job.notes = request.form.get('notes')
        
        db.session.commit() # Save updates
        return redirect(url_for('index')) # Go back home

    return render_template('update_job.html', job=job)

Part 3: The Frontend (HTML + Jinja2)

🇪🇬 In Egyptian Arabic (translated):

The HTML is the skeleton, but to display the data coming from Python inside the HTML, we use something called Jinja2. It lets us write code like {% for %} inside the HTML to repeat the rows of the table.

🇬🇧 Explanation: We don't explain standard HTML tags here, only the Flask-specific syntax.

1. base.html (The Layout)

This file holds the Navbar and the standard structure. Other pages "inherit" from this.

  • {% block content %}: This creates a placeholder hole that other pages will fill with their specific content.

2. index.html (The Dashboard)

  • Looping:

    {% for job in jobs %}
       <tr>
           <td>{{ job.company }}</td>
           <td>{{ job.status }}</td>
       </tr>
    {% endfor %}

    This automatically creates a table row for every single job found in the database.

  • Logic (If Statements):

    {% if job.status == 'Offer' %}
        <span class="badge bg-success">Offer</span>
    {% endif %}

    This changes the color of the badge based on the job status.

3. update_job.html (The Form)

  • Dynamic Actions:
    <form method="POST">
    When this form is submitted, it sends data back to the Python function update_job.
  • Pre-filling values: value="{{ job.notes }}" ensures the form isn't empty; it shows what you previously wrote.
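To build intuition for what Jinja2's {{ ... }} substitution does at render time, here is a deliberately tiny stdlib-only mimic. This is an illustration, not how Jinja2 is implemented — real Jinja2 additionally handles filters, autoescaping, {% %} logic, and much more:

```python
import re

def render_stub(template: str, context: dict) -> str:
    """Toy mimic of Jinja2's {{ var }} substitution, for intuition only.
    Resolves dotted names like job.notes against a context dict."""
    def repl(match: re.Match) -> str:
        obj = context
        for part in match.group(1).strip().split("."):
            # Follow dict keys or object attributes, as Jinja does
            obj = obj[part] if isinstance(obj, dict) else getattr(obj, part)
        return str(obj)
    return re.sub(r"\{\{(.*?)\}\}", repl, template)
```

For example, render_stub('<td>{{ job.company }}</td>', {"job": {"company": "Acme"}}) fills the placeholder with "Acme" (a made-up value) — the same idea as value="{{ job.notes }}" pre-filling the form.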

🚀 How to Run the App

  1. Ensure you created the database (see "Critical Step" above).
  2. Run the app:
    python app.py
  3. Open your browser and go to: http://127.0.0.1:5000

Mabrouk! 🎉 You now have a working Job Tracker.

About

"Job Tracker" — an educational project explaining the fundamentals of building web applications with Flask, along with a brief introduction to Python classes.
