An end-to-end loan approval prediction application using Logistic Regression to model borrower default risk, with SHAP-based explainability and a Streamlit app for interactive what-if analysis and decision support.
Please click here for the video demo.
✔ Supervised learning for binary classification
✔ Credit risk modelling and loan approval decisioning using Logistic Regression
✔ Feature engineering for financial datasets, including feature creation (Debt-to-Income (DTI) ratio) and categorical encoding (One-Hot encoding)
✔ Model evaluation and interpretation (SHAP explainability)
✔ Handling class imbalance with `class_weight="balanced"`
✔ Persisting models and scalers (pickle) for deployment in the Streamlit application
Financial institutions assess borrower creditworthiness to minimize default risk, but manual or rule-based approaches can be inconsistent and suboptimal. This project aims to build an interpretable, data-driven model that predicts whether a borrower will default on a loan and to operationalize that model in a way that supports consistent, transparent loan approval decisions.
Note:
This Streamlit application is hosted on the free tier of Streamlit Community Cloud. If the app has been idle for more than 12 hours, it may take some time to reactivate. In such cases, please click the button saying “Yes, get this app back up!” to relaunch the application. Thank you for your patience.
A loan dataset from Kaggle is used to model borrower default behavior.
- Load and clean the dataset (drop non-predictive identifier columns such as `Client_ID` and `Gender`).
- Engineer risk-relevant features, notably the Debt-to-Income (DTI) ratio derived from monthly income and repayment amounts.
- Encode `Employment` as dummy variables to represent employment types numerically.
- Use Logistic Regression to predict the binary `Default_Flag` (default = 1 vs non-default = 0) based on financial and demographic features.
- Evaluate performance with multiple classification metrics (F1 score, PR curve, and ROC curve), with particular emphasis on not missing defaulters.
- Build a SHAP explainer to show the specific reasons why an individual’s loan was approved or denied.
- Deploy the final model, scaler, and SHAP explainer in a Streamlit app that accepts user inputs, returns predicted default/repayment probabilities, applies an explicit decision threshold, and visualizes the drivers of each decision through a SHAP waterfall plot.
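For a linear model like Logistic Regression, the per-feature contributions that a SHAP waterfall plot visualizes have a simple closed form (assuming roughly independent features): each feature contributes `coef_j * (x_j - mean_j)` in log-odds space, on top of a base value equal to the model's log-odds at the feature means. A minimal sketch on synthetic data (the real app uses a persisted SHAP explainer; the data here is made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # stand-in for scaled features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Linear-model SHAP values (independent-features assumption):
# contribution of feature j = coef_j * (x_j - mean_j), in log-odds space.
means = X.mean(axis=0)
base_value = model.intercept_[0] + model.coef_[0] @ means   # expected log-odds
x = X[0]                                                    # one applicant
contributions = model.coef_[0] * (x - means)

# Base value plus all contributions recovers this applicant's log-odds,
# which is exactly the decomposition a waterfall plot draws.
log_odds = model.intercept_[0] + model.coef_[0] @ x
assert np.isclose(base_value + contributions.sum(), log_odds)
```

The sign of each contribution tells the applicant whether a feature pushed the decision toward default or toward repayment, which is what makes the waterfall plot useful for explaining individual approvals and declines.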
Deploying an automated, explainable loan approval and credit scoring application delivers tangible business value across lending operations:
- Improved Credit Decision Consistency: Standardized risk scoring removes subjective variation, producing consistent credit decisions that align with internal credit policy.
- Risk Reduction Through Early Default Detection: Helps reduce credit losses by flagging high-risk applicants at application time, rather than recovering losses later through collections or charge-offs.
- Operational Efficiency & Reduced Cycle Times: Automated assessment shortens decision-making from minutes or hours to milliseconds, increasing application throughput and reducing the need for manual underwriting of straightforward cases.
- Portfolio-Level Risk Control via Threshold Adjustment: The default probability threshold offers a tunable risk lever, allowing risk teams to balance approval volume against risk appetite depending on market conditions and strategic objectives.
- Enhanced Transparency & Explainability for Stakeholders: SHAP waterfall plots make each approval or decline interpretable, supporting compliance requirements and model governance.
Logistic Regression is chosen for the following reasons:
- Interpretability and suitability in credit risk settings.
- The model’s coefficients map directly to the direction and strength of each feature’s influence on default vs repayment, which is important for explainability and potential regulatory scrutiny. The DTI ratio is found to be the main driver of whether a loan is likely to default.
- Created DTI from income and repayment to capture leverage and repayment burden.
- One-hot encoded the `Employment` categorical variable, then converted booleans (True/False) into numeric form (1/0). `drop_first=True` is used to avoid multicollinearity among features: when both `Self-Employed` and `Unemployed` equal 0, the applicant is employed.
- Dropped irrelevant columns (`Client_ID` and `Gender`) and redundant raw columns (`Monthly_Income`, `Monthly_Repayment`, and the original `Employment`) once the engineered variables were in place.
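A sketch of these feature-engineering steps with pandas (column names follow the dataset described above; the example rows are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "Monthly_Income": [5000, 3000, 4000],
    "Monthly_Repayment": [1500, 1800, 800],
    "Employment": ["Employed", "Self-Employed", "Unemployed"],
})

# Debt-to-Income ratio: repayment burden relative to income
df["DTI"] = df["Monthly_Repayment"] / df["Monthly_Income"]

# One-hot encode Employment as 1/0 dummies; drop_first=True drops the
# "Employed" category, so Self-Employed = 0 and Unemployed = 0 means employed
df = pd.get_dummies(df, columns=["Employment"], drop_first=True, dtype=int)

# Drop the redundant raw columns now that DTI captures their information
df = df.drop(columns=["Monthly_Income", "Monthly_Repayment"])

print(df)
```

Note that `drop_first=True` removes the alphabetically first category (`Employed`), which becomes the implicit baseline encoded by all dummies being zero.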
- Used StandardScaler to standardize features before training Logistic Regression.
- Logistic Regression benefits from standardized feature variance: scaling features to zero mean and unit variance improves solver stability and makes coefficients more directly comparable across features. This is more suitable here than MinMax scaling, which mainly rescales to a fixed range and is less convenient for interpreting linear model coefficients.
- The fitted scaler is persisted and reused in the application to ensure consistent preprocessing between training and inference.
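A minimal sketch of the fit-once, reuse-everywhere pattern (serialized in memory here with `pickle.dumps` for brevity; the project writes the artifacts to `.pkl` files, and the feature values below are invented):

```python
import pickle

import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy training matrix, e.g. rows of [DTI, Age]
X_train = np.array([[0.30, 35.0], [0.45, 52.0], [0.20, 28.0]])

scaler = StandardScaler().fit(X_train)   # fit on training data only

# Persist the fitted scaler; the Streamlit app loads it back so inference
# applies exactly the same means and variances as training
blob = pickle.dumps(scaler)
loaded = pickle.loads(blob)

x_new = np.array([[0.40, 41.0]])         # one applicant at inference time
assert np.allclose(loaded.transform(x_new), scaler.transform(x_new))
```

Refitting (or re-creating) the scaler at inference time would silently shift every standardized feature, so persisting the fitted object is what guarantees train/serve consistency.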
- Set `class_weight="balanced"` in Logistic Regression to give additional weight to the minority class (defaulters), reducing the risk of a model with high overall accuracy but poor recall on defaults.
- Evaluated the model using recall, F1-score, PR-AUC and ROC-AUC.
- Particular emphasis on recall for defaulters and PR-AUC to ensure genuine defaults are captured with acceptable levels of false positives.
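A sketch of the training and evaluation step on synthetic, imbalanced data (standing in for the Kaggle dataset; feature names and values here are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (average_precision_score, f1_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))
# Imbalanced target: a minority of "defaults", driven mainly by feature 0
y = (rng.random(n) < 1 / (1 + np.exp(-(2 * X[:, 0] - 2)))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

# "balanced" reweights classes inversely to their frequency, so errors on
# the minority defaulter class cost more during fitting
model = LogisticRegression(class_weight="balanced").fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
pred = model.predict(X_te)

print("recall (defaulters):", recall_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
print("PR-AUC:", average_precision_score(y_te, proba))
print("ROC-AUC:", roc_auc_score(y_te, proba))
```

PR-AUC is reported alongside ROC-AUC because, on imbalanced data, precision-recall curves are more sensitive to how well the minority (defaulter) class is actually being captured.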
Instead of using a naïve 50% default probability cutoff, a more conservative decision threshold of 35% default probability is used in the Streamlit app:
- Default probability ≤ 35% → “APPROVED”
- Default probability > 35% → “DECLINED”
This reflects the lender’s risk tolerance and aligns the model with business policy.
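The decision rule itself is tiny; a sketch using the threshold from the policy above (the function name is illustrative, not from the codebase):

```python
THRESHOLD = 0.35  # lender's risk tolerance, stricter than the naive 0.50

def decide(default_probability: float) -> str:
    """Map the model's predicted default probability to a loan decision."""
    return "APPROVED" if default_probability <= THRESHOLD else "DECLINED"

assert decide(0.12) == "APPROVED"
assert decide(0.35) == "APPROVED"   # boundary counts as approved (<= rule)
assert decide(0.50) == "DECLINED"   # would still pass a naive 50% cutoff
```

Keeping the threshold as a single named constant is what makes it the tunable risk lever described earlier: risk teams can tighten or relax it without retraining the model.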
- Data Preparation: Loaded the Kaggle loan dataset; removed non-informative identifiers; handled duplicates and missing values.
- Feature Engineering: Computed DTI and one-hot encoded employment categories.
- Modeling: Split data (70/30), standardized features, and trained `LogisticRegression(class_weight="balanced")`.
- Evaluation: Assessed the model using PR-AUC, ROC-AUC, precision, recall, F1, and the confusion matrix.
- Artifact Persistence: Serialized the model, scaler, and SHAP explainer for deployment.
- Application Deployment: Built a Streamlit app enabling real-time scoring, explainability, and loan approval decisions.
- Explainability: Integrated SHAP for local model attribution and decision transparency.
Carmen Wong




