4 changes: 4 additions & 0 deletions config.toml
@@ -87,3 +87,7 @@ enableRobotsTXT = false
[markup.goldmark]
[markup.goldmark.renderer]
unsafe = true

[taxonomies]
tag = "tags"
author = "authorIds"
content/blog/apache-spark-unleashing-big-data-with-rdds-dataframes-and-beyond.md
@@ -5,6 +5,7 @@
draft: false
featured: true
weight: 1
tags: ["apache-spark", "big-data", "rdds", "dataframes"]
---

## My Journey with Spark
@@ -39,7 +40,7 @@

At a high level, Spark provides several libraries that extend its functionality and are used in specialized data processing tasks.

1. **Spark SQL**: Spark SQL allows users to run SQL queries on large datasets using Spark’s distributed infrastructure. Whether you are working with structured or semi-structured data, Spark SQL makes querying easy, using either SQL syntax or the DataFrame API (for now, imagine a DataFrame as just a table of data, like what you see in Excel; more about DataFrames is discussed [here](#dataframe)).

Check failure (GitHub Actions / lint): content/blog/apache-spark-unleashing-big-data-with-rdds-dataframes-and-beyond.md:43:377 MD059/descriptive-link-text Link text should be descriptive [Context: "[here]"]

2. **MLlib**: It provides distributed algorithms for a variety of machine learning tasks such as classification, regression, clustering, recommendation systems, etc.

@@ -53,7 +54,7 @@

### RDDs

**RDDs (Resilient Distributed Datasets)** are the fundamental building blocks of Spark Core. They represent an immutable, distributed collection of objects that can be processed in parallel across a cluster. More about RDDs is discussed [here](#rdd).

Check failure (GitHub Actions / lint): content/blog/apache-spark-unleashing-big-data-with-rdds-dataframes-and-beyond.md:57:239 MD059/descriptive-link-text Link text should be descriptive [Context: "[here]"]

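To make the RDD and DataFrame ideas above concrete, here is a minimal PySpark sketch (not from the original post; it assumes a local Spark installation and uses made-up data):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdd-vs-dataframe").getOrCreate()

# RDD: an immutable, distributed collection processed in parallel
rdd = spark.sparkContext.parallelize([1, 2, 3, 4, 5])
print(rdd.map(lambda x: x * x).collect())  # [1, 4, 9, 16, 25]

# DataFrame: tabular data, queryable via the DataFrame API or plain SQL
df = spark.createDataFrame([("alice", 34), ("bob", 29)], ["name", "age"])
df.createOrReplaceTempView("people")
spark.sql("SELECT name FROM people WHERE age > 30").show()

spark.stop()
```
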
### DAG Scheduler and Task Scheduler

1 change: 1 addition & 0 deletions content/blog/argo-rollout-aws-alb.md
@@ -5,6 +5,7 @@
draft: false
featured: true
weight: 1
tags: ["argo", "aws", "alb", "kubernetes"]
---

The blog discusses resolving a deployment issue with 502 errors on AWS EKS using AWS ALB and Argo Rollouts. It details the root cause, attempted solutions, and resulting trade-offs.
@@ -116,7 +117,7 @@

When we analyse the AWS ALB controller logs, we can see that an **update operation** request was received at almost the same time, but the actual **ModifyRule** action to change the weight was triggered only about 1 minute and 10 seconds later. The problem, then, is the delay between the lb-controller's update operation request for the weight change and the actual lb-listener rule modification API call.

We discovered that our use of **dynamicStableScale** caused older replica sets to scale down before the load balancer had shifted traffic. This created a lag between the canary weight change and the actual traffic switch at the load balancer, leading to problems. A similar issue is reported [here](https://github.com/kubernetes-sigs/aws-load-balancer-controller/issues/3588).

Check failure (GitHub Actions / lint): content/blog/argo-rollout-aws-alb.md:120:308 MD059/descriptive-link-text Link text should be descriptive [Context: "[here]"]

To address this, we disabled **dynamicStableScale** and increased **scaleDownDelaySeconds** from the default of 30 seconds to 60 seconds, so the rollout waits 60 seconds before scaling down the older ReplicaSet pods.

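For context, a trimmed sketch of how those two settings sit in an Argo Rollouts canary strategy (illustrative only; the rest of the Rollout spec and the ALB traffic-routing configuration are omitted):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
spec:
  strategy:
    canary:
      # do not scale the stable ReplicaSet down in lockstep with the canary weight
      dynamicStableScale: false
      # wait 60s (instead of the 30s default) before scaling down old ReplicaSet pods
      scaleDownDelaySeconds: 60
```
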
1 change: 1 addition & 0 deletions content/blog/aws-ecs-fargate-vs-self-managed-ec2.md
@@ -5,6 +5,7 @@ date: 2024-08-13
draft: false
featured: true
weight: 1
tags: ["aws", "ecs", "fargate", "ec2"]
---

Cooking Up Cloud: Fargate or EC2—Which Kitchen Suits You?
1 change: 1 addition & 0 deletions content/blog/building-a-discord-gpt-bot.md
@@ -8,6 +8,7 @@
sitemap:
changefreq: "monthly"
priority: 1
tags: ["discord", "gpt", "openai", "python"]
---

Amidst the excitement surrounding AI, we were eager to delve into this field ourselves. As engineers, we wanted more than just a casual conversation with ChatGPT—we aimed to understand the intricacies of building AI applications.
@@ -17,14 +18,14 @@

Thus, our first project was born: building a ChatGPT bot for Discord from scratch.

## Step 1 - Setting up a Python virtual environment (optional, but recommended):

Check failure (GitHub Actions / lint): content/blog/building-a-discord-gpt-bot.md:21:79 MD026/no-trailing-punctuation Trailing punctuation in heading [Punctuation: ':']

- Set up a Python virtual environment. Why should you do this? That warrants a whole conversation by itself, but the tl;dr is that it helps you manage different Python versions and project dependencies in a cleaner way.
- I use `conda`; `venv` is a good alternative. Whichever tool you use, remember to install dependencies only inside the virtual environment (a minimal sketch follows below).

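A minimal shell sketch of that setup (illustrative; the `discord.py` and `openai` packages are assumed dependencies for this kind of bot, not something the post has specified at this point):

```bash
# create and activate an isolated environment, then install dependencies inside it
python -m venv .venv
source .venv/bin/activate
pip install discord.py openai   # hypothetical dependencies for a Discord GPT bot
```
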
## Step 2 - Create a simple discord bot

- Go to https://www.discord.com/developers

Check failure (GitHub Actions / lint): content/blog/building-a-discord-gpt-bot.md:28:8 MD034/no-bare-urls Bare URL used [Context: "https://www.discord.com/develo..."]
- Click on New Application; provide a name for it (e.g. - infraspec-gpt-bot)
- Navigate to Settings -> Bot and
- disable `Public Bot` - this restricts the bot from being publicly discovered
1 change: 1 addition & 0 deletions content/blog/building-internal-tools.md
@@ -8,6 +8,7 @@
sitemap:
changefreq: 'monthly'
priority: 1
tags: ["internal-tools", "developer-productivity"]
---


@@ -24,15 +25,15 @@
Since every organization is a unique combination of different perspectives, their systems and patterns also look different from each other at the 10,000 ft level. Because of these different perspectives, they run into unique problems that no off-the-shelf tool can solve.
For example, you may need to build an authorization system for your internal CRM (a security enhancement) where you call hundreds of APIs to fetch and aggregate data from multiple backend systems. Each system has its own contracts and its own take on HTTP verbs and implementation, and trust me, you will be surprised by the variety of HTTP usage that can exist in one system. After considering these ground realities, organizations decide to build an internal tool that solves the problem and also fits into their processes.


Check failure (GitHub Actions / lint): content/blog/building-internal-tools.md:28 MD012/no-multiple-blanks Multiple consecutive blank lines [Expected: 1; Actual: 2]
## Horizontal cutting

Check failure (GitHub Actions / lint): content/blog/building-internal-tools.md:29 MD022/blanks-around-headings Headings should be surrounded by blank lines [Expected: 1; Actual: 0; Below] [Context: "## Horizontal cutting"]
These kinds of problems impact a lot of internal users, from operations to engineers, so there will be multiple key stakeholders from an impact and collaboration point of view. What you are dealing with is legacy and a process the organization has already adopted.

## Ownership

Check failure (GitHub Actions / lint): content/blog/building-internal-tools.md:32 MD022/blanks-around-headings Headings should be surrounded by blank lines [Expected: 1; Actual: 0; Below] [Context: "## Ownership"]
These are legacy systems and processes, and your job is to build a tool that addresses them. Since they are mostly leaky processes and tech debt, it is highly likely you will not have shared ownership between the product manager, programme manager, and engineer.


Check failure (GitHub Actions / lint): content/blog/building-internal-tools.md:35 MD012/no-multiple-blanks Multiple consecutive blank lines [Expected: 1; Actual: 2]
## Discovery

Check failure (GitHub Actions / lint): content/blog/building-internal-tools.md:36 MD022/blanks-around-headings Headings should be surrounded by blank lines [Expected: 1; Actual: 0; Below] [Context: "## Discovery"]
As I said, this is the post-PMF phase. Most organizations, focused mainly on business growth, miss that these problems will hit once the product scales. So think of a problem that has been compounding over time on the back burner since the org existed and suddenly gets prioritized. Most likely there is no single person you can interview to understand all the use cases and the problem statement. It is a fuzzy, blurred space you have to navigate.


1 change: 1 addition & 0 deletions content/blog/cache-strategies.md
@@ -8,6 +8,7 @@ weight: 1
sitemap:
changefreq: "monthly"
priority: 1
tags: ["cache", "performance"]
---

## What is Caching?
1 change: 1 addition & 0 deletions content/blog/clickhouse-benchmarking.md
@@ -5,6 +5,7 @@ date: 2024-10-21
draft: false
featured: true
weight: 1
tags: ["clickhouse", "benchmarking", "database"]
---

Imagine being a Formula One driver, racing at breakneck speeds, but without any telemetry data to guide you. It’s a thrilling ride, but one wrong turn or an overheating engine could lead to disaster. Just like a pit crew relies on performance metrics to optimize the car's speed and handling, we rely on observability in ClickHouse to monitor the health of our data systems for storing and querying logs. These metrics provide crucial insights, allowing us to identify bottlenecks, prevent outages, and fine-tune performance, ensuring our data engine runs as smoothly and efficiently as a championship-winning race car.
1 change: 1 addition & 0 deletions content/blog/container-networking-deep-dive-p1.md
@@ -8,6 +8,7 @@ weight: 1
sitemap:
changefreq: 'monthly'
priority: 0.8
tags: ["docker", "networking", "containers"]
---

In part 1 of this series, we will demystify how a container communicates with the host and vice versa.
1 change: 1 addition & 0 deletions content/blog/container-networking-deep-dive-p2.md
@@ -5,6 +5,7 @@ date: 2022-11-20
draft: false
featured: true
weight: 1
tags: ["docker", "networking", "containers"]
---

In part 2 of this series, we will demystify how multiple containers running on the same host communicate with the host and vice versa.
1 change: 1 addition & 0 deletions content/blog/container-networking-deep-dive-p3.md
@@ -5,6 +5,7 @@ date: 2022-11-21
draft: false
featured: true
weight: 1
tags: ["docker", "networking", "containers", "kubernetes"]
---

In part 3 of this series, we will see how containers running on the host communicate with the outside world, i.e., the internet.
1 change: 1 addition & 0 deletions content/blog/cron-jobs.md
@@ -5,6 +5,7 @@ date: 2024-04-15
draft: false
featured: true
weight: 1
tags: ["cron", "linux", "automation"]
---
___
<img src="/images/blog/cron-jobs/cron-jobs-cover.png" alt="Cron Jobs cover" width="100%">
3 changes: 2 additions & 1 deletion content/blog/docker-deep-dive.md
@@ -5,6 +5,7 @@ date: 2024-04-24
draft: false
featured: true
weight: 1
tags: ["docker", "containers"]
---


@@ -160,4 +161,4 @@ By understanding Docker licensing and ensuring that your usage complies with the

### Conclusion

To wrap up, getting the hang of Docker is about more than just wrapping your app in a container. It’s about really getting into the nitty-gritty—like setting up Dockerfiles properly, making smart use of Docker’s caching perks, and keeping those images lean and mean. On top of that, keeping things above board with Docker’s licensing rules is key for keeping your operations smooth and compliant. Stick with these strategies, and you’re on your way to making your Docker setups as efficient, secure, and compliant as they can be.
To wrap up, getting the hang of Docker is about more than just wrapping your app in a container. It’s about really getting into the nitty-gritty—like setting up Dockerfiles properly, making smart use of Docker’s caching perks, and keeping those images lean and mean. On top of that, keeping things above board with Docker’s licensing rules is key for keeping your operations smooth and compliant. Stick with these strategies, and you’re on your way to making your Docker setups as efficient, secure, and compliant as they can be.
1 change: 1 addition & 0 deletions content/blog/genai-dictionary-part1-llm.md
@@ -8,6 +8,7 @@ weight: 1
sitemap:
changefreq: "monthly"
priority: 1
tags: ["genai", "llm", "ai"]
---

We bring to you this weekly series of articles, to help understand and demystify the lexicon in the GenAI space.
3 changes: 2 additions & 1 deletion content/blog/github-runners.md
@@ -5,6 +5,7 @@ date: 2024-04-25
draft: false
featured: true
weight: 1
tags: ["github", "ci-cd", "automation"]
---

# Introduction
@@ -162,4 +163,4 @@ GitHub automatically sends notifications about CI/CD workflow status updates to

GitHub Runners are the backbone of modern CI/CD pipelines, enabling developers to automate their workflows with ease and efficiency. Whether you opt for GitHub-hosted runners for convenience or self-hosted runners for control, GitHub provides the tools and infrastructure to support your automation needs.

By embracing GitHub Runners, you can unlock new levels of productivity and innovation, allowing your projects to reach their full potential. So, don't wait any longer! Dive into the world of GitHub Runners and elevate your CI/CD workflow today!
By embracing GitHub Runners, you can unlock new levels of productivity and innovation, allowing your projects to reach their full potential. So, don't wait any longer! Dive into the world of GitHub Runners and elevate your CI/CD workflow today!
3 changes: 2 additions & 1 deletion content/blog/guide-to-implement-winston-logger.md
@@ -5,6 +5,7 @@ date: 2024-04-29
draft: false
featured: true
weight: 1
tags: ["nodejs", "logging", "winston"]
---
___

@@ -294,4 +295,4 @@ transports: [

In this step-by-step guide, we've explored the implementation of Winston Logger in Node.js projects, enabling us to build robust and reliable applications. With best practices like utilizing log levels, including contextual information, and centralizing logging configuration, we've paved the way for more reliable and usable Node.js applications.

Let's embark on our logging journey to enhance our projects further!!
Let's embark on our logging journey to enhance our projects further!!
@@ -5,6 +5,7 @@ date: 2024-05-27
draft: false
featured: true
weight: 1
tags: ["argo", "kubernetes", "traffic-routing"]
---

This blog explores how to use Argo Rollouts for deploying software updates smoothly. It covers the challenges faced when rolling back updates and introduces header-based routing to manage traffic during deployments.
@@ -5,6 +5,7 @@ date: 2024-11-09
draft: false
featured: true
weight: 1
tags: ["databricks", "hive", "unity-catalog", "data-migration"]
---

<img src="/images/blog/hive-to-unity-catalog-data-migration-databricks/cover.png" alt="Hive Metastore to Unity Catalog Data Migration in Databricks">
1 change: 1 addition & 0 deletions content/blog/is-dns-migration-a-tricky-affair.md
@@ -5,6 +5,7 @@ date: 2022-12-15
draft: false
featured: true
weight: 1
tags: ["dns", "migration"]
---

Never trust a friend who says DNS migration is easy. Your instinct says it's easy, well don't trust your instinct. In this post, I will be sharing my experience with one such encounter with DNS migration from GoDaddy to Route53.
1 change: 1 addition & 0 deletions content/blog/java-streams.md
@@ -5,6 +5,7 @@ date: 2024-03-05
draft: false
featured: true
weight: 1
tags: ["java", "streams"]
---

# Java Streams: A Paradigm Shift in Data Processing
1 change: 1 addition & 0 deletions content/blog/kubernetes-rbac.md
@@ -5,6 +5,7 @@ date: 2024-04-29
draft: false
featured: true
weight: 1
tags: ["kubernetes", "rbac", "security"]
---

Role-Based Access Control (RBAC) is a crucial feature in Kubernetes that allows administrators to define and manage permissions for users and services within the cluster. RBAC helps prevent unauthorised access and actions, ensuring the security of your Kubernetes environment. In this blog, we'll explore how RBAC works in Kubernetes, its components, and best practices.
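
To make that concrete, here is a generic sketch (not from this post) of the two core RBAC objects: a Role granting read access to Pods in a namespace, and a RoleBinding that grants it to a user:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]            # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
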
1 change: 1 addition & 0 deletions content/blog/lightweight-k3s-cluster-raspberry-pi.md
@@ -5,6 +5,7 @@ date: 2023-01-02
draft: false
featured: true
weight: 1
tags: ["k3s", "kubernetes", "raspberry-pi", "iot"]
---
As a developer at Infraspec, I am responsible for managing internal tooling and ensuring the smooth operation of local
tooling and network services in our office. We had a cluster of Raspberry Pi devices available and saw an opportunity to
1 change: 1 addition & 0 deletions content/blog/linux-memory-swap.md
@@ -5,6 +5,7 @@ date: 2024-04-15
draft: false
featured: true
weight: 1
tags: ["linux", "memory", "swap"]
---


1 change: 1 addition & 0 deletions content/blog/multi-tenant-system-with-aws-cdk.md
@@ -5,6 +5,7 @@ date: 2024-11-22
draft: false
featured: true
weight: 1
tags: ["aws", "cdk", "multi-tenancy"]
---
In this blog, I will take you on a journey of building the scalable and efficient IaC solution that we built for our multi-tenant system. We are not going to debate why we chose the CDK; that discussion deserves its own blog. Instead, this post covers how we approached the problem using AWS CDK. Even if you are not very familiar with CDK, it can help you build a mental model for writing infrastructure code for such a complex system.

1 change: 1 addition & 0 deletions content/blog/nfs-fs-as-docker-volume.md
@@ -5,6 +5,7 @@ date: 2024-08-19
draft: false
featured: true
weight: 1
tags: ["docker", "nfs", "storage"]
---

As an Infrastructure engineer, I've had my fair share of experiences with containerized environments and the challenges of managing data persistence. One of the most significant problems I've faced is ensuring that data generated by Docker containers persists even when the container is stopped or deleted. That's when I discovered the power of leveraging the Network File System (NFS) as volumes in Docker.
1 change: 1 addition & 0 deletions content/blog/office-home-lab-internet-failover-setup.md
@@ -5,6 +5,7 @@ date: 2022-12-15
draft: false
featured: true
weight: 1
tags: ["networking", "home-lab", "failover"]
---
An overview of how to set up internet failover for an office or home lab.

1 change: 1 addition & 0 deletions content/blog/pragmatism-over-perfection.md
@@ -5,6 +5,7 @@ date: 2025-04-21
draft: false
featured: true
weight: 1
tags: ["software-development", "philosophy"]
---

We as Engineers often chase perfection.
1 change: 1 addition & 0 deletions content/blog/rpi-netboot-automation.md
@@ -5,6 +5,7 @@ date: 2024-04-17
draft: false
featured: true
weight: 1
tags: ["raspberry-pi", "netboot", "automation"]
---

In this blog, we'll delve into automating the netbooting process using a bash script (`pxeService.sh`) and an address list (`addresslist.txt`) to enhance the deployment and management of Raspberry Pi devices. If you haven't already, you can catch up on the initial steps and concepts discussed [here](https://www.infraspec.dev/blog/rpi-netboot-deep-dive/), where we laid the groundwork for this automation project.
1 change: 1 addition & 0 deletions content/blog/rpi-netboot-deep-dive.md
@@ -5,6 +5,7 @@ date: 2024-04-16
draft: false
featured: true
weight: 1
tags: ["raspberry-pi", "netboot", "automation"]
---

## Understanding PXE Boot
@@ -5,6 +5,7 @@ date: 2024-04-21
draft: false
featured: true
weight: 1
tags: ["aws", "secrets-manager", "security"]
---

Automating Secret Rotation with AWS Secrets Manager
@@ -5,6 +5,7 @@ date: 2024-04-22
draft: false
featured: true
weight: 1
tags: ["aws", "secrets-manager", "security"]
---

In [Part 1](/blog/securing-and-rotating-secrets-with-aws-secrets-manager-part-1/), we discussed configuring AWS Secrets Manager, AWS Lambda, and Automatic Rotation for our Secret. We also defined permissions for our Lambda function, which enabled Secrets Manager to invoke it on a scheduled basis. In Part 2, we will focus on setting up the Lambda function, including the required permissions and implementation.
1 change: 1 addition & 0 deletions content/blog/setting-up-ingress-on-eks.md
@@ -5,6 +5,7 @@ date: 2022-11-30
draft: false
featured: true
weight: 1
tags: ["aws", "eks", "kubernetes", "ingress"]
---

> **Note**: Everything here applies to [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/). If you are running on another cloud, on-prem, with minikube, or something else, these will be slightly different.
1 change: 1 addition & 0 deletions content/blog/tag-strat-blog.md
@@ -5,6 +5,7 @@ date: 2024-07-29
draft: false
featured: true
weight: 1
tags: ["aws", "tagging", "strategy"]
---

## Introduction
1 change: 1 addition & 0 deletions content/blog/tdd-design-benefits.md
@@ -5,6 +5,7 @@ date: 2023-01-18
draft: false
featured: true
weight: 1
tags: ["tdd", "software-design"]
---

Most of us think of TDD as a tool for software testing and verification. But if used effectively it is more than that.
1 change: 1 addition & 0 deletions content/blog/terraform_secrets_management_guide.md
@@ -5,6 +5,7 @@ date: 2024-08-06
draft: false
featured: true
weight: 1
tags: ["terraform", "aws", "secrets-management"]
---

Imagine you're working on a project where you need to deploy resources to AWS using Terraform. In a rush to get things done, you decide to hard-code your AWS credentials directly into your Terraform files. Everything works fine at first, and your resources are successfully deployed. But a few weeks later, you discover that your Terraform repository was accidentally made public. Suddenly, your AWS credentials are exposed to the entire internet, which can lead to unauthorized access to your AWS account and serious security problems.
1 change: 1 addition & 0 deletions content/blog/terragrunt-envs-management.md
@@ -5,6 +5,7 @@ date: 2024-09-10
draft: false
featured: true
weight: 1
tags: ["terragrunt", "terraform", "iac"]
---

Managing multiple environments was a never-ending headache for me. Like many others in the DevOps world, I was responsible for deploying applications across various environments—production, staging, and development. Each of these environments required the same infrastructure, but I found myself writing the same Terraform code over and over again in different folders. The repetition felt inefficient, and the potential for human error only grew with each tweak I had to make for a specific environment.
1 change: 1 addition & 0 deletions content/blog/tftp-overview.md
@@ -5,6 +5,7 @@ date: 2024-04-19
draft: false
featured: true
weight: 1
tags: ["tftp", "networking"]
---

## TFTP protocol overview
5 changes: 5 additions & 0 deletions hugo.log
@@ -0,0 +1,5 @@
port 1313 already in use, attempting to use an available port
Watching for changes in /app/{assets,content,data,layouts,static}
Watching for config changes in /app/config.toml
Start building sites …
hugo v0.150.1-ce44a8e835e6934292acda936e5b43b70f451af9+extended linux/amd64 BuildDate=2025-09-25T10:26:04Z VendorInfo=gohugoio