This is a comprehensive Medical Profile Management System built with a microservices architecture using modern, production-grade tools like Spring Boot, Kafka, Docker, and AWS. This repository will grow as I add new services and features.
⚙️ This is a learning-focused, end-to-end backend project.
Architecture Overview: View Architecture Overview
| Service Name | Description | Status |
|---|---|---|
| Medical Profile Service | Manages medical profile data | Implemented |
| Medical Billing Service | Manages medical billing data | Implemented |
| Medical Analytics Service | Consumes events for analytics | Implemented |
| API Gateway | Single entry point for all client requests | Implemented |
| Auth Service | Secures microservices with JWT authentication | Implemented |
| Integration Tests | Automated Integration Testing Module | Implemented |
| Infrastructure – AWS CDK + LocalStack | AWS Infrastructure as Code | Implemented |
A microservice built with Spring Boot 3.5.0, Java 21 (Oracle JDK), and PostgreSQL. This service is part of the larger MediCore - Medical Profile Management System project. It is container-ready, supports both REST and gRPC-based communication, and integrates with Kafka and Docker, with AWS cloud deployment planned.
- Features
- Tech Stack
- Dependencies
- Project Setup
- Development Configuration
- Run Locally Using H2 DB
- Access H2 Console
- Docker Setup
- PostgreSQL Configuration in `application.properties`
- IntelliJ DB Integration
- API Testing in Dockerized Setup
- Global Error Handling
- OpenAPI Documentation
- gRPC Integration
- Asynchronous Event-Driven Communication with Kafka
- Kafka Setup with Docker (KRaft Mode)
- Kafka Producer Implementation
- Kafka Consumer Implementation
- Development Notes / Change Log
- RESTful API: Provides endpoints to create, retrieve, update, and delete medical profiles.
- Layered Architecture: Follows a clean separation of concerns across Controller, Service, Repository, DTO, Model, and Mapper layers.
- Validation System:
- Field-level annotations ensure input integrity.
- Grouped validation using interfaces (e.g. `CreateMedicalProfileValidationGroup`) applies context-specific rules like "registration time" only during create operations.
- DTO and Mapper Support: Maps between entities and DTOs to keep the API contract clean and decoupled from internal database models.
- gRPC Client Integration: Communicates with `medical-billing-service` to auto-create billing accounts when profiles are added.
- OpenAPI Documentation: Integrated with SpringDoc using `@Tag` and `@Operation` annotations to generate Swagger-compatible docs.
- Global Exception Handling: Centralized with `@ControllerAdvice` for consistent, structured error responses.
- Database Flexibility:
  - H2 in-memory for lightweight development/testing.
  - PostgreSQL via Docker for production-ready persistence.
- Containerized Deployment: Built using a multi-stage Dockerfile for efficient builds; runs alongside PostgreSQL in a Docker network.
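As a concrete illustration, grouped validation can be sketched like this (the DTO name and fields are illustrative; only `CreateMedicalProfileValidationGroup` comes from this project):

```java
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import java.time.LocalDate;

public class MedicalProfileRequestDTO {

    // Validated in every context (Default group)
    @NotBlank(message = "Name is required")
    private String name;

    @NotBlank(message = "Email is required")
    private String email;

    // Only enforced when CreateMedicalProfileValidationGroup is active,
    // i.e. during create operations
    @NotNull(message = "Registration date is required",
             groups = CreateMedicalProfileValidationGroup.class)
    private LocalDate registeredDate;

    // Marker interface used to trigger create-only rules
    public interface CreateMedicalProfileValidationGroup {}
}
```

On the create endpoint, `@Validated({Default.class, CreateMedicalProfileValidationGroup.class})` activates both rule sets, while the update endpoint uses the Default group only, so the create-only rule is skipped.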
| Category | Technology | Description |
|---|---|---|
| Backend | Spring Boot 3.5.0 | Framework for building RESTful microservices |
| Language | Java 21 (Oracle JDK) | Long-Term Support version for enterprise stability |
| Database | PostgreSQL, H2 (dev) | PostgreSQL for production, H2 for dev/testing |
| Validation | Hibernate Validator | Annotation-based request and entity validation |
| Docs | SpringDoc OpenAPI | Auto-generates Swagger UI from code annotations |
| Container | Docker | Containerization using multi-stage build |
| Messaging | Kafka | For event-driven communication between services |
| Cloud | AWS (planned) | For deploying microservices in the cloud |
- `spring-boot-starter-web`: For creating REST APIs
- `spring-boot-starter-data-jpa`: For interacting with databases using JPA
- `spring-boot-devtools`: Enables hot reload during development
- `spring-boot-starter-validation`: Supports bean validation using annotations
- `postgresql`: JDBC driver to connect to PostgreSQL
- `com.h2database:h2`: In-memory database for development and testing
- `springdoc-openapi-starter-webmvc-ui`: To generate OpenAPI docs with Swagger UI
- `io.grpc:grpc-netty-shaded`: For the gRPC server implementation
- `io.grpc:grpc-stub`: For gRPC client stubs
- `io.grpc:grpc-protobuf`: For Protocol Buffers support in gRPC
- `com.google.protobuf:protobuf-java`: For Protocol Buffers Java support
- `net.devh:grpc-client-spring-boot-starter`: For integrating the gRPC client with Spring Boot
- Java 21 installed
- Maven or Gradle (depending on your build tool)
- IDE (e.g., IntelliJ IDEA)
- Docker
- `application.properties`: Configured to use the H2 database for ease of development
- `server.port=8081`: Port changed from the default `8080` to avoid conflicts
- `data.sql`: Auto-loaded by Spring Boot to insert dummy data at startup
- Clone the repository
- Navigate to `medical-profile-service`
- Run the application using your IDE or the command line:

```bash
./mvnw spring-boot:run   # or: ./gradlew bootRun
```

- Access the API at: `http://localhost:8081/medical-profiles`
Spring Boot makes it easy to view and interact with the H2 database via a browser:
- URL: `http://localhost:8081/h2-console`
- JDBC URL: `jdbc:h2:mem:testdb`
- Username: `profile`
- Password: `profile` (unless you changed it in `application.properties`)

Make sure to uncomment the H2 configuration in your `application.properties` file.
Make sure this is present in your application.properties:
```properties
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console
```

A PostgreSQL container was created using the latest PostgreSQL image with the following configuration:
```bash
docker run --name medical-profile-service-db \
  -e POSTGRES_USER=profile \
  -e POSTGRES_PASSWORD=profile \
  -e POSTGRES_DB=db \
  -p 5000:5432 \
  -v medical-profile-db-data:/var/lib/postgresql/data \
  --network internal \
  -d postgres:latest
```

- Port mapped: `5000:5432`
- Persistent storage: Named volume `medical-profile-db-data`
- Network: Internal Docker network named `internal`
A multi-stage Dockerfile was created in the medical-profile-service directory:
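The repository's Dockerfile is not reproduced here; a typical multi-stage layout for a Maven + Java 21 Spring Boot service looks like this (illustrative sketch, not the exact file):

```dockerfile
# Stage 1: build the jar with Maven
FROM maven:3.9-eclipse-temurin-21 AS build
WORKDIR /app
COPY pom.xml .
RUN mvn dependency:go-offline        # cache dependencies for faster rebuilds
COPY src ./src
RUN mvn clean package -DskipTests

# Stage 2: slim runtime image containing only the built jar
FROM eclipse-temurin:21-jdk-alpine
WORKDIR /app
COPY --from=build /app/target/medical-profile-service-*.jar app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The two-stage split keeps Maven and the source tree out of the final image, so only the JRE and the jar ship to production.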
In IntelliJ IDEA:
- Image name: `medical-profile-service:latest`
- Container name: `medical-profile-service`
- Dockerfile path: `medical-profile-service/Dockerfile`
- Environment Variables:
  - `SPRING_DATASOURCE_URL=jdbc:postgresql://medical-profile-service-db:5432/db`
  - `SPRING_DATASOURCE_USERNAME=profile`
  - `SPRING_DATASOURCE_PASSWORD=profile`
  - `SPRING_JPA_HIBERNATE_DDL_AUTO=update`
  - `SPRING_SQL_INIT_MODE=always`
- Port Binding: `8081:8081`
- Run Option: `--network internal`
Commented out H2-related settings and retained only the essential production configuration.
- Connected to the running PostgreSQL container from IntelliJ using:
  - Name: `medical-profile-service-db`
  - JDBC URL: `jdbc:postgresql://localhost:5000/db`
  - Username: `profile`, Password: `profile`
- Verified: Tables created and dummy data from `data.sql` available in the database
Tested all `.http` request files (GET, POST, PUT, DELETE) against the Dockerized application connected to PostgreSQL. All endpoints worked as expected.

```
GET    /medical-profiles        # Fetch all profiles
POST   /medical-profiles        # Create new profile
PUT    /medical-profiles/{id}   # Update profile by ID
DELETE /medical-profiles/{id}   # Delete profile by ID
```
- Local API Docs: http://localhost:8081/v3/api-docs
- Swagger UI (auto-generated): http://localhost:8081/swagger-ui/index.html
These are generated using annotations like @Tag, @Operation, etc., in controller classes.
You can copy the raw OpenAPI JSON from /v3/api-docs and paste it into Swagger Editor for interactive documentation.
This project uses .http files located under api-request/medical-profile-service/ to test API endpoints.
Examples:
- `update-medical-profile.http` – Tests `PUT /medical-profiles/{id}`
- `delete-medical-profile.http` – Tests `DELETE /medical-profiles/{id}`
You can use IntelliJ IDEA or VS Code REST Client extension to run these files.
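For reference, a create request in this format might look like the following (the JSON body fields are illustrative; match them to the actual request DTO):

```http
### Create a new medical profile
POST http://localhost:8081/medical-profiles
Content-Type: application/json

{
  "name": "Alice",
  "email": "alice@example.com"
}
```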
A centralized exception handling mechanism is in place using @ControllerAdvice. It catches and formats errors like:
- Duplicate email constraint violations
- Entity not found
- Invalid request body
This ensures consistent error responses across the API.
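A minimal sketch of such a handler, assuming conventional Spring exception types (the class name and exception-to-status mapping are illustrative, not the repo's actual code):

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import java.util.HashMap;
import java.util.Map;

@ControllerAdvice
public class GlobalExceptionHandler {

    // Invalid request body: collect field-level validation messages
    @ExceptionHandler(MethodArgumentNotValidException.class)
    public ResponseEntity<Map<String, String>> handleValidation(MethodArgumentNotValidException ex) {
        Map<String, String> errors = new HashMap<>();
        ex.getBindingResult().getFieldErrors()
          .forEach(e -> errors.put(e.getField(), e.getDefaultMessage()));
        return ResponseEntity.badRequest().body(errors);
    }

    // Duplicate email / entity not found, surfaced as a structured message
    @ExceptionHandler(IllegalArgumentException.class)
    public ResponseEntity<Map<String, String>> handleBadRequest(IllegalArgumentException ex) {
        return ResponseEntity.status(HttpStatus.BAD_REQUEST)
                             .body(Map.of("message", ex.getMessage()));
    }
}
```

Because the advice applies across all controllers, each endpoint stays free of try/catch boilerplate while clients always receive the same error shape.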
To enable gRPC-based communication between medical-profile-service (the client) and medical-billing-service (the server), we use a Protocol Buffers (`.proto`) file. This file acts as a contract that defines:
- The structure of the request and response messages
- The gRPC service name and its RPC methods (endpoints)
- The Java package for generated classes
In our case, the file is named `medical_billing_service.proto`, and it defines a `MedicalBillingService` with an RPC method:

```proto
rpc CreateMedicalBillingAccount (BillingRequest) returns (BillingResponse);
```

This allows the profile service to send a `BillingRequest` and receive a `BillingResponse` when a medical profile is created.
gRPC requires both client and server to have access to the same `.proto` definition so that matching Java classes can be generated. Although the file originated in the `medical-billing-service` module (which hosts the server logic), we copied it into the `src/main/proto/` directory of `medical-profile-service` to:
- Generate gRPC client stubs during the Maven build process
- Maintain service contract alignment with the billing service
- Avoid direct dependency sharing for now (future improvement: share via a central proto module)
This setup ensures the profile service can invoke billing RPCs with type-safe, auto-generated Java classes.
- Copied the shared `medical_billing_service.proto` file from `medical-billing-service` to `src/main/proto/` in `medical-profile-service`
- Configured `protobuf-maven-plugin` in `pom.xml` to compile `.proto` files into Java classes
- Ran `mvn compile` to generate gRPC stubs
- Created `MedicalBillingServiceGrpcClient` class under the `grpc` package
- Uses a blocking stub to call `CreateMedicalBillingAccount` on the remote `medical-billing-service`
- Constructs a `BillingRequest` with the id, name, and email from the profile
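Put together, the client can be sketched roughly as follows (the generated package, builder/setter names, and channel handling are assumptions based on the proto contract described above, not the repo's exact code):

```java
import billing.BillingRequest;
import billing.BillingResponse;
import billing.MedicalBillingServiceGrpc;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

@Service
public class MedicalBillingServiceGrpcClient {

    private final MedicalBillingServiceGrpc.MedicalBillingServiceBlockingStub blockingStub;

    public MedicalBillingServiceGrpcClient(
            @Value("${billing.service.address:localhost}") String host,
            @Value("${billing.service.grpc.port:9001}") int port) {
        // Plaintext is acceptable here because traffic stays on the internal Docker network
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress(host, port)
                .usePlaintext()
                .build();
        blockingStub = MedicalBillingServiceGrpc.newBlockingStub(channel);
    }

    public BillingResponse createBillingAccount(String id, String name, String email) {
        BillingRequest request = BillingRequest.newBuilder()
                .setId(id)          // field names assumed from the README's description
                .setName(name)
                .setEmail(email)
                .build();
        // Blocking stub: the call returns once the billing service responds
        return blockingStub.createMedicalBillingAccount(request);
    }
}
```

The `@Value` defaults mirror the `application.properties` entries, so the same class works locally and in Docker where the environment variables override them.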
In `application.properties`:

```properties
billing.service.address=localhost
billing.service.grpc.port=9001
```

These can be overridden via environment variables in the Docker setup. To support gRPC communication between containers, we updated the Docker run configuration for `medical-profile-service` to include:

```
BILLING_SERVICE_ADDRESS=medical-billing-service
BILLING_SERVICE_GRPC_PORT=9001
```

This ensures that the client (profile service) can resolve and connect to the gRPC server (billing service) running in another container on the same internal Docker network. These values match the server container's hostname and port within that network.

- On successful creation of a medical profile, the service automatically invokes the gRPC client to create a billing account in `medical-billing-service`
- This integration is triggered inside the profile-creation service logic
To decouple services and improve scalability, we use Kafka as the backbone for asynchronous, event-driven communication within the MediCore ecosystem.
Until now, the microservices in this project have communicated synchronously using REST APIs or gRPC. While this is suitable for simple, one-to-one interactions, it introduces significant drawbacks:
- Latency: Each additional service call increases total processing time.
- Tight Coupling: Failures or slow responses in one service can block others.
- Scalability Bottlenecks: High request volume magnifies inter-service traffic.
By introducing Kafka, we transform the architecture into an event-driven model. Now, services publish events rather than making direct calls, and other services consume these events asynchronously.
When a new medical profile is created in the medical-profile-service, it publishes a MedicalProfileEvent to a Kafka topic.
This event includes relevant data like medical profile ID, name, email, event type.
```mermaid
graph LR
  A[medical-profile-service] -- Publishes Event --> B((Kafka Topic: medical-profile))
  B --> C[medical-analytics-service]
  B --> D[medical-notification-service]
```
When a new medical profile is created, the medical-profile-service publishes a `MedicalProfileEvent` to a Kafka topic (e.g., `medical-profile`) and proceeds with its workflow without waiting for any consumers.
```mermaid
sequenceDiagram
  participant Client
  participant MedicalProfileService
  participant Kafka
  participant MedicalAnalyticsService
  participant MedicalNotificationService
  Client->>MedicalProfileService: Create Profile (HTTP)
  MedicalProfileService->>Kafka: Publish MedicalProfileEvent
  Kafka-->>MedicalAnalyticsService: Event Consumed
  Kafka-->>MedicalNotificationService: Event Consumed
```
Each of these services independently subscribes to the `medical-profile` topic and handles events at its own pace.
- Medical Analytics Service: Subscribes to `medical-profile` to update internal metrics and reporting datasets.
- Medical Notification Service: Subscribes to the same event to trigger welcome emails, alerts, or push notifications.
- Non-blocking: Profile creation doesn't wait for downstream services to respond.
- Scalable: Kafka handles high throughput and allows for horizontal scaling of consumers.
- Loose Coupling: New services can be added as subscribers without modifying the publisher. Services are not tightly bound to one another.
- Resilience: Temporary consumer downtime doesn't affect the publishing flow. It ensures fault tolerance by retaining events until they can be processed.
- Extensibility: New services can subscribe to the topic without changing existing code.
We run Kafka using the Bitnami Docker image in KRaft mode (no ZooKeeper) with multiple listeners configured for internal and external communication.
| Setting | Value |
|---|---|
| Image | bitnami/kafka:latest |
| Ports | 9092 (internal), 9094 (external) |
| Network | internal (Docker custom bridge) |
| Process Role | controller, broker (KRaft mode) |
Environment variables:

```
KAFKA_CFG_NODE_ID=0
KAFKA_CFG_PROCESS_ROLES=controller,broker
KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
```

| Config | Purpose |
|---|---|
| `bitnami/kafka:latest` | Kafka Docker image using KRaft mode (no ZooKeeper needed). |
| `9092` | Internal listener for service-to-service traffic (PLAINTEXT). |
| `9094` | External listener for local dev tools (`kafka-topics.sh`, `kafka-console-producer`, etc.). |
| `KAFKA_CFG_ADVERTISED_LISTENERS` | Defines how services inside and outside the container reach Kafka. |
| `KAFKA_CFG_PROCESS_ROLES=controller,broker` | Enables this node to act as both a controller and a broker. |
| `KAFKA_CFG_NODE_ID=0` | Required for KRaft (must be unique in a cluster). |
| `--network internal` | Keeps Kafka discoverable by your other services (e.g., Spring Boot apps) within Docker. |
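The settings above translate into a single docker run command along these lines (a sketch assembled from the listed values; the container name `kafka` is assumed to match the advertised `kafka:9092` hostname):

```bash
docker run -d --name kafka \
  --network internal \
  -p 9092:9092 -p 9094:9094 \
  -e KAFKA_CFG_NODE_ID=0 \
  -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
  -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094 \
  -e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094 \
  -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT \
  -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
  -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093 \
  bitnami/kafka:latest
```

Naming the container `kafka` matters: services on the `internal` network resolve the advertised `PLAINTEXT://kafka:9092` listener through Docker DNS.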
- Bootstrap server used: `127.0.0.1:9094`
- Kafka connection created in IntelliJ (default settings)
- Created topic: `medical-profile`
- Kafka consumer created:
  - Topic: `medical-profile`
  - Key: String
  - Value: Bytes (base64)
- Kafka producer created:
  - Topic: `medical-profile`
  - Key/Value: `"test"` (test message)
- Result: Consumer successfully received the produced message
The medical-profile-service includes a Kafka producer responsible for publishing a MedicalProfileCreated event whenever a new medical profile is successfully created.
- Package: `com.priti.medicalprofileservice.kafka`
- Class: `KafkaProducer`
- Serialization: Messages are serialized using Protocol Buffers (Protobuf) into binary format.
- Integration Point: Called from the service layer after persisting the profile in the database.
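A sketch of what such a producer can look like (the generated Protobuf package and the event-type value are assumptions; topic name and serializers follow this README):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
// Generated by protobuf-maven-plugin from medical_profile_event.proto; package is illustrative
import medicalprofile.events.MedicalProfileEvent;

@Service
public class KafkaProducer {

    private static final String TOPIC = "medical-profile";

    private final KafkaTemplate<String, byte[]> kafkaTemplate;

    public KafkaProducer(KafkaTemplate<String, byte[]> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Called from the service layer after the profile is persisted
    public void sendProfileCreatedEvent(String id, String name, String email) {
        MedicalProfileEvent event = MedicalProfileEvent.newBuilder()
                .setMedicalProfileId(id)
                .setName(name)
                .setEmail(email)
                .setEventType("PROFILE_CREATED")   // event-type value is illustrative
                .build();
        // ByteArraySerializer is configured, so we publish the Protobuf binary form
        kafkaTemplate.send(TOPIC, id, event.toByteArray());
    }
}
```

Using the profile id as the message key keeps all events for one profile on the same partition, preserving their order for consumers.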
Kafka messages are structured using Protocol Buffers for language-neutral, efficient communication.
- Schema location: `medical-profile-service/src/main/proto/medical_profile_event.proto`
- Schema name: `MedicalProfileEvent`
- Generated classes: Compiled via Maven using `protobuf-maven-plugin` and used directly in producer code.
This ensures that all services (producers and consumers) use a consistent schema for message serialization and deserialization.
Kafka-related producer settings are defined in `application.properties` for the `medical-profile-service`:

```properties
# Kafka broker address - injected from environment (docker-compose)
spring.kafka.bootstrap-servers=${SPRING_KAFKA_BOOTSTRAP_SERVERS}

# Key/Value serializer classes
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.ByteArraySerializer
```

💡 The actual value of `SPRING_KAFKA_BOOTSTRAP_SERVERS` is injected via an environment variable:

```
SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:9092
```

This setup ensures full compatibility in Docker-based environments and local development.
The medical-analytics-service acts as a Kafka consumer, responsible for asynchronously receiving and processing medical profile creation events published by the medical-profile-service. This service listens to the Kafka topic medical-profile and consumes serialized MedicalProfileEvent messages (encoded in Protocol Buffers).
| Property | Value |
|---|---|
| Service | medical-analytics-service |
| Kafka Topic | medical-profile |
| Group ID | medical-analytics-service |
| Deserialization | ByteArrayDeserializer + Protobuf Parser |
| Message Type | MedicalProfileEvent |
Please refer to Medical Analytics Service for complete detail.
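For orientation, the consumer side can be sketched like this (the generated Protobuf package is an assumption; the topic, group ID, and byte-array deserialization mirror the table above):

```java
import com.google.protobuf.InvalidProtocolBufferException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;
// Generated from the shared medical_profile_event.proto; package is illustrative
import medicalprofile.events.MedicalProfileEvent;

@Service
public class KafkaConsumer {

    private static final Logger log = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = "medical-profile", groupId = "medical-analytics-service")
    public void consume(byte[] payload) {
        try {
            // Payload arrives as raw bytes (ByteArrayDeserializer);
            // the generated Protobuf parser reconstructs the event
            MedicalProfileEvent event = MedicalProfileEvent.parseFrom(payload);
            log.info("Received Medical Profile Event: [MedicalProfileId={}, Name={}, Email={}]",
                     event.getMedicalProfileId(), event.getName(), event.getEmail());
        } catch (InvalidProtocolBufferException e) {
            log.error("Failed to deserialize MedicalProfileEvent", e);
        }
    }
}
```

Deserializing inside the listener (rather than via a custom Kafka deserializer) keeps the wire format opaque to Kafka and lets the service decide how to handle malformed payloads.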
- Added DTOs for request and response
- Developed create, update, get, delete logic in service and controller
- Implemented grouped validation
- Added custom repository methods to prevent duplicate emails
- Introduced global error handler to handle exceptions gracefully
- Verified all endpoints using `.http` request files
- Integrated SpringDoc for OpenAPI documentation
- Dockerized the application with a multi-stage Dockerfile
- Created and configured PostgreSQL container with internal Docker networking
- Connected Dockerized Spring Boot app to PostgreSQL using environment variables
- Verified DB connection via IntelliJ and tested all API endpoints in the Dockerized setup
- Implemented gRPC client to call remote billing service
- Integrated the `.proto` file and compiled client stubs using Maven
- Automatically creates a billing account when a profile is created
- Configured the gRPC server connection via externalized `application.properties`
- Introduced Kafka for asynchronous event-driven communication
- Created MedicalProfileKafkaProducer to publish profile creation events to Kafka
- Defined Protobuf schema for MedicalProfileCreated events and compiled using Maven plugin
- Configured Kafka producer with key/value serializers in application.properties
- Added SPRING_KAFKA_BOOTSTRAP_SERVERS to Docker environment for internal Kafka discovery
- Tested event flow using IntelliJ Kafka consumer – verified binary message contents after decoding
The medical-billing-service is a gRPC-based microservice in the MediCore ecosystem responsible for handling billing account operations. It exposes gRPC endpoints for other services (like medical-profile-service) to create and manage medical billing accounts. This service is designed using Spring Boot 3.5.0 and Java 21, and communicates using Protocol Buffers over gRPC.
- Features Billing Service
- Tech Stack Billing Service
- Proto Definition
- gRPC Server Implementation
- Running the Service
- Docker Support
- Testing gRPC Requests
- Service-to-Service Communication
- Development Notes
- Exposes a gRPC endpoint to create a billing account.
- Can be invoked by client services using Protocol Buffers.
- Logs incoming gRPC requests.
- Returns mock responses (to be replaced with real logic).
- Built with modular, scalable microservice architecture in mind.
| Layer | Technology |
|---|---|
| Language | Java 21 |
| Framework | Spring Boot 3.5.0 |
| RPC Protocol | gRPC |
| Proto Compiler | Protobuf 3.25.5 |
| Build Tool | Maven |
| gRPC Java Plugin | protoc-gen-grpc-java v1.68.1 |
| Logging | SLF4J (Simple Logging Facade for Java) |
The gRPC service and message contracts are defined in:
src/main/proto/medical_billing_service.proto
```proto
option java_multiple_files = true;
option java_package = "billing";

service MedicalBillingService {
  rpc CreateMedicalBillingAccount(MedicalBillingRequest) returns (MedicalBillingResponse);
}
```

The service class is implemented in:

`src/main/java/com/priti/medicalbillingservice/grpc/MedicalBillingGrpcService.java`
It extends the generated MedicalBillingServiceImplBase and handles incoming RPCs.
- Annotated with `@GrpcService` (from `grpc-spring-boot-starter`).
- Logs the incoming request.
- Extends the auto-generated base class from the compiled proto.
- Sends back a mock response containing an account ID and status.
- Java 21+
- Maven 3.8+
- Docker (optional, for DB or containerization)
- Compile the `.proto` definitions and generate sources:

```bash
./mvnw clean compile
```

- Run the Spring Boot application:

```bash
./mvnw spring-boot:run
```

The gRPC server will start and listen on the configured port (default: `9001`).

Port Summary:

- `8082` – Spring Boot HTTP server (used for actuator or admin purposes)
- `9001` – gRPC server port (used for service-to-service communication)
This service can be containerized using a Dockerfile like the one below (example):

```dockerfile
FROM eclipse-temurin:21-jdk-alpine
WORKDIR /app
COPY target/medical-billing-service-*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

You can build and run the image:

```bash
docker build -t medical-billing-service .
docker run -p 9001:9001 medical-billing-service
```

If using Spring Boot actuator endpoints or HTTP admin tools, expose port `8082` as well:

```bash
docker run -p 8082:8082 -p 9001:9001 medical-billing-service
```
We use a custom folder structure for gRPC test requests:
```
grpc-requests/
└── medical-billing-service/
    └── Create-medical-billing-account.http
```
This `.http` file can be executed using the IntelliJ IDEA HTTP client; tools like `grpcurl` can also simulate real client calls.
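With `grpcurl`, a call can be sketched like this (the request fields follow the id/name/email contract described earlier; the unqualified service name assumes the proto declares no package, so adjust to the actual `.proto`):

```bash
grpcurl -plaintext \
  -proto src/main/proto/medical_billing_service.proto \
  -d '{"id": "123", "name": "Alice", "email": "alice@example.com"}' \
  localhost:9001 MedicalBillingService/CreateMedicalBillingAccount
```

Passing `-proto` avoids relying on server reflection, which this service may not enable.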
The medical-profile-service module acts as the gRPC client and calls `CreateMedicalBillingAccount` when a new medical profile is created, validating inter-service communication over gRPC.
- A REST API call to `medical-profile-service` creates a new medical profile.
- Internally, it triggers a gRPC request to `medical-billing-service`.
- The billing service responds with a generated account ID and status.
The .proto file is manually copied between both services under src/main/proto/ to keep them in sync.
- Proto files are manually copied across services for now.
- All proto compilation is handled by `protobuf-maven-plugin` configured in `pom.xml`.
- Proto classes are generated in the `target/generated-sources` directory.
- Business logic is not implemented yet; only structure and connectivity are in place.
The medical-analytics-service is a dedicated microservice within the MediCore ecosystem, designed to consume real-time Kafka events for analytics and insights. It listens to profile creation events emitted by the medical-profile-service and processes them asynchronously, allowing the platform to scale efficiently and decouple compute-intensive operations from synchronous workflows.
- Overview
- Purpose
- Architectural Role
- Tech Stack Analytics Service
- Getting Started Analytics Service
- Project Structure Highlights
- Configuration Analytics Service
- Development Notes / Change Log Analytics Service
- Kafka Topic and Protobuf Schema
- Summary
| Feature | Description |
|---|---|
| Architecture Pattern | Event-Driven Microservice |
| Message Broker | Apache Kafka (using Bitnami image in KRaft mode) |
| Message Format | Protocol Buffers (Protobuf v3) |
| Consumer Type | ByteArrayDeserializer → MedicalProfileEvent.parseFrom() |
| Integration | Connected via internal Docker network and uses shared Kafka topic |
| Deployment Target | Dockerized microservice (multi-stage build with Maven & JDK 21) |
| Port | 8083 |
| Topic Subscribed | medical-profile |
| Kafka Group ID | medical-analytics-service |
The main responsibility of this service is to consume and process MedicalProfileEvent messages asynchronously, without adding latency to upstream services like medical-profile-service. Typical use cases include:
- Analytics Collection: Tracking usage patterns, profile creation metrics, and geographical insights.
- Downstream Aggregation: Preparing datasets for BI tools, reporting engines, or machine learning models.
- Extensibility: Enabling future real-time pipelines (e.g., Flink, Spark Streaming) without modifying the publisher.
This microservice participates in the MediCore event-driven architecture by subscribing to Kafka topics produced by other microservices:
```mermaid
sequenceDiagram
  participant MedicalProfileService
  participant Kafka
  participant MedicalAnalyticsService
  MedicalProfileService->>Kafka: Publish MedicalProfileCreatedEvent (Protobuf)
  Kafka-->>MedicalAnalyticsService: Consume and Deserialize Event
```
- Spring Boot 3.x
- Apache Kafka (via `spring-kafka`)
- Protobuf v3 (serialized messages)
- Docker (multi-stage build)
- Maven (with `protobuf-maven-plugin`)
- Kafka listener with byte-array deserialization
- Internal Docker networking (`--network internal`) for service discovery
Before running this service, ensure the following are already up:
- Docker-based Kafka broker container (port `9092`)
- Kafka topic `medical-profile` is created
- Other dependent services (`medical-profile-service`, etc.) are running if you want to simulate event flow
- Maven is available for local builds (if not using the Docker image)
Build and run this service via Docker as follows:
```bash
docker build -t medical-analytics-service .

docker run --name medical-analytics-service \
  --network internal \
  -p 8083:8083 \
  -e SPRING_KAFKA_BOOTSTRAP_SERVERS=kafka:9092 \
  medical-analytics-service:latest
```

Note: `--network internal` ensures communication between Kafka and this service.
- Use the HTTP POST endpoint in `medical-profile-service` to create a new profile.
- That service publishes a `MedicalProfileEvent` to the Kafka topic `medical-profile`.
- This service consumes the event asynchronously and logs the payload.
Sample logs:

```
Received Medical Profile Event: [MedicalProfileId=ff002d9c, Name=Alice, Email=alice@example.com]
```
| Path | Purpose |
|---|---|
| `src/main/proto/medical_profile_event.proto` | Defines the Protobuf message schema used by both publisher and consumer |
| `KafkaConsumer.java` | Kafka listener that consumes and deserializes Protobuf events |
| `application.properties` | Kafka consumer configuration (port, deserializer, etc.) |
| `Dockerfile` | Multi-stage Docker setup with Maven + JDK 21 runtime |
| `pom.xml` | Includes `spring-kafka`, `protobuf-java`, and the protobuf compiler plugin |
Configuration is managed via application.properties and environment variables:
| Property | Purpose |
|---|---|
| `spring.kafka.bootstrap-servers` | Injected via environment (`kafka:9092` in Docker) |
| `spring.kafka.consumer.group-id` | `medical-analytics-service` (for consumer grouping) |
| `spring.kafka.consumer.key-deserializer` | String deserializer |
| `spring.kafka.consumer.value-deserializer` | Byte array (Protobuf binary data) |
| `server.port` | `8083` |
You can override any of these via Docker -e flags or IntelliJ’s run config.
- Created a standalone Spring Boot module `medical-analytics-service`
- Added dependencies: `spring-kafka`, `protobuf-java`, test utilities
- Compiled `medical_profile_event.proto` using `protobuf-maven-plugin`
- Implemented Protobuf deserialization logic in the Kafka listener
- Logged received profile events for observability and further processing
- Dockerized the application using multi-stage Maven-JDK setup
- Successfully verified end-to-end Kafka consumption from Dockerized publisher
- Integrated into Docker internal network for seamless communication
Topic Subscribed: `medical-profile`

Message Format: `MedicalProfileEvent` (Protobuf v3)

```proto
message MedicalProfileEvent {
  string medicalProfileId = 1;
  string name = 2;
  string email = 3;
  string event_type = 4;
}
```

This schema is shared with the `medical-profile-service` and version-controlled under `src/main/proto`.
The medical-analytics-service enhances the MediCore platform's responsiveness, scalability, and extensibility by processing events in a non-blocking, real-time manner. Its decoupled design allows future evolution — such as integrating with BI tools or machine learning pipelines — without impacting upstream services.
As of now, our client application interacts directly with individual microservices (e.g., medical-profile-service). While this works for small setups, it quickly becomes unmanageable, insecure, and inflexible as the number of microservices increases. This is where an API Gateway becomes essential.
- API Gateway Tech Stack
- Problems with Direct Client-to-Microservice Communication
- Enter API Gateway
- Real-World Scenario in MediCore
- Configured Routes (as of now)
- API Gateway Docker Integration
- Securing Auth Service Behind API Gateway
- Authentication via Gateway
- Implementation with Spring Cloud Gateway
- Testing the API Gateway
- API Gateway Summary
- Java 21
- Spring Boot 3
- Spring Cloud Gateway (Reactive)
- Maven
- Docker
- Tight Coupling to Service Addresses
  - Clients must know the exact address (host:port) of each microservice.
  - Any change (e.g., port update, service renaming) requires manual updates in all clients.
  - Increases risk of misconfiguration and versioning conflicts.
- Security Exposure
  - Services like `medical-profile-service` must expose ports (e.g., `8081`) publicly.
  - Makes services vulnerable to unauthorized access or attacks from the internet.
- Scalability Challenges
  - Every time we introduce a new microservice (e.g., `medical-analytics-service`), all clients need to update configurations again.
  - Complexity grows quickly with the number of services.
- No Centralized Control
  - No unified layer for logging, authentication, rate limiting, or monitoring.
  - Increases inconsistency and duplicated effort across services.
An API Gateway is a single entry point for all client requests. It acts as a reverse proxy that routes incoming traffic to the appropriate microservice internally.
- Request Routing: Routes incoming HTTP requests to the correct downstream service based on URL patterns or headers.
- Service Abstraction: Clients only talk to the gateway. Internal service details (IP, port, protocols) are hidden.
- Security Layer: Only the gateway is exposed externally. All internal services are shielded from direct traffic.
- Centralized Cross-Cutting Concerns: Enables consistent handling of:
  - Authentication & Authorization
  - Logging
  - Request throttling / rate limiting
  - Caching
  - Monitoring / metrics
| Feature | Without API Gateway | With API Gateway |
|---|---|---|
| Service discovery | Manual address config | Dynamic / abstracted |
| Scalability | Client updates for each service | Centralized routing |
| Security | Each service exposed | Only gateway exposed |
| Cross-cutting logic | Duplicated in every service | Centralized once |
| Auth & Authorization | Each service handles it | Gateway + auth service handles it |
| Port exposure | Each service opens a port | Only gateway does |
```mermaid
graph TD
  Client --> MedicalProfile["Medical Profile Service: direct REST call to port 8081"]
  Client --> Analytics["Analytics Service: must know port 8083"]
  Client --> FutureServices[Future Services]
```
- Client must manage multiple base URLs
- If a port or host changes → client config breaks
- Security and maintainability issues increase
```mermaid
graph TD
  Client --> APIGateway
  APIGateway --> MedicalProfile[Medical Profile Service]
  APIGateway --> Analytics[Analytics Service]
  APIGateway --> FutureServices[Future Services]
```
- Client only needs to know: `http://api.medicore.com` (or similar)
- API Gateway handles all internal routing logic
- We gain security, flexibility, and future-proofing
| Route | Proxies To |
|---|---|
| `/api/medical-profiles/**` | `medical-profile-service:/medical-profiles/**` |
| `/api-docs/medical-profiles` | `medical-profile-service:/v3/api-docs` |
| `/auth/**` | `auth-service:/` |
| `/api-docs/auth` | `auth-service:/v3/api-docs` |
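Expressed as Spring Cloud Gateway configuration, routes like those above can be sketched roughly as follows (an illustrative `application.yml`; the service ports and filter choices here are assumptions, not the repo's actual file):

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: medical-profile-service-route
          uri: http://medical-profile-service:8081
          predicates:
            - Path=/api/medical-profiles/**
          filters:
            - StripPrefix=1   # /api/medical-profiles/** -> /medical-profiles/**
        - id: auth-service-route
          uri: http://auth-service:8085   # port is an assumption
          predicates:
            - Path=/auth/**
        - id: api-docs-medical-profiles
          uri: http://medical-profile-service:8081
          predicates:
            - Path=/api-docs/medical-profiles
          filters:
            - RewritePath=/api-docs/medical-profiles, /v3/api-docs
```

Because the `uri` values use container names, this configuration only works when the gateway shares the `internal` Docker network with the target services.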
The api-gateway is fully Dockerized and runs inside the shared internal Docker network of the MediCore system. This enables seamless service-to-service communication using container names as hostnames.
- Port `8084` is exposed externally for the gateway.
- Other internal services (e.g., `medical-profile-service`) are no longer exposed directly to the outside world.
- Docker `--network=internal` ensures proper DNS resolution for service discovery.
To strengthen the system's security posture and simplify external communication, the auth-service is now fully routed through the API Gateway. This means:
- All authentication operations must go through the gateway (`/auth/login`, `/auth/validate`)
- The `auth-service` is no longer exposed to the internet; it is only available via internal Docker networking
- Ensures all traffic is centrally logged, validated, and controlled
```mermaid
graph TD
    subgraph External Client
        CLIENT[Client App / REST Client]
    end
    subgraph Gateway Layer
        GATEWAY[API Gateway Exposed on :8084]
    end
    subgraph Internal Private Network
        AUTH[Auth Service No exposed port]
        PROFILE[Medical Profile Service No exposed port]
    end

    %% External Requests
    CLIENT -->|POST /auth/login| GATEWAY
    CLIENT -->|GET /auth/validate| GATEWAY
    CLIENT -->|GET /api/medical-profiles| GATEWAY

    %% Gateway Routing
    GATEWAY -->|Forward to /login| AUTH
    GATEWAY -->|Forward to /validate| AUTH
    GATEWAY -->|Forward to /medical-profiles| PROFILE
```
| External Request | Internally Routed To |
|---|---|
| `POST /auth/login` | `auth-service:/login` |
| `GET /auth/validate` | `auth-service:/validate` |

- `POST /auth/login` through the gateway successfully returns a signed JWT.
- `GET /auth/validate` through the gateway validates the JWT and returns `200 OK` or `401 Unauthorized`.
- The `auth-service` container no longer exposes any ports externally. All calls must pass through the gateway.
| Benefit | Description |
|---|---|
| Centralized security | Auth service is shielded from public traffic |
| Cleaner client interaction | Clients only use a single URL (gateway), reducing complexity |
| Easier scaling | New services can be added and routed without exposing ports or changing clients |
| Better production hygiene | Matches how large-scale microservices work in real-world containerized deployments |
To secure internal microservices (like the Medical Profile Service), we integrated a robust JWT validation mechanism via a custom global filter in the API Gateway.
- Ensure only authenticated clients can access protected services
- Centralize token validation logic to keep downstream services clean
- Prevent direct client access to `auth-service` or other internal endpoints
All protected routes (like `/api/medical-profiles/**`) are guarded by a custom filter (`JwtValidation`) that performs the following:
- Intercepts each incoming request to a protected route.
- Extracts the `Authorization: Bearer <token>` header.
- Calls the `/validate` endpoint of the `auth-service` using a non-blocking WebClient.
- Proceeds only if the token is valid; otherwise responds with `401 Unauthorized`.
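The header-extraction step above can be sketched in plain Java. This is a minimal illustration, not the project's actual filter code; the class and method names (`BearerTokens`, `extractBearerToken`) are made up for the example.

```java
import java.util.Optional;

// Minimal sketch of the token-extraction step performed by the gateway's
// global filter. Names here are illustrative, not the project's actual code.
public class BearerTokens {

    // Returns the raw JWT if the header is a well-formed "Bearer <token>" value.
    public static Optional<String> extractBearerToken(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            return Optional.empty();
        }
        String token = authorizationHeader.substring("Bearer ".length()).trim();
        return token.isEmpty() ? Optional.empty() : Optional.of(token);
    }

    public static void main(String[] args) {
        System.out.println(extractBearerToken("Bearer abc.def.ghi")); // Optional[abc.def.ghi]
        System.out.println(extractBearerToken("Basic xyz"));          // Optional.empty
    }
}
```

In the real filter this result decides whether to call `/validate` on the `auth-service` or to short-circuit with `401 Unauthorized`.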
This filter is registered declaratively in the application.yml of the gateway under route configuration. Unauthorized access is gracefully handled via a global @RestControllerAdvice.
The gateway will delegate authentication/authorization to a dedicated Auth Service.
```
Client → Gateway (GET /api/medical-profiles with Bearer token)
  |
  └─> [Global Filter]
        ├─ Is this path protected? (yes)
        ├─ Extract token
        ├─ Call AuthService:/validate with token
        ├─ If valid → route to medical-profile-service
        └─ If invalid → return 401 Unauthorized
```

```mermaid
graph TD
    A[Client Request: GET /api/medical-profiles with Authorization: Bearer token] --> B[API Gateway]
    B --> C[Global Filter Intercepts]
    C --> D{Is path protected?}
    D -- Yes --> E[Extract Bearer token]
    E --> F[Call AuthService: /validate]
    F --> G{Is token valid?}
    G -- Yes --> H[Route to medical-profile-service]
    G -- No --> I[Return 401 Unauthorized]
```
```mermaid
sequenceDiagram
    participant Client
    participant Gateway
    participant Filter
    participant AuthService
    participant MedicalProfileService

    Client->>Gateway: GET /api/medical-profiles with Authorization: Bearer <token>
    Gateway->>Filter: Global Filter Triggered
    Filter->>Filter: Is path protected? → Yes
    Filter->>Filter: Extract Bearer Token
    Filter->>AuthService: GET /validate with token
    alt Token is valid
        Filter->>Gateway: Allow request to proceed
        Gateway->>MedicalProfileService: Forward Request
        MedicalProfileService-->>Gateway: 200 OK
        Gateway-->>Client: 200 OK
    else Token is invalid
        Filter-->>Gateway: Return 401 Unauthorized
        Gateway-->>Client: 401 Unauthorized
    end
```
This ensures that all services behind the gateway are protected without needing to implement auth logic in each microservice.
| Without Global Filter | With Global JWT Filter in Gateway |
|---|---|
| Each service must validate JWT itself | Gateway centralizes token validation logic |
| Security logic duplicated in each service | Security logic is maintained in one place |
| Exposes each service to possible misuse | Gateway is the single enforcement layer |
This is the equivalent of having a firewall with identity enforcement in real production architecture.
```mermaid
flowchart TD
    subgraph External [External Clients]
        CLIENT[Client App / REST Client]
    end
    subgraph Gateway [API Gateway: port 8084]
        GW
    end
    subgraph InternalNetwork [Docker Internal Network]
        AUTH[Auth Service: internal only]
        PROFILE[Medical Profile Service]
    end

    CLIENT -->|POST /auth/login| GW
    CLIENT -->|GET /auth/validate| GW
    CLIENT -->|GET /api/medical-profiles| GW
    GW -->|Forward /auth/login| AUTH
    GW -->|Forward /auth/validate| AUTH
    GW -->|Call /validate → then forward /medical-profiles| PROFILE:::highlight

    classDef highlight fill:#D5FFFF;
```
| Route | Destination | Description | Security |
|---|---|---|---|
| `/auth/login` | `auth-service:/login` | Issues JWT token | Public |
| `/auth/validate` | `auth-service:/validate` | Validates token | Public |
| `/api/medical-profiles/**` | `medical-profile-service:/medical-profiles` | Protected medical profile endpoints | JWT required |
The API Gateway and Auth Service are both Dockerized and run in the same internal Docker network. This allows them to communicate securely without exposing the Auth Service to the public internet.
- Auth Service: Runs internally and is only accessible from inside the Docker network.
- Gateway → Auth Service: Uses the container DNS name (`auth-service`) for internal validation.
- Environment Variable: `AUTH_SERVICE_URL` is passed to the gateway at runtime so the filter can locate the Auth Service.
Docker networking ensures clean inter-service communication and prevents unauthorized traffic to internal services.
We implemented the API gateway using Spring Cloud Gateway, a powerful, lightweight routing library built on top of Spring WebFlux.
Benefits of Spring Cloud Gateway:
- Easy configuration via YAML or Java DSL
- Seamless Spring Boot integration
- Reactive, non-blocking architecture (WebFlux)
- Flexible route predicates and filters (supports pre/post request processing)
- Works well with Spring Security and OAuth2
- Out-of-the-box support for:
- StripPrefix, RewritePath, Circuit Breakers
- Rate limiting, request logging
- Path-based routing and header manipulation
Routing rules are configured declaratively via application.yml.
Once all services are running in the shared Docker network, you can test the API Gateway using any REST client (e.g., Postman, IntelliJ HTTP requests, curl).
Make sure these service containers are running:

- `api-gateway` (exposes `8084`)
- `medical-profile-service` (internal only, on the Docker network)
- `medical-profile-service-db`
- `auth-service` (internal only, on the Docker network)
- `auth-service-db`
`GET http://localhost:8084/api/medical-profiles`

Under the hood:

- The API Gateway receives the request on `/api/medical-profiles`
- It strips the `/api` prefix
- It internally routes to `http://medical-profile-service:8081/medical-profiles`
This confirms that API Gateway is successfully forwarding to internal documentation endpoints too.
`POST http://localhost:8084/auth/login`

`GET http://localhost:8084/auth/validate`

Test: Access a protected route. The global JWT filter in the gateway validates JWTs for all protected downstream routes.
This confirms that the JWT validation filter is working correctly, allowing access to protected routes only with valid tokens.
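The full flow can be exercised with an IntelliJ-style `.http` request sequence like the one below. The login payload field names (`email`, `password`) and the seeded credentials are taken from this README's auth-service section; `<JWT_TOKEN>` is a placeholder for the token returned by the login call.

```http
### 1. Login to obtain a JWT
POST http://localhost:8084/auth/login
Content-Type: application/json

{
  "email": "testpriti@test.com",
  "password": "password"
}

### 2. Call a protected route with the returned token
GET http://localhost:8084/api/medical-profiles
Authorization: Bearer <JWT_TOKEN>
```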
| Without Gateway | With Gateway |
|---|---|
| Direct service-to-client communication | Centralized entry point |
| Exposed ports for each service | One secure, exposed port |
| Manual updates for service discovery | Dynamic routing |
| Duplicated security logic | Unified authentication layer |
| Poor scalability | Seamless service growth |
The API Gateway becomes the front door of our system — enabling clean separation, centralized control, and production-ready architecture.
The auth-service is a core microservice in the MediCore system responsible for handling user authentication and authorization across all downstream services. It issues JWT tokens after validating user credentials, enabling secure, stateless access to protected endpoints through the API Gateway. This service acts as the security backbone of the platform, ensuring only authenticated clients can access protected resources.
- Securing Microservices with JWT Authentication
- Auth Service Tech Stack
- Auth Service Features Implemented
- Token Validation Endpoint for Gateway Integration
- Auth Service Database Setup
- Auth Service Docker Integration
- Auth Service Security Configuration
- Routing Auth Service Through API Gateway
- Exposing Auth Service Swagger API Docs via Gateway
- Auth Service Conclusion
With the API Gateway now acting as the central entry point to our microservices architecture, the next critical step is to integrate a robust authentication and authorization mechanism. This will ensure that our services are not publicly accessible to unauthorized users and follow secure, token-based access control.
Until now, services like medical-profile-service were openly accessible from the internet, which poses significant security risks in production environments. Any client could make unauthenticated requests directly to sensitive endpoints.
To mitigate this, we're introducing a dedicated Authentication Service that will manage user identities and issue JSON Web Tokens (JWTs). This approach enables stateless, secure communication across our distributed system.
As MediCore evolves into a modular, scalable ecosystem of services, ensuring secure access control becomes critical. Initially, services like medical-profile-service were publicly accessible — a major security risk. The auth-service addresses this by:
- Validating user credentials (email & password)
- Issuing signed JWT tokens on successful login
- Enabling downstream services to trust requests routed through the gateway
Once a client receives a valid JWT, all subsequent requests to protected endpoints (e.g., /api/medical-profiles) must include this token in the Authorization header:
`Authorization: Bearer <JWT_TOKEN>`

```mermaid
sequenceDiagram
    participant Client
    participant APIGateway
    participant AuthService
    participant MedicalProfileService

    Client->>AuthService: POST /auth/login (username, password)
    AuthService-->>Client: 200 OK + JWT Token
    Client->>APIGateway: GET /api/medical-profiles + Authorization: Bearer JWT
    APIGateway->>AuthService: Validate JWT
    AuthService-->>APIGateway: Token valid
    APIGateway->>MedicalProfileService: Forward Request
    MedicalProfileService-->>APIGateway: 200 OK
    APIGateway-->>Client: 200 OK + Data
```
If the JWT is invalid or expired:
```mermaid
sequenceDiagram
    Client->>APIGateway: GET /api/medical-profiles + Invalid Token
    APIGateway->>AuthService: Validate JWT
    AuthService-->>APIGateway: Token invalid
    APIGateway-->>Client: 401 Unauthorized
```
This architecture:
- Secures All Downstream Services — No direct access to any microservice without a valid token
- Centralizes Authentication Logic — Gateway and Auth service control all access points
- Scales Effortlessly — New services can be protected by updating gateway rules only
- Stateless Security — No session management needed, thanks to JWT
- Java 21
- Spring Boot 3, Maven
- Spring Security (Stateless mode)
- Spring Data JPA (Hibernate)
- PostgreSQL (Docker container)
- jjwt (Java JWT library)
- Dockerized Deployment with multistage build
- SpringDoc OpenAPI UI for testing endpoints
- BCrypt for password hashing
- Verifies user credentials stored in a PostgreSQL database
- Uses `BCryptPasswordEncoder` for password hashing and validation
- Provides a clean, layered architecture: `DTO → Controller → Service → Repository`
- Generates signed JWTs with embedded claims (`email`, `role`)
- Uses a secret key stored in an environment variable (`JWT_SECRET`)
- Tokens are valid for 10 hours and used for stateless authorization
- `LoginRequestDTO`: validates the login payload using annotations
- `LoginResponseDTO`: encapsulates the issued JWT token
- `POST /login` authenticates users and returns a token
- Returns `401 Unauthorized` if credentials are invalid
- Uses `Optional<String>` chaining for clean, functional logic
- HTTP requests tested locally via IntelliJ and Postman
- Introduced a dedicated `GET /validate` endpoint to verify JWT tokens received from client requests.
- Designed specifically for API Gateway integration, allowing the gateway to validate tokens before routing to downstream services.
- Follows standard authorization practice by accepting: `Authorization: Bearer <JWT_TOKEN>`
- Returns:
  - `200 OK`: the token is valid and signed with the correct secret.
  - `401 Unauthorized`: the token is missing, malformed, expired, or has an invalid signature.
- Built with a clean separation of concerns:
  - `AuthController` handles the REST request.
  - `AuthServiceImpl` delegates token checks to a reusable utility class.
  - `JwtUtil` performs the actual token parsing and signature verification using the `jjwt` library.
- Follows defensive programming practices, using structured exception handling for robust validation.
- Stateless and efficient: no session tracking or in-memory state is required.
- Enables secure, token-based access control across all services by centralizing JWT validation in a single trusted source.
This feature enables:
- Better performance and maintainability by avoiding token parsing in every downstream service
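For intuition about what the validation code works with: a JWT is just three Base64URL-encoded segments (`header.payload.signature`). The plain-Java sketch below decodes the claims segment without any library. It is purely illustrative and independent of the service's actual `jjwt`-based code; the sample claim values (e.g. the `ADMIN` role) are made up, and real validation must also verify the signature against the secret.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative only: shows the header.payload.signature structure that the
// auth-service's real validation (via jjwt) parses and signature-checks.
public class JwtPeek {

    // Decodes the payload (claims) segment of a JWT without verifying it.
    // Real validation MUST also verify the signature against the secret key.
    public static String decodePayload(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("Not a JWT: expected 3 segments");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A toy token built by hand (the signature segment here is fake).
        String header = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = Base64.getUrlEncoder().withoutPadding()
                .encodeToString("{\"email\":\"testpriti@test.com\",\"role\":\"ADMIN\"}"
                        .getBytes(StandardCharsets.UTF_8));
        String token = header + "." + payload + ".fakesig";
        System.out.println(decodePayload(token));
        // prints {"email":"testpriti@test.com","role":"ADMIN"}
    }
}
```

Because anyone can decode the payload like this, the signature check performed by `JwtUtil` is what actually makes the token trustworthy.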
- PostgreSQL container: `auth-service-db`
- Port: `5001:5432` (local development)
- Volume mounted for persistence
- Admin user seeded via `data.sql`:
  - Email: `testpriti@test.com`
  - Password: `password` (BCrypt-hashed)
- Exposed port: `8085` (for development only)
- Docker image built via a multistage Dockerfile
- Environment variables passed via the Docker run configuration:
  - `SPRING_DATASOURCE_URL`
  - `SPRING_DATASOURCE_USERNAME`
  - `SPRING_DATASOURCE_PASSWORD`
  - `SPRING_JPA_HIBERNATE_DDL_AUTO`
  - `SPRING_SQL_INIT_MODE`
  - `JWT_SECRET`
- Connected to the `internal` Docker network for inter-service communication
- Stateless, CSRF disabled (API Gateway handles external validation)
- All requests are permitted at auth-service level (trusted traffic from gateway only)
- Spring Security filter chain customized
- Separation of concerns: AuthService validates tokens, downstream services remain clean
To enforce strict traffic flow and improve security posture, the auth-service is fully integrated behind the API Gateway. This ensures that no external traffic can communicate directly with the auth-service. All communication must flow through the gateway.
The API Gateway (api-gateway) has been configured to forward any request matching the /auth/** path to the internal address of the auth-service. The URI is rewritten to match how the service expects it.
This route ensures the following:
- Requests like `/auth/login` and `/auth/validate` are forwarded to `auth-service:/login` and `/validate` respectively.
- External consumers only interact with `localhost:8084` (the gateway).
- Internals remain protected via Docker networking; the `auth-service` is no longer exposed to the outside.
Updated `.http` files to verify functionality through the gateway:

`login.http`

```http
### Login request to retrieve a token
POST http://localhost:8084/auth/login
```

`validate.http`

```http
### GET request to validate a token
GET http://localhost:8084/auth/validate
```

- Both `/auth/login` and `/auth/validate` were successfully tested through the gateway.
/auth/loginand/auth/validatewere successfully tested through the gateway. - Auth service container no longer exposes any ports.
- API Gateway is now the sole entry point for authentication and token validation.
- Improves security by blocking direct access to core services
- Centralizes routing and control for all client interactions
- Simulates real-world cloud architecture where services live on private internal networks
- Gateway becomes the enforcement layer for all access policies
The auth-service Swagger/OpenAPI spec is exposed through the API Gateway, allowing consumers and tools (like Swagger UI or codegen clients) to inspect or auto-generate integrations.
```yaml
- id: api-docs-auth-route
  uri: http://auth-service:8085
  predicates:
    - Path=/api-docs/auth
  filters:
    - RewritePath=/api-docs/auth,/v3/api-docs
```

Go to http://localhost:8084/api-docs/auth to see the OpenAPI JSON output.
This section demonstrates:
- Real-world implementation of microservice authentication patterns
- Proficiency in Spring Boot, JWT, and API Gateway security
- Dockerized, scalable service design that follows DevOps-ready practices
- Preparedness for deployment with centralized auth and routing
- Stateless token issuance and centralized validation via gateway
The integration-tests module provides automated validation for key workflows in the MediCore microservices system. It replaces manual REST client testing by using REST Assured and JUnit 5 to simulate realistic, end-to-end client interactions. This ensures core functionality—authentication, token validation, and protected service access—works reliably across the API Gateway, auth-service, and medical-profile-service.
- Why Integration Testing?
- Integration Tests Tech Stack
- Test Strategy
- Setup & Configuration
- Implemented Test Cases
- How to Run the Tests
- Integration Tests Output
- Integration Tests Summary
Until this point, we have been manually verifying system functionality by issuing requests using IntelliJ's REST client and reading responses or logs. This approach is fine for early development, but it becomes inefficient and error-prone as the system grows. Each time we want to validate a flow (e.g., login + access protected resource), we:
- Send a login request manually to get a token
- Use the token in a second request
- Hit the medical-profile endpoint with authorization header
This is not scalable for larger systems.
Solution: We use automated integration testing to:
- Simulate these user flows programmatically
- Validate each piece works as expected
- Provide fast feedback before changes go to production
Integration testing is a crucial part of any real-world CI/CD pipeline and expected in enterprise-grade systems.
- Java 21
- Maven
- REST Assured 5.3.0 — for fluent, expressive HTTP request testing
- JUnit Jupiter 5.11.4 — for test structure and assertions
These are not unit tests—they cover full integration across services:
- API Gateway (`localhost:8084`)
- Routing to downstream services (`auth-service`, `medical-profile-service`)
- Requesting a JWT from `auth-service`
- Using it to call protected routes
- Ensuring gateway applies proper JWT validation filter
- HTTP response codes (200, 401)
- Token presence and structure
- Protected service data is accessible with valid token
Ensure all services are running in Docker:

- `auth-service` (internal)
- `medical-profile-service` (internal)
- `api-gateway` (exposed on port `8084`)
- Module name: `integration-tests`
- Java 21, Maven project (no parent)
- Uses JUnit + REST Assured for testing
- Sends login request with valid credentials
- Expects 200 OK
- Verifies that token is returned
- Sends login with wrong credentials
- Verifies 401 Unauthorized
- Logs in with valid credentials to receive JWT
- Uses JWT to request medical profiles
- Verifies successful response and valid data field
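The cases above can be sketched with REST Assured roughly as follows. This is a hedged sketch, not the module's actual source: the class name, the login payload field names (`email`, `password`), and the response field `token` are assumptions, and the services must already be running behind the gateway on `:8084`.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.notNullValue;

import io.restassured.RestAssured;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

// Sketch of the end-to-end flow: login via the gateway, then use the JWT
// to call a protected route. Field names are assumptions, not project code.
class AuthFlowIT {

    @BeforeAll
    static void setup() {
        RestAssured.baseURI = "http://localhost:8084";
    }

    @Test
    void loginThenAccessProtectedRoute() {
        String token = given()
                .contentType("application/json")
                .body("{\"email\":\"testpriti@test.com\",\"password\":\"password\"}")
            .when()
                .post("/auth/login")
            .then()
                .statusCode(200)
                .body("token", notNullValue())
                .extract().path("token");

        given()
                .header("Authorization", "Bearer " + token)
            .when()
                .get("/api/medical-profiles")
            .then()
                .statusCode(200);
    }
}
```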
Run via the Maven CLI:

```shell
mvn test
```

Or run them from IntelliJ using the green run icons on the test methods.
All tests should pass if the system is configured correctly, as shown in the following image.
This module transforms fragile, manual testing into repeatable, automated flows. It validates:
- Authentication via `auth-service`
- Secure access to `medical-profile-service` via `api-gateway`
- Proper JWT validation on protected endpoints
Real-world projects depend on automated testing to prevent regressions and support agile delivery.
This module defines the complete cloud infrastructure for the MediCore Healthcare Microservices Platform, using AWS CDK (Java). It supports both production deployments on AWS and local emulation via LocalStack, enabling realistic enterprise testing and CI/CD integration.
The infrastructure provisions:
- VPC & private subnets
- ECS Fargate for containerized microservices
- ALB (Application Load Balancer) for routing
- RDS (PostgreSQL) for persistent storage
- MSK (Kafka) for event-driven communication
- CloudWatch for centralized logging and health monitoring
- Technology Stack
- Core Architecture
- Deployed Microservices
- Infrastructure Architecture Diagram
- Configuration Details
- Service Dependency Overview
- Local Deployment Instructions
- CI/CD Integration
- Security Considerations
- Result
- Testing the Infrastructure
- Infrastructure as Code: AWS CDK (Java)
- Cloud Runtime: ECS Fargate (serverless compute)
- Network Layer: VPC with subnets
- Messaging: MSK (Kafka)
- Databases: Amazon RDS (PostgreSQL)
- Traffic Management: ALB (Application Load Balancer)
- Local Emulation: LocalStack
- Monitoring: CloudWatch Logs, Health Checks
- Containerization: Docker
- VPC: spanning two availability zones
- Subnets:
  - Public: for the ALB
  - Private: for ECS services, RDS, and MSK
- Security:
  - Services isolated in private subnets
  - No public access to RDS or Kafka brokers
- PostgreSQL (v17.2):
  - `auth-service-db`
  - `medical-profile-service-db`
- Configuration:
  - Instance type: `t3.micro`
  - 20 GB storage
  - Admin credentials managed via AWS Secrets Manager
- AWS MSK:
  - Kafka version: 2.8.0
  - Cluster name: `kafka-cluster`
  - 2 broker nodes, AZ-distributed
- Cluster: `MedicalProfileManagementCluster`
- Launch type: Fargate (serverless, containerized)
- Namespace: `medical-profile-management.local`
- Configured using `ApplicationLoadBalancedFargateService`
- Routes all external traffic
- Uses environment variables:
  - `SPRING_PROFILES_ACTIVE=prod`
  - `AUTH_SERVICE_URL` (used for JWT validation)
| Service | Ports | Dependencies | Description |
|---|---|---|---|
| `auth-service` | 8085 | PostgreSQL | JWT-based authentication |
| `medical-profile-service` | 8081 | RDS, Kafka, gRPC to billing | Profile management and event emission |
| `medical-billing-service` | 8082, 9001 | - | Billing logic with gRPC support |
| `medical-analytics-service` | 8083 | Kafka | Consumes profile events for analytics |
| `api-gateway` | 8084 (ALB) | Routes to internal services | Central API entry point |
```mermaid
flowchart TB
    subgraph AWSCloud[AWS Cloud]
        subgraph MediCoreVPC[MediCore VPC]
            subgraph PublicSubnet[Public Subnet]
                ALB[Application Load Balancer]
            end
            subgraph PrivateSubnet[Private Subnet]
                subgraph ECSCluster[ECS Cluster]
                    APIGW[API Gateway]
                    AUTH[Authentication Service]
                    PROFILE[Medical Profile Service]
                    BILLING[Medical Billing Service]
                    ANALYTICS[Medical Analytics Service]
                end
                subgraph AmazonRDS[Amazon RDS]
                    AUTHDB[(Auth DB)]
                    PROFILEDB[(Profile DB)]
                end
                subgraph AmazonMSK[Amazon MSK]
                    KAFKA[(Kafka Cluster)]
                end
            end
        end
        CW[Monitoring]
        SM[Secrets Manager]
    end

    Internet --> ALB
    ALB --> APIGW
    APIGW --> AUTH
    APIGW --> PROFILE
    PROFILE --> |gRPC|BILLING
    PROFILE --> |Kafka Event|KAFKA
    KAFKA --> |Kafka Consume|ANALYTICS
    AUTH --> AUTHDB
    PROFILE --> PROFILEDB
    CW -.-> ECSCluster
    CW -.-> AmazonRDS
    CW -.-> AmazonMSK
    SM --> AUTHDB
    SM --> PROFILEDB

    classDef alb fill:#f3f4f6,stroke:#9ca3af;
    classDef ecs fill:#e0f2fe,stroke:#0ea5e9;
    classDef db fill:#fef3c7,stroke:#f59e0b;
    classDef kafka fill:#f3e8ff,stroke:#8b5cf6;
    classDef aws fill:#f0fdf4,stroke:#10b981;
    class ALB alb;
    class APIGW,AUTH,PROFILE,BILLING,ANALYTICS ecs;
    class AUTHDB,PROFILEDB db;
    class KAFKA kafka;
    class AWSCloud aws;
```
```mermaid
flowchart TB
    subgraph Local[Local Development]
        direction TB
        subgraph IDE[Developer Machine]
            direction LR
            Code[Application Code] -->|1 Deploys to| LocalStack[LocalStack AWS Emulation]
            Tests[Test Suite] -->|2 Invokes| LocalStack
            CLI[AWS CLI] -->|3 Configures| LocalStack
        end
        subgraph DockerEnv[Docker Environment]
            APIGW[API Gateway Container]
            AUTH[Authentication Service Container]
            PROFILE[Medical Profile Service Container]
            BILLING[Medical Billing Service Container]
            ANALYTICS[Medical Analytics Service Container]
            subgraph DB[Database Services]
                POSTGRES[PostgreSQL Container]
            end
            subgraph MSG[Message Services]
                KAFKA[Kafka Container]
            end
        end
        LocalStack -->|4 Manages Containers| DockerEnv

        %% Service Connections
        APIGW --> AUTH
        APIGW --> PROFILE
        PROFILE --> BILLING
        PROFILE --> KAFKA
        KAFKA --> ANALYTICS
        AUTH --> POSTGRES
        PROFILE --> POSTGRES
    end

    classDef dev fill:#e3f2fd,stroke:#2196f3;
    classDef container fill:#bbdefb,stroke:#1e88e5;
    classDef db fill:#fff8e1,stroke:#ffc107;
    classDef msg fill:#f3e5f5,stroke:#9c27b0;
    classDef tool fill:#e8f5e9,stroke:#66bb6a;
    class Local,IDE dev;
    class APIGW,AUTH,PROFILE,BILLING,ANALYTICS container;
    class DB,POSTGRES db;
    class MSG,KAFKA msg;
    class Code,Tests,CLI tool;
```
```
jdbc:postgresql://<endpoint>:5432/<service>-db
Username: admin_user
Password: <retrieved from Secrets Manager>
```

```properties
bootstrap.servers=localhost.localstack.cloud:4510,4511,4512
group.id=medical-analytics-group
auto.offset.reset=earliest
```

- TCP-based checks
- 30-second interval
- Fails after 3 consecutive failures
```
api-gateway (8084)
└─ auth-service (8085)
   └─ RDS (auth-service-db)

medical-profile-service (8081)
├─ RDS (medical-profile-service-db)
├─ medical-billing-service (8082/9001)
└─ Kafka

medical-analytics-service (8083)
└─ Kafka
```
```shell
docker build -t auth-service:latest ./auth-service
docker build -t medical-profile-service:latest ./medical-profile-service
docker build -t billing-service:latest ./medical-billing-service
docker build -t analytics-service:latest ./medical-analytics-service
docker build -t api-gateway:latest ./api-gateway
```

```shell
cd infrastructure
mvn clean install
```

This will output the CloudFormation template at `cdk.out/localstack.template.json`.
Create and run the following script:
```shell
#!/bin/bash
set -e

ENDPOINT="http://localhost:4566"

aws --endpoint-url=$ENDPOINT cloudformation deploy \
  --stack-name medicore \
  --template-file "./cdk.out/localstack.template.json"

aws --endpoint-url=$ENDPOINT elbv2 describe-load-balancers \
  --query "LoadBalancers[0].DNSName" --output text
```
--query "LoadBalancers[0].DNSName" --output text- CloudFormation templates automatically generated via
cdk synth - Bootstrapless synthesizer for local development compatibility
- Modular stacks for flexible pipeline integration
- Supports GitHub Actions, Jenkins, or GitLab CI/CD
- Databases run in private subnets without public access
- Secrets for database access are managed in AWS Secrets Manager
- JWT secrets are passed via environment variables
- All inter-service communication remains within the VPC
After successful deployment, the DNS name of the Application Load Balancer is output by the deploy script. All traffic to MediCore flows through this ALB, into the API Gateway, and finally to individual backend services.
Ran `./localstack-deploy.sh` and verified the deployment by accessing the ALB DNS name. The services are reachable, and health checks pass successfully.
















































