Personal finance tracker

Project Overview

Project Name: Personal Finance Tracker

Deployment URL: https://finance.ltrk.cz/

Group Members:

  • 289229, Lukáš Trkan, lukastrkan
  • 289258, Dejan Ribarovski, ribardej (derib2613)

Brief Description: Our application allows users to easily track their cash flow through multiple bank accounts. Users can label their transactions with custom categories that can be later used for filtering and visualization. New transactions are automatically fetched in the background.
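As a small illustration of the category-based filtering and visualization described above, aggregating amounts per category is essentially a grouped sum. The field names below are illustrative, not the app's actual schema:

```python
from collections import defaultdict

def totals_by_category(transactions):
    """Sum transaction amounts per user-assigned category label."""
    totals = defaultdict(float)
    for tx in transactions:
        totals[tx.get("category", "uncategorized")] += tx["amount"]
    return dict(totals)

transactions = [
    {"amount": -120.0, "category": "groceries"},
    {"amount": -45.5, "category": "transport"},
    {"amount": -80.0, "category": "groceries"},
    {"amount": 2500.0},  # no category assigned yet
]
print(totals_by_category(transactions))
# → {'groceries': -200.0, 'transport': -45.5, 'uncategorized': 2500.0}
```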

Architecture Overview

Our system is a fullstack web application composed of a React frontend, a FastAPI backend, a MariaDB database with MaxScale, and background workers powered by Celery with RabbitMQ. The backend exposes REST endpoints for authentication (email/password and OAuth), users, categories, transactions, exchange rates, and bank APIs. The Kubernetes infrastructure is managed via Terraform/OpenTofu, and the application is packaged as a Helm chart. All of this is deployed on a private TalosOS cluster running on Proxmox VE, with CI/CD and public access over Cloudflare tunnels. Static frontend files are served via Cloudflare Pages. Other services deployed in the cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.

High-Level Architecture

flowchart TB
    n3(("User")) <--> client["Frontend"]
    proc_queue["Message Queue"] --> proc_queue_worker["Worker Service"]
    proc_queue_worker -- SMTP --> ext_mail[("Email Service")]
    proc_queue_worker <-- HTTP request/response --> ext_bank[("Bank API")]
    proc_queue_worker <--> db[("Database")]
    proc_cron["Cron"] <-- HTTP request/response --> svc["Backend API"]
    svc --> proc_queue
    n2["Cloudflare tunnel"] <-- HTTP request/response --> svc
    svc <--> db
    svc <-- HTTP request/response --> api[("UniRate API")]
    client <-- HTTP request/response --> n2

The workflow is as follows:

  • The client connects to the frontend. After login, the frontend automatically fetches the stored transactions from the database via the backend API, and currency rates from the UniRate API.
  • When the client opts in to fetching new transactions from a bank API, a cron job triggers periodic fetching through a background worker.
  • After a successful fetch, the new transactions are stored in the database and displayed to the client.
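The fetch-and-store step can be sketched roughly as follows. In the real app this logic runs inside a Celery task; the bank client and storage callbacks here are hypothetical stand-ins:

```python
def sync_transactions(fetch_from_bank, stored_ids, store):
    """Fetch transactions from a bank API and persist only the new ones.

    fetch_from_bank: callable returning a list of transaction dicts
    stored_ids: set of transaction IDs already in the database
    store: callable persisting one transaction
    """
    new_count = 0
    for tx in fetch_from_bank():
        if tx["id"] in stored_ids:
            continue  # already persisted by a previous sync
        store(tx)
        stored_ids.add(tx["id"])
        new_count += 1
    return new_count

# Simulated run with an in-memory "database"
db = []
known = {"t1"}
fetched = [{"id": "t1", "amount": 10}, {"id": "t2", "amount": -5}]
added = sync_transactions(lambda: fetched, known, db.append)
print(added, db)  # → 1 [{'id': 't2', 'amount': -5}]
```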

Features

  • The stored transactions are encrypted in the DB for security reasons.
  • For every pull request, the full application is deployed to a separate URL and the tests are run by GitHub CI/CD.
  • On every push to main, the production app is automatically updated.
  • The UI is responsive on mobile devices.
  • Slow operations (emails, transaction fetching) are handled in the background by Celery workers.
  • The app is monitored via a Prometheus metrics endpoint, and metrics are shown in a Grafana dashboard.
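The encryption-at-rest feature follows a common pattern: sensitive fields pass through a symmetric cipher keyed by DB_ENCRYPTION_KEY before they reach the database. A minimal sketch of that pattern, using reversible base64 purely as a stand-in for a real cipher (in production this would be something like Fernet; nothing here is the project's actual code):

```python
import base64

def encrypt_field(plaintext: str, key: str) -> str:
    # Placeholder transform standing in for a real symmetric cipher
    # keyed by DB_ENCRYPTION_KEY (e.g. Fernet from `cryptography`).
    return base64.b64encode(f"{key[:2]}:{plaintext}".encode()).decode()

def decrypt_field(token: str, key: str) -> str:
    prefix, _, plaintext = base64.b64decode(token).decode().partition(":")
    if prefix != key[:2]:
        raise ValueError("wrong decryption key")
    return plaintext

key = "supersecret"
token = encrypt_field("Grocery store, -120 CZK", key)
assert token != "Grocery store, -120 CZK"          # ciphertext is what the DB stores
assert decrypt_field(token, key) == "Grocery store, -120 CZK"
```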

Components

  • Frontend (frontend/): React + TypeScript app built with Vite. Talks to the backend via REST, handles login/registration, shows latest transactions, filtering, and allows adding transactions.
  • Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange rates, and bank APIs. Uses FastAPI Users for auth (JWT + OAuth), the SQLAlchemy ORM, and Pydantic v2 schemas.
  • Worker service (backend/app/workers): Celery worker handling background tasks (emails, transactions fetching).
  • Database (MariaDB with Maxscale): Persists users, categories, transactions; schema managed by Alembic migrations.
  • Message Queue (RabbitMQ): Queues background tasks for Celery workers.
  • Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel, etc.).
  • Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.
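Workers that call external bank APIs typically need retry logic; below is a minimal exponential-backoff helper in the spirit of what such a worker might use. The helper and its parameters are illustrative, not the project's actual code:

```python
import time

def call_with_backoff(fn, retries=3, base_delay=0.1, sleep=time.sleep):
    """Call fn(), retrying on exception with exponentially growing delays."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; let the task framework record the failure
            sleep(base_delay * 2 ** attempt)

# Simulate a bank API that fails twice before succeeding
attempts = {"n": 0}
def flaky_fetch():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("bank API unavailable")
    return [{"id": "t1", "amount": -42}]

result = call_with_backoff(flaky_fetch, sleep=lambda s: None)
print(result)  # → [{'id': 't1', 'amount': -42}]
```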

Other services deployed in the cluster

  • Longhorn: distributed storage system providing persistent volumes for the database and other services
  • Prometheus + Grafana: monitoring stack collecting metrics from the app and cluster, visualized in Grafana dashboards
  • MariaDB operator: manages the MariaDB cluster based on custom resources; creates databases and users, and handles backups
  • RabbitMQ operator: manages the RabbitMQ cluster based on custom resources
  • Cloudflare Tunnel: provides public HTTPS access to the backend API running in the private cluster

Technologies Used

  • Backend: Python, FastAPI, FastAPI Users, SQLAlchemy, Pydantic, Alembic, Celery
  • Frontend: React, TypeScript, Vite
  • Database: MariaDB with Maxscale
  • Background jobs: RabbitMQ, Celery
  • Containerization/Orchestration: Docker, Docker Compose (dev), Kubernetes, Helm
  • IaC/Platform: Proxmox, Talos, Cloudflare Pages, OpenTofu (Terraform), cert-manager, MetalLB, Cloudflare Tunnel, Prometheus, Loki

Prerequisites

Here are software and hardware prerequisites for the development and production environments. This section also describes necessary environment variables and key dependencies used in the project.

System Requirements

Development

  • OS: tested on macOS; Linux and Windows should work as well
  • Minimum RAM: 8 GB
  • Storage: 10 GB+ free

Production

  • Nodes: 1 control plane + 4 workers
  • CPU: 4 cores per node
  • RAM: 8 GB per node
  • Storage: 200 GB per node

Required Software

Development

  • Docker
  • Docker Compose
  • Node.js and npm
  • Python 3.12
  • MariaDB 11

Production

Minimal:

  • domain name using Cloudflare's nameservers (for the tunnel and Pages)
  • Kubernetes cluster
  • kubectl
  • Helm
  • OpenTofu

Our setup specifics:

  • Proxmox VE
  • TalosOS cluster
  • talosctl
  • GitHub self-hosted runner with access to the cluster
  • Tailscale for remote access to the cluster

Environment Variables

Backend

  • MOJEID_CLIENT_ID, MOJEID_CLIENT_SECRET - OAuth client ID and secret for MojeID
  • BANKID_CLIENT_ID, BANKID_CLIENT_SECRET - OAuth client ID and secret for BankID
  • CSAS_CLIENT_ID, CSAS_CLIENT_SECRET - OAuth client ID and secret for Česká spořitelna
  • DATABASE_URL (or MARIADB_HOST, MARIADB_PORT, MARIADB_DB, MARIADB_USER, MARIADB_PASSWORD) - MariaDB connection details
  • RABBITMQ_USERNAME, RABBITMQ_PASSWORD - credentials for RabbitMQ
  • SENTRY_DSN - Sentry DSN for error reporting
  • DB_ENCRYPTION_KEY - symmetric key for encrypting sensitive data in the database
  • SMTP_HOST, SMTP_PORT, SMTP_USERNAME, SMTP_PASSWORD, SMTP_USE_TLS, SMTP_USE_SSL, SMTP_FROM - SMTP configuration (host, port, auth credentials, TLS/SSL options, sender).
  • UNIRATE_API_KEY - API key for UniRate.
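Since the backend accepts either DATABASE_URL or the individual MARIADB_* variables, the fallback logic presumably looks something like the sketch below. This is a hedged illustration, not the project's actual config code, and the `mysql+asyncmy` driver prefix is an assumption:

```python
import os

def database_url(env=os.environ):
    """Return DATABASE_URL directly, or assemble it from MARIADB_* parts."""
    if "DATABASE_URL" in env:
        return env["DATABASE_URL"]
    return (
        "mysql+asyncmy://{MARIADB_USER}:{MARIADB_PASSWORD}"
        "@{MARIADB_HOST}:{MARIADB_PORT}/{MARIADB_DB}"
    ).format(**env)

env = {
    "MARIADB_USER": "finance",
    "MARIADB_PASSWORD": "s3cret",
    "MARIADB_HOST": "mariadb",
    "MARIADB_PORT": "3306",
    "MARIADB_DB": "tracker",
}
print(database_url(env))
# → mysql+asyncmy://finance:s3cret@mariadb:3306/tracker
```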

Frontend

  • VITE_BACKEND_URL - URL of the backend API

Dependencies (key libraries)

Backend: FastAPI, fastapi-users, SQLAlchemy, Pydantic v2, Alembic, Celery, uvicorn, pytest

Frontend: React, TypeScript, Vite

Local development

You can run the project with Docker Compose and a Python virtual environment for testing and development purposes.

1) Clone the Repository

git clone https://github.com/dat515-2025/Group-8.git
cd Group-8/7project

2) Install dependencies

Backend

cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

3) Run Docker containers

cd ..
docker compose up -d

4) Prepare the database

bash upgrade_database.sh

5) Run backend

Before running the backend, make sure the necessary environment variables are set, either in your shell or in your IDE's run configuration.

cd backend
uvicorn app.app:fastApi --reload --host 0.0.0.0 --port 8000

6) Run Celery worker (optional, in another terminal)

cd Group-8/7project/src/backend
source .venv/bin/activate
celery -A app.celery_app.celery_app worker -l info

7) Install frontend dependencies and run

cd ../frontend
npm i
npm run dev

Build Instructions

Backend

The app is separated into a backend and a frontend, so the two are built separately. The backend is built into a Docker image, and the frontend is deployed as static files.

cd 7project/backend
# Don't forget to set the correct image tag with your registry and name
# For example lukastrkan/cc-app-demo or gitea.ltrk.dev/lukas/cc-app-demo
docker buildx build --platform linux/amd64,linux/arm64 -t CHANGE_ME --push .

Frontend

cd 7project/src/frontend
npm ci
npm run build

Deployment Instructions

Deployment is tested on a TalosOS cluster with 1 control plane and 4 workers; the cluster needs to be set up and configured manually. Terraform/OpenTofu is then used to deploy base services to the cluster. The app itself is deployed automatically via GitHub Actions and the Helm chart. Frontend files are deployed to Cloudflare Pages.

Setup Cluster

Deployment should work on any Kubernetes cluster. However, we are using 5 TalosOS virtual machines (1 control plane, 4 workers) running on top of Proxmox VE.

  1. Create at least 4 VMs with TalosOS (4 cores, 8 GB RAM, 200 GB disk)
  2. Install talosctl for your OS: https://docs.siderolabs.com/talos/v1.10/getting-started/talosctl
  3. Generate Talos config
  4. Navigate to the tofu directory
cd 7project/src/tofu
  5. Set IP addresses in environment variables
CONTROL_PLANE_IP=<control-plane-ip>
WORKER1_IP=<worker1-ip>
WORKER2_IP=<worker2-ip>
WORKER3_IP=<worker3-ip>
WORKER4_IP=<worker4-ip>
....
  6. Create config files
# change my-cluster to your desired cluster name
talosctl gen config my-cluster https://$CONTROL_PLANE_IP:6443
  7. Edit the generated configs

Apply the following changes to worker.yaml:

  1. Add mounts for persistent storage to machine.kubelet.extraMounts section:
extraMounts:
  - destination: /var/lib/longhorn
    type: bind
    source: /var/lib/longhorn
    options:
      - bind
      - rshared
      - rw
  2. Change machine.install.image to an image with extra modules:
image: factory.talos.dev/metal-installer/88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b:v1.11.1

or you can use the latest image generated at https://factory.talos.dev with the following options:

  • Bare-metal machine
  • your Talos OS version
  • amd64 architecture
  • siderolabs/iscsi-tools
  • siderolabs/util-linux-tools
  • (optionally) siderolabs/qemu-guest-agent

Then copy the "Initial Installation" value and paste it into the image field.

  3. Add a Docker registry mirror to the machine.registries.mirrors section:
registries:
  mirrors:
    docker.io:
      endpoints:
        - https://mirror.gcr.io
        - https://registry-1.docker.io
  8. Apply the configs to the VMs
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file controlplane.yaml
talosctl apply-config --insecure --nodes $WORKER1_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER2_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER3_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER4_IP --file worker.yaml
  9. Bootstrap the cluster and retrieve the kubeconfig
export TALOSCONFIG=$(pwd)/talosconfig
talosctl config endpoint https://$CONTROL_PLANE_IP:6443
talosctl config node $CONTROL_PLANE_IP

talosctl bootstrap

talosctl kubeconfig .

You can now use a Kubernetes client such as https://headlamp.dev/ with the generated kubeconfig file.

Install base services to the cluster

  1. Copy and edit variables
cp terraform.tfvars.example terraform.tfvars
  • metallb_ip_range - set to range available in your network for load balancer services

  • mariadb_password - password for internal mariadb user

  • mariadb_root_password - password for root user

  • mariadb_user_name - username for admin user

  • mariadb_user_host - allowed hosts for admin user

  • mariadb_user_password - password for admin user

  • metallb_maxscale_ip, metallb_service_ip, metallb_primary_ip, metallb_secondary_ip - IPs for database cluster,
    set them to static IPs from the metallb_ip_range

  • s3_enabled, s3_bucket, s3_region, s3_endpoint, s3_key_id, s3_key_secret - S3 compatible storage for backups (optional)

  • phpmyadmin_enabled - set to false if you want to disable phpmyadmin

  • rabbitmq-password - password for RabbitMQ

  • cloudflare_account_id - your Cloudflare account ID

  • cloudflare_api_token - your Cloudflare API token with permissions to manage tunnels and DNS

  • cloudflare_email - your Cloudflare account email

  • cloudflare_tunnel_name - name for the tunnel

  • cloudflare_domain - your domain name managed in Cloudflare

  2. Deploy without the Cloudflare module first
tofu init
tofu apply -exclude=module.cloudflare
  3. Deploy the rest of the modules
tofu apply

Configure deployment

  1. Create self-hosted runner with access to the cluster or make cluster publicly accessible
  2. Change jobs.deploy.runs-on in .github/workflows/deploy-prod.yml and in .github/workflows/deploy-pr.yaml to your runner label
  3. Add variables to GitHub in repository settings:
    • PROD_DOMAIN - base domain for deployments (e.g. ltrk.cz)
    • DEV_FRONTEND_BASE_DOMAIN - base domain for your cloudflare pages
  4. Add secrets to GitHub in repository settings:
    • CLOUDFLARE_ACCOUNT_ID - same as in tofu/terraform.tfvars
    • CLOUDFLARE_API_TOKEN - same as in tofu/terraform.tfvars
    • DOCKER_USER - your docker registry username
    • DOCKER_PASSWORD - your docker registry password
    • KUBE_CONFIG - content of your kubeconfig file for the cluster
    • PROD_DB_PASSWORD - same as MARIADB_PASSWORD
    • PROD_RABBITMQ_PASSWORD - same as RABBITMQ_PASSWORD
    • PROD_DB_ENCRYPTION_KEY - same as DB_ENCRYPTION_KEY
    • MOJEID_CLIENT_ID
    • MOJEID_CLIENT_SECRET
    • BANKID_CLIENT_ID
    • BANKID_CLIENT_SECRET
    • CSAS_CLIENT_ID
    • CSAS_CLIENT_SECRET
    • SENTRY_DSN
    • SMTP_HOST
    • SMTP_PORT
    • SMTP_USERNAME
    • SMTP_PASSWORD
    • SMTP_FROM
    • UNIRATE_API_KEY
  5. On GitHub, open the Actions tab, select "Deploy Prod", and run the workflow manually

Testing Instructions

The tests are located in the 7project/backend/tests directory. All tests are run by GitHub Actions on every pull request and push to main; the workflow files live under .github/workflows/.

If you want to run the tests locally, the preferred way is to use a bash script that will start a test DB container with docker compose and remove it afterwards.

cd 7project/src/backend 
bash test_locally.sh

Unit Tests

There are only 5 basic unit tests, since our service logic is very simple.

bash test_locally.sh --only-unit
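A hypothetical example of the style these unit tests follow (this is not one of the project's actual tests; the helper function is invented for illustration):

```python
def apply_category_filter(transactions, category):
    """Service helper: keep only transactions matching a category."""
    return [tx for tx in transactions if tx.get("category") == category]

def test_apply_category_filter():
    txs = [
        {"id": 1, "category": "food"},
        {"id": 2, "category": "rent"},
        {"id": 3, "category": "food"},
    ]
    assert [tx["id"] for tx in apply_category_filter(txs, "food")] == [1, 3]
    assert apply_category_filter(txs, "travel") == []

test_apply_category_filter()  # pytest would collect and run this automatically
```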

Integration Tests

There are 9 basic integration tests, testing individual backend API logic.

bash test_locally.sh --only-integration

End-to-End Tests

There are 7 end-to-end tests, covering more complex app logic.

bash test_locally.sh --only-e2e

Usage Examples

All endpoints are documented via the OpenAPI UI at http://127.0.0.1:8000/docs.

Auth: Register and Login (JWT)

# Register
curl -X POST http://127.0.0.1:8000/auth/register \
  -H 'Content-Type: application/json' \
  -d '{
    "email": "user@example.com",
    "password": "StrongPassw0rd",
    "first_name": "Jane",
    "last_name": "Doe"
  }'

# Login (JWT)
TOKEN=$(curl -s -X POST http://127.0.0.1:8000/auth/jwt/login \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'username=user@example.com&password=StrongPassw0rd' | jq -r .access_token)

echo $TOKEN

# Call a protected route
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
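The access_token returned by /auth/jwt/login is a standard JWT, so its payload can be inspected with only the standard library (without signature verification, so never use this for auth decisions). The sample token below is hand-built for illustration; the claim values are assumptions, not what the backend actually issues:

```python
import base64, json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def fake_jwt(payload: dict) -> str:
    """Build an unsigned sample token just to demonstrate the structure."""
    seg = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
    header = base64.urlsafe_b64encode(b'{"alg":"HS256"}').decode().rstrip("=")
    return f"{header}.{seg}.signature-goes-here"

token = fake_jwt({"sub": "user@example.com", "aud": ["fastapi-users:auth"]})
print(jwt_payload(token)["sub"])  # → user@example.com
```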

Frontend

  • Start the dev server with npm run dev in 7project/src/frontend

Presentation Video

YouTube Link: [Insert your YouTube link here]

Duration: [X minutes Y seconds]

Video Includes:

  • Project overview and architecture
  • Live demonstration of key features
  • Code walkthrough
  • Build and deployment showcase

Troubleshooting

Common Issues

Issue 1: Unable to apply Cloudflare terraform module

Symptoms: Terraform/OpenTofu apply fails during the Cloudflare module deployment. This is caused by a variable whose value is not known until the other modules have been applied.

Solution: Apply first without Cloudflare module and then apply again.

tofu apply -exclude=module.cloudflare
tofu apply

Issue 2: Pods are unable to start

Symptoms: Pods fail to start with an ImagePullBackOff error. This can be caused either by hitting Docker Hub rate limits or by Docker Hub being down.

Solution: Make sure you updated the cluster config to use a registry mirror, as described in the "Setup Cluster" section.

Debug Commands

Get a detailed description of the Deployment:

kubectl describe deployment finance-tracker -n prod

Get a list of pods in the Deployment:

kubectl get pods -n prod

Check the logs of a specific pod (copy the pod name from the command above). The --previous flag shows logs of the previous, failed container; remove it if the pod is not failing:

kubectl logs <pod-name> -n prod --previous

See the service description:

kubectl describe service finance-tracker -n prod

Connect to the pod and run a bash shell:

kubectl exec -it <pod-name> -n prod -- /bin/bash

Progress Table

Be honest and detailed in your assessments. This information is used for individual grading. Link to the specific commit on GitHub for each contribution.

| Task/Component | Assigned To | Status | Time Spent | Difficulty | Notes |
|----------------|-------------|--------|------------|------------|-------|
| Project Setup & Repository | Lukas | Complete | 40 hours | Medium | [Any notes] |
| Design Document | Both | Complete | 4 hours | Easy | [Any notes] |
| Backend API Development | Dejan | Complete | 14 hours | Medium | [Any notes] |
| Database Setup & Models | Lukas | Complete | [X hours] | Medium | [Any notes] |
| Frontend Development | Dejan | Complete | 17 hours | Medium | [Any notes] |
| Docker Configuration | Lukas | Complete | 3 hours | Easy | [Any notes] |
| Cloud Deployment | Lukas | Complete | [X hours] | Hard | Using Talos cluster running in Proxmox (easy snapshots etc.); frontend deployed on Cloudflare Pages |
| Testing Implementation | Dejan | Complete | 16 hours | Medium | [Any notes] |
| Documentation | Both | 🔄 In Progress | [X hours] | Easy | [Any notes] |
| Presentation Video | Both | Not Started | [X hours] | Medium | [Any notes] |

Legend: Complete | 🔄 In Progress | Pending | Not Started

Hour Sheet

Lukáš

Name: Lukáš Trkan

| Date | Activity | Hours | Description | Representative Commit / PR |
|------|----------|-------|-------------|----------------------------|
| 18.9. - 19.9. | Initial Setup & Design | 40 | Repository init, system design diagrams, basic Terraform setup | feat(infrastructure): add basic terraform resources |
| 20.9. - 5.10. | Core Infrastructure & CI/CD | 12 | K8s setup (ArgoCD), CI/CD workflows, RabbitMQ, Redis, Celery workers, DB migrations | PR #2, feat(infrastructure): add rabbitmq cluster |
| 6.10. - 9.10. | Frontend Infra & DB | 5 | Deployed frontend to Cloudflare, setup metrics, created database models | PR #16 (Cloudflare), PR #19 (DB structure) |
| 10.10. - 11.10. | Backend | 5 | Implemented OAuth support (MojeID, BankID) | feat(auth): add support for OAuth and MojeID |
| 12.10. | Infrastructure | 2 | Added database backups | feat(infrastructure): add backups |
| 16.10. | Infrastructure | 4 | Implemented secrets management, fixed deployment/env variables | PR #29 (Deployment envs) |
| 17.10. | Monitoring | 1 | Added Sentry logging | feat(app): add sentry loging |
| 21.10. - 22.10. | Backend | 8 | Added ČSAS bank connection | PR #32 (Fix React OAuth) |
| 29.10. - 30.10. | Backend | 5 | Implemented transaction encryption, add bank scraping | PR #39 (CSAS Scraping) |
| 30.10. | Monitoring | 6 | Implemented Loki logging and basic Prometheus metrics | PR #42 (Prometheus metrics) |
| 9.11. | Monitoring | 2 | Added custom Prometheus metrics | PR #46 (Prometheus custom metrics) |
| 11.11. | Tests | 1 | Investigated and fixed broken Pytest environment | fix(tests): set pytest env |
| 11.11. - 12.11. | Features & Deployment | 6 | Added cron support, email sender service, updated workers & image | PR #49 (Email), PR #50 (Update workers) |
| 18.9 - 14.11 | Documentation | 8 | Updated report.md, design docs, and tfvars.example | Create design.md, update report |
| **Total** | | **105** | | |

Dejan

| Date | Activity | Hours | Description | Representative Commit / PR |
|------|----------|-------|-------------|----------------------------|
| 25.9. | Design | 2 | | 6design |
| 9.10. to 11.10. | Backend APIs | 14 | Implemented Backend APIs | PR #26, 20-create-a-controller-layer-on-backend-side |
| 13.10. to 15.10. | Frontend Development | 8 | Created user interface mockups | PR #28, frontend basics |
| 21.10. to 23.10. | Tests, frontend | 10 | Test basics, balance charts, and frontend improvement | PR #31, 30 create tests and set up a GitHub pipeline |
| 28.10. to 30.10. | CI/CD | 6 | Integrated tests with test database setup on GitHub workflows | PR #31, 30 create tests and set up a GitHub pipeline |
| 28.10. to 30.10. | Frontend | 8 | UI improvements and exchange rate API integration | PR #35, 34 improve frontend functionality |
| 29.10. | Backend | 4 | Token invalidation, few fixes | PR #38, fix(backend): implemented jwt token invalidation so users cannot use … |
| 4.11. to 6.11. | Tests | 6 | Test fixes improvement, more integration and e2e | PR #45, feat(test): added more tests |
| 4.11. to 6.11. | Frontend | 8 | Fixes, rates API, improved UI, added support for mobile devices | PR #41, #44, feat(frontend): added CNB API and moved management into a new tab, 43 fix the UI layout in chrome |
| 11.11. | Backend APIs | 4 | Moved rates API, mock bank to backend, few fixes | feat(backend): Moved the unirate API to the backend, feat(backend): moved mock bank to backend |
| 11.11. to 12.11. | Tests | 3 | Local testing DB container, few fixes | PR #48, fix(tests): fixed test runtime errors regarding database connection |
| 12.11. | Frontend | 3 | Enabled multiple transaction edits at once, CSAS button state | feat(frontend): implemented multiple transaction selections in UI |
| 13.11. | Video | 3 | Video | |
| 25.9. to 14.11. | Documentation | 8 | Documenting the dev process | multiple feat(docs): report.md update |
| **Total** | | **87** | | |

Group Total: 192 hours


Final Reflection

What We Learned

Technical

  • We learned how to use AI to help us with our project.
  • We learned how to use Copilot for PR reviews.
  • We learned how to troubleshoot issues with our project in different areas.

Collaboration

  • Weekly meetings with the TA were great for syncing up on progress, discussing issues, and planning future work.
  • Using GitHub issues and pull requests was very helpful for keeping track of progress.

Challenges Faced

Slow cluster performance

This was caused by a single SATA SSD running all the VMs. It was solved by adding a second NVMe disk dedicated to the Talos VMs.

Stuck IaC deployments

If a deployed module (a Helm chart, for example) was misconfigured, it would get stuck and time out, leaving a namespace that could not be deleted. This was solved by taking snapshots in Proxmox and restoring them when it happened.

Not enough time to implement all features

Since this course is worth only 5 credits, we often had to prioritize our other courses over this project. In the end, we were still able to implement all the necessary features.

If We Did This Again

Different framework

FastAPI lacks usable built-in support for database migrations, and integrating Alembic was a bit tricky. Integrating the FastAPI auth system with the React frontend was also tricky, since there is no official project template. Using .NET (which we considered initially) would probably have avoided these issues.

Private container registry

Using a private container registry would allow us to include environment variables directly in the image during the build. This would simplify deployment and the CI/CD setup.

Start sooner

The weekly meetings helped us start planning the project early and avoid spending too much time on details, but we still could have started sooner.


Individual Growth

Lukas

This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years). I had some prior experience with Terraform/OpenTofu from work, but this project improved my understanding of it.

The biggest challenge for me was time tracking, since I am used to tracking time per project, not per task. (I am bad even at that :) ).

It was also an interesting experience to be the one responsible for the initial project structure, design, and setup used by more people than just myself.

Dejan

Since I do not have a job and I am a more theoretically oriented student (more into math, algorithms, and cryptography), this project was probably the most complex one I have ever worked on. It was a great experience to work on an actually deployed fullstack app, rather than only the local development I was used to.

It was also a great experience to collaborate with Lukas, who has prior experience with app deployment and infrastructure. Thanks to this, I learned a lot of new technologies and how to work in a team (it was my first time reviewing PRs).

It was challenging to wrap my head around the project structure and how everything was connected (and I still think I have some gaps in my knowledge). But if I decide to create my own demo project in the future, I will definitely be able to work on it much more efficiently.


Report Completion Date: [Date]

Last Updated: 13.11.2025