mirror of https://github.com/dat515-2025/Group-8.git
synced 2026-03-22 06:57:47 +01:00
# Personal Finance Tracker
## Project Overview

**Project Name**: Personal Finance Tracker

**Group Members**:

- 289229, Lukáš Trkan, lukastrkan
- 289258, Dejan Ribarovski, ribardej (derib2613)

**Brief Description**:
Our application allows users to easily track their cash flow across multiple bank accounts. Users can label their transactions with custom categories that can later be used for filtering and visualization. New transactions are automatically fetched in the background.
## Architecture Overview

Our system is a full-stack web application composed of a React frontend, a FastAPI backend, an asynchronously accessed MariaDB database behind MaxScale, and background workers powered by Celery with RabbitMQ. The backend exposes REST endpoints for authentication (email/password and OAuth), users, categories, transactions, exchange rates, and bank APIs. Kubernetes infrastructure is managed via Terraform/OpenTofu, and the application is packaged as a Helm chart. All of this is deployed on a private Talos OS cluster running on Proxmox VE, with CI/CD and public access over Cloudflare Tunnels. The frontend's static files are served via Cloudflare Pages. Other services deployed in the cluster include Longhorn for persistent storage and Prometheus with Grafana for monitoring.
### High-Level Architecture

```mermaid
flowchart LR
    n3(("User")) <--> client["Frontend"]
    proc_queue["Message Queue"] --> proc_queue_worker["Worker Service"]
    proc_queue_worker -- SMTP --> ext_mail[("Email Service")]
    proc_queue_worker <-- HTTP request/response --> ext_bank[("Bank API")]
    proc_queue_worker <--> db[("Database")]
    proc_cron["Cron"] <-- HTTP request/response --> svc["Backend API"]
    svc --> proc_queue
    n2["Cloudflare tunnel"] <-- HTTP request/response --> svc
    svc <--> db
    svc <-- HTTP request/response --> api[("UniRate API")]
    client <-- HTTP request/response --> n2
```
The workflow is as follows:

- The client connects to the frontend. After login, the frontend automatically fetches the stored transactions from the database via the backend API, and currency rates from the UniRate API.
- When the client opts in to fetching new transactions via the Bank API, a cron job triggers periodic fetching using a background worker.
- After a successful load, these transactions are stored in the database and displayed to the client.
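The enqueue/consume flow above can be sketched in miniature with Python's standard library — an in-process `queue.Queue` standing in for RabbitMQ and a thread standing in for a Celery worker (an illustration of the pattern, not the actual stack):

```python
import queue
import threading

# In-memory stand-in for the message queue; tasks are plain dicts here.
task_queue = queue.Queue()
results = []

def worker():
    # Worker loop: take a task, "fetch transactions", record the result.
    while True:
        task = task_queue.get()
        if task is None:  # sentinel: shut down
            break
        results.append(f"fetched transactions for account {task['account_id']}")

t = threading.Thread(target=worker)
t.start()

# The backend enqueues work instead of doing it inside the request handler.
task_queue.put({"account_id": 1})
task_queue.put({"account_id": 2})
task_queue.put(None)
t.join()
```

The real worker lives in backend/app/workers and is started with the `celery` command shown under local development.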
### Features

- Stored transactions are encrypted in the database for security reasons.
- For every pull request, the full app is deployed on a separate URL and the tests are run by GitHub CI/CD.
- On every push to main, the production app is automatically updated.
- The UI is responsive on mobile devices.
- Slow operations (emails, transaction fetching) are handled in the background by Celery workers.
- The app is monitored via a Prometheus metrics endpoint, and the metrics are shown in a Grafana dashboard.
### Components

- Frontend (frontend/): React + TypeScript app built with Vite. Talks to the backend via REST, handles login/registration, shows the latest transactions, supports filtering, and allows adding transactions.
- Backend API (backend/app): FastAPI app with routers under app/api for auth, users, categories, transactions, exchange rates, and the bank API. Uses FastAPI Users for auth (JWT + OAuth), the SQLAlchemy ORM, and Pydantic v2 schemas.
- Worker service (backend/app/workers): Celery worker handling background tasks (emails, transaction fetching).
- Database (MariaDB with MaxScale): Persists users, categories, and transactions; schema managed by Alembic migrations.
- Message Queue (RabbitMQ): Queues background tasks for Celery workers.
- Infrastructure as Code (tofu/): OpenTofu modules provisioning cluster services (RabbitMQ, Redis, Cloudflare tunnel, etc.).
- Deployment Chart (charts/myapp-chart/): Helm chart to deploy the application to Kubernetes.
### Technologies Used

- Backend: Python, FastAPI, FastAPI Users, SQLAlchemy, Pydantic, Alembic, Celery
- Frontend: React, TypeScript, Vite
- Database: MariaDB with MaxScale
- Background jobs: RabbitMQ, Celery
- Containerization/Orchestration: Docker, Docker Compose (dev), Kubernetes, Helm
- IaC/Platform: Proxmox, Talos, Cloudflare Pages, OpenTofu (Terraform), cert-manager, MetalLB, Cloudflare Tunnel, Prometheus, Loki
## Prerequisites

### System Requirements

#### Development

- Minimum RAM: 8 GB
- Storage: 10 GB+ free

#### Production

- Nodes: 1 control plane + 4 workers
- CPU: 4 cores per node
- RAM: 8 GB per node
- Storage: 200 GB per node
### Required Software

#### Development

- Docker
- Docker Compose
- Node.js and npm
- Python 3.12
- MariaDB 11

#### Production

##### Minimal:

- a domain name using Cloudflare's nameservers (for the tunnel and Pages)
- Kubernetes cluster
- kubectl
- Helm
- OpenTofu
##### Our setup specifics:

- Proxmox VE
- Talos OS cluster
- talosctl
- GitHub self-hosted runner with access to the cluster
- Tailscale for remote access to the cluster
### Environment Variables

#### Backend

- `MOJEID_CLIENT_ID`, `MOJEID_CLIENT_SECRET` - OAuth client ID and secret for MojeID (https://www.mojeid.cz/en/provider/)
- `BANKID_CLIENT_ID`, `BANKID_CLIENT_SECRET` - OAuth client ID and secret for BankID (https://developer.bankid.cz/)
- `CSAS_CLIENT_ID`, `CSAS_CLIENT_SECRET` - OAuth client ID and secret for Česká spořitelna (https://developers.erstegroup.com/docs/apis/bank.csas)
- `DATABASE_URL` (or `MARIADB_HOST`, `MARIADB_PORT`, `MARIADB_DB`, `MARIADB_USER`, `MARIADB_PASSWORD`) - MariaDB connection details
- `RABBITMQ_USERNAME`, `RABBITMQ_PASSWORD` - credentials for RabbitMQ
- `SENTRY_DSN` - Sentry DSN for error reporting
- `DB_ENCRYPTION_KEY` - symmetric key for encrypting sensitive data in the database
- `SMTP_HOST`, `SMTP_PORT`, `SMTP_USERNAME`, `SMTP_PASSWORD`, `SMTP_USE_TLS`, `SMTP_USE_SSL`, `SMTP_FROM` - SMTP configuration (host, port, auth credentials, TLS/SSL options, sender)
- `UNIRATE_API_KEY` - API key for UniRate

#### Frontend

- `VITE_BACKEND_URL` - URL of the backend API
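As a sketch of the `DATABASE_URL` fallback described above, the backend's settings code might assemble the connection URL roughly like this (the `mariadb+asyncmy` driver string and the function name are illustrative assumptions, not necessarily what backend/app uses):

```python
import os

def database_url() -> str:
    """Use DATABASE_URL when set; otherwise build it from MARIADB_* vars.

    Illustrative only: the driver prefix below is an assumption.
    """
    url = os.environ.get("DATABASE_URL")
    if url:
        return url
    host = os.environ.get("MARIADB_HOST", "localhost")
    port = os.environ.get("MARIADB_PORT", "3306")
    db = os.environ["MARIADB_DB"]
    user = os.environ["MARIADB_USER"]
    password = os.environ["MARIADB_PASSWORD"]
    return f"mariadb+asyncmy://{user}:{password}@{host}:{port}/{db}"
```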
### Dependencies (key libraries)

- Backend: FastAPI, fastapi-users, SQLAlchemy, Pydantic v2, Alembic, Celery, uvicorn, pytest
- Frontend: React, TypeScript, Vite
## Local Development

You can run the project with Docker Compose and a Python virtual environment for testing and development purposes.

### 1) Clone the Repository

```bash
git clone https://github.com/dat515-2025/Group-8.git
cd Group-8/7project
```
### 2) Install dependencies

Backend:

```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
### 3) Run Docker containers

```bash
cd ..
docker compose up -d
```
### 4) Prepare the database

```bash
bash upgrade_database.sh
```
### 5) Run backend

```bash
cd backend

# TODO: set env variables
uvicorn app.app:fastApi --reload --host 0.0.0.0 --port 8000
```
### 6) Run Celery worker (optional, in another terminal)

```bash
cd Group-8/7project/backend
source .venv/bin/activate
celery -A app.celery_app.celery_app worker -l info
```
### 7) Install frontend dependencies and run

```bash
cd ../frontend
npm i
npm run dev
```

- Backend available at: http://127.0.0.1:8000 (OpenAPI at /docs)
- Frontend available at: http://localhost:5173
## Build Instructions

### Backend

```bash
cd 7project/backend
# Don't forget to set the correct image tag with your registry and name,
# for example lukastrkan/cc-app-demo or gitea.ltrk.dev/lukas/cc-app-demo
docker buildx build --platform linux/amd64,linux/arm64 -t CHANGE_ME --push .
```
### Frontend

```bash
cd 7project/frontend
npm ci
npm run build
```
## Deployment Instructions

### Setup Cluster

Deployment should work on any Kubernetes cluster. However, we are using 4 Talos OS virtual machines (1 control plane, 3 workers) running on top of Proxmox VE.

1) Create at least 4 VMs with Talos OS (4 cores, 8 GB RAM, 200 GB disk each)
2) Install talosctl for your OS: https://docs.siderolabs.com/talos/v1.10/getting-started/talosctl
3) Generate the Talos config
4) Navigate to the tofu directory
```bash
cd 7project/tofu
```
5) Set IP addresses in environment variables

```bash
CONTROL_PLANE_IP=<control-plane-ip>
WORKER1_IP=<worker1-ip>
WORKER2_IP=<worker2-ip>
WORKER3_IP=<worker3-ip>
WORKER4_IP=<worker4-ip>
....
```
6) Create config files

```bash
# change my-cluster to your desired cluster name
talosctl gen config my-cluster https://$CONTROL_PLANE_IP:6443
```
7) Edit the generated configs

Apply the following changes to `worker.yaml`:

1) Add mounts for persistent storage to the `machine.kubelet.extraMounts` section:

```yaml
extraMounts:
  - destination: /var/lib/longhorn
    type: bind
    source: /var/lib/longhorn
    options:
      - bind
      - rshared
      - rw
```
2) Change `machine.install.image` to an image with extra modules:

```yaml
image: factory.talos.dev/metal-installer/88d1f7a5c4f1d3aba7df787c448c1d3d008ed29cfb34af53fa0df4336a56040b:v1.11.1
```

Alternatively, you can use the latest image generated at https://factory.talos.dev with the following options:

- Bare-metal machine
- your Talos OS version
- amd64 architecture
- siderolabs/iscsi-tools
- siderolabs/util-linux-tools
- (optionally) siderolabs/qemu-guest-agent

Then copy the "Initial Installation" value and paste it into the image field.
3) Add a Docker registry mirror to the `machine.registries.mirrors` section:

```yaml
registries:
  mirrors:
    docker.io:
      endpoints:
        - https://mirror.gcr.io
        - https://registry-1.docker.io
```
8) Apply the configs to the VMs

```bash
talosctl apply-config --insecure --nodes $CONTROL_PLANE_IP --file controlplane.yaml
talosctl apply-config --insecure --nodes $WORKER1_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER2_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER3_IP --file worker.yaml
talosctl apply-config --insecure --nodes $WORKER4_IP --file worker.yaml
```
9) Bootstrap the cluster and retrieve the kubeconfig

```bash
export TALOSCONFIG=$(pwd)/talosconfig
talosctl config endpoint https://$CONTROL_PLANE_IP:6443
talosctl config node $CONTROL_PLANE_IP

talosctl bootstrap

talosctl kubeconfig .
```

You can now use a Kubernetes client such as https://headlamp.dev/ with the generated kubeconfig file.
### Install base services to the cluster

1) Copy and edit variables

```bash
cp terraform.tfvars.example terraform.tfvars
```

- `metallb_ip_range` - a range available in your network for LoadBalancer services
- `mariadb_password` - password for the internal MariaDB user
- `mariadb_root_password` - password for the root user
- `mariadb_user_name` - username for the admin user
- `mariadb_user_host` - allowed hosts for the admin user
- `mariadb_user_password` - password for the admin user
- `metallb_maxscale_ip`, `metallb_service_ip`, `metallb_primary_ip`, `metallb_secondary_ip` - IPs for the database cluster; set them to static IPs from the `metallb_ip_range`
- `s3_enabled`, `s3_bucket`, `s3_region`, `s3_endpoint`, `s3_key_id`, `s3_key_secret` - S3-compatible storage for backups (optional)
- `phpmyadmin_enabled` - set to false if you want to disable phpMyAdmin
- `rabbitmq-password` - password for RabbitMQ
- `cloudflare_account_id` - your Cloudflare account ID
- `cloudflare_api_token` - your Cloudflare API token with permissions to manage tunnels and DNS
- `cloudflare_email` - your Cloudflare account email
- `cloudflare_tunnel_name` - name for the tunnel
- `cloudflare_domain` - your domain name managed in Cloudflare
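For orientation, a minimal `terraform.tfvars` might look like the following sketch — every value is a placeholder to replace with your own, and only a subset of the variables listed above is shown:

```hcl
metallb_ip_range     = "192.168.1.240-192.168.1.250"
metallb_maxscale_ip  = "192.168.1.241"
metallb_service_ip   = "192.168.1.242"
metallb_primary_ip   = "192.168.1.243"
metallb_secondary_ip = "192.168.1.244"

mariadb_password      = "CHANGE_ME"
mariadb_root_password = "CHANGE_ME"
mariadb_user_name     = "admin"
mariadb_user_host     = "%"
mariadb_user_password = "CHANGE_ME"

cloudflare_account_id  = "CHANGE_ME"
cloudflare_api_token   = "CHANGE_ME"
cloudflare_email       = "you@example.com"
cloudflare_tunnel_name = "my-tunnel"
cloudflare_domain      = "example.com"
```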
2) Deploy without the Cloudflare module first

```bash
tofu init
tofu apply -exclude=module.cloudflare
```

3) Deploy the rest of the modules

```bash
tofu apply
```
### Configure deployment

1) Create a self-hosted runner with access to the cluster, or make the cluster publicly accessible
2) Change `jobs.deploy.runs-on` in `.github/workflows/deploy-prod.yml` and in `.github/workflows/deploy-pr.yaml` to your runner label
3) Add variables to GitHub in the repository settings:
   - `PROD_DOMAIN` - base domain for deployments (e.g. ltrk.cz)
   - `DEV_FRONTEND_BASE_DOMAIN` - base domain for your Cloudflare Pages
4) Add secrets to GitHub in the repository settings:
   - CLOUDFLARE_ACCOUNT_ID - same as in tofu/terraform.tfvars
   - CLOUDFLARE_API_TOKEN - same as in tofu/terraform.tfvars
   - DOCKER_USER - your Docker registry username
   - DOCKER_PASSWORD - your Docker registry password
   - KUBE_CONFIG - content of your kubeconfig file for the cluster
   - PROD_DB_PASSWORD - same as MARIADB_PASSWORD
   - PROD_RABBITMQ_PASSWORD - same as RABBITMQ_PASSWORD
   - PROD_DB_ENCRYPTION_KEY - same as DB_ENCRYPTION_KEY
   - MOJEID_CLIENT_ID
   - MOJEID_CLIENT_SECRET
   - BANKID_CLIENT_ID
   - BANKID_CLIENT_SECRET
   - CSAS_CLIENT_ID
   - CSAS_CLIENT_SECRET
   - SENTRY_DSN
   - SMTP_HOST
   - SMTP_PORT
   - SMTP_USERNAME
   - SMTP_PASSWORD
   - SMTP_FROM
   - UNIRATE_API_KEY
5) On GitHub, open the Actions tab, select "Deploy Prod", and run the workflow manually
### Manual deployment with Helm (alternative)

You can also deploy the app manually using Helm:

```bash
# Set the namespace
kubectl create namespace myapp || true

# Install/upgrade the chart with the required values
helm upgrade --install myapp charts/myapp-chart \
  -n myapp \
  -f charts/myapp-chart/values.yaml \
  --set image.backend.repository=myorg/myapp-backend \
  --set image.backend.tag=latest \
  --set env.BACKEND_URL="https://myapp.example.com" \
  --set env.FRONTEND_URL="https://myapp.example.com" \
  --set env.SECRET="CHANGE_ME_SECRET"
```
## Testing Instructions

The tests are located in the 7project/backend/tests directory. All tests are run by GitHub Actions on every pull request and push to main. See the workflow [here](../.github/workflows/run-tests.yml).

If you want to run the tests locally, the preferred way is to use a [bash script](backend/test-with-ephemeral-mariadb.sh) that starts a [test DB container](backend/docker-compose.test.yml) and removes it afterward.

```bash
cd 7project/backend
bash test-with-ephemeral-mariadb.sh
```
### Unit Tests

There are only 5 basic unit tests, since our service logic is very simple.

```bash
bash test-with-ephemeral-mariadb.sh --only-unit
```
### Integration Tests

There are 9 basic integration tests, testing the individual backend API logic.

```bash
bash test-with-ephemeral-mariadb.sh --only-integration
```
### End-to-End Tests

There are 7 end-to-end tests, testing more complex app logic.

```bash
bash test-with-ephemeral-mariadb.sh --only-e2e
```
## Usage Examples

All endpoints are documented via OpenAPI at http://127.0.0.1:8000/docs.

### Auth: Register and Login (JWT)
```bash
# Register
curl -X POST http://127.0.0.1:8000/auth/register \
  -H 'Content-Type: application/json' \
  -d '{
    "email": "user@example.com",
    "password": "StrongPassw0rd",
    "first_name": "Jane",
    "last_name": "Doe"
  }'

# Login (JWT)
TOKEN=$(curl -s -X POST http://127.0.0.1:8000/auth/jwt/login \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'username=user@example.com&password=StrongPassw0rd' | jq -r .access_token)

echo $TOKEN

# Call a protected route
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:8000/authenticated-route
```
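The token returned by `/auth/jwt/login` is a JWT, so its payload can be inspected with the Python standard library when debugging (this decodes without verifying the signature; the exact claim names depend on the FastAPI Users configuration):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT.

    Debugging aid only: this does NOT verify the signature, it just
    shows which claims the backend put into the token.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments use base64url without padding; add it back first.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

For example, `jwt_payload(TOKEN)` on a token from the login call above typically shows claims such as `sub` and `exp`.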
### Frontend

- Start with `npm run dev` in 7project/frontend
- Ensure `VITE_BACKEND_URL` is set to the backend URL (e.g. http://127.0.0.1:8000)
- Open http://localhost:5173
- Log in, view the latest transactions, filter them, and add new transactions from the UI

---
## Presentation Video

**YouTube Link**: [Insert your YouTube link here]

**Duration**: [X minutes Y seconds]

**Video Includes**:

- [ ] Project overview and architecture
- [ ] Live demonstration of key features
- [ ] Code walkthrough
- [ ] Build and deployment showcase
## Troubleshooting

### Common Issues

#### Issue 1: [Common problem]

**Symptoms**: [What the user sees]

**Solution**: [Step-by-step fix]

#### Issue 2: [Another common problem]

**Symptoms**: [What the user sees]

**Solution**: [Step-by-step fix]
### Debug Commands

```bash
# Pod status and recent events in the app namespace (from the Helm example;
# resource names depend on your Helm release values)
kubectl get pods -n myapp
kubectl describe pod <pod-name> -n myapp

# Log viewing
kubectl logs -n myapp deployment/<backend-deployment> --tail=100
docker compose logs -f backend   # local development

# Service status checks
kubectl get svc -n myapp
talosctl health
```

---
## Progress Table

> Be honest and detailed in your assessments.
> This information is used for individual grading.
> Link to the specific commit on GitHub for each contribution.

| Task/Component | Assigned To | Status | Time Spent | Difficulty | Notes |
|----------------|-------------|--------|------------|------------|-------|
| [Project Setup & Repository](https://github.com/dat515-2025/Group-8#) | Lukas | ✅ Complete | [X hours] | Medium | [Any notes] |
| [Design Document](https://github.com/dat515-2025/Group-8/blob/main/6design/design.md) | Both | ✅ Complete | 4 hours | Easy | [Any notes] |
| [Backend API Development](https://github.com/dat515-2025/Group-8/tree/main/7project/backend/app/api) | Dejan | ✅ Complete | 12 hours | Medium | [Any notes] |
| [Database Setup & Models](https://github.com/dat515-2025/Group-8/tree/main/7project/backend/app/models) | Lukas | ✅ Complete | [X hours] | Medium | [Any notes] |
| [Frontend Development](https://github.com/dat515-2025/Group-8/tree/main/7project/frontend) | Dejan | ✅ Complete | 17 hours | Medium | [Any notes] |
| [Docker Configuration](https://github.com/dat515-2025/Group-8/blob/main/7project/compose.yml) | Lukas | ✅ Complete | 3 hours | Easy | [Any notes] |
| [Cloud Deployment](https://github.com/dat515-2025/Group-8/blob/main/7project/deployment/app-demo-deployment.yaml) | Lukas | ✅ Complete | [X hours] | Hard | Using a Talos cluster running in Proxmox (easy snapshots etc.). Frontend deployed to Cloudflare Pages. |
| [Testing Implementation](https://github.com/dat515-2025/group-name) | Dejan | ✅ Complete | 16 hours | Medium | [Any notes] |
| [Documentation](https://github.com/dat515-2025/group-name) | Both | 🔄 In Progress | [X hours] | Easy | [Any notes] |
| [Presentation Video](https://github.com/dat515-2025/group-name) | Both | ❌ Not Started | [X hours] | Medium | [Any notes] |

**Legend**: ✅ Complete | 🔄 In Progress | ⏳ Pending | ❌ Not Started
## Hour Sheet

> Link to the specific commit on GitHub for each contribution.

### Lukáš

**Name:** Lukáš Trkan

| Date | Activity | Hours | Description | Representative Commit / PR |
|:-----|:---------|:------|:------------|:---------------------------|
| 18.9. - 19.9. | Initial Setup & Design | 40 | Repository init, system design diagrams, basic Terraform setup | `feat(infrastructure): add basic terraform resources` |
| 20.9. - 5.10. | Core Infrastructure & CI/CD | 12 | K8s setup (ArgoCD), CI/CD workflows, RabbitMQ, Redis, Celery workers, DB migrations | `PR #2`, `feat(infrastructure): add rabbitmq cluster` |
| 6.10. - 9.10. | Frontend Infra & DB | 5 | Deployed frontend to Cloudflare, set up metrics, created database models | `PR #16` (Cloudflare), `PR #19` (DB structure) |
| 10.10. - 11.10. | Backend | 5 | Implemented OAuth support (MojeID, BankID) | `feat(auth): add support for OAuth and MojeID` |
| 12.10. | Infrastructure | 2 | Added database backups | `feat(infrastructure): add backups` |
| 16.10. | Infrastructure | 4 | Implemented secrets management, fixed deployment/env variables | `PR #29` (Deployment envs) |
| 17.10. | Monitoring | 1 | Added Sentry logging | `feat(app): add sentry loging` |
| 21.10. - 22.10. | Backend | 8 | Added ČSAS bank connection | `PR #32` (Fix React OAuth) |
| 29.10. - 30.10. | Backend | 5 | Implemented transaction encryption, added bank scraping | `PR #39` (CSAS Scraping) |
| 30.10. | Monitoring | 6 | Implemented Loki logging and basic Prometheus metrics | `PR #42` (Prometheus metrics) |
| 9.11. | Monitoring | 2 | Added custom Prometheus metrics | `PR #46` (Prometheus custom metrics) |
| 11.11. | Tests | 1 | Investigated and fixed broken Pytest environment | `fix(tests): set pytest env` |
| 11.11. - 12.11. | Features & Deployment | 6 | Added cron support, email sender service, updated workers & image | `PR #49` (Email), `PR #50` (Update workers) |
| 18.9 - 14.11 | Documentation | 8 | Updated report.md, design docs, and tfvars.example | `Create design.md`, `update report` |
| **Total** | | **105** | | |
### Dejan

| Date | Activity | Hours | Description |
|------|----------|-------|-------------|
| 25.9. | Design | 2 | 6design |
| 9.10 to 11.10. | Backend APIs | 12 | Implemented backend APIs |
| 13.10 to 15.10. | Frontend Development | 8 | Created user interface mockups |
| Continually | Documentation | 6 | Documented the dev process |
| 21.10 to 23.10 | Tests, frontend | 10 | Test basics, balance charts, and frontend improvements |
| 28.10 to 30.10 | CI | 6 | Integrated tests with a test database setup in GitHub workflows |
| 28.10 to 30.10 | Frontend | 7 | UI improvements and exchange rate API integration |
| 4.11 to 6.11 | Tests | 6 | Test fixes and improvements, more integration and e2e tests |
| 4.11 to 6.11 | Frontend | 6 | Fixes, improved UI, added support for mobile devices |
| **Total** | | **63** | |
### Group Total: [XXX.X] hours

---

## Final Reflection

### What We Learned

[Reflect on the key technical and collaboration skills learned during this project]
### Challenges Faced

#### Slow cluster performance

This was caused by a single SATA SSD disk running all the VMs. It was solved by adding a second NVMe disk dedicated to the Talos VMs.

[Describe the main challenges and how you overcame them]
### If We Did This Again

#### Different framework

FastAPI lacks usable built-in support for database migrations, and integrating Alembic was a bit tricky. Integrating the FastAPI auth system with the React frontend was also tricky, since there is no official project template. Using .NET (which we considered initially) would probably have solved these issues.

[What would you do differently? What worked well that you'd keep?]
### Individual Growth

#### Lukas

This course finally forced me to learn Kubernetes (it has been on my TODO list for at least 3 years). I had some prior experience with Terraform/OpenTofu from work, but this improved my understanding of it.

The biggest challenge for me was time tracking, since I am used to tracking time per project, not per task (and I am bad even at that :) ).

It was also an interesting experience to be the one responsible for the initial project structure/design/setup used not only by myself.

#### Dejan

[Personal reflection on growth, challenges, and learning]
---

**Report Completion Date**: [Date]

**Last Updated**: 15.10.2025