---
title: Workshop — Deploy your own ChatGPT
publish: false
date: 2026-03-03
tags:
  - workshop
  - HECIA
  - containerization
  - docker
  - openwebui
description: A detailed guide for the "Deploy your own ChatGPT" workshop
---

# Workshop — Deploy your own ChatGPT

> By the end of this workshop, you will have a private AI assistant running on your own server, accessible from anywhere, connected to a real AI model via API.

---

## 0. Some context before we start

### What is a VPS?

A **VPS** (Virtual Private Server) is a computer that lives in a data centre somewhere and runs 24/7, connected to the internet. You don't physically touch it — you control it remotely from your own laptop by typing commands into a **terminal**.

Think of it like renting a studio flat: the building (the physical server) belongs to someone else, but you get your own private, locked space inside it to do whatever you want.

### What is a terminal?

A **terminal** is a text-based way to talk to a computer. Instead of clicking on icons, you type instructions and the computer responds with text. It might look intimidating, but the principle is always the same: you type a command, you press Enter, and the computer does what you asked.

> 💡 **Throughout this workshop, every time you see a grey box with text in it, it's a command to type into the terminal — then press Enter.**

### What is Docker?

Normally, installing software on a server is painful: every application needs specific versions of specific dependencies, and they often conflict with each other. A year later, nobody remembers what was installed or why, and the system is a mess.

**Docker** solves this by wrapping each application inside a **container**: a completely isolated box that contains the application _and_ everything it needs to run. The rest of the server doesn't know it exists, and it doesn't know about the rest of the server.
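Once Docker is installed (we'll do that in Section 1), you can see this isolation for yourself. A quick illustrative experiment — the image name `alpine` here is just a convenient, tiny Linux distribution, not something this workshop uses later:

```shell
# Start a throwaway container (--rm deletes it when it exits)
# and ask it which operating system it thinks it's running:
docker run --rm alpine cat /etc/os-release

# Ask the server itself the same question:
cat /etc/os-release
```

The container reports Alpine Linux; your server reports Ubuntu. Two different worlds on the same machine.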
Two concepts you need to know:

| Concept | What it is | Real-world analogy |
|---|---|---|
| **Image** | A precise recipe describing how to build the application and all its ingredients | A cake recipe written on paper |
| **Container** | A live, running instance created from that recipe | The actual cake you baked from the recipe |

You can bake many cakes from the same recipe. You can run many containers from the same image. The recipe doesn't change when you eat the cake.

**Docker Compose** lets you describe your containers in a simple configuration file rather than memorising long commands. It's like a checklist: you write down what you want once, and Docker takes care of making it happen.

---

## 1. Install Docker

We'll install Docker from its official repository.

### What is a repository?

When you install apps on your phone, you get them from the App Store or Google Play — a trusted place that verifies what you're downloading is legitimate. On Linux servers, software is installed from **repositories**: official, trusted sources that also handle updates automatically.

Before we can install Docker, we need to tell Ubuntu _where_ to find it and that it can trust it.

### 1.1 — Add Docker's repository and GPG key

A **GPG key** is a cryptographic signature. Ubuntu uses it to verify that the software you're downloading actually comes from Docker and hasn't been tampered with in transit — like a wax seal on a letter.

Run these commands one block at a time:

```bash
sudo apt update
sudo apt install ca-certificates curl
```

> `sudo` means "run this as administrator". `apt` is Ubuntu's package manager — the tool that installs software. `update` refreshes the list of available software.

```bash
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
```

> This creates a folder to store trusted keys, downloads Docker's GPG key, and makes it readable by the system.
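If you want to check the wax seal yourself, you can inspect the key you just downloaded without importing it:

```shell
# Print the key's details, including its fingerprint,
# without adding it to any keyring:
gpg --show-keys /etc/apt/keyrings/docker.asc
```

You can then compare the fingerprint it prints against the one published in Docker's official installation documentation. This step is optional — `apt` will refuse packages that don't match the key anyway.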
```bash
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "$VERSION_CODENAME")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.asc
EOF
```

> This registers Docker's repository as a trusted source for Ubuntu. From now on, Ubuntu knows where to find Docker packages and that they're legitimate.

```bash
sudo apt update
```

> Refresh the package list again — this time it includes Docker's repository.

### 1.2 — Install Docker

```bash
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

> This installs Docker Engine (the core), its command-line tool, and the Compose plugin. When Ubuntu asks `Do you want to continue? [Y/n]`, type `Y` and press Enter.

### 1.3 — Verify the installation

```bash
docker --version
docker compose version
```

Both commands should print a version number. If you see one, Docker is correctly installed.

---

## 2. Deploy OpenWebUI

**OpenWebUI** is an open-source application that gives you a ChatGPT-like web interface, running entirely on your own server. Your conversations stay private, you choose which AI model powers it, and you control everything.

### 2.1 — Create the project directory

On a Linux system, files are organised in folders, just like on your computer. We'll create a dedicated folder for this project.

```bash
mkdir -p ~/projects/openwebui
cd ~/projects/openwebui
```

> `mkdir` creates a folder ("make directory"). The `~` is shorthand for your home folder — the equivalent of "Documents" on your laptop. `cd` moves you into that folder ("change directory"). Everything you do next will happen inside it.

### 2.2 — Write the Compose file

Now we'll create the file that tells Docker what to run and how.
Create a file named `compose.yaml` with the following content (your instructor will show you how to open a text editor in the terminal):

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main-slim
    ports:
      - "3000:8080"
    volumes:
      - open-webui:/app/backend/data
    restart: unless-stopped

volumes:
  open-webui:
```

Here's what each line means:

- **`services`** — the list of containers we want to run. Here we have one, called `openwebui`.
- **`image`** — the Docker image (the "recipe") to use. Docker will download it automatically from the internet on first run.
- **`ports`** — containers are isolated, so by default nothing from the outside can reach them. This line opens a door: it says _"connect port 3000 on the server to port 8080 inside the container"_. Think of it like a call-forwarding rule: calls arriving on 3000 get redirected to 8080 inside the box.
- **`volumes`** — containers are ephemeral: if you restart one, it starts fresh and everything inside is gone, like closing an incognito window. Volumes are a way to store data _outside_ the container so it survives restarts. Here, your conversations and settings will be saved permanently.
- **`restart: unless-stopped`** — if the container crashes, or if the server reboots, Docker will automatically restart it. It only stays off if you explicitly stop it yourself.
- **`volumes` (bottom)** — this is where we actually tell Docker to create the storage space named `open-webui`. The mention above just says "use it"; this line is where it gets created.

### 2.3 — Start the container

```bash
docker compose up -d
```

> `up` means "start everything described in the Compose file". `-d` means "in the background" (detached) — so the application keeps running even after you close the terminal, like a server that doesn't stop when you log off.

On first run, Docker needs to download the image from the internet. This takes a minute or two.
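At any point, you can check what Compose is running. A quick sanity check (the exact output layout varies between Docker versions):

```shell
# List the containers managed by this Compose file,
# with their state and port mappings:
docker compose ps

# Once the container is up, the web interface should answer locally.
# Seeing an HTTP status line means OpenWebUI is serving:
curl -sI http://localhost:3000
```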
You can watch what's happening with:

```bash
docker compose logs -f
```

> This shows the container's live output — like looking through a window into the box as it starts up. Press `Ctrl+C` when you want to stop watching. The container keeps running.

When you see a line mentioning that the server is ready, you're good to go.

### 2.4 — Access your instance

Open your browser and go to:

```
https://<shortname>.workshop.hec-ia.com
```

Replace `<shortname>` with your shortname: first initial + last name, all lowercase. For example, **John Smith** → `jsmith`, so the URL is `https://jsmith.workshop.hec-ia.com`.

> **First launch:** OpenWebUI will ask you to create an account. The first account created automatically becomes the administrator — that's you.

---

## 3. Configure an API key

OpenWebUI is just the interface — the visual layer. On its own, it can't generate any text. It needs to be connected to an AI model to do anything useful.

Think of it like a television set with no signal: the screen is there and works perfectly, but you need to plug it into a source to actually watch anything.

We'll connect it to an AI provider via an **API key**. An API key is a unique secret token — similar to a password — that identifies you to the AI provider and lets OpenWebUI send requests on your behalf.

### 3.1 — Open the admin settings

In OpenWebUI, go to:

**Settings → Admin Panel → Connections**

### 3.2 — Add your API key

---

## Useful commands

Once the workshop is over, here are the commands you'll need to manage your instance:

| Command | What it does |
|---|---|
| `docker compose up -d` | Start the application |
| `docker compose down` | Stop the application (your data is preserved) |
| `docker compose logs -f` | Watch live logs |
| `docker compose pull` | Download the latest version of the image |
| `docker ps` | List all running containers |

> To run these commands, you need to be in the project folder first: `cd ~/projects/openwebui`
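One last peek under the hood. When you chat in OpenWebUI, it sends a request much like the one below to your provider, authenticated with your API key. This is a sketch assuming an OpenAI-compatible provider — the URL, model name, and the `OPENAI_API_KEY` variable are illustrative, not part of the workshop setup:

```shell
# A raw chat request, exactly as OpenWebUI makes on your behalf.
# The API key travels in the Authorization header:
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

The interface you deployed is, at its core, a friendly way of building, sending, and displaying requests like this one.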