Containerize a Spring Boot application with Podman Desktop

FONT: https://developers.redhat.com/articles/2023/10/19/containerize-spring-boot-application-podman-desktop#running_the_containerized_application


October 19, 2023

Cedric Clyburn


Spring and Spring Boot are developer favorites for building Java applications that run in the cloud. Spring is known for being production-ready, complementing containerization very well. According to the State of Spring 2022 report, Kubernetes has become the dominant platform for running Spring apps. So, how can we take a Spring application, containerize it, and run it locally? Let’s explore this by using Podman Desktop, an open-source tool to seamlessly work with containers and Kubernetes from your local environment.

Prerequisites

  • Spring Boot Application: For this article, we’ll use the popular Spring PetClinic sample application on GitHub (Figure 1). Feel free to also use your own project or start from the Spring Initializr.
A screenshot of the Spring PetClinic repository on GitHub
Figure 1: The Spring PetClinic repository on GitHub.
  • Podman Desktop: Let’s use Podman Desktop, the powerful GUI-based tool for deploying and managing containers using the Podman container engine. Once installed, you’ll be ready to start containerizing your Spring application (Figure 2).
A screenshot of the Podman Desktop dashboard.
Figure 2: The Podman Desktop dashboard.

Containerizing the Spring Boot application

Let’s get started by cloning the application’s source code, if you’re using the PetClinic repository.

git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic

While we could use Maven to build a jar file and run it, let’s jump straight into creating a Containerfile in the project’s root directory, which will serve as the blueprint for the container image we’ll create later (analogous to Docker’s Dockerfile). You can create the file with the command touch Containerfile or simply create it from your IDE. Let’s see what the Containerfile looks like for this sample Spring Boot application:

# Start with a base image that has Java 17 installed.
FROM eclipse-temurin:17-jdk-jammy

# Set a default directory inside the container to work from.
WORKDIR /app

# Copy the special Maven files that help us download dependencies.
COPY .mvn/ .mvn/

# Copy only essential Maven files required to download dependencies.
COPY mvnw pom.xml ./

# Download all the required project dependencies.
RUN ./mvnw dependency:resolve

# Copy our actual project files (code, resources, etc.) into the container.
COPY src ./src

# When the container starts, run the Spring Boot app using Maven.
CMD ["./mvnw", "spring-boot:run"]

Let’s take a deeper look at the components that make up this Containerfile:

  • FROM eclipse-temurin:17-jdk-jammy:
    • Purpose: This line sets the foundation for our container.
    • Deep Dive: It tells Podman to use a pre-existing image that already has Java 17 installed. Think of it as choosing a base flavor for a cake before adding more ingredients. We use the eclipse-temurin image because it’s a trusted source for Java installations.
  • WORKDIR /app:
    • Purpose: Designates a working space in our container.
    • Deep Dive: Containers have their own file system. Here, we’re telling Podman to set up a folder named ‘app’ and make it the default location for any subsequent operations.
  • COPY commands:
    • Purpose: They transfer files from our local system into the container.
    • Deep Dive: The first COPY grabs Maven’s wrapper configuration (the .mvn directory); Maven is the tool Java developers use to manage app dependencies. The second copies over the remaining build files: the Maven wrapper script (mvnw) and the project’s details (pom.xml).
  • RUN ./mvnw dependency:resolve:
    • Purpose: Downloads necessary libraries and tools for the app.
    • Deep Dive: This command activates Maven to fetch everything our app needs to run. By doing this in the Containerfile, we ensure the container has everything packaged neatly.
  • COPY src ./src:
    • Purpose: Add our actual application code.
    • Deep Dive: Our app’s logic, features, and resources reside in the src directory. We’re moving it into the container so that when the container runs, it has our complete app.
  • CMD ["./mvnw", "spring-boot:run"]:
    • Purpose: Start our application!
    • Deep Dive: This command is the final step. This line tells Podman to run our Spring Boot application using Maven when our container launches.

This Containerfile is ready to be used with Podman Desktop to create a container image of our Spring Boot application (Figure 3). Before building the image for this application, let’s double check the directory structure and Containerfile in our IDE of choice.

A screenshot of the Containerfile in VSCode.
Figure 3: The Containerfile in VSCode.

Building the container image with Podman Desktop

We can now build our container image with Podman Desktop by first heading to the Images section of Podman Desktop and selecting the Build an Image button in the top-right corner (Figure 4).

A screenshot of the Podman Desktop with the build image button highlighted.
Figure 4: The Podman Desktop Build Image is highlighted.

This will open a menu where you can select the path to our previously created Containerfile, which should be in the root directory of the spring-petclinic folder. With the Containerfile selected, we can also give our container image a name, for example, petclinic. Now, click on Build, and you’ll see each of the layers of the image being created. You can find this in your local image registry (the Image section in Figure 5).

A screenshot of the Podman Desktop build image menu.
Figure 5: The Podman Desktop Build Image menu.
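
If you prefer the terminal, the same image can be built with the Podman CLI; this is a minimal equivalent of the steps above, run from the project root where the Containerfile lives:

podman build -t petclinic .

podman build picks up the Containerfile in the current directory automatically, so the result matches what the Build Image menu produces.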

Running the containerized application

Fantastic! Let’s return to the Images section to see the containerized Spring Boot application, now built and tagged as an image, as well as the eclipse-temurin base image that was downloaded to build our petclinic image. We can easily run this image as a container on our system using the Run icon to the right of our container image (Figure 6).

A screenshot of the Podman Desktop run container.
Figure 6: The Podman Desktop showing where to run the container.

Under Port Mapping, make sure to assign port 8080 of the container to port 8080 of the host. Feel free to leave all other settings as default. Click Start Container to launch the containerized instance of your Spring Boot application (Figure 7).

A screenshot showing the start container button in the Podman Desktop menu.
Figure 7: Click the start container button in the Podman Desktop menu.
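
For reference, a single CLI command covers both the run and the port mapping configured above; a minimal sketch, assuming the image was tagged petclinic:

podman run -d --name petclinic -p 8080:8080 petclinic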

Now, with the container up and running, let’s open the container’s URL in the browser using the handy open browser button within the Podman Desktop user interface (Figure 8).

A screenshot of the Podman Desktop highlighting the open browser button.
Figure 8: The Podman Desktop highlighting the open browser button.

Perfect, looks like our Spring Boot application is running, thanks to the startup command in our Containerfile, as well as the port mapping we configured in Podman Desktop (Figure 9).

A screenshot showing the Spring Boot application is running in Chrome.
Figure 9: The Spring Boot application is running in Chrome.

Now we can see the PetClinic application running in our browser, but that’s not all. For tasks that would normally involve opening a terminal, we can now use Podman Desktop instead (Figure 10). This might include opening a shell inside a container to debug and modify settings, viewing a container’s logs, or inspecting environment variables. All of this can be done under the Container Details section, which opens automatically after starting the container.

A screenshot of the Podman Desktop terminal.
Figure 10: The Podman Desktop terminal.
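
The same container details are also reachable from the CLI; a few equivalents, assuming the container is named petclinic as above:

podman logs petclinic                # view the container's logs
podman exec -it petclinic /bin/sh    # open a shell inside the running container
podman inspect petclinic             # inspect configuration, ports, and environment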

Wrapping up

We have gone from local Spring Boot code to containerizing the application and running it with Podman Desktop! Now, as a container, we can share this application across environments using registries like Quay.io and Docker Hub, and deploy it using Kubernetes and OpenShift to a variety of different cloud providers.

Last updated: January 15, 2025


Install Podman Desktop on Debian (without flatpak)

March 27, 2024

FONT: https://aliefee.page/blog/post/install-podman-desktop-on-debian-without-flatpak#install-podman-desktop-on-debian-without-flatpak-


Since there is no official Debian package for Podman Desktop, we need to download the official binary archive and install it manually, in case you don’t want to use Flatpak.

This tutorial assumes:

  • You have a working Debian installation (or a Debian-based distro like Ubuntu, Linux Mint, Kali, MX Linux, etc.)
  • You are comfortable using the terminal and have sudo access
  • You have the latest stable podman installed on your machine. If not, just run: sudo apt install podman (a quick sanity check follows this list)
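
To confirm Podman itself works before installing the Desktop GUI, you can run a quick check (the hello image is a small test image published by the Podman project; skip it if you are offline):

podman --version
podman run --rm quay.io/podman/hello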

1. Download Podman Desktop

First, head over to the Podman Desktop download page and download the archive for the latest version.

Podman Desktop Download Page

2. Move the downloaded file to /opt

We will install Podman Desktop inside the /opt directory. Open your terminal and move the downloaded file to /opt with the command below.
Replace <download-directory> and <version> with the path to the directory where you downloaded the file and the version you downloaded.

sudo mv <download-directory>/podman-desktop-<version>.tar.gz /opt

3. Extract the archive

cd /opt
sudo tar -xvf podman-desktop-<version>.tar.gz
sudo rm podman-desktop-<version>.tar.gz

sudo mv podman-desktop-<version> podman-desktop

On the question of why we put binaries into /opt:
It is recommended to install software from official repositories whenever possible, to avoid clashes between different packagings of the same software. But in our case, since there is no official Debian package for Podman Desktop as of the date I composed this tutorial (March 2024), we install it into /opt as a convention.
You can read further about the Filesystem Hierarchy Standard: https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s13.html

I recommend you check whether there is an official package available in the apt repositories before going ahead:

apt search podman-desktop

If you find an official package, you can install it using apt. And you can easily revert the process at this step:

sudo rm -rf /opt/podman-desktop-*

4. Ensure podman-desktop can be executed

sudo chmod +x /opt/podman-desktop/podman-desktop

At this step, we will put a symbolic link in the /usr/local/bin directory so that it can be executed from your terminal:

sudo ln -s /opt/podman-desktop/podman-desktop /usr/local/bin/podman-desktop

Check that you can run podman-desktop from your terminal:

which podman-desktop
# /usr/local/bin/podman-desktop
# if you see this output, you can run podman-desktop from your terminal

5. Add a desktop entry

You can now run podman-desktop from your terminal.
But you’ll want to run it from your desktop environment’s menu too, right?
To achieve that, we need to create a .desktop file in /usr/local/share/applications:

/usr/local/share/applications/podman-desktop.desktop

[Desktop Entry]
Name=Podman Desktop
Exec=podman-desktop
Terminal=false
Type=Application
Icon=podman-desktop
StartupWMClass=Podman Desktop
Categories=Development;

Let’s do it with a few commands:

cd /tmp
echo -e "[Desktop Entry]\nName=Podman Desktop\nExec=podman-desktop\nTerminal=false\nType=Application\nIcon=podman-desktop\nStartupWMClass=Podman Desktop\nCategories=Development;" > podman-desktop.desktop
sudo desktop-file-install --dir=/usr/local/share/applications ./podman-desktop.desktop
rm podman-desktop.desktop

6. [Optional] Install the icon file

You can copy and run all the commands at once:

cd /tmp
wget https://aliefee.page/podman-desktop-icons.tar # download
tar -xvf podman-desktop-icons.tar # extract
mv podman-desktop-icons icons # rename
sudo cp -r ./icons /usr/local/share # copy
rm -rf icons podman-desktop-icons.tar # clean up

Congrats! You now have Podman Desktop set up perfectly!

Containers in 2025: Docker vs. Podman for Modern Developers

FONT: https://www.linuxjournal.com/content/containers-2025-docker-vs-podman-modern-developers


by George Whittaker

on August 26, 2025


Introduction

Container technology has matured rapidly, but in 2025, two tools still dominate conversations in developer communities: Docker and Podman. Both tools are built on OCI (Open Container Initiative) standards, meaning they can build, run, and manage the same types of images. However, the way they handle processes, security, and orchestration differs dramatically. This article breaks down everything developers need to know, from architectural design to CLI compatibility, performance, and security, with a focus on the latest changes in both ecosystems.

Architecture: Daemon vs. Daemonless

Docker’s Daemon-Based Model

Docker uses a persistent background service, dockerd, to manage container lifecycles. The CLI communicates with this daemon, which supervises container creation, networking, and resource allocation. While this centralized approach is convenient, it introduces a single point of failure: if the daemon crashes, every running container goes down with it.

Podman’s Daemonless Approach

Podman flips the script. Instead of a single daemon, every container runs as a child process of the CLI command that started it. This design eliminates the need for a root-level service, which is appealing for environments concerned about attack surfaces. Containers continue to run independently even if the CLI session ends, and they can be supervised with systemd for long-term stability.

Developer Workflow and CLI

Familiar Command Structure

Podman was designed as a near drop-in replacement for Docker. Commands like podman run, podman ps, and podman build mirror their Docker equivalents, reducing the learning curve. Developers can often alias docker to podman and keep using their existing scripts.
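
For example, a minimal alias (assuming Podman is installed and on your PATH):

alias docker=podman
docker run --rm alpine echo "hello"   # transparently executed by Podman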

Run an NGINX container

Docker

docker run -d --name web -p 8080:80 nginx:latest

Podman

podman run -d --name web -p 8080:80 nginx:latest

GUI Options

For desktop users, Docker Desktop remains polished and feature-rich. However, Podman Desktop has matured significantly. It now supports Windows and macOS with better integration, faster file sharing, and no licensing restrictions, making it appealing for enterprise environments.

Image Building and Management

Docker’s BuildKit

Docker’s modern builds leverage BuildKit, enabling parallelized builds, advanced caching, and multi-architecture support. This makes building complex applications efficient and portable across ARM and x86 environments.

Podman with Buildah

Podman integrates with Buildah, enabling rootless image building, a huge win for CI/CD pipelines. Recent versions also added distributed builds (podman farm build), making it easier to scale builds across multiple systems, a feature Docker introduced earlier with BuildKit.

Build an image with Podman

podman build -t myapp:latest .
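
The distributed-build feature mentioned above lives under the podman farm subcommand; a rough sketch of the workflow, where myfarm, the connection name node1, and the registry path are hypothetical:

podman farm create myfarm                            # define a build farm
podman farm update --add node1 myfarm                # add a previously configured ssh connection
podman farm build -t quay.io/example/myapp:latest .  # build on every node and push a multi-arch manifest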

Rootless Containers

Rootless operation is where Podman truly shines. From the ground up, Podman runs containers as a regular user, mapping the root user inside the container to a non-privileged user on the host. Docker added rootless support later, but it’s still not the default configuration. For developers working in multi-user systems or shared CI runners, Podman’s approach is safer and easier to configure.
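
You can see the user-namespace mapping for yourself on a rootless Podman setup:

podman run --rm alpine id                # reports uid=0 (root) inside the container
podman unshare cat /proc/self/uid_map    # shows how container UIDs map to host UIDs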

Security Considerations

  • Podman minimizes risks by avoiding a long-running privileged daemon and using tighter default permissions.
  • Docker, while improved with rootless mode and better defaults in Docker Engine 28, still defaults to rootful mode in many deployments.
  • Both support SELinux, AppArmor, and Seccomp for additional isolation, but Podman’s integration is deeper in SELinux-enabled environments.

Kubernetes and Orchestration

Docker and Compose

Docker remains a leader for local development with Docker Compose, offering a quick way to spin up multi-container stacks. For clustering, Docker Swarm still exists but has mostly stagnated.

Podman and Kubernetes Alignment

Podman embraces a Kubernetes-first design. It allows creating pods locally, exporting manifests with podman generate kube, and even running those manifests directly with podman play kube. This makes Podman an excellent choice for teams moving workloads to Kubernetes.

Generate Kubernetes YAML from a running Podman pod

podman generate kube mypod > mypod.yaml
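
The generated manifest can then be replayed locally with the podman play kube workflow mentioned above:

podman play kube mypod.yaml          # recreate the pod from the manifest
podman play kube --down mypod.yaml   # tear it down (newer releases also offer 'podman kube down')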

Performance and Resource Usage

  • Startup Speed: Docker is marginally faster when starting individual containers because the daemon is always running.
  • Idle Overhead: Podman wins here; no daemon means zero baseline memory usage when idle.
  • Scalability: Podman handles many concurrent containers more gracefully since there’s no central bottleneck.
  • Rootless I/O: Thanks to kernel-level improvements, Podman’s rootless file I/O performance now matches Docker’s native overlay driver performance.

Ecosystem and Compatibility

  • Docker’s API compatibility remains a huge advantage, with wide support in third-party tools and CI systems.
  • Podman has bridged much of this gap with a Docker-compatible API service, enabling tools like Jenkins or Terraform to interact with it almost transparently (see the socket example after this list).
  • Docker Hub remains the dominant public image registry, but Podman works with all OCI-compliant registries seamlessly.
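
Enabling that Docker-compatible API is typically a matter of starting Podman’s socket-activated service and pointing DOCKER_HOST at it; a minimal sketch for a rootless systemd setup:

systemctl --user enable --now podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
# Docker CLI and API clients now talk to Podman through this socket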

Practical Use Cases

Scenario                               Preferred Tool
Multi-user Linux servers               Podman
Legacy pipelines using Docker API      Docker
CI/CD with rootless builds             Podman
Desktop dev with Kubernetes cluster    Docker Desktop or Podman Desktop
Windows container workloads            Docker

Future Outlook

The rivalry between Docker and Podman is less about one replacing the other and more about choosing the right tool for the job. With both runtimes embracing OCI standards and converging on feature parity, developers have the flexibility to mix and match based on project needs. Expect to see:

  • More integrations supporting both runtimes.
  • Continued emphasis on rootless security.
  • Deeper Kubernetes alignment, especially from Podman.

Final Thoughts

Docker and Podman in 2025 represent two mature, powerful tools. Docker excels in compatibility and ease of onboarding, while Podman provides advanced security and a Kubernetes-centric approach. For developers, the good news is clear: whether you choose Docker, Podman, or even both in different environments, your workflows remain fast, secure, and future-proof.

George Whittaker is the editor of Linux Journal, and also a regular contributor. George has been writing about technology for two decades, and has been a Linux user for over 15 years. In his free time he enjoys programming, reading, and gaming.

…. around Galiza…

Day one: motorway to Logroño, lunch on Calle Laurel, a nap in the afternoon, dinner at Tagliatella, and a night at Hotel Ciudad de Logroño.

Breakfast included, all good. We drive toward Bilbao and the A-8, stop at the Deva/Potes exit for a bite, and stop for lunch at “el Mancu”, near Gijón: cachopos and cachopines… We arrive in Viveiro, a veeery decent apartment… supermarket and a stroll around town.

31/7/2025: we head for Estaca de Bares, overshoot it and end up at Porto de Bares… a stunning beach! We visit the lighthouse and carry on to the cape… We visit Ortigueira, tapas next to Laura’s terrace… A nap in the afternoon.

1/8/2025: Covas beach, lunch next to the beach, Adrià Fabada… In the afternoon, a visit to the mine cable car…

2/8/2025: We leave for Santiago; the Padrón plan didn’t work out, and we are offered an apartment 3 minutes from the cathedral… great for shopping. Lunch at home, an afternoon walk and a light dinner in the old town, various purchases… an impressive apartment. The queue for the bravas? Too much of a queue at “La Tita”.

3/8: we head for Muxía, Fisterra, and…

4/8: Cambados, O Grove, beach + Castros de Baroña…

5/8: we go to Cea, next to O Carballiño: octopus.

6/8: We meet Pepita and Mª Ángeles… They treat us to everything! Pozas de Melón. Bread from Digna’s bakery in Cea, impressive…

7/8: out with Miguel, near Ribadavia.

8/8: we start the journey back… toward Sestao… a walk around Bilbao.

9/8: home…

11/8: wildfires!!!!

A bit of Linux…

FONT:

https://www.linuxjournal.com/content/how-build-custom-distributions-scratch

How to Build Custom Distributions from Scratch

Introduction

In a world teeming with Linux distributions — from Ubuntu to Arch, Debian to Fedora — the idea of building your own may seem daunting, if not redundant. Yet, for many technologists, enthusiasts, and developers, creating a custom Linux distribution isn’t just an exercise in reinvention; it’s an act of empowerment. Whether your goal is to tailor a lightweight OS for embedded devices, create a secure workstation, develop an education-focused system, or simply understand Linux more intimately, building your own distribution is one of the most fulfilling journeys in open-source computing.

This guide walks you through every stage of creating your own Linux distribution — from selecting core components to building, customizing, and distributing your personalized operating system.

Understanding the Basics

What is a Linux Distribution?

A Linux distribution (or “distro”) is a complete operating system built on the Linux kernel. It includes:

  • Kernel – The core interface between hardware and software.
  • Init System – Handles booting and service management (e.g., systemd, OpenRC).
  • Userland Tools – Basic utilities from projects like GNU Coreutils and BusyBox.
  • Package Manager – Tool to install, upgrade, and remove software (e.g., APT, Pacman, DNF).
  • Optional GUI – A desktop environment or window manager (e.g., GNOME, XFCE, i3).

Why Create Your Own Distribution?

Reasons vary, but common motivations include:

  • Learning – Deepen your understanding of system internals.
  • Performance – Remove bloat for a leaner, faster system.
  • Branding – Create a branded OS for an organization or product.
  • Customization – Tailor software stacks for specific use-cases.
  • Embedded Applications – Create firmware or OS images for hardware devices.

Planning Your Custom Linux Distro

Define Your Goals

Start by asking:

  • Who is the target user?
  • What hardware should it support?
  • Will it be a desktop, server, or headless system?
  • Should it boot live or be installed?

Choose a Foundation

You can either:

  • Build from scratch: Using projects like Linux From Scratch (LFS).
  • Remix an existing distro: Customize Ubuntu, Arch, or Debian using tools like Cubic or Archiso.

Select Core Components

  • Kernel: Choose between vanilla, long-term support (LTS), or custom-patched kernels.
  • Init System: Popular choices include systemd (modern), SysVinit (classic), and OpenRC (lightweight).
  • Shell: Bash, Zsh, Fish — depending on user preference.
  • File System: ext4, Btrfs, XFS depending on performance and snapshotting needs.

Tools and Methods for Building a Custom Distro

Linux From Scratch (LFS)

LFS is a book and toolkit that walks you through compiling and configuring every part of a Linux system. You gain a bare-metal understanding of how each component fits together.

  • Pros: Maximum control, great learning experience.
  • Cons: Time-consuming, steep learning curve.

Yocto Project

Ideal for building embedded Linux systems.

  • Pros: Powerful, flexible, industry standard for embedded development.
  • Cons: Complex build system, not beginner-friendly.

Archiso

Allows you to create a custom Arch Linux live ISO.

  • Pros: Lightweight, rolling-release, Arch’s simplicity.
  • Cons: Arch’s bleeding edge can introduce instability.

Debian Live Build

Create live bootable Debian-based systems.

  • Pros: Extensive documentation, stable base.
  • Cons: Slightly more involved setup process.

Ubuntu Customization Tools (Cubic, Systemback)

GUI tools for modifying Ubuntu ISOs.

  • Pros: User-friendly, great for quick custom ISOs.
  • Cons: Limited to Ubuntu base.

Step-by-Step Example: Creating a Minimal Distro with Debian Live Build

Let’s walk through building a Debian-based custom live system.

Step 1: Setup Environment

sudo apt install live-build
mkdir mydistro && cd mydistro
lb config

Step 2: Customize Configuration

Edit configuration files under config/:

  • config/package-lists/my.list.chroot: Add your desired packages (see the example after this list).
  • config/includes.binary/isolinux: Add branding.
  • config/hooks: Scripts to run at build time.
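
As an illustration, a package list is just a plain-text file of package names, one per line; the selection below is an assumption, not part of the official example:

# config/package-lists/my.list.chroot
xfce4
firefox-esr
network-manager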

Step 3: Build the Image

sudo lb build

Step 4: Test

Use VirtualBox or QEMU to test the generated ISO:

qemu-system-x86_64 -cdrom live-image-amd64.hybrid.iso

Advanced Customization

Theming and Branding

  • Replace logos in the bootloader and desktop.
  • Customize wallpapers, GTK/QT themes, shell prompt.

Preconfigured Desktop Environments

  • Provide ready-to-use desktop setups with preferred settings.
  • Add user accounts, aliases, and shell scripts.

Security and Performance

  • Strip unnecessary daemons.
  • Harden kernel and firewall configurations.
  • Integrate audit and SELinux/AppArmor policies.

Automation and CI

  • Use scripting to automate builds.
  • Integrate with CI/CD systems (e.g., Jenkins, GitHub Actions) to auto-generate ISO releases (see the script sketch after this list).
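
A minimal wrapper script for repeatable builds might look like the following; the file name and ISO naming scheme are assumptions for illustration:

#!/bin/sh
# rebuild.sh - clean previous artifacts and rebuild the live image
set -e
sudo lb clean      # remove artifacts from the previous build
lb config          # regenerate configuration from saved options
sudo lb build      # build the ISO
mv live-image-amd64.hybrid.iso "mydistro-$(date +%Y%m%d).iso"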

Testing and Debugging

Boot Testing

Use QEMU, VirtualBox, or real hardware. Check:

  • Boot time
  • Device compatibility
  • Package availability

Troubleshooting

  • Check logs: /var/log/, dmesg, journalctl
  • Use strace and gdb for debugging userland binaries.
  • Validate init and bootloader behavior.

Packaging and Distribution

Create ISO and Installer

Tools like genisoimage, xorriso, and calamares (a graphical installer) help prepare your system for distribution.

Hosting Your Distro

  • Use GitHub Releases, SourceForge, or a dedicated server.
  • Offer torrents for better availability.

Documentation

Essential for users and contributors. Include:

  • Installation guide
  • FAQ
  • Changelog
  • Developer setup instructions

Case Studies: Inspiration from the Linux Ecosystem

  • Kali Linux: Security-focused, based on Debian.
  • Puppy Linux: Extremely lightweight, runs in RAM.
  • Garuda Linux: A themed, performance-oriented Arch-based distro.
  • Tails: Privacy-first Debian remix for anonymity.

These projects demonstrate the diversity of purpose and execution possible with custom distributions.

Conclusion

Building a custom Linux distribution is a rewarding challenge that opens the door to a deeper understanding of operating systems. Whether you’re fine-tuning for performance, tailoring for a target audience, or just exploring the core of Linux, the journey from kernel to desktop is one of transformation.

From scratch-built LFS systems to polished Debian or Arch remixes, the tools are there — only your vision and patience are required. Take the leap, forge your OS, and leave your mark in the vast Linux universe.

George Whittaker is the editor of Linux Journal, and also a regular contributor. George has been writing about technology for two decades, and has been a Linux user for over 15 years. In his free time he enjoys programming, reading, and gaming.

Virtual Desktop Infrastructure (VDI) — how it works, and a step-by-step guide to creating a VDI Linux server in the cloud using the RDP protocol

Idan Friedman


FONT:

https://medium.com/@idan441/virtual-desktop-infrastructure-vdi-how-it-works-and-a-step-by-step-guide-to-create-a-vdi-linux-2afa9ee02171

Aug 1, 2022

Virtual Desktop Infrastructure, or VDI for short, is the ability to connect remotely to another machine with a fully supported GUI and I/O devices.

You have probably heard long and complicated explanations of this before. In this three-part series of articles I’ll discuss VDI in simple words, give examples of VDI use in the cloud, and show how to build VDI servers in an automated process with Packer.

In this article I will discuss the concept of VDI and walk you through a step-by-step guide to creating your own VDI machine in the AWS cloud using Ubuntu Linux and the RDP remote desktop protocol.

In the second article I will talk about machine images and Packer, a tool used to automate building machine images. I will give an example of using Packer to automate the creation of VDI machine images.

In the last article I will show you an example of a real VDI project I worked on at Develeap for one of its largest customers, covering the challenges we faced and how we solved them.

Virtual Desktop Infrastructure (VDI) in simple words

Virtual Desktop Infrastructure (VDI) is the ability to log in to and graphically use a remote server as if it were a desktop machine. That is, to seamlessly use a remote server with a desktop GUI driven by the keyboard, mouse, and screen connected to your local computer. In other words, VDI is a remote server doing its calculations with its own CPU and RAM, while the input and output devices shared with it belong to your local machine.

In this configuration the remote server feels like a local machine — you can install any application you need on it, including things like a web browser or a graphical game. The server can include any modern technology such as the Java JVM, a Python interpreter, or C++ libraries — which gives an endless variety of options. Since the server includes a GUI, you can run graphical applications too.

An example of virtual desktop infrastructure (VDI) — a remote server accessed from a local machine. In this session, a shell terminal is running inside the remote server’s GUI.

Sharing your local machine’s input and output devices with the remote server is done using a remote desktop client application. The remote desktop solution allows sharing the computer’s input and output devices with the server. You can share “classic” I/O devices like a screen, a keyboard, or a mouse, but also other devices such as a printer, a video camera, speakers, and even a smart card reader!

The server is required to run an operating system capable of handling remote sessions, and a server application/daemon process to actually accept clients’ session requests, authenticate them, and handle them.

Both the client application on your local machine and the server application/daemon must support the same remote desktop protocol.

Data flow showing a user accessing a remote server using VDI. Note that no physical I/O devices are connected directly to the remote server; instead, the server shares the client machine’s I/O devices.

Advantages of VDI:

Basically, you can run almost any application on-premises, but a remote VDI server can be a sensible choice in some use cases. Running your application remotely in the cloud can bring extra benefits such as high internet speed, pay-per-use resources, and strong security and reliability.

Below are two lists covering some of the advantages of running an application remotely, as well as of running the remote server in the cloud.

Advantages of using VDI as a solution for hosting your application:

  • Accessibility — you can connect to the server from any location, as long as you have network connectivity.
  • No dependency between the local physical machine and the remote server — your server can run a different environment regardless of your local machine.
  • No need to maintain physical devices to connect to your server — the server can live anywhere you want; no need for a dedicated desk, mouse, keyboard, and screen for it.

I want to sharpen the distinction between running an application locally and remotely using VDI:

When running the application locally, it runs on your OS with your local configuration and machine resources. With VDI, a remote server can run your application on a different CPU architecture regardless of the user’s desktop machine. The application uses resources from the server itself and not from your local machine; the only local resources consumed are those needed to run the remote desktop client.

Advantages of using VDI on a cloud platform:

  • Fast access to cloud provider services — as the server is located within the cloud computing center, physical distances are short, so any communication from your server to other applications or fully managed cloud services benefits from high network speed.
  • Pay as you go, scale up as needed — you can change the server’s capabilities according to your needs and pay as you go. By comparison, if you need to upgrade a physical desktop machine, you may have to replace the whole machine, which is very expensive.
  • High-speed internet connection — the server is located in a server farm with an internet connection much faster than a home connection. This is quite useful if you want to move large amounts of data to and from your application.
  • Advanced backups — cloud providers offer advanced backup systems that allow efficient, cheap, real-time backups.
  • High availability — computing centers have advanced power backups and disaster recovery technologies.
  • Better data security — all data is processed on the server itself, without leaving it. The server can also enjoy the enhanced network security features available on cloud platforms.

A speed test result from a remote VDI server. The remote server is located in a server farm with a high-speed internet connection (in this case, the AWS Frankfurt region).

The importance of a good and stable internet connection:

The main disadvantage of VDI is the need for a stable and fast internet connection from the user’s home or office to the server. Generally, a remote connection requires a minimum of 2–5 Mbps and up to 10–15 Mbps, depending on usage and the remote desktop protocol being used.

VDI — Behind the scenes:

Generally, VDI can be done with all major operating systems (the Windows and Linux families), but for this example I will use Ubuntu Linux as the model.

For the server to serve remote desktop sessions, it needs some sort of desktop GUI installed (GNOME, KDE…) and a remote desktop daemon running that shares I/O devices between the client’s machine and the server. Once you have that, the remote server can serve client requests and manage remote sessions.

Which existing solutions are on the market?

There are many remote desktop solutions on the market, such as VNC, RDP, NICE DCV, and VMware, and the list goes on. There are many clients that support the various solutions.

Note that there are also ready-made, out-of-the-box VDI solutions that are easier to manage, scale, and use to provision multiple servers for multiple users.

What is a desktop GUI?

Before getting our hands dirty, let’s start by answering these two most basic questions:

  1. What is the difference between the large variety of Ubuntu desktop versions — Ubuntu, Lubuntu, Xubuntu, Kubuntu, etc.?
  2. What is the difference between the Ubuntu server and Ubuntu desktop versions?

The answer to both questions is the set of packages included in each version.

An Ubuntu desktop machine includes the Linux kernel along with many other packages and installations, including, but not only: a terminal shell, a firewall, drivers, a desktop GUI, and some applications for the user, such as a web browser.

Now let’s answer the first question — Ubuntu supports multiple desktop GUI solutions, each with different requirements, preinstalled applications, style, and performance. There are multiple versions of Ubuntu, each using a different desktop solution: the XFCE desktop is used by Xubuntu, the LXQt desktop by Lubuntu, and the GNOME desktop by the “default”, most popular Ubuntu version.

As for the second question — Ubuntu server is like a lightweight version of the desktop edition with no desktop GUI installed; it includes only what is needed to run on a server. That is, on a server you don’t need a GUI, Firefox or LibreOffice, a media player, and many other things you would typically want on your personal computer. Vice versa, you can think of Ubuntu desktop as an extended version of Ubuntu server with more software installed, giving the user a better experience and more functionality.

Creating a VDI Ubuntu Linux server in the AWS cloud using Ubuntu server and RDP:

Having explained the concepts, let’s try to create a VDI machine.

We will create a remote desktop server in the AWS cloud using Ubuntu Server 20.04. We will install the GNOME desktop, which is the desktop installed on the “default” Ubuntu desktop distribution.

For this example, I chose to work with the Remote Desktop Protocol, or RDP for short (in its Linux adaptation it is known as xRDP). I chose it because it is easy to install and has a wide range of clients. Many clients are available for free — for Windows, Mac, and Linux.

Note that the concept and instructions should be almost the same for any cloud provider (or an on-premises server) and other Linux distributions. Also, managing a VDI server with a remote desktop solution other than RDP works in a similar way to the one I’ll show below.

Steps:

  1. Spin up an EC2 virtual server with Ubuntu Server 20.04.
  2. When creating the EC2 instance, make sure to assign it a security group that is open on port 22 (to SSH into the instance) and port 3389, which is used by the RDP protocol to contact the remote server.

3. Wait for the server to be up and running, then connect to it using SSH.

4. Update the apt package manager so we can install the GUI and RDP packages:

$ sudo apt update

5. Install a GUI solution along with the necessary dependencies. For this example, the following command installs the GNOME desktop GUI, which is the default Ubuntu desktop. Note that the command below installs a minimal version of GNOME with almost nothing in it; you can instead install other GUIs that include all the preinstalled applications that generally come with Ubuntu.

# Option 1 - installs a minimal version of GNOME desktop
$ sudo apt install gnome-session gdm3

# Option 2 - installs the Ubuntu GNOME desktop with all default applications. Bear in mind that it will require more disk space!
$ sudo apt-get install ubuntu-desktop

6. Install the xRDP server application. When installed, xRDP automatically creates an operating-system user named “xrdp”, which will run the RDP server application in the background.

$ sudo apt install xrdp -y

7. xRDP uses SSL certificates to secure authentication between the server and the clients connecting to it. You need to grant the “xrdp” user permission to access them by adding it to the ssl-cert group:

$ sudo usermod -a -G ssl-cert xrdp

8. Now restart the xRDP service and make sure it is running:

$ sudo systemctl restart xrdp
$ sudo systemctl status xrdp
The xRDP service is running. This indicates you can remotely access the machine using any RDP client.

9. Open the OS firewall on port 3389, which is used by the RDP protocol. After reloading, you can verify that xRDP is listening, as shown below.

$ sudo ufw allow 3389
$ sudo ufw reload
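
To confirm xRDP is actually listening on port 3389 (ss ships with iproute2 on Ubuntu):

$ sudo ss -tlnp | grep 3389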

10. At this point we have a GUI and RDP installed — all we need is a user with a password in order to log in. We could use the default “ubuntu” user by setting a password for it, but it has sudo permissions, so a new user with fewer privileges is preferable. Create a new user in the OS and assign it a password.

$ sudo useradd -m <username>
$ sudo passwd <username>

That’s it! You can now log in from your machine to the remote server using any remote desktop client that supports the RDP protocol.

Compatible RDP clients are available on all major OSes. For example, on Windows you can search for “Remote Desktop Connection” in the Start menu, on Mac you can download the remote desktop application from the App Store, and on Linux you can use the Remmina client.

Open your RDP client, fill in the EC2 instance’s public IP address as the server address, then enter the username and password you just set.

Note that you can sometimes have trouble connecting because of graphics drivers; a workaround for this issue is to reduce the screen resolution or color depth in your RDP client’s configuration.

An example of using the Remmina application on Linux to log in to a remote server using the RDP protocol. Here I set a custom resolution and a lower color depth.

Test and play:

Check your physical location from within the remote session — you will be shown the server’s location, not your local machine’s location.

A remote session with the Remmina client connected to a remote server using the RDP protocol. The remote server is running the Firefox web browser. The EC2 instance is located in Frankfurt, Germany, so the location is shown as Germany. Also note the time difference between my client machine (Israel) and the remote machine.

Now let’s run a network speed test on our local machine and on the VDI server — see the high speed and low latency!

A VDI EC2 instance server in AWS Frankfurt region, showing a speed test with 891/1509 Mbps and 1ms latency result.

Now, to top it off, let’s install Firefox and try to watch YouTube via the remote server.

$ sudo apt install firefox

Where can we go from here?

So that is a basic proof of concept of how to create a remote VDI Linux server. Note that we could use other OSes or distributions, such as Debian Linux or Windows, as the remote server, with other remote desktop solutions.

So where can we go from here?

  • We can and should make it more secure — for example, limiting access to VPN connections only.
  • We can do some nice DevOps work here — building a fully automated CI/CD pipeline that creates a VDI remote server with the needed applications and configurations preinstalled.
  • We can scale the number of servers — running hundreds of servers in parallel, serving hundreds of different users.

Liferay Portal

Installing Liferay Portal for an Intranet/Extranet with WildFly, PostgreSQL, and Debian 12

This detailed guide will help you set up a Liferay environment for your organization with the requested features.

1. Prerequisites

1.1. System requirements

  • Debian 12 server (recommended minimum: 8 GB RAM, 4 cores, 100 GB of storage)
  • Java JDK 11 (OpenJDK recommended)
  • WildFly 26+
  • PostgreSQL 14+
  • Liferay DXP 7.4 (latest stable version recommended)

1.2. Required packages

sudo apt update
sudo apt install -y openjdk-11-jdk postgresql postgresql-client unzip

2. Installing PostgreSQL

2.1. Database setup

sudo -u postgres psql

Inside PostgreSQL:

CREATE DATABASE liferaydb WITH ENCODING 'UTF8';
CREATE USER liferayuser WITH PASSWORD 'P@ssw0rdComplex';
GRANT ALL PRIVILEGES ON DATABASE liferaydb TO liferayuser;
\q

2.2. PostgreSQL configuration

Edit /etc/postgresql/14/main/postgresql.conf:

max_connections = 500
shared_buffers = 2GB
work_mem = 16MB

Restart PostgreSQL:

sudo systemctl restart postgresql

3. Installing WildFly

3.1. Download and installation

wget https://download.jboss.org/wildfly/26.1.1.Final/wildfly-26.1.1.Final.zip
unzip wildfly-26.1.1.Final.zip
sudo mv wildfly-26.1.1.Final /opt/wildfly

3.2. Setting it up as a service

sudo groupadd -r wildfly
sudo useradd -r -g wildfly -d /opt/wildfly -s /sbin/nologin wildfly
sudo chown -R wildfly:wildfly /opt/wildfly

Create /etc/systemd/system/wildfly.service:

[Unit]
Description=WildFly Application Server
After=syslog.target network.target

[Service]
User=wildfly
Group=wildfly
ExecStart=/opt/wildfly/bin/standalone.sh -b=0.0.0.0 -bmanagement=0.0.0.0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Start the service:

sudo systemctl daemon-reload
sudo systemctl enable wildfly
sudo systemctl start wildfly
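
Before moving on, it is worth confirming that the service came up and that the default HTTP port is listening (WildFly listens on 8080 by default):

sudo systemctl status wildfly
ss -tlnp | grep 8080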

4. Installing Liferay DXP

4.1. Preparing the environment

wget https://repository.liferay.com/nexus/service/local/repositories/liferay-public-releases/content/com/liferay/portal/liferay-portal-tomcat-bundle/7.4.3.112-ga113/liferay-portal-tomcat-bundle-7.4.3.112-ga113.zip
unzip liferay-portal-tomcat-bundle-7.4.3.112-ga113.zip
mv liferay-portal-7.4.3.112-ga113 /opt/liferay

4.2. Configuration for WildFly

Copy the required WAR files into WildFly’s deployment directory:

cp /opt/liferay/osgi/war/*.war /opt/wildfly/standalone/deployments/

4.3. Liferay configuration

Create /opt/liferay/portal-ext.properties:

jdbc.default.driverClassName=org.postgresql.Driver
jdbc.default.url=jdbc:postgresql://localhost:5432/liferaydb
jdbc.default.username=liferayuser
jdbc.default.password=P@ssw0rdComplex

liferay.home=/opt/liferay

4.4. Starting Liferay

/opt/liferay/tomcat-9.0.56/bin/startup.sh

5. Active Directory integration

5.1. Initial setup

  1. Go to http://localhost:8080
  2. Complete the initial setup wizard
  3. Go to Control Panel → Configuration → System Settings → Authentication

5.2. LDAP configuration

  1. In System Settings, search for “LDAP”
  2. Configure the LDAP connections:
  • Enabled: Yes
  • Base Provider URL: ldap://your-ad-server:389
  • Base DN: DC=yourdomain,DC=com
  • Principal: CN=ldapuser,OU=ServiceAccounts,DC=yourdomain,DC=com
  • Credentials: password
  • User Mapping:
    • Screen Name: sAMAccountName
    • Email Address: mail
    • First Name: givenName
    • Last Name: sn
  • Group Mapping:
    • Group Name: cn
  • Import Method: User Groups
  • User Custom Attributes: (map any additional needed attributes)

5.3. Group synchronization

  1. Under LDAP Groups, configure:
  • Group Search Filter: (objectClass=group)
  • Groups DN: OU=Organitzacio,DC=yourdomain,DC=com
  • Group Name: cn
  • Import Group: Yes
  • Create Role per Group Type: Yes

5.4. Authentication configuration

  1. Under System Settings → Authentication → LDAP, configure:
  • Required: Yes
  • Password Policy: Use LDAP Password Policy
  • Import on Login: Yes

6. Setting up the departmental sites

6.1. Site template

  1. Create a site template for the public sites:
  • Control Panel → Sites → Site Templates
  • Create a template with the required pages (Home, News, Resources, etc.)
  2. Create another template for the private sites

6.2. Automating site creation

You can use a Groovy script to automate site creation:

import com.liferay.portal.kernel.service.*
import com.liferay.portal.kernel.model.*

// Get the services
groupLocalService = GroupLocalServiceUtil
roleLocalService = RoleLocalServiceUtil
userGroupLocalService = UserGroupLocalServiceUtil

// Get the templates
publicTemplate = groupLocalService.getGroup(companyId, "PUBLIC_TEMPLATE_NAME")
privateTemplate = groupLocalService.getGroup(companyId, "PRIVATE_TEMPLATE_NAME")

// Get all the LDAP groups whose names start with "Organitzacio_"
ldapGroups = userGroupLocalService.getUserGroups(-1, -1).findAll { 
    it.name.startsWith("Organitzacio_") 
}

ldapGroups.each { group ->
    deptName = group.name.replace("Organitzacio_", "")

    // Create the public site
    publicSite = groupLocalService.addGroup(
        userId, GroupConstants.DEFAULT_PARENT_GROUP_ID, 
        Group.class.getName(), 0, GroupConstants.DEFAULT_LIVE_GROUP_ID,
        "Organització ${deptName}", null, 0, true, 
        GroupConstants.DEFAULT_MEMBERSHIP_RESTRICTION, 
        "/${deptName.toLowerCase()}", true, true, null)

    // Create the private site
    privateSite = groupLocalService.addGroup(
        userId, GroupConstants.DEFAULT_PARENT_GROUP_ID, 
        Group.class.getName(), 0, GroupConstants.DEFAULT_LIVE_GROUP_ID,
        "Organització ${deptName} (Privat)", null, 0, true, 
        GroupConstants.DEFAULT_MEMBERSHIP_RESTRICTION, 
        "/private-${deptName.toLowerCase()}", true, true, null)

    // Apply the templates
    LayoutLocalServiceUtil.copyLayouts(
        publicTemplate.getGroupId(), false, publicSite.getGroupId(), false)
    LayoutLocalServiceUtil.copyLayouts(
        privateTemplate.getGroupId(), false, privateSite.getGroupId(), false)

    // Assign permissions
    // (add here the logic to assign permissions to the corresponding LDAP groups)
}

7. Creating the custom theme

7.1. Create a new theme

cd /opt/liferay
blade create -t theme my-custom-theme
cd my-custom-theme

7.2. Theme configuration

Edit src/WEB-INF/liferay-plugin-package.properties:

name=My Custom Theme
module-group-id=liferay
module-incremental-version=1
tags=
short-description=
change-log=
page-url=http://www.liferay.com
author=Your Name
licenses=LGPL
liferay-versions=7.4.3.112
theme.parent=classic

7.3. Implementing the dynamic menu

Edit src/META-INF/resources/_diffs/templates/portal_normal.ftl:

<#assign public_sites = restClient.get("/sites?filter=name+eq+'Organització*'&fields=name,friendlyURL") />
<#assign user_sites = restClient.get("/user/me/sites?fields=name,friendlyURL") />

<nav class="navbar navbar-expand-lg navbar-dark bg-dark">
    <div class="collapse navbar-collapse">
        <ul class="navbar-nav mr-auto">
            <li class="nav-item dropdown">
                <a class="nav-link dropdown-toggle" href="#" id="publicSitesDropdown" role="button" data-toggle="dropdown">
                    Public Departments
                </a>
                <div class="dropdown-menu">
                    <#list public_sites.items as site>
                        <a class="dropdown-item" href="${site.friendlyURL}">${site.name}</a>
                    </#list>
                </div>
            </li>
            <li class="nav-item dropdown">
                <a class="nav-link dropdown-toggle" href="#" id="privateSitesDropdown" role="button" data-toggle="dropdown">
                    My Spaces
                </a>
                <div class="dropdown-menu">
                    <#list user_sites.items as site>
                        <a class="dropdown-item" href="${site.friendlyURL}">${site.name}</a>
                    </#list>
                </div>
            </li>
        </ul>
    </div>
</nav>

7.4. Deploying the theme

blade deploy

8. Configuring the terms of use

8.1. Create a custom form

  1. Go to Control Panel → Forms
  2. Create a new form with the required fields (acceptance of the terms)

8.2. Configuring the acceptance flow

  1. Create a custom hook to intercept the first login:
@Component(
    immediate = true,
    service = ServletContextFilter.class
)
public class TermsAcceptanceFilter implements ServletContextFilter {
    @Override
    public void doFilter(
        ServletRequest servletRequest, ServletResponse servletResponse,
        FilterChain filterChain)
        throws IOException, ServletException {

        HttpServletRequest request = (HttpServletRequest)servletRequest;
        HttpServletResponse response = (HttpServletResponse)servletResponse;

        try {
            User user = PortalUtil.getUser(request);

            if (user != null && !isTermsAccepted(user)) {
                if (!request.getRequestURI().contains("/accept-terms")) {
                    response.sendRedirect("/accept-terms");
                    return;
                }
            }
        } catch (Exception e) {
            _log.error(e);
        }

        filterChain.doFilter(servletRequest, servletResponse);
    }

    private boolean isTermsAccepted(User user) {
        // Implement the logic to check whether the user has accepted the terms
        return false; // placeholder so the method compiles
    }

    private static final Log _log = LogFactoryUtil.getLog(TermsAcceptanceFilter.class);
}

8.3. Terms acceptance page

Create a custom page with the form and the logic to record the acceptance.

9. Setting up notifications

9.1. Create a notification service

Implement a service to manage notifications:

@Component(service = NotificationService.class)
public class NotificationServiceImpl implements NotificationService {

    @Reference
    private UserNotificationEventLocalService _userNotificationEventLocalService;

    public void sendNotification(long userId, String message) {
        JSONObject payload = JSONFactoryUtil.createJSONObject();
        payload.put("message", message);

        _userNotificationEventLocalService.addUserNotificationEvent(
            userId,
            "notification-portlet",
            System.currentTimeMillis(),
            userId,
            payload.toString(),
            false,
            null);
    }

    public void markAsRead(long notificationId) {
        UserNotificationEvent notification = 
            _userNotificationEventLocalService.getUserNotificationEvent(notificationId);

        notification.setArchived(true);
        _userNotificationEventLocalService.updateUserNotificationEvent(notification);
    }
}

9.2. Recording acceptances

Create a custom table to record notification acceptances:

CREATE TABLE notification_acceptance (
    acceptanceId BIGSERIAL PRIMARY KEY,
    userId BIGINT NOT NULL,
    notificationId BIGINT NOT NULL,
    acceptanceDate TIMESTAMP NOT NULL,
    FOREIGN KEY (userId) REFERENCES User_(userId)
);

10. Final configuration

10.1. Security configuration

  1. Make sure all pages require authentication:
  • Control Panel → Configuration → System Settings → Security → SSO
  • Check “Require Authentication to Access the Site”

10.2. Memory settings for WildFly

Edit /opt/wildfly/bin/standalone.conf:

JAVA_OPTS="-Xms4g -Xmx8g -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=1024M"

10.3. Restart and verify

sudo systemctl restart wildfly
/opt/liferay/tomcat-9.0.56/bin/shutdown.sh
/opt/liferay/tomcat-9.0.56/bin/startup.sh

11. Maintenance and monitoring

11.1. Scheduled tasks

Set up scheduled tasks for:

  • Regular synchronization with AD
  • Cleanup of old notifications
  • Regular backups

11.2. Monitoring

Set up tools such as:

  • Prometheus + Grafana for metrics
  • ELK Stack for logs
  • Alerts for performance issues

This setup will give you a robust platform for your intranet/extranet with all the requested features. Remember to adapt the passwords and the values specific to your AD environment before putting the system into production.

Trying out Jellyfin

Installing and configuring Jellyfin on Debian 12

Great! I’ll explain, step by step, how to install and configure Jellyfin on your Debian 12 server. Jellyfin is an excellent free-software alternative for building your own media server.

1. Preparing the system

First, update the system:

sudo apt update
sudo apt upgrade -y

2. Installing Jellyfin

Add the official Jellyfin repository

# Install the required dependencies
sudo apt install -y apt-transport-https gnupg

# Add the Jellyfin GPG key
wget -O - https://repo.jellyfin.org/jellyfin_team.gpg.key | sudo apt-key add -

# Add the Jellyfin repository for Debian 12 (bookworm)
echo "deb [arch=$( dpkg --print-architecture )] https://repo.jellyfin.org/debian bookworm main" | sudo tee /etc/apt/sources.list.d/jellyfin.list

# Update the package lists
sudo apt update

Install Jellyfin

sudo apt install -y jellyfin

The Jellyfin service starts automatically after installation. You can check its status with:

sudo systemctl status jellyfin

3. Firewall configuration

If you are using ufw (the default firewall on many distributions):

sudo ufw allow 8096/tcp  # HTTP port
sudo ufw allow 8920/tcp  # HTTPS port (optional)
sudo ufw allow 1900/udp  # For DLNA (optional)
sudo ufw allow 7359/udp  # For local network discovery (optional)

4. Accessing the web interface

You can now access the Jellyfin web interface from any device on the network by browsing to:

http://YOUR_SERVER_IP:8096

where YOUR_SERVER_IP is the IP address of your Debian server.

5. Initial setup

On first access you'll see the setup wizard:

  1. Language: Select Catalan or your preferred language
  2. Create a user account: Set up the administrator user
  3. Configure media libraries:
    • Click “Add Media Library”
    • Select the type (Movies, Shows, Music, etc.)
    • Add the paths where your media files are stored

6. Storage configuration

It's advisable to keep your media files well organized. For example:

  • /media/multimedia/pelicules/
  • /media/multimedia/series/
  • /media/multimedia/musica/

Make sure the jellyfin user has read permissions:

sudo usermod -aG GRUP_AMB_PERMISOS jellyfin

where GRUP_AMB_PERMISOS is the group that has access to the media directories.

7. Configuration for optimal playback

In the Jellyfin web interface:

  1. Go to Administration → Playback
  2. Configure the transcoding options according to your server's hardware
  3. If you have a GPU, you can enable hardware acceleration

8. External access setup (optional)

If you want to access your server from the Internet:

  1. Set up port forwarding on your router (port 8096 TCP)
  2. Consider a dynamic DNS service if you don't have a static IP
  3. Configuring HTTPS for secure remote access is strongly recommended

9. Jellyfin clients

Jellyfin has clients for:

  • Web browsers
  • Android and iOS
  • Android TV, Fire TV, Apple TV
  • Roku, Kodi (with a plugin)
  • Windows, macOS, Linux

Would you like me to explain how to configure any specific aspect of Jellyfin, such as advanced library setup or audio and video transcoding?

Step-By-Step Tutorial for Building a REST API in Java

https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna

Motivation

Having seen many tutorials on how to build REST APIs in Java using various combinations of frameworks and libraries, I decided to build my own API using the software suite that I have the most experience with. In particular, I wanted to use:

  • Maven as the build and dependency management tool
  • Jersey as the framework that provides implementation of the JAX-RS specification
  • Tomcat as the application server
    • in particular, I wanted to run Tomcat in embedded mode so that I would end up with a simple executable jar file
  • Guice as the dependency injection framework

The problem I faced was that I couldn’t find any tutorials combining the software choices above, so I had to go through the process of combining the pieces myself. This didn’t turn out to be a particularly straightforward task, which is why I decided to document the process on my blog and share it with others who might be facing similar problems.

Project Summary

For the purpose of this tutorial, we are going to build the standard API for managing TODO items – i.e. a CRUD API that supports the functionalities of Creating, Retrieving, Updating and Deleting tasks.

The full API specification can be viewed in the Appendix.

To implement this API, we will use:

  • Java 11 (OpenJDK)
  • Apache Maven v3.8.6
  • Eclipse Jersey v2.35
  • Apache Tomcat v9.0.62
  • Guice v4.2.3

For the purpose of simplicity, I will avoid the use of any databases as part of this tutorial and instead use a pseudo in-memory DB. However, we will see how easy it is to switch from an in-memory testing DB to an actual database when following a clean architecture.

The goal is to end up with an executable jar file generated by Maven that will include the Tomcat application server and our API implementation. We will then dockerize the entire process of generating the file and executing it, and finally run the service as a Docker container.

The following coding steps will only outline the most relevant pieces of code for the purpose of this tutorial, but you can find the full code in the GitHub repository. For most steps, we will add unit tests that won't be shown here but are included in the code change itself. To run the tests at any given point in time, you can use mvn clean test.

Coding Steps

Step 1 – Project Setup

As with every Maven project, we need a POM file (the file representing the Project Object Model). We start with a very basic POM which describes the project information and sets the JDK and JRE target versions to 11. This means that the project can use Java 11 language features (but no features from later versions) and will require a JRE version 11 or later to be executed. To avoid registering a domain name for this example project, I am using a group ID that corresponds to my GitHub username where this project will be hosted – com.github.nikist97.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Project Information -->
    <groupId>com.github.nikist97</groupId>
    <artifactId>TaskManagementService</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>TaskManagementService</name>

    <properties>
        <!-- Maven-related properties used during the build process -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- This is where we will declare libraries our project depends on -->
    </dependencies>

    <build>
        <plugins>
            <!-- This is where we will declare plugins our project needs for the build process -->
        </plugins>
    </build>
</project>

The full commit for this step can be found here.

Step 2 – Implementing the Business Logic

We start with the most critical piece of software in general, which is our business logic. Ideally, this layer should be agnostic to the notion of any DB technologies or API protocols. Whether we implement an HTTP API using MongoDB on the backend or we use PostgreSQL and implement a command-line tool for interacting with our code, it should not affect the code for our business logic. In other words, the business logic should not depend on the persistence layer (the code interacting with the database) and the API layer (the code that will define the HTTP API endpoints).

The first thing to implement is our main entity class – Task. This class follows the builder pattern and provides argument validation. The required attributes are the task’s title and description. The rest of the attributes we can default to sensible values when not explicitly provided:

  • identifier is set to a random UUID
  • createdAt is set to the current date time
  • completed is set to false

public class Task {

    private final String identifier;
    private final String title;
    private final String description;
    private final Instant createdAt;
    private final boolean completed;

    ...

    public static class TaskBuilder {

        ...

        private TaskBuilder(String title, String description) {
            validateArgNotNullOrBlank(title, "title");
            validateArgNotNullOrBlank(description, "description");

            this.title = title;
            this.description = description;
            this.identifier = UUID.randomUUID().toString();
            this.createdAt = Instant.now();
            this.completed = false;
        }

        ...

    }
}
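
As a quick illustration (using only the builder factory and getter shown elsewhere in this article), constructing a task looks like this:

// title and description are required; everything else gets a sensible default
Task task = Task.builder("test-title", "test-description").build();
String id = task.getIdentifier();  // defaults to a random UUID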

Then, we define the interface we need for interacting with a persistence layer (i.e. a database or another storage mechanism). Notice that this interface belongs to the business layer because, ultimately, it is the business logic that decides what storage functionality we will need. The actual implementation of this interface, though (a MongoDB implementation or an in-memory DB or something else) will belong to the persistence layer, which we will implement in a subsequent step.

public interface TaskManagementRepository {

    void save(Task task);

    List<Task> getAll();

    Optional<Task> get(String taskID);

    void delete(String taskID);
}

Finally, we implement the service class, which has the CRUD logic. The critical piece here is that this class doesn’t rely on a concrete implementation of the repository interface – it is agnostic to what DB technology we decide to use later.

public class TaskManagementService {

    private final TaskManagementRepository repository;

    ...

    public Task create(String title, String description) {
        Task task = Task.builder(title, description).build();

        repository.save(task);

        return task;
    }

    public Task update(String taskID, TaskUpdate taskUpdate) {
        Task oldTask = retrieve(taskID);

        Task newTask = oldTask.update(taskUpdate);
        repository.save(newTask);

        return newTask;
    }

    public List<Task> retrieveAll() {
        return repository.getAll();
    }

    public Task retrieve(String taskID) {
        return repository.get(taskID).orElseThrow(() ->
                new TaskNotFoundException("Task with the given identifier cannot be found - " + taskID));
    }

    public void delete(String taskID) {
        repository.delete(taskID);
    }
}

The way this code is written allows us to easily unit test our business logic in isolation by mocking the behavior of the repository interface, as shown in the sketch after the dependency block below. To achieve this, we will need to add two dependencies in the POM file:

        ...
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>3.5.13</version>
            <scope>test</scope>
        </dependency>
        ...
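
With these in place, a unit test could look like the following sketch (the TaskManagementService constructor is assumed to simply take the repository, matching the elided parts of the class above):

import static org.junit.Assert.assertNotNull;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class TaskManagementServiceTest {

    @Test
    public void createSavesTheNewTask() {
        // mock the repository so no real storage is involved
        TaskManagementRepository repository = mock(TaskManagementRepository.class);
        TaskManagementService service = new TaskManagementService(repository);

        Task task = service.create("test-title", "test-description");

        // the created task must be handed to the persistence layer
        verify(repository).save(task);
        assertNotNull(task.getIdentifier());
    }
}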

The full commit for this step can be found here.

Step 3 – Creating Stub API Endpoints

The next step is to implement the API layer. For this project, we are implementing an HTTP REST API using Jersey. Therefore, we start by adding the dependency in the POM file.

        ...
        <dependency>
            <groupId>org.glassfish.jersey.containers</groupId>
            <artifactId>jersey-container-servlet</artifactId>
            <version>2.35</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.inject</groupId>
            <artifactId>jersey-hk2</artifactId>
            <version>2.35</version>
        </dependency>
        ...

The second dependency is needed as of Jersey 2.26 (https://eclipse-ee4j.github.io/jersey.github.io/release-notes/2.26.html): from that version onward, users need to explicitly declare the dependency injection framework for Jersey to use. In this case we go with HK2, which is what previous releases used.

Then we implement the resource class, which at this point only has stub methods that all return a status code 200 HTTP response with no response body.

@Path("/tasks")
public class TaskManagementResource {

    @POST
    public Response createTask() {
        return Response.ok().build();
    }

    @GET
    public Response getTasks() {
        return Response.ok().build();
    }

    @PATCH
    @Path("/{taskID}")
    public Response updateTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    public Response getTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }
}

We will also need an application config class to define the base URI for our API and to inform the framework about the task management resource class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
    }

}

The full commit for this step can be found here.

Step 4 – Implementing the API Layer

For this project, we will use JSON as the serialization data format for HTTP requests and responses.

In order to produce and consume JSON in our API, we need to add a library that's going to be responsible for the JSON serialization and deserialization of POJOs. We are going to use Jackson. The library we need in order to integrate Jersey with Jackson is given below:

        ...
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-json-jackson</artifactId>
            <version>2.35</version>
        </dependency>
        ...

Then we need to customize the behavior of the JSON object mapper that will be used for serializing and deserializing the request and response POJOs. In this case, we disable ALLOW_COERCION_OF_SCALARS – this means that the service won't attempt to parse strings into numbers or booleans (e.g. {"boolean_field":"true"} will be rejected).

import static com.fasterxml.jackson.databind.MapperFeature.ALLOW_COERCION_OF_SCALARS;

@Provider
public class JsonObjectMapperProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper jsonObjectMapper;

    /**
     * Create a custom JSON object mapper provider.
     */
    public JsonObjectMapperProvider() {
        jsonObjectMapper = new ObjectMapper();
        jsonObjectMapper.disable(ALLOW_COERCION_OF_SCALARS);
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return jsonObjectMapper;
    }
}

Once again, we need to make Jersey aware of this provider class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
        register(JsonObjectMapperProvider.class);
    }

}

Then we define the request and response POJOs. I will skip the code for these classes (though a minimal sketch follows the list below), but in summary, we need:

  • TaskCreateRequest – represents the JSON request body sent to the service when creating a new task
  • TaskUpdateRequest – represents the JSON request body sent to the service when updating an existing task
  • TaskResponse – represents the JSON response body sent to the client when retrieving task(s)
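
For illustration, a minimal version of TaskCreateRequest might look like this (a sketch; the getter names match how the resource class below uses it, and the original code lives in the linked commit):

public class TaskCreateRequest {

    private String title;
    private String description;

    // Jackson requires a no-argument constructor for deserialization
    public TaskCreateRequest() {
    }

    public String getTitle() {
        return title;
    }

    public String getDescription() {
        return description;
    }
}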

The last part of this step is to replace the stub logic in the resource class with the actual API implementation that relies on the business logic encapsulated in the service class from step 2.

@Path("/tasks")
public class TaskManagementResource {

    private final TaskManagementService service;

    public TaskManagementResource(TaskManagementService service) {
        this.service = service;
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response createTask(TaskCreateRequest taskCreateRequest) {
        validateArgNotNull(taskCreateRequest, "task-create-request-body");

        Task task = service.create(taskCreateRequest.getTitle(), taskCreateRequest.getDescription());

        String taskID = task.getIdentifier();

        URI taskRelativeURI = URI.create("tasks/" + taskID);
        return Response.created(taskRelativeURI).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<TaskResponse> getTasks() {
        return service.retrieveAll().stream()
                .map(TaskResponse::new)
                .collect(Collectors.toUnmodifiableList());
    }

    @PATCH
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response updateTask(@PathParam("taskID") String taskID, TaskUpdateRequest taskUpdateRequest) {
        validateArgNotNull(taskUpdateRequest, "task-update-request-body");

        TaskUpdate update = new TaskUpdate(taskUpdateRequest.getTitle(), taskUpdateRequest.getDescription(),
                taskUpdateRequest.isCompleted());

        service.update(taskID, update);

        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public TaskResponse getTask(@PathParam("taskID") String taskID) {
        Task task = service.retrieve(taskID);
        return new TaskResponse(task);
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {
        service.delete(taskID);
        return Response.noContent().build();
    }
}

The full commit for this step can be found here.

Step 5 – Implementing the Storage Mechanism

For simplicity, we are going to back the repository interface with an in-memory implementation rather than relying on a database technology. The implementation will store all tasks inside a map – the key is the task identifier and the value is the task itself. This is just enough for simple CRUD functionality.

public class InMemoryTaskManagementRepository implements TaskManagementRepository {

    private final Map<String, Task> tasks = new HashMap<>();

    @Override
    public void save(Task task) {
        tasks.put(task.getIdentifier(), task);
    }

    @Override
    public List<Task> getAll() {
        return tasks.values().stream()
                .collect(Collectors.toUnmodifiableList());
    }

    @Override
    public Optional<Task> get(String taskID) {
        return Optional.ofNullable(tasks.get(taskID));
    }

    @Override
    public void delete(String taskID) {
        tasks.remove(taskID);
    }

}

The full commit for this step can be found here.

Step 6 – Binding Everything Together

Now that we have all the layers implemented, we need to bind them together with a dependency injection framework – in this case, we will use Guice to achieve that.

We start by adding Guice as a dependency in the POM file:

        <dependency>
            <groupId>com.google.inject</groupId>
            <artifactId>guice</artifactId>
            <version>4.2.3</version>
        </dependency>

Then we create a simple Guice module to bind the in-memory DB implementation to the repository interface. This basically means that for all classes that depend on the repository interface, Guice will inject the in-memory DB class. We use the Singleton scope because we want all classes that depend on the repository to reuse the same in-memory DB instance.

public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);
    }

}

Note that if we decide to use an actual database, the code change is as simple as:

  • implementing the wrapper class for the DB we choose – e.g. MongoDBTaskManagementRepository
  • changing the binding above to point to the new implementation of the repository interface
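
For instance, the binding would then become (with MongoDBTaskManagementRepository being the hypothetical new implementation):

bind(TaskManagementRepository.class).to(MongoDBTaskManagementRepository.class).in(Singleton.class);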

Now that we have the module implemented, we can add the @Inject annotation to all classes whose constructor has a dependency that needs to be injected by Guice. These are the TaskManagementResource and TaskManagementService classes. The magic of Guice (and dependency injection in general) is that the module above is enough to build the entire tree of dependencies in our code.

TaskManagementResource depends on TaskManagementService which depends on TaskManagementRepository. Guice knows how to get an instance of the TaskManagementRepository interface so following this chain it also knows how to get an instance of the TaskManagementService and TaskManagementResource classes.
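
As a sketch (the constructor body is assumed to simply store the repository, as in the elided code above), the annotated service constructor looks like this:

public class TaskManagementService {

    private final TaskManagementRepository repository;

    @Inject
    public TaskManagementService(TaskManagementRepository repository) {
        this.repository = repository;
    }

    // ...
}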

The final piece of work is to make Jersey aware of the Guice injector. Remember that Jersey uses HK2 as its dependency injection framework, so it will rely on HK2 to build a TaskManagementResource instance. For HK2 to build a TaskManagementResource, it needs to know about Guice's dependency injection container. To connect Guice and HK2, we are going to use the Guice/HK2 Bridge, which bridges the Guice container (the Injector class) into the HK2 container (the ServiceLocator class).

So we declare a dependency on the Guice/HK2 bridge library:

        ...
        <dependency>
            <groupId>org.glassfish.hk2</groupId>
            <artifactId>guice-bridge</artifactId>
            <version>2.6.1</version>
        </dependency>
        ...

Then we change the ApplicationConfig class to create the bridge between Guice and HK2. Notice that since the ApplicationConfig class is used by Jersey (and thus managed by HK2) we can easily inject the ServiceLocator instance (the HK2 container itself) into it.

        @Inject
        public ApplicationConfig(ServiceLocator serviceLocator) {
            register(TaskManagementResource.class);
            register(JsonObjectMapperProvider.class);

            // bridge the Guice container (Injector) into the HK2 container (ServiceLocator)
            Injector injector = Guice.createInjector(new ApplicationModule());
            GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
            GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
            guiceBridge.bridgeGuiceInjector(injector);
        }

The full commit for this step can be found here.

Step 7 – Creating the Application Launcher

The final critical step is configuring and starting the application server through a launcher class, which will serve as our main class for the executable jar file we are targeting.

We start with the code for starting an embedded Tomcat server. The dependency we need is:

    ...
    <dependency>
        <groupId>org.apache.tomcat.embed</groupId>
        <artifactId>tomcat-embed-core</artifactId>
        <version>9.0.62</version>
    </dependency>
    ...

Then we need a launcher class. This class is responsible for starting the embedded Tomcat server and registering a servlet container for the resource config we defined earlier (when we registered the resource class).

public class Launcher {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();

        // configure server port number
        tomcat.setPort(8080);

        // remove defaulted JSP configs
        tomcat.setAddDefaultWebXmlToWebapp(false);

        // add the web app
        StandardContext ctx = (StandardContext) tomcat.addWebapp("/", new File(".").getAbsolutePath());
        ResourceConfig resourceConfig = new ResourceConfig(ApplicationConfig.class);
        Tomcat.addServlet(ctx, "jersey-container-servlet", new ServletContainer(resourceConfig));
        ctx.addServletMappingDecoded("/*", "jersey-container-servlet");

        // start the server
        tomcat.start();
        System.out.println("Server listening on " + tomcat.getHost().getName() + ":" + tomcat.getConnector().getPort());
        tomcat.getServer().await();
    }
}

If using IntelliJ to code this project, you should ideally be able to run the main method of the Launcher class. There is one caveat here – with JDK 9 and later (and hence the introduction of the Java Platform Module System), reflective access is only allowed to publicly exported packages. This means that Guice will fail at runtime because it uses reflection to access JDK modules. See this StackOverflow post for more information.

The only workaround I have found so far is to add --add-opens java.base/java.lang=ALL-UNNAMED as a JVM option to the run configuration of the main method, as suggested in the StackOverflow post linked above. This basically allows Guice to keep doing its reflection as in pre-JDK 9 releases.

After we apply the workaround above and test our launcher, we get to the part of generating an executable file that can be used to start the service. To achieve this, we need the appassembler Maven plugin. Note that we still need to pass the --add-opens java.base/java.lang=ALL-UNNAMED JVM argument for the generated executable to work.

         ...
         <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>appassembler-maven-plugin</artifactId>
                <version>2.0.0</version>
                <configuration>
                    <assembleDirectory>target</assembleDirectory>
                    <extraJvmArguments>--add-opens java.base/java.lang=ALL-UNNAMED</extraJvmArguments>
                    <programs>
                        <program>
                            <mainClass>taskmanagement.Launcher</mainClass>
                            <name>taskmanagement_webapp</name>
                        </program>
                    </programs>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>assemble</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
        ...

With this plugin, we can finally generate an executable file and then use it to start the service:

mvn clean package
./target/bin/taskmanagement_webapp

The full commit for this step can be found here.

Step 8 – Adding Exception Mappers

You might have noticed that so far we have defined two custom exceptions that are thrown when the service receives input data it cannot handle:

  • TaskNotFoundException
  • InvalidTaskDataException

If these exceptions aren't handled properly when encountered, the embedded Tomcat server will wrap them inside an internal server error (status code 500), which is not very user friendly. As per the API specification we defined in the beginning (see Appendix), we want clients to receive a 404 status code if, for example, they use a task ID that doesn't exist.

To achieve this, we use exception mappers. When we register those mappers, Jersey will use them to transform instances of these exceptions to proper HTTP Response objects.

public class TaskNotFoundExceptionMapper implements ExceptionMapper<TaskNotFoundException> {

    @Override
    public Response toResponse(TaskNotFoundException exception) {
        return Response
                .status(Response.Status.NOT_FOUND)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


public class InvalidTaskDataExceptionMapper implements ExceptionMapper<InvalidTaskDataException> {

    @Override
    public Response toResponse(InvalidTaskDataException exception) {
        return Response
                .status(Response.Status.BAD_REQUEST)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


    @Inject
    public ApplicationConfig(ServiceLocator serviceLocator) {
        ...
        register(InvalidTaskDataExceptionMapper.class);
        register(TaskNotFoundExceptionMapper.class);
        ...
    }

Notice the use of a new POJO – ExceptionMessage – which is used to convey the exception message as a JSON response. Now, whenever the business logic throws any of these exceptions, we will get a proper JSON response with the appropriate status code.
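
A minimal version of that POJO might look like this (a sketch; the original is in the linked commit, and Jackson serializes the getter into the "message" field seen in the responses below):

public class ExceptionMessage {

    private final String message;

    public ExceptionMessage(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}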

The full commit for this step can be found here.

Dockerizing the Application

There are lots of benefits to using Docker, but given that this article is not about containers, I won't spend time talking about them. I will only mention that I always prefer to run applications in a Docker container because it makes the build process much more efficient (think application portability, well-defined build behavior, an improved deployment process, etc.).

The Dockerfile for our service is relatively simple and is based on the Maven OpenJDK image. It automates what we did in step 7 – packaging the application and running the generated executable.

FROM maven:3.8.5-openjdk-11-slim
WORKDIR /application

COPY . .

RUN mvn clean package

CMD ["./target/bin/taskmanagement_webapp"]

With this, we can build the container image and start our service as a Docker container. The commands below assume you have the Docker daemon running on your local machine.

docker build --tag task-management-service .
docker run -d -p 127.0.0.1:8080:8080 --name test-task-management-service task-management-service

Now the service should be running in the background and accessible from your local machine on port 8080. To start or stop it, use:

docker start test-task-management-service
docker stop test-task-management-service

Testing the Service

Now that we have the service running, we can use curl to send some test requests.

  • creating a few tasks
curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:46 GMT

curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks"

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:47 GMT

  • retrieving a task
curl -i -X GET "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 162
Date: Tue, 28 Jun 2022 07:54:21 GMT

{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false}

  • retrieving a non-existing task
curl -i -X GET "http://localhost:8080/api/tasks/random-task-id-123"                                                       

HTTP/1.1 404 
Content-Type: application/json
Content-Length: 81
Date: Tue, 28 Jun 2022 09:44:53 GMT

{"message":"Task with the given identifier cannot be found - random-task-id-123"}

  • retrieving all tasks
curl -i -X GET "http://localhost:8080/api/tasks"     

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 490
Date: Tue, 28 Jun 2022 07:55:08 GMT

[{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false},{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:46.444179Z","completed":false}]

  • deleting a task
curl -i -X DELETE "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 204 
Date: Tue, 28 Jun 2022 07:56:55 GMT

  • patching a task
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"completed\": true, \"title\": \"new-title\", \"description\":\"new-description\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 200 
Content-Length: 0
Date: Tue, 28 Jun 2022 08:00:37 GMT

curl -i -X GET "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"   
HTTP/1.1 200 
Content-Type: application/json
Content-Length: 164
Date: Tue, 28 Jun 2022 08:01:07 GMT

{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"new-title","description":"new-description","createdAt":"2022-06-28T07:52:46.444179Z","completed":true}

  • patching a task with empty title
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"title\": \"\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 400 
Content-Type: application/json
Content-Length: 43
Date: Tue, 28 Jun 2022 09:47:09 GMT
Connection: close

{"message":"title cannot be null or blank"}

Future Improvements

What we have built so far is obviously not a production-ready API, but it demonstrates how to get started with the software suite I mentioned in the beginning of this article when building a REST API. Here are some future improvements that can be made:

  • using a database for persistent storage
  • adding user authentication and authorization – tasks should be scoped per user rather than being available globally
  • adding logging
  • adding KPI (Key Performance Indicators) metrics – things like the count of total requests, latency, failures count, etc.
  • adding a mapper for unexpected exceptions – we don’t want to expose a stack trace if the service encounters an unexpected null pointer exception, instead we want a JSON response with status code 500
  • adding automated integration tests
  • adding a more verbose response to the patch endpoint – e.g. indicating whether the request resulted in a change or not
  • scanning packages and automatically registering provider and resource classes instead of manually registering them one-by-one
  • adding CORS (Cross-Origin-Resource-Sharing) support if we intend to call the API from a browser application hosted under a different domain
  • adding SSL support
  • adding rate limiting

If you found this article helpful and would like to see a follow-up on the topics above, please comment or message me with a preference choice of what you would like to learn about the most.

Appendix

The full API specification using the Open API description format can be found below. You can use the Swagger Editor to display the API specification in a more friendly manner.

swagger: '2.0'

info:
  description: This is a RESTful task management API specification.
  version: 1.0.0
  title: Task Management API
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'

host: 'localhost:8080'
basePath: /api

schemes:
  - http

paths:

  /tasks:
    post:
      summary: Create a new task
      operationId: createTask
      consumes:
        - application/json
      parameters:
        - in: body
          name: taskCreateRequest
          description: new task object that needs to be added to the list of tasks
          required: true
          schema:
            $ref: '#/definitions/TaskCreateRequest'
      responses:
        '201':
          description: successfully created new task
        '400':
          description: task create request failed validation
    get:
      summary: Retrieve all existing tasks
      operationId: retrieveTasks
      produces:
        - application/json
      responses:
        '200':
          description: successfully retrieved all tasks
          schema:
            type: array
            items:
              $ref: '#/definitions/TaskResponse'

  '/tasks/{taskID}':
    get:
      summary: Retrieve task
      operationId: retrieveTask
      produces:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '200':
          description: successfully retrieved task
          schema:
            $ref: '#/definitions/TaskResponse'
        '404':
          description: task not found
    patch:
      summary: Update task
      operationId: updateTask
      consumes:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
        - name: taskUpdateRequest
          in: body
          description: task update request
          required: true
          schema:
            $ref: '#/definitions/TaskUpdateRequest'
      responses:
        '200':
          description: successfully updated task
        '400':
          description: task update request failed validation
        '404':
          description: task not found
    delete:
      summary: Delete task
      operationId: deleteTask
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '204':
          description: >-
            successfully deleted task or task with the given identifier did not
            exist

definitions:
  TaskCreateRequest:
    type: object
    required:
      - title
      - description
    properties:
      title:
        type: string
      description:
        type: string
  TaskUpdateRequest:
    type: object
    properties:
      title:
        type: string
      description:
        type: string
      completed:
        type: boolean
  TaskResponse:
    type: object
    required:
      - identifier
      - title
      - description
      - completed
      - createdAt
    properties:
      identifier:
        type: string
      title:
        type: string
      description:
        type: string
      createdAt:
        type: string
        format: date-time
      completed:
        type: boolean