Containerize a Spring Boot application with Podman Desktop

Source: https://developers.redhat.com/articles/2023/10/19/containerize-spring-boot-application-podman-desktop#running_the_containerized_application


October 19, 2023

Cedric Clyburn

Spring and Spring Boot are developer favorites for building Java applications that run in the cloud. Spring is known for being production-ready and complements containerization very well. According to the State of Spring 2022 report, Kubernetes has become the dominant platform for running Spring apps. So, how can we take a Spring application, containerize it, and run it locally? Let’s explore this by using Podman Desktop, an open-source tool to seamlessly work with containers and Kubernetes from your local environment.

Prerequisites

  • Spring Boot Application: For this article, we’ll use the popular Spring PetClinic sample application on GitHub (Figure 1). Feel free to also use your own project or start from the Spring Initializr.
A screenshot of the Spring PetClinic repository on GitHub
Figure 1: The Spring PetClinic repository on GitHub.
  • Podman Desktop: Let’s use Podman Desktop, the powerful GUI-based tool for deploying and managing containers using the Podman container engine. Once installed, you’ll be ready to start containerizing your Spring application (Figure 2).
A screenshot of the Podman Desktop dashboard.
Figure 2: The Podman Desktop dashboard.

Containerizing the Spring Boot application

Let’s get started by cloning the application’s source code if you’re using the PetClinic repository.

git clone https://github.com/spring-projects/spring-petclinic.git
cd spring-petclinic

While we can use Maven to build a jar file and run it, let’s jump straight into creating a Containerfile in the project’s root directory, which will serve as the blueprint for the container image we’ll create later (analogous to Docker’s Dockerfile). You can create the file with the command touch Containerfile or simply create it from your IDE. Let’s see what the Containerfile will look like for this sample Spring Boot application:

# Start with a base image that has Java 17 installed.
FROM eclipse-temurin:17-jdk-jammy

# Set a default directory inside the container to work from.
WORKDIR /app

# Copy the Maven wrapper configuration directory.
COPY .mvn/ .mvn/

# Copy only the essential Maven files required to download dependencies.
COPY mvnw pom.xml ./

# Download all the required project dependencies.
RUN ./mvnw dependency:resolve

# Copy our actual project files (code, resources, etc.) into the container.
COPY src ./src

# When the container starts, run the Spring Boot app using Maven.
CMD ["./mvnw", "spring-boot:run"]

Let’s take a deeper look at the components that make up this Containerfile:

  • FROM eclipse-temurin:17-jdk-jammy:
    • Purpose: This line sets the foundation for our container.
    • Deep Dive: It tells Podman to use a pre-existing image that already has Java 17 installed. Think of it as choosing a base flavor for a cake before adding more ingredients. We use the eclipse-temurin image because it’s a trusted source for Java installations.
  • WORKDIR /app:
    • Purpose: Designates a working space in our container.
    • Deep Dive: Containers have their own file system. Here, we’re telling Podman to set up a folder named ‘app’ and make it the default location for any subsequent operations.
  • COPY commands:
    • Purpose: They transfer files from our local system into the container.
    • Deep Dive: The first COPY grabs Maven’s configuration, a tool Java developers use to manage app dependencies. The second copies over the main files of our app: the Maven wrapper (mvnw) and the project’s details (pom.xml).
  • RUN ./mvnw dependency:resolve:
    • Purpose: Downloads necessary libraries and tools for the app.
    • Deep Dive: This command activates Maven to fetch everything our app needs to run. By doing this in the Containerfile, we ensure the container has everything packaged neatly.
  • COPY src ./src:
    • Purpose: Add our actual application code.
    • Deep Dive: Our app’s logic, features, and resources reside in the src directory. We’re moving it into the container so that when the container runs, it has our complete app.
  • CMD ["./mvnw", "spring-boot:run"]:
    • Purpose: Start our application!
    • Deep Dive: This command is the final step. This line tells Podman to run our Spring Boot application using Maven when our container launches.

This Containerfile is ready to be used with Podman Desktop to create a container image of our Spring Boot application (Figure 3). Before building the image for this application, let’s double check the directory structure and Containerfile in our IDE of choice.

A screenshot of the Containerfile in VSCode.
Figure 3: The Containerfile in VSCode.

Building the container image with Podman Desktop

We can now build our container image with Podman Desktop by first heading to the Images section of Podman Desktop and selecting the Build an Image button in the top-right corner (Figure 4).

A screenshot of the Podman Desktop with the build image button highlighted.
Figure 4: The Podman Desktop Build Image is highlighted.

This will open a menu where you can select the path to our previously created Containerfile, which should be in the root directory of the spring-petclinic folder (Figure 5). With the Containerfile selected, we can also give our container image a name, for example, petclinic. Now, click Build, and you’ll see each layer of the image being created; the finished image will appear in your local image registry (the Images section).

A screenshot of the Podman Desktop build image menu.
Figure 5: The Podman Desktop Build Image menu.
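If you prefer the command line, the same image can be built with the Podman CLI from the project’s root directory; this is simply the CLI equivalent of the steps above:

podman build -t petclinic -f Containerfile .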

Running the containerized application

Fantastic! Let’s return to the Images section to see the containerized Spring Boot application, now built and tagged as an image, as well as the eclipse-temurin base image that was downloaded to build our petclinic image. We can easily run this image as a container on our system using the Run icon to the right of our container image (Figure 6).

A screenshot of the Podman Desktop run container.
Figure 6: The Podman Desktop showing where to run the container.

Under Port Mapping, make sure to assign port 8080 of the container to port 8080 of the host. Feel free to leave all other settings as default. Click Start Container to launch the containerized instance of your Spring Boot application (Figure 7).

A screenshot showing the start container button in the Podman Desktop menu.
Figure 7: Click the start container button in the Podman Desktop menu.
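For reference, the CLI equivalent of this run configuration, with the same port mapping, would be:

podman run -d --name petclinic -p 8080:8080 petclinic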

Now, with the container up and running, let’s open the container’s URL in the browser using the handy open browser button within the Podman Desktop user interface (Figure 8).

A screenshot of the Podman Desktop highlighting the open browser button.
Figure 8: The Podman Desktop highlighting the open browser button.

Perfect, looks like our Spring Boot application is running, thanks to the startup command in our Containerfile, as well as the port mapping we configured in Podman Desktop (Figure 9).

A screenshot showing the Spring Boot application is running in Chrome.
Figure 9: The Spring Boot application is running in Chrome.

Now we can see the PetClinic application running in our browser, but that’s not all. For tasks that would normally involve the terminal, we can now use Podman Desktop instead (Figure 10). This includes opening a shell inside a container for debugging and modifying settings, viewing a container’s logs, or inspecting environment variables. This can all be done under the Container Details section that opens automatically after starting the container.

A screenshot of the Podman Desktop terminal.
Figure 10: The Podman Desktop terminal.

Wrapping up

We have gone from local Spring Boot code to containerizing the application and running it with Podman Desktop! Now, as a container, we can share this application across environments using registries like Quay.io and Docker Hub, and deploy it to a variety of cloud providers using Kubernetes and OpenShift.

Last updated: January 15, 2025



Installing Liferay Portal for an Intranet/Extranet with WildFly, PostgreSQL, and Debian 12

This detailed guide will help you set up a Liferay environment for your organization with the requested features.

1. Prerequisites

1.1. System requirements

  • Debian 12 server (recommended minimum: 8 GB RAM, 4 cores, 100 GB of storage)
  • Java JDK 11 (OpenJDK recommended)
  • WildFly 26+
  • PostgreSQL 14+
  • Liferay DXP 7.4 (latest stable version recommended)

1.2. Required packages

sudo apt update
sudo apt install -y openjdk-11-jdk postgresql postgresql-client unzip

2. Installing PostgreSQL

2.1. Database setup

sudo -u postgres psql

Inside PostgreSQL:

CREATE DATABASE liferaydb WITH ENCODING 'UTF8';
CREATE USER liferayuser WITH PASSWORD 'P@ssw0rdComplex';
GRANT ALL PRIVILEGES ON DATABASE liferaydb TO liferayuser;
\q

2.2. PostgreSQL configuration

Edit /etc/postgresql/14/main/postgresql.conf:

max_connections = 500
shared_buffers = 2GB
work_mem = 16MB
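Depending on your distribution’s defaults, you may also need to allow the Liferay database user to connect over TCP with a password. A sketch of the kind of line you would add to /etc/postgresql/14/main/pg_hba.conf (the auth method should match your server’s password_encryption setting; scram-sha-256 is the PostgreSQL 14 default):

host    liferaydb    liferayuser    127.0.0.1/32    scram-sha-256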

Restart PostgreSQL:

sudo systemctl restart postgresql

3. Installing WildFly

3.1. Download and installation

wget https://download.jboss.org/wildfly/26.1.1.Final/wildfly-26.1.1.Final.zip
unzip wildfly-26.1.1.Final.zip
sudo mv wildfly-26.1.1.Final /opt/wildfly

3.2. Configuring WildFly as a service

sudo groupadd -r wildfly
sudo useradd -r -g wildfly -d /opt/wildfly -s /sbin/nologin wildfly
sudo chown -R wildfly:wildfly /opt/wildfly

Create /etc/systemd/system/wildfly.service:

[Unit]
Description=WildFly Application Server
After=syslog.target network.target

[Service]
User=wildfly
Group=wildfly
ExecStart=/opt/wildfly/bin/standalone.sh -b=0.0.0.0 -bmanagement=0.0.0.0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target

Start the service:

sudo systemctl daemon-reload
sudo systemctl enable wildfly
sudo systemctl start wildfly

4. Installing Liferay DXP

4.1. Preparing the environment

wget https://repository.liferay.com/nexus/service/local/repositories/liferay-public-releases/content/com/liferay/portal/liferay-portal-tomcat-bundle/7.4.3.112-ga113/liferay-portal-tomcat-bundle-7.4.3.112-ga113.zip
unzip liferay-portal-tomcat-bundle-7.4.3.112-ga113.zip
mv liferay-portal-7.4.3.112-ga113 /opt/liferay

4.2. Configuration for WildFly

Copy the required WAR files to the WildFly deployment directory:

cp /opt/liferay/osgi/war/*.war /opt/wildfly/standalone/deployments/

4.3. Liferay configuration

Create /opt/liferay/portal-ext.properties:

jdbc.default.driverClassName=org.postgresql.Driver
jdbc.default.url=jdbc:postgresql://localhost:5432/liferaydb
jdbc.default.username=liferayuser
jdbc.default.password=P@ssw0rdComplex

liferay.home=/opt/liferay
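Liferay also needs the PostgreSQL JDBC driver on its classpath. If your bundle does not already ship it, one common approach is to drop the driver jar into the bundle’s global library directory; the exact version below is only an example:

wget https://jdbc.postgresql.org/download/postgresql-42.7.3.jar -P /opt/liferay/tomcat-9.0.56/lib/ext/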

4.4. Starting Liferay

/opt/liferay/tomcat-9.0.56/bin/startup.sh

5. Active Directory integration

5.1. Initial setup

  1. Go to http://localhost:8080
  2. Complete the initial installation wizard
  3. Go to Control Panel → Configuration → System Settings → Authentication

5.2. LDAP configuration

  1. In System Settings, search for “LDAP”
  2. Configure the LDAP connections:
  • Enabled: Yes
  • Base Provider URL: ldap://your-ad-server:389
  • Base DN: DC=yourdomain,DC=com
  • Principal: CN=ldapuser,OU=ServiceAccounts,DC=yourdomain,DC=com
  • Credentials: password
  • User Mapping:
    • Screen Name: sAMAccountName
    • Email Address: mail
    • First Name: givenName
    • Last Name: sn
  • Group Mapping:
    • Group Name: cn
  • Import Method: User Groups
  • User Custom Attributes: (map any additional needed attributes)

5.3. Group synchronization

  1. Under LDAP Groups, configure:
  • Group Search Filter: (objectClass=group)
  • Groups DN: OU=Organitzacio,DC=yourdomain,DC=com
  • Group Name: cn
  • Import Group: Yes
  • Create Role per Group Type: Yes

5.4. Authentication configuration

  1. Under System Settings → Authentication → LDAP, configure:
  • Required: Yes
  • Password Policy: Use LDAP Password Policy
  • Import on Login: Yes

6. Configuring the departmental sites

6.1. Site template

  1. Create a site template (Site Template) for the public sites:
  • Control Panel → Sites → Site Templates
  • Create a template with the required pages (Home, News, Resources, etc.)
  2. Create another template for the private sites

6.2. Automating site creation

You can use a Groovy script to automate the creation of sites (see the note after the script about where to run it):

import com.liferay.portal.kernel.service.*
import com.liferay.portal.kernel.model.*

// Get the services
groupLocalService = GroupLocalServiceUtil
roleLocalService = RoleLocalServiceUtil
userGroupLocalService = UserGroupLocalServiceUtil

// Get the templates
publicTemplate = groupLocalService.getGroup(companyId, "PUBLIC_TEMPLATE_NAME")
privateTemplate = groupLocalService.getGroup(companyId, "PRIVATE_TEMPLATE_NAME")

// Get all LDAP groups whose names start with "Organitzacio_"
ldapGroups = userGroupLocalService.getUserGroups(-1, -1).findAll { 
    it.name.startsWith("Organitzacio_") 
}

ldapGroups.each { group ->
    deptName = group.name.replace("Organitzacio_", "")

    // Create the public site
    publicSite = groupLocalService.addGroup(
        userId, GroupConstants.DEFAULT_PARENT_GROUP_ID, 
        Group.class.getName(), 0, GroupConstants.DEFAULT_LIVE_GROUP_ID,
        "Organització ${deptName}", null, 0, true, 
        GroupConstants.DEFAULT_MEMBERSHIP_RESTRICTION, 
        "/${deptName.toLowerCase()}", true, true, null)

    // Create the private site
    privateSite = groupLocalService.addGroup(
        userId, GroupConstants.DEFAULT_PARENT_GROUP_ID, 
        Group.class.getName(), 0, GroupConstants.DEFAULT_LIVE_GROUP_ID,
        "Organització ${deptName} (Privat)", null, 0, true, 
        GroupConstants.DEFAULT_MEMBERSHIP_RESTRICTION, 
        "/private-${deptName.toLowerCase()}", true, true, null)

    // Apply the templates
    LayoutLocalServiceUtil.copyLayouts(
        publicTemplate.getGroupId(), false, publicSite.getGroupId(), false)
    LayoutLocalServiceUtil.copyLayouts(
        privateTemplate.getGroupId(), false, privateSite.getGroupId(), false)

    // Assign permissions
    // (add logic here to assign permissions to the corresponding LDAP groups)
}
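You can run this script from Control Panel → Server Administration → Script, with Groovy selected as the language. Note that companyId and userId are not defined in the snippet above: you will need to set them for your environment, for example to the default company ID and an administrator’s user ID.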

7. Creating a custom theme

7.1. Create a new theme

cd /opt/liferay
blade create -t theme my-custom-theme
cd my-custom-theme

7.2. Theme configuration

Edit src/WEB-INF/liferay-plugin-package.properties:

name=My Custom Theme
module-group-id=liferay
module-incremental-version=1
tags=
short-description=
change-log=
page-url=http://www.liferay.com
author=Your Name
licenses=LGPL
liferay-versions=7.4.3.112
theme.parent=classic

7.3. Implementing the dynamic menu

Edit src/META-INF/resources/_diffs/templates/portal_normal.ftl:

<#assign public_sites = restClient.get("/sites?filter=name+eq+'Organització*'&fields=name,friendlyURL") />
<#assign user_sites = restClient.get("/user/me/sites?fields=name,friendlyURL") />

<nav class="navbar navbar-expand-lg navbar-dark bg-dark">
    <div class="collapse navbar-collapse">
        <ul class="navbar-nav mr-auto">
            <li class="nav-item dropdown">
                <a class="nav-link dropdown-toggle" href="#" id="publicSitesDropdown" role="button" data-toggle="dropdown">
                    Departaments Públics
                </a>
                <div class="dropdown-menu">
                    <#list public_sites.items as site>
                        <a class="dropdown-item" href="${site.friendlyURL}">${site.name}</a>
                    </#list>
                </div>
            </li>
            <li class="nav-item dropdown">
                <a class="nav-link dropdown-toggle" href="#" id="privateSitesDropdown" role="button" data-toggle="dropdown">
                    Els meus espais
                </a>
                <div class="dropdown-menu">
                    <#list user_sites.items as site>
                        <a class="dropdown-item" href="${site.friendlyURL}">${site.name}</a>
                    </#list>
                </div>
            </li>
        </ul>
    </div>
</nav>

7.4. Deploying the theme

blade deploy

8. Configuring the terms of use

8.1. Create a custom form

  1. Go to Control Panel → Forms
  2. Create a new form with the required fields (acceptance of the terms)

8.2. Configuring the acceptance process

  1. Create a custom hook to intercept the first login:
@Component(
    immediate = true,
    service = ServletContextFilter.class
)
public class TermsAcceptanceFilter implements ServletContextFilter {
    @Override
    public void doFilter(
        ServletRequest servletRequest, ServletResponse servletResponse,
        FilterChain filterChain)
        throws IOException, ServletException {

        HttpServletRequest request = (HttpServletRequest)servletRequest;
        HttpServletResponse response = (HttpServletResponse)servletResponse;

        try {
            User user = PortalUtil.getUser(request);

            if (user != null && !isTermsAccepted(user)) {
                if (!request.getRequestURI().contains("/accept-terms")) {
                    response.sendRedirect("/accept-terms");
                    return;
                }
            }
        } catch (Exception e) {
            _log.error(e);
        }

        filterChain.doFilter(servletRequest, servletResponse);
    }

    private boolean isTermsAccepted(User user) {
        // Implement the logic to check whether the user has accepted the terms
        return false; // placeholder so the class compiles
    }

    private static final Log _log = LogFactoryUtil.getLog(TermsAcceptanceFilter.class);
}

8.3. Terms acceptance page

Create a custom page with the form and the logic to record the acceptance.

9. Notification configuration

9.1. Create a notification service

Implement a service to manage notifications:

@Component(service = NotificationService.class)
public class NotificationServiceImpl implements NotificationService {

    @Reference
    private UserNotificationEventLocalService _userNotificationEventLocalService;

    public void sendNotification(long userId, String message) {
        JSONObject payload = JSONFactoryUtil.createJSONObject();
        payload.put("message", message);

        _userNotificationEventLocalService.addUserNotificationEvent(
            userId,
            "notification-portlet",
            System.currentTimeMillis(),
            userId,
            payload.toString(),
            false,
            null);
    }

    public void markAsRead(long notificationId) {
        UserNotificationEvent notification = 
            _userNotificationEventLocalService.getUserNotificationEvent(notificationId);

        notification.setArchived(true);
        _userNotificationEventLocalService.updateUserNotificationEvent(notification);
    }
}

9.2. Recording acceptances

Create a custom table to record notification acceptances:

CREATE TABLE notification_acceptance (
    acceptanceId BIGSERIAL PRIMARY KEY,
    userId BIGINT NOT NULL,
    notificationId BIGINT NOT NULL,
    acceptanceDate TIMESTAMP NOT NULL,
    FOREIGN KEY (userId) REFERENCES User_(userId)
);
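Recording an acceptance is then a single insert; the IDs below are hypothetical and assume the referenced user exists:

INSERT INTO notification_acceptance (userId, notificationId, acceptanceDate)
VALUES (20123, 98765, NOW());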

10. Final configuration

10.1. Security configuration

  1. Make sure that all pages require authentication:
  • Control Panel → Configuration → System Settings → Security → SSO
  • Check “Require Authentication to Access the Site”

10.2. Memory configuration for WildFly

Edit /opt/wildfly/bin/standalone.conf:

JAVA_OPTS="-Xms4g -Xmx8g -XX:MetaspaceSize=512M -XX:MaxMetaspaceSize=1024M"

10.3. Restart and verify

sudo systemctl restart wildfly
/opt/liferay/tomcat-9.0.56/bin/shutdown.sh
/opt/liferay/tomcat-9.0.56/bin/startup.sh

11. Maintenance and monitoring

11.1. Scheduled tasks

Configure scheduled tasks for (a crontab sketch follows the list):

  • Regular synchronization with AD
  • Cleanup of old notifications
  • Regular backups
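As an illustration, a cron entry for the nightly database backup might look like this (path and schedule are assumptions to adapt; the /etc/cron.d format shown here includes a user field):

# /etc/cron.d/liferay-db-backup: dump the Liferay database every night at 02:00
0 2 * * * postgres pg_dump liferaydb > /var/backups/liferaydb-$(date +\%F).sql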

11.2. Monitoring

Configure tools such as:

  • Prometheus + Grafana for metrics
  • ELK Stack for logs
  • Alerts for performance problems

This configuration will provide a robust platform for your intranet/extranet with all the requested features. Remember to adapt the passwords and the AD-specific values for your environment before putting the system into production.

Step-By-Step Tutorial for Building a REST API in Java

https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna

Motivation

Having seen many tutorials on how to build REST APIs in Java using various combinations of frameworks and libraries, I decided to build my own API using the software suite that I have the most experience with. In particular, I wanted to use:

  • Maven as the build and dependency management tool
  • Jersey as the framework that provides implementation of the JAX-RS specification
  • Tomcat as the application server
    • in particular, I wanted to run Tomcat in embedded mode so that I would end up with a simple executable jar file
  • Guice as the dependency injection framework

The problem I faced was that I couldn’t find any tutorials combining the software choices above, so I had to go through the process of combining the pieces myself. This didn’t turn out to be a particularly straightforward task, which is why I decided to document the process on my blog and share it with others who might be facing similar problems.

Project Summary

For the purpose of this tutorial, we are going to build the standard API for managing TODO items – i.e., a CRUD API that supports the functionality of Creating, Retrieving, Updating, and Deleting tasks.

The API specification is summarized below; the full specification can be viewed in the Appendix.

  • POST /api/tasks – create a new task
  • GET /api/tasks – retrieve all existing tasks
  • GET /api/tasks/{taskID} – retrieve a single task
  • PATCH /api/tasks/{taskID} – update a task
  • DELETE /api/tasks/{taskID} – delete a task

To implement this API, we will use:

  • Java 11 (OpenJDK)
  • Apache Maven v3.8.6
  • Eclipse Jersey v2.35
  • Apache Tomcat v9.0.62
  • Guice v4.2.3

For the purpose of simplicity, I will avoid the use of any databases as part of this tutorial and instead use a pseudo in-memory DB. However, we will see how easy it is to switch from an in-memory testing DB to an actual database when following a clean architecture.

The goal is to end up with an executable jar file generated by Maven that will include the Tomcat application server and our API implementation. We will then dockerize the entire process of generating the file and executing it, and finally run the service as a Docker container.

The following coding steps will only outline the most relevant pieces of code for the purpose of this tutorial, but you can find the full code in the GitHub repository. For most steps, we will add unit tests that won’t be referenced here but are included in the code change itself. To run the tests at any given point in time, you can use mvn clean test.

Coding Steps

Step 1 – Project Setup

As with every Maven project, we need a POM file (the file representing the Project Object Model). We start with a very basic POM which describes the project information and sets the JDK and JRE target versions to 11. This means that the project can use Java 11 language features (but no features from later versions) and will require a JRE version 11 or later to be executed. To avoid registering a domain name for this example project, I am using a group ID that corresponds to my GitHub username where this project will be hosted – com.github.nikist97.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Project Information -->
    <groupId>com.github.nikist97</groupId>
    <artifactId>TaskManagementService</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>TaskManagementService</name>

    <properties>
        <!-- Maven-related properties used during the build process -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- This is where we will declare libraries our project depends on -->
    </dependencies>

    <build>
        <plugins>
            <!-- This is where we will declare plugins our project needs for the build process -->
        </plugins>
    </build>
</project>

The full commit for this step can be found here.

Step 2 – Implementing the Business Logic

We start with the most critical piece of software in general, which is our business logic. Ideally, this layer should be agnostic to the notion of any DB technologies or API protocols. Whether we implement an HTTP API using MongoDB on the backend or we use PostgreSQL and implement a command-line tool for interacting with our code, it should not affect the code for our business logic. In other words, the business logic should not depend on the persistence layer (the code interacting with the database) and the API layer (the code that will define the HTTP API endpoints).

The first thing to implement is our main entity class – Task. This class follows the builder pattern and provides argument validation. The required attributes are the task’s title and description. The rest of the attributes we can default to sensible values when not explicitly provided:

  • identifier is set to a random UUID
  • createdAt is set to the current date time
  • completed is set to false

public class Task {

    private final String identifier;
    private final String title;
    private final String description;
    private final Instant createdAt;
    private final boolean completed;

    ...

    public static class TaskBuilder {

        ...

        private TaskBuilder(String title, String description) {
            validateArgNotNullOrBlank(title, "title");
            validateArgNotNullOrBlank(description, "description");

            this.title = title;
            this.description = description;
            this.identifier = UUID.randomUUID().toString();
            this.createdAt = Instant.now();
            this.completed = false;
        }

        ...

    }
}
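As a quick illustration of the builder in use (the getter names here are assumptions based on the JSON responses shown later in the article):

Task task = Task.builder("Write blog post", "Document the REST API tutorial").build();

System.out.println(task.getIdentifier()); // a random UUID
System.out.println(task.isCompleted());   // false by default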

Then, we define the interface we need for interacting with a persistence layer (i.e. a database or another storage mechanism). Notice that this interface belongs to the business layer because, ultimately, it is the business logic that decides what storage functionality we will need. The actual implementation of this interface, though (a MongoDB implementation or an in-memory DB or something else) will belong to the persistence layer, which we will implement in a subsequent step.

public interface TaskManagementRepository {

    void save(Task task);

    List<Task> getAll();

    Optional<Task> get(String taskID);

    void delete(String taskID);
}

Finally, we implement the service class, which has the CRUD logic. The critical piece here is that this class doesn’t rely on a concrete implementation of the repository interface – it is agnostic to what DB technology we decide to use later.

public class TaskManagementService {

    private final TaskManagementRepository repository;

    ...

    public Task create(String title, String description) {
        Task task = Task.builder(title, description).build();

        repository.save(task);

        return task;
    }

    public Task update(String taskID, TaskUpdate taskUpdate) {
        Task oldTask = retrieve(taskID);

        Task newTask = oldTask.update(taskUpdate);
        repository.save(newTask);

        return newTask;
    }

    public List<Task> retrieveAll() {
        return repository.getAll();
    }

    public Task retrieve(String taskID) {
        return repository.get(taskID).orElseThrow(() ->
                new TaskNotFoundException("Task with the given identifier cannot be found - " + taskID));
    }

    public void delete(String taskID) {
        repository.delete(taskID);
    }
}

The way this code was written allows us to easily unit test our business logic in isolation by mocking the behavior of the repository interface. To achieve this, we will need to add two dependencies in the POM file:

        ...
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>3.5.13</version>
            <scope>test</scope>
        </dependency>
        ...
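With these dependencies in place, a unit test for the service might look like the sketch below. It assumes a TaskManagementService constructor that takes the repository (elided in the listing above) and a getTitle() getter on Task:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Optional;
import org.junit.Test;

public class TaskManagementServiceTest {

    private final TaskManagementRepository repository = mock(TaskManagementRepository.class);
    private final TaskManagementService service = new TaskManagementService(repository);

    @Test
    public void createSavesAndReturnsTheTask() {
        Task task = service.create("test-title", "test-description");

        // the service is expected to persist the task it builds
        verify(repository).save(task);
        assertEquals("test-title", task.getTitle());
    }

    @Test(expected = TaskNotFoundException.class)
    public void retrieveThrowsWhenTheTaskDoesNotExist() {
        when(repository.get("unknown-id")).thenReturn(Optional.empty());

        service.retrieve("unknown-id");
    }
}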

The full commit for this step can be found here.

Step 3 – Creating Stub API Endpoints

The next step is to implement the API layer. For this project, we are implementing an HTTP REST API using Jersey. Therefore, we start by adding the dependency in the POM file.

        ...
        <dependency>
            <groupId>org.glassfish.jersey.containers</groupId>
            <artifactId>jersey-container-servlet</artifactId>
            <version>2.35</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.inject</groupId>
            <artifactId>jersey-hk2</artifactId>
            <version>2.35</version>
        </dependency>
        ...

The second dependency is needed for Jersey 2.26 and later (see https://eclipse-ee4j.github.io/jersey.github.io/release-notes/2.26.html): from that version on, users need to explicitly declare the dependency injection framework for Jersey to use. In this case we go with HK2, which is what was used in previous releases.

Then we implement the resource class, which at this point only has stub methods that all return a status code 200 HTTP response with no response body.

@Path("/tasks")
public class TaskManagementResource {

    @POST
    public Response createTask() {
        return Response.ok().build();
    }

    @GET
    public Response getTasks() {
        return Response.ok().build();
    }

    @PATCH
    @Path("/{taskID}")
    public Response updateTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    public Response getTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {

        return Response.ok().build();
    }
}

We will also need an application config class to define the base URI for our API and to inform the framework about the task management resource class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
    }

}

The full commit for this step can be found here.

Step 4 – Implementing the API Layer

For this project, we will use JSON as the serialization data format for HTTP requests and responses.

In order to produce and consume JSON in our API, we need to add a library that’s going to be responsible for the JSON serialization and deserialization of POJOs. We are going to use Jackson. The library we need in order to integrate Jersey with Jackson is given below:

        ...
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-json-jackson</artifactId>
            <version>2.35</version>
        </dependency>
        ...

Then we need to customize the behavior of the JSON object mapper that will be used for serializing and deserializing the request and response POJOs. In this case, we disable ALLOW_COERCION_OF_SCALARS – this means that the service won’t attempt to parse strings into numbers or booleans (e.g., {"boolean_field":"true"} will be rejected).

@Provider
public class JsonObjectMapperProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper jsonObjectMapper;

    /**
     * Create a custom JSON object mapper provider.
     */
    public JsonObjectMapperProvider() {
        jsonObjectMapper = new ObjectMapper();
        jsonObjectMapper.disable(ALLOW_COERCION_OF_SCALARS);
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return jsonObjectMapper;
    }
}

Once again, we need to make Jersey aware of this provider class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
        register(JsonObjectMapperProvider.class);
    }

}

Then we define the request and response POJOs. I will skip the code for these classes (a sketch of one is shown after the list), but in summary, we need:

  • TaskCreateRequest – represents the JSON request body sent to the service when creating a new task
  • TaskUpdateRequest – represents the JSON request body sent to the service when updating an existing task
  • TaskResponse – represents the JSON response body sent to the client when retrieving task(s)
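For instance, a minimal TaskCreateRequest might look like the following sketch (the real classes are in the repository; Jackson only needs a no-argument constructor plus getters and setters):

public class TaskCreateRequest {

    private String title;
    private String description;

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}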

The last part of this step is to replace the stub logic in the resource class with the actual API implementation that relies on the business logic encapsulated in the service class from step 2.

@Path("/tasks")
public class TaskManagementResource {

    private final TaskManagementService service;

    public TaskManagementResource(TaskManagementService service) {
        this.service = service;
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response createTask(TaskCreateRequest taskCreateRequest) {
        validateArgNotNull(taskCreateRequest, "task-create-request-body");

        Task task = service.create(taskCreateRequest.getTitle(), taskCreateRequest.getDescription());

        String taskID = task.getIdentifier();

        URI taskRelativeURI = URI.create("tasks/" + taskID);
        return Response.created(taskRelativeURI).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<TaskResponse> getTasks() {
        return service.retrieveAll().stream()
                .map(TaskResponse::new)
                .collect(Collectors.toUnmodifiableList());
    }

    @PATCH
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response updateTask(@PathParam("taskID") String taskID, TaskUpdateRequest taskUpdateRequest) {
        validateArgNotNull(taskUpdateRequest, "task-update-request-body");

        TaskUpdate update = new TaskUpdate(taskUpdateRequest.getTitle(), taskUpdateRequest.getDescription(),
                taskUpdateRequest.isCompleted());

        service.update(taskID, update);

        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public TaskResponse getTask(@PathParam("taskID") String taskID) {
        Task task = service.retrieve(taskID);
        return new TaskResponse(task);
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {
        service.delete(taskID);
        return Response.noContent().build();
    }
}

The full commit for this step can be found here.

Step 5 – Implementing the Storage Mechanism

For simplicity, we are going to implement an in-memory storage implementation of the repository interface rather than relying on a database technology. The implementation will store all tasks inside a map – the key is the task identifier and the value is the task itself. This is just enough for simple CRUD functionality (note that a plain HashMap is not thread-safe, so under concurrent requests a ConcurrentHashMap would be the safer choice).

public class InMemoryTaskManagementRepository implements TaskManagementRepository {

    private final Map<String, Task> tasks = new HashMap<>();

    @Override
    public void save(Task task) {
        tasks.put(task.getIdentifier(), task);
    }

    @Override
    public List<Task> getAll() {
        return tasks.values().stream()
                .collect(Collectors.toUnmodifiableList());
    }

    @Override
    public Optional<Task> get(String taskID) {
        return Optional.ofNullable(tasks.get(taskID));
    }

    @Override
    public void delete(String taskID) {
        tasks.remove(taskID);
    }

}

The full commit for this step can be found here.

Step 6 – Binding Everything Together

Now that we have all the layers implemented, we need to bind them together with a dependency injection framework – in this case, we will use Guice to achieve that.

We start by adding Guice as a dependency in the POM file:

        <dependency>
            <groupId>com.google.inject</groupId>
            <artifactId>guice</artifactId>
            <version>4.2.3</version>
        </dependency>

Then we create a simple Guice module to bind the in-memory DB implementation to the repository interface. This basically means that for all classes that depend on the repository interface, Guice will inject the in-memory DB class. We use the Singleton scope because we want all classes that depend on the repository to re-use the same in-memory DB instance.

public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);
    }

}

Note that if we decide to use an actual database, the code change is as simple as:

  • implementing the wrapper class for the DB we choose – e.g. MongoDBTaskManagementRepository
  • changing the binding above to point to the new implementation of the repository interface

Now that we have the module implemented, we can add the @Inject annotation to every class whose constructor has a dependency that needs to be injected by Guice. These would be the TaskManagementResource and the TaskManagementService classes (a sketch follows below). The magic of Guice (and dependency injection in general) is that the module above is enough to build the entire tree of dependencies in our code.

TaskManagementResource depends on TaskManagementService, which depends on TaskManagementRepository. Guice knows how to get an instance of the TaskManagementRepository interface, so following this chain it also knows how to get an instance of the TaskManagementService and TaskManagementResource classes.
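For illustration, the service’s constructor would then look something like this (the CRUD methods are unchanged from step 2):

import com.google.inject.Inject;

public class TaskManagementService {

    private final TaskManagementRepository repository;

    // Guice sees @Inject and supplies the binding declared in ApplicationModule
    @Inject
    public TaskManagementService(TaskManagementRepository repository) {
        this.repository = repository;
    }

    // ... CRUD methods from step 2 ...
}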

The final piece of work is to make Jersey aware of the Guice injector – remember Jersey uses HK2 as its dependency injection framework, so Jersey will rely on HK2 to be able to build a TaskManagementResource class. In order for HK2 to build a TaskManagementResource it needs to know about Guice’s dependency injector container. To connect Guice and HK2, we are going to use something called the Guice/HK2 Bridge. It is basically a process of bridging the Guice container (the Injector class) into the HK2 container (the ServiceLocator class).

So we declare a dependency on the Guice/HK2 bridge library:

        ...
        <dependency>
            <groupId>org.glassfish.hk2</groupId>
            <artifactId>guice-bridge</artifactId>
            <version>2.6.1</version>
        </dependency>
        ...

Then we change the ApplicationConfig class to create the bridge between Guice and HK2. Notice that since the ApplicationConfig class is used by Jersey (and thus managed by HK2) we can easily inject the ServiceLocator instance (the HK2 container itself) into it.

        @Inject
        public ApplicationConfig(ServiceLocator serviceLocator) {
            register(TaskManagementResource.class);
            register(JsonObjectMapperProvider.class);

            // bridge the Guice container (Injector) into the HK2 container (ServiceLocator)
            Injector injector = Guice.createInjector(new ApplicationModule());
            GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
            GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
            guiceBridge.bridgeGuiceInjector(injector);
        }

The full commit for this step can be found here.

Step 7 – Creating the Application Launcher

The final critical step is configuring and starting the application server through a launcher class, which will serve as our main class for the executable jar file we are targeting.

We start with the code for starting an embedded Tomcat server. The dependency we need is:

    ...
    <dependency>
        <groupId>org.apache.tomcat.embed</groupId>
        <artifactId>tomcat-embed-core</artifactId>
        <version>9.0.62</version>
    </dependency>
    ...

Then we need a launcher class. This class is responsible for starting the embedded Tomcat server and registering a servlet container for the resource config we defined earlier (when we registered the resource class).

public class Launcher {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();

        // configure server port number
        tomcat.setPort(8080);

        // remove defaulted JSP configs
        tomcat.setAddDefaultWebXmlToWebapp(false);

        // add the web app
        StandardContext ctx = (StandardContext) tomcat.addWebapp("/", new File(".").getAbsolutePath());
        ResourceConfig resourceConfig = new ResourceConfig(ApplicationConfig.class);
        Tomcat.addServlet(ctx, "jersey-container-servlet", new ServletContainer(resourceConfig));
        ctx.addServletMappingDecoded("/*", "jersey-container-servlet");

        // start the server
        tomcat.start();
        System.out.println("Server listening on " + tomcat.getHost().getName() + ":" + tomcat.getConnector().getPort());
        tomcat.getServer().await();
    }
}

If you are using IntelliJ to code this project, you should ideally be able to run the main method of the Launcher class. There is one caveat here – with the release of JDK 9 and later (and hence the introduction of the Java Platform Module System), reflective access is only allowed to publicly exported packages. This means that Guice will fail at runtime because it uses reflection to access JDK modules. See this StackOverflow post for more information.

The only workaround I have found so far is to add the JVM option --add-opens java.base/java.lang=ALL-UNNAMED to the run configuration of the main method, as suggested in the StackOverflow post I linked. This basically allows Guice to continue doing its reflection as in the pre-JDK 9 releases.

After we use the workaround above and test our launcher, we get to the part of generating an executable JAR file which can be used to start the service. To achieve this, we need the appassembler plugin. Note that we still need to add the --add-opens java.base/java.lang=ALL-UNNAMED JVM argument in order for the executable jar file to work.

         ...
         <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>appassembler-maven-plugin</artifactId>
                <version>2.0.0</version>
                <configuration>
                    <assembleDirectory>target</assembleDirectory>
                    <extraJvmArguments>--add-opens java.base/java.lang=ALL-UNNAMED</extraJvmArguments>
                    <programs>
                        <program>
                            <mainClass>taskmanagement.Launcher</mainClass>
                            <name>taskmanagement_webapp</name>
                        </program>
                    </programs>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>assemble</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
        ...

With this plugin, we can finally generate an executable file and then use it to start the service:

mvn clean package
./target/bin/taskmanagement_webapp

The full commit for this step can be found here.

Step 8 – Adding Exception Mappers

You might have noticed that so far we have defined two custom exceptions that are thrown when the service receives input data it cannot handle:

  • TaskNotFoundException
  • InvalidTaskDataException

If these exceptions aren’t handled properly when encountered, the embedded Tomcat server will wrap them inside an internal server error (status code 500), which is not very user-friendly. As per the API specification we defined in the beginning (see Appendix), we want clients to receive a 404 status code if, for example, they use a task ID that doesn’t exist.

To achieve this, we use exception mappers. When we register those mappers, Jersey will use them to transform instances of these exceptions to proper HTTP Response objects.

public class TaskNotFoundExceptionMapper implements ExceptionMapper<TaskNotFoundException> {

    @Override
    public Response toResponse(TaskNotFoundException exception) {
        return Response
                .status(Response.Status.NOT_FOUND)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


public class InvalidTaskDataExceptionMapper implements ExceptionMapper<InvalidTaskDataException> {

    @Override
    public Response toResponse(InvalidTaskDataException exception) {
        return Response
                .status(Response.Status.BAD_REQUEST)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


    @Inject
    public ApplicationConfig(ServiceLocator serviceLocator) {
        ...
        register(InvalidTaskDataExceptionMapper.class);
        register(TaskNotFoundExceptionMapper.class);
        ...
    }

Notice the use of a new POJO – ExceptionMessage – which is used to convey the exception message as a JSON response. Now, whenever the business logic throws any of these exceptions, we will get a proper JSON response with the appropriate status code.
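The ExceptionMessage POJO itself can be as small as the following sketch (a single field with a getter is all Jackson needs to produce the {"message": "..."} body):

public class ExceptionMessage {

    private final String message;

    public ExceptionMessage(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}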

The full commit for this step can be found here.

Dockerizing the Application

There are lots of benefits to using Docker, but given that this article is not about containers, I won’t spend time talking about them. I will only mention that I always prefer to run applications in a Docker container because it makes the build and deployment process much more predictable (think application portability, well-defined build behavior, improved deployment process, etc.).

The Dockerfile for our service is relatively simple and is based on the Maven OpenJDK image. It automates what we did in step 7 – packaging the application and running the executable jar file.

FROM maven:3.8.5-openjdk-11-slim
WORKDIR /application

COPY . .

RUN mvn clean package

CMD ["./target/bin/taskmanagement_webapp"]

With this, we can build the container image and start our service as a Docker container. The commands below assume you have the Docker daemon running on your local machine.

docker build --tag task-management-service .
docker run -d -p 127.0.0.1:8080:8080 --name test-task-management-service task-management-service

Now the service should be running in the background and be accessible from your local machine on port 8080. For starting/stopping it, use this command:

docker start/stop test-task-management-service

Testing the Service

Now that we have the service running, we can use Curl to send some test requests.

  • creating a few tasks
curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:46 GMT

curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks"

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:47 GMT

  • retrieving a task
curl -i -X GET "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 162
Date: Tue, 28 Jun 2022 07:54:21 GMT

{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false}

  • retrieving a non-existing task
curl -i -X GET "http://localhost:8080/api/tasks/random-task-id-123"                                                       

HTTP/1.1 404 
Content-Type: application/json
Content-Length: 81
Date: Tue, 28 Jun 2022 09:44:53 GMT

{"message":"Task with the given identifier cannot be found - random-task-id-123"}

  • retrieving all tasks
curl -i -X GET "http://localhost:8080/api/tasks"     

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 490
Date: Tue, 28 Jun 2022 07:55:08 GMT

[{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false},{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:46.444179Z","completed":false}]

  • deleting a task
curl -i -X DELETE "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 204 
Date: Tue, 28 Jun 2022 07:56:55 GMT

  • patching a task
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"completed\": true, \"title\": \"new-title\", \"description\":\"new-description\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 200 
Content-Length: 0
Date: Tue, 28 Jun 2022 08:00:37 GMT

curl -i -X GET "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"   
HTTP/1.1 200 
Content-Type: application/json
Content-Length: 164
Date: Tue, 28 Jun 2022 08:01:07 GMT

{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"new-title","description":"new-description","createdAt":"2022-06-28T07:52:46.444179Z","completed":true}

  • patching a task with empty title
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"title\": \"\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 400 
Content-Type: application/json
Content-Length: 43
Date: Tue, 28 Jun 2022 09:47:09 GMT
Connection: close

{"message":"title cannot be null or blank"}

Future Improvements

What we have built so far is obviously not a production-ready API, but it demonstrates how to get started with the software suite I mentioned in the beginning of this article when building a REST API. Here are some future improvements that can be made:

  • using a database for persistent storage
  • adding user authentication and authorization – tasks should be scoped per user rather than being available globally
  • adding logging
  • adding KPI (Key Performance Indicators) metrics – things like the count of total requests, latency, failures count, etc.
  • adding a mapper for unexpected exceptions – we don’t want to expose a stack trace if the service encounters an unexpected null pointer exception, instead we want a JSON response with status code 500
  • adding automated integration tests
  • adding a more verbose response to the patch endpoint – e.g. indicating whether the request resulted in a change or not
  • scanning packages and automatically registering provider and resource classes instead of manually registering them one-by-one
  • adding CORS (Cross-Origin-Resource-Sharing) support if we intend to call the API from a browser application hosted under a different domain
  • adding SSL support
  • adding rate limiting

If you found this article helpful and would like to see a follow-up on the topics above, please comment or message me with a preference choice of what you would like to learn about the most.

Appendix

The full API specification using the Open API description format can be found below. You can use the Swagger Editor to display the API specification in a more friendly manner.

swagger: '2.0'

info:
  description: This is a RESTful task management API specification.
  version: 1.0.0
  title: Task Management API
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'

host: 'localhost:8080'
basePath: /api

schemes:
  - http

paths:

  /tasks:
    post:
      summary: Create a new task
      operationId: createTask
      consumes:
        - application/json
      parameters:
        - in: body
          name: taskCreateRequest
          description: new task object that needs to be added to the list of tasks
          required: true
          schema:
            $ref: '#/definitions/TaskCreateRequest'
      responses:
        '201':
          description: successfully created new task
        '400':
          description: task create request failed validation
    get:
      summary: Retrieve all existing tasks
      operationId: retrieveTasks
      produces:
        - application/json
      responses:
        '200':
          description: successfully retrieved all tasks
          schema:
            type: array
            items:
              $ref: '#/definitions/TaskResponse'

  '/tasks/{taskID}':
    get:
      summary: Retrieve task
      operationId: retrieveTask
      produces:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '200':
          description: successfully retrieved task
          schema:
            $ref: '#/definitions/TaskResponse'
        '404':
          description: task not found
    patch:
      summary: Update task
      operationId: updateTask
      consumes:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
        - name: taskUpdateRequest
          in: body
          description: task update request
          required: true
          schema:
            $ref: '#/definitions/TaskUpdateRequest'
      responses:
        '200':
          description: successfully updated task
        '400':
          description: task update request failed validation
        '404':
          description: task not found
    delete:
      summary: Delete task
      operationId: deleteTask
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '204':
          description: >-
            successfully deleted task or task with the given identifier did not
            exist

definitions:
  TaskCreateRequest:
    type: object
    required:
      - title
      - description
    properties:
      title:
        type: string
      description:
        type: string
  TaskUpdateRequest:
    type: object
    properties:
      title:
        type: string
      description:
        type: string
      completed:
        type: boolean
  TaskResponse:
    type: object
    required:
      - identifier
      - title
      - description
      - completed
      - createdAt
    properties:
      identifier:
        type: string
      title:
        type: string
      description:
        type: string
      createdAt:
        type: string
        format: date-time
      completed:
        type: boolean

Automatically Generate REST and GraphQL APIs From Your Database

by Adrian Machado


Building APIs from scratch takes time, requires extensive testing, and often leads to inconsistencies between your database schema and API endpoints. Automatically generating APIs directly from your database schema eliminates these pain points while reducing development time from weeks to minutes. This approach is particularly valuable for teams building internal tools, prototypes, or any application where rapid development is important.

The ability to generate APIs automatically has transformed how developers build and maintain applications. Instead of writing repetitive CRUD endpoints, converting API requests to CRUD SQL queries, managing documentation, and maintaining consistency between database schemas and API contracts, developers can focus on building features that matter to their users. This article explores the tools and approaches available for generating both REST and GraphQL APIs from various database types.


Why Generate APIs From Your Database

Traditional API development involves writing code to map database operations to HTTP endpoints, implementing authentication, managing documentation, and ensuring data validation. This process is time-consuming and error-prone. Automatic API generation solves these issues by creating standardized endpoints directly from your database schema.

The benefits extend beyond just saving time. Generated APIs automatically stay in sync with your database schema, reducing bugs caused by outdated API endpoints. They often include built-in features like filtering, pagination, and sorting that would otherwise require custom implementation. Many tools also generate API documentation automatically, ensuring it stays current with your schema changes.

REST API Generation Tools

REST APIs remain the most common choice for web services due to their simplicity and broad support across platforms. Modern tools can generate REST APIs that are both powerful and secure, with features like role-based access control and request validation built-in.

PostgreSQL Solutions

PostgREST stands out as the leading solution for PostgreSQL databases. It turns your database directly into a RESTful API with minimal configuration. The tool automatically creates endpoints for tables and views, supports complex filters, and leverages PostgreSQL’s row-level security for fine-grained access control. If you’d like to see this in action, check out our Neon PostgreSQL sample.
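To get a feel for what this looks like, here is a hypothetical request against a todos table (the table and column names are assumptions; PostgREST serves the API on port 3000 by default and uses operators like eq in query-string filters):

curl "http://localhost:3000/todos?completed=eq.false&order=created_at.desc"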

Prisma combined with ZenStack offers a more programmatic approach. While requiring more setup than PostgREST, it provides better TypeScript integration and more control over the generated API. This combination excels in projects where type safety and custom business logic are priorities.

MySQL Solutions

DreamFactory provides comprehensive API generation for MySQL databases. It includes features like API key management, role-based access control, and the ability to combine multiple data sources into a single API. The platform also supports custom scripting for cases where generated endpoints need modification.

We also created our own MySQL PostgREST sample if you’d like to have more control over the implementation and hosting.

NoSQL Solutions

NoSQL databases benefit from tools like PrestoAPI and DreamFactory, which handle the unique requirements of document-based data structures. PrestoAPI specializes in MongoDB integration, providing automatic API generation with built-in security features and custom endpoint configuration.

Hasura, while primarily known for GraphQL, also generates REST APIs. It supports multiple NoSQL databases and provides real-time subscriptions, making it particularly useful for applications requiring live data updates.


Multi-DB Support

Some solutions are flexible enough to handle multiple types of databases, often allowing you to combine them into a single API. We already mentioned DreamFactory, but others include Apinizer, Directus, and sandman2.

Managed DB Solutions

You can’t talk about REST APIs for Postgres without mentioning the excellent REST API Supabase generates over your database using PostgREST.

GraphQL API Generation Tools

GraphQL APIs offer more flexibility than REST by allowing clients to request exactly the data they need. This flexibility makes them popular, especially for applications with complex data requirements.

Postgres GraphQL Solutions

Hasura and PostGraphile lead the PostgreSQL GraphQL landscape. Hasura provides real-time subscriptions and a powerful permissions system, while PostGraphile offers deep PostgreSQL integration and excellent performance for complex queries.

MySQL and NoSQL Solutions

StepZen and AWS AppSync excel at generating GraphQL APIs for MySQL and NoSQL databases. StepZen simplifies the process of combining multiple data sources, while AppSync provides smooth integration with AWS services and real-time data capabilities.

Other Databases

Some other databases include REST or GraphQL API generation as part of the associated cloud/SaaS offering.

Making the Right Choice

For simple projects with PostgreSQL, PostgREST or Hasura provides an excellent starting point. More complex applications might benefit from tools like Prisma or AWS AppSync, which offer greater flexibility and integration options.

Remember that while automatic API generation can significantly speed up development, it’s not a silver bullet. Complex business logic, custom authentication requirements, or specific performance needs might require additional development work. If you do need a more robust solution for building APIs without sacrificing developer productivity, you should check out Zuplo.

Common Questions About API Generation

Q: How secure are automatically generated APIs? Most tools provide built-in security features like role-based access control and API key management. However, you should review the security features of your chosen tool and implement additional security measures as needed.

Q: Can the generated endpoints be customized? Yes, most tools allow some level of customization through configuration files, middleware, or custom code injection points. If you’d like a fully-customizable experience while still matching your database, check out our article on generating OpenAPI from your database.

Q: What about performance? Generated APIs can be highly performant, especially when using tools that optimize database queries. However, complex operations might require manual optimization.

Q: How can one handle complex business logic? Many tools support custom functions, stored procedures, or middleware that can implement additional business logic beyond basic CRUD operations.