Step-By-Step Tutorial for Building a REST API in Java

https://dev.to/nikolay_stanchev/step-by-step-tutorial-for-building-a-rest-api-in-java-2fna

Motivation

Having seen many tutorials on how to build REST APIs in Java using various combinations of frameworks and libraries, I decided to build my own API using the software suite that I have the most experience with. In particular, I wanted to use:

  • Maven as the build and dependency management tool
  • Jersey as the framework that provides implementation of the JAX-RS specification
  • Tomcat as the application server
    • in particular, I wanted to run Tomcat in embedded mode so that I would end up with a simple executable jar file
  • Guice as the dependency injection framework

The problem I faced was that I couldn’t find any tutorials combining the software choices above, so I had to go through the process of combining the pieces myself. This didn’t turn out to be a particularly straightforward task, which is why I decided to document the process on my blog and share it with others who might be facing similar problems.

Project Summary

For the purpose of this tutorial, we are going to build the standard API for managing TODO items – i.e. a CRUD API that supports Creating, Retrieving, Updating and Deleting tasks.

The API supports five endpoints, summarized below (the full specification can be viewed in the Appendix):

  • POST /tasks – create a new task
  • GET /tasks – retrieve all existing tasks
  • GET /tasks/{taskID} – retrieve a single task
  • PATCH /tasks/{taskID} – update a task
  • DELETE /tasks/{taskID} – delete a task

To implement this API, we will use:

  • Java 11 (OpenJDK)
  • Apache Maven v3.8.6
  • Eclipse Jersey v2.35
  • Apache Tomcat v9.0.62
  • Guice v4.2.3

For simplicity, I will avoid using any databases in this tutorial and instead use a pseudo in-memory DB. However, we will see how easy it is to switch from an in-memory testing DB to an actual database when following a clean architecture.

The goal is to end up with an executable jar file generated by Maven that will include the Tomcat application server and our API implementation. We will then dockerize the entire process of generating the file and executing it, and finally run the service as a Docker container.

The following coding steps will only outline the most relevant pieces of code for the purpose of this tutorial, but you can find the full code in the GitHub repository. For most steps, we will add unit tests that won’t be referenced here but included in the code change itself. To run the tests at any given point in time, you can use mvn clean test.

Coding Steps

Step 1 – Project Setup

As with every Maven project, we need a POM file (the file representing the Project Object Model). We start with a very basic POM which describes the project information and sets the JDK and JRE target versions to 11. This means that the project can use Java 11 language features (but no features from later versions) and will require a JRE version 11 or later to be executed. To avoid registering a domain name for this example project, I am using a group ID that corresponds to my GitHub username where this project will be hosted – com.github.nikist97.

<?xml version="1.0" encoding="UTF-8"?>

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <!-- Project Information -->
    <groupId>com.github.nikist97</groupId>
    <artifactId>TaskManagementService</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>TaskManagementService</name>

    <properties>
        <!-- Maven-related properties used during the build process -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>11</maven.compiler.source>
        <maven.compiler.target>11</maven.compiler.target>
    </properties>

    <dependencies>
        <!-- This is where we will declare libraries our project depends on -->
    </dependencies>

    <build>
        <plugins>
            <!-- This is where we will declare plugins our project needs for the build process -->
        </plugins>
    </build>
</project>

The full commit for this step can be found here.

Step 2 – Implementing the Business Logic

We start with the most critical piece of software in general, which is our business logic. Ideally, this layer should be agnostic to the notion of any DB technologies or API protocols. Whether we implement an HTTP API using MongoDB on the backend or we use PostgreSQL and implement a command-line tool for interacting with our code, it should not affect the code for our business logic. In other words, the business logic should not depend on the persistence layer (the code interacting with the database) and the API layer (the code that will define the HTTP API endpoints).

The first thing to implement is our main entity class – Task. This class follows the builder pattern and provides argument validation. The required attributes are the task’s title and description. The rest of the attributes we can default to sensible values when not explicitly provided:

  • identifier is set to a random UUID
  • createdAt is set to the current date time
  • completed is set to false

public class Task {

    private final String identifier;
    private final String title;
    private final String description;
    private final Instant createdAt;
    private final boolean completed;

    ...

    public static class TaskBuilder {

        ...

        private TaskBuilder(String title, String description) {
            validateArgNotNullOrBlank(title, "title");
            validateArgNotNullOrBlank(description, "description");

            this.title = title;
            this.description = description;
            this.identifier = UUID.randomUUID().toString();
            this.createdAt = Instant.now();
            this.completed = false;
        }

        ...

    }
}

Then, we define the interface we need for interacting with a persistence layer (i.e. a database or another storage mechanism). Notice that this interface belongs to the business layer because, ultimately, it is the business logic that decides what storage functionality we will need. The actual implementation of this interface, though (a MongoDB implementation or an in-memory DB or something else) will belong to the persistence layer, which we will implement in a subsequent step.

public interface TaskManagementRepository {

    void save(Task task);

    List<Task> getAll();

    Optional<Task> get(String taskID);

    void delete(String taskID);
}

Finally, we implement the service class, which has the CRUD logic. The critical piece here is that this class doesn’t rely on a concrete implementation of the repository interface – it is agnostic to what DB technology we decide to use later.

public class TaskManagementService {

    private final TaskManagementRepository repository;

    ...

    public Task create(String title, String description) {
        Task task = Task.builder(title, description).build();

        repository.save(task);

        return task;
    }

    public Task update(String taskID, TaskUpdateRequest taskUpdateRequest) {
        Task oldTask = retrieve(taskID);

        Task newTask = oldTask.update(taskUpdateRequest);
        repository.save(newTask);

        return newTask;
    }

    public List<Task> retrieveAll() {
        return repository.getAll();
    }

    public Task retrieve(String taskID) {
        return repository.get(taskID).orElseThrow(() ->
                new TaskNotFoundException("Task with the given identifier cannot be found - " + taskID));
    }

    public void delete(String taskID) {
        repository.delete(taskID);
    }
}

The way this code was written allows us to easily unit test our business logic in isolation by mocking the behavior of the repository interface. To achieve this, we will need to add two dependencies in the POM file:

        ...
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.mockito</groupId>
            <artifactId>mockito-core</artifactId>
            <version>3.5.13</version>
            <scope>test</scope>
        </dependency>
        ...
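The unit tests themselves aren't shown here (they live in the repository and use JUnit with Mockito), but the core idea can be sketched in a self-contained way with a hand-rolled fake repository. Note that the types below are simplified stand-ins for the real classes, and NoSuchElementException stands in for the project's TaskNotFoundException:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

class ServiceUnitTestSketch {

    // simplified stand-in for the Task entity from step 2
    static class Task {
        final String identifier;
        Task(String identifier) { this.identifier = identifier; }
    }

    // same shape as the get() method of TaskManagementRepository
    interface TaskManagementRepository {
        Optional<Task> get(String taskID);
    }

    // mirrors the retrieve() logic of TaskManagementService
    static Task retrieve(TaskManagementRepository repository, String taskID) {
        return repository.get(taskID).orElseThrow(() ->
                new NoSuchElementException("Task with the given identifier cannot be found - " + taskID));
    }

    public static void main(String[] args) {
        // fake repository that "contains" exactly one task
        TaskManagementRepository fake = taskID -> taskID.equals("t-1")
                ? Optional.of(new Task("t-1"))
                : Optional.empty();

        // the service logic returns the stored task...
        if (!retrieve(fake, "t-1").identifier.equals("t-1")) throw new AssertionError();

        // ...and throws for an unknown identifier
        boolean thrown = false;
        try {
            retrieve(fake, "missing");
        } catch (NoSuchElementException e) {
            thrown = true;
        }
        if (!thrown) throw new AssertionError();
        System.out.println("retrieve() behaves correctly against the fake repository");
    }
}
```

With Mockito, the fake above would simply become a `mock(TaskManagementRepository.class)` with stubbed `when(...).thenReturn(...)` behavior – the business logic under test stays identical.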

The full commit for this step can be found here.

Step 3 – Creating Stub API Endpoints

The next step is to implement the API layer. For this project, we are implementing an HTTP REST API using Jersey. Therefore, we start by adding the dependency in the POM file.

        ...
        <dependency>
            <groupId>org.glassfish.jersey.containers</groupId>
            <artifactId>jersey-container-servlet</artifactId>
            <version>2.35</version>
        </dependency>
        <dependency>
            <groupId>org.glassfish.jersey.inject</groupId>
            <artifactId>jersey-hk2</artifactId>
            <version>2.35</version>
        </dependency>
        ...

The second dependency is needed as of Jersey 2.26 – https://eclipse-ee4j.github.io/jersey.github.io/release-notes/2.26.html – from that version onwards, users need to explicitly declare the dependency injection framework for Jersey to use. In this case, we go with HK2, which is what was used in previous releases.

Then we implement the resource class, which at this point only has stub methods that all return a status code 200 HTTP response with no response body.

@Path("/tasks")
public class TaskManagementResource {

    @POST
    public Response createTask() {
        return Response.ok().build();
    }

    @GET
    public Response getTasks() {
        return Response.ok().build();
    }

    @PATCH
    @Path("/{taskID}")
    public Response updateTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    public Response getTask(@PathParam("taskID") String taskID) {
        return Response.ok().build();
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {

        return Response.ok().build();
    }
}

We will also need an application config class to define the base URI for our API and to inform the framework about the task management resource class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
    }

}

The full commit for this step can be found here.

Step 4 – Implementing the API Layer

For this project, we will use JSON as the serialization data format for HTTP requests and responses.

In order to produce and consume JSON in our API, we need a library that's going to be responsible for the JSON serialization and deserialization of POJOs. We are going to use Jackson. The dependency we need in order to integrate Jersey with Jackson is given below:

        ...
        <dependency>
            <groupId>org.glassfish.jersey.media</groupId>
            <artifactId>jersey-media-json-jackson</artifactId>
            <version>2.35</version>
        </dependency>
        ...

Then we need to customize the behavior of the JSON object mapper that will be used for serializing and deserializing the request and response POJOs. In this case, we disable ALLOW_COERCION_OF_SCALARS – this means that the service won't attempt to parse strings into numbers or booleans (e.g. {"boolean_field":"true"} will be rejected).

@Provider
public class JsonObjectMapperProvider implements ContextResolver<ObjectMapper> {

    private final ObjectMapper jsonObjectMapper;

    /**
     * Create a custom JSON object mapper provider.
     */
    public JsonObjectMapperProvider() {
        jsonObjectMapper = new ObjectMapper();
        jsonObjectMapper.disable(ALLOW_COERCION_OF_SCALARS);
    }

    @Override
    public ObjectMapper getContext(Class<?> type) {
        return jsonObjectMapper;
    }
}

Once again, we need to make Jersey aware of this provider class:

@ApplicationPath("/api")
public class ApplicationConfig extends ResourceConfig {

    public ApplicationConfig() {
        register(TaskManagementResource.class);
        register(JsonObjectMapperProvider.class);
    }

}

Then we define the request and response POJOs. I will skip the code for these classes, but in summary, we need:

  • TaskCreateRequest – represents the JSON request body sent to the service when creating a new task
  • TaskUpdateRequest – represents the JSON request body sent to the service when updating an existing task
  • TaskResponse – represents the JSON response body sent to the client when retrieving task(s)
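As a rough sketch (not the exact code from the repository – the field names are assumed from the curl examples later in this article), TaskCreateRequest could look like this; for such a simple shape, Jackson only needs a no-arg constructor and conventional getters and setters:

```java
// Hypothetical sketch of TaskCreateRequest; Jackson binds the JSON
// fields "title" and "description" to the matching setters by name,
// so no annotations are required for this simple shape.
public class TaskCreateRequest {

    private String title;
    private String description;

    // Jackson requires a no-arg constructor for deserialization
    public TaskCreateRequest() {
    }

    public String getTitle() {
        return title;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}
```

TaskUpdateRequest adds a completed field following the same pattern, and TaskResponse is the read-only counterpart constructed from a Task.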

The last part of this step is to replace the stub logic in the resource class with the actual API implementation that relies on the business logic encapsulated in the service class from step 2.

@Path("/tasks")
public class TaskManagementResource {

    private final TaskManagementService service;

    public TaskManagementResource(TaskManagementService service) {
        this.service = service;
    }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response createTask(TaskCreateRequest taskCreateRequest) {
        validateArgNotNull(taskCreateRequest, "task-create-request-body");

        Task task = service.create(taskCreateRequest.getTitle(), taskCreateRequest.getDescription());

        String taskID = task.getIdentifier();

        URI taskRelativeURI = URI.create("tasks/" + taskID);
        return Response.created(taskRelativeURI).build();
    }

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<TaskResponse> getTasks() {
        return service.retrieveAll().stream()
                .map(TaskResponse::new)
                .collect(Collectors.toUnmodifiableList());
    }

    @PATCH
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response updateTask(@PathParam("taskID") String taskID, TaskUpdateRequest taskUpdateRequest) {
        validateArgNotNull(taskUpdateRequest, "task-update-request-body");

        TaskUpdate update = new TaskUpdate(taskUpdateRequest.getTitle(), taskUpdateRequest.getDescription(),
                taskUpdateRequest.isCompleted());

        service.update(taskID, update);

        return Response.ok().build();
    }

    @GET
    @Path("/{taskID}")
    @Produces(MediaType.APPLICATION_JSON)
    public TaskResponse getTask(@PathParam("taskID") String taskID) {
        Task task = service.retrieve(taskID);
        return new TaskResponse(task);
    }

    @DELETE
    @Path("/{taskID}")
    public Response deleteTask(@PathParam("taskID") String taskID) {
        service.delete(taskID);
        return Response.noContent().build();
    }
}

The full commit for this step can be found here.

Step 5 – Implementing the Storage Mechanism

For simplicity, we are going to provide an in-memory implementation of the repository interface rather than relying on a database technology. The implementation stores all tasks inside a map – the key is the task identifier and the value is the task itself. This is just enough for simple CRUD functionality.

public class InMemoryTaskManagementRepository implements TaskManagementRepository {

    private final Map<String, Task> tasks = new HashMap<>();

    @Override
    public void save(Task task) {
        tasks.put(task.getIdentifier(), task);
    }

    @Override
    public List<Task> getAll() {
        return tasks.values().stream()
                .collect(Collectors.toUnmodifiableList());
    }

    @Override
    public Optional<Task> get(String taskID) {
        return Optional.ofNullable(tasks.get(taskID));
    }

    @Override
    public void delete(String taskID) {
        tasks.remove(taskID);
    }

}

The full commit for this step can be found here.

Step 6 – Binding Everything Together

Now that we have all the layers implemented, we need to bind them together with a dependency injection framework – in this case, we will use Guice to achieve that.

We start by adding Guice as a dependency in the POM file:

        <dependency>
            <groupId>com.google.inject</groupId>
            <artifactId>guice</artifactId>
            <version>4.2.3</version>
        </dependency>

Then we create a simple guice module to bind the in-memory DB implementation to the repository interface. This basically means that for all classes that depend on the repository interface, Guice will inject the in-memory DB class. We use the Singleton scope because we want all classes that depend on the repository to re-use the same in-memory DB instance.

public class ApplicationModule extends AbstractModule {

    @Override
    public void configure() {
        bind(TaskManagementRepository.class).to(InMemoryTaskManagementRepository.class).in(Singleton.class);
    }

}

Note that if we decide to use an actual database, the code change is as simple as:

  • implementing the wrapper class for the DB we choose – e.g. MongoDBTaskManagementRepository
  • changing the binding above to point to the new implementation of the repository interface

Now that we have the module implemented, we can add the @Inject annotation to all classes whose constructor has a dependency that needs to be injected by Guice. These are the TaskManagementResource and TaskManagementService classes. The magic of Guice (and dependency injection in general) is that the module above is enough to build the entire tree of dependencies in our code.

TaskManagementResource depends on TaskManagementService which depends on TaskManagementRepository. Guice knows how to get an instance of the TaskManagementRepository interface so following this chain it also knows how to get an instance of the TaskManagementService and TaskManagementResource classes.
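What Guice automates here can be sketched by hand with toy stand-in classes – conceptually, the single binding plus plain constructor chaining is all that is happening (the class bodies below are simplified placeholders, not the real implementations):

```java
import java.util.List;

class ManualWiringDemo {

    // toy stand-ins mirroring the shape of the real classes
    interface TaskManagementRepository { List<String> getAll(); }

    static class InMemoryTaskManagementRepository implements TaskManagementRepository {
        public List<String> getAll() { return List.of(); }
    }

    static class TaskManagementService {
        final TaskManagementRepository repository;
        TaskManagementService(TaskManagementRepository repository) { this.repository = repository; }
    }

    static class TaskManagementResource {
        final TaskManagementService service;
        TaskManagementResource(TaskManagementService service) { this.service = service; }
    }

    public static void main(String[] args) {
        // the one binding from ApplicationModule, done by hand
        // (Singleton scope: one shared repository instance)
        TaskManagementRepository repository = new InMemoryTaskManagementRepository();

        // Guice follows the constructor chain automatically; by hand it is:
        TaskManagementResource resource =
                new TaskManagementResource(new TaskManagementService(repository));

        // the resource ends up holding the same repository instance
        if (resource.service.repository != repository) throw new AssertionError();
        System.out.println("dependency chain wired: Resource -> Service -> Repository");
    }
}
```

The benefit of letting Guice do this instead is that the wiring code doesn't grow as the dependency tree deepens – one module with the interface bindings is enough.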

The final piece of work is to make Jersey aware of the Guice injector – remember Jersey uses HK2 as its dependency injection framework, so Jersey will rely on HK2 to be able to build a TaskManagementResource class. In order for HK2 to build a TaskManagementResource it needs to know about Guice’s dependency injector container. To connect Guice and HK2, we are going to use something called the Guice/HK2 Bridge. It is basically a process of bridging the Guice container (the Injector class) into the HK2 container (the ServiceLocator class).

So we declare a dependency on the Guice/HK2 bridge library:

        ...
        <dependency>
            <groupId>org.glassfish.hk2</groupId>
            <artifactId>guice-bridge</artifactId>
            <version>2.6.1</version>
        </dependency>
        ...

Then we change the ApplicationConfig class to create the bridge between Guice and HK2. Notice that since the ApplicationConfig class is used by Jersey (and thus managed by HK2) we can easily inject the ServiceLocator instance (the HK2 container itself) into it.

        @Inject
        public ApplicationConfig(ServiceLocator serviceLocator) {
            register(TaskManagementResource.class);
            register(JsonObjectMapperProvider.class);

            // bridge the Guice container (Injector) into the HK2 container (ServiceLocator)
            Injector injector = Guice.createInjector(new ApplicationModule());
            GuiceBridge.getGuiceBridge().initializeGuiceBridge(serviceLocator);
            GuiceIntoHK2Bridge guiceBridge = serviceLocator.getService(GuiceIntoHK2Bridge.class);
            guiceBridge.bridgeGuiceInjector(injector);
        }

The full commit for this step can be found here.

Step 7 – Creating the Application Launcher

The final critical step is configuring and starting the application server through a launcher class, which will serve as our main class for the executable jar file we are targeting.

We start with the code for starting an embedded Tomcat server. The dependency we need is:

    ...
    <dependency>
        <groupId>org.apache.tomcat.embed</groupId>
        <artifactId>tomcat-embed-core</artifactId>
        <version>9.0.62</version>
    </dependency>
    ...

Then we need a launcher class. This class is responsible for starting the embedded Tomcat server and registering a servlet container for the resource config we defined earlier (when we registered the resource class).

public class Launcher {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();

        // configure server port number
        tomcat.setPort(8080);

        // remove defaulted JSP configs
        tomcat.setAddDefaultWebXmlToWebapp(false);

        // add the web app
        StandardContext ctx = (StandardContext) tomcat.addWebapp("/", new File(".").getAbsolutePath());
        ResourceConfig resourceConfig = new ResourceConfig(ApplicationConfig.class);
        Tomcat.addServlet(ctx, "jersey-container-servlet", new ServletContainer(resourceConfig));
        ctx.addServletMappingDecoded("/*", "jersey-container-servlet");

        // start the server
        tomcat.start();
        System.out.println("Server listening on " + tomcat.getHost().getName() + ":" + tomcat.getConnector().getPort());
        tomcat.getServer().await();
    }
}

If using IntelliJ to code this project, you should be able to run the main method of the Launcher class directly. There is one caveat here – since the release of JDK 9 (and hence the introduction of the Java Platform Module System), reflective access is only allowed to publicly exported packages. This means that Guice will fail at runtime because it uses reflection to access JDK modules. See this StackOverflow post for more information.

The only workaround I have found so far is to add the JVM option --add-opens java.base/java.lang=ALL-UNNAMED to the run configuration of the main method, as suggested in the StackOverflow post linked above. This basically allows Guice to continue doing its reflection as in the pre-JDK 9 releases.

After applying the workaround above and testing our launcher, we get to the part of generating an executable file which can be used to start the service. To achieve this, we need the appassembler plugin. Note that we still need to add the --add-opens java.base/java.lang=ALL-UNNAMED JVM argument in order for the generated executable to work.

         ...
         <plugins>
            <plugin>
                <groupId>org.codehaus.mojo</groupId>
                <artifactId>appassembler-maven-plugin</artifactId>
                <version>2.0.0</version>
                <configuration>
                    <assembleDirectory>target</assembleDirectory>
                    <extraJvmArguments>--add-opens java.base/java.lang=ALL-UNNAMED</extraJvmArguments>
                    <programs>
                        <program>
                            <mainClass>taskmanagement.Launcher</mainClass>
                            <name>taskmanagement_webapp</name>
                        </program>
                    </programs>
                </configuration>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>assemble</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
        ...

With this plugin, we can finally generate an executable file and then use it to start the service:

mvn clean package
./target/bin/taskmanagement_webapp

The full commit for this step can be found here.

Step 8 – Adding Exception Mappers

You might have noticed that so far we have defined two custom exceptions that are thrown when the service receives input data it cannot handle:

  • TaskNotFoundException
  • InvalidTaskDataException

If these exceptions aren’t handled properly when encountered, then the embedded Tomcat server will wrap them inside an internal server error (status code 500) response, which is not very user friendly. As per the API specification we defined in the beginning (see Appendix), we want clients to receive a 404 status code if, for example, they use a task ID that doesn’t exist.

To achieve this, we use exception mappers. Once we register those mappers, Jersey will use them to transform instances of these exceptions into proper HTTP Response objects.

public class TaskNotFoundExceptionMapper implements ExceptionMapper<TaskNotFoundException> {

    @Override
    public Response toResponse(TaskNotFoundException exception) {
        return Response
                .status(Response.Status.NOT_FOUND)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


public class InvalidTaskDataExceptionMapper implements ExceptionMapper<InvalidTaskDataException> {

    @Override
    public Response toResponse(InvalidTaskDataException exception) {
        return Response
                .status(Response.Status.BAD_REQUEST)
                .entity(new ExceptionMessage(exception.getMessage()))
                .type(MediaType.APPLICATION_JSON)
                .build();
    }

}


    @Inject
    public ApplicationConfig(ServiceLocator serviceLocator) {
        ...
        register(InvalidTaskDataExceptionMapper.class);
        register(TaskNotFoundExceptionMapper.class);
        ...
    }

Notice the use of a new POJO – ExceptionMessage – which is used to convey the exception message as a JSON response. Now, whenever the business logic throws any of these exceptions, we will get a proper JSON response with the appropriate status code.
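The ExceptionMessage class itself can be as small as the following sketch (the actual class lives in the repository); Jackson serializes it via its getter, producing the {"message": "..."} bodies shown in the Testing section:

```java
// Minimal sketch of the ExceptionMessage POJO returned by the
// exception mappers; Jackson serializes the "message" property
// through the getter.
public class ExceptionMessage {

    private final String message;

    public ExceptionMessage(String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}
```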

The full commit for this step can be found here.

Dockerizing the Application

There are lots of benefits to using Docker, but given that this article is not about containers, I won’t spend time talking about them. I will only mention that I always prefer to run applications in a Docker container because it makes the build and deployment process much more predictable (think application portability, well-defined build behavior, improved deployment process, etc.).

The Dockerfile for our service is relatively simple and based on the maven OpenJDK image. It automates what we did in step 7 – packaging the application and running the executable jar file.

FROM maven:3.8.5-openjdk-11-slim
WORKDIR /application

COPY . .

RUN mvn clean package

CMD ["./target/bin/taskmanagement_webapp"]

With this, we can build the container image and start our service as a Docker container. The commands below assume you have the Docker daemon running on your local machine.

docker build --tag task-management-service .
docker run -d -p 127.0.0.1:8080:8080 --name test-task-management-service task-management-service

Now the service should be running in the background and accessible from your local machine on port 8080. To stop it or start it again, use:

docker stop test-task-management-service
docker start test-task-management-service

Testing the Service

Now that we have the service running, we can use Curl to send some test requests.

  • creating a few tasks
curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks" 

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:46 GMT

curl -i -X POST -H "Content-Type:application/json" -d "{\"title\": \"test-title\", \"description\":\"description\"}" "http://localhost:8080/api/tasks"

HTTP/1.1 201 
Location: http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546
Content-Length: 0
Date: Tue, 28 Jun 2022 07:52:47 GMT

  • retrieving a task
curl -i -X GET "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 162
Date: Tue, 28 Jun 2022 07:54:21 GMT

{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false}

  • retrieving a non-existing task
curl -i -X GET "http://localhost:8080/api/tasks/random-task-id-123"                                                       

HTTP/1.1 404 
Content-Type: application/json
Content-Length: 81
Date: Tue, 28 Jun 2022 09:44:53 GMT

{"message":"Task with the given identifier cannot be found - random-task-id-123"}

  • retrieving all tasks
curl -i -X GET "http://localhost:8080/api/tasks"     

HTTP/1.1 200 
Content-Type: application/json
Content-Length: 490
Date: Tue, 28 Jun 2022 07:55:08 GMT

[{"identifier":"64d85db4-905b-4c62-ba10-13fcb19a2546","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:47.872859Z","completed":false},{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"test-title","description":"description","createdAt":"2022-06-28T07:52:46.444179Z","completed":false}]

  • deleting a task
curl -i -X DELETE "http://localhost:8080/api/tasks/64d85db4-905b-4c62-ba10-13fcb19a2546"

HTTP/1.1 204 
Date: Tue, 28 Jun 2022 07:56:55 GMT

  • patching a task
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"completed\": true, \"title\": \"new-title\", \"description\":\"new-description\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 200 
Content-Length: 0
Date: Tue, 28 Jun 2022 08:00:37 GMT

curl -i -X GET "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"   
HTTP/1.1 200 
Content-Type: application/json
Content-Length: 164
Date: Tue, 28 Jun 2022 08:01:07 GMT

{"identifier":"d2c4ed20-2538-44e5-bf19-150db9f6d83f","title":"new-title","description":"new-description","createdAt":"2022-06-28T07:52:46.444179Z","completed":true}

  • patching a task with empty title
curl -i -X PATCH -H "Content-Type:application/json" -d "{\"title\": \"\"}" "http://localhost:8080/api/tasks/d2c4ed20-2538-44e5-bf19-150db9f6d83f"

HTTP/1.1 400 
Content-Type: application/json
Content-Length: 43
Date: Tue, 28 Jun 2022 09:47:09 GMT
Connection: close

{"message":"title cannot be null or blank"}

Future Improvements

What we have built so far is obviously not a production-ready API, but it demonstrates how to get started with the software suite I mentioned in the beginning of this article when building a REST API. Here are some future improvements that can be made:

  • using a database for persistent storage
  • adding user authentication and authorization – tasks should be scoped per user rather than being available globally
  • adding logging
  • adding KPI (Key Performance Indicators) metrics – things like the count of total requests, latency, failures count, etc.
  • adding a mapper for unexpected exceptions – we don’t want to expose a stack trace if the service encounters an unexpected null pointer exception, instead we want a JSON response with status code 500
  • adding automated integration tests
  • adding a more verbose response to the patch endpoint – e.g. indicating whether the request resulted in a change or not
  • scanning packages and automatically registering provider and resource classes instead of manually registering them one-by-one
  • adding CORS (Cross-Origin-Resource-Sharing) support if we intend to call the API from a browser application hosted under a different domain
  • adding SSL support
  • adding rate limiting

If you found this article helpful and would like to see a follow-up on the topics above, please comment or message me with a preference choice of what you would like to learn about the most.

Appendix

The full API specification using the Open API description format can be found below. You can use the Swagger Editor to display the API specification in a more friendly manner.

swagger: '2.0'

info:
  description: This is a RESTful task management API specification.
  version: 1.0.0
  title: Task Management API
  license:
    name: Apache 2.0
    url: 'http://www.apache.org/licenses/LICENSE-2.0.html'

host: 'localhost:8080'
basePath: /api

schemes:
  - http

paths:

  /tasks:
    post:
      summary: Create a new task
      operationId: createTask
      consumes:
        - application/json
      parameters:
        - in: body
          name: taskCreateRequest
          description: new task object that needs to be added to the list of tasks
          required: true
          schema:
            $ref: '#/definitions/TaskCreateRequest'
      responses:
        '201':
          description: successfully created new task
        '400':
          description: task create request failed validation
    get:
      summary: Retrieve all existing tasks
      operationId: retrieveTasks
      produces:
        - application/json
      responses:
        '200':
          description: successfully retrieved all tasks
          schema:
            type: array
            items:
              $ref: '#/definitions/TaskResponse'

  '/tasks/{taskID}':
    get:
      summary: Retrieve task
      operationId: retrieveTask
      produces:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '200':
          description: successfully retrieved task
          schema:
            $ref: '#/definitions/TaskResponse'
        '404':
          description: task not found
    patch:
      summary: Update task
      operationId: updateTask
      consumes:
        - application/json
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
        - name: taskUpdateRequest
          in: body
          description: task update request
          required: true
          schema:
            $ref: '#/definitions/TaskUpdateRequest'
      responses:
        '200':
          description: successfully updated task
        '400':
          description: task update request failed validation
        '404':
          description: task not found
    delete:
      summary: Delete task
      operationId: deleteTask
      parameters:
        - name: taskID
          in: path
          description: task identifier
          required: true
          type: string
      responses:
        '204':
          description: >-
            successfully deleted task or task with the given identifier did not
            exist

definitions:
  TaskCreateRequest:
    type: object
    required:
      - title
      - description
    properties:
      title:
        type: string
      description:
        type: string
  TaskUpdateRequest:
    type: object
    properties:
      title:
        type: string
      description:
        type: string
      completed:
        type: boolean
  TaskResponse:
    type: object
    required:
      - identifier
      - title
      - description
      - completed
      - createdAt
    properties:
      identifier:
        type: string
      title:
        type: string
      description:
        type: string
      createdAt:
        type: string
        format: date-time
      completed:
        type: boolean

A bit of the trendy AI…

So, here we go, to keep trying out different tools:

AI

Claude
ChatGPT

Design with AI

Khroma – AI Color Tool for Designers | Discover and Save Color Palettes

AI code generators

v0 Community – v0 by Vercel
Sign Up | Windsurf Editor and Codeium extensions
Durable AI Website Builder and Small Business Software
Continue

AI detectors

Grammarly: Free AI Writing Assistance
Wordtune — Express yourself with confidence
AI Detector – the Original AI Checker for ChatGPT & More
AI Insights Reveals Why It’s AI – Copyleaks
BypassGPT: Free AI Detector & Undetectable AI Bypasser

Indescribable AI

Welcome to Alva

Step-by-step RESTful web service example in Java using Eclipse and TomEE Plus

Published: 01 Jan 2019

Related Videos

https://www.youtube.com/embed/Tpr4UfkX9e4

TheServerSide has published a number of articles on the tenets of effective RESTful web service design, along with examples of how to actually create a cloud-native application using Spring Boot and Spring Data APIs. In this JAX-RS tutorial, we will go back to basics by developing the exact same application, except this time we’ll use standard Java EE APIs and the extended, enterprise version of Tomcat, TomEE Plus, as our deployment target. This step-by-step JAX-RS RESTful web service example in Java using Eclipse and TomEE Plus will get you up to speed on modern web service development techniques in less than 15 minutes. 

Prerequisites

This tutorial uses Eclipse Oxygen as the development environment, the underlying JDK is at version 1.8, and the deployment target is TomEE Plus. You can download TomEE Plus from the project’s Apache home page.

Why are we using TomEE Plus, and not Tomcat or the standard TomEE offering? Sadly, Tomcat 9 implements only the basic Java web profile, which does not support JAX-RS: it does not include the javax.ws.rs.* packages, so without editing POM files or adding JAR files to the Eclipse project’s \lib directory, RESTful web services simply won’t work. The standard TomEE offering doesn’t include JAX-RS libraries either. The TomEE Plus server, on the other hand, includes various enterprise packages, JAX-RS among them, so RESTful web services deploy right out of the box, making this RESTful web services example much simpler.

Step 1: The dynamic web project

The first step in this JAX-RS tutorial is to kick off the dynamic web project creation wizard in Eclipse. 

This JAX-RS tutorial utilizes a dynamic web project in Eclipse.

When the dynamic web project wizard appears, name the project restful-java, choose Apache Tomcat 8.5 as the target runtime (even though we are using TomEE Plus, not Tomcat), specify 3.1 as the dynamic web module version and choose a minimal configuration for the project. When these options are set, click Finish.

Note that you need to install TomEE Plus prior to doing this JAX-RS tutorial. You can also use any other application server that supports Java EE and JAX-RS for this RESTful web service example in Java using Eclipse.

Where is the web.xml file?

If you look at this project in GitHub (link below), you’ll notice that there isn’t a web.xml file. That makes traditional enterprise developers nervous, but as long as everything is annotated, there’s no need for one in version 3.x of the Servlet and JSP spec. In older REST implementations you would need to configure a Jersey Servlet and perform a REST Servlet mapping, but that is no longer necessary. In this case, TomEE Plus will process all of the annotations on the classes in the Java web project and make RESTful web services available accordingly. It should be noted that on some servers you do need to reference your JAX-RS classes explicitly, which you can do through an Application class. That process is addressed in the JAX-RS problems section towards the end.

Specify project settings for the RESTful web service example in Java using Eclipse.

Step 2: Create the Score class

This restful web service example in Java using Eclipse models a score counter for an online rock-paper-scissors application, so the first requirement is to create a class named Score that keeps track of wins, losses and ties.

package com.mcnz.restful.java.example;

public class Score {
    public static int WINS, LOSSES, TIES;
}

To keep things simple, we won’t add any setters or getters. Furthermore, we are going to make the properties of the Score class static, as that will enable the Java virtual machine (JVM) to simulate persistence between stateless calls to the web service. This approach will enable us to run and test the application on a single JVM. However, you should manage application state in this way only as a proof of concept. It’s better to persist data with Hibernate and Java Persistence API or save information to a NoSQL database, but that is beyond the scope of this JAX-RS tutorial.
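To see why static properties are enough to simulate persistence between stateless calls, the following standalone sketch (the demo class name is illustrative) shows that every reference to Score observes the same shared counters:

```java
// Minimal demo of how static fields simulate persistence across otherwise
// stateless calls: all code shares one copy of the counters.
class Score {
    public static int WINS, LOSSES, TIES;
}

public class ScoreStateDemo {
    public static void main(String[] args) {
        Score.WINS++; // first "request" records a win
        Score.WINS++; // second "request" records another
        // Even a brand-new instance sees the same shared static state
        Score anyInstance = new Score();
        System.out.println(Score.WINS); // prints 2
    }
}
```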

Step 3: Code the JAX-RS Service class

A class named ScoreService is the heart and soul of this RESTful web service example in Java using Eclipse. As such, decorate it with an ApplicationPath annotation that defines the base URL of the web service.

package com.mcnz.restful.java.example;

import javax.ws.rs.*;

@ApplicationPath("/")
public class ScoreService { }

This class will contain three getter methods that enable RESTful web clients to query the number of wins, losses or ties. These methods are invoked through an HTTP GET invocation and return the current win, loss or tie count as plain text. As such, these methods each have a JAX-RS @GET annotation, a @Produces annotation indicating they return a text string and a @Path annotation indicating the URL clients need to use in order to invoke the method:

@GET @Path("/score/wins") @Produces("text/plain")
public int getWins() { return Score.WINS; }

@GET @Path("/score/losses") @Produces("text/plain")
public int getLosses() { return Score.LOSSES; }

@GET @Path("/score/ties") @Produces("text/plain")
public int getTies() { return Score.TIES; }

The increase methods of this JAX-RS tutorial’s ScoreService follow a similar pattern, with the exception of the fact that each method is triggered through an HTTP POST invocation:

@POST @Path("/score/wins") @Produces("text/plain")
public int increaseWins() { return Score.WINS++; }

@POST @Path("/score/ties") @Produces("text/plain")
public int increaseTies() { return Score.TIES++; }

@POST @Path("/score/losses") @Produces("text/plain")
public int increaseLosses() { return Score.LOSSES++; }

The final two methods of the ScoreService class enable users to get the JSON-based representation of the complete score or pass query parameters to the web service to update the static properties of the Score class. Both methods use the /score path, and both produce JSON. But the getScore method is invoked through an HTTP GET request, while the update method is invoked through a PUT.

Just for the record, there is an easier way to return JSON from a RESTful web service than by using the String.format call. You can use @Produces annotations and simply return JavaBeans, but because we are using static variables in our Score class, doing that gets a bit messy. We will save that for a future RESTful web services tutorial with Eclipse.

@GET
@Path("/score")
@Produces("application/json")
public String getScore() {
   String pattern =
      "{ \"wins\":\"%s\", \"losses\":\"%s\", \"ties\": \"%s\"}";
   return String.format(pattern,  Score.WINS, Score.LOSSES, Score.TIES );  
}
 
@PUT
@Path("/score")
@Produces("application/json")
public String update(@QueryParam("wins") int wins,
                        @QueryParam("losses") int losses,
                        @QueryParam("ties")   int ties) {
   Score.WINS   = wins;
   Score.TIES   = ties;
   Score.LOSSES = losses;
   String pattern =
      "{ \"wins\":\"%s\", \"losses\":\"%s\", \"ties\": \"%s\"}";
   return String.format(pattern,  Score.WINS, Score.LOSSES, Score.TIES );  
}
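As a quick sanity check, the String.format pattern used by getScore and update can be exercised in isolation. This standalone sketch (the class name is illustrative) reproduces the JSON string the service emits:

```java
// Standalone demo of the JSON-building pattern used by getScore/update.
public class ScoreJsonDemo {
    public static void main(String[] args) {
        int wins = 0, losses = 0, ties = 1;
        String pattern =
            "{ \"wins\":\"%s\", \"losses\":\"%s\", \"ties\": \"%s\"}";
        // Each %s placeholder is replaced by the corresponding counter value
        System.out.println(String.format(pattern, wins, losses, ties));
    }
}
```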

Step 4: Deploy the JAX-RS web service

Now that you’ve coded the JAX-RS tutorial’s ScoreService, it’s time for this RESTful web service example in Java using Eclipse to move into the testing stage. Remember that we are using TomEE Plus as our target server, not Tomcat. Tomcat doesn’t provide built-in JAX-RS support.

To test the application, first right-click on the restful Java project, and choose Run As > Run on server. This will deploy the web project and start the Apache TomEE Plus server that hosts the application.

Run the RESTful web service example in Java on TomEE Plus.

Step 5: Test the JAX-RS web service example

When you deploy the JAX-RS tutorial app, there are a number of different ways to test it. One way is to simply type the URL of the RESTful web service example into a web browser. A call to the following URL will trigger a GET invocation and a JSON string representing the initial score should be displayed:

http://localhost:8080/restful-java/score
Test the JAX-RS tutorial app by typing its URL into a web browser.

To test the increaseTies method, run the following two curl commands in a Bash shell:

$ curl -X POST "http://localhost:8080/restful-java/score/ties"
$ curl -X GET "http://localhost:8080/restful-java/score/"

The JSON string returned from the second command indicates that the number of ties has indeed been incremented by one:

{ "wins":"0", "losses":"0", "ties": "1"}

Now, use curl to trigger a PUT invocation with query parameters:

$ curl -X PUT "http://localhost:8080/restful-java/score?wins=1&losses=2&ties=3"

This PUT invocation will return the following JSON string:

{ "wins":"1", "losses":"2", "ties": "3"}
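If you prefer a programmatic client over curl, the same GET invocation can be issued with the JDK 11+ java.net.http.HttpClient. The sketch below is self-contained: instead of the deployed TomEE service it stands up a throwaway com.sun.net.httpserver stub returning a canned score (the stub and class names are hypothetical), so it only illustrates the client-side call:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ScoreClientDemo {
    public static void main(String[] args) throws Exception {
        // Throwaway stub standing in for the deployed /restful-java/score endpoint
        String json = "{ \"wins\":\"1\", \"losses\":\"2\", \"ties\": \"3\"}";
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/restful-java/score", exchange -> {
            byte[] body = json.getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Same GET invocation as the curl command, done with the JDK 11 HttpClient
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/restful-java/score")).build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
        server.stop(0);
    }
}
```

Against the real service, only the URI would change: point it at http://localhost:8080/restful-java/score.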

Fixing common JAX-RS problems

In this example, the ScoreService class is annotated with @ApplicationPath. This works fine with TomEE Plus, but on other servers or older implementations, the @ApplicationPath annotation is placed on a separate class that extends the JAX-RS Application class. This often solves the problem of RESTful URLs simply not being recognized and triggering a 404: The origin server did not find a current representation for the target resource error when an attempt is made to invoke them.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

@ApplicationPath("/")
public class ScoreApplication extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        return new HashSet<>(Arrays.asList(ScoreService.class));
    }
}

On servers where the implementation is Jersey-based, the class can be replaced with one that is a bit easier to understand, although it calls on Jersey APIs explicitly, so it will only work with a Jersey-based implementation. You just tell it the names of the various packages where JAX-RS annotated web services reside, and it ensures they are loaded:

import javax.ws.rs.ApplicationPath;

import org.glassfish.jersey.server.ResourceConfig;

@ApplicationPath("/")
public class ScoreApplication extends ResourceConfig {

    public ScoreApplication() {
        // Scan this package for annotated resource and provider classes
        packages("com.mcnz.restful.java.example");
    }
}

And of course, you must ensure you are using TomEE Plus and not Tomcat. As was mentioned earlier, a standard Tomcat installation will not run RESTful web services without a JAX-RS implementation added to the \lib directory, Gradle build script or Maven POM.

And that’s a complete, step-by-step JAX-RS RESTful web service example in Java using Eclipse and TomEE Plus.

The full source code for this example can be downloaded from GitHub.

Create a JAX-RS web service with Tomcat, Eclipse

If you used TomEE in an attempt to create a JAX-RS web service and ran into issues, watch this new video that instead uses Tomcat and Eclipse to create this RESTful web service.

Automatically Generate REST and GraphQL APIs From Your Database

by Adrian Machado

Automatically Generate REST and GraphQL APIs From Your Database | Zuplo Blog

Building APIs from scratch takes time, requires extensive testing, and often leads to inconsistencies between your database schema and API endpoints. Automatically generating APIs directly from your database schema eliminates these pain points while reducing development time from weeks to minutes. This approach is particularly valuable for teams building internal tools, prototypes, or any application where rapid development is important.

The ability to generate APIs automatically has transformed how developers build and maintain applications. Instead of writing repetitive CRUD endpoints, converting API requests to CRUD SQL queries, managing documentation, and maintaining consistency between database schemas and API contracts, developers can focus on building features that matter to their users. This article explores the tools and approaches available for generating both REST and GraphQL APIs from various database types.

Why Generate APIs From Your Database

Traditional API development involves writing code to map database operations to HTTP endpoints, implementing authentication, managing documentation, and ensuring data validation. This process is time-consuming and error-prone. Automatic API generation solves these issues by creating standardized endpoints directly from your database schema.

The benefits extend beyond just saving time. Generated APIs automatically stay in sync with your database schema, reducing bugs caused by outdated API endpoints. They often include built-in features like filtering, pagination, and sorting that would otherwise require custom implementation. Many tools also generate API documentation automatically, ensuring it stays current with your schema changes.
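For instance, the limit/offset pagination that generated APIs typically expose as query parameters boils down to a simple slice over the result set. The toy Java sketch below (names are illustrative; no real database or generator is involved) shows the idea:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PaginationDemo {
    // The kind of limit/offset pagination a generated API exposes as
    // ?offset=...&limit=... query parameters on a collection endpoint.
    static <T> List<T> page(List<T> rows, int offset, int limit) {
        return rows.stream()
                   .skip(offset)   // drop the rows before this page
                   .limit(limit)   // keep at most one page of rows
                   .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> rows =
            IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(page(rows, 4, 3)); // [5, 6, 7]
    }
}
```

Real generators push this slicing down into the SQL (LIMIT/OFFSET or keyset pagination) rather than filtering in memory, but the contract exposed to API clients is the same.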

REST API Generation Tools

REST APIs remain the most common choice for web services due to their simplicity and broad support across platforms. Modern tools can generate REST APIs that are both powerful and secure, with features like role-based access control and request validation built-in.

PostgreSQL Solutions

PostgREST stands out as the leading solution for PostgreSQL databases. It turns your database directly into a RESTful API with minimal configuration. The tool automatically creates endpoints for tables and views, supports complex filters, and leverages PostgreSQL’s row-level security for fine-grained access control. If you’d like to see this in action, check out our Neon PostgreSQL sample.

Prisma combined with ZenStack offers a more programmatic approach. While requiring more setup than PostgREST, it provides better TypeScript integration and more control over the generated API. This combination excels in projects where type safety and custom business logic are priorities.

MySQL Solutions

DreamFactory provides comprehensive API generation for MySQL databases. It includes features like API key management, role-based access control, and the ability to combine multiple data sources into a single API. The platform also supports custom scripting for cases where generated endpoints need modification.

We also created our own MySQL PostgREST sample if you’d like to have more control over the implementation and hosting.

NoSQL Solutions

NoSQL databases benefit from tools like PrestoAPI and DreamFactory, which handle the unique requirements of document-based data structures. PrestoAPI specializes in MongoDB integration, providing automatic API generation with built-in security features and custom endpoint configuration.

Hasura, while primarily known for GraphQL, also generates REST APIs. It supports multiple NoSQL databases and provides real-time subscriptions, making it particularly useful for applications requiring live data updates.


Multi-DB Support

Some solutions are flexible enough to handle multiple types of databases, often allowing you to combine them into a single API. We already mentioned DreamFactory, but others include Apinizer, Directus, and sandman2.

Managed DB Solutions

You can’t talk about REST APIs for Postgres without mentioning Supabase’s excellent REST API, which it generates over your database using PostgREST.

GraphQL API Generation Tools

GraphQL APIs offer more flexibility than REST by allowing clients to request exactly the data they need. This flexibility has made them popular, especially for applications with complex data requirements.

Postgres GraphQL Solutions

Hasura and PostGraphile lead the PostgreSQL GraphQL landscape. Hasura provides real-time subscriptions and a powerful permissions system, while PostGraphile offers deep PostgreSQL integration and excellent performance for complex queries.

MySQL and NoSQL Solutions

StepZen and AWS AppSync excel at generating GraphQL APIs for MySQL and NoSQL databases. StepZen simplifies the process of combining multiple data sources, while AppSync provides smooth integration with AWS services and real-time data capabilities.

Other notable mentions:

Other Databases

Some other databases often include REST or GraphQL API generation as a part of the associated cloud/SaaS offering. This includes:

Making the Right Choice

For simple projects on PostgreSQL, PostgREST or Hasura provides an excellent starting point. More complex applications might benefit from tools like Prisma or AWS AppSync, which offer greater flexibility and integration options.

Remember that while automatic API generation can significantly speed up development, it’s not a silver bullet. Complex business logic, custom authentication requirements, or specific performance needs might require additional development work. If you need a more robust solution for building APIs without sacrificing developer productivity, check out Zuplo.

Common Questions About API Generation

Q: How secure are automatically generated APIs? Most tools provide built-in security features like role-based access control and API key management. However, you should review the security features of your chosen tool and implement additional security measures as needed.

Q: Can the generated endpoints be customized? Yes, most tools allow some level of customization through configuration files, middleware, or custom code injection points. If you’d like a fully customizable experience while still matching your database, check out our article on generating OpenAPI from your database.

Q: What about performance? Generated APIs can be highly performant, especially when using tools that optimize the underlying database queries. However, complex operations might require manual optimization.

Q: How can one handle complex business logic? Many tools support custom functions, stored procedures, or middleware that can implement additional business logic beyond basic CRUD operations.