pgAdmin 4 Docker Container and Data Seeding with EF Core

Tarik Berkovac · Published in Towards Dev · 11 min read · May 5, 2024


One of the applications that lets us manage PostgreSQL is PgAdmin 4.

We can download it from the official pgAdmin website.

In this article, we will not do that, but we will create a Docker container that will be running on our machine. We will spin it up with our backend project and database to have a fully functional application that starts with just one command: docker compose up.

Let’s take a look at where we are with our docker-compose file:

version: '3.8'
services:
  web:
    image: medium-books-api
    container_name: medium-books-api_c
    build:
      context: .
      dockerfile: Dockerfile.web
    ports:
      - 5075:5075
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      db:
        condition: service_healthy
  db:
    image: medium-books-db
    container_name: medium-books-db_c
    build:
      context: .
      dockerfile: Dockerfile.db
    ports:
      - 5435:5435
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secretpassword
      POSTGRES_DB: postgres
      PGPORT: 5435
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      retries: 5
      timeout: 5s

We have two services, web and db.

We will create a new service through which we will manage our database.

So we need an image on which this service will be based. We don’t have a Dockerfile for pgAdmin, so we won’t need a build section. We can search on Google for “pgAdmin 4 Docker image”, and we will find the official dpage/pgadmin4 image on Docker Hub.

As we can see, it’s the official image of pgAdmin 4, so let’s use it:
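A minimal sketch of what that looks like under services in our docker-compose.yaml (the service and container names here follow the ones that appear later in the logs):

  pgadmin4:
    image: dpage/pgadmin4
    container_name: pgAdmin4_c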

It will be running inside a container, so to access it from our host machine we will need to do some port mapping.

Also, we know that all such applications have some way to log in. The easiest way to see what the application requires to run is to read the official documentation of the image, so let’s go there.

We are looking at the documentation for pgAdmin 4 version 8.6 (the latest version at the time of writing).

It tells us which environment variables are required and must be set.

So we will add an environment section, with corresponding variables:
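A sketch of that environment section, using the default credentials we will later log in with:

    environment:
      PGADMIN_DEFAULT_EMAIL: admin@mail.com
      PGADMIN_DEFAULT_PASSWORD: admin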

If we look closely at the documentation, here is the relevant part:

PGADMIN_DEFAULT_PASSWORD or PGADMIN_DEFAULT_PASSWORD_FILE variable is required and must be set at launch time.

So by setting PGADMIN_DEFAULT_PASSWORD, we are not required to set PGADMIN_DEFAULT_PASSWORD_FILE. For now we will set the default password; later on, we will cover the default password file.

With this, the environment for our pgAdmin service is complete. Let’s discuss the sequence in which these services should run.

Web API depends on:
- database

pgAdmin depends on:
- database

The database does not depend on anything.
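In compose terms, our pgadmin4 service gets the same kind of dependency section that web already has:

    depends_on:
      db:
        condition: service_healthy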

One last thing is to map ports between the container and our host machine.
To know which port our app is running on inside the container, we take a look at PGADMIN_LISTEN_PORT and PGADMIN_ENABLE_TLS.

TLS is short for Transport Layer Security. If it is enabled, the container does not send or receive plain data (e.g. message) but encrypted data (e.g. dks20dsmd2). This way a potential attacker sniffing the network cannot read the data being transferred.

To encrypt or decrypt data coming out of or into our container we need some rules, and those rules are set by having a server certificate and a server key on our host machine. We then map their file paths to paths inside the container (volume mapping):

When TLS is enabled, a certificate and key must be provided. Typically these should be stored on the host file system and mounted from the container. The expected paths are /certs/server.cert and /certs/server.key.
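If we did enable TLS, the mapping would look roughly like this (the host-side paths are an assumption; the container-side paths come from the documentation quoted above):

    volumes:
      - ./certs/server.cert:/certs/server.cert
      - ./certs/server.key:/certs/server.key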

TLS is not enabled by default, and we will not override it, so our pgAdmin will run by default on port 80 in the container. We can map it to any free port on our host machine; let’s choose 8080.

Great, with this our pgAdmin4 service is fully set up.
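Putting the pieces together, the whole service might look like this (a sketch assembled from the steps above; the exact file in the repository may differ slightly):

  pgadmin4:
    image: dpage/pgadmin4
    container_name: pgAdmin4_c
    ports:
      - 8080:80
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@mail.com
      PGADMIN_DEFAULT_PASSWORD: admin
    depends_on:
      db:
        condition: service_healthy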

Let’s note some key points again:

  • Each service runs on an address and a port
  • The address and port are usually predefined if they are not set in the configuration (yaml) file
  • When data is sent over the network, a potential attacker sees either unreadable encrypted data (with TLS) or exactly the data we are sending (plain data)
  • Many services require certain environment variables to be set in the configuration, so it is important to go over the service’s documentation to see which ones must be set
  • We need to think about whether our service depends on another service being fully up first, so we don’t run into errors

Let’s run:

docker compose up

When we first ran this, we got an error: the A in pgAdmin4 was written uppercase in the image name, and Docker image names must be lowercase. Let’s fix it to the lowercase pgadmin4 (the snippets above already use the corrected name) and run the command again:

We see that it starts pulling the image, and once the image is pulled we get:

We previously had images and containers for web and db; they just went from stopped to running, so they are not re-created. We can see that the pgAdmin4_c container is created and running, and that it is listening on http://[::]:80 (port 80 inside the container).

Let’s take a look at the images and containers in Docker Desktop:

Great, we have everything up and running. Let’s go to localhost:8080.

Now we enter the email address and password that we set as the defaults in the docker-compose.yaml file (admin@mail.com, admin).

After we have successfully logged in, we will see a very familiar screen:

Let’s connect to our server instance:

When we write localhost, it in fact means the IP address 127.0.0.1. Let’s check whether there really is a process running at 127.0.0.1:5435.

There is, on our host machine, but the connection from pgAdmin still fails: inside the pgAdmin container, localhost refers to that container itself, not to our host. We need to provide the address of the db container instead. We can simply write db for the host address, and Docker will resolve it to the right IP address.

Let’s change localhost to db and click save:

Let’s remove the server and try to connect using the IP address of the db container: 172.18.0.2
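One way to find that IP address is to inspect the container:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' medium-books-db_c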

And let’s try again:

And we have connected to the same database. So why does this work?

Docker networking! When we start our services with docker-compose, aside from the services themselves, one default network is created, which lets us communicate between containers.
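We can see that network with:

docker network ls
docker network inspect <project>_default

Compose names the network after the project directory (so <project> here is a placeholder). The inspect output lists every attached container with its IP address, and Docker’s built-in DNS on this network resolves each service name (like db) to that address.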

Data seeding

Let’s now take a look at data seeding. If we want tables to be populated when migrations are applied, we need to execute some SQL inserts.

There are many ways to do so, but we will show one using ModelBuilder:

First, in OnModelCreating, we call:

modelBuilder.Entity<TEntity>().HasData()

HasData accepts a list of entities as an argument:

We defined a private method so that we don’t have to write the list of elements inside the OnModelCreating method itself.

We will use a simple “for loop” to generate entities:
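As a sketch (the entity and property names here are illustrative, and UserId is deliberately not set yet, which is exactly what will bite us in a moment):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);
    // seed the Users table with generated entities
    modelBuilder.Entity<User>().HasData(GetUsers());
}

private static List<User> GetUsers()
{
    var users = new List<User>();
    for (var index = 1; index <= 10; index++)
    {
        // note: UserId is not assigned here yet
        users.Add(new User { Name = $"User {index}" });
    }
    return users;
}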

We will do the same for BooksDbContext:

Now we need to create migrations for both dbContexts:

First, we will position ourselves at the root of the solution, and then execute:

dotnet ef migrations add UsersSeedingMigration \
  --project "./Users Module/MediumBooks.Users" \
  --startup-project MediumBooks.API

As we have multiple DbContexts, we will need to specify which one we want to use:

dotnet ef migrations add UsersSeedingMigration \
  --project "./Users Module/MediumBooks.Users" \
  --startup-project MediumBooks.API \
  --context UsersDbContext

Now we get an error saying that we need to set the Ids of the entities we are inserting.

Let’s fix it by assigning a value to UserId:

We will assign it the loop index, which takes values from one to ten:
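The loop body then becomes (again a sketch):

users.Add(new User
{
    UserId = index, // index runs from 1 to 10
    Name = $"User {index}"
});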

We will also update Books, as we will encounter the same problem once again when we try to create a migration for BooksDbContext:

We need to be careful when creating this seed data. We know that AuthorId must be one of the UserId values. Even though they are in different schemas and modules, we need referentially consistent data across the modules.

Because of that, we take only values from 1 to 10 for AuthorId, simply by taking the remainder when dividing the index by 10 and adding 1 to it. That way we avoid the edge case AuthorId = 0, which would not reference any user.
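A sketch of the Books side, assuming a Book entity with BookId, Title, and AuthorId and, say, 100 generated books (the names and the count are illustrative):

private static List<Book> GetBooks()
{
    var books = new List<Book>();
    for (var index = 1; index <= 100; index++)
    {
        books.Add(new Book
        {
            BookId = index,
            Title = $"Book {index}",
            // index % 10 is 0..9, so +1 keeps AuthorId in the seeded range 1..10
            AuthorId = (index % 10) + 1
        });
    }
    return books;
}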

Let’s create the migrations again, for both the Users Module and the Reading Module:

We can see the newly added UsersSeedingMigration in the Migrations directory, and also that UsersDbContextModelSnapshot has been modified:

Let’s take a look at what our UsersSeedingMigration looks like:

And UsersDbContextModelSnapshot now has a builder.HasData which we defined:

Let’s create a migration for the Reading Module now:
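The command mirrors the one for the Users Module (the project path and migration name here are assumptions based on the module naming above):

dotnet ef migrations add BooksSeedingMigration \
  --project "./Reading Module/MediumBooks.Books" \
  --startup-project MediumBooks.API \
  --context BooksDbContext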

Same as before, BooksDbContextModelSnapshot is marked as M (modified):

In the newly added migration we can see, in the Up method, the inserts for the Books, and in the Down method the code that will be used to delete the inserted data if the migration is reverted.
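Roughly, the generated migration looks like this (the schema, table, and column names are illustrative):

public partial class BooksSeedingMigration : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // one row per seeded book
        migrationBuilder.InsertData(
            schema: "reading",
            table: "Books",
            columns: new[] { "BookId", "AuthorId", "Title" },
            values: new object[] { 1, 2, "Book 1" });
        // ...
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        // deletes the seeded rows if the migration is reverted
        migrationBuilder.DeleteData(
            schema: "reading",
            table: "Books",
            keyColumn: "BookId",
            keyValue: 1);
        // ...
    }
}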

We don’t need to do anything specific to apply those migrations. We have already set up Program.cs so that Migrate is called when the app starts:
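That setup looks something like this (the context names match the ones above; the exact code in the repository may differ slightly):

using (var scope = app.Services.CreateScope())
{
    var services = scope.ServiceProvider;
    // applies any pending migrations, including our seeding migrations
    services.GetRequiredService<UsersDbContext>().Database.Migrate();
    services.GetRequiredService<BooksDbContext>().Database.Migrate();
}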

These changes will not take effect inside the container (we don’t have any volume mappings for source code; the only volume mapping is for the database), so we will need to delete the images. But to delete the images, we first need to delete the containers. Then we will build the images again and start the containers with the command “docker compose up”.

We will also delete the postgres-data directory, which was mapped into our database container. That way the database will be created fresh together with the database container.
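One way to do all of that from the terminal (--rmi all removes the images used by the services):

docker compose down --rmi all
rm -rf ./postgres-data
docker compose up --build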

Now let's start fresh:

After the database container is created, the API container starts:

At the end of the output we can see the insert statement for User 1.

A bit further down, we can also see the inserts for the Books:

Now, in Docker Desktop, we can take a look at our newly created images and containers:

When we click on the marked link, we get an error message in the browser.

This is expected behavior, because Swagger is not available at the route /; it is available at /swagger/index.html:

Let’s try to get a user with Id 1 and book with Id 1, to see if it works:
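For example, with curl (the exact routes are assumptions; check Swagger for the real ones):

curl http://localhost:5075/api/users/1
curl http://localhost:5075/api/books/1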

GREAT, IT WORKS!!!

Thank you for reading and coming this far!

We have been on a whole journey to dockerize our application, so that anyone can run it without any problems with the command “docker compose up”. As you can see, it’s very powerful.

The source code is available at: https://github.com/tberkovac/ModularMonolith.Medium

This code needs some more tuning, but its main purpose is to make the main concepts easy to understand. I highly encourage you to write a docker-compose file for your own services and share your progress with us.

This article concludes the mini-series:

Introduction to Docker
Docker commands and Multi-Stage Build
Docker compose and Postgres container

If you liked the content, you can react with claps. Stay connected by following me for more interesting articles!

Thanks for reading once again!

As always, good luck!
