In Part 1 we covered routes, controllers, and client-side scripting, which led to a very basic and not very useful result. In this post we will walk through what is required to read, write, and maintain a database for Ticket Overlords. The technologies we will be using are Slick 3.0 and Evolutions.
As before, the source code for this tutorial series is available on GitHub, with the code specifically for this post here. Since the code builds on the previous post, you can grab the current state of the project (where the previous post finished) here.
The initial data model
Before we dive into the database code, we need to sketch out the classes we intend to persist into the database. For this basic example, we are going to create two classes:
- Event: the concert, festival, monster truck rally, etc.
- TicketBlock: each event could have different groupings of tickets with different pricing. We are going to ignore seat plans for now; everything is general admission, but there could be VIP packages or early bird pricing.
For these two classes, we can sketch out a basic class model that looks like this:
We are taking a simplistic approach to the data model; realistically, the event would link to a venue that holds all the correct address and geo information. For now, this should be adequate.
We can easily transfer these models to case classes. Start by creating a new package named com.semisafe.ticketoverlords in the app directory. This is different from our current layout, where controllers and views are top-level packages. The default Play convention differs from the traditional style guide; I personally leave the controllers where they are and use Java naming conventions for everything else. You will find what works best in your own projects.
In the new ticketoverlords package, create a Scala class named Event. Then create a second class named TicketBlock.
That looks like what we modeled earlier, but there are a couple of practical problems. The first problem is that we do not have anything that resembles a natural key for when we store these to a database or reference them via an API request. We also need a relation to connect a TicketBlock to an Event. Since this is a many-to-one relationship from TicketBlock to Event, we will just add the Event's id field to the TicketBlock as well. The Event class definition should now look like this:
And the updated
TicketBlock class definition should look like this:
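For reference, here is a sketch of roughly what the two case classes might look like at this point. The field names beyond id and eventID are illustrative assumptions rather than the post's exact definitions (and the id fields are changed to Option[Long] later in the post):

```scala
package com.semisafe.ticketoverlords

import org.joda.time.DateTime

// id will become the database primary key.
case class Event(
    id: Long,
    name: String,
    start: DateTime,
    end: DateTime)

// eventID is the many-to-one link back to the owning Event.
case class TicketBlock(
    id: Long,
    eventID: Long,
    name: String,
    price: BigDecimal,
    initialSize: Int,
    saleStart: DateTime,
    saleEnd: DateTime)
```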
REST endpoints for the models
Now that we have the model classes defined, we can start to build an API to interact with what will become our data access layer. We will code the API first, since we did similar API work in the previous post. We can then use the new API to verify the data layer is working as expected.
For our API we are just going to support creating and reading. The operations we will want for each class are:
- Create an Event or TicketBlock
- Retrieve an Event or TicketBlock by id
- Retrieve all Events or TicketBlocks
We can get started by creating the Events controller. In this controller we are using more than the Ok and BadRequest results we have used previously. Play includes response types for any of the HTTP status codes you would like to return.
Also worth mentioning is the data type of
events being a
Seq. I personally use List as my go-to collection type, which is not a good idea. When possible, you should generalize your data types. In this case, the function does not care one bit if the resulting collection is a List, a Vector, or something else entirely.
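To illustrate the point with a small self-contained example (not from the post's code), a function written against Seq accepts any concrete sequence type:

```scala
// Written against Seq, the most general ordered collection needed here.
def describeAll(names: Seq[String]): String =
  names.mkString(", ")

// Callers may pass a List, a Vector, or anything else that is a Seq;
// the function body never needs to know which.
val fromList = describeAll(List("VIP", "Early Bird"))
val fromVector = describeAll(Vector("VIP", "Early Bird"))
```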
You will have noticed that an error stating No Json serializer found for type Seq[com.semisafe.ticketoverlords.Event] was output by the IDE. As we did in the previous post, we solve this by adding a JSON formatter to the Event's companion object.
If you recall from part 1, there was a JSON envelope named AvailabilityResponse that we used to wrap the result. We will now create a generic version of that in the controllers.responses package, named EndpointResponse.
Since these are intended to be converted to JSON as the response is sent, this would be a good time to add the companion object with the JSON formatter.
Unfortunately, Eclipse is now telling us there is a problem with our response
classes. There is no implicit format for the
Any type, which makes sense since
it can be literally anything.
Here we can take advantage of the fact that a member value that is already a
JsValue will stay a
JsValue when converted to JSON.
We do not need to make the response field an Option, since JsValue has a JsNull value that can be used in the place of None. It is a little ugly that the case class accepts both a response and an error at the same time. We can mitigate this somewhat by creating two objects with apply methods that return an EndpointResponse: a SuccessResponse will always have a populated response, and an ErrorResponse will always have a populated error.
We can now instantiate our
EndpointResponse without needing to worry about
the result value not matching or the wrong response portion being populated.
Another improvement we can make here is on our
SuccessResponse object. The
requirement of passing a
JsValue adds extra code on the calling side.
This can be improved by adding a type parameter to the apply function and an implicit Writes value to be used in the apply body. This will automatically pick up the JSON Writes when
apply is called based
on the current implicit scope. We can add this to our
Events controller to
unify the action responses.
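To show the shape of this pattern without pulling in Play's JSON library, here is a dependency-free sketch. The Encoder trait stands in for play.api.libs.json.Writes and the String payload stands in for JsValue; the names here are illustrative, not the post's exact code:

```scala
// Stand-in for play.api.libs.json.Writes in this sketch.
trait Encoder[T] { def encode(value: T): String }

case class EndpointResponse(
    status: String,
    response: String,
    errorCode: Option[Int],
    errorMessage: Option[String])

object SuccessResponse {
  // The type parameter plus an implicit encoder lets callers pass any
  // value with an encoder in scope, mirroring the implicit Writes
  // lookup described above.
  def apply[T](result: T)(implicit enc: Encoder[T]): EndpointResponse =
    EndpointResponse("OK", enc.encode(result), None, None)
}

object ErrorResponse {
  def apply(code: Int, message: String): EndpointResponse =
    EndpointResponse("error", "", Some(code), Some(message))
}

implicit val intEncoder: Encoder[Int] =
  new Encoder[Int] { def encode(v: Int): String = v.toString }
```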
Reading incoming data
With the controller capable of output, we can turn our attention to the create
action. The first line expects there to be a JSON body for the POST request that
matches up with our
Event class. To accomplish this, we change the Action to Action(parse.json), which tells Play we are expecting a JSON body with a JSON Content-Type. Once we enter the function, the value of the request body has already been transformed into a JsValue. The JsValue type has a validate method that returns a JsResult value, which can then be folded to safely access the actual data. Note the INVALID_JSON constant is just an arbitrary value (of 1000) added to the ErrorResponse.
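Folding a JsResult works much like folding an Either: you must handle both the failure and the success branch. Here is a dependency-free analogue of the pattern, with hypothetical names and only the standard library:

```scala
// Stand-in for JsResult: Left carries the validation error,
// Right carries the successfully parsed value.
def parseQuantity(raw: String): Either[String, Int] =
  try Right(raw.trim.toInt)
  catch { case _: NumberFormatException => Left(s"not a number: $raw") }

// fold forces both branches to be handled, just like folding the
// JsResult returned by validate.
def handle(raw: String): String =
  parseQuantity(raw).fold(
    error => s"BadRequest: $error",
    qty => s"Ok: ordered $qty tickets")
```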
One more thing to fix is that when we call
getByID, the return
values will have a valid
id, as they already exist in our system. When calling the create action, the data being passed in via JSON will have no idea what the id value should be. If we change the case classes to have an Option for the id fields, both outgoing and incoming cases can be safely handled.
Once more with the TicketBlocks
This has everything we need, except for the data access logic. We can write a
TicketBlocks controller that contains identical functionality.
You will also need to create the JSON formatter in the TicketBlock companion object.
The only thing left on our API side is to add the actions to the routes file.
Open conf/routes and add the POST route for the create action, and the two GET routes for the read actions.
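The additions to conf/routes end up looking something like this (the paths and action names follow the conventions above but are assumptions, not the post's exact file):

```
POST    /events              controllers.Events.create
GET     /events              controllers.Events.list
GET     /events/:id          controllers.Events.getByID(id: Long)

POST    /ticketblocks        controllers.TicketBlocks.create
GET     /ticketblocks        controllers.TicketBlocks.list
GET     /ticketblocks/:id    controllers.TicketBlocks.getByID(id: Long)
```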
This is the first time we have required a parameter to be extracted from the URL path. As can be seen, arguments are prepended with a colon and can be validated to match the expected data type when the request is made.
Defining a schema with Evolutions
Now that the API is defined, we are ready to create the database. For schema
management, Play supports a component called evolutions which used to be
enabled by default. As of 2.4, you must explicitly add a dependency for
evolutions. To include the evolutions library, you would normally add the
following to your build.sbt file:
Note that I used the word “normally”. We are going to be using Slick to
access the database, which changes some things. To add support for evolutions,
update your build.sbt file to match these dependencies:
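The dependency section would look roughly like this; the version number is an assumption from the Play 2.4 era and may need adjusting:

```scala
libraryDependencies ++= Seq(
  "com.typesafe.play" %% "play-slick" % "1.0.0",
  "com.typesafe.play" %% "play-slick-evolutions" % "1.0.0"
)
```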
The dependency for Slick itself is implied by the requirement for “play-slick-evolutions”; we will add that requirement directly later. It is worth noting that Slick conflicts with the “jdbc” dependency. If you have “jdbc” present, you will need to remove it. Since we must now explicitly add evolutions, there is some incorrect information in our application.conf file (if you started with a 2.3.x version like our repo). Go ahead and clear the lines about evolutions from your configuration if they are present.
Creating the schema
To have evolutions create your tables, you need to create a SQL script defining
the schema structure. Start by creating the directory conf/evolutions/default and then create a file named 1.sql in that directory. The default directory is named after the default database in the configuration file. If you are connecting to multiple databases, each database can have its own set of evolutions. Inside 1.sql you will have two sections, the Ups and the Downs, which are used to upgrade and downgrade a schema respectively. Add the following SQL to create our tables:
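The shape of the evolution script is shown below; the exact column definitions are assumptions, not the post's exact schema:

```sql
# --- !Ups

CREATE TABLE EVENTS (
    ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    NAME VARCHAR(255) NOT NULL,
    START TIMESTAMP NOT NULL,
    END TIMESTAMP NOT NULL
);

CREATE TABLE TICKET_BLOCKS (
    ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    EVENT_ID BIGINT NOT NULL,
    NAME VARCHAR(255) NOT NULL,
    PRICE DECIMAL(10, 2) NOT NULL,
    INITIAL_SIZE INT NOT NULL,
    SALE_START TIMESTAMP NOT NULL,
    SALE_END TIMESTAMP NOT NULL
);

# --- !Downs

DROP TABLE TICKET_BLOCKS;
DROP TABLE EVENTS;
```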
Since this is just raw SQL, you can take advantage of any vendor-specific features and data types for the database you are using. For this example, we are going to use the H2 database engine with MySQL compatibility turned on. Add the H2 database library as a dependency now; the version listed is just the most recent at the time of writing.
Now that we have included the H2 database in our project, we need to define the
configuration for connecting to it. Open up the
conf/application.conf file and find the Database configuration section. You may notice something similar to this:
Go ahead and delete that. As we will be using Slick, the database configuration is slightly different.
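For play-slick, the configuration keys look like this; the database name and file path are assumptions, and MODE=MYSQL enables the MySQL compatibility mentioned earlier:

```
slick.dbs.default.driver = "slick.driver.H2Driver$"
slick.dbs.default.db.driver = "org.h2.Driver"
slick.dbs.default.db.url = "jdbc:h2:./db/ticketoverlords;MODE=MYSQL"
```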
The url differs a bit from the commented example. Instead of using H2 as an in-memory database, we will be writing to the filesystem. This will make it easier to inspect data between multiple development sessions. In production, we would want to change this to something like MySQL or Postgres but for sample development H2 works fine for persistence.
Applying an evolution
If the application is not currently running inside of
activator, start it now
and point your browser to http://localhost:9000/. You will see a message
presented by evolutions stating that the database schema is out of sync and
needs to be updated.
After you hit the Apply button, you may notice there are new files in your
project under the
db directory. This is your newly created H2 database. In
your activator console, run the h2-browser command. This will open up your browser with a populated form pointing to the location of your H2 database. (If your form is not populated, just point it to the filesystem path of the database files under your project's db directory.)
After you connect, you will get an interactive database web application that allows you to explore the schema and perform manual queries. This is useful for prototyping your SQL queries and debugging general data issues.
We will use the interactive browser to insert our first event.
Data access with Slick
We have the database defined, the classes modeled and the API to pass the data over HTTP. To interact with the database, we are going to be using Slick. Slick is just one of a few different data access libraries available for Scala.
To add Slick support to our project, we need to add the “play-slick” library to
our libraryDependencies sequence. It was already included transitively when we added our evolutions dependency, but it is worth adding it explicitly as well.
Creating a DAO
We are going to write our data access functions as members of the model class’s
companion object. Some people prefer to put all database related code in a
separate data access object. In a later post we will illustrate why that can be
beneficial. Given the code for the controllers, we know we need list, getByID, and create methods, so we will start by adding those now.
These signatures look similar to how our controller started out, which makes sense, as our controllers do not really perform much in the way of logic.
Creating a table definition
Inside our Event companion object, we will now create a table definition. This maps the table schema that we created earlier to the Event case class we have already created.
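The table definition ends up looking roughly like this; the table and column names are assumptions, and only the structural pieces (the ? on id and the <> mapping) follow directly from the discussion below:

```scala
import org.joda.time.DateTime
import slick.driver.H2Driver.api._
import com.semisafe.ticketoverlords.SlickMapping.jodaDateTimeMapping

class EventsTable(tag: Tag) extends Table[Event](tag, "EVENTS") {
  // id is the auto-incrementing primary key; the '?' lifts it to
  // Option to match the Option[Long] in the case class.
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def name = column[String]("NAME")
  def start = column[DateTime]("START")
  def end = column[DateTime]("END")

  // The default projection: <> maps the tuple to and from Event.
  def * = (id.?, name, start, end) <>
    ((Event.apply _).tupled, Event.unapply)
}

// Reusable query against the EVENTS table.
val events = TableQuery[EventsTable]
```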
For this table, we have created the column definitions by defining the native scala data type for each column, and indicating the primary key for the ID column. Note that since we are using H2, we are required to have the SQL column names in upper case. Your queries will not function correctly if they are lower case.
In addition, we created a TableQuery value for the table that represents a query against our defined events table. This will save us some typing in the future.
You will also notice that this code does not actually compile. The reason is
that we are using
org.joda.time.DateTime for our dates. Slick does not natively support DateTime, but does natively support java.sql.Timestamp. We can solve this by creating an implicit mapping function. Create a new object named SlickMapping:
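The mapping is essentially a MappedColumnType between DateTime and java.sql.Timestamp; a sketch (the import path assumes the H2 driver):

```scala
import java.sql.Timestamp
import org.joda.time.DateTime
import slick.driver.H2Driver.api._

object SlickMapping {
  // Teach Slick to store DateTime columns as SQL timestamps.
  implicit val jodaDateTimeMapping =
    MappedColumnType.base[DateTime, Timestamp](
      dateTime => new Timestamp(dateTime.getMillis),
      timestamp => new DateTime(timestamp.getTime))
}
```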
With this code, we can now
import SlickMapping.jodaDateTimeMapping in any
file where we are using a
DateTime type within a Slick table definition. Our
SlickMapping function will work fine for our purposes, but for extensive joda
Date and Time usage there is the slick-joda-mapper library by tototoshi.
The next portion to fill out is the definition of
* with respect to querying
our table. This is defined as a tuple of the columns we require to be returned
when we perform this query. It does not have to match the database directly,
just what we actually want to retrieve from the database. (This is useful if
there are large fields or deprecated columns that we do not want our code to
be concerned with).
Since the Event class expects id to be an Option, we append the ? modifier to the column, indicating to Slick that this is the case. The <> function gives us a MappedProjection, which allows us to transform this tuple directly into the Event case class that we intend to work with in general.
Retrieving all records
From here we can start to fill out our three data access functions. The list function is the most straightforward, as Slick supports this operation directly on the table query.
This does not actually work. We get a compile error that the expected type does not match the actual type.
type mismatch; found: scala.concurrent.Future[Seq[EventDAO.this.EventsTable#TableElementType]] (which expands to) scala.concurrent.Future[Seq[com.semisafe.ticketoverlords.Event]] required: Seq[com.semisafe.ticketoverlords.Event]
This fails to compile since
db.run() returns a
Future instead of the
raw value. This is a new default behavior in Slick 3.0. If you are familiar with other database access libraries (such as Anorm) you may be less accustomed to this behavior. (Or you have already been wrapping your queries in a Future.) Slick performs database requests asynchronously, which allows your code to avoid blocking on time-intensive database queries. This makes sense, since a database operation takes more time than normal computation, and it can fail and generally ruin your day if you only assume the best will happen. Since we do not really care about the contents of the result yet, we can simply pass the Future along to the caller and consider this method done.
Retrieving a subset of records
The getByID function is slightly more complex, as we are actually starting to interact with the data. As before, our result will actually be a Future of the value, in this case a Future[Option[Event]]. We start by filtering out data that does not match our predicate (that the supplied eventID matches the row's ID). After we are done applying our transformation, we call the headOption method so that the query returns only the first result, if there is any result at all.
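As a sketch, the two read methods end up along these lines; the dbConfig lookup shown is the play-slick 1.x style, and the names are assumptions rather than the post's exact code:

```scala
import play.api.Play
import play.api.db.slick.DatabaseConfigProvider
import slick.driver.JdbcProfile
import slick.driver.H2Driver.api._
import scala.concurrent.Future

object EventDAO {
  val dbConfig = DatabaseConfigProvider.get[JdbcProfile](Play.current)
  val events = TableQuery[EventsTable]

  // Run the whole-table query; db.run returns a Future of the rows.
  def list: Future[Seq[Event]] =
    dbConfig.db.run(events.result)

  // Filter on the primary key, then take at most one row.
  def getByID(eventID: Long): Future[Option[Event]] =
    dbConfig.db.run(
      events.filter(_.id === eventID).result.headOption)
}
```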
Adding new records
To add a record to our database, the syntax changes somewhat. We treat the table definition as a collection, with a modifier indicating the value we want returned after the record is inserted.
After we successfully insert the record, we have a
Future[Long]. That is fine
for the other functions, but we need the actual value to copy the original case
class with the updated ID field. When faced with a
Future containing a value
you need direct access to, you generally have two options:
- Use Await.result() to block until the Future completes
- Not ever do that. Not ever. No.
The answer is always #2 (except maybe in unit tests).
We can utilize map on the Future to transform it from a Future[Long] into a new Future[Event]. As you are typing this, the compiler may warn that you need an implicit execution context and recommend importing the generic one (scala.concurrent.ExecutionContext.Implicits.global). That will work, but the correct context to use is the one provided by Play.
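The transformation itself is plain standard-library Future code. Here is a self-contained sketch, where insertAndReturnId is a stub standing in for the real Slick insert (the generic execution context is fine for a standalone script like this; in Play, use the Play-provided one):

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

case class Event(id: Option[Long], name: String)

// Stub standing in for the Slick insert that yields the new row id.
def insertAndReturnId(event: Event): Future[Long] =
  Future.successful(42L)

// map turns the Future[Long] into a Future[Event] with the id set.
def create(event: Event): Future[Event] =
  insertAndReturnId(event).map { newId =>
    event.copy(id = Some(newId))
  }
```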
Tying it all together
We should now have everything required to read events from the database. We need
to update the
Events controller to utilize our new data access methods,
starting with the list action.
Once again we map the future, giving us the ability to transform one future into another. Unfortunately, this does not compile either.
We are not able to return the future directly as our action is defined, but this case happens so frequently that switching the Action to expect a Future[Result] is as simple as changing the method definition from Action to Action.async.
We can do this for the other two remaining methods in the controller and end up with an object like this:
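The controller ends up following this shape; this is a sketch of the pattern, with error codes and helper names following the surrounding discussion rather than the post's exact file:

```scala
package controllers

import play.api.mvc.{ Action, Controller }
import play.api.libs.json.Json
import play.api.libs.concurrent.Execution.Implicits.defaultContext
import scala.concurrent.Future
import com.semisafe.ticketoverlords.Event
import controllers.responses.{ SuccessResponse, ErrorResponse }

object Events extends Controller {
  val INVALID_JSON = 1000

  // Action.async lets the body return a Future[Result] directly.
  def list = Action.async {
    EventDAO.list.map { events =>
      Ok(Json.toJson(SuccessResponse(events)))
    }
  }

  def getByID(id: Long) = Action.async {
    EventDAO.getByID(id).map {
      case Some(event) => Ok(Json.toJson(SuccessResponse(event)))
      case None =>
        NotFound(Json.toJson(ErrorResponse(404, "Event not found")))
    }
  }

  // A custom body parser combines with async; the BadRequest branch
  // must be wrapped with Future.successful to stay a Future[Result].
  def create = Action.async(parse.json) { request =>
    request.body.validate[Event].fold(
      errors => Future.successful(
        BadRequest(Json.toJson(ErrorResponse(INVALID_JSON, "Invalid JSON")))),
      event => EventDAO.create(event).map { created =>
        Created(Json.toJson(SuccessResponse(created)))
      })
  }
}
```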
Notice that the create method can have a custom body parser as well as being async. Also, since the create method has an alternate return path with a BadRequest response, that result needs to be wrapped in a Future (with Future.successful).
We can use curl to verify that our endpoint works as expected. Notice the date
is in milliseconds since the epoch. This makes it easiest to interact with programmatically; you can use the date command to convert it to a readable version if you shave off the milliseconds.
We can also verify that creating an event works as expected using curl as well. Because, what could be better than a three day festival dedicated to meats on sticks?
And finally, ensure that the new event is retrievable by ID.
Shower, rinse, repeat
Now that we have our
Event type reading and writing to the database, we will
repeat the process for the
TicketBlock type. The only real difference between
the two objects is that our
TicketBlock contains a foreign key reference to Event.
Then update the TicketBlocks controller to reference the new data access methods.
We will use our new endpoint to add our first ticket block to the Kojella event, which corresponds to event ID 1.
Wait, you actually want people to order things?
Looking at all we have accomplished so far, we can be proud of the result. Until
we realize we never created a mechanism for a customer to place an order. This
would not be a very successful ticket store if people could not buy tickets.
Returning to the original data model diagram we can add an
Order class to
keep track of customer purchases of tickets from the respective ticket blocks.
We will create a new case class in the ticketoverlords package named Order. We need to make the same changes to the raw data model: an optional primary key, plus a foreign key to TicketBlock. In addition, it does not make sense for the timestamp to be generated by the client side, so we will make that an Option to be populated by the server before sending to the database.
Migrating the schema
Now that we have defined the class for orders, we can add it to the database schema. We could easily edit the existing SQL file, but this is a good time to make our first schema migration. Create a new file in conf/evolutions/default named 2.sql. The contents of an evolutions SQL migration have the same format as the initial one; just add the SQL required to change the state of the current schema to match the desired target.
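The second evolution only describes the delta, something like this; the column names are assumptions based on the Order discussion above:

```sql
# --- !Ups

CREATE TABLE ORDERS (
    ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    TICKET_BLOCK_ID BIGINT NOT NULL,
    CUSTOMER_NAME VARCHAR(255) NOT NULL,
    CUSTOMER_EMAIL VARCHAR(255) NOT NULL,
    SIZE INT NOT NULL,
    TIMESTAMP TIMESTAMP NOT NULL
);

# --- !Downs

DROP TABLE ORDERS;
```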
When we load up http://localhost:9000/ we are presented with a new notice that the database needs an evolution. Hit the apply button and the database schema will be updated to support our new Order class, but all of the old data is still there.
Create the same style companion object with one minor difference. Instead of
passing the timestamp value directly, the
create function overwrites it with a new DateTime holding the current date and time.
And create a controller…
Finally add the routes…
Now we can place our first ticket order using curl.
Querying a single value
We have successfully placed our first order, but there is still a problem with our implementation: tickets are a limited resource. When all of the tickets in a block sell out, they are gone. Our system does not account for this and allows a user to place an order regardless of how many existing orders have already been placed.
We can solve this by adding a method named
availability to our TicketBlock object to report the number of tickets still available in the block.
We also need to update our
Orders controller to check for available tickets
before creating the order. Once again,
NOT_ENOUGH_TICKETS is just an arbitrary
value (1001) in the
ErrorResponse object. Worth pointing out is that we use a
flatMap instead of a
map on the
availability response. We do this because we want the result to be a Future, but some of the code paths themselves return a Future. Mapping would produce a nested Future[Future] that would need to be flattened anyway. This way, the code is cleaner and all paths just return a single Future as expected.
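The map-versus-flatMap distinction can be seen in a self-contained sketch; availability and placeOrder below are stubs standing in for the real calls, and the generic execution context is fine for a standalone script:

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// Stubs standing in for the real data access calls.
def availability(ticketBlockID: Long): Future[Int] =
  Future.successful(50)

def placeOrder(size: Int): Future[String] =
  Future.successful(s"order placed for $size")

// Both branches produce a Future, so flatMap keeps the result a
// single Future[String] instead of a nested Future[Future[String]].
def create(ticketBlockID: Long, size: Int): Future[String] =
  availability(ticketBlockID).flatMap { available =>
    if (size <= available) placeOrder(size)
    else Future.successful("error: not enough tickets")
  }
```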
We can verify it succeeds when it is supposed to.
We can also verify it will fail when too many tickets are being ordered.
A healthy dose of skepticism
If you have worked with ORMs or any form of auto-generated code before, you
probably have a reasonable and well deserved distrust of anything that builds
code for you, which Slick does with SQL. Slick allows the generated SQL string to be retrieved directly from the query object. We will use this to inspect the SQL generated in the recently created availability method.
The next time you make a request you can check the logs and see the following SQL output displayed.
Not bad for auto-generated code, but based on a highly scientific method of running the code once while on an airplane and refusing to double-check my results, I feel that it could be faster if we used this query instead:
Luckily for us, Slick supports the ability to make raw SQL queries as well when needed.
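With the plain SQL interpolator, the hand-written query might look like this (the table and column names are assumptions):

```scala
import slick.driver.H2Driver.api._

// sql"..." builds a plain SQL statement with bound parameters;
// .as[Int] says the result set is a single integer column.
def availabilityQuery(ticketBlockID: Long) =
  sql"""
    SELECT tb.INITIAL_SIZE - COALESCE(SUM(o.SIZE), 0)
    FROM TICKET_BLOCKS tb
    LEFT JOIN ORDERS o ON o.TICKET_BLOCK_ID = tb.ID
    WHERE tb.ID = $ticketBlockID
    GROUP BY tb.INITIAL_SIZE
  """.as[Int]
```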
And you can curl the over-order request one more time to verify it works as intended.
Until next time…
That is all for this epic novel of a post. Looking back, there are a lot of things we were able to accomplish.
- We improved our API for reusability and consistency by creating a common response type
- We learned how to accept URL path parameters as well as POST body data
- Including validating JSON against a known data type
- We built our schema using evolutions
- and migrated it to a second version
- We learned how to use Slick to access the data in both Scala and in SQL
- We learned how to interact with and return Futures as our results
In the next post, we will cover more asynchronous responses and using Actors to close the giant concurrency bug we just introduced. You noticed the giant concurrency bug, right? ..right?