PROFESSIONAL JAVA FOR WEB APPLICATIONS (2014)

Part III Persisting Data with JPA and Hibernate ORM

Chapter 21 Using JPA in Spring Framework Repositories

IN THIS CHAPTER

·     Understanding Spring repositories and taking advantage of transactions

·     Setting up persistence in Spring Framework

·     Implementing and using JPA repositories

·     Converting data with DTOs and entities

WROX.COM CODE DOWNLOADS FOR THIS CHAPTER

You can find the wrox.com code downloads for this chapter at http://www.wrox.com/go/projavaforwebapps on the Download Code tab. The code for this chapter is divided into the following major examples:

·     Spring-JPA Project

·     Customer-Support-v15 Project

NEW MAVEN DEPENDENCIES FOR THIS CHAPTER

In addition to the Maven dependencies introduced in previous chapters, you also need the following Maven dependency:

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-orm</artifactId>
            <version>4.0.0</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
            <version>3.18.1-GA</version>
            <scope>runtime</scope>
        </dependency>

USING SPRING REPOSITORIES AND TRANSACTIONS

Before Object-Relational Mappers became so common and the Java Persistence API (JPA) was first released, Spring Framework’s org.springframework.jdbc.core.JdbcTemplate provided a standard, simplified way to persist and retrieve entities using JDBC in Spring Framework applications. In addition to making table-to-entity translation easier, the JdbcTemplate also recognized the various vendor-specific SQLExceptions and error codes that a JDBC driver could throw and translated them to members of the org.springframework.dao.DataAccessException hierarchy. For example, failure to insert a record due to a unique key conflict in Oracle, Microsoft SQL Server, MySQL, or any other supported database would result in the unambiguous org.springframework.dao.DuplicateKeyException. Today the JdbcTemplate still exists, but it is far outclassed by JPA and standalone O/RM tools.
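As a concrete illustration of this translation, consider the following hedged sketch. The repository class, table, and column names here are purely illustrative (they are not part of this book’s example projects); the point is that a unique key violation surfaces as the same Spring exception no matter which database vendor backs the DataSource.

```java
import org.springframework.dao.DuplicateKeyException;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserJdbcRepository
{
    private final JdbcTemplate jdbcTemplate;

    public UserJdbcRepository(JdbcTemplate jdbcTemplate)
    {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void add(String username)
    {
        try
        {
            this.jdbcTemplate.update(
                    "INSERT INTO UserPrincipal (Username) VALUES (?)", username
            );
        }
        catch(DuplicateKeyException e)
        {
            // Same exception type whether the database is MySQL, Oracle,
            // SQL Server, or any other supported vendor
            throw new IllegalArgumentException("Username already in use.", e);
        }
    }
}
```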

Nevertheless, there are some tasks that are still difficult or verbose to perform using JPA. For example, creating an EntityManager and managing transactions manually in every method accounted for the vast majority of the persistence code you wrote in the previous chapter. More important, transactions often include selecting and manipulating multiple types of entities. You need a way to create an EntityManager, start a transaction, and share it across multiple repositories during the scope of a single unit of work.

Thankfully, Spring Framework provides tools for just such a need and more. In previous Spring versions, the org.springframework.orm.jpa.JpaTemplate provided a mechanism similar to JdbcTemplate for using the persistence API within Spring Framework applications. However, JpaTemplate was deprecated in Spring 3.1 and removed in Spring 4.0 in favor of new support for using the persistence unit EntityManager directly. In this chapter, you explore how to configure and use Spring Framework transactions, shared EntityManagers, and JPA exception translation.

Understanding Transaction Scope

In Spring Framework, you control transactions using the org.springframework.transaction.PlatformTransactionManager. You define a PlatformTransactionManager appropriate to your environment and chosen persistence technology within the root application context. The methods in this interface are not important — you’ll never use it directly; you’ll only configure an implementation. Spring manages starting, committing, and rolling back transactions automatically on your behalf. This is accomplished using the @org.springframework.transaction.annotation.Transactional or @javax.transaction.Transactional annotations. You can annotate interfaces, classes, interface methods, and class methods with these annotations. Annotating an interface or class has the effect of annotating all the methods in that interface or class. Annotating a method in an interface or class that is also annotated has the effect of overriding the annotation on the interface or class.
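For example, these annotation rules might look like the following in a hypothetical service class (the class, entity, and method names are illustrative only; the ellipses stand in for method bodies):

```java
import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true) // applies to every method in the class
public class DefaultBookService
{
    // Inherits the class-level annotation: runs in a read-only transaction
    public Book getBook(long id) { ... }

    @Transactional // overrides the class-level annotation for this method only
    public void saveBook(Book book) { ... }
}
```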

Spring begins a transaction when it encounters an annotated method. The transaction’s scope covers the execution of that method, the execution of any methods that method invokes, and so on, until the method returns. Any managed resources that are covered by the configured PlatformTransactionManager and that you use during the transaction scope participate in the transaction. For example, if you use the org.springframework.jdbc.datasource.DataSourceTransactionManager, a Connection retrieved from the linked DataSource participates in the transaction automatically. Likewise, Java Message Service actions performed during a transaction managed by the org.springframework.jms.connection.JmsTransactionManager participate in that transaction.

The transaction terminates in one of two ways: Either the method completes execution normally and the transaction manager commits the transaction, or the method throws an exception and the transaction manager rolls the transaction back. By default, any java.lang.RuntimeException results in a rolled-back transaction. Using either of the @Transactional annotations, you can expand or restrict this filter to refine what triggers a transaction rollback.
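As a hedged sketch of refining that filter (the service and exception names here are hypothetical), rollbackFor adds exception types that trigger rollback, and noRollbackFor exempts types that would otherwise trigger it:

```java
import java.io.IOException;

import org.springframework.transaction.annotation.Transactional;

public class OrderService
{
    // Also roll back for the checked IOException, which by default would
    // commit because it is not a RuntimeException
    @Transactional(rollbackFor = IOException.class)
    public void importOrders(String fileName) { ... }

    // Commit even if this particular RuntimeException escapes the method
    @Transactional(noRollbackFor = OrderAlreadyShippedException.class)
    public void shipOrder(long orderId) { ... }
}
```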

There is, of course, a bit of magic involved in all this, and you must use these resources in a specific way for the transaction scope to apply. How this works varies from one PlatformTransactionManager implementation to another and from one resource type to another.

Using Threads for Transactions and Entity Managers

The transaction scope discussed previously is limited to the thread the transaction begins in. The transaction manager then links the transaction to managed resources used in the same thread during the life of the transaction. When using the Java Persistence API, the resource you work with is the EntityManager. It is the functional equivalent of Hibernate ORM’s Session and JDBC’s Connection. Normally, you would obtain an EntityManager from the EntityManagerFactory before beginning a transaction and performing JPA actions. However, this does not work with the Spring Framework model of managing transactions on your behalf.

The solution to this problem is the org.springframework.orm.jpa.support.SharedEntityManagerBean. When you configure JPA in Spring Framework, it creates a SharedEntityManagerBean that proxies the EntityManager interface. This proxy is then injected into your JPA repositories. When an EntityManager method is invoked on this proxy instance, the following happens in the background:

·     If the current thread already has a real EntityManager with an active transaction, it delegates the call to the method on that EntityManager.

·     Otherwise, Spring Framework obtains a new EntityManager from the EntityManagerFactory, starts a transaction, and binds both to the current thread. It then delegates the call to the method on that EntityManager.

When the transaction is either committed or rolled back, Spring unbinds the transaction and the EntityManager from the thread and then closes the EntityManager. Future @Transactional actions on the same thread (even within the same request) start the process over again, obtaining a new EntityManager from the factory and beginning a new transaction. This way, no two threads use an EntityManager at the same time, and a given thread has only one transaction and one EntityManager active at any given time.

Instead of annotating EntityManager fields in your repositories with @Inject or @Autowired, you use the @javax.persistence.PersistenceContext annotation to indicate that Spring should inject a proxy for the EntityManager.

    @PersistenceContext
    EntityManager entityManager;

Normal EntityManagers are not thread-safe, and they always require you to start a transaction before using them. However, obtaining a @PersistenceContext EntityManager from Spring Framework means that your repository can use this instance in multiple threads, and behind the scenes each thread has its own EntityManager instance with a transaction managed on your behalf.

Another advantage to using @PersistenceContext is that you can specify a persistence unit name for a given EntityManager instance. This way, you can define multiple persistence context configurations in your Spring application context and discriminate which EntityManager instance you intend to use in a repository by specifying its name.

public class FooRepository
{
    @PersistenceContext(unitName = "fooUnit")
    EntityManager entityManager;
    ...
}

public class BarRepository
{
    @PersistenceContext(unitName = "barUnit")
    EntityManager entityManager;
    ...
}

When using JPA in Spring Framework, you can use one of two PlatformTransactionManager implementations.

·     The most standard and common is org.springframework.orm.jpa.JpaTransactionManager, and it is what you use throughout this book. This implementation can manage the transactions only for EntityManager actions and only for a single persistence unit, but in many cases that is all you need.

·     If you want to use multiple persistence units in your application (as in the previous example), or manage transactions across multiple types of resources (such as EntityManagers and Java Message Service resources), you need the org.springframework.transaction.jta.JtaTransactionManager or one of its subclasses (WebLogicJtaTransactionManager on WebLogic servers; WebSphereUowTransactionManager on WebSphere servers). This implementation requires a Java Transaction API provider, so to use it you need a full Java EE application server or a complex standalone JTA configuration (as is the case with Tomcat).

JTA is an extensive topic and difficult to configure outside a full Java EE application server, so it is not covered further in this book. There are many JTA tutorials online, and your application server documentation should be a useful resource.

Taking Advantage of Exception Translation

In the days of direct JDBC usage, dealing with exceptions could be a nightmare. Every JDBC driver vendor had its own set of exceptions that extended java.sql.SQLException, and the error codes associated with those exceptions also varied based on the vendor. If you wanted to know in code precisely why an error occurred, you either restricted your code to a single vendor and figured out that vendor’s exception pattern or you tested for every vendor’s exceptions in your catch blocks. This process was tedious at best and often led to writing more code than you actually spent performing the work in the first place.

As O/RMs became more popular, some defined a useful exception hierarchy and some did not. The Java Persistence API defines a modest exception hierarchy starting at javax.persistence.PersistenceException, but even it is missing some key features (like an exception to indicate that a unique key violation occurred). Then in the realm of NoSQL tools, each client library defines its own set of checked or unchecked exceptions, and none of them inherit from a common persistence exception, making the problem even more daunting.

Spring Framework and its associated data tools (such as Spring Data NoSQL) solve this problem by defining a thorough hierarchy of persistence exceptions inheriting from org.springframework.dao.DataAccessException. This group contains a lot of exceptions and this book does not cover them all. Suffice it to say whether you use JDBC, Hibernate or another O/RM directly, JPA, Java Data Objects, or NoSQL, you can look for exceptions in this hierarchy instead of the technology-specific exceptions.

There are two key concepts to achieving this exception translation in your applications. First, you must configure one or more org.springframework.dao.support.PersistenceExceptionTranslator implementations in your root application context. There are different PersistenceExceptionTranslator implementations for different technologies. If you use multiple persistence technologies in your application — such as JPA and NoSQL — you need to configure an implementation that handles them all or configure multiple implementations. (Spring automatically chains them using the org.springframework.dao.support.ChainedPersistenceExceptionTranslator.)

Spring Framework has a variety of PersistenceExceptionTranslators to handle different persistence technologies. The org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean that you configure in the next section is also a PersistenceExceptionTranslator capable of translating JPA exceptions and the underlying JDBC error codes into DataAccessExceptions. By merely defining this bean, you have configured exception translation for JPA. If you use multiple persistence technologies, you can configure an implementation for each technology.

After you configure exception translation, you must next annotate your repositories with @Repository. This tells Spring that the annotated bean is eligible for exception translation using the configured PersistenceExceptionTranslators. If the repository methods throw any persistence exceptions, the PersistenceExceptionTranslators translate those exceptions as appropriate. Note that this means that you cannot catch translated DataAccessExceptions from within the repository itself, because translation has not taken place yet. You can catch DataAccessExceptions only in code that calls repository methods.
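The following hypothetical sketch illustrates where to catch the translated exceptions. The service, repository, entity, and exception names are all illustrative (UserRepository is assumed to be a @Repository-annotated bean; UsernameAlreadyTakenException is an invented application exception), not part of this book’s projects:

```java
import javax.inject.Inject;

import org.springframework.dao.DuplicateKeyException;

public class RegistrationService
{
    @Inject UserRepository userRepository; // a @Repository-annotated bean

    public void register(UserPrincipal user)
    {
        try
        {
            this.userRepository.save(user);
        }
        catch(DuplicateKeyException e)
        {
            // Translation happened at the repository boundary; the original
            // vendor-specific exception is preserved as this exception's cause
            throw new UsernameAlreadyTakenException(user.getUsername());
        }
    }
}
```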

NOTE If you look at all the PersistenceExceptionTranslator implementations that Spring Framework and its tools provide, you notice that there is no plain JDBC exception translator. When you use JDBC in Spring repositories, it is expected that you’ll use the JdbcTemplate. Because the JdbcTemplate is a Spring Framework tool and not a third-party class, it has persistence exception translation built into it.

CONFIGURING PERSISTENCE IN SPRING FRAMEWORK

Configuring JPA in Spring is actually very straightforward, but you need to understand several options and this section explains them to you. Generally speaking, the Spring JPA configuration process has three parts:

·     Create or look up a DataSource.

·     Create or look up a persistence unit and configure Spring to inject it in your repositories.

·     Set up transaction management so that @Transactional methods are properly handled.

You can follow along in the Spring-JPA project available for download from the wrox.com code download site. This project starts with the standard Spring bootstrap and configuration that you used at the end of Part II.

Looking Up a Data Source

The first thing you need to do is make a DataSource available to your application. You have a few different ways to do this. For example, if you need to simply test something quickly, use Spring’s org.springframework.jdbc.datasource.DriverManagerDataSource to create a DataSource on demand:

    @Bean
    public DataSource springJpaDataSource()
    {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setUrl("jdbc:mysql://localhost/SpringJpa");
        dataSource.setUsername("tomcatUser");
        dataSource.setPassword("password1234");
        return dataSource;
    }

However, this creates a simple DataSource that returns single-use Connections. Because it does not provide connection pooling, it really should never be used in a production environment of any type. In a standalone application, you could use Apache Commons DBCP and Apache Commons Pool to create and return a pooled DataSource. Usually, you would read a properties file with the URL, username, and password information so that changing the connection details won’t require a recompilation of code. On an application server or Servlet container, however, the easiest thing to do is to define a DataSource on the server (as you did in the previous chapter) and look that DataSource up from your RootApplicationContext. This is what the Spring-JPA project does.

    @Bean
    public DataSource springJpaDataSource()
    {
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        return lookup.getDataSource("jdbc/SpringJpa");
    }
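For comparison, in a standalone application the pooled alternative mentioned earlier might look like the following sketch, using Apache Commons DBCP’s BasicDataSource and reading the connection details from a properties file so they can change without recompiling. The file name and property keys here are illustrative assumptions, not part of the Spring-JPA project:

```java
@Bean
public DataSource springJpaDataSource() throws IOException
{
    Properties properties = new Properties();
    try(InputStream stream =
            Files.newInputStream(Paths.get("database.properties")))
    {
        properties.load(stream);
    }

    // org.apache.commons.dbcp.BasicDataSource provides connection pooling
    BasicDataSource dataSource = new BasicDataSource();
    dataSource.setDriverClassName(properties.getProperty("jdbc.driverClass"));
    dataSource.setUrl(properties.getProperty("jdbc.url"));
    dataSource.setUsername(properties.getProperty("jdbc.username"));
    dataSource.setPassword(properties.getProperty("jdbc.password"));
    dataSource.setMaxActive(20); // analogous to maxActive in Tomcat's context.xml
    return dataSource;
}
```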

NOTE Connection pooling is similar to the thread pooling concept you learned about in Part II. A connection pool contains multiple, idle connections waiting to be used. These connections are borrowed from the pool and then returned and reset when they are no longer needed. This way, the overhead of constantly opening and closing connections is avoided. A DataSource configured in your application server or Servlet container uses connection pooling.

While you’re working on this, don’t forget to define the DataSource resource in Tomcat’s context.xml configuration file. (You’ll learn about the defaultTransactionIsolation attribute in the next section.)

    <Resource name="jdbc/SpringJpa" type="javax.sql.DataSource"

              maxActive="20" maxIdle="5" maxWait="10000"

              username="tomcatUser" password="password1234"

              driverClassName="com.mysql.jdbc.Driver"

              defaultTransactionIsolation="READ_COMMITTED"

              url="jdbc:mysql://localhost/SpringJpa" />

In this sample application, you use a standard DataSource that cannot participate in JTA transactions. This is sufficient for the examples in this book, but if you want a transaction to span multiple DataSources or multiple technologies (such as JMS), you must define and look up a JTA-capable DataSource. This is possible, though very difficult, in Tomcat. Your best bet is to use a full Java EE application server for this purpose.

Creating a Persistence Unit in Code

Perhaps the most important thing you must do when configuring Spring Framework’s JPA support is setting up your persistence unit. You already explored creating a persistence unit in the /WEB-INF/classes/META-INF/persistence.xml file in Chapter 20, but you are not restricted to this technique when using Spring Framework.

To properly set up JPA, you need to configure a bean that extends org.springframework.orm.jpa.AbstractEntityManagerFactoryBean. Beans of this type can create SharedEntityManagerBeans that manage the thread-bound, transaction-linked EntityManagers in your repositories.

The simplest approach is to configure an org.springframework.orm.jpa.LocalEntityManagerFactoryBean. This bean requires that /META-INF/persistence.xml exists and reads the persistence unit configuration settings from that file. When you configure the LocalEntityManagerFactoryBean, you specify the name of the persistence unit that it should use.

    @Bean
    public LocalEntityManagerFactoryBean entityManagerFactoryBean()
    {
        LocalEntityManagerFactoryBean factory =
                new LocalEntityManagerFactoryBean();
        factory.setPersistenceUnitName("SpringJpa");
        factory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return factory;
    }

This is perfectly sufficient, but it is quite inflexible as far as solutions go. A more useful implementation is org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean, which does not require /META-INF/persistence.xml to exist. You could, for example, place your persistence file in a package instead of META-INF and then tell LocalContainerEntityManagerFactoryBean where the persistence file lives.

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean()
    {
        LocalContainerEntityManagerFactoryBean factory =
                new LocalContainerEntityManagerFactoryBean();
        factory.setPersistenceXmlLocation(
                "classpath:com/wrox/config/persistence.xml"
        );
        factory.setPersistenceUnitName("SpringJpa");
        factory.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        factory.setDataSource(this.springJpaDataSource());
        return factory;
    }

Perhaps most important, you don’t even need a persistence file with LocalContainerEntityManagerFactoryBean. You can omit the persistence XML location and persistence unit name entirely and create the persistence unit configuration using pure Java code. This is how the RootContextConfiguration in the Spring-JPA project configures its persistence unit.

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactoryBean()
    {
        Map<String, Object> properties = new Hashtable<>();
        properties.put("javax.persistence.schema-generation.database.action",
                "none");
        HibernateJpaVendorAdapter adapter = new HibernateJpaVendorAdapter();
        adapter.setDatabasePlatform("org.hibernate.dialect.MySQL5InnoDBDialect");
        LocalContainerEntityManagerFactoryBean factory =
                new LocalContainerEntityManagerFactoryBean();
        factory.setJpaVendorAdapter(adapter);
        factory.setDataSource(this.springJpaDataSource());
        factory.setPackagesToScan("com.wrox.site.entities");
        factory.setSharedCacheMode(SharedCacheMode.ENABLE_SELECTIVE);
        factory.setValidationMode(ValidationMode.NONE);
        factory.setJpaPropertyMap(properties);
        return factory;
    }

Examine this code carefully, because a lot of things are happening. First, it creates a map to hold JPA configuration properties — in this case, the schema generation property. Next, it creates an org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter and sets it as the adapter for the factory. This is a special Spring pattern that does a few things:

·     It tells the LocalContainerEntityManagerFactoryBean which PersistenceProvider to use (org.hibernate.jpa.HibernatePersistenceProvider). This replaces <provider> from persistence.xml.

·     It tells the SharedEntityManagerBean which extended EntityManagerFactory interface it will proxy (org.hibernate.jpa.HibernateEntityManagerFactory) and which extended EntityManager interface it will proxy (org.hibernate.jpa.HibernateEntityManager).

·     It tells Spring how to properly translate the Hibernate ORM-specific JPA exceptions to DataAccessExceptions.

·     It informs transaction management about any extra steps that should be used when beginning and ending transactions to deal with any special issues that might arise.

·     It configures Hibernate ORM to use the correct dialect for the database you use. Hibernate attempts to detect the proper dialect to use, but for MySQL it always selects org.hibernate.dialect.MySQLDialect. This legacy dialect is only for MySQL 4.x and should not be used with MySQL 5.x. It’s safest to always manually specify this value.

There are more than 50 different Hibernate ORM dialects. Table 21-1 lists the most common Hibernate ORM dialects and which database versions they support.

TABLE 21-1: Common Hibernate Dialects

DATABASE NAME & VERSION(S)           DIALECT CLASS
H2 Database Engine                   org.hibernate.dialect.H2Dialect
HyperSQL 1.8+, 2.x+                  org.hibernate.dialect.HSQLDialect
MySQL 4.x Generic                    org.hibernate.dialect.MySQLDialect
MySQL 4.x MyISAM Engine              org.hibernate.dialect.MySQLMyISAMDialect
MySQL 4.x InnoDB Engine              org.hibernate.dialect.MySQLInnoDBDialect
MySQL 5.x+ Generic + MyISAM          org.hibernate.dialect.MySQL5Dialect
MySQL 5.x+ InnoDB Engine             org.hibernate.dialect.MySQL5InnoDBDialect
Oracle Database 8i                   org.hibernate.dialect.Oracle8iDialect
Oracle Database 9i                   org.hibernate.dialect.Oracle9iDialect
Oracle Database 10g, 11g+            org.hibernate.dialect.Oracle10gDialect
PostgreSQL 8.1                       org.hibernate.dialect.PostgreSQL81Dialect
PostgreSQL 8.2+                      org.hibernate.dialect.PostgreSQL82Dialect
Microsoft SQL Server 2000            org.hibernate.dialect.SQLServerDialect
Microsoft SQL Server 2005            org.hibernate.dialect.SQLServer2005Dialect
Microsoft SQL Server 2008, 2012+     org.hibernate.dialect.SQLServer2008Dialect
Sybase 10                            org.hibernate.dialect.SybaseDialect
Sybase 11.9.2+                       org.hibernate.dialect.Sybase11Dialect
Sybase ASE 15+                       org.hibernate.dialect.SybaseASE15Dialect
Sybase Anywhere 8+                   org.hibernate.dialect.SybaseAnywhereDialect

HibernateJpaVendorAdapter is just one implementation of org.springframework.orm.jpa.JpaVendorAdapter. Spring also has adapters for EclipseLink and OpenJPA that help configure Spring correctly for those JPA vendors, and you can easily create your own implementation to support another vendor if necessary. Note that the LocalEntityManagerFactoryBean and LocalContainerEntityManagerFactoryBean can function without a JpaVendorAdapter implementation, but they may not function as well as they could. It’s best to always supply this.

NOTE Instead of calling setDatabasePlatform on the adapter, you could call setDatabase and pass it one of the org.springframework.orm.jpa.vendor.Database enum constants. Then the HibernateJpaVendorAdapter would select the Hibernate dialect class for you. However, it also selects only MySQLDialect for MySQL databases, selects the SQL Server 2000 dialect for SQL Server databases, and so on. Although this is a convenient tool, you’re better off telling Hibernate exactly which dialect to use so that it works the best it possibly can. Otherwise, you might not get the correct dialect for your database.

After configuring the vendor adapter, the code sets the DataSource to the one you configured earlier. This replaces the <non-jta-data-source> element in persistence.xml and also sets the transaction-type for the persistence unit to RESOURCE_LOCAL. (Calling setJtaDataSource is the equivalent of <jta-data-source> and setting transaction-type to JTA.)

It then tells the LocalContainerEntityManagerFactoryBean to scan the package com.wrox.site.entities for entity beans. This is equivalent to <exclude-unlisted-classes>true</exclude-unlisted-classes> and listing out each entity class with <class>, except that Spring detects and registers the entity classes for you. This starts up significantly faster than <exclude-unlisted-classes>false</exclude-unlisted-classes> because Spring limits its scanning to the package or packages you specify, whereas the JPA provider would have to scan many more packages and classes. The configuration then sets the shared cache mode (equivalent to <shared-cache-mode>), the validation mode (equivalent to <validation-mode>), and the JPA properties.

NOTE In general, you really shouldn’t need to understand much about coding with Hibernate ORM as long as you use JPA. However, you do need to know about some things, such as dialects. For your reference, you can view the Hibernate ORM 4.3 API documentation at http://docs.jboss.org/hibernate/orm/4.3/javadocs/. Before selecting a dialect for your particular database, you should view the API documentation for org.hibernate.dialect.Dialect and its subclasses.

Setting Up Transaction Management

Configuring transaction management is the last step to setting up JPA in Spring Framework. Though it is not especially tricky, there are some things to watch out for. You initially activate transaction management and @Transactional method interception by annotating the RootContextConfiguration with @org.springframework.transaction.annotation.EnableTransactionManagement. Like the @EnableAsync annotation that you are already using, @EnableTransactionManagement results in Spring dynamically advising your @Transactional bean methods. However, you must do this with care.

First, you must configure @EnableAsync and @EnableTransactionManagement with the same AdviceMode (PROXY or ASPECTJ) and the same proxyTargetClass value. As discussed in Chapter 14, the easiest approach is to use AdviceMode.PROXY with proxyTargetClass set to false.

A NOTE ABOUT SPRING FRAMEWORK METHOD ADVICE

Spring Framework can advise your methods using AspectJ pointcuts or proxies. Using AdviceMode.PROXY enables proxies, meaning proxy classes wrap around advised methods to execute advice code before and after the methods as necessary. You can create these proxy methods using dynamic proxies (proxyTargetClass = false), which are part of the standard Java SE API. This is the preferred, best-practice proxy technique. However, dynamic proxies can advise only methods that are specified in an interface, and they apply only if the consuming code uses the interface instead of the actual class. If you need to advise public methods that are only part of the class and not part of an interface, you must use CGLIB proxies (proxyTargetClass = true). The important downside to remember about using CGLIB proxies is that your bean constructors execute twice, not once, so plan accordingly.

When you use dynamic proxies, the method advice they provide applies only when another class executes methods on the Spring-managed bean instance. If a method invoked on an instance of FooBean executes another method on the same instance ofFooBean (with or without using this), the method advice does not execute. (See org.springframework.aop.framework.AopContext for an ugly way to do this, which you should avoid whenever possible.) CGLIB proxies override every non-final method on a class, so method advice is applied when FooBean calls another FooBean method (with or without using this). However, Spring cannot create CGLIB proxies for final classes.
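The self-invocation limitation is easy to trip over. In the following hypothetical sketch (the class, interface, and method names are illustrative only), the internal call bypasses the dynamic proxy entirely, so its @Transactional annotation is silently ignored:

```java
import org.springframework.transaction.annotation.Transactional;

public class ReportService implements ReportOperations
{
    @Transactional
    public void generateReports()
    {
        ...
        this.recordAuditEntry(); // direct call on this: advice does NOT run
    }

    @Transactional
    public void recordAuditEntry() { ... }
}
```

A caller that invokes recordAuditEntry() through the Spring-managed proxy gets transactional behavior as expected; only the internal call skips the advice.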

If these two options still do not meet your needs, you can use AspectJ pointcuts with load-time weaving enabled. Load-time weaving actually modifies the compiled bytecode of your classes as they are loaded, adding the method advice directly to the bytecode. This works on final and non-final classes and methods, and on methods called from within the same object. An object doesn’t even have to be a Spring-managed bean for method advice to apply! (This is very useful for legacy applications.) You enable this by setting the advice mode to AdviceMode.ASPECTJ, decorating your root context configuration class with @EnableLoadTimeWeaving(aspectjWeaving=EnableLoadTimeWeaving.AspectJWeaving.ENABLED), and adding the following Maven dependency to your project:

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-aspects</artifactId>
            <version>4.0.2.RELEASE</version>
            <scope>runtime</scope>
        </dependency>

You explore using AspectJ load-time weaving more in Chapter 24. However you choose to configure method advice in your application, you must configure it the same way for every feature that uses it. For example, you must configure it the same way for @EnableAsync as for @EnableTransactionManagement. If you configure them differently, Spring picks one configuration and uses it for both, which may cause unexpected results.

Also, it’s important to consider the order of execution of these two proxies. If the transaction management proxy executes before the asynchronous proxy, then actions associated with setting up an asynchronous method are included in transaction management, and thread-binding the transaction may not work properly. The RootContextConfiguration class uses the order attribute of these two annotations to ensure that the proxies execute in the correct order (asynchronous operations proxy before the transaction management proxy).

@Configuration
@EnableScheduling
@EnableAsync(
        mode = AdviceMode.PROXY, proxyTargetClass = false,
        order = Ordered.HIGHEST_PRECEDENCE
)
@EnableTransactionManagement(
        mode = AdviceMode.PROXY, proxyTargetClass = false,
        order = Ordered.LOWEST_PRECEDENCE
)
@ComponentScan(
        basePackages = "com.wrox.site",
        excludeFilters =
        @ComponentScan.Filter({Controller.class, ControllerAdvice.class})
)
public class RootContextConfiguration implements
...

Any time you use @EnableTransactionManagement you must supply a default implementation of the PlatformTransactionManager. For JPA resources you should use the org.springframework.orm.jpa.JpaTransactionManager. Its constructor binds to an EntityManagerFactory, so you should use the LocalContainerEntityManagerFactoryBean you created earlier to construct the JpaTransactionManager.

    @Bean

    public PlatformTransactionManager jpaTransactionManager()

    {

        return new JpaTransactionManager(

                this.entityManagerFactoryBean().getObject()

        );

    }

By default, transaction management looks for a bean named txManager and then falls back to the first bean it can find that implements PlatformTransactionManager. However, it’s possible to have multiple PlatformTransactionManagers in an application context. In this case, Spring may choose the wrong default transaction manager to handle @Transactional methods. To protect against this, your configuration class can implement TransactionManagementConfigurer, and then Spring always uses the manager returned from annotationDrivenTransactionManager as the default manager for @Transactional methods.

@Configuration

@EnableScheduling

@EnableAsync(

        mode = AdviceMode.PROXY, proxyTargetClass = false,

        order = Ordered.HIGHEST_PRECEDENCE

)

@EnableTransactionManagement(

        mode = AdviceMode.PROXY, proxyTargetClass = false,

        order = Ordered.LOWEST_PRECEDENCE

)

@ComponentScan(

        basePackages = "com.wrox.site",

        excludeFilters =

        @ComponentScan.Filter({Controller.class, ControllerAdvice.class})

)

public class RootContextConfiguration implements

        AsyncConfigurer, SchedulingConfigurer, TransactionManagementConfigurer

{

    ...

    @Bean

    public PlatformTransactionManager jpaTransactionManager()

    {

        return new JpaTransactionManager(

                this.entityManagerFactoryBean().getObject()

        );

    }

    ...

    @Override

    public PlatformTransactionManager annotationDrivenTransactionManager()

    {

        return this.jpaTransactionManager();

    }

    ...

}

Notice that this method simply calls the @Bean jpaTransactionManager method so that the chosen bean name (jpaTransactionManager) is preserved.

CREATING AND USING JPA REPOSITORIES

Using JPA in Spring repositories is easier than using JPA on its own because you don’t have to deal with transactions and the EntityManagerFactory. This section shows you the simple steps involved in doing this as well as how to demarcate transaction boundaries in your services. You also create a generic repository for handling many different types of entities using common code. The Spring-JPA project uses the Book, Author, and Publisher entities you created in the previous chapter, minus the previewPdf property of the Book, and uses the same database creation script, except that the database name is SpringJpa.

Injecting the Persistence Unit

You’re going to have three repositories — one for each entity. This is a common and useful pattern, but by no means required; you could, for example, create one repository for several related entities. But using one repository per entity enables you to create generic repository code, something you learn about later in this section. The repositories each implement an interface, which you explore in more detail later in the section.

public interface AuthorRepository { ... }

public interface BookRepository { ... }

public interface PublisherRepository { ... }

To perform JPA operations in the repository implementations, you need an EntityManager instance. As discussed earlier, all you need to do is declare an EntityManager field and mark it with the @PersistenceContext JPA annotation. You don’t need to annotate the field @Inject or @Autowired; @PersistenceContext serves this purpose as well. Because @PersistenceContext predates Spring Framework’s support for JPA, there is no Spring equivalent for @PersistenceContext, so this annotation is all you need to worry about.

@Repository

public class DefaultAuthorRepository implements AuthorRepository

{

    @PersistenceContext EntityManager entityManager;

    ...

}

@Repository

public class DefaultBookRepository implements BookRepository

{

    @PersistenceContext EntityManager entityManager;

    ...

}

@Repository

public class DefaultPublisherRepository implements PublisherRepository

{

    @PersistenceContext EntityManager entityManager;

    ...

}

You’ll recall from earlier in the chapter that the EntityManager injected here isn’t the one provided by the JPA vendor (Hibernate ORM). Instead, it’s a proxy instance for the real thing and automatically delegates to the transaction-linked, thread-bound EntityManager created for a previous method invocation in the same transaction and thread (or creates a new EntityManager and transaction if one has not been created yet).
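For example, two repository calls made inside one transactional service method see the same thread-bound persistence context through their injected proxies. This is a hypothetical sketch, not code from the project (the CatalogService name and addAuthorAndBook method are invented for illustration):

```java
// Hypothetical illustration: both repositories' injected EntityManager proxies
// delegate to the same thread-bound EntityManager while this transaction runs.
@Service
public class CatalogService
{
    @Inject AuthorRepository authorRepository;
    @Inject BookRepository bookRepository;

    @Transactional
    public void addAuthorAndBook(Author author, Book book)
    {
        this.authorRepository.add(author); // EntityManager created and bound here
        this.bookRepository.add(book);     // same EntityManager, same transaction
    }
}
```

Because both add calls share one persistence context, either both inserts commit or, if an exception rolls the transaction back, neither does.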

Implementing Standard CRUD Operations

For now your repositories need to perform only the simplest operations — returning single entities and lists of entities, adding entities, updating entities, and deleting entities. This is easily represented in the AuthorRepository interface.

public interface AuthorRepository

{

    Iterable<Author> getAll();

    Author get(long id);

    void add(Author author);

    void update(Author author);

    void delete(Author author);

    void delete(long id);

}

The implementation for these methods in Listing 21-1 takes a slightly different approach than you used in the EntityServlet in the previous chapter. Instead of the newer criteria API, it uses the Java Persistence Query Language (JPQL) to look up entities. As you can tell, JPQL is very similar to ANSI SQL; however, there are of course some differences. For example, the identifiers in the SELECT clause identify which entities are returned, not which columns are returned as in a SQL query. A JPQL query can use multiple entities in a WHERE clause but return only one of them in the SELECT clause. Also, the identifiers in the FROM clause identify the entity names, not the table names as in a SQL query.
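As an illustration of that last point (a hypothetical query, assuming — as in this project — that Book stores its author’s name in an author property), the following JPQL mentions two entities but selects and returns only Books:

```java
// Hypothetical lookup: Author appears in the FROM and WHERE clauses,
// but only Book entities appear in the SELECT clause and are returned.
List<Book> books = entityManager.createQuery(
        "SELECT b FROM Book b, Author a " +
        "WHERE b.author = a.name AND a.id = :authorId", Book.class
).setParameter("authorId", authorId).getResultList();
```

Note that Book and Author here are entity names, not table names, and b and a range over entity instances, not rows.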

LISTING 21-1: DefaultAuthorRepository.java

@Repository

public class DefaultAuthorRepository implements AuthorRepository

{

    @PersistenceContext EntityManager entityManager;

    @Override

    public Iterable<Author> getAll()

    {

        return this.entityManager.createQuery(

                "SELECT a FROM Author a ORDER BY a.name", Author.class

        ).getResultList();

    }

    @Override

    public Author get(long id)

    {

        return this.entityManager.createQuery(

                "SELECT a FROM Author a WHERE a.id = :id", Author.class

        ).setParameter("id", id).getSingleResult();

    }

    @Override

    public void add(Author author)

    {

        this.entityManager.persist(author);

    }

    @Override

    public void update(Author author)

    {

        this.entityManager.merge(author);

    }

    @Override

    public void delete(Author author)

    {

        this.entityManager.remove(author);

    }

    @Override

    public void delete(long id)

    {

        this.entityManager.createQuery(

                "DELETE FROM Author a WHERE a.id = :id"

        ).setParameter("id", id).executeUpdate();

    }

}

Creating a Base Repository for All Your Entities

Think for a minute about the AuthorRepository and the methods it specified, and then think about the methods that the BookRepository and PublisherRepository should specify. You should immediately notice a similarity:

public interface BookRepository

{

    Iterable<Book> getAll();

    Book get(long id);

    void add(Book book);

    void update(Book book);

    void delete(Book book);

    void delete(long id);

}

public interface PublisherRepository

{

    Iterable<Publisher> getAll();

    Publisher get(long id);

    void add(Publisher publisher);

    void update(Publisher publisher);

    void delete(Publisher publisher);

    void delete(long id);

}

If you consider the implementations for these methods, you should quickly realize that they are nearly identical to the DefaultAuthorRepository class. You’d be right to wonder whether there’s a way to write code that can take care of all your entities. To be most useful, such a repository needs to use generics. Following best practices and starting with an interface, consider what it might look like.

@Validated

public interface GenericRepository<I extends Serializable, E extends Serializable>

{

    @NotNull

    Iterable<E> getAll();

    E get(@NotNull I id);

    void add(@NotNull E entity);

    void update(@NotNull E entity);

    void delete(@NotNull E entity);

    void deleteById(@NotNull I id);

}

The generic type variable I represents the type of the surrogate key for your entities — usually a long, but not always, which is why this is a variable. E represents the entity type. Using the Bean Validation @NotNull constraint is a great way to tell repository users and implementers that null parameters and null returned lists are not tolerated. Due to the ambiguity of I and E here (they could, in theory, be the same types), the compiler cannot distinguish between a method parameter of type I and a method parameter of type E. Therefore, the delete and deleteById methods must have different names.

NOTE JPA does not strictly require entities or their surrogate keys to be Serializable, but this is a best-practice restriction to enforce.
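Remember that the @NotNull constraints on a @Validated interface are enforced at run time only if Spring’s method validation is enabled. A sketch of the bean most configurations register follows; MethodValidationPostProcessor is Spring’s real class, but the localValidatorFactoryBean method it reuses is an assumption about your configuration class, not code from this chapter:

```java
@Bean
public MethodValidationPostProcessor methodValidationPostProcessor()
{
    MethodValidationPostProcessor processor = new MethodValidationPostProcessor();
    // Reuse the application's Bean Validation Validator
    // (assumed to be configured elsewhere in this configuration class)
    processor.setValidator(this.localValidatorFactoryBean());
    return processor;
}
```

With this post-processor in place, calling a repository method with a null argument fails with a ConstraintViolationException before the method body ever runs.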

Now your repository interfaces need only to extend this parent interface and specify the type variables applicable to the entities they provide access to.

public interface AuthorRepository extends GenericRepository<Long, Author>

{

}

public interface BookRepository extends GenericRepository<Long, Book>

{

    Book getByIsbn(String isbn);

}

public interface PublisherRepository extends GenericRepository<Long, Publisher>

{

}

The AuthorRepository and PublisherRepository interfaces don’t define any methods because they don’t need to. The methods are all defined in the GenericRepository interface, and the type variable values make the method argument types and return types concrete. TheBookRepository defines an additional method to look books up by ISBN — a common need.

The next logical step is to define common implementations for all the GenericRepository methods. As of Java 8, you could try using default methods:

@Validated

public interface GenericRepository<I extends Serializable, E extends Serializable>

{

    @NotNull

    default Iterable<E> getAll()

    {

        ...

    }

    ...

}

However, this is a bad choice for several reasons:

·     Default methods are not designed for the purpose of replacing abstract classes. They were created so that you can make improvements to interfaces without breaking existing implementations. For example, Java 8 collections were improved using default methods without breaking thousands of existing Collection, List, Set, Map, Iterable, and Iterator implementations. Default methods have a different semantic meaning than concrete methods in abstract classes and should not be used for this purpose.

·     You need an injected EntityManager to execute code in these methods, and you cannot obtain that in an interface.

·     You need access to the type (Class) of I and E to perform safe JPA query operations. The best way to obtain this (and the only way if you want to make these values final) is in a constructor, which an interface cannot have.

A more appropriate approach is to use a generic base class, and this can satisfy all your needs. You have a few things to consider when deciding how to approach this. The first is how to determine the Class instance for the type variables. The simplest approach is to require them in the constructor:

public abstract class

        GenericBaseRepository<I extends Serializable, E extends Serializable>

    implements GenericRepository<I, E>

{

    protected final Class<I> idClass;

    protected final Class<E> entityClass;

    public GenericBaseRepository(Class<I> idClass, Class<E> entityClass)

    {

        this.idClass = idClass;

        this.entityClass = entityClass;

    }

    ...

}

This would work, but it seems silly to require these constructor arguments when the information is already there in the implementation’s type variable arguments. Fortunately, you can access the arguments to these type variables, though not without some effort.

public abstract class

        GenericBaseRepository<I extends Serializable, E extends Serializable>

    implements GenericRepository<I, E>

{

    protected final Class<I> idClass;

    protected final Class<E> entityClass;

    @SuppressWarnings("unchecked")

    public GenericBaseRepository()

    {

        Type genericSuperclass = this.getClass().getGenericSuperclass();

        while(!(genericSuperclass instanceof ParameterizedType))

        {

            if(!(genericSuperclass instanceof Class))

                throw new IllegalStateException("Unable to determine type " +

                        "arguments because generic superclass neither " +

                        "parameterized type nor class.");

            if(genericSuperclass == GenericBaseRepository.class)

                throw new IllegalStateException("Unable to determine type " +

                        "arguments because no parameterized generic superclass " +

                        "found.");

            genericSuperclass = ((Class)genericSuperclass).getGenericSuperclass();

        }

        ParameterizedType type = (ParameterizedType)genericSuperclass;

        Type[] arguments = type.getActualTypeArguments();

        this.idClass = (Class<I>)arguments[0];

        this.entityClass = (Class<E>)arguments[1];

    }

}

This constructor may be confusing, so take a look at it piece by piece. When a class extends GenericBaseRepository, that class’s superclass is GenericBaseRepository. More important, GenericBaseRepository is its generic superclass and thus should be a ParameterizedType with type arguments. Now you could just call ((ParameterizedType) this.getClass().getGenericSuperclass()).getActualTypeArguments(), but that works only if every repository inherits directly from GenericBaseRepository and is final. Not only is this restriction not ideal, but it also won’t work with Spring Framework’s transaction proxying and exception translation. So the loop walks up the inheritance tree, inspecting each type it encounters until it finds a ParameterizedType. If it encounters a type that isn’t a Class, it can’t walk the tree further. If it encounters its own type, it has walked the tree too far. Both conditions are arguably impossible, but worth testing for nonetheless. When it finds a ParameterizedType, it has found the superclass where the type variable arguments are specified. It then retrieves those arguments from the type and assigns them to the fields.
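The walk is easier to see with plain classes. This standalone sketch (hypothetical class names, with the error checks stripped out) resolves the same type arguments, including through an extra non-parameterized subclass like the ones proxying libraries such as CGLIB generate:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

abstract class TypeAwareBase<I, E>
{
    final Class<?> idClass;
    final Class<?> entityClass;

    TypeAwareBase()
    {
        // Walk up the hierarchy until a parameterized superclass is found,
        // just as the GenericBaseRepository constructor does.
        Type genericSuperclass = this.getClass().getGenericSuperclass();
        while(!(genericSuperclass instanceof ParameterizedType))
            genericSuperclass = ((Class<?>)genericSuperclass).getGenericSuperclass();

        Type[] arguments =
                ((ParameterizedType)genericSuperclass).getActualTypeArguments();
        this.idClass = (Class<?>)arguments[0];
        this.entityClass = (Class<?>)arguments[1];
    }
}

// The concrete "repository" supplies the type arguments...
class LongKeyedStringRepository extends TypeAwareBase<Long, String> { }

// ...and a proxy-like subclass (as CGLIB would generate) still resolves them,
// because the loop walks past the non-parameterized superclass.
class ProxyLikeSubclass extends LongKeyedStringRepository { }

public class TypeResolutionDemo
{
    public static void main(String[] args)
    {
        TypeAwareBase<?, ?> direct = new LongKeyedStringRepository();
        TypeAwareBase<?, ?> proxied = new ProxyLikeSubclass();
        System.out.println(direct.idClass.getSimpleName() + ", "
                + direct.entityClass.getSimpleName());   // Long, String
        System.out.println(proxied.idClass.getSimpleName() + ", "
                + proxied.entityClass.getSimpleName());  // Long, String
    }
}
```

The second instantiation is the interesting one: ProxyLikeSubclass’s generic superclass is the plain Class LongKeyedStringRepository, so the loop takes one extra step before landing on the ParameterizedType.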

This constructor is the only value the GenericBaseRepository provides. It does not provide an EntityManager or method implementations because you may want to have a mixture of JPA and non-JPA repositories in your application. Determining the type arguments has nothing to do with JPA, so it’s best to put this behavior in its own superclass. The GenericJpaRepository in Listing 21-2 does all the interesting JPA work.

LISTING 21-2: GenericJpaRepository.java

public abstract class

        GenericJpaRepository<I extends Serializable, E extends Serializable>

    extends GenericBaseRepository<I, E>

{

    @PersistenceContext protected EntityManager entityManager;

    @Override

    public Iterable<E> getAll()

    {

        CriteriaBuilder builder = this.entityManager.getCriteriaBuilder();

        CriteriaQuery<E> query = builder.createQuery(this.entityClass);

        return this.entityManager.createQuery(

                query.select(query.from(this.entityClass))

        ).getResultList();

    }

    @Override

    public E get(I id)

    {

        return this.entityManager.find(this.entityClass, id);

    }

    @Override

    public void add(E entity)

    {

        this.entityManager.persist(entity);

    }

    @Override

    public void update(E entity)

    {

        this.entityManager.merge(entity);

    }

    @Override

    public void delete(E entity)

    {

        this.entityManager.remove(entity);

    }

    @Override

    public void deleteById(I id)

    {

        CriteriaBuilder builder = this.entityManager.getCriteriaBuilder();

        CriteriaDelete<E> query = builder.createCriteriaDelete(this.entityClass);

        this.entityManager.createQuery(query.where(

                builder.equal(query.from(this.entityClass).get("id"), id)

        )).executeUpdate();

    }

}

Some notes about the previous code:

·     The original DefaultAuthorRepository demonstrated the Java Persistence Query Language, but JPQL is not easy to use when you don’t know the actual entity name (which you can’t in a generic repository). The GenericJpaRepository therefore uses the criteria API to return the list of all entities and delete by ID.

·     The deleteById method works only if all your entities have a property named id, which is the case here. If the surrogate key property names differ, you have to get the entity by the ID and then call the remove method.

·     The DefaultAuthorRepository and DefaultPublisherRepository no longer need any methods because their methods are already defined in the GenericJpaRepository.

·     Only the DefaultBookRepository needs a method, which implements the additional getByIsbn method specified in the BookRepository interface.

@Repository
public class DefaultAuthorRepository extends GenericJpaRepository<Long, Author>
        implements AuthorRepository
{
}

@Repository
public class DefaultBookRepository extends GenericJpaRepository<Long, Book>
        implements BookRepository
{
    @Override
    public Book getByIsbn(String isbn)
    {
        CriteriaBuilder builder = this.entityManager.getCriteriaBuilder();
        CriteriaQuery<Book> query = builder.createQuery(this.entityClass);
        Root<Book> root = query.from(this.entityClass);

        return this.entityManager.createQuery(
                query.select(root).where(builder.equal(root.get("isbn"), isbn))
        ).getSingleResult();
    }
}

@Repository
public class DefaultPublisherRepository
        extends GenericJpaRepository<Long, Publisher>
        implements PublisherRepository
{
}

The JPA criteria API is not the most intuitive API and is certainly more difficult to use than Hibernate ORM’s criteria API. Unlike Hibernate’s API, which is designed solely to make it easy to add expressions and restrictions to an entity lookup, this API is designed to mimic the query language itself. The getAll criteria in GenericJpaRepository can literally be read, “Select from entity,” which is identical to the JPQL query you created earlier for this purpose, minus ordering instructions. If order is important, you have a few options. You could override the method as needed, or you could specify a constructor argument that subclasses use to specify default order instructions. Ordering is not difficult with the criteria API.

        ...

        Root<Book> root = query.from(Book.class);

        return this.entityManager.createQuery(

                query.select(root).orderBy(builder.asc(root.get("name")))

        ).getResultList();

The new query can be read, “Select from Book ordered by Book.name ascending.” Notice that the code created the root query type first because it needed to use Book in both the FROM and ORDER BY clauses. This may seem redundant because the CriteriaQuery instance is already typed, but remember that the CriteriaQuery is typed to the object that will be returned. The query may use types other than the return type. Although you may often find JPQL easier to use, you can perform any of the same operations using the criteria API. Which you use depends on your use case and personal preference.

Demarcating Transaction Boundaries in Your Services

As mentioned earlier, you tell Spring Framework when and how to start and end a transaction using Spring’s @Transactional annotation or JTA’s @Transactional annotation. Spring’s annotation is a little more powerful and flexible than JTA’s annotation.

Using @javax.transaction.Transactional, you can define a blacklist of exceptions that should not trigger a rollback using the dontRollbackOn attribute, a whitelist of exceptions to override the default rollback rule of all RuntimeExceptions using rollbackOn, and the rule for when and how a transaction is created using the Transactional.TxType enum value attribute. Transactional.TxType has the following enum constants:

·     MANDATORY indicates a transaction must already exist, a new transaction may not be created, and an exception must be thrown if a transaction does not already exist.

·     NEVER indicates that a transaction must not already exist, a transaction must not be used, and an exception must be thrown if a transaction already exists.

·     NOT_SUPPORTED means that a transaction must not be used, and if one already exists, it must be suspended so that the code can execute outside of a transaction. When the code finishes executing, any suspended transaction must be resumed.

·     REQUIRED means that a transaction must be used. If no transaction exists, it should be started before the method executes and completed after the method returns. If a transaction already exists, it should be used and allowed to continue after the method returns.

·     REQUIRES_NEW is exactly what it sounds like. Like REQUIRED, it indicates that if no transaction exists, it should be started before the method executes and completed after the method returns. However, if a transaction already exists, it should be suspended and a new transaction started before the method executes, and the new transaction should be completed and the original transaction resumed after the method returns.

·     SUPPORTS is perhaps the most flexible instruction. It means that an existing transaction must be used, but if no transaction already exists, the method must execute without a transaction.

@org.springframework.transaction.annotation.Transactional has attributes noRollbackFor, rollbackFor, and propagation with the same semantic meaning as dontRollbackOn, rollbackOn, and value, respectively, in JTA’s annotation. It also has noRollbackForClassName and rollbackForClassName attributes that accept String class names instead of Classes. The org.springframework.transaction.annotation.Propagation enum has the same constants with the same meanings as Transactional.TxType.

In addition, Propagation has a NESTED constant that creates a nested transaction if one already exists or a new transaction if none already exists. NESTED is not supported when using the JpaTransactionManager. It may be supported when using the JtaTransactionManager with some JTA providers, but using it is not portable. The default rule is REQUIRED for both @Transactional annotations, and in almost all cases, this is sufficient for your purposes.
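As a hedged sketch of how these attributes combine (the AuditedOrderService and PaymentFailedException names are hypothetical, invented for illustration), consider an audit record that must survive a rollback of the surrounding business transaction, and a checked exception promoted to a rollback trigger:

```java
@Service
public class AuditedOrderService
{
    // Runs in its own transaction, so the audit row commits even if the
    // caller's surrounding transaction later rolls back.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void recordAuditEntry(String message) { ... }

    // Checked exceptions do not trigger rollback by default; rollbackFor
    // overrides that rule for PaymentFailedException.
    @Transactional(rollbackFor = PaymentFailedException.class)
    public void placeOrder(long orderId) throws PaymentFailedException { ... }
}
```

The same two behaviors are available with JTA’s annotation via TxType.REQUIRES_NEW and rollbackOn, respectively.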

Spring’s annotation also contains several other useful attributes. isolation enables you to specify the transaction isolation level using the org.springframework.transaction.annotation.Isolation enum. This attribute is not supported and is ignored for the JpaTransactionManager and the JtaTransactionManager. The transaction isolation level for these managers is always the isolation level specified in the JPA or JTA DataSource configuration or the default isolation level for the JDBC driver, if none is specified in the DataSource. Because the default varies from one JDBC driver to the next, it’s best to always specify the isolation level when defining the DataSource resource. The available isolation levels are NONE, READ_COMMITTED, READ_UNCOMMITTED, REPEATABLE_READ, and SERIALIZABLE. For more information about what these mean, you should consult the documentation for your database server. In this book, you always use READ_COMMITTED.

readOnly is another attribute available in Spring’s @Transactional annotation that is not supported for the JpaTransactionManager and JtaTransactionManager. It instructs the underlying transaction system that writes should be forbidden in the transaction and defaults to false. The timeout attribute is supported when you use the JtaTransactionManager but not when you use the JpaTransactionManager. It restricts the amount of time that a transaction may consume before ending in an exception and rollback.

When using the JTA @Transactional annotation, Spring always uses the default PlatformTransactionManager. Remember, this is the one returned by the TransactionManagementConfigurer method, or the one named txManager in the absence of TransactionManagementConfigurer if there are multiple transaction managers, or the only transaction manager if there is just one. However, you can use Spring’s @Transactional annotation with multiple PlatformTransactionManager beans. If you omit the value attribute, it uses the default transaction manager; but if you specify a bean name in the value attribute, the PlatformTransactionManager with that bean name is used.

@Configuration

@EnableTransactionManagement(

        mode = AdviceMode.PROXY, proxyTargetClass = false,

        order = Ordered.LOWEST_PRECEDENCE

)

public class RootContextConfiguration implements TransactionManagementConfigurer

{

    ...

    @Bean

    public PlatformTransactionManager jpaTransactionManager()

    {

        return new JpaTransactionManager(

                this.entityManagerFactoryBean().getObject()

        );

    }

    @Bean

    public PlatformTransactionManager dataSourceTransactionManager()

    {

        return new DataSourceTransactionManager(this.springJpaDataSource());

    }

    @Override

    public PlatformTransactionManager annotationDrivenTransactionManager()

    {

        return this.jpaTransactionManager();

    }

    ...

}

The previous configuration creates two PlatformTransactionManager beans, one for JPA and one for simple DataSource actions. The JpaTransactionManager acts as the default transaction manager. Using this configuration, the actionOne method in the following service executes under the control of the default (JPA) transaction manager. Likewise, actionTwo executes explicitly using the JpaTransactionManager while actionThree executes explicitly using the DataSourceTransactionManager.

public class SomeService
{
    @Transactional
    public void actionOne() { ... }

    @Transactional("jpaTransactionManager")
    public void actionTwo() { ... }

    @Transactional("dataSourceTransactionManager")
    public void actionThree() { ... }
}

When using either of the @Transactional annotations, you may annotate an interface, individual interface methods, a class, or class methods. If you annotate an interface, it is equivalent to annotating the methods of that interface. Likewise, annotating a class is equivalent to annotating the methods of that class. When demarcating transaction boundaries in your application code, the best practice is to annotate the concrete class or class methods, not the interface or interface methods. If you annotate an interface or its methods, @Transactional works only when dynamic proxies (proxyTargetClass = false) are in use; if you ever need to enable CGLIB proxies (proxyTargetClass = true), the interface annotations stop working. Unlike Bean Validation annotations, which establish a contract on the interface, @Transactional is an implementation detail and doesn’t belong in the contract. Annotating service methods is demonstrated with the DefaultBookManager implementation in Listing 21-3. Although DefaultBookManager does not show it, a @Transactional method can access and manipulate multiple entities using multiple repositories, all within the same transaction context.

LISTING 21-3: DefaultBookManager.java

@Service

public class DefaultBookManager implements BookManager

{

    @Inject AuthorRepository authorRepository;

    @Inject BookRepository bookRepository;

    @Inject PublisherRepository publisherRepository;

    @Transactional

    @Override

    public List<Author> getAuthors()

    {

        return this.toList(this.authorRepository.getAll());

    }

    @Transactional

    @Override

    public List<Book> getBooks()

    {

        return this.toList(this.bookRepository.getAll());

    }

    @Transactional

    @Override

    public List<Publisher> getPublishers()

    {

        return this.toList(this.publisherRepository.getAll());

    }

    private <E> List<E> toList(Iterable<E> i)

    {

        List<E> list = new ArrayList<>();

        i.forEach(list::add);

        return list;

    }

    @Transactional

    @Override

    public void saveAuthor(Author author)

    {

        if(author.getId() < 1)

            this.authorRepository.add(author);

        else

            this.authorRepository.update(author);

    }

    @Transactional

    @Override

    public void saveBook(Book book)

    {

        if(book.getId() < 1)

            this.bookRepository.add(book);

        else

            this.bookRepository.update(book);

    }

    @Transactional

    @Override

    public void savePublisher(Publisher publisher)

    {

        if(publisher.getId() < 1)

            this.publisherRepository.add(publisher);

        else

            this.publisherRepository.update(publisher);

    }

}

Using the Transactional Service Methods

You needn’t do anything different when using @Transactional service methods. From the consumer’s perspective, the transaction happens transparently. The BookController works similarly to the EntityServlet in Chapter 20: It lists the Authors, Books, and Publishers for GET requests and creates them for POST requests.

@WebController

public class BookController

{

    private final Random random;

    @Inject BookManager bookManager;

    public BookController()

    {

        try

        {

            this.random = SecureRandom.getInstanceStrong();

        }

        catch(NoSuchAlgorithmException e)

        {

            throw new IllegalStateException(e);

        }

    }

    @RequestMapping(value = "/", method = RequestMethod.GET)

    public String list(Map<String, Object> model)

    {

        model.put("publishers", this.bookManager.getPublishers());

        model.put("authors", this.bookManager.getAuthors());

        model.put("books", this.bookManager.getBooks());

        return "entities";

    }

    @RequestMapping(value = "/", method = RequestMethod.POST)

    public View add()

    {

        Publisher publisher = new Publisher();

        publisher.setName("John Wiley & Sons");

        publisher.setAddress("1234 Baker Street");

        publisher.setDateFounded(Calendar.getInstance());

        this.bookManager.savePublisher(publisher);

        Author author = new Author();

        author.setName("Nicholas S. Williams");

        author.setEmailAddress("nick@example.com");

        author.setGender(Gender.MALE);

        this.bookManager.saveAuthor(author);

        Book book = new Book();

        book.setIsbn("" + this.random.nextInt(Integer.MAX_VALUE));

        book.setTitle("Professional Java for Web Applications");

        book.setAuthor("Nicholas S. Williams");

        book.setPublisher("John Wiley & Sons");

        book.setPrice(59.99D);

        this.bookManager.saveBook(book);

        return new RedirectView("/", true, false);

    }

}

To test it, follow these steps:

1.  Make sure you add the DataSource resource definition to Tomcat’s context.xml file and run the create.sql database creation script in MySQL Workbench to create the necessary database tables.

            <Resource name="jdbc/SpringJpa" type="javax.sql.DataSource"
                      maxActive="20" maxIdle="5" maxWait="10000"
                      username="tomcatUser" password="password1234"
                      driverClassName="com.mysql.jdbc.Driver"
                      defaultTransactionIsolation="READ_COMMITTED"
                      url="jdbc:mysql://localhost/SpringJpa" />

2.  Compile the application and start Tomcat from your IDE.

3.  Go to http://localhost:8080/repositories/ and click the Add More Entities button a few times, just like you did in the previous chapter.

You should see entities appearing in both the browser and the database tables. Looking up individual entities by ID (and ISBN) is an exercise left up to you.

CONVERTING DATA WITH DTOS AND ENTITIES

The Customer-Support-v15 application, available for download from the wrox.com code download site, uses the LocalContainerEntityManagerFactoryBean and transaction management you configured in Spring-JPA. It also has the same GenericRepository interface and GenericBaseRepository and GenericJpaRepository abstract classes. However, changing the Customer Support application to use JPA repositories and a MySQL database is not quite that simple. The Ticket class has properties that you can’t yet convert using the JPA mechanisms you have learned about so far — the Instant date created and the Map of attachments. The easiest way to account for this is to treat the Ticket as a Data Transfer Object (DTO) and create a separate TicketEntity for persisting to the database.

Creating Entities for the Customer Support Application

You need a few different entities in your application, and you can reuse some existing objects. Attachment, for example, just needs to be moved to com.wrox.site.entities, annotated, and given an ID property and a foreign key reference to the ticket.

@XmlRootElement(name = "attachment")

@Entity

public class Attachment implements Serializable

{

    private static final long serialVersionUID = 1L;

    private long id;

    private long ticketId;

    @NotBlank(message = "{validate.attachment.name}")

    private String name;

    @NotBlank(message = "{validate.attachment.mimeContentType}")

    private String mimeContentType;

    @Size(min = 1, message = "{validate.attachment.contents}")

    private byte[] contents;

    @Id

    @Column(name = "AttachmentId")

    @GeneratedValue(strategy = GenerationType.IDENTITY)

    public long getId() { ... }

    public void setId(long id) { ... }

    @Basic

    public long getTicketId() { ... }

    public void setTicketId(long ticketId) { ... }

    @Basic

    @Column(name = "AttachmentName")

    public String getName() { ... }

    public void setName(String name) { ... }

    @Basic

    public String getMimeContentType() { ... }

    public void setMimeContentType(String mimeContentType) { ... }

    @XmlSchemaType(name = "base64Binary")

    @Lob

    public byte[] getContents() { ... }

    public void setContents(byte[] contents) { ... }

}

The UserRepository and UserPrincipal change significantly because users are stored in the database now. UserPrincipal also moves to the com.wrox.site.entities package and now has an ID, username, and password. You should never store a password in plain text, or even weakly hashed, in the database; therefore, the password is persisted strongly hashed with a salt.

@Entity

@Table(uniqueConstraints = {

        @UniqueConstraint(name="UserPrincipal_Username", columnNames="Username")

})

public class UserPrincipal implements Principal, Cloneable, Serializable

{

    private static final long serialVersionUID = 1L;

    private static final String SESSION_ATTRIBUTE_KEY = "com.wrox.user.principal";

    private long id;

    private String username;

    private byte[] password;

    @Id

    @Column(name = "UserId")

    @GeneratedValue(strategy = GenerationType.IDENTITY)

    public long getId() { ... }

    public void setId(long id) { ... }

    @Override

    @Transient

    public String getName() { ... }

    @Basic

    public String getUsername() { ... }

    public void setUsername(String username) { ... }

    @Basic

    @Column(name = "HashedPassword")

    public byte[] getPassword() { ... }

    public void setPassword(byte[] password) { ... }

    ...

}

Because Ticket contains an Instant and a Map, you need to create a TicketEntity to transfer data to and from the DTO Ticket. It looks a lot like Ticket but uses a Timestamp for the date created and has a foreign key reference to the customer’s UserPrincipal ID instead of the customer name.

@Entity

@Table(name = "Ticket")

public class TicketEntity implements Serializable

{

    private static final long serialVersionUID = 1L;

    private long id;

    private long userId;

    private String subject;

    private String body;

    private Timestamp dateCreated;

    @Id

    @Column(name = "TicketId")

    @GeneratedValue(strategy = GenerationType.IDENTITY)

    public long getId() { ... }

    public void setId(long id) { ... }

    @Basic

    public long getUserId() { ... }

    public void setUserId(long userId) { ... }

    @Basic

    public String getSubject() { ... }

    public void setSubject(String subject) { ... }

    @Basic

    public String getBody() { ... }

    public void setBody(String body) { ... }

    @Basic

    public Timestamp getDateCreated() { ... }

    public void setDateCreated(Timestamp dateCreated) { ... }

}
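The Instant-to-Timestamp mapping this entity requires is just an epoch-millisecond round trip. The following minimal sketch, using only JDK classes (the class name is illustrative), shows both directions of the conversion that the service layer performs:

```java
import java.sql.Timestamp;
import java.time.Instant;

public class TimestampConversionSketch
{
    public static void main(String[] args)
    {
        // DTO -> entity: Instant to java.sql.Timestamp via epoch milliseconds
        // (note: toEpochMilli() drops any sub-millisecond precision)
        Instant created = Instant.now();
        Timestamp persisted = new Timestamp(created.toEpochMilli());

        // entity -> DTO: Timestamp back to Instant
        Instant restored = Instant.ofEpochMilli(persisted.getTime());

        System.out.println(created.toEpochMilli() == restored.toEpochMilli()); // prints true
    }
}
```

DefaultTicketService performs exactly this round trip later in the chapter, in its convert() and save() methods.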

These are all the entities you need for now. You also need a database schema in which to store these entities, plus the initial four users that were hard-wired in Java code in previous chapters.

CREATE DATABASE CustomerSupport DEFAULT CHARACTER SET 'utf8'

  DEFAULT COLLATE 'utf8_unicode_ci';

USE CustomerSupport;

CREATE TABLE UserPrincipal (

  UserId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

  Username VARCHAR(30) NOT NULL,

  HashedPassword BINARY(60) NOT NULL,

  UNIQUE KEY UserPrincipal_Username (Username)

) ENGINE = InnoDB;

CREATE TABLE Ticket (

  TicketId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

  UserId BIGINT UNSIGNED NOT NULL,

  Subject VARCHAR(255) NOT NULL,

  Body TEXT,

  DateCreated DATETIME NOT NULL,

  CONSTRAINT Ticket_UserId FOREIGN KEY (UserId)

    REFERENCES UserPrincipal (UserId) ON DELETE CASCADE

) ENGINE = InnoDB;

CREATE TABLE Attachment (

  AttachmentId BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,

  TicketId BIGINT UNSIGNED NOT NULL,

  AttachmentName VARCHAR(255) NULL,

  MimeContentType VARCHAR(255) NOT NULL,

  Contents BLOB NOT NULL,

  CONSTRAINT Attachment_TicketId FOREIGN KEY (TicketId)

    REFERENCES Ticket (TicketId) ON DELETE CASCADE

) ENGINE = InnoDB;

INSERT INTO UserPrincipal (Username, HashedPassword) VALUES ( -- password

  'Nicholas', '$2a$10$x0k/yA5qN8SP8JD5CEN.6elEBFxVVHeKZTdyv.RPra4jzRR5SlKSC'

);

INSERT INTO UserPrincipal (Username, HashedPassword) VALUES ( -- drowssap

  'Sarah', '$2a$10$JSxmYO.JOb4TT42/4RFzguaTuYkZLCfeND1bB0rzoy7wH0RQFEq8y'

);

INSERT INTO UserPrincipal (Username, HashedPassword) VALUES ( -- wordpass

  'Mike', '$2a$10$Lc0W6stzND.9YnFRcfbOt.EaCVO9aJ/QpbWnfjJLcMovdTx5s4i3G'

);

INSERT INTO UserPrincipal (Username, HashedPassword) VALUES ( -- green

  'John', '$2a$10$vacuqbDw9I7rr6RRH8sByuktOzqTheQMfnK3XCT2WlaL7vt/3AMby'

);
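The comment next to each INSERT shows the plaintext password that produced the hash. Each hash is in BCrypt’s modular crypt format, which embeds the algorithm version, the cost factor, and the salt in the string itself. The following small sketch (JDK-only, no jBCrypt required; the class name is illustrative) picks one of the seeded hashes apart:

```java
public class BCryptHashFormat
{
    public static void main(String[] args)
    {
        // One of the hashes inserted above (plaintext password "password")
        String hash = "$2a$10$x0k/yA5qN8SP8JD5CEN.6elEBFxVVHeKZTdyv.RPra4jzRR5SlKSC";

        // Format: $<version>$<cost>$<22-char salt><31-char hash>
        String[] parts = hash.split("\\$");
        String version = parts[1];             // "2a"
        int cost = Integer.parseInt(parts[2]); // 10, meaning 2^10 = 1,024 rounds
        String salt = parts[3].substring(0, 22);

        System.out.println(version + " / " + (1 << cost) + " rounds / salt " + salt);
    }
}
```

Because the salt travels inside the hash string, verification needs no separate salt column, which is why the schema above stores only a single HashedPassword column.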

The repositories for the entities are simple. The interfaces all extend GenericRepository and the implementations all extend GenericJpaRepository. UserRepository requires a custom method implementation so that users can be looked up by username, and AttachmentRepository needs a method to look up attachments for a particular ticket.

public interface UserRepository extends GenericRepository<Long, UserPrincipal>

{

    UserPrincipal getByUsername(String username);

}

@Repository

public class DefaultUserRepository

        extends GenericJpaRepository<Long, UserPrincipal>

        implements UserRepository

{

    @Override

    public UserPrincipal getByUsername(String username)

    {

        return this.entityManager.createQuery(

                "SELECT u FROM UserPrincipal u WHERE u.username = :username",

                UserPrincipal.class

        ).setParameter("username", username).getSingleResult();

    }

}

public interface TicketRepository extends GenericRepository<Long, TicketEntity>

{ }

@Repository

public class DefaultTicketRepository

        extends GenericJpaRepository<Long, TicketEntity>

        implements TicketRepository { }

public interface AttachmentRepository extends GenericRepository<Long, Attachment>

{

    Iterable<Attachment> getByTicketId(long ticketId);

}

@Repository

public class DefaultAttachmentRepository

        extends GenericJpaRepository<Long, Attachment>

        implements AttachmentRepository

{

    @Override

    public Iterable<Attachment> getByTicketId(long ticketId)

    {

        return this.entityManager.createQuery(

                "SELECT a FROM Attachment a WHERE a.ticketId = :id ORDER BY a.id",

                Attachment.class

        ).setParameter("id", ticketId).getResultList();

    }

}

Securing User Passwords with BCrypt

You need to update the services in the Customer Support application to use the new repositories. The TemporaryAuthenticationService, renamed to DefaultAuthenticationService, is significantly more secure now. It uses the industry-standard jBCrypt Java implementation of the BCrypt hash algorithm, provided by the following Maven dependency.

        <dependency>

            <groupId>org.mindrot</groupId>

            <artifactId>jbcrypt</artifactId>

            <version>0.3m</version>

        </dependency>

When used correctly, BCrypt is extremely strong. It is designed to be extremely slow. This may seem counterintuitive, but in reality it doesn’t add a significant amount of time to logging in or saving a user. Where the performance impact is felt is in generating billions of sample passwords for a dictionary attack: because a different salt is used for each password, attacking a compromised password database becomes extremely expensive and impractical.

You should never use a quick hash algorithm like MD5 or any of the SHA algorithms, because modern password-cracking systems can generate billions of dictionary comparisons per second. BCrypt is the most powerful and well-tested password-hashing algorithm to date, and you should stick to it when securing user passwords. It uses an iteration count, represented as a power of 2, to determine the number of rounds of hashing to apply. For example, with an input iteration count of 10, hashing is applied 1,024 times. Each round uses a small, constant amount of memory that makes it difficult to implement with hardware alone, so modern password-cracking systems can generate only small numbers of dictionary comparisons per second. The DefaultAuthenticationService in Listing 21-4 uses the new UserRepository and BCrypt to save and authenticate users.

LISTING 21-4: DefaultAuthenticationService.java

@Service

public class DefaultAuthenticationService implements AuthenticationService

{

    private static final Logger log = LogManager.getLogger();

    private static final SecureRandom RANDOM;

    private static final int HASHING_ROUNDS = 10;

    static

    {

        try

        {

            RANDOM = SecureRandom.getInstanceStrong();

        }

        catch(NoSuchAlgorithmException e)

        {

            throw new IllegalStateException(e);

        }

    }

    @Inject UserRepository userRepository;

    @Override

    @Transactional

    public UserPrincipal authenticate(String username, String password)

    {

        UserPrincipal principal = this.userRepository.getByUsername(username);

        if(principal == null)

        {

            log.warn("Authentication failed for non-existent user {}.", username);

            return null;

        }

        if(!BCrypt.checkpw(

                password,

                new String(principal.getPassword(), StandardCharsets.UTF_8)

        ))

        {

            log.warn("Authentication failed for user {}.", username);

            return null;

        }

        log.debug("User {} successfully authenticated.", username);

        return principal;

    }

    @Override

    @Transactional

    public void saveUser(UserPrincipal principal, String newPassword)

    {

        if(newPassword != null && newPassword.length() > 0)

        {

            String salt = BCrypt.gensalt(HASHING_ROUNDS, RANDOM);

            principal.setPassword(BCrypt.hashpw(newPassword, salt).getBytes());

        }

        if(principal.getId() < 1)

            this.userRepository.add(principal);

        else

            this.userRepository.update(principal);

    }

}

Transferring Data to Entities in Your Services

The DefaultTicketService in Listing 21-5 uses the new TicketRepository, AttachmentRepository, and UserRepository to get and save Tickets and TicketEntitys. Because the application uses Tickets but the database persists TicketEntitys, the data must be transferred between the two different POJOs throughout the DefaultTicketService. The code makes significant use of lambdas, method references, and the Java 8 Streams API to reduce the code necessary to achieve this.
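As a standalone illustration of that transfer pattern, the sketch below converts a list of simplified entities into DTOs with a method reference and the Streams API. The classes here are pared-down stand-ins, not the real Ticket and TicketEntity:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class DtoConversionSketch
{
    // Simplified stand-ins for the real TicketEntity and Ticket
    static class TicketEntity
    {
        final long id; final String subject;
        TicketEntity(long id, String subject) { this.id = id; this.subject = subject; }
    }

    static class Ticket
    {
        final long id; final String subject;
        Ticket(long id, String subject) { this.id = id; this.subject = subject; }
    }

    // Transfers one entity's data to a new DTO
    static Ticket convert(TicketEntity e)
    {
        return new Ticket(e.id, e.subject);
    }

    public static void main(String[] args)
    {
        List<TicketEntity> entities = Arrays.asList(
                new TicketEntity(1L, "Printer on fire"),
                new TicketEntity(2L, "Cannot log in"));

        // A method reference and Stream.map replace an explicit loop
        List<Ticket> tickets = entities.stream()
                .map(DtoConversionSketch::convert)
                .collect(Collectors.toList());

        System.out.println(tickets.size()); // prints 2
    }
}
```

The listing that follows uses the equivalent forEach-with-lambda form in getAllTickets(); either style keeps the conversion logic in a single convert() method.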

LISTING 21-5: DefaultTicketService.java

@Service

public class DefaultTicketService implements TicketService

{

    @Inject TicketRepository ticketRepository;

    @Inject AttachmentRepository attachmentRepository;

    @Inject UserRepository userRepository;

    @Override

    @Transactional

    public List<Ticket> getAllTickets()

    {

        List<Ticket> list = new ArrayList<>();

        this.ticketRepository.getAll().forEach(e -> list.add(this.convert(e)));

        return list;

    }

    @Override

    @Transactional

    public Ticket getTicket(long id)

    {

        TicketEntity entity = this.ticketRepository.get(id);

        return entity == null ? null : this.convert(entity);

    }

    private Ticket convert(TicketEntity entity)

    {

        Ticket ticket = new Ticket();

        ticket.setId(entity.getId());

        ticket.setCustomerName(

                this.userRepository.get(entity.getUserId()).getUsername()

        );

        ticket.setSubject(entity.getSubject());

        ticket.setBody(entity.getBody());

        ticket.setDateCreated(Instant.ofEpochMilli(

                entity.getDateCreated().getTime()

        ));

        this.attachmentRepository.getByTicketId(entity.getId())

                .forEach(ticket::addAttachment);

        return ticket;

    }

    @Override

    @Transactional

    public void save(Ticket ticket)

    {

        TicketEntity entity = new TicketEntity();

        entity.setId(ticket.getId());

        entity.setUserId(this.userRepository.getByUsername(

                ticket.getCustomerName()

        ).getId());

        entity.setSubject(ticket.getSubject());

        entity.setBody(ticket.getBody());

        if(ticket.getId() < 1)

        {

            ticket.setDateCreated(Instant.now());

            entity.setDateCreated(new Timestamp(

                    ticket.getDateCreated().toEpochMilli()

            ));

            this.ticketRepository.add(entity);

            ticket.setId(entity.getId());

            for(Attachment attachment : ticket.getAttachments())

            {

                attachment.setTicketId(entity.getId());

                this.attachmentRepository.add(attachment);

            }

        }

        else

            this.ticketRepository.update(entity);

    }

    @Override

    @Transactional

    public void deleteTicket(long id)

    {

        this.ticketRepository.deleteById(id);

    }

}

You should be used to testing the Customer Support application now, but this time your tickets will persist in the database:

1.  Create the following DataSource resource in your Tomcat’s context.xml file, and make sure you run the create.sql script to create the database and tables.

        <Resource name="jdbc/CustomerSupport" type="javax.sql.DataSource"
                  maxActive="20" maxIdle="5" maxWait="10000"
                  username="tomcatUser" password="password1234"
                  driverClassName="com.mysql.jdbc.Driver"
                  defaultTransactionIsolation="READ_COMMITTED"
                  url="jdbc:mysql://localhost/CustomerSupport" />

2.  Compile the application, and start Tomcat from your IDE.

3.  Go to http://localhost:8080/support/ and log in as one of the pre-existing users in the database.

4.  Create a ticket or two, attach some files, and restart Tomcat. The tickets should still be there, persisted in the database.

You can and should query the database tables in MySQL Workbench to see the persisted entities.

SUMMARY

In this chapter, you have learned a lot about using JPA and Hibernate ORM in Spring Framework. You experimented with creating a LocalContainerEntityManagerFactoryBean, learned about the different ways that Spring Framework advises methods, explored transaction management using @Transactional and the JpaTransactionManager, and created a generic repository that can handle most standard CRUD operations for all your entities. You learned how Spring can completely replace the persistence.xml file and create a persistence unit in memory, reducing the amount of XML you have to write. You also compared the criteria API to JPQL and saw the advantages and disadvantages of both. Finally, you explored protecting user passwords using the secure BCrypt password slow-hashing algorithm before persisting the passwords in the database.

However, you may think that there’s still an awful lot of code to write, especially when looking up persisted entities. What if you need to look up an entity many different ways using many different fields? What about ordering and paging results, which can complicate the matter further? In the next chapter you learn how Spring Data JPA can make writing your JPA repositories even easier — by making it unnecessary to write them at all.