Spring Managed Alfresco Custom Activiti Java Delegates

I recently needed to make a change to have Alfresco 4’s Activiti call objects managed by Spring instead of classes instantiated anew for each execution.  A couple of reasons for this:

  1. A new enhancement was necessary to access a custom database table, so I needed to inject a DAO bean into the Activiti serviceTask.
  2. Refactoring of the code base was needed.  Having Spring manage the Java delegate service tasks, versus instantiating new objects for each process execution, is always a better way to go if the application is already Spring managed (which Alfresco is).
    • i.e., I needed access to my DAO bean and to the Spring beans Alfresco already provides.
    • NOTE:  You now have to make sure your class is thread-safe though!

For a tutorial on Alfresco’s advanced workflows with Activiti, take a look at Jeff Potts’ tutorial here.  This post only discusses what was refactored to have Spring manage the Activiti engine Java delegates.

I wanted to piggy-back off of the Activiti workflow engine that is already embedded in Alfresco 4, so I decided not to define our own Activiti engine manually.  The Alfresco Summit 2013 had a great video tutorial, which helped immensely in refactoring the “Old Method” into the “New Method” described below.

Example:

For our example, we’ll use a simple Activiti workflow that defines two service tasks, CherryJavaDelegate and ShoeJavaDelegate (both extend the abstract AbstractCherryShoeDelegate).  The “Old Method” does NOT have Spring managing the Activiti service task Java delegates.  The “New Method” has Spring manage and inject the Activiti service task Java delegates, and also adds an enhancement for both service tasks to write to a database table.

Old Method

1. Notice that the cherryshoebpmn.xml example below defines the serviceTasks with the “activiti:class” attribute; this makes Activiti instantiate a new delegate object for each process execution:

<process id="cherryshoeProcess" name="Cherry Shoe Process" isExecutable="true">
    ...
    <serviceTask id="cherryTask" name="Insert Cherry Task" activiti:class="com.cherryshoe.activiti.delegate.CherryJavaDelegate"></serviceTask>
    
    <serviceTask id="shoeTask" name="Insert Shoe Task" activiti:class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"></serviceTask>
    ...
</process>

2. Since we have multiple service tasks that share common functionality, we defined an abstract class to hold it.  The concrete classes provide or override whatever functionality the abstract class does not define.

...
import org.activiti.engine.delegate.JavaDelegate;
...
public abstract class AbstractCherryShoeDelegate implements JavaDelegate {
...
    @Override
    public void execute(DelegateExecution execution) throws Exception {
    ...
    }
...
}

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {
...
...
}

New Method

Here’s a summary of everything that had to happen to have Spring inject the custom Activiti Java delegate service tasks in Alfresco 4 (tested with Alfresco 4.1.5), and to write to database tables via injected DAO beans.

  1. The abstract AbstractCherryShoeDelegate class extends Alfresco’s BaseJavaDelegate
  2. There are class load order issues where custom Spring beans will not get registered.  Set up a depends-on relationship with the activitiBeanRegistry for the abstract parent AbstractCherryShoeDelegate
  3. The following must be kept intact:
    • In the Spring configuration file,
      • The abstract AbstractCherryShoeDelegate class defines parent="baseJavaDelegate" abstract="true" depends-on="activitiBeanRegistry"
      • For each concrete Java delegate:
        • The concrete bean id MUST match the class name, which in turn matches the activiti:delegateExpression in the bpmn20 configuration XML file
          • NOTE: Reading this Alfresco forum thread, it looks like the activitiBeanRegistry registers the bean by class name, not by bean id, so this is likely not a strict requirement
        • The parent MUST be defined as an attribute on the bean element

Details Below:

1. Define Spring beans for the abstract parent class AbstractCherryShoeDelegate and for each concrete class that extends it (i.e., CherryJavaDelegate and ShoeJavaDelegate), so that Spring manages the custom Activiti Java delegates.  The abstract parent must define its own parent as "baseJavaDelegate", with abstract="true" and depends-on="activitiBeanRegistry".

<bean id="AbstractCherryShoeDelegate" parent="baseJavaDelegate" abstract="true" depends-on="activitiBeanRegistry"></bean>
    
<bean id="CherryJavaDelegate"
class="com.cherryshoe.activiti.delegate.CherryJavaDelegate" parent="AbstractCherryShoeDelegate">
    <property name="cherryDao" ref="com.cherryshoe.database.dao.CherryDao"/>
</bean>

<bean id="ShoeJavaDelegate"
class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"  parent="AbstractCherryShoeDelegate">
    <property name="shoeDao" ref="com.cherryshoe.database.dao.ShoeDao"/>
</bean>

***NOTE: BELOW WILL NOT WORK

– Do NOT put any periods to denote package structure in the bean id!  Alfresco/Activiti gets confused by the package “.”, whereas Spring normally works fine with this construct.

– Also, merely having the concrete class extend the abstract parent class is not enough; the parent attribute must be declared explicitly in the bean definition.

<bean id="com.cherryshoe.activiti.delegate.CherryJavaDelegate"
class="com.cherryshoe.activiti.delegate.CherryJavaDelegate" >
    <property name="cherryDao" ref="com.cherryshoe.database.dao.CherryDao"/>
</bean>

<bean id="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"
class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate" >
    <property name="shoeDao" ref="com.cherryshoe.database.dao.ShoeDao"/>
</bean>

2. Notice that the cherryshoebpmn.xml example below is using the “activiti:delegateExpression” attribute and referencing the Spring bean.  This means only one instance of that Java class is created for the serviceTask it is defined on, so the class must be implemented with thread-safety in mind:

<process id="cherryshoeProcess" name="Cherry Shoe Process" isExecutable="true">
    ...
    <serviceTask id="cherryTask" name="Insert Cherry Task" activiti:delegateExpression="${CherryJavaDelegate}"></serviceTask>

    <serviceTask id="shoeTask" name="Insert Shoe Task" activiti:delegateExpression="${ShoeJavaDelegate}"></serviceTask>
    ...
</process>

3. The abstract class now extends Alfresco’s BaseJavaDelegate.  The concrete classes provide or override whatever functionality the abstract class does not define.

...
import org.alfresco.repo.workflow.activiti.BaseJavaDelegate;
...
public abstract class AbstractCherryShoeDelegate extends BaseJavaDelegate {
...
    @Override
    public void execute(DelegateExecution execution) throws Exception {
    ...
    }
...
}

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {
...
}
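Because Spring now creates a single delegate instance shared by every process execution, per-execution state must live in local variables, never in instance fields; injected collaborators like DAOs are fine as fields since they are set once and are themselves thread-safe. A compressed plain-Java illustration of the rule (the FruitDao name is a hypothetical stand-in, not one of the actual cherryshoe classes):

```java
public class ThreadSafeDelegateExample {

    // Minimal stand-in for an injected DAO bean (hypothetical,
    // not the actual CherryDao/ShoeDao classes)
    public interface FruitDao {
        void insert(String value);
    }

    // Fine as a field: injected once by Spring, itself thread-safe
    private FruitDao fruitDao;

    public void setFruitDao(FruitDao fruitDao) { this.fruitDao = fruitDao; }

    // BAD as a field: per-execution state would be shared across
    // concurrent process executions
    // private String currentLabel;

    public String execute(String nodeId) {
        // Keep per-execution state local -- each thread gets its own copy
        String label = "processing-" + nodeId;
        fruitDao.insert(label);
        return label;
    }
}
```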

For more examples and ideas, I encourage you to explore the links provided throughout this blog. Also take a look at Activiti’s user guide, particularly the Java Service Task Implementation section. What questions do you have about this post? Let me know in the comments section below, and I will answer each one.

The blog Spring Managed Alfresco Custom Activiti Java Delegates was originally posted on cherryshoe.blogspot.com.

Adding Full Text Search to Ark via Spring and JPA

What, No Full Text Search Already?

My project ArkCase is a Spring application that integrates with Alfresco (and other ECM platforms) via CMIS – the Content Management Interoperability Standard.  ArkCase stores metadata in a database, and content files in the ECM platform.  Our customers so far have not needed integrated full text search; plain old database queries have sufficed. Eventually we know full text search has to be addressed.  Why not now, since ArkCase has been getting some love?  Plus, high quality search engines such as SOLR are free, documented in excellent books, and could provide more analytic services than just plain old search.

Goals

What do we want from SOLR Search integration?

  1. We want both quick search and advanced search capabilities.  Quick search should be fast and search only metadata (case number, task assignee, …).  Quick search is to let users find an object quickly based on the object ID or the assignee.  Advanced search should still be fast, but includes content file search and more fields.  Advanced search is to let users explore all the business objects in the application.
  2. Search results should be integrated with data access control.  Only results the user is authorized to see should appear in the search results.  This means two users with different access rights could see different results, even when searching for the same terms.
  3. The object types to be indexed, and the specific fields to be indexed for each object type, should be configurable at run time.  Each ArkCase installation may trace different object types, and different customers may want to index different data.  So at runtime the administrator should be able to enable and disable different object types, and control which fields are indexed.
  4. Results from ArkCase metadata and results from the content files (stored in the ECM platform) should be combined in a seamless fashion.  We don’t want to extend the ECM full-text search engine to index the ArkCase metadata, and we don’t want the ArkCase metadata full text index to duplicate the ECM engine’s data (we don’t want to re-index all the content files already indexed by the ECM).  So we will have two indexes: the ArkCase metadata index, and the ECM content file index.  But the user should never be conscious of this; the ArkCase search user interface and search results should maintain the illusion of a single coherent full text search index.

Both Quick Search and Advanced Search

To enable both quick search and advanced search modes, I created two separate SOLR collections.  The quick search collection includes only the metadata fields to be searched via the Quick Search user interface.  The full collection includes all indexed metadata.  Clearly these two indexes are somewhat redundant since the full collection almost certainly includes everything indexed in the quick search collection.  As soon as we have a performance test environment I’ll try to measure whether maintaining the smaller quick search collection really makes sense.  If the quick search collection is not materially faster than the equivalent search on the full index, then we can stop maintaining the quick search collection.

Integration with Data Access Control

Data access control is a touchy issue since the full text search queries must still be fast, the pagination must continue to work, and the hit counts must still be accurate.  These goals are difficult to reach if application code applies data access control to the search results after they leave the search engine.  So I plan to encode the access control lists into the search engine itself, so the access control becomes just another part of the search query.  Search Technologies has a fine series of articles about this “early binding” architecture: https://www.searchtechnologies.com/search-engine-security.html.
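One way to sketch the “early binding” idea: index each object’s read ACL into a field (here a hypothetical acl_read field; the name is mine, not from ArkCase), then decorate every query with a filter clause built from the user’s identity, so access control is evaluated inside the search engine:

```java
import java.util.Arrays;
import java.util.List;

public class AclQueryDecorator {

    // Build a SOLR-style filter query restricting hits to documents whose
    // (hypothetical) acl_read field lists the user or one of the user's groups.
    static String aclFilter(String user, List<String> groups) {
        StringBuilder fq = new StringBuilder("acl_read:(\"").append(user).append("\"");
        for (String group : groups) {
            fq.append(" OR \"").append(group).append("\"");
        }
        return fq.append(")").toString();
    }

    public static void main(String[] args) {
        // Appended to the user's search as a filter query (e.g. an fq parameter)
        System.out.println(aclFilter("jdoe", Arrays.asList("ANALYSTS", "SUPERVISORS")));
    }
}
```

The filter rides along with every search, so pagination and hit counts stay accurate with no post-filtering in application code.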

Configurable at Runtime

ArkCase has a basic pattern for runtime-configurable options.  We encode the options into a Spring XML configuration file, which we load at runtime by monitoring a Spring load folder.  This allows us to support as many search configurations as we need: one Spring full-text-search config file for each business object type.  At some future time we will add an administrator control panel with a user interface for reading and writing such configuration files.  This Spring XML profile configures the business object to be indexed.  For business objects stored in ArkCase tables, this configuration includes the JPA entity name, the entity properties to be indexed, the corresponding SOLR field names, and how often the database is polled for new records.  For Activiti workflow objects, the configuration includes the Activiti object type (tasks or business processes), and the properties to be indexed.

Seamless Integration of Database, Activiti, and ECM Data Sources

The user should not realize the indexed data is from multiple repositories.

Integrating database and Activiti data sources is easy: we just feed data from both sources into the same SOLR collection.

The ECM already indexes its content files.  We don’t want to duplicate the ECM index, and we especially don’t want to dig beneath the vendor’s documented search interfaces.

So in our application code, we need to make two queries: one to the ArkCase SOLR index (which indexes the database and the Activiti data), and another query to the ECM index.  Then we need to merge the two result sets.  As we encounter challenges with this double query and result set merging I may write more blog articles!
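A minimal sketch of that merging step, assuming both result sets have already been reduced to (id, score) pairs; the class and field names are illustrative, not actual ArkCase code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class SearchResultMerger {

    static class Hit {
        final String id;
        final double score;
        Hit(String id, double score) { this.id = id; this.score = score; }
    }

    // Combine metadata hits (ArkCase SOLR index) and content hits (ECM index)
    // into one list ordered best-score-first.  In practice the two engines'
    // relevance scores are not directly comparable, so a real implementation
    // must normalize them first -- that is the hard part this sketch skips.
    static List<Hit> merge(List<Hit> metadataHits, List<Hit> contentHits) {
        List<Hit> merged = new ArrayList<>(metadataHits);
        merged.addAll(contentHits);
        merged.sort(Comparator.comparingDouble((Hit h) -> h.score).reversed());
        return merged;
    }
}
```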

Closing Thoughts

SOLR is very easy to work with.  I may use it for more than straightforward full text search.  For example, the navigation panels with the lists of cases, lists of tasks, lists of complaints, and so on include only data in the SOLR quick search collection.  So in theory we should be able to query SOLR to populate those lists – versus calling JPA queries.  Again, once we have a performance test environment I can tell whether SOLR queries or JPA queries are faster in general.

 

Mule ESB: How to Call the Exact Method You Want on a Spring Bean

The Issue

Mule ESB provides a built-in mechanism to call a Spring bean.  Mule also provides an entry point resolver mechanism to choose the method that should be called on the desired bean.  One such resolver is the property-entry-point-resolver, which means the incoming message includes a property that specifies the method name.  It looks like this:

        <component doc:name="Complaint DAO">
            <property-entry-point-resolver property="daoMethod"/>
            <spring-object bean="acmComplaintDao"/>
        </component>

This snippet means the incoming message includes a property “daoMethod”; Mule will invoke the acmComplaintDao bean’s method named by this property.

I’ve had three problems with this approach.  First, you can only specify the bean to be called, and hope Mule chooses the right method to invoke.  Second, Mule is in charge of selecting and providing the method arguments; suppose the bean has several overloaded methods with the same name?  Third, only an incoming message property can be used to specify the method name.  This means either the client code invoking the Mule flow must provide the method name (undesirable since it makes that code harder to read), or the flow design must be deformed such that the main flow calls a child flow only in order to provide the method name property.

How I Resolved the Issue

Last week I finally noticed Mule provides access to a bean registry which includes all Spring beans.  And I noticed Mule’s expression component allows you to add arbitrary Mule Expression Language to the flow.  Putting these two together results in much simpler code.  I could replace the above example with something like this:

<expression-component>
     app.registry.acmComplaintDao.save(message.payload);
</expression-component>

The “app.registry” is a built-in object provided by the Mule Expression Language.

In my mind this XML snippet is much clearer and easier to read than the previous one.  At a glance the reader can see which method of which bean is being called, with which arguments.  And it fits right into the main flow; no need to set up a separate child flow just to specify the method name.

A nice simple resolution to the issues I had with my earlier approach. And the new code is smaller and easier to read!  Good news all around.

 

Spring MVC – setting JSON date format

Spring MVC‘s message conversion feature is the bomb.  I love it; I wish I’d started using it long ago.  Just make sure your JSON fields match your POJO property names, and your MVC controller includes a POJO parameter or return value.  Then Spring MVC auto-converts JSON to and from your POJO!

Spring MVC uses the Jackson JSON library.  It can use either Jackson 1 or Jackson 2, whichever you add to the webapp classpath.

By default, Jackson marshals dates following an epoch timestamp strategy, specifically, the number of milliseconds since January 1, 1970 UTC.  In other words, a very long number like this: 1399908056241.  Not very nice for our users, and we’d rather not have to write custom date parsing and formatting code.
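To see the two representations side by side, here is a JDK-only sketch; the format pattern is my reading of the ISO-8601 layout shown later in this post, not something Spring or Jackson requires you to write by hand:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class JsonDateDemo {

    // The ISO-8601 layout we want Jackson to emit instead of epoch millis
    static String toIso8601(Date date) {
        SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        iso.setTimeZone(TimeZone.getTimeZone("UTC"));
        return iso.format(date);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L);
        System.out.println(epoch.getTime());   // default marshaling: 0
        System.out.println(toIso8601(epoch));  // 1970-01-01T00:00:00.000+0000
    }
}
```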

So we have to configure Spring MVC and JSON to use a different format.  The above-linked article is pretty clear on how to configure Jackson in Java (this is the Jackson 1 API; the Jackson 2 equivalent is objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS)):

objectMapper.configure(SerializationConfig.Feature.WRITE_DATES_AS_TIMESTAMPS, false);

But Spring MVC never exposes Jackson via Java code; it just works, no programmer intervention needed.  So we need a way to configure the Jackson object mapper in Spring configuration.  I added this XML to my Spring MVC configuration file:

    <!-- set JSON date format to ISO-8601 e.g. 1970-01-01T00:00:00.000+0000 -->
    <bean id="sourceObjectMapper" class="com.fasterxml.jackson.databind.ObjectMapper"/>
    <bean id="acmObjectMapper" class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
        <property name="targetObject" ref="sourceObjectMapper"/>
        <property name="targetMethod" value="disable"/>
        <property name="arguments" value="WRITE_DATES_AS_TIMESTAMPS"/>
    </bean>
    <bean id="acmJacksonConverter" class="org.springframework.http.converter.json.MappingJackson2HttpMessageConverter">
        <property name="objectMapper" ref="acmObjectMapper"/>
    </bean>

The sourceObjectMapper is a template.  The acmObjectMapper is the result of calling disable(WRITE_DATES_AS_TIMESTAMPS) on that template, i.e., an object mapper that writes dates as ISO-8601 strings.  The acmJacksonConverter is a Spring MVC message converter configured to use this customized object mapper.

All we need to do now is tell Spring MVC to use our nice new message converter. I’m using the mvc:annotation-driven approach to configuring Spring MVC. The mvc namespace allows you to specify a list of message converters:

    <mvc:annotation-driven>
        <mvc:message-converters>
            <!-- We configure the Jackson mapper to output dates in ISO-8601 format. This requires adding our
            customized Jackson mapper to the list of Spring MVC message converters. But, if we just add our bean here
            all by itself, it will handle requests it should not handle, e.g. encoding strings.  So we need to add the
            other standard message converters here too, and make sure to put the customized Jackson converter AFTER the
            string converter. -->

            <bean class="org.springframework.http.converter.ByteArrayHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.xml.Jaxb2RootElementHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.StringHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.ResourceHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.xml.SourceHttpMessageConverter"/>
            <ref bean="acmJacksonConverter"/>
            <bean class="org.springframework.http.converter.support.AllEncompassingFormHttpMessageConverter"/>
            <!-- atom feed requires com.sun.syndication package ...   -->
            <!--<bean class="org.springframework.http.converter.feed.AtomFeedHttpMessageConverter"/>-->
            <bean class="org.springframework.http.converter.BufferedImageHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.FormHttpMessageConverter"/>
            <bean class="org.springframework.http.converter.xml.Jaxb2CollectionHttpMessageConverter"/>
            <!-- marshalling converter requires spring oxm -->
            <!--<bean class="org.springframework.http.converter.xml.MarshallingHttpMessageConverter"/>-->
        </mvc:message-converters>
    </mvc:annotation-driven>

As you can see our custom converter is 6th in the list, after many built-in converters.  When I include only the custom converter, the custom converter gets first crack at each request, and it will try to handle requests it shouldn’t!  So I ended up including all the standard converters, along with the custom one.
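The ordering matters because Spring MVC walks the converter list in order and uses the first converter that claims it can handle the request. A simplified sketch of that selection loop (my own illustration, not Spring’s actual code):

```java
import java.util.List;

public class ConverterSelectionSketch {

    interface MessageConverter {
        boolean canWrite(Class<?> type);
        String write(Object value);
    }

    // The first converter whose canWrite() returns true wins -- which is
    // why a greedy custom converter must come after the more specific
    // standard ones, exactly as in the XML configuration above.
    static String writeWithFirstMatch(List<MessageConverter> converters, Object value) {
        for (MessageConverter converter : converters) {
            if (converter.canWrite(value.getClass())) {
                return converter.write(value);
            }
        }
        throw new IllegalStateException("no converter for " + value.getClass());
    }
}
```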

This setup took me a few hours to figure out.  But now JSON dates are represented in a nicely readable standard format, e.g. 1970-01-01T00:00:00.000+0000, with no further programming ever required on my part.

Mule Integration with Spring: A New Approach

I’ve written about using Mule with Spring on my personal blog.  In this article I described how to integrate Mule into an existing Spring MVC application.

That approach involved configuring the web.xml file to setup a Mule context:

    <listener>
        <listener-class>org.mule.config.builders.MuleXmlBuilderContextListener</listener-class>
    </listener>
    <context-param>
        <param-name>org.mule.config</param-name>
        <param-value>spring-rma.xml</param-value>
    </context-param>

This approach is easy and works well, especially if all your Spring beans are defined in Mule files.  But my application already had a Spring application context with a large library of Spring beans.  The above approach creates a whole separate context!  My Mule flows have no access to my existing Spring beans!  This situation is very depressing in terms of having Mule be able to leverage my existing code.

So I found a better way.  I now create the Mule context as a child of my main Spring application context.  My problem is completely resolved!  It does take a little more work.

First, create a Spring bean to manage the Mule context.  Spring calls methods on this bean when the application context starts, and again when it is closing.  These methods start the Mule context when Spring starts and close it when Spring is shutting down.  The bean also has a list of Mule configuration files to include in the Mule context.

package com.armedia.acm.rma;

import org.mule.api.MuleContext;
import org.mule.api.MuleException;
import org.mule.api.context.MuleContextFactory;
import org.mule.config.spring.SpringXmlConfigurationBuilder;
import org.mule.context.DefaultMuleContextFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class MuleContextManager implements ApplicationContextAware
{

    private MuleContext muleContext;
    private transient Logger log = LoggerFactory.getLogger(getClass());
    private ApplicationContext applicationContext;
    private String[] configurationFiles;

    private void startMuleContext(ApplicationContext applicationContext) throws MuleException
    {
        if ( getMuleContext() != null )
        {
            return;
        }
        log.debug("Creating spring config builder.");
        SpringXmlConfigurationBuilder builder = new SpringXmlConfigurationBuilder(getConfigurationFiles());

        builder.setParentContext(applicationContext);
        MuleContextFactory muleContextFactory = new DefaultMuleContextFactory();
        MuleContext muleContext = muleContextFactory.createMuleContext(builder);

        log.debug("Starting mule context");
        muleContext.start();
        setMuleContext(muleContext);
        log.debug("Done.");
    }

    public void shutdownBean()
    {
        try
        {
            if ( getMuleContext() != null )
            {
                log.debug("Stopping Mule context");
                getMuleContext().stop();
            }
        }
        catch (MuleException e)
        {
            log.error("Could not stop Mule context: " + e.getMessage(), e);
        }
    }

    public MuleContext getMuleContext()
    {
        return muleContext;
    }

    public void setMuleContext(MuleContext muleContext)
    {
        this.muleContext = muleContext;
    }

    public ApplicationContext getApplicationContext()
    {
        return applicationContext;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext)
    {
        this.applicationContext = applicationContext;

        if ( getMuleContext() == null )
        {
            try
            {
                startMuleContext(applicationContext);
            }
            catch (MuleException e)
            {
                log.error("Could not start Mule context: " + e.getMessage(), e);
                throw new IllegalStateException(e);
            }
        }
    }

    public String[] getConfigurationFiles()
    {
        return configurationFiles;
    }

    public void setConfigurationFiles(String[] configurationFiles)
    {
        this.configurationFiles = configurationFiles;
    }
}

The above bean is configured in Spring like so:

    <bean id="muleContextManager" class="com.armedia.acm.rma.MuleContextManager"
            destroy-method="shutdownBean">
        <property name="configurationFiles">
            <array>
                <value type="java.lang.String">spring-rma.xml</value>
            </array>
        </property>
    </bean>

Now just make sure this Spring XML configuration is in your Spring application context, and all is well. Mule still starts and stops when the web application starts and stops, just like with the original method. And, since we make the Mule context a child of the Spring context (the builder.setParentContext(applicationContext) call in the Java code above), all Mule flows can see all the Spring beans. Life is good!

Don’t forget to remove the Mule configuration elements from the web.xml!!!

Coincidence… or Fate?  A true story postscript.

I wrote the above Java code yesterday.  When I got home, the new book “Mule in Action, Volume 2” was in my mailbox.  I opened the book to a random page: page 209, to be exact.  This page includes a code snippet.  It was the same code I just wrote – the same code you see above!  I was reading the very code I had written earlier that day!  Weird, but true.