Spring Managed Alfresco Custom Activiti Java Delegates

I recently needed to make a change to have Alfresco 4’s Activiti call Spring-managed objects instead of instantiating a new class instance during each process execution.  A couple of reasons for this:

  1. A new enhancement was necessary to access a custom database table, so I needed to inject a DAO bean into the Activiti serviceTask.
  2. Refactoring of the code base was needed.  Having Spring manage the java delegate service task versus instantiating new objects for each process execution is always a better way to go if the application is already Spring managed (which Alfresco is).
    • i.e., I needed access to the DAO bean and to Alfresco’s available Spring beans.
    • NOTE:  You now have to make sure your class is thread safe though!

For a tutorial on Alfresco’s advanced workflows with Activiti, take a look at Jeff Potts’ tutorial here.  This blog will only discuss what was refactored to have Spring manage the Activiti engine java delegates.

I wanted to piggy-back off of the Activiti workflow engine that is already embedded in Alfresco 4, so I decided not to define our own Activiti engine manually.  The Alfresco Summit 2013 had a great video tutorial, which helped immensely in refactoring the “Old Method” to the “New Method”, both described below.

Example:

For our example, we’ll use a simple Activiti workflow that defines two service tasks, CherryJavaDelegate and ShoeJavaDelegate (the abstract AbstractCherryShoeDelegate is the parent).  The “Old Method” does NOT have Spring managing the Activiti service task java delegates.  The “New Method” has Spring manage and inject the Activiti service task java delegates, and also adds an enhancement for both service tasks to write to a database table.

Old Method

1. Notice that the cherryshoebpmn.xml example below defines the serviceTasks with the “activiti:class” attribute; this has Activiti instantiate a new object for each process execution:

<process id="cherryshoeProcess" name="Cherry Shoe Process" isExecutable="true">
    ...
    <serviceTask id="cherryTask" name="Insert Cherry Task" activiti:class="com.cherryshoe.activiti.delegate.CherryJavaDelegate"></serviceTask>
    
    <serviceTask id="shoeTask" name="Insert Shoe Task" activiti:class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"></serviceTask>
    ...
</process>

2. Since we have multiple service tasks that need the same functionality, we defined an abstract class that implements the shared behavior.  The concrete classes provide or override any functionality not defined in the abstract class.

...
import org.activiti.engine.delegate.DelegateExecution;
import org.activiti.engine.delegate.JavaDelegate;
...
public abstract class AbstractCherryShoeDelegate implements JavaDelegate {
...
    @Override
    public void execute(DelegateExecution execution) throws Exception {
    ...
    }
...
}

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {
...
...
}
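Since activiti:class creates a fresh delegate instance per execution, per-instance state is safe under the old method.  A minimal sketch of what a concrete delegate might look like; the instance field and the work done in execute() are assumptions for illustration, not the actual implementation:

import org.activiti.engine.delegate.DelegateExecution;

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {

    // Instance state is safe here only because activiti:class creates a
    // new instance of this class for every process execution.
    private String cherryResult;

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        super.execute(execution);
        cherryResult = "cherry-" + execution.getProcessInstanceId();
        execution.setVariable("cherryResult", cherryResult);
    }
}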

New Method

Here’s a summary of all that had to happen to have Spring inject the custom Activiti java delegates into Alfresco 4 service tasks (tested with Alfresco 4.1.5) and to write to database tables via injected DAO beans.

  1. The abstract AbstractCherryShoeDelegate class extends Alfresco’s BaseJavaDelegate
  2. There are class load order issues where custom spring beans will not get registered.  Set up a depends-on relationship with the activitiBeanRegistry for the abstract parent AbstractCherryShoeDelegate
  3. The following must be kept intact:
    • In the Spring configuration file,
      • The abstract AbstractCherryShoeDelegate class defines parent=”baseJavaDelegate” abstract=”true” depends-on=”activitiBeanRegistry”
      • For each concrete Java delegate:
        • The concrete bean id MUST match the class name, which in turn matches the activiti:delegateExpression in the bpmn20 configuration XML file
          • NOTE: Reading this Alfresco forum thread, it looks like the activitiBeanRegistry registers the bean by class name, not by bean id, so this is likely not a strict requirement
        • The parent attribute MUST be defined on the bean

Details Below:

1. Define spring beans for the abstract parent class AbstractCherryShoeDelegate and for each concrete class that extends it (i.e. CherryJavaDelegate and ShoeJavaDelegate), so that Spring manages the custom Activiti java delegates.  The abstract parent must define its own parent as ”baseJavaDelegate”, abstract=”true”, and depends-on=”activitiBeanRegistry”.

<bean id="AbstractCherryShoeDelegate" parent="baseJavaDelegate" abstract="true" depends-on="activitiBeanRegistry"></bean>
    
<bean id="CherryJavaDelegate"
class="com.cherryshoe.activiti.delegate.CherryJavaDelegate" parent="AbstractCherryShoeDelegate">
    <property name="cherryDao" ref="com.cherryshoe.database.dao.CherryDao"/>
</bean>

<bean id="ShoeJavaDelegate"
class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"  parent="AbstractCherryShoeDelegate">
    <property name="shoeDao" ref="com.cherryshoe.database.dao.ShoeDao"/>
</bean>

***NOTE: BELOW WILL NOT WORK

– Do NOT put any periods to denote package structure in the bean id!  Alfresco/Activiti gets confused by the package “.”, even though Spring normally works fine with this construct.

– Also, just having the concrete class extend the parent abstract class is not enough to make it work.

<bean id="com.cherryshoe.activiti.delegate.CherryJavaDelegate"
class="com.cherryshoe.activiti.delegate.CherryJavaDelegate" >
    <property name="cherryDao" ref="com.cherryshoe.database.dao.CherryDao"/>
</bean>

<bean id="com.cherryshoe.activiti.delegate.ShoeJavaDelegate"
class="com.cherryshoe.activiti.delegate.ShoeJavaDelegate" >
    <property name="shoeDao" ref="com.cherryshoe.database.dao.ShoeDao"/>
</bean>

2. Notice that the cherryshoebpmn.xml example below is using the “activiti:delegateExpression” attribute and referencing the Spring bean by id.  This means only one instance of the Java class is created for the serviceTask it is defined on, so the class must be implemented with thread-safety in mind:

<process id="cherryshoeProcess" name="Cherry Shoe Process" isExecutable="true">
    ...
    <serviceTask id="cherryTask" name="Insert Cherry Task" activiti:delegateExpression="${CherryJavaDelegate}"></serviceTask>

    <serviceTask id="shoeTask" name="Insert Shoe Task" activiti:delegateExpression="${ShoeJavaDelegate}"></serviceTask>
    ...
</process>

3. The abstract class is now changed to extend BaseJavaDelegate.  The concrete classes provide or override any functionality not defined in the abstract class.

...
import org.activiti.engine.delegate.DelegateExecution;
import org.alfresco.repo.workflow.activiti.BaseJavaDelegate;
...
public abstract class AbstractCherryShoeDelegate extends BaseJavaDelegate {
...
    @Override
    public void execute(DelegateExecution execution) throws Exception {
    ...
    }
...
}

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {
...
}
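For illustration, a minimal sketch of a Spring-managed concrete delegate.  The cherryDao property matches the Spring configuration above, but the insertCherry method and the work done in execute() are hypothetical:

import org.activiti.engine.delegate.DelegateExecution;

import com.cherryshoe.database.dao.CherryDao;

public class CherryJavaDelegate extends AbstractCherryShoeDelegate {

    // Injected by Spring (see the <property name="cherryDao"> element above).
    // The delegate holds no mutable state besides injected singletons, so the
    // single shared instance stays thread-safe.
    private CherryDao cherryDao;

    public void setCherryDao(CherryDao cherryDao) {
        this.cherryDao = cherryDao;
    }

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        // Hypothetical DAO call that writes to the custom database table;
        // only local variables and injected beans are touched here.
        cherryDao.insertCherry(execution.getProcessInstanceId());
    }
}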

For more examples and ideas, I encourage you to explore the links provided throughout this blog. Also take a look at Activiti’s user guide, particularly the Java Service Task Implementation section. What questions do you have about this post? Let me know in the comments section below, and I will answer each one.

The blog Spring Managed Alfresco Custom Activiti Java Delegates was originally posted on cherryshoe.blogspot.com.

Initial Thoughts on Amazon S3 and DynamoDB

I’ve been tinkering with Amazon S3 and DynamoDB to get exposure to NoSQL databases.  I haven’t had the need to get down to the nitty gritty, so I am not managing REST (or SOAP) calls myself; I’ve just been using the AWS SDK for Java.  I am writing this post to gather the initial thoughts I have so far.

I wanted to learn more about AWS because, as Amazon says, they want to “enable developers to focus on innovating with data, rather than how to store it”.  You don’t have the pain of designing the infrastructure needs of the whole system up front, or later when you need performance and reliability.  S3 boasts 99.999999999% durability and 99.99% availability, achieved by replicating data across several facilities in a region.  It scales automatically, you don’t do anything, and it remains available as it’s doing that under the covers.  Amazon has a great ‘Free Usage Tier’ for folks like me (and you) who are just starting out and want to get hands-on experience.  Not all of the services are offered in the free usage tier (S3 and DynamoDB are), so take a look!

Summaries

Security credentials

There is an AWS root account credential.  One of the first things you should do is create a user for yourself and assign it to the admin group.  Never use your AWS root account credential directly!  You can then create IAM (Identity and Access Management) users depending on the application’s needs, creating groups with logical functions and assigning users to those groups.

Amazon S3

  • You literally don’t have to configure anything to get started: you have a key, you upload your value, and you just keep storing things in S3.  You would still need some way to keep track of which keys you are using for your specific application
  • Doesn’t support object locking; you’ll need to build this manually if it’s needed
  • Max object size is 5TB, with no limit on the number of objects you can store
  • Uses the eventual consistency model

DynamoDB

  • The minimum to configure when creating your tables is to specify the table primary key and to provision the read and write throughput needed
  • Supports optimistic locking via conditional writes
  • Max item size is 64KB, with no limit on the number of attributes for an item
  • Supports either the eventually consistent or the strongly consistent read model (see the sketch below)
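
A minimal sketch of that last bullet, using the same SDK classes as the tests below; the table name and item key match the DynamoDbTest data, but the helper class itself is made up for illustration:

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;

public class ConsistentReadExample {

    public static GetItemResult readItem(AmazonDynamoDBClient client, boolean strong) {
        Map<String, AttributeValue> key = new HashMap<String, AttributeValue>();
        key.put("Id", new AttributeValue().withN("101"));

        // withConsistentRead(true) requests a strongly consistent read; the
        // default (false) is an eventually consistent read, which consumes
        // half the read capacity of a strongly consistent one.
        GetItemRequest request = new GetItemRequest()
                .withTableName("Judy")
                .withKey(key)
                .withConsistentRead(strong);
        return client.getItem(request);
    }
}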

When to use S3 versus DynamoDB?

S3 is for larger data that you are rarely going to touch (like storing data for analysis or backup/archiving), whereas DynamoDB is better suited for data that is more dynamic.  As we already discussed for both, you can start with a small amount of data and scale up and down as requirements change.  For DynamoDB, you would also need to adjust your read and write capacities.  In your custom application, you can also use a mix of both, or use other Amazon services simultaneously.

The storage for S3 and DynamoDB is relatively cheap, and it only gets cheaper.  So if you are developing an app for your local, dev, test, or pre-prod environments, you may just want to go ahead and use separate instances of the services.  Or you can use a sandbox tool like the ZCloud AWS sandbox for S3.

Now what, want to start playing around?
Get the AWS SDK for Java:

Some sample code:

Next in line on NoSQL databases are Apache Cassandra and MongoDB; I’ll have more thoughts on those in the coming months.

For more examples, ideas, and inspiration, feel free to read through the links provided in the “Now what, want to start playing around” section.  What questions do you have about this post? Let me know in the comments section below, and I will answer each one.

***********

For details on the code that I played around with: I grabbed the AWS SDK for Java with Maven and did everything in JUnit integration tests.  The starting point for most of this code was the sample code links above.

Note: You’ll see that in all test classes, the last thing done is to delete all the objects that were created.  We’re not using much data here, and it’s the Free Tier, but it doesn’t hurt to be safe!

Credentials

Placed on the classpath under src/main/resources/AwsCredentials.properties (use one of your IAM accounts here, not your root credentials!)
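
For reference, the ClasspathPropertiesFileCredentialsProvider used in the test base classes below reads accessKey and secretKey entries from that file; the values here are placeholders:

accessKey=<your IAM access key id>
secretKey=<your IAM secret access key>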

pom.xml

<!-- For Amazon Java SDK -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>${awsjavasdk.version}</version>
</dependency>


S3 – BaseTestCase.java

package com.cherryshoe.services.sdk.s3;

import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class BaseTestCase {

    private AmazonS3 s3Service;
    private String bucketPrepend = "cherryshoe";

    @Rule public TestName name = new TestName();

    public BaseTestCase()
    {
        super();
        this.s3Service = new AmazonS3Client(new ClasspathPropertiesFileCredentialsProvider());
        Region region = Region.getRegion(Regions.US_WEST_2);
        s3Service.setRegion(region);
    }

    public AmazonS3 getS3Service() {
        return s3Service;
    }    

    public String getBucketPrepend() {
        return bucketPrepend;
    }

    @Before
    public void printBeforeTestRun() throws Exception
    {
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println("Starting Test: " + name.getMethodName());
        System.out.println("-------------------------------------------------------------------------------------");
    }

    @After
    public void printAfterTestRun() throws Exception
    {
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println("Finished Test: " + name.getMethodName());
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println();
    }

    protected void deleteAllObjectsInAllBuckets() throws Exception {

        System.out.println("Starting deleting all objects in all buckets");
        for (Bucket bucket : getS3Service().listBuckets()) {
            String bucketName = bucket.getName();            

            ListObjectsRequest getObjectsRequest = new ListObjectsRequest();
            getObjectsRequest.setBucketName(bucketName);
            ObjectListing objectListing = getS3Service().listObjects(
                    getObjectsRequest);

            System.out.println("Deleting objects from bucket[" + bucketName + "]");
            for (S3ObjectSummary objectSummary : objectListing
                    .getObjectSummaries()) {
                System.out.println("Deleting object with key["
                        + objectSummary.getKey() + "]");
                getS3Service().deleteObject(bucketName, objectSummary.getKey());
            }

            deleteBucket(bucketName);
        }

    }

    protected void createBucket(String bucketName) throws Exception {
        // create bucket
        System.out.println("Creating bucket[" + bucketName +"]");
        getS3Service().createBucket(bucketName);
    }

    protected boolean isBucketExists(String bucketName) throws Exception {
        System.out.println("Listing buckets...");
        boolean foundBucket = false;
        for (Bucket bucket : getS3Service().listBuckets()) {
            String name = bucket.getName();
            System.out.println(name);
            if (name.equals(bucketName))
                foundBucket = true;
        }    
        return foundBucket;
    }

    private void deleteBucket(String bucketName) throws Exception {
        System.out.println("Deleting bucket[" + bucketName + "]");
        getS3Service().deleteBucket(bucketName);
    }

}


CreateObjectTest.java

package com.cherryshoe.services.sdk.s3;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import org.junit.After;
import org.junit.Assert;
import org.junit.Test;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectInputStream;
import com.cherryshoe.services.web.Utils;

public class CreateObjectTest extends BaseTestCase {

    @After
    public void cleanUp() throws Exception {
        // delete
        deleteAllObjectsInAllBuckets();
    }

    @Test
    public void createBucketAndObject() throws Exception {
        String bucketName = getBucketPrepend() + Utils.get10DigitUniqueId();
        createBucket(bucketName);
        Assert.assertTrue(isBucketExists(bucketName));

        /*
         * Upload an object to your bucket - You can easily upload a file to S3,
         * or upload directly an InputStream if you know the length of the data
         * in the stream. You can also specify your own metadata when uploading
         * to S3, which allows you set a variety of options like content-type
         * and content-encoding, plus additional metadata specific to your
         * applications.
         */
        String key = Utils.get10DigitUniqueId();
        System.out.println("key of object to create[" + key + "]");
        String pathToSourceFile = "./src/test/resources/1pager.tif";
        File fileData = new File(pathToSourceFile);
        Assert.assertTrue(fileData.exists());

        PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName,
                key, fileData);
        PutObjectResult putObjectResult = getS3Service().putObject(
                putObjectRequest);
    }

    /*
     * S3 you have to copy the object to replace metadata on it....
     * https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjectsExamples.html
     */
    @Test
    public void copyObject() throws Exception {

        try {
            String bucketName = getBucketPrepend() + Utils.get10DigitUniqueId();
            createBucket(bucketName);
            Assert.assertTrue(isBucketExists(bucketName));

            /*
             * Upload an object to your bucket - You can easily upload a file to
             * S3, or upload directly an InputStream if you know the length of
             * the data in the stream. You can also specify your own metadata
             * when uploading to S3, which allows you set a variety of options
             * like content-type and content-encoding, plus additional metadata
             * specific to your applications.
             */
            String key = Utils.get10DigitUniqueId();
            System.out.println("key of object to create[" + key + "]");
            String pathToSourceFile = "./src/test/resources/1pager.tif";
            File fileData = new File(pathToSourceFile);
            Assert.assertTrue(fileData.exists());

            ObjectMetadata objectMetadata = new ObjectMetadata();
            String metadataKey1 = "AppSpecific1";
            String metadataVal1 = "AppSpecificMetadata1";
            objectMetadata.addUserMetadata(metadataKey1, metadataVal1);

            PutObjectRequest putObjectRequest = new PutObjectRequest(
                    bucketName, key, fileData).withMetadata(objectMetadata);
            PutObjectResult putObjectResult = getS3Service().putObject(
                    putObjectRequest);

            /*
             * Download an object - When you download an object, you get all of
             * the object's metadata and a stream from which to read the
             * contents. It's important to read the contents of the stream as
             * quickly as possibly since the data is streamed directly from
             * Amazon S3 and your network connection will remain open until you
             * read all the data or close the input stream.
             * 
             * GetObjectRequest also supports several other options, including
             * conditional downloading of objects based on modification times,
             * ETags, and selectively downloading a range of an object.
             */
            System.out.println("Downloading object[" + key + "]");

            GetObjectRequest getObjectRequest = new GetObjectRequest(
                    bucketName, key);
            Assert.assertNotNull(getObjectRequest);
            S3Object s3Object = getS3Service().getObject(getObjectRequest);
            Assert.assertNotNull(s3Object);

            System.out.println("Getting metadata[" + key + "]");
            System.out.println("Content-Type: "
                    + s3Object.getObjectMetadata().getContentType());
            System.out.println("VersionId: "
                    + s3Object.getObjectMetadata().getVersionId());

            // ha! the metadata keys get lowercased on S3 side
            String metadataVal1Ret = s3Object.getObjectMetadata()
                    .getUserMetadata().get(metadataKey1.toLowerCase());
            System.out.println(metadataKey1.toLowerCase() + ": "
                    + metadataVal1Ret);
            Assert.assertEquals(metadataVal1, metadataVal1Ret);

            //////////////////////////
            // Copying object, update the metadata
            //////////////////////////
            objectMetadata = new ObjectMetadata();
            String metadataKey2 = "AppSpecific2";
            String metadataVal2 = "AppSpecificMetadata2";
            objectMetadata.addUserMetadata(metadataKey2, metadataVal2);

            String newKey = Utils.get9DigitUniqueId();
            System.out.println("key of object to copy[" + newKey + "]");
            CopyObjectRequest copyObjRequest = new CopyObjectRequest(
                    bucketName, key, bucketName, newKey).withNewObjectMetadata(objectMetadata);
            System.out.println("Copying object.");
            getS3Service().copyObject(copyObjRequest);

            // get object
            getObjectRequest = new GetObjectRequest(
                    bucketName, newKey);
            Assert.assertNotNull(getObjectRequest);
            s3Object = getS3Service().getObject(getObjectRequest);
            Assert.assertNotNull(s3Object);

            // compare the copy contents is the same as what was uploaded
            System.out.println("Getting content[" + newKey + "]");
            S3ObjectInputStream s3ObjectInputStream = s3Object
                    .getObjectContent();
            boolean deleteNewFile = false;
            boolean filesEqual = validateObject(s3ObjectInputStream,
                    deleteNewFile, pathToSourceFile);
            Assert.assertTrue(filesEqual);

            System.out.println("Getting metadata[" + newKey + "]");
            System.out.println("Content-Type: "
                    + s3Object.getObjectMetadata().getContentType());
            System.out.println("VersionId: "
                    + s3Object.getObjectMetadata().getVersionId());

            // ha! the metadata keys get lowercased on S3 side
            String metadataVal2Ret = s3Object.getObjectMetadata()
                    .getUserMetadata().get(metadataKey2.toLowerCase());
            System.out.println(metadataKey2.toLowerCase() + ": "
                    + metadataVal2Ret);
            Assert.assertEquals(metadataVal2, metadataVal2Ret);

        } catch (AmazonServiceException ase) {
            ase.printStackTrace();
            System.out
                    .println("Caught an AmazonServiceException, which means your request made it "
                            + "to Amazon S3, but was rejected with an error response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        }

    }

    @Test
    public void createObjectTest() throws Exception {
        try {
            String bucketName = getBucketPrepend() + Utils.get10DigitUniqueId();
            createBucket(bucketName);
            Assert.assertTrue(isBucketExists(bucketName));

            /*
             * Upload an object to your bucket - You can easily upload a file to
             * S3, or upload directly an InputStream if you know the length of
             * the data in the stream. You can also specify your own metadata
             * when uploading to S3, which allows you set a variety of options
             * like content-type and content-encoding, plus additional metadata
             * specific to your applications.
             */
            String key = Utils.get10DigitUniqueId();
            System.out.println("key of object to create[" + key + "]");
            String pathToSourceFile = "./src/test/resources/1pager.tif";
            File fileData = new File(pathToSourceFile);
            Assert.assertTrue(fileData.exists());

            ObjectMetadata objectMetadata = new ObjectMetadata();
            String metadataKey1 = "AppSpecific1";
            String metadataVal1 = "AppSpecificMetadata1";
            objectMetadata.addUserMetadata(metadataKey1, metadataVal1);

            PutObjectRequest putObjectRequest = new PutObjectRequest(
                    bucketName, key, fileData).withMetadata(objectMetadata);
            PutObjectResult putObjectResult = getS3Service().putObject(
                    putObjectRequest);

            /*
             * Download an object - When you download an object, you get all of
             * the object's metadata and a stream from which to read the
             * contents. It's important to read the contents of the stream as
             * quickly as possibly since the data is streamed directly from
             * Amazon S3 and your network connection will remain open until you
             * read all the data or close the input stream.
             * 
             * GetObjectRequest also supports several other options, including
             * conditional downloading of objects based on modification times,
             * ETags, and selectively downloading a range of an object.
             */
            System.out.println("Downloading object[" + key + "]");

            GetObjectRequest getObjectRequest = new GetObjectRequest(
                    bucketName, key);
            Assert.assertNotNull(getObjectRequest);
            S3Object s3Object = getS3Service().getObject(getObjectRequest);
            Assert.assertNotNull(s3Object);

            // compare the contents is the same as what was uploaded
            System.out.println("Getting content[" + key + "]");
            S3ObjectInputStream s3ObjectInputStream = s3Object
                    .getObjectContent();
            boolean deleteNewFile = false;
            boolean filesEqual = validateObject(s3ObjectInputStream,
                    deleteNewFile, pathToSourceFile);
            Assert.assertTrue(filesEqual);

            System.out.println("Getting metadata[" + key + "]");
            System.out.println("Content-Type: "
                    + s3Object.getObjectMetadata().getContentType());
            System.out.println("VersionId: "
                    + s3Object.getObjectMetadata().getVersionId());

            // ha! the metadata keys get lowercased on S3 side
            String metadataVal1Ret = s3Object.getObjectMetadata()
                    .getUserMetadata().get(metadataKey1.toLowerCase());
            System.out.println(metadataKey1.toLowerCase() + ": "
                    + metadataVal1Ret);
            Assert.assertEquals(metadataVal1, metadataVal1Ret);

        } catch (AmazonServiceException ase) {
            ase.printStackTrace();
            System.out
                    .println("Caught an AmazonServiceException, which means your request made it "
                            + "to Amazon S3, but was rejected with an error response for some reason.");
            System.out.println("Error Message:    " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code:   " + ase.getErrorCode());
            System.out.println("Error Type:       " + ase.getErrorType());
            System.out.println("Request ID:       " + ase.getRequestId());
        }
    }

    protected boolean validateObject(InputStream inputStream,
            boolean deleteNewFile, String pathToSourceFile) throws Exception {

        String pathRetrievedFile = writeFile(inputStream, pathToSourceFile);

        // if we got here, file was written successfully
        // so check if file sizes are different. If they are, then document was
        // not retrieved successfully.
        File sourceFile = new File(pathToSourceFile);
        File retrievedFile = new File(pathRetrievedFile);

        boolean filesEqual = false;
        try {
            filesEqual = sourceFile.length() == retrievedFile.length();
        } finally {
            if (deleteNewFile) {
                // delete retrieved file
                retrievedFile.delete();
            }
        }

        return filesEqual;
    }

    protected String writeFile(InputStream inputStream, String pathToSourceFile)
            throws Exception {
        String pathToBinaryFile = "./src/test/resources/s3ObjectFile"
                + Utils.getRandomId() + ".tif";

        try {
            // Read the binary data and write to file

            File file = new File(pathToBinaryFile);

            OutputStream output = new FileOutputStream(file);

            byte[] buffer = new byte[8 * 1024];

            int bytesRead;

            try {
                while ((bytesRead = inputStream.read(buffer)) != -1) {
                    output.write(buffer, 0, bytesRead);
                }
            } finally {

                // Closing the input stream will trigger connection release
                inputStream.close();

                // close file
                output.close();
            }
        } catch (IOException ex) {
            // In case of an IOException the connection will be released
            // back to the connection manager automatically
            throw ex;
        }

        return pathToBinaryFile;
    }

}


DynamoDB

P.S. – I haven’t had time to look at the DynamoDBMapper annotations yet.

BaseTestCase.java

package com.cherryshoe.services.sdk.dynamodb;

import org.junit.After;
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TestName;

import com.amazonaws.auth.ClasspathPropertiesFileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;

public class BaseTestCase {

    private AmazonDynamoDBClient client;

    @Rule public TestName name = new TestName();

    public BaseTestCase()
    {
        super();
        this.client = new AmazonDynamoDBClient(new ClasspathPropertiesFileCredentialsProvider());
        Region region = Region.getRegion(Regions.US_WEST_2);
        client.setRegion(region);
    }

    public AmazonDynamoDBClient getClient() {
        return client;
    }    

    @Before
    public void printBeforeTestRun() throws Exception
    {
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println("Starting Test: " + name.getMethodName());
        System.out.println("-------------------------------------------------------------------------------------");
    }

    @After
    public void printAfterTestRun() throws Exception
    {
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println("Finished Test: " + name.getMethodName());
        System.out.println("-------------------------------------------------------------------------------------");
        System.out.println();
    }

}


DynamoDbTest.java

package com.cherryshoe.services.sdk.dynamodb;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.Before;
import org.junit.FixMethodOrder;
import org.junit.Test;
import org.junit.runners.MethodSorters;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.services.dynamodbv2.model.DeleteTableRequest;
import com.amazonaws.services.dynamodbv2.model.DeleteTableResult;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.CreateTableResult;
import com.amazonaws.services.dynamodbv2.model.DeleteItemRequest;
import com.amazonaws.services.dynamodbv2.model.DeleteItemResult;
import com.amazonaws.services.dynamodbv2.model.DescribeTableRequest;
import com.amazonaws.services.dynamodbv2.model.ExpectedAttributeValue;
import com.amazonaws.services.dynamodbv2.model.GetItemRequest;
import com.amazonaws.services.dynamodbv2.model.GetItemResult;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;
import com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException;
import com.amazonaws.services.dynamodbv2.model.ReturnValue;
import com.amazonaws.services.dynamodbv2.model.TableDescription;
import com.amazonaws.services.dynamodbv2.model.TableStatus;

// this makes sure the tests run in order, yeah I know I shouldn't count on the order that
// the tests are run in for good tests...
@FixMethodOrder(MethodSorters.NAME_ASCENDING) 
public class DynamoDbTest extends BaseTestCase {

    private String tableName = "Judy";
    private List<String> idList;

    @Before
    public void setUp() {
        idList = new ArrayList<String>();
    }

    /*
     * Haven't gotten to using the DynamoDb annotations yet...
     */
    @Test
    public void test1_createTableTest() {

        // create the table
        ArrayList<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
        attributeDefinitions.add(new AttributeDefinition().withAttributeName(
                "Id").withAttributeType("N"));
        // Note:  You don't have to define all the columns at create time, just the key.  It's like an aspect where you can add the columns later

        ArrayList<KeySchemaElement> ks = new ArrayList<KeySchemaElement>();
        ks.add(new KeySchemaElement().withAttributeName("Id").withKeyType(
                KeyType.HASH));

        ProvisionedThroughput provisionedThroughput = new ProvisionedThroughput()
                .withReadCapacityUnits(10L).withWriteCapacityUnits(5L);

        CreateTableRequest request = new CreateTableRequest()
                .withTableName(tableName)
                .withAttributeDefinitions(attributeDefinitions)
                .withKeySchema(ks)
                .withProvisionedThroughput(provisionedThroughput);

        CreateTableResult result = getClient().createTable(request);

        waitForTableToBecomeAvailable(tableName);

    }

    @Test
    public void test2_createRecordsTest() {
        uploadSampleProducts(tableName);
    }

    @Test
    public void test3_retrieveItem() {
        String id = "101";

        retrieveItem(id);
    }

    @Test
    public void test4_deleteTest() {
        // if the list is empty populate it
        if (getIdList().isEmpty()) {
            getIdList().add("101");
            getIdList().add("102");
            getIdList().add("103");
            getIdList().add("201");
            getIdList().add("202");
            getIdList().add("203");
            getIdList().add("204");
            getIdList().add("205");
        }

        // delete items
        deleteItems(tableName);

        // delete table
        DeleteTableRequest deleteTableRequest = new DeleteTableRequest()
        .withTableName(tableName);
        DeleteTableResult result = getClient().deleteTable(deleteTableRequest);
        waitForTableToBeDeleted(tableName);  
    }

    private void retrieveItem(String id) {
        try {

            HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>();
            key.put("Id", new AttributeValue().withN(id));
            GetItemRequest getItemRequest = new GetItemRequest()
                .withTableName(tableName)
                .withKey(key)
                .withAttributesToGet(Arrays.asList("Id", "ISBN", "Title", "Authors"));

            GetItemResult result = getClient().getItem(getItemRequest);

            // Check the response.
            System.out.println("Printing item after retrieving it....");
            printItem(result.getItem());            

        }  catch (AmazonServiceException ase) {
                    System.err.println("Failed to retrieve item in " + tableName);
        }   

    }

    private void deleteItems(String tableName) {
        try {
            for (String id : getIdList()) {
                Map<String, ExpectedAttributeValue> expectedValues = new HashMap<String, ExpectedAttributeValue>();
                HashMap<String, AttributeValue> key = new HashMap<String, AttributeValue>();
                key.put("Id", new AttributeValue().withN(id));

                //    can add more expected values if you want

                ReturnValue returnValues = ReturnValue.ALL_OLD;

                DeleteItemRequest deleteItemRequest = new DeleteItemRequest()
                    .withTableName(tableName)
                    .withKey(key)
                    .withExpected(expectedValues)
                    .withReturnValues(returnValues);

                DeleteItemResult result = getClient().deleteItem(deleteItemRequest);

                // if the item was available to be deleted
                if (result.getAttributes() != null) {
                    // Check the response.
                    System.out.println("Printing item that was deleted...");
                    printItem(result.getAttributes());
                }

            }        
        }  catch (AmazonServiceException ase) {
                                System.err.println("Failed to get item after deletion " + tableName);
        } 
    }

    private void waitForTableToBeDeleted(String tableName) {
        System.out.println("Waiting for " + tableName + " while status DELETING...");

        long startTime = System.currentTimeMillis();
        long endTime = startTime + (10 * 60 * 1000);
        while (System.currentTimeMillis() < endTime) {
            try {
                DescribeTableRequest request = new DescribeTableRequest().withTableName(tableName);
                TableDescription tableDescription = getClient().describeTable(request).getTable();
                String tableStatus = tableDescription.getTableStatus();
                System.out.println("  - current state: " + tableStatus);
                // keep polling while the table still exists (ACTIVE or
                // DELETING); the ResourceNotFoundException below signals
                // that the delete finished
            } catch (ResourceNotFoundException e) {
                System.out.println("Table " + tableName + " is not found. It was deleted.");
                return;
            }
            try {Thread.sleep(1000 * 20);} catch (Exception e) {}
        }
        throw new RuntimeException("Table " + tableName + " was never deleted");
    }

    private void printItem(Map<String, AttributeValue> attributeList) {
        for (Map.Entry<String, AttributeValue> item : attributeList.entrySet()) {
            String attributeName = item.getKey();
            AttributeValue value = item.getValue();
            System.out.println(attributeName + " "
                    + (value.getS() == null ? "" : "S=[" + value.getS() + "]")
                    + (value.getN() == null ? "" : "N=[" + value.getN() + "]")
                    + (value.getB() == null ? "" : "B=[" + value.getB() + "]")
                    + (value.getSS() == null ? "" : "SS=[" + value.getSS() + "]")
                    + (value.getNS() == null ? "" : "NS=[" + value.getNS() + "]")
                    + (value.getBS() == null ? "" : "BS=[" + value.getBS() + "] \n"));
        }
    }

    private void uploadSampleProducts(String tableName) {

        try {
            // Add books.
            Map<String, AttributeValue> item = new HashMap<String, AttributeValue>();
            item.put("Id", new AttributeValue().withN("101"));
            item.put("Title", new AttributeValue().withS("Book 101 Title"));
            item.put("ISBN", new AttributeValue().withS("111-1111111111"));
            item.put("Authors",
                    new AttributeValue().withSS(Arrays.asList("Author1")));
            item.put("Price", new AttributeValue().withN("2"));
            item.put("Dimensions",
                    new AttributeValue().withS("8.5 x 11.0 x 0.5"));
            item.put("PageCount", new AttributeValue().withN("500"));
            item.put("InPublication", new AttributeValue().withN("1"));
            item.put("ProductCategory", new AttributeValue().withS("Book"));

            PutItemRequest itemRequest = new PutItemRequest().withTableName(
                    tableName).withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("101");

            item.put("Id", new AttributeValue().withN("102"));
            item.put("Title", new AttributeValue().withS("Book 102 Title"));
            item.put("ISBN", new AttributeValue().withS("222-2222222222"));
            item.put("Authors", new AttributeValue().withSS(Arrays.asList(
                    "Author1", "Author2")));
            item.put("Price", new AttributeValue().withN("20"));
            item.put("Dimensions",
                    new AttributeValue().withS("8.5 x 11.0 x 0.8"));
            item.put("PageCount", new AttributeValue().withN("600"));
            item.put("InPublication", new AttributeValue().withN("1"));
            item.put("ProductCategory", new AttributeValue().withS("Book"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("102");

            item.put("Id", new AttributeValue().withN("103"));
            item.put("Title", new AttributeValue().withS("Book 103 Title"));
            item.put("ISBN", new AttributeValue().withS("333-3333333333"));
            item.put("Authors", new AttributeValue().withSS(Arrays.asList(
                    "Author1", "Author2")));
            // Intentional. Later we run scan to find price error. Find items >
            // 1000 in price.
            item.put("Price", new AttributeValue().withN("2000"));
            item.put("Dimensions",
                    new AttributeValue().withS("8.5 x 11.0 x 1.5"));
            item.put("PageCount", new AttributeValue().withN("600"));
            item.put("InPublication", new AttributeValue().withN("0"));
            item.put("ProductCategory", new AttributeValue().withS("Book"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("103");

            // Add bikes.
            item.put("Id", new AttributeValue().withN("201"));
            item.put("Title", new AttributeValue().withS("18-Bike-201")); // Size,
                                                                            // followed
                                                                            // by
                                                                            // some
                                                                            // title.
            item.put("Description",
                    new AttributeValue().withS("201 Description"));
            item.put("BicycleType", new AttributeValue().withS("Road"));
            item.put("Brand", new AttributeValue().withS("Mountain A")); // Trek,
                                                                            // Specialized.
            item.put("Price", new AttributeValue().withN("100"));
            item.put("Gender", new AttributeValue().withS("M")); // Men's
            item.put("Color",
                    new AttributeValue().withSS(Arrays.asList("Red", "Black")));
            item.put("ProductCategory", new AttributeValue().withS("Bicycle"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("201");

            item.put("Id", new AttributeValue().withN("202"));
            item.put("Title", new AttributeValue().withS("21-Bike-202"));
            item.put("Description",
                    new AttributeValue().withS("202 Description"));
            item.put("BicycleType", new AttributeValue().withS("Road"));
            item.put("Brand", new AttributeValue().withS("Brand-Company A"));
            item.put("Price", new AttributeValue().withN("200"));
            item.put("Gender", new AttributeValue().withS("M"));
            item.put("Color", new AttributeValue().withSS(Arrays.asList(
                    "Green", "Black")));
            item.put("ProductCategory", new AttributeValue().withS("Bicycle"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("202");

            item.put("Id", new AttributeValue().withN("203"));
            item.put("Title", new AttributeValue().withS("19-Bike-203"));
            item.put("Description",
                    new AttributeValue().withS("203 Description"));
            item.put("BicycleType", new AttributeValue().withS("Road"));
            item.put("Brand", new AttributeValue().withS("Brand-Company B"));
            item.put("Price", new AttributeValue().withN("300"));
            item.put("Gender", new AttributeValue().withS("W")); // Women's
            item.put("Color", new AttributeValue().withSS(Arrays.asList("Red",
                    "Green", "Black")));
            item.put("ProductCategory", new AttributeValue().withS("Bicycle"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("203");

            item.put("Id", new AttributeValue().withN("204"));
            item.put("Title", new AttributeValue().withS("18-Bike-204"));
            item.put("Description",
                    new AttributeValue().withS("204 Description"));
            item.put("BicycleType", new AttributeValue().withS("Mountain"));
            item.put("Brand", new AttributeValue().withS("Brand-Company B"));
            item.put("Price", new AttributeValue().withN("400"));
            item.put("Gender", new AttributeValue().withS("W"));
            item.put("Color", new AttributeValue().withSS(Arrays.asList("Red")));
            item.put("ProductCategory", new AttributeValue().withS("Bicycle"));

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);
            item.clear();
            getIdList().add("204");

            item.put("Id", new AttributeValue().withN("205"));
            item.put("Title", new AttributeValue().withS("20-Bike-205"));
            item.put("Description",
                    new AttributeValue().withS("205 Description"));
            item.put("BicycleType", new AttributeValue().withS("Hybrid"));
            item.put("Brand", new AttributeValue().withS("Brand-Company C"));
            item.put("Price", new AttributeValue().withN("500"));
            item.put("Gender", new AttributeValue().withS("B")); // Boy's
            item.put("Color",
                    new AttributeValue().withSS(Arrays.asList("Red", "Black")));
            item.put("ProductCategory", new AttributeValue().withS("Bicycle"));
            getIdList().add("205");

            itemRequest = new PutItemRequest().withTableName(tableName)
                    .withItem(item);
            getClient().putItem(itemRequest);

        } catch (AmazonServiceException ase) {
            System.err.println("Failed to create item in " + tableName + " "
                    + ase);
        }

    }

    private void waitForTableToBecomeAvailable(String tableName) {
        System.out.println("Waiting for " + tableName + " to become ACTIVE...");
        long startTime = System.currentTimeMillis();
        long endTime = startTime + (10 * 60 * 1000);
        while (System.currentTimeMillis() < endTime) {
            try {
                Thread.sleep(1000 * 20);
            } catch (Exception e) {
            }
            try {
                DescribeTableRequest request = new DescribeTableRequest()
                        .withTableName(tableName);
                TableDescription tableDescription = getClient().describeTable(
                        request).getTable();
                String tableStatus = tableDescription.getTableStatus();
                System.out.println("  - current state: " + tableStatus);
                if (tableStatus.equals(TableStatus.ACTIVE.toString()))
                    return;
            } catch (AmazonServiceException ase) {
                if (ase.getErrorCode().equalsIgnoreCase(
                        "ResourceNotFoundException") == false)
                    throw ase;
            }
        }
        throw new RuntimeException("Table " + tableName + " never went active");
    }

    public List<String> getIdList() {
        return idList;
    }    

}


The blog “Initial Thoughts on Amazon S3 and DynamoDB” was originally posted on cherryshoe.blogspot.com.

Alfresco Auto-Update Failed Error Message

In my last blog, I introduced a complete backup and restore strategy for Alfresco and a custom application that has synchronized data (Alfresco holds repository documents, Application DB holds application specific data).

PROBLEM BACKGROUND:

Another issue I ran into during that mini-project: often, when starting Alfresco 4.1.2 in a clustered environment, I would see the following in the tomcat log:

ERROR [web.context.ContextLoader] [main] Context initialization failed
 org.alfresco.error.AlfrescoRuntimeException: 09100000 Schema auto-update failed
        at org.alfresco.repo.domain.schema.SchemaBootstrap.onBootstrap(SchemaBootstrap.java:1671)
        at org.springframework.extensions.surf.util.AbstractLifecycleBean.onApplicationEvent(AbstractLifecycleBean.java:56)

The key issue in the above error is “Schema auto-update failed”.  I didn’t find much that could help me on internet forums, so I wanted to share what worked.  It turns out that error happens because, when the Alfresco database schema is restored, Alfresco thinks that the database schema has been updated.  I cannot completely explain this nuance, but in this particular case the schema had, in fact, NOT been updated at all; it had just been completely restored from a backup.

SOLUTION:

The clustered Alfresco configuration uses a tomcat datasource, so the following solution “tricked” Alfresco into thinking that the database schema had NOT changed.

  1. For the type=”javax.sql.DataSource” resource, add the attribute schema.update=”false” THE FIRST TIME you start Alfresco after restoring the schema from the backup.
  2. After Alfresco completely starts up successfully, stop it.
  3. Change schema.update=”true” again; since Alfresco has already started up successfully, it no longer thinks the database got updated.
  4. Start Alfresco again.

Details below…

  • Configure datasource properties in <tomcathome>/conf/server.xml.

Inside the <GlobalNamingResources> element, add the following <Resource>.
Keep each attribute on its own line.

 <Resource auth="Container"
   defaultAutoCommit="false"
   defaultReadOnly="false"
   debug="10"
          driverClassName="oracle.jdbc.OracleDriver"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
   fairQueue="false"
   initSQL="alter session set nls_timestamp_tz_format = 'DD-MON-RR HH24.MI.SSXFF TZR'"
   initialSize="3"
   jdbcInterceptors="ConnectionState;StatementFinalizer"
   jmxEnabled="true"
   logAbandoned="false"
   maxActive="15"
   maxIdle="10"
   maxWait="10000"
   minEvictableIdleTimeMillis="5000"
   minIdle="5"
   name="alfresco-datasource"
   password="<db_password>"
   removeAbandoned="true"
   removeAbandonedTimeout="60"
          testOnBorrow="true"
   testOnReturn="false"
   testWhileIdle="true"
   timeBetweenEvictionRunsMillis="7000"
   schema.update="<true|false>" type="javax.sql.DataSource"
   url="jdbc:oracle:thin:@127.0.0.1:1521:XE"
   useEquals="false"
   username="<db_username>"
   validationInterval="30000"
   validationQuery="select 1 from dual"/>

The following were also configured…


– Configure the resource link in <tomcathome>/conf/context.xml.
Before the </Context> element, add the following.
Keep each attribute on its own line.

    <ResourceLink global="alfresco-datasource"
                  name="jdbc/dataSource"
                  type="org.apache.tomcat.jdbc.pool.DataSource"/>

– Since we’re running Tomcat 6, we need to add tomcat-jdbc.jar to the <tomcathome>/lib folder.


** UPDATE:  Since we are not upgrading the Alfresco version, you can safely always set schema.update=”false” in server.xml until an upgrade actually needs to be performed.  Reference https://wiki.alfresco.com/wiki/Schema_Upgrade_Scripts.

Top Three Tips on User Training

“Go forth and use this new product!”  I feel as if this happens more often than not: an existing product is re-developed with little thought given to the user who has to use the new product.  I was on a project recently where this did not happen, and I’d like to share that with you.

Re-developing an existing application is almost always meant to make someone’s life easier, to improve the user experience.  But if the user is not trained properly (or if the new system is too cumbersome to use), it’s going to be a difficult transition.  We don’t want that.

My client’s PM wanted to help address this common issue, that user training is a must, so I was invited last week to train users of a system I helped build that had recently been deployed to production.  I asked my colleague AJ McClary, who has been through numerous customer engagements in this area, to give me his top three “words of advice”:

(more…)