
Documentum Performance Enhancers – They’re not just for athletes anymore

by Rahul Raina | May 24, 2009 | Content Server, Documentum Development

Good day, all. For all who are celebrating Memorial Day, we wish you the best & pass along our respects to our servicemen and women of past & present.

Today’s topic covers some simple advice on how to extract more performance from your Content Server just by changing the way you ask it questions in your code. Documentum offers plenty of advice on how to trim & tune your Content Servers, but the simplest tip, and the one that goes the longest way, is: use DQL.

DQL? But we are in Java land now. Java is object oriented, and object orientation is power. Why would we cast aside our right to exercise object-oriented programming and go back to the stone-age days of relational query languages? Speed. DQL queries consistently outperform DFC fetches in nearly every practical application.

Using DFC natively to operate on objects often incurs a full-fetch or docbase consistency-check penalty at the time the object is retrieved. Practically speaking, it fetches extra data that you have no intention of using or changing. (Do you ALWAYS want to know the last user who saved the document? Or its content type? Or, better yet, its filestore location index?)

By using DQL, we are explicitly instructing Documentum to do two things (a short sketch follows the list):

  1. Only operate on the attributes we provide in the query
  2. Translate our statements quickly into SQL, which the Content Server can do with little more than boiler-plate string manipulation.
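
To make the contrast concrete, here is a minimal DFC sketch of the two approaches. It assumes an already-open IDfSession; the object id, document name, and class/method names are placeholders for illustration, not values from this post:

    import com.documentum.fc.client.DfQuery;
    import com.documentum.fc.client.IDfCollection;
    import com.documentum.fc.client.IDfQuery;
    import com.documentum.fc.client.IDfSession;
    import com.documentum.fc.client.IDfSysObject;
    import com.documentum.fc.common.DfException;
    import com.documentum.fc.common.DfId;

    public class DqlVsFetch {

        // Object-based fetch: materializes the whole object just to read one attribute.
        static String nameByFetch(IDfSession session, String objectId) throws DfException {
            IDfSysObject doc = (IDfSysObject) session.getObject(new DfId(objectId));
            return doc.getObjectName();
        }

        // DQL: the Content Server returns only the attributes we asked for.
        static void namesByDql(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            query.setDQL("SELECT r_object_id, object_name FROM dm_document "
                    + "WHERE object_name = 'Quarterly Report'");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            try {
                while (rows.next()) {
                    String id = rows.getString("r_object_id");
                    String name = rows.getString("object_name");
                    // work with id and name; nothing else was fetched
                }
            } finally {
                rows.close(); // always release the collection
            }
        }
    }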

How much faster? Because the gain stems partly from the fetch penalty and partly from code strategy, it is probably unfair to report any “official” numbers. In my experience, however, operations that took 150ms dropped to about 35ms. That’s it? 115 milliseconds, you say? If you find yourself asking that question, you may not yet have reached a performance threshold where it matters; file this tip away for future reference, to say the least. If you have passed that threshold, consider how many queries you may be incurring PER operation PER user (say, a few fetches to gather data from several object types for each concurrent user), and this simple optimization trick starts to look worthwhile.

After you begin to use DQL, you may discover not only that the DQL operation is faster, but also that you can combine several of your separate fetches into one complex query that joins in the Content Server and database far faster than you could join the results programmatically at the point of DFC consumption. Remember that part of speed is not just choosing faster operations, but choosing smarter strategies.
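
As a hedged illustration of that combining idea (same imports and open session as the sketch above), the snippet below replaces a per-document lookup of the creator’s dm_user record with one server-side join; the attribute names are standard, but the date filter is only an example:

    IDfQuery query = new DfQuery();
    query.setDQL("SELECT d.r_object_id, d.object_name, u.user_address "
            + "FROM dm_document d, dm_user u "
            + "WHERE d.r_creator_name = u.user_name "
            + "AND d.r_creation_date > DATE('01/01/2009')");
    IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
    try {
        while (rows.next()) {
            // each row already carries both the document and its creator's address
            String name = rows.getString("object_name");
            String address = rows.getString("user_address");
        }
    } finally {
        rows.close();
    }

One round trip to the Content Server replaces what would otherwise be a fetch of each document plus a fetch of each dm_user object.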

Thanks for tuning in.

Rahul Raina <rraina@armedia.com>

Comments

  1. Bill

    Good points. We have found that simple API calls will out-perform DQL, depending on the task. For example, when moving content from one store to another, we have found that a script that walks the r_object_ids of the documents and sets the a_storage_type attribute on each dm_sysobject with the API set and save commands actually runs faster than the equivalent DQL, which selects the same objects and updates them. The time to move the content itself is the same. We find this true in “large” repositories. “Past performance is not indicative of future results.” So analyze.
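
    For reference, the per-object approach and the set-based DQL look roughly like this in DFC terms, assuming an open session and an objectId in hand; the store names and the WHERE filter are placeholders for illustration only:

        // Per-object "set and save" (the DFC equivalent of the api set/save calls):
        IDfSysObject obj = (IDfSysObject) session.getObject(new DfId(objectId));
        obj.setString("a_storage_type", "new_filestore");
        obj.save();

        // The set-based alternative, issued once for the whole batch:
        IDfQuery update = new DfQuery();
        update.setDQL("UPDATE dm_document OBJECTS SET a_storage_type = 'new_filestore' "
                + "WHERE a_storage_type = 'old_filestore'");
        IDfCollection result = update.execute(session, IDfQuery.DF_EXEC_QUERY);
        result.close();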

