Go Serverless with AWS Lambda


Yasser Muwakki (Director of Digital Transformation) and Jon Holman (Senior AWS DevOps Engineer)


Introduction


We would like to share with you how we rewrote a service that is part of a larger solution we implemented for one of our clients. This new version of the service maintains the same level of security and performance, and increases the service’s availability and scalability, while reducing the yearly cost from $1,730 to approximately $4.

We did this by moving the service from AWS ECS Fargate to AWS’s Functions as a Service (FaaS) offering, AWS Lambda. FaaS is often the most cost-effective way to utilize cloud computing resources. With the typical cloud compute services, such as EC2 and Fargate, your service needs to be available for potential requests 24 hours a day, and if it is running, you are being charged for it. This is where serverless FaaS sets itself apart. Your code is deployed to the cloud provider and associated with its configured events; until those events occur, it is not running and you are not paying for it. You pay only for the seconds your function actually runs to respond to those events. When the function completes, billing stops, yet the function remains ready to service new requests.

In case these great potential cost savings are not reason enough to consider moving to Lambda, it also brings increased availability and scalability.

For availability, when working with virtual servers or containers, if you want your solution to sustain the loss of an availability zone or data center, you need to run a fully functioning application stack in multiple availability zones. With FaaS or Lambda, your function is not locked into an availability zone; it spins up wherever there is capacity when it is needed, seamlessly to you.

For scalability, if you want your application to support a peak of 10,000 concurrent users using containers or virtual servers, you need to configure auto-scaling groups with scaling rules to start enough servers or containers to handle that many concurrent users. With Lambda, your function is invoked in response to events, and more events simply mean more invocations; there is nothing you need to pre-configure to scale up.

In this blog, our specific use case is a small component in a public-cloud information exchange platform utilized by thousands of users throughout the United States, which leverages Appian for front-end workflow and case management and Alfresco for backend content and records management. This entire system is hosted in Amazon Web Services (AWS). We created Java-based custom microservices to support the integration between these two systems. Currently, all the custom microservices run as docker containers orchestrated by AWS Elastic Container Service (ECS).

One of these microservices is called the token service. A token service’s role is to obfuscate Alfresco Node IDs by mapping them to random hexadecimal strings (called tokens). The token service is used by several dependent applications that all must authenticate to be authorized to use the endpoints. Each token has several permissions attached to it as well, which can restrict what the token can be used for, who can use it, what end-user IP address it can be used from, and how many times it can be used (default of single-use).

Token Service Features

  • Allows systems integrating with Alfresco to obfuscate the Alfresco node to prevent hijacking of the URL or URL sharing
  • Ensures that the only piece of information passed through the browser is a very large random number (the token)
  • External systems can securely control, through the stored token, who the user is, what file they have access to, and what kind of access they have (Download, Preview, Online Edit)
  • Prevents brute-force attacks with token parameters:
    • Expiration date: token is only valid before this date
    • Retries: maximum number of times a token can be reused. By default, the token is for one-time use
    • Credentials: any call to the token service requires HTTP basic authentication

A sample Alfresco Node UUID: f2f42052-6e24-4c54-bc15-1e0bb6dce9f7

A sample token: 2c1d036e41b586309819ec80c9118038178741caa4ed7e6e978145d1bcda95fb
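For illustration, a token of that form (64 hexadecimal characters, i.e. 256 random bits) can be produced with a one-line call; this is a hypothetical sketch, not the service’s actual implementation:

```python
import secrets

token = secrets.token_hex(32)   # 32 random bytes -> 64 hexadecimal characters
print(token)                    # e.g. 2c1d036e41b5...
```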

The following table summarizes the comparison between the current ECS-Fargate token service and the Lambda-based token implementation:

Metric | ECS-Fargate | Lambda (FaaS)
Cost | $433/year per instance; 4 instances = $1,730/year (does not include any application load balancer costs) | API Gateway: $0.35 per month; Lambda: free tier
Performance | 366 ms per call | 283 ms per call
Security | ECR security scan – 1 medium threat on a Python package | LambdaGuard and SonarQube – 100% pass, no security vulnerabilities
Reliability | 0.2% failure rate | 0% failure rate
Splunk integration | Side-car container deployment in ECS | CloudWatch subscription with a Lambda function posting directly to Splunk
Resources | Docker containers, ECS, ALB | Lambda, API Gateway, CloudWatch

As you can see, the Lambda implementation clearly has the advantage, with significantly lower costs. Let’s describe the design and cost breakdown of the Fargate and Lambda implementations in more detail.

ECS-Fargate Token Service Overview

The ECS-Fargate implementation has the following design specifications:

  • Java code
  • Docker container runs Tomcat
  • Orchestrated via ECS
  • Runs using Fargate (serverless)
  • Runs 24 hours per day, 7 days per week
  • Logs integrate with Splunk through a side-car container
  • Fronted by ALB (Application Load Balancer)
  • 3 REST APIs
    • Generate token
    • Use token
    • Version

Fargate costs:

  • per vCPU per hour           $0.04048
  • per GB per hour           $0.004445

To accommodate the number of requests per busy hour, the token service is required to run four containers 24 hours/day with the following ECS Task definition parameters:

  • CPU: 1 vCPU
  • Memory: 2 GB
  • Hourly cost per task: $0.04048 + (2 × $0.004445) = $0.04937
  • Daily cost per task: $0.04937 × 24 = $1.18488
  • Monthly cost per task (30 days): $35.5464
  • 4 tasks (token containers): $142.1856 per month (reproduced in the short calculation below)
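The monthly Fargate figure above can be reproduced with a short calculation (a sketch using the per-hour rates listed earlier; a 30-day month is assumed):

```python
# Fargate rates listed above (per hour)
VCPU_HOUR = 0.04048
GB_HOUR = 0.004445

vcpus, memory_gb, tasks = 1, 2, 4                            # ECS task size and count

hourly_per_task = vcpus * VCPU_HOUR + memory_gb * GB_HOUR    # $0.04937
daily_per_task = hourly_per_task * 24                        # $1.18488
monthly_per_task = daily_per_task * 30                       # $35.5464
monthly_total = monthly_per_task * tasks                     # $142.1856

print(round(monthly_total, 4))
```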

Additional costs:

  • Splunk sidecar container
  • ALB

AWS Lambda Overview

AWS Lambda lets you run code without provisioning or managing servers. The following are some advantages of using Lambda:

  • Low cost: you pay only for the compute time you consume, no more idle servers waiting for requests
  • Improves scalability: functions are run in response to events, more events will trigger more instances of the functions
  • Highly available: not locked into a single data center or availability zone
  • Natively supports Python, Java, Go, PowerShell, Node.js, C#, and Ruby code
  • Runs up to 15 minutes per execution

Lambda (FaaS) Token Service Architecture Diagram

Lambda Token Service overview

The new token service uses serverless technologies, more specifically AWS’s Functions as a Service (FaaS) offering, AWS Lambda. We chose to develop the Lambda functions in Python. The Lambda functions were created in AWS using the AWS SAM extensions to CloudFormation and then attached to HTTP endpoints through API Gateway; the functions run on demand when those endpoints are invoked. To fulfill the requirement that all calls to the token service require basic authentication, we implemented an additional Lambda function associated with API Gateway as an authorizer function. The authorizer function’s purpose is to allow or deny a request to an HTTP endpoint based on a set of criteria, in our case validating an HTTP Basic authentication credential. The authorizer function ultimately returns an IAM policy to API Gateway to allow or deny the request.
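A minimal sketch of what such an API Gateway token authorizer can look like in Python (illustrative only, not the client’s actual code; the credential values shown here are assumptions, and a real implementation would load them from a secret store):

```python
import base64

# Hypothetical credentials; a real authorizer would fetch these from a secret store
EXPECTED_USER = "tokenservice"
EXPECTED_PASSWORD = "change-me"

def handler(event, context):
    """Validate an HTTP Basic auth header and return an IAM policy to API Gateway."""
    allowed = False
    auth_header = event.get("authorizationToken", "")
    if auth_header.startswith("Basic "):
        decoded = base64.b64decode(auth_header[len("Basic "):]).decode("utf-8")
        user, _, password = decoded.partition(":")
        allowed = (user == EXPECTED_USER and password == EXPECTED_PASSWORD)

    return {
        "principalId": EXPECTED_USER if allowed else "anonymous",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow" if allowed else "Deny",
                "Resource": event["methodArn"],
            }],
        },
    }
```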

For log management, our client had standardized on Splunk to aggregate and retain application logs, so we created a solution to send the various token service log entries to Splunk without impacting performance. We achieved this by creating an additional AWS Lambda function, called CloudWatch Logs to Splunk, and subscribing it to each Lambda function’s CloudWatch Log Group. This function is invoked whenever an entry is written to those CloudWatch Log Groups and, in turn, sends that data to Splunk.
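For reference, here is a stripped-down sketch of such a forwarder in Python (the Splunk HTTP Event Collector URL and token environment variables are assumptions for illustration; the actual function’s batching and error handling differ):

```python
import base64
import gzip
import json
import os
import urllib.request

# Hypothetical configuration; the variable names are assumptions for illustration
SPLUNK_HEC_URL = os.environ["SPLUNK_HEC_URL"]      # e.g. https://splunk.example.com:8088/services/collector
SPLUNK_HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]

def handler(event, context):
    """Decode a CloudWatch Logs subscription payload and forward each log entry to Splunk HEC."""
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    log_data = json.loads(payload)

    for log_event in log_data["logEvents"]:
        body = json.dumps({
            "event": log_event["message"],
            "source": log_data["logGroup"],
        }).encode("utf-8")
        request = urllib.request.Request(
            SPLUNK_HEC_URL,
            data=body,
            headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
        )
        urllib.request.urlopen(request)
```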

This table gives a summary of the 5 Lambda functions:

Lambda Function | Trigger | Description
Generate Token | POST /tokenservice/services/api/token | Creates tokens
Use Token | GET /tokenservice/services/api/use | Validates and uses tokens
Version | GET /tokenservice/services/api/version | Returns version information
Authorizer | API Gateway authorizer | Validates HTTP Basic Auth
CloudWatch Logs to Splunk | Subscription to CloudWatch Log Groups | Sends logs to Splunk

In conducting high-concurrency performance tests, we confirmed that each Lambda function worked well with the minimum Lambda memory allocation of 128 MB. Most API calls responded in under 300 milliseconds, except for a handful of requests that took between 1 and 3 seconds due to Lambda cold starts. We estimate that the token service receives approximately 100,000 calls per month.

This table describes our calculations for the expected monthly costs of the new Lambda service.

Note: this is for an AWS account that has been established for over a year, so the “always free” free tier applies, but not the first “12 months free” free tier.

AWS Usage | Monthly Amount | AWS Monthly Price
Lambda requests | 100,000 requests | Free (under 1,000,000 requests)
Lambda compute | 100,000 / (1024 / 128) = 12,500 GB-seconds | Free (under 400,000 GB-seconds)
API Gateway | 100,000 requests | $0.35
Total | | $0.35
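The free-tier arithmetic in the table can be sanity-checked with a similar short calculation (a sketch; the one-second average billed duration per call is the assumption implied by the table’s GB-second figure):

```python
requests_per_month = 100_000
avg_billed_seconds = 1.0            # assumed average billed duration per call
memory_gb = 128 / 1024              # 128 MB allocation

gb_seconds = requests_per_month * avg_billed_seconds * memory_gb   # 12,500 GB-seconds
within_request_free_tier = requests_per_month <= 1_000_000
within_compute_free_tier = gb_seconds <= 400_000

api_gateway_cost = requests_per_month / 1_000_000 * 3.50           # $3.50 per million REST API calls

print(gb_seconds, within_request_free_tier, within_compute_free_tier, round(api_gateway_cost, 2))
```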

 DevOps Approach and Lessons Learned

With any project, you want everything defined as code: Infrastructure as Code (IaC). Doing things by hand is slow, error-prone, inconsistent, not scalable, and not repeatable. Defining everything in our project as code makes the deployment self-documenting, easily repeatable, ready to move into an automated pipeline, and easy to improve iteratively.

For this AWS Lambda project, we chose to use the AWS Serverless Application Model (SAM) framework. AWS SAM is an extension of AWS CloudFormation that makes building serverless projects even more efficient. We then used AWS CodePipeline and AWS CodeBuild to create a Continuous Integration/Continuous Deployment (CI/CD) pipeline for our project.

Conclusion

By rewriting the token service to utilize serverless technologies, we were able to greatly reduce the cost of running the service. The benefits extended beyond cost as well, as we simplified the architecture while increasing scalability and availability. We recommend considering AWS Lambda for any suitable use case, especially as AWS continues to invest in Lambda’s capabilities to accommodate larger workloads and broader use cases.

FedRAMP Compliance: Tips And Cues 2015 vs. 2017, What Changed?



In order to increase security across Federal agencies, several agencies created the Federal Risk and Authorization Management Program (FedRAMP). These agencies are:

  • The National Institute of Standards and Technology (NIST)
  • The Department of Homeland Security (DHS)
  • The General Services Administration (GSA)
  • The Department of Defense (DOD)

FedRAMP is a government-wide program which provides a standardized approach to security assessment, authorization, and continuous monitoring for Cloud Service Providers (CSPs).

The intent behind the program is to facilitate the adoption of CSPs among Federal agencies and eliminate duplication of effort. At the same time, FedRAMP reduces the risk management time and costs that agencies would otherwise spend individually assessing CSPs. Or, as the official FedRAMP site puts it:

“FedRAMP facilitates the shift from insecure, tethered, tedious IT to secure, mobile, nimble, and quick IT.”

FedRAMP Security Assessment Framework 2015 Vs. 2017

To advance security among Federal agencies, FedRAMP issued the general Security Assessment Framework in 2015. Two years later, this structure was upgraded to Security Assessment Framework version 2.4.

In both versions of the Security Assessment Framework, FedRAMP puts a special emphasis on the importance of CSPs meeting the FedRAMP requirements.

In order to become FedRAMP compliant, each CSP needs to carefully follow and go through 4 process areas:

  1. Document,
  2. Assess,
  3. Authorize, and
  4. Monitor.

FedRAMP risk management framework

Source: FedRAMP Security Assessment Framework, 2017

However, the ways in which CSPs could achieve FedRAMP compliance have slightly changed in the 2017 Security Assessment Framework revision.

In the 2015 version, FedRAMP allowed three ways for CSPs to become FedRAMP compliant. In the latest version from 2017, FedRAMP gives CSPs only two possible ways to achieve compliance.

In 2015, CSPs could achieve FedRAMP compliance through:

  • Joint Authorization Board Provisional Authorization (JAB P-ATO)
  • FedRAMP Agency Authority to Operate (ATO)
  • CSP Supplied Package

Let’s explain these 3 in more detail.

1. Joint Authorization Board Provisional Authorization (JAB P-ATO)

JAB P-ATO

Source: fedramp.gov

The JAB P-ATO is a type of request for FedRAMP compliance that can be submitted either by the CSP or by a Federal agency. It basically means submitting an application known as the ‘Initiate Request’ form on www.fedramp.gov to start processing the CSP for a JAB P-ATO. Here, the CSP provides all required data to the JAB, which performs a risk review of the data provided.

When the JAB grants the P-ATO, it provides all Federal agencies a recommendation on whether the CSP has an acceptable risk posture for Federal use.

For FedRAMP JAB P-ATOs, the CSP must collaborate with an accredited Third Party Assessment Organization (3PAO) to independently verify and validate the security implementations.

The picture above shows the entire process of the JAB P-ATO, from submission to authorization. As you can see, it consists of four separate stages, each containing several phases of its own.

The first stage is called Readiness Assessment & FedRAMP Connect. As the name implies, this is where the CSP provides all the information needed so that the assessment process can start; the length of this stage depends on how ready the CSP is to provide that information.
The next stage, Full Security Assessment, lasts for about one month. At this stage, the CSP is examined against all of the requirements of the assessment framework. If it passes, the CSP moves on to the next stage of the JAB P-ATO process: FedRAMP Authorization.

The Authorization Process is the longest stage. It can take 3 to 4 months, and sometimes even longer, because of the several reviews the CSP must go through. Once this stage is over and the CSP is FedRAMP authorized, the final stage follows: Continuous Monitoring.

Continuous Monitoring is an ongoing process that checks whether the CSP still meets all of the FedRAMP requirements and how it uses its FedRAMP authorization.

Getting a JAB P-ATO is a long and complex process that not every CSP can complete. On the other hand, CSPs authorized this way carry a strong assurance of security.

2. FedRAMP Agency Authority to Operate (ATO)

FedRAMP agency authority to operate

Source: fedramp.gov

ATO allows CSPs to work directly with a Federal agency to achieve FedRAMP compliance. Here, the CSP works together with the Federal Agency security office to provide all data necessary for the ATO. After that, the Federal agency makes a risk review of the data.

Federal Agencies have to choose a FedRAMP accredited 3PAO or a non-accredited Independent Assessor (IA) to perform the assessment.

In cases where a non-accredited assessor is used, the Federal agency needs to provide evidence of the assessor’s independence and a letter of attestation of that independence with the security authorization package. However, the FedRAMP Program Management Office (PMO) highly recommends using an assessor from the FedRAMP 3PAO accreditation program.

Once the Federal agency authorizes the package, they need to notify the FedRAMP PMO. The PMO then instructs the CSP how to submit the package for PMO review.

After reviewing the package and ensuring it meets all of the FedRAMP requirements, the FedRAMP PMO publishes the package in the Secure Repository for other Agencies to leverage.

As you can see from the picture above, the ATO process is similar to, but at the same time very different from, the JAB P-ATO. The ATO path also includes four stages, of which only the first is different: instead of Readiness Assessment & FedRAMP Connect, the CSP works on establishing a partnership with the Federal agency. This first stage is known as Relationship Establishment.

The other three stages look similar, but at their core they are quite different. The difference lies in the phases that each process goes through. A more detailed look at the picture above makes this clear, especially for the Authorization Process, because with an Agency ATO the review is done by both the agency and the PMO.

3. CSP Supplied Package

The 2015 FedRAMP Security Assessment Framework provides an opportunity for CSPs to supply a security package to the FedRAMP Secure Repository for prospective Agency use. Here, the CSP chooses to work independently rather than through the JAB or a Federal agency. Unlike the other two ways of achieving FedRAMP compliance, once the CSP completes the FedRAMP Security Assessment Framework (SAF), the package is not authorized; it is simply made available for agencies to leverage. Instead of gaining a FedRAMP authorization, the CSP must pass one final check: independent assessment by a 3PAO.

The CSP must collaborate with an accredited 3PAO to independently verify and validate the security implementations and the security assessment package.

Once the authorization is completed, the CSP notifies the FedRAMP PMO and the PMO instructs the CSP on how to submit the package for PMO Review.

After the review, the FedRAMP PMO publishes the package in the Secure Repository for other Federal agencies to leverage.

In cases where the Federal agency decided to issue an ATO to a CSP-supplied package, the status of the package changes in the ‘Secure Repository’ to indicate that it has evolved into a FedRAMP Agency ATO Package.

What Changed In 2017?

FedRAMP is a program focused on constant improvement. For that reason, it puts a huge effort into improving the standardized approach it offers and presenting it as the best possible solution for securing Government data among agencies that use CSPs.

With that in mind, FedRAMP decided to exclude the CSP-Supplied path from the 2017 Security Assessment Framework and focus on the other two: JAB and Agency Authorization.

FedRAMP explains that this decision stems from the fact that the CSP-Supplied path was the least utilized of the three options, and unfortunately, a large share of the CSP-Supplied packages submitted to the PMO failed to pass the compliance review.

They explain on their official site:

“After numerous interviews with CSPs, agencies, and 3PAOs, we concluded that CSP-Supplied had the lowest demand and was too risky, costly, and resource intensive for both industry and the FedRAMP PMO.”

As an alternative, they offer the option to pursue the redesigned FedRAMP Ready process.

”While CSP-Supplied is going away, we believe the redesigned FedRAMP Ready will better prepare CSP’s for a JAB provisional authorization or help identify an agency sponsor for authorization, with it happening faster, cheaper, and with more certainty.”

Final Thoughts

FedRAMP compliant FOIA software

Image Source: https://www.fedramp.gov

With the intent of making it easier for FOIA agencies to adopt CSPs, the National Institute of Standards and Technology, together with DHS, GSA, and DOD, initiated the development of FedRAMP.

As a government-wide program which provides a standardized approach to security assessment, authorization, and continuous monitoring for CSPs, FedRAMP is in constant evolution.

Since its creation, the framework has significantly evolved. And one such proof is the 2017 Security Assessment Framework.

This framework is a huge help for FOIA agencies searching for a CSP, which, believe it or not, is not an easy job; it carries significant risks and costs.
Thanks to FedRAMP, FOIA agencies can now rely on the assessment framework and spend their time responding to more FOIA requests and reducing backlogs.

If you have any comments or questions about the FedRAMP compliance, feel free to ask. We will be glad to help you.

And if you want to know more about our FedRAMP Compliant solutions, don’t hesitate to contact us.

Microsoft Azure Face API


Microsoft Azure Cognitive Services Face API

Microsoft Azure is a cloud computing service that has its own machine learning service, known as Cognitive Services. It is split into five categories: Vision, Speech, Language, Knowledge, and Search, with each category containing several tools, for a total of 26. Under Vision, there are six tools: Computer Vision, Content Moderator, Custom Vision Service, Emotion API, Face API, and Video Indexer. As the title suggests, the focus here is on the Face API tool.

The Face API is split into two basic categories:

  • Face Detection – discovers a face in an image, with the ability to identify attributes such as gender, age, facial hair, glasses, and head pose.
  • Face Recognition – takes faces and performs comparisons to determine how well they match. Has four categories –
    • Face Verification – takes two detected faces and attempts to verify that they match
    • Finding Similar Face – takes candidate faces in a set and orders their similarity to a detected face from most similar to least similar
    • Face Grouping – takes a set of unknown faces and divides them into subset groups based on similarity. For a subset of the original set of unknown faces, each face within that subset is considered to be the same person object (based on a threshold value).
    • Face Identification – further explained below.

With Face Identification, you must first create a PersonGroup object. That PersonGroup object contains one or more person objects. Each person object contains one or more images that represent the respective person object. As the number of face images a person object contains increases, so does the identification accuracy.

For example, let’s say that you create a PersonGroup object called “co-workers.” In co-workers, you create person objects, for example, you might create two – “Alice” and “Bob.”  Face images are assigned to their respective person objects. You have now created a database with which to compare a detected face image. An attempt will be made to find out if the detected image is Alice or Bob (or neither), based on a numerical threshold.

This threshold is on a scale that is most permissive at 0 and most restrictive at 1. At 1, faces must be perfect matches – by perfect, I mean that two identical images at different compression rates will not be recognized as a match. In contrast, at 0 a match will be returned for the person object with the highest confidence score, regardless of how low that score is. In my experiments, somewhere between 0.3 and 0.35 tended to strike a good balance. To reiterate an earlier point, more images per person object increases identification accuracy, thus decreasing both false positives and false negatives.
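To make the flow concrete, here is a rough sketch of the detect-then-identify sequence against the Face API v1.0 REST endpoints (the endpoint URL, subscription key, group name, and threshold value are placeholders/assumptions; error handling is omitted, and the PersonGroup is assumed to already exist and be trained):

```python
import requests

# Placeholders; substitute your Azure region endpoint and subscription key
ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
KEY = "<subscription-key>"

def identify(image_path, person_group_id="co-workers", threshold=0.33):
    """Detect faces in a local image, then ask the Face API which person in the group they match."""
    with open(image_path, "rb") as f:
        detected = requests.post(
            f"{ENDPOINT}/face/v1.0/detect",
            headers={"Ocp-Apim-Subscription-Key": KEY,
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        ).json()
    if not detected:
        return None                             # no face found in the image

    return requests.post(
        f"{ENDPOINT}/face/v1.0/identify",
        headers={"Ocp-Apim-Subscription-Key": KEY},
        json={
            "personGroupId": person_group_id,
            "faceIds": [face["faceId"] for face in detected],
            "confidenceThreshold": threshold,   # 0.3-0.35 struck a good balance in our tests
        },
    ).json()
```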

An Example Application to Simulate Video Analysis

An example implementation of Face Identification, in conjunction with Dlib and FFmpeg, follows. The purpose of this application was to identify faces in video, and since Face Identification only works on still images, FFmpeg was used to extract keyframes for Face Identification to examine individually.

Face Identification detects faces in images before identifying them, but in my experience, Dlib detected faces more accurately and a lot faster. In this case, Dlib detected if an image contained a face; if it did, that image was sent to Azure for face identification. The disadvantage here was that detection was done twice – first in Dlib, then again in Face API. It was faster to detect an image locally using Dlib than it was to call the Face API – which was remote. It was especially advantageous to filter using Dlib when there was a long video with relatively little facial presence (e.g., security footage). If most of the video had a facial presence, disabling Dlib may have been preferable. Another factor to consider is that Azure charges fees based on the number of API calls, so filtering using Dlib first saved money.
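A simplified sketch of that pre-filtering step (the original application was written in C#; this illustration uses Python with FFmpeg and Dlib, and the paths shown are placeholders):

```python
import glob
import subprocess

import dlib

detector = dlib.get_frontal_face_detector()

def extract_keyframes(video_path, out_dir):
    """Use FFmpeg to dump only keyframes (I-frames) from the video as still images."""
    subprocess.run([
        "ffmpeg", "-i", video_path,
        "-vf", "select='eq(pict_type,I)'", "-vsync", "vfr",
        f"{out_dir}/frame_%05d.jpg",
    ], check=True)

def frames_with_faces(out_dir):
    """Keep only the keyframes in which Dlib detects at least one face; only these go to Azure."""
    kept = []
    for path in sorted(glob.glob(f"{out_dir}/frame_*.jpg")):
        if detector(dlib.load_rgb_image(path), 1):
            kept.append(path)
    return kept
```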

Figure 1 depicts the Face Identification user interface. The three list panes in the middle of the user interface (from left to right) define the PersonGroup objects, the Person objects, and the images attached to each Person. In the figure, group2 is selected, which contains two person objects: Person_A and Person_B. Person_A is selected, and the images associated with Person_A are listed in the right-most column. Figure 2 discusses the controls and settings for conducting a face match run.

Figure 1 – Highlighted in blue, from left to right: PersonGroup, person, face image. The database image selected here is Abraham Lincoln, belonging to Person_A in group2. There can be more than one image per person. If an examined image contains a detected face that sufficiently resembles Person_A (or Person_B), the best match is returned.

Figure 2 – A closer look at this application that implements Azure Face API, FFmpeg, and Dlib

Some points regarding using the above method to analyze video through keyframe extraction:

  • In tests, it was found that the baseline accuracy was comparable to that of other methods tested (AWS Rekognition, Linux face_recognition), see Figure 3. The advantage of this system was that accuracy could be improved by adding multiple face images per person.

Figure 3 – Azure results compared to other facial recognition tools tested

  • The PersonGroup profiles were persistent, using Microsoft Azure’s cloud storage
  • Easy to add/remove/modify groups, people, and faces

Limitations:

  • API calls were limited to 10 per second – this is a server-side limitation, and there is no local workaround (another advantage to using Dlib locally)
  • Because of the API call limit set by Microsoft, it was slow relative to other methods
  • Like most facial recognition systems, it was difficult to predict processing time. The two main influencing factors were:
    • File format – this played a much more important role than file size. In fact, a file that was half the size could take longer to process depending on the format.
    • Number of detectable faces in a file – there would be no point in using this if you already knew the contents of a video file, and processing time went up as facial presence increased
  • Internet connectivity is obviously necessary – processing was server-side and there was no option for locally exclusive data storage

It should be noted that Face API was meant to be used to analyze images and live video streams, but not stored video files. This application attempted to simulate the analysis of stored video files, and thus was using the API in a manner for which it wasn’t intended. For example, ensuring that the 10 API calls per second limit wasn’t exceeded required testing, as Azure simply discarded any API calls that exceeded the limit with a generic error message – it did not add the images to a queue to be processed as soon as possible. Frames could be lost that way when examining a series of images. Cognitive Services does offer a Video Indexer that, among other things, has face tracking and identification, but that is only against a celebrity database. The user can’t define the database, so it is highly limited. The Video Indexer is in preview mode, so I suspect that at some point it will allow for a more flexible facial recognition system. Currently, it does not offer what this application was attempting to simulate.
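Because over-limit calls were simply rejected rather than queued, the client had to pace itself. A minimal client-side throttle (illustrated here in Python for brevity, though the application itself was C#) might look like this:

```python
import time

MAX_CALLS_PER_SECOND = 10   # server-side limit imposed by the Face API tier used

def call_with_throttle(call_api, items):
    """Pace calls so no more than MAX_CALLS_PER_SECOND are issued in any one-second window."""
    results = []
    window_start = time.monotonic()
    calls_in_window = 0
    for item in items:
        if calls_in_window >= MAX_CALLS_PER_SECOND:
            elapsed = time.monotonic() - window_start
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)   # wait out the rest of the current window
            window_start = time.monotonic()
            calls_in_window = 0
        results.append(call_api(item))
        calls_in_window += 1
    return results
```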

This application was written using C#, although Face API also supports cURL, Java, JavaScript, PHP, Python, and Ruby.

Conclusion

Although Amazon Web Services has a far larger market presence than Microsoft Azure, Microsoft Azure’s Cognitive Services is very functional. The accuracy of the Face API is comparable to AWS’s facial recognition alternative, albeit a bit slower. Its array of tools is consistently growing as well. There is an argument to be made that AWS’s advantage is simply its larger user base, which alone can increase the functionality of a product through consumer demand and supplier response. If Microsoft has something to prove in the area of machine learning, though, that can be an advantage as well.

The other contenders in this area are the various Linux-based, open-source tools, which are often just as good in terms of accuracy. A huge advantage Linux has is control over the locality of processing, which allows for some creative control when it comes to memory and storage management, along with general application implementation. With the ability to introduce multi-threading, Linux is often the fastest when it comes to processing – you could multi-thread AWS or Azure, but there is no point because their servers do the heavy lifting and decide what you get and when you get it (think back to API call limits). The downside Linux has when compared to Azure and AWS is comprehensive support. AWS and Azure have a centralized customer support system, and Linux by nature does not. It can be a headache to even get to the point of installing the necessary software to begin coding for it, as packages often become out-of-date and don’t always play nice, plus online documentation can be challenging or absent. But that is the tradeoff when it comes to the freedom and control of Linux. Plus, it’s free.

At this point, there is no clear advantage to using one over the other.  However, one thing is for sure – Microsoft Azure and AWS will continue to invest in this space through research and acquisitions to become the preferred provider of artificial intelligence tools and services.

Armedia Takes ArkCase To AWS Cloud


As information volume grows daily, companies and organizations are facing the ever-growing challenge of managing all the data, for all their cases, all of the time.

To complicate things further, organizations and end-users want this data to be accessible 24/7, worldwide. Throw in legislation about how this data is stored, where it’s stored, who gets to see it, and so on, and you soon have a perfect recipe for a major headache.

Armedia has been in this situation with clients many times over the years. We understand:

  • the pain of IT modernization,
  • the fear of data spillage,
  • the need for a modern, scalable, reliable, cloud-based IT solution.

This is why, for the past few years, we have been actively collaborating with ArkCase and Amazon Web Services to put the two platforms together. Our goal was to build an easy-to-deploy cloud-based case management system that allows companies to have an affordable, yet scalable and reliable business solution.

We’re super-happy to announce that we have finally made it! We took ArkCase to the Cloud and it’s awesome!

Why Amazon Web Services?


Amazon Web Services (AWS) has been one of the most comprehensive and widely adopted cloud platforms since 2006, featuring more than 90 services for computing, storage, networking, databases, analytics, application services, deployment, management, and more.

Millions of active customers worldwide, from startups to leading government agencies, have chosen the AWS Cloud to become more agile, improve their infrastructure, and reduce costs.

Why ArkCase?

ArkCase is a workflow-driven, web-based, easily configurable Case Management System that leverages mature technologies like Alfresco for DoD 5015 records management, Ephesoft for intelligent document capture, Snowbound for annotation and redaction, Pentaho for business analytics, etc.

Armedia provides various Case Management solutions such as:

  • Event & Task Management
  • Freedom of Information Act (FOIA) or Release of Information (ROI),
  • Legal Case Management or Office of General Counsel,
  • Complaint Management,
  • Claims Management (i.e. Worker’s Compensation),
  • Investigative Case Management…

We have used ArkCase many times before and found it to cover all the key bases. It’s easily scalable. It’s easy to configure. It uses Alfresco for content storage, which supports records-management compliance. It works easily with Ephesoft for digitizing the mountains of paper-based forms. And the ArkCase team is pretty awesome to work with.

Why ArkCase Cloud-Based Content Management System on Amazon Marketplace?

AWS Marketplace is an online store that makes it easier for customers to find, subscribe to, and deploy software that runs on AWS. This enables customers to subscribe to ArkCase and run it on their own AWS Cloud infrastructure with only one click.

The 1-Click ArkCase deployment on AWS can be completed in just a few minutes. Here’s how the process goes:

  1. On AWS, you set up your own Virtual Private Cloud (VPC),
  2. You launch the single, self-contained ArkCase Amazon Machine Image from AWS Marketplace,
  3. You configure the “local” ArkCase setup to work according to your needs.
  4. That’s it. You’re done!

Being cloud-based, the ArkCase System allows customers to focus on addressing their business needs without worrying about IT infrastructure or deployment services.

In addition, AWS employs a pay-as-you-go approach, meaning that you pay only for the individual services you use, for as long as you use them:

  • No additional costs,
  • No termination fees,
  • No long-term contracts,
  • No complex licensing.

This flexibility allows customers to have more control over the costs and pay only for what they use.

Also, AWS Cloud takes care of the privacy and security of your data. For Public Sector and Law Enforcement customers, AWS provides a FedRAMP and CJIS compliant PaaS. Data backups and recovery are part of the AWS offer.

AWS Cloud emphasizes the necessity of flexibility and accessibility of data in the digital world in which we live. As a result, it allows access to the data from anywhere.

Conclusion


The key to success of every agency is flexible, scalable, secure IT Modernization. That is why Armedia takes ArkCase to the AWS Cloud.

AWS Cloud provides the needed scalability and reliability as a platform. ArkCase provides the software stack required by users and by law to enable a solid case management system.

We at Armedia took the two components and merged them into a very easy-to-use Cloud-based Case Management System that can help companies and organizations streamline case management, and reduce operational costs.

Contact Armedia to learn more about ArkCase on AWS MarketPlace and how we can help you move your Case Management to the Cloud.

Avoid Communication Bottlenecks With A Modern Correspondence Management Solution


In today’s fast-paced and high-technology world, people have higher and higher expectations that their local, state and federal agencies will provide outstanding service. However, a lot of agencies struggle with delivering decent public experiences. And this can be especially true in the correspondence management area.

That is why more and more agencies are now adopting a modern correspondence management solution to fulfill their correspondence needs and avoid communication bottlenecks. A cloud-based, modern correspondence management solution can help streamline existing processes and reduce IT overhead while empowering government staff members to provide outstanding service to citizens and comply with regulations such as FOIA (the Freedom of Information Act).

What Can a Modern Correspondence Management Solution Yield in Government?


To better understand what a modern correspondence management solution can deliver for government agencies, Armedia partnered with ArkCase, a leading company in case management, correspondence management, and customer service.

Many government agencies process hundreds, some even thousands, of requests each day. With all the demands and inquiries from the citizenry, some serious challenges arise regarding correspondence management.

You have likely confronted communication bottlenecks in your system, regardless of your level of government. Maybe it’s stacks of mailboxes for an upcoming administration with no directions as to how to proceed. Maybe it’s protracted outages due to old broken systems. Email chains with plenty of attachments to keep track of? These are everyday scenarios that make it hard to handle frequent requests. Why?

Why is it Hard to Handle Frequent Requests?


For one thing, many requests to government agencies must be approved by multiple departments before they can be processed. A single request may also spawn multiple requests throughout an agency. The request travels through various approval chains, and if the agency relies on a single email system to handle that activity, this can become a big issue. With so many reviewing departments on a single request, it becomes hard to communicate in a timely fashion and respond to public requests efficiently.

Secondly, many government agencies have separate on-premises case file systems and email systems that lack compatible platforms to help handle requests. Hardware systems do not integrate with third-party vendor systems and usually require regular maintenance. Hence, staff members have to process information from these systems manually, and they lack the big picture of the entire request process; they only have pieces of the puzzle. This eventually leads to unnecessary back and forth between staff members trying to find the point persons for a request and its status. This is exhausting!

Additionally, low-level requests can build up over time since there is no self-service or faster way for staff members to handle this type of correspondence. A simple public inquiry (for example, checking the status of a passport) can get lost in the trenches, since there is no single system that can prioritize and organize correspondence accordingly. Moreover, this can lead to severe security issues, as citizens’ identifiable information can be passed back and forth between different systems or end up in a folder under piles of paperwork.

Such security issues lead to more challenges for correspondence management in government agencies, including compliance with federal laws. That is why government agencies need a modern correspondence management solution that complies with federal regulations (such as FedRAMP, which requires agencies to use low-impact and low-risk cloud solutions). Government agencies legally must comply with such regulations, and failing to do so can lead to serious issues, like data breaches or loss of public trust for failing to protect sensitive data.

The Solution: Modern Correspondence Management


To overcome the challenges mentioned above, government agencies need a modern correspondence management solution that streamlines processes and protects sensitive data while delivering better public services. Such a solution can help your agency carry out high-quality correspondence management and public service delivery.

Picture this: A citizen makes a request. The system automatically processes and routes it to the right staff member to address the request based on their subject matter expertise. Employees can find all the data they need to be connected to the request instead of having to dig through a heap of paperwork or different computer systems to dig out the data.

Each request guides the staff member through the correspondence process, so they know where they left off and do not have to guess what step comes next. Document templates can then generate content automatically, which is also saved in the cloud for collaboration across departments.

With a modern correspondence management solution, department heads no longer have to throw assignments over a wall and then wait for a response. With such a solution, employees can communicate and collaborate in real time, get insights from their team, and stay on top of deadlines.

In the meantime, citizens no longer have to wait long for a response. They receive notifications, and they can monitor the progress of their request, browse currently requested documents, and ask questions. Back at the agency, the system completes the requests and makes them ready for review. A modern correspondence management solution knows precisely whom to notify for review, approval, and signature. And once signed, the requester receives the correspondence.

Benefits of Modern Correspondence Management Solution


A modern correspondence management solution can improve your agency’s efficiency, productivity, responsiveness, and service to citizens. With such modern solutions, your government agency can benefit in the following ways.

Manage the Complexity of Requests

Modern correspondence management solutions can help your agency manage the entire lifecycle of requests while keeping track of the approval processes. Instead of having to go through various on-premises files and systems to retrieve documents or to see who is on the team for a certain request, a modern solution can streamline all the processes and people linked to that request. Moreover, modern correspondence management solutions can automatically remind staff members working on a request of incoming tasks, enabling them to finish requests in a more timely fashion instead of combing through information manually.

Outstanding Service and User Experience

Your agency needs a platform that is consistent across known systems. A modern correspondence management solution offers integration with the collaboration and productivity tools that your agency already has in place, allowing staff members to learn the technology easily and promptly.

Transparency and Accountability (with more Visibility) on Requests

The public expects consistency with responses to their requests. It doesn’t matter if they make a request through email, web portal, or phone call; a modern solution can help your agency to consistently handle the requests by providing all staff members with a view of the current correspondence process. Staff can see, for example, where precisely requests are and what has been said about them. Furthermore, citizens are able to track the progress of their requests.

Comply with Federal Regulations

Government agencies need to keep pace with what modern technology offers. A modern correspondence management solution allows for integration with systems already known to agency staff members while taking advantage of the cloud. What’s more, a modern correspondence management solution can fulfill all of this while maintaining compliance with federal laws and regulations.

To Wrap It Up

A modern correspondence management solution can help your agency avoid communication bottlenecks by streamlining work processes for handling requests. And that will allow your agency to deliver efficient responses in a more timely fashion, to maintain visibility and accountability for the teams involved in a request, and to stick to important federal regulations. Such modern solutions can provide your government agency with tools that will help you stay agile by delivering fast and on-time correspondence (while meeting and exceeding citizen expectations).