Microservices are powerful tools, but getting them off the ground can be difficult. To ease this process, Armedia uses JHipster to quickly spin up new microservices. JHipster is an open-source project generation tool for building modern web applications with AngularJS and the Spring Framework via a Yeoman generator. We use JHipster's microservice generator because it creates the scaffolding required to coordinate microservices with the JHipster Registry, greatly easing the architectural burden and reducing project start-up time.
The entry point for your microservices is the gateway application, which is generated using the application type "microservice gateway". Beyond serving as an entry point, it also provides routing, load balancing, and API documentation for all the microservices connected to it.
Once the microservices and the gateway are up and running, they all register with, and retrieve their configuration from, a runtime application called the JHipster Registry. The JHipster Registry coordinates the microservices with the gateway and manages microservices that are scaled horizontally.
JHipster also generates Docker containers for each microservice, keeping them isolated and easy to spin up on any infrastructure we choose. It also supports SonarQube out of the box, a code quality tool that helps reduce bugs and improve speed and reliability.
JHipster has many wonderful features out of the box, including automatic documentation, internationalization, unit tests, and end-to-end tests, all wired up together. It also supports generating monolithic applications, letting you benefit from whichever design path you choose.
Below is an example screenshot of the JHipster tool, showing how easy it is to generate a microservice application:
What is the structure of Armedia’s microservice architecture?
In the above diagram:
The green component is the microservice gateway. This is the core of the microservice architecture. It handles web traffic and serves an Angular application. It additionally handles routing, load balancing, and API documentation for all microservices. All external applications talk to the gateway to get information from a microservice, and all microservices respond to the gateway.
Some example microservices are shown in orange. They service requests sent from the gateway, and each focuses on a specialized task.
The Event Bus serves as an internal messaging system between microservices. This way, microservices don't have to go through the gateway to talk to one another; they can communicate directly over the bus for an expedited connection. Inter-microservice communication is a necessity in order to avoid duplicating tasks and information.
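The publish/subscribe semantics of an event bus can be illustrated with a minimal in-process sketch. This is only a conceptual model: in a real deployment the bus is an external message broker shared by the microservices, and the topic names here are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal publish/subscribe sketch of an event bus. Services subscribe to
// topics they care about; a publisher never needs to know who is listening.
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String message) {
        // Deliver the message to every handler registered for the topic.
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(message));
    }
}
```

For example, a user service might publish a "user.created" event that an audit service subscribes to, without either service calling the other directly.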
The external front-end applications that interact with our microservices are shown in blue. They are AngularJS applications served by the gateway. When a user logs in from an AngularJS application, a call is made to the microservice gateway, where the login details are validated. The gateway responds with a JWT, which serves as authentication and which the gateway and each microservice can validate via a common secret key. With a valid JWT, the front-end applications can make REST calls to any of the microservices. Each of these REST calls carries the JWT in its headers, and the microservices use that token to validate and authenticate the user.
Those tokens are self-contained: they carry both authentication and authorization information, so microservices do not need to query a database or an external system. This keeps the architecture scalable.
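The signature check that makes this possible can be sketched in plain Java. This is a simplified illustration of validating an HS256-signed JWT with the shared secret; a real service would use a JWT library and also verify expiry and claims, not just the signature.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch: a microservice validates a JWT's HMAC-SHA256 signature using the
// secret key it shares with the gateway, without calling any external system.
class JwtValidator {
    private final byte[] secret;

    JwtValidator(String sharedSecret) {
        this.secret = sharedSecret.getBytes(StandardCharsets.UTF_8);
    }

    public boolean isSignatureValid(String token) {
        String[] parts = token.split("\\.");
        if (parts.length != 3) return false; // must be header.payload.signature
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] expected = mac.doFinal(
                    (parts[0] + "." + parts[1]).getBytes(StandardCharsets.UTF_8));
            byte[] actual = Base64.getUrlDecoder().decode(parts[2]);
            // Constant-time comparison to avoid timing side channels.
            return MessageDigest.isEqual(expected, actual);
        } catch (Exception e) {
            return false;
        }
    }
}
```

Because only the shared secret is needed, every microservice can perform this check locally on each request.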
In order to keep the front-end and microservices in sync, we use Swagger as a documentation and endpoint testing tool. Whenever a microservice’s REST API has any changes or additions, the Swagger document for the microservice is updated.
When it comes to microservices, the quality of the code becomes paramount, as bad bugs or exploits could lead to downtime. To be proactive, we are using SonarQube as a code quality checking tool. Before a new version of a microservice is deployed, we always ensure that the code quality is exceptional.
Of course, with microservices running on multiple, separate machines, gathering metrics for each microservice is paramount. The gateway application has an auto-generated metric suite which we have dubbed "AppStats". From one location, we can see the current state of our architecture and use these metrics to improve its speed and uptime.
Below is a screenshot of “various options available in a Gateway application” and “Application Metrics”:
Microservices have become the new architectural Kool-Aid in the IT world ever since Netflix open-sourced its microservices tools and published several blogs and articles on its design and architecture. Given Netflix and its user base, microservices gained immediate popularity among software architects and developers.
With the cloud-based "Software as a Service" model being the need of the day, Armedia is always looking to serve large customer bases, and we need ways to scale our tools and services to that level. A microservice-based architecture provides the scalability and flexibility for this unique challenge.
But microservice architecture is not without its share of challenges. The first challenge is for the architects: it changes the way they typically think about system architecture and design, presenting them with the first set of issues to resolve. This includes the following considerations:
Composition of microservices
Should every microservice have an independent database, making the service truly autonomous, or should multiple microservices share a centralized database?
Packaging and deployment of microservices
With multiple independent components, what is a good deployment strategy for a microservice-based application?
How Should I Design For Microservices?
Microservice design should make use of the DRY principle: store information only once, and write code only once. Consider organizations that run multiple applications. With monolithic systems, every application often replicated the user data; some were smart enough to sync it with the organization's Active Directory, but each still kept an independent copy.
Soon, data between different applications got out of sync, and it became an issue to track which application's version of the user data should be treated as the source of truth. With a microservice-based design, you would encapsulate all the services around managing user data into a single service. This would include:
Syncing up with the AD
Managing user profile and user database
All other applications would use the REST/SOA APIs exposed by this microservice for user-related services and activities: signing into an application, getting the user profile with the appropriate roles and access controls, and so on. Since only one microservice deals with the user profile, the user data need not be replicated in every application.
There is one challenge with this approach: maintaining referential integrity with the user table if an application needs it. Since the user database is segregated into the user microservice, maintaining referential integrity becomes the task of the application that needs it (the database can no longer enforce it across this boundary).
Having said that, with the database being protected behind application facades, this should hardly be an impediment in the adoption of microservices as the benefits of segregation outweigh the need for referential integrity.
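One way an application facade can take over the foreign-key check is sketched below. The `UserClient` interface and its backing endpoint are hypothetical names for illustration, standing in for a REST call to the user microservice.

```java
// Sketch of application-level referential integrity: before persisting an
// order, the order service asks the user microservice whether the user
// exists, since the database can no longer enforce this foreign key.
interface UserClient {
    boolean userExists(long userId); // e.g. backed by GET /api/users/{id} (hypothetical)
}

class OrderService {
    private final UserClient users;

    OrderService(UserClient users) { this.users = users; }

    public void createOrder(long userId, String item) {
        // The check the database used to do, now done in the application.
        if (!users.userExists(userId)) {
            throw new IllegalArgumentException("Unknown user: " + userId);
        }
        // ... persist the order ...
    }
}
```

Keeping this check behind a single facade means callers never touch the user database directly, which is exactly the segregation the microservice design is after.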
There is another challenge with such centralization of services: if too many applications use the same user management microservice, won't it become overloaded? Our answer lies in how we design the scalability of this service.
We can deploy this microservice as a clustered service and use intelligent caching through a distributed cache system like Hazelcast to mitigate the risk of overload. With deployment tools like Kubernetes, which can auto-scale microservices at the VM level, this hardly becomes an issue.
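The caching pattern involved is cache-aside: look in the cache first, and load from the backing store only on a miss. The sketch below uses a `ConcurrentHashMap` as a stand-in for the distributed map; Hazelcast's `IMap` implements the same `ConcurrentMap` interface, so the pattern carries over to a clustered cache. The class and method names are illustrative, not from any particular codebase.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Cache-aside sketch for the user microservice: profile lookups hit the
// backing store (e.g. the user database) only when the cache misses.
class UserProfileCache {
    private final ConcurrentMap<Long, String> cache = new ConcurrentHashMap<>();
    private final Function<Long, String> loader; // stand-in for a database lookup

    UserProfileCache(Function<Long, String> loader) { this.loader = loader; }

    public String getProfile(long userId) {
        // computeIfAbsent invokes the loader only on a cache miss.
        return cache.computeIfAbsent(userId, loader);
    }
}
```

In a clustered deployment, every node of the user microservice would share the same distributed map, so a profile loaded by one node is served from cache by all the others.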
This is exactly where the role of the Architect becomes critical. The burden is on the team to design the composition and the deployment and scaling strategies for various microservices that would strike a balance between modularity, reusability, and performance.