Monday, 12 August 2013

Integration Tests, Spring and Mocking

Some time ago, I emphasised how much integration tests are needed. This time I will show some tricks for building nice integration tests. This use case is specific to the Spring Framework; however, it can be applied to similar scenarios.
The trick is about using the Spring Framework and Mockito to build integration tests and mock the behaviour of dependencies. The mocks here are not data mocks but dependency mocks. The two are a little different: data mocking generates and injects data for the component on the wire, while dependency mocking replaces the libraries and services a component depends on.
Today’s software is increasingly interconnected, communicating with a number of services and systems. For example, a service could receive and send messages to interested parties through a messaging exchange. That strategy enables building abstracted systems. Similarly, applications can be built from abstracted components so that an individual component can be tested in isolation while its dependencies are mocked with different strategies. Testing individual components helps reduce some heavyweight functional tests and ensures components are reliable to some degree.
The code samples below demonstrate building integration tests with Spring and Mockito; notice the line that mocks a Spring bean.

Spring Factory & Mockito

This nice Spring feature is a kind of hidden gem, maybe because of its slightly confusing "constructor-arg" naming. That parameter sounds like a class constructor argument; however, in reality it can be used for any factory method argument.
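A minimal sketch of the feature described above: Spring's `factory-method` attribute calls `Mockito.mock(...)`, and the "constructor-arg" supplies the argument to that static factory method, not to a constructor. The bean id and the `PaymentGateway` class name are illustrative assumptions, not from the original post.

```xml
<!-- The bean is produced by calling Mockito.mock(com.example.PaymentGateway.class);
     "constructor-arg" here is the factory method's argument, not a constructor's. -->
<bean id="paymentGateway" class="org.mockito.Mockito" factory-method="mock">
    <constructor-arg value="com.example.PaymentGateway"/>
</bean>
```

Any bean that depends on `paymentGateway` now receives a Mockito mock, which the test can stub and verify as usual.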

Spring Context

And here is the Spring test context, including the main application context and the abstracted test dependency. Separating the Spring context into multiple chunks helps build abstracted systems and allows plugging a mocked dependency into an integration test.
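A sketch of such a test context, assuming the main context lives in a file named `application-context.xml` and the dependency is the hypothetical `PaymentGateway` from above; the file names and bean ids are illustrative.

```xml
<!-- test-context.xml: imports the main application context, then overrides
     the real dependency bean with a Mockito mock of the same id. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <import resource="classpath:application-context.xml"/>

    <bean id="paymentGateway" class="org.mockito.Mockito" factory-method="mock">
        <constructor-arg value="com.example.PaymentGateway"/>
    </bean>
</beans>
```

An integration test then loads `test-context.xml` (for example via `@ContextConfiguration`) and exercises the real beans against the mocked dependency.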

Monday, 5 August 2013

Scalable Services

Previously, I wrote about micro service architecture and its features. This time I will discuss scaling services and try to explain design strategies that affect the scalability of services.
Nowadays every business wants easily scalable services, because today’s business environment is more competitive and customers are more demanding: no one wants to wait for a slow web page to load or use a sluggish mobile application. It is also important for customer retention; today’s customers have more choices than in the past, so scalable, high-performing services are paramount to any business’s success.
By scalability, I mean scaling the whole technology stack a service uses. A truly scalable service architecture requires avoiding any kind of bottleneck, whether in the application or the database layer.
I will focus on scaling applications by partitioning service consumers. The easiest way of scaling services is making them data agnostic and having load balancers distribute traffic randomly between service instances, so that each instance serves a share of the received traffic. We can see this strategy applied in Cassandra’s random partitioner and MongoDB’s hash-based sharding.
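The data-agnostic strategy above can be sketched in a few lines: hash an opaque request key and take the result modulo the instance count. The class and method names are illustrative, not from the original post.

```java
// Minimal sketch of hash-based partitioning across service instances.
// The balancer stays data agnostic: it never inspects the request payload,
// only hashes an opaque key (e.g. a session or customer id).
public class HashPartitioner {

    // Maps a request key to one of `instanceCount` service instances.
    public static int instanceFor(String requestKey, int instanceCount) {
        // floorMod keeps the result in [0, instanceCount) even when
        // hashCode() is negative
        return Math.floorMod(requestKey.hashCode(), instanceCount);
    }
}
```

The same key always maps to the same instance within a deployment, which is essentially what a hash-based sharding scheme gives you at the data layer.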

[Figure: Scalable service]

While that strategy works most of the time, there are occasions when a service needs to treat received traffic in a smarter way, which may require examining the traffic data before processing the request. For example, a payment card processing system can distribute received traffic amongst service instances according to card BIN numbers, which ensures a given bank’s data always goes to the same instance. That strategy is an application of the reactor pattern at the architecture level: a front controller or load balancer examines the initial piece of the traffic data, then delegates the work to a specific instance. Another example is that some service features could depend on a customer’s geo-location for tax or jurisdiction purposes.
The reactor pattern, applied at the architectural level, also helps to abstract specific traffic types and build specialized versions of service instances. For example, once jurisdiction is abstracted, a service can have either a UK or an AUS jurisdiction instance supplied during service provisioning. That strategy enables treating service traffic selectively rather than randomly, and hence yields control over the received traffic.
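The BIN-based routing described above can be sketched as a small front-controller lookup. The routing table and card numbers are illustrative assumptions; a real system would load the table from configuration.

```java
import java.util.Map;

// Sketch of reactor-style routing at the architecture level: a front
// controller inspects the card BIN (first six digits of the card number)
// and delegates to a fixed instance, so a given bank's traffic always
// lands on the same service instance.
public class BinRouter {

    // Illustrative BIN-prefix-to-instance table.
    private final Map<String, Integer> binToInstance;

    public BinRouter(Map<String, Integer> binToInstance) {
        this.binToInstance = binToInstance;
    }

    // Returns the instance number that should process this card's traffic.
    public int route(String cardNumber) {
        String bin = cardNumber.substring(0, 6);
        // Unknown BINs fall back to instance 0 in this sketch
        return binToInstance.getOrDefault(bin, 0);
    }
}
```

The same idea generalizes to the geo-location example: replace the BIN lookup with a jurisdiction lookup and the router selects the UK or AUS instance instead.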

Micro Service Architecture

Micro service architecture is a paradigm that aims to build systems by decomposing business features into lightweight services. It is like applying SOLID principles at architectural level.

[Figure: Micro Services]
Micro service architecture enhances classic SOA by emphasising single responsibility. Conceptually, micro services aren’t particularly difficult to deliver, but in practice some questions can arise.

Features of micro services

By definition a micro service is very small and has a small footprint. A CI build, including unit and integration tests, should take only a few minutes to complete. Micro services can be short lived (disposed of once the business no longer needs them) within a long-living software ecosystem. Micro services tend not to favour heavy frameworks and application servers, which could be a negative in some cases.
Another nice thing about these services is that they can be language agnostic, so a heterogeneous development team can build services in different languages, provided service communication is abstracted in some form. Moreover, micro services can have multiple versions in production systems (which may bring additional work for DevOps).

Service communication

There isn’t a silver bullet for this one. It probably depends on the role of the services, and it is highly likely there will be different communication paths amongst them: public network services could use HTTP, REST and JSON for data exchange, while internal services can leverage more efficient protocols like Protobuf and may interact via a message bus.

Service delivery

Micro services are relatively easy to develop and deliver to production systems, given the development focus is on a single feature set. Moreover, it is possible to have multi-instance, multi-versioned services in production. These services can have multiple instances to handle high-volume traffic, although multi-instance deployment may require some design changes; for example, instances can be made data agnostic, or each instance can handle specific traffic (e.g. segmented by customer or location).
Having multiple instances and versions is not cost free. First of all, it requires exposing versioned service interaction points; moreover, deployment and provisioning will carry some overhead. However, it is entirely achievable to reduce the overhead of DevOps work with change and configuration management systems: infrastructure can be virtualized, and service configuration can be managed with tools like Chef and Puppet.