Posted by: lrrp | April 21, 2021

6 Reasons You Should Always Be Looking at New Jobs

Even if you’re 100% happy in your current position

Change is a constant.

Not just in your career, but in every area of life.

Hold loosely to where you are, and allow yourself to learn, grow, and seize opportunities, even if you don’t know exactly where they’ll lead.

At 33, I can wholeheartedly say that taking the time to look at new opportunities is something I’m so thankful for and would highly recommend.

Whether it’s making a new connection on LinkedIn or going on an interview, often our most worthwhile experiences come from tiny actions that might not even look like big opportunities at the time.

Regardless of where you currently are, here are 6 reasons you should always be looking at new jobs.

Things can always change in the blink of an eye

You’re probably familiar with the Heraclitus quote that says, “The only constant in life is change.”

This quote still rings true today.

Even if you like your job, things are always changing and often at a rapid pace.

A new manager could come in, layoffs might be around the corner, or your company might go through a merger or acquisition. I share this because I have experienced unexpected job changes multiple times. In each job I left, I never would have anticipated, three or six months earlier, that I would be leaving. Yet I did, because the work situation had changed drastically.

The key is not to be panicked or always worrying about what could happen in the future.

Instead, appreciate what you have today while still keeping yourself aware of other opportunities around you.

Knowing you have options gives you the confidence to do your job well, without fear of the unknown

Whether you are a freelancer or an employee, you always want to have options available to you.

I know from personal experience that I often do my best work when I feel like I have plenty of available opportunities.

I mentioned that I left previous positions rather quickly; that was possible mainly because I had other sources of income and had already been exploring my options.

While I certainly didn’t have a 5-year plan in place, I at least knew the direction in which I wanted to go.

On the other hand, if you’re constantly anxious about getting laid off and wondering whether you’ll ever find another job, your daily life will be much more strenuous.

In Psychology Today, Susan Weinschenk Ph.D. writes on the power of having choices and shares,

“We like having choices because it makes us feel in control. We won’t always choose the fastest way to get something done. We want to feel that we are powerful and that we have choices.”

Practically speaking, having choices might look like:

  • Having one informational interview a week to learn about jobs/careers that you’re interested in.
  • Sharing your expertise on a blog or Medium once a week to make connections and become known as an industry expert.
  • Taking 30 minutes a week or month to look at new jobs on LinkedIn or Indeed.
  • Keeping a “work” journal like career expert Lauren McGoodwin recommends to track your career, what you like, don’t like, and what you’re learning.

You can gain a better understanding of changes in your industry, market, pay, etc.

Even if you’re happy or content, there are always opportunities to learn something new.

Browsing through potential job descriptions or even going on a few interviews can be a good source of personal and professional growth.

A few years ago, I was ready to quit my job. Fortunately, before I quit, I applied for a few jobs and went on a handful of interviews.

I quickly learned that very few roles would offer both the flexibility and autonomy that I was accustomed to as a freelancer.

Going on interviews allowed me to appreciate my work so much more. Instead of quitting, I found that I could make a few tweaks and adjustments to make my current work better fit what I was looking for.

As you interview or research other companies, you might ask questions like:

  • Are other companies in your industry allowing people to continue working remotely permanently?
  • Is the pay lower, similar, or higher than comparable positions in your industry?
  • Could I apply some of the benefits of other companies to my own role or position?

Finally, the more you learn about other companies, the more you may even be reminded of how happy you are where you currently are while still knowing that you have options.

You will meet people outside your current circle of friends and colleagues

If you know people who always seem to be getting interview invitations or even outright job offers with little effort, it’s often not luck; it’s preparation.

“Opportunity does not waste time with those who are unprepared.”

―Idowu Koyenikan

The more people you know, even acquaintances, the more opportunities you will have in life.

Connect with people on LinkedIn.

Find people in your field and see if they’ll do a 10-minute informational interview about their work or company.

A few tips:

  • When connecting with someone, be sincere and show that you’re genuinely interested in them. You might reference an article they’ve written, a previous job they held, etc.
  • Have a strategy. Instead of adding people haphazardly, look for companies, industries, or positions that you’re interested in, and connect with people specifically in those places or roles.
  • Create a tracking system. If you’re connecting with lots of people, use a Google Sheet and write down the person’s name, their role, etc., and the date. This way, you won’t lose track of all the wonderful people you’re connecting with.

You can find opportunities for side hustles or consulting

Having a second or third stream of income can be incredibly liberating, even if it’s only an extra $100 a month.

A side hustle might be a way to explore similar work or even something completely different from what you’re doing now.

According to Fortune, 49% of Americans under the age of 35 now have a side hustle.

Having a side hustle will allow you to:

  • Generate a second (or third) income stream
  • Increase or diversify your skills
  • Boost your confidence
  • Explore areas that you might not want to do full-time

For example, for the past few years, freelance writing has been one of my side hustles. I’ve found that I enjoy writing on Medium and having 1–2 additional clients per month. But, I wouldn’t want to do this full-time.

As you try out new side hustles, you can test the waters with new opportunities. Even if you love your job, a side hustle is still a great way to try something new and diversify your income.

You’ll solidify what you love about your current work, and where there might be room for improvement

Over the years, I’ve applied for various jobs, mostly part-time roles that would accommodate my freelance work.

The thing is, although a lot of positions looked great on paper, once I applied and went through at least one interview, I often found that I was happier exactly where I was.

Whether it was a pay cut, a lack of flexibility, or just not an ideal fit with my current skills, I often left the interview more grateful for where I currently was.

On the other hand, exploring other jobs or opportunities has also been a great way to see ways that I could improve myself.

Looking at other opportunities can be a good source of both personal and professional growth.

Take the time to see what you don’t know that you don’t know. We all have room for improvement, and the more you’re willing to give yourself feedback and learn from each experience, the more prepared you’ll be regardless of what life throws at you.

Posted by: lrrp | February 21, 2021

How to Make the Best of One-On-One Meetings as a Leader

Learn to derive the maximum benefit out of 1:1 meetings and be admired by your employees.

1. Stick to the Schedule

The first and foremost rule for effective one-on-ones is to conduct the meetings regularly and sincerely.

2. Prepare for the Meeting

I have seen managers start discussions with generic statements like ‘So, what do you want to talk about today?’. Come with specific topics prepared instead.

3. Listen; Speak When Required

One-on-one meetings are for employees to open up to their leaders about their concerns and aspirations. Make the employee comfortable enough to speak up.

4. Focus on Employee

Dedicate the one-on-one meetings to focus on the employee and their career planning. Don’t use this time for project updates or discussions of defect severity.

5. Take Notes

6. Follow up After the Meeting

Following up on the action items of the previous meeting is as vital as taking notes. Just taking notes and not acting upon them makes the whole process unproductive.

Posted by: lrrp | January 12, 2021

A Guide to Become a Lead Software Engineer

Lessons I’ve learned becoming a lead engineer

I asked myself a question, “What would I say to a recent grad or a colleague asking for advice on becoming a lead software engineer?”

I worked in a professional setting for five years. For the last two and a half years, I’ve been on the same project and became one of the lead software engineers. I learned a lot working with many talented engineers and being guided by highly experienced mentors.

So I figured there is some value in my experiences that I could share with others to help them achieve their goals of becoming better engineers.

As engineers, we’re always looking for ways to improve the way we work. What is the secret formula that creates lead software engineers with high salaries?

So here’s what I would say.…

Learn things that are adjacent to your project or industry, so you can apply those learnings almost immediately. We forget most of what we read unless we use it.

For example, the project I’m on utilizes many Amazon Web Services (AWS) technologies. I didn’t know much about their services outside of what my project used, so I decided to take an online course and earn an AWS certification. Now I have a better understanding and can make better design decisions when those opportunities arise.

Utilize the learning opportunities your company provides and take the time to do so. Now you may say, “Where do I find the time for that?” or “My organization does not allow time for growth and development.”

You have to take your learning and development seriously. Make the time to do so because no one else will do it for you!

If the company or the project is not allowing or providing any learning opportunities, I think it’s time to move on.

Stop wasting your time on tasks that don’t add value to the project. Prioritize the high leverage tasks that will maximize the value output per unit of your precious time.

What is leverage? Leverage = value/time.

I apply this equation a lot, and it ensures that I’m using my time wisely to complete high-value items. To apply it, ask yourself these three questions:

  1. How can I finish this task in a shorter amount of time?
  2. How can I increase the value produced by this task?
  3. Is there another task I could spend my time on that would provide more value?
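As a rough sketch, the leverage equation can be turned into a tiny prioritization helper. The task names, value scores, and hour estimates below are made-up examples:

```python
# Rank tasks by leverage = value / time.
# Task names, value scores, and hour estimates are hypothetical.
tasks = [
    {"name": "Fix flaky test", "value": 2, "hours": 1},
    {"name": "Refactor legacy module", "value": 8, "hours": 10},
    {"name": "Automate deployment", "value": 9, "hours": 3},
]

for task in tasks:
    task["leverage"] = task["value"] / task["hours"]

# Highest-leverage work first.
tasks.sort(key=lambda t: t["leverage"], reverse=True)

for task in tasks:
    print(f'{task["name"]}: leverage {task["leverage"]:.2f}')
```

The actual numbers are guesses, of course; the point is to make the value-per-hour comparison explicit before picking what to work on.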

Edmond Lau covers this concept in greater detail and with real-world examples in the book “The Effective Engineer: How to Leverage Your Efforts In Software Engineering to Make a Disproportionate and Meaningful Impact”.

This book was a game-changer for me and is a must-read for any software engineer!

Spend a significant amount of time and effort thinking about the problem at hand before typing away on the keyboard.

Think more about the problem to have a better understanding of the domain in which the problem exists. It allows you to challenge any assumptions involved.

Careful problem solving enables you to derive multiple approaches to solving the problem, not just one.

It is detrimental to a project with deadlines to go back and redo something. This typically occurs when other approaches were not thought of or considered.

“If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.” — Albert Einstein

We’ve all worked with a developer who takes a task, vanishes for a few days, and then reappears with a giant pull request. That daunting pull request eventually gets reviewed, only to reveal there was a better approach, or, even worse, it gets merged because no one thoroughly reviewed it due to its size. Don’t be that developer!

Run your proposed approach by others first. Having a group discussion and involving the right people will help prevent knowledge silos and spread accountability across the team rather than resting on you alone.

So when you submit your pull request, there won’t be any big surprises that might end up causing a rewrite of the entire solution. Also, break the work down into smaller chunks of code changes to make the pull requests more manageable.

Becoming a leader will inherently improve your soft skills, which I’ve found many software engineers lack.

I’m not saying you should work towards being a product owner or some manager role. As software developers, we love to code, but that doesn’t mean we should neglect our leadership qualities.

In my organization, I brought together a small group of developers passionate about creating high-quality software. We discuss things like automation tests, tech debt, bugs, performance, and metrics. Then we strategize on how to achieve our goals and shared vision.

Become a leader. It will improve your ability to effectively communicate and help you achieve any goals or visions you may have for the project.

Author a vision that others can find purpose in.

But becoming an effective leader is not easy, and the path to becoming one is not for the faint of heart. It involves taking a good look in the mirror to expose the traits that are holding you back. You owe it to yourself to go on this journey. It will not only improve your professional life but your personal life as well.

Check out the book “Mastering Leadership: An Integrated Framework for Breakthrough Performance and Extraordinary Business Results” by Robert J. Anderson. (Not all chapters will be helpful, but there is gold to be found.)

This book was at the core of a leadership development program I went through, and it has significantly improved my self-awareness and understanding of leadership.

Posted by: lrrp | January 8, 2021

A Leadership Manifesto: A Guide to Greatness

The root of the word manifesto is the Latin manifestum, which means “to be clear or to be made public.”

Every leader needs a personal manifesto—something that lets everyone know their views, their thoughts, and their beliefs and intentions. When you create your manifesto, you instill a sense of transparency that makes it easy for others to respect, emulate, and trust you.

To create your manifesto, start with what you value. Let it be the guide that steers you to embrace your greatness.

Here are the statements I recommend to my coaching clients who want to create their own leadership manifesto:

I will commit to being an authentic person.

When you commit to being genuine as a leader, you embrace all parts of who you are—the good, the bad, the weak, the strong, the gaps, and the greatness. You’re committed to acknowledging and leveraging the sum of all your parts. If you can be genuine, you will win hearts and minds.

I will take responsibility for my life.

Commit to being fully responsible for your health, happiness, and success. Refuse to blame others or make excuses for your problems and hold yourself fully accountable for whatever you do.

I will communicate in a way that conveys what I mean to say.

The words you speak and the way you communicate will always matter; every time you say something, it reflects who you are, what you think, and what you value. Make sure your heart and mind are saying the same thing.

I will remember to serve something bigger than myself.

The greatest rewards come when you give of yourself: bettering the lives of others, being part of something bigger than yourself, and making a positive difference. People want to be part of something bigger than themselves, in a situation where they feel they are doing something for the greater good.

I will take ownership of my work and strive to make things better within my sphere of influence.

Leaders inspire accountability through their ability to accept responsibility before they place blame, and the best leaders serve humanity in a way that lifts everyone around them. Accountability is the measure of a leader’s height.

I will embrace resilience.

Resilience is accepting your new reality, even if it’s worse than the one you had before. Learn not to see failure as fatal but instead to face everything with boldness and courage. When you do, you will gain the perspective that nothing is off-limits and that every opportunity is a platform for future success, because only those who dare to fail greatly can ever achieve greatly.

I will invest in myself as I invest in others.

No leader sets out to be a leader. People set out to live their lives, expressing themselves fully. When that expression is of value, they become leaders. So the point is not to become a leader but to invest in yourself as a person. To use yourself completely – all your skills, strengths, gifts, and talents – in order to make your vision manifest. You must never hold back. You must, in sum, become the person you are meant to be, and enjoy the process of becoming. The truly great leaders make an ongoing commitment to invest in their own growth as a leader, and in growth and training for those around them.

I will remember there is always a free choice.

You may not always be able to change or choose your situation, but you will always be able to choose who you are going to be in the situation. Choose the character and the values that lead you to embrace your greatness.

I will dedicate myself to my calling.

Leaders aren’t born; they are made. And they are made just like anything else: through hard work. That’s the price we’ll have to pay to achieve our greatness, because not everyone lives up to their calling. But if you know what is important to you, and if you know that what you do matters, you will put your best into what you do and how you do it. To live up to your calling is to tap into your greatness and embrace it.

Lead from within: Becoming a leader is synonymous with becoming yourself. It’s precisely that simple, and it’s also that complicated. Leadership is a choice and a privilege; learn to embrace the greatness it can bestow upon you.

Let’s take a look at Representational State Transfer (REST) principles to learn what they are and what benefits you can get from applying them.

What is REST?

Representational State Transfer (REST) is an architectural style that has gained a lot of popularity in recent years due to its simplicity and scalability.

Before REST gained popularity, SOAP was the de facto way of accessing resources and communicating over the web.

Why should you care about REST?

In this section, I’ll discuss why REST principles are important and why it’s worth the effort to learn more about them. You’ll also learn how to apply them to your backend projects.

1) REST is Easy to Understand and Implement

REST is meant to work over HTTP (actually HTTP was influenced by REST). Therefore it makes use of HTTP verbs that most of us know, such as GET, POST, and PUT.

Even if you do not know what these verbs are about, their names are pretty self-explanatory. Also, the clear separation of client and server code makes it easy for different teams to work on different parts (front end or back end) of applications.

Since it’s easy to understand and also to implement, REST principles can help increase your dev team’s productivity. They are also important if you are going to release a public API for people to develop applications with.

Many people know about REST and HTTP so it will be much easier for them to understand and use your API.
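To make the verb-to-operation mapping concrete, here is a minimal sketch (not a real framework) of how the standard HTTP verbs map to create/read/update/delete on an in-memory resource. The `users` store and the `handle` dispatch function are purely illustrative:

```python
# Minimal sketch: REST maps HTTP verbs to CRUD operations on a
# resource collection. The in-memory "users" store and the dispatch
# function are illustrative, not a real web framework.
users = {}
next_id = 1

def handle(method, path, body=None):
    global next_id
    if method == "POST" and path == "/users":              # create
        user_id = next_id
        next_id += 1
        users[user_id] = body
        return 201, {"id": user_id, **body}
    if method == "GET" and path.startswith("/users/"):     # read
        user_id = int(path.rsplit("/", 1)[1])
        return (200, users[user_id]) if user_id in users else (404, None)
    if method == "PUT" and path.startswith("/users/"):     # replace
        user_id = int(path.rsplit("/", 1)[1])
        users[user_id] = body
        return 200, body
    if method == "DELETE" and path.startswith("/users/"):  # delete
        users.pop(int(path.rsplit("/", 1)[1]), None)
        return 204, None
    return 405, None  # verb not supported for this path

status, created = handle("POST", "/users", {"name": "Ada"})
print(status, created)  # 201 {'id': 1, 'name': 'Ada'}
```

Because the verbs carry the intent (create, read, replace, delete), anyone who knows HTTP can guess what each request does without reading your documentation.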

2) REST Makes your Application More Scalable

There are 2 main reasons why REST can help make your application more scalable:

No State

As we will see in the next section (Principles of REST), one of the core principles of REST is that it’s stateless on the server-side. Therefore each request will be processed independently from the previous ones.

In applications with a server-side state or sessions, a session is stored for possibly every logged-in user. This session data can easily get bloated and start to occupy a lot of resources on the server.

On the other hand, stateless servers only keep resources (memory) occupied when they are handling a request and they free it as soon as the request is processed.

Since the current trend in scalability is horizontal scaling (typically on the cloud), storing server-side sessions can also make it hard to scale your application because it creates some difficult problems.

For example, say that you have many servers that operate behind a load balancer. What will happen if the client gets to server1 in their first request (server1 now has the client’s session) and, at a later time, due to the load on server1, the client gets to server2 which does not know about their previous session data which was stored on server1? Of course, this problem has solutions but it makes scalability more difficult.
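One common way around this (a sketch of one solution, not the only one) is to make each request self-contained: the client carries a signed token that any server behind the load balancer can verify, with no shared session store. The secret key and payload below are illustrative:

```python
# Sketch: instead of server-side sessions, each request carries a
# self-contained, HMAC-signed token that ANY server behind the load
# balancer can verify. The secret key and payload are illustrative.
import base64
import hashlib
import hmac
import json

SECRET = b"shared-secret-key"  # hypothetical; every server holds the same key

def issue_token(payload: dict) -> str:
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    return (body + b"." + sig).decode()

def verify_token(token: str):
    body, sig = token.encode().rsplit(b".", 1)
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered with, or signed with a different key
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"user_id": 42})
# server1 issued the token, but server2 can verify it statelessly
print(verify_token(token))  # {'user_id': 42}
```

This is the idea behind token schemes like JWT: the state travels with the request, so any server can handle it.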

Faster Data Interchange Format

RESTful APIs typically use JSON as the data interchange format. JSON is much more compact and smaller in size compared to XML, and it can also be parsed faster.

While they mostly operate with JSON, also keep in mind that REST APIs are still able to respond with different formats by making use of the Accept header.
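As a small illustration of that flexibility, here is a hypothetical helper that serializes the same resource as JSON or XML depending on the Accept header; the hand-rolled XML serializer is only a sketch:

```python
# Sketch of content negotiation: the same resource serialized as JSON
# or XML depending on the request's Accept header. The render helper
# and the minimal XML serializer are illustrative.
import json

def render(resource: dict, accept: str) -> str:
    if "application/xml" in accept:
        fields = "".join(f"<{k}>{v}</{k}>" for k, v in resource.items())
        return f"<account>{fields}</account>"
    # Default to JSON, the most common REST interchange format.
    return json.dumps(resource)

account = {"account_number": 12345, "currency": "usd"}
print(render(account, "application/json"))
print(render(account, "application/xml"))
```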

3) Caching is Easier with REST

Caching is a critical factor for the scalability and performance of a modern web application. A well-established cache mechanism (with the best hit-rates possible) can drastically decrease the average response time of your server.

REST aims to make caching easier. Since the server is stateless and each request can be processed individually, GET requests should usually return the same response regardless of previous ones and the session.

This makes the GET requests easily cacheable and browsers usually treat them as such. We can also make our POST requests cacheable using Cache-Control and Expires headers.

4) REST is Flexible

By flexibility, I mean that it’s easy to modify and it’s also able to answer many clients who can ask for different data types (XML, JSON, and so on).

The client can specify the type using the Accept header (as I mentioned earlier) and the REST API can return different responses depending on that.

Another mechanism that’s worth mentioning is HATEOAS. If you do not know the term, don’t worry, it basically means: Return the related URLs in the server response for a particular resource.

Take a look at this example from Wikipedia. The client requests account information with account_number from a bank API and gets this response:

    {
        "account": {
            "account_number": 12345,
            "balance": {
                "currency": "usd",
                "value": 100.00
            },
            "links": {
                "deposit": "/accounts/12345/deposit",
                "withdraw": "/accounts/12345/withdraw",
                "transfer": "/accounts/12345/transfer",
                "close": "/accounts/12345/close"
            }
        }
    }
This server makes use of HATEOAS and returns the links for corresponding actions. This makes it very easy to explore the API and also makes it flexible by allowing the server to change the endpoints.

Think of it like this: if the server weren’t applying HATEOAS, the client would need to hardcode endpoints such as “/accounts/:account-id/deposit”. But if the server changed the URL to “/accounts/:account-id/depositMoney”, the client code would also need to change.

With the help of HATEOAS links, the client can check the link by parsing this JSON and easily make the request. If the endpoint changes, they will be provided with the new one, without the need to change the client code.
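Here is a sketch of what such a HATEOAS-aware client might look like: instead of hardcoding the endpoint, it reads the “deposit” URL out of the links the server returned. The response dict below mirrors the bank-account example:

```python
# Sketch of a HATEOAS-aware client: instead of hardcoding endpoints,
# it discovers action URLs from the links in the server's response.
# The response dict mirrors the bank-account example.
response = {
    "account": {
        "account_number": 12345,
        "balance": {"currency": "usd", "value": 100.00},
        "links": {
            "deposit": "/accounts/12345/deposit",
            "withdraw": "/accounts/12345/withdraw",
        },
    }
}

def url_for(resp: dict, action: str) -> str:
    # Follow the server-provided link rather than a hardcoded path, so
    # the client keeps working if the server renames the endpoint.
    return resp["account"]["links"][action]

print(url_for(response, "deposit"))  # /accounts/12345/deposit
```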


In this article, I have tried to express why I value REST and why I believe you should value it as well. I hope that after reading this, the reasons to apply REST standards are clearer to you.

This article can serve as a motivation to learn more about the topic. And I have some good news: I am planning to write about REST Best Practices and common mistakes in the near future.

If you have any questions or want to discuss the topic further, you can feel free to contact me.

Have a Happy New Year and thank you for reading. 🙂

Lessons from a Tech Lead

This article will outline the top signs of an inexperienced programmer and what you can do to overcome them.

All new engineers display certain traits. By paying attention to the patterns exhibited in their actions, their daily routine, and the code they output, you can easily differentiate between an experienced engineer and someone new.

As software engineers, we all go through similar experiences when starting. It takes us months, sometimes years, to outgrow our “beginner” behaviors and turn “pro” so to speak. Junior engineers in every company go through this process.

When you start working at a big company, it is quite common for other engineers to haze you at your first code reviews. They will leave hundreds, if not MILLIONS, of comments on the first diff you try to submit. Many people find this experience so overwhelming that they end up quitting the next day.

A diff (short for “difference”) is the changeset you submit for code review, similar to a pull request. It is a great tool that enables developers to see what has changed between different file versions.

Of the people who get into companies like Google as an engineer, about half quit before they even submit their first line of code. So, if that hasn’t put you off a career in software engineering, here are some tips you can use to get ahead of the curve and be well on your way at your new job.

1. Submitting Large Pieces of Code

Solution: Keep your commits small

It sounds obvious, yet it is by far the most common trap inexperienced programmers fall into. Chances are when you first start at a company, you are going to be assigned a starter project that requires a sizable chunk of code to complete some feature. The key to note is that this isn’t necessarily about you completing the feature in one code submission.

What people want to see is you learning the coding practices and development processes by which code gets submitted. This includes breaking down the project into bite-sized chunks and submitting small, targeted pieces of code, one at a time.

When you first get started, there’s a high chance you’re going to get some fundamental pattern, design, or architecture of your program wrong, which will change the entire way your program is written. This is to be expected. If you submit small diffs incrementally, people can help make sure you’re on the right track. Keeping your commits small also makes it easier to land your diffs.

It is much easier to get approval on multiple diffs that are small and targeted.

Using an editor like VS Code or Atom that shows line-by-line differences can help ensure that you’re not editing lines that don’t need to be changed.

This lesson of keeping diffs small is taught to new engineers in almost every organization. If you join a company having already developed this skill, you’re going to be set.

2. Having Complex, Tangled Code

Solution: Write a Design Doc

Another sign of an inexperienced programmer is over-engineered code that is littered with huge functions, if statements, random helper methods, and premature optimization. Bad code isn’t just code that has lots of bugs or fails to compile or run. An inexperienced engineer will still be able to get their code to run and function decently.

However, the way the logic is set up and the code is written is going to look terrible. It is going to be tangled, messy, and full of complex logic, and it is going to be a complete pain to read. Any senior engineer would get a headache after reading a beginner’s code and would need to go for a coffee break.

The solution is quite simple actually. Write a Design Doc before you start.

The document should describe:

  • What you are going to build
  • What the feature requirements are
  • How you’re going to set things up
  • What functions you are going to need
  • Which classes and data structures you might need

This simple step will help prevent duplicate code and overly complex logic. It gives you a chance to organize your thoughts before you start coding. The code will still work without a design doc, but it will be difficult to maintain as you add features. New bugs will be introduced every time you try to modify the code, you will end up making changes in multiple areas, and the logic will suffer for it.

Write a Design Doc: it will save you time, and it’s going to make you look a bit better too.

3. Low Effectiveness

Solution: Track your actions per minute

Inexperienced developers want to prove themselves and work hard to learn as much as they can. However, they often find themselves not being effective and feeling like they end up wasting tons of time.

When provided with an overly complex UI from a designer, a new engineer might try to build it even if it is near impossible to implement. They won’t have their eye on the goal, and will just do groundwork to prove themselves. Such a project will most likely never launch. A more experienced programmer would identify the goal of the project and push back on the UI designers, proposing alternative solutions that would be much easier to implement while still achieving the desired goal.

Don’t code for coding’s sake. Look at the forest, not the trees.

Keep your eye on the vision you are trying to make happen and make sure that your code ships. If you don’t see your code shipping, try to clear any roadblocks so that you can make an impact where you can.

Try to keep an eye on your actions per minute (APM). Many junior engineers get distracted by flexible working hours and end up wasting time on Reddit, social media, and online shopping. Your time is your own, but if you optimized the time you have at work, imagine how much more productive you would be. You could get more code submitted, read more code, learn the code base, and come up with more ideas. You would be unstoppable!

Don’t be the person who submits one or two pieces of code in an entire week. Try to submit at least one piece of code per day; that will keep you on track. Make it a personal goal and you will be surprised at how much more effective you become.

4. Pride, Ego, and Arrogance

Solution: Ask for help when you need it

You sometimes see this in CS students fresh out of college, though it doesn’t necessarily come from a negative place. Junior engineers typically want to prove themselves early in their careers, which is understandable. However, this can lead them to over-engineer their code, trying to make it look overly smart and clever. Keep it simple!

Try to put your pride aside and ask for help when you need it. You are not expected to know everything as a junior engineer. You are expected to want to learn. So ask questions! It can save you a lot of time which will make you more effective as I mentioned earlier. Don’t be the engineer who wants to tackle a lot of problems on their own and resurfaces weeks later with code that needs to be redone.

Also, discover who the tech lead is (not the manager).

Just for fun, let us play out a common scenario involving a proud graduate and the Tech Lead. The Tech Lead reviews the engineer’s code and leaves several comments for improvement. The engineer can’t believe this and decides to storm up to the Tech Lead’s desk and personally interrogate him about each comment while attempting to justify his code line by line. His arrogance is just incredible! The Tech Lead is having none of it. While the young engineer tries to defend himself, he pokes fun at the people he thinks are less qualified than him. The Tech Lead sees that the engineer is trying to do something stupid, or something “cool”. The problem is, the Tech Lead doesn’t care if you think you’re cool or if you want him to think you’re cool, and he ends the argument with this:

Fine. You’re cool, I’m cool. Everybody’s cool, I’m just cooler than you. Okay!

The moral of the story is, always try to learn from those around you, don’t be arrogant and DON’T get on the Tech Lead’s bad side.

Posted by: lrrp | December 24, 2020

Design Patterns for Microservices

All about the design patterns of microservice architecture to overcome its challenges.

Microservice architecture has become the de facto choice for modern application development. Though it solves certain problems, it is not a silver bullet. It has several drawbacks, and when using this architecture, there are numerous issues that must be addressed. This brings about the need to learn common patterns in these problems and solve them with reusable solutions. Thus, design patterns for microservices need to be discussed. Before we dive into the design patterns, we need to understand the principles on which microservice architecture is built:

  1. Scalability
  2. Availability
  3. Resiliency
  4. Independent, autonomous
  5. Decentralized governance
  6. Failure isolation
  7. Auto-Provisioning
  8. Continuous delivery through DevOps

Applying all these principles brings several challenges and issues. Let’s discuss those problems and their solutions.

1. Decomposition Patterns

a. Decompose by Business Capability


Microservices is all about making services loosely coupled, applying the single responsibility principle. However, breaking an application into smaller pieces has to be done logically. How do we decompose an application into small services?


One strategy is to decompose by business capability. A business capability is something that a business does in order to generate value. The set of capabilities for a given business depend on the type of business. For example, the capabilities of an insurance company typically include sales, marketing, underwriting, claims processing, billing, compliance, etc. Each business capability can be thought of as a service, except it’s business-oriented rather than technical.

b. Decompose by Subdomain


Decomposing an application using business capabilities might be a good start, but you will come across so-called “God Classes” which will not be easy to decompose. These classes will be common among multiple services. For example, the Order class will be used in Order Management, Order Taking, Order Delivery, etc. How do we decompose them?


For the “God Classes” issue, DDD (Domain-Driven Design) comes to the rescue. It uses subdomains and bounded context concepts to solve this problem. DDD breaks the whole domain model created for the enterprise into subdomains. Each subdomain will have a model, and the scope of that model will be called the bounded context. Each microservice will be developed around the bounded context.

Note: Identifying subdomains is not an easy task. It requires an understanding of the business. Like business capabilities, subdomains are identified by analyzing the business and its organizational structure and identifying the different areas of expertise.

c. Strangler Pattern


So far, the design patterns we have talked about decompose greenfield applications, but 80% of the work we do is with brownfield applications, which are big, monolithic applications. Applying all the above design patterns to them is difficult, because breaking them into smaller pieces while they are live and in use is a big task.


The Strangler pattern comes to the rescue. The Strangler pattern is based on an analogy to a vine that strangles a tree that it’s wrapped around. This solution works well with web applications, where a call goes back and forth, and for each URI call, a service can be broken into different domains and hosted as separate services. The idea is to do it one domain at a time. This creates two separate applications that live side by side in the same URI space. Eventually, the newly refactored application “strangles” or replaces the original application until finally you can shut off the monolithic application.
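A rough sketch of the idea, assuming the monolith and the new service are reachable as handlers (here they are just stand-in functions): the router sends already-migrated URI spaces to the new services, and everything else still falls through to the monolith.

```python
# Domains that have already been carved out of the monolith.
MIGRATED_PREFIXES = {"/orders", "/billing"}

def legacy_monolith(path):
    return f"monolith handled {path}"

def new_microservice(path):
    return f"microservice handled {path}"

def strangler_router(path):
    """Send migrated URI spaces to the new services; everything
    else still falls through to the live monolith."""
    if any(path.startswith(p) for p in MIGRATED_PREFIXES):
        return new_microservice(path)
    return legacy_monolith(path)

print(strangler_router("/orders/42"))   # handled by the new service
print(strangler_router("/reports/q3"))  # still handled by the monolith
```

As more domains are migrated, prefixes are added to the set until the monolith receives no traffic and can be shut off.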

2. Integration Patterns

a. API Gateway Pattern


When an application is broken down into smaller microservices, there are a few concerns that need to be addressed:

  1. How to call multiple microservices while abstracting producer information.
  2. On different channels (like desktop, mobile, and tablets), apps need different data in response from the same backend service, as the UI might be different.
  3. Different consumers might need the responses from reusable microservices in different formats. Who will do the data transformation or field manipulation?
  4. How to handle different types of protocols, some of which might not be supported by the producer microservice.


An API Gateway helps to address these concerns, and others raised by a microservice implementation:

  1. An API Gateway is the single point of entry for any microservice calls.
  2. It can work as a proxy service to route a request to the concerned microservice, abstracting the producer details.
  3. It can fan out a request to multiple services and aggregate the results to send back to the consumer.
  4. One-size-fits-all APIs cannot solve all the consumer’s requirements; this solution can create a fine-grained API for each specific type of client.
  5. It can also convert the protocol request (e.g. AMQP) to another protocol (e.g. HTTP) and vice versa so that the producer and consumer can handle it.
  6. It can also offload the authentication/authorization responsibility of the microservice.
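Point 3 (fan-out and aggregation) is the easiest to sketch. In this toy example, in-process functions stand in for hypothetical remote order and shipping services; a real gateway would make the two calls over HTTP, ideally in parallel.

```python
def order_service(order_id):
    return {"order_id": order_id, "items": ["book", "pen"]}

def shipping_service(order_id):
    return {"order_id": order_id, "status": "shipped"}

def api_gateway_order_details(order_id):
    """Single entry point: fan the request out to both producer
    services, aggregate the results, and return one response."""
    order = order_service(order_id)
    shipping = shipping_service(order_id)
    return {
        "order_id": order_id,
        "items": order["items"],
        "shipping_status": shipping["status"],
    }

print(api_gateway_order_details(7))
```

The consumer makes one call and never learns where the producer services live, which is exactly the abstraction points 1 and 2 describe.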

b. Aggregator Pattern


We have talked about resolving the data aggregation problem in the API Gateway Pattern. However, we will talk about it here holistically. When breaking the business functionality into several smaller logical pieces of code, it becomes necessary to think about how to collate the data returned by each service. This responsibility cannot be left with the consumer, as then it might need to understand the internal implementation of the producer application.


The Aggregator pattern helps to address this. It talks about how we can aggregate the data from different services and then send the final response to the consumer. This can be done in two ways:

1. A composite microservice will make calls to all the required microservices, consolidate the data, and transform the data before sending it back.

2. An API Gateway can also partition the request to multiple microservices and aggregate the data before sending it to the consumer.

If any business logic is to be applied, it is recommended to choose a composite microservice; otherwise, the API Gateway is the established solution.

c. Client-Side UI Composition Pattern


When services are developed by decomposing business capabilities/subdomains, the services responsible for user experience have to pull data from several microservices. In the monolithic world, there used to be only one call from the UI to a backend service to retrieve all data and refresh/submit the UI page. However, now it won’t be the same. We need to understand how to do it.


With microservices, the UI has to be designed as a skeleton with multiple sections/regions of the screen/page. Each section will make a call to an individual backend microservice to pull the data. That is called composing UI components specific to the service. Frameworks like AngularJS and ReactJS help to do that easily. These screens are known as Single Page Applications (SPA). This enables the app to refresh a particular region of the screen instead of the whole page.

3. Database Patterns

a. Database per Service


There is a problem with how to define database architecture for microservices. Following are the concerns to be addressed:

1. Services must be loosely coupled. They can be developed, deployed, and scaled independently.

2. Business transactions may enforce invariants that span multiple services.

3. Some business transactions need to query data that is owned by multiple services.

4. Databases must sometimes be replicated and sharded in order to scale.

5. Different services have different data storage requirements.


To solve the above concerns, one database per microservice must be designed; it must be private to that service, accessed only through the microservice’s API, and never by other services directly. For example, with relational databases we can use private-tables-per-service, schema-per-service, or database-server-per-service. Each microservice should have a separate database id so that access can be restricted, putting up a barrier that prevents it from using other services’ tables.

b. Shared Database per Service


We have talked about one database per service being ideal for microservices, but that is only possible when the application is greenfield and developed with DDD. If the application is a monolith being broken into microservices, the denormalization is not that easy. What is the suitable architecture in that case?


A shared database per service is not ideal, but that is the working solution for the above scenario. Most people consider this an anti-pattern for microservices, but for brownfield applications, this is a good start to break the application into smaller logical pieces. This should not be applied for greenfield applications. In this pattern, one database can be aligned with more than one microservice, but it has to be restricted to 2-3 maximum, otherwise scaling, autonomy, and independence will be challenging to execute.

c. Command Query Responsibility Segregation (CQRS)


Once we implement database-per-service, there is a requirement for queries that join data owned by multiple services, which is no longer possible with separate databases. So how do we implement queries in microservice architecture?


CQRS suggests splitting the application into two parts — the command side and the query side. The command side handles the Create, Update, and Delete requests. The query side handles the query part by using the materialized views. The event sourcing pattern is generally used along with it to create events for any data change. Materialized views are kept updated by subscribing to the stream of events.
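A toy sketch of that split, with illustrative names rather than a real framework: the command side records each change as an event, and the query side keeps a materialized view up to date by subscribing to the event stream.

```python
events = []        # the event stream (event sourcing)
product_view = {}  # materialized view read by the query side

def handle_create_product(product_id, name):
    """Command side: append an event describing the change."""
    event = {"type": "ProductCreated", "id": product_id, "name": name}
    events.append(event)
    project(event)   # in production this would be an async subscription

def project(event):
    """Query side: keep the materialized view updated from events."""
    if event["type"] == "ProductCreated":
        product_view[event["id"]] = event["name"]

def query_product(product_id):
    """Reads come from the view, never from the command side."""
    return product_view.get(product_id)

handle_create_product(1, "keyboard")
print(query_product(1))
```

Because the view is rebuilt purely from events, it can join data from several services’ event streams, which is what makes cross-service queries possible again.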

d. Saga Pattern


When each service has its own database and a business transaction spans multiple services, how do we ensure data consistency across services? For example, for an e-commerce application where customers have a credit limit, the application must ensure that a new order will not exceed the customer’s credit limit. Since Orders and Customers are in different databases, the application cannot simply use a local ACID transaction.


A Saga represents a high-level business process that consists of several subrequests, each of which updates data within a single service. Each subrequest has a compensating request that is executed to undo it when a later request fails. It can be implemented in two ways:

  1. Choreography — When there is no central coordination, each service produces and listens to another service’s events and decides if an action should be taken or not.
  2. Orchestration — An orchestrator (object) takes responsibility for a saga’s decision making and sequencing business logic.

4. Observability Patterns

a. Log Aggregation


Consider a use case where an application consists of multiple service instances that are running on multiple machines. Requests often span multiple service instances. Each service instance generates a log file in a standardized format. How can we understand the application behavior through logs for a particular request?


We need a centralized logging service that aggregates logs from each service instance. Users can search and analyze the logs. They can configure alerts that are triggered when certain messages appear in the logs. For example, PCF has Loggregator, which collects logs from each component (router, controller, diego, etc.) of the PCF platform along with applications. AWS CloudWatch does the same.

b. Performance Metrics


When the service portfolio increases due to microservice architecture, it becomes critical to keep a watch on the transactions so that patterns can be monitored and alerts sent when an issue happens. How should we collect metrics to monitor application performance?


A metrics service is required to gather statistics about individual operations. It should aggregate the metrics of the application’s services and provide reporting and alerting. There are two models for aggregating metrics:

  • Push — the service pushes metrics to the metrics service, e.g. NewRelic, AppDynamics
  • Pull — the metrics service pulls metrics from the service, e.g. Prometheus

c. Distributed Tracing


In a microservice architecture, requests often span multiple services. Each service handles a request by performing one or more operations across multiple services. Then, how do we trace a request end-to-end to troubleshoot the problem?


We need a service which

  • Assigns each external request a unique external request-id.
  • Passes the external request id to all services.
  • Includes the external request-id in all log messages.
  • Records information (e.g. start time, end time) about the requests and operations performed when handling an external request in a centralized service.

Spring Cloud Sleuth, along with Zipkin server, is a common implementation.
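The core of the idea can be shown without any framework: assign a correlation id at the edge, pass it to every downstream call, and stamp it on every log line. Services here are plain functions for illustration.

```python
import uuid

log_lines = []

def log(request_id, service, message):
    # Every log line carries the external request id.
    log_lines.append(f"[{request_id}] {service}: {message}")

def inventory_service(request_id):
    log(request_id, "inventory", "checking stock")
    return True

def order_service(request_id):
    log(request_id, "order", "placing order")
    return inventory_service(request_id)  # id propagated downstream

def handle_external_request():
    request_id = str(uuid.uuid4())        # assigned once at the edge
    order_service(request_id)
    return request_id

rid = handle_external_request()
# All log lines for this one request can be found by a single id.
print([line for line in log_lines if line.startswith(f"[{rid}]")])
```

Sleuth automates exactly this propagation (plus timing data) via HTTP headers, and Zipkin stores and visualizes the recorded spans.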

d. Health Check


When microservice architecture has been implemented, there is a chance that a service instance is up but not able to handle transactions. In that case, how do you ensure requests don’t go to those failed instances? The load balancer or service registry needs health information in order to route around them.


Each service needs to have an endpoint that can be used to check the health of the application, such as /health. This API should check the status of the host, the connections to other services/infrastructure, and any service-specific logic.

Spring Boot Actuator does implement a /health endpoint and the implementation can be customized, as well.
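A minimal sketch of such a check, assuming two hypothetical dependency probes; a real endpoint would expose this dict as JSON over HTTP, much like Actuator does.

```python
def database_reachable():
    return True   # stand-in for a real connection ping

def payment_service_reachable():
    return True   # stand-in for a call to a downstream service

def health():
    """Aggregate dependency checks into one UP/DOWN status."""
    checks = {
        "database": database_reachable(),
        "payment_service": payment_service_reachable(),
    }
    status = "UP" if all(checks.values()) else "DOWN"
    return {"status": status, "checks": checks}

print(health())
```

A load balancer or registry polls this endpoint and removes any instance that reports DOWN from rotation.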

5. Cross-Cutting Concern Patterns

a. External Configuration


A service typically calls other services and databases as well. For each environment like dev, QA, UAT, prod, the endpoint URL or some configuration properties might be different. A change in any of those properties might require a re-build and re-deploy of the service. How do we avoid code modification for configuration changes?


Externalize all the configuration, including endpoint URLs and credentials. The application should load them either at startup or on the fly.

Spring Cloud config server provides the option to externalize the properties to GitHub and load them as environment properties. These can be accessed by the application on startup or can be refreshed without a server restart.
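The simplest form of externalization needs no config server at all: read endpoints and credentials from the environment at startup, so the same build runs unchanged in dev, QA, and prod. The variable names below are made up for illustration.

```python
import os

def load_config():
    """Load configuration from the environment, with dev defaults."""
    return {
        "orders_url": os.environ.get("ORDERS_URL",
                                     "http://localhost:8080/orders"),
        "db_user": os.environ.get("DB_USER", "dev_user"),
    }

config = load_config()
print(config["orders_url"])  # dev default unless the env overrides it
```

Changing an endpoint for a new environment is then a deployment-time setting, not a code change and re-build.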

b. Service Discovery Pattern


When microservices come into the picture, we need to address a few issues in terms of calling services:

  1. With container technology, IP addresses are dynamically allocated to service instances. Every time an address changes, a consumer service can break and require manual changes.
  2. The consumer has to remember each service URL, which creates tight coupling.

So how does the consumer or router know all the available service instances and locations?


A service registry needs to be created which will keep the metadata of each producer service. A service instance should register to the registry when starting and should de-register when shutting down. The consumer or router should query the registry and find out the location of the service. The registry also needs to do a health check of the producer service to ensure that only working instances of the services are available to be consumed through it. There are two types of service discovery: client-side and server-side. An example of client-side discovery is Netflix Eureka and an example of server-side discovery is AWS ALB.
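A toy client-side registry in the spirit of the paragraph above: instances register on startup, deregister on shutdown, and consumers query the registry instead of hard-coding addresses. Health checking is omitted for brevity.

```python
registry = {}  # service name -> list of instance addresses

def register(service, address):
    """Called by a producer instance when it starts up."""
    registry.setdefault(service, []).append(address)

def deregister(service, address):
    """Called by a producer instance when it shuts down."""
    registry[service].remove(address)

def discover(service):
    """Consumer-side lookup; here, simply the first instance.
    A real client would load-balance across all of them."""
    instances = registry.get(service, [])
    if not instances:
        raise LookupError(f"no instances of {service}")
    return instances[0]

register("orders", "10.0.0.5:8080")
register("orders", "10.0.0.6:8080")
deregister("orders", "10.0.0.5:8080")
print(discover("orders"))  # consumers never hard-code the address
```

Eureka plays the role of `registry` here; with server-side discovery such as AWS ALB, the lookup happens inside the router instead of the consumer.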

c. Circuit Breaker Pattern


A service generally calls other services to retrieve data, and there is the chance that the downstream service may be down. There are two problems with this: first, the request will keep going to the down service, exhausting network resources, and slowing performance. Second, the user experience will be bad and unpredictable. How do we avoid cascading service failures and handle failures gracefully?


The consumer should invoke a remote service via a proxy that behaves in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period, all attempts to invoke the remote service will fail immediately. After the timeout expires the circuit breaker allows a limited number of test requests to pass through. If those requests succeed, the circuit breaker resumes normal operation. Otherwise, if there is a failure, the timeout period begins again.

Netflix Hystrix is a good implementation of the circuit breaker pattern. It also helps you to define a fallback mechanism that can be used when the circuit breaker trips. That provides a better user experience.
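The state machine just described fits in a few lines. This is a simplified sketch, not Hystrix itself; the clock is injectable so the timeout behavior can be simulated.

```python
import time

class CircuitBreaker:
    """Closed -> open after N consecutive failures; open -> fail
    fast; after the timeout, allow one trial call (half-open)."""

    def __init__(self, threshold=3, timeout=30.0, clock=time.monotonic):
        self.threshold = threshold
        self.timeout = timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None   # None means the circuit is closed

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # half-open: let a trial through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()   # trip the breaker
            raise
        self.failures = 0   # a success closes the circuit again
        return result
```

A consumer wraps every remote call in `breaker.call(...)`; a fallback (cached data, a default response) would go in the handler for the fail-fast `RuntimeError`.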

d. Blue-Green Deployment Pattern


With microservice architecture, one application can have many microservices. If we stop all the services then deploy an enhanced version, the downtime will be huge and can impact the business. Also, the rollback will be a nightmare. How do we avoid or reduce downtime of the services during deployment?


The blue-green deployment strategy can be implemented to reduce or remove downtime. It achieves this by running two identical production environments, Blue and Green. Let’s assume Green is the existing live instance and Blue is the new version of the application. At any time, only one of the environments is live, with the live environment serving all production traffic. All cloud platforms provide options for implementing a blue-green deployment.
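Mechanically, the cutover is just a router pointer flip, which is why rollback is cheap. A tiny sketch with illustrative environment names:

```python
environments = {
    "green": lambda: "response from v1 (green)",
    "blue":  lambda: "response from v2 (blue)",
}
live = "green"   # green currently serves all production traffic

def route_request():
    return environments[live]()

def switch_to(env):
    """Cut all traffic over; the other environment stays warm."""
    global live
    live = env

print(route_request())  # served by green
switch_to("blue")       # new version verified, flip the traffic
print(route_request())  # served by blue
```

Rolling back is simply `switch_to("green")`, with no redeployment required.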

There are many other patterns used with microservice architecture, like Sidecar, Chained Microservice, Branch Microservice, Event Sourcing Pattern, Continuous Delivery Patterns, and more. The list keeps growing as we get more experience with microservices. I am stopping now to hear back from you on what microservice patterns you are using.

Posted by: lrrp | December 24, 2020

You will find a way….

Posted by: lrrp | December 24, 2020

The 4 Worst Career Mistakes You Can Make as a Developer

How not to plan your career and avoid a lot of failures.

Your success as a developer is based on how you plan each step in your career.

You must be a careful strategist, looking at each piece on the chessboard, deciding what to do next and how a certain move can improve your future.

Unfortunately, I’ve seen a lot of developers failing at this game, especially the youngest ones. And after years of working as a professional developer and switching between many companies, I’ve come to realize the absolute worst errors you can make in your career as a coder.

Let’s analyze them.

Sticking To One Job For Too Long

I get it, why should you change your employer if you like the job you have now?

The problem with this approach is that you’re missing out on a lot of opportunities out there. First of all, switching jobs means upgrading your salary 99% of the time, and companies know it. That’s why they are willing to pay you up to 50% more than your current paycheck. Plus, changing employers also means working on a new architecture and stack, tremendously increasing your skills and seniority in certain technologies.

In the end, there’s no formula for knowing when you should change your employer, partly because sometimes the reason you want to leave is an awful environment. But the rule of thumb is simply to keep your eyes open to all the opportunities out there, and every year, ask yourself what level you’ve reached in your career and evaluate whether another company could make you stand out more at that point.

Working on an Old Stack

I’m sorry to say this, but if you’re working on a legacy system or with a bunch of old technologies, please leave your employer now. I really don’t find the value in working with something that you won’t find in the industry anymore.

Staying in a situation like that means you will lose market value as time goes on. Plus, you’re not really learning much, as new patterns and languages have emerged while you were busy with old code.

Be careful about the technologies you’re working with, as they can become a cage for your career.

Being Unprofessional

Being a professional developer is a matter of attitude, the right one. You must take responsibility for your actions, the code you write, and how you interact with your peers. You must be ready to keep updating yourself every day, and to be productive even when your mood doesn’t really help you.

Being unprofessional is the complete opposite of that: developers who are dishonest about respecting deadlines and about their commitment to writing good code, people who are just heating up chairs waiting for the paycheck at the end of every month, never caring about doing more than enough.

I’ve seen a lot of professionalism and a lot of unprofessionalism in my career, and I have to admit I’ve been on both sides of the spectrum many times. But I guarantee you I strive for the first every day, because that’s what makes a great developer and a long-lived career.

Which one will you choose?

Lacking Ambition

“It’s really a pity!”

This is the quote I use when I see people who don’t do more than enough in their job, missing the chance to shine and improve their career. There’s nothing really wrong with having no ambition beyond being an everyday developer, but I like to see people as creatures full of potential. So I invite you to try to release that potential, to look for opportunities around you, to do something great, and to draw inspiration from becoming better every day.

If that’s not how you feel, well “that’s a pity!”


I invite you to see your career as a strategy game, where you have to carefully decide each step, and where every move you make can be either a win or a loss. Hopefully, this article has helped you see how to avoid a harsh defeat in this game.


Posted by: lrrp | December 8, 2020

Everything A Developer Must Know About Microservices

Microservices have become the application platform of choice for cloud application development. Nginx conducted a survey and found that about 70% of organizations are either using or investigating microservices, with nearly one-third currently using them in production. Gartner, a global research and advisory firm, defines a microservice as,

“a service-oriented application component that is tightly scoped, strongly encapsulated, loosely coupled, independently deployable and independently scalable”

This article is about refreshing your microservices knowledge. Whether or not you have worked with microservices before, everything you should know is documented here. Let’s get started. Shall we?

So, what the heck is a Microservice?

Microservice architecture, or simply microservices, is an SDLC approach in which larger applications are built as a collection of small functional modules. These modules are independently deployable and scalable, target specific business goals, and communicate with each other over standard protocols like HTTP request/response with resource APIs and lightweight asynchronous messaging.

The major advantage of having a microservices architecture is that the modules can be implemented using different programming languages, have their own databases, and deployed on different software environments like on-premises or cloud.

Advantages of Microservices

  • Multiple services can be deployed independently in different environments, like on-premises or on cloud service providers
  • Multiple services can be developed independently based on their functionality
  • If any one of the services fails, the other services continue to work, as the fault of the failing service is isolated
  • Scaling individual components is easier in microservices, since scaling the other components is not required, unlike in a monolithic architecture
  • Multiple technologies can be used for developing different components of the same application

Monolithic vs Service-Oriented vs Microservice

  • Monolithic Architecture packages all the software components of an application together in one tightly coupled unit, deployed as a whole.
  • Service-Oriented Architecture is a collection of services that communicate with each other. The communication can involve either simple data passing or two or more services coordinating some activity.
  • Microservice Architecture is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.

When should Microservices be used?

When starting on newer applications, you would definitely want it to be easily scalable, maintainable, deployable and testable. Using microservices, these can be implemented more efficiently and used across various platforms.

Service Discovery

Service instances have dynamically assigned network locations. Moreover, the set of service instances changes dynamically because of autoscaling, failures, and upgrades. Consequently, your client code needs a more elaborate mechanism to locate services.

Service discovery resolves this. It is a service running within the microservice architecture that registers entries for all of the services running under the service mesh; it is how applications and microservices locate each other on a network.

The Client‑Side Discovery Pattern

The Server‑Side Discovery Pattern

Scaling of Microservices

  • Caching can be applied at the microservice layer, where it is easy to manage: since the microservice is the single source of truth, invalidating the cache is straightforward.
  • Caching can also be introduced at the API Gateway layer, where one can define caching rules, like when to invalidate the cache.
  • Containers can be shut down or scaled down when demand is low.

Communication in Microservices

The communication protocol can broadly be divided into two categories: synchronous communication and asynchronous communication.

Synchronous Communication

Asynchronous Communication

Which Communication Protocol should be used?

  1. You must use asynchronous communication while handling HTTP POST/PUT (anything that modifies the data) requests, using some reliable queue mechanism (RabbitMQ, AMQP, etc.)
  2. You can use synchronous communication for the Aggregation pattern at the API Gateway level, which should not include any business logic other than aggregation. Data values must not be transformed at the Aggregator; otherwise, it defeats the purpose of the Bounded Context. In asynchronous communication, events should be published to a queue. Events contain data about the domain; they should not dictate what to do (the action) with this data.
  3. If microservice to microservice communication still requires synchronous communication for GET operation, then you must partition your microservices for bounded context, and create some tasks in backlog/technical debt.

Bounded Context

“Bounded Context is a central pattern in Domain-Driven Design. It is the focus of DDD’s strategic design section which is all about dealing with large models and teams. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships.”

Bounded context defines tangible boundaries of applicability of some sub-domain. It is an area where a certain sub-domain makes sense, while others don’t. It can be a conversation, a presentation, a code project with physical boundaries defined by the artifact.

Domain-Driven Design

Spring Cloud

Well, microservice architecture is indeed one of the most sought-after application design approaches, and one that every developer must know. I hope this article provides you with an understanding of microservices and their components and concepts. But again, there is always room for more.
