
Fake Code Review with ‘LGTM’

Blindly accepting Pull Requests with a meaningless comment is not Code Review. Do code review casually (old-school), but back it up with real Automated E2E regression testing to prevent bad check-ins.

By Zhimin Zhan · Published 9 months ago · 4 min read

This article is one of the “Be aware of Fake Test Automation/DevOps Engineers” series.

Some fake Agile software companies focus on practices that sound good but are poorly conducted; Code Review is one of them. The objective of Code Review is to verify a programmer's check-in: to make sure it follows coding standards and, more importantly, will not break existing functionality.

Code Review, in its intention, is good. However, in reality, the way it is practised in some projects is laughable. Once, I worked as a SET (Software Engineer in Test) in a software team. One of the practices the management enforced was "Code Review" on Git Pull Requests (PRs). A developer created a pull request (a check-in of his work), and then someone else (the reviewer) needed to review and accept it. In theory, the reviewer did a thorough code review of the pull request before he/she approved the PR.

It is absurd. Linus Torvalds, the creator of Git, called it “pure garbage”.

Here is what happened in the project.

A developer shouted: “Can someone LGTM my PR #xxxx?”

Another answered: “Sure”.

The developer replied: “Thanks.”

I saw a lot of "LGTM" commit messages in the log. Later, someone explained it to me: "Looks Good To Me". Yes, the reviewers did not want to spend time typing those 15 characters, let alone open, read and understand the code changes in the PRs.

How could it (a process this company listed on its website with pride) turn out like this? The fundamental reason is that management lacks an understanding of the human side of software development. Programmers are not robots. I highly recommend two classic books on this topic: Peopleware and The Psychology of Computer Programming. Of course, there are many similar useless or bad practices, such as Retrospectives and Story Point Estimation. They all sound "good" in an ideal world. In reality, they are mostly a waste of time.

The rule that every PR must be approved by another person is absurd; the quick 'LGTM' is its logical consequence. Every developer wants his check-in approved instantly. To get that, he will naturally and blindly approve others' PRs in return. At the very beginning, some developers might open the PR and quickly skim the content. Let's face it: it is hard to look at a section of changed code in isolation and determine its impact. They can only offer superficial comments, such as pointing out spelling errors or formatting issues. Very soon, they will wave everything through with an "LGTM", without even looking.

Many years ago, I worked on a large government project which had a strict code check-in rule: every programmer's check-in had to be associated with a task (created by a business analyst or the team leader in the requirement management software). The purpose was traceability.

The end result was that, after a week, all developers (including me) checked in code with a task named “Broken”.

Code Review should be request-based, such as: "John, I am unsure whether this check-in will affect the payment module. Can you sit with me to review it when you have time?" John is the person who understands the payment module well. When he is free, there is a much better chance of him finding potential issues by checking the code and discussing it with the developer. Yes, old school, but it works.

Some may argue: "Without an enforced code review process, what happens if there is a bad check-in?" First of all, the 'LGTM' code review process is, based on my understanding of Linus's comment, 'pure garbage'. Therefore, ditch it first.

The next question is how to prevent bad check-ins. The answer is: you cannot prevent them with a strict code review process. A proper code review takes time and needs the right reviewer, and even then it is not prevention but rather risk reduction.

If the purpose of Code Review for a particular check-in is to discuss software design, then human interaction is certainly required. Here I will focus on the most common reason for Code Review: preventing breakage of existing business features, i.e., regression issues.

The most practical way to avoid regression errors is to run Automated E2E tests against the new code. The developer should first run some automated tests locally; this is called "Shift-Left Testing", a term that is becoming popular.
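To make "run some automated tests locally" concrete, here is a minimal sketch of the kind of E2E check a developer could run before pushing. I am using Python with Selenium WebDriver and pytest purely for illustration (any E2E framework works); the URL, credentials and element IDs below are hypothetical, not from a real project.

```python
# A hypothetical login smoke test. The URL, credentials and element IDs
# are placeholders for illustration only.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    d = webdriver.Chrome()   # assumes Chrome is installed locally
    yield d
    d.quit()


def test_user_can_sign_in(driver):
    driver.get("https://staging.example.com/login")          # hypothetical app URL
    driver.find_element(By.ID, "username").send_keys("demo")
    driver.find_element(By.ID, "password").send_keys("demo-password")
    driver.find_element(By.ID, "sign-in").click()
    assert "Welcome" in driver.page_source                   # a crude but quick check
```

A developer who just touched the sign-in area would run this handful of related tests locally in a minute or two, before the check-in ever reaches the team.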

The "Continuous Integration at Facebook" presentation gives a great illustration of Shift-Left Testing: the developer runs the related automated tests locally first; if they pass, he runs the full regression suite against the new build on the CT server.
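One simple way to wire up the "local subset first, full regression after push" flow is a Git pre-push hook that runs only the tests related to the area you changed. The sketch below is my own illustration, not taken from the presentation; the `payment` test marker is hypothetical, and the full regression run is left to the CI/CT server.

```python
#!/usr/bin/env python3
# .git/hooks/pre-push (make it executable) -- a Shift-Left Testing sketch.
# Runs the related E2E subset before the push; "payment" is a hypothetical
# pytest marker used to tag the tests for the module being changed.
import subprocess
import sys

result = subprocess.run(["pytest", "-m", "payment", "-x"])  # -x: stop on first failure
if result.returncode != 0:
    print("Related E2E tests failed -- push aborted.")
    sys.exit(1)

# A zero exit lets the push go through; the CT server then runs the full regression suite.
sys.exit(0)
```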

I know you will ask: "How can that (full regression) be possible? It won't work; regression testing will take a long time, maybe days or weeks." I can tell you it is possible if you do test automation and Continuous Testing properly. The goal of Facebook's CI is to provide quick feedback, and the Facebook engineering team defines how quick is quick enough.

I have practised Test Automation and Continuous Testing for over 15 years, and quick feedback (~15 minutes) is certainly possible; in fact, it is quite natural. While I don't have a massive test lab infrastructure like Facebook's, I don't have a lot of check-ins from many people or a huge regression suite either. For most projects (my clients' and my own), I could achieve a feedback time of 30 minutes or less. With sufficient build agent machines, a 10-minute feedback loop for a large E2E test suite (500+ tests) is achievable.
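For readers doing the maths on feedback time: properly written E2E tests are independent of each other, so they parallelise well, and the wall-clock time is roughly the serial run time divided by the number of build agents. The suite size (500+) and the 10-minute and 30-minute targets come from the text above; the average test duration and agent counts below are my own assumptions for illustration.

```python
import math

suite_size = 500          # a large E2E suite, as mentioned above (500+ tests)
avg_test_minutes = 1.5    # assumed average duration of one E2E test
target_minutes = 10       # the feedback loop described above as achievable

serial_minutes = suite_size * avg_test_minutes                 # ~750 minutes run one by one
agents_for_target = math.ceil(serial_minutes / target_minutes)

print(f"Serial run: ~{serial_minutes:.0f} minutes")
print(f"Agents needed for a {target_minutes}-minute loop: ~{agents_for_target}")
print(f"With 25 agents: ~{serial_minutes / 25:.0f} minutes")   # ~30 minutes
```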

My automated E2E regression test suite (running on the BuildWise CT server) has detected many regression issues in my check-ins. The great value of Test Automation and CT is hard to describe in words.

I hope you can now see the difference between 'Fake Agile' (LGTM code review) and Real Agile (Shift-Left Testing backed by real Test Automation and CT).

--

The original article was published on my Medium Blog, 2021-08-19

About the Creator

Zhimin Zhan

Test automation & CT coach, author, speaker and award-winning software developer.

A top writer on Test Automation, with 150+ articles featured in leading software testing newsletters.

