
Best Coding Practices and Effective Code Review

A2SV - University of Rwanda

Writing code is easy

Writing maintainable code is hard

Writing code that a team can trust is software engineering

What This Lecture Is Really About

Not just syntax

Not just “clean code”

Not just GitHub mechanics

This lecture is about engineering discipline

Good engineering creates speed. Bad engineering creates chaos.

A Hard Truth

A project rarely fails because people cannot write any code.

A project often fails because code becomes:

Unclear

Untestable

Unreviewable

Hard to change

The Wrong Student Mindset

Common dangerous thoughts:

“It works, so it is good enough.”

“We can clean it later.”

“Tests take too much time.”

“Review is not needed for small changes.”

“My teammates will understand it.”

Why this is dangerous:

Later usually never comes

Small bad habits become team culture

Team culture becomes product quality

The Right Engineering Mindset

A professional engineer asks:

Can another person understand this quickly?

Can this be safely changed later?

Can this be tested?

Can this be reviewed well?

Will this make the team faster or slower?

The Cost of Bad Code

Bad code causes:

More bugs

Slower onboarding

Painful debugging

Duplicated work

Broken features after changes

Fear of touching the codebase

The Benefit of Good Code

Good code gives:

Faster development

Easier collaboration

Safer refactoring

Better reviews

More trust

More consistent products

Key Message

“Your code is not only for the computer. Your code is for your teammates, your future self, and the product.”

Programming vs Software Engineering

Programming

Making code run

Software Engineering

Making code reliable, maintainable, collaborative, production-ready

A Script Can Work. A System Must Survive.

A script may only need:

Correct logic

A system needs:

Clear structure

Tests

Reviews

Version control

Predictable deployment

Team understanding

Code Is Read More Than It Is Written

Reality:

You write a function once

Your teammates read it dozens of times

Future you reads it months later with no memory

Therefore:

Readability is not optional

Readability is part of correctness

Bad Example: “It Works” Is Not Enough

def f(x, y, z):
    if x:
        if y > 0:
            return z * 0.2
    return 0

Problems:

What is f? What does it calculate?

What are x, y, z?

What does 0.2 represent?

No one can review, test, or maintain this confidently

Better Example: Meaning Must Be Visible

def calculate_discount(price, has_membership, loyalty_points):
    if not has_membership:
        return 0
    if loyalty_points <= 0:
        return 0
    return price * 0.2

What improved:

Function name explains the purpose

Parameters are meaningful

Logic reads like a sentence

A reviewer can check correctness immediately

Engineering Standard

Do not ask only: “Does it work?”

Also ask:

Is it readable?

Is it testable?

Is it safe to change?

Would I approve this in a PR?

Principle 1: Naming Is a Design Tool

Bad naming creates confusion. Good naming creates clarity.

Names should reveal: what something is, what it does, why it exists.

Bad Naming Example

def calc(a, b):
    return a * b

Problems:

What does calc calculate?

What are a and b?

Could be area, price, anything - impossible to know

A reviewer cannot verify correctness

Better Naming Example

def calculate_total_price(unit_price, quantity):
    return unit_price * quantity

Benefits:

Purpose is immediately clear

Parameters explain themselves

Reviewer can verify: “Yes, total price = unit price × quantity”

Future developers will not misuse this function

What Good Naming Prevents

Reduces confusion when reading unfamiliar code

Prevents wrong assumptions about function behavior

Prevents misuse of parameters

Prevents future bugs from misunderstanding

If you need a comment to explain what a variable is, the name is probably wrong.

Principle 2: A Function Should Do One Clear Thing

When a function does too many things, testing becomes hard, reuse becomes impossible, and debugging becomes painful.

Bad Example: Too Many Responsibilities

def register_user(user):
    validate_user(user)
    save_user_to_database(user)
    send_welcome_email(user)
    log_activity(user)
    notify_admin(user)

Problems:

Five responsibilities in one function

Cannot test registration without sending email

If email fails, does the user still get saved?

Impossible to reuse any single step

Better Example: Separate Responsibilities

def register_user(user):
    validate_user(user)
    save_user_to_database(user)

def onboard_user(user):
    send_welcome_email(user)
    log_activity(user)

def handle_registration(user):
    register_user(user)
    onboard_user(user)
    notify_admin(user)

Benefits:

Each function has a clear, single purpose

Each can be tested independently

Failure handling becomes manageable

Principle 3: Duplication Is a Bug Factory

Duplicated logic means missed fixes, inconsistency across the codebase, and expensive maintenance.

Bad Example: Repeated Logic

# In checkout.py
final_price = price - price * 0.1

# In invoice.py
final_price = price - price * 0.1

# In report.py
final_price = price - price * 0.1

If this appears in 7 places and the discount rate changes to 0.15, you must find and update all 7. Miss one? You have a silent bug.

Better Example: Centralize Shared Logic

def apply_discount(price, discount_rate):
    return price * (1 - discount_rate)

# Everywhere:
final_price = apply_discount(price, 0.1)

Benefits:

Single source of truth

Change once, correct everywhere

Easy to test

Easy to find and audit
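
A centralized helper is also trivial to cover with tests. A minimal pytest-style sketch (the test names are illustrative):

```python
def apply_discount(price, discount_rate):
    return price * (1 - discount_rate)

def test_apply_discount_ten_percent():
    assert apply_discount(100, 0.1) == 90

def test_apply_discount_zero_rate():
    # A zero rate must leave the price unchanged
    assert apply_discount(100, 0) == 100

def test_apply_discount_full_rate():
    # A rate of 1 means the item is free
    assert apply_discount(100, 1) == 0
```

Three short tests now protect every one of the 7 call sites at once.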

Principle 4: Testability Should Influence Design

If your code is hard to test, it is probably poorly designed.

Testable code is modular, explicit, and has clear inputs and outputs.

Hard-to-Test Code Example

def process_order(order_id):
    order = db.get_order(order_id)         # database call
    user = db.get_user(order.user_id)      # database call
    send_email(user.email, order.summary)  # side effect
    db.mark_processed(order_id)            # database call

Problems:

Cannot test without a real database

Cannot test without sending real emails

All side effects are tightly coupled

More Testable Design

def process_order(order, user, email_sender, db):
    email_sender.send(user.email, order.summary)
    db.mark_processed(order.id)

Benefits:

Dependencies are injected - easy to mock

Each dependency can be tested independently

Logic is separated from infrastructure

Function signature tells you exactly what it needs
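
With dependencies injected, the function can be tested with simple hand-written fakes. A sketch assuming pytest-style tests; FakeEmailSender and FakeDb are illustrative stand-ins, not a real library:

```python
from types import SimpleNamespace

def process_order(order, user, email_sender, db):
    email_sender.send(user.email, order.summary)
    db.mark_processed(order.id)

class FakeEmailSender:
    """Records sent emails instead of sending them."""
    def __init__(self):
        self.sent = []
    def send(self, address, body):
        self.sent.append((address, body))

class FakeDb:
    """Records processed order ids instead of touching a database."""
    def __init__(self):
        self.processed = []
    def mark_processed(self, order_id):
        self.processed.append(order_id)

def test_process_order_sends_email_and_marks_processed():
    order = SimpleNamespace(id=7, summary="2 books")
    user = SimpleNamespace(email="a@example.com")
    sender, db = FakeEmailSender(), FakeDb()
    process_order(order, user, sender, db)
    assert sender.sent == [("a@example.com", "2 books")]
    assert db.processed == [7]
```

No real database, no real emails, and the test runs in milliseconds.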

Principle 5: Fail Clearly, Not Silently

Silent failures are the most dangerous kind of bug.

When something goes wrong, make it loud and obvious.

Bad Error Handling

def get_user_age(user):
    if user is None:
        return None
    if user.birthday is None:
        return None
    return calculate_age(user.birthday)

Risk:

Caller gets None and does not know why

Bug is hidden and propagates silently

Debugging becomes a guessing game

Clearer Error Handling

def get_user_age(user):
    if user is None:
        raise ValueError("User must not be None")
    if user.birthday is None:
        raise ValueError("User birthday is missing")
    return calculate_age(user.birthday)

Benefits:

Error is immediately visible

Message tells you exactly what is wrong

Bug is caught early, not downstream
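
Loud failures are also easy to test, using pytest.raises. The calculate_age stub below is a placeholder so the example runs, not the real implementation:

```python
import pytest
from types import SimpleNamespace

def calculate_age(birthday):
    # Placeholder for the real implementation
    return 2024 - birthday.year

def get_user_age(user):
    if user is None:
        raise ValueError("User must not be None")
    if user.birthday is None:
        raise ValueError("User birthday is missing")
    return calculate_age(user.birthday)

def test_none_user_raises():
    with pytest.raises(ValueError, match="must not be None"):
        get_user_age(None)

def test_missing_birthday_raises():
    with pytest.raises(ValueError, match="birthday is missing"):
        get_user_age(SimpleNamespace(birthday=None))
```

The tests document the failure contract: callers know exactly which errors to expect.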

Principle 6: Complexity Is Expensive

Complicated code is not clever. It is expensive to maintain, review, and debug.

Complex Example

def get_price(user, product):
    if user.is_member:
        if product.on_sale:
            if user.loyalty_points > 100:
                return product.price * 0.7
            else:
                return product.price * 0.8
        else:
            return product.price * 0.9
    else:
        if product.on_sale:
            return product.price * 0.95
        else:
            return product.price

Deeply nested, hard to follow, easy to introduce bugs

Simpler Example

def get_discount_rate(user, product):
    if not user.is_member and not product.on_sale:
        return 0
    if not user.is_member and product.on_sale:
        return 0.05
    if user.is_member and not product.on_sale:
        return 0.1
    if user.loyalty_points > 100:
        return 0.3
    return 0.2

def get_price(user, product):
    return product.price * (1 - get_discount_rate(user, product))

Flat, readable, each case is explicit and testable
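
Because each case is now explicit, each case can get its own test. A pytest-style sketch; the make helper is an illustrative convenience, not part of the original code:

```python
from types import SimpleNamespace

def get_discount_rate(user, product):
    if not user.is_member and not product.on_sale:
        return 0
    if not user.is_member and product.on_sale:
        return 0.05
    if user.is_member and not product.on_sale:
        return 0.1
    if user.loyalty_points > 100:
        return 0.3
    return 0.2

def make(is_member=False, on_sale=False, loyalty_points=0):
    # Build minimal user/product stand-ins for each scenario
    user = SimpleNamespace(is_member=is_member, loyalty_points=loyalty_points)
    product = SimpleNamespace(on_sale=on_sale, price=100)
    return user, product

def test_non_member_no_sale():
    assert get_discount_rate(*make()) == 0

def test_non_member_on_sale():
    assert get_discount_rate(*make(on_sale=True)) == 0.05

def test_member_no_sale():
    assert get_discount_rate(*make(is_member=True)) == 0.1

def test_loyal_member_on_sale():
    assert get_discount_rate(*make(is_member=True, on_sale=True, loyalty_points=150)) == 0.3
```

Try writing equivalent tests against the nested version: you cannot name the branches, because they have no names.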

Principle 7: Small Changes Beat Big Messy Changes

Big changes:

Hard to review

High risk of bugs

Painful to merge

Confusing git history

Small changes:

Easy to review

Low risk

Fast to merge

Clear history

Engineering Rule

Prefer:

Smaller functions

Smaller commits

Smaller PRs

Smaller review units

Small, focused changes are easier to understand, review, test, and revert.

Git and GitHub Workflow

Without a workflow, teams create:
overwritten code, unreviewed changes, broken main branches, confusion.

The Correct Basic Workflow

main → create feature branch → write code → commit small changes → open PR → review → improve → merge to main

This is not optional. This is the minimum professional workflow.
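
The same workflow can be sketched as git commands. The branch name, file path, and remote here are examples, not fixed conventions:

```shell
git checkout main
git pull origin main                    # start from the latest main
git checkout -b feature/user-registration

# ... write code, then commit in small, focused steps ...
git add src/registration.py
git commit -m "feat: add user registration endpoint"

git push -u origin feature/user-registration
# open a PR on GitHub, address review comments, then merge
```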

Never Work Directly on Main

Main should always be deployable

Direct commits skip review

Merge conflicts become destructive

No way to revert cleanly

Team loses trust in the codebase

If main is broken, everyone is blocked. Protect it.

What a Branch Represents

Bad branch names:

fix
my-branch
test123
update

Better branch names:

feature/user-registration
fix/login-redirect-bug
chore/update-dependencies
refactor/split-order-service

A branch name should describe the intent of the work

Commits Should Tell a Story

Bad commit messages:

fix
update
wip
asdf
changed stuff

Good commit messages:

fix: redirect to login after session timeout
feat: add email validation to registration
refactor: extract discount logic into helper
test: add edge cases for price calculator
docs: update API authentication section

A Good PR Is Reviewable

Small and focused on one thing

Has a clear title and description

Explains why the change was made

Includes tests if applicable

Does not mix unrelated changes

A good PR description answers: What changed? Why? How to test?

Bad PR vs Good PR

Bad PR:

Title: “updates”

No description

47 files changed

Mixes feature + refactor + style fix

No tests

Good PR:

Title: “feat: add email validation to registration”

Description explains the approach

4 files changed

Focused on one feature

Includes unit tests

A Professional Repository Is Not Just Code

A serious repository includes structure, documentation, configuration, and standards - not just source files.

Example Repository Structure

project/
├── src/
│   ├── models/
│   ├── services/
│   ├── routes/
│   └── utils/
├── tests/
│   ├── unit/
│   └── integration/
├── .github/
│   └── workflows/
│       └── ci.yml
├── .gitignore
├── README.md
├── requirements.txt
├── Makefile
└── .env.example

Why README Matters

A README should tell a new team member:

What the project does

How to set it up

How to run it

How to run tests

How to contribute

A repository without a README is a repository that says: “Figure it out yourself.”

A Good README Includes

Project name and one-line description

Prerequisites (Python version, dependencies)

Installation steps

How to run locally

How to run tests

Environment variables needed

Project structure overview

Contribution guidelines

If Your Repo Is Confusing, Your Team Slows Down

Every minute a teammate spends figuring out your project structure is a minute they are not building features.

Repository clarity is a form of respect for your team.

Testing Culture

Common excuses:

“We don’t have time for tests”

“It works on my machine”

“Tests slow us down”

Reality:

You do not have time to not test

It needs to work on every machine

Tests save you from slowing down later

What Tests Really Give You

Confidence to change code

Faster debugging - tests show where things broke

Documentation of expected behavior

Safety net for refactoring

Proof that your code works

Tests do not slow strong teams down. They allow strong teams to move fast safely.

A Very Simple Example

def add(a, b):
    return a + b

Simple function. How do we know it works correctly?

We write a test.

Basic Pytest Example

def test_add_returns_sum():
    assert add(2, 3) == 5

Clear, readable, fast to run

If add ever breaks, this test catches it immediately

Anyone can understand what this test checks

Good Tests Check More Than Happy Path

What else should we test?

Negative numbers

Zero values

Very large numbers

Type mismatches (if applicable)

Boundary conditions

Error cases

The happy path is where bugs don’t hide. Edge cases are where they live.

Example: More Useful Tests

def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -2) == -3

def test_add_with_zero():
    assert add(0, 5) == 5
    assert add(5, 0) == 5

def test_add_mixed_signs():
    assert add(-3, 5) == 2

Each test has a clear name describing what it verifies

What Happens Without Tests

Bugs reach production

Refactoring becomes terrifying

Nobody knows if a change broke something

Manual testing is slow and unreliable

Regressions keep coming back

Team loses confidence in the code

What Happens With Tests

Bugs are caught before merge

Refactoring is safe and fast

CI catches regressions automatically

New team members understand expected behavior

Deployments are confident

Team velocity increases over time

CI/CD and Quality Gates

CI = Continuous Integration

Automatically verify every change before it reaches the main branch.

Why CI Matters

CI prevents:

Broken code reaching main

Untested code being merged

Style inconsistencies

“Works on my machine” problems

CI gives:

Automated quality checks

Fast feedback on every PR

Consistent standards

Team confidence

Common CI Checks

Run all tests (unit + integration)

Check code formatting (black, prettier, etc.)

Run linter (pylint, eslint, flake8)

Check type annotations (mypy)

Measure test coverage

Security scanning

Build check (does the project compile/build?)

GitHub Actions Example

name: Python Checks
on:
  pull_request:
  push:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: pytest

What This Achieves

Every PR automatically runs tests

Broken PRs cannot be merged (with branch protection)

Team does not need to remember to run tests

Feedback is fast and consistent

CI is like a guard at the gate. It does not let broken code into your main branch.

Coverage Checks

Test coverage measures how much of your code is exercised by tests.

High coverage does not mean no bugs

Low coverage means large untested areas

Coverage is a guideline, not a guarantee

Caution: Do not chase 100% coverage. Focus on testing important logic and edge cases, not trivial getters and setters.
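
One common way to measure coverage in Python projects is the pytest-cov plugin, assuming pytest is already in use:

```shell
pip install pytest-cov
pytest --cov=src --cov-report=term-missing
# "term-missing" lists the exact line numbers no test touched,
# which tells you where the large untested areas are
```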

Practical Quality Rule

Good targets for most projects:

70-85% test coverage as a baseline

All critical business logic tested

All edge cases for core functions tested

CI must pass before merge (no exceptions)

Linter and formatter checks on every PR

Quality gates are not bureaucracy. They are engineering infrastructure.

Effective Code Review

Code review is not:

A formality

A rubber stamp

A power move

Code review is:

A quality conversation

A learning opportunity

A team defense mechanism

Why Teams Review Code

Catch bugs before they reach production

Share knowledge across the team

Maintain consistent code quality

Spread ownership of the codebase

Teach and learn from each other

Build team trust

A Weak Review vs an Effective Review

Weak review:

Glance at the diff for 30 seconds

Click “Approve”

Leave no comments

Do not run the code

Effective review:

Read every changed line

Check logic and edge cases

Ask clarifying questions

Suggest improvements respectfully

Verify tests exist

How to Give an Effective Code Review

Check for:

Correctness - does the logic do what it should?

Readability - can you understand it without asking?

Naming - are names clear and accurate?

Duplication - is anything repeated unnecessarily?

Error handling - are edge cases covered?

Tests - are there tests for the new logic?

Scope - does the PR stay focused?

Reviewer Mindset

During a review, answer these questions:

Do I understand what this code does?

Would I be comfortable maintaining this?

Is there anything that could break silently?

Are there missing tests?

Is the PR description accurate?

Would a new team member understand this?

Bad Review Comment

“This is wrong.”

Why it is weak:

Does not explain what is wrong

Does not explain why it is wrong

Does not suggest an improvement

Feels like an attack, not feedback

Wastes the author’s time

Better Review Comment

“This function returns None when the user is not found, but the caller on line 42 does not check for None. This could cause an AttributeError at runtime. Could we either raise an exception here or add a None check in the caller?”

Why it works:

Explains the problem

Shows the impact

Suggests a concrete fix

Treats the author as a collaborator

Use Questions to Reduce Friction

Instead of commands, use curious questions:

“What happens if this input is negative?”

“Could we simplify this with an early return?”

“Is there a reason we are not using the existing helper for this?”

“Would it be clearer to extract this into a separate function?”

Questions invite dialogue. Commands create resistance.

Examples of Strong Review Focus

A strong reviewer notices:

Magic numbers without explanation

Missing error handling for edge cases

Duplicated logic that should be extracted

Functions doing more than one thing

Inconsistent naming conventions

Missing or insufficient tests

Security concerns (hardcoded secrets, SQL injection, etc.)

Read This Code Like a Reviewer

def calc(items):
    t = 0
    for i in items:
        if i["type"] == "book":
            t += i["price"] * 0.9
        else:
            t += i["price"]
    return t

As a reviewer, ask:

What does calc calculate?

What does 0.9 mean? Why books specifically?

What if an item has no “type” or “price” key?

Where are the tests?

A Better Reviewable Version

BOOK_DISCOUNT_RATE = 0.1

def get_item_price(item):
    price = item["price"]
    if item["type"] == "book":
        return price * (1 - BOOK_DISCOUNT_RATE)
    return price

def calculate_cart_total(items):
    return sum(get_item_price(item) for item in items)

Improvements:

Magic number replaced with named constant

Pricing logic extracted into its own function

Each function is independently testable
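
For example, the extracted pricing function can be covered directly (pytest-style tests, illustrative names):

```python
BOOK_DISCOUNT_RATE = 0.1

def get_item_price(item):
    price = item["price"]
    if item["type"] == "book":
        return price * (1 - BOOK_DISCOUNT_RATE)
    return price

def calculate_cart_total(items):
    return sum(get_item_price(item) for item in items)

def test_book_gets_discount():
    assert get_item_price({"type": "book", "price": 100}) == 90

def test_other_items_full_price():
    assert get_item_price({"type": "pen", "price": 5}) == 5

def test_cart_total_mixes_items():
    items = [{"type": "book", "price": 100}, {"type": "pen", "price": 5}]
    assert calculate_cart_total(items) == 95
```

The original calc function could only be tested end to end; here the discount rule has its own tests.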

What Reviewers Should Not Do

Nitpick style when there is a linter for that

Rewrite the author’s code in their own style

Approve without reading

Block a PR for personal preference (not correctness)

Be condescending or dismissive

Delay reviews for days without communication

If You Do Not Understand the Code, Do Not Approve It

“If you do not understand the code, do not approve it.”

Approval means responsibility.
If that code causes a bug, you share the accountability.

How to Receive Review Professionally

Do not:

Take feedback personally

Argue without listening

Dismiss suggestions immediately

Feel attacked

Do:

Read every comment carefully

Ask for clarification if needed

Thank reviewers for their time

Explain your reasoning respectfully

Make changes promptly

Bad Response to Review

“It works fine. I tested it.”
“That’s just a style preference.”
“We don’t have time for this.”
“Can you just approve it?”

These responses shut down collaboration and erode trust

Good Response to Review

“Good catch - I missed that edge case. Fixed in the latest commit.”
“I see your point. I went with this approach because [reason]. Happy to change if you still think it’s better.”
“Thank you for the detailed review. I learned something new here.”

These responses build trust and strengthen the team

Review Is Not About Winning

The goal is not:

Proving you are right

Defending your ego

Getting approved fast

The goal is:

Shipping better code

Learning from each other

Building a reliable product

Growing as engineers

How to Benefit from GenAI

AI can make engineers faster, but also sloppier.

Use AI as an assistant, not as a replacement for engineering judgment.

Good Uses of GenAI

Generate boilerplate code to save time

Draft test cases for existing functions

Explain unfamiliar code or libraries

Suggest alternative approaches

Help write documentation

Assist with debugging by analyzing error messages

Pre-review code before submitting a PR

Weak Use of GenAI

Using AI without understanding:

“Write me a login system” → copy-paste → push
“Fix this error” → apply suggestion blindly
“Generate all my tests” → never read them

If you cannot explain the code, you do not own it

AI-generated code with bugs you do not understand is worse than no code

Strong Use of GenAI

Better prompts, better results:

“Review this function for edge cases I might have missed”
“Generate pytest tests for this function, including negative numbers and empty input”
“Refactor this to use early returns instead of nested conditions”

Specific, focused prompts produce useful, reviewable output

Good Prompt Example for Test Generation

# Prompt to AI:
"Write pytest tests for this function.
Include: happy path, negative input, zero, empty list, and type error.
Use descriptive test names."

# Function:
def calculate_average(numbers):
    if not numbers:
        raise ValueError("Cannot average empty list")
    return sum(numbers) / len(numbers)

The prompt is specific about what to test and how to name tests

You still review and adjust the generated tests
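
One plausible shape of the output such a prompt might produce (illustrative only; read and verify every case before committing it):

```python
import pytest

def calculate_average(numbers):
    if not numbers:
        raise ValueError("Cannot average empty list")
    return sum(numbers) / len(numbers)

def test_average_happy_path():
    assert calculate_average([2, 4, 6]) == 4

def test_average_negative_numbers():
    assert calculate_average([-2, -4]) == -3

def test_average_with_zero():
    assert calculate_average([0, 0, 6]) == 2

def test_average_empty_list_raises():
    with pytest.raises(ValueError):
        calculate_average([])

def test_average_type_error():
    # Mixing numbers and strings should fail loudly, not silently
    with pytest.raises(TypeError):
        calculate_average([1, "two"])
```

Note that every requested category from the prompt appears: happy path, negatives, zero, empty list, and a type error.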

Good Prompt Example for Code Review

# Prompt to AI:
"Review this function as a senior engineer.
Check for:
- edge cases
- naming clarity
- error handling
- testability
- any potential bugs

Be specific and suggest improvements."

Use AI as a “first reviewer” before your teammate reviews

This catches obvious issues and saves reviewer time

VS Code and Editor Integrations

Useful AI tools for development:

GitHub Copilot - inline code suggestions

Claude Code (CLI) - terminal-based AI assistant

Cursor - AI-native code editor

Codeium - free AI autocomplete

Good use cases:

Autocomplete repetitive patterns

Generate docstrings

Explain complex code sections

Suggest test structure

The Rule for AI-Generated Code

Before merging any AI-generated code, you must:

Understand it - read every line

Explain it - be able to describe what it does and why

Test it - verify it with your own tests

Approve it - treat it as if you wrote it in your PR

If your name is on the commit, you own the code. It does not matter who or what generated it.

Live Demo: Reviewing a Bad Pull Request

We will inspect a real PR together and identify:

Poor naming

Missing error handling

Duplicated logic

Missing tests

Unclear commit messages

Scope creep

Live Demo Goal

After this demo, you should be able to:

Read a PR diff critically

Identify at least 5 types of issues

Write constructive review comments

Suggest specific improvements

Distinguish between blocking issues and suggestions

Reviewing code is a skill. Like any skill, it improves with deliberate practice.

Homework: Review a Pull Request Like an Engineer

Assignment:

Find an open-source PR on GitHub (or use one provided by the instructor)

Write a review with at least 5 substantive comments

Each comment must identify a specific issue and suggest an improvement

Submission requirements:

Link to the PR you reviewed

Your review comments (screenshot or markdown)

A short reflection: what did you learn from reviewing someone else’s code?

Final Message

“Great engineers do not just make code work. They make code understandable, testable, reviewable, and trustworthy.”

Thank You

“The code you are too busy to clean today becomes the problem that owns your team tomorrow.”

Emre Varol · A2SV · University of Rwanda

emrevarol.com