Writing code is easy
Writing maintainable code is hard
Writing code that a team can trust is software engineering
Not just syntax
Not just “clean code”
Not just GitHub mechanics
This lecture is about engineering discipline
A project rarely fails because people cannot write any code.
A project often fails because code becomes:
Unclear
Untestable
Unreviewable
Hard to change
“It works, so it is good enough.”
“We can clean it later.”
“Tests take too much time.”
“Review is not needed for small changes.”
“My teammates will understand it.”
Later usually never comes
Small bad habits become team culture
Team culture becomes product quality
A professional engineer asks:
Can another person understand this quickly?
Can this be safely changed later?
Can this be tested?
Can this be reviewed well?
Will this make the team faster or slower?
Bad code causes:
More bugs
Slower onboarding
Painful debugging
Duplicated work
Broken features after changes
Fear of touching the codebase
Good code gives:
Faster development
Easier collaboration
Safer refactoring
Better reviews
More trust
More consistent products
“Your code is not only for the computer. Your code is for your teammates, your future self, and the product.”
Making code run
Making code reliable, maintainable, collaborative, production-ready
Correct logic
Clear structure
Tests
Reviews
Version control
Predictable deployment
Team understanding
You write a function once
Your teammates read it dozens of times
Future you reads it months later with no memory
Readability is not optional
Readability is part of correctness
def f(x, y, z):
    if x:
        if y > 0:
            return z * 0.2
    return 0
What is f? What does it calculate?
What are x, y, z?
What does 0.2 represent?
No one can review, test, or maintain this confidently
def calculate_discount(price, has_membership, loyalty_points):
    if not has_membership:
        return 0
    if loyalty_points <= 0:
        return 0
    return price * 0.2
Function name explains the purpose
Parameters are meaningful
Logic reads like a sentence
A reviewer can check correctness immediately
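A reviewer can even sanity-check it with a few quick assertions (the sample values here are illustrative, not from the slide):

```python
def calculate_discount(price, has_membership, loyalty_points):
    if not has_membership:
        return 0
    if loyalty_points <= 0:
        return 0
    return price * 0.2

# Members with loyalty points get 20% off; everyone else gets nothing.
assert calculate_discount(100, True, 50) == 20.0
assert calculate_discount(100, False, 50) == 0
assert calculate_discount(100, True, 0) == 0
```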
Do not ask only: “Does it work?”
Also ask:
Is it readable?
Is it testable?
Is it safe to change?
Would I approve this in a PR?
Bad naming creates confusion. Good naming creates clarity.
Names should reveal: what something is, what it does, why it exists.
def calc(a, b):
    return a * b
What does calc calculate?
What are a and b?
Could be area, price, anything - impossible to know
A reviewer cannot verify correctness
def calculate_total_price(unit_price, quantity):
    return unit_price * quantity
Purpose is immediately clear
Parameters explain themselves
Reviewer can verify: “Yes, total price = unit price × quantity”
Future developers will not misuse this function
Reduces confusion when reading unfamiliar code
Prevents wrong assumptions about function behavior
Prevents misuse of parameters
Prevents future bugs from misunderstanding
When a function does too many things, testing becomes hard, reuse becomes impossible, and debugging becomes painful.
def register_user(user):
    validate_user(user)
    save_user_to_database(user)
    send_welcome_email(user)
    log_activity(user)
    notify_admin(user)
Five responsibilities in one function
Cannot test registration without sending email
If email fails, does the user still get saved?
Impossible to reuse any single step
def register_user(user):
    validate_user(user)
    save_user_to_database(user)

def onboard_user(user):
    send_welcome_email(user)
    log_activity(user)

def handle_registration(user):
    register_user(user)
    onboard_user(user)
    notify_admin(user)
Each function has a clear, single purpose
Each can be tested independently
Failure handling becomes manageable
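A quick sketch of why the split helps, using stand-in helper bodies (the slide does not show them) that just record the call order:

```python
calls = []

# Stand-ins for the helpers assumed on the slide.
def validate_user(user):
    calls.append("validate")

def save_user_to_database(user):
    calls.append("save")

def register_user(user):
    validate_user(user)
    save_user_to_database(user)

# Registration can now be exercised without ever touching email,
# activity logging, or admin notification.
register_user({"name": "Amina"})
assert calls == ["validate", "save"]
```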
Duplicated logic means missed fixes, inconsistency across the codebase, and expensive maintenance.
# In checkout.py
final_price = price - price * 0.1

# In invoice.py
final_price = price - price * 0.1

# In report.py
final_price = price - price * 0.1
def apply_discount(price, discount_rate):
    return price * (1 - discount_rate)

# Everywhere:
final_price = apply_discount(price, 0.1)
Single source of truth
Change once, correct everywhere
Easy to test
Easy to find and audit
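A quick check of the shared helper (sample values chosen for illustration):

```python
def apply_discount(price, discount_rate):
    return price * (1 - discount_rate)

# 10% off 200 is 180; a zero rate leaves the price unchanged.
assert apply_discount(200, 0.1) == 180.0
assert apply_discount(50, 0) == 50
```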
If your code is hard to test, it is probably poorly designed.
Testable code is modular, explicit, and has clear inputs and outputs.
def process_order(order_id):
    order = db.get_order(order_id)          # database call
    user = db.get_user(order.user_id)       # database call
    send_email(user.email, order.summary)   # side effect
    db.mark_processed(order_id)             # database call
Cannot test without a real database
Cannot test without sending real emails
All side effects are tightly coupled
def process_order(order, user, email_sender, db):
    email_sender.send(user.email, order.summary)
    db.mark_processed(order.id)
Dependencies are injected - easy to mock
Each dependency can be tested independently
Logic is separated from infrastructure
Function signature tells you exactly what it needs
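With the dependencies injected, the function can be tested with `unittest.mock.Mock` objects instead of a real database or mail server. A minimal sketch (the `order` and `user` shapes here are assumptions, not real models):

```python
from types import SimpleNamespace
from unittest.mock import Mock

def process_order(order, user, email_sender, db):
    email_sender.send(user.email, order.summary)
    db.mark_processed(order.id)

def test_process_order():
    order = SimpleNamespace(id=7, summary="2 books")
    user = SimpleNamespace(email="amina@example.com")
    email_sender, db = Mock(), Mock()

    process_order(order, user, email_sender, db)

    # No real email was sent, no real database was touched.
    email_sender.send.assert_called_once_with("amina@example.com", "2 books")
    db.mark_processed.assert_called_once_with(7)

test_process_order()
```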
Silent failures are the most dangerous kind of bug.
When something goes wrong, make it loud and obvious.
def get_user_age(user):
    if user is None:
        return None
    if user.birthday is None:
        return None
    return calculate_age(user.birthday)
Caller gets None and does not know why
Bug is hidden and propagates silently
Debugging becomes a guessing game
def get_user_age(user):
    if user is None:
        raise ValueError("User must not be None")
    if user.birthday is None:
        raise ValueError("User birthday is missing")
    return calculate_age(user.birthday)
Error is immediately visible
Message tells you exactly what is wrong
Bug is caught early, not downstream
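A quick demonstration that the loud version fails immediately and explains itself, using a simplified stand-in for `calculate_age` (the real helper is not shown on the slide):

```python
from datetime import date
from types import SimpleNamespace

def calculate_age(birthday):
    # Simplified stand-in for the real helper.
    return date.today().year - birthday.year

def get_user_age(user):
    if user is None:
        raise ValueError("User must not be None")
    if user.birthday is None:
        raise ValueError("User birthday is missing")
    return calculate_age(user.birthday)

# The failure is loud, and the message says exactly what went wrong.
try:
    get_user_age(SimpleNamespace(birthday=None))
except ValueError as error:
    assert str(error) == "User birthday is missing"
else:
    raise AssertionError("expected a ValueError")
```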
Complicated code is not clever. It is expensive to maintain, review, and debug.
def get_price(user, product):
    if user.is_member:
        if product.on_sale:
            if user.loyalty_points > 100:
                return product.price * 0.7
            else:
                return product.price * 0.8
        else:
            return product.price * 0.9
    else:
        if product.on_sale:
            return product.price * 0.95
        else:
            return product.price
Deeply nested, hard to follow, easy to introduce bugs
def get_discount_rate(user, product):
    if not user.is_member and not product.on_sale:
        return 0
    if not user.is_member and product.on_sale:
        return 0.05
    if user.is_member and not product.on_sale:
        return 0.1
    if user.loyalty_points > 100:
        return 0.3
    return 0.2

def get_price(user, product):
    return product.price * (1 - get_discount_rate(user, product))
Flat, readable, each case is explicit and testable
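The flat version also lends itself to a small case table, one row per branch (a sketch; the user and product objects are simplified stand-ins):

```python
from types import SimpleNamespace

def get_discount_rate(user, product):
    if not user.is_member and not product.on_sale:
        return 0
    if not user.is_member and product.on_sale:
        return 0.05
    if user.is_member and not product.on_sale:
        return 0.1
    if user.loyalty_points > 100:
        return 0.3
    return 0.2

# One row per branch: (is_member, on_sale, loyalty_points, expected_rate)
cases = [
    (False, False, 0, 0),
    (False, True, 0, 0.05),
    (True, False, 0, 0.1),
    (True, True, 150, 0.3),
    (True, True, 50, 0.2),
]

for is_member, on_sale, points, expected in cases:
    user = SimpleNamespace(is_member=is_member, loyalty_points=points)
    product = SimpleNamespace(on_sale=on_sale)
    assert get_discount_rate(user, product) == expected
```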
Hard to review
High risk of bugs
Painful to merge
Confusing git history
Easy to review
Low risk
Fast to merge
Clear history
Prefer:
Smaller functions
Smaller commits
Smaller PRs
Smaller review units
Without a workflow, teams create:
Overwritten code
Unreviewed changes
Broken main branches
Confusion
Main should always be deployable
Direct commits skip review
Merge conflicts become destructive
No way to revert cleanly
Team loses trust in the codebase
fix
my-branch
test123
update
feature/user-registration
fix/login-redirect-bug
chore/update-dependencies
refactor/split-order-service
A branch name should describe the intent of the work
fix
update
wip
asdf
changed stuff
fix: redirect to login after session timeout
feat: add email validation to registration
refactor: extract discount logic into helper
test: add edge cases for price calculator
docs: update API authentication section
Small and focused on one thing
Has a clear title and description
Explains why the change was made
Includes tests if applicable
Does not mix unrelated changes
Title: “updates”
No description
47 files changed
Mixes feature + refactor + style fix
No tests
Title: “feat: add email validation to registration”
Description explains the approach
4 files changed
Focused on one feature
Includes unit tests
A serious repository includes structure, documentation, configuration, and standards - not just source files.
project/
├── src/
│   ├── models/
│   ├── services/
│   ├── routes/
│   └── utils/
├── tests/
│   ├── unit/
│   └── integration/
├── .github/
│   └── workflows/
│       └── ci.yml
├── .gitignore
├── README.md
├── requirements.txt
├── Makefile
└── .env.example
A README should tell a new team member:
What the project does
How to set it up
How to run it
How to run tests
How to contribute
Project name and one-line description
Prerequisites (Python version, dependencies)
Installation steps
How to run locally
How to run tests
Environment variables needed
Project structure overview
Contribution guidelines
“We don’t have time for tests”
“It works on my machine”
“Tests slow us down”
You do not have time to not test
It needs to work on every machine
Tests save you from slowing down later
Confidence to change code
Faster debugging - tests show where things broke
Documentation of expected behavior
Safety net for refactoring
Proof that your code works
def add(a, b):
    return a + b
Simple function. How do we know it works correctly?
We write a test.
def test_add_returns_sum():
    assert add(2, 3) == 5
Clear, readable, fast to run
If add ever breaks, this test catches it immediately
Anyone can understand what this test checks
What else should we test?
Negative numbers
Zero values
Very large numbers
Type mismatches (if applicable)
Boundary conditions
Error cases
def test_add_positive_numbers():
    assert add(2, 3) == 5

def test_add_negative_numbers():
    assert add(-1, -2) == -3

def test_add_with_zero():
    assert add(0, 5) == 5
    assert add(5, 0) == 5

def test_add_mixed_signs():
    assert add(-3, 5) == 2
Each test has a clear name describing what it verifies
Bugs reach production
Refactoring becomes terrifying
Nobody knows if a change broke something
Manual testing is slow and unreliable
Regressions keep coming back
Team loses confidence in the code
Bugs are caught before merge
Refactoring is safe and fast
CI catches regressions automatically
New team members understand expected behavior
Deployments are confident
Team velocity increases over time
CI = Continuous Integration
Automatically verify every change before it reaches the main branch.
Broken code reaching main
Untested code being merged
Style inconsistencies
“Works on my machine” problems
Automated quality checks
Fast feedback on every PR
Consistent standards
Team confidence
Run all tests (unit + integration)
Check code formatting (black, prettier, etc.)
Run linter (pylint, eslint, flake8)
Check type annotations (mypy)
Measure test coverage
Security scanning
Build check (does the project compile/build?)
name: Python Checks
on:
  pull_request:
  push:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - run: pip install -r requirements.txt
      - run: pytest
Every PR automatically runs tests
Broken PRs cannot be merged (with branch protection)
Team does not need to remember to run tests
Feedback is fast and consistent
Test coverage measures how much of your code is exercised by tests.
High coverage does not mean no bugs
Low coverage means large untested areas
Coverage is a guideline, not a guarantee
Good targets for most projects:
70-85% test coverage as a baseline
All critical business logic tested
All edge cases for core functions tested
CI must pass before merge (no exceptions)
Linter and formatter checks on every PR
A formality
A rubber stamp
A power move
A quality conversation
A learning opportunity
A team defense mechanism
Catch bugs before they reach production
Share knowledge across the team
Maintain consistent code quality
Spread ownership of the codebase
Teach and learn from each other
Build team trust
Glance at the diff for 30 seconds
Click “Approve”
Leave no comments
Do not run the code
Read every changed line
Check logic and edge cases
Ask clarifying questions
Suggest improvements respectfully
Verify tests exist
Check for:
Correctness - does the logic do what it should?
Readability - can you understand it without asking?
Naming - are names clear and accurate?
Duplication - is anything repeated unnecessarily?
Error handling - are edge cases covered?
Tests - are there tests for the new logic?
Scope - does the PR stay focused?
During a review, answer these questions:
Do I understand what this code does?
Would I be comfortable maintaining this?
Is there anything that could break silently?
Are there missing tests?
Is the PR description accurate?
Would a new team member understand this?
Does not explain what is wrong
Does not explain why it is wrong
Does not suggest an improvement
Feels like an attack, not feedback
Wastes the author’s time
“This function returns None when the user is not found, but the caller on line 42 does not check for None. This could cause an AttributeError at runtime. Could we either raise an exception here or add a None check in the caller?”
Explains the problem
Shows the impact
Suggests a concrete fix
Treats the author as a collaborator
Instead of commands, use curious questions:
“What happens if this input is negative?”
“Could we simplify this with an early return?”
“Is there a reason we are not using the existing helper for this?”
“Would it be clearer to extract this into a separate function?”
A strong reviewer notices:
Magic numbers without explanation
Missing error handling for edge cases
Duplicated logic that should be extracted
Functions doing more than one thing
Inconsistent naming conventions
Missing or insufficient tests
Security concerns (hardcoded secrets, SQL injection, etc.)
def calc(items):
    t = 0
    for i in items:
        if i["type"] == "book":
            t += i["price"] * 0.9
        else:
            t += i["price"]
    return t
As a reviewer, ask:
What does calc calculate?
What does 0.9 mean? Why books specifically?
What if an item has no “type” or “price” key?
Where are the tests?
BOOK_DISCOUNT_RATE = 0.1

def get_item_price(item):
    price = item["price"]
    if item["type"] == "book":
        return price * (1 - BOOK_DISCOUNT_RATE)
    return price

def calculate_cart_total(items):
    return sum(get_item_price(item) for item in items)
Magic number replaced with named constant
Pricing logic extracted into its own function
Each function is independently testable
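And because each piece is small, the tests are almost as short as the code (a sketch with illustrative cart items):

```python
BOOK_DISCOUNT_RATE = 0.1

def get_item_price(item):
    price = item["price"]
    if item["type"] == "book":
        return price * (1 - BOOK_DISCOUNT_RATE)
    return price

def calculate_cart_total(items):
    return sum(get_item_price(item) for item in items)

# Books are discounted; everything else is full price.
assert get_item_price({"type": "book", "price": 100}) == 90.0
assert get_item_price({"type": "pen", "price": 5}) == 5
assert calculate_cart_total([
    {"type": "book", "price": 100},
    {"type": "pen", "price": 5},
]) == 95.0
```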
Nitpick style when there is a linter for that
Rewrite the author’s code in their own style
Approve without reading
Block a PR for personal preference (not correctness)
Be condescending or dismissive
Delay reviews for days without communication
“If you do not understand the code, do not approve it.”
Take feedback personally
Argue without listening
Dismiss suggestions immediately
Feel attacked
Read every comment carefully
Ask for clarification if needed
Thank reviewers for their time
Explain your reasoning respectfully
Make changes promptly
These responses shut down collaboration and erode trust
These responses build trust and strengthen the team
Proving you are right
Defending your ego
Getting approved fast
Shipping better code
Learning from each other
Building a reliable product
Growing as engineers
AI can make engineers faster, but also sloppier.
Generate boilerplate code to save time
Draft test cases for existing functions
Explain unfamiliar code or libraries
Suggest alternative approaches
Help write documentation
Assist with debugging by analyzing error messages
Pre-review code before submitting a PR
Using AI without understanding:
If you cannot explain the code, you do not own it
AI-generated code with bugs you do not understand is worse than no code
Better prompts, better results:
Specific, focused prompts produce useful, reviewable output
# Prompt to AI:
#   "Write pytest tests for this function. Include: happy path,
#    negative input, zero, empty list, and type error.
#    Use descriptive test names."

# Function:
def calculate_average(numbers):
    if not numbers:
        raise ValueError("Cannot average empty list")
    return sum(numbers) / len(numbers)
The prompt is specific about what to test and how to name tests
You still review and adjust the generated tests
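For the function above, the prompt might yield something like the following, which you would still read, trim, and adjust before committing (the exact output will vary):

```python
def calculate_average(numbers):
    if not numbers:
        raise ValueError("Cannot average empty list")
    return sum(numbers) / len(numbers)

def test_average_of_positive_numbers():
    assert calculate_average([2, 4, 6]) == 4

def test_average_of_negative_numbers():
    assert calculate_average([-2, -4]) == -3

def test_average_with_zeros():
    assert calculate_average([0, 0, 0]) == 0

def test_average_of_empty_list_raises_value_error():
    try:
        calculate_average([])
    except ValueError as error:
        assert "empty" in str(error)
    else:
        raise AssertionError("expected a ValueError")
```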
# Prompt to AI:
#   "Review this function as a senior engineer. Check for:
#    - edge cases
#    - naming clarity
#    - error handling
#    - testability
#    - any potential bugs
#    Be specific and suggest improvements."
Use AI as a “first reviewer” before your teammate reviews
This catches obvious issues and saves reviewer time
Useful AI tools for development:
GitHub Copilot - inline code suggestions
Claude Code (CLI) - terminal-based AI assistant
Cursor - AI-native code editor
Codeium - free AI autocomplete
Good use cases:
Autocomplete repetitive patterns
Generate docstrings
Explain complex code sections
Suggest test structure
Before merging any AI-generated code, you must:
Understand it - read every line
Explain it - be able to describe what it does and why
Test it - verify it with your own tests
Approve it - treat it as if you wrote it in your PR
We will inspect a real PR together and identify:
Poor naming
Missing error handling
Duplicated logic
Missing tests
Unclear commit messages
Scope creep
After this demo, you should be able to:
Read a PR diff critically
Identify at least 5 types of issues
Write constructive review comments
Suggest specific improvements
Distinguish between blocking issues and suggestions
Assignment:
Find an open-source PR on GitHub (or use one provided by the instructor)
Write a review with at least 5 substantive comments
Each comment must identify a specific issue and suggest an improvement
Submission requirements:
Link to the PR you reviewed
Your review comments (screenshot or markdown)
A short reflection: what did you learn from reviewing someone else’s code?
“Great engineers do not just make code work. They make code understandable, testable, reviewable, and trustworthy.”
“The code you are too busy to clean today becomes the problem that owns your team tomorrow.”
Emre Varol · A2SV · University of Rwanda