August 1, 2025
Python FastAPI SQLAlchemy Capstone Project

Full-Stack API Capstone: Inventory System with Database

You've learned the pieces. Now you're going to build the whole machine.

This capstone brings together everything from Cluster 4: FastAPI for the REST layer, SQLAlchemy for database abstraction, Pydantic for data validation, and proper HTTP semantics. We're building an inventory management system, the kind of system that powers real e-commerce platforms, warehouse software, and retail operations.

By the end of this article, you'll have a production-adjacent codebase that demonstrates layered architecture, proper separation of concerns, and the patterns that scale from startup to enterprise.

Table of Contents
  1. The Problem with "Just Writing Code"
  2. What We're Building
  3. Architecture Decisions
  4. Project Setup
  5. Database Design Thinking
  6. Database Layer: Models
  7. Pydantic Schemas: Request/Response Separation
  8. Database Configuration
  9. CRUD Operations
  10. API Design Best Practices
  11. API Routes
  12. Error Handling Strategy
  13. Main Application
  14. Testing: Putting It All Together
  15. Running the Application
  16. Architecture Lessons
      1. Separation of Concerns
      2. Dependency Injection
      3. Proper HTTP Semantics
      4. Validation at Every Layer
      5. Query Flexibility
  17. Production Considerations
  18. What You Actually Built
  19. Summary

The Problem with "Just Writing Code"

Here's a scenario you've probably encountered. You need to track product inventory for a small business. You start with a Python script that reads a spreadsheet, updates some numbers, writes it back. Simple enough. Then the business grows. Two people need to update inventory at the same time. You need to log every change for auditing. The spreadsheet has 50,000 rows and filtering is slow. You want a mobile app to check stock levels. Suddenly your script isn't just a script anymore, it's trying to be a system, and it's failing at the job.

This is the exact moment developers discover they need a real API backed by a real database. And this is where most tutorials leave you stranded. They show you how to make a "Hello World" endpoint or a toy todo list, then wave their hands at "production concerns." What you actually need is a worked example of a domain that matters: something with relationships between entities, business rules about data integrity, filtering and search that would choke a spreadsheet, and a test suite that proves the whole thing works.

That's what this capstone is. We're building an inventory management system that tracks products, organizes them into categories, and records every stock movement: purchases in, sales out, manual adjustments. Think of the backend powering an e-commerce warehouse, a retail point-of-sale system, or even a home workshop tracking your components. The domain is familiar enough to reason about but complex enough to force us to make real architectural decisions.

What makes this different from a toy project is the intentionality behind every choice. We're not just writing code that works, we're writing code that could be handed to another developer, extended by a team, or deployed to a server taking real traffic. Every layer of this application has a job, and each layer does only that job. By the time you finish this article, you'll understand not just what to build, but why each piece is structured the way it is. That understanding is what separates developers who follow tutorials from developers who design systems.

What We're Building

An inventory system isn't just a database dump. It's a full-stack application with:

  • Database models for products, categories, inventory transactions, and stock levels
  • Pydantic schemas that validate incoming data and shape outgoing responses
  • REST endpoints with proper HTTP verbs (GET, POST, PUT, DELETE) and status codes
  • Database session management using FastAPI dependency injection
  • Advanced queries: filtering, pagination, sorting, and search
  • CSV export for reporting
  • Comprehensive tests using pytest and TestClient

This is the architecture you'll see in production codebases. Let's build it.

Architecture Decisions

Before writing a single line of code, it's worth understanding why this project is structured the way it is. Good architecture isn't about following rules for their own sake, it's about making tradeoffs explicit so that future developers (including you in six months) can understand the reasoning and make informed changes.

The first decision is the layered architecture: models, schemas, CRUD operations, and routes each live in separate modules. Why not just put everything in one file? Because as the codebase grows, a single file becomes a maze. When a bug appears in your transaction logic, you want to open crud.py and look there, not hunt through thousands of lines of mixed concerns. Each module has a contract: models talk to the database, schemas talk to the network, CRUD functions talk to models, and routes orchestrate the flow. When you need to change how a database query works, you change crud.py. When you need to add a new field to the API response, you change schemas.py. The blast radius of any change stays small.

The second decision is SQLite for development, PostgreSQL-ready for production. SQLite requires zero infrastructure, it's a file on disk, which means you can clone this repo and run it immediately. But the database configuration uses environment variables and detects the database type at startup, so swapping to PostgreSQL in production is a one-line change. This pattern, making the easy path easy and the real path possible, is how professional projects are structured.
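In practice the swap is a single environment variable. A hypothetical pair of .env entries (the credentials and host below are placeholders, not part of this project) might look like:

```shell
# .env (development) -- matches the default used when DATABASE_URL is unset
DATABASE_URL=sqlite:///./inventory.db

# .env (production) -- placeholder credentials; substitute your own
DATABASE_URL=postgresql://app_user:secret@db-host:5432/inventory
```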

The third decision is to track inventory movements as transactions rather than just storing current stock levels. We could have a single quantity field on each product and just update it. Instead we record every purchase, sale, and adjustment as a separate database row. This gives us an audit trail, the ability to investigate discrepancies, and the foundation for reporting. The current quantity is derived by summing transactions, but we also cache it on the product for query performance. This is a classic database pattern called write-through caching: update both the cache and the source of truth in a single transaction.
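To make the sign conventions concrete, here's a minimal, ORM-free sketch of how a current quantity is derived from the movement history (the dict shape is illustrative, not the actual model):

```python
# Purchases add stock, sales remove it, adjustments carry their own sign.
def derive_quantity(transactions):
    total = 0
    for t in transactions:
        if t["type"] == "purchase":
            total += t["quantity"]
        elif t["type"] == "sale":
            total -= t["quantity"]
        elif t["type"] == "adjustment":
            total += t["quantity"]  # may be negative (e.g. shrinkage)
    return total

history = [
    {"type": "purchase", "quantity": 100},
    {"type": "sale", "quantity": 30},
    {"type": "adjustment", "quantity": -2},  # stock count found 2 missing
]
print(derive_quantity(history))  # 68
```

The cached quantity_on_hand should always equal this derived value; comparing the two is a cheap integrity audit.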

The fourth decision is dependency injection for database sessions. Instead of creating a database connection in each function, FastAPI injects a session via the Depends(get_db) mechanism. This means tests can swap in an in-memory database without touching production code. It means connections are always cleaned up after each request. It means your route functions stay focused on HTTP concerns, not infrastructure plumbing.

Project Setup

Create a new directory and install dependencies:

bash
mkdir inventory-api
cd inventory-api
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install fastapi uvicorn sqlalchemy pydantic python-dotenv pytest httpx

Your directory structure will look like this:

inventory-api/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── database.py
│   ├── models.py
│   ├── schemas.py
│   ├── crud.py
│   ├── routes/
│   │   ├── __init__.py
│   │   ├── products.py
│   │   ├── categories.py
│   │   └── transactions.py
│   └── dependencies.py
├── tests/
│   ├── __init__.py
│   ├── test_products.py
│   ├── test_categories.py
│   └── test_transactions.py
├── .env
└── requirements.txt

This structure follows FastAPI best practices: database logic in one module, request/response schemas in another, routes organized by domain. No spaghetti. Notice that routes are split by domain (products.py, categories.py, transactions.py) rather than by operation type. This means all product-related code lives together, making it easy to find and extend. When you need to add a new product endpoint six months from now, you open routes/products.py and add it there. The structure guides future decisions.

Database Design Thinking

A database schema is a long-lived contract. Once you deploy a schema to production with real data, changing it requires careful migration work. This means the time you spend thinking through the schema before writing code pays dividends for the lifetime of the project.

Our three tables encode a specific data model: categories contain products, and products accumulate transactions. The foreign key relationship between products and categories enforces referential integrity at the database level: inserting a product that references a non-existent category fails with an IntegrityError. (One caveat: SQLite only enforces foreign keys when the foreign_keys pragma is enabled on the connection; PostgreSQL enforces them by default.) This is not just a nice-to-have; it's the database enforcing your business rules so your application code doesn't have to be the last line of defense.
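You can watch this enforcement happen with nothing but the standard library. A stripped-down, hypothetical version of the schema behaves like this (note the pragma: SQLite ships with foreign key checks off):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default
conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT,"
    " category_id INTEGER NOT NULL REFERENCES categories(id))"
)
conn.execute("INSERT INTO categories VALUES (1, 'cables')")
conn.execute("INSERT INTO products VALUES (1, 'HDMI 2m', 1)")  # valid reference
try:
    conn.execute("INSERT INTO products VALUES (2, 'ghost', 999)")  # no such category
except sqlite3.IntegrityError as exc:
    print(exc)  # FOREIGN KEY constraint failed
```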

The reorder_point field on Product deserves attention because it's a business concept baked into the data model. It represents the stock level at which you should place a new order. Different products have different reorder points, a fast-moving item might need reordering at 50 units, while a slow-moving specialty part might be fine at 2. By storing this as a field rather than a hardcoded constant, we enable the /products/low-stock endpoint to be actually useful: it returns products where current quantity is at or below that product's specific threshold.

The InventoryTransaction table uses a string transaction_type field constrained at the schema level (via a Pydantic pattern constraint) to one of three values: purchase, sale, adjustment. We could have used a database-level enum, but string columns with application-level validation are more portable across database engines and easier to extend with new values later. The tradeoff is that the constraint lives in code, not in the database, which means it's enforced on write but not on direct database manipulation. For most applications this is acceptable.

The onupdate=datetime.utcnow parameter on updated_at in the Product model is a small detail that matters. It means any time SQLAlchemy updates a product row, the timestamp is automatically refreshed. You don't need to remember to set it in your CRUD code. This kind of automatic behavior reduces bugs caused by human forgetfulness and keeps your timestamps accurate without extra work.

Database Layer: Models

Start with SQLAlchemy models. These define your database schema, the shape of the data living in your database. Think of each class as a table definition: the class name becomes the table name, and each Column becomes a column in that table. SQLAlchemy translates Python objects into SQL so you rarely have to write raw queries.

app/models.py:

python
from sqlalchemy import Column, Integer, String, Float, DateTime, ForeignKey, Boolean
from sqlalchemy.orm import declarative_base, relationship
from datetime import datetime
 
Base = declarative_base()
 
class Category(Base):
    __tablename__ = "categories"
 
    id = Column(Integer, primary_key=True, index=True)
    name = Column(String(100), unique=True, index=True, nullable=False)
    description = Column(String(500))
    created_at = Column(DateTime, default=datetime.utcnow)
 
    # Relationship to products
    products = relationship("Product", back_populates="category", cascade="all, delete-orphan")
 
class Product(Base):
    __tablename__ = "products"
 
    id = Column(Integer, primary_key=True, index=True)
    sku = Column(String(50), unique=True, index=True, nullable=False)
    name = Column(String(200), nullable=False)
    description = Column(String(1000))
    price = Column(Float, nullable=False)
    cost = Column(Float)  # Cost to acquire
    quantity_on_hand = Column(Integer, default=0)
    reorder_point = Column(Integer, default=10)  # Alert when below this
    category_id = Column(Integer, ForeignKey("categories.id"), nullable=False)
    is_active = Column(Boolean, default=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
 
    # Relationship to category
    category = relationship("Category", back_populates="products")
    transactions = relationship("InventoryTransaction", back_populates="product", cascade="all, delete-orphan")
 
class InventoryTransaction(Base):
    __tablename__ = "inventory_transactions"
 
    id = Column(Integer, primary_key=True, index=True)
    product_id = Column(Integer, ForeignKey("products.id"), nullable=False)
    transaction_type = Column(String(20), nullable=False)  # "purchase", "sale", "adjustment"
    quantity = Column(Integer, nullable=False)
    reference = Column(String(100))  # PO number, invoice, etc.
    notes = Column(String(500))
    created_at = Column(DateTime, default=datetime.utcnow)
 
    # Relationship to product
    product = relationship("Product", back_populates="transactions")

Here's what's happening:

  • Relationships: Category has many Products. Each Product has many InventoryTransaction records. SQLAlchemy handles the joins for you.
  • Constraints: unique=True prevents duplicate SKUs or category names. index=True makes queries faster.
  • Defaults: datetime.utcnow automatically timestamps creation. onupdate=datetime.utcnow tracks changes.
  • Foreign keys: Link products to categories and transactions to products.

The cascade="all, delete-orphan" on the relationship means that when you delete a category, all its products are automatically deleted too. When you delete a product, all its transactions follow. This cascading behavior is intentional, orphaned records (transactions for products that no longer exist) would be confusing and waste storage. If your business rules were different, say, you need to keep transaction history even after archiving a product, you'd remove the cascade and handle archiving through a soft-delete pattern using the is_active flag instead.
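Note that SQLAlchemy implements this cascade at the ORM level: the session issues the child DELETEs itself. The database-level equivalent, ON DELETE CASCADE, produces the same end state and is easy to demonstrate with stdlib sqlite3 and a hypothetical mini-schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # required for SQLite to honor the cascade
conn.execute("CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT,"
    " category_id INTEGER REFERENCES categories(id) ON DELETE CASCADE)"
)
conn.execute("INSERT INTO categories VALUES (1, 'cables')")
conn.execute("INSERT INTO products VALUES (1, 'HDMI 2m', 1)")
conn.execute("INSERT INTO products VALUES (2, 'USB-C 1m', 1)")

conn.execute("DELETE FROM categories WHERE id = 1")  # children go with it
print(conn.execute("SELECT COUNT(*) FROM products").fetchone()[0])  # 0
```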

This is your contract with the database. Everything else builds on top of this.

Pydantic Schemas: Request/Response Separation

SQLAlchemy models live in the database. Pydantic schemas live in your API. They're not the same thing, and that separation matters. A database model knows about columns, indexes, and foreign keys. A Pydantic schema knows about HTTP requests, validation rules, and response shapes. Mixing these concerns leads to leaking internal database details to API consumers, or polluting database models with HTTP-specific validation logic. Keeping them separate means you can change either without breaking the other.

app/schemas.py:

python
from pydantic import BaseModel, Field, model_validator
from typing import Optional, List
from datetime import datetime

# ============ Category Schemas ============

class CategoryBase(BaseModel):
    name: str = Field(..., min_length=1, max_length=100)
    description: Optional[str] = Field(None, max_length=500)

class CategoryCreate(CategoryBase):
    pass

class CategoryUpdate(BaseModel):
    name: Optional[str] = Field(None, min_length=1, max_length=100)
    description: Optional[str] = Field(None, max_length=500)

class CategoryResponse(CategoryBase):
    id: int
    created_at: datetime

    class Config:
        from_attributes = True  # Pydantic v2: read ORM objects

class CategoryWithProducts(CategoryResponse):
    products: List['ProductResponse'] = []

# ============ Product Schemas ============

class ProductBase(BaseModel):
    sku: str = Field(..., min_length=1, max_length=50)
    name: str = Field(..., min_length=1, max_length=200)
    description: Optional[str] = Field(None, max_length=1000)
    price: float = Field(..., gt=0)
    cost: Optional[float] = Field(None, ge=0)
    quantity_on_hand: int = Field(default=0, ge=0)
    reorder_point: int = Field(default=10, ge=0)
    category_id: int

    @model_validator(mode="after")
    def price_must_cover_cost(self):
        # Runs after all fields are parsed, so both price and cost are available
        if self.cost is not None and self.price < self.cost:
            raise ValueError('price cannot be less than cost')
        return self

class ProductCreate(ProductBase):
    pass

class ProductUpdate(BaseModel):
    sku: Optional[str] = Field(None, min_length=1, max_length=50)
    name: Optional[str] = Field(None, min_length=1, max_length=200)
    description: Optional[str] = Field(None, max_length=1000)
    price: Optional[float] = Field(None, gt=0)
    cost: Optional[float] = Field(None, ge=0)
    quantity_on_hand: Optional[int] = Field(None, ge=0)
    reorder_point: Optional[int] = Field(None, ge=0)
    category_id: Optional[int] = None
    is_active: Optional[bool] = None

class ProductResponse(ProductBase):
    id: int
    is_active: bool
    created_at: datetime
    updated_at: datetime

    class Config:
        from_attributes = True

class ProductWithTransactions(ProductResponse):
    transactions: List['TransactionResponse'] = []

# ============ Transaction Schemas ============

class TransactionBase(BaseModel):
    product_id: int
    transaction_type: str = Field(..., pattern="^(purchase|sale|adjustment)$")
    quantity: int  # positive for purchases/sales; adjustments may be negative
    reference: Optional[str] = Field(None, max_length=100)
    notes: Optional[str] = Field(None, max_length=500)

    @model_validator(mode="after")
    def quantity_matches_type(self):
        if self.transaction_type in ("purchase", "sale") and self.quantity <= 0:
            raise ValueError('quantity must be positive for purchases and sales')
        if self.transaction_type == "adjustment" and self.quantity == 0:
            raise ValueError('adjustment quantity cannot be zero')
        return self
 
class TransactionCreate(TransactionBase):
    pass
 
class TransactionResponse(TransactionBase):
    id: int
    created_at: datetime
 
    class Config:
        from_attributes = True
 
# Update forward references (needed for nested models)
CategoryWithProducts.model_rebuild()
ProductWithTransactions.model_rebuild()

Notice the separation:

  • ProductBase: Common fields shared by the create and response schemas
  • ProductCreate: What clients send when creating a product
  • ProductUpdate: Partial updates (all fields optional)
  • ProductResponse: What the API returns (includes timestamps and IDs)

The custom validator enforces business logic: price can't be less than cost. The Config.from_attributes tells Pydantic to read from SQLAlchemy ORM objects. The model_rebuild() calls at the bottom resolve forward references in nested schemas: Python needs the full definition of TransactionResponse before it can validate the transactions field in ProductWithTransactions, but due to Python's top-to-bottom evaluation order, that definition doesn't exist yet when the class is first declared. Calling model_rebuild() after all schemas are defined resolves the circular reference.

Database Configuration

You need a place where all database connections are created and managed. The database module handles the engine (the underlying connection pool), the session factory, and the dependency injection function that routes use to get their database handles. This is infrastructure code, it rarely changes, but when it does, having it isolated means you change it in one place.

app/database.py:

python
import os
from dotenv import load_dotenv
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, Session
from sqlalchemy.pool import StaticPool

load_dotenv()  # read .env so DATABASE_URL can come from the file
 
# Use SQLite for simplicity. In production, use PostgreSQL.
DATABASE_URL = os.getenv(
    "DATABASE_URL",
    "sqlite:///./inventory.db"
)
 
# For SQLite: use StaticPool to avoid threading issues
if DATABASE_URL.startswith("sqlite"):
    engine = create_engine(
        DATABASE_URL,
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
else:
    engine = create_engine(DATABASE_URL, pool_pre_ping=True)
 
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
 
def get_db():
    """Dependency injection for database sessions."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

The get_db() function is the dependency injection hook: FastAPI calls it for each request, injects a fresh database session, and closes it when the response is sent, guaranteeing resource cleanup. The pool_pre_ping=True option for non-SQLite databases tells SQLAlchemy to check whether a connection from the pool is still alive before using it; without this, you'd get cryptic errors after periods of inactivity when the database server closes idle connections. SQLite doesn't need this because it's an in-process database, but it's a critical setting for production PostgreSQL or MySQL deployments.

CRUD Operations

Create a module for all database operations. This keeps your routes clean and testable. The CRUD layer is the translation layer between Pydantic schemas (what the HTTP layer knows about) and SQLAlchemy models (what the database layer knows about). It also enforces business rules that span multiple database operations, like updating a product's quantity when recording a transaction.

app/crud.py:

python
from sqlalchemy.orm import Session
from sqlalchemy import or_
from app import models, schemas
from typing import Optional, List
 
# ============ Category CRUD ============
 
def get_categories(db: Session, skip: int = 0, limit: int = 100) -> List[models.Category]:
    return db.query(models.Category).offset(skip).limit(limit).all()
 
def get_category_by_id(db: Session, category_id: int) -> Optional[models.Category]:
    return db.query(models.Category).filter(models.Category.id == category_id).first()
 
def get_category_by_name(db: Session, name: str) -> Optional[models.Category]:
    return db.query(models.Category).filter(models.Category.name == name).first()
 
def create_category(db: Session, category: schemas.CategoryCreate) -> models.Category:
    db_category = models.Category(
        name=category.name,
        description=category.description
    )
    db.add(db_category)
    db.commit()
    db.refresh(db_category)
    return db_category
 
def update_category(
    db: Session,
    category_id: int,
    category: schemas.CategoryUpdate
) -> Optional[models.Category]:
    db_category = get_category_by_id(db, category_id)
    if not db_category:
        return None
 
    update_data = category.model_dump(exclude_unset=True)
    for field, value in update_data.items():
        setattr(db_category, field, value)
 
    db.commit()
    db.refresh(db_category)
    return db_category
 
def delete_category(db: Session, category_id: int) -> bool:
    db_category = get_category_by_id(db, category_id)
    if not db_category:
        return False
    db.delete(db_category)
    db.commit()
    return True
 
# ============ Product CRUD ============
 
def get_products(
    db: Session,
    skip: int = 0,
    limit: int = 100,
    category_id: Optional[int] = None,
    is_active: Optional[bool] = None,
    search: Optional[str] = None
) -> List[models.Product]:
    query = db.query(models.Product)
 
    if category_id is not None:
        query = query.filter(models.Product.category_id == category_id)
 
    if is_active is not None:
        query = query.filter(models.Product.is_active == is_active)
 
    if search:
        query = query.filter(
            or_(
                models.Product.name.ilike(f"%{search}%"),
                models.Product.sku.ilike(f"%{search}%")
            )
        )
 
    return query.offset(skip).limit(limit).all()
 
def get_product_by_id(db: Session, product_id: int) -> Optional[models.Product]:
    return db.query(models.Product).filter(models.Product.id == product_id).first()
 
def get_product_by_sku(db: Session, sku: str) -> Optional[models.Product]:
    return db.query(models.Product).filter(models.Product.sku == sku).first()
 
def get_low_stock_products(db: Session, skip: int = 0, limit: int = 100) -> List[models.Product]:
    return db.query(models.Product).filter(
        models.Product.quantity_on_hand <= models.Product.reorder_point,
        models.Product.is_active == True
    ).offset(skip).limit(limit).all()
 
def create_product(db: Session, product: schemas.ProductCreate) -> models.Product:
    db_product = models.Product(**product.model_dump())
    db.add(db_product)
    db.commit()
    db.refresh(db_product)
    return db_product
 
def update_product(
    db: Session,
    product_id: int,
    product: schemas.ProductUpdate
) -> Optional[models.Product]:
    db_product = get_product_by_id(db, product_id)
    if not db_product:
        return None
 
    update_data = product.model_dump(exclude_unset=True)
    for field, value in update_data.items():
        setattr(db_product, field, value)
 
    db.commit()
    db.refresh(db_product)
    return db_product
 
def delete_product(db: Session, product_id: int) -> bool:
    db_product = get_product_by_id(db, product_id)
    if not db_product:
        return False
    db.delete(db_product)
    db.commit()
    return True
 
# ============ Transaction CRUD ============
 
def create_transaction(
    db: Session,
    transaction: schemas.TransactionCreate
) -> Optional[models.InventoryTransaction]:
    product = get_product_by_id(db, transaction.product_id)
    if not product:
        return None
 
    # Update product quantity
    if transaction.transaction_type == "purchase":
        product.quantity_on_hand += transaction.quantity
    elif transaction.transaction_type == "sale":
        if product.quantity_on_hand < transaction.quantity:
            return None  # Insufficient stock
        product.quantity_on_hand -= transaction.quantity
    elif transaction.transaction_type == "adjustment":
        if product.quantity_on_hand + transaction.quantity < 0:
            return None  # Adjustment would drive stock negative
        product.quantity_on_hand += transaction.quantity  # Can be negative
 
    db_transaction = models.InventoryTransaction(**transaction.model_dump())
    db.add(db_transaction)
    db.commit()
    db.refresh(db_transaction)
    return db_transaction
 
def get_product_transactions(
    db: Session,
    product_id: int,
    skip: int = 0,
    limit: int = 100
) -> List[models.InventoryTransaction]:
    return db.query(models.InventoryTransaction).filter(
        models.InventoryTransaction.product_id == product_id
    ).order_by(models.InventoryTransaction.created_at.desc()).offset(skip).limit(limit).all()

Notice the pattern: each function is a single database operation. They're testable in isolation and reusable across different routes. The CRUD layer is where your business logic lives. The exclude_unset=True in partial updates is worth understanding: when a client sends {"price": 39.99}, Pydantic only marks price as set on the schema. model_dump(exclude_unset=True) returns only the fields the client actually sent, so we update only price in the database, not every other field that happens to have a default in the schema. Without it, a partial update would silently overwrite unmentioned fields with their schema defaults.
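Here's that behavior in isolation, with a stripped-down stand-in for ProductUpdate:

```python
from typing import Optional
from pydantic import BaseModel

class ProductUpdate(BaseModel):
    name: Optional[str] = None
    price: Optional[float] = None
    reorder_point: Optional[int] = None

patch = ProductUpdate(price=39.99)  # client sent only {"price": 39.99}

print(patch.model_dump())
# {'name': None, 'price': 39.99, 'reorder_point': None}  -- would null out name!
print(patch.model_dump(exclude_unset=True))
# {'price': 39.99}  -- only what the client actually sent
```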

API Design Best Practices

The routes you're about to write are not just code, they're a public contract with anyone who calls your API. Breaking that contract (renaming a field, changing a status code, removing an endpoint) breaks every client that depends on it. This means API design decisions deserve careful thought.

The first principle is resource-oriented URLs. Your endpoints should represent nouns (resources) not verbs (actions). The HTTP verb provides the action. So it's POST /transactions to record a transaction, not POST /transactions/create. It's GET /products/low-stock for a filtered view of products, not GET /get-low-stock-products. When your URLs read like a directory structure of resources, they're easy to remember and predict.

The second principle is consistent status codes. A 200 means the request succeeded and here's your data. A 201 means the request succeeded and you created something new. A 204 means the request succeeded but there's nothing to return (typical for deletes). A 400 means the client sent bad input. A 404 means the resource doesn't exist. A 409 means a conflict, you tried to create something that already exists. Clients use these codes to make decisions programmatically. If you return 200 for everything and put error information in the body, you've made every client write custom parsing code to understand success from failure.

The third principle is stable response shapes. The response_model parameter on each FastAPI endpoint does two things: it validates your output (so you can't accidentally leak a password field), and it documents the response shape in the OpenAPI specification. Every consumer of your API, whether it's a React frontend, a mobile app, or another service, relies on this shape. Use Optional fields for genuinely optional data, and use separate response schemas when you need to include related data (like ProductWithTransactions) versus just the core record (ProductResponse). Returning too much data by default is wasteful; too little forces clients to make multiple requests. The right answer depends on access patterns, which is why we offer both options.

The fourth principle is query parameter filtering over custom endpoints. Rather than building GET /products/by-category/{id} and GET /products/active and GET /products/search?q=... as separate endpoints, we combine them into one: GET /products?category_id=1&is_active=true&search=laptop. This is more flexible, requires less endpoint proliferation, and teaches clients a consistent pattern. The optional query parameters build up the query conditionally, if a parameter isn't provided, it simply doesn't filter on that dimension.

API Routes

Now the fun part: the REST endpoints. Each route file imports its dependencies, the database session, the CRUD functions, the schemas, and wires them together. Routes should be thin: validate input, call CRUD, handle the result, return a response. Business logic lives in CRUD, not in routes.

app/routes/categories.py:

python
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session
from app import schemas, crud
from app.database import get_db
 
router = APIRouter(prefix="/categories", tags=["categories"])
 
@router.get("", response_model=list[schemas.CategoryResponse])
def list_categories(
    db: Session = Depends(get_db),
    skip: int = 0,
    limit: int = 100
):
    """List all categories with pagination."""
    categories = crud.get_categories(db, skip=skip, limit=limit)
    return categories
 
@router.get("/{category_id}", response_model=schemas.CategoryWithProducts)
def get_category(category_id: int, db: Session = Depends(get_db)):
    """Get a single category with its products."""
    category = crud.get_category_by_id(db, category_id)
    if not category:
        raise HTTPException(status_code=404, detail="Category not found")
    return category
 
@router.post("", response_model=schemas.CategoryResponse, status_code=status.HTTP_201_CREATED)
def create_category(
    category: schemas.CategoryCreate,
    db: Session = Depends(get_db)
):
    """Create a new category."""
    existing = crud.get_category_by_name(db, category.name)
    if existing:
        raise HTTPException(status_code=409, detail="Category with this name already exists")
 
    return crud.create_category(db, category)
 
@router.put("/{category_id}", response_model=schemas.CategoryResponse)
def update_category(
    category_id: int,
    category: schemas.CategoryUpdate,
    db: Session = Depends(get_db)
):
    """Update an existing category."""
    db_category = crud.update_category(db, category_id, category)
    if not db_category:
        raise HTTPException(status_code=404, detail="Category not found")
    return db_category
 
@router.delete("/{category_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_category(category_id: int, db: Session = Depends(get_db)):
    """Delete a category."""
    success = crud.delete_category(db, category_id)
    if not success:
        raise HTTPException(status_code=404, detail="Category not found")

Here's the REST contract:

  • GET /categories → list all (200 OK)
  • GET /categories/1 → get one (200 OK or 404)
  • POST /categories → create new (201 CREATED or 409 CONFLICT)
  • PUT /categories/1 → update (200 OK or 404)
  • DELETE /categories/1 → delete (204 NO CONTENT or 404)

The Depends(get_db) injects the database session. The response_model validates the response before sending it.

app/routes/products.py:

python
from fastapi import APIRouter, Depends, HTTPException, status, Query
from sqlalchemy.orm import Session
from app import schemas, crud
from app.database import get_db
 
router = APIRouter(prefix="/products", tags=["products"])
 
@router.get("", response_model=list[schemas.ProductResponse])
def list_products(
    db: Session = Depends(get_db),
    skip: int = Query(0, ge=0),
    limit: int = Query(100, ge=1, le=1000),
    category_id: int | None = Query(None),
    is_active: bool | None = Query(None),
    search: str | None = Query(None)
):
    """List products with filtering, pagination, and search."""
    products = crud.get_products(
        db,
        skip=skip,
        limit=limit,
        category_id=category_id,
        is_active=is_active,
        search=search
    )
    return products
 
@router.get("/low-stock", response_model=list[schemas.ProductResponse])
def get_low_stock(
    db: Session = Depends(get_db),
    skip: int = 0,
    limit: int = 100
):
    """Get products below reorder point."""
    products = crud.get_low_stock_products(db, skip=skip, limit=limit)
    return products
 
@router.get("/{product_id}", response_model=schemas.ProductWithTransactions)
def get_product(product_id: int, db: Session = Depends(get_db)):
    """Get a product with its transaction history."""
    product = crud.get_product_by_id(db, product_id)
    if not product:
        raise HTTPException(status_code=404, detail="Product not found")
    return product
 
@router.post("", response_model=schemas.ProductResponse, status_code=status.HTTP_201_CREATED)
def create_product(
    product: schemas.ProductCreate,
    db: Session = Depends(get_db)
):
    """Create a new product."""
    # Validate category exists
    category = crud.get_category_by_id(db, product.category_id)
    if not category:
        raise HTTPException(status_code=400, detail="Category does not exist")
 
    # Check SKU uniqueness
    existing = crud.get_product_by_sku(db, product.sku)
    if existing:
        raise HTTPException(status_code=409, detail="Product with this SKU already exists")
 
    return crud.create_product(db, product)
 
@router.put("/{product_id}", response_model=schemas.ProductResponse)
def update_product(
    product_id: int,
    product: schemas.ProductUpdate,
    db: Session = Depends(get_db)
):
    """Update an existing product."""
    db_product = crud.update_product(db, product_id, product)
    if not db_product:
        raise HTTPException(status_code=404, detail="Product not found")
    return db_product
 
@router.delete("/{product_id}", status_code=status.HTTP_204_NO_CONTENT)
def delete_product(product_id: int, db: Session = Depends(get_db)):
    """Delete a product."""
    success = crud.delete_product(db, product_id)
    if not success:
        raise HTTPException(status_code=404, detail="Product not found")

Notice the filtering parameters: search, category_id, is_active. These map directly to query parameters in the URL:

GET /products?search=laptop&category_id=5&limit=50

Also notice that /products/low-stock must be defined before /{product_id} in the file. FastAPI matches routes in declaration order, and if the dynamic route came first, it would interpret "low-stock" as a product ID and fail before ever reaching the low-stock handler. This ordering requirement is not unique to FastAPI; it's a fundamental property of any router that tries routes in order. Register specific routes before general ones.
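The effect is easy to reproduce with a toy order-based matcher (a deliberate simplification; FastAPI's real matching is more sophisticated, but first-match-wins behavior is the same):

```python
import re

# Routes are tried in declaration order; the first pattern that matches wins.
routes = [
    (re.compile(r"^/products/low-stock$"), "low_stock_handler"),
    (re.compile(r"^/products/(?P<product_id>[^/]+)$"), "get_product_handler"),
]

def resolve(path: str) -> str:
    for pattern, handler in routes:
        if pattern.match(path):
            return handler
    return "404"

print(resolve("/products/low-stock"))  # low_stock_handler
print(resolve("/products/42"))         # get_product_handler

# Swap the order and the dynamic route shadows the literal one:
routes.reverse()
print(resolve("/products/low-stock"))  # get_product_handler -- the bug
```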

app/routes/transactions.py:

python
from fastapi import APIRouter, Depends, HTTPException, status
from sqlalchemy.orm import Session
from app import schemas, crud
from app.database import get_db
 
router = APIRouter(prefix="/transactions", tags=["transactions"])
 
@router.post("", response_model=schemas.TransactionResponse, status_code=status.HTTP_201_CREATED)
def create_transaction(
    transaction: schemas.TransactionCreate,
    db: Session = Depends(get_db)
):
    """Record an inventory transaction (purchase, sale, or adjustment)."""
    db_transaction = crud.create_transaction(db, transaction)
    if not db_transaction:
        raise HTTPException(
            status_code=400,
            detail="Invalid transaction: product not found or insufficient stock"
        )
    return db_transaction
 
@router.get("/product/{product_id}", response_model=list[schemas.TransactionResponse])
def get_product_transactions(
    product_id: int,
    db: Session = Depends(get_db),
    skip: int = 0,
    limit: int = 100
):
    """Get transaction history for a product."""
    product = crud.get_product_by_id(db, product_id)
    if not product:
        raise HTTPException(status_code=404, detail="Product not found")
 
    transactions = crud.get_product_transactions(db, product_id, skip=skip, limit=limit)
    return transactions

The transaction endpoint checks for insufficient stock in the CRUD layer and propagates the failure back to the route as a None return value, which the route converts to a 400 error. An alternative design is to raise a custom exception in the CRUD layer and catch it in the route, which preserves more context about what failed. In a production system you'd want to distinguish "product not found" from "insufficient stock" in the error response, so a client application could display "Out of stock" versus "Invalid product" to the user. This design trades some error-message precision for simplicity, which is a reasonable choice at this stage.
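A sketch of that exception-based alternative, using hypothetical exception types and a dict-backed stand-in for the CRUD function (none of these names exist in the project's crud.py):

```python
# Hypothetical typed exceptions the CRUD layer could raise instead of
# returning None, so each failure maps to a distinct HTTP response.
class ProductNotFound(Exception):
    pass

class InsufficientStock(Exception):
    def __init__(self, requested: int, available: int):
        super().__init__(f"Cannot sell {requested} units: only {available} available")
        self.requested = requested
        self.available = available

def create_transaction(stock: dict, product_id: int, quantity: int) -> int:
    """Toy stand-in for crud.create_transaction, operating on a plain dict."""
    if product_id not in stock:
        raise ProductNotFound()
    if stock[product_id] < quantity:
        raise InsufficientStock(quantity, stock[product_id])
    stock[product_id] -= quantity
    return stock[product_id]

# The route layer can now branch on the exception type:
stock = {1: 10}
try:
    create_transaction(stock, 1, 20)
except InsufficientStock as exc:
    status_code, detail = 400, str(exc)
except ProductNotFound:
    status_code, detail = 404, "Product not found"

print(status_code, detail)  # 400 Cannot sell 20 units: only 10 available
```

Note that the failed sale leaves the stock untouched, and the error message carries the numbers a client would need to render "Out of stock".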

Error Handling Strategy

How your API fails is as important as how it succeeds. Clients, whether they're frontends, mobile apps, or other services, need to handle errors programmatically. That means errors need to be consistent, descriptive, and mapped to the right HTTP status codes.

The foundation of good error handling is fail fast at the right layer. Pydantic validates request shapes before your route function even runs: if someone sends a string where a number is required, they get a 422 Unprocessable Entity with a detailed error message explaining exactly which field failed and why. You don't write that validation; Pydantic and FastAPI generate it automatically from your schema definitions. This catches a large class of errors before they touch your database.

Business logic errors need explicit handling in route functions. When you call crud.get_category_by_id and it returns None, you raise HTTPException(status_code=404). When a duplicate SKU is detected, you raise a 409. These are not exceptional circumstances; they're normal API behavior that clients will encounter in normal operation. The goal is to make it impossible to accidentally return a 200 with a null body, or a 500 with a generic error message when the real cause was a client mistake.

For errors that represent bugs rather than user mistakes (unexpected exceptions, database connection failures, unhandled edge cases), FastAPI's default error handler returns a 500 Internal Server Error. In production you'd replace this default with custom exception handlers that log the full error with context (request details, user information, stack trace) while returning a sanitized error message to the client. Leaking stack traces to API consumers is both a security risk and poor UX.

One pattern worth adopting early is error response schemas. Instead of returning an ad hoc detail string, define a consistent error response structure: {"error": "insufficient_stock", "message": "Cannot sell 20 units: only 10 available", "code": 400}. This makes client-side error handling predictable: you can write a single error handler that reads the error field and branches accordingly, rather than parsing freeform strings. It's a small investment in schema design that pays off as your API grows.
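One way to sketch that envelope, here with a stdlib dataclass so it runs standalone (in this codebase it would naturally be another Pydantic model in schemas.py; the field names are an assumption):

```python
from dataclasses import dataclass, asdict

@dataclass
class ErrorResponse:
    error: str    # stable machine-readable code that clients branch on
    message: str  # human-readable explanation for logs and UIs
    code: int     # HTTP status, duplicated in the body for convenience

resp = ErrorResponse(
    error="insufficient_stock",
    message="Cannot sell 20 units: only 10 available",
    code=400,
)
print(asdict(resp))
# {'error': 'insufficient_stock', 'message': 'Cannot sell 20 units: only 10 available', 'code': 400}
```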

Main Application

Bring it all together. The main application file is where the FastAPI instance is created, routers are registered, and any application-level configuration is applied. Keep this file thin: if you find yourself putting significant logic in main.py, it probably belongs in a more specific module.

app/main.py:

python
from fastapi import FastAPI, Depends
from fastapi.responses import StreamingResponse
from sqlalchemy.orm import Session
import csv
import io

from app.database import engine, get_db
from app import models
from app.routes import products, categories, transactions

# Create tables
models.Base.metadata.create_all(bind=engine)

app = FastAPI(
    title="Inventory API",
    description="Full-stack inventory management system",
    version="1.0.0"
)

# Include routers
app.include_router(categories.router)
app.include_router(products.router)
app.include_router(transactions.router)

@app.get("/")
def root():
    """API welcome endpoint."""
    return {
        "message": "Welcome to Inventory API",
        "docs": "/docs",
        "openapi_schema": "/openapi.json"
    }

@app.get("/health")
def health_check():
    """Health check endpoint."""
    return {"status": "healthy"}

@app.get("/export/products-csv")
def export_products_csv(db: Session = Depends(get_db)):
    """Export all active products to CSV."""
    products_data = db.query(models.Product).filter(
        models.Product.is_active == True
    ).all()

    output = io.StringIO()
    writer = csv.writer(output)
    writer.writerow(["SKU", "Name", "Price", "Quantity", "Reorder Point"])

    for product in products_data:
        writer.writerow([
            product.sku,
            product.name,
            product.price,
            product.quantity_on_hand,
            product.reorder_point
        ])

    # Stream the in-memory CSV back to the client. FileResponse requires a
    # real filesystem path, so use StreamingResponse with a
    # Content-Disposition header to trigger a download.
    output.seek(0)
    return StreamingResponse(
        iter([output.getvalue()]),
        media_type="text/csv",
        headers={"Content-Disposition": "attachment; filename=products.csv"}
    )
 
if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="127.0.0.1", port=8000)

The CSV export endpoint demonstrates practical utility: clients can download product data for spreadsheet analysis. Notice the health check endpoint at /health; this is standard practice for any service that gets deployed. Load balancers, container orchestrators like Kubernetes, and monitoring systems all hit /health to determine whether the service is running. A 200 response means "I'm alive and serving traffic." You'd extend this endpoint to also check database connectivity, so the health check catches "process is running but can't reach the database" scenarios.
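A framework-agnostic sketch of that extension (the check_db callable is an assumption; in this app it would run a trivial query such as SELECT 1 through a session):

```python
def health_status(check_db) -> dict:
    """Build a health payload that includes a database connectivity check."""
    try:
        check_db()  # any exception here means the DB is unreachable
        db_state = "up"
    except Exception:
        db_state = "down"
    return {
        "status": "healthy" if db_state == "up" else "degraded",
        "database": db_state,
    }

print(health_status(lambda: None))
# {'status': 'healthy', 'database': 'up'}

def broken():
    raise ConnectionError("database unreachable")

print(health_status(broken))
# {'status': 'degraded', 'database': 'down'}
```

Returning "degraded" rather than crashing lets monitoring distinguish "process dead" (no response at all) from "process up, dependency down".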

Testing: Putting It All Together

This is where architecture matters. With proper layering, testing becomes simple. Because database sessions are injected via Depends(get_db), tests can swap in an in-memory SQLite database by overriding that dependency. No test data persists between runs, no fixtures require a running database server, and tests execute in milliseconds rather than seconds.

tests/test_products.py:

python
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

from app.main import app
from app.database import get_db
from app import models

# In-memory SQLite for testing
SQLALCHEMY_DATABASE_URL = "sqlite:///:memory:"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL,
    connect_args={"check_same_thread": False},
    # One shared connection, so every session sees the same in-memory DB
    # (without this, each new connection would get its own empty database)
    poolclass=StaticPool,
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
 
models.Base.metadata.create_all(bind=engine)
 
def override_get_db():
    try:
        db = TestingSessionLocal()
        yield db
    finally:
        db.close()
 
app.dependency_overrides[get_db] = override_get_db
 
client = TestClient(app)
 
@pytest.fixture(autouse=True)
def setup_db():
    """Clear database before each test."""
    models.Base.metadata.create_all(bind=engine)
    yield
    models.Base.metadata.drop_all(bind=engine)
 
def create_test_category(name: str = "Electronics"):
    """Helper to create a test category."""
    response = client.post(
        "/categories",
        json={"name": name, "description": "Test category"}
    )
    return response.json()
 
def test_create_product():
    """Test creating a product."""
    category = create_test_category()
 
    response = client.post(
        "/products",
        json={
            "sku": "LAPTOP-001",
            "name": "Dell XPS 13",
            "price": 1200.00,
            "cost": 900.00,
            "category_id": category["id"],
            "quantity_on_hand": 5,
            "reorder_point": 2
        }
    )
 
    assert response.status_code == 201
    data = response.json()
    assert data["sku"] == "LAPTOP-001"
    assert data["name"] == "Dell XPS 13"
    assert data["price"] == 1200.00
 
def test_get_product():
    """Test retrieving a product."""
    category = create_test_category()
    create_response = client.post(
        "/products",
        json={
            "sku": "MOUSE-001",
            "name": "Logitech MX Master",
            "price": 99.99,
            "category_id": category["id"]
        }
    )
 
    product_id = create_response.json()["id"]
 
    response = client.get(f"/products/{product_id}")
    assert response.status_code == 200
    data = response.json()
    assert data["id"] == product_id
    assert data["sku"] == "MOUSE-001"
 
def test_update_product():
    """Test updating a product."""
    category = create_test_category()
    create_response = client.post(
        "/products",
        json={
            "sku": "KB-001",
            "name": "Basic Keyboard",
            "price": 49.99,
            "category_id": category["id"]
        }
    )
 
    product_id = create_response.json()["id"]
 
    response = client.put(
        f"/products/{product_id}",
        json={"price": 39.99, "quantity_on_hand": 10}
    )
 
    assert response.status_code == 200
    data = response.json()
    assert data["price"] == 39.99
    assert data["quantity_on_hand"] == 10
 
def test_delete_product():
    """Test deleting a product."""
    category = create_test_category()
    create_response = client.post(
        "/products",
        json={
            "sku": "CABLE-001",
            "name": "USB-C Cable",
            "price": 15.99,
            "category_id": category["id"]
        }
    )
 
    product_id = create_response.json()["id"]
 
    response = client.delete(f"/products/{product_id}")
    assert response.status_code == 204
 
    # Verify it's gone
    response = client.get(f"/products/{product_id}")
    assert response.status_code == 404
 
def test_search_products():
    """Test product search."""
    category = create_test_category()
 
    # Create multiple products
    for name in ["Gaming Laptop", "Budget Laptop", "Workstation Laptop"]:
        client.post(
            "/products",
            json={
                "sku": f"LAP-{name[:3]}",
                "name": name,
                "price": 1000,
                "category_id": category["id"]
            }
        )
 
    # Search for "Gaming"
    response = client.get("/products?search=Gaming")
    assert response.status_code == 200
    data = response.json()
    assert len(data) == 1
    assert data[0]["name"] == "Gaming Laptop"
 
def test_duplicate_sku_fails():
    """Test that duplicate SKUs are rejected."""
    category = create_test_category()
 
    client.post(
        "/products",
        json={
            "sku": "UNIQUE-SKU",
            "name": "First Product",
            "price": 100,
            "category_id": category["id"]
        }
    )
 
    response = client.post(
        "/products",
        json={
            "sku": "UNIQUE-SKU",
            "name": "Second Product",
            "price": 200,
            "category_id": category["id"]
        }
    )
 
    assert response.status_code == 409
 
def test_pagination():
    """Test pagination of products."""
    category = create_test_category()
 
    # Create 25 products
    for i in range(25):
        client.post(
            "/products",
            json={
                "sku": f"PROD-{i:03d}",
                "name": f"Product {i}",
                "price": 100 + i,
                "category_id": category["id"]
            }
        )
 
    # Get first page (limit=10 overrides the default of 100)
    response = client.get("/products?skip=0&limit=10")
    assert response.status_code == 200
    data = response.json()
    assert len(data) == 10
 
    # Get second page
    response = client.get("/products?skip=10&limit=10")
    data = response.json()
    assert len(data) == 10

Here's what makes these tests powerful:

  • Isolation: Each test creates its own in-memory database
  • Clarity: Test names explain what they're testing
  • Reusability: Helper functions (create_test_category) reduce boilerplate
  • Coverage: Happy path, error cases, edge cases

The autouse=True on the setup_db fixture means it runs before every single test without being explicitly listed as a parameter. The create-yield-drop_all pattern ensures each test starts with a clean slate. This eliminates test ordering dependencies, a critical property of a reliable test suite. If test A's data affects test B, you have a fragile suite that breaks in non-obvious ways when tests are added, removed, or reordered.

tests/test_transactions.py:

python
import pytest
from fastapi.testclient import TestClient
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

from app.main import app
from app.database import get_db
from app import models

SQLALCHEMY_DATABASE_URL = "sqlite:///:memory:"

engine = create_engine(
    SQLALCHEMY_DATABASE_URL,
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # one shared connection keeps the in-memory DB visible to every session
)
TestingSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
 
models.Base.metadata.create_all(bind=engine)
 
def override_get_db():
    try:
        db = TestingSessionLocal()
        yield db
    finally:
        db.close()
 
app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)
 
@pytest.fixture(autouse=True)
def setup_db():
    models.Base.metadata.create_all(bind=engine)
    yield
    models.Base.metadata.drop_all(bind=engine)
 
def create_product():
    """Helper to create a test product."""
    category_response = client.post(
        "/categories",
        json={"name": "Test", "description": "Test"}
    )
    category_id = category_response.json()["id"]
 
    response = client.post(
        "/products",
        json={
            "sku": "TEST-001",
            "name": "Test Product",
            "price": 100,
            "category_id": category_id,
            "quantity_on_hand": 10
        }
    )
    return response.json()
 
def test_purchase_transaction():
    """Test recording a purchase (inbound inventory)."""
    product = create_product()
 
    response = client.post(
        "/transactions",
        json={
            "product_id": product["id"],
            "transaction_type": "purchase",
            "quantity": 5,
            "reference": "PO-12345"
        }
    )
 
    assert response.status_code == 201
 
    # Verify quantity increased
    product_response = client.get(f"/products/{product['id']}")
    updated = product_response.json()
    assert updated["quantity_on_hand"] == 15  # 10 + 5
 
def test_sale_transaction():
    """Test recording a sale (outbound inventory)."""
    product = create_product()
 
    response = client.post(
        "/transactions",
        json={
            "product_id": product["id"],
            "transaction_type": "sale",
            "quantity": 3,
            "reference": "INV-99999"
        }
    )
 
    assert response.status_code == 201
 
    # Verify quantity decreased
    product_response = client.get(f"/products/{product['id']}")
    updated = product_response.json()
    assert updated["quantity_on_hand"] == 7  # 10 - 3
 
def test_insufficient_stock():
    """Test that sales fail when stock is insufficient."""
    product = create_product()
 
    response = client.post(
        "/transactions",
        json={
            "product_id": product["id"],
            "transaction_type": "sale",
            "quantity": 20  # More than available
        }
    )
 
    assert response.status_code == 400
 
def test_transaction_history():
    """Test retrieving transaction history."""
    product = create_product()
 
    # Record multiple transactions
    for i in range(3):
        client.post(
            "/transactions",
            json={
                "product_id": product["id"],
                "transaction_type": "purchase" if i % 2 == 0 else "sale",
                "quantity": 1
            }
        )
 
    response = client.get(f"/transactions/product/{product['id']}")
    assert response.status_code == 200
    transactions = response.json()
    assert len(transactions) == 3

The transaction tests are particularly valuable because they verify side effects: not just that the transaction was recorded, but that the product's quantity changed correctly. This kind of integration test, where you exercise multiple layers of the stack together, catches bugs that unit tests miss. A unit test might verify that quantity_on_hand += transaction.quantity is correct math, but only an integration test can verify that the right product's quantity was updated, that the database commit happened, and that a subsequent GET returns the updated value.

Run the tests:

bash
pytest tests/ -v

Output:

tests/test_products.py::test_create_product PASSED
tests/test_products.py::test_get_product PASSED
tests/test_products.py::test_update_product PASSED
tests/test_products.py::test_delete_product PASSED
tests/test_products.py::test_search_products PASSED
tests/test_products.py::test_duplicate_sku_fails PASSED
tests/test_products.py::test_pagination PASSED
tests/test_transactions.py::test_purchase_transaction PASSED
tests/test_transactions.py::test_sale_transaction PASSED
tests/test_transactions.py::test_insufficient_stock PASSED
tests/test_transactions.py::test_transaction_history PASSED

============================== 11 passed in 0.42s ==============================

All tests passing. This is the confidence that good architecture gives you. When you need to refactor the CRUD layer, add a new field, or optimize a query, you run the tests and immediately know whether you've broken anything. The test suite isn't just documentation of what the code does; it's a safety net that lets you change the code without fear.

Running the Application

Start the server:

bash
python -m uvicorn app.main:app --reload

Visit http://localhost:8000/docs in your browser. FastAPI generates an interactive API documentation page powered by Swagger UI. You can test every endpoint without touching a command line.

Let's walk through a real workflow:

Create a category:

POST /categories
{
  "name": "Electronics",
  "description": "Electronic devices and accessories"
}

Response:

json
{
  "id": 1,
  "name": "Electronics",
  "description": "Electronic devices and accessories",
  "created_at": "2026-02-25T10:15:00"
}

Create a product:

POST /products
{
  "sku": "LAPTOP-DELL-001",
  "name": "Dell XPS 13",
  "description": "Ultralight 13-inch laptop",
  "price": 1200.00,
  "cost": 900.00,
  "quantity_on_hand": 5,
  "reorder_point": 2,
  "category_id": 1
}

Record a purchase:

POST /transactions
{
  "product_id": 1,
  "transaction_type": "purchase",
  "quantity": 10,
  "reference": "PO-2026-001"
}

Query low-stock products:

GET /products/low-stock

Returns products where quantity_on_hand <= reorder_point. This is how warehouse managers know what to reorder.
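The condition itself is simple enough to demonstrate on plain dicts (the real endpoint applies the same comparison in SQL via the CRUD layer):

```python
products = [
    {"sku": "LAPTOP-DELL-001", "quantity_on_hand": 1, "reorder_point": 2},
    {"sku": "MOUSE-001", "quantity_on_hand": 50, "reorder_point": 10},
    {"sku": "CABLE-001", "quantity_on_hand": 5, "reorder_point": 5},  # boundary: equal counts as low
]

low_stock = [p for p in products if p["quantity_on_hand"] <= p["reorder_point"]]
print([p["sku"] for p in low_stock])  # ['LAPTOP-DELL-001', 'CABLE-001']
```

Note the boundary: <= means a product sitting exactly at its reorder point is already flagged, which is the conservative choice for reordering.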

Search:

GET /products?search=Dell&category_id=1&limit=20

Filters by name/SKU match and category. This is how real users find products.

Architecture Lessons

This project embodies several critical patterns:

1. Separation of Concerns

  • models.py: Database schema
  • schemas.py: Request/response contracts
  • crud.py: Database operations
  • routes/: HTTP endpoints

Each module has a single responsibility. This makes the codebase testable, maintainable, and scalable.

2. Dependency Injection

python
def get_product(product_id: int, db: Session = Depends(get_db)):
    ...

FastAPI injects the database session automatically. You don't instantiate it yourself. This makes testing trivial: override the dependency in tests, use a real one in production.

3. Proper HTTP Semantics

  • 200 OK: Successful retrieval
  • 201 CREATED: Successful creation
  • 204 NO CONTENT: Successful deletion
  • 400 BAD REQUEST: Client error (invalid data)
  • 404 NOT FOUND: Resource doesn't exist
  • 409 CONFLICT: Business logic violation (duplicate SKU)

These status codes tell clients exactly what happened. Status codes are part of your API contract.

4. Validation at Every Layer

  • Pydantic: Validates request data (type, constraints)
  • Database constraints: Enforce uniqueness, foreign keys
  • Business logic: Reject insufficient stock, duplicate SKUs

Multiple validation layers catch bugs early.

5. Query Flexibility

python
def get_products(
    db: Session,
    skip: int = 0,
    limit: int = 100,
    category_id: Optional[int] = None,
    is_active: Optional[bool] = None,
    search: Optional[str] = None
):
    query = db.query(models.Product)
 
    if category_id is not None:
        query = query.filter(models.Product.category_id == category_id)
 
    if is_active is not None:
        query = query.filter(models.Product.is_active == is_active)
 
    if search:
        query = query.filter(...)
 
    return query.offset(skip).limit(limit).all()

Don't hard-code queries. Build them conditionally. This makes your API flexible without requiring new endpoints.

Production Considerations

This codebase is production-adjacent, but real systems add:

  • Authentication & authorization: Use OAuth2 or JWT tokens
  • Logging: Track all transactions for audits
  • Rate limiting: Prevent abuse
  • Caching: Speed up repeated queries
  • Background jobs: Process expensive operations asynchronously
  • Database migrations: Use Alembic to version schema changes
  • Monitoring: Track errors, response times, database performance
  • Load testing: Ensure your API scales

But the architecture you see here is the foundation. Everything else builds on top.

What You Actually Built

Step back and look at what we've assembled over the course of this article. You have a working inventory management system with a clean separation between database models, validation schemas, business logic, and HTTP routing. You have a test suite that runs in under a second and catches regressions across the full stack. You have flexible query capabilities that would have taken a week to build in a spreadsheet and still not worked correctly under concurrent access.

More importantly, you understand why each piece is where it is. The CRUD module exists because database operations need to be testable and reusable. The schema separation exists because your API contract should be independent of your database implementation. Dependency injection exists because infrastructure concerns shouldn't bleed into business logic. These aren't arbitrary rules; they're answers to problems you've now seen in context.

The architecture you've built here is not toy architecture. When you look at production Python services at well-funded companies, whether they're using FastAPI, Django REST Framework, or Flask, you'll recognize these patterns. A models module, a schemas module, a CRUD module, route handlers, dependency injection for sessions, in-memory databases for testing. The specifics vary, but the structure is consistent because it solves the same problems at scale. The fact that you now have an intuitive understanding of why this structure works puts you well ahead of developers who cargo-cult patterns without understanding their purpose.

One final thing worth emphasizing: the test suite is not optional decoration. It's the mechanism that lets this codebase grow. Every time you add a feature, you can run pytest and know within a second whether you've broken anything that existed before. Every time you optimize a query, you have proof it still returns correct results. This feedback loop (write code, run tests, iterate) is how professional software gets built. You've got the foundation. Now build on it.

Summary

You've built an inventory management system that demonstrates:

  • Layered architecture: Models → schemas → CRUD → routes
  • Proper HTTP semantics: Status codes, methods, response shapes
  • Data validation: Pydantic + database constraints
  • Database relationships: Foreign keys, cascading deletes, joins
  • Dependency injection: Clean, testable code
  • Advanced queries: Filtering, pagination, search
  • Comprehensive testing: Happy path, edge cases, error cases
  • Practical utilities: CSV export for reporting

This is the architecture used in production systems. It scales from startups to enterprises because it separates concerns, enables testing, and enforces consistency.

The next chapter is testing strategy, how to write tests that catch real bugs, not false positives. You'll learn pytest patterns that detect regressions, edge cases, and integration failures. That's where quality comes from.

Need help implementing this?

We build automation systems like this for clients every day.

Discuss Your Project