Go Series: Clean Architecture
Building Production-Ready Microservices in Go: A Blueprint for Clean Architecture
Section 1: The Architectural Blueprint: Project Structure and Principles
This section lays the theoretical and structural groundwork for the entire project. The focus is not merely on defining what the project layout is, but on explaining why these choices are made, grounding them in established Go community conventions and the core tenets of Clean Architecture. A well-defined project structure is the first line of defense against code entropy, providing clear separation of concerns and making the codebase navigable and maintainable as it scales.1
1.1. Establishing the Foundation: The Go Project Layout
The project structure adopted here aligns with the community-driven golang-standards/project-layout, a set of common historical and emerging patterns in the Go ecosystem.2 It is crucial to recognize this as a set of conventions, not rigid rules.2 The primary objective is to create a structure that is effective for the project's specific needs, promoting clarity and maintainability over dogmatic adherence to a fixed template.3 For new projects, it is often advisable to start with a minimal structure and add complexity only as required.4
Directory Breakdown:
/cmd: This directory serves as the entry point for the application's binaries. For this project, it will contain /cmd/server/main.go. The code within this directory should be minimal, with its primary responsibility being the initialization and wiring together of components defined in the /internal layer. This separation ensures that the core application logic is not tied to the main function, making it more reusable and testable.2
/internal: This directory houses the core application code. A key feature of Go is that the compiler enforces the privacy of packages within an internal directory; they cannot be imported by external projects.2 This is a fundamental tool for enforcing architectural boundaries. By placing the application's domain, use cases, and repository implementations here, we create a compile-time guarantee that prevents other services from creating unintended couplings to our internal logic.
/pkg: This directory is reserved for publicly available library code that is explicitly intended to be shared and imported by other Go modules.2 In the context of a self-contained microservice, this directory should be used sparingly, if at all. The default should be to place all application-specific code within /internal to maintain a well-defined and controlled public API surface.6
/config: This directory will hold static configuration files, such as config.yml. These files provide default settings for various environments, which can be overridden by environment variables.7
/scripts: This directory is for operational and automation scripts. Examples include database migration scripts, deployment helpers, or scripts for common development tasks.
/docs: This directory will contain API documentation generated by tools like Swagger/OpenAPI, providing a clear contract for API consumers.9
/api: This directory is designated for API specification files, such as Protocol Buffers (.proto) definitions for gRPC or other schema-related files.10
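Putting these conventions together, a minimal layout for this service might look like the following (an illustrative sketch; exact file names vary by project):

.
├── cmd/
│   └── server/
│       └── main.go
├── internal/
│   ├── domain/
│   ├── usecase/
│   ├── repository/
│   │   ├── postgres/
│   │   ├── redis/
│   │   ├── kafka/
│   │   └── httpclient/
│   ├── delivery/
│   │   └── http/
│   └── middleware/
├── config/
│   └── config.yml
├── scripts/
├── docs/
└── api/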
1.2. The Layers of Clean Architecture: A Conceptual Deep Dive
Clean Architecture, as proposed by Robert C. Martin, is a software design philosophy that organizes a system into concentric layers, each with distinct responsibilities. Its primary goal is the separation of concerns, achieved by enforcing a strict dependency rule.11
The Dependency Rule: This is the most critical constraint in Clean Architecture. Source code dependencies must only point inwards. An outer layer can depend on an inner layer, but an inner layer must never depend on, or even know about, an outer layer. This ensures that changes to external concerns—such as the database, UI, or web framework—do not impact the core business logic.11 This principle makes the system independent of frameworks, testable, and easier to maintain.
The architecture can be visualized as a set of concentric circles:
Entities (Domain): At the very center are the Entities. These represent the core business objects and rules of the application (e.g., a Product or a User). They are the most general and high-level concepts, encapsulating enterprise-wide business logic. In Go, these are typically implemented as structs and their associated methods, completely devoid of any infrastructure-specific details.13
Usecases (Application Business Rules): This layer orchestrates the flow of data to and from the entities. It contains the application-specific business logic that defines what the application can do. For example, a CreateProduct use case would orchestrate the validation and creation of a Product entity. This layer depends on the Entities but remains independent of any external frameworks or drivers.12
Interface Adapters (Controllers, Repositories): This layer acts as a set of converters. It adapts data from the format most convenient for the inner layers (Usecases and Entities) to the format required by external systems like the database or the web. This is where components like repository implementations, HTTP handlers, and presenters reside. They bridge the gap between the business logic and the infrastructure.13
Frameworks & Drivers (Infrastructure): This is the outermost layer, consisting of the concrete implementations of external tools and frameworks. This includes the database (PostgreSQL), the web framework (Chi), caching systems (Redis), and message brokers (Kafka). This layer is considered a "detail" that the inner layers must remain completely oblivious to.11
The following table provides a clear visual representation of the dependency rule, indicating which layers are permitted to import from others. This matrix is a critical reference for maintaining the architectural integrity of the project.
| Layer      | Imports domain | Imports usecase | Imports repository | Imports delivery |
|------------|----------------|-----------------|--------------------|------------------|
| domain     | -              | ❌              | ❌                 | ❌               |
| usecase    | ✅             | -               | ❌                 | ❌               |
| repository | ✅             | ✅              | -                  | ❌               |
| delivery   | ✅             | ✅              | ❌                 | -                |

Table 1: Dependency Rule Matrix. A checkmark (✅) indicates an allowed dependency, while a cross (❌) indicates a forbidden one. Note that the repository layer depends on the usecase layer because the repository interfaces are defined in the usecase layer, an example of the Dependency Inversion Principle.
The internal directory in Go is not just a conventional grouping but a compiler-enforced boundary that is fundamental to applying Clean Architecture effectively. While the architecture's primary goal is managing dependencies to isolate business logic 11, the Go toolchain provides a direct, language-level mechanism to enforce this. By preventing packages outside the module root from importing anything within an internal directory 2, Go offers a robust, compile-time guarantee against accidental coupling, thereby enforcing a key architectural boundary without the need for complex external tooling.
1.3. The Domain Layer: Implementing Core Business Entities
The domain layer is the heart of the application. It contains the pure business logic and data structures that are central to the system's purpose. These entities should be independent of any other layer and should only change if the core business rules themselves change.13
Example Implementation (/internal/domain/product.go):
The following code defines a Product entity. It is a plain Go struct, free from any database, JSON, or other framework-specific tags. This enforces its independence and ensures that the core business model is not polluted with infrastructure details.
Go
// internal/domain/product.go
package domain
import (
"errors"
"time"
)
// Sentinel errors are defined in the domain layer to provide a consistent
// set of business-rule-related errors that higher layers can check against.
var (
ErrNotFound = errors.New("requested item was not found")
ErrInvalidPrice = errors.New("invalid price: must be positive")
ErrInvalidStock = errors.New("invalid stock: must be non-negative")
)
// Product represents the core business entity for a product.
type Product struct {
ID int64
Name string
Description string
Price float64
Stock int
CreatedAt time.Time
UpdatedAt time.Time
}
// NewProduct is a factory function for creating a new Product.
// It enforces business rules, such as ensuring the price is positive.
func NewProduct(name, description string, price float64, stock int) (*Product, error) {
if price <= 0 {
return nil, ErrInvalidPrice
}
if stock < 0 {
return nil, ErrInvalidStock
}
return &Product{
Name: name,
Description: description,
Price: price,
Stock: stock,
CreatedAt: time.Now().UTC(),
UpdatedAt: time.Now().UTC(),
}, nil
}
// ApplyDiscount is a business logic method on the Product entity.
func (p *Product) ApplyDiscount(percentage float64) error {
if percentage <= 0 || percentage > 100 {
return errors.New("discount percentage must be between 0 and 100")
}
p.Price *= (1 - percentage/100)
p.UpdatedAt = time.Now().UTC()
return nil
}
Domain-Specific Errors:
By defining sentinel errors like ErrNotFound and ErrInvalidPrice within the domain package, we create a stable contract for error conditions.19 Higher layers, such as the delivery layer, can check for these specific errors using errors.Is and map them to appropriate responses (e.g., HTTP 404 Not Found) without needing to know the underlying implementation details of the repository that generated the error.21 This practice decouples the error handling logic from the data access logic.
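For example, a delivery-layer handler can translate the sentinel into an HTTP status without knowing which repository produced it (a minimal sketch; the full mapping helper appears in Section 6):
Go
// Map a domain sentinel error to an HTTP response.
if errors.Is(err, domain.ErrNotFound) {
	http.Error(w, "product not found", http.StatusNotFound)
	return
}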
Section 2: Implementing Business Logic: Usecases and Repository Interfaces
This section focuses on building the application's core logic. It demonstrates how to orchestrate the business rules defined in the domain layer and, crucially, how to define contracts for external dependencies like data storage without creating a hard coupling to them. This is achieved through the idiomatic use of interfaces, which is central to building flexible and testable Go applications.
2.1. The Usecase Layer: Orchestrating Business Logic
The usecase layer, also known as the application business rule layer, contains the logic specific to this application's functionality. It answers the question, "What can this application do?" by orchestrating the flow of data between the domain entities and the infrastructure layer via repository interfaces.12
Example Implementation (/internal/usecase/product_uc.go):
The following ProductUsecase struct encapsulates the logic for creating and retrieving products. It depends on a ProductRepository interface, which is defined within this same package. This co-location of the interface with its consumer is a key pattern for achieving dependency inversion in Go.
Go
// internal/usecase/product_uc.go
package usecase
import (
"context"
"time"
"your_project/internal/domain"
)
// ProductRepository defines the contract for product data storage.
// It is defined here, in the usecase layer, where it is consumed.
// This is a direct application of the Dependency Inversion Principle.
type ProductRepository interface {
Create(ctx context.Context, product *domain.Product) (int64, error)
GetByID(ctx context.Context, id int64) (*domain.Product, error)
Update(ctx context.Context, product *domain.Product) error
}
// ProductUsecase defines the use case for product-related operations.
// It embeds its dependencies as interfaces, allowing for easy mocking and testing.
type ProductUsecase struct {
repo ProductRepository
contextTimeout time.Duration
}
// NewProductUsecase is the factory function for ProductUsecase.
func NewProductUsecase(repo ProductRepository, timeout time.Duration) *ProductUsecase {
return &ProductUsecase{
repo: repo,
contextTimeout: timeout,
}
}
// CreateProduct handles the business logic for creating a new product.
func (uc *ProductUsecase) CreateProduct(c context.Context, name, description string, price float64, stock int) (*domain.Product, error) {
ctx, cancel := context.WithTimeout(c, uc.contextTimeout)
defer cancel()
// Use the domain entity's factory to enforce business rules
product, err := domain.NewProduct(name, description, price, stock)
if err != nil {
return nil, err // Propagate domain validation error
}
// Persist the new product using the repository interface
productID, err := uc.repo.Create(ctx, product)
if err != nil {
return nil, err // Propagate repository error
}
product.ID = productID
return product, nil
}
// GetProductByID handles the business logic for retrieving a product by its ID.
func (uc *ProductUsecase) GetProductByID(c context.Context, id int64) (*domain.Product, error) {
ctx, cancel := context.WithTimeout(c, uc.contextTimeout)
defer cancel()
product, err := uc.repo.GetByID(ctx, id)
if err != nil {
// The repository implementation is responsible for returning domain.ErrNotFound
// if the product does not exist.
return nil, err
}
return product, nil
}
In this implementation, the ProductUsecase is responsible for orchestrating the creation of a domain.Product and its persistence. It first uses the domain.NewProduct factory function to ensure that the initial data conforms to the core business rules (e.g., positive price). Then, it calls the Create method on its repository dependency to save the product. This clear separation of responsibilities is a hallmark of Clean Architecture.
2.2. Defining Contracts with Interfaces: The Dependency Inversion Principle in Action
The Dependency Inversion Principle states that high-level modules should not depend on low-level modules; both should depend on abstractions.12 In Go, this is achieved idiomatically by defining interfaces where they are consumed.
Analysis:
By defining the ProductRepository interface within the usecase package, we have effectively inverted the traditional dependency flow. A naive architecture might have the usecase package import and depend directly on a repository package. In our design, the usecase layer declares the contract it needs, and the concrete repository implementation (which will be in an outer layer) must conform to this contract.
This has several profound benefits:
Decoupling: The ProductUsecase is completely decoupled from the data storage mechanism. It has no knowledge of whether the data is stored in PostgreSQL, MongoDB, or an in-memory map. This allows the persistence technology to be swapped out with zero changes to the core business logic.11
Testability: This decoupling makes the usecase layer highly testable. To write a unit test for CreateProduct, one can provide a simple in-memory mock implementation of the ProductRepository interface, as sketched below. This allows for fast, isolated tests that do not require a running database or any other external infrastructure.11
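As an illustration, a hand-rolled in-memory fake (hypothetical, not part of the project's files; Section 7 uses generated mocks instead) is enough to satisfy the interface:
Go
// fakeProductRepo is a hypothetical in-memory ProductRepository used only in tests.
type fakeProductRepo struct {
	products map[int64]*domain.Product
	nextID   int64
}

func (f *fakeProductRepo) Create(ctx context.Context, p *domain.Product) (int64, error) {
	f.nextID++
	p.ID = f.nextID
	f.products[p.ID] = p
	return p.ID, nil
}

func (f *fakeProductRepo) GetByID(ctx context.Context, id int64) (*domain.Product, error) {
	p, ok := f.products[id]
	if !ok {
		return nil, domain.ErrNotFound
	}
	return p, nil
}

func (f *fakeProductRepo) Update(ctx context.Context, p *domain.Product) error {
	if _, ok := f.products[p.ID]; !ok {
		return domain.ErrNotFound
	}
	f.products[p.ID] = p
	return nil
}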
The debate within the Go community regarding the complexity of Clean Architecture often stems from attempts to replicate rigid, Java-style patterns with numerous explicit layers and dependency injection frameworks.6 This can feel unidiomatic in a language that prizes simplicity. However, the core of Clean Architecture is the Dependency Inversion Principle, a concept for which Go's implicit interfaces are an ideal tool. By defining the interface on the consumer side (the usecase), we achieve the primary architectural goal—decoupling core logic from infrastructure—without the need for complex frameworks. This approach is not only effective but also perfectly idiomatic Go, leveraging the language's features to achieve a clean separation of concerns. The friction arises from dogmatic application of patterns from other ecosystems, not from the principle itself.
Section 3: The Infrastructure Layer: Connecting to the Outside World
This section provides the concrete implementations for the contracts (interfaces) defined in the usecase layer. It is the bridge between the application's abstract business logic and the tangible world of databases, caches, message brokers, and external APIs. Each component in this layer is an "adapter" that translates the usecase's needs into the specific protocol or API of an external system. This layer is where all the "dirty" details of infrastructure live, keeping the core of the application clean and independent.
3.1. Data Persistence with PostgreSQL and pgxpool
For data persistence, PostgreSQL is a robust and feature-rich choice. To interact with it, we will use the pgx/v5 library suite, specifically pgxpool, instead of the standard database/sql package. This choice is driven by pgx's superior performance, native support for a wider range of PostgreSQL data types, and a more modern API that fully embraces Go's context package.30 The pgxpool package provides a concurrency-safe connection pool, which is essential for handling concurrent requests in a web server environment.32
Effective connection pool management is a critical aspect of production readiness that is often overlooked in simple examples. A naive implementation that creates a new database connection per request would be highly inefficient due to the significant overhead of TCP and TLS handshakes.32 While pgxpool handles the mechanics of pooling, its configuration is paramount for stability and performance. Parameters like MaxConns, MinConns, MaxConnIdleTime, and MaxConnLifetime must be tunable and set according to the application's concurrency needs and the database's capacity to prevent overwhelming the database or encountering issues with stale connections closed by firewalls.32
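A minimal sketch of wiring those tunables into pgxpool, assuming the PostgresConfig struct and import path of the config package shown in Section 5:
Go
// cmd/server/main.go (sketch)
import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5/pgxpool"

	"your_project/config"
)

// newDBPool builds a pgxpool.Pool using the tunable pool parameters from configuration.
func newDBPool(ctx context.Context, cfg config.PostgresConfig) (*pgxpool.Pool, error) {
	poolCfg, err := pgxpool.ParseConfig(cfg.URL)
	if err != nil {
		return nil, fmt.Errorf("parse postgres config: %w", err)
	}
	poolCfg.MaxConns = cfg.MaxConns
	poolCfg.MinConns = cfg.MinConns
	poolCfg.MaxConnLifetime = cfg.MaxConnLifetime
	poolCfg.MaxConnIdleTime = cfg.MaxConnIdleTime
	return pgxpool.NewWithConfig(ctx, poolCfg)
}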
Implementation (/internal/repository/postgres/product_repo.go):
This file contains the PostgresProductRepository, which implements the usecase.ProductRepository interface. It takes a *pgxpool.Pool as a dependency, which is created in main.go and injected.
Go
// internal/repository/postgres/product_repo.go
package postgres
import (
"context"
"errors"
"fmt"
"your_project/internal/domain"
"your_project/internal/usecase"
"github.com/jackc/pgx/v5"
"github.com/jackc/pgx/v5/pgxpool"
)
type PostgresProductRepository struct {
db *pgxpool.Pool
}
// NewPostgresProductRepository creates a new instance of PostgresProductRepository.
// It ensures that the struct conforms to the usecase.ProductRepository interface.
func NewPostgresProductRepository(db *pgxpool.Pool) usecase.ProductRepository {
return &PostgresProductRepository{db: db}
}
// Create persists a new product to the database.
func (r *PostgresProductRepository) Create(ctx context.Context, product *domain.Product) (int64, error) {
query := `INSERT INTO products (name, description, price, stock, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6) RETURNING id`
var productID int64
err := r.db.QueryRow(ctx, query,
product.Name,
product.Description,
product.Price,
product.Stock,
product.CreatedAt,
product.UpdatedAt,
).Scan(&productID)
if err != nil {
return 0, fmt.Errorf("failed to create product: %w", err)
}
return productID, nil
}
// GetByID retrieves a product by its ID.
func (r *PostgresProductRepository) GetByID(ctx context.Context, id int64) (*domain.Product, error) {
query := `SELECT id, name, description, price, stock, created_at, updated_at
FROM products WHERE id = $1`
var p domain.Product
err := r.db.QueryRow(ctx, query, id).Scan(
&p.ID,
&p.Name,
&p.Description,
&p.Price,
&p.Stock,
&p.CreatedAt,
&p.UpdatedAt,
)
if err != nil {
if errors.Is(err, pgx.ErrNoRows) {
// Map the driver-specific error to a domain-specific error.
return nil, domain.ErrNotFound
}
return nil, fmt.Errorf("failed to get product by id: %w", err)
}
return &p, nil
}
// Update modifies an existing product in the database.
func (r *PostgresProductRepository) Update(ctx context.Context, product *domain.Product) error {
query := `UPDATE products SET name = $1, description = $2, price = $3, stock = $4, updated_at = $5
WHERE id = $6`
cmdTag, err := r.db.Exec(ctx, query,
product.Name,
product.Description,
product.Price,
product.Stock,
product.UpdatedAt,
product.ID,
)
if err != nil {
return fmt.Errorf("failed to update product: %w", err)
}
if cmdTag.RowsAffected() == 0 {
return domain.ErrNotFound
}
return nil
}
This implementation correctly maps the database-specific pgx.ErrNoRows to the domain-agnostic domain.ErrNotFound, preventing infrastructure details from leaking into the usecase layer.
3.2. Caching with Redis: The Cache-Aside Pattern
To improve read performance and reduce database load, a caching layer is introduced. The Cache-Aside pattern is a common and effective strategy where the application logic is responsible for managing the cache.35 The application first attempts to retrieve data from the cache; if it's a "cache miss," it queries the primary data store, then populates the cache with the result before returning it to the caller.37
Implementation (/internal/repository/redis/product_cache.go):
We will create a ProductCacheRepository that acts as a decorator, wrapping the primary PostgresProductRepository. It implements the same usecase.ProductRepository interface, allowing it to be transparently swapped in. We will use the popular go-redis library.35
Go
// internal/repository/redis/product_cache.go
package redis
import (
"context"
"encoding/json"
"fmt"
"time"
"your_project/internal/domain"
"your_project/internal/usecase"
"github.com/redis/go-redis/v9"
)
type ProductCacheRepository struct {
next usecase.ProductRepository // The next repository in the chain (e.g., PostgreSQL)
client *redis.Client
ttl time.Duration
}
// NewProductCacheRepository creates a new caching decorator.
func NewProductCacheRepository(next usecase.ProductRepository, client *redis.Client, ttl time.Duration) usecase.ProductRepository {
return &ProductCacheRepository{
next: next,
client: client,
ttl: ttl,
}
}
func (r *ProductCacheRepository) productKey(id int64) string {
return fmt.Sprintf("product:%d", id)
}
func (r *ProductCacheRepository) GetByID(ctx context.Context, id int64) (*domain.Product, error) {
// 1. Try to get from cache
key := r.productKey(id)
result, err := r.client.Get(ctx, key).Result()
if err == nil {
// Cache hit
var p domain.Product
if err := json.Unmarshal([]byte(result), &p); err == nil {
return &p, nil
}
}
// 2. Cache miss or error, get from the primary repository
product, err := r.next.GetByID(ctx, id)
if err != nil {
return nil, err
}
// 3. Set the result in the cache for next time
data, err := json.Marshal(product)
if err == nil {
r.client.Set(ctx, key, data, r.ttl)
}
return product, nil
}
func (r *ProductCacheRepository) Create(ctx context.Context, product *domain.Product) (int64, error) {
// For create, we just pass through to the next repository
return r.next.Create(ctx, product)
}
func (r *ProductCacheRepository) Update(ctx context.Context, product *domain.Product) error {
// 1. Update the primary repository first
err := r.next.Update(ctx, product)
if err != nil {
return err
}
// 2. Invalidate the cache to avoid stale data
key := r.productKey(product.ID)
r.client.Del(ctx, key)
return nil
}
3.3. Asynchronous Communication with Kafka
For decoupling services and handling asynchronous background tasks, a message broker like Kafka is indispensable. We will implement a publisher that emits an event whenever a new product is created. This allows other microservices to react to this event without creating a direct, synchronous dependency. We will use the segmentio/kafka-go library for its simple and effective API.38
Implementation (/internal/repository/kafka/product_publisher.go):
A ProductEventPublisher will be created. This component will be injected into the ProductUsecase, which will call it after a product is successfully created.
Go
// internal/repository/kafka/product_publisher.go
package kafka
import (
"context"
"encoding/json"
"fmt"
"time"
"github.com/segmentio/kafka-go"
"your_project/internal/domain"
)
type ProductEventPublisher struct {
writer *kafka.Writer
}
func NewProductEventPublisher(brokers []string, topic string) (*ProductEventPublisher, error) {
writer := &kafka.Writer{
Addr: kafka.TCP(brokers...),
Topic: topic,
Balancer: &kafka.LeastBytes{},
WriteTimeout: 10 * time.Second,
ReadTimeout: 10 * time.Second,
}
return &ProductEventPublisher{writer: writer}, nil
}
func (p *ProductEventPublisher) PublishProductCreated(ctx context.Context, product *domain.Product) error {
payload, err := json.Marshal(product)
if err != nil {
return fmt.Errorf("failed to marshal product for kafka: %w", err)
}
msg := kafka.Message{
Key: []byte(fmt.Sprintf("%d", product.ID)),
Value: payload,
}
err = p.writer.WriteMessages(ctx, msg)
if err != nil {
return fmt.Errorf("failed to write kafka message: %w", err)
}
return nil
}
func (p *ProductEventPublisher) Close() error {
return p.writer.Close()
}
A skeleton for a consumer worker will also be provided in /cmd/kafkalistener/main.go to demonstrate how to receive these events, completing the pattern.39
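A minimal sketch of such a listener, assuming broker addresses and a topic name that mirror the configuration above (the handler logic is only a placeholder):
Go
// cmd/kafkalistener/main.go (sketch)
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	reader := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"}, // assumption: matches the kafka.brokers default
		Topic:   "product-created",          // assumption: illustrative topic name
		GroupID: "product-listener",
	})
	defer reader.Close()

	for {
		msg, err := reader.ReadMessage(context.Background())
		if err != nil {
			log.Printf("read message: %v", err)
			return
		}
		// In a real worker, unmarshal msg.Value into a domain.Product and react to it.
		log.Printf("received event key=%s value=%s", string(msg.Key), string(msg.Value))
	}
}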
3.4. Interacting with External Services: Resilient HTTP Client
Microservices frequently need to communicate with third-party APIs or other internal services. These network calls are inherently unreliable and can fail due to transient issues. A production-ready application must implement a resilient client with automatic retries and exponential backoff to handle these failures gracefully.
Implementation (/internal/repository/httpclient/some_api.go):
We will use hashicorp/go-retryablehttp, a robust wrapper around Go's standard net/http client that provides these features out of the box.41
Go
// internal/repository/httpclient/some_api.go
package httpclient
import (
"context"
"encoding/json"
"fmt"
"io"
"log/slog"
"time"
"github.com/hashicorp/go-retryablehttp"
)
// Custom logger adapter to integrate with slog
type SlogAdapter struct {
Logger *slog.Logger
}
func (l *SlogAdapter) Printf(format string, v ...interface{}) {
l.Logger.Info(fmt.Sprintf(format, v...))
}
type ExternalAPIClient struct {
client *retryablehttp.Client
}
func NewExternalAPIClient(logger *slog.Logger) *ExternalAPIClient {
retryClient := retryablehttp.NewClient()
retryClient.RetryMax = 3
retryClient.RetryWaitMin = 1 * time.Second
retryClient.RetryWaitMax = 30 * time.Second
retryClient.Logger = &SlogAdapter{Logger: logger} // Integrate with our structured logger
return &ExternalAPIClient{client: retryClient}
}
type ExternalProductData struct {
// Define fields based on the external API's response
SupplierID string `json:"supplier_id"`
ShippingCost float64 `json:"shipping_cost"`
}
func (c *ExternalAPIClient) GetProductShippingInfo(ctx context.Context, productID int64) (*ExternalProductData, error) {
url := fmt.Sprintf("https://api.externalsupplier.com/products/%d/shipping", productID)
req, err := retryablehttp.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, fmt.Errorf("failed to create request: %w", err)
}
resp, err := c.client.Do(req)
if err != nil {
return nil, fmt.Errorf("request to external api failed: %w", err)
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
body, _ := io.ReadAll(resp.Body)
return nil, fmt.Errorf("external api returned non-2xx status: %d, body: %s", resp.StatusCode, string(body))
}
var data ExternalProductData
if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
return nil, fmt.Errorf("failed to decode response body: %w", err)
}
return &data, nil
}
This implementation demonstrates how to configure the client with retry policies and integrate its logging with the application's main slog logger.44 This ensures that all infrastructure interactions, whether with a database, cache, or external API, are robust and observable.
Section 4: The Delivery Layer: Exposing the Application via API
The delivery layer is the primary entry point for external actors, such as users or other services, to interact with the application. For this project, the delivery mechanism is an HTTP RESTful API. This layer is responsible for handling incoming HTTP requests, parsing them, invoking the appropriate use cases, and formatting the results into HTTP responses. It acts as the translator between the web protocol and the application's core business logic.
4.1. HTTP Server and Routing with Chi
For building the HTTP server, we will use the chi router (github.com/go-chi/chi/v5). Chi is a lightweight, idiomatic, and composable router that is fully compatible with the standard net/http library.45 Its key advantages include a powerful middleware system, support for URL parameters, and the ability to structure routes in a modular way using sub-routers, which helps in organizing large APIs.46
The main application entry point in /cmd/server/main.go will be responsible for initializing all dependencies (configuration, logger, repositories, use cases) and setting up the Chi router and server.
Example Implementation (/cmd/server/main.go snippet):
Go
//... imports
func main() {
//... (configuration, logger, and database pool setup)
// Initialize repositories
productRepo := postgres.NewPostgresProductRepository(dbPool)
productCacheRepo := redis.NewProductCacheRepository(productRepo, redisClient, 5*time.Minute)
// Initialize usecases
productUsecase := usecase.NewProductUsecase(productCacheRepo, 2*time.Second)
// Initialize delivery layer (HTTP handlers)
productHandler := delivery.NewProductHandler(productUsecase, logger)
// Setup router
router := chi.NewRouter()
// Setup middleware stack
//... (middleware setup)
// Setup routes
router.Route("/api/v1", func(r chi.Router) {
r.Mount("/products", productHandler.Routes())
})
//... (server startup and graceful shutdown logic)
}
4.2. Handlers and Middleware
HTTP handlers are the functions that receive an http.Request and write to an http.ResponseWriter. In our architecture, a handler's primary role is to act as a thin adapter: it decodes the incoming request (e.g., parsing a JSON body or URL parameters), calls the appropriate method on a usecase, and then encodes the result (or error) into an HTTP response.
Example Handler Implementation (/internal/delivery/http/product_handler.go):
Go
// internal/delivery/http/product_handler.go
package http
import (
"encoding/json"
"errors"
"log/slog"
"net/http"
"strconv"
"your_project/internal/domain"
"your_project/internal/usecase"
"github.com/go-chi/chi/v5"
)
type ProductHandler struct {
usecase ProductUsecaseInterface // Using an interface for testability
logger *slog.Logger
}
// ProductUsecaseInterface defines the methods the handler needs from the usecase.
type ProductUsecaseInterface interface {
CreateProduct(ctx context.Context, name, description string, price float64, stock int) (*domain.Product, error)
GetProductByID(ctx context.Context, id int64) (*domain.Product, error)
}
func NewProductHandler(uc ProductUsecaseInterface, logger *slog.Logger) *ProductHandler {
return &ProductHandler{
usecase: uc,
logger: logger,
}
}
// Routes returns a new chi.Router for product-related endpoints.
func (h *ProductHandler) Routes() chi.Router {
r := chi.NewRouter()
r.Post("/", h.CreateProduct)
r.Get("/{id}", h.GetProductByID)
return r
}
type createProductRequest struct {
Name string `json:"name"`
Description string `json:"description"`
Price float64 `json:"price"`
Stock int `json:"stock"`
}
// CreateProduct handles the HTTP request for creating a new product.
// @Summary Create a new product
// @Description Create a new product with the input payload
// @Tags products
// @Accept json
// @Produce json
// @Param product body createProductRequest true "Create Product"
// @Success 201 {object} domain.Product
// @Failure 400 {object} ResponseError
// @Failure 500 {object} ResponseError
// @Router /products [post]
func (h *ProductHandler) CreateProduct(w http.ResponseWriter, r *http.Request) {
var req createProductRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
h.respondWithError(w, r, http.StatusBadRequest, "Invalid request payload")
return
}
product, err := h.usecase.CreateProduct(r.Context(), req.Name, req.Description, req.Price, req.Stock)
if err != nil {
// Map domain/usecase errors to HTTP status codes
h.handleError(w, r, err)
return
}
h.respondWithJSON(w, r, http.StatusCreated, product)
}
// GetProductByID handles the HTTP request for retrieving a product by its ID.
// @Summary Get a product by ID
// @Description Get details of a product by its ID
// @Tags products
// @Produce json
// @Param id path int true "Product ID"
// @Success 200 {object} domain.Product
// @Failure 404 {object} ResponseError
// @Failure 500 {object} ResponseError
// @Router /products/{id} [get]
func (h *ProductHandler) GetProductByID(w http.ResponseWriter, r *http.Request) {
idStr := chi.URLParam(r, "id") // chi provides helpers for URL params [48, 49]
id, err := strconv.ParseInt(idStr, 10, 64)
if err != nil {
h.respondWithError(w, r, http.StatusBadRequest, "Invalid product ID")
return
}
product, err := h.usecase.GetProductByID(r.Context(), id)
if err != nil {
h.handleError(w, r, err)
return
}
h.respondWithJSON(w, r, http.StatusOK, product)
}
//... (helper functions for JSON responses and error handling)
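The JSON response helpers referenced above are elided in the original listing; a minimal sketch of what they might look like (the names respondWithJSON and respondWithError match the calls already made by the handler, and ResponseError matches the type referenced in the Swagger annotations):
Go
// ResponseError is the JSON shape returned for error responses.
type ResponseError struct {
	Message string `json:"message"`
}

// respondWithJSON writes v as a JSON response with the given status code.
func (h *ProductHandler) respondWithJSON(w http.ResponseWriter, r *http.Request, status int, v any) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	if err := json.NewEncoder(w).Encode(v); err != nil {
		h.logger.Error("failed to encode response", "error", err)
	}
}

// respondWithError writes a JSON error payload with the given status code.
func (h *ProductHandler) respondWithError(w http.ResponseWriter, r *http.Request, status int, message string) {
	h.respondWithJSON(w, r, status, ResponseError{Message: message})
}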
Middleware Stack:
Middleware provides a powerful way to handle cross-cutting concerns like logging, authentication, and rate limiting. Chi middleware components are standard http.Handler wrappers, making them easy to write and chain.46 We will construct a middleware stack that applies to all routes, wired together in the sketch that follows this list:
Request ID: Injects a unique ID into the request context for tracing purposes.
Structured Logging: A custom middleware that uses the request ID to create a request-scoped slog logger.
Authentication (JWT Stub): A placeholder middleware that would inspect the Authorization header for a JWT, validate it, and inject user information into the context.
Rate Limiting: An IP-based rate limiter using the token bucket algorithm to prevent abuse. The golang.org/x/time/rate package provides a ready-made implementation.51 The chi/httprate package offers a convenient middleware wrapper for this.53
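A sketch of how that stack might be assembled in main.go, assuming chi's built-in RequestID and Recoverer middleware plus the custom Logger and PrometheusMetrics middleware from Section 5 (the JWT middleware is left as a stub):
Go
// cmd/server/main.go (sketch)
import (
	"log/slog"
	"time"

	"github.com/go-chi/chi/v5"
	chimw "github.com/go-chi/chi/v5/middleware"
	"github.com/go-chi/httprate"

	custommw "your_project/internal/middleware"
)

// setupRouter assembles the global middleware stack described above.
func setupRouter(logger *slog.Logger) chi.Router {
	r := chi.NewRouter()
	r.Use(chimw.RequestID)                        // inject a unique request ID for tracing
	r.Use(custommw.Logger(logger))                // request-scoped structured logging (Section 5.2)
	r.Use(chimw.Recoverer)                        // convert panics into 500 responses
	r.Use(httprate.LimitByIP(100, 1*time.Minute)) // token-bucket rate limiting per client IP
	r.Use(custommw.PrometheusMetrics)             // request counters and latency histograms (Section 5.3)
	// r.Use(jwtAuth) // placeholder: JWT validation middleware would slot in here
	return r
}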
4.3. API Documentation with Swagger
Clear, comprehensive API documentation is crucial for both internal and external consumers. We will use swaggo/swag to automatically generate OpenAPI 2.0 documentation from annotations in our handler code.54
Integration Steps:
Installation: Install the swag CLI and the http-swagger library.
Bash
go install github.com/swaggo/swag/cmd/swag@latest
go get -u github.com/swaggo/http-swagger/v2
Annotations: Add annotations to the main.go file for global API information (title, version, etc.) and to each handler function for endpoint-specific details (summary, parameters, responses), as shown in the ProductHandler example above.55
Generation: Run swag init in the project root. This command parses the code and generates a docs directory containing the swagger.json, swagger.yaml, and a Go file that embeds the documentation.57
Serving: Add a route in main.go to serve the Swagger UI.
Go
// in main.go, inside router setup
import (
	httpSwagger "github.com/swaggo/http-swagger/v2"
	_ "your_project/docs" // Import generated docs
)

// @title Production-Ready Go Service API
// @version 1.0
// @description This is a sample server for a Go Clean Architecture project.
// @host localhost:8080
// @BasePath /api/v1

router.Get("/swagger/*", httpSwagger.Handler(
	httpSwagger.URL("http://localhost:8080/swagger/doc.json"), // The url pointing to the API definition
))
This setup provides a live, interactive API documentation endpoint at /swagger/index.html, which greatly simplifies API exploration and testing for developers.
Section 5: Fortifying for Production: Observability and Configuration
Moving an application from development to production requires a robust foundation for configuration, monitoring, and debugging. This section details the implementation of key operational components: centralized configuration, structured logging, metrics for monitoring, and distributed tracing for debugging complex interactions. These "observability pillars" are crucial for maintaining a healthy and reliable service.
5.1. Centralized Configuration with Viper
Hardcoding configuration values is brittle and unsuitable for production. A flexible configuration system is needed to manage settings across different environments (local, staging, production) without code changes. We will use Viper (github.com/spf13/viper), a popular and powerful configuration library for Go.58
Viper can read configuration from multiple sources, including YAML files, environment variables, and command-line flags, and provides a clear precedence order.8 This allows us to define default values in a config.yml file and override them with environment variables in production, a standard practice for 12-Factor Apps.61
Implementation (/config/config.go):
Go
// config/config.go
package config
import (
"time"
"github.com/spf13/viper"
"strings"
)
type Config struct {
Server ServerConfig
Postgres PostgresConfig
Redis RedisConfig
Kafka KafkaConfig
Tracing TracingConfig
}
type ServerConfig struct {
Port string `mapstructure:"port"`
ReadTimeout time.Duration `mapstructure:"readTimeout"`
WriteTimeout time.Duration `mapstructure:"writeTimeout"`
}
type PostgresConfig struct {
URL string `mapstructure:"url"`
MaxConns int32 `mapstructure:"maxConns"`
MinConns int32 `mapstructure:"minConns"`
MaxConnLifetime time.Duration `mapstructure:"maxConnLifetime"`
MaxConnIdleTime time.Duration `mapstructure:"maxConnIdleTime"`
}
type RedisConfig struct {
Addr string `mapstructure:"addr"`
Password string `mapstructure:"password"`
DB int `mapstructure:"db"`
}
type KafkaConfig struct {
Brokers []string `mapstructure:"brokers"`
Topic string `mapstructure:"topic"`
}
type TracingConfig struct {
JaegerURL string `mapstructure:"jaegerUrl"`
}
func LoadConfig(path string) (*Config, error) {
viper.AddConfigPath(path)
viper.SetConfigName("config")
viper.SetConfigType("yml")
// Enable reading from environment variables
viper.AutomaticEnv()
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
if err := viper.ReadInConfig(); err != nil {
return nil, err
}
var cfg Config
if err := viper.Unmarshal(&cfg); err != nil {
return nil, err
}
return &cfg, nil
}
A corresponding config/config.yml file will provide the defaults:
YAML
server:
port: ":8080"
readTimeout: 10s
writeTimeout: 10s
postgres:
url: "postgres://user:password@localhost:5432/mydatabase?sslmode=disable"
maxConns: 10
minConns: 2
maxConnLifetime: 1h
maxConnIdleTime: 30m
#... other configurations
In a containerized environment like Kubernetes, POSTGRES_URL can be set as an environment variable to override the local development value.
| Parameter | Environment Variable | YAML Key | Description | Default |
|---|---|---|---|---|
| Server Port | SERVER_PORT | server.port | The address and port for the HTTP server to listen on. | :8080 |
| Postgres URL | POSTGRES_URL | postgres.url | The DSN for connecting to the PostgreSQL database. | postgres://user:password@localhost:5432/mydatabase?sslmode=disable |
| Postgres Max Conns | POSTGRES_MAXCONNS | postgres.maxConns | Maximum number of connections in the pool. | 10 |
| Redis Address | REDIS_ADDR | redis.addr | The address for the Redis server. | localhost:6379 |
| Kafka Brokers | KAFKA_BROKERS | kafka.brokers | A comma-separated list of Kafka broker addresses. | localhost:9092 |
| Jaeger URL | TRACING_JAEGERURL | tracing.jaegerUrl | The URL for the Jaeger collector endpoint. | http://localhost:14268/api/traces |

Table 2: Configuration Variable Reference. This table provides a quick reference for key configuration parameters.
5.2. High-Fidelity Logging with slog
Effective logging is non-negotiable in production. Logs must be structured (e.g., in JSON format) to be machine-readable for log aggregation and analysis tools.62 Go 1.21 introduced the standard library log/slog package, which provides fast, structured, and leveled logging capabilities.64
A key practice for observability is contextual logging: enriching log entries with request-specific data like a trace_id or user_id.66 This allows for easy filtering and correlation of all log messages related to a single request. We will achieve this by creating a request-scoped logger and injecting it into the http.Request context via middleware.
Implementation (/internal/middleware/logger.go):
Go
// internal/middleware/logger.go
package middleware
import (
"log/slog"
"net/http"
"time"
"github.com/go-chi/chi/v5/middleware"
)
// CtxKey is a custom type for context keys to avoid collisions.
type CtxKey string
const LoggerKey CtxKey = "logger"
func Logger(logger *slog.Logger) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Get request ID from context (set by an upstream middleware)
requestID := middleware.GetReqID(r.Context())
// Create a request-scoped logger with the request ID
requestLogger := logger.With(slog.String("request_id", requestID))
// Add the logger to the request context
ctx := context.WithValue(r.Context(), LoggerKey, requestLogger)
// Log the request
requestLogger.Info("request started",
slog.String("method", r.Method),
slog.String("path", r.URL.Path),
)
// Use a response writer wrapper to capture status code
ww := middleware.NewWrapResponseWriter(w, r.ProtoMajor)
start := time.Now()
defer func() {
requestLogger.Info("request completed",
slog.Duration("duration", time.Since(start)),
slog.Int("status_code", ww.Status()),
slog.Int("bytes_written", ww.BytesWritten()),
)
}()
next.ServeHTTP(ww, r.WithContext(ctx))
})
}
}
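A small companion helper in the same middleware package (an assumption, not shown in the original) lets handlers and usecases retrieve the request-scoped logger, falling back to the default logger if the middleware did not run:
Go
// LoggerFromContext returns the request-scoped logger stored by the Logger
// middleware, or slog.Default() if none is present.
func LoggerFromContext(ctx context.Context) *slog.Logger {
	if l, ok := ctx.Value(LoggerKey).(*slog.Logger); ok {
		return l
	}
	return slog.Default()
}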
5.3. Monitoring with Prometheus
Prometheus is the de-facto standard for metrics-based monitoring in cloud-native environments. Go applications can expose metrics in the Prometheus format via a standard /metrics HTTP endpoint. We will use the official prometheus/client_golang library.67
We will create a custom middleware to instrument our HTTP handlers and collect two key metrics:
http_requests_total: A CounterVec that counts the total number of HTTP requests, labeled by method, path, and status code.68
http_request_duration_seconds: A HistogramVec that tracks the latency distribution of requests, labeled by method and path.70 Histograms are more powerful than simple gauges or summaries for latency, as they allow for server-side aggregation and the calculation of arbitrary quantiles (e.g., p95, p99).71
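For example, the p95 latency per route can later be computed in Prometheus with a query along the lines of histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, path)), rather than being fixed at instrumentation time.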
Implementation (/internal/middleware/metrics.go):
Go
// internal/middleware/metrics.go
package middleware
import (
"net/http"
"strconv"
"time"
"github.com/go-chi/chi/v5"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
)
var (
httpRequestsTotal = promauto.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "Total number of HTTP requests.",
},
string{"code", "method", "path"},
)
httpRequestDuration = promauto.NewHistogramVec(
prometheus.HistogramOpts{
Name: "http_request_duration_seconds",
Help: "Duration of HTTP requests.",
Buckets: prometheus.DefBuckets, // Default buckets: .005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10
},
string{"method", "path"},
)
)
func PrometheusMetrics(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
start := time.Now()
// Use a response writer wrapper to capture status code
ww := middleware.NewWrapResponseWriter(w, r.ProtoMajor)
next.ServeHTTP(ww, r)
duration := time.Since(start)
// Get the route pattern for consistent labeling
routePattern := chi.RouteContext(r.Context()).RoutePattern()
if routePattern == "" {
routePattern = "unknown"
}
// Record metrics
httpRequestsTotal.WithLabelValues(strconv.Itoa(ww.Status()), r.Method, routePattern).Inc()
httpRequestDuration.WithLabelValues(r.Method, routePattern).Observe(duration.Seconds())
})
}
This middleware, along with the promhttp.Handler() served at /metrics, provides crucial visibility into the application's performance.
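Registering the endpoint in main.go is a one-liner (a sketch, assuming the router set up in Section 4):
Go
import "github.com/prometheus/client_golang/prometheus/promhttp"

// Expose application and Go runtime metrics for Prometheus to scrape.
router.Handle("/metrics", promhttp.Handler())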
| Endpoint | Description |
|---|---|
| /metrics | Exposes application and Go runtime metrics in Prometheus format. |
| /debug/pprof/* | Provides Go runtime profiling data (CPU, heap, goroutines). Enabled by importing net/http/pprof. |
| /health/live | Liveness probe endpoint. Returns 200 OK if the server is running. |
| /health/ready | Readiness probe endpoint. Returns 200 OK if the server is ready to accept traffic (e.g., DB connected). |
| /swagger/index.html | Serves the interactive Swagger UI for API documentation. |

Table 3: Observability Endpoints Summary. A reference for all non-application endpoints.
5.4. Distributed Tracing with OpenTelemetry and Jaeger
In a microservices architecture, a single user request can traverse multiple services. Distributed tracing is essential for understanding this end-to-end flow, debugging latency issues, and identifying bottlenecks. OpenTelemetry (OTel) has emerged as the industry standard for generating and propagating traces.73
We will instrument our application to send traces to Jaeger, a popular open-source tracing backend.75 Context propagation is the core mechanism that makes this work: a unique trace_id is passed between services, typically in HTTP headers, allowing spans from different services to be linked together into a single trace.77
We will use the otelchi middleware (github.com/riandyrn/otelchi), which automatically creates spans for incoming requests and extracts trace context from headers.80
Implementation (/cmd/server/main.go snippet for tracing setup):
Go
// cmd/server/main.go
//...
import (
"go.opentelemetry.io/otel"
"go.opentelemetry.io/otel/exporters/jaeger"
"go.opentelemetry.io/otel/propagation"
"go.opentelemetry.io/otel/sdk/resource"
tracesdk "go.opentelemetry.io/otel/sdk/trace"
semconv "go.opentelemetry.io/otel/semconv/v1.21.0"
"github.com/riandyrn/otelchi"
)
func initTracer(jaegerURL, serviceName string) (*tracesdk.TracerProvider, error) {
exporter, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(jaegerURL)))
if err != nil {
return nil, err
}
tp := tracesdk.NewTracerProvider(
tracesdk.WithBatcher(exporter),
tracesdk.WithResource(resource.NewWithAttributes(
semconv.SchemaURL,
semconv.ServiceNameKey.String(serviceName),
)),
)
otel.SetTracerProvider(tp)
otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{}))
return tp, nil
}
// In main():
//...
tp, err := initTracer(cfg.Tracing.JaegerURL, "my-product-service")
if err != nil {
logger.Error("Failed to initialize tracer", "error", err)
os.Exit(1)
}
defer func() {
if err := tp.Shutdown(context.Background()); err != nil {
logger.Error("Error shutting down tracer provider", "error", err)
}
}()
// In router setup:
router.Use(otelchi.Middleware("my-server", otelchi.WithChiRoutes(router)))
//...
With this setup, the trace context will be automatically propagated through the request context. When we make calls to the database with pgx or the external API with retryablehttp, their respective OTel instrumentations will pick up this context and create child spans, giving us a complete end-to-end view of the request.
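As one hedged example of how an outgoing client can participate in a trace, the retryablehttp client's underlying transport can be wrapped with the otelhttp instrumentation from go.opentelemetry.io/contrib (a sketch; the pgx side would be instrumented analogously through its tracer options):
Go
import (
	"net/http"

	"github.com/hashicorp/go-retryablehttp"
	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

// newTracedRetryClient returns a retryable HTTP client whose outgoing requests
// emit child spans using the globally configured tracer provider.
func newTracedRetryClient() *retryablehttp.Client {
	client := retryablehttp.NewClient()
	client.HTTPClient.Transport = otelhttp.NewTransport(http.DefaultTransport)
	return client
}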
Section 6: Ensuring Robustness and Reliability
Production-grade applications must be resilient to failure and predictable in their behavior. This section covers three critical aspects of robustness: a structured error handling strategy that provides clear signals to callers, a graceful shutdown mechanism that prevents data loss during deployments, and health check endpoints that enable automated systems like Kubernetes to manage the application's lifecycle.
6.1. A Pragmatic Error Handling Strategy
Go's error handling philosophy treats errors as values, which encourages explicit error checking and leads to more reliable software.19 A robust error handling strategy involves more than just checking if err != nil. It requires a system for adding context to errors as they propagate and for mapping internal application errors to meaningful responses for the client.
Our strategy involves three key practices:
Defining Sentinel Errors in the Domain: As established in Section 1, core business errors (e.g., domain.ErrNotFound) are defined as sentinel values in the domain layer.19
Wrapping Errors for Context: As an error crosses an architectural boundary (e.g., from repository to usecase), it should be wrapped to add context. This creates a chain of errors that provides a stack-trace-like narrative of what went wrong, which is invaluable for debugging.82 We will use fmt.Errorf with the %w verb for this.
Mapping Errors to HTTP Status Codes in the Delivery Layer: The HTTP handler is the only layer that should have knowledge of HTTP status codes. It is responsible for inspecting the returned error chain (using errors.Is and errors.As) and translating domain-specific errors into the appropriate HTTP response.84
Example Error Handling Flow:
Repository Layer (/internal/repository/postgres/product_repo.go):
Go
//... inside GetByID
if err != nil {
	if errors.Is(err, pgx.ErrNoRows) {
		return nil, domain.ErrNotFound // Return a known domain error
	}
	return nil, fmt.Errorf("postgres.GetByID: %w", err) // Wrap with context
}
Usecase Layer (/internal/usecase/product_uc.go):
Go
//... inside GetProductByID
product, err := uc.repo.GetByID(ctx, id)
if err != nil {
	return nil, fmt.Errorf("productUsecase.GetProductByID: %w", err) // Wrap again
}
Delivery Layer (/internal/delivery/http/product_handler.go):
Go
// Helper function in the handler
func (h *ProductHandler) handleError(w http.ResponseWriter, r *http.Request, err error) {
	// Log the full error chain for debugging
	h.logger.Error("request error", "error", err)
	// Check for specific domain errors to map to HTTP status codes
	if errors.Is(err, domain.ErrNotFound) {
		h.respondWithError(w, r, http.StatusNotFound, "Product not found")
		return
	}
	if errors.Is(err, domain.ErrInvalidPrice) || errors.Is(err, domain.ErrInvalidStock) {
		h.respondWithError(w, r, http.StatusBadRequest, err.Error())
		return
	}
	// Default to a 500 Internal Server Error
	h.respondWithError(w, r, http.StatusInternalServerError, "An unexpected error occurred")
}
This approach ensures that errors are handled gracefully, with detailed logs for developers and clear, appropriate responses for clients, without leaking internal implementation details.
6.2. Graceful Shutdown
When an application instance is terminated (e.g., during a deployment or scaling event), it must shut down gracefully to prevent interrupting in-flight requests and avoid data corruption.86 A graceful shutdown process typically involves:
Stopping the acceptance of new incoming requests.
Waiting for all active requests to complete, up to a certain timeout.
Closing all external resources, such as database connection pools and message broker connections.
We will implement a shutdown manager that listens for the SIGINT (Ctrl+C) and SIGTERM (the default signal sent by Docker and Kubernetes) operating system signals.88
Implementation (/cmd/server/main.go snippet):
Go
// cmd/server/main.go
func main() {
//... (initialization of logger, config, db, etc.)
server := &http.Server{
Addr: cfg.Server.Port,
Handler: router,
}
// Channel to listen for OS signals
stopChan := make(chan os.Signal, 1)
signal.Notify(stopChan, syscall.SIGINT, syscall.SIGTERM)
// Run the server in a separate goroutine
go func() {
logger.Info("Server is listening on port", "port", cfg.Server.Port)
if err := server.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
logger.Error("Server error", "error", err)
os.Exit(1)
}
}()
// Block until a signal is received
sig := <-stopChan
logger.Info("Shutdown signal received", "signal", sig)
// Create a context with a timeout for the shutdown
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
// Gracefully shutdown the HTTP server
if err := server.Shutdown(shutdownCtx); err != nil {
logger.Error("HTTP server shutdown error", "error", err)
}
// Close other resources
logger.Info("Closing database connection pool...")
dbPool.Close() // pgxpool's Close waits for all connections to be returned to the pool
logger.Info("Closing Kafka producer...")
if kafkaPublisher != nil {
if err := kafkaPublisher.Close(); err != nil {
logger.Error("Kafka publisher close error", "error", err)
}
}
logger.Info("Server gracefully stopped")
}
6.3. Health Check Endpoints
In modern container orchestration systems like Kubernetes, health checks are essential for automating service management. Two types of probes are standard:
Liveness Probe (/health/live): Checks if the application is running. If this probe fails, the orchestrator will restart the container. A simple "200 OK" response is usually sufficient.
Readiness Probe (/health/ready): Checks if the application is ready to handle traffic. If this probe fails, the orchestrator will remove the container from the load balancer's pool. This is useful for checking dependencies, like the database connection.
Implementation (/internal/delivery/http/health_handler.go):
Go
// internal/delivery/http/health_handler.go
package http
import (
"net/http"
"github.com/jackc/pgx/v5/pgxpool"
)
type HealthHandler struct {
db *pgxpool.Pool
}
func NewHealthHandler(db *pgxpool.Pool) *HealthHandler {
return &HealthHandler{db: db}
}
// Live handles the liveness probe. It simply returns 200 OK.
func (h *HealthHandler) Live(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Write([]byte("OK"))
}
// Ready handles the readiness probe. It checks the connection to the database.
func (h *HealthHandler) Ready(w http.ResponseWriter, r *http.Request) {
if err := h.db.Ping(r.Context()); err != nil {
http.Error(w, "Database not ready", http.StatusServiceUnavailable)
return
}
w.WriteHeader(http.StatusOK)
w.Write([]byte("OK"))
}
// In router setup:
healthHandler := delivery.NewHealthHandler(dbPool)
router.Get("/health/live", healthHandler.Live)
router.Get("/health/ready", healthHandler.Ready)
These simple endpoints provide powerful hooks for automated systems to ensure the application is both running and capable of serving requests correctly.
Section 7: A Comprehensive Testing Strategy
A robust testing strategy is essential for building reliable software. It provides confidence that the code behaves as expected and allows for safe refactoring. In line with the principles of Clean Architecture, our testing strategy will be layered, focusing on unit tests for core business logic and integration tests for infrastructure components.
7.1. Unit Testing the Core Logic
Unit tests should be fast, isolated, and focused on a single unit of functionality. The decoupled nature of our usecase layer makes it perfectly suited for unit testing. By depending on interfaces rather than concrete implementations, we can easily mock the repository layer to test the business logic in isolation.11
We will use the standard testing package along with the testify suite, specifically testify/assert for fluent assertions and testify/mock for creating mock objects.89 The mockery tool can be used to auto-generate mock implementations of our interfaces, removing boilerplate code.
Example Unit Test for ProductUsecase (/internal/usecase/product_uc_test.go):
First, we generate a mock for our ProductRepository interface using mockery:
Bash
# mockery --dir=internal/usecase --name=ProductRepository --output=internal/usecase/mocks --outpkg=mocks
This command creates a mocks directory with a ProductRepository.go file containing a mock implementation.
Go
// internal/usecase/product_uc_test.go
package usecase_test
import (
"context"
"errors"
"testing"
"time"
"your_project/internal/domain"
"your_project/internal/usecase"
"your_project/internal/usecase/mocks"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
)
func TestProductUsecase_GetProductByID(t *testing.T) {
mockRepo := new(mocks.ProductRepository)
timeout := 2 * time.Second
uc := usecase.NewProductUsecase(mockRepo, timeout)
t.Run("success", func(t *testing.T) {
// Setup mock response
mockProduct := &domain.Product{
ID: 1,
Name: "Test Product",
Price: 99.99,
Stock: 10,
}
mockRepo.On("GetByID", mock.Anything, int64(1)).Return(mockProduct, nil).Once()
// Call the usecase method
product, err := uc.GetProductByID(context.Background(), 1)
// Assertions
assert.NoError(t, err)
assert.NotNil(t, product)
assert.Equal(t, "Test Product", product.Name)
mockRepo.AssertExpectations(t) // Verify that the mock was called as expected
})
t.Run("not found", func(t *testing.T) {
// Setup mock response for a not found error
mockRepo.On("GetByID", mock.Anything, int64(2)).Return(nil, domain.ErrNotFound).Once()
// Call the usecase method
product, err := uc.GetProductByID(context.Background(), 2)
// Assertions
assert.Error(t, err)
assert.Nil(t, product)
assert.True(t, errors.Is(err, domain.ErrNotFound))
mockRepo.AssertExpectations(t)
})
}
This test validates the usecase's behavior under both success and failure conditions without ever touching a real database, making it extremely fast and reliable.90
7.2. Integration Testing the Infrastructure
While unit tests are essential for business logic, they cannot verify that our infrastructure code—like the PostgreSQL repository—works correctly with the actual external system. Integration tests fill this gap. They are slower and more complex than unit tests but are crucial for validating the interaction between our application and its dependencies.
To run integration tests in a clean, reproducible, and isolated manner, we will use testcontainers-go. This library allows us to programmatically start and stop Docker containers (in this case, a PostgreSQL container) as part of our test suite.92 This ensures that our tests run against a real, ephemeral database instance, providing high confidence that our SQL queries and data mapping logic are correct.
Example Integration Test for PostgresProductRepository (/internal/repository/postgres/product_repo_test.go):
Go
// internal/repository/postgres/product_repo_test.go
package postgres_test

import (
	"context"
	"errors"
	"testing"
	"time"

	"your_project/internal/domain"
	"your_project/internal/repository/postgres"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/stretchr/testify/assert"
	"github.com/testcontainers/testcontainers-go"
	tcpostgres "github.com/testcontainers/testcontainers-go/modules/postgres"
	"github.com/testcontainers/testcontainers-go/wait"
)

func setupTestDatabase(t *testing.T) *pgxpool.Pool {
	ctx := context.Background()

	// The testcontainers postgres module is aliased to avoid clashing with our own postgres package.
	pgContainer, err := tcpostgres.RunContainer(ctx,
		testcontainers.WithImage("postgres:15-alpine"),
		tcpostgres.WithDatabase("test-db"),
		tcpostgres.WithUsername("user"),
		tcpostgres.WithPassword("password"),
		testcontainers.WithWaitStrategy(
			wait.ForLog("database system is ready to accept connections").
				WithOccurrence(2).
				WithStartupTimeout(5*time.Second),
		),
	)
	if err != nil {
		t.Fatalf("failed to start postgres container: %s", err)
	}
	t.Cleanup(func() {
		if err := pgContainer.Terminate(ctx); err != nil {
			t.Fatalf("failed to terminate postgres container: %s", err)
		}
	})

	connStr, err := pgContainer.ConnectionString(ctx, "sslmode=disable")
	if err != nil {
		t.Fatalf("failed to get connection string: %s", err)
	}

	pool, err := pgxpool.New(ctx, connStr)
	if err != nil {
		t.Fatalf("failed to connect to test database: %s", err)
	}

	// Apply migrations
	migrationQuery := `
	CREATE TABLE products (
		id BIGSERIAL PRIMARY KEY,
		name TEXT NOT NULL,
		description TEXT,
		price NUMERIC(10, 2) NOT NULL,
		stock INTEGER NOT NULL,
		created_at TIMESTAMPTZ NOT NULL,
		updated_at TIMESTAMPTZ NOT NULL
	);`
	_, err = pool.Exec(ctx, migrationQuery)
	if err != nil {
		t.Fatalf("failed to run migrations: %s", err)
	}

	return pool
}

func TestPostgresProductRepository_Integration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode.")
	}

	pool := setupTestDatabase(t)
	repo := postgres.NewPostgresProductRepository(pool)
	ctx := context.Background()

	t.Run("Create and Get Product", func(t *testing.T) {
		// Create
		newProduct, _ := domain.NewProduct("Laptop", "A powerful laptop", 1500.00, 50)
		id, err := repo.Create(ctx, newProduct)
		assert.NoError(t, err)
		assert.NotZero(t, id)

		// Get
		retrievedProduct, err := repo.GetByID(ctx, id)
		assert.NoError(t, err)
		assert.NotNil(t, retrievedProduct)
		assert.Equal(t, "Laptop", retrievedProduct.Name)
		assert.Equal(t, 1500.00, retrievedProduct.Price)
	})

	t.Run("Get Non-Existent Product", func(t *testing.T) {
		_, err := repo.GetByID(ctx, 999)
		assert.Error(t, err)
		assert.True(t, errors.Is(err, domain.ErrNotFound))
	})
}
This test provides a high degree of confidence that the repository layer is functioning correctly, from the SQL syntax to the mapping of database rows to Go structs.
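As the integration suite grows, starting a fresh container for every test function becomes the dominant cost. A common refinement, shown here only as an optional variation on the setup above, is to start a single container in TestMain and share the connection pool across the package. The startTestDatabase helper is hypothetical shorthand for the container startup, connection, and migration logic already shown in setupTestDatabase.
Go
// internal/repository/postgres/main_test.go (optional variation, sketch only)
package postgres_test

import (
	"context"
	"os"
	"testing"

	"github.com/jackc/pgx/v5/pgxpool"
)

// testPool is shared by every test in this package.
var testPool *pgxpool.Pool

func TestMain(m *testing.M) {
	ctx := context.Background()

	// startTestDatabase is a hypothetical helper extracted from setupTestDatabase above;
	// it returns the pool plus a function that terminates the container.
	pool, terminate, err := startTestDatabase(ctx)
	if err != nil {
		// t.Fatal is unavailable here; fail the whole package instead.
		os.Exit(1)
	}
	testPool = pool

	code := m.Run()

	testPool.Close()
	terminate()
	os.Exit(code)
}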
Section 8: Automation and Deployment Pipeline
A robust and efficient development workflow relies heavily on automation. This section details the creation of a comprehensive toolchain to build, test, containerize, and deploy the application. This includes a Makefile for common developer tasks, a multi-stage Dockerfile for creating optimized production images, a docker-compose.yml file for orchestrating a local development environment, and a complete Continuous Integration (CI) pipeline using GitHub Actions.
8.1. The Developer's Toolkit: Makefile
A Makefile serves as a command-line entry point for automating repetitive development tasks, ensuring consistency across the team.93 It provides short, memorable aliases for longer, more complex commands related to building, testing, linting, and running the application stack.95
Implementation (Makefile):
Makefile
# Makefile for the Go Production-Ready Service
# Variables
BINARY_NAME=go-service
DOCKER_IMAGE_NAME=your-docker-repo/$(BINARY_NAME)
DOCKER_TAG=latest
.PHONY: help build run test lint clean docker-build docker-run docker-stop swag
help: ## Display this help screen
	@awk 'BEGIN {FS = ":.*##"; printf "Usage:\n  make \033[36m<target>\033[0m\n"} /^[a-zA-Z_-]+:.*?##/ { printf "  \033[36m%-15s\033[0m %s\n", $$1, $$2 }' $(MAKEFILE_LIST)
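The help target documents every target annotated with a trailing ## comment. The remaining targets declared in .PHONY follow the same pattern; the implementations below are a plausible sketch, assuming go, golangci-lint, swag, and docker compose are the tools in use, rather than the template's exact commands.
Makefile
build: ## Build the service binary
	go build -o bin/$(BINARY_NAME) ./cmd/server

run: ## Run the service locally
	go run ./cmd/server

test: ## Run unit and integration tests
	go test -race ./...

lint: ## Run golangci-lint
	golangci-lint run ./...

clean: ## Remove build artifacts
	rm -rf bin/

docker-build: ## Build the production Docker image
	docker build -t $(DOCKER_IMAGE_NAME):$(DOCKER_TAG) .

docker-run: ## Start the local stack with docker compose
	docker compose up -d --build

docker-stop: ## Stop the local stack
	docker compose down

swag: ## Regenerate Swagger/OpenAPI documentation
	swag init -g cmd/server/main.go -o docs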
8.2. Containerization with a Multi-Stage Dockerfile
To produce a small, secure production image, the Dockerfile is split into two stages.
Build Stage: This stage uses a full Go build image (e.g., golang:1.21-alpine) to compile the application. It copies the source code, downloads dependencies, and builds a statically linked binary.97
Final Stage: This stage starts from a minimal base image like scratch or alpine. The scratch image is an empty image, providing the smallest possible attack surface.98 We copy only the compiled binary and necessary assets (like CA certificates for TLS) from the build stage into this final image. The result is a highly optimized image that contains only what is necessary to run the application.99, 100
Implementation (Dockerfile):
Dockerfile
# Stage 1: Build the application
FROM golang:1.21-alpine AS builder
WORKDIR /app
# Copy go.mod and go.sum to leverage Docker cache
COPY go.mod go.sum ./
RUN go mod download
# Copy the rest of the application source code
COPY . .
# Build the application as a statically linked binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o /app/main ./cmd/server
# Stage 2: Create the final, minimal production image
FROM scratch
# Copy the compiled binary from the builder stage
COPY --from=builder /app/main /main
# Copy SSL certificates
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
# Copy the configuration file
COPY config/config.yml /config/config.yml
# Expose the port the server runs on
EXPOSE 8080
# Set the entrypoint for the container
ENTRYPOINT ["/main"]
8.3. Local Environment with docker-compose
For local development and testing, docker-compose is an invaluable tool for orchestrating the multi-container application stack. It allows a developer to spin up the entire environment—the Go application, PostgreSQL, Redis, Kafka, and Jaeger—with a single command.101
Implementation (docker-compose.yml):
YAML
version: '3.8'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - postgres
      - redis
      - kafka
    environment:
      - POSTGRES_URL=postgres://user:password@postgres:5432/mydatabase?sslmode=disable
      - REDIS_ADDR=redis:6379
      # Use the internal PLAINTEXT listener advertised as kafka:29092 below.
      - KAFKA_BROKERS=kafka:29092
      - TRACING_JAEGERURL=http://jaeger:14268/api/traces

  postgres:
    image: postgres:15-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=mydatabase
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  jaeger:
    image: jaegertracing/all-in-one:1.41
    ports:
      - "16686:16686" # Jaeger UI
      - "14268:14268" # Collector

volumes:
  postgres_data:
8.4. Continuous Integration with GitHub Actions
A CI pipeline automates the process of testing and building the application, ensuring code quality and consistency. We will create a GitHub Actions workflow that triggers on every pull request and on every push to the main branch. This pipeline will 103:
Check out the code.
Set up the Go environment and cache dependencies.
Run the linter (golangci-lint) to enforce code standards.104
Run all unit and integration tests.
Build the production Docker image.
(Optional but recommended) Push the built image to a container registry like GitHub Container Registry (GHCR) or Docker Hub.106
Implementation (.github/workflows/ci.yml):
YAML
name: Go CI Pipeline

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: '1.21'
          cache: true
      - name: Run linter
        uses: golangci/golangci-lint-action@v3
        with:
          version: v1.55.2
      - name: Run tests
        run: make test

  build-docker:
    needs: lint-and-test
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
This workflow ensures that every change is automatically validated, providing rapid feedback to developers and maintaining a high standard of code quality before merging to the main branch.
Conclusions
This report has detailed the construction of a production-ready Golang microservice, adhering to the principles of Clean Architecture. The resulting template provides a robust, scalable, and maintainable foundation for building modern, cloud-native applications.
The key takeaways from this architectural blueprint are:
Clean Architecture is Achievable and Idiomatic in Go: By leveraging Go's implicit interfaces and the compiler-enforced privacy of the internal package, it is possible to implement a clean, decoupled architecture without the verbosity or complexity often associated with patterns from other language ecosystems. The core principle of Dependency Inversion is naturally supported, allowing for a clear separation between business logic and infrastructure concerns.
A Layered Approach to Infrastructure is Key: The Repository pattern should be viewed not just as a database abstraction, but as a universal pattern for interacting with any external system. Applying this pattern consistently to databases, caches, message brokers, and external APIs creates a uniform and predictable infrastructure layer, simplifying development and testing.
Production Readiness is a First-Class Concern: Features like configuration management, structured logging, metrics, tracing, and graceful shutdown are not afterthoughts. They must be designed into the application from the beginning. Integrating tools like Viper, slog, Prometheus, and OpenTelemetry provides the necessary observability to operate and debug the service effectively in a production environment.
Automation is the Foundation of Reliability: A comprehensive set of automated tools, including a Makefile for local development, a multi-stage Dockerfile for secure containerization, and a CI/CD pipeline for continuous validation, is critical. This automation enforces quality, ensures consistency, and accelerates the development lifecycle.
By adopting the principles and patterns outlined in this report, development teams can bootstrap new Go services with a strong architectural foundation. This template is not a rigid mandate but a flexible blueprint designed to be adapted. It encourages best practices that lead to software that is not only functional and performant but also resilient, observable, and a pleasure to maintain over its entire lifecycle.