Rearchitecting LLMs (Manning)

Rearchitecting LLMs: Structural techniques for efficient models turns research from the latest AI papers into production-ready practices for domain-specific model optimization. As you work through this practical book, you’ll perform hands-on surgery on popular open-source models like Llama-3, Gemma, and Qwen to create cost-effective local Small Language Models (SLMs).

Pere Martra

The premise is simple: most general-purpose LLMs weren’t built for your domain, constraints, or budget. Instead of treating models as black boxes, this book walks through how to open them up and reshape them. Not at the prompt level, but structurally.

Pere goes deep into hands-on work with open-source models like Llama-3, Gemma, and Qwen, showing how to:

  • Remove parts of a model that don’t pull their weight

  • Use pruning and distillation in ways that actually survive contact with production (see the sketches below)

  • Combine behavioral analysis with architectural changes, instead of guessing

  • Build smaller, local SLMs that make sense for specific tasks

  • Apply “fair pruning” to reduce bias at the neuron level (this part surprised me)

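To give a flavor of the kind of surgery involved: the sketch below is mine, not the book's code. It uses PyTorch's built-in pruning utilities to zero out the smallest-magnitude weights in every linear layer, which is the crudest version of "removing parts that don't pull their weight". The book's behavioral analysis is what tells you where a cut like this is actually safe.

```python
# A minimal magnitude-pruning sketch (illustrative only, not the book's code).
# It zeroes the 30% smallest-magnitude weights in every nn.Linear layer.
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model: nn.Module, amount: float = 0.3) -> nn.Module:
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # L1-unstructured pruning: mask out the smallest weights in place.
            prune.l1_unstructured(module, name="weight", amount=amount)
            # Make the mask permanent so the module is a plain Linear again.
            prune.remove(module, "weight")
    return model

# Tiny demo on a toy model; the same call works on a loaded LLM's nn.Module.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
magnitude_prune(model, amount=0.3)
sparsity = (model[0].weight == 0).float().mean().item()
print(f"layer-0 sparsity after pruning: {sparsity:.0%}")
```
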
This is very much a keyboard-on-desk book. You’re not just reading about recent research papers—you’re translating that research into workflows you can run, test, and reason about. If you’ve ever wondered why a model is slow, expensive, or oddly confident about the wrong things, this book tries to answer that by showing you where to cut and where not to.
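
As one concrete example of "research you can run": the distillation half of that workflow often boils down to a single loss term. What follows is the standard Hinton-style temperature-scaled KD loss, assumed here for illustration rather than taken from the book, which pairs distillation with the structural changes above.

```python
# A minimal knowledge-distillation loss sketch (standard Hinton-style KD
# with temperature scaling; an assumption here, not the book's exact recipe).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Soften both distributions, then pull the student toward the teacher
    # with KL divergence, rescaled by T^2 to keep gradient magnitudes stable.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for teacher/student forward passes.
teacher = torch.randn(4, 50_000)                       # vocab-sized logits
student = torch.randn(4, 50_000, requires_grad=True)
loss = distillation_loss(student, teacher)
loss.backward()
print(f"KD loss: {loss.item():.4f}")
```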


Don’t forget you can get 45% off with your Devtalk discount! Just use the coupon code “devtalk.com” at checkout :+1: