LLM bots + Next.js image optimization = recipe for bankruptcy (post-mortem) | Metacast Blog — 0 | 22 | 15 April 2025
Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs — 0 | 25 | 28 March 2025
Open Euro LLM — 0 | 7 | 27 February 2025
My LLM codegen workflow atm — 0 | 27 | 24 February 2025
Thinking Elixir 239 - Scaling to Unicorn Status — 0 | 17 | 4 February 2025
DeepSeek - the free, open source “ChatGPT killer” — 6 | 63 | 3 February 2025
OpenAI o3-mini, now available in LLM — 0 | 10 | 1 February 2025
Offline Reinforcement Learning for LLM Multi-Step Reasoning — 0 | 15 | 23 December 2024
Cultural Evolution of Cooperation among LLM Agents — 0 | 5 | 19 December 2024
Rethinking LLM Inference: Why Developer AI Needs a Different Approach — 0 | 17 | 8 December 2024
GitHub - NVIDIA/garak: the LLM vulnerability scanner — 0 | 17 | 17 November 2024
Thinking Elixir 228 - From Surveys to Cheat Sheets — 0 | 9 | 12 November 2024
Does your LLM truly unlearn? An embarrassingly simple approach to recover unlearned knowledge — 0 | 7 | 5 November 2024
Use Prolog to improve LLM's reasoning — 0 | 39 | 18 October 2024
Lm.rs: Minimal CPU LLM inference in Rust with no dependency — 0 | 16 | 13 October 2024
Building LLM-powered applications in Go — 0 | 11 | 12 September 2024
A Visual Guide to LLM Quantization — 0 | 3 | 30 July 2024
Zine on LLM Evals — 0 | 21 | 13 July 2024
How to think about creating a dataset for LLM fine-tuning evaluation — 0 | 118 | 27 June 2024
How to run an LLM on your PC, not in the cloud, in less than 10 minutes — 0 | 86 | 24 June 2024
Top Libraries to Accelerate LLM Building — 0 | 69 | 24 June 2024
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference — 0 | 149 | 14 June 2024
Qwen2 LLM Released — 0 | 103 | 7 June 2024
Citation Needed – Wikimedia Foundation's Experimental LLM/RAG Chrome Extension — 0 | 104 | 12 May 2024
ScrapeGraphAI: Web scraping using LLM and direct graph logic — 0 | 203 | 8 May 2024
DRINK ME: (Ab)Using a LLM to compress text — 0 | 164 | 3 May 2024
Maxtext: A simple, performant and scalable Jax LLM — 0 | 123 | 24 April 2024
LLM in a flash: Efficient Large Language Model Inference with Limited Memory — 0 | 111 | 23 April 2024
Your LLM Is a Capable Regressor When Given In-Context Examples — 0 | 126 | 13 April 2024
Implementation of Google's Griffin Architecture – RNN LLM — 0 | 175 | 11 April 2024