Cultural Evolution of Cooperation among LLM Agents | 0 | 4 | 19 December 2024
Rethinking LLM Inference: Why Developer AI Needs a Different Approach | 0 | 16 | 8 December 2024
GitHub - NVIDIA/garak: the LLM vulnerability scanner | 0 | 10 | 17 November 2024
Thinking Elixir 228 - From Surveys to Cheat Sheets | 0 | 9 | 12 November 2024
Does your LLM truly unlearn? An embarrassingly simple approach to recover unlearned knowledge | 0 | 6 | 5 November 2024
Use Prolog to improve LLM's reasoning | 0 | 35 | 18 October 2024
Lm.rs: Minimal CPU LLM inference in Rust with no dependency | 0 | 16 | 13 October 2024
Building LLM-powered applications in Go | 0 | 9 | 12 September 2024
A Visual Guide to LLM Quantization | 0 | 2 | 30 July 2024
Zine on LLM Evals | 0 | 21 | 13 July 2024
How to think about creating a dataset for LLM fine-tuning evaluation | 0 | 118 | 27 June 2024
How to run an LLM on your PC, not in the cloud, in less than 10 minutes | 0 | 86 | 24 June 2024
Top Libraries to Accelerate LLM Building | 0 | 69 | 24 June 2024
AMD's MI300X Outperforms Nvidia's H100 for LLM Inference | 0 | 147 | 14 June 2024
Qwen2 LLM Released | 0 | 103 | 7 June 2024
Citation Needed – Wikimedia Foundation's Experimental LLM/RAG Chrome Extension | 0 | 103 | 12 May 2024
ScrapeGraphAI: Web scraping using LLM and direct graph logic | 0 | 202 | 8 May 2024
DRINK ME: (Ab)Using a LLM to compress text | 0 | 163 | 3 May 2024
Maxtext: A simple, performant and scalable Jax LLM | 0 | 121 | 24 April 2024
LLM in a flash: Efficient Large Language Model Inference with Limited Memory | 0 | 111 | 23 April 2024
Your LLM Is a Capable Regressor When Given In-Context Examples | 0 | 125 | 13 April 2024
Implementation of Google's Griffin Architecture – RNN LLM | 0 | 174 | 11 April 2024
Rule-based NLP system beats LLM for analysis of psychiatric clinical notes | 0 | 162 | 5 April 2024
PyTorch Library for Running LLM on Intel CPU and GPU | 0 | 225 | 3 April 2024
Easy at-home AI with Bumblebee and Fly GPUs | 0 | 161 | 2 April 2024
Can GPT optimize my taxes? An experiment in letting the LLM be the UX | 0 | 139 | 2 April 2024
LLM Paper on Mamba MoE: Jamba Technical Report from AI21 | 0 | 228 | 2 April 2024
MM1: Methods, Analysis and Insights from Multimodal LLM Pre-training | 0 | 174 | 18 March 2024
LLM inference speed of light | 0 | 172 | 17 March 2024
What would an LLM OS look like? | 0 | 187 | 15 March 2024