RoboTF AI @UCHkAlM1-h5kvInnaCpTPxSg@youtube.com

1.3K subscribers

Just another engineer fooling around in a GPU-powered Kubernetes…


[20:31] Mistral 7B LLM AI Leaderboard: Apple M1 Max MacBook Pro uses Metal to take on GPUs for their spot!
[16:30] Mistral 7B LLM AI Leaderboard: i5-12450H 32GB DDR4 Mini PC takes on the leaderboard?
[19:59] Halloween Stories via Streamlit, Langchain, Python, and LocalAI (or OpenAI) with Text to Speech!
[21:15] Mistral 7B LLM AI Leaderboard: Nvidia RTX A4500 GPU 20GB Where does prosumer/enterprise land?
[15:29] LocalAI LLM Tuning: WTH is Flash Attention? What are the effects on memory and performance? Llama3.2
[17:00] Mistral 7B LLM AI Leaderboard: Unboxing an Nvidia RTX 4090 Windforce 24GB Can it break 100 TPS?
[15:36] Mistral 7B LLM AI Leaderboard: The King of the Leaderboard? Nvidia RTX 3090 Vision 24GB throw down!
[19:27] Mistral 7B LLM AI Leaderboard: Unboxing an Nvidia RTX 4070Ti Super 16GB and giving it a run!
[16:46] Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia RTX 4060Ti 16GB
[26:49] Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia Tesla M40 24GB
[25:55] Mistral 7B LLM AI Leaderboard: GPU Contender Nvidia GTX 1660
[34:32] Mistral 7B LLM AI Leaderboard: Rules of Engagement and first GPU contender Nvidia Quadro P2000
[36:12] Mistral 7B LLM AI Leaderboard: Baseline Testing Q3, Q4, Q5, Q6, Q8, and FP16 CPU Inference i9-9820X
[12:09] Mistral 7B LLM AI Leaderboard: Baseline Testing Q3 CPU Inference i9-9820X
[18:40] LocalAI LLM Testing: Part 2 Network Distributed Inference Llama 3.1 405B Q2 in the Lab!
[46:24] LocalAI LLM Testing: Distributed Inference on a network? Llama 3.1 70B on Multi GPUs/Multiple Nodes
[36:05] LocalAI LLM Testing: Llama 3.1 8B Q8 Showdown - M40 24GB vs 4060Ti 16GB vs A4500 20GB vs 3090 24GB
[21:40] LocalAI LLM Testing: How many 16GB 4060Ti's does it take to run Llama 3 70B Q4?
[10:22] LocalAI LLM Testing: Can 6 Nvidia A4500's Take on the WizardLM 2 8x22b?
[32:27] LocalAI LLM Testing: Viewer Questions using mixed GPUs, and what is Tensor Splitting AI lab session
[00:25] What's on the RoboTF-AI Workbench Today?
[31:14] LocalAI LLM Testing: i9 CPU vs Tesla M40 vs 4060Ti vs A4500
[32:00] LocalAI Testing: Viewer Question LLM context size, & quant testing with 2x 4060Ti's 16GB VRAM
[20:31] LocalAI LLM Single vs Multi GPU Testing scaling to 6x 4060Ti 16GB GPUs