
Rohan-Paul-AI @UC0_a8SNpTFkmVv5SLMs1CIA@youtube.com

13K subscribers

Follow me on 🐦 TWITTER: twitter.com/rohanpaul_ai - to rema


05:16
Differential Transformer - Google Illuminate Podcast
11:34
Intelligence at the Edge of Chaos - Google NotebookLM Podcast
05:08
LLMs + Persona-Plug = Personalized LLMs - Google Illuminate Podcast
05:51
Deep Learning for Options Trading: An End-To-End Approach - Google Illuminate Podcast
05:39
CogniDual Framework: Self-Training Large Language Models - Google Illuminate Podcast
04:04
N-gram Prediction and Word Difference Representations for Language Modeling - Google Illuminate Podcast
05:21
Patched MOA: optimizing inference for diverse software development tasks - Google Illuminate Podcast
06:18
RATIONALYST: Pre-training Process Supervision for Improving Reasoning - Google Illuminate Podcast
04:01
RecurrentGemma: Moving Past Transformers for Efficient Open LLM - Google Illuminate Podcast
07:52
RoleBreak: Character Hallucination as a Jailbreak Attack in Role-Playing - Google Illuminate Podcast
06:11
Selective Attention improves LLM performance across model sizes - Google Illuminate Podcast
06:18
SpotDiffusion: A Fast Approach For Seamless Panorama Generation - Google Illuminate Podcast
06:23
FAN: Fourier Analysis Network can replace MLP layers in various models - Google Illuminate Podcast
07:35
Platonic Representation, All LLMs are converging towards the same point 🤔 - Google Illuminate Podcast
04:40
Human-like Affective Cognition in Foundation Models - Google Illuminate Podcast
05:19
Exploring the Compositional Deficiency of LLMs in Mathematical Reasoning - Google Illuminate Podcast
04:57
Comprehensive Evaluation of Quantized Instruction-Tuned LLMs - Google Illuminate Podcast
05:55
Adaptive k-Nearest Neighbor Classifier Based on Local Shape Operator - Google Illuminate Podcast
06:46
Rejection Sampling IMLE: Better Few-Shot Image Synthesis - Google Illuminate Podcast
05:59
Were RNNs All We Needed? - Google Illuminate Podcast
14:06
Meta's answer to SORA 🎬 Movie Gen - Google NotebookLM Podcast
05:38
Trans-LoRA: data-free Transferable Parameter Efficient Finetuning - Google Illuminate Podcast
05:22
The Perfect Blend: Redefining RLHF with Mixture of Judges - Google Illuminate Podcast
05:22
MASSIVE Paper "ADDITION IS ALL YOU NEED" - Reduce energy costs by 95% - Google Illuminate Podcast
05:32
NVIDIA Paper - "MaskedMimic: Unified Physics-Based Character Control" - Google Illuminate Podcast
06:25
NVLM: Open Frontier-Class Multimodal LLMs - NVIDIA Paper - Google Illuminate Podcast
05:59
Archon: An Architecture Search Framework for Inference-Time Techniques - Google Illuminate Podcast
05:54
RAGProbe: An Automated Approach for Evaluating RAG - Google Illuminate Podcast
05:16
Paper - RED QUEEN: Safeguarding LLMs against Concealed Jailbreaking - Google Illuminate Podcast
10:13
The classic "The AI Scientist" Paper - Audio Podcast by Google NotebookLM
07:32
Training Large Language Models for Reasoning through Reverse Curriculum RL - Audio Podcast
06:18
Self-Taught Evaluators - Audio Podcast
06:17
MedPromptExtract (Medical Data Extraction Tool) - Audio Podcast
08:27
MASAI: Modular Architecture for Software-engineering AI agents - Audio Podcast
07:04
Imitating Language via Scalable Inverse Reinforcement Learning - Audio Podcast
06:30
GraphInstruct: Empowering LLMs with Graph Understanding and Reasoning Capability - Audio Podcast
07:52
Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Consistency - Audio Podcast
06:40
Adaptive Self-Supervised Learning Strategies For On-Device LLM Personalization - Audio Podcast
05:49
The Impact of Initialization on LoRA Finetuning Dynamics - Audio Podcast
05:07
LoRAMoE: Alleviate World Knowledge Forgetting in LLMs via MoE-Style Plugin - Audio Podcast
04:11
MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning - Audio Podcast
07:25
LoRA+: Efficient Low Rank Adaptation of Large Models - Audio Podcast
05:49
ReFT: Representation Finetuning for Language Models - Audio Podcast
06:22
Leave No Context Behind: Efficient Infinite Context Transformers - Audio Podcast
06:24
Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries - Audio Podcast
05:47
TextGrad: Automatic "Differentiation" via Text - Audio Podcast
06:16
Paper - "REFT: Reasoning with Reinforced Fine-Tuning - Audio Podcast
05:32
Paper - "Adaptable Logical Control for Large Language Models" - Audio Podcast
07:41
Paper - Breaking reCAPTCHAv2 - Audio Podcast
07:32
Paper - "ARES: Alternating Reinforcement Learning and Supervised Fine-Tuning" - Audio Podcast
07:30
Paper - "Agents in Software Engineering: Survey, Landscape, and Vision" - Audio Podcast
06:18
Schrodinger's Memory: Large Language Models - Audio Podcast
06:38
LLMs Still Can't Plan; Can LRMs? - OpenAI's o1 on PlanBench - Audio Podcast
08:14
AI Paper - "Writing in the Margins" - Audio Podcast
07:42
Sam Altman's newly published personal blog post about the AI future - Audio Podcast by NotebookLM
08:11
Secret behind SambaNova's superfast LLM Inferencing Speed - Audio Podcast
05:35
First open-source multimodal math dataset boosts MLLM performance - Podcast
09:18
New Harvard Business School study shows that AI girlfriends reduce loneliness - Audio Podcast
05:47
Paper Podcast - LLM Pruning and Distillation by NVIDIA
06:29
Training LLMs to Self-Correct via Reinforcement Learning - Audio Podcast with Google NotebookLM