LLM Mastery: Hands-on Code, Align and Master LLMs · 3.7K views · Aug 25, 2024 · git.ir
6:18 · 4 Ways to Align LLMs: RLHF, DPO, KTO, and ORPO · 4.1K views · Jul 10, 2024 · YouTube · Snorkel AI
Understanding the LLM Development Cycle: Building, Trai… · Jun 13, 2024 · acm.org
58:07 · Aligning LLMs with Direct Preference Optimization · 34.1K views · Feb 8, 2024 · YouTube · DeepLearningAI
0:28 · The Truth About LLM Alignment: SFT, RLHF, and DPO · 272 views · 2 months ago · YouTube · algo10ai
4:28 · HelpSteer3-Preference: Open Multi-Language Human Preference Data… · 2 months ago · YouTube · CosmoX
A Cross-Modal Approach to Silent Speech with LLM-Enhanced Reco… · 11 months ago · stanford.edu
30:05 · Secure LLM Alignment: Safeguarding Reinforcement Lear… · 377 views · Jan 21, 2024 · YouTube · Machine Learning and AI Academy
43:03 · Lec 25 | Alignment of Language Models-II · 4.3K views · Mar 7, 2025 · YouTube · NPTEL IIT Delhi
39:41 · ORPO Explained: Superior LLM Alignment Technique vs. DPO/RLHF · 3K views · Apr 9, 2024 · YouTube · AI Anytime
41:28 · LLMs | Alignment of Language Models: Contrastive Learning | Le… · 1.6K views · Sep 26, 2024 · YouTube · LCS2
56:48 · LLM Alignment: Advanced Techniques for Building Human-C… · 762 views · Sep 11, 2024 · YouTube · Data Science Dojo
13:16 · Introducing Align Evals: Streamlining LLM Application Eval… · 6.6K views · 7 months ago · YouTube · LangChain
8:01 · Inside LLM Training: Pretraining, Fine-Tuning, Alignment · 2 views · 2 months ago · YouTube · AI Strategy & Trends
35:00 · The inner workings of LLMs explained - VISUALIZE the self-att… · 14.1K views · May 13, 2023 · YouTube · Discover AI
0:57 · McKinsey’s 7S Framework Explained in 60 Seconds · 5.5K views · 8 months ago · YouTube · Business Edutainment
1:00:36 · How to align your LLM judge for better evaluations · 929 views · Jul 18, 2024 · YouTube · Weights & Biases
48:14 · LLMs | Alignment of Language Models: Reward Maximization-I | L… · 2.7K views · Sep 20, 2024 · YouTube · LCS2
13:23 · An update on DPO vs PPO for LLM alignment · 3.6K views · Jul 22, 2024 · YouTube · Nathan Lambert
6:47 · Mechanics of Alignment: The Evolution of LLM Optimization | U… · 4 views · 2 months ago · YouTube · Uplatz
1:44:33 · LLM Alignment | A Survey and In-Depth Analysis of RLHF, DPO, and UNA · 1.7K views · Nov 19, 2024 · bilibili · 你到这干嘛来了
DxHF: Providing High-Quality Human Feedback for LLM Alignme… · 5 months ago · acm.org
5:08 · LLM Alignment Methods - DPO vs IPO vs KTO vs PCL · 1.6K views · Jan 27, 2024 · YouTube · Fahd Mirza
7:18 · Rethinking Trust Region in LLM Reinforcement Learning PPO Limi… · 2 weeks ago · YouTube · CosmoX
43:22 · Lec 10 | Reinforcement Learning from Human Feedback: Part 04 · 277 views · 5 months ago · YouTube · LCS2
1:12:15 · UMass CS685 S24 (Advanced NLP) #11: LLM alignment & RLHF · 3.7K views · Mar 11, 2024 · YouTube · Mohit Iyyer
3:12 · RubricHub: New Automated Dataset for LLM Alignment · 2 views · 1 month ago · YouTube · AI Research Roundup
LLM Evaluations and Grounding Techniques Online Class | LinkedI… · Aug 28, 2024 · linkedin.com
36:25 · Direct Preference Optimization (DPO): Your Language Model is S… · 19.2K views · Aug 10, 2023 · YouTube · Gabriel Mongaras
1:01:53 · LLM: Pretraining, Instruction fine-tuning and RLHF · 6.3K views · Jul 31, 2023 · YouTube · YanAITalk