The Promise and Peril of Open Source AI with Elizabeth Seger and Jeffrey Ladish
Jump to Connections
1 Views • Jan 26, 2024 • Click to toggle off description
As AI development races forward, a fierce debate has emerged over open-source AI models. So what does it mean to open-source AI? Are we opening Pandora’s box of catastrophic risks? Or is open-sourcing AI the only way we can democratize its benefits and dilute the power of big tech?


Correction: When discussing the large language model Bloom, Elizabeth said it functions in 26 different languages. Bloom is actually able to generate text in 46 natural languages and 13 programming languages, and more are in the works.


RECOMMENDED MEDIA 


Open-Sourcing Highly Capable Foundation Models (elizabethseger.com/open-source-model-sharing/)


This report, co-authored by Elizabeth Seger, clarifies open-source terminology and offers a thorough analysis of the risks and benefits of open-sourcing AI.


BadLlama: cheaply removing safety fine-tuning from Llama 2-Chat 13B (arxiv.org/abs/2311.00117)


This paper, co-authored by Jeffrey Ladish, demonstrates that it’s possible to effectively undo the safety fine-tuning of Llama 2-Chat 13B for less than $200 while retaining its general capabilities.


Centre for the Governance of AI (www.governance.ai/)


Supports governments, technology companies, and other key institutions by producing research and guidance on how to respond to the challenges posed by AI.


AI: Futures and Responsibility (AI:FAR) (www.ai-far.org/)


Aims to shape the long-term impacts of AI in ways that are safe and beneficial for humanity.


Palisade Research (palisaderesearch.org/)


Studies the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever.


RECOMMENDED YUA EPISODES


A First Step Toward AI Regulation with Tom Wheeler (www.humanetech.com/podcast/a-first-step-toward-ai-…)


No One is Immune to AI Harms with Dr. Joy Buolamwini (www.humanetech.com/podcast/no-one-is-immune-to-ai-…)


Mustafa Suleyman Says We Need to Contain AI. How Do We Do It? (www.humanetech.com/podcast/mustafa-suleyman-says-w…)


The AI Dilemma (www.humanetech.com/podcast/the-ai-dilemma)


Your Undivided Attention is produced by the Center for Humane Technology (www.humanetech.com/). Follow us on Twitter: @HumaneTech_ (twitter.com/humanetech_)