
Content is provided by CNA. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by CNA or its podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://th.player.fm/legal

Someday My ‘Nets Will Code

Duration: 45:01
 

Information about the AI Event Series mentioned in this episode: https://twitter.com/CNA_org/status/1400808135544213505?s=20

To RSVP, contact Larry Lewis at LewisL@cna.org.

Andy and Dave discuss the latest in AI news, including a report on Libya from the UN Security Council’s Panel of Experts, which notes the March 2020 use of the “fully autonomous” Kargu-2 to engage retreating forces; it remains unclear whether anyone died in the engagement, and many other important details about the incident are missing. The Biden Administration releases its FY22 DoD budget, which increases the RDT&E request and includes $874M for AI research. NIST proposes an evaluation model for user trust in AI and seeks feedback; the model includes definitions for terms such as reliability and explainability. EleutherAI provides an open-source alternative to GPT-3, called GPT-Neo, trained on the 825GB “Pile” dataset and available in 1.3B- and 2.7B-parameter versions. CSET takes a hands-on look at how transformer models such as GPT-3 can aid disinformation, publishing its findings in Truth, Lies, and Automation: How Language Models Could Change Disinformation.

IBM introduces CodeNet, a project aimed at teaching AI to code, built around a large dataset containing 500 million lines of code across 55 legacy and active programming languages. In a separate effort, researchers at Berkeley, Chicago, and Cornell publish results on using transformer models as “code generators,” creating a benchmark (the Automated Programming Progress Standard, or APPS) to measure progress; they find that GPT-Neo could pass approximately 15% of introductory problems, with GPT-3’s 175B-parameter model performing much worse (presumably because the larger model could not be fine-tuned). The CNA Russia Studies Program releases an extensive report on AI and Autonomy in Russia, capping off its biweekly newsletters on the topic. Arthur Holland Michel publishes Known Unknowns: Data Issues and Military Autonomous Systems, which lays out the known data issues that cause problems for autonomous systems. The short story of the week comes from Asimov in 1956, with “Someday.” And the Naval Institute Press publishes a collection of essays, AI at War: How Big Data, AI, and Machine Learning Are Changing Naval Warfare. Finally, Diana Gehlhaus from Georgetown’s Center for Security and Emerging Technology (CSET) joins Andy and Dave to preview an upcoming event, “Requirements for Leveraging AI.”
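
For listeners who want to try GPT-Neo themselves, EleutherAI publishes the checkpoints on the Hugging Face Hub. The snippet below is a minimal sketch rather than anything from the episode: it assumes the Hugging Face transformers library (with PyTorch) is installed and uses the EleutherAI/gpt-neo-1.3B checkpoint; the prompt and generation settings are purely illustrative.

# Minimal sketch: prompting GPT-Neo (1.3B parameters) as a rough code generator.
# Assumes: pip install transformers torch
from transformers import pipeline

# Download the 1.3B-parameter checkpoint from the Hugging Face Hub
# and build a text-generation pipeline around it.
generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

# Give the model the start of a function and let it complete the body.
prompt = "# Return the n-th Fibonacci number.\ndef fib(n):"
outputs = generator(prompt, max_length=64, do_sample=True, temperature=0.2)

print(outputs[0]["generated_text"])

The larger 2.7B-parameter checkpoint (EleutherAI/gpt-neo-2.7B) can be swapped in the same way at the cost of more memory; as the APPS results above suggest, expect only a modest fraction of even introductory problems to be solved correctly.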

Interview with Diana Gehlhaus: 33:32

Click here to visit our website and explore the links mentioned in the episode.

