🇺🇸 United States Episodes

14,439 episodes from the United States

How Smartphones & Social Media Impact Mental Health & the Realistic Solutions | Dr. Jonathan Haidt

From Huberman Lab

In this episode, my guest is Dr. Jonathan Haidt, professor of social psychology at New York University and bestselling author on how technology and culture impact the psychology and health of kids, teens, and adults. We discuss the dramatic rise in suicide, depression, and anxiety that has accompanied the replacement of a play-based childhood with smartphones, social media, and video games. He explains how a screen-filled childhood leads to challenges in psychological development that negatively impact learning, resilience, identity, cooperation, and conflict resolution — all crucial skills for future adult relationships and career success. We also discuss how phones and social media impact boys and girls differently, and the underlying neurobiological mechanisms by which smartphones alter basic brain plasticity and function. Dr. Haidt explains his four recommendations for healthier smartphone use in kids, and we discuss how to restore childhood independence and play in the current generation. This is an important topic for everyone, young or old, parents and teachers, students and families, in order to understand the potential mental health toll of smartphone use and to apply tools that foster skill-building and reestablish healthy norms for our kids.

For show notes, including referenced articles and additional resources, please visit hubermanlab.com.

Thank you to our sponsors:
AG1: https://drinkag1.com/huberman
Helix Sleep: https://helixsleep.com/huberman
AeroPress: https://aeropress.com/huberman
Joovv: https://joovv.com/huberman
LMNT: https://drinklmnt.com/huberman

Timestamps
00:00:00 Dr. Jonathan Haidt
00:02:01 Sponsors: Helix Sleep, AeroPress & Joovv
00:06:23 Great Rewiring of Childhood: Technology, Smartphones & Social Media
00:12:48 Mental Health Trends: Boys, Girls & Smartphones
00:16:26 Smartphone Usage, Play-Based to Phone-Based Childhood
00:20:40 The Tragedy of Losing Play-Based Childhood
00:28:13 Sponsor: AG1
00:30:02 Girls vs. Boys, Interests & Trapping Kids
00:37:31 “Effectance,” Systems & Relationships, Animals
00:41:47 Boys Sexual Development, Dopamine Reinforcement & Pornography
00:49:19 Boys, Courtship, Chivalry & Technology; Gen Z Development
00:55:24 Play & Low-Stakes Mistakes, Video Games & Social Media, Conflict Resolution
00:59:48 Sponsor: LMNT
01:01:23 Social Media, Trolls, Performance
01:06:47 Dynamic Subordination, Hierarchy, Boys
01:10:15 Girls & Perfectionism, Social Media & Performance
01:14:00 Phone-Based Childhood & Brain Development, Critical Periods
01:21:15 Puberty & Sensitive Periods, Culture & Identity
01:23:55 Brain Development & Puberty; Identity; Social Media, Learning & Reward
01:33:37 Tool: 4 Recommendations for Smartphone Use in Kids
01:41:48 Changing Childhood Norms, Policies & Legislature
01:49:13 Summer Camp, Team Sports, Religion, Music
01:54:36 Boredom, Addiction & Smartphones; Tool: “Awe Walks”
02:03:14 Casino Analogy & Ceding Childhood; Social Media Content
02:09:33 Adult Behavior; Tool: Meals & Phones
02:11:45 Regaining Childhood Independence; Tool: Family Groups & Phones
02:16:09 Screens & Future Optimism, Collective Action, KOSA Bill
02:24:52 Zero-Cost Support, Spotify & Apple Reviews, YouTube Feedback, Social Media, Neural Network Newsletter

Disclaimer
Learn more about your ad choices. Visit megaphone.fm/adchoices

"Daisy Ridley"

From SmartLess

We’re plonked in the Black Sea with the lovely Daisy Ridley. A cheeky pint, a modest Olympian, and an airport romance novel. Even a visit from Grogu!? …on an all-new SmartLess.

#795 - Ryan Holiday - 11 Harsh Stoic Truths To Improve Your Life

From Modern Wisdom

Ryan Holiday is a podcaster, marketer, and author. Stoicism is like the hot new girl in school: a popular, perfect blend of ancient philosophy that is applicable to modern challenges. Given that Ryan is probably the world's most famous Stoicism expert, what are the most important insights he's learned about how to apply this wisdom to daily life? Expect to learn why Ryan doesn’t talk about the projects he’s working on before finishing them, why Ryan thinks that competition is for losers, how self-belief is overrated, what Ryan’s morning routine and typical day look like, why Broicism has found a new lease of life, the importance of taking responsibility for yourself instead of other people and much more...

Sponsors:
See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get a 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20)
Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout)
Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied)

Extra Stuff:
Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

ICLR 2024 — Best Papers & Talks (Benchmarks, Reasoning & Agents) — ft. Graham Neubig, Aman Sanger, Moritz Hardt

From Latent Space: The AI Engineer Podcast

Our second wave of speakers for the AI Engineer World’s Fair has been announced! The conference sold out of Platinum/Gold/Silver sponsors and Early Bird tickets! See our Microsoft episode for more info and buy now with code LATENTSPACE.

This episode is straightforwardly a part 2 to our ICLR 2024 Part 1 episode, so without further ado, we’ll just get right on with it!

Timestamps
[00:03:43] Section A: Code Edits and Sandboxes, OpenDevin, and Academia vs Industry — ft. Graham Neubig and Aman Sanger
* [00:07:44] WebArena
* [00:18:45] Sotopia
* [00:24:00] Performance Improving Code Edits
* [00:29:39] OpenDevin
* [00:47:40] Industry and Academia
[01:05:29] Section B: Benchmarks
* [01:05:52] SWE-bench
* [01:17:05] SWE-bench/SWE-agent Interview
* [01:27:40] Dataset Contamination Detection
* [01:39:20] GAIA Benchmark
* [01:49:18] Moritz Hardt - Science of Benchmarks
[02:36:32] Section C: Reasoning and Post-Training
* [02:37:41] Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
* [02:51:00] Let’s Verify Step By Step
* [02:57:04] Noam Brown
* [03:07:43] Lilian Weng - Towards Safe AGI
* [03:36:56] A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis
* [03:48:43] MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework
[04:00:51] Bonus: Notable Related Papers on LLM Capabilities

Section A: Code Edits and Sandboxes, OpenDevin, and Academia vs Industry — ft. Graham Neubig and Aman Sanger
* Guests
* Graham Neubig
* Aman Sanger - Previous guest and NeurIPS friend of the pod!
* WebArena
* Sotopia (spotlight paper, website)
* Learning Performance-Improving Code Edits
* OpenDevin
* Junyang Opendevin
* Morph Labs, Jesse Han
* SWE-Bench
* SWE-Agent
* Aman tweet on swebench
* LiteLLM
* Livecodebench
* the role of code in reasoning
* Language Models of Code are Few-Shot Commonsense Learners
* Industry vs academia
* the matryoshka embeddings incident
* other directions
* Unlimiformer

Section A timestamps
* [00:00:00] Introduction to Guests and the Impromptu Nature of the Podcast
* [00:00:45] Graham's Experience in Japan and Transition into Teaching NLP
* [00:01:25] Discussion on What Constitutes a Good Experience for Students in NLP Courses
* [00:02:22] The Relevance and Teaching of Older NLP Techniques Like Ngram Language Models
* [00:03:38] Speculative Decoding and the Comeback of Ngram Models
* [00:04:16] Introduction to WebArena and Sotopia Projects
* [00:05:19] Deep Dive into the WebArena Project and Benchmarking
* [00:08:17] Performance Improvements in WebArena Using GPT-4
* [00:09:39] Human Performance on WebArena Tasks and Challenges in Evaluation
* [00:11:04] Follow-up Work from WebArena and Focus on Web Browsing as a Benchmark
* [00:12:11] Direct Interaction vs. Using APIs in Web-Based Tasks
* [00:13:29] Challenges in Base Models for WebArena and the Potential of Visual Models
* [00:15:33] Introduction to Sotopia and Exploring Social Interactions with Language Models
* [00:16:29] Different Types of Social Situations Modeled in Sotopia
* [00:17:34] Evaluation of Language Models in Social Simulations
* [00:20:41] Introduction to Performance-Improving Code Edits Project
* [00:26:28] Discussion on Devin and the Future of Coding Agents
* [00:32:01] Planning in Coding Agents and the Development of OpenDevin
* [00:38:34] The Changing Role of Academia in the Context of Large Language Models
* [00:44:44] The Changing Nature of Industry and Academia Collaboration
* [00:54:07] Update on NLP Course Syllabus and Teaching about Large Language Models
* [01:00:40] Call to Action: Contributions to OpenDevin and Open Source AI Projects
* [01:01:56] Hiring at Cursor for Roles in Code Generation and Assistive Coding
* [01:02:12] Promotion of the AI Engineer Conference

Section B: Benchmarks
* Carlos Jimenez & John Yang (Princeton) et al: SWE-bench: Can Language Models Resolve Real-world GitHub Issues? (ICLR Oral, Paper, website)
* “We introduce SWE-bench, an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts and perform complex reasoning that goes far beyond traditional code generation tasks. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues.
The best-performing model, Claude 2, is able to solve a mere 1.96% of the issues. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.”
* Yonatan Oren et al (Stanford): Proving Test Set Contamination in Black-Box Language Models (ICLR Oral, paper, Aman tweet on SWE-bench contamination)
* “We show that it is possible to provide provable guarantees of test set contamination in language models without access to pretraining data or model weights. Our approach leverages the fact that when there is no data contamination, all orderings of an exchangeable benchmark should be equally likely. In contrast, the tendency for language models to memorize example order means that a contaminated language model will find certain canonical orderings to be much more likely than others. Our test flags potential contamination whenever the likelihood of a canonically ordered benchmark dataset is significantly higher than the likelihood after shuffling the examples.
* We demonstrate that our procedure is sensitive enough to reliably prove test set contamination in challenging situations, including models as small as 1.4 billion parameters, on small test sets of only 1000 examples, and datasets that appear only a few times in the pretraining corpus.”
* Outstanding Paper mention: “A simple yet elegant method to test whether a supervised-learning dataset has been included in LLM training.”
* Thomas Scialom (Meta AI-FAIR w/ Yann LeCun): GAIA: A Benchmark for General AI Assistants (paper)
* “We introduce GAIA, a benchmark for General AI Assistants that, if solved, would represent a milestone in AI research. GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and generally tool-use proficiency.
* GAIA questions are conceptually simple for humans yet challenging for most advanced AIs: we show that human respondents obtain 92% vs. 15% for GPT-4 equipped with plugins.
* GAIA's philosophy departs from the current trend in AI benchmarks suggesting to target tasks that are ever more difficult for humans. We posit that the advent of Artificial General Intelligence (AGI) hinges on a system's capability to exhibit similar robustness as the average human does on such questions. Using GAIA's methodology, we devise 466 questions and their answers.”
* Moritz Hardt (Max Planck Institute): The emerging science of benchmarks (ICLR stream)
* “Benchmarks are the keystone that hold the machine learning community together. Growing as a research paradigm since the 1980s, there’s much we’ve done with them, but little we know about them. In this talk, I will trace the rudiments of an emerging science of benchmarks through selected empirical and theoretical observations. Specifically, we’ll discuss the role of annotator errors, external validity of model rankings, and the promise of multi-task benchmarks. The results in each case challenge conventional wisdom and underscore the benefits of developing a science of benchmarks.”

Section C: Reasoning and Post-Training
* Akari Asai (UW) et al: Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (ICLR oral, website)
* (Bad RAG implementations) indiscriminately retrieving and incorporating a fixed number of retrieved passages, regardless of whether retrieval is necessary, or passages are relevant, diminishes LM versatility or can lead to unhelpful response generation.
* We introduce a new framework called Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's quality and factuality through retrieval and self-reflection.
* Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its generations using special tokens, called reflection tokens.
Generating reflection tokens makes the LM controllable during the inference phase, enabling it to tailor its behavior to diverse task requirements.
* Self-RAG (7B and 13B parameters) outperforms ChatGPT and retrieval-augmented Llama2-chat on open-domain QA, reasoning, and fact verification tasks, and it shows significant gains in improving factuality and citation accuracy for long-form generations relative to these models.
* Hunter Lightman (OpenAI): Let’s Verify Step By Step (paper)
* “Even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or process supervision, which provides feedback for each intermediate reasoning step.
* We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset. Our process-supervised model solves 78% of problems from a representative subset of the MATH test set. Additionally, we show that active learning significantly improves the efficacy of process supervision.
* To support related research, we also release PRM800K, the complete dataset of 800,000 step-level human feedback labels used to train our best reward model.”
* Noam Brown - workshop on Generative Models for Decision Making
* Solving Quantitative Reasoning Problems with Language Models (Minerva paper)
* Describes some charts taken directly from the Let’s Verify Step By Step paper listed/screenshotted above.
* Lilian Weng (OpenAI) - Towards Safe AGI (ICLR talk)
* OpenAI Model Spec
* OpenAI Instruction Hierarchy: The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

Section D: Agent Systems
* Izzeddin Gur (Google DeepMind): A Real-World WebAgent with Planning, Long Context Understanding, and Program Synthesis (ICLR oral, paper)
* [Agent] performance on real-world websites has still suffered from (1) open domainness, (2) limited context length, and (3) lack of inductive bias on HTML.
* We introduce WebAgent, an LLM-driven agent that learns from self-experience to complete tasks on real websites following natural language instructions.
* WebAgent plans ahead by decomposing instructions into canonical sub-instructions, summarizes long HTML documents into task-relevant snippets, and acts on websites via Python programs generated from those.
* We design WebAgent with Flan-U-PaLM, for grounded code generation, and HTML-T5, a new pre-trained LLM for long HTML documents that uses local and global attention mechanisms and a mixture of long-span denoising objectives, for planning and summarization.
* We empirically demonstrate that our modular recipe improves the success rate on real websites by over 50%, and that HTML-T5 is the best model to solve various HTML understanding tasks, achieving an 18.7% higher success rate than the prior method on the MiniWoB web automation benchmark, and SoTA performance on Mind2Web, an offline task planning evaluation.
* Sirui Hong (DeepWisdom): MetaGPT: Meta Programming for A Multi-Agent Collaborative Framework (ICLR Oral, Paper)
* We introduce MetaGPT, an innovative meta-programming framework incorporating efficient human workflows into LLM-based multi-agent collaborations. MetaGPT encodes Standardized Operating Procedures (SOPs) into prompt sequences for more streamlined workflows, thus allowing agents with human-like domain expertise to verify intermediate results and reduce errors. MetaGPT utilizes an assembly line paradigm to assign diverse roles to various agents, efficiently breaking down complex tasks into subtasks involving many agents working together.

Bonus: Notable Related Papers on LLM Capabilities
This includes a bunch of papers we wanted to feature above but could not.
* Lukas Berglund (Vanderbilt) et al: The Reversal Curse: LLMs trained on “A is B” fail to learn “B is A” (ICLR poster, paper, GitHub)
* We expose a surprising failure of generalization in auto-regressive large language models (LLMs). If a model is trained on a sentence of the form ''A is B'', it will not automatically generalize to the reverse direction ''B is A''. This is the Reversal Curse.
* The Reversal Curse is robust across model sizes and model families and is not alleviated by data augmentation. We also evaluate ChatGPT (GPT-3.5 and GPT-4) on questions about real-world celebrities, such as ''Who is Tom Cruise's mother? [A: Mary Lee Pfeiffer]'' and the reverse ''Who is Mary Lee Pfeiffer's son?''. GPT-4 correctly answers questions like the former 79% of the time, compared to 33% for the latter.
* Omar Khattab (Stanford): DSPy: Compiling Declarative Language Model Calls into State-of-the-Art Pipelines (ICLR Spotlight Poster, GitHub)
* presented by Krista Opsahl-Ong
* “Existing LM pipelines are typically implemented using hard-coded “prompt templates”, i.e. lengthy strings discovered via trial and error.
Toward a more systematic approach for developing and optimizing LM pipelines, we introduce DSPy, a programming model that abstracts LM pipelines as text transformation graphs, or imperative computational graphs where LMs are invoked through declarative modules.
* DSPy modules are parameterized, meaning they can learn how to apply compositions of prompting, finetuning, augmentation, and reasoning techniques.
* We design a compiler that will optimize any DSPy pipeline to maximize a given metric, by creating and collecting demonstrations.
* We conduct two case studies, showing that succinct DSPy programs can express and optimize pipelines that reason about math word problems, tackle multi-hop retrieval, answer complex questions, and control agent loops.
* Within minutes of compiling, DSPy can automatically produce pipelines that outperform out-of-the-box few-shot prompting as well as expert-created demonstrations for GPT-3.5 and Llama2-13b-chat. On top of that, DSPy programs compiled for relatively small LMs like 770M-parameter T5 and Llama2-13b-chat are competitive with many approaches that rely on large and proprietary LMs like GPT-3.5 and on expert-written prompt chains.
* MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning
* Scaling Laws for Associative Memories
* DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
* Efficient Streaming Language Models with Attention Sinks

Get full access to Latent.Space at www.latent.space/subscribe

Trillion Dollar Shot, Episode 4: The Disruptors

From The Journal

The rising popularity of GLP-1 drugs could cause all kinds of ripple effects. According to one estimate, 9% of the U.S. population could be on Ozempic or similar medications by 2030. Meanwhile, drugmakers are already developing the next generation of weight-loss drugs, and researchers are studying the possible health benefits beyond weight loss and diabetes, including addiction. In the final episode of our series, we ask: What could all this development mean for businesses, from the food sector to airlines? And who wins and who loses in the post-Ozempic economy? Guests include: David Ricks, CEO of Eli Lilly; and Mehdi Farokhnia, an addiction researcher at the National Institutes of Health. Listen to Episodes 1, 2 and 3 of “Trillion Dollar Shot” here. Learn more about your ad choices. Visit megaphone.fm/adchoices

Sunday Pick: Design Matters with Carrie Brownstein

From TED Talks Daily

Each Sunday, TED shares an episode of another podcast we think you'll love, handpicked for you… by us. Today we're sharing an episode of Design Matters with Debbie Millman, one of the world’s very first podcasts, about how incredibly creative people design the arc of their lives. Celebrated musician, comedian, writer, and director Carrie Brownstein joins to talk about her remarkable career as the co-founder, guitarist, and vocalist of the legendary punk band Sleater-Kinney, her role in the iconic TV series Portlandia, and her new memoir. Get more Design Matters with Debbie Millman wherever you're listening to this. Hosted on Acast. See acast.com/privacy for more information.

12 predictions for the future of technology | Vinod Khosla

From TED Talks Daily

Techno-optimist Vinod Khosla believes in the world-changing power of "foolish ideas." He offers 12 bold predictions for the future of technology — from preventative medicine to car-free cities to planes that get us from New York to London in 90 minutes — and shows why a world of abundance awaits. Hosted on Acast. See acast.com/privacy for more information.

#794 - Scott Galloway - The 4 Secrets To Get Rich In A Broken Economy

From Modern Wisdom

Scott Galloway is a professor of marketing at the New York University Stern School of Business, a public speaker, entrepreneur, and author. The modern economy is a confusing mess. Inflation, interest rates, wage stagnation, property prices: it's all complex and may feel like you're swimming upstream. Thankfully, Scott has broken down his entire wealth strategy into a four-step algorithm which anyone can follow. Expect to learn why wealth is a whole-person project, what all financially successful people have in common, how to forgive yourself when you fall short, why following your passion is often a bad idea, the importance of physical fitness for financial wealth and much more...

Sponsors:
See discounts for all the products I use and recommend: https://chriswillx.com/deals
Get a 35% discount on all Cozy Earth products at http://www.cozyearth.com/modernwisdom (discount automatically applied)
Get a 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20)
Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout)

Extra Stuff:
Get my free reading list of 100 books to read before you die: https://chriswillx.com/books
Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom

Episodes You Might Enjoy:
#577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59
#712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf
#700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp

Get In Touch:
Instagram: https://www.instagram.com/chriswillx
Twitter: https://www.twitter.com/chriswillx
YouTube: https://www.youtube.com/modernwisdompodcast
Email: https://chriswillx.com/contact

Learn more about your ad choices. Visit megaphone.fm/adchoices

How much national debt is too much?

From Planet Money

Most economic textbooks will tell you that there can be real dangers in running up a big national debt. A major concern is how the debt you add now could slow down economic growth in the future. Economists have not been able to nail down how much debt a country can safely take on. But they have tried. Back in 2010, two economists took a look at 20 countries over the course of decades, and sometimes centuries, and came back with a number. Their analysis suggested that economic growth slowed significantly once national debt passed 90% of annual GDP... and that is when the fight over debt and growth really took off. On today's episode: a deep dive on what we know, and what we don't know, about when exactly national debt becomes a problem. We will also try to figure out how worried we should be about the United States' current debt total of 26 trillion dollars. This episode was hosted by Keith Romer and Nick Fountain. It was produced by Willa Rubin and edited by Molly Messick. It was fact-checked by Sierra Juarez with help from Sofia Shchukina and engineered by Cena Loffredo. Alex Goldmark is Planet Money's executive producer. Help support Planet Money and hear our bonus episodes by subscribing to Planet Money+ in Apple Podcasts or at plus.npr.org/planetmoney. Learn more about sponsor message choices: podcastchoices.com/adchoices NPR Privacy Policy

‘It Came out of Nowhere’: The Rise of Dr Pepper

From The Journal

There is a new contender in the cola wars, and it isn’t a cola. It’s Dr Pepper. WSJ’s Jennifer Maloney unpacks how after decades as a distant competitor, Dr Pepper has climbed the soda ranks with help from hefty marketing, novel flavors and TikTok videos. Further Reading: - Dr Pepper Ties Pepsi as America’s No. 2 Soda Further Listening: - The Agony and Ecstasy of Tab  Learn more about your ad choices. Visit megaphone.fm/adchoices

DOJ targets Nvidia, Meme stock comeback, Trump fundraiser in SF, Apple/OpenAI, Texas stock market

From All-In with Chamath, Jason, Sacks & Friedberg

(0:00) Besties intros!
(2:10) Responding to recent media coverage
(17:58) DOJ/FTC strike deal to target Nvidia, OpenAI, and Microsoft
(32:40) Meme stocks are back: Keith Gill aka Roaring Kitty resurfaces, disclosing nine-figure position in GameStop
(58:36) Citadel and BlackRock back TXSE to take on NYSE and Nasdaq
(1:02:34) Apple to announce OpenAI iPhone deal at WWDC
(1:09:07) Science Corner: Alarming ocean temps continue, what to expect for hurricane season

Follow the besties:
https://twitter.com/chamath
https://twitter.com/Jason
https://twitter.com/DavidSacks
https://twitter.com/friedberg

Follow on X: https://twitter.com/theallinpod
Follow on Instagram: https://www.instagram.com/theallinpod
Follow on TikTok: https://www.tiktok.com/@all_in_tok
Follow on LinkedIn: https://www.linkedin.com/company/allinpod

Intro Music Credit: https://rb.gy/tppkzl https://twitter.com/yung_spielburg
Intro Video Credit: https://twitter.com/TheZachEffect

Referenced in the show:
https://www.youtube.com/watch?v=dQQhAg0mfF8
https://x.com/vkhosla/status/1769529054955446533
https://x.com/vkhosla/status/1796293773389127987
https://x.com/shaunmmaguire/status/1796415146077954329
https://x.com/DavidSacks/status/1798100723617698097
https://x.com/SawyerMerritt/status/1798779830521000426
https://www.nytimes.com/2024/06/05/technology/nvidia-microsoft-openai-antitrust-doj-ftc.html
https://www.wsj.com/tech/ai/ftc-opens-antitrust-probe-of-microsoft-ai-deal-29b5169a
https://companiesmarketcap.com
https://www.youtube.com/watch?v=F9cO3-MLHOM
https://www.politico.com/news/2024/05/29/newsom-california-artifical-intelligence-regulations-00160519
https://www.reddit.com/user/DeepFuckingValue
https://www.wsj.com/finance/regulation/e-trade-considers-kicking-meme-stock-leader-keith-gill-off-platform-f2003ec4
https://x.com/TheRoaringKitty/status/1789807772542067105
https://www.google.com/finance/quote/GME:NYSE
https://www.youtube.com/watch?v=M-VO6dtFRes
https://www.wsj.com/finance/regulation/keith-gills-gamestop-trades-pose-conundrum-for-market-cops-70cc5301
https://www.macrotrends.net/stocks/charts/GME/gamestop/revenue
https://www.wsj.com/finance/stocks/gamestop-burned-andrew-left-in-2021-hes-betting-against-the-stock-again-4377cecb
https://www.nytimes.com/2022/05/18/business/melvin-capital-gamestop-short.html
https://www.instagram.com/tim.naki/reels
https://www.wsj.com/finance/regulation/new-texas-stock-exchange-takes-aim-at-new-yorks-dominance-e3b4d9ba
https://listingcenter.nasdaq.com/assets/Board%20Diversity%20Disclosure%20Five%20Things.pdf
https://www.bloomberg.com/news/articles/2024-06-05/why-is-apple-aapl-teaming-up-with-openai-both-companies-need-each-other
https://x.com/leonsimons8/status/1793319395080520036

Rep. Thomas Massie: Israel Lobbyists, the Cowards in Congress, and Living off the Grid

From The Tucker Carlson Show

U.S. Representative Thomas Massie entered Congress in November 2012 after serving as Lewis County Judge Executive. He represents Kentucky’s 4th Congressional District, which stretches across Northern Kentucky and 280 miles of the Ohio River. www.thomasmassie.com

(00:00) Where Does US Debt End?
(10:36) Why Massie Voted 15 Times Against Funding Israel
(14:53) AIPAC
(42:25) Area 51
(51:10) Massie's Relationship with Trump
(57:50) Kill Switches in Cars
(1:06:44) Mike Johnson and the Deep State
(1:15:20) How Massie Got Into Politics
(1:19:10) Living off the Grid

Learn more about your ad choices. Visit megaphone.fm/adchoices

Why broken hearts hurt — and what heals them | Yoram Yovell

From TED Talks Daily

What's the relationship between physical and mental pain, and how can you ease both? Revealing how your experiences of love, loss and pain are deeply intertwined, neuroscientist Yoram Yovell sheds light on the surprising role of your brain's endorphins and opioid receptors to ease physical and emotional suffering — and shows how this connection could pave the way to new treatments for mental health and well-being. Hosted on Acast. See acast.com/privacy for more information.

The GameStop Guy Has Returned… (And Has A New $210M Bet)

From My First Million

Episode 594: Sam Parr ( https://twitter.com/theSamParr ) and Shaan Puri ( https://twitter.com/ShaanVP ) explain what’s happening with GameStop AGAIN and how Keith Gill turned $56k into $210M with memes.

Show Notes:
(0:00) Roaring Kitty's $200M GameStop holding
(8:41) Is Keith Gill the most genius creator behind a brand?
(14:53) Where did the $65M come from?
(17:44) The 7 Stages of GameStop FOMO
(20:00) Ryan Cohen's activist investments in GameStop, Bed Bath and Beyond
(26:34) Shaan's honest take on paternity leave
(31:53) Painting the windows black
(35:42) Zach Pogrob's The Year of Obsession
(37:03) What's the deal with run clubs right now?
(39:19) Sexy faces and sexy paces
(42:04) Endurance event businesses
(45:06) Opportunity: The suburban Iron Man
(51:19) Scott Harrison gives Shaan unsolicited feedback

Links:
• [Steal This] Get our proven writing frameworks that have made us millions: https://clickhubspot.com/copy
• wallstreetbets - https://www.reddit.com/r/wallstreetbets/
• Unusual Whales - https://unusualwhales.com/
• WSJ on Ryan Cohen - https://tinyurl.com/4zue9xps
• Wander - https://www.wander.com/
• The Lehman Trilogy - https://thelehmantrilogy.com/
• The Year of Obsession - https://tinyurl.com/4nsrh689
• Nick Bare - https://www.instagram.com/nickbarefitness
• RAWDAWG - https://www.instagram.com/rawdawgrunclub
• River - https://www.getriver.io/
• 29029 Everesting - https://29029everesting.com/
• Rock n Roll Running - https://www.runrocknroll.com/
• thespeedproject - https://www.instagram.com/thespeedproject
• Grab HubSpot's free AI-Powered Customer Platform and watch your business grow: https://clickhubspot.com/fmf

Enter to win a free trip at https://www.wander.com/mfm and use code MFM300 at checkout for $300 off your booking.

Check Out Shaan's Stuff:
Need to hire? You should use the same service Shaan uses to hire developers, designers, & Virtual Assistants → it’s called Shepherd (tell ‘em Shaan sent you): https://bit.ly/SupportShepherd

Check Out Sam's Stuff:
• Hampton - https://www.joinhampton.com/
• Ideation Bootcamp - https://www.ideationbootcamp.co/
• Copy That - https://copythat.com
• Hampton Wealth Survey - https://joinhampton.com/wealth
• Sam’s List - http://samslist.co/

My First Million is a HubSpot Original Podcast // Brought to you by The HubSpot Podcast Network // Production by Arie Desormeaux // Editing by Ezra Bakker Trupiano

LIVE EVENT Q&A: Dr. Andrew Huberman at the Brisbane Convention & Exhibition Centre

From Huberman Lab

Recently I had the pleasure of hosting a live event in Brisbane, Australia. This event was part of a lecture series called The Brain Body Contract. My favorite part of the evening was the question and answer period, where I had the opportunity to answer questions from the attendees of each event. Included here is the Q&A from our event at the Brisbane Convention & Exhibition Centre. Sign up to get notified about future events: https://www.hubermanlab.com/events Thank you to our sponsors AG1: https://drinkag1.com/huberman Eight Sleep: https://eightsleep.com/huberman Resources Mentioned Huberman Lab Non-Sleep Deep Rest Protocols Huberman Lab Guest Series with Dr. Matt Walker Huberman Lab Guest Series with Dr. Paul Conti Huberman Lab Guest Series with Dr. Andy Galpin Dr. Becky Kennedy: Protocols for Excellent Parenting & Improving Relationships of All Kinds Perform with Dr. Andy Galpin Timestamps 00:00 Introduction 00:31 Sponsors: AG1 & Eight Sleep 03:48 Nicotine Discussion 07:42 ADHD Management: Tools & Medications 12:43 Sleep Deprivation & Recovery 18:54 Understanding & Addressing Burnout 22:12 Daily Nutrition & Eating Habits 24:40 Understanding Food & Neural Pathways 26:21 The Benefits of Elimination Diets 27:21 Intermittent Fasting & Personal Diet Choices 28:23 Top Health & Fitness Recommendations 30:50 The Value of Non-Sleep Deep Rest (NSDR) 33:08 Testosterone Replacement Therapy Insights 38:02 Breathing Techniques for Stress & Focus 41:46 Morning Sunlight & Circadian Rhythms 43:18 Parenting Tips for a Healthy Start 49:03 Final Thoughts & Gratitude Disclaimer Learn more about your ad choices. Visit megaphone.fm/adchoices

Why Biden Is Cracking Down on Asylum at the Border

From The Journal

President Biden unveiled a last-ditch effort to lower illegal crossings at the southern border this week. The move focuses on asylum seekers, and the policy is similar to one that former President Trump tried in 2018. WSJ’s Michelle Hackman describes the policy and tries to answer the question: why now? Further Reading: -Biden Issues Executive Actions on Immigration: What to Know  Further Listening: -What the End of Title 42 Means for U.S. Immigration Policy  -What Trump's Immigration Restrictions Could Mean for the Economy  Learn more about your ad choices. Visit megaphone.fm/adchoices

JRE MMA Show #158 with Tank Abbott

From Joe Rogan Experience

Joe sits down with David “Tank” Abbott, a retired professional mixed martial artist, former pro wrestler, and pioneer in the world of combat sports. www.ufc.com/athlete/tank-abbott Learn more about your ad choices. Visit podcastchoices.com/adchoices

How to use venture capital for good | Freada Kapor Klein

From TED Talks Daily

Freada Kapor Klein isn't your typical venture capitalist. She's thrown out the standard investment playbook in order to close the opportunity gap for low-income communities. She explains how her firm is investing in entrepreneurs and startups solving real-world problems — and the measurable difference it's already making. Hosted on Acast. See acast.com/privacy for more information.

