#369: Getting Lazy with Python Imports and PEP 690

Up next

#548: Event Sourcing Design Pattern

What if your database worked more like Git? Every change captured as an immutable event you can replay, instead of a single mutating row that quietly forgets its own history. That's event sourcing, and Chris May is back on Talk Python, fresh off our Datastar panel, to walk us through ...

#547: Parallel Python at Anyscale with Ray

When OpenAI trained GPT-3, they didn't roll their own orchestration layer. They used Ray, an open source Python framework born out of the same Berkeley research lab lineage that gave us Apache Spark. And here's the twist: Ray was originally built for reinforcement learning research ...

Recommended Episodes

96: What is Python & Why Should I Care? w/ Python Expert Michael Kennedy
Data Career Podcast: Helping You Land a Data Analyst Job FAST

Avery talks with Michael Kennedy about the many ways Python is used.

Michael hosts the Talk Python to Me podcast, is an expert in Python, and explains how experts use Python in various fields.

The episode also discusses beginners who want to learn and use Python, ...


They made Python faster with this compiler option
The Backend Engineering Show with Hussein Nasser

Fundamentals of Operating Systems Course: https://oscourse.win. It looks like Fedora is compiling CPython with the -O3 flag, which does aggressive function inlining among other optimizations. This seems to improve Python benchmark performance by at most 1.16x, at the cost of an ex ...

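For readers who want to try the compiler-flag experiment the episode describes, here is a minimal sketch of building CPython with -O3 yourself. Assumptions: a Unix-like system with a C toolchain; the repository URL is real, but the exact flags Fedora uses are not confirmed here, and the timing micro-benchmark is illustrative only.

```shell
# Sketch: build CPython with aggressive optimization, as the episode
# describes Fedora doing. -O3 enables heavy function inlining;
# --enable-optimizations additionally turns on profile-guided
# optimization, which is a separate mechanism from -O3.
git clone https://github.com/python/cpython.git
cd cpython
./configure CFLAGS="-O3" --enable-optimizations
make -j"$(nproc)"

# Crude before/after comparison against the system interpreter:
python3 -m timeit "sum(x * x for x in range(10_000))"
./python -m timeit "sum(x * x for x in range(10_000))"
```

Note that any speedup measured this way conflates -O3 with profile-guided optimization; to isolate the flag, build once with `CFLAGS="-O2"` and once with `CFLAGS="-O3"` and compare.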

738: Little Scripts: Coding for your Co-workers
Syntax - Tasty Web Development Treats

Process is important. This show is dedicated to examples of non-developer tasks that can be improved by coding scripts. Join Scott and Wes for a deep dive into automation magic.

Show Notes:
00:00 Welcome to Syntax!
02:11 Brought to you by Sentry.io.
03:02 FFmpeg, a tool fo ...

172: Transformers and Large Language Models
Programming Throwdown

Patrick and Jason explain transformers and large language models from the ground up. They cover attention, encoders and decoders, self-supervised learning, RLHF, and the key architectural ideas that made modern LLMs possible.