What data transformation library should I use? Pandas vs Dask vs Ray vs Modin vs Rapids (Ep. 112)

Up next

Why AI Researchers Are Suddenly Obsessed With Whirlpools (Ep. 297) [RB]

VortexNet uses actual whirlpools to build neural networks. Seriously. By borrowing equations from fluid dynamics, this new architecture might solve deep learning's toughest problems—from vanishing gradients to long-range dependencies. Today we explain how vortex shedding, the Str ...

AGI: The Dream We Should Never Reach (Ep. 296)

Also on YouTube. Two AI experts who actually love the technology explain why chasing AGI might be the worst thing for AI's future—and why the current hype cycle could kill the field we're trying to save. Want to dive deeper? Head to datascienceathome.com for detailed show notes, c ...

Recommended Episodes

Massively Parallel Data Processing In Python Without The Effort Using Bodo
Data Engineering Podcast

Summary

Python has become the de facto language for working with data. That has brought with it a number of challenges having to do with the speed and scalability of working with large volumes of information. There have been many ...


#454: Data Pipelines with Dagster
Talk Python To Me

See the full show notes for this episode on the website at talkpython.fm/454 

Analyze Massive Data At Interactive Speeds With The Power Of Bitmaps Using FeatureBase
Data Engineering Podcast

Summary

The most expensive part of working with massive data sets is the work of retrieving and processing the files that contain the raw information. FeatureBase (formerly Pilosa) avoids that overhead by converting the data int ...


Building The Materialize Engine For Interactive Streaming Analytics In SQL
Data Engineering Podcast

Summary

Transactional databases used in applications are optimized for fast reads and writes with relatively simple queries on a small number of records. Data warehouses are optimized for batched writes and complex analytical qu ...
