
nevi-me / rust-dataframe


Update: I started working on an experimental Rust Dataframe in January, but due to work commitments, I’ve been unable to work on it regularly. The project is still in its infancy, and is not abandoned. I’m writing this update mostly for people who might be interested in helping out, or in following along with the experiment.

I haven’t had a chance to post updates on the experiment, but now that we’re home-bound due to the lockdown in South Africa, this is a good chance to do so.

Overall Goal

In the past 3 years, most of my work has involved bringing my own infra, which is often my laptop. I use Apache Spark (PySpark) and Pandas regularly, and I also write a lot of Rust. I started thinking about what a non-distributed dataframe library could look like in Rust, mainly to be my daily driver for my kind of work.

My current goals are to create a dataframe library that:

    can work on my laptop, and be as fast as Spark or Dask
    I can use in Rust and in Python (and other languages) for exploratory purposes

The dataframe is built on top of Apache Arrow, which is an in-memory columnar data format. There are ample resources online that explain what Arrow is and does, so allow me not to digress.

Arrow is optimized to avoid common performance pitfalls like data (de)serialization, and data is laid out in a compute-optimized structure. Using Arrow also makes it easier for us to interoperate with other libraries in the future (e.g. converting to/from Pandas dataframes, writing UDFs in Python / JavaScript, etc.).

An Arrow equivalent of a table is the RecordBatch. A RecordBatch has columns made up of equal-length Arrays, and a schema describing those columns.

The dataframe itself is made up of columns which contain chunks of arrays. I have adapted this from the Arrow C++ library, as this structure makes it easier to operate on a single column across all record batches without needing to iterate on the batches of the dataframe.
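To make this concrete, here is a minimal sketch of building a RecordBatch with the Rust arrow crate (the column names and values are made up for illustration, and exact module paths can differ between arrow releases):

```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, Float64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

fn main() {
    // A schema describing the columns of the batch.
    let schema = Arc::new(Schema::new(vec![
        Field::new("city", DataType::Utf8, false),
        Field::new("population_millions", DataType::Float64, false),
    ]));

    // Equal-length arrays, one per column (example values only).
    let cities: ArrayRef = Arc::new(StringArray::from(vec!["Johannesburg", "Cape Town"]));
    let populations: ArrayRef = Arc::new(Float64Array::from(vec![5.6, 4.6]));

    // The RecordBatch ties the arrays to the schema; construction fails if
    // the arrays have unequal lengths or don't match the schema.
    let batch = RecordBatch::try_new(schema, vec![cities, populations])
        .expect("arrays should match the schema");

    println!("{} rows x {} columns", batch.num_rows(), batch.num_columns());
}
```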

IO Capabilities

Format       Read             Write
CSV          yes              yes
JSON         yes (limited)    no
Parquet      yes              no (in progress)
Arrow IPC    yes              yes

Arrow IPC (Inter-Process Communication) is a streaming and file format that allows different Arrow implementations to share data with each other. When more IO formats / connections are supported, we get to benefit from them.

I’ve also spent a bit of time working on SQL support (with PostgreSQL as a start). I recently completed a PostgreSQL binary reader, and I plan on benchmarking it against a row-based approach. If it’s performant enough, I’d like to split it out into its own crate / library, and add more SQL variants.

I recently needed to extract data from MongoDB to CSV, so I wrote a stand-alone MongoDB connector. The connector uses the aggregation framework to pull batches of documents, which it then converts to Arrow data. The Arrow data can then be converted to other formats which Arrow supports, such as Parquet or CSV. I haven’t added schema inference and write support yet, but they’re on my TODO list. If I end up not using it in the dataframe, other people could still benefit from such a library.
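As a rough illustration of the CSV path, this is approximately what reading a file into RecordBatches looks like with the arrow crate’s CSV reader. The builder methods have been renamed across arrow releases (for example has_header vs with_header), and the file path is made up, so treat this as a sketch rather than the exact code in the repository:

```rust
use std::fs::File;

use arrow::csv::ReaderBuilder;

fn main() {
    // Hypothetical input file.
    let file = File::open("cities.csv").expect("file should exist");

    // Build a reader that infers the schema from the first 100 records.
    let reader = ReaderBuilder::new()
        .has_header(true)
        .infer_schema(Some(100))
        .build(file)
        .expect("failed to create the CSV reader");

    // The reader yields Arrow RecordBatches.
    for batch in reader {
        let batch = batch.expect("failed to read a record batch");
        println!("read {} rows", batch.num_rows());
    }
}
```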

Compute

My goals for compute in the dataframe are to support most of what Apache Spark supports. It might not be all functions, but I want to cover the various categories (scalar and array functions, aggregations, and window functions). I haven’t thought of how user-defined functions could work yet, but Andy Grove recently submitted a
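For a flavour of the scalar / array category, the arrow crate already ships compute kernels that operate on whole arrays; a dataframe expression would ultimately dispatch to kernels like the one below (this uses an existing Arrow arithmetic kernel rather than anything specific to this project, and the kernel’s module path has moved between arrow releases):

```rust
use arrow::array::Float64Array;
use arrow::compute::kernels::arithmetic::add;

fn main() {
    let a = Float64Array::from(vec![1.0, 2.0, 3.0]);
    let b = Float64Array::from(vec![10.0, 20.0, 30.0]);

    // Element-wise addition over whole arrays: the kind of kernel a
    // scalar expression in the dataframe would call into.
    let sum = add(&a, &b).expect("arrays must have the same length");
    assert_eq!(sum.value(2), 33.0);
}
```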

Lazy vs Eager Evaluation

When I started the experiment, I wanted to create a POC to see if it’s feasible to create an Arrow-backed dataframe in Rust. This used eager evaluation (expressions were calculated immediately, without any planning or optimization). After that, I started exploring how to support lazy evaluation.

The repository is currently an in-flux mess because of that, but I see light at the end of the tunnel.

The idea is to buffer operations, and only perform calculations when we need to display or save data (the same way other deferred-computation libraries work). This is the really fun part, because of the possibilities for optimizing computations.

I recently started a draft of this optimization, but I’ll write more about it in a separate update. There is an abstraction (so far called a 'LazyFrame', lol) which represents an output dataframe and the expressions being applied to it.
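To illustrate what buffering operations could look like, here is a hypothetical sketch; the Expression and LazyFrame types and methods below are illustrative only, not the repository’s actual API. Transformations only append to a plan, and nothing is computed until an explicit collect-style call:

```rust
/// A hypothetical buffered operation; a real expression type would carry
/// Arrow schemas, typed column references, and so on.
#[derive(Debug, Clone)]
enum Expression {
    ReadCsv { path: String },
    Filter { predicate: String },
    Select { columns: Vec<String> },
}

/// A hypothetical LazyFrame: it only records expressions.
#[derive(Debug, Default)]
struct LazyFrame {
    plan: Vec<Expression>,
}

impl LazyFrame {
    fn read_csv(path: &str) -> Self {
        Self { plan: vec![Expression::ReadCsv { path: path.to_string() }] }
    }

    fn filter(mut self, predicate: &str) -> Self {
        self.plan.push(Expression::Filter { predicate: predicate.to_string() });
        self
    }

    fn select(mut self, columns: &[&str]) -> Self {
        self.plan.push(Expression::Select {
            columns: columns.iter().map(|c| c.to_string()).collect(),
        });
        self
    }

    /// Only here would the plan be optimized and evaluated against Arrow data.
    fn collect(self) {
        println!("evaluating plan: {:#?}", self.plan);
    }
}

fn main() {
    LazyFrame::read_csv("cities.csv")
        .filter("population_millions > 1.0")
        .select(&["city"])
        .collect();
}
```

The nice property of this shape is that collect() sees the whole plan at once, which is where optimizations such as merging filters or pushing down projections could happen.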

How will bindings work?

Rust does not have a runtime, and lazily evaluated compute tends to be reproducible without maintaining state. I'm thinking that the Rust library would be stateless, and a Python / JS binding would keep track of the LazyFrame and its expressions (such as "read this SQL table, then perform these computations"), and then we could invoke transformations when we need to display or save data.

This would make the dataframe usable in Jupyter notebooks and equivalents.
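Here is a sketch of how that split could look with PyO3 (the choice of PyO3 is my assumption, and the LazyFrame here is the same hypothetical type as above, storing its plan as plain strings for brevity): Python holds the LazyFrame object and its buffered expressions, and the Rust side only does work when collect is called.

```rust
use pyo3::prelude::*;

/// A hypothetical Python-facing LazyFrame that only stores buffered
/// expressions; no data is read or computed until collect().
#[pyclass]
struct LazyFrame {
    plan: Vec<String>,
}

#[pymethods]
impl LazyFrame {
    #[new]
    fn new(source: String) -> Self {
        Self { plan: vec![format!("read: {source}")] }
    }

    /// Buffer a filter expression without evaluating it.
    fn filter(&mut self, predicate: String) {
        self.plan.push(format!("filter: {predicate}"));
    }

    /// Only here would the stateless Rust core evaluate the plan.
    fn collect(&self) -> String {
        self.plan.join(" -> ")
    }
}

/// Module definition; the exact signature differs between PyO3 releases.
#[pymodule]
fn rust_dataframe(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_class::<LazyFrame>()?;
    Ok(())
}
```

From a notebook this would read roughly as lf = LazyFrame("cities.csv"); lf.filter("population > 1"); lf.collect(), with all the work deferred to the last call.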

Performance

The short answer is "I don't know how it performs yet." I'll write benchmarks in the coming weeks / months, especially as we support more functionality.
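When I get to the benchmarks, one option (my assumption, nothing is settled) is the criterion crate; a skeleton would look like the following, with the placeholder sum swapped out for a real dataframe operation such as a CSV scan plus an aggregation:

```rust
// benches/compute.rs: a minimal criterion skeleton. The closure body is a
// placeholder; a real benchmark would run a dataframe operation instead.
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn sum_column(values: &[f64]) -> f64 {
    values.iter().sum()
}

fn benchmark_sum(c: &mut Criterion) {
    let values: Vec<f64> = (0..1_000_000).map(|i| i as f64).collect();
    c.bench_function("sum 1M f64 values", |b| {
        b.iter(|| sum_column(black_box(&values)))
    });
}

criterion_group!(benches, benchmark_sum);
criterion_main!(benches);
```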

Can the Library be Used?

If you’d like to experiment, then yes; otherwise there are Rust-based alternatives that will be more stable and move at a faster pace.


There is also a library whose summary is that it allows more optimized compute of Arrow data, as opposed to hand-rolling optimizations. It’s written in C++, but there’s interest in creating Rust bindings, so perhaps we could use it for compute in future.

What do you think?
