Marc Paré
RVIT Co.

Persistent Efficiency

Scaling a sensor startup's analytics to 3 million requests per day.

Sub-metering of a building's energy use is a hard problem that no company has really cracked. The Persistent Efficiency team from Berkeley builds a novel low-cost sensor that may be the first to do it.

Their Power Patch sensor is deployed in arrays on breaker panels around the country, collecting vast amounts of high-fidelity data on how a building uses energy.

I built their cloud backend from the ground up. The system ingests sensor data, transforms it with novel algorithms, and visualizes it for end users.
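
A minimal sketch of that kind of ingest path, assuming Flask for the web tier (the actual framework isn't specified here); the route, queue URL, and payload shape are illustrative, not the production values:

    import json

    import boto3
    from flask import Flask, request

    app = Flask(__name__)
    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/readings"  # illustrative

    @app.route("/readings", methods=["POST"])
    def ingest_readings():
        # Accept a batch of sensor readings and hand it to SQS so the web
        # dyno stays fast; worker dynos transform and store the data later.
        batch = request.get_json(force=True)
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(batch))
        return "", 202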

I also built a data pipeline on top of MapReduce so that improved algorithms could be re-run against the entire archive of customer data.
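
A sketch of what one such re-run job can look like using mrjob, a Python library that runs jobs on Elastic MapReduce; the archived line format and the per-sensor aggregation are stand-ins for the real algorithms:

    from mrjob.job import MRJob

    class ReprocessArchive(MRJob):
        """Re-run an improved algorithm over archived readings in S3."""

        def mapper(self, _, line):
            # Assumed archive layout: sensor_id,timestamp,watts per line.
            sensor_id, timestamp, watts = line.split(",")
            yield sensor_id, (timestamp, float(watts))

        def reducer(self, sensor_id, readings):
            # Placeholder aggregation; the real jobs applied the team's
            # updated algorithms to each sensor's full history.
            yield sensor_id, sum(watts for _, watts in readings)

    if __name__ == "__main__":
        ReprocessArchive.run()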

My work allowed Persistent Efficiency to ship analytics to customers as fast as they could install sensors and continuously improve a suite of algorithms for deriving meaning from sensor data.

Tools: Python, Heroku, Redis, Postgres, Elastic MapReduce, S3, DynamoDB, SQS

By the numbers:

170 GB of data processed per day
3 million requests per day
20 TB of data collected
25 cloud dynos on Heroku

Enabling User Experience

Power Patch installations produce large collections of metrics. I wrote the API to power the customer front-end for this data set, allowing the team's front-end developer to iterate on user experience. I optimized complex SQL queries to make visualizations fast and interactive.
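
A sketch of the shape of such an endpoint, assuming Flask and psycopg2; the table, columns, and connection string are hypothetical:

    import psycopg2
    from flask import Flask, jsonify

    app = Flask(__name__)

    # One indexed, pre-aggregated query per chart keeps visualizations
    # interactive instead of shipping raw readings to the browser.
    HOURLY_USAGE = """
        SELECT date_trunc('hour', recorded_at) AS hour,
               sum(watt_hours) AS usage
        FROM readings
        WHERE panel_id = %s AND recorded_at >= now() - interval '7 days'
        GROUP BY 1
        ORDER BY 1;
    """

    @app.route("/panels/<int:panel_id>/usage")
    def hourly_usage(panel_id):
        with psycopg2.connect("dbname=example") as conn, conn.cursor() as cur:
            cur.execute(HOURLY_USAGE, (panel_id,))
            rows = cur.fetchall()
        return jsonify([{"hour": h.isoformat(), "usage": float(u)} for h, u in rows])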

An IoT Ops Dashboard

Operating a fleet of IoT devices poses some unique ops challenges. This custom dashboard brings together key system health metrics that can be understood at a glance. Making this dashboard performant required SQL tuning and judicious use of Redis.
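
A minimal sketch of that caching pattern with redis-py; fleet_health_query() is a hypothetical stand-in for the tuned SQL:

    import json

    import redis

    r = redis.Redis()
    CACHE_KEY = "ops:fleet_health"
    TTL_SECONDS = 60  # the dashboard tolerates slightly stale numbers

    def fleet_health_query():
        # Hypothetical stand-in for the expensive, tuned Postgres query.
        return {"sensors_reporting": 0, "sensors_silent": 0}

    def fleet_health():
        cached = r.get(CACHE_KEY)
        if cached is not None:
            return json.loads(cached)
        # Cache miss: run the query once and share the result with every
        # dashboard viewer for the next minute.
        result = fleet_health_query()
        r.setex(CACHE_KEY, TTL_SECONDS, json.dumps(result))
        return result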

Data Tools for Product Development

The Power Patches uncover a panoply of strange circuit behavior. I built tools to help the team's electrical engineers explore this interesting data set.
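
One flavor of tool along these lines, sketched with pandas; the CSV layout and the z-score cutoff are assumptions, not the team's actual method:

    import pandas as pd

    def flag_odd_circuits(path, threshold=4.0):
        """Surface circuits whose draw deviates wildly from their own norm."""
        df = pd.read_csv(path, parse_dates=["recorded_at"])  # assumed layout
        stats = df.groupby("circuit_id")["watts"].agg(["mean", "std"])
        df = df.join(stats, on="circuit_id")
        df["zscore"] = (df["watts"] - df["mean"]) / df["std"]
        # Hand engineers only the strange readings, not the whole archive.
        return df[df["zscore"].abs() > threshold].sort_values("zscore")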

Panel Configuration UI

I designed the UI and data model for a complex problem: capturing a panel's circuit and sensor array layout at install time. The team used it successfully for every installation.
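
A sketch of one way such a data model can be laid out, using plain dataclasses; the exact fields the team tracked are assumptions:

    from dataclasses import dataclass, field

    @dataclass
    class Circuit:
        position: int      # slot on the breaker panel
        breaker_amps: int
        label: str         # e.g. "HVAC-2" or "kitchen outlets"

    @dataclass
    class SensorAssignment:
        sensor_serial: str
        circuit_position: int  # which circuit this Power Patch watches

    @dataclass
    class PanelConfig:
        panel_id: str
        circuits: list[Circuit] = field(default_factory=list)
        sensors: list[SensorAssignment] = field(default_factory=list)

        def unmonitored_circuits(self):
            # Install-time sanity check: circuits with no sensor assigned.
            covered = {s.circuit_position for s in self.sensors}
            return [c for c in self.circuits if c.position not in covered]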

Scaling Data Science

I collaborated heavily with the team's data scientist, turning R&D algorithms into production-scale implementations. I built both real-time metrics and MapReduce jobs, along with the supporting verification and validation infrastructure. The plot here is from a validation script run against one of our first big MapReduce jobs. It worked great!
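
The validation idea, sketched: compare a MapReduce job's per-sensor output against the real-time pipeline's numbers and flag disagreements. The tolerance and sensor ids below are made up for illustration:

    TOLERANCE = 0.01  # allow 1% relative drift between the two pipelines

    def validate(batch, realtime):
        """Return (sensor, batch, realtime) triples that disagree."""
        mismatches = []
        for sensor_id, batch_value in batch.items():
            rt_value = realtime.get(sensor_id)
            if rt_value is None or abs(batch_value - rt_value) > TOLERANCE * abs(batch_value):
                mismatches.append((sensor_id, batch_value, rt_value))
        return mismatches

    # Made-up example: "pp-07" drifted past tolerance, "pp-08" agrees.
    print(validate({"pp-07": 1200.0, "pp-08": 310.0},
                   {"pp-07": 1100.0, "pp-08": 310.2}))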