Building Sessionization Pipeline at Scale with Databricks Delta


Comcast has made a concerted effort to transform itself from a cable/ISP company into a technology company. Data-driven decision making is at the heart of this transformation: we use data to understand how customers interact with our products, and we see data as the most truthful representation of the voice of our customer. My team, the Product Analytics & Behavior Science (PABS) team, plays the role of interpreter, transforming data into consumable insights.

The X1 entertainment operating system is one of the largest video streaming platforms in the world, and our customers consume more than a billion hours of content a week on X1. Our team ingests X1 telemetry at a rate of more than 25 TB per day and uses this data to inform our product team members about the performance of, and engagement with, the platform. We also use it to research customer behaviors, helping product teams identify areas of opportunity in our products, ranging from fixing bugs to creating new features.

To power these insights, we need reliable real-time data pipelines, and our data scientists and data engineers must be able to quickly and efficiently develop and commit new code so we can measure the new features the product teams are building. To do this at our scale, we have been using Databricks and Databricks Delta to gain operational efficiency, optimization, and cost savings. Some of the Delta features we took advantage of to achieve these goals are the following (a minimal usage sketch appears after the list):

· Distributed writes to S3, which essentially eliminated 500 errors
· A transaction log on S3 with fast reads and ACID transactions, massively increasing S3 scans/reads and enabling consistent views of the bucket/table
· VACUUM
· OPTIMIZE, which allowed us to reduce a 640-node job to 40 nodes and massively increased the efficiency of our clusters as well as of our data scientists and engineers
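To illustrate the features named above, here is a minimal PySpark sketch, not the production pipeline described in this post. The table name, S3 paths, and retention window are hypothetical; the commands shown (Delta writes, OPTIMIZE, VACUUM) are standard Databricks Delta operations.

```python
# Minimal sketch of the Delta features discussed above (hypothetical paths/table names).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-sessionization-sketch").getOrCreate()

# Transactional, distributed writes to S3: Delta's transaction log provides ACID
# guarantees and a consistent view of the table for concurrent readers and writers.
events = spark.read.json("s3://example-bucket/raw/x1-telemetry/")  # hypothetical input path
(events.write
       .format("delta")
       .mode("append")
       .save("s3://example-bucket/delta/x1_sessions"))             # hypothetical Delta path

# Register the Delta path as a table so maintenance commands can target it by name.
spark.sql("""
  CREATE TABLE IF NOT EXISTS x1_sessions
  USING DELTA
  LOCATION 's3://example-bucket/delta/x1_sessions'
""")

# OPTIMIZE compacts many small files into fewer, larger ones -- the feature the post
# credits with shrinking a 640-node job down to 40 nodes.
spark.sql("OPTIMIZE x1_sessions")

# VACUUM removes data files no longer referenced by the transaction log
# (retaining 7 days of history here, the Databricks default).
spark.sql("VACUUM x1_sessions RETAIN 168 HOURS")
```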