Data Engineer

tech co.  

Date Posted

10/07/20

Job Role

Data & Data Management, Engineer

Organization Type

Technology Provider

Location

Durham, NC; Salt Lake City, UT; Austin, TX; Sacramento, CA; Washington, DC; Chicago, IL; or New York City, NY

Salary

Not listed

How to Apply

Position Summary

tech co. is a platform for advocates to create, power, and cultivate communities at the local and national levels. We provide mobilization and data tools to non-profits, issue advocacy groups, electoral groups, and corporate social impact teams. tech co. is building on the foundation of existing products, including Organizer and Crowdskout. We are building capabilities that live beyond a 4-year election cycle and outside of a traditional “Red/Blue” partisan paradigm.

tech co. is looking for a Data Engineer to build out our expanding data infrastructure. Crowdskout’s product has most recently been centered in the CRM space, but we are looking to change that. Currently, we process millions of data points through multiple data pipelines that feed a suite of databases. We are preparing for 10x growth in both the volume of data we process and the speed at which that data can be made available and actionable. To accomplish this, we are looking for someone who can build out highly scalable data solutions.

If you are highly motivated, super passionate about democracy, and want to join a close-knit team that is looking to build great things together, tech co. may be for you. This is a full-time position in Durham, NC; Salt Lake City, UT; Austin, TX; Sacramento, CA; Washington, DC; Chicago, IL; or New York City, NY.

Responsibilities

  • Create highly scalable and robust data solutions for use by our products and clients
  • Design, build, and maintain multiple performant data pipelines and ETL/ELT flows against massive datasets (a simplified sketch of one such flow follows this list)
  • Ensure data accuracy and reliability
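
For illustration only, here is a minimal sketch of the kind of batch ETL flow described above, using just the Python standard library. The input file, table name, and schema are hypothetical and not part of this posting; a production flow would target a real warehouse rather than SQLite.

    import csv
    import sqlite3
    from datetime import datetime, timezone

    # Hypothetical example: extract raw supporter events from a CSV export,
    # normalize them, and load them into a local SQLite table.

    def extract(path: str):
        """Read raw rows from a CSV file (assumed columns: email, event, ts)."""
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        """Normalize emails, fill missing timestamps, and drop invalid rows."""
        for row in rows:
            email = row.get("email", "").strip().lower()
            if not email or "@" not in email:
                continue
            yield {
                "email": email,
                "event": row.get("event", "unknown"),
                "ts": row.get("ts") or datetime.now(timezone.utc).isoformat(),
            }

    def load(rows, db_path="events.db"):
        """Insert normalized rows into SQLite inside a single transaction."""
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS events (email TEXT, event TEXT, ts TEXT)")
        with conn:  # commits on success, rolls back on error
            conn.executemany(
                "INSERT INTO events (email, event, ts) VALUES (:email, :event, :ts)",
                rows,
            )
        conn.close()

    if __name__ == "__main__":
        load(transform(extract("raw_events.csv")))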

Requirements

  • Strong SQL experience (any flavor)
  • Experience building large-scale streaming and batch data pipelines (e.g. Python, Java)
  • Experience building out data warehouse and/or data lake infrastructure
  • Experience with data modeling and physical database design
  • Experience using Big Data technologies (e.g. Spark, Presto, Kafka); an illustrative Spark sketch follows this list
  • Experience with SQL & NoSQL databases (e.g. MySQL, MongoDB)
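
Purely as an illustration of the Spark bullet above, a small PySpark batch job might look like the following. The paths, column names, and aggregation are assumptions for the sketch, not project specifics.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Illustrative PySpark batch job: roll up hypothetical engagement events by day.
    spark = SparkSession.builder.appName("daily-engagement-rollup").getOrCreate()

    # Hypothetical input path and schema (columns: email, event, ts).
    events = spark.read.json("s3://example-bucket/raw/events/")

    daily = (
        events
        .withColumn("day", F.to_date("ts"))
        .groupBy("day", "event")
        .agg(
            F.countDistinct("email").alias("unique_people"),
            F.count(F.lit(1)).alias("total_events"),
        )
    )

    # Write a partitioned Parquet mart (hypothetical output path).
    daily.write.mode("overwrite").partitionBy("day").parquet(
        "s3://example-bucket/marts/daily_engagement/"
    )

    spark.stop()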

Extras

  • AWS data stack (e.g. Kinesis, Glue, RDS, Athena, Redshift)
  • Software development using PHP
  • Graph database experience
  • Workflow management engine experience
  • Knowledge of data security best practices (e.g. data encryption, tokenization, masking); a toy masking example follows this list
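
As a toy illustration of the masking and tokenization practices named above (not a production design: the key below is a placeholder, and a real system would use managed keys in a KMS or vault):

    import hashlib
    import hmac

    # Toy field-level masking and deterministic tokenization.
    # SECRET_KEY is a hypothetical placeholder; never hard-code real keys.
    SECRET_KEY = b"replace-with-a-managed-secret"

    def mask_email(email: str) -> str:
        """Keep only the first character of the local part, e.g. j***@example.org."""
        local, _, domain = email.partition("@")
        if not local or not domain:
            return "***"
        return f"{local[0]}***@{domain}"

    def tokenize(value: str) -> str:
        """Keyed, deterministic token: the same input always maps to the same token."""
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    if __name__ == "__main__":
        print(mask_email("jane.doe@example.org"))  # j***@example.org
        print(tokenize("jane.doe@example.org"))    # 64-character hex token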
