Data Engineer

Palmetto Clean Technology

Software Engineering, Data Science
United States
Posted on Friday, April 12, 2024

Company Description

Recognized by Forbes as one of the fastest-growing private companies in the United States, Palmetto believes that choosing to source clean energy from renewable resources like solar power should be a right, not a privilege. To that end, we connect homeowners with renewable energy options such as solar power and energy storage systems. Through our marketplace business model, we empower solar sales professionals and solar installation companies with access to our proprietary design platform, financing, customer management system, logistics, and project management. Our #1 focus is a phenomenal experience for our customers and partners, as evidenced by our industry-leading Net Promoter Score.

Our employees are our most valuable resource. Palmetto is a VC-backed high-growth company with a promote-from-within culture for talent development. We offer excellent benefits such as unlimited vacation/PTO, medical, dental, and vision coverage, parental leave, and retirement plans.

Summary of Role

As a Data Engineer at Palmetto, you will be responsible for building, maintaining, and enhancing our data platform: optimizing existing infrastructure and contributing new components that enable the organization to scale. You will work closely with other data practitioners, engineers, and business stakeholders to develop the company's data strategy and the tools and processes that support it. You will own data quality and usability, and you will see the impact of your work across the business every day.

This position also provides the opportunity to flex outside the traditional data engineer role, contributing directly to analytics and machine learning projects, including product experimentation, predictive analytics, and MLOps.

Key Responsibilities

  • Design, develop, and maintain scalable and reliable data pipelines and ETL/ELT processes using tools such as Google Cloud, Looker, dbt, Fivetran, Stitch, Snowflake, Segment, and Snowplow.
  • Contribute to the data model, enabling analysts, scientists, and business users to reliably and efficiently access data at scale.
  • Implement quality control measures and proactively identify and address any data-related issues, ensuring data integrity and enhancing data trust across the organization.
  • Continuously improve our performance, stack, and organization by staying updated with the latest industry tools and best practices.
  • Understand the evolving requirements of the business and expand the capabilities of the data platform accordingly.

Qualifications

  • Bachelor's degree in Computer Science, Engineering, or a related field (or relevant experience)
  • 2+ years of experience as a data engineer, software engineer, or analytics engineer
  • Deep, demonstrable experience with SQL and analytical data warehouses (Snowflake preferred)
  • Strong understanding of data modeling, data warehousing, and data integration concepts
  • Familiarity with modern data tools and technologies, including Airflow, dbt, Fivetran, Stitch, and Looker.
  • Experience using Python, Java, or Scala for API access and data processing.
  • Experience working with public clouds (GCP preferred) and Infrastructure as Code technologies (e.g., Terraform, Pulumi, CloudFormation).
  • Strong written and spoken communication skills, with the ability to explain technical concepts to both technical and non-technical audiences.
  • Strong opinions weakly held. You are knowledgeable in your domain and able to move quickly based on that knowledge, but humble enough to change your mind.
  • Insatiable curiosity and a love of learning
  • Excellent problem-solving skills
  • Flexible and excited to work in a fast-paced, changing environment