Software Engineer, Data Foundations
What you will do in a Software Engineer role at Kyruus:
- Collaborate with teams across the organization, including product managers, data engineers, and business leaders, to translate requirements into software solutions that process large amounts of data.
- Develop new ways to ensure ETL and data processes are running efficiently.
- Write clean, maintainable, and reusable code that adheres to best practices and coding standards.
- Conduct thorough code reviews and provide constructive feedback to ensure high-quality codebase.
- Optimize software performance and ensure scalability and reliability.
- Stay up-to-date with the latest trends and advancements in data processing and ETL development and apply them to enhance our products.
- Kyruus will bring you through an onboarding process that is both structured and self-guided, designed to enable connection and productivity as you learn more about our company, functions, and products. Additionally, we have a culture of feedback, inclusive of our performance review process, that provides you with the coaching, resources, and opportunities to help you learn and grow with us.
- Kyruuvians in the Software Engineer role can follow a linear career path to a Software Engineer II position. From there, you could move into a Senior Software Engineer role or explore a management position within the Engineering vertical.
- Kyruus also loves to see internal transfers. If a linear career path is not what you’re looking for, you can work with your manager and HR to explore lateral moves to other parts of the organization as you continue to grow with us.
What you will bring:
- 4+ years of experience with Java (required).
- 2+ years of experience with Python (required).
- Knowledge and understanding of ETL processes (required).
- 2+ years of experience using relational databases and deep knowledge of SQL (required).
- Knowledge and understanding of Git (required).
- 1+ year of experience with various GCP technologies:
  - Google Dataflow (Apache Beam SDK), or equivalent Hadoop technologies.
  - BigQuery, or an equivalent data warehouse technology (Snowflake, Azure DW, Redshift).
  - Cloud Storage buckets (equivalent to S3).
- Experience with Apache Airflow / Google Cloud Composer.
- Knowledge and understanding of Docker, Linux, and virtualization technologies.
- Knowledge and understanding of CI/CD methodologies.
- Experience with various organization/code tools such as Jira, Confluence and GitHub.
- Experience with Infrastructure as Code technologies (Terraform, CloudFormation).
- Experience with observability and logging platforms (e.g., Datadog).