We are looking for data engineers to build and monitor high-quality data
infrastructure that supports data analysis.
- Write clean, scalable, and testable code
- Assemble large, complex data sets that meet functional/non-functional business requirements
- Build the infrastructure required for optimal ETL of data from a wide variety of
sources, using SQL and big data technologies and tools such as Pentaho, Glue, and Airflow
- Build analytics tools that utilize the data pipeline to provide actionable insights to the analytics and data science teams
- Monitor performance and continuously improve the infrastructure
- Follow industry best practices
- Demonstrate excellent analytical and problem-solving skills
- Have hands-on expertise in at least one programming stack, ideally Java or Python
- Showcase expertise in data warehousing, relational database architectures (SQL, MySQL, PostgreSQL), and cloud data warehouses such as Redshift and BigQuery
- Have working knowledge of cloud-based deployments on AWS, Azure, or GCP
- Understand machine learning, NLP, algorithms, and related techniques
- Be comfortable working with Linux and shell scripting
- Have experience streaming data into Elasticsearch for visualization in Kibana
- Have exposure to serverless computing
- Be able to thrive in a fast-paced, quickly evolving tech start-up
- Have 4 to 8 years of experience