Job Description
This is a critical role on our data analytics team, working closely with data analysts and data scientists. You will be responsible for designing, developing, and maintaining scalable data pipelines, and for ensuring data quality and accessibility for analysis. Your expertise will enable our data analytics and consulting teams to deliver impactful insights and data-driven recommendations.
Key Responsibilities
- Data Pipeline Development: Design, build, and maintain efficient and scalable ETL (Extract, Transform, Load) pipelines to process large volumes of data from various sources (see the illustrative sketch after this list)
- Data Modeling: Develop and maintain data models and schemas to support analytics and reporting needs
- Data Integration: Integrate data from multiple sources, ensuring consistency and reliability across the organization
- Data Quality: Implement and monitor data quality checks to ensure accuracy, completeness, and timeliness of data
- Collaboration: Work closely with data analysts, data scientists, consultants and business stakeholders to understand data requirements and deliver solutions that meet their needs
- Optimization: Continuously improve data processing performance and efficiency
- Documentation: Create and maintain comprehensive documentation for data pipelines, models, and processes
- Tools and Technologies: Stay current with industry trends and best practices, and evaluate new tools and technologies to enhance our data infrastructure
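For illustration only, here is a minimal sketch of the kind of ETL pipeline work described above, written as an Apache Airflow DAG (one of the tools listed under Skills & Experience below). This is a hypothetical example, not our actual stack: the DAG name, sample data, and task logic are all placeholders, and it assumes Airflow 2.4+.

```python
# Minimal, hypothetical sketch of a daily ETL pipeline as an Airflow DAG
# (assumes Apache Airflow 2.4+). All names and data are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Stand-in for pulling raw records from a source system.
    return [{"id": 1, "amount": 42.0}]


def transform(**context):
    # Stand-in for cleaning and reshaping the extracted records.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [{**row, "amount_cents": int(row["amount"] * 100)} for row in rows]


def load(**context):
    # Stand-in for writing transformed rows to the warehouse.
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")


with DAG(
    dag_id="example_daily_etl",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

In practice, the extract and load stages would typically use provider hooks or operators for the specific source and warehouse systems (e.g., Postgres, BigQuery) rather than plain Python functions.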
Skills & Experience
- Bachelor's degree with a strong academic record in Computer Science, Engineering, Information Technology, or a related quantitative field. A master's degree is a plus
- 2-3 years of experience in data engineering, analytics engineering, or a related role
- Proficiency in SQL and experience with database management systems (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data pipeline tools (e.g., Apache Airflow, Luigi) and ETL processes
- Strong programming skills in Python or similar languages
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure) and related data services (e.g., Redshift, BigQuery, Databricks)
- Experience with data warehousing solutions (e.g., Snowflake, Amazon Redshift)
- Knowledge of data modeling techniques and best practices
- Excellent problem-solving, analytical, and conceptual skills
- Strong communication and collaboration abilities
- Ability to work independently in an international team and manage multiple tasks in a fast-paced environment
- Attention to detail and a commitment to data quality
- Excellent written and verbal communication skills in English