Find your dream job at Australia's leading startups and VCs

Our exceptional communities of founders and investors are constantly seeking passionate individuals like you to join their teams. Find your fit in the postings below. Just browsing? Sign up to our newsletter to stay up to date on the latest jobs.

Data Engineer @ fintech

Hatch

Data Science
Macquarie Park NSW 2113, Australia · Davao City, Davao del Sur, Philippines · Australia
Posted on Jul 2, 2025

This is a Data Engineer role with Wisr based in Sydney, NSW, AU

We’re on a mission to bring people closer to financial wellness and make a real difference in the world, starting right here in Australia. The way we’re doing it is pretty cool too. We offer smarter, fairer loans that help people kick their goals sooner, a nifty round up tool to help people get out of debt and save even faster, and a dashboard that helps people track and improve their credit scores.

This role will be supporting Wisr’s ongoing effort to modernise and evolve the data platform as we continue to scale and grow. As a key member of the Data Engineering team, you will bring strong data infrastructure and modelling capability, coupled with hands-on experience building data pipelines in a Lambda architecture.

You'll play a crucial role in supporting the development of our data models and pipelines, helping us to build a world-class data platform that serves the entire organisation.

By collaborating closely with various internal stakeholders including your fellow data engineers, Product Managers, Developers and cross-functional teams including Operations and Marketing, you will play a pivotal role in ensuring seamless data integration, processing, and analytics.

What You’ll Do

  • Design and build scalable pipelines and data models that enable insight generation, experimentation and rapid iteration within the Data Squad
  • Own and evolve the data stack supporting product development (e.g. dbt, SQL, Python, orchestration, observability)
  • Ensure high standards of data quality, reliability, and performance, especially in experimental environments
  • Develop and maintain robust documentation, schema management, and lineage tracking to ensure transparency and traceability
  • Form a deep understanding of our data processes and help improve them by implementing best practices
  • Stay on top of failures and issues in our data systems: troubleshoot, propose improvements, and implement fixes

About You

With hands-on experience across modern data architectures, you bring deep expertise in building robust data pipelines, developing scalable data models, and managing data workflows end-to-end. You’re passionate about data and thrive on the challenge of ingesting, transforming, and integrating complex datasets from multiple sources. You take pride in seeing your work translate into real business outcomes.

You are confident working with technologies such as Python, SQL, dbt, and orchestration tools like Airflow. Experience with cloud platforms (AWS, GCP, or Azure), modern data warehouses (e.g. Snowflake, BigQuery, Redshift), and event-driven or streaming data systems (e.g. Kafka, Kinesis) is highly desirable.

You’ll Also Have

  • 3+ years in data engineering or full-stack data roles
  • Tertiary qualifications in Computer Science, Engineering, or a related technical field
  • Demonstrated ability to take ownership of projects, collaborate cross-functionally, and work independently in dynamic environments

Technical Skills

  • Good understanding of SQL and Python for data transformation, scripting, and orchestration
  • Experience with dbt, CI/CD for data, version control, and software engineering best practices
  • Good understanding of data modelling frameworks, including dimensional modelling, data mesh, data vault, and star/snowflake schemas
  • Good understanding of data orchestration / ETL tools such as Apache Airflow and Azure Data Factory
  • Experience with a major cloud data warehouse or lakehouse platform (e.g. Snowflake, Databricks, BigQuery)
  • Familiarity with Docker, Kubernetes (especially AKS), and deploying data services in containerised environments
  • A good grasp of data quality, governance, and observability principles
  • Experience enabling real-time analytics and streaming data architectures

Desirable / Nice to Have

  • Experience in or strong interest in the FinTech or financial services domain
  • Experience building ML / MLOps pipelines would be a plus
  • Certifications such as Azure Data Engineer Associate
  • Exposure to domain-driven data architecture or data mesh implementations
  • Experience with real-time data pipelines using tools like Kafka, Flink, or Azure Stream Analytics

Key Responsibilities

  • 🚀 Designing and building data pipelines
  • 🛠️ Owning and evolving the data stack
  • 🔍 Troubleshooting data systems

Key Strengths

  • 🔧 Data pipeline development
  • 📊 Data modeling
  • 🤝 Collaboration
  • ☁️ Cloud platforms
  • 🤖 Machine Learning pipelines
  • ⏱️ Real-time data processing

Why is Wisr partnering with Hatch on this role? Hatch exists to level the playing field for people as they discover a career that’s right for them, so when you apply you have the chance to show more than just your resume.

A Final Note: This is a role with Wisr, not with Hatch.