Data Engineer
Hatch
Software Engineering, Data Science
Sydney, NSW, Australia
Posted on Jul 2, 2025
This is a Data Engineer role with Wisr based in Sydney, NSW, Australia
About The Job
More about the Data Engineer role at Wisr
This role will be supporting Wisr’s ongoing effort to modernise and evolve the data platform as we continue to scale and grow. As a key member of the Data Engineering team, you will bring strong data infrastructure and modelling capability, coupled with hands-on experience building data pipelines in a Lambda architecture.
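For readers less familiar with the Lambda architecture mentioned above, it splits processing into a batch layer over historical data and a speed layer over recent events, merged at query time. A minimal toy sketch (the data and function name are illustrative only, not Wisr's systems):

```python
# Toy illustration of the batch/speed split in a Lambda architecture.
# Views and keys are hypothetical placeholder data.

batch_view = {"user_1": 100}             # precomputed from historical data
speed_view = {"user_1": 5, "user_2": 2}  # incremental, from recent events

def serve(key):
    """Merge the batch view with the real-time speed view at query time."""
    return batch_view.get(key, 0) + speed_view.get(key, 0)

# serve("user_1") -> 105  (100 from batch + 5 from speed)
```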
You'll play a crucial role in supporting the development of our data models and pipelines, helping us to build a world-class data platform that serves the entire organisation.
By collaborating closely with various internal stakeholders including your fellow data engineers, Product Managers, Developers and cross-functional teams including Operations and Marketing, you will play a pivotal role in ensuring seamless data integration, processing, and analytics.
What You’ll Do
Design and build scalable pipelines and data models that enable insight generation, experimentation and rapid iteration within the Data Squad
Own and evolve the data stack supporting product development (e.g. dbt, SQL, Python, orchestration, observability)
Ensure high standards of data quality, reliability, and performance, especially in experimental environments
Develop and maintain robust documentation, schema management, and lineage tracking to ensure transparency and traceability
Form a deep understanding of our data processes and help improve them by implementing best practices
Stay on top of failures and issues in our data systems: troubleshoot, propose improvements, and implement fixes
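As a flavour of the data-quality work described above, a pipeline stage might validate rows before loading them. A minimal sketch in plain Python; the function name and rules are hypothetical, not Wisr's actual tooling:

```python
# Minimal row-level data-quality check, as might guard a pipeline stage.
# Required fields and sample rows are illustrative only.

def check_rows(rows, required_fields=("id", "amount")):
    """Split rows into valid rows and errors via basic completeness checks."""
    valid, errors = [], []
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if row.get(f) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        else:
            valid.append(row)
    return valid, errors

rows = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": None},  # fails the completeness rule
]
valid, errors = check_rows(rows)
```

In practice checks like this would live in a framework such as dbt tests rather than hand-rolled code, but the principle is the same: catch bad records before they propagate downstream.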
About You
With hands-on experience across modern data architectures, you bring deep expertise in building robust data pipelines, developing scalable data models, and managing data workflows end-to-end. You’re passionate about data and thrive on the challenge of ingesting, transforming, and integrating complex datasets from multiple sources. You take pride in seeing your work translate into real business outcomes.
You are confident working with technologies such as Python, SQL, dbt, and orchestration tools like Airflow. Experience with cloud platforms (AWS, GCP, or Azure), modern data warehouses (e.g. Snowflake, BigQuery, Redshift), and event-driven or streaming data systems (e.g. Kafka, Kinesis) is highly desirable.
You’ll Also Have
3+ years in data engineering or full-stack data roles
Tertiary qualifications in Computer Science, Engineering, or a related technical field
Demonstrated ability to take ownership of projects, collaborate cross-functionally, and work independently in dynamic environments
Technical Skills
Good understanding of SQL and Python for data transformation, scripting, and orchestration
Experience with dbt, CI/CD for data, version control, and software engineering best practices
Good understanding of data modelling frameworks, including dimensional modelling, data mesh, data vault, and star/snowflake schemas
Good understanding of data orchestration / ETL tools such as Apache Airflow, Azure Data Factory, etc.
Experience with a major cloud data warehouse or lakehouse platform (e.g. Snowflake, Databricks, BigQuery)
Familiarity with Docker, Kubernetes (especially AKS), and deploying data services in containerised environments
A good grasp of data quality, governance, and observability principles
Experience enabling real-time analytics and streaming data architectures
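To unpack the dimensional-modelling terminology in the skills list: a star schema pairs a central fact table with surrounding dimension tables, and analytics queries aggregate facts by dimension attributes. A toy sketch with hypothetical tables (not Wisr data):

```python
# Toy star schema: a fact table of loan events joined to a date dimension.
# Table contents and names are illustrative only.

dim_date = {
    "2025-07-01": {"month": "2025-07", "weekday": "Tue"},
    "2025-07-02": {"month": "2025-07", "weekday": "Wed"},
}

fact_loans = [
    {"date_key": "2025-07-01", "amount": 10_000},
    {"date_key": "2025-07-02", "amount": 25_000},
    {"date_key": "2025-07-02", "amount": 5_000},
]

def total_by_month(facts, dates):
    """Aggregate fact rows by an attribute resolved via the dimension table."""
    totals = {}
    for row in facts:
        month = dates[row["date_key"]]["month"]
        totals[month] = totals.get(month, 0) + row["amount"]
    return totals

# total_by_month(fact_loans, dim_date) -> {"2025-07": 40000}
```

In a real warehouse this is the join-and-group-by that dimensional models are designed to make fast and readable.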
Desirable / Nice to Have
Experience in or strong interest in the FinTech or financial services domain
Experience building ML / MLOps pipelines
Certifications such as Azure Data Engineer Associate
Exposure to domain-driven data architecture or data mesh implementations
Experience with real-time data pipelines using tools like Kafka, Flink, or Azure Stream Analytics
Before we jump into the responsibilities of the role: no matter what you come in knowing, you’ll be learning new things all the time, and the Wisr team will be there to support your growth.
🟢 Please consider applying even if you don't meet 100% of what’s outlined 🟢
Key Responsibilities
- 🚀 Designing and building data pipelines
- 🛠️ Owning and evolving the data stack
- 🔍 Troubleshooting data systems
- 🔧 Data pipeline development
- 📊 Data modeling
- 🤝 Collaboration
- ☁️ Cloud platforms
- 🤖 Machine Learning pipelines
- ⏱️ Real-time data processing
A Final Note: This is a role with Wisr, not with Hatch.