We are looking for a Data Engineer (Kafka) who will play a pivotal role in developing cutting-edge technologies across cloud and on-premises platforms. If you’re passionate about big data, distributed systems, and cloud computing, this role is perfect for you!
What You'll Do:
- Architect, develop, and optimize end-to-end data pipelines and real-time streaming applications using Kafka, PySpark, and DynamoDB (AWS experience preferred).
- Design and implement fault-tolerant, high-availability solutions that meet rigorous performance SLAs.
- Leverage your expertise in containerization, virtualization, and modern cloud platforms (AWS) to craft reliable data infrastructure.
- Collaborate with stakeholders and senior managers to present solutions and drive technical decisions.
- Create innovative proofs of concept and demos showcasing first-of-a-kind solutions for diverse clients.
What We're Looking For:
- 10+ years in IT consulting or technology-focused roles, including hands-on development, testing, and administration of Big Data/Data Lake services.
- Proficiency in Python, Scala, SQL, shell scripting, and industry best practices for software development.
- Solid understanding of software development methodologies (Agile sprints, Kanban, Waterfall), cloud platforms, and distributed systems.
- Proven success in leading teams, mentoring colleagues, and influencing stakeholders to achieve shared goals.
As part of our team, you’ll work with cutting-edge technologies, lead groundbreaking projects, and collaborate with a diverse group of talented individuals.
Ready to take your career to the next level? Apply today!