Pakistan

Sr. Data Engineer AWS

Job Opportunity: Sr. Data Engineer AWS at Fusemachines

A key role at an innovative AI company

Fusemachines, a pioneering artificial intelligence company, is looking for a Sr. Data Engineer AWS. With more than 10 years of experience, the company works to make AI accessible to a wide range of industries through advanced solutions. Its founder, Dr. Sameer Maskey, also an Adjunct Associate Professor at Columbia University, leads a team of more than 400 employees across four countries. This strategic positioning allows Fusemachines to turn challenges into innovations, strengthening the quality of the technology solutions it delivers.

Position overview

This is a fully remote position focused on the Travel & Hospitality industry. The successful candidate will design, build, test and maintain the infrastructure required for data integration and analytics, ensuring data quality and accessibility. They must be proficient in integrating and managing data across different storage systems.

Key responsibilities

  • Design and deployment of scalable, efficient data architectures.
  • Mentoring and guidance of junior data engineers.
  • Collaboration with cross-functional teams to meet data requirements.
  • Evaluation and implementation of new technologies to improve data integration and analysis processes.
  • Ownership of data management tasks, including schema design and performance optimization.

Required skills

To be considered for this position, candidates must have a diverse skill set, including:

  • A degree in Computer Science, Information Systems or Engineering.
  • A minimum of 5 years of data engineering development experience on AWS.
  • Proficiency with technologies such as Python, SQL, PySpark and Redshift.
  • Strong skills in data modelling and database design.
  • Experience with real-time processing systems.

Candidates must also have strong communication skills, so that they can convey complex technical concepts effectively to non-technical stakeholders.

Work environment and company culture

Fusemachines is committed to diversity and inclusion. The company is an equal opportunity employer, which means that all applications will be considered without regard to race, sex, sexual orientation, or any other characteristic protected by law.

Additional information

Expected salary: To be discussed
Location: Islamabad
Publication date: 19 June 2025

To better understand the requirements of this role and to apply, click the following link: Apply now!


📅 Job posting date: Thu, 19 Jun 2025 01:30:00 GMT

🏢 Company: Fusemachines

📍 Location: Islamabad

💼 Job title: Sr. Data Engineer AWS

💶 Proposed salary:

📝 Job description:

About Fusemachines

Fusemachines is a 10+ year old AI company, dedicated to delivering state-of-the-art AI products and solutions to a diverse range of industries. Founded by Sameer Maskey, Ph.D., an Adjunct Associate Professor at Columbia University, our company is on a steadfast mission to democratize AI and harness the power of global AI talent from underserved communities. With a robust presence in four countries and a dedicated team of over 400 full-time employees, we are committed to fostering AI transformation journeys for businesses worldwide. At Fusemachines, we not only bridge the gap between AI advancement and its global impact but also strive to deliver the most advanced technology solutions to the world.

About the role

This is a remote full-time contractual position, working in the Travel & Hospitality industry, responsible for designing, building, testing, optimizing and maintaining the infrastructure and code required for data integration, storage, processing, pipelines and analytics (BI, visualization and advanced analytics) from ingestion to consumption, implementing data flow controls, and ensuring high data quality and accessibility for analytics and business intelligence purposes. This role requires a strong foundation in programming and a keen understanding of how to integrate and manage data effectively across various storage systems and technologies.

We're looking for someone who can quickly ramp up, contribute right away and work independently as well as with junior team members with minimal oversight.

We are looking for a skilled Sr. Data Engineer with a strong background in Python, SQL, PySpark, Redshift, and AWS cloud-based large-scale data solutions, with a passion for data quality, performance and cost optimization. The ideal candidate will develop in an Agile environment.

This role is perfect for an individual passionate about leveraging data to drive insights, improve decision-making, and support the strategic goals of the organization through innovative data engineering solutions.

Qualification / Skill Set Requirement:

  • Must have a full-time Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
  • 5+ years of real-world data engineering development experience in AWS (certifications preferred). Strong expertise in Python, SQL, PySpark and AWS in an Agile environment, with a proven track record of building and optimizing data pipelines, architectures, and datasets, and proven experience in data storage, modelling, management, lake, warehousing, processing/transformation, integration, cleansing, validation and analytics.
  • A senior person who can understand requirements and design end-to-end solutions with minimal oversight.
  • Strong programming skills in one or more languages such as Python or Scala, and proficiency in writing efficient and optimized code for data integration, storage, processing and manipulation.
  • Strong knowledge of SDLC tools and technologies, including project management software (Jira or similar), source code management (GitHub or similar), CI/CD systems (GitHub Actions, AWS CodeBuild or similar) and binary repository managers (AWS CodeArtifact or similar).
  • Good understanding of Data Modelling and Database Design Principles, with the ability to design and implement efficient database schemas that meet the requirements of the data architecture and support data solutions.
  • Strong SQL skills and experience working with complex data sets, Enterprise Data Warehouses and writing advanced SQL queries. Proficient with relational databases (RDS, MySQL, Postgres, or similar) and NoSQL databases (Cassandra, MongoDB, Neo4j, etc.).
  • Skilled in Data Integration from different sources such as APIs, databases, flat files, and event streaming.
  • Strong experience in implementing data pipelines and efficient ELT/ETL processes, batch and real-time, in AWS and using open-source solutions, with the ability to develop custom integration solutions as needed. This includes data integration from different sources such as APIs (PoS integrations a plus), ERPs (Oracle and Allegra a plus), databases, flat files, Apache Parquet and event streaming, including cleansing, transformation and validation of the data (see the illustrative sketch after this list).
  • Strong experience with scalable and distributed Data Technologies such as Spark/PySpark, DBT and Kafka, to be able to handle large volumes of data.
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc. is a plus.
  • Strong experience in designing and implementing Data Warehousing solutions in AWS with Redshift. Demonstrated experience in designing and implementing efficient ELT/ETL processes that extract data from source systems, transform it (DBT), and load it into the data warehouse.
  • Strong experience in Orchestration using Apache Airflow.
  • Expert in Cloud Computing in AWS, including deep knowledge of a variety of AWS services like Lambda, Kinesis, S3, Lake Formation, EC2, EMR, ECS/ECR, IAM, CloudWatch, etc.
  • Good understanding of Data Quality and Governance, including implementation of data quality checks and monitoring processes to ensure that data is accurate, complete, and consistent.
  • Good understanding of BI solutions, including Looker and LookML (Looker Modelling Language).
  • Strong knowledge and hands-on experience of DevOps principles, tools and technologies (GitHub and AWS DevOps), including continuous integration, continuous delivery (CI/CD), infrastructure as code (IaC – Terraform), configuration management, automated testing, performance tuning and cost management and optimization.
  • Good Problem-Solving skills: being able to troubleshoot data processing pipelines and identify performance bottlenecks and other issues.
  • Possesses strong leadership skills with a willingness to lead, create ideas, and be assertive.
  • Strong project management and organizational skills.
  • Excellent communication skills to collaborate with cross-functional teams, including business users, data architects, DevOps/DataOps/MLOps engineers, data analysts, data scientists, developers, and operations teams. Essential to convey complex technical concepts and insights to non-technical stakeholders effectively.
  • Ability to document processes, procedures, and deployment configurations.
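Several of the qualifications above concern building batch and real-time ELT/ETL pipelines in AWS with PySpark, S3 (Apache Parquet) and Redshift. Purely as a rough illustration of that kind of work, the sketch below reads raw Parquet from S3, applies basic cleansing and validation, and writes curated, partitioned Parquet back to S3 for a later Redshift COPY or Spectrum query. The bucket names, the bookings dataset and all column names are hypothetical and are not part of the posting.

```python
# Illustrative only: a minimal batch ELT step of the kind described above.
# Bucket names, dataset and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bookings_elt_sketch").getOrCreate()

# Ingest raw booking events landed as Parquet in S3 (hypothetical path).
raw = spark.read.parquet("s3://example-raw-bucket/bookings/")

# Basic cleansing and validation: drop duplicates, enforce required fields,
# and derive a date column for downstream partitioning.
cleaned = (
    raw.dropDuplicates(["booking_id"])
       .filter(F.col("booking_id").isNotNull() & F.col("amount").isNotNull())
       .withColumn("booking_date", F.to_date("booking_ts"))
)

# Write curated, partitioned Parquet back to S3; from there it could be
# loaded into Redshift with COPY or queried via Redshift Spectrum.
(cleaned.write
        .mode("overwrite")
        .partitionBy("booking_date")
        .parquet("s3://example-curated-bucket/bookings/"))
```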

Responsibilities:

  • Design, implement, deploy, test and maintain highly scalable and efficient data architectures, defining and maintaining standards and best practices for data management independently with minimal guidance.
  • Ensure the scalability, reliability, quality and performance of data systems.
  • Mentor and guide junior/mid-level data engineers.
  • Collaborate with Product, Engineering, Data Scientists and Analysts to understand data requirements and develop data solutions, including reusable components.
  • Evaluate and implement new technologies and tools to improve data integration, data processing and analysis.
  • Design architecture, observability and testing strategies, and build reliable infrastructure and data pipelines (a minimal orchestration sketch follows this list).
  • Take ownership of the storage layer and data management tasks, including schema design, indexing, and performance tuning.
  • Swiftly address and resolve complex data engineering issues and incidents, and eliminate bottlenecks in SQL queries and database operations.
  • Conduct a discovery of the existing data infrastructure and the proposed architecture.
  • Evaluate and implement cutting-edge technologies and methodologies, and continue learning and expanding skills in data engineering and cloud platforms, to improve and modernize existing data systems.
  • Evaluate, design, and implement data governance solutions: cataloguing, lineage, quality and data governance frameworks that are suitable for a modern analytics solution, considering industry-standard best practices and patterns.
  • Define and document data engineering architectures, processes and data flows.
  • Assess best practices and design schemas that match business needs for delivering a modern analytics solution (descriptive, diagnostic, predictive, prescriptive).
  • Be an active member of our Agile team, participating in all ceremonies and continuous improvement activities.
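The responsibilities above mention building reliable data pipelines, and the qualifications call for orchestration with Apache Airflow. As an illustrative sketch only, under assumed names (the DAG id, task callables and schedule are hypothetical, and the `schedule` argument assumes Airflow 2.4+), a minimal daily extract-transform-load DAG might look like this:

```python
# Illustrative only: a minimal Airflow DAG sketching the kind of
# orchestration mentioned in the posting. All names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extract from source APIs / databases")


def transform():
    print("run PySpark / DBT transformations")


def load():
    print("load curated data into Redshift")


with DAG(
    dag_id="daily_bookings_pipeline_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style scheduling argument
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Simple linear dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```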

Fusemachines is an Equal Opportunity Employer, committed to diversity and inclusion. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by applicable federal, state, or local laws.

➡️ Apply online


🔎 Job offer verified and enriched in line with the editorial guidelines of the Artia13 Association: ethics, inclusion, transparency and vigilance against misleading job ads.

🌍 Find more job offers at artia13.world

Artia13

Since 1998, I have pursued a constant process of introspection that has led me to analyse the mechanisms of information, manipulation and symbolic power. My commitment is clear: defend the truth, equip citizens, and secure digital spaces. A specialist in media analysis, sensitive investigations and cybersecurity, I put my skills at the service of educational and social projects through the Artia13 association. People describe me as methodical, committed, intuitive and lucid. I deeply believe that an informed society is a freer society.
