Job Description

Job Title: Big Data Developer
Location: United States
Job Type: Full-time, On-site
Visa Sponsorship: H1B
  
We seek qualified candidates for an on-site role in the United States. H1B sponsorship is available for this position. If you are interested, please submit your application.
  
 Job Summary:
 We are looking for a skilled and innovative Big Data Developer to design, develop, and optimize data pipelines and storage solutions for large-scale data processing. The ideal candidate will have a strong background in Big Data technologies, a passion for solving complex data challenges, and the ability to work in a fast-paced environment.
  
 Key Responsibilities:
 • Design and implement scalable and reliable data pipelines using Big Data technologies.
 • Develop, test, and maintain data processing applications for large datasets.
 • Work with data architects and analysts to define and implement data solutions that meet business requirements.
 • Optimize and monitor the performance of data workflows and troubleshoot issues as they arise.
 • Build and maintain data models and databases for efficient data storage and retrieval.
 • Integrate structured and unstructured data from diverse sources, including APIs, data streams, and third-party systems.
 • Ensure data security and compliance with relevant regulations and standards.
 • Stay updated on the latest Big Data trends and technologies to recommend improvements and new tools.
  
 Required Skills and Qualifications:
 • Proficiency in Big Data technologies such as Hadoop, Spark, Hive, and Kafka.
 • Strong programming skills in languages such as Java, Scala, or Python.
 • Experience with distributed computing and parallel processing.
 • Solid understanding of data modeling, ETL processes, and data warehousing.
 • Familiarity with cloud-based data solutions such as AWS (EMR, Redshift), Azure (Databricks, HDInsight), or Google Cloud Platform (BigQuery).
 • Hands-on experience with SQL and NoSQL databases.
 • Knowledge of version control tools like Git and workflow management tools like Apache Airflow.
 • Strong problem-solving and analytical skills.
 • Excellent communication and teamwork abilities.
  
 Preferred Qualifications:
 • Experience with real-time data processing frameworks like Apache Flink or Storm.
 • Knowledge of machine learning frameworks and data science practices.
 • Familiarity with DevOps principles and CI/CD pipelines.
  
 Educational Requirements:
 • Bachelor’s degree in Computer Science, Data Engineering, or a related field (or equivalent experience).