Minimum 5 years of hands-on experience in Big Data development and engineering.
Proficiency in Big Data technologies such as Apache Spark (Scala), Hadoop, Kafka, Cassandra, and Elasticsearch.
Strong experience in developing and maintaining Big Data pipelines for ingestion, transformation, and consumption in production settings.
Expertise in scripting languages such as Shell, Perl, and Python for automation and data-processing tasks.
Proven experience in setting up, managing, and optimizing Hadoop clusters, including installation, configuration, monitoring, and security implementation.