Mastech Digital provides digital and mainstream technology staffing as well as Digital Transformation Services to American corporations. We are currently seeking a Big Data Engineer for our client in the Banking and Financial Services domain. We value our professionals, providing comprehensive benefits and the opportunity for growth. This is a Permanent position, and the client is looking for someone to start immediately.
Location: Salt Lake City (100% Remote)
Role: Big Data Engineer
Primary Skills: Hadoop
Role Description: The Big Data Engineer must have at least 5 years of experience. For this role, you must be both a generalist capable of picking up and working with multiple, disparate systems, and an expert able to dive deep into specific topics and quickly master them.
- Evangelize and lead the team in Data Operations best practices, ensuring delivery of highly available and scalable systems
- Communicate decisions, ideas, designs, and the operation of systems and services to others in a clear and concise manner
- Foster collaboration with software product development, architecture, and IT teams to ensure releases are delivered with repeatable and auditable processes
- Build and deploy reproducible infrastructure via common Infrastructure as Code tooling (Ansible, SaltStack, Terraform, etc.)
Background and Skills:
- Engineering background in Computer Science, Computer Engineering, Mathematics, or Software Engineering
- 3+ years building/maintaining CI/CD pipelines in an enterprise setting
- 3+ years working with Big Data (e.g., Hadoop, Kafka, Spark, Cassandra)
- Strong software development background in languages such as Java, Python, Scala, SQL, R, and shell scripting
- Experience in continuous integration tools (Git, Jenkins, TFS, Maven, Nexus)
- Strong experience configuring and/or integrating with monitoring and logging solutions such as syslog, ELK (Elastic, Logstash, Kibana) and Kafka.
- Strong UNIX/Linux systems administration skills, including configuration, troubleshooting and automation.
- Experience deploying and operating services running in the Cloud (Azure, AWS)
- Experience with Docker, Kubernetes
- Experience building data pipelines and automating Big Data platform applications/services
- Experience with a variety of data stores/platforms (data warehouses/data marts, NoSQL)
Big Data Platform Administration:
- Cloudera Administration or Hadoop Administration – 5+ years of administration experience; Cloudera experience preferred
- Responsible for maintaining the availability of all EIM Enterprise Level Supported services (HDFS, Hive, Hue, Kafka, Spark, Syslog, Logstash)
- Provide support through a ticketing system for various Hadoop platforms and applications
- Provide Hadoop support from the offshore development center (ODC)
- Problem, Incident, Service Request & Change Management
- Coordinate platform changes with the various stakeholders
- Support EIM Enterprise Level services (Greenplum, Teradata, HDFS, Hive, Hue, Kafka, Spark, Syslog, Logstash)
- Support CI/CD Pipelines (Azure DevOps, Jenkins)
- Proactive monitoring, performance tuning, and capacity management utilizing various tools and consoles
- Maintain Security and ACLs for EIM Platforms and Applications
Installation & Configurations:
- Installation and configuration of Cloudera stack software
- Installation and maintenance of language packages for the various supported languages (Java, Ruby, Perl, Python, R)
- Apply bug fixes and patches
- Version Release Upgrades
- Health checks; problem, issue, and risk communications
- Automate recurring manual maintenance tasks
Education: Bachelor’s degree in Computer Science, Electrical/Electronic Engineering, Information Technology, or another related field, or equivalent experience.
Experience: Minimum 5 years
Relocation: This position will not cover relocation expenses.
Local Preferred: Yes
Note: Must be able to work on a W2 basis
Recruiter Name: Abhishek Singh
Recruiter Phone: 877-884-8834 (Ext. 2077)
Equal Employment Opportunity