
Big Data Engineer - Java, Spark, Kafka, AWS

2T Consulting
  • Chicago, IL

  • Post Date: May 15, 2020


Key Skills: Data Warehousing, Big Data, MongoDB, Cassandra, NoSQL, SQL, Spark, Kafka, Python, Java, Scala, Shell, AWS, S3, Redshift, EMR, Elasticsearch, Qubole

  • Experience: 3+ Yrs
  • Domain: Information And Communication Technology
  • Job Type: Contract - Corp-to-Corp, Contract - Independent, Contract - W2

Job Description

Responsibilities:
* Build powerful, scalable, and reliable big data products using SQL and NoSQL technologies, and web services with real-time streaming and data processing capabilities in the cloud (AWS)
* Contribute to the end-to-end product life cycle: Designing, development, testing, deployment, and providing operational excellence and support
* Prototype creative solutions to enable product MVPs
* Innovate and implement new ideas to solve complex software problems
* Find and advocate for industry standards and best practices in development methodologies, techniques, and technologies
* Contribute to advancing the team's design methodology and quality programming practices
* Effectively coordinate and collaborate on multiple concurrent and complex cross-organizational initiatives involving high volumes of data and high stability and latency requirements
* Work cross-functionally to resolve technical, procedural, and operational issues and proactively identify and prevent problems
* Provide timely and appropriate updates on project and issue status to business owners, stakeholders, leadership, and peers
* Work effectively with remote leadership and collaborate with global teams
Qualifications:
* 3+ years of experience building, delivering, and supporting big data tools, services, and products
* 2+ years of experience working with cloud infrastructure and AWS Big Data/Data Warehousing technologies
* Development skills in relevant big data technologies, including but not limited to SQL and NoSQL data stores such as MongoDB or Cassandra
* Experience working with Hadoop, Hive, Teradata, Redshift, and similar database technologies
* Experience with real-time data streaming and processing technologies such as Spark and Kafka
* Background in Data Warehousing principles, architecture, and its implementation in large multi-terabyte environments
* Proficiency in at least one programming language such as Python, Java, Scala, or Shell, and in Unix/Linux environments
* Experience working in an AWS environment with AWS big data technologies such as S3, Redshift, EMR, and Elasticsearch, and with similar technologies such as Qubole
* Knowledge and proven experience in object-oriented design, SOA, distributed computing, performance/scalability tuning, advanced data structures and algorithms, real-time analytics, and large-scale data processing
* Previous working experience in an Agile development process
* Proven success as a software engineer with a track record of on-time delivery of large enterprise-level projects
 




Contact 2T Consulting

Chicago, Illinois 60290