Data Systems Engineer (AWS, Snowflake, RedShift, Python, Scala, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Data Migration, Batch Data Pipelines) in Charlotte, NC
API Development, AWS, AWS Lambda, Data Migration, Hadoop, Java, Oracle, Snowflake, SQL Server
Location: North Carolina
Job Function: Data Engineer
Date Of Job Posting: 08-19-2021
Position: Data Systems Engineer (AWS, Snowflake, RedShift, Python, Scala, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Data Migration, Batch Data Pipelines)
Location: Charlotte, NC
Duration: Full-Time ONSITE position (no contracts, no corp-to-corp, no remote)
Salary: Excellent Compensation with benefits + 401K
SKILLS: AWS, Glue, Lambda, EMR, Snowflake, RedShift, Python, Scala, Java, Hadoop, Spark, Kafka, Hive, API Handling, API Development, Custom Data Pipeline Development, Data Migration, Oracle, SQL Server, Batch Data Pipelines, Agile
THE OPPORTUNITY:
For one of our reputed clients in the financial space, we have an immediate need for a Data Systems Engineer to be based in Charlotte, North Carolina.
Our ideal Data Systems Engineer will have a skills emphasis on data management, data processing, and developing frameworks for scalable data infrastructure solutions that integrate with heterogeneous data sources. The candidate will work closely with program managers, engineers, data scientists, the reporting team, and other key parts of the business to understand their data requirements and build systems and platforms that meet or exceed the needs of those specific business functions. You must be willing to get into the details but also be able to step back and help us plan the strategic direction of our data management and reporting practices.
THE ROLE:
·Utilize multiple development languages/tools such as Java, Python, and Scala, along with object-oriented approaches, to build prototypes and evaluate the results for effectiveness and feasibility.
·Design, develop, test, and implement data-driven solutions that meet business requirements; quickly identify opportunities and recommend possible technical solutions in collaboration with third-party vendors.
·Provide business analysis and develop ETL code to meet all technical specifications and business requirements according to the established architectural designs.
·Extract business data from multiple structured and unstructured data sources, using data pipelines to ingest it into an Enterprise Data Lake in a hybrid environment.
·Deploy application code and analytical models using CI/CD tools and techniques, and provide support for deployed data applications and analytical models using Jenkins and GitHub.
·Take ownership of pipelines and communicate concisely and persuasively to varied audiences, including data providers, engineers, and analysts.
·Research and assess open-source technologies and components, and recommend and integrate them into the design and implementation.
REQUIREMENTS:
·Bachelor's degree in computer science, information systems, or a relevant field of study.
·Master's degree preferred.
·Experience with cloud technologies: AWS, Glue, Lambda, EMR, and Snowflake/RedShift databases.
·Must have related technical experience and specialist-level knowledge of Python, Scala, or Java.
·Experience with the Hadoop ecosystem and Big Data technologies: Spark, AWS, Kafka, Hive, API handling, CDH.
·Experience in API development and handling; experience in custom data pipeline development (cloud and in-house), migrating data from large-scale data environments such as Oracle and SQL Server, and end-to-end design and build of near-real-time and batch data pipelines.
·Demonstrated ability to work with team members and clients to assess needs, provide assistance, and resolve problems.
·Excellent problem-solving skills, verbal/written communication, and the ability to explain technical concepts to business partners.
·Partner with development teams to ensure coding standards align with DevOps practices with respect to tools, standards, and security.
·Self-motivated and capable of delivering results with minimal ongoing direction. Ability to work in a fast-paced environment and manage multiple priorities in parallel.
·Automation mindset: a drive to continuously look for ways to automate existing processes.
·Experience working on Agile Scrum teams.