Lead Machine Learning Engineer

Company: Discover
Location: Deerfield, Illinois, United States
Type: Full-time
Posted: 04.SEP.2021


Description

Discover. A brighter future.

With us, you'll do meaningful work from Day 1. Our collaborative culture is built on three core behaviors: We Play to Win, We Get Better Every Day & We Succeed Together. And we mean it - we want you to grow and make a difference at one of the world's leading digital banking and payments companies. We value what makes you unique so that you have an opportunity to shine.

Come build your future, while being the reason millions of people find a brighter financial future with Discover.

Job Description
This is a hands-on engineering position on a team enabling ModelOps capabilities for the Machine Learning Platform. The team is taking an innovative approach to defining fully automated, advanced ModelOps capabilities as a service for the company. This role will be instrumental in developing software delivery pipeline capabilities and APIs, as well as the tools and infrastructure that enable self-service adoption. Candidates will be expected to bring their expertise and creativity to help solve key technical and non-technical challenges in driving our vision, engineering our solutions, and unlocking value across the organization.

Responsibilities

  • Build, evolve and scale the data science technology capabilities to enable our machine learning platform
  • Lead MLOps strategy implementation, including batch and real-time model delivery, model management, DevOps practices, and platform engineering
  • Partner with management, architects, and product owners to understand requirements, refine features, and deliver technical capabilities
  • Create automated pipelines and workflows to implement algorithms, data features and models into production, at scale
  • Engage with Data Science, Technology and Product Owners to understand challenges around deploying, maintaining and monitoring data science models in production
  • Ensure designs and solutions are highly available, secure, and continue to drive automation of data science capabilities
  • Engage internal development teams on tools, techniques, capabilities as well as gather feedback on data science capability evolution from our internal communities
  • Build and enable production capabilities using the latest modeling techniques and technology frameworks in ML and AI, such as Spark, TensorFlow, and Keras, and graph technologies such as Neo4j and Neptune, to optimize performance and provide cost efficiencies in the platform
  • Deliver software capabilities and products from initial concept through continuous improvement
  • Develop and implement automated testing frameworks for ML and AI delivery, such as UAT, integration, A/B, and champion/challenger methodologies
  • Build functions for statistical tests
  • Develop and lead an agile team focused on next-generation data and analytics technologies
  • Provide technical consulting in support of the creation and enhancement of analytical platforms and tools
  • Provide technical consulting to application development teams during design and development of highly complex and critical data projects
  • Partner on a team to deliver data projects, including migration to new data technologies for unstructured, streaming, and high-volume data
  • Develop and deploy big data applications
  • Build and administer large-scale data platforms, and integrate next-generation data analytics tools into the big data ecosystem
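As an illustration of the kind of statistical-test function these responsibilities describe, here is a minimal sketch of a two-proportion z-test that could back a champion/challenger comparison. The function name and inputs are hypothetical, not part of the role description:

```python
import math

def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Two-sided two-proportion z-test, e.g. comparing a champion model's
    success rate against a challenger's on held-out traffic."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled proportion under the null hypothesis that both rates are equal
    p_pool = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: champion converts 120/1000, challenger 150/1000
z, p = two_proportion_z_test(120, 1000, 150, 1000)
print(f"z = {z:.3f}, p = {p:.4f}")
```

In a production ModelOps pipeline, a function like this would typically be wrapped in an automated gate that promotes or rolls back a challenger model based on the test outcome.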


Minimum Qualifications

At a minimum, here's what we need from you:
  • Bachelor's Degree in Information Technology, or related field
  • 2+ years of work in Data Platform Administration, Engineering, or related
  • In lieu of degree, 6+ years of work in Data Platform Administration, Engineering, or related


Preferred Qualifications

If we had our say, we'd also look for:
  • 2+ years of experience in automated software build and deployment in distributed cloud environments such as AWS, GCP, or Azure
  • Experience developing and implementing API service capabilities and reusable components
  • Knowledge of containerization platforms (e.g., Kubernetes) and concepts such as pods, ConfigMaps, and Secrets
  • CI/CD pipeline automation using tools such as Jenkins, Bamboo, or similar
  • Understanding of Groovy scripting or similar for templating CI/CD artifacts
  • Experience working with code repositories such as GitHub, and competence in versioning, branching, and related workflows
  • Experience working with a variety of data platforms such as S3, Snowflake, Redis, Cassandra
  • Understanding of observability and how to achieve reliability in a service
  • Understanding of software testing principles and methodologies
  • Skilled in high availability & scalability design, as well as performance monitoring
  • Knowledge in machine learning, deep learning and other AI use cases a plus
  • Experience as part of an agile engineering or development team
  • Exposure to statistical tests


#Remote


What are you waiting for? Apply today!

The same way we treat our employees is how we treat all applicants - with respect. Discover Financial Services is an equal opportunity employer (EEO is the law). We thrive on diversity and inclusion. You will be treated fairly throughout our recruiting process and without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status in consideration for a career at Discover.

 