Description
J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent corporations, governments, wealthy individuals and institutional investors. Our "first-class business in a first-class way" approach to serving clients drives everything we do. We strive to build trusted, long-term partnerships to help our clients achieve their business objectives.
As a Machine Learning Software Engineer within JPMorgan, you will be a vital member of an agile team, tasked with designing and delivering secure, stable, and scalable market-leading technology products. Your role involves implementing critical technology solutions across a variety of technical areas within different business functions, all in support of the firm's business objectives.
Job responsibilities
- Work with product managers, data scientists, ML engineers, and other stakeholders to understand requirements.
- Design, develop, and deploy state-of-the-art AI/ML/LLM/GenAI solutions to meet business objectives.
- Develop and maintain automated pipelines for model deployment, ensuring scalability, reliability, and efficiency.
- Implement optimization strategies to fine-tune generative models for specific NLP use cases, ensuring high-quality outputs in summarization and text generation (see the sketch after this list).
- Conduct thorough evaluations of generative models (e.g., GPT-4), iterate on model architectures, and implement improvements to enhance overall performance in NLP applications.
- Implement monitoring mechanisms to track model performance in real-time and ensure model reliability.
- Communicate AI/ML/LLM/GenAI capabilities and results to both technical and non-technical audiences.
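For orientation only, a minimal sketch of the summarization and monitoring work described above, assuming the OpenAI Python SDK (openai>=1.x) and an OPENAI_API_KEY in the environment; the model name, prompt, and logging approach are illustrative assumptions rather than requirements of the role.

```python
# Illustrative sketch only; model name, prompt, and logging sink are assumptions.
import time
import logging

from openai import OpenAI  # assumes the openai>=1.x Python SDK is installed

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("summarizer")

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(text: str, model: str = "gpt-4o") -> str:
    """Summarize a document and record simple latency/size metrics."""
    start = time.perf_counter()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize the user's text in 3 bullet points."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    summary = response.choices[0].message.content
    # Minimal "monitoring": log latency and output size for later aggregation.
    log.info("model=%s latency=%.2fs output_chars=%d",
             model, time.perf_counter() - start, len(summary))
    return summary


if __name__ == "__main__":
    print(summarize("J.P. Morgan provides strategic advice and products to corporations, "
                    "governments, wealthy individuals and institutional investors."))
```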
Required qualifications, capabilities, and skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Minimum 3 years of demonstrated experience in applied AI/ML engineering, with a track record of developing and deploying business-critical machine learning models in production.
- Proficiency in programming languages like Python for model development, experimentation, and integration with OpenAI API.
- Experience with machine learning frameworks, libraries, and APIs, such as TensorFlow, PyTorch, Scikit-learn, and OpenAI API.
- Experience with cloud computing platforms (e.g., AWS, Azure, or Google Cloud Platform), containerization technologies (e.g., Docker and Kubernetes), and microservices design, implementation, and performance optimization.
- Solid understanding of the fundamentals of statistics, machine learning (e.g., classification, regression, time series, deep learning, reinforcement learning), and generative model architectures such as GANs and VAEs (see the sketch after this list).
- Ability to identify and address AI/ML/LLM/GenAI challenges, implement optimizations and fine-tune models for optimal performance in NLP applications.
- Strong collaboration skills to work effectively with cross-functional teams, communicate complex concepts, and contribute to interdisciplinary projects.
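As a point of reference for the classification fundamentals listed above, a minimal scikit-learn sketch; the dataset, scaler, and estimator choices are illustrative assumptions.

```python
# Illustrative classification sketch using scikit-learn's built-in dataset;
# all modeling choices here are assumptions, not role requirements.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load a small labeled dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Standardize features, then fit a regularized logistic-regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Report precision/recall/F1 on the held-out data.
print(classification_report(y_test, model.predict(X_test)))
```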
Preferred qualifications, capabilities, and skills
- Familiarity with the financial services industry.
- Expertise in designing and implementing Retrieval-Augmented Generation (RAG) pipelines (see the sketch after this list).
- Hands-on knowledge of Chain-of-Thought, Tree-of-Thoughts, and Graph-of-Thoughts prompting strategies.
- A portfolio showcasing successful applications of generative models in NLP projects, including examples of utilizing OpenAI APIs for prompt engineering.
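A minimal sketch of a RAG pipeline of the kind referenced above, assuming the OpenAI Python SDK for both embeddings and generation; the toy corpus, model names, and top-k value are illustrative assumptions.

```python
# Minimal RAG sketch; the corpus, model names, and top-k value are assumptions.
import numpy as np
from openai import OpenAI  # assumes the openai>=1.x Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

documents = [
    "J.P. Morgan provides strategic advice and products to corporations and governments.",
    "Retrieval-Augmented Generation grounds model answers in retrieved context.",
    "Automated pipelines deploy models with scalability and reliability in mind.",
]


def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into a (n, d) array."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])


doc_vectors = embed(documents)


def answer(question: str, top_k: int = 2) -> str:
    # Retrieve: rank documents by cosine similarity to the question embedding.
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:top_k])

    # Generate: condition the model on the retrieved context only.
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(answer("What does RAG do?"))
```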