Amazon.com Services LLC
Are you interested in applying your strong quantitative analysis and big data skills to world-changing problems? Are you interested in driving the development of methods, models and systems for state-of-the-art operations, transportation, and fulfillment systems? If so, then this is the job for you.
The Modelling and Optimization (MOP) team is responsible for developing an in-depth understanding of Amazon’s current fulfillment network and for designing future networks. This role will also build the tools and support structures needed to analyze data and dive deep to determine the root causes of system errors, network changes, and performance issues. The role will need to present findings to business partners to drive improvements and to prioritize customer needs to deliver the right results.
Basic qualifications:
· M.S. (or equivalent) in Computer Science, Statistics, Math, Engineering, or a related field, and/or relevant industry experience
· 4+ years of quantitative experience in logistics/supply chain, transportation, engineering, or related businesses
· 4+ years of experience with one or more programming languages (e.g., Python, Java, C++, C#, Ruby)
· 4+ years of experience with machine-learning techniques (e.g., supervised and unsupervised learning, clustering, random forests) and/or statistical analysis (e.g., regression analysis, hypothesis testing, time series analysis)
· 4+ years of experience with data processing technologies (e.g., AWS services, SQL, data pipelines)
· 4+ years of experience with large-scale data: extracting, processing, analyzing, and representing large quantities of data (e.g., millions to billions of records)
Preferred qualifications:
· Ph.D. in Computer Science, Statistics, Math, Engineering, or a related field
· 6+ years of quantitative experience in logistics/supply chain, transportation, engineering, or related businesses
· 6+ years of experience with one or more programming languages (e.g., Python, Java, C++, C#, Ruby)
· 6+ years of experience with machine-learning techniques (e.g., supervised and unsupervised learning, clustering, random forests) and/or statistical analysis (e.g., regression analysis, hypothesis testing, time series analysis)
· 6+ years of experience with data processing technologies (e.g., AWS services, SQL, data pipelines)
· 6+ years of experience with large-scale data: extracting, processing, analyzing, and representing large quantities of data (e.g., millions to billions of records)