Apple
Summary
Posted: Jan 19, 2022
Role Number: 200334355

We are looking for hardworking, passionate, and results-oriented individuals to join our team to build data foundations and tools to craft the future of commerce and Apple Pay.
You will design and implement scalable, extensible, and highly available data pipelines over large-volume data sets that will enable impactful insights and strategy for payment products.
Our culture is about getting things done iteratively and rapidly, with open feedback and debate along the way; we believe analytics is a team sport, but we strive for independent decision-making and taking smart risks.
Our team collaborates deeply with partners across product, design, engineering, and business teams. Our mission is to drive innovation by providing our business and data science partners with outstanding systems and tools to make decisions that improve the customer experience of using our services.
This will include using large and complex data sources, helping derive measurable insights, delivering dynamic and intuitive decision tools, and bringing our data to life via amazing visualizations.
Working with the head of Wallet Payments & Commerce Data Engineering & BI, this person will collaborate with data analysts, instrumentation specialists, and engineering teams to identify the requirements that drive the creation of data pipelines.
You will work closely with the application server engineering team to understand the architecture and internal APIs involved in upcoming and ongoing projects related to Apple Pay.
We are seeking an outstanding person to play a pivotal role in helping the analysts & business users make decisions using data and visualizations.
You will align with key partners across the engineering, analytics, and business teams as you design and build query-friendly data structures.
The ideal candidate is a self-motivated teammate, skilled in a broad set of Big Data processing techniques, with the ability to adapt and learn quickly, deliver results with limited direction, and choose the best possible data processing solution.

Key Qualifications
- 5 years of professional experience with Big Data systems, data pipelines, and data processing
- Practical hands-on experience with technologies such as Apache Hadoop, Apache Pig, Apache Hive, Apache Sqoop, and Apache Spark
- Ability to understand API specs, identify relevant API calls, extract data, and implement data pipelines and SQL-friendly data structures
- Understanding of distributed file formats such as Apache Avro and Apache Parquet, and of common data transformation methods
- Expertise in Python, Unix shell scripting, and dependency-driven job schedulers
- Expertise in core Java, Oracle, Teradata, and ANSI SQL
- Knowledge of Scala and Splunk is a plus
- Familiarity with rule-based tools and APIs for multi-stage data correlation on large data sets

Description
- Translate business requirements from the business team into data and engineering specifications
- Build scalable data sets from the available raw data based on engineering specifications, and derive business metrics and insights
- Work with engineering and business partners to define and implement the data engagement relationships required with partners
- Understand and identify the server APIs that need to be instrumented for data reporting and analytics, and align the server events for execution in already established data pipelines
- Explore and understand complex data sets; identify and formulate correlation rules between heterogeneous data sources for effective analytics and reporting
- Process, clean, and validate the integrity of data used for analysis
- Develop Python and shell scripts for data ingestion from external data sources for business insights

Education & Experience
Minimum of a bachelor's degree, preferably in Computer Science, Information Technology, or EE; relevant industry experience is preferred.

Additional Requirements
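To give a concrete sense of the "process, clean, and validate" and Python-scripting duties described above, here is a minimal illustrative sketch of a record-validation step in an ingestion script. The field names (`transaction_id`, `amount`, `event_time`, etc.) are hypothetical and not from the posting; a real Apple Pay pipeline would use its own schemas and tooling.

```python
# Illustrative sketch of a data-validation step in an ingestion script.
# All field names below are hypothetical examples, not actual schemas.
from datetime import datetime

# Fields every record must carry before it is loaded downstream.
REQUIRED_FIELDS = {"transaction_id", "amount", "currency", "event_time"}

def validate_record(record: dict) -> bool:
    """Return True if the record has all required fields, a numeric
    amount, and an ISO-8601 timestamp; False otherwise."""
    if not REQUIRED_FIELDS.issubset(record):
        return False
    try:
        float(record["amount"])
        datetime.fromisoformat(record["event_time"])
    except (ValueError, TypeError):
        return False
    return True

def clean(records: list) -> list:
    """Keep only valid records; a real pipeline would also route the
    rejected records somewhere for inspection rather than drop them."""
    return [r for r in records if validate_record(r)]
```

A usage example: feeding `clean` one well-formed record and one with a missing field returns only the well-formed one, which is the kind of integrity gate typically placed between raw data sources and query-friendly data structures.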