(Senior) Hadoop Engineer
We are building a Big Data team from the ground up with the goal of using our petabytes of data to build new internet applications in e-commerce domains such as online advertising, federated search, payments, and comparison shopping, and to deliver differentiated experiences in-app and across our billions of consumer touchpoints in owned channels.
To maintain our strong brand presence and market leadership as we shift from being a security company to an internet company, we seek technologists with business acumen, world-class hard skills, and a passion for building exceptional products at scale.
We believe that empowered, self-motivated teams can accomplish huge things. We work in modern agile workflows that integrate engineers, data scientists, and product owners to deliver specific applications and feature sets.
We are seeking database engineers who will be responsible for scaling our mission-critical infrastructure and the real-time services that power our consumer applications. The successful candidate will be at home in team problem-solving and development, all the way from whiteboarding to production.
- Work on the scalability, resilience, and efficiency of backend components, particularly Hadoop.
- Debug runtime problems and understand the interactions between systems.
- Build systems, libraries, and frameworks within, around, and on top of Hadoop.
- Help build and manage a large, rapidly growing, heavily used cluster, and contribute your work to the open source community.
- Code in Java, JVM scripting languages, and PHP, among others.
- Design and implement statistical data-quality procedures around new data sources, whether relational or NoSQL.
- Design and build large, complex data sets for algorithm and predictive-model development.
- Think strategically about algorithmic uses of data and their impact on business processes, and about how usage interacts with data design.
- Work in a team-oriented environment.
A secondary objective of these roles is to develop a deep Big Data bench as well as leadership talent.
Keys to hiring:
- You should have a minimum of a BS or an MS in Computer Science or equivalent work experience.
- You get extra credit for already having contributed to an open source project.
- You should love big data and cluster computing, especially in the context of solving large-scale, real-world problems that make Avira more amazing.
- You’ve achieved previous success in a performance-critical environment, and can accelerate that success within Avira’s data infrastructure and applications team.
- You have worked with various Hadoop-based technologies, and you are primarily interested in supporting large infrastructure, not just writing Hadoop MapReduce jobs.
- Some travel (10-25%) is required.
- Security, Internet or e-commerce industry experience is a plus.
Our stack contains both NoSQL and relational databases and tools; we believe in the right tool for the right job, and language is not a religious topic for us:
- Runtime NoSQL standards such as Hadoop, MapReduce, HBase, and Couch.
- Expert knowledge of developing and debugging in Java.
- Release management tools (e.g., Maven, Jenkins).
Avira is a global employer, and this role offers career opportunities over time (after a minimum of 12-18 months of strong performance in the role) on the online-business, data-science, or engineering paths.
If you are enthusiastic about broadening your technical expertise and want to work in an environment that promotes creativity, research, innovation, and fun, we would love to hear from you!
Send us your resume at email@example.com
This position is based in Bucharest, Romania.
Avira Soft SRL
26 Armand Calinescu Street, 4th floor
Bucharest 2, 021012
Telephone: +40 21 322 49 74