(Senior) DevOps Engineer

Position Summary:

We are building a Big Data team from the ground up, with the goal of using our petabytes of data to build new internet applications in e-commerce domains such as online advertising, federated search, payments, and comparison shopping, and to deliver differentiated experiences in-app and across our billions of consumer touchpoints in owned channels.

To maintain our strong brand presence and market leadership as we shift from being a security company to an internet company, we seek technologists with business acumen, world-class hard skills, and a passion for building exceptional products at scale.

We believe that empowered, self-motivated teams can accomplish huge things.

The successful candidate will be responsible for growing, managing, and enhancing the production, staging, and development environments for scalability, fault tolerance, security, and high availability, in collaboration with development, hosting, and business teams.

Responsibilities:

  • Work on the scalability, resilience, and efficiency of backend components, particularly Hadoop.
  • Build systems, libraries, and frameworks within, around, and on top of Hadoop.
  • Debug runtime problems and understand the interactions between systems.
  • Help build and manage a large, rapidly growing, heavily used cluster, and contribute your work to the open source community.
  • Manage zoning in public and private clouds on platforms such as AWS, Rackspace, and SoftLayer.
  • Handle configuration management and automation using tools like Puppet and Chef.
  • Optimize backhaul processing of large, complex models and modeling data sets for algorithm development.
  • Code in Java, JVM scripting languages, and PHP, among others.
  • Think strategically about uses of data and their impact on business processes, and about how usage interacts with system design.
  • Work in a team-oriented environment.

A secondary objective of these roles is to develop a deep Big Data bench as well as leadership talent.

Keys to hiring:

  • You should have a minimum of a BS or an MS in Computer Science or equivalent work experience.
  • You get extra credit for already having contributed to an open source project.
  • You should love big data and cluster computing, especially in the context of solving large-scale, real-world problems that make Avira more amazing.
  • You’ve achieved previous success in a performance-critical environment, and can accelerate that success within Avira’s data infrastructure and applications team. 
  • You have worked with various Hadoop based technologies. You are primarily interested in running large infrastructure and not just writing Hadoop Map/Reduce jobs.
  • Some travel (10-25%) is required. 
  • Security, Internet or e-commerce industry experience is a plus.

Our stack contains both NoSQL and relational databases and tools. We believe in the right tool for the right job; language is not a religious topic for us.

  • Run-time NoSQL standards like Hadoop, MapReduce, Hive, HBase, and CouchDB.
  • Languages and formats like Perl, Bash, C#, Ruby, Java, JavaScript, shell scripting, XML, and JSON.
  • Monitoring tools like Nagios, Munin, Zenoss, etc.
  • Release management tools (e.g., Maven, Jenkins).
  • Some familiarity with relational stores like Teradata and MySQL.

Avira is a global employer, and this role provides career opportunities over time (after a minimum of 12-18 months of strong performance in the role) on the online business, data science, or engineering paths.

If you are enthusiastic about broadening your technical expertise and have a desire to work in an environment that promotes creativity, research, innovation and fun, we would love to hear from you! 

Send your resume to career@avira.ro

This position is based in Bucharest, Romania.

Avira Soft SRL
Human Resources
26 Armand Calinescu Street, 4th floor
Bucharest 2, 021012
Romania
Telephone: +40 21 322 49 74
Email: career@avira.ro
