Big Data Engineer


About Draup:

Draup is a stealth-mode start-up, incubated at Zinnov, working on Big Data and Machine Learning. We are building an Enterprise Sales Enablement platform that helps large multinational corporations sell more effectively. We are a 10-month-old team creating a new product, led by experienced serial entrepreneurs with more than 12 years in the sales industry and a strong track record of building and successfully exiting a start-up.

Job Description:

We are looking for a Big Data Engineer who will collect, store, process, and analyze large sets of data from different data sources. The primary focus will be on choosing optimal solutions for these purposes, then implementing and improving them. If you do not enjoy taking on new challenges every day, this role is definitely not for you.


Responsibilities:

  1. Work directly with a seasoned founding team to conceptualize and create the final product
  2. Architect, implement, maintain, and monitor a generic ETL pipeline that can be used across different types of data sources
  3. Work with our research team to validate the defined data models for a variety of data sources, and implement them by integrating whatever Big Data tools and frameworks are required to provide the requested capabilities
  4. Monitor performance, optimize the architecture, and advise on any infrastructure changes needed to suit application requirements
  5. Create a platform on top of the stored data, using a distributed processing environment such as Spark, that lets users run ad-hoc queries with complete abstraction from the internal data stores
  6. Define data retention policies
  7. Collaborate with the team to create Artificial Intelligence applications using the processed data
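To give a flavor of what a source-agnostic ETL pipeline like the one described above can look like, here is a minimal sketch in plain Python. All class and function names (`Extractor`, `ListExtractor`, `run_pipeline`) are hypothetical, chosen for illustration only; in production the "load" step would write to HDFS, Cassandra, or similar rather than an in-memory list.

```python
# Illustrative sketch of a generic, source-agnostic ETL pipeline.
# All names here are hypothetical, not an actual Draup interface.

from abc import ABC, abstractmethod
from typing import Dict, Iterable, List


class Extractor(ABC):
    """Pluggable source: each new data source implements extract()."""

    @abstractmethod
    def extract(self) -> Iterable[Dict]:
        ...


class ListExtractor(Extractor):
    """Trivial in-memory source, standing in for an API or file source."""

    def __init__(self, records: List[Dict]):
        self.records = records

    def extract(self) -> Iterable[Dict]:
        return iter(self.records)


def transform(record: Dict) -> Dict:
    """Normalize keys so downstream storage sees a uniform schema."""
    return {key.lower().strip(): value for key, value in record.items()}


def run_pipeline(extractors: List[Extractor]) -> List[Dict]:
    """Extract from every source, apply the shared transform, and load.

    'Load' here is just collecting into a list; a real pipeline would
    write to a distributed store instead.
    """
    sink: List[Dict] = []
    for extractor in extractors:
        for record in extractor.extract():
            sink.append(transform(record))
    return sink


if __name__ == "__main__":
    # Two sources with inconsistent key formats land in one uniform sink.
    sources = [
        ListExtractor([{" Name ": "alpha"}]),
        ListExtractor([{"NAME": "beta"}]),
    ]
    print(run_pipeline(sources))
```

The point of the sketch is the extension seam: adding a new data source means adding one `Extractor` subclass, while the transform and load stages stay untouched.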

Skills and Qualifications:

  1. Bachelor's or Master's degree in Computer Science or related streams; equivalent professional experience may be substituted for formal education
  2. Proficient understanding of distributed computing principles
  3. Proficiency in Apache Spark is a MUST
  4. Experience with integration of data from multiple data sources
  5. Experience working with NoSQL databases such as Cassandra or MongoDB, and SQL databases such as MySQL
  6. Good working knowledge of MapReduce, HDFS
  7. Experience building stream-processing systems using solutions such as Storm or Spark Streaming
  8. Good knowledge of Big Data querying tools such as Pig, Hive, and Impala
  9. Knowledge of various ETL techniques and frameworks such as Flume
  10. Experience with various messaging systems such as Kafka or RabbitMQ
  11. Ability to communicate complex technical concepts to both technical and non-technical audiences
  12. An entrepreneurial mindset: someone who is not afraid to take on new challenges every day and who takes complete ownership of the product as if it were their own
Apply Now