HackoMania — Draup Hackathon

“An idea that is not dangerous is unworthy of being called an idea at all.” — Oscar Wilde

Draup hosted the first edition of its internal hackathon to give our teams a break from the daily grind and let them be at their creative best. It attracted widespread participation from our tech and non-tech ninjas. The idea was to bring together teams of coders, designers, business associates, psychologists and data scientists bold enough to create something truly innovative. The hackathon was a 24-hour event: teams were expected to build their product within this window and present it to our in-house panel.


The hackathon was all about idea conceptualization. The only real criteria were teams of four and a finished project by 10 AM the next day. Teams were free to assemble any mix of talent they felt added value to their project. In all, 13 project ideas were conceptualized by 52 participants. We open-sourced internal libraries and projects to help with Draup branding and to let teams create truly unique projects.


The D-Day


The ‘Hack-ateers’ arrived armed with their gear; some even came prepared to pull all-nighters. It was time to get down to business. With a strict 24-hour window, there was no time to waste. Twenty-four hours later, we were greeted by 13 project ideas that could not have been more different from one another. Some teams wanted to automate processes, some focused on improving efficiency, while others created something entirely new. Extra brownie points went to teams whose projects were in line with our offerings on the Draup platform.


24-Hours Later


The madness came to an end 24 hours later, leaving the panel with the difficult task of picking an outright winner. These creative powerhouses, with their unique projects and innovative pitches, made sure the panel members had a really hard time choosing. Unfortunately, it’s in the nature of competition to have just one winner.


Some Notable Entries:


  1. Phoenix – Phoenix built an analytical tool on top of Draup’s existing database service, MongoDB, and took it up a notch by creating a framework around the platform that provides various statistics, such as the daily increase in collections and the weekly trend of incoming data, infers the schema, and supports querying in natural language.
  2. i (iota) – iota worked on a digital expense reimbursement mechanism. The idea was to route approvals through an expense management system without having to physically get sign-off from your manager. iota did just that by extracting the time, amount and restaurant name from the bill and stacking them up in the platform, making it easier for managers to process claims instantly.
  3. Cryptobuffs – We’re all aware of how big cryptocurrency is, and we’re also aware of the impact it has on the market. Now, imagine Draup having its own platform, and its own form of cryptocurrency, which can be used to close deals with B2B service providers and buyers directly, and not having to go through the agony of contacting third-party vendors. Team Cryptobuffs envisioned just that!
  4. The Newbies – The Newbies built a platform to upload multiple Excel sheets at the same time, annotate the text and check the data so that the final draft is error-free. It works for PDFs and URLs as well.
  5. The Gamifiers – Making code pushes fun and competitive is just what you need to increase productivity. The team attempted to gamify software development and bring friendly competition to the development cycle: when a developer pushes code or completes a Jira ticket, they are assigned a badge. Each badge carries points, and at the end of the week developers can get their names onto an overall leaderboard. Winners can be decided on several metrics, such as completing 15 Jira tickets, writing 500 lines of Python code, or being the most active on Slack channels.
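The badge-and-points scheme described above can be sketched in a few lines of Python. The event names, badge table and point values here are illustrative assumptions, not the team’s actual implementation:

```python
from collections import defaultdict

# Hypothetical point values per badge; the real table was not published.
BADGE_POINTS = {"code_push": 5, "jira_ticket": 10, "slack_activity": 2}

def build_leaderboard(events):
    """events: iterable of (developer, badge) pairs collected during the week."""
    scores = defaultdict(int)
    for developer, badge in events:
        scores[developer] += BADGE_POINTS.get(badge, 0)
    # Highest score first
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

week = [("asha", "jira_ticket"), ("ravi", "code_push"),
        ("asha", "code_push"), ("ravi", "jira_ticket"),
        ("ravi", "slack_activity")]
print(build_leaderboard(week))  # [('ravi', 17), ('asha', 15)]
```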


Invincible Hacker


The Invincible Hackers presented an impressively accurate resume filter. Given a job description, it recommends the top-N most relevant resumes, letting you scout the best possible profiles for a given role and completely automating the manual process of scouring through potential profiles.


Can’t be Draupped!


A Draup-powered mobile app right in the pockets of our customers had our panelists’ interest piqued. What’s more, the app was equipped with voice capabilities that let users search for corporate executives from our Rolodex library for their sales enablement and hiring needs.


And the Winner is…


Fabricate ML on a Click


The winning entry focused on making machine learning models accessible to everyone, and the team behind Fabricate ML on a Click did just that. Regardless of your technical background, you can create a machine learning model with the click of a button. The project let users build models with minimal effort and minimal training in machine learning, with single-click and drag-and-drop interfaces at the heart of the model-building experience.


Our Hackathon Statistics:


52 overall participants
13 teams
4 members per team: 3 technology specialists, 1 business leader
24 hours to code
1 winner
0 losers
∞ possibilities


It was clear that the teams were driven more by their inner desire to create something truly remarkable than by the prizes. It is this very drive to build great things that binds us all together here at Draup. If you’re looking to start or further your career in technology, have the right skill sets and are self-driven, we have exciting opportunities for you. Head over to our Careers section for a list of open front-end and back-end roles.


Technology at Draup

How do we harvest, process and analyse millions of data points every day?

As I look back on the past year, there has never been an uneventful or boring day at Draup. Each new day brings challenging yet interesting problems to solve. Our entire stack is divided into four important components, and each of them is a complete product by itself. We will go through an overview of each without diving into much detail.


Harvesters – We have data from thousands of different sources flowing into our database. Each data point has its own refresh cycle, ranging from real-time to quarterly. Volumes have sometimes gone up to 300 million records a day (200 GB), depending on seasonality. Processing time should not be affected by data volume, whatever its size, because even minor delays affect decision making.

The data might not have a well-defined schema, and this is where MongoDB helps us. Its simple yet dynamic and extremely scalable document-store model fits all our varied needs. We have an internal Python-based tool that helps us onboard new sources quickly.
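To see why a document store fits, consider records arriving from two sources with different fields. A minimal sketch in plain Python (the field names are made up for illustration) shows how heterogeneous documents can coexist and how a rough schema can be inferred from them after the fact:

```python
# Two harvested records from different sources; neither follows a fixed schema.
docs = [
    {"company": "Acme", "headcount": 1200, "source": "filings"},
    {"company": "Globex", "jobs_posted": 85, "refresh": "weekly", "source": "crawler"},
]

def infer_schema(documents):
    """Collect every field name seen and the set of Python types it takes."""
    schema = {}
    for doc in documents:
        for field, value in doc.items():
            schema.setdefault(field, set()).add(type(value).__name__)
    return schema

print(infer_schema(docs)["company"])  # {'str'}
```

In MongoDB both documents would land in the same collection unchanged; the schema inference above mirrors what a tool has to do before downstream consumers can rely on the data.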

ETL – Getting the data into the system might be easy, but processing it, including joins on the order of 100 million × 100 million rows, is both memory- and compute-intensive. We use the Databricks platform to ETL, decomplexify and merge vast volumes of related data. Databricks is a cloud-based managed Spark ecosystem that lets us concentrate on solving business challenges without worrying too much about infrastructure and scaling. We have developed proprietary algorithms that deduplicate similar data points across sources, translate all data into English and make the data ready to be consumed by machine learning models.
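The deduplication algorithms are proprietary, but the core idea of collapsing near-duplicate records onto a normalized key can be sketched in plain Python (the normalization rules and record fields here are illustrative, not Draup’s actual logic):

```python
import re

def normalize(name):
    """Lowercase, strip punctuation and common legal suffixes so near-duplicates collide."""
    name = re.sub(r"[^a-z0-9 ]", "", name.lower())
    for suffix in (" inc", " ltd", " corp"):
        if name.endswith(suffix):
            name = name[: -len(suffix)]
    return name.strip()

def deduplicate(records):
    """Keep the first record seen for each normalized company name."""
    seen = {}
    for record in records:
        seen.setdefault(normalize(record["company"]), record)
    return list(seen.values())

rows = [{"company": "Acme Inc."}, {"company": "ACME Inc"}, {"company": "Globex Corp"}]
print(len(deduplicate(rows)))  # 2: the duplicate Acme is dropped
```

At Draup’s scale the same idea would run as a distributed job on Spark rather than an in-memory dict, but the key-normalization step is the heart of it.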

Gateway – Gateway is the brain of Draup; all important business logic and decisions live here. It is also where data from machine learning algorithms, rule-based engines, psychological models and manual human intelligence converges. It follows a microservices architecture with multiple apps, each solving a different problem and communicating through APIs. The apps generally have well-defined data schemas to work with, and given the relational nature of the data, MySQL is the natural choice of database engine. We use Django because it gives us short development cycles, is easy to learn, and helps us control quality and maintain a well-defined MVC architecture. We also use Celery, with Redis as the broker, for many of our long-running async processes. All data points, along with model results, are available for our internal subject-matter experts and analyst teams to review and correct through the Gateway web interface, and their corrections are fed back to the machine learning algorithms as training data.
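The review-and-correct loop at the end of that flow can be sketched as follows. This is pure Python with illustrative record fields and correction format; it is not Gateway’s actual code:

```python
def apply_corrections(model_outputs, corrections):
    """Overlay analyst corrections on model predictions to build the next training set.

    corrections maps record id -> corrected label supplied by a human reviewer.
    """
    training_set = []
    for record in model_outputs:
        label = corrections.get(record["id"], record["predicted_label"])
        training_set.append({
            "id": record["id"],
            "label": label,
            "was_corrected": record["id"] in corrections,
        })
    return training_set

outputs = [{"id": 1, "predicted_label": "engineer"},
           {"id": 2, "predicted_label": "designer"}]
fixed = apply_corrections(outputs, {2: "product manager"})
print(fixed[1]["label"])  # product manager
```

Flagging which labels were human-corrected lets the next training run weight verified examples more heavily than raw model output.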

Finally, we have the application, our customer-facing SaaS product. It has DRF (Django REST Framework) at the backend, with all the data consumed by a ReactJS application. To improve the user experience, Elasticsearch lets our users comb through our proprietary heuristics and data much faster.
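An Elasticsearch search is ultimately just a JSON query body sent to an index. A sketch of what such a body might look like for an executive search (the index fields `name` and `role` are hypothetical, not Draup’s actual mapping):

```python
import json

# Hypothetical query body: full-text match on a name, filtered by an exact role,
# using Elasticsearch's standard bool query DSL.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"name": "jane doe"}}],
            "filter": [{"term": {"role": "cto"}}],
        }
    },
    "size": 10,  # return at most ten hits
}
print(json.dumps(query, indent=2))
```

A DRF view would build a body like this from the user’s request parameters and forward it to the cluster, returning scored hits to the React frontend.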

We are primarily a big data and machine learning startup, so scalability and efficiency are key for us. All components are hosted on managed cloud services. Provisioning of the architecture is automated through Terraform, so we can bring multiple machines up in seconds based on load. Deployments are managed through Ansible scripts run on Jenkins. All applications have their own Dev, QA and Production environments, which helps us test features thoroughly and deliver bug-free software.

I hope this was useful. If you are part of a budding startup and need guidance on solving similar problems, feel free to reach out to us.