A fast-growing company called Optasia is actively looking for Big Data Specialists who dream of relocating to and living in Athens, Greece, the 7th largest city in the EU.
The company develops a fully integrated B2B2X financial technology platform covering scoring, financial decisions, disbursement and collection. It provides a versatile AI platform powering financial inclusion, delivering responsible financing decisions and driving a superior business model and strong customer experience.
Operating since 2012 as Channel VAS and now expanded into more than 30 countries across Africa, the Middle East, Asia and LATAM with a focus on emerging markets, Optasia supports MNOs and financial institutions in providing over $8 million in credit daily, reaching more than 560 million people in 2021, and continues to expand into new markets globally.
Optasia has been chosen as a prime investment opportunity by top investment firms: Abu Dhabi's Waha Capital in 2017, Ethos, a leading South African investment company, in late 2018, and the private equity firm DPI in 2019. Beyond showcasing the company's immense business potential, these investments support the expansion of Optasia's rapidly growing global footprint while opening the mobile financial sector to investors.
The company is seeking enthusiastic, results-driven professionals with energy and a can-do attitude who want to join a team of like-minded individuals delivering solutions in an innovative and exciting environment.
Optasia is looking for a Big Data Engineer to join its growing Data Engineering team, where you will design and implement highly scalable end-to-end batch and streaming data pipelines and contribute to Optasia's success.
WHAT YOU WILL DO:
- Improve the scalability, stability, accuracy, speed and efficiency of existing data systems;
- Design and develop end-to-end data processing pipelines;
- Be comfortable navigating the technology stack: Scala, Spark, Python 3, scripting (Bash/Python), Hadoop, SQL, etc.;
- Design, build, test and deploy new libraries, frameworks or full systems while keeping to the highest standards of testing and code quality;
- Develop, maintain and optimize the core libraries for batch processing and ingestion of large volumes of data into the big data infrastructure;
- Build and maintain CI/CD orchestration.
WHAT YOU SHOULD HAVE:
- Bachelor's or Master's degree in Computer Science or Informatics;
- 2+ years of experience in Data engineering;
- Working experience in software/data engineering and/or operations/DevOps/DataOps;
- Working experience with the Apache Hadoop ecosystem (YARN, HDFS, HBase, Spark);
- Working experience with relational and NoSQL technologies;
- Systems administration skills in Linux;
- Experience with the deployment, configuration and maintenance of distributed systems and data/software engineering tools;
- At least Upper-intermediate level of English.
YOUR KEY ATTRIBUTES:
- Experience with fluid virtual infrastructures such as containers (e.g. Docker, Kubernetes);
- Experience with data and ML flow engines and tools, e.g. Apache Airflow;
- Passion for learning new technologies and eagerness to collaborate with other creative minds.
WHAT YOU WILL GET:
- Flexible hybrid working (60% remote, 40% onsite), 10:00-18:00 with a flexible start until half past ten daily;
- Competitive remuneration package;
- An extra day off on your birthday;
- Performance-based bonus scheme (up to 50% of annual salary);
- Comprehensive private healthcare insurance;
- All the tech gear you need to work smart;
- Relocation bonus;
- Be a part of a multicultural working environment;
- Get to know a unique and promising business and industry;
- Gain insights into tomorrow's markets;
- A solid career path within the Optasia family is ready for you;
- Continuous training and access to online training platforms, yoga lessons, professional courses, etc.