Late last year, the open source firm Redis made two announcements to the development community that it hoped would benefit people on all levels. The first was that it had created a free tool, RedisInsight, that can manipulate, visualise and analyse data within Redis, enabling developers to easily interact with their deployments.
The second was that it was creating a developer certification programme to help “certify developers on their proficiency with coding and building applications with Redis”. The aim of the certification, which is now in full swing, is to help developers be taken more seriously and to increase the expertise of the Redis community.
Speaking to DevOps Online, Howard Ting, CMO at the firm, told us about the ideas in more detail.
Why did you decide to create a developer certification?
It’s about bringing more consistency and credibility to Redis developers. We think this is one way of having a common measuring stick for developers who are working with us. When someone is looking to hire a developer, they will know that, with this certification, that developer has at least a bare minimum level of credibility and expertise. We’re testing everything that a developer would need to know to be successful: we’re testing data structures, we’re testing data modelling and we’re testing best practices. We’re testing all of those things to ensure that anyone who’s trying to hire someone can have a lot of confidence that that developer has a certain level of proficiency.
In terms of specific focus areas, why did you choose to teach those topics and develop skills in those areas?
These are the areas that we think are the minimum requirement for a developer to get the most value out of Redis. They obviously have to know how to model the data and what data structures to use. They have to understand some best practices, they have to set up the database, and they have to know how to optimise it for a particular use case. We basically tried to pick the topics that we felt were most important for a developer to maximise the usage of Redis. All in all, we wanted to strike the right balance between testing for all the things that we think are important while also making it accessible to as many people as possible.
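The data-modelling decision described above can be made concrete with a minimal sketch. The following is plain Python with no Redis server required: the dicts merely stand in for a Redis instance, though the command names in the comments (SET, GET, HSET, HGET) are real Redis commands. It contrasts two ways of modelling the same record, a serialised string versus a hash with per-field access.

```python
import json

# Stand-ins for a Redis instance: a string keyspace and a hash keyspace.
strings = {}   # SET / GET: whole values only
hashes = {}    # HSET / HGET: per-field access

user = {"name": "Ada", "plan": "pro", "logins": 42}

# Option 1: serialise the record into a string (SET user:1 '{...}').
# Reading one field means fetching and parsing the whole document.
strings["user:1"] = json.dumps(user)
plan_via_string = json.loads(strings["user:1"])["plan"]

# Option 2: model the record as a hash (HSET user:1 name Ada plan pro ...).
# A single field can be read or updated without touching the rest
# (HGET user:1 plan), which is often the better fit for mutable records.
hashes["user:1"] = dict(user)
plan_via_hash = hashes["user:1"]["plan"]

assert plan_via_string == plan_via_hash == "pro"
print(plan_via_hash)  # → pro
```

Which option is right depends on access patterns, exactly the kind of trade-off the certification's data-modelling questions are aimed at.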
That’s why we decided to make it a relatively modest amount of time, which is 90 minutes. Developers can do the certification from home or anywhere online. It is scheduled and it is proctored; we do want to take it seriously so that there is some real credibility and substance behind it. We will continue to get feedback and try to evolve it over time.
What are the biggest trends that you’re seeing in the industry right now?
There’s a big increase in the different data types that developers are working with and, as a result, there’s been a huge explosion in the number and variety of databases that people are working with. This is one trend that I think is coming to a head sooner rather than later because of the rise of microservices and Kubernetes. We’re seeing that most developers are choosing a database that is optimal for the service that they’re building, because the type and variety of data that we’re working with now is so extensive. Twenty years ago, we had one database supporting 1,000 applications; now we have one application that might have 1,000 databases. This is because today we have a very extensive microservices environment with lots of interconnected dependencies and services, and you have different databases supporting each of those services.
What are the most efficient ways to enhance performance and ensure effective debugging?
I think part of it is obviously designing the system properly, ensuring that you have the right system to handle whatever the needs of the application are, because a lot of the time, performance is strained at some scale. Obviously, it depends on the elasticity needs of the application: is it highly elastic, meaning you have big spikes in usage at certain times, or is it fairly steady in terms of usage, something you might call predictable? Deciding what type of application you have will help you design the system properly.
Another thing that’s important is thinking about where the database should sit. More and more, we think the data has to sit in the fastest tier of storage media we have, which is RAM or DRAM. There is also an expansion of media coming to market, such as persistent memory, which is almost as fast as RAM but at a fraction of the cost. So, we think we’re moving to a world where there are going to be two tiers of data: fast-moving data, which sits in memory, in DRAM or maybe an SSD, and slower-moving, almost archival-type data, which sits in a traditional database in the back end, running on disc or some combination of disc and flash.

I think there is this kind of separation of data happening, and developers really have to think: what type of application do you have? Do you have a really data-intensive, highly personalised, highly reactive and responsive application, where you’re constantly having to do different things based on what the users are doing or what’s happening in the environment around them? If that’s the kind of application you have, then we would recommend you design around an in-memory database to start with, because you know that you’re going to be absorbing a lot of data, and you have to have a database that can react very fast to all of it.
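The two-tier split described above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not Redis code: the bounded `hot` dict stands in for an in-memory store such as Redis, and the `cold` dict stands in for a disc-based archival database; the class name and capacity parameter are invented for the example.

```python
from collections import OrderedDict

class TwoTierStore:
    """Hypothetical sketch of a two-tier data layout: a small, fast
    'hot' tier (standing in for an in-memory store like Redis) backed
    by a larger 'cold' archival tier (standing in for a disc-based
    database)."""

    def __init__(self, hot_capacity=2):
        self.hot = OrderedDict()   # fast tier: bounded, in memory
        self.cold = {}             # slow tier: unbounded, "on disc"
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)
        # Evict the least-recently-used item to the archival tier.
        while len(self.hot) > self.hot_capacity:
            old_key, old_value = self.hot.popitem(last=False)
            self.cold[old_key] = old_value

    def get(self, key):
        if key in self.hot:        # fast path: served from memory
            self.hot.move_to_end(key)
            return self.hot[key]
        value = self.cold[key]     # slow path: promote from the archive
        self.put(key, value)
        return value

store = TwoTierStore(hot_capacity=2)
for k in ("a", "b", "c"):
    store.put(k, k.upper())
print(sorted(store.hot), sorted(store.cold))  # → ['b', 'c'] ['a']
```

A reactive, data-intensive application of the kind described above would keep its working set entirely in the hot tier; the sketch only shows how the fast and archival tiers divide the data between them.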
What’s the best piece of advice you can give on optimising data management problems?
Picking the right tool for the job is definitely the number one thing. Also, thinking more long term. For example, you could be thinking, “Okay, what am I starting to do with this application or service? I can see that in the next three to five years we’re going to go here with it.” And thinking about a platform that can support your needs in the long run, because the last thing you want to do is get started on something and then hit a dead end, where you have to refactor or rebuild the app completely once you get to a certain point, whether it’s scale or different data-model requirements.
How is implementing DevOps helping with the future of your company?
One of the things we do in a more modern application environment, where you need to be able to deploy services quickly, get them spun up quickly and scale them, is to automate as much of that as possible. This is so that we can free up humans to do higher-value things. That’s a huge part of our mission as a company. I think any company that’s in infrastructure today has to be working towards more automation of the lower-level, mundane things: sizing, deployment, upgrades, scaling, recovery, all of those things that today we just throw people at. That just isn’t in keeping with the DevOps mindset: to continue to deploy new features and capabilities quickly, you want the core infrastructure to be as stable, solid, robust and resilient as possible. So, I think you’ll see us continue to push on automation to address those needs.