Build secure coding practices right into the IDE
Tools that integrate secure coding practices into the IDE promise to improve software security, even if the app in question isn't secure by design.
Although secure coding practices are widely documented, developers still frequently make security mistakes. The more developers who know about threat aversion, the more likely it is that the applications they build will be secure by design.
Developers have a lot of options for learning secure coding guidelines. You can take the traditional approach and read books, look at websites or watch videos. But a group of researchers in Belgium has developed a new security coach called Sensei that runs right in the IDE, much like a spell-checker. The group has also developed a complementary set of coding games called Secure Code Warrior, which makes it easier to practice secure coding in common languages, like JavaScript, C# and Java.
We caught up with Matias Madou, Sensei developer and CTO of Secure Code Warrior (SCW), to find out how tools can help developers design applications with security in mind and how organizations can see significant improvements when they follow secure coding guidelines and practices.
Could you provide a bit of context around where else you have seen this kind of approach applied to secure coding guidelines and design patterns in general? Have you seen it either in an isolated training environment or embedded into the IDE itself?
Matias Madou: The best analogy I can give is writing a letter or document with a spell-checker. The writer knows the language, but the spell-checker highlights errors and offers options to fix them. And even a very experienced writer would never disable the spell-checker. What they might do, however, is enhance the spell-checker with new words from their own vocabulary.
Mapping that to programming, a help tool already exists within the IDE. However, it is there for syntax and functional reasons, not for security. So, in fact, the concept of real-time help is not new, but the way we are applying it -- as a secure coding guideline -- is very new and exciting. As such, like a spell-checker or help function, we try to blend it into the developer workflow as much as possible. In fact, I consider it very positive that developers using Sensei cannot always tell whether the help they are getting with secure coding practices and building applications that are secure by design comes from built-in IDE support or from Sensei.
What inspired the ideas behind Secure Code Warrior and Sensei?
Madou: Secure Code Warrior was founded by security experts who all experienced intimately the negative impact of insecure code. We worked in different roles and environments, but all witnessed firsthand the lack of understanding of security within development communities and how development practices were stacked against security.
I am a lifelong developer who started coding before I was 11, with 10 patents under my belt. I also have a Ph.D. from Ghent University and more than 15 years of hands-on software security experience, including involvement in multiple AppSec research projects, which led to commercial products.
It was when I worked at Fortify that I saw that the organizations using Fortify were really piling up the bugs. It is one thing to find problems but an entirely different thing to fix them. People are good at finding those problems. And it is easy to find problems in code if you never teach developers how to fix them -- or how to construct code securely. So, it's a very unfair game.
So, I left and set up Sensei Security, which was all about helping developers do secure coding and remove bugs in the first place. At the same time, Pieter Danhieux, one of the co-founders and CEO of Secure Code Warrior, was pioneering an innovative approach to improving secure coding skills and outcomes through a gamified hands-on training platform. We could see we had the same mission and that together our platform was so much more powerful. The team is driven by the knowledge that measurable improvements in security compliance, consistency and predictability will be matched by a better quality and speed of code writing. Secure Code Warrior is the fastest and easiest way for any organization to improve their application security.
Are there other names used to describe the creation of apps that are secure by design beyond software simulation and context-based learning?
Madou: What we often call this process is 'just-in-time security,' or you could also call it 'real-time security training.' You get the right information at the right time in the right place. So, for example, when you're writing something with crypto, that is the perfect time to give guidance on how to use crypto. And if the developer needs more information or learning, the IDE can send them back to specific crypto training within the Secure Code Warrior platform.
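To illustrate the kind of guidance Madou describes, here is a minimal Java sketch of a cryptography pattern a coach might flag and an alternative it might suggest. The class and method names are illustrative, not taken from Sensei; the point is the contrast between a weak algorithm and mode and an authenticated, randomized one.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class CryptoGuidance {

    // Risky pattern a coach would typically flag: DES with ECB mode
    // (weak key size, no IV, identical plaintext blocks leak patterns).
    static byte[] weakEncrypt(byte[] plaintext, SecretKey desKey) throws Exception {
        Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, desKey);
        return cipher.doFinal(plaintext);
    }

    // Preferred pattern: AES-GCM with a fresh random IV per message,
    // providing both confidentiality and integrity.
    static byte[] saferEncrypt(byte[] plaintext, SecretKey aesKey) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        // Prepend the IV so the receiver can decrypt; the IV need not be secret.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```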
Have you quantified the kinds of learning outcomes with this type of approach compared to more traditional methods for secure coding practices?
Madou: Measuring something like secure coding practices is very complex. However, we can do it, and we have started to do it with our early adopter Sensei customers. Our training customers have also been able to see very tangible results, but right now, very few organizations are rigorously measuring the outcome of training. What they can measure is the number of problems that are introduced with and without our solution. After rollout, the number of errors reduces, the time to fix reduces and the type of errors picked up by detection tools should become more complex as simple errors are eradicated. Metrics are available at the individual, team or organization level. Companies are also using the metrics to see strengths and weaknesses across individuals and teams.
As a small example of the training, one of Secure Code Warrior's customers required their developers to play a single challenge (five minutes) every day for two months. The company tested their skills before and after the training period and observed a 60% increase in secure development capability across a group of hundreds of developers. This means fewer resources spent on finding and fixing security bugs later in the lifecycle and significant long-term savings.
With regards to the Open Web Application Security Project (OWASP) guidelines, could you point to or summarize the prevalence of the top 10 vulnerabilities that show up on the web? Which of them relate to coding problems rather than design problems, and how prevalent are these?
Madou: There is not always a clear-cut distinction between coding problems and design problems. On top of that, there is -- as far as I know -- no data on the split between the two.
If we look at the OWASP Top 10, it is a good mix of coding problems and design problems, with the coding problems taking the top spots.
It seems like one of the big benefits of an embedded security coach is that it improves the feedback loop between a mistake and the opportunity to fix it. If you catch security problems earlier and make applications secure by design, coders don't have to wait until a block of code is completed and submitted for peer review, static analysis or testing by security teams. Do you have any thoughts about this?
Madou: Sensei is part of Secure Code Warrior's secure coding practices platform, which also provides training. Sensei will advise developers when they do not follow secure coding guidelines. The advice ends with a warning that not following the coding guideline can lead to a particular vulnerability. To learn more about that vulnerability, a link straight into the SCW platform is provided so the developer can take in-depth training on that category of problem. Conversely, a developer going through SCW training will see, at the end of a challenge, a coding guideline to adhere to so the vulnerability is not introduced again.
What are your thoughts on transfer of secure coding practices across different languages? Could you share a couple of code snippets showing insecure code/rectified code in Java and something similar in JavaScript that would result in the same type of OWASP vulnerability?
Madou: Excellent question! Essentially, the current level of help available from highly understaffed application security teams to developers is too high-level. Security professionals say things like "use input validation." And these principles hold across every language, but they need to be implemented in the right way across every language. And, generally, application security teams don't have that level of development knowledge.
While it is tricky to answer your question across languages, it is a common mistake to use the wrong or inferior sanitization. For example, the characters that need to be cleansed to avoid SQL injection are not the same as those for XSS [cross-site scripting]. Angle brackets (< >) tend to be the top characters for exploiting XSS issues, while they are not the top picks for exploiting SQL injection problems; there, it's commonly quotes ('). It's very important for a developer to train in the language they are working in. With SCW, developers can learn and play in different languages against each other, which makes it really fun.
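As an illustration only (not code from SCW or Sensei), here is a minimal Java sketch of the two cases Madou mentions; the table, method and variable names are hypothetical. The same ideas carry over to JavaScript with that language's own APIs: parameterize database queries, and encode output for the HTML context rather than reusing the same character filter for both problems.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InjectionExamples {

    // Insecure: concatenating user input into SQL lets a quote character (')
    // break out of the string literal -- classic SQL injection.
    static ResultSet findUserInsecure(Connection conn, String username) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + username + "'");
    }

    // Rectified: a parameterized query keeps data and SQL separate,
    // so quotes in the input are treated as data, not syntax.
    static ResultSet findUserSafe(Connection conn, String username) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, username);
        return stmt.executeQuery();
    }

    // Insecure: echoing user input into HTML lets angle brackets (< >)
    // introduce script tags -- reflected XSS.
    static String greetInsecure(String name) {
        return "<p>Hello, " + name + "</p>";
    }

    // Rectified: encode for the HTML context before output. In practice,
    // prefer a vetted encoder library over hand-rolled escaping like this.
    static String greetSafe(String name) {
        String encoded = name.replace("&", "&amp;")
                             .replace("<", "&lt;")
                             .replace(">", "&gt;")
                             .replace("\"", "&quot;")
                             .replace("'", "&#x27;");
        return "<p>Hello, " + encoded + "</p>";
    }
}
```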
How have enterprise security teams used, or how could they use, these training and coaching tools to enforce internal security and governance, risk management and compliance practices that go beyond the OWASP basics?
Madou: There are a couple of things here.
Firstly, companies that are embracing security tend to want to publish and increase awareness of their secure coding guidelines or practices, but normally, these are published within the organization on a wiki page or in a PDF document. So, there is no way to enforce these 'mandatory' coding practices. Sensei can implement these coding practices, guide a developer on them and make them truly mandatory by checking them when a developer checks in code. Companies can also demonstrate that they are actively promoting knowledge and use of their secure code practices across their entire development team -- in-house and outsourced.
By embedding SCW's training and Sensei coaching, organizations are really embracing the idea of continuous training. It is no longer a one-time thing; it is a simple and easy way to make security skill development part of every day. And just like going to the gym to improve fitness, SCW can be harnessed as a cyber-gym: tournaments, challenges and other intense training moments that keep the security mindset alive and growing.
How can organizations utilize threat modeling and architectural risk analysis to improve app security beyond secure coding practices?
Madou: Threat modeling and architectural risk analysis will find the design problems in code. Design problems are problems introduced by the architect of the solution. When an architect does not add a particular security component to the design, it is, of course, very hard for a developer to implement it. An example of such a problem is authentication. If an architect decides that no authentication is necessary, then it is not the developer's fault that no authentication is implemented in the solution.
While there are a couple of startups that provide threat modeling tools, I prefer the manual process, where people sit in a room, come up with or draw the design of the solution and then start threat modeling. Of course, you need help with this exercise, and a really good book is Threat Modeling by Adam Shostack, which will guide the group discussion. While the book is skewed toward the Microsoft way of doing things, it is applicable in other environments as well.