US National Security Standards and Julia’s Memory-Safe Capabilities

Written by Phil Vernes | Mar 07, 2024

On February 26, 2024, President Biden's Office of the National Cyber Director released Back to the Building Blocks: A Path Toward Secure and Measurable Software. The report specifies key requirements for cybersecure software, such as memory safety and cybersecurity quality metrics: dynamically measuring the quality of all dependencies.

What do the following hacking methods have in common?

  • Morris worm (1988)
  • Slammer worm (2003)
  • Heartbleed attack via vulnerability in OpenSSL (2014)
  • Stagefright attack via vulnerability in Android MMS messages (2015)
  • Trident exploit to jailbreak Apple devices (2016)
  • WannaCry ransomware attack via vulnerability in Microsoft's SMB implementation (2017)
  • Blastpass exploit (2023)

Think about it for a moment, before reading on.

According to Assistant National Cyber Director for Technology Security Anjana Rajan, these catastrophic hacking attacks all had a common cause: memory safety vulnerabilities. What are memory safety vulnerabilities?

A memory out-of-bounds example: imagine asking a librarian for the 11th book on a shelf that holds only 10 books. Fetching the "11th book", the librarian reaches past all the books and the bookend on the shelf and, instead of a book, grabs an empty soda can that had been left there by accident.

Consequently, the following may happen:

  • Inside the can, you find a secret note, containing private information (privacy breach)
  • Inside the can, you find the key to the librarian’s sports car, parked outside (security vulnerability)
  • The librarian, upon realizing she grabbed a discarded soda can instead of a book, refuses to serve any further requests until finding out who left the can (system instability)

This simplified analogy shows how exceeding memory boundaries can unintentionally expose sensitive information, disrupt system operations, or create openings for malicious exploits.

In a programming language, memory safety is the property of restricting certain kinds of memory access so that they cannot become a point of attack. If the librarian in our example had kept a count of the books on the shelf, and verified it before attempting to fetch the 11th book, she would have realized that there was no 11th book instead of grabbing that troublesome soda can. Key memory-safe features include preventing out-of-bounds access (reads and writes) and preventing access to items that have already been deleted.
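
To make the analogy concrete, here is a minimal Julia sketch (the shelf contents are made up): asking for the 11th element of a 10-element array raises a BoundsError instead of silently reading whatever happens to sit past the end of the array.

    shelf = ["book $i" for i in 1:10]   # a shelf holding exactly 10 books

    try
        shelf[11]                        # asking for the "11th book": an out-of-bounds read
    catch err
        println(err isa BoundsError)     # true: Julia raised an error instead of reading stray memory
    end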

What’s the solution?

It is possible to forestall this entire class of attacks: use a memory-safe programming language and follow best practices. Laudable as that goal is, the migration can prove onerous in an organization with legacy codebases, incomplete or obsolete documentation, and skill gaps across teams.

Memory-safe languages are nothing new and have existed for years: Java, released in 1995, is one example. Rust, Go, Python, and Julia are likewise memory-safe by default.

Julia: A memory-safe solution for legacy code

Julia, when developed under best practices, is not only a memory-safe language for scientific computing but also an alternative to lower-level languages such as C or C++. This means that dependencies, when developed with best practices, can be expressed in memory-safe Julia with performance equivalent to dependencies written in C or C++.
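
As a hedged illustration of that claim, the sketch below shows the kind of low-level numeric kernel that is often delegated to C; the function name and its length check are our own, not from the report. Array accesses are bounds-checked by default, yet the compiled loop typically runs at speeds comparable to an equivalent C routine for code of this sort.

    # A minimal Julia kernel of the kind often written in C: a dot product.
    function dot_sum(x::Vector{Float64}, y::Vector{Float64})
        length(x) == length(y) || throw(DimensionMismatch("vectors must have equal length"))
        s = 0.0
        for i in eachindex(x, y)   # iteration is confined to valid indices of both arrays
            s += x[i] * y[i]
        end
        return s
    end

    dot_sum(rand(1_000), rand(1_000))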

Memory-safety and dependency quality metrics: US strategic action

As mentioned above, on February 26, 2024, memory-safe languages were declared a US strategic objective by the Office of the National Cyber Director (ONCD) in its report, “Back to the Building Blocks: A Path Toward Secure and Measurable Software.” The report proposes two strategic actions:

  • All stakeholders commit to using memory-safe languages in both software and hardware
  • All stakeholders implement “cybersecurity quality metrics”: dynamically measuring the quality of all dependencies 

“Cybersecurity quality metrics” means evaluating the quality of a dependency, especially open-source dependencies. This goes beyond memory safety to impose vigilance on code and dependencies used in all applications and in all languages.

A concrete example of such a vulnerability occurred in 2021 with the “Log4j” exploit. Log4j is a popular and open-source Java logging library. Even though Java is a memory-safe language, without dynamic cybersecurity quality metrics, a popular dependency became a critical vulnerability for the applications depending on it.

“Programmers writing lines of code do not do so without consequence; the way they do their work is of critical importance to the national interest.”

Development & Deployment: Julia on JuliaHub

An effective programming language is not only memory-safe but also accessible to everyone in a technical organization, from research scientists to engineers to software developers. With best practices, all of these distinct technical personas can safely develop and then deploy their Julia code in a unified, collaborative workflow through JuliaHub.

JuliaHub provides a platform for implementing and observing leading dynamic cybersecurity quality metrics across all code dependencies. JuliaHub also offers built-in static code analysis, along with package management and security tools for dependency management and package governance, keeping your code and package versions safe from known vulnerabilities.
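
As one minimal, hedged sketch of what dependency governance looks like in the Julia ecosystem, the commands below use Julia's built-in package manager, Pkg, to inspect and pin dependency versions; the package name and version number are hypothetical.

    using Pkg

    Pkg.status()                                      # list direct dependencies and their resolved versions
    Pkg.add(name = "ExamplePkg", version = "2.3.1")   # install a specific, vetted release (hypothetical package)
    Pkg.pin("ExamplePkg")                             # pin it so routine updates cannot silently change the version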

For a demonstration of how JuliaHub can securely integrate high-performance Julia code into your application, please contact our sales team at sales@juliahub.com.