Protect AI raised $13.5 million in a seed round co-led by Boldstart Ventures and Acrew Capital, with participation from Pelion Ventures, Knollwood Capital, and Aviso Ventures.
The capital will go toward customer outreach and product development as Protect AI emerges from stealth, according to co-founder and CEO Ian Swanson.
Protect AI is one of the few security companies focusing exclusively on creating tools to protect AI systems and machine learning models from exploitation, according to Swanson.
Its product line aims to help developers identify and fix security vulnerabilities at various stages of the machine learning life cycle, including flaws that could expose sensitive data.
Protect AI was co-founded by Swanson, Badar Ahmed, and Daryan Dehghanpisheh roughly a year ago. Dehghanpisheh previously worked with Swanson at Amazon Web Services (AWS) on artificial intelligence and machine learning.
Dehghanpisheh was AWS's worldwide leader for machine learning solutions, while Swanson led the company's global AI customer solutions team.
Ahmed met Swanson while they both worked at DataScience.com, Swanson's previous startup, which Oracle acquired in 2018. Ahmed also worked with Swanson at Oracle, where Swanson was VP of AI and machine learning.
Protect AI’s first product, NB Defense, is designed to help data scientists within the AI community who use the popular Jupyter Notebook.
According to a 2018 GitHub analysis, there were over 2.5 million public Jupyter Notebooks in use at the time of the report’s publication, a number likely to have increased since then.
NB Defense scans Jupyter notebooks used in AI projects, which typically bundle all the libraries, code, and frameworks needed to run, train, and test an AI system, and suggests remediations for any security risks it identifies.
Are AI and Machine Learning Systems in Danger?
According to Swanson, there are several problematic elements that an AI project notebook might contain. These include internal-use authentication tokens, other credentials, and personally identifiable information such as names and phone numbers.
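To illustrate the general idea, the sketch below is not Protect AI's actual code; the detection rules and file handling are hypothetical placeholders. A Jupyter notebook is plain JSON, so a scanner can walk its cells and flag strings that resemble cloud access keys, hard-coded tokens, or phone numbers:

```python
import json
import re
import sys

# Hypothetical detection rules for illustration only; a real scanner
# ships far more extensive and carefully tuned rule sets.
PATTERNS = {
    "AWS-style access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hard-coded token": re.compile(r"(?i)(api[_-]?key|auth[_-]?token)\s*=\s*\S{16,}"),
    "US-style phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scan_notebook(path: str) -> None:
    """Walk the cells of a .ipynb file (plain JSON) and flag risky-looking strings."""
    with open(path, encoding="utf-8") as fh:
        notebook = json.load(fh)

    for index, cell in enumerate(notebook.get("cells", [])):
        # Both code and markdown cells can leak secrets or personal data.
        source = "".join(cell.get("source", []))
        for label, pattern in PATTERNS.items():
            if pattern.search(source):
                print(f"cell {index} ({cell.get('cell_type')}): possible {label}")

if __name__ == "__main__":
    scan_notebook(sys.argv[1])
```

A commercial scanner would layer many more checks on top of simple patterns like these, from broader secret formats to license metadata.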
NB Defense also looks for open-source code under a “nonpermissive” license that may bar its use in a commercial system. To be fair, most Jupyter notebooks are kept safely away from prying eyes and serve as scratchpads rather than production environments.
Dark Reading’s analysis suggests that fewer than 1% of the approximately 10,000 instances of Jupyter Notebook on the public web are configured for open access.
Confirming that such exploits aren't merely theoretical, security firm Lightspin discovered that an attacker could execute arbitrary code on a victim's Jupyter notebook instance across accounts in Amazon SageMaker, AWS's machine learning service.
While it might seem premature to worry about cyberattacks on AI systems, Swanson argues that prevention is essential. A 2020 Microsoft survey found that the majority of businesses using AI lack the tools needed to secure their machine learning systems.
Protect AI is pre-revenue, but it has the advantage of entering a market without fierce competition. Its closest competitor may be Resistant AI, which develops AI systems to protect algorithms from automated attacks.
The company isn’t revealing how many customers it has today. However, Swanson claims Protect AI has secured “enterprises in Fortune 500” across fields like finance, life sciences, healthcare, gaming, digital business, FinTech, and energy.
NB Defense is free, with paid options to be introduced. It will also work with AI development tools beyond Jupyter Notebook, including Azure ML, Amazon SageMaker, and Google Vertex AI Workbench, Swanson says.