RSA Conference SentinelOne is the latest to add machine-learning features to its IT security software.
At this week’s RSA Conference, the cybersecurity business unveiled a threat-hunting platform layered with generative AI features, including a large language model (LLM) natural-language interface and an embedded decision-making neural network.
The platform is designed to let threat researchers use these AI tools to ask complex questions about threats and get detailed answers back within seconds, identify attacks more easily and quickly, and run commands to manage an enterprise’s security.
The new platform is in limited preview now. In a way, it demonstrates how models and frameworks for building chatbot interfaces are maturing and becoming easier to deploy; this sort of technology is now showing up everywhere. There are plenty of resources, projects, and models available today, open source and commercial, for bringing these kinds of natural-language and generative features to your own code and products.
Speaking of machine learning… Google-owned VirusTotal will now generate natural-language descriptions of malware, and what the malicious code does, from samples uploaded to the service. This Code Insight feature, powered by Google’s Sec-PaLM model, so far works on at least PowerShell scripts. VT analyzes the sample and uses the model to output a written report of the malware’s actions and likely intent.
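For the curious, file reports like these can be pulled programmatically from VirusTotal's public v3 REST API. The sketch below is illustrative only: the `code_insight` attribute name is an assumption (the real field carrying Code Insight text may differ), and `VT_API_KEY` is a placeholder environment variable.

```python
# Hedged sketch: fetch a file report from the VirusTotal v3 API and pull out
# an AI-generated description of the sample.
import json
import os
import urllib.request


def fetch_report(sha256: str) -> dict:
    """GET /api/v3/files/{hash}, authenticated with the caller's API key."""
    req = urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def extract_insight(report: dict) -> str:
    """Read the Code Insight text out of a v3 file report.

    NOTE: "code_insight" is a hypothetical attribute name used here for
    illustration; check a live API response for the actual field.
    """
    attrs = report.get("data", {}).get("attributes", {})
    return attrs.get("code_insight", "")
```

In practice you would hash the sample locally, request its report, and fall back to uploading the file if VT has never seen it.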
SentinelOne’s work arrives at a time when miscreants are exploring the use of ChatGPT and other AI tools to improve their operations, such as generating more convincing phishing emails and phony credential-stealing websites. The threat-hunting platform is one step toward leveling the playing field, said Ric Smith, SentinelOne’s chief product and technology officer.
“Hackers have figured out how to use AI to more quickly and efficiently execute their attacks,” Smith told The Register.
“They’re using it to observe and predict how defenders will respond to their tactics and modeling their malware and attack practices around them. They’re compromising training models by flooding them with inaccurate data. And they’re doing it all at machine speed.”
He added: “We’re putting the same technology in the hands of security teams, so they can respond and head them off just as fast.”
Miscreants get the AI message
Cybercriminal adoption of AI has been fast, according to companies selling products to combat that AI. Palo Alto Networks’ Unit 42 found in a report this month that between late November and early April there was a 910 percent increase in registrations of domain names related to ChatGPT, OpenAI’s chatbot; those domains will probably be used to lure ChatGPT fans to fake, malicious websites. DNS security logs also showed 17,818 percent growth in related squatting domains, all pointing to a sharp ramp in miscreants’ embrace of the technology.
During a conference hosted by CrowdStrike earlier this month, Rob Joyce, director of the NSA’s Cybersecurity Directorate, told an audience that AI and machine-learning techniques will enable threat groups and nation-states such as China to craft even more dangerous attacks.
Joyce also noted that defenders will be able to use these same technologies in their software, saying that “for the next year we are going to be very focused: what tools come out that will … give us the advantage as defensive folks.”
Security teams have a lot of data, too few people
For enterprise security teams, the challenge is managing a rapidly growing number of data sources and threats at a time when the cybersecurity field continues to struggle with a huge talent shortage, Smith said. AI will help close that gap by lowering the bar for the technical skills needed in cybersecurity.
“Our platform, for instance, allows users to automate response and take action without the need for coding skills, and process and analyze petabytes of data in near-real time,” he said. “That will radically simplify security operations and empower defenders in unprecedented and unforeseen ways.”
As for the platform’s inner workings, SentinelOne’s AI threat-hunting platform draws the dataset it trains on from two sources: the broader cybersecurity domain, and the security data lake built from data and other information collected by the vendor, according to Smith.
It aggregates and correlates data from device and log telemetry from endpoints, networks, the cloud, and user data. It is designed to pull insights from all that and recommend actions for responding to threats that can be launched immediately.
For example, “analysts can ask questions using natural language, such as ‘find potential successful phishing attempts involving PowerShell’ or ‘find all potential Log4j exploit attempts that are using ‘jndi:ldap’ across all data sources,’ and get a summary of results in simple jargon-free terms, along with recommended actions they can initiate with one click, like ‘disable all endpoints,'” Smith said.
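SentinelOne has not published how its LLM translates such questions into hunt queries. Purely as a toy illustration of the translation step, here is a keyword-matching sketch in Python; the rule triggers and query fields (`event.category`, `payload`, and so on) are all made up for this example.

```python
# Toy illustration only: map an analyst's natural-language question to a
# structured hunt-query fragment via keyword matching. In a real product an
# LLM would perform this translation; every rule and field here is invented.

RULES = [
    # (trigger words that must all appear in the question, query fragment)
    ({"phishing", "powershell"},
     'event.category = "email" AND process.name = "powershell.exe"'),
    ({"log4j", "jndi:ldap"},
     'payload CONTAINS "jndi:ldap"'),
]


def question_to_query(question: str) -> str:
    """Return the hunt-query fragments whose triggers all match the question."""
    q = question.lower()
    fragments = [frag for triggers, frag in RULES
                 if all(t in q for t in triggers)]
    return " OR ".join(fragments)


print(question_to_query(
    "find all potential Log4j exploit attempts that are using 'jndi:ldap'"))
# prints: payload CONTAINS "jndi:ldap"
```

A production system would of course rely on the model's language understanding rather than brittle keyword lists, but the input/output shape – free-text question in, executable query out – is the same.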
A mixture of algorithms
For the AI algorithms, SentinelOne uses a combination of models, both open source and from various vendors. Those are fine-tuned to work for cybersecurity and further calibrated to work within the SentinelOne ecosystem, he said.
The natural-language interface gives security pros an interactive experience through which they can narrow an investigation’s findings with follow-up queries or by selecting from a list of recommended next-best actions.
Although AI will continue to improve and help adversaries who use it to attack enterprises, the technology has limitations. While models can be trained on massive amounts of data and are good at responding based on that – and some also can generalize beyond all that knowledge – responses can vary in accuracy, Smith said.
People will continue to play a key role.
“Human intervention will be key to advancing the model and training it to become a knowledgeable partner that security teams can use to detect and respond to attacks – whether network-, endpoint-, or identity-based – faster and more efficiently,” he said.