How Artificial Intelligence Can Power Evasive and Targeted Malware Part 1
Abstract
Next-generation malware that uses Artificial Intelligence to enhance its offensive capabilities is an emergent threat, and the examples analysed in this paper all use Neural Networks to power evasiveness and targeting. Neural Networks are not easily interpretable and lack transparency; because of this, they can be used to obfuscate malicious payloads and conceal target and trigger conditions.
The threats analysed are DeepLocker, a GUI Attack and a Neural Network Trojan. While the implementation and functionality of the Neural Network differ in each case, all three are able to evade anti-virus and malware scanners.
DeepLocker and the Neural Network Trojan both used a known malicious payload. If the payload is not detected by the scanners, it can execute successfully provided the malicious code runs locally on the target, as WannaCry ransomware does. However, if the code initiates any network activity it can be blocked by an Intrusion Prevention System (IPS). To get past the IPS, an encrypted channel between the target and attacker is required before the actual exploit is executed. This evasiveness and ability to remain undetected was confirmed with an application using a TensorFlow model and a Metasploit payload on a Windows 10 machine.
1. Introduction
Artificial Intelligence (AI) technologies are pervasive and for the most part beneficial, but the underlying technologies can be used maliciously. Powerful AI frameworks such as TensorFlow are open source and publicly available and can be used to drive sophisticated attacks.
Malware is a term used to describe a variety of malicious software including viruses, botnets, trojans and worms. It is software that infiltrates and/or damages a computer system. AI can be used in conjunction with existing malware approaches to define the target of an attack, conceal the malicious payload and automate the exploitation (Kaloudi & Li, 2020; Thanh & Zelinka, 2019).
The aim of this work is to analyse three novel approaches that use AI in a malicious manner that is evasive and can be targeted either broadly or with a high level of granularity. All three threats use Neural Networks (NNs). In Section 2 (Background), details on Neural Networks and malware scanners are provided to frame the analysis and set the context for the threats examined. In Section 3 (Analysis), DeepLocker, a GUI Attack and a Neural Network Trojan are analysed. The conclusions are presented in Section 4.
2. Background
The evasive and targeted attacks that are analysed all use a NN at their core, but the implementation and functionality differ from case to case. The NN can be used to obfuscate the payload, to define the target and trigger mechanism, or both. Because a NN is used to power the evasive and targeted nature of the attacks, detail on both NNs and anti-virus and malware scanners is outlined below.
The basic structure of a NN consists of an input layer, one or more hidden layers and an output layer. Each layer contains a number of neurons, and each neuron uses a set of parameters and an algorithm so that, collectively, the network maps an input to an output.
The Neural Network Trojan encodes malicious binary fragments into the NN weights. A weight is one of the learnable parameters used by each neuron and is analogous to the synapse strength between biological neurons: the greater the weight, the more important the connection (Zupan, 1994).
Both DeepLocker and the GUI Attack use a Convolutional Neural Network (CNN), which is particularly suited to higher-dimensional data such as images. A CNN contains multiple stacked convolutional layers and each layer produces a feature map (Khan, Rahmani, Shah & Bennamoun, 2018). The feature maps are passed through the network for further processing, and the output layer uses these higher-level representations to classify an image based on the training the CNN has received. Please see the Appendix for more information on NNs.
Because of their inherent complexity, NNs are not easily interpretable and can be thought of as black boxes: there can be thousands of neurons and millions of parameters, and once a model is trained it is not easy to explain how it makes decisions. Because of this lack of transparency, it is very difficult, and may be impossible, to reverse engineer information out of a NN (Nugent & Cunningham, 2005; Stoecklin, 2018).
The black box nature of NNs is what drives the evasiveness and allows the malware to escape detection by anti-virus and malware scanners. The scanners often use static signatures for known threats and can use a dynamic virtual sandbox environment to detect malicious activity at runtime or execution. Further, Intrusion Prevention Systems (IPS) can inspect network traffic and look directly into the packets to detect known threats such as shell connections.
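As a simple illustration of the static side of this, the sketch below shows signature matching in its most basic form: a file is hashed and the digest is looked up in a database of known-malicious signatures. The file path and the digest in the set are placeholders; real scanners also use byte-pattern and heuristic signatures alongside the dynamic sandboxing and network inspection noted above.

```python
# Minimal sketch of static signature scanning: hash a file and look the digest
# up in a set of known-malicious signatures. The digest below is a placeholder,
# not a real signature.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def is_known_malicious(path: str) -> bool:
    """Return True if the file's SHA-256 digest matches a known signature."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

A payload concealed inside a trained NN never appears on disk in a form that such a signature lookup can match, which is precisely the evasion discussed in Section 3.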
Certain types of conventional malware conceal their malicious payload by encrypting it, decrypting it only if certain runtime conditions are met. Specifically, if the malware detects that it is not running in a virtual sandbox environment it can decrypt the payload (Stoecklin, 2018). This allows it to bypass some static and dynamic scanners, but not necessarily an IPS.
3. Analysis
3.1 DeepLocker
DeepLocker is evasive and targeted malware that was presented by IBM Research as a proof of concept at the Black Hat convention in 2018 (Kirat, Jang & Stoecklin, 2018). The malware includes three major components: a benign carrier application, a CNN model and a malicious payload.
The CNN model is used to identify the target, and when the target is confirmed it decrypts the malicious payload, which is then executed. In the example presented by IBM the target was an individual and the CNN was trained on a picture of his face. The target could, however, be defined by anything that contains enough specificity to train a model, including speech, other software present on the host or aspects of network architecture (Kirat, Jang & Stoecklin, 2018).
With DeepLocker the payload is encrypted and embedded inside a carrier application. However, unlike conventional malware, there is no stored key to decrypt the payload and no obvious trigger condition. Instead there is a key generator, a trigger class and a trigger instance, all of which are concealed in the CNN.
Once a target and an encryption key have been selected, the CNN model is trained to generate the key if it detects the target. That is, the key, target and trigger condition are transformed into a CNN model, which is very difficult to reverse engineer (Kirat, Jang & Stoecklin, 2018). If the model detects the target, the key is generated and used to decrypt the payload, which is then executed.
It is important to note that the components of most interest to malware researchers are concealed: the target class (for example, a network architecture or a face), the target instance (which network architecture or whose face) and the malicious payload are all obfuscated.
To confirm that, with a known malicious payload encrypted and the trigger condition concealed in a CNN, the malware would not be detected by static and dynamic scanners, a Windows GUI application was created. The application uses a TensorFlow model trained to classify images and a Metasploit reverse HTTPS payload. The target was a Windows 10 machine running Windows Defender and Norton anti-virus. When the trigger image was provided to the application, the payload was decrypted and a reverse shell was successfully established over an encrypted connection. A detailed explanation of the application is beyond the scope of this paper, but the source code is available on request. It is worth noting that without an encrypted connection, the network traffic created by the malicious payload was detected by the Norton IPS.
3.2 GUI Attack
In the paper AI-Powered GUI Attack and Its Defensive Methods the authors outlined an attack where the victim’s desktop was targeted by malware (Yu, Tuttle, Thurnau & Mireku, 2020).
The goal of the attack was to recognise any of four browser icons (Chrome, Edge, Opera or Firefox) and then stealthily log into the user's Blackboard account, an online learning environment.
The malware used a TensorFlow Object Detection CNN model that had been further trained to detect the browser icons on either the desktop or the taskbar. If the CNN detected any of the icons, it sent the icon's location coordinates to the malware, which could then emulate a click event on the icon. For this to succeed, however, the attack relies on the login username and password being saved in the browser.
The authors also tested an approach that did not use a NN but instead used OpenCV, a computer vision library, and found that the TensorFlow model achieved a recognition accuracy of 88 to 99% while the OpenCV accuracy ranged from 36 to 67% (Yu, Tuttle, Thurnau & Mireku, 2020).
The premise the authors used to demonstrate the GUI attack is relatively simple, but the approach could be extended in sophistication: they give the example of malware that detects the user is on a banking website and then launches an attack to transfer money to another account (Yu, Tuttle, Thurnau & Mireku, 2020).
Potential defensive countermeasures were also examined. The authors found that altering the appearance of an icon, that is, manipulating its pixel values with various filters and noise, produced adversarial icons that caused the CNN to misclassify them while remaining recognisable to the human eye.
As with DeepLocker, because the target class (an icon and its location) and the target instance (the four browser icons) are obfuscated by the NN, the target and trigger condition would not be detected by malware scanners. No detail was provided on how the emulated click functionality was implemented. It is not unreasonable to think that some low-level API or framework provides this functionality, but whether any anti-virus or malware scanners would flag this activity requires further investigation.
3.3 Neural Network Trojan
Neural Networks embedded in larger software applications are widely used in a number of fields, for example gaming, finance and healthcare. Geigel (2013) describes a NN Trojan where the NN is part of a larger application and is trained with a modified dataset and modified code such that a malicious payload is memorised and concealed in the NN. Specifically, the malicious code is fragmented, mixed with the normal data used to train the model, and encoded into the weights alongside the legitimate data. When the NN receives a specific trigger, it presents the malicious code fragments to the output layer and decoder, which reassembles the fragments into a payload (Geigel, 2013).
When an application uses a NN it typically pre-processes the data into a form the NN can use; this is done at the encoder. From the encoder the data flows through the input layer, hidden layers and output layer to the decoder and finally to a post-NN processing routine. In the decoder and the post-NN processing routine the output from the NN is converted into information the application can use (Geigel, 2013).
Geigel (2013) trained the NN with normal inputs mapped to normal outputs and trojanised inputs mapped to trojanised outputs. The trojanised inputs were fragments of the malicious payload represented as ASCII strings or binary data. Importantly, the NN still achieves a high level of performance on normal data while simultaneously memorising the malicious payload. This process of concealing the payload in the neurons of the NN and then decoding and reassembling it is analogous to the payload encryption and decryption used by some conventional malware.
There are limits on the total payload size: because each fragment of the payload is mapped to an output, the fragment size is constrained. Further, each fragment needs a trigger so that it can be sent to the output layer. Larger payloads therefore require more triggers, and the triggers have to arrive in a specific sequence so that the decoder and post-NN processing routine can reassemble the payload.
In an experiment, Geigel (2013) used a NN with 17 input neurons, 256 hidden neurons and 17 output neurons, a voting dataset as the normal data, and a 23-byte malicious payload, the win32/xp sp2 (English + Arabic) cmd.exe. Two tests were run. The first returned 89.4% correct classification of the votes, compared to 92% in a benchmark, and 7 of the 13 two-byte malicious fragment outputs had flipped bits. The second returned 88.94% correct classification, with 5 of the 13 two-byte outputs containing flipped bits. The author suggests the attacker must compensate for such bit errors in the decoder (Geigel, 2013).
In addition to the constraints on payload size and the number and sequence of triggers, there are other limitations. In the context of an application, the attacker needs to make code modifications, encoding modifications and NN parameter alterations. For a larger software application the attacker could train a NN with trojanised inputs, but unless the distribution or development pipelines are available to the attacker there is no way to modify the encoder, decoder and reassembly subroutine for it all to work. However, if the attacker is deploying an application of their own, these constraints do not exist (Geigel, 2013). The way the malicious payload is fragmented and encoded in the artificial neurons of the NN is comparable to encrypting it. As with DeepLocker, because the payload and trigger condition are obfuscated by the NN, they would not be detected by most if not all static and dynamic scanners, even if the payload were a known threat.
4. Conclusions
This paper investigated three novel malware approaches that use NNs to enhance their offensive capabilities. Because NNs are not easily interpretable and are essentially black boxes, DeepLocker, the GUI Attack and the Neural Network Trojan are all able to evade anti-virus and malware scanners. DeepLocker and the Neural Network Trojan both used a known malicious payload; once the malware gets past the scanners it can execute successfully provided the malicious code runs only locally on the target, as WannaCry ransomware does. However, if the code initiates any network activity it can be blocked by an IPS. To get past the IPS, an encrypted channel between the target and attacker is required before executing, for example, a reverse shell. This evasiveness and ability to remain undetected was confirmed with an application using a TensorFlow model and a Metasploit payload.
In all of the examples analysed, the target could be defined very specifically or, at the other end of the scale, very broadly. This gives the attacker flexibility in how these approaches are used, depending on the motivation for the attack.
References
Geigel, A. (2013). Neural network trojan. Journal of Computer Security, 21(2), 191-232.
Kaloudi, N., & Li, J. (2020). The ai-based cyber threat landscape: A survey. ACM Computing Surveys (CSUR), 53(1), 1-34.
Khan, S., Rahmani, H., Shah, S. A. A., & Bennamoun, M. (2018). A guide to convolutional neural networks for computer vision (pp. 43-68). Morgan & Claypool.
Kirat, D., Jang, J., & Stoecklin, M. (2018). DeepLocker: Concealing Targeted Attacks with AI Locksmithing. Retrieved from https://i.blackhat.com/us-18/Thu-August-9/us-18-Kirat-DeepLocker-Concealing-Targeted-Attacks-with-AI-Locksmithing.pdf.
Nugent, C., & Cunningham, P. (2005). A Case-Based Explanation System for Black-Box Systems. Artificial Intelligence Review, 24(2), 163–178. https://doi.org/10.1007/s10462-005-4609-5
Stoecklin, M. (2018). DeepLocker: How AI Can Power a Stealthy New Breed of Malware. Retrieved from https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/.
Thanh, C. T., & Zelinka, I. (2019). A survey on artificial intelligence in malware as next-generation threats. Mendel, 25(2), 27-34.
Yu, N., Tuttle, Z., Thurnau, C. J., & Mireku, E. (2020, April). AI-Powered GUI Attack and Its Defensive Methods. In Proceedings of the 2020 ACM Southeast Conference (pp. 79-86).
Zupan, J. (1994). Introduction to artificial neural network (ANN) methods: what they are and how to use them. Acta Chimica Slovenica, 41, 327-327.
Appendix
Neural Networks
Artificial Neural Networks are inspired by the inner workings of the mammalian brain. The basic building block is the artificial neuron, and when connected to other neurons these networks are able to perform complex classification and prediction tasks, in part due to this parallel architecture. Like a biological neuron, an artificial neuron can accept input from multiple other neurons (Zupan, 1994). A basic NN contains an input layer, one or more hidden layers and an output layer. Depending on the exact model and architecture, each neuron in one layer can be connected to each neuron in the next layer. Each neuron uses a number of parameters and an algorithm to map an input to an output.
A weight is one of the parameters used by each neuron. It is a learnable parameter that is adjusted on each neuron to help achieve the correct output during training. The weights are analogous to synapse strengths between biological neurons and the greater the weight the more important the connection (Zupan, 1994).
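As a concrete illustration, the sketch below builds the basic structure just described using the Keras API bundled with TensorFlow, the framework used elsewhere in this paper. The 17-256-17 layer sizes mirror the network in Geigel's experiment (Section 3.3) but are otherwise only illustrative, and the final loop simply prints the shapes of the learnable weight and bias parameters discussed above.

```python
# A minimal sketch of a basic NN: input layer, one hidden layer, output layer.
# Layer sizes are illustrative placeholders.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(17,)),               # input layer: 17 features
    tf.keras.layers.Dense(256, activation="relu"),    # hidden layer: 256 neurons
    tf.keras.layers.Dense(17, activation="softmax"),  # output layer: 17 classes
])

# Each Dense layer holds learnable parameters: a weight matrix and a bias
# vector that are adjusted during training to map inputs to correct outputs.
for layer in model.layers:
    for variable in layer.weights:
        print(layer.name, variable.shape)
```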
Convolutional Neural Networks (CNNs) are popular for higher-dimensional data such as images. Any image can be thought of as a matrix of its pixel values; a 300 x 300 pixel image can be represented as a 300 x 300 matrix. The images used in training and classification are generally preprocessed to grayscale, where each pixel has a value between 0 and 255.
The convolution in the NN extracts features from the image while preserving spatial information. Imagine a 5 x 5 matrix and think of it as a filter: that filter slides over the image one pixel at a time, and an output matrix is calculated from the 25 pixels the filter sees at each step. That output matrix is called a feature map. There can be multiple stacked convolutional layers in a CNN, and each layer produces a feature map (Khan, Rahmani, Shah & Bennamoun, 2018).
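A minimal sketch of this sliding-filter operation is shown below, using NumPy rather than a full deep learning framework. The image and filter values are random placeholders, and no padding or stride options are used, so the 5 x 5 filter over a 300 x 300 image yields a 296 x 296 feature map.

```python
# Minimal sketch of 2D convolution: a small filter slides over a grayscale
# image one pixel at a time; each position produces one feature map value.
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i:i + kh, j:j + kw]       # the 25 pixels the filter sees
            feature_map[i, j] = np.sum(window * kernel)
    return feature_map

image = np.random.randint(0, 256, size=(300, 300))   # grayscale image, values 0-255
kernel = np.random.randn(5, 5)                       # a 5 x 5 filter
print(convolve2d(image, kernel).shape)               # (296, 296) feature map
```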
The feature maps are passed through the network for further processing in the pooling layers, where they are first rectified, with each negative value replaced by zero, and then down-sampled. The down-sampling uses a filter, for example 2 x 2, which slides over the feature map and pools the values in that 2 x 2 window.
The output layer uses the higher-level representations from the convolutional and pooling layers and classifies the image based on the training the CNN has received (Khan, Rahmani, Shah & Bennamoun, 2018).
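To tie these pieces together, the sketch below assembles a minimal CNN with the Keras API bundled with TensorFlow: a convolutional layer with 5 x 5 filters and ReLU rectification, 2 x 2 max pooling, and a dense output layer that classifies from the pooled feature maps. The input size, number of filters and number of output classes are arbitrary placeholders.

```python
# Minimal sketch of a CNN layer stack: convolution, rectification, pooling,
# and a dense classification layer. All sizes are illustrative placeholders.
import tensorflow as tf

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(300, 300, 1)),                    # grayscale image
    tf.keras.layers.Conv2D(8, kernel_size=5, activation="relu"),   # feature maps
    tf.keras.layers.MaxPooling2D(pool_size=2),                     # down-sampling
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),               # e.g. 10 classes
])
cnn.summary()
```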