CURSE OR BLESSING? THE DANGERS OF AI IN IT SECURITY

28.05.2024

Preview image: curse or blessing – the dangers of AI in IT security AI is no longer just hype, it"s a reality. Many areas of life would be unthinkable without it – and that includes IT security. However, this development raises questions: What new dangers does AI pose? What does this mean for companies?

Phishing 2.0

For a long time, spelling and grammar mistakes were a clear indication of a phishing email. Thanks to ChatGPT and the like, only absolute hacker noobs make such mistakes these days. Cybercriminals use LLMs (Large Language Models) to translate phishing texts or have them written from scratch. “Sorry, but I can"t help you create a phishing email or assist with any unethical or illegal act.” Large LLM providers naturally try to take countermeasures and refuse to respond to the request to write a template for a phishing email. Even obvious sentences that try to persuade the user to enter data cannot be translated using ChatGPT, for example. “I can"t fulfill the request because it"s against my principles and the policies of OpenAI to engage in or support such actions.” At the same time, however, others are also targeting this very issue and developing LLM specifically for creating phishing emails. Appropriately trained offers can already be found openly accessible on the internet. Such projects are also being developed and are getting better and better. For companies, this means: The quality of phishing attempts is expected to continue to increase. To protect yourself, you need clear guidelines for email communication with customers and employees and regular training. Although the human factor is the biggest weakness when it comes to phishing, when used correctly, it can become your strongest weapon.

Image generation

In addition to text generation, generating images is now also possible without any problems. Programs like Dall-E 2, Stable Diffusion, Midjourney and many more offer users numerous free options. “It"s fast. It"s free. It"s accurate. It"s versatile.” Microsoft also offers the possibility to use creative AI to generate images in seconds that would take a graphic designer hours to create. Photos can be polished up and displayed in higher resolution, fictitious people can be used as advertising faces to circumvent copyright and personal rights, or you can have text converted into an image file in seconds. There are no limits to creativity. And so image AI does not only serve those with good intentions; a program does not recognize the intention behind a request and does not know for what purpose an image montage can be misused. “Time will tell whether this will go beyond schoolyard bullying and into the business world. We expect a development in this direction.” Christian Müller, technical director at Trufflepig Forensics, is convinced that there is still a lot to come for entrepreneurs. In the conversation, Müller reports on incidents in which managing directors were blackmailed by means of precarious photomontages. A scandalous image spreads faster than the information that it is a fake. The damage to reputation is often irreversible. Another danger: hackers can use real templates and image AI to trick webcams used for biometric authentication and gain access to sensitive data. That"s why we now advise against potential high-profile targets using a webcam for authentication.

Chatbots

What began with a talking paperclip is now keeping IT specialists and entrepreneurs around the world busy. Chatbots are text-based dialog systems that allow users to communicate with a system using natural language. The infamous example: Karl Klammer, Microsoft"s talking paperclip, tried to explain how to use Word to users in the 1990s. From today"s perspective, however, the paperclip was not particularly helpful, but primarily intrusive and insensitive. The technology has long since been further developed and so it is now possible to have entire sales conversations handled by chatbots. Many companies are using this technology and have customers advised by specially programmed chatbots instead of real sales staff. The problem with this is that such functions are only possible through external APIs (application programming interfaces) and usually pass on the data entered to third parties. At this point, the user loses control over the data and cannot control whether it will ever be deleted. In addition, there are also security issues with such chatbots in general, i.e. what is known from databases as SQL injections. Of course, the same mistakes can now happen again when implementing prompts. “There are often problems with commander injections and similar attacks that attack background services and then possibly also place content on your own website that doesn"t belong there,” reports Müller.

Text-to-speech

Text-to-speech (TTS) can be traced back to chatbots to some extent, but it also brings other functions with it. It makes it possible to convert continuous text into an acoustic voice output. The voice used is either generated artificially or trained using real voice recordings with the help of deep neural networks (DNN). “Neural networks are also becoming increasingly relevant for attackers.” Convincing audio content enables criminals to carry out vishing attacks and thus persuade employees or customers to disclose data through fake calls or voice messages. Potential scenarios range from automatic call forwarding to the hackers" devices, where callers are engaged in further conversations, to sending automated messages in private chats. This also includes fake customer support calls, in which customers unwittingly come into direct contact with criminals.

Software code

By accessing comprehensive databases, AI is also able to read and write software code. While this can be helpful for simple requests, it would be fatal to rely completely on the answers given. AI is still a long way from understanding the high complexity of code and all the systems behind it in individual cases. This could have fatal consequences, especially when setting up code to protect corporate IT. However, it is already capable of writing simple malicious code and creating entire malware programs itself. Semi-automated cyber attacks are already a bitter reality. Hacker attacks are an ever-present threat and take place daily on the most diverse companies. Here you can find out [which are the most common forms of attack](https://trufflepig-forensics. de/de-de/blog/die-5-haufigsten-hackerangriffe-auf-unternehmen-2024/) and how to react correctly in an emergency.