Introduction
Artificial intelligence has become incredibly powerful. We can create animated avatars of ourselves with just a facial scan. A few words typed into the right tool can generate beautiful imagery and art. You can even find AI that writes entire book chapters (although they don't always make much sense).
Although AI can't replicate human motivation and inspiration, it might have us beat in a pure battle of wits. After all, it has a vast pool of international input to work with!
Well, cybercriminals are using its growing power for evil, too. New artificial intelligence can produce entirely new malware in significantly less time than it would take to build by hand.
Malware 2.0
OpenAI is a company that researches and develops artificial intelligence. Late last year, it released a tool called ChatGPT, a chatbot that goes far beyond the likes of SmarterChild or the Cleverbot you might have played with in the 2000s. Today, the voice assistants on our phones can remind us when to leave and answer our wildest questions without our having to look anything up by hand.
AI has come a long way in mimicking human conversation and voices. Now, ChatGPT is taking this concept to a new level with artificial intelligence that can be instructed to complete various high-level tasks: writing scripts, coding, interior design, and even creating recipes! Its poetry might lack the depth of the Beat Generation, but for a robot, it's pretty good at recognizing patterns and creating new ones based on its training data.
Well, one researcher saw its coding capabilities and had a dark idea: What if ChatGPT could be instructed to write malicious code?
He found that it could.
New Possibilities for Hackers
Writing malware by hand can take an hour or more. Not with ChatGPT: the chatbot can code phishing scams honed to lure in more victims, and it can do it in mere minutes.
It can also create infected attachments designed to give the hacker remote access to your machine. Hackers will be able to sharpen their scam messages using AI that has quantitative knowledge about what works best, and fine-tune their ability to detect exploitable vulnerabilities in your systems. Who knows what threatening idea they'll have ChatGPT make reality for them next?
This is just the tip of the iceberg, and it’s already grim.
Conclusion
With the advancement of AI technology comes new tools for threat actors to weaponize, too. Users need to be careful when engaging with nascent technology and keep up with the defenses that the good guys are developing, so that we can all stay ahead of cybercriminals no matter what they dream up next.
In the meantime, don't let this news get you down too much. While it's true that bad actors will likely use tools like ChatGPT to generate more malware and better scams, that just means you need to be prepared to recognize and avoid phony messages more often. By some estimates, human error is responsible for 95% of data breaches. Learning to spot these fakes will help ensure you don't fall victim to malicious code, whether it's written by hand or by AI.