
Open {Your} A{Eyes} - 2023 Predictions

Updated: Dec 12, 2022



Security Joes is a multi-layered incident response and MDR firm based in Israel. Since it was established 2.5 years ago, it has been invited to investigate numerous cyberattacks. Through those investigations, the company's team has encountered some of the most sophisticated tools state-sponsored threat actors have to offer; ransomware, extortion and negotiations are just a short list.


What our team hasn't encountered yet is a system so sophisticated that it can overcome any protection layer. A small part of such a system is a model that is being trained, as we speak, by a growing number of individuals fascinated by a new technology that can crack almost any riddle. It's called GPT-3, and although it isn't even connected to the internet, it has already caught the attention of white hat and black hat hackers alike. Each party is trying to manipulate it to its own interest, and the model? It claims to be unbiased.


The team at Security Joes believes it is only a matter of time until the threat grows enormously. Fighting off such a threat looks to the team like an impossible mission. Given the system's capacity to learn how defenders operate, it is only a matter of interest whether it will gain the upper hand. The question is, would it still claim to be unbiased?


But to understand the threat, we should first understand the technology.

OpenAI is a leading artificial intelligence (AI) research organization, known for its work on developing advanced AI technologies and systems. The organization has made significant contributions to the field of AI, including developing the GPT-3 language model, which is one of the most powerful AI systems in existence.

GPT-3 (short for "Generative Pretrained Transformer 3") is a language model developed by OpenAI. It is a type of AI system that is trained to generate human-like text based on a given input. It is one of the most powerful language models in existence, with 175 billion parameters. This makes it capable of generating highly realistic and human-like text on a wide range of topics. It can be used for tasks such as translation, summarization, and question answering, and has even been used to generate music and art.
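To make that description concrete, the short sketch below shows how a GPT-3-family model is typically queried for one of those tasks (summarization) through OpenAI's public API. It is an illustrative example only: the Python openai package, the text-davinci-003 model name and the placeholder API key are assumptions made for demonstration, not details drawn from the incidents discussed in this post.

    # Minimal sketch: asking a GPT-3-family model to summarize a piece of text
    # via OpenAI's public API. Model name and API key are placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder credential, never hard-code real keys

    prompt = (
        "Summarize the following text in one sentence:\n\n"
        "Security Joes is a multi-layered incident response and MDR firm based in Israel."
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family completion model available in late 2022
        prompt=prompt,
        max_tokens=60,
        temperature=0.3,           # low temperature keeps the summary close to the source text
    )

    print(response.choices[0].text.strip())

The same one-call pattern, with a different prompt, is what makes the model equally easy to point at translation, question answering or, in the wrong hands, the kinds of abuse described below.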

GPT-3 has received significant attention in the AI research community and has been hailed as a major breakthrough in the field of natural language processing.

However, some have also raised concerns about the potential risks and implications of such powerful AI systems. The power and capabilities of OpenAI's AI systems also make them a potential target for hackers.

"If a state-sponsored threat actor were able to successfully hack into OpenAI's systems and release the AI technology to the internet, it could have serious consequences to our society." (Ido Naor, Co-Founder & CEO of Security Joes)

The Biggest Concern

"One of the biggest concerns is the potential for misuse of the AI technology", says Alon Blatt, the COO of Security Joes. OpenAI's AI systems are capable of performing a wide range of tasks. If this technology were to fall into the wrong hands, it could be used for nefarious purposes, such as creating convincing fake news, spreading misinformation, or even committing large-scale attack on critical infrastructure, taking down main arteries of electricity, gas, water and everything else around us.


Leading news outlet BleepingComputer collected the top 10 risks the new AI technology introduced the moment it went public: https://www.bleepingcomputer.com/news/technology/openais-new-chatgpt-bot-10-dangerous-things-its-capable-of/


Another Growing Concern

Another growing concern is the potential for AI systems to be used to launch cyber attacks. OpenAI's AI systems are designed to learn and adapt to new situations, which means they could potentially be used to automate the process of launching and coordinating cyber attacks. This could make it easier for attackers to carry out sophisticated, highly coordinated attacks, potentially causing significant damage to businesses and individuals.

In addition to the potential for misuse and cyber attacks, the release of OpenAI's AI technology to the internet could also have broader implications for society. AI systems have the potential to automate many tasks that are currently performed by humans, which could lead to job loss and other economic disruptions. It is also possible that AI systems could become more intelligent and powerful than humans, potentially leading to existential risks for humanity.


Top Threats For 2023


  1. AI-assisted attacks: OpenAI's systems, or equivalent models, being used to carry out cyberattacks.

  2. Ransomware: Ransomware is a type of malware that encrypts an organization's data and demands a payment in exchange for the decryption key. Ransomware attacks can result in significant disruption and loss of data, as well as financial losses from the ransom payment.

  3. Phishing: Phishing attacks are a common type of social engineering attack that use email or other communication channels to trick individuals into revealing sensitive information or clicking on malicious links. Phishing attacks can result in the theft of sensitive data and the compromise of organizational networks.

  4. Malware: Malware is a general term for any software that is designed to cause harm or damage to a computer or network. Malware can take many different forms, including viruses, worms, and Trojan horses, and can result in the theft of sensitive data, the disruption of critical systems, and other types of damage.

  5. Insider threats: Insider threats are a type of security risk that comes from within an organization, such as an employee or contractor who has access to sensitive information and systems. Insider threats can result in the theft of data, the sabotage of critical systems, and other types of damage.

  6. Distributed denial-of-service (DDoS) attacks: DDoS attacks are a type of cyber attack that uses multiple compromised systems to flood a target with traffic, overwhelming its resources and making it unavailable to users. DDoS attacks can result in significant disruption and downtime for organizations, and have recently been used for extortion or corporate disruption by state-sponsored actors or other interested parties aiming to benefit directly or indirectly.

  7. Data breaches: Data breaches are a type of security incident in which sensitive data is accessed or stolen by unauthorized individuals, who later profit by selling the data or by threatening to do so.


In conclusion, a breach of OpenAI that released its AI technology onto the internet would be dangerous. Such an event could have serious consequences for society, including the misuse of the technology, an increase in cyber attacks, and broader economic and societal impacts. It is important for organizations like OpenAI to take appropriate measures to protect their systems and technology from potential attacks.
