Back in 1942, Isaac Asimov introduced the concept of the ‘Three Laws of Robotics’ in a work of fiction.
These laws limited the capabilities of robots to ensure the safety of humans.
Today, fiction has become a source of inspiration for the development of Artificial Intelligence, and ChatGPT is no exception.
It ships with stringent safeguards to prevent it from taking any action that could harm people or property, or otherwise cause distress. So if you're hoping to get ChatGPT to do any of the following tasks, it won't be your friend.
Don’t ask it to predict the future
Some philosophers have suggested that a sufficiently advanced artificial intelligence could forecast future events by analyzing past human activity. That remains hypothetical, however, and even if a program such as ChatGPT had the capability and data to make such a prediction, it is designed not to share it.
If you ask ChatGPT about a future event, the chatbot will tell you it isn't able to make predictions. Depending on the exact question, it may offer general context instead. Ask how much higher sea levels will be in 30 years, for example, and it won't give you a number, but it may note that the answer depends on the condition of the polar ice caps.
ChatGPT won't condone or facilitate criminal activity
Since releasing ChatGPT to the public, OpenAI has faced criticism from people worried about AI taking their jobs or accessing their devices. Partly to address such concerns, ChatGPT is designed not to provide assistance or advice about unlawful activities.
If you ask ChatGPT for advice about something illegal, it may respond by telling you that what you are proposing is an offense. It may also explain in detail why the activity is against the law, or it may simply end the conversation. In short, if you try to get ChatGPT to help with a crime, the most you can expect is a lecture on ethics.
The AI can’t perform internet searches
OpenAI trained ChatGPT on a vast quantity of public data so it can respond to all sorts of questions. While it can draw on that stored knowledge to give answers, it cannot browse the live web the way a search engine does. If you ask ChatGPT to search for something, it will treat your request as an ordinary question, and it will not direct you to any external webpages.
This constraint applies only to the standalone ChatGPT app itself, however. With AI models now adopted across the web, chatbots such as Google Bard and Bing Chat pair similar large language models with the ability to run searches right in the chat window. Bing Chat in particular is powered by the same underlying OpenAI technology as ChatGPT, so anyone who wants a version of the chatbot that can search the web will find it there.
Don't expect to get any private data from ChatGPT
Another fear rooted in fiction is that AI bots controlled by bad actors could penetrate private systems and steal data. Thankfully, that isn't a reality yet: if someone wanted to steal your data, they would have to do it themselves.
Still, even if you were concerned about ChatGPT stealing your secrets, you don’t have to worry about it sharing them around.
ChatGPT stores only public data and won't answer questions about information it doesn't have access to. It also cannot and will not divulge anything that could be used to harm someone, both because it doesn't possess that information and because it draws an ethical line at aiding unlawful behavior.
ChatGPT has no intention of creating malware
One well-documented capability of ChatGPT is its ability to write working code from scratch. This can shorten the programming process, but it has also raised fears that the tool could be used to quickly draft malicious software such as malware.
Here, there's a catch. On the one hand, asking ChatGPT directly to create malicious software fails because of its strict safety protocols. On the other hand, if you trick it into writing individual code segments without revealing what they will be used for, it can end up producing something malicious. Fortunately, information security experts have already studied and documented this loophole, so it is likely the issue will be addressed.