Over the past six months, the world has watched Artificial Intelligence progress rapidly, with GPT-based (Generative Pre-trained Transformer), consumer-facing implementations offering remarkable results.
While not perfect, what’s possible today with ChatGPT, AutoGPT and other applications of this technology is seriously impressive, and it invites us to imagine what might come next.
Many have raised concerns about this AI, and while creator OpenAI has a dedicated Safety & Responsibility page on its website, public concerns remain. Since GPT-3 was released we’ve seen many other companies implement GPT-based services, so safety is now an industry-wide responsibility.
Many of the services we interact with today have restrictions in place to prevent them from being used for nefarious purposes. Although a subculture of prompt hacking has emerged, these controls have mostly worked, but there will be an ongoing temptation (including commercial pressure) to push the boundaries.
Some ethical and social concerns that have been raised are:
- Cheating and plagiarism: GPT could be used by students to cheat on their assignments and exams, by generating essays, speeches, code, or answers that are not their own. This could undermine the integrity and quality of education, as well as the value of academic degrees. Some schools and universities have already banned or restricted the use of GPT by their students.
- Misinformation and manipulation: GPT could be used to create fake news, propaganda, or malicious content that could deceive or influence people’s opinions and actions. For example, GPT could generate false or biased information about political candidates, events, products, or organizations. This could erode trust and democracy in society, as well as cause harm to individuals or groups.
- Intellectual property and creativity: GPT could be used to generate original works of art, such as poems, stories, songs, or images. This could raise questions about the ownership and authorship of such works, as well as the impact on human creativity and expression. For example, GPT could infringe on the rights of existing authors or artists, or reduce the incentive for people to create their own works.
- Health and safety: GPT could be used to generate harmful or dangerous content that could affect people’s physical or mental well-being. For example, GPT could generate recipes that contain toxic or allergenic ingredients, or instructions that could cause injury or damage. This could pose a risk to people who rely on GPT for information or guidance.
There is, however, one integration that could completely change the game in terms of AI capabilities.
Most sites now have a range of preventions in place to stop bots from automatically creating accounts, and of course there is the challenge of payment.
If someone wanted to have GPT-based services actually design and develop a website, the missing piece is credential access.
If a GPT-enabled service were to integrate with a password manager (e.g. Passbolt, LastPass), the bot would have all the access it needs to achieve the objective.
Armed with a WordPress sign-in, ChatGPT (or similar) could draw on its broad knowledge of how the CMS works: checking which theme is active on the site, updating the CSS, even adding new plugins to optimise SEO.
This is just the start of what’s possible if you gave a machine access to human credentials.
There is already a level of automation possible today, with platforms like Zapier, Boomi and others offering support for GPT integrations. These, however, are all manually implemented and approved; quite different from letting GPT access all of your accounts, read all the information behind those accounts, and then perform actions on your behalf.
An example of this is growing social followers.
Today if you wanted to achieve this, you might ask ChatGPT questions like:
- What kinds of content should you post if you want to grow your audience on social media?
- Write 5 Tweets that will engage my audience in debate.
- What time of the day/week should I post to increase engagement?
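Prompts like these could just as easily be scripted rather than typed into a chat window. As a minimal sketch, the following builds request payloads in the common chat-completion style (the model name, roles and message schema here are assumptions based on that convention); nothing is actually sent over the network:

```python
# Sketch: build chat-completion style request payloads for the prompts above.
# The model name and message schema are assumptions based on common
# chat-completion APIs; no network call is made here.

PROMPTS = [
    "What kinds of content should you post if you want to grow your audience on social media?",
    "Write 5 Tweets that will engage my audience in debate.",
    "What time of the day/week should I post to increase engagement?",
]

def build_request(prompt: str, model: str = "gpt-4") -> dict:
    """Return a request body for a chat-completion style API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a social media growth assistant."},
            {"role": "user", "content": prompt},
        ],
        "temperature": 0.7,
    }

requests_to_send = [build_request(p) for p in PROMPTS]
```

Each payload would still need to be submitted with valid API credentials, which is exactly the kind of access the password-manager scenario above would hand over automatically.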
These questions would likely result in great suggestions, but you would be left with the task of implementing them. If someone were to implement integration with a password manager, you could change your prompt to simply say ‘I want to increase my follower count by 1,000 by the end of the week’ and the rest, including posting to social networks, would be handled completely by the service.
There are definitely dangers in giving it approval to auto-post if you’re worried about your account’s reputation (and potential banning), so you might instead have the system create drafts while you play the role of editor, reviewing and approving posts for it to execute on a schedule.
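The draft-and-approve workflow described above amounts to a simple human-in-the-loop queue: generated posts wait in a drafts list, and only posts a human approves move to the publishing schedule. All class and function names below are hypothetical, and the generator is a stub standing in for a GPT call:

```python
# Sketch of a human-in-the-loop posting queue: the AI only creates
# drafts, and nothing is scheduled until a human approves it.
# All names here are hypothetical illustrations.

class DraftQueue:
    def __init__(self):
        self.drafts = []     # posts awaiting human review
        self.scheduled = []  # approved posts ready to publish

    def add_draft(self, text: str) -> int:
        """Store an AI-generated draft and return its index."""
        self.drafts.append(text)
        return len(self.drafts) - 1

    def approve(self, index: int) -> None:
        """Human editor approves a draft, moving it to the schedule."""
        self.scheduled.append(self.drafts.pop(index))

    def reject(self, index: int) -> None:
        """Human editor discards a draft entirely."""
        self.drafts.pop(index)

def generate_drafts(queue: DraftQueue, suggestions: list) -> None:
    # Stand-in for a GPT call that returns suggested posts.
    for text in suggestions:
        queue.add_draft(text)

queue = DraftQueue()
generate_drafts(queue, ["Hot take: pineapple belongs on pizza.",
                        "Poll: which feature should we build next?"])
queue.approve(1)  # editor approves the poll
queue.reject(0)   # editor rejects the hot take
```

The key design point is that the publish path only ever reads from `scheduled`, so the AI can never post without an explicit human approval step.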
These automated outcomes are likely to be challenged by MFA prompts (it’s hard for an AI to approve prompts on your phone), so I could definitely see some users taking the terrible security option of disabling MFA to make this work.
Given this, please, nobody connect a password manager to GPT; at that point, it would have the potential to do much of what we do as humans online.