Quick Guide: 5 Essential AI Security Steps
- Use 16-character unique passwords for all AI tools.
- Set up a password manager (Bitwarden, 1Password, or NordPass).
- Enable app-based 2FA on every AI platform (avoid SMS).
- Audit data sensitivity before pasting anything into a chat window.
- Verify all AI-related emails directly on the platform's website.
Matt Shumer's "Something Big Is Happening" post went viral this week. If you missed it, the gist is that a well-known AI startup founder sat down and told his friends and family that AI is about to reshape everything. Jobs, industries, daily life. He's telling people to sign up for AI tools right now and start feeding them real work.
He's mostly right. I don't have a problem with his take on where AI is headed.
My problem is what he left out.
Shumer tells people to hand over contracts, financial spreadsheets, quarterly data, client documents. He says things like "give it a messy spreadsheet and ask it to build the model" and "feed it a contract and ask it to find every clause that could hurt your client." Good advice for productivity. Terrible advice if you haven't thought about security for even five minutes first.
He doesn't mention passwords. He doesn't mention what happens to the data you paste in. He doesn't mention that these AI platforms are now sitting on some of the most sensitive information people have ever voluntarily handed to a third party. And he definitely doesn't mention that hackers already know this.
So let me fill in that gap.
Why AI Platforms Are the #1 Hacker Target in 2026
Right now, millions of people who have never really thought about cybersecurity are creating accounts on ChatGPT (GPT-5.3 Codex), Claude (Opus 4.6), and Google Gemini. Most of them are using the same password they use for everything else, or something brilliant like "ChatGPT2026." Then they're pasting proprietary business documents, financial records, and medical information straight into these chat windows.
I've spent years digging through breached password databases. I've personally gone through over 50,000 compromised passwords from real breaches. I know what bad security hygiene looks like at scale, and what I'm seeing right now is a disaster in slow motion.
These platforms aren't just storing your email and a hashed password anymore. They're holding your company's financial models. Your client's legal strategy. Your patient's medical records. People are casually creating GDPR, CCPA, and HIPAA violations every single day because they wanted to save 20 minutes on a report.
And the passwords protecting all of this? Garbage. Absolute garbage.
AI-Powered Phishing and Password Cracking
Shumer spends a lot of time talking about what AI can do for lawyers, engineers, and analysts. He doesn't talk about what it does for criminals.
The same models that write clean code and draft legal briefs also write flawless phishing emails. The broken English that used to give scammers away is gone. Phishing-as-a-Service platforms are running LLMs now, cranking out thousands of unique, personalized messages per hour. Each one references your real job title, your company, your recent LinkedIn activity.
A few years ago, a hacker would blast one generic template to a million inboxes. Now they can generate a million different emails, each one tailored to the person receiving it. The click-through rates on these campaigns are way higher than anything we've seen before.
And password cracking is getting faster too. AI-optimized tools don't just brute-force passwords sequentially anymore. They learn patterns. They know that most people capitalize the first letter, put a number at the end, and swap "a" for "@." That eight-character password you thought was clever? Modern cracking rigs can get through it in minutes, not months.
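The scale here is easy to check with back-of-envelope math. The sketch below assumes a cracking rig that tries 10^12 guesses per second, which is a rough illustrative figure, not a benchmark of any specific hardware:

```python
GUESSES_PER_SEC = 1e12  # assumed throughput; real rigs vary widely

def worst_case_seconds(length: int, charset_size: int) -> float:
    """Worst-case time to exhaust every password of this length and charset."""
    return charset_size ** length / GUESSES_PER_SEC

# 8 lowercase letters: 26^8 combinations -- gone in well under a second
print(f"{worst_case_seconds(8, 26):.2f} seconds")

# 16 characters drawn from ~94 printable ASCII symbols
years = worst_case_seconds(16, 94) / (3600 * 24 * 365)
print(f"{years:.1e} years")  # on the order of a trillion years
```

And that's the worst case for a truly random password. Pattern-aware cracking tools shrink the human-chosen keyspace dramatically, which is why "P@ssword1!" falls far faster than these numbers suggest.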
The AI Security Checklist: 5 Things to Do Before You Follow Shumer's Advice
I'm not telling you to avoid AI. Use it. Shumer's right about that part. But before you start pasting company secrets into a chat window, handle these five things first.
1. Use a 16-Character Minimum for Every AI Account
Not a variation of your go-to password. A completely unique one for each platform. If you're signing up for Claude, ChatGPT, Gemini, or anything else, each account gets its own password. Use a password generator to create something at least 16 characters long with uppercase, lowercase, numbers, and symbols. This is the bare minimum now. Not a suggestion.
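If you'd rather not trust a website with password generation, any language with a cryptographic random source can do it. Here's a minimal sketch in Python using the standard-library `secrets` module; the function name is my own, not from any particular tool:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing upper, lower, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())  # different every run
```

The important detail is `secrets`, not `random`: `random` is predictable and was never meant for security-sensitive values.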
2. Set Up a Password Manager Today
You can't keep track of unique 16+ character passwords across five different AI platforms in your head. Nobody can. Get a password manager. Bitwarden, 1Password, NordPass, any of them work. Pick one and set it up before you create your next AI account.
3. Turn On Two-Factor Authentication
Every major AI platform supports 2FA. Use an authenticator app like Google Authenticator, Authy, or Microsoft Authenticator. Don't use SMS. SMS-based codes are vulnerable to SIM-swapping attacks, where someone convinces your phone carrier to port your number to their device. App-based 2FA blocks most account takeover attempts even if your password ends up in a breach.
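If you're curious why app-based codes can't be intercepted the way SMS can, it's because they're computed on your device from a shared secret, per RFC 6238 (TOTP). Nothing travels over the network at login time except the 6-digit result. A minimal sketch, using a placeholder base32 secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 TOTP code from a base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # 30-second time step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a placeholder demo secret, not a real account key
print(totp("JBSWY3DPEHPK3PXP"))  # 6-digit code, rotates every 30 seconds
```

This is exactly what Google Authenticator and similar apps do internally, which is why a stolen phone number gets an attacker nothing.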
4. Audit What You're Pasting: The Red-Yellow-Green Rule
Before you drop anything into an AI chat, run it through this quick classification:
- Red: Never paste. Anything that could identify a person or expose a secret: medical records, Social Security numbers, credentials, unredacted client or legal documents.
- Yellow: Sanitize first. Business data that's fine once names, account numbers, and identifying details are stripped out: financial models, contracts, internal reports.
- Green: Go ahead. Public or non-sensitive information: published content, general research, anything you'd be comfortable posting online.
Most AI platforms offer enterprise tiers with zero-retention policies and compliance certifications. If you're working with anything that could identify a person or expose a trade secret, the $20/month consumer plan isn't the right tool. That's not a criticism of the platforms. It's just how data protection works. A zero-trust approach means you verify every input before it leaves your hands.
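Teams can even automate a crude first pass of this check before anything leaves the clipboard. The sketch below is a hypothetical keyword screen I'm using for illustration, and the patterns are examples you'd tune to your own data, not a complete rule set:

```python
import re

# Hypothetical patterns per tier -- expand these for your own organization.
RED_PATTERNS = [r"\bssn\b", r"social security", r"\bpassword\b",
                r"medical record", r"account number"]
YELLOW_PATTERNS = [r"\brevenue\b", r"\bclient\b", r"\bcontract\b", r"\bsalary\b"]

def classify(text: str) -> str:
    """Rough pre-paste screen: RED = don't paste, YELLOW = sanitize first,
    GREEN = generally fine."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in RED_PATTERNS):
        return "RED"
    if any(re.search(p, lowered) for p in YELLOW_PATTERNS):
        return "YELLOW"
    return "GREEN"

print(classify("Q3 revenue by client"))         # YELLOW
print(classify("Summarize this public report")) # GREEN
```

A keyword filter will never catch everything, so treat it as a reminder to think, not a substitute for judgment.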
5. Treat Every AI-Related Email as Suspicious
"Your ChatGPT account has been compromised." "Verify your Claude subscription." "Your AI-generated content has been flagged." These phishing lures are all over inboxes right now. Never click links in emails about your AI accounts. Bookmark the login page for each tool you use and only access them through those bookmarks.
The Part That's Actually Funny
The same technology that Shumer says will replace half of entry-level white-collar jobs is also making it way easier for criminals to steal from those same workers.
AI doesn't just put your job at risk. It puts your identity, your accounts, your data, and your financial security at risk too. The technology doesn't pick sides. It just amplifies whatever it gets pointed at. Right now it's being pointed at productivity and at crime at the same time, and both sides are getting better at what they do.
Shumer tells people to spend an hour a day experimenting with AI. I'd add this: spend 30 minutes this week getting your security basics in order. That half hour of setup protects everything you build after it.
The Bigger Security Story
Shumer is right that something big is happening. But the security side of this story is just as big, and the two are connected.
We're in the middle of the largest expansion of attack surface in history. Hundreds of millions of new accounts on platforms holding the most sensitive data people have ever generated. Most of those accounts are protected by the same recycled passwords people have been using for a decade.
The AI push is real. So is the security crisis it's creating. Don't get so excited about what these tools can do that you forget what can be done to you.
Get your security right first. Then go build whatever you want.
Your AI Security Toolbox
Lock Down Your AI Workflow (Waitlist)
I'm putting the finishing touches on The AI Shield: AI Account Security Checklist, a 1-page tactical guide built for your desk or your second monitor.
- The 10-Minute Lockdown: A step-by-step sprint to secure your top 3 AI tools.
- Data Guardrails: A printable version of the Red-Yellow-Green classification rule.
- The "Human Firewall" Test: 3 questions to ask before clicking any AI notification email.
Join the Priority Access Waitlist
Be the first to get the PDF when it drops next week. 100% privacy. I only send security-critical updates.
- Generate Strong Passwords: SafePasswordGenerator.net (free, no signup required).
- Get a Password Manager: Bitwarden (free tier available), 1Password, or NordPass.
Frequently Asked Questions
Is it safe to put company data into ChatGPT?
Depends on your tier and your company's data policy. The consumer plans ($20/month) may use your inputs for model training unless you opt out. Enterprise tiers from OpenAI, Anthropic, and Google typically offer zero-retention policies and SOC 2 compliance. If you're handling anything beyond public information, read the data processing agreement before you paste. When in doubt, use the Red-Yellow-Green rule.
What is the best password manager for AI tools in 2026?
Honestly, any reputable one beats no password manager. Bitwarden has a solid free tier and it's open-source, so its security gets independently audited. 1Password and NordPass are good paid options with extras like breach monitoring. The "best" one is whichever you'll actually use every day.
Can AI really crack my 8-character password?
Yes. An 8-character lowercase password gets cracked in seconds on modern hardware. Throw in uppercase and numbers and you're still looking at minutes to hours with AI-optimized tools that predict human patterns. The current professional recommendation is 16 characters minimum with full character variety. A randomly generated 16-character password with the full character set would take current technology centuries. Use a password generator instead of trying to come up with something "clever."
Should I use SMS or an authenticator app for 2FA?
Authenticator app. Always. SMS codes are vulnerable to SIM-swapping, where someone social-engineers your carrier into porting your number to their phone. Once they have your number, they get your codes. Apps like Google Authenticator, Authy, and Microsoft Authenticator generate codes on-device and can't be intercepted that way. If you want the strongest option, look into hardware keys like YubiKey.
Does AI make phishing attacks harder to spot?
Much harder. The old giveaways (bad grammar, generic greetings, weird formatting) are gone. AI-generated phishing is grammatically clean, contextually relevant, and personalized with whatever's publicly available about you on LinkedIn and social media. The best defense isn't trying to spot fakes. It's behavioral: never click links in unsolicited emails about your accounts. Go directly to the site through a saved bookmark.
T.O. Mercer is a cybersecurity researcher who has analyzed over 50,000 breached passwords and is the founder of SafePasswordGenerator.net, a free tool for creating strong, unique passwords.