Security for Non-Coders: How I Protect Myself (And You Should Too)
I'm not a security expert. I'm not a developer. I'm someone who uses AI tools to build things, and along the way I've learned that security isn't something you bolt on at the end. It's a mindset you start with, even if you don't know what half the terminology means.
This post is the guide I wish I'd had when I started. It's written for people like me: non-coders using AI tools like Claude Code who want to build things without accidentally exposing their bank details to the internet.
Disclaimer
Everything in this post is based on my personal experience and common sense, not formal security training. If you're handling medical data, financial systems, or anything where a breach could cause real harm, get proper security advice from a professional. Tools like Claude Code are powerful, and that power comes with real responsibility. You use these tools at your own risk.
Rule Zero: Start with boring data
The single most important piece of advice I can give: do not start your AI tool journey with data you can't afford to lose or expose.
Don't start with:
- Your health records or medical data
- Your banking details or financial statements with account numbers
- Your wedding photos (especially if you only have one copy)
- Client data from your job
- Anything with other people's personal information in it
Do start with:
- A dummy dataset you made up
- Public data from government websites
- Your own notes or documents that contain nothing sensitive
- A small test project with fake names and fake numbers
I started with a finance dashboard, which meant working with my own bank transactions. Before I let any AI tool near that data, I stripped out account numbers, sort codes, and anything that could identify my bank accounts. The spending categories and amounts were fine to work with. The identifiers were not.
Think of it this way: if someone found this file on the street, what's the worst they could do with it? If the answer is "steal my money" or "steal my identity," that data needs protecting before it goes anywhere near an AI tool.
The "Move, Don't Delete" Rule
When I built a project that processed 16,000 emails from my Google account, I established a rule before the AI touched anything: nothing gets deleted. Ever. Only moved.
Real Example: Gmail Organiser
The AI was going to sort and categorise thousands of emails. My explicit instruction: "Do not delete any emails. If something needs to be removed from the main view, move it to a folder called 'For Review'. I will manually check and delete later."
This meant that if the AI miscategorised something, or moved something important, I could always find it. Nothing was permanently lost. The worst case was "I have to move some emails back."
This principle applies everywhere:
- Files: Move to a "review" folder instead of deleting
- Code: Comment out instead of removing (or use git so you can recover)
- Data: Archive instead of purging
- Backups: Keep the original, work on a copy
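For files, the whole rule fits in a few lines of Python. This is a sketch, not a finished tool, and the "for-review" folder name is just my own convention:

```python
import shutil
from pathlib import Path

def retire_file(path, review_dir="for-review"):
    """Move a file into a review folder instead of deleting it.

    Nothing is permanently lost: the file can always be moved back,
    and the human decides later what actually gets deleted.
    """
    path = Path(path)
    review = Path(review_dir)
    review.mkdir(exist_ok=True)          # create the review folder if needed
    destination = review / path.name
    shutil.move(str(path), str(destination))
    return destination
```

The deletion step stays manual, which is the whole point: the AI only ever moves things somewhere you can inspect.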
AI tools are confident. They will happily delete things if you ask them to, and they won't hesitate or ask "are you sure?" unless you've explicitly told them to. The safety net is yours to build.
What is PII and Why Should You Care?
PII stands for Personally Identifiable Information. It's any data that could be used to identify a specific person. The obvious ones: names, addresses, phone numbers, email addresses, National Insurance numbers, bank details. The less obvious ones: combinations of data that together identify someone (your postcode plus your date of birth plus your job title might be unique to you).
Why this matters for AI tool users:
- Anything you type into a cloud AI might be stored. Even if the provider says they don't train on your data, the data still travels over the internet to their servers. Tools like Claude Code run on your machine, but anything they read is still sent to the model's servers, and files you create might end up on GitHub if you're not careful.
- Git remembers everything. If you accidentally commit a file with your bank details, then delete the file and commit again, the bank details are still in the git history. Anyone who clones your repository can find them.
- AI tools don't know what's sensitive. Claude doesn't know that the string "12-34-56" is your sort code. You have to tell it, or better yet, never give it that data in the first place.
Practical PII Protection
Here's what I actually do:
1. The .gitignore file. This is a file that tells git "never track these files." Mine includes patterns for common sensitive files:
- `.env` files (where people store API keys and passwords)
- `*credentials*` (any file with "credentials" in the name)
- `*secret*`, `*private*` (obvious ones)
- Any file containing raw personal data (I name these with a `-PRIVATE` suffix)
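As a concrete sketch, a minimal `.gitignore` along these lines might look like this (the `-PRIVATE` suffix is my own naming convention, not a standard):

```gitignore
# Secrets and keys
.env
.env.*

# Anything flagged as sensitive by name
*credentials*
*secret*
*private*

# My own convention for files holding raw personal data
*-PRIVATE.*
```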
2. Pre-commit hooks. These are scripts that run automatically before git lets you save your changes. Mine scans for patterns that look like sensitive data (email addresses, things that look like account numbers) and blocks the commit if it finds any. I didn't write this script myself: I described what I wanted to Claude Code and it built it for me. The irony of using AI to protect yourself from AI mistakes is not lost on me.
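I won't pretend this is the exact script the AI wrote for me, but the core idea can be sketched in a few lines of Python. The regex patterns here are illustrative guesses; a real hook needs patterns tuned to your own data:

```python
import re

# Illustrative patterns only -- tune these to whatever YOUR sensitive data
# actually looks like.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),   # email addresses
    re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),      # UK sort codes like 12-34-56
    re.compile(r"\b\d{8}\b"),                  # bare 8-digit runs (account numbers)
]

def find_sensitive(text):
    """Return any substrings that look like sensitive data."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def check_files(paths):
    """Scan files; return True if the commit should be blocked.

    In a real pre-commit hook, `paths` comes from
    `git diff --cached --name-only`, and exiting non-zero
    makes git abort the commit.
    """
    blocked = False
    for path in paths:
        try:
            with open(path, encoding="utf-8", errors="ignore") as f:
                hits = find_sensitive(f.read())
        except OSError:
            continue  # deleted or unreadable file; skip it
        if hits:
            print(f"BLOCKED {path}: looks like sensitive data: {hits[:3]}")
            blocked = True
    return blocked
```

Expect false positives: that's fine. A hook that occasionally makes you double-check a harmless file is far cheaper than one leaked account number.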
3. Separate data files. I keep sensitive data in files that are explicitly excluded from git. The dashboard reads from a local data file that never leaves my machine. The template I share has sample data with fake values.
Git: Your Safety Net (If You Use It Right)
Git is version control. Think of it as an unlimited "undo" button for your entire project. Every time you "commit" (save a snapshot), you can always go back to that exact state.
For non-coders, here's what you need to know:
- Commit often. Before making big changes, save the current state. "It was working 5 minutes ago" is only useful if you saved the state 5 minutes ago.
- Write descriptive messages. "Fixed stuff" is useless. "Added category filter to dashboard" tells you exactly what changed.
- Don't push sensitive data. "Push" means uploading to GitHub (the cloud). Everything on GitHub is potentially public, even "private" repositories if your account gets compromised. See the .gitignore section above.
- Don't force-push. Force-pushing overwrites history. It's the git equivalent of "permanently delete." Normal pushes are safe; force pushes can destroy your collaborators' work (or your own previous work).
The Git History Trap
If you commit a file with your password in it, then delete the file and commit again, the password is STILL in the git history. Pushing this to GitHub means your password is now on the internet, forever, in the history of your repository. If this happens, change the password immediately. Don't try to "clean" the history: treat the old password as compromised.
The Principle of Least Damage
Every time you give an AI tool a capability, ask: "What's the worst that could happen if this goes wrong?"
- Reading files: Low risk. The AI sees your data but doesn't change anything.
- Writing files: Medium risk. It might overwrite something important. (Solution: git commit before writing.)
- Deleting files: High risk. If it deletes the wrong thing and you have no backup, it's gone.
- Running commands: Highest risk. A command can do anything your computer can do, including deleting files, sending data over the internet, or installing software.
- Accessing the internet: High risk if combined with your data. An AI that can read your files AND make web requests could theoretically send your data somewhere.
Claude Code has a permission system for exactly this reason. It asks before running commands. Don't just click "allow" on everything. Read what it's about to do. If you don't understand a command, ask the AI to explain it before running it.
Real Examples of Being Careful
Example: Processing Bank Data
When I built my finance dashboard, I needed to process bank transaction CSVs. Before giving them to Claude Code, I opened the CSV, deleted the "Account Number" and "Sort Code" columns, and saved a new file called "transactions-CLEANED.csv". The AI only ever saw the cleaned version. The original stayed in a folder excluded from git.
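That cleaning step can also be scripted. Here's a sketch in Python; the column names match my bank's export format, so check your own file's header row and adjust:

```python
import csv

# Columns I never want an AI tool to see. These names are from my bank's
# CSV export; yours will differ.
DROP_COLUMNS = {"Account Number", "Sort Code"}

def clean_csv(source, destination):
    """Copy a CSV, leaving out the sensitive columns."""
    with open(source, newline="") as src, open(destination, "w", newline="") as dst:
        reader = csv.DictReader(src)
        kept = [c for c in reader.fieldnames if c not in DROP_COLUMNS]
        writer = csv.DictWriter(dst, fieldnames=kept, extrasaction="ignore")
        writer.writeheader()
        for row in reader:
            writer.writerow(row)   # extra (sensitive) columns are dropped
```

Scripting it matters because you'll do this every month: a manual step you have to remember is a step you'll eventually forget.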
Example: The "Just Move It" Instruction
For the email organiser, my core safety instruction was: "This is a read-and-organise task. You may create new labels and move emails between labels. You may NOT delete emails, send emails, or modify email content. If you need to remove something from view, move it to a label called 'AI-Sorted/For Review'."
Being this explicit matters. AI tools follow instructions literally. If you say "clean up my inbox," it might interpret "clean up" as "delete." If you say "organise by moving to labels, never delete," there's no ambiguity.
Backups: The Boring Thing That Saves You
Before you start any project with data you care about:
- Copy the original data somewhere safe. A different folder, an external drive, a cloud backup. Somewhere the AI tool cannot reach.
- Work on a copy. Always. If the copy gets corrupted, you still have the original.
- Test on a small sample first. Before processing 16,000 emails, process 10. Check the results. Then do 100. Then 1,000. Then all of them.
This is especially important for irreplaceable data. Your wedding photos, your child's first drawings scanned as PDFs, your deceased parent's letters. If you only have one copy, do not let an AI tool anywhere near it until you've made a backup. This isn't paranoia: it's basic file hygiene that most people skip until something goes wrong.
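The "small sample first" step above can be sketched in Python too. This generator hands you progressively larger batches so you can stop and inspect the results between each one (the 10/100/1,000 stages are just the ones I use):

```python
def staged_batches(items, stages=(10, 100, 1000)):
    """Yield progressively larger slices of `items`.

    First the initial 10 items, then up to 100, then up to 1,000,
    then everything left. Check the results between batches before
    letting the next one run.
    """
    start = 0
    for size in stages:
        if start >= len(items):
            return
        yield items[start:size]
        start = size
    if start < len(items):
        yield items[start:]
```

With 16,000 emails this gives you four checkpoints instead of one terrifying all-or-nothing run.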
API Keys and Secrets
If you use any services that give you an API key (a long string of characters that acts as your password to that service), here's the rule: never put it in your code files. Never.
Instead:
- Put it in a `.env` file (a special file for secrets)
- Add `.env` to your `.gitignore` (so it never gets uploaded)
- Tell the AI tool: "Read the API key from the environment variable, do not hardcode it"
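In Python, "read it from the environment" looks like the sketch below. `MY_SERVICE_API_KEY` is a made-up name for illustration; use whatever name your service expects, and in real use the value comes from your shell or a `.env` loader, never from the code:

```python
import os

# Demo-only placeholder so this sketch runs standalone. In real use this
# line does NOT exist -- the variable is set outside the code.
os.environ.setdefault("MY_SERVICE_API_KEY", "placeholder-key-for-demo")

def get_api_key():
    """Read the API key from the environment, never from the code itself."""
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError(
            "MY_SERVICE_API_KEY is not set. Add it to your .env file "
            "(which is in .gitignore) and load it before running."
        )
    return key
```

The failure mode is deliberately loud: if the key is missing, the program stops with a message telling you where it should live, instead of silently running with a blank key.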
If you accidentally publish an API key to GitHub, revoke it immediately and generate a new one. Bots scan GitHub constantly for exposed API keys, and they will find yours within minutes.
What About Docker?
You'll see developers talk about Docker. In simple terms, Docker creates an isolated "container" on your computer, like a sealed room where your application runs. Nothing inside the container can access your main computer's files unless you explicitly allow it.
For non-coders, Docker is relevant because:
- It's a safety boundary. If you run an AI tool inside Docker, it can only access files you've shared with the container. It can't accidentally read your Documents folder or your browser passwords.
- It's reproducible. A Docker container runs the same way on every computer. No "it works on my machine" problems.
- It's how many AI tools are deployed. If you build something and want to share it, Docker packages everything together.
You don't need Docker to get started. I don't use it for most of my projects. Plain files in a folder work fine. But if you're handling sensitive data or want an extra layer of isolation, Docker is worth learning about. It's one of those things that sounds intimidating but is actually just "run this one command and it sets up a sealed environment for you."
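For a flavour of what "one command" means, here's an illustrative invocation. The folder name `./shared` and the `python:3.12` image are arbitrary choices for the example:

```shell
# Only ./shared is visible inside the container (as /data).
# The rest of your machine stays out of reach.
docker run --rm -it -v "$(pwd)/shared:/data" python:3.12 bash
```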
Mac Minis and Always-On AI
There's a growing community of people buying Mac Minis (or similar small computers) specifically to run AI tools 24/7. The idea is: instead of running AI on your main computer where it can access everything, you run it on a separate machine that only has the files you've deliberately put there.
Why people do this:
- Isolation. The AI can't access your main computer's files, emails, or browser data.
- Always on. It can run overnight tasks (like my night-night skill) without keeping your main computer awake.
- Cost control. A Mac Mini runs your AI workloads while your main computer sleeps, saving electricity and wear.
- Experimentation without risk. If something goes wrong on the Mac Mini, your main computer is untouched.
I don't have this setup (yet), but the security benefit is clear: physical separation is the strongest security boundary there is. An AI running on a different computer literally cannot access files on your main computer unless you copy them over.
Common Sense Checklist
Before starting any project with AI tools, run through this:
- What data am I using? Is any of it sensitive? Can I use fake data instead?
- What's the worst case? If the AI goes rogue and does the opposite of what I asked, what's the damage?
- Do I have a backup? Of the original data, before the AI touches it?
- Am I working on a copy? Never modify originals directly.
- Is anything going to the internet? Git push, API calls, cloud storage? Check what's included.
- Have I told the AI what NOT to do? "Move, don't delete." "Read only, don't modify." Explicit constraints.
- Am I reading the commands before approving them? Don't just click "allow" on everything.
- Is my .gitignore set up? Before the first commit, not after.
The Honest Truth
I have made mistakes. I've committed files that probably shouldn't have been committed. I've given AI tools more access than they needed. I've had to go back and clean things up.
The difference between a bad outcome and a disaster is preparation. Every mistake I've made with proper backups in place was a "well, that's annoying, let me restore from the backup." Every mistake without backups would have been a crisis.
Security for non-coders isn't about being perfect. It's about being aware, being cautious with sensitive data, and building habits that protect you when things go wrong. Because things will go wrong. That's not a failure: that's how learning works. The goal is to make sure the consequences are recoverable.
Final Disclaimer
The tools, techniques, and approaches described in this post and across this website are shared for educational purposes. I am not a security professional. You use AI tools, code, and any techniques described here entirely at your own risk. Always test with non-sensitive data first. Always keep backups. If you're unsure about something, ask a professional before proceeding. I take no responsibility for any data loss, security incidents, or other consequences arising from following this guide.