Just Enough to Be Dangerous
Someone reached out recently. A non-technical person who had used Claude to build ERP-like software for the small business where they work. They'd describe what they saw on the site, tell Claude what they wanted it to do, and Claude would give them code to copy and paste. It worked. Their boss loved it. They were thinking about connecting it to Square and going live.
They wanted to know if there were any security concerns. There were.
What they got right
The workflow is completely legitimate. Claude is really good at coding things that have been solved before. Inventory systems, CRUD interfaces, dashboards, integrations with well-documented APIs. This person hadn't given any login information to the AI. What they'd built was a solid proof of concept.
What "it works" doesn't tell you
Square integration requires API keys, which are essentially passwords that give your app access to your real payment and sales data. The risk is that if those keys end up in your code the wrong way (which is easy to do accidentally when building with AI), anyone who views your source code or gains access to your hosting environment can get into your Square account.
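To make the "wrong way" concrete, here's a minimal sketch of the safer pattern: read the key from an environment variable set on the server instead of pasting it into the source. The variable name is just an example I've chosen, not anything Square requires.

```python
import os

def load_square_token(env_var="SQUARE_ACCESS_TOKEN"):
    """Read the API key from the environment instead of the source code.

    A key pasted directly into the code travels with every copy of the
    source, every screenshot, every "can you fix this?" upload. An
    environment variable stays on the machine it was set on.
    """
    token = os.environ.get(env_var)
    if not token:
        # Fail loudly at startup rather than running without credentials.
        raise RuntimeError(f"{env_var} is not set; refusing to start.")
    return token
```

The same idea applies to any credential, not just Square's: if a secret appears as a string literal anywhere in your files, it will eventually appear somewhere you didn't intend.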
When you publish an app to the internet, it becomes accessible to everyone, including people looking for vulnerabilities. Bots constantly crawl the web probing for anything exploitable. AI-generated code can work great, but it doesn't always follow security best practices around things like user authentication, data validation, or access control.
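Access control is the easiest of those to miss, because the app works identically with or without it. Here's a minimal sketch of the check that often gets skipped: verifying the requesting user actually owns a record before returning it. The data shapes are invented for illustration.

```python
# Toy in-memory store standing in for a real database.
ORDERS = {
    101: {"owner": "alice", "total": 42.50},
    102: {"owner": "bob", "total": 17.00},
}

def get_order(order_id, requesting_user):
    """Return an order only if the requesting user owns it."""
    order = ORDERS.get(order_id)
    if order is None:
        return None  # unknown id: reveal nothing
    if order["owner"] != requesting_user:
        # Without this check, anyone who can guess an id sees the record.
        raise PermissionError("not your order")
    return order
```

A lookup-by-id endpoint with no ownership check passes every demo you run as yourself, then leaks every customer's data the day someone else tries sequential ids.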
The accountability gap
AI lacks an accountability mechanism. If it writes something that leaks customer data or permanently deletes files from your machine (which can happen), the AI answers for nothing. The way I read that: if I deliver a tool built on AI-generated code, I am accountable. I have fancy insurance and years of experience as a developer. And with all of that, I still have to clean up messes that AI has made. Sometimes it keeps me up at night.
For someone building without that background, the risk is the same but the safety net isn't.
Keep building but...
Use made-up or placeholder data while you're building and testing. Hold off on connecting a live POS until a technical person has reviewed how your credentials are stored. Don't share the URL publicly until someone has looked at your authentication and access control. And Claude has a /security-audit feature worth running on anything you plan to deploy.
The gap between "it works" and "it's ready" is where most of the risk lives. AI doesn't narrate that gap to you. It just builds.
I'm sure this will be outdated very soon.