<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Zero-Entry</title>
    <link>https://zero-entry.co.za/</link>
    <description>Recent content on Zero-Entry</description>
    <generator>Hugo -- 0.147.7</generator>
    <language>en-us</language>
    <lastBuildDate>Sat, 18 Apr 2026 23:08:30 +0200</lastBuildDate>
    <atom:link href="https://zero-entry.co.za/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>30 Days of a Honeypot at Home</title>
      <link>https://zero-entry.co.za/posts/30-days-of-a-honeypot-at-home/</link>
      <pubDate>Sat, 18 Apr 2026 20:30:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/30-days-of-a-honeypot-at-home/</guid>
      <description>Standing up T-Pot on a segmented VLAN behind OPNsense, opening a curated set of ports to a residential IP, and writing down what 30 days of unfiltered internet traffic actually looked like.</description>
      <content:encoded><![CDATA[<p>I finally got around to putting a honeypot on the public side of my home connection. I wasn&rsquo;t trying to catch APTs. I wanted to see what hits a random residential IP when nothing is hiding it.</p>
<p>This is a notes post about standing it up, how it&rsquo;s contained, and what actually showed up in the logs after a month.</p>
<h2 id="why-bother">Why bother</h2>
<p>Most threat intelligence I read describes the internet as a battlefield. Every unpatched device is five minutes from compromise. Every IP gets 30,000 probes a day. The numbers are usually correct. They aren&rsquo;t useful unless you can map them to what your environment looks like.</p>
<p>I wanted my own baseline. Not a vendor&rsquo;s feed, not an aggregated report. What does my ISP-assigned IP attract, right now, and what does the traffic look like when you strip out the marketing spin?</p>
<p>Secondary reason: I wanted to segment the network, and a honeypot is a good forcing function. My homelab had been flat for too long. Nothing makes you VLAN a network faster than hanging something deliberately exposed off it.</p>
<h2 id="threat-model-and-ground-rules">Threat model and ground rules</h2>
<p>Before anything went online, I wrote down what I was willing to tolerate and what I wasn&rsquo;t.</p>
<p>Willing to accept:</p>
<ul>
<li>My IP appearing on scanning blocklists.</li>
<li>My ISP sending me a polite note. (They never did.)</li>
<li>Getting buried in logs I&rsquo;d then have to process.</li>
</ul>
<p>Not willing to accept:</p>
<ul>
<li>Anything the honeypot attracts touching my real LAN.</li>
<li>A honeypot compromise turning into a pivot into anything else I own.</li>
<li>Outbound traffic that looks like I&rsquo;m participating in someone else&rsquo;s botnet.</li>
</ul>
<p>Those three constraints drove every architectural decision that followed.</p>
<h2 id="architecture">Architecture</h2>
<p>The honeypot runs as a single VM on a secondary host, isolated on its own VLAN behind OPNsense. It has one purpose, no shared storage, no shared credentials, and nothing legitimate behind it. If it gets popped, I wipe the VM from snapshot and start again.</p>
<p>The physical picture:</p>
<ul>
<li><strong>Honey-VM</strong>: 4 vCPU, 8 GB RAM, 60 GB disk. Ubuntu 22.04 base, T-Pot on top.</li>
<li><strong>VLAN 66</strong>: dedicated &ldquo;DMZ-lite&rdquo;. No inter-VLAN access. DHCP scoped tight.</li>
<li><strong>OPNsense</strong>: port-forwards a curated set of TCP ports from WAN to the honey-VM.</li>
<li><strong>Suricata</strong> on the OPNsense WAN interface logs everything hitting those ports, independent of what the honeypot itself sees.</li>
<li><strong>Outbound rules on VLAN 66</strong>: no outbound except to a specific syslog collector and to Cloudflare DoH for DNS. No SSH out, no SMB out, no SMTP out, no arbitrary outbound anything.</li>
</ul>
<p>The last point is the one that matters most. A honeypot with open outbound is a honeypot that can participate in the abuse you&rsquo;re trying to study. If Cowrie accepts a shell and the intruder tries to <code>curl</code> a second-stage payload, they should fail at the network, not at the endpoint.</p>
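<p>In pf terms, which is what OPNsense compiles its GUI rules down to, that outbound policy is three rules. A sketch only; the interface name, collector address, and DoH resolver below are placeholders for whatever your environment uses:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"># Allow syslog to the collector and DoH to the resolver; log-drop the rest.
pass in quick on vlan66 proto udp from vlan66:network to 10.0.40.10 port 514
pass in quick on vlan66 proto tcp from vlan66:network to 1.1.1.1 port 443
block in log quick on vlan66 from vlan66:network to any
</code></pre></div>
<p>The logged drops from the last rule double as an alert source: any hit means something on the honeypot tried to leave.</p>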
<p>Ports exposed to the internet: 22, 23, 80, 443, 445, 1433, 2222, 3306, 3389, 5060, 5900, 8080. Nothing else.</p>
<h2 id="stack">Stack</h2>
<p>I used T-Pot for the heavy lifting. It&rsquo;s maintained, it aggregates sensible honeypots, and its dashboards are ready out of the box. The components that mattered for me:</p>
<ul>
<li><strong>Cowrie</strong> on 22, 23, and 2222. SSH and Telnet. Logs full session transcripts.</li>
<li><strong>Dionaea</strong> on 445, 1433, 3306, 5060. Protocol emulation for SMB, MSSQL, MySQL, SIP.</li>
<li><strong>Heralding</strong> on anything else that smells like a login prompt.</li>
<li><strong>Honeytrap</strong> as the generic catch-all TCP listener.</li>
<li><strong>Snare / Tanner</strong> serving HTTP decoy content on 80 and 8080.</li>
<li><strong>Elasticsearch + Kibana</strong> for the dashboards, with data shipped out to my own Loki instance as a backup.</li>
</ul>
<p>T-Pot&rsquo;s internal firewalling is fine, but I don&rsquo;t rely on it. The OPNsense rules are the real enforcement. If T-Pot broke tomorrow, nothing on VLAN 66 would suddenly start talking to my real network.</p>
<h2 id="standing-it-up">Standing it up</h2>
<p>Provisioning was uneventful once the segmentation was in place.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl"><span class="c1"># T-Pot install on a fresh Ubuntu 22.04</span>
</span></span><span class="line"><span class="cl">env bash -c <span class="s2">&#34;</span><span class="k">$(</span>curl -sL https://ghst.ly/tpot-install<span class="k">)</span><span class="s2">&#34;</span>
</span></span></code></pre></div><p>The installer handles Docker, pulls the containers, and wires up the reverse proxy for the Kibana side. The one thing I changed was restricting the admin web interface to the management VLAN, reachable only over WireGuard.</p>
<p>SSHD for actual host management got moved to a non-standard port on the management interface. Cowrie owns 22 on the WAN side.</p>
<p>Before opening the firewall, I ran a full scan from an external VPS against the advertised ports to make sure only the intended services responded, and that the banners looked realistic enough to not scream &ldquo;honeypot&rdquo; on the first handshake. A couple of Cowrie defaults were too obvious (the default SSH version string, for one) and needed tweaking. Anyone running OpenCanary fingerprints or Shodan&rsquo;s honeypot detector will still figure it out. Most bots won&rsquo;t.</p>
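<p>The verification scan reduces to a diff between what answered and what was intended. A sketch against nmap&rsquo;s greppable output; the scan line below is a synthetic sample standing in for the real <code>nmap -p- -oG -</code> output from the VPS:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"># Intended exposure list, as configured on the firewall.
intended="22 23 80 443 445 1433 2222 3306 3389 5060 5900 8080"
# One synthetic line of nmap greppable output; replace with the real scan.
scan='Host: 203.0.113.7 () Ports: 22/open/tcp//ssh///, 8080/open/tcp//http-proxy///, 9999/open/tcp//-///'
# Pull out every port reported open and flag anything not on the list.
for p in $(echo "$scan" | grep -oE '[0-9]+/open' | cut -d/ -f1); do
  case " $intended " in
    *" $p "*) echo "expected open: $p" ;;
    *)        echo "UNEXPECTED open: $p" ;;
  esac
done
</code></pre></div>
<p>Here the stray 9999 gets flagged; in the real run, any unexpected line means the firewall and your intent have drifted apart.</p>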
<h2 id="first-24-hours">First 24 hours</h2>
<p>I expected a slow ramp while DNS caches and scanners noticed the IP. That wasn&rsquo;t the experience.</p>
<p>Within 12 minutes of opening port 22, the first SSH login attempt came in. Inside the first hour: 340 SSH attempts from 47 unique source IPs. By the end of the first day: 8,200 SSH attempts, 1,100 HTTP requests, and around 60 SMB connections.</p>
<p>There is no onboarding period for a public IP. You&rsquo;re in the database already. Opening a port just tells the scanners that something is finally listening.</p>
<h2 id="what-30-days-actually-looked-like">What 30 days actually looked like</h2>
<p>Rough totals at the 30-day mark (Suricata plus T-Pot aggregated):</p>
<ul>
<li><strong>SSH and Telnet attempts</strong>: 412,000 across 22, 23, and 2222, from 14,300 unique source IPs.</li>
<li><strong>HTTP requests to honeypot web roots</strong>: 28,400. Mostly scanner fingerprints and path probes.</li>
<li><strong>SMB connections</strong>: 3,900.</li>
<li><strong>MSSQL login attempts</strong>: 1,100.</li>
<li><strong>RDP connections</strong>: 9,700, almost all from a handful of subnets running NLA probes.</li>
<li><strong>SIP INVITE floods</strong>: two distinct campaigns, one targeting Asterisk defaults, one targeting a specific FreePBX module.</li>
</ul>
<p>Geography is the least interesting dimension and the one vendors love to lead with. Source IPs spread across 90+ countries. That maps to compromised hosts, not operator location. Treating a GeoIP heatmap as a map of threat actors is a mistake.</p>
<p>The credential side is more useful. Top ten SSH username/password combinations over the 30 days, in order:</p>
<ol>
<li><code>root</code> / <code>root</code></li>
<li><code>admin</code> / <code>admin</code></li>
<li><code>root</code> / <code>123456</code></li>
<li><code>root</code> / <code>password</code></li>
<li><code>admin</code> / <code>password</code></li>
<li><code>user</code> / <code>user</code></li>
<li><code>ubnt</code> / <code>ubnt</code></li>
<li><code>pi</code> / <code>raspberry</code></li>
<li><code>root</code> / <code>1234</code></li>
<li><code>support</code> / <code>support</code></li>
</ol>
<p>None of those will surprise anyone who&rsquo;s spent time on this. They confirm what the big honeypot operators have been saying for years: the low end of the attack surface is stuck on the same dictionary it&rsquo;s been stuck on for a decade, because it keeps working.</p>
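<p>That table falls straight out of Cowrie&rsquo;s JSON log. A sketch of the reduction, assuming <code>jq</code> is available; the inline events below stand in for the real <code>cowrie.json</code>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"># Count username/password pairs from cowrie.login.failed events.
# Three inline sample events; pipe the real cowrie.json in instead.
printf '%s\n' \
  '{"eventid":"cowrie.login.failed","username":"root","password":"root"}' \
  '{"eventid":"cowrie.login.failed","username":"root","password":"root"}' \
  '{"eventid":"cowrie.login.failed","username":"admin","password":"admin"}' \
| jq -r 'select(.eventid=="cowrie.login.failed") | "\(.username)/\(.password)"' \
| sort | uniq -c | sort -rn
</code></pre></div>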
<h2 id="what-the-payloads-looked-like">What the payloads looked like</h2>
<p>Cowrie logs full session transcripts, which is the part worth reading. Patterns I saw repeatedly:</p>
<ul>
<li><code>uname -a; cat /proc/cpuinfo; free -m</code> as environment fingerprinting before any payload drop. The bot wants to know whether it landed on an ARM router, a MIPS camera, or an x86 box, so it can pull the right binary.</li>
<li>A <code>wget</code> or <code>curl</code> chain pointing at a staging server, usually on port 80 over a direct IP with no DNS. Almost always a short-lived URL, dead within days.</li>
<li>A <code>chmod 777</code> on the downloaded binary, followed by <code>./&lt;binary&gt;</code> and a quick <code>rm</code> to clean up.</li>
<li>Busybox-style commands with shell tricks to survive minimal environments.</li>
</ul>
<p>Three payloads I captured and detonated in an isolated environment later:</p>
<ul>
<li>A Mirai variant targeting MIPS and ARM, with the usual hardcoded C2 list.</li>
<li>A Monero miner compiled for x86_64 with an embedded pool address and worker ID.</li>
<li>An XorDDoS dropper with the &ldquo;encrypted&rdquo; strings still trivially XOR-decodable against a one-byte key.</li>
</ul>
<p>Nothing novel. That&rsquo;s the point. The mass of the internet&rsquo;s attack noise is bots spraying five-year-old payloads at anything that looks like a vulnerable edge device. A residential IP running no listening services still gets scanned, but every inbound SYN dies with a reset and the bots move on. Exposing ports makes you legible to the layer that scans for this kind of target.</p>
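<p>&ldquo;Trivially XOR-decodable&rdquo; is literal: 256 candidate keys, one loop. A sketch of the brute force; the hex string below is a synthetic sample, not bytes from the actual dropper:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"># Brute-force a one-byte XOR key; keep printable decodes with an http marker.
hex=2a363632786d6d2170   # synthetic ciphertext
key=0
while [ "$key" -le 255 ]; do
  out="" ok=1 i=0
  while [ "$i" -lt "${#hex}" ]; do
    byte=$(( 16#${hex:i:2} ^ key ))
    # Reject keys that produce non-printable bytes.
    if [ "$byte" -lt 32 ] || [ "$byte" -gt 126 ]; then ok=0; break; fi
    out+=$(printf "\\$(printf '%03o' "$byte")")
    i=$(( i + 2 ))
  done
  if [ "$ok" -eq 1 ]; then
    case $out in *http*) printf 'key=0x%02x: %s\n' "$key" "$out" ;; esac
  fi
  key=$(( key + 1 ))
done
</code></pre></div>
<p>Against real samples the marker is whatever string you expect in C2 config: an IP, a URL scheme, a pool address.</p>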
<h2 id="the-quieter-more-interesting-traffic">The quieter, more interesting traffic</h2>
<p>Once you filter out the SSH brute-force floor, and it is a floor of roughly 300 to 600 attempts per hour, the rest gets more varied.</p>
<p>Log4Shell probes still show up on HTTP. More than four years after the advisory, JNDI probes are a standing wave. Most point at self-hosted Burp collaborators or long-dead VPS callbacks. Somebody, somewhere, is still paying for a scanner that fires these shots and never checks whether anyone answered.</p>
<p>A handful of requests were clearly scripted against specific CVEs:</p>
<ul>
<li>GPON router authentication bypass (CVE-2018-10561 / 10562). Still hitting in 2026.</li>
<li>Various Ivanti and Fortinet path traversals.</li>
<li>Confluence OGNL injection strings.</li>
<li>Generic WordPress <code>xmlrpc.php</code> pingback fishing.</li>
</ul>
<p>The unusual one: a small cluster of requests that tried to negotiate TLS with an SNI matching a real banking domain. No credentials, no follow-up. Probably a scanner doing inventory for cert transparency lookups. Possibly something less innocent. I don&rsquo;t have enough data to tell, and that&rsquo;s the honest answer.</p>
<h2 id="operational-reality">Operational reality</h2>
<p>A honeypot is not a set-and-forget box. In the first week I burned an evening chasing false positives and another fixing log rotation before a partition filled. The real costs:</p>
<ul>
<li>Disk grows fast. Cowrie session logs plus Elasticsearch indices ate about 18 GB in 30 days. That&rsquo;s cheap, but it isn&rsquo;t free.</li>
<li>Containers drift. Watchtower handles the weekly pull. I still review breaking changes in the T-Pot release notes before merging.</li>
<li>Elasticsearch is memory-hungry and will swap itself into uselessness if you underprovision. 8 GB is the practical floor.</li>
<li>Alerting is where this gets useful. A 500-per-hour SSH baseline is noise. A successful Cowrie shell that persists for more than 30 seconds, or any outbound hit blocked at the OPNsense rule, is signal. Those page me. Everything else goes to a dashboard I check when I feel like it.</li>
</ul>
<p>The alerting config is where most of my ongoing time goes. Without it, the whole thing is a pretty dashboard.</p>
<h2 id="what-id-change">What I&rsquo;d change</h2>
<p>A few things I&rsquo;ll do on the next iteration:</p>
<p>Split the honeypot across two IPs. Run Cowrie on one, everything else on the other. The SSH noise crowds the indices and makes queries slower than they need to be.</p>
<p>Move the dashboards off the honey-VM entirely. Shipping to an external Loki instance is already half the work. The remaining Kibana stack on-box is there for convenience, not necessity.</p>
<p>Add a second outbound-blocking layer inside the VM itself. Defence in depth against a container escape I&rsquo;m still not fully satisfied with.</p>
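<p>For that in-VM layer, the likely shape is an nftables output hook mirroring the OPNsense policy. A sketch; the collector address and resolver are placeholders:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text">table inet egress {
  chain out {
    type filter hook output priority 0; policy drop;
    oif "lo" accept
    ct state established,related accept
    ip daddr 10.66.0.250 udp dport 514 accept comment "syslog collector"
    ip daddr 1.1.1.1 tcp dport 443 accept comment "DoH"
  }
}
</code></pre></div>
<p>It duplicates the firewall rules on purpose. Root on the VM can flush this table, which is exactly why it&rsquo;s a second layer and not the only one.</p>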
<p>Log rolling PCAPs on a 7-day window. Right now I only have what Suricata and the honeypots chose to log. Full packet captures would let me revisit sessions I under-investigated at the time.</p>
<h2 id="does-this-change-what-i-do-at-work">Does this change what I do at work</h2>
<p>Partially. It doesn&rsquo;t change how I think about APT-level adversaries. Nothing I saw in 30 days would strain a reasonable environment.</p>
<p>What it changes is how I talk about the baseline. The background radiation of the internet is real, it&rsquo;s measurable, and it doesn&rsquo;t stop. Any machine with an unpatched edge service survives hours, not days. Any default credential on a public interface is already compromised. You just haven&rsquo;t noticed yet.</p>
<p>That isn&rsquo;t a marketing line. That&rsquo;s 412,000 login attempts across 30 days on one residential IP running an obvious honeypot.</p>
<h2 id="closing">Closing</h2>
<p>Segment first. Then break something on purpose, in a place where it can&rsquo;t reach anything you care about. The logs that come back are more honest than any vendor report.</p>
]]></content:encoded>
    </item>
    <item>
      <title>WTH I&#39;m Doing RF Now: RTL-SDR &#43; HackRF One (and the dumb problems I hit)</title>
      <link>https://zero-entry.co.za/posts/wth-im-doing-rf-now/</link>
      <pubDate>Sun, 22 Feb 2026 00:00:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/wth-im-doing-rf-now/</guid>
      <description>First sessions with RTL-SDR and HackRF One: scanning 433 MHz, fixing udev permissions, building a passive logger, and the practical lessons from the first two hours.</description>
      <content:encoded><![CDATA[<p><img loading="lazy" src="/images/Pasted%20image%2020260219202312.png"></p>
<p>I&rsquo;ve started digging into RF, meaning anything noisy in the air that my SDR can see. This is a quick log of the first sessions using an RTL-SDR (cheap, RX-only) and a HackRF One (wider bandwidth, TX-capable, which stays off outside a legal setup).</p>
<p>This isn&rsquo;t a decoding write-up. The goal for now is observation: watch the spectrum, log activity, and build something useful.</p>
<h2 id="the-kit">The kit</h2>
<ul>
<li><strong>RTL-SDR</strong> (RTL2832U + R820T): cheap receive, wide community support, good for learning.</li>
<li><strong>HackRF One</strong>: wider tuning range, bigger bandwidth, better lab potential.</li>
</ul>
<p>Antennas matter more than most people want to admit. A random wire will pick something up, but it&rsquo;ll also mislead you.</p>
<h2 id="what-im-trying-to-do-v1">What I&rsquo;m trying to do (v1)</h2>
<p>Three things:</p>
<ol>
<li>Scan a band, starting with the 433 MHz junk drawer</li>
<li>Log energy and activity over time to CSV</li>
<li>Watch live when a remote is pressed or something triggers</li>
</ol>
<p>No demodulation, no protocol reversing yet. Just what&rsquo;s alive and when.</p>
<h2 id="rtl-sdr-first-scans-and-the-empty-csv-problem">RTL-SDR: first scans and the empty CSV problem</h2>
<p><code>rtl_power</code> is the right starting point for band surveys:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rtl_power -f 430M:440M:50k -i <span class="m">1</span> -g <span class="m">30</span> -e 15s /tmp/rtl_433_test.csv
</span></span><span class="line"><span class="cl">head -n <span class="m">3</span> /tmp/rtl_433_test.csv
</span></span></code></pre></div><p>Two things showed up immediately:</p>
<ul>
<li>Frequency hops, FFT bins, the tool doing its job.</li>
<li>This line:</li>
</ul>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-fallback" data-lang="fallback"><span class="line"><span class="cl">[R82XX] PLL not locked!
</span></span></code></pre></div><p>It looks bad. In practice it&rsquo;s common with these dongles and the data is often usable anyway. If the CSV comes back empty or obviously broken, try adjusting the frequency range slightly, dropping the gain, swapping the USB port or cable, or ruling out power starvation from a weak USB supply.</p>
<p>The other thing: the numbers move even when you think nothing is happening. Your receiver sees something constantly. The job is separating signal from noise and logging it so you can compare runs over time.</p>
<h2 id="hackrf-the-access-denied-wall">HackRF: the access denied wall</h2>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">hackrf_info
</span></span></code></pre></div><div class="highlight"><pre tabindex="0" class="chroma"><code class="language-fallback" data-lang="fallback"><span class="line"><span class="cl">hackrf_open() failed: Access denied (insufficient permissions) (-1000)
</span></span></code></pre></div><p>Linux has the hardware; your user doesn&rsquo;t. Fix it with udev rules.</p>
<h3 id="the-fix">The fix</h3>
<p>Add yourself to the plugdev group:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo usermod -aG plugdev <span class="nv">$USER</span>
</span></span><span class="line"><span class="cl"><span class="c1"># log out and back in after this</span>
</span></span></code></pre></div><p>Check whether HackRF udev rules exist for your distro. If not, create them:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo nano /etc/udev/rules.d/53-hackrf.rules
</span></span></code></pre></div><p>Rule:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"><span class="line"><span class="cl">SUBSYSTEM==&#34;usb&#34;, ATTR{idVendor}==&#34;1d50&#34;, ATTR{idProduct}==&#34;6089&#34;, MODE=&#34;0660&#34;, GROUP=&#34;plugdev&#34;
</span></span></code></pre></div><p>Reload:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">sudo udevadm control --reload-rules
</span></span><span class="line"><span class="cl">sudo udevadm trigger
</span></span></code></pre></div><p>After that, <code>hackrf_info</code> works.</p>
<h2 id="driver-detach-and-reattach-noise">Driver detach and reattach noise</h2>
<p>RTL tools print this during normal operation:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-fallback" data-lang="fallback"><span class="line"><span class="cl">Detached kernel driver
</span></span><span class="line"><span class="cl">...
</span></span><span class="line"><span class="cl">Reattached kernel driver
</span></span></code></pre></div><p>The SDR tooling takes temporary ownership of the USB device. It becomes a problem when another service grabs the device at the same time, or when you switch tools quickly and the reattach doesn&rsquo;t complete cleanly. Unplug and replug is a valid fix.</p>
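<p>When the other claimant is the stock DVB-T kernel driver (<code>dvb_usb_rtl28xxu</code>), which is the usual case with RTL2832U dongles, blacklisting it makes the detach permanent:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"># /etc/modprobe.d/blacklist-rtlsdr.conf
blacklist dvb_usb_rtl28xxu
</code></pre></div>
<p>Replug the dongle (or reboot) afterwards so the module actually releases it.</p>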
<h2 id="current-workflow">Current workflow</h2>
<h3 id="1-quick-band-scan">1. Quick band scan</h3>
<p>Pick a known band, collect a short sample, inspect.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">rtl_power -f 430M:440M:50k -i <span class="m">1</span> -g <span class="m">30</span> -e 60s ./rf_433_1min.csv
</span></span></code></pre></div><h3 id="2-trigger-devices-while-logging">2. Trigger devices while logging</h3>
<p>Press a remote, ring a doorbell, whatever you own and are allowed to test. Watch what shows up.</p>
<h3 id="3-passive-logger">3. Passive logger</h3>
<p>A script that samples a band, writes a CSV row with timestamp and strongest bins, and repeats for hours or days. The goal is trend visibility: activity spikes at these times, at these frequencies.</p>
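<p>The per-row reduction is the whole trick. rtl_power&rsquo;s CSV layout is date, time, f_low, f_high, bin_size, samples, then one dB reading per bin; the row below is synthetic. Wrap this in a loop with a fresh <code>rtl_power -e</code> call per iteration and you have the logger:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"># Reduce one rtl_power CSV row to: timestamp, strongest bin, its frequency.
row='2026-02-22, 10:00:00, 430000000, 440000000, 50000, 50, -30.5, -12.2, -28.0'
f_low=$(echo "$row" | cut -d, -f3 | tr -d ' ')
step=$(echo "$row" | cut -d, -f5 | tr -d ' ')
# Strongest reading across the dB columns (fields 7 onwards).
peak=$(echo "$row" | cut -d, -f7- | tr -d ' ' | tr ',' '\n' | sort -g | tail -n1)
# 1-based index of that reading, i.e. which bin it landed in.
bin=$(echo "$row" | cut -d, -f7- | tr -d ' ' | tr ',' '\n' | grep -nxF -- "$peak" | head -n1 | cut -d: -f1)
stamp=$(echo "$row" | cut -d, -f1-2 | tr -d ',')
echo "$stamp peak_db=$peak freq_hz=$(( f_low + (bin - 1) * step ))"
</code></pre></div>
<p>For the synthetic row this prints <code>2026-02-22 10:00:00 peak_db=-12.2 freq_hz=430050000</code>: a spike one bin above 430 MHz. One such line per interval is all the trend analysis needs.</p>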
<h3 id="4-long-runs-in-screen">4. Long runs in screen</h3>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-bash" data-lang="bash"><span class="line"><span class="cl">screen -S rfwatch
</span></span><span class="line"><span class="cl"><span class="c1"># run your loop / script</span>
</span></span><span class="line"><span class="cl"><span class="c1"># detach: Ctrl+A then D</span>
</span></span><span class="line"><span class="cl">screen -r rfwatch
</span></span></code></pre></div><h2 id="lessons-from-the-first-two-hours">Lessons from the first two hours</h2>
<ul>
<li>Antenna placement matters more than gain settings. Move it 30 cm and signal becomes noise.</li>
<li>Gain is not volume. Too much gain makes every bin look busy and kills your ability to compare runs.</li>
<li>433 MHz is crowded. Good for learning, bad for clean data.</li>
<li>A CSV you can graph beats ten live waterfall sessions.</li>
<li>Fix udev permissions before you need them, not during a session.</li>
</ul>
<h2 id="rtl-sdr-vs-hackrf">RTL-SDR vs HackRF</h2>
<p><strong>RTL-SDR</strong>: default always-on receiver. Cheap enough to leave plugged in and logging. Good for band surveys and learning.</p>
<p><strong>HackRF One</strong>: used for broader coverage and lab work. TX stays off unless the environment is controlled and the use is legal.</p>
<h2 id="whats-next">What&rsquo;s next</h2>
<ul>
<li>A 24-hour monitor for 433 MHz, then other bands</li>
<li>Lightweight analysis: top active frequencies, time-of-day patterns, spikes worth investigating</li>
<li>Mapping RF fingerprints in the environment: gates, remotes, alarms, sensors. What&rsquo;s constant vs what&rsquo;s event-driven.</li>
</ul>
<h2 id="legal">Legal</h2>
<p>I&rsquo;m monitoring what I&rsquo;m allowed to monitor, on equipment I own, in environments where I have permission. RF gets murky fast if you treat it like network recon without thinking about it first.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Security Theatre in Enterprise Networks</title>
      <link>https://zero-entry.co.za/posts/security-theatre-in-enterprise-networks/</link>
      <pubDate>Thu, 19 Feb 2026 19:00:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/security-theatre-in-enterprise-networks/</guid>
      <description>Why security controls that look good on paper often fail under basic adversarial pressure - field notes from real assessments.</description>
      <content:encoded><![CDATA[<blockquote>
<p>Disclaimer: The examples below are anonymised and aggregated across multiple engagements. The goal is to highlight recurring patterns, not embarrass any specific organisation.</p></blockquote>
<p><img loading="lazy" src="/images/Pasted%20image%2020260219201318.png"></p>
<h1 id="security-theatre-field-notes-from-the-inside">Security Theatre: Field Notes from the Inside</h1>
<h2 id="the-scene">The Scene</h2>
<p>Most environments I assess are not wide open. They have firewalls, policies, and controls that look sensible on a slide deck.</p>
<p>The same weaknesses keep showing up anyway. Security gets implemented as a compliance checklist rather than an adversarial system.</p>
<p>That is security theatre: controls that exist, but fail under pressure.</p>
<p>It shows up in small things (a forgotten hyperlink), legacy things (Telnet still running), and complex things (multi-tenant routing edge cases that quietly break the idea of &ldquo;internal&rdquo;).</p>
<p>Most incidents do not start with a zero-day. They start with an assumption.</p>
<h2 id="the-meaning">The Meaning</h2>
<p>Security theatre is security that only works from one angle.</p>
<ul>
<li>It satisfies an audit requirement</li>
<li>It meets a policy statement</li>
<li>It was set up once and never re-validated</li>
<li>It relies on &ldquo;internal trust&rdquo; as a boundary</li>
</ul>
<p>It fails the moment an attacker does basic due diligence.</p>
<h3 id="real-security-vs-theatre">Real Security vs Theatre</h3>
<table>
  <thead>
      <tr>
          <th>Real security</th>
          <th>Security theatre</th>
      </tr>
  </thead>
  <tbody>
      <tr>
          <td>Tested adversarially</td>
          <td>Signed off administratively</td>
      </tr>
      <tr>
          <td>Assumes compromise is possible</td>
          <td>Assumes trust is default</td>
      </tr>
      <tr>
          <td>Measured continuously</td>
          <td>Configured once</td>
      </tr>
      <tr>
          <td>Has clear ownership</td>
          <td>&ldquo;Someone&rdquo; owns it</td>
      </tr>
      <tr>
          <td>Breaks safely</td>
          <td>Breaks silently</td>
      </tr>
  </tbody>
</table>
<h2 id="a-simple-test">A Simple Test</h2>
<p>Rate each control against three questions:</p>
<ol>
<li>Is it verified? Not configured, but tested.</li>
<li>Is it monitored? Will you know when it fails?</li>
<li>Is it owned? Is someone accountable for its health?</li>
</ol>
<p>Two &ldquo;no&rdquo; answers mean you do not have a control.</p>
<h2 id="field-notes">Field Notes</h2>
<h3 id="1-internal-rfc1918-space-leaking-other-peoples-infrastructure">1. &ldquo;Internal&rdquo; RFC1918 Space Leaking Other People&rsquo;s Infrastructure</h3>
<p>On one client network, we found a large number of devices in the <code>172.30.x.x</code> and <code>172.31.x.x</code> ranges. Normal enough.</p>
<p>The exposed services were not normal:</p>
<ul>
<li>Cisco IOS devices presenting known vulnerabilities</li>
<li>OpenSSH issues flagged, including severe exposure classes</li>
<li>SNMP responding with default community strings (<code>public</code>)</li>
<li>Telnet open in cleartext</li>
</ul>
<p>When we pulled device information over SNMP, the fingerprints did not match the client&rsquo;s environment. The devices looked like other organisations&rsquo; routers, visible from within this client&rsquo;s own network.</p>
<p>Whether it was a provider-side forwarding mistake, a multi-tenant routing bleed, or a design choice nobody fully understood, the problem is the same:</p>
<blockquote>
<p>&ldquo;Internal&rdquo; is a routing decision, not a security boundary.</p></blockquote>
<p>A network can be &ldquo;private&rdquo; on paper while being multi-tenant in practice. Technically not the client&rsquo;s fault. Still their problem.</p>
<p>The pessimistic outlook: unintended trust paths, lateral movement opportunities, messy incident response (&ldquo;is that ours?&rdquo; becomes a real question), and reputational fallout if the client network becomes a stepping stone into someone else&rsquo;s infrastructure.</p>
<h3 id="2-monitoring-interfaces-that-become-management-interfaces">2. Monitoring Interfaces That Become Management Interfaces</h3>
<p>SNMP is often justified as &ldquo;monitoring only.&rdquo;</p>
<p>In practice it provides device names, interfaces, routing hints, OS and platform fingerprinting, uptime, load, and depending on configuration, more. When SNMPv2c is exposed with default communities, &ldquo;monitoring&rdquo; becomes enumeration on demand.</p>
<p>The control exists (&ldquo;we have monitoring&rdquo;), but the threat model is absent (&ldquo;monitoring endpoints are high-value targets&rdquo;).</p>
<p>What to do instead:</p>
<ul>
<li>SNMPv3 with auth and privacy where possible</li>
<li>Strict ACLs so only the monitoring system can reach SNMP</li>
<li>Remove default communities, remove write communities unless there is a hard requirement</li>
<li>Continuous checks for <code>public</code>/<code>private</code> drift</li>
</ul>
<h3 id="3-telnet-in-2026">3. Telnet in 2026</h3>
<p>Telnet still appears internally more often than people admit. Rarely a deliberate decision. More often a leftover: legacy device, a &ldquo;temporary&rdquo; troubleshooting session, vendor defaults, a project that ended but the port stayed open.</p>
<p>What to do:</p>
<ul>
<li>Disable Telnet everywhere, policy plus validation</li>
<li>If legacy hardware makes that impossible, isolate it and treat it as hostile</li>
<li>Enforce SSH and modern cipher suites for all management</li>
<li>Log and alert on any Telnet session as a high-signal event</li>
</ul>
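<p>The last bullet maps directly onto a Suricata rule. A sketch; the sid and message text are arbitrary:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-text" data-lang="text"># Alert on any inbound Telnet connection attempt to internal hosts.
alert tcp any any -> $HOME_NET 23 (msg:"Telnet connection attempt"; flags:S; sid:1000023; rev:1;)
</code></pre></div>
<p>In an environment where Telnet should not exist, this fires rarely and means something every time it does.</p>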
<h3 id="4-the-easiest-brand-hijack-is-a-broken-hyperlink">4. The Easiest Brand Hijack Is a Broken Hyperlink</h3>
<p>One of the more sobering findings I have seen had no CVE attached.</p>
<p>A client website linked to their social media profiles. The X and Instagram links pointed to accounts that did not exist. We registered the usernames the website was implicitly endorsing.</p>
<p>Any visitor clicking those links landed on an attacker-controlled profile that looked official by association. I was, briefly, the client&rsquo;s new social media manager.</p>
<p>The organisation likely had solid perimeter controls. Their public brand identity had an unguarded back door.</p>
<p>What to do:</p>
<ul>
<li>Treat web presence as an asset inventory: domains, handles, app store listings</li>
<li>Reserve key usernames even on platforms you do not actively use</li>
<li>Run a quarterly external exposure review with marketing and security together</li>
<li>Add a brand protection checklist to release processes</li>
</ul>
<h3 id="5-hidden-ssids-and-other-comforting-myths">5. Hidden SSIDs and Other Comforting Myths</h3>
<p>I have assessed environments where the guest Wi-Fi was modern and well-configured (WPA3), additional SSIDs existed as &ldquo;hidden,&rdquo; and those SSIDs still covered public areas: roads, car parks, the perimeter.</p>
<p>Hidden SSIDs do not stop discovery. The tooling is trivial and takes minutes. They stop casual users from seeing the network in a dropdown list.</p>
<p>What to do:</p>
<ul>
<li>Remove unused SSIDs entirely</li>
<li>Prefer WPA3-Enterprise (EAP) where feasible</li>
<li>Segment wireless properly: guest is not corporate, corporate is not infrastructure</li>
<li>Validate coverage from outside the perimeter</li>
</ul>
<h3 id="6-medium-findings-that-do-not-feel-medium">6. &ldquo;Medium&rdquo; Findings That Do Not Feel Medium</h3>
<p>Scanner outputs create a predictable behaviour pattern. Critical gets attention, high gets scheduled, medium gets ignored, info gets laughed off.</p>
<p>Some medium findings describe classes of weakness that become severe with real-world conditions: SSH weaknesses that matter under MITM conditions, management interfaces with brute-force exposure, patch states that are &ldquo;mostly fine&rdquo; but leave a few critical paths open.</p>
<p>Severity is a starting hypothesis, not a verdict. Two mediums can combine into a critical.</p>
<p>Prioritise weaknesses that enable credential capture, management access, or trust boundary crossing regardless of how the scanner scored them.</p>
<h3 id="7-logs-exist-but-meaning-does-not">7. Logs Exist, but Meaning Does Not</h3>
<p>On the blue side, I often see logging enabled but not operationally useful.</p>
<p>Suricata is a good example. It can generate excellent visibility, but without a workflow it becomes noise. The questions that matter are: what are our top external destinations, which internal hosts talk to unusual places, and what changed since last week.</p>
<p>&ldquo;We have monitoring&rdquo; becomes a checkbox, and the logs do not get reviewed.</p>
<p>What to do:</p>
<ul>
<li>Build simple, repeatable queries that surface top talkers, new domains, and rare destinations</li>
<li>Baseline normal behaviour and alert on deviations</li>
<li>Assign ownership: someone reviews signals and acts on them</li>
</ul>
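<p>The &ldquo;simple, repeatable queries&rdquo; point is worth making concrete. A minimal sketch, assuming Suricata&rsquo;s default <code>eve.json</code> output with <code>flow</code> events, that answers the first question (top external destinations):</p>

```python
import json
from collections import Counter
from ipaddress import ip_address

def top_external_destinations(eve_lines, n=10):
    """Count flow events to non-private destination IPs in Suricata eve.json lines."""
    counts = Counter()
    for line in eve_lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip truncated or non-JSON lines
        if event.get("event_type") != "flow":
            continue
        dest = event.get("dest_ip")
        if dest is None:
            continue
        try:
            if ip_address(dest).is_private:
                continue  # internal-to-internal traffic is not interesting here
        except ValueError:
            continue
        counts[dest] += 1
    return counts.most_common(n)
```

<p>Fed something like <code>open("/var/log/suricata/eve.json")</code> on a schedule, and diffed week over week, this is already most of the &ldquo;what changed since last week&rdquo; workflow.</p>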
<h2 id="why-this-happens">Why This Happens</h2>
<p>Most security theatre comes from normal organisational pressure:</p>
<ul>
<li>Compliance is measurable; security is contextual</li>
<li>Change control is painful, so legacy stays</li>
<li>Ownership is fragmented, so controls drift</li>
<li>Teams are stretched, so verification falls away</li>
<li>Perimeters look strong, so internal trust grows by default</li>
</ul>
<p>More tools will not fix this. Verification, ownership, and feedback loops will.</p>
<h2 id="breaking-the-cycle">Breaking the Cycle</h2>
<p><strong>This week:</strong></p>
<ul>
<li>Kill Telnet wherever it exists</li>
<li>Remove default SNMP communities, lock SNMP behind ACLs</li>
<li>Inventory and reserve public brand assets</li>
<li>Delete unused SSIDs, validate coverage beyond the perimeter</li>
<li>Review private ranges for routing weirdness and unintended reachability</li>
</ul>
<p><strong>The structural fix:</strong> a control is not done when it is configured. It is done when it is verified with adversarial thinking, monitored for failure, and owned with accountability.</p>
<p>If you cannot answer &ldquo;who owns this control,&rdquo; it is theatre waiting to fail.</p>
<h2 id="the-end">The End</h2>
<p>Security is not the number of controls you can list. It is the number that survive contact with an adversary.</p>
<p>If your &ldquo;internal&rdquo; network is only secure because everyone agrees to behave, that is not a network. It is a trust exercise.</p>
<h2 id="appendix-how-screwed-am-i">Appendix: How Screwed Am I?</h2>
<p>If any of these are true, you are looking at theatre:</p>
<ul>
<li>&ldquo;It&rsquo;s fine because it&rsquo;s internal&rdquo;</li>
<li>&ldquo;We enabled logging&rdquo; (but nobody reviews it)</li>
<li>&ldquo;We use SNMP for monitoring&rdquo; (but it&rsquo;s v2c and broadly reachable)</li>
<li>&ldquo;It&rsquo;s hidden&rdquo; (SSIDs, panels, directories)</li>
<li>&ldquo;We patched most of it&rdquo;</li>
<li>&ldquo;That&rsquo;s only a medium&rdquo;</li>
<li>&ldquo;We don&rsquo;t know who owns it&rdquo;</li>
</ul>
<p>Hunt the assumptions, not just the CVEs.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Security Controls That Only Exist on Paper</title>
      <link>https://zero-entry.co.za/posts/security-controls-that-only-exist-on-paper/</link>
      <pubDate>Sun, 18 Jan 2026 20:30:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/security-controls-that-only-exist-on-paper/</guid>
      <description>Enabled doesn&amp;#39;t mean active. Deployed doesn&amp;#39;t mean owned. A look at how security controls drift into uselessness and what actually keeps them working.</description>
      <content:encoded><![CDATA[<h1 id="the-illusion-of-security">The Illusion of Security</h1>
<p>Most environments aren&rsquo;t completely unsecured. Firewalls are enabled. Logging exists. Alerts are configured. From the outside it looks fine, maybe even responsible.</p>
<p>Controls aren&rsquo;t usually missing. They&rsquo;re inactive.</p>
<p>In recent work I&rsquo;ve seen environments where security features were technically enabled but effectively useless. Logs existed but nobody read them. Alerts fired and nobody came. Things broke and the outcome was the same either way.</p>
<p>An organisation&rsquo;s security is only as strong as the people interacting with it. The tooling matters less than whether someone looks at it. The architecture diagram matters less than whether someone notices when something breaks.</p>
<p>Controls that exist, but don&rsquo;t actually do anything.</p>
<h2 id="what-on-paper-actually-means">What &ldquo;On Paper&rdquo; Actually Means</h2>
<p>A control that exists on paper isn&rsquo;t badly designed or poorly intentioned. It meets one or more of these conditions:</p>
<ul>
<li>Enabled, but not enforced</li>
<li>Deployed, but not monitored</li>
<li>Producing logs that nobody checks</li>
</ul>
<p>Being enabled doesn&rsquo;t make something active. Being deployed doesn&rsquo;t mean anyone owns it.</p>
<p>This usually comes down to a preference for &ldquo;set it and forget it&rdquo; over &ldquo;observe and react.&rdquo; Once the checkbox is ticked, the control fades into the background, assumed to be working indefinitely. That assumption is where things go wrong.</p>
<h2 id="where-controls-drift">Where Controls Drift</h2>
<h3 id="detection-without-response">Detection Without Response</h3>
<p>Detection systems promise visibility. Alerts come in, someone investigates, action is taken.</p>
<p>In practice it works for a while. Dashboards get checked. Alerts are acknowledged. Then the noise builds. False positives pile up. A day goes by without anyone looking, then two. Eventually nobody is sure who owns the dashboard anymore.</p>
<p>Alerts become background noise. Data is still being generated, but nobody acts on it. An alert that nobody investigates isn&rsquo;t protection, it&rsquo;s a log entry.</p>
<h3 id="authentication-with-escape-hatches">Authentication With Escape Hatches</h3>
<p>Strong authentication controls get undermined by exceptions. Legacy devices that &ldquo;don&rsquo;t support it.&rdquo; Applications that need compatibility modes. Temporary workarounds that quietly become permanent.</p>
<p>Then there&rsquo;s the choice problem. Multiple authentication options enabled for convenience, some far weaker than others. The intention is resilience, but the effect is dilution.</p>
<p>Attackers don&rsquo;t try to break the strongest path. They use the weakest one you left open &ldquo;just in case.&rdquo;</p>
<h3 id="segmentation-without-isolation">Segmentation Without Isolation</h3>
<p>Segmentation looks good in diagrams. Separate zones, clean boundaries, tidy rulesets.</p>
<p>In practice those boundaries collapse at shared services. DNS, authentication, file shares, and management interfaces punch holes through supposedly isolated segments. Rules get added to &ldquo;make things work&rdquo; and the isolation erodes quietly.</p>
<p>The network looks compartmentalised. When it matters, it behaves like a flat one.</p>
<h2 id="why-this-keeps-happening">Why This Keeps Happening</h2>
<p>Operational load is real. Security gets treated as a deployment task rather than a continuous process. Once a control is installed and doesn&rsquo;t cause immediate issues, it slides down the priority list. There&rsquo;s always something more urgent.</p>
<p>Environments accumulate controls without accumulating engagement.</p>
<h2 id="the-cost-of-paper-security">The Cost of Paper Security</h2>
<p>The biggest risk isn&rsquo;t failure. It&rsquo;s false confidence.</p>
<p>Teams believe they&rsquo;re protected because the tooling is there. Incidents take longer to detect. Root cause analysis gets harder. Attackers exploit the gaps between controls, not by breaking defences, but by walking through the parts nobody is watching.</p>
<h2 id="security-is-behaviour">Security Is Behaviour</h2>
<p>A control only exists if it changes outcomes.</p>
<p>If nothing reacts when it fails, nothing is protected. If no one owns it, it doesn&rsquo;t exist. If no human ever sees its output, it&rsquo;s noise.</p>
<p>Fewer controls that are actively observed beat a long list that only looks good in a report.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Why I Still Run My Own Infrastructure at Home</title>
      <link>https://zero-entry.co.za/posts/why-i-still-run-my-own-infrastructure-at-home/</link>
      <pubDate>Sun, 11 Jan 2026 20:30:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/why-i-still-run-my-own-infrastructure-at-home/</guid>
      <description>A walkthrough of a self-hosted homelab built around OPNsense, Docker, and deliberate design choices — media, monitoring, VPN containment, and the lessons it took to get there.</description>
      <content:encoded><![CDATA[<h1 id="home-lab-overview-alecto-and-friends">Home Lab Overview: Alecto and Friends</h1>
<p>I&rsquo;ve always enjoyed tinkering with operating systems and finding ways they improve day-to-day life. I&rsquo;m not a cloud hater. Cloud services are useful and I still use them. I self-host because it&rsquo;s fun.</p>
<p>With most SaaS tools, you&rsquo;re limited by design choices you had no part in. My biggest self-hosted system is a Plex machine. I watch what I want, how I want, for roughly the cost of electricity. There&rsquo;s also been a serious learning component: networking, security, general IT practice. That alone has made it worth running.</p>
<h2 id="topology">Topology</h2>
<p>Starting at the internet edge and working inward:</p>
<p><strong>Router / Firewall</strong> I settled on OPNsense. It met and exceeded what I needed. The box runs intrusion detection, Unbound DNS, and a handful of other security-focused services.</p>
<p><strong>Switching</strong> Traffic hits a 24-port unmanaged gigabit switch with SFP ports. Nothing exotic, but most ports are in use.</p>
<p><strong>Flat Network</strong> The network is currently flat, so traffic flows directly to access points, servers, Raspberry Pis, NVR systems, gaming consoles, and everything else.</p>
<p>The topology is simple. The interesting part is what the devices are doing, not how complex the diagram looks.</p>
<h2 id="core-infrastructure">Core Infrastructure</h2>
<h3 id="alecto">Alecto</h3>
<p><strong>Hardware</strong></p>
<ul>
<li>Ryzen 7 1700X</li>
<li>1 TB NVMe</li>
<li>4 TB HDD</li>
<li>GTX 1050 Ti</li>
</ul>
<p>Nothing exotic, but it handles everything I need with around 85% idle time.</p>
<p><strong>Software</strong> Ubuntu LTS as the host OS, Docker for everything else: media acquisition, media consumption, networking, local services, metrics, and automation.</p>
<p>Docker makes backing up and restoring critical services significantly easier, which is the main reason I keep everything containerised.</p>
<h2 id="services">Services</h2>
<h3 id="media-acquisition">Media Acquisition</h3>
<p>The pipeline follows a simple request, acquire, process, library chain.</p>
<ul>
<li>Overseerr</li>
<li>Prowlarr</li>
<li>Sonarr / Radarr</li>
<li>qBittorrent</li>
<li>Deluge</li>
<li>Unpackerr</li>
<li>cross-seed</li>
</ul>
<p><strong>Prowlarr</strong> manages indexers. Private trackers have far less fake or malicious content than public ones, which matters later in the chain.</p>
<p><strong>Sonarr</strong> and <strong>Radarr</strong> handle TV and movies. Quality profiles are simple: HD and 4K. That has covered everything so far. Both monitor RSS feeds from configured indexers and push matched torrents to the downloader automatically.</p>
<p>I run two download clients. <strong>qBittorrent</strong> handles the entire arr stack. <strong>Deluge</strong> handles manual downloads and non-media content. Dynamic save paths split movies and TV cleanly for Plex.</p>
<p><strong>Unpackerr</strong> handles automatic extraction for downloads that arrive as archives. <strong>cross-seed</strong> finds identical or near-identical torrents across trackers and advertises that I already have the data, which improves speeds and availability for others.</p>
<p>The only recurring issue is Sonarr or Radarr occasionally grabbing a fake title. Aggressive regex-based filters have mostly resolved it.</p>
<h3 id="media-consumption">Media Consumption</h3>
<ul>
<li>Plex</li>
<li>Tautulli</li>
<li>Overseerr</li>
<li>Homepage</li>
</ul>
<p><strong>Plex</strong> is the primary player. Mature, stable, available on every device, and accessible for non-technical users. I&rsquo;ve tested Jellyfin and like it, but haven&rsquo;t switched.</p>
<p><strong>Tautulli</strong> gives visibility into Plex usage: playback activity, per-user bandwidth, transcoding load. That data makes decisions around limits and capacity easier.</p>
<p><strong>Overseerr</strong> lets users request titles themselves rather than messaging me. Requests still require approval, but that takes seconds instead of a back-and-forth conversation.</p>
<p><strong>Homepage</strong> is a single customisable dashboard with a high-level view of everything running. It doesn&rsquo;t replace Zabbix or Grafana for monitoring, but it&rsquo;s useful for day-to-day glancing.</p>
<h3 id="networking-and-vpn-containment">Networking and VPN Containment</h3>
<p>Torrent clients are routed through <strong>Gluetun</strong>, a dedicated VPN container running WireGuard. The downloaders have never touched my LAN directly and never see my public IP.</p>
<p>Gluetun runs in strict kill-switch mode. If the VPN drops, traffic stops. There&rsquo;s no fallback to my home connection. Given the provider&rsquo;s SLA, this hasn&rsquo;t been an issue in practice.</p>
<p>No inbound ports need to be open, which reduces exposure further. Speed degradation from the VPN hasn&rsquo;t been noticeable.</p>
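<p>For reference, a trimmed Compose sketch of this containment pattern. Image tags, the provider value, and the published port are illustrative, and the WireGuard credentials are omitted; Gluetun&rsquo;s built-in firewall is what provides the kill-switch behaviour:</p>

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom   # your provider goes here
      - VPN_TYPE=wireguard
      # WireGuard keys and addresses come from the provider's config
    ports:
      - "8080:8080"                   # qBittorrent web UI, published via the VPN container

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic rides the VPN container's stack
    depends_on:
      - gluetun
```

<p>With <code>network_mode: "service:gluetun"</code>, the downloader has no network identity of its own: if the VPN container stops, so does its connectivity.</p>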
<h2 id="observability">Observability</h2>
<p>Prometheus, Node Exporter, cAdvisor, and Grafana cover system-level metrics: CPU load, memory usage, container behaviour. Critical alerts go to Telegram. I&rsquo;m refining thresholds so only actionable issues send a notification.</p>
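<p>As an example of the kind of actionable threshold I mean, a hypothetical Prometheus alerting rule on Node Exporter&rsquo;s filesystem metrics (the mountpoint, threshold, and hold duration are illustrative):</p>

```yaml
groups:
  - name: homelab
    rules:
      - alert: DiskAlmostFull
        expr: node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"} < 0.10
        for: 15m            # must hold for 15 minutes before firing
        labels:
          severity: critical
        annotations:
          summary: "Root filesystem under 10% free on {{ $labels.instance }}"
```

<p>The <code>for</code> clause is doing the noise reduction: a brief spike never reaches Telegram, a sustained problem always does.</p>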
<h2 id="automation">Automation</h2>
<p><strong>Watchtower</strong> handles container updates on a schedule at 03:00. If an update breaks something, rolling back means redeploying from the same configuration paths. Docker&rsquo;s stateless container model makes that straightforward.</p>
<p><strong>Portainer</strong> handles anything that needs a UI.</p>
<h2 id="network-edge">Network Edge</h2>
<h3 id="opnsense">OPNsense</h3>
<p>OPNsense sits between the internet and all internal systems. UPnP is disabled. No device exposes itself automatically. Only explicitly required services are permitted outbound or inbound.</p>
<p>All traffic is statefully inspected. DNS is forced through Unbound. Suricata monitors inbound and outbound traffic for known malicious patterns. Devices can&rsquo;t quietly phone home, bypass DNS filtering, or accept unsolicited inbound connections without generating an alert.</p>
<p>Services get published deliberately, not accidentally exposed.</p>
<h3 id="boreas">Boreas</h3>
<p>A Raspberry Pi running Nginx Proxy Manager and WireGuard. This started as a workaround for a previous router that lacked VPN support. After moving to OPNsense, the separation made enough sense to keep.</p>
<p>Boreas is my remote access point back into the LAN. Nginx Proxy Manager exposes Overseerr to friends and family outside the local network. Both services sit behind Cloudflare, primarily to obscure my real IP.</p>
<p>The throughput on the Pi is better than you&rsquo;d expect for the hardware.</p>
<h3 id="chronos">Chronos</h3>
<p>A Tor middle relay running as a small contribution to online privacy. No exit node: the ISP complaints and CAPTCHA overhead aren&rsquo;t worth it. A middle relay provides value without the operational noise. It&rsquo;s low-maintenance and largely invisible once configured.</p>
<h2 id="failures-and-lessons">Failures and Lessons</h2>
<p>In the past year, two things actually broke:</p>
<ul>
<li>Disks filled up from log spam. Entirely my fault. Log rotation is properly configured now.</li>
<li>Incorrect or fake titles downloaded. Better filters and denied extension lists resolved most of it.</li>
</ul>
<p>Beyond that, issues have been minor misconfigurations and occasional reboots.</p>
<h2 id="what-id-do-differently">What I&rsquo;d Do Differently</h2>
<p>I&rsquo;d spread services across more hosts if I could go back. Some hardening measures were probably over-engineered, but I don&rsquo;t regret that trade-off. Deploying the arr stack earlier would have saved time.</p>
<h2 id="whats-next">What&rsquo;s Next</h2>
<p><strong>Hardware</strong>: managed switch, better access points, a more capable GPU for transcoding.</p>
<p><strong>Monitoring</strong>: a consolidated Zabbix dashboard with proper alerting.</p>
<p><strong>Networking</strong>: VLANs.</p>
<p>Self-hosted AI models aren&rsquo;t on the list. Cloud tools cover what I need without the overhead.</p>
<h2 id="closing">Closing</h2>
<p>Running this lab has mostly taught me patience. Getting multiple devices, containers, and services working together takes iteration. It&rsquo;s improved my understanding of networking and containerised systems more than any course I&rsquo;ve taken.</p>
<p>I keep running it because it&rsquo;s fun and I learn from it. That&rsquo;s enough reason.</p>
]]></content:encoded>
    </item>
    <item>
      <title>Inside a Cheap IPC Camera: Firmware, Cloud, and Trust</title>
      <link>https://zero-entry.co.za/posts/ipc_camera/</link>
      <pubDate>Thu, 01 Jan 2026 20:30:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/ipc_camera/</guid>
      <description>Pulling apart a cheap IPC camera: dumping and analysing the firmware, intercepting cloud traffic, and finding out exactly how much it phones home.</description>
      <content:encoded><![CDATA[<h2 id="1-why-this-camera">1. Why This Camera?</h2>
<p>I came across this cheap IPC camera on a local marketplace here in South Africa. At around R300, roughly $10, it was cheap enough to mess around with and potentially break without much regret.</p>
<p>My goal with this device was to go from zero to one hundred in terms of how I document findings, while also looking at every nook and cranny I could reasonably reach.</p>
<p>From my initial research into these no-name, internet-connected devices, they tend to compromise on both security and privacy.</p>
<p>From the outset, I want to make a few things clear. Within this post you will <strong>not</strong> find a tutorial on “how to hack”, nor will anything be blown out of proportion. This post is meant to be a reverse engineering and analysis exercise.</p>
<h2 id="2-initial-recon-out-of-the-box-behaviour">2. Initial Recon, Out of the Box Behaviour</h2>
<p>To understand normal operating behaviour, the device needs to be used exactly as intended. This includes:</p>
<ul>
<li>Reading the manual it came with</li>
<li>Plugging it in and getting a feel for boot times</li>
<li>Watching for any boot messages</li>
<li>Using the mobile app and understanding how dependent the device is on it</li>
<li>Noting early red flags such as cloud-only usage or lack of a local UI</li>
</ul>
<p>With that methodology in mind, we start at the top.</p>
<p>The manual was fairly well written, with no obvious grammar or spelling issues. The app required for viewing and control is called <strong>Tris Home</strong>. It appears to be a generic IPC viewer backed by cloud infrastructure.</p>
<p>When powered on, the camera announces “device booting” via the onboard speaker. The app then pops up saying a nearby camera has been discovered. After adoption, another message plays: “device added to Wi-Fi”.</p>
<p>Initially there were no major red flags, but a few things stood out:</p>
<ul>
<li>Account creation immediately triggers cloud storage upsells</li>
<li>The app appears to be the only supported interface</li>
<li>The app feels generic, capable of onboarding many brands</li>
</ul>
<h2 id="3-network-analysis-watching-it-talk">3. Network Analysis, Watching It Talk</h2>
<p>This is where things get interesting.</p>
<p>We set up a small lab using a spare Raspberry Pi and a simple script to create a MITM access point. The camera connects to the internet as normal, but all traffic passes through us.</p>
<h3 id="setting-up-a-controlled-network">Setting up a controlled network</h3>
<p>The script configures <code>hostapd</code> and <code>dnsmasq</code>.</p>
<p><code>hostapd</code> puts the Wi-Fi adapter into AP mode, creating a hotspot with an SSID and password of our choosing. <code>dnsmasq</code> handles DHCP and IP assignment.</p>
<p>This gives us a clean interface to monitor with tools like Wireshark, in this case <code>wlan0</code>.</p>
<p>For anyone being extra cautious, the Pi can be placed inside a dedicated VLAN, isolated from the rest of the network.</p>
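<p>For reference, a minimal pair of configurations along these lines. SSID, passphrase, channel, and addressing are placeholders for a lab network:</p>

```ini
# hostapd.conf — minimal WPA2 access point on wlan0
interface=wlan0
driver=nl80211
ssid=camera-lab
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=lab-only-password
rsn_pairwise=CCMP

# dnsmasq.conf — DHCP for the lab subnet
interface=wlan0
dhcp-range=192.168.50.10,192.168.50.50,12h
```

<p>Everything the camera does then crosses <code>wlan0</code>, which is what makes the single capture interface so convenient.</p>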
<h4 id="tools-used-in-the-controlled-network">Tools used in the controlled network</h4>
<ul>
<li>
<p><strong>tcpdump</strong><br>
CLI packet capture for raw traffic inspection.</p>
</li>
<li>
<p><strong>mitmproxy</strong><br>
Active interception and inspection of higher-level traffic.</p>
</li>
<li>
<p><strong>Custom tooling</strong><br>
Simple scripts to spoof or sinkhole unchecked HTTP services.</p>
</li>
</ul>
<h3 id="observations">Observations</h3>
<p>Before analysing anything meaningful, we verify the basics:</p>
<ul>
<li>The access point is working</li>
<li>Devices receive IP addresses</li>
<li>The camera has outbound connectivity</li>
<li>Camera functionality remains intact</li>
</ul>
<p>Once confirmed, the workflow is simple:</p>
<ol>
<li>Start the access point</li>
<li>Start tcpdump</li>
<li>Start mitmproxy</li>
<li>Power on the camera</li>
</ol>
<p>mitmproxy immediately surfaced interesting behaviour:</p>
<p><img loading="lazy" src="/images/MITM.png"><br>
<img loading="lazy" src="/images/MITM2.png"><br>
<img loading="lazy" src="/images/MITM3.png"><br>
<img loading="lazy" src="/images/First_POST.png"></p>
<p>On boot, the camera contacts multiple external hosts and establishes persistent connections very early in the startup process.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020251228201800.png"></p>
<h2 id="interesting-protocols-observed-on-the-ipc-camera">Interesting Protocols Observed on the IPC Camera</h2>
<p>During analysis, a few security-related protocols stood out. None of these were theoretical; all were observed directly through traffic capture and MITM interception.</p>
<h3 id="1-proprietary-tcp-control-channel-port-34567">1. Proprietary TCP Control Channel (Port 34567)</h3>
<p>This port appeared consistently.</p>
<ul>
<li>Proprietary binary protocol on TCP port 34567</li>
<li>Used for registration, keep-alives, and remote control</li>
<li>Common in low-cost Chinese IPC devices</li>
<li>Historically associated with weak or static authentication</li>
</ul>
<p>This effectively functions as the main control plane.</p>
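<p>Confirming that a listener like this exists does not require understanding the protocol. A minimal Python probe, shown here with a hypothetical camera address, simply connects and records whatever the service volunteers first; no protocol decoding is attempted:</p>

```python
import socket

def probe_tcp(host: str, port: int, timeout: float = 3.0, nbytes: int = 64) -> bytes:
    """Open a TCP connection and return up to nbytes of whatever the service
    sends first (empty if it waits for the client to speak)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(nbytes)
        except socket.timeout:
            return b""

# Example (hypothetical lab address):
# banner = probe_tcp("192.168.50.23", 34567)
```

<p>A non-empty result means the service speaks first, which is itself a useful fingerprinting data point before any reverse engineering starts.</p>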
<h3 id="2-plain-http-no-tls">2. Plain HTTP (No TLS)</h3>
<p>Although the device generated HTTP traffic, it did not expose a traditional local web interface.</p>
<p>Observed behaviour included:</p>
<ul>
<li>Backend-style HTTP requests</li>
<li>Configuration and firmware-related endpoints</li>
<li>Cloud communication over plaintext HTTP</li>
</ul>
<p>Important clarifications:</p>
<ul>
<li>No browser-accessible admin interface</li>
<li>No local login or management panel</li>
<li>Endpoints were machine-to-machine, not user-facing</li>
</ul>
<p>Key observations:</p>
<ul>
<li>No HTTPS by default</li>
<li>No certificate validation or pinning</li>
<li>Highly verbose during boot</li>
</ul>
<p>This made the device susceptible to MITM.</p>
<h3 id="3-cloud-update-and-configuration-endpoints">3. Cloud Update and Configuration Endpoints</h3>
<p>The device attempted to reach multiple cloud endpoints:</p>
<ul>
<li>Hardcoded IP addresses and domains</li>
<li>Cloud-hosted infrastructure, including large public providers</li>
<li>Domains such as <code>*.secu100.net</code></li>
</ul>
<p>Some endpoints were referenced directly by IP, indicating a weak update trust model.</p>
<h3 id="4-persistent-cloud-tunnel-behaviour">4. Persistent Cloud Tunnel Behaviour</h3>
<p>Rather than a single protocol, this was a consistent pattern:</p>
<ul>
<li>Device boots</li>
<li>Immediately phones home</li>
<li>Registers with cloud services</li>
<li>Maintains long-lived TCP sessions</li>
</ul>
<p>This enables NAT traversal but creates strong cloud dependency and reduced local-only usability.</p>
<p>Collectively, the device relies on:</p>
<ul>
<li>Proprietary TCP control traffic</li>
<li>Unencrypted HTTP for updates and configuration</li>
<li>Persistent outbound connections</li>
<li>Cloud-first architecture with weak trust boundaries</li>
</ul>
<h2 id="4-physical-teardown-opening-the-camera">4. Physical Teardown, Opening the Camera</h2>
<p>I usually open devices to see what makes them tick. This camera is reasonably well built for mass production, but it does not resist a standard Phillips screwdriver.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020251228205151.png"></p>
<p>This is not a full iFixit teardown, just a high-level inspection.</p>
<p><img loading="lazy" src="/images/IMG_6347.jpeg"></p>
<p>Notable components include:</p>
<ul>
<li>Main board</li>
<li>LED and IR module</li>
<li>Motors</li>
<li>Enclosure components</li>
<li>Antennas</li>
</ul>
<p>A closer look at the PCB shows a Goke GK7210 SoC, a common vision processing chip used in low-cost cameras.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020251228204927.png"></p>
<p>The flash ROM chip acts as the device’s primary storage.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020260104211252.png"></p>
<p>The Wi-Fi module is external to the SoC. Interestingly, only one antenna cable was actually connected. The second was present but unused, likely for marketing reasons.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020251228205530.png"></p>
<h2 id="5-debug-interfaces-uart">5. Debug Interfaces, UART</h2>
<p>Test pads were present on the board, outlined below.</p>
<p><img loading="lazy" src="/images/Pasted%20image%2020251228213153.png"></p>
<p>These UART pads are commonly used during development and manufacturing and almost always remain on final products.</p>
<p>UART output was enabled and provided full boot logs, but all input was ignored.</p>
<p>This strongly suggests a vendor-modified U-Boot configuration where interactive access is intentionally disabled, rather than a wiring or baud-rate issue.</p>
<h2 id="6-firmware-extraction-and-analysis">6. Firmware Extraction and Analysis</h2>
<p>With UART offering limited access, firmware extraction was required.</p>
<p>The flash chip was removed and dumped using a flash programmer. Multiple dumps were taken and hashed to ensure consistency, with a golden copy preserved.</p>
<p><img loading="lazy" src="/images/Dump_FW.png"></p>
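<p>The consistency check between dumps is simple to script. A minimal sketch that streams each dump through SHA-256 and confirms the reads were stable:</p>

```python
import hashlib

def sha256_file(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-megabyte dumps never load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def dumps_consistent(paths) -> bool:
    """True when every dump hashes identically, i.e. repeated reads agree."""
    digests = {sha256_file(p) for p in paths}
    return len(digests) == 1
```

<p>Only once repeated reads agree does it make sense to set one copy aside as the golden image and do all further analysis on working copies.</p>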
<p>Using binwalk, partitions and offsets were identified.</p>
<p><img loading="lazy" src="/images/binwalk.png"></p>
<p>One notable finding was the presence of a root password hash in <code>/etc/passwd</code>, which cracked quickly using a common wordlist.</p>
<h2 id="7-cloud-dependency-and-update-mechanism">7. Cloud Dependency and Update Mechanism</h2>
<p>The firmware and observed network behaviour make it clear that this device is heavily cloud-dependent.</p>
<p>On boot, the camera attempts to contact multiple vendor-controlled endpoints for registration, configuration, and update checks. Several of these endpoints are hardcoded into the primary application binary, rather than being dynamically resolved or user-configurable.</p>
<p>Firmware updates appear to follow a module-based OTA model. The device requests a list of available modules, then selectively downloads update payloads from cloud infrastructure. While this approach allows incremental updates, it also centralises trust entirely in external services.</p>
<p>There is no user-visible indication of signature verification, update provenance, or rollback capability. From an owner’s perspective, updates are opaque and automatic.</p>
<p>This raises several long-term concerns:</p>
<ul>
<li>If update servers go offline, functionality may degrade</li>
<li>If domains expire or infrastructure is shut down, updates fail silently</li>
<li>If the vendor disappears, cloud-dependent features may stop working entirely</li>
</ul>
<p>Local-only usage appears limited. While RTSP streaming is present, configuration, management, and updates are clearly designed around continuous cloud connectivity.</p>
<p>This is less about immediate exploitation and more about <strong>lifecycle and supply-chain risk</strong>.</p>
<h2 id="8-security-observations-not-sensational">8. Security Observations (Not Sensational)</h2>
<h3 id="objectively-weak-areas">Objectively Weak Areas</h3>
<ul>
<li>Unauthenticated network services exposed during normal operation</li>
<li>Plaintext HTTP traffic for configuration and update-related requests</li>
<li>Firmware built without obvious hardening, with debug strings present</li>
</ul>
<h3 id="bad-design-vs-exploitable-issues">Bad Design vs Exploitable Issues</h3>
<p>Poor design choices include implicit trust in the local network, cloud-first assumptions, and opaque update mechanisms.</p>
<p>Potentially exploitable surfaces exist, but no active exploitation was attempted or demonstrated.</p>
<h3 id="attack-surface-summary">Attack Surface Summary</h3>
<ul>
<li>Network: multiple open TCP ports, proprietary protocols, persistent outbound connections</li>
<li>Physical: exposed UART pads with output enabled</li>
<li>Firmware: monolithic image with cloud endpoint references</li>
</ul>
<h3 id="what-was-not-tested">What Was Not Tested</h3>
<p>No exploit development, fuzzing, credential attacks, firmware modification, or cloud impersonation was performed.</p>
<p>All observations were made through passive analysis, controlled network interception, and non-invasive hardware inspection.</p>
<h2 id="9-practical-takeaways">9. Practical Takeaways</h2>
<p>For home users, network isolation, firewalling, and local-only usage where possible significantly reduce exposure.</p>
<p>For researchers, cheap IoT devices remain excellent learning platforms, exposing real-world constraints and design trade-offs.</p>
<p>For vendors, basic improvements around authentication, TLS usage, and transparency would meaningfully raise the security baseline.</p>
<h2 id="10-closing-why-this-matters">10. Closing, Why This Matters</h2>
<p>Cheap IP cameras are not edge cases. They are everywhere, deployed on private networks and rarely revisited after installation.</p>
<p>Most users will never inspect firmware, observe network behaviour, or question update mechanisms. That gap between deployment and understanding is where hardware security still matters.</p>
<p>This research did not uncover a dramatic exploit, and that is the point. Security issues are often rooted in assumptions, not elite attackers.</p>
<p>Hardware hacking remains one of the few disciplines that cuts across firmware, silicon, network, and reality.</p>
<h3 id="whats-next">What’s Next</h3>
<p>This analysis focused on observability rather than exploitation.</p>
<p>Follow-up work will look at:</p>
<ul>
<li>RTSP behaviour and access control</li>
<li>Custom protocol reverse engineering</li>
<li>Firmware update mechanics in more detail</li>
<li>Cloud dependency and failure modes</li>
</ul>
<p>That will form the basis of Part 2.</p>
<p>Not to sensationalise, but to understand.</p>
]]></content:encoded>
    </item>
    <item>
      <title>About Me</title>
      <link>https://zero-entry.co.za/posts/about-me/</link>
      <pubDate>Thu, 18 Dec 2025 20:30:00 +0200</pubDate>
      <guid>https://zero-entry.co.za/posts/about-me/</guid>
      <description>Cybersecurity professional focused on offensive security, embedded systems, firmware, and the messy reality of how systems actually fail.</description>
      <content:encoded><![CDATA[<p>I work in cybersecurity with a strong focus on how systems <em>actually</em> behave once they leave the lab and hit the real world.</p>
<p>Most of my time is spent breaking things apart: networks, embedded devices, cloud services, and poorly thought-out assumptions. I want to understand how they fail, how they’re abused, and how they can be made better. I’m especially interested in areas where disciplines overlap: where hardware meets software, firmware meets networking, and theory meets reality.</p>
<p>Professionally, my background is rooted in offensive security, infrastructure, and research. I hold multiple industry certifications and spend a lot of time building internal tools, monitoring pipelines, and attack simulations rather than just running point-and-shoot tests. If something looks like a black box, I usually want to open it.</p>
<p>Outside of pure cybersecurity, I’m deeply hands-on. I tinker with electronics, reverse firmware, poke at undocumented protocols, and run an over-engineered home lab that probably exists more for curiosity than necessity. I enjoy projects that start simple and slowly spiral into “this is more complicated than it needs to be”, usually in a good way.</p>
<p>This blog exists as a place to document that process.</p>
<p>You’ll find write-ups on:</p>
<ul>
<li>Real-world security research and investigations</li>
<li>Hardware and firmware analysis</li>
<li>Network monitoring, telemetry, and weird traffic</li>
<li>Tools I build to solve problems I couldn’t find good answers for</li>
<li>Lessons learned the hard way</li>
</ul>
<p>I’m not here to sell hype, shortcuts, or miracle tools. I care about understanding systems properly, sharing what works (and what doesn’t), and leaving things better documented than I found them.</p>
<p>If you’re curious, methodical, and comfortable sitting with complexity for a while, you’ll probably feel at home here.</p>
]]></content:encoded>
    </item>
    <item>
      <title>About</title>
      <link>https://zero-entry.co.za/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://zero-entry.co.za/about/</guid>
      <description>&lt;p&gt;I work in cybersecurity with a strong focus on how systems &lt;em&gt;actually&lt;/em&gt; behave once they leave the lab and hit the real world.&lt;/p&gt;
&lt;p&gt;Most of my time is spent breaking things apart — networks, embedded devices, cloud services, and poorly thought-out assumptions — to understand how they fail, how they’re abused, and how they can be made better. I’m especially interested in areas where disciplines overlap: where hardware meets software, firmware meets networking, and theory meets reality.&lt;/p&gt;</description>
      <content:encoded><![CDATA[<p>I work in cybersecurity with a strong focus on how systems <em>actually</em> behave once they leave the lab and hit the real world.</p>
<p>Most of my time is spent breaking things apart — networks, embedded devices, cloud services, and poorly thought-out assumptions — to understand how they fail, how they’re abused, and how they can be made better. I’m especially interested in areas where disciplines overlap: where hardware meets software, firmware meets networking, and theory meets reality.</p>
<p>Professionally, my background is rooted in offensive security, infrastructure, and research. I hold multiple industry certifications and spend a lot of time building internal tools, monitoring pipelines, and attack simulations rather than just running point-and-shoot tests. If something looks like a black box, I usually want to open it.</p>
<p>Outside of pure cybersecurity, I’m deeply hands-on. I tinker with electronics, reverse firmware, poke at undocumented protocols, and run an over-engineered home lab that probably exists more for curiosity than necessity.</p>
<p>This blog documents that process: security research, hardware and firmware analysis, network monitoring, tooling, and lessons learned the hard way. If you’re curious, methodical, and comfortable sitting with complexity for a while, you’ll probably feel at home here.</p>
<p>You can find me on <a href="https://x.com/TaffyByte">X / Twitter</a> and <a href="https://www.linkedin.com/in/travis-more-860bb3131/">LinkedIn</a>.</p>
]]></content:encoded>
    </item>
  </channel>
</rss>
