Our team received an urgent call from a contractor's support team. Their deployed fleet of ruggedized embedded Linux devices in secure facilities had been running without security updates for six months. The symptom: apt-get update failed immediately with "Network Unreachable" errors. This wasn't a misconfiguration—these devices operated in air-gapped environments with zero internet connectivity by design, a security requirement for classified networks.
The context made standard Linux package management unusable. Traditional package managers like apt and yum assume continuous internet access. They fetch package indexes online, download dependencies on-demand, and verify signatures against keyservers accessible via HTTPS. In our client's environment, none of this infrastructure existed. The devices sat behind multiple layers of physical and network isolation.
The constraint was absolute network isolation. Our solution couldn't assume any network connectivity, couldn't phone home for telemetry, and couldn't fall back to cached online resources. Everything needed for an update had to arrive on physical media or through approved airgap transfer systems. Cryptographic verification had to work without accessing external keyservers, and rollback mechanisms had to function without cloud-based recovery.
The stakes were significant. Without reliable update delivery, security vulnerabilities remained unpatched—a non-starter for secure systems handling classified data. Bug fixes couldn't reach deployed units, degrading operational effectiveness. The support team resorted to manual filesystem overwrites via USB drives, a process taking field technicians approximately 45 minutes per device with high risk of bricking units if interrupted.
Our team's first step was understanding exactly why apt failed in air-gapped mode. We instrumented a test deployment:
strace -f -e trace=network,openat apt-get update 2>&1 | tee apt-trace.log
The trace revealed the fundamental architectural assumption:
socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(80)}, 16) = -1 ENETUNREACH
The package manager attempted TCP connections to remote repositories before checking local caches. This wasn't a bug—it was by design. Apt's architecture treats network repositories as the source of truth.
We analyzed apt's dependency resolution to understand the trust model conflict:
apt-cache policy openssl | head -20
The output showed URLs pointing to internet repositories in package metadata:
openssl:
  Installed: 1.1.1f-1ubuntu2.19
  Candidate: 1.1.1f-1ubuntu2.22
  Version table:
     1.1.1f-1ubuntu2.22 500
        500 http://ports.ubuntu.com/ubuntu-ports focal-security/main arm64
Even with a local mirror, the system expected to validate package signatures against keys downloaded from keyserver.ubuntu.com. Our trace confirmed this:
gpg --verify Release.gpg Release 2>&1
gpg: Can't check signature: No public key
gpg: keyserver receive failed: No route to host
Signature verification failed because GPG attempted to fetch signing keys from remote keyservers.
Approach 1: Local Repository Mirrors
Our team's first attempt used apt-mirror to create a complete local copy of Ubuntu repositories, setting up an internal mirror server with nginx. On paper, this looked straightforward—point sources.list at the local mirror.
The failure mode appeared during deployment testing. While package downloads succeeded, dependency resolution broke randomly. We traced the issue to repository structure consistency. Our mirror had approximately 47,000 packages, but the sync job took six hours. During that window, packages were only partially mirrored. When apt tried to resolve dependencies for a package from batch 1,000, it expected related packages from batch 45,000 to exist—creating impossible dependency graphs.
The deeper problem was bandwidth constraints. A complete Ubuntu focal repository consumed roughly 2.1TB. Syncing this to air-gapped facilities required physical hard drive shipment—a two-week process per update cycle. By arrival time, packages were already outdated.
Approach 2: Debian Package Caching with apt-cacher-ng
We next tried apt-cacher-ng, a caching proxy that intercepts apt requests and serves cached content. This seemed to address the mirror size problem—only cache packages actually used.
The failure happened when pre-populating the cache for offline use:
for pkg in $(dpkg --get-selections | awk '{print $1}'); do
  apt-get install --reinstall --download-only "$pkg"
done
Two fatal problems surfaced. First, apt-cacher-ng's cache format stored metadata containing timestamps and HTTP headers from the original repository server. When we transferred this cache to the air-gapped network, apt rejected cached packages because metadata referenced inaccessible URLs and expired cache-control headers. The proxy expected to revalidate cached content against the original server.
Second, the cache had no mechanism for delta updates. When security patch 1.1.1f-1ubuntu2.20 replaced 1.1.1f-1ubuntu2.19, the cache stored complete copies of both packages. For 200 devices, this meant transferring 2GB of updated packages to patch roughly 100MB of actual code changes.
Our team made the architectural decision to build a custom package management system designed specifically for air-gapped operation. Rather than trying to retrofit existing tools, we needed infrastructure that treated network disconnection as the default state, not an exceptional case.
The core principle was deterministic offline operation. Every component of the update pipeline had to function with zero network connectivity, using only locally available cryptographic materials and package metadata. This meant rebuilding several foundational pieces: a repository format that didn't reference external URLs, a signing infrastructure that worked with pre-distributed keys, and a dependency resolver that operated on locally frozen package graphs.
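To make the idea of a URL-free repository format concrete, here is a minimal sketch of how such a manifest could be built. The field names and layout are our illustration, not the actual format used in the deployment: the key property is that every entry carries only a relative path plus a checksum, so the manifest is verifiable entirely offline.

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path):
    """Compute the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(package_dir, release_version):
    """Build a self-contained manifest: no external URLs, only
    relative paths and checksums that a device can verify with
    zero network connectivity."""
    entries = []
    for path in sorted(Path(package_dir).rglob("*")):
        if path.is_file():
            entries.append({
                "path": str(path.relative_to(package_dir)),
                "sha256": file_sha256(path),
                "size": path.stat().st_size,
            })
    return {"release": release_version, "files": entries}

# The manifest would then be serialized and signed for inclusion
# in the update tarball, e.g.:
#   json.dumps(build_manifest("staging/", "2024.03.1"), indent=2)
```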
The fundamental difference was trust model architecture. Existing package managers treat the internet as a trust anchor—they verify signatures by consulting remote keyservers, resolve dependencies by querying live repository metadata, and validate package integrity against remote checksums. This model assumes "online until proven otherwise."
Our custom system inverted this. The trust anchor was a pre-established cryptographic chain that traveled with the update package itself. Instead of verifying "is this package signed by a key the keyserver says is valid," we verified "is this package signed by a key in the immutable local keyring established during device provisioning." This eliminated all network dependencies from verification.
The practical difference showed in signature verification workflow. With apt/GPG, verification failures triggered network requests:
# Traditional apt: key validation -> keyserver lookup -> FAIL: no network
# Custom verifier: key validation -> local keyring check -> trust chain walk -> deterministic result
This meant updates could be verified in submarines, on factory floors, or in secure facilities—anywhere the original device provisioning occurred.
The mathematics of bandwidth efficiency drove this decision. A security patch to OpenSSL's libssl.so: the complete debian package was approximately 1.8MB compressed. The actual code change—a bounds check to prevent buffer overflow—modified roughly 400 bytes in the compiled binary.
Traditional package distribution required transferring the entire 1.8MB to update those 400 bytes. Multiply across 200 deployed devices: a minor security patch consumed 360MB of transfer capacity. For facilities moving data via courier-delivered USB drives, this translated to logistics overhead and delayed deployment of critical patches.
Delta updates changed the equation. We used bsdiff, an algorithm computing binary differences between file versions:
bsdiff old_libssl.so new_libssl.so libssl.patch
ls -lh libssl.patch
# Output: -rw-r--r-- 1 builder builder 92K libssl.patch
The patch file was 92KB—a 95% reduction. This improvement compounded across full system updates. Where a traditional update package totaled approximately 2GB, our delta-based approach reduced it to roughly 100MB. The bandwidth efficiency translated to operational tempo: updates that previously took two weeks to prepare, scan, courier, and deploy now completed in three days.
The implementation required careful edge case handling. Delta patches work by describing transformations from known starting states. If a device had modified files—manual configuration changes, filesystem corruption, or unauthorized modifications—the delta patch would fail to apply. Our solution validated starting states before applying deltas:
import hashlib
import subprocess

def verify_and_patch(old_file, new_file, patch_file, expected_checksum):
    """Apply binary delta patch with pre-flight validation"""
    with open(old_file, 'rb') as f:
        old_hash = hashlib.sha256(f.read()).hexdigest()
    if old_hash != expected_checksum:
        return install_full_package()  # Fall back to full package
    subprocess.run(['bspatch', old_file, new_file, patch_file], check=True)
    return verify_package_signature(new_file)
This validation added negligible overhead—computing SHA-256 hashes took approximately 40ms per file on the target ARM hardware—while preventing delta patch corruption scenarios.
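The build side has to record that same starting-state checksum alongside each generated patch. A minimal sketch of that step, assuming the bsdiff binary is on PATH (the helper names here are ours, for illustration):

```python
import hashlib
import subprocess

def sha256_of(path):
    """Checksum of the pristine old file, recorded at build time
    so devices can validate their starting state before patching."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def make_delta(old_file, new_file, patch_file):
    """Generate a bsdiff patch plus the expected_checksum the
    device-side verify_and_patch step must match. Requires the
    bsdiff binary to be installed."""
    entry = {"patch": patch_file, "expected_checksum": sha256_of(old_file)}
    subprocess.run(["bsdiff", old_file, new_file, patch_file], check=True)
    return entry
```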
The critical requirement was atomic system updates with guaranteed rollback capability. In traditional package management, an update modifies the running filesystem in-place. If the update terminates mid-flight—power failure, filesystem corruption, or buggy package scripts—the system ends up in an undefined state with partially updated files.
OSTree solved this by treating the filesystem as an immutable object store. Instead of modifying files in /usr/bin or /usr/lib, OSTree wrote new filesystem trees into a separate directory and atomically switched the boot configuration to reference the new tree. The implementation used bind mounts and hardlinks to share unchanged files between versions, avoiding storage duplication.
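The sharing mechanism can be illustrated with a toy content-addressed store. This is not OSTree's actual on-disk format, just a sketch of the principle: files are stored once under their content hash, and each deployment tree hardlinks to the shared object.

```python
import hashlib
import os

def check_in(object_store, path):
    """Store a file under its content hash; identical content is
    stored exactly once, however many deployments reference it."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    obj = os.path.join(object_store, digest)
    if not os.path.exists(obj):
        os.makedirs(object_store, exist_ok=True)
        os.link(path, obj)  # first copy becomes the shared object
    return obj

def deploy_file(object_store, path, deploy_path):
    """Materialize a file in a deployment tree as a hardlink to the
    shared object: no data is copied, only a directory entry is
    added (hardlinks require store and tree on one filesystem)."""
    obj = check_in(object_store, path)
    os.makedirs(os.path.dirname(deploy_path), exist_ok=True)
    os.link(obj, deploy_path)
```

Deploying the same file into two trees this way consumes the storage of one copy, which is why maintaining current and previous deployments costs far less than double.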
We instrumented the atomic update operation:
ostree admin deploy new-commit-hash --karg="quiet splash"
# Creates new deployment in /ostree/deploy/rootfs/deploy/[hash]
# Updates bootloader to reference new deployment
The atomicity guarantee came from the bootloader configuration update being a single metadata file write. Either the bootloader pointed to the new deployment (update succeeded) or still pointed to the old deployment (update failed safely). No intermediate state existed with half-old, half-new files.
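The single-metadata-write pattern is the classic write-temp-then-rename idiom. A minimal sketch of the idea (file names are illustrative; OSTree manages the real bootloader entries itself):

```python
import os

def switch_deployment(pointer_file, new_target):
    """Atomically repoint the 'current deployment' reference.
    The new value is written to a temp file and fsynced, then
    rename() replaces the pointer in one atomic step: a reader
    sees either the old target or the new one, never a partial
    write."""
    tmp = pointer_file + ".tmp"
    with open(tmp, "w") as f:
        f.write(new_target + "\n")
        f.flush()
        os.fsync(f.fileno())
    os.rename(tmp, pointer_file)  # atomic on POSIX filesystems
```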
Rollback worked by reversing the bootloader pointer:
ostree admin undeploy 0                    # Remove newest deployment
ostree admin deploy previous-commit-hash   # Restore previous
reboot
During integration testing, we deliberately triggered failures—corrupted kernel modules, simulated power loss, broken systemd unit files. In all cases, the original deployment remained bootable, and recovery required only reboot—no manual filesystem repair or reinstallation from backup media.
The signature verification pipeline needed to work without any network access for key retrieval. Our solution used a two-stage signing process:
# Stage 1: Build system signs the package manifest
gpg --armor --detach-sign --local-user [email protected] package-manifest.json
# Stage 2: Include the signing public key in the package
tar czf update-package.tar.gz package-manifest.json package-manifest.json.asc \
    signing-keys.kbx deltas/*.bsdiff full-packages/*.deb
On the target device, verification happened entirely from the update package contents:
import subprocess
import tarfile

def verify_update_package(package_path):
    """Verify package authenticity without network access"""
    with tarfile.open(package_path) as tar:
        # Extract included public keys to temporary keyring
        tar.extract('signing-keys.kbx', '/tmp/update-verify/')
        tar.extract('package-manifest.json')
        tar.extract('package-manifest.json.asc')
    # Verify signature using included keys
    result = subprocess.run([
        'gpg', '--no-default-keyring',
        '--keyring', '/tmp/update-verify/signing-keys.kbx',
        '--verify', 'package-manifest.json.asc', 'package-manifest.json'
    ], capture_output=True)
    if result.returncode != 0:
        raise SignatureVerificationError(f"Invalid signature: {result.stderr}")
    # Verify the included key is in our trusted root keyring
    return verify_key_against_root_ca('/tmp/update-verify/signing-keys.kbx')
The key innovation was the dual-trust model. The update package included the signing key, but devices maintained a separate root keyring established during factory provisioning. The verification checked both "is this package signed" and "is the signing key itself trusted by our root authority." This prevented an attacker from creating a self-signed update package.
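The root-authority check reduces to a set-membership test against fingerprints frozen at provisioning time. A simplified sketch of that idea — real OpenPGP keys have their own fingerprint format, so hashing the raw key bytes here is a stand-in, and the JSON keyring file is our illustration:

```python
import hashlib
import json

def load_trusted_fingerprints(root_keyring_path):
    """Read the immutable set of key fingerprints written to the
    device at factory provisioning (stored here as a JSON list
    purely for illustration)."""
    with open(root_keyring_path) as f:
        return set(json.load(f))

def key_is_trusted(included_key_bytes, trusted_fingerprints):
    """A package's bundled signing key is honored only if its
    fingerprint appears in the provisioned root set, so a
    self-signed package with an attacker key is rejected even
    though its signature verifies against its own key."""
    fingerprint = hashlib.sha256(included_key_bytes).hexdigest()
    return fingerprint in trusted_fingerprints
```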
During penetration testing with the client's security team, this approach caught three classes of attempted attacks: modified update packages with attacker-controlled keys, packages with valid signatures but from untrusted sources, and replay attacks using legitimately signed but outdated packages.
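Blocking the replay case requires state beyond signature checking: a monotonic version counter that only moves forward. A minimal sketch of that check (the state handling is ours, for illustration; on a real device the counter would persist in tamper-resistant storage):

```python
def is_replay(package_version, last_installed_version):
    """A package whose version counter is not strictly greater
    than the last successfully installed one is a replay, even
    if its signature is valid."""
    return package_version <= last_installed_version

def accept_update(package_version, state):
    """Advance the monotonic counter only after acceptance, so a
    legitimately signed but outdated package can never be
    re-applied."""
    if is_replay(package_version, state["last_version"]):
        raise ValueError(
            f"replay rejected: {package_version} <= {state['last_version']}")
    state["last_version"] = package_version
```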
Update Package Size: From 2GB to 100MB
Our team validated this metric by comparing actual update package sizes across approximately 20 security updates and feature releases over an 18-month deployment period. Traditional full-package updates averaged 1.8GB to 2.3GB per update, depending on the number of affected packages. Our delta-based approach reduced this to 85MB to 120MB per update—roughly a 95% reduction.
The validation method was straightforward: we generated both traditional and delta-based packages for the same updates and compared compressed tarball sizes. The business impact showed immediately in logistics. The client's secure facility required all external media to undergo virus scanning and security validation—a process that took approximately four hours for a 2GB update package. With 100MB packages, the scanning completed in under 30 minutes, reducing the end-to-end update deployment timeline from two weeks to three days.
Device Update Time: From 45 Minutes to 8 Minutes
We tracked update times across approximately 50 field deployments spanning three months. The previous manual process required a field technician to physically access each device, boot into single-user mode, mount a USB drive, manually copy filesystem contents, and verify integrity—averaging 42 to 48 minutes per device. Our automated system reduced this to 6 to 10 minutes, including download from local update server, signature verification, delta application, and OSTree deployment.
The time savings compounded at scale. For the client's fleet of 200 devices requiring quarterly security updates, the previous approach consumed approximately 150 hours of technician time per update cycle (200 devices × 45 minutes ÷ 60 minutes/hour). The automated system reduced this to roughly 27 hours (200 devices × 8 minutes ÷ 60 minutes/hour), saving approximately 123 hours of field support time per quarter—nearly three weeks of labor recovered annually.
Rollback Safety: Zero Failed Updates in Production
The most critical metric was system availability during updates. With the previous manual update process, the client experienced approximately five to seven failed updates per quarter that left devices non-bootable, requiring complete reinstallation from backup images. Each failed update resulted in roughly four hours of downtime for that device (discovery, diagnosis, reinstall, and reconfiguration).
Our OSTree-based atomic update system eliminated this failure mode entirely. We tracked update operations across approximately 800 individual device updates over the 18-month deployment. In cases where updates failed to apply correctly—filesystem errors, power interruption, or incompatible configurations—the automatic rollback mechanism restored the previous working state within a single reboot cycle, typically under two minutes. The business impact was substantial: eliminating the five to seven catastrophic update failures per quarter saved approximately 80 to 112 hours of unplanned downtime annually, maintaining operational readiness for mission-critical systems.
Storage Efficiency: Maintaining Two Deployments in 4GB Overhead
The OSTree approach required maintaining multiple filesystem deployments simultaneously, raising concerns about storage consumption. Our team validated that the deduplication through hardlinks kept overhead minimal. A typical deployment was approximately 3.2GB for the complete root filesystem. Maintaining both current and previous deployments consumed 4.1GB total—only 900MB of actual duplicate data. This happened because unchanged files between versions (roughly 85% of the filesystem) were shared through hardlinks, duplicating only modified or new files.
The storage overhead remained acceptable even on the client's hardware with constrained 32GB eMMC storage. With approximately 4GB for dual deployments, 2GB for /var/log and runtime data, and 5GB for application-specific data, the system maintained roughly 21GB free space—sufficient headroom for normal operation while preserving the rollback safety guarantee.
Our team at Probots has developed custom embedded Linux solutions for clients operating in secure, disconnected, and bandwidth-constrained environments. We understand both the promises and limitations of air-gapped systems—when network isolation is critical for security, and when it creates operational bottlenecks.
Our approach: forensic analysis of existing infrastructure to understand failure modes, architectural decisions based on operational constraints, and validation through deliberate fault injection and adversarial testing.
Building deployable embedded Linux systems for classified, industrial control, or physically isolated environments requires balancing security guarantees with operational maintainability. Our team has successfully designed update infrastructure for clients in defense, industrial control, and secure manufacturing. We can provide architectural guidance, implementation support, and security validation for your specific air-gapped deployment.
Contact our engineering team for a consultation.