
Privacy is Non-Negotiable

For immigrant communities, privacy failures can have catastrophic consequences:

  • Deportation of family members
  • Separation of children from parents
  • Criminal prosecution
  • Employment termination
  • Denial of future immigration benefits

The only safe approach: Data that doesn't exist cannot be breached, subpoenaed, or misused.


Core Principle: Data Minimization

Collect Nothing Unnecessary

# BAD: Collecting case details
user_data = {
    'name': user.name,
    'a_number': user.a_number,
    'country_of_origin': user.country,
    'date_of_entry': user.entry_date,
    'conversation_history': messages,
    'ip_address': request.ip,
    # THIS IS DANGEROUS
}

# GOOD: Collect only operational necessities
session_data = {
    'session_id': generate_random_id(),  # No user identifier
    'language': 'es',                     # For response language
    'state': 'CA',                        # For resource matching
    # NO personal information
    # NO case details
    # NO conversation history stored
}

Questions to Ask Before Collecting Any Data

  1. Is this absolutely necessary for the system to function?
  2. What is the minimum data needed?
  3. How quickly can we delete it?
  4. What happens if this data is subpoenaed?
  5. Could this data be used to identify or locate users?
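
These questions can also be enforced mechanically. A minimal sketch of an allowlist gate, using the same field names as the session_data example above (the helper name `minimize` is illustrative):

```python
# Only fields with a documented operational need may pass through.
ALLOWED_FIELDS = {'session_id', 'language', 'state'}

def minimize(data: dict) -> dict:
    """Drop every key that is not explicitly allowlisted.

    Collection is opt-in, never opt-out: anything not on the list is
    discarded before it can be stored, logged, or forwarded.
    """
    return {k: v for k, v in data.items() if k in ALLOWED_FIELDS}
```

A gate like this turns over-collection into a code-review question ("why is this key on the list?") rather than an after-the-fact audit finding.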

Zero-Retention Architecture

Memory-Only Processing

All conversation data should exist only in volatile memory:

from dataclasses import dataclass
from typing import List
import secrets
import time

@dataclass
class EphemeralSession:
    """Session that exists only in RAM, never persisted."""

    session_id: str
    messages: List[dict]
    created_at: float
    last_activity: float

    @classmethod
    def create(cls):
        return cls(
            session_id=secrets.token_urlsafe(16),
            messages=[],
            created_at=time.time(),
            last_activity=time.time()
        )

    def add_message(self, role: str, content: str):
        """Add message to in-memory history."""
        self.messages.append({
            'role': role,
            'content': content,
            'timestamp': time.time()
        })
        self.last_activity = time.time()

    def destroy(self):
        """Clear all session data (best effort)."""
        # Best-effort scrub: Python strings are immutable, so the
        # originals are released to the garbage collector rather than
        # overwritten in place; true memory scrubbing requires
        # mutable buffers (e.g., bytearray).
        for msg in self.messages:
            msg['content'] = 'X' * len(msg['content'])
        self.messages.clear()
        self.session_id = None

Automatic Session Expiration

import asyncio
import time
from typing import Dict

class SessionManager:
    def __init__(self, timeout_seconds: int = 1800):  # 30 minutes
        self.sessions: Dict[str, EphemeralSession] = {}
        self.timeout = timeout_seconds

    async def cleanup_expired(self):
        """Periodically destroy expired sessions."""
        while True:
            await asyncio.sleep(60)  # Check every minute

            current_time = time.time()
            expired = [
                sid for sid, session in self.sessions.items()
                if current_time - session.last_activity > self.timeout
            ]

            for sid in expired:
                self.sessions[sid].destroy()
                del self.sessions[sid]

    def end_session(self, session_id: str):
        """Immediately destroy session on user disconnect."""
        if session_id in self.sessions:
            self.sessions[session_id].destroy()
            del self.sessions[session_id]

Disable All Telemetry

Model Inference Configuration

# vLLM configuration - disable all logging
from vllm import EngineArgs

engine_args = EngineArgs(
    model="mistral-7b",
    disable_log_requests=True,       # No request logging
    disable_log_stats=True,          # No statistics logging
    disable_custom_all_reduce=True,  # Skip custom all-reduce kernel; use NCCL
)

Framework Telemetry Disablement

# Disable HuggingFace telemetry (set before importing transformers)
import os
os.environ['HF_HUB_DISABLE_TELEMETRY'] = '1'
os.environ['TRANSFORMERS_OFFLINE'] = '1'

# Disable LangChain telemetry
os.environ['LANGCHAIN_TRACING_V2'] = 'false'

# Disable ChromaDB telemetry
import chromadb
from chromadb.config import Settings

client = chromadb.Client(Settings(
    anonymized_telemetry=False,
    allow_reset=False
))

Application-Level Logging

import logging

# The standard library already ships a no-op handler; attach it and
# stop propagation so chatbot records never reach any other handler.
logging.getLogger('chatbot').addHandler(logging.NullHandler())
logging.getLogger('chatbot').propagate = False
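
To confirm the suppression actually works, a quick stdlib-only self-check (the in-memory stream stands in for whatever handler the root logger would normally use):

```python
import io
import logging

# Route the root logger to an in-memory stream so leakage is observable.
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.DEBUG, force=True)

# Configure the chatbot logger as above: swallow records, never propagate.
chat_logger = logging.getLogger('chatbot')
chat_logger.addHandler(logging.NullHandler())
chat_logger.propagate = False

chat_logger.warning('user asked about A123456789')

# Nothing from the chatbot logger reached the root handler.
assert stream.getvalue() == ''
```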

Air-Gapped Deployment

Network Isolation

The inference server should have no outbound internet access:

# docker-compose.yml
version: '3.8'

services:
  llm-server:
    image: vllm/vllm-openai:latest
    networks:
      - isolated
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  chatbot-api:
    image: chatbot-api:latest
    networks:
      - isolated
      - web
    depends_on:
      - llm-server

  nginx:
    image: nginx:alpine
    networks:
      - web
    ports:
      - "443:443"

networks:
  isolated:
    internal: true    # NO external internet access
    driver: bridge

  web:
    driver: bridge

Firewall Rules

#!/bin/bash
# Strict firewall rules for LLM server

# Default deny all outbound
iptables -P OUTPUT DROP

# Allow only internal network
iptables -A OUTPUT -d 10.0.0.0/8 -j ACCEPT
iptables -A OUTPUT -d 172.16.0.0/12 -j ACCEPT
iptables -A OUTPUT -d 192.168.0.0/16 -j ACCEPT

# Allow loopback
iptables -A OUTPUT -o lo -j ACCEPT

# Deny everything else
iptables -A OUTPUT -j LOG --log-prefix "BLOCKED_OUTBOUND: "
iptables -A OUTPUT -j DROP

Verification

# Test that LLM container cannot reach internet
docker exec llm-server curl -s --connect-timeout 5 https://google.com
# Should fail with connection timeout

# Test internal communication works
docker exec chatbot-api curl -s http://llm-server:8000/health
# Should return healthy
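
The same checks can be scripted for repeatable audits. A small stdlib sketch, assuming it runs from inside the container under test (host names in the comments are illustrative):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    On an air-gapped host this should be False for any external
    address and True only for allowed internal services.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers timeouts, refusals, and DNS failures
        return False

# Example egress audit:
# assert not can_connect('8.8.8.8', 443)    # external must fail
# assert can_connect('llm-server', 8000)    # internal must work
```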

Encryption

Data at Rest

All stored data (vector database, configurations) must be encrypted:

# Create encrypted volume for vector database
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 encrypted_data
mkfs.ext4 /dev/mapper/encrypted_data
mount /dev/mapper/encrypted_data /data/vectors

# Python-level encryption for sensitive configs
from cryptography.fernet import Fernet

class SecureConfig:
    def __init__(self, key_path: str):
        with open(key_path, 'rb') as f:
            self.cipher = Fernet(f.read())

    def encrypt(self, data: bytes) -> bytes:
        return self.cipher.encrypt(data)

    def decrypt(self, encrypted: bytes) -> bytes:
        return self.cipher.decrypt(encrypted)
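
Key generation and a round trip use the same Fernet primitive; the secret value below is illustrative, and the key should live on the encrypted volume, never alongside the data it protects:

```python
from cryptography.fernet import Fernet

# One-time key generation (write the result to the key file
# that SecureConfig loads).
key = Fernet.generate_key()

cipher = Fernet(key)
token = cipher.encrypt(b'db_password=example')  # illustrative secret
assert cipher.decrypt(token) == b'db_password=example'
```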

Data in Transit

All internal communication must use TLS:

# Traefik configuration for internal TLS
http:
  routers:
    llm-router:
      rule: "Host(`llm.internal`)"
      tls: true
      service: llm-service

  services:
    llm-service:
      loadBalancer:
        servers:
          - url: "https://llm-server:8000"

tls:
  certificates:
    - certFile: /certs/internal.crt
      keyFile: /certs/internal.key

Compliance Framework

GDPR Requirements

  • Data minimization: Collect only operational necessities
  • Purpose limitation: Use data only for the immediate response
  • Storage limitation: Zero retention, memory-only
  • Right to erasure: Automatic session destruction
  • Data portability: No data stored to export

CCPA Requirements

  • Disclosure: No personal information collected
  • Opt-out of sale: No data to sell
  • Non-discrimination: Equal service regardless of privacy choices
  • Deletion: Automatic, nothing to delete manually

Immigration-Specific Considerations

  • Subpoena for user data: No data exists to produce
  • Warrant for server contents: Session data cleared immediately
  • Immigration enforcement requests: No logs, no identifiers, nothing to share
  • Third-party data sharing: Air-gapped, no external connections

Anonymous Quality Assurance

Sanitized Output Logging

For quality assurance, log only sanitized outputs (never inputs):

import hashlib
import queue
import re
import time

class SanitizedLogger:
    """Log only AI outputs with personal info removed."""

    PII_PATTERNS = [
        r'\bA\d{8,9}\b',                # A-Numbers ("A" plus 8-9 digits)
        r'\b\d{3}-\d{2}-\d{4}\b',       # SSN
        r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b',  # Phone numbers
        r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',  # Email
        r'\b\d{1,5}\s+[\w\s]+(?:street|st|avenue|ave|road|rd|drive|dr)\b',  # Addresses
    ]

    def __init__(self):
        # Ephemeral in-process queue for sanitized QA samples
        self.qa_queue: queue.Queue = queue.Queue()

    def sanitize(self, text: str) -> str:
        """Remove potential PII from text."""
        sanitized = text
        for pattern in self.PII_PATTERNS:
            sanitized = re.sub(pattern, '[REDACTED]', sanitized, flags=re.IGNORECASE)
        return sanitized

    def log_output(self, session_id: str, response: str):
        """Log sanitized response for QA, never user input."""
        sanitized = self.sanitize(response)

        # Log only sanitized output
        qa_log = {
            'timestamp': time.time(),
            'session_hash': hashlib.sha256(session_id.encode()).hexdigest()[:8],
            'response_sample': sanitized[:500],  # Truncate
            'contains_disclaimer': 'legal advice' in response.lower()
        }

        # Write to separate QA log (still ephemeral)
        self.qa_queue.put(qa_log)
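
Before trusting the patterns in the log path, they can be sanity-checked standalone. A minimal exercise of a subset of the patterns above (the `scrub` helper is just for this check):

```python
import re

# A-Number, SSN, and email patterns, exercised in isolation.
patterns = [
    r'\bA\d{8,9}\b',                                        # A-Numbers
    r'\b\d{3}-\d{2}-\d{4}\b',                               # SSN
    r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',  # Email
]

def scrub(text: str) -> str:
    for p in patterns:
        text = re.sub(p, '[REDACTED]', text, flags=re.IGNORECASE)
    return text

assert scrub('My number is A123456789') == 'My number is [REDACTED]'
assert scrub('Write to jane@example.org') == 'Write to [REDACTED]'
```

Regex-based redaction is a backstop, not a guarantee; the NER masking below catches identifiers the patterns miss.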

Named Entity Recognition Masking

# Use NER to identify and mask personal information
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER")

def mask_entities(text: str) -> str:
    """Replace named entities with placeholders."""
    entities = ner(text)

    masked = text
    for entity in sorted(entities, key=lambda x: x['start'], reverse=True):
        if entity['entity'] in ['B-PER', 'I-PER', 'B-LOC', 'I-LOC']:
            masked = masked[:entity['start']] + '[ENTITY]' + masked[entity['end']:]

    return masked

Incident Response

If a Breach is Suspected

INCIDENT RESPONSE PROTOCOL

1. IMMEDIATE (0-1 hour):
   □ Shut down affected systems
   □ Preserve volatile memory state (if forensics needed)
   □ Alert legal counsel

2. ASSESSMENT (1-4 hours):
   □ Determine scope of potential exposure
   □ Review access logs (external only, not user data)
   □ Identify attack vector

3. REMEDIATION (4-24 hours):
   □ Patch vulnerability
   □ Rotate all credentials
   □ Review firewall rules

4. NOTIFICATION (if required):
   □ Consult legal counsel on notification requirements
   □ Document that no user data was stored
   □ Prepare statement confirming zero-retention architecture

5. POST-INCIDENT:
   □ Update security procedures
   □ Conduct security audit
   □ Document lessons learned

Documentation for Legal Defense

Maintain records proving zero-retention:

# architecture-documentation.yaml
privacy_architecture:
  data_retention:
    user_prompts: "none"
    conversation_history: "session_memory_only"
    session_duration: "auto_destroy_30min"
    storage_location: "volatile_memory_only"

  network_isolation:
    llm_server: "no_internet_access"
    verification: "firewall_rules_documented"
    audit_frequency: "monthly"

  telemetry:
    external_apis: "disabled"
    analytics: "none"
    crash_reporting: "local_only"

  encryption:
    at_rest: "AES-256"
    in_transit: "TLS_1.3"
    key_management: "local_hsm"

  compliance_audits:
    frequency: "quarterly"
    last_audit: "2026-03-01"
    findings: "none"
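
A CI guard can keep this documentation honest by failing the build if a declared retention value ever drifts. A minimal stdlib-only sketch (regex matching avoids a YAML dependency; the checked fields follow the example above):

```python
import re

# Fields that must stay locked to their zero-retention values.
REQUIRED = {
    'user_prompts': 'none',
    'storage_location': 'volatile_memory_only',
}

def check_retention_policy(yaml_text: str) -> list:
    """Return a list of violations found in the policy document."""
    violations = []
    for field, expected in REQUIRED.items():
        m = re.search(rf'^\s*{field}:\s*"([^"]*)"', yaml_text, re.MULTILINE)
        if m is None:
            violations.append(f'{field}: missing')
        elif m.group(1) != expected:
            violations.append(f'{field}: {m.group(1)!r} != {expected!r}')
    return violations

# Usage in CI (path is illustrative):
# violations = check_retention_policy(
#     open('architecture-documentation.yaml').read())
# assert not violations, violations
```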

User Communication

Transparency Statement

Display prominently in the chatbot interface:

<div class="privacy-notice">
  <h3>🔒 Your Privacy is Protected</h3>

  <ul>
    <li><strong>No data stored:</strong> Our conversation exists only while
        we're chatting. Nothing is saved after you leave.</li>
    <li><strong>No tracking:</strong> We don't collect your name, location,
        IP address, or any personal information.</li>
    <li><strong>No sharing:</strong> Our servers never connect to the internet.
        Your questions stay here.</li>
    <li><strong>No accounts:</strong> You don't need to log in or create
        an account.</li>
  </ul>

  <p><em>We built this system specifically to protect immigrant communities.</em></p>
</div>

Next Steps

  1. Follow implementation roadmap - Phased deployment
  2. Review Safety Guardrails - UPL compliance
  3. Configure Local LLM Infrastructure - Hardware setup

Legal Disclaimer

This website does not provide legal advice. The information provided on this site is for general informational and educational purposes only. It does not create an attorney-client relationship.

Information on this website may not be current or accurate. Immigration law is complex and varies by jurisdiction and individual circumstances. Always consult with a qualified immigration attorney for advice specific to your situation.

Neither ICE Encounter, its developers, partners, nor any contributors shall be liable for any actions taken or not taken based on information from this site. Use of this site is subject to our Terms of Use and Privacy Policy.