A comprehensive guide to diagnosing and resolving common issues with LFS-Ayats.
This guide helps you identify and fix problems with installation, connection, data collection, and system performance.
Before diving into specific issues, run these quick checks:
# Check Python version
python --version # Should be 3.8+
# Check if package is installed
pip show lfs-ayats
# Check if dependencies are installed
pip list | grep -E "numpy|pandas|plotly|dash|fastapi"
# Test basic import
python -c "from src.connection import InSimClient; print('OK')"
# Check if LFS is running
# Windows: tasklist | findstr LFS
# Linux/macOS: ps aux | grep LFS
Symptoms:
ConnectionRefusedError: [Errno 111] Connection refused
Causes & Solutions:
# Check if LFS is running
# Windows
tasklist | findstr LFS
# Linux/macOS
ps aux | grep LFS
Solution: Launch Live for Speed
# Check your configuration (the default LFS InSim port is 29999)
client = InSimClient(host="127.0.0.1", port=29999) # Default port
Solution: Ensure port matches LFS InSim settings
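If InSim has never been enabled on the LFS side, there is nothing listening on that port at all. One quick way to open it (assuming a default single-player setup) is directly from the in-game chat; the port you give here must match the port passed to InSimClient:
# In LFS, press T to open the chat and type:
/insim 29999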
# Linux: Check firewall
sudo ufw status
sudo ufw allow 29999/tcp
# Windows: Add exception in Windows Defender Firewall
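On Windows, the exception can also be added from an elevated command prompt; this is a sketch using the standard netsh syntax (the rule name is arbitrary):
netsh advfirewall firewall add rule name="LFS-Ayats InSim" dir=in action=allow protocol=TCP localport=29999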
Symptoms:
The client connects, but the connection drops or times out intermittently during a session.
Causes & Solutions:
# config.yaml
connection:
timeout: 5.0
heartbeat_interval: 30 # Send keepalive every 30 seconds
# Test network stability
ping -c 100 127.0.0.1 # Local
ping -c 100 <remote-ip> # Remote server
# Monitor resources
import psutil
print(f"CPU: {psutil.cpu_percent()}%")
print(f"Memory: {psutil.virtual_memory().percent}%")
Symptoms:
Local connections work, but connecting to a remote LFS server fails or is rejected.
Solutions:
client = InSimClient(
host="192.168.1.100",
port=29999,
admin_password="your_password" # Required for remote
)
# Test if server is reachable
ping <server-ip>
# Test if port is open
telnet <server-ip> 29999
# Or use netcat
nc -zv <server-ip> 29999
# Server-side: Allow InSim port
sudo ufw allow 29999/tcp
# Check if port is listening
sudo netstat -tlnp | grep 29999
Symptoms:
ValueError: Invalid packet size: expected >= 4, got 0
Causes & Solutions:
try:
packet = client.receive_packet()
except ConnectionError as e:
logger.error(f"Connection lost: {e}")
client.reconnect()
# Ensure the complete packet is read, not just the header
def receive_packet(self):
    header = self.socket.recv(4)
    if len(header) < 4:
        raise ValueError("Incomplete packet header")
    size = header[0]  # first byte of an InSim packet is its size (scale by 4 if your InSim version sends size/4)
    body = b""
    while len(body) < size - 4:  # recv() may return partial data, so keep reading
        chunk = self.socket.recv(size - 4 - len(body))
        if not chunk:
            raise ConnectionError("Socket closed mid-packet")
        body += chunk
    return header + body
Symptoms:
WARNING: Unknown packet type: 255
Explanation:
LFS sent a packet type that this version of LFS-Ayats does not recognize, typically because a newer LFS/InSim release introduced new packet types.
Solutions:
Ignore the warning (if telemetry still works), or register a handler for the new packet type:
# src/connection/packet_handler.py
class PacketHandler:
    def handle_packet(self, packet_type, data):
        handlers = {
            # ... existing handlers
            255: self.parse_new_packet_type,
        }

    def parse_new_packet_type(self, data):
        # Minimal handler: log the raw payload until its structure is known
        logger.debug("Unhandled packet 255 payload: %s", data.hex())
Or update LFS-Ayats to the latest version, which may already handle the new packet type:
git pull origin main
pip install -e .
Symptoms:
struct.error: unpack requires a buffer of X bytes
Solutions:
Check LFS and InSim version compatibility (type /version in the LFS chat), then enable packet debug logging:
from src.utils import setup_logger
logger = setup_logger("packet_debug", "DEBUG")
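To see exactly which packet fails to unpack, logging the raw bytes next to the expected size usually narrows it down quickly. A small helper sketch, using the logger configured above; the format string is whatever your parser already passes to struct.unpack:
import struct

def debug_unpack(fmt, raw):
    # Unpack raw bytes, logging a hex dump when the buffer size doesn't match
    try:
        return struct.unpack(fmt, raw)
    except struct.error:
        logger.error("Bad packet: fmt=%s expected=%d got=%d raw=%s",
                     fmt, struct.calcsize(fmt), len(raw), raw.hex())
        raise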
Symptoms:
The client connects, but no telemetry data arrives (callbacks never fire, history stays empty).
Diagnostic Steps:
# Enable detailed logging
logger.setLevel("DEBUG")
# Check if MCI packets are enabled
client.send_packet(IS_TINY, ReqI=1) # Request IS_MCI packets
Solutions:
# config.yaml
telemetry:
interval: 100 # Must be > 0 (milliseconds)
# Ensure collector is started
collector = TelemetryCollector(client)
collector.start() # Don't forget to start!
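Once the collector is started, a throwaway callback is the quickest way to confirm data is actually arriving (register_callback usage follows the pattern shown later in this guide):
received = {"count": 0}

def debug_callback(data):
    received["count"] += 1
    if received["count"] % 50 == 0:
        print(f"Received {received['count']} telemetry updates")

collector.register_callback("telemetry", debug_callback)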
Symptoms:
Telemetry values look wrong, for example speeds that are far too low or too high.
Solutions:
# Speed in LFS is in m/s, often converted to km/h
speed_kmh = speed_ms * 3.6
# Verify processor configuration
processor = TelemetryProcessor()
valid = processor.validate_speed(speed_kmh) # Should return True
# Enable validation logging
processor = TelemetryProcessor(validate=True, log_invalid=True)
Symptoms:
Telemetry updates lag noticeably behind what is happening in the game.
Solutions:
telemetry:
interval: 50 # Faster: 50ms = 20Hz
# Check latency
ping <server-ip>
# High latency (>100ms) will cause delays
# Simplify callbacks
collector.register_callback("telemetry", lambda data: print(len(data)))
# Avoid heavy processing in callbacks
Symptoms:
MemoryError: Unable to allocate array
# Or system becomes slow
Solutions:
collector = TelemetryCollector(
client,
max_history=1000 # Reduce from default 10000
)
# Clear old data every 5 minutes
import threading
import time

def periodic_clear():
    while True:
        time.sleep(300)
        collector.clear_history()

threading.Thread(target=periodic_clear, daemon=True).start()
# Store data in database, not in-memory
from src.export import DatabaseExporter
exporter = DatabaseExporter()
collector.register_callback("telemetry", exporter.export_batch)
telemetry:
interval: 200 # Less frequent = less memory
Symptoms:
CPU usage is very high while collecting or visualizing telemetry.
Solutions:
visualization:
refresh_rate: 5 # Hz, down from 10
telemetry:
interval: 200 # ms, up from 100
# Bad: Heavy processing in callback
def heavy_callback(data):
for item in data:
complex_calculation(item) # Blocks thread
# Good: Offload to separate thread
from concurrent.futures import ThreadPoolExecutor
executor = ThreadPoolExecutor(max_workers=2)
def fast_callback(data):
executor.submit(process_data, data) # Non-blocking
export:
auto_export: false # Disable if not needed
integrations:
discord:
enabled: false # Disable unused integrations
Symptoms:
The dashboard is sluggish or the browser tab becomes unresponsive.
Solutions:
# dashboard.py
dcc.Interval(
id='interval-component',
interval=500 # 500ms instead of 100ms
)
# Show only last 100 points instead of 1000
history = collector.get_telemetry_history(limit=100)
# Reduce traces, remove markers
fig = go.Figure(data=[
go.Scatter(
y=speeds,
mode='lines', # Only lines, no markers
line=dict(width=1) # Thinner lines
)
])
# Use matplotlib for static plots instead of Plotly
import matplotlib.pyplot as plt
plt.plot(speeds)
plt.savefig('speed.png', dpi=72) # Lower DPI
Symptoms:
The dashboard does not load in the browser.
Solutions:
# Check the console where the dashboard was started. You should see:
# Dash is running on http://127.0.0.1:8050/
# Linux/macOS
lsof -i :8050
# Windows
netstat -ano | findstr :8050
# Disable the auto-reloader if the app starts twice or the port stays in use
app.run_server(debug=True, use_reloader=False)
Symptoms:
The dashboard loads, but the graphs never update.
Solutions:
@app.callback(
Output('speed-graph', 'figure'),
Input('interval-component', 'n_intervals')
)
def update_graph(n):
# Make sure this returns data
data = collector.get_latest_telemetry()
if not data:
return go.Figure() # Empty figure if no data
# ...
# Test if data is being collected
data = collector.get_latest_telemetry()
print(f"Got {len(data)} data points")
dcc.Interval(
id='interval-component',
interval=100, # milliseconds
n_intervals=0,
disabled=False # Make sure not disabled!
)
Symptoms:
sqlalchemy.exc.OperationalError: unable to open database file
Solutions:
database:
type: sqlite
sqlite:
path: ./data/telemetry.db # Ensure directory exists
mkdir -p data # Create directory
ls -l data/telemetry.db
chmod 644 data/telemetry.db # Fix permissions if needed
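If permissions look fine, confirm the file itself opens and contains tables (assuming the sqlite3 CLI is installed):
sqlite3 data/telemetry.db ".tables"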
database:
type: postgresql
postgresql:
host: localhost
port: 5432
database: lfs_telemetry
user: lfs_user
password: ${DB_PASSWORD} # Use environment variable
# Test PostgreSQL connection
psql -h localhost -U lfs_user -d lfs_telemetry -c "SELECT 1"
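The config above reads the password from an environment variable, so it must be set in the shell that starts the application (variable name taken from the config):
# Linux/macOS
export DB_PASSWORD='your-db-password'
# Windows (PowerShell)
$env:DB_PASSWORD = 'your-db-password'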
Symptoms:
Queries become slow or the database file keeps growing.
Solutions:
# Delete data older than 30 days
from src.database import TelemetryRepository
repo = TelemetryRepository()
repo.delete_old_sessions(older_than_days=30)
# Keep every 10th point for old data
repo.downsample_telemetry(
older_than_days=7,
factor=10
)
sqlite3 data/telemetry.db "VACUUM;"
-- Add indexes for frequently queried columns
CREATE INDEX idx_session_date ON sessions(created_at);
CREATE INDEX idx_lap_time ON laps(lap_time);
Symptoms:
{"detail": "Not authenticated"}
Solutions:
import requests
# Login
response = requests.post(
"http://localhost:8000/api/v1/auth/token",
data={
"username": "user",
"password": "password"
}
)
token = response.json()["access_token"]
# Use token
headers = {"Authorization": f"Bearer {token}"}
response = requests.get(
"http://localhost:8000/api/v1/sessions",
headers=headers
)
Also check the interactive API documentation at /api/docs for the expected request format.
Symptoms:
WebSocket connection failed
Solutions:
# Correct: ws:// not http:// (connect must be awaited inside an async function;
# a complete async example follows after the CORS snippet below)
ws = await websockets.connect("ws://localhost:8000/api/v1/telemetry/live")
curl http://localhost:8000/api/health
# src/api/main.py
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"], # Or specific origins
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
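For reference, a minimal working client loop looks roughly like this (endpoint path taken from the snippet above; the payload format depends on the API):
import asyncio
import websockets

async def stream_telemetry():
    # Note the ws:// scheme and the same live-telemetry endpoint as above
    async with websockets.connect("ws://localhost:8000/api/v1/telemetry/live") as ws:
        while True:
            message = await ws.recv()  # payload format depends on the API
            print(message)

asyncio.run(stream_telemetry())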
Symptoms:
ModuleNotFoundError: No module named 'src'
This is one of the most common issues when first setting up LFS-Ayats or after editing code.
Root Cause:
Python cannot find the src module because the package is not installed or is not on Python's module search path. This typically happens when:
- the package has not been installed with pip install -e .
- the virtual environment is not activated
- scripts are run from a directory other than the project root
Recommended Solution: Install in Development Mode
The best solution is to install the package in editable/development mode:
# 1. Navigate to project root
cd /path/to/LFS-Ayats
# 2. Activate your virtual environment (if using one)
source venv/bin/activate # Linux/macOS
# or
venv\Scripts\activate # Windows
# 3. Install in editable mode
pip install -e .
What does pip install -e . do?
-e stands for "editable" (development) mode, and . means "install from the current directory" (pip reads setup.py). This makes src importable as a proper Python package while your local edits take effect immediately.
Verify Installation:
# Check package is installed
pip show lfs-ayats
# Expected output:
# Name: lfs-ayats
# Version: 0.1.0
# Location: /path/to/LFS-Ayats/src
# Editable project location: /path/to/LFS-Ayats/src
Test Import:
# This should work without errors
python -c "import src; print('✓ Success: src module imported')"
Alternative Solutions (Not Recommended)
Set PYTHONPATH manually:
# Linux/macOS
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
# Windows
set PYTHONPATH=%PYTHONPATH%;%CD%
Or always run scripts from the project root:
cd LFS-Ayats # Project root
python examples/basic_connection.py
Common Mistakes:
❌ Forgetting to activate virtual environment before running pip install -e .
# Wrong: Installing outside venv
pip install -e .
# Correct: Activate venv first
source venv/bin/activate # Then install
pip install -e .
❌ Running from wrong directory
# Wrong: In subdirectory
cd src/
pip install -e . # Won't find setup.py
# Correct: From project root
cd /path/to/LFS-Ayats
pip install -e .
❌ Not installing at all and expecting imports to work
# This will fail without installation:
from src.connection import InSimClient # ❌ ModuleNotFoundError
Symptoms:
ModuleNotFoundError: No module named 'numpy'
Solutions:
source venv/bin/activate # Linux/macOS
venv\Scripts\activate # Windows
pip install -r requirements.txt
pip list | grep numpy # Should show numpy version
Cause: Packet structure mismatch
Solution: Update LFS-Ayats and check LFS version compatibility
Cause: Port already in use
Solution:
# Kill process using port
# Windows
netstat -ano | findstr :29999
taskkill /PID <process_id> /F
# Linux
sudo fuser -k 29999/tcp
Cause: Multiple processes accessing SQLite
Solution: Ensure only one process writes to the SQLite file at a time, or switch to the PostgreSQL backend for concurrent access
Cause: Error in a dashboard callback function
Solution: Run the dashboard with debug=True and check the console and browser for the callback traceback
Enable detailed logging to diagnose issues:
from src.utils import setup_logger
import logging
# Set to DEBUG level
logger = setup_logger("lfs_ayats", level="DEBUG")
# Or modify config.yaml
logging:
level: DEBUG
file: logs/debug.log
console: true
View logs:
tail -f logs/debug.log # Linux/macOS
type logs\debug.log # Windows
If your issue isn't covered here, collect the following diagnostic information before asking for help:
# System information
python --version
pip --version
pip list | grep lfs-ayats
# Check LFS process
ps aux | grep LFS # Linux/macOS
tasklist | findstr LFS # Windows
# Check ports
netstat -tlnp | grep 29999 # Linux
netstat -ano | findstr :29999 # Windows
# Test connection
telnet 127.0.0.1 29999
nc -zv 127.0.0.1 29999
# Python diagnostics
python -c "import sys; print(sys.path)"
python -c "from src.connection import InSimClient; print('OK')"
# Package information
pip show lfs-ayats
pip check # Check for conflicts
Still stuck? Don’t hesitate to ask for help on GitHub! 🆘