# 🔧 Troubleshooting Guide
Quick solutions to common issues when using WP LLM for WordPress development.
## 🚨 Common Issues and Solutions
### Installation Problems
Issue: "Model not found" error
Error: model 'wp-llm' not found
Solution:
-
Verify the model is downloaded:
bashollama list
-
If not listed, download the model:
bashollama pull wp-llm
-
Check your internet connection and try again.
Issue: "Out of memory" error
Error: out of memory
Solution:
- Close other applications to free up RAM
- Restart Ollama:
bash
ollama stop ollama start
- Consider upgrading to 16GB+ RAM
- Use a smaller model variant if available
Issue: "Permission denied" error
Error: permission denied
Solution:
- Add your user to the ollama group:
bash
sudo usermod -a -G ollama $USER
- Log out and back in, or restart your system
- Verify permissions:
bash
ls -la ~/.ollama
### Configuration Issues
#### Issue: Slow response times

Generation takes 30+ seconds.

**Solution:**

- Ensure you have 8GB+ of free RAM
- Close unnecessary applications
- Use SSD storage for better I/O performance
- Consider using a smaller model variant
- Check system resources:

  ```bash
  htop
  ```
#### Issue: Connection refused

```
Error: connection refused
```

**Solution:**

- Restart the Ollama service:

  ```bash
  sudo systemctl restart ollama
  ```

- Check whether Ollama is running:

  ```bash
  ps aux | grep ollama
  ```

- Verify the port is listening:

  ```bash
  netstat -tulpn | grep 11434
  ```
#### Issue: Model download fails

```
Error downloading model: network timeout
```

**Solution:**

- Check your internet connection
- Try downloading during off-peak hours
- Use a different network if possible
- Verify disk space:

  ```bash
  df -h
  ```

- Remove and re-download the model:

  ```bash
  ollama rm wp-llm
  ollama pull wp-llm
  ```
### Usage Problems
#### Issue: Generated code doesn't work

The generated code has errors or doesn't function properly.

**Solution:**

- **Review the code** - Always check generated code before use (see the syntax-check sketch below)
- **Provide more context** - Include your WordPress version and setup details
- **Refine your prompt** - Be more specific about requirements
- **Test incrementally** - Build and test code step by step
- **Check WordPress compatibility** - Ensure the code works with your WordPress version
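As a first sanity check, you can lint generated PHP before loading it into WordPress. A minimal sketch, assuming the PHP CLI is installed and the generated code was saved to a hypothetical `generated-snippet.php`:

```bash
# Syntax-check a generated snippet before activating it in WordPress.
# Note: `php -l` only catches parse errors, not logic or security issues.
php -l generated-snippet.php

# If you use WP-CLI, you can also run the file in a test site's context
# (run this from within a WordPress install):
wp eval-file generated-snippet.php
```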
#### Issue: Insecure code generation

Generated code lacks proper security measures.

**Solution:**

- **Be explicit about security** - Include security requirements in your prompt (see the example prompt below)
- **Use security-focused prompts** - Ask for sanitization, validation, and nonce verification
- **Review security aspects** - Always verify security implementations yourself
- **Follow WordPress security guidelines** - Reference the official WordPress security documentation
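A prompt that names the exact WordPress security APIs you expect tends to produce safer output than a generic request. An illustrative example (the function names are standard WordPress APIs; the prompt wording is just a suggestion):

```bash
ollama run wp-llm "Create a WordPress contact form handler. \
Sanitize all input with sanitize_text_field() or sanitize_email(), \
verify a nonce with wp_verify_nonce(), check capabilities with \
current_user_can(), and escape all output with esc_html() or esc_attr()."
```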
#### Issue: Poor code quality

Generated code doesn't follow WordPress coding standards.

**Solution:**

- **Specify standards** - Ask explicitly for WordPress coding standards compliance
- **Include examples** - Provide sample code for reference
- **Request documentation** - Ask for PHPDoc comments and inline documentation
- **Use iterative development** - Generate and refine code step by step (a standards check is sketched below)
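You can also verify standards compliance mechanically. A sketch assuming PHP_CodeSniffer and the WordPress Coding Standards ruleset are installed (e.g. via Composer), again using a hypothetical `generated-snippet.php`:

```bash
# Check a generated file against the WordPress coding standards
phpcs --standard=WordPress generated-snippet.php

# Auto-fix the violations that the fixer can handle
phpcbf --standard=WordPress generated-snippet.php
```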
### Performance Issues
#### Issue: High memory usage

Ollama is using too much RAM.

**Solution:**

- **Close other applications** - Free up system memory
- **Restart Ollama** - Unload the model and clear memory
- **Use a smaller model** - Switch to a 7B variant if you're using a larger model
- **Monitor usage** - Use system monitoring tools (see below)
- **Upgrade RAM** - Consider a hardware upgrade
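To see what is actually resident, `ollama ps` lists the currently loaded models and their memory footprint, and `free -h` shows overall RAM pressure:

```bash
# Show which models are loaded and how much memory they use
ollama ps

# Show total, used, and available system memory
free -h
```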
#### Issue: Slow code generation

Responses take too long to generate.

**Solution:**

- **Optimize prompts** - Be more specific and concise
- **Use SSD storage** - Faster I/O performance
- **Close background processes** - Free up CPU resources
- **Check system load** - Monitor CPU and memory usage
- **Consider a hardware upgrade** - More RAM and a faster CPU (a quick benchmark is sketched below)
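A crude way to measure the effect of each tuning step is to time a fixed prompt before and after the change. A minimal sketch using the shell's `time` builtin; the prompt text is just an example:

```bash
# Time a fixed prompt so runs are comparable across changes
time ollama run wp-llm "Register a custom post type called 'book'"
```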
### Integration Issues
#### Issue: VS Code integration not working

The Ollama extension is not responding in VS Code.

**Solution:**

- **Verify Ollama is running** - Check that the service is active (a quick API check is sketched below)
- **Check extension settings** - Configure the correct model name
- **Restart VS Code** - Reload the editor
- **Update the extension** - Install the latest version
- **Check logs** - Review the extension and Ollama logs
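Most editor extensions talk to Ollama's local HTTP API, so confirming the API answers rules out half the problem. A sketch assuming the default port 11434:

```bash
# List available models via the HTTP API; if this fails,
# the extension cannot reach Ollama either
curl http://localhost:11434/api/tags
```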
#### Issue: API integration problems

HTTP API calls are failing.

**Solution:**

- **Verify Ollama is serving** - Start the API server:

  ```bash
  ollama serve
  ```

- **Check the endpoint URL** - Use the correct localhost address
- **Verify the request format** - Use a proper JSON structure
- **Check authentication** - A local setup requires no auth
- **Test with curl** - Verify the API manually:

  ```bash
  curl -X POST http://localhost:11434/api/generate \
    -H "Content-Type: application/json" \
    -d '{"model": "wp-llm", "prompt": "test"}'
  ```
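By default, `/api/generate` streams one JSON object per token. If your integration expects a single response body, set `"stream": false`; a sketch that extracts the `response` field with `jq` (assuming `jq` is installed):

```bash
curl -s -X POST http://localhost:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{"model": "wp-llm", "prompt": "test", "stream": false}' \
  | jq -r '.response'
```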
## 🔍 Diagnostic Tools
### System Information

Check your system configuration:

```bash
# Check OS and version
uname -a

# Check available memory
free -h

# Check disk space
df -h

# Check CPU information
lscpu

# Check Ollama version
ollama --version
```
### Ollama Status

Verify the Ollama installation and status:

```bash
# Check if Ollama is running
ps aux | grep ollama

# Check available models
ollama list

# Follow Ollama logs (Linux with systemd)
journalctl -u ollama -f

# Test Ollama functionality
ollama run wp-llm "Hello, test message"
```
### Network Diagnostics

Check network connectivity:

```bash
# Test internet connection
ping -c 4 google.com

# Check DNS resolution
nslookup ollama.ai

# Check that the Ollama port is listening
netstat -tulpn | grep 11434

# Check firewall settings (Ubuntu)
sudo ufw status
```
### Performance Monitoring

Monitor system performance:

```bash
# Real-time system monitoring
htop

# Memory usage
free -h

# Disk I/O (typically requires root)
sudo iotop

# Network usage (typically requires root)
sudo iftop
```
## 🛠️ Advanced Troubleshooting
### Model Corruption

If the model appears corrupted:

```bash
# Remove the model
ollama rm wp-llm

# Clear all Ollama data (warning: this deletes every downloaded model)
rm -rf ~/.ollama

# Reinstall Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Download the model again
ollama pull wp-llm
```
### System Compatibility

Check the system requirements:

**Minimum Requirements:**

- **RAM:** 8GB (16GB recommended)
- **Storage:** 10GB free space
- **OS:** macOS 10.15+, Linux (Ubuntu 18.04+), or Windows 10+ with WSL2
- **CPU:** Multi-core processor (4+ cores recommended)

**Recommended Requirements:**

- **RAM:** 16GB+
- **Storage:** SSD with 20GB+ free space
- **CPU:** 8+ cores
- **GPU:** NVIDIA GPU with 4GB+ VRAM (optional)
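A quick way to check where your machine stands against these numbers (Linux; macOS has `sysctl` equivalents):

```bash
# Total RAM in GB
free -g | awk '/^Mem:/{print $2 " GB RAM"}'

# Free disk space on the home partition (where ~/.ollama lives)
df -h ~

# CPU core count
nproc
```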
### Environment Variables

Set environment variables for troubleshooting:

```bash
# Enable debug logging
export OLLAMA_DEBUG=1

# Set a custom model path
export OLLAMA_MODELS=/path/to/models

# Set a custom host
export OLLAMA_HOST=0.0.0.0:11434

# Restart Ollama so the new settings take effect (Linux with systemd)
sudo systemctl restart ollama
```
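Note that variables exported in your shell only affect an Ollama instance you start from that shell (e.g. with `ollama serve`). When Ollama runs as a systemd service, set the variables on the service instead; a sketch following the approach documented for Ollama on Linux:

```bash
# Open an override file for the service
sudo systemctl edit ollama

# In the editor that opens, add:
#   [Service]
#   Environment="OLLAMA_DEBUG=1"

# Then reload systemd and restart the service
sudo systemctl daemon-reload
sudo systemctl restart ollama
```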
### Log Analysis

Analyze the Ollama logs for issues:

```bash
# View recent logs
journalctl -u ollama --since "1 hour ago"

# Follow logs in real time
journalctl -u ollama -f

# Search for specific errors
journalctl -u ollama | grep -i error

# Export logs to a file
journalctl -u ollama > ollama_logs.txt
```
## 📞 Getting Help
### Self-Help Resources

- **Check this guide** - Most common issues are covered here
- **Review the documentation** - Getting Started and Advanced Usage
- **Search existing issues** - Check GitHub issues for similar problems
- **Test with simple prompts** - Verify basic functionality first
### Community Support

- **GitHub Discussions** - Ask questions and share solutions
- **Discord Community** - Real-time help and support
- **Stack Overflow** - Search for WP LLM related questions
- **WordPress Forums** - WordPress-specific integration help
### Professional Support

- **Enterprise Support** - For enterprise customers
- **Consulting Services** - Custom implementation help
- **Training Programs** - Learn advanced techniques
- **Custom Development** - Tailored solutions
### Reporting Issues

When reporting issues, include the following (a script that collects most of this automatically is sketched below):

1. **System Information:**
   - Operating system and version
   - RAM and CPU specifications
   - Ollama version
2. **Error Details:**
   - The exact error message
   - Steps to reproduce
   - Expected vs. actual behavior
3. **Context:**
   - What you were trying to do
   - The prompt used (if applicable)
   - WordPress version and setup
4. **Logs:**
   - Ollama logs
   - System logs
   - Any relevant error output
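A small helper to gather the system and log portions into one file you can attach to a report; a sketch assuming Linux with systemd (the output filename is arbitrary):

```bash
{
  echo "== System =="
  uname -a
  free -h
  ollama --version
  echo "== Models =="
  ollama list
  echo "== Recent Ollama logs =="
  journalctl -u ollama --since "1 hour ago" --no-pager
} > wp-llm-report.txt
```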
## 🔄 Recovery Procedures
### Complete Reset

If all else fails, perform a complete reset:

```bash
# Remove the model (while Ollama is still running)
ollama rm wp-llm

# Stop the Ollama service (Linux with systemd)
sudo systemctl stop ollama

# Clear all Ollama data
rm -rf ~/.ollama

# Reinstall Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Download the model again
ollama pull wp-llm

# Test functionality
ollama run wp-llm "Create a simple custom post type"
```
### Backup and Restore

Back up your configuration:

```bash
# Back up the Ollama configuration and models
cp -r ~/.ollama ~/.ollama_backup

# Back up only the model files (if needed)
cp -r ~/.ollama/models ~/ollama_models_backup

# Restore from backup (remove the current directory first,
# otherwise cp nests the backup inside it)
rm -rf ~/.ollama
cp -r ~/.ollama_backup ~/.ollama
```
### Alternative Installation

Try alternative installation methods:

```bash
# Using Docker
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Using Snap (Ubuntu)
sudo snap install ollama

# Using Homebrew (macOS)
brew install ollama
```
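With the Docker route, the `ollama` CLI lives inside the container, so models are pulled through `docker exec`; a sketch using the container name from the command above:

```bash
# Pull and test the model inside the running container
docker exec -it ollama ollama pull wp-llm
docker exec -it ollama ollama run wp-llm "Hello, test message"
```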
Still having issues? Check the Getting Started Guide for basic setup, or reach out to the community for help!