Track your rate limit usage to avoid hitting limits:
```python
import requests
from datetime import datetime, timedelta

class RateLimitTracker:
    def __init__(self, limit: int, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds
        self.requests = []

    def can_make_request(self) -> bool:
        """Check if we can make a request without hitting the rate limit."""
        now = datetime.now()
        window_start = now - timedelta(seconds=self.window_seconds)
        # Remove old requests outside the window
        self.requests = [req_time for req_time in self.requests if req_time > window_start]
        return len(self.requests) < self.limit

    def record_request(self):
        """Record a new request."""
        self.requests.append(datetime.now())

    def requests_remaining(self) -> int:
        """Get the number of requests remaining in the current window."""
        now = datetime.now()
        window_start = now - timedelta(seconds=self.window_seconds)
        self.requests = [req_time for req_time in self.requests if req_time > window_start]
        return self.limit - len(self.requests)

# Usage for FKAPI (100/hour)
tracker = RateLimitTracker(limit=100, window_seconds=3600)
if tracker.can_make_request():
    response = requests.get(url, headers=headers)
    tracker.record_request()
    print(f"Requests remaining: {tracker.requests_remaining()}")
else:
    print("Rate limit reached. Wait before making more requests.")
```
Anonymous users are limited to 20 requests/hour on DRF endpoints. Authenticate to get 100 requests/hour.
```bash
# Without authentication: 20 req/hour
curl -X GET https://your-domain.com/api/users/

# With authentication: 100 req/hour
curl -X GET https://your-domain.com/api/users/ \
  -H "Authorization: Token your-token"
```
Implement request caching
Cache API responses to reduce the number of requests:
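One simple approach is an in-memory cache with a time-to-live (TTL), so repeated lookups within a few minutes never consume a request. The `TTLCache` class and `cached_get` helper below are illustrative names, not part of the API — a minimal sketch, assuming responses are safe to reuse for the TTL you choose:

```python
import time

class TTLCache:
    """Cache values for a fixed time-to-live (TTL) in seconds."""

    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.time() - entry[0] < self.ttl:
            return entry[1]  # fresh hit: no API request needed
        return None  # miss or expired

    def set(self, key, value):
        self._store[key] = (time.time(), value)


cache = TTLCache(ttl_seconds=300)

def cached_get(url, fetch):
    """Return a cached response if still fresh, otherwise call fetch(url) once."""
    data = cache.get(url)
    if data is None:
        data = fetch(url)  # e.g. lambda u: requests.get(u, headers=headers).json()
        cache.set(url, data)
    return data
```

Pick a TTL that matches how quickly the underlying data changes; stale-but-cheap is usually the right trade-off for list and search endpoints.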
Batch requests where supported
Instead of making many individual requests, use broader search or batch operations where the API supports them:
```python
# Instead of multiple requests
for club_id in club_ids:
    get_club(club_id)  # 100 requests for 100 clubs

# Use search with broader queries when possible
results = search_clubs(keyword="premier league")  # 1 request
```
Handle 429 errors gracefully
Always implement retry logic with exponential backoff:
```python
import time
import requests

max_retries = 5
for attempt in range(max_retries):
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        break
    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 429:
            # Honor Retry-After if the server sends it; otherwise back off exponentially
            retry_after = int(e.response.headers.get('Retry-After', 2 ** attempt))
            print(f"Rate limited. Retrying in {retry_after} seconds")
            time.sleep(retry_after)
        else:
            raise
```
Monitor your usage
Track API usage to avoid unexpected rate limiting:
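A minimal sketch of an in-process usage counter: `UsageMonitor` is a hypothetical helper, not part of the API. It counts requests per endpoint, logs a periodic summary, and flags any 429 responses — with DRF throttling, the 429 status (and its `Retry-After` header) is the reliable signal that you've hit a limit:

```python
import logging
from collections import Counter
from datetime import datetime

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api_usage")

class UsageMonitor:
    """Count requests per endpoint and log a periodic usage summary."""

    def __init__(self, log_every: int = 10):
        self.counts = Counter()  # endpoint -> request count
        self.total = 0
        self.log_every = log_every

    def record(self, endpoint: str, status_code: int):
        self.counts[endpoint] += 1
        self.total += 1
        if status_code == 429:
            logger.warning("Throttled on %s at %s", endpoint, datetime.now())
        if self.total % self.log_every == 0:
            logger.info("Usage so far: %s", dict(self.counts))


monitor = UsageMonitor(log_every=5)
# After each API call:
# monitor.record("/api/users/", response.status_code)
```

Feeding these counts into whatever metrics system you already run (even a log aggregator) makes it easy to spot when you're trending toward a limit before requests start failing.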