Hugging Face Integration Guide

This guide will walk you through integrating Hugging Face with APIpie, enabling you to use thousands of open-source AI models for various tasks through a unified interface.
What is Hugging Face?
Hugging Face is a leading platform for the AI community, providing:
- Open-Source Models: Access to thousands of pre-trained models for various tasks
- Inference API: A unified interface to run inference across multiple models
- Model Hub: A repository of models, datasets, and spaces for the community
- Transformers: A popular library for natural language processing (NLP) tasks
- Datasets: A collection of ready-to-use datasets for AI training
- Spaces: Interactive demos of AI applications
By connecting Hugging Face with APIpie, you can access a wide range of models through a consistent API, while benefiting from APIpie's additional features like model routing and failover.
Integration Steps
1. Create an APIpie Account
- Register here: APIpie Registration
- Complete the sign-up process.
2. Add Credit
- Add Credit: APIpie Subscription
- Add credits to your account to enable API access.
3. Generate an API Key
- API Key Management: APIpie API Keys
- Create a new API key for use with Hugging Face.
4. Install Required Libraries
For Python:

```bash
pip install huggingface_hub openai
```

For JavaScript/TypeScript:

```bash
npm install @huggingface/inference openai
# or
yarn add @huggingface/inference openai
# or
pnpm add @huggingface/inference openai
```
5. Configure Hugging Face with APIpie
There are two main ways to integrate Hugging Face with APIpie:
A. Using Hugging Face's InferenceClient with APIpie as a Provider
```python
from huggingface_hub import InferenceClient

# Initialize the client with APIpie as the provider
client = InferenceClient(
    provider="openai",
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)
```
B. Using OpenAI SDK with APIpie (Recommended)
```python
import os
import openai

# Configure the OpenAI SDK to use APIpie
os.environ["OPENAI_API_KEY"] = "your-apipie-api-key"
os.environ["OPENAI_BASE_URL"] = "https://apipie.ai/v1"

# Create a client with the APIpie configuration
client = openai.OpenAI()
```
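Once the client is configured, a quick way to verify the connection is to list the models exposed through the endpoint. This is a minimal sanity check, assuming APIpie serves the standard OpenAI-compatible `/v1/models` route:

```python
# List the models available through the endpoint (assumes the standard
# OpenAI-compatible /v1/models route is available)
models = client.models.list()
for model in models.data:
    print(model.id)
```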
Key Features
- Thousands of Models: Access to a vast range of pre-trained models
- Multi-Modal Support: Models for text, images, audio, and more
- Unified API: Consistent interface for different types of models
- Open Source: Many models are open-source and can be deployed anywhere
- Community Driven: Access to models developed by the AI community
- Inference Options: Run inference in the cloud or deploy locally
Example Workflows
| Application Type | What Hugging Face Helps You Build |
|---|---|
| Natural Language Processing | Text generation, translation, summarization, and more |
| Computer Vision | Image classification, object detection, image segmentation |
| Audio Processing | Speech recognition, audio classification, text-to-speech |
| Multimodal Applications | Vision-language tasks, document understanding |
| Specialized Models | Domain-specific models like biomedical or financial NLP |
Using Hugging Face with APIpie
Text Generation with Chat Completion
```python
from huggingface_hub import InferenceClient

# Initialize client with APIpie
client = InferenceClient(
    provider="openai",
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Use the chat completion API with Hugging Face models through APIpie
response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # Hugging Face model
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```
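The same chat endpoint can also stream tokens as they are generated. The sketch below uses the OpenAI SDK configuration from earlier and assumes APIpie supports OpenAI-style streaming responses:

```python
import openai

# Reuse the APIpie configuration with the OpenAI SDK
client = openai.OpenAI(
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Stream the response chunk by chunk (assumes OpenAI-style streaming is supported)
stream = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Write a haiku about mountains."}],
    max_tokens=100,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```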
JavaScript Example with Chat Completion
```javascript
import OpenAI from "openai";

// Initialize the OpenAI SDK with the APIpie configuration
const client = new OpenAI({
  apiKey: "your-apipie-api-key",
  baseURL: "https://apipie.ai/v1",
});

// Chat completion with a Hugging Face model through APIpie
const chatCompletion = await client.chat.completions.create({
  model: "meta-llama/Meta-Llama-3-8B-Instruct",
  messages: [
    { role: "user", content: "What is the capital of France?" }
  ],
  max_tokens: 100,
});

console.log(chatCompletion.choices[0].message.content);
```
Text-to-Image Generation
```python
import base64
import io

import openai
from PIL import Image

# Configure the OpenAI client to use APIpie
client = openai.OpenAI(
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Generate an image using a Hugging Face model through APIpie
response = client.images.generate(
    model="stabilityai/stable-diffusion-2-1",  # Hugging Face model
    prompt="A serene landscape with mountains and a lake at sunset",
    n=1,
    size="1024x1024",
)

# Process the result
image_url = response.data[0].url

# Or, if the response contains a base64-encoded image:
# image_data = base64.b64decode(response.data[0].b64_json)
# image = Image.open(io.BytesIO(image_data))
# image.save("generated_image.png")
```
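If the response returns a hosted URL rather than base64 data, the image can be downloaded with any HTTP client before saving it locally. A minimal sketch, assuming the `requests` library is installed:

```python
import requests

# Fetch the generated image from the returned URL and save it to disk
image_response = requests.get(image_url)
image_response.raise_for_status()
with open("generated_image.png", "wb") as f:
    f.write(image_response.content)
```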
Audio Transcription
```python
import openai

# Configure the client to use APIpie
client = openai.OpenAI(
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Transcribe audio using a Hugging Face model
with open("audio_sample.mp3", "rb") as audio_file:
    response = client.audio.transcriptions.create(
        model="facebook/wav2vec2-large-960h-lv60-self",  # Hugging Face model
        file=audio_file,
        response_format="text",
    )

print(response)
```
Embeddings Generation
```python
import openai

# Configure the client to use APIpie
client = openai.OpenAI(
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Generate embeddings with a Hugging Face model through APIpie
response = client.embeddings.create(
    model="sentence-transformers/all-MiniLM-L6-v2",  # Hugging Face model
    input="The food was delicious and the service was excellent.",
)

print(response.data[0].embedding)
```
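A common next step with embeddings is comparing two texts by cosine similarity. The snippet below is purely illustrative: it reuses the client configured above, assumes `numpy` is installed, and the example sentences are hypothetical.

```python
import numpy as np

# Embed two sentences with the same model
result = client.embeddings.create(
    model="sentence-transformers/all-MiniLM-L6-v2",
    input=[
        "The food was delicious and the service was excellent.",
        "The meal was tasty and the staff were friendly.",
    ],
)

a = np.array(result.data[0].embedding)
b = np.array(result.data[1].embedding)

# Cosine similarity: values closer to 1.0 indicate more similar meaning
similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {similarity:.3f}")
```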
Multi-Modal Tasks
```python
import base64

import openai

# Configure the client to use APIpie
client = openai.OpenAI(
    api_key="your-apipie-api-key",
    base_url="https://apipie.ai/v1",
)

# Read and base64-encode the image file
with open("image.jpg", "rb") as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

# Perform visual question answering using a Hugging Face model
response = client.chat.completions.create(
    model="llava-hf/llava-1.5-7b-hf",  # Hugging Face model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
                },
            ],
        }
    ],
    max_tokens=300,
)

print(response.choices[0].message.content)
```
Troubleshooting & FAQ
- Which Hugging Face models are supported?
  APIpie supports a wide range of Hugging Face models. The specific models available depend on your APIpie subscription tier.
- How do I handle environment variables securely?
  Store your API keys in environment variables or a secrets-management tool, and never commit API keys to repositories (see the sketch after this list).
- Can I use my own Hugging Face models with APIpie?
  Yes. If your model is hosted on the Hugging Face Hub and supported by APIpie, you can use it through the integration. For private models, you may need to provide additional authentication.
- How do I handle rate limits?
  APIpie enforces its own rate limits based on your subscription tier. Check the APIpie documentation for the specific limits and how to handle them.
- What if a Hugging Face model is not available through APIpie?
  You can request support for specific models through APIpie's support channels, or call the Hugging Face Inference API directly for models not available through APIpie.
- Is there a difference in latency when using Hugging Face through APIpie?
  Routing through APIpie can add a small amount of overhead, but this is often offset by APIpie's routing and caching capabilities, especially for frequently repeated requests.
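As noted in the FAQ above, API keys are best read from the environment rather than hardcoded. A minimal sketch, assuming the key is exported as `APIPIE_API_KEY` (the variable name is illustrative):

```python
import os

import openai

# Read the APIpie key from the environment; APIPIE_API_KEY is an
# illustrative name, use whatever variable your deployment defines
client = openai.OpenAI(
    api_key=os.environ["APIPIE_API_KEY"],
    base_url="https://apipie.ai/v1",
)
```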
For more information, see the Python Inference Client documentation or the JavaScript Inference Client documentation.
Support
If you encounter any issues during the integration process, please reach out on APIpie Discord for assistance.