Project author: intel

Project description:
Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression technologies, such as low precision, sparsity, pruning, and knowledge distillation, across different deep learning frameworks to pursue the best inference performance.
Programming language: Python
Project URL: git://github.com/intel/neural-compressor.git
Created: 2020-07-21T23:49:56Z
Project community: https://github.com/intel/neural-compressor

License: Apache License 2.0



Intel® Neural Compressor

An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)

Architecture | Workflow | LLMs Recipes | Results | Documentation


Intel® Neural Compressor aims to provide popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, and ONNX Runtime,
as well as Intel extensions such as Intel Extension for TensorFlow and Intel Extension for PyTorch.
In particular, the tool provides key features, typical examples, and open collaborations, as outlined below:

What’s New

  • [2024/10] Transformers-like API for INT4 inference on Intel CPU and GPU (see the sketch after this list).
  • [2024/07] Starting with the 3.0 release, the framework extension API is recommended for quantization.
  • [2024/07] Performance optimizations and usability improvements on the client side.
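
A minimal sketch of the Transformers-like INT4 flow mentioned above. The module path and class names (neural_compressor.transformers, AutoModelForCausalLM, RtnConfig) are assumptions based on the Transformers-like APIs overview linked under Documentation and should be verified there:

  # Hedged sketch: INT4 weight-only quantization through the Transformers-like API.
  # Assumption: neural_compressor.transformers exposes AutoModelForCausalLM and RtnConfig.
  from transformers import AutoTokenizer
  from neural_compressor.transformers import AutoModelForCausalLM, RtnConfig

  model_name = "facebook/opt-125m"  # small model chosen only to keep the example light
  woq_config = RtnConfig(bits=4)    # 4-bit round-to-nearest weight-only quantization
  model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=woq_config)

  tokenizer = AutoTokenizer.from_pretrained(model_name)
  inputs = tokenizer("Once upon a time", return_tensors="pt")
  print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))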

Installation

Choose and install the framework dependencies required by your deployment environment.

Install Framework

Install Neural Compressor from PyPI

  # Install 2.X API + Framework extension API + PyTorch dependency
  pip install neural-compressor[pt]
  # Install 2.X API + Framework extension API + TensorFlow dependency
  pip install neural-compressor[tf]

Note: Further installation methods can be found in the Installation Guide. Check out our FAQ for more details.
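
As a quick sanity check after installation (a minimal sketch; it assumes the package exposes the conventional __version__ attribute), the import below should succeed and print the installed version:

  # Verify the installation from Python.
  import neural_compressor
  print(neural_compressor.__version__)  # assumption: __version__ follows the usual packaging convention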

Getting Started

After successfully installing these packages, try your first quantization program. The following example demonstrates FP8 quantization, which is supported by the Intel Gaudi2 AI Accelerator.
To try it on Intel Gaudi2, a Docker image with the Gaudi Software Stack is recommended; please refer to the following script for environment setup. More details can be found in the Gaudi Guide.

Run a container with an interactive shell (more info):

  docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.20.0/ubuntu24.04/habanalabs/pytorch-installer-2.6.0:latest

Run the example:

  from neural_compressor.torch.quantization import (
      FP8Config,
      prepare,
      convert,
  )
  import torch
  import torchvision.models as models

  model = models.resnet18()
  qconfig = FP8Config(fp8_config="E4M3")
  model = prepare(model, qconfig)
  # Customer-defined calibration. Below is a dummy calibration.
  model(torch.randn(1, 3, 224, 224).to("hpu"))
  model = convert(model)
  output = model(torch.randn(1, 3, 224, 224).to("hpu")).to("cpu")
  print(output.shape)

See the FP8 quantization documentation for more details.

The following example demonstrates loading a weight-only quantized large language model on the Intel Gaudi2 AI Accelerator.

  import torch
  from neural_compressor.torch.quantization import load

  model_name = "TheBloke/Llama-2-7B-GPTQ"
  model = load(
      model_name_or_path=model_name,
      format="huggingface",
      device="hpu",
      torch_dtype=torch.bfloat16,
  )

Note: Intel Neural Compressor converts the model from the auto-gptq format to the HPU format on the first load and saves hpu_model.safetensors to the local cache directory for subsequent loads, so the first load may take a while.
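
Once loaded, the model can be used like a regular causal language model. A minimal follow-up sketch, assuming the object returned by load() supports the standard Hugging Face generate() method and that the Gaudi PyTorch bridge is available; the tokenizer usage is plain Hugging Face Transformers:

  # Continues from the loading example above: model and model_name are defined there.
  # Assumption: the loaded model behaves like a standard Hugging Face causal LM on "hpu".
  import torch
  from transformers import AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained(model_name)
  inputs = tokenizer("Explain quantization in one sentence.", return_tensors="pt").to("hpu")
  with torch.no_grad():
      outputs = model.generate(**inputs, max_new_tokens=32)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))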

Documentation

Overview
  Architecture | Workflow | APIs | LLMs Recipes | Examples
PyTorch Extension APIs
  Overview | Dynamic Quantization | Static Quantization | Smooth Quantization | Weight-Only Quantization | FP8 Quantization | MX Quantization | Mixed Precision
TensorFlow Extension APIs
  Overview | Static Quantization | Smooth Quantization
Transformers-like APIs
  Overview
Other Modules
  Auto Tune | Benchmark
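
The PyTorch extension APIs listed above follow the same prepare/convert flow shown in Getting Started. A minimal weight-only (RTN) sketch; RTNConfig and its bits parameter are assumptions modeled on FP8Config and should be checked against the Weight-Only Quantization page:

  # Hedged sketch: INT4 round-to-nearest weight-only quantization on CPU.
  # Assumption: RTNConfig lives in neural_compressor.torch.quantization next to FP8Config.
  import torch
  from neural_compressor.torch.quantization import RTNConfig, prepare, convert

  model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 8))
  quant_config = RTNConfig(bits=4)  # 4-bit weight-only settings (illustrative)
  model = prepare(model, quant_config)
  model = convert(model)  # RTN needs no calibration data
  print(model(torch.randn(2, 64)).shape)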

Note:
Starting from the 3.0 release, we recommend using the 3.X API. Training-time compression techniques such as QAT, pruning, and distillation are currently only available in the 2.X API (a pruning sketch follows this note).
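
For reference, a minimal 2.X-API pruning sketch. The import paths, WeightPruningConfig arguments, and callback hooks are taken from the 2.X pruning documentation as assumptions and should be verified there:

  # Hedged sketch: training-time pruning with the 2.X API.
  # Assumption: prepare_compression and WeightPruningConfig are importable as below.
  import torch
  from torch.utils.data import DataLoader, TensorDataset
  from neural_compressor.config import WeightPruningConfig
  from neural_compressor.training import prepare_compression

  model = torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
  optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
  criterion = torch.nn.CrossEntropyLoss()
  loader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))), batch_size=8)

  config = WeightPruningConfig(target_sparsity=0.8, start_step=0, end_step=7)  # illustrative schedule
  compression_manager = prepare_compression(model, config)
  compression_manager.callbacks.on_train_begin()
  for step, (x, y) in enumerate(loader):
      compression_manager.callbacks.on_step_begin(step)
      loss = criterion(model(x), y)
      loss.backward()
      compression_manager.callbacks.on_before_optimizer_step()
      optimizer.step()
      compression_manager.callbacks.on_after_optimizer_step()
      optimizer.zero_grad()
      compression_manager.callbacks.on_step_end()
  compression_manager.callbacks.on_train_end()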

Selected Publications/Events

Note:
View Full Publication List.

Additional Content

Communication

  • GitHub Issues: mainly for bug reports, new feature requests, questions, etc.
  • Email: you are welcome to raise interesting research ideas on model compression techniques by email for collaboration.
  • Discord Channel: join the Discord channel for more flexible technical discussions.
  • WeChat group: scan the QR code to join the technical discussion.