Project author: shaqian

Project description: Flutter plugin for TensorFlow Lite
Primary language: Objective-C++
Repository: git://github.com/shaqian/flutter_tflite.git
Created: 2018-09-23T23:29:03Z
Project home: https://github.com/shaqian/flutter_tflite

License: MIT License

tflite

A Flutter plugin for accessing the TensorFlow Lite API. Supports image classification, object detection (SSD and YOLO), Pix2Pix, Deeplab, and PoseNet on both iOS and Android.

Breaking changes

Since 1.1.0:

  1. iOS TensorFlow Lite library is upgraded from TensorFlowLite 1.x to TensorFlowLiteObjC 2.x. Changes to native code are denoted with TFLITE2.

Since 1.0.0:

  1. Updated to TensorFlow Lite API v1.12.0.
  2. No longer accepts the inputSize and numChannels parameters; they are now read from the input tensor.
  3. numThreads is moved to Tflite.loadModel.

Installation

Add tflite as a dependency in your pubspec.yaml file.
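
A minimal sketch of the dependency entry (the version shown is an assumption; check pub.dev for the latest release):

    dependencies:
      tflite: ^1.1.2  # version is an assumption; see pub.dev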

Android

In android/app/build.gradle, add the following setting in the android block.

    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }

iOS

Solutions to build errors on iOS:

  • 'vector' file not found

    Open ios/Runner.xcworkspace in Xcode, click Runner > Targets > Runner > Build Settings, search for Compile Sources As, and change the value to Objective-C++.

  • ‘tensorflow/lite/kernels/register.h’ file not found

    The plugin assumes the tensorflow header files are located in path “tensorflow/lite/kernels”.

    However, for early versions of tensorflow the header path is “tensorflow/contrib/lite/kernels”.

    Use CONTRIB_PATH to toggle the path. Uncomment //#define CONTRIB_PATH from here:
    https://github.com/shaqian/flutter_tflite/blob/master/ios/Classes/TflitePlugin.mm#L1

Usage

  1. Create an assets folder and place your label file and model file in it. In pubspec.yaml add:

     assets:
      - assets/labels.txt
      - assets/mobilenet_v1_1.0_224.tflite

  2. Import the library:

     import 'package:tflite/tflite.dart';

  3. Load the model and labels:

     String res = await Tflite.loadModel(
       model: "assets/mobilenet_v1_1.0_224.tflite",
       labels: "assets/labels.txt",
       numThreads: 1,        // defaults to 1
       isAsset: true,        // defaults to true, set to false to load resources outside assets
       useGpuDelegate: false // defaults to false, set to true to use GPU delegate
     );

  4. See the section for the respective model below.

  5. Release resources:

     await Tflite.close();
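
The pieces above combine into a minimal lifecycle sketch (assumes the asset names from step 1; runModelOnImage is covered under Image Classification below):

    Future<void> classifyImage(String filepath) async {
      // Load the model and labels once, before running inference.
      String res = await Tflite.loadModel(
        model: "assets/mobilenet_v1_1.0_224.tflite",
        labels: "assets/labels.txt",
      );
      // Run image classification on a file path.
      var recognitions = await Tflite.runModelOnImage(path: filepath);
      print(recognitions);
      // Release native resources when done.
      await Tflite.close();
    }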

GPU Delegate

When using the GPU delegate, refer to this step for release mode settings to get better performance.
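
The delegate itself is switched on at load time via the useGpuDelegate flag from Usage step 3, for example:

    String res = await Tflite.loadModel(
      model: "assets/mobilenet_v1_1.0_224.tflite",
      labels: "assets/labels.txt",
      useGpuDelegate: true, // off by default
    );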

Image Classification

  • Output format:

    {
      index: 0,
      label: "person",
      confidence: 0.629
    }
  • Run on image:

    var recognitions = await Tflite.runModelOnImage(
      path: filepath,  // required
      imageMean: 0.0,  // defaults to 117.0
      imageStd: 255.0, // defaults to 1.0
      numResults: 2,   // defaults to 5
      threshold: 0.2,  // defaults to 0.1
      asynch: true     // defaults to true
    );
  • Run on binary (an input-preparation sketch follows at the end of this section):

    var recognitions = await Tflite.runModelOnBinary(
      binary: imageToByteListFloat32(image, 224, 127.5, 127.5), // required
      numResults: 6,   // defaults to 5
      threshold: 0.05, // defaults to 0.1
      asynch: true     // defaults to true
    );

    Helpers for converting an img.Image to the input byte list:

    Uint8List imageToByteListFloat32(
        img.Image image, int inputSize, double mean, double std) {
      var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
      var buffer = Float32List.view(convertedBytes.buffer);
      int pixelIndex = 0;
      for (var i = 0; i < inputSize; i++) {
        for (var j = 0; j < inputSize; j++) {
          var pixel = image.getPixel(j, i);
          buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
          buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
          buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
        }
      }
      return convertedBytes.buffer.asUint8List();
    }

    Uint8List imageToByteListUint8(img.Image image, int inputSize) {
      var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
      var buffer = Uint8List.view(convertedBytes.buffer);
      int pixelIndex = 0;
      for (var i = 0; i < inputSize; i++) {
        for (var j = 0; j < inputSize; j++) {
          var pixel = image.getPixel(j, i);
          buffer[pixelIndex++] = img.getRed(pixel);
          buffer[pixelIndex++] = img.getGreen(pixel);
          buffer[pixelIndex++] = img.getBlue(pixel);
        }
      }
      return convertedBytes.buffer.asUint8List();
    }
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

    var recognitions = await Tflite.runModelOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      imageHeight: img.height,
      imageWidth: img.width,
      imageMean: 127.5, // defaults to 127.5
      imageStd: 127.5,  // defaults to 127.5
      rotation: 90,     // defaults to 90, Android only
      numResults: 2,    // defaults to 5
      threshold: 0.1,   // defaults to 0.1
      asynch: true      // defaults to true
    );
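
A sketch of producing the binary input for runModelOnBinary from an image file, assuming package:image for decoding and resizing (any decoder that yields an img.Image works with the helpers above):

    import 'dart:io';
    import 'dart:typed_data';
    import 'package:image/image.dart' as img;

    Future<Uint8List> prepareBinaryInput(String filepath) async {
      // Decode the file and resize it to the model's input size (224 here).
      var raw = img.decodeImage(await File(filepath).readAsBytes())!;
      var resized = img.copyResize(raw, width: 224, height: 224);
      // Convert with the helper above using the model's mean/std.
      return imageToByteListFloat32(resized, 224, 127.5, 127.5);
    }

And a sketch of feeding camera frames to runModelOnFrame (controller is an assumed, already-initialized CameraController from the camera plugin; the isDetecting flag keeps runs from overlapping):

    bool isDetecting = false;
    controller.startImageStream((CameraImage img) async {
      if (isDetecting) return; // drop frames while a run is in flight
      isDetecting = true;
      var recognitions = await Tflite.runModelOnFrame(
        bytesList: img.planes.map((plane) => plane.bytes).toList(),
        imageHeight: img.height,
        imageWidth: img.width,
      );
      print(recognitions);
      isDetecting = false;
    });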

Object Detection

  • Output format:

x, y, w, h are between [0, 1]. You can scale x, w by the width and y, h by the height of the image.

    {
      detectedClass: "hot dog",
      confidenceInClass: 0.123,
      rect: {
        x: 0.15,
        y: 0.33,
        w: 0.80,
        h: 0.27
      }
    }
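
A sketch of the scaling described above, mapping a normalized rect to pixel coordinates (recognition is one element of the returned list; imageWidth and imageHeight are the displayed image dimensions, an assumption):

    var rect = recognition["rect"];
    double left = rect["x"] * imageWidth;
    double top = rect["y"] * imageHeight;
    double width = rect["w"] * imageWidth;
    double height = rect["h"] * imageHeight;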

SSD MobileNet:

  • Run on image:
    var recognitions = await Tflite.detectObjectOnImage(
      path: filepath, // required
      model: "SSDMobileNet",
      imageMean: 127.5,
      imageStd: 127.5,
      threshold: 0.4,        // defaults to 0.1
      numResultsPerClass: 2, // defaults to 5
      asynch: true           // defaults to true
    );
  • Run on binary:
    var recognitions = await Tflite.detectObjectOnBinary(
      binary: imageToByteListUint8(resizedImage, 300), // required
      model: "SSDMobileNet",
      threshold: 0.4,        // defaults to 0.1
      numResultsPerClass: 2, // defaults to 5
      asynch: true           // defaults to true
    );
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

    var recognitions = await Tflite.detectObjectOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      model: "SSDMobileNet",
      imageHeight: img.height,
      imageWidth: img.width,
      imageMean: 127.5, // defaults to 127.5
      imageStd: 127.5,  // defaults to 127.5
      rotation: 90,     // defaults to 90, Android only
      numResults: 2,    // defaults to 5
      threshold: 0.1,   // defaults to 0.1
      asynch: true      // defaults to true
    );
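
A sketch of rendering one detection as a box over the image, combining the output format above with the scaling shown earlier (Positioned, Container, and Text are standard Flutter; place the result inside a Stack sized to the displayed image, and screenW/screenH are assumptions):

    import 'package:flutter/material.dart';

    Widget buildBox(dynamic re, double screenW, double screenH) {
      return Positioned(
        left: re["rect"]["x"] * screenW,
        top: re["rect"]["y"] * screenH,
        width: re["rect"]["w"] * screenW,
        height: re["rect"]["h"] * screenH,
        child: Container(
          decoration: BoxDecoration(
            border: Border.all(color: Colors.red, width: 2),
          ),
          child: Text("${re["detectedClass"]} ${re["confidenceInClass"]}"),
        ),
      );
    }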

Tiny YOLOv2:

  • Run on image:
    var recognitions = await Tflite.detectObjectOnImage(
      path: filepath, // required
      model: "YOLO",
      imageMean: 0.0,
      imageStd: 255.0,
      threshold: 0.3,        // defaults to 0.1
      numResultsPerClass: 2, // defaults to 5
      anchors: anchors,      // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
      blockSize: 32,         // defaults to 32
      numBoxesPerBlock: 5,   // defaults to 5
      asynch: true           // defaults to true
    );
  • Run on binary:
    var recognitions = await Tflite.detectObjectOnBinary(
      binary: imageToByteListFloat32(resizedImage, 416, 0.0, 255.0), // required
      model: "YOLO",
      threshold: 0.3,        // defaults to 0.1
      numResultsPerClass: 2, // defaults to 5
      anchors: anchors,      // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
      blockSize: 32,         // defaults to 32
      numBoxesPerBlock: 5,   // defaults to 5
      asynch: true           // defaults to true
    );
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

    var recognitions = await Tflite.detectObjectOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      model: "YOLO",
      imageHeight: img.height,
      imageWidth: img.width,
      imageMean: 0.0,        // defaults to 127.5
      imageStd: 255.0,       // defaults to 127.5
      numResults: 2,         // defaults to 5
      threshold: 0.1,        // defaults to 0.1
      numResultsPerClass: 2, // defaults to 5
      anchors: anchors,      // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
      blockSize: 32,         // defaults to 32
      numBoxesPerBlock: 5,   // defaults to 5
      asynch: true           // defaults to true
    );
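
The anchors parameter above takes the YOLO anchor boxes as a flat list of width/height pairs; a sketch of defining it with the default values noted in the comments:

    var anchors = [
      0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
      5.47434, 7.88282, 3.52778, 9.77052, 9.16828
    ];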

Pix2Pix

Thanks to RP from Green Appers

  • Output format:

    The output of Pix2Pix inference is of type Uint8List. Depending on the outputType used, the output is:

    • (if outputType is png) byte array of a png image (see the display sketch at the end of this section)

    • (otherwise) byte array of the raw output

  • Run on image:

    var result = await Tflite.runPix2PixOnImage(
      path: filepath,  // required
      imageMean: 0.0,  // defaults to 0.0
      imageStd: 255.0, // defaults to 255.0
      asynch: true     // defaults to true
    );

  • Run on binary:

    var result = await Tflite.runPix2PixOnBinary(
      binary: binary, // required
      asynch: true    // defaults to true
    );

  • Run on image stream (video frame):

    var result = await Tflite.runPix2PixOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      imageHeight: img.height, // defaults to 1280
      imageWidth: img.width,   // defaults to 720
      imageMean: 127.5,        // defaults to 0.0
      imageStd: 127.5,         // defaults to 255.0
      rotation: 90,            // defaults to 90, Android only
      asynch: true             // defaults to true
    );
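
A sketch of displaying a png result in Flutter (Image.memory is standard Flutter; result is the Uint8List returned above):

    import 'dart:typed_data';
    import 'package:flutter/material.dart';

    Widget buildPreview(Uint8List result) {
      // Decode and render the png bytes directly.
      return Image.memory(result);
    }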

Deeplab

Thanks to RP from see— for Android implementation.

  • Output format:

    The output of Deeplab inference is of type Uint8List. Depending on the outputType used, the output is:

    • (if outputType is png) byte array of a png image (see the overlay sketch at the end of this section)

    • (otherwise) byte array of r, g, b, a values of the pixels

  • Run on image:

    var result = await Tflite.runSegmentationOnImage(
      path: filepath,     // required
      imageMean: 0.0,     // defaults to 0.0
      imageStd: 255.0,    // defaults to 255.0
      labelColors: [...], // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
      outputType: "png",  // defaults to "png"
      asynch: true        // defaults to true
    );

  • Run on binary:

    var result = await Tflite.runSegmentationOnBinary(
      binary: binary,     // required
      labelColors: [...], // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
      outputType: "png",  // defaults to "png"
      asynch: true        // defaults to true
    );

  • Run on image stream (video frame):

    var result = await Tflite.runSegmentationOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      imageHeight: img.height, // defaults to 1280
      imageWidth: img.width,   // defaults to 720
      imageMean: 127.5,        // defaults to 0.0
      imageStd: 127.5,         // defaults to 255.0
      rotation: 90,            // defaults to 90, Android only
      labelColors: [...],      // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
      outputType: "png",       // defaults to "png"
      asynch: true             // defaults to true
    );
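
A sketch of overlaying a png segmentation mask on the source image (Stack and Opacity are standard Flutter; the file path, opacity value, and matching image sizes are assumptions):

    import 'dart:io';
    import 'dart:typed_data';
    import 'package:flutter/material.dart';

    Widget buildOverlay(String filepath, Uint8List result) {
      return Stack(children: [
        Image.file(File(filepath)),                         // original image
        Opacity(opacity: 0.5, child: Image.memory(result)), // mask on top
      ]);
    }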

PoseNet

The model is from a StackOverflow thread.

  • Output format:

x, y are between [0, 1]. You can scale x by the width and y by the height of the image.

    [ // array of poses/persons
      { // pose #1
        score: 0.6324902,
        keypoints: {
          0: {
            x: 0.250,
            y: 0.125,
            part: nose,
            score: 0.9971070
          },
          1: {
            x: 0.230,
            y: 0.105,
            part: leftEye,
            score: 0.9978438
          }
          ......
        }
      },
      { // pose #2
        score: 0.32534285,
        keypoints: {
          0: {
            x: 0.402,
            y: 0.538,
            part: nose,
            score: 0.8798978
          },
          1: {
            x: 0.380,
            y: 0.513,
            part: leftEye,
            score: 0.7090239
          }
          ......
        }
      },
      ......
    ]
  • Run on image:

    var result = await Tflite.runPoseNetOnImage(
      path: filepath,   // required
      imageMean: 125.0, // defaults to 125.0
      imageStd: 125.0,  // defaults to 125.0
      numResults: 2,    // defaults to 5
      threshold: 0.7,   // defaults to 0.5
      nmsRadius: 10,    // defaults to 20
      asynch: true      // defaults to true
    );

  • Run on binary:

    var result = await Tflite.runPoseNetOnBinary(
      binary: binary, // required
      numResults: 2,  // defaults to 5
      threshold: 0.7, // defaults to 0.5
      nmsRadius: 10,  // defaults to 20
      asynch: true    // defaults to true
    );

  • Run on image stream (video frame):

    var result = await Tflite.runPoseNetOnFrame(
      bytesList: img.planes.map((plane) {return plane.bytes;}).toList(), // required
      imageHeight: img.height, // defaults to 1280
      imageWidth: img.width,   // defaults to 720
      imageMean: 125.0,        // defaults to 125.0
      imageStd: 125.0,         // defaults to 125.0
      rotation: 90,            // defaults to 90, Android only
      numResults: 2,           // defaults to 5
      threshold: 0.7,          // defaults to 0.5
      nmsRadius: 10,           // defaults to 20
      asynch: true             // defaults to true
    );
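
A sketch of walking the output format above and scaling keypoints to pixels (imageWidth and imageHeight are the source image dimensions, an assumption):

    for (var pose in result) {
      print("pose score: ${pose["score"]}");
      pose["keypoints"].forEach((index, keypoint) {
        // x and y are normalized to [0, 1]; scale to pixel coordinates.
        double x = keypoint["x"] * imageWidth;
        double y = keypoint["y"] * imageHeight;
        print("${keypoint["part"]}: ($x, $y), score ${keypoint["score"]}");
      });
    }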

Example

Prediction in Static Images

Refer to the example.

Real-time detection

Refer to flutter_realtime_detection.

Run test cases

    flutter test test/tflite_test.dart