Project author: LaurentMazare

Project description: OCaml bindings for TensorFlow Eager mode
Language: OCaml
Repository: git://github.com/LaurentMazare/ocaml-tensorflow-eager.git
Created: 2017-10-02T20:41:22Z
Project community: https://github.com/LaurentMazare/ocaml-tensorflow-eager

License: Apache License 2.0

Experimental OCaml bindings for TensorFlow Eager execution.

These bindings are pretty much out of date. Some bindings for PyTorch
can be found in the ocaml-torch repo.

When using TensorFlow Eager execution, operations are executed immediately, in the
same way as in PyTorch, rather than being staged into a static graph first: the
computation graph is dynamic.
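
Because each operation runs immediately, ordinary OCaml control flow can depend on
values computed so far. Below is a minimal sketch of this; it reuses only O.f32,
the ( + ) operator, O.to_float, and O.print from the examples that follow, and
assumes O.to_float extracts the value of a scalar tensor as in the MNIST example:

  module O = Tf_ops.Ops

  let () =
    (* Start from 1 and keep doubling.  The current value is available
       immediately, so it can drive a plain OCaml [while] loop. *)
    let x = ref (O.f32 1.) in
    while O.to_float !x < 100. do
      let v = !x in
      x := O.(v + v)
    done;
    O.print !x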

Examples

Hello World!

A very simple example performing an addition using TensorFlow can be seen below:

  module O = Tf_ops.Ops

  let () =
    let twenty_one = O.f32 21. in
    let forty_two = O.(twenty_one + twenty_one) in
    O.print forty_two
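
Running this example should print a tensor holding the value 42.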

Linear Model for MNIST

In this example, we show how to build a linear classifier for the MNIST
dataset. This requires you to download the MNIST data files.

Gradients are computed using Gradients.compute. This function returns the
gradients with respect to all watched tensors. To watch a variable, use
Var.read_and_watch, as in the example below.

Once the gradients have been computed, Gradients.apply_sgd_step is used
to update the variables via a gradient descent step.
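
Before the full example, here is a minimal sketch of this compute/apply cycle on a
single variable. It relies only on the functions shown in the MNIST example below
(Var.f32, Var.read_and_watch, Gradients.compute, Gradients.apply_sgd_step), plus
elementwise ( - ) and ( * ) operators that are assumed to exist alongside the
( + ) used earlier; it fits w so as to minimize (w - 4)^2.

  open Tf_ops
  module O = Ops

  let () =
    (* A single scalar variable, initialized to 0. *)
    let w = Var.f32 [ 1 ] 0. in
    for _step = 1 to 100 do
      (* [read_and_watch] marks [w] so that gradients wrt it are recorded. *)
      let w_read = Var.read_and_watch w in
      (* Loss (w - 4)^2; the [-] and [*] tensor operators are assumptions. *)
      let loss = O.((w_read - f32 4.) * (w_read - f32 4.)) in
      (* Gradients wrt all watched tensors, then one SGD update. *)
      let gradients = Gradients.compute loss in
      Gradients.apply_sgd_step gradients ~learning_rate:0.1
    done;
    (* After training, [w] should be close to 4. *)
    O.print (Var.read_and_watch w)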

  open Base
  open Tf_ops
  module O = Ops

  (* This should reach ~92% accuracy. *)
  let image_dim = Helper.image_dim
  let label_count = Helper.label_count
  let training_steps = 500
  let batch_size = 512

  let () =
    let mnist_dataset = Helper.read_files "data" in
    let test_images = Helper.test_images mnist_dataset in
    let test_labels = Helper.test_labels mnist_dataset in
    (* Create the variables for the linear model. *)
    let w = Var.f32 [ image_dim; label_count ] 0. in
    let b = Var.f32 [ label_count ] 0. in
    (* Build the model. [read_and_watch] returns the current value of a variable
       and ensures that gradients wrt this variable will be computed. *)
    let model xs =
      let w_read = Var.read_and_watch w in
      let b_read = Var.read_and_watch b in
      O.(xs *^ w_read + b_read) |> O.softmax
    in
    for step_index = 1 to training_steps do
      (* Every so often, print the accuracy on the test dataset. *)
      if step_index % 50 = 0 then begin
        let accuracy =
          O.(equal (arg_max (model test_images)) (arg_max test_labels))
          |> O.cast ~type_dstT:Float
          |> O.reduce_mean
          |> O.to_float
        in
        Stdio.printf "step index %d, accuracy %.2f%%\n%!" step_index (100. *. accuracy)
      end;
      (* Get a training batch, apply the model and compute the loss. *)
      let train_images, train_labels =
        Helper.train_batch mnist_dataset ~batch_size ~batch_idx:step_index
      in
      let ys = model train_images in
      let cross_entropy = O.cross_entropy ~ys:train_labels ~y_hats:ys `mean in
      (* Compute the loss gradients and apply gradient descent to minimize it. *)
      let gradients = Tf_ops.Gradients.compute cross_entropy in
      Tf_ops.Gradients.apply_sgd_step gradients ~learning_rate:8.
    done

Installation

In order to build this on Linux, download the TensorFlow 1.7.0 binaries. If the archive is unpacked at $TFPATH, compilation can be done via:

  LD_LIBRARY_PATH=$TFPATH/lib:$LD_LIBRARY_PATH LIBRARY_PATH=$TFPATH/lib:$LIBRARY_PATH make all

For the VGG-19 example, the weights are available here.