I use something like this:
```swift
let width = view.bounds.width
let height = width * 16 / 9                  // portrait 16:9 video area
let offsetY = (view.bounds.height - height) / 2

// Scale the normalized (0...1) bounding box up to view coordinates.
let scale = CGAffineTransform.identity.scaledBy(x: width, y: height)

// Flip the y-axis (Vision's origin is bottom-left, UIKit's is top-left)
// and shift by the letterbox offset.
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -height - offsetY)

let rect = prediction.boundingBox.applying(scale).applying(transform)
```
This assumes portrait orientation and a 16:9 aspect ratio. It also assumes `.imageCropAndScaleOption = .scaleFill`.
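As a sketch of what those transforms compute, here is a minimal, CoreGraphics-free reimplementation using plain `Double`s (the `Rect` struct and `convert` function names are illustrative, not part of any API):

```swift
// Hypothetical helper mirroring the CGAffineTransform math above.
struct Rect { var x, y, w, h: Double }

func convert(boundingBox box: Rect, viewWidth: Double, viewHeight: Double) -> Rect {
    let width = viewWidth
    let height = width * 16 / 9          // portrait 16:9 video letterboxed in the view
    let offsetY = (viewHeight - height) / 2
    // Scale normalized (0...1) coordinates up to view coordinates.
    let scaled = Rect(x: box.x * width, y: box.y * height,
                      w: box.w * width, h: box.h * height)
    // Flip the y-axis: Vision's origin is bottom-left, UIKit's is top-left.
    let flippedY = height + offsetY - scaled.y - scaled.h
    return Rect(x: scaled.x, y: flippedY, w: scaled.w, h: scaled.h)
}
```

For example, the full-frame box `(0, 0, 1, 1)` in a 390×844 view maps to a rect of width 390 and height 693.3, vertically centered with the letterbox offset as its `y`.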
Credit: the conversion code comes from this repo: https://github.com/Willjay90/AppleFaceDetection