Display camera frame

commit ac651c5e64
parent 643cfbbd92
Date: 2019-12-29 18:39:08 +01:00

140 changed files with 21847 additions and 0 deletions

vendor/gocv.io/x/gocv/.astylerc (generated, vendored, new file)

@@ -0,0 +1,28 @@
--lineend=linux
--style=google
--indent=spaces=4
--indent-col1-comments
--convert-tabs
--attach-return-type
--attach-namespaces
--attach-classes
--attach-inlines
--add-brackets
--add-braces
--align-pointer=type
--align-reference=type
--max-code-length=100
--break-after-logical
--pad-comma
--pad-oper
--unpad-paren
--break-blocks
--pad-header

vendor/gocv.io/x/gocv/.dockerignore (generated, vendored, new file)

@@ -0,0 +1 @@
**

vendor/gocv.io/x/gocv/.gitignore (generated, vendored, new file)

@@ -0,0 +1,11 @@
profile.cov
count.out
*.swp
*.snap
/parts
/prime
/stage
.vscode/
/build
.idea/
contrib/data.yaml

vendor/gocv.io/x/gocv/.travis.yml (generated, vendored, new file)

@@ -0,0 +1,60 @@
# Use new container infrastructure to enable caching
sudo: required
dist: trusty

# language is go
language: go
go:
  - "1.13"

go_import_path: gocv.io/x/gocv

addons:
  apt:
    packages:
      - libgmp-dev
      - build-essential
      - cmake
      - git
      - libgtk2.0-dev
      - pkg-config
      - libavcodec-dev
      - libavformat-dev
      - libswscale-dev
      - libtbb2
      - libtbb-dev
      - libjpeg-dev
      - libpng-dev
      - libtiff-dev
      - libjasper-dev
      - libdc1394-22-dev
      - xvfb

before_install:
  - ./travis_build_opencv.sh
  - export PKG_CONFIG_PATH=$(pkg-config --variable pc_path pkg-config):$HOME/usr/lib/pkgconfig
  - export INCLUDE_PATH=$HOME/usr/include:${INCLUDE_PATH}
  - export LD_LIBRARY_PATH=$HOME/usr/lib:${LD_LIBRARY_PATH}
  - sudo ln /dev/null /dev/raw1394
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start

before_cache:
  - rm -f $HOME/fresh-cache

script:
  - export GOCV_CAFFE_TEST_FILES="${HOME}/testdata"
  - export GOCV_TENSORFLOW_TEST_FILES="${HOME}/testdata"
  - export OPENCV_ENABLE_NONFREE=ON
  - echo "Ensuring code is well formatted"; ! gofmt -s -d . | read
  - go test -v -coverprofile=coverage.txt -covermode=atomic -tags matprofile .
  - go test -tags matprofile ./contrib -coverprofile=contrib.txt -covermode=atomic; cat contrib.txt >> coverage.txt; rm contrib.txt;

after_success:
  - bash <(curl -s https://codecov.io/bash)

# Caching so the next build will be fast as possible.
cache:
  timeout: 1000
  directories:
    - $HOME/usr
    - $HOME/testdata

vendor/gocv.io/x/gocv/CHANGELOG.md (generated, vendored, new file)

@@ -0,0 +1,716 @@
0.22.0
---
* **bgsegm**
* Add BackgroundSubtractorCNT
* **calib3d**
* Added undistort function (#520)
* **core**
* add functions (singular value decomposition, multiply between matrices, transpose matrix) (#559)
* Add new funcs (#578)
* add setIdentity() method to Mat
* add String method (#552)
* MatType: add missing constants
* **dnn**
* Adding GetLayerNames()
* respect the bit depth of the input image to set the expected output when converting an image to a blob
* **doc**
* change opencv version 3.x to 4.x
* **docker**
* use Go1.13.5 for image
* **imgcodecs**
* Fix webp image decode error (#523)
* optimize copy of data used for IMDecode method
* **imgproc**
* Add GetRectSubPix
* Added ClipLine
* Added InvertAffineTransform
* Added LinearPolar function (#524)
* correct ksize param used for MedianBlur unit test
* Feature/put text with line type (#527)
* FitEllipse
* In FillPoly and DrawContours functions, remove func() wrap to avoid memory freed before calling opencv functions. (#543)
* **objdetect**
* Add support QR codes
* **opencv**
* update to OpenCV 4.2.0 release
* **openvino**
* Add openvino async
* **test**
* Tolerate imprecise result in SolvePoly
* Tolerate imprecision in TestHoughLines
0.21.0
---
* **build**
* added go clean --cache to clean target, see issue 458
* **core**
* Add KMeans function
* added MeanWithMask function for Mats (#487)
* Fix possible resource leak
* **cuda**
* added cudaoptflow
* added NewGpuMatFromMat which creates a GpuMat from a Mat
* Support for CUDA Image Warping (#494)
* **dnn**
* add BlobFromImages (#467)
* add ImagesFromBlob (#468)
* **docs**
* update ROADMAP with all recent contributions. Thank you!
* **examples**
* face detection from image url by using IMDecode (#499)
* better format
* **imgproc**
* Add calcBackProject
* Add CompareHist
* Add DistanceTransform and Watershed
* Add GrabCut
* Add Integral
* Add MorphologyExWithParams
* **opencv**
* update to version 4.1.2
* **openvino**
* updates needed for 2019 R3
* **videoio**
* Added ToCodec to convert FOURCC string to numeric representation (#485)
0.20.0
---
* **build**
* Use Go 1.12.x for build
* Update to OpenCV 4.1.0
* **cuda**
* Initial cuda implementation
* **docs**
* Fix the command to install xquartz via brew/cask
* **features2d**
* Add support for SimpleBlobDetectorParams (#434)
* Added FastFeatureDetectorWithParams
* **imgproc**
* Added function call to cv::morphologyDefaultBorderValue
* **test**
* Increase test coverage for FP16BlobFromImage()
* **video**
* Added calcOpticalFlowPyrLKWithParams
* Addition of MOG2/KNN constructor with options
0.19.0
---
* **build**
* Adds Dockerfile. Updates Makefile and README.
* make maintainer tag same as dockerhub organization name
* make sure to run tests for non-free contrib algorithms
* update Appveyor build to use Go 1.12
* **calib3d**
* add func InitUndistortRectifyMap (#405)
* **cmd**
* correct formatting of code in example
* **core**
* Added Bitwise Operations With Masks
* update to OpenCV4.0.1
* **dnn**
* add new backend and target types for NVIDIA and FPGA
* Added blobFromImages in ROADMAP.md (#403)
* Implement dnn methods for loading in-memory models.
* **docker**
* update Dockerfile to use OpenCV 4.0.1
* **docs**
* update ROADMAP from recent contributions
* **examples**
* Fixing filename in caffe-classifier example
* **imgproc**
* Add 'MinEnclosingCircle' function
* added BoxPoints function and BorderIsolated const
* Added Connected Components
* Added the HoughLinesPointSet function.
* Implement CLAHE to imgproc
* **openvino**
* remove lib no longer included during non-FPGA installations
* **test**
* Add len(kp) == 232 to TestMSER, seems this is necessary for MacOS for some reason.
0.18.0
---
* **build**
* add OPENCV_GENERATE_PKGCONFIG flag to generate pkg-config file
* Add required curl package to the RPM and DEBS
* correct name for zip directory used for code download
* Removing linking against face contrib module
* update CI to use 4.0.0 release
* update Makefile and Windows build command file to OpenCV 4.0.0
* use opencv4 file for pkg-config
* **core**
* add ScaleAdd() method to Mat
* **docs**
* replace OpenCV 3.4.3 references with OpenCV 4
* update macOS installation info to refer to new OpenCV 4.0 brew
* Updated function documentation with information about errors.
* **examples**
* Improve accuracy in hand gesture sample
* **features2d**
* update drawKeypoints() to use new stricter enum
* **openvino**
* changes to accommodate release 2018R4
* **profile**
* add build tag matprofile to allow for conditional inclusion of custom profile
* Add Mat profile wrapper in other areas of the library.
* Add MatProfile.
* Add MatProfileTest.
* move MatProfile tests into separate test file so they only run when custom profiler active
* **test**
* Close images in tests.
* More Closes in tests.
* test that we are using 4.0.x version now
* **videoio**
* Return the right type and error when opening VideoCapture fails
0.17.0
---
* **build**
* Update Makefile
* update version of OpenCV used to 3.4.3
* use link to OpenCV 3.4.3 for Windows builds
* **core**
* add mulSpectrums wrapper
* add PolarToCart() method to Mat
* add Reduce() method to Mat
* add Repeat() method to Mat
* add Solve() method to Mat
* add SolveCubic() method to Mat
* add SolvePoly() method to Mat
* add Sort() method to Mat
* add SortIdx() method to Mat
* add Trace() method to Mat
* Added new MatType
* Added Phase function
* **dnn**
* update test to match OpenCV 3.4.3 behavior
* **docs**
* Add example of how to run individual test
* adding instructions for installing pkgconfig for macOS
* fixed GOPATH bug.
* update ROADMAP from recent contributions
* **examples**
* add condition to handle no circle found in circle detection example
* **imgcodecs**
* Added IMEncodeWithParams function
* **imgproc**
* Added Filter2D function
* Added fitLine function
* Added logPolar function
* Added Remap function
* Added SepFilter2D function
* Added Sobel function
* Added SpatialGradient function
* **xfeatures2d**
* do not run SIFT test unless OpenCV was built using OPENCV_ENABLE_NONFREE
* do not run SURF test unless OpenCV was built using OPENCV_ENABLE_NONFREE
0.16.0
---
* **build**
* add make task for Raspbian install with ARM hardware optimizations
* use all available cores to compile OpenCV on Windows as discussed in issue #275
* download performance improvements for OpenCV installs on Windows
* correct various errors and issues with OpenCV installs on Fedora and CentOS
* **core**
* correct spelling error in constant to fix issue #269
* implemented & added test for Mat.SetTo
* improve Multiply() GoDoc and test showing Scalar() multiplication
* mutator functions for Mat add, subtract, multiply, and divide for uint8 and float32 values.
* **dnn**
* add FP16BlobFromImage() function to convert an image Mat to a half-float aka FP16 slice of bytes
* **docs**
* fix a variable error in example code in README
0.15.0
---
* **build**
* add max to make -j
* improve path for Windows to use currently configured GOPATH
* **core**
* Add Mat.DataPtr methods for direct access to OpenCV data
* Avoid extra copy in Mat.ToBytes + code review feedback
* **dnn**
* add test coverage for ParseNetBackend and ParseNetTarget
* complete test coverage
* **docs**
* minor cleanup of language for install
* use chdir instead of cd in Windows instructions
* **examples**
* add 'hello, video' example to repo
* add HoughLinesP example
* correct message on device close to match actual event
* small change in display message for when file is input source
* use DrawContours in motion detect example
* **imgproc**
* Add MinAreaRect() function
* **test**
* filling test coverage gaps
* **videoio**
* add test coverage for OpenVideoCapture
0.14.0
---
* **build**
* Add -lopencv_calib3d341 to the linker
* auto-confirm on package installs from make deps command
* display PowerShell download status for OpenCV files
* obtain caffe test config file from new location in Travis build
* remove VS only dependencies from OpenCV build, copy caffe test config file from new location
* return back to GoCV directory after OpenCV install
* update for release of OpenCV v3.4.2
* use PowerShell for scripted OpenCV install for Windows
* win32 version number has not changed yet
* **calib3d**
* Add Calibrate for Fisheye model(WIP)
* **core**
* add GetTickCount function
* add GetTickFrequency function
* add Size() and FromPtr() methods to Mat
* add Total method to Mat
* Added RotateFlag type
* correct CopyTo to use pointer to Mat as destination
* functions converting Image to Mat
* rename implementation to avoid conflicts with Windows
* stricter use of reflect.SliceHeader
* **dnn**
* add backend/device options to caffe and tensorflow DNN examples
* add Close to Layer
* add first version of dnn-pose-detection example
* add further comments to object detection/tracking DNN example
* add GetPerfProfile function to Net
* add initial Layer implementation alongside enhancements to Net
* add InputNameToIndex to Layer
* add new functions allowing DNN backends such as OpenVINO
* additional refactoring and comments in dnn-pose-detection example
* cleanup DNN face detection example
* correct const for device targets to be called Target
* correct test that expected init slice with blank entries
* do not init slice with blank entries, since added via append
* further cleanup of DNN face detection example
* make dnn-pose-detection example use Go channels for async operation
* refactoring and additional comments for object detection/tracking DNN example
* refine comment in header for style transfer example
* working style transfer example
* added ForwardLayers() to accommodate models with multiple output layers
* **docs**
* add scripted Windows install info to README
* Added a sample gocv workflow contributing guideline
* mention docker image in README.
* mention work in progress on Android
* simplify and add missing step in Linux installation in README
* update contributing instructions to match latest version
* update ROADMAP from recent calib3d module contribution
* update ROADMAP from recent imgproc histogram contribution
* **examples**
* cleanup header for caffe dnn classifier
* show how to use either Caffe or Tensorflow for DNN object detection
* further improve dnn samples
* rearrange and add comments to dnn style transfer example
* remove old copy of pose detector
* remove unused example
* **features2d**
* free memory allocation bug for C.KeyPoints as pointed out by @tzununbekov
* Adding opencv::drawKeypoints() support
* **imgproc**
* add equalizeHist function
* Added opencv::calcHist implementation
* **openvino**
* add needed environment config to execute examples
* further details in README explaining how to use
* remove opencv contrib references as they are not included in OpenVINO
* **videoio**
* Add OpenVideoCapture
* Use gocv.VideoCaptureFile if string is specified for device.
0.13.0
---
* **build**
* Add cgo directives to contrib
* contrib subpackage also needs cpp 11 or greater for a warning free build on Linux
* Deprecate env scripts and update README
* Don't set --std=c++1z on non-macOS
* Remove CGO vars from CI and correct Windows cgo directives
* Support pkg-config via cgo directives
* we actually do need cpp 11 or greater for a warning free build on Linux
* **docs**
* add a Github issue template to project
* provide specific examples of using custom environment
* **imgproc**
* add HoughLinesPWithParams() function
* **openvino**
* add build tag specific to openvino
* add roadmap info
* add smoke test for ie
0.12.0
---
* **build**
* convert to CRLF
* Enable verbosity for travisCI
* Further improvements to Makefile
* **core**
* Add Rotate, VConcat
* Adding InScalarRange and NewMatFromScalarWithSize functions
* Changed NewMatFromScalarWithSize to NewMatWithSizeFromScalar
* implement CheckRange(), Determinant(), EigenNonSymmetric(), Min(), and MinMaxIdx() functions
* implement PerspectiveTransform() and Sqrt() functions
* implement Transform() and Transpose() functions
* Make toByteArray safe for empty byte slices
* Renamed InScalarRange to InRangeWithScalar
* **docs**
* nicer error if we can't read haarcascade_frontalface_default
* correct some ROADMAP links
* Fix example command.
* Fix executable name in help text.
* update ROADMAP from recent contributions
* **imgproc**
* add BoxFilter and SqBoxFilter functions
* Fix the hack to convert C arrays to Go slices.
* **videoio**
* Add isColor to VideoWriterFile
* Check numerical parameters for gocv.VideoWriterFile
* CodecString()
* **features2d**
* add BFMatcher
* **img_hash**
* Add contrib/img_hash module
* add GoDocs for new img_hash module
* Add img-similarity as an example for img_hash
* **openvino**
* adds support for Intel OpenVINO toolkit PVL
* starting experimental work on OpenVINO IE
* update README files for Intel OpenVINO toolkit support
* WIP on IE can load an IR network
0.11.0
---
* **build**
* Add astyle config
* Astyle cpp/h files
* remove duplication in Makefile for astyle
* **core**
* Add GetVecfAt() function to Mat
* Add GetVeciAt() function to Mat
* Add Mat.ToImage()
* add MeanStdDev() method to Mat
* add more functions
* Compare Mat Type directly
* further cleanup for GoDocs and enforce type for covariance operations
* Make borderType in CopyMakeBorder be type BorderType
* Mat Type() should return MatType
* remove unused convenience functions
* use Mat* to indicate when a Mat is mutable aka an output parameter
* **dnn**
* add a ssd sample and a GetBlobChannel helper
* added another helper func and a pose detection demo
* **docs**
* add some additional detail about adding OpenCV functions to GoCV
* updates to contribution guidelines
* fill out complete list of needed imgproc functions for sections that have work started
* indicate that missing imgproc functions need implementation
* mention the WithParams patterns to be used for functions with default params
* update README for the Mat* based API changes
* update ROADMAP for recent changes especially awesome recent core contributions from @berak
* **examples**
* Fix tf-classifier example
* move new DNN advanced examples into separate folders
* Update doc for the face contrib package
* Update links in caffe-classifier demo
* WIP on hand gestures tracking example
* **highgui**
* fix constant in NewWindow
* **imgproc**
* Add Ellipse() and FillPoly() functions
* Add HoughCirclesWithParams() func
* correct output Mat to for ConvexHull()
* rename param being used for Mat image to be modified
* **tracking**
* add support for TrackerMIL, TrackerBoosting, TrackerMedianFlow, TrackerTLD, TrackerKCF, TrackerMOSSE, TrackerCSRT trackers
* removed multitracker, added Csrt, rebased
* update GoDocs and minor renaming based on gometalint output
0.10.0
---
* **build**
* install unzip before build
* overwrite when unzipping file to install Tensorflow test model
* use -DCPU_DISPATCH= flag for build to avoid problem with disabled AVX on Windows
* update unzipped file when installing Tensorflow test model
* **core**
* add Compare() and CountNonZero() functions
* add getter/setter using optional params for multi-dimensional Mat using row/col/channel
* Add mat subtract function
* add new toRectangle function to DRY up conversion from CRects to []image.Rectangle
* add split subtract sum wrappers
* Add toCPoints() helper function
* Added Mat.CopyToWithMask() per #47
* added Pow() method
* BatchDistance BorderInterpolate CalcCovarMatrix CartToPolar
* CompleteSymm ConvertScaleAbs CopyMakeBorder Dct
* divide, multiply
* Eigen Exp ExtractChannels
* operations on a 3d Mat are not same as a 2d multichannel Mat
* resolve merge conflict with duplicate Subtract() function
* run gofmt on core tests
* Updated type for Mat.GetUCharAt() and Mat.SetUCharAt() to reflect uint8 instead of int8
* **docs**
* update ROADMAP of completed functions in core from recent contributions
* **env**
* check loading resources
* Add distribution detection to deps rule
* Add needed environment variables for Linux
* **highgui**
* add some missing test coverage on WaitKey()
* **imgproc**
* Add adaptive threshold function
* Add pyrDown and pyrUp functions
* Expose DrawContours()
* Expose WarpPerspective and GetPerspectiveTransform
* implement ConvexHull() and ConvexityDefects() functions
* **opencv**
* update to OpenCV version 3.4.1
0.9.0
---
* **bugfix**
* correct several errors in size parameter ordering
* **build**
* add missing opencv_face lib reference to env.sh
* Support for non-brew installs of opencv on Darwin
* **core**
* add Channels() method to Mat
* add ConvertTo() and NewMatFromBytes() functions
* add Type() method to Mat
* implement ConvertFp16() function
* **dnn**
* use correct size for blob used for Caffe/Tensorflow tests
* **docs**
* Update copyright date and Apache 2.0 license to include full text
* **examples**
* cleanup mjpeg streamer code
* cleanup motion detector comments
* correct use of defer in loop
* use correct size for blob used for Caffe/Tensorflow examples
* **imgproc**
* Add cv::approxPolyDP() bindings.
* Add cv::arcLength() bindings.
* Add cv::matchTemplate() bindings.
* correct comment and link for Blur function
* correct docs for BilateralFilter()
0.8.0
---
* **core**
* add ColorMapFunctions and their test
* add Mat ToBytes
* add Reshape and MinMaxLoc functions
* also delete points
* fix mistake in the norm function by taking NormType instead of int as parameter
* SetDoubleAt func and its test
* SetFloatAt func and its test
* SetIntAt func and its test
* SetSCharAt func and its test
* SetShortAt func and its test
* SetUCharAt func and its test
* use correct delete operator for array of new, eliminates a bunch of memory leaks
* **dnn**
* add support for loading Tensorflow models
* adjust test for Caffe now that we are auto-cropping blob
* first pass at adding Caffe support
* go back to older function signature to avoid version conflicts with Intel CV SDK
* properly close DNN Net class
* use approx. value from test result to account for Windows precision differences
* **features2d**
* implement GFTTDetector, KAZE, and MSER algorithms
* modify MSER test for Windows results
* **highgui**
* un-deprecate WaitKey function needed for CLI apps
* **imgcodec**
* add fileExt type
* **imgproc**
* add the norm wrapper and use it in test for WarpAffine and WarpAffineWithParams
* GetRotationMatrix2D, WarpAffine and WarpAffineWithParams
* use NormL2 in wrap affine
* **pvl**
* add support for FaceRecognizer
* complete wrappers for all missing FaceDetector functions
* update instructions to match R3 of Intel CV SDK
* **docs**
* add more detail about exactly which functions are not yet implemented in the modules that are marked as 'Work Started'
* add reference to Tensorflow example, and also suggest brew upgrade for MacOS
* improve ROADMAP to help would-be contributors know where to get started
* in the readme, explain compiling to a static library
* remove many godoc warnings by improving function descriptions
* update all OpenCV 3.3.1 references to v3.4.0
* update CGO_LDFLAGS references to match latest requirements
* update contribution guidelines to try to make it more inviting
* **examples**
* add Caffe classifier example
* add Tensorflow classifier example
* fixed closing window in examples in infinite loop
* fixed format of the examples with gofmt
* **test**
* add helper function for test : floatEquals
* add some attribution from test function
* display OpenCV version in case that test fails
* add round function to allow for floating point accuracy differences due to GPU usage.
* **build**
* improve search for already installed OpenCV on MacOS
* update Appveyor build to Opencv 3.4.0
* update to Opencv 3.4.0
0.7.0
---
* **core**
* correct Merge implementation
* **docs**
* change wording and formatting for roadmap
* update roadmap for a more complete list of OpenCV functionality
* sequence docs in README in same way as the web site, aka by OS
* show in README that some work was done on contrib face module
* **face**
* LBPH facerecognizer bindings
* **highgui**
* complete implementation for remaining API functions
* **imgcodecs**
* add IMDecode function
* **imgproc**
* elaborate on HoughLines & HoughLinesP tests to fetch a few individual results
* **objdetect**
* add GroupRectangles function
* **xfeatures2d**
* add SIFT and SURF algorithms from OpenCV contrib
* improve description for OpenCV contrib
* run tests from OpenCV contrib
0.6.0
---
* **core**
* Add cv::LUT binding
* **examples**
* do not try to go fullscreen, since it does not work on OSX
* **features2d**
* add AKAZE algorithm
* add BRISK algorithm
* add FastFeatureDetector algorithm
* implement AgastFeatureDetector algorithm
* implement ORB algorithm
* implement SimpleBlobDetector algorithm
* **osx**
* Fix to get the OpenCV path with "brew info".
* **highgui**
* use new Window with thread lock, and deprecate WaitKey() in favor of Window.WaitKey()
* use Window.WaitKey() in tests
* **imgproc**
* add tests for HoughCircles
* **pvl**
* use correct Ptr referencing
* **video**
* use smart Ptr for Algorithms thanks to @alalek
* use unsafe.Pointer for Algorithm
* move tests to single file now that they all pass
0.5.0
---
* **core**
* add TermCriteria for iterative algorithms
* **imgproc**
* add CornerSubPix() and GoodFeaturesToTrack() for corner detection
* **objdetect**
* add DetectMultiScaleWithParams() for HOGDescriptor
* add DetectMultiScaleWithParams() to allow override of defaults for CascadeClassifier
* **video**
* add CalcOpticalFlowFarneback() for Farneback optical flow calculations
* add CalcOpticalFlowPyrLK() for Lucas-Kanade optical flow calculations
* **videoio**
* use temp directory for Windows test compat.
* **build**
* enable Appveyor build w/cache
* **osx**
* update env path to always match installed OpenCV from Homebrew
0.4.0
---
* **core**
* Added cv::mean binding with single argument
* fix the write-strings warning
* return temp pointer fix
* **examples**
* add counter example
* add motion-detect command
* correct counter
* remove redundant cast and other small cleanup
* set motion detect example to fullscreen
* use MOG2 for continuous motion detection, instead of simplistic first frame only
* **highgui**
* ability to better control the fullscreen window
* **imgproc**
* add BorderType param type for GaussianBlur
* add BoundingRect() function
* add ContourArea() function
* add FindContours() function along with associated data types
* add Laplacian and Scharr functions
* add Moments() function
* add Threshold function
* **pvl**
* add needed lib for linker missing in README
* **test**
* slightly more permissive version test
* **videoio**
* Add image compression flags for gocv.IMWrite
* Fixed possible looping out of compression parameters length
* Make dedicated function to run cv::imwrite with compression parameters
0.3.1
---
* **overall**
* Update to use OpenCV 3.3.1
0.3.0
---
* **docs**
* Correct Windows build location from same @jpfarias fix to gocv-site
* **core**
* Add Resize
* Add Mat merge and Discrete Fourier Transform
* Add CopyTo() and Normalize()
* Implement various important Mat logical operations
* **video**
* BackgroundSubtractorMOG2 algorithm now working
* Add BackgroundSubtractorKNN algorithm from video module
* **videoio**
* Add VideoCapture::get
* **imgproc**
* Add BilateralFilter and MedianBlur
* Additional drawing functions implemented
* Add HoughCircles filter
* Implement various morphological operations
* **highgui**
* Add Trackbar support
* **objdetect**
* Add HOGDescriptor
* **build**
* Remove race from test on Travis, since it causes CGo segfault in MOG2
0.2.0
---
* Switchover to custom domain for package import
* Yes, we have Windows
0.1.0
---
Initial release!
- [X] Video capture
- [X] GUI Window to display video
- [X] Image load/save
- [X] CascadeClassifier for object detection/face tracking/etc.
- [X] Installation instructions for Ubuntu
- [X] Installation instructions for OS X
- [X] Code example to use VideoWriter
- [X] Intel CV SDK PVL FaceTracker support
- [X] imgproc Image processing
- [X] Travis CI build
- [X] At least minimal test coverage for each OpenCV class
- [X] Implement more of imgproc Image processing

vendor/gocv.io/x/gocv/CONTRIBUTING.md (generated, vendored, new file)

@@ -0,0 +1,136 @@
# How to contribute
Thank you for your interest in improving GoCV.
We would like your help to make this project better, so we appreciate any contributions. See if one of the following descriptions matches your situation:
### Newcomer to GoCV, to OpenCV, or to computer vision in general
We'd love to get your feedback on getting started with GoCV. Run into any difficulty, confusion, or anything else? You are not alone. We want to know about your experience, so we can help the next people. Please open a Github issue with your questions, or get in touch directly with us.
### Something in GoCV is not working as you expect
Please open a Github issue with your problem, and we will be happy to assist.
### Something you want/need from OpenCV does not appear to be in GoCV
We probably have not implemented it yet. Please take a look at our [ROADMAP.md](ROADMAP.md). Your pull request adding the functionality to GoCV would be greatly appreciated.
### You found some Python code on the Internet that performs some computer vision task, and you want to do it using GoCV
Please open a Github issue with your needs, and we can see what we can do.
## How to use our Github repository
The `master` branch of this repo will always have the latest released version of GoCV. All of the active development work for the next release will take place in the `dev` branch. GoCV will use semantic versioning and will create a tag/release for each release.
Here is how to contribute back some code or documentation:
- Fork repo
- Create a feature branch off of the `dev` branch
- Make some useful change
- Submit a pull request against the `dev` branch.
- Be kind
## How to add a function from OpenCV to GoCV
Here are a few basic guidelines on how to add a function from OpenCV to GoCV:
- Please open a Github issue. We want to help, and also make sure that there is no duplication of effort. Sometimes what you need is already being worked on by someone else.
- Use the proper Go style naming `MissingFunction()` for the Go wrapper.
- Make any output parameters `Mat*` to indicate to developers that the underlying OpenCV data will be changed by the function.
- Use Go types as parameters when possible, for example `image.Point`, and then convert to the appropriate OpenCV struct. Also define a new type based on `int` and `const` values instead of just passing "magic numbers" as params. For example, the `VideoCaptureProperties` type used in `videoio.go`.
- Always add the function to the GoCV file named the same as the OpenCV module to which the function belongs.
- If the new function is in a module that is not yet implemented by GoCV, a new set of files for that module will need to be added.
- Always add a "smoke" test for the new function being added. We are not testing OpenCV itself, but just the GoCV wrapper, so all that is needed generally is just exercising the new function.
- If OpenCV has any default params for a function, we have been implementing 2 versions of the function, since Go does not support overloading. For example, with an OpenCV function:
```c
opencv::xYZ(int p1, int p2, int p3=2, int p4=3);
```
We would define 2 functions in GoCV:
```go
// uses default param values
XYZ(p1, p2)
// sets each param
XYZWithParams(p1, p2, p3, p4)
```
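Putting the guidelines above together, here is a minimal sketch of what the resulting pair of Go wrappers might look like. Everything in it (`xYZ`, `XYZ`, `XYZWithParams`, and the parameter names) is hypothetical, purely to illustrate the naming and `WithParams` conventions:
```go
package gocv

// XYZ wraps the hypothetical opencv::xYZ call using its default
// parameter values (p3=2, p4=3), mirroring the C++ signature above.
func XYZ(p1 int, p2 int) {
	XYZWithParams(p1, p2, 2, 3)
}

// XYZWithParams wraps the same call but lets the caller set every
// parameter, since Go does not support function overloading.
func XYZWithParams(p1, p2, p3, p4 int) {
	// ... convert the arguments and call into the C wrapper here ...
}
```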
## How to run tests
To run the tests:
```
go test .
go test ./contrib/.
```
If you want to run an individual test, you can provide a RegExp to the `-run` argument:
```
go test -run TestMat
```
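As a companion to the "smoke" test guideline above, here is a minimal sketch of what such a test might look like for a hypothetical wrapper `SomeFilter(src Mat, dst *Mat)`; the function name and signature are invented purely for illustration. The test only exercises the GoCV wrapper; it does not try to validate OpenCV itself:
```go
package gocv

import "testing"

// TestSomeFilter exercises the hypothetical SomeFilter wrapper and
// checks that the destination Mat was written to; it does not verify
// OpenCV's own behavior.
func TestSomeFilter(t *testing.T) {
	src := NewMatWithSize(10, 10, MatTypeCV8U)
	defer src.Close()

	dst := NewMat()
	defer dst.Close()

	SomeFilter(src, &dst)

	if dst.Empty() {
		t.Error("SomeFilter did not write to the destination Mat")
	}
}
```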
If you are using Intel OpenVINO, you can run those tests using:
```
go test ./openvino/...
```
## Contributing workflow
This section provides a short description of one of many possible workflows you can follow to contribute to `GoCV`. This workflow is based on multiple [git remotes](https://git-scm.com/docs/git-remote) and is by no means the only workflow you can use to contribute to `GoCV`. However, it is an option that might help you get started quickly without too much hassle, as it lets you work directly from the `gocv` repo directory path.
Assuming you have already forked the `gocv` repo, you need to add a new `git remote` which will point to your GitHub fork. Notice below that you **must** `cd` to `gocv` repo directory before you add the new `git remote`:
```shell
cd $GOPATH/src/gocv.io/x/gocv
git remote add gocv-fork https://github.com/YOUR_GH_HANDLE/gocv.git
```
Note that in the command above we named our new `git remote` **gocv-fork** for convenience, so we can easily recognize it. You are free to choose any remote name of your liking.
You should now see your new `git remote` when running the command below:
```shell
git remote -v
gocv-fork https://github.com/YOUR_GH_HANDLE/gocv.git (fetch)
gocv-fork https://github.com/YOUR_GH_HANDLE/gocv.git (push)
origin https://github.com/hybridgroup/gocv (fetch)
origin https://github.com/hybridgroup/gocv (push)
```
Before you create a new branch from `dev`, you should fetch the latest commits from the `dev` branch:
```shell
git fetch origin dev
```
You want the `dev` branch in your `gocv` fork to be in sync with the `dev` branch of `gocv`, so push the earlier fetched commits to your GitHub fork as shown below. Note, the `-f` force switch might not be needed:
```shell
git push gocv-fork dev -f
```
Create a new feature branch from `dev`:
```shell
git checkout -b new-feature
```
After you've made your changes you can run the tests using the `make` command listed below. Note, you're still working off the `gocv` project root directory, hence running the command below does not require complicated `$GOPATH` rewrites or whatnot:
```shell
make test
```
Once the tests have passed, commit your new code to the `new-feature` branch and push it to your fork running the command below:
```shell
git push gocv-fork new-feature
```
You can now open a new PR from `new-feature` branch in your forked repo against the `dev` branch of `gocv`.

vendor/gocv.io/x/gocv/Dockerfile (generated, vendored, new file)

@@ -0,0 +1,60 @@
FROM ubuntu:16.04 AS opencv
LABEL maintainer="hybridgroup"
RUN apt-get update && apt-get install -y --no-install-recommends \
git build-essential cmake pkg-config unzip libgtk2.0-dev \
curl ca-certificates libcurl4-openssl-dev libssl-dev \
libavcodec-dev libavformat-dev libswscale-dev libtbb2 libtbb-dev \
libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev && \
rm -rf /var/lib/apt/lists/*
ARG OPENCV_VERSION="4.2.0"
ENV OPENCV_VERSION $OPENCV_VERSION
RUN curl -Lo opencv.zip https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
unzip -q opencv.zip && \
curl -Lo opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip && \
unzip -q opencv_contrib.zip && \
rm opencv.zip opencv_contrib.zip && \
cd opencv-${OPENCV_VERSION} && \
mkdir build && cd build && \
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-${OPENCV_VERSION}/modules \
-D WITH_JASPER=OFF \
-D BUILD_DOCS=OFF \
-D BUILD_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_opencv_java=NO \
-D BUILD_opencv_python=NO \
-D BUILD_opencv_python2=NO \
-D BUILD_opencv_python3=NO \
-D OPENCV_GENERATE_PKGCONFIG=ON .. && \
make -j $(nproc --all) && \
make preinstall && make install && ldconfig && \
cd / && rm -rf opencv*
#################
# Go + OpenCV #
#################
FROM opencv AS gocv
LABEL maintainer="hybridgroup"
ARG GOVERSION="1.13.5"
ENV GOVERSION $GOVERSION
RUN apt-get update && apt-get install -y --no-install-recommends \
git software-properties-common && \
curl -Lo go${GOVERSION}.linux-amd64.tar.gz https://dl.google.com/go/go${GOVERSION}.linux-amd64.tar.gz && \
tar -C /usr/local -xzf go${GOVERSION}.linux-amd64.tar.gz && \
rm go${GOVERSION}.linux-amd64.tar.gz && \
rm -rf /var/lib/apt/lists/*
ENV GOPATH /go
ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
RUN go get -u -d gocv.io/x/gocv && go run ${GOPATH}/src/gocv.io/x/gocv/cmd/version/main.go

vendor/gocv.io/x/gocv/LICENSE.txt (generated, vendored, new file)

@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2017-2019 The Hybrid Group
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

vendor/gocv.io/x/gocv/Makefile (generated, vendored, new file)

@@ -0,0 +1,138 @@
.ONESHELL:
.PHONY: test deps download build clean astyle cmds docker

# OpenCV version to use.
OPENCV_VERSION?=4.2.0

# Go version to use when building Docker image
GOVERSION?=1.13.1

# Temporary directory to put files into.
TMP_DIR?=/tmp/

# Package list for each well-known Linux distribution
RPMS=cmake curl git gtk2-devel libpng-devel libjpeg-devel libtiff-devel tbb tbb-devel libdc1394-devel unzip
DEBS=unzip build-essential cmake curl git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev

# Detect Linux distribution
distro_deps=
ifneq ($(shell which dnf 2>/dev/null),)
distro_deps=deps_fedora
else
ifneq ($(shell which apt-get 2>/dev/null),)
distro_deps=deps_debian
else
ifneq ($(shell which yum 2>/dev/null),)
distro_deps=deps_rh_centos
endif
endif
endif

# Install all necessary dependencies.
deps: $(distro_deps)

deps_rh_centos:
	sudo yum -y install pkgconfig $(RPMS)

deps_fedora:
	sudo dnf -y install pkgconf-pkg-config $(RPMS)

deps_debian:
	sudo apt-get -y update
	sudo apt-get -y install $(DEBS)

# Download OpenCV source tarballs.
download:
	rm -rf $(TMP_DIR)opencv
	mkdir $(TMP_DIR)opencv
	cd $(TMP_DIR)opencv
	curl -Lo opencv.zip https://github.com/opencv/opencv/archive/$(OPENCV_VERSION).zip
	unzip -q opencv.zip
	curl -Lo opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/$(OPENCV_VERSION).zip
	unzip -q opencv_contrib.zip
	rm opencv.zip opencv_contrib.zip
	cd -

# Build OpenCV.
build:
	cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
	mkdir build
	cd build
	cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON ..
	$(MAKE) -j $(shell nproc --all)
	$(MAKE) preinstall
	cd -

# Build OpenCV on Raspbian with ARM hardware optimizations.
build_raspi:
	cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
	mkdir build
	cd build
	cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=OFF -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D ENABLE_NEON=ON -D ENABLE_VFPV3=ON -D WITH_JASPER=OFF -D OPENCV_GENERATE_PKGCONFIG=ON ..
	$(MAKE) -j $(shell nproc --all)
	$(MAKE) preinstall
	cd -

# Build OpenCV with non-free contrib modules.
build_nonfree:
	cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
	mkdir build
	cd build
	cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON ..
	$(MAKE) -j $(shell nproc --all)
	$(MAKE) preinstall
	cd -

# Build OpenCV with cuda.
build_cuda:
	cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
	mkdir build
	cd build
	cmake -j $(shell nproc --all) -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_CUDA=ON -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DBUILD_opencv_cudacodec=OFF ..
	$(MAKE) -j $(shell nproc --all)
	$(MAKE) preinstall
	cd -

# Cleanup temporary build files.
clean:
	go clean --cache
	rm -rf $(TMP_DIR)opencv

# Do everything.
install: deps download build sudo_install clean verify

# Do everything on Raspbian.
install_raspi: deps download build_raspi sudo_install clean verify

# Do everything with cuda.
install_cuda: deps download build_cuda sudo_install clean verify

# Install system wide.
sudo_install:
	cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)/build
	sudo $(MAKE) install
	sudo ldconfig
	cd -

# Build a minimal Go app to confirm gocv works.
verify:
	go run ./cmd/version/main.go

# Runs tests.
# This assumes env.sh was already sourced.
# pvl is not tested here since it requires additional dependencies.
test:
	go test . ./contrib

docker:
	docker build --build-arg OPENCV_VERSION=$(OPENCV_VERSION) --build-arg GOVERSION=$(GOVERSION) .

astyle:
	astyle --project=.astylerc --recursive *.cpp,*.h
CMDS=basic-drawing caffe-classifier captest capwindow counter faceblur facedetect find-circles hand-gestures img-similarity mjpeg-streamer motion-detect pose saveimage savevideo showimage ssd-facedetect tf-classifier tracking version
cmds:
	for cmd in $(CMDS) ; do \
		go build -o build/$$cmd cmd/$$cmd/main.go ; \
	done

vendor/gocv.io/x/gocv/README.md (generated, vendored, new file)

@@ -0,0 +1,559 @@
# GoCV
[![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/master/images/gocvlogo.jpg)](http://gocv.io/)
[![GoDoc](https://godoc.org/gocv.io/x/gocv?status.svg)](https://godoc.org/github.com/hybridgroup/gocv)
[![Travis Build Status](https://travis-ci.org/hybridgroup/gocv.svg?branch=dev)](https://travis-ci.org/hybridgroup/gocv)
[![AppVeyor Build status](https://ci.appveyor.com/api/projects/status/9asd5foet54ru69q/branch/dev?svg=true)](https://ci.appveyor.com/project/deadprogram/gocv/branch/dev)
[![codecov](https://codecov.io/gh/hybridgroup/gocv/branch/dev/graph/badge.svg)](https://codecov.io/gh/hybridgroup/gocv)
[![Go Report Card](https://goreportcard.com/badge/github.com/hybridgroup/gocv)](https://goreportcard.com/report/github.com/hybridgroup/gocv)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/hybridgroup/gocv/blob/master/LICENSE.txt)
The GoCV package provides Go language bindings for the [OpenCV 4](http://opencv.org/) computer vision library.
The GoCV package supports the latest releases of Go and OpenCV (v4.2.0) on Linux, macOS, and Windows. We intend to make the Go language a "first-class" client compatible with the latest developments in the OpenCV ecosystem.
GoCV also supports [Intel OpenVINO](https://software.intel.com/en-us/openvino-toolkit). Check out the [OpenVINO README](./openvino/README.md) for more info on how to use GoCV with the Intel OpenVINO toolkit.
## How to use
### Hello, video
This example opens a video capture device using device "0", reads frames, and shows the video in a GUI window:
```go
package main

import (
	"gocv.io/x/gocv"
)

func main() {
	// open the default capture device
	webcam, _ := gocv.OpenVideoCapture(0)

	// open a window to display the video
	window := gocv.NewWindow("Hello")

	// prepare a matrix to hold each frame
	img := gocv.NewMat()

	// read and show frames until the program is interrupted
	for {
		webcam.Read(&img)
		window.IMShow(img)
		window.WaitKey(1)
	}
}
```
### Face detect
![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/master/images/face-detect.jpg)
This is a more complete example that opens a video capture device using device "0". It also uses the CascadeClassifier class to load an external data file containing the classifier data. The program grabs each frame from the video, then uses the classifier to detect faces. If any faces are found, it draws a green rectangle around each one, then displays the video in an output window:
```go
package main

import (
	"fmt"
	"image/color"

	"gocv.io/x/gocv"
)

func main() {
	// set to use a video capture device 0
	deviceID := 0

	// open webcam
	webcam, err := gocv.OpenVideoCapture(deviceID)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer webcam.Close()

	// open display window
	window := gocv.NewWindow("Face Detect")
	defer window.Close()

	// prepare image matrix
	img := gocv.NewMat()
	defer img.Close()

	// color for the rect when faces detected
	blue := color.RGBA{0, 0, 255, 0}

	// load classifier to recognize faces
	classifier := gocv.NewCascadeClassifier()
	defer classifier.Close()

	if !classifier.Load("data/haarcascade_frontalface_default.xml") {
		fmt.Println("Error reading cascade file: data/haarcascade_frontalface_default.xml")
		return
	}

	fmt.Printf("start reading camera device: %v\n", deviceID)
	for {
		if ok := webcam.Read(&img); !ok {
			fmt.Printf("cannot read device %v\n", deviceID)
			return
		}
		if img.Empty() {
			continue
		}

		// detect faces
		rects := classifier.DetectMultiScale(img)
		fmt.Printf("found %d faces\n", len(rects))

		// draw a rectangle around each face on the original image
		for _, r := range rects {
			gocv.Rectangle(&img, r, blue, 3)
		}

		// show the image in the window, and wait 1 millisecond
		window.IMShow(img)
		window.WaitKey(1)
	}
}
```
### More examples
There are examples in the [cmd directory](./cmd) of this repo in the form of various useful command line utilities, such as [capturing an image file](./cmd/saveimage), [streaming mjpeg video](./cmd/mjpeg-streamer), [counting objects that cross a line](./cmd/counter), and [using OpenCV with Tensorflow for object classification](./cmd/tf-classifier).
## How to install
To install GoCV, run the following command:
```
go get -u -d gocv.io/x/gocv
```
To run code that uses the GoCV package, you must also install OpenCV 4.2.0 on your system. Here are instructions for Ubuntu, Raspbian, macOS, and Windows.
## Ubuntu/Linux
### Installation
You can use `make` to install OpenCV 4.2.0 with the handy `Makefile` included with this repo. If you already have installed OpenCV, you do not need to do so again. The installation performed by the `Makefile` is minimal, so it may remove OpenCV options such as Python or Java wrappers if you have already installed OpenCV some other way.
#### Quick Install
The following commands should do everything to download and install OpenCV 4.2.0 on Linux:
```
cd $GOPATH/src/gocv.io/x/gocv
make install
```
If it works correctly, at the end of the entire process, the following message should be displayed:
```
gocv version: 0.22.0
opencv lib version: 4.2.0
```
That's it, now you are ready to use GoCV.
#### Complete Install
If you have already done the "Quick Install" as described above, you do not need to run any further commands. For the curious, or for custom installations, here are the details for each of the steps that are performed when you run `make install`.
##### Install required packages
First, you need to change the current directory to the location of the GoCV repo, so you can access the `Makefile`:
```
cd $GOPATH/src/gocv.io/x/gocv
```
Next, you need to update the system, and install any required packages:
```
make deps
```
#### Download source
Now, download the OpenCV 4.2.0 and OpenCV Contrib source code:
```
make download
```
#### Build
Build everything. This will take quite a while:
```
make build
```
#### Install
Once the code is built, you are ready to install:
```
make sudo_install
```
### Verifying the installation
To verify your installation you can run one of the included examples.
First, change the current directory to the location of the GoCV repo:
```
cd $GOPATH/src/gocv.io/x/gocv
```
Now you should be able to build or run any of the examples:
```
go run ./cmd/version/main.go
```
The version program should output the following:
```
gocv version: 0.22.0
opencv lib version: 4.2.0
```
#### Cleanup extra files
After the installation is complete, you can remove the extra files and folders:
```
make clean
```
### Cache builds
If you are running a version of Go older than v1.10 and not modifying GoCV source, precompile the GoCV package to significantly decrease your build times:
```
go install gocv.io/x/gocv
```
### Custom Environment
By default, pkg-config is used to determine the correct flags for compiling and linking OpenCV. This behavior can be disabled by supplying `-tags customenv` when building/running your application. When building with this tag you will need to supply the CGO environment variables yourself.
For example:

    export CGO_CPPFLAGS="-I/usr/local/include"
    export CGO_LDFLAGS="-L/usr/local/lib -lopencv_core -lopencv_face -lopencv_videoio -lopencv_imgproc -lopencv_highgui -lopencv_imgcodecs -lopencv_objdetect -lopencv_features2d -lopencv_video -lopencv_dnn -lopencv_xfeatures2d"

Please note that you will need to run these 2 lines once in your current session to set up the needed environment variables. Once you have done so, you can execute code that uses GoCV with your custom environment like this:

    go run -tags customenv ./cmd/version/main.go
### Docker
The project now provides a `Dockerfile` which lets you build a [GoCV](https://gocv.io/) Docker image, which you can then use to build and run `GoCV` applications in Docker containers. The `Makefile` contains a `docker` target which lets you build the Docker image with a single command:
```
make docker
```
By default, the Docker image built by running the command above ships [Go](https://golang.org/) version `1.13.5`, but if you would like to build an image which uses a different version of `Go` you can override the default value when running the target command:
```
make docker GOVERSION='1.13.5'
```
#### Running GUI programs in Docker on macOS
Sometimes your `GoCV` programs create graphical interfaces like windows, e.g. when you use the `gocv.Window` type to display an image or video stream. Running programs which create graphical interfaces in a Docker container on macOS is unfortunately a bit elaborate, but not impossible. First you need to satisfy the following prerequisites:
* install [xquartz](https://www.xquartz.org/). You can also install xquartz using [homebrew](https://brew.sh/) by running `brew cask install xquartz`
* install [socat](https://linux.die.net/man/1/socat) `brew install socat`
Note, you will have to log out and log back in to your machine once you have installed `xquartz`. This is so the X window system is reloaded.
Once you have installed all the prerequisites, you need to allow connections from network clients to `xquartz`. Here is how you do that. First, run the following command to open `xquartz` so you can configure it:
```shell
open -a xquartz
```
Click on the *Security* tab in preferences and check the "Allow connections" box:
![app image](./images/xquartz.png)
Next, you need to create a TCP proxy using `socat` which will stream [X Window](https://en.wikipedia.org/wiki/X_Window_System) data into `xquartz`. Before you start the proxy, you need to make sure that there is no process listening on port `6000`. The following command should **not** return any results:
```shell
lsof -i TCP:6000
```
Now you can start a local proxy which will proxy the X Window traffic into `xquartz`, which acts as your local X server:
```shell
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\"
```
You are now finally ready to run your `GoCV` GUI programs in Docker containers. In order to make everything work you must set the `DISPLAY` environment variable as shown in the sample command below:
```shell
docker run -it --rm -e DISPLAY=docker.for.mac.host.internal:0 your-gocv-app
```
**Note: since Docker for macOS does not provide any video device support, you won't be able to run GoCV apps which require a camera.**
### Alpine 3.7 Docker image
There is a Docker image with Alpine 3.7 that has been created by project contributor [@denismakogon](https://github.com/denismakogon). You can find it located at [https://github.com/denismakogon/gocv-alpine](https://github.com/denismakogon/gocv-alpine).
## Raspbian
### Installation
We have a special installation for the Raspberry Pi that includes some hardware optimizations. You can use `make` to install OpenCV 4.2.0 with the handy `Makefile` included with this repo. If you have already installed OpenCV, you do not need to do so again. The installation performed by the `Makefile` is minimal, so it may remove OpenCV options such as Python or Java wrappers if you have already installed OpenCV some other way.
#### Quick Install
The following commands should do everything to download and install OpenCV 4.2.0 on Raspbian:

    cd $GOPATH/src/gocv.io/x/gocv
    make install_raspi

If it works correctly, at the end of the entire process, the following message should be displayed:

    gocv version: 0.22.0
    opencv lib version: 4.2.0
That's it, now you are ready to use GoCV.
## macOS
### Installation
You can install OpenCV 4.2.0 using Homebrew.
If you already have an earlier version of OpenCV (3.4.x) installed, you should probably remove it before installing the new version:

    brew uninstall opencv

You can then install OpenCV 4.2.0:

    brew install opencv
### pkgconfig Installation
pkg-config is used to determine the correct flags for compiling and linking OpenCV.
You can install it by using Homebrew:

    brew install pkgconfig
### Verifying the installation
To verify your installation you can run one of the included examples.
First, change the current directory to the location of the GoCV repo:

    cd $GOPATH/src/gocv.io/x/gocv

Now you should be able to build or run any of the examples:

    go run ./cmd/version/main.go

The version program should output the following:

    gocv version: 0.22.0
    opencv lib version: 4.2.0
### Cache builds
If you are running a version of Go older than v1.10 and not modifying GoCV source, precompile the GoCV package to significantly decrease your build times:

    go install gocv.io/x/gocv
### Custom Environment
By default, pkg-config is used to determine the correct flags for compiling and linking OpenCV. This behavior can be disabled by supplying `-tags customenv` when building/running your application. When building with this tag you will need to supply the CGO environment variables yourself.
For example:

    export CGO_CXXFLAGS="--std=c++11"
    export CGO_CPPFLAGS="-I/usr/local/Cellar/opencv/4.2.0/include"
    export CGO_LDFLAGS="-L/usr/local/Cellar/opencv/4.2.0/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_photo -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_dnn -lopencv_plot -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ml -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_flann -lopencv_xobjdetect -lopencv_imgcodecs -lopencv_objdetect -lopencv_xphoto -lopencv_imgproc -lopencv_core"

Please note that you will need to run these 3 lines once in your current session to set up the needed environment variables. Once you have done so, you can execute code that uses GoCV with your custom environment like this:

    go run -tags customenv ./cmd/version/main.go
## Windows
### Installation
The following assumes that you are running a 64-bit version of Windows 10.
In order to build and install OpenCV 4.2.0 on Windows, you must first download and install MinGW-W64 and CMake, as follows.
#### MinGW-W64
Download and run the MinGW-W64 compiler installer from [https://sourceforge.net/projects/mingw-w64/?source=typ_redirect](https://sourceforge.net/projects/mingw-w64/?source=typ_redirect).
The latest version of the MinGW-W64 toolchain is `7.3.0`, but any version from `7.X` on should work.
Choose the options for "posix" threads, and for "seh" exception handling, then install to the default location `c:\Program Files\mingw-w64\x86_64-7.3.0-posix-seh-rt_v5-rev2`.
Add the `C:\Program Files\mingw-w64\x86_64-7.3.0-posix-seh-rt_v5-rev2\mingw64\bin` path to your System Path.
#### CMake
Download and install CMake from [https://cmake.org/download/](https://cmake.org/download/) to the default location. The CMake installer will add CMake to your system path.
#### OpenCV 4.2.0 and OpenCV Contrib Modules
The following commands should do everything to download and install OpenCV 4.2.0 on Windows:

    chdir %GOPATH%\src\gocv.io\x\gocv
    win_build_opencv.cmd
It might take up to one hour.
Last, add `C:\opencv\build\install\x64\mingw\bin` to your System Path.
### Verifying the installation
Change the current directory to the location of the GoCV repo:

    chdir %GOPATH%\src\gocv.io\x\gocv

Now you should be able to build or run any of the command examples:

    go run cmd\version\main.go

The version program should output the following:

    gocv version: 0.22.0
    opencv lib version: 4.2.0
That's it, now you are ready to use GoCV.
### Cache builds
If you are running a version of Go older than v1.10 and not modifying GoCV source, precompile the GoCV package to significantly decrease your build times:

    go install gocv.io/x/gocv
### Custom Environment
By default, OpenCV is expected to be in `C:\opencv\build\install\include`. This behavior can be disabled by supplying `-tags customenv` when building/running your application. When building with this tag you will need to supply the CGO environment variables yourself.
Because OpenCV includes the version number in the names of the DLLs it produces, using this method is required if you're using a different version of OpenCV.
For example:

    set CGO_CXXFLAGS="--std=c++11"
    set CGO_CPPFLAGS=-IC:\opencv\build\install\include
    set CGO_LDFLAGS=-LC:\opencv\build\install\x64\mingw\lib -lopencv_core420 -lopencv_face420 -lopencv_videoio420 -lopencv_imgproc420 -lopencv_highgui420 -lopencv_imgcodecs420 -lopencv_objdetect420 -lopencv_features2d420 -lopencv_video420 -lopencv_dnn420 -lopencv_xfeatures2d420 -lopencv_plot420 -lopencv_tracking420 -lopencv_img_hash420

Please note that you will need to run these 3 lines once in your current session to set up the needed environment variables. Once you have done so, you can execute code that uses GoCV with your custom environment like this:

    go run -tags customenv cmd\version\main.go
## Android
There is some work in progress for running GoCV on Android using Gomobile. For information on how to install OpenCV/GoCV for Android, please see:
https://gist.github.com/ogero/c19458cf64bd3e91faae85c3ac887481
See original discussion here:
https://github.com/hybridgroup/gocv/issues/235
## Profiling
Since memory allocations for images in GoCV are done through C-based code, the Go garbage collector will not clean all resources associated with a `Mat`. As a result, any `Mat` created *must* be closed to avoid memory leaks.
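For example, the usual pattern is to pair each allocation with a deferred `Close` (a minimal sketch using `gocv.NewMat`):

```go
img := gocv.NewMat()
defer img.Close() // releases the C-allocated memory when the function returns
```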
To ease the detection and repair of the resource leaks, GoCV provides a `Mat` profiler that records when each `Mat` is created and closed. Each time a `Mat` is allocated, the stack trace is added to the profile. When it is closed, the stack trace is removed. See the [runtime/pprof documentation](https://golang.org/pkg/runtime/pprof/#Profile).
In order to include the MatProfile custom profiler, you MUST build or run your application or tests using the `-tags matprofile` build tag. For example:

    go run -tags matprofile cmd/version/main.go
You can get the profile's count at any time using:
```go
gocv.MatProfile.Count()
```
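For example, a leak check in a test (run with `-tags matprofile`) could compare the count before and after the code under test. This is a minimal sketch, and `TestNoMatLeaks` is a hypothetical name:

```go
func TestNoMatLeaks(t *testing.T) {
	before := gocv.MatProfile.Count()

	img := gocv.NewMat()
	img.Close()

	if after := gocv.MatProfile.Count(); after != before {
		t.Errorf("leaked %d Mats", after-before)
	}
}
```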
You can display the current entries (the stack traces) with:
```go
var b bytes.Buffer
gocv.MatProfile.WriteTo(&b, 1)
fmt.Print(b.String())
```
This can be very helpful to track down a leak. For example, suppose you have
the following nonsense program:
```go
package main

import (
	"bytes"
	"fmt"

	"gocv.io/x/gocv"
)

func leak() {
	gocv.NewMat()
}

func main() {
	fmt.Printf("initial MatProfile count: %v\n", gocv.MatProfile.Count())
	leak()

	fmt.Printf("final MatProfile count: %v\n", gocv.MatProfile.Count())
	var b bytes.Buffer
	gocv.MatProfile.WriteTo(&b, 1)
	fmt.Print(b.String())
}
```
Running this program produces the following output:
```
initial MatProfile count: 0
final MatProfile count: 1
gocv.io/x/gocv.Mat profile: total 1
1 @ 0x40b936c 0x40b93b7 0x40b94e2 0x40b95af 0x402cd87 0x40558e1
# 0x40b936b gocv.io/x/gocv.newMat+0x4b /go/src/gocv.io/x/gocv/core.go:153
# 0x40b93b6 gocv.io/x/gocv.NewMat+0x26 /go/src/gocv.io/x/gocv/core.go:159
# 0x40b94e1 main.leak+0x21 /go/src/github.com/dougnd/gocvprofexample/main.go:11
# 0x40b95ae main.main+0xae /go/src/github.com/dougnd/gocvprofexample/main.go:16
# 0x402cd86 runtime.main+0x206 /usr/local/Cellar/go/1.11.1/libexec/src/runtime/proc.go:201
```
We can see that this program would leak memory. As it exited, it had one `Mat` that was never closed. The stack trace points to exactly which line the allocation happened on (line 11, the `gocv.NewMat()`).
Furthermore, if the program is a long running process or if GoCV is being used on a web server, it may be helpful to install the HTTP interface (see the [net/http/pprof documentation](https://golang.org/pkg/net/http/pprof/)). For example:
```go
package main

import (
	"net/http"
	_ "net/http/pprof"
	"time"

	"gocv.io/x/gocv"
)

func leak() {
	gocv.NewMat()
}

func main() {
	go func() {
		ticker := time.NewTicker(time.Second)
		for {
			<-ticker.C
			leak()
		}
	}()

	http.ListenAndServe("localhost:6060", nil)
}
```
This will leak a `Mat` once per second. You can see the current profile count and stack traces by going to the installed HTTP debug interface: [http://localhost:6060/debug/pprof/gocv.io/x/gocv.Mat](http://localhost:6060/debug/pprof/gocv.io/x/gocv.Mat?debug=1).
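You can also fetch the same data from the command line, for example with `curl`:

```shell
curl "http://localhost:6060/debug/pprof/gocv.io/x/gocv.Mat?debug=1"
```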
## How to contribute
Please take a look at our [CONTRIBUTING.md](./CONTRIBUTING.md) document to understand our contribution guidelines.
Then check out our [ROADMAP.md](./ROADMAP.md) document to know what to work on next.
## Why this project exists
The [https://github.com/go-opencv/go-opencv](https://github.com/go-opencv/go-opencv) package for Go and OpenCV does not support any version above OpenCV 2.x, and work on adding support for OpenCV 3 had stalled for over a year, mostly due to the complexity of [SWIG](http://swig.org/). That is why we started this project.
The GoCV package uses a C-style wrapper around the OpenCV 4 C++ classes to avoid having to deal with applying SWIG to a huge existing codebase. The mappings are intended to match as closely as possible to the original OpenCV project structure, to make it easier to find things, and to be able to figure out where to add support to GoCV for additional OpenCV image filters, algorithms, and other features.
For example, the [OpenCV `videoio` module](https://github.com/opencv/opencv/tree/master/modules/videoio) wrappers can be found in the GoCV package in the `videoio.*` files.
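As a condensed sketch of that layering (based on the `Mat_New` wrapper that appears in `core.cpp` and the `NewMat` function from `core.go`), each binding is a thin Go function that calls a C-compatible wrapper, which in turn calls into the C++ API:

```go
// In core.cpp, a C-style function wraps the C++ constructor:
//     Mat Mat_New() {
//         return new cv::Mat();
//     }
// core.go then exposes it to Go through cgo:

// NewMat returns a new empty Mat.
func NewMat() Mat {
	return newMat(C.Mat_New())
}
```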
This package was inspired by the original https://github.com/go-opencv/go-opencv project, the blog post https://medium.com/@peterleyssens/using-opencv-3-from-golang-5510c312a3c and the repo at https://github.com/sensorbee/opencv. Thank you all!
## License
Licensed under the Apache 2.0 license. Copyright (c) 2017-2019 The Hybrid Group.
Logo generated by GopherizeMe - https://gopherize.me
262
vendor/gocv.io/x/gocv/ROADMAP.md generated vendored Normal file
@ -0,0 +1,262 @@
# Roadmap
This is a list of all of the functionality areas within OpenCV, and OpenCV Contrib.
Any section listed with an "X" means that all of the relevant OpenCV functionality has been wrapped for use within GoCV.
Any section listed with **WORK STARTED** indicates that some work has been done, but not all functionality in that module has been completed. If any functions are listed under a section marked **WORK STARTED**, those functions still require a wrapper to be implemented.
Any section that is simply listed indicates that, so far, no work has been done on that module.
Your pull requests will be greatly appreciated!
## Modules list
- [ ] **core. Core functionality - WORK STARTED**
- [ ] **Basic structures - WORK STARTED**
- [ ] **Operations on arrays - WORK STARTED**. The following functions still need implementation:
- [ ] [Mahalanobis](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga4493aee129179459cbfc6064f051aa7d)
- [ ] [mixChannels](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga51d768c270a1cdd3497255017c4504be)
- [ ] [mulTransposed](https://docs.opencv.org/master/d2/de8/group__core__array.html#gadc4e49f8f7a155044e3be1b9e3b270ab)
- [ ] [PCABackProject](https://docs.opencv.org/master/d2/de8/group__core__array.html#gab26049f30ee8e94f7d69d82c124faafc)
- [ ] [PCACompute](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga4e2073c7311f292a0648f04c37b73781)
- [ ] [PCAProject](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga6b9fbc7b3a99ebfd441bbec0a6bc4f88)
- [ ] [PSNR](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga07aaf34ae31d226b1b847d8bcff3698f)
- [ ] [randn](https://docs.opencv.org/master/d2/de8/group__core__array.html#gaeff1f61e972d133a04ce3a5f81cf6808)
- [ ] [randShuffle](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga6a789c8a5cb56c6dd62506179808f763)
- [ ] [randu](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga1ba1026dca0807b27057ba6a49d258c0)
- [x] [setIdentity](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga388d7575224a4a277ceb98ccaa327c99)
- [ ] [setRNGSeed](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga757e657c037410d9e19e819569e7de0f)
- [ ] [SVBackSubst](https://docs.opencv.org/master/d2/de8/group__core__array.html#gab4e620e6fc6c8a27bb2be3d50a840c0b)
- [ ] [SVDecomp](https://docs.opencv.org/master/d2/de8/group__core__array.html#gab477b5b7b39b370bb03e75b19d2d5109)
- [ ] [theRNG](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga75843061d150ad6564b5447e38e57722)
- [ ] XML/YAML Persistence
- [ ] **Clustering - WORK STARTED**. The following functions still need implementation:
- [ ] [partition](https://docs.opencv.org/master/d5/d38/group__core__cluster.html#ga2037c989e69b499c1aa271419f3a9b34)
- [ ] Utility and system functions and macros
- [ ] OpenGL interoperability
- [ ] Intel IPP Asynchronous C/C++ Converters
- [ ] Optimization Algorithms
- [ ] OpenCL support
- [ ] **imgproc. Image processing - WORK STARTED**
- [ ] **Image Filtering - WORK STARTED** The following functions still need implementation:
- [ ] [buildPyramid](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gacfdda2bc1ac55e96de7e9f0bce7238c0)
- [ ] [getDerivKernels](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga6d6c23f7bd3f5836c31cfae994fc4aea)
- [ ] [getGaborKernel](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gae84c92d248183bd92fa713ce51cc3599)
- [ ] [getGaussianKernel](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gac05a120c1ae92a6060dd0db190a61afa)
- [ ] [morphologyExWithParams](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga67493776e3ad1a3df63883829375201f)
- [ ] [pyrMeanShiftFiltering](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga9fabdce9543bd602445f5db3827e4cc0)
- [ ] **Geometric Image Transformations - WORK STARTED** The following functions still need implementation:
- [ ] [convertMaps](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga9156732fa8f01be9ebd1a194f2728b7f)
- [ ] [getAffineTransform](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga8f6d378f9f8eebb5cb55cd3ae295a999)
- [ ] [getDefaultNewCameraMatrix](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga744529385e88ef7bc841cbe04b35bfbf)
- [X] [getRectSubPix](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga77576d06075c1a4b6ba1a608850cd614)
- [ ] [initUndistortRectifyMap](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a)
- [ ] [initWideAngleProjMap](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#gaceb049ec48898d1dadd5b50c604429c8)
- [ ] [undistort](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d)
- [ ] [undistortPoints](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga55c716492470bfe86b0ee9bf3a1f0f7e)
- [ ] **Miscellaneous Image Transformations - WORK STARTED** The following functions still need implementation:
- [ ] [cvtColorTwoPlane](https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#ga8e873314e72a1a6c0252375538fbf753)
- [ ] [floodFill](https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#gaf1f55a048f8a45bc3383586e80b1f0d0)
- [ ] **Drawing Functions - WORK STARTED** The following functions still need implementation:
- [X] [clipLine](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#gaf483cb46ad6b049bc35ec67052ef1c2c)
- [ ] [drawMarker](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga482fa7b0f578fcdd8a174904592a6250)
- [ ] [ellipse2Poly](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga727a72a3f6a625a2ae035f957c61051f)
- [ ] [fillConvexPoly](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga906aae1606ea4ed2f27bec1537f6c5c2)
- [ ] [getFontScaleFromHeight](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga442ff925c1a957794a1309e0ed3ba2c3)
- [ ] [polylines](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga444cb8a2666320f47f09d5af08d91ffb)
- [ ] ColorMaps in OpenCV
- [ ] Planar Subdivision
- [ ] **Histograms - WORK STARTED** The following functions still need implementation:
- [ ] [EMD](https://docs.opencv.org/master/d6/dc7/group__imgproc__hist.html#ga902b8e60cc7075c8947345489221e0e0)
- [ ] [wrapperEMD](https://docs.opencv.org/master/d6/dc7/group__imgproc__hist.html#ga31fdda0864e64ca6b9de252a2611758b)
- [ ] **Structural Analysis and Shape Descriptors - WORK STARTED** The following functions still need implementation:
- [ ] [fitEllipse](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gaf259efaad93098103d6c27b9e4900ffa)
- [ ] [fitEllipseAMS](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga69e90cda55c4e192a8caa0b99c3e4550)
- [ ] [fitEllipseDirect](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga6421884fd411923a74891998bbe9e813)
- [ ] [HuMoments](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944)
- [ ] [intersectConvexConvex](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga8e840f3f3695613d32c052bec89e782c)
- [ ] [isContourConvex](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga8abf8010377b58cbc16db6734d92941b)
- [ ] [matchShapes](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317)
- [ ] [minEnclosingTriangle](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga1513e72f6bbdfc370563664f71e0542f)
- [ ] [pointPolygonTest](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga1a539e8db2135af2566103705d7a5722)
- [ ] [rotatedRectangleIntersection](https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#ga8740e7645628c59d238b0b22c2abe2d4)
- [ ] Motion Analysis and Object Tracking
- [ ] **Feature Detection - WORK STARTED** The following functions still need implementation:
- [ ] [cornerEigenValsAndVecs](https://docs.opencv.org/master/dd/d1a/group__imgproc__feature.html#ga4055896d9ef77dd3cacf2c5f60e13f1c)
- [ ] [cornerHarris](https://docs.opencv.org/master/dd/d1a/group__imgproc__feature.html#gac1fc3598018010880e370e2f709b4345)
- [ ] [cornerMinEigenVal](https://docs.opencv.org/master/dd/d1a/group__imgproc__feature.html#ga3dbce297c1feb859ee36707e1003e0a8)
- [ ] [createLineSegmentDetector](https://docs.opencv.org/master/dd/d1a/group__imgproc__feature.html#ga6b2ad2353c337c42551b521a73eeae7d)
- [ ] [preCornerDetect](https://docs.opencv.org/master/dd/d1a/group__imgproc__feature.html#gaa819f39b5c994871774081803ae22586)
- [X] **Object Detection**
- [X] **imgcodecs. Image file reading and writing.**
- [X] **videoio. Video I/O**
- [X] **highgui. High-level GUI**
- [ ] **video. Video Analysis - WORK STARTED**
- [X] **Motion Analysis**
- [ ] **Object Tracking - WORK STARTED** The following functions still need implementation:
- [ ] [buildOpticalFlowPyramid](https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga86640c1c470f87b2660c096d2b22b2ce)
- [ ] [estimateRigidTransform](https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga762cbe5efd52cf078950196f3c616d48)
- [ ] [findTransformECC](https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga7ded46f9a55c0364c92ccd2019d43e3a)
- [ ] [meanShift](https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga7ded46f9a55c0364c92ccd2019d43e3a)
- [ ] [CamShift](https://docs.opencv.org/master/dc/d6b/group__video__track.html#gaef2bd39c8356f423124f1fe7c44d54a1)
- [ ] [DualTVL1OpticalFlow](https://docs.opencv.org/master/dc/d47/classcv_1_1DualTVL1OpticalFlow.html)
- [ ] [FarnebackOpticalFlow](https://docs.opencv.org/master/de/d9e/classcv_1_1FarnebackOpticalFlow.html)
- [ ] [KalmanFilter](https://docs.opencv.org/master/dd/d6a/classcv_1_1KalmanFilter.html)
- [ ] [SparsePyrLKOpticalFlow](https://docs.opencv.org/master/d7/d08/classcv_1_1SparsePyrLKOpticalFlow.html)
- [ ] **calib3d. Camera Calibration and 3D Reconstruction - WORK STARTED**. The following functions still need implementation:
- [ ] **Camera Calibration - WORK STARTED** The following functions still need implementation:
- [ ] [calibrateCamera](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [calibrateCameraRO](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [calibrateHandEye](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [calibrationMatrixValues](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [checkChessboard](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [composeRT](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [computeCorrespondEpilines](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [convertPointsFromHomogeneous](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [convertPointsHomogeneous](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [convertPointsToHomogeneous](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [correctMatches](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [decomposeEssentialMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [decomposeHomographyMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [decomposeProjectionMatrix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [drawChessboardCorners](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [drawFrameAxes](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [estimateAffine2D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [estimateAffine3D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [estimateAffinePartial2D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [filterHomographyDecompByVisibleRefpoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [filterSpeckles](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [find4QuadCornerSubpix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findChessboardCorners](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findChessboardCornersSB](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findCirclesGrid](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findEssentialMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findFundamentalMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findHomography](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getDefaultNewCameraMatrix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getOptimalNewCameraMatrix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getValidDisparityROI](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [initCameraMatrix2D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [initUndistortRectifyMap](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [initWideAngleProjMap](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [matMulDeriv](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [projectPoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [recoverPose](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [rectify3Collinear](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [reprojectImageTo3D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [Rodrigues](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [RQDecomp3x3](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [sampsonDistance](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solveP3P](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solvePnP](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solvePnPGeneric](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solvePnPRansac](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solvePnPRefineLM](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [solvePnPRefineVVS](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [stereoCalibrate](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [stereoRectify](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [stereoRectifyUncalibrated](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [triangulatePoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [x] [undistort](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [undistortPoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [validateDisparity](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] **Fisheye - WORK STARTED** The following functions still need implementation:
- [ ] [calibrate](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gad626a78de2b1dae7489e152a5a5a89e1)
- [ ] [distortPoints](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga75d8877a98e38d0b29b6892c5f8d7765)
- [ ] [estimateNewCameraMatrixForUndistortRectify](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga384940fdf04c03e362e94b6eb9b673c9)
- [ ] [projectPoints](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gab1ad1dc30c42ee1a50ce570019baf2c4)
- [ ] [stereoCalibrate](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gadbb3a6ca6429528ef302c784df47949b)
- [ ] [stereoRectify](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gac1af58774006689056b0f2ef1db55ecc)
- [ ] [undistortPoints](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gab738cdf90ceee97b2b52b0d0e7511541)
- [ ] **features2d. 2D Features Framework - WORK STARTED**
- [X] **Feature Detection and Description**
- [ ] **Descriptor Matchers - WORK STARTED** The following functions still need implementation:
- [ ] [FlannBasedMatcher](https://docs.opencv.org/master/dc/de2/classcv_1_1FlannBasedMatcher.html)
- [ ] **Drawing Function of Keypoints and Matches - WORK STARTED** The following function still needs implementation:
- [ ] [drawMatches](https://docs.opencv.org/master/d4/d5d/group__features2d__draw.html#ga7421b3941617d7267e3f2311582f49e1)
- [ ] Object Categorization
- [ ] [BOWImgDescriptorExtractor](https://docs.opencv.org/master/d2/d6b/classcv_1_1BOWImgDescriptorExtractor.html)
- [ ] [BOWKMeansTrainer](https://docs.opencv.org/master/d4/d72/classcv_1_1BOWKMeansTrainer.html)
- [X] **objdetect. Object Detection**
- [ ] **dnn. Deep Neural Network module - WORK STARTED** The following functions still need implementation:
- [ ] [NMSBoxes](https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga9d118d70a1659af729d01b10233213ee)
- [ ] ml. Machine Learning
- [ ] flann. Clustering and Search in Multi-Dimensional Spaces
- [ ] photo. Computational Photography
- [ ] stitching. Images stitching
- [ ] cudaarithm. Operations on Matrices
- [ ] cudabgsegm. Background Segmentation
- [ ] cudacodec. Video Encoding/Decoding
- [ ] cudafeatures2d. Feature Detection and Description
- [ ] cudafilters. Image Filtering
- [ ] cudaimgproc. Image Processing
- [ ] cudalegacy. Legacy support
- [ ] cudaobjdetect. Object Detection
- [ ] **cudaoptflow. Optical Flow - WORK STARTED**
- [ ] [BroxOpticalFlow](https://docs.opencv.org/master/d7/d18/classcv_1_1cuda_1_1BroxOpticalFlow.html)
- [ ] [DenseOpticalFlow](https://docs.opencv.org/master/d6/d4a/classcv_1_1cuda_1_1DenseOpticalFlow.html)
- [ ] [DensePyrLKOpticalFlow](https://docs.opencv.org/master/d0/da4/classcv_1_1cuda_1_1DensePyrLKOpticalFlow.html)
- [ ] [FarnebackOpticalFlow](https://docs.opencv.org/master/d9/d30/classcv_1_1cuda_1_1FarnebackOpticalFlow.html)
- [ ] [NvidiaHWOpticalFlow](https://docs.opencv.org/master/d5/d26/classcv_1_1cuda_1_1NvidiaHWOpticalFlow.html)
- [ ] [NvidiaOpticalFlow_1_0](https://docs.opencv.org/master/dc/d9d/classcv_1_1cuda_1_1NvidiaOpticalFlow__1__0.html)
- [ ] [SparseOpticalFlow](https://docs.opencv.org/master/d5/dcf/classcv_1_1cuda_1_1SparseOpticalFlow.html)
- [ ] **[SparsePyrLKOpticalFlow](https://docs.opencv.org/master/d7/d05/classcv_1_1cuda_1_1SparsePyrLKOpticalFlow.html) - WORK STARTED**
- [ ] cudastereo. Stereo Correspondence
- [X] **cudawarping. Image Warping**
- [ ] cudev. Device layer
- [ ] shape. Shape Distance and Matching
- [ ] superres. Super Resolution
- [ ] videostab. Video Stabilization
- [ ] viz. 3D Visualizer
## Contrib modules list
- [ ] aruco. ArUco Marker Detection
- [ ] **bgsegm. Improved Background-Foreground Segmentation Methods - WORK STARTED**
- [ ] bioinspired. Biologically inspired vision models and derived tools
- [ ] ccalib. Custom Calibration Pattern for 3D reconstruction
- [ ] cnn_3dobj. 3D object recognition and pose estimation API
- [ ] cvv. GUI for Interactive Visual Debugging of Computer Vision Programs
- [ ] datasets. Framework for working with different datasets
- [ ] dnn_modern. Deep Learning Modern Module
- [ ] dpm. Deformable Part-based Models
- [ ] **face. Face Recognition - WORK STARTED**
- [ ] freetype. Drawing UTF-8 strings with freetype/harfbuzz
- [ ] fuzzy. Image processing based on fuzzy mathematics
- [ ] hdf. Hierarchical Data Format I/O routines
- [X] **img_hash. The module brings implementations of different image hashing algorithms.**
- [ ] line_descriptor. Binary descriptors for lines extracted from an image
- [ ] matlab. MATLAB Bridge
- [ ] optflow. Optical Flow Algorithms
- [ ] phase_unwrapping. Phase Unwrapping API
- [ ] plot. Plot function for Mat data
- [ ] reg. Image Registration
- [ ] rgbd. RGB-Depth Processing
- [ ] saliency. Saliency API
- [ ] sfm. Structure From Motion
- [ ] stereo. Stereo Correspondence Algorithms
- [ ] structured_light. Structured Light API
- [ ] surface_matching. Surface Matching
- [ ] text. Scene Text Detection and Recognition
- [ ] **tracking. Tracking API - WORK STARTED**
- [ ] **xfeatures2d. Extra 2D Features Framework - WORK STARTED**
- [ ] ximgproc. Extended Image Processing
- [ ] xobjdetect. Extended object detection
- [ ] xphoto. Additional photo processing algorithms
35
vendor/gocv.io/x/gocv/appveyor.yml generated vendored Normal file
@ -0,0 +1,35 @@
version: "{build}"

clone_folder: c:\gopath\src\gocv.io\x\gocv

platform:
  - MinGW_x64

environment:
  GOPATH: c:\gopath
  GOROOT: c:\go
  GOVERSION: 1.13
  TEST_EXTERNAL: 1
  APPVEYOR_SAVE_CACHE_ON_ERROR: true

cache:
  - C:\opencv -> appveyor_build_opencv.cmd

install:
  - if not exist "C:\opencv" appveyor_build_opencv.cmd
  - set PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\7-Zip;C:\Program Files\Microsoft\Web Platform Installer\;C:\Tools\PsTools;C:\Program Files (x86)\CMake\bin;C:\go\bin;C:\Tools\NuGet;C:\Program Files\LLVM\bin;C:\Tools\curl\bin;C:\ProgramData\chocolatey\bin;C:\Program Files (x86)\Yarn\bin;C:\Users\appveyor\AppData\Local\Yarn\bin;C:\Program Files\AppVeyor\BuildAgent\
  - set PATH=%PATH%;C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin
  - set PATH=%PATH%;C:\Tools\GitVersion;C:\Program Files\Git LFS;C:\Program Files\Git\cmd;C:\Program Files\Git\usr\bin;C:\opencv\build\install\x64\mingw\bin;
  - echo %PATH%
  - echo %GOPATH%
  - go version
  - cd c:\gopath\src\gocv.io\x\gocv
  - go get -d .
  - set GOCV_CAFFE_TEST_FILES=C:\opencv\testdata
  - set GOCV_TENSORFLOW_TEST_FILES=C:\opencv\testdata
  - set OPENCV_ENABLE_NONFREE=ON
  - go env

build_script:
  - go test -tags matprofile -v .
  - go test -tags matprofile -v ./contrib
23
vendor/gocv.io/x/gocv/appveyor_build_opencv.cmd generated vendored Normal file
@ -0,0 +1,23 @@
if not exist "C:\opencv" mkdir "C:\opencv"
if not exist "C:\opencv\build" mkdir "C:\opencv\build"
if not exist "C:\opencv\testdata" mkdir "C:\opencv\testdata"
appveyor DownloadFile https://github.com/opencv/opencv/archive/4.2.0.zip -FileName c:\opencv\opencv-4.2.0.zip
7z x c:\opencv\opencv-4.2.0.zip -oc:\opencv -y
del c:\opencv\opencv-4.2.0.zip /q
appveyor DownloadFile https://github.com/opencv/opencv_contrib/archive/4.2.0.zip -FileName c:\opencv\opencv_contrib-4.2.0.zip
7z x c:\opencv\opencv_contrib-4.2.0.zip -oc:\opencv -y
del c:\opencv\opencv_contrib-4.2.0.zip /q
cd C:\opencv\build
set PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\7-Zip;C:\Program Files\Microsoft\Web Platform Installer\;C:\Tools\PsTools;C:\Program Files (x86)\CMake\bin;C:\go\bin;C:\Tools\NuGet;C:\Program Files\LLVM\bin;C:\Tools\curl\bin;C:\ProgramData\chocolatey\bin;C:\Program Files (x86)\Yarn\bin;C:\Users\appveyor\AppData\Local\Yarn\bin;C:\Program Files\AppVeyor\BuildAgent\
set PATH=%PATH%;C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin
dir C:\opencv
cmake C:\opencv\opencv-4.2.0 -G "MinGW Makefiles" -BC:\opencv\build -DENABLE_CXX11=ON -DOPENCV_EXTRA_MODULES_PATH=C:\opencv\opencv_contrib-4.2.0\modules -DBUILD_SHARED_LIBS=ON -DWITH_IPP=OFF -DWITH_MSMF=OFF -DBUILD_EXAMPLES=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_java=OFF -DBUILD_opencv_python=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_DOCS=OFF -DENABLE_PRECOMPILED_HEADERS=OFF -DBUILD_opencv_saliency=OFF -DCPU_DISPATCH= -DBUILD_opencv_gapi=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON -DWITH_OPENCL_D3D11_NV=OFF -Wno-dev
mingw32-make -j%NUMBER_OF_PROCESSORS%
mingw32-make install
appveyor DownloadFile https://raw.githubusercontent.com/opencv/opencv_extra/master/testdata/dnn/bvlc_googlenet.prototxt -FileName C:\opencv\testdata\bvlc_googlenet.prototxt
appveyor DownloadFile http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -FileName C:\opencv\testdata\bvlc_googlenet.caffemodel
appveyor DownloadFile https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip -FileName C:\opencv\testdata\inception5h.zip
7z x C:\opencv\testdata\inception5h.zip -oC:\opencv\testdata tensorflow_inception_graph.pb -y
rmdir c:\opencv\opencv-4.2.0 /s /q
rmdir c:\opencv\opencv_contrib-4.2.0 /s /q
28
vendor/gocv.io/x/gocv/asyncarray.cpp generated vendored Normal file
@ -0,0 +1,28 @@
// +build openvino

#include <string.h>
#include "asyncarray.h"

// AsyncArray_New creates a new empty AsyncArray
AsyncArray AsyncArray_New() {
    return new cv::AsyncArray();
}

// AsyncArray_Close deletes an existing AsyncArray
void AsyncArray_Close(AsyncArray a) {
    delete a;
}

const char* AsyncArray_GetAsync(AsyncArray async_out, Mat out) {
    try {
        async_out->get(*out);
    } catch(cv::Exception ex) {
        return ex.err.c_str();
    }
    return "";
}

AsyncArray Net_forwardAsync(Net net, const char* outputName) {
    return new cv::AsyncArray(net->forwardAsync(outputName));
}
52
vendor/gocv.io/x/gocv/asyncarray.go generated vendored Normal file
@ -0,0 +1,52 @@
// +build openvino

package gocv

import (
	"errors"
)

/*
#include <stdlib.h>
#include "dnn.h"
#include "asyncarray.h"
#include "core.h"
*/
import "C"

type AsyncArray struct {
	p C.AsyncArray
}

// NewAsyncArray returns a new empty AsyncArray.
func NewAsyncArray() AsyncArray {
	return newAsyncArray(C.AsyncArray_New())
}

// Ptr returns the AsyncArray's underlying object pointer.
func (a *AsyncArray) Ptr() C.AsyncArray {
	return a.p
}

// Get returns the Mat produced by the asynchronous operation.
func (m *AsyncArray) Get(mat *Mat) error {
	result := C.AsyncArray_GetAsync(m.p, mat.p)
	err := C.GoString(result)
	if len(err) > 0 {
		return errors.New(err)
	}
	return nil
}

// newAsyncArray returns a new AsyncArray from a C AsyncArray
func newAsyncArray(p C.AsyncArray) AsyncArray {
	return AsyncArray{p: p}
}

// Close the AsyncArray object.
func (a *AsyncArray) Close() error {
	C.AsyncArray_Close(a.p)
	a.p = nil
	return nil
}
23
vendor/gocv.io/x/gocv/asyncarray.h generated vendored Normal file
@ -0,0 +1,23 @@
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
#include "dnn.h"
#ifdef __cplusplus
typedef cv::AsyncArray* AsyncArray;
#else
typedef void* AsyncArray;
#endif
AsyncArray AsyncArray_New();
const char* AsyncArray_GetAsync(AsyncArray async_out,Mat out);
void AsyncArray_Close(AsyncArray a);
AsyncArray Net_forwardAsync(Net net, const char* outputName);
#ifdef __cplusplus
}
#endif
33
vendor/gocv.io/x/gocv/calib3d.cpp generated vendored Normal file
@ -0,0 +1,33 @@
#include "calib3d.h"
void Fisheye_UndistortImage(Mat distorted, Mat undistorted, Mat k, Mat d) {
cv::fisheye::undistortImage(*distorted, *undistorted, *k, *d);
}
void Fisheye_UndistortImageWithParams(Mat distorted, Mat undistorted, Mat k, Mat d, Mat knew, Size size) {
cv::Size sz(size.width, size.height);
cv::fisheye::undistortImage(*distorted, *undistorted, *k, *d, *knew, sz);
}
void InitUndistortRectifyMap(Mat cameraMatrix,Mat distCoeffs,Mat r,Mat newCameraMatrix,Size size,int m1type,Mat map1,Mat map2) {
cv::Size sz(size.width, size.height);
cv::initUndistortRectifyMap(*cameraMatrix,*distCoeffs,*r,*newCameraMatrix,sz,m1type,*map1,*map2);
}
Mat GetOptimalNewCameraMatrixWithParams(Mat cameraMatrix,Mat distCoeffs,Size size,double alpha,Size newImgSize,Rect* validPixROI,bool centerPrincipalPoint) {
cv::Size sz(size.width, size.height);
cv::Size newSize(newImgSize.width, newImgSize.height);
cv::Rect rect(validPixROI->x,validPixROI->y,validPixROI->width,validPixROI->height);
cv::Mat* mat = new cv::Mat(cv::getOptimalNewCameraMatrix(*cameraMatrix,*distCoeffs,sz,alpha,newSize,&rect,centerPrincipalPoint));
validPixROI->x = rect.x;
validPixROI->y = rect.y;
validPixROI->width = rect.width;
validPixROI->height = rect.height;
return mat;
}
void Undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCameraMatrix) {
cv::undistort(*src, *dst, *cameraMatrix, *distCoeffs, *newCameraMatrix);
}
103
vendor/gocv.io/x/gocv/calib3d.go generated vendored Normal file
@ -0,0 +1,103 @@
package gocv

/*
#include <stdlib.h>
#include "calib3d.h"
*/
import "C"
import "image"

// Calib is a wrapper around OpenCV's "Camera Calibration and 3D Reconstruction" of
// Fisheye Camera model
//
// For more details, please see:
// https://docs.opencv.org/trunk/db/d58/group__calib3d__fisheye.html

// CalibFlag value for calibration
type CalibFlag int32

const (
	// CalibUseIntrinsicGuess indicates that cameraMatrix contains valid initial values
	// of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially
	// set to the image center ( imageSize is used), and focal distances are computed
	// in a least-squares fashion.
	CalibUseIntrinsicGuess CalibFlag = 1 << iota

	// CalibRecomputeExtrinsic indicates that extrinsic will be recomputed after each
	// iteration of intrinsic optimization.
	CalibRecomputeExtrinsic

	// CalibCheckCond indicates that the functions will check validity of condition number
	CalibCheckCond

	// CalibFixSkew indicates that skew coefficient (alpha) is set to zero and stays zero
	CalibFixSkew

	// CalibFixK1 indicates that selected distortion coefficients are set to zeros and stay zero
	CalibFixK1

	// CalibFixK2 indicates that selected distortion coefficients are set to zeros and stay zero
	CalibFixK2

	// CalibFixK3 indicates that selected distortion coefficients are set to zeros and stay zero
	CalibFixK3

	// CalibFixK4 indicates that selected distortion coefficients are set to zeros and stay zero
	CalibFixK4

	// CalibFixIntrinsic indicates that K1, K2 and D1, D2 are fixed so that only the R and T matrices are estimated
	CalibFixIntrinsic

	// CalibFixPrincipalPoint indicates that the principal point is not changed during the global optimization.
	// It stays at the center or at a different location specified when CalibUseIntrinsicGuess is set too.
	CalibFixPrincipalPoint
)

// FisheyeUndistortImage transforms an image to compensate for fisheye lens distortion
func FisheyeUndistortImage(distorted Mat, undistorted *Mat, k, d Mat) {
	C.Fisheye_UndistortImage(distorted.Ptr(), undistorted.Ptr(), k.Ptr(), d.Ptr())
}

// FisheyeUndistortImageWithParams transforms an image to compensate for fisheye lens distortion with Knew matrix
func FisheyeUndistortImageWithParams(distorted Mat, undistorted *Mat, k, d, knew Mat, size image.Point) {
	sz := C.struct_Size{
		width:  C.int(size.X),
		height: C.int(size.Y),
	}
	C.Fisheye_UndistortImageWithParams(distorted.Ptr(), undistorted.Ptr(), k.Ptr(), d.Ptr(), knew.Ptr(), sz)
}

// InitUndistortRectifyMap computes the joint undistortion and rectification transformation and represents the result in the form of maps for remap
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a
//
func InitUndistortRectifyMap(cameraMatrix Mat, distCoeffs Mat, r Mat, newCameraMatrix Mat, size image.Point, m1type int, map1 Mat, map2 Mat) {
	sz := C.struct_Size{
		width:  C.int(size.X),
		height: C.int(size.Y),
	}
	C.InitUndistortRectifyMap(cameraMatrix.Ptr(), distCoeffs.Ptr(), r.Ptr(), newCameraMatrix.Ptr(), sz, C.int(m1type), map1.Ptr(), map2.Ptr())
}

// GetOptimalNewCameraMatrixWithParams computes and returns the optimal new camera matrix based on the free scaling parameter.
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga7a6c4e032c97f03ba747966e6ad862b1
//
func GetOptimalNewCameraMatrixWithParams(cameraMatrix Mat, distCoeffs Mat, imageSize image.Point, alpha float64, newImgSize image.Point, centerPrincipalPoint bool) (Mat, image.Rectangle) {
	sz := C.struct_Size{
		width:  C.int(imageSize.X),
		height: C.int(imageSize.Y),
	}
	newSize := C.struct_Size{
		width:  C.int(newImgSize.X),
		height: C.int(newImgSize.Y),
	}
	rt := C.struct_Rect{}
	return newMat(C.GetOptimalNewCameraMatrixWithParams(cameraMatrix.Ptr(), distCoeffs.Ptr(), sz, C.double(alpha), newSize, &rt, C.bool(centerPrincipalPoint))), toRect(rt)
}
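// Undistort transforms an image to compensate for lens distortion.
//
// For further details, please see:
// https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d
//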
func Undistort(src Mat, dst *Mat, cameraMatrix Mat, distCoeffs Mat, newCameraMatrix Mat) {
	C.Undistort(src.Ptr(), dst.Ptr(), cameraMatrix.Ptr(), distCoeffs.Ptr(), newCameraMatrix.Ptr())
}
25
vendor/gocv.io/x/gocv/calib3d.h generated vendored Normal file
@ -0,0 +1,25 @@
#ifndef _OPENCV3_CALIB_H_
#define _OPENCV3_CALIB_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#include <opencv2/calib3d.hpp>
extern "C" {
#endif
#include "core.h"
//Calib
void Fisheye_UndistortImage(Mat distorted, Mat undistorted, Mat k, Mat d);
void Fisheye_UndistortImageWithParams(Mat distorted, Mat undistorted, Mat k, Mat d, Mat knew, Size size);
void InitUndistortRectifyMap(Mat cameraMatrix,Mat distCoeffs,Mat r,Mat newCameraMatrix,Size size,int m1type,Mat map1,Mat map2);
Mat GetOptimalNewCameraMatrixWithParams(Mat cameraMatrix,Mat distCoeffs,Size size,double alpha,Size newImgSize,Rect* validPixROI,bool centerPrincipalPoint);
void Undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCameraMatrix);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_CALIB_H_
27
vendor/gocv.io/x/gocv/calib3d_string.go generated vendored Normal file
@ -0,0 +1,27 @@
package gocv

func (c CalibFlag) String() string {
	switch c {
	case CalibUseIntrinsicGuess:
		return "calib-use-intrinsec-guess"
	case CalibRecomputeExtrinsic:
		return "calib-recompute-extrinsic"
	case CalibCheckCond:
		return "calib-check-cond"
	case CalibFixSkew:
		return "calib-fix-skew"
	case CalibFixK1:
		return "calib-fix-k1"
	case CalibFixK2:
		return "calib-fix-k2"
	case CalibFixK3:
		return "calib-fix-k3"
	case CalibFixK4:
		return "calib-fix-k4"
	case CalibFixIntrinsic:
		return "calib-fix-intrinsic"
	case CalibFixPrincipalPoint:
		return "calib-fix-principal-point"
	}
	return ""
}
13
vendor/gocv.io/x/gocv/cgo.go generated vendored Normal file
@ -0,0 +1,13 @@
// +build !customenv,!openvino

package gocv

// Changes here should be mirrored in contrib/cgo.go and cuda/cgo.go.

/*
#cgo !windows pkg-config: opencv4
#cgo CXXFLAGS: --std=c++11
#cgo windows CPPFLAGS: -IC:/opencv/build/install/include
#cgo windows LDFLAGS: -LC:/opencv/build/install/x64/mingw/lib -lopencv_core420 -lopencv_face420 -lopencv_videoio420 -lopencv_imgproc420 -lopencv_highgui420 -lopencv_imgcodecs420 -lopencv_objdetect420 -lopencv_features2d420 -lopencv_video420 -lopencv_dnn420 -lopencv_xfeatures2d420 -lopencv_plot420 -lopencv_tracking420 -lopencv_img_hash420 -lopencv_calib3d420
*/
import "C"
3
vendor/gocv.io/x/gocv/codecov.yml generated vendored Normal file
@ -0,0 +1,3 @@
ignore:
- "*_string.go"
- "*/*_string.go"
763
vendor/gocv.io/x/gocv/core.cpp generated vendored Normal file
@ -0,0 +1,763 @@
#include "core.h"
#include <string.h>
// Mat_New creates a new empty Mat
Mat Mat_New() {
return new cv::Mat();
}
// Mat_NewWithSize creates a new Mat with a specific size dimension and number of channels.
Mat Mat_NewWithSize(int rows, int cols, int type) {
return new cv::Mat(rows, cols, type, 0.0);
}
// Mat_NewFromScalar creates a new Mat from a Scalar. Intended to be used
// for Mat comparison operation such as InRange.
Mat Mat_NewFromScalar(Scalar ar, int type) {
cv::Scalar c = cv::Scalar(ar.val1, ar.val2, ar.val3, ar.val4);
return new cv::Mat(1, 1, type, c);
}
// Mat_NewWithSizeFromScalar creates a new Mat from a Scalar with a specific size dimension and number of channels
Mat Mat_NewWithSizeFromScalar(Scalar ar, int rows, int cols, int type) {
cv::Scalar c = cv::Scalar(ar.val1, ar.val2, ar.val3, ar.val4);
return new cv::Mat(rows, cols, type, c);
}
Mat Mat_NewFromBytes(int rows, int cols, int type, struct ByteArray buf) {
return new cv::Mat(rows, cols, type, buf.data);
}
Mat Mat_FromPtr(Mat m, int rows, int cols, int type, int prow, int pcol) {
return new cv::Mat(rows, cols, type, m->ptr(prow, pcol));
}
// Mat_Close deletes an existing Mat
void Mat_Close(Mat m) {
delete m;
}
// Mat_Empty tests if a Mat is empty
int Mat_Empty(Mat m) {
return m->empty();
}
// Mat_Clone returns a clone of this Mat
Mat Mat_Clone(Mat m) {
return new cv::Mat(m->clone());
}
// Mat_CopyTo copies this Mat to another Mat.
void Mat_CopyTo(Mat m, Mat dst) {
m->copyTo(*dst);
}
// Mat_CopyToWithMask copies this Mat to another Mat while applying the mask
void Mat_CopyToWithMask(Mat m, Mat dst, Mat mask) {
m->copyTo(*dst, *mask);
}
void Mat_ConvertTo(Mat m, Mat dst, int type) {
m->convertTo(*dst, type);
}
// Mat_ToBytes returns the bytes representation of the underlying data.
struct ByteArray Mat_ToBytes(Mat m) {
return toByteArray(reinterpret_cast<const char*>(m->data), m->total() * m->elemSize());
}
struct ByteArray Mat_DataPtr(Mat m) {
return ByteArray {reinterpret_cast<char*>(m->data), static_cast<int>(m->total() * m->elemSize())};
}
// Mat_Region returns a Mat of a region of another Mat
Mat Mat_Region(Mat m, Rect r) {
return new cv::Mat(*m, cv::Rect(r.x, r.y, r.width, r.height));
}
Mat Mat_Reshape(Mat m, int cn, int rows) {
return new cv::Mat(m->reshape(cn, rows));
}
void Mat_PatchNaNs(Mat m) {
cv::patchNaNs(*m);
}
Mat Mat_ConvertFp16(Mat m) {
Mat dst = new cv::Mat();
cv::convertFp16(*m, *dst);
return dst;
}
Mat Mat_Sqrt(Mat m) {
Mat dst = new cv::Mat();
cv::sqrt(*m, *dst);
return dst;
}
// Mat_Mean calculates the mean value M of array elements, independently for each channel, and return it as Scalar vector
Scalar Mat_Mean(Mat m) {
cv::Scalar c = cv::mean(*m);
Scalar scal = Scalar();
scal.val1 = c.val[0];
scal.val2 = c.val[1];
scal.val3 = c.val[2];
scal.val4 = c.val[3];
return scal;
}
// Mat_MeanWithMask calculates the mean value M of array elements,
// independently for each channel, and returns it as Scalar vector
// while applying the mask.
Scalar Mat_MeanWithMask(Mat m, Mat mask){
cv::Scalar c = cv::mean(*m, *mask);
Scalar scal = Scalar();
scal.val1 = c.val[0];
scal.val2 = c.val[1];
scal.val3 = c.val[2];
scal.val4 = c.val[3];
return scal;
}
void LUT(Mat src, Mat lut, Mat dst) {
cv::LUT(*src, *lut, *dst);
}
// Mat_Rows returns how many rows in this Mat.
int Mat_Rows(Mat m) {
return m->rows;
}
// Mat_Cols returns how many columns in this Mat.
int Mat_Cols(Mat m) {
return m->cols;
}
// Mat_Channels returns how many channels in this Mat.
int Mat_Channels(Mat m) {
return m->channels();
}
// Mat_Type returns the type from this Mat.
int Mat_Type(Mat m) {
return m->type();
}
// Mat_Step returns the number of bytes each matrix row occupies.
int Mat_Step(Mat m) {
return m->step;
}
int Mat_Total(Mat m) {
return m->total();
}
void Mat_Size(Mat m, IntVector* res) {
cv::MatSize ms(m->size);
int* ids = new int[ms.dims()];
for (size_t i = 0; i < ms.dims(); ++i) {
ids[i] = ms[i];
}
res->length = ms.dims();
res->val = ids;
return;
}
// Mat_GetUChar returns a specific row/col value from this Mat expecting
// each element to contain a schar aka CV_8U.
uint8_t Mat_GetUChar(Mat m, int row, int col) {
return m->at<uchar>(row, col);
}
uint8_t Mat_GetUChar3(Mat m, int x, int y, int z) {
return m->at<uchar>(x, y, z);
}
// Mat_GetSChar returns a specific row/col value from this Mat expecting
// each element to contain a schar aka CV_8S.
int8_t Mat_GetSChar(Mat m, int row, int col) {
return m->at<schar>(row, col);
}
int8_t Mat_GetSChar3(Mat m, int x, int y, int z) {
return m->at<schar>(x, y, z);
}
// Mat_GetShort returns a specific row/col value from this Mat expecting
// each element to contain a short aka CV_16S.
int16_t Mat_GetShort(Mat m, int row, int col) {
return m->at<short>(row, col);
}
int16_t Mat_GetShort3(Mat m, int x, int y, int z) {
return m->at<short>(x, y, z);
}
// Mat_GetInt returns a specific row/col value from this Mat expecting
// each element to contain an int aka CV_32S.
int32_t Mat_GetInt(Mat m, int row, int col) {
return m->at<int>(row, col);
}
int32_t Mat_GetInt3(Mat m, int x, int y, int z) {
return m->at<int>(x, y, z);
}
// Mat_GetFloat returns a specific row/col value from this Mat expecting
// each element to contain a float aka CV_32F.
float Mat_GetFloat(Mat m, int row, int col) {
return m->at<float>(row, col);
}
float Mat_GetFloat3(Mat m, int x, int y, int z) {
return m->at<float>(x, y, z);
}
// Mat_GetDouble returns a specific row/col value from this Mat expecting
// each element to contain a double aka CV_64F.
double Mat_GetDouble(Mat m, int row, int col) {
return m->at<double>(row, col);
}
double Mat_GetDouble3(Mat m, int x, int y, int z) {
return m->at<double>(x, y, z);
}
void Mat_SetTo(Mat m, Scalar value) {
cv::Scalar c_value(value.val1, value.val2, value.val3, value.val4);
m->setTo(c_value);
}
// Mat_SetUChar set a specific row/col value from this Mat expecting
// each element to contain a schar aka CV_8U.
void Mat_SetUChar(Mat m, int row, int col, uint8_t val) {
m->at<uchar>(row, col) = val;
}
void Mat_SetUChar3(Mat m, int x, int y, int z, uint8_t val) {
m->at<uchar>(x, y, z) = val;
}
// Mat_SetSChar set a specific row/col value from this Mat expecting
// each element to contain a schar aka CV_8S.
void Mat_SetSChar(Mat m, int row, int col, int8_t val) {
m->at<schar>(row, col) = val;
}
void Mat_SetSChar3(Mat m, int x, int y, int z, int8_t val) {
m->at<schar>(x, y, z) = val;
}
// Mat_SetShort set a specific row/col value from this Mat expecting
// each element to contain a short aka CV_16S.
void Mat_SetShort(Mat m, int row, int col, int16_t val) {
m->at<short>(row, col) = val;
}
void Mat_SetShort3(Mat m, int x, int y, int z, int16_t val) {
m->at<short>(x, y, z) = val;
}
// Mat_SetInt set a specific row/col value from this Mat expecting
// each element to contain an int aka CV_32S.
void Mat_SetInt(Mat m, int row, int col, int32_t val) {
m->at<int>(row, col) = val;
}
void Mat_SetInt3(Mat m, int x, int y, int z, int32_t val) {
m->at<int>(x, y, z) = val;
}
// Mat_SetFloat set a specific row/col value from this Mat expecting
// each element to contain a float aka CV_32F.
void Mat_SetFloat(Mat m, int row, int col, float val) {
m->at<float>(row, col) = val;
}
void Mat_SetFloat3(Mat m, int x, int y, int z, float val) {
m->at<float>(x, y, z) = val;
}
// Mat_SetDouble set a specific row/col value from this Mat expecting
// each element to contain a double aka CV_64F.
void Mat_SetDouble(Mat m, int row, int col, double val) {
m->at<double>(row, col) = val;
}
void Mat_SetDouble3(Mat m, int x, int y, int z, double val) {
m->at<double>(x, y, z) = val;
}
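// The in-place arithmetic helpers below apply a scalar to every element,
// saturating to the Mat's element type (e.g. CV_8U values clamp at 255).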
void Mat_AddUChar(Mat m, uint8_t val) {
*m += val;
}
void Mat_SubtractUChar(Mat m, uint8_t val) {
*m -= val;
}
void Mat_MultiplyUChar(Mat m, uint8_t val) {
*m *= val;
}
void Mat_DivideUChar(Mat m, uint8_t val) {
*m /= val;
}
void Mat_AddFloat(Mat m, float val) {
*m += val;
}
void Mat_SubtractFloat(Mat m, float val) {
*m -= val;
}
void Mat_MultiplyFloat(Mat m, float val) {
*m *= val;
}
void Mat_DivideFloat(Mat m, float val) {
*m /= val;
}
Mat Mat_MultiplyMatrix(Mat x, Mat y) {
return new cv::Mat((*x) * (*y));
}
Mat Mat_T(Mat x) {
return new cv::Mat(x->t());
}
void Mat_AbsDiff(Mat src1, Mat src2, Mat dst) {
cv::absdiff(*src1, *src2, *dst);
}
void Mat_Add(Mat src1, Mat src2, Mat dst) {
cv::add(*src1, *src2, *dst);
}
void Mat_AddWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst) {
cv::addWeighted(*src1, alpha, *src2, beta, gamma, *dst);
}
void Mat_BitwiseAnd(Mat src1, Mat src2, Mat dst) {
cv::bitwise_and(*src1, *src2, *dst);
}
void Mat_BitwiseAndWithMask(Mat src1, Mat src2, Mat dst, Mat mask){
cv::bitwise_and(*src1, *src2, *dst, *mask);
}
void Mat_BitwiseNot(Mat src1, Mat dst) {
cv::bitwise_not(*src1, *dst);
}
void Mat_BitwiseNotWithMask(Mat src1, Mat dst, Mat mask) {
cv::bitwise_not(*src1, *dst, *mask);
}
void Mat_BitwiseOr(Mat src1, Mat src2, Mat dst) {
cv::bitwise_or(*src1, *src2, *dst);
}
void Mat_BitwiseOrWithMask(Mat src1, Mat src2, Mat dst, Mat mask) {
cv::bitwise_or(*src1, *src2, *dst, *mask);
}
void Mat_BitwiseXor(Mat src1, Mat src2, Mat dst) {
cv::bitwise_xor(*src1, *src2, *dst);
}
void Mat_BitwiseXorWithMask(Mat src1, Mat src2, Mat dst, Mat mask) {
cv::bitwise_xor(*src1, *src2, *dst, *mask);
}
void Mat_BatchDistance(Mat src1, Mat src2, Mat dist, int dtype, Mat nidx, int normType, int K,
Mat mask, int update, bool crosscheck) {
cv::batchDistance(*src1, *src2, *dist, dtype, *nidx, normType, K, *mask, update, crosscheck);
}
int Mat_BorderInterpolate(int p, int len, int borderType) {
return cv::borderInterpolate(p, len, borderType);
}
void Mat_CalcCovarMatrix(Mat samples, Mat covar, Mat mean, int flags, int ctype) {
cv::calcCovarMatrix(*samples, *covar, *mean, flags, ctype);
}
void Mat_CartToPolar(Mat x, Mat y, Mat magnitude, Mat angle, bool angleInDegrees) {
cv::cartToPolar(*x, *y, *magnitude, *angle, angleInDegrees);
}
bool Mat_CheckRange(Mat m) {
return cv::checkRange(*m);
}
void Mat_Compare(Mat src1, Mat src2, Mat dst, int ct) {
cv::compare(*src1, *src2, *dst, ct);
}
int Mat_CountNonZero(Mat src) {
return cv::countNonZero(*src);
}
void Mat_CompleteSymm(Mat m, bool lowerToUpper) {
cv::completeSymm(*m, lowerToUpper);
}
void Mat_ConvertScaleAbs(Mat src, Mat dst, double alpha, double beta) {
cv::convertScaleAbs(*src, *dst, alpha, beta);
}
void Mat_CopyMakeBorder(Mat src, Mat dst, int top, int bottom, int left, int right, int borderType,
Scalar value) {
cv::Scalar c_value(value.val1, value.val2, value.val3, value.val4);
cv::copyMakeBorder(*src, *dst, top, bottom, left, right, borderType, c_value);
}
void Mat_DCT(Mat src, Mat dst, int flags) {
cv::dct(*src, *dst, flags);
}
double Mat_Determinant(Mat m) {
return cv::determinant(*m);
}
void Mat_DFT(Mat m, Mat dst, int flags) {
cv::dft(*m, *dst, flags);
}
void Mat_Divide(Mat src1, Mat src2, Mat dst) {
cv::divide(*src1, *src2, *dst);
}
bool Mat_Eigen(Mat src, Mat eigenvalues, Mat eigenvectors) {
return cv::eigen(*src, *eigenvalues, *eigenvectors);
}
void Mat_EigenNonSymmetric(Mat src, Mat eigenvalues, Mat eigenvectors) {
cv::eigenNonSymmetric(*src, *eigenvalues, *eigenvectors);
}
void Mat_Exp(Mat src, Mat dst) {
cv::exp(*src, *dst);
}
void Mat_ExtractChannel(Mat src, Mat dst, int coi) {
cv::extractChannel(*src, *dst, coi);
}
void Mat_FindNonZero(Mat src, Mat idx) {
cv::findNonZero(*src, *idx);
}
void Mat_Flip(Mat src, Mat dst, int flipCode) {
cv::flip(*src, *dst, flipCode);
}
void Mat_Gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst, int flags) {
cv::gemm(*src1, *src2, alpha, *src3, beta, *dst, flags);
}
int Mat_GetOptimalDFTSize(int vecsize) {
return cv::getOptimalDFTSize(vecsize);
}
void Mat_Hconcat(Mat src1, Mat src2, Mat dst) {
cv::hconcat(*src1, *src2, *dst);
}
void Mat_Vconcat(Mat src1, Mat src2, Mat dst) {
cv::vconcat(*src1, *src2, *dst);
}
void Rotate(Mat src, Mat dst, int rotateCode) {
cv::rotate(*src, *dst, rotateCode);
}
void Mat_Idct(Mat src, Mat dst, int flags) {
cv::idct(*src, *dst, flags);
}
void Mat_Idft(Mat src, Mat dst, int flags, int nonzeroRows) {
cv::idft(*src, *dst, flags, nonzeroRows);
}
void Mat_InRange(Mat src, Mat lowerb, Mat upperb, Mat dst) {
cv::inRange(*src, *lowerb, *upperb, *dst);
}
void Mat_InRangeWithScalar(Mat src, Scalar lowerb, Scalar upperb, Mat dst) {
cv::Scalar lb = cv::Scalar(lowerb.val1, lowerb.val2, lowerb.val3, lowerb.val4);
cv::Scalar ub = cv::Scalar(upperb.val1, upperb.val2, upperb.val3, upperb.val4);
cv::inRange(*src, lb, ub, *dst);
}
void Mat_InsertChannel(Mat src, Mat dst, int coi) {
cv::insertChannel(*src, *dst, coi);
}
double Mat_Invert(Mat src, Mat dst, int flags) {
double ret = cv::invert(*src, *dst, flags);
return ret;
}
double KMeans(Mat data, int k, Mat bestLabels, TermCriteria criteria, int attempts, int flags, Mat centers) {
double ret = cv::kmeans(*data, k, *bestLabels, *criteria, attempts, flags, *centers);
return ret;
}
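// KMeansPoints runs kmeans on a set of 2D points by first packing them into
// a std::vector<cv::Point2f> that cv::kmeans can consume.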
double KMeansPoints(Contour points, int k, Mat bestLabels, TermCriteria criteria, int attempts, int flags, Mat centers) {
std::vector<cv::Point2f> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point2f(points.points[i].x, points.points[i].y));
}
double ret = cv::kmeans(pts, k, *bestLabels, *criteria, attempts, flags, *centers);
return ret;
}
void Mat_Log(Mat src, Mat dst) {
cv::log(*src, *dst);
}
void Mat_Magnitude(Mat x, Mat y, Mat magnitude) {
cv::magnitude(*x, *y, *magnitude);
}
void Mat_Max(Mat src1, Mat src2, Mat dst) {
cv::max(*src1, *src2, *dst);
}
void Mat_MeanStdDev(Mat src, Mat dstMean, Mat dstStdDev) {
cv::meanStdDev(*src, *dstMean, *dstStdDev);
}
void Mat_Merge(struct Mats mats, Mat dst) {
std::vector<cv::Mat> images;
for (int i = 0; i < mats.length; ++i) {
images.push_back(*mats.mats[i]);
}
cv::merge(images, *dst);
}
void Mat_Min(Mat src1, Mat src2, Mat dst) {
cv::min(*src1, *src2, *dst);
}
void Mat_MinMaxIdx(Mat m, double* minVal, double* maxVal, int* minIdx, int* maxIdx) {
cv::minMaxIdx(*m, minVal, maxVal, minIdx, maxIdx);
}
void Mat_MinMaxLoc(Mat m, double* minVal, double* maxVal, Point* minLoc, Point* maxLoc) {
cv::Point cMinLoc;
cv::Point cMaxLoc;
cv::minMaxLoc(*m, minVal, maxVal, &cMinLoc, &cMaxLoc);
minLoc->x = cMinLoc.x;
minLoc->y = cMinLoc.y;
maxLoc->x = cMaxLoc.x;
maxLoc->y = cMaxLoc.y;
}
void Mat_MulSpectrums(Mat a, Mat b, Mat c, int flags) {
cv::mulSpectrums(*a, *b, *c, flags);
}
void Mat_Multiply(Mat src1, Mat src2, Mat dst) {
cv::multiply(*src1, *src2, *dst);
}
void Mat_Normalize(Mat src, Mat dst, double alpha, double beta, int typ) {
cv::normalize(*src, *dst, alpha, beta, typ);
}
double Norm(Mat src1, int normType) {
return cv::norm(*src1, normType);
}
void Mat_PerspectiveTransform(Mat src, Mat dst, Mat tm) {
cv::perspectiveTransform(*src, *dst, *tm);
}
bool Mat_Solve(Mat src1, Mat src2, Mat dst, int flags) {
return cv::solve(*src1, *src2, *dst, flags);
}
int Mat_SolveCubic(Mat coeffs, Mat roots) {
return cv::solveCubic(*coeffs, *roots);
}
double Mat_SolvePoly(Mat coeffs, Mat roots, int maxIters) {
return cv::solvePoly(*coeffs, *roots, maxIters);
}
void Mat_Reduce(Mat src, Mat dst, int dim, int rType, int dType) {
cv::reduce(*src, *dst, dim, rType, dType);
}
void Mat_Repeat(Mat src, int nY, int nX, Mat dst) {
cv::repeat(*src, nY, nX, *dst);
}
void Mat_ScaleAdd(Mat src1, double alpha, Mat src2, Mat dst) {
cv::scaleAdd(*src1, alpha, *src2, *dst);
}
void Mat_SetIdentity(Mat src, double scalar) {
cv::setIdentity(*src, scalar);
}
void Mat_Sort(Mat src, Mat dst, int flags) {
cv::sort(*src, *dst, flags);
}
void Mat_SortIdx(Mat src, Mat dst, int flags) {
cv::sortIdx(*src, *dst, flags);
}
void Mat_Split(Mat src, struct Mats* mats) {
std::vector<cv::Mat> channels;
cv::split(*src, channels);
mats->mats = new Mat[channels.size()];
for (size_t i = 0; i < channels.size(); ++i) {
mats->mats[i] = new cv::Mat(channels[i]);
}
mats->length = (int)channels.size();
}
void Mat_Subtract(Mat src1, Mat src2, Mat dst) {
cv::subtract(*src1, *src2, *dst);
}
Scalar Mat_Trace(Mat src) {
cv::Scalar c = cv::trace(*src);
Scalar scal = Scalar();
scal.val1 = c.val[0];
scal.val2 = c.val[1];
scal.val3 = c.val[2];
scal.val4 = c.val[3];
return scal;
}
void Mat_Transform(Mat src, Mat dst, Mat tm) {
cv::transform(*src, *dst, *tm);
}
void Mat_Transpose(Mat src, Mat dst) {
cv::transpose(*src, *dst);
}
void Mat_PolarToCart(Mat magnitude, Mat degree, Mat x, Mat y, bool angleInDegrees) {
cv::polarToCart(*magnitude, *degree, *x, *y, angleInDegrees);
}
void Mat_Pow(Mat src, double power, Mat dst) {
cv::pow(*src, power, *dst);
}
void Mat_Phase(Mat x, Mat y, Mat angle, bool angleInDegrees) {
cv::phase(*x, *y, *angle, angleInDegrees);
}
Scalar Mat_Sum(Mat src) {
cv::Scalar c = cv::sum(*src);
Scalar scal = Scalar();
scal.val1 = c.val[0];
scal.val2 = c.val[1];
scal.val3 = c.val[2];
scal.val4 = c.val[3];
return scal;
}
// TermCriteria_New creates a new TermCriteria
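// A typical criteria combines both stopping conditions,
// e.g. TermCriteria_New(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.001).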
TermCriteria TermCriteria_New(int typ, int maxCount, double epsilon) {
return new cv::TermCriteria(typ, maxCount, epsilon);
}
void Contours_Close(struct Contours cs) {
for (int i = 0; i < cs.length; i++) {
Points_Close(cs.contours[i]);
}
delete[] cs.contours;
}
void KeyPoints_Close(struct KeyPoints ks) {
delete[] ks.keypoints;
}
void Points_Close(Points ps) {
for (size_t i = 0; i < ps.length; i++) {
Point_Close(ps.points[i]);
}
delete[] ps.points;
}
void Point_Close(Point p) {}
void Rects_Close(struct Rects rs) {
delete[] rs.rects;
}
void DMatches_Close(struct DMatches ds) {
delete[] ds.dmatches;
}
void MultiDMatches_Close(struct MultiDMatches mds) {
for (size_t i = 0; i < mds.length; i++) {
DMatches_Close(mds.dmatches[i]);
}
delete[] mds.dmatches;
}
struct DMatches MultiDMatches_get(struct MultiDMatches mds, int index) {
return mds.dmatches[index];
}
// Mats_get returns the Mat at index i; iterating over mats.mats directly
// from the cgo side is impractical, so this accessor is provided instead.
Mat Mats_get(struct Mats mats, int i) {
return mats.mats[i];
}
void Mats_Close(struct Mats mats) {
delete[] mats.mats;
}
void ByteArray_Release(struct ByteArray buf) {
delete[] buf.data;
}
struct ByteArray toByteArray(const char* buf, int len) {
ByteArray ret = {new char[len], len};
memcpy(ret.data, buf, len);
return ret;
}
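// Buffers returned by toByteArray are heap-allocated copies; pair each call
// with ByteArray_Release to avoid leaks.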
int64 GetCVTickCount() {
return cv::getTickCount();
}
double GetTickFrequency() {
return cv::getTickFrequency();
}
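// GetCVTickCount and GetTickFrequency together implement the usual OpenCV
// timing pattern, e.g.:
//   int64 start = GetCVTickCount();
//   // ... work ...
//   double seconds = (GetCVTickCount() - start) / GetTickFrequency();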
Mat Mat_rowRange(Mat m,int startrow,int endrow) {
return new cv::Mat(m->rowRange(startrow,endrow));
}
Mat Mat_colRange(Mat m,int startrow,int endrow) {
return new cv::Mat(m->colRange(startrow,endrow));
}

1975
vendor/gocv.io/x/gocv/core.go generated vendored Normal file

File diff suppressed because it is too large

385
vendor/gocv.io/x/gocv/core.h generated vendored Normal file
View File

@ -0,0 +1,385 @@
#ifndef _OPENCV3_CORE_H_
#define _OPENCV3_CORE_H_
#include <stdint.h>
#include <stdbool.h>
// Wrapper for std::vector<string>
typedef struct CStrings {
const char** strs;
int length;
} CStrings;
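// Wrapper for a raw buffer of bytes along with its length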
typedef struct ByteArray {
char* data;
int length;
} ByteArray;
// Wrapper for std::vector<int>
typedef struct IntVector {
int* val;
int length;
} IntVector;
// Wrapper for std::vector<float>
typedef struct FloatVector {
float* val;
int length;
} FloatVector;
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
typedef struct RawData {
int width;
int height;
struct ByteArray data;
} RawData;
// Wrapper for an individual cv::Point2f
typedef struct Point2f {
float x;
float y;
} Point2f;
// Wrapper for an individual cv::Point
typedef struct Point {
int x;
int y;
} Point;
// Wrapper for the vector of Point structs aka std::vector<Point>
typedef struct Points {
Point* points;
int length;
} Points;
// Contour is alias for Points
typedef Points Contour;
// Wrapper for the vector of Points vectors aka std::vector< std::vector<Point> >
typedef struct Contours {
Contour* contours;
int length;
} Contours;
// Wrapper for an individual cv::Rect
typedef struct Rect {
int x;
int y;
int width;
int height;
} Rect;
// Wrapper for the vector of Rect struct aka std::vector<Rect>
typedef struct Rects {
Rect* rects;
int length;
} Rects;
// Wrapper for an individual cv::Size
typedef struct Size {
int width;
int height;
} Size;
// Wrapper for an individual cv::RotatedRect
typedef struct RotatedRect {
Contour pts;
Rect boundingRect;
Point center;
Size size;
double angle;
} RotatedRect;
// Wrapper for an individual cv::Scalar
typedef struct Scalar {
double val1;
double val2;
double val3;
double val4;
} Scalar;
// Wrapper for an individual cv::KeyPoint
typedef struct KeyPoint {
double x;
double y;
double size;
double angle;
double response;
int octave;
int classID;
} KeyPoint;
// Wrapper for the vector of KeyPoint struct aka std::vector<KeyPoint>
typedef struct KeyPoints {
KeyPoint* keypoints;
int length;
} KeyPoints;
// Wrapper for SimpleBlobDetectorParams aka SimpleBlobDetector::Params
typedef struct SimpleBlobDetectorParams {
unsigned char blobColor;
bool filterByArea;
bool filterByCircularity;
bool filterByColor;
bool filterByConvexity;
bool filterByInertia;
float maxArea;
float maxCircularity;
float maxConvexity;
float maxInertiaRatio;
float maxThreshold;
float minArea;
float minCircularity;
float minConvexity;
float minDistBetweenBlobs;
float minInertiaRatio;
size_t minRepeatability;
float minThreshold;
float thresholdStep;
} SimpleBlobDetectorParams;
// Wrapper for an individual cv::DMatch
typedef struct DMatch {
int queryIdx;
int trainIdx;
int imgIdx;
float distance;
} DMatch;
// Wrapper for the vector of DMatch struct aka std::vector<DMatch>
typedef struct DMatches {
DMatch* dmatches;
int length;
} DMatches;
// Wrapper for the vector vector of DMatch struct aka std::vector<std::vector<DMatch>>
typedef struct MultiDMatches {
DMatches* dmatches;
int length;
} MultiDMatches;
// Wrapper for an individual cv::Moments
typedef struct Moment {
double m00;
double m10;
double m01;
double m20;
double m11;
double m02;
double m30;
double m21;
double m12;
double m03;
double mu20;
double mu11;
double mu02;
double mu30;
double mu21;
double mu12;
double mu03;
double nu20;
double nu11;
double nu02;
double nu30;
double nu21;
double nu12;
double nu03;
} Moment;
#ifdef __cplusplus
typedef cv::Mat* Mat;
typedef cv::TermCriteria* TermCriteria;
#else
typedef void* Mat;
typedef void* TermCriteria;
#endif
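// From the C (cgo) side Mat and TermCriteria are opaque handles; the C++
// translation unit sees them as real cv::Mat* / cv::TermCriteria* pointers.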
// Wrapper for the vector of Mat aka std::vector<Mat>
typedef struct Mats {
Mat* mats;
int length;
} Mats;
Mat Mats_get(struct Mats mats, int i);
struct DMatches MultiDMatches_get(struct MultiDMatches mds, int index);
struct ByteArray toByteArray(const char* buf, int len);
void ByteArray_Release(struct ByteArray buf);
void Contours_Close(struct Contours cs);
void KeyPoints_Close(struct KeyPoints ks);
void Rects_Close(struct Rects rs);
void Mats_Close(struct Mats mats);
void Point_Close(struct Point p);
void Points_Close(struct Points ps);
void DMatches_Close(struct DMatches ds);
void MultiDMatches_Close(struct MultiDMatches mds);
Mat Mat_New();
Mat Mat_NewWithSize(int rows, int cols, int type);
Mat Mat_NewFromScalar(const Scalar ar, int type);
Mat Mat_NewWithSizeFromScalar(const Scalar ar, int rows, int cols, int type);
Mat Mat_NewFromBytes(int rows, int cols, int type, struct ByteArray buf);
Mat Mat_FromPtr(Mat m, int rows, int cols, int type, int prows, int pcols);
void Mat_Close(Mat m);
int Mat_Empty(Mat m);
Mat Mat_Clone(Mat m);
void Mat_CopyTo(Mat m, Mat dst);
int Mat_Total(Mat m);
void Mat_Size(Mat m, IntVector* res);
void Mat_CopyToWithMask(Mat m, Mat dst, Mat mask);
void Mat_ConvertTo(Mat m, Mat dst, int type);
struct ByteArray Mat_ToBytes(Mat m);
struct ByteArray Mat_DataPtr(Mat m);
Mat Mat_Region(Mat m, Rect r);
Mat Mat_Reshape(Mat m, int cn, int rows);
void Mat_PatchNaNs(Mat m);
Mat Mat_ConvertFp16(Mat m);
Scalar Mat_Mean(Mat m);
Scalar Mat_MeanWithMask(Mat m, Mat mask);
Mat Mat_Sqrt(Mat m);
int Mat_Rows(Mat m);
int Mat_Cols(Mat m);
int Mat_Channels(Mat m);
int Mat_Type(Mat m);
int Mat_Step(Mat m);
uint8_t Mat_GetUChar(Mat m, int row, int col);
uint8_t Mat_GetUChar3(Mat m, int x, int y, int z);
int8_t Mat_GetSChar(Mat m, int row, int col);
int8_t Mat_GetSChar3(Mat m, int x, int y, int z);
int16_t Mat_GetShort(Mat m, int row, int col);
int16_t Mat_GetShort3(Mat m, int x, int y, int z);
int32_t Mat_GetInt(Mat m, int row, int col);
int32_t Mat_GetInt3(Mat m, int x, int y, int z);
float Mat_GetFloat(Mat m, int row, int col);
float Mat_GetFloat3(Mat m, int x, int y, int z);
double Mat_GetDouble(Mat m, int row, int col);
double Mat_GetDouble3(Mat m, int x, int y, int z);
void Mat_SetTo(Mat m, Scalar value);
void Mat_SetUChar(Mat m, int row, int col, uint8_t val);
void Mat_SetUChar3(Mat m, int x, int y, int z, uint8_t val);
void Mat_SetSChar(Mat m, int row, int col, int8_t val);
void Mat_SetSChar3(Mat m, int x, int y, int z, int8_t val);
void Mat_SetShort(Mat m, int row, int col, int16_t val);
void Mat_SetShort3(Mat m, int x, int y, int z, int16_t val);
void Mat_SetInt(Mat m, int row, int col, int32_t val);
void Mat_SetInt3(Mat m, int x, int y, int z, int32_t val);
void Mat_SetFloat(Mat m, int row, int col, float val);
void Mat_SetFloat3(Mat m, int x, int y, int z, float val);
void Mat_SetDouble(Mat m, int row, int col, double val);
void Mat_SetDouble3(Mat m, int x, int y, int z, double val);
void Mat_AddUChar(Mat m, uint8_t val);
void Mat_SubtractUChar(Mat m, uint8_t val);
void Mat_MultiplyUChar(Mat m, uint8_t val);
void Mat_DivideUChar(Mat m, uint8_t val);
void Mat_AddFloat(Mat m, float val);
void Mat_SubtractFloat(Mat m, float val);
void Mat_MultiplyFloat(Mat m, float val);
void Mat_DivideFloat(Mat m, float val);
Mat Mat_MultiplyMatrix(Mat x, Mat y);
Mat Mat_T(Mat x);
void LUT(Mat src, Mat lut, Mat dst);
void Mat_AbsDiff(Mat src1, Mat src2, Mat dst);
void Mat_Add(Mat src1, Mat src2, Mat dst);
void Mat_AddWeighted(Mat src1, double alpha, Mat src2, double beta, double gamma, Mat dst);
void Mat_BitwiseAnd(Mat src1, Mat src2, Mat dst);
void Mat_BitwiseAndWithMask(Mat src1, Mat src2, Mat dst, Mat mask);
void Mat_BitwiseNot(Mat src1, Mat dst);
void Mat_BitwiseNotWithMask(Mat src1, Mat dst, Mat mask);
void Mat_BitwiseOr(Mat src1, Mat src2, Mat dst);
void Mat_BitwiseOrWithMask(Mat src1, Mat src2, Mat dst, Mat mask);
void Mat_BitwiseXor(Mat src1, Mat src2, Mat dst);
void Mat_BitwiseXorWithMask(Mat src1, Mat src2, Mat dst, Mat mask);
void Mat_Compare(Mat src1, Mat src2, Mat dst, int ct);
void Mat_BatchDistance(Mat src1, Mat src2, Mat dist, int dtype, Mat nidx, int normType, int K,
Mat mask, int update, bool crosscheck);
int Mat_BorderInterpolate(int p, int len, int borderType);
void Mat_CalcCovarMatrix(Mat samples, Mat covar, Mat mean, int flags, int ctype);
void Mat_CartToPolar(Mat x, Mat y, Mat magnitude, Mat angle, bool angleInDegrees);
bool Mat_CheckRange(Mat m);
void Mat_CompleteSymm(Mat m, bool lowerToUpper);
void Mat_ConvertScaleAbs(Mat src, Mat dst, double alpha, double beta);
void Mat_CopyMakeBorder(Mat src, Mat dst, int top, int bottom, int left, int right, int borderType,
Scalar value);
int Mat_CountNonZero(Mat src);
void Mat_DCT(Mat src, Mat dst, int flags);
double Mat_Determinant(Mat m);
void Mat_DFT(Mat m, Mat dst, int flags);
void Mat_Divide(Mat src1, Mat src2, Mat dst);
bool Mat_Eigen(Mat src, Mat eigenvalues, Mat eigenvectors);
void Mat_EigenNonSymmetric(Mat src, Mat eigenvalues, Mat eigenvectors);
void Mat_Exp(Mat src, Mat dst);
void Mat_ExtractChannel(Mat src, Mat dst, int coi);
void Mat_FindNonZero(Mat src, Mat idx);
void Mat_Flip(Mat src, Mat dst, int flipCode);
void Mat_Gemm(Mat src1, Mat src2, double alpha, Mat src3, double beta, Mat dst, int flags);
int Mat_GetOptimalDFTSize(int vecsize);
void Mat_Hconcat(Mat src1, Mat src2, Mat dst);
void Mat_Vconcat(Mat src1, Mat src2, Mat dst);
void Rotate(Mat src, Mat dst, int rotateCode);
void Mat_Idct(Mat src, Mat dst, int flags);
void Mat_Idft(Mat src, Mat dst, int flags, int nonzeroRows);
void Mat_InRange(Mat src, Mat lowerb, Mat upperb, Mat dst);
void Mat_InRangeWithScalar(Mat src, const Scalar lowerb, const Scalar upperb, Mat dst);
void Mat_InsertChannel(Mat src, Mat dst, int coi);
double Mat_Invert(Mat src, Mat dst, int flags);
double KMeans(Mat data, int k, Mat bestLabels, TermCriteria criteria, int attempts, int flags, Mat centers);
double KMeansPoints(Contour points, int k, Mat bestLabels, TermCriteria criteria, int attempts, int flags, Mat centers);
void Mat_Log(Mat src, Mat dst);
void Mat_Magnitude(Mat x, Mat y, Mat magnitude);
void Mat_Max(Mat src1, Mat src2, Mat dst);
void Mat_MeanStdDev(Mat src, Mat dstMean, Mat dstStdDev);
void Mat_Merge(struct Mats mats, Mat dst);
void Mat_Min(Mat src1, Mat src2, Mat dst);
void Mat_MinMaxIdx(Mat m, double* minVal, double* maxVal, int* minIdx, int* maxIdx);
void Mat_MinMaxLoc(Mat m, double* minVal, double* maxVal, Point* minLoc, Point* maxLoc);
void Mat_MulSpectrums(Mat a, Mat b, Mat c, int flags);
void Mat_Multiply(Mat src1, Mat src2, Mat dst);
void Mat_Subtract(Mat src1, Mat src2, Mat dst);
void Mat_Normalize(Mat src, Mat dst, double alpha, double beta, int typ);
double Norm(Mat src1, int normType);
void Mat_PerspectiveTransform(Mat src, Mat dst, Mat tm);
bool Mat_Solve(Mat src1, Mat src2, Mat dst, int flags);
int Mat_SolveCubic(Mat coeffs, Mat roots);
double Mat_SolvePoly(Mat coeffs, Mat roots, int maxIters);
void Mat_Reduce(Mat src, Mat dst, int dim, int rType, int dType);
void Mat_Repeat(Mat src, int nY, int nX, Mat dst);
void Mat_ScaleAdd(Mat src1, double alpha, Mat src2, Mat dst);
void Mat_SetIdentity(Mat src, double scalar);
void Mat_Sort(Mat src, Mat dst, int flags);
void Mat_SortIdx(Mat src, Mat dst, int flags);
void Mat_Split(Mat src, struct Mats* mats);
Scalar Mat_Trace(Mat src);
void Mat_Transform(Mat src, Mat dst, Mat tm);
void Mat_Transpose(Mat src, Mat dst);
void Mat_PolarToCart(Mat magnitude, Mat degree, Mat x, Mat y, bool angleInDegrees);
void Mat_Pow(Mat src, double power, Mat dst);
void Mat_Phase(Mat x, Mat y, Mat angle, bool angleInDegrees);
Scalar Mat_Sum(Mat src);
TermCriteria TermCriteria_New(int typ, int maxCount, double epsilon);
int64_t GetCVTickCount();
double GetTickFrequency();
Mat Mat_rowRange(Mat m,int startrow,int endrow);
Mat Mat_colRange(Mat m,int startrow,int endrow);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_CORE_H_

211
vendor/gocv.io/x/gocv/core_string.go generated vendored Normal file
View File

@ -0,0 +1,211 @@
package gocv
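// String returns a human-readable name for a MatType value,
// e.g. MatTypeCV8UC3.String() == "CV8UC3".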
func (c MatType) String() string {
switch c {
case MatTypeCV8U:
return "CV8U"
case MatTypeCV8UC2:
return "CV8UC2"
case MatTypeCV8UC3:
return "CV8UC3"
case MatTypeCV8UC4:
return "CV8UC4"
case MatTypeCV16U:
return "CV16U"
case MatTypeCV16UC2:
return "CV16UC2"
case MatTypeCV16UC3:
return "CV16UC3"
case MatTypeCV16UC4:
return "CV16UC4"
case MatTypeCV16S:
return "CV16S"
case MatTypeCV16SC2:
return "CV16SC2"
case MatTypeCV16SC3:
return "CV16SC3"
case MatTypeCV16SC4:
return "CV16SC4"
case MatTypeCV32S:
return "CV32S"
case MatTypeCV32SC2:
return "CV32SC2"
case MatTypeCV32SC3:
return "CV32SC3"
case MatTypeCV32SC4:
return "CV32SC4"
case MatTypeCV32F:
return "CV32F"
case MatTypeCV32FC2:
return "CV32FC2"
case MatTypeCV32FC3:
return "CV32FC3"
case MatTypeCV32FC4:
return "CV32FC4"
case MatTypeCV64F:
return "CV64F"
case MatTypeCV64FC2:
return "CV64FC2"
case MatTypeCV64FC3:
return "CV64FC3"
case MatTypeCV64FC4:
return "CV64FC4"
}
return ""
}
func (c CompareType) String() string {
switch c {
case CompareEQ:
return "eq"
case CompareGT:
return "gt"
case CompareGE:
return "ge"
case CompareLT:
return "lt"
case CompareLE:
return "le"
case CompareNE:
return "ne"
}
return ""
}
func (c CovarFlags) String() string {
switch c {
case CovarScrambled:
return "covar-scrambled"
case CovarNormal:
return "covar-normal"
case CovarUseAvg:
return "covar-use-avg"
case CovarScale:
return "covar-scale"
case CovarRows:
return "covar-rows"
case CovarCols:
return "covar-cols"
}
return ""
}
func (c DftFlags) String() string {
switch c {
case DftForward:
return "dft-forward"
case DftInverse:
return "dft-inverse"
case DftScale:
return "dft-scale"
case DftRows:
return "dft-rows"
case DftComplexOutput:
return "dft-complex-output"
case DftRealOutput:
return "dft-real-output"
case DftComplexInput:
return "dft-complex-input"
}
return ""
}
func (c RotateFlag) String() string {
switch c {
case Rotate90Clockwise:
return "rotate-90-clockwise"
case Rotate180Clockwise:
return "rotate-180-clockwise"
case Rotate90CounterClockwise:
return "rotate-90-counter-clockwise"
}
return ""
}
func (c KMeansFlags) String() string {
switch c {
case KMeansRandomCenters:
return "kmeans-random-centers"
case KMeansPPCenters:
return "kmeans-pp-centers"
case KMeansUseInitialLabels:
return "kmeans-use-initial-labels"
}
return ""
}
func (c NormType) String() string {
switch c {
case NormInf:
return "norm-inf"
case NormL1:
return "norm-l1"
case NormL2:
return "norm-l2"
case NormL2Sqr:
return "norm-l2-sqr"
case NormHamming:
return "norm-hamming"
case NormHamming2:
return "norm-hamming2"
case NormRelative:
return "norm-relative"
case NormMinMax:
return "norm-minmax"
}
return ""
}
func (c TermCriteriaType) String() string {
switch c {
case Count:
return "count"
case EPS:
return "eps"
}
return ""
}
func (c SolveDecompositionFlags) String() string {
switch c {
case SolveDecompositionLu:
return "solve-decomposition-lu"
case SolveDecompositionSvd:
return "solve-decomposition-svd"
case SolveDecompositionEing:
return "solve-decomposition-eing"
case SolveDecompositionCholesky:
return "solve-decomposition-cholesky"
case SolveDecompositionQr:
return "solve-decomposition-qr"
case SolveDecompositionNormal:
return "solve-decomposition-normal"
}
return ""
}
func (c ReduceTypes) String() string {
switch c {
case ReduceSum:
return "reduce-sum"
case ReduceAvg:
return "reduce-avg"
case ReduceMax:
return "reduce-max"
case ReduceMin:
return "reduce-min"
}
return ""
}
func (c SortFlags) String() string {
switch c {
case SortEveryRow:
return "sort-every-row"
case SortEveryColumn:
return "sort-every-column"
case SortDescending:
return "sort-descending"
}
return ""
}

189
vendor/gocv.io/x/gocv/dnn.cpp generated vendored Normal file
View File

@ -0,0 +1,189 @@
#include "dnn.h"
Net Net_ReadNet(const char* model, const char* config) {
Net n = new cv::dnn::Net(cv::dnn::readNet(model, config));
return n;
}
Net Net_ReadNetBytes(const char* framework, struct ByteArray model, struct ByteArray config) {
std::vector<uchar> modelv(model.data, model.data + model.length);
std::vector<uchar> configv(config.data, config.data + config.length);
Net n = new cv::dnn::Net(cv::dnn::readNet(framework, modelv, configv));
return n;
}
Net Net_ReadNetFromCaffe(const char* prototxt, const char* caffeModel) {
Net n = new cv::dnn::Net(cv::dnn::readNetFromCaffe(prototxt, caffeModel));
return n;
}
Net Net_ReadNetFromCaffeBytes(struct ByteArray prototxt, struct ByteArray caffeModel) {
Net n = new cv::dnn::Net(cv::dnn::readNetFromCaffe(prototxt.data, prototxt.length,
caffeModel.data, caffeModel.length));
return n;
}
Net Net_ReadNetFromTensorflow(const char* model) {
Net n = new cv::dnn::Net(cv::dnn::readNetFromTensorflow(model));
return n;
}
Net Net_ReadNetFromTensorflowBytes(struct ByteArray model) {
Net n = new cv::dnn::Net(cv::dnn::readNetFromTensorflow(model.data, model.length));
return n;
}
void Net_Close(Net net) {
delete net;
}
bool Net_Empty(Net net) {
return net->empty();
}
void Net_SetInput(Net net, Mat blob, const char* name) {
net->setInput(*blob, name);
}
Mat Net_Forward(Net net, const char* outputName) {
return new cv::Mat(net->forward(outputName));
}
void Net_ForwardLayers(Net net, struct Mats* outputBlobs, struct CStrings outBlobNames) {
std::vector< cv::Mat > blobs;
std::vector< cv::String > names;
for (int i = 0; i < outBlobNames.length; ++i) {
names.push_back(cv::String(outBlobNames.strs[i]));
}
net->forward(blobs, names);
// copy blobs into outputBlobs
outputBlobs->mats = new Mat[blobs.size()];
for (size_t i = 0; i < blobs.size(); ++i) {
outputBlobs->mats[i] = new cv::Mat(blobs[i]);
}
outputBlobs->length = (int)blobs.size();
}
void Net_SetPreferableBackend(Net net, int backend) {
net->setPreferableBackend(backend);
}
void Net_SetPreferableTarget(Net net, int target) {
net->setPreferableTarget(target);
}
int64_t Net_GetPerfProfile(Net net) {
std::vector<double> layersTimes;
return net->getPerfProfile(layersTimes);
}
void Net_GetUnconnectedOutLayers(Net net, IntVector* res) {
std::vector< int > cids(net->getUnconnectedOutLayers());
int* ids = new int[cids.size()];
for (size_t i = 0; i < cids.size(); ++i) {
ids[i] = cids[i];
}
res->length = cids.size();
res->val = ids;
return;
}
void Net_GetLayerNames(Net net, CStrings* names) {
std::vector< cv::String > cstrs(net->getLayerNames());
const char **strs = new const char*[cstrs.size()];
for (size_t i = 0; i < cstrs.size(); ++i) {
// copy each name with strdup: cstrs owns the underlying storage, so
// returning the raw c_str() pointers would dangle once it goes out of scope
strs[i] = strdup(cstrs[i].c_str());
}
names->length = cstrs.size();
names->strs = strs;
return;
}
Mat Net_BlobFromImage(Mat image, double scalefactor, Size size, Scalar mean, bool swapRB,
bool crop) {
cv::Size sz(size.width, size.height);
// set the output ddepth to the input image depth
int ddepth = image->depth();
if (ddepth == CV_8U) {
// mean subtraction is not supported for CV_8U output depth, so skip it
return new cv::Mat(cv::dnn::blobFromImage(*image, scalefactor, sz, cv::Scalar(), swapRB, crop, ddepth));
}
cv::Scalar cm(mean.val1, mean.val2, mean.val3, mean.val4);
return new cv::Mat(cv::dnn::blobFromImage(*image, scalefactor, sz, cm, swapRB, crop, ddepth));
}
void Net_BlobFromImages(struct Mats images, Mat blob, double scalefactor, Size size,
Scalar mean, bool swapRB, bool crop, int ddepth) {
std::vector<cv::Mat> imgs;
for (int i = 0; i < images.length; ++i) {
imgs.push_back(*images.mats[i]);
}
cv::Size sz(size.width, size.height);
cv::Scalar cm = cv::Scalar(mean.val1, mean.val2, mean.val3, mean.val4);
// TODO: handle different version signatures of this function v2 vs v3.
cv::dnn::blobFromImages(imgs, *blob, scalefactor, sz, cm, swapRB, crop, ddepth);
}
void Net_ImagesFromBlob(Mat blob_, struct Mats* images_) {
std::vector<cv::Mat> imgs;
cv::dnn::imagesFromBlob(*blob_, imgs);
images_->mats = new Mat[imgs.size()];
for (size_t i = 0; i < imgs.size(); ++i) {
images_->mats[i] = new cv::Mat(imgs[i]);
}
images_->length = (int) imgs.size();
}
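// Net_GetBlobChannel assumes the NCHW blob layout produced by blobFromImage:
// size[0]=N, size[1]=C, size[2]=H, size[3]=W.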
Mat Net_GetBlobChannel(Mat blob, int imgidx, int chnidx) {
size_t w = blob->size[3];
size_t h = blob->size[2];
return new cv::Mat(h, w, CV_32F, blob->ptr<float>(imgidx, chnidx));
}
Scalar Net_GetBlobSize(Mat blob) {
Scalar scal = Scalar();
scal.val1 = blob->size[0];
scal.val2 = blob->size[1];
scal.val3 = blob->size[2];
scal.val4 = blob->size[3];
return scal;
}
Layer Net_GetLayer(Net net, int layerid) {
return new cv::Ptr<cv::dnn::Layer>(net->getLayer(layerid));
}
void Layer_Close(Layer layer) {
delete layer;
}
int Layer_InputNameToIndex(Layer layer, const char* name) {
return (*layer)->inputNameToIndex(name);
}
int Layer_OutputNameToIndex(Layer layer, const char* name) {
return (*layer)->outputNameToIndex(name);
}
const char* Layer_GetName(Layer layer) {
return (*layer)->name.c_str();
}
const char* Layer_GetType(Layer layer) {
return (*layer)->type.c_str();
}

472
vendor/gocv.io/x/gocv/dnn.go generated vendored Normal file
View File

@ -0,0 +1,472 @@
package gocv
/*
#include <stdlib.h>
#include "dnn.h"
*/
import "C"
import (
"image"
"reflect"
"unsafe"
)
// Net allows you to create and manipulate comprehensive artificial neural networks.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html
//
type Net struct {
// C.Net
p unsafe.Pointer
}
// NetBackendType is the type for the various different kinds of DNN backends.
type NetBackendType int
const (
// NetBackendDefault is the default backend.
NetBackendDefault NetBackendType = 0
// NetBackendHalide is the Halide backend.
NetBackendHalide NetBackendType = 1
// NetBackendOpenVINO is the OpenVINO backend.
NetBackendOpenVINO NetBackendType = 2
// NetBackendOpenCV is the OpenCV backend.
NetBackendOpenCV NetBackendType = 3
// NetBackendVKCOM is the Vulkan backend.
NetBackendVKCOM NetBackendType = 4
)
// ParseNetBackend returns a valid NetBackendType given a string. Valid values are:
// - halide
// - openvino
// - opencv
// - vulkan
// - default
func ParseNetBackend(backend string) NetBackendType {
switch backend {
case "halide":
return NetBackendHalide
case "openvino":
return NetBackendOpenVINO
case "opencv":
return NetBackendOpenCV
case "vulkan":
return NetBackendVKCOM
default:
return NetBackendDefault
}
}
// NetTargetType is the type for the various different kinds of DNN device targets.
type NetTargetType int
const (
// NetTargetCPU is the default CPU device target.
NetTargetCPU NetTargetType = 0
// NetTargetFP32 is the 32-bit OpenCL target.
NetTargetFP32 NetTargetType = 1
// NetTargetFP16 is the 16-bit OpenCL target.
NetTargetFP16 NetTargetType = 2
// NetTargetVPU is the Movidius VPU target.
NetTargetVPU NetTargetType = 3
// NetTargetVulkan is the Vulkan device target.
NetTargetVulkan NetTargetType = 4
// NetTargetFPGA is the FPGA target.
NetTargetFPGA NetTargetType = 5
)
// ParseNetTarget returns a valid NetTargetType given a string. Valid values are:
// - cpu
// - fp32
// - fp16
// - vpu
// - vulkan
// - fpga
func ParseNetTarget(target string) NetTargetType {
switch target {
case "cpu":
return NetTargetCPU
case "fp32":
return NetTargetFP32
case "fp16":
return NetTargetFP16
case "vpu":
return NetTargetVPU
case "vulkan":
return NetTargetVulkan
case "fpga":
return NetTargetFPGA
default:
return NetTargetCPU
}
}
// Close Net
func (net *Net) Close() error {
C.Net_Close((C.Net)(net.p))
net.p = nil
return nil
}
// Empty returns true if there are no layers in the network.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#a6a5778787d5b8770deab5eda6968e66c
//
func (net *Net) Empty() bool {
return bool(C.Net_Empty((C.Net)(net.p)))
}
// SetInput sets the new input value for the network.
//
// For further details, please see:
// https://docs.opencv.org/trunk/db/d30/classcv_1_1dnn_1_1Net.html#a672a08ae76444d75d05d7bfea3e4a328
//
func (net *Net) SetInput(blob Mat, name string) {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
C.Net_SetInput((C.Net)(net.p), blob.p, cName)
}
// Forward runs a forward pass to compute the output of the layer with name outputName.
//
// For further details, please see:
// https://docs.opencv.org/trunk/db/d30/classcv_1_1dnn_1_1Net.html#a98ed94cb6ef7063d3697259566da310b
//
func (net *Net) Forward(outputName string) Mat {
cName := C.CString(outputName)
defer C.free(unsafe.Pointer(cName))
return newMat(C.Net_Forward((C.Net)(net.p), cName))
}
// ForwardLayers runs a forward pass to compute outputs of the layers listed in outBlobNames.
//
// For further details, please see:
// https://docs.opencv.org/3.4.1/db/d30/classcv_1_1dnn_1_1Net.html#adb34d7650e555264c7da3b47d967311b
//
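// A minimal sketch (the layer names are model-specific placeholders); each
// returned Mat must be closed by the caller:
//
//	outs := net.ForwardLayers([]string{"yolo_82", "yolo_94"})
//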
func (net *Net) ForwardLayers(outBlobNames []string) (blobs []Mat) {
cMats := C.struct_Mats{}
C.Net_ForwardLayers((C.Net)(net.p), &(cMats), toCStrings(outBlobNames))
blobs = make([]Mat, cMats.length)
for i := C.int(0); i < cMats.length; i++ {
blobs[i].p = C.Mats_get(cMats, i)
}
return
}
// SetPreferableBackend asks the network to use a specific computation backend.
//
// For further details, please see:
// https://docs.opencv.org/3.4/db/d30/classcv_1_1dnn_1_1Net.html#a7f767df11386d39374db49cd8df8f59e
//
func (net *Net) SetPreferableBackend(backend NetBackendType) error {
C.Net_SetPreferableBackend((C.Net)(net.p), C.int(backend))
return nil
}
// SetPreferableTarget asks the network to make computations on a specific target device.
//
// For further details, please see:
// https://docs.opencv.org/3.4/db/d30/classcv_1_1dnn_1_1Net.html#a9dddbefbc7f3defbe3eeb5dc3d3483f4
//
func (net *Net) SetPreferableTarget(target NetTargetType) error {
C.Net_SetPreferableTarget((C.Net)(net.p), C.int(target))
return nil
}
// ReadNet reads a deep learning network represented in one of the supported formats.
//
// For further details, please see:
// https://docs.opencv.org/3.4/d6/d0f/group__dnn.html#ga3b34fe7a29494a6a4295c169a7d32422
//
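// A minimal usage sketch (the file names are placeholders):
//
//	net := gocv.ReadNet("frozen_inference_graph.pb", "graph.pbtxt")
//	defer net.Close()
//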
func ReadNet(model string, config string) Net {
cModel := C.CString(model)
defer C.free(unsafe.Pointer(cModel))
cConfig := C.CString(config)
defer C.free(unsafe.Pointer(cConfig))
return Net{p: unsafe.Pointer(C.Net_ReadNet(cModel, cConfig))}
}
// ReadNetBytes reads a deep learning network represented in one of the supported formats.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga138439da76f26266fdefec9723f6c5cd
//
func ReadNetBytes(framework string, model []byte, config []byte) (Net, error) {
cFramework := C.CString(framework)
defer C.free(unsafe.Pointer(cFramework))
bModel, err := toByteArray(model)
if err != nil {
return Net{}, err
}
bConfig, err := toByteArray(config)
if err != nil {
return Net{}, err
}
return Net{p: unsafe.Pointer(C.Net_ReadNetBytes(cFramework, *bModel, *bConfig))}, nil
}
// ReadNetFromCaffe reads a network model stored in Caffe framework's format.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga29d0ea5e52b1d1a6c2681e3f7d68473a
//
func ReadNetFromCaffe(prototxt string, caffeModel string) Net {
cprototxt := C.CString(prototxt)
defer C.free(unsafe.Pointer(cprototxt))
cmodel := C.CString(caffeModel)
defer C.free(unsafe.Pointer(cmodel))
return Net{p: unsafe.Pointer(C.Net_ReadNetFromCaffe(cprototxt, cmodel))}
}
// ReadNetFromCaffeBytes reads a network model stored in Caffe model in memory.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga946b342af1355185a7107640f868b64a
//
func ReadNetFromCaffeBytes(prototxt []byte, caffeModel []byte) (Net, error) {
bPrototxt, err := toByteArray(prototxt)
if err != nil {
return Net{}, err
}
bCaffeModel, err := toByteArray(caffeModel)
if err != nil {
return Net{}, err
}
return Net{p: unsafe.Pointer(C.Net_ReadNetFromCaffeBytes(*bPrototxt, *bCaffeModel))}, nil
}
// ReadNetFromTensorflow reads a network model stored in Tensorflow framework's format.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#gad820b280978d06773234ba6841e77e8d
//
func ReadNetFromTensorflow(model string) Net {
cmodel := C.CString(model)
defer C.free(unsafe.Pointer(cmodel))
return Net{p: unsafe.Pointer(C.Net_ReadNetFromTensorflow(cmodel))}
}
// ReadNetFromTensorflowBytes reads a network model stored in Tensorflow framework's format.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#gacdba30a7c20db2788efbf5bb16a7884d
//
func ReadNetFromTensorflowBytes(model []byte) (Net, error) {
bModel, err := toByteArray(model)
if err != nil {
return Net{}, err
}
return Net{p: unsafe.Pointer(C.Net_ReadNetFromTensorflowBytes(*bModel))}, nil
}
// BlobFromImage creates a 4-dimensional blob from an image. Optionally resizes and crops
// the image from the center, subtracts mean values, scales values by scalefactor,
// and swaps Blue and Red channels.
//
// For further details, please see:
// https://docs.opencv.org/trunk/d6/d0f/group__dnn.html#ga152367f253c81b53fe6862b299f5c5cd
//
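// A typical call for a network with a 300x300 input (all values illustrative):
//
//	blob := gocv.BlobFromImage(img, 1.0/127.5, image.Pt(300, 300),
//		gocv.NewScalar(127.5, 127.5, 127.5, 0), true, false)
//	defer blob.Close()
//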
func BlobFromImage(img Mat, scaleFactor float64, size image.Point, mean Scalar,
swapRB bool, crop bool) Mat {
sz := C.struct_Size{
width: C.int(size.X),
height: C.int(size.Y),
}
sMean := C.struct_Scalar{
val1: C.double(mean.Val1),
val2: C.double(mean.Val2),
val3: C.double(mean.Val3),
val4: C.double(mean.Val4),
}
return newMat(C.Net_BlobFromImage(img.p, C.double(scaleFactor), sz, sMean, C.bool(swapRB), C.bool(crop)))
}
// BlobFromImages creates a 4-dimensional blob from a series of images.
// Optionally resizes and crops the images from the center, subtracts mean values,
// scales values by scalefactor, and swaps Blue and Red channels.
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga2b89ed84432e4395f5a1412c2926293c
//
func BlobFromImages(imgs []Mat, blob *Mat, scaleFactor float64, size image.Point, mean Scalar,
swapRB bool, crop bool, ddepth int) {
cMatArray := make([]C.Mat, len(imgs))
for i, r := range imgs {
cMatArray[i] = r.p
}
cMats := C.struct_Mats{
mats: (*C.Mat)(&cMatArray[0]),
length: C.int(len(imgs)),
}
sz := C.struct_Size{
width: C.int(size.X),
height: C.int(size.Y),
}
sMean := C.struct_Scalar{
val1: C.double(mean.Val1),
val2: C.double(mean.Val2),
val3: C.double(mean.Val3),
val4: C.double(mean.Val4),
}
C.Net_BlobFromImages(cMats, blob.p, C.double(scaleFactor), sz, sMean, C.bool(swapRB), C.bool(crop), C.int(ddepth))
}
// ImagesFromBlob parses a 4D blob and outputs the images it contains as
// 2D arrays through a simpler data structure (std::vector<cv::Mat>).
//
// For further details, please see:
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga4051b5fa2ed5f54b76c059a8625df9f5
//
func ImagesFromBlob(blob Mat, imgs []Mat) {
cMats := C.struct_Mats{}
C.Net_ImagesFromBlob(blob.p, &(cMats))
for i := C.int(0); i < cMats.length; i++ {
imgs[i].p = C.Mats_get(cMats, i)
}
}
// GetBlobChannel extracts a single (2D) channel from a 4-dimensional blob structure
// (which might, for example, contain the results of an SSD or YOLO detection,
// a bone structure from pose detection, or a color plane from colorization).
//
func GetBlobChannel(blob Mat, imgidx int, chnidx int) Mat {
return newMat(C.Net_GetBlobChannel(blob.p, C.int(imgidx), C.int(chnidx)))
}
// GetBlobSize retrieves the 4-dimensional size information in (N,C,H,W) order.
//
func GetBlobSize(blob Mat) Scalar {
s := C.Net_GetBlobSize(blob.p)
return NewScalar(float64(s.val1), float64(s.val2), float64(s.val3), float64(s.val4))
}
// Layer is a wrapper around the cv::dnn::Layer algorithm.
type Layer struct {
// C.Layer
p unsafe.Pointer
}
// GetLayer returns pointer to layer with specified id from the network.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#a70aec7f768f38c32b1ee25f3a56526df
//
func (net *Net) GetLayer(layer int) Layer {
return Layer{p: unsafe.Pointer(C.Net_GetLayer((C.Net)(net.p), C.int(layer)))}
}
// GetPerfProfile returns the overall time for inference and timings (in ticks) for layers.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#a06ce946f675f75d1c020c5ddbc78aedc
//
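// A sketch converting the result to milliseconds using the package's tick helpers:
//
//	ms := net.GetPerfProfile() / gocv.GetTickFrequency() * 1000
//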
func (net *Net) GetPerfProfile() float64 {
return float64(C.Net_GetPerfProfile((C.Net)(net.p)))
}
// GetUnconnectedOutLayers returns indexes of layers with unconnected outputs.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#ae62a73984f62c49fd3e8e689405b056a
//
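// Combined with GetLayerNames this resolves the names of the output layers;
// note the returned indexes are 1-based (layer 0 is the network input):
//
//	names := net.GetLayerNames()
//	for _, i := range net.GetUnconnectedOutLayers() {
//		fmt.Println(names[i-1])
//	}
//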
func (net *Net) GetUnconnectedOutLayers() (ids []int) {
cids := C.IntVector{}
C.Net_GetUnconnectedOutLayers((C.Net)(net.p), &cids)
h := &reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cids.val)),
Len: int(cids.length),
Cap: int(cids.length),
}
// the C side fills an array of C ints, which differ in size from Go's int
pcids := *(*[]C.int)(unsafe.Pointer(h))
for i := 0; i < int(cids.length); i++ {
ids = append(ids, int(pcids[i]))
}
return
}
// GetLayerNames returns all layer names.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d30/classcv_1_1dnn_1_1Net.html#ae8be9806024a0d1d41aba687cce99e6b
//
func (net *Net) GetLayerNames() (names []string) {
cstrs := C.CStrings{}
C.Net_GetLayerNames((C.Net)(net.p), &cstrs)
h := &reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cstrs.strs)),
Len: int(cstrs.length),
Cap: int(cstrs.length),
}
// the C side hands back an array of C string pointers, not Go string headers
pcstrs := *(*[]*C.char)(unsafe.Pointer(h))
for i := 0; i < int(cstrs.length); i++ {
names = append(names, C.GoString(pcstrs[i]))
}
return
}
// Close Layer
func (l *Layer) Close() error {
C.Layer_Close((C.Layer)(l.p))
l.p = nil
return nil
}
// GetName returns name for this layer.
func (l *Layer) GetName() string {
return C.GoString(C.Layer_GetName((C.Layer)(l.p)))
}
// GetType returns type for this layer.
func (l *Layer) GetType() string {
return C.GoString(C.Layer_GetType((C.Layer)(l.p)))
}
// InputNameToIndex returns index of input blob in input array.
//
// For further details, please see:
// https://docs.opencv.org/master/d3/d6c/classcv_1_1dnn_1_1Layer.html#a60ffc8238f3fa26cd3f49daa7ac0884b
//
func (l *Layer) InputNameToIndex(name string) int {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
return int(C.Layer_InputNameToIndex((C.Layer)(l.p), cName))
}
// OutputNameToIndex returns index of output blob in output array.
//
// For further details, please see:
// https://docs.opencv.org/master/d3/d6c/classcv_1_1dnn_1_1Layer.html#a60ffc8238f3fa26cd3f49daa7ac0884b
//
func (l *Layer) OutputNameToIndex(name string) int {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
return int(C.Layer_OutputNameToIndex((C.Layer)(l.p), cName))
}

58
vendor/gocv.io/x/gocv/dnn.h generated vendored Normal file
View File

@ -0,0 +1,58 @@
#ifndef _OPENCV3_DNN_H_
#define _OPENCV3_DNN_H_
#include <stdbool.h>
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
extern "C" {
#endif
#include "core.h"
#ifdef __cplusplus
typedef cv::dnn::Net* Net;
typedef cv::Ptr<cv::dnn::Layer>* Layer;
#else
typedef void* Net;
typedef void* Layer;
#endif
Net Net_ReadNet(const char* model, const char* config);
Net Net_ReadNetBytes(const char* framework, struct ByteArray model, struct ByteArray config);
Net Net_ReadNetFromCaffe(const char* prototxt, const char* caffeModel);
Net Net_ReadNetFromCaffeBytes(struct ByteArray prototxt, struct ByteArray caffeModel);
Net Net_ReadNetFromTensorflow(const char* model);
Net Net_ReadNetFromTensorflowBytes(struct ByteArray model);
Mat Net_BlobFromImage(Mat image, double scalefactor, Size size, Scalar mean, bool swapRB,
bool crop);
void Net_BlobFromImages(struct Mats images, Mat blob, double scalefactor, Size size,
Scalar mean, bool swapRB, bool crop, int ddepth);
void Net_ImagesFromBlob(Mat blob_, struct Mats* images_);
void Net_Close(Net net);
bool Net_Empty(Net net);
void Net_SetInput(Net net, Mat blob, const char* name);
Mat Net_Forward(Net net, const char* outputName);
void Net_ForwardLayers(Net net, struct Mats* outputBlobs, struct CStrings outBlobNames);
void Net_SetPreferableBackend(Net net, int backend);
void Net_SetPreferableTarget(Net net, int target);
int64_t Net_GetPerfProfile(Net net);
void Net_GetUnconnectedOutLayers(Net net, IntVector* res);
void Net_GetLayerNames(Net net, CStrings* names);
Mat Net_GetBlobChannel(Mat blob, int imgidx, int chnidx);
Scalar Net_GetBlobSize(Mat blob);
Layer Net_GetLayer(Net net, int layerid);
void Layer_Close(Layer layer);
int Layer_InputNameToIndex(Layer layer, const char* name);
int Layer_OutputNameToIndex(Layer layer, const char* name);
const char* Layer_GetName(Layer layer);
const char* Layer_GetType(Layer layer);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_DNN_H_

26
vendor/gocv.io/x/gocv/dnn_async_openvino.go generated vendored Normal file
View File

@ -0,0 +1,26 @@
// +build openvino
package gocv
import (
"unsafe"
)
/*
#include <stdlib.h>
#include "dnn.h"
#include "asyncarray.h"
*/
import "C"
// ForwardAsync runs a forward pass to compute the output of the layer with name outputName.
//
// For further details, please see:
// https://docs.opencv.org/trunk/db/d30/classcv_1_1dnn_1_1Net.html#a814890154ea9e10b132fec00b6f6ba30
//
func (net *Net) ForwardAsync(outputName string) AsyncArray {
cName := C.CString(outputName)
defer C.free(unsafe.Pointer(cName))
return newAsyncArray(C.Net_forwardAsync((C.Net)(net.p), cName))
}

67
vendor/gocv.io/x/gocv/dnn_ext.go generated vendored Normal file
View File

@ -0,0 +1,67 @@
package gocv
import (
"image"
)
// FP16BlobFromImage is an extended helper function to convert an Image to a half-float blob, as used by
// the Movidius Neural Compute Stick.
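//
// A minimal sketch (mean and scale values are illustrative):
//
//	data := gocv.FP16BlobFromImage(img, 1.0/255.0, image.Pt(224, 224), 0, true, false)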
func FP16BlobFromImage(img Mat, scaleFactor float32, size image.Point, mean float32,
swapRB bool, crop bool) []byte {
// resizes image so it maintains aspect ratio
width := float32(img.Cols())
height := float32(img.Rows())
square := NewMatWithSize(size.Y, size.X, img.Type())
defer square.Close()
maxDim := height
var scale float32 = 1.0
if width >= height {
maxDim = width
scale = float32(size.X) / float32(maxDim)
} else {
scale = float32(size.Y) / float32(maxDim)
}
var roi image.Rectangle
if width >= height {
roi.Min.X = 0
roi.Min.Y = int(float32(size.Y)-height*scale) / 2
roi.Max.X = size.X
roi.Max.Y = int(height * scale)
} else {
roi.Min.X = int(float32(size.X)-width*scale) / 2
roi.Min.Y = 0
roi.Max.X = int(width * scale)
roi.Max.Y = size.Y
}
Resize(img, &square, roi.Max, 0, 0, InterpolationDefault)
if swapRB {
CvtColor(square, &square, ColorBGRToRGB)
}
fp32Image := NewMat()
defer fp32Image.Close()
square.ConvertTo(&fp32Image, MatTypeCV32F)
if mean != 0 {
// subtract mean
fp32Image.SubtractFloat(mean)
}
if scaleFactor != 1.0 {
// multiply by scale factor
fp32Image.MultiplyFloat(scaleFactor)
}
fp16Blob := fp32Image.ConvertFp16()
defer fp16Blob.Close()
return fp16Blob.ToBytes()
}

35
vendor/gocv.io/x/gocv/dnn_string.go generated vendored Normal file
View File

@ -0,0 +1,35 @@
package gocv
func (c NetBackendType) String() string {
switch c {
case NetBackendDefault:
return ""
case NetBackendHalide:
return "halide"
case NetBackendOpenVINO:
return "openvino"
case NetBackendOpenCV:
return "opencv"
case NetBackendVKCOM:
return "vulkan"
}
return ""
}
func (c NetTargetType) String() string {
switch c {
case NetTargetCPU:
return "cpu"
case NetTargetFP32:
return "fp32"
case NetTargetFP16:
return "fp16"
case NetTargetVPU:
return "vpu"
case NetTargetVulkan:
return "vulkan"
case NetTargetFPGA:
return "fpga"
}
return ""
}

2
vendor/gocv.io/x/gocv/env.cmd generated vendored Normal file
View File

@ -0,0 +1,2 @@
ECHO This script is no longer necessary and has been deprecated.
ECHO See the Custom Environment section of the README if you need to customize your environment.

2
vendor/gocv.io/x/gocv/env.sh generated vendored Normal file
View File

@ -0,0 +1,2 @@
echo "This script is no longer necessary and has been deprecated."
echo "See the Custom Environment section of the README if you need to customize your environment."

430
vendor/gocv.io/x/gocv/features2d.cpp generated vendored Normal file
View File

@ -0,0 +1,430 @@
#include "features2d.h"
AKAZE AKAZE_Create() {
// TODO: params
return new cv::Ptr<cv::AKAZE>(cv::AKAZE::create());
}
void AKAZE_Close(AKAZE a) {
delete a;
}
struct KeyPoints AKAZE_Detect(AKAZE a, Mat src) {
std::vector<cv::KeyPoint> detected;
(*a)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
struct KeyPoints AKAZE_DetectAndCompute(AKAZE a, Mat src, Mat mask, Mat desc) {
std::vector<cv::KeyPoint> detected;
(*a)->detectAndCompute(*src, *mask, detected, *desc);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
AgastFeatureDetector AgastFeatureDetector_Create() {
// TODO: params
return new cv::Ptr<cv::AgastFeatureDetector>(cv::AgastFeatureDetector::create());
}
void AgastFeatureDetector_Close(AgastFeatureDetector a) {
delete a;
}
struct KeyPoints AgastFeatureDetector_Detect(AgastFeatureDetector a, Mat src) {
std::vector<cv::KeyPoint> detected;
(*a)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
BRISK BRISK_Create() {
// TODO: params
return new cv::Ptr<cv::BRISK>(cv::BRISK::create());
}
void BRISK_Close(BRISK b) {
delete b;
}
struct KeyPoints BRISK_Detect(BRISK b, Mat src) {
std::vector<cv::KeyPoint> detected;
(*b)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
struct KeyPoints BRISK_DetectAndCompute(BRISK b, Mat src, Mat mask, Mat desc) {
std::vector<cv::KeyPoint> detected;
(*b)->detectAndCompute(*src, *mask, detected, *desc);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
GFTTDetector GFTTDetector_Create() {
// TODO: params
return new cv::Ptr<cv::GFTTDetector>(cv::GFTTDetector::create());
}
void GFTTDetector_Close(GFTTDetector a) {
delete a;
}
struct KeyPoints GFTTDetector_Detect(GFTTDetector a, Mat src) {
std::vector<cv::KeyPoint> detected;
(*a)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
KAZE KAZE_Create() {
// TODO: params
return new cv::Ptr<cv::KAZE>(cv::KAZE::create());
}
void KAZE_Close(KAZE a) {
delete a;
}
struct KeyPoints KAZE_Detect(KAZE a, Mat src) {
std::vector<cv::KeyPoint> detected;
(*a)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
struct KeyPoints KAZE_DetectAndCompute(KAZE a, Mat src, Mat mask, Mat desc) {
std::vector<cv::KeyPoint> detected;
(*a)->detectAndCompute(*src, *mask, detected, *desc);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
MSER MSER_Create() {
// TODO: params
return new cv::Ptr<cv::MSER>(cv::MSER::create());
}
void MSER_Close(MSER a) {
delete a;
}
struct KeyPoints MSER_Detect(MSER a, Mat src) {
std::vector<cv::KeyPoint> detected;
(*a)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
FastFeatureDetector FastFeatureDetector_Create() {
return new cv::Ptr<cv::FastFeatureDetector>(cv::FastFeatureDetector::create());
}
void FastFeatureDetector_Close(FastFeatureDetector f) {
delete f;
}
FastFeatureDetector FastFeatureDetector_CreateWithParams(int threshold, bool nonmaxSuppression, int type) {
return new cv::Ptr<cv::FastFeatureDetector>(cv::FastFeatureDetector::create(threshold,nonmaxSuppression,static_cast<cv::FastFeatureDetector::DetectorType>(type)));
}
struct KeyPoints FastFeatureDetector_Detect(FastFeatureDetector f, Mat src) {
std::vector<cv::KeyPoint> detected;
(*f)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
ORB ORB_Create() {
// TODO: params
return new cv::Ptr<cv::ORB>(cv::ORB::create());
}
void ORB_Close(ORB o) {
delete o;
}
struct KeyPoints ORB_Detect(ORB o, Mat src) {
std::vector<cv::KeyPoint> detected;
(*o)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
struct KeyPoints ORB_DetectAndCompute(ORB o, Mat src, Mat mask, Mat desc) {
std::vector<cv::KeyPoint> detected;
(*o)->detectAndCompute(*src, *mask, detected, *desc);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
cv::SimpleBlobDetector::Params ConvertCParamsToCPPParams(SimpleBlobDetectorParams params) {
cv::SimpleBlobDetector::Params converted;
converted.blobColor = params.blobColor;
converted.filterByArea = params.filterByArea;
converted.filterByCircularity = params.filterByCircularity;
converted.filterByColor = params.filterByColor;
converted.filterByConvexity = params.filterByConvexity;
converted.filterByInertia = params.filterByInertia;
converted.maxArea = params.maxArea;
converted.maxCircularity = params.maxCircularity;
converted.maxConvexity = params.maxConvexity;
converted.maxInertiaRatio = params.maxInertiaRatio;
converted.maxThreshold = params.maxThreshold;
converted.minArea = params.minArea;
converted.minCircularity = params.minCircularity;
converted.minConvexity = params.minConvexity;
converted.minDistBetweenBlobs = params.minDistBetweenBlobs;
converted.minInertiaRatio = params.minInertiaRatio;
converted.minRepeatability = params.minRepeatability;
converted.minThreshold = params.minThreshold;
converted.thresholdStep = params.thresholdStep;
return converted;
}
SimpleBlobDetectorParams ConvertCPPParamsToCParams(cv::SimpleBlobDetector::Params params) {
SimpleBlobDetectorParams converted;
converted.blobColor = params.blobColor;
converted.filterByArea = params.filterByArea;
converted.filterByCircularity = params.filterByCircularity;
converted.filterByColor = params.filterByColor;
converted.filterByConvexity = params.filterByConvexity;
converted.filterByInertia = params.filterByInertia;
converted.maxArea = params.maxArea;
converted.maxCircularity = params.maxCircularity;
converted.maxConvexity = params.maxConvexity;
converted.maxInertiaRatio = params.maxInertiaRatio;
converted.maxThreshold = params.maxThreshold;
converted.minArea = params.minArea;
converted.minCircularity = params.minCircularity;
converted.minConvexity = params.minConvexity;
converted.minDistBetweenBlobs = params.minDistBetweenBlobs;
converted.minInertiaRatio = params.minInertiaRatio;
converted.minRepeatability = params.minRepeatability;
converted.minThreshold = params.minThreshold;
converted.thresholdStep = params.thresholdStep;
return converted;
}
SimpleBlobDetector SimpleBlobDetector_Create_WithParams(SimpleBlobDetectorParams params) {
return new cv::Ptr<cv::SimpleBlobDetector>(cv::SimpleBlobDetector::create(ConvertCParamsToCPPParams(params)));
}
SimpleBlobDetector SimpleBlobDetector_Create() {
return new cv::Ptr<cv::SimpleBlobDetector>(cv::SimpleBlobDetector::create());
}
SimpleBlobDetectorParams SimpleBlobDetectorParams_Create() {
return ConvertCPPParamsToCParams(cv::SimpleBlobDetector::Params());
}
void SimpleBlobDetector_Close(SimpleBlobDetector b) {
delete b;
}
struct KeyPoints SimpleBlobDetector_Detect(SimpleBlobDetector b, Mat src) {
std::vector<cv::KeyPoint> detected;
(*b)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
BFMatcher BFMatcher_Create() {
return new cv::Ptr<cv::BFMatcher>(cv::BFMatcher::create());
}
BFMatcher BFMatcher_CreateWithParams(int normType, bool crossCheck) {
return new cv::Ptr<cv::BFMatcher>(cv::BFMatcher::create(normType, crossCheck));
}
void BFMatcher_Close(BFMatcher b) {
delete b;
}
struct MultiDMatches BFMatcher_KnnMatch(BFMatcher b, Mat query, Mat train, int k) {
std::vector< std::vector<cv::DMatch> > matches;
(*b)->knnMatch(*query, *train, matches, k);
DMatches *dms = new DMatches[matches.size()];
for (size_t i = 0; i < matches.size(); ++i) {
DMatch *dmatches = new DMatch[matches[i].size()];
for (size_t j = 0; j < matches[i].size(); ++j) {
DMatch dmatch = {matches[i][j].queryIdx, matches[i][j].trainIdx, matches[i][j].imgIdx,
matches[i][j].distance};
dmatches[j] = dmatch;
}
dms[i] = {dmatches, (int) matches[i].size()};
}
MultiDMatches ret = {dms, (int) matches.size()};
return ret;
}
struct MultiDMatches BFMatcher_KnnMatchWithParams(BFMatcher b, Mat query, Mat train, int k, Mat mask, bool compactResult) {
std::vector< std::vector<cv::DMatch> > matches;
(*b)->knnMatch(*query, *train, matches, k, *mask, compactResult);
DMatches *dms = new DMatches[matches.size()];
for (size_t i = 0; i < matches.size(); ++i) {
DMatch *dmatches = new DMatch[matches[i].size()];
for (size_t j = 0; j < matches[i].size(); ++j) {
DMatch dmatch = {matches[i][j].queryIdx, matches[i][j].trainIdx, matches[i][j].imgIdx,
matches[i][j].distance};
dmatches[j] = dmatch;
}
dms[i] = {dmatches, (int) matches[i].size()};
}
MultiDMatches ret = {dms, (int) matches.size()};
return ret;
}
void DrawKeyPoints(Mat src, struct KeyPoints kp, Mat dst, Scalar s, int flags) {
std::vector<cv::KeyPoint> keypts;
cv::KeyPoint keypt;
for (int i = 0; i < kp.length; ++i) {
keypt = cv::KeyPoint(kp.keypoints[i].x, kp.keypoints[i].y,
kp.keypoints[i].size, kp.keypoints[i].angle, kp.keypoints[i].response,
kp.keypoints[i].octave, kp.keypoints[i].classID);
keypts.push_back(keypt);
}
cv::Scalar color = cv::Scalar(s.val1, s.val2, s.val3, s.val4);
cv::drawKeypoints(*src, keypts, *dst, color, static_cast<cv::DrawMatchesFlags>(flags));
}

750
vendor/gocv.io/x/gocv/features2d.go generated vendored Normal file

@@ -0,0 +1,750 @@
package gocv
/*
#include <stdlib.h>
#include "features2d.h"
*/
import "C"
import (
"image/color"
"reflect"
"unsafe"
)
// AKAZE is a wrapper around the cv::AKAZE algorithm.
type AKAZE struct {
// C.AKAZE
p unsafe.Pointer
}
// NewAKAZE returns a new AKAZE algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/d8/d30/classcv_1_1AKAZE.html
//
func NewAKAZE() AKAZE {
return AKAZE{p: unsafe.Pointer(C.AKAZE_Create())}
}
// Close AKAZE.
func (a *AKAZE) Close() error {
C.AKAZE_Close((C.AKAZE)(a.p))
a.p = nil
return nil
}
// Detect keypoints in an image using AKAZE.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (a *AKAZE) Detect(src Mat) []KeyPoint {
ret := C.AKAZE_Detect((C.AKAZE)(a.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// DetectAndCompute detects keypoints and computes descriptors in an image using AKAZE.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#a8be0d1c20b08eb867184b8d74c15a677
//
func (a *AKAZE) DetectAndCompute(src Mat, mask Mat) ([]KeyPoint, Mat) {
desc := NewMat()
ret := C.AKAZE_DetectAndCompute((C.AKAZE)(a.p), src.p, mask.p, desc.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret), desc
}
// AgastFeatureDetector is a wrapper around the cv::AgastFeatureDetector.
type AgastFeatureDetector struct {
// C.AgastFeatureDetector
p unsafe.Pointer
}
// NewAgastFeatureDetector returns a new AgastFeatureDetector algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/d7/d19/classcv_1_1AgastFeatureDetector.html
//
func NewAgastFeatureDetector() AgastFeatureDetector {
return AgastFeatureDetector{p: unsafe.Pointer(C.AgastFeatureDetector_Create())}
}
// Close AgastFeatureDetector.
func (a *AgastFeatureDetector) Close() error {
C.AgastFeatureDetector_Close((C.AgastFeatureDetector)(a.p))
a.p = nil
return nil
}
// Detect keypoints in an image using AgastFeatureDetector.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (a *AgastFeatureDetector) Detect(src Mat) []KeyPoint {
ret := C.AgastFeatureDetector_Detect((C.AgastFeatureDetector)(a.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// BRISK is a wrapper around the cv::BRISK algorithm.
type BRISK struct {
// C.BRISK
p unsafe.Pointer
}
// NewBRISK returns a new BRISK algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/de/dbf/classcv_1_1BRISK.html
//
func NewBRISK() BRISK {
return BRISK{p: unsafe.Pointer(C.BRISK_Create())}
}
// Close BRISK.
func (b *BRISK) Close() error {
C.BRISK_Close((C.BRISK)(b.p))
b.p = nil
return nil
}
// Detect keypoints in an image using BRISK.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (b *BRISK) Detect(src Mat) []KeyPoint {
ret := C.BRISK_Detect((C.BRISK)(b.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// DetectAndCompute detects keypoints and computes descriptors in an image using BRISK.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#a8be0d1c20b08eb867184b8d74c15a677
//
func (b *BRISK) DetectAndCompute(src Mat, mask Mat) ([]KeyPoint, Mat) {
desc := NewMat()
ret := C.BRISK_DetectAndCompute((C.BRISK)(b.p), src.p, mask.p, desc.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret), desc
}
// FastFeatureDetectorType defines the detector type
//
// For further details, please see:
// https://docs.opencv.org/master/df/d74/classcv_1_1FastFeatureDetector.html#a4654f6fb0aa4b8e9123b223bfa0e2a08
type FastFeatureDetectorType int
const (
// FastFeatureDetectorType58 is an alias of FastFeatureDetector::TYPE_5_8
FastFeatureDetectorType58 FastFeatureDetectorType = 0
// FastFeatureDetectorType712 is an alias of FastFeatureDetector::TYPE_7_12
FastFeatureDetectorType712 FastFeatureDetectorType = 1
// FastFeatureDetectorType916 is an alias of FastFeatureDetector::TYPE_9_16
FastFeatureDetectorType916 FastFeatureDetectorType = 2
)
// FastFeatureDetector is a wrapper around the cv::FastFeatureDetector.
type FastFeatureDetector struct {
// C.FastFeatureDetector
p unsafe.Pointer
}
// NewFastFeatureDetector returns a new FastFeatureDetector algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/df/d74/classcv_1_1FastFeatureDetector.html
//
func NewFastFeatureDetector() FastFeatureDetector {
return FastFeatureDetector{p: unsafe.Pointer(C.FastFeatureDetector_Create())}
}
// NewFastFeatureDetectorWithParams returns a new FastFeatureDetector algorithm with parameters
//
// For further details, please see:
// https://docs.opencv.org/master/df/d74/classcv_1_1FastFeatureDetector.html#ab986f2ff8f8778aab1707e2642bc7f8e
//
func NewFastFeatureDetectorWithParams(threshold int, nonmaxSuppression bool, typ FastFeatureDetectorType) FastFeatureDetector {
return FastFeatureDetector{p: unsafe.Pointer(C.FastFeatureDetector_CreateWithParams(C.int(threshold), C.bool(nonmaxSuppression), C.int(typ)))}
}
// Close FastFeatureDetector.
func (f *FastFeatureDetector) Close() error {
C.FastFeatureDetector_Close((C.FastFeatureDetector)(f.p))
f.p = nil
return nil
}
// Detect keypoints in an image using FastFeatureDetector.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (f *FastFeatureDetector) Detect(src Mat) []KeyPoint {
ret := C.FastFeatureDetector_Detect((C.FastFeatureDetector)(f.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// GFTTDetector is a wrapper around the cv::GFTTDetector algorithm.
type GFTTDetector struct {
// C.GFTTDetector
p unsafe.Pointer
}
// NewGFTTDetector returns a new GFTTDetector algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/df/d21/classcv_1_1GFTTDetector.html
//
func NewGFTTDetector() GFTTDetector {
return GFTTDetector{p: unsafe.Pointer(C.GFTTDetector_Create())}
}
// Close GFTTDetector.
func (a *GFTTDetector) Close() error {
C.GFTTDetector_Close((C.GFTTDetector)(a.p))
a.p = nil
return nil
}
// Detect keypoints in an image using GFTTDetector.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (a *GFTTDetector) Detect(src Mat) []KeyPoint {
ret := C.GFTTDetector_Detect((C.GFTTDetector)(a.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// KAZE is a wrapper around the cv::KAZE algorithm.
type KAZE struct {
// C.KAZE
p unsafe.Pointer
}
// NewKAZE returns a new KAZE algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/d3/d61/classcv_1_1KAZE.html
//
func NewKAZE() KAZE {
return KAZE{p: unsafe.Pointer(C.KAZE_Create())}
}
// Close KAZE.
func (a *KAZE) Close() error {
C.KAZE_Close((C.KAZE)(a.p))
a.p = nil
return nil
}
// Detect keypoints in an image using KAZE.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (a *KAZE) Detect(src Mat) []KeyPoint {
ret := C.KAZE_Detect((C.KAZE)(a.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// DetectAndCompute detects keypoints and computes descriptors in an image using KAZE.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#a8be0d1c20b08eb867184b8d74c15a677
//
func (a *KAZE) DetectAndCompute(src Mat, mask Mat) ([]KeyPoint, Mat) {
desc := NewMat()
ret := C.KAZE_DetectAndCompute((C.KAZE)(a.p), src.p, mask.p, desc.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret), desc
}
// MSER is a wrapper around the cv::MSER algorithm.
type MSER struct {
// C.MSER
p unsafe.Pointer
}
// NewMSER returns a new MSER algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/d3/d28/classcv_1_1MSER.html
//
func NewMSER() MSER {
return MSER{p: unsafe.Pointer(C.MSER_Create())}
}
// Close MSER.
func (a *MSER) Close() error {
C.MSER_Close((C.MSER)(a.p))
a.p = nil
return nil
}
// Detect keypoints in an image using MSER.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (a *MSER) Detect(src Mat) []KeyPoint {
ret := C.MSER_Detect((C.MSER)(a.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// ORB is a wrapper around the cv::ORB.
type ORB struct {
// C.ORB
p unsafe.Pointer
}
// NewORB returns a new ORB algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/db/d95/classcv_1_1ORB.html
//
func NewORB() ORB {
return ORB{p: unsafe.Pointer(C.ORB_Create())}
}
// Close ORB.
func (o *ORB) Close() error {
C.ORB_Close((C.ORB)(o.p))
o.p = nil
return nil
}
// Detect keypoints in an image using ORB.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (o *ORB) Detect(src Mat) []KeyPoint {
ret := C.ORB_Detect((C.ORB)(o.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// DetectAndCompute detects keypoints and computes descriptors from an image using ORB.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#a8be0d1c20b08eb867184b8d74c15a677
//
func (o *ORB) DetectAndCompute(src Mat, mask Mat) ([]KeyPoint, Mat) {
desc := NewMat()
ret := C.ORB_DetectAndCompute((C.ORB)(o.p), src.p, mask.p, desc.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret), desc
}
// SimpleBlobDetector is a wrapper around the cv::SimpleBlobDetector.
type SimpleBlobDetector struct {
// C.SimpleBlobDetector
p unsafe.Pointer
}
// SimpleBlobDetectorParams is a wrapper around the cv::SimpleBlobDetector::Params
type SimpleBlobDetectorParams struct {
p C.SimpleBlobDetectorParams
}
// NewSimpleBlobDetector returns a new SimpleBlobDetector algorithm
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d7a/classcv_1_1SimpleBlobDetector.html
//
func NewSimpleBlobDetector() SimpleBlobDetector {
return SimpleBlobDetector{p: unsafe.Pointer(C.SimpleBlobDetector_Create())}
}
// NewSimpleBlobDetectorWithParams returns a new SimpleBlobDetector with custom parameters
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d7a/classcv_1_1SimpleBlobDetector.html
//
func NewSimpleBlobDetectorWithParams(params SimpleBlobDetectorParams) SimpleBlobDetector {
return SimpleBlobDetector{p: unsafe.Pointer(C.SimpleBlobDetector_Create_WithParams(params.p))}
}
// Close SimpleBlobDetector.
func (b *SimpleBlobDetector) Close() error {
C.SimpleBlobDetector_Close((C.SimpleBlobDetector)(b.p))
b.p = nil
return nil
}
// NewSimpleBlobDetectorParams returns the default parameters for the SimpleBlobDetector
func NewSimpleBlobDetectorParams() SimpleBlobDetectorParams {
return SimpleBlobDetectorParams{p: C.SimpleBlobDetectorParams_Create()}
}
// SetBlobColor sets the blobColor field
func (p *SimpleBlobDetectorParams) SetBlobColor(blobColor int) {
p.p.blobColor = C.uchar(blobColor)
}
// GetBlobColor gets the blobColor field
func (p *SimpleBlobDetectorParams) GetBlobColor() int {
return int(p.p.blobColor)
}
// SetFilterByArea sets the filterByArea field
func (p *SimpleBlobDetectorParams) SetFilterByArea(filterByArea bool) {
p.p.filterByArea = C.bool(filterByArea)
}
// GetFilterByArea gets the filterByArea field
func (p *SimpleBlobDetectorParams) GetFilterByArea() bool {
return bool(p.p.filterByArea)
}
// SetFilterByCircularity sets the filterByCircularity field
func (p *SimpleBlobDetectorParams) SetFilterByCircularity(filterByCircularity bool) {
p.p.filterByCircularity = C.bool(filterByCircularity)
}
// GetFilterByCircularity gets the filterByCircularity field
func (p *SimpleBlobDetectorParams) GetFilterByCircularity() bool {
return bool(p.p.filterByCircularity)
}
// SetFilterByColor sets the filterByColor field
func (p *SimpleBlobDetectorParams) SetFilterByColor(filterByColor bool) {
p.p.filterByColor = C.bool(filterByColor)
}
// GetFilterByColor gets the filterByColor field
func (p *SimpleBlobDetectorParams) GetFilterByColor() bool {
return bool(p.p.filterByColor)
}
// SetFilterByConvexity sets the filterByConvexity field
func (p *SimpleBlobDetectorParams) SetFilterByConvexity(filterByConvexity bool) {
p.p.filterByConvexity = C.bool(filterByConvexity)
}
// GetFilterByConvexity gets the filterByConvexity field
func (p *SimpleBlobDetectorParams) GetFilterByConvexity() bool {
return bool(p.p.filterByConvexity)
}
// SetFilterByInertia sets the filterByInertia field
func (p *SimpleBlobDetectorParams) SetFilterByInertia(filterByInertia bool) {
p.p.filterByInertia = C.bool(filterByInertia)
}
// GetFilterByInertia gets the filterByInertia field
func (p *SimpleBlobDetectorParams) GetFilterByInertia() bool {
return bool(p.p.filterByInertia)
}
// SetMaxArea sets the maxArea parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMaxArea(maxArea float64) {
p.p.maxArea = C.float(maxArea)
}
// GetMaxArea gets the maxArea parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMaxArea() float64 {
return float64(p.p.maxArea)
}
// SetMaxCircularity sets the maxCircularity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMaxCircularity(maxCircularity float64) {
p.p.maxCircularity = C.float(maxCircularity)
}
// GetMaxCircularity gets the maxCircularity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMaxCircularity() float64 {
return float64(p.p.maxCircularity)
}
// SetMaxConvexity sets the maxConvexity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMaxConvexity(maxConvexity float64) {
p.p.maxConvexity = C.float(maxConvexity)
}
// GetMaxConvexity gets the maxConvexity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMaxConvexity() float64 {
return float64(p.p.maxConvexity)
}
// SetMaxInertiaRatio sets the maxInertiaRatio parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMaxInertiaRatio(maxInertiaRatio float64) {
p.p.maxInertiaRatio = C.float(maxInertiaRatio)
}
// GetMaxInertiaRatio gets the maxInertiaRatio parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMaxInertiaRatio() float64 {
return float64(p.p.maxInertiaRatio)
}
// SetMaxThreshold sets the maxThreshold parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMaxThreshold(maxThreshold float64) {
p.p.maxThreshold = C.float(maxThreshold)
}
// GetMaxThreshold gets the maxThreshold parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMaxThreshold() float64 {
return float64(p.p.maxThreshold)
}
// SetMinArea sets the minArea parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinArea(minArea float64) {
p.p.minArea = C.float(minArea)
}
// GetMinArea gets the minArea parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinArea() float64 {
return float64(p.p.minArea)
}
// SetMinCircularity sets the minCircularity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinCircularity(minCircularity float64) {
p.p.minCircularity = C.float(minCircularity)
}
// GetMinCircularity gets the minCircularity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinCircularity() float64 {
return float64(p.p.minCircularity)
}
// SetMinConvexity sets the minConvexity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinConvexity(minConvexity float64) {
p.p.minConvexity = C.float(minConvexity)
}
// GetMinConvexity gets the minConvexity parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinConvexity() float64 {
return float64(p.p.minConvexity)
}
// SetMinDistBetweenBlobs sets the minDistBetweenBlobs parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinDistBetweenBlobs(minDistBetweenBlobs float64) {
p.p.minDistBetweenBlobs = C.float(minDistBetweenBlobs)
}
// GetMinDistBetweenBlobs gets the minDistBetweenBlobs parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinDistBetweenBlobs() float64 {
return float64(p.p.minDistBetweenBlobs)
}
// SetMinInertiaRatio sets the minInertiaRatio parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinInertiaRatio(minInertiaRatio float64) {
p.p.minInertiaRatio = C.float(minInertiaRatio)
}
// GetMinInertiaRatio gets the minInertiaRatio parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinInertiaRatio() float64 {
return float64(p.p.minInertiaRatio)
}
// SetMinRepeatability sets the minRepeatability parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinRepeatability(minRepeatability int) {
p.p.minRepeatability = C.size_t(minRepeatability)
}
// GetMinRepeatability gets the minRepeatability parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinRepeatability() int {
return int(p.p.minRepeatability)
}
// SetMinThreshold sets the minThreshold parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetMinThreshold(minThreshold float64) {
p.p.minThreshold = C.float(minThreshold)
}
// GetMinThreshold gets the minThreshold parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetMinThreshold() float64 {
return float64(p.p.minThreshold)
}
// SetThresholdStep sets the thresholdStep parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) SetThresholdStep(thresholdStep float64) {
p.p.thresholdStep = C.float(thresholdStep)
}
// GetThresholdStep gets the thresholdStep parameter for SimpleBlobDetector_Params
func (p *SimpleBlobDetectorParams) GetThresholdStep() float64 {
return float64(p.p.thresholdStep)
}
// Detect keypoints in an image using SimpleBlobDetector.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (b *SimpleBlobDetector) Detect(src Mat) []KeyPoint {
ret := C.SimpleBlobDetector_Detect((C.SimpleBlobDetector)(b.p), src.p)
defer C.KeyPoints_Close(ret)
return getKeyPoints(ret)
}
// getKeyPoints returns a slice of KeyPoint given a C.KeyPoints struct
func getKeyPoints(ret C.KeyPoints) []KeyPoint {
cArray := ret.keypoints
length := int(ret.length)
hdr := reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cArray)),
Len: length,
Cap: length,
}
s := *(*[]C.KeyPoint)(unsafe.Pointer(&hdr))
keys := make([]KeyPoint, length)
for i, r := range s {
keys[i] = KeyPoint{float64(r.x), float64(r.y), float64(r.size), float64(r.angle), float64(r.response),
int(r.octave), int(r.classID)}
}
return keys
}
// BFMatcher is a wrapper around the cv::BFMatcher algorithm
type BFMatcher struct {
// C.BFMatcher
p unsafe.Pointer
}
// NewBFMatcher returns a new BFMatcher
//
// For further details, please see:
// https://docs.opencv.org/master/d3/da1/classcv_1_1BFMatcher.html#abe0bb11749b30d97f60d6ade665617bd
//
func NewBFMatcher() BFMatcher {
return BFMatcher{p: unsafe.Pointer(C.BFMatcher_Create())}
}
// NewBFMatcherWithParams creates a new BFMatcher, allowing parameters
// to be set to values other than the defaults.
//
// For further details, please see:
// https://docs.opencv.org/master/d3/da1/classcv_1_1BFMatcher.html#abe0bb11749b30d97f60d6ade665617bd
//
func NewBFMatcherWithParams(normType NormType, crossCheck bool) BFMatcher {
return BFMatcher{p: unsafe.Pointer(C.BFMatcher_CreateWithParams(C.int(normType), C.bool(crossCheck)))}
}
// Close BFMatcher
func (b *BFMatcher) Close() error {
C.BFMatcher_Close((C.BFMatcher)(b.p))
b.p = nil
return nil
}
// KnnMatch finds the k best matches for each descriptor from a query set.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d39/classcv_1_1DescriptorMatcher.html#aa880f9353cdf185ccf3013e08210483a
//
func (b *BFMatcher) KnnMatch(query, train Mat, k int) [][]DMatch {
ret := C.BFMatcher_KnnMatch((C.BFMatcher)(b.p), query.p, train.p, C.int(k))
defer C.MultiDMatches_Close(ret)
return getMultiDMatches(ret)
}
func getMultiDMatches(ret C.MultiDMatches) [][]DMatch {
cArray := ret.dmatches
length := int(ret.length)
hdr := reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cArray)),
Len: length,
Cap: length,
}
s := *(*[]C.DMatches)(unsafe.Pointer(&hdr))
keys := make([][]DMatch, length)
for i := range s {
keys[i] = getDMatches(C.MultiDMatches_get(ret, C.int(i)))
}
return keys
}
func getDMatches(ret C.DMatches) []DMatch {
cArray := ret.dmatches
length := int(ret.length)
hdr := reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cArray)),
Len: length,
Cap: length,
}
s := *(*[]C.DMatch)(unsafe.Pointer(&hdr))
keys := make([]DMatch, length)
for i, r := range s {
keys[i] = DMatch{int(r.queryIdx), int(r.trainIdx), int(r.imgIdx),
float64(r.distance)}
}
return keys
}
// DrawMatchesFlag are the flags that control how keypoints and matches are drawn
//
// For further details please see:
// https://docs.opencv.org/master/de/d30/structcv_1_1DrawMatchesFlags.html
type DrawMatchesFlag int
const (
// DrawDefault creates a new output image and draws only the center point of each keypoint
DrawDefault DrawMatchesFlag = 0
// DrawOverOutImg draws matches on the existing content of the output image
DrawOverOutImg DrawMatchesFlag = 1
// NotDrawSinglePoints will not draw single keypoints
NotDrawSinglePoints DrawMatchesFlag = 2
// DrawRichKeyPoints draws the circle around each keypoint with keypoint size and orientation
// (cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS, which has the value 4)
DrawRichKeyPoints DrawMatchesFlag = 4
)
// DrawKeyPoints draws keypoints
//
// For further details please see:
// https://docs.opencv.org/master/d4/d5d/group__features2d__draw.html#gab958f8900dd10f14316521c149a60433
func DrawKeyPoints(src Mat, keyPoints []KeyPoint, dst *Mat, color color.RGBA, flag DrawMatchesFlag) {
cKeyPointArray := make([]C.struct_KeyPoint, len(keyPoints))
for i, kp := range keyPoints {
cKeyPointArray[i].x = C.double(kp.X)
cKeyPointArray[i].y = C.double(kp.Y)
cKeyPointArray[i].size = C.double(kp.Size)
cKeyPointArray[i].angle = C.double(kp.Angle)
cKeyPointArray[i].response = C.double(kp.Response)
cKeyPointArray[i].octave = C.int(kp.Octave)
cKeyPointArray[i].classID = C.int(kp.ClassID)
}
cKeyPoints := C.struct_KeyPoints{
keypoints: (*C.struct_KeyPoint)(&cKeyPointArray[0]),
length: (C.int)(len(keyPoints)),
}
scalar := C.struct_Scalar{
val1: C.double(color.R),
val2: C.double(color.G),
val3: C.double(color.B),
val4: C.double(color.A),
}
C.DrawKeyPoints(src.p, cKeyPoints, dst.p, scalar, C.int(flag))
}

89
vendor/gocv.io/x/gocv/features2d.h generated vendored Normal file

@@ -0,0 +1,89 @@
#ifndef _OPENCV3_FEATURES2D_H_
#define _OPENCV3_FEATURES2D_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
#ifdef __cplusplus
typedef cv::Ptr<cv::AKAZE>* AKAZE;
typedef cv::Ptr<cv::AgastFeatureDetector>* AgastFeatureDetector;
typedef cv::Ptr<cv::BRISK>* BRISK;
typedef cv::Ptr<cv::FastFeatureDetector>* FastFeatureDetector;
typedef cv::Ptr<cv::GFTTDetector>* GFTTDetector;
typedef cv::Ptr<cv::KAZE>* KAZE;
typedef cv::Ptr<cv::MSER>* MSER;
typedef cv::Ptr<cv::ORB>* ORB;
typedef cv::Ptr<cv::SimpleBlobDetector>* SimpleBlobDetector;
typedef cv::Ptr<cv::BFMatcher>* BFMatcher;
#else
typedef void* AKAZE;
typedef void* AgastFeatureDetector;
typedef void* BRISK;
typedef void* FastFeatureDetector;
typedef void* GFTTDetector;
typedef void* KAZE;
typedef void* MSER;
typedef void* ORB;
typedef void* SimpleBlobDetector;
typedef void* BFMatcher;
#endif
AKAZE AKAZE_Create();
void AKAZE_Close(AKAZE a);
struct KeyPoints AKAZE_Detect(AKAZE a, Mat src);
struct KeyPoints AKAZE_DetectAndCompute(AKAZE a, Mat src, Mat mask, Mat desc);
AgastFeatureDetector AgastFeatureDetector_Create();
void AgastFeatureDetector_Close(AgastFeatureDetector a);
struct KeyPoints AgastFeatureDetector_Detect(AgastFeatureDetector a, Mat src);
BRISK BRISK_Create();
void BRISK_Close(BRISK b);
struct KeyPoints BRISK_Detect(BRISK b, Mat src);
struct KeyPoints BRISK_DetectAndCompute(BRISK b, Mat src, Mat mask, Mat desc);
FastFeatureDetector FastFeatureDetector_Create();
FastFeatureDetector FastFeatureDetector_CreateWithParams(int threshold, bool nonmaxSuppression, int type);
void FastFeatureDetector_Close(FastFeatureDetector f);
struct KeyPoints FastFeatureDetector_Detect(FastFeatureDetector f, Mat src);
GFTTDetector GFTTDetector_Create();
void GFTTDetector_Close(GFTTDetector a);
struct KeyPoints GFTTDetector_Detect(GFTTDetector a, Mat src);
KAZE KAZE_Create();
void KAZE_Close(KAZE a);
struct KeyPoints KAZE_Detect(KAZE a, Mat src);
struct KeyPoints KAZE_DetectAndCompute(KAZE a, Mat src, Mat mask, Mat desc);
MSER MSER_Create();
void MSER_Close(MSER a);
struct KeyPoints MSER_Detect(MSER a, Mat src);
ORB ORB_Create();
void ORB_Close(ORB o);
struct KeyPoints ORB_Detect(ORB o, Mat src);
struct KeyPoints ORB_DetectAndCompute(ORB o, Mat src, Mat mask, Mat desc);
SimpleBlobDetector SimpleBlobDetector_Create();
SimpleBlobDetector SimpleBlobDetector_Create_WithParams(SimpleBlobDetectorParams params);
void SimpleBlobDetector_Close(SimpleBlobDetector b);
struct KeyPoints SimpleBlobDetector_Detect(SimpleBlobDetector b, Mat src);
SimpleBlobDetectorParams SimpleBlobDetectorParams_Create();
BFMatcher BFMatcher_Create();
BFMatcher BFMatcher_CreateWithParams(int normType, bool crossCheck);
void BFMatcher_Close(BFMatcher b);
struct MultiDMatches BFMatcher_KnnMatch(BFMatcher b, Mat query, Mat train, int k);
struct MultiDMatches BFMatcher_KnnMatchWithParams(BFMatcher b, Mat query, Mat train, int k, Mat mask, bool compactResult);
void DrawKeyPoints(Mat src, struct KeyPoints kp, Mat dst, const Scalar s, int flags);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_FEATURES2D_H_

33
vendor/gocv.io/x/gocv/features2d_string.go generated vendored Normal file

@@ -0,0 +1,33 @@
package gocv
/*
#include <stdlib.h>
#include "features2d.h"
*/
import "C"
func (c FastFeatureDetectorType) String() string {
switch c {
case FastFeatureDetectorType58:
return "fast-feature-detector-type-58"
case FastFeatureDetectorType712:
return "fast-feature-detector-type-712"
case FastFeatureDetectorType916:
return "fast-feature-detector-type-916"
}
return ""
}
func (c DrawMatchesFlag) String() string {
switch c {
case DrawDefault:
return "draw-default"
case DrawOverOutImg:
return "draw-over-out-imt"
case NotDrawSinglePoints:
return "draw-single-points"
case DrawRichKeyPoints:
return "draw-rich-key-points"
}
return ""
}

3
vendor/gocv.io/x/gocv/go.mod generated vendored Normal file

@@ -0,0 +1,3 @@
module gocv.io/x/gocv
go 1.13

11
vendor/gocv.io/x/gocv/gocv.go generated vendored Normal file

@@ -0,0 +1,11 @@
// Package gocv is a wrapper around the OpenCV 4.x computer vision library.
// It provides a Go language interface to the latest version of OpenCV.
//
// OpenCV (Open Source Computer Vision Library: http://opencv.org) is an
// open-source BSD-licensed library that includes several hundred
// computer vision algorithms.
//
// For further details, please see:
// http://docs.opencv.org/master/d1/dfb/intro.html
//
package gocv // import "gocv.io/x/gocv"

79
vendor/gocv.io/x/gocv/highgui.cpp generated vendored Normal file

@@ -0,0 +1,79 @@
#include "highgui_gocv.h"
// Window
void Window_New(const char* winname, int flags) {
cv::namedWindow(winname, flags);
}
void Window_Close(const char* winname) {
cv::destroyWindow(winname);
}
void Window_IMShow(const char* winname, Mat mat) {
cv::imshow(winname, *mat);
}
double Window_GetProperty(const char* winname, int flag) {
return cv::getWindowProperty(winname, flag);
}
void Window_SetProperty(const char* winname, int flag, double value) {
cv::setWindowProperty(winname, flag, value);
}
void Window_SetTitle(const char* winname, const char* title) {
cv::setWindowTitle(winname, title);
}
int Window_WaitKey(int delay = 0) {
return cv::waitKey(delay);
}
void Window_Move(const char* winname, int x, int y) {
cv::moveWindow(winname, x, y);
}
void Window_Resize(const char* winname, int width, int height) {
cv::resizeWindow(winname, width, height);
}
struct Rect Window_SelectROI(const char* winname, Mat img) {
cv::Rect bRect = cv::selectROI(winname, *img);
Rect r = {bRect.x, bRect.y, bRect.width, bRect.height};
return r;
}
struct Rects Window_SelectROIs(const char* winname, Mat img) {
std::vector<cv::Rect> rois;
cv::selectROIs(winname, *img, rois);
Rect* rects = new Rect[rois.size()];
for (size_t i = 0; i < rois.size(); ++i) {
Rect r = {rois[i].x, rois[i].y, rois[i].width, rois[i].height};
rects[i] = r;
}
Rects ret = {rects, (int)rois.size()};
return ret;
}
// Trackbar
void Trackbar_Create(const char* winname, const char* trackname, int max) {
cv::createTrackbar(trackname, winname, NULL, max);
}
int Trackbar_GetPos(const char* winname, const char* trackname) {
return cv::getTrackbarPos(trackname, winname);
}
void Trackbar_SetPos(const char* winname, const char* trackname, int pos) {
cv::setTrackbarPos(trackname, winname, pos);
}
void Trackbar_SetMin(const char* winname, const char* trackname, int pos) {
cv::setTrackbarMin(trackname, winname, pos);
}
void Trackbar_SetMax(const char* winname, const char* trackname, int pos) {
cv::setTrackbarMax(trackname, winname, pos);
}

323
vendor/gocv.io/x/gocv/highgui.go generated vendored Normal file

@@ -0,0 +1,323 @@
package gocv
/*
#include <stdlib.h>
#include "highgui_gocv.h"
*/
import "C"
import (
"image"
"runtime"
"unsafe"
)
// Window is a wrapper around OpenCV's "HighGUI" named windows.
// While OpenCV was designed for use in full-scale applications and can be used
// within functionally rich UI frameworks (such as Qt*, WinForms*, or Cocoa*)
// or without any UI at all, it is sometimes necessary to try functionality
// quickly and visualize the results. This is what the HighGUI module has been designed for.
//
// For further details, please see:
// http://docs.opencv.org/master/d7/dfc/group__highgui.html
//
type Window struct {
name string
open bool
}
// NewWindow creates a new named OpenCV window
//
// For further details, please see:
// http://docs.opencv.org/master/d7/dfc/group__highgui.html#ga5afdf8410934fd099df85c75b2e0888b
//
func NewWindow(name string) *Window {
runtime.LockOSThread()
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
C.Window_New(cName, 0)
return &Window{name: name, open: true}
}
// Close closes and deletes a named OpenCV Window.
//
// For further details, please see:
// http://docs.opencv.org/master/d7/dfc/group__highgui.html#ga851ccdd6961022d1d5b4c4f255dbab34
//
func (w *Window) Close() error {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
C.Window_Close(cName)
w.open = false
runtime.UnlockOSThread()
return nil
}
// IsOpen checks to see if the Window seems to be open.
func (w *Window) IsOpen() bool {
return w.open
}
// WindowFlag value for SetWindowProperty / GetWindowProperty.
type WindowFlag float32
const (
// WindowNormal indicates a normal window.
WindowNormal WindowFlag = 0
// WindowFullscreen indicates a full-screen window.
WindowFullscreen WindowFlag = 1
// WindowAutosize indicates a window sized based on the contents.
WindowAutosize WindowFlag = 1
// WindowFreeRatio allows the user to resize without maintaining the aspect ratio.
WindowFreeRatio WindowFlag = 0x00000100
// WindowKeepRatio always maintains an aspect ratio that matches the contents.
WindowKeepRatio WindowFlag = 0
)
// WindowPropertyFlag flags for SetWindowProperty / GetWindowProperty.
type WindowPropertyFlag int
const (
// WindowPropertyFullscreen is the fullscreen property
// (can be WINDOW_NORMAL or WINDOW_FULLSCREEN).
WindowPropertyFullscreen WindowPropertyFlag = 0
// WindowPropertyAutosize is the autosize property
// (can be WINDOW_NORMAL or WINDOW_AUTOSIZE).
WindowPropertyAutosize WindowPropertyFlag = 1
// WindowPropertyAspectRatio is the window's aspect ratio
// (can be set to WINDOW_FREERATIO or WINDOW_KEEPRATIO).
WindowPropertyAspectRatio WindowPropertyFlag = 2
// WindowPropertyOpenGL is the OpenGL support property.
WindowPropertyOpenGL WindowPropertyFlag = 3
// WindowPropertyVisible indicates whether the window exists and is visible.
WindowPropertyVisible WindowPropertyFlag = 4
)
// GetWindowProperty returns properties of a window.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#gaaf9504b8f9cf19024d9d44a14e461656
//
func (w *Window) GetWindowProperty(flag WindowPropertyFlag) float64 {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
return float64(C.Window_GetProperty(cName, C.int(flag)))
}
// SetWindowProperty changes parameters of a window dynamically.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga66e4a6db4d4e06148bcdfe0d70a5df27
//
func (w *Window) SetWindowProperty(flag WindowPropertyFlag, value WindowFlag) {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
C.Window_SetProperty(cName, C.int(flag), C.double(value))
}
// SetWindowTitle updates window title.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga56f8849295fd10d0c319724ddb773d96
//
func (w *Window) SetWindowTitle(title string) {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
cTitle := C.CString(title)
defer C.free(unsafe.Pointer(cTitle))
C.Window_SetTitle(cName, cTitle)
}
// IMShow displays an image Mat in the specified window.
// This function should be followed by a call to WaitKey, which displays
// the image for the specified number of milliseconds. Otherwise, the image
// won't be displayed.
//
// For further details, please see:
// http://docs.opencv.org/master/d7/dfc/group__highgui.html#ga453d42fe4cb60e5723281a89973ee563
//
func (w *Window) IMShow(img Mat) {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
C.Window_IMShow(cName, img.p)
}
// WaitKey waits for a pressed key.
// This function is the only method in OpenCV's HighGUI that can fetch
// and handle events, so it needs to be called periodically
// for normal event processing.
//
// For further details, please see:
// http://docs.opencv.org/master/d7/dfc/group__highgui.html#ga5628525ad33f52eab17feebcfba38bd7
//
func (w *Window) WaitKey(delay int) int {
return int(C.Window_WaitKey(C.int(delay)))
}
// MoveWindow moves window to the specified position.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga8d86b207f7211250dbe6e28f76307ffb
//
func (w *Window) MoveWindow(x, y int) {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
C.Window_Move(cName, C.int(x), C.int(y))
}
// ResizeWindow resizes window to the specified size.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga9e80e080f7ef33f897e415358aee7f7e
//
func (w *Window) ResizeWindow(width, height int) {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
C.Window_Resize(cName, C.int(width), C.int(height))
}
// SelectROI selects a Region Of Interest (ROI) on the given image.
// It creates a window and allows the user to select a ROI using the mouse.
//
// Controls:
// use space or enter to finish selection,
// use key c to cancel selection (function will return a zero Rect).
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga8daf4730d3adf7035b6de9be4c469af5
//
func SelectROI(name string, img Mat) image.Rectangle {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
r := C.Window_SelectROI(cName, img.p)
rect := image.Rect(int(r.x), int(r.y), int(r.x+r.width), int(r.y+r.height))
return rect
}
// SelectROIs selects multiple Regions Of Interest (ROI) on the given image.
// It creates a window and allows the user to select ROIs using the mouse.
//
// Controls:
// use space or enter to finish current selection and start a new one
// use esc to terminate the multiple ROI selection process
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga0f11fad74a6432b8055fb21621a0f893
//
func SelectROIs(name string, img Mat) []image.Rectangle {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
ret := C.Window_SelectROIs(cName, img.p)
defer C.Rects_Close(ret)
return toRectangles(ret)
}
// WaitKey waits for a pressed key, without being attached to a specific Window.
// Only use this when no Window exists in your application, e.g. in a command line app.
//
func WaitKey(delay int) int {
return int(C.Window_WaitKey(C.int(delay)))
}
// Trackbar is a wrapper around OpenCV's "HighGUI" window Trackbars.
type Trackbar struct {
name string
parent *Window
}
// CreateTrackbar creates a trackbar and attaches it to the specified window.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#gaf78d2155d30b728fc413803745b67a9b
//
func (w *Window) CreateTrackbar(name string, max int) *Trackbar {
cName := C.CString(w.name)
defer C.free(unsafe.Pointer(cName))
tName := C.CString(name)
defer C.free(unsafe.Pointer(tName))
C.Trackbar_Create(cName, tName, C.int(max))
return &Trackbar{name: name, parent: w}
}
// GetPos returns the trackbar position.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga122632e9e91b9ec06943472c55d9cda8
//
func (t *Trackbar) GetPos() int {
cName := C.CString(t.parent.name)
defer C.free(unsafe.Pointer(cName))
tName := C.CString(t.name)
defer C.free(unsafe.Pointer(tName))
return int(C.Trackbar_GetPos(cName, tName))
}
// SetPos sets the trackbar position.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga67d73c4c9430f13481fd58410d01bd8d
//
func (t *Trackbar) SetPos(pos int) {
cName := C.CString(t.parent.name)
defer C.free(unsafe.Pointer(cName))
tName := C.CString(t.name)
defer C.free(unsafe.Pointer(tName))
C.Trackbar_SetPos(cName, tName, C.int(pos))
}
// SetMin sets the trackbar minimum position.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#gabe26ffe8d2b60cc678895595a581b7aa
//
func (t *Trackbar) SetMin(pos int) {
cName := C.CString(t.parent.name)
defer C.free(unsafe.Pointer(cName))
tName := C.CString(t.name)
defer C.free(unsafe.Pointer(tName))
C.Trackbar_SetMin(cName, tName, C.int(pos))
}
// SetMax sets the trackbar maximum position.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/dfc/group__highgui.html#ga7e5437ccba37f1154b65210902fc4480
//
func (t *Trackbar) SetMax(pos int) {
cName := C.CString(t.parent.name)
defer C.free(unsafe.Pointer(cName))
tName := C.CString(t.name)
defer C.free(unsafe.Pointer(tName))
C.Trackbar_SetMax(cName, tName, C.int(pos))
}

35
vendor/gocv.io/x/gocv/highgui_gocv.h generated vendored Normal file

@@ -0,0 +1,35 @@
#ifndef _OPENCV3_HIGHGUI_H_
#define _OPENCV3_HIGHGUI_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
// Window
void Window_New(const char* winname, int flags);
void Window_Close(const char* winname);
void Window_IMShow(const char* winname, Mat mat);
double Window_GetProperty(const char* winname, int flag);
void Window_SetProperty(const char* winname, int flag, double value);
void Window_SetTitle(const char* winname, const char* title);
int Window_WaitKey(int);
void Window_Move(const char* winname, int x, int y);
void Window_Resize(const char* winname, int width, int height);
struct Rect Window_SelectROI(const char* winname, Mat img);
struct Rects Window_SelectROIs(const char* winname, Mat img);
// Trackbar
void Trackbar_Create(const char* winname, const char* trackname, int max);
int Trackbar_GetPos(const char* winname, const char* trackname);
void Trackbar_SetPos(const char* winname, const char* trackname, int pos);
void Trackbar_SetMin(const char* winname, const char* trackname, int pos);
void Trackbar_SetMax(const char* winname, const char* trackname, int pos);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_HIGHGUI_H_

35
vendor/gocv.io/x/gocv/highgui_string.go generated vendored Normal file

@@ -0,0 +1,35 @@
package gocv
/*
#include <stdlib.h>
#include "highgui_gocv.h"
*/
import "C"
func (c WindowFlag) String() string {
switch c {
case WindowNormal:
return "window-normal"
case WindowFullscreen:
return "window-fullscreen"
case WindowFreeRatio:
return "window-free-ratio"
}
return ""
}
func (c WindowPropertyFlag) String() string {
switch c {
case WindowPropertyFullscreen:
return "window-property-fullscreen"
case WindowPropertyAutosize:
return "window-property-autosize"
case WindowPropertyAspectRatio:
return "window-property-aspect-ratio"
case WindowPropertyOpenGL:
return "window-property-opengl"
case WindowPropertyVisible:
return "window-property-visible"
}
return ""
}

46
vendor/gocv.io/x/gocv/imgcodecs.cpp generated vendored Normal file

@@ -0,0 +1,46 @@
#include "imgcodecs.h"
// Image
Mat Image_IMRead(const char* filename, int flags) {
cv::Mat img = cv::imread(filename, flags);
return new cv::Mat(img);
}
bool Image_IMWrite(const char* filename, Mat img) {
return cv::imwrite(filename, *img);
}
bool Image_IMWrite_WithParams(const char* filename, Mat img, IntVector params) {
std::vector<int> compression_params;
for (int i = 0, *v = params.val; i < params.length; ++v, ++i) {
compression_params.push_back(*v);
}
return cv::imwrite(filename, *img, compression_params);
}
struct ByteArray Image_IMEncode(const char* fileExt, Mat img) {
std::vector<uchar> data;
cv::imencode(fileExt, *img, data);
return toByteArray(reinterpret_cast<const char*>(&data[0]), data.size());
}
struct ByteArray Image_IMEncode_WithParams(const char* fileExt, Mat img, IntVector params) {
std::vector<uchar> data;
std::vector<int> compression_params;
for (int i = 0, *v = params.val; i < params.length; ++v, ++i) {
compression_params.push_back(*v);
}
cv::imencode(fileExt, *img, data, compression_params);
return toByteArray(reinterpret_cast<const char*>(&data[0]), data.size());
}
Mat Image_IMDecode(ByteArray buf, int flags) {
std::vector<uchar> data(buf.data, buf.data + buf.length);
cv::Mat img = cv::imdecode(data, flags);
return new cv::Mat(img);
}

248
vendor/gocv.io/x/gocv/imgcodecs.go generated vendored Normal file

@@ -0,0 +1,248 @@
package gocv
/*
#include <stdlib.h>
#include "imgcodecs.h"
*/
import "C"
import (
"unsafe"
)
// IMReadFlag is one of the valid flags to use for the IMRead function.
type IMReadFlag int
const (
// IMReadUnchanged returns the loaded image as is (with alpha channel,
// otherwise it gets cropped).
IMReadUnchanged IMReadFlag = -1
// IMReadGrayScale always converts image to the single channel
// grayscale image.
IMReadGrayScale = 0
// IMReadColor always converts image to the 3 channel BGR color image.
IMReadColor = 1
// IMReadAnyDepth returns 16-bit/32-bit image when the input has the corresponding
// depth, otherwise converts it to 8-bit.
IMReadAnyDepth = 2
// IMReadAnyColor the image is read in any possible color format.
IMReadAnyColor = 4
// IMReadLoadGDAL uses the gdal driver for loading the image.
IMReadLoadGDAL = 8
// IMReadReducedGrayscale2 always converts image to the single channel grayscale image
// and the image size reduced 1/2.
IMReadReducedGrayscale2 = 16
// IMReadReducedColor2 always converts image to the 3 channel BGR color image and the
// image size reduced 1/2.
IMReadReducedColor2 = 17
// IMReadReducedGrayscale4 always converts image to the single channel grayscale image and
// the image size reduced 1/4.
IMReadReducedGrayscale4 = 32
// IMReadReducedColor4 always converts image to the 3 channel BGR color image and
// the image size reduced 1/4.
IMReadReducedColor4 = 33
// IMReadReducedGrayscale8 always converts image to the single channel grayscale image and
// the image size reduced 1/8.
IMReadReducedGrayscale8 = 64
// IMReadReducedColor8 always converts image to the 3 channel BGR color image and the
// image size reduced 1/8.
IMReadReducedColor8 = 65
// IMReadIgnoreOrientation do not rotate the image according to EXIF's orientation flag.
IMReadIgnoreOrientation = 128
// IMWriteJpegQuality is the quality from 0 to 100 for JPEG (the higher the better). Default value is 95.
IMWriteJpegQuality = 1
// IMWriteJpegProgressive enables JPEG progressive feature, 0 or 1, default is False.
IMWriteJpegProgressive = 2
// IMWriteJpegOptimize enables JPEG optimization, 0 or 1, default is False.
IMWriteJpegOptimize = 3
// IMWriteJpegRstInterval is the JPEG restart interval, 0 - 65535, default is 0 - no restart.
IMWriteJpegRstInterval = 4
// IMWriteJpegLumaQuality separates luma quality level, 0 - 100, default is 0 - don't use.
IMWriteJpegLumaQuality = 5
// IMWriteJpegChromaQuality separates chroma quality level, 0 - 100, default is 0 - don't use.
IMWriteJpegChromaQuality = 6
// IMWritePngCompression is the compression level from 0 to 9 for PNG. A
// higher value means a smaller size and longer compression time.
// If specified, strategy is changed to IMWRITE_PNG_STRATEGY_DEFAULT (Z_DEFAULT_STRATEGY).
// Default value is 1 (best speed setting).
IMWritePngCompression = 16
// IMWritePngStrategy is one of cv::IMWritePNGFlags, default is IMWRITE_PNG_STRATEGY_RLE.
IMWritePngStrategy = 17
// IMWritePngBilevel is the binary level PNG, 0 or 1, default is 0.
IMWritePngBilevel = 18
// IMWritePxmBinary for PPM, PGM, or PBM can be a binary format flag, 0 or 1. Default value is 1.
IMWritePxmBinary = 32
// IMWriteWebpQuality is the quality from 1 to 100 for WEBP (the higher
// the better). By default (without any parameter) and for quality above
// 100 the lossless compression is used.
IMWriteWebpQuality = 64
// IMWritePamTupletype sets the TUPLETYPE field to the corresponding string
// value that is defined for the format.
IMWritePamTupletype = 128
// IMWritePngStrategyDefault is the value to use for normal data.
IMWritePngStrategyDefault = 0
// IMWritePngStrategyFiltered is the value to use for data produced by a
// filter (or predictor). Filtered data consists mostly of small values
// with a somewhat random distribution. In this case, the compression
// algorithm is tuned to compress them better.
IMWritePngStrategyFiltered = 1
// IMWritePngStrategyHuffmanOnly forces Huffman encoding only (no string match).
IMWritePngStrategyHuffmanOnly = 2
// IMWritePngStrategyRle is the value to use to limit match distances to
// one (run-length encoding).
IMWritePngStrategyRle = 3
// IMWritePngStrategyFixed is the value to prevent the use of dynamic
// Huffman codes, allowing for a simpler decoder for special applications.
IMWritePngStrategyFixed = 4
)
// IMRead reads an image from a file into a Mat.
// The flags param is one of the IMReadFlag flags.
// If the image cannot be read (because of missing file, improper permissions,
// unsupported or invalid format), the function returns an empty Mat.
//
// For further details, please see:
// http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga288b8b3da0892bd651fce07b3bbd3a56
//
func IMRead(name string, flags IMReadFlag) Mat {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
return newMat(C.Image_IMRead(cName, C.int(flags)))
}
// IMWrite writes a Mat to an image file.
//
// For further details, please see:
// http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#gabbc7ef1aa2edfaa87772f1202d67e0ce
//
func IMWrite(name string, img Mat) bool {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
return bool(C.Image_IMWrite(cName, img.p))
}
// IMWriteWithParams writes a Mat to an image file. With this function you can
// also pass compression parameters.
//
// For further details, please see:
// http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#gabbc7ef1aa2edfaa87772f1202d67e0ce
//
func IMWriteWithParams(name string, img Mat, params []int) bool {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
cparams := []C.int{}
for _, v := range params {
cparams = append(cparams, C.int(v))
}
paramsVector := C.struct_IntVector{}
paramsVector.val = (*C.int)(&cparams[0])
paramsVector.length = (C.int)(len(cparams))
return bool(C.Image_IMWrite_WithParams(cName, img.p, paramsVector))
}
// FileExt represents a file extension.
type FileExt string
const (
// PNGFileExt is the file extension for PNG.
PNGFileExt FileExt = ".png"
// JPEGFileExt is the file extension for JPEG.
JPEGFileExt FileExt = ".jpg"
// GIFFileExt is the file extension for GIF.
GIFFileExt FileExt = ".gif"
)
// IMEncode encodes an image Mat into a memory buffer.
// This function compresses the image and stores it in the returned memory buffer,
// using the image format passed in, in the form of a file extension string.
//
// For further details, please see:
// http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga461f9ac09887e47797a54567df3b8b63
//
func IMEncode(fileExt FileExt, img Mat) (buf []byte, err error) {
cfileExt := C.CString(string(fileExt))
defer C.free(unsafe.Pointer(cfileExt))
b := C.Image_IMEncode(cfileExt, img.Ptr())
defer C.ByteArray_Release(b)
return toGoBytes(b), nil
}
// IMEncodeWithParams encodes an image Mat into a memory buffer.
// This function compresses the image and stores it in the returned memory buffer,
// using the image format passed in, in the form of a file extension string.
//
// Usage example:
// buffer, err := gocv.IMEncodeWithParams(gocv.JPEGFileExt, img, []int{gocv.IMWriteJpegQuality, quality})
//
// For further details, please see:
// http://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga461f9ac09887e47797a54567df3b8b63
//
func IMEncodeWithParams(fileExt FileExt, img Mat, params []int) (buf []byte, err error) {
cfileExt := C.CString(string(fileExt))
defer C.free(unsafe.Pointer(cfileExt))
cparams := []C.int{}
for _, v := range params {
cparams = append(cparams, C.int(v))
}
paramsVector := C.struct_IntVector{}
paramsVector.val = (*C.int)(&cparams[0])
paramsVector.length = (C.int)(len(cparams))
b := C.Image_IMEncode_WithParams(cfileExt, img.Ptr(), paramsVector)
defer C.ByteArray_Release(b)
return toGoBytes(b), nil
}
// IMDecode reads an image from a buffer in memory.
// The function IMDecode reads an image from the specified buffer in memory.
// If the buffer is too short or contains invalid data, the function
// returns an empty matrix.
//
// For further details, please see:
// https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga26a67788faa58ade337f8d28ba0eb19e
//
func IMDecode(buf []byte, flags IMReadFlag) (Mat, error) {
data, err := toByteArray(buf)
if err != nil {
return Mat{}, err
}
return newMat(C.Image_IMDecode(*data, C.int(flags))), nil
}

24
vendor/gocv.io/x/gocv/imgcodecs.h generated vendored Normal file

@@ -0,0 +1,24 @@
#ifndef _OPENCV3_IMGCODECS_H_
#define _OPENCV3_IMGCODECS_H_
#include <stdbool.h>
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
Mat Image_IMRead(const char* filename, int flags);
bool Image_IMWrite(const char* filename, Mat img);
bool Image_IMWrite_WithParams(const char* filename, Mat img, IntVector params);
struct ByteArray Image_IMEncode(const char* fileExt, Mat img);
struct ByteArray Image_IMEncode_WithParams(const char* fileExt, Mat img, IntVector params);
Mat Image_IMDecode(ByteArray buf, int flags);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_IMGCODECS_H_

627
vendor/gocv.io/x/gocv/imgproc.cpp generated vendored Normal file

@@ -0,0 +1,627 @@
#include "imgproc.h"
double ArcLength(Contour curve, bool is_closed) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < curve.length; i++) {
pts.push_back(cv::Point(curve.points[i].x, curve.points[i].y));
}
return cv::arcLength(pts, is_closed);
}
Contour ApproxPolyDP(Contour curve, double epsilon, bool closed) {
std::vector<cv::Point> curvePts;
for (size_t i = 0; i < curve.length; i++) {
curvePts.push_back(cv::Point(curve.points[i].x, curve.points[i].y));
}
std::vector<cv::Point> approxCurvePts;
cv::approxPolyDP(curvePts, approxCurvePts, epsilon, closed);
int length = approxCurvePts.size();
Point* points = new Point[length];
for (size_t i = 0; i < length; i++) {
points[i] = (Point){approxCurvePts[i].x, approxCurvePts[i].y};
}
return (Contour){points, length};
}
void CvtColor(Mat src, Mat dst, int code) {
cv::cvtColor(*src, *dst, code);
}
void EqualizeHist(Mat src, Mat dst) {
cv::equalizeHist(*src, *dst);
}
void CalcHist(struct Mats mats, IntVector chans, Mat mask, Mat hist, IntVector sz, FloatVector rng, bool acc) {
std::vector<cv::Mat> images;
for (int i = 0; i < mats.length; ++i) {
images.push_back(*mats.mats[i]);
}
std::vector<int> channels;
for (int i = 0, *v = chans.val; i < chans.length; ++v, ++i) {
channels.push_back(*v);
}
std::vector<int> histSize;
for (int i = 0, *v = sz.val; i < sz.length; ++v, ++i) {
histSize.push_back(*v);
}
std::vector<float> ranges;
float* f;
int i;
for (i = 0, f = rng.val; i < rng.length; ++f, ++i) {
ranges.push_back(*f);
}
cv::calcHist(images, channels, *mask, *hist, histSize, ranges, acc);
}
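On the Go side this binding is reached through gocv.CalcHist. A hedged sketch, assuming the wrapper's signature is CalcHist(src []Mat, channels []int, mask Mat, hist *Mat, size []int, ranges []float64, acc bool) and that img is a BGR Mat already loaded:

gray := gocv.NewMat()
defer gray.Close()
gocv.CvtColor(img, &gray, gocv.ColorBGRToGray)

hist := gocv.NewMat()
defer hist.Close()
mask := gocv.NewMat() // an empty Mat means "no mask"
defer mask.Close()

// 256-bin histogram over the full 8-bit range of channel 0.
gocv.CalcHist([]gocv.Mat{gray}, []int{0}, mask, &hist, []int{256}, []float64{0, 256}, false)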
void CalcBackProject(struct Mats mats, IntVector chans, Mat hist, Mat backProject, FloatVector rng, bool uniform){
std::vector<cv::Mat> images;
for (int i = 0; i < mats.length; ++i) {
images.push_back(*mats.mats[i]);
}
std::vector<int> channels;
for (int i = 0, *v = chans.val; i < chans.length; ++v, ++i) {
channels.push_back(*v);
}
std::vector<float> ranges;
float* f;
int i;
for (i = 0, f = rng.val; i < rng.length; ++f, ++i) {
ranges.push_back(*f);
}
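// Note: this std::vector overload of cv::calcBackProject takes a double
// scale factor as its final parameter, so the bool uniform is implicitly
// converted (true -> 1.0, false -> 0.0) rather than read as a uniform flag.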
cv::calcBackProject(images, channels, *hist, *backProject, ranges, uniform);
}
double CompareHist(Mat hist1, Mat hist2, int method) {
return cv::compareHist(*hist1, *hist2, method);
}
struct RotatedRect FitEllipse(Points points)
{
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++)
{
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::RotatedRect bRect = cv::fitEllipse(pts);
// Fill the returned contour with the four corner points of the fitted
// rotated rect (as MinAreaRect does), instead of copying the input points
// while claiming a hard-coded length of 4.
Point* rpts = new Point[4];
cv::Point2f pts4[4];
bRect.points(pts4);
for (size_t j = 0; j < 4; j++)
{
Point pt = {int(lroundf(pts4[j].x)), int(lroundf(pts4[j].y))};
rpts[j] = pt;
}
Rect r = {bRect.boundingRect().x, bRect.boundingRect().y, bRect.boundingRect().width, bRect.boundingRect().height};
Point centrpt = {int(lroundf(bRect.center.x)), int(lroundf(bRect.center.y))};
Size szsz = {int(lroundf(bRect.size.width)), int(lroundf(bRect.size.height))};
RotatedRect rotRect = {(Contour){rpts, 4}, r, centrpt, szsz, bRect.angle};
return rotRect;
}
void ConvexHull(Contour points, Mat hull, bool clockwise, bool returnPoints) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::convexHull(pts, *hull, clockwise, returnPoints);
}
void ConvexityDefects(Contour points, Mat hull, Mat result) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::convexityDefects(pts, *hull, *result);
}
void BilateralFilter(Mat src, Mat dst, int d, double sc, double ss) {
cv::bilateralFilter(*src, *dst, d, sc, ss);
}
void Blur(Mat src, Mat dst, Size ps) {
cv::Size sz(ps.width, ps.height);
cv::blur(*src, *dst, sz);
}
void BoxFilter(Mat src, Mat dst, int ddepth, Size ps) {
cv::Size sz(ps.width, ps.height);
cv::boxFilter(*src, *dst, ddepth, sz);
}
void SqBoxFilter(Mat src, Mat dst, int ddepth, Size ps) {
cv::Size sz(ps.width, ps.height);
cv::sqrBoxFilter(*src, *dst, ddepth, sz);
}
void Dilate(Mat src, Mat dst, Mat kernel) {
cv::dilate(*src, *dst, *kernel);
}
void DistanceTransform(Mat src, Mat dst, Mat labels, int distanceType, int maskSize, int labelType) {
cv::distanceTransform(*src, *dst, *labels, distanceType, maskSize, labelType);
}
void Erode(Mat src, Mat dst, Mat kernel) {
cv::erode(*src, *dst, *kernel);
}
void MatchTemplate(Mat image, Mat templ, Mat result, int method, Mat mask) {
cv::matchTemplate(*image, *templ, *result, method, *mask);
}
struct Moment Moments(Mat src, bool binaryImage) {
cv::Moments m = cv::moments(*src, binaryImage);
Moment mom = {m.m00, m.m10, m.m01, m.m20, m.m11, m.m02, m.m30, m.m21, m.m12, m.m03,
m.mu20, m.mu11, m.mu02, m.mu30, m.mu21, m.mu12, m.mu03,
m.nu20, m.nu11, m.nu02, m.nu30, m.nu21, m.nu12, m.nu03
};
return mom;
}
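The classic use of these moments is computing a centroid. On the Go side, where the Moments wrapper returns a map keyed by moment name, that looks like the sketch below (binImg is an assumed binarized Mat, and fmt is assumed imported):

m := gocv.Moments(binImg, true)
if m["m00"] != 0 { // m00 is the zeroth moment, i.e. the area
	cx := m["m10"] / m["m00"]
	cy := m["m01"] / m["m00"]
	fmt.Printf("centroid: (%.1f, %.1f)\n", cx, cy)
}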
void PyrDown(Mat src, Mat dst, Size size, int borderType) {
cv::Size cvSize(size.width, size.height);
cv::pyrDown(*src, *dst, cvSize, borderType);
}
void PyrUp(Mat src, Mat dst, Size size, int borderType) {
cv::Size cvSize(size.width, size.height);
cv::pyrUp(*src, *dst, cvSize, borderType);
}
struct Rect BoundingRect(Contour con) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < con.length; i++) {
pts.push_back(cv::Point(con.points[i].x, con.points[i].y));
}
cv::Rect bRect = cv::boundingRect(pts);
Rect r = {bRect.x, bRect.y, bRect.width, bRect.height};
return r;
}
void BoxPoints(RotatedRect rect, Mat boxPts){
cv::Point2f centerPt(rect.center.x , rect.center.y);
cv::Size2f rSize(rect.size.width, rect.size.height);
cv::RotatedRect rotatedRectangle(centerPt, rSize, rect.angle);
cv::boxPoints(rotatedRectangle, *boxPts);
}
double ContourArea(Contour con) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < con.length; i++) {
pts.push_back(cv::Point(con.points[i].x, con.points[i].y));
}
return cv::contourArea(pts);
}
struct RotatedRect MinAreaRect(Points points){
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::RotatedRect cvrect = cv::minAreaRect(pts);
Point* rpts = new Point[4];
cv::Point2f* pts4 = new cv::Point2f[4];
cvrect.points(pts4);
for (size_t j = 0; j < 4; j++) {
Point pt = {int(lroundf(pts4[j].x)), int(lroundf(pts4[j].y))};
rpts[j] = pt;
}
delete[] pts4;
cv::Rect bRect = cvrect.boundingRect();
Rect r = {bRect.x, bRect.y, bRect.width, bRect.height};
Point centrpt = {int(lroundf(cvrect.center.x)), int(lroundf(cvrect.center.y))};
Size szsz = {int(lroundf(cvrect.size.width)), int(lroundf(cvrect.size.height))};
RotatedRect retrect = {(Contour){rpts, 4}, r, centrpt, szsz, cvrect.angle};
return retrect;
}
void MinEnclosingCircle(Points points, Point2f* center, float* radius){
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::Point2f center2f;
cv::minEnclosingCircle(pts, center2f, *radius);
center->x = center2f.x;
center->y = center2f.y;
}
struct Contours FindContours(Mat src, int mode, int method) {
std::vector<std::vector<cv::Point> > contours;
cv::findContours(*src, contours, mode, method);
Contour* points = new Contour[contours.size()];
for (size_t i = 0; i < contours.size(); i++) {
Point* pts = new Point[contours[i].size()];
for (size_t j = 0; j < contours[i].size(); j++) {
Point pt = {contours[i][j].x, contours[i][j].y};
pts[j] = pt;
}
points[i] = (Contour){pts, (int)contours[i].size()};
}
Contours cons = {points, (int)contours.size()};
return cons;
}
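On the Go side of this era of the API, FindContours returns [][]image.Point, which slots directly into ContourArea, BoundingRect, and DrawContours. A find-filter-draw sketch, where binImg is an assumed binarized Mat, out is a color Mat to draw on, image/color is assumed imported, and the area cutoff of 100 is arbitrary:

contours := gocv.FindContours(binImg, gocv.RetrievalExternal, gocv.ChainApproxSimple)
for i, c := range contours {
	if gocv.ContourArea(c) < 100 { // skip tiny specks
		continue
	}
	gocv.DrawContours(&out, contours, i, color.RGBA{R: 255}, 1)
	gocv.Rectangle(&out, gocv.BoundingRect(c), color.RGBA{G: 255}, 2)
}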
int ConnectedComponents(Mat src, Mat labels, int connectivity, int ltype, int ccltype){
return cv::connectedComponents(*src, *labels, connectivity, ltype, ccltype);
}
int ConnectedComponentsWithStats(Mat src, Mat labels, Mat stats, Mat centroids,
int connectivity, int ltype, int ccltype){
return cv::connectedComponentsWithStats(*src, *labels, *stats, *centroids, connectivity, ltype, ccltype);
}
Mat GetStructuringElement(int shape, Size ksize) {
cv::Size sz(ksize.width, ksize.height);
return new cv::Mat(cv::getStructuringElement(shape, sz));
}
Scalar MorphologyDefaultBorderValue(){
cv::Scalar cs = cv::morphologyDefaultBorderValue();
return (Scalar){cs[0],cs[1],cs[2],cs[3]};
}
void MorphologyEx(Mat src, Mat dst, int op, Mat kernel) {
cv::morphologyEx(*src, *dst, op, *kernel);
}
void MorphologyExWithParams(Mat src, Mat dst, int op, Mat kernel, Point pt, int iterations, int borderType) {
cv::Point pt1(pt.x, pt.y);
cv::morphologyEx(*src, *dst, op, *kernel, pt1, iterations, borderType);
}
void GaussianBlur(Mat src, Mat dst, Size ps, double sX, double sY, int bt) {
cv::Size sz(ps.width, ps.height);
cv::GaussianBlur(*src, *dst, sz, sX, sY, bt);
}
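A minimal smoothing sketch from Go (input path is a placeholder; sigmas of 0 let OpenCV derive them from the kernel size):

package main

import (
	"image"
	"log"

	"gocv.io/x/gocv"
)

func main() {
	img := gocv.IMRead("in.png", gocv.IMReadColor) // placeholder input file
	if img.Empty() {
		log.Fatal("cannot read in.png")
	}
	defer img.Close()

	blurred := gocv.NewMat()
	defer blurred.Close()

	// 5x5 Gaussian kernel with auto-derived sigmas.
	gocv.GaussianBlur(img, &blurred, image.Pt(5, 5), 0, 0, gocv.BorderDefault)

	gocv.IMWrite("blurred.png", blurred)
}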
void Laplacian(Mat src, Mat dst, int dDepth, int kSize, double scale, double delta,
int borderType) {
cv::Laplacian(*src, *dst, dDepth, kSize, scale, delta, borderType);
}
void Scharr(Mat src, Mat dst, int dDepth, int dx, int dy, double scale, double delta,
int borderType) {
cv::Scharr(*src, *dst, dDepth, dx, dy, scale, delta, borderType);
}
void MedianBlur(Mat src, Mat dst, int ksize) {
cv::medianBlur(*src, *dst, ksize);
}
void Canny(Mat src, Mat edges, double t1, double t2) {
cv::Canny(*src, *edges, t1, t2);
}
void CornerSubPix(Mat img, Mat corners, Size winSize, Size zeroZone, TermCriteria criteria) {
cv::Size wsz(winSize.width, winSize.height);
cv::Size zsz(zeroZone.width, zeroZone.height);
cv::cornerSubPix(*img, *corners, wsz, zsz, *criteria);
}
void GoodFeaturesToTrack(Mat img, Mat corners, int maxCorners, double quality, double minDist) {
cv::goodFeaturesToTrack(*img, *corners, maxCorners, quality, minDist);
}
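These two bindings usually run as a pair: detect coarse corners, then refine them to sub-pixel accuracy. A sketch, where gray is an assumed single-channel Mat, image is assumed imported, and the corner count, quality, and window sizes are illustrative only:

corners := gocv.NewMat()
defer corners.Close()
gocv.GoodFeaturesToTrack(gray, &corners, 100, 0.01, 10)

// Refine each corner within a 5x5 search window; (-1, -1) disables the dead zone.
term := gocv.NewTermCriteria(gocv.Count|gocv.EPS, 30, 0.03)
gocv.CornerSubPix(gray, &corners, image.Pt(5, 5), image.Pt(-1, -1), term)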
void GrabCut(Mat img, Mat mask, Rect r, Mat bgdModel, Mat fgdModel, int iterCount, int mode) {
cv::Rect cvRect = cv::Rect(r.x, r.y, r.width, r.height);
cv::grabCut(*img, *mask, cvRect, *bgdModel, *fgdModel, iterCount, mode);
}
void HoughCircles(Mat src, Mat circles, int method, double dp, double minDist) {
cv::HoughCircles(*src, *circles, method, dp, minDist);
}
void HoughCirclesWithParams(Mat src, Mat circles, int method, double dp, double minDist,
double param1, double param2, int minRadius, int maxRadius) {
cv::HoughCircles(*src, *circles, method, dp, minDist, param1, param2, minRadius, maxRadius);
}
void HoughLines(Mat src, Mat lines, double rho, double theta, int threshold) {
cv::HoughLines(*src, *lines, rho, theta, threshold);
}
void HoughLinesP(Mat src, Mat lines, double rho, double theta, int threshold) {
cv::HoughLinesP(*src, *lines, rho, theta, threshold);
}
void HoughLinesPWithParams(Mat src, Mat lines, double rho, double theta, int threshold, double minLineLength, double maxLineGap) {
cv::HoughLinesP(*src, *lines, rho, theta, threshold, minLineLength, maxLineGap);
}
void HoughLinesPointSet(Mat points, Mat lines, int linesMax, int threshold,
double minRho, double maxRho, double rhoStep,
double minTheta, double maxTheta, double thetaStep) {
cv::HoughLinesPointSet(*points, *lines, linesMax, threshold,
minRho, maxRho, rhoStep, minTheta, maxTheta, thetaStep );
}
void Integral(Mat src, Mat sum, Mat sqsum, Mat tilted) {
cv::integral(*src, *sum, *sqsum, *tilted);
}
void Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ) {
cv::threshold(*src, *dst, thresh, maxvalue, typ);
}
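From Go, both a fixed binary threshold and Otsu's automatic variant route through this single binding (gray is an assumed 8-bit single-channel Mat):

bin := gocv.NewMat()
defer bin.Close()
gocv.Threshold(gray, &bin, 128, 255, gocv.ThresholdBinary)

otsu := gocv.NewMat()
defer otsu.Close()
// Otsu chooses the threshold itself, so the explicit 0 is ignored.
gocv.Threshold(gray, &otsu, 0, 255, gocv.ThresholdBinary|gocv.ThresholdOtsu)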
void AdaptiveThreshold(Mat src, Mat dst, double maxValue, int adaptiveMethod, int thresholdType,
int blockSize, double c) {
cv::adaptiveThreshold(*src, *dst, maxValue, adaptiveMethod, thresholdType, blockSize, c);
}
void ArrowedLine(Mat img, Point pt1, Point pt2, Scalar color, int thickness) {
cv::Point p1(pt1.x, pt1.y);
cv::Point p2(pt2.x, pt2.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::arrowedLine(*img, p1, p2, c, thickness);
}
bool ClipLine(Size imgSize, Point pt1, Point pt2) {
cv::Size sz(imgSize.width, imgSize.height);
cv::Point p1(pt1.x, pt1.y);
cv::Point p2(pt2.x, pt2.y);
return cv::clipLine(sz, p1, p2);
}
void Circle(Mat img, Point center, int radius, Scalar color, int thickness) {
cv::Point p1(center.x, center.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::circle(*img, p1, radius, c, thickness);
}
void Ellipse(Mat img, Point center, Point axes, double angle, double
startAngle, double endAngle, Scalar color, int thickness) {
cv::Point p1(center.x, center.y);
cv::Point p2(axes.x, axes.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::ellipse(*img, p1, p2, angle, startAngle, endAngle, c, thickness);
}
void Line(Mat img, Point pt1, Point pt2, Scalar color, int thickness) {
cv::Point p1(pt1.x, pt1.y);
cv::Point p2(pt2.x, pt2.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::line(*img, p1, p2, c, thickness);
}
void Rectangle(Mat img, Rect r, Scalar color, int thickness) {
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::rectangle(
*img,
cv::Point(r.x, r.y),
cv::Point(r.x + r.width, r.y + r.height),
c,
thickness,
cv::LINE_AA
);
}
void FillPoly(Mat img, Contours points, Scalar color) {
std::vector<std::vector<cv::Point> > pts;
for (size_t i = 0; i < points.length; i++) {
Contour contour = points.contours[i];
std::vector<cv::Point> cntr;
for (size_t j = 0; j < contour.length; j++) {
cntr.push_back(cv::Point(contour.points[j].x, contour.points[j].y));
}
pts.push_back(cntr);
}
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::fillPoly(*img, pts, c);
}
struct Size GetTextSize(const char* text, int fontFace, double fontScale, int thickness) {
cv::Size sz = cv::getTextSize(text, fontFace, fontScale, thickness, NULL);
Size size = {sz.width, sz.height};
return size;
}
void PutText(Mat img, const char* text, Point org, int fontFace, double fontScale,
Scalar color, int thickness) {
cv::Point pt(org.x, org.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::putText(*img, text, pt, fontFace, fontScale, c, thickness);
}
void PutTextWithParams(Mat img, const char* text, Point org, int fontFace, double fontScale,
Scalar color, int thickness, int lineType, bool bottomLeftOrigin) {
cv::Point pt(org.x, org.y);
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::putText(*img, text, pt, fontFace, fontScale, c, thickness, lineType, bottomLeftOrigin);
}
void Resize(Mat src, Mat dst, Size dsize, double fx, double fy, int interp) {
cv::Size sz(dsize.width, dsize.height);
cv::resize(*src, *dst, sz, fx, fy, interp);
}
void GetRectSubPix(Mat src, Size patchSize, Point center, Mat dst) {
cv::Size sz(patchSize.width, patchSize.height);
cv::Point pt(center.x, center.y);
cv::getRectSubPix(*src, sz, pt, *dst);
}
Mat GetRotationMatrix2D(Point center, double angle, double scale) {
cv::Point pt(center.x, center.y);
return new cv::Mat(cv::getRotationMatrix2D(pt, angle, scale));
}
void WarpAffine(Mat src, Mat dst, Mat m, Size dsize) {
cv::Size sz(dsize.width, dsize.height);
cv::warpAffine(*src, *dst, *m, sz);
}
void WarpAffineWithParams(Mat src, Mat dst, Mat rot_mat, Size dsize, int flags, int borderMode,
Scalar borderValue) {
cv::Size sz(dsize.width, dsize.height);
cv::Scalar c = cv::Scalar(borderValue.val1, borderValue.val2, borderValue.val3, borderValue.val4);
cv::warpAffine(*src, *dst, *rot_mat, sz, flags, borderMode, c);
}
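GetRotationMatrix2D and WarpAffine combine into the standard rotate-about-center idiom. A sketch, assuming img is an already-loaded Mat and image is imported:

center := image.Pt(img.Cols()/2, img.Rows()/2)
rot := gocv.GetRotationMatrix2D(center, 45, 1.0) // 45 degrees, no scaling
defer rot.Close()

rotated := gocv.NewMat()
defer rotated.Close()
gocv.WarpAffine(img, &rotated, rot, image.Pt(img.Cols(), img.Rows()))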
void WarpPerspective(Mat src, Mat dst, Mat m, Size dsize) {
cv::Size sz(dsize.width, dsize.height);
cv::warpPerspective(*src, *dst, *m, sz);
}
void Watershed(Mat image, Mat markers) {
cv::watershed(*image, *markers);
}
void ApplyColorMap(Mat src, Mat dst, int colormap) {
cv::applyColorMap(*src, *dst, colormap);
}
void ApplyCustomColorMap(Mat src, Mat dst, Mat colormap) {
cv::applyColorMap(*src, *dst, *colormap);
}
Mat GetPerspectiveTransform(Contour src, Contour dst) {
std::vector<cv::Point2f> src_pts;
for (size_t i = 0; i < src.length; i++) {
src_pts.push_back(cv::Point2f(src.points[i].x, src.points[i].y));
}
std::vector<cv::Point2f> dst_pts;
for (size_t i = 0; i < dst.length; i++) {
dst_pts.push_back(cv::Point2f(dst.points[i].x, dst.points[i].y));
}
return new cv::Mat(cv::getPerspectiveTransform(src_pts, dst_pts));
}
void DrawContours(Mat src, Contours contours, int contourIdx, Scalar color, int thickness) {
std::vector<std::vector<cv::Point> > cntrs;
for (size_t i = 0; i < contours.length; i++) {
Contour contour = contours.contours[i];
std::vector<cv::Point> cntr;
for (size_t j = 0; j < contour.length; j++) {
cntr.push_back(cv::Point(contour.points[j].x, contour.points[j].y));
}
cntrs.push_back(cntr);
}
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::drawContours(*src, cntrs, contourIdx, c, thickness);
}
void Sobel(Mat src, Mat dst, int ddepth, int dx, int dy, int ksize, double scale, double delta, int borderType) {
cv::Sobel(*src, *dst, ddepth, dx, dy, ksize, scale, delta, borderType);
}
void SpatialGradient(Mat src, Mat dx, Mat dy, int ksize, int borderType) {
cv::spatialGradient(*src, *dx, *dy, ksize, borderType);
}
void Remap(Mat src, Mat dst, Mat map1, Mat map2, int interpolation, int borderMode, Scalar borderValue) {
cv::Scalar c = cv::Scalar(borderValue.val1, borderValue.val2, borderValue.val3, borderValue.val4);
cv::remap(*src, *dst, *map1, *map2, interpolation, borderMode, c);
}
void Filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta, int borderType) {
cv::Point anchorPt(anchor.x, anchor.y);
cv::filter2D(*src, *dst, ddepth, *kernel, anchorPt, delta, borderType);
}
void SepFilter2D(Mat src, Mat dst, int ddepth, Mat kernelX, Mat kernelY, Point anchor, double delta, int borderType) {
cv::Point anchorPt(anchor.x, anchor.y);
cv::sepFilter2D(*src, *dst, ddepth, *kernelX, *kernelY, anchorPt, delta, borderType);
}
void LogPolar(Mat src, Mat dst, Point center, double m, int flags) {
cv::Point2f centerPt(center.x, center.y);
cv::logPolar(*src, *dst, centerPt, m, flags);
}
void FitLine(Contour points, Mat line, int distType, double param, double reps, double aeps) {
std::vector<cv::Point> pts;
for (size_t i = 0; i < points.length; i++) {
pts.push_back(cv::Point(points.points[i].x, points.points[i].y));
}
cv::fitLine(pts, *line, distType, param, reps, aeps);
}
void LinearPolar(Mat src, Mat dst, Point center, double maxRadius, int flags) {
cv::Point2f centerPt(center.x, center.y);
cv::linearPolar(*src, *dst, centerPt, maxRadius, flags);
}
CLAHE CLAHE_Create() {
return new cv::Ptr<cv::CLAHE>(cv::createCLAHE());
}
CLAHE CLAHE_CreateWithParams(double clipLimit, Size tileGridSize) {
cv::Size sz(tileGridSize.width, tileGridSize.height);
return new cv::Ptr<cv::CLAHE>(cv::createCLAHE(clipLimit, sz));
}
void CLAHE_Close(CLAHE c) {
delete c;
}
void CLAHE_Apply(CLAHE c, Mat src, Mat dst) {
(*c)->apply(*src, *dst);
}
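The CLAHE lifecycle from Go mirrors these four functions: construct, apply, close. A sketch assuming gocv's NewCLAHEWithParams constructor and an imported image package; the 2.0 clip limit and 8x8 tile grid are common starting values, not requirements (gray is an assumed grayscale Mat):

clahe := gocv.NewCLAHEWithParams(2.0, image.Pt(8, 8))
defer clahe.Close()

dst := gocv.NewMat()
defer dst.Close()
clahe.Apply(gray, &dst)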
void InvertAffineTransform(Mat src, Mat dst) {
cv::invertAffineTransform(*src, *dst);
}

1790
vendor/gocv.io/x/gocv/imgproc.go generated vendored Normal file

File diff suppressed because it is too large

120
vendor/gocv.io/x/gocv/imgproc.h generated vendored Normal file

@@ -0,0 +1,120 @@
#ifndef _OPENCV3_IMGPROC_H_
#define _OPENCV3_IMGPROC_H_
#include <stdbool.h>
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#ifdef __cplusplus
typedef cv::Ptr<cv::CLAHE>* CLAHE;
#else
typedef void* CLAHE;
#endif
#include "core.h"
double ArcLength(Contour curve, bool is_closed);
Contour ApproxPolyDP(Contour curve, double epsilon, bool closed);
void CvtColor(Mat src, Mat dst, int code);
void EqualizeHist(Mat src, Mat dst);
void CalcHist(struct Mats mats, IntVector chans, Mat mask, Mat hist, IntVector sz, FloatVector rng, bool acc);
void CalcBackProject(struct Mats mats, IntVector chans, Mat hist, Mat backProject, FloatVector rng, bool uniform);
double CompareHist(Mat hist1, Mat hist2, int method);
void ConvexHull(Contour points, Mat hull, bool clockwise, bool returnPoints);
void ConvexityDefects(Contour points, Mat hull, Mat result);
void BilateralFilter(Mat src, Mat dst, int d, double sc, double ss);
void Blur(Mat src, Mat dst, Size ps);
void BoxFilter(Mat src, Mat dst, int ddepth, Size ps);
void SqBoxFilter(Mat src, Mat dst, int ddepth, Size ps);
void Dilate(Mat src, Mat dst, Mat kernel);
void DistanceTransform(Mat src, Mat dst, Mat labels, int distanceType, int maskSize, int labelType);
void Erode(Mat src, Mat dst, Mat kernel);
void MatchTemplate(Mat image, Mat templ, Mat result, int method, Mat mask);
struct Moment Moments(Mat src, bool binaryImage);
void PyrDown(Mat src, Mat dst, Size dstsize, int borderType);
void PyrUp(Mat src, Mat dst, Size dstsize, int borderType);
struct Rect BoundingRect(Contour con);
void BoxPoints(RotatedRect rect, Mat boxPts);
double ContourArea(Contour con);
struct RotatedRect MinAreaRect(Points points);
struct RotatedRect FitEllipse(Points points);
void MinEnclosingCircle(Points points, Point2f* center, float* radius);
struct Contours FindContours(Mat src, int mode, int method);
int ConnectedComponents(Mat src, Mat dst, int connectivity, int ltype, int ccltype);
int ConnectedComponentsWithStats(Mat src, Mat labels, Mat stats, Mat centroids, int connectivity, int ltype, int ccltype);
void GaussianBlur(Mat src, Mat dst, Size ps, double sX, double sY, int bt);
void Laplacian(Mat src, Mat dst, int dDepth, int kSize, double scale, double delta, int borderType);
void Scharr(Mat src, Mat dst, int dDepth, int dx, int dy, double scale, double delta,
int borderType);
Mat GetStructuringElement(int shape, Size ksize);
Scalar MorphologyDefaultBorderValue();
void MorphologyEx(Mat src, Mat dst, int op, Mat kernel);
void MorphologyExWithParams(Mat src, Mat dst, int op, Mat kernel, Point pt, int iterations, int borderType);
void MedianBlur(Mat src, Mat dst, int ksize);
void Canny(Mat src, Mat edges, double t1, double t2);
void CornerSubPix(Mat img, Mat corners, Size winSize, Size zeroZone, TermCriteria criteria);
void GoodFeaturesToTrack(Mat img, Mat corners, int maxCorners, double quality, double minDist);
void GrabCut(Mat img, Mat mask, Rect rect, Mat bgdModel, Mat fgdModel, int iterCount, int mode);
void HoughCircles(Mat src, Mat circles, int method, double dp, double minDist);
void HoughCirclesWithParams(Mat src, Mat circles, int method, double dp, double minDist,
double param1, double param2, int minRadius, int maxRadius);
void HoughLines(Mat src, Mat lines, double rho, double theta, int threshold);
void HoughLinesP(Mat src, Mat lines, double rho, double theta, int threshold);
void HoughLinesPWithParams(Mat src, Mat lines, double rho, double theta, int threshold, double minLineLength, double maxLineGap);
void HoughLinesPointSet(Mat points, Mat lines, int lines_max, int threshold,
double min_rho, double max_rho, double rho_step,
double min_theta, double max_theta, double theta_step);
void Integral(Mat src, Mat sum, Mat sqsum, Mat tilted);
void Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ);
void AdaptiveThreshold(Mat src, Mat dst, double maxValue, int adaptiveTyp, int typ, int blockSize,
double c);
void ArrowedLine(Mat img, Point pt1, Point pt2, Scalar color, int thickness);
void Circle(Mat img, Point center, int radius, Scalar color, int thickness);
void Ellipse(Mat img, Point center, Point axes, double angle, double
startAngle, double endAngle, Scalar color, int thickness);
void Line(Mat img, Point pt1, Point pt2, Scalar color, int thickness);
void Rectangle(Mat img, Rect rect, Scalar color, int thickness);
void FillPoly(Mat img, Contours points, Scalar color);
struct Size GetTextSize(const char* text, int fontFace, double fontScale, int thickness);
void PutText(Mat img, const char* text, Point org, int fontFace, double fontScale,
Scalar color, int thickness);
void PutTextWithParams(Mat img, const char* text, Point org, int fontFace, double fontScale,
Scalar color, int thickness, int lineType, bool bottomLeftOrigin);
void Resize(Mat src, Mat dst, Size sz, double fx, double fy, int interp);
void GetRectSubPix(Mat src, Size patchSize, Point center, Mat dst);
Mat GetRotationMatrix2D(Point center, double angle, double scale);
void WarpAffine(Mat src, Mat dst, Mat rot_mat, Size dsize);
void WarpAffineWithParams(Mat src, Mat dst, Mat rot_mat, Size dsize, int flags, int borderMode,
Scalar borderValue);
void WarpPerspective(Mat src, Mat dst, Mat m, Size dsize);
void Watershed(Mat image, Mat markers);
void ApplyColorMap(Mat src, Mat dst, int colormap);
void ApplyCustomColorMap(Mat src, Mat dst, Mat colormap);
Mat GetPerspectiveTransform(Contour src, Contour dst);
void DrawContours(Mat src, Contours contours, int contourIdx, Scalar color, int thickness);
void Sobel(Mat src, Mat dst, int ddepth, int dx, int dy, int ksize, double scale, double delta, int borderType);
void SpatialGradient(Mat src, Mat dx, Mat dy, int ksize, int borderType);
void Remap(Mat src, Mat dst, Mat map1, Mat map2, int interpolation, int borderMode, Scalar borderValue);
void Filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta, int borderType);
void SepFilter2D(Mat src, Mat dst, int ddepth, Mat kernelX, Mat kernelY, Point anchor, double delta, int borderType);
void LogPolar(Mat src, Mat dst, Point center, double m, int flags);
void FitLine(Contour points, Mat line, int distType, double param, double reps, double aeps);
void LinearPolar(Mat src, Mat dst, Point center, double maxRadius, int flags);
bool ClipLine(Size imgSize, Point pt1, Point pt2);
CLAHE CLAHE_Create();
CLAHE CLAHE_CreateWithParams(double clipLimit, Size tileGridSize);
void CLAHE_Close(CLAHE c);
void CLAHE_Apply(CLAHE c, Mat src, Mat dst);
void InvertAffineTransform(Mat src, Mat dst);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_IMGPROC_H_

351
vendor/gocv.io/x/gocv/imgproc_colorcodes.go generated vendored Normal file

@@ -0,0 +1,351 @@
package gocv
// ColorConversionCode is a color space conversion code used to convert a Mat
// from one color space to another.
//
// For further details, please see:
// http://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#ga4e0972be5de079fed4e3a10e24ef5ef0
//
type ColorConversionCode int
const (
// ColorBGRToBGRA adds alpha channel to BGR image.
ColorBGRToBGRA ColorConversionCode = 0
// ColorBGRAToBGR removes the alpha channel from a BGRA image.
ColorBGRAToBGR = 1
// ColorBGRToRGBA converts from BGR to RGB with alpha channel.
ColorBGRToRGBA = 2
// ColorRGBAToBGR converts from RGB with alpha to BGR color space.
ColorRGBAToBGR = 3
// ColorBGRToRGB converts from BGR to RGB without alpha channel.
ColorBGRToRGB = 4
// ColorBGRAToRGBA converts from BGR with alpha channel
// to RGB with alpha channel.
ColorBGRAToRGBA = 5
// ColorBGRToGray converts from BGR to grayscale.
ColorBGRToGray = 6
// ColorRGBToGray converts from RGB to grayscale.
ColorRGBToGray = 7
// ColorGrayToBGR converts from grayscale to BGR.
ColorGrayToBGR = 8
// ColorGrayToBGRA converts from grayscale to BGR with alpha channel.
ColorGrayToBGRA = 9
// ColorBGRAToGray converts from BGR with alpha channel to grayscale.
ColorBGRAToGray = 10
// ColorRGBAToGray converts from RGB with alpha channel to grayscale.
ColorRGBAToGray = 11
// ColorBGRToBGR565 converts from BGR to BGR565 (16-bit images).
ColorBGRToBGR565 = 12
// ColorRGBToBGR565 converts from RGB to BGR565 (16-bit images).
ColorRGBToBGR565 = 13
// ColorBGR565ToBGR converts from BGR565 (16-bit images) to BGR.
ColorBGR565ToBGR = 14
// ColorBGR565ToRGB converts from BGR565 (16-bit images) to RGB.
ColorBGR565ToRGB = 15
// ColorBGRAToBGR565 converts from BGRA (with alpha channel)
// to BGR565 (16-bit images).
ColorBGRAToBGR565 = 16
// ColorRGBAToBGR565 converts from RGBA (with alpha channel)
// to BGR565 (16-bit images).
ColorRGBAToBGR565 = 17
// ColorBGR565ToBGRA converts from BGR565 (16-bit images)
// to BGRA (with alpha channel).
ColorBGR565ToBGRA = 18
// ColorBGR565ToRGBA converts from BGR565 (16-bit images)
// to RGBA (with alpha channel).
ColorBGR565ToRGBA = 19
// ColorGrayToBGR565 converts from grayscale
// to BGR565 (16-bit images).
ColorGrayToBGR565 = 20
// ColorBGR565ToGray converts from BGR565 (16-bit images)
// to grayscale.
ColorBGR565ToGray = 21
// ColorBGRToBGR555 converts from BGR to BGR555 (16-bit images).
ColorBGRToBGR555 = 22
// ColorRGBToBGR555 converts from RGB to BGR555 (16-bit images).
ColorRGBToBGR555 = 23
// ColorBGR555ToBGR converts from BGR555 (16-bit images) to BGR.
ColorBGR555ToBGR = 24
// ColorBGR555ToRGB converts from BGR555 (16-bit images) to RGB.
ColorBGR555ToRGB = 25
// ColorBGRAToBGR555 converts from BGRA (with alpha channel)
// to BGR555 (16-bit images).
ColorBGRAToBGR555 = 26
// ColorRGBAToBGR555 converts from RGBA (with alpha channel)
// to BGR555 (16-bit images).
ColorRGBAToBGR555 = 27
// ColorBGR555ToBGRA converts from BGR555 (16-bit images)
// to BGRA (with alpha channel).
ColorBGR555ToBGRA = 28
// ColorBGR555ToRGBA converts from BGR555 (16-bit images)
// to RGBA (with alpha channel).
ColorBGR555ToRGBA = 29
// ColorGrayToBGR555 converts from grayscale to BGR555 (16-bit images).
ColorGrayToBGR555 = 30
// ColorBGR555ToGRAY converts from BGR555 (16-bit images) to grayscale.
ColorBGR555ToGRAY = 31
// ColorBGRToXYZ converts from BGR to CIE XYZ.
ColorBGRToXYZ = 32
// ColorRGBToXYZ converts from RGB to CIE XYZ.
ColorRGBToXYZ = 33
// ColorXYZToBGR converts from CIE XYZ to BGR.
ColorXYZToBGR = 34
// ColorXYZToRGB converts from CIE XYZ to RGB.
ColorXYZToRGB = 35
// ColorBGRToYCrCb converts from BGR to luma-chroma (aka YCC).
ColorBGRToYCrCb = 36
// ColorRGBToYCrCb converts from RGB to luma-chroma (aka YCC).
ColorRGBToYCrCb = 37
// ColorYCrCbToBGR converts from luma-chroma (aka YCC) to BGR.
ColorYCrCbToBGR = 38
// ColorYCrCbToRGB converts from luma-chroma (aka YCC) to RGB.
ColorYCrCbToRGB = 39
// ColorBGRToHSV converts from BGR to HSV (hue saturation value).
ColorBGRToHSV = 40
// ColorRGBToHSV converts from RGB to HSV (hue saturation value).
ColorRGBToHSV = 41
// ColorBGRToLab converts from BGR to CIE Lab.
ColorBGRToLab = 44
// ColorRGBToLab converts from RGB to CIE Lab.
ColorRGBToLab = 45
// ColorBGRToLuv converts from BGR to CIE Luv.
ColorBGRToLuv = 50
// ColorRGBToLuv converts from RGB to CIE Luv.
ColorRGBToLuv = 51
// ColorBGRToHLS converts from BGR to HLS (hue lightness saturation).
ColorBGRToHLS = 52
// ColorRGBToHLS converts from RGB to HLS (hue lightness saturation).
ColorRGBToHLS = 53
// ColorHSVToBGR converts from HSV (hue saturation value) to BGR.
ColorHSVToBGR = 54
// ColorHSVToRGB converts from HSV (hue saturation value) to RGB.
ColorHSVToRGB = 55
// ColorLabToBGR converts from CIE Lab to BGR.
ColorLabToBGR = 56
// ColorLabToRGB converts from CIE Lab to RGB.
ColorLabToRGB = 57
// ColorLuvToBGR converts from CIE Luv to BGR.
ColorLuvToBGR = 58
// ColorLuvToRGB converts from CIE Luv to RGB.
ColorLuvToRGB = 59
// ColorHLSToBGR converts from HLS (hue lightness saturation) to BGR.
ColorHLSToBGR = 60
// ColorHLSToRGB converts from HLS (hue lightness saturation) to RGB.
ColorHLSToRGB = 61
// ColorBGRToHSVFull converts from BGR to HSV (hue saturation value) full.
ColorBGRToHSVFull = 66
// ColorRGBToHSVFull converts from RGB to HSV (hue saturation value) full.
ColorRGBToHSVFull = 67
// ColorBGRToHLSFull converts from BGR to HLS (hue lightness saturation) full.
ColorBGRToHLSFull = 68
// ColorRGBToHLSFull converts from RGB to HLS (hue lightness saturation) full.
ColorRGBToHLSFull = 69
// ColorHSVToBGRFull converts from HSV (hue saturation value) to BGR full.
ColorHSVToBGRFull = 70
// ColorHSVToRGBFull converts from HSV (hue saturation value) to RGB full.
ColorHSVToRGBFull = 71
// ColorHLSToBGRFull converts from HLS (hue lightness saturation) to BGR full.
ColorHLSToBGRFull = 72
// ColorHLSToRGBFull converts from HLS (hue lightness saturation) to RGB full.
ColorHLSToRGBFull = 73
// ColorLBGRToLab converts from LBGR to CIE Lab.
ColorLBGRToLab = 74
// ColorLRGBToLab converts from LRGB to CIE Lab.
ColorLRGBToLab = 75
// ColorLBGRToLuv converts from LBGR to CIE Luv.
ColorLBGRToLuv = 76
// ColorLRGBToLuv converts from LRGB to CIE Luv.
ColorLRGBToLuv = 77
// ColorLabToLBGR converts from CIE Lab to LBGR.
ColorLabToLBGR = 78
// ColorLabToLRGB converts from CIE Lab to LRGB.
ColorLabToLRGB = 79
// ColorLuvToLBGR converts from CIE Luv to LBGR.
ColorLuvToLBGR = 80
// ColorLuvToLRGB converts from CIE Luv to LRGB.
ColorLuvToLRGB = 81
// ColorBGRToYUV converts from BGR to YUV.
ColorBGRToYUV = 82
// ColorRGBToYUV converts from RGB to YUV.
ColorRGBToYUV = 83
// ColorYUVToBGR converts from YUV to BGR.
ColorYUVToBGR = 84
// ColorYUVToRGB converts from YUV to RGB.
ColorYUVToRGB = 85
// ColorYUVToRGBNV12 converts from YUV 4:2:0 (NV12 layout) to RGB.
ColorYUVToRGBNV12 = 90
// ColorYUVToBGRNV12 converts from YUV 4:2:0 (NV12 layout) to BGR.
ColorYUVToBGRNV12 = 91
// ColorYUVToRGBNV21 converts from YUV 4:2:0 (NV21 layout) to RGB.
ColorYUVToRGBNV21 = 92
// ColorYUVToBGRNV21 converts from YUV 4:2:0 (NV21 layout) to BGR.
ColorYUVToBGRNV21 = 93
// ColorYUVToRGBANV12 converts from YUV 4:2:0 (NV12 layout) to RGBA.
ColorYUVToRGBANV12 = 94
// ColorYUVToBGRANV12 converts from YUV 4:2:0 (NV12 layout) to BGRA.
ColorYUVToBGRANV12 = 95
// ColorYUVToRGBANV21 converts from YUV 4:2:0 (NV21 layout) to RGBA.
ColorYUVToRGBANV21 = 96
// ColorYUVToBGRANV21 converts from YUV 4:2:0 (NV21 layout) to BGRA.
ColorYUVToBGRANV21 = 97
ColorYUVToRGBYV12 = 98
ColorYUVToBGRYV12 = 99
ColorYUVToRGBIYUV = 100
ColorYUVToBGRIYUV = 101
ColorYUVToRGBAYV12 = 102
ColorYUVToBGRAYV12 = 103
ColorYUVToRGBAIYUV = 104
ColorYUVToBGRAIYUV = 105
ColorYUVToGRAY420 = 106
// YUV 4:2:2 family to RGB
ColorYUVToRGBUYVY = 107
ColorYUVToBGRUYVY = 108
ColorYUVToRGBAUYVY = 111
ColorYUVToBGRAUYVY = 112
ColorYUVToRGBYUY2 = 115
ColorYUVToBGRYUY2 = 116
ColorYUVToRGBYVYU = 117
ColorYUVToBGRYVYU = 118
ColorYUVToRGBAYUY2 = 119
ColorYUVToBGRAYUY2 = 120
ColorYUVToRGBAYVYU = 121
ColorYUVToBGRAYVYU = 122
ColorYUVToGRAYUYVY = 123
ColorYUVToGRAYYUY2 = 124
// alpha premultiplication
ColorRGBATomRGBA = 125
ColormRGBAToRGBA = 126
// RGB to YUV 4:2:0 family
ColorRGBToYUVI420 = 127
ColorBGRToYUVI420 = 128
ColorRGBAToYUVI420 = 129
ColorBGRAToYUVI420 = 130
ColorRGBToYUVYV12 = 131
ColorBGRToYUVYV12 = 132
ColorRGBAToYUVYV12 = 133
ColorBGRAToYUVYV12 = 134
// Demosaicing
ColorBayerBGToBGR = 46
ColorBayerGBToBGR = 47
ColorBayerRGToBGR = 48
ColorBayerGRToBGR = 49
ColorBayerBGToGRAY = 86
ColorBayerGBToGRAY = 87
ColorBayerRGToGRAY = 88
ColorBayerGRToGRAY = 89
// Demosaicing using Variable Number of Gradients
ColorBayerBGToBGRVNG = 62
ColorBayerGBToBGRVNG = 63
ColorBayerRGToBGRVNG = 64
ColorBayerGRToBGRVNG = 65
// Edge-Aware Demosaicing
ColorBayerBGToBGREA = 135
ColorBayerGBToBGREA = 136
ColorBayerRGToBGREA = 137
ColorBayerGRToBGREA = 138
// Demosaicing with alpha channel
ColorBayerBGToBGRA = 139
ColorBayerGBToBGRA = 140
ColorBayerRGToBGRA = 141
ColorBayerGRToBGRA = 142
ColorCOLORCVTMAX = 143
)
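These codes are consumed as the final argument to CvtColor. For example, given a BGR Mat img (as produced by IMRead):

gray := gocv.NewMat()
defer gray.Close()
gocv.CvtColor(img, &gray, gocv.ColorBGRToGray)

hsv := gocv.NewMat()
defer hsv.Close()
gocv.CvtColor(img, &hsv, gocv.ColorBGRToHSV) // common for color-based masking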

303
vendor/gocv.io/x/gocv/imgproc_colorcodes_string.go generated vendored Normal file

@@ -0,0 +1,303 @@
package gocv
func (c ColorConversionCode) String() string {
switch c {
case ColorBGRToBGRA:
return "color-bgr-to-bgra"
case ColorBGRAToBGR:
return "color-bgra-to-bgr"
case ColorBGRToRGBA:
return "color-bgr-to-rgba"
case ColorRGBAToBGR:
return "color-rgba-to-bgr"
case ColorBGRToRGB:
return "color-bgr-to-rgb"
case ColorBGRAToRGBA:
return "color-bgra-to-rgba"
case ColorBGRToGray:
return "color-bgr-to-gray"
case ColorRGBToGray:
return "color-rgb-to-gray"
case ColorGrayToBGR:
return "color-gray-to-bgr"
case ColorGrayToBGRA:
return "color-gray-to-bgra"
case ColorBGRAToGray:
return "color-bgra-to-gray"
case ColorRGBAToGray:
return "color-rgba-to-gray"
case ColorBGRToBGR565:
return "color-bgr-to-bgr565"
case ColorRGBToBGR565:
return "color-rgb-to-bgr565"
case ColorBGR565ToBGR:
return "color-bgr565-to-bgr"
case ColorBGR565ToRGB:
return "color-bgr565-to-rgb"
case ColorBGRAToBGR565:
return "color-bgra-to-bgr565"
case ColorRGBAToBGR565:
return "color-rgba-to-bgr565"
case ColorBGR565ToBGRA:
return "color-bgr565-to-bgra"
case ColorBGR565ToRGBA:
return "color-bgr565-to-rgba"
case ColorGrayToBGR565:
return "color-gray-to-bgr565"
case ColorBGR565ToGray:
return "color-bgr565-to-gray"
case ColorBGRToBGR555:
return "color-bgr-to-bgr555"
case ColorRGBToBGR555:
return "color-rgb-to-bgr555"
case ColorBGR555ToBGR:
return "color-bgr555-to-bgr"
case ColorBGRAToBGR555:
return "color-bgra-to-bgr555"
case ColorRGBAToBGR555:
return "color-rgba-to-bgr555"
case ColorBGR555ToBGRA:
return "color-bgr555-to-bgra"
case ColorBGR555ToRGBA:
return "color-bgr555-to-rgba"
case ColorGrayToBGR555:
return "color-gray-to-bgr555"
case ColorBGR555ToGRAY:
return "color-bgr555-to-gray"
case ColorBGRToXYZ:
return "color-bgr-to-xyz"
case ColorRGBToXYZ:
return "color-rgb-to-xyz"
case ColorXYZToBGR:
return "color-xyz-to-bgr"
case ColorXYZToRGB:
return "color-xyz-to-rgb"
case ColorBGRToYCrCb:
return "color-bgr-to-ycrcb"
case ColorRGBToYCrCb:
return "color-rgb-to-ycrcb"
case ColorYCrCbToBGR:
return "color-ycrcb-to-bgr"
case ColorYCrCbToRGB:
return "color-ycrcb-to-rgb"
case ColorBGRToHSV:
return "color-bgr-to-hsv"
case ColorRGBToHSV:
return "color-rgb-to-hsv"
case ColorBGRToLab:
return "color-bgr-to-lab"
case ColorRGBToLab:
return "color-rgb-to-lab"
case ColorBGRToLuv:
return "color-bgr-to-luv"
case ColorRGBToLuv:
return "color-rgb-to-luv"
case ColorBGRToHLS:
return "color-bgr-to-hls"
case ColorRGBToHLS:
return "color-rgb-to-hls"
case ColorHSVToBGR:
return "color-hsv-to-bgr"
case ColorHSVToRGB:
return "color-hsv-to-rgb"
case ColorLabToBGR:
return "color-lab-to-bgr"
case ColorLabToRGB:
return "color-lab-to-rgb"
case ColorLuvToBGR:
return "color-luv-to-bgr"
case ColorLuvToRGB:
return "color-luv-to-rgb"
case ColorHLSToBGR:
return "color-hls-to-bgr"
case ColorHLSToRGB:
return "color-hls-to-rgb"
case ColorBGRToHSVFull:
return "color-bgr-to-hsv-full"
case ColorRGBToHSVFull:
return "color-rgb-to-hsv-full"
case ColorBGRToHLSFull:
return "color-bgr-to-hls-full"
case ColorRGBToHLSFull:
return "color-rgb-to-hls-full"
case ColorHSVToBGRFull:
return "color-hsv-to-bgr-full"
case ColorHSVToRGBFull:
return "color-hsv-to-rgb-full"
case ColorHLSToBGRFull:
return "color-hls-to-bgr-full"
case ColorHLSToRGBFull:
return "color-hls-to-rgb-full"
case ColorLBGRToLab:
return "color-lbgr-to-lab"
case ColorLRGBToLab:
return "color-lrgb-to-lab"
case ColorLBGRToLuv:
return "color-lbgr-to-luv"
case ColorLRGBToLuv:
return "color-lrgb-to-luv"
case ColorLabToLBGR:
return "color-lab-to-lbgr"
case ColorLabToLRGB:
return "color-lab-to-lrgb"
case ColorLuvToLBGR:
return "color-luv-to-lbgr"
case ColorLuvToLRGB:
return "color-luv-to-lrgb"
case ColorBGRToYUV:
return "color-bgr-to-yuv"
case ColorRGBToYUV:
return "color-rgb-to-yuv"
case ColorYUVToBGR:
return "color-yuv-to-bgr"
case ColorYUVToRGB:
return "color-yuv-to-rgb"
case ColorYUVToRGBNV12:
return "color-yuv-to-rgbnv12"
case ColorYUVToBGRNV12:
return "color-yuv-to-bgrnv12"
case ColorYUVToRGBNV21:
return "color-yuv-to-rgbnv21"
case ColorYUVToBGRNV21:
return "color-yuv-to-bgrnv21"
case ColorYUVToRGBANV12:
return "color-yuv-to-rgbanv12"
case ColorYUVToBGRANV12:
return "color-yuv-to-bgranv12"
case ColorYUVToRGBANV21:
return "color-yuv-to-rgbanv21"
case ColorYUVToBGRANV21:
return "color-yuv-to-bgranv21"
case ColorYUVToRGBYV12:
return "color-yuv-to-rgbyv12"
case ColorYUVToBGRYV12:
return "color-yuv-to-bgryv12"
case ColorYUVToRGBIYUV:
return "color-yuv-to-rgbiyuv"
case ColorYUVToBGRIYUV:
return "color-yuv-to-bgriyuv"
case ColorYUVToRGBAYV12:
return "color-yuv-to-rgbayv12"
case ColorYUVToBGRAYV12:
return "color-yuv-to-bgrayv12"
case ColorYUVToRGBAIYUV:
return "color-yuv-to-rgbaiyuv"
case ColorYUVToBGRAIYUV:
return "color-yuv-to-bgraiyuv"
case ColorYUVToGRAY420:
return "color-yuv-to-gray420"
case ColorYUVToRGBUYVY:
return "color-yuv-to-rgbuyvy"
case ColorYUVToBGRUYVY:
return "color-yuv-to-bgruyvy"
case ColorYUVToRGBAUYVY:
return "color-yuv-to-rgbauyvy"
case ColorYUVToBGRAUYVY:
return "color-yuv-to-bgrauyvy"
case ColorYUVToRGBYUY2:
return "color-yuv-to-rgbyuy2"
case ColorYUVToBGRYUY2:
return "color-yuv-to-bgryuy2"
case ColorYUVToRGBYVYU:
return "color-yuv-to-rgbyvyu"
case ColorYUVToBGRYVYU:
return "color-yuv-to-bgryvyu"
case ColorYUVToRGBAYUY2:
return "color-yuv-to-rgbayuy2"
case ColorYUVToBGRAYUY2:
return "color-yuv-to-bgrayuy2"
case ColorYUVToRGBAYVYU:
return "color-yuv-to-rgbayvyu"
case ColorYUVToBGRAYVYU:
return "color-yuv-to-bgrayvyu"
case ColorYUVToGRAYUYVY:
return "color-yuv-to-grayuyvy"
case ColorYUVToGRAYYUY2:
return "color-yuv-to-grayyuy2"
case ColorRGBATomRGBA:
return "color-rgba-to-mrgba"
case ColormRGBAToRGBA:
return "color-mrgba-to-rgba"
case ColorRGBToYUVI420:
return "color-rgb-to-yuvi420"
case ColorBGRToYUVI420:
return "color-bgr-to-yuvi420"
case ColorRGBAToYUVI420:
return "color-rgba-to-yuvi420"
case ColorBGRAToYUVI420:
return "color-bgra-to-yuvi420"
case ColorRGBToYUVYV12:
return "color-rgb-to-yuvyv12"
case ColorBGRToYUVYV12:
return "color-bgr-to-yuvyv12"
case ColorRGBAToYUVYV12:
return "color-rgba-to-yuvyv12"
case ColorBGRAToYUVYV12:
return "color-bgra-to-yuvyv12"
case ColorBayerBGToBGR:
return "color-bayer-bgt-to-bgr"
case ColorBayerGBToBGR:
return "color-bayer-gbt-to-bgr"
case ColorBayerRGToBGR:
return "color-bayer-rgt-to-bgr"
case ColorBayerGRToBGR:
return "color-bayer-grt-to-bgr"
case ColorBayerBGToGRAY:
return "color-bayer-bgt-to-gray"
case ColorBayerGBToGRAY:
return "color-bayer-gbt-to-gray"
case ColorBayerRGToGRAY:
return "color-bayer-rgt-to-gray"
case ColorBayerGRToGRAY:
return "color-bayer-grt-to-gray"
case ColorBayerBGToBGRVNG:
return "color-bayer-bgt-to-bgrvng"
case ColorBayerGBToBGRVNG:
return "color-bayer-gbt-to-bgrvng"
case ColorBayerRGToBGRVNG:
return "color-bayer-rgt-to-bgrvng"
case ColorBayerGRToBGRVNG:
return "color-bayer-grt-to-bgrvng"
case ColorBayerBGToBGREA:
return "color-bayer-bgt-to-bgrea"
case ColorBayerGBToBGREA:
return "color-bayer-gbt-to-bgrea"
case ColorBayerRGToBGREA:
return "color-bayer-rgt-to-bgrea"
case ColorBayerGRToBGREA:
return "color-bayer-grt-to-bgrea"
case ColorBayerBGToBGRA:
return "color-bayer-bgt-to-bgra"
case ColorBayerGBToBGRA:
return "color-bayer-gbt-to-bgra"
case ColorBayerRGToBGRA:
return "color-bayer-rgt-to-bgra"
case ColorBayerGRToBGRA:
return "color-bayer-grt-to-bgra"
case ColorCOLORCVTMAX:
return "color-color-cvt-max"
}
return ""
}

333
vendor/gocv.io/x/gocv/imgproc_string.go generated vendored Normal file

@@ -0,0 +1,333 @@
package gocv
func (c HistCompMethod) String() string {
switch c {
case HistCmpCorrel:
return "hist-cmp-correl"
case HistCmpChiSqr:
return "hist-cmp-chi-sqr"
case HistCmpIntersect:
return "hist-cmp-intersect"
case HistCmpBhattacharya:
return "hist-cmp-bhattacharya"
case HistCmpChiSqrAlt:
return "hist-cmp-chi-sqr-alt"
case HistCmpKlDiv:
return "hist-cmp-kl-div"
}
return ""
}
func (c DistanceTransformLabelTypes) String() string {
switch c {
case DistanceLabelCComp:
return "distance-label-ccomp"
}
return ""
}
func (c DistanceTransformMasks) String() string {
switch c {
case DistanceMask3:
return "distance-mask3"
}
return ""
}
func (c RetrievalMode) String() string {
switch c {
case RetrievalExternal:
return "retrieval-external"
case RetrievalList:
return "retrieval-list"
case RetrievalCComp:
return "retrieval-ccomp"
case RetrievalTree:
return "retrieval-tree"
case RetrievalFloodfill:
return "retrieval-floodfill"
}
return ""
}
func (c ContourApproximationMode) String() string {
switch c {
case ChainApproxNone:
return "chain-approx-none"
case ChainApproxSimple:
return "chain-approx-simple"
case ChainApproxTC89L1:
return "chain-approx-tc89l1"
case ChainApproxTC89KCOS:
return "chain-approx-tc89kcos"
}
return ""
}
func (c ConnectedComponentsAlgorithmType) String() string {
switch c {
case CCL_WU:
return "ccl-wu"
case CCL_DEFAULT:
return "ccl-default"
case CCL_GRANA:
return "ccl-grana"
}
return ""
}
func (c ConnectedComponentsTypes) String() string {
switch c {
case CC_STAT_LEFT:
return "cc-stat-left"
case CC_STAT_TOP:
return "cc-stat-top"
case CC_STAT_WIDTH:
return "cc-stat-width"
case CC_STAT_AREA:
return "cc-stat-area"
case CC_STAT_MAX:
return "cc-stat-max"
case CC_STAT_HEIGHT:
return "cc-stat-height"
}
return ""
}
func (c TemplateMatchMode) String() string {
switch c {
case TmSqdiff:
return "tm-sq-diff"
case TmSqdiffNormed:
return "tm-sq-diff-normed"
case TmCcorr:
return "tm-ccorr"
case TmCcorrNormed:
return "tm-ccorr-normed"
case TmCcoeff:
return "tm-ccoeff"
case TmCcoeffNormed:
return "tm-ccoeff-normed"
}
return ""
}
func (c MorphShape) String() string {
switch c {
case MorphRect:
return "morph-rect"
case MorphCross:
return "morph-cross"
case MorphEllipse:
return "morph-ellispe"
}
return ""
}
func (c MorphType) String() string {
switch c {
case MorphErode:
return "morph-erode"
case MorphDilate:
return "morph-dilate"
case MorphOpen:
return "morph-open"
case MorphClose:
return "morph-close"
case MorphGradient:
return "morph-gradient"
case MorphTophat:
return "morph-tophat"
case MorphBlackhat:
return "morph-blackhat"
case MorphHitmiss:
return "morph-hitmiss"
}
return ""
}
func (c BorderType) String() string {
switch c {
case BorderConstant:
return "border-constant"
case BorderReplicate:
return "border-replicate"
case BorderReflect:
return "border-reflect"
case BorderWrap:
return "border-wrap"
case BorderTransparent:
return "border-transparent"
case BorderDefault:
return "border-default"
case BorderIsolated:
return "border-isolated"
}
return ""
}
func (c GrabCutMode) String() string {
switch c {
case GCInitWithRect:
return "gc-init-with-rect"
case GCInitWithMask:
return "gc-init-with-mask"
case GCEval:
return "gc-eval"
case GCEvalFreezeModel:
return "gc-eval-freeze-model"
}
return ""
}
func (c HoughMode) String() string {
switch c {
case HoughStandard:
return "hough-standard"
case HoughProbabilistic:
return "hough-probabilistic"
case HoughMultiScale:
return "hough-multi-scale"
case HoughGradient:
return "hough-gradient"
}
return ""
}
func (c ThresholdType) String() string {
switch c {
case ThresholdBinary:
return "threshold-binary"
case ThresholdBinaryInv:
return "threshold-binary-inv"
case ThresholdTrunc:
return "threshold-trunc"
case ThresholdToZero:
return "threshold-to-zero"
case ThresholdToZeroInv:
return "threshold-to-zero-inv"
case ThresholdMask:
return "threshold-mask"
case ThresholdOtsu:
return "threshold-otsu"
case ThresholdTriangle:
return "threshold-triangle"
}
return ""
}
func (c AdaptiveThresholdType) String() string {
switch c {
case AdaptiveThresholdMean:
return "adaptative-threshold-mean"
case AdaptiveThresholdGaussian:
return "adaptative-threshold-gaussian"
}
return ""
}
func (c HersheyFont) String() string {
switch c {
case FontHersheySimplex:
return "font-hershey-simplex"
case FontHersheyPlain:
return "font-hershey-plain"
case FontHersheyDuplex:
return "font-hershey-duplex"
case FontHersheyComplex:
return "font-hershey-complex"
case FontHersheyTriplex:
return "font-hershey-triplex"
case FontHersheyComplexSmall:
return "font-hershey-complex-small"
case FontHersheyScriptSimplex:
return "font-hershey-script-simplex"
case FontHersheyScriptComplex:
return "font-hershey-scipt-complex"
case FontItalic:
return "font-italic"
}
return ""
}
func (c LineType) String() string {
switch c {
case Filled:
return "filled"
case Line4:
return "line4"
case Line8:
return "line8"
case LineAA:
return "line-aa"
}
return ""
}
func (c InterpolationFlags) String() string {
switch c {
case InterpolationNearestNeighbor:
return "interpolation-nearest-neighbor"
case InterpolationLinear:
return "interpolation-linear"
case InterpolationCubic:
return "interpolation-cubic"
case InterpolationArea:
return "interpolation-area"
case InterpolationLanczos4:
return "interpolation-lanczos4"
case InterpolationMax:
return "interpolation-max"
}
return ""
}
func (c ColormapTypes) String() string {
switch c {
case ColormapAutumn:
return "colormap-autumn"
case ColormapBone:
return "colormap-bone"
case ColormapJet:
return "colormap-jet"
case ColormapWinter:
return "colormap-winter"
case ColormapRainbow:
return "colormap-rainbow"
case ColormapOcean:
return "colormap-ocean"
case ColormapSummer:
return "colormap-summer"
case ColormapSpring:
return "colormap-spring"
case ColormapCool:
return "colormap-cool"
case ColormapHsv:
return "colormap-hsv"
case ColormapPink:
return "colormap-pink"
case ColormapParula:
return "colormap-parula"
}
return ""
}
func (c DistanceTypes) String() string {
switch c {
case DistUser:
return "dist-user"
case DistL1:
return "dist-l1"
case DistL2:
return "dist-l2"
case DistL12:
return "dist-l12"
case DistFair:
return "dist-fair"
case DistWelsch:
return "dist-welsch"
case DistHuber:
return "dist-huber"
}
return ""
}

21
vendor/gocv.io/x/gocv/mat_noprofile.go generated vendored Normal file

@@ -0,0 +1,21 @@
// +build !matprofile
package gocv
/*
#include <stdlib.h>
#include "core.h"
*/
import "C"
// newMat returns a new Mat from a C Mat
func newMat(p C.Mat) Mat {
return Mat{p: p}
}
// Close the Mat object.
func (m *Mat) Close() error {
C.Mat_Close(m.p)
m.p = nil
return nil
}

74
vendor/gocv.io/x/gocv/mat_profile.go generated vendored Normal file

@@ -0,0 +1,74 @@
// +build matprofile
package gocv
/*
#include <stdlib.h>
#include "core.h"
*/
import (
"C"
)
import (
"runtime/pprof"
)
// MatProfile is a pprof.Profile that contains the stack traces that created
// all currently unclosed Mats. Every time a Mat is created, the stack trace is
// added to this profile and every time the Mat is closed the trace is removed.
// In a program that is not leaking, this profile's count should not
// continuously increase and ideally when a program is terminated the count
// should be zero. You can get the count at any time with:
//
// gocv.MatProfile.Count()
//
// and you can display the current entries with:
//
// var b bytes.Buffer
// gocv.MatProfile.WriteTo(&b, 1)
// fmt.Print(b.String())
//
// This will display stack traces of where the unclosed Mats were instantiated.
// For example, the results could look something like this:
//
// 1 @ 0x4146a0c 0x4146a57 0x4119666 0x40bb18f 0x405a841
// # 0x4146a0b gocv.io/x/gocv.newMat+0x4b /go/src/gocv.io/x/gocv/core.go:120
// # 0x4146a56 gocv.io/x/gocv.NewMat+0x26 /go/src/gocv.io/x/gocv/core.go:126
// # 0x4119665 gocv.io/x/gocv.TestMat+0x25 /go/src/gocv.io/x/gocv/core_test.go:29
// # 0x40bb18e testing.tRunner+0xbe /usr/local/Cellar/go/1.11/libexec/src/testing/testing.go:827
//
// Furthermore, if the program is a long running process or if gocv is being used on a
// web server, it may be helpful to install the HTTP interface using:
//
// import _ "net/http/pprof"
//
// In order to include the MatProfile custom profiler, you MUST build or run your application
// or tests using the following build tag:
// -tags matprofile
//
// For more information, see the runtime/pprof package documentation.
var MatProfile *pprof.Profile
func init() {
profName := "gocv.io/x/gocv.Mat"
MatProfile = pprof.Lookup(profName)
if MatProfile == nil {
MatProfile = pprof.NewProfile(profName)
}
}
// newMat returns a new Mat from a C Mat and records it to the MatProfile.
func newMat(p C.Mat) Mat {
m := Mat{p: p}
MatProfile.Add(p, 1)
return m
}
// Close the Mat object.
func (m *Mat) Close() error {
C.Mat_Close(m.p)
MatProfile.Remove(m.p)
m.p = nil
return nil
}
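A minimal leak check built on this profile could look like the test below. This is a sketch: the test itself must also be compiled with -tags matprofile, or MatProfile will not exist.

package gocv_test

import (
	"testing"

	"gocv.io/x/gocv"
)

func TestNoMatLeaks(t *testing.T) {
	before := gocv.MatProfile.Count()

	m := gocv.NewMat()
	m.Close()

	if after := gocv.MatProfile.Count(); after != before {
		t.Errorf("leaked %d Mat(s)", after-before)
	}
}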

151
vendor/gocv.io/x/gocv/objdetect.cpp generated vendored Normal file

@@ -0,0 +1,151 @@
#include "objdetect.h"
// CascadeClassifier
CascadeClassifier CascadeClassifier_New() {
return new cv::CascadeClassifier();
}
void CascadeClassifier_Close(CascadeClassifier cs) {
delete cs;
}
int CascadeClassifier_Load(CascadeClassifier cs, const char* name) {
return cs->load(name);
}
struct Rects CascadeClassifier_DetectMultiScale(CascadeClassifier cs, Mat img) {
std::vector<cv::Rect> detected;
cs->detectMultiScale(*img, detected); // uses all default parameters
Rect* rects = new Rect[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
Rect r = {detected[i].x, detected[i].y, detected[i].width, detected[i].height};
rects[i] = r;
}
Rects ret = {rects, (int)detected.size()};
return ret;
}
struct Rects CascadeClassifier_DetectMultiScaleWithParams(CascadeClassifier cs, Mat img,
double scale, int minNeighbors, int flags, Size minSize, Size maxSize) {
cv::Size minSz(minSize.width, minSize.height);
cv::Size maxSz(maxSize.width, maxSize.height);
std::vector<cv::Rect> detected;
cs->detectMultiScale(*img, detected, scale, minNeighbors, flags, minSz, maxSz);
Rect* rects = new Rect[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
Rect r = {detected[i].x, detected[i].y, detected[i].width, detected[i].height};
rects[i] = r;
}
Rects ret = {rects, (int)detected.size()};
return ret;
}
// HOGDescriptor
HOGDescriptor HOGDescriptor_New() {
return new cv::HOGDescriptor();
}
void HOGDescriptor_Close(HOGDescriptor hog) {
delete hog;
}
int HOGDescriptor_Load(HOGDescriptor hog, const char* name) {
return hog->load(name);
}
struct Rects HOGDescriptor_DetectMultiScale(HOGDescriptor hog, Mat img) {
std::vector<cv::Rect> detected;
hog->detectMultiScale(*img, detected);
Rect* rects = new Rect[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
Rect r = {detected[i].x, detected[i].y, detected[i].width, detected[i].height};
rects[i] = r;
}
Rects ret = {rects, (int)detected.size()};
return ret;
}
struct Rects HOGDescriptor_DetectMultiScaleWithParams(HOGDescriptor hog, Mat img,
double hitThresh, Size winStride, Size padding, double scale, double finalThresh,
bool useMeanshiftGrouping) {
cv::Size wSz(winStride.width, winStride.height);
cv::Size pSz(padding.width, padding.height);
std::vector<cv::Rect> detected;
hog->detectMultiScale(*img, detected, hitThresh, wSz, pSz, scale, finalThresh,
useMeanshiftGrouping);
Rect* rects = new Rect[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
Rect r = {detected[i].x, detected[i].y, detected[i].width, detected[i].height};
rects[i] = r;
}
Rects ret = {rects, (int)detected.size()};
return ret;
}
Mat HOG_GetDefaultPeopleDetector() {
return new cv::Mat(cv::HOGDescriptor::getDefaultPeopleDetector());
}
void HOGDescriptor_SetSVMDetector(HOGDescriptor hog, Mat det) {
hog->setSVMDetector(*det);
}
struct Rects GroupRectangles(struct Rects rects, int groupThreshold, double eps) {
std::vector<cv::Rect> vRect;
for (int i = 0; i < rects.length; ++i) {
cv::Rect r = cv::Rect(rects.rects[i].x, rects.rects[i].y, rects.rects[i].width,
rects.rects[i].height);
vRect.push_back(r);
}
cv::groupRectangles(vRect, groupThreshold, eps);
Rect* results = new Rect[vRect.size()];
for (size_t i = 0; i < vRect.size(); ++i) {
Rect r = {vRect[i].x, vRect[i].y, vRect[i].width, vRect[i].height};
results[i] = r;
}
Rects ret = {results, (int)vRect.size()};
return ret;
}
// QRCodeDetector
QRCodeDetector QRCodeDetector_New() {
return new cv::QRCodeDetector();
}
void QRCodeDetector_Close(QRCodeDetector qr) {
delete qr;
}
const char* QRCodeDetector_DetectAndDecode(QRCodeDetector qr, Mat input,Mat points,Mat straight_qrcode) {
cv::String *str = new cv::String(qr->detectAndDecode(*input,*points,*straight_qrcode));
return str->c_str();
}
bool QRCodeDetector_Detect(QRCodeDetector qr, Mat input,Mat points) {
return qr->detect(*input,*points);
}
const char* QRCodeDetector_Decode(QRCodeDetector qr, Mat input, Mat inputPoints, Mat straight_qrcode) {
// Decode works from the already-detected points, so call decode() rather than detectAndDecode().
cv::String *str = new cv::String(qr->decode(*input, *inputPoints, *straight_qrcode));
return str->c_str();
}
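The Go wrappers over these QR functions (added for the QR support in gocv 0.22) are used roughly as below. This is a hedged sketch assuming a NewQRCodeDetector constructor and a DetectAndDecode method of this shape, with img an already-loaded Mat and log imported:

qr := gocv.NewQRCodeDetector()
defer qr.Close()

points := gocv.NewMat() // receives the corner points of the detected code
defer points.Close()
straight := gocv.NewMat() // receives the rectified QR code image
defer straight.Close()

if decoded := qr.DetectAndDecode(img, &points, &straight); decoded != "" {
	log.Printf("QR payload: %q", decoded)
} else {
	log.Println("no QR code found")
}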

240
vendor/gocv.io/x/gocv/objdetect.go generated vendored Normal file

@@ -0,0 +1,240 @@
package gocv
/*
#include <stdlib.h>
#include "objdetect.h"
*/
import "C"
import (
"image"
"unsafe"
)
// CascadeClassifier is a cascade classifier class for object detection.
//
// For further details, please see:
// http://docs.opencv.org/master/d1/de5/classcv_1_1CascadeClassifier.html
//
type CascadeClassifier struct {
p C.CascadeClassifier
}
// NewCascadeClassifier returns a new CascadeClassifier.
func NewCascadeClassifier() CascadeClassifier {
return CascadeClassifier{p: C.CascadeClassifier_New()}
}
// Close deletes the CascadeClassifier's pointer.
func (c *CascadeClassifier) Close() error {
C.CascadeClassifier_Close(c.p)
c.p = nil
return nil
}
// Load cascade classifier from a file.
//
// For further details, please see:
// http://docs.opencv.org/master/d1/de5/classcv_1_1CascadeClassifier.html#a1a5884c8cc749422f9eb77c2471958bc
//
func (c *CascadeClassifier) Load(name string) bool {
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
return C.CascadeClassifier_Load(c.p, cName) != 0
}
// DetectMultiScale detects objects of different sizes in the input Mat image.
// The detected objects are returned as a slice of image.Rectangle structs.
//
// For further details, please see:
// http://docs.opencv.org/master/d1/de5/classcv_1_1CascadeClassifier.html#aaf8181cb63968136476ec4204ffca498
//
func (c *CascadeClassifier) DetectMultiScale(img Mat) []image.Rectangle {
ret := C.CascadeClassifier_DetectMultiScale(c.p, img.p)
defer C.Rects_Close(ret)
return toRectangles(ret)
}
// DetectMultiScaleWithParams calls DetectMultiScale but allows setting parameters
// to values other than just the defaults.
//
// For further details, please see:
// http://docs.opencv.org/master/d1/de5/classcv_1_1CascadeClassifier.html#aaf8181cb63968136476ec4204ffca498
//
func (c *CascadeClassifier) DetectMultiScaleWithParams(img Mat, scale float64,
minNeighbors, flags int, minSize, maxSize image.Point) []image.Rectangle {
minSz := C.struct_Size{
width: C.int(minSize.X),
height: C.int(minSize.Y),
}
maxSz := C.struct_Size{
width: C.int(maxSize.X),
height: C.int(maxSize.Y),
}
ret := C.CascadeClassifier_DetectMultiScaleWithParams(c.p, img.p, C.double(scale),
C.int(minNeighbors), C.int(flags), minSz, maxSz)
defer C.Rects_Close(ret)
return toRectangles(ret)
}
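As a quick illustration of the two detection entry points above (this sketch is not part of the vendored file), a minimal program that loads a Haar cascade and prints the detected rectangles; the image path and the cascade XML path are placeholders:

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    // Placeholder input image; any color image works.
    img := gocv.IMRead("input.jpg", gocv.IMReadColor)
    if img.Empty() {
        fmt.Println("could not read input.jpg")
        return
    }
    defer img.Close()

    classifier := gocv.NewCascadeClassifier()
    defer classifier.Close()

    // haarcascade_frontalface_default.xml ships with OpenCV; adjust the path.
    if !classifier.Load("haarcascade_frontalface_default.xml") {
        fmt.Println("could not load cascade file")
        return
    }

    for _, r := range classifier.DetectMultiScale(img) {
        fmt.Println("found object at", r)
    }
}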
// HOGDescriptor is a Histogram of Oriented Gradients (HOG) descriptor for object detection.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d33/structcv_1_1HOGDescriptor.html#a723b95b709cfd3f95cf9e616de988fc8
//
type HOGDescriptor struct {
p C.HOGDescriptor
}
// NewHOGDescriptor returns a new HOGDescriptor.
func NewHOGDescriptor() HOGDescriptor {
return HOGDescriptor{p: C.HOGDescriptor_New()}
}
// Close deletes the HOGDescriptor's pointer.
func (h *HOGDescriptor) Close() error {
C.HOGDescriptor_Close(h.p)
h.p = nil
return nil
}
// DetectMultiScale detects objects in the input Mat image.
// The detected objects are returned as a slice of image.Rectangle structs.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d33/structcv_1_1HOGDescriptor.html#a660e5cd036fd5ddf0f5767b352acd948
//
func (h *HOGDescriptor) DetectMultiScale(img Mat) []image.Rectangle {
ret := C.HOGDescriptor_DetectMultiScale(h.p, img.p)
defer C.Rects_Close(ret)
return toRectangles(ret)
}
// DetectMultiScaleWithParams calls DetectMultiScale but allows setting parameters
// to values other than just the defaults.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d33/structcv_1_1HOGDescriptor.html#a660e5cd036fd5ddf0f5767b352acd948
//
func (h *HOGDescriptor) DetectMultiScaleWithParams(img Mat, hitThresh float64,
winStride, padding image.Point, scale, finalThreshold float64, useMeanshiftGrouping bool) []image.Rectangle {
wSz := C.struct_Size{
width: C.int(winStride.X),
height: C.int(winStride.Y),
}
pSz := C.struct_Size{
width: C.int(padding.X),
height: C.int(padding.Y),
}
ret := C.HOGDescriptor_DetectMultiScaleWithParams(h.p, img.p, C.double(hitThresh),
wSz, pSz, C.double(scale), C.double(finalThreshold), C.bool(useMeanshiftGrouping))
defer C.Rects_Close(ret)
return toRectangles(ret)
}
// HOGDefaultPeopleDetector returns a new Mat with the HOG DefaultPeopleDetector.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d33/structcv_1_1HOGDescriptor.html#a660e5cd036fd5ddf0f5767b352acd948
//
func HOGDefaultPeopleDetector() Mat {
return newMat(C.HOG_GetDefaultPeopleDetector())
}
// SetSVMDetector sets the data for the HOGDescriptor.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d33/structcv_1_1HOGDescriptor.html#a09e354ad701f56f9c550dc0385dc36f1
//
func (h *HOGDescriptor) SetSVMDetector(det Mat) error {
C.HOGDescriptor_SetSVMDetector(h.p, det.p)
return nil
}
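A hedged usage sketch for the HOG people detector, wiring together the three functions above (not part of the vendored sources); the image path is a placeholder:

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    img := gocv.IMRead("street.jpg", gocv.IMReadColor) // placeholder path
    if img.Empty() {
        return
    }
    defer img.Close()

    hog := gocv.NewHOGDescriptor()
    defer hog.Close()

    // Use the built-in people detector as the SVM coefficients.
    detector := gocv.HOGDefaultPeopleDetector()
    defer detector.Close()
    hog.SetSVMDetector(detector)

    for _, r := range hog.DetectMultiScale(img) {
        fmt.Println("person candidate at", r)
    }
}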
// GroupRectangles groups the object candidate rectangles.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d54/group__objdetect.html#ga3dba897ade8aa8227edda66508e16ab9
//
func GroupRectangles(rects []image.Rectangle, groupThreshold int, eps float64) []image.Rectangle {
cRectArray := make([]C.struct_Rect, len(rects))
for i, r := range rects {
cRect := C.struct_Rect{
x: C.int(r.Min.X),
y: C.int(r.Min.Y),
width: C.int(r.Size().X),
height: C.int(r.Size().Y),
}
cRectArray[i] = cRect
}
cRects := C.struct_Rects{
rects: (*C.Rect)(&cRectArray[0]),
length: C.int(len(rects)),
}
ret := C.GroupRectangles(cRects, C.int(groupThreshold), C.double(eps))
return toRectangles(ret)
}
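To make the grouping semantics concrete, a small illustrative sketch: with groupThreshold set to 1 a cluster must contain at least two similar rectangles to survive, so the three overlapping candidates below merge into one box while the lone outlier is dropped:

package main

import (
    "fmt"
    "image"

    "gocv.io/x/gocv"
)

func main() {
    // Three nearly identical candidates plus one outlier.
    candidates := []image.Rectangle{
        image.Rect(10, 10, 60, 60),
        image.Rect(12, 11, 62, 61),
        image.Rect(9, 12, 59, 62),
        image.Rect(200, 200, 240, 240),
    }
    grouped := gocv.GroupRectangles(candidates, 1, 0.2)
    fmt.Println(grouped) // one merged rectangle; the outlier is rejected
}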
// QRCodeDetector is used to detect and decode QR codes.
//
// For further details, please see:
// https://docs.opencv.org/master/de/dc3/classcv_1_1QRCodeDetector.html
//
type QRCodeDetector struct {
p C.QRCodeDetector
}
// newQRCodeDetector returns a new QRCodeDetector from a C QRCodeDetector
func newQRCodeDetector(p C.QRCodeDetector) QRCodeDetector {
return QRCodeDetector{p: p}
}
// NewQRCodeDetector returns a new QRCodeDetector.
func NewQRCodeDetector() QRCodeDetector {
return newQRCodeDetector(C.QRCodeDetector_New())
}
// Close deletes the QRCodeDetector's pointer.
func (a *QRCodeDetector) Close() error {
C.QRCodeDetector_Close(a.p)
a.p = nil
return nil
}
// DetectAndDecode both detects and decodes a QR code.
//
// For further details, please see:
// https://docs.opencv.org/master/de/dc3/classcv_1_1QRCodeDetector.html#a7290bd6a5d59b14a37979c3a14fbf394
//
func (a *QRCodeDetector) DetectAndDecode(input Mat, points *Mat, straight_qrcode *Mat) string {
goResult := C.GoString(C.QRCodeDetector_DetectAndDecode(a.p, input.p, points.p, straight_qrcode.p))
return string(goResult)
}
// Detect detects QR code in image and returns the quadrangle containing the code.
//
// For further details, please see:
// https://docs.opencv.org/master/de/dc3/classcv_1_1QRCodeDetector.html#a64373f7d877d27473f64fe04bb57d22b
//
func (a *QRCodeDetector) Detect(input Mat, points *Mat) bool {
result := C.QRCodeDetector_Detect(a.p, input.p, points.p)
return bool(result)
}
// Decode decodes QR code in image once it's found by the detect() method. Returns UTF8-encoded output string or empty string if the code cannot be decoded.
//
// For further details, please see:
// https://docs.opencv.org/master/de/dc3/classcv_1_1QRCodeDetector.html#a4172c2eb4825c844fb1b0ae67202d329
//
func (a *QRCodeDetector) Decode(input Mat, points Mat, straight_qrcode *Mat) string {
goResult := C.GoString(C.QRCodeDetector_Decode(a.p, input.p, points.p, straight_qrcode.p))
return string(goResult)
}
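A minimal QR decoding sketch using the wrapper above (not part of the vendored file); qrcode.png is a placeholder path:

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    img := gocv.IMRead("qrcode.png", gocv.IMReadColor) // placeholder path
    if img.Empty() {
        return
    }
    defer img.Close()

    qr := gocv.NewQRCodeDetector()
    defer qr.Close()

    // points receives the quadrangle corners; straight the rectified code.
    points := gocv.NewMat()
    defer points.Close()
    straight := gocv.NewMat()
    defer straight.Close()

    if text := qr.DetectAndDecode(img, &points, &straight); text != "" {
        fmt.Println("decoded:", text)
    }
}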

53
vendor/gocv.io/x/gocv/objdetect.h generated vendored Normal file
View File

@ -0,0 +1,53 @@
#ifndef _OPENCV3_OBJDETECT_H_
#define _OPENCV3_OBJDETECT_H_
#include <stdbool.h>
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
#ifdef __cplusplus
typedef cv::CascadeClassifier* CascadeClassifier;
typedef cv::HOGDescriptor* HOGDescriptor;
typedef cv::QRCodeDetector* QRCodeDetector;
#else
typedef void* CascadeClassifier;
typedef void* HOGDescriptor;
typedef void* QRCodeDetector;
#endif
// CascadeClassifier
CascadeClassifier CascadeClassifier_New();
void CascadeClassifier_Close(CascadeClassifier cs);
int CascadeClassifier_Load(CascadeClassifier cs, const char* name);
struct Rects CascadeClassifier_DetectMultiScale(CascadeClassifier cs, Mat img);
struct Rects CascadeClassifier_DetectMultiScaleWithParams(CascadeClassifier cs, Mat img,
double scale, int minNeighbors, int flags, Size minSize, Size maxSize);
// HOGDescriptor
HOGDescriptor HOGDescriptor_New();
void HOGDescriptor_Close(HOGDescriptor hog);
int HOGDescriptor_Load(HOGDescriptor hog, const char* name);
struct Rects HOGDescriptor_DetectMultiScale(HOGDescriptor hog, Mat img);
struct Rects HOGDescriptor_DetectMultiScaleWithParams(HOGDescriptor hog, Mat img,
double hitThresh, Size winStride, Size padding, double scale, double finalThreshold,
bool useMeanshiftGrouping);
Mat HOG_GetDefaultPeopleDetector();
void HOGDescriptor_SetSVMDetector(HOGDescriptor hog, Mat det);
struct Rects GroupRectangles(struct Rects rects, int groupThreshold, double eps);
// QRCodeDetector
QRCodeDetector QRCodeDetector_New();
const char* QRCodeDetector_DetectAndDecode(QRCodeDetector qr, Mat input, Mat points, Mat straight_qrcode);
bool QRCodeDetector_Detect(QRCodeDetector qr, Mat input, Mat points);
const char* QRCodeDetector_Decode(QRCodeDetector qr, Mat input, Mat inputPoints, Mat straight_qrcode);
void QRCodeDetector_Close(QRCodeDetector qr);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_OBJDETECT_H_

5
vendor/gocv.io/x/gocv/svd.cpp generated vendored Normal file
View File

@ -0,0 +1,5 @@
#include "svd.h"
void SVD_Compute(Mat src, Mat w, Mat u, Mat vt) {
cv::SVD::compute(*src, *w, *u, *vt, 0);
}

14
vendor/gocv.io/x/gocv/svd.go generated vendored Normal file
View File

@ -0,0 +1,14 @@
package gocv
/*
#include <stdlib.h>
#include "svd.h"
*/
import "C"
// SVDCompute decomposes a matrix and stores the results in the user-provided matrices.
//
// For further details, please see:
// https://docs.opencv.org/4.1.2/df/df7/classcv_1_1SVD.html#a76f0b2044df458160292045a3d3714c6
func SVDCompute(src Mat, w, u, vt *Mat) {
C.SVD_Compute(src.Ptr(), w.Ptr(), u.Ptr(), vt.Ptr())
}
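For illustration (not part of the vendored sources), decomposing a small matrix; for A = [[3, 0], [4, 5]] the singular values are sqrt(45), approximately 6.708, and sqrt(5), approximately 2.236:

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    // Decompose A = U * diag(w) * Vt.
    a := gocv.NewMatWithSize(2, 2, gocv.MatTypeCV32F)
    defer a.Close()
    a.SetFloatAt(0, 0, 3)
    a.SetFloatAt(0, 1, 0)
    a.SetFloatAt(1, 0, 4)
    a.SetFloatAt(1, 1, 5)

    w := gocv.NewMat()
    defer w.Close()
    u := gocv.NewMat()
    defer u.Close()
    vt := gocv.NewMat()
    defer vt.Close()

    gocv.SVDCompute(a, &w, &u, &vt)
    // w is a 2x1 Mat holding the singular values in descending order.
    fmt.Println("singular values:", w.GetFloatAt(0, 0), w.GetFloatAt(1, 0))
}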

18
vendor/gocv.io/x/gocv/svd.h generated vendored Normal file
View File

@ -0,0 +1,18 @@
#ifndef _OPENCV3_SVD_H_
#define _OPENCV3_SVD_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
void SVD_Compute(Mat src, Mat w, Mat u, Mat vt);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_SVD_H_

79
vendor/gocv.io/x/gocv/travis_build_opencv.sh generated vendored Normal file
View File

@ -0,0 +1,79 @@
#!/bin/bash
set -eux -o pipefail
OPENCV_VERSION=${OPENCV_VERSION:-4.2.0}
#GRAPHICAL=ON
GRAPHICAL=${GRAPHICAL:-OFF}
# OpenCV expects libjpeg at /usr/lib/libjpeg.so, but on Ubuntu 14.04 it lives
# under /usr/lib/x86_64-linux-gnu, so create a symlink.
mkdir -p $HOME/usr/lib
if [[ ! -f "$HOME/usr/lib/libjpeg.so" ]]; then
ln -s /usr/lib/x86_64-linux-gnu/libjpeg.so $HOME/usr/lib/libjpeg.so
fi
# Same for libpng.so
if [[ ! -f "$HOME/usr/lib/libpng.so" ]]; then
ln -s /usr/lib/x86_64-linux-gnu/libpng.so $HOME/usr/lib/libpng.so
fi
# Build OpenCV
if [[ ! -e "$HOME/usr/installed-${OPENCV_VERSION}" ]]; then
TMP=$(mktemp -d)
if [[ ! -d "opencv-${OPENCV_VERSION}/build" ]]; then
curl -sL https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip > ${TMP}/opencv.zip
unzip -q ${TMP}/opencv.zip
mkdir opencv-${OPENCV_VERSION}/build
rm ${TMP}/opencv.zip
fi
if [[ ! -d "opencv_contrib-${OPENCV_VERSION}/modules" ]]; then
curl -sL https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip > ${TMP}/opencv-contrib.zip
unzip -q ${TMP}/opencv-contrib.zip
rm ${TMP}/opencv-contrib.zip
fi
rmdir ${TMP}
cd opencv-${OPENCV_VERSION}/build
cmake -D WITH_IPP=${GRAPHICAL} \
-D WITH_OPENGL=${GRAPHICAL} \
-D WITH_QT=${GRAPHICAL} \
-D BUILD_EXAMPLES=OFF \
-D BUILD_TESTS=OFF \
-D BUILD_PERF_TESTS=OFF \
-D BUILD_opencv_java=OFF \
-D BUILD_opencv_python=OFF \
-D BUILD_opencv_python2=OFF \
-D BUILD_opencv_python3=OFF \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D CMAKE_INSTALL_PREFIX=$HOME/usr \
-D OPENCV_ENABLE_NONFREE=ON \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-${OPENCV_VERSION}/modules ..
make -j8
make install && touch $HOME/usr/installed-${OPENCV_VERSION}
# Caffe and TensorFlow test data for the dnn tests
if [[ ! -d "${HOME}/testdata" ]]; then
mkdir ${HOME}/testdata
fi
#if [[ ! -f "${HOME}/testdata/bvlc_googlenet.prototxt" ]]; then
curl -sL https://raw.githubusercontent.com/opencv/opencv_extra/master/testdata/dnn/bvlc_googlenet.prototxt > ${HOME}/testdata/bvlc_googlenet.prototxt
#fi
#if [[ ! -f "${HOME}/testdata/bvlc_googlenet.caffemodel" ]]; then
curl -sL http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel > ${HOME}/testdata/bvlc_googlenet.caffemodel
#fi
#if [[ ! -f "${HOME}/testdata/tensorflow_inception_graph.pb" ]]; then
curl -sL https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip > ${HOME}/testdata/inception5h.zip
unzip -o ${HOME}/testdata/inception5h.zip tensorflow_inception_graph.pb -d ${HOME}/testdata
#fi
cd ../..
touch $HOME/fresh-cache
fi

5
vendor/gocv.io/x/gocv/version.cpp generated vendored Normal file
View File

@ -0,0 +1,5 @@
#include "version.h"
const char* openCVVersion() {
return CV_VERSION;
}

20
vendor/gocv.io/x/gocv/version.go generated vendored Normal file
View File

@ -0,0 +1,20 @@
package gocv
/*
#include <stdlib.h>
#include "version.h"
*/
import "C"
// GoCVVersion of this package, for display purposes.
const GoCVVersion = "0.22.0"
// Version returns the current GoCV package version.
func Version() string {
return GoCVVersion
}
// OpenCVVersion returns the current OpenCV library version.
func OpenCVVersion() string {
return C.GoString(C.openCVVersion())
}
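A trivial sketch printing both versions (illustrative only):

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    fmt.Printf("gocv version: %s\n", gocv.Version())
    fmt.Printf("opencv lib version: %s\n", gocv.OpenCVVersion())
}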

17
vendor/gocv.io/x/gocv/version.h generated vendored Normal file
View File

@ -0,0 +1,17 @@
#ifndef _OPENCV3_VERSION_H_
#define _OPENCV3_VERSION_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
const char* openCVVersion();
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_VERSION_H_

49
vendor/gocv.io/x/gocv/video.cpp generated vendored Normal file
View File

@ -0,0 +1,49 @@
#include "video.h"
BackgroundSubtractorMOG2 BackgroundSubtractorMOG2_Create() {
return new cv::Ptr<cv::BackgroundSubtractorMOG2>(cv::createBackgroundSubtractorMOG2());
}
BackgroundSubtractorMOG2 BackgroundSubtractorMOG2_CreateWithParams(int history, double varThreshold, bool detectShadows) {
    return new cv::Ptr<cv::BackgroundSubtractorMOG2>(cv::createBackgroundSubtractorMOG2(history, varThreshold, detectShadows));
}
BackgroundSubtractorKNN BackgroundSubtractorKNN_Create() {
return new cv::Ptr<cv::BackgroundSubtractorKNN>(cv::createBackgroundSubtractorKNN());
}
BackgroundSubtractorKNN BackgroundSubtractorKNN_CreateWithParams(int history, double dist2Threshold, bool detectShadows) {
    return new cv::Ptr<cv::BackgroundSubtractorKNN>(cv::createBackgroundSubtractorKNN(history, dist2Threshold, detectShadows));
}
void BackgroundSubtractorMOG2_Close(BackgroundSubtractorMOG2 b) {
delete b;
}
void BackgroundSubtractorMOG2_Apply(BackgroundSubtractorMOG2 b, Mat src, Mat dst) {
(*b)->apply(*src, *dst);
}
void BackgroundSubtractorKNN_Close(BackgroundSubtractorKNN k) {
delete k;
}
void BackgroundSubtractorKNN_Apply(BackgroundSubtractorKNN k, Mat src, Mat dst) {
(*k)->apply(*src, *dst);
}
void CalcOpticalFlowFarneback(Mat prevImg, Mat nextImg, Mat flow, double scale, int levels,
int winsize, int iterations, int polyN, double polySigma, int flags) {
cv::calcOpticalFlowFarneback(*prevImg, *nextImg, *flow, scale, levels, winsize, iterations, polyN,
polySigma, flags);
}
void CalcOpticalFlowPyrLK(Mat prevImg, Mat nextImg, Mat prevPts, Mat nextPts, Mat status, Mat err) {
cv::calcOpticalFlowPyrLK(*prevImg, *nextImg, *prevPts, *nextPts, *status, *err);
}
void CalcOpticalFlowPyrLKWithParams(Mat prevImg, Mat nextImg, Mat prevPts, Mat nextPts, Mat status, Mat err, Size winSize, int maxLevel, TermCriteria criteria, int flags, double minEigThreshold){
cv::Size sz(winSize.width, winSize.height);
cv::calcOpticalFlowPyrLK(*prevImg, *nextImg, *prevPts, *nextPts, *status, *err, sz, maxLevel, *criteria, flags, minEigThreshold);
}

157
vendor/gocv.io/x/gocv/video.go generated vendored Normal file
View File

@ -0,0 +1,157 @@
package gocv
/*
#include <stdlib.h>
#include "video.h"
*/
import "C"
import (
"image"
"unsafe"
)
/**
cv::OPTFLOW_USE_INITIAL_FLOW = 4,
cv::OPTFLOW_LK_GET_MIN_EIGENVALS = 8,
cv::OPTFLOW_FARNEBACK_GAUSSIAN = 256
For further details, please see: https://docs.opencv.org/master/dc/d6b/group__video__track.html#gga2c6cc144c9eee043575d5b311ac8af08a9d4430ac75199af0cf6fcdefba30eafe
*/
const (
OptflowUseInitialFlow = 4
OptflowLkGetMinEigenvals = 8
OptflowFarnebackGaussian = 256
)
// BackgroundSubtractorMOG2 is a wrapper around the cv::BackgroundSubtractorMOG2.
type BackgroundSubtractorMOG2 struct {
// C.BackgroundSubtractorMOG2
p unsafe.Pointer
}
// NewBackgroundSubtractorMOG2 returns a new BackgroundSubtractor algorithm
// of type MOG2. MOG2 is a Gaussian Mixture-based Background/Foreground
// Segmentation Algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/de/de1/group__video__motion.html#ga2beb2dee7a073809ccec60f145b6b29c
// https://docs.opencv.org/master/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html
//
func NewBackgroundSubtractorMOG2() BackgroundSubtractorMOG2 {
return BackgroundSubtractorMOG2{p: unsafe.Pointer(C.BackgroundSubtractorMOG2_Create())}
}
// NewBackgroundSubtractorMOG2WithParams returns a new BackgroundSubtractor algorithm
// of type MOG2 with customized parameters. MOG2 is a Gaussian Mixture-based Background/Foreground
// Segmentation Algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/de/de1/group__video__motion.html#ga2beb2dee7a073809ccec60f145b6b29c
// https://docs.opencv.org/master/d7/d7b/classcv_1_1BackgroundSubtractorMOG2.html
//
func NewBackgroundSubtractorMOG2WithParams(history int, varThreshold float64, detectShadows bool) BackgroundSubtractorMOG2 {
return BackgroundSubtractorMOG2{p: unsafe.Pointer(C.BackgroundSubtractorMOG2_CreateWithParams(C.int(history), C.double(varThreshold), C.bool(detectShadows)))}
}
// Close BackgroundSubtractorMOG2.
func (b *BackgroundSubtractorMOG2) Close() error {
C.BackgroundSubtractorMOG2_Close((C.BackgroundSubtractorMOG2)(b.p))
b.p = nil
return nil
}
// Apply computes a foreground mask using the current BackgroundSubtractorMOG2.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/df6/classcv_1_1BackgroundSubtractor.html#aa735e76f7069b3fa9c3f32395f9ccd21
//
func (b *BackgroundSubtractorMOG2) Apply(src Mat, dst *Mat) {
C.BackgroundSubtractorMOG2_Apply((C.BackgroundSubtractorMOG2)(b.p), src.p, dst.p)
return
}
// BackgroundSubtractorKNN is a wrapper around the cv::BackgroundSubtractorKNN.
type BackgroundSubtractorKNN struct {
// C.BackgroundSubtractorKNN
p unsafe.Pointer
}
// NewBackgroundSubtractorKNN returns a new BackgroundSubtractor algorithm
// of type KNN. KNN is a K-Nearest Neighbors based Background/Foreground
// Segmentation Algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/de/de1/group__video__motion.html#gac9be925771f805b6fdb614ec2292006d
// https://docs.opencv.org/master/db/d88/classcv_1_1BackgroundSubtractorKNN.html
//
func NewBackgroundSubtractorKNN() BackgroundSubtractorKNN {
return BackgroundSubtractorKNN{p: unsafe.Pointer(C.BackgroundSubtractorKNN_Create())}
}
// NewBackgroundSubtractorKNNWithParams returns a new BackgroundSubtractor algorithm
// of type KNN with customized parameters. KNN is a K-Nearest Neighbors based
// Background/Foreground Segmentation Algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/de/de1/group__video__motion.html#gac9be925771f805b6fdb614ec2292006d
// https://docs.opencv.org/master/db/d88/classcv_1_1BackgroundSubtractorKNN.html
//
func NewBackgroundSubtractorKNNWithParams(history int, dist2Threshold float64, detectShadows bool) BackgroundSubtractorKNN {
return BackgroundSubtractorKNN{p: unsafe.Pointer(C.BackgroundSubtractorKNN_CreateWithParams(C.int(history), C.double(dist2Threshold), C.bool(detectShadows)))}
}
// Close BackgroundSubtractorKNN.
func (k *BackgroundSubtractorKNN) Close() error {
C.BackgroundSubtractorKNN_Close((C.BackgroundSubtractorKNN)(k.p))
k.p = nil
return nil
}
// Apply computes a foreground mask using the current BackgroundSubtractorKNN.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/df6/classcv_1_1BackgroundSubtractor.html#aa735e76f7069b3fa9c3f32395f9ccd21
//
func (k *BackgroundSubtractorKNN) Apply(src Mat, dst *Mat) {
C.BackgroundSubtractorKNN_Apply((C.BackgroundSubtractorKNN)(k.p), src.p, dst.p)
return
}
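A hedged sketch of the typical background-subtraction loop (not part of the vendored file); device ID 0 and the 100-frame limit are placeholders:

package main

import (
    "fmt"

    "gocv.io/x/gocv"
)

func main() {
    webcam, err := gocv.OpenVideoCapture(0) // placeholder device ID
    if err != nil {
        fmt.Println(err)
        return
    }
    defer webcam.Close()

    img := gocv.NewMat()
    defer img.Close()
    fgMask := gocv.NewMat()
    defer fgMask.Close()

    mog2 := gocv.NewBackgroundSubtractorMOG2()
    defer mog2.Close()

    for i := 0; i < 100; i++ {
        if ok := webcam.Read(&img); !ok || img.Empty() {
            continue
        }
        // With the default shadow detection, fgMask is a single-channel
        // mask: foreground pixels are 255, shadows 127, background 0.
        mog2.Apply(img, &fgMask)
    }
}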
// CalcOpticalFlowFarneback computes a dense optical flow using
// Gunnar Farneback's algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga5d10ebbd59fe09c5f650289ec0ece5af
//
func CalcOpticalFlowFarneback(prevImg Mat, nextImg Mat, flow *Mat, pyrScale float64, levels int, winsize int,
iterations int, polyN int, polySigma float64, flags int) {
C.CalcOpticalFlowFarneback(prevImg.p, nextImg.p, flow.p, C.double(pyrScale), C.int(levels), C.int(winsize),
C.int(iterations), C.int(polyN), C.double(polySigma), C.int(flags))
return
}
// CalcOpticalFlowPyrLK calculates an optical flow for a sparse feature set using
// the iterative Lucas-Kanade method with pyramids.
//
// For further details, please see:
// https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323
//
func CalcOpticalFlowPyrLK(prevImg Mat, nextImg Mat, prevPts Mat, nextPts Mat, status *Mat, err *Mat) {
C.CalcOpticalFlowPyrLK(prevImg.p, nextImg.p, prevPts.p, nextPts.p, status.p, err.p)
return
}
// CalcOpticalFlowPyrLKWithParams calculates an optical flow for a sparse feature set using
// the iterative Lucas-Kanade method with pyramids.
//
// For further details, please see:
// https://docs.opencv.org/master/dc/d6b/group__video__track.html#ga473e4b886d0bcc6b65831eb88ed93323
//
func CalcOpticalFlowPyrLKWithParams(prevImg Mat, nextImg Mat, prevPts Mat, nextPts Mat, status *Mat, err *Mat,
winSize image.Point, maxLevel int, criteria TermCriteria, flags int, minEigThreshold float64) {
winSz := C.struct_Size{
width: C.int(winSize.X),
height: C.int(winSize.Y),
}
C.CalcOpticalFlowPyrLKWithParams(prevImg.p, nextImg.p, prevPts.p, nextPts.p, status.p, err.p, winSz, C.int(maxLevel), criteria.p, C.int(flags), C.double(minEigThreshold))
return
}
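For illustration (not part of the vendored sources), dense Farneback flow between two grayscale frames; the file paths are placeholders and the parameter values (0.5, 3, 15, 3, 5, 1.2, 0) are the commonly cited defaults, not values mandated by this package:

package main

import (
    "gocv.io/x/gocv"
)

func main() {
    prev := gocv.IMRead("frame1.png", gocv.IMReadGrayScale) // placeholder paths
    next := gocv.IMRead("frame2.png", gocv.IMReadGrayScale)
    defer prev.Close()
    defer next.Close()
    if prev.Empty() || next.Empty() {
        return
    }

    flow := gocv.NewMat()
    defer flow.Close()

    // Pyramid scale 0.5, 3 levels, 15px window, 3 iterations,
    // polyN 5, polySigma 1.2, no flags.
    gocv.CalcOpticalFlowFarneback(prev, next, &flow, 0.5, 3, 15, 3, 5, 1.2, 0)
    // flow is now a 2-channel CV_32F Mat holding per-pixel (dx, dy).
}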

38
vendor/gocv.io/x/gocv/video.h generated vendored Normal file
View File

@ -0,0 +1,38 @@
#ifndef _OPENCV3_VIDEO_H_
#define _OPENCV3_VIDEO_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
#ifdef __cplusplus
typedef cv::Ptr<cv::BackgroundSubtractorMOG2>* BackgroundSubtractorMOG2;
typedef cv::Ptr<cv::BackgroundSubtractorKNN>* BackgroundSubtractorKNN;
#else
typedef void* BackgroundSubtractorMOG2;
typedef void* BackgroundSubtractorKNN;
#endif
BackgroundSubtractorMOG2 BackgroundSubtractorMOG2_Create();
BackgroundSubtractorMOG2 BackgroundSubtractorMOG2_CreateWithParams(int history, double varThreshold, bool detectShadows);
void BackgroundSubtractorMOG2_Close(BackgroundSubtractorMOG2 b);
void BackgroundSubtractorMOG2_Apply(BackgroundSubtractorMOG2 b, Mat src, Mat dst);
BackgroundSubtractorKNN BackgroundSubtractorKNN_Create();
BackgroundSubtractorKNN BackgroundSubtractorKNN_CreateWithParams(int history, double dist2Threshold, bool detectShadows);
void BackgroundSubtractorKNN_Close(BackgroundSubtractorKNN b);
void BackgroundSubtractorKNN_Apply(BackgroundSubtractorKNN b, Mat src, Mat dst);
void CalcOpticalFlowPyrLK(Mat prevImg, Mat nextImg, Mat prevPts, Mat nextPts, Mat status, Mat err);
void CalcOpticalFlowPyrLKWithParams(Mat prevImg, Mat nextImg, Mat prevPts, Mat nextPts, Mat status, Mat err, Size winSize, int maxLevel, TermCriteria criteria, int flags, double minEigThreshold);
void CalcOpticalFlowFarneback(Mat prevImg, Mat nextImg, Mat flow, double pyrScale, int levels,
int winsize, int iterations, int polyN, double polySigma, int flags);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_VIDEO_H_

63
vendor/gocv.io/x/gocv/videoio.cpp generated vendored Normal file
View File

@ -0,0 +1,63 @@
#include "videoio.h"
// VideoCapture
VideoCapture VideoCapture_New() {
return new cv::VideoCapture();
}
void VideoCapture_Close(VideoCapture v) {
delete v;
}
bool VideoCapture_Open(VideoCapture v, const char* uri) {
return v->open(uri);
}
bool VideoCapture_OpenDevice(VideoCapture v, int device) {
return v->open(device);
}
void VideoCapture_Set(VideoCapture v, int prop, double param) {
v->set(prop, param);
}
double VideoCapture_Get(VideoCapture v, int prop) {
return v->get(prop);
}
int VideoCapture_IsOpened(VideoCapture v) {
return v->isOpened();
}
int VideoCapture_Read(VideoCapture v, Mat buf) {
return v->read(*buf);
}
void VideoCapture_Grab(VideoCapture v, int skip) {
for (int i = 0; i < skip; i++) {
v->grab();
}
}
// VideoWriter
VideoWriter VideoWriter_New() {
return new cv::VideoWriter();
}
void VideoWriter_Close(VideoWriter vw) {
delete vw;
}
void VideoWriter_Open(VideoWriter vw, const char* name, const char* codec, double fps, int width,
int height, bool isColor) {
int codecCode = cv::VideoWriter::fourcc(codec[0], codec[1], codec[2], codec[3]);
vw->open(name, codecCode, fps, cv::Size(width, height), isColor);
}
int VideoWriter_IsOpened(VideoWriter vw) {
return vw->isOpened();
}
void VideoWriter_Write(VideoWriter vw, Mat img) {
*vw << *img;
}

332
vendor/gocv.io/x/gocv/videoio.go generated vendored Normal file
View File

@ -0,0 +1,332 @@
package gocv
/*
#include <stdlib.h>
#include "videoio.h"
*/
import "C"
import (
"errors"
"fmt"
"strconv"
"sync"
"unsafe"
)
// VideoCaptureProperties are the properties used for VideoCapture operations.
type VideoCaptureProperties int
const (
// VideoCapturePosMsec contains current position of the
// video file in milliseconds.
VideoCapturePosMsec VideoCaptureProperties = 0
// VideoCapturePosFrames 0-based index of the frame to be
// decoded/captured next.
VideoCapturePosFrames = 1
// VideoCapturePosAVIRatio relative position of the video file:
// 0=start of the film, 1=end of the film.
VideoCapturePosAVIRatio = 2
// VideoCaptureFrameWidth is width of the frames in the video stream.
VideoCaptureFrameWidth = 3
// VideoCaptureFrameHeight controls height of frames in the video stream.
VideoCaptureFrameHeight = 4
// VideoCaptureFPS controls capture frame rate.
VideoCaptureFPS = 5
// VideoCaptureFOURCC contains the 4-character code of codec.
// see VideoWriter::fourcc for details.
VideoCaptureFOURCC = 6
// VideoCaptureFrameCount contains number of frames in the video file.
VideoCaptureFrameCount = 7
// VideoCaptureFormat format of the Mat objects returned by
// VideoCapture::retrieve().
VideoCaptureFormat = 8
// VideoCaptureMode contains backend-specific value indicating
// the current capture mode.
VideoCaptureMode = 9
// VideoCaptureBrightness is the brightness of the image
// (only for cameras that support it).
VideoCaptureBrightness = 10
// VideoCaptureContrast is contrast of the image
// (only for cameras that support it).
VideoCaptureContrast = 11
// VideoCaptureSaturation is the saturation of the image
// (only for cameras that support it).
VideoCaptureSaturation = 12
// VideoCaptureHue is the hue of the image (only for cameras that support it).
VideoCaptureHue = 13
// VideoCaptureGain is the gain of the captured image
// (only for cameras that support it).
VideoCaptureGain = 14
// VideoCaptureExposure is the exposure of the captured image
// (only for cameras that support it).
VideoCaptureExposure = 15
// VideoCaptureConvertRGB is a boolean flag indicating whether
// images should be converted to RGB.
VideoCaptureConvertRGB = 16
// VideoCaptureWhiteBalanceBlueU is currently unsupported.
VideoCaptureWhiteBalanceBlueU = 17
// VideoCaptureRectification is the rectification flag for stereo cameras.
// Note: only supported by DC1394 v 2.x backend currently.
VideoCaptureRectification = 18
// VideoCaptureMonochrome indicates whether images should be
// converted to monochrome.
VideoCaptureMonochrome = 19
// VideoCaptureSharpness controls image capture sharpness.
VideoCaptureSharpness = 20
// VideoCaptureAutoExposure controls the DC1394 exposure control
// done by camera, user can adjust reference level using this feature.
VideoCaptureAutoExposure = 21
// VideoCaptureGamma controls video capture gamma.
VideoCaptureGamma = 22
// VideoCaptureTemperature controls video capture temperature.
VideoCaptureTemperature = 23
// VideoCaptureTrigger controls video capture trigger.
VideoCaptureTrigger = 24
// VideoCaptureTriggerDelay controls video capture trigger delay.
VideoCaptureTriggerDelay = 25
// VideoCaptureWhiteBalanceRedV controls video capture setting for
// white balance.
VideoCaptureWhiteBalanceRedV = 26
// VideoCaptureZoom controls video capture zoom.
VideoCaptureZoom = 27
// VideoCaptureFocus controls video capture focus.
VideoCaptureFocus = 28
// VideoCaptureGUID controls video capture GUID.
VideoCaptureGUID = 29
// VideoCaptureISOSpeed controls video capture ISO speed.
VideoCaptureISOSpeed = 30
// VideoCaptureBacklight controls video capture backlight.
VideoCaptureBacklight = 32
// VideoCapturePan controls video capture pan.
VideoCapturePan = 33
// VideoCaptureTilt controls video capture tilt.
VideoCaptureTilt = 34
// VideoCaptureRoll controls video capture roll.
VideoCaptureRoll = 35
// VideoCaptureIris controls video capture iris.
VideoCaptureIris = 36
// VideoCaptureSettings is the pop up video/camera filter dialog. Note:
// only supported by DSHOW backend currently. The property value is ignored.
VideoCaptureSettings = 37
// VideoCaptureBufferSize controls video capture buffer size.
VideoCaptureBufferSize = 38
// VideoCaptureAutoFocus controls video capture auto focus.
VideoCaptureAutoFocus = 39
)
// VideoCapture is a wrapper around the OpenCV VideoCapture class.
//
// For further details, please see:
// http://docs.opencv.org/master/d8/dfe/classcv_1_1VideoCapture.html
//
type VideoCapture struct {
p C.VideoCapture
}
// VideoCaptureFile opens a VideoCapture from a file and prepares
// to start capturing. It returns error if it fails to open the file stored in uri path.
func VideoCaptureFile(uri string) (vc *VideoCapture, err error) {
vc = &VideoCapture{p: C.VideoCapture_New()}
cURI := C.CString(uri)
defer C.free(unsafe.Pointer(cURI))
if !C.VideoCapture_Open(vc.p, cURI) {
err = fmt.Errorf("Error opening file: %s", uri)
}
return
}
// VideoCaptureDevice opens a VideoCapture from a device and prepares
// to start capturing. It returns error if it fails to open the video device.
func VideoCaptureDevice(device int) (vc *VideoCapture, err error) {
vc = &VideoCapture{p: C.VideoCapture_New()}
if !C.VideoCapture_OpenDevice(vc.p, C.int(device)) {
err = fmt.Errorf("Error opening device: %d", device)
}
return
}
// Close VideoCapture object.
func (v *VideoCapture) Close() error {
C.VideoCapture_Close(v.p)
v.p = nil
return nil
}
// Set parameter with property (=key).
func (v *VideoCapture) Set(prop VideoCaptureProperties, param float64) {
C.VideoCapture_Set(v.p, C.int(prop), C.double(param))
}
// Get parameter with property (=key).
func (v VideoCapture) Get(prop VideoCaptureProperties) float64 {
return float64(C.VideoCapture_Get(v.p, C.int(prop)))
}
// IsOpened returns whether the VideoCapture has been opened to read from
// a file or capture device.
func (v *VideoCapture) IsOpened() bool {
isOpened := C.VideoCapture_IsOpened(v.p)
return isOpened != 0
}
// Read reads the next frame from the VideoCapture into the Mat passed in
// as the param. It returns false if the VideoCapture cannot read the frame.
func (v *VideoCapture) Read(m *Mat) bool {
return C.VideoCapture_Read(v.p, m.p) != 0
}
// Grab skips a specific number of frames.
func (v *VideoCapture) Grab(skip int) {
C.VideoCapture_Grab(v.p, C.int(skip))
}
// CodecString returns a string representation of FourCC bytes, i.e. the name of a codec
func (v *VideoCapture) CodecString() string {
res := ""
hexes := []int64{0xff, 0xff00, 0xff0000, 0xff000000}
for i, h := range hexes {
res += string(int64(v.Get(VideoCaptureFOURCC)) & h >> (uint(i * 8)))
}
return res
}
// ToCodec returns a float64 representation of the FourCC bytes of a codec string.
func (v *VideoCapture) ToCodec(codec string) float64 {
if len(codec) != 4 {
return -1.0
}
c1 := []rune(string(codec[0]))[0]
c2 := []rune(string(codec[1]))[0]
c3 := []rune(string(codec[2]))[0]
c4 := []rune(string(codec[3]))[0]
return float64((c1 & 255) + ((c2 & 255) << 8) + ((c3 & 255) << 16) + ((c4 & 255) << 24))
}
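To make the bit arithmetic in CodecString and ToCodec concrete, a standalone sketch (no OpenCV required) that packs and unpacks a FourCC; for "MJPG" the packed value is 0x47504A4D:

package main

import "fmt"

func main() {
    // Pack "MJPG" into a FourCC integer exactly as ToCodec does:
    // byte 0 becomes the lowest 8 bits, byte 3 the highest.
    codec := "MJPG"
    fourcc := int64(codec[0]) | int64(codec[1])<<8 | int64(codec[2])<<16 | int64(codec[3])<<24
    fmt.Printf("fourcc: %d (0x%x)\n", fourcc, fourcc)

    // Unpack it again, mirroring CodecString.
    var name string
    for i := 0; i < 4; i++ {
        name += string(rune(fourcc >> uint(i*8) & 0xff))
    }
    fmt.Println("codec:", name) // MJPG
}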
// VideoWriter is a wrapper around the OpenCV VideoWriter class.
//
// For further details, please see:
// http://docs.opencv.org/master/dd/d9e/classcv_1_1VideoWriter.html
//
type VideoWriter struct {
mu *sync.RWMutex
p C.VideoWriter
}
// VideoWriterFile opens a VideoWriter with a specific output file.
// The "codec" param should be the four-letter code for the desired output
// codec, for example "MJPG".
//
// For further details, please see:
// http://docs.opencv.org/master/dd/d9e/classcv_1_1VideoWriter.html#a0901c353cd5ea05bba455317dab81130
//
func VideoWriterFile(name string, codec string, fps float64, width int, height int, isColor bool) (vw *VideoWriter, err error) {
if fps == 0 || width == 0 || height == 0 {
return nil, fmt.Errorf("one of the numerical parameters "+
"is equal to zero: FPS: %f, width: %d, height: %d", fps, width, height)
}
vw = &VideoWriter{
p: C.VideoWriter_New(),
mu: &sync.RWMutex{},
}
cName := C.CString(name)
defer C.free(unsafe.Pointer(cName))
cCodec := C.CString(codec)
defer C.free(unsafe.Pointer(cCodec))
C.VideoWriter_Open(vw.p, cName, cCodec, C.double(fps), C.int(width), C.int(height), C.bool(isColor))
return
}
// Close VideoWriter object.
func (vw *VideoWriter) Close() error {
C.VideoWriter_Close(vw.p)
vw.p = nil
return nil
}
// IsOpened checks if the VideoWriter is open and ready to be written to.
//
// For further details, please see:
// http://docs.opencv.org/master/dd/d9e/classcv_1_1VideoWriter.html#a9a40803e5f671968ac9efa877c984d75
//
func (vw *VideoWriter) IsOpened() bool {
    isOpened := C.VideoWriter_IsOpened(vw.p)
    return isOpened != 0
}
// Write the next video frame from the Mat image to the open VideoWriter.
//
// For further details, please see:
// http://docs.opencv.org/master/dd/d9e/classcv_1_1VideoWriter.html#a3115b679d612a6a0b5864a0c88ed4b39
//
func (vw *VideoWriter) Write(img Mat) error {
vw.mu.Lock()
defer vw.mu.Unlock()
C.VideoWriter_Write(vw.p, img.p)
return nil
}
// OpenVideoCapture returns a VideoCapture for the device ID if v is a
// number, or a VideoCapture created from a video file, URL, or GStreamer
// pipeline if v is a string.
func OpenVideoCapture(v interface{}) (*VideoCapture, error) {
switch vv := v.(type) {
case int:
return VideoCaptureDevice(vv)
case string:
id, err := strconv.Atoi(vv)
if err == nil {
return VideoCaptureDevice(id)
}
return VideoCaptureFile(vv)
default:
return nil, errors.New("argument must be int or string")
}
}

42
vendor/gocv.io/x/gocv/videoio.h generated vendored Normal file
View File

@ -0,0 +1,42 @@
#ifndef _OPENCV3_VIDEOIO_H_
#define _OPENCV3_VIDEOIO_H_
#ifdef __cplusplus
#include <opencv2/opencv.hpp>
extern "C" {
#endif
#include "core.h"
#ifdef __cplusplus
typedef cv::VideoCapture* VideoCapture;
typedef cv::VideoWriter* VideoWriter;
#else
typedef void* VideoCapture;
typedef void* VideoWriter;
#endif
// VideoCapture
VideoCapture VideoCapture_New();
void VideoCapture_Close(VideoCapture v);
bool VideoCapture_Open(VideoCapture v, const char* uri);
bool VideoCapture_OpenDevice(VideoCapture v, int device);
void VideoCapture_Set(VideoCapture v, int prop, double param);
double VideoCapture_Get(VideoCapture v, int prop);
int VideoCapture_IsOpened(VideoCapture v);
int VideoCapture_Read(VideoCapture v, Mat buf);
void VideoCapture_Grab(VideoCapture v, int skip);
// VideoWriter
VideoWriter VideoWriter_New();
void VideoWriter_Close(VideoWriter vw);
void VideoWriter_Open(VideoWriter vw, const char* name, const char* codec, double fps, int width,
int height, bool isColor);
int VideoWriter_IsOpened(VideoWriter vw);
void VideoWriter_Write(VideoWriter vw, Mat img);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_VIDEOIO_H_

85
vendor/gocv.io/x/gocv/videoio_string.go generated vendored Normal file
View File

@ -0,0 +1,85 @@
package gocv
func (c VideoCaptureProperties) String() string {
switch c {
case VideoCapturePosMsec:
return "video-capture-pos-msec"
case VideoCapturePosFrames:
return "video-capture-pos-frames"
case VideoCapturePosAVIRatio:
return "video-capture-pos-avi-ratio"
case VideoCaptureFrameWidth:
return "video-capture-frame-width"
case VideoCaptureFrameHeight:
return "video-capture-frame-height"
case VideoCaptureFPS:
return "video-capture-fps"
case VideoCaptureFOURCC:
return "video-capture-fourcc"
case VideoCaptureFrameCount:
return "video-capture-frame-count"
case VideoCaptureFormat:
return "video-capture-format"
case VideoCaptureMode:
return "video-capture-mode"
case VideoCaptureBrightness:
return "video-capture-brightness"
case VideoCaptureContrast:
return "video-capture-contrast"
case VideoCaptureSaturation:
return "video-capture-saturation"
case VideoCaptureHue:
return "video-capture-hue"
case VideoCaptureGain:
return "video-capture-gain"
case VideoCaptureExposure:
return "video-capture-exposure"
case VideoCaptureConvertRGB:
return "video-capture-convert-rgb"
case VideoCaptureWhiteBalanceBlueU:
return "video-capture-white-balanced-blue-u"
case VideoCaptureWhiteBalanceRedV:
return "video-capture-white-balanced-red-v"
case VideoCaptureRectification:
return "video-capture-rectification"
case VideoCaptureMonochrome:
return "video-capture-monochrome"
case VideoCaptureSharpness:
return "video-capture-sharpness"
case VideoCaptureAutoExposure:
return "video-capture-auto-exposure"
case VideoCaptureGamma:
return "video-capture-gamma"
case VideoCaptureTemperature:
return "video-capture-temperature"
case VideoCaptureTrigger:
return "video-capture-trigger"
case VideoCaptureTriggerDelay:
return "video-capture-trigger-delay"
case VideoCaptureZoom:
return "video-capture-zoom"
case VideoCaptureFocus:
return "video-capture-focus"
case VideoCaptureGUID:
return "video-capture-guid"
case VideoCaptureISOSpeed:
return "video-capture-iso-speed"
case VideoCaptureBacklight:
return "video-capture-backlight"
case VideoCapturePan:
return "video-capture-pan"
case VideoCaptureTilt:
return "video-capture-tilt"
case VideoCaptureRoll:
return "video-capture-roll"
case VideoCaptureIris:
return "video-capture-iris"
case VideoCaptureSettings:
return "video-capture-settings"
case VideoCaptureBufferSize:
return "video-capture-buffer-size"
case VideoCaptureAutoFocus:
return "video-capture-auto-focus"
}
return ""
}

40
vendor/gocv.io/x/gocv/win_build_opencv.cmd generated vendored Normal file
View File

@ -0,0 +1,40 @@
echo off
if not exist "C:\opencv" mkdir "C:\opencv"
if not exist "C:\opencv\build" mkdir "C:\opencv\build"
echo Downloading OpenCV sources
echo.
echo For monitoring the download progress please check the C:\opencv directory.
echo.
REM This is why there is no progress bar:
REM https://github.com/PowerShell/PowerShell/issues/2138
echo Downloading: opencv-4.2.0.zip [91MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv/archive/4.2.0.zip -OutFile c:\opencv\opencv-4.2.0.zip"
echo Extracting...
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv-4.2.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv-4.2.0.zip /q
echo.
echo Downloading: opencv_contrib-4.2.0.zip [58MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv_contrib/archive/4.2.0.zip -OutFile c:\opencv\opencv_contrib-4.2.0.zip"
echo Extracting...
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv_contrib-4.2.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv_contrib-4.2.0.zip /q
echo.
echo Done with downloading and extracting sources.
echo.
echo on
cd /D C:\opencv\build
set PATH=%PATH%;C:\Program Files (x86)\CMake\bin;C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin
cmake C:\opencv\opencv-4.2.0 -G "MinGW Makefiles" -BC:\opencv\build -DENABLE_CXX11=ON -DOPENCV_EXTRA_MODULES_PATH=C:\opencv\opencv_contrib-4.2.0\modules -DBUILD_SHARED_LIBS=ON -DWITH_IPP=OFF -DWITH_MSMF=OFF -DBUILD_EXAMPLES=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_java=OFF -DBUILD_opencv_python=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_DOCS=OFF -DENABLE_PRECOMPILED_HEADERS=OFF -DBUILD_opencv_saliency=OFF -DCPU_DISPATCH= -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_OPENCL_D3D11_NV=OFF -Wno-dev
mingw32-make -j%NUMBER_OF_PROCESSORS%
mingw32-make install
rmdir c:\opencv\opencv-4.2.0 /s /q
rmdir c:\opencv\opencv_contrib-4.2.0 /s /q
chdir /D %GOPATH%\src\gocv.io\x\gocv