Upgrade dependencies

2020-09-06 14:53:53 +02:00
parent f8e7928cf2
commit f0eee51301
405 changed files with 50181 additions and 97451 deletions

vendor/gocv.io/x/gocv/.travis.yml generated vendored (2 changes)

@@ -5,7 +5,7 @@ dist: trusty
# language is go
language: go
go:
- "1.13"
- "1.14"
go_import_path: gocv.io/x/gocv
addons:

vendor/gocv.io/x/gocv/CHANGELOG.md generated vendored (65 changes)

@@ -1,3 +1,68 @@
0.24.0
---
* **all**
* update Makefile and README
* Change constants and corresponding function signatures to have the correct types (#689)
* replace master branch terminology with release
* update to OpenCV 4.4.0
* **calib3d**
* add FindHomography()
* add function EstimateAffinePartial2D()
* add GetAffineTransform() and GetAffineTransform2f()
* add UndistortPoints(), FisheyeUndistortPoints() and EstimateNewCameraMatrixForUndistortRectify()
* **core**
* add MultiplyWithParams
* **docs**
* add recent contributions to ROADMAP
* create CODE_OF_CONDUCT.md
* update copyright year
* **features2d**
* close returned Mat from SIFT algorithm
* fix issue 707 with DrawKeyPoints
* SIFT patent now expired so is part of main OpenCV modules
* **imgproc**
* change struct to remove GNU old-style field designator extension warning
0.23.0
---
* **build**
* update Makefile and README
* update to use go1.14
* **calib3d**
* add draw chessboard
* **core**
* fix memory leak in Mat.Size() and Mat.Split() (#580)
* **cuda**
* add build support
* add cuda backend/target
* add support for:
* cv::cuda::CannyEdgeDetector
* cv::cuda::CascadeClassifier Class
* cv::cuda::HOG Class
* remove breaking case statement
* **dnn**
* avoid parallel test runs
* remove attempt at providing grayscale image blob conversion that uses mean adjustment
* **docker**
* docker file last command change (#505)
* **docs**
* add recent contributions to ROADMAP
* **imgproc**
* add ErodeWithParams function
* add getGaussianKernel function
* add Go Point2f type and update GetPerspectiveTransform() (#589)
* add PhaseCorrelate binding (#626)
* added Polylines feature
* do not free contours data until after we have drawn the needed contours
* Threshold() should return a value (#620)
* **make**
* added raspberry pi zero support to the makefile
* **opencv**
* update to OpenCV 4.3.0
* **openvino**
* add build support
* **windows**
* add cmake flag for allocator stats counter type to avoid opencv issue #16398
0.22.0
---
* **bgsegm**

vendor/gocv.io/x/gocv/CODE_OF_CONDUCT.md generated vendored (new file, 76 changes)

@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at info@hybridgroup.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq


@@ -22,7 +22,7 @@ Please open a Github issue with your needs, and we can see what we can do.
## How to use our Github repository
The `master` branch of this repo will always have the latest released version of GoCV. All of the active development work for the next release will take place in the `dev` branch. GoCV will use semantic versioning and will create a tag/release for each release.
The `release` branch of this repo will always have the latest released version of GoCV. All of the active development work for the next release will take place in the `dev` branch. GoCV will use semantic versioning and will create a tag/release for each release.
Here is how to contribute back some code or documentation:
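The semantic versioning mentioned above orders release tags numerically per component (so `0.24.0` sorts after `0.23.0`). A stdlib-only sketch of that comparison; the `compareSemver` helper is hypothetical, not part of GoCV, and ignores pre-release suffixes such as `-openvino`:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// compareSemver returns -1, 0, or 1 as version a is older than,
// equal to, or newer than version b. Major.minor.patch only.
func compareSemver(a, b string) int {
	pa := strings.SplitN(strings.TrimPrefix(a, "v"), ".", 3)
	pb := strings.SplitN(strings.TrimPrefix(b, "v"), ".", 3)
	for i := 0; i < 3; i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			if na < nb {
				return -1
			}
			return 1
		}
	}
	return 0
}

func main() {
	fmt.Println(compareSemver("0.24.0", "0.23.0")) // prints 1 (0.24.0 is newer)
}
```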

vendor/gocv.io/x/gocv/Dockerfile generated vendored (12 changes)

@@ -8,7 +8,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev && \
rm -rf /var/lib/apt/lists/*
ARG OPENCV_VERSION="4.2.0"
ARG OPENCV_VERSION="4.4.0"
ENV OPENCV_VERSION $OPENCV_VERSION
RUN curl -Lo opencv.zip https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip && \
@@ -41,7 +41,7 @@ RUN curl -Lo opencv.zip https://github.com/opencv/opencv/archive/${OPENCV_VERSIO
FROM opencv AS gocv
LABEL maintainer="hybridgroup"
ARG GOVERSION="1.13.5"
ARG GOVERSION="1.14.1"
ENV GOVERSION $GOVERSION
RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -57,4 +57,10 @@ ENV PATH $GOPATH/bin:/usr/local/go/bin:$PATH
RUN mkdir -p "$GOPATH/src" "$GOPATH/bin" && chmod -R 777 "$GOPATH"
WORKDIR $GOPATH
RUN go get -u -d gocv.io/x/gocv && go run ${GOPATH}/src/gocv.io/x/gocv/cmd/version/main.go
RUN go get -u -d gocv.io/x/gocv
WORKDIR ${GOPATH}/src/gocv.io/x/gocv/cmd/version/
RUN go build -o gocv_version -i main.go
CMD ["./gocv_version"]

vendor/gocv.io/x/gocv/LICENSE.txt generated vendored (191 changes)

@@ -1,193 +1,4 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (c) 2017-2019 The Hybrid Group
Copyright (c) 2017-2020 The Hybrid Group
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

vendor/gocv.io/x/gocv/Makefile generated vendored (116 changes)

@@ -2,17 +2,23 @@
.PHONY: test deps download build clean astyle cmds docker
# OpenCV version to use.
OPENCV_VERSION?=4.2.0
OPENCV_VERSION?=4.4.0
# Go version to use when building Docker image
GOVERSION?=1.13.1
GOVERSION?=1.14.4
# Temporary directory to put files into.
TMP_DIR?=/tmp/
# Build shared or static library
BUILD_SHARED_LIBS?=ON
# Package list for each well-known Linux distribution
RPMS=cmake curl git gtk2-devel libpng-devel libjpeg-devel libtiff-devel tbb tbb-devel libdc1394-devel unzip
DEBS=unzip build-essential cmake curl git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
RPMS=cmake curl wget git gtk2-devel libpng-devel libjpeg-devel libtiff-devel tbb tbb-devel libdc1394-devel unzip
DEBS=unzip wget build-essential cmake curl git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libdc1394-22-dev
explain:
@echo "For quick install with typical defaults of both OpenCV and GoCV, run 'make install'"
# Detect Linux distribution
distro_deps=
@@ -54,12 +60,35 @@ download:
rm opencv.zip opencv_contrib.zip
cd -
# Download dldt source tarballs.
download_dldt:
sudo rm -rf /usr/local/dldt/
sudo git clone https://github.com/opencv/dldt -b 2019 /usr/local/dldt/
# Build dldt.
build_dldt:
cd /usr/local/dldt/inference-engine
sudo git submodule init
sudo git submodule update --recursive
sudo ./install_dependencies.sh
sudo mv -f thirdparty/clDNN/common/intel_ocl_icd/6.3/linux/Release thirdparty/clDNN/common/intel_ocl_icd/6.3/linux/RELEASE
sudo mkdir build
cd build
sudo rm -rf *
sudo cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D ENABLE_VPU=ON -D ENABLE_MKL_DNN=ON -D ENABLE_CLDNN=ON ..
sudo $(MAKE) -j $(shell nproc --all)
sudo touch VERSION
sudo mkdir -p src/ngraph
sudo cp thirdparty/ngraph/src/ngraph/version.hpp src/ngraph
cd -
# Build OpenCV.
build:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON ..
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
@@ -69,7 +98,19 @@ build_raspi:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=OFF -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D ENABLE_NEON=ON -D ENABLE_VFPV3=ON -D WITH_JASPER=OFF -D OPENCV_GENERATE_PKGCONFIG=ON ..
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=OFF -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D ENABLE_NEON=ON -D ENABLE_VFPV3=ON -D WITH_JASPER=OFF -D OPENCV_GENERATE_PKGCONFIG=ON ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
# Build OpenCV on Raspberry pi zero which has ARMv6.
build_raspi_zero:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=OFF -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D ENABLE_VFPV2=ON -D WITH_JASPER=OFF -D OPENCV_GENERATE_PKGCONFIG=ON ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
@@ -79,7 +120,19 @@ build_nonfree:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON ..
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
# Build OpenCV with openvino.
build_openvino:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
rm -rf *
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D ENABLE_CXX11=ON -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D WITH_INF_ENGINE=ON -D InferenceEngine_DIR=/usr/local/dldt/inference-engine/build -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
@@ -89,7 +142,19 @@ build_cuda:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
cmake -j $(shell nproc --all) -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_CUDA=ON -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DBUILD_opencv_cudacodec=OFF ..
rm -rf *
cmake -j $(shell nproc --all) -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_CUDA=ON -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DBUILD_opencv_cudacodec=OFF -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D CUDA_GENERATION=Auto ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
# Build OpenCV with cuda.
build_all:
cd $(TMP_DIR)opencv/opencv-$(OPENCV_VERSION)
mkdir build
cd build
rm -rf *
cmake -j $(shell nproc --all) -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_SHARED_LIBS=${BUILD_SHARED_LIBS} -D ENABLE_CXX11=ON -D OPENCV_EXTRA_MODULES_PATH=$(TMP_DIR)opencv/opencv_contrib-$(OPENCV_VERSION)/modules -D WITH_INF_ENGINE=ON -D InferenceEngine_DIR=/usr/local/dldt/inference-engine/build -D BUILD_DOCS=OFF -D BUILD_EXAMPLES=OFF -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_opencv_java=NO -D BUILD_opencv_python=NO -D BUILD_opencv_python2=NO -D BUILD_opencv_python3=NO -D WITH_JASPER=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_CUDA=ON -DENABLE_FAST_MATH=1 -DCUDA_FAST_MATH=1 -DWITH_CUBLAS=1 -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda/ -DBUILD_opencv_cudacodec=OFF -D WITH_CUDNN=ON -D OPENCV_DNN_CUDA=ON -D CUDA_GENERATION=Auto ..
$(MAKE) -j $(shell nproc --all)
$(MAKE) preinstall
cd -
@@ -99,14 +164,30 @@ clean:
go clean --cache
rm -rf $(TMP_DIR)opencv
# Cleanup old library files.
sudo_pre_install_clean:
sudo rm -rf /usr/local/lib/cmake/opencv4/
sudo rm -rf /usr/local/lib/libopencv*
sudo rm -rf /usr/local/lib/pkgconfig/opencv*
sudo rm -rf /usr/local/include/opencv*
# Do everything.
install: deps download build sudo_install clean verify
install: deps download sudo_pre_install_clean build sudo_install clean verify
# Do everything on Raspbian.
install_raspi: deps download build_raspi sudo_install clean verify
# Do everything on the raspberry pi zero.
install_raspi_zero: deps download build_raspi_zero sudo_install clean verify
# Do everything with cuda.
install_cuda: deps download build_cuda sudo_install clean verify
install_cuda: deps download sudo_pre_install_clean build_cuda sudo_install clean verify verify_cuda
# Do everything with openvino.
install_openvino: deps download download_dldt sudo_pre_install_clean build_dldt sudo_install_dldt build_openvino sudo_install clean verify_openvino
# Do everything with openvino and cuda.
install_all: deps download download_dldt sudo_pre_install_clean build_dldt sudo_install_dldt build_all sudo_install clean verify_openvino verify_cuda
# Install system wide.
sudo_install:
@@ -115,10 +196,25 @@ sudo_install:
sudo ldconfig
cd -
# Install system wide.
sudo_install_dldt:
cd /usr/local/dldt/inference-engine/build
sudo $(MAKE) install
sudo ldconfig
cd -
# Build a minimal Go app to confirm gocv works.
verify:
go run ./cmd/version/main.go
# Build a minimal Go app to confirm gocv cuda works.
verify_cuda:
go run ./cmd/cuda/main.go
# Build a minimal Go app to confirm gocv openvino works.
verify_openvino:
go run -tags openvino ./cmd/version/main.go
# Runs tests.
# This assumes env.sh was already sourced.
# pvt is not tested here since it requires additional dependencies.

vendor/gocv.io/x/gocv/README.md generated vendored (82 changes)

@@ -1,17 +1,17 @@
# GoCV
[![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/master/images/gocvlogo.jpg)](http://gocv.io/)
[![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/release/images/gocvlogo.jpg)](http://gocv.io/)
[![GoDoc](https://godoc.org/gocv.io/x/gocv?status.svg)](https://godoc.org/github.com/hybridgroup/gocv)
[![Travis Build Status](https://travis-ci.org/hybridgroup/gocv.svg?branch=dev)](https://travis-ci.org/hybridgroup/gocv)
[![AppVeyor Build status](https://ci.appveyor.com/api/projects/status/9asd5foet54ru69q/branch/dev?svg=true)](https://ci.appveyor.com/project/deadprogram/gocv/branch/dev)
[![codecov](https://codecov.io/gh/hybridgroup/gocv/branch/dev/graph/badge.svg)](https://codecov.io/gh/hybridgroup/gocv)
[![Go Report Card](https://goreportcard.com/badge/github.com/hybridgroup/gocv)](https://goreportcard.com/report/github.com/hybridgroup/gocv)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/hybridgroup/gocv/blob/master/LICENSE.txt)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/hybridgroup/gocv/blob/release/LICENSE.txt)
The GoCV package provides Go language bindings for the [OpenCV 4](http://opencv.org/) computer vision library.
The GoCV package supports the latest releases of Go and OpenCV (v4.2.0) on Linux, macOS, and Windows. We intend to make the Go language a "first-class" client compatible with the latest developments in the OpenCV ecosystem.
The GoCV package supports the latest releases of Go and OpenCV (v4.4.0) on Linux, macOS, and Windows. We intend to make the Go language a "first-class" client compatible with the latest developments in the OpenCV ecosystem.
GoCV also supports [Intel OpenVINO](https://software.intel.com/en-us/openvino-toolkit). Check out the [OpenVINO README](./openvino/README.md) for more info on how to use GoCV with the Intel OpenVINO toolkit.
@@ -43,7 +43,7 @@ func main() {
### Face detect
![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/master/images/face-detect.jpg)
![GoCV](https://raw.githubusercontent.com/hybridgroup/gocv/release/images/face-detect.jpg)
This is a more complete example that opens a video capture device using device "0". It also uses the CascadeClassifier class to load an external data file containing the classifier data. The program grabs each frame from the video, then uses the classifier to detect faces. If any faces are found, it draws a green rectangle around each one, then displays the video in an output window:
@@ -127,28 +127,58 @@ To install GoCV, run the following command:
go get -u -d gocv.io/x/gocv
```
To run code that uses the GoCV package, you must also install OpenCV 4.2.0 on your system. Here are instructions for Ubuntu, Raspbian, macOS, and Windows.
To run code that uses the GoCV package, you must also install OpenCV 4.4.0 on your system. Here are instructions for Ubuntu, Raspbian, macOS, and Windows.
## Ubuntu/Linux
### Installation
You can use `make` to install OpenCV 4.2.0 with the handy `Makefile` included with this repo. If you already have installed OpenCV, you do not need to do so again. The installation performed by the `Makefile` is minimal, so it may remove OpenCV options such as Python or Java wrappers if you have already installed OpenCV some other way.
You can use `make` to install OpenCV 4.4.0 with the handy `Makefile` included with this repo. If you already have installed OpenCV, you do not need to do so again. The installation performed by the `Makefile` is minimal, so it may remove OpenCV options such as Python or Java wrappers if you have already installed OpenCV some other way.
#### Quick Install
The following commands should do everything to download and install OpenCV 4.4.0 on Linux:
cd $GOPATH/src/gocv.io/x/gocv
make install
If you need static OpenCV libraries:
make install BUILD_SHARED_LIBS=OFF
If it works correctly, at the end of the entire process, the following message should be displayed:
gocv version: 0.24.0
opencv lib version: 4.4.0
That's it, now you are ready to use GoCV.
#### Install Cuda
See the [cuda directory](./cuda) for installation instructions.
#### Install OpenVINO
See the [openvino directory](./openvino) for installation instructions.
#### Install OpenVINO and Cuda
The following commands should do everything to download and install OpenCV 4.4.0 with Cuda and OpenVINO on Linux:
cd $GOPATH/src/gocv.io/x/gocv
make install_all
If you need static OpenCV libraries:
make install_all BUILD_SHARED_LIBS=OFF
If it works correctly, at the end of the entire process, the following message should be displayed:
gocv version: 0.24.0
opencv lib version: 4.4.0-openvino
cuda information:
Device 0: "GeForce MX150" 2003Mb, sm_61, Driver/Runtime ver.10.0/10.0
#### Complete Install
If you have already done the "Quick Install" as described above, you do not need to run any further commands. For the curious, or for custom installations, here are the details for each of the steps that are performed when you run `make install`.
@ -165,7 +195,7 @@ Next, you need to update the system, and install any required packages:
#### Download source
Now, download the OpenCV 4.4.0 and OpenCV Contrib source code:
make download
@ -175,6 +205,10 @@ Build everything. This will take quite a while:
make build
If you need static OpenCV libraries:
make build BUILD_SHARED_LIBS=OFF
#### Install
Once the code is built, you are ready to install:
@ -196,7 +230,7 @@ Now you should be able to build or run any of the examples:
The version program should output the following:
gocv version: 0.24.0
opencv lib version: 4.4.0
#### Cleanup extra files
@ -281,11 +315,11 @@ There is a Docker image with Alpine 3.7 that has been created by project contrib
### Installation
We have a special installation for the Raspberry Pi that includes some hardware optimizations. You can use `make` to install OpenCV 4.4.0 with the handy `Makefile` included with this repo. If you already have installed OpenCV, you do not need to do so again. The installation performed by the `Makefile` is minimal, so it may remove OpenCV options such as Python or Java wrappers if you have already installed OpenCV some other way.
#### Quick Install
The following commands should do everything to download and install OpenCV 4.4.0 on Raspbian:
cd $GOPATH/src/gocv.io/x/gocv
make install_raspi
@ -293,7 +327,7 @@ The following commands should do everything to download and install OpenCV 4.2.0
If it works correctly, at the end of the entire process, the following message should be displayed:
gocv version: 0.24.0
opencv lib version: 4.4.0
That's it, now you are ready to use GoCV.
@ -301,13 +335,13 @@ That's it, now you are ready to use GoCV.
### Installation
You can install OpenCV 4.4.0 using Homebrew.
If you already have an earlier version of OpenCV (3.4.x) installed, you should probably remove it before installing the new version:
brew uninstall opencv
You can then install OpenCV 4.4.0:
brew install opencv
@ -332,7 +366,7 @@ Now you should be able to build or run any of the examples:
The version program should output the following:
gocv version: 0.24.0
opencv lib version: 4.4.0
### Cache builds
@ -347,8 +381,8 @@ By default, pkg-config is used to determine the correct flags for compiling and
For example:
export CGO_CXXFLAGS="--std=c++11"
export CGO_CPPFLAGS="-I/usr/local/Cellar/opencv/4.4.0/include"
export CGO_LDFLAGS="-L/usr/local/Cellar/opencv/4.4.0/lib -lopencv_stitching -lopencv_superres -lopencv_videostab -lopencv_aruco -lopencv_bgsegm -lopencv_bioinspired -lopencv_ccalib -lopencv_dnn_objdetect -lopencv_dpm -lopencv_face -lopencv_photo -lopencv_fuzzy -lopencv_hfs -lopencv_img_hash -lopencv_line_descriptor -lopencv_optflow -lopencv_reg -lopencv_rgbd -lopencv_saliency -lopencv_stereo -lopencv_structured_light -lopencv_phase_unwrapping -lopencv_surface_matching -lopencv_tracking -lopencv_datasets -lopencv_dnn -lopencv_plot -lopencv_xfeatures2d -lopencv_shape -lopencv_video -lopencv_ml -lopencv_ximgproc -lopencv_calib3d -lopencv_features2d -lopencv_highgui -lopencv_videoio -lopencv_flann -lopencv_xobjdetect -lopencv_imgcodecs -lopencv_objdetect -lopencv_xphoto -lopencv_imgproc -lopencv_core"
Please note that you will need to run these 3 lines once in your current session in order to set up the ENV variables needed to build or run the code. Once you have done so, you can execute code that uses GoCV with your custom environment like this:
@ -360,11 +394,11 @@ Please note that you will need to run these 3 lines of code one time in your cur
The following assumes that you are running a 64-bit version of Windows 10.
In order to build and install OpenCV 4.4.0 on Windows, you must first download and install MinGW-W64 and CMake, as follows.
#### MinGW-W64
Download and run the MinGW-W64 compiler installer from [https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/7.3.0/](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win32/Personal%20Builds/mingw-builds/7.3.0/).
The latest version of the MinGW-W64 toolchain is `7.3.0`, but any version from `7.X` on should work.
@ -376,9 +410,9 @@ Add the `C:\Program Files\mingw-w64\x86_64-7.3.0-posix-seh-rt_v5-rev2\mingw64\bi
Download and install CMake from [https://cmake.org/download/](https://cmake.org/download/) to the default location. The CMake installer will add CMake to your system path.
#### OpenCV 4.4.0 and OpenCV Contrib Modules
The following commands should do everything to download and install OpenCV 4.4.0 on Windows:
chdir %GOPATH%\src\gocv.io\x\gocv
win_build_opencv.cmd
@ -400,7 +434,7 @@ Now you should be able to build or run any of the command examples:
The version program should output the following:
gocv version: 0.24.0
opencv lib version: 4.4.0
That's it, now you are ready to use GoCV.
@ -554,6 +588,6 @@ This package was inspired by the original https://github.com/go-opencv/go-opencv
## License
Licensed under the Apache 2.0 license. Copyright (c) 2017-2020 The Hybrid Group.
Logo generated by GopherizeMe - https://gopherize.me

vendor/gocv.io/x/gocv/ROADMAP.md generated vendored
@ -25,7 +25,6 @@ Your pull requests will be greatly appreciated!
- [ ] [randn](https://docs.opencv.org/master/d2/de8/group__core__array.html#gaeff1f61e972d133a04ce3a5f81cf6808)
- [ ] [randShuffle](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga6a789c8a5cb56c6dd62506179808f763)
- [ ] [randu](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga1ba1026dca0807b27057ba6a49d258c0)
- [ ] [setRNGSeed](https://docs.opencv.org/master/d2/de8/group__core__array.html#ga757e657c037410d9e19e819569e7de0f)
- [ ] [SVBackSubst](https://docs.opencv.org/master/d2/de8/group__core__array.html#gab4e620e6fc6c8a27bb2be3d50a840c0b)
- [ ] [SVDecomp](https://docs.opencv.org/master/d2/de8/group__core__array.html#gab477b5b7b39b370bb03e75b19d2d5109)
@ -45,31 +44,25 @@ Your pull requests will be greatly appreciated!
- [ ] [buildPyramid](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gacfdda2bc1ac55e96de7e9f0bce7238c0)
- [ ] [getDerivKernels](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga6d6c23f7bd3f5836c31cfae994fc4aea)
- [ ] [getGaborKernel](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gae84c92d248183bd92fa713ce51cc3599)
- [ ] [morphologyExWithParams](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga67493776e3ad1a3df63883829375201f)
- [ ] [pyrMeanShiftFiltering](https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga9fabdce9543bd602445f5db3827e4cc0)
- [ ] **Geometric Image Transformations - WORK STARTED** The following functions still need implementation:
- [ ] [convertMaps](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga9156732fa8f01be9ebd1a194f2728b7f)
- [ ] [getDefaultNewCameraMatrix](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga744529385e88ef7bc841cbe04b35bfbf)
- [ ] [initUndistortRectifyMap](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga7dfb72c9cf9780a347fbe3d1c47e5d5a)
- [ ] [initWideAngleProjMap](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#gaceb049ec48898d1dadd5b50c604429c8)
- [ ] [undistort](https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga69f2545a8b62a6b0fc2ee060dc30559d)
- [ ] **Miscellaneous Image Transformations - WORK STARTED** The following functions still need implementation:
- [ ] [cvtColorTwoPlane](https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#ga8e873314e72a1a6c0252375538fbf753)
- [ ] [floodFill](https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#gaf1f55a048f8a45bc3383586e80b1f0d0)
- [ ] **Drawing Functions - WORK STARTED** The following functions still need implementation:
- [ ] [drawMarker](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga482fa7b0f578fcdd8a174904592a6250)
- [ ] [ellipse2Poly](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga727a72a3f6a625a2ae035f957c61051f)
- [ ] [fillConvexPoly](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga906aae1606ea4ed2f27bec1537f6c5c2)
- [ ] [getFontScaleFromHeight](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga442ff925c1a957794a1309e0ed3ba2c3)
- [ ] [polylines](https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga444cb8a2666320f47f09d5af08d91ffb)
- [ ] ColorMaps in OpenCV
- [ ] Planar Subdivision
@ -135,7 +128,6 @@ Your pull requests will be greatly appreciated!
- [ ] [drawFrameAxes](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [estimateAffine2D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [estimateAffine3D](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [filterHomographyDecompByVisibleRefpoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [filterSpeckles](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [find4QuadCornerSubpix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
@ -144,7 +136,6 @@ Your pull requests will be greatly appreciated!
- [ ] [findCirclesGrid](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findEssentialMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [findFundamentalMat](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getDefaultNewCameraMatrix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getOptimalNewCameraMatrix](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [getValidDisparityROI](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
@ -169,18 +160,14 @@ Your pull requests will be greatly appreciated!
- [ ] [stereoRectify](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [stereoRectifyUncalibrated](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [triangulatePoints](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] [validateDisparity](https://docs.opencv.org/master/d9/d0c/group__calib3d.html)
- [ ] **Fisheye - WORK STARTED** The following functions still need implementation:
- [ ] [calibrate](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gad626a78de2b1dae7489e152a5a5a89e1)
- [ ] [distortPoints](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga75d8877a98e38d0b29b6892c5f8d7765)
- [ ] [projectPoints](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gab1ad1dc30c42ee1a50ce570019baf2c4)
- [ ] [stereoCalibrate](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gadbb3a6ca6429528ef302c784df47949b)
- [ ] [stereoRectify](https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gac1af58774006689056b0f2ef1db55ecc)
- [ ] **features2d. 2D Features Framework - WORK STARTED**
- [X] **Feature Detection and Description**

vendor/gocv.io/x/gocv/appveyor.yml generated vendored
@ -8,7 +8,7 @@ platform:
environment:
GOPATH: c:\gopath
GOROOT: c:\go
GOVERSION: 1.14
TEST_EXTERNAL: 1
APPVEYOR_SAVE_CACHE_ON_ERROR: true
@ -18,7 +18,7 @@ cache:
install:
- if not exist "C:\opencv" appveyor_build_opencv.cmd
- set PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\7-Zip;C:\Program Files\Microsoft\Web Platform Installer\;C:\Tools\PsTools;C:\Program Files (x86)\CMake\bin;C:\go\bin;C:\Tools\NuGet;C:\Program Files\LLVM\bin;C:\Tools\curl\bin;C:\ProgramData\chocolatey\bin;C:\Program Files (x86)\Yarn\bin;C:\Users\appveyor\AppData\Local\Yarn\bin;C:\Program Files\AppVeyor\BuildAgent\
- set PATH=%PATH%;C:\mingw-w64\x86_64-7.3.0-posix-seh-rt_v5-rev0\mingw64\bin
- set PATH=%PATH%;C:\Tools\GitVersion;C:\Program Files\Git LFS;C:\Program Files\Git\cmd;C:\Program Files\Git\usr\bin;C:\opencv\build\install\x64\mingw\bin;
- echo %PATH%
- echo %GOPATH%

vendor/gocv.io/x/gocv/appveyor_build_opencv.cmd generated vendored

@ -2,22 +2,22 @@ if not exist "C:\opencv" mkdir "C:\opencv"
if not exist "C:\opencv\build" mkdir "C:\opencv\build"
if not exist "C:\opencv\testdata" mkdir "C:\opencv\testdata"
appveyor DownloadFile https://github.com/opencv/opencv/archive/4.4.0.zip -FileName c:\opencv\opencv-4.4.0.zip
7z x c:\opencv\opencv-4.4.0.zip -oc:\opencv -y
del c:\opencv\opencv-4.4.0.zip /q
appveyor DownloadFile https://github.com/opencv/opencv_contrib/archive/4.4.0.zip -FileName c:\opencv\opencv_contrib-4.4.0.zip
7z x c:\opencv\opencv_contrib-4.4.0.zip -oc:\opencv -y
del c:\opencv\opencv_contrib-4.4.0.zip /q
cd C:\opencv\build
set PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\7-Zip;C:\Program Files\Microsoft\Web Platform Installer\;C:\Tools\PsTools;C:\Program Files (x86)\CMake\bin;C:\go\bin;C:\Tools\NuGet;C:\Program Files\LLVM\bin;C:\Tools\curl\bin;C:\ProgramData\chocolatey\bin;C:\Program Files (x86)\Yarn\bin;C:\Users\appveyor\AppData\Local\Yarn\bin;C:\Program Files\AppVeyor\BuildAgent\
set PATH=%PATH%;C:\mingw-w64\x86_64-7.3.0-posix-seh-rt_v5-rev0\mingw64\bin
dir C:\opencv
cmake C:\opencv\opencv-4.4.0 -G "MinGW Makefiles" -BC:\opencv\build -DENABLE_CXX11=ON -DOPENCV_EXTRA_MODULES_PATH=C:\opencv\opencv_contrib-4.4.0\modules -DBUILD_SHARED_LIBS=ON -DWITH_IPP=OFF -DWITH_MSMF=OFF -DBUILD_EXAMPLES=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_java=OFF -DBUILD_opencv_python=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_DOCS=OFF -DENABLE_PRECOMPILED_HEADERS=OFF -DBUILD_opencv_saliency=OFF -DCPU_DISPATCH= -DBUILD_opencv_gapi=OFF -DOPENCV_GENERATE_PKGCONFIG=ON -DOPENCV_ENABLE_NONFREE=ON -DWITH_OPENCL_D3D11_NV=OFF -DOPENCV_ALLOCATOR_STATS_COUNTER_TYPE=int64_t -Wno-dev
mingw32-make -j%NUMBER_OF_PROCESSORS%
mingw32-make install
appveyor DownloadFile https://raw.githubusercontent.com/opencv/opencv_extra/master/testdata/dnn/bvlc_googlenet.prototxt -FileName C:\opencv\testdata\bvlc_googlenet.prototxt
appveyor DownloadFile http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel -FileName C:\opencv\testdata\bvlc_googlenet.caffemodel
appveyor DownloadFile https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip -FileName C:\opencv\testdata\inception5h.zip
7z x C:\opencv\testdata\inception5h.zip -oC:\opencv\testdata tensorflow_inception_graph.pb -y
rmdir c:\opencv\opencv-4.4.0 /s /q
rmdir c:\opencv\opencv_contrib-4.4.0 /s /q

vendor/gocv.io/x/gocv/calib3d.cpp generated vendored
@ -10,6 +10,16 @@ void Fisheye_UndistortImageWithParams(Mat distorted, Mat undistorted, Mat k, Mat
cv::fisheye::undistortImage(*distorted, *undistorted, *k, *d, *knew, sz);
}
void Fisheye_UndistortPoints(Mat distorted, Mat undistorted, Mat k, Mat d, Mat r, Mat p) {
cv::fisheye::undistortPoints(*distorted, *undistorted, *k, *d, *r, *p);
}
void Fisheye_EstimateNewCameraMatrixForUndistortRectify(Mat k, Mat d, Size imgSize, Mat r, Mat p, double balance, Size newSize, double fovScale) {
cv::Size newSz(newSize.width, newSize.height);
cv::Size imgSz(imgSize.width, imgSize.height);
cv::fisheye::estimateNewCameraMatrixForUndistortRectify(*k, *d, imgSz, *r, *p, balance, newSz, fovScale);
}
void InitUndistortRectifyMap(Mat cameraMatrix,Mat distCoeffs,Mat r,Mat newCameraMatrix,Size size,int m1type,Mat map1,Mat map2) {
cv::Size sz(size.width, size.height);
cv::initUndistortRectifyMap(*cameraMatrix,*distCoeffs,*r,*newCameraMatrix,sz,m1type,*map1,*map2);
@ -31,3 +41,29 @@ void Undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCamera
cv::undistort(*src, *dst, *cameraMatrix, *distCoeffs, *newCameraMatrix);
}
void UndistortPoints(Mat distorted, Mat undistorted, Mat k, Mat d, Mat r, Mat p) {
cv::undistortPoints(*distorted, *undistorted, *k, *d, *r, *p);
}
bool FindChessboardCorners(Mat image, Size patternSize, Mat corners, int flags) {
cv::Size sz(patternSize.width, patternSize.height);
return cv::findChessboardCorners(*image, sz, *corners, flags);
}
void DrawChessboardCorners(Mat image, Size patternSize, Mat corners, bool patternWasFound) {
cv::Size sz(patternSize.width, patternSize.height);
cv::drawChessboardCorners(*image, sz, *corners, patternWasFound);
}
Mat EstimateAffinePartial2D(Contour2f from, Contour2f to) {
std::vector<cv::Point2f> from_pts;
for (size_t i = 0; i < from.length; i++) {
from_pts.push_back(cv::Point2f(from.points[i].x, from.points[i].y));
}
std::vector<cv::Point2f> to_pts;
for (size_t i = 0; i < to.length; i++) {
to_pts.push_back(cv::Point2f(to.points[i].x, to.points[i].y));
}
return new cv::Mat(cv::estimateAffinePartial2D(from_pts, to_pts));
}

vendor/gocv.io/x/gocv/calib3d.go generated vendored
@ -67,6 +67,30 @@ func FisheyeUndistortImageWithParams(distorted Mat, undistorted *Mat, k, d, knew
C.Fisheye_UndistortImageWithParams(distorted.Ptr(), undistorted.Ptr(), k.Ptr(), d.Ptr(), knew.Ptr(), sz)
}
// FisheyeUndistortPoints transforms points to compensate for fisheye lens distortion.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#gab738cdf90ceee97b2b52b0d0e7511541
func FisheyeUndistortPoints(distorted Mat, undistorted *Mat, k, d, r, p Mat) {
C.Fisheye_UndistortPoints(distorted.Ptr(), undistorted.Ptr(), k.Ptr(), d.Ptr(), r.Ptr(), p.Ptr())
}
// EstimateNewCameraMatrixForUndistortRectify estimates new camera matrix for undistortion or rectification.
//
// For further details, please see:
// https://docs.opencv.org/master/db/d58/group__calib3d__fisheye.html#ga384940fdf04c03e362e94b6eb9b673c9
func EstimateNewCameraMatrixForUndistortRectify(k, d Mat, imgSize image.Point, r Mat, p *Mat, balance float64, newSize image.Point, fovScale float64) {
imgSz := C.struct_Size{
width: C.int(imgSize.X),
height: C.int(imgSize.Y),
}
newSz := C.struct_Size{
width: C.int(newSize.X),
height: C.int(newSize.Y),
}
C.Fisheye_EstimateNewCameraMatrixForUndistortRectify(k.Ptr(), d.Ptr(), imgSz, r.Ptr(), p.Ptr(), C.double(balance), newSz, C.double(fovScale))
}
// InitUndistortRectifyMap computes the joint undistortion and rectification transformation and represents the result in the form of maps for remap.
//
// For further details, please see:
@ -101,3 +125,59 @@ func GetOptimalNewCameraMatrixWithParams(cameraMatrix Mat, distCoeffs Mat, image
func Undistort(src Mat, dst *Mat, cameraMatrix Mat, distCoeffs Mat, newCameraMatrix Mat) {
C.Undistort(src.Ptr(), dst.Ptr(), cameraMatrix.Ptr(), distCoeffs.Ptr(), newCameraMatrix.Ptr())
}
// UndistortPoints transforms points to compensate for lens distortion.
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga55c716492470bfe86b0ee9bf3a1f0f7e
func UndistortPoints(src Mat, dst *Mat, cameraMatrix, distCoeffs, rectificationTransform, newCameraMatrix Mat) {
C.UndistortPoints(src.Ptr(), dst.Ptr(), cameraMatrix.Ptr(), distCoeffs.Ptr(), rectificationTransform.Ptr(), newCameraMatrix.Ptr())
}
// CalibCBFlag value for chessboard calibration
// For more details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga93efa9b0aa890de240ca32b11253dd4a
type CalibCBFlag int
const (
// Various operation flags that can be zero or a combination of the following values:
// Use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
CalibCBAdaptiveThresh CalibCBFlag = 1 << iota
// Normalize the image gamma with equalizeHist before applying fixed or adaptive thresholding.
CalibCBNormalizeImage
// Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads extracted at the contour retrieval stage.
CalibCBFilterQuads
// Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed.
CalibCBFastCheck
CalibCBExhaustive
CalibCBAccuracy
CalibCBLarger
CalibCBMarker
)
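The `1 << iota` pattern above gives each flag its own bit, so flags can be combined with bitwise OR. A standalone sketch of the same pattern (plain Go, no OpenCV required):

```go
package main

import "fmt"

// CalibCBFlag mirrors the constant block above: 1 << iota doubles the
// value on each successive constant, assigning each flag a distinct bit.
type CalibCBFlag int

const (
	CalibCBAdaptiveThresh CalibCBFlag = 1 << iota // 1
	CalibCBNormalizeImage                         // 2
	CalibCBFilterQuads                            // 4
	CalibCBFastCheck                              // 8
)

func main() {
	// Combine flags with |, test membership with &.
	flags := CalibCBAdaptiveThresh | CalibCBFastCheck
	fmt.Println(int(flags))                    // 9
	fmt.Println(flags&CalibCBFastCheck != 0)   // true
	fmt.Println(flags&CalibCBFilterQuads != 0) // false
}
```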
// FindChessboardCorners finds the positions of the internal corners of the chessboard.
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga93efa9b0aa890de240ca32b11253dd4a
func FindChessboardCorners(image Mat, patternSize image.Point, corners *Mat, flags CalibCBFlag) bool {
sz := C.struct_Size{
width: C.int(patternSize.X),
height: C.int(patternSize.Y),
}
return bool(C.FindChessboardCorners(image.Ptr(), sz, corners.Ptr(), C.int(flags)))
}
// DrawChessboardCorners renders the detected chessboard corners on an image.
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html
func DrawChessboardCorners(image *Mat, patternSize image.Point, corners Mat, patternWasFound bool) {
sz := C.struct_Size{
width: C.int(patternSize.X),
height: C.int(patternSize.Y),
}
C.DrawChessboardCorners(image.Ptr(), sz, corners.Ptr(), C.bool(patternWasFound))
}
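A minimal usage sketch combining the two functions above. The image path and pattern size are illustrative; `patternSize` counts inner corners (9x6 for a standard calibration board).

```go
package main

import (
	"image"

	"gocv.io/x/gocv"
)

func main() {
	// Load a calibration photo (path is illustrative).
	img := gocv.IMRead("chessboard.jpg", gocv.IMReadColor)
	defer img.Close()

	corners := gocv.NewMat()
	defer corners.Close()

	found := gocv.FindChessboardCorners(img, image.Pt(9, 6), &corners,
		gocv.CalibCBAdaptiveThresh|gocv.CalibCBNormalizeImage)

	// Overlay the detected corners; patternWasFound controls whether
	// they are drawn as a connected pattern or as individual points.
	gocv.DrawChessboardCorners(&img, image.Pt(9, 6), corners, found)
}
```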
// EstimateAffinePartial2D computes an optimal limited affine transformation
// with 4 degrees of freedom between two 2D point sets.
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#gad767faff73e9cbd8b9d92b955b50062d
func EstimateAffinePartial2D(from, to []Point2f) Mat {
fromPoints := toCPoints2f(from)
toPoints := toCPoints2f(to)
return newMat(C.EstimateAffinePartial2D(fromPoints, toPoints))
}
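A short usage sketch for `EstimateAffinePartial2D` (point values are illustrative): the second point set is the first translated by (2, 2), so the estimated 4-DOF transform is a 2x3 matrix encoding that translation with identity rotation and scale.

```go
package main

import (
	"fmt"

	"gocv.io/x/gocv"
)

func main() {
	from := []gocv.Point2f{{0, 0}, {10, 0}, {10, 10}, {0, 10}}
	to := []gocv.Point2f{{2, 2}, {12, 2}, {12, 12}, {2, 12}}

	// Returns a 2x3 transform matrix; the caller must close it.
	m := gocv.EstimateAffinePartial2D(from, to)
	defer m.Close()

	fmt.Println(m.Rows(), m.Cols()) // 2 3
}
```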

vendor/gocv.io/x/gocv/calib3d.h generated vendored
@ -14,12 +14,18 @@ extern "C" {
//Calib
void Fisheye_UndistortImage(Mat distorted, Mat undistorted, Mat k, Mat d);
void Fisheye_UndistortImageWithParams(Mat distorted, Mat undistorted, Mat k, Mat d, Mat knew, Size size);
void Fisheye_UndistortPoints(Mat distorted, Mat undistorted, Mat k, Mat d, Mat R, Mat P);
void Fisheye_EstimateNewCameraMatrixForUndistortRectify(Mat k, Mat d, Size imgSize, Mat r, Mat p, double balance, Size newSize, double fovScale);
void InitUndistortRectifyMap(Mat cameraMatrix,Mat distCoeffs,Mat r,Mat newCameraMatrix,Size size,int m1type,Mat map1,Mat map2);
Mat GetOptimalNewCameraMatrixWithParams(Mat cameraMatrix,Mat distCoeffs,Size size,double alpha,Size newImgSize,Rect* validPixROI,bool centerPrincipalPoint);
void Undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCameraMatrix);
void UndistortPoints(Mat distorted, Mat undistorted, Mat k, Mat d, Mat r, Mat p);
bool FindChessboardCorners(Mat image, Size patternSize, Mat corners, int flags);
void DrawChessboardCorners(Mat image, Size patternSize, Mat corners, bool patternWasFound);
Mat EstimateAffinePartial2D(Contour2f from, Contour2f to);
#ifdef __cplusplus
}
#endif
#endif //_OPENCV3_CALIB_H


@ -25,3 +25,25 @@ func (c CalibFlag) String() string {
}
return ""
}
func (c CalibCBFlag) String() string {
switch c {
case CalibCBAdaptiveThresh:
return "calib-cb-adaptive-thresh"
case CalibCBNormalizeImage:
return "calib-cb-normalize-image"
case CalibCBFilterQuads:
return "calib-cb-filter-quads"
case CalibCBFastCheck:
return "calib-cb-fast-check"
case CalibCBExhaustive:
return "calib-cb-exhaustive"
case CalibCBAccuracy:
return "calib-cb-accuracy"
case CalibCBLarger:
return "calib-cb-larger"
case CalibCBMarker:
return "calib-cb-marker"
}
return ""
}

vendor/gocv.io/x/gocv/cgo.go generated vendored
@ -1,4 +1,4 @@
// +build !customenv
package gocv
@ -8,6 +8,6 @@ package gocv
#cgo !windows pkg-config: opencv4
#cgo CXXFLAGS: --std=c++11
#cgo windows CPPFLAGS: -IC:/opencv/build/install/include
#cgo windows LDFLAGS: -LC:/opencv/build/install/x64/mingw/lib -lopencv_core440 -lopencv_face440 -lopencv_videoio440 -lopencv_imgproc440 -lopencv_highgui440 -lopencv_imgcodecs440 -lopencv_objdetect440 -lopencv_features2d440 -lopencv_video440 -lopencv_dnn440 -lopencv_xfeatures2d440 -lopencv_plot440 -lopencv_tracking440 -lopencv_img_hash440 -lopencv_calib3d440 -lopencv_bgsegm440
*/
import "C"

vendor/gocv.io/x/gocv/core.cpp generated vendored
@ -574,6 +574,10 @@ void Mat_Multiply(Mat src1, Mat src2, Mat dst) {
cv::multiply(*src1, *src2, *dst);
}
void Mat_MultiplyWithParams(Mat src1, Mat src2, Mat dst, double scale, int dtype) {
cv::multiply(*src1, *src2, *dst, scale, dtype);
}
void Mat_Normalize(Mat src, Mat dst, double alpha, double beta, int typ) {
cv::normalize(*src, *dst, alpha, beta, typ);
}
@ -761,3 +765,6 @@ Mat Mat_colRange(Mat m,int startrow,int endrow) {
return new cv::Mat(m->colRange(startrow,endrow));
}
void IntVector_Close(struct IntVector ivec) {
delete[] ivec.val;
}

vendor/gocv.io/x/gocv/core.go generated vendored
@ -35,25 +35,25 @@ const (
MatTypeCV8U MatType = 0
// MatTypeCV8S is a Mat of 8-bit signed int
MatTypeCV8S MatType = 1
// MatTypeCV16U is a Mat of 16-bit unsigned int
MatTypeCV16U MatType = 2
// MatTypeCV16S is a Mat of 16-bit signed int
MatTypeCV16S MatType = 3
// MatTypeCV16SC2 is a Mat of 16-bit signed int with 2 channels
MatTypeCV16SC2 = MatTypeCV16S + MatChannels2
// MatTypeCV32S is a Mat of 32-bit signed int
MatTypeCV32S MatType = 4
// MatTypeCV32F is a Mat of 32-bit float
MatTypeCV32F MatType = 5
// MatTypeCV64F is a Mat of 64-bit float
MatTypeCV64F MatType = 6
// MatTypeCV8UC1 is a Mat of 8-bit unsigned int with a single channel
MatTypeCV8UC1 = MatTypeCV8U + MatChannels1
@ -146,21 +146,26 @@ const (
CompareEQ CompareType = 0
// CompareGT src1 is greater than src2.
CompareGT CompareType = 1
// CompareGE src1 is greater than or equal to src2.
CompareGE CompareType = 2
// CompareLT src1 is less than src2.
CompareLT CompareType = 3
// CompareLE src1 is less than or equal to src2.
CompareLE CompareType = 4
// CompareNE src1 is unequal to src2.
CompareNE CompareType = 5
)
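The diff above replaces untyped constants (`CompareGT = 1`) with explicitly typed ones (`CompareGT CompareType = 1`). A standalone sketch of why that matters (plain Go, no OpenCV required): a typed constant lets function signatures reject values of unrelated types at compile time, while untyped constant literals still convert implicitly.

```go
package main

import "fmt"

// CompareType mirrors the typed-constant style above.
type CompareType int

const (
	CompareEQ CompareType = 0
	CompareGT CompareType = 1
	CompareGE CompareType = 2
)

// describe only accepts CompareType; passing a plain int variable
// without an explicit conversion is now a compile error.
func describe(ct CompareType) string {
	switch ct {
	case CompareEQ:
		return "equal"
	case CompareGT:
		return "greater"
	case CompareGE:
		return "greater-or-equal"
	}
	return "unknown"
}

func main() {
	fmt.Println(describe(CompareGT)) // greater
}
```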
type Point2f struct {
X float32
Y float32
}
var ErrEmptyByteSlice = errors.New("empty byte array")
// Mat represents an n-dimensional dense numerical single-channel
@ -287,6 +292,7 @@ func (m *Mat) Total() int {
func (m *Mat) Size() (dims []int) {
cdims := C.IntVector{}
C.Mat_Size(m.p, &cdims)
defer C.IntVector_Close(cdims)
h := &reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(cdims.val)),
@ -953,7 +959,7 @@ func BitwiseXorWithMask(src1 Mat, src2 Mat, dst *Mat, mask Mat) {
// For further details, please see:
// https://docs.opencv.org/master/d2/de8/group__core__array.html#ga4ba778a1c57f83233b1d851c83f5a622
//
func BatchDistance(src1 Mat, src2 Mat, dist Mat, dtype int, nidx Mat, normType int, K int, mask Mat, update int, crosscheck bool) {
func BatchDistance(src1 Mat, src2 Mat, dist Mat, dtype MatType, nidx Mat, normType NormType, K int, mask Mat, update int, crosscheck bool) {
C.Mat_BatchDistance(src1.p, src2.p, dist.p, C.int(dtype), nidx.p, C.int(normType), C.int(K), mask.p, C.int(update), C.bool(crosscheck))
}
@ -979,19 +985,19 @@ const (
CovarScrambled CovarFlags = 0
// CovarNormal indicates to use normal covariation.
CovarNormal = 1
CovarNormal CovarFlags = 1
// CovarUseAvg indicates to use average covariation.
CovarUseAvg = 2
CovarUseAvg CovarFlags = 2
// CovarScale indicates to use scaled covariation.
CovarScale = 4
CovarScale CovarFlags = 4
// CovarRows indicates to use covariation on rows.
CovarRows = 8
CovarRows CovarFlags = 8
// CovarCols indicates to use covariation on columns.
CovarCols = 16
CovarCols CovarFlags = 16
)
// CalcCovarMatrix calculates the covariance matrix of a set of vectors.
@ -999,7 +1005,7 @@ const (
// For further details, please see:
// https://docs.opencv.org/master/d2/de8/group__core__array.html#ga017122d912af19d7d0d2cccc2d63819f
//
func CalcCovarMatrix(samples Mat, covar *Mat, mean *Mat, flags CovarFlags, ctype int) {
func CalcCovarMatrix(samples Mat, covar *Mat, mean *Mat, flags CovarFlags, ctype MatType) {
C.Mat_CalcCovarMatrix(samples.p, covar.p, mean.p, C.int(flags), C.int(ctype))
}
@ -1087,24 +1093,24 @@ const (
DftForward DftFlags = 0
// DftInverse performs an inverse 1D or 2D transform.
DftInverse = 1
DftInverse DftFlags = 1
// DftScale scales the result: divide it by the number of array elements. Normally, it is combined with DFT_INVERSE.
DftScale = 2
DftScale DftFlags = 2
// DftRows performs a forward or inverse transform of every individual row of the input matrix.
DftRows = 4
DftRows DftFlags = 4
// DftComplexOutput performs a forward transformation of 1D or 2D real array; the result, though being a complex array, has complex-conjugate symmetry
DftComplexOutput = 16
DftComplexOutput DftFlags = 16
// DftRealOutput performs an inverse transformation of a 1D or 2D complex array; the result is normally a complex array of the same size,
// however, if the input array has conjugate-complex symmetry (for example, it is a result of forward transformation with DFT_COMPLEX_OUTPUT flag),
// the output is a real array.
DftRealOutput = 32
DftRealOutput DftFlags = 32
// DftComplexInput specifies that input is complex input. If this flag is set, the input must have 2 channels.
DftComplexInput = 64
DftComplexInput DftFlags = 64
// DctInverse performs an inverse 1D or 2D dct transform.
DctInverse = DftInverse
@ -1254,9 +1260,9 @@ const (
// Rotate90Clockwise allows to rotate image 90 degrees clockwise
Rotate90Clockwise RotateFlag = 0
// Rotate180Clockwise allows to rotate image 180 degrees clockwise
Rotate180Clockwise = 1
Rotate180Clockwise RotateFlag = 1
// Rotate90CounterClockwise allows to rotate 270 degrees clockwise
Rotate90CounterClockwise = 2
Rotate90CounterClockwise RotateFlag = 2
)
// Rotate rotates a 2D array in multiples of 90 degrees
@ -1347,10 +1353,10 @@ const (
// KMeansRandomCenters selects random initial centers in each attempt.
KMeansRandomCenters KMeansFlags = 0
// KMeansPPCenters uses kmeans++ center initialization by Arthur and Vassilvitskii [Arthur2007].
KMeansPPCenters = 1
KMeansPPCenters KMeansFlags = 1
// KMeansUseInitialLabels uses the user-supplied labels during the first (and possibly the only) attempt
// instead of computing them from the initial centers. For the second and further attempts, use the random or semi-random centers. Use one of the KMEANS_*_CENTERS flags to specify the exact method.
KMeansUseInitialLabels = 2
KMeansUseInitialLabels KMeansFlags = 2
)
// KMeans finds centers of clusters and groups input samples around the clusters.
@ -1491,6 +1497,16 @@ func Multiply(src1 Mat, src2 Mat, dst *Mat) {
C.Mat_Multiply(src1.p, src2.p, dst.p)
}
// MultiplyWithParams calculates the per-element scaled product of two arrays.
// Both input arrays must be of the same size and the same type.
//
// For further details, please see:
// https://docs.opencv.org/master/d2/de8/group__core__array.html#ga979d898a58d7f61c53003e162e7ad89f
//
func MultiplyWithParams(src1 Mat, src2 Mat, dst *Mat, scale float64, dtype MatType) {
C.Mat_MultiplyWithParams(src1.p, src2.p, dst.p, C.double(scale), C.int(dtype))
}
// NormType for normalization operations.
//
// For further details, please see:
@ -1503,28 +1519,28 @@ const (
NormInf NormType = 1
// NormL1 indicates use L1 normalization.
NormL1 = 2
NormL1 NormType = 2
// NormL2 indicates use L2 normalization.
NormL2 = 4
NormL2 NormType = 4
// NormL2Sqr indicates use L2 squared normalization.
NormL2Sqr = 5
NormL2Sqr NormType = 5
// NormHamming indicates use Hamming normalization.
NormHamming = 6
NormHamming NormType = 6
// NormHamming2 indicates use Hamming 2-bit normalization.
NormHamming2 = 7
NormHamming2 NormType = 7
// NormTypeMask indicates use type mask for normalization.
NormTypeMask = 7
NormTypeMask NormType = 7
// NormRelative indicates use relative normalization.
NormRelative = 8
NormRelative NormType = 8
// NormMinMax indicates use min/max normalization.
NormMinMax = 32
NormMinMax NormType = 32
)
// Normalize normalizes the norm or value range of an array.
@ -1566,35 +1582,35 @@ const (
Count TermCriteriaType = 1
// MaxIter is the maximum number of iterations or elements to compute.
MaxIter = 1
MaxIter TermCriteriaType = 1
// EPS is the desired accuracy or change in parameters at which the
// iterative algorithm stops.
EPS = 2
EPS TermCriteriaType = 2
)
type SolveDecompositionFlags int
const (
// Gaussian elimination with the optimal pivot element chosen.
SolveDecompositionLu = 0
SolveDecompositionLu SolveDecompositionFlags = 0
// Singular value decomposition (SVD) method. The system can be over-defined and/or the matrix src1 can be singular.
SolveDecompositionSvd = 1
SolveDecompositionSvd SolveDecompositionFlags = 1
// Eigenvalue decomposition. The matrix src1 must be symmetrical.
SolveDecompositionEing = 2
SolveDecompositionEing SolveDecompositionFlags = 2
// Cholesky LL^T factorization. The matrix src1 must be symmetrical and positively defined.
SolveDecompositionCholesky = 3
SolveDecompositionCholesky SolveDecompositionFlags = 3
// QR factorization. The system can be over-defined and/or the matrix src1 can be singular.
SolveDecompositionQr = 4
SolveDecompositionQr SolveDecompositionFlags = 4
// While all the previous flags are mutually exclusive, this flag can be used together with any of the previous.
// It means that the normal equations 𝚜𝚛𝚌𝟷^T⋅𝚜𝚛𝚌𝟷𝚍𝚜𝚝=𝚜𝚛𝚌𝟷^T𝚜𝚛𝚌𝟸 are solved instead of the original system
// 𝚜𝚛𝚌𝟷⋅𝚍𝚜𝚝=𝚜𝚛𝚌𝟸.
SolveDecompositionNormal = 5
SolveDecompositionNormal SolveDecompositionFlags = 5
)
// Solve solves one or more linear systems or least-squares problems.
@ -1645,7 +1661,7 @@ const (
// For further details, please see:
// https://docs.opencv.org/master/d2/de8/group__core__array.html#ga4b78072a303f29d9031d56e5638da78e
//
func Reduce(src Mat, dst *Mat, dim int, rType ReduceTypes, dType int) {
func Reduce(src Mat, dst *Mat, dim int, rType ReduceTypes, dType MatType) {
C.Mat_Reduce(src.p, dst.p, C.int(dim), C.int(rType), C.int(dType))
}
@ -1718,6 +1734,7 @@ func SortIdx(src Mat, dst *Mat, flags SortFlags) {
func Split(src Mat) (mv []Mat) {
cMats := C.struct_Mats{}
C.Mat_Split(src.p, &(cMats))
defer C.Mats_Close(cMats)
mv = make([]Mat, cMats.length)
for i := C.int(0); i < cMats.length; i++ {
mv[i].p = C.Mats_get(cMats, i)
@ -1844,6 +1861,22 @@ type DMatch struct {
Distance float64
}
// Vecb is a generic vector of bytes.
type Vecb []uint8
// GetVecbAt returns a vector of bytes. Its size corresponds to the number
// of channels of the Mat.
func (m *Mat) GetVecbAt(row int, col int) Vecb {
ch := m.Channels()
v := make(Vecb, ch)
for c := 0; c < ch; c++ {
v[c] = m.GetUCharAt(row, col*ch+c)
}
return v
}
// Vecf is a generic vector of floats.
type Vecf []float32
@ -1860,6 +1893,22 @@ func (m *Mat) GetVecfAt(row int, col int) Vecf {
return v
}
// Vecd is a generic vector of float64/doubles.
type Vecd []float64
// GetVecdAt returns a vector of float64s. Its size corresponds to the number
// of channels of the Mat.
func (m *Mat) GetVecdAt(row int, col int) Vecd {
ch := m.Channels()
v := make(Vecd, ch)
for c := 0; c < ch; c++ {
v[c] = m.GetDoubleAt(row, col*ch+c)
}
return v
}
// Veci is a generic vector of integers.
type Veci []int32
@ -1944,6 +1993,21 @@ func toCPoints(points []image.Point) C.struct_Points {
}
}
func toCPoints2f(points []Point2f) C.struct_Points2f {
cPointSlice := make([]C.struct_Point2f, len(points))
for i, point := range points {
cPointSlice[i] = C.struct_Point2f{
x: C.float(point.X),
y: C.float(point.Y),
}
}
return C.struct_Points2f{
points: (*C.Point2f)(&cPointSlice[0]),
length: C.int(len(points)),
}
}
func toCStrings(strs []string) C.struct_CStrings {
cStringsSlice := make([]*C.char, len(strs))
for i, s := range strs {

12
vendor/gocv.io/x/gocv/core.h generated vendored

@ -56,9 +56,18 @@ typedef struct Points {
int length;
} Points;
// Wrapper for the vector of Point2f structs aka std::vector<Point2f>
typedef struct Points2f {
Point2f* points;
int length;
} Points2f;
// Contour is alias for Points
typedef Points Contour;
// Contour2f is alias for Points2f
typedef Points2f Contour2f;
// Wrapper for the vector of Points vectors aka std::vector< std::vector<Point> >
typedef struct Contours {
Contour* contours;
@ -347,6 +356,7 @@ void Mat_MinMaxIdx(Mat m, double* minVal, double* maxVal, int* minIdx, int* maxI
void Mat_MinMaxLoc(Mat m, double* minVal, double* maxVal, Point* minLoc, Point* maxLoc);
void Mat_MulSpectrums(Mat a, Mat b, Mat c, int flags);
void Mat_Multiply(Mat src1, Mat src2, Mat dst);
void Mat_MultiplyWithParams(Mat src1, Mat src2, Mat dst, double scale, int dtype);
void Mat_Subtract(Mat src1, Mat src2, Mat dst);
void Mat_Normalize(Mat src, Mat dst, double alpha, double beta, int typ);
double Norm(Mat src1, int normType);
@ -378,6 +388,8 @@ double GetTickFrequency();
Mat Mat_rowRange(Mat m,int startrow,int endrow);
Mat Mat_colRange(Mat m,int startrow,int endrow);
void IntVector_Close(struct IntVector ivec);
#ifdef __cplusplus
}
#endif

6
vendor/gocv.io/x/gocv/dnn.cpp generated vendored

@ -113,12 +113,6 @@ Mat Net_BlobFromImage(Mat image, double scalefactor, Size size, Scalar mean, boo
// set the output ddepth to the input image depth
int ddepth = image->depth();
if (ddepth == CV_8U)
{
// no scalar mean adjustment allowed, so ignore
return new cv::Mat(cv::dnn::blobFromImage(*image, scalefactor, sz, NULL, swapRB, crop, ddepth));
}
cv::Scalar cm(mean.val1, mean.val2, mean.val3, mean.val4);
return new cv::Mat(cv::dnn::blobFromImage(*image, scalefactor, sz, cm, swapRB, crop, ddepth));
}

20
vendor/gocv.io/x/gocv/dnn.go generated vendored

@ -39,6 +39,9 @@ const (
// NetBackendVKCOM is the Vulkan backend.
NetBackendVKCOM NetBackendType = 4
// NetBackendCUDA is the Cuda backend.
NetBackendCUDA NetBackendType = 5
)
// ParseNetBackend returns a valid NetBackendType given a string. Valid values are:
@ -46,6 +49,7 @@ const (
// - openvino
// - opencv
// - vulkan
// - cuda
// - default
func ParseNetBackend(backend string) NetBackendType {
switch backend {
@ -57,6 +61,8 @@ func ParseNetBackend(backend string) NetBackendType {
return NetBackendOpenCV
case "vulkan":
return NetBackendVKCOM
case "cuda":
return NetBackendCUDA
default:
return NetBackendDefault
}
@ -83,6 +89,12 @@ const (
// NetTargetFPGA is the FPGA target.
NetTargetFPGA NetTargetType = 5
// NetTargetCUDA is the CUDA target.
NetTargetCUDA NetTargetType = 6
// NetTargetCUDAFP16 is the CUDA target.
NetTargetCUDAFP16 NetTargetType = 7
)
// ParseNetTarget returns a valid NetTargetType given a string. Valid values are:
@ -92,6 +104,8 @@ const (
// - vpu
// - vulkan
// - fpga
// - cuda
// - cudafp16
func ParseNetTarget(target string) NetTargetType {
switch target {
case "cpu":
@ -106,6 +120,10 @@ func ParseNetTarget(target string) NetTargetType {
return NetTargetVulkan
case "fpga":
return NetTargetFPGA
case "cuda":
return NetTargetCUDA
case "cudafp16":
return NetTargetCUDAFP16
default:
return NetTargetCPU
}
@ -307,7 +325,7 @@ func BlobFromImage(img Mat, scaleFactor float64, size image.Point, mean Scalar,
// https://docs.opencv.org/master/d6/d0f/group__dnn.html#ga2b89ed84432e4395f5a1412c2926293c
//
func BlobFromImages(imgs []Mat, blob *Mat, scaleFactor float64, size image.Point, mean Scalar,
swapRB bool, crop bool, ddepth int) {
swapRB bool, crop bool, ddepth MatType) {
cMatArray := make([]C.Mat, len(imgs))
for i, r := range imgs {


@ -12,6 +12,8 @@ func (c NetBackendType) String() string {
return "opencv"
case NetBackendVKCOM:
return "vulkan"
case NetBackendCUDA:
return "cuda"
}
return ""
}
@ -30,6 +32,10 @@ func (c NetTargetType) String() string {
return "vulkan"
case NetTargetFPGA:
return "fpga"
case NetTargetCUDA:
return "cuda"
case NetTargetCUDAFP16:
return "cudafp16"
}
return ""
}

43
vendor/gocv.io/x/gocv/features2d.cpp generated vendored

@ -428,3 +428,46 @@ void DrawKeyPoints(Mat src, struct KeyPoints kp, Mat dst, Scalar s, int flags) {
cv::drawKeypoints(*src, keypts, *dst, color, static_cast<cv::DrawMatchesFlags>(flags));
}
SIFT SIFT_Create() {
// TODO: params
return new cv::Ptr<cv::SIFT>(cv::SIFT::create());
}
void SIFT_Close(SIFT d) {
delete d;
}
struct KeyPoints SIFT_Detect(SIFT d, Mat src) {
std::vector<cv::KeyPoint> detected;
(*d)->detect(*src, detected);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}
struct KeyPoints SIFT_DetectAndCompute(SIFT d, Mat src, Mat mask, Mat desc) {
std::vector<cv::KeyPoint> detected;
(*d)->detectAndCompute(*src, *mask, detected, *desc);
KeyPoint* kps = new KeyPoint[detected.size()];
for (size_t i = 0; i < detected.size(); ++i) {
KeyPoint k = {detected[i].pt.x, detected[i].pt.y, detected[i].size, detected[i].angle,
detected[i].response, detected[i].octave, detected[i].class_id
};
kps[i] = k;
}
KeyPoints ret = {kps, (int)detected.size()};
return ret;
}

61
vendor/gocv.io/x/gocv/features2d.go generated vendored

@ -149,9 +149,9 @@ const (
//FastFeatureDetectorType58 is an alias of FastFeatureDetector::TYPE_5_8
FastFeatureDetectorType58 FastFeatureDetectorType = 0
//FastFeatureDetectorType712 is an alias of FastFeatureDetector::TYPE_7_12
FastFeatureDetectorType712 = 1
FastFeatureDetectorType712 FastFeatureDetectorType = 1
//FastFeatureDetectorType916 is an alias of FastFeatureDetector::TYPE_9_16
FastFeatureDetectorType916 = 2
FastFeatureDetectorType916 FastFeatureDetectorType = 2
)
// FastFeatureDetector is a wrapper around the cv::FastFeatureDetector.
@ -710,11 +710,11 @@ const (
// DrawDefault creates new image and for each keypoint only the center point will be drawn
DrawDefault DrawMatchesFlag = 0
// DrawOverOutImg draws matches on existing content of image
DrawOverOutImg = 1
DrawOverOutImg DrawMatchesFlag = 1
// NotDrawSinglePoints will not draw single points
NotDrawSinglePoints = 2
NotDrawSinglePoints DrawMatchesFlag = 2
// DrawRichKeyPoints draws the circle around each keypoint with keypoint size and orientation
DrawRichKeyPoints = 3
DrawRichKeyPoints DrawMatchesFlag = 3
)
// DrawKeyPoints draws keypoints
@ -740,11 +740,58 @@ func DrawKeyPoints(src Mat, keyPoints []KeyPoint, dst *Mat, color color.RGBA, fl
}
scalar := C.struct_Scalar{
val1: C.double(color.R),
val1: C.double(color.B),
val2: C.double(color.G),
val3: C.double(color.B),
val3: C.double(color.R),
val4: C.double(color.A),
}
C.DrawKeyPoints(src.p, cKeyPoints, dst.p, scalar, C.int(flag))
}
// SIFT is a wrapper around the cv::SIFT algorithm.
// Due to the patent having expired, this is now in the main OpenCV code modules.
type SIFT struct {
// C.SIFT
p unsafe.Pointer
}
// NewSIFT returns a new SIFT algorithm.
//
// For further details, please see:
// https://docs.opencv.org/master/d5/d3c/classcv_1_1xfeatures2d_1_1SIFT.html
//
func NewSIFT() SIFT {
return SIFT{p: unsafe.Pointer(C.SIFT_Create())}
}
// Close SIFT.
func (d *SIFT) Close() error {
C.SIFT_Close((C.SIFT)(d.p))
d.p = nil
return nil
}
// Detect keypoints in an image using SIFT.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#aa4e9a7082ec61ebc108806704fbd7887
//
func (d *SIFT) Detect(src Mat) []KeyPoint {
ret := C.SIFT_Detect((C.SIFT)(d.p), C.Mat(src.Ptr()))
return getKeyPoints(ret)
}
// DetectAndCompute detects and computes keypoints in an image using SIFT.
//
// For further details, please see:
// https://docs.opencv.org/master/d0/d13/classcv_1_1Feature2D.html#a8be0d1c20b08eb867184b8d74c15a677
//
func (d *SIFT) DetectAndCompute(src Mat, mask Mat) ([]KeyPoint, Mat) {
desc := NewMat()
ret := C.SIFT_DetectAndCompute((C.SIFT)(d.p), C.Mat(src.Ptr()), C.Mat(mask.Ptr()),
C.Mat(desc.Ptr()))
return getKeyPoints(ret), desc
}

7
vendor/gocv.io/x/gocv/features2d.h generated vendored

@ -19,6 +19,7 @@ typedef cv::Ptr<cv::MSER>* MSER;
typedef cv::Ptr<cv::ORB>* ORB;
typedef cv::Ptr<cv::SimpleBlobDetector>* SimpleBlobDetector;
typedef cv::Ptr<cv::BFMatcher>* BFMatcher;
typedef cv::Ptr<cv::SIFT>* SIFT;
#else
typedef void* AKAZE;
typedef void* AgastFeatureDetector;
@ -30,6 +31,7 @@ typedef void* MSER;
typedef void* ORB;
typedef void* SimpleBlobDetector;
typedef void* BFMatcher;
typedef void* SIFT;
#endif
AKAZE AKAZE_Create();
@ -82,6 +84,11 @@ struct MultiDMatches BFMatcher_KnnMatch(BFMatcher b, Mat query, Mat train, int k
void DrawKeyPoints(Mat src, struct KeyPoints kp, Mat dst, const Scalar s, int flags);
SIFT SIFT_Create();
void SIFT_Close(SIFT f);
struct KeyPoints SIFT_Detect(SIFT f, Mat src);
struct KeyPoints SIFT_DetectAndCompute(SIFT f, Mat src, Mat mask, Mat desc);
#ifdef __cplusplus
}
#endif

2
vendor/gocv.io/x/gocv/go.mod generated vendored

@ -1,3 +1,3 @@
module gocv.io/x/gocv
go 1.13
go 1.13

22
vendor/gocv.io/x/gocv/highgui.go generated vendored

@ -67,19 +67,19 @@ type WindowFlag float32
const (
// WindowNormal indicates a normal window.
WindowNormal WindowFlag = 0
// WindowFullscreen indicates a full-screen window.
WindowFullscreen = 1
WindowNormal WindowFlag = 0x00000000
// WindowAutosize indicates a window sized based on the contents.
WindowAutosize = 1
WindowAutosize WindowFlag = 0x00000001
// WindowFullscreen indicates a full-screen window.
WindowFullscreen WindowFlag = 1
// WindowFreeRatio indicates allow the user to resize without maintaining aspect ratio.
WindowFreeRatio = 0x00000100
WindowFreeRatio WindowFlag = 0x00000100
// WindowKeepRatio indicates always maintain an aspect ratio that matches the contents.
WindowKeepRatio = 0
WindowKeepRatio WindowFlag = 0x00000000
)
// WindowPropertyFlag flags for SetWindowProperty / GetWindowProperty.
@ -92,17 +92,17 @@ const (
// WindowPropertyAutosize is autosize property
// (can be WINDOW_NORMAL or WINDOW_AUTOSIZE).
WindowPropertyAutosize = 1
WindowPropertyAutosize WindowPropertyFlag = 1
// WindowPropertyAspectRatio window's aspect ratio
// (can be set to WINDOW_FREERATIO or WINDOW_KEEPRATIO).
WindowPropertyAspectRatio = 2
WindowPropertyAspectRatio WindowPropertyFlag = 2
// WindowPropertyOpenGL opengl support.
WindowPropertyOpenGL = 3
WindowPropertyOpenGL WindowPropertyFlag = 3
// WindowPropertyVisible or not.
WindowPropertyVisible = 4
WindowPropertyVisible WindowPropertyFlag = 4
)
// GetWindowProperty returns properties of a window.

28
vendor/gocv.io/x/gocv/imgcodecs.go generated vendored

@ -19,48 +19,52 @@ const (
// IMReadGrayScale always convert image to the single channel
// grayscale image.
IMReadGrayScale = 0
IMReadGrayScale IMReadFlag = 0
// IMReadColor always converts image to the 3 channel BGR color image.
IMReadColor = 1
IMReadColor IMReadFlag = 1
// IMReadAnyDepth returns 16-bit/32-bit image when the input has the corresponding
// depth, otherwise convert it to 8-bit.
IMReadAnyDepth = 2
IMReadAnyDepth IMReadFlag = 2
// IMReadAnyColor the image is read in any possible color format.
IMReadAnyColor = 4
IMReadAnyColor IMReadFlag = 4
// IMReadLoadGDAL uses the gdal driver for loading the image.
IMReadLoadGDAL = 8
IMReadLoadGDAL IMReadFlag = 8
// IMReadReducedGrayscale2 always converts image to the single channel grayscale image
// and the image size reduced 1/2.
IMReadReducedGrayscale2 = 16
IMReadReducedGrayscale2 IMReadFlag = 16
// IMReadReducedColor2 always converts image to the 3 channel BGR color image and the
// image size reduced 1/2.
IMReadReducedColor2 = 17
IMReadReducedColor2 IMReadFlag = 17
// IMReadReducedGrayscale4 always converts image to the single channel grayscale image and
// the image size reduced 1/4.
IMReadReducedGrayscale4 = 32
IMReadReducedGrayscale4 IMReadFlag = 32
// IMReadReducedColor4 always converts image to the 3 channel BGR color image and
// the image size reduced 1/4.
IMReadReducedColor4 = 33
IMReadReducedColor4 IMReadFlag = 33
// IMReadReducedGrayscale8 always convert image to the single channel grayscale image and
// the image size reduced 1/8.
IMReadReducedGrayscale8 = 64
IMReadReducedGrayscale8 IMReadFlag = 64
// IMReadReducedColor8 always convert image to the 3 channel BGR color image and the
// image size reduced 1/8.
IMReadReducedColor8 = 65
IMReadReducedColor8 IMReadFlag = 65
// IMReadIgnoreOrientation do not rotate the image according to EXIF's orientation flag.
IMReadIgnoreOrientation = 128
IMReadIgnoreOrientation IMReadFlag = 128
)
// TODO: Define IMWriteFlag type?
const (
//IMWriteJpegQuality is the quality from 0 to 100 for JPEG (the higher is the better). Default value is 95.
IMWriteJpegQuality = 1

90
vendor/gocv.io/x/gocv/imgproc.cpp generated vendored

@ -168,6 +168,12 @@ void Erode(Mat src, Mat dst, Mat kernel) {
cv::erode(*src, *dst, *kernel);
}
void ErodeWithParams(Mat src, Mat dst, Mat kernel, Point anchor, int iterations, int borderType) {
cv::Point pt1(anchor.x, anchor.y);
cv::erode(*src, *dst, *kernel, pt1, iterations, borderType, cv::morphologyDefaultBorderValue());
}
void MatchTemplate(Mat image, Mat templ, Mat result, int method, Mat mask) {
cv::matchTemplate(*image, *templ, *result, method, *mask);
}
@ -317,6 +323,10 @@ void GaussianBlur(Mat src, Mat dst, Size ps, double sX, double sY, int bt) {
cv::GaussianBlur(*src, *dst, sz, sX, sY, bt);
}
Mat GetGaussianKernel(int ksize, double sigma, int ktype){
return new cv::Mat(cv::getGaussianKernel(ksize, sigma, ktype));
}
void Laplacian(Mat src, Mat dst, int dDepth, int kSize, double scale, double delta,
int borderType) {
cv::Laplacian(*src, *dst, dDepth, kSize, scale, delta, borderType);
@ -382,8 +392,8 @@ void Integral(Mat src, Mat sum, Mat sqsum, Mat tilted) {
cv::integral(*src, *sum, *sqsum, *tilted);
}
void Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ) {
cv::threshold(*src, *dst, thresh, maxvalue, typ);
double Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ) {
return cv::threshold(*src, *dst, thresh, maxvalue, typ);
}
void AdaptiveThreshold(Mat src, Mat dst, double maxValue, int adaptiveMethod, int thresholdType,
@ -463,6 +473,26 @@ void FillPoly(Mat img, Contours points, Scalar color) {
cv::fillPoly(*img, pts, c);
}
void Polylines(Mat img, Contours points, bool isClosed, Scalar color,int thickness) {
std::vector<std::vector<cv::Point> > pts;
for (size_t i = 0; i < points.length; i++) {
Contour contour = points.contours[i];
std::vector<cv::Point> cntr;
for (size_t i = 0; i < contour.length; i++) {
cntr.push_back(cv::Point(contour.points[i].x, contour.points[i].y));
}
pts.push_back(cntr);
}
cv::Scalar c = cv::Scalar(color.val1, color.val2, color.val3, color.val4);
cv::polylines(*img, pts, isClosed, c, thickness);
}
struct Size GetTextSize(const char* text, int fontFace, double fontScale, int thickness) {
cv::Size sz = cv::getTextSize(text, fontFace, fontScale, thickness, NULL);
Size size = {sz.width, sz.height};
@ -529,6 +559,19 @@ void ApplyCustomColorMap(Mat src, Mat dst, Mat colormap) {
}
Mat GetPerspectiveTransform(Contour src, Contour dst) {
std::vector<cv::Point2f> src_pts;
for (size_t i = 0; i < src.length; i++) {
src_pts.push_back(cv::Point2f(src.points[i].x, src.points[i].y));
}
std::vector<cv::Point2f> dst_pts;
for (size_t i = 0; i < dst.length; i++) {
dst_pts.push_back(cv::Point2f(dst.points[i].x, dst.points[i].y));
}
return new cv::Mat(cv::getPerspectiveTransform(src_pts, dst_pts));
}
Mat GetPerspectiveTransform2f(Contour2f src, Contour2f dst) {
std::vector<cv::Point2f> src_pts;
for (size_t i = 0; i < src.length; i++) {
@ -544,6 +587,39 @@ Mat GetPerspectiveTransform(Contour src, Contour dst) {
return new cv::Mat(cv::getPerspectiveTransform(src_pts, dst_pts));
}
Mat GetAffineTransform(Contour src, Contour dst) {
std::vector<cv::Point2f> src_pts;
for (size_t i = 0; i < src.length; i++) {
src_pts.push_back(cv::Point2f(src.points[i].x, src.points[i].y));
}
std::vector<cv::Point2f> dst_pts;
for (size_t i = 0; i < dst.length; i++) {
dst_pts.push_back(cv::Point2f(dst.points[i].x, dst.points[i].y));
}
return new cv::Mat(cv::getAffineTransform(src_pts, dst_pts));
}
Mat GetAffineTransform2f(Contour2f src, Contour2f dst) {
std::vector<cv::Point2f> src_pts;
for (size_t i = 0; i < src.length; i++) {
src_pts.push_back(cv::Point2f(src.points[i].x, src.points[i].y));
}
std::vector<cv::Point2f> dst_pts;
for (size_t i = 0; i < dst.length; i++) {
dst_pts.push_back(cv::Point2f(dst.points[i].x, dst.points[i].y));
}
return new cv::Mat(cv::getAffineTransform(src_pts, dst_pts));
}
Mat FindHomography(Mat src, Mat dst, int method, double ransacReprojThreshold, Mat mask, const int maxIters, const double confidence) {
return new cv::Mat(cv::findHomography(*src, *dst, method, ransacReprojThreshold, *mask, maxIters, confidence));
}
void DrawContours(Mat src, Contours contours, int contourIdx, Scalar color, int thickness) {
std::vector<std::vector<cv::Point> > cntrs;
@ -625,3 +701,13 @@ void CLAHE_Apply(CLAHE c, Mat src, Mat dst) {
void InvertAffineTransform(Mat src, Mat dst) {
cv::invertAffineTransform(*src, *dst);
}
Point2f PhaseCorrelate(Mat src1, Mat src2, Mat window, double* response) {
cv::Point2d result = cv::phaseCorrelate(*src1, *src2, *window, response);
Point2f result2f = {
.x = float(result.x),
.y = float(result.y),
};
return result2f;
}

342
vendor/gocv.io/x/gocv/imgproc.go generated vendored

@ -178,22 +178,22 @@ const (
HistCmpCorrel HistCompMethod = 0
// HistCmpChiSqr calculates the Chi-Square
HistCmpChiSqr = 1
HistCmpChiSqr HistCompMethod = 1
// HistCmpIntersect calculates the Intersection
HistCmpIntersect = 2
HistCmpIntersect HistCompMethod = 2
// HistCmpBhattacharya applies the HistCmpBhattacharya by calculating the Bhattacharya distance.
HistCmpBhattacharya = 3
HistCmpBhattacharya HistCompMethod = 3
// HistCmpHellinger applies the HistCmpBhattacharya comparison. It is a synonym to HistCmpBhattacharya.
HistCmpHellinger = HistCmpBhattacharya
// HistCmpChiSqrAlt applies the Alternative Chi-Square (regularly used for texture comparison).
HistCmpChiSqrAlt = 4
HistCmpChiSqrAlt HistCompMethod = 4
// HistCmpKlDiv applies the Kullback-Leibler divergence comparison.
HistCmpKlDiv = 5
HistCmpKlDiv HistCompMethod = 5
)
// CompareHist Compares two histograms.
@ -335,6 +335,20 @@ func Erode(src Mat, dst *Mat, kernel Mat) {
C.Erode(src.p, dst.p, kernel.p)
}
// ErodeWithParams erodes an image by using a specific structuring element.
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gaeb1e0c1033e3f6b891a25d0511362aeb
//
func ErodeWithParams(src Mat, dst *Mat, kernel Mat, anchor image.Point, iterations, borderType int) {
cAnchor := C.struct_Point{
x: C.int(anchor.X),
y: C.int(anchor.Y),
}
C.ErodeWithParams(src.p, dst.p, kernel.p, cAnchor, C.int(iterations), C.int(borderType))
}
// RetrievalMode is the mode of the contour retrieval algorithm.
type RetrievalMode int
@ -345,21 +359,21 @@ const (
// RetrievalList retrieves all of the contours without establishing
// any hierarchical relationships.
RetrievalList = 1
RetrievalList RetrievalMode = 1
// RetrievalCComp retrieves all of the contours and organizes them into
// a two-level hierarchy. At the top level, there are external boundaries
// of the components. At the second level, there are boundaries of the holes.
// If there is another contour inside a hole of a connected component, it
// is still put at the top level.
RetrievalCComp = 2
RetrievalCComp RetrievalMode = 2
// RetrievalTree retrieves all of the contours and reconstructs a full
// hierarchy of nested contours.
RetrievalTree = 3
RetrievalTree RetrievalMode = 3
// RetrievalFloodfill lacks a description in the original header.
RetrievalFloodfill = 4
RetrievalFloodfill RetrievalMode = 4
)
// ContourApproximationMode is the mode of the contour approximation algorithm.
@ -375,15 +389,15 @@ const (
// ChainApproxSimple compresses horizontal, vertical, and diagonal segments
// and leaves only their end points.
// For example, an up-right rectangular contour is encoded with 4 points.
ChainApproxSimple = 2
ChainApproxSimple ContourApproximationMode = 2
// ChainApproxTC89L1 applies one of the flavors of the Teh-Chin chain
// approximation algorithms.
ChainApproxTC89L1 = 3
ChainApproxTC89L1 ContourApproximationMode = 3
// ChainApproxTC89KCOS applies one of the flavors of the Teh-Chin chain
// approximation algorithms.
ChainApproxTC89KCOS = 4
ChainApproxTC89KCOS ContourApproximationMode = 4
)
// BoundingRect calculates the up-right bounding rectangle of a point set.
@ -577,10 +591,10 @@ const (
CCL_WU ConnectedComponentsAlgorithmType = 0
// BBDT algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity.
CCL_DEFAULT = 1
CCL_DEFAULT ConnectedComponentsAlgorithmType = 1
// BBDT algorithm for 8-way connectivity, SAUF algorithm for 4-way connectivity
CCL_GRANA = 2
CCL_GRANA ConnectedComponentsAlgorithmType = 2
)
// ConnectedComponents computes the connected components labeled image of boolean image.
@ -607,21 +621,21 @@ type ConnectedComponentsTypes int
const (
//The leftmost (x) coordinate which is the inclusive start of the bounding box in the horizontal direction.
CC_STAT_LEFT = 0
CC_STAT_LEFT ConnectedComponentsTypes = 0
//The topmost (y) coordinate which is the inclusive start of the bounding box in the vertical direction.
CC_STAT_TOP = 1
CC_STAT_TOP ConnectedComponentsTypes = 1
// The horizontal size of the bounding box.
CC_STAT_WIDTH = 2
CC_STAT_WIDTH ConnectedComponentsTypes = 2
// The vertical size of the bounding box.
CC_STAT_HEIGHT = 3
CC_STAT_HEIGHT ConnectedComponentsTypes = 3
// The total area (in pixels) of the connected component.
CC_STAT_AREA = 4
CC_STAT_AREA ConnectedComponentsTypes = 4
CC_STAT_MAX = 5
CC_STAT_MAX ConnectedComponentsTypes = 5
)
// ConnectedComponentsWithStats computes the connected components labeled image of boolean
@ -654,15 +668,15 @@ const (
// TmSqdiff maps to TM_SQDIFF
TmSqdiff TemplateMatchMode = 0
// TmSqdiffNormed maps to TM_SQDIFF_NORMED
TmSqdiffNormed = 1
TmSqdiffNormed TemplateMatchMode = 1
// TmCcorr maps to TM_CCORR
TmCcorr = 2
TmCcorr TemplateMatchMode = 2
// TmCcorrNormed maps to TM_CCORR_NORMED
TmCcorrNormed = 3
TmCcorrNormed TemplateMatchMode = 3
// TmCcoeff maps to TM_CCOEFF
TmCcoeff = 4
TmCcoeff TemplateMatchMode = 4
// TmCcoeffNormed maps to TM_CCOEFF_NORMED
TmCcoeffNormed = 5
TmCcoeffNormed TemplateMatchMode = 5
)
// MatchTemplate compares a template against overlapped image regions.
@@ -779,10 +793,10 @@ const (
MorphRect MorphShape = 0
// MorphCross is the cross morph shape.
MorphCross = 1
MorphCross MorphShape = 1
// MorphEllipse is the ellipse morph shape.
MorphEllipse = 2
MorphEllipse MorphShape = 2
)
// GetStructuringElement returns a structuring element of the specified size
@@ -808,25 +822,25 @@ const (
MorphErode MorphType = 0
// MorphDilate operation
MorphDilate = 1
MorphDilate MorphType = 1
// MorphOpen operation
MorphOpen = 2
MorphOpen MorphType = 2
// MorphClose operation
MorphClose = 3
MorphClose MorphType = 3
// MorphGradient operation
MorphGradient = 4
MorphGradient MorphType = 4
// MorphTophat operation
MorphTophat = 5
MorphTophat MorphType = 5
// MorphBlackhat operation
MorphBlackhat = 6
MorphBlackhat MorphType = 6
// MorphHitmiss operation
MorphHitmiss = 7
MorphHitmiss MorphType = 7
)
// BorderType type of border.
@@ -837,25 +851,25 @@ const (
BorderConstant BorderType = 0
// BorderReplicate border type
BorderReplicate = 1
BorderReplicate BorderType = 1
// BorderReflect border type
BorderReflect = 2
BorderReflect BorderType = 2
// BorderWrap border type
BorderWrap = 3
BorderWrap BorderType = 3
// BorderReflect101 border type
BorderReflect101 = 4
BorderReflect101 BorderType = 4
// BorderTransparent border type
BorderTransparent = 5
BorderTransparent BorderType = 5
// BorderDefault border type
BorderDefault = BorderReflect101
// BorderIsolated border type
BorderIsolated = 16
BorderIsolated BorderType = 16
)
// GaussianBlur blurs an image Mat using a Gaussian filter.
@@ -875,12 +889,28 @@ func GaussianBlur(src Mat, dst *Mat, ksize image.Point, sigmaX float64,
C.GaussianBlur(src.p, dst.p, pSize, C.double(sigmaX), C.double(sigmaY), C.int(borderType))
}
// GetGaussianKernel returns Gaussian filter coefficients.
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gac05a120c1ae92a6060dd0db190a61afa
func GetGaussianKernel(ksize int, sigma float64) Mat {
return newMat(C.GetGaussianKernel(C.int(ksize), C.double(sigma), C.int(MatTypeCV64F)))
}
// GetGaussianKernelWithParams returns Gaussian filter coefficients.
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gac05a120c1ae92a6060dd0db190a61afa
func GetGaussianKernelWithParams(ksize int, sigma float64, ktype MatType) Mat {
return newMat(C.GetGaussianKernel(C.int(ksize), C.double(sigma), C.int(ktype)))
}
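For reference, the OpenCV documentation linked above defines the coefficients of the returned ksize x 1 kernel as:

```latex
G_i = \alpha \exp\left(-\frac{\left(i - \frac{\text{ksize}-1}{2}\right)^2}{2\sigma^2}\right), \quad i = 0, \dots, \text{ksize}-1
```

where the scale factor alpha is chosen so the coefficients sum to 1, and a non-positive sigma is replaced by the default sigma = 0.3 * ((ksize - 1) * 0.5 - 1) + 0.8.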
// Sobel calculates the first, second, third, or mixed image derivatives using an extended Sobel operator
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gacea54f142e81b6758cb6f375ce782c8d
//
func Sobel(src Mat, dst *Mat, ddepth, dx, dy, ksize int, scale, delta float64, borderType BorderType) {
func Sobel(src Mat, dst *Mat, ddepth MatType, dx, dy, ksize int, scale, delta float64, borderType BorderType) {
C.Sobel(src.p, dst.p, C.int(ddepth), C.int(dx), C.int(dy), C.int(ksize), C.double(scale), C.double(delta), C.int(borderType))
}
@@ -889,7 +919,7 @@ func Sobel(src Mat, dst *Mat, ddepth, dx, dy, ksize int, scale, delta float64, b
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga405d03b20c782b65a4daf54d233239a2
//
func SpatialGradient(src Mat, dx, dy *Mat, ksize int, borderType BorderType) {
func SpatialGradient(src Mat, dx, dy *Mat, ksize MatType, borderType BorderType) {
C.SpatialGradient(src.p, dx.p, dy.p, C.int(ksize), C.int(borderType))
}
@@ -898,7 +928,7 @@ func SpatialGradient(src Mat, dx, dy *Mat, ksize int, borderType BorderType) {
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gad78703e4c8fe703d479c1860d76429e6
//
func Laplacian(src Mat, dst *Mat, dDepth int, size int, scale float64,
func Laplacian(src Mat, dst *Mat, dDepth MatType, size int, scale float64,
delta float64, borderType BorderType) {
C.Laplacian(src.p, dst.p, C.int(dDepth), C.int(size), C.double(scale), C.double(delta), C.int(borderType))
}
@@ -908,7 +938,7 @@ func Laplacian(src Mat, dst *Mat, dDepth int, size int, scale float64,
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#gaa13106761eedf14798f37aa2d60404c9
//
func Scharr(src Mat, dst *Mat, dDepth int, dx int, dy int, scale float64,
func Scharr(src Mat, dst *Mat, dDepth MatType, dx int, dy int, scale float64,
delta float64, borderType BorderType) {
C.Scharr(src.p, dst.p, C.int(dDepth), C.int(dx), C.int(dy), C.double(scale), C.double(delta), C.int(borderType))
}
@@ -978,12 +1008,12 @@ const (
// GCInitWithMask makes the function initialize the state using the provided mask.
// GCInitWithMask and GCInitWithRect can be combined.
// Then all the pixels outside of the ROI are automatically initialized with GC_BGD.
GCInitWithMask = 1
GCInitWithMask GrabCutMode = 1
// GCEval means that the algorithm should just resume.
GCEval = 2
GCEval GrabCutMode = 2
// GCEvalFreezeModel means that the algorithm should just run a single iteration of the GrabCut algorithm
// with the fixed model
GCEvalFreezeModel = 3
GCEvalFreezeModel GrabCutMode = 3
)
// Grabcut runs the GrabCut algorithm.
@@ -1010,15 +1040,15 @@ const (
HoughStandard HoughMode = 0
// HoughProbabilistic is the probabilistic Hough transform (more efficient
// in case if the picture contains a few long linear segments).
HoughProbabilistic = 1
HoughProbabilistic HoughMode = 1
// HoughMultiScale is the multi-scale variant of the classical Hough
// transform.
HoughMultiScale = 2
HoughMultiScale HoughMode = 2
// HoughGradient is basically 21HT, described in: HK Yuen, John Princen,
// John Illingworth, and Josef Kittler. Comparative study of hough
// transform methods for circle finding. Image and Vision Computing,
// 8(1):71-77, 1990.
HoughGradient = 3
HoughGradient HoughMode = 3
)
// HoughCircles finds circles in a grayscale image using the Hough transform.
@@ -1098,25 +1128,25 @@ const (
ThresholdBinary ThresholdType = 0
// ThresholdBinaryInv threshold type
ThresholdBinaryInv = 1
ThresholdBinaryInv ThresholdType = 1
// ThresholdTrunc threshold type
ThresholdTrunc = 2
ThresholdTrunc ThresholdType = 2
// ThresholdToZero threshold type
ThresholdToZero = 3
ThresholdToZero ThresholdType = 3
// ThresholdToZeroInv threshold type
ThresholdToZeroInv = 4
ThresholdToZeroInv ThresholdType = 4
// ThresholdMask threshold type
ThresholdMask = 7
ThresholdMask ThresholdType = 7
// ThresholdOtsu threshold type
ThresholdOtsu = 8
ThresholdOtsu ThresholdType = 8
// ThresholdTriangle threshold type
ThresholdTriangle = 16
ThresholdTriangle ThresholdType = 16
)
// Threshold applies a fixed-level threshold to each array element.
@@ -1124,8 +1154,8 @@ const (
// For further details, please see:
// https://docs.opencv.org/3.3.0/d7/d1b/group__imgproc__misc.html#gae8a4a146d1ca78c626a53577199e9c57
//
func Threshold(src Mat, dst *Mat, thresh float32, maxvalue float32, typ ThresholdType) {
C.Threshold(src.p, dst.p, C.double(thresh), C.double(maxvalue), C.int(typ))
func Threshold(src Mat, dst *Mat, thresh float32, maxvalue float32, typ ThresholdType) (threshold float32) {
return float32(C.Threshold(src.p, dst.p, C.double(thresh), C.double(maxvalue), C.int(typ)))
}
// AdaptiveThresholdType type of adaptive threshold operation.
@@ -1136,7 +1166,7 @@ const (
AdaptiveThresholdMean AdaptiveThresholdType = 0
// AdaptiveThresholdGaussian threshold type
AdaptiveThresholdGaussian = 1
AdaptiveThresholdGaussian AdaptiveThresholdType = 1
)
// AdaptiveThreshold applies a fixed-level threshold to each array element.
@@ -1312,6 +1342,47 @@ func FillPoly(img *Mat, pts [][]image.Point, c color.RGBA) {
C.FillPoly(img.p, cPoints, sColor)
}
// Polylines draws several polygonal curves.
//
// For more information, see:
// https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga1ea127ffbbb7e0bfc4fd6fd2eb64263c
func Polylines(img *Mat, pts [][]image.Point, isClosed bool, c color.RGBA, thickness int) {
points := make([]C.struct_Points, len(pts))
for i, pt := range pts {
p := (*C.struct_Point)(C.malloc(C.size_t(C.sizeof_struct_Point * len(pt))))
defer C.free(unsafe.Pointer(p))
pa := getPoints(p, len(pt))
for j, point := range pt {
pa[j] = C.struct_Point{
x: C.int(point.X),
y: C.int(point.Y),
}
}
points[i] = C.struct_Points{
points: (*C.Point)(p),
length: C.int(len(pt)),
}
}
cPoints := C.struct_Contours{
contours: (*C.struct_Points)(&points[0]),
length: C.int(len(pts)),
}
sColor := C.struct_Scalar{
val1: C.double(c.B),
val2: C.double(c.G),
val3: C.double(c.R),
val4: C.double(c.A),
}
C.Polylines(img.p, cPoints, C.bool(isClosed), sColor, C.int(thickness))
}
// HersheyFont are the font libraries included in OpenCV.
// Only a subset of the available Hershey fonts are supported by OpenCV.
//
@@ -1324,23 +1395,23 @@ const (
// FontHersheySimplex is normal size sans-serif font.
FontHersheySimplex HersheyFont = 0
// FontHersheyPlain is a small size sans-serif font.
FontHersheyPlain = 1
FontHersheyPlain HersheyFont = 1
// FontHersheyDuplex normal size sans-serif font
// (more complex than FontHersheySIMPLEX).
FontHersheyDuplex = 2
FontHersheyDuplex HersheyFont = 2
// FontHersheyComplex is a normal size serif font.
FontHersheyComplex = 3
FontHersheyComplex HersheyFont = 3
// FontHersheyTriplex is a normal size serif font
// (more complex than FontHersheyCOMPLEX).
FontHersheyTriplex = 4
FontHersheyTriplex HersheyFont = 4
// FontHersheyComplexSmall is a smaller version of FontHersheyCOMPLEX.
FontHersheyComplexSmall = 5
FontHersheyComplexSmall HersheyFont = 5
// FontHersheyScriptSimplex is a hand-writing style font.
FontHersheyScriptSimplex = 6
FontHersheyScriptSimplex HersheyFont = 6
// FontHersheyScriptComplex is a more complex variant of FontHersheyScriptSimplex.
FontHersheyScriptComplex = 7
FontHersheyScriptComplex HersheyFont = 7
// FontItalic is the flag for italic font.
FontItalic = 16
FontItalic HersheyFont = 16
)
// LineType are the line libraries included in OpenCV.
@@ -1354,11 +1425,11 @@ const (
// Filled line
Filled LineType = -1
// Line4 4-connected line
Line4 = 4
Line4 LineType = 4
// Line8 8-connected line
Line8 = 8
Line8 LineType = 8
// LineAA antialiased line
LineAA = 16
LineAA LineType = 16
)
// GetTextSize calculates the width and height of a text string.
@@ -1441,23 +1512,23 @@ const (
InterpolationNearestNeighbor InterpolationFlags = 0
// InterpolationLinear is bilinear interpolation.
InterpolationLinear = 1
InterpolationLinear InterpolationFlags = 1
// InterpolationCubic is bicubic interpolation.
InterpolationCubic = 2
InterpolationCubic InterpolationFlags = 2
// InterpolationArea uses pixel area relation. It is preferred for image
// decimation as it gives moire-free results.
InterpolationArea = 3
InterpolationArea InterpolationFlags = 3
// InterpolationLanczos4 is Lanczos interpolation over 8x8 neighborhood.
InterpolationLanczos4 = 4
InterpolationLanczos4 InterpolationFlags = 4
// InterpolationDefault is an alias for InterpolationLinear.
InterpolationDefault = InterpolationLinear
// InterpolationMax indicates use maximum interpolation.
InterpolationMax = 7
InterpolationMax InterpolationFlags = 7
)
// Resize resizes an image.
@@ -1571,18 +1642,18 @@ type ColormapTypes int
// https://docs.opencv.org/master/d3/d50/group__imgproc__colormap.html#ga9a805d8262bcbe273f16be9ea2055a65
const (
ColormapAutumn ColormapTypes = 0
ColormapBone = 1
ColormapJet = 2
ColormapWinter = 3
ColormapRainbow = 4
ColormapOcean = 5
ColormapSummer = 6
ColormapSpring = 7
ColormapCool = 8
ColormapHsv = 9
ColormapPink = 10
ColormapHot = 11
ColormapParula = 12
ColormapBone ColormapTypes = 1
ColormapJet ColormapTypes = 2
ColormapWinter ColormapTypes = 3
ColormapRainbow ColormapTypes = 4
ColormapOcean ColormapTypes = 5
ColormapSummer ColormapTypes = 6
ColormapSpring ColormapTypes = 7
ColormapCool ColormapTypes = 8
ColormapHsv ColormapTypes = 9
ColormapPink ColormapTypes = 10
ColormapHot ColormapTypes = 11
ColormapParula ColormapTypes = 12
)
// ApplyColorMap applies a GNU Octave/MATLAB equivalent colormap on a given image.
@@ -1602,7 +1673,7 @@ func ApplyCustomColorMap(src Mat, dst *Mat, customColormap Mat) {
}
// GetPerspectiveTransform returns 3x3 perspective transformation for the
// corresponding 4 point pairs.
// corresponding 4 point pairs as image.Point.
//
// For further details, please see:
// https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga8c1ae0e3589a9d77fffc962c49b22043
@@ -1612,16 +1683,66 @@ func GetPerspectiveTransform(src, dst []image.Point) Mat {
return newMat(C.GetPerspectiveTransform(srcPoints, dstPoints))
}
// GetPerspectiveTransform2f returns 3x3 perspective transformation for the
// corresponding 4 point pairs as gocv.Point2f.
//
// For further details, please see:
// https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga8c1ae0e3589a9d77fffc962c49b22043
func GetPerspectiveTransform2f(src, dst []Point2f) Mat {
srcPoints := toCPoints2f(src)
dstPoints := toCPoints2f(dst)
return newMat(C.GetPerspectiveTransform2f(srcPoints, dstPoints))
}
// GetAffineTransform returns a 2x3 affine transformation matrix for the
// corresponding 3 point pairs as image.Point.
//
// For further details, please see:
// https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga8f6d378f9f8eebb5cb55cd3ae295a999
func GetAffineTransform(src, dst []image.Point) Mat {
srcPoints := toCPoints(src)
dstPoints := toCPoints(dst)
return newMat(C.GetAffineTransform(srcPoints, dstPoints))
}
// GetAffineTransform2f returns a 2x3 affine transformation matrix for the
// corresponding 3 point pairs as gocv.Point2f.
//
// For further details, please see:
// https://docs.opencv.org/master/da/d54/group__imgproc__transform.html#ga8f6d378f9f8eebb5cb55cd3ae295a999
func GetAffineTransform2f(src, dst []Point2f) Mat {
srcPoints := toCPoints2f(src)
dstPoints := toCPoints2f(dst)
return newMat(C.GetAffineTransform2f(srcPoints, dstPoints))
}
type HomographyMethod int
const (
HomograpyMethodAllPoints HomographyMethod = 0
HomograpyMethodLMEDS HomographyMethod = 4
HomograpyMethodRANSAC HomographyMethod = 8
)
// FindHomography finds an optimal homography matrix using 4 or more point pairs (as opposed to GetPerspectiveTransform, which uses exactly 4)
//
// For further details, please see:
// https://docs.opencv.org/master/d9/d0c/group__calib3d.html#ga4abc2ece9fab9398f2e560d53c8c9780
//
func FindHomography(srcPoints Mat, dstPoints *Mat, method HomographyMethod, ransacReprojThreshold float64, mask *Mat, maxIters int, confidence float64) Mat {
return newMat(C.FindHomography(srcPoints.Ptr(), dstPoints.Ptr(), C.int(method), C.double(ransacReprojThreshold), mask.Ptr(), C.int(maxIters), C.double(confidence)))
}
// DrawContours draws contours outlines or filled contours.
//
// For further details, please see:
// https://docs.opencv.org/3.3.1/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc
// https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#ga746c0625f1781f1ffc9056259103edbc
//
func DrawContours(img *Mat, contours [][]image.Point, contourIdx int, c color.RGBA, thickness int) {
cntrs := make([]C.struct_Points, len(contours))
for i, contour := range contours {
p := (*C.struct_Point)(C.malloc(C.size_t(C.sizeof_struct_Point * len(contour))))
defer C.free(unsafe.Pointer(p))
pa := getPoints(p, len(contour))
@@ -1651,6 +1772,12 @@ func DrawContours(img *Mat, contours [][]image.Point, contourIdx int, c color.RG
}
C.DrawContours(img.p, cContours, C.int(contourIdx), sColor, C.int(thickness))
// now free the contour points
for i := 0; i < len(contours); i++ {
C.free(unsafe.Pointer(cntrs[i].points))
}
}
// Remap applies a generic geometrical transformation to an image.
@@ -1671,7 +1798,7 @@ func Remap(src Mat, dst, map1, map2 *Mat, interpolation InterpolationFlags, bord
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga27c049795ce870216ddfb366086b5a04
func Filter2D(src Mat, dst *Mat, ddepth int, kernel Mat, anchor image.Point, delta float64, borderType BorderType) {
func Filter2D(src Mat, dst *Mat, ddepth MatType, kernel Mat, anchor image.Point, delta float64, borderType BorderType) {
anchorP := C.struct_Point{
x: C.int(anchor.X),
y: C.int(anchor.Y),
@@ -1683,7 +1810,7 @@ func Filter2D(src Mat, dst *Mat, ddepth int, kernel Mat, anchor image.Point, del
//
// For further details, please see:
// https://docs.opencv.org/master/d4/d86/group__imgproc__filter.html#ga910e29ff7d7b105057d1625a4bf6318d
func SepFilter2D(src Mat, dst *Mat, ddepth int, kernelX, kernelY Mat, anchor image.Point, delta float64, borderType BorderType) {
func SepFilter2D(src Mat, dst *Mat, ddepth MatType, kernelX, kernelY Mat, anchor image.Point, delta float64, borderType BorderType) {
anchorP := C.struct_Point{
x: C.int(anchor.X),
y: C.int(anchor.Y),
@@ -1723,13 +1850,13 @@ type DistanceTypes int
const (
DistUser DistanceTypes = 0
DistL1 = 1
DistL2 = 2
DistC = 3
DistL12 = 4
DistFair = 5
DistWelsch = 6
DistHuber = 7
DistL1 DistanceTypes = 1
DistL2 DistanceTypes = 2
DistC DistanceTypes = 3
DistL12 DistanceTypes = 4
DistFair DistanceTypes = 5
DistWelsch DistanceTypes = 6
DistHuber DistanceTypes = 7
)
// FitLine fits a line to a 2D or 3D point set.
@@ -1788,3 +1915,18 @@ func (c *CLAHE) Apply(src Mat, dst *Mat) {
func InvertAffineTransform(src Mat, dst *Mat) {
C.InvertAffineTransform(src.p, dst.p)
}
// Apply phaseCorrelate.
//
// For further details, please see:
// https://docs.opencv.org/master/d7/df3/group__imgproc__motion.html#ga552420a2ace9ef3fb053cd630fdb4952
//
func PhaseCorrelate(src1, src2, window Mat) (phaseShift Point2f, response float64) {
var responseDouble C.double
result := C.PhaseCorrelate(src1.p, src2.p, window.p, &responseDouble)
return Point2f{
X: float32(result.x),
Y: float32(result.y),
}, float64(responseDouble)
}

10
vendor/gocv.io/x/gocv/imgproc.h generated vendored
View File

@@ -32,6 +32,7 @@ void SqBoxFilter(Mat src, Mat dst, int ddepth, Size ps);
void Dilate(Mat src, Mat dst, Mat kernel);
void DistanceTransform(Mat src, Mat dst, Mat labels, int distanceType, int maskSize, int labelType);
void Erode(Mat src, Mat dst, Mat kernel);
void ErodeWithParams(Mat src, Mat dst, Mat kernel, Point anchor, int iterations, int borderType);
void MatchTemplate(Mat image, Mat templ, Mat result, int method, Mat mask);
struct Moment Moments(Mat src, bool binaryImage);
void PyrDown(Mat src, Mat dst, Size dstsize, int borderType);
@@ -47,6 +48,7 @@ int ConnectedComponents(Mat src, Mat dst, int connectivity, int ltype, int cclty
int ConnectedComponentsWithStats(Mat src, Mat labels, Mat stats, Mat centroids, int connectivity, int ltype, int ccltype);
void GaussianBlur(Mat src, Mat dst, Size ps, double sX, double sY, int bt);
Mat GetGaussianKernel(int ksize, double sigma, int ktype);
void Laplacian(Mat src, Mat dst, int dDepth, int kSize, double scale, double delta, int borderType);
void Scharr(Mat src, Mat dst, int dDepth, int dx, int dy, double scale, double delta,
int borderType);
@@ -70,7 +72,7 @@ void HoughLinesPointSet(Mat points, Mat lines, int lines_max, int threshold,
double min_rho, double max_rho, double rho_step,
double min_theta, double max_theta, double theta_step);
void Integral(Mat src, Mat sum, Mat sqsum, Mat tilted);
void Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ);
double Threshold(Mat src, Mat dst, double thresh, double maxvalue, int typ);
void AdaptiveThreshold(Mat src, Mat dst, double maxValue, int adaptiveTyp, int typ, int blockSize,
double c);
@@ -81,6 +83,7 @@ void Ellipse(Mat img, Point center, Point axes, double angle, double
void Line(Mat img, Point pt1, Point pt2, Scalar color, int thickness);
void Rectangle(Mat img, Rect rect, Scalar color, int thickness);
void FillPoly(Mat img, Contours points, Scalar color);
void Polylines(Mat img, Contours points, bool isClosed, Scalar color, int thickness);
struct Size GetTextSize(const char* text, int fontFace, double fontScale, int thickness);
void PutText(Mat img, const char* text, Point org, int fontFace, double fontScale,
Scalar color, int thickness);
@@ -97,6 +100,10 @@ void Watershed(Mat image, Mat markers);
void ApplyColorMap(Mat src, Mat dst, int colormap);
void ApplyCustomColorMap(Mat src, Mat dst, Mat colormap);
Mat GetPerspectiveTransform(Contour src, Contour dst);
Mat GetPerspectiveTransform2f(Contour2f src, Contour2f dst);
Mat GetAffineTransform(Contour src, Contour dst);
Mat GetAffineTransform2f(Contour2f src, Contour2f dst);
Mat FindHomography(Mat src, Mat dst, int method, double ransacReprojThreshold, Mat mask, const int maxIters, const double confidence) ;
void DrawContours(Mat src, Contours contours, int contourIdx, Scalar color, int thickness);
void Sobel(Mat src, Mat dst, int ddepth, int dx, int dy, int ksize, double scale, double delta, int borderType);
void SpatialGradient(Mat src, Mat dx, Mat dy, int ksize, int borderType);
@@ -112,6 +119,7 @@ CLAHE CLAHE_CreateWithParams(double clipLimit, Size tileGridSize);
void CLAHE_Close(CLAHE c);
void CLAHE_Apply(CLAHE c, Mat src, Mat dst);
void InvertAffineTransform(Mat src, Mat dst);
Point2f PhaseCorrelate(Mat src1, Mat src2, Mat window, double* response);
#ifdef __cplusplus
}

View File

@@ -12,340 +12,340 @@ const (
ColorBGRToBGRA ColorConversionCode = 0
// ColorBGRAToBGR removes alpha channel from BGR image.
ColorBGRAToBGR = 1
ColorBGRAToBGR ColorConversionCode = 1
// ColorBGRToRGBA converts from BGR to RGB with alpha channel.
ColorBGRToRGBA = 2
ColorBGRToRGBA ColorConversionCode = 2
// ColorRGBAToBGR converts from RGB with alpha to BGR color space.
ColorRGBAToBGR = 3
ColorRGBAToBGR ColorConversionCode = 3
// ColorBGRToRGB converts from BGR to RGB without alpha channel.
ColorBGRToRGB = 4
ColorBGRToRGB ColorConversionCode = 4
// ColorBGRAToRGBA converts from BGR with alpha channel
// to RGB with alpha channel.
ColorBGRAToRGBA = 5
ColorBGRAToRGBA ColorConversionCode = 5
// ColorBGRToGray converts from BGR to grayscale.
ColorBGRToGray = 6
ColorBGRToGray ColorConversionCode = 6
// ColorRGBToGray converts from RGB to grayscale.
ColorRGBToGray = 7
ColorRGBToGray ColorConversionCode = 7
// ColorGrayToBGR converts from grayscale to BGR.
ColorGrayToBGR = 8
ColorGrayToBGR ColorConversionCode = 8
// ColorGrayToBGRA converts from grayscale to BGR with alpha channel.
ColorGrayToBGRA = 9
ColorGrayToBGRA ColorConversionCode = 9
// ColorBGRAToGray converts from BGR with alpha channel to grayscale.
ColorBGRAToGray = 10
ColorBGRAToGray ColorConversionCode = 10
// ColorRGBAToGray converts from RGB with alpha channel to grayscale.
ColorRGBAToGray = 11
ColorRGBAToGray ColorConversionCode = 11
// ColorBGRToBGR565 converts from BGR to BGR565 (16-bit images).
ColorBGRToBGR565 = 12
ColorBGRToBGR565 ColorConversionCode = 12
// ColorRGBToBGR565 converts from RGB to BGR565 (16-bit images).
ColorRGBToBGR565 = 13
ColorRGBToBGR565 ColorConversionCode = 13
// ColorBGR565ToBGR converts from BGR565 (16-bit images) to BGR.
ColorBGR565ToBGR = 14
ColorBGR565ToBGR ColorConversionCode = 14
// ColorBGR565ToRGB converts from BGR565 (16-bit images) to RGB.
ColorBGR565ToRGB = 15
ColorBGR565ToRGB ColorConversionCode = 15
// ColorBGRAToBGR565 converts from BGRA (with alpha channel)
// to BGR565 (16-bit images).
ColorBGRAToBGR565 = 16
ColorBGRAToBGR565 ColorConversionCode = 16
// ColorRGBAToBGR565 converts from RGBA (with alpha channel)
// to BGR565 (16-bit images).
ColorRGBAToBGR565 = 17
ColorRGBAToBGR565 ColorConversionCode = 17
// ColorBGR565ToBGRA converts from BGR565 (16-bit images)
// to BGRA (with alpha channel).
ColorBGR565ToBGRA = 18
ColorBGR565ToBGRA ColorConversionCode = 18
// ColorBGR565ToRGBA converts from BGR565 (16-bit images)
// to RGBA (with alpha channel).
ColorBGR565ToRGBA = 19
ColorBGR565ToRGBA ColorConversionCode = 19
// ColorGrayToBGR565 converts from grayscale
// to BGR565 (16-bit images).
ColorGrayToBGR565 = 20
ColorGrayToBGR565 ColorConversionCode = 20
// ColorBGR565ToGray converts from BGR565 (16-bit images)
// to grayscale.
ColorBGR565ToGray = 21
ColorBGR565ToGray ColorConversionCode = 21
// ColorBGRToBGR555 converts from BGR to BGR555 (16-bit images).
ColorBGRToBGR555 = 22
ColorBGRToBGR555 ColorConversionCode = 22
// ColorRGBToBGR555 converts from RGB to BGR555 (16-bit images).
ColorRGBToBGR555 = 23
ColorRGBToBGR555 ColorConversionCode = 23
// ColorBGR555ToBGR converts from BGR555 (16-bit images) to BGR.
ColorBGR555ToBGR = 24
ColorBGR555ToBGR ColorConversionCode = 24
// ColorBGR555ToRGB converts from BGR555 (16-bit images) to RGB.
ColorBGR555ToRGB = 25
ColorBGR555ToRGB ColorConversionCode = 25
// ColorBGRAToBGR555 converts from BGRA (with alpha channel)
// to BGR555 (16-bit images).
ColorBGRAToBGR555 = 26
ColorBGRAToBGR555 ColorConversionCode = 26
// ColorRGBAToBGR555 converts from RGBA (with alpha channel)
// to BGR555 (16-bit images).
ColorRGBAToBGR555 = 27
ColorRGBAToBGR555 ColorConversionCode = 27
// ColorBGR555ToBGRA converts from BGR555 (16-bit images)
// to BGRA (with alpha channel).
ColorBGR555ToBGRA = 28
ColorBGR555ToBGRA ColorConversionCode = 28
// ColorBGR555ToRGBA converts from BGR555 (16-bit images)
// to RGBA (with alpha channel).
ColorBGR555ToRGBA = 29
ColorBGR555ToRGBA ColorConversionCode = 29
// ColorGrayToBGR555 converts from grayscale to BGR555 (16-bit images).
ColorGrayToBGR555 = 30
ColorGrayToBGR555 ColorConversionCode = 30
// ColorBGR555ToGRAY converts from BGR555 (16-bit images) to grayscale.
ColorBGR555ToGRAY = 31
ColorBGR555ToGRAY ColorConversionCode = 31
// ColorBGRToXYZ converts from BGR to CIE XYZ.
ColorBGRToXYZ = 32
ColorBGRToXYZ ColorConversionCode = 32
// ColorRGBToXYZ converts from RGB to CIE XYZ.
ColorRGBToXYZ = 33
ColorRGBToXYZ ColorConversionCode = 33
// ColorXYZToBGR converts from CIE XYZ to BGR.
ColorXYZToBGR = 34
ColorXYZToBGR ColorConversionCode = 34
// ColorXYZToRGB converts from CIE XYZ to RGB.
ColorXYZToRGB = 35
ColorXYZToRGB ColorConversionCode = 35
// ColorBGRToYCrCb converts from BGR to luma-chroma (aka YCC).
ColorBGRToYCrCb = 36
ColorBGRToYCrCb ColorConversionCode = 36
// ColorRGBToYCrCb converts from RGB to luma-chroma (aka YCC).
ColorRGBToYCrCb = 37
ColorRGBToYCrCb ColorConversionCode = 37
// ColorYCrCbToBGR converts from luma-chroma (aka YCC) to BGR.
ColorYCrCbToBGR = 38
ColorYCrCbToBGR ColorConversionCode = 38
// ColorYCrCbToRGB converts from luma-chroma (aka YCC) to RGB.
ColorYCrCbToRGB = 39
ColorYCrCbToRGB ColorConversionCode = 39
// ColorBGRToHSV converts from BGR to HSV (hue saturation value).
ColorBGRToHSV = 40
ColorBGRToHSV ColorConversionCode = 40
// ColorRGBToHSV converts from RGB to HSV (hue saturation value).
ColorRGBToHSV = 41
ColorRGBToHSV ColorConversionCode = 41
// ColorBGRToLab converts from BGR to CIE Lab.
ColorBGRToLab = 44
ColorBGRToLab ColorConversionCode = 44
// ColorRGBToLab converts from RGB to CIE Lab.
ColorRGBToLab = 45
ColorRGBToLab ColorConversionCode = 45
// ColorBGRToLuv converts from BGR to CIE Luv.
ColorBGRToLuv = 50
ColorBGRToLuv ColorConversionCode = 50
// ColorRGBToLuv converts from RGB to CIE Luv.
ColorRGBToLuv = 51
ColorRGBToLuv ColorConversionCode = 51
// ColorBGRToHLS converts from BGR to HLS (hue lightness saturation).
ColorBGRToHLS = 52
ColorBGRToHLS ColorConversionCode = 52
// ColorRGBToHLS converts from RGB to HLS (hue lightness saturation).
ColorRGBToHLS = 53
ColorRGBToHLS ColorConversionCode = 53
// ColorHSVToBGR converts from HSV (hue saturation value) to BGR.
ColorHSVToBGR = 54
ColorHSVToBGR ColorConversionCode = 54
// ColorHSVToRGB converts from HSV (hue saturation value) to RGB.
ColorHSVToRGB = 55
ColorHSVToRGB ColorConversionCode = 55
// ColorLabToBGR converts from CIE Lab to BGR.
ColorLabToBGR = 56
ColorLabToBGR ColorConversionCode = 56
// ColorLabToRGB converts from CIE Lab to RGB.
ColorLabToRGB = 57
ColorLabToRGB ColorConversionCode = 57
// ColorLuvToBGR converts from CIE Luv to BGR.
ColorLuvToBGR = 58
ColorLuvToBGR ColorConversionCode = 58
// ColorLuvToRGB converts from CIE Luv to RGB.
ColorLuvToRGB = 59
ColorLuvToRGB ColorConversionCode = 59
// ColorHLSToBGR converts from HLS (hue lightness saturation) to BGR.
ColorHLSToBGR = 60
ColorHLSToBGR ColorConversionCode = 60
// ColorHLSToRGB converts from HLS (hue lightness saturation) to RGB.
ColorHLSToRGB = 61
ColorHLSToRGB ColorConversionCode = 61
// ColorBGRToHSVFull converts from BGR to HSV (hue saturation value) full.
ColorBGRToHSVFull = 66
ColorBGRToHSVFull ColorConversionCode = 66
// ColorRGBToHSVFull converts from RGB to HSV (hue saturation value) full.
ColorRGBToHSVFull = 67
ColorRGBToHSVFull ColorConversionCode = 67
// ColorBGRToHLSFull converts from BGR to HLS (hue lightness saturation) full.
ColorBGRToHLSFull = 68
ColorBGRToHLSFull ColorConversionCode = 68
// ColorRGBToHLSFull converts from RGB to HLS (hue lightness saturation) full.
ColorRGBToHLSFull = 69
ColorRGBToHLSFull ColorConversionCode = 69
// ColorHSVToBGRFull converts from HSV (hue saturation value) to BGR full.
ColorHSVToBGRFull = 70
ColorHSVToBGRFull ColorConversionCode = 70
// ColorHSVToRGBFull converts from HSV (hue saturation value) to RGB full.
ColorHSVToRGBFull = 71
ColorHSVToRGBFull ColorConversionCode = 71
// ColorHLSToBGRFull converts from HLS (hue lightness saturation) to BGR full.
ColorHLSToBGRFull = 72
ColorHLSToBGRFull ColorConversionCode = 72
// ColorHLSToRGBFull converts from HLS (hue lightness saturation) to RGB full.
ColorHLSToRGBFull = 73
ColorHLSToRGBFull ColorConversionCode = 73
// ColorLBGRToLab converts from LBGR to CIE Lab.
ColorLBGRToLab = 74
ColorLBGRToLab ColorConversionCode = 74
// ColorLRGBToLab converts from LRGB to CIE Lab.
ColorLRGBToLab = 75
ColorLRGBToLab ColorConversionCode = 75
// ColorLBGRToLuv converts from LBGR to CIE Luv.
ColorLBGRToLuv = 76
ColorLBGRToLuv ColorConversionCode = 76
// ColorLRGBToLuv converts from LRGB to CIE Luv.
ColorLRGBToLuv = 77
ColorLRGBToLuv ColorConversionCode = 77
// ColorLabToLBGR converts from CIE Lab to LBGR.
ColorLabToLBGR = 78
ColorLabToLBGR ColorConversionCode = 78
// ColorLabToLRGB converts from CIE Lab to LRGB.
ColorLabToLRGB = 79
ColorLabToLRGB ColorConversionCode = 79
// ColorLuvToLBGR converts from CIE Luv to LBGR.
ColorLuvToLBGR = 80
ColorLuvToLBGR ColorConversionCode = 80
// ColorLuvToLRGB converts from CIE Luv to LRGB.
ColorLuvToLRGB = 81
ColorLuvToLRGB ColorConversionCode = 81
// ColorBGRToYUV converts from BGR to YUV.
ColorBGRToYUV = 82
ColorBGRToYUV ColorConversionCode = 82
// ColorRGBToYUV converts from RGB to YUV.
ColorRGBToYUV = 83
ColorRGBToYUV ColorConversionCode = 83
// ColorYUVToBGR converts from YUV to BGR.
ColorYUVToBGR = 84
ColorYUVToBGR ColorConversionCode = 84
// ColorYUVToRGB converts from YUV to RGB.
ColorYUVToRGB = 85
ColorYUVToRGB ColorConversionCode = 85
// ColorYUVToRGBNV12 converts from YUV 4:2:0 to RGB NV12.
ColorYUVToRGBNV12 = 90
ColorYUVToRGBNV12 ColorConversionCode = 90
// ColorYUVToBGRNV12 converts from YUV 4:2:0 to BGR NV12.
ColorYUVToBGRNV12 = 91
ColorYUVToBGRNV12 ColorConversionCode = 91
// ColorYUVToRGBNV21 converts from YUV 4:2:0 to RGB NV21.
ColorYUVToRGBNV21 = 92
ColorYUVToRGBNV21 ColorConversionCode = 92
// ColorYUVToBGRNV21 converts from YUV 4:2:0 to BGR NV21.
ColorYUVToBGRNV21 = 93
ColorYUVToBGRNV21 ColorConversionCode = 93
// ColorYUVToRGBANV12 converts from YUV 4:2:0 to RGBA NV12.
ColorYUVToRGBANV12 = 94
ColorYUVToRGBANV12 ColorConversionCode = 94
// ColorYUVToBGRANV12 converts from YUV 4:2:0 to BGRA NV12.
ColorYUVToBGRANV12 = 95
ColorYUVToBGRANV12 ColorConversionCode = 95
// ColorYUVToRGBANV21 converts from YUV 4:2:0 to RGBA NV21.
ColorYUVToRGBANV21 = 96
ColorYUVToRGBANV21 ColorConversionCode = 96
// ColorYUVToBGRANV21 converts from YUV 4:2:0 to BGRA NV21.
ColorYUVToBGRANV21 = 97
ColorYUVToBGRANV21 ColorConversionCode = 97
ColorYUVToRGBYV12 = 98
ColorYUVToBGRYV12 = 99
ColorYUVToRGBIYUV = 100
ColorYUVToBGRIYUV = 101
ColorYUVToRGBYV12 ColorConversionCode = 98
ColorYUVToBGRYV12 ColorConversionCode = 99
ColorYUVToRGBIYUV ColorConversionCode = 100
ColorYUVToBGRIYUV ColorConversionCode = 101
ColorYUVToRGBAYV12 = 102
ColorYUVToBGRAYV12 = 103
ColorYUVToRGBAIYUV = 104
ColorYUVToBGRAIYUV = 105
ColorYUVToRGBAYV12 ColorConversionCode = 102
ColorYUVToBGRAYV12 ColorConversionCode = 103
ColorYUVToRGBAIYUV ColorConversionCode = 104
ColorYUVToBGRAIYUV ColorConversionCode = 105
ColorYUVToGRAY420 = 106
ColorYUVToGRAY420 ColorConversionCode = 106
// YUV 4:2:2 family to RGB
ColorYUVToRGBUYVY = 107
ColorYUVToBGRUYVY = 108
ColorYUVToRGBUYVY ColorConversionCode = 107
ColorYUVToBGRUYVY ColorConversionCode = 108
ColorYUVToRGBAUYVY = 111
ColorYUVToBGRAUYVY = 112
ColorYUVToRGBAUYVY ColorConversionCode = 111
ColorYUVToBGRAUYVY ColorConversionCode = 112
ColorYUVToRGBYUY2 = 115
ColorYUVToBGRYUY2 = 116
ColorYUVToRGBYVYU = 117
ColorYUVToBGRYVYU = 118
ColorYUVToRGBYUY2 ColorConversionCode = 115
ColorYUVToBGRYUY2 ColorConversionCode = 116
ColorYUVToRGBYVYU ColorConversionCode = 117
ColorYUVToBGRYVYU ColorConversionCode = 118
ColorYUVToRGBAYUY2 = 119
ColorYUVToBGRAYUY2 = 120
ColorYUVToRGBAYVYU = 121
ColorYUVToBGRAYVYU = 122
ColorYUVToRGBAYUY2 ColorConversionCode = 119
ColorYUVToBGRAYUY2 ColorConversionCode = 120
ColorYUVToRGBAYVYU ColorConversionCode = 121
ColorYUVToBGRAYVYU ColorConversionCode = 122
ColorYUVToGRAYUYVY = 123
ColorYUVToGRAYYUY2 = 124
ColorYUVToGRAYUYVY ColorConversionCode = 123
ColorYUVToGRAYYUY2 ColorConversionCode = 124
// alpha premultiplication
ColorRGBATomRGBA = 125
ColormRGBAToRGBA = 126
ColorRGBATomRGBA ColorConversionCode = 125
ColormRGBAToRGBA ColorConversionCode = 126
// RGB to YUV 4:2:0 family
ColorRGBToYUVI420 = 127
ColorBGRToYUVI420 = 128
ColorRGBToYUVI420 ColorConversionCode = 127
ColorBGRToYUVI420 ColorConversionCode = 128
ColorRGBAToYUVI420 = 129
ColorBGRAToYUVI420 = 130
ColorRGBToYUVYV12 = 131
ColorBGRToYUVYV12 = 132
ColorRGBAToYUVYV12 = 133
ColorBGRAToYUVYV12 = 134
ColorRGBAToYUVI420 ColorConversionCode = 129
ColorBGRAToYUVI420 ColorConversionCode = 130
ColorRGBToYUVYV12 ColorConversionCode = 131
ColorBGRToYUVYV12 ColorConversionCode = 132
ColorRGBAToYUVYV12 ColorConversionCode = 133
ColorBGRAToYUVYV12 ColorConversionCode = 134
// Demosaicing
ColorBayerBGToBGR = 46
ColorBayerGBToBGR = 47
ColorBayerRGToBGR = 48
ColorBayerGRToBGR = 49
ColorBayerBGToBGR ColorConversionCode = 46
ColorBayerGBToBGR ColorConversionCode = 47
ColorBayerRGToBGR ColorConversionCode = 48
ColorBayerGRToBGR ColorConversionCode = 49
ColorBayerBGToGRAY = 86
ColorBayerGBToGRAY = 87
ColorBayerRGToGRAY = 88
ColorBayerGRToGRAY = 89
ColorBayerBGToGRAY ColorConversionCode = 86
ColorBayerGBToGRAY ColorConversionCode = 87
ColorBayerRGToGRAY ColorConversionCode = 88
ColorBayerGRToGRAY ColorConversionCode = 89
// Demosaicing using Variable Number of Gradients
ColorBayerBGToBGRVNG = 62
ColorBayerGBToBGRVNG = 63
ColorBayerRGToBGRVNG = 64
ColorBayerGRToBGRVNG = 65
ColorBayerBGToBGRVNG ColorConversionCode = 62
ColorBayerGBToBGRVNG ColorConversionCode = 63
ColorBayerRGToBGRVNG ColorConversionCode = 64
ColorBayerGRToBGRVNG ColorConversionCode = 65
// Edge-Aware Demosaicing
ColorBayerBGToBGREA = 135
ColorBayerGBToBGREA = 136
ColorBayerRGToBGREA = 137
ColorBayerGRToBGREA = 138
ColorBayerBGToBGREA ColorConversionCode = 135
ColorBayerGBToBGREA ColorConversionCode = 136
ColorBayerRGToBGREA ColorConversionCode = 137
ColorBayerGRToBGREA ColorConversionCode = 138
// Demosaicing with alpha channel
ColorBayerBGToBGRA = 139
ColorBayerGBToBGRA = 140
ColorBayerRGToBGRA = 141
ColorBayerGRToBGRA = 142
ColorBayerBGToBGRA ColorConversionCode = 139
ColorBayerGBToBGRA ColorConversionCode = 140
ColorBayerRGToBGRA ColorConversionCode = 141
ColorBayerGRToBGRA ColorConversionCode = 142
ColorCOLORCVTMAX = 143
ColorCOLORCVTMAX ColorConversionCode = 143
)
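The change running through this whole block gives each previously untyped constant an explicit `ColorConversionCode` type, so gocv functions can demand a conversion code rather than a bare `int`. The pattern can be sketched in isolation (the names and values below are illustrative stand-ins, not gocv's actual declarations):

```go
package main

import "fmt"

// ConversionCode mirrors the idea behind gocv's ColorConversionCode:
// an int-backed named type that APIs can require explicitly.
type ConversionCode int

const (
	// Typed constants: each carries ConversionCode, not untyped int.
	BGRToGray ConversionCode = 6
	BGRToYUV  ConversionCode = 82
)

// convert only accepts a ConversionCode, so an arbitrary int variable
// passed here would fail to compile without an explicit conversion.
func convert(code ConversionCode) string {
	return fmt.Sprintf("converting with code %d", int(code))
}

func main() {
	fmt.Println(convert(BGRToYUV)) // prints "converting with code 82"
}
```

The untyped constants they replace would silently satisfy any integer parameter; the typed versions turn a mixed-up argument into a compile-time error instead of a runtime surprise.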


@ -1,7 +1,7 @@
#!/bin/bash
set -eux -o pipefail
OPENCV_VERSION=${OPENCV_VERSION:-4.2.0}
OPENCV_VERSION=${OPENCV_VERSION:-4.4.0}
#GRAPHICAL=ON
GRAPHICAL=${GRAPHICAL:-OFF}
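The `${VAR:-default}` expansions in the script above are what let a caller pin a different OpenCV version or enable `GRAPHICAL` from the environment while 4.4.0 and `OFF` remain the fallbacks; a minimal, self-contained illustration:

```shell
#!/bin/bash
set -eu

# Unset first so the fallback branch is exercised even if the
# caller's environment already exported OPENCV_VERSION.
unset OPENCV_VERSION || true

# With the variable unset (or empty), the default after :- is used.
echo "${OPENCV_VERSION:-4.4.0}"   # prints 4.4.0

# Once the variable holds a non-empty value, that value wins.
OPENCV_VERSION=4.2.0
echo "${OPENCV_VERSION:-4.4.0}"   # prints 4.2.0
```

Note that `:-` only substitutes; it does not assign, so each expansion re-checks the variable.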

2
vendor/gocv.io/x/gocv/version.go generated vendored

@ -7,7 +7,7 @@ package gocv
import "C"
// GoCVVersion of this package, for display purposes.
const GoCVVersion = "0.22.0"
const GoCVVersion = "0.24.0"
// Version returns the current golang package version
func Version() string {

76
vendor/gocv.io/x/gocv/videoio.go generated vendored

@ -23,133 +23,133 @@ const (
// VideoCapturePosFrames 0-based index of the frame to be
// decoded/captured next.
VideoCapturePosFrames = 1
VideoCapturePosFrames VideoCaptureProperties = 1
// VideoCapturePosAVIRatio relative position of the video file:
// 0=start of the film, 1=end of the film.
VideoCapturePosAVIRatio = 2
VideoCapturePosAVIRatio VideoCaptureProperties = 2
// VideoCaptureFrameWidth is width of the frames in the video stream.
VideoCaptureFrameWidth = 3
VideoCaptureFrameWidth VideoCaptureProperties = 3
// VideoCaptureFrameHeight controls height of frames in the video stream.
VideoCaptureFrameHeight = 4
VideoCaptureFrameHeight VideoCaptureProperties = 4
// VideoCaptureFPS controls capture frame rate.
VideoCaptureFPS = 5
VideoCaptureFPS VideoCaptureProperties = 5
// VideoCaptureFOURCC contains the 4-character code of codec.
// see VideoWriter::fourcc for details.
VideoCaptureFOURCC = 6
VideoCaptureFOURCC VideoCaptureProperties = 6
// VideoCaptureFrameCount contains number of frames in the video file.
VideoCaptureFrameCount = 7
VideoCaptureFrameCount VideoCaptureProperties = 7
// VideoCaptureFormat is the format of the Mat objects returned by
// VideoCapture::retrieve().
VideoCaptureFormat = 8
VideoCaptureFormat VideoCaptureProperties = 8
// VideoCaptureMode contains backend-specific value indicating
// the current capture mode.
VideoCaptureMode = 9
VideoCaptureMode VideoCaptureProperties = 9
// VideoCaptureBrightness is the brightness of the image
// (only for those cameras that support it).
VideoCaptureBrightness = 10
VideoCaptureBrightness VideoCaptureProperties = 10
// VideoCaptureContrast is the contrast of the image
// (only for cameras that support it).
VideoCaptureContrast = 11
VideoCaptureContrast VideoCaptureProperties = 11
// VideoCaptureSaturation is the saturation of the image
// (only for cameras that support it).
VideoCaptureSaturation = 12
VideoCaptureSaturation VideoCaptureProperties = 12
// VideoCaptureHue is the hue of the image (only for cameras that support it).
VideoCaptureHue = 13
VideoCaptureHue VideoCaptureProperties = 13
// VideoCaptureGain is the gain of the captured image
// (only for those cameras that support it).
VideoCaptureGain = 14
VideoCaptureGain VideoCaptureProperties = 14
// VideoCaptureExposure is the exposure of the captured image
// (only for those cameras that support it).
VideoCaptureExposure = 15
VideoCaptureExposure VideoCaptureProperties = 15
// VideoCaptureConvertRGB is a boolean flag indicating whether
// images should be converted to RGB.
VideoCaptureConvertRGB = 16
VideoCaptureConvertRGB VideoCaptureProperties = 16
// VideoCaptureWhiteBalanceBlueU is currently unsupported.
VideoCaptureWhiteBalanceBlueU = 17
VideoCaptureWhiteBalanceBlueU VideoCaptureProperties = 17
// VideoCaptureRectification is the rectification flag for stereo cameras.
// Note: only supported by DC1394 v 2.x backend currently.
VideoCaptureRectification = 18
VideoCaptureRectification VideoCaptureProperties = 18
// VideoCaptureMonochrome indicates whether images should be
// converted to monochrome.
VideoCaptureMonochrome = 19
VideoCaptureMonochrome VideoCaptureProperties = 19
// VideoCaptureSharpness controls image capture sharpness.
VideoCaptureSharpness = 20
VideoCaptureSharpness VideoCaptureProperties = 20
// VideoCaptureAutoExposure controls the DC1394 exposure control
// done by the camera; the user can adjust the reference level using this feature.
VideoCaptureAutoExposure = 21
VideoCaptureAutoExposure VideoCaptureProperties = 21
// VideoCaptureGamma controls video capture gamma.
VideoCaptureGamma = 22
VideoCaptureGamma VideoCaptureProperties = 22
// VideoCaptureTemperature controls video capture temperature.
VideoCaptureTemperature = 23
VideoCaptureTemperature VideoCaptureProperties = 23
// VideoCaptureTrigger controls video capture trigger.
VideoCaptureTrigger = 24
VideoCaptureTrigger VideoCaptureProperties = 24
// VideoCaptureTriggerDelay controls video capture trigger delay.
VideoCaptureTriggerDelay = 25
VideoCaptureTriggerDelay VideoCaptureProperties = 25
// VideoCaptureWhiteBalanceRedV controls video capture setting for
// white balance.
VideoCaptureWhiteBalanceRedV = 26
VideoCaptureWhiteBalanceRedV VideoCaptureProperties = 26
// VideoCaptureZoom controls video capture zoom.
VideoCaptureZoom = 27
VideoCaptureZoom VideoCaptureProperties = 27
// VideoCaptureFocus controls video capture focus.
VideoCaptureFocus = 28
VideoCaptureFocus VideoCaptureProperties = 28
// VideoCaptureGUID controls video capture GUID.
VideoCaptureGUID = 29
VideoCaptureGUID VideoCaptureProperties = 29
// VideoCaptureISOSpeed controls video capture ISO speed.
VideoCaptureISOSpeed = 30
VideoCaptureISOSpeed VideoCaptureProperties = 30
// VideoCaptureBacklight controls video capture backlight.
VideoCaptureBacklight = 32
VideoCaptureBacklight VideoCaptureProperties = 32
// VideoCapturePan controls video capture pan.
VideoCapturePan = 33
VideoCapturePan VideoCaptureProperties = 33
// VideoCaptureTilt controls video capture tilt.
VideoCaptureTilt = 34
VideoCaptureTilt VideoCaptureProperties = 34
// VideoCaptureRoll controls video capture roll.
VideoCaptureRoll = 35
VideoCaptureRoll VideoCaptureProperties = 35
// VideoCaptureIris controls video capture iris.
VideoCaptureIris = 36
VideoCaptureIris VideoCaptureProperties = 36
// VideoCaptureSettings pops up the video/camera filter dialog. Note:
// only supported by DSHOW backend currently. The property value is ignored.
VideoCaptureSettings = 37
VideoCaptureSettings VideoCaptureProperties = 37
// VideoCaptureBufferSize controls video capture buffer size.
VideoCaptureBufferSize = 38
VideoCaptureBufferSize VideoCaptureProperties = 38
// VideoCaptureAutoFocus controls video capture auto focus.
VideoCaptureAutoFocus = 39
VideoCaptureAutoFocus VideoCaptureProperties = 39
)
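The capture properties above get the same treatment: every constant now carries `VideoCaptureProperties`, so a setter can require that type and convert to the numeric value OpenCV expects only at the boundary. A hedged sketch of the shape of such an API (this is not gocv's real implementation, which calls into C, and the names here are illustrative):

```go
package main

import "fmt"

// Properties mirrors the idea behind gocv's VideoCaptureProperties type.
type Properties int

const (
	FrameWidth  Properties = 3
	FrameHeight Properties = 4
	FPS         Properties = 5
)

// Capture stands in for a video source; the real type holds a C handle.
type Capture struct {
	props map[Properties]float64
}

// Set records a property value; OpenCV properties are doubles.
func (c *Capture) Set(prop Properties, value float64) {
	c.props[prop] = value
}

// Get returns the stored value (zero if the property was never set).
func (c *Capture) Get(prop Properties) float64 {
	return c.props[prop]
}

func main() {
	vc := &Capture{props: map[Properties]float64{}}
	vc.Set(FrameWidth, 1280)
	vc.Set(FrameHeight, 720)
	fmt.Printf("%vx%v\n", vc.Get(FrameWidth), vc.Get(FrameHeight)) // prints 1280x720
}
```

Because `Set` takes `Properties` rather than `int`, mixing up the property and value arguments no longer type-checks.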
// VideoCapture is a wrapper around the OpenCV VideoCapture class.


@ -11,18 +11,18 @@ echo.
REM This is why there is no progress bar:
REM https://github.com/PowerShell/PowerShell/issues/2138
echo Downloading: opencv-4.2.0.zip [91MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv/archive/4.2.0.zip -OutFile c:\opencv\opencv-4.2.0.zip"
echo Downloading: opencv-4.4.0.zip [91MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv/archive/4.4.0.zip -OutFile c:\opencv\opencv-4.4.0.zip"
echo Extracting...
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv-4.2.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv-4.2.0.zip /q
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv-4.4.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv-4.4.0.zip /q
echo.
echo Downloading: opencv_contrib-4.2.0.zip [58MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv_contrib/archive/4.2.0.zip -OutFile c:\opencv\opencv_contrib-4.2.0.zip"
echo Downloading: opencv_contrib-4.4.0.zip [58MB]
powershell -command "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $ProgressPreference = 'SilentlyContinue'; Invoke-WebRequest -Uri https://github.com/opencv/opencv_contrib/archive/4.4.0.zip -OutFile c:\opencv\opencv_contrib-4.4.0.zip"
echo Extracting...
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv_contrib-4.2.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv_contrib-4.2.0.zip /q
powershell -command "$ProgressPreference = 'SilentlyContinue'; Expand-Archive -Path c:\opencv\opencv_contrib-4.4.0.zip -DestinationPath c:\opencv"
del c:\opencv\opencv_contrib-4.4.0.zip /q
echo.
echo Done with downloading and extracting sources.
@ -32,9 +32,9 @@ echo on
cd /D C:\opencv\build
set PATH=%PATH%;C:\Program Files (x86)\CMake\bin;C:\mingw-w64\x86_64-6.3.0-posix-seh-rt_v5-rev1\mingw64\bin
cmake C:\opencv\opencv-4.2.0 -G "MinGW Makefiles" -BC:\opencv\build -DENABLE_CXX11=ON -DOPENCV_EXTRA_MODULES_PATH=C:\opencv\opencv_contrib-4.2.0\modules -DBUILD_SHARED_LIBS=ON -DWITH_IPP=OFF -DWITH_MSMF=OFF -DBUILD_EXAMPLES=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_java=OFF -DBUILD_opencv_python=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_DOCS=OFF -DENABLE_PRECOMPILED_HEADERS=OFF -DBUILD_opencv_saliency=OFF -DCPU_DISPATCH= -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_OPENCL_D3D11_NV=OFF -Wno-dev
cmake C:\opencv\opencv-4.4.0 -G "MinGW Makefiles" -BC:\opencv\build -DENABLE_CXX11=ON -DOPENCV_EXTRA_MODULES_PATH=C:\opencv\opencv_contrib-4.4.0\modules -DBUILD_SHARED_LIBS=ON -DWITH_IPP=OFF -DWITH_MSMF=OFF -DBUILD_EXAMPLES=OFF -DBUILD_TESTS=OFF -DBUILD_PERF_TESTS=OFF -DBUILD_opencv_java=OFF -DBUILD_opencv_python=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_DOCS=OFF -DENABLE_PRECOMPILED_HEADERS=OFF -DBUILD_opencv_saliency=OFF -DCPU_DISPATCH= -DOPENCV_GENERATE_PKGCONFIG=ON -DWITH_OPENCL_D3D11_NV=OFF -DOPENCV_ALLOCATOR_STATS_COUNTER_TYPE=int64_t -Wno-dev
mingw32-make -j%NUMBER_OF_PROCESSORS%
mingw32-make install
rmdir c:\opencv\opencv-4.2.0 /s /q
rmdir c:\opencv\opencv_contrib-4.2.0 /s /q
rmdir c:\opencv\opencv-4.4.0 /s /q
rmdir c:\opencv\opencv_contrib-4.4.0 /s /q
chdir /D %GOPATH%\src\gocv.io\x\gocv