Mirror of https://github.com/bytedream/docker4ssh.git, synced 2025-05-09 12:15:11 +02:00
Initial commit
Commit a589014106
LICENSE (new file, 661 lines)
@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.

Preamble

The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

The precise terms and conditions for copying, distribution and
modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.

To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

17. Interpretation of Sections 15 and 16.

If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

END OF TERMS AND CONDITIONS

How to Apply These Terms to Your New Programs

If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.
Makefile (new file, 121 lines)
@@ -0,0 +1,121 @@
VERSION=0.1.0

BUILDDIR = .
_BUILDDIR = $(shell realpath $(BUILDDIR))/

build: build-server build-container build-extra

build-server:
	cd server/ && go build -o $(_BUILDDIR)/docker4ssh

build-container: DEBUG=false
build-container:
	@if $(DEBUG); then\
		cd container/ && cargo build --target x86_64-unknown-linux-musl --target-dir $(_BUILDDIR) --bin configure;\
	else\
		cd container/ && cargo build --target x86_64-unknown-linux-musl --target-dir $(_BUILDDIR) --release --bin configure;\
	fi
	cp -rf $(_BUILDDIR)/x86_64-unknown-linux-musl/$(shell if $(DEBUG); then echo debug; else echo release; fi)/configure $(_BUILDDIR)

build-extra: SSHPASS:=$(shell LC_ALL=C tr -dc 'A-Za-z0-9!#$%&()*,-./:<=>?@[\]^_{}~' < /dev/urandom | head -c 18 ; echo)
build-extra:
	if [ "$(_BUILDDIR)" != "$(shell realpath .)/" ]; then\
		cp -rf LICENSE $(_BUILDDIR)/LICENSE;\
		cp -rf man/ $(_BUILDDIR);\
	fi
	yes | ssh-keygen -t ed25519 -f $(_BUILDDIR)/docker4ssh.key -N "$(SSHPASS)" -b 4096 > /dev/null
	cp -rf extra/docker4ssh.conf $(_BUILDDIR)
	sed -i 's|Passphrase = ""|Passphrase = "$(SSHPASS)"|' $(_BUILDDIR)/docker4ssh.conf
	cat extra/database.sql | sqlite3 $(_BUILDDIR)/docker4ssh.sqlite3
	mkdir -p $(_BUILDDIR)/profile/ && cp -f extra/profile.conf $(_BUILDDIR)/profile/

optimize: optimize-server optimize-container

optimize-server:
	strip $(_BUILDDIR)/docker4ssh

optimize-container:
	strip $(_BUILDDIR)/configure

clean: clean-server clean-container clean-extra

clean-server:
	rm -rf $(_BUILDDIR)/docker4ssh

clean-container:
	rm -rf $(_BUILDDIR)/{x86_64-unknown-linux-musl,configure}

clean-extra:
	rm -rf $(_BUILDDIR)/docker4ssh*
	rm -rf $(_BUILDDIR)/man/
	rm -rf $(_BUILDDIR)/profile/

DESTDIR=
PREFIX=/usr
install:
	install -Dm755 $(_BUILDDIR)docker4ssh $(DESTDIR)$(PREFIX)/bin/docker4ssh
	install -Dm644 $(_BUILDDIR)LICENSE $(DESTDIR)$(PREFIX)/share/licenses/docker4ssh/LICENSE
	install -Dm644 $(_BUILDDIR)man/docker4ssh.1 $(DESTDIR)$(PREFIX)/share/man/man1/docker4ssh.1
	install -Dm644 $(_BUILDDIR)man/docker4ssh.conf.5 $(DESTDIR)$(PREFIX)/share/man/man5/docker4ssh.conf.5
	install -Dm644 $(_BUILDDIR)man/profile.conf.5 $(DESTDIR)$(PREFIX)/share/man/man5/profile.conf.5

	install -Dm755 $(_BUILDDIR)configure $(DESTDIR)/etc/docker4ssh/configure
	install -Dm775 $(_BUILDDIR)docker4ssh.conf $(DESTDIR)/etc/docker4ssh/docker4ssh.conf
	install -Dm755 $(_BUILDDIR)docker4ssh.sqlite3 $(DESTDIR)/etc/docker4ssh/docker4ssh.sqlite3
	install -Dm755 $(_BUILDDIR)docker4ssh.key $(DESTDIR)/etc/docker4ssh/docker4ssh.key
	install -Dm644 $(_BUILDDIR)man/* -t $(DESTDIR)/etc/docker4ssh/man/
	install -Dm644 $(_BUILDDIR)profile/* -t $(DESTDIR)/etc/docker4ssh/profile/
	install -Dm644 $(_BUILDDIR)LICENSE $(DESTDIR)/etc/docker4ssh/LICENSE

uninstall:
	rm -rf $(DESTDIR)/etc/docker4ssh/
	rm -f $(DESTDIR)$(PREFIX)/bin/docker4ssh
	rm -f $(DESTDIR)$(PREFIX)/share/man/man1/docker4ssh.1
	rm -f $(DESTDIR)$(PREFIX)/share/man/man5/{docker4ssh.conf,profile.conf}.5
	rm -f $(DESTDIR)$(PREFIX)/share/licenses/docker4ssh/LICENSE

release:
	mkdir -p /tmp/docker4ssh-$(VERSION)-build/ /tmp/docker4ssh-$(VERSION)-release/
	$(MAKE) BUILDDIR=/tmp/docker4ssh-$(VERSION)-build/ SSHPASS= build optimize
	$(MAKE) BUILDDIR=/tmp/docker4ssh-$(VERSION)-build/ DESTDIR=/tmp/docker4ssh-$(VERSION)-release/ install
	tar -C /tmp/docker4ssh-$(VERSION)-release/ -czf docker4ssh-$(VERSION).tar.gz .

RUNDIR=/tmp/docker4ssh

.PHONY: run
run:
	$(MAKE) BUILDDIR=$(RUNDIR) SSHPASS= build
	cd $(RUNDIR) && ./docker4ssh

develop: SERVERSUM = $(shell find server/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum | cut -d ' ' -f1)
develop: CONTAINERSUM = $(shell find container/src/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum | cut -d ' ' -f1)
develop: EXTRASUM = $(shell find extra/ -type f -exec md5sum {} + | LC_ALL=C sort | md5sum | cut -d ' ' -f1)
# there is maybe a better way to do this stuff but for the moment this works out
develop:
	@if [ ! -d $(RUNDIR) ]; then\
		$(MAKE) BUILDDIR=$(RUNDIR) DEBUG=true SSHPASS= build;\
		if [[ $$? -ne 0 ]]; then exit 2; fi;\
		echo -n $(SERVERSUM) > $(RUNDIR)/SERVERSUM;\
		echo -n $(CONTAINERSUM) > $(RUNDIR)/CONTAINERSUM;\
		echo -n $(EXTRASUM) > $(RUNDIR)/EXTRASUM;\
	else\
		if [ "$(shell cat $(RUNDIR)/SERVERSUM)" != "$(SERVERSUM)" ]; then\
			$(MAKE) BUILDDIR=$(RUNDIR) clean-server;\
			$(MAKE) BUILDDIR=$(RUNDIR) build-server;\
			if [[ $$? -ne 0 ]]; then exit 2; fi;\
			echo -n $(SERVERSUM) > $(RUNDIR)/SERVERSUM;\
		fi;\
		if [ "$(shell cat $(RUNDIR)/CONTAINERSUM)" != "$(CONTAINERSUM)" ]; then\
			$(MAKE) BUILDDIR=$(RUNDIR) clean-container;\
			$(MAKE) BUILDDIR=$(RUNDIR) DEBUG=true build-container;\
			if [[ $$? -ne 0 ]]; then exit 2; fi;\
			echo -n $(CONTAINERSUM) > $(RUNDIR)/CONTAINERSUM;\
		fi;\
		if [ "$(shell cat $(RUNDIR)/EXTRASUM)" != "$(EXTRASUM)" ]; then\
			$(MAKE) BUILDDIR=$(RUNDIR) clean-extra;\
			$(MAKE) BUILDDIR=$(RUNDIR) SSHPASS= build-extra;\
			if [[ $$? -ne 0 ]]; then exit 2; fi;\
			echo -n $(EXTRASUM) > $(RUNDIR)/EXTRASUM;\
		fi;\
	fi
	cd $(RUNDIR) && LOGGING_LEVEL="debug" ./docker4ssh start
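For orientation, a typical invocation of the targets above looks like this (a sketch only; the targets, variables and paths are the ones defined in this Makefile):

```shell
# build the server, the container binary and the extra files into the repository root, then strip them
$ make build optimize

# install system-wide: the binary goes to $(PREFIX)/bin, runtime files to /etc/docker4ssh/
$ sudo make install

# or build the distributable tarball used for releases
$ make release    # produces docker4ssh-0.1.0.tar.gz
```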
README.md (new file, 112 lines)
@@ -0,0 +1,112 @@
# docker4ssh - docker containers and more via ssh

**docker4ssh** is an ssh server that can create new docker containers and re-login into existing ones.

<p align="center">
  <a href="https://github.com/ByteDream/docker4ssh">
    <img src="https://img.shields.io/github/languages/code-size/ByteDream/docker4ssh?style=flat-square" alt="Code size">
  </a>
  <a href="https://github.com/ByteDream/docker4ssh/commits">
    <img src="https://img.shields.io/github/last-commit/ByteDream/docker4ssh?style=flat-square" alt="Latest commit">
  </a>
  <a href="https://github.com/ByteDream/docker4ssh/releases/latest">
    <img src="https://img.shields.io/github/downloads/ByteDream/docker4ssh/total?style=flat-square" alt="Download Badge">
  </a>
  <a href="https://github.com/ByteDream/docker4ssh/blob/master/LICENSE">
    <img src="https://img.shields.io/github/license/ByteDream/docker4ssh?style=flat-square" alt="License">
  </a>
  <a href="https://github.com/ByteDream/docker4ssh/releases/latest">
    <img src="https://img.shields.io/github/v/release/ByteDream/docker4ssh?style=flat-square" alt="Release">
  </a>
  <a href="https://discord.gg/gUWwekeNNg">
    <img src="https://img.shields.io/discord/915659846836162561?label=discord&style=flat-square" alt="Discord">
  </a>
</p>

<p align="center">
  <a href="#-features">✨ Features</a>
  ·
  <a href="#%EF%B8%8F-installation">⌨ Installation</a>
  ·
  <a href="#-usage">🖋️ Usage</a>
  ·
  <a href="#-license">⚖ License</a>
</p>

**Visit the [wiki](https://github.com/ByteDream/docker4ssh/wiki) to get more information and detailed usage instructions**

## ✨ Features
- Create containers from images (e.g. `ubuntu:21.04@server`)
- Create specific containers for specific usernames with [profiles](https://github.com/ByteDream/docker4ssh/wiki/Configuration-Files#profileconf)
- Containers are configurable from within
- Re-login into existing containers
- Full use of the docker API (unlike [ssh2docker](https://github.com/moul/ssh2docker), which uses the CLI and could therefore theoretically allow command injection)
- Highly configurable [settings](https://github.com/ByteDream/docker4ssh/wiki/Configuration-Files#docker4sshconf)

## ⌨️ Installation

For every install method, your OS **must** be Linux and Docker has to be installed.

- Download from the latest release (currently only the x64 architecture is supported)
  - Download `docker4ssh-<version>.tar.gz` from the [latest release](https://github.com/ByteDream/docker4ssh/releases/latest)
  - Install it
    - Into your root directory (recommended)
      ```shell
      $ sudo tar -xvzf docker4ssh-<version>.tar.gz -C /
      ```
    - To the same directory
      ```shell
      $ sudo tar -xvzf docker4ssh-<version>.tar.gz
      ```
- Building from source

  Before you start installing, make sure you have the following things ready:
  - [Go](https://go.dev/) installed
  - [Rust](https://www.rust-lang.org/) installed
  - [Make](https://www.gnu.org/software/make/) installed - optional, but highly recommended since `make` is used in the following instructions

  To install docker4ssh, just execute the following commands:
  ```shell
  $ git clone https://github.com/ByteDream/docker4ssh
  $ cd docker4ssh
  $ make build
  $ sudo make install
  ```
- Install it from the [AUR](https://aur.archlinux.org/packages/docker4ssh/) (if you're using Arch or an Arch-based distro)
  ```shell
  $ yay -S docker4ssh
  ```
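Whichever method you pick, you can sanity-check the result against the layout of the Makefile's `install` target (a quick sketch; paths assume the default `PREFIX=/usr`):

```shell
$ command -v docker4ssh   # should print /usr/bin/docker4ssh
$ ls /etc/docker4ssh/     # configure, docker4ssh.conf, docker4ssh.key, docker4ssh.sqlite3, man/, profile/, LICENSE
$ man docker4ssh          # the man page is installed to $(PREFIX)/share/man/man1
```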
## 🖋 Usage

To start the docker4ssh server, simply type
```shell
$ docker4ssh start
```

The default port for the ssh server is 2222; if you want to change it, take a look at the [config file](https://github.com/ByteDream/docker4ssh/wiki/docker4ssh.conf).
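If you installed via the release tarball or `make install`, the file lives at `/etc/docker4ssh/docker4ssh.conf` and ships with its own man page (both paths come from the Makefile in this commit), so a quick way to look up the available keys and edit them is:

```shell
$ man 5 docker4ssh.conf                      # key reference
$ sudoedit /etc/docker4ssh/docker4ssh.conf   # then restart the server
```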
Dynamic profile generation is enabled by default, so you can start right away.
Type the following to generate a new ubuntu container and connect to it:
```shell
$ ssh -p 2222 ubuntu:latest@127.0.0.1
```
You will then get a password prompt where you can type in anything, since by default any password is accepted.
Once you have entered a password, the docker container gets created and the ssh connection is "redirected" to the container's tty:
```shell
ubuntu:latest@127.0.0.1's password:
┌───Container────────────────┐
│ Container ID: e0f3d48217da │
│ Network Mode: Host        │
│ Configurable: true        │
│ Run Level: User           │
│ Exit After:               │
│ Keep On Exit: false       │
└──────────────Information───┘
root@e0f3d48217da:/#
```

For further information, visit the [wiki](https://github.com/ByteDream/docker4ssh/wiki).

## ⚖ License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0) - see the [LICENSE](LICENSE) file for more details.
container/Cargo.lock (generated, new file, 412 lines)
@@ -0,0 +1,412 @@
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3

[[package]]
name = "addr2line"
version = "0.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b9ecd88a8c8378ca913a680cd98f0f13ac67383d35993f86c90a70e3f137816b"
dependencies = [
 "gimli",
]

[[package]]
name = "adler"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f26201604c87b1e01bd3d98f8d5d9a8fcbb815e8cedb41ffccbeb4bf593a35fe"

[[package]]
name = "ansi_term"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ee49baf6cb617b853aa8d93bf420db2383fab46d314482ca2803b40d5fde979b"
dependencies = [
 "winapi",
]

[[package]]
name = "atty"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8"
dependencies = [
 "hermit-abi",
 "libc",
 "winapi",
]

[[package]]
name = "autocfg"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdb031dd78e28731d87d56cc8ffef4a8f36ca26c38fe2de700543e627f8a464a"

[[package]]
name = "backtrace"
version = "0.3.63"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "321629d8ba6513061f26707241fa9bc89524ff1cd7a915a97ef0c62c666ce1b6"
dependencies = [
 "addr2line",
 "cc",
 "cfg-if",
 "libc",
 "miniz_oxide",
 "object",
 "rustc-demangle",
]

[[package]]
name = "bitflags"
version = "1.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bef38d45163c2f1dde094a7dfd33ccf595c92905c8f8f4fdc18d06fb1037718a"

[[package]]
name = "cc"
version = "1.0.72"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22a9137b95ea06864e018375b72adfb7db6e6f68cfc8df5a04d00288050485ee"

[[package]]
name = "cfg-if"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf1de4339761588bc0619e3cbc0120ee582ebb74b53b4efbf79117bd2da40fd"

[[package]]
name = "clap"
version = "2.33.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37e58ac78573c40708d45522f0d80fa2f01cc4f9b4e2bf749807255454312002"
dependencies = [
 "ansi_term",
 "atty",
 "bitflags",
 "strsim",
 "textwrap",
 "unicode-width",
 "vec_map",
]

[[package]]
name = "docker4ssh"
version = "0.1.0"
dependencies = [
 "failure",
 "log",
 "serde",
 "serde_json",
 "serde_repr",
 "structopt",
]

[[package]]
name = "failure"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d32e9bd16cc02eae7db7ef620b392808b89f6a5e16bb3497d159c6b92a0f4f86"
dependencies = [
 "backtrace",
 "failure_derive",
]

[[package]]
name = "failure_derive"
version = "0.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aa4da3c766cd7a0db8242e326e9e4e081edd567072893ed320008189715366a4"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
 "synstructure",
]

[[package]]
name = "gimli"
version = "0.26.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78cc372d058dcf6d5ecd98510e7fbc9e5aec4d21de70f65fea8fecebcd881bd4"

[[package]]
name = "heck"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6d621efb26863f0e9924c6ac577e8275e5e6b77455db64ffa6c65c904e9e132c"
dependencies = [
 "unicode-segmentation",
]

[[package]]
name = "hermit-abi"
version = "0.1.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "62b467343b94ba476dcb2500d242dadbb39557df889310ac77c5d99100aaac33"
dependencies = [
 "libc",
]

[[package]]
name = "itoa"
version = "0.4.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b71991ff56294aa922b450139ee08b3bfc70982c6b2c7562771375cf73542dd4"

[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"

[[package]]
name = "libc"
version = "0.2.107"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbe5e23404da5b4f555ef85ebed98fb4083e55a00c317800bc2a50ede9f3d219"

[[package]]
name = "log"
version = "0.4.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "51b9bbe6c47d51fc3e1a9b945965946b4c44142ab8792c50835a980d362c2710"
dependencies = [
 "cfg-if",
]

[[package]]
name = "memchr"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "308cc39be01b73d0d18f82a0e7b2a3df85245f84af96fdddc5d202d27e47b86a"

[[package]]
name = "miniz_oxide"
version = "0.4.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a92518e98c078586bc6c934028adcca4c92a53d6a958196de835170a01d84e4b"
dependencies = [
 "adler",
 "autocfg",
]

[[package]]
name = "object"
version = "0.27.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67ac1d3f9a1d3616fd9a60c8d74296f22406a238b6a72f5cc1e6f314df4ffbf9"
dependencies = [
 "memchr",
]

[[package]]
name = "proc-macro-error"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da25490ff9892aab3fcf7c36f08cfb902dd3e71ca0f9f9517bea02a73a5ce38c"
dependencies = [
 "proc-macro-error-attr",
 "proc-macro2",
 "quote",
 "syn",
 "version_check",
]

[[package]]
name = "proc-macro-error-attr"
version = "1.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1be40180e52ecc98ad80b184934baf3d0d29f979574e439af5a55274b35f869"
dependencies = [
 "proc-macro2",
 "quote",
 "version_check",
]

[[package]]
name = "proc-macro2"
version = "1.0.32"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba508cc11742c0dc5c1659771673afbab7a0efab23aa17e854cbab0837ed0b43"
dependencies = [
 "unicode-xid",
]

[[package]]
name = "quote"
version = "1.0.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38bc8cc6a5f2e3655e0899c1b848643b2562f853f114bfec7be120678e3ace05"
dependencies = [
 "proc-macro2",
]

[[package]]
name = "rustc-demangle"
version = "0.1.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7ef03e0a2b150c7a90d01faf6254c9c48a41e95fb2a8c2ac1c6f0d2b9aefc342"

[[package]]
name = "ryu"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71d301d4193d031abdd79ff7e3dd721168a9572ef3fe51a1517aba235bd8f86e"

[[package]]
name = "serde"
version = "1.0.130"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f12d06de37cf59146fbdecab66aa99f9fe4f78722e3607577a5375d66bd0c913"
dependencies = [
 "serde_derive",
]

[[package]]
name = "serde_derive"
version = "1.0.130"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7bc1a1ab1961464eae040d96713baa5a724a8152c1222492465b54322ec508b"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "serde_json"
version = "1.0.69"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e466864e431129c7e0d3476b92f20458e5879919a0596c6472738d9fa2d342f8"
dependencies = [
 "itoa",
 "ryu",
 "serde",
]

[[package]]
name = "serde_repr"
version = "0.1.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "98d0516900518c29efa217c298fa1f4e6c6ffc85ae29fd7f4ee48f176e1a9ed5"
dependencies = [
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "strsim"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8ea5119cdb4c55b55d432abb513a0429384878c15dde60cc77b1c99de1a95a6a"

[[package]]
name = "structopt"
version = "0.3.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "40b9788f4202aa75c240ecc9c15c65185e6a39ccdeb0fd5d008b98825464c87c"
dependencies = [
 "clap",
 "lazy_static",
 "structopt-derive",
]

[[package]]
name = "structopt-derive"
version = "0.4.18"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dcb5ae327f9cc13b68763b5749770cb9e048a99bd9dfdfa58d0cf05d5f64afe0"
dependencies = [
 "heck",
 "proc-macro-error",
 "proc-macro2",
 "quote",
 "syn",
]

[[package]]
name = "syn"
version = "1.0.81"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2afee18b8beb5a596ecb4a2dce128c719b4ba399d34126b9e4396e3f9860966"
dependencies = [
 "proc-macro2",
 "quote",
 "unicode-xid",
]
||||
[[package]]
|
||||
name = "synstructure"
|
||||
version = "0.12.6"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f36bdaa60a83aca3921b5259d5400cbf5e90fc51931376a9bd4a0eb79aa7210f"
|
||||
dependencies = [
|
||||
"proc-macro2",
|
||||
"quote",
|
||||
"syn",
|
||||
"unicode-xid",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "textwrap"
|
||||
version = "0.11.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "d326610f408c7a4eb6f51c37c330e496b08506c9457c9d34287ecc38809fb060"
|
||||
dependencies = [
|
||||
"unicode-width",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "unicode-segmentation"
|
||||
version = "1.8.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "8895849a949e7845e06bd6dc1aa51731a103c42707010a5b591c0038fb73385b"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-width"
|
||||
version = "0.1.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "3ed742d4ea2bd1176e236172c8429aaf54486e7ac098db29ffe6529e0ce50973"
|
||||
|
||||
[[package]]
|
||||
name = "unicode-xid"
|
||||
version = "0.2.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "8ccb82d61f80a663efe1f787a51b16b5a51e3314d6ac365b08639f52387b33f3"
|
||||
|
||||
[[package]]
|
||||
name = "vec_map"
|
||||
version = "0.8.2"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "f1bddf1187be692e79c5ffeab891132dfb0f236ed36a43c7ed39f1165ee20191"
|
||||
|
||||
[[package]]
|
||||
name = "version_check"
|
||||
version = "0.9.3"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5fecdca9a5291cc2b8dcf7dc02453fee791a280f3743cb0905f8822ae463b3fe"
|
||||
|
||||
[[package]]
|
||||
name = "winapi"
|
||||
version = "0.3.9"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "5c839a674fcd7a98952e593242ea400abe93992746761e38641405d28b00f419"
|
||||
dependencies = [
|
||||
"winapi-i686-pc-windows-gnu",
|
||||
"winapi-x86_64-pc-windows-gnu",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "winapi-i686-pc-windows-gnu"
|
||||
version = "0.4.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
|
||||
|
||||
[[package]]
|
||||
name = "winapi-x86_64-pc-windows-gnu"
|
||||
version = "0.4.0"
|
||||
source = "registry+https://github.com/rust-lang/crates.io-index"
|
||||
checksum = "712e227841d057c1ee1cd2fb22fa7e5a5461ae8e48fa2ca79ec42cfc1931183f"
|
25
container/Cargo.toml
Normal file
@@ -0,0 +1,25 @@
[package]
name = "docker4ssh"
version = "0.1.0"
edition = "2021"
authors = ["ByteDream"]
repository = "https://github.com/ByteDream/docker4ssh"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[[bin]]
name = "configure"
path = "src/configure/main.rs"

[dependencies]
failure = "0.1"
log = "0.4"
structopt = "0.3"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
serde_repr = "0.1"

[profile.release]
lto = true
opt-level = "z"
panic = "abort"
296
container/src/configure/cli/cli.rs
Normal file
@@ -0,0 +1,296 @@
|
||||
use std::fmt::{Debug, format};
|
||||
use std::net::TcpStream;
|
||||
use std::os::unix::process::ExitStatusExt;
|
||||
use std::process::{Command, ExitStatus};
|
||||
use std::time::SystemTime;
|
||||
use log::{info, warn};
|
||||
use structopt::StructOpt;
|
||||
use structopt::clap::AppSettings;
|
||||
use crate::configure::cli::parser;
|
||||
use crate::shared::api::api::API;
|
||||
use crate::shared::api::request;
|
||||
use crate::shared::api::request::{ConfigGetResponse, ConfigNetworkMode, ConfigPostRequest, ConfigRunLevel};
|
||||
|
||||
type Result<T> = std::result::Result<T, failure::Error>;
|
||||
|
||||
trait Execute {
|
||||
fn execute(self, api: &mut API) -> Result<()>;
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "configure",
|
||||
about = "A command line wrapper to control docker4ssh containers from within them",
|
||||
settings = &[AppSettings::ArgRequiredElseHelp]
|
||||
)]
|
||||
struct Opts {
|
||||
#[structopt(short, long, global = true, help = "Verbose output")]
|
||||
verbose: bool,
|
||||
|
||||
#[structopt(subcommand)]
|
||||
commands: Option<Root>
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "ping",
|
||||
about = "Ping the control socket"
|
||||
)]
|
||||
struct Ping {}
|
||||
|
||||
impl Execute for Ping {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let start = SystemTime::now().duration_since(SystemTime::UNIX_EPOCH)?.as_nanos();
|
||||
let result = request::PingRequest::new().request(api)?;
|
||||
info!("Pong! Ping is {:.4}ms", ((result.received - start) as f64) / 1000.0 / 1000.0);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "error",
|
||||
about = "Example error message sent from socket",
|
||||
)]
|
||||
struct Error {}
|
||||
|
||||
impl Execute for Error {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
request::ErrorRequest::new().request(api)?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "info",
|
||||
about = "Shows information about the current container",
|
||||
)]
|
||||
struct Info {}
|
||||
|
||||
impl Execute for Info {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let result = request::InfoRequest::new().request(api)?;
|
||||
info!(concat!(
|
||||
"\tContainer ID: {}"
|
||||
), result.container_id);
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "config",
|
||||
about = "Get or set the behavior of the current container",
|
||||
settings = &[AppSettings::ArgRequiredElseHelp]
|
||||
)]
|
||||
struct Config {
|
||||
#[structopt(subcommand)]
|
||||
commands: Option<ConfigCommands>
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
enum ConfigCommands {
|
||||
Get(ConfigGet),
|
||||
Set(ConfigSet)
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "get",
|
||||
about = "Show the current container behavior"
|
||||
)]
|
||||
struct ConfigGet {}
|
||||
|
||||
impl Execute for ConfigGet {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let response: ConfigGetResponse = request::ConfigGetRequest::new().request(api)?;
|
||||
|
||||
info!(concat!(
|
||||
"\tNetwork Mode: {}\n",
|
||||
"\tConfigurable: {}\n",
|
||||
"\tRun Level: {}\n",
|
||||
"\tStartup Information: {}\n",
|
||||
"\tExit After: {}\n",
|
||||
"\tKeep On Exit: {}"
|
||||
), response.network_mode, response.configurable, response.run_level, response.startup_information, response.exit_after, response.keep_on_exit);
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "set",
|
||||
about = "Set the current container behavior",
|
||||
settings = &[AppSettings::ArgRequiredElseHelp]
|
||||
)]
|
||||
struct ConfigSet {
|
||||
#[structopt(long, help = "If the container should keep running even after the user exits", parse(try_from_str = parser::parse_network_mode))]
|
||||
network_mode: Option<ConfigNetworkMode>,
|
||||
|
||||
#[structopt(long, help = "If the container should be configurable from within")]
|
||||
configurable: Option<bool>,
|
||||
|
||||
#[structopt(long, help = "Set the container stop behavior", parse(try_from_str = parser::parse_config_run_level))]
|
||||
run_level: Option<ConfigRunLevel>,
|
||||
|
||||
#[structopt(long, help = "If information about the container should be shown when a user connects")]
|
||||
startup_information: Option<bool>,
|
||||
|
||||
#[structopt(long, help = "Process name after which the container should exit")]
|
||||
exit_after: Option<String>,
|
||||
|
||||
#[structopt(long, help = "If the container should be not deleted after exit")]
|
||||
keep_on_exit: Option<bool>
|
||||
}
|
||||
|
||||
impl Execute for ConfigSet {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let mut request = request::ConfigPostRequest::new();
|
||||
|
||||
if let Some(exit_after) = self.exit_after.as_ref() {
|
||||
let program_runs = Command::new("pidof")
|
||||
.arg("-s")
|
||||
.arg(exit_after).status().unwrap().success();
|
||||
if !program_runs {
|
||||
warn!("NOTE: There is currently no process running with the name '{}'", exit_after);
|
||||
}
|
||||
}
|
||||
|
||||
request.body.network_mode = self.network_mode;
|
||||
request.body.configurable = self.configurable;
|
||||
request.body.run_level = self.run_level;
|
||||
request.body.startup_information = self.startup_information;
|
||||
request.body.exit_after = self.exit_after;
|
||||
request.body.keep_on_exit = self.keep_on_exit;
|
||||
|
||||
request.request(api)?;
|
||||
|
||||
if let Some(keep_on_exit) = self.keep_on_exit {
|
||||
if keep_on_exit {
|
||||
if let Ok(auth) = request::AuthGetRequest::new().request(api) {
|
||||
info!("To reconnect to this container, use the user '{}' for the ssh connection", &auth.user)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "auth",
|
||||
about = "Get or set the container authentication",
|
||||
settings = &[AppSettings::ArgRequiredElseHelp]
|
||||
)]
|
||||
struct Auth {
|
||||
#[structopt(subcommand)]
|
||||
commands: Option<AuthCommands>
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
enum AuthCommands {
|
||||
Get(AuthGet),
|
||||
Set(AuthSet)
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "get",
|
||||
about = "Show the current username used for ssh authentication and if a password is set"
|
||||
)]
|
||||
struct AuthGet {}
|
||||
|
||||
impl Execute for AuthGet {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let response = request::AuthGetRequest::new().request(api)?;
|
||||
|
||||
info!(concat!(
|
||||
"\tUser: {}\n",
|
||||
"\tHas Password: {}\n"
|
||||
), response.user, response.has_password);
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
#[structopt(
|
||||
name = "set",
|
||||
about = "Set the authentication settings",
|
||||
settings = &[AppSettings::ArgRequiredElseHelp]
|
||||
)]
|
||||
struct AuthSet {
|
||||
#[structopt(long, help = "The container username")]
|
||||
user: Option<String>,
|
||||
#[structopt(long, help = "The container password. If empty, the authentication gets removed")]
|
||||
password: Option<String>
|
||||
}
|
||||
|
||||
impl Execute for AuthSet {
|
||||
fn execute(self, api: &mut API) -> Result<()> {
|
||||
let mut request = request::AuthPostRequest::new();
|
||||
request.body.user = self.user;
|
||||
request.body.password = self.password.clone();
|
||||
|
||||
request.request(api)?;
|
||||
|
||||
if let Some(password) = self.password {
|
||||
if password == "" {
|
||||
warn!("No password was specified so the authentication got deleted")
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(StructOpt)]
|
||||
enum Root {
|
||||
Auth(Auth),
|
||||
Error(Error),
|
||||
Info(Info),
|
||||
Ping(Ping),
|
||||
Config(Config)
|
||||
}
|
||||
|
||||
pub fn cli(route: String) {
|
||||
if let Some(subcommand) = Opts::from_args().commands {
|
||||
let mut result: Result<()> = Ok(());
|
||||
let mut api = API::new(route, String::new());
|
||||
match subcommand {
|
||||
Root::Auth(auth) => {
|
||||
if let Some(subsubcommand) = auth.commands {
|
||||
match subsubcommand {
|
||||
AuthCommands::Get(auth_get) => {
|
||||
result = auth_get.execute(&mut api)
|
||||
}
|
||||
AuthCommands::Set(auth_set) => {
|
||||
result = auth_set.execute(&mut api)
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
Root::Error(error) => result = error.execute(&mut api),
|
||||
Root::Info(info) => result = info.execute(&mut api),
|
||||
Root::Ping(ping) => result = ping.execute(&mut api),
|
||||
Root::Config(config) => {
|
||||
if let Some(subsubcommand) = config.commands {
|
||||
match subsubcommand {
|
||||
ConfigCommands::Get(config_get) => {
|
||||
result = config_get.execute(&mut api)
|
||||
}
|
||||
ConfigCommands::Set(config_set) => {
|
||||
result = config_set.execute(&mut api)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if result.is_err() {
|
||||
log::error!("{}", result.err().unwrap().to_string())
|
||||
}
|
||||
}
|
||||
}
|
4
container/src/configure/cli/mod.rs
Normal file
@@ -0,0 +1,4 @@
mod cli;
pub mod parser;

pub use cli::cli;
23
container/src/configure/cli/parser.rs
Normal file
@@ -0,0 +1,23 @@
use std::f32::consts::E;
use std::fmt::format;
use crate::shared::api::request::{ConfigNetworkMode, ConfigRunLevel};

pub fn parse_network_mode(src: &str) -> Result<ConfigNetworkMode, String> {
    match String::from(src).to_lowercase().as_str() {
        "off" | "1" => Ok(ConfigNetworkMode::Off),
        "full" | "2" => Ok(ConfigNetworkMode::Full),
        "host" | "3" => Ok(ConfigNetworkMode::Host),
        "docker" | "4" => Ok(ConfigNetworkMode::Docker),
        "none" | "5" => Ok(ConfigNetworkMode::None),
        _ => Err(format!("'{}' is not a valid network mode. Choose from: 'off', 'full', 'host', 'docker', 'none'", src))
    }
}

pub fn parse_config_run_level(src: &str) -> Result<ConfigRunLevel, String> {
    match String::from(src).to_lowercase().as_str() {
        "user" | "1" => Ok(ConfigRunLevel::User),
        "container" | "2" => Ok(ConfigRunLevel::Container),
        "forever" | "3" => Ok(ConfigRunLevel::Forever),
        _ => Err(format!("'{}' is not a valid run level. Choose from: 'user', 'container', 'forever'", src))
    }
}
19
container/src/configure/main.rs
Normal file
@@ -0,0 +1,19 @@
use std::fs;
use std::net::TcpStream;
use std::os::unix::net::UnixStream;
use std::process::exit;
use log::{LevelFilter, trace, warn, info, error};
use docker4ssh::configure::cli;
use docker4ssh::shared::logging::init_logger;

fn main() {
    init_logger(LevelFilter::Debug);

    match fs::read_to_string("/etc/docker4ssh") {
        Ok(route) => cli(route),
        Err(e) => {
            error!("Failed to read /etc/docker4ssh: {}", e.to_string());
            exit(1);
        }
    }
}
3
container/src/configure/mod.rs
Normal file
@@ -0,0 +1,3 @@
pub mod cli;

pub use cli::cli;
2
container/src/lib.rs
Normal file
@@ -0,0 +1,2 @@
pub mod shared;
pub mod configure;
157
container/src/shared/api/api.rs
Normal file
@@ -0,0 +1,157 @@
|
||||
use std::collections::HashMap;
|
||||
use std::io::{Read, Write};
|
||||
use std::net::TcpStream;
|
||||
use log::Level::Error;
|
||||
use serde::Deserialize;
|
||||
|
||||
pub type Result<T> = std::result::Result<T, failure::Error>;
|
||||
|
||||
pub struct API {
|
||||
route: String,
|
||||
host: String,
|
||||
}
|
||||
|
||||
impl API {
|
||||
pub const fn new(route: String, host: String) -> Self {
|
||||
API {
|
||||
route,
|
||||
host,
|
||||
}
|
||||
}
|
||||
|
||||
pub fn new_connection(&mut self) -> Result<TcpStream> {
|
||||
match TcpStream::connect(&self.route) {
|
||||
Ok(stream) => Ok(stream),
|
||||
Err(e) => Err(failure::format_err!("Failed to connect to {}: {}", self.route, e.to_string()))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&mut self, request: &Request) -> Result<APIResult> {
|
||||
let mut connection = self.new_connection()?;
|
||||
|
||||
connection.write_all(request.as_string().as_bytes())?;
|
||||
let mut buf: String = String::new();
|
||||
connection.read_to_string(&mut buf).map_err(|e| failure::err_msg(e.to_string()))?;
|
||||
Ok(APIResult::new(request, buf))
|
||||
}
|
||||
|
||||
pub fn request_with_err(&mut self, request: &Request) -> Result<APIResult> {
|
||||
let result = self.request(request)?;
|
||||
if result.result_code >= 400 {
|
||||
let err: APIError = result.body()?;
|
||||
Err(failure::err_msg(format!("Error {}: {}", result.result_code, err.message)))
|
||||
} else {
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
pub struct APIError {
|
||||
message: String
|
||||
}
|
||||
|
||||
pub struct APIResult {
|
||||
// TODO: Store the whole request instead of only the path
|
||||
request_path: String,
|
||||
|
||||
result_code: i32,
|
||||
result_body: String
|
||||
}
|
||||
|
||||
impl APIResult {
|
||||
fn new(request: &Request, raw_response: String) -> Self {
|
||||
APIResult {
|
||||
request_path: request.path.clone(),
|
||||
|
||||
// TODO: Parse http body better
|
||||
result_code: raw_response[9..12].parse().unwrap(),
|
||||
result_body: raw_response.split_once("\r\n\r\n").unwrap().1.to_string()
|
||||
}
|
||||
}
|
||||
|
||||
pub fn path(self) -> String {
|
||||
self.request_path
|
||||
}
|
||||
|
||||
pub fn code(&self) -> i32 {
|
||||
return self.result_code
|
||||
}
|
||||
|
||||
pub fn has_body(&self) -> bool {
|
||||
self.result_body.len() > 0
|
||||
}
|
||||
|
||||
pub fn body<'a, T: Deserialize<'a>>(&'a self) -> Result<T> {
|
||||
let result: T = serde_json::from_str(&self.result_body).map_err(|e| {
|
||||
// checks if the error has a body and if so, return it
|
||||
if self.has_body() {
|
||||
let error: APIError = serde_json::from_str(&self.result_body).unwrap_or_else(|ee| {
|
||||
APIError{message: format!("could not deserialize response: {}", e.to_string())}
|
||||
});
|
||||
failure::format_err!("Failed to call '{}': {}", self.request_path, error.message)
|
||||
} else {
|
||||
failure::format_err!("Failed to call '{}': {}", self.request_path, e.to_string())
|
||||
}
|
||||
})?;
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
|
||||
pub enum Method {
|
||||
GET,
|
||||
POST
|
||||
}
|
||||
|
||||
pub struct Request {
|
||||
method: Method,
|
||||
path: String,
|
||||
headers: HashMap<String, String>,
|
||||
body: String,
|
||||
}
|
||||
|
||||
impl Request {
|
||||
pub fn new(path: String) -> Self {
|
||||
Request{
|
||||
method: Method::GET,
|
||||
path,
|
||||
headers: Default::default(),
|
||||
body: "".to_string(),
|
||||
}
|
||||
}
|
||||
|
||||
pub fn set_method(&mut self, method: Method) -> &Self {
|
||||
self.method = method;
|
||||
self
|
||||
}
|
||||
|
||||
pub fn set_path(&mut self, path: String) -> &Self {
|
||||
self.path = path;
|
||||
self
|
||||
}
|
||||
|
||||
pub fn set_header(&mut self, field: &str, value: String) -> &Self {
|
||||
self.headers.insert(String::from(field), value);
|
||||
self
|
||||
}
|
||||
|
||||
pub fn set_body(&mut self, body: String) -> &Self {
|
||||
self.body = body;
|
||||
self.headers.insert("Content-Length".to_string(), self.body.len().to_string());
|
||||
self
|
||||
}
|
||||
|
||||
pub fn as_string(&self) -> String {
|
||||
let method;
|
||||
match self.method {
|
||||
Method::GET => method = "GET",
|
||||
Method::POST => method = "POST"
|
||||
}
|
||||
|
||||
let headers_as_string = self.headers.iter().map(|(k, v)| format!("{}: {}", k, v)).collect::<Vec<String>>().join("\r\n");
|
||||
|
||||
return format!("{} {} HTTP/1.0\r\n\
|
||||
{}\r\n\r\n\
|
||||
{}\r\n", method, self.path, headers_as_string, self.body)
|
||||
}
|
||||
}
|
2
container/src/shared/api/mod.rs
Normal file
@@ -0,0 +1,2 @@
pub mod request;
pub mod api;
220
container/src/shared/api/request.rs
Normal file
@@ -0,0 +1,220 @@
|
||||
use std::fmt::{Display, Formatter};
|
||||
use serde::{Deserialize, Serialize};
|
||||
use serde::de::Unexpected::Str;
|
||||
use serde_repr::{Deserialize_repr, Serialize_repr};
|
||||
|
||||
use crate::shared::api::api::{API, Method, Request, Result};
|
||||
use crate::shared::api::api::Method::POST;
|
||||
|
||||
#[derive(Deserialize)]
|
||||
pub struct PingResponse {
|
||||
pub received: u128
|
||||
}
|
||||
|
||||
pub struct PingRequest {
|
||||
request: Request
|
||||
}
|
||||
|
||||
impl PingRequest {
|
||||
pub fn new() -> Self {
|
||||
PingRequest {
|
||||
request: Request::new(String::from("/ping"))
|
||||
}
|
||||
}
|
||||
pub fn request(&self, api: &mut API) -> Result<PingResponse> {
|
||||
let result: PingResponse = api.request_with_err(&self.request)?.body()?;
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
|
||||
pub struct ErrorRequest {
|
||||
request: Request
|
||||
}
|
||||
|
||||
impl ErrorRequest {
|
||||
pub fn new() -> Self {
|
||||
ErrorRequest {
|
||||
request: Request::new(String::from("/error"))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&self, api: &mut API) -> Result<()> {
|
||||
api.request_with_err(&self.request)?.body()?;
|
||||
// should never call Ok
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
pub struct InfoResponse {
|
||||
pub container_id: String
|
||||
}
|
||||
|
||||
pub struct InfoRequest {
|
||||
request: Request
|
||||
}
|
||||
|
||||
impl InfoRequest {
|
||||
pub fn new() -> Self {
|
||||
InfoRequest{
|
||||
request: Request::new(String::from("/info"))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&self, api: &mut API) -> Result<InfoResponse> {
|
||||
let result: InfoResponse = api.request_with_err(&self.request)?.body()?;
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize_repr, Deserialize_repr)]
|
||||
#[repr(u8)]
|
||||
pub enum ConfigRunLevel {
|
||||
User = 1,
|
||||
Container = 2,
|
||||
Forever = 3
|
||||
}
|
||||
|
||||
impl Display for ConfigRunLevel {
|
||||
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
|
||||
write!(f, "{:?}", self)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Debug, Serialize_repr, Deserialize_repr)]
|
||||
#[repr(u8)]
|
||||
pub enum ConfigNetworkMode {
|
||||
Off = 1,
|
||||
Full = 2,
|
||||
Host = 3,
|
||||
Docker = 4,
|
||||
None = 5
|
||||
}
|
||||
|
||||
impl Display for ConfigNetworkMode {
|
||||
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
|
||||
write!(f, "{:?}", self)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
pub struct ConfigGetResponse {
|
||||
pub network_mode: ConfigNetworkMode,
|
||||
pub configurable: bool,
|
||||
pub run_level: ConfigRunLevel,
|
||||
pub startup_information: bool,
|
||||
pub exit_after: String,
|
||||
pub keep_on_exit: bool
|
||||
}
|
||||
|
||||
pub struct ConfigGetRequest {
|
||||
request: Request
|
||||
}
|
||||
|
||||
impl ConfigGetRequest {
|
||||
pub fn new() -> ConfigGetRequest {
|
||||
ConfigGetRequest{
|
||||
request: Request::new(String::from("/config"))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&self, api: &mut API) -> Result<ConfigGetResponse> {
|
||||
let result: ConfigGetResponse = api.request_with_err(&self.request)?.body()?;
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
pub struct ConfigPostBody {
|
||||
pub network_mode: Option<ConfigNetworkMode>,
|
||||
pub configurable: Option<bool>,
|
||||
pub run_level: Option<ConfigRunLevel>,
|
||||
pub startup_information: Option<bool>,
|
||||
pub exit_after: Option<String>,
|
||||
pub keep_on_exit: Option<bool>
|
||||
}
|
||||
|
||||
pub struct ConfigPostRequest {
|
||||
request: Request,
|
||||
pub body: ConfigPostBody
|
||||
}
|
||||
|
||||
impl ConfigPostRequest {
|
||||
pub fn new() -> ConfigPostRequest {
|
||||
let mut request = Request::new(String::from("/config"));
|
||||
request.set_method(Method::POST);
|
||||
|
||||
ConfigPostRequest {
|
||||
request,
|
||||
body: ConfigPostBody{
|
||||
network_mode: None,
|
||||
configurable: None,
|
||||
run_level: None,
|
||||
startup_information: None,
|
||||
exit_after: None,
|
||||
keep_on_exit: None
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&mut self, api: &mut API) -> Result<()> {
|
||||
self.request.set_body(serde_json::to_string(&self.body)?);
|
||||
api.request_with_err(&self.request)?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Deserialize)]
|
||||
pub struct AuthGetResponse {
|
||||
pub user: String,
|
||||
pub has_password: bool
|
||||
}
|
||||
|
||||
pub struct AuthGetRequest {
|
||||
request: Request
|
||||
}
|
||||
|
||||
impl AuthGetRequest {
|
||||
pub fn new() -> AuthGetRequest {
|
||||
AuthGetRequest{
|
||||
request: Request::new(String::from("/auth"))
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&self, api: &mut API) -> Result<AuthGetResponse> {
|
||||
let result: AuthGetResponse = api.request_with_err(&self.request)?.body()?;
|
||||
Ok(result)
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
pub struct AuthPostBody {
|
||||
pub user: Option<String>,
|
||||
pub password: Option<String>
|
||||
}
|
||||
|
||||
pub struct AuthPostRequest {
|
||||
request: Request,
|
||||
pub body: AuthPostBody
|
||||
}
|
||||
|
||||
impl AuthPostRequest {
|
||||
pub fn new() -> AuthPostRequest {
|
||||
let mut request = Request::new(String::from("/auth"));
|
||||
request.set_method(POST);
|
||||
|
||||
AuthPostRequest {
|
||||
request,
|
||||
body: AuthPostBody{
|
||||
user: None,
|
||||
password: None
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub fn request(&mut self, api: &mut API) -> Result<()> {
|
||||
self.request.set_body(serde_json::to_string(&self.body)?);
|
||||
api.request_with_err(&self.request)?;
|
||||
Ok(())
|
||||
}
|
||||
}
|
19
container/src/shared/logging/logger.rs
Normal file
@@ -0,0 +1,19 @@
use log::{info, Metadata, Record};

pub struct Logger;

impl log::Log for Logger {
    fn enabled(&self, _metadata: &Metadata) -> bool {
        true
    }

    fn log(&self, record: &Record) {
        if self.enabled(record.metadata()) {
            println!("{}", record.args().to_string())
        }
    }

    fn flush(&self) {
        // output goes straight to stdout, so there is nothing to flush
    }
}
11
container/src/shared/logging/mod.rs
Normal file
@@ -0,0 +1,11 @@
use log::{LevelFilter, SetLoggerError};

pub mod logger;

pub use logger::Logger;

static LOGGER: Logger = Logger;

pub fn init_logger(level: LevelFilter) -> Result<(), SetLoggerError> {
    log::set_logger(&LOGGER).map(|()| log::set_max_level(level))
}
2
container/src/shared/mod.rs
Normal file
@@ -0,0 +1,2 @@
pub mod api;
pub mod logging;
47
examples/Dockerfile
Normal file
@@ -0,0 +1,47 @@
FROM golang:1.17 as server

WORKDIR /docker4ssh

COPY ["../", "."]

RUN apt update && \
    apt install -y make sqlite3 && \
    apt clean && \
    apt autoremove -y && \
    rm -rf /var/lib/apt/lists/*

RUN make BUILDDIR=build/ build-server


FROM rust:1.56 as client

WORKDIR /docker4ssh

COPY ../ .

RUN apt update && \
    apt install -y make

RUN make BUILDDIR=build/ build-client


FROM alpine:latest as extra

WORKDIR /docker4ssh

COPY ../ .

RUN apk add make

RUN make BUILDDIR=build/ build-extra


FROM alpine:latest

WORKDIR /docker4ssh

COPY --from=server /docker4ssh/build/* .
COPY --from=client /docker4ssh/build/docker4ssh .
COPY --from=extra /docker4ssh/build/* .

ENTRYPOINT docker4ssh
11
examples/docker-compose.yml
Normal file
@@ -0,0 +1,11 @@
version: '3'

services:
  docker4ssh:
    build: .
    ports:
      - "8642:8642"
    volumes:
      - "./docker4ssh.log:/docker4ssh/docker4ssh.log"
    restart: unless-stopped
    container_name: docker4ssh
14
examples/docker4ssh.service
Normal file
@@ -0,0 +1,14 @@
[Unit]
Description=docker4ssh - docker containers and more via ssh
After=network.target docker.service
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=simple
WorkingDirectory=/etc/docker4ssh
ExecStart=/usr/bin/docker4ssh
Restart=on-failure

[Install]
WantedBy=multi-user.target
28
extra/database.sql
Normal file
@@ -0,0 +1,28 @@
create table if not exists auth
(
    container_id text not null,
    user text,
    password blob
);

create unique index if not exists auth_container_id_uindex
    on auth (container_id);

create table if not exists settings
(
    container_id text not null,
    network_mode enum default 3 not null,
    configurable bool default 0 not null,
    run_level enum default 1 not null,
    startup_information bool default 1 not null,
    exit_after text default '' not null,
    keep_on_exit bool default 0 not null,
    check (configurable IN (0, 1)),
    check (keep_on_exit IN (0, 1)),
    check (network_mode IN (1, 2, 3, 4, 5)),
    check (run_level IN (1, 2, 3)),
    check (startup_information IN (0, 1))
);

create unique index if not exists settings_container_id_uindex
    on settings (container_id);
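
-- Illustrative only, not part of the schema: a settings row that mirrors the
-- [profile.default] values from extra/docker4ssh.conf, showing how those options
-- map onto the integer-encoded columns (the container id is made up):
-- insert into settings (container_id, network_mode, configurable, run_level,
--                       startup_information, exit_after, keep_on_exit)
-- values ('0123456789ab', 3, 1, 1, 1, '', 0);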
57
extra/docker4ssh.conf
Normal file
@@ -0,0 +1,57 @@
[profile]
# the directory where profiles are stored
Dir = "./profile/"

# default settings for profiles
[profile.default]
Password = ""
NetworkMode = 3
Configurable = true
RunLevel = 1
StartupInformation = true
ExitAfter = ""
KeepOnExit = false

# settings for dynamic container creation
[profile.dynamic]
Enable = true
Password = ""
NetworkMode = 3
Configurable = true
RunLevel = 1
StartupInformation = true
ExitAfter = ""
KeepOnExit = false

[api]
Port = 8420

[api.configure]
Binary = "./configure"
Man = "./man/configure.1"

[ssh]
# the default ssh port. if blank, port 2222 will be used
Port = 2222
# path to the ssh private key. if blank, a random key will be generated
Keyfile = "./docker4ssh.key"
# password of the ssh private key
Passphrase = ""

[database]
# path to sqlite3 database file. there may be support for other databases in the future
Sqlite3File = "./docker4ssh.sqlite3"

[network.default]
Subnet = "172.69.0.0/16"

[network.isolate]
Subnet = "172.96.0.0/16"

[logging]
# the loglevel. available levels are: debug, info, warn, error, fatal
Level = "info"
ConsoleOutput = true
ConsoleError = true
OutputFile = "./docker4ssh.log"
ErrorFile = "./docker4ssh.log"
33
extra/profile.conf
Normal file
@@ -0,0 +1,33 @@
#[chad]
# REQUIRED - (ssh) username. can be specified via regex.
# to use regex, put a 'regex:' in front of it
# Username = "chad"

# OPTIONAL - (ssh) password. can be specified via regex or as hash.
# to use regex, put a 'regex:' in front of it.
# if you want to specify a hash, put a 'sha1:', 'sha256:' or 'sha512:' at the beginning of it
# Password = ""

# OPTIONAL - the network mode. must be one of the following: 1 (off) | 2 (isolate) | 3 (host) | 4 (docker) | 5 (none)
# NetworkMode = 3

# OPTIONAL - if the container should be configurable
# Configurable = true

# OPTIONAL - the container run behavior. must be one of the following: 1 (user) | 2 (container) | 3 (forever)
# RunLevel = 1

# OPTIONAL - if information should be shown about the container on startup
# StartupInformation = true

# OPTIONAL - a process name to exit after it has finished
# ExitAfter = ""

# OPTIONAL - do not delete the container when it stops working
# KeepOnExit = false

# REQUIRED OR `Container` - the image to connect to
# Image = "archlinux:latest"

# REQUIRED OR `Image` - the container id to connect to
# Container = ""
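
# A complete minimal profile, taken from the EXAMPLE section of profile.conf(5)
# and left commented out here so this template stays inactive:
#
# [test]
# Username = "test"
# Image = "alpine:latest"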
96
man/configure.1
Normal file
@@ -0,0 +1,96 @@
|
||||
.TH configure 1 "December 13, 2021" configure "configure - manage docker4ssh container from within"
|
||||
|
||||
.SH NAME
|
||||
configure - manage docker4ssh containers from within them
|
||||
|
||||
.SH AUTH GET
|
||||
This can only be used when calling \fIauth get\fR.
|
||||
.br
|
||||
It returns the current username (with which you can login to the container), if a password is set and if the container is reachable for other ssh connections.
|
||||
|
||||
.SH AUTH SET
|
||||
This can only be used when calling \fIauth set\fR.
|
||||
.TP
|
||||
|
||||
\fB--user\fR = user
|
||||
The container username. It is used if you want to (re)connect to the container.
|
||||
.TP
|
||||
|
||||
\fB--password\fR = password
|
||||
The container password. If empty, the authentication gets removed.
|
||||
|
||||
.SH ERROR
|
||||
This can only be used when calling \fIerror\fR.
|
||||
.br
|
||||
The subcommand only exists for testing purposes and always returns a 400 error.
|
||||
|
||||
.SH INFO
|
||||
This can only be used when calling \fIinfo\fR.
|
||||
.br
|
||||
It returns info about the container. Currently only the full container id is shown.
|
||||
|
||||
.SH PING
|
||||
This can only be used when calling \fIping\fR.
|
||||
.br
|
||||
It returns the ping to the docker4ssh host with a nice little message :)
|
||||
|
||||
.SH CONFIG GET
|
||||
This can only be used when calling \fIconfig get\fR.
|
||||
.br
|
||||
It returns the container configuration with the details NetworkMode, Configurable, RunLevel, ExitAfter and KeepOnExit (\fIprofile.conf (5)\fR).
|
||||
|
||||
.SH CONFIG SET
|
||||
This can only be used when calling \fIconfig set\fR.
|
||||
.TP
|
||||
|
||||
\fB--configurable\fR = true | false
If the container should be configurable (i.e. whether the binary this manual describes can be called).
Once disabled, this can only be reverted by editing the database manually.
|
||||
.TP
|
||||
|
||||
\fB--exit-after\fR = exit after
|
||||
Process name to stop the container after the process ends.
|
||||
.TP
|
||||
|
||||
\fB--keep-on-exit\fR = true | false
|
||||
If the container should or should not be deleted when it stops working.
|
||||
.TP
|
||||
|
||||
\fB--network-mode\fR = 1 | 2 | 3 | 4 | 5
|
||||
This describes the behavior of the container's network.
Must be one of the following:
1 (Off): Disables networking completely.
2 (Isolate): Isolates the container from the host and the host's network. Therefore, no configuration can be changed from within the container.
3 (Host): Default docker network.
4 (Docker): Same as \fI3\fR, but the container is placed in a docker4ssh controlled subnet. This is useful to tell normal and docker4ssh containers apart.
5 (None): Disables all isolation between the docker container and the host, so the container can act as the host inside the network and has direct access to the host's network.
|
||||
.TP
|
||||
|
||||
\fB--run-level\fR = 1 | 2 | 3
|
||||
This describes the container behavior when the user connection to a container is stopped.
Must be one of the following:
1 (User): The container stops working if no user is connected to it anymore.
2 (Container): The container keeps running when no user is connected, provided \fIExitAfter\fR is specified.
3 (Forever): The container runs forever.
|
||||
|
||||
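.SH EXAMPLES
A few illustrative invocations of the subcommands documented above. They are not taken from the project itself; the values are made up, only the subcommands and flags are the documented ones.
.br
>>> configure ping
.br
>>> configure config get
.br
>>> configure config set --network-mode 3 --run-level 2
.br
>>> configure auth set --user chad --password secret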
.SH BUGS
|
||||
Discovered a bug? Well, then it should get fixed as fast as possible. Feel free to open a new issue (https://github.com/ByteDream/docker4ssh/issues) or create a pull request (https://github.com/ByteDream/docker4ssh/pulls) on GitHub.
|
||||
|
||||
.SH AUTHOR
|
||||
Written by ByteDream (https://github.com/ByteDream)
|
||||
|
||||
.SH COPYRIGHT
|
||||
Copyright (C) 2021 ByteDream
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as
|
||||
published by the Free Software Foundation, either version 3 of the
|
||||
License, or (at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
36
man/docker4ssh.1
Normal file
@@ -0,0 +1,36 @@
|
||||
.TH docker4ssh 1 "December 13, 2021" docker4ssh "docker4ssh"
|
||||
|
||||
.SH NAME
|
||||
docker4ssh - docker containers and more via ssh
|
||||
|
||||
.SH FILES
|
||||
/etc/docker4ssh/docker4ssh.conf
|
||||
The configuration file. See \fIdocker4ssh.conf(5)\fR for more information
|
||||
|
||||
/etc/docker4ssh/profile/*
|
||||
Directory containing profiles. See \fIprofile.conf(5)\fR for more information
|
||||
|
||||
.SH SEE ALSO
|
||||
docker4ssh.conf(5), profile.conf(5)
|
||||
|
||||
.SH BUGS
|
||||
Discovered a bug? Well, then it should get fixed as fast as possible. Feel free to open a new issue (https://github.com/ByteDream/docker4ssh/issues) or create a pull request (https://github.com/ByteDream/docker4ssh/pulls) on GitHub.
|
||||
|
||||
.SH AUTHOR
|
||||
Written by ByteDream (https://github.com/ByteDream)
|
||||
|
||||
.SH COPYRIGHT
|
||||
Copyright (C) 2021 ByteDream
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU Affero General Public License as
|
||||
published by the Free Software Foundation, either version 3 of the
|
||||
License, or (at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU Affero General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU Affero General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
169
man/docker4ssh.conf.5
Normal file
@@ -0,0 +1,169 @@
|
||||
.TH docker4ssh.conf 5 "December 13, 2021" docker4ssh.conf "docker4ssh configuration file"
|
||||
|
||||
.SH SYNOPSIS
|
||||
.TP
|
||||
/etc/docker4ssh/docker4ssh.conf
|
||||
|
||||
.SH PROFILE
|
||||
\fBDir\fR = /path/to/directory
|
||||
.TP
|
||||
Set the path to the directory where profiles are stored in
|
||||
|
||||
.SH PROFILE.DEFAULT
|
||||
.TP
|
||||
\fBPassword\fR = password
|
||||
Default password for every connection.
|
||||
This is used unless some other password was specified.
|
||||
The password can be specified as plain text, regex or hash:
|
||||
Regex: Put \fIregex:\fR in front of it. The regex must be \fBgo\fR / \fBgolang\fR compatible. Visit \fIregex101.com\fR to validate your regex.
|
||||
Hash: Put \fIsha1:\fR, \fIsha256:\fR or \fIsha512:\fR in front of it. Note that the hash must be hashed with the prefix algorithm.
|
||||
.TP
|
||||
|
||||
\fBNetworkMode\fR = 1 | 2 | 3 | 4 | 5
|
||||
Default network mode for every connection.
|
||||
NetworkMode describes the behavior of the container's network.
Must be one of the following:
1 (Off): Disables networking completely.
2 (Isolate): Isolates the container from the host and the host's network. Therefore, no configuration can be changed from within the container.
3 (Host): Default docker network.
4 (Docker): Same as \fI3\fR, but the container is placed in a docker4ssh controlled subnet. This is useful to tell normal and docker4ssh containers apart.
5 (None): Disables all isolation between the docker container and the host, so the container can act as the host inside the network and has direct access to the host's network.
|
||||
.TP
|
||||
|
||||
\fBConfigurable\fR = true | false
|
||||
Default configurable setting for every connection.
|
||||
Configurable describes if the container should be configurable from within it. This means that the connected user is able to change all settings which are described here.
|
||||
Must be true or false.
|
||||
.TP
|
||||
|
||||
\fBRunLevel\fR = 1 | 2 | 3
|
||||
Default run level for every connection.
|
||||
RunLevel describes the container behavior when the user connection to a container is stopped.
Must be one of the following:
1 (User): The container stops working if no user is connected to it anymore.
2 (Container): The container keeps running when no user is connected, provided \fIExitAfter\fR is specified.
3 (Forever): The container runs forever.
|
||||
.br
|
||||
Note that the container always exits, independent of its RunLevel, when the process specified via \fIExitAfter\fR ends.
|
||||
.TP
|
||||
|
||||
\fBStartupInformation\fR = true | false
|
||||
Default startup information setting for every connection.
|
||||
StartupInformation specifies if information about the container (id, network mode, ...) should be shown when a user connects to it.
|
||||
Must be true or false.
|
||||
.TP
|
||||
|
||||
\fBExitAfter\fR = exit after
|
||||
Default exit after process for every connection.
ExitAfter is a process name; once that process ends, the container stops running.
|
||||
.TP
|
||||
|
||||
\fBKeepOnExit\fR = true | false
|
||||
Default keep on exit setting for every connection.
|
||||
KeepOnExit specifies if the container should be saved when it stops working.
|
||||
Must be true or false.
|
||||
|
||||
.SH PROFILE.DYNAMIC
|
||||
.TP
|
||||
\fBEnable\fR = true | false
|
||||
If dynamic container creation should be enabled.
|
||||
.TP
|
||||
|
||||
\fBPassword\fR = password
|
||||
See \fIPROFILE.DEFAULT.Password\fR
|
||||
.TP
|
||||
|
||||
\fBNetworkMode\fR = 1 | 2 | 3 | 4 | 5
|
||||
See \fIPROFILE.DEFAULT.NetworkMode\fR
|
||||
.TP
|
||||
|
||||
\fBConfigurable\fR = true | false
|
||||
See \fIPROFILE.DEFAULT.Configurable\fR
|
||||
.TP
|
||||
|
||||
\fBRunLevel\fR = 1 | 2 | 3
|
||||
See \fIPROFILE.DEFAULT.RunLevel\fR
|
||||
.TP
|
||||
|
||||
\fBStartupInformation\fR = true | false
|
||||
See \fIPROFILE.DEFAULT.StartupInformation\fR
|
||||
.TP
|
||||
|
||||
\fBExitAfter\fR = exit after
|
||||
See \fIPROFILE.DEFAULT.ExitAfter\fR
|
||||
.TP
|
||||
|
||||
\fBKeepOnExit\fR = true | false
|
||||
See \fIPROFILE.DEFAULT.KeepOnExit\fR
|
||||
|
||||
.SH API
|
||||
.TP
|
||||
\fBPort\fR = port
|
||||
The api port for container clients to communicate with the server.
|
||||
.TP
|
||||
|
||||
\fBConfigureBinary\fR = /path/to/configure/binary
|
||||
Path to the configure binary which is used inside of containers to communicate with the host and configure itself.
|
||||
|
||||
.SH SSH
|
||||
.TP
|
||||
\fBPort\fR = port
|
||||
Port of the ssh server to serve.
|
||||
.TP
|
||||
|
||||
\fBKey\fR = /path/to/ssh/key
|
||||
Path to the ssh private key for the ssh server.
|
||||
|
||||
To generate a new ssh key, use:
|
||||
>>> ssh-keygen -t ed25519
|
||||
.TP
|
||||
|
||||
\fBPassword\fR = password
|
||||
Password for the ssh private key.
|
||||
|
||||
.SH DATABASE
|
||||
.TP
|
||||
\fBSqlite3File\fR = /path/to/sqlite3/file
|
||||
Path of the database file where all container specific configurations are stored in.
|
||||
|
||||
.SH NETWORK
|
||||
.TP
|
||||
|
||||
.SH NETWORK.DEFAULT
|
||||
.TP
|
||||
\fBSubnet\fR = subnet.ip
|
||||
Ip and mask of the subnet which is used for \fINetworkMode 4 (Docker)\fR.
|
||||
.TP
|
||||
|
||||
.SH NETWORK.ISOLATE
|
||||
.TP
|
||||
\fBSubnet\fR = subnet.ip
|
||||
IP and mask of the subnet which is used for \fINetworkMode 2 (Isolate)\fR.
|
||||
.TP
|
||||
|
||||
.SH LOGGING
|
||||
.TP
|
||||
\fBLevel\fR = debug | info | warn | error | fatal
|
||||
Logging level.
|
||||
.TP
|
||||
|
||||
\fBConsoleOutput\fR = bool
|
||||
If normal output should be logged to the console.
|
||||
.TP
|
||||
|
||||
\fBConsoleError\fR = bool
|
||||
If error output should be logged to the console.
|
||||
.TP
|
||||
|
||||
\fBOutputFile\fR = /path/to/output/file
|
||||
Path to the output file.
|
||||
.TP
|
||||
|
||||
\fBErrorFile\fR = /path/to/error/file
|
||||
Path to the error file.
|
||||
|
||||
.SH SEE ALSO
|
||||
docker4ssh(1), profile.conf(5)
|
||||
|
||||
.SH AUTHORS
|
||||
Written by ByteDream (https://github.com/ByteDream)
|
81
man/profile.conf.5
Normal file
@@ -0,0 +1,81 @@
|
||||
.TH profile.conf 5 "December 13, 2021" profile.conf "docker4ssh profile configuration file"
|
||||
|
||||
.SH SYNOPSIS
|
||||
.TP
|
||||
/etc/docker4ssh/profile/*
|
||||
|
||||
.SH SECTION NAME
|
||||
.TP
|
||||
A representative name for the profile
|
||||
|
||||
.SH KEYS
|
||||
\fBUsername\fR = username
|
||||
Username for this profile.
|
||||
The username can be specified as plain text or regex:
|
||||
Regex: Put \fIregex:\fR in front of it. The regex must be \fBgo\fR / \fBgolang\fR compatible. Visit \fIregex101.com\fR to validate your regex.
|
||||
.TP
|
||||
|
||||
.TP
|
||||
\fBPassword\fR = password
|
||||
Password for the profile.
|
||||
The password can be specified as plain text, regex or hash:
|
||||
Regex: Put \fIregex:\fR in front of it. The regex must be \fBgo\fR / \fBgolang\fR compatible. Visit \fIregex101.com\fR to validate your regex.
|
||||
Hash: Put \fIsha1:\fR, \fIsha256:\fR or \fIsha512:\fR in front of it. Note that the hash must be hashed with the prefix algorithm.
|
||||
.TP
|
||||
|
||||
\fBNetworkMode\fR = 1 | 2 | 3 | 4 | 5
|
||||
Default network mode for every connection.
|
||||
NetworkMode describes the behavior of the container's network.
Must be one of the following:
1 (Off): Disables networking completely.
2 (Isolate): Isolates the container from the host and the host's network. Therefore, no configuration can be changed from within the container.
3 (Host): Default docker network.
4 (Docker): Same as \fI3\fR, but the container is placed in a docker4ssh controlled subnet. This is useful to tell normal and docker4ssh containers apart.
5 (None): Disables all isolation between the docker container and the host, so the container can act as the host inside the network and has direct access to the host's network.
|
||||
.TP
|
||||
|
||||
\fBConfigurable\fR = true | false
|
||||
Default configurable setting for every connection.
|
||||
Configurable describes if the container should be configurable from within it. This means that the connect user is able to change all settings which are described here.
|
||||
Must be true or false.
|
||||
.TP
|
||||
|
||||
\fBRunLevel\fR = 1 | 2 | 3
|
||||
Default run level for every connection.
|
||||
RunLevel describes the container behavior when the user connection to a container is stopped.
Must be one of the following:
1 (User): The container stops working if no user is connected to it anymore.
2 (Container): The container keeps running when no user is connected, provided \fIExitAfter\fR is specified.
3 (Forever): The container runs forever.
|
||||
.br
|
||||
Note that the container always exits, independent of its RunLevel, when the process specified via \fIExitAfter\fR ends.
|
||||
.TP
|
||||
|
||||
\fBStartupInformation\fR = true | false
|
||||
Default startup information setting for every connection.
|
||||
StartupInformation specifies if information about the container (id, network mode, ...) should be shown when a user connects to it.
|
||||
Must be true or false.
|
||||
.TP
|
||||
|
||||
\fBExitAfter\fR = exit after
|
||||
Default exit after process for every connection.
ExitAfter is a process name; once that process ends, the container stops running.
|
||||
.TP
|
||||
|
||||
\fBKeepOnExit\fR = true | false
|
||||
Default keep on exit setting for every connection.
|
||||
KeepOnExit specifies if the container should be saved when it stops working.
|
||||
Must be true or false.
|
||||
|
||||
.SH EXAMPLE
|
||||
[test]
|
||||
.br
|
||||
Username = "test"
|
||||
.br
|
||||
Image = "alpine:latest"
|
||||
|
||||
.SH SEE ALSO
|
||||
docker4ssh(1), docker4ssh.conf(5)
|
||||
|
||||
.SH AUTHORS
|
||||
Written by ByteDream (https://github.com/ByteDream)
|
203
protocol/configure.yaml
Normal file
@@ -0,0 +1,203 @@
|
||||
openapi: 3.0.1
|
||||
info:
|
||||
title: docker4ssh
|
||||
description: Communicate between a container and the docker4ssh host
|
||||
version: 0.1.0
|
||||
license:
|
||||
name: GNU Affero General Public License v3.0
|
||||
url: https://www.gnu.org/licenses/agpl-3.0.txt
|
||||
contact:
|
||||
name: ByteDream
|
||||
url: https://github.com/ByteDream
|
||||
servers:
|
||||
- url: 'unix:///var/run/docker4ssh.sock'
|
||||
paths:
|
||||
/ping:
|
||||
get:
|
||||
summary: Ping the server to check its latency and whether it is alive
|
||||
responses:
|
||||
200:
|
||||
description: OK
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
received:
|
||||
type: integer
|
||||
description: Unix nano timestamp when the message was received
|
||||
/error:
|
||||
get:
|
||||
summary: Sends an error with code 400, only for test purposes
|
||||
responses:
|
||||
400:
|
||||
description: Controlled bad return code
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
message:
|
||||
type: string
|
||||
description: Example error message
|
||||
/info:
|
||||
get:
|
||||
summary: Get information about the current container
|
||||
responses:
|
||||
200:
|
||||
description: OK
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
container_id:
|
||||
type: string
|
||||
description: ID of the container
|
||||
/config:
|
||||
get:
|
||||
summary: Get the configuration of the current container
|
||||
responses:
|
||||
200:
|
||||
description: OK
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
network_mode:
|
||||
type: integer
|
||||
enum:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
- 5
|
||||
description: The container network mode. Take a look at server/docker/docker.go for extended information
|
||||
configurable:
|
||||
type: boolean
|
||||
description: If the container should be configurable from within
|
||||
run_level:
|
||||
type: integer
|
||||
enum:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
description: The container run level / behavior. Take a look at server/docker/docker.go for extended information
|
||||
startup_information:
|
||||
type: boolean
|
||||
description: If information about the container should be shown when a user connects
|
||||
exit_after:
|
||||
type: string
|
||||
description: The process name after whose exit the container stops running
|
||||
keep_on_exit:
|
||||
type: boolean
|
||||
description: If the container should not be deleted after exit
|
||||
post:
|
||||
summary: Set some config settings
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
network_mode:
|
||||
type: integer
|
||||
enum:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
- 4
|
||||
- 5
|
||||
description: The container network mode. Take a look at server/docker/docker.go for extended information
|
||||
configurable:
|
||||
type: boolean
|
||||
description: If the container should be configurable from within
|
||||
run_level:
|
||||
type: integer
|
||||
enum:
|
||||
- 1
|
||||
- 2
|
||||
- 3
|
||||
description: The container run level / behavior. Take a look at server/docker/docker.go for extended information
|
||||
startup_information:
|
||||
type: boolean
|
||||
description: If information about the container should be shown when a user connects
|
||||
exit_after:
|
||||
type: string
|
||||
description: The process name after whose exit the container stops running
|
||||
keep_on_exit:
|
||||
type: boolean
|
||||
description: If the container should not be deleted after exit
|
||||
responses:
|
||||
200:
|
||||
description: Settings were applied
|
||||
406:
|
||||
description: One or more settings could not be changed
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
message:
|
||||
type: string
|
||||
description: Human readable description why the changes could not be made
|
||||
rejected:
|
||||
type: array
|
||||
description: The rejected changes
|
||||
items:
|
||||
type: object
|
||||
description: The rejected setting + a description why it couldn't be processed
|
||||
properties:
|
||||
name:
|
||||
type: string
|
||||
description: Name of the setting
|
||||
description:
|
||||
type: string
|
||||
description: Description of the processing error
|
||||
|
||||
/auth:
|
||||
get:
|
||||
summary: Returns the current username used for ssh authentication and if a password is set
|
||||
responses:
|
||||
200:
|
||||
description: OK
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
user:
|
||||
type: string
|
||||
description: Username
|
||||
has_password:
|
||||
type: boolean
|
||||
description: If a password is set
|
||||
404:
|
||||
description: Auth does not exist
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: string
|
||||
description: Message that the auth does not exists
|
||||
post:
|
||||
summary: Changes authentication for the current container
|
||||
requestBody:
|
||||
required: true
|
||||
content:
|
||||
application/json:
|
||||
schema:
|
||||
type: object
|
||||
properties:
|
||||
user:
|
||||
type: string
|
||||
description: The new username. Cannot be empty but nullable
|
||||
password:
|
||||
type: string
|
||||
description: The new password. If empty or null, the complete authentication gets deleted
|
||||
responses:
|
||||
200:
|
||||
description: Configuration was changed
|
||||
406:
|
||||
description: The given username was empty
|
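For illustration, a minimal Go sketch of a client inside a container calling the /ping endpoint described above. The address 10.0.0.1:8080 is a placeholder; the actual host and port depend on the network mode and the configured api.Port.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type pingResponse struct {
	// Unix nano timestamp when the server received the request
	Received int64 `json:"received"`
}

func main() {
	// placeholder address; use the docker4ssh api host and port of your setup
	resp, err := http.Get("http://10.0.0.1:8080/ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var ping pingResponse
	if err := json.NewDecoder(resp.Body).Decode(&ping); err != nil {
		panic(err)
	}
	fmt.Println("server received the ping at", ping.Received)
}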
123
server/api/api.go
Normal file
@@ -0,0 +1,123 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"docker4ssh/config"
|
||||
"docker4ssh/ssh"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type EndpointHandler struct {
|
||||
http.Handler
|
||||
|
||||
auth bool
|
||||
|
||||
get func(http.ResponseWriter, *http.Request, *ssh.User) (interface{}, int)
|
||||
post func(http.ResponseWriter, *http.Request, *ssh.User) (interface{}, int)
|
||||
}
|
||||
|
||||
func (h *EndpointHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
|
||||
ip := strings.Split(r.RemoteAddr, ":")[0]
|
||||
|
||||
zap.S().Infof("User connected to api with remote address %s", ip)
|
||||
|
||||
w.Header().Add("Content-Type", "application/json")
|
||||
|
||||
user := ssh.GetUser(ip)
|
||||
	// if authentication is required and no user could be found, respond with an error
	if h.auth && user == nil {
		zap.S().Errorf("Could not find api user with ip %s", ip)
		w.WriteHeader(http.StatusUnauthorized)
		json.NewEncoder(w).Encode(APIError{Message: "unauthorized"})
		return
|
||||
} else if user != nil {
|
||||
zap.S().Debugf("API ip %s is %s", ip, user.ID)
|
||||
}
|
||||
|
||||
raw := bytes.Buffer{}
|
||||
if r.ContentLength > 0 {
|
||||
io.Copy(&raw, r.Body)
|
||||
defer r.Body.Close()
|
||||
if !json.Valid(raw.Bytes()) {
|
||||
zap.S().Errorf("API user %s sent invalid body", ip)
|
||||
w.WriteHeader(http.StatusNotAcceptable)
|
||||
json.NewEncoder(w).Encode(APIError{Message: "invalid body"})
|
||||
return
|
||||
}
|
||||
r.Body = ioutil.NopCloser(&raw)
|
||||
}
|
||||
|
||||
zap.S().Debugf("API user %s request - \"%s %s %s\" \"%s\" \"%s\"", ip, r.Method, r.URL.Path, r.Proto, r.UserAgent(), raw.String())
|
||||
|
||||
var response interface{}
|
||||
var code int
|
||||
|
||||
switch r.Method {
|
||||
case http.MethodGet:
|
||||
if h.get != nil {
|
||||
response, code = h.get(w, r, user)
|
||||
}
|
||||
case http.MethodPost:
|
||||
if h.post != nil {
|
||||
response, code = h.post(w, r, user)
|
||||
}
|
||||
}
|
||||
|
||||
if response == nil && code == 0 {
|
||||
zap.S().Infof("API user %s sent invalid method: %s", ip, r.Method)
|
||||
response = APIError{Message: fmt.Sprintf("invalid method '%s'", r.Method)}
|
||||
code = http.StatusConflict
|
||||
} else {
|
||||
zap.S().Infof("API user %s issued %s successfully", ip, r.URL.Path)
|
||||
}
|
||||
|
||||
w.WriteHeader(code)
|
||||
if response != nil {
|
||||
json.NewEncoder(w).Encode(response)
|
||||
}
|
||||
}
|
||||
|
||||
func ServeAPI(config *config.Config) (errChan chan error, closer func() error) {
|
||||
errChan = make(chan error, 1)
|
||||
|
||||
mux := http.NewServeMux()
|
||||
|
||||
mux.Handle("/ping", &EndpointHandler{
|
||||
get: PingGet,
|
||||
})
|
||||
mux.Handle("/error", &EndpointHandler{
|
||||
get: ErrorGet,
|
||||
})
|
||||
mux.Handle("/info", &EndpointHandler{
|
||||
get: InfoGet,
|
||||
auth: true,
|
||||
})
|
||||
mux.Handle("/config", &EndpointHandler{
|
||||
get: ConfigGet,
|
||||
post: ConfigPost,
|
||||
auth: true,
|
||||
})
|
||||
mux.Handle("/auth", &EndpointHandler{
|
||||
get: AuthGet,
|
||||
post: AuthPost,
|
||||
auth: true,
|
||||
})
|
||||
|
||||
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", config.Api.Port))
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
|
||||
go func() {
|
||||
errChan <- http.Serve(listener, mux)
|
||||
}()
|
||||
|
||||
return errChan, listener.Close
|
||||
}
|
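A hedged sketch of how EndpointHandler could be exercised with net/http/httptest. The test name, the anonymous handler and the "status" field are invented for the example; auth is left false, and ssh.GetUser is assumed to simply return nil when no ssh session matches the remote address.

package api

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"docker4ssh/ssh"
)

func TestEndpointHandlerGet(t *testing.T) {
	// unauthenticated endpoint, so a nil *ssh.User is acceptable
	h := &EndpointHandler{
		get: func(w http.ResponseWriter, r *http.Request, _ *ssh.User) (interface{}, int) {
			return map[string]string{"status": "ok"}, http.StatusOK
		},
	}

	srv := httptest.NewServer(h)
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected status 200, got %d", resp.StatusCode)
	}
}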
80
server/api/auth.go
Normal file
@@ -0,0 +1,80 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"docker4ssh/database"
|
||||
"docker4ssh/ssh"
|
||||
"encoding/json"
|
||||
"go.uber.org/zap"
|
||||
"golang.org/x/crypto/bcrypt"
|
||||
"net/http"
|
||||
)
|
||||
|
||||
type authGetResponse struct {
|
||||
User string `json:"user"`
|
||||
HasPassword bool `json:"has_password"`
|
||||
}
|
||||
|
||||
func AuthGet(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
|
||||
auth, ok := database.GetDatabase().GetAuthByContainer(user.Container.FullContainerID)
|
||||
|
||||
if ok {
|
||||
return authGetResponse{
|
||||
User: *auth.User,
|
||||
HasPassword: auth.Password != nil,
|
||||
}, http.StatusOK
|
||||
} else {
|
||||
return APIError{Message: "no auth is set"}, http.StatusNotFound
|
||||
}
|
||||
}
|
||||
|
||||
type authPostRequest struct {
|
||||
User *string `json:"user"`
|
||||
Password *string `json:"password"`
|
||||
}
|
||||
|
||||
func AuthPost(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
|
||||
var request authPostRequest
|
||||
json.NewDecoder(r.Body).Decode(&request)
|
||||
defer r.Body.Close()
|
||||
|
||||
db := database.GetDatabase()
|
||||
|
||||
auth, _ := db.GetAuthByContainer(user.Container.FullContainerID)
|
||||
|
||||
if request.User != nil {
|
||||
if *request.User == "" {
|
||||
return APIError{Message: "new username cannot be empty"}, http.StatusNotAcceptable
|
||||
}
|
||||
if err := db.SetAuth(user.Container.FullContainerID, database.Auth{
|
||||
User: request.User,
|
||||
}); err != nil {
|
||||
zap.S().Errorf("Error while updating user for user %s: %v", user.ID, err)
|
||||
return APIError{Message: "failed to process user"}, http.StatusInternalServerError
|
||||
}
|
||||
zap.S().Infof("Updated password for %s", user.Container.ContainerID)
|
||||
}
|
||||
if request.Password != nil && *request.Password == "" {
|
||||
if err := db.DeleteAuth(user.Container.FullContainerID); err != nil {
|
||||
zap.S().Errorf("Error while deleting auth for user %s: %v", user.ID, err)
|
||||
return APIError{Message: "failed to delete auth"}, http.StatusInternalServerError
|
||||
}
|
||||
zap.S().Infof("Deleted authenticiation for %s", user.Container.ContainerID)
|
||||
} else if request.Password != nil {
|
||||
pwd, err := bcrypt.GenerateFromPassword([]byte(*request.Password), bcrypt.DefaultCost)
|
||||
if err != nil {
|
||||
zap.S().Errorf("Error while updating password for user %s: %v", user.ID, err)
|
||||
return APIError{Message: "failed to process password"}, http.StatusInternalServerError
|
||||
}
|
||||
var username string
|
||||
if auth.User == nil {
|
||||
username = user.Container.FullContainerID
|
||||
} else {
|
||||
username = *auth.User
|
||||
}
|
||||
if err = db.SetAuth(user.Container.FullContainerID, database.NewUnsafeAuth(username, pwd)); err != nil {
|
||||
return APIError{Message: "failed to update authentication"}, http.StatusInternalServerError
|
||||
}
|
||||
zap.S().Infof("Updated password for %s", user.Container.ContainerID)
|
||||
}
|
||||
return nil, http.StatusOK
|
||||
}
|
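AuthPost stores only a bcrypt hash of the new password. As a reminder of how such a hash would be verified elsewhere (the verification side is not part of this file), a short illustrative fragment using golang.org/x/crypto/bcrypt and fmt:

// illustrative only; bcrypt hashes embed their own salt,
// so verification is a single comparison call
hash, err := bcrypt.GenerateFromPassword([]byte("secret"), bcrypt.DefaultCost)
if err != nil {
	panic(err)
}
if bcrypt.CompareHashAndPassword(hash, []byte("secret")) == nil {
	fmt.Println("password accepted")
}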
124
server/api/config.go
Normal file
@@ -0,0 +1,124 @@
|
||||
package api
|
||||
|
||||
import (
|
||||
"context"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/ssh"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type configGetResponse struct {
|
||||
NetworkMode docker.NetworkMode `json:"network_mode"`
|
||||
Configurable bool `json:"configurable"`
|
||||
RunLevel docker.RunLevel `json:"run_level"`
|
||||
StartupInformation bool `json:"startup_information"`
|
||||
ExitAfter string `json:"exit_after"`
|
||||
KeepOnExit bool `json:"keep_on_exit"`
|
||||
}
|
||||
|
||||
func ConfigGet(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
|
||||
config := user.Container.Config()
|
||||
|
||||
return configGetResponse{
|
||||
config.NetworkMode,
|
||||
config.Configurable,
|
||||
config.RunLevel,
|
||||
config.StartupInformation,
|
||||
config.ExitAfter,
|
||||
config.KeepOnExit,
|
||||
}, http.StatusOK
|
||||
}
|
||||
|
||||
type configPostRequest configGetResponse
|
||||
|
||||
var configPostRequestLookup, _ = structJsonLookup(configPostRequest{})
|
||||
|
||||
type configPostResponse struct {
|
||||
Message string `json:"message"`
|
||||
Rejected []configPostResponseRejected `json:"rejected"`
|
||||
}
|
||||
|
||||
type configPostResponseRejected struct {
|
||||
Name string `json:"name"`
|
||||
Description string `json:"description"`
|
||||
}
|
||||
|
||||
func ConfigPost(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
|
||||
var requestBody map[string]interface{}
|
||||
json.NewDecoder(r.Body).Decode(&requestBody)
|
||||
defer r.Body.Close()
|
||||
|
||||
var change bool
|
||||
var response configPostResponse
|
||||
|
||||
updatedConfig := user.Container.Config()
|
||||
|
||||
for k, v := range requestBody {
|
||||
if v == nil {
|
||||
continue
|
||||
}
|
||||
|
||||
kind, ok := configPostRequestLookup[k]
|
||||
if !ok {
|
||||
response.Rejected = append(response.Rejected, configPostResponseRejected{
|
||||
Name: k,
|
||||
Description: fmt.Sprintf("name / field %s does not exist", k),
|
||||
})
|
||||
} else {
|
||||
valueKind := reflect.TypeOf(v).Kind()
|
||||
if valueKind != kind && valueKind == reflect.Float64 && kind == reflect.Int {
|
||||
valueKind = reflect.Int
|
||||
}
|
||||
|
||||
if valueKind != kind {
|
||||
response.Rejected = append(response.Rejected, configPostResponseRejected{
|
||||
Name: k,
|
||||
Description: fmt.Sprintf("value should be type %s, got type %s", kind, valueKind),
|
||||
				})
				continue
			}

			change = true
|
||||
switch k {
|
||||
case "network_mode":
|
||||
updatedConfig.NetworkMode = docker.NetworkMode(v.(float64))
|
||||
case "configurable":
|
||||
updatedConfig.Configurable = v.(bool)
|
||||
case "run_level":
|
||||
updatedConfig.RunLevel = docker.RunLevel(v.(float64))
|
||||
case "startup_information":
|
||||
updatedConfig.StartupInformation = v.(bool)
|
||||
case "exit_after":
|
||||
updatedConfig.ExitAfter = v.(string)
|
||||
case "keep_on_exit":
|
||||
updatedConfig.KeepOnExit = v.(bool)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(response.Rejected) > 0 {
|
||||
var arr []string
|
||||
for _, rejected := range response.Rejected {
|
||||
arr = append(arr, rejected.Name)
|
||||
}
|
||||
|
||||
if len(response.Rejected) == 1 {
|
||||
response.Message = fmt.Sprintf("1 invalid configuration was found: %s", strings.Join(arr, ", "))
|
||||
return response, http.StatusNotAcceptable
|
||||
} else if len(response.Rejected) > 1 {
|
||||
response.Message = fmt.Sprintf("%d invalid configurations were found: %s", len(response.Rejected), strings.Join(arr, ", "))
|
||||
return response, http.StatusNotAcceptable
|
||||
}
|
||||
} else if change {
|
||||
if err := user.Container.UpdateConfig(context.Background(), updatedConfig); err != nil {
|
||||
zap.S().Errorf("Error while updating config for API user %s: %v", user.ID, err)
|
||||
response.Message = "Internal error while updating the config"
|
||||
return response, http.StatusInternalServerError
|
||||
}
|
||||
}
|
||||
return nil, http.StatusOK
|
||||
}
|
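ConfigPost decodes the request body into a map[string]interface{}, and encoding/json represents every JSON number in such a map as float64. That is why the handler converts float64 values back into the integer-typed fields; a small standalone example of this behaviour:

package main

import (
	"encoding/json"
	"fmt"
	"reflect"
)

func main() {
	var m map[string]interface{}
	_ = json.Unmarshal([]byte(`{"network_mode": 2, "configurable": true}`), &m)

	// JSON numbers decode to float64 when the target is interface{}
	fmt.Println(reflect.TypeOf(m["network_mode"]).Kind()) // float64
	fmt.Println(int(m["network_mode"].(float64)))         // 2
}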
12
server/api/error.go
Normal file
@@ -0,0 +1,12 @@

package api

import (
	"docker4ssh/ssh"
	"net/http"
)

type errorGetResponse APIError

func ErrorGet(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
	return APIError{Message: "Example error message"}, http.StatusBadRequest
}
16
server/api/info.go
Normal file
@@ -0,0 +1,16 @@

package api

import (
	"docker4ssh/ssh"
	"net/http"
)

type infoGetResponse struct {
	ContainerID string `json:"container_id"`
}

func InfoGet(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
	return infoGetResponse{
		ContainerID: user.Container.FullContainerID,
	}, http.StatusOK
}
15
server/api/ping.go
Normal file
@@ -0,0 +1,15 @@

package api

import (
	"docker4ssh/ssh"
	"net/http"
	"time"
)

type pingGetResponse struct {
	Received int64 `json:"received"`
}

func PingGet(w http.ResponseWriter, r *http.Request, user *ssh.User) (interface{}, int) {
	return pingGetResponse{Received: time.Now().UnixNano()}, http.StatusOK
}
35
server/api/utils.go
Normal file
@@ -0,0 +1,35 @@

package api

import (
	"fmt"
	"reflect"
	"strings"
)

type APIError struct {
	Message string `json:"message"`
}

func structJsonLookup(v interface{}) (map[string]reflect.Kind, error) {
	rt := reflect.TypeOf(v)
	if rt.Kind() != reflect.Struct {
		return nil, fmt.Errorf("given interface is not a struct")
	}

	lookup := make(map[string]reflect.Kind)

	for i := 0; i < rt.NumField(); i++ {
		field := rt.Field(i)

		name := strings.Split(field.Tag.Get("json"), ",")[0]
		value := field.Type.Kind()

		if field.Type.Kind() == reflect.Struct {
			value = reflect.Map
		}

		lookup[name] = value
	}

	return lookup, nil
}
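An illustrative fragment of what structJsonLookup produces. Since the function is unexported, the sketch is assumed to run inside the api package, and the example struct is made up.

type example struct {
	Port  uint16 `json:"port"`
	Debug bool   `json:"debug"`
	Extra struct {
		Path string `json:"path"`
	} `json:"extra"`
}

lookup, err := structJsonLookup(example{})
if err != nil {
	panic(err)
}
// struct fields are reported as reflect.Map because that is how
// they arrive after a generic JSON decode
fmt.Println(lookup) // map[debug:bool extra:map port:uint16]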
0
server/build/docker4ssh
Normal file
18
server/cmd/cmd.go
Normal file
@@ -0,0 +1,18 @@

package cmd

import (
	"fmt"
	"github.com/spf13/cobra"
	"os"
)

var rootCmd = &cobra.Command{
	Use:   "docker4ssh",
	Short: "Docker containers and more via ssh",
}

func Execute() {
	if err := rootCmd.Execute(); err != nil {
		fmt.Fprintf(os.Stderr, "%v", err)
	}
}
160
server/cmd/start.go
Normal file
@@ -0,0 +1,160 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
"docker4ssh/api"
|
||||
c "docker4ssh/config"
|
||||
"docker4ssh/database"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/logging"
|
||||
"docker4ssh/ssh"
|
||||
"docker4ssh/validate"
|
||||
"fmt"
|
||||
"github.com/spf13/cobra"
|
||||
"go.uber.org/zap"
|
||||
"os"
|
||||
"os/signal"
|
||||
"strings"
|
||||
"syscall"
|
||||
"time"
|
||||
)
|
||||
|
||||
var startCmd = &cobra.Command{
|
||||
Use: "start",
|
||||
Short: "Starts the docker4ssh server",
|
||||
Args: cobra.MaximumNArgs(0),
|
||||
|
||||
PreRunE: func(cmd *cobra.Command, args []string) error {
|
||||
return preStart()
|
||||
},
|
||||
Run: func(cmd *cobra.Command, args []string) {
|
||||
start()
|
||||
},
|
||||
}
|
||||
|
||||
func preStart() error {
|
||||
if !docker.IsRunning() {
|
||||
return fmt.Errorf("docker daemon is not running")
|
||||
}
|
||||
|
||||
cli, err := docker.InitCli()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
config, err := c.InitConfig(true)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
validator := validate.NewConfigValidator(cli, false, config)
|
||||
|
||||
if result := validator.ValidateLogging(); !result.Ok() {
|
||||
return fmt.Errorf(result.String())
|
||||
}
|
||||
|
||||
level := zap.NewAtomicLevel()
|
||||
level.UnmarshalText([]byte(config.Logging.Level))
|
||||
var outputFiles, errorFiles []string
|
||||
if config.Logging.ConsoleOutput {
|
||||
outputFiles = append(outputFiles, "/dev/stdout")
|
||||
}
|
||||
if config.Logging.OutputFile != "" {
|
||||
outputFiles = append(outputFiles, config.Logging.OutputFile)
|
||||
}
|
||||
if config.Logging.ConsoleError {
|
||||
errorFiles = append(errorFiles, "/dev/stderr")
|
||||
}
|
||||
if config.Logging.ErrorFile != "" {
|
||||
errorFiles = append(errorFiles, config.Logging.ErrorFile)
|
||||
}
|
||||
logging.InitLogging(level, outputFiles, errorFiles)
|
||||
|
||||
if result := validator.Validate(); !result.Ok() {
|
||||
return fmt.Errorf(result.String())
|
||||
}
|
||||
c.SetConfig(config)
|
||||
|
||||
db, err := database.NewSqlite3Connection(config.Database.Sqlite3File)
|
||||
if err != nil {
|
||||
zap.S().Fatalf("Failed to initialize database: %v", err)
|
||||
}
|
||||
database.SetDatabase(db)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func start() {
|
||||
config := c.GetConfig()
|
||||
|
||||
if config.SSH.Passphrase == "" {
|
||||
zap.S().Warn("YOU HAVE AN EMPTY PASSPHRASE WHICH IS INSECURE, SUGGESTING CREATING A NEW SSH KEY WITH A PASSPHRASE.\n" +
|
||||
"IF YOU'RE DOWNLOADED THIS VERSION FROM THE RELEASES (https://github.com/ByteDream/docker4ssh/releases/latest), MAKE SURE TO CHANGE YOUR SSH KEY IMMEDIATELY BECAUSE ANYONE COULD DECRYPT THE SSH SESSION!!\n" +
|
||||
"USE 'ssh-keygen -t ed25519 -f /etc/docker4ssh/docker4ssh.key -b 4096' AND UPDATE THE PASSPHRASE IN /etc/docker4ssh/docker4ssh.conf UNDER ssh.Passphrase")
|
||||
}
|
||||
|
||||
serverConfig, err := ssh.NewSSHConfig(config)
|
||||
if err != nil {
|
||||
zap.S().Fatalf("Failed to initialize ssh server config: %v", err)
|
||||
}
|
||||
|
||||
sshErrChan, sshCloser := ssh.StartServing(config, serverConfig)
|
||||
zap.S().Infof("Started ssh serving on port %d", config.SSH.Port)
|
||||
apiErrChan, apiCloser := api.ServeAPI(config)
|
||||
zap.S().Infof("Started api serving on port %d", config.Api.Port)
|
||||
|
||||
done := make(chan struct{})
|
||||
	sig := make(chan os.Signal, 1)
|
||||
signal.Notify(sig, syscall.SIGUSR1, os.Interrupt, os.Kill, syscall.SIGINT, syscall.SIGTERM)
|
||||
go func() {
|
||||
s := <-sig
|
||||
|
||||
if sshCloser != nil {
|
||||
sshCloser()
|
||||
}
|
||||
if apiCloser != nil {
|
||||
apiCloser()
|
||||
}
|
||||
|
||||
database.GetDatabase().Close()
|
||||
|
||||
if s != syscall.SIGUSR1 {
|
||||
// Errorf is called here instead of Fatalf because the original exit signal should be kept to exit with it later
|
||||
zap.S().Errorf("(FATAL actually) received abort signal %d: %s", s.(syscall.Signal), strings.ToUpper(s.String()))
|
||||
os.Exit(int(s.(syscall.Signal)))
|
||||
}
|
||||
|
||||
done <- struct{}{}
|
||||
}()
|
||||
|
||||
select {
|
||||
case err = <-sshErrChan:
|
||||
case err = <-apiErrChan:
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to start working: %v", err)
|
||||
sig <- os.Interrupt
|
||||
} else {
|
||||
select {
|
||||
case <-sig:
|
||||
if err != nil {
|
||||
zap.S().Errorf("Serving failed due error: %v", err)
|
||||
} else {
|
||||
zap.S().Info("Serving stopped")
|
||||
}
|
||||
default:
|
||||
sig <- syscall.SIGUSR1
|
||||
}
|
||||
}
|
||||
|
||||
select {
|
||||
case <-done:
|
||||
case <-time.After(5 * time.Second):
|
||||
// if the timeout of 5 seconds expires, forcefully exit
|
||||
os.Exit(int(syscall.SIGKILL))
|
||||
}
|
||||
}
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(startCmd)
|
||||
}
|
159
server/cmd/validate.go
Normal file
@@ -0,0 +1,159 @@
|
||||
package cmd
|
||||
|
||||
import (
|
||||
c "docker4ssh/config"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/validate"
|
||||
"fmt"
|
||||
"github.com/docker/docker/client"
|
||||
"github.com/spf13/cobra"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var cli *client.Client
|
||||
|
||||
var validateCmd = &cobra.Command{
|
||||
Use: "validate",
|
||||
Short: "Validate docker4ssh specific files (config / profile files)",
|
||||
|
||||
PersistentPreRunE: func(cmd *cobra.Command, args []string) (err error) {
|
||||
cli, err = docker.InitCli()
|
||||
return err
|
||||
},
|
||||
}
|
||||
|
||||
var validateStrictFlag bool
|
||||
|
||||
var validateConfigCmd = &cobra.Command{
|
||||
Use: "config [files]",
|
||||
Short: "Validate a docker4ssh config file",
|
||||
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return validateConfig(args)
|
||||
},
|
||||
}
|
||||
|
||||
var validateConfigFileFlag string
|
||||
|
||||
var validateProfileCmd = &cobra.Command{
|
||||
Use: "profile [files]",
|
||||
Short: "Validate docker4ssh profile files",
|
||||
|
||||
RunE: func(cmd *cobra.Command, args []string) error {
|
||||
return validateProfile(args)
|
||||
},
|
||||
}
|
||||
|
||||
func validateConfig(args []string) error {
|
||||
config, err := c.LoadConfig(validateConfigFileFlag, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
validator := validate.NewConfigValidator(cli, validateStrictFlag, config)
|
||||
|
||||
var result *validate.ValidatorResult
|
||||
if len(args) == 0 {
|
||||
result = validator.Validate()
|
||||
} else {
|
||||
var validateFuncs []func() *validate.ValidatorResult
|
||||
for _, arg := range args {
|
||||
switch strings.ToLower(arg) {
|
||||
case "profile":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateProfile)
|
||||
case "api":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateAPI)
|
||||
case "ssh":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateSSH)
|
||||
case "database":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateDatabase)
|
||||
case "network":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateNetwork)
|
||||
case "logging":
|
||||
validateFuncs = append(validateFuncs, validator.ValidateLogging)
|
||||
default:
|
||||
return fmt.Errorf("'%s' is not a valid config section", arg)
|
||||
}
|
||||
}
|
||||
|
||||
var errors []*validate.ValidateError
|
||||
for _, validateFunc := range validateFuncs {
|
||||
errors = append(errors, validateFunc().Errors...)
|
||||
}
|
||||
|
||||
result = &validate.ValidatorResult{
|
||||
Strict: validateStrictFlag,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Println(result.String())
|
||||
|
||||
if len(result.Errors) > 0 {
|
||||
os.Exit(1)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func validateProfile(args []string) error {
|
||||
var files []string
|
||||
|
||||
if len(args) == 0 {
|
||||
args = append(args, "/etc/docker4ssh/profile")
|
||||
}
|
||||
for _, arg := range args {
|
||||
stat, err := os.Stat(arg)
|
||||
if os.IsNotExist(err) {
|
||||
return fmt.Errorf("file %s does not exist: %v", arg, err)
|
||||
}
|
||||
if stat.IsDir() {
|
||||
dir, err := os.ReadDir(arg)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to read directory %s: %v", arg, err)
|
||||
}
|
||||
for _, file := range dir {
|
||||
				path, err := filepath.Abs(filepath.Join(arg, file.Name()))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
files = append(files, path)
|
||||
}
|
||||
		} else {
			files = append(files, arg)
		}
|
||||
}
|
||||
|
||||
var profiles c.Profiles
|
||||
for _, file := range files {
|
||||
p, err := c.LoadProfileFile(file, c.HardcodedPreProfile())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
profiles = append(profiles, p...)
|
||||
}
|
||||
|
||||
var errors []*validate.ValidateError
|
||||
for _, profile := range profiles {
|
||||
errors = append(errors, validate.NewProfileValidator(cli, validateStrictFlag, profile).Validate().Errors...)
|
||||
}
|
||||
|
||||
result := validate.ValidatorResult{
|
||||
Strict: validateStrictFlag,
|
||||
Errors: errors,
|
||||
}
|
||||
|
||||
fmt.Println(result.String())
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
rootCmd.AddCommand(validateCmd)
|
||||
validateCmd.PersistentFlags().BoolVarP(&validateStrictFlag, "strict", "s", false, "If the check should be strict")
|
||||
|
||||
validateCmd.AddCommand(validateConfigCmd)
|
||||
validateConfigCmd.Flags().StringVarP(&validateConfigFileFlag, "file", "f", "/etc/docker4ssh/docker4ssh.conf", "Specify a file to check")
|
||||
|
||||
validateCmd.AddCommand(validateProfileCmd)
|
||||
}
|
189
server/config/config.go
Normal file
@@ -0,0 +1,189 @@
|
||||
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/BurntSushi/toml"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var globConfig *Config
|
||||
|
||||
type Config struct {
|
||||
Profile struct {
|
||||
Dir string `toml:"Dir"`
|
||||
Default struct {
|
||||
Password string `toml:"Password"`
|
||||
NetworkMode int `toml:"NetworkMode"`
|
||||
Configurable bool `toml:"Configurable"`
|
||||
RunLevel int `toml:"RunLevel"`
|
||||
StartupInformation bool `toml:"StartupInformation"`
|
||||
ExitAfter string `toml:"ExitAfter"`
|
||||
KeepOnExit bool `toml:"KeepOnExit"`
|
||||
} `toml:"default"`
|
||||
Dynamic struct {
|
||||
Enable bool `toml:"Enable"`
|
||||
Password string `toml:"Password"`
|
||||
NetworkMode int `toml:"NetworkMode"`
|
||||
Configurable bool `toml:"Configurable"`
|
||||
RunLevel int `toml:"RunLevel"`
|
||||
StartupInformation bool `toml:"StartupInformation"`
|
||||
ExitAfter string `toml:"ExitAfter"`
|
||||
KeepOnExit bool `toml:"KeepOnExit"`
|
||||
} `toml:"dynamic"`
|
||||
} `toml:"profile"`
|
||||
Api struct {
|
||||
Port uint16 `toml:"Port"`
|
||||
Configure struct {
|
||||
Binary string `toml:"Binary"`
|
||||
Man string `toml:"Man"`
|
||||
} `toml:"configure"`
|
||||
} `toml:"api"`
|
||||
SSH struct {
|
||||
Port uint16 `toml:"Port"`
|
||||
Keyfile string `toml:"Keyfile"`
|
||||
Passphrase string `toml:"Passphrase"`
|
||||
} `toml:"ssh"`
|
||||
Database struct {
|
||||
Sqlite3File string `toml:"Sqlite3File"`
|
||||
} `toml:"Database"`
|
||||
Network struct {
|
||||
Default struct {
|
||||
Subnet string `toml:"Subnet"`
|
||||
} `toml:"default"`
|
||||
Isolate struct {
|
||||
Subnet string `toml:"Subnet"`
|
||||
} `toml:"isolate"`
|
||||
} `toml:"network"`
|
||||
Logging struct {
|
||||
Level string `toml:"Level"`
|
||||
OutputFile string `toml:"OutputFile"`
|
||||
ErrorFile string `toml:"ErrorFile"`
|
||||
ConsoleOutput bool `toml:"ConsoleOutput"`
|
||||
ConsoleError bool `toml:"ConsoleError"`
|
||||
} `toml:"logging"`
|
||||
}
|
||||
|
||||
func InitConfig(includeEnv bool) (*Config, error) {
|
||||
configFiles := []string{
|
||||
"./docker4ssh.conf",
|
||||
"~/.docker4ssh",
|
||||
"~/.config/docker4ssh.conf",
|
||||
"/etc/docker4ssh/docker4ssh.conf",
|
||||
}
|
||||
|
||||
for _, file := range configFiles {
|
||||
if _, err := os.Stat(file); !os.IsNotExist(err) {
|
||||
return LoadConfig(file, includeEnv)
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("no speicfied config file (%s) could be found", strings.Join(configFiles, ", "))
|
||||
}
|
||||
|
||||
func LoadConfig(file string, includeEnv bool) (*Config, error) {
|
||||
config := &Config{}
|
||||
|
||||
if _, err := toml.DecodeFile(file, config); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// make paths absolute
|
||||
dir := filepath.Dir(file)
|
||||
config.Profile.Dir = absoluteFile(dir, config.Profile.Dir)
|
||||
config.Api.Configure.Binary = absoluteFile(dir, config.Api.Configure.Binary)
|
||||
config.Api.Configure.Man = absoluteFile(dir, config.Api.Configure.Man)
|
||||
config.SSH.Keyfile = absoluteFile(dir, config.SSH.Keyfile)
|
||||
config.Database.Sqlite3File = absoluteFile(dir, config.Database.Sqlite3File)
|
||||
config.Logging.OutputFile = absoluteFile(dir, config.Logging.OutputFile)
|
||||
config.Logging.ErrorFile = absoluteFile(dir, config.Logging.ErrorFile)
|
||||
|
||||
if includeEnv {
|
||||
if err := updateFromEnv(config); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return config, nil
|
||||
}
|
||||
|
||||
func absoluteFile(path, file string) string {
|
||||
if filepath.IsAbs(file) {
|
||||
return file
|
||||
}
|
||||
return filepath.Join(path, file)
|
||||
}
|
||||
|
||||
// updateFromEnv checks whether specific environment variables are set which can
// also be used to configure the program.
// Every key in the config file can also be specified via environment variables.
// The env variable syntax is SECTION_SUBSECTION_KEY, e.g. API_PORT or PROFILE_DEFAULT_PASSWORD
|
||||
func updateFromEnv(config *Config) error {
|
||||
re := reflect.ValueOf(config).Elem()
|
||||
rt := re.Type()
|
||||
|
||||
for i := 0; i < re.NumField(); i++ {
|
||||
rf := re.Field(i)
|
||||
ree := rt.Field(i)
|
||||
|
||||
if err := envParseField(strings.ToUpper(ree.Tag.Get("toml")), rf); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func envParseField(prefix string, value reflect.Value) error {
|
||||
for j := 0; j < value.NumField(); j++ {
|
||||
rtt := value.Type().Field(j)
|
||||
rff := value.Field(j)
|
||||
|
||||
if rff.Kind() == reflect.Struct {
|
||||
if err := envParseField(fmt.Sprintf("%s_%s", prefix, strings.ToUpper(rtt.Tag.Get("toml"))), rff); err != nil {
|
||||
return err
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
envName := fmt.Sprintf("%s_%s", prefix, strings.ToUpper(rtt.Tag.Get("toml")))
|
||||
val, ok := os.LookupEnv(envName)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
var expected string
|
||||
switch rff.Kind() {
|
||||
case reflect.String:
|
||||
rff.SetString(val)
|
||||
continue
|
||||
case reflect.Bool:
|
||||
b, err := strconv.ParseBool(val)
|
||||
if err == nil {
|
||||
rff.SetBool(b)
|
||||
continue
|
||||
}
|
||||
expected = "true / false (boolean)"
|
||||
case reflect.Uint16:
|
||||
ui, err := strconv.ParseUint(val, 10, 16)
|
||||
if err == nil {
|
||||
rff.SetUint(ui)
|
||||
continue
|
||||
}
|
||||
expected = "number (uint16)"
|
||||
default:
|
||||
return fmt.Errorf("parsed not implemented config type '%s'", rff.Kind())
|
||||
}
|
||||
return fmt.Errorf("failed to parse environment variable '%s': cannot parse value '%s' as %s", envName, val, expected)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func GetConfig() *Config {
|
||||
return globConfig
|
||||
}
|
||||
|
||||
func SetConfig(config *Config) {
|
||||
globConfig = config
|
||||
}
|
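A hedged sketch of the environment override described in updateFromEnv, assumed to run next to this file so LoadConfig can be called unqualified; the config path and values are placeholders.

// hypothetical usage; the file path is a placeholder
os.Setenv("API_PORT", "8080")
os.Setenv("LOGGING_CONSOLEOUTPUT", "true")

cfg, err := LoadConfig("/etc/docker4ssh/docker4ssh.conf", true)
if err != nil {
	panic(err)
}
fmt.Println(cfg.Api.Port)              // 8080, overridden by API_PORT
fmt.Println(cfg.Logging.ConsoleOutput) // true, overridden by LOGGING_CONSOLEOUTPUT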
254
server/config/profile.go
Normal file
@@ -0,0 +1,254 @@
|
||||
package config
|
||||
|
||||
import (
|
||||
"crypto/sha1"
|
||||
"crypto/sha256"
|
||||
"crypto/sha512"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/BurntSushi/toml"
|
||||
"go.uber.org/zap"
|
||||
"hash"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type Profile struct {
|
||||
name string
|
||||
Username *regexp.Regexp
|
||||
Password *regexp.Regexp
|
||||
passwordHashAlgo hash.Hash
|
||||
NetworkMode int
|
||||
Configurable bool
|
||||
RunLevel int
|
||||
StartupInformation bool
|
||||
ExitAfter string
|
||||
KeepOnExit bool
|
||||
Image string
|
||||
ContainerID string
|
||||
}
|
||||
|
||||
func (p *Profile) Name() string {
|
||||
return p.name
|
||||
}
|
||||
|
||||
func (p *Profile) Match(user string, password []byte) bool {
|
||||
// username should only be nil if profile was generated from Config.Profile.Dynamic
|
||||
if p.Username == nil || p.Username.MatchString(user) {
|
||||
if p.passwordHashAlgo != nil {
|
||||
password = p.passwordHashAlgo.Sum(password)
|
||||
}
|
||||
return p.Password.Match(password)
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
type preProfile struct {
|
||||
Username string
|
||||
Password string
|
||||
NetworkMode int
|
||||
Configurable bool
|
||||
RunLevel int
|
||||
StartupInformation bool
|
||||
ExitAfter string
|
||||
KeepOnExit bool
|
||||
Image string
|
||||
Container string
|
||||
}
|
||||
|
||||
func LoadProfileFile(path string, defaultPreProfile preProfile) (Profiles, error) {
|
||||
var rawProfile map[string]interface{}
|
||||
if _, err := toml.DecodeFile(path, &rawProfile); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
profiles, err := parseRawProfile(rawProfile, path, defaultPreProfile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return profiles, nil
|
||||
}
|
||||
|
||||
func LoadProfileDir(path string, defaultPreProfile preProfile) (Profiles, error) {
|
||||
dir, err := os.ReadDir(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var profiles Profiles
|
||||
for i, profileConf := range dir {
|
||||
p, err := LoadProfileFile(filepath.Join(path, profileConf.Name()), defaultPreProfile)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
profiles = append(profiles, p...)
|
||||
zap.S().Debugf("Pre-loaded file %d (%s) with %d profile(s)", i+1, profileConf.Name(), len(p))
|
||||
}
|
||||
|
||||
return profiles, nil
|
||||
}
|
||||
|
||||
func parseRawProfile(rawProfile map[string]interface{}, path string, defaultPreProfile preProfile) (profiles []*Profile, err error) {
|
||||
var count int
|
||||
for key, value := range rawProfile {
|
||||
rawValue, err := json.Marshal(value)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
pp := preProfile{
|
||||
NetworkMode: 3,
|
||||
RunLevel: 1,
|
||||
StartupInformation: true,
|
||||
}
|
||||
if err = json.Unmarshal(rawValue, &pp); err != nil {
|
||||
return nil, fmt.Errorf("failed to parse %s profile conf file %s: %v", key, path, err)
|
||||
}
|
||||
|
||||
var rawUsername string
|
||||
if rawUsername = strings.TrimPrefix(pp.Username, "regex:"); rawUsername == pp.Username {
|
||||
rawUsername = strings.ReplaceAll(rawUsername, "*", ".*")
|
||||
}
|
||||
if !strings.HasSuffix(rawUsername, "$") {
|
||||
rawUsername += "$"
|
||||
}
|
||||
username, err := regexp.Compile("(?m)" + rawUsername)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse %s profile username regex for conf file %s: %v", key, path, err)
|
||||
}
|
||||
|
||||
var rawPassword string
|
||||
		if rawPassword = strings.TrimPrefix(pp.Password, "regex:"); rawPassword == pp.Password {
|
||||
rawPassword = strings.ReplaceAll(rawPassword, "*", ".*")
|
||||
}
|
||||
algo, rawPasswordOrHash := getHash(rawPassword)
|
||||
if algo == nil && rawPasswordOrHash == "" {
|
||||
rawPasswordOrHash = ".*"
|
||||
}
|
||||
if !strings.HasSuffix(rawPasswordOrHash, "$") {
|
||||
rawPasswordOrHash += "$"
|
||||
}
|
||||
password, err := regexp.Compile("(?m)" + rawPasswordOrHash)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to parse %s profile password regex for conf file %s: %v", key, path, err)
|
||||
}
|
||||
|
||||
if (pp.Image == "") == (pp.Container == "") {
|
||||
return nil, fmt.Errorf("failed to interpret %s profile image / container definition for conf file %s: `Image` or `Container` must be specified, not both nor none of them", key, path)
|
||||
}
|
||||
|
||||
profiles = append(profiles, &Profile{
|
||||
name: key,
|
||||
Username: username,
|
||||
Password: password,
|
||||
passwordHashAlgo: algo,
|
||||
NetworkMode: pp.NetworkMode,
|
||||
Configurable: pp.Configurable,
|
||||
RunLevel: pp.RunLevel,
|
||||
StartupInformation: pp.StartupInformation,
|
||||
ExitAfter: pp.ExitAfter,
|
||||
KeepOnExit: pp.KeepOnExit,
|
||||
Image: pp.Image,
|
||||
ContainerID: pp.Container,
|
||||
})
|
||||
count++
|
||||
zap.S().Debugf("Pre-loaded profile %s (%d)", key, count)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
type Profiles []*Profile
|
||||
|
||||
func (ps Profiles) GetByName(name string) (*Profile, bool) {
|
||||
for _, profile := range ps {
|
||||
if profile.name == name {
|
||||
return profile, true
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func (ps Profiles) Match(user string, password []byte) (*Profile, bool) {
|
||||
for _, profile := range ps {
|
||||
if profile.Match(user, password) {
|
||||
return profile, true
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func DefaultPreProfileFromConfig(config *Config) preProfile {
|
||||
defaultProfile := config.Profile.Default
|
||||
|
||||
return preProfile{
|
||||
Password: defaultProfile.Password,
|
||||
NetworkMode: defaultProfile.NetworkMode,
|
||||
Configurable: defaultProfile.Configurable,
|
||||
RunLevel: defaultProfile.RunLevel,
|
||||
StartupInformation: defaultProfile.StartupInformation,
|
||||
ExitAfter: defaultProfile.ExitAfter,
|
||||
KeepOnExit: defaultProfile.KeepOnExit,
|
||||
}
|
||||
}
|
||||
|
||||
func HardcodedPreProfile() preProfile {
|
||||
return preProfile{
|
||||
NetworkMode: 3,
|
||||
RunLevel: 1,
|
||||
StartupInformation: true,
|
||||
}
|
||||
}
|
||||
|
||||
func DynamicProfileFromConfig(config *Config, defaultPreProfile preProfile) (Profile, error) {
|
||||
raw, err := json.Marshal(config.Profile.Dynamic)
|
||||
if err != nil {
|
||||
return Profile{}, err
|
||||
}
|
||||
json.Unmarshal(raw, &defaultPreProfile)
|
||||
|
||||
algo, rawPasswordOrHash := getHash(defaultPreProfile.Password)
|
||||
if algo == nil && rawPasswordOrHash == "" {
|
||||
rawPasswordOrHash = ".*"
|
||||
}
|
||||
password, err := regexp.Compile("(?m)" + rawPasswordOrHash)
|
||||
if err != nil {
|
||||
return Profile{}, fmt.Errorf("failed to parse password regex: %v ", err)
|
||||
}
|
||||
|
||||
return Profile{
|
||||
name: "",
|
||||
Username: nil,
|
||||
Password: password,
|
||||
passwordHashAlgo: algo,
|
||||
NetworkMode: defaultPreProfile.NetworkMode,
|
||||
Configurable: defaultPreProfile.Configurable,
|
||||
RunLevel: defaultPreProfile.RunLevel,
|
||||
StartupInformation: defaultPreProfile.StartupInformation,
|
||||
ExitAfter: defaultPreProfile.ExitAfter,
|
||||
KeepOnExit: defaultPreProfile.KeepOnExit,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func getHash(password string) (algo hash.Hash, raw string) {
|
||||
	split := strings.SplitN(password, ":", 2)
|
||||
|
||||
if len(split) == 1 {
|
||||
return nil, password
|
||||
} else {
|
||||
raw = split[1]
|
||||
}
|
||||
|
||||
switch split[0] {
|
||||
case "sha1":
|
||||
algo = sha1.New()
|
||||
case "sha256":
|
||||
algo = sha256.New()
|
||||
case "sha512":
|
||||
algo = sha512.New()
|
||||
default:
|
||||
algo = nil
|
||||
}
|
||||
return
|
||||
}
|
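A usage sketch for the profile matching above, assuming the config package is imported as c (as elsewhere in this repository); the profile file path and the credentials are placeholders.

// placeholder path; any profile file in the configured profile directory works
profiles, err := c.LoadProfileFile("/etc/docker4ssh/profile/example.profile", c.HardcodedPreProfile())
if err != nil {
	panic(err)
}
if p, ok := profiles.Match("admin", []byte("secret")); ok {
	fmt.Println("matched profile:", p.Name(), "image:", p.Image)
}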
67
server/database/auth.go
Normal file
@@ -0,0 +1,67 @@

package database

import (
	"golang.org/x/crypto/bcrypt"
)

type Auth struct {
	User     *string `json:"user"`
	Password *[]byte `json:"password"`
}

func NewAuth(user string, password []byte) (Auth, error) {
	hash, err := bcrypt.GenerateFromPassword(password, bcrypt.DefaultCost)
	if err != nil {
		return Auth{}, err
	}
	return Auth{
		&user,
		&hash,
	}, nil
}

func NewUnsafeAuth(user string, password []byte) Auth {
	auth, _ := NewAuth(user, password)
	return auth
}

func (db *Database) SetAuth(containerID string, auth Auth) error {
	if auth.User != nil {
		_, err := db.Exec("INSERT INTO auth (container_id, user) VALUES ($1, $2) ON CONFLICT (container_id) DO UPDATE SET user=$2", containerID, *auth.User)
		if err != nil {
			return err
		}
	}
	if auth.Password != nil {
		_, err := db.Exec("INSERT INTO auth (container_id, password) VALUES ($1, $2) ON CONFLICT (container_id) DO UPDATE SET password=$2", containerID, *auth.Password)
		if err != nil {
			return err
		}
	}
	return nil
}

// GetAuthByContainer returns the auth by a container id
func (db *Database) GetAuthByContainer(containerID string) (auth Auth, exists bool) {
	if err := db.QueryRow("SELECT user, password FROM auth WHERE container_id=$1", containerID).Scan(&auth.User, &auth.Password); err != nil {
		return Auth{}, false
	}
	return auth, true
}

func (db *Database) GetContainerByAuth(auth Auth) (containerID string, exists bool) {
	// returns false if `auth` contains a nil pointer or if no matching
	// auth was found in the database
	if auth.User == nil || auth.Password == nil {
		return "", false
	}
	if err := db.QueryRow("SELECT container_id FROM auth WHERE user=$1 AND (password=$2 OR password IS NULL)", auth.User, auth.Password).Scan(&containerID); err != nil {
		return "", false
	}
	return containerID, true
}

func (db *Database) DeleteAuth(containerID string) error {
	_, err := db.Exec("DELETE FROM auth WHERE container_id=$1", containerID)
	return err
}
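A round-trip sketch for the auth helpers above, assuming the sqlite schema (an auth table with container_id, user and password columns) already exists; the database path and container id are placeholders.

db, err := database.NewSqlite3Connection("/etc/docker4ssh/docker4ssh.sqlite3")
if err != nil {
	panic(err)
}
defer db.Close()

containerID := "0123456789abcdef" // placeholder full container id

auth, err := database.NewAuth("root", []byte("secret"))
if err != nil {
	panic(err)
}
if err := db.SetAuth(containerID, auth); err != nil {
	panic(err)
}
if stored, ok := db.GetAuthByContainer(containerID); ok {
	fmt.Println("stored user:", *stored.User)
}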
34
server/database/database.go
Normal file
@@ -0,0 +1,34 @@

package database

import (
	"database/sql"
	_ "github.com/mattn/go-sqlite3"
)

var globalDB *Database

type Database struct {
	*sql.DB
}

func newDatabaseConnection(driverName, dataSource string) (*Database, error) {
	database, err := sql.Open(driverName, dataSource)
	if err != nil {
		return nil, err
	}
	db := &Database{DB: database}

	return db, nil
}

func NewSqlite3Connection(databaseFile string) (*Database, error) {
	return newDatabaseConnection("sqlite3", databaseFile)
}

func GetDatabase() *Database {
	return globalDB
}

func SetDatabase(database *Database) {
	globalDB = database
}
11
server/database/delete.go
Normal file
@@ -0,0 +1,11 @@

package database

func (db *Database) Delete(containerID string) error {
	if _, err := db.Exec("DELETE FROM auth WHERE container_id=$1", containerID); err != nil {
		return err
	}
	if _, err := db.Exec("DELETE FROM settings WHERE container_id=$1", containerID); err != nil {
		return err
	}
	return nil
}
75
server/database/settings.go
Normal file
@@ -0,0 +1,75 @@
|
||||
package database
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Settings is the raw version of docker.Config
|
||||
type Settings struct {
|
||||
NetworkMode *int `json:"network_mode"`
|
||||
Configurable *bool `json:"configurable"`
|
||||
RunLevel *int `json:"run_level"`
|
||||
StartupInformation *bool `json:"startup_information"`
|
||||
ExitAfter *string `json:"exit_after"`
|
||||
KeepOnExit *bool `json:"keep_on_exit"`
|
||||
}
|
||||
|
||||
func (db *Database) SettingsByContainerID(containerID string) (Settings, error) {
|
||||
row := db.QueryRow("SELECT network_mode, configurable, run_level, startup_information, exit_after, keep_on_exit FROM settings WHERE container_id LIKE $1", fmt.Sprintf("%s%%", containerID))
|
||||
|
||||
var settings Settings
|
||||
|
||||
if err := row.Scan(&settings.NetworkMode, &settings.Configurable, &settings.RunLevel, &settings.StartupInformation, &settings.ExitAfter, &settings.KeepOnExit); err != nil {
|
||||
return Settings{}, err
|
||||
}
|
||||
return settings, nil
|
||||
}
|
||||
|
||||
func (db *Database) SetSettings(containerID string, settings Settings) error {
|
||||
query := make(map[string]interface{}, 0)
|
||||
|
||||
body, _ := json.Marshal(settings)
|
||||
json.Unmarshal(body, &query)
|
||||
|
||||
var keys, values []string
|
||||
for k, v := range query {
|
||||
if v != nil {
|
||||
keys = append(keys, k)
|
||||
switch reflect.ValueOf(v).Kind() {
|
||||
case reflect.String:
|
||||
values = append(values, fmt.Sprintf("\"%v\"", v))
|
||||
case reflect.Bool:
|
||||
if v.(bool) {
|
||||
values = append(values, fmt.Sprintf("%v", 1))
|
||||
} else {
|
||||
values = append(values, fmt.Sprintf("%v", 0))
|
||||
}
|
||||
default:
|
||||
values = append(values, fmt.Sprintf("%v", v))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
err := db.QueryRow("SELECT 1 FROM settings WHERE container_id=$1", containerID).Scan()
|
||||
if err == sql.ErrNoRows {
|
||||
keys = append(keys, "container_id")
|
||||
values = append(values, fmt.Sprintf("\"%s\"", containerID))
|
||||
|
||||
_, err = db.Exec(fmt.Sprintf("INSERT INTO settings (%s) VALUES (%s)", strings.Join(keys, ", "), strings.Join(values, ", ")))
|
||||
} else if len(keys) > 0 {
|
||||
var set []string
|
||||
|
||||
for i := 0; i < len(keys); i++ {
|
||||
set = append(set, fmt.Sprintf("%s=%s", keys[i], values[i]))
|
||||
}
|
||||
|
||||
_, err = db.Exec(fmt.Sprintf("UPDATE settings SET %s WHERE container_id=$1", strings.Join(set, ", ")), containerID)
|
||||
} else {
|
||||
err = nil
|
||||
}
|
||||
return err
|
||||
}
|
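Since SetSettings only writes the non-nil fields of Settings, a partial update can be expressed as in the fragment below; db, containerID and the zap logger are assumed to exist in the surrounding code.

// only run_level ends up in the generated UPDATE statement,
// every other column stays untouched
level := 2
if err := db.SetSettings(containerID, database.Settings{RunLevel: &level}); err != nil {
	zap.S().Errorf("failed to update settings: %v", err)
}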
12
server/docker/client.go
Normal file
@@ -0,0 +1,12 @@

package docker

import (
	"docker4ssh/database"
	"github.com/docker/docker/client"
)

type Client struct {
	Client   *client.Client
	Database *database.Database
	Network  Network
}
637
server/docker/container.go
Normal file
@@ -0,0 +1,637 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"context"
|
||||
c "docker4ssh/config"
|
||||
"docker4ssh/database"
|
||||
"docker4ssh/terminal"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/container"
|
||||
"github.com/docker/docker/api/types/network"
|
||||
"github.com/docker/docker/client"
|
||||
"go.uber.org/zap"
|
||||
"io"
|
||||
"io/fs"
|
||||
"net"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
func simpleContainerFromID(ctx context.Context, client *Client, config Config, containerID string) (*SimpleContainer, error) {
|
||||
inspect, err := client.Client.ContainerInspect(ctx, containerID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sc := &SimpleContainer{
|
||||
config: config,
|
||||
Image: Image{
|
||||
ref: inspect.Image,
|
||||
},
|
||||
ContainerID: containerID[:12],
|
||||
FullContainerID: containerID,
|
||||
client: client,
|
||||
cli: client.Client,
|
||||
}
|
||||
|
||||
sc.init(ctx)
|
||||
|
||||
return sc, nil
|
||||
}
|
||||
|
||||
// newSimpleContainer creates a new container.
|
||||
// Currently, only for internal usage, may be changing in future
|
||||
func newSimpleContainer(ctx context.Context, client *Client, config Config, image Image, containerName string) (*SimpleContainer, error) {
|
||||
// create a new container from the given image and activate in- and output
|
||||
resp, err := client.Client.ContainerCreate(ctx, &container.Config{
|
||||
Image: image.Ref(),
|
||||
AttachStderr: true,
|
||||
AttachStdin: true,
|
||||
Tty: true,
|
||||
AttachStdout: true,
|
||||
OpenStdin: true,
|
||||
}, nil, nil, nil, containerName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
sc := &SimpleContainer{
|
||||
config: config,
|
||||
Image: image,
|
||||
ContainerID: resp.ID[:12],
|
||||
FullContainerID: resp.ID,
|
||||
client: client,
|
||||
cli: client.Client,
|
||||
}
|
||||
|
||||
sc.init(ctx)
|
||||
|
||||
return sc, nil
|
||||
}
|
||||
|
||||
// SimpleContainer is the basic struct to control a docker4ssh container
|
||||
type SimpleContainer struct {
|
||||
config Config
|
||||
Image Image
|
||||
ContainerID string
|
||||
FullContainerID string
|
||||
|
||||
started bool
|
||||
|
||||
cancel context.CancelFunc
|
||||
|
||||
client *Client
|
||||
|
||||
// cli is just a shortcut for Client.Client
|
||||
cli *client.Client
|
||||
|
||||
Network struct {
|
||||
ID string
|
||||
IP string
|
||||
}
|
||||
}
|
||||
|
||||
func (sc *SimpleContainer) init(ctx context.Context) {
|
||||
// disconnect from default docker network
|
||||
sc.cli.NetworkDisconnect(ctx, sc.client.Network[Host], sc.FullContainerID, true)
|
||||
}
|
||||
|
||||
// Start starts the container
|
||||
func (sc *SimpleContainer) Start(ctx context.Context) error {
|
||||
if err := sc.cli.ContainerStart(ctx, sc.FullContainerID, types.ContainerStartOptions{}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !sc.started {
|
||||
		// initializes all settings.
		// the old config passed to updateConfig is a pseudo config whose
		// Configurable and KeepOnExit are negated from their values in
		// sc.config, so that every update function in
		// SimpleContainer.updateConfig is called at least once
|
||||
if err := sc.updateConfig(ctx, Config{
|
||||
Configurable: !sc.config.Configurable,
|
||||
KeepOnExit: !sc.config.KeepOnExit,
|
||||
}, sc.config); err != nil {
|
||||
return err
|
||||
}
|
||||
sc.started = true
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Stop stops the container
|
||||
func (sc *SimpleContainer) Stop(ctx context.Context) error {
|
||||
timeout := 0 * time.Second
|
||||
if err := sc.cli.ContainerStop(ctx, sc.FullContainerID, &timeout); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if !sc.config.KeepOnExit {
|
||||
if err := sc.cli.ContainerRemove(ctx, sc.FullContainerID, types.ContainerRemoveOptions{Force: true}); err != nil {
|
||||
return err
|
||||
}
|
||||
// delete all references to the container in the database
|
||||
return sc.client.Database.Delete(sc.FullContainerID)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (sc *SimpleContainer) Running(ctx context.Context) (bool, error) {
|
||||
resp, err := sc.cli.ContainerInspect(ctx, sc.FullContainerID)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return resp.State != nil && resp.State.Running, nil
|
||||
}
|
||||
|
||||
// WaitUntilStop waits until the container stops running
|
||||
func (sc *SimpleContainer) WaitUntilStop(ctx context.Context) error {
|
||||
statusChan, errChan := sc.cli.ContainerWait(ctx, sc.FullContainerID, container.WaitConditionNotRunning)
|
||||
select {
|
||||
case err := <-errChan:
|
||||
return err
|
||||
case <-statusChan:
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ExecuteConn executes a command in the container and returns the connection to the output
|
||||
func (sc *SimpleContainer) ExecuteConn(ctx context.Context, command string, args ...string) (net.Conn, error) {
|
||||
	execID, err := sc.cli.ContainerExecCreate(ctx, sc.FullContainerID, types.ExecConfig{
		AttachStdout: true,
		AttachStderr: true,
		Cmd:          append([]string{command}, args...),
	})
	if err != nil {
		return nil, err
	}
	resp, err := sc.cli.ContainerExecAttach(ctx, execID.ID, types.ExecStartCheck{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return resp.Conn, err
|
||||
}
|
||||
|
||||
// Execute executes a command in the container and returns the response after finished
|
||||
func (sc *SimpleContainer) Execute(ctx context.Context, command string, args ...string) ([]byte, error) {
|
||||
buf := bytes.Buffer{}
|
||||
|
||||
conn, err := sc.ExecuteConn(ctx, command, args...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
io.Copy(&buf, conn)
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
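For illustration, how Execute might be used once a container is running; sc is assumed to be a started *SimpleContainer, the command is an example, and fmt, context and zap are assumed to be imported in the surrounding code.

out, err := sc.Execute(context.Background(), "cat", "/etc/os-release")
if err != nil {
	zap.S().Errorf("exec in %s failed: %v", sc.ContainerID, err)
	return
}
fmt.Printf("%s", out)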
// CopyFrom copies a file from the container to the host system.
// Normal files and directories are accepted
|
||||
func (sc *SimpleContainer) CopyFrom(ctx context.Context, src, dst string) error {
|
||||
r, _, err := sc.cli.CopyFromContainer(ctx, sc.FullContainerID, src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer r.Close()
|
||||
|
||||
tr := tar.NewReader(r)
|
||||
for {
|
||||
header, err := tr.Next()
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
target := filepath.Join(dst, header.Name)
|
||||
|
||||
switch header.Typeflag {
|
||||
case tar.TypeDir:
|
||||
if _, err := os.Stat(target); os.IsNotExist(err) {
|
||||
if err := os.MkdirAll(target, os.FileMode(header.Mode)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
case tar.TypeReg:
|
||||
f, err := os.OpenFile(target, os.O_CREATE|os.O_RDWR, os.FileMode(header.Mode))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if _, err = io.Copy(f, tr); err != nil {
|
||||
return err
|
||||
}
|
||||
_ = f.Close()
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// CopyTo copies a file from the host system into the container.
// Normal files and directories are accepted
|
||||
func (sc *SimpleContainer) CopyTo(ctx context.Context, src, dst string) error {
|
||||
stat, err := os.Stat(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if stat.IsDir() {
|
||||
err = filepath.Walk(src, func(path string, info fs.FileInfo, err error) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
file, err := os.Open(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
header, err := tar.FileInfoHeader(info, info.Name())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
header.Name = strings.TrimPrefix(strings.TrimPrefix(path, src), "/")
|
||||
|
||||
// write every file to the container.
|
||||
// it might be better to write the file content to a buffer or
|
||||
// store the file pointer in a slice and write the buffer / stored
|
||||
// file pointer to the tar writer when every file was walked
|
||||
//
|
||||
			// TODO: Test if the two described methods are better than sending every file on its own
|
||||
buf := &bytes.Buffer{}
|
||||
|
||||
tw := tar.NewWriter(buf)
|
||||
if err = tw.WriteHeader(header); err != nil {
|
||||
return err
|
||||
}
|
||||
defer tw.Close()
|
||||
|
||||
io.Copy(tw, file)
|
||||
|
||||
err = sc.cli.CopyToContainer(ctx, sc.FullContainerID, dst, buf, types.CopyToContainerOptions{
|
||||
AllowOverwriteDirWithFile: true,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
file, err := os.Open(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
info, err := os.Lstat(src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
header, err := tar.FileInfoHeader(info, info.Name())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
header.Name = filepath.Base(src)
|
||||
|
||||
buf := &bytes.Buffer{}
|
||||
tw := tar.NewWriter(buf)
|
||||
if err = tw.WriteHeader(header); err != nil {
|
||||
return err
|
||||
}
|
||||
defer tw.Close()
|
||||
|
||||
_, _ = io.Copy(tw, file)
|
||||
|
||||
err = sc.cli.CopyToContainer(ctx, sc.FullContainerID, dst, buf, types.CopyToContainerOptions{
|
||||
AllowOverwriteDirWithFile: true,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Config returns the current container config
|
||||
func (sc *SimpleContainer) Config() Config {
|
||||
return sc.config
|
||||
}
|
||||
|
||||
// UpdateConfig updates the container config
|
||||
func (sc *SimpleContainer) UpdateConfig(ctx context.Context, config Config) error {
|
||||
oldConfig := sc.config
|
||||
|
||||
if err := sc.updateConfig(ctx, oldConfig, config); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var ocm, ncm, sm map[string]interface{}
|
||||
sm = make(map[string]interface{}, 0)
|
||||
|
||||
ocj, _ := json.Marshal(oldConfig)
|
||||
ncj, _ := json.Marshal(config)
|
||||
|
||||
json.Unmarshal(ocj, &ocm)
|
||||
json.Unmarshal(ncj, &ncm)
|
||||
|
||||
srt := reflect.TypeOf(database.Settings{})
|
||||
|
||||
for k, v := range ocm {
|
||||
newValue := ncm[k]
|
||||
if v != newValue && newValue != nil {
|
||||
field, ok := srt.FieldByName(k)
|
||||
if !ok {
|
||||
continue
|
||||
}
|
||||
|
||||
sm[field.Tag.Get("json")] = newValue
|
||||
}
|
||||
}
|
||||
|
||||
// marshal the map into new settings
|
||||
var settings database.Settings
|
||||
body, _ := json.Marshal(sm)
|
||||
json.Unmarshal(body, &settings)
|
||||
|
||||
err := sc.client.Database.SetSettings(sc.FullContainerID, settings)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if config.KeepOnExit {
|
||||
if _, ok := sc.client.Database.GetAuthByContainer(sc.FullContainerID); !ok {
|
||||
if err = sc.client.Database.SetAuth(sc.FullContainerID, database.Auth{
|
||||
User: &sc.ContainerID,
|
||||
}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
sc.config = config
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (sc *SimpleContainer) updateConfig(ctx context.Context, oldConfig, newConfig Config) error {
|
||||
if newConfig.NetworkMode != oldConfig.NetworkMode {
|
||||
if err := sc.setNetworkMode(ctx, oldConfig.NetworkMode, newConfig.NetworkMode, sc.client.Network != nil); err != nil {
|
||||
return err
|
||||
}
|
||||
zap.S().Debugf("Set network mode for %s to %s", sc.ContainerID, newConfig.NetworkMode.Name())
|
||||
}
|
||||
if newConfig.Configurable != oldConfig.Configurable {
|
||||
if err := sc.setConfigurable(ctx, newConfig.Configurable); err != nil {
|
||||
return err
|
||||
}
|
||||
zap.S().Debugf("Set configurable for %s to %t", sc.ContainerID, newConfig.Configurable)
|
||||
}
|
||||
if newConfig.ExitAfter != oldConfig.ExitAfter {
|
||||
sc.setExitAfterListener(ctx, newConfig.RunLevel, newConfig.ExitAfter)
|
||||
zap.S().Debugf("Set exit after listener for %s", sc.ContainerID)
|
||||
}
|
||||
|
||||
sc.config = newConfig
|
||||
return nil
|
||||
}
|
||||
|
||||
// setNetworkMode changes the network mode for the container
|
||||
func (sc *SimpleContainer) setNetworkMode(ctx context.Context, oldMode, newMode NetworkMode, networking bool) error {
|
||||
var networkID string
|
||||
|
||||
if !networking {
|
||||
networkID = sc.client.Network[Off]
|
||||
} else {
|
||||
networkID = sc.client.Network[newMode]
|
||||
}
|
||||
|
||||
if networkID != "" {
|
||||
sc.cli.NetworkDisconnect(ctx, sc.client.Network[oldMode], sc.FullContainerID, true)
|
||||
// connect container to a network
|
||||
if err := sc.cli.NetworkConnect(ctx, networkID, sc.FullContainerID, &network.EndpointSettings{}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
// inspect the container to get its ip address (yes, I was too lazy to implement
|
||||
// a service that generates the ips without docker)
|
||||
resp, err := sc.cli.ContainerInspect(ctx, sc.FullContainerID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// update the internal network information
|
||||
sc.Network.ID = networkID
|
||||
sc.Network.IP = resp.NetworkSettings.Networks[newMode.NetworkName()].IPAddress
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (sc *SimpleContainer) setConfigurable(ctx context.Context, configurable bool) error {
|
||||
cconfig := c.GetConfig()
|
||||
|
||||
if configurable {
|
||||
for srcFile, dstDir := range map[string]string{cconfig.Api.Configure.Binary: "/bin", cconfig.Api.Configure.Man: "/usr/share/man/man1"} {
|
||||
if err := sc.CopyTo(ctx, srcFile, dstDir); err != nil {
|
||||
if strings.HasSuffix(dstDir, "/man1") {
|
||||
// man pages aren't strictly necessary, so a failed copy only logs a warning.
|
||||
// this happens when the container runs alpine linux, for example, which ships
|
||||
// without a /usr/share/man/man1 directory, so the copy fails
|
||||
// TODO: Create the directory if it does not exist to prevent this error
|
||||
zap.S().Warnf("Failed to copy %s to %s/%s for %s: %v", srcFile, dstDir, filepath.Base(srcFile), sc.ContainerID, err)
|
||||
continue
|
||||
} else {
|
||||
return fmt.Errorf("failed to copy %s to %s/%s for %s: %v", srcFile, dstDir, filepath.Base(srcFile), sc.ContainerID, err)
|
||||
}
|
||||
}
|
||||
zap.S().Debugf("Copied %s to %s (%s)", srcFile, filepath.Join(dstDir, filepath.Base(srcFile)), sc.ContainerID)
|
||||
}
|
||||
resp, err := sc.cli.ContainerInspect(ctx, sc.FullContainerID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
_, err = sc.Execute(ctx, "sh", "-c", fmt.Sprintf("echo -n %s:%d > /etc/docker4ssh", resp.NetworkSettings.Networks[sc.config.NetworkMode.NetworkName()].Gateway, cconfig.Api.Port))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
zap.S().Debugf("Set ip and port of server for %s", sc.ContainerID)
|
||||
} else {
|
||||
_, err := sc.Execute(ctx, "rm",
|
||||
"-rf",
|
||||
fmt.Sprintf("/bin/%s", filepath.Base(cconfig.Api.Configure.Binary)),
|
||||
fmt.Sprintf("/usr/share/man/man1/%s", filepath.Base(cconfig.Api.Configure.Man)),
|
||||
"/etc/docker4ssh")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
zap.S().Debugf("Removed all configurable related files from %s", sc.ContainerID)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// setAPIRoute writes (or removes) the api server address in /etc/docker4ssh so tools inside the container can reach it
|
||||
func (sc *SimpleContainer) setAPIRoute(ctx context.Context, activate bool) error {
|
||||
var err error
|
||||
if activate {
|
||||
var resp types.ContainerJSON
|
||||
resp, err = sc.cli.ContainerInspect(ctx, sc.FullContainerID)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
cconfig := c.GetConfig()
|
||||
if resp.NetworkSettings != nil {
|
||||
_, err = sc.Execute(ctx, "sh", "-c", fmt.Sprintf("echo -n %s:%d > /etc/docker4ssh", resp.NetworkSettings.Networks[sc.config.NetworkMode.NetworkName()].Gateway, cconfig.Api.Port))
|
||||
}
|
||||
} else {
|
||||
_, err = sc.Execute(ctx, "rm", "-rf", "/etc/docker4ssh")
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// setExitAfterListener starts a listener that waits for the ExitAfter process to finish
|
||||
func (sc *SimpleContainer) setExitAfterListener(ctx context.Context, runlevel RunLevel, process string) {
|
||||
if sc.cancel != nil {
|
||||
sc.cancel()
|
||||
}
|
||||
|
||||
if process == "" {
|
||||
return
|
||||
}
|
||||
|
||||
cancelCtx, cancel := context.WithCancel(ctx)
|
||||
sc.cancel = cancel
|
||||
|
||||
go func() {
|
||||
var rawPid []byte
|
||||
var err error
|
||||
|
||||
// check for the pid of Config.ExitAfter and wait 1 second if it wasn't found
|
||||
for {
|
||||
rawPid, err = sc.Execute(cancelCtx, "pidof", "-s", process)
|
||||
if len(rawPid) > 0 || err != nil {
|
||||
break
|
||||
}
|
||||
time.Sleep(1 * time.Second)
|
||||
}
|
||||
|
||||
// sometimes garbage bytes are sent as well; they are filtered out here
|
||||
var pid []byte
|
||||
for _, b := range rawPid {
|
||||
if b >= '0' && b <= '9' {
|
||||
pid = append(pid, b)
|
||||
}
|
||||
}
|
||||
|
||||
pid = bytes.TrimSuffix(pid, []byte("\n"))
|
||||
|
||||
if _, err = sc.Execute(cancelCtx, "sh", "-c", fmt.Sprintf("tail --pid=%s -f /dev/null", pid)); err != nil && cancelCtx.Err() == nil {
|
||||
zap.S().Errorf("Could not wait on process %s (%s) for %s", process, pid, sc.ContainerID)
|
||||
return
|
||||
}
|
||||
|
||||
if runlevel != Forever {
|
||||
sc.Stop(context.Background())
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
func InteractiveContainerFromID(ctx context.Context, client *Client, config Config, containerID string) (*InteractiveContainer, error) {
|
||||
sc, err := simpleContainerFromID(ctx, client, config, containerID)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &InteractiveContainer{
|
||||
SimpleContainer: sc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func NewInteractiveContainer(ctx context.Context, cli *Client, config Config, image Image, containerName string) (*InteractiveContainer, error) {
|
||||
sc, err := newSimpleContainer(ctx, cli, config, image, containerName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &InteractiveContainer{
|
||||
SimpleContainer: sc,
|
||||
}, nil
|
||||
}
|
||||
|
||||
type InteractiveContainer struct {
|
||||
*SimpleContainer
|
||||
|
||||
terminalCount int
|
||||
}
|
||||
|
||||
// TerminalCount returns the count of active terminals
|
||||
func (ic *InteractiveContainer) TerminalCount() int {
|
||||
return ic.terminalCount
|
||||
}
|
||||
|
||||
// Terminal creates a new interactive terminal session for the container
|
||||
func (ic *InteractiveContainer) Terminal(ctx context.Context, term *terminal.Terminal) error {
|
||||
// get the default shell for the root user
|
||||
rawShell, err := ic.Execute(ctx, "sh", "-c", "getent passwd root | cut -d : -f 7")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// strip newlines (which could also be done via bytes.ReplaceAll or
|
||||
// strings.ReplaceAll) and any other redundant control bytes which
|
||||
// sometimes get returned as well and which cannot be interpreted
|
||||
// by the docker engine
|
||||
shell := bytes.Buffer{}
|
||||
for _, b := range rawShell {
|
||||
if b > ' ' {
|
||||
shell.WriteByte(b)
|
||||
}
|
||||
}
|
||||
|
||||
id, err := ic.cli.ContainerExecCreate(ctx, ic.FullContainerID, types.ExecConfig{
|
||||
Tty: true,
|
||||
AttachStdin: true,
|
||||
AttachStdout: true,
|
||||
AttachStderr: true,
|
||||
Cmd: []string{shell.String()},
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
resp, err := ic.cli.ContainerExecAttach(ctx, id.ID, types.ExecStartCheck{
|
||||
Tty: true,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
errChan := make(chan error)
|
||||
|
||||
go func() {
|
||||
// copy all output from the container to the user's terminal
|
||||
if _, err = io.Copy(term, resp.Conn); err != nil {
|
||||
errChan <- err
|
||||
}
|
||||
errChan <- nil
|
||||
}()
|
||||
go func() {
|
||||
// copy all input from the user's terminal to the container
|
||||
if _, err = io.Copy(resp.Conn, term); err != nil {
|
||||
errChan <- err
|
||||
}
|
||||
errChan <- nil
|
||||
}()
|
||||
|
||||
ic.terminalCount++
|
||||
select {
|
||||
case err = <-errChan:
|
||||
resp.Conn.Close()
|
||||
}
|
||||
ic.terminalCount--
|
||||
|
||||
return err
|
||||
}
|
120
server/docker/docker.go
Normal file
120
server/docker/docker.go
Normal file
@ -0,0 +1,120 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"github.com/docker/docker/client"
|
||||
"os"
|
||||
)
|
||||
|
||||
type NetworkMode int
|
||||
|
||||
const (
|
||||
Off NetworkMode = iota + 1
|
||||
|
||||
// Isolate isolates the container from the host and the host's
|
||||
// network. Therefore, no configurations can be changed from
|
||||
// within the container
|
||||
Isolate
|
||||
|
||||
// Host is the default docker networking configuration
|
||||
Host
|
||||
|
||||
// Docker is the same configuration you get when you start a
|
||||
// container via the command line
|
||||
Docker
|
||||
|
||||
// None disables all isolation between the docker container
|
||||
// and the host, so inside the network the container can act
|
||||
// as the host and has direct access to the host's network
|
||||
None
|
||||
)
|
||||
|
||||
func (nm NetworkMode) Name() string {
|
||||
switch nm {
|
||||
case Off:
|
||||
return "Off"
|
||||
case Isolate:
|
||||
return "Iso"
|
||||
case Host:
|
||||
return "Host"
|
||||
case Docker:
|
||||
return "Docker"
|
||||
case None:
|
||||
return "None"
|
||||
}
|
||||
return "invalid network"
|
||||
}
|
||||
|
||||
func (nm NetworkMode) NetworkName() string {
|
||||
switch nm {
|
||||
case Off:
|
||||
return "none"
|
||||
case Isolate:
|
||||
return "docker4ssh-full"
|
||||
case Host:
|
||||
return "bridge"
|
||||
case Docker:
|
||||
return "docker4ssh-def"
|
||||
case None:
|
||||
return "none"
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
type RunLevel int
|
||||
|
||||
const (
|
||||
User RunLevel = iota + 1
|
||||
Container
|
||||
Forever
|
||||
)
|
||||
|
||||
func (rl RunLevel) Name() string {
|
||||
switch rl {
|
||||
case User:
|
||||
return "User"
|
||||
case Container:
|
||||
return "Container"
|
||||
case Forever:
|
||||
return "Forever"
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
type Config struct {
|
||||
// NetworkMode describes the level of isolation of the container from the host system.
|
||||
// Mostly changes the network of the container, see NetworkMode.NetworkName for more details
|
||||
NetworkMode NetworkMode
|
||||
|
||||
// If Configurable is true, the container can change settings for itself
|
||||
Configurable bool
|
||||
|
||||
// RunLevel describes the container behavior.
|
||||
// If the RunLevel is User, the container will exit when the user disconnects.
|
||||
// If the RunLevel is Container, the container keeps running if the user disconnects
|
||||
// and ExitAfter is specified and the specified process has not finished yet.
|
||||
// If the RunLevel is Forever, the container keeps running forever unless ExitAfter
|
||||
// is specified and the specified process ends.
|
||||
//
|
||||
// Note: It also automatically exits if ExitAfter is specified and the specified
|
||||
// process ends, even if the user is still connected to the container
|
||||
RunLevel RunLevel
|
||||
|
||||
// StartupInformation defines if information about the container like its (shorthand)
|
||||
// container id, NetworkMode, RunLevel, etc. should be shown when connecting to it
|
||||
StartupInformation bool
|
||||
|
||||
// ExitAfter contains a process name; after that process ends, the container should stop
|
||||
ExitAfter string
|
||||
|
||||
// When KeepOnExit is true, the container won't get removed when it stops
|
||||
KeepOnExit bool
|
||||
}
|
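||||
|
||||
// Illustrative sketch (editor's addition, not from the original source): a Config for a
|
||||
// throwaway, isolated session that stops once the user disconnects. The values below are
|
||||
// assumptions for the example, not shipped defaults:
|
||||
//
|
||||
//	Config{
|
||||
//		NetworkMode:        Isolate,
|
||||
//		Configurable:       false,
|
||||
//		RunLevel:           User,
|
||||
//		StartupInformation: true,
|
||||
//		ExitAfter:          "",
|
||||
//		KeepOnExit:         false,
|
||||
//	}
|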
||||
|
||||
func InitCli() (*client.Client, error) {
|
||||
return client.NewClientWithOpts()
|
||||
}
|
||||
|
||||
func IsRunning() bool {
|
||||
_, err := os.Stat("/var/run/docker.sock")
|
||||
return !os.IsNotExist(err)
|
||||
}
|
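||||
|
||||
// Editor's note (illustrative, not part of the original source): IsRunning only checks
|
||||
// that the docker socket file exists; it does not prove that the daemon answers. A
|
||||
// stricter check could ping the API, roughly (context would need to be imported):
|
||||
//
|
||||
//	cli, err := client.NewClientWithOpts()
|
||||
//	if err == nil {
|
||||
//		_, err = cli.Ping(context.Background())
|
||||
//	}
|
||||
//	// err == nil means the daemon is reachable
|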
41
server/docker/image.go
Normal file
41
server/docker/image.go
Normal file
@ -0,0 +1,41 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/filters"
|
||||
"github.com/docker/docker/client"
|
||||
"io"
|
||||
)
|
||||
|
||||
type Image struct {
|
||||
ref string
|
||||
}
|
||||
|
||||
func (i Image) Ref() string {
|
||||
return i.ref
|
||||
}
|
||||
|
||||
// NewImage creates a new Image instance and pulls the image if it is not available locally
|
||||
func NewImage(ctx context.Context, cli *client.Client, ref string) (Image, io.ReadCloser, error) {
|
||||
summary, err := cli.ImageList(ctx, types.ImageListOptions{
|
||||
Filters: filters.NewArgs(filters.Arg("reference", ref)),
|
||||
})
|
||||
if err != nil {
|
||||
return Image{}, nil, err
|
||||
}
|
||||
|
||||
if len(summary) > 0 {
|
||||
return Image{
|
||||
ref: ref,
|
||||
}, nil, nil
|
||||
} else {
|
||||
out, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{})
|
||||
if err != nil {
|
||||
return Image{}, nil, err
|
||||
}
|
||||
return Image{
|
||||
ref: ref,
|
||||
}, out, nil
|
||||
}
|
||||
}
|
86
server/docker/network.go
Normal file
86
server/docker/network.go
Normal file
@ -0,0 +1,86 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
c "docker4ssh/config"
|
||||
"github.com/docker/docker/api/types"
|
||||
"github.com/docker/docker/api/types/network"
|
||||
"github.com/docker/docker/client"
|
||||
)
|
||||
|
||||
type Network map[NetworkMode]string
|
||||
|
||||
// InitNetwork initializes the docker4ssh networks, creating the ones that do not exist yet
|
||||
func InitNetwork(ctx context.Context, cli *client.Client, config *c.Config) (Network, error) {
|
||||
n := Network{}
|
||||
|
||||
networks, err := cli.NetworkList(ctx, types.NetworkListOptions{})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for _, dockerNetwork := range networks {
|
||||
var mode NetworkMode
|
||||
|
||||
switch dockerNetwork.Name {
|
||||
case "none":
|
||||
mode = Off
|
||||
case "docker4ssh-iso":
|
||||
mode = Isolate
|
||||
case "bridge":
|
||||
mode = Host
|
||||
case "docker4ssh-def":
|
||||
mode = Docker
|
||||
case "host":
|
||||
mode = None
|
||||
default:
|
||||
continue
|
||||
}
|
||||
|
||||
n[mode] = dockerNetwork.ID
|
||||
}
|
||||
|
||||
if _, ok := n[Isolate]; !ok {
|
||||
// create a new network which isolates the container from the host,
|
||||
// but not from the network
|
||||
resp, err := cli.NetworkCreate(ctx, "docker4ssh-iso", types.NetworkCreate{
|
||||
CheckDuplicate: true,
|
||||
Driver: "bridge",
|
||||
EnableIPv6: false,
|
||||
IPAM: &network.IPAM{
|
||||
Driver: "default",
|
||||
Config: []network.IPAMConfig{
|
||||
{
|
||||
Subnet: config.Network.Isolate.Subnet,
|
||||
},
|
||||
},
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n[Isolate] = resp.ID
|
||||
}
|
||||
|
||||
if _, ok := n[Docker]; !ok {
|
||||
// the standard network for all containers
|
||||
resp, err := cli.NetworkCreate(ctx, "docker4ssh-def", types.NetworkCreate{
|
||||
CheckDuplicate: true,
|
||||
Driver: "bridge",
|
||||
EnableIPv6: false,
|
||||
IPAM: &network.IPAM{
|
||||
Driver: "default",
|
||||
Config: []network.IPAMConfig{
|
||||
{
|
||||
Subnet: config.Network.Default.Subnet,
|
||||
},
|
||||
},
|
||||
},
|
||||
})
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
n[Docker] = resp.ID
|
||||
}
|
||||
|
||||
return n, nil
|
||||
}
|
41
server/go.mod
Normal file
41
server/go.mod
Normal file
@ -0,0 +1,41 @@
|
||||
module docker4ssh
|
||||
|
||||
go 1.17
|
||||
|
||||
require (
|
||||
github.com/BurntSushi/toml v0.4.1
|
||||
github.com/docker/docker v20.10.11+incompatible
|
||||
github.com/docker/go-units v0.4.0
|
||||
github.com/mattn/go-sqlite3 v1.14.9
|
||||
github.com/morikuni/aec v1.0.0
|
||||
github.com/spf13/cobra v1.0.0
|
||||
go.uber.org/zap v1.19.1
|
||||
golang.org/x/crypto v0.0.0-20211202192323-5770296d904e
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/Microsoft/go-winio v0.4.17 // indirect
|
||||
github.com/containerd/containerd v1.5.8 // indirect
|
||||
github.com/docker/distribution v2.7.1+incompatible // indirect
|
||||
github.com/docker/go-connections v0.4.0 // indirect
|
||||
github.com/gogo/protobuf v1.3.2 // indirect
|
||||
github.com/golang/protobuf v1.5.0 // indirect
|
||||
github.com/gorilla/mux v1.8.0 // indirect
|
||||
github.com/inconshreveable/mousetrap v1.0.0 // indirect
|
||||
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
|
||||
github.com/opencontainers/go-digest v1.0.0 // indirect
|
||||
github.com/opencontainers/image-spec v1.0.2 // indirect
|
||||
github.com/pkg/errors v0.9.1 // indirect
|
||||
github.com/sirupsen/logrus v1.8.1 // indirect
|
||||
github.com/spf13/pflag v1.0.5 // indirect
|
||||
go.uber.org/atomic v1.7.0 // indirect
|
||||
go.uber.org/multierr v1.6.0 // indirect
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 // indirect
|
||||
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22 // indirect
|
||||
golang.org/x/time v0.0.0-20211116232009-f0f3c7e86c11 // indirect
|
||||
google.golang.org/genproto v0.0.0-20201110150050-8816d57aaa9a // indirect
|
||||
google.golang.org/grpc v1.42.0 // indirect
|
||||
google.golang.org/protobuf v1.27.1 // indirect
|
||||
)
|
||||
|
||||
replace golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 => github.com/ByteDream/term v0.0.0-20211025115508-891a970291e6
|
1005
server/go.sum
Normal file
1005
server/go.sum
Normal file
File diff suppressed because it is too large
32
server/logging/logging.go
Normal file
32
server/logging/logging.go
Normal file
@ -0,0 +1,32 @@
|
||||
package logging
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"go.uber.org/zap/zapcore"
|
||||
)
|
||||
|
||||
func InitLogging(level zap.AtomicLevel, outputFiles, errorFiles []string) {
|
||||
encoderConfig := zap.NewProductionEncoderConfig()
|
||||
|
||||
encoderConfig.EncodeTime = zapcore.TimeEncoderOfLayout("[2006-01-02 15:04:05] -")
|
||||
encoderConfig.ConsoleSeparator = " "
|
||||
encoderConfig.EncodeLevel = func(level zapcore.Level, encoder zapcore.PrimitiveArrayEncoder) {
|
||||
encoder.AppendString(fmt.Sprintf("%s:", level.CapitalString()))
|
||||
}
|
||||
encoderConfig.EncodeCaller = nil
|
||||
|
||||
config := zap.NewProductionConfig()
|
||||
config.EncoderConfig = encoderConfig
|
||||
config.Encoding = "console"
|
||||
config.Level = level
|
||||
config.OutputPaths = outputFiles
|
||||
config.ErrorOutputPaths = errorFiles
|
||||
|
||||
logger, err := config.Build()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
zap.ReplaceGlobals(logger)
|
||||
}
|
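||||
|
||||
// Illustrative usage (editor's addition); the level and the output paths are example values:
|
||||
//
|
||||
//	InitLogging(zap.NewAtomicLevelAt(zap.InfoLevel), []string{"stdout"}, []string{"stderr"})
|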
9
server/main.go
Normal file
9
server/main.go
Normal file
@ -0,0 +1,9 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"docker4ssh/cmd"
|
||||
)
|
||||
|
||||
func main() {
|
||||
cmd.Execute()
|
||||
}
|
68
server/ssh/config.go
Normal file
68
server/ssh/config.go
Normal file
@ -0,0 +1,68 @@
|
||||
package ssh
|
||||
|
||||
import (
|
||||
c "docker4ssh/config"
|
||||
"docker4ssh/database"
|
||||
"fmt"
|
||||
"golang.org/x/crypto/ssh"
|
||||
"io/ioutil"
|
||||
)
|
||||
|
||||
func NewSSHConfig(config *c.Config) (*ssh.ServerConfig, error) {
|
||||
db := database.GetDatabase()
|
||||
|
||||
sshConfig := &ssh.ServerConfig{
|
||||
MaxAuthTries: 3,
|
||||
PasswordCallback: func(conn ssh.ConnMetadata, password []byte) (*ssh.Permissions, error) {
|
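||||
// resolve the login in three steps: an auth already bound to a container in the
|
||||
// database, then a matching static profile, and finally (if enabled) the dynamic
|
||||
// profile, where the username names the docker image to run
|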
||||
if containerID, exists := db.GetContainerByAuth(database.NewUnsafeAuth(conn.User(), password)); exists && containerID != "" {
|
||||
return &ssh.Permissions{
|
||||
CriticalOptions: map[string]string{
|
||||
"containerID": containerID,
|
||||
},
|
||||
}, nil
|
||||
} else if profile, ok := profiles.Match(conn.User(), password); ok {
|
||||
return &ssh.Permissions{
|
||||
CriticalOptions: map[string]string{
|
||||
"profile": profile.Name(),
|
||||
},
|
||||
}, nil
|
||||
} else if config.Profile.Dynamic.Enable && dynamicProfile.Match(conn.User(), password) {
|
||||
return &ssh.Permissions{
|
||||
CriticalOptions: map[string]string{
|
||||
"profile": "dynamic",
|
||||
"image": conn.User(),
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
// I think logging the wrong password is a bit unsafe.
|
||||
// if you e.g. just have a typo in it, it isn't great to see your nearly correct password in the logs
|
||||
return nil, fmt.Errorf("%s tried to connect with user %s but entered a wrong password", conn.RemoteAddr().String(), conn.User())
|
||||
},
|
||||
}
|
||||
sshConfig.SetDefaults()
|
||||
|
||||
key, err := parseSSHPrivateKey(config.SSH.Keyfile, []byte(config.SSH.Passphrase))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sshConfig.AddHostKey(key)
|
||||
|
||||
return sshConfig, nil
|
||||
}
|
||||
|
||||
func parseSSHPrivateKey(path string, password []byte) (ssh.Signer, error) {
|
||||
keyBytes, err := ioutil.ReadFile(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var key ssh.Signer
|
||||
if len(password) == 0 {
|
||||
key, err = ssh.ParsePrivateKey(keyBytes)
|
||||
} else {
|
||||
key, err = ssh.ParsePrivateKeyWithPassphrase(keyBytes, password)
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return key, nil
|
||||
}
|
201
server/ssh/connection.go
Normal file
201
server/ssh/connection.go
Normal file
@ -0,0 +1,201 @@
|
||||
package ssh
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"database/sql"
|
||||
"docker4ssh/database"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/utils"
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"strconv"
|
||||
"sync"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
allContainers []*docker.InteractiveContainer
|
||||
)
|
||||
|
||||
func closeAllContainers(ctx context.Context) {
|
||||
var wg sync.WaitGroup
|
||||
for _, container := range allContainers {
|
||||
wg.Add(1)
|
||||
container := container
|
||||
go func() {
|
||||
container.Stop(ctx)
|
||||
wg.Done()
|
||||
}()
|
||||
}
|
||||
wg.Wait()
|
||||
}
|
||||
|
||||
func connection(client *docker.Client, user *User) {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
defer cancel()
|
||||
|
||||
container, ok := getContainer(ctx, client, user)
|
||||
if !ok {
|
||||
zap.S().Errorf("Failed to create container for %s", user.ID)
|
||||
return
|
||||
}
|
||||
|
||||
user.Container = container.SimpleContainer
|
||||
|
||||
var found bool
|
||||
for _, cont := range allContainers {
|
||||
if cont == container {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
allContainers = append(allContainers, container)
|
||||
}
|
||||
|
||||
// check if the container is running and start it if not
|
||||
if running, err := container.Running(ctx); err == nil && !running {
|
||||
if err = container.Start(ctx); err != nil {
|
||||
zap.S().Errorf("Failed to start container %s: %v", container.ContainerID, err)
|
||||
fmt.Fprintln(user.Terminal, "Failed to start container")
|
||||
return
|
||||
}
|
||||
zap.S().Infof("Started container %s with internal id '%s', ip '%s'", container.ContainerID, container.ContainerID, container.Network.IP)
|
||||
} else if err != nil {
|
||||
zap.S().Errorf("Failed to get container running state: %v", err)
|
||||
fmt.Fprintln(user.Terminal, "Failed to check container running state")
|
||||
}
|
||||
|
||||
config := container.Config()
|
||||
if user.Profile.StartupInformation {
|
||||
buf := &bytes.Buffer{}
|
||||
fmt.Fprintf(buf, "┌───Container────────────────┐\r\n")
|
||||
fmt.Fprintf(buf, "│ Container ID: %-12s │\r\n", container.ContainerID)
|
||||
fmt.Fprintf(buf, "│ Network Mode: %-12s │\r\n", config.NetworkMode.Name())
|
||||
fmt.Fprintf(buf, "│ Configurable: %-12t │\r\n", config.Configurable)
|
||||
fmt.Fprintf(buf, "│ Run Level: %-12s │\r\n", config.RunLevel.Name())
|
||||
fmt.Fprintf(buf, "│ Exit After: %-12s │\r\n", config.ExitAfter)
|
||||
fmt.Fprintf(buf, "│ Keep On Exit: %-12t │\r\n", config.KeepOnExit)
|
||||
fmt.Fprintf(buf, "└──────────────Information───┘\r\n")
|
||||
user.Terminal.Write(buf.Bytes())
|
||||
}
|
||||
|
||||
// start a new terminal session
|
||||
if err := container.Terminal(ctx, user.Terminal); err != nil {
|
||||
zap.S().Errorf("Failed to serve %s terminal: %v", container.ContainerID, err)
|
||||
fmt.Fprintln(user.Terminal, "Failed to serve terminal")
|
||||
}
|
||||
|
||||
if config.RunLevel == docker.User && container.TerminalCount() == 0 {
|
||||
if err := container.Stop(ctx); err != nil {
|
||||
zap.S().Errorf("Error occoured while stopping container %s: %v", container.ContainerID, err)
|
||||
} else {
|
||||
lenBefore := len(allContainers)
|
||||
for i, cont := range allContainers {
|
||||
if cont == container {
|
||||
allContainers[i] = allContainers[lenBefore-1]
|
||||
allContainers = allContainers[:lenBefore-1]
|
||||
break
|
||||
}
|
||||
}
|
||||
if lenBefore == len(allContainers) {
|
||||
zap.S().Warnf("Stopped container %s, but failed to remove it from the global container scope", container.ContainerID)
|
||||
} else {
|
||||
zap.S().Infof("Stopped container %s", container.ContainerID)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
zap.S().Infof("Stopped session for user %s", user.ID)
|
||||
}
|
||||
|
||||
func getContainer(ctx context.Context, client *docker.Client, user *User) (container *docker.InteractiveContainer, ok bool) {
|
||||
db := database.GetDatabase()
|
||||
var config docker.Config
|
||||
|
||||
// check if the user has a container (id) assigned
|
||||
if user.Profile.ContainerID != "" {
|
||||
for _, cont := range allContainers {
|
||||
if cont.FullContainerID == user.Profile.ContainerID {
|
||||
return cont, true
|
||||
}
|
||||
}
|
||||
|
||||
settings, err := db.SettingsByContainerID(user.Profile.ContainerID)
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to get stored container config for container %s: %v", user.Profile.ContainerID, err)
|
||||
fmt.Fprintf(user.Terminal, "Could not connect to saved container")
|
||||
return nil, false
|
||||
}
|
||||
|
||||
config = docker.Config{
|
||||
NetworkMode: docker.NetworkMode(*settings.NetworkMode),
|
||||
Configurable: *settings.Configurable,
|
||||
RunLevel: docker.RunLevel(*settings.RunLevel),
|
||||
StartupInformation: *settings.StartupInformation,
|
||||
ExitAfter: *settings.ExitAfter,
|
||||
KeepOnExit: *settings.KeepOnExit,
|
||||
}
|
||||
|
||||
container, err = docker.InteractiveContainerFromID(ctx, client, config, user.Profile.ContainerID)
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to get container from id %s: %v", user.Profile.ContainerID, err)
|
||||
fmt.Fprintf(user.Terminal, "Failed to get container")
|
||||
return nil, false
|
||||
}
|
||||
|
||||
zap.S().Infof("Re-used container %s for user %s", user.Profile.ContainerID, user.ID)
|
||||
} else {
|
||||
config = docker.Config{
|
||||
NetworkMode: docker.NetworkMode(user.Profile.NetworkMode),
|
||||
Configurable: user.Profile.Configurable,
|
||||
RunLevel: docker.RunLevel(user.Profile.RunLevel),
|
||||
StartupInformation: user.Profile.StartupInformation,
|
||||
ExitAfter: user.Profile.ExitAfter,
|
||||
KeepOnExit: user.Profile.KeepOnExit,
|
||||
}
|
||||
|
||||
image, out, err := docker.NewImage(ctx, client.Client, user.Profile.Image)
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to get '%s' image for profile %s: %v", user.Profile.Image, user.Profile.Name(), err)
|
||||
fmt.Fprintf(user.Terminal, "Failed to get image %s", image.Ref())
|
||||
return nil, false
|
||||
}
|
||||
if out != nil {
|
||||
if err := utils.DisplayJSONMessagesStream(out, user.Terminal, user.Terminal); err != nil {
|
||||
zap.S().Fatalf("Failed to fetch '%s' docker image: %v", image.Ref(), err)
|
||||
fmt.Fprintf(user.Terminal, "Failed to fetch image %s", image.Ref())
|
||||
return nil, false
|
||||
}
|
||||
}
|
||||
|
||||
container, err = docker.NewInteractiveContainer(ctx, client, config, image, strconv.Itoa(int(time.Now().Unix())))
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to create interactive container: %v", err)
|
||||
fmt.Fprintln(user.Terminal, "Failed to create interactive container")
|
||||
return nil, false
|
||||
}
|
||||
|
||||
zap.S().Infof("Created new %s container (%s) for user %s", image.Ref(), container.ContainerID, user.ID)
|
||||
}
|
||||
|
||||
if _, err := db.SettingsByContainerID(container.FullContainerID); err != nil {
|
||||
if err == sql.ErrNoRows {
|
||||
rawNetworkMode := int(config.NetworkMode)
|
||||
rawRunLevel := int(config.RunLevel)
|
||||
if err := db.SetSettings(container.FullContainerID, database.Settings{
|
||||
NetworkMode: &rawNetworkMode,
|
||||
Configurable: &config.Configurable,
|
||||
RunLevel: &rawRunLevel,
|
||||
StartupInformation: &config.StartupInformation,
|
||||
ExitAfter: &config.ExitAfter,
|
||||
KeepOnExit: &config.KeepOnExit,
|
||||
}); err != nil {
|
||||
zap.S().Errorf("Failed to update settings for container %s for user %s: %v", container.ContainerID, user.ID, err)
|
||||
return nil, false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return container, true
|
||||
}
|
79
server/ssh/handle.go
Normal file
79
server/ssh/handle.go
Normal file
@ -0,0 +1,79 @@
|
||||
package ssh
|
||||
|
||||
import (
|
||||
"docker4ssh/docker"
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"golang.org/x/crypto/ssh"
|
||||
)
|
||||
|
||||
type RequestType string
|
||||
|
||||
const (
|
||||
RequestPtyReq RequestType = "pty-req"
|
||||
RequestWindowChange RequestType = "window-change"
|
||||
)
|
||||
|
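||||
// PtyReqPayload mirrors the SSH "pty-req" channel request payload
|
||||
// (RFC 4254, section 6.2), which is why ssh.Unmarshal can decode it directly
|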
||||
type PtyReqPayload struct {
|
||||
Term string
|
||||
|
||||
Width, Height uint32
|
||||
PixelWidth, PixelHeight uint32
|
||||
|
||||
Modes []byte
|
||||
}
|
||||
|
||||
func handleChannels(chans <-chan ssh.NewChannel, client *docker.Client, user *User) {
|
||||
for channel := range chans {
|
||||
go handleChannel(channel, client, user)
|
||||
}
|
||||
}
|
||||
|
||||
func handleChannel(channel ssh.NewChannel, client *docker.Client, user *User) {
|
||||
if t := channel.ChannelType(); t != "session" {
|
||||
channel.Reject(ssh.UnknownChannelType, fmt.Sprintf("unknown channel type: %s", t))
|
||||
return
|
||||
}
|
||||
|
||||
conn, requests, err := channel.Accept()
|
||||
if err != nil {
|
||||
zap.S().Warnf("Failed to accept channel for user %s", user.ID)
|
||||
return
|
||||
}
|
||||
defer conn.Close()
|
||||
user.Terminal.ReadWriter = conn
|
||||
|
||||
// handle all other requests besides the normal user input.
|
||||
// currently, only 'pty-req' is implemented, which communicates a terminal size change
|
||||
go handleRequest(requests, user)
|
||||
|
||||
// this handles the actual user terminal connection.
|
||||
// blocks until the session has finished
|
||||
connection(client, user)
|
||||
|
||||
zap.S().Debugf("Session for user %s ended", user.ID)
|
||||
}
|
||||
|
||||
func handleRequest(requests <-chan *ssh.Request, user *User) {
|
||||
for request := range requests {
|
||||
switch RequestType(request.Type) {
|
||||
case RequestPtyReq:
|
||||
// this could spam the logs when the user resizes his window constantly
|
||||
// log()
|
||||
|
||||
var ptyReq PtyReqPayload
|
||||
ssh.Unmarshal(request.Payload, &ptyReq)
|
||||
|
||||
user.Terminal.Width = ptyReq.Width
|
||||
user.Terminal.Height = ptyReq.Height
|
||||
case RequestWindowChange:
|
||||
// window changes are intentionally not logged to avoid spamming the logs
|
||||
default:
|
||||
zap.S().Debugf("New request from user %s - Type: %s, Want Reply: %t, Payload: '%s'", user.ID, request.Type, request.WantReply, request.Payload)
|
||||
}
|
||||
|
||||
if request.WantReply {
|
||||
request.Reply(true, nil)
|
||||
}
|
||||
}
|
||||
}
|
190
server/ssh/ssh.go
Normal file
190
server/ssh/ssh.go
Normal file
@ -0,0 +1,190 @@
|
||||
package ssh
|
||||
|
||||
import (
|
||||
"context"
|
||||
"crypto/md5"
|
||||
c "docker4ssh/config"
|
||||
"docker4ssh/database"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/terminal"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"go.uber.org/zap"
|
||||
"golang.org/x/crypto/ssh"
|
||||
"net"
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
var (
|
||||
users = make([]*User, 0)
|
||||
|
||||
profiles c.Profiles
|
||||
dynamicProfile c.Profile
|
||||
)
|
||||
|
||||
type User struct {
|
||||
*ssh.ServerConn
|
||||
|
||||
ID string
|
||||
IP string
|
||||
Profile *c.Profile
|
||||
Terminal *terminal.Terminal
|
||||
Container *docker.SimpleContainer
|
||||
}
|
||||
|
||||
func GetUser(ip string) *User {
|
||||
for _, user := range users {
|
||||
if container := user.Container; container != nil && container.Network.IP == ip {
|
||||
return user
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type extras struct {
|
||||
containerID string
|
||||
}
|
||||
|
||||
func StartServing(config *c.Config, serverConfig *ssh.ServerConfig) (errChan chan error, closer func() error) {
|
||||
errChan = make(chan error, 1)
|
||||
|
||||
var err error
|
||||
profiles, err = c.LoadProfileDir(config.Profile.Dir, c.DefaultPreProfileFromConfig(config))
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
zap.S().Debugf("Loaded %d profile(s)", len(profiles))
|
||||
|
||||
if config.Profile.Dynamic.Enable {
|
||||
dynamicProfile, err = c.DynamicProfileFromConfig(config, c.DefaultPreProfileFromConfig(config))
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
zap.S().Debugf("Loaded dynamic profile")
|
||||
}
|
||||
|
||||
cli, err := docker.InitCli()
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
zap.S().Debugf("Initialized docker cli")
|
||||
|
||||
network, err := docker.InitNetwork(context.Background(), cli, config)
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
zap.S().Debugf("Initialized docker networks")
|
||||
|
||||
client := &docker.Client{
|
||||
Client: cli,
|
||||
Database: database.GetDatabase(),
|
||||
Network: network,
|
||||
}
|
||||
|
||||
listener, err := net.Listen("tcp", fmt.Sprintf(":%d", config.SSH.Port))
|
||||
if err != nil {
|
||||
errChan <- err
|
||||
return
|
||||
}
|
||||
zap.S().Debugf("Created ssh listener")
|
||||
|
||||
var closed bool
|
||||
go func() {
|
||||
db := database.GetDatabase()
|
||||
|
||||
for {
|
||||
conn, err := listener.Accept()
|
||||
if err != nil {
|
||||
if closed {
|
||||
return
|
||||
}
|
||||
zap.S().Errorf("Failed to accept new ssh user: %v", err)
|
||||
continue
|
||||
}
|
||||
serverConn, chans, requests, err := ssh.NewServerConn(conn, serverConfig)
|
||||
if err != nil {
|
||||
zap.S().Errorf("Failed to establish new ssh connection: %v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
idBytes := md5.Sum([]byte(strings.Split(serverConn.User(), ":")[0]))
|
||||
idString := hex.EncodeToString(idBytes[:])
|
||||
|
||||
zap.S().Infof("New ssh connection from %s with %s (%s)", serverConn.RemoteAddr().String(), serverConn.ClientVersion(), idString)
|
||||
|
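||||
// pick the profile for this connection: the dynamic profile (the username doubles
|
||||
// as the image reference), a named profile, or a profile reconstructed from a
|
||||
// stored or already running container
|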
||||
var profile *c.Profile
|
||||
if name, ok := serverConn.Permissions.CriticalOptions["profile"]; ok {
|
||||
if name == "dynamic" {
|
||||
if image, ok := serverConn.Permissions.CriticalOptions["image"]; ok {
|
||||
tempDynamicProfile := dynamicProfile
|
||||
tempDynamicProfile.Image = image
|
||||
profile = &tempDynamicProfile
|
||||
}
|
||||
}
|
||||
if profile == nil {
|
||||
if profile, ok = profiles.GetByName(name); !ok {
|
||||
zap.S().Errorf("Failed to get profile %s", name)
|
||||
continue
|
||||
}
|
||||
}
|
||||
} else if containerID, ok := serverConn.Permissions.CriticalOptions["containerID"]; ok {
|
||||
if settings, err := db.SettingsByContainerID(containerID); err == nil {
|
||||
profile = &c.Profile{
|
||||
NetworkMode: *settings.NetworkMode,
|
||||
Configurable: *settings.Configurable,
|
||||
RunLevel: *settings.RunLevel,
|
||||
StartupInformation: *settings.StartupInformation,
|
||||
ExitAfter: *settings.ExitAfter,
|
||||
KeepOnExit: *settings.KeepOnExit,
|
||||
ContainerID: containerID,
|
||||
}
|
||||
} else {
|
||||
for _, container := range allContainers {
|
||||
if container.ContainerID == containerID {
|
||||
cconfig := c.GetConfig()
|
||||
profile = &c.Profile{
|
||||
Password: regexp.MustCompile(cconfig.Profile.Default.Password),
|
||||
NetworkMode: cconfig.Profile.Default.NetworkMode,
|
||||
Configurable: cconfig.Profile.Default.Configurable,
|
||||
RunLevel: cconfig.Profile.Default.RunLevel,
|
||||
StartupInformation: cconfig.Profile.Default.StartupInformation,
|
||||
ExitAfter: cconfig.Profile.Default.ExitAfter,
|
||||
KeepOnExit: cconfig.Profile.Default.KeepOnExit,
|
||||
Image: "",
|
||||
ContainerID: containerID,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
zap.S().Debugf("User %s has profile %s", idString, profile.Name())
|
||||
|
||||
user := &User{
|
||||
ServerConn: serverConn,
|
||||
ID: idString,
|
||||
Terminal: &terminal.Terminal{},
|
||||
Profile: profile,
|
||||
}
|
||||
users = append(users, user)
|
||||
|
||||
go ssh.DiscardRequests(requests)
|
||||
go handleChannels(chans, client, user)
|
||||
}
|
||||
}()
|
||||
|
||||
return errChan, func() error {
|
||||
closed = true
|
||||
|
||||
// close all containers
|
||||
closeAllContainers(context.Background())
|
||||
|
||||
// close the listener
|
||||
return listener.Close()
|
||||
}
|
||||
}
|
9
server/terminal/terminal.go
Normal file
9
server/terminal/terminal.go
Normal file
@ -0,0 +1,9 @@
|
||||
package terminal
|
||||
|
||||
import "io"
|
||||
|
||||
type Terminal struct {
|
||||
io.ReadWriter
|
||||
|
||||
Width, Height uint32
|
||||
}
|
27
server/utils/convert.go
Normal file
27
server/utils/convert.go
Normal file
@ -0,0 +1,27 @@
|
||||
package utils
|
||||
|
||||
import (
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func UsernameToRegex(username string) (*regexp.Regexp, error) {
|
||||
var rawUsername string
|
||||
if rawUsername = strings.TrimPrefix(username, "regex:"); rawUsername == username {
|
||||
rawUsername = strings.ReplaceAll(rawUsername, "*", ".*")
|
||||
}
|
||||
return regexp.Compile(rawUsername)
|
||||
}
|
||||
|
||||
func PasswordToRegex(password string) (*regexp.Regexp, error) {
|
||||
splitPassword := strings.SplitN(password, ":", 1)
|
||||
if len(splitPassword) > 1 {
|
||||
switch splitPassword[0] {
|
||||
case "regex":
|
||||
return regexp.Compile(splitPassword[1])
|
||||
case "sha1", "sha256", "sha512":
|
||||
password = splitPassword[1]
|
||||
}
|
||||
}
|
||||
return regexp.Compile(strings.ReplaceAll(password, "*", ".*"))
|
||||
}
|
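||||
|
||||
// Illustrative usage (editor's addition); the inputs are example values:
|
||||
//
|
||||
//	UsernameToRegex("admin*")           // wildcard form, compiled as "admin.*"
|
||||
//	UsernameToRegex("regex:^root$")     // explicit regex form, compiled verbatim
|
||||
//	PasswordToRegex("regex:^.{12,}$")   // explicit regex form
|
||||
//	PasswordToRegex("sha256:<digest>")  // hash prefix is stripped, the digest itself becomes the pattern
|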
238
server/utils/jsonmessage.go
Normal file
238
server/utils/jsonmessage.go
Normal file
@ -0,0 +1,238 @@
|
||||
// adapted from https://github.com/docker/cli/blob/a32cd16160f1b41c1c4ae7bee4dac929d1484e59/vendor/github.com/docker/docker/pkg/jsonmessage/jsonmessage.go
|
||||
|
||||
package utils
|
||||
|
||||
import (
|
||||
"docker4ssh/terminal"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/docker/go-units"
|
||||
"github.com/morikuni/aec"
|
||||
"io"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// RFC3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
|
||||
// ensure the formatted time is always the same number of characters.
|
||||
const RFC3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
|
||||
|
||||
// JSONError wraps a concrete Code and Message, `Code` is
|
||||
// an integer error code, `Message` is the error message.
|
||||
type JSONError struct {
|
||||
Code int `json:"code,omitempty"`
|
||||
Message string `json:"message,omitempty"`
|
||||
}
|
||||
|
||||
func (e *JSONError) Error() string {
|
||||
return e.Message
|
||||
}
|
||||
|
||||
// JSONProgress describes a progress. terminalFd is the fd of the current terminal,
|
||||
// Start is the initial value for the operation. Current is the current status and
|
||||
// value of the progress made towards Total. Total is the end value describing when
|
||||
// we made 100% progress for an operation.
|
||||
type JSONProgress struct {
|
||||
Terminal *terminal.Terminal
|
||||
Current int64 `json:"current,omitempty"`
|
||||
Total int64 `json:"total,omitempty"`
|
||||
Start int64 `json:"start,omitempty"`
|
||||
// If true, don't show xB/yB
|
||||
HideCounts bool `json:"hidecounts,omitempty"`
|
||||
Units string `json:"units,omitempty"`
|
||||
}
|
||||
|
||||
func (p *JSONProgress) String() string {
|
||||
var (
|
||||
width = p.Terminal.Width
|
||||
pbBox string
|
||||
numbersBox string
|
||||
timeLeftBox string
|
||||
)
|
||||
if p.Current <= 0 && p.Total <= 0 {
|
||||
return ""
|
||||
}
|
||||
if p.Total <= 0 {
|
||||
switch p.Units {
|
||||
case "":
|
||||
current := units.HumanSize(float64(p.Current))
|
||||
return fmt.Sprintf("%8v", current)
|
||||
default:
|
||||
return fmt.Sprintf("%d %s", p.Current, p.Units)
|
||||
}
|
||||
}
|
||||
|
||||
percentage := int(float64(p.Current)/float64(p.Total)*100) / 2
|
||||
if percentage > 50 {
|
||||
percentage = 50
|
||||
}
|
||||
if width > 110 {
|
||||
// this number can't be negative gh#7136
|
||||
numSpaces := 0
|
||||
if 50-percentage > 0 {
|
||||
numSpaces = 50 - percentage
|
||||
}
|
||||
pbBox = fmt.Sprintf("[%s>%s] ", strings.Repeat("=", percentage), strings.Repeat(" ", numSpaces))
|
||||
}
|
||||
|
||||
switch {
|
||||
case p.HideCounts:
|
||||
case p.Units == "": // no units, use bytes
|
||||
current := units.HumanSize(float64(p.Current))
|
||||
total := units.HumanSize(float64(p.Total))
|
||||
|
||||
numbersBox = fmt.Sprintf("%8v/%v", current, total)
|
||||
|
||||
if p.Current > p.Total {
|
||||
// remove total display if the reported current is wonky.
|
||||
numbersBox = fmt.Sprintf("%8v", current)
|
||||
}
|
||||
default:
|
||||
numbersBox = fmt.Sprintf("%d/%d %s", p.Current, p.Total, p.Units)
|
||||
|
||||
if p.Current > p.Total {
|
||||
// remove total display if the reported current is wonky.
|
||||
numbersBox = fmt.Sprintf("%d %s", p.Current, p.Units)
|
||||
}
|
||||
}
|
||||
|
||||
if p.Current > 0 && p.Start > 0 && percentage < 50 {
|
||||
fromStart := time.Now().Sub(time.Unix(p.Start, 0))
|
||||
perEntry := fromStart / time.Duration(p.Current)
|
||||
left := time.Duration(p.Total-p.Current) * perEntry
|
||||
left = (left / time.Second) * time.Second
|
||||
|
||||
if width > 50 {
|
||||
timeLeftBox = " " + left.String()
|
||||
}
|
||||
}
|
||||
return pbBox + numbersBox + timeLeftBox
|
||||
}
|
||||
|
||||
// JSONMessage defines a message struct. It describes
|
||||
// the created time, where it from, status, ID of the
|
||||
// message. It's used for docker events.
|
||||
type JSONMessage struct {
|
||||
Stream string `json:"stream,omitempty"`
|
||||
Status string `json:"status,omitempty"`
|
||||
Progress *JSONProgress `json:"progressDetail,omitempty"`
|
||||
ProgressMessage string `json:"progress,omitempty"` // deprecated
|
||||
ID string `json:"id,omitempty"`
|
||||
From string `json:"from,omitempty"`
|
||||
Time int64 `json:"time,omitempty"`
|
||||
TimeNano int64 `json:"timeNano,omitempty"`
|
||||
Error *JSONError `json:"errorDetail,omitempty"`
|
||||
ErrorMessage string `json:"error,omitempty"` // deprecated
|
||||
// Aux contains out-of-band data, such as digests for push signing and image id after building.
|
||||
Aux *json.RawMessage `json:"aux,omitempty"`
|
||||
}
|
||||
|
||||
func clearLine(out io.Writer) {
|
||||
eraseMode := aec.EraseModes.All
|
||||
cl := aec.EraseLine(eraseMode)
|
||||
fmt.Fprint(out, cl)
|
||||
}
|
||||
|
||||
func cursorUp(out io.Writer, l uint) {
|
||||
fmt.Fprint(out, aec.Up(l))
|
||||
}
|
||||
|
||||
func cursorDown(out io.Writer, l uint) {
|
||||
fmt.Fprint(out, aec.Down(l))
|
||||
}
|
||||
|
||||
// Display displays the JSONMessage to `out`. If the message carries progress
|
||||
// information, it erases the entire current line before displaying the progressbar.
|
||||
func (jm *JSONMessage) Display(out io.Writer) error {
|
||||
if jm.Error != nil {
|
||||
if jm.Error.Code == 401 {
|
||||
return fmt.Errorf("authentication is required")
|
||||
}
|
||||
return jm.Error
|
||||
}
|
||||
var endl string
|
||||
if jm.Stream == "" && jm.Progress != nil {
|
||||
clearLine(out)
|
||||
endl = "\r"
|
||||
fmt.Fprint(out, endl)
|
||||
} else if jm.Progress != nil && jm.Progress.String() != "" { // disable progressbar in non-terminal
|
||||
return nil
|
||||
}
|
||||
if jm.TimeNano != 0 {
|
||||
fmt.Fprintf(out, "%s ", time.Unix(0, jm.TimeNano).Format(RFC3339NanoFixed))
|
||||
} else if jm.Time != 0 {
|
||||
fmt.Fprintf(out, "%s ", time.Unix(jm.Time, 0).Format(RFC3339NanoFixed))
|
||||
}
|
||||
if jm.ID != "" {
|
||||
fmt.Fprintf(out, "%s: ", jm.ID)
|
||||
}
|
||||
if jm.From != "" {
|
||||
fmt.Fprintf(out, "(from %s) ", jm.From)
|
||||
}
|
||||
if jm.Progress != nil {
|
||||
fmt.Fprintf(out, "%s %s%s", jm.Status, jm.Progress.String(), endl)
|
||||
} else if jm.ProgressMessage != "" { // deprecated
|
||||
fmt.Fprintf(out, "%s %s%s", jm.Status, jm.ProgressMessage, endl)
|
||||
} else if jm.Stream != "" {
|
||||
fmt.Fprintf(out, "%s%s", jm.Stream, endl)
|
||||
} else {
|
||||
fmt.Fprintf(out, "%s%s\r\n", jm.Status, endl)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DisplayJSONMessagesStream displays a json message stream from `in` to `out`, using `term`
|
||||
// to size the progress bars. While displaying, it prints a line break for every new id and
|
||||
// moves the cursor to update progress lines in place.
|
||||
func DisplayJSONMessagesStream(in io.Reader, out io.Writer, term *terminal.Terminal) error {
|
||||
var (
|
||||
dec = json.NewDecoder(in)
|
||||
ids = make(map[string]uint)
|
||||
)
|
||||
|
||||
for {
|
||||
var diff uint
|
||||
var jm JSONMessage
|
||||
if err := dec.Decode(&jm); err != nil {
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
if jm.Progress != nil {
|
||||
jm.Progress.Terminal = term
|
||||
}
|
||||
if jm.ID != "" && (jm.Progress != nil || jm.ProgressMessage != "") {
|
||||
line, ok := ids[jm.ID]
|
||||
if !ok {
|
||||
// NOTE: This approach of using len(id) to
|
||||
// figure out the number of lines of history
|
||||
// only works as long as we clear the history
|
||||
// when we output something that's not
|
||||
// accounted for in the map, such as a line
|
||||
// with no ID.
|
||||
line = uint(len(ids))
|
||||
ids[jm.ID] = line
|
||||
fmt.Fprintf(out, "\r\n")
|
||||
}
|
||||
diff = uint(len(ids)) - line
|
||||
cursorUp(out, diff)
|
||||
} else {
|
||||
// When outputting something that isn't progress
|
||||
// output, clear the history of previous lines. We
|
||||
// don't want progress entries from some previous
|
||||
// operation to be updated (for example, pull -a
|
||||
// with multiple tags).
|
||||
ids = make(map[string]uint)
|
||||
}
|
||||
err := jm.Display(out)
|
||||
if jm.ID != "" {
|
||||
cursorDown(out, diff)
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
33
server/validate/error.go
Normal file
33
server/validate/error.go
Normal file
@ -0,0 +1,33 @@
|
||||
package validate
|
||||
|
||||
import "fmt"
|
||||
|
||||
func newValidateError(section, key string, value interface{}, message string, original error) *ValidateError {
|
||||
return &ValidateError{
|
||||
section: section,
|
||||
key: key,
|
||||
value: value,
|
||||
message: message,
|
||||
originalError: original,
|
||||
}
|
||||
}
|
||||
|
||||
type ValidateError struct {
|
||||
error
|
||||
|
||||
section string
|
||||
key string
|
||||
value interface{}
|
||||
|
||||
message string
|
||||
|
||||
originalError error
|
||||
}
|
||||
|
||||
func (ve *ValidateError) Error() string {
|
||||
if ve.originalError != nil {
|
||||
return fmt.Sprintf("failed to validate %s.%s (%v), %s: %v", ve.section, ve.key, ve.value, ve.message, ve.originalError)
|
||||
} else {
|
||||
return fmt.Sprintf("failed to validate %s.%s (%v), %s", ve.section, ve.key, ve.value, ve.message)
|
||||
}
|
||||
}
|
45
server/validate/validate.go
Normal file
45
server/validate/validate.go
Normal file
@ -0,0 +1,45 @@
|
||||
package validate
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"github.com/docker/docker/client"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type Validator struct {
|
||||
Cli *client.Client
|
||||
Strict bool
|
||||
}
|
||||
|
||||
type ValidatorResult struct {
|
||||
Strict bool
|
||||
|
||||
Errors []*ValidateError
|
||||
}
|
||||
|
||||
func (vr *ValidatorResult) Ok() bool {
|
||||
return len(vr.Errors) == 0
|
||||
}
|
||||
|
||||
func (vr *ValidatorResult) String() string {
|
||||
builder := strings.Builder{}
|
||||
|
||||
if len(vr.Errors) == 0 {
|
||||
if vr.Strict {
|
||||
builder.WriteString("Validated all files, no errors were found. You're good to go (strict mode on)")
|
||||
} else {
|
||||
builder.WriteString("Validated all files, no errors were found. You're good to go")
|
||||
}
|
||||
} else {
|
||||
if vr.Strict {
|
||||
builder.WriteString(fmt.Sprintf("Found %d errors (strict mode on)\n\n", len(vr.Errors)))
|
||||
} else {
|
||||
builder.WriteString(fmt.Sprintf("Found %d errors\n\n", len(vr.Errors)))
|
||||
}
|
||||
for _, err := range vr.Errors {
|
||||
builder.WriteString(fmt.Sprintf("%v\n", err))
|
||||
}
|
||||
}
|
||||
|
||||
return builder.String()
|
||||
}
|
264
server/validate/validate_config.go
Normal file
264
server/validate/validate_config.go
Normal file
@ -0,0 +1,264 @@
|
||||
package validate
|
||||
|
||||
import (
|
||||
"docker4ssh/config"
|
||||
"docker4ssh/docker"
|
||||
"docker4ssh/utils"
|
||||
"fmt"
|
||||
"github.com/docker/docker/client"
|
||||
"go.uber.org/zap"
|
||||
s "golang.org/x/crypto/ssh"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func NewConfigValidator(cli *client.Client, strict bool, config *config.Config) *ConfigValidator {
|
||||
return &ConfigValidator{
|
||||
Validator: &Validator{
|
||||
Cli: cli,
|
||||
Strict: strict,
|
||||
},
|
||||
Config: config,
|
||||
}
|
||||
}
|
||||
|
||||
type ConfigValidator struct {
|
||||
*Validator
|
||||
|
||||
Config *config.Config
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) Validate() *ValidatorResult {
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
errors = append(errors, cv.ValidateProfile().Errors...)
|
||||
errors = append(errors, cv.ValidateAPI().Errors...)
|
||||
errors = append(errors, cv.ValidateSSH().Errors...)
|
||||
errors = append(errors, cv.ValidateDatabase().Errors...)
|
||||
errors = append(errors, cv.ValidateNetwork().Errors...)
|
||||
errors = append(errors, cv.ValidateLogging().Errors...)
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) ValidateProfile() *ValidatorResult {
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
errors = append(errors, cv.validateProfileDefault()...)
|
||||
errors = append(errors, cv.validateProfileDynamic()...)
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) validateProfileDefault() []*ValidateError {
|
||||
profileDefault := cv.Config.Profile.Default
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
if _, err := utils.PasswordToRegex(profileDefault.Password); err != nil {
|
||||
errors = append(errors, newValidateError("profile.default", "Password", profileDefault.Password, "not a valid regex string", err))
|
||||
}
|
||||
networkMode := docker.NetworkMode(profileDefault.NetworkMode)
|
||||
if docker.Off > networkMode || networkMode > docker.None {
|
||||
errors = append(errors, newValidateError("profile.default", "NetworkMode", profileDefault.NetworkMode, "not a valid network mode", nil))
|
||||
}
|
||||
runLevel := docker.RunLevel(profileDefault.RunLevel)
|
||||
if docker.User > runLevel || runLevel > docker.Forever {
|
||||
errors = append(errors, newValidateError("profile.default", "RunLevel", profileDefault.RunLevel, "is not a valid run level", nil))
|
||||
}
|
||||
|
||||
return errors
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) validateProfileDynamic() []*ValidateError {
|
||||
profileDynamic := cv.Config.Profile.Dynamic
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
if !profileDynamic.Enable && !cv.Strict {
|
||||
return errors
|
||||
}
|
||||
|
||||
if _, err := utils.PasswordToRegex(profileDynamic.Password); err != nil {
|
||||
errors = append(errors, newValidateError("profile.dynamic", "Password", profileDynamic.Password, "not a valid regex string", err))
|
||||
}
|
||||
networkMode := docker.NetworkMode(profileDynamic.NetworkMode)
|
||||
if docker.Off > networkMode || networkMode > docker.None {
|
||||
errors = append(errors, newValidateError("profile.dynamic", "NetworkMode", profileDynamic.NetworkMode, "not a valid network mode", nil))
|
||||
}
|
||||
runLevel := docker.RunLevel(profileDynamic.RunLevel)
|
||||
if docker.User > runLevel || runLevel > docker.Forever {
|
||||
errors = append(errors, newValidateError("profile.dynamic", "RunLevel", profileDynamic.RunLevel, "is not a valid run level", nil))
|
||||
}
|
||||
|
||||
return errors
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) ValidateAPI() *ValidatorResult {
|
||||
api := cv.Config.Api
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
if cv.Strict && !isPortFree(api.Port) {
|
||||
errors = append(errors, newValidateError("api", "Port", api.Port, "port is already in use", nil))
|
||||
}
|
||||
|
||||
errors = append(errors, cv.validateAPIConfigure().Errors...)
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) validateAPIConfigure() *ValidatorResult {
|
||||
apiConfigure := cv.Config.Api.Configure
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
for k, v := range map[string]string{"Binary": apiConfigure.Binary, "Man": apiConfigure.Man} {
|
||||
path := absolutePath("", v)
|
||||
if msg, err, ok := fileOk(path); !ok {
|
||||
errors = append(errors, newValidateError("api.configure", k, path, msg, err))
|
||||
}
|
||||
}
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) ValidateSSH() *ValidatorResult {
|
||||
ssh := cv.Config.SSH
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
if cv.Strict && !isPortFree(ssh.Port) {
|
||||
errors = append(errors, newValidateError("api", "Port", ssh.Port, "port is already in use", nil))
|
||||
}
|
||||
|
||||
path := absolutePath("", ssh.Keyfile)
|
||||
if msg, err, ok := fileOk(path); !ok {
|
||||
errors = append(errors, newValidateError("ssh", "Keyfile", path, msg, err))
|
||||
} else {
|
||||
keyBytes, err := ioutil.ReadFile(path)
|
||||
if err != nil {
|
||||
panic(fmt.Sprintf("failed to read file %s: %v", path, err))
|
||||
}
|
||||
if ssh.Passphrase == "" {
|
||||
if _, err = s.ParsePrivateKey(keyBytes); err != nil {
|
||||
errors = append(errors, newValidateError("ssh", "Passphrase", ssh.Passphrase, "failed to parse ssh keyfile without password", err))
|
||||
}
|
||||
} else {
|
||||
if _, err = s.ParsePrivateKeyWithPassphrase(keyBytes, []byte(ssh.Passphrase)); err != nil {
|
||||
errors = append(errors, newValidateError("ssh", "Passphrase", ssh.Passphrase, "failed to parse ssh keyfile with password", err))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
func (cv *ConfigValidator) ValidateDatabase() *ValidatorResult {
|
||||
database := cv.Config.Database
|
||||
errors := make([]*ValidateError, 0)
|
||||
|
||||
path := absolutePath("", database.Sqlite3File)
|
||||
if msg, err, ok := fileOk(path); !ok {
|
||||
errors = append(errors, newValidateError("database", "Sqlite3File", path, msg, err))
|
||||
}
|
||||
|
||||
// TODO: implement sql database schema
|
||||
|
||||
return &ValidatorResult{
|
||||
Strict: cv.Strict,
|
||||
Errors: errors,
|
||||
}
|
||||
}
|
||||
|
||||
// ValidateNetwork validates the network section of the config.
func (cv *ConfigValidator) ValidateNetwork() *ValidatorResult {
	network := cv.Config.Network
	errors := make([]*ValidateError, 0)

	if !strings.Contains(network.Default.Subnet, "/") {
		errors = append(errors, newValidateError("network.default", "Subnet", network.Default.Subnet, "no network mask is given", nil))
	} else if subnet, _, err := net.ParseCIDR(network.Default.Subnet); err != nil {
		errors = append(errors, newValidateError("network.default", "Subnet", network.Default.Subnet, "invalid subnet ip", err))
	} else if subnet == nil {
		errors = append(errors, newValidateError("network.default", "Subnet", network.Default.Subnet, "invalid subnet ip", nil))
	}

	if !strings.Contains(network.Isolate.Subnet, "/") {
		errors = append(errors, newValidateError("network.isolate", "Subnet", network.Isolate.Subnet, "no network mask is given", nil))
	} else if subnet, _, err := net.ParseCIDR(network.Isolate.Subnet); err != nil {
		errors = append(errors, newValidateError("network.isolate", "Subnet", network.Isolate.Subnet, "invalid subnet ip", err))
	} else if subnet == nil {
		errors = append(errors, newValidateError("network.isolate", "Subnet", network.Isolate.Subnet, "invalid subnet ip", nil))
	}

	return &ValidatorResult{
		Strict: cv.Strict,
		Errors: errors,
	}
}

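// Illustrative sketch (not part of the original diff): how net.ParseCIDR behaves
// for the subnet strings validated above. ParseCIDR itself rejects a value
// without a mask, so the explicit "/" check mainly yields a more specific error
// message. The function name is hypothetical and only serves as an example.
func exampleSubnetCheck() {
	// A valid CIDR string parses into an IP and the enclosing network.
	if ip, ipNet, err := net.ParseCIDR("10.0.0.0/24"); err == nil {
		fmt.Println("valid subnet:", ip, ipNet) // 10.0.0.0 10.0.0.0/24
	}
	// A bare address without a mask is rejected by ParseCIDR.
	if _, _, err := net.ParseCIDR("10.0.0.0"); err != nil {
		fmt.Println("missing mask:", err) // invalid CIDR address: 10.0.0.0
	}
}
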
// ValidateLogging validates the logging section of the config.
func (cv *ConfigValidator) ValidateLogging() *ValidatorResult {
	logging := cv.Config.Logging
	errors := make([]*ValidateError, 0)

	level := zap.NewAtomicLevel()
	if err := level.UnmarshalText([]byte(logging.Level)); err != nil {
		errors = append(errors, newValidateError("logging", "Level", logging.Level, "invalid level", err))
	}
	if cv.Strict {
		path := absolutePath("", logging.OutputFile)
		if msg, err, ok := fileOk(path); !ok {
			errors = append(errors, newValidateError("logging", "OutputFile", logging.OutputFile, msg, err))
		}
		path = absolutePath("", logging.ErrorFile)
		if msg, err, ok := fileOk(path); !ok {
			errors = append(errors, newValidateError("logging", "ErrorFile", logging.ErrorFile, msg, err))
		}
	}

	return &ValidatorResult{
		Strict: cv.Strict,
		Errors: errors,
	}
}

// isPortFree reports whether a tcp listener can be bound to the given port.
// Port 0 is treated as not free since it would bind an arbitrary free port.
func isPortFree(port uint16) bool {
	listener, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
	if listener != nil {
		listener.Close()
	}
	return err == nil && port != 0
}

func absolutePath(parentPath, filePath string) (path string) {
	if filepath.IsAbs(filePath) {
		path = filePath
	} else {
		path = filepath.Join(parentPath, filePath)
	}
	return
}

func fileOk(path string) (string, error, bool) {
	info, err := os.Stat(path)
	if os.IsNotExist(err) {
		return "file does not exist", err, false
	} else if err != nil {
		// handle other stat errors before touching info, which is nil on error
		return "unexpected error", err, false
	} else if info.IsDir() {
		return "file is a directory", nil, false
	}
	return "", nil, true
}

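// Illustrative sketch (not part of the original diff): the per-section
// validators above all return a *ValidatorResult, so their errors can simply be
// collected into one slice. Constructing the *ConfigValidator itself is assumed
// to happen elsewhere in this package; the function name is hypothetical.
func exampleValidateAll(cv *ConfigValidator) []*ValidateError {
	all := make([]*ValidateError, 0)
	for _, result := range []*ValidatorResult{
		cv.ValidateAPI(),
		cv.ValidateSSH(),
		cv.ValidateDatabase(),
		cv.ValidateNetwork(),
		cv.ValidateLogging(),
	} {
		all = append(all, result.Errors...)
	}
	return all
}
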
64
server/validate/validate_profile.go
Normal file
@ -0,0 +1,64 @@
package validate

import (
	"context"
	"docker4ssh/config"
	"docker4ssh/docker"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// NewProfileValidator returns a ProfileValidator for the given profile.
func NewProfileValidator(cli *client.Client, strict bool, profile *config.Profile) *ProfileValidator {
	return &ProfileValidator{
		Validator: &Validator{
			Cli:    cli,
			Strict: strict,
		},
		Profile: profile,
	}
}

// ProfileValidator validates a single profile.
type ProfileValidator struct {
	*Validator

	Profile *config.Profile
}

// Validate checks the profile for invalid or missing values.
func (pv *ProfileValidator) Validate() *ValidatorResult {
	profile := pv.Profile
	errors := make([]*ValidateError, 0)

	networkMode := docker.NetworkMode(profile.NetworkMode)
	if docker.Off > networkMode || networkMode > docker.None {
		errors = append(errors, newValidateError(profile.Name(), "NetworkMode", profile.NetworkMode, "is not a valid network mode", nil))
	}
	runLevel := docker.RunLevel(profile.RunLevel)
	if docker.User > runLevel || runLevel > docker.Forever {
		errors = append(errors, newValidateError(profile.Name(), "RunLevel", profile.RunLevel, "is not a valid run level", nil))
	}
	if profile.Image == "" && profile.ContainerID == "" {
		errors = append(errors, newValidateError(profile.Name(), "image/container", "", "either Image or ContainerID must be specified", nil))
	} else if pv.Strict {
		if profile.Image != "" {
			list, err := pv.Cli.ImageList(context.Background(), types.ImageListOptions{
				Filters: filters.NewArgs(filters.Arg("reference", profile.Image)),
			})
			if err != nil || len(list) == 0 {
				errors = append(errors, newValidateError(profile.Name(), "Image", profile.Image, "image does not exist", nil))
			}
		} else if profile.ContainerID != "" {
			list, err := pv.Cli.ContainerList(context.Background(), types.ContainerListOptions{
				Filters: filters.NewArgs(filters.Arg("id", profile.ContainerID)),
			})
			if err != nil || len(list) == 0 {
				errors = append(errors, newValidateError(profile.Name(), "ContainerID", profile.ContainerID, "container does not exist", nil))
			}
		}
	}

	return &ValidatorResult{
		Strict: pv.Strict,
		Errors: errors,
	}
}

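// Illustrative sketch (not part of the original diff): running the profile
// validator against a Docker client built from the environment. How profiles
// are loaded via the config package is assumed, not taken from this diff, and
// the function name is hypothetical.
func exampleValidateProfile(profile *config.Profile) ([]*ValidateError, error) {
	// Create a Docker client from DOCKER_HOST and related environment variables.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return nil, err
	}
	// Strict mode additionally checks that the referenced image or container exists.
	pv := NewProfileValidator(cli, true, profile)
	return pv.Validate().Errors, nil
}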