author     Sean Whitton <spwhitton@spwhitton.name>  2016-10-23 13:39:53 -0700
committer  Sean Whitton <spwhitton@spwhitton.name>  2016-10-23 13:39:53 -0700
commit     7c9c8981a1371dc36f49ae915d62b1c88d32eb04 (patch)
tree       173f6dd8accb4eb5d5478ffe048d832ee0244441
download   keysafe-7c9c8981a1371dc36f49ae915d62b1c88d32eb04.tar.gz
Import keysafe_0.20161022.orig.tar.xz
[dgit import orig keysafe_0.20161022.orig.tar.xz]
-rw-r--r--  .ghci                       3
-rw-r--r--  .gitignore                  3
-rw-r--r--  AGPL                      661
-rw-r--r--  AutoStart.hs               42
-rw-r--r--  BackupLog.hs               98
-rw-r--r--  Benchmark.hs              119
-rw-r--r--  ByteStrings.hs             42
-rw-r--r--  CHANGELOG                 139
-rw-r--r--  CmdLine.hs                246
-rw-r--r--  Cost.hs                   150
-rw-r--r--  Encryption.hs             242
-rw-r--r--  Entropy.hs                 24
-rw-r--r--  ExpensiveHash.hs           42
-rw-r--r--  Gpg.hs                     79
-rw-r--r--  HTTP.hs                   111
-rw-r--r--  HTTP/Client.hs            116
-rw-r--r--  HTTP/Logger.hs             25
-rw-r--r--  HTTP/ProofOfWork.hs       171
-rw-r--r--  HTTP/RateLimit.hs         419
-rw-r--r--  HTTP/Server.hs            125
-rw-r--r--  INSTALL                    50
-rw-r--r--  Makefile                   66
-rw-r--r--  Output.hs                  33
-rw-r--r--  SecretKey.hs               31
-rw-r--r--  Serialization.hs           55
-rw-r--r--  ServerBackup.hs            62
-rw-r--r--  Servers.hs                 36
-rw-r--r--  Setup.hs                   30
-rw-r--r--  Share.hs                  136
-rw-r--r--  Storage.hs                208
-rw-r--r--  Storage/Local.hs          200
-rw-r--r--  Storage/Network.hs         64
-rw-r--r--  TODO                       54
-rw-r--r--  Tests.hs                  141
-rw-r--r--  Tunables.hs               162
-rw-r--r--  Types.hs                   71
-rw-r--r--  Types/Cost.hs              72
-rw-r--r--  Types/Server.hs            36
-rw-r--r--  Types/Storage.hs           60
-rw-r--r--  Types/UI.hs                27
-rw-r--r--  UI.hs                      45
-rw-r--r--  UI/NonInteractive.hs       40
-rw-r--r--  UI/Readline.hs            163
-rw-r--r--  UI/Zenity.hs              183
-rw-r--r--  Utility/Data.hs            19
-rw-r--r--  Utility/Env.hs             84
-rw-r--r--  Utility/Exception.hs      113
-rw-r--r--  Utility/FreeDesktop.hs    147
-rw-r--r--  Utility/UserInfo.hs        62
-rw-r--r--  keysafe.1                 166
-rw-r--r--  keysafe.autostart           9
-rw-r--r--  keysafe.cabal             124
-rw-r--r--  keysafe.default             2
-rw-r--r--  keysafe.desktop             9
-rw-r--r--  keysafe.hs                459
-rw-r--r--  keysafe.init               60
-rw-r--r--  keysafe.service            18
-rw-r--r--  stack.yaml                 16
58 files changed, 6170 insertions, 0 deletions
diff --git a/.ghci b/.ghci
new file mode 100644
index 0000000..c459af9
--- /dev/null
+++ b/.ghci
@@ -0,0 +1,3 @@
+:set -Wall
+:set -fno-warn-tabs
+:set -XOverloadedStrings
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..60a6dcc
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,3 @@
+dist/*
+.stack-work/*
+keysafe
diff --git a/AGPL b/AGPL
new file mode 100644
index 0000000..dba13ed
--- /dev/null
+++ b/AGPL
@@ -0,0 +1,661 @@
+ GNU AFFERO GENERAL PUBLIC LICENSE
+ Version 3, 19 November 2007
+
+ Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+ Preamble
+
+ The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+ The licenses for most software and other practical works are designed
+to take away your freedom to share and change the works. By contrast,
+our General Public Licenses are intended to guarantee your freedom to
+share and change all versions of a program--to make sure it remains free
+software for all its users.
+
+ When we speak of free software, we are referring to freedom, not
+price. Our General Public Licenses are designed to make sure that you
+have the freedom to distribute copies of free software (and charge for
+them if you wish), that you receive source code or can get it if you
+want it, that you can change the software or use pieces of it in new
+free programs, and that you know you can do these things.
+
+ Developers that use our General Public Licenses protect your rights
+with two steps: (1) assert copyright on the software, and (2) offer
+you this License which gives you legal permission to copy, distribute
+and/or modify the software.
+
+ A secondary benefit of defending all users' freedom is that
+improvements made in alternate versions of the program, if they
+receive widespread use, become available for other developers to
+incorporate. Many developers of free software are heartened and
+encouraged by the resulting cooperation. However, in the case of
+software used on network servers, this result may fail to come about.
+The GNU General Public License permits making a modified version and
+letting the public access it on a server without ever releasing its
+source code to the public.
+
+ The GNU Affero General Public License is designed specifically to
+ensure that, in such cases, the modified source code becomes available
+to the community. It requires the operator of a network server to
+provide the source code of the modified version running there to the
+users of that server. Therefore, public use of a modified version, on
+a publicly accessible server, gives the public access to the source
+code of the modified version.
+
+ An older license, called the Affero General Public License and
+published by Affero, was designed to accomplish similar goals. This is
+a different license, not a version of the Affero GPL, but Affero has
+released a new version of the Affero GPL which permits relicensing under
+this license.
+
+ The precise terms and conditions for copying, distribution and
+modification follow.
+
+ TERMS AND CONDITIONS
+
+ 0. Definitions.
+
+ "This License" refers to version 3 of the GNU Affero General Public License.
+
+ "Copyright" also means copyright-like laws that apply to other kinds of
+works, such as semiconductor masks.
+
+ "The Program" refers to any copyrightable work licensed under this
+License. Each licensee is addressed as "you". "Licensees" and
+"recipients" may be individuals or organizations.
+
+ To "modify" a work means to copy from or adapt all or part of the work
+in a fashion requiring copyright permission, other than the making of an
+exact copy. The resulting work is called a "modified version" of the
+earlier work or a work "based on" the earlier work.
+
+ A "covered work" means either the unmodified Program or a work based
+on the Program.
+
+ To "propagate" a work means to do anything with it that, without
+permission, would make you directly or secondarily liable for
+infringement under applicable copyright law, except executing it on a
+computer or modifying a private copy. Propagation includes copying,
+distribution (with or without modification), making available to the
+public, and in some countries other activities as well.
+
+ To "convey" a work means any kind of propagation that enables other
+parties to make or receive copies. Mere interaction with a user through
+a computer network, with no transfer of a copy, is not conveying.
+
+ An interactive user interface displays "Appropriate Legal Notices"
+to the extent that it includes a convenient and prominently visible
+feature that (1) displays an appropriate copyright notice, and (2)
+tells the user that there is no warranty for the work (except to the
+extent that warranties are provided), that licensees may convey the
+work under this License, and how to view a copy of this License. If
+the interface presents a list of user commands or options, such as a
+menu, a prominent item in the list meets this criterion.
+
+ 1. Source Code.
+
+ The "source code" for a work means the preferred form of the work
+for making modifications to it. "Object code" means any non-source
+form of a work.
+
+ A "Standard Interface" means an interface that either is an official
+standard defined by a recognized standards body, or, in the case of
+interfaces specified for a particular programming language, one that
+is widely used among developers working in that language.
+
+ The "System Libraries" of an executable work include anything, other
+than the work as a whole, that (a) is included in the normal form of
+packaging a Major Component, but which is not part of that Major
+Component, and (b) serves only to enable use of the work with that
+Major Component, or to implement a Standard Interface for which an
+implementation is available to the public in source code form. A
+"Major Component", in this context, means a major essential component
+(kernel, window system, and so on) of the specific operating system
+(if any) on which the executable work runs, or a compiler used to
+produce the work, or an object code interpreter used to run it.
+
+ The "Corresponding Source" for a work in object code form means all
+the source code needed to generate, install, and (for an executable
+work) run the object code and to modify the work, including scripts to
+control those activities. However, it does not include the work's
+System Libraries, or general-purpose tools or generally available free
+programs which are used unmodified in performing those activities but
+which are not part of the work. For example, Corresponding Source
+includes interface definition files associated with source files for
+the work, and the source code for shared libraries and dynamically
+linked subprograms that the work is specifically designed to require,
+such as by intimate data communication or control flow between those
+subprograms and other parts of the work.
+
+ The Corresponding Source need not include anything that users
+can regenerate automatically from other parts of the Corresponding
+Source.
+
+ The Corresponding Source for a work in source code form is that
+same work.
+
+ 2. Basic Permissions.
+
+ All rights granted under this License are granted for the term of
+copyright on the Program, and are irrevocable provided the stated
+conditions are met. This License explicitly affirms your unlimited
+permission to run the unmodified Program. The output from running a
+covered work is covered by this License only if the output, given its
+content, constitutes a covered work. This License acknowledges your
+rights of fair use or other equivalent, as provided by copyright law.
+
+ You may make, run and propagate covered works that you do not
+convey, without conditions so long as your license otherwise remains
+in force. You may convey covered works to others for the sole purpose
+of having them make modifications exclusively for you, or provide you
+with facilities for running those works, provided that you comply with
+the terms of this License in conveying all material for which you do
+not control copyright. Those thus making or running the covered works
+for you must do so exclusively on your behalf, under your direction
+and control, on terms that prohibit them from making any copies of
+your copyrighted material outside their relationship with you.
+
+ Conveying under any other circumstances is permitted solely under
+the conditions stated below. Sublicensing is not allowed; section 10
+makes it unnecessary.
+
+ 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
+
+ No covered work shall be deemed part of an effective technological
+measure under any applicable law fulfilling obligations under article
+11 of the WIPO copyright treaty adopted on 20 December 1996, or
+similar laws prohibiting or restricting circumvention of such
+measures.
+
+ When you convey a covered work, you waive any legal power to forbid
+circumvention of technological measures to the extent such circumvention
+is effected by exercising rights under this License with respect to
+the covered work, and you disclaim any intention to limit operation or
+modification of the work as a means of enforcing, against the work's
+users, your or third parties' legal rights to forbid circumvention of
+technological measures.
+
+ 4. Conveying Verbatim Copies.
+
+ You may convey verbatim copies of the Program's source code as you
+receive it, in any medium, provided that you conspicuously and
+appropriately publish on each copy an appropriate copyright notice;
+keep intact all notices stating that this License and any
+non-permissive terms added in accord with section 7 apply to the code;
+keep intact all notices of the absence of any warranty; and give all
+recipients a copy of this License along with the Program.
+
+ You may charge any price or no price for each copy that you convey,
+and you may offer support or warranty protection for a fee.
+
+ 5. Conveying Modified Source Versions.
+
+ You may convey a work based on the Program, or the modifications to
+produce it from the Program, in the form of source code under the
+terms of section 4, provided that you also meet all of these conditions:
+
+ a) The work must carry prominent notices stating that you modified
+ it, and giving a relevant date.
+
+ b) The work must carry prominent notices stating that it is
+ released under this License and any conditions added under section
+ 7. This requirement modifies the requirement in section 4 to
+ "keep intact all notices".
+
+ c) You must license the entire work, as a whole, under this
+ License to anyone who comes into possession of a copy. This
+ License will therefore apply, along with any applicable section 7
+ additional terms, to the whole of the work, and all its parts,
+ regardless of how they are packaged. This License gives no
+ permission to license the work in any other way, but it does not
+ invalidate such permission if you have separately received it.
+
+ d) If the work has interactive user interfaces, each must display
+ Appropriate Legal Notices; however, if the Program has interactive
+ interfaces that do not display Appropriate Legal Notices, your
+ work need not make them do so.
+
+ A compilation of a covered work with other separate and independent
+works, which are not by their nature extensions of the covered work,
+and which are not combined with it such as to form a larger program,
+in or on a volume of a storage or distribution medium, is called an
+"aggregate" if the compilation and its resulting copyright are not
+used to limit the access or legal rights of the compilation's users
+beyond what the individual works permit. Inclusion of a covered work
+in an aggregate does not cause this License to apply to the other
+parts of the aggregate.
+
+ 6. Conveying Non-Source Forms.
+
+ You may convey a covered work in object code form under the terms
+of sections 4 and 5, provided that you also convey the
+machine-readable Corresponding Source under the terms of this License,
+in one of these ways:
+
+ a) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by the
+ Corresponding Source fixed on a durable physical medium
+ customarily used for software interchange.
+
+ b) Convey the object code in, or embodied in, a physical product
+ (including a physical distribution medium), accompanied by a
+ written offer, valid for at least three years and valid for as
+ long as you offer spare parts or customer support for that product
+ model, to give anyone who possesses the object code either (1) a
+ copy of the Corresponding Source for all the software in the
+ product that is covered by this License, on a durable physical
+ medium customarily used for software interchange, for a price no
+ more than your reasonable cost of physically performing this
+ conveying of source, or (2) access to copy the
+ Corresponding Source from a network server at no charge.
+
+ c) Convey individual copies of the object code with a copy of the
+ written offer to provide the Corresponding Source. This
+ alternative is allowed only occasionally and noncommercially, and
+ only if you received the object code with such an offer, in accord
+ with subsection 6b.
+
+ d) Convey the object code by offering access from a designated
+ place (gratis or for a charge), and offer equivalent access to the
+ Corresponding Source in the same way through the same place at no
+ further charge. You need not require recipients to copy the
+ Corresponding Source along with the object code. If the place to
+ copy the object code is a network server, the Corresponding Source
+ may be on a different server (operated by you or a third party)
+ that supports equivalent copying facilities, provided you maintain
+ clear directions next to the object code saying where to find the
+ Corresponding Source. Regardless of what server hosts the
+ Corresponding Source, you remain obligated to ensure that it is
+ available for as long as needed to satisfy these requirements.
+
+ e) Convey the object code using peer-to-peer transmission, provided
+ you inform other peers where the object code and Corresponding
+ Source of the work are being offered to the general public at no
+ charge under subsection 6d.
+
+ A separable portion of the object code, whose source code is excluded
+from the Corresponding Source as a System Library, need not be
+included in conveying the object code work.
+
+ A "User Product" is either (1) a "consumer product", which means any
+tangible personal property which is normally used for personal, family,
+or household purposes, or (2) anything designed or sold for incorporation
+into a dwelling. In determining whether a product is a consumer product,
+doubtful cases shall be resolved in favor of coverage. For a particular
+product received by a particular user, "normally used" refers to a
+typical or common use of that class of product, regardless of the status
+of the particular user or of the way in which the particular user
+actually uses, or expects or is expected to use, the product. A product
+is a consumer product regardless of whether the product has substantial
+commercial, industrial or non-consumer uses, unless such uses represent
+the only significant mode of use of the product.
+
+ "Installation Information" for a User Product means any methods,
+procedures, authorization keys, or other information required to install
+and execute modified versions of a covered work in that User Product from
+a modified version of its Corresponding Source. The information must
+suffice to ensure that the continued functioning of the modified object
+code is in no case prevented or interfered with solely because
+modification has been made.
+
+ If you convey an object code work under this section in, or with, or
+specifically for use in, a User Product, and the conveying occurs as
+part of a transaction in which the right of possession and use of the
+User Product is transferred to the recipient in perpetuity or for a
+fixed term (regardless of how the transaction is characterized), the
+Corresponding Source conveyed under this section must be accompanied
+by the Installation Information. But this requirement does not apply
+if neither you nor any third party retains the ability to install
+modified object code on the User Product (for example, the work has
+been installed in ROM).
+
+ The requirement to provide Installation Information does not include a
+requirement to continue to provide support service, warranty, or updates
+for a work that has been modified or installed by the recipient, or for
+the User Product in which it has been modified or installed. Access to a
+network may be denied when the modification itself materially and
+adversely affects the operation of the network or violates the rules and
+protocols for communication across the network.
+
+ Corresponding Source conveyed, and Installation Information provided,
+in accord with this section must be in a format that is publicly
+documented (and with an implementation available to the public in
+source code form), and must require no special password or key for
+unpacking, reading or copying.
+
+ 7. Additional Terms.
+
+ "Additional permissions" are terms that supplement the terms of this
+License by making exceptions from one or more of its conditions.
+Additional permissions that are applicable to the entire Program shall
+be treated as though they were included in this License, to the extent
+that they are valid under applicable law. If additional permissions
+apply only to part of the Program, that part may be used separately
+under those permissions, but the entire Program remains governed by
+this License without regard to the additional permissions.
+
+ When you convey a copy of a covered work, you may at your option
+remove any additional permissions from that copy, or from any part of
+it. (Additional permissions may be written to require their own
+removal in certain cases when you modify the work.) You may place
+additional permissions on material, added by you to a covered work,
+for which you have or can give appropriate copyright permission.
+
+ Notwithstanding any other provision of this License, for material you
+add to a covered work, you may (if authorized by the copyright holders of
+that material) supplement the terms of this License with terms:
+
+ a) Disclaiming warranty or limiting liability differently from the
+ terms of sections 15 and 16 of this License; or
+
+ b) Requiring preservation of specified reasonable legal notices or
+ author attributions in that material or in the Appropriate Legal
+ Notices displayed by works containing it; or
+
+ c) Prohibiting misrepresentation of the origin of that material, or
+ requiring that modified versions of such material be marked in
+ reasonable ways as different from the original version; or
+
+ d) Limiting the use for publicity purposes of names of licensors or
+ authors of the material; or
+
+ e) Declining to grant rights under trademark law for use of some
+ trade names, trademarks, or service marks; or
+
+ f) Requiring indemnification of licensors and authors of that
+ material by anyone who conveys the material (or modified versions of
+ it) with contractual assumptions of liability to the recipient, for
+ any liability that these contractual assumptions directly impose on
+ those licensors and authors.
+
+ All other non-permissive additional terms are considered "further
+restrictions" within the meaning of section 10. If the Program as you
+received it, or any part of it, contains a notice stating that it is
+governed by this License along with a term that is a further
+restriction, you may remove that term. If a license document contains
+a further restriction but permits relicensing or conveying under this
+License, you may add to a covered work material governed by the terms
+of that license document, provided that the further restriction does
+not survive such relicensing or conveying.
+
+ If you add terms to a covered work in accord with this section, you
+must place, in the relevant source files, a statement of the
+additional terms that apply to those files, or a notice indicating
+where to find the applicable terms.
+
+ Additional terms, permissive or non-permissive, may be stated in the
+form of a separately written license, or stated as exceptions;
+the above requirements apply either way.
+
+ 8. Termination.
+
+ You may not propagate or modify a covered work except as expressly
+provided under this License. Any attempt otherwise to propagate or
+modify it is void, and will automatically terminate your rights under
+this License (including any patent licenses granted under the third
+paragraph of section 11).
+
+ However, if you cease all violation of this License, then your
+license from a particular copyright holder is reinstated (a)
+provisionally, unless and until the copyright holder explicitly and
+finally terminates your license, and (b) permanently, if the copyright
+holder fails to notify you of the violation by some reasonable means
+prior to 60 days after the cessation.
+
+ Moreover, your license from a particular copyright holder is
+reinstated permanently if the copyright holder notifies you of the
+violation by some reasonable means, this is the first time you have
+received notice of violation of this License (for any work) from that
+copyright holder, and you cure the violation prior to 30 days after
+your receipt of the notice.
+
+ Termination of your rights under this section does not terminate the
+licenses of parties who have received copies or rights from you under
+this License. If your rights have been terminated and not permanently
+reinstated, you do not qualify to receive new licenses for the same
+material under section 10.
+
+ 9. Acceptance Not Required for Having Copies.
+
+ You are not required to accept this License in order to receive or
+run a copy of the Program. Ancillary propagation of a covered work
+occurring solely as a consequence of using peer-to-peer transmission
+to receive a copy likewise does not require acceptance. However,
+nothing other than this License grants you permission to propagate or
+modify any covered work. These actions infringe copyright if you do
+not accept this License. Therefore, by modifying or propagating a
+covered work, you indicate your acceptance of this License to do so.
+
+ 10. Automatic Licensing of Downstream Recipients.
+
+ Each time you convey a covered work, the recipient automatically
+receives a license from the original licensors, to run, modify and
+propagate that work, subject to this License. You are not responsible
+for enforcing compliance by third parties with this License.
+
+ An "entity transaction" is a transaction transferring control of an
+organization, or substantially all assets of one, or subdividing an
+organization, or merging organizations. If propagation of a covered
+work results from an entity transaction, each party to that
+transaction who receives a copy of the work also receives whatever
+licenses to the work the party's predecessor in interest had or could
+give under the previous paragraph, plus a right to possession of the
+Corresponding Source of the work from the predecessor in interest, if
+the predecessor has it or can get it with reasonable efforts.
+
+ You may not impose any further restrictions on the exercise of the
+rights granted or affirmed under this License. For example, you may
+not impose a license fee, royalty, or other charge for exercise of
+rights granted under this License, and you may not initiate litigation
+(including a cross-claim or counterclaim in a lawsuit) alleging that
+any patent claim is infringed by making, using, selling, offering for
+sale, or importing the Program or any portion of it.
+
+ 11. Patents.
+
+ A "contributor" is a copyright holder who authorizes use under this
+License of the Program or a work on which the Program is based. The
+work thus licensed is called the contributor's "contributor version".
+
+ A contributor's "essential patent claims" are all patent claims
+owned or controlled by the contributor, whether already acquired or
+hereafter acquired, that would be infringed by some manner, permitted
+by this License, of making, using, or selling its contributor version,
+but do not include claims that would be infringed only as a
+consequence of further modification of the contributor version. For
+purposes of this definition, "control" includes the right to grant
+patent sublicenses in a manner consistent with the requirements of
+this License.
+
+ Each contributor grants you a non-exclusive, worldwide, royalty-free
+patent license under the contributor's essential patent claims, to
+make, use, sell, offer for sale, import and otherwise run, modify and
+propagate the contents of its contributor version.
+
+ In the following three paragraphs, a "patent license" is any express
+agreement or commitment, however denominated, not to enforce a patent
+(such as an express permission to practice a patent or covenant not to
+sue for patent infringement). To "grant" such a patent license to a
+party means to make such an agreement or commitment not to enforce a
+patent against the party.
+
+ If you convey a covered work, knowingly relying on a patent license,
+and the Corresponding Source of the work is not available for anyone
+to copy, free of charge and under the terms of this License, through a
+publicly available network server or other readily accessible means,
+then you must either (1) cause the Corresponding Source to be so
+available, or (2) arrange to deprive yourself of the benefit of the
+patent license for this particular work, or (3) arrange, in a manner
+consistent with the requirements of this License, to extend the patent
+license to downstream recipients. "Knowingly relying" means you have
+actual knowledge that, but for the patent license, your conveying the
+covered work in a country, or your recipient's use of the covered work
+in a country, would infringe one or more identifiable patents in that
+country that you have reason to believe are valid.
+
+ If, pursuant to or in connection with a single transaction or
+arrangement, you convey, or propagate by procuring conveyance of, a
+covered work, and grant a patent license to some of the parties
+receiving the covered work authorizing them to use, propagate, modify
+or convey a specific copy of the covered work, then the patent license
+you grant is automatically extended to all recipients of the covered
+work and works based on it.
+
+ A patent license is "discriminatory" if it does not include within
+the scope of its coverage, prohibits the exercise of, or is
+conditioned on the non-exercise of one or more of the rights that are
+specifically granted under this License. You may not convey a covered
+work if you are a party to an arrangement with a third party that is
+in the business of distributing software, under which you make payment
+to the third party based on the extent of your activity of conveying
+the work, and under which the third party grants, to any of the
+parties who would receive the covered work from you, a discriminatory
+patent license (a) in connection with copies of the covered work
+conveyed by you (or copies made from those copies), or (b) primarily
+for and in connection with specific products or compilations that
+contain the covered work, unless you entered into that arrangement,
+or that patent license was granted, prior to 28 March 2007.
+
+ Nothing in this License shall be construed as excluding or limiting
+any implied license or other defenses to infringement that may
+otherwise be available to you under applicable patent law.
+
+ 12. No Surrender of Others' Freedom.
+
+ If conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License. If you cannot convey a
+covered work so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you may
+not convey it at all. For example, if you agree to terms that obligate you
+to collect a royalty for further conveying from those to whom you convey
+the Program, the only way you could satisfy both those terms and this
+License would be to refrain entirely from conveying the Program.
+
+ 13. Remote Network Interaction; Use with the GNU General Public License.
+
+ Notwithstanding any other provision of this License, if you modify the
+Program, your modified version must prominently offer all users
+interacting with it remotely through a computer network (if your version
+supports such interaction) an opportunity to receive the Corresponding
+Source of your version by providing access to the Corresponding Source
+from a network server at no charge, through some standard or customary
+means of facilitating copying of software. This Corresponding Source
+shall include the Corresponding Source for any work covered by version 3
+of the GNU General Public License that is incorporated pursuant to the
+following paragraph.
+
+ Notwithstanding any other provision of this License, you have
+permission to link or combine any covered work with a work licensed
+under version 3 of the GNU General Public License into a single
+combined work, and to convey the resulting work. The terms of this
+License will continue to apply to the part which is the covered work,
+but the work with which it is combined will remain governed by version
+3 of the GNU General Public License.
+
+ 14. Revised Versions of this License.
+
+ The Free Software Foundation may publish revised and/or new versions of
+the GNU Affero General Public License from time to time. Such new versions
+will be similar in spirit to the present version, but may differ in detail to
+address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Program specifies that a certain numbered version of the GNU Affero General
+Public License "or any later version" applies to it, you have the
+option of following the terms and conditions either of that numbered
+version or of any later version published by the Free Software
+Foundation. If the Program does not specify a version number of the
+GNU Affero General Public License, you may choose any version ever published
+by the Free Software Foundation.
+
+ If the Program specifies that a proxy can decide which future
+versions of the GNU Affero General Public License can be used, that proxy's
+public statement of acceptance of a version permanently authorizes you
+to choose that version for the Program.
+
+ Later license versions may give you additional or different
+permissions. However, no additional obligations are imposed on any
+author or copyright holder as a result of your choosing to follow a
+later version.
+
+ 15. Disclaimer of Warranty.
+
+ THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
+APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
+HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
+OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
+THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
+IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
+ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+ 16. Limitation of Liability.
+
+ IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.
+
+ 17. Interpretation of Sections 15 and 16.
+
+ If the disclaimer of warranty and limitation of liability provided
+above cannot be given local legal effect according to their terms,
+reviewing courts shall apply local law that most closely approximates
+an absolute waiver of all civil liability in connection with the
+Program, unless a warranty or assumption of liability accompanies a
+copy of the Program in return for a fee.
+
+ END OF TERMS AND CONDITIONS
+
+ How to Apply These Terms to Your New Programs
+
+ If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it
+free software which everyone can redistribute and change under these terms.
+
+ To do so, attach the following notices to the program. It is safest
+to attach them to the start of each source file to most effectively
+state the exclusion of warranty; and each file should have at least
+the "copyright" line and a pointer to where the full notice is found.
+
+ <one line to give the program's name and a brief idea of what it does.>
+ Copyright (C) <year> <name of author>
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU Affero General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU Affero General Public License for more details.
+
+ You should have received a copy of the GNU Affero General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If your software can interact with users remotely through a computer
+network, you should also make sure that it provides a way for users to
+get its source. For example, if your program is a web application, its
+interface could display a "Source" link that leads users to an archive
+of the code. There are many ways you could offer source, and different
+solutions will be better for different programs; see section 13 for the
+specific requirements.
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary.
+For more information on this, and how to apply and follow the GNU AGPL, see
+<http://www.gnu.org/licenses/>.
diff --git a/AutoStart.hs b/AutoStart.hs
new file mode 100644
index 0000000..dc022df
--- /dev/null
+++ b/AutoStart.hs
@@ -0,0 +1,42 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module AutoStart where
+
+import Utility.FreeDesktop
+import System.Info
+import System.Environment
+import System.Directory
+
+installAutoStartFile :: IO ()
+installAutoStartFile = go =<< autoStartFile
+ where
+ go (Just f) = case os of
+ "linux" -> installFdoAutoStart f
+ "freebsd" -> installFdoAutoStart f
+ _ -> return ()
+ go Nothing = return ()
+
+isAutoStartFileInstalled :: IO Bool
+isAutoStartFileInstalled = maybe (pure False) doesFileExist =<< autoStartFile
+
+autoStartFile :: IO (Maybe FilePath)
+autoStartFile = case os of
+ "linux" -> Just . autoStartPath "keysafe" <$> userConfigDir
+ _ -> return Nothing
+
+installFdoAutoStart :: FilePath -> IO ()
+installFdoAutoStart f = do
+ command <- getExecutablePath
+ writeDesktopMenuFile (fdoAutostart command) f
+
+fdoAutostart :: FilePath -> DesktopEntry
+fdoAutostart command = genDesktopEntry
+ "Keysafe"
+ "Autostart"
+ False
+ (command ++ " --autostart")
+ Nothing
+ []
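[Editor's note] AutoStart exposes a small check/install API. Below is an editor's sketch (not part of the imported tarball) of how a caller might use it; the module name and `ensureAutoStart` are invented for the example, everything else is the API shown in the hunk above.

    module EnsureAutoStart where

    import AutoStart
    import Control.Monad (unless)

    -- Hypothetical helper: install the desktop autostart file only if it
    -- is not already present.
    ensureAutoStart :: IO ()
    ensureAutoStart = do
        installed <- isAutoStartFileInstalled
        unless installed installAutoStartFile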
diff --git a/BackupLog.hs b/BackupLog.hs
new file mode 100644
index 0000000..5ea7ba9
--- /dev/null
+++ b/BackupLog.hs
@@ -0,0 +1,98 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE DeriveGeneric, BangPatterns #-}
+
+module BackupLog where
+
+import Types
+import Types.Server
+import Types.Cost
+import Utility.UserInfo
+import GHC.Generics
+import Data.Time.Clock.POSIX
+import Data.Aeson
+import Data.Maybe
+import System.FilePath
+import System.Directory
+import System.Posix.Files
+import qualified Data.ByteString.Lazy as B
+
+data BackupLog = BackupLog
+ { logDate :: POSIXTime
+ , logEvent :: BackupEvent
+ }
+ deriving (Show, Generic)
+
+instance ToJSON BackupLog
+instance FromJSON BackupLog
+
+-- | Log of a backup.
+--
+-- If an attacker cracks the user's system and finds this stored
+-- on it, it should not help them recover keys from keysafe.
+--
+-- That's why the Name used is not included; knowing the name would let
+-- an attacker download shards and start password cracking.
+--
+-- Including the password entropy does let an attacker avoid trying
+-- weak passwords and go right to passwords that are strong enough, but
+-- this should only halve the password crack time at worst.
+data BackupEvent
+ = BackupSkipped SecretKeySource
+ | BackupMade
+ { backupServers :: [ServerName]
+ , backupSecretKeySource :: SecretKeySource
+ , backupPasswordEntropy :: Int
+ }
+ deriving (Show, Generic)
+
+matchesSecretKeySource :: SecretKeySource -> BackupLog -> Bool
+matchesSecretKeySource a (BackupLog _ (BackupSkipped b)) = a == b
+matchesSecretKeySource a (BackupLog _ (BackupMade { backupSecretKeySource = b })) = a == b
+
+instance ToJSON BackupEvent
+instance FromJSON BackupEvent
+
+mkBackupLog :: BackupEvent -> IO BackupLog
+mkBackupLog evt = BackupLog
+ <$> getPOSIXTime
+ <*> pure evt
+
+backupMade :: [Server] -> SecretKeySource -> Entropy UnknownPassword -> BackupEvent
+backupMade servers sks (Entropy n) = BackupMade
+ { backupServers = map serverName servers
+ , backupSecretKeySource = sks
+ , backupPasswordEntropy = n
+ }
+
+backupLogFile :: IO FilePath
+backupLogFile = do
+ home <- myHomeDir
+ return $ home </> ".keysafe/backup.log"
+
+readBackupLogs :: IO [BackupLog]
+readBackupLogs = do
+ f <- backupLogFile
+ e <- doesFileExist f
+ if e
+ then fromMaybe [] . decode <$> B.readFile f
+ else return []
+
+storeBackupLog :: BackupLog -> IO ()
+storeBackupLog r = do
+ !rs <- readBackupLogs
+ f <- backupLogFile
+ let d = takeDirectory f
+ createDirectoryIfMissing True d
+ setFileMode d $
+ ownerReadMode
+ `unionFileModes` ownerWriteMode
+ `unionFileModes` ownerExecuteMode
+ setPermissions d
+ $ setOwnerReadable True
+ $ setOwnerWritable True
+ $ setOwnerExecutable True emptyPermissions
+ B.writeFile f $ encode (r:rs)
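[Editor's note] BackupLog is the client-side record of what was backed up where. Below is an editor's sketch (not part of the imported tarball) of recording a backup of a gpg key. `recordBackup`, the key id, and the 45-bit entropy figure are invented for the example; the unqualified imports mirror the ones BackupLog.hs itself uses and assume KeyId, SecretKeySource, Entropy and Server are exported from those modules.

    module LogDemo where

    import BackupLog
    import Types
    import Types.Cost
    import Types.Server
    import qualified Data.Text as T

    -- Hypothetical example: record that the given servers now hold shares
    -- of a gpg key whose password was estimated at 45 bits of entropy.
    recordBackup :: [Server] -> IO ()
    recordBackup servers = do
        let sks = GpgKey (KeyId (T.pack "DEADBEEFDEADBEEF"))
        entry <- mkBackupLog (backupMade servers sks (Entropy 45))
        storeBackupLog entry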
diff --git a/Benchmark.hs b/Benchmark.hs
new file mode 100644
index 0000000..33efb46
--- /dev/null
+++ b/Benchmark.hs
@@ -0,0 +1,119 @@
+{-# LANGUAGE OverloadedStrings, FlexibleInstances #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Benchmark where
+
+import Types
+import Output
+import Tunables
+import ExpensiveHash
+import HTTP.ProofOfWork
+import Cost
+import Serialization ()
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+import qualified Crypto.Argon2 as Argon2
+import Data.Time.Clock
+import Control.DeepSeq
+import Control.Monad
+import Data.Monoid
+import Data.Maybe
+
+data BenchmarkResult t = BenchmarkResult { expectedBenchmark :: t, actualBenchmark :: t }
+
+instance Show (BenchmarkResult (Cost op)) where
+ show br = " expected: " ++ show (expectedBenchmark br) ++ "s"
+ ++ "\tactual: " ++ show (actualBenchmark br) ++ "s"
+
+benchmarkExpensiveHash :: Int -> ExpensiveHashTunable -> IO (BenchmarkResult (Cost CreationOp))
+benchmarkExpensiveHash rounds tunables =
+ benchmarkExpensiveHash' rounds tunables (getexpected tunables)
+ where
+ getexpected (UseArgon2 cost _) = mapCost (* fromIntegral rounds) cost
+
+benchmarkExpensiveHash' :: Int -> ExpensiveHashTunable -> Cost op -> IO (BenchmarkResult (Cost op))
+benchmarkExpensiveHash' rounds tunables@(UseArgon2 _ hashopts) expected = do
+ numcores <- fromIntegral . fromMaybe (error "Unknown number of physical cores.")
+ <$> getNumCores
+ start <- getCurrentTime
+ forM_ [1..rounds] $ \n -> do
+ -- Must vary the data being hashed to avoid laziness
+ -- caching hash results.
+ let base = T.pack (show n)
+ let baseb = BU8.fromString (show n)
+ let ExpensiveHash _ t = expensiveHash tunables
+ (Salt (GpgKey (KeyId (base <> "dummy"))))
+ (baseb <> "himom")
+ t `deepseq` return ()
+ end <- getCurrentTime
+ let diff = floor (end `diffUTCTime` start) :: Integer
+ let maxthreads = Argon2.hashParallelism hashopts
+ let actual = CPUCost (Seconds (fromIntegral diff))
+ (Divisibility $ fromIntegral maxthreads)
+ -- The expected cost is for a single core, so adjust it
+ -- based on the number of cores, up to a maximum of the number
+ -- of threads that the hash is configured to use.
+ let usedcores = min maxthreads numcores
+ let adjustedexpected = mapCost (/ fromIntegral usedcores) expected
+ return $ BenchmarkResult
+ { expectedBenchmark = adjustedexpected
+ , actualBenchmark = actual
+ }
+
+benchmark :: NFData t => Int -> Cost CreationOp -> (Int -> IO t) -> IO (BenchmarkResult (Cost CreationOp))
+benchmark rounds expected a = do
+ start <- getCurrentTime
+ forM_ [1..rounds] $ \n -> do
+ v <- a n
+ v `deepseq` return ()
+ end <- getCurrentTime
+ let diff = floor (end `diffUTCTime` start) :: Integer
+ return $ BenchmarkResult
+ { expectedBenchmark = expected
+ , actualBenchmark = CPUCost (Seconds (fromIntegral diff)) (Divisibility 1)
+ }
+
+benchmarkPoW :: Int -> Seconds -> IO (BenchmarkResult (Cost CreationOp))
+benchmarkPoW rounds seconds = do
+ let Just mk = mkProofOfWorkRequirement seconds
+ s <- newRequestIDSecret
+ rid <- mkRequestID s
+ benchmark rounds (CPUCost (seconds * fromIntegral rounds) (Divisibility 1))
+ (return . genProofOfWork (mk rid))
+
+benchmarkTunables :: Tunables -> IO ()
+benchmarkTunables tunables = do
+ say "/proc/cpuinfo:"
+ say =<< readFile "/proc/cpuinfo"
+
+ say "Benchmarking 1000 rounds of proof of work hash..."
+ display =<< benchmarkExpensiveHash 1000 (proofOfWorkHashTunable 0)
+
+ say "Benchmarking 60 rounds of 1 second proofs of work..."
+ display =<< benchmarkPoW 60 (Seconds 1)
+
+ say "Benchmarking 10 rounds of 8 second proofs of work..."
+ display =<< benchmarkPoW 10 (Seconds 8)
+
+ -- Rather than run all 256 rounds of this hash, which would
+ -- probably take on the order of 1 hour, run only 16, and scale
+ -- the expected cost accordingly.
+ let normalrounds = 256 * randomSaltBytes (keyEncryptionKeyTunable tunables)
+ say $ "Benchmarking 16/" ++ show normalrounds ++ " rounds of key encryption key hash..."
+ r <- benchmarkExpensiveHash' 16
+ (keyEncryptionKeyHash $ keyEncryptionKeyTunable tunables)
+ (mapCost (/ (fromIntegral normalrounds / 16)) $ randomSaltBytesBruteForceCost $ keyEncryptionKeyTunable tunables)
+ display r
+ say $ "Estimated time for " ++ show normalrounds ++ " rounds of key encryption key hash..."
+ display $ BenchmarkResult
+ { expectedBenchmark = mapCost (* 16) (expectedBenchmark r)
+ , actualBenchmark = mapCost (* 16) (actualBenchmark r)
+ }
+
+ say "Benchmarking 1 round of name generation hash..."
+ display =<< benchmarkExpensiveHash 1
+ (nameGenerationHash $ nameGenerationTunable tunables)
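[Editor's note] Benchmark's helpers compare an expected Cost against measured wall-clock time. Below is an editor's sketch (not part of the imported tarball) of the generic `benchmark` function; the workload and the 1-second expected cost are made up, and the imports mirror the ones Benchmark.hs itself uses (Types, Cost, Output).

    module BenchmarkDemo where

    import Benchmark
    import Types
    import Cost
    import Output

    -- Hypothetical example: time 10 rounds of a cheap pure computation
    -- against an arbitrary expected cost of 1 second, then print the
    -- expected/actual comparison.
    demoBenchmark :: IO ()
    demoBenchmark = do
        r <- benchmark 10 (CPUCost (Seconds 1) (Divisibility 1))
            (\n -> return (sum [1 .. toInteger n * 100000]))
        display r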
diff --git a/ByteStrings.hs b/ByteStrings.hs
new file mode 100644
index 0000000..cecf617
--- /dev/null
+++ b/ByteStrings.hs
@@ -0,0 +1,42 @@
+{-# OPTIONS_GHC -fno-warn-orphans #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module ByteStrings where
+
+import qualified Data.ByteString as B
+import qualified Raaz
+import Control.Monad
+import Data.Word
+
+allByteStringsOfLength :: Int -> [B.ByteString]
+allByteStringsOfLength = go []
+ where
+ go ws n
+ | n == 0 = return (B.pack ws)
+ | otherwise = do
+ w <- [0..255]
+ go (w:ws) (n-1)
+
+-- | Contains every possible byte string, with shorter ones first.
+allByteStrings :: [B.ByteString]
+allByteStrings = concatMap allByteStringsOfLength [1..]
+
+chunkByteString :: Int -> B.ByteString -> [B.ByteString]
+chunkByteString n = go []
+ where
+ go cs b
+ | B.length b <= n = reverse (b:cs)
+ | otherwise =
+ let (h, t) = B.splitAt n b
+ in go (h:cs) t
+
+instance Raaz.Random Word8
+
+randomByteStringOfLength :: Int -> Raaz.SystemPRG -> IO B.ByteString
+randomByteStringOfLength n prg = B.pack <$> replicateM n randbyte
+ where
+ randbyte = Raaz.random prg :: IO Word8
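[Editor's note] chunkByteString and friends are plain strict-ByteString helpers. Below is a small editor's check (not part of the imported tarball) of the intended behaviour: fixed-size chunks, a possibly shorter final chunk, and concatenation restoring the input. `chunkDemo` is an invented name; Data.ByteString.Char8 packs the literal into the same strict ByteString type used above.

    module ChunkDemo where

    import ByteStrings
    import qualified Data.ByteString.Char8 as B8

    -- Hypothetical example: splitting "abcdefgh" into 3-byte chunks.
    chunkDemo :: Bool
    chunkDemo = chunks == map B8.pack ["abc", "def", "gh"]
             && B8.concat chunks == B8.pack "abcdefgh"
      where
        chunks = chunkByteString 3 (B8.pack "abcdefgh")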
diff --git a/CHANGELOG b/CHANGELOG
new file mode 100644
index 0000000..47ced82
--- /dev/null
+++ b/CHANGELOG
@@ -0,0 +1,139 @@
+keysafe (0.20161022) unstable; urgency=medium
+
+ * Add keywords to desktop file.
+ Thanks, Sean Whitton
+ * Fix use of .IP macro in manpage.
+ Thanks, Sean Whitton
+ * Fix some misspellings.
+ Thanks, Sean Whitton
+ * Makefile: Propagate LDFLAGS, CFLAGS, and CPPFLAGS through ghc.
+ * Makefile: Allow setting BUILDER=./Setup to build w/o cabal or stack.
+ * Makefile: Allow setting BUILDEROPTIONS=-j1 to avoid concurrent
+ build, which should make build reproducible.
+
+ -- Joey Hess <id@joeyh.name> Sat, 22 Oct 2016 19:01:24 -0400
+
+keysafe (0.20161007) unstable; urgency=medium
+
+ * Check if --store-local directory is writable.
+ * Removed dependency on crypto-random.
+ * Added a LSB init script, for non-systemd systems.
+ (It currently uses Debian's start-stop-daemon, so would need porting
+ for other distributions.)
+ * /etc/default/keysafe is read by both the systemd service file and the
+ init script, and contains configuration for the keysafe server.
+
+ -- Joey Hess <id@joeyh.name> Thu, 20 Oct 2016 14:44:07 -0400
+
+keysafe (0.20161006) unstable; urgency=medium
+
+ * New --add-storage-directory and --add-server options, which can be used
+ to make keysafe backup/restore using additional locations.
+ * Removed --store-local option; use --add-storage-directory instead.
+ * Fix bugs with entry of gpg keyid in the keysafe.log.
+ * Fix bug in --autostart that caused the full gpg keyid to be
+ used to generate object names, which meant restores would only work
+ when --gpgkeyid was specified.
+ * Remove embedded copy of argon2 binding, depend on fixed version of package.
+
+ -- Joey Hess <id@joeyh.name> Wed, 05 Oct 2016 20:54:51 -0400
+
+keysafe (0.20160927) unstable; urgency=medium
+
+ * Makefile: Avoid rebuilding on make install, so that sudo make install works.
+ * Added --chaff-max-delay option for slower chaffing.
+ * Fix embedded copy of Argon2 to not use Word64, fixing build on 32 bit
+ systems.
+ * Randomize the server list.
+ * Don't upload more than neededshares-1 shares to Alternate servers
+ without asking the user if they want to do this potentially dangerous
+ action.
+ * Added a second keysafe server to the server list. It's provided
+ by Marek Isalski at Faelix. Currently located in UK, but planned move
+ to CH. Currently at Alternate level until verification is complete.
+ * Server: --motd can be used to provide a Message Of The Day.
+ * Added --check-servers mode, which is useful both at the command line
+ to see what servers keysafe knows about, and as a cron job.
+ * Server: Round number of objects down to the nearest thousand, to avoid
+ leaking too much data about when objects are uploaded to servers.
+ * Filter out escape sequences and any other unusual characters when
+ writing all messages to the console.
+
+ -- Joey Hess <id@joeyh.name> Tue, 27 Sep 2016 20:25:35 -0400
+
+keysafe (0.20160922) unstable; urgency=medium
+
+ * Keysafe now knows about 3 servers, although only 1 is currently in
+ operation. It will queue uploads to the other 2 servers until
+ they are added in a later keysafe release.
+ * Added --autostart mode, and make both keysafe --backup and
+ the Makefile install a FDO desktop autostart file to use it.
+ * In --autostart mode, retry any queued uploads.
+ * In --autostart mode, check for gpg keys that have not been
+ backed up, and offer to back them up. Only ask once per key.
+ * Changed format of ~/.keysafe/backup.log
+ * Server: Reduce number of buckets in rate limiter, avoiding ones with very low
+ proof of work.
+ * Server: Make rate limiter adapt to ongoing load more quickly -- every 15
+ minutes instead of every 60.
+ * Server: Added --backup-server and --restore-server to aid in backing
+ up keysafe servers with minimal information leakage.
+
+ -- Joey Hess <id@joeyh.name> Thu, 22 Sep 2016 15:10:56 -0400
+
+keysafe (0.20160914) unstable; urgency=medium
+
+ * Fix bug that prevented keysafe --server from running when there was no
+ controlling terminal and zenity was not installed.
+ * Added --name and --othername options.
+ * Added proof of work to client/server protocol.
+ * Server-side rate limiting and DOS protection.
+ * server: Added --months-to-fill-half-disk option, defaulting to 12.
+ * Several new dependencies.
+ * Another fix to gpg secret key list parser.
+ * Warn when uploads fail and are put in the upload queue.
+ * Warn when --uploadqueued fails to upload to servers.
+ * Fix --uploadqueued bug that prevented deletion of local queued file.
+ * Added --chaff mode which uploads random junk to servers.
+ This is useful both to test the server throttling of uploads,
+ and to make it harder for servers to know if an object actually
+ contains secret key information.
+ * Store information about backed up keys in ~/.keysafe/backup.log
+ This can be deleted by the user at any time, but it's useful
+ in case a server is known to be compromised, or a problem is found
+ with keysafe's implementation that makes a backup insecure.
+
+ -- Joey Hess <id@joeyh.name> Wed, 14 Sep 2016 17:08:55 -0400
+
+keysafe (0.20160831) unstable; urgency=medium
+
+ * Server implementation is ready for initial deployment.
+ * Keysafe as a client is not yet ready for production use.
+ * Removed embedded copy of secret-sharing library, since finite-field
+ only supports prime fields. This caused shares to be twice the size of
+ the input value.
+ * Reduced chunk size to 32kb due to share size doubling.
+ * Fix gpg secret key list parser to support gpg 2.
+ * Tuned argon2 hash parameters on better hardware than my fanless laptop.
+ * Improve time estimates, taking into account the number of cores.
+ * Added basic test suite.
+ * Added options: --store-directory --test --port --address
+ * Added a Makefile
+ * Added a systemd service file.
+ * Added a desktop file.
+
+ -- Joey Hess <id@joeyh.name> Wed, 31 Aug 2016 15:43:30 -0400
+
+keysafe (0.20160819) unstable; urgency=medium
+
+ * First release of keysafe. This is not yet ready for production use.
+ * Network support is not yet implemented, but --store-local works for
+ testing with local data storage.
+ * Data backed up with keysafe version 0.* will not be able to be restored
+ by any later version! Once the data format stabilizes, keysafe version
+ 1 data will be supported by every later version.
+ * Argon2 hashes are not yet tuned for modern hardware, but only for my
+ laptop. So, cracking cost estimates may be low. To help with this
+ tuning, run `keysafe --benchmark` and send the output to me.
+
+ -- Joey Hess <id@joeyh.name> Fri, 19 Aug 2016 19:41:06 -0400
diff --git a/CmdLine.hs b/CmdLine.hs
new file mode 100644
index 0000000..155a628
--- /dev/null
+++ b/CmdLine.hs
@@ -0,0 +1,246 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module CmdLine where
+
+import Types
+import Types.Storage
+import Types.Server (HostName, Port)
+import Types.Cost (Seconds(..))
+import Storage.Local
+import Storage.Network
+import Tunables
+import qualified Gpg
+import Options.Applicative
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+import System.Directory
+
+data CmdLine = CmdLine
+ { mode :: Maybe Mode
+ , secretkeysource :: Maybe SecretKeySource
+ , localstoragedirectory :: Maybe LocalStorageDirectory
+ , gui :: Bool
+ , testMode :: Bool
+ , customShareParams :: Maybe ShareParams
+ , name :: Maybe Name
+ , othername :: Maybe Name
+ , preferredStorage :: [Maybe LocalStorageDirectory -> IO (Maybe Storage)]
+ , serverConfig :: ServerConfig
+ , chaffMaxDelay :: Maybe Seconds
+ }
+
+data Mode = Backup | Restore | UploadQueued | AutoStart | Server | BackupServer FilePath | RestoreServer FilePath | Chaff HostName | CheckServers | Benchmark | Test
+ deriving (Show)
+
+data ServerConfig = ServerConfig
+ { serverPort :: Port
+ , serverAddress :: String
+ , monthsToFillHalfDisk :: Integer
+ , serverMotd :: Maybe T.Text
+ }
+
+parse :: Parser CmdLine
+parse = CmdLine
+ <$> optional parseMode
+ <*> optional (gpgswitch <|> fileswitch)
+ <*> optional localstoragedirectoryopt
+ <*> guiswitch
+ <*> testmodeswitch
+ <*> optional parseShareParams
+ <*> optional nameopt
+ <*> optional othernameopt
+ <*> many (addstoragedirectory <|> addserver)
+ <*> parseServerConfig
+ <*> optional chaffmaxdelayopt
+ where
+ gpgswitch = GpgKey . KeyId . T.pack <$> strOption
+ ( long "gpgkeyid"
+ <> metavar "KEYID"
+ <> help "Specify keyid of gpg key to back up or restore. (When this option is used to back up a key, it must also be used at restore time.)"
+ )
+ fileswitch = KeyFile <$> strOption
+ ( long "keyfile"
+ <> metavar "FILE"
+ <> help "Specify secret key file to back up or restore. (The same filename must be used to restore a key as was used to back it up.)"
+ )
+ localstoragedirectoryopt = LocalStorageDirectory <$> option str
+ ( long "store-directory"
+ <> metavar "DIR"
+ <> help "Where to store data locally. For the client, data is stored here before it is uploaded to the server. For the server, this is where it stores its data. (default: ~/.keysafe/objects/)"
+ )
+ testmodeswitch = switch
+ ( long "testmode"
+ <> help "Avoid using expensive cryptographic operations to secure data. Use for testing only, not with real secret keys."
+ )
+ guiswitch = switch
+ ( long "gui"
+ <> help "Use GUI interface for interaction. Default is to use readline interface when run in a terminal, and GUI otherwise."
+ )
+ nameopt = option nameOption
+ ( long "name"
+ <> metavar "N"
+ <> help "Specify name used for key backup/restore, avoiding the usual prompt."
+ )
+ othernameopt = option nameOption
+ ( long "othername"
+ <> metavar "N"
+ <> help "Specify other name used for key backup/restore, avoiding the usual prompt."
+ )
+ chaffmaxdelayopt = option secondsOption
+ ( long "chaff-max-delay"
+ <> metavar "SECONDS"
+ <> help "Specify a delay between chaff uploads. Will delay a random amount between 0 and this many seconds."
+ )
+ addstoragedirectory = (\d _lsd -> localStorageOverride d)
+ <$> strOption
+ ( long "add-storage-directory"
+ <> metavar "DIR"
+ <> help "Add the directory to the list of locations keysafe will use for backup/restore of keys. Keysafe will use the directory first, before any of its built-in servers."
+ )
+ addserver = (\(h, p) lsd -> networkStorageOverride lsd h p)
+ <$> option hostPortOption
+ ( long "add-server"
+ <> metavar "HOST[:PORT]"
+ <> help "Add the server to the server list which keysafe will use for backup/restore of keys. Keysafe will use the server first before any of its built-in servers."
+ )
+
+parseMode :: Parser Mode
+parseMode =
+ flag' Backup
+ ( long "backup"
+ <> help "Store a secret key in keysafe."
+ )
+ <|> flag' Restore
+ ( long "restore"
+ <> help "Retrieve a secret key from keysafe."
+ )
+ <|> flag' UploadQueued
+ ( long "uploadqueued"
+ <> help "Upload any data to servers that was queued by a previous --backup run."
+ )
+ <|> flag' AutoStart
+ ( long "autostart"
+ <> help "Run automatically on login by desktop autostart file."
+ )
+ <|> flag' Server
+ ( long "server"
+ <> help "Run as a keysafe server, accepting objects and storing them to ~/.keysafe/objects/local/"
+ )
+ <|> BackupServer <$> strOption
+ ( long "backup-server"
+ <> metavar "BACKUPDIR"
+ <> help "Run on a server, populates the directory with a gpg encrypted backup of all objects stored in the --store-directory. This is designed to be rsynced offsite (with --delete) to back up a keysafe server with minimal information leakage."
+ )
+ <|> RestoreServer <$> strOption
+ ( long "restore-server"
+ <> metavar "BACKUPDIR"
+ <> help "Restore all objects present in the gpg-encrypted backups in the specified directory."
+ )
+ <|> Chaff <$> strOption
+ ( long "chaff"
+ <> metavar "HOSTNAME"
+ <> help "Upload random data to a keysafe server."
+ )
+ <|> flag' CheckServers
+ ( long "check-servers"
+ <> help "Tries to connect to each server in the server list. Displays the server's MOTD, and the amount of data stored on it. Prints message to stderr and exits nonzero if any of the servers are not accessible."
+ )
+ <|> flag' Benchmark
+ ( long "benchmark"
+ <> help "Benchmark speed of keysafe's cryptographic primitives."
+ )
+ <|> flag' Test
+ ( long "test"
+ <> help "Run test suite."
+ )
+
+parseShareParams :: Parser ShareParams
+parseShareParams = ShareParams <$> totalobjects <*> neededobjects
+ where
+ totalobjects = option auto
+ ( long "totalshares"
+ <> metavar "M"
+ <> help ("Configure the number of shares to split encrypted secret key into. "
+ ++ showdefault totalObjects ++ neededboth)
+ )
+ neededobjects = option auto
+ ( long "neededshares"
+ <> metavar "N"
+ <> help ("Configure the number of shares needed to restore. "
+ ++ showdefault neededObjects ++ neededboth)
+ )
+ showdefault f = "(default: " ++ show (f (shareParams defaultTunables)) ++ ")"
+ neededboth = " (When this option is used to back up a key, it must also be provided at restore time.)"
+
+
+parseServerConfig :: Parser ServerConfig
+parseServerConfig = ServerConfig
+ <$> option auto
+ ( long "port"
+ <> metavar "P"
+ <> value 80
+ <> showDefault
+ <> help "Port for server to listen on."
+ )
+ <*> option str
+ ( long "address"
+ <> metavar "A"
+ <> value "127.0.0.1"
+ <> showDefault
+ <> help "Address for server to bind to. (Use \"*\" to bind to all addresses.)"
+ )
+ <*> option auto
+ ( long "months-to-fill-half-disk"
+ <> metavar "N"
+ <> value 12
+ <> showDefault
+		<> help "Server rate-limits requests and requires proof of work, to avoid too many objects being stored. This is a lower bound on how long it could possibly take for half of the current disk space to be filled."
+ )
+ <*> optional (T.pack <$> strOption
+ ( long "motd"
+ <> metavar "MESSAGE"
+ <> help "The server's Message Of The Day."
+ ))
+
+get :: IO CmdLine
+get = execParser opts
+ where
+ opts = info (helper <*> parse)
+ ( fullDesc
+ <> header "keysafe - securely back up secret keys"
+ )
+
+-- | When a mode is not specified on the command line,
+-- default to backing up if a secret key exists, and otherwise restoring.
+selectMode :: CmdLine -> IO Mode
+selectMode cmdline = case mode cmdline of
+ Just m -> return m
+ Nothing -> case secretkeysource cmdline of
+ Just (KeyFile f) -> present <$> doesFileExist f
+ _ -> present . not . null <$> Gpg.listSecretKeys
+ where
+ present True = Backup
+ present False = Restore
+
+customizeShareParams :: CmdLine -> Tunables -> Tunables
+customizeShareParams cmdline t = case customShareParams cmdline of
+ Nothing -> t
+ Just ps -> t { shareParams = ps }
+
+secondsOption :: ReadM Seconds
+secondsOption = Seconds . toRational <$> (auto :: ReadM Double)
+
+nameOption :: ReadM Name
+nameOption = Name . BU8.fromString <$> auto
+
+hostPortOption :: ReadM (HostName, Port)
+hostPortOption = eitherReader $ \s ->
+ case break (== ':') s of
+ ([], []) -> Left "need a hostname"
+ (h, ':':ps) -> case reads ps of
+ [(p, "")] -> Right (h, p)
+ _ -> Left $ "unable to parse port \"" ++ ps ++ "\""
+ (h, _) -> Right (h, 80)
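+
+-- Illustrative behaviour of the parser above (examples are not part of the
+-- original source):
+--   "example.com:4242"  parses to  Right ("example.com", 4242)
+--   "example.com"       parses to  Right ("example.com", 80)
+--   ""                  parses to  Left "need a hostname"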
diff --git a/Cost.hs b/Cost.hs
new file mode 100644
index 0000000..dc2438e
--- /dev/null
+++ b/Cost.hs
@@ -0,0 +1,150 @@
+{-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Cost (
+ module Cost,
+ module Types.Cost
+) where
+
+import Types.Cost
+import Data.List
+import Data.Maybe
+import Text.Read
+
+-- | Cost in seconds, with the type of hardware needed.
+totalCost :: Cost op -> (Seconds, [UsingHardware])
+totalCost (CPUCost s _) = (s, [UsingCPU])
+
+raiseCostPower :: Cost c -> Entropy e -> Cost c
+raiseCostPower c (Entropy e) = mapCost (* 2^e) c
+
+mapCost :: (Rational -> Rational) -> Cost op -> Cost op
+mapCost f (CPUCost (Seconds n) d) = CPUCost (Seconds (f n)) d
+
+type NumCores = Integer
+
+showCostMinutes :: NumCores -> Cost op -> String
+showCostMinutes numcores (CPUCost (Seconds n) (Divisibility d))
+ | n' < 61 = "1 minute"
+ | otherwise = show (n' / 60) ++ " minutes"
+ where
+ n' :: Double
+ n' = fromRational n / fromIntegral (min numcores d)
+
+-- If an operation took n seconds on a number of cores,
+-- multiply to get the CPUCost, which is for a single core.
+coreCost :: NumCores -> Seconds -> Divisibility -> Cost op
+coreCost cores (Seconds n) d = CPUCost (Seconds (fromIntegral cores * n)) d
+
+castCost :: Cost a -> Cost b
+castCost (CPUCost s d) = CPUCost s d
+
+-- | CostCalc for a brute force linear search through an entropy space
+-- in which each step entails paying a cost.
+--
+-- On average, the solution will be found half way through.
+-- This is equivalent to one bit less of entropy.
+bruteForceLinearSearch :: Cost step -> CostCalc BruteForceOp t
+bruteForceLinearSearch stepcost e =
+ castCost stepcost `raiseCostPower` reduceEntropy e 1
+
+-- | Estimate of cost of a brute force attack.
+estimateBruteforceOf :: Bruteforceable t a => t -> Entropy a -> Cost BruteForceOp
+estimateBruteforceOf t e = getBruteCostCalc t e
+
+data DataCenterPrice = DataCenterPrice
+ { instanceCpuCores :: Integer
+ , instanceCpuCoreMultiplier :: Integer
+ -- ^ If the cores are twice as fast as the commodity hardware
+ -- that keysafe's cost estimates are based on, use 2 to indicate
+ -- this, etc.
+ , instanceCostPerHour :: Cents
+ }
+
+-- August 2016 spot pricing: 36 CPU core c4.8xlarge at 33c/hour
+spotAWS :: DataCenterPrice
+spotAWS = DataCenterPrice
+ { instanceCpuCores = 36
+ , instanceCpuCoreMultiplier = 2
+ , instanceCostPerHour = Cents 33
+ }
+
+-- | Estimate of cost of brute force attack using a datacenter.
+--
+-- Note that this assumes that CPU cores and GPU cores are of equal number,
+-- which is unlikely to be the case; typically there will be many more
+-- cores than GPUs. So, this underestimates the price to brute force
+-- operations which run faster on GPUs.
+estimateAttackCost :: DataCenterPrice -> Cost BruteForceOp -> Dollars
+estimateAttackCost dc opcost = centsToDollars $ costcents
+ where
+ (Seconds cpuseconds) = fst (totalCost opcost)
+ cpuyears = cpuseconds / (60*60*24*365)
+ costpercpuyear = Cents $
+ fromIntegral (instanceCostPerHour dc) * 24 * 365
+ `div` (instanceCpuCores dc * instanceCpuCoreMultiplier dc)
+ costcents = Cents (ceiling cpuyears) * costpercpuyear
+
+newtype Cents = Cents Integer
+ deriving (Num, Integral, Enum, Real, Ord, Eq, Show)
+
+newtype Dollars = Dollars Integer
+ deriving (Num, Integral, Enum, Real, Ord, Eq)
+
+instance Show Dollars where
+ show (Dollars n) = go
+ [ (1000000000000, "trillion")
+ , (1000000000, "billion")
+ , (1000000, "million")
+ , (1000, "thousand")
+ ]
+ where
+ go [] = "$" ++ show n
+ go ((d, u):us)
+ | n >= d =
+ let n' = n `div` d
+ in "$" ++ show n' ++ " " ++ u
+ | otherwise = go us
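+
+-- For instance (illustrative, not part of the original source):
+-- show (Dollars 2500000) == "$2 million" and show (Dollars 750) == "$750".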
+
+centsToDollars :: Cents -> Dollars
+centsToDollars (Cents c) = Dollars (c `div` 100)
+
+type Year = Integer
+
+-- | Apply Moore's law to show how a cost might vary over time.
+costOverTime :: Dollars -> Year -> [(Dollars, Year)]
+costOverTime (Dollars currcost) thisyear =
+ (Dollars currcost, thisyear) : map calc otheryears
+ where
+ otheryears = [thisyear+1, thisyear+5, thisyear+10]
+ calc y =
+ let monthdelta = (fromIntegral ((y * 12) - (thisyear * 12))) :: Double
+ cost = floor $ fromIntegral currcost / 2 ** (monthdelta / 18)
+ in (Dollars cost, y)
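+
+-- Worked example (illustrative, not part of the original source): with the
+-- halve-every-18-months rule above, a $1000000 cost in 2016 falls to roughly
+-- $630000 in 2017 (12 months), $99000 in 2021 (60 months), and $9800 in
+-- 2026 (120 months).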
+
+costOverTimeTable :: Dollars -> Year -> [String]
+costOverTimeTable cost thisyear = go [] thisyear $ costOverTime cost thisyear
+ where
+ go t _ [] = reverse t
+ go t yprev ((c, y):ys) =
+ let s = " in " ++ show y ++ ": " ++ show c
+ in if yprev < y - 1
+ then go (s:" ...":t) y ys
+ else go (s:t) y ys
+
+-- Number of physical cores. This is not the same as
+-- getNumProcessors, which includes hyper-threading.
+getNumCores :: IO (Maybe NumCores)
+getNumCores = getmax . mapMaybe parse . lines <$> readFile "/proc/cpuinfo"
+ where
+ getmax [] = Nothing
+ getmax l = Just $
+ maximum l + 1 -- add 1 because /proc/cpuinfo counts from 0
+ parse l
+ | "core id" `isPrefixOf` l =
+ readMaybe $ drop 1 $ dropWhile (/= ':') l
+ | otherwise = Nothing
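+
+-- Illustrative example (not part of the original source): on a machine whose
+-- /proc/cpuinfo lists "core id : 0" through "core id : 3", the parser above
+-- returns Just 4.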
diff --git a/Encryption.hs b/Encryption.hs
new file mode 100644
index 0000000..880095d
--- /dev/null
+++ b/Encryption.hs
@@ -0,0 +1,242 @@
+{-# LANGUAGE OverloadedStrings, MultiParamTypeClasses, DataKinds #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Encryption where
+
+import Types
+import Tunables
+import Cost
+import ExpensiveHash
+import ByteStrings
+import Data.Monoid
+import Data.Maybe
+import qualified Raaz
+import qualified Raaz.Cipher.AES as Raaz
+import qualified Raaz.Cipher.Internal as Raaz
+import qualified Data.Text.Encoding as E
+import qualified Data.ByteString as B
+import qualified Data.ByteString.Char8 as B8
+import qualified Data.ByteString.UTF8 as BU8
+import Text.Read
+
+type AesKey = Raaz.KEY256
+
+cipher :: Raaz.AES 256 'Raaz.CBC
+cipher = Raaz.aes256cbc
+
+encrypt :: Tunables -> KeyEncryptionKey -> SecretKey -> EncryptedSecretKey
+encrypt tunables kek (SecretKey secret) =
+ EncryptedSecretKey (chunkByteString (objectSize tunables) b) (keyBruteForceCalc kek)
+ where
+ -- Raaz does not seem to provide a high-level interface
+ -- for AES encryption, so use unsafeEncrypt. The use of
+ -- EncryptableBytes makes sure it's provided with a
+ -- multiple of the AES block size.
+ b = Raaz.unsafeEncrypt cipher (keyEncryptionKey kek, keyEncryptionIV kek) $
+ getEncryptableBytes $ encodeEncryptableBytes tunables secret
+
+data DecryptResult
+ = DecryptSuccess SecretKey
+ | DecryptIncomplete KeyEncryptionKey
+ -- ^ Returned when the EncryptedSecretKey is truncated.
+ | DecryptFailed
+
+instance Show DecryptResult where
+ show (DecryptSuccess _) = "DecryptSuccess"
+ show (DecryptIncomplete _) = "DecryptIncomplete"
+ show DecryptFailed = "DecryptFailed"
+
+decrypt :: KeyEncryptionKey -> EncryptedSecretKey -> DecryptResult
+decrypt kek (EncryptedSecretKey cs _) = case decodeEncryptableBytes pbs of
+ Nothing -> DecryptFailed
+ Just (DecodeSuccess secretkey) -> DecryptSuccess (SecretKey secretkey)
+ Just DecodeIncomplete -> DecryptIncomplete kek
+ where
+ pbs = EncryptableBytes $
+ Raaz.unsafeDecrypt cipher (keyEncryptionKey kek, keyEncryptionIV kek) b
+ b = B.concat cs
+
+-- | Tries each candidate key in turn until one unlocks the encrypted data.
+tryDecrypt :: Candidates KeyEncryptionKey -> EncryptedSecretKey -> DecryptResult
+tryDecrypt (Candidates l _ _) esk = go l
+ where
+ go [] = DecryptFailed
+ go (kek:rest) = case decrypt kek esk of
+ DecryptFailed -> go rest
+ r -> r
+
+-- | An AES key, which is used to encrypt the secret key that is stored
+-- in keysafe.
+data KeyEncryptionKey = KeyEncryptionKey
+ { keyEncryptionKey :: AesKey
+ , keyEncryptionIV :: Raaz.IV
+ , keyCreationCost :: Cost CreationOp
+ , keyDecryptionCost :: Cost DecryptionOp
+ , keyBruteForceCalc :: CostCalc BruteForceOp UnknownPassword
+ }
+
+instance HasCreationCost KeyEncryptionKey where
+ getCreationCost = keyCreationCost
+
+instance HasDecryptionCost KeyEncryptionKey where
+ getDecryptionCost = keyDecryptionCost
+
+instance Bruteforceable KeyEncryptionKey UnknownPassword where
+ getBruteCostCalc = keyBruteForceCalc
+
+data Candidates a = Candidates [a] (Cost CreationOp) (Cost DecryptionOp)
+
+instance HasCreationCost (Candidates a) where
+ getCreationCost (Candidates _ c _) = c
+
+instance HasDecryptionCost (Candidates a) where
+ getDecryptionCost (Candidates _ _ c) = c
+
+-- | The ExpensiveHash of the Password used as the KeyEncryptionKey
+--
+-- Name is used as a salt, to prevent rainbow table attacks.
+--
+-- A random prefix is added to the salt, to force an attacker to
+-- run the hash repeatedly.
+genKeyEncryptionKey :: Tunables -> Name -> Password -> IO KeyEncryptionKey
+genKeyEncryptionKey tunables name password = do
+ prg <- Raaz.newPRG () :: IO Raaz.SystemPRG
+ saltprefix <- genRandomSaltPrefix prg tunables
+ return $ head $
+ genKeyEncryptionKeys [saltprefix] tunables name password
+
+-- | A stream of KeyEncryptionKeys, using the specified salt prefixes.
+genKeyEncryptionKeys :: [SaltPrefix] -> Tunables -> Name -> Password -> [KeyEncryptionKey]
+genKeyEncryptionKeys saltprefixes tunables (Name name) (Password password) =
+ map mk saltprefixes
+ where
+ iv = genIV (Name name)
+ -- To brute force data encrypted with a key,
+ -- an attacker needs to pay the decryptcost for
+ -- each password checked.
+ bruteforcecalc = bruteForceLinearSearch decryptcost
+ decryptcost = castCost $ randomSaltBytesBruteForceCost kektunables
+ kektunables = keyEncryptionKeyTunable tunables
+
+ mk saltprefix = KeyEncryptionKey (hashToAESKey hash) iv (getCreationCost hash) decryptcost bruteforcecalc
+ where
+ salt = Salt (saltprefix <> name)
+ hash = expensiveHash (keyEncryptionKeyHash kektunables) salt password
+
+-- | A stream of all the key encryption keys that need to be tried to
+-- decrypt.
+candidateKeyEncryptionKeys :: Tunables -> Name -> Password -> Candidates KeyEncryptionKey
+candidateKeyEncryptionKeys tunables name password =
+ let ks@(k:_) = genKeyEncryptionKeys saltprefixes tunables name password
+ in Candidates ks (getCreationCost k) (getDecryptionCost k)
+ where
+ saltprefixes = allByteStringsOfLength $
+ randomSaltBytes $ keyEncryptionKeyTunable tunables
+
+-- Use the sha256 of the name (truncated) as the IV.
+genIV :: Name -> Raaz.IV
+genIV (Name name) =
+ fromMaybe (error "genIV fromByteString failed") $
+ Raaz.fromByteString $ B.take ivlen $
+ Raaz.toByteString $ Raaz.sha256 name
+ where
+ ivlen = fromIntegral $ Raaz.byteSize (undefined :: Raaz.IV)
+
+type SaltPrefix = B.ByteString
+
+genRandomSaltPrefix :: Raaz.SystemPRG -> Tunables -> IO SaltPrefix
+genRandomSaltPrefix prg tunables = randomByteStringOfLength n prg
+ where
+ n = randomSaltBytes $ keyEncryptionKeyTunable tunables
+
+-- | Make an AES key out of a hash value.
+--
+-- Since the ExpensiveHash value is ascii encoded, and has a common prefix,
+-- it does not have a high entropy in every byte, and its length is longer
+-- than the AES key length. To deal with this, use the SHA256 of
+-- the ExpensiveHash, as a bytestring.
+hashToAESKey :: ExpensiveHash -> AesKey
+hashToAESKey (ExpensiveHash _ t) =
+ fromMaybe (error "hashToAESKey fromByteString failed") $
+ Raaz.fromByteString b
+ where
+ b = B.take (fromIntegral $ Raaz.byteSize (undefined :: AesKey)) $
+ Raaz.toByteString $ Raaz.sha256 (E.encodeUtf8 t)
+
+-- | A bytestring that can be AES encrypted.
+--
+-- It is padded to a multiple of the objectSize with NULs.
+-- Since objectSize is a multiple of the AES blocksize, so is this.
+--
+-- Format is:
+--
+-- sizeNULsizeshaNULdatashaNULdata
+--
+-- The size gives the length of the data. If the data is shorter
+-- than that, we know that the bytestring is truncated.
+--
+-- The datasha is the sha256 of the data. This is checked when decoding
+-- to guard against corruption.
+--
+-- The sizesha is the sha256 of the size. This is included as a sanity
+-- check that the right key was used to decrypt it. It's not unlikely
+-- that using the wrong key could result in a bytestring that starts
+-- with wrongsizeNUL, but it's astronomically unlikely that the
+-- sizesha matches in this case.
+newtype EncryptableBytes = EncryptableBytes { getEncryptableBytes :: B.ByteString }
+ deriving (Show)
+
+encodeEncryptableBytes :: Tunables -> B.ByteString -> EncryptableBytes
+encodeEncryptableBytes tunables content = EncryptableBytes $
+ padBytes (objectSize tunables) $ B.intercalate sep
+ [ size
+ , sha size
+ , sha content
+ , content
+ ]
+ where
+ size = B8.pack (show (B.length content))
+ sep = B.singleton 0
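+
+-- For example (illustrative, not part of the original source), a 100-byte
+-- payload is laid out as
+--   "100" <> NUL <> sha "100" <> NUL <> sha payload <> NUL <> payload
+-- and then NUL-padded up to the next multiple of objectSize.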
+
+-- | Encoded, so that it does not contain any NULs.
+sha :: B.ByteString -> B.ByteString
+sha = BU8.fromString . Raaz.showBase16 . Raaz.sha256
+
+padBytes :: Int -> B.ByteString -> B.ByteString
+padBytes n b = b <> padding
+ where
+ len = B.length b
+ r = len `rem` n
+ padding
+ | r == 0 = B.empty
+ | otherwise = B.replicate (n - r) 0
+
+data DecodeResult
+ = DecodeSuccess B.ByteString
+ | DecodeIncomplete
+ deriving (Show)
+
+decodeEncryptableBytes :: EncryptableBytes -> Maybe DecodeResult
+decodeEncryptableBytes (EncryptableBytes b) = do
+ (sizeb, rest) <- getword b
+ (sizesha, rest') <- getword rest
+ (contentsha, rest'') <- getword rest'
+ if sha sizeb /= sizesha
+ then Nothing
+ else do
+ size <- readMaybe (B8.unpack sizeb)
+ let content = B.take size rest''
+ if B.length content /= size
+ then return DecodeIncomplete
+ else if sha content /= contentsha
+ then Nothing
+ else return (DecodeSuccess content)
+ where
+ getword d = case B.break (== 0) d of
+ (w, rest)
+ | B.null w || B.null rest-> Nothing
+ | otherwise -> Just (w, B.drop 1 rest)
diff --git a/Entropy.hs b/Entropy.hs
new file mode 100644
index 0000000..198c798
--- /dev/null
+++ b/Entropy.hs
@@ -0,0 +1,24 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Entropy where
+
+import Types
+import Types.Cost
+import qualified Data.ByteString.UTF8 as B
+import Text.Password.Strength (estimate, UserDict)
+
+-- | Calculation of the entropy of a password.
+-- Uses zxcvbn so takes word lists, and other entropy weakening problems
+-- into account.
+calcPasswordEntropy :: Password -> UserDict -> Entropy UnknownPassword
+calcPasswordEntropy (Password p) userdict = Entropy $ floor $
+ estimate (B.toString p) userdict
+
+-- | Naive calculation of the entropy of a name.
+-- Assumes that the attacker is not targeting a particular list of names.
+calcNameEntropy :: Name -> Entropy UnknownName
+calcNameEntropy (Name n) = Entropy $ floor $
+ estimate (B.toString n) []
diff --git a/ExpensiveHash.hs b/ExpensiveHash.hs
new file mode 100644
index 0000000..6fab15c
--- /dev/null
+++ b/ExpensiveHash.hs
@@ -0,0 +1,42 @@
+{-# LANGUAGE OverloadedStrings #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module ExpensiveHash where
+
+import Tunables
+import Cost
+import Serialization ()
+import qualified Data.Text as T
+import qualified Data.ByteString as B
+import qualified Crypto.Argon2 as Argon2
+import Raaz.Core.Encode
+import Data.Monoid
+
+-- | A hash that is expensive to calculate.
+--
+-- This is a lynchpin of keysafe's security, because using this hash
+-- as an encryption key forces brute force attackers to generate
+-- hashes over and over again, taking a very long time.
+data ExpensiveHash = ExpensiveHash (Cost CreationOp) T.Text
+ deriving (Show)
+
+instance HasCreationCost ExpensiveHash where
+ getCreationCost (ExpensiveHash c _) = c
+
+data Salt t = Salt t
+
+expensiveHash :: Encodable t => ExpensiveHashTunable -> Salt t -> B.ByteString -> ExpensiveHash
+expensiveHash (UseArgon2 cost opts) (Salt s) b = ExpensiveHash cost $
+ -- Using hashEncoded here and not hash,
+ -- because of this bug:
+ -- https://github.com/ocharles/argon2/issues/3
+ Argon2.hashEncoded opts b argonsalt
+ where
+ -- argon salt cannot be shorter than 8 bytes, so pad with spaces.
+ argonsalt =
+ let sb = toByteString s
+ in sb <> B.replicate (8 - B.length sb ) 32
diff --git a/Gpg.hs b/Gpg.hs
new file mode 100644
index 0000000..91b53cd
--- /dev/null
+++ b/Gpg.hs
@@ -0,0 +1,79 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE OverloadedStrings, BangPatterns #-}
+
+module Gpg where
+
+import Types
+import UI
+import System.Process
+import Data.List.Split
+import System.IO
+import System.Exit
+import qualified Data.ByteString as B
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+
+-- | Pick gpg secret key to back up.
+--
+-- If there is only one gpg secret key,
+-- the choice is obvious. Otherwise prompt the user with a list.
+getKeyToBackup :: UI -> IO (SecretKeySource, SecretKey)
+getKeyToBackup ui = go =<< listSecretKeys
+ where
+ go [] = do
+ showError ui "You have no gpg secret keys to back up."
+ error "Aborting on no gpg secret keys."
+ go [(_, kid)] = mkret kid
+ go l = maybe (error "Canceled") mkret
+ =<< promptKeyId ui "Pick gpg secret key"
+ "Pick gpg secret key to back up:" l
+ mkret kid = do
+ sk <- getSecretKey kid
+ return (GpgKey kid, sk)
+
+listSecretKeys :: IO [(Name, KeyId)]
+listSecretKeys = map mk . parse . lines <$> readProcess "gpg"
+ ["--batch", "--with-colons", "--list-secret-keys", "--fixed-list-mode"] ""
+ where
+ parse = extract [] Nothing . map (splitOn ":")
+ extract c (Just keyid) (("uid":_:_:_:_:_:_:_:_:userid:_):rest) =
+ extract ((userid, keyid):c) Nothing rest
+ extract c (Just keyid) rest@(("sec":_):_) =
+ extract (("", keyid):c) Nothing rest
+ extract c (Just keyid) (_:rest) =
+ extract c (Just keyid) rest
+ extract c _ [] = c
+ extract c _ (("sec":_:_:_:keyid:_):rest) =
+ extract c (Just keyid) rest
+ extract c k (_:rest) =
+ extract c k rest
+ mk (userid, keyid) = (Name (BU8.fromString userid), KeyId (T.pack keyid))
+
+getSecretKey :: KeyId -> IO SecretKey
+getSecretKey (KeyId kid) = do
+ (_, Just hout, _, ph) <- createProcess (proc "gpg" ps)
+ { std_out = CreatePipe }
+ secretkey <- SecretKey <$> B.hGetContents hout
+ exitcode <- waitForProcess ph
+ case exitcode of
+ ExitSuccess -> return secretkey
+ _ -> error "gpg --export-secret-key failed"
+ where
+ ps = ["--batch", "--export-secret-key", T.unpack kid]
+
+writeSecretKey :: SecretKey -> IO ()
+writeSecretKey (SecretKey b) = do
+ (Just hin, _, _, ph) <- createProcess (proc "gpg" ps)
+ { std_in = CreatePipe }
+ B.hPut hin b
+ hClose hin
+ exitcode <- waitForProcess ph
+ case exitcode of
+ ExitSuccess -> return ()
+ _ -> error "gpg --import failed"
+ where
+ ps = ["--batch", "--import"]
diff --git a/HTTP.hs b/HTTP.hs
new file mode 100644
index 0000000..d76a753
--- /dev/null
+++ b/HTTP.hs
@@ -0,0 +1,111 @@
+{-# LANGUAGE DataKinds #-}
+{-# LANGUAGE DeriveGeneric #-}
+{-# LANGUAGE MultiParamTypeClasses #-}
+{-# LANGUAGE TypeOperators #-}
+{-# LANGUAGE OverloadedStrings #-}
+{-# OPTIONS_GHC -fno-warn-orphans #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module HTTP where
+
+import Types
+import Types.Storage
+import HTTP.ProofOfWork
+import Serialization ()
+import Servant.API
+import Data.Text
+import Data.Aeson.Types
+import GHC.Generics hiding (V1)
+import qualified Data.Text as T
+import qualified Data.Text.Encoding as T
+import qualified Data.ByteString as B
+import qualified Data.ByteString.Lazy as L
+import qualified Raaz.Core.Encode as Raaz
+import Data.Monoid
+import Prelude
+
+-- | Keysafe's http API
+type HttpAPI =
+ "keysafe" :> V1 :> "motd" :> Get '[JSON] Motd
+ :<|> "keysafe" :> V1 :> "objects" :> ObjectIdent :> POWParam
+ :> Get '[JSON] (POWGuarded StorableObject)
+ :<|> "keysafe" :> V1 :> "objects" :> ObjectIdent :> POWParam
+ :> ReqBody '[OctetStream] StorableObject
+ :> Put '[JSON] (POWGuarded StoreResult)
+ :<|> "keysafe" :> V1 :> "stats" :> "countobjects" :> POWParam
+ :> Get '[JSON] (POWGuarded CountResult)
+
+type V1 = "v1"
+
+newtype Motd = Motd Text
+ deriving (Generic)
+
+data POWGuarded t
+ = Result t
+ | NeedProofOfWork ProofOfWorkRequirement
+ deriving (Generic)
+
+type POWParam = QueryParam "proofofwork" ProofOfWork
+
+type ObjectIdent = Capture "ident" StorableObjectIdent
+
+instance ToJSON Motd
+instance FromJSON Motd
+instance ToJSON t => ToJSON (POWGuarded t)
+instance FromJSON t => FromJSON (POWGuarded t)
+instance ToJSON ProofOfWorkRequirement
+instance FromJSON ProofOfWorkRequirement
+instance ToJSON RequestID
+instance FromJSON RequestID
+instance ToJSON RandomSalt
+instance FromJSON RandomSalt
+
+-- StorableObjectIdent contains a hash, which is valid UTF-8.
+instance ToHttpApiData StorableObjectIdent where
+ toUrlPiece (StorableObjectIdent b) = T.decodeUtf8 b
+instance FromHttpApiData StorableObjectIdent where
+ parseUrlPiece = Right . StorableObjectIdent . T.encodeUtf8
+
+instance MimeRender OctetStream StorableObject where
+ mimeRender _ = L.fromStrict . Raaz.toByteString
+instance MimeUnrender OctetStream StorableObject where
+ mimeUnrender _ = maybe (Left "object encoding error") Right
+ . Raaz.fromByteString . L.toStrict
+
+-- StorableObject contains an arbitrary bytestring; it is not UTF-8 encoded.
+-- So, to convert it to Text for Aeson, base64 encode it.
+instance ToJSON StorableObject where
+ toJSON (StorableObject b) = object [ "data" .= b64 b ]
+instance FromJSON StorableObject where
+ parseJSON (Object v) = StorableObject <$> (unb64 =<< v .: "data")
+ parseJSON invalid = typeMismatch "StorableObject" invalid
+
+-- ProofOfWork contains an arbitrary bytestring and is base64 encoded in
+-- the query string.
+instance ToHttpApiData ProofOfWork where
+ toUrlPiece (ProofOfWork b rid) =
+ fromRandomSalt (randomSalt rid)
+ <> ":" <> requestHMAC rid
+ <> ":" <> b64 b
+instance FromHttpApiData ProofOfWork where
+ parseUrlPiece t = do
+ let (salt, rest) = T.break (== ':') t
+ let (hmac, rest') = T.break (== ':') (T.drop 1 rest)
+ b <- unb64 (T.drop 1 rest')
+ return $ ProofOfWork b $ RequestID
+ { randomSalt = RandomSalt salt
+ , requestHMAC = hmac
+ }
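+
+-- So a proof of work travels in the query string as (schematic, not from
+-- the original source):
+--   <random salt>:<request HMAC>:<base64 of the proof-of-work bytes>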
+
+b64 :: B.ByteString -> Text
+b64 v = T.decodeUtf8 $ Raaz.toByteString (Raaz.encode v :: Raaz.Base64)
+
+unb64 :: Monad m => Text -> m B.ByteString
+unb64 t = maybe
+ (fail "bad base64 data")
+ (return . Raaz.decodeFormat)
+ (Raaz.fromByteString (T.encodeUtf8 t) :: Maybe Raaz.Base64)
diff --git a/HTTP/Client.hs b/HTTP/Client.hs
new file mode 100644
index 0000000..25ff536
--- /dev/null
+++ b/HTTP/Client.hs
@@ -0,0 +1,116 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module HTTP.Client where
+
+import HTTP
+import HTTP.ProofOfWork
+import Types
+import Types.Server
+import Types.Storage
+import Types.Cost
+import Servant.API
+import Servant.Client
+import Data.Proxy
+import Network.HTTP.Client hiding (port, host, Proxy)
+import Network.HTTP.Client.Internal (Connection, makeConnection)
+import Control.Monad.Trans.Except (ExceptT, runExceptT)
+import Control.Exception
+import qualified Network.Socket
+import Network.Socket.ByteString (sendAll, recv)
+import Network.Socks5
+import qualified Data.ByteString.UTF8 as BU8
+import Data.List
+import Data.Char
+
+httpAPI :: Proxy HttpAPI
+httpAPI = Proxy
+
+motd :: Manager -> BaseUrl -> ClientM Motd
+getObject :: StorableObjectIdent -> Maybe ProofOfWork -> Manager -> BaseUrl -> ClientM (POWGuarded StorableObject)
+putObject :: StorableObjectIdent -> Maybe ProofOfWork -> StorableObject -> Manager -> BaseUrl -> ClientM (POWGuarded StoreResult)
+countObjects :: Maybe ProofOfWork -> Manager -> BaseUrl -> ClientM (POWGuarded CountResult)
+motd :<|> getObject :<|> putObject :<|> countObjects = client httpAPI
+
+tryA :: IO a -> IO (Either SomeException a)
+tryA = try
+
+serverRequest
+ :: POWIdent p
+ => Server
+ -> (String -> a)
+ -> (r -> a)
+ -> p
+ -> (Maybe ProofOfWork -> Manager -> BaseUrl -> ExceptT ServantError IO (POWGuarded r))
+ -> IO a
+serverRequest srv onerr onsuccess p a = do
+ r <- tryA $ go Nothing maxProofOfWork
+ case r of
+ Left e -> return $ onerr (show e)
+ Right v -> return v
+ where
+ go pow (Seconds timeleft)
+ | timeleft <= 0 = return $ onerr "server asked for too much proof of work; gave up"
+ | otherwise = do
+ res <- serverRequest' srv (a pow)
+ case res of
+ Left err -> return $ onerr err
+ Right (Result r) -> return $ onsuccess r
+ Right (NeedProofOfWork req) -> go
+ (Just $ genProofOfWork req p)
+ (Seconds timeleft - generationTime req)
+
+-- A new Manager is allocated for each request, rather than reusing
+-- any connection. This is a feature; it makes correlation attacks
+-- harder because the server can't tell if two connections
+-- accessing different objects came from the same user, except by
+-- comparing IP addresses (which are masked somewhat by using tor).
+serverRequest'
+ :: Server
+ -> (Manager -> BaseUrl -> ExceptT ServantError IO r)
+ -> IO (Either String r)
+serverRequest' srv a = go Nothing (serverUrls srv)
+ where
+ go lasterr [] = return $ Left $
+ maybe "no known address" (\err -> "server failure: " ++ show err) lasterr
+ go _ (url:urls) = do
+ manager <- torableManager
+ res <- runExceptT $ a manager url
+ case res of
+ Left err -> go (Just err) urls
+ Right r -> return (Right r)
+
+-- | HTTP Manager supporting tor .onion and regular hosts
+torableManager :: IO Manager
+torableManager = newManager $ defaultManagerSettings
+ { managerRawConnection = return conn
+ }
+ where
+ conn addr host port
+ | ".onion" `isSuffixOf` map toLower host = torConnection host port
+ | otherwise = do
+ regular <- managerRawConnection defaultManagerSettings
+ regular addr host port
+
+torConnection :: String -> Port -> IO Connection
+torConnection onionaddress p = do
+ (socket, _) <- socksConnect torsockconf socksaddr
+ socketConnection socket 8192
+ where
+ torsocksport = 9050
+ torsockconf = defaultSocksConf "127.0.0.1" torsocksport
+ socksdomain = SocksAddrDomainName (BU8.fromString onionaddress)
+ socksaddr = SocksAddress socksdomain (fromIntegral p)
+
+socketConnection :: Network.Socket.Socket -> Int -> IO Connection
+socketConnection socket chunksize = makeConnection
+ (recv socket chunksize)
+ (sendAll socket)
+ (Network.Socket.close socket)
+
+serverUrls :: Server -> [BaseUrl]
+serverUrls srv = map go (serverAddress srv)
+ where
+ go (ServerAddress addr port) = BaseUrl Http addr port ""
diff --git a/HTTP/Logger.hs b/HTTP/Logger.hs
new file mode 100644
index 0000000..2758c37
--- /dev/null
+++ b/HTTP/Logger.hs
@@ -0,0 +1,25 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module HTTP.Logger where
+
+import System.Log.FastLogger
+import Data.String
+
+data Logger = Logger LoggerSet LoggerSet
+
+newLogger :: IO Logger
+newLogger = Logger
+ <$> newStdoutLoggerSet defaultBufSize
+ <*> newStderrLoggerSet defaultBufSize
+
+logStdout :: Logger -> String -> IO ()
+logStdout (Logger l _) = sendLogger l
+
+logStderr :: Logger -> String -> IO ()
+logStderr (Logger _ l) = sendLogger l
+
+sendLogger :: LoggerSet -> String -> IO ()
+sendLogger l s = pushLogStrLn l (fromString s)
diff --git a/HTTP/ProofOfWork.hs b/HTTP/ProofOfWork.hs
new file mode 100644
index 0000000..a94b19b
--- /dev/null
+++ b/HTTP/ProofOfWork.hs
@@ -0,0 +1,171 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE DeriveGeneric, OverloadedStrings #-}
+{-# OPTIONS_GHC -fno-warn-orphans #-}
+
+module HTTP.ProofOfWork where
+
+import Types
+import Types.Cost
+import ExpensiveHash
+import Tunables
+import ByteStrings
+import GHC.Generics
+import qualified Data.Text as T
+import qualified Data.ByteString as B
+import Data.Text.Encoding (encodeUtf8)
+import Raaz.Core.Encode
+import qualified Raaz
+import Data.BloomFilter.Hash
+import Control.Monad
+import Control.DeepSeq
+import Data.Word
+import Data.Bits
+import Data.Monoid
+import Prelude
+
+-- | A value that the client has to do some work to calculate.
+data ProofOfWork = ProofOfWork B.ByteString RequestID
+ deriving (Show, Generic)
+
+instance NFData ProofOfWork
+
+data ProofOfWorkRequirement = ProofOfWorkRequirement
+ { leadingZeros :: Int
+ , addedArgon2Iterations :: Word32
+ , requestID :: RequestID
+ }
+ deriving (Generic, Show)
+
+-- | A request ID has two parts, a RandomSalt and a HMAC.
+-- The server can verify if a request ID is one it generated.
+data RequestID = RequestID
+ { randomSalt :: RandomSalt
+ , requestHMAC :: T.Text
+ }
+ deriving (Generic, Show, Eq)
+
+instance NFData RequestID
+
+instance Hashable RequestID where
+ hashIO32 = hashIO32 . hashRequestID
+ hashIO64 = hashIO64 . hashRequestID
+
+hashRequestID :: RequestID -> B.ByteString
+hashRequestID rid = encodeUtf8 (fromRandomSalt (randomSalt rid))
+ <> ":" <> encodeUtf8 (requestHMAC rid)
+
+-- | Using Text and not ByteString so that ProofOfWorkRequirement can have a
+-- JSON instance.
+newtype RandomSalt = RandomSalt { fromRandomSalt :: T.Text }
+ deriving (Generic, Show, Eq)
+
+instance NFData RandomSalt
+
+-- | Servers should never demand a proof of work that takes longer than
+-- this to generate. Note that if a server changes its mind and doubles
+-- the proof of work, a client counts that cumulatively. So, a server
+-- should avoid any single proof of work requirement taking more than half
+-- this long.
+maxProofOfWork :: Seconds
+maxProofOfWork = Seconds (16*60)
+
+-- | How long it will take to generate a proof of work meeting the
+-- requirement, maximum.
+--
+-- Of course, a client can get lucky and find a value that works
+-- on the very first try. On average, the client will need to work for half
+-- as long as the returned number of Seconds.
+generationTime :: ProofOfWorkRequirement -> Seconds
+generationTime req =
+ let UseArgon2 (CPUCost (Seconds s) _) _ = proofOfWorkHashTunable (addedArgon2Iterations req)
+ in Seconds ((2^(leadingZeros req)) * s)
+
+mkProofOfWorkRequirement :: Seconds -> Maybe (RequestID -> ProofOfWorkRequirement)
+mkProofOfWorkRequirement (Seconds n)
+ | lz < 1 || n <= 1 = Nothing
+ | otherwise = Just $ ProofOfWorkRequirement lz its
+ where
+ lz = floor (logBase 2 (fromRational (max 1 (n / s))) :: Double)
+ UseArgon2 (CPUCost (Seconds s) _) _ = proofOfWorkHashTunable its
+ its = 0
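+
+-- Illustrative example (the 4-second figure is an assumption, not from the
+-- original source): if one hash with proofOfWorkHashTunable 0 costs
+-- s = 4 seconds, mkProofOfWorkRequirement (Seconds 60) picks
+-- leadingZeros = floor (logBase 2 (60 / 4)) = 3, and generationTime of the
+-- resulting requirement is 2^3 * 4 = 32 seconds in the worst case.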
+
+newtype RequestIDSecret = RequestIDSecret (Raaz.Key (Raaz.HMAC Raaz.SHA256))
+
+newRequestIDSecret :: IO RequestIDSecret
+newRequestIDSecret = do
+ prg <- Raaz.newPRG () :: IO Raaz.SystemPRG
+ RequestIDSecret <$> Raaz.random prg
+
+mkRequestID :: RequestIDSecret -> IO RequestID
+mkRequestID secret = mkRequeestID' secret <$> mkRandomSalt
+
+mkRequeestID' :: RequestIDSecret -> RandomSalt -> RequestID
+mkRequeestID' (RequestIDSecret key) salt =
+ let hmac = Raaz.hmacSha256 key (encodeUtf8 $ fromRandomSalt salt)
+ in RequestID salt (T.pack (showBase16 hmac))
+
+validRequestID :: RequestIDSecret -> RequestID -> Bool
+validRequestID secret rid =
+ let rid' = mkRequeestID' secret (randomSalt rid)
+ in requestHMAC rid == requestHMAC rid'
+
+mkRandomSalt :: IO RandomSalt
+mkRandomSalt = do
+ prg <- Raaz.newPRG () :: IO Raaz.SystemPRG
+ rs <- replicateM 16 (Raaz.random prg :: IO Word8)
+ return $ RandomSalt $ T.pack $ concatMap show rs
+
+class POWIdent p where
+ getPOWIdent :: p -> B.ByteString
+
+instance POWIdent StorableObjectIdent where
+ getPOWIdent (StorableObjectIdent i) = i
+
+data NoPOWIdent = NoPOWIdent
+
+instance POWIdent NoPOWIdent where
+ getPOWIdent NoPOWIdent = B.empty
+
+instance POWIdent Int where
+ getPOWIdent = encodeUtf8 . T.pack . show
+
+-- Note that this does not check validRequestID.
+isValidProofOfWork :: POWIdent p => ProofOfWork -> ProofOfWorkRequirement -> p -> Bool
+isValidProofOfWork (ProofOfWork pow rid) req p = samerequestids && enoughzeros
+ where
+ samerequestids = rid == requestID req
+ enoughzeros = all (== False) (take (leadingZeros req) (setBits b))
+ tunable = proofOfWorkHashTunable (addedArgon2Iterations req)
+ salt = Salt $ POWSalt $
+ encodeUtf8 (fromRandomSalt (randomSalt (requestID req))) <> pow
+ ExpensiveHash _ hash = expensiveHash tunable salt (getPOWIdent p)
+ -- Since expensiveHash generates an ascii encoded hash that
+ -- includes the parameters, take the sha256 of it to get the
+	-- bytestring that is what's checked for the necessary number
+ -- of leading 0 bits.
+ b = Raaz.toByteString $ Raaz.sha256 $ encodeUtf8 hash
+
+setBits :: B.ByteString -> [Bool]
+setBits = concatMap go . B.unpack
+ where
+ go byte = map (uncurry testBit) (zip (repeat byte) [0..7])
+
+newtype POWSalt = POWSalt B.ByteString
+
+instance Encodable POWSalt where
+ toByteString (POWSalt n) = n
+ fromByteString = Just . POWSalt
+
+genProofOfWork :: POWIdent p => ProofOfWorkRequirement -> p -> ProofOfWork
+genProofOfWork req p = go allByteStrings
+ where
+ go [] = error "failed to generate Proof Of Work. This should be impossible!"
+ go (b:bs)
+ | isValidProofOfWork candidate req p = candidate
+ | otherwise = go bs
+ where
+ candidate = ProofOfWork b (requestID req)
diff --git a/HTTP/RateLimit.hs b/HTTP/RateLimit.hs
new file mode 100644
index 0000000..591c918
--- /dev/null
+++ b/HTTP/RateLimit.hs
@@ -0,0 +1,419 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module HTTP.RateLimit where
+
+import Types.Cost
+import HTTP
+import HTTP.ProofOfWork
+import HTTP.Logger
+import Tunables
+import CmdLine (ServerConfig(..))
+import Types.Storage
+import Storage.Local
+import Servant
+import Control.Concurrent
+import Control.Concurrent.STM
+import Control.Concurrent.TokenBucket
+import qualified Control.Concurrent.FairRWLock as FairRWLock
+import Control.Concurrent.Thread.Delay
+import qualified Data.BloomFilter.Mutable as BloomFilter
+import qualified Data.BloomFilter.Hash as BloomFilter
+import Data.BloomFilter.Easy (suggestSizing)
+import Control.Monad
+import Control.Monad.ST
+import Control.Exception.Lifted (bracket)
+import System.DiskSpace
+import Data.Maybe
+import Data.Word
+import Control.Monad.IO.Class
+
+-- | A rate limiter is a series of buckets. Each bucket has a
+-- successively more difficult proof of work access requirement.
+--
+-- To guard against DOS attacks that reuse the same proof of work,
+-- bloom filters keep track of RequestIDs that have been used before.
+data RateLimiter = RateLimiter
+ { buckets :: TMVar [Bucket]
+ , unusedBuckets :: TMVar [Bucket]
+ , fallbackQueue :: FallbackQueue
+ , usedRequestIDs :: BloomFilter
+ , usedRequestIDsOld :: BloomFilter
+ , numUsedRequestIDs :: TMVar Int
+ , requestIDSecret :: RequestIDSecret
+ , requestCounter :: TMVar Integer
+ }
+
+type BloomFilter = TMVar (BloomFilter.MBloom RealWorld RequestID)
+
+-- | Buckets fill up at a fixed rate, and accessing a bucket
+-- removes one unit from it.
+data Bucket = Bucket
+ { tokenBucket :: TokenBucket
+ , proofOfWorkRequired :: Seconds
+ , fillInterval :: Word64
+ }
+
+minFillInterval :: Word64
+minFillInterval = 2 * 60 * 1000000 -- 1 token every other minute
+
+-- | Size of the bucket. This allows a burst of accesses after an idle
+-- period, which is especially useful when retrieving keys that were
+-- split into multiple chunks. However, setting this too high lets clients
+-- cheaply store lots of data on a server that has been idle for a while,
+-- which could be an attractive way to abuse keysafe servers.
+burstSize :: Word64
+burstSize = 4 -- 256 kb immediate storage
+
+newRateLimiter :: ServerConfig -> Maybe LocalStorageDirectory -> Logger -> IO RateLimiter
+newRateLimiter cfg storedir logger = do
+ rl <- RateLimiter
+ <$> (newTMVarIO =<< mkbuckets (sdiv maxProofOfWork 2) [])
+ <*> newTMVarIO []
+ <*> newFallbackQueue
+ <*> mkBloomFilter
+ <*> mkBloomFilter
+ <*> newTMVarIO 0
+ <*> newRequestIDSecret
+ <*> newTMVarIO 0
+ _ <- forkIO (adjusterThread cfg storedir rl logger)
+ return rl
+ where
+ -- The last bucket takes half of maxProofOfWork to access, and
+ -- each earlier bucket quarters that time, down to the first bucket,
+ -- which needs no proof of work. This ensures that in the edge case
+ -- where a client keeps getting bumped up to more and more expensive
+ -- buckets, it doesn't need to do more than maxProofOfWork total work.
+ mkbuckets s@(Seconds n) bs
+ | n <= 0 = finalbucket bs
+ | otherwise = do
+ case mkProofOfWorkRequirement s of
+ Nothing -> finalbucket bs
+ Just _ -> do
+ b <- Bucket
+ <$> newTokenBucket
+ <*> pure s
+ <*> pure minFillInterval
+ mkbuckets (sdiv s 4) (b:bs)
+ finalbucket bs = do
+ b <- Bucket
+ <$> newTokenBucket
+ <*> pure (Seconds 0)
+ <*> pure minFillInterval
+ return (b:bs)
+
+ sdiv (Seconds n) d = Seconds (n / d)
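+
+-- Illustrative ladder (not part of the original source): with maxProofOfWork
+-- of 960 seconds, mkbuckets above yields buckets whose proof-of-work
+-- requirements are roughly 0, ..., 7.5, 30, 120 and 480 seconds, cheapest
+-- first; the quartering stops once mkProofOfWorkRequirement deems the
+-- remaining time too small to be worth requiring.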
+
+mkBloomFilter :: IO BloomFilter
+mkBloomFilter = do
+ b <- stToIO $ BloomFilter.new (BloomFilter.cheapHashes bloomhashes) bloomsize
+ newTMVarIO b
+ where
+ -- Size the bloom filter to hold 1 million items, with a false
+ -- positive rate of 1 in 100 thousand. This will use around 32 mb
+ -- of memory.
+ (bloomsize, bloomhashes) = suggestSizing bloomMaxSize (1/100000)
+
+-- | Maximum number of RequestIDs that can be stored in a bloom filter
+-- without the false positive rate getting bad.
+bloomMaxSize :: Int
+bloomMaxSize = 1000000
+
+-- A request is tried against each bucket in turn that its proof of work
+-- allows access to, until one is found that accepts it.
+rateLimit :: POWIdent p => RateLimiter -> Logger -> Maybe ProofOfWork -> p -> Handler a -> Handler (POWGuarded a)
+rateLimit ratelimiter logger mpow p a = do
+ bs <- getBuckets ratelimiter
+ validrequest <- liftIO $ checkValidRequestID ratelimiter logger mpow
+ if validrequest
+ then go bs
+ else assignWork ratelimiter bs
+ where
+ go [] = fallback ratelimiter logger a
+ go (b:bs) = case mkProofOfWorkRequirement (proofOfWorkRequired b) of
+ Nothing -> checkbucket b bs
+ Just mkreq -> case mpow of
+ Nothing -> assignWork ratelimiter (b:bs)
+ Just pow@(ProofOfWork _ rid) ->
+ if isValidProofOfWork pow (mkreq rid) p
+ then checkbucket b bs
+ else assignWork ratelimiter (b:bs)
+ checkbucket b bs = do
+ allowed <- liftIO $ tokenBucketTryAlloc (tokenBucket b)
+ burstSize (fillInterval b) 1
+ if allowed
+ then allowRequest ratelimiter a
+ else go bs
+
+checkValidRequestID :: RateLimiter -> Logger -> Maybe ProofOfWork -> IO Bool
+checkValidRequestID _ _ Nothing = return True
+checkValidRequestID rl logger (Just (ProofOfWork _ rid))
+ | validRequestID (requestIDSecret rl) rid = do
+ used <- iselem usedRequestIDs
+ oldused <- iselem usedRequestIDsOld
+ if not used && not oldused
+ then do
+ withBloomFilter rl usedRequestIDs
+ (`BloomFilter.insert` rid)
+ checkbloomsize
+ return True
+ else return False
+ | otherwise = return False
+ where
+ iselem f = withBloomFilter rl f (BloomFilter.elem rid)
+
+ checkbloomsize = do
+ needrot <- atomically $ do
+ n <- takeTMVar (numUsedRequestIDs rl)
+ if n > bloomMaxSize `div` 2
+ then return (Just n)
+ else do
+ putTMVar (numUsedRequestIDs rl) (n+1)
+ return Nothing
+ handlerotation needrot
+
+ handlerotation Nothing = return ()
+ handlerotation (Just n) = do
+ logStderr logger $ "rotating bloom filters after processing " ++ show n ++ " requests"
+ newused <- mkBloomFilter
+ atomically $ do
+ oldused <- takeTMVar (usedRequestIDs rl)
+ putTMVar (usedRequestIDsOld rl) oldused
+ putTMVar (usedRequestIDs rl) =<< takeTMVar newused
+ putTMVar (numUsedRequestIDs rl) 0
+
+assignWork :: RateLimiter -> [Bucket] -> Handler (POWGuarded a)
+assignWork ratelimiter bs =
+ case mapMaybe (mkProofOfWorkRequirement . proofOfWorkRequired) bs of
+ [] -> throwError err404
+ (mkreq:_) -> do
+ rid <- liftIO $ mkRequestID $ requestIDSecret ratelimiter
+ return $ NeedProofOfWork $ mkreq rid
+
+withBloomFilter
+ :: RateLimiter
+ -> (RateLimiter -> BloomFilter)
+ -> (BloomFilter.MBloom RealWorld RequestID -> ST RealWorld a)
+ -> IO a
+withBloomFilter rl field a = do
+ b <- atomically $ readTMVar (field rl)
+ stToIO (a b)
+
+getBuckets :: MonadIO m => RateLimiter -> m [Bucket]
+getBuckets = liftIO . atomically . readTMVar . buckets
+
+putBuckets :: MonadIO m => RateLimiter -> [Bucket] -> m ()
+putBuckets rl bs = liftIO $ atomically $ do
+ _ <- takeTMVar (buckets rl)
+ putTMVar (buckets rl) bs
+
+-- The fallback queue is used when a client has provided a good enough
+-- proof of work to access all buckets, but all are empty.
+--
+-- Only a limited number of requests can be in the queue, since they take
+-- up server memory while blocked, and since too large a queue would stall
+-- requests for too long.
+--
+-- Once in the queue, requests are run in FIFO order.
+--
+-- A separate bucket is used to rate limit requests in the fallback queue,
+-- so requests in the queue do not need to contend with requests not in the
+-- queue.
+data FallbackQueue = FallbackQueue
+ { fallbackBucket :: TokenBucket
+ , blockedRequestLock :: FairRWLock.RWLock
+ , fallbackQueueSlots :: TMVar Int
+ }
+
+newFallbackQueue :: IO FallbackQueue
+newFallbackQueue = FallbackQueue
+ <$> newTokenBucket
+ <*> FairRWLock.new
+ <*> newTMVarIO 100
+
+fallback :: RateLimiter -> Logger -> Handler a -> Handler (POWGuarded a)
+fallback ratelimiter logger a =
+ bracket (liftIO addq) (liftIO . removeq) go
+ where
+ q = fallbackQueueSlots (fallbackQueue ratelimiter)
+
+ addq = liftIO $ atomically $ do
+ n <- takeTMVar q
+ if n <= 0
+ then do
+ putTMVar q n
+ return False
+ else do
+ putTMVar q (n-1)
+ return True
+
+ removeq False = return ()
+ removeq True = liftIO $ atomically $ do
+ n <- takeTMVar q
+ putTMVar q (n+1)
+
+ -- tokenBucketWait is not fair, so use the blockedRequestLock
+ -- to get fair FIFO ordering.
+ waitbucket = do
+ logStderr logger "** warning: All token buckets are empty. Delaying request.."
+ FairRWLock.withWrite (blockedRequestLock (fallbackQueue ratelimiter)) $ do
+ -- For simplicity, use the same fillInterval as the
+ -- last bucket in the rate limiter for the fallback
+ -- bucket.
+ bs <- getBuckets ratelimiter
+ case reverse bs of
+ (lastb:_) -> tokenBucketWait
+ (fallbackBucket (fallbackQueue ratelimiter))
+ burstSize (fillInterval lastb)
+ [] -> return ()
+ go False = giveup
+ go True = do
+ liftIO waitbucket
+ allowRequest ratelimiter a
+
+ giveup = do
+ liftIO $ logStderr logger "** warning: All token buckets are empty and request queue is large; possible DOS attack? Rejected request.."
+ assignWork ratelimiter =<< getBuckets ratelimiter
+
+-- | How much data could be stored, in bytes per second, assuming all
+-- buckets in the rate limiter being constantly drained by requests,
+-- and all requests store objects.
+maximumStorageRate :: RateLimiter -> IO Integer
+maximumStorageRate ratelimiter = do
+ bs <- getBuckets ratelimiter
+ -- The last bucket is counted a second time, because the fallback
+ -- request queue has its own bucket with the same characteristics
+ -- as this bucket.
+ let fallbackb = take 1 (reverse bs)
+ return $ sum $ map calc (bs ++ fallbackb)
+ where
+ storesize = maximum knownObjectSizes
+ calc b = fromIntegral $
+ (storesize * 1000000) `div` fromIntegral (fillInterval b)
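+
+-- Rough worked example (assumptions, not from the original source): if the
+-- largest known object size is 64 KiB (as the burstSize comment above
+-- suggests) and a bucket fills at the 2-minute minFillInterval, that bucket
+-- contributes about 65536 * 1000000 `div` 120000000 = 546 bytes per second
+-- to the maximum storage rate.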
+
+describeRateLimiter :: RateLimiter -> IO String
+describeRateLimiter ratelimiter = do
+ storerate <- maximumStorageRate ratelimiter
+ bs <- getBuckets ratelimiter
+ return $ concat
+ [ "rate limiter buckets: " ++ show bs
+ , " ; maximum allowed storage rate: "
+ , showBytes (storerate * 60 * 60 * 24 * 31) ++ "/month"
+ ]
+
+showBytes :: Integer -> String
+showBytes n
+ | n <= 1024*1024 = show (n `div` 1024) ++ " KiB"
+ | n <= 1024*1024*1024 = show (n `div` (1024 * 1024)) ++ " MiB"
+ | otherwise = show (n `div` (1024 * 1024 * 1024)) ++ " GiB"
+
+instance Show Bucket where
+	show b = show (fillInterval b `div` (60 * 1000000)) ++ " Minutes/Request"
+ ++ " (PoW=" ++ show (proofOfWorkRequired b) ++ ")"
+
+increaseDifficulty :: Logger -> RateLimiter -> IO ()
+increaseDifficulty logger ratelimiter = do
+ bs <- getBuckets ratelimiter
+ case bs of
+ [] -> unable
+ (b:[])
+ | fillInterval b < maxBound `div` 2 -> do
+ -- Make the remaining bucket take longer to fill.
+ let b' = b { fillInterval = fillInterval b * 2 }
+ putBuckets ratelimiter [b']
+ done
+ | otherwise -> unable
+ (b:rest) -> do
+ -- Remove less expensive to access buckets,
+ -- so that clients have to do some work.
+ -- This is done first to cut off any freeloaders
+ -- that may be abusing the keysafe server.
+ atomically $ do
+ unused <- takeTMVar (unusedBuckets ratelimiter)
+ putTMVar (unusedBuckets ratelimiter) (b:unused)
+ putBuckets ratelimiter rest
+ done
+ where
+ unable = logStderr logger "Unable to increase difficulty any further!"
+ done = do
+ desc <- describeRateLimiter ratelimiter
+ logStdout logger $ "increased difficulty -- " ++ desc
+
+-- Should undo the effect of increaseDifficulty.
+reduceDifficulty :: Logger -> RateLimiter -> IO ()
+reduceDifficulty logger ratelimiter = do
+ bs <- getBuckets ratelimiter
+ case bs of
+ (b:[]) | fillInterval b > minFillInterval -> do
+ let b' = b { fillInterval = fillInterval b `div` 2 }
+ putBuckets ratelimiter [b']
+ done
+ _ -> do
+ mb <- getunused
+ case mb of
+ Nothing -> unable
+ Just b -> do
+ putBuckets ratelimiter (b:bs)
+ done
+ where
+ getunused = atomically $ do
+ unused <- takeTMVar (unusedBuckets ratelimiter)
+ case unused of
+ (b:bs) -> do
+ putTMVar (unusedBuckets ratelimiter) bs
+ return (Just b)
+ [] -> do
+ putTMVar (unusedBuckets ratelimiter) []
+ return Nothing
+ unable = return ()
+ done = do
+ desc <- describeRateLimiter ratelimiter
+ logStdout logger $ "reduced difficulty -- " ++ desc
+
+allowRequest :: RateLimiter -> Handler a -> Handler (POWGuarded a)
+allowRequest ratelimiter a = do
+ liftIO $ addRequest ratelimiter 1
+ Result <$> a
+
+addRequest :: RateLimiter -> Integer -> IO ()
+addRequest ratelimiter n = liftIO $ atomically $ do
+ v <- takeTMVar c
+ putTMVar c (v + n)
+ where
+ c = requestCounter ratelimiter
+
+-- Thread that wakes up periodically and checks the request rate
+-- against the available disk space. If the disk is filling too quickly,
+-- the difficulty is increased.
+adjusterThread :: ServerConfig -> Maybe LocalStorageDirectory -> RateLimiter -> Logger -> IO ()
+adjusterThread cfg storedir ratelimiter logger = forever $ do
+ delay (1000000 * intervalsecs)
+ checkRequestRate cfg storedir ratelimiter logger intervalsecs
+ where
+ intervalsecs = 60*15
+
+checkRequestRate :: ServerConfig -> Maybe LocalStorageDirectory -> RateLimiter -> Logger -> Integer -> IO ()
+checkRequestRate cfg storedir ratelimiter logger intervalsecs = do
+ let storesize = maximum knownObjectSizes
+ n <- liftIO $ atomically $ swapTMVar (requestCounter ratelimiter) 0
+ let maxstoredinterval = n * fromIntegral storesize
+ let maxstoredthismonth = maxstoredinterval * (intervalsecs * 24*31 `div` (60*60))
+ freespace <- diskFree <$> localDiskUsage storedir
+ let target = monthsToFillHalfDisk cfg
+ let estimate = if maxstoredthismonth <= 0
+ then 10000
+ else freespace `div` maxstoredthismonth `div` 2
+ logStdout logger $ unlines
+ [ "rate limit check"
+ , " free disk space: " ++ showBytes freespace
+ , " number of requests since last check: " ++ show n
+ , " estimated max incoming data in the next month: " ++ showBytes maxstoredthismonth
+ , " estimate min " ++ show estimate ++ " months to fill half of disk"
+ ]
+ if estimate > target * 2
+ then reduceDifficulty logger ratelimiter
+ else if estimate < target
+ then increaseDifficulty logger ratelimiter
+ else return ()
diff --git a/HTTP/Server.hs b/HTTP/Server.hs
new file mode 100644
index 0000000..6fd570d
--- /dev/null
+++ b/HTTP/Server.hs
@@ -0,0 +1,125 @@
+{-# LANGUAGE OverloadedStrings #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module HTTP.Server (runServer, serverStorage) where
+
+import HTTP
+import HTTP.ProofOfWork
+import HTTP.RateLimit
+import HTTP.Logger
+import Types
+import Types.Storage
+import Tunables
+import CmdLine (ServerConfig(..))
+import Storage.Local
+import Serialization ()
+import Servant
+import Network.Wai
+import Network.Wai.Handler.Warp
+import Control.Monad.IO.Class
+import Control.Concurrent
+import Control.Concurrent.Thread.Delay
+import Control.Concurrent.STM
+import Data.Maybe
+import Data.String
+import qualified Data.ByteString as B
+
+data ServerState = ServerState
+ { obscurerRequest :: TMVar ()
+ , storage :: Storage
+ , rateLimiter :: RateLimiter
+ , logger :: Logger
+ , serverConfig :: ServerConfig
+ }
+
+newServerState :: Maybe LocalStorageDirectory -> ServerConfig -> IO ServerState
+newServerState d cfg = do
+ l <- newLogger
+ ServerState
+ <$> newEmptyTMVarIO
+ <*> pure (serverStorage d)
+ <*> newRateLimiter cfg d l
+ <*> pure l
+ <*> pure cfg
+
+runServer :: Maybe LocalStorageDirectory -> ServerConfig -> IO ()
+runServer d cfg = do
+ st <- newServerState d cfg
+ _ <- forkIO $ obscurerThread st
+ runSettings settings (app st)
+ where
+ settings = setHost host $ setPort (serverPort cfg) $ defaultSettings
+ host = fromString (serverAddress cfg)
+
+serverStorage :: Maybe LocalStorageDirectory -> Storage
+serverStorage d = localStorage LocallyPreferred (storageDir d) "server"
+
+app :: ServerState -> Application
+app st = serve userAPI (server st)
+
+userAPI :: Proxy HttpAPI
+userAPI = Proxy
+
+server :: ServerState -> Server HttpAPI
+server st = motd st
+ :<|> getObject st
+ :<|> putObject st
+ :<|> countObjects st
+
+motd :: ServerState -> Handler Motd
+motd = return . Motd . fromMaybe "Hello World!" . serverMotd . serverConfig
+
+getObject :: ServerState -> StorableObjectIdent -> Maybe ProofOfWork -> Handler (POWGuarded StorableObject)
+getObject st i pow = rateLimit (rateLimiter st) (logger st) pow i $ do
+ r <- liftIO $ retrieveShare (storage st) dummyShareNum i
+ liftIO $ requestObscure st
+ case r of
+ RetrieveSuccess (Share _n o) -> return o
+ RetrieveFailure _ -> throwError err404
+
+putObject :: ServerState -> StorableObjectIdent -> Maybe ProofOfWork -> StorableObject -> Handler (POWGuarded StoreResult)
+putObject st i pow o = rateLimit (rateLimiter st) (logger st) pow i $ do
+ if validObjectsize o
+ then do
+ r <- liftIO $ storeShare (storage st) i (Share dummyShareNum o)
+ liftIO $ requestObscure st
+ return r
+ else return $ StoreFailure "invalid object size"
+
+validObjectsize :: StorableObject -> Bool
+validObjectsize o = any (sz ==) knownObjectSizes
+ where
+ sz = B.length (fromStorableObject o)
+
+countObjects :: ServerState -> Maybe ProofOfWork -> Handler (POWGuarded CountResult)
+countObjects st pow = rateLimit (rateLimiter st) (logger st) pow NoPOWIdent $ do
+ v <- liftIO $ countShares $ storage st
+ case v of
+ CountResult n -> return $
+ -- Round down to avoid leaking too much detail.
+ CountResult ((n `div` 1000) * 1000)
+ CountFailure s -> return (CountFailure s)
+
+-- | 1 is a dummy value; the server does not know the actual share numbers.
+dummyShareNum :: ShareNum
+dummyShareNum = 1
+
+-- | This thread handles obscuring the shares after put and get operations.
+-- Since obscuring can be an expensive process when there are many shares,
+-- the thread runs a maximum of once per half-hour.
+obscurerThread :: ServerState -> IO ()
+obscurerThread st = do
+ _ <- obscureShares (storage st)
+ logStdout (logger st) "obscured shares"
+ delay (1000000*60*30)
+ _ <- atomically $ takeTMVar (obscurerRequest st)
+ obscurerThread st
+
+requestObscure :: ServerState -> IO ()
+requestObscure st = do
+ _ <- atomically $ tryPutTMVar (obscurerRequest st) ()
+ return ()
diff --git a/INSTALL b/INSTALL
new file mode 100644
index 0000000..e044b1b
--- /dev/null
+++ b/INSTALL
@@ -0,0 +1,50 @@
+## Quick installation
+
+This installs keysafe to ~/.local/bin, and is sufficient to use keysafe
+to back up or restore your private key.
+
+First install Haskell's stack tool, ghc, g++, and the zlib, readline and
+argon2 libraries. Zenity is also needed to use keysafe's GUI.
+For example, on a Debian testing system:
+
+ sudo apt-get install haskell-stack ghc \
+ zlib1g-dev g++ libreadline-dev libargon2-0-dev zenity
+
+If your distribution does not yet include Haskell's stack tool,
+see <http://haskellstack.org/>
+
+If your distribution does not yet include argon2, see
+<https://github.com/P-H-C/phc-winner-argon2>
+
+Then to build and install keysafe, run this in the keysafe directory:
+
+ stack install
+
+Note that there is a manpage, but stack doesn't install it yet.
+
+## System-wide installation
+
+This installs keysafe in /usr/bin, and includes the man page,
+desktop file, autostart file, systemd service file, init script, etc.
+
+Start by installing the dependencies as shown in Quick installation.
+
+Then, in the keysafe directory:
+
+ make
+ sudo make install
+
+## Packaging
+
+You will probably want to use the Makefile.
+Set PREFIX to install to a different location.
+Set BUILDER=cabal to build with cabal rather than the default stack.
+
+The make install target creates a keysafe user. Use the install-files
+target to avoid doing that at package build time. You may create the
+keysafe user at package install time instead, although it is only used
+by the keysafe server.
+
+While keysafe ships with a systemd service file and an init script,
+distributions should not enable the server to start by default.
+(Alternatively, the server can be split out into its own keysafe-server package.)
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..58def76
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,66 @@
+# The install target will add this before all paths it writes to.
+PREFIX?=
+
+# Can be "stack" or "cabal", or "./Setup" to build and use Setup.hs
+BUILDER?=stack
+
+# Options to pass to the BUILDER.
+# Using -j1 may result in a reproducible build.
+BUILDEROPTIONS?=
+
+# Propagate flags through ghc to the linker and compiler.
+ghc_options=$(shell \
+ for w in $(LDFLAGS); do \
+ printf -- "-optl%s\n" "$$w"; \
+ done; \
+ for w in $(CFLAGS); do \
+ printf -- "-optc%s\n" "$$w"; \
+ done; \
+ for w in $(CPPFLAGS); do \
+ printf -- "-optc-Wp,%s\n" "$$w"; \
+ done; \
+ )
+
+build:
+ rm -f keysafe
+ $(MAKE) keysafe
+
+keysafe:
+ if [ "$(BUILDER)" = ./Setup ]; then ghc --make Setup; fi
+ if [ "$(BUILDER)" = stack ]; then \
+ $(BUILDER) build --ghc-options="$(ghc_options)" $(BUILDEROPTIONS); \
+ else \
+ $(BUILDER) configure --ghc-options="$(ghc_options)"; \
+ $(BUILDER) build $(BUILDEROPTIONS); \
+ fi
+ if [ "$(BUILDER)" = stack ]; then \
+ ln -sf $$(find .stack-work/ -name keysafe -type f | grep build/keysafe/keysafe | tail -n 1) keysafe; \
+ else \
+ ln -sf dist/build/keysafe/keysafe keysafe; \
+ fi
+
+clean:
+ if [ "$(BUILDER)" != ./Setup ] && [ "$(BUILDER)" != cabal ]; then $(BUILDER) clean; fi
+ rm -rf keysafe dist .stack-work Setup Setup.hi Setup.o
+
+install: install-files
+ useradd --system keysafe
+ chmod 700 $(PREFIX)/var/lib/keysafe
+ chown keysafe:keysafe $(PREFIX)/var/lib/keysafe
+
+install-files: keysafe
+ install -d $(PREFIX)/var/lib/keysafe
+ install -d $(PREFIX)/usr/bin
+ install -s -m 0755 keysafe $(PREFIX)/usr/bin/keysafe
+ install -d $(PREFIX)/usr/share/man/man1
+ install -m 0644 keysafe.1 $(PREFIX)/usr/share/man/man1/keysafe.1
+ install -d $(PREFIX)/lib/systemd/system
+ install -m 0644 keysafe.service $(PREFIX)/lib/systemd/system/keysafe.service
+ install -d $(PREFIX)/etc/init.d
+ install -m 0755 keysafe.init $(PREFIX)/etc/init.d/keysafe
+ install -d $(PREFIX)/etc/default
+ install -m 0644 keysafe.default $(PREFIX)/etc/default/keysafe
+ install -d $(PREFIX)/usr/share/applications/
+ install -m 0644 keysafe.desktop $(PREFIX)/usr/share/applications/keysafe.desktop
+ install -d $(PREFIX)/etc/xdg/autostart/
+ install -m 0644 keysafe.autostart $(PREFIX)/etc/xdg/autostart/keysafe.desktop
diff --git a/Output.hs b/Output.hs
new file mode 100644
index 0000000..f655d0a
--- /dev/null
+++ b/Output.hs
@@ -0,0 +1,33 @@
+-- All console output in keysafe should go via this module;
+-- avoid using putStrLn, print, etc directly.
+
+module Output (ask, progress, say, warn, display) where
+
+import System.IO
+import Data.Char
+
+ask :: String -> IO ()
+ask s = do
+ putStr (escape s)
+ hFlush stdout
+
+progress :: String -> IO ()
+progress = ask
+
+say :: String -> IO ()
+say = putStrLn . escape
+
+warn :: String -> IO ()
+warn = hPutStrLn stderr . escape
+
+display :: Show s => s -> IO ()
+display = say . show
+
+-- | Prevent malicious escape sequences etc in a string
+-- from being output to the console.
+escape :: String -> String
+escape = concatMap go
+ where
+ go c = if isPrint c || isSpace c
+ then [c]
+ else "\\" ++ show (ord c)
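+
+-- A hypothetical ghci session illustrating escape (the inputs are made up;
+-- results are shown using ghci's Show output for String):
+--
+-- >>> escape "hello\ESC[2Jworld"
+-- "hello\\27[2Jworld"
+-- >>> escape "tabs\tand newlines\n"
+-- "tabs\tand newlines\n"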
diff --git a/SecretKey.hs b/SecretKey.hs
new file mode 100644
index 0000000..8dc2ada
--- /dev/null
+++ b/SecretKey.hs
@@ -0,0 +1,31 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module SecretKey where
+
+import Types
+import Share
+import qualified Gpg
+import qualified Data.ByteString as B
+import System.IO
+import System.Posix.IO
+
+getSecretKey :: SecretKeySource -> IO (SecretKeySource, SecretKey)
+getSecretKey sks = do
+ sk <- case sks of
+ GpgKey kid -> Gpg.getSecretKey kid
+ KeyFile f -> SecretKey <$> B.readFile f
+ return (sks, sk)
+
+-- | Can throw an exception if the secret key already exists.
+writeSecretKey :: Distinguisher -> SecretKey -> IO ()
+writeSecretKey (Distinguisher (GpgKey _)) secretkey = Gpg.writeSecretKey secretkey
+writeSecretKey AnyGpgKey secretkey = Gpg.writeSecretKey secretkey
+writeSecretKey (Distinguisher (KeyFile f)) (SecretKey b) = do
+ fd <- openFd f WriteOnly (Just 0o666)
+ (defaultFileFlags { exclusive = True } )
+ h <- fdToHandle fd
+ B.hPut h b
+ hClose h
diff --git a/Serialization.hs b/Serialization.hs
new file mode 100644
index 0000000..9803d71
--- /dev/null
+++ b/Serialization.hs
@@ -0,0 +1,55 @@
+{-# OPTIONS_GHC -fno-warn-orphans #-}
+{-# LANGUAGE OverloadedStrings #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Serialization where
+
+import Types
+import Raaz.Core.Encode
+import qualified Data.ByteString as B
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+import Data.Monoid
+import Data.Word
+
+-- | A SecretKeySource is serialized in the form "keytype value".
+-- For example "gpg C910D9222512E3C7", or "file path".
+instance Encodable SecretKeySource where
+ toByteString (GpgKey (KeyId b)) =
+ "gpg" <> B.singleton sepChar <> BU8.fromString (T.unpack b)
+ toByteString (KeyFile f) =
+ "file" <> B.singleton sepChar <> BU8.fromString f
+ fromByteString b = case B.break (== sepChar) b of
+ (t, rest)
+ | B.null rest -> Nothing
+ | otherwise ->
+ let i = B.drop 1 rest
+ in case t of
+ "gpg" -> Just $ GpgKey (KeyId (T.pack (BU8.toString i)))
+ "file" -> Just $ KeyFile (BU8.toString i)
+ _ -> Nothing
+
+instance Encodable Name where
+ toByteString (Name n) = n
+ fromByteString = Just . Name
+
+instance Encodable StorableObjectIdent where
+ toByteString (StorableObjectIdent i) = i
+ fromByteString = Just . StorableObjectIdent
+
+instance Encodable StorableObject where
+ toByteString (StorableObject b) = b
+ fromByteString = Just . StorableObject
+
+-- | A share is serialized without its share number. This prevents
+-- an attacker from partitioning their shares by share number.
+instance Encodable Share where
+ toByteString (Share _n o) = toByteString o
+ fromByteString _ = Nothing
+
+sepChar :: Word8
+sepChar = 32
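+
+-- A hypothetical ghci session illustrating the SecretKeySource instance
+-- above (the keyid and path are example values only; assumes
+-- OverloadedStrings for the ByteString literals):
+--
+-- >>> toByteString (GpgKey (KeyId "C910D9222512E3C7"))
+-- "gpg C910D9222512E3C7"
+-- >>> fromByteString "file /home/user/key" :: Maybe SecretKeySource
+-- Just (KeyFile "/home/user/key")
+-- >>> fromByteString "nosep" :: Maybe SecretKeySource
+-- Nothing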
diff --git a/ServerBackup.hs b/ServerBackup.hs
new file mode 100644
index 0000000..1c6b6a9
--- /dev/null
+++ b/ServerBackup.hs
@@ -0,0 +1,62 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module ServerBackup where
+
+import Storage
+import Storage.Local
+import HTTP.Server
+import System.Process
+import System.FilePath
+import System.Directory
+import Data.List
+import Control.Monad
+import Data.Time.Clock.POSIX
+
+-- | Storing all shards in one gpg encrypted file avoids a problem with
+-- modern incremental backup programs such as obnam: Access to an obnam
+-- repository allows one to see the date when a chunk first enters the
+-- repository, which can allow dating when objects were first stored
+-- in keysafe, and so help in a correlation attack.
+--
+-- Of course, it's not at all efficient for offsite backups!
+backupServer :: Maybe LocalStorageDirectory -> FilePath -> IO ()
+backupServer lsd d = do
+ let storage = serverStorage lsd
+ _ <- obscureShares storage
+ topdir <- storageTopDir lsd
+ createDirectoryIfMissing True d
+ dest <- backupFile d <$> getPOSIXTime
+ callCommand ("tar -C " ++ topdir ++ " -c . | gpg --encrypt --default-recipient-self > " ++ dest)
+	-- Keep the past 7 backup files; if an object file somehow
+	-- gets deleted, this avoids the backup losing it too.
+ -- These backup files can be used to determine eg, what day
+ -- chunks were uploaded to the server, which is why only a few
+ -- are kept.
+ pruneOldBackups d 7
+
+restoreServer :: Maybe LocalStorageDirectory -> FilePath -> IO ()
+restoreServer lsd d = do
+ topdir <- storageTopDir lsd
+ bs <- findBackups d
+ forM_ bs $ \b ->
+ callCommand ("gpg --decrypt " ++ b ++ " | tar -C " ++ topdir ++ " -x")
+ let storage = serverStorage lsd
+ _ <- obscureShares storage
+ return ()
+
+findBackups :: FilePath -> IO [FilePath]
+findBackups d = map (d </>) . filter isBackup <$> getDirectoryContents d
+
+pruneOldBackups :: FilePath -> Int -> IO ()
+pruneOldBackups d keep = do
+ fs <- findBackups d
+ mapM_ removeFile (drop keep (reverse (sort fs)))
+
+isBackup :: FilePath -> Bool
+isBackup f = "keysafe-backup" `isPrefixOf` f
+
+backupFile :: FilePath -> POSIXTime -> FilePath
+backupFile d t = d </> "keysafe-backup." ++ show t ++ ".gpg"
diff --git a/Servers.hs b/Servers.hs
new file mode 100644
index 0000000..ab31838
--- /dev/null
+++ b/Servers.hs
@@ -0,0 +1,36 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Servers where
+
+import Types.Server
+import Types.Storage
+import Storage.Network
+
+-- | Keysafe's server list.
+--
+-- Note: Avoid removing servers from this list, as that will break
+-- restores. If necessary, a server can be set to Untrusted to prevent
+-- uploads to it.
+--
+-- Also, avoid changing the ServerName of any server, as that will
+-- cause any uploads queued under that name to not go through.
+serverList :: Maybe LocalStorageDirectory -> [Storage]
+serverList d =
+ [ mk Alternate $ Server (ServerName "keysafe.joeyh.name")
+ [ServerAddress "vzgrspuxbtnlrtup.onion" 4242]
+ "Provided by Joey Hess. Digital Ocean VPS, located in Indonesia"
+
+ , mk Alternate $ Server (ServerName "keysafe.puri.sm")
+ []
+ "Purism server is not yet deployed, but planned."
+
+ -- still being vetted
+ , mk Alternate $ Server (ServerName "thirdserver")
+ [ServerAddress "eqi7glyxe5ravak5.onion" 4242]
+ "Provided by Marek Isalski at Faelix. Currently located in UK, but planned move to CH"
+ ]
+ where
+ mk l s = networkStorage l d s
diff --git a/Setup.hs b/Setup.hs
new file mode 100644
index 0000000..d26c7ed
--- /dev/null
+++ b/Setup.hs
@@ -0,0 +1,30 @@
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+import Distribution.Simple
+import Distribution.Simple.LocalBuildInfo
+import Distribution.Simple.Setup
+import Distribution.Simple.Utils (installOrdinaryFiles, rawSystemExit)
+import Distribution.PackageDescription (PackageDescription(..))
+import Distribution.Verbosity (Verbosity)
+import System.Info
+import System.FilePath
+
+main :: IO ()
+main = defaultMainWithHooks simpleUserHooks
+ { postCopy = myPostCopy
+ }
+
+myPostCopy :: Args -> CopyFlags -> PackageDescription -> LocalBuildInfo -> IO ()
+myPostCopy _ flags pkg lbi = if System.Info.os /= "mingw32"
+ then installManpages dest verbosity pkg lbi
+ else return ()
+ where
+ dest = fromFlag $ copyDest flags
+ verbosity = fromFlag $ copyVerbosity flags
+
+{- See http://www.haskell.org/haskellwiki/Cabal/Developer-FAQ#Installing_manpages -}
+installManpages :: CopyDest -> Verbosity -> PackageDescription -> LocalBuildInfo -> IO ()
+installManpages copyDest verbosity pkg lbi =
+ installOrdinaryFiles verbosity dstManDir [(".", "keysafe.1")]
+ where
+ dstManDir = mandir (absoluteInstallDirs pkg lbi copyDest) </> "man1"
diff --git a/Share.hs b/Share.hs
new file mode 100644
index 0000000..2d848b9
--- /dev/null
+++ b/Share.hs
@@ -0,0 +1,136 @@
+{-# LANGUAGE OverloadedStrings, MultiParamTypeClasses #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Share where
+
+import Types
+import Tunables
+import ExpensiveHash
+import Cost
+import qualified Crypto.SecretSharing.Internal as SS
+import qualified Data.ByteString as B
+import qualified Data.ByteString.Lazy as BL
+import qualified Raaz.Core.Encode as Raaz
+import qualified Raaz.Hash.Sha256 as Raaz
+import qualified Data.Text as T
+import qualified Data.Text.Encoding as E
+import qualified Data.Set as S
+import Data.Word
+import Data.Monoid
+
+data ShareIdents = ShareIdents
+ { identsStream :: [S.Set StorableObjectIdent]
+ -- ^ Each item in the infinite list is the idents to
+ -- use for the shares of a chunk of data.
+ , identsCreationCost :: Cost CreationOp
+ , identsBruteForceCalc :: CostCalc BruteForceOp UnknownName
+ }
+
+nextShareIdents :: ShareIdents -> (S.Set StorableObjectIdent, ShareIdents)
+nextShareIdents sis =
+ let (s:rest) = identsStream sis
+ in (s, sis { identsStream = rest })
+
+instance HasCreationCost ShareIdents where
+ getCreationCost = identsCreationCost
+
+instance Bruteforceable ShareIdents UnknownName where
+ getBruteCostCalc = identsBruteForceCalc
+
+data Distinguisher
+ = Distinguisher SecretKeySource
+ | AnyGpgKey
+ -- ^ Use to avoid the gpg keyid needing to be provided
+ -- at restore time.
+ deriving (Eq)
+
+-- | Generates identifiers to use for storing shares.
+--
+-- This is an expensive operation, to make it difficult for an attacker
+-- to brute force known/guessed names and find matching shares.
+-- The keyid or filename is used as a salt, to avoid collisions
+-- when the same name is chosen for multiple keys.
+shareIdents :: Tunables -> Name -> Distinguisher -> ShareIdents
+shareIdents tunables (Name name) shareident =
+ ShareIdents (segmentbyshare idents) creationcost bruteforcecalc
+ where
+ (ExpensiveHash creationcost basename) =
+ expensiveHash hashtunables salt name
+ salt = case shareident of
+ Distinguisher sks -> Salt sks
+ AnyGpgKey -> Salt (GpgKey (KeyId ""))
+ mk n = StorableObjectIdent $ Raaz.toByteString $ mksha $
+ E.encodeUtf8 $ basename <> T.pack (show n)
+ mksha :: B.ByteString -> Raaz.Base16
+ mksha = Raaz.encode . Raaz.sha256
+ bruteforcecalc = bruteForceLinearSearch creationcost
+ hashtunables = nameGenerationHash $ nameGenerationTunable tunables
+ idents = map mk ([1..] :: [Integer])
+ m = totalObjects (shareParams tunables)
+ segmentbyshare l =
+ let (shareis, l') = splitAt m l
+ in S.fromList shareis : segmentbyshare l'
+
+-- | Generates shares of an EncryptedSecretKey.
+-- Each chunk of the key creates its own set of shares.
+genShares :: EncryptedSecretKey -> Tunables -> IO [S.Set Share]
+genShares (EncryptedSecretKey cs _) tunables = do
+ shares <- mapM encode cs
+ return $ map (S.fromList . map (uncurry Share) . zip [1..]) shares
+ where
+ encode :: B.ByteString -> IO [StorableObject]
+ encode b = map (StorableObject . encodeShare)
+ <$> SS.encode
+ (neededObjects $ shareParams tunables)
+ (totalObjects $ shareParams tunables)
+ (BL.fromStrict b)
+
+-- | If not enough sets of shares are provided, the EncryptedSecretKey may
+-- be incomplete, only containing some chunks of the key
+combineShares :: Tunables -> [S.Set Share] -> Either String EncryptedSecretKey
+combineShares tunables shares
+ | null shares || any null shares || any (\l -> length l < sharesneeded) shares =
+ Left "Not enough shares are currently available to reconstruct your data."
+ | otherwise = Right $ mk $
+ map (BL.toStrict . SS.decode . map decodeshare . S.toList) shares
+ where
+ mk cs = EncryptedSecretKey cs unknownCostCalc
+ decodeshare (Share sharenum so) = decodeShare sharenum sharesneeded $
+ fromStorableObject so
+ sharesneeded = neededObjects (shareParams tunables)
+
+-- Note that this does not include the share number in the encoded
+-- bytestring. This prevents an attacker from partitioning their shares
+-- by share number.
+encodeShare :: SS.Share -> B.ByteString
+encodeShare = B.pack . concatMap (encodeShare' . SS.shareValue) . SS.theShare
+
+decodeShare :: Int -> Int -> B.ByteString -> SS.Share
+decodeShare sharenum sharesneeded = SS.Share . map mk . decodeShare' . B.unpack
+ where
+ mk v = SS.ByteShare
+ { SS.shareId = sharenum
+ , SS.reconstructionThreshold = sharesneeded
+ , SS.shareValue = v
+ }
+
+-- | Each input byte generates a share value in a finite field of size 1021,
+-- so encode each value in two bytes. This is inefficient; if the
+-- finite field had size 255, the encoded share would be the same size as
+-- the input. But the finite-field library used by secret-sharing does
+-- not support a non-prime field size.
+encodeShare' :: Int -> [Word8]
+encodeShare' v =
+ let (q, r) = quotRem v 255
+ in [fromIntegral q, fromIntegral r]
+
+decodeShare' :: [Word8] -> [Int]
+decodeShare' = go []
+ where
+ go c [] = reverse c
+ go c (q:r:rest) = go (((255 * fromIntegral q) + fromIntegral r):c) rest
+ go _ _ = error "Badly encoded share has odd number of bytes"
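+
+-- A hypothetical ghci session illustrating the two-byte encoding above
+-- (values chosen only for illustration):
+--
+-- >>> encodeShare' 1000    -- quotRem 1000 255 == (3,235)
+-- [3,235]
+-- >>> decodeShare' [3,235] -- 255*3 + 235 == 1000
+-- [1000]
+-- >>> decodeShare' (encodeShare' 254 ++ encodeShare' 1020)
+-- [254,1020]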
diff --git a/Storage.hs b/Storage.hs
new file mode 100644
index 0000000..c481d77
--- /dev/null
+++ b/Storage.hs
@@ -0,0 +1,208 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE OverloadedStrings #-}
+
+module Storage (module Storage, module Types.Storage) where
+
+import Types
+import Types.Storage
+import Types.Server
+import Types.Cost
+import Output
+import Share
+import Storage.Network
+import Servers
+import Tunables
+import ByteStrings
+import Data.Maybe
+import Data.List
+import Data.Monoid
+import Control.Monad
+import Control.Concurrent.Thread.Delay
+import Control.Concurrent.Async
+import qualified Data.Set as S
+import System.Random
+import System.Random.Shuffle
+import qualified Raaz
+
+networkStorageLocations :: Maybe LocalStorageDirectory -> StorageLocations
+networkStorageLocations = StorageLocations . serverList
+
+type UpdateProgress = IO ()
+
+data StorageProblem
+ = FatalProblem String
+ | OverridableProblem String
+ deriving (Show)
+
+-- | Check if there is a problem with storing shares among the provided
+-- storage locations, assuming that some random set of the storage
+-- locations will be used.
+--
+-- It's always a problem to store anything on an Untrusted server.
+--
+-- It should not be possible to reconstruct the encrypted
+-- secret key using only objects from Alternate servers, so
+-- fewer than neededObjects Alternate servers can be used.
+problemStoringIn :: StorageLocations -> Tunables -> Maybe StorageProblem
+problemStoringIn (StorageLocations locs) tunables
+ | not (null (getlevel Untrusted)) || length locs < totalObjects ps =
+ Just $ FatalProblem
+ "Not enough servers are available to store your encrypted secret key."
+ | length alternates >= neededObjects ps = Just $ OverridableProblem $ unlines $
+ [ "Not enough keysafe servers are available that can store"
+ , "your encrypted secret key with a recommended level of"
+ , "security."
+ , ""
+ , "If you continue, some of the following less secure"
+ , "servers will be used:"
+ , ""
+ ] ++ map descserver (mapMaybe getServer alternates)
+ | otherwise = Nothing
+ where
+ ps = shareParams tunables
+ getlevel sl = filter (\s -> storageLevel s == sl) locs
+ alternates = getlevel Alternate
+ descserver (Server { serverName = ServerName n, serverDesc = d}) =
+ "* " ++ n ++ " -- " ++ d
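+
+-- For example, with the default share parameters (3 total objects, 2 of
+-- which are needed), a hypothetical location list of one Recommended and
+-- two Alternate servers yields Just (OverridableProblem ...), because the
+-- two Alternate servers alone could reconstruct the key; two Recommended
+-- servers and one Alternate yields Nothing; and any Untrusted server, or
+-- fewer than 3 locations in total, yields a FatalProblem.
+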
+
+-- | Stores the shares among the storage locations. Each location
+-- gets at most one share from each set.
+--
+-- If a server is not currently accessible, it will be queued locally.
+-- If any uploads are queued, returns True.
+--
+-- TODO: Add shuffling and queueing/chaffing to prevent
+-- correlation of related shares.
+storeShares :: StorageLocations -> ShareIdents -> [S.Set Share] -> UpdateProgress -> IO (StoreResult, Bool, [Storage])
+storeShares (StorageLocations locs) allsis shares updateprogress = do
+ ((r, anyqueued), usedlocs) <- go allsis shares [] False
+ _ <- mapM_ obscureShares usedlocs
+ return (r, anyqueued, usedlocs)
+ where
+ go sis (s:rest) usedlocs anyqueued = do
+ let (is, sis') = nextShareIdents sis
+ (r, usedlocs', queued) <- storeset locs [] Nothing (zip (S.toList is) (S.toList s)) False
+ case r of
+ StoreSuccess -> go sis' rest (usedlocs ++ usedlocs') (anyqueued || queued)
+ _ -> return ((r, anyqueued || queued), usedlocs ++ usedlocs')
+ go _ [] usedlocs anyqueued = return ((StoreSuccess, anyqueued), usedlocs)
+
+ storeset _ usedlocs _ [] queued = return (StoreSuccess, usedlocs, queued)
+ storeset [] usedlocs lasterr _ queued =
+ return (fromMaybe (StoreFailure "no storage locations") lasterr, usedlocs, queued)
+ storeset (loc:otherlocs) usedlocs _ ((i, s):rest) queued = do
+ r <- storeShare loc i s
+ case r of
+ StoreSuccess -> do
+ _ <- updateprogress
+ storeset otherlocs (loc:usedlocs) Nothing rest queued
+ -- Give up if any location complains a share
+ -- already exists, because we have a name conflict.
+ StoreAlreadyExists -> return (StoreAlreadyExists, usedlocs, queued)
+ -- Queue or try storing it somewhere else on failure.
+ StoreFailure _ -> case uploadQueue loc of
+ Just q -> do
+ r' <- storeShare q i s
+ case r' of
+ StoreSuccess -> do
+ _ <- updateprogress
+ storeset otherlocs (loc:usedlocs) Nothing rest True
+ StoreAlreadyExists -> return (StoreAlreadyExists, usedlocs, queued)
+ StoreFailure _ -> storeset otherlocs usedlocs (Just r) ((i, s):rest) queued
+ Nothing -> storeset otherlocs usedlocs (Just r) ((i, s):rest) queued
+
+-- | Retrieves one set of shares from the storage locations.
+-- Returns all the shares it can find, which may not be enough,
+-- and the remaining ShareIdents, to use to get subsequent sets.
+--
+-- Assumes that each location only contains one share. So, once a
+-- share has been found on a location, can avoid asking that location
+-- for any other shares.
+retrieveShares :: StorageLocations -> ShareIdents -> UpdateProgress -> IO (S.Set Share, ShareIdents, [Server])
+retrieveShares (StorageLocations locs) sis updateprogress = do
+ let (is, sis') = nextShareIdents sis
+ let want = zip [1..] (S.toList is)
+ (shares, usedlocs, _unusedlocs) <- go locs [] want []
+ _ <- mapM_ obscureShares usedlocs
+ let usedservers = mapMaybe getServer usedlocs
+ return (S.fromList shares, sis', usedservers)
+ where
+ go unusedlocs usedlocs [] shares = return (shares, usedlocs, unusedlocs)
+ go [] usedlocs _ shares = return (shares, usedlocs, [])
+ go (loc:otherlocs) usedlocs ((n, i):rest) shares = do
+ r <- retrieveShare loc n i
+ case r of
+ RetrieveSuccess s -> do
+ _ <- updateprogress
+ go otherlocs (loc:usedlocs) rest (s:shares)
+ RetrieveFailure _ -> do
+ -- Try to get the share from other locations.
+ (shares', usedlocs', unusedlocs) <-
+ go otherlocs usedlocs [(n, i)] shares
+ -- May need to ask the location that didn't
+ -- have the share for a later share, but
+ -- ask it last. This way, the first
+ -- location on the list can't deny having
+ -- all shares and so learn the idents of
+ -- all of them.
+ go (unusedlocs++[loc]) usedlocs' rest shares'
+
+-- | Returns descriptions of any failures.
+tryUploadQueued :: Maybe LocalStorageDirectory -> IO [String]
+tryUploadQueued d = do
+ StorageLocations locs <- shuffleStorageLocations $
+ networkStorageLocations d
+ results <- forM locs $ \loc -> case uploadQueue loc of
+ Nothing -> return []
+ Just q -> moveShares q loc
+ return $ processresults (concat results) []
+ where
+ processresults [] c = nub c
+ processresults (StoreSuccess:rs) c = processresults rs c
+ processresults (StoreFailure e:rs) c = processresults rs (e:c)
+ processresults (StoreAlreadyExists:rs) c =
+ processresults rs ("Unable to upload a share to a server due to a name conflict.":c)
+
+storeChaff :: HostName -> Port -> Maybe Seconds -> IO ()
+storeChaff hn port delayseconds = forever $ do
+ say $ "Sending chaff to " ++ hn ++ " (press ctrl-c to stop)"
+ say "Legend: + = successful upload, ! = upload failure"
+ prg <- Raaz.newPRG () :: IO Raaz.SystemPRG
+ randomname <- randomByteStringOfLength 128 prg
+	-- It's ok to use the testModeTunables here because
+ -- the randomname is not something that can be feasibly guessed.
+ -- Prefix "random chaff" to the name to avoid ever using a name
+ -- that a real user might want to use.
+ let sis = shareIdents testModeTunables (Name $ "random chaff:" <> randomname) AnyGpgKey
+ mapConcurrently (go sis prg)
+ [1..totalObjects (shareParams testModeTunables)]
+ where
+ server = networkStorage Untrusted Nothing $
+ Server (ServerName hn) [ServerAddress hn port] "chaff server"
+ objsize = objectSize defaultTunables * shareOverhead defaultTunables
+ maxmsdelay = ceiling $ 1000000 * fromMaybe 0 delayseconds
+ go sis prg n = do
+ msdelay <- getStdRandom (randomR (0, maxmsdelay))
+ delay msdelay
+
+ b <- randomByteStringOfLength objsize prg
+ let share = Share 0 (StorableObject b)
+ let (is, sis') = nextShareIdents sis
+ let i = S.toList is !! (n - 1)
+ r <- storeShare server i share
+ case r of
+ StoreSuccess -> progress "+"
+ _ -> progress "!"
+ go sis' prg n
+
+-- | Shuffles the list, keeping LocallyPreferred locations first, then
+-- Recommended, then Alternate, and finally Untrusted.
+shuffleStorageLocations :: StorageLocations -> IO StorageLocations
+shuffleStorageLocations (StorageLocations l) =
+ StorageLocations . concat <$> mapM shuf [minBound..maxBound]
+ where
+ shuf sl = shuffleM (filter (\s -> storageLevel s == sl) l)
diff --git a/Storage/Local.hs b/Storage/Local.hs
new file mode 100644
index 0000000..637a31b
--- /dev/null
+++ b/Storage/Local.hs
@@ -0,0 +1,200 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Storage.Local
+ ( localStorage
+ , localStorageOverride
+ , storageDir
+ , storageTopDir
+ , testStorageDir
+ , localDiskUsage
+ ) where
+
+import Types
+import Types.Storage
+import Output
+import Serialization ()
+import Utility.UserInfo
+import Utility.Exception
+import qualified Data.ByteString as B
+import qualified Data.ByteString.UTF8 as U8
+import Data.Monoid
+import Data.List
+import Data.Maybe
+import System.IO
+import System.Directory
+import System.Posix
+import System.FilePath
+import Raaz.Core.Encode
+import Control.DeepSeq
+import Control.Monad
+import System.DiskSpace
+import Control.Exception (IOException)
+
+type GetShareDir = Section -> IO FilePath
+
+newtype Section = Section String
+
+localStorage :: StorageLevel -> GetShareDir -> String -> Storage
+localStorage storagelevel getsharedir n = Storage
+ { storeShare = store section getsharedir
+ , retrieveShare = retrieve section getsharedir
+ , obscureShares = obscure section getsharedir
+ , countShares = count section getsharedir
+ , moveShares = move section getsharedir
+ , storageLevel = storagelevel
+ , uploadQueue = Nothing
+ , getServer = Nothing
+ }
+ where
+ section = Section n
+
+localStorageOverride :: FilePath -> IO (Maybe Storage)
+localStorageOverride d = onError' accesserror $ do
+ -- Check that the directory can be written to.
+ createDirectoryIfMissing True d
+	-- Use a filename as long as those used for keysafe share files.
+ let f = d </> "testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttest.keysafe"
+ writeFile f "test"
+ _ <- readFile f
+ removeFile f
+ return $ Just $ localStorage LocallyPreferred (\_ -> pure d) ""
+ where
+ accesserror e = do
+ warn $ "Unable to access local storage directory " ++ d ++ " (" ++ show e ++ ")"
+ return Nothing
+
+store :: Section -> GetShareDir -> StorableObjectIdent -> Share -> IO StoreResult
+store section getsharedir i s = onError (StoreFailure . show) $ do
+ dir <- getsharedir section
+ createDirectoryIfMissing True dir
+ let dest = dir </> shareFile i
+ exists <- doesFileExist dest
+ if exists
+ then return StoreAlreadyExists
+ else do
+ let tmp = dest ++ ".tmp"
+ fd <- openFd tmp WriteOnly (Just 0o400)
+ (defaultFileFlags { exclusive = True } )
+ h <- fdToHandle fd
+ B.hPut h (toByteString s)
+ hClose h
+ renameFile tmp dest
+ return StoreSuccess
+
+retrieve :: Section -> GetShareDir -> ShareNum -> StorableObjectIdent -> IO RetrieveResult
+retrieve section getsharedir n i = onError (RetrieveFailure . show) $ do
+ dir <- getsharedir section
+ fd <- openFd (dir </> shareFile i) ReadOnly Nothing defaultFileFlags
+ h <- fdToHandle fd
+ b <- B.hGetContents h
+ b `deepseq` hClose h
+ return $ RetrieveSuccess $ Share n (StorableObject b)
+
+-- | Set atime and mtime to epoch, to obscure access and modification
+-- patterns.
+--
+-- There is no way to set the ctime to the epoch, but setting the other
+-- times does at least set it to the current time, which makes all
+-- currently stored files look alike.
+--
+-- Note that the contents of shares are never changed, so it's ok to set the
+-- mtime to the epoch; backup programs won't be confused.
+obscure :: Section -> GetShareDir -> IO ObscureResult
+obscure section getsharedir = onError (ObscureFailure . show) $ do
+ dir <- getsharedir section
+ fs <- filter isShareFile <$> getDirectoryContents dir
+ mapM_ (\f -> setFileTimes (dir </> f) 0 0) fs
+ return ObscureSuccess
+
+count :: Section -> GetShareDir -> IO CountResult
+count section getsharedir = onError (CountFailure . show) $ do
+ dir <- getsharedir section
+ CountResult . genericLength . filter isShareFile
+ <$> getDirectoryContents dir
+
+move :: Section -> GetShareDir -> Storage -> IO [StoreResult]
+move section getsharedir storage = do
+ dir <- getsharedir section
+ fs <- map (dir </>) <$> catchDefaultIO [] (getDirectoryContents dir)
+ rs <- forM fs $ \f -> case fromShareFile f of
+ Nothing -> return Nothing
+ Just i -> Just <$> go f i
+ return (catMaybes rs)
+ where
+ -- Use a dummy share number of 0; it doesn't
+ -- matter because we're not going to be
+ -- recombining the share here.
+ sharenum = 0
+
+ go f i = do
+ r <- retrieve section getsharedir sharenum i
+ case r of
+ RetrieveFailure e -> return (StoreFailure e)
+ RetrieveSuccess share -> do
+ s <- storeShare storage i share
+ case s of
+ StoreSuccess -> movesuccess f
+ StoreAlreadyExists -> alreadyexists share i f
+ StoreFailure e -> return (StoreFailure e)
+
+ movesuccess f = do
+ removeFile f
+ return StoreSuccess
+
+ -- Detect case where the same data is already
+ -- stored on the other storage.
+ alreadyexists share i f = do
+ check <- retrieveShare storage sharenum i
+ case check of
+ RetrieveSuccess share'
+ | share' == share -> movesuccess f
+ _ -> return StoreAlreadyExists
+
+onError :: (IOException -> a) -> IO a -> IO a
+onError f = onError' (pure . f)
+
+onError' :: (IOException -> IO a) -> IO a -> IO a
+onError' f a = do
+ v <- try a
+ case v of
+ Left e -> f e
+ Right r -> return r
+
+storageDir :: Maybe LocalStorageDirectory -> GetShareDir
+storageDir Nothing (Section section) = do
+ home <- myHomeDir
+ return $ home </> dotdir </> section
+storageDir (Just (LocalStorageDirectory d)) (Section section) =
+ pure $ d </> section
+
+storageTopDir :: Maybe LocalStorageDirectory -> IO FilePath
+storageTopDir lsd = storageDir lsd (Section ".")
+
+testStorageDir :: FilePath -> GetShareDir
+testStorageDir tmpdir = storageDir (Just (LocalStorageDirectory tmpdir))
+
+localDiskUsage :: Maybe LocalStorageDirectory -> IO DiskUsage
+localDiskUsage lsd = getDiskUsage =<< storageTopDir lsd
+
+-- | The takeFileName ensures that, if the StorableObjectIdent somehow
+-- contains a path (eg starts with "../" or "/"), it is not allowed
+-- to point outside the share directory.
+shareFile :: StorableObjectIdent -> FilePath
+shareFile i = takeFileName (U8.toString (toByteString i)) <> ext
+
+fromShareFile :: FilePath -> Maybe StorableObjectIdent
+fromShareFile f
+ | isShareFile f = fromByteString $ U8.fromString $ takeFileName $ dropExtension f
+ | otherwise = Nothing
+
+isShareFile :: FilePath -> Bool
+isShareFile f = ext `isSuffixOf` f
+
+ext :: String
+ext = ".keysafe"
+
+dotdir :: FilePath
+dotdir = ".keysafe" </> "objects"
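+
+-- A hypothetical ghci session illustrating the filename handling above
+-- (the idents are made-up examples):
+--
+-- >>> shareFile (StorableObjectIdent "abc123")
+-- "abc123.keysafe"
+-- >>> shareFile (StorableObjectIdent "../../etc/passwd")
+-- "passwd.keysafe"
+-- >>> fromShareFile "abc123.keysafe"
+-- Just (StorableObjectIdent "abc123")
+-- >>> fromShareFile "README"
+-- Nothing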
diff --git a/Storage/Network.hs b/Storage/Network.hs
new file mode 100644
index 0000000..41d1ff1
--- /dev/null
+++ b/Storage/Network.hs
@@ -0,0 +1,64 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE OverloadedStrings #-}
+
+module Storage.Network (
+ networkStorage,
+ networkStorageOverride,
+) where
+
+import Types
+import Types.Storage
+import Types.Server
+import Storage.Local
+import HTTP.Client
+import HTTP.ProofOfWork
+import System.FilePath
+
+networkStorage :: StorageLevel -> Maybe LocalStorageDirectory -> Server -> Storage
+networkStorage storagelevel localdir server = Storage
+ { storeShare = store server
+ , retrieveShare = retrieve server
+ , obscureShares = obscure server
+ , countShares = count server
+ , moveShares = move server
+ , uploadQueue = Just $ localStorage storagelevel (storageDir localdir)
+ ("uploadqueue" </> name)
+ , storageLevel = storagelevel
+ , getServer = Just server
+ }
+ where
+ ServerName name = serverName server
+
+networkStorageOverride :: Maybe LocalStorageDirectory -> HostName -> Port -> IO (Maybe Storage)
+networkStorageOverride lsd h p = return $ Just $
+ networkStorage LocallyPreferred lsd $ Server
+ { serverName = ServerName h
+ , serverAddress = [ServerAddress h p]
+ , serverDesc = h
+ }
+
+store :: Server -> StorableObjectIdent -> Share -> IO StoreResult
+store srv i (Share _n o) =
+ serverRequest srv StoreFailure id i $ \pow ->
+ putObject i pow o
+
+retrieve :: Server -> ShareNum -> StorableObjectIdent -> IO RetrieveResult
+retrieve srv n i =
+ serverRequest srv RetrieveFailure (RetrieveSuccess . Share n) i $
+ getObject i
+
+-- | Servers should automatically obscure, so do nothing.
+-- (Could upload chaff.)
+obscure :: Server -> IO ObscureResult
+obscure _ = return ObscureSuccess
+
+count :: Server -> IO CountResult
+count srv = serverRequest srv CountFailure id NoPOWIdent countObjects
+
+-- | Not needed for servers.
+move :: Server -> Storage -> IO [StoreResult]
+move _ _ = error "move is not implemented for servers"
diff --git a/TODO b/TODO
new file mode 100644
index 0000000..97da3e0
--- /dev/null
+++ b/TODO
@@ -0,0 +1,54 @@
+Soon:
+
+* Get some keysafe servers set up.
+* Set up --check-servers in a cron job, so I know when servers are down.
+
+Later:
+
+* The attack cost display can lead to a false sense of security if the user
+ takes it as gospel. It needs to be clear that it's an estimate. This and
+ other parts of the keysafe UI need usability testing.
+* improve restore progress bar points (update after every hash try)
+* If we retrieved enough shares successfully, but decrypt failed, must
+ be a wrong password, so prompt for re-entry and retry with those shares.
+* --no-jargon which makes the UI avoid terms like "secret key" and "crack
+ password". Do usability testing!
+* --key-value=$N which eliminates the question about password value,
+ and rejects passwords that would cost less than $N to crack at current
+ rates. This should add a combo box to the password entry form in the
+ GUI to let the user adjust the $N there.
+* In backup, only upload to N-1 servers immediately, and delay the rest
+ for up to several days, with some uploads of chaff, to prevent
+ collaborating evil servers from correlating related shards.
+* Add some random padding to http requests and responses, to make it
+ harder for traffic analysis to tell that given TOR traffic is
+ keysafe traffic.
+* Argon2d is more resistant to GPU/ASIC attack optimisation.
+ Switching from Argon2i would require new tunables, so deferred for now
+ until there's some other reason to change the tunables.
+
+Wishlist:
+
+* Keep secret keys in locked memory until they're encrypted.
+ (Raaz makes this possible to do.)
+ Would be nice, but not super-important, since gpg secret keys
+  are passphrase protected anyway.
+* Don't require --totalshares and --neededshares on restore when unusual
+ values were used for backup.
+
+  The difficulty is that the number of needed shares cannot be determined by
+  looking at the shares, and guessing it wrong will result in combining
+  too few shares, yielding garbage, which can take up to an hour of
+  attempted decryption before keysafe can tell that more shares are needed.
+
+ This could be dealt with by including the number of needed shares in the
+ serialization of Share, but then an attacker could use it to partition
+ shares from servers. If only one person uses --neededshares=5,
+ the attacker can guess that all their shares go together.
+
+ What about including the number of needed shares in the name? Since that's
+ hashed, it's not visible to an attacker. Keysafe would need to try names
+ with 2 shares, then 3, etc, and once it found shares, it would know the
+ number needed. It should also be possible to avoid breaking backwards
+  compatibility, by only including the number of shares in the name when
+ it's not the standard number.
diff --git a/Tests.hs b/Tests.hs
new file mode 100644
index 0000000..bbc9dcd
--- /dev/null
+++ b/Tests.hs
@@ -0,0 +1,141 @@
+{-# LANGUAGE OverloadedStrings #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Tests where
+
+import Types
+import Output
+import Tunables
+import Encryption
+import Share
+import Storage
+import Storage.Local
+import Control.Exception
+import System.Directory
+import System.Posix.Temp
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.ByteString as B
+import qualified Data.Set as S
+import Data.Monoid
+
+type TestDesc = B.ByteString
+
+type TestResult = Either String ()
+
+type Test = (TestDesc, IO TestResult)
+
+testSuccess :: IO TestResult
+testSuccess = return $ Right ()
+
+testFailed :: String -> IO TestResult
+testFailed = return . Left
+
+runTest :: Test -> IO Bool
+runTest (d, t) = do
+ progress $ "testing: " ++ show d ++ " ..."
+ r <- t
+ case r of
+ Right () -> do
+ say "ok"
+ return True
+ Left e -> do
+ say $ "failed: " ++ show e
+ return False
+
+runTests :: IO ()
+runTests = do
+ r <- mapM runTest tests
+ if all (== True) r
+ then say "All tests succeeded."
+ else error "Tests failed. Report a bug!"
+
+tests :: [Test]
+tests =
+ [ stableNamingTest "stable naming"
+ , backupRestoreTest "very small" $
+ testSecretKey 1
+ , backupRestoreTest "full size" $
+ testSecretKey (objectSize testModeTunables)
+ , backupRestoreTest "two chunks" $
+ testSecretKey (objectSize testModeTunables + 1)
+ , backupRestoreTest "many chunks" $
+ testSecretKey (objectSize testModeTunables * 10)
+ ]
+
+testSecretKey :: Int -> SecretKey
+testSecretKey sz = SecretKey $ BU8.fromString $ take sz $ concatMap show [1..sz]
+
+withTestStorageLocations :: (StorageLocations -> IO a) -> IO a
+withTestStorageLocations a = bracket setup cleanup go
+ where
+ setup = mkdtemp "keysafe-test"
+ cleanup = removeDirectoryRecursive
+ go tmpdir = a $ StorageLocations $
+ map (localStorage LocallyPreferred (testStorageDir tmpdir) . show)
+ [1..100 :: Int]
+
+-- | Test of backup and restore of a SecretKey.
+backupRestoreTest :: TestDesc -> SecretKey -> Test
+backupRestoreTest testdesc secretkey =
+ ("backup and restore (" <> testdesc <> ")", runtest)
+ where
+ runtest = withTestStorageLocations $ \storagelocations -> do
+ backup storagelocations
+ restore storagelocations
+
+ backup storagelocations = do
+ kek <- genKeyEncryptionKey tunables name password
+ let esk = encrypt tunables kek secretkey
+ shares <- genShares esk tunables
+ let sis = shareIdents tunables name AnyGpgKey
+ _ <- storeShares storagelocations sis shares (return ())
+ return ()
+
+ restore storagelocations = do
+ let sis = shareIdents tunables name AnyGpgKey
+ (shares, sis', _) <- retrieveShares storagelocations sis (return ())
+ let candidatekeys = candidateKeyEncryptionKeys tunables name password
+ case combineShares tunables [shares] of
+ Left e -> testFailed e
+ Right esk -> restorerest storagelocations [shares] sis' $
+ tryDecrypt candidatekeys esk
+
+ restorerest storagelocations firstshares sis r = case r of
+ DecryptFailed -> testFailed "DecryptFailed"
+ DecryptSuccess restoredsecretkey ->
+ if restoredsecretkey == secretkey
+ then testSuccess
+ else testFailed "restore yielded different value than was backed up"
+ DecryptIncomplete kek -> do
+ (nextshares, sis', _) <- retrieveShares storagelocations sis (return ())
+ let shares = firstshares ++ [nextshares]
+ case combineShares tunables shares of
+ Left e -> testFailed e
+ Right esk -> restorerest storagelocations shares sis' $
+ decrypt kek esk
+
+ name = Name testdesc
+ password = Password "password"
+ -- testModeTunables is used, to avoid this taking a very
+ -- long time to run.
+ tunables = testModeTunables
+
+-- | It's important that StorableObjectIdent generation be stable;
+-- any change to it will cause shards to get lost.
+stableNamingTest :: TestDesc -> Test
+stableNamingTest testdesc = (testdesc, runtest $ map snd knownTunings)
+ where
+ runtest [] = testFailed "not stable!"
+ runtest (tunables:rest) = do
+ let sis = shareIdents tunables name (Distinguisher secretkeysource)
+ if S.member knownvalue (head (identsStream sis))
+ then testSuccess
+ else runtest rest
+
+ name = Name "stable name"
+ secretkeysource = GpgKey (KeyId "stable keyid")
+ knownvalue = StorableObjectIdent "18b112da9108b4b5e21fa07cfc672e11688110e4c2dc56c8365f0de488cca8cb"
diff --git a/Tunables.hs b/Tunables.hs
new file mode 100644
index 0000000..5c28a39
--- /dev/null
+++ b/Tunables.hs
@@ -0,0 +1,162 @@
+{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Tunables where
+
+import Cost
+import qualified Crypto.Argon2 as Argon2
+import Data.Word
+
+-- | To determine the tunables used for a key name, the expensive hash of the
+-- name is calculated using a particular configuration; if the
+-- object names it generates are available, we know those tunables were used.
+--
+-- Since this process is expensive, it's important that the most commonly
+-- used tunables come first, so that the expensive hash does not have to be
+-- calculated repeatedly.
+--
+-- The reason for using this expensive method of encoding the tunables
+-- is that it prevents attacks where related objects are correlated based
+-- on their tunables.
+knownTunings :: [(ExpensiveHashTunable, Tunables)]
+knownTunings = map (\t -> (nameGenerationHash (nameGenerationTunable t), t))
+ [ defaultTunables
+ ]
+
+-- | keysafe stores data for a long time, and needs to be able to process
+-- data from a long time ago when restoring a key. We don't want to be
+-- locked into old choices of crypto primitives etc forever.
+--
+-- So, every parameter that can be tuned is configured in this data
+-- structure.
+data Tunables = Tunables
+ { shareParams :: ShareParams
+ , objectSize :: Int
+ -- ^ a StorableObject is exactly this many bytes in size
+ -- (must be a multiple of AES block size 16, and cannot be smaller
+ -- than 256 bytes)
+ , shareOverhead :: Int
+ -- ^ Share encoding overhead as a multiple of the objectSize
+ , nameGenerationTunable :: NameGenerationTunable
+ , keyEncryptionKeyTunable :: KeyEncryptionKeyTunable
+ , encryptionTunable :: EncryptionTunable
+ }
+ deriving (Show)
+
+-- | Parameters for sharing. The secret is split into
+-- N objects, such that only M are needed to reconstruct it.
+data ShareParams = ShareParams
+ { totalObjects :: Int -- ^ N
+ , neededObjects :: Int -- ^ M
+ }
+ deriving (Show)
+
+-- | An expensive hash, which makes brute-forcing hard.
+--
+-- The creation cost estimate must be manually tuned to match the
+-- hash options. Use benchmarkTunables to check this.
+data ExpensiveHashTunable = UseArgon2 (Cost CreationOp) Argon2.HashOptions
+ deriving (Show)
+
+data NameGenerationTunable = NameGenerationTunable
+ { nameGenerationHash :: ExpensiveHashTunable
+ }
+ deriving (Show)
+
+-- | How to generate the encryption key used to encrypt the secret key.
+-- This is an expensive hash of the password, but not a super expensive
+-- hash, because a password brute forcing attacker needs to run the hash
+-- 256 times per random salt byte.
+data KeyEncryptionKeyTunable = KeyEncryptionKeyTunable
+ { keyEncryptionKeyHash :: ExpensiveHashTunable
+ , randomSaltBytes :: Int
+ , randomSaltBytesBruteForceCost :: Cost BruteForceOp
+ }
+ deriving (Show)
+
+-- | What encryption to use.
+data EncryptionTunable = UseAES256
+ deriving (Show)
+
+-- | Tunables used by default to backup.
+defaultTunables :: Tunables
+defaultTunables = Tunables
+ { shareParams = ShareParams { totalObjects = 3, neededObjects = 2 }
+ , objectSize = 1024*32 -- 32 kb
+ , shareOverhead = 2
+ -- The nameGenerationHash was benchmarked at 600 seconds
+ -- on a 2 core Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz.
+ , nameGenerationTunable = NameGenerationTunable
+ { nameGenerationHash = argon2 10000 (coreCost 2 (Seconds 600) d)
+ }
+ , keyEncryptionKeyTunable = KeyEncryptionKeyTunable
+ { keyEncryptionKeyHash = argon2 169 (CPUCost (Seconds 12) d)
+ , randomSaltBytes = 1
+ -- The keyEncryptionKeyHash is run 256 times per
+ -- random salt byte to brute-force, and its parameters
+ -- were chosen so the total brute forcing time is 50 minutes,
+ -- on a 2 core Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz.
+ , randomSaltBytesBruteForceCost = coreCost 2 (Seconds (50*60)) d
+ }
+ , encryptionTunable = UseAES256
+ }
+ where
+ argon2 i c = UseArgon2 c $ Argon2.HashOptions
+ { Argon2.hashIterations = i
+		, Argon2.hashMemory = 131072 -- 128 mebibytes per thread
+ , Argon2.hashParallelism =
+ let Divisibility n = d
+ in fromIntegral n
+ , Argon2.hashVariant = Argon2.Argon2i
+ }
+ d = Divisibility 4 -- argon2 uses 4 threads
+
+-- | Dials back hash difficulty, lies about costs.
+-- Not for production use!
+testModeTunables :: Tunables
+testModeTunables = Tunables
+ { shareParams = ShareParams { totalObjects = 3, neededObjects = 2 }
+ , objectSize = 1024*32
+ , shareOverhead = 2
+ , nameGenerationTunable = NameGenerationTunable
+ { nameGenerationHash = weakargon2 (coreCost 2 (Seconds 600) d)
+ }
+ , keyEncryptionKeyTunable = KeyEncryptionKeyTunable
+ { keyEncryptionKeyHash = weakargon2 (CPUCost (Seconds 12) d)
+ , randomSaltBytes = 1
+ , randomSaltBytesBruteForceCost = coreCost 2 (Seconds (50*60)) d
+ }
+ , encryptionTunable = UseAES256
+ }
+ where
+ weakargon2 c = UseArgon2 c Argon2.defaultHashOptions
+ d = Divisibility 4
+
+knownObjectSizes :: [Int]
+knownObjectSizes = map (calc . snd) knownTunings
+ where
+ calc t = objectSize t * shareOverhead t
+
+-- Hash for client-server Proof Of Work. This is tuned to take around
+-- 4 seconds to calculate the hash 256 times on a 4 core machine, with
+-- 0 added iterations. Adding more iterations will increase that somewhat.
+--
+-- This is not included in Tunables because it doesn't affect object
+-- encryption and storage. Any change to this will break backwards
+-- compatibility of the HTTP protocol!
+proofOfWorkHashTunable :: Word32 -> ExpensiveHashTunable
+proofOfWorkHashTunable addits =
+ UseArgon2 (CPUCost (Seconds nsecs) (Divisibility 4)) $
+ Argon2.HashOptions
+ { Argon2.hashIterations = nits
+ , Argon2.hashMemory = 1000
+ , Argon2.hashParallelism = 4
+ , Argon2.hashVariant = Argon2.Argon2i
+ }
+ where
+ nits = 20 + addits
+ nsecs = (4 * fromIntegral nits / 20) / 256
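+
+-- Worked example of the estimate above: with 0 added iterations, nits = 20,
+-- so nsecs = (4 * 20 / 20) / 256 = 1/64 of a second per hash, i.e. roughly
+-- 4 seconds for 256 hashes as described. With 20 added iterations, nits = 40
+-- and the estimate for 256 hashes doubles to roughly 8 seconds.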
diff --git a/Types.hs b/Types.hs
new file mode 100644
index 0000000..2f97c61
--- /dev/null
+++ b/Types.hs
@@ -0,0 +1,71 @@
+{-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, MultiParamTypeClasses, FlexibleInstances, DeriveGeneric #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Types where
+
+import Types.Cost
+import qualified Data.ByteString as B
+import qualified Data.Text as T
+import Data.String
+import Control.DeepSeq
+import GHC.Generics (Generic)
+import Data.Aeson
+
+-- | keysafe stores secret keys.
+newtype SecretKey = SecretKey B.ByteString
+ deriving (Eq)
+
+-- | The secret key, encrypted with a password, in fixed size chunks.
+data EncryptedSecretKey = EncryptedSecretKey [B.ByteString] (CostCalc BruteForceOp UnknownPassword)
+
+instance NFData EncryptedSecretKey where
+ rnf (EncryptedSecretKey cs _) = rnf cs
+
+instance Show EncryptedSecretKey where
+ show (EncryptedSecretKey cs _) = show cs
+
+instance Bruteforceable EncryptedSecretKey UnknownPassword where
+ getBruteCostCalc (EncryptedSecretKey _ cc) = cc
+
+-- | An object in a form suitable to be stored on a keysafe server.
+newtype StorableObject = StorableObject { fromStorableObject :: B.ByteString }
+ deriving (Show, Eq, Ord, Generic)
+
+-- | An identifier for a StorableObject
+newtype StorableObjectIdent = StorableObjectIdent B.ByteString
+ deriving (Show, Eq, Ord, NFData)
+
+-- | A Shamir secret share, with a known number (N of M).
+data Share = Share ShareNum StorableObject
+ deriving (Eq, Ord)
+
+type ShareNum = Int
+
+-- | A password used to encrypt a key stored in keysafe.
+newtype Password = Password B.ByteString
+ deriving (IsString)
+
+-- | A name associated with a key stored in keysafe.
+newtype Name = Name B.ByteString
+ deriving (Eq, Show, Monoid)
+
+-- | Source of the secret key stored in keysafe.
+data SecretKeySource = GpgKey KeyId | KeyFile FilePath
+ deriving (Show, Eq, Generic)
+
+instance ToJSON SecretKeySource
+instance FromJSON SecretKeySource
+
+-- | The keyid is any value that is unique to a private key, and can be
+-- looked up somehow without knowing the private key.
+--
+-- A gpg keyid is the obvious example.
+data KeyId = KeyId T.Text
+ deriving (Show, Eq, Generic)
+
+instance ToJSON KeyId
+instance FromJSON KeyId
diff --git a/Types/Cost.hs b/Types/Cost.hs
new file mode 100644
index 0000000..521d6c1
--- /dev/null
+++ b/Types/Cost.hs
@@ -0,0 +1,72 @@
+{-# LANGUAGE GeneralizedNewtypeDeriving, MultiParamTypeClasses, EmptyDataDecls #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Types.Cost where
+
+-- | An estimated cost to perform an operation.
+data Cost op
+ = CPUCost Seconds Divisibility
+ -- ^ cost in Seconds, using 1 physical CPU core
+ deriving (Show, Eq, Ord)
+
+newtype Seconds = Seconds Rational
+ deriving (Num, Fractional, Real, RealFrac, Eq, Ord)
+
+instance Show Seconds where
+ show (Seconds n) = show (fromRational n :: Double) ++ "s"
+
+-- | How many CPU cores a single run of an operation can be divided among.
+newtype Divisibility = Divisibility Integer
+ deriving (Show, Eq, Ord)
+
+data UsingHardware = UsingCPU | UsingGPU | UsingASIC
+ deriving (Show)
+
+instance Monoid (Cost t) where
+ mempty = CPUCost (Seconds 0) (Divisibility 1)
+ CPUCost (Seconds a) (Divisibility x) `mappend` CPUCost (Seconds b) (Divisibility y) =
+ -- Take maximum divisibility, to avoid over-estimating
+ -- the total cost.
+ CPUCost (Seconds (a+b)) (Divisibility $ max x y)
+
+-- | Operations whose cost can be measured.
+data DecryptionOp
+data CreationOp
+data BruteForceOp
+
+-- | Things that track their creation cost.
+class HasCreationCost t where
+ getCreationCost :: t -> Cost CreationOp
+
+-- | Things that track their decryption cost.
+class HasDecryptionCost t where
+ getDecryptionCost :: t -> Cost DecryptionOp
+
+-- | Calculation of a cost that depends on some amount of entropy.
+type CostCalc op t = Entropy t -> Cost op
+
+unknownCostCalc :: CostCalc op t
+unknownCostCalc = \_e -> error "No cost calculation available"
+
+-- | Number of bits of entropy
+newtype Entropy t = Entropy Int
+ deriving (Num, Show)
+
+class CalcEntropy d t where
+ calcEntropy :: d -> Entropy t
+
+-- | Entropy can never go negative when subtracting bits from it.
+reduceEntropy :: Entropy t -> Int -> Entropy t
+reduceEntropy (Entropy a) b = Entropy (max 0 (a - b))
+
+-- | Things that can be brute-forced track their CostCalc.
+class Bruteforceable t a where
+ getBruteCostCalc :: t -> CostCalc BruteForceOp a
+
+-- | Things that can have entropy
+data UnknownPassword
+data UnknownName
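+
+-- A hypothetical example of combining costs with the Monoid instance above
+-- (the values are illustrative): the durations add, and the larger
+-- divisibility is kept to avoid over-estimating the total cost.
+--
+-- >>> (CPUCost (Seconds 10) (Divisibility 2) `mappend` CPUCost (Seconds 5) (Divisibility 4)) :: Cost CreationOp
+-- CPUCost 15.0s (Divisibility 4)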
diff --git a/Types/Server.hs b/Types/Server.hs
new file mode 100644
index 0000000..6a3fe23
--- /dev/null
+++ b/Types/Server.hs
@@ -0,0 +1,36 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE DeriveGeneric #-}
+
+module Types.Server (module Types.Server, Port) where
+
+import Data.Aeson
+import GHC.Generics
+import Network.Wai.Handler.Warp (Port)
+
+type HostName = String
+
+-- | Server address can use either tor .onion addresses, or regular
+-- hostnames. Using tor is highly recommended, to avoid correlation
+-- attacks.
+data ServerAddress = ServerAddress HostName Port
+ deriving (Show, Eq, Ord)
+
+-- | Name used in queuing uploads to the server. Should remain stable
+-- across keysafe versions.
+newtype ServerName = ServerName { fromServerName :: String }
+ deriving (Show, Eq, Ord, Generic)
+
+instance ToJSON ServerName
+instance FromJSON ServerName
+
+data Server = Server
+ { serverName :: ServerName
+ , serverAddress :: [ServerAddress]
+ -- ^ A server may have multiple addresses, or no current address.
+ , serverDesc :: String
+ }
+ deriving (Show, Eq, Ord)
diff --git a/Types/Storage.hs b/Types/Storage.hs
new file mode 100644
index 0000000..c83593a
--- /dev/null
+++ b/Types/Storage.hs
@@ -0,0 +1,60 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE GeneralizedNewtypeDeriving #-}
+{-# LANGUAGE DeriveGeneric #-}
+
+module Types.Storage where
+
+import Types
+import Types.Server
+import GHC.Generics
+import Data.Aeson.Types
+
+-- | All known locations where shares can be stored, ordered with
+-- preferred locations first.
+newtype StorageLocations = StorageLocations [Storage]
+ deriving (Monoid)
+
+newtype LocalStorageDirectory = LocalStorageDirectory FilePath
+
+data StorageLevel = LocallyPreferred | Recommended | Alternate | Untrusted
+ deriving (Show, Eq, Ord, Bounded, Enum)
+
+-- | Storage interface. This can be used both for local storage,
+-- an upload queue, or a remote server.
+--
+-- Note that there is no interface to enumerate shares.
+-- This is intentional; servers should not allow that.
+data Storage = Storage
+ { storeShare :: StorableObjectIdent -> Share -> IO StoreResult
+ , retrieveShare :: ShareNum -> StorableObjectIdent -> IO RetrieveResult
+ , obscureShares :: IO ObscureResult
+ -- ^ Run after making some calls to storeShare/retrieveShare,
+ -- to avoid correlation attacks.
+ , countShares :: IO CountResult
+ , moveShares :: Storage -> IO [StoreResult]
+ -- ^ Tries to move all shares from this storage to another one.
+ , uploadQueue :: Maybe Storage
+ , storageLevel :: StorageLevel
+ , getServer :: Maybe Server
+ }
+
+data StoreResult = StoreSuccess | StoreAlreadyExists | StoreFailure String
+ deriving (Show, Generic)
+
+data RetrieveResult = RetrieveSuccess Share | RetrieveFailure String
+ deriving (Generic)
+
+data ObscureResult = ObscureSuccess | ObscureFailure String
+ deriving (Show, Generic)
+
+data CountResult = CountResult Integer | CountFailure String
+ deriving (Show, Generic)
+
+instance ToJSON StoreResult
+instance FromJSON StoreResult
+instance ToJSON CountResult
+instance FromJSON CountResult
diff --git a/Types/UI.hs b/Types/UI.hs
new file mode 100644
index 0000000..553c323
--- /dev/null
+++ b/Types/UI.hs
@@ -0,0 +1,27 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE RankNTypes #-}
+
+module Types.UI where
+
+import Types
+
+data UI = UI
+ { isAvailable :: IO Bool
+ , showError :: Desc -> IO ()
+ , showInfo :: Title -> Desc -> IO ()
+ , promptQuestion :: Title -> Desc -> Question -> IO Bool
+ , promptName :: Title -> Desc -> Maybe Name -> (Name -> Maybe Problem) -> IO (Maybe Name)
+ , promptPassword :: Bool -> Title -> Desc -> IO (Maybe Password)
+ , promptKeyId :: Title -> Desc -> [(Name, KeyId)] -> IO (Maybe KeyId)
+ , withProgress :: forall a. Title -> Desc -> ((Percent -> IO ()) -> IO a) -> IO a
+ }
+
+type Title = String
+type Desc = String
+type Percent = Int
+type Problem = String
+type Question = String
diff --git a/UI.hs b/UI.hs
new file mode 100644
index 0000000..4212468
--- /dev/null
+++ b/UI.hs
@@ -0,0 +1,45 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+{-# LANGUAGE BangPatterns #-}
+
+module UI (module UI, module Types.UI) where
+
+import Types.UI
+import Control.Monad
+import UI.Zenity
+import UI.Readline
+import UI.NonInteractive
+import Control.Concurrent.MVar
+
+availableUIs :: IO [UI]
+availableUIs = filterM isAvailable [readlineUI, zenityUI]
+
+selectUI :: Bool -> IO UI
+selectUI needgui
+ | needgui = do
+ ok <- isAvailable zenityUI
+ if ok
+ then return zenityUI
+ else error "zenity is not installed, GUI not available"
+ | otherwise = do
+ l <- availableUIs
+ case l of
+ (u:_) -> return u
+ [] -> return noninteractiveUI
+
+-- Adds a percent to whatever amount the progress bar is at.
+type AddPercent = Percent -> IO ()
+
+withProgressIncremental :: UI -> Title -> Desc -> (AddPercent -> IO a) -> IO a
+withProgressIncremental ui title desc a =
+ withProgress ui title desc $ \setpercent -> do
+ v <- newMVar 0
+ let addpercent = \p -> do
+ oldp <- takeMVar v
+ let !newp = oldp + p
+ putMVar v newp
+ setpercent newp
+ a addpercent
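+
+-- Illustrative only (not part of the original source): an action that
+-- reports progress in two 50% increments via withProgressIncremental.
+demoProgress :: UI -> IO ()
+demoProgress ui = withProgressIncremental ui "Demo" "Two steps" $ \addpercent -> do
+ addpercent 50
+ addpercent 50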
diff --git a/UI/NonInteractive.hs b/UI/NonInteractive.hs
new file mode 100644
index 0000000..cd96254
--- /dev/null
+++ b/UI/NonInteractive.hs
@@ -0,0 +1,40 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module UI.NonInteractive (noninteractiveUI) where
+
+import Types.UI
+import Output
+import Control.Exception
+
+noninteractiveUI :: UI
+noninteractiveUI = UI
+ { isAvailable = return True
+ , showError = myShowError
+ , showInfo = myShowInfo
+ , promptQuestion = myPrompt
+ , promptName = \t d _ -> myPrompt t d
+ , promptPassword = \b t d -> myPrompt t d b
+ , promptKeyId = myPrompt
+ , withProgress = myWithProgress
+ }
+
+myShowError :: Desc -> IO ()
+myShowError desc = warn $ "Error: " ++ desc
+
+myShowInfo :: Title -> Desc -> IO ()
+myShowInfo _title desc = say desc
+
+myPrompt :: Title -> Desc -> x -> IO a
+myPrompt _title desc _ = do
+ say desc
+ error "Not running at a terminal and zenity is not installed; cannot interact with user."
+
+myWithProgress :: Title -> Desc -> ((Percent -> IO ()) -> IO a) -> IO a
+myWithProgress _title desc a = bracket_ setup cleanup (a sendpercent)
+ where
+ setup = say desc
+ sendpercent p = progress (show p ++ "% ")
+ cleanup = say "done"
diff --git a/UI/Readline.hs b/UI/Readline.hs
new file mode 100644
index 0000000..16e4923
--- /dev/null
+++ b/UI/Readline.hs
@@ -0,0 +1,163 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module UI.Readline (readlineUI) where
+
+import Types.UI
+import Types
+import Output
+import System.Console.Readline
+import System.Posix.Terminal
+import System.Posix.IO
+import Control.Exception
+import Data.List
+import Data.Char
+import Text.Read
+import Control.Monad
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+
+readlineUI :: UI
+readlineUI = UI
+ { isAvailable = queryTerminal stdInput
+ , showError = myShowError
+ , showInfo = myShowInfo
+ , promptQuestion = myPromptQuestion
+ , promptName = myPromptName
+ , promptPassword = myPromptPassword
+ , promptKeyId = myPromptKeyId
+ , withProgress = myWithProgress
+ }
+
+myShowError :: Desc -> IO ()
+myShowError desc = do
+ warn $ "Error: " ++ desc
+ _ <- readline "[Press Enter]"
+ say ""
+
+myShowInfo :: Title -> Desc -> IO ()
+myShowInfo title desc = do
+ showTitle title
+ say desc
+ say ""
+
+myPromptQuestion :: Title -> Desc -> Question -> IO Bool
+myPromptQuestion title desc question = bracket_ setup cleanup go
+ where
+ setup = do
+ showTitle title
+ say desc
+ cleanup = say ""
+ go = do
+ mresp <- readline $ question ++ " [y/n] "
+ case mresp of
+ Just s
+ | "y" `isPrefixOf` (map toLower s) ->
+ return True
+ | "n" `isPrefixOf` (map toLower s) ->
+ return False
+ _ -> do
+ say "Please enter 'y' or 'n'"
+ go
+
+myPromptName :: Title -> Desc -> Maybe Name -> (Name -> Maybe Problem) -> IO (Maybe Name)
+myPromptName title desc suggested checkproblem =
+ bracket_ setup cleanup go
+ where
+ setup = do
+ showTitle title
+ say desc
+ cleanup = say ""
+ go = do
+ case suggested of
+ Nothing -> return ()
+ Just (Name b) -> addHistory (BU8.toString b)
+ mname <- readline "Name> "
+ case mname of
+ Just s -> do
+ addHistory s
+ let n = Name $ BU8.fromString s
+ case checkproblem n of
+ Nothing -> do
+ say ""
+ return $ Just n
+ Just problem -> do
+ say problem
+ go
+ Nothing -> return Nothing
+
+myPromptPassword :: Bool -> Title -> Desc -> IO (Maybe Password)
+myPromptPassword confirm title desc = bracket setup cleanup (const prompt)
+ where
+ setup = do
+ showTitle title
+ say desc
+ origattr <- getTerminalAttributes stdInput
+ let newattr = origattr `withoutMode` EnableEcho
+ setTerminalAttributes stdInput newattr Immediately
+ return origattr
+ cleanup origattr = do
+ setTerminalAttributes stdInput origattr Immediately
+ say ""
+ prompt = do
+ ask "Enter password> "
+ p1 <- getLine
+ say ""
+ if confirm
+ then promptconfirm p1
+ else return $ mkpassword p1
+ promptconfirm p1 = do
+ ask "Confirm password> "
+ p2 <- getLine
+ say ""
+ if p1 /= p2
+ then do
+ say "Passwords didn't match, try again..."
+ prompt
+ else do
+ say ""
+ return $ mkpassword p1
+ mkpassword = Just . Password . BU8.fromString
+
+myPromptKeyId :: Title -> Desc -> [(Name, KeyId)] -> IO (Maybe KeyId)
+myPromptKeyId _ _ [] = return Nothing
+myPromptKeyId title desc l = do
+ showTitle title
+ say desc
+ say ""
+ forM_ nl $ \(n, ((Name name), (KeyId kid))) ->
+ say $ show n ++ ".\t" ++ BU8.toString name ++ " (keyid " ++ T.unpack kid ++ ")"
+ prompt
+ where
+ nl = zip [1 :: Integer ..] l
+ prompt = do
+ ask "Enter number> "
+ r <- getLine
+ say ""
+ case readMaybe r of
+ Just n
+ | n > 0 && n <= length l -> do
+ say ""
+ return $ Just $ snd (l !! pred n)
+ _ -> do
+ say $ "Enter a number from 1 to " ++ show (length l)
+ prompt
+
+myWithProgress :: Title -> Desc -> ((Percent -> IO ()) -> IO a) -> IO a
+myWithProgress title desc a = bracket_ setup cleanup (a sendpercent)
+ where
+ setup = do
+ showTitle title
+ say desc
+ sendpercent p = ask (show p ++ "% ")
+ cleanup = do
+ say "done"
+ say ""
+
+showTitle :: Title -> IO ()
+showTitle title = do
+ say title
+ say (replicate (length title) '-')
+ say ""
diff --git a/UI/Zenity.hs b/UI/Zenity.hs
new file mode 100644
index 0000000..85347c6
--- /dev/null
+++ b/UI/Zenity.hs
@@ -0,0 +1,183 @@
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module UI.Zenity (zenityUI) where
+
+import Types
+import Types.UI
+import Control.Monad
+import System.Process
+import Control.Exception
+import System.IO
+import System.FilePath
+import System.Directory
+import System.Exit
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Text as T
+
+zenityUI :: UI
+zenityUI = UI
+ { isAvailable = do
+ ps <- getSearchPath
+ loc <- filterM (\p -> doesFileExist (p </> "zenity")) ps
+ return (not (null loc))
+ , showError = myShowError
+ , showInfo = myShowInfo
+ , promptQuestion = myPromptQuestion
+ , promptName = myPromptName
+ , promptPassword = myPromptPassword
+ , promptKeyId = myPromptKeyId
+ , withProgress = myWithProgress
+ }
+
+myShowError :: Desc -> IO ()
+myShowError desc = bracket go cleanup (\_ -> return ())
+ where
+ go = runZenity
+ [ "--error"
+ , "--title", "keysafe"
+ , "--text", "Error: " ++ escape desc
+ ]
+ cleanup h = do
+ _ <- waitZenity h
+ return ()
+
+myShowInfo :: Title -> Desc -> IO ()
+myShowInfo title desc = bracket go cleanup (\_ -> return ())
+ where
+ go = runZenity
+ [ "--info"
+ , "--title", title
+ , "--text", escape desc
+ ]
+ cleanup h = do
+ _ <- waitZenity h
+ return ()
+
+myPromptQuestion :: Title -> Desc -> Question -> IO Bool
+myPromptQuestion title desc question = do
+ h <- runZenity
+ [ "--question"
+ , "--title", title
+ , "--text", escape $ desc ++ "\n" ++ question
+ ]
+ (_, ok) <- waitZenity h
+ return ok
+
+myPromptName :: Title -> Desc -> Maybe Name -> (Name -> Maybe Problem) -> IO (Maybe Name)
+myPromptName title desc suggested checkproblem = go ""
+ where
+ go extradesc = do
+ h <- runZenity
+ [ "--entry"
+ , "--title", title
+ , "--text", escape $ desc ++ "\n" ++ extradesc
+ , "--entry-text", case suggested of
+ Nothing -> ""
+ Just (Name b) -> BU8.toString b
+ ]
+ (ret, ok) <- waitZenity h
+ if ok
+ then
+ let n = Name $ BU8.fromString ret
+ in case checkproblem n of
+ Nothing -> return $ Just n
+ Just problem -> go problem
+ else return Nothing
+
+myPromptPassword :: Bool -> Title -> Desc -> IO (Maybe Password)
+myPromptPassword confirm title desc = go ""
+ where
+ go extradesc = do
+ h <- runZenity $
+ [ "--forms"
+ , "--title", title
+ , "--text", escape $ desc ++ "\n" ++ extradesc ++ "\n"
+ , "--separator", "\BEL"
+ , "--add-password", "Enter password"
+ ] ++ if confirm
+ then [ "--add-password", "Confirm password" ]
+ else []
+ (ret, ok) <- waitZenity h
+ if ok
+ then if confirm
+ then
+ let (p1, _:p2) = break (== '\BEL') ret
+ in if p1 /= p2
+ then go "Passwords didn't match, try again..."
+ else return $ Just $ Password $ BU8.fromString p1
+ else return $ Just $ Password $ BU8.fromString ret
+ else return Nothing
+
+myPromptKeyId :: Title -> Desc -> [(Name, KeyId)] -> IO (Maybe KeyId)
+myPromptKeyId _ _ [] = return Nothing
+myPromptKeyId title desc l = do
+ h <- runZenity $
+ [ "--list"
+ , "--title", title
+ , "--text", escape desc
+ , "--column", "gpg secret key name"
+ , "--column", "keyid"
+ , "--print-column", "ALL"
+ , "--separator", "\BEL"
+ , "--width", "500"
+ ] ++ concatMap (\(Name n, KeyId kid) -> [BU8.toString n, T.unpack kid]) l
+ (ret, ok) <- waitZenity h
+ if ok
+ then do
+ let (_n, _:kid) = break (== '\BEL') ret
+ return $ Just (KeyId (T.pack kid))
+ else return Nothing
+
+myWithProgress :: Title -> Desc -> ((Percent -> IO ()) -> IO a) -> IO a
+myWithProgress title desc a = bracket setup teardown (a . sendpercent)
+ where
+ setup = do
+ h <- runZenity
+ [ "--progress"
+ , "--title", title
+ , "--text", escape desc
+ , "--auto-close"
+ , "--auto-kill"
+ ]
+ return h
+ sendpercent h p = sendZenity h (show p)
+ teardown h = do
+ _ <- waitZenity h
+ return ()
+
+data ZenityHandle = ZenityHandle Handle Handle ProcessHandle
+
+runZenity :: [String] -> IO ZenityHandle
+runZenity ps = do
+ (Just hin, Just hout, Nothing, ph) <- createProcess
+ (proc "zenity" ps)
+ { std_out = CreatePipe
+ , std_in = CreatePipe
+ }
+ return $ ZenityHandle hin hout ph
+
+sendZenity :: ZenityHandle -> String -> IO ()
+sendZenity (ZenityHandle hin _ _) s = do
+ hPutStrLn hin s
+ hFlush hin
+
+waitZenity :: ZenityHandle -> IO (String, Bool)
+waitZenity (ZenityHandle hin hout ph) = do
+ hClose hin
+ ret <- hGetContents hout
+ exit <- waitForProcess ph
+ return (takeWhile (/= '\n') ret, exit == ExitSuccess)
+
+-- Zenity parses --text as html and will choke on invalid tags
+-- and '&' used outside a html entity. We don't want to use html, so
+-- escape these things.
+escape :: String -> String
+escape = concatMap esc
+ where
+ esc '&' = "&amp;"
+ esc '<' = "&lt;"
+ esc '>' = "&gt;"
+ esc c = [c]
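+
+-- For example (illustrative, not from the original source):
+-- escape "a <b> & c" == "a &lt;b&gt; &amp; c"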
diff --git a/Utility/Data.hs b/Utility/Data.hs
new file mode 100644
index 0000000..27c0a82
--- /dev/null
+++ b/Utility/Data.hs
@@ -0,0 +1,19 @@
+{- utilities for simple data types
+ -
+ - Copyright 2013 Joey Hess <id@joeyh.name>
+ -
+ - License: BSD-2-clause
+ -}
+
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+module Utility.Data where
+
+{- First item in the list that is not Nothing. -}
+firstJust :: Eq a => [Maybe a] -> Maybe a
+firstJust ms = case dropWhile (== Nothing) ms of
+ [] -> Nothing
+ (md:_) -> md
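+
+-- For example (illustrative): firstJust [Nothing, Just 1, Just 2] == Just 1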
+
+eitherToMaybe :: Either a b -> Maybe b
+eitherToMaybe = either (const Nothing) Just
diff --git a/Utility/Env.hs b/Utility/Env.hs
new file mode 100644
index 0000000..c56f4ec
--- /dev/null
+++ b/Utility/Env.hs
@@ -0,0 +1,84 @@
+{- portable environment variables
+ -
+ - Copyright 2013 Joey Hess <id@joeyh.name>
+ -
+ - License: BSD-2-clause
+ -}
+
+{-# LANGUAGE CPP #-}
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+module Utility.Env where
+
+#ifdef mingw32_HOST_OS
+import Utility.Exception
+import Control.Applicative
+import Data.Maybe
+import Prelude
+import qualified System.Environment as E
+import qualified System.SetEnv
+#else
+import qualified System.Posix.Env as PE
+#endif
+
+getEnv :: String -> IO (Maybe String)
+#ifndef mingw32_HOST_OS
+getEnv = PE.getEnv
+#else
+getEnv = catchMaybeIO . E.getEnv
+#endif
+
+getEnvDefault :: String -> String -> IO String
+#ifndef mingw32_HOST_OS
+getEnvDefault = PE.getEnvDefault
+#else
+getEnvDefault var fallback = fromMaybe fallback <$> getEnv var
+#endif
+
+getEnvironment :: IO [(String, String)]
+#ifndef mingw32_HOST_OS
+getEnvironment = PE.getEnvironment
+#else
+getEnvironment = E.getEnvironment
+#endif
+
+{- Sets an environment variable. To overwrite an existing variable,
+ - overwrite must be True.
+ -
+ - On Windows, setting a variable to "" unsets it. -}
+setEnv :: String -> String -> Bool -> IO ()
+#ifndef mingw32_HOST_OS
+setEnv var val overwrite = PE.setEnv var val overwrite
+#else
+setEnv var val True = System.SetEnv.setEnv var val
+setEnv var val False = do
+ r <- getEnv var
+ case r of
+ Nothing -> setEnv var val True
+ Just _ -> return ()
+#endif
+
+unsetEnv :: String -> IO ()
+#ifndef mingw32_HOST_OS
+unsetEnv = PE.unsetEnv
+#else
+unsetEnv = System.SetEnv.unsetEnv
+#endif
+
+{- Adds the environment variable to the input environment. If already
+ - present in the list, removes the old value.
+ -
+ - This does not really belong here, but Data.AssocList is for some reason
+ - buried inside hxt.
+ -}
+addEntry :: Eq k => k -> v -> [(k, v)] -> [(k, v)]
+addEntry k v l = ( (k,v) : ) $! delEntry k l
+
+addEntries :: Eq k => [(k, v)] -> [(k, v)] -> [(k, v)]
+addEntries = foldr (.) id . map (uncurry addEntry) . reverse
+
+delEntry :: Eq k => k -> [(k, v)] -> [(k, v)]
+delEntry _ [] = []
+delEntry k (x@(k1,_) : rest)
+ | k == k1 = rest
+ | otherwise = ( x : ) $! delEntry k rest
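+
+-- For illustration (not from the original source):
+-- addEntry "A" "1" [("A","0"),("B","2")] == [("A","1"),("B","2")]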
diff --git a/Utility/Exception.hs b/Utility/Exception.hs
new file mode 100644
index 0000000..0ffc710
--- /dev/null
+++ b/Utility/Exception.hs
@@ -0,0 +1,113 @@
+{- Simple IO exception handling (and some more)
+ -
+ - Copyright 2011-2015 Joey Hess <id@joeyh.name>
+ -
+ - License: BSD-2-clause
+ -}
+
+{-# LANGUAGE CPP, ScopedTypeVariables #-}
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+module Utility.Exception (
+ module X,
+ catchBoolIO,
+ catchMaybeIO,
+ catchDefaultIO,
+ catchMsgIO,
+ catchIO,
+ tryIO,
+ bracketIO,
+ catchNonAsync,
+ tryNonAsync,
+ tryWhenExists,
+ catchIOErrorType,
+ IOErrorType(..),
+ catchPermissionDenied,
+) where
+
+import Control.Monad.Catch as X hiding (Handler)
+import qualified Control.Monad.Catch as M
+import Control.Exception (IOException, AsyncException)
+#ifdef MIN_VERSION_GLASGOW_HASKELL
+#if MIN_VERSION_GLASGOW_HASKELL(7,10,0,0)
+import Control.Exception (SomeAsyncException)
+#endif
+#endif
+import Control.Monad
+import Control.Monad.IO.Class (liftIO, MonadIO)
+import System.IO.Error (isDoesNotExistError, ioeGetErrorType)
+import GHC.IO.Exception (IOErrorType(..))
+
+import Utility.Data
+
+{- Catches IO errors and returns a Bool -}
+catchBoolIO :: MonadCatch m => m Bool -> m Bool
+catchBoolIO = catchDefaultIO False
+
+{- Catches IO errors and returns a Maybe -}
+catchMaybeIO :: MonadCatch m => m a -> m (Maybe a)
+catchMaybeIO a = catchDefaultIO Nothing $ a >>= (return . Just)
+
+{- Catches IO errors and returns a default value. -}
+catchDefaultIO :: MonadCatch m => a -> m a -> m a
+catchDefaultIO def a = catchIO a (const $ return def)
+
+{- Catches IO errors and returns the error message. -}
+catchMsgIO :: MonadCatch m => m a -> m (Either String a)
+catchMsgIO a = do
+ v <- tryIO a
+ return $ either (Left . show) Right v
+
+{- catch specialized for IO errors only -}
+catchIO :: MonadCatch m => m a -> (IOException -> m a) -> m a
+catchIO = M.catch
+
+{- try specialized for IO errors only -}
+tryIO :: MonadCatch m => m a -> m (Either IOException a)
+tryIO = M.try
+
+{- bracket with setup and cleanup actions lifted to IO.
+ -
+ - Note that unlike catchIO and tryIO, this catches all exceptions. -}
+bracketIO :: (MonadMask m, MonadIO m) => IO v -> (v -> IO b) -> (v -> m a) -> m a
+bracketIO setup cleanup = bracket (liftIO setup) (liftIO . cleanup)
+
+{- Catches all exceptions except for async exceptions.
+ - This is often better to use than catching them all, so that
+ - ThreadKilled and UserInterrupt get through.
+ -}
+catchNonAsync :: MonadCatch m => m a -> (SomeException -> m a) -> m a
+catchNonAsync a onerr = a `catches`
+ [ M.Handler (\ (e :: AsyncException) -> throwM e)
+#ifdef MIN_VERSION_GLASGOW_HASKELL
+#if MIN_VERSION_GLASGOW_HASKELL(7,10,0,0)
+ , M.Handler (\ (e :: SomeAsyncException) -> throwM e)
+#endif
+#endif
+ , M.Handler (\ (e :: SomeException) -> onerr e)
+ ]
+
+tryNonAsync :: MonadCatch m => m a -> m (Either SomeException a)
+tryNonAsync a = go `catchNonAsync` (return . Left)
+ where
+ go = do
+ v <- a
+ return (Right v)
+
+{- Catches only DoesNotExist exceptions, and lets all others through. -}
+tryWhenExists :: MonadCatch m => m a -> m (Maybe a)
+tryWhenExists a = do
+ v <- tryJust (guard . isDoesNotExistError) a
+ return (eitherToMaybe v)
+
+{- Catches only IO exceptions of a particular type.
+ - For example, use HardwareFault to catch disk IO errors. -}
+catchIOErrorType :: MonadCatch m => IOErrorType -> (IOException -> m a) -> m a -> m a
+catchIOErrorType errtype onmatchingerr a = catchIO a onlymatching
+ where
+ onlymatching e
+ | ioeGetErrorType e == errtype = onmatchingerr e
+ | otherwise = throwM e
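+
+-- Illustrative example (not part of the original source, and not exported):
+-- fall back to an empty string on disk IO errors of type HardwareFault.
+readFileOrEmpty :: FilePath -> IO String
+readFileOrEmpty f = catchIOErrorType HardwareFault (\_ -> return "") (readFile f)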
+
+catchPermissionDenied :: MonadCatch m => (IOException -> m a) -> m a -> m a
+catchPermissionDenied = catchIOErrorType PermissionDenied
diff --git a/Utility/FreeDesktop.hs b/Utility/FreeDesktop.hs
new file mode 100644
index 0000000..76b07fb
--- /dev/null
+++ b/Utility/FreeDesktop.hs
@@ -0,0 +1,147 @@
+{- Freedesktop.org specifications
+ -
+ - http://standards.freedesktop.org/basedir-spec/latest/
+ - http://standards.freedesktop.org/desktop-entry-spec/latest/
+ - http://standards.freedesktop.org/menu-spec/latest/
+ - http://standards.freedesktop.org/icon-theme-spec/latest/
+ -
+ - Copyright 2012 Joey Hess <id@joeyh.name>
+ -
+ - License: BSD-2-clause
+ -}
+
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+module Utility.FreeDesktop (
+ DesktopEntry,
+ genDesktopEntry,
+ buildDesktopMenuFile,
+ writeDesktopMenuFile,
+ desktopMenuFilePath,
+ autoStartPath,
+ iconDir,
+ iconFilePath,
+ systemDataDir,
+ systemConfigDir,
+ userDataDir,
+ userConfigDir,
+ userDesktopDir
+) where
+
+import Utility.Exception
+import Utility.UserInfo
+
+import System.Environment
+import System.FilePath
+import System.Directory
+import System.Process
+import Data.List
+import Data.Maybe
+import Control.Applicative
+import Prelude
+
+type DesktopEntry = [(Key, Value)]
+
+type Key = String
+
+data Value = StringV String | BoolV Bool | NumericV Float | ListV [Value]
+
+toString :: Value -> String
+toString (StringV s) = s
+toString (BoolV b)
+ | b = "true"
+ | otherwise = "false"
+toString (NumericV f) = show f
+toString (ListV l)
+ | null l = ""
+ | otherwise = (intercalate ";" $ map (concatMap escapesemi . toString) l) ++ ";"
+ where
+ escapesemi ';' = "\\;"
+ escapesemi c = [c]
+
+genDesktopEntry :: String -> String -> Bool -> FilePath -> Maybe String -> [String] -> DesktopEntry
+genDesktopEntry name comment terminal program icon categories = catMaybes
+ [ item "Type" StringV "Application"
+ , item "Version" NumericV 1.0
+ , item "Name" StringV name
+ , item "Comment" StringV comment
+ , item "Terminal" BoolV terminal
+ , item "Exec" StringV program
+ , maybe Nothing (item "Icon" StringV) icon
+ , item "Categories" ListV (map StringV categories)
+ ]
+ where
+ item x c y = Just (x, c y)
+
+buildDesktopMenuFile :: DesktopEntry -> String
+buildDesktopMenuFile d = unlines ("[Desktop Entry]" : map keyvalue d) ++ "\n"
+ where
+ keyvalue (k, v) = k ++ "=" ++ toString v
+
+writeDesktopMenuFile :: DesktopEntry -> String -> IO ()
+writeDesktopMenuFile d file = do
+ createDirectoryIfMissing True (takeDirectory file)
+ writeFile file $ buildDesktopMenuFile d
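+
+-- Illustrative only (not part of the original source, and not exported):
+-- build a desktop entry roughly like the keysafe.desktop file shipped
+-- with keysafe, and render it to a string.
+exampleDesktopEntry :: String
+exampleDesktopEntry = buildDesktopMenuFile $ genDesktopEntry
+ "Keysafe" "Back up your key" False "/usr/bin/keysafe" Nothing ["Network"]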
+
+{- Path to use for a desktop menu file, in either the systemDataDir or
+ - the userDataDir -}
+desktopMenuFilePath :: String -> FilePath -> FilePath
+desktopMenuFilePath basename datadir =
+ datadir </> "applications" </> desktopfile basename
+
+{- Path to use for a desktop autostart file, in either the systemDataDir
+ - or the userDataDir -}
+autoStartPath :: String -> FilePath -> FilePath
+autoStartPath basename configdir =
+ configdir </> "autostart" </> desktopfile basename
+
+{- Base directory to install an icon file, in either the systemDataDir
+ - or the userDatadir. -}
+iconDir :: FilePath -> FilePath
+iconDir datadir = datadir </> "icons" </> "hicolor"
+
+{- Filename of an icon, given the iconDir to use.
+ -
+ - The resolution is something like "48x48" or "scalable". -}
+iconFilePath :: FilePath -> String -> FilePath -> FilePath
+iconFilePath file resolution icondir =
+ icondir </> resolution </> "apps" </> file
+
+desktopfile :: FilePath -> FilePath
+desktopfile f = f ++ ".desktop"
+
+{- Directory used for installation of system-wide data files. -}
+systemDataDir :: FilePath
+systemDataDir = "/usr/share"
+
+{- Directory used for installation of system-wide config files. -}
+systemConfigDir :: FilePath
+systemConfigDir = "/etc/xdg"
+
+{- Directory for user data files. -}
+userDataDir :: IO FilePath
+userDataDir = xdgEnvHome "DATA_HOME" ".local/share"
+
+{- Directory for user config files. -}
+userConfigDir :: IO FilePath
+userConfigDir = xdgEnvHome "CONFIG_HOME" ".config"
+
+{- Directory for the user's Desktop, which may be localized.
+ -
+ - This lookup is not very fast; the config file is in a shell format
+ - that is best parsed by shell, so xdg-user-dir is used, with a fallback
+ - to ~/Desktop. -}
+userDesktopDir :: IO FilePath
+userDesktopDir = maybe fallback return =<< (parse <$> xdg_user_dir)
+ where
+ parse s = case lines <$> s of
+ Just (l:_) -> Just l
+ _ -> Nothing
+ xdg_user_dir = catchMaybeIO $ readProcess "xdg-user-dir" ["DESKTOP"] []
+ fallback = xdgEnvHome "DESKTOP_DIR" "Desktop"
+
+xdgEnvHome :: String -> String -> IO String
+xdgEnvHome envbase homedef = do
+ home <- myHomeDir
+ catchDefaultIO (home </> homedef) $
+ getEnv $ "XDG_" ++ envbase
diff --git a/Utility/UserInfo.hs b/Utility/UserInfo.hs
new file mode 100644
index 0000000..ec0b0d0
--- /dev/null
+++ b/Utility/UserInfo.hs
@@ -0,0 +1,62 @@
+{- user info
+ -
+ - Copyright 2012 Joey Hess <id@joeyh.name>
+ -
+ - License: BSD-2-clause
+ -}
+
+{-# LANGUAGE CPP #-}
+{-# OPTIONS_GHC -fno-warn-tabs #-}
+
+module Utility.UserInfo (
+ myHomeDir,
+ myUserName,
+ myUserGecos,
+) where
+
+import Utility.Env
+import Utility.Data
+
+import System.PosixCompat
+import Control.Applicative
+import Prelude
+
+{- Current user's home directory.
+ -
+ - getpwent will fail on LDAP or NIS, so use HOME if set. -}
+myHomeDir :: IO FilePath
+myHomeDir = either error return =<< myVal env homeDirectory
+ where
+#ifndef mingw32_HOST_OS
+ env = ["HOME"]
+#else
+ env = ["USERPROFILE", "HOME"] -- HOME is used in Cygwin
+#endif
+
+{- Current user's user name. -}
+myUserName :: IO (Either String String)
+myUserName = myVal env userName
+ where
+#ifndef mingw32_HOST_OS
+ env = ["USER", "LOGNAME"]
+#else
+ env = ["USERNAME", "USER", "LOGNAME"]
+#endif
+
+myUserGecos :: IO (Maybe String)
+-- userGecos crashes on Android and is not available on Windows.
+#if defined(__ANDROID__) || defined(mingw32_HOST_OS)
+myUserGecos = return Nothing
+#else
+myUserGecos = eitherToMaybe <$> myVal [] userGecos
+#endif
+
+myVal :: [String] -> (UserEntry -> String) -> IO (Either String String)
+myVal envvars extract = go envvars
+ where
+#ifndef mingw32_HOST_OS
+ go [] = Right . extract <$> (getUserEntryForID =<< getEffectiveUserID)
+#else
+ go [] = return $ Left ("environment not set: " ++ show envvars)
+#endif
+ go (v:vs) = maybe (go vs) (return . Right) =<< getEnv v
diff --git a/keysafe.1 b/keysafe.1
new file mode 100644
index 0000000..73d0b4d
--- /dev/null
+++ b/keysafe.1
@@ -0,0 +1,166 @@
+.\" -*- nroff -*-
+.TH keysafe 1 "Commands"
+.SH NAME
+keysafe \- securely back up secret keys
+.SH SYNOPSIS
+.B keysafe [options]
+.SH DESCRIPTION
+.I keysafe
+securely backs up a gpg secret key or other short secret to the cloud.
+.PP
+This is not intended for storing Debian Developer keys that yield root on
+ten million systems. It's about making gpg usable for people who currently
+don't use it, and who would find it too hard to use paperkey(1) to back up
+and restore their key when they reinstall their laptop.
+.PP
+To get started with keysafe, you can run it without any options. If your
+account has a gpg secret key, keysafe will prompt you for a password to
+protect it with, and a name to store it under, and will back it up securely
+to the cloud.
+.PP
+To restore from the backup, just run keysafe from an account that does not
+have a gpg secret key (or use the --restore option to force restore mode).
+Keysafe will prompt for the same name and password, and restore the key.
+.PP
+Note that the backup operation takes half an hour or so,
+and the restore operation takes an hour or so. Keysafe encrypts
+the secret key with the password in a way that takes a lot of computation
+to decrypt. This makes it hard for an attacker to crack your password,
+because each guess they make has a real computational cost.
+.PP
+Keysafe is designed so that it should take millions of dollars of computer
+time to crack any fairly good password. With a truly good
+password, such as four random words, the cracking cost should be many
+trillions of dollars. Keysafe checks your password strength (using the
+zxcvbn library), and shows an estimate of the cost to crack your password,
+before backing up the key.
+.PP
+Whether it's safe to store your gpg secret key in the cloud is your
+own decision. Keysafe comes with no warranty.
+.SH OPTIONS
+.PP
+.IP --backup
+Force backup mode. This is the default if you have a gpg secret key.
+.PP
+.IP --restore
+Force restore mode. This is the default if you do not have a gpg secret
+key.
+.PP
+.IP --uploadqueued
+Upload to servers any data that was queued by a previous keysafe run.
+This is designed to be put in a cron job.
+.PP
+.IP --autostart
+This is run automatically on desktop login by the desktop autostart
+file included with keysafe. It checks for any new gpg keys that have
+not been backed up, and prompts to see if the user wants to back them up
+with keysafe. Also uploads any queued data, and in the future may perform
+other checks for problems.
+.PP
+.IP --server
+Runs keysafe in server mode, accepting objects and storing them.
+Use --store-directory to configure where the server stores objects,
+and --port and --address to configure how the server listens to
+connections. It's recommended to only expose keysafe servers over a Tor
+hidden service.
+.PP
+.IP "--backup-server BACKUPDIR"
+Run this on a server to populate BACKUPDIR with a gpg-encrypted backup
+of all the objects stored in the --store-directory. This is designed
+to be rsynced offsite (with --delete) to back up a keysafe server with
+minimal information leakage.
+.PP
+.IP "--restore-server BACKUPDIR"
+Restore all objects present in the gpg-encrypted
+backups in the specified directory.
+.PP
+.IP "--chaff HOSTNAME"
+Upload random data to a keysafe server. --port can be used to specify
+the server's port. Continues uploading data until interrupted with ctrl-c.
+.PP
+.IP "--chaff-max-delay SECONDS"
+Specify a delay between chaff uploads. Will delay a
+random amount between 0 and this many seconds.
+.PP
+.IP --check-servers
+Tries to connect to each server in the server list.
+Displays the server's MOTD, and the amount of data
+stored on it. Prints a message to stderr and exits
+nonzero if any of the servers are not accessible.
+.PP
+.IP --benchmark
+Benchmark speed of keysafe's cryptographic primitives.
+.PP
+.IP --test
+Run test suite.
+.PP
+.IP "--gpgkeyid KEYID"
+Specify keyid of gpg key to back up or restore. This is useful if you
+have multiple gpg keys. But, when this option is used to back up a key,
+you have to also provide it to restore that key.
+.PP
+.IP "--keyfile FILE"
+To back up anything other than a gpg secret key, use this option.
+To restore from the backup, you must use this same option, and pass the
+exact same filename.
+.PP
+.IP "--store-directory dir"
+Where to store data locally. For the client, data is
+stored here before it is uploaded to the server. For
+the server, this is where it stores its data.
+(default: ~/.keysafe/objects/)
+.PP
+.IP --gui
+Use the GUI for interaction. The default is to use the
+readline interface when run in a terminal, and the GUI otherwise.
+The GUI is currently implemented using zenity(1).
+.PP
+.IP "--totalshares M --neededshares N"
+These options have to be specified together.
+The default values are --totalshares 3 --neededshares 2.
+Keysafe uses Shamir secret sharing to create M shares of the encrypted
+secret key, and each share is stored on a different server.
+To restore the data, only N of the shares are needed. If you specify
+these options when backing up a secret key, you also must specify them
+with the same values to restore that secret key.
+.PP
+.IP "--name N"
+Specify name used for key backup/restore, avoiding the usual prompt.
+.PP
+.IP "--othername N"
+Specify other name used for key backup/restore, avoiding the usual prompt.
+.PP
+.IP "--add-storage-directory DIR"
+Add the directory to the list of locations keysafe
+will use for backup/restore of keys. Keysafe will use
+the directory first, before any of its built-in servers.
+.PP
+.IP "--add-server HOST[:PORT]"
+Add the server to the server list which keysafe will
+use for backup/restore of keys. Keysafe will use the
+server first, before any of its built-in servers.
+.PP
+.IP "--port P"
+Port for server to listen on. (default: 80)
+.PP
+.IP "--address A"
+Address for server to bind to. (Use "*" to bind to
+all addresses.) (default: "127.0.0.1")
+.PP
+.IP "--months-to-fill-half-disk N"
+Server rate-limits requests and requires proof of
+work, to avoid too many objects being stored. This is
+a lower bound on how long it could possibly take for
+half of the current disk space to be
+filled. (default: 12)
+.PP
+.IP "--motd MESSAGE"
+The server's Message Of The Day.
+.PP
+.IP --testmode
+Avoid using expensive cryptographic operations to secure data.
+Use for testing only, not with real secret keys.
+.SH SEE ALSO
+<https://joeyh.name/code/keysafe/>
+.SH AUTHOR
+Joey Hess <id@joeyh.name>
diff --git a/keysafe.autostart b/keysafe.autostart
new file mode 100644
index 0000000..0705e51
--- /dev/null
+++ b/keysafe.autostart
@@ -0,0 +1,9 @@
+[Desktop Entry]
+Type=Application
+Version=1.0
+Name=Keysafe
+Comment=Autostart
+Terminal=false
+Exec=keysafe --autostart
+Categories=
+
diff --git a/keysafe.cabal b/keysafe.cabal
new file mode 100644
index 0000000..335e8a3
--- /dev/null
+++ b/keysafe.cabal
@@ -0,0 +1,124 @@
+Name: keysafe
+Version: 0.20161022
+Cabal-Version: >= 1.8
+Maintainer: Joey Hess <joey@kitenet.net>
+Author: Joey Hess
+Stability: Experimental
+Copyright: 2016 Joey Hess
+License: AGPL-3
+Homepage: https://joeyh.name/code/keysafe/
+Category: Utility
+Build-Type: Custom
+Synopsis: back up a secret key securely to the cloud
+Description:
+ Keysafe backs up a secret key to several cloud servers, split up
+ so that no one server can access the whole secret by itself.
+ .
+ A password is used to encrypt the data, and decryption is made
+ computationally expensive, so cracking the password is infeasible.
+License-File: AGPL
+Extra-Source-Files:
+ CHANGELOG
+ TODO
+ INSTALL
+ keysafe.1
+ keysafe.service
+ keysafe.init
+ keysafe.default
+ keysafe.desktop
+ keysafe.autostart
+ Makefile
+
+Executable keysafe
+ Main-Is: keysafe.hs
+ GHC-Options: -threaded -Wall -fno-warn-tabs -O2
+ Build-Depends:
+ base (>= 4.5 && < 5.0)
+ , bytestring == 0.10.*
+ , deepseq == 1.4.*
+ , random == 1.1.*
+ , secret-sharing == 1.0.*
+ , raaz == 0.0.2
+ , time == 1.5.*
+ , containers == 0.5.*
+ , binary == 0.7.*
+ , text == 1.2.*
+ , utf8-string == 1.0.*
+ , unix == 2.7.*
+ , filepath == 1.4.*
+ , split == 0.2.*
+ , directory == 1.2.*
+ , process == 1.2.*
+ , optparse-applicative == 0.12.*
+ , readline == 1.0.*
+ , zxcvbn-c == 1.0.*
+ , servant == 0.7.*
+ , servant-server == 0.7.*
+ , servant-client == 0.7.*
+ , aeson == 0.11.*
+ , wai == 3.2.*
+ , warp == 3.2.*
+ , http-client == 0.4.*
+ , transformers == 0.4.*
+ , stm == 2.4.*
+ , socks == 0.5.*
+ , network == 2.6.*
+ , token-bucket == 0.1.*
+ , bloomfilter == 2.0.*
+ , disk-free-space == 0.1.*
+ , lifted-base == 0.2.*
+ , unbounded-delays == 0.1.*
+ , fast-logger == 2.4.*
+ , SafeSemaphore == 0.10.*
+ , async == 2.1.*
+ , unix-compat == 0.4.*
+ , exceptions == 0.8.*
+ , random-shuffle == 0.0.*
+ , MonadRandom == 0.4.*
+ , argon2 == 1.2.*
+ Other-Modules:
+ AutoStart
+ BackupLog
+ Benchmark
+ ByteStrings
+ CmdLine
+ Cost
+ Encryption
+ Entropy
+ ExpensiveHash
+ Gpg
+ HTTP
+ HTTP.Client
+ HTTP.Logger
+ HTTP.ProofOfWork
+ HTTP.Server
+ HTTP.RateLimit
+ Output
+ SecretKey
+ Serialization
+ ServerBackup
+ Servers
+ Share
+ Storage
+ Storage.Local
+ Storage.Network
+ Tests
+ Tunables
+ Types
+ Types.Cost
+ Types.Server
+ Types.Storage
+ Types.UI
+ UI
+ UI.Readline
+ UI.NonInteractive
+ UI.Zenity
+ Utility.Data
+ Utility.Env
+ Utility.Exception
+ Utility.FreeDesktop
+ Utility.UserInfo
+
+source-repository head
+ type: git
+ location: git://git.joeyh.name/keysafe.git
diff --git a/keysafe.default b/keysafe.default
new file mode 100644
index 0000000..1600d0a
--- /dev/null
+++ b/keysafe.default
@@ -0,0 +1,2 @@
+# Parameters to pass to keysafe when it's started as a daemon.
+DAEMON_PARAMS="--port 4242 --store-directory=/var/lib/keysafe/"
diff --git a/keysafe.desktop b/keysafe.desktop
new file mode 100644
index 0000000..51077c0
--- /dev/null
+++ b/keysafe.desktop
@@ -0,0 +1,9 @@
+[Desktop Entry]
+Type=Application
+Version=1.0
+Name=Keysafe
+Comment=Back up or restore your private Gnupg key with Keysafe
+Terminal=false
+Exec=/usr/bin/keysafe
+Categories=Network;
+Keywords=backup;key;encryption;gnupg;openpgp;pgp;gpg
diff --git a/keysafe.hs b/keysafe.hs
new file mode 100644
index 0000000..996c0a7
--- /dev/null
+++ b/keysafe.hs
@@ -0,0 +1,459 @@
+{-# LANGUAGE OverloadedStrings, BangPatterns #-}
+
+{- Copyright 2016 Joey Hess <id@joeyh.name>
+ -
+ - Licensed under the GNU AGPL version 3 or higher.
+ -}
+
+module Main where
+
+import Types
+import Tunables
+import qualified CmdLine
+import UI
+import Output
+import Encryption
+import Entropy
+import Benchmark
+import Tests
+import Cost
+import SecretKey
+import Share
+import Storage
+import Types.Server
+import BackupLog
+import AutoStart
+import HTTP
+import HTTP.Server
+import HTTP.Client
+import HTTP.ProofOfWork
+import ServerBackup
+import qualified Gpg
+import Data.Maybe
+import Data.Time.Clock
+import Data.Time.Calendar
+import Data.Monoid
+import Data.List
+import Control.DeepSeq
+import Control.Concurrent.Async
+import qualified Data.Text as T
+import qualified Data.ByteString as B
+import qualified Data.ByteString.UTF8 as BU8
+import qualified Data.Set as S
+import System.Posix.User (userGecos, getUserEntryForID, getEffectiveUserID)
+
+main :: IO ()
+main = do
+ cmdline <- CmdLine.get
+ ui <- selectUI (CmdLine.gui cmdline)
+ let mkt = CmdLine.customizeShareParams cmdline
+ (tunables, possibletunables) <- if CmdLine.testMode cmdline
+ then do
+ showInfo ui "Test mode"
+ "Keysafe is running in test mode. This is not secure, and should not be used with real secret keys!"
+ return (mkt testModeTunables, [mkt testModeTunables])
+ else return (mkt defaultTunables, map (mkt . snd) knownTunings)
+ dispatch cmdline ui tunables possibletunables
+
+dispatch :: CmdLine.CmdLine -> UI -> Tunables -> [Tunables] -> IO ()
+dispatch cmdline ui tunables possibletunables = do
+ mode <- CmdLine.selectMode cmdline
+ go mode (CmdLine.secretkeysource cmdline)
+ where
+ go CmdLine.Backup (Just secretkeysource) =
+ backup cmdline ui tunables (Distinguisher secretkeysource)
+ =<< getSecretKey secretkeysource
+ go CmdLine.Restore (Just secretkeydest) =
+ restore cmdline ui possibletunables (Distinguisher secretkeydest)
+ go CmdLine.Backup Nothing =
+ backup cmdline ui tunables AnyGpgKey
+ =<< Gpg.getKeyToBackup ui
+ go CmdLine.Restore Nothing =
+ restore cmdline ui possibletunables AnyGpgKey
+ go CmdLine.UploadQueued _ =
+ uploadQueued ui (CmdLine.localstoragedirectory cmdline)
+ go CmdLine.AutoStart _ =
+ autoStart cmdline tunables ui
+ go (CmdLine.Server) _ =
+ runServer
+ (CmdLine.localstoragedirectory cmdline)
+ (CmdLine.serverConfig cmdline)
+ go (CmdLine.BackupServer d) _ =
+ backupServer (CmdLine.localstoragedirectory cmdline) d
+ go (CmdLine.RestoreServer d) _ =
+ restoreServer (CmdLine.localstoragedirectory cmdline) d
+ go (CmdLine.Chaff hn) _ = storeChaff hn
+ (CmdLine.serverPort (CmdLine.serverConfig cmdline))
+ (CmdLine.chaffMaxDelay cmdline)
+ go CmdLine.CheckServers _ = checkServers cmdline
+ go CmdLine.Benchmark _ =
+ benchmarkTunables tunables
+ go CmdLine.Test _ =
+ runTests
+
+backup :: CmdLine.CmdLine -> UI -> Tunables -> Distinguisher -> (SecretKeySource, SecretKey) -> IO ()
+backup cmdline ui tunables distinguisher (secretkeysource, secretkey) = do
+ installAutoStartFile
+
+ let m = totalObjects (shareParams tunables)
+ StorageLocations allocs <- cmdLineStorageLocations cmdline
+ let locs = StorageLocations (take m allocs)
+ case problemStoringIn locs tunables of
+ Nothing -> return ()
+ Just (FatalProblem p) -> do
+ showError ui p
+ error "aborting"
+ Just (OverridableProblem p) -> do
+ ok <- promptQuestion ui "Server problem"
+ p "Continue anyway?"
+ if ok
+ then return ()
+ else error "aborting"
+
+ username <- userName
+ Name theirname <- case CmdLine.name cmdline of
+ Just n -> pure n
+ Nothing -> fromMaybe (error "Aborting on no username")
+ <$> promptName ui "Enter your name"
+ usernamedesc (Just username) validateName
+ go theirname locs
+ where
+ go theirname locs = do
+ cores <- fromMaybe 1 <$> getNumCores
+ Name othername <- case CmdLine.name cmdline of
+ Just n -> pure n
+ Nothing -> fromMaybe (error "aborting on no othername")
+ <$> promptName ui "Enter other name"
+ othernamedesc Nothing validateName
+ let name = Name (theirname <> " " <> othername)
+ (kek, passwordentropy) <- promptpassword name
+ let sis = shareIdents tunables name distinguisher
+ let cost = getCreationCost kek <> getCreationCost sis
+ (r, queued, usedlocs) <- withProgressIncremental ui "Encrypting and storing data"
+ (encryptdesc cost cores) $ \addpercent -> do
+ let esk = encrypt tunables kek secretkey
+ shares <- genShares esk tunables
+ _ <- esk `deepseq` addpercent 25
+ _ <- sis `seq` addpercent 25
+ let step = 50 `div` sum (map S.size shares)
+ storeShares locs sis shares (addpercent step)
+ backuplog <- mkBackupLog $ backupMade (mapMaybe getServer usedlocs) secretkeysource passwordentropy
+ case r of
+ StoreSuccess -> do
+ storeBackupLog backuplog
+ if queued
+ then do
+ willautostart <- isAutoStartFileInstalled
+ showInfo ui "Backup queued" $ "Some data was not successfully uploaded to servers, and has been queued for later upload."
+ ++ if willautostart then "" else " Run keysafe --uploadqueued at a later point to finish the backup."
+ else showInfo ui "Backup success" "Your secret key was successfully encrypted and backed up."
+ StoreFailure s -> showError ui ("There was a problem storing your encrypted secret key: " ++ s)
+ StoreAlreadyExists -> do
+ showError ui $ unlines
+ [ "Another secret key is already being stored under the name you entered."
+ , "Please try again with a different name."
+ ]
+ go theirname locs
+ promptpassword name = do
+ password <- fromMaybe (error "Aborting on no password")
+ <$> promptPassword ui True "Enter password" passworddesc
+ kek <- genKeyEncryptionKey tunables name password
+ passwordentropy <- getPasswordEntropy password name
+ let crackcost = estimateAttackCost spotAWS $
+ estimateBruteforceOf kek passwordentropy
+ let mincost = Dollars 100000
+ if crackcost < mincost
+ then do
+ showError ui $ "Weak password! It would cost only " ++ show crackcost ++ " to crack the password. Please think of a better one. More words would be good."
+ promptpassword name
+ else do
+ (thisyear, _, _) <- toGregorian . utctDay
+ <$> getCurrentTime
+ ok <- promptQuestion ui "Password strength estimate"
+ (crackdesc crackcost thisyear)
+ "Is your password strong enough?"
+ if ok
+ then return (kek, passwordentropy)
+ else promptpassword name
+ keydesc = case secretkeysource of
+ GpgKey _ -> "gpg secret key"
+ KeyFile _ -> "secret key"
+ usernamedesc = unlines
+ [ "Keysafe is going to backup your " ++ keydesc ++ " securely."
+ , ""
+ , "You will be prompted for some information. To restore your " ++ keydesc
+ , "at a later date, you will need to remember and enter the same information."
+ , ""
+ , "To get started, what is your name?"
+ ]
+ othernamedesc = unlines
+ [ "Now think of another name, which not many people know."
+ , "This will be used to make it hard for anyone else to find"
+ , "the backup of your " ++ keydesc ++ "."
+ , ""
+ , "Some suggestions:"
+ , ""
+ , otherNameSuggestions
+ , ""
+ , "Make sure to pick a name you will remember later,"
+ , "when you restore your " ++ keydesc ++ "."
+ ]
+ passworddesc = unlines
+ [ "Pick a password that will be used to protect your secret key."
+ , ""
+ , "It's very important that this password be hard to guess."
+ , ""
+ , "And, it needs to be one that you will be able to remember years from now"
+ , "in order to restore your secret key."
+ ]
+ crackdesc crackcost thisyear = unlines $
+ "Rough estimate of the cost to crack your password: " :
+ costOverTimeTable crackcost thisyear
+ encryptdesc cost cores = unlines
+ [ "This will probably take around " ++ showCostMinutes cores cost
+ , ""
+ , "(It's a feature that this takes a while; it makes it hard"
+ , "for anyone to find your data, or crack your password.)"
+ , ""
+ , "Please wait..."
+ ]
+
+otherNameSuggestions :: String
+otherNameSuggestions = unlines $ map (" * " ++)
+ [ "Your high-school sweetheart."
+ , "Your first pet."
+ , "Your favorite teacher."
+ , "Your college roommate."
+ , "A place you like to visit."
+ ]
+
+restore :: CmdLine.CmdLine -> UI -> [Tunables] -> Distinguisher -> IO ()
+restore cmdline ui possibletunables distinguisher = do
+ cores <- fromMaybe 1 <$> getNumCores
+ username <- userName
+ Name theirname <- case CmdLine.name cmdline of
+ Just n -> pure n
+ Nothing -> fromMaybe (error "Aborting on no username")
+ <$> promptName ui "Enter your name"
+ namedesc (Just username) validateName
+ Name othername <- case CmdLine.name cmdline of
+ Just n -> pure n
+ Nothing -> fromMaybe (error "aborting on no othername")
+ <$> promptName ui "Enter other name"
+ othernamedesc Nothing validateName
+ let name = Name (theirname <> " " <> othername)
+ password <- fromMaybe (error "Aborting on no password")
+ <$> promptPassword ui True "Enter password" passworddesc
+
+ let mksis tunables = shareIdents tunables name distinguisher
+ locs <- cmdLineStorageLocations cmdline
+ r <- downloadInitialShares locs ui mksis possibletunables
+ case r of
+ Nothing -> showError ui "No shares could be downloaded. Perhaps you entered the wrong name?"
+ Just (tunables, shares, sis, usedservers) -> do
+ let candidatekeys = candidateKeyEncryptionKeys tunables name password
+ let cost = getCreationCost candidatekeys
+ <> castCost (getDecryptionCost candidatekeys)
+ case combineShares tunables [shares] of
+ Left e -> showError ui e
+ Right esk -> do
+ final <- withProgress ui "Decrypting"
+ (decryptdesc cost cores) $ \setpercent ->
+ go locs tunables [shares] usedservers sis setpercent $
+ tryDecrypt candidatekeys esk
+ final =<< getPasswordEntropy password name
+ where
+ go locs tunables firstshares firstusedservers sis setpercent r = case r of
+ DecryptFailed -> return $ \_ ->
+ showError ui "Decryption failed! Probably you entered the wrong password."
+ DecryptSuccess secretkey -> do
+ _ <- setpercent 100
+ oldgpgkeys <- if distinguisher == AnyGpgKey then Gpg.listSecretKeys else return []
+ writeSecretKey distinguisher secretkey
+ newgpgkeys <- if distinguisher == AnyGpgKey then Gpg.listSecretKeys else return []
+ return $ \passwordentropy -> do
+ showInfo ui "Success" "Your secret key was successfully restored!"
+ -- Since the key was restored, we know it's
+ -- backed up; log that.
+ let updatelog restored = do
+ backuplog <- mkBackupLog $
+ backupMade firstusedservers restored passwordentropy
+ storeBackupLog backuplog
+ case distinguisher of
+ AnyGpgKey -> case filter (`notElem` oldgpgkeys) newgpgkeys of
+ [(_n, k)] -> updatelog (GpgKey k)
+ _ -> return ()
+ Distinguisher sks -> updatelog sks
+ DecryptIncomplete kek -> do
+ -- Download shares for another chunk.
+ (nextshares, sis', nextusedservers)
+ <- retrieveShares locs sis (return ())
+ let shares = firstshares ++ [nextshares]
+ let usedservers = nub (firstusedservers ++ nextusedservers)
+ case combineShares tunables shares of
+ Left e -> return $ \_ -> showError ui e
+ Right esk ->
+ go locs tunables shares usedservers sis' setpercent $
+ decrypt kek esk
+ namedesc = unlines
+ [ "When you backed up your secret key, you entered some information."
+ , "To restore it, you'll need to remember what you entered back then."
+ , ""
+ , "To get started, what is your name?"
+ ]
+ othernamedesc = unlines
+ [ "What other name did you enter when you backed up your secret key?"
+ , ""
+ , "Back then, you were given some suggestions, like these:"
+ , ""
+ , otherNameSuggestions
+ ]
+ passworddesc = unlines
+ [ "Enter the password to unlock your secret key."
+ ]
+ decryptdesc cost cores = unlines
+ [ "This will probably take around " ++ showCostMinutes cores cost
+ , ""
+ , "(It's a feature that this takes so long; it prevents cracking your password.)"
+ , ""
+ , "Please wait..."
+ ]
+
+-- | Try each possible tunable until the initial set of
+-- shares are found, and return the shares, and
+-- ShareIdents to download subsequent sets.
+downloadInitialShares
+ :: StorageLocations
+ -> UI
+ -> (Tunables -> ShareIdents)
+ -> [Tunables]
+ -> IO (Maybe (Tunables, S.Set Share, ShareIdents, [Server]))
+downloadInitialShares storagelocations ui mksis possibletunables = do
+ cores <- fromMaybe 1 <$> getNumCores
+ withProgressIncremental ui "Downloading encrypted data" (message cores) $ \addpercent -> do
+ go possibletunables addpercent
+ where
+ go [] _ = return Nothing
+ go (tunables:othertunables) addpercent = do
+ -- Just calculating the hash to generate the stream of idents
+ -- probably takes most of the time.
+ let !sis = mksis tunables
+ addpercent 50
+ let m = totalObjects (shareParams tunables)
+ let step = 50 `div` m
+ (shares, sis', usedservers) <- retrieveShares storagelocations sis (addpercent step)
+ if S.null shares
+ then go othertunables addpercent
+ else return $ Just (tunables, shares, sis', usedservers)
+
+ possiblesis = map mksis possibletunables
+ message cores = unlines
+ [ "This will probably take around "
+ ++ showCostMinutes cores (mconcat $ map getCreationCost possiblesis)
+ , ""
+ , "(It's a feature that this takes a while; it makes it hard"
+ , "for anyone else to find your data.)"
+ , ""
+ , "Please wait..."
+ ]
+
+validateName :: Name -> Maybe Problem
+validateName (Name n)
+ | B.length n < 2 = Just "The name should be at least 2 letters long."
+ | otherwise = Nothing
+
+userName :: IO Name
+userName = do
+ u <- getUserEntryForID =<< getEffectiveUserID
+ return $ Name $ BU8.fromString $ takeWhile (/= ',') (userGecos u)
+
+cmdLineStorageLocations :: CmdLine.CmdLine -> IO StorageLocations
+cmdLineStorageLocations cmdline = do
+ preflocs <- StorageLocations . catMaybes <$>
+ mapM (\mk -> mk lsd) (CmdLine.preferredStorage cmdline)
+ shuffleStorageLocations (preflocs <> netlocs)
+ where
+ netlocs = networkStorageLocations lsd
+ lsd = CmdLine.localstoragedirectory cmdline
+
+getPasswordEntropy :: Password -> Name -> IO (Entropy UnknownPassword)
+getPasswordEntropy password name = do
+ username <- userName
+ let badwords = concatMap namewords [name, username]
+ return $ calcPasswordEntropy password badwords
+ where
+ namewords (Name nb) = words (BU8.toString nb)
+
+uploadQueued :: UI -> Maybe LocalStorageDirectory -> IO ()
+uploadQueued ui d = do
+ problems <- tryUploadQueued d
+ if null problems
+ then return ()
+ else showError ui ("Problem uploading queued data to servers:\n\n" ++ unlines problems ++ "\n\nYour secret keys have not yet been backed up.")
+
+autoStart :: CmdLine.CmdLine -> Tunables -> UI -> IO ()
+autoStart cmdline tunables ui = do
+ -- Upload queued first, before making any more backups that might
+ -- queue more.
+ uploadQueued ui (CmdLine.localstoragedirectory cmdline)
+
+ -- Ask about backing up any gpg secret key that has not been backed up
+ -- or asked about before. If there are multiple secret keys, only
+ -- the first one is asked about, to avoid flooding with prompts
+ -- if the user for some reason generated a lot of secret keys.
+ ls <- readBackupLogs
+ ks <- Gpg.listSecretKeys
+ case filter (\(_, k) -> not $ any (matchesSecretKeySource (GpgKey k)) ls) ks of
+ [] -> return ()
+ ((Name n,kid@(KeyId kt)):_) -> do
+ let kdesc = if length ks < 2
+ then "gpg secret key "
+ else "gpg secret key for " ++ BU8.toString n ++ " (" ++ T.unpack kt ++ ") "
+ ans <- promptQuestion ui ("Back up gpg secret key?")
+ ("Your " ++ kdesc ++ " has not been backed up by keysafe yet.\n\nKeysafe can securely back up the secret key to the cloud, protected with a password.\n")
+ "Do you want to back up the gpg secret key now?"
+ if ans
+ then backup cmdline ui tunables AnyGpgKey
+ =<< getSecretKey (GpgKey kid)
+ else storeBackupLog
+ =<< mkBackupLog (BackupSkipped (GpgKey kid))
+
+checkServers :: CmdLine.CmdLine -> IO ()
+checkServers cmdline = do
+ StorageLocations sls <- cmdLineStorageLocations cmdline
+ let serverlist = mapMaybe getServer sls
+ say $ "Checking " ++ show (length serverlist) ++ " servers concurrently; please wait..."
+ results <- mapConcurrently check serverlist
+ mapM_ displayresult results
+ case filter failed results of
+ [] -> return ()
+ l
+ | length l == length serverlist ->
+ error "Failed to connect to any servers. Perhaps Tor is not running?"
+ | otherwise ->
+ error $ "Failed to connect to some servers: "
+ ++ show (map (sn . fst) l)
+ where
+ check s = do
+ m <- serverRequest' s motd
+ c <- serverRequest s Left Right NoPOWIdent countObjects
+ case (m, c) of
+ (Right (Motd mt), Right (CountResult cr)) ->
+ return (s, Right (mt, cr))
+ (Left e, _) -> return (s, Left e)
+ (_, Left e) -> return (s, Left e)
+ (_, Right (CountFailure e)) -> return (s, Left e)
+
+ displayresult (s, v) = do
+ say $ "* " ++ sn s ++ " -- " ++ serverDesc s
+ case v of
+ Right (mt, cr) -> do
+ say $ " MOTD: " ++ T.unpack mt
+ say $ " object count: " ++ show cr
+ Left e -> warn $
+ " failed to connect to " ++ sn s ++ ": " ++ e
+
+ failed (_, Left _) = True
+ failed _ = False
+
+ sn = fromServerName . serverName
diff --git a/keysafe.init b/keysafe.init
new file mode 100644
index 0000000..4aa1348
--- /dev/null
+++ b/keysafe.init
@@ -0,0 +1,60 @@
+#!/bin/sh
+
+### BEGIN INIT INFO
+# Provides: keysafe
+# Required-Start: $network $remote_fs
+# Required-Stop: $network $remote_fs
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: Keysafe server
+# Description: Starts keysafe in server mode
+### END INIT INFO
+
+PATH=/bin:/usr/bin:/sbin:/usr/sbin
+NAME=keysafe
+DESC="Keysafe server"
+DAEMON="/usr/bin/keysafe"
+PIDFILE=/var/run/"$NAME".pid
+
+test -x /usr/bin/keysafe || exit 0
+
+. /lib/lsb/init-functions
+
+# Parameters to pass to keysafe. This can be overridden in /etc/default/keysafe
+DAEMON_PARAMS="--port 4242 --store-directory=/var/lib/keysafe/"
+if [ -e /etc/default/keysafe ]; then
+ . /etc/default/keysafe
+fi
+
+case "$1" in
+ start)
+ log_daemon_msg "Starting $DESC" "$NAME"
+ start-stop-daemon --start --quiet --oknodo \
+ --background --no-close \
+ --pidfile "$PIDFILE" --make-pidfile \
+ --chuid keysafe:keysafe \
+ --exec "$DAEMON" -- --server $DAEMON_PARAMS \
+ > /var/log/keysafe.log
+ log_end_msg $?
+ ;;
+ stop)
+ log_daemon_msg "Stopping $DESC" "$NAME"
+ killproc -p "$PIDFILE" "$DAEMON"
+ RETVAL=$?
+ [ $RETVAL -eq 0 ] && [ -e "$PIDFILE" ] && rm -f $PIDFILE
+ log_end_msg $RETVAL
+ ;;
+ restart|reload|force-reload)
+ log_daemon_msg "Restarting $DESC" "$NAME"
+ $0 stop
+ $0 start
+ ;;
+ status)
+ status_of_proc -p $PIDFILE "$DAEMON" "$NAME" && exit 0 || exit $?
+ ;;
+ *)
+ log_action_msg "Usage: /etc/init.d/keysafe {start|stop|status|restart|reload|force-reload}"
+ exit 2
+ ;;
+esac
+exit 0
diff --git a/keysafe.service b/keysafe.service
new file mode 100644
index 0000000..24eaad2
--- /dev/null
+++ b/keysafe.service
@@ -0,0 +1,18 @@
+[Unit]
+Description=keysafe server
+Documentation=https://joeyh.name/code/keysafe/
+
+[Service]
+Environment='DAEMON_PARAMS=--port 4242 --store-directory=/var/lib/keysafe/'
+EnvironmentFile=-/etc/default/keysafe
+ExecStart=/usr/bin/keysafe --server $DAEMON_PARAMS
+InaccessiblePaths=/home /etc
+ReadWritePaths=/var/lib/keysafe
+User=keysafe
+Group=keysafe
+StandardInput=null
+StandardOutput=journal
+StandardError=journal
+
+[Install]
+WantedBy=multi-user.target
diff --git a/stack.yaml b/stack.yaml
new file mode 100644
index 0000000..2658ab6
--- /dev/null
+++ b/stack.yaml
@@ -0,0 +1,16 @@
+packages:
+- '.'
+resolver: lts-6.12
+extra-deps:
+ - secret-sharing-1.0.0.3
+ - dice-entropy-conduit-1.0.0.1
+ - polynomial-0.7.2
+ - finite-field-0.8.0
+ - raaz-0.0.2
+ - zxcvbn-c-1.0.0
+ - servant-0.7.1
+ - servant-server-0.7.1
+ - servant-client-0.7.1
+ - token-bucket-0.1.0.1
+ - argon2-1.2.0
+explicit-setup-deps: