SpiralCrypt 0.10.2 | Tutorial

© 2006 Kevin P. Barry


Table of Contents

  1. Overview.
    1. Process Types.
    2. Redirection Types.

  2. Data Processing.
    1. Coding Modes.
    2. Encryption Keys.

  3. Data Analysis.
    1. Check Sums.
    2. File Statistics.

  4. Input and Output.
    1. In-place Processing.
    2. Internal Redirection.
    3. External Redirection.

  5. Loop Processing.


  6. Merging Data.
    1. Merging Input Sources.
    2. Merging Loops.

  7. Legibility Conversion.
    1. Legible Storage.
    2. Legible Random Keys.


  8. Other Options.
    1. Process Testing.
    2. All-or-none Processing.


  9. Option References.
    1. Options.
    2. Option Guide.


Examples


  1. Overview.

    The SpiralCrypt command line tool is a program for batch, daemon, and stream encryption processing. spiral has quite a few features that are only useful on Unix-like systems because of their inherent support for data piping. As of now, all features compile into both the Windows and Linux versions; however, the Windows OS cannot support some of those features.

    The SpiralCrypt encryption algorithm is a byte stream cipher which continually hashes the encryption key as it applies the algorithm to new data. Unlike block ciphers, this algorithm incidentally encrypts each byte based on its position relative to the beginning of the operation. The algorithm also recursively works the previous byte of data into the encryption process. This ensures that if data at the beginning isn't decoded correctly, the rest of the data won't be either. It also ensures that sections of data which don't start at the beginning cannot be decrypted independently of the previous data. Finally, it ensures that long stretches of repeating patterns are not detectable in the resulting data.

    spiral has various secondary functionality in addition to encryption and decryption:

    • Destroy files. This encrypts files with a random key. Because the program can't know the situation in which you are calling this operation, it doesn't delete the files afterward.

    • Calculate check sums. This is a simple data hash to help verify an accurate decoding.

    • Calculate data statistics. Data statistics help determine the quality (i.e. apparent randomness) of encrypted data. This also serves as a quick indicator of a successful decryption; one that is unsuccessful will result in statistics much like encrypted data, whereas a successfully-decrypted set of data should have very weak randomness statistics.

    • Copy data. This essentially performs the same operation encryption or decryption would without modifying the relevant data content.

    • Merge sets of data. Because of the nature of the SpiralCrypt algorithm, the context of encrypted data matters. By merging data sets, you treat multiple input sources as if they were the same source (i.e. assemble the sets of data into a single set.)

    • Convert to/from legible storage. When encoding, this will convert raw data into a legible form for easy storage and transmission in a text medium. Decoding converts the data back into a raw form. This can be used with or without encryption.


    1. Process Types. spiral has two major categories of process types: Data Processing and Data Analysis.

      1. Data Processing. "Processing" occurs when data is written to some location, whether it be a file or standard output. The four data processing modes are encryption, decryption, shredding, and copying. When combined with data analysis, this is the overriding process type.

      2. Data Analysis. "Analysis" occurs when data is read and analyzed. This type of process only reads data; it doesn't write anything.

    2. Redirection Types. Although the data source and data destination are by default the same location (i.e. the same file), spiral has several options which allow input and output to be different locations. spiral divides redirection into two main categories: internal and external.

      1. Internal Redirection. This takes input from one file and sends output to another file. This can be done with multiple pairs of files.

      2. External Redirection. This type of redirection takes input from standard input, sends output to standard output or a static output file, or both.



  2. Data Processing.

    Data processing modes are those which cause data to be read, processed, then written.

    NOTE: Data processing modes do not provide protection against using an incorrect encryption key; nothing is stored in the output file(s) or data other than the data itself. This prevents storing any indication of the correct key with the encrypted data itself. spiral does provide several options that can make safe verification of decryption possible, however.

    1. Coding Modes. spiral has three main coding modes. All of these modes are mutually exclusive.

      1. Encoding / Decoding. This is the primary purpose of the program. To encode or decode, use one of the following command line options:

        Short              Long                          Operation
        -e                 --encode                      encode data
        -d                 --decode                      decode data

        If you use one of the above options with an encryption key option, you will have an encryption or decryption operation. You may specify one of the above options with a legible storage conversion option and an encryption key option to perform conversion in conjunction with encryption/decryption, or without an encryption key option to perform a copy with legibility conversion.
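
        For example, to encrypt file 'foo' in place with a password and later decrypt it (both forms appear among the sample command lines at the end of this tutorial):

        > spiral -ep foo
        > spiral -dp foo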

      2. Shredding. Shredding will destroy data (intended for files, but can be used otherwise) by encrypting it with a random key. This key is created in memory and is not recorded. Implement this mode by using the following command line option:

        Short              Long                          Operation
                           --shred--                     destroy data

        This option is considered both a coding mode option and an encryption key option. This option does not delete files.
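
        For example (this appears among the sample command lines at the end of this tutorial):

        > spiral --shred-- foo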

      3. Copying. To perform a copy, provide a redirection option without an encryption key option. If you are converting to/from legible storage in conjunction with the copy, provide the encode/decode option (respectively) along with a legible storage conversion option.
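
        For example, a plain copy using internal redirection; this is the sample check-sum copy '-cf foo bar' from the end of this tutorial, minus the check sum option:

        > spiral -f foo bar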

    2. Encryption Keys. Encryption keys are the unique identifiers on which the encryption is based, and they can come from various sources. They can also be redirected to a different location once they are loaded by the program, which can be useful for recording a key received from another process, or for passing it to another process without it ever residing on the file system.

      1. User-input Passwords. This is a generic password prompt similar to most other programs requiring a password.

        Short              Long                          Operation
        -p                 --password                    user password

        By default, this option will require a confirmation of the initially-entered password. You can disable this confirmation by providing the option an additional time.

      2. Input Keys. These are keys which are received from an external source. They can be extracted from files or from standard input using the following options:

        Short              Long                          Operation
        -t                 --standard-in-key             standard input key
        -k [file]          --key [file]                  key from a file
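
        For example, to decode file 'foo' using file 'bar' as a key (a variant of this appears among the sample command lines at the end of this tutorial):

        > spiral -d -k bar foo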

      3. Random Keys. To generate a random key, use the following options:

        Short              Long                          Operation
        -r [size] [file]   --random-key [size] [file]    random key saved to a file
        -V [size]          --volatile-key [size]         volatile random key

        When a process is provided, the generated key will be used for all operations that process performs. If you only provide key generation and redirection options, the key will be generated and the program will exit.
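
        For example, to encode file 'foo' with a random 16K byte key stored in file 'bar' (from the sample command lines at the end of this tutorial), or to generate such a key by itself, as described above:

        > spiral -e -r16K bar foo
        > spiral -r16K bar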

      4. Key Redirection. When performing a process which requires a key (including shredding), you can export that key using the following options:

        Short              Long                          Operation
        -w [file]          --write-key [file]            save encryption key to a file
        -n                 --standard-out-key            send encryption key to standard output
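
        For example, to encode file 'foo' using a key extracted from standard input, saving the key to file 'key' and also sending it to standard output (from the sample command lines at the end of this tutorial):

        > spiral -etn -w key foo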



  3. Data Analysis.

    Data analysis modes are those which do not require data to be written. These can be used alone or in conjunction with data processing modes. These modes provide composite information about the data which was read and/or processed.

    When used with a data processing mode other than copying, data analysis options will display compiled information for the "before" and "after" versions of the data. If you are copying or are otherwise not using a data processing mode, only one version of the data properties will be shown.

    1. Check Sums. These are 128-bit hashes of the analyzed data. They are an aggregate value used to differentiate between sets of data; they are intended to be grossly different with even the slightest difference in source data. Alone they do not mean anything; you must compare them with other check sums to reasonably determine if two sets of data are identical or not. This is one way to determine if a decryption operation went well (of course, you will need the check sum of the original data to compare it to.) To calculate check sums, use the following option:

      Short              Long                          Operation
      -c                 --check-sums                  calculate data check sums
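
      For example (from the sample command lines at the end of this tutorial):

      > spiral -c foo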


    2. File Statistics. These calculations can be used to determine the quality of an encryption operation, and also to reasonably determine if a decryption operation was successful (although this is not as reliable as a check sum.) The basis for determining the quality of an encryption operation is the resemblance of the output data to random data; the more random the data appears, the better the quality of the encryption operation. This program uses 4 calculations to determine the statistical properties of data. To calculate data statistics, use the following option:

      Short              Long                          Operation
      -s                 --statistics                  calculate data statistics

      Benchmark (i.e. "ideal"; based on random data) values are shown with the results which correspond to the encrypted data.
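
      For example, a statistics-only run on one file; the combined check sum and statistics form '-cs' appears among the sample command lines at the end of this tutorial:

      > spiral -s foo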

      1. Mean Byte. This is the average byte value of the entire set of data. The ideal value for encrypted data is '255/2' (127.5); if each byte value had the same number of occurrences (i.e. random data), that would be the average. This alone is not a reliable indicator, however, which is why we have the other analyses.

      2. Median Byte. If all analyzed bytes were sorted in order of value, this would be the byte value right in the middle. The ideal value for encrypted data is the same as above, and for the same reasons. (If the data has an even number of bytes, the 2 middle bytes are averaged.)

      3. RMS Byte. Because a regular average doesn't tell us how evenly byte values occur (e.g. if the data was half '127' and half '128' you would still end up with an average of '255/2'), spiral calculates the RMS (root-mean-square) byte value, also.

        Ideally, any given byte value should occur the same number of times as all other byte values within a set of random data. If you were to represent an ideal distribution of byte values in 2 dimensions on a Cartesian graph, with the byte values on the x-axis and the byte values multiplied by their numbers of occurrence on the y-axis, you would have the line 'y = Mx'. 'M', in this case, is a constant representing the number of occurrences of each byte value (an ideal distribution will give you the same number of occurrences of each byte value, hence the constant.)

        For any value of 'M', the RMS of 'y' over 'x' on this line is 'sqrt(1/3) * R[y] + y[0]'. In this equation, 'R[y]' is the range of 'y' values and 'y[0]' is the starting point of the 'y' range. The range will start at 0 for each analysis; therefore, we can remove the 'y[0]' value, leaving us with 'sqrt(1/3) * R[y]'.

        Ideally, each byte value will occur the same number of times; therefore, with a data size of 'S', each byte will occur 'S/256' times, making 'R[y]' based on the size of the file. Because this will be inconsistent between data sets, spiral calculates the RMS of 'x' over 'y' instead. This provides an RMS of 'sqrt(1/3) * R[x] + x[0]', and starting at 0 we can remove 'x[0]'.

        Along the x-axis are the data values, which range from 0-255. This makes 'R[x] = 255', giving an ideal RMS of 'x' calculated over 'y' of 'sqrt(1/3) * 255', or about 147.22.

        For the actual calculation, spiral takes each possible byte value and squares it. Because there are only 256 possible values and not an infinite resolution, it takes the individual squares and multiplies them by the number of occurrences of that value. It then totals the results, divides by the total data size, and takes the square root for the final value.

      4. Data Distro. This calculation is very similar to the RMS calculation. This takes each byte value and calculates the RMS position of that value throughout the set of data (its position in relation to the start of the data.) The RMS for each byte value is then averaged, divided by the data size, and multiplied by '100' for the final result.

        In this case, you would represent each byte value on its own Cartesian graph and use a separate 'y = Mx' formula for each. 'x' in this case is the occurrence number and 'y' is the position in the set of data in relation to the beginning.

        Ideally, 'M' would be '256' for all of the graphs, meaning that there would be one of any given byte value in every 256 bytes of the data. Here you'd take the RMS of 'y' over 'x', with 'R[y] = S' since the 'y' range is equal to the size of the data. This gives you an ideal RMS of 'sqrt(1/3) * S'. Unfortunately this cannot be resolved by swapping 'x' and 'y'.

        You'd then average all of the RMS values for each graph, and ideally you would still have 'sqrt(1/3) * S'. To make all results comparable, spiral divides the result by the data size 'S' and multiplies by 100 to make the results easier to decipher.

        NOTE: If a byte value does not occur in the set of data, its distribution RMS is undefined because we have no basis to project 'x' onto 'y' for even a single value of 'x'. To simplify things a little, we just calculate the RMS of our ideal function ('y = Mx') with 'y[0] = R[y] = 0', giving us 0. Because of this, more missing byte values will drag the result toward '0'.

        NOTE: Smaller files will probably have low values for this statistic. This is because smaller files do not have enough file positions to spread out all of the values evenly. Additionally, they may be missing byte values, which will pull down the average significantly. Most files 1MB or larger will have an ideal result, however (with any encryption key.)

        NOTE: Higher numbers DO NOT automatically mean a better result; the ideal value for this statistic is about 57.74 (i.e. 'sqrt(1/3) * 100'), not 100 itself, though the result could get very close to 100 with some well-thought-out, yet useless, data manipulation.




  4. Input and Output.

    spiral uses several modes of input and output. These fall into 3 main categories.

    1. In-place Processing. This is the mode used when no redirection is specified. When processing data, this will read data from the files specified, process it, and write the output back to the same file. When performing data analysis only, this will read data from the files specified.

    2. Internal Redirection. Internal redirection takes input from one file, processes or copies it, and saves it to another file. To use this mode, provide an even number of file names (first: input 1, second: output 1, etc.) and use the following option:

      Short              Long                          Operation
      -f                 --dual-file                   separate input and output files
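
      For example, to encode file 'foo' with a password and store the result in file 'bar' (from the sample command lines at the end of this tutorial):

      > spiral -epf foo bar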

    3. External Redirection. This mode interfaces with an external process, such as the shell which called spiral.

      1. Standard Input. When taking data from standard input, the shell which executes the call to spiral is responsible for providing input to the process. Normally, the shell will prompt the user for input at this point unless command line redirection is used (i.e. "program | spiral ...".) This mode takes whatever the calling process provides as input. To enable this mode, use the following option:

        Short              Long                          Operation
        -i                 --standard-in                 standard input

        You may specify one or more file names on the command line to send output to with this option. If you do not specify a file name and are processing data, standard output is implied as the output destination. NOTE: If you provide more than one file name, you will need to be able to close and reopen the standard input of the program. Because you'll be lucky to ever find a program that can close/reopen its standard output when piping to another process (I'm working on it), this feature is mostly intended for use with a terminal. When typing into a Unix terminal, you can indicate the end of a data set with Ctrl+D.
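
        For example, to encode the output of another program using file 'foo' as the key and store the result in file 'bar' (the spiral portion appears among the sample command lines at the end of this tutorial):

        > program | spiral -ei -k foo bar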

      2. Standard/Static Output. As with standard input, standard output relies on the calling process to deal with the output of this process. Normally, shells will display the output on the console unless command line redirection is used. You can enable standard output mode by using the following option:

        Short              Long                          Operation
        -o                 --standard-out                standard output

        You may specify one file on the command line to take input from with this option. If you do not specify a file name, standard input is implied as the input source. NOTE: If you provide more than one file name, you will implicitly enable the merged data mode.

        A variant of this option is specifying your own output file. This is essentially the same as redirecting to standard output, except you specify a file, and that file may be opened and closed multiple times. This is similar to internal redirection; however, you may specify multiple input files with this option.

        Short              Long                          Operation
        -O [file]          --static-out [file]           static output file

        As with the standard output option, you may provide input files, but if you don't specify an input file then standard input is implied as the input source. Unlike standard output, however, static output does not automatically enable the merged data mode.
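
        For example, to decode file 'foo' using a key extracted from standard input and send the result to standard output, or to encode files 'foo1' and 'foo2' with a password into static output file 'bar' (both from the sample command lines at the end of this tutorial):

        > spiral -dto foo
        > spiral -ep -O bar foo1 foo2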



  5. Loop Processing.

    In some cases you may want to repeat the same process multiple times using different data. When using redirection, you can have spiral repeat the same operation until you tell it to stop.

    In order to use loop processing, you must use a redirection option (if processing data), and at least one input source must be either a pipe (not possible with Windows) or standard input. To enable this mode, use the following option:

    Short              Long                          Operation
    -L                 --loop-process                loop process
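
    For example, to show the check sums of each set of data read from pipe 'foo' until the loop is stopped (from the sample command lines at the end of this tutorial):

    > spiral -cL foo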

    You will likely need to run the call to spiral as a background process or in a separate thread. Once the process is started, send input data to the input source(s), which will more often than not be pipes. spiral will process each file in the order specified on the command line repeatedly until the loop is stopped. Stopping the loop happens differently depending on the situation:

    • Explicit EOF signal. This is used when input comes from standard input or from a single pipe, or when merging loops. To end the loop, send an "end-of-file" signal to the current input source using an empty set of data. For standard input in Unix shells, you will probably use Ctrl+D twice after the last input operation. For a Unix pipe, you will probably use "echo -n > pipe". NOTE: When merging loops, the first EOF signal actually resets the merge operation, and the second ends the loop (discussed in the appropriate section.)

      When merging loops, multiple files aren't thought of as a finite set; they are thought of as a cyclic set of input sources. Because of this, a single empty set of data sent to a pipe in this mode is read as an explicit EOF signal.

      NOTE: You can force the use of this type of signal (instead of implicit) by using the following option:

      Short              Long                          Operation
      -E                 --explicit-eof                enable explicit EOF signals


    • Implicit EOF signal. When multiple files are used with looping (and you aren't merging loops), a complete cycle through the processed files may be needed before the loop ends. To end the loop in this case, you must send an empty set of data to every pipe in the process, starting with the first, when using multiple inputs (you do not need to worry about regular files which are mixed in), or send one empty set per output file when using standard input.

      When looping with multiple files and not merging loops, the files are thought of as a finite set. This is because there is a definite distinction between each iteration through the set of files. Because of this, an empty set received from or sent to a file might be a part of that set. This is why an empty set must be received from each input pipe or sent to each output file.

    • Forced EOF signal. If for some reason spiral makes it past the preliminary file checks without noticing you didn't provide any pipes for input, this automated signal will stop the loop if it goes an entire iteration without being able to open a pipe (not applicable when using standard input.) This prevents an infinite loop if the permissions of the pipes change or they're deleted.

    Output files are dealt with differently depending on the type of loop process you are using:

    • Single file. Whenever a single file (or input/output pair) is used, the output file is not written to until something is read from the input source. This prevents erasing the file when ending a loop with an empty data set.

    • Multiple files without merging loops. When using multiple files, you must send multiple empty data sets to end a loop. Because of this, a process might be expecting output from a corresponding output pipe each time. For consistency, output files are opened even when an empty data set is sent. This incidentally will erase them if normal files are used (which should be expected behavior when using this type of process.) NOTE: When using the explicit EOF signal option (see above), you disable this behavior.

    • Multiple files and merging loops. Because the merged loop mode uses explicit EOF signaling, the output file(s) don't need to be opened if nothing is read; therefore, they aren't.

    NOTE: Encryption keys are only loaded or generated one time; the same key is used for the entire duration of the loop process.



  6. Merging Data.

    In some cases you might want to take independent data sources and treat them as a single set. This can be useful for encrypting pre-sectioned data or for providing a signature with an encrypted file.

    When merging a data processing operation, all input data becomes a single set, even though the output may go to different locations. Because the encryption (if used) is based on the data's layout, the data will be encrypted differently depending on the order in which the original data is presented. This means that data encrypted using merging must be treated as a single set again in order for decryption to succeed.

    When analyzing data without processing it, the input data is treated as a single set for the purposes of analysis.

    Use the option below when this section references the merge data option:

    Short              Long                          Operation
    -M                 --merge-data                  merge data
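
    For example, to combine files 'foo1' and 'foo2' and display their combined file statistics (from the sample command lines at the end of this tutorial):

    > spiral -sM foo1 foo2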


    1. Merging Input Sources. This treats multiple files as a single file for the purposes of processing. NOTE: When merging input sources, the all-or-none option is implied (unless shredding or performing data analysis only.) This prevents the accidental corruption of data if not all input files can be opened. To disable this implicit option, provide the option twice explicitly on the command line.

      • In-place Processing. To enable data merging for an operation that would normally process files in place, use the merge data option once. This will treat all of the files as if they were sequential parts of the same file. Conceptually, this is the same as merging all of the files together, processing them as a single set, then splitting them back up.

      • Multiple Files to Standard Output. If you provide multiple file names with the standard output option, data merging is implicitly enabled. This means that the data from all of the files given will be processed as a single file and sent to standard output (an example follows this list). If you use the static output option, however, you must specify the merge data option explicitly.

      • Multiple Files from Standard Input. If you provide multiple file names with the standard input option, use the merge data option once. This will merge all input from standard input but will still distribute one input operation's data to each file.

      • Data Analysis. Merging a data analysis operation (without processing data) can be done by providing the merge data option once. When using this mode, data properties will not be shown until the last file has been analyzed; there will only be one output per operation.
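
      For example, to combine files 'foo1' and 'foo2', encode them with a password, and send the result to standard output (from the sample command lines at the end of this tutorial), and a hypothetical in-place equivalent, assuming the merge data option clusters with the other options as in those samples:

      > spiral -epo foo1 foo2
      > spiral -epM foo1 foo2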

    2. Merging Loops. In cases where you are loop processing data, you can merge data across loop iterations with this mode. Enable this mode as follows:

      • Single use. Use the merge data option once when merging is already enabled implicitly, only one file name is provided, only one input/output set of files is provided, or no files are provided.

      • Double use. Use the merge data option twice when processing multiple individual files or multiple input/output sets and merging isn't enabled implicitly. The first use enables merging of the file set and the second use enables merging across loops.

      As mentioned in the loop processing section, merging loops provides different rules for controlling loops. The number of input sources used for a single set of data remains uncertain until the context of the loop is reset. Regardless of the number of actual sources, they are cycled through until an empty set is received; at that point, the current set of merged data is complete, the merge context is reset, and the next input operation starts a new set of merged data. To end the current merged set of data, send an empty set to the current input source (please see the loop processing section.)

      • Sending to a File. When sending output to a file, the new data is appended until the first empty data set; this marks the end of the merge operation. If more data is sent after that, the file is erased and the process starts over.

      • Multiple Files. When taking input from multiple files and/or sending output to multiple files, the merged set of data starts with the first file following a context reset. All files are then rotated in order until the next context reset.

      • Data Analysis. When using data analysis (alone or in addition to data processing), data properties aren't shown until the end of the merge operation (i.e. when the first empty data set is received); there will only be one output per merge operation.



  7. Legibility Conversion.

    Legible data is that which can be displayed as readable characters on a console or in a text file. This is ideal for storing or transmitting otherwise binary data in text formats.

    1. Legible Storage. The legible storage option modifiers enable the conversion of process data to and from a legible form. This makes transmission and storage of encrypted data in a text format possible. For the purposes of other significant parts of the program's processes, these conversions are considered pre- and post-processes; only the data in its "natural" (illegible) form is considered relevant.

      To convert without dealing with encryption, provide one of the legible storage options and an encode or decode option. To convert in conjunction with encryption or decryption, additionally provide an encryption key option.

      NOTE: You cannot convert a file to or from legible storage in place; the size of the converted data will differ from the original, which violates the principle of in-place processing. This means that you must use a redirection option when processing data and converting to or from legible storage. An exception to this is when using the process testing option (a warning is given.)

      To enable legible storage, use the following option:

      Short              Long                          Operation
      -l                 --legible-data                legible data conversion
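
      For example, a hypothetical pair of invocations, assuming the options cluster as in the sample command lines at the end of this tutorial: the first encrypts file 'foo' with a password and stores the legible result in file 'bar', and the second reverses the operation:

      > spiral -elpf foo bar
      > spiral -dlpf bar foo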


      1. Converting to Legible. This is the mode used when the encode option is used (and therefore for encryption.) When converting to legible storage, spiral takes the final output data and stretches it out so that each byte only uses the first 6 bits. spiral then takes a table of 64 legible characters and replaces each of the byte values. This will inherently increase the output data size by 1/3; every fourth byte contains the top 2 bits of the 3 previous bytes.

      2. Converting from Legible. This is the mode used when the decode option is used (and therefore for decryption.) When converting back to "normal" data from legible storage, characters not contained in the 64-character conversion set are discarded. This allows you to format the legibly-stored data with things such as newlines and spaces. By default, you can also enclose comments within the legible data using the "[" and "]" delimiters. Because of the 1/3 data size increase when converting to legible, data converted from legible will be 1/4 smaller than the legibly-stored data (not including comments and formatting.) NOTE: When performing a merge operation, if an input operation ends mid-comment, the next input operation will start mid-comment. This is so that files which are split in the middle of a comment will still decode correctly.

      3. Custom Legibility Tables. To provide your own conversion table for the conversions to and from legible data, use the following option:

        Short              Long                          Operation
        -j [file]          --char-table [file]           legibility table from a file

        When using the above option, you do not need to provide the legible data option modifier.

        spiral will take the first 64 bytes of the data provided to use as the conversion table. If 2 more bytes are available, spiral uses those characters as comment delimiters (these may both be the same character.) If delimiters aren't provided after the conversion table, comments are disabled.

    2. Legible Random Keys. When generating random keys, you can provide the following option to make the generated key legible:

      Short              Long                          Operation
      -g                 --gen-legible                 legible key generation


      NOTE: Because conversion of keys to a legible form is executed in the same manner as converting actual data, each byte only has 64 possible values. The keys generated will be the same size as requested; therefore, each byte will have only 1/4 of the possible values it would otherwise. For this reason, legibly-generated keys are inherently weaker than normal randomly-generated keys; however, they are more versatile to transmit and store.

      NOTE: You cannot provide a custom conversion table for generating legible keys.
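
      For example, a hypothetical invocation that generates a legible 16K byte random key in file 'bar', assuming this option combines with the random key option as described above:

      > spiral -g -r16K bar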



  8. Other Options.

    Of the remaining options not yet discussed, the following warrant additional explanation. Several other minor options (needing no highlighting) are referenced in the option table, however.

    1. Process Testing. You can include the test-only option with any process to perform (nearly) everything except writing the output data. This option will always prevent output data from being written; however, keys are dealt with differently depending on the options used. All files are tested for writability as applicable, and warnings are shown for those which can't be written to. If a file can't be read when needed, however, the program will treat that file as it normally would.

      In all cases using an encryption key, the key will be generated or read (as applicable.) Keys which are to be saved to a normal file will not be saved. Keys which are to be exported to a pipe or to standard output will still be exported; other processes involved in the test may depend on them being exported. If the keys can't be exported then a warning is shown, but processing continues.

      Input data is read and is processed as normal with the exception of the write operations. When testing an in-place operation, input files are opened read-only.

      To enable this mode, use the following option:

      Short              Long                          Operation
      -T                 --test-only                   test process only
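
      For example, to perform a test decode of file 'foo' using a password and display the check sums (from the sample command lines at the end of this tutorial):

      > spiral -dcpT foo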


    2. All-or-none Processing. Using the all-or-none option will ensure that all files given on the command line can be opened in the appropriate mode before processing. If any file cannot be opened, the program exits with an error. By default (without this option), those files which cannot be accessed are skipped, and those which can be accessed are processed.

      To enable this mode, use the following option:

      Short              Long                          Operation
      -a                 --all-or-none                 all files or no process
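
      For example, a hypothetical invocation, assuming the options cluster as in the sample command lines at the end of this tutorial, which encodes three files with a password only if all three can be opened:

      > spiral -epa foo1 foo2 foo3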


      This option is implicit when merging data. This is because latter files depend on former files for encryption context; if files are removed, some files will be encrypted differently. To disable the implicit use of this option, provide the option twice explicitly on the command line.



  9. Option References.

    This section is a brief reference of the available command line options. This section also contains some more specific information regarding the use of each option and some option combinations.

    1. Options. Below is a brief guide of all of the command line options available for spiral.

      1. Coding Mode Options. These options determine whether to convert data to or from the given format, or to just destroy it.

        Short              Long                          Implementation
        -e                 --encode                      Encode data. Used for encryption and/or conversion to legible storage.
        -d                 --decode                      Decode data. Used for decryption and/or conversion from legible storage.
                           --shred--                     Destroy data with random encryption.


      2. Encryption Key Options. These options provide a source for the encryption keys involved in encryption operations.

        Short              Long                          Implementation
        -p                 --password                    Use a password input by the user. If the program can verify that the call was made from a terminal, this is allowed with standard input. If the program knows there is no terminal, this isn't allowed. Second Use: Disable the confirmation prompt.
        -t                 --standard-in-key             Extract the encryption key from standard input. This is not allowed when data is also taken from standard input.
        -k [file]          --key [file]                  Use the specified file as an encryption key.
        -r [size] [file]   --random-key [size] [file]    Create a random encryption key of the given size and store it in the specified file.
        -V [size]          --volatile-key [size]         Create a random encryption key of the given size but do not record it.


      3. Internal Redirection Options. These options provide a means to divert processed data to another file instead of back to the file of origin.

        Short              Long                          Implementation
        -f                 --dual-file                   Process data from one file and save in another file. Files are given in pairs on the command line; input from the first file and output to the second file.


      4. External Redirection Options. These options determine which parts of the process will interface with the outside world.

        Short              Long                          Implementation
        -i                 --standard-in                 Process data from standard input.
        -o                 --standard-out                Send processed data to standard output. Standard output keys are not allowed.
        -O [file]          --static-out [file]           Send processed data to a static output file. Very similar to the standard output option, but with fewer restrictions.


      5. Option Modifiers. These modify the behavior of other options; the resulting behavior may differ between the options they modify.

        Short              Long                          Implementation
        -w [file]          --write-key [file]            Store whatever encryption key is used in the specified file.
        -n                 --standard-out-key            Send whatever encryption key is used to standard output.
        -M                 --merge-data                  Treat multiple sets of data as a single set for processing purposes.
        -L                 --loop-process                Repeat the same process until signalled to stop.
        -E                 --explicit-eof                Allow a single empty set of data to end a loop when using multiple files and not merging loops.
        -l                 --legible-data                Convert data to or from legible storage when processing.
        -j [file]          --char-table [file]           Convert data to or from legible storage using a custom conversion table from a file.
        -g                 --gen-legible                 Generate legible random keys.
        -T                 --test-only                   Perform read operations and processing, but test writes in place of writing.


      6. Data Property Options. These options display properties extracted from the data being processed.

        Short              Long                          Implementation
        -c                 --check-sums                  Calculate data check sums (a data hash to verify the identity of the data.)
        -s                 --statistics                  Calculate data statistics to help determine encryption quality and decryption success.


      7. Display Options. These options affect what is displayed upon program execution.

        Short              Long                          Implementation
        -h                 --help                        Display the help screen.
        -v                 --verbose                     Display the version screen if no useful options are used, otherwise display verbose output. Second Use: Display the version screen.


      8. Other Options. These options do not fit into any other category.

        Short              Long                          Implementation
        -x [size]          --block-size [size]           Split data into the block size specified for processing. Prefix hex numbers with 'x' or 'X'. Optional suffixes: 'k', 'K', 'M', 'G'. The default is 16KB. Use 0 to process an entire file at once (if used with pipes, this defaults back to 16KB.)
        -a                 --all-or-none                 Abort all operations if any files cannot be opened in the appropriate read or write mode.
        --                                               End of command line options (only needed when processing files which begin with '-'.)


    2. Option Guide. This table outlines the common combinations of processing options and the corresponding behaviors to expect. Most (if not all) other combinations will cause an error.

      Options *                                     Read Input?   Write Output?   Read Key? **   Write Key? **
                                                    (Test)        (Test)          (Test)         (Test)

      Encrypt/Decrypt                               YES (YES)     YES (NO)        YES (YES)      YES (YES if pipe)
        Coding Mode Option
        Encryption Key Option

      Copy                                          YES (YES)     COPY (NO)       N/A            N/A
        Redirection Option
        No Coding Mode Option
        No Encryption Key Option

      Copy with Legibility Conversion               YES (YES)     YES (NO)        YES (YES)      N/A
        Redirection Option
        Coding Mode Option
        Legibility Option
        No Encryption Key Option

      Analyze Standard Input                        YES (YES)     N/A             N/A            N/A
        Standard Input Option
        Data Properties Option
        No Coding Mode Option
        No Encryption Key Option
        No Output Specified

      Analyze Standard Input with Legibility        YES (YES)     N/A             YES (YES)      N/A
      Conversion
        Standard Input Option
        Data Properties Option
        Coding Mode Option
        Legibility Option
        No Encryption Key Option
        No Output Specified

      Analyze Files                                 YES (YES)     N/A             N/A            N/A
        File(s) Given
        Data Properties Option
        No Redirection Option
        No Coding Mode Option
        No Encryption Key Option

      Analyze Files with Legibility Conversion      YES (YES)     N/A             YES (YES)      N/A
        File(s) Given
        Data Properties Option
        Coding Mode Option
        Legibility Option
        No Redirection Option
        No Encryption Key Option

      Analyze Files with Legibility Conversion      YES (YES)     N/A             YES (YES)      YES (YES if pipe)
      and Encryption/Decryption
        File(s) Given
        Data Properties Option
        Coding Mode Option
        Legibility Option
        No Redirection Option
        Encryption Key Option
        Test Only Option

      * Additional options are allowed unless specifically excluded.
      ** Only when applicable options are used.
      Values in parentheses show the corresponding behavior when the test-only option is used.

Examples

Sample Command Lines

These aren't all complete command lines; they are just examples of command line calls to spiral. You can make better use of the program's versatility with scripts. These also haven't been updated for a few years (aside from syntax updates), so they are fairly basic compared to what the program can do.
> spiral -ep foo
Encode file 'foo' with a password

> spiral -dsc -k bar foo
Decode file 'foo' using file 'bar' as a key, showing before and after statistics and check sums

> spiral -ep -x16 foo1 foo2 foo3
Encode files 'foo1', 'foo2', and 'foo3' using a password, splitting each file into 16 byte sections for processing

> spiral -dp -x x1000 foo
Decode file 'foo' using a password, splitting it into x1000 (hex for 4K) byte sections for processing

> spiral -e -k bar foo foo
Encode file 'foo' twice using file 'bar' as a key

> spiral -e -r16K bar foo
Encode file 'foo' with a random 16K byte key to be stored in file 'bar'

> spiral --shred-- foo
Permanently destroy file 'foo'

> spiral -ep -- -mine-
Encode file '-mine-' using a password

> spiral -d -k -mine- foo
Decode file 'foo' using file '-mine-' as a key

> spiral -cs foo
Find the check sum and statistics of file 'foo'

> spiral -c foo
Show only the check sum of file 'foo'

> spiral -ei -k foo bar
Encode standard input using file 'foo' as the key and store the result in file 'bar'

> spiral -epf foo bar
Encode file 'foo' with a password and store in file 'bar'

> spiral -dto foo
Decode file 'foo' using a key extracted from standard input and send the result to standard output

> spiral -dcpT foo
Perform a test decode on file 'foo' using a password and display the check sums

> spiral -cL foo
Show the check sums of the data in pipe 'foo' until a null data set is extracted

> spiral -etn -w key foo
Encode file 'foo' using a key extracted from standard input, save the key in file 'key', and also send it to standard output

> spiral -cf foo bar
Copy file 'foo' into file 'bar' and show the check sum

> spiral -sf foo bar
Copy file 'foo' into file 'bar' and show statistics

> spiral -epo foo1 foo2
Combine files 'foo1' and 'foo2', encode using a password, and send to standard output

> spiral -ep -O bar foo1 foo2
Combine files 'foo1' and 'foo2', encode using a password, and send to static output file 'bar'

> spiral -sM foo1 foo2
Combine files 'foo1' and 'foo2' and display their combined file statistics


Because this tool is continually evolving, I can't include every detail about processing specifics in this tutorial. For more specific details, such as what options aren't allowed with others and program output style, please review the 'changes' document included with the release packages. Additional examples are also included within the release packages, and the full source code is readily available at the download site. If you require additional support, please contact me at:

ta0kira@users.sourceforge.net




Kevin P. Barry