ssr-cli

Install / Usage / start csr and ssr / build

This framework is an implementation of the standard for server-side rendering in Serverless scenarios, and has the following features.

- Small: the implementation is concise and the usage elegant; the built bundles are few and small.
- Complete: supports developing both SPA and MPA application types, switches seamlessly between the SSR and CSR rendering modes, supports HMR, and supports customizing the rendering mode of individual components.
- Beautiful: based on the Midway FaaS framework, with a strong ecosystem; it can be published to several different Serverless platforms.

The build command bundles according to the routes in the yaml configuration file. Multiple route configurations are supported.

A set of opinionated, functional utilities that work particularly well when used together

Clan

Clan is a super succinct, no-dependency set of utilities with a slightly opinionated collection of features that integrate particularly well when used together. As of v1.0.0, it is now written in TypeScript and bundled with FuseBox.

Usage

Check out the tests in the src folder for examples of code.

Run tests

Caught a bug?

- Fork this repository to your own GitHub account and then clone it to your local device
- Install the dependencies:
- Bundle the source code and watch for changes:

After that, you'll find the code in the folder!

features

- fp.ts - functional composition and transducers
- model.ts - a rules-based and constructor/type-based engine that validates deeply-nested data structures; basically, an ORM that validates POJOs for attack vectors on a server (i.e. parse your incoming networked data sources).
- batch.ts - a functional abstraction that deduplicates on-the-wire requests, batching parallel requests together
- observable.ts - an extremely powerful, efficient, memory-friendly state-cascading paradigm that enables all sorts of asynchronous, functional, and lazy-evaluation paradigms
- worker.ts - a quick, observable-friendly abstraction for creating web worker threads via blobs/blob URLs, pushing dependency code to them, and even spreading work to multiple workers in parallel and streaming the output back to one method. Works very nicely with observables as both input and output pipes.
- prop.ts - a tiny implementation that acts like Underscore's; grab nested properties with a "CSS selector" from an object tree, returning that value or null if not found.
- server.ts - an observable-based, lightweight, cache-friendly streaming server.

Authors

Matthew Keas, @matthiasak. Need help / support? Create an issue or ping on Twitter.

Repository

https://github.com/matthiasak/clan
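The prop.ts idea described above — walking a selector through an object tree and returning null on a miss — can be sketched in a few lines. This is an illustrative reimplementation of the concept, not Clan's actual API:

```javascript
// Illustrative sketch of the prop.ts idea (not Clan's actual API):
// walk a dot-separated selector through an object tree, returning
// null as soon as any step is missing.
const prop = (selector) => (obj) =>
  selector.split('.').reduce(
    (acc, key) => (acc != null && key in Object(acc) ? acc[key] : null),
    obj
  );

const user = { profile: { name: 'Ada', links: { github: 'adal' } } };
prop('profile.name')(user);         // 'Ada'
prop('profile.links.github')(user); // 'adal'
prop('profile.email')(user);        // null
```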

Fat command line for literate-programming

literate-programming

This is the fat command-line client for

literate-programming-lib.

It contains the full functionality for literate programming, including useful commands such as jshint. For a thin client, check out litpro.

Full documentation: Literate Programming, MD: How to Treat and Prevent Software Project Mess

This is not yet fully baked, hence v0.9. But this does represent a

significant break from 0.8.4. You can take a look at convert.md for some

observations of mine as I converted from the old version to the new.

Install using

Usage is and it has some command flags.

If you want a global install so that you just need to write

then use .

The library has a full listing of the syntax, commands, and directives. Here

we list the flags and new commands and directives.

Example usage

Save the following code to file and run .

Documentation

For more information, see the

documentation book, which is free to read online or available for purchase as a PDF.

Some particularly useful syntax sections are:

command-line flags
directives
commands
subcommands

Use and Security

It is inherently insecure to compile literate

program documents. No effort has been made to make it secure. Compiling a

literate program using this program is equivalent to running arbitrary code on

your computer. Only compile from trusted sources, i.e., use the same

precautions as running a node module.

LICENSE

MIT-LICENSE

HomePage

Repository

jshint for literate-programming

JSHint

This is a plugin for litpro. Install that and then you can use this by requiring it in the lprc.js file.

It is automatically included in literate-programming.

This plugin provides a single command: . It takes three

arguments: options, globals, name

Options should be an object containing configuration options for

JSHint. This can be conveniently created

as an argument using the kv subcommand:

The second argument is an array for globals. Just write them out. If you

want to be able to write to them without a warning, use after the

name. So for example would set those variables as globals and allow module to be written to.

The third argument is a name to be written in association with it. The default

is roughly of the form

HomePage

Repository

Testing framework for literate-programming-cli

literate-programming-cli-test

This provides the testing framework for literate-programming command line

client and its plugins.

This should be a developer dependency in the package. You can use

if you like.

Then in the test script, you could do something like

The require function returns a function that generates the tests function

using the command provided to execute the literate-programming command.

Typically, install litpro as a dev

dependency and then the default will be correct. Otherwise, supply your

command pathway.

Other than the default, this probably works to test any command-line

functionality that generates files and directories. You can pass in

an empty string to run entirely different commands.

There is a second option for the returned module function. If you pass in

, then any console output is not shown. It is still written to

and where one can review it, but this option allows one

to eliminate seeing all the console stuff when it is irrelevant to one's

needs. Passing in to the first argument preserves the default

command if you need to have the second argument by itself.

The function expects a sequence of arrays where each entry specifies a

test whose name is the first entry in the array and that is also the name of

the directory under the folder . The second entry gives the specific

arguments to run. If you are not testing the command line options themselves,

this can probably be left blank, particularly with a good file.

This function has a set of directories it wipes out by default, namely build,

cache, out.test, and err.test. To overwrite that behavior, put a reset.test

file in your directory, listing per line the directories or files that need to

be wiped out to do a clean test.

The tests are based on a directory whose contents are used as the

template of what should be found.

The directory structure is where folder is the test

folder. If exists, then the test will look for

and see that it works. Only the files in canonical

are checked. Thus, this does not check for making extraneous files in the

directory.

Beyond Strict Equality

The above describes using this as run this command and compare the resulting

files using equality.

Sometimes that's not good enough. Sometimes you have that might get

stored differently, or lines in a different order. So the third argument for

an array line specifies an object that, per file name, will take in a function

that takes in the two texts from the files and comes up with a true or false value. The signature is (expected, actual).

The function has some methods that help: will split the lines and

make sure that equal lines are present, though the ordering may be different.

This is actually a function that gets instantiated and one can pass in a line

comparator to make it different than equality.

The other function is which will parse both files as and see if

they have deep equality.

Automated Setup

The design of this plugin has a lot of files to make a test. I don't like

that. So we can also have the plugin parse out a single file into multiple

files.

To trigger this, use for the first argument in the test item's array.

Then it will look for in the tests directory and create, if

necessary, the directory and populate it with the files found in

the Files are separated by for inputs (sitting in the

top directory or for those that should be in the

directory. Subdirectories are fine, just use for them.

And then in cmd.md

So that could be

a test specification and everything should be good to go.

Note that if the leading text has no leading colon or equals, then the text is

ignored until the first or . Also, will trigger a block that is ignored. Finally, a plain with no following it will be

appended to the previous block (ignored if the previous block is the leading

text being ignored).

LICENSE

MIT-LICENSE

HomePage

Repository

Basic command line for literate-programming

literate-programming-cli

This is the command line client module for literate-programming. The intent of

this one is to build the command line clients using this module as a baseline.

To use the thin client, see litpro. For a fuller client geared to web development, please see

literate-programming

Install using

Usage is and it has some command flags.

Flags

The various flags are

-b, --build The build directory. Defaults to build. Will create it if it

does not exist. Specifying . will use the current directory.

--checksum This gives an alternate name for the file that lists the hash

for the generated files. If the compiled text matches, then it is not

written. Default is stored in the build directory.

-d, --diff This computes the difference between each file and its existing version. There is no saving of files.

-e, --encoding Specify the default encoding. It defaults to utf8, but any

encoding supported by node works. To have more encodings, use the plugin

litpro-iconv-lite. To override the command-line behavior per loaded

file from a document, one can put the encoding between the colon and pipe in

the directive title. This applies to both reading and writing.

--file A specified file to process. It is possible to have multiple

files, each preceded by an option. Also any unclaimed arguments will be

assumed to be a file that gets added to the list.

-f, --flag This passes in flags that can be used for conditional branching

within the literate programming. For example, one could have a production

flag that minimizes the code before saving.

-i, --in This takes in standard input as another litpro doc to read from.

-l, --lprc This specifies the lprc.js file to use. It need not be provided. The lprc file should export a function that takes in as arguments

the Folder constructor and an args object (what is processed from the

command line). This allows for quite a bit of sculpting. See more in lprc.

-o, --out This directs all saved files to standard out; no saving of

compiled texts will happen. Other saving of files could happen; this just prevents the files targeted by the save directive from being saved.

-s, --src The source directory to look for files from load directives. The

files specified on the command line are used as is while those loaded from

those files are prefixed. Shell tab completion is a reason for this

difference.

-z, --other This is a place that takes in an array of options for plugins.

Since plugins are loaded after initial parsing, this allows one to sneak in

options. The format is key:value. So -z cache:cool would set the value cache to cool.

--scopes This shows, at the end of the run, all the variables and values that the document thinks are there. Might be useful for debugging purposes.

New Commands

This executes the commands on the command line. The

standard input is the incoming input and the standard output is what is

passed along.

Same as exec but no caching

Reads in file with filename. Starts at source directory.

This terminates the old input and replaces it with the file contents.

Generates a list of files in named directory. This generates

an augmented array.

Saves the input into the named file using the

encoding if specified.

New Directives

Executes command line as a directive. Not sure on usefulness.

Reads a file, pipes it in, stores it in var name.

Save. Not new, but works to actually save the file on disk.

LICENSE

MIT-LICENSE

HomePage

Repository

A literate programming compiler. Write your program in markdown. This is the core library and does

not know about files.

literate-programming-lib

Write your code anywhere and in any order with as much explanation as you

like. literate-programming will weave it all together to produce your project.

This is a modification of, and an implementation of, Knuth's Literate Programming technique. It is

perhaps most in line with noweb.

It uses markdown as the basic document format with the code to be weaved

together being markdown code blocks. GitHub flavored code fences can also be used

to demarcate code blocks. In particular, commonmark is the spec used for parsing the markdown. Anything considered code by it will be considered code by literate programming.

This processing does not care what language(s) you are programming in, but it may skew towards being more useful for the web stack.

This is the core library that is used as a module. See

-cli for the command line client. The full version has a variety of useful standard

plugins ("batteries included").

Installation

This requires node.js and npm to be installed. See nvm for a recommended

installation of node; it allows one to toggle between different versions. This

has been tested on node.js 0.10 and 0.12, and on io.js. It is basic javascript and

should work pretty much on any javascript engine.

Then issue the command:

Since this is the library module, typically you use the client version install

and do not install the lib directly. If you are hacking with modules, then you

already know that you will want this in the package.json file.

Using as a module

You can use to get

a constructor that will create what I think of as a folder.

The folder will handle all the documents and scopes and etc.

To actually use this library (as opposed to the command line client),

you need to establish how it fetches documents and tell

it how to save documents. An example is below. If you just want to compile

some documents, use the command line client and ignore this. Just saying the

following is not pretty. At least, not yet!

The thing to keep in mind is

that this library is structured around events

using my event-whenlibrary. The

variable gcd is the event emitter (dispatcher if you will).

This last line should start the whole chain of compilation with first.md being read in

and then any of its files being called, etc., and then any files to save will

get saved.

The reason the lib does not have this natively is that I separated it out

specifically to avoid requiring file system access. Instead you can use any kind of

function that provides text, or whatever. It should be fine to also use

directly on each bit of text as needed; everything will

patiently wait until the right stuff is ready. I think.

Note that live code can be run from a literate program as well. So be

careful!

Example

Let's give a quick example of what a sample text might look like.

A full example of a literate program is lp.md in this repository. It compiles

to this library.

Document syntax

A literate program is a markdown document with some special conventions.

The basic idea is that each header line (regardless of level, either atx # or

setext underline) demarcates a full block. Code blocks within a full block

are the bits that are woven together.

Code Block

Each code block can contain whatever kind of code, but there is a primary special

syntax.

This tells the compiler to compile the block with "Block

name" and then replace the with that code.

Note that the allowed quotes are double, single, and backtick. Matching types

are expected. And yes, it is useful to have three different types.

The full syntax is something of the form

where the scope name allows us to refer to other documents (or artificial

common scopes) and the commands run the output of one to the input of the

other, also taking in arguments which could themselves be block

substitutions.

Note that one can also backslash escape the underscore. To have multiple

escapes (to allow for multiple compiling), one can use where the number

gets decremented by one on each compile and, when it is compiled with a 0 there,

the sub finally gets run.

A block of the form would look for a minor block, i.e., a block

that has been created by a switch directive. See next section.

One can also visually hide parts of the document, without it being hidden to

the compiler, by using html comments. If the start of a line is then

it will strip that and the next occurrence of before doing the markdown

compiling.

Directive

A directive is a command that interacts with external input/output. Just about

every literate program has at least one save directive that will save some

compiled block to a file.

The syntax for the save directive is

where

- the file name is the name of the file to save to
- the block heading is the heading of the block whose compiled version is being saved. Spaces in the heading get converted to dashes for id-linking purposes. Colons can be used to reference other scopes and/or minor blocks. In particular, one can reference a minor block in the current heading block where the save directive is located.
- save says this is the directive to save a file
- the encoding is any valid encoding of iconv-lite. This is relevant more in the command line module, but it appears here because the save directive lives here.
- optional commands process the text before saving. See the next section.

For other directives, what the various parts mean depends on the directive, but it is always

where the should be replaced with a directive name. If dir is absent,

but the colon is there, then this demarcates a minor block start.

Pipes

One can also use pipes to pipe the compiled text through a command to do

something to it. For example, will take the code

in block and pipe it into the jshint command which can be a

thin wrapper for the jshint module and report errors to the console.

That command would then return the text in an untouched fashion. We can also use

pipe commands to modify the text.

Commands can be used in block substitutions, minor block directive switches, and

other directives that are setup to use them such as the save and out directive:

will tidy up the code

before storing it in the file .

If you want your own directive to process pipes, see the save directive in

lp.md. Pay particular attention to the "process" and "deal with start" minor

blocks. The functionality of pipe parsing is in the command,

but there are events that need to be respected in the setup.

Commands take arguments separated by commas and commands end with pipes or the

block naming quote. One can also use a named code block as an argument, using

any of the quote marks (same or different as surround block name). To

escape commas, quotes, pipes, underscores, spaces (spaces get trimmed from the

beginning and ending of an argument), newlines, one can use a backslash, which

also escapes itself. Note that the commonmark parser will escape all

backslash-punctuation combinations outside of code blocks. So you may need a

double backslash in directive command pipings.

You can also use to put a newline in a line, or where the ... is a unicode codepoint per the JavaScript spec, implemented by String.fromCodePoint.
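For reference, the JavaScript API mentioned is String.fromCodePoint, which maps a numeric codepoint to the corresponding character:

```javascript
// String.fromCodePoint turns a numeric unicode codepoint into a string.
String.fromCodePoint(10) === '\n';    // true: codepoint 10 is a newline
String.fromCodePoint(0x2764);         // '❤' (U+2764, heavy black heart)
String.fromCodePoint(0x1F600).length; // 2: astral codepoints take two UTF-16 units
```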

Minor Block

Finally, you can use distinct code blocks within a full block. If you simply

have multiple code blocks with none of the switching syntax below, then they

will get concatenated into a single code block.

You can also switch to have what I call minor blocks within a main heading. This is mainly

used for small bits that are just pushed out of the way for convenience. A

full heading change is more appropriate for something that merits separate attention.

To create a minor block, one can either use a link of the form or

Note this is a bit of a break from

earlier versions in which a link on its own line would create a minor block. Now it is

purely on the form and not on placement.

Example: Let's say in heading block we have . Then it will create a code block that can be referenced by

.

Note: If the switch syntax is then this just transforms

whatever is pointed to in

href using the pipe commands. That is, it is not a

switch, but fills in a gap for main blocks not having pipe switch syntax. The

key is the empty link text.

Templating

One use of minor blocks is as a templating mechanism.

This would produce the files:

happy.txt

sad.txt

middle.txt

Note that you need to be careful about feeding in the escaped commands into

other parsers. For example, I was using Pug to generate HTML structure and then using this templating to inject content (using markdown). Well, Pug

escapes quotes and this was causing troubles. So I used backticks to delimit

the block name instead of quotes and it worked fine. Be flexible.

Nifty parts of writing literate programming

You can have your code in any order you wish.

You can separate out flow control from the processing. For example,

The above lets you write the if/else statement with its logic and put the

code in the code blocks and . This can help keep one's

code to within a single screenful per notion.

You can write code in the currently live document that has no effect, put in

ideas in the future, etc.

You can "paste" multiple blocks of code using the same block name. This is

like DRY, but the code does get repeated for the computer. You can also

substitute in various values in the substitution process so that code

blocks that are almost the same but with different names can come from the

same root structure.

You can put distracting data checks/sanitation/transformations into another

block and focus on the algorithm without the use of functions (which can be

distracting).

You can process the blocks in any fashion you want. So for example, to

create a JSON object, one could use a simpler setup appropriate for the

particular data and then transform it into JSON. It's all good.

This brings DSL and grunt power, written in the same place as your code. It

is really about coding up an entire project.

Getting the length of functions right is difficult. Too short, and boilerplate and redirection become quite the bother. Too long, and it is hard to understand everything a function is doing. Too long, and we lose composability; too short, and the chain of composing them becomes too long.

Literate programming can help somewhat in that we can have longer functions

and still have it understood. We could also potentially use the litpro

blocks again allowing for some composability though that should be

rare. I think the rule of thumb is that if breaking it up seems good from a

usability stance, do it. If breaking it up is more about keeping a function

to a readable length, use litpro blocks. Another advantage of using litpro

blocks is that we get the benefit of small parts when coding, but when

debugging, we can see a much larger flow of code all at once in the compiled

version.

I also like to use it to compile an entire project from a single file, pulling

in other literate program files as needed. That is, one can have a

command-and-control literate program file and a bunch of separate files for

separate concerns. But note that you need not split the project into any

pre-defined ways. For example, if designing a web interface, you can organize

the files by widgets, mixing in HTML, CSS, and JS in a single file whose

purpose is clear. Then the central file can pull it all in to a single web

page (or many) as well as save the CSS and JS to their own files as per the

recommendation, lessing the CSS, transpiling ES6, linting, and minifying all as

desired. Or you could just write each output file separately in its own litpro

document.

It's all good. You decide the order and grouping. The structure of your litpro

documents is up to you and is independent of the needed structures of the

output.

Directives vs commands vs subcommands

Directives affect the flow of the literate program itself, such as defining

commands, saving output, or directly storing values. Commands transform

incoming text or other input. Subcommands create useful arguments to commands.

Directives can be thought of as procedures, commands as methods on the input,

and subcommands as functions. And indeed, directives do not compose in the

sense of returning a value. Commands are written like the chain syntax, with

what is on the left being evaluated first. Subcommands are written with

typical function syntax, with what is on the right being evaluated first.

Built in directives

There are a variety of directives that come built in.

Save: Save the text from

start into file filename. The options can be used in different ways, but in

the command client it is an encoding string for saving the file; the default

encoding is utf8.

Store: If the value is present, then it is sent through the pipes. If there is no value, then the location is used for the value and that gets piped. The name is used to store the value. You can also use the pipe syntax in the linkname part for the value instead; this dominates over the start or option value, and is a little bit easier for the reader to see in rendered form.

Log: Same as store, except instead of storing it in the doc, it logs it

to console. Same exact syntax.

Transform (or ): This takes the value that start points to and transforms it using the pipe commands. Note one can store the transformed values by placing the variable name after a pipe in the link text. The description of the link text has no role. For the syntax with no transform, it can be link text that starts with a pipe or it can be completely empty. Note that if it is empty, then it does not appear and is completely obscure to the reader.

Load: This loads the file, found at the url

(file name probably) and stores it in the alias scope as well as under the

url name. We recommend using a short alias and not relying on the filename

path since the alias is what will be used repeatedly to reference the blocks

in the loaded file. Options are open, but for the command line client it is

the encoding string with default utf8. Note there are no pipes since there

is no block to act on it.

Cd: This creates the ability to change

directories for either loading or saving. This is relative to the default

directory. (or save) will clear the path; it is always

good to do that when done. Ideally, this would be a tightly grouped set of files

(listing like a directory) with the initial change right before the list and

the changing back after the list.

Define: This allows one to define commands in a litpro document. Very handy. Order

is irrelevant; anything requiring a command will wait for it to be defined.

This is convenient, but also a bit more of a bother for debugging. Anyway,

the start is where we find the text for the body of the command. The post

colon, pre pipe area expects one of three options which is explained below

in plugins. You can also pipe your command definition through pipe commands

before finally installing the function as a live function. Lots of power,

lots of headaches :)

The basic signature of a command is

where

the input is the text being piped in, the args are the arguments array

of the command, and name is the name to be emitted when done. The is the doc.

sync. This should return a value which will be used as the text being

passed along. You can access name if you like, but it is not useful

here.

async. Name is a callback function that should be called when done.

Standard node signature of . So put the text in the second

slot and null in the first if all is well.

raw. Nothing is setup for you. You have to get your hands dirty with the

event emitter of the doc. You'll need some good understanding of it. See

the sub command definition for inspiration.

defaults. The idea is that this is mostly synchronous, but has some

default arguments that come from the document and hence it is async for

that reason. So we need to create a tag for the emitname, document

variable names for default, and the function that is to be called when

all is ready. To accommodate this, instead of a function, we should have an array that gets read in: where tag is

either a string or a function that takes in the passed in arguments and

generates a string, arg0 to arg..., are the default doc variables. It

is fine to have some be empty. The final entry should be a function of

the same type as the sync functions (return values pass along the

input).

This defines the command only for the current doc. To do it across docs in the

project, define it in the lprc.js. The commandName should be one word.
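As a sketch of the sync and async shapes described above: the function names here are invented for illustration, and only the (input, args, name) signature comes from the text; the commented lprc.js wiring is an assumption to check against the lprc documentation.

```javascript
// sync shape: return the transformed text directly; `name` is available
// but rarely needed in the sync case.
function caps(input, args, name) {
  return input.toUpperCase();
}

// async shape: the third slot is a node-style callback (err, text);
// call it when the transformed text is ready.
function capsLater(input, args, callback) {
  setImmediate(() => callback(null, input.toUpperCase()));
}

// In an lprc.js file such functions would be attached so the project's
// documents can use them; the wiring below is an assumption:
// module.exports = function (Folder, args) {
//   Folder.commands.caps = caps;
// };
```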

Compose: This

composes commands, even those not yet defined. The arguments specified here

are passed onto the commands as they are executed. There are no subcommands

used in these arguments, but subcommands can be used in the arguments

differently. If an argi syntax has then that numbered argument when the

command is invoked is subbed in. If the argi has , then it assumed the

incoming argument is an array and uses the next available array element; if

the @i appears at the end of the arg list, then it unloads the rest of its

elements there. This may be a little clunky and the syntax may change. We

also have as special commands in compose: which does nothing but handles

two accidental pipes in a row smoothly, which stores the incoming

into the ith variable to use later as a named dollar sign variable, which sends along the ith variable to the next pipe, which pushes the

value onto the ith element, assuming it is an array (it creates an array if

no array is found). There is also a special variant of where

if the first arrow is present, then it uses argument as the input and if

the second arrow is present, then it saves the output into argument ,

sending the original input on instead of the output.

Partial: This

takes a command, , and makes a new command, , by

replacing an argument slot, zero-based, with whatever the block

and pipes result in.

Subcommand: This defines

subcommandname (one word) and attaches it to be active in the cmdName. If no

cmdName, then it becomes available to all commands.

Blocks on/off: Stops recording code blocks. This is

good when writing a bunch of explanatory code text that you do not want

compiled. You can turn it back on with the directive.

Directives and headings are still actively being run and used. These can be

nested. Think "block comment" sections. Good for turning off troublesome

sections.

Eval: Whatever block the eval finds itself in, it

will eval. It will eval it only up to the point where it is placed. This is

an immediate action and can be quite useful for interventions. The eval will

have access to the doc object which gives one access to just about

everything else. This is one of those things that make running a literate

programming insecure. The return value is nonexistent and the program will

not usually wait for any async actions to complete. If you put a pipe in the

link name text, then anything after the pipe will become a name that the

variable will be stored in.

Ignore: This ignores the code

blocks. For example, by convention, you could use code fence blocks with

language js for compiled code and ignore those with javascript. So you can

have example code that will not be seen and still get your syntax

highlighting and convenience. Note that this only works with code fences,

obviously. As soon as this is seen, it will be used and applied thereafter.

Out: Sends the text from start to

the console, using outname as a label.

New scope: This creates a new scope (stuff

before a double colon). You can use it to store variables in a different

scope. Not terribly needed, but it was easy to expose the underlying

functionality.

Push: This takes the stuff in start,

throws it through some pipes, and then stores it as an item in an array with

the array stored under var name. These are stored in the order of appearance

in the document. The optional pipe syntax after var name will yield the

value that starts and we ignore in that case.

h5This is a directive that

makes h5 headings that match act like the push above, where each is

being pushed to an array that will eventually populate . It takes

an optional argument which could be to stop listening for the headings

(this is useful to have scoped behavior) and which will give the

event name as well as the text; the default is just the text.

Link ScopeThis creates an

alias for an existing scope. This can be useful if you want to use one name

and toggle between them. For example, you could use the alias for or and then have be used with just switching what points to depending on needs. A bit of a stretch, I admit.

MonitorThis is a bit digging into the system.

You can monitor the events being emitted by using what you want to match

for. For example, you could put in a block name (all lower cased) and

monitor all events for that. This gets sent to which by default

prints to . If you use in the match string, this becomes

the triple colon separator that we use for technical reasons for

name syntax. This directive's code gives a bit of insight as

to how to get more out of the system.

IfIf flag holds true (think

build flag), then the directive is executed with the arguments as given. A

couple of great uses are conditional evaling which allows for a great deal

of flexibility and conditional block on/off which may be useful if there is

extensive debugging commands involved.

FlagThis sets the named flag to true. Note

there is no way to turn a flag off easily.

VersionThis gives the name and

version of the program. Note the semicolon separator. Saves ,

, .

npminfoThis takes in a string for some basic author information and

dependencies used. To add on or modify how it handles the deps, dev, etc.,

modify the object on . Saves

, , , ,

.

Built in commands

Note commands need to be one word and are case-sensitive. They can be

symbols as long as that does not conflict with anything (avoid pipes,

commas, colons, quotes).

evalThe first argument is the text of the code to

eval. In its scope, it will have the incoming text as the variable and the arguments, which could be objects, will be in the

array. The code is eval'd (first argument). The code text itself

is available in the variable. The variable is what is

passed along. This should make for quick hacking on text. The doc

variable is also available for inspecting all sorts of stuff, like the

current state of the blocks. If you want to evaluate the incoming text

and use the result as text, then the line as the

first argument should work.

async(async eval) Same deal as eval, except

this code expects a callback function to be called. It is in the

variable callback. So you can read a file and have its callback call the

callback to send the text along its merry way.

evilWhile the eval command thinks of the first argument as code

acting on the incoming text, its twin evil thinks of the incoming text

as the code and the arguments as just environment variables. The value

returned is the variable which defaults to the original code.

funifyThis assumes the incoming text is a function-in-waiting and

it evals it to become so. This is great if you want to do a or if

you just want to mess with stuff. will call the

function and return that result.

compileThis compiles a

block of text as if it was in the document originally. The compiled text

will be the output. The first argument gives the names of the blockname

to use if short-hand minor blocks are encountered. This is useful for

templating. If no blockname is given, then the current one is used. Any

further arguments should be in pairs, with the second possibly empty, of

a minor block name to fill in with the value in the second place.

sub

A: Replaces parts of incoming text.

S: ,

This replaces in the text

with . The replacement is sorted based on the length of the key

value. This is to help with SUBTITLE being replaced before TITLE, for

example, while allowing one to write it in an order that makes reading

make sense. This is a bad, but convenient idea.

Recommend just using one pair at a time, as commands can be piped along.

Alternate signature .

This does a regular expression replacement, where the first is a regex ( ) that acts on the string and replaces matches using the usual javascript replacement syntax for the second. The regex syntax can be part of pair sequences. In accordance with shorter-first ordering, regexes, which typically are expansive, will go last, but amongst regexes the order of appearance is preserved. The recommendation is to not mix multiple pairs with regexes.

E:

#basic, string
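The length-ordered replacement described above can be sketched in plain JavaScript. This is a minimal illustration of the idea, not litpro's actual implementation; the helper name `sub` is ours:

```javascript
// Sketch of the `sub` idea: replace each key with its value,
// processing longer keys first so SUBTITLE is handled before TITLE,
// regardless of the order the pairs were written in.
function sub(text, pairs) {
  const keys = Object.keys(pairs).sort((a, b) => b.length - a.length);
  for (const key of keys) {
    text = text.split(key).join(pairs[key]); // replace all occurrences
  }
  return text;
}

const out = sub("TITLE: SUBTITLE", { TITLE: "Doc", SUBTITLE: "Part 1" });
// out is "Doc: Part 1" -- SUBTITLE was replaced before TITLE could clobber it
```

The length sort is the whole trick: without it, replacing TITLE first would mangle SUBTITLE.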

storeThis stores the incoming text into the

variable name. This is good for stashing something in mid computation.

For example, will

stash the incoming text into temp, then substitute out THIS for that,

then store that into awe, and finally restore back to the state of temp.

Be careful that the variable temp could get overwritten if there are any

async operations hanging about. Best to have unique names. See push and

pop commands for a better way to do this.

clear. This removes the variable name and passes

along the input. The input has no impact on this.

logThis will output a concatenated string to doc.log (default

console.log) with the incoming text and the arguments. The first

argument is treated as an identifier in the output. This is a good

way to see what is going on in the middle of a transformation.

rawThis will look for start in the raw text of the

file and end in the file and return everything in between. The start and

end are considered stand-alone lines.

trimThis trims the incoming text, both leading and trailing

whitespace. Useful in some tests of mine.

filterThis will filter an array or object into a lesser object,

based on what the rest of the arguments are. If the input

is an object, then it will take the rest of the arguments as either:

- type string: explicit keys to keep.
- type regexp: keys must match the regexp to be kept.
- type function: a function that takes in the key and value and returns the boolean true if the key/value pair should be added.
- true: if the boolean true is given (or no argument at all), this essentially copies the object.

It filters the object based on these criteria and returns the new

object.

For an array, it is similar except an array is

returned.

- a number (actual, or one that parses into it): this pushes the entry at that index onto the new array.
- '#:#': will slice it between the two numbers.
- 'ax b': b is the starting value (negative counts from the end) while a is the increment to add (negative goes down).
- type function: takes in the value and index and returns true if the value should be added.
- true: adds a whole copy of the array; also the default if nothing is provided.
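The object case above can be sketched as a small stand-alone helper. This is a hypothetical illustration of the described semantics, not litpro's own code:

```javascript
// Sketch of `filter` semantics for objects: keep the keys that match
// any of the given criteria (string, regexp, predicate, or true).
function filterObj(obj, ...criteria) {
  if (criteria.length === 0) criteria = [true]; // no args: copy everything
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const keep = criteria.some((c) =>
      c === true ? true :
      typeof c === "string" ? c === key :
      c instanceof RegExp ? c.test(key) :
      typeof c === "function" ? Boolean(c(key, value)) : false);
    if (keep) out[key] = value;
  }
  return out;
}

const picked = filterObj({ alpha: 1, apex: 2, beta: 3 }, /^a/);
// picked is { alpha: 1, apex: 2 }
```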

joinThis will concatenate the incoming text and the arguments

together using the first argument as the separator. Note one can use

as arg1 and it should give you a newline (use if in a

directive due to parser escaping backslashes!). No separator can be as

easy as .

This also does double duty as something entirely different. If the input

is an object or an array, then it first filters it according to the

arguments, just as in the filter command, and then joins the results

with the first argument as the join separator. For objects, if the keys

are a group (such as regexp matching), then they will be sorted

alphabetically first before joining.

catThe arguments are concatenated with the incoming text as is.

Useful for single arguments, often with no incoming text.

echoThis terminates the input sequence and

creates a new one with the first argument as the outgoing.

getThis is just like using but that

fails to work in compositions. So get is its replacement. This ignores

the input and starts its own chain of inputs.

arrayThis creates an array out of the input and

arguments.

.This is the dot command and it

accesses property name which is the first argument; the object is the

input (typically a string, but can be anything). If the property is a

method, then the other arguments are passed in as arguments into the

method.

For the inspirational example, the push directive creates an

array and to join them into text one could do . There is

also an alias so that any as a command works. For example,

we could do above. This avoids forgetting the comma after

join in the prior example.
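The dot command's lookup-or-call behavior can be sketched like this (a hypothetical helper showing the idea, not litpro's implementation):

```javascript
// Sketch of the dot command: look up the named property on the
// input; if it is a method, call it with the remaining arguments;
// otherwise return the property value itself.
function dot(input, name, ...args) {
  const prop = input[name];
  return typeof prop === "function" ? prop.apply(input, args) : prop;
}

const joined = dot(["a", "b", "c"], "join", ", ");
// joined is "a, b, c"
const len = dot("hello", "length");
// len is 5
```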

-This is the dash command and it

accesses the utility property which is the first argument; the object is the

input (typically a string, but can be anything). It calls the relevant

command with that method.

Each object in the has the form where the command name is the name to be called (such as

and the methods should be on the called object, such as

and the order of the search, with lower numbers

coming first.

pushSimply pushes the current state of the incoming text on the

stack for this pipe process.

popReplaces the incoming text with popping out the last unpopped

pushed on text.

ifIf the boolean is true,

then the command will execute with the given input text and

arguments. Otherwise, the input text is passed on. This is usefully

paired with the subcommand boolean asks, for example running one branch or another depending on which flags are set.

ifelseThis expects arrays of the above form as arguments. It

works through the conditions until it finds a true value and then it

executes the command. If none are found, then it passes along the input

text.

whenThis takes in the event names and waits for

them to be emitted by done or manually with a

. That would probably be used in

directives. The idea of this setup is to wait to execute a cli command

for when everything is setup. It passes through the incoming text.

doneThis is a command to emit the done event for name. It

just passes through the incoming text. The idea is that it would be,

say, a filename of something that got saved.

arrayifyThis takes the incoming text and creates an array out of

it. The first argument is an object with keys to know what to

split on, to escape the separator and itself, a boolean

that will trim the text for each entry. The defaults are newline,

backslash, and true, respectively. You can also pass them as the first,

second, and third argument, respectively.

Note that this assumes that both sep

and esc are single characters. You can have the usual block

substitutions, of course, but it might be safer to escape the block and

run it through compile, e.g., .

This also allows nesting of objects. To get a string representation of

the array, call .
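The split-with-escape behavior described above can be sketched as follows. This is illustrative only, with the defaults the text names (newline, backslash, trim); litpro's version also handles the nesting mentioned:

```javascript
// Sketch of `arrayify`: split text on a single-character separator,
// honoring a single-character escape that protects the separator
// and itself.
function arrayify(text, sep = "\n", esc = "\\", trim = true) {
  const out = [];
  let cur = "";
  for (let i = 0; i < text.length; i += 1) {
    const ch = text[i];
    if (ch === esc && (text[i + 1] === sep || text[i + 1] === esc)) {
      cur += text[i + 1]; // escaped separator or escape character
      i += 1;
    } else if (ch === sep) {
      out.push(trim ? cur.trim() : cur);
      cur = "";
    } else {
      cur += ch;
    }
  }
  out.push(trim ? cur.trim() : cur);
  return out;
}

const parts = arrayify("one\ntwo\nthree");
// parts is ["one", "two", "three"]
```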

objectifyThis takes the incoming text and creates an object out of

it. The first argument is an object with keys to know what to

split on for the key, to split on for the end of the value, to escape the separator and itself, a boolean that will trim the

value for each entry; keys are automatically trimmed. The defaults

are colon, newline, backslash, and true, respectively.

Note that this assumes

that all the characters are single characters. You can have the usual

block substitutions, of course, but it might be safer to escape the

block and run it through compile, e.g., .

This also allows nesting of objects. Call to get a

string.

regifyTurns the incoming input into a regular expression. The first argument gives the flags; if none, g is assumed, but if some flags are specified one should add g. If no global flag is needed, use '-'.

ifeThis takes a snippet of code and creates an immediate function

execution string for embedding in code. The arguments become the

variable names in both the function call and the function definition. If

an equals is present, then the right-hand side is in the function call

and will not be hidden from access in the ife.

capsThis is a command that tries to match caps and replace them.

The idea comes from wanting to write and get . This does that. By passing in a JSON object of

possible matches as argument or setting the caps local object to an

object of such matches, you can change what it matches. But it only

will match a single character (though unicode is fine if you can input

that).
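The single-capital replacement can be sketched as below. The map passed in here is illustrative, not the command's default table:

```javascript
// Sketch of the `caps` idea: replace single capital letters using a
// map of matches; letters not in the map pass through unchanged.
function caps(text, matches) {
  return text.replace(/[A-Z]/g, (ch) =>
    Object.prototype.hasOwnProperty.call(matches, ch) ? matches[ch] : ch);
}

const out = caps("E = mC", { C: "c^2" });
// out is "E = mc^2" -- E is not in the map, so it is left alone
```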

assertThis asserts the equality of the input and first argument

and if it fails, it reports both texts in a log with the second argument as a message. This is a way to

check that certain things are happening as they should.

wrapThis wraps the incoming text in the first and second arguments.

js-stringThis breaks the incoming text of many lines into quoted

lines with appropriate plus signs added. The first argument allows for a

different quote such as . The double quote is default. Also and

generates single and double quotes, respectively.
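One plausible rendering of this transformation, as a stand-alone sketch (litpro's exact quoting and newline handling may differ):

```javascript
// Sketch of `js-string`: break a multi-line string into quoted
// lines joined by plus signs, embedding a literal \n in each line.
function jsString(text, quote = '"') {
  return text
    .split("\n")
    .map((line) => quote + line + "\\n" + quote)
    .join(" +\n");
}

const out = jsString("a\nb");
// out is the two-line text:
//   "a\n" +
//   "b\n"
```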

html-wrapThis takes the incoming text and wraps it in a tag

element, using the first argument as the element and the rest of the

arguments as attributes. An equals sign creates an attribute with value,

no equals implies a class. An attribute value will get wrapped in

quotes.

will lead to
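The attribute convention described above (equals sign makes an attribute, bare word makes a class) can be sketched as a hypothetical helper:

```javascript
// Sketch of `html-wrap` argument handling: an argument containing an
// equals sign becomes attribute="value"; a bare word becomes a class.
function htmlWrap(text, tag, ...attrs) {
  const classes = [];
  const pairs = [];
  for (const a of attrs) {
    const eq = a.indexOf("=");
    if (eq === -1) classes.push(a);
    else pairs.push(a.slice(0, eq) + '="' + a.slice(eq + 1) + '"');
  }
  if (classes.length) pairs.unshift('class="' + classes.join(" ") + '"');
  return "<" + [tag, ...pairs].join(" ") + ">" + text + "</" + tag + ">";
}

const html = htmlWrap("hi", "span", "note", "id=x");
// html is '<span class="note" id="x">hi</span>'
```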

html-tableThis requires an array of arrays; matrix is

good. The first argument should either be an array of headers or

nothing. It uses the same argument convention of html-wrap for the rest

of the arguments, being attributes on the html table element. We could

allow individual attributes and stuff on rows and columns, but that

seems best left to css and js kind of stuff. Still thinking on if we

could allow individual rows or entries to report something, but that

seems complicated.

html-escapeThis escapes in html. It is mainly intended for

needed uses, say in math writing. Very simple minded. One can modify the

characters escaped by adding to . This is

actually similar to caps and snippets.

html-unescapeThe reverse of html-escape, depending on what the

symbols are in .

snippets(alias s). This is a function for things that are

easily named, but long to write, such as a cdn download script tag for a

common js library, say jquery. could then do that. Currently,

there are no default snippets. To load them, the best bet is in the

lprc.js file and store the object as or,

if you are feeling generous, one could do

. This is really a

stand-alone command; incoming text is ignored.

In writing a snippet, it can be a function which will take in the

arguments. Alternatively, you can sprinkle in your code for

the argument with number #, and the pipes give an optional default; if none, then ARG# is eliminated. So yields a default of 1.9.0. Pipes cannot be in the default.

Be careful that the first argument is the snippet name.

#/#nameThis is just a comment. For special-character free text,

one can just write it, but if one wants to include special characters,

use . Example or . This latter form will store the current state into

.

cmdsThis creates a sequence of commands to execute, most likely

used with if-else since a single pathway is covered by the usual pipe

syntax. The form is , e.g., ... If it is

just one argument, then the array is not needed (if that one argument is itself an array, wrap it in an array).

mapc, or with , This takes the input and applies the command to each entry, if the input is an array or object. Otherwise, it just applies the command to the whole input. allows a sequence of commands to happen. For an object, if the args contain , then that gets replaced by the key under consideration.

forinThe args are

.

This iterates over the input object.

If the input is not an array or object, then is called on the input

itself as with a of an empty string, and the is just

the initial value.

The return value of is used in the third place of the next loop. If

it is undefined, is passed in.

All functions should be synchronous. All values will be visited; there

is no way to break out of the loop though one could have the function

do nothing if the ret value was a particular kind (say, looking for

false values, it starts true and if it becomes false, then it just

returns that for all later ones). This is not designed for large number

of keys.

The sort should be a comparison function that expects the following

arguments: . Alternatively, it can

send in the strings or to sort the order by intrinsic key or

value meaning. Note that value needs to be natively comparable in some

meaningful sense if is sent in.
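The loop shape described above can be sketched as a synchronous reduce over entries. The sort options here mirror the description ("key", "value", or a comparator); the helper itself is hypothetical:

```javascript
// Sketch of `forin`: visit every key/value of an object, threading a
// return value through like a reduce; an undefined return falls back
// to the initial value on the next iteration.
function forin(obj, fn, init, sort) {
  const entries = Object.entries(obj);
  if (sort === "key") entries.sort((a, b) => (a[0] < b[0] ? -1 : 1));
  else if (sort === "value") entries.sort((a, b) => (a[1] < b[1] ? -1 : 1));
  else if (typeof sort === "function") entries.sort(sort);
  let ret = init;
  for (const [key, value] of entries) {
    const next = fn(value, key, ret); // synchronous; every value is visited
    ret = next === undefined ? init : next;
  }
  return ret;
}

const total = forin({ a: 1, b: 2, c: 3 }, (v, k, acc) => acc + v, 0);
// total is 6
```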

pgetGets the property named by the arguments.

psetSets the property named by the arguments with the last

argument being the value. May create objects and arrays as

needed.

pstoreThis stores the input into the first argument (should be

object or array) using the rest of the arguments to define. This returns

the value.

toJSONReturns a JSON representation of input. Uses JSON.stringify

and passes in the first two args (whitelist, spaces) to allow full features.

fromJSONReturns an object from a JSON representation. Uses

JSON.parse and passes in first argument (reviver function) if present.

anonThe first argument should be a function or string that can be

converted into a function of command form, namely the arguments are

and the is though that is also in a

closure if it is a string evaluated. The function should be synchronous

and return the value to send on.

minorsThis converts the input from an array into an object, using

the arguments as the keys. If there is a mismatch in length, then the

shorter is used and the rest is discarded. If the input is not an array,

then it becomes the sole value in the object returned with key as first

argument or empty string.

templatingThis expects an object as an input. Its keys will be

minor block names when compiling the template given by the first

argument. It will send along the compiled text.

mergeMerges arrays or objects.

cloneClones an array or object.

applyThis applies a function or command to a property of an object

and replaces it. Clone first if you do not want to replace but want a new object. The first argument is the key, the second is the command string or

function, and the rest are the args to pass in. It returns the object

with the modified property.

matrixifyThis takes in some text and splits into a two dimensional

array using the passed in separators. The first separator divides the

columns, the second divides the rows. The result is an array each of

whose entries are the rows. There is also an escape character. The

defaults are commas, newlines, and backslashes, respectively. The escape

character escapes the separators and itself, nothing else. There is also

a boolean for whether to trim entries; that is true by default. Pass in

in the fourth argument if not desired. All the characters should

be just that, of length 1.
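The two-dimensional split can be sketched as below, using the default separators the text names (comma, newline, backslash, trim). A hypothetical helper, not litpro's code:

```javascript
// Sketch of `matrixify` parsing: split text into rows and columns
// with single-character separators and an escape character that
// protects the separators and itself.
function matrixify(text, colSep = ",", rowSep = "\n", esc = "\\", trim = true) {
  const rows = [[]];
  let cell = "";
  const endCell = () => {
    rows[rows.length - 1].push(trim ? cell.trim() : cell);
    cell = "";
  };
  for (let i = 0; i < text.length; i += 1) {
    const ch = text[i];
    const nxt = text[i + 1];
    if (ch === esc && (nxt === colSep || nxt === rowSep || nxt === esc)) {
      cell += nxt; // escaped character taken literally
      i += 1;
    } else if (ch === colSep) {
      endCell();
    } else if (ch === rowSep) {
      endCell();
      rows.push([]);
    } else {
      cell += ch;
    }
  }
  endCell();
  return rows;
}

const m = matrixify("a, b\nc, d");
// m is [["a", "b"], ["c", "d"]]
```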

This returns a matrix (prototyped) that has the properties:

- Iterates a function over the rows. If an array is returned, it replaces the row.
- Iterates a function over the cols and will also replace the columns if an array is returned.
- This returns a new matrix with flipped rows and columns.
- This trims the entries in the matrix, returning the original.
- This converts every entry into a number, when possible.
- This creates a copy.
- This runs through the matrix, applying a function to each entry, the arguments being .
- This takes in a second matrix and checks if they are strictly equal.
- This prints the matrix using the passed-in row and col separator or using the property.

Built-in Subcommands

With command arguments, one can run commands on arguments to get them in some

appropriate form or use, including passing in objects or arrays. You can use

them as , which would have subcmd acting on the args, and the result of that would be placed in that argument slot.

The would be passed into cmd as the first

argument, but anything might get passed into cmd by subcmd's return value. It

could also store an object into a state for configuration.

There are several built-in subcommands. Note that these are case insensitive.

- or This expects a quote-delimited string to be passed in and will strip the quotes. This is useful as the appearance of a quote will mask all other mechanics. So will produce a literal argument of . Multiple arguments will be stripped and passed on as multiple arguments.
- The first entry is the joiner separator and it joins the rest of the arguments. For arrays, they are flattened with the separator as well (just one level -- then it gets messy and wrong, probably).
- or This creates an array of the arguments.
- or Inverse of array. This expects an array and each element becomes a separate argument that the command will see. E.g., is equivalent to . This is useful for constructing the args elsewhere. In particular, will result in the array from the substitution becoming the arguments to pass in.
- or This presumes that a JSON stringed object is ready to be made into an object.
- Merge arrays or objects, depending on what is there.
- or This produces an object based on the assumption that a pairing are the arguments. The key should be text. Multiple pairs welcome.
- This allows one to do to apply a method to an object with slot 2 and above being arguments. For example, one could do to slice the array to .
- This is similar to except that the method is in the name. So the same example would be . Also .
- or This will take the arguments as a property chain to extract the value being pointed to.
- This will convert an object to JSON representation.
- The presumption is that an object is passed in whose key:values should be added to the command state. does this in a way that other commands in the pipe chain can see it. would probably be the typical way.
- This retrieves the value for the given key argument. does the same for the pipe chain. Multiple keys can be given and each associated value will be returned as distinct arguments.
- This converts the argument(s) to numbers, using the js Number function. will create three arguments of integers. To get an array, use .
- Returns a date object. returns what the current now is, will return a date object as parsed by Date.
- or will evaluate the argument and use the magic variable as the value to return. This can also see doc (and doc.cmdName), and args has the arguments post code. Recommend using backticks for quoting the eval; it will check for that automatically (just backticks; can do echo for the others if needed).
- or evaluates the code as if it is a function and returns that function. Any other arguments are seen in the args closure variable. Just like eval, backticks can be used and should be used to directly quote the function text.
- This logs the arguments and passes them along as arguments.
- This returns the true value.
- This returns the false value.
- This returns the null value.
- or Takes in a regular expression string and possibly some flags and returns a regular expression. Defaults to a global flag; pass in as part of the flags to get non-global.
- This returns the doc variable. This could be useful in connection to the property method and the log subcommand.
- This returns no arguments.
- or will use the functions found in the dash command but as a subcommand. will pad the string to have length 5 using the default spaces (in full where lodash is added to the dash).
- or will apply the test to the arguments. The following are the default tests in the variable :
  - checks that all are truthy
  - checks that at least one is truthy
  - negates the boolean
  - , , , , tests in sequence the relation
  - , tests all pairs for non-equality
  - looks to see if the passed-in strings are flags that have been set
  - Takes in a string as first argument and either a string or regular expression to match
  - Tests first argument as one of the types that follow (strings)
- This returns the incoming input. Should be useful for extraction of information, particularly for boolean tests.
- Yields the type of the object in first argument.

To build one's own subcommand, you can attach a function whose arguments will be

the arguments passed in. The is the doc object. The current name (say

for scope storing) is in doc.cmdName. This will point to within a whole pipe

chunk. Pop off the last part (delimited by triple colon) to get to the whole

command scope. The return value will be used as in an argument into the

command or another subcommand. If it is an array and the flag is set to

true, then each entry in the array will be expanded into a set of arguments.

So instead of 1 argument, several could

be returned. If nothing is returned,

then no arguments are passed on and it is as if it wasn't there.

h5 and h6

So this design treats h5 and h6 headings differently. They become subheadings

of h1-4 headings. So for example, if we have and then and

then the sections would be recorded as and we have a path syntax such as which would yield

if placed in . Ideally, this should work as you

imagine. See for the test examples.

Plugins

This is a big topic which I will only touch on here. You can define commands

in the text of a literate program, and we will discuss this a bit here, but

mostly, both commands and directives get defined in module plugins or the file if need be.

Defining Commands

The define directive allows one to create commands within a document. This is

a good place to start getting used to how things work.

A command has the function signature where the input is the incoming text (we are piping along when evaluating

commands), args are the arguments that are comma separated after the command

name, and the name is the name of the event that needs to be emitted with the

outgoing text. The function context is the doc.

A minimal example is

We simply emit the name with the incoming text as data. We usually use for the variable. This is the option in the define directive.

The default is and is very easy.

That is, we just return the text we want to return. In general, the name is

not needed though it may provide context clues.

The third option is an async command. For those familiar with node

conventions, this is easy and natural.

The callback takes in an error as first argument and, if no error, the text to

output. One should be able to use this as a function callback to pass into

other callback async setups in node.
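The two command shapes just described can be sketched as plain functions. The names are illustrative; only the signatures follow the description above:

```javascript
// Sketch of a synchronous command: just return the outgoing text.
function upper(input, args) {
  return input.toUpperCase();
}

// Sketch of the callback form: error-first, node-style. The result
// (or an error) is handed to the callback instead of returned.
function upperAsync(input, args, callback) {
  callback(null, input.toUpperCase());
}

// upper("hi", []) gives "HI"; upperAsync delivers the same via callback
```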

So that's the flow. Obviously, you are free to do what you like with the text

inside. You can access the document as and from there get to the event

emitter and the parent, , leading to other docs. The scopes are

available as well. Synchronous is the easiest, but asynchronous control flow

is just as good and is needed for reading files, network requests, external

process executions, etc.

Plugin convention.

I recommend the following npm module conventions for plugins for

literate-programming.

litpro-... is the name. So all plugins would be namespaced to litpro.

Clear, but short.

Set The first argument is the

Folder object, which constructs folders, which construct documents. By

accessing Folder, one can add a lot of functionality. This access is

granted in the command line client before any is created.

The other argument depends on context, but for the command line client it

is the parsed in arguments object. It can be useful for a number of

purposes, but one should limit its use as it narrows the context of the

use.

Define commands and, less often, directives. Commands are for transforming text, directives are for doing document flow manipulations. Other hacks on should be even rarer than adding directives.

Commands and directives are globally defined.

is how to add a

command function. You can use and

to install sync and async functions directly in the same

fashion as used by the define directive.

is how to install a

directive. There are no helper functions for directives. These are more for

controlling the flow of the compiling in the large. The arg keys are read

off from . Also provided is the current

block name which is given by the key .

If you want to do stuff after folder and its event emitter, gcd, is

created, then you can modify Folder.postInit to be a function that does

whatever you want on a folder instance. Think of it as a secondary

constructor function.

The Folder has a plugins object where one can stash whatever under the

plugin's name. This is largely for options and alternatives. The folder and

doc object map to the same object.

Structure of Doc and Folder

To really hack the doc compiling, one should inspect the structure of Folder,

folder, and doc. The Folder is a

constructor and it has a variety of

properties on it that are global to all folders. But it also has several

prototype properties that get inherited by the folder instances. Some of those

get inherited by the docs as well. For each folder, there is also a gcd object

which is the event emitter, which comes from the, ahem, magnificent event-when

library (I wrote it with this use in mind). In many ways, hacking on gcd will

manipulate the flow of the compiling.

I wrote the folder instance to maintain flexibility, but so far one folder instance per run is typical. Still, there might be a use for, say, keeping a development and a production compile separate but running simultaneously.

Folder

These are the properties of Folder that may be of interest.

- commands. This is an object that is prototyped onto the instance of a folder. Changing this adds commands to all created folder instances.
- directives. This holds the directives. Otherwise same as commands.
- reporter. This holds the functions that report out problems. See reporters below. This is not prototyped and is shared across instances.
- postInit. This does modification of the instance. Default is a noop.
- sync, async. These install sync and async commands, respectively.
- defSubCommand. Installs a subcommand.
- plugins. This is a space to stash stuff for plugins. Use the plugin name sans litpro as the key. Then put there whatever is of use. The idea is that if you require something like jshint and then want default options, you can put that there. Then in an lprc file, someone can override those options and it will be applied across the project.

folder

Each instance of folder comes with its own instances of:

- docs. Holds all the documents.
- scopes. Holds all the scopes, which are the stuff before the double colon. It includes the blocks from the compiled docs but also any created scopes.
- reports. This holds all the reports of stuff waiting. As stuff stops waiting, the reports go away. Ideally, this should be empty when all is done.
- stack. This is for the push and pop of text piping.
- gcd. This is the event emitter shared between docs, but not folders. Default actions are added during the instantiation, largely related to the parsing which sets up later. If you want to log what goes on, you may want to look at the event-when docs (makeLog is a good place to start).
- flags. This holds what flags are present.

and shares via the prototype:

- parse. This parses the text of docs using the commonmark spec.
- newdoc. This creates a new document. Kind of a constructor, but simply called as a function; it calls the constructor Doc.
- colon. We replace colons with a unicode triple colon for emitting purposes of block names (event-when uses colon as separators too). This contains the escape (does replacement), restore (undoes it), and v, which is the unicode triple colon. If the v is replaced entirely, everything should hopefully work just fine with a new separator.
- createScope. Creating a scope.
- join. What is used to concatenate code blocks under the same block heading. Default is "\n".
- log. What to do with logging. Defaults to console.log.
- indicator. An internal use to allow escaping of whitespace in command arguments that would otherwise be trimmed.
- wrapSync, wrapAsync. These wrap functions up for command sync, async, but do not install them. Not sure why not install them.
- subnameTransform. A function that deals with shorthand minor substitutions that avoid using the main block heading. This can be overwritten if you want some custom behavior.
- reportwaits. This is a function that produces the reports of what is still waiting. Very useful for debugging. This returns an array.
- simpleReport. This reports on the substitutions that did not get resolved. This returns an array. It also includes any commands that were called but not defined. Subcommands throw errors when not defined, but since commands can be defined later, they will not. Hence this mechanism.
- Doc. This is the constructor for documents.
- commands, directives, plugins, and reporters: direct copying from Folder.

doc

Each file leads to a doc which is stored in the folder. Each doc has a variety

of stuff going on.

Unique to each instance

- file. The actual path to the file. It is treated as unique and there is a scope dedicated to it. Don't mess with it. It is also how docs are keyed in the folder.docs object.
- text. The actual text of the file.
- blockOff. This tracks whether to take in code blocks as usual. See the blocks directive. If 0, code blocks are queued up. If greater than 1, code blocks are ignored.
- levels. This tracks the level of the heading that is currently being used. See the h5/h6 description.
- blocks. Each heading gets its own key in blocks and the raw code blocks are put here.
- heading, curname. These are part of the block parsing. curname is the full name while heading excludes minor block names.
- vars. This is where the variables live. As each code block is compiled, its result gets stored here. But one can also add any bit of var name and text to this.
- parent. This is the folder that contains this doc.

Inherited from folder

- commands, modifications affect all
- directives, modifications affect all
- scopes, modifications affect all
- gcd, modifications affect all. Be careful to scope added events to files, etc.
- plugins, modifications affect all
- colon, join, overwriting will only affect doc
- log, overwriting will only affect doc
- subnameTransform, overwriting will only affect doc
- indicator, overwriting will only affect doc
- wrapSync, wrapAsync, overwriting will only affect doc
- augment, this augments the object with the type.
- cmdworker, this will call the command. Needed as, with the dot command, it can get tricky. Used in .apply, .mapc, compose.
- compose, this creates a function from composing multiple commands.

Prototyped on Doc. Almost all are internal and are of little to no interest.

- pipeParsing. This parses the pipes. This may be useful if you want to do something like in the save or define directives. Check them out in the source if you want to see how to use it.
- blockCompiling. This is what the compile command taps into. See how it is done there.
- getscope. Looks up a scope and does appropriate async waiting for an existing scope if need be.
- retrieve. Retrieves a variable.
- createLinkedScope. Creates a link to a scope and notifies all.
- indent. This is the default indenting function for subbing in multi-line blocks. The default idea is to indent up to the indent of the line that contains the block sub; further existing indentation in sublines is respected on top of that.
- getIndent. Figuring out the indent.
- substituteParsing.
- regexs. Some regular expressions that are used in the parsing of the code blocks.
- backslash. The backslash function applied to command arguments.
- whitespaceEscape. Handling whitespace escaping in conjunction with backslash. Putting the whitespace back.
- store. Stores a variable.

Reporting

A key feature of any programming environment is debugging. It is my hope that

this version has some better debugging information. The key to this is the

reporting function of what is waiting around.

The way it works is that when an event of the form is emitted with data, then reporters gets a key of the event string without the , and when the is emitted, it is removed.

If it is still waiting around when all is done, then it gets reported. The

reportname is used to look up which reporter is used. Then that reporter takes

in the remaining arguments and produces a string that will be part of the

final report that gets printed out.

Some of the waiting is not done by the emitting, but rather by presence in

.when and .onces.
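The mechanism can be sketched in plain JavaScript. This is a stand-in illustration, not the actual literate-programming code; the keys and reporter functions below are made up for the example:

```javascript
// Stand-in for the reporting mechanism: a report is registered when
// something starts waiting, removed when it finishes, and anything
// left over is rendered into the final report.
var reports = {};
function startWaiting(key, reporter, args) {
  reports[key] = { reporter: reporter, args: args };
}
function doneWaiting(key) { delete reports[key]; }
function finalReport() {
  return Object.keys(reports).map(function (key) {
    var r = reports[key];
    return r.reporter.apply(null, r.args);
  });
}

startWaiting("block:intro",
  function (name) { return "still compiling " + name; }, ["intro"]);
startWaiting("var:title",
  function (name) { return "undefined variable " + name; }, ["title"]);
doneWaiting("block:intro");
// finalReport() → ["undefined variable title"]
```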

LICENSE

MIT-LICENSE


An event library that allows for the blocking of event firing thus dealing with many-to-one event firing

event-when

This is an event library, but one in which events and listeners are

coordinated through a single object. The emphasis throughout is on

coordinating the global flow of the program.

It addresses what I find to be the pain points of JavaScript programming:

when does code execute and how does it have access to the objects it

needs? Most event libraries handle the first well enough for linear

sequences of event firing, but they fail when multiple events need to

happen, in any order, before triggering a response. It can also require

a lot of closures or globals to handle manipulating state from event

calls. This library is designed to address those needs.

Most event libraries suggest making objects (such as a button) into

emitters; this is to promote separation of concerns, a good goal. But we

want to coordinate events from multiple sources. So to do this,

event-when is designed to allow you to attach the object to the

event/handler/emit. It also allows you to listen for events before the

corresponding object exists. This is more like having listeners on a

form element responding to button clicks in the form.

There are several noteworthy features of this library:

When. This is the titular notion. The method allows

you to specify an event to emit after various specified events have all

fired. For example, if we call a database and read a file to assemble

a webpage, then we can do something like
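A hedged sketch of that flow, using a minimal stand-in emitter rather than the real event-when implementation (the event names follow the prose; this mini .when ignores timing, counts, and scopes):

```javascript
// Minimal stand-in emitter illustrating the .when idea.
function MiniEmitter() { this.handlers = {}; }
MiniEmitter.prototype.on = function (ev, f) {
  (this.handlers[ev] = this.handlers[ev] || []).push(f);
  return this;
};
MiniEmitter.prototype.emit = function (ev, data) {
  (this.handlers[ev] || []).forEach(function (f) { f(data); });
  return this;
};
// .when: once every event in `events` has fired, emit `evAfter`
// with the gathered [event, data] pairs in emit order.
MiniEmitter.prototype.when = function (events, evAfter) {
  var self = this, pending = events.length, gathered = [];
  events.forEach(function (ev) {
    self.on(ev, function (data) {
      gathered.push([ev, data]);
      pending -= 1;
      if (pending === 0) { self.emit(evAfter, gathered); }
    });
  });
  return this;
};

var emitter = new MiniEmitter();
var page = null;
emitter.when(["database returned:jack", "file read:jack"],
             "all data retrieved:jack");
emitter.on("all data retrieved:jack", function (data) {
  page = data.map(function (pair) { return pair[1]; }).join(" + ");
});
emitter.emit("file read:jack", "template");
emitter.emit("database returned:jack", "records");
// page is now "template + records"
```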

This is why the idea of a central emitter is particularly useful to

this library's intent.

Scope. Events can be scoped. In the above example, each of

the events are scoped based on the user jack. It bubbles up from the

most specific to the least specific. Each level can access the

associated data at all levels. For example, we can store data at the

specific jack event level while having the handler at "all data

retrieved" access it. Works the other way too. One can stash the

scope into scope jack and the handler for can

access the name jack and its scope.

Actions. Events should be statements of fact. Actions can be

used to call functions and are statements of doing. "Compile document"

is an action and is a nice way to represent a function handler.

"Document compiled" would be what might be emitted after the

compilation is done. This is a great way to have a running log of

event --> action. To implement the log, you can run and then when you want the event action, filter the logs with

for an array of such statements.

Stuff can be attached to events, emissions, and handlers. Emits send

data, handlers have contexts, and events have scope contexts.

MonitorOne can place a filter and listener to monitor all

emits and act appropriately. Could be great for debugging.

Please note that no particular effort at efficiency has been made. This is

about making it easier to develop the flow of an application. If you need

something that handles large number of events quickly, this may not be the

right library. Benchmarking a simple emit can be found in benchmark.js.

On my MBA mid-2011, it does 5e4 emits in about half a second, 5e5 emits in

about 4.5 seconds while the native emitter does 5e5 in about a tenth of a

second.

Using

In the browser, include index.js. It will store the constructor as

EventWhen in the global space.

For node, use or, better, add it to the

package.json file with appended.

Then require and instantiate an emitter:

Object Types

- Emitter. This module exports a single function, the constructor for this type. It is what handles managing all the events. It could also be called Dispatcher.
- Event Object. This is the object that is passed to handlers.
- Handler. This is the object type that interfaces between event/emits and action/functions.
- Tracker. This is what tracks the status of when to fire events.
- Filter. This is a type that is used in filtering strings, such as in filtering the logs.

Method specification

These are methods on the emitter object.

emit, emitCache, monitor, when, on, off, once, stop, cache, action, actions, scope, scopes, events, handlers, error, queueEmpty, makeLog, makeHandler, filter, serial

emit(str ev, obj data, str timing) --> emitter

Emit the event.

arguments

- ev. A string that denotes the event.
- data. Any value. It will be passed into the handler as the first argument.
- timing. One of "now", "momentary", "soon", "later" implying emission first on queue, last on queue, first on waiting list, last on waiting list, respectively. "Momentary" is the default if not provided as that will preserve the order of emitting. The waiting list is shifted once for each tick (or in the browser, setTimeout).

return

The emitter for chaining. The events may or may not be already emitted

depending on the timing.

convenience forms

- Event A emits B; B fires after the emitting handler finishes, but before other handlers for A finish. This is the function-calling model.
- Event A emits B; B fires after A finishes. This is more of a synchronous callback model. It is the same as with the default setting.
- Event A emits B then C, both with soon; then C fires after the next tick and B fires after the second tick.
- Event A emits B then C, both with later; then B fires after the next tick and C fires after the second tick.
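The four timings can be sketched with two plain arrays, a stand-in for the real queue and waiting list that ignores handler execution:

```javascript
// Stand-in for the timing semantics: a current-tick queue and a
// waiting list that is shifted once per tick.
var queue = [], waiting = [];
function emit(ev, timing) {
  switch (timing || "momentary") {
    case "now":       queue.unshift(ev);   break; // first on queue
    case "momentary": queue.push(ev);      break; // last on queue (default)
    case "soon":      waiting.unshift(ev); break; // first on waiting list
    case "later":     waiting.push(ev);    break; // last on waiting list
  }
}
emit("A");
emit("B", "now");
emit("C", "soon");
emit("D", "later");
// queue: ["B", "A"]; waiting: ["C", "D"]
```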

scope

Note that if ev contains the event separator, by default, then it

will be broken up into multiple events, each one being emitted. The

order of emission is from the most specific to the general (bubbling

up). holds what to split on.

In what follows, it is important to know that the handler signature

is .

As an example, if the event is emitted, then fires,

followed by , followed by . The scope objects available, however,

include that of all three of the emitted events as well as , and separately. Thus, we can have an event with a handler on that

uses the scope of . The name can be found by accessing

and the scope accessed from .

Note that will not fire as a stand-alone event; it is just its scope

which can be found that way.

To stop the emitting and any bubbling, set in the

handler . To do more

fine-controlled stopping, you need to manipulate which is

an array consisting of .

Once the event's turn on the queue occurs, the handlers for all the

scopes fire in sequence without interruption unless an is

emitted. To delay the handling, one needs to manipulate

and .

example

emitCache(str ev, obj data, str timing) --> emitter

Emit the event but cache it for once methods. Only the full exact event is

cached, not subforms. If the same event is called

multiple times, it overwrites the previous data without comment. Once

methods check for the cache for the full event. On handlers are not

affected by this.

arguments

Same as emit.


return

The emitter for chaining. The events may or may not be already emitted

depending on the timing.

example

monitor(listener arr/filter, listener) --> [filt, listener]

If you want to react to events on a more coarse grain level, then you can

use the monitor method.

arguments

- No args. Returns an array of active listeners.
- filter. Of filter type. Any event that matches the filter will be monitored. If an array of [filter, true] is passed in, then the filter will be negated.
- listener. This is a function that will respond to the event. It will receive the event being emitted, the data, and the emitter object itself. It has no context other than what is bound to it using .bind.
- If a listener array is passed in as the first (and only) argument, then the array is removed from the relevant array.

returns

Listener array of filter, function when assigning. Use this to remove the

monitoring. The returned array also has a property containing the

original filter type.

example
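A minimal stand-in for the monitor mechanism (not the real implementation; it just shows a filter/listener pair seeing every emit):

```javascript
// Stand-in for monitor: every emit is checked against each
// [filter, listener] pair. Filters here are plain predicates.
var monitors = [];
function monitor(filt, listener) {
  var entry = [filt, listener];
  monitors.push(entry);
  return entry; // keep this around to remove the monitor later
}
function emit(ev, data) {
  monitors.forEach(function (m) {
    if (m[0](ev)) { m[1](ev, data); }
  });
}

var seen = [];
monitor(function (ev) { return ev.indexOf("bob") !== -1; },
        function (ev, data) { seen.push(ev + ":" + data); });
emit("bob wakes up", 1);
emit("jane sleeps", 2);
// seen: ["bob wakes up:1"]
```

If the listener itself emitted an event matching the filter, each emit would re-trigger the listener, which is the infinite-loop caution noted below.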

Note if you were to emit "bob" in the above monitor, then we would have an

infinite loop.

when(arr/str events, str ev, str timing, bool reset, bool immutable, bool initialOrdering ) --> tracker

This is how to do some action after several different events have all

fired. Firing

order is irrelevant.

arguments

- events. A string or an array of strings. These represent the events that need to be fired before emitting the event ev. The array could also contain a numbered event which is of the form . This will countdown the number of times the event fires before considering it done.
- ev. This is the event that gets emitted after all the events have taken place. It should be an event string.
- timing. Emits based on the timing provided, as in .
- reset. Setting this to true will cause this setup to be setup again once fired. The original events array is saved and restored. Default is false. This can also be changed after initialization by setting tracker.reset.
- immutable. Set the fifth argument to true in order to prevent this .when being merged in with other .whens that have the same emitting event. The default behavior is to combine .whens when they all emit the same event; the timing and reset are defaulted to the first .when, though that can be modified with a return value. Note that immutables are still mutable by direct action on the tracker.
- initialOrdering. If true, the .when data will be returned in the order of originally adding the events, rather than the default of the emit order. To change this globally, change . Also, when true, events that are emitted with no data do fill up a slot, with data being null.

return

Tracker object. This is what one can use to manipulate the sequence of

events. See Tracker type

note

If an event fires more times than is counted and later the when is

reset, those extra times do not get counted.

Also to get the tracker (assuming not immutable), then pass in empty array

and the event of interest.

There is a convenience method called . This flattens the

emitted data. If the data had a single element in the array (just one

event fired with data A), then it emits A not an array containing A. If

there are multiple events with then it emits

.

There is another convenience method called . This flattens the

emitted data but always returns an array, e.g., or ,

respectively in the above situation.

example

emitter will automatically emit "data gathered" after third emit with

data

Notice that if the event is a parent event of what was emitted, then the

full event name is placed in the third slot.

on(str ev, Handler f, obj context) --> Handler

Associates handler f with event ev for firing when ev is emitted.

arguments

- ev. The event string on which to call handler f.
- f. The handler f. This can be a function, an action string, an array of handler types, or a handler itself.
- context. What the should be set to when invoking f. Defaults to .

return

The Handler which should be used in to remove the handler, if

desired.

f

Ultimately handlers execute functions. These functions will be passed in

the data from the emit and an event object. It will be called in

the passed in context

example

off(str/array/fun/reg events, handler fun, bool nowhen) --> emitter

This removes handlers.

arguments

This function behavior changes based on the number of arguments

No arguments. This removes all handlers from all events. A complete

reset.

. This is the event string to remove the handlers from. If

nothing else is provided, all handlers for that event are removed. This

could also be an array of event strings in which case it is applied to

each one. Or it could be an Array.filter function or a RegExp that

should match the strings whose events should have their handlers

trimmed. Or it could be null, in which case all events are searched for

the removal of the given handler.

This is an object of type Handler. Ideally, this is the handler

returned by . But it could also be a primitive, such as an action

string or function.

If fun is a boolean, then it is assumed to be for the whole

event removal. If it is null, then it is assumed all handlers of the

events should be removed.

If true, then it does not remove the handler associated with

the tracker handler.

return

Emitter for chaining.

example

once(str event, handler f, int n, obj context) --> handler h

This attaches the handler f to fire when event is emitted. But it is tracked to be removed after firing n times. Given its name, the default n is 1.

arguments

- event. Any string. The event that is being listened for.
- f. Anything of handler type.
- n. The number of times to fire. Should be a positive integer.
- context. The object whose context f should have.

Both n and context are optional and their positioning can be either way.

return

The handler that contains both f and the counter.

example
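The counting behavior can be sketched with a plain wrapper, a stand-in rather than the real Handler object:

```javascript
// Stand-in for the once counter: wrap a handler so it stops
// firing after n calls (default 1, hence the name).
function once(f, n) {
  var remaining = (n === undefined) ? 1 : n;
  return function () {
    if (remaining > 0) {
      remaining -= 1;
      return f.apply(this, arguments);
    }
  };
}

var count = 0;
var h = once(function () { count += 1; }, 2);
h(); h(); h();
// count === 2: the third call was a no-op
```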

note

If you attach a property to your handler f, then the once will

get recorded in which one can use to monitor which onces

have fired and how many times remain.

stop(filter toRemove, bool neg) --> emitter

This is a general purpose maintainer of the queue/waiting lists. It will

remove the events that match the first argument in some appropriate way.

arguments

- No argument. Removes all queued events.
- true. The current event on the queue gets removed, and any active handler is stopped.
- Any filtertype. If an event matches, it is removed.
- neg. Reverse match semantics of the filter type.

returns

Emitter for chaining.

example

cache(str request/arr [ev, data, timing], str returned, fun process/str emit, str emit) --> emitter

This is how to cache an event request. This will ensure that the given

event will only be called once. The event string should be unique and the

assumption is that the same data would be used. If not, one will have

problems.

arguments

- request. This is an event to be emitted. It can be either a string or an array with data and timing. If multiple events are needed, use a single event to trigger them.
- returned. This is the event to wait for indicating that the process is complete. Both request and returned should be the same for caching the request. But only the request is the cache key.
- res. This takes in the data from the returned event and processes it. The return value is the data used by the final emit. If the emit string is empty, then the return value is not used and it is expected that res will do the emitting. It is a function that takes (data, cache args) called in the context of the event emitter.
- emit. This is what gets emitted upon obtaining the value. If res is not present, this can be the third argument and the data will simply be passed along.

action(str name, handler, obj context) --> action handler

This allows one to associate a string with a handler for easier naming. It

should be active voice to distinguish from event strings.

arguments

- name. This is the action name.
- handler. This is the handler-type object to associate with the action.
- context. The context to call the handler in.

return

- 0 arguments. Returns the whole list of defined actions.
- 1 argument. Returns the handler associated with the action.
- 2 arguments, second null. Deletes the associated action.
- 2, 3 arguments. Returns the created handler that is now linked to the action string.

example

This example demonstrates that an action should be an action sentence

followed by something that does that action. Here the emit event sends a

doc string to be compiled. It does so, gets stored, and then the emitter

emits it when all done. Note files is the context that the handler is

called in.
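The action idea reduces to a name-to-handler registry; here is a minimal stand-in, where the "compile document" handler body is a made-up placeholder:

```javascript
// Stand-in for actions: an active-voice string mapped to a handler.
var actions = {};
function action(name, handler) { actions[name] = handler; }
function run(name, data) { return actions[name](data); }

action("compile document", function (text) {
  return text.toUpperCase(); // placeholder for real compilation
});
var out = run("compile document", "hello");
// out === "HELLO"
```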

actions(arr/bool/fun/reg/str filter, bool neg) --> obj

This returns an object with keys of actions and values of their handlers.

arguments

- No argument or falsy first argument. Selects all actions for returning.
- filter. Anything of filtertype. Selects all actions matching the filter.
- neg. Negates the match semantics.

return

An object whose keys match the selection and values are the corresponding

action's value. If the value is an object, then that object is the same

object and modifications on one will reflect on the other.

example

The following are various ways to return all actions that contain the

word bob.

In contrast, the following only returns the action with bob as the exact

name.

The first one returns an object of the form while the

second returns the handler.

scope(str ev, obj) --> scope keys/ obj / ev

This manages associated data

and other stuff for the scoped event ev.

arguments

- ev. This is the full event to associate the information with.
- obj. This is whatever one wants to associate with the scope.

return

- 0 arguments. Leads to the scope keys being returned.
- 1 argument. Leads to the specified scope's object being returned.
- 2 arguments. Emitter returned for chaining.

note

The scope is associated with not just the full event, but also its parts.

For example, the event "file:bob" would have associated scopes of "file",

"bob", and "file:bob". In a handler with signature , this

can be accessed by , , and

, assuming there are scopes associated with

those strings.
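The decomposition of a scoped event name can be sketched as follows, assuming ":" as the separator:

```javascript
// Sketch of scoped-event decomposition: "file:bob" is associated
// with the scopes "file", "bob", and the full "file:bob".
function scopeParts(ev, sep) {
  sep = sep || ":";
  var pieces = ev.split(sep);
  // individual pieces, plus the full event name when it is compound
  return pieces.length > 1 ? pieces.concat([ev]) : pieces;
}
// scopeParts("file:bob") → ["file", "bob", "file:bob"]
```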

example

scopes(arr/bool/fun/reg/str filter, bool neg) --> obj

This returns an object with keys of scopes and values of their contexts.

arguments

- No argument or falsy first argument. Selects all scopes for returning.
- filter. Anything of filtertype. Selects all scopes matching the filter.
- neg. Negates the match semantics.

return

An object whose keys match the selection and values are the corresponding

scope's value. If the value is an object, then that object is the same

object and modifications on one will reflect on the other.

example

Following the example of bob in scope...

events( arr/fun/reg/str partial, bool negate) --> arr keys

This returns a list of defined events that match the passed in partial

condition.

arguments

The behavior depends on the nature of the first argument:

- String. Any event with the argument as a substring will match.
- RegExp. Any event matching the regex will, well, match.
- Function. The function should accept event strings and return true if matched.
- Array. Any events that match a string in the passed-in array will be returned.

The second argument negates the match conditions.

returns

An array of event strings that match the passed in criteria.

example

handlers(arr/fun/reg/str events, bool empty) --> obj evt:handlers

Get listing of handlers per event.

arguments

- events. Array of events of interest.
- events. If a function, regex, or string, then events are generated by the events method. Note string is a substring match; to get exact, enclose the string in an array.
- events. If an array of , then it reverses the filter selection.
- events. Falsy. The events array used is that of all events.
- empty. If true, it includes undefined events with handlers of null type. This will only happen if an array of events is passed in and there are non-matching strings in that array.

return

Object with keys of events and values of arrays of Handlers.

example

Let's say we have handlers for the events "bob wakes up" and "bob sleeps".
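A sketch of the substring selection over those events (stand-in data; the handler values here are just placeholder strings):

```javascript
// Stand-in for handlers(): select handler lists whose event name
// contains a substring.
var registry = {
  "bob wakes up": ["h1"],
  "bob sleeps": ["h2"],
  "jane runs": ["h3"]
};
function handlersFor(substr) {
  var out = {};
  Object.keys(registry).forEach(function (ev) {
    if (ev.indexOf(substr) !== -1) { out[ev] = registry[ev]; }
  });
  return out;
}
// handlersFor("bob") → { "bob wakes up": ["h1"], "bob sleeps": ["h2"] }
```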

error()

This is where errors can be dealt with when executing handlers. It is

passed in the error object as well as the handler value, emit data,

event object, and executing context. The current full handler can be

found in the second entry of the cur array in the event object.

If you terminate the flow by throwing an error, be sure to set

to false.

This is a method to be overwritten, not called.

example

The function fires when all events that are waiting

have been called. The default is a noop, but one can attach a function to

the emitter that does whatever it wants.

makeLog() --> fun

This creates a log function. It is a convenient form, but the log

property should often be overwritten. If this is not invoked, then the

log is a noop for performance/memory.

expects a description as a first argument and then

whatever else varies.

The log has various properties/methods of interest:

- This is where the carefully crafted logs are stored. This should be the most useful and meaningful statements for each logged event.
- This is a complete dumping of all passed-in data to the log, including the description.
- This is the object whose keys are the emitter.log descriptions and whose values are functions that produce the log input. This is not prototyped; if you delete a function, it is gone. This allows for easy removal of unwanted descriptions.
- This produces the logs. Its arguments get passed to the filter function, so strings match as substrings, regexes, arrays of substrings to match (exact matches did not seem useful for this), general function filters, and the ability to reverse the matches (maybe the array is useful for that).
- This acts on the logs array instead of the full array. Otherwise same as the full function.

example

You should run the example in the example directory to get a feeling for

what the logs produce.

makeHandler(value, context) --> handler

Create a handler.

arguments

- value. The handler type to wrap.
- context. What the should be for calling the handler.

example

filter(filter type) --> function

This takes in something of filter type and outputs a function

that accepts a string and returns a boolean whose value depends on whether

a matching has occurred.

serial(obj) --> string

This takes in an object, or objects, and prints out a string suitable for

inspecting them. Functions are denoted by tick marks around the name, if

there is one. Multiple arguments are output as if they were all

encapsulated in an array.

Emitter Instance Properties

Each instance has, in addition to the prototype methods, the following

public properties:

- is the scope separator in the event parsing. The default is . We can have multiple levels; the top level is the global event.
- tracks the number of events emitted. Can be used for logging/debugging.
- is a toggle to decide when to yield to the next cycle for responsiveness. Default 1000.
- The default timing for , which defaults to "momentary", i.e., appending to the queue.

It also has "private" variables that are best manipulated by the methods.

- has key:value of and will fire them in that order.
- consists of events to be fired in this tick. These are the event objects which get passed in as the second argument to the handlers.
- is the queue for events to be fired after the next tick.
- has k:v of . The handler can be of type Handler or anything convertible to it.
- has k:v of . When an event is emitted with the given scope, the object will be passed in and is accessible to any handler reacting to an event along the scope chain.
- tracks whether we are in the executing loop.

Handler

Handlers are the objects that respond to emitted events. Generally they

wrap handler type objects.

Handler types

- function. This is the foundation, as functions are the ones that execute. They are called with parameters that can be passed into the emit call and which has a variety of properties. See evObj.
- string. This is an action string. When executed, it will look up the action associated with that string and execute that handler. If no such action exists, that gets logged and nothing else happens.
- handler. Handlers can contain handlers.
- array of handler types. Each one gets executed. This is how works.

Handler methods

These are largely internally used, but they can be used externally.

summarize, execute, removal, contains

summarize(value) --> str

This takes a handler and summarizes its structure. To give a meaningful

string to handlers for a summarize, one can add properties to

any of the value types except action strings which are their own "label".

arguments

or is to be of handler type and is what is being

summarized.

return

The summary string.

example

execute(data, evObj, context, value) -->

This executes the handler.

arguments

, get passed into the functions as first and second

arguments. This is generated by the emit. The closest context will be used. If the function is bound,

that obviously takes precedence.This is largely internally used.

return

Nothing.

example

removal(ev, emitter) -->

This removes the handlers from .when trackers. Used by .off.

arguments

This is called in the context of the handler.The event string to remove from the .when tracker.The emitter object is passed in for actions and log ability.

return

Nothing.

example

contains(target, htype) --> bool

This checks to see whether target is contained in the handler type at

some point.

arguments

- target. Anything of handler type that is to be matched.
- htype. Anything of handler type. This is the current level. If is not provided (typically the case in external calling), then the becomes .

return

It returns true if found; false otherwise.

example

Tracker

Trackers are responsible for tracking the state of a call. It is

fine to set one up and ignore it. But if you need it to be a bit more

dynamic, this is what you can modify.

Tracker Properties

These are the instance properties

- The list of currently active events/counts that are being tracked. To manipulate, use the tracker methods below.
- The event that will be emitted when all events have fired. It will emit the data from all the events in the form of an array of arrays:
- This dictates how the action is queued.
- This dictates whether to reset the events after firing.
- The original events for use by reset/reinitialize.
- This is the handler that fires when the monitored events fire.
- Set to true if it is safe to emit the event multiple times. Default is false.

Tracker Methods

They all return tracker for chainability.

add, remove, go, cancel, reinitialize

#### add(arr/str events)

Add events to tracking list.

arguments

This is the same form as the option of . It can be a

string or an array of [strings / array of [string, number] ]. A string is

interpreted as an event to be tracked; a number indicates how many times

(additional times) to wait for.

You can use this to add a number of wait times to an existing event.
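A stand-in sketch of that countdown bookkeeping (not the Tracker implementation itself):

```javascript
// Stand-in for the tracker countdown: event name → remaining count.
// A [name, n] pair waits for n firings; a bare string waits for one.
var tracking = {};
function add(events) {
  events.forEach(function (e) {
    var name = Array.isArray(e) ? e[0] : e;
    var n = Array.isArray(e) ? e[1] : 1;
    tracking[name] = (tracking[name] || 0) + n;
  });
}
// Returns true when nothing is left, i.e. time to emit the waiting event.
function fired(name) {
  if (tracking[name]) {
    tracking[name] -= 1;
    if (tracking[name] === 0) { delete tracking[name]; }
  }
  return Object.keys(tracking).length === 0;
}

add(["file read", ["db call", 2]]);
fired("db call");            // still waiting
fired("file read");          // still waiting
var done = fired("db call"); // last one → true
```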

example

#### remove(arr/str events)

Removes event from tracking list.

arguments

Same as add events, except the numbers represent subtraction of the

counting.

alias

example

#### go()

Checks to see whether tracking list is empty; if so, the waiting event

is emitted. No arguments. This is automatically called by the other

methods/event changes.

#### cancel()

Cancel the tracking and abort with no event emitted. No arguments.

#### reinitialize()

Reinitializes the tracker. The existing waiting events get cleared and

replaced with the original events array. All data is wiped. No arguments.

#### silence()

This silences the passed in events or the last one added. In other words,

it will not appear in the list of events. If an event is applied multiple

times and silenced, it will be silent for the

Event Object

Each emitted event calls the listener with the first argument as the data

and second argument as an event object. The event object consists of the

following properties:

- This is the emitter itself, allowing one full access to emitting, oning, offing, whatever.
- This is the full event string that has been emitted.
- This is the data object that is passed into the emit. It is the same as the first argument given to the handler.
- This is an object whose keys are the scope event strings and whose values are the objects stored under that scope.
- This is the result of splitting on the scope separator, reversed.
- This is the value of the counter for which emit this was.
- What the timing of the emit was.
- This is an array that contains . The handlers array is a copy of the handlers attached to the named event.
- This is changed after each handler handling. It is an array of . It represents the current event and handler being executed.
- This is not set, but if set to true, this will halt any further handlers from firing from this event object's events.

example

If the event "file:bob" was emitted with data "neat", then the event object emitted would be something like:
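A rough sketch of such an event object is shown below. The property names here are assumptions for illustration only (the real names were given as inline code in the original docs and are not recoverable from this copy):

```javascript
// Hypothetical event object for emit("file:bob", "neat").
// Property names are illustrative assumptions, not the library's actual API.
const evObj = {
  emitter: {},              // the emitter itself (full access to emit/on/off)
  ev: "file:bob",           // the full event string that was emitted
  data: "neat",             // the data passed into the emit
  scopes: {},               // scope event strings -> objects stored under them
  pieces: ["bob", "file"],  // event split on the scope separator, reversed
  count: 1,                 // the value of the counter for this emit
  timing: "momentary",      // when the emit was scheduled
  stop: false               // set to true to halt any further handlers
};
```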

Filter Type

Several of the methods accept something of filter type. This could be a
string, an array of strings, a regex, or a function. All of them are used
to filter strings based on matching. Most of the methods also allow for a
negation boolean that will reverse the matching results.

- String. These will match as a substring of the string being tested. So if "bob" is the filter object, it will match any string containing "bob".
- Array of strings. If the string is in the array, it will match. This is an exact match. So if we have ["bob", "jane"], then this will match "bob" or "jane" and no other strings.
- Regex. If the string matches the regex, it matches. So /bob/ will match any string containing bob.
- Function. If the function returns true, then it matches.
- Array of  for functions that don't accept the second argument of negate.
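The filter semantics described above can be sketched as a small standalone helper. This is an illustration of the matching rules, not the library's own code, and the function name is made up:

```javascript
// Illustrative filter matcher implementing the semantics described above:
// a string matches as a substring, an array of strings matches exactly,
// a regex matches via test(), and a function matches if it returns true.
// The negate flag reverses the matching result.
function matchesFilter(filter, str, negate = false) {
  let result;
  if (typeof filter === "string") {
    result = str.indexOf(filter) !== -1;   // substring match
  } else if (Array.isArray(filter)) {
    result = filter.indexOf(str) !== -1;   // exact match within the array
  } else if (filter instanceof RegExp) {
    result = filter.test(str);             // regex match
  } else if (typeof filter === "function") {
    result = !!filter(str);                // predicate match
  } else {
    result = false;
  }
  return negate ? !result : result;
}
```

Note the difference between the first two cases: `matchesFilter("bob", "bobcat")` is true because "bob" is a substring, while `matchesFilter(["bob", "jane"], "bobcat")` is false because the array form requires an exact match.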


Kulfon is a modern static site generator written in JavaScript.

Kulfon

Static Site Generator for The Rest of Us

Kulfon is a one-command, JavaScript static site generator inspired by Hugo.
It combines data sources with templates to transform them into HTML pages at
once. It supports Nunjucks, Markdown and Org Mode out of the box.

This software is still under active development and is not feature complete or ready for consumption by anyone other than software developers.

- Kulfon, Kulfon, what will become of you?! I've been worrying for a week now!

- Stop it!

While you're hesitating, listen to this wonderful Kulfon song!

Demo

Why Kulfon?

There are tons of static site generators out there. Here are a few points to
convince you to try Kulfon:

- one-command tool, similar to Hugo, but written in JavaScript, so it's easier to integrate additional JavaScript libraries or stylesheets
- solid foundation with carefully selected tools to produce smaller websites faster, as The Average Webpage Is Now the Size of the Original Doom:
  - Rollup for bundling JavaScript
  - Sass for stylesheets
  - Nunjucks for views (a simple, designer-friendly HTML-based syntax)
- written in ES6/ES2015
- Org Mode support
- Markdown support
- unified approach to external dependencies management with either unpkg or Yarn
- HTTP/2 ready

Installation

Getting started

Once Kulfon is installed, you will have access to the command.

First, let's create a new project:

Now enter the directory and run the server:

It creates a directory with the compiled content (this directory should be
ignored). Visit the local server address to check your website.

For more commands, just type
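A typical first session might look like the following transcript. The subcommand names shown here (`init`, `server`) are assumptions based on the description above, not confirmed from the Kulfon docs; run the tool's help command to see the actual commands:

```
kulfon init my-site    # create a new project (assumed subcommand name)
cd my-site             # enter the directory
kulfon server          # run the development server (assumed subcommand name)
```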

Visit Getting Started for more.

Roadmap

Kulfon keeps track of the upcoming fixes and features on GitHub Projects: Kulfon Roadmap

Websites that use Kulfon

If your website is using Kulfon, feel free to make a PR to add it to this list; please add the new entries at the top.

Bug reports

We use GitHub Issues for managing bug reports and feature requests. If you run
into problems, please search the issues or submit a new one here:

Detailed bug reports are always great; it's even better if you are able to
include test cases.
