diff --git a/vendor/github.com/tdewolff/buffer/LICENSE.md b/vendor/github.com/tdewolff/buffer/LICENSE.md
new file mode 100644
index 000000000..41677de41
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/LICENSE.md
@@ -0,0 +1,22 @@
+Copyright (c) 2015 Taco de Wolff
+
+ Permission is hereby granted, free of charge, to any person
+ obtaining a copy of this software and associated documentation
+ files (the "Software"), to deal in the Software without
+ restriction, including without limitation the rights to use,
+ copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the
+ Software is furnished to do so, subject to the following
+ conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file
diff --git a/vendor/github.com/tdewolff/buffer/README.md b/vendor/github.com/tdewolff/buffer/README.md
new file mode 100644
index 000000000..3c71a95bc
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/README.md
@@ -0,0 +1,42 @@
+# Buffer [GoDoc](http://godoc.org/github.com/tdewolff/buffer)
+
+This package contains several buffer types, used for example in https://github.com/tdewolff/parse.
+
+## Installation
+Run the following command
+
+ go get github.com/tdewolff/buffer
+
+or add the following import and run the project with `go get`
+``` go
+import "github.com/tdewolff/buffer"
+```
+
+## Reader
+Reader is a wrapper around a `[]byte` that implements the `io.Reader` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster.
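+
+A minimal sketch of its behavior:
+``` go
+r := buffer.NewReader([]byte("lorem ipsum"))
+b := make([]byte, 6)
+n, _ := r.Read(b) // n == 6, b holds "lorem "
+n, _ = r.Read(b)  // n == 5, the remaining "ipsum"
+n, _ = r.Read(b)  // n == 0 and io.EOF is returned
+```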
+
+## Writer
+Writer is a buffer that implements the `io.Writer` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster. It will expand the buffer when needed.
+
+The reset functionality allows for better memory reuse: after calling `Reset`, subsequent writes overwrite the existing buffer instead of allocating a new one, which reduces allocations.
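+
+A minimal sketch (assuming the initial capacity suffices, no reallocation occurs):
+``` go
+w := buffer.NewWriter(make([]byte, 0, 16))
+w.Write([]byte("lorem ipsum")) // w.Bytes() == "lorem ipsum"
+w.Reset()
+w.Write([]byte("dolor")) // reuses the same memory: w.Bytes() == "dolor"
+```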
+
+## Shifter
+Shifter is a read buffer specifically for building lexers. It reads in chunks from an `io.Reader` and keeps track of two positions: the start and end position. The start position is the beginning of the current token being parsed; the end position is moved forward until a valid token is found. Calling `Shift` will collapse the positions to the end and return the parsed `[]byte`.
+
+The end position can be moved with `Move(int)`, which also accepts negative integers, or with `MoveTo(int)`, where the integer will be the new length of the selected bytes. `MoveTo(int)` is useful when you have saved a previous position through `Pos() int` and want to return to it.
+
+`Peek(int) byte` will peek forward (relative to the end position, i.e. the position set with Move/MoveTo) and return the byte at that location. `PeekRune(int) (rune, int)` returns the UTF-8 rune and its byte length at the given **byte** position. Consecutive calls to Peek **may invalidate previously returned byte slices**, so if you need the content of a byte slice after the next call to `Peek(int) byte`, it must in principle be copied (see the exception below).
+
+`Bytes() []byte` will return the currently selected bytes, `Skip()` will collapse the selection. `Shift() []byte` is a combination of `Bytes() []byte` and `Skip()`.
+
+When the internal `io.Reader` has returned an error, `Err() error` will return that error (even if subsequent peeks are still possible). `Peek(int) byte` returns `0` when an error has occurred. `IsEOF() bool` is a faster alternative to `Err() == io.EOF`: if it returns true, the internal buffer will not be reallocated or overwritten, so returned byte slices need not be copied for use after subsequent `Peek(int) byte` calls. When the `io.Reader` provides the `Bytes() []byte` function (which `Reader` in this package does), that buffer is used directly and `IsEOF()` always returns `true` (i.e. copying returned slices is never needed).
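+
+A minimal sketch of selecting bytes up to the first space (`strings.NewReader` stands in for any `io.Reader`):
+``` go
+z := buffer.NewShifter(strings.NewReader("foo bar"))
+for z.Peek(0) != ' ' && z.Err() == nil {
+	z.Move(1)
+}
+token := z.Shift() // token == "foo"; the selection collapses to the end position
+```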
+
+## Lexer
+Lexer is an improvement over Shifter in that it does not require the returned byte slices to be copied. Instead you can call `ShiftLen() int`, which returns the number of bytes that have been shifted since the previous call to `ShiftLen`, and use that to specify how many bytes may be freed from the buffer. Calling `Free(n int)` frees up `n` bytes from the internal buffer(s). It holds an array of buffers to accommodate keeping everything in memory. If you don't need to keep returned byte slices around, call `Free(ShiftLen())` after every `Shift` call.
+
+The `MoveTo(int)` function has been renamed to `Rewind(int)` to fit its meaning better. Also `Bytes() []byte` has been renamed to `Lexeme() []byte` for the same reason.
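+
+A minimal sketch of a token loop, freeing bytes as soon as the token is no longer needed (`buffer.NewReader` provides `Bytes() []byte`, so the Lexer works zero-copy on its buffer here):
+``` go
+z := buffer.NewLexer(buffer.NewReader([]byte("foo bar")))
+for z.Peek(0) != ' ' && z.Err() == nil {
+	z.Move(1)
+}
+token := z.Shift()   // token == "foo"
+z.Free(z.ShiftLen()) // safe, as token is not used after this point
+```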
+
+## License
+Released under the [MIT license](LICENSE.md).
diff --git a/vendor/github.com/tdewolff/buffer/buffer.go b/vendor/github.com/tdewolff/buffer/buffer.go
new file mode 100644
index 000000000..f16c7cc19
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/buffer.go
@@ -0,0 +1,15 @@
+/*
+Package buffer contains buffer and wrapper types for byte slices. It is useful for writing lexers or other high-performance byte slice handling.
+
+The `Reader` and `Writer` types implement the `io.Reader` and `io.Writer` interfaces, respectively, and provide a thinner and faster interface than `bytes.Buffer`.
+The `Shifter` type is useful for building lexers because it keeps track of the start and end position of a byte selection, and shifts the bytes whenever a valid token is found.
+The `Lexer` is an improved version of `Shifter` that allows zero-copy for the parser by using a (kind of) ring buffer underneath.
+*/
+package buffer // import "github.com/tdewolff/buffer"
+
+// defaultBufSize specifies the default initial length of internal buffers.
+var defaultBufSize = 4096
+
+// MinBuf specifies the default initial length of internal buffers.
+// It exists solely to support old versions of parse.
+var MinBuf = defaultBufSize
diff --git a/vendor/github.com/tdewolff/buffer/lexer.go b/vendor/github.com/tdewolff/buffer/lexer.go
new file mode 100644
index 000000000..eead11f2b
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/lexer.go
@@ -0,0 +1,221 @@
+package buffer // import "github.com/tdewolff/buffer"
+
+import "io"
+
+type block struct {
+ buf []byte
+ next int // index in pool plus one
+ active bool
+}
+
+type bufferPool struct {
+ pool []block
+ head int // index in pool plus one
+ tail int // index in pool plus one
+
+ pos int // byte pos in tail
+}
+
+func (z *bufferPool) swap(oldBuf []byte, size int) []byte {
+ // find new buffer that can be reused
+ swap := -1
+ for i := 0; i < len(z.pool); i++ {
+ if !z.pool[i].active && size <= cap(z.pool[i].buf) {
+ swap = i
+ break
+ }
+ }
+ if swap == -1 { // no free buffer found for reuse
+ if z.tail == 0 && z.pos >= len(oldBuf) && size <= cap(oldBuf) { // but we can reuse the current buffer!
+ z.pos -= len(oldBuf)
+ return oldBuf[:0]
+ }
+ // allocate new
+ z.pool = append(z.pool, block{make([]byte, 0, size), 0, true})
+ swap = len(z.pool) - 1
+ }
+
+ newBuf := z.pool[swap].buf
+
+ // put current buffer into pool
+ z.pool[swap] = block{oldBuf, 0, true}
+ if z.head != 0 {
+ z.pool[z.head-1].next = swap + 1
+ }
+ z.head = swap + 1
+ if z.tail == 0 {
+ z.tail = swap + 1
+ }
+
+ return newBuf[:0]
+}
+
+func (z *bufferPool) free(n int) {
+ z.pos += n
+ // move the tail over to next buffers
+ for z.tail != 0 && z.pos >= len(z.pool[z.tail-1].buf) {
+ z.pos -= len(z.pool[z.tail-1].buf)
+ newTail := z.pool[z.tail-1].next
+ z.pool[z.tail-1].active = false // after this, any thread may pick up the inactive buffer, so it can't be used anymore
+ z.tail = newTail
+ }
+ if z.tail == 0 {
+ z.head = 0
+ }
+}
+
+// Lexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
+// It keeps data in memory until Free is called with the number of bytes that may be released.
+type Lexer struct {
+ r io.Reader
+ err error
+
+ pool bufferPool
+
+ buf []byte
+ start int // index in buf
+ pos int // index in buf
+ prevStart int
+
+ free int
+}
+
+// NewLexer returns a new Lexer for a given io.Reader with a 4kB estimated buffer size.
+// If the io.Reader implements Bytes, that buffer is used instead.
+func NewLexer(r io.Reader) *Lexer {
+ return NewLexerSize(r, defaultBufSize)
+}
+
+// NewLexerSize returns a new Lexer for a given io.Reader and estimated required buffer size.
+// If the io.Reader implements Bytes, that buffer is used instead.
+func NewLexerSize(r io.Reader, size int) *Lexer {
+ // if reader has the bytes in memory already, use that instead
+ if buffer, ok := r.(interface {
+ Bytes() []byte
+ }); ok {
+ return &Lexer{
+ err: io.EOF,
+ buf: buffer.Bytes(),
+ }
+ }
+ return &Lexer{
+ r: r,
+ buf: make([]byte, 0, size),
+ }
+}
+
+func (z *Lexer) read(pos int) byte {
+ if z.err != nil {
+ return 0
+ }
+
+ // free unused bytes
+ z.pool.free(z.free)
+ z.free = 0
+
+ // get new buffer
+ c := cap(z.buf)
+ p := pos - z.start + 1
+ if 2*p > c { // if the token is larger than half the buffer, increase buffer size
+ c = 2*c + p
+ }
+ d := len(z.buf) - z.start
+ buf := z.pool.swap(z.buf[:z.start], c)
+ copy(buf[:d], z.buf[z.start:]) // copy the left-overs (unfinished token) from the old buffer
+
+ // read in new data for the rest of the buffer
+ var n int
+ for pos-z.start >= d && z.err == nil {
+ n, z.err = z.r.Read(buf[d:cap(buf)])
+ d += n
+ }
+ pos -= z.start
+ z.pos -= z.start
+ z.start, z.buf = 0, buf[:d]
+ if pos >= d {
+ return 0
+ }
+ return z.buf[pos]
+}
+
+// Err returns the error returned by the io.Reader. Valid bytes may still be returned for a while after the error occurred.
+func (z *Lexer) Err() error {
+ if z.err == io.EOF && z.pos < len(z.buf) {
+ return nil
+ }
+ return z.err
+}
+
+// Free frees up n bytes of previously shifted tokens.
+// Each call to Shift should at one point be followed by a call to Free with a length returned by ShiftLen.
+func (z *Lexer) Free(n int) {
+ z.free += n
+}
+
+// Peek returns the ith byte relative to the end position and possibly does an allocation.
+// Peek returns zero when an error has occurred; Err returns the error.
+// TODO: inline function
+func (z *Lexer) Peek(pos int) byte {
+ pos += z.pos
+ if uint(pos) < uint(len(z.buf)) { // uint for BCE
+ return z.buf[pos]
+ }
+ return z.read(pos)
+}
+
+// PeekRune returns the rune and rune length of the ith byte relative to the end position.
+func (z *Lexer) PeekRune(pos int) (rune, int) {
+ // from unicode/utf8
+ c := z.Peek(pos)
+ if c < 0xC0 {
+ return rune(c), 1
+ } else if c < 0xE0 {
+ return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
+ } else if c < 0xF0 {
+ return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
+ }
+ return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
+}
+
+// Move advances the position.
+func (z *Lexer) Move(n int) {
+ z.pos += n
+}
+
+// Pos returns a mark to which the position can be rewound.
+func (z *Lexer) Pos() int {
+ return z.pos - z.start
+}
+
+// Rewind rewinds the position to a mark previously returned by Pos.
+func (z *Lexer) Rewind(pos int) {
+ z.pos = z.start + pos
+}
+
+// Lexeme returns the bytes of the current selection.
+func (z *Lexer) Lexeme() []byte {
+ return z.buf[z.start:z.pos]
+}
+
+// Skip collapses the position to the end of the selection.
+func (z *Lexer) Skip() {
+ z.start = z.pos
+}
+
+// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
+// Use ShiftLen to obtain the number of bytes moved since the last call to ShiftLen; that value can be used in calls to Free.
+func (z *Lexer) Shift() []byte {
+ if z.pos > len(z.buf) { // make sure we peeked at least as much as we shift
+ z.read(z.pos - 1)
+ }
+ b := z.buf[z.start:z.pos]
+ z.start = z.pos
+ return b
+}
+
+// ShiftLen returns the number of bytes moved since the last call to ShiftLen. This can be used in calls to Free because it takes into account multiple Shifts or Skips.
+func (z *Lexer) ShiftLen() int {
+ n := z.start - z.prevStart
+ z.prevStart = z.start
+ return n
+}
diff --git a/vendor/github.com/tdewolff/buffer/reader.go b/vendor/github.com/tdewolff/buffer/reader.go
new file mode 100644
index 000000000..72294f962
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/reader.go
@@ -0,0 +1,39 @@
+package buffer // import "github.com/tdewolff/buffer"
+
+import "io"
+
+// Reader implements an io.Reader over a byte slice.
+type Reader struct {
+ buf []byte
+ pos int
+}
+
+// NewReader returns a new Reader for a given byte slice.
+func NewReader(buf []byte) *Reader {
+ return &Reader{
+ buf: buf,
+ }
+}
+
+// Read reads bytes into the given byte slice and returns the number of bytes read and an error if one occurred.
+func (r *Reader) Read(b []byte) (n int, err error) {
+ if len(b) == 0 {
+ return 0, nil
+ }
+ if r.pos >= len(r.buf) {
+ return 0, io.EOF
+ }
+ n = copy(b, r.buf[r.pos:])
+ r.pos += n
+ return
+}
+
+// Bytes returns the underlying byte slice.
+func (r *Reader) Bytes() []byte {
+ return r.buf
+}
+
+// Reset resets the position of the read pointer to the beginning of the underlying byte slice.
+func (r *Reader) Reset() {
+ r.pos = 0
+}
diff --git a/vendor/github.com/tdewolff/buffer/shifter.go b/vendor/github.com/tdewolff/buffer/shifter.go
new file mode 100644
index 000000000..ad5da5936
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/shifter.go
@@ -0,0 +1,144 @@
+package buffer // import "github.com/tdewolff/buffer"
+
+import "io"
+
+// Shifter is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
+type Shifter struct {
+ r io.Reader
+ err error
+ eof bool
+
+ buf []byte
+ pos int
+ end int
+}
+
+// NewShifter returns a new Shifter for a given io.Reader with a 4kB estimated buffer size.
+// If the io.Reader implements Bytes, that buffer is used instead.
+func NewShifter(r io.Reader) *Shifter {
+ return NewShifterSize(r, defaultBufSize)
+}
+
+// NewShifterSize returns a new Shifter for a given io.Reader and estimated required buffer size.
+// If the io.Reader implements Bytes, that buffer is used instead.
+func NewShifterSize(r io.Reader, size int) *Shifter {
+ // If reader has the bytes in memory already, use that instead!
+ if buffer, ok := r.(interface {
+ Bytes() []byte
+ }); ok {
+ return &Shifter{
+ err: io.EOF,
+ eof: true,
+ buf: buffer.Bytes(),
+ }
+ }
+ z := &Shifter{
+ r: r,
+ buf: make([]byte, 0, size),
+ }
+ z.Peek(0)
+ return z
+}
+
+// Err returns the error returned by the io.Reader. Valid bytes may still be returned for a while after the error occurred.
+func (z *Shifter) Err() error {
+ if z.eof && z.end < len(z.buf) {
+ return nil
+ }
+ return z.err
+}
+
+// IsEOF returns true when EOF has been encountered, meaning that the last of the data has been loaded into memory (i.e. byte slices returned previously will not be overwritten by Peek).
+// Calling IsEOF is faster than checking Err() == io.EOF.
+func (z *Shifter) IsEOF() bool {
+ return z.eof
+}
+
+func (z *Shifter) read(end int) byte {
+ if z.err != nil {
+ return 0
+ }
+
+ // reallocate a new buffer (possibly larger)
+ c := cap(z.buf)
+ d := len(z.buf) - z.pos
+ var buf []byte
+ if 2*d > c {
+ buf = make([]byte, d, 2*c+end-z.pos)
+ } else {
+ buf = z.buf[:d]
+ }
+ copy(buf, z.buf[z.pos:])
+
+ // read in to fill the buffer till capacity
+ var n int
+ n, z.err = z.r.Read(buf[d:cap(buf)])
+ z.eof = (z.err == io.EOF)
+ end -= z.pos
+ z.end -= z.pos
+ z.pos, z.buf = 0, buf[:d+n]
+ if n == 0 {
+ if z.err == nil {
+ z.err = io.EOF
+ z.eof = true
+ }
+ return 0
+ }
+ return z.buf[end]
+}
+
+// Peek returns the ith byte relative to the end position and possibly does an allocation. Calling Peek may invalidate byte slices previously returned by Bytes or Shift, unless IsEOF returns true.
+// Peek returns zero when an error has occurred; Err returns the error.
+func (z *Shifter) Peek(end int) byte {
+ end += z.end
+ if end >= len(z.buf) {
+ return z.read(end)
+ }
+ return z.buf[end]
+}
+
+// PeekRune returns the rune and rune length of the ith byte relative to the end position.
+func (z *Shifter) PeekRune(i int) (rune, int) {
+ // from unicode/utf8
+ c := z.Peek(i)
+ if c < 0xC0 {
+ return rune(c), 1
+ } else if c < 0xE0 {
+ return rune(c&0x1F)<<6 | rune(z.Peek(i+1)&0x3F), 2
+ } else if c < 0xF0 {
+ return rune(c&0x0F)<<12 | rune(z.Peek(i+1)&0x3F)<<6 | rune(z.Peek(i+2)&0x3F), 3
+ }
+ return rune(c&0x07)<<18 | rune(z.Peek(i+1)&0x3F)<<12 | rune(z.Peek(i+2)&0x3F)<<6 | rune(z.Peek(i+3)&0x3F), 4
+}
+
+// Move advances the end position.
+func (z *Shifter) Move(n int) {
+ z.end += n
+}
+
+// MoveTo sets the end position.
+func (z *Shifter) MoveTo(n int) {
+ z.end = z.pos + n
+}
+
+// Pos returns the end position.
+func (z *Shifter) Pos() int {
+ return z.end - z.pos
+}
+
+// Bytes returns the bytes of the current selection.
+func (z *Shifter) Bytes() []byte {
+ return z.buf[z.pos:z.end]
+}
+
+// Shift returns the bytes of the current selection and collapses the position to the end.
+func (z *Shifter) Shift() []byte {
+ b := z.buf[z.pos:z.end]
+ z.pos = z.end
+ return b
+}
+
+// Skip collapses the position to the end.
+func (z *Shifter) Skip() {
+ z.pos = z.end
+}
diff --git a/vendor/github.com/tdewolff/buffer/writer.go b/vendor/github.com/tdewolff/buffer/writer.go
new file mode 100644
index 000000000..2cbde2528
--- /dev/null
+++ b/vendor/github.com/tdewolff/buffer/writer.go
@@ -0,0 +1,41 @@
+package buffer // import "github.com/tdewolff/buffer"
+
+// Writer implements an io.Writer over a byte slice.
+type Writer struct {
+ buf []byte
+}
+
+// NewWriter returns a new Writer for a given byte slice.
+func NewWriter(buf []byte) *Writer {
+ return &Writer{
+ buf: buf,
+ }
+}
+
+// Write writes bytes from the given byte slice and returns the number of bytes written and an error if one occurred. When err != nil, n == 0.
+func (w *Writer) Write(b []byte) (int, error) {
+ n := len(b)
+ end := len(w.buf)
+ if end+n > cap(w.buf) {
+ buf := make([]byte, end, 2*cap(w.buf)+n)
+ copy(buf, w.buf)
+ w.buf = buf
+ }
+ w.buf = w.buf[:end+n]
+ return copy(w.buf[end:], b), nil
+}
+
+// Len returns the length of the underlying byte slice.
+func (w *Writer) Len() int {
+ return len(w.buf)
+}
+
+// Bytes returns the underlying byte slice.
+func (w *Writer) Bytes() []byte {
+ return w.buf
+}
+
+// Reset empties and reuses the current buffer. Subsequent writes will overwrite the buffer, so any reference to the underlying slice is invalidated after this call.
+func (w *Writer) Reset() {
+ w.buf = w.buf[:0]
+}
diff --git a/vendor/github.com/tdewolff/minify/LICENSE.md b/vendor/github.com/tdewolff/minify/LICENSE.md
new file mode 100644
index 000000000..41677de41
--- /dev/null
+++ b/vendor/github.com/tdewolff/minify/LICENSE.md
@@ -0,0 +1,22 @@
+Copyright (c) 2015 Taco de Wolff
+
+ Permission is hereby granted, free of charge, to any person
+ obtaining a copy of this software and associated documentation
+ files (the "Software"), to deal in the Software without
+ restriction, including without limitation the rights to use,
+ copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the
+ Software is furnished to do so, subject to the following
+ conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ OTHER DEALINGS IN THE SOFTWARE.
\ No newline at end of file
diff --git a/vendor/github.com/tdewolff/minify/README.md b/vendor/github.com/tdewolff/minify/README.md
new file mode 100644
index 000000000..b8370ff8c
--- /dev/null
+++ b/vendor/github.com/tdewolff/minify/README.md
@@ -0,0 +1,565 @@
+# Minify [Build Status](https://travis-ci.org/tdewolff/minify) [GoDoc](http://godoc.org/github.com/tdewolff/minify) [Coverage Status](https://coveralls.io/github/tdewolff/minify?branch=master) [Gitter](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+
+**The preferred stable release is v2. Master has some new changes for SVG that haven't yet endured the test of time; bug reports are appreciated.**
+
+**[Online demo](http://go.tacodewolff.nl/minify) if you need to minify files *now*.**
+
+**[Command line tool](https://github.com/tdewolff/minify/tree/master/cmd/minify) that minifies concurrently and supports watching file changes.**
+
+**[All releases](https://dl.equinox.io/tdewolff/minify/stable) on Equinox for various platforms.**
+
+If this software is useful to you, consider making a [donation](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=27MSRR5UJQQUL)! When a significant amount has been deposited, I will write a much improved JS minifier.
+
+---
+
+Minify is a minifier package written in [Go][1]. It provides HTML5, CSS3, JS, JSON, SVG and XML minifiers and an interface to implement any other minifier. Minification is the process of removing bytes from a file (such as whitespace) without changing its output, thereby shrinking its size and speeding up transmission over the internet and possibly parsing. The implemented minifiers are high-performance and streaming, which implies O(n) complexity.
+
+The core functionality associates mimetypes with minification functions, allowing embedded resources (like CSS or JS within HTML files) to be minified as well. Users can add new implementations that are triggered based on a mimetype (or pattern), or redirect to an external command (like ClosureCompiler, UglifyCSS, ...).
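+
+A sketch of this association (using the imports listed under [Installation](#installation) plus `os/exec`; the jar path is illustrative):
+``` go
+m := minify.New()
+m.AddFunc("text/css", css.Minify)
+m.AddCmd("application/javascript", exec.Command("java", "-jar", "compiler.jar"))
+```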
+
+#### Table of Contents
+
+- [Minify](#minify)
+ - [Prologue](#prologue)
+ - [Installation](#installation)
+ - [API stability](#api-stability)
+ - [Testing](#testing)
+ - [HTML](#html)
+ - [Whitespace removal](#whitespace-removal)
+ - [CSS](#css)
+ - [JS](#js)
+ - [JSON](#json)
+ - [SVG](#svg)
+ - [XML](#xml)
+ - [Usage](#usage)
+ - [New](#new)
+ - [From reader](#from-reader)
+ - [From bytes](#from-bytes)
+ - [From string](#from-string)
+ - [Custom minifier](#custom-minifier)
+ - [Mediatypes](#mediatypes)
+ - [Examples](#examples)
+ - [Common minifiers](#common-minifiers)
+ - [Custom minifier](#custom-minifier-example)
+ - [ResponseWriter](#responsewriter)
+ - [Templates](#templates)
+ - [License](#license)
+
+#### Status
+
+* CSS: **fully implemented**
+* HTML: **fully implemented**
+* JS: basic JSmin-like implementation
+* JSON: **fully implemented**
+* SVG: partially implemented; in development
+* XML: **fully implemented**
+
+## Prologue
+Minifiers or bindings to minifiers exist in almost all programming languages. Some implementations merely use several regular expressions to trim whitespace and comments (even though regex for parsing HTML/XML is ill-advised; for a good read see [Regular Expressions: Now You Have Two Problems](http://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)). Some implementations are much more profound, such as the [YUI Compressor](http://yui.github.io/yuicompressor/) and [Google Closure Compiler](https://github.com/google/closure-compiler) for JS. As most existing implementations either use Java or JavaScript and don't focus on performance, they are pretty slow. Moreover, loading the whole file into memory is bad for really large files (or impossible for infinite streams).
+
+This minifier proves that it is possible to have a fast and extensive minifier that can handle HTML and any other filetype it may contain (CSS, JS, ...). It streams the input and output and can minify files concurrently.
+
+## Installation
+Run the following command
+
+ go get github.com/tdewolff/minify
+
+or add the following imports and run the project with `go get`
+``` go
+import (
+ "github.com/tdewolff/minify"
+ "github.com/tdewolff/minify/css"
+ "github.com/tdewolff/minify/html"
+ "github.com/tdewolff/minify/js"
+ "github.com/tdewolff/minify/json"
+ "github.com/tdewolff/minify/svg"
+ "github.com/tdewolff/minify/xml"
+)
+```
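+
+A minimal program tying this together might look like the following sketch, which minifies HTML from stdin to stdout:
+``` go
+package main
+
+import (
+	"os"
+
+	"github.com/tdewolff/minify"
+	"github.com/tdewolff/minify/html"
+)
+
+func main() {
+	m := minify.New()
+	m.AddFunc("text/html", html.Minify)
+	if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
+		panic(err)
+	}
+}
+```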
+
+## API stability
+There is no guarantee for absolute stability, but I take issues and bugs seriously and don't take API changes lightly. The library will be maintained in a compatible way unless vital bugs prevent me from doing so. There has been one API change after v1, which added options support, and I took the opportunity to push through some more API cleanup as well. There are no plans whatsoever for future API changes.
+
+- minify-v1.0.0 depends on parse-v1.0.0
+- minify-v1.1.0 depends on parse-v1.1.0
+- minify-v2.0.0 depends on parse-v2.0.0
+- minify-v2.1.0 depends on parse-v2.1.0
+- minify-tip will always compile with my other packages on tip
+
+The API differences between v1 and v2 are listed below. If `m := minify.New()` and `w` and `r` are your writer and reader respectively, then **v1** → **v2** (a sketch follows the list):
+ - `minify.Bytes(m, ...)` → `m.Bytes(...)`
+ - `minify.String(m, ...)` → `m.String(...)`
+ - `html.Minify(m, "text/html", w, r)` → `html.Minify(m, w, r, nil)` also for `css`, `js`, ...
+ - `css.Minify(m, "text/css;inline=1", w, r)` → `css.Minify(m, w, r, map[string]string{"inline":"1"})`
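+
+For instance, minifying an inline CSS string in v2 becomes (a sketch, assuming `css.Minify` has been registered on `m`):
+``` go
+s, err := m.String("text/css", "color: #ff0000;")
+```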
+
+## Testing
+For all subpackages and the imported `parse` and `buffer` packages, test coverage of 100% is pursued. Besides full coverage, the minifiers are [fuzz tested](https://github.com/tdewolff/fuzz) using [github.com/dvyukov/go-fuzz](http://www.github.com/dvyukov/go-fuzz); see [the wiki](https://github.com/tdewolff/minify/wiki) for the most important bugs found by fuzz testing. Furthermore, I am working on adding visual testing to ensure that minification doesn't change anything visually: by using the WebKit browser to render the original and minified pages, we can check whether any pixel differs.
+
+These tests ensure that everything works as intended and that the code does not crash (whatever the input) or change the final result visually. If you still encounter a bug, please report it [here](https://github.com/tdewolff/minify/issues)!
+
+## HTML
+
+HTML (with JS and CSS) minification typically runs at about 40MB/s ~= 140GB/h, depending on the composition of the file.
+
+Website | Original | Minified | Ratio | Time*
+------- | -------- | -------- | ----- | -----------------------
+[Amazon](http://www.amazon.com/) | 463kB | **414kB** | 90% | 10ms
+[BBC](http://www.bbc.com/) | 113kB | **96kB** | 85% | 3ms
+[StackOverflow](http://stackoverflow.com/) | 201kB | **182kB** | 91% | 5ms
+[Wikipedia](http://en.wikipedia.org/wiki/President_of_the_United_States) | 435kB | **410kB** | 94%** | 11ms
+
+*These times are measured on my home computer, which is an average development machine. The duration varies a lot, but it's important to see that it's in the 10ms range! The benchmark uses all the minifiers and excludes reading from and writing to the file from the measurement.
+
+**The page is already somewhat minified, so this doesn't reflect the full potential of this minifier.
+
+The HTML5 minifier uses these minifications:
+
+- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
+- strip superfluous quotes, or use single/double quotes, whichever requires fewer escapes
+- strip default attribute values and attribute boolean values
+- strip some empty attributes
+- strip unrequired tags (`html`, `head`, `body`, ...)
+- strip unrequired end tags (`tr`, `td`, `li`, ... and often `p`)
+- strip default protocols (`http:`, `https:` and `javascript:`)
+- strip all comments (including conditional comments; old IE versions are no longer supported by Microsoft)
+- shorten `doctype` and `meta` charset
+- lowercase tags, attributes and some values to enhance gzip compression
+
+Options:
+
+- `KeepConditionalComments` preserve all IE conditional comments such as `<!--[if IE 6]><![endif]-->`, see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
+- `KeepDefaultAttrVals` preserve default attribute values such as `