mirror of https://github.com/StackExchange/dnscontrol.git
vendor minify package

vendor/github.com/tdewolff/buffer/LICENSE.md (generated, vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/tdewolff/buffer/README.md (generated, vendored, new file, 42 lines)
@@ -0,0 +1,42 @@
# Buffer [GoDoc](http://godoc.org/github.com/tdewolff/buffer)

This package contains several buffer types used in, for example, https://github.com/tdewolff/parse.

## Installation
Run the following command

    go get github.com/tdewolff/buffer

or add the following import and run the project with `go get`
``` go
import "github.com/tdewolff/buffer"
```

## Reader
Reader is a wrapper around a `[]byte` that implements the `io.Reader` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster.
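
As a quick illustration, a minimal sketch of `Reader` in use (a hypothetical example; only `NewReader` and the `io.Reader` behaviour described above are assumed):

``` go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/tdewolff/buffer"
)

func main() {
	r := buffer.NewReader([]byte("example data"))
	b, err := ioutil.ReadAll(r) // reads straight from the wrapped []byte
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", b) // Output: example data
}
```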

## Writer
Writer is a buffer that implements the `io.Writer` interface. It is a much thinner layer than `bytes.Buffer` provides and is therefore faster. It will expand the buffer when needed.

The reset functionality allows for better memory reuse: after calling `Reset`, subsequent writes overwrite the current buffer and thus reduce allocations.
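
A small sketch of `Writer` with `Reset` (hypothetical usage; only the methods described in this README and the source are assumed):

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/buffer"
)

func main() {
	w := buffer.NewWriter(make([]byte, 0, 16))
	fmt.Fprint(w, "first")
	fmt.Printf("%s\n", w.Bytes()) // Output: first

	w.Reset() // reuse the same backing array instead of allocating a new one
	fmt.Fprint(w, "second")
	fmt.Printf("%s\n", w.Bytes()) // Output: second
}
```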

## Shifter
Shifter is a read buffer specifically for building lexers. It reads in chunks from an `io.Reader` and keeps track of two positions: the start and end position. The start position is the beginning of the current token being parsed; the end position is moved forward until a valid token is found. Calling `Shift` will collapse the positions to the end and return the parsed `[]byte`.

Moving the end position can go through `Move(int)`, which also accepts negative integers, or `MoveTo(int)`, where the integer will be the new length of the selected bytes. `MoveTo(int)` is useful when you saved a previous position through `Pos() int` and want to return to that position.

`Peek(int) byte` will peek forward (relative to the end position, i.e. the position set with Move/MoveTo) and return the byte at that location. `PeekRune(int) (rune, int)` returns the UTF-8 rune and its length at the given **byte** position. Consecutive calls to Peek **may invalidate previously returned byte slices**. So if you need to use the content of a byte slice after the next call to `Peek(int) byte`, it needs to be copied in principle (see the exception below).

`Bytes() []byte` will return the currently selected bytes, and `Skip()` will collapse the selection. `Shift() []byte` is a combination of `Bytes() []byte` and `Skip()`.

When the internal `io.Reader` returns an error, `Err() error` will return that error (even if subsequent peeks are still possible). `Peek(int) byte` returns `0` when an error has occurred. `IsEOF() bool` is a faster alternative to `Err() == io.EOF`; if it returns true, the internal buffer will not be reallocated or overwritten, so returned byte slices need not be copied for use after subsequent `Peek(int) byte` calls. When the `io.Reader` provides the `Bytes() []byte` function (which `Reader` in this package does), that buffer is used directly and `IsEOF()` always returns `true` (i.e. copying returned slices is not needed).
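
For example, a hypothetical word lexer over whitespace-separated input, using only the calls described above:

``` go
package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/buffer"
)

func main() {
	z := buffer.NewShifter(strings.NewReader("foo bar"))
	for {
		c := z.Peek(0)
		if c == ' ' || c == 0 { // Peek returns 0 on error or EOF
			if len(z.Bytes()) > 0 {
				fmt.Printf("token: %s\n", z.Shift()) // return selection, collapse positions
			}
			if c == 0 {
				break
			}
			z.Move(1) // step over the space
			z.Skip()  // and drop it from the selection
		} else {
			z.Move(1) // extend the selection by one byte
		}
	}
	// Output:
	// token: foo
	// token: bar
}
```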

## Lexer
Lexer is an improvement over Shifter in that it does not need the returned byte slices to be copied. Instead, you can call `ShiftLen() int`, which returns the number of bytes that have been shifted since the previous call to `ShiftLen`, and use that to specify how many bytes need to be freed up from the buffer. Calling `Free(n int)` frees up `n` bytes from the internal buffer(s). It holds an array of buffers to accommodate keeping everything in memory. If you don't need to keep returned byte slices around, call `Free(ShiftLen())` after every `Shift` call.

The `MoveTo(int)` function has been renamed to `Rewind(int)` to fit its meaning better. Also, `Bytes() []byte` has been renamed to `Lexeme() []byte` for the same reason.
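
The same word lexer sketched with `Lexer`, freeing shifted bytes immediately (hypothetical usage; only the calls described above are assumed):

``` go
package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/buffer"
)

func main() {
	z := buffer.NewLexer(strings.NewReader("foo bar"))
	for {
		c := z.Peek(0)
		if c == ' ' || c == 0 { // Peek returns 0 on error or EOF
			if len(z.Lexeme()) > 0 {
				fmt.Printf("token: %s\n", z.Shift())
			}
			z.Free(z.ShiftLen()) // we don't keep the slice, so free it right away
			if c == 0 {
				break
			}
			z.Move(1) // step over the space
			z.Skip()
		} else {
			z.Move(1)
		}
	}
}
```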

## License
Released under the [MIT license](LICENSE.md).

vendor/github.com/tdewolff/buffer/buffer.go (generated, vendored, new file, 15 lines)
@@ -0,0 +1,15 @@
/*
Package buffer contains buffer and wrapper types for byte slices. It is useful for writing lexers or other high-performance byte slice handling.

The `Reader` and `Writer` types implement the `io.Reader` and `io.Writer` interfaces, respectively, and provide a thinner and faster interface than `bytes.Buffer`.
The `Shifter` type is useful for building lexers because it keeps track of the start and end position of a byte selection, and shifts the bytes whenever a valid token is found.
The `Lexer`, however, is an improved version of `Shifter`, allowing zero-copy for the parser by using a (kind of) ring buffer underneath.
*/
package buffer // import "github.com/tdewolff/buffer"

// defaultBufSize specifies the default initial length of internal buffers.
var defaultBufSize = 4096

// MinBuf specifies the default initial length of internal buffers.
// It is solely here to support old versions of parse.
var MinBuf = defaultBufSize

vendor/github.com/tdewolff/buffer/lexer.go (generated, vendored, new file, 221 lines)
@@ -0,0 +1,221 @@
package buffer // import "github.com/tdewolff/buffer"

import "io"

type block struct {
	buf    []byte
	next   int // index in pool plus one
	active bool
}

type bufferPool struct {
	pool []block
	head int // index in pool plus one
	tail int // index in pool plus one

	pos int // byte pos in tail
}

func (z *bufferPool) swap(oldBuf []byte, size int) []byte {
	// find a new buffer that can be reused
	swap := -1
	for i := 0; i < len(z.pool); i++ {
		if !z.pool[i].active && size <= cap(z.pool[i].buf) {
			swap = i
			break
		}
	}
	if swap == -1 { // no free buffer found for reuse
		if z.tail == 0 && z.pos >= len(oldBuf) && size <= cap(oldBuf) { // but we can reuse the current buffer!
			z.pos -= len(oldBuf)
			return oldBuf[:0]
		}
		// allocate new
		z.pool = append(z.pool, block{make([]byte, 0, size), 0, true})
		swap = len(z.pool) - 1
	}

	newBuf := z.pool[swap].buf

	// put the current buffer into the pool
	z.pool[swap] = block{oldBuf, 0, true}
	if z.head != 0 {
		z.pool[z.head-1].next = swap + 1
	}
	z.head = swap + 1
	if z.tail == 0 {
		z.tail = swap + 1
	}

	return newBuf[:0]
}

func (z *bufferPool) free(n int) {
	z.pos += n
	// move the tail over to the next buffers
	for z.tail != 0 && z.pos >= len(z.pool[z.tail-1].buf) {
		z.pos -= len(z.pool[z.tail-1].buf)
		newTail := z.pool[z.tail-1].next
		z.pool[z.tail-1].active = false // after this, any thread may pick up the inactive buffer, so it can't be used anymore
		z.tail = newTail
	}
	if z.tail == 0 {
		z.head = 0
	}
}

// Lexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
// It keeps data in-memory until Free, taking a byte length, is called to move beyond the data.
type Lexer struct {
	r   io.Reader
	err error

	pool bufferPool

	buf       []byte
	start     int // index in buf
	pos       int // index in buf
	prevStart int

	free int
}

// NewLexer returns a new Lexer for a given io.Reader with a 4kB estimated buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewLexer(r io.Reader) *Lexer {
	return NewLexerSize(r, defaultBufSize)
}

// NewLexerSize returns a new Lexer for a given io.Reader and estimated required buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewLexerSize(r io.Reader, size int) *Lexer {
	// if the reader has the bytes in memory already, use that instead
	if buffer, ok := r.(interface {
		Bytes() []byte
	}); ok {
		return &Lexer{
			err: io.EOF,
			buf: buffer.Bytes(),
		}
	}
	return &Lexer{
		r:   r,
		buf: make([]byte, 0, size),
	}
}

func (z *Lexer) read(pos int) byte {
	if z.err != nil {
		return 0
	}

	// free unused bytes
	z.pool.free(z.free)
	z.free = 0

	// get a new buffer
	c := cap(z.buf)
	p := pos - z.start + 1
	if 2*p > c { // if the token is larger than half the buffer, increase the buffer size
		c = 2*c + p
	}
	d := len(z.buf) - z.start
	buf := z.pool.swap(z.buf[:z.start], c)
	copy(buf[:d], z.buf[z.start:]) // copy the left-overs (unfinished token) from the old buffer

	// read in new data for the rest of the buffer
	var n int
	for pos-z.start >= d && z.err == nil {
		n, z.err = z.r.Read(buf[d:cap(buf)])
		d += n
	}
	pos -= z.start
	z.pos -= z.start
	z.start, z.buf = 0, buf[:d]
	if pos >= d {
		return 0
	}
	return z.buf[pos]
}

// Err returns the error returned from io.Reader. It may still return valid bytes for a while though.
func (z *Lexer) Err() error {
	if z.err == io.EOF && z.pos < len(z.buf) {
		return nil
	}
	return z.err
}

// Free frees up bytes of length n from previously shifted tokens.
// Each call to Shift should at one point be followed by a call to Free with a length returned by ShiftLen.
func (z *Lexer) Free(n int) {
	z.free += n
}

// Peek returns the ith byte relative to the end position and possibly does an allocation.
// Peek returns zero when an error has occurred; Err returns the error.
// TODO: inline function
func (z *Lexer) Peek(pos int) byte {
	pos += z.pos
	if uint(pos) < uint(len(z.buf)) { // uint for bounds-check elimination (BCE)
		return z.buf[pos]
	}
	return z.read(pos)
}

// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *Lexer) PeekRune(pos int) (rune, int) {
	// from unicode/utf8
	c := z.Peek(pos)
	if c < 0xC0 {
		return rune(c), 1
	} else if c < 0xE0 {
		return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
	} else if c < 0xF0 {
		return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
	}
	return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
}

// Move advances the position.
func (z *Lexer) Move(n int) {
	z.pos += n
}

// Pos returns a mark to which the position can be rewound.
func (z *Lexer) Pos() int {
	return z.pos - z.start
}

// Rewind rewinds the position to the given position.
func (z *Lexer) Rewind(pos int) {
	z.pos = z.start + pos
}

// Lexeme returns the bytes of the current selection.
func (z *Lexer) Lexeme() []byte {
	return z.buf[z.start:z.pos]
}

// Skip collapses the position to the end of the selection.
func (z *Lexer) Skip() {
	z.start = z.pos
}

// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
// The number of bytes moved since the last call can be obtained through ShiftLen and used in calls to Free.
func (z *Lexer) Shift() []byte {
	if z.pos > len(z.buf) { // make sure we peeked at least as much as we shift
		z.read(z.pos - 1)
	}
	b := z.buf[z.start:z.pos]
	z.start = z.pos
	return b
}

// ShiftLen returns the number of bytes moved since the last call to ShiftLen. This can be used in calls to Free because it takes into account multiple Shifts or Skips.
func (z *Lexer) ShiftLen() int {
	n := z.start - z.prevStart
	z.prevStart = z.start
	return n
}

vendor/github.com/tdewolff/buffer/reader.go (generated, vendored, new file, 39 lines)
@@ -0,0 +1,39 @@
package buffer // import "github.com/tdewolff/buffer"

import "io"

// Reader implements an io.Reader over a byte slice.
type Reader struct {
	buf []byte
	pos int
}

// NewReader returns a new Reader for a given byte slice.
func NewReader(buf []byte) *Reader {
	return &Reader{
		buf: buf,
	}
}

// Read reads bytes into the given byte slice and returns the number of bytes read and an error if one occurred.
func (r *Reader) Read(b []byte) (n int, err error) {
	if len(b) == 0 {
		return 0, nil
	}
	if r.pos >= len(r.buf) {
		return 0, io.EOF
	}
	n = copy(b, r.buf[r.pos:])
	r.pos += n
	return
}

// Bytes returns the underlying byte slice.
func (r *Reader) Bytes() []byte {
	return r.buf
}

// Reset resets the position of the read pointer to the beginning of the underlying byte slice.
func (r *Reader) Reset() {
	r.pos = 0
}

vendor/github.com/tdewolff/buffer/shifter.go (generated, vendored, new file, 144 lines)
@@ -0,0 +1,144 @@
package buffer // import "github.com/tdewolff/buffer"

import "io"

// Shifter is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
type Shifter struct {
	r   io.Reader
	err error
	eof bool

	buf []byte
	pos int
	end int
}

// NewShifter returns a new Shifter for a given io.Reader with a 4kB estimated buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewShifter(r io.Reader) *Shifter {
	return NewShifterSize(r, defaultBufSize)
}

// NewShifterSize returns a new Shifter for a given io.Reader and estimated required buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewShifterSize(r io.Reader, size int) *Shifter {
	// if the reader has the bytes in memory already, use that instead!
	if buffer, ok := r.(interface {
		Bytes() []byte
	}); ok {
		return &Shifter{
			err: io.EOF,
			eof: true,
			buf: buffer.Bytes(),
		}
	}
	z := &Shifter{
		r:   r,
		buf: make([]byte, 0, size),
	}
	z.Peek(0)
	return z
}

// Err returns the error returned from io.Reader. It may still return valid bytes for a while though.
func (z *Shifter) Err() error {
	if z.eof && z.end < len(z.buf) {
		return nil
	}
	return z.err
}

// IsEOF returns true when it has encountered EOF, meaning that it has loaded the last data into memory (i.e. a previously returned byte slice will not be overwritten by Peek).
// Calling IsEOF is faster than checking Err() == io.EOF.
func (z *Shifter) IsEOF() bool {
	return z.eof
}

func (z *Shifter) read(end int) byte {
	if z.err != nil {
		return 0
	}

	// reallocate a new buffer (possibly larger)
	c := cap(z.buf)
	d := len(z.buf) - z.pos
	var buf []byte
	if 2*d > c {
		buf = make([]byte, d, 2*c+end-z.pos)
	} else {
		buf = z.buf[:d]
	}
	copy(buf, z.buf[z.pos:])

	// read in to fill the buffer till capacity
	var n int
	n, z.err = z.r.Read(buf[d:cap(buf)])
	z.eof = (z.err == io.EOF)
	end -= z.pos
	z.end -= z.pos
	z.pos, z.buf = 0, buf[:d+n]
	if n == 0 {
		if z.err == nil {
			z.err = io.EOF
			z.eof = true
		}
		return 0
	}
	return z.buf[end]
}

// Peek returns the ith byte relative to the end position and possibly does an allocation. Calling Peek may invalidate byte slices previously returned by Bytes or Shift, unless IsEOF returns true.
// Peek returns zero when an error has occurred; Err returns the error.
func (z *Shifter) Peek(end int) byte {
	end += z.end
	if end >= len(z.buf) {
		return z.read(end)
	}
	return z.buf[end]
}

// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *Shifter) PeekRune(i int) (rune, int) {
	// from unicode/utf8
	c := z.Peek(i)
	if c < 0xC0 {
		return rune(c), 1
	} else if c < 0xE0 {
		return rune(c&0x1F)<<6 | rune(z.Peek(i+1)&0x3F), 2
	} else if c < 0xF0 {
		return rune(c&0x0F)<<12 | rune(z.Peek(i+1)&0x3F)<<6 | rune(z.Peek(i+2)&0x3F), 3
	}
	return rune(c&0x07)<<18 | rune(z.Peek(i+1)&0x3F)<<12 | rune(z.Peek(i+2)&0x3F)<<6 | rune(z.Peek(i+3)&0x3F), 4
}

// Move advances the end position.
func (z *Shifter) Move(n int) {
	z.end += n
}

// MoveTo sets the end position.
func (z *Shifter) MoveTo(n int) {
	z.end = z.pos + n
}

// Pos returns the end position.
func (z *Shifter) Pos() int {
	return z.end - z.pos
}

// Bytes returns the bytes of the current selection.
func (z *Shifter) Bytes() []byte {
	return z.buf[z.pos:z.end]
}

// Shift returns the bytes of the current selection and collapses the position to the end.
func (z *Shifter) Shift() []byte {
	b := z.buf[z.pos:z.end]
	z.pos = z.end
	return b
}

// Skip collapses the position to the end.
func (z *Shifter) Skip() {
	z.pos = z.end
}

vendor/github.com/tdewolff/buffer/writer.go (generated, vendored, new file, 41 lines)
@@ -0,0 +1,41 @@
package buffer // import "github.com/tdewolff/buffer"

// Writer implements an io.Writer over a byte slice.
type Writer struct {
	buf []byte
}

// NewWriter returns a new Writer for a given byte slice.
func NewWriter(buf []byte) *Writer {
	return &Writer{
		buf: buf,
	}
}

// Write writes bytes from the given byte slice and returns the number of bytes written and an error if one occurred. When err != nil, n == 0.
func (w *Writer) Write(b []byte) (int, error) {
	n := len(b)
	end := len(w.buf)
	if end+n > cap(w.buf) {
		buf := make([]byte, end, 2*cap(w.buf)+n)
		copy(buf, w.buf)
		w.buf = buf
	}
	w.buf = w.buf[:end+n]
	return copy(w.buf[end:], b), nil
}

// Len returns the length of the underlying byte slice.
func (w *Writer) Len() int {
	return len(w.buf)
}

// Bytes returns the underlying byte slice.
func (w *Writer) Bytes() []byte {
	return w.buf
}

// Reset empties and reuses the current buffer. Subsequent writes will overwrite the buffer, so any reference to the underlying slice is invalidated after this call.
func (w *Writer) Reset() {
	w.buf = w.buf[:0]
}

vendor/github.com/tdewolff/minify/LICENSE.md (generated, vendored, new file, 22 lines)
@@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

vendor/github.com/tdewolff/minify/README.md (generated, vendored, new file, 565 lines)
@@ -0,0 +1,565 @@
# Minify <a name="minify"></a> [Build Status](https://travis-ci.org/tdewolff/minify) [GoDoc](http://godoc.org/github.com/tdewolff/minify) [Coverage Status](https://coveralls.io/github/tdewolff/minify?branch=master) [Gitter](https://gitter.im/tdewolff/minify?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

**The preferred stable release is v2. Master has some new changes for SVG that haven't yet endured the test of time; bug reports are appreciated.**

**[Online demo](http://go.tacodewolff.nl/minify) if you need to minify files *now*.**

**[Command line tool](https://github.com/tdewolff/minify/tree/master/cmd/minify) that minifies concurrently and supports watching file changes.**

**[All releases](https://dl.equinox.io/tdewolff/minify/stable) on Equinox for various platforms.**

If this software is useful to you, consider making a [donation](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=27MSRR5UJQQUL)! When a significant amount has been deposited, I will write a much improved JS minifier.

---

Minify is a minifier package written in [Go][1]. It provides HTML5, CSS3, JS, JSON, SVG and XML minifiers and an interface to implement any other minifier. Minification is the process of removing bytes from a file (such as whitespace) without changing its output, thereby shrinking its size and speeding up transmission over the internet and possibly parsing. The implemented minifiers are high performance and streaming, which implies O(n).

The core functionality associates mimetypes with minification functions, allowing embedded resources (like CSS or JS within HTML files) to be minified as well. Users can add new implementations that are triggered based on a mimetype (or pattern), or redirect to an external command (like ClosureCompiler, UglifyCSS, ...).

#### Table of Contents

- [Minify](#minify)
    - [Prologue](#prologue)
    - [Installation](#installation)
    - [API stability](#api-stability)
    - [Testing](#testing)
    - [HTML](#html)
        - [Whitespace removal](#whitespace-removal)
    - [CSS](#css)
    - [JS](#js)
    - [JSON](#json)
    - [SVG](#svg)
    - [XML](#xml)
    - [Usage](#usage)
        - [New](#new)
        - [From reader](#from-reader)
        - [From bytes](#from-bytes)
        - [From string](#from-string)
        - [Custom minifier](#custom-minifier)
        - [Mediatypes](#mediatypes)
    - [Examples](#examples)
        - [Common minifiers](#common-minifiers)
        - [Custom minifier](#custom-minifier-example)
        - [ResponseWriter](#responsewriter)
        - [Templates](#templates)
    - [License](#license)

#### Status

* CSS: **fully implemented**
* HTML: **fully implemented**
* JS: basic JSmin-like implementation
* JSON: **fully implemented**
* SVG: partially implemented; in development
* XML: **fully implemented**

## Prologue
Minifiers or bindings to minifiers exist in almost all programming languages. Some implementations merely use several regular expressions to trim whitespace and comments (even though regex for parsing HTML/XML is ill-advised; for a good read see [Regular Expressions: Now You Have Two Problems](http://blog.codinghorror.com/regular-expressions-now-you-have-two-problems/)). Some implementations are much more profound, such as the [YUI Compressor](http://yui.github.io/yuicompressor/) and [Google Closure Compiler](https://github.com/google/closure-compiler) for JS. As most existing implementations either use Java or JavaScript and don't focus on performance, they are pretty slow. Moreover, loading the whole file into memory is bad for really large files (or impossible for infinite streams).

This minifier proves to be that fast and extensive minifier that can handle HTML and any other filetype it may contain (CSS, JS, ...). It streams the input and output and can minify files concurrently.

## Installation
Run the following command

    go get github.com/tdewolff/minify

or add the following imports and run the project with `go get`
``` go
import (
	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
	"github.com/tdewolff/minify/html"
	"github.com/tdewolff/minify/js"
	"github.com/tdewolff/minify/json"
	"github.com/tdewolff/minify/svg"
	"github.com/tdewolff/minify/xml"
)
```

## API stability
There is no guarantee for absolute stability, but I take issues and bugs seriously and don't take API changes lightly. The library will be maintained in a compatible way unless vital bugs prevent me from doing so. There has been one API change after v1, which added options support, and I took the opportunity to push through some more API clean-up as well. There are no plans whatsoever for future API changes.

- minify-v1.0.0 depends on parse-v1.0.0
- minify-v1.1.0 depends on parse-v1.1.0
- minify-v2.0.0 depends on parse-v2.0.0
- minify-v2.1.0 depends on parse-v2.1.0
- minify-tip will always compile with my other packages on tip

The API differences between v1 and v2 are listed below. If `m := minify.New()` and `w` and `r` are your writer and reader respectively, then **v1** → **v2**:
- `minify.Bytes(m, ...)` → `m.Bytes(...)`
- `minify.String(m, ...)` → `m.String(...)`
- `html.Minify(m, "text/html", w, r)` → `html.Minify(m, w, r, nil)`, also for `css`, `js`, ...
- `css.Minify(m, "text/css;inline=1", w, r)` → `css.Minify(m, w, r, map[string]string{"inline":"1"})`

## Testing
For all subpackages and the imported `parse` and `buffer` packages, test coverage of 100% is pursued. Besides full coverage, the minifiers are [fuzz tested](https://github.com/tdewolff/fuzz) using [github.com/dvyukov/go-fuzz](http://www.github.com/dvyukov/go-fuzz); see [the wiki](https://github.com/tdewolff/minify/wiki) for the most important bugs found by fuzz testing. Furthermore, I am working on adding visual testing to ensure that minification doesn't change anything visually. By using the WebKit browser to render the original and minified pages, we can check whether any pixel is different.

These tests ensure that everything works as intended, that the code does not crash (whatever the input) and that it doesn't change the final result visually. If you still encounter a bug, please report it [here](https://github.com/tdewolff/minify/issues)!

## HTML

HTML (with JS and CSS) minification typically runs at about 40MB/s ~= 140GB/h, depending on the composition of the file.

Website | Original | Minified | Ratio | Time<sup>*</sup>
------- | -------- | -------- | ----- | -----------------------
[Amazon](http://www.amazon.com/) | 463kB | **414kB** | 90% | 10ms
[BBC](http://www.bbc.com/) | 113kB | **96kB** | 85% | 3ms
[StackOverflow](http://stackoverflow.com/) | 201kB | **182kB** | 91% | 5ms
[Wikipedia](http://en.wikipedia.org/wiki/President_of_the_United_States) | 435kB | **410kB** | 94%<sup>**</sup> | 11ms

<sup>*</sup>These times are measured on my home computer, which is an average development machine. The durations vary a lot, but it's important to see that they are in the 10ms range! The benchmark uses all the minifiers and excludes reading from and writing to the file from the measurement.

<sup>**</sup>Is already somewhat minified, so this doesn't reflect the full potential of this minifier.

The HTML5 minifier uses these minifications:

- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip superfluous quotes, or use single/double quotes, whichever requires fewer escapes
- strip default attribute values and attribute boolean values
- strip some empty attributes
- strip unrequired tags (`html`, `head`, `body`, ...)
- strip unrequired end tags (`tr`, `td`, `li`, ... and often `p`)
- strip default protocols (`http:`, `https:` and `javascript:`)
- strip all comments (including conditional comments; old IE versions are not supported anymore by Microsoft)
- shorten `doctype` and `meta` charset
- lowercase tags, attributes and some values to enhance gzip compression

Options (a configuration sketch follows this list):

- `KeepConditionalComments` preserve all IE conditional comments such as `<!--[if IE 6]><![endif]-->` and `<![if IE 6]><![endif]>`, see https://msdn.microsoft.com/en-us/library/ms537512(v=vs.85).aspx#syntax
- `KeepDefaultAttrVals` preserve default attribute values such as `<script type="text/javascript">`
- `KeepDocumentTags` preserve `html`, `head` and `body` tags
- `KeepEndTags` preserve all end tags
- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one
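
A minimal sketch of enabling two of these options (the field names are those listed above; `html.Minifier` is the same options struct used later in the Usage section):

``` go
m := minify.New()
m.Add("text/html", &html.Minifier{
	KeepConditionalComments: true,
	KeepEndTags:             true,
})
```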

After recent benchmarking and profiling it became really fast and minifies pages in the 10ms range, making it viable for on-the-fly minification.

However, be careful when doing on-the-fly minification. Minification typically trims off about 10% and does this at worst at around 20MB/s, so it saves at most roughly 2MB of transfer per second spent minifying. This means on-the-fly minification only pays off for users downloading at less than 2MB/s. This may or may not apply in your situation. Rather use caching!

### Whitespace removal
The whitespace removal mechanism collapses all sequences of whitespace (spaces, newlines, tabs) to a single space. If the sequence contained a newline or carriage return, it will collapse into a newline character instead. It trims all text parts (in between tags) depending on whether they were preceded by a space from a previous piece of text and whether they are followed by a block element or an inline element. In the former case we can omit spaces, while for inline elements whitespace has significance.

Make sure your HTML doesn't depend on whitespace between `block` elements that have been changed to `inline` or `inline-block` elements using CSS. Your layout *should not* depend on those whitespaces, as the minifier will remove them. An example is a menu consisting of multiple `<li>` elements that have `display:inline-block` applied and have whitespace in between them. It is bad practice to rely on whitespace for element positioning anyway!
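
A small sketch of the effect (hypothetical input; the exact output depends on the surrounding markup and the minifier version):

``` go
out, _ := m.String("text/html", "<p>some   text\nhere</p>")
// the run of spaces collapses to a single space, and the whitespace run that
// contained the newline collapses to a newline, e.g.: <p>some text\nhere
```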

## CSS

Minification typically runs at about 25MB/s ~= 90GB/h.

Library | Original | Minified | Ratio | Time<sup>*</sup>
------- | -------- | -------- | ----- | -----------------------
[Bootstrap](http://getbootstrap.com/) | 134kB | **111kB** | 83% | 4ms
[Gumby](http://gumbyframework.com/) | 182kB | **167kB** | 90% | 7ms

<sup>*</sup>The benchmark excludes the time reading from and writing to a file from the measurement.

The CSS minifier will only use safe minifications:

- remove comments and unnecessary whitespace
- remove trailing semicolons
- optimize `margin`, `padding` and `border-width` number of sides
- shorten numbers by removing unnecessary `+` and zeros and rewriting with/without exponent
- remove dimension and percentage for zero values
- remove quotes for URLs
- remove quotes for font families and make them lowercase
- rewrite hex colors to/from color names, or to three-digit hex
- rewrite `rgb(`, `rgba(`, `hsl(` and `hsla(` colors to hex or name
- replace `normal` and `bold` by numbers for `font-weight` and `font`
- replace `none` → `0` for `border`, `background` and `outline`
- lowercase all identifiers except classes, IDs and URLs to enhance gzip compression
- shorten MS alpha function
- rewrite data URIs with base64 or ASCII, whichever is shorter
- call the minifier for data URI mediatypes, so you can compress embedded SVG files if you have that minifier attached

It purposely does not use the following techniques:

- (partially) merge rulesets
- (partially) split rulesets
- collapse multiple declarations when the main declaration is defined within a ruleset (don't put `font-weight` within an already existing `font`; too complex)
- remove overwritten properties in a ruleset (this does not always overwrite, for example with `!important`)
- rewrite properties into one ruleset if possible (like `margin-top`, `margin-right`, `margin-bottom` and `margin-left` → `margin`)
- put the nested ID selector at the front (`body > div#elem p` → `#elem p`)
- rewrite attribute selectors for IDs and classes (`div[id=a]` → `div#a`)
- put a space after pseudo-selectors (IE6 is old, move on!)

It's great that so many other tools make comparison tables: [CSS Minifier Comparison](http://www.codenothing.com/benchmarks/css-compressor-3.0/full.html), [CSS minifiers comparison](http://www.phpied.com/css-minifiers-comparison/) and [CleanCSS tests](http://goalsmashers.github.io/css-minification-benchmark/). From the last link, this CSS minifier is almost without doubt the fastest and has near-perfect minification rates. It falls short only with the purposely unimplemented and often unsafe techniques, so that's fine.

Options (an example of setting one follows below):

- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming
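
For instance (a sketch; a `css.Minifier` struct with the `Decimals` field listed above is assumed):

``` go
m.Add("text/css", &css.Minifier{Decimals: 2}) // keep at most two decimals
```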

## JS

The JS minifier is pretty basic. It removes comments, whitespace and line breaks whenever it can. It employs all the rules that [JSMin](http://www.crockford.com/javascript/jsmin.html) does too, but has additional improvements. For example, the prefix-postfix bug is fixed.

Minification typically runs at about 50MB/s ~= 180GB/h. Common speeds of PHP and JS implementations are about 100-300kB/s (see [Uglify2](http://lisperator.net/uglifyjs/), [Adventures in PHP web asset minimization](https://www.happyassassin.net/2014/12/29/adventures-in-php-web-asset-minimization/)).

Library | Original | Minified | Ratio | Time<sup>*</sup>
------- | -------- | -------- | ----- | -----------------------
[ACE](https://github.com/ajaxorg/ace-builds) | 630kB | **442kB** | 70% | 12ms
[jQuery](http://jquery.com/download/) | 242kB | **130kB** | 54% | 5ms
[jQuery UI](http://jqueryui.com/download/) | 459kB | **300kB** | 65% | 10ms
[Moment](http://momentjs.com/) | 97kB | **51kB** | 52% | 2ms

<sup>*</sup>The benchmark excludes the time reading from and writing to a file from the measurement.

TODO:
- shorten local variable / function parameter names
- precise semicolon and newline omission

## JSON

Minification typically runs at about 95MB/s ~= 340GB/h. It shaves off about 15% of filesize for common indented JSON such as that generated by [JSON Generator](http://www.json-generator.com/).

The JSON minifier only removes whitespace, which is the only thing that can be left out.

## SVG

Minification typically runs at about 15MB/s ~= 55GB/h. Performance improvements are due.

The SVG minifier uses these minifications:

- trim and collapse whitespace between all tags
- strip comments, empty `doctype`, XML prelude, `metadata`
- strip the SVG version
- strip CDATA sections wherever possible
- collapse tags with no content to a void tag
- collapse empty container tags (`g`, `svg`, ...)
- minify style tags and attributes with the CSS minifier
- minify colors
- shorten lengths and numbers and remove the default `px` unit
- shorten `path` data
- convert `rect`, `line`, `polygon`, `polyline` to `path`
- use relative or absolute positions in path data, whichever is shorter

TODO:
- convert attributes to a style attribute whenever shorter
- merge path data? (same style and no intersection -- the latter is difficult)
- truncate decimals

Options:

- `Decimals` number of decimals to preserve for numbers, `-1` means no trimming

## XML

Minification typically runs at about 70MB/s ~= 250GB/h.

The XML minifier uses these minifications:

- strip unnecessary whitespace and otherwise collapse it to one space (or newline if it originally contained a newline)
- strip comments
- collapse tags with no content to a void tag
- strip CDATA sections wherever possible

Options:

- `KeepWhitespace` preserve whitespace between inline tags but still collapse multiple whitespace characters into one

## Usage
Any input stream is buffered by the minification functions; this is how the underlying buffer package inherently works to ensure high performance. The output stream, however, is not buffered. It is wise to preallocate a buffer as big as the input to which the output is written, or otherwise use `bufio` to buffer to a streaming writer (see the sketch below).
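
A minimal sketch of buffering the output side with the standard library (the output filename is hypothetical):

``` go
f, err := os.Create("out.html")
if err != nil {
	panic(err)
}
defer f.Close()

w := bufio.NewWriter(f) // buffer the otherwise unbuffered output stream
defer w.Flush()

if err := m.Minify("text/html", w, os.Stdin); err != nil {
	panic(err)
}
```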

### New
Retrieve a minifier struct which holds a map of mediatype → minifier functions.
``` go
m := minify.New()
```

The following loads all provided minifiers.
``` go
m := minify.New()
m.AddFunc("text/css", css.Minify)
m.AddFunc("text/html", html.Minify)
m.AddFunc("text/javascript", js.Minify)
m.AddFunc("image/svg+xml", svg.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)
```

You can set options for several minifiers.
``` go
m.Add("text/html", &html.Minifier{
	KeepDefaultAttrVals: true,
	KeepWhitespace:      true,
})
```

### From reader
Minify from an `io.Reader` to an `io.Writer` for a specific mediatype.
``` go
if err := m.Minify(mediatype, w, r); err != nil {
	panic(err)
}
```

### From bytes
Minify from and to a `[]byte` for a specific mediatype.
``` go
b, err = m.Bytes(mediatype, b)
if err != nil {
	panic(err)
}
```

### From string
Minify from and to a `string` for a specific mediatype.
``` go
s, err = m.String(mediatype, s)
if err != nil {
	panic(err)
}
```

### From reader
Get a minifying reader for a specific mediatype.
``` go
mr := m.Reader(mediatype, r)
if _, err := mr.Read(b); err != nil {
	panic(err)
}
```

### From writer
Get a minifying writer for a specific mediatype. It must be closed explicitly because it uses an `io.Pipe` underneath.
``` go
mw := m.Writer(mediatype, w)
if _, err := mw.Write([]byte("input")); err != nil {
	panic(err)
}
if err := mw.Close(); err != nil {
	panic(err)
}
```

### Custom minifier
Add a minifier for a specific mimetype.
``` go
type CustomMinifier struct {
	KeepLineBreaks bool
}

func (c *CustomMinifier) Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
}

m.Add(mimetype, &CustomMinifier{KeepLineBreaks: true})
// or
m.AddRegexp(regexp.MustCompile("/x-custom$"), &CustomMinifier{KeepLineBreaks: true})
```

Add a minify function for a specific mimetype.
``` go
m.AddFunc(mimetype, func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})
m.AddFuncRegexp(regexp.MustCompile("/x-custom$"), func(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	// ...
	return nil
})
```

Add a command `cmd` with arguments `args` for a specific mimetype.
``` go
m.AddCmd(mimetype, exec.Command(cmd, args...))
m.AddCmdRegexp(regexp.MustCompile("/x-custom$"), exec.Command(cmd, args...))
```

### Mediatypes
Using the `params map[string]string` argument, one can pass parameters to the minifier such as those seen in mediatypes (`type/subtype; key1=val1; key2=val2`). Examples are the encoding or charset of the data. Calling `Minify` will split the mimetype and parameters for the minifiers for you, but `MinifyMimetype` can be used if you already have them split up.

Minifiers can also be added using a regular expression. For example, a minifier registered with `image/.*` will match any image mimetype.
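
For example, passing a parameter through the mediatype string (a sketch; `inline=1` is the CSS-specific parameter shown in the API changes above):

``` go
// Minify splits "text/css;inline=1" into the mimetype and its params for the minifier.
if err := m.Minify("text/css;inline=1", w, r); err != nil {
	panic(err)
}
```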

## Examples
### Common minifiers
Basic example that minifies from stdin to stdout and loads the default HTML, CSS and JS minifiers. Optionally, one can enable `java -jar build/compiler.jar` to run for JS (for example the [ClosureCompiler](https://code.google.com/p/closure-compiler/)). Note that reading the file into a buffer first and writing to a pre-allocated buffer would be faster (but would disable streaming).
``` go
package main

import (
	"os"
	"regexp"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/css"
	"github.com/tdewolff/minify/html"
	"github.com/tdewolff/minify/js"
	"github.com/tdewolff/minify/json"
	"github.com/tdewolff/minify/svg"
	"github.com/tdewolff/minify/xml"
)

func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	// Or use the following for better minification of JS but lower speed
	// (requires importing "os/exec"):
	// m.AddCmd("text/javascript", exec.Command("java", "-jar", "build/compiler.jar"))

	if err := m.Minify("text/html", os.Stdout, os.Stdin); err != nil {
		panic(err)
	}
}
```

### <a name="custom-minifier-example"></a> Custom minifier
A custom minifier example that implements the minifier function interface. Within a custom minifier, it is possible to call any other minifier function (through `m *minify.M`) recursively when dealing with embedded resources.
``` go
package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"

	"github.com/tdewolff/minify"
)

func main() {
	m := minify.New()
	m.AddFunc("text/plain", func(m *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
		// remove newlines and spaces
		rb := bufio.NewReader(r)
		for {
			line, err := rb.ReadString('\n')
			if err != nil && err != io.EOF {
				return err
			}
			if _, errws := io.WriteString(w, strings.Replace(line, " ", "", -1)); errws != nil {
				return errws
			}
			if err == io.EOF {
				break
			}
		}
		return nil
	})

	in := "Because my coffee was too cold, I heated it in the microwave."
	out, err := m.String("text/plain", in)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Output: Becausemycoffeewastoocold,Iheateditinthemicrowave.
}
```

### ResponseWriter
#### Middleware
``` go
func main() {
	m := minify.New()
	m.AddFunc("text/css", css.Minify)
	m.AddFunc("text/html", html.Minify)
	m.AddFunc("text/javascript", js.Minify)
	m.AddFunc("image/svg+xml", svg.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)
	m.AddFuncRegexp(regexp.MustCompile("[/+]xml$"), xml.Minify)

	http.Handle("/", m.Middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, path.Join("www", r.URL.Path))
	})))
}
```

#### ResponseWriter
``` go
func Serve(w http.ResponseWriter, r *http.Request) {
	mw := m.ResponseWriter(w, r)
	defer mw.Close()
	w = mw

	http.ServeFile(w, r, path.Join("www", r.URL.Path))
}
```

#### Custom response writer
ResponseWriter example which returns a ResponseWriter that minifies the content and then writes to the original ResponseWriter. Any write after applying this filter will be minified.
``` go
type MinifyResponseWriter struct {
	http.ResponseWriter
	io.WriteCloser
}

func (m MinifyResponseWriter) Write(b []byte) (int, error) {
	return m.WriteCloser.Write(b)
}

// MinifyFilter wraps a ResponseWriter; the result must be closed explicitly by the calling site.
func MinifyFilter(mediatype string, res http.ResponseWriter) MinifyResponseWriter {
	m := minify.New()
	// add minifiers

	mw := m.Writer(mediatype, res)
	return MinifyResponseWriter{res, mw}
}
```

``` go
// Usage
func(w http.ResponseWriter, req *http.Request) {
	mw := MinifyFilter("text/html", w)
	if _, err := io.WriteString(mw, "<p class=\"message\"> This HTTP response will be minified. </p>"); err != nil {
		panic(err)
	}
	if err := mw.Close(); err != nil {
		panic(err)
	}
	// Output: <p class=message>This HTTP response will be minified.
}
```

### Templates

Here's an example of a replacement for `template.ParseFiles` from `html/template` that automatically minifies each template before parsing it.

Be aware that minifying templates will work in most cases, but not all. Because the HTML minifier only works for valid HTML5, your template must itself be valid HTML5. Template tags are parsed as regular text by the minifier.

``` go
func compileTemplates(filenames ...string) (*template.Template, error) {
	m := minify.New()
	m.AddFunc("text/html", html.Minify)

	var tmpl *template.Template
	for _, filename := range filenames {
		name := filepath.Base(filename)
		if tmpl == nil {
			tmpl = template.New(name)
		} else {
			tmpl = tmpl.New(name)
		}

		b, err := ioutil.ReadFile(filename)
		if err != nil {
			return nil, err
		}

		mb, err := m.Bytes("text/html", b)
		if err != nil {
			return nil, err
		}
		if _, err := tmpl.Parse(string(mb)); err != nil {
			return nil, err
		}
	}
	return tmpl, nil
}
```

Example usage:

``` go
templates := template.Must(compileTemplates("view.html", "home.html"))
```

## License
Released under the [MIT license](LICENSE.md).

[1]: http://golang.org/ "Go Language"

vendor/github.com/tdewolff/minify/common.go (generated, vendored, new file, 339 lines)
@@ -0,0 +1,339 @@
|
||||
package minify // import "github.com/tdewolff/minify"
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"net/url"
|
||||
|
||||
"github.com/tdewolff/parse"
|
||||
"github.com/tdewolff/strconv"
|
||||
)
|
||||
|
||||
// Epsilon is the closest number to zero that is not considered to be zero.
|
||||
var Epsilon = 0.00001
|
||||
|
||||
// ContentType minifies a given mediatype by removing all whitespace.
|
||||
func ContentType(b []byte) []byte {
|
||||
j := 0
|
||||
start := 0
|
||||
inString := false
|
||||
for i, c := range b {
|
||||
if !inString && parse.IsWhitespace(c) {
|
||||
if start != 0 {
|
||||
j += copy(b[j:], b[start:i])
|
||||
} else {
|
||||
j += i
|
||||
}
|
||||
start = i + 1
|
||||
} else if c == '"' {
|
||||
inString = !inString
|
||||
}
|
||||
}
|
||||
if start != 0 {
|
||||
j += copy(b[j:], b[start:])
|
||||
return parse.ToLower(b[:j])
|
||||
}
|
||||
return parse.ToLower(b)
|
||||
}
|
||||
|
||||
// DataURI minifies a data URI and calls a minifier by the specified mediatype. Specifications: https://www.ietf.org/rfc/rfc2397.txt.
|
||||
func DataURI(m *M, dataURI []byte) []byte {
|
||||
if mediatype, data, err := parse.DataURI(dataURI); err == nil {
|
||||
dataURI, _ = m.Bytes(string(mediatype), data)
|
||||
base64Len := len(";base64") + base64.StdEncoding.EncodedLen(len(dataURI))
|
||||
asciiLen := len(dataURI)
|
||||
for _, c := range dataURI {
|
||||
if 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-' || c == '_' || c == '.' || c == '~' || c == ' ' {
|
||||
asciiLen++
|
||||
} else {
|
||||
asciiLen += 2
|
||||
}
|
||||
if asciiLen > base64Len {
|
||||
break
|
||||
}
|
||||
}
|
||||
if asciiLen > base64Len {
|
||||
encoded := make([]byte, base64Len-len(";base64"))
|
||||
base64.StdEncoding.Encode(encoded, dataURI)
|
||||
dataURI = encoded
|
||||
mediatype = append(mediatype, []byte(";base64")...)
|
||||
} else {
|
||||
dataURI = []byte(url.QueryEscape(string(dataURI)))
|
||||
dataURI = bytes.Replace(dataURI, []byte("\""), []byte("\\\""), -1)
|
||||
}
|
||||
if len("text/plain") <= len(mediatype) && parse.EqualFold(mediatype[:len("text/plain")], []byte("text/plain")) {
|
||||
mediatype = mediatype[len("text/plain"):]
|
||||
}
|
||||
for i := 0; i+len(";charset=us-ascii") <= len(mediatype); i++ {
|
||||
// must start with semicolon and be followed by end of mediatype or semicolon
|
||||
if mediatype[i] == ';' && parse.EqualFold(mediatype[i+1:i+len(";charset=us-ascii")], []byte("charset=us-ascii")) && (i+len(";charset=us-ascii") >= len(mediatype) || mediatype[i+len(";charset=us-ascii")] == ';') {
|
||||
				mediatype = append(mediatype[:i], mediatype[i+len(";charset=us-ascii"):]...)
				break
			}
		}
		dataURI = append(append(append([]byte("data:"), mediatype...), ','), dataURI...)
	}
	return dataURI
}

const MaxInt = int(^uint(0) >> 1)
const MinInt = -MaxInt - 1

// Number minifies a given byte slice containing a number (see parse.Number) and removes superfluous characters.
func Number(num []byte, prec int) []byte {
	// omit first + and register mantissa start and end, whether it's negative and the exponent
	neg := false
	start := 0
	dot := -1
	end := len(num)
	origExp := 0
	if 0 < end && (num[0] == '+' || num[0] == '-') {
		if num[0] == '-' {
			neg = true
		}
		start++
	}
	for i, c := range num[start:] {
		if c == '.' {
			dot = start + i
		} else if c == 'e' || c == 'E' {
			end = start + i
			i += start + 1
			if i < len(num) && num[i] == '+' {
				i++
			}
			if tmpOrigExp, n := strconv.ParseInt(num[i:]); n > 0 && tmpOrigExp >= int64(MinInt) && tmpOrigExp <= int64(MaxInt) {
				// range checks for when int is 32 bit
				origExp = int(tmpOrigExp)
			} else {
				return num
			}
			break
		}
	}
	if dot == -1 {
		dot = end
	}

	// trim leading zeros but leave at least one digit
	for start < end-1 && num[start] == '0' {
		start++
	}
	// trim trailing zeros
	i := end - 1
	for ; i > dot; i-- {
		if num[i] != '0' {
			end = i + 1
			break
		}
	}
	if i == dot {
		end = dot
		if start == end {
			num[start] = '0'
			return num[start : start+1]
		}
	} else if start == end-1 && num[start] == '0' {
		return num[start:end]
	}

	// n is the number of significant digits
	// normExp would be the exponent if it were normalised (0.1 <= f < 1)
	n := 0
	normExp := 0
	if dot == start {
		for i = dot + 1; i < end; i++ {
			if num[i] != '0' {
				n = end - i
				normExp = dot - i + 1
				break
			}
		}
	} else if dot == end {
		normExp = end - start
		for i = end - 1; i >= start; i-- {
			if num[i] != '0' {
				n = i + 1 - start
				end = i + 1
				break
			}
		}
	} else {
		n = end - start - 1
		normExp = dot - start
	}

	if origExp < 0 && (normExp < MinInt-origExp || normExp-n < MinInt-origExp) || origExp > 0 && (normExp > MaxInt-origExp || normExp-n > MaxInt-origExp) {
		return num
	}
	normExp += origExp

	// intExp would be the exponent if it were an integer
	intExp := normExp - n
	lenIntExp := 1
	if intExp <= -10 || intExp >= 10 {
		lenIntExp = strconv.LenInt(int64(intExp))
	}

	// there are three cases to consider when printing the number
	// case 1: without decimals and with an exponent (large numbers)
	// case 2: with decimals and without an exponent (around zero)
	// case 3: without decimals and with a negative exponent (small numbers)
	if normExp >= n {
		// case 1
		if dot < end {
			if dot == start {
				start = end - n
			} else {
				// TODO: copy the other part if shorter?
				copy(num[dot:], num[dot+1:end])
				end--
			}
		}
		if normExp >= n+3 {
			num[end] = 'e'
			end++
			for i := end + lenIntExp - 1; i >= end; i-- {
				num[i] = byte(intExp%10) + '0'
				intExp /= 10
			}
			end += lenIntExp
		} else if normExp == n+2 {
			num[end] = '0'
			num[end+1] = '0'
			end += 2
		} else if normExp == n+1 {
			num[end] = '0'
			end++
		}
	} else if normExp >= -lenIntExp-1 {
		// case 2
		zeroes := -normExp
		newDot := 0
		if zeroes > 0 {
			// dot placed at the front and add zeroes
			newDot = end - n - zeroes - 1
			if newDot != dot {
				d := start - newDot
				if d > 0 {
					if dot < end {
						// copy original digits behind the dot backwards
						copy(num[dot+1+d:], num[dot+1:end])
						if dot > start {
							// copy original digits before the dot backwards
							copy(num[start+d+1:], num[start:dot])
						}
					} else if dot > start {
						// copy original digits before the dot backwards
						copy(num[start+d:], num[start:dot])
					}
					newDot = start
					end += d
				} else {
					start += -d
				}
				num[newDot] = '.'
				for i := 0; i < zeroes; i++ {
					num[newDot+1+i] = '0'
				}
			}
		} else {
			// placed in the middle
			if dot == start {
				// TODO: try if placing at the end reduces copying
				// when there are zeroes after the dot
				dot = end - n - 1
				start = dot
			} else if dot >= end {
				// TODO: try if placing at the start reduces copying
				// when input has no dot in it
				dot = end
				end++
			}
			newDot = start + normExp
			if newDot > dot {
				// copy digits forwards
				copy(num[dot:], num[dot+1:newDot+1])
			} else if newDot < dot {
				// copy digits backwards
				copy(num[newDot+1:], num[newDot:dot])
			}
			num[newDot] = '.'
		}

		// apply precision
		dot = newDot
		if prec > -1 && dot+1+prec < end {
			end = dot + 1 + prec
			inc := num[end] >= '5'
			if inc || num[end-1] == '0' {
				for i := end - 1; i > start; i-- {
					if i == dot {
						end--
					} else if inc {
						if num[i] == '9' {
							if i > dot {
								end--
							} else {
								num[i] = '0'
							}
						} else {
							num[i]++
							inc = false
							break
						}
					} else if i > dot && num[i] == '0' {
						end--
					}
				}
			}
			if dot == start && end == start+1 {
				if inc {
					num[start] = '1'
				} else {
					num[start] = '0'
				}
			} else {
				if dot+1 == end {
					end--
				}
				if inc {
					if num[start] == '9' {
						num[start] = '0'
						copy(num[start+1:], num[start:end])
						end++
						num[start] = '1'
					} else {
						num[start]++
					}
				}
			}
		}
	} else {
		// case 3
		if dot < end {
			if dot == start {
				copy(num[start:], num[end-n:end])
				end = start + n
			} else {
				copy(num[dot:], num[dot+1:end])
				end--
			}
		}
		num[end] = 'e'
		num[end+1] = '-'
		end += 2
		intExp = -intExp
		for i := end + lenIntExp - 1; i >= end; i-- {
			num[i] = byte(intExp%10) + '0'
			intExp /= 10
		}
		end += lenIntExp
	}

	if neg {
		start--
		num[start] = '-'
	}
	return num[start:end]
}
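To make the three cases above concrete, here is a minimal, hedged usage sketch. It assumes this function belongs to the `minify` package at `github.com/tdewolff/minify`, as the surrounding commit suggests; the commented outputs follow from tracing the trimming and exponent rules above.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/minify"
)

func main() {
	// Number rewrites its argument in place, so pass a scratch copy.
	// The leading + is dropped and surrounding zeros are trimmed (case 2).
	fmt.Println(string(minify.Number([]byte("+0.0500"), -1))) // .05

	// Three or more trailing zeros collapse into exponent notation (case 1).
	fmt.Println(string(minify.Number([]byte("1000"), -1))) // 1e3
}
```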
58
vendor/github.com/tdewolff/minify/json/json.go
generated
vendored
Normal file
@ -0,0 +1,58 @@
// Package json minifies JSON following the specifications at http://json.org/.
package json // import "github.com/tdewolff/minify/json"

import (
	"io"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/parse/json"
)

var (
	commaBytes = []byte(",")
	colonBytes = []byte(":")
)

////////////////////////////////////////////////////////////////

// Minifier is a JSON minifier.
type Minifier struct{}

// Minify minifies JSON data; it reads from r and writes to w.
func Minify(m *minify.M, w io.Writer, r io.Reader, params map[string]string) error {
	return (&Minifier{}).Minify(m, w, r, params)
}

// Minify minifies JSON data; it reads from r and writes to w.
func (o *Minifier) Minify(_ *minify.M, w io.Writer, r io.Reader, _ map[string]string) error {
	skipComma := true

	p := json.NewParser(r)
	for {
		state := p.State()
		gt, text := p.Next()
		if gt == json.ErrorGrammar {
			if p.Err() != io.EOF {
				return p.Err()
			}
			return nil
		}

		if !skipComma && gt != json.EndObjectGrammar && gt != json.EndArrayGrammar {
			if state == json.ObjectKeyState || state == json.ArrayState {
				if _, err := w.Write(commaBytes); err != nil {
					return err
				}
			} else if state == json.ObjectValueState {
				if _, err := w.Write(colonBytes); err != nil {
					return err
				}
			}
		}
		skipComma = gt == json.StartObjectGrammar || gt == json.StartArrayGrammar

		if _, err := w.Write(text); err != nil {
			return err
		}
	}
}
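A hedged usage sketch tying this minifier to the `minify.M` registry defined later in this commit; the mimetype string is only an illustrative choice.

``` go
package main

import (
	"fmt"
	"log"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/json"
)

func main() {
	m := minify.New()
	m.AddFunc("application/json", json.Minify)

	// whitespace between tokens is dropped; commas and colons are re-emitted
	out, err := m.String("application/json", `{ "a" : [ 1, 2 ] }`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out) // {"a":[1,2]}
}
```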
272
vendor/github.com/tdewolff/minify/minify.go
generated
vendored
Normal file
@ -0,0 +1,272 @@
// Package minify relates MIME type to minifiers. Several minifiers are provided in the subpackages.
package minify // import "github.com/tdewolff/minify"

import (
	"errors"
	"io"
	"mime"
	"net/http"
	"net/url"
	"os/exec"
	"path"
	"regexp"
	"sync"

	"github.com/tdewolff/buffer"
	"github.com/tdewolff/parse"
)

// ErrNotExist is returned when no minifier exists for a given mimetype.
var ErrNotExist = errors.New("minifier does not exist for mimetype")

////////////////////////////////////////////////////////////////

// MinifierFunc is a function that implements Minifier.
type MinifierFunc func(*M, io.Writer, io.Reader, map[string]string) error

// Minify calls f(m, w, r, params).
func (f MinifierFunc) Minify(m *M, w io.Writer, r io.Reader, params map[string]string) error {
	return f(m, w, r, params)
}

// Minifier is the interface for minifiers.
// The *M parameter is used for minifying embedded resources, such as JS within HTML.
type Minifier interface {
	Minify(*M, io.Writer, io.Reader, map[string]string) error
}

////////////////////////////////////////////////////////////////

type patternMinifier struct {
	pattern *regexp.Regexp
	Minifier
}

type cmdMinifier struct {
	cmd *exec.Cmd
}

func (c *cmdMinifier) Minify(_ *M, w io.Writer, r io.Reader, _ map[string]string) error {
	cmd := &exec.Cmd{}
	*cmd = *c.cmd // concurrency safety
	cmd.Stdout = w
	cmd.Stdin = r
	return cmd.Run()
}

////////////////////////////////////////////////////////////////

// M holds a map of mimetype => function to allow recursive minifier calls of the minifier functions.
type M struct {
	literal map[string]Minifier
	pattern []patternMinifier

	URL *url.URL
}

// New returns a new M.
func New() *M {
	return &M{
		map[string]Minifier{},
		[]patternMinifier{},
		nil,
	}
}

// Add adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) Add(mimetype string, minifier Minifier) {
	m.literal[mimetype] = minifier
}

// AddFunc adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFunc(mimetype string, minifier MinifierFunc) {
	m.literal[mimetype] = minifier
}

// AddRegexp adds a minifier to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddRegexp(pattern *regexp.Regexp, minifier Minifier) {
	m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}

// AddFuncRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use).
func (m *M) AddFuncRegexp(pattern *regexp.Regexp, minifier MinifierFunc) {
	m.pattern = append(m.pattern, patternMinifier{pattern, minifier})
}

// AddCmd adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype.
func (m *M) AddCmd(mimetype string, cmd *exec.Cmd) {
	m.literal[mimetype] = &cmdMinifier{cmd}
}

// AddCmdRegexp adds a minify function to the mimetype => function map (unsafe for concurrent use) that executes a command to process the minification.
// It allows the use of external tools like ClosureCompiler, UglifyCSS, etc. for a specific mimetype regular expression.
func (m *M) AddCmdRegexp(pattern *regexp.Regexp, cmd *exec.Cmd) {
	m.pattern = append(m.pattern, patternMinifier{pattern, &cmdMinifier{cmd}})
}

// Match returns the pattern and minifier that gets matched with the mediatype.
// It returns nil when no matching minifier exists.
// It has the same matching algorithm as Minify.
func (m *M) Match(mediatype string) (string, map[string]string, MinifierFunc) {
	mimetype, params := parse.Mediatype([]byte(mediatype))
	if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
		return string(mimetype), params, minifier.Minify
	} else {
		for _, minifier := range m.pattern {
			if minifier.pattern.Match(mimetype) {
				return minifier.pattern.String(), params, minifier.Minify
			}
		}
	}
	return string(mimetype), params, nil
}

// Minify minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// An error is returned when no such mimetype exists (ErrNotExist) or when an error occurred in the minifier function.
// Mediatype may take the form of 'text/plain', 'text/*', '*/*' or 'text/plain; charset=UTF-8; version=2.0'.
func (m *M) Minify(mediatype string, w io.Writer, r io.Reader) error {
	mimetype, params := parse.Mediatype([]byte(mediatype))
	return m.MinifyMimetype(mimetype, w, r, params)
}

// MinifyMimetype minifies the content of a Reader and writes it to a Writer (safe for concurrent use).
// It is a lower level version of Minify and requires the mediatype to be split up into mimetype and parameters.
// It is mostly used internally by minifiers because it is faster (no need to convert a byte-slice to string and vice versa).
func (m *M) MinifyMimetype(mimetype []byte, w io.Writer, r io.Reader, params map[string]string) error {
	err := ErrNotExist
	if minifier, ok := m.literal[string(mimetype)]; ok { // string conversion is optimized away
		err = minifier.Minify(m, w, r, params)
	} else {
		for _, minifier := range m.pattern {
			if minifier.pattern.Match(mimetype) {
				err = minifier.Minify(m, w, r, params)
				break
			}
		}
	}
	return err
}

// Bytes minifies an array of bytes (safe for concurrent use). When an error occurs it returns the original array and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) Bytes(mediatype string, v []byte) ([]byte, error) {
	out := buffer.NewWriter(make([]byte, 0, len(v)))
	if err := m.Minify(mediatype, out, buffer.NewReader(v)); err != nil {
		return v, err
	}
	return out.Bytes(), nil
}

// String minifies a string (safe for concurrent use). When an error occurs it returns the original string and the error.
// It returns an error when no such mimetype exists (ErrNotExist) or any error occurred in the minifier function.
func (m *M) String(mediatype string, v string) (string, error) {
	out := buffer.NewWriter(make([]byte, 0, len(v)))
	if err := m.Minify(mediatype, out, buffer.NewReader([]byte(v))); err != nil {
		return v, err
	}
	return string(out.Bytes()), nil
}

// Reader wraps a Reader interface and minifies the stream.
// Errors from the minifier are returned by the reader.
func (m *M) Reader(mediatype string, r io.Reader) io.Reader {
	pr, pw := io.Pipe()
	go func() {
		if err := m.Minify(mediatype, pw, r); err != nil {
			pw.CloseWithError(err)
		} else {
			pw.Close()
		}
	}()
	return pr
}

// minifyWriter makes sure that errors from the minifier are passed down through Close (can be blocking).
type minifyWriter struct {
	pw  *io.PipeWriter
	wg  sync.WaitGroup
	err error
}

// Write intercepts any writes to the writer.
func (w *minifyWriter) Write(b []byte) (int, error) {
	return w.pw.Write(b)
}

// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyWriter) Close() error {
	w.pw.Close()
	w.wg.Wait()
	return w.err
}

// Writer wraps a Writer interface and minifies the stream.
// Errors from the minifier are returned by Close on the writer.
// The writer must be closed explicitly.
func (m *M) Writer(mediatype string, w io.Writer) *minifyWriter {
	pr, pw := io.Pipe()
	mw := &minifyWriter{pw, sync.WaitGroup{}, nil}
	mw.wg.Add(1)
	go func() {
		defer mw.wg.Done()

		if err := m.Minify(mediatype, w, pr); err != nil {
			io.Copy(w, pr)
			mw.err = err
		}
		pr.Close()
	}()
	return mw
}

// minifyResponseWriter wraps an http.ResponseWriter and makes sure that errors from the minifier are passed down through Close (can be blocking).
// All writes to the response writer are intercepted and minified on the fly.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
type minifyResponseWriter struct {
	http.ResponseWriter

	writer    *minifyWriter
	m         *M
	mediatype string
}

// Write intercepts any writes to the response writer.
// The first write will extract the Content-Type as the mediatype. Otherwise it falls back to the RequestURI extension.
func (w *minifyResponseWriter) Write(b []byte) (int, error) {
	if w.writer == nil {
		// first write
		if mediatype := w.ResponseWriter.Header().Get("Content-Type"); mediatype != "" {
			w.mediatype = mediatype
		}
		w.writer = w.m.Writer(w.mediatype, w.ResponseWriter)
	}
	return w.writer.Write(b)
}

// Close must be called when writing has finished. It returns the error from the minifier.
func (w *minifyResponseWriter) Close() error {
	if w.writer != nil {
		return w.writer.Close()
	}
	return nil
}

// ResponseWriter minifies any writes to the http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) ResponseWriter(w http.ResponseWriter, r *http.Request) *minifyResponseWriter {
	mediatype := mime.TypeByExtension(path.Ext(r.RequestURI))
	return &minifyResponseWriter{w, nil, m, mediatype}
}

// Middleware provides a middleware function that minifies content on the fly by intercepting writes to http.ResponseWriter.
// http.ResponseWriter loses all functionality such as Pusher, Hijacker, Flusher, ...
// Minification might be slower than just sending the original file! Caching is advised.
func (m *M) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		mw := m.ResponseWriter(w, r)
		next.ServeHTTP(mw, r)
		mw.Close()
	})
}
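A hedged sketch of wiring `Middleware` into a file server. The directory and port are placeholders, and the regular expression is one reasonable way to cover `application/json` and friends, not a recommendation from this package.

``` go
package main

import (
	"log"
	"net/http"
	"regexp"

	"github.com/tdewolff/minify"
	"github.com/tdewolff/minify/json"
)

func main() {
	m := minify.New()
	// register the JSON minifier for any mimetype ending in "json"
	m.AddFuncRegexp(regexp.MustCompile("[/+]json$"), json.Minify)

	fs := http.FileServer(http.Dir("./www")) // hypothetical document root
	// every response passes through m; the mediatype comes from Content-Type
	// or, failing that, from the request URI extension
	log.Fatal(http.ListenAndServe(":8080", m.Middleware(fs)))
}
```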
22
vendor/github.com/tdewolff/parse/LICENSE.md
generated
vendored
Normal file
@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
38
vendor/github.com/tdewolff/parse/README.md
generated
vendored
Normal file
@ -0,0 +1,38 @@
# Parse [](https://travis-ci.org/tdewolff/parse) [](http://godoc.org/github.com/tdewolff/parse) [](https://coveralls.io/github/tdewolff/parse?branch=master)

This package contains several lexers and parsers written in [Go][1]. All subpackages are built to be streaming, high performance and to be in accordance with the official (latest) specifications.

The lexers are implemented using `buffer.Lexer` in https://github.com/tdewolff/buffer and the parsers work on top of the lexers. Some subpackages have hashes defined (using [Hasher](https://github.com/tdewolff/hasher)) that speed up common byte-slice comparisons.

## CSS
This package is a CSS3 lexer and parser. Both follow the specification at [CSS Syntax Module Level 3](http://www.w3.org/TR/css-syntax-3/). The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level `Next` function can be used for stream parsing to return grammar units until the EOF.

[See README here](https://github.com/tdewolff/parse/tree/master/css).

## HTML
This package is an HTML5 lexer. It follows the specification at [The HTML syntax](http://www.w3.org/TR/html5/syntax.html). The lexer takes an io.Reader and converts it into tokens until the EOF.

[See README here](https://github.com/tdewolff/parse/tree/master/html).

## JS
This package is a JS lexer (ECMA-262, edition 6.0). It follows the specification at [ECMAScript Language Specification](http://www.ecma-international.org/ecma-262/6.0/). The lexer takes an io.Reader and converts it into tokens until the EOF.

[See README here](https://github.com/tdewolff/parse/tree/master/js).

## JSON
This package is a JSON parser (ECMA-404). It follows the specification at [JSON](http://json.org/). The parser takes an io.Reader and converts it into tokens until the EOF.

[See README here](https://github.com/tdewolff/parse/tree/master/json).

## SVG
This package contains common hashes for SVG1.1 tags and attributes.

## XML
This package is an XML1.0 lexer. It follows the specification at [Extensible Markup Language (XML) 1.0 (Fifth Edition)](http://www.w3.org/TR/xml/). The lexer takes an io.Reader and converts it into tokens until the EOF.

[See README here](https://github.com/tdewolff/parse/tree/master/xml).

## License
Released under the [MIT license](LICENSE.md).

[1]: http://golang.org/ "Go Language"
230
vendor/github.com/tdewolff/parse/common.go
generated
vendored
Normal file
@ -0,0 +1,230 @@
// Package parse contains a collection of parsers for various formats in its subpackages.
package parse // import "github.com/tdewolff/parse"

import (
	"encoding/base64"
	"errors"
	"net/url"
)

// ErrBadDataURI is returned by DataURI when the byte slice does not start with 'data:' or is too short.
var ErrBadDataURI = errors.New("not a data URI")

// Number returns the number of bytes that parse as a number of the regex format (+|-)?([0-9]+(\.[0-9]+)?|\.[0-9]+)((e|E)(+|-)?[0-9]+)?.
func Number(b []byte) int {
	if len(b) == 0 {
		return 0
	}
	i := 0
	if b[i] == '+' || b[i] == '-' {
		i++
		if i >= len(b) {
			return 0
		}
	}
	firstDigit := (b[i] >= '0' && b[i] <= '9')
	if firstDigit {
		i++
		for i < len(b) && b[i] >= '0' && b[i] <= '9' {
			i++
		}
	}
	if i < len(b) && b[i] == '.' {
		i++
		if i < len(b) && b[i] >= '0' && b[i] <= '9' {
			i++
			for i < len(b) && b[i] >= '0' && b[i] <= '9' {
				i++
			}
		} else if firstDigit {
			// . could belong to the next token
			i--
			return i
		} else {
			return 0
		}
	} else if !firstDigit {
		return 0
	}
	iOld := i
	if i < len(b) && (b[i] == 'e' || b[i] == 'E') {
		i++
		if i < len(b) && (b[i] == '+' || b[i] == '-') {
			i++
		}
		if i >= len(b) || b[i] < '0' || b[i] > '9' {
			// e could belong to next token
			return iOld
		}
		for i < len(b) && b[i] >= '0' && b[i] <= '9' {
			i++
		}
	}
	return i
}

// Dimension parses a byte-slice and returns the length of the number and its unit.
func Dimension(b []byte) (int, int) {
	num := Number(b)
	if num == 0 || num == len(b) {
		return num, 0
	} else if b[num] == '%' {
		return num, 1
	} else if b[num] >= 'a' && b[num] <= 'z' || b[num] >= 'A' && b[num] <= 'Z' {
		i := num + 1
		for i < len(b) && (b[i] >= 'a' && b[i] <= 'z' || b[i] >= 'A' && b[i] <= 'Z') {
			i++
		}
		return num, i - num
	}
	return num, 0
}
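A hedged sketch of how Number and Dimension behave on a CSS-style token, using the import path from the clause above; the commented results follow from tracing the loops.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/parse"
)

func main() {
	b := []byte("5.2em")
	n := parse.Number(b) // 3: "5.2" parses as a number
	num, unit := parse.Dimension(b)
	fmt.Println(n, num, unit) // 3 3 2: the unit "em" is two bytes long
}
```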
// Mediatype parses a given mediatype and splits the mimetype from the parameters.
// It works similar to mime.ParseMediaType but is faster.
func Mediatype(b []byte) ([]byte, map[string]string) {
	i := 0
	for i < len(b) && b[i] == ' ' {
		i++
	}
	b = b[i:]
	n := len(b)
	mimetype := b
	var params map[string]string
	for i := 3; i < n; i++ { // mimetype is at least three characters long
		if b[i] == ';' || b[i] == ' ' {
			mimetype = b[:i]
			if b[i] == ' ' {
				i++
				for i < n && b[i] == ' ' {
					i++
				}
				if i < n && b[i] != ';' {
					break
				}
			}
			params = map[string]string{}
			s := string(b)
		PARAM:
			i++
			for i < n && s[i] == ' ' {
				i++
			}
			start := i
			for i < n && s[i] != '=' && s[i] != ';' && s[i] != ' ' {
				i++
			}
			key := s[start:i]
			for i < n && s[i] == ' ' {
				i++
			}
			if i < n && s[i] == '=' {
				i++
				for i < n && s[i] == ' ' {
					i++
				}
				start = i
				for i < n && s[i] != ';' && s[i] != ' ' {
					i++
				}
			} else {
				start = i
			}
			params[key] = s[start:i]
			for i < n && s[i] == ' ' {
				i++
			}
			if i < n && s[i] == ';' {
				goto PARAM
			}
			break
		}
	}
	return mimetype, params
}

// DataURI parses the given data URI and returns the mediatype, the data and an error.
func DataURI(dataURI []byte) ([]byte, []byte, error) {
	if len(dataURI) > 5 && Equal(dataURI[:5], []byte("data:")) {
		dataURI = dataURI[5:]
		inBase64 := false
		var mediatype []byte
		i := 0
		for j := 0; j < len(dataURI); j++ {
			c := dataURI[j]
			if c == '=' || c == ';' || c == ',' {
				if c != '=' && Equal(TrimWhitespace(dataURI[i:j]), []byte("base64")) {
					if len(mediatype) > 0 {
						mediatype = mediatype[:len(mediatype)-1]
					}
					inBase64 = true
					i = j
				} else if c != ',' {
					mediatype = append(append(mediatype, TrimWhitespace(dataURI[i:j])...), c)
					i = j + 1
				} else {
					mediatype = append(mediatype, TrimWhitespace(dataURI[i:j])...)
				}
				if c == ',' {
					if len(mediatype) == 0 || mediatype[0] == ';' {
						mediatype = []byte("text/plain")
					}
					data := dataURI[j+1:]
					if inBase64 {
						decoded := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
						n, err := base64.StdEncoding.Decode(decoded, data)
						if err != nil {
							return nil, nil, err
						}
						data = decoded[:n]
					} else if unescaped, err := url.QueryUnescape(string(data)); err == nil {
						data = []byte(unescaped)
					}
					return mediatype, data, nil
				}
			}
		}
	}
	return nil, nil, ErrBadDataURI
}
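Tracing the loop above on a base64 data URI gives, as a hedged sketch: the `;base64` marker is stripped from the mediatype and the payload is decoded.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/parse"
)

func main() {
	mediatype, data, err := parse.DataURI([]byte("data:text/plain;base64,aGVsbG8="))
	fmt.Println(string(mediatype), string(data), err) // text/plain hello <nil>
}
```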
// QuoteEntity parses the given byte slice and returns the quote that got matched (' or ") and its entity length.
func QuoteEntity(b []byte) (quote byte, n int) {
	if len(b) < 5 || b[0] != '&' {
		return 0, 0
	}
	if b[1] == '#' {
		if b[2] == 'x' {
			i := 3
			for i < len(b) && b[i] == '0' {
				i++
			}
			if i+2 < len(b) && b[i] == '2' && b[i+2] == ';' {
				if b[i+1] == '2' {
					return '"', i + 3 // &#x22;
				} else if b[i+1] == '7' {
					return '\'', i + 3 // &#x27;
				}
			}
		} else {
			i := 2
			for i < len(b) && b[i] == '0' {
				i++
			}
			if i+2 < len(b) && b[i] == '3' && b[i+2] == ';' {
				if b[i+1] == '4' {
					return '"', i + 3 // &#34;
				} else if b[i+1] == '9' {
					return '\'', i + 3 // &#39;
				}
			}
		}
	} else if len(b) >= 6 && b[5] == ';' {
		if EqualFold(b[1:5], []byte{'q', 'u', 'o', 't'}) {
			return '"', 6 // &quot;
		} else if EqualFold(b[1:5], []byte{'a', 'p', 'o', 's'}) {
			return '\'', 6 // &apos;
		}
	}
	return 0, 0
}
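For instance, a hedged sketch; the numeric form tolerates leading zeros, which is why the loops above skip them.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/parse"
)

func main() {
	q, n := parse.QuoteEntity([]byte("&#34;rest"))
	fmt.Printf("%c %d\n", q, n) // " 5

	q, n = parse.QuoteEntity([]byte("&quot;"))
	fmt.Printf("%c %d\n", q, n) // " 6
}
```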
81
vendor/github.com/tdewolff/parse/json/README.md
generated
vendored
Normal file
@ -0,0 +1,81 @@
# JSON [](http://godoc.org/github.com/tdewolff/parse/json) [](http://gocover.io/github.com/tdewolff/parse/json)

This package is a JSON lexer (ECMA-404) written in [Go][1]. It follows the specification at [JSON](http://json.org/). The lexer takes an io.Reader and converts it into tokens until the EOF.

## Installation
Run the following command

	go get github.com/tdewolff/parse/json

or add the following import and run the project with `go get`

	import "github.com/tdewolff/parse/json"

## Parser
### Usage
The following initializes a new Parser with io.Reader `r`:
``` go
p := json.NewParser(r)
```

To tokenize until EOF or an error, use:
``` go
for {
	gt, text := p.Next()
	switch gt {
	case json.ErrorGrammar:
		// error or EOF set in p.Err()
		return
	// ...
	}
}
```

All grammars:
``` go
ErrorGrammar GrammarType = iota // extra grammar when errors occur
WhitespaceGrammar               // space \t \r \n
LiteralGrammar                  // null true false
NumberGrammar
StringGrammar
StartObjectGrammar // {
EndObjectGrammar   // }
StartArrayGrammar  // [
EndArrayGrammar    // ]
```

### Examples
``` go
package main

import (
	"fmt"
	"io"
	"os"

	"github.com/tdewolff/parse/json"
)

// Tokenize JSON from stdin.
func main() {
	p := json.NewParser(os.Stdin)
	for {
		gt, text := p.Next()
		switch gt {
		case json.ErrorGrammar:
			if p.Err() != io.EOF {
				fmt.Println("Error on line", p.Line(), ":", p.Err())
			}
			return
		case json.LiteralGrammar:
			fmt.Println("Literal", string(text))
		case json.NumberGrammar:
			fmt.Println("Number", string(text))
		// ...
		}
	}
}
```

## License
Released under the [MIT license](https://github.com/tdewolff/parse/blob/master/LICENSE.md).

[1]: http://golang.org/ "Go Language"
317
vendor/github.com/tdewolff/parse/json/parse.go
generated
vendored
Normal file
@ -0,0 +1,317 @@
// Package json is a JSON parser following the specifications at http://json.org/.
package json // import "github.com/tdewolff/parse/json"

import (
	"errors"
	"io"
	"strconv"

	"github.com/tdewolff/buffer"
)

// ErrBadComma is returned when an unexpected comma is encountered.
var ErrBadComma = errors.New("unexpected comma character outside an array or object")

// ErrNoComma is returned when no comma is present between two values.
var ErrNoComma = errors.New("expected comma character or an array or object ending")

// ErrBadObjectKey is returned when the object key is not a quoted string.
var ErrBadObjectKey = errors.New("expected object key to be a quoted string")

// ErrBadObjectDeclaration is returned when the object key is not followed by a colon character.
var ErrBadObjectDeclaration = errors.New("expected colon character after object key")

// ErrBadObjectEnding is returned when an unexpected right brace is encountered.
var ErrBadObjectEnding = errors.New("unexpected right brace character")

// ErrBadArrayEnding is returned when an unexpected right bracket is encountered.
var ErrBadArrayEnding = errors.New("unexpected right bracket character")

////////////////////////////////////////////////////////////////

// GrammarType determines the type of grammar.
type GrammarType uint32

// GrammarType values.
const (
	ErrorGrammar GrammarType = iota // extra grammar when errors occur
	WhitespaceGrammar
	LiteralGrammar
	NumberGrammar
	StringGrammar
	StartObjectGrammar // {
	EndObjectGrammar   // }
	StartArrayGrammar  // [
	EndArrayGrammar    // ]
)

// String returns the string representation of a GrammarType.
func (gt GrammarType) String() string {
	switch gt {
	case ErrorGrammar:
		return "Error"
	case WhitespaceGrammar:
		return "Whitespace"
	case LiteralGrammar:
		return "Literal"
	case NumberGrammar:
		return "Number"
	case StringGrammar:
		return "String"
	case StartObjectGrammar:
		return "StartObject"
	case EndObjectGrammar:
		return "EndObject"
	case StartArrayGrammar:
		return "StartArray"
	case EndArrayGrammar:
		return "EndArray"
	}
	return "Invalid(" + strconv.Itoa(int(gt)) + ")"
}

////////////////////////////////////////////////////////////////

// State determines the current state the parser is in.
type State uint32

// State values.
const (
	ValueState State = iota // extra token when errors occur
	ObjectKeyState
	ObjectValueState
	ArrayState
)

// String returns the string representation of a State.
func (state State) String() string {
	switch state {
	case ValueState:
		return "Value"
	case ObjectKeyState:
		return "ObjectKey"
	case ObjectValueState:
		return "ObjectValue"
	case ArrayState:
		return "Array"
	}
	return "Invalid(" + strconv.Itoa(int(state)) + ")"
}

////////////////////////////////////////////////////////////////

// Parser is the state for the lexer.
type Parser struct {
	r     *buffer.Lexer
	state []State
	err   error

	needComma bool
}

// NewParser returns a new Parser for a given io.Reader.
func NewParser(r io.Reader) *Parser {
	return &Parser{
		r:     buffer.NewLexer(r),
		state: []State{ValueState},
	}
}

// Err returns the error encountered during tokenization; this is often io.EOF, but other errors can be returned as well.
func (p Parser) Err() error {
	err := p.r.Err()
	if err != nil {
		return err
	}
	return p.err
}

// Next returns the next Grammar. It returns ErrorGrammar when an error was encountered. Using Err() one can retrieve the error message.
func (p *Parser) Next() (GrammarType, []byte) {
	p.r.Free(p.r.ShiftLen())

	p.moveWhitespace()
	c := p.r.Peek(0)
	state := p.state[len(p.state)-1]
	if c == ',' {
		if state != ArrayState && state != ObjectKeyState {
			p.err = ErrBadComma
			return ErrorGrammar, nil
		}
		p.r.Move(1)
		p.moveWhitespace()
		p.needComma = false
		c = p.r.Peek(0)
	}
	p.r.Skip()

	if p.needComma && c != '}' && c != ']' && c != 0 {
		p.err = ErrNoComma
		return ErrorGrammar, nil
	} else if c == '{' {
		p.state = append(p.state, ObjectKeyState)
		p.r.Move(1)
		return StartObjectGrammar, p.r.Shift()
	} else if c == '}' {
		if state != ObjectKeyState {
			p.err = ErrBadObjectEnding
			return ErrorGrammar, nil
		}
		p.needComma = true
		p.state = p.state[:len(p.state)-1]
		if p.state[len(p.state)-1] == ObjectValueState {
			p.state[len(p.state)-1] = ObjectKeyState
		}
		p.r.Move(1)
		return EndObjectGrammar, p.r.Shift()
	} else if c == '[' {
		p.state = append(p.state, ArrayState)
		p.r.Move(1)
		return StartArrayGrammar, p.r.Shift()
	} else if c == ']' {
		p.needComma = true
		if state != ArrayState {
			p.err = ErrBadArrayEnding
			return ErrorGrammar, nil
		}
		p.state = p.state[:len(p.state)-1]
		if p.state[len(p.state)-1] == ObjectValueState {
			p.state[len(p.state)-1] = ObjectKeyState
		}
		p.r.Move(1)
		return EndArrayGrammar, p.r.Shift()
	} else if state == ObjectKeyState {
		if c != '"' || !p.consumeStringToken() {
			p.err = ErrBadObjectKey
			return ErrorGrammar, nil
		}
		n := p.r.Pos()
		p.moveWhitespace()
		if c := p.r.Peek(0); c != ':' {
			p.err = ErrBadObjectDeclaration
			return ErrorGrammar, nil
		}
		p.r.Move(1)
		p.state[len(p.state)-1] = ObjectValueState
		return StringGrammar, p.r.Shift()[:n]
	} else {
		p.needComma = true
		if state == ObjectValueState {
			p.state[len(p.state)-1] = ObjectKeyState
		}
		if c == '"' && p.consumeStringToken() {
			return StringGrammar, p.r.Shift()
		} else if p.consumeNumberToken() {
			return NumberGrammar, p.r.Shift()
		} else if p.consumeLiteralToken() {
			return LiteralGrammar, p.r.Shift()
		}
	}
	return ErrorGrammar, nil
}

// State returns the state the parser is currently in (ie. which token is expected).
func (p *Parser) State() State {
	return p.state[len(p.state)-1]
}

////////////////////////////////////////////////////////////////

/*
The following functions follow the specifications at http://json.org/
*/

func (p *Parser) moveWhitespace() {
	for {
		if c := p.r.Peek(0); c != ' ' && c != '\t' && c != '\r' && c != '\n' {
			break
		}
		p.r.Move(1)
	}
}

func (p *Parser) consumeLiteralToken() bool {
	c := p.r.Peek(0)
	if c == 't' && p.r.Peek(1) == 'r' && p.r.Peek(2) == 'u' && p.r.Peek(3) == 'e' {
		p.r.Move(4)
		return true
	} else if c == 'f' && p.r.Peek(1) == 'a' && p.r.Peek(2) == 'l' && p.r.Peek(3) == 's' && p.r.Peek(4) == 'e' {
		p.r.Move(5)
		return true
	} else if c == 'n' && p.r.Peek(1) == 'u' && p.r.Peek(2) == 'l' && p.r.Peek(3) == 'l' {
		p.r.Move(4)
		return true
	}
	return false
}

func (p *Parser) consumeNumberToken() bool {
	mark := p.r.Pos()
	if p.r.Peek(0) == '-' {
		p.r.Move(1)
	}
	c := p.r.Peek(0)
	if c >= '1' && c <= '9' {
		p.r.Move(1)
		for {
			if c := p.r.Peek(0); c < '0' || c > '9' {
				break
			}
			p.r.Move(1)
		}
	} else if c != '0' {
		p.r.Rewind(mark)
		return false
	} else {
		p.r.Move(1) // 0
	}
	if c := p.r.Peek(0); c == '.' {
		p.r.Move(1)
		if c := p.r.Peek(0); c < '0' || c > '9' {
			p.r.Move(-1)
			return true
		}
		for {
			if c := p.r.Peek(0); c < '0' || c > '9' {
				break
			}
			p.r.Move(1)
		}
	}
	mark = p.r.Pos()
	if c := p.r.Peek(0); c == 'e' || c == 'E' {
		p.r.Move(1)
		if c := p.r.Peek(0); c == '+' || c == '-' {
			p.r.Move(1)
		}
		if c := p.r.Peek(0); c < '0' || c > '9' {
			p.r.Rewind(mark)
			return true
		}
		for {
			if c := p.r.Peek(0); c < '0' || c > '9' {
				break
			}
			p.r.Move(1)
		}
	}
	return true
}

func (p *Parser) consumeStringToken() bool {
	// assume to be on "
	p.r.Move(1)
	for {
		c := p.r.Peek(0)
		if c == '"' {
			p.r.Move(1)
			break
		} else if c == '\\' && (p.r.Peek(1) != 0 || p.r.Err() == nil) {
			p.r.Move(1)
		} else if c == 0 {
			return false
		}
		p.r.Move(1)
	}
	return true
}
160
vendor/github.com/tdewolff/parse/util.go
generated
vendored
Normal file
@ -0,0 +1,160 @@
package parse // import "github.com/tdewolff/parse"

// Copy returns a copy of the given byte slice.
func Copy(src []byte) (dst []byte) {
	dst = make([]byte, len(src))
	copy(dst, src)
	return
}

// ToLower converts all characters in the byte slice from A-Z to a-z.
func ToLower(src []byte) []byte {
	for i, c := range src {
		if c >= 'A' && c <= 'Z' {
			src[i] = c + ('a' - 'A')
		}
	}
	return src
}

// Equal returns true when s matches the target.
func Equal(s, target []byte) bool {
	if len(s) != len(target) {
		return false
	}
	for i, c := range target {
		if s[i] != c {
			return false
		}
	}
	return true
}

// EqualFold returns true when s matches case-insensitively the targetLower (which must be lowercase).
func EqualFold(s, targetLower []byte) bool {
	if len(s) != len(targetLower) {
		return false
	}
	for i, c := range targetLower {
		// a byte matches when it is equal, or when c is a lowercase letter and s[i] is its uppercase variant
		if s[i] != c && (c < 'a' || c > 'z' || s[i]+('a'-'A') != c) {
			return false
		}
	}
	return true
}

var whitespaceTable = [256]bool{
	// ASCII
	false, false, false, false, false, false, false, false,
	false, true, true, false, true, true, false, false, // tab, new line, form feed, carriage return
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	true, false, false, false, false, false, false, false, // space
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	// non-ASCII
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,

	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
	false, false, false, false, false, false, false, false,
}

// IsWhitespace returns true for space, \n, \r, \t, \f.
func IsWhitespace(c byte) bool {
	return whitespaceTable[c]
}

// IsAllWhitespace returns true when the entire byte slice consists of space, \n, \r, \t, \f.
func IsAllWhitespace(b []byte) bool {
	for _, c := range b {
		if !IsWhitespace(c) {
			return false
		}
	}
	return true
}

// TrimWhitespace removes any leading and trailing whitespace characters.
func TrimWhitespace(b []byte) []byte {
	n := len(b)
	start := n
	for i := 0; i < n; i++ {
		if !IsWhitespace(b[i]) {
			start = i
			break
		}
	}
	end := n
	for i := n - 1; i >= start; i-- {
		if !IsWhitespace(b[i]) {
			end = i + 1
			break
		}
	}
	return b[start:end]
}

// ReplaceMultipleWhitespace replaces any series of space, \n, \t, \f, \r with a single space or newline (a newline when the series contained a \n or \r).
func ReplaceMultipleWhitespace(b []byte) []byte {
	j := 0
	prevWS := false
	hasNewline := false
	for i, c := range b {
		if IsWhitespace(c) {
			prevWS = true
			if c == '\n' || c == '\r' {
				hasNewline = true
			}
		} else {
			if prevWS {
				prevWS = false
				if hasNewline {
					hasNewline = false
					b[j] = '\n'
				} else {
					b[j] = ' '
				}
				j++
			}
			b[j] = b[i]
			j++
		}
	}
	if prevWS {
		if hasNewline {
			b[j] = '\n'
		} else {
			b[j] = ' '
		}
		j++
	}
	return b[:j]
}
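A hedged sketch of the whitespace helpers above; note that, like the rest of this package, they reslice or mutate their input rather than allocate.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/parse"
)

func main() {
	fmt.Printf("%q\n", parse.TrimWhitespace([]byte("  a b \n"))) // "a b"

	// a run containing a newline collapses to '\n', other runs to ' '
	fmt.Printf("%q\n", parse.ReplaceMultipleWhitespace([]byte("a  \n b"))) // "a\nb"
}
```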
22
vendor/github.com/tdewolff/strconv/LICENSE.md
generated
vendored
Normal file
@ -0,0 +1,22 @@
Copyright (c) 2015 Taco de Wolff

Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
10
vendor/github.com/tdewolff/strconv/README.md
generated
vendored
Normal file
@ -0,0 +1,10 @@
# Strconv [](http://godoc.org/github.com/tdewolff/strconv)

This package contains string conversion functions and is written in [Go][1]. It is much like the standard library's strconv package, but it is specifically tailored for the performance needs within the minify package.

For example, the floating-point to string conversion function is approximately twice as fast as the standard library's, but it is not as precise.
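A hedged usage sketch of the two float helpers defined in this package: `ParseFloat` reports how many bytes it consumed, and `AppendFloat` reports failure for NaN, infinities, or an overly high precision. The commented outputs are what tracing the code suggests, not guarantees.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/strconv"
)

func main() {
	f, n := strconv.ParseFloat([]byte("5.2"))
	fmt.Println(f, n) // 5.2 3

	b, ok := strconv.AppendFloat([]byte{}, 12300.0, -1)
	fmt.Println(string(b), ok) // 12300 true
}
```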
## License
Released under the [MIT license](LICENSE.md).

[1]: http://golang.org/ "Go Language"
251
vendor/github.com/tdewolff/strconv/float.go
generated
vendored
Normal file
@ -0,0 +1,251 @@
package strconv // import "github.com/tdewolff/strconv"

import "math"

var float64pow10 = []float64{
	1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9,
	1e10, 1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19,
	1e20, 1e21, 1e22,
}

// ParseFloat parses a byte-slice and returns the float it represents.
// If an invalid character is encountered, it will stop there.
func ParseFloat(b []byte) (float64, int) {
	i := 0
	neg := false
	if i < len(b) && (b[i] == '+' || b[i] == '-') {
		neg = b[i] == '-'
		i++
	}

	dot := -1
	trunk := -1
	n := uint64(0)
	for ; i < len(b); i++ {
		c := b[i]
		if c >= '0' && c <= '9' {
			if trunk == -1 {
				if n > math.MaxUint64/10 {
					trunk = i
				} else {
					n *= 10
					n += uint64(c - '0')
				}
			}
		} else if dot == -1 && c == '.' {
			dot = i
		} else {
			break
		}
	}

	f := float64(n)
	if neg {
		f = -f
	}

	mantExp := int64(0)
	if dot != -1 {
		if trunk == -1 {
			trunk = i
		}
		mantExp = int64(trunk - dot - 1)
	} else if trunk != -1 {
		mantExp = int64(trunk - i)
	}
	expExp := int64(0)
	if i < len(b) && (b[i] == 'e' || b[i] == 'E') {
		i++
		if e, expLen := ParseInt(b[i:]); expLen > 0 {
			expExp = e
			i += expLen
		}
	}
	exp := expExp - mantExp

	// copied from strconv/atof.go
	if exp == 0 {
		return f, i
	} else if exp > 0 && exp <= 15+22 { // int * 10^k
		// If exponent is big but number of digits is not,
		// can move a few zeros into the integer part.
		if exp > 22 {
			f *= float64pow10[exp-22]
			exp = 22
		}
		if f <= 1e15 && f >= -1e15 {
			return f * float64pow10[exp], i
		}
	} else if exp < 0 && exp >= -22 { // int / 10^k
		return f / float64pow10[-exp], i
	}
	f *= math.Pow10(int(-mantExp))
	return f * math.Pow10(int(expExp)), i
}

const log2 = 0.301029995
const int64maxlen = 18

func float64exp(f float64) int {
	exp2 := 0
	if f != 0.0 {
		x := math.Float64bits(f)
		exp2 = int(x>>(64-11-1))&0x7FF - 1023 + 1
	}

	exp10 := float64(exp2) * log2
	if exp10 < 0 {
		exp10 -= 1.0
	}
	return int(exp10)
}

// AppendFloat appends the string representation of f with the given precision to b; the bool is false when the conversion is not possible.
func AppendFloat(b []byte, f float64, prec int) ([]byte, bool) {
	if math.IsNaN(f) || math.IsInf(f, 0) {
		return b, false
	} else if prec >= int64maxlen {
		return b, false
	}

	neg := false
	if f < 0.0 {
		f = -f
		neg = true
	}
	if prec == -1 {
		prec = int64maxlen - 1
	}
	prec -= float64exp(f) // number of digits in front of the dot
	f *= math.Pow10(prec)

	// calculate mantissa and exponent
	mant := int64(f)
	mantLen := LenInt(mant)
	mantExp := mantLen - prec - 1
	if mant == 0 {
		return append(b, '0'), true
	}

	// expLen is zero for positive exponents, because positive exponents are determined later on in the big conversion loop
	exp := 0
	expLen := 0
	if mantExp > 0 {
		// positive exponent is determined in the loop below
		// but if we initially decreased the exponent to fit in an integer, we can't set the new exponent in the loop alone,
		// since the number of zeros at the end determines the positive exponent in the loop, and we just artificially lost zeros
		if prec < 0 {
			exp = mantExp
		}
		expLen = 1 + LenInt(int64(exp)) // e + digits
	} else if mantExp < -3 {
		exp = mantExp
		expLen = 2 + LenInt(int64(exp)) // e + minus + digits
	} else if mantExp < -1 {
		mantLen += -mantExp - 1 // extra zero between dot and first digit
	}

	// reserve space in b
	i := len(b)
	maxLen := 1 + mantLen + expLen // dot + mantissa digits + exponent
	if neg {
		maxLen++
	}
	if i+maxLen > cap(b) {
		b = append(b, make([]byte, maxLen)...)
	} else {
		b = b[:i+maxLen]
	}

	// write to string representation
	if neg {
		b[i] = '-'
		i++
	}

	// big conversion loop, start at the end and move to the front
	// initially print trailing zeros and remove them later on
	// for example if the first non-zero digit is three positions in front of the dot, it will overwrite the zeros with a positive exponent
	zero := true
	last := i + mantLen      // right-most position of digit that is non-zero + dot
	dot := last - prec - exp // position of dot
	j := last
	for mant > 0 {
		if j == dot {
			b[j] = '.'
			j--
		}
		newMant := mant / 10
		digit := mant - 10*newMant
		if zero && digit > 0 {
			// first non-zero digit, if we are still behind the dot we can trim the end to this position
			// otherwise trim to the dot (including the dot)
			if j > dot {
				i = j + 1
				// decrease negative exponent further to get rid of dot
				if exp < 0 {
					newExp := exp - (j - dot)
					// getting rid of the dot shouldn't lower the exponent to more digits (e.g. -9 -> -10)
					if LenInt(int64(newExp)) == LenInt(int64(exp)) {
						exp = newExp
						dot = j
						j--
						i--
					}
				}
			} else {
				i = dot
			}
			last = j
			zero = false
		}
		b[j] = '0' + byte(digit)
		j--
		mant = newMant
	}

	if j > dot {
		// extra zeros behind the dot
		for j > dot {
			b[j] = '0'
			j--
		}
		b[j] = '.'
	} else if last+3 < dot {
		// add positive exponent because we have 3 or more zeros in front of the dot
		i = last + 1
		exp = dot - last - 1
	} else if j == dot {
		// handle 0.1
		b[j] = '.'
	}

	// exponent
	if exp != 0 {
		if exp == 1 {
			b[i] = '0'
			i++
		} else if exp == 2 {
			b[i] = '0'
			b[i+1] = '0'
			i += 2
		} else {
			b[i] = 'e'
			i++
			if exp < 0 {
				b[i] = '-'
				i++
				exp = -exp
			}
			i += LenInt(int64(exp))
			j := i
			for exp > 0 {
				newExp := exp / 10
				digit := exp - 10*newExp
				j--
				b[j] = '0' + byte(digit)
				exp = newExp
			}
		}
	}
	return b[:i], true
}
78
vendor/github.com/tdewolff/strconv/int.go
generated
vendored
Normal file
@ -0,0 +1,78 @@
package strconv // import "github.com/tdewolff/strconv"

import "math"

// ParseInt parses a byte-slice and returns the integer it represents.
// If an invalid character is encountered, it will stop there.
func ParseInt(b []byte) (int64, int) {
	i := 0
	neg := false
	if len(b) > 0 && (b[0] == '+' || b[0] == '-') {
		neg = b[0] == '-'
		i++
	}
	n := uint64(0)
	for i < len(b) {
		c := b[i]
		if n > math.MaxUint64/10 {
			return 0, 0
		} else if c >= '0' && c <= '9' {
			n *= 10
			n += uint64(c - '0')
		} else {
			break
		}
		i++
	}
	if !neg && n > uint64(math.MaxInt64) || n > uint64(math.MaxInt64)+1 {
		return 0, 0
	} else if neg {
		return -int64(n), i
	}
	return int64(n), i
}

// LenInt returns the number of decimal digits of an integer, ignoring its sign.
func LenInt(i int64) int {
	if i < 0 {
		i = -i
	}
	switch {
	case i < 10:
		return 1
	case i < 100:
		return 2
	case i < 1000:
		return 3
	case i < 10000:
		return 4
	case i < 100000:
		return 5
	case i < 1000000:
		return 6
	case i < 10000000:
		return 7
	case i < 100000000:
		return 8
	case i < 1000000000:
		return 9
	case i < 10000000000:
		return 10
	case i < 100000000000:
		return 11
	case i < 1000000000000:
		return 12
	case i < 10000000000000:
		return 13
	case i < 100000000000000:
		return 14
	case i < 1000000000000000:
		return 15
	case i < 10000000000000000:
		return 16
	case i < 100000000000000000:
		return 17
	case i < 1000000000000000000:
		return 18
	}
	return 19
}
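A hedged sketch of the integer helpers above; `ParseInt` stops at the first non-digit and reports how many bytes it consumed.

``` go
package main

import (
	"fmt"

	"github.com/tdewolff/strconv"
)

func main() {
	n, length := strconv.ParseInt([]byte("-123px"))
	fmt.Println(n, length) // -123 4

	// LenInt counts decimal digits, ignoring the sign
	fmt.Println(strconv.LenInt(-123)) // 3
}
```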