
commit 16d0043cce (parent 31188c3a70)
Author: Tom Limoncelli (committed by GitHub)
Date: 2020-01-18 14:40:28 -05:00

Switch from govendor to go modules. (#587)

Thanks to @BenoitKnecht for leading the way on this.

1554 changed files with 400867 additions and 98222 deletions

vendor/github.com/tdewolff/parse/.travis.yml (generated, vendored, new file, 5 additions)

@@ -0,0 +1,5 @@
language: go
before_install:
- go get github.com/mattn/goveralls
script:
- goveralls -v -service travis-ci -repotoken $COVERALLS_TOKEN || go test -v ./...


@@ -2,7 +2,33 @@
This package contains several lexers and parsers written in [Go][1]. All subpackages are built to be streaming, high-performance, and in accordance with the official (latest) specifications.
-The lexers are implemented using `buffer.Lexer` in https://github.com/tdewolff/buffer and the parsers work on top of the lexers. Some subpackages have hashes defined (using [Hasher](https://github.com/tdewolff/hasher)) that speed up common byte-slice comparisons.
+The lexers are implemented using `buffer.Lexer` in https://github.com/tdewolff/parse/buffer and the parsers work on top of the lexers. Some subpackages have hashes defined (using [Hasher](https://github.com/tdewolff/hasher)) that speed up common byte-slice comparisons.
## Buffer
### Reader
Reader is a wrapper around a `[]byte` that implements the `io.Reader` interface. It is comparable to `bytes.Reader` but has slightly different semantics (and a slightly smaller memory footprint).
### Writer
Writer is a buffer that implements the `io.Writer` interface and expands the buffer as needed. The reset functionality allows for better memory reuse. After calling `Reset`, it will overwrite the current buffer and thus reduce allocations.
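A minimal sketch of how `Reader` and `Writer` compose through the standard `io` interfaces (the input bytes and initial capacity are illustrative):

```go
package main

import (
	"fmt"
	"io"

	"github.com/tdewolff/parse/buffer"
)

func main() {
	// Reader wraps an existing []byte as an io.Reader.
	r := buffer.NewReader([]byte("hello world"))

	// Writer appends to a []byte, growing it as needed.
	w := buffer.NewWriter(make([]byte, 0, 16))

	// Copy between them through the standard io interfaces.
	if _, err := io.Copy(w, r); err != nil {
		panic(err)
	}
	fmt.Printf("%s (len=%d)\n", w.Bytes(), w.Len())

	// Reset reuses the underlying slice; earlier Bytes() results are invalidated.
	w.Reset()
}
```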
### Lexer
Lexer is a read buffer specifically designed for building lexers. It keeps track of two positions: a start and end position. The start position is the beginning of the current token being parsed, the end position is being moved forward until a valid token is found. Calling `Shift` will collapse the positions to the end and return the parsed `[]byte`.
The end position can be moved with `Move(int)`, which also accepts negative integers. One can also use `Pos() int` to save a position, try to parse a token, and if it fails rewind with `Rewind(int)`, passing the previously saved position.
`Peek(int) byte` will peek forward (relative to the end position) and return the byte at that location. `PeekRune(int) (rune, int)` returns the UTF-8 rune and its byte length at the given **byte** position. Upon an error `Peek` will return `0`; the **user must peek at every character** and not skip any, otherwise it may skip a `0` and panic on out-of-bounds indexing.
`Lexeme() []byte` will return the currently selected bytes, `Skip()` will collapse the selection. `Shift() []byte` is a combination of `Lexeme() []byte` and `Skip()`.
When the passed `io.Reader` has returned an error, `Err() error` will return that error, even if the end of the buffer has not been reached.
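As an illustration, a tiny word lexer built on this API could look like the sketch below (`shiftWord` is a hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"

	"github.com/tdewolff/parse/buffer"
)

// shiftWord moves the end position over lowercase ASCII letters and shifts the token.
func shiftWord(z *buffer.Lexer) []byte {
	for {
		c := z.Peek(0)
		if c < 'a' || 'z' < c {
			break
		}
		z.Move(1)
	}
	return z.Shift() // collapse start to end and return the selection
}

func main() {
	z := buffer.NewLexerBytes([]byte("foo bar"))
	fmt.Printf("%s\n", shiftWord(z)) // foo
	z.Move(1)                        // step over the space
	z.Skip()                         // drop it from the selection
	fmt.Printf("%s\n", shiftWord(z)) // bar
	fmt.Println(z.Err())             // io.EOF once the NULL terminator is reached
}
```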
### StreamLexer
StreamLexer behaves like Lexer but uses a buffer pool to read in chunks from `io.Reader`, retaining old buffers in memory that are still in use, and re-using old buffers otherwise. Calling `Free(n int)` frees up `n` bytes from the internal buffer(s). It holds an array of buffers to accommodate for keeping everything in-memory. Calling `ShiftLen() int` returns the number of bytes that have been shifted since the previous call to `ShiftLen`, which can be used to specify how many bytes need to be freed up from the buffer. If you don't need to keep returned byte slices around, call `Free(ShiftLen())` after every `Shift` call.
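A sketch of the Shift/ShiftLen/Free cycle described above (the input and token boundary are illustrative):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/parse/buffer"
)

func main() {
	z := buffer.NewStreamLexer(strings.NewReader("key=value"))
	for {
		c := z.Peek(0)
		if c == '=' || c == 0 {
			break
		}
		z.Move(1)
	}
	key := z.Shift()
	fmt.Printf("%s\n", key)  // key
	z.Free(z.ShiftLen())     // the shifted bytes may now be reclaimed by the pool
}
```

Note that a shifted slice must not be used after `Free`, since the underlying buffer may be reused; copy it first if it needs to outlive the lexer.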
## Strconv
This package contains string conversion functions much like the standard library's `strconv` package, but specifically tailored for the performance needs of the `minify` package.
For example, the floating-point to string conversion function is approximately twice as fast as the standard library, but it is not as precise.
## CSS
This package is a CSS3 lexer and parser. Both follow the specification at [CSS Syntax Module Level 3](http://www.w3.org/TR/css-syntax-3/). The lexer takes an io.Reader and converts it into tokens until the EOF. The parser returns a parse tree of the full io.Reader input stream, but the low-level `Next` function can be used for stream parsing to return grammar units until the EOF.
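For example, tokenizing with the low-level CSS lexer might look like this sketch (assuming the `css` subpackage exposes `NewLexer`, `Next`, and `ErrorToken` under these names; check the subpackage for the exact API):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/parse/css"
)

func main() {
	l := css.NewLexer(strings.NewReader("a { color: red; }"))
	for {
		tt, data := l.Next()
		if tt == css.ErrorToken { // io.EOF or a real error; inspect l.Err()
			break
		}
		fmt.Printf("%v %q\n", tt, data)
	}
}
```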

vendor/github.com/tdewolff/parse/buffer/buffer.go (generated, vendored, new file, 15 additions)

@@ -0,0 +1,15 @@
/*
Package buffer contains buffer and wrapper types for byte slices. It is useful for writing lexers or other high-performance byte slice handling.
The `Reader` and `Writer` types implement the `io.Reader` and `io.Writer` interfaces, respectively, and provide a thinner and faster interface than `bytes.Buffer`.
The `Lexer` type is useful for building lexers because it keeps track of the start and end position of a byte selection, and shifts the bytes whenever a valid token is found.
The `StreamLexer` does the same, but keeps a buffer pool so that it reads a limited amount at a time, which allows parsing from streaming sources.
*/
package buffer // import "github.com/tdewolff/parse/buffer"
// defaultBufSize specifies the default initial length of internal buffers.
var defaultBufSize = 4096
// MinBuf specifies the default initial length of internal buffers.
// Solely here to support old versions of parse.
var MinBuf = defaultBufSize

vendor/github.com/tdewolff/parse/buffer/lexer.go (generated, vendored, new file, 158 additions)

@@ -0,0 +1,158 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"io"
"io/ioutil"
)
var nullBuffer = []byte{0}
// Lexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
// It reads the entire input into memory at once and keeps it there for the lifetime of the Lexer.
type Lexer struct {
buf []byte
pos int // index in buf
start int // index in buf
err error
restore func()
}
// NewLexer returns a new Lexer for a given io.Reader, and uses ioutil.ReadAll to read it into a byte slice.
// If the io.Reader implements Bytes, that byte slice is used instead.
// It will append a NULL at the end of the buffer.
func NewLexer(r io.Reader) *Lexer {
var b []byte
if r != nil {
if buffer, ok := r.(interface {
Bytes() []byte
}); ok {
b = buffer.Bytes()
} else {
var err error
b, err = ioutil.ReadAll(r)
if err != nil {
return &Lexer{
buf: []byte{0},
err: err,
}
}
}
}
return NewLexerBytes(b)
}
// NewLexerBytes returns a new Lexer for a given byte slice, and appends NULL at the end.
// To avoid reallocation, make sure the capacity has room for one more byte.
func NewLexerBytes(b []byte) *Lexer {
z := &Lexer{
buf: b,
}
n := len(b)
if n == 0 {
z.buf = nullBuffer
} else if b[n-1] != 0 {
// Append NULL to buffer, but try to avoid reallocation
if cap(b) > n {
// Overwrite next byte but restore when done
b = b[:n+1]
c := b[n]
b[n] = 0
z.buf = b
z.restore = func() {
b[n] = c
}
} else {
z.buf = append(b, 0)
}
}
return z
}
// Restore restores the byte past the end of the buffer that was overwritten by NULL.
func (z *Lexer) Restore() {
if z.restore != nil {
z.restore()
z.restore = nil
}
}
// Err returns the error returned from io.Reader or io.EOF when the end has been reached.
func (z *Lexer) Err() error {
return z.PeekErr(0)
}
// PeekErr returns the error at position pos. When pos is zero, this is the same as calling Err().
func (z *Lexer) PeekErr(pos int) error {
if z.err != nil {
return z.err
} else if z.pos+pos >= len(z.buf)-1 {
return io.EOF
}
return nil
}
// Peek returns the ith byte relative to the end position.
// Peek returns 0 when an error has occurred, Err returns the error.
func (z *Lexer) Peek(pos int) byte {
pos += z.pos
return z.buf[pos]
}
// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *Lexer) PeekRune(pos int) (rune, int) {
// from unicode/utf8
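// Lead bytes below 0xC0 are ASCII (or stray continuation bytes) and decode as a single byte;
// 0xC0-0xDF start 2-byte sequences, 0xE0-0xEF 3-byte sequences, 0xF0 and up 4-byte sequences.
// A 0 from Peek means the buffer ended mid-sequence, in which case the lead byte is returned alone.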
c := z.Peek(pos)
if c < 0xC0 || z.Peek(pos+1) == 0 {
return rune(c), 1
} else if c < 0xE0 || z.Peek(pos+2) == 0 {
return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
} else if c < 0xF0 || z.Peek(pos+3) == 0 {
return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
}
return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
}
// Move advances the position.
func (z *Lexer) Move(n int) {
z.pos += n
}
// Pos returns a mark to which the position can be rewound.
func (z *Lexer) Pos() int {
return z.pos - z.start
}
// Rewind rewinds the position to the given position.
func (z *Lexer) Rewind(pos int) {
z.pos = z.start + pos
}
// Lexeme returns the bytes of the current selection.
func (z *Lexer) Lexeme() []byte {
return z.buf[z.start:z.pos]
}
// Skip collapses the position to the end of the selection.
func (z *Lexer) Skip() {
z.start = z.pos
}
// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
func (z *Lexer) Shift() []byte {
b := z.buf[z.start:z.pos]
z.start = z.pos
return b
}
// Offset returns the byte position in the buffer.
func (z *Lexer) Offset() int {
return z.pos
}
// Bytes returns the underlying buffer.
func (z *Lexer) Bytes() []byte {
return z.buf
}

vendor/github.com/tdewolff/parse/buffer/reader.go (generated, vendored, new file, 44 additions)

@@ -0,0 +1,44 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import "io"
// Reader implements an io.Reader over a byte slice.
type Reader struct {
buf []byte
pos int
}
// NewReader returns a new Reader for a given byte slice.
func NewReader(buf []byte) *Reader {
return &Reader{
buf: buf,
}
}
// Read reads bytes into the given byte slice and returns the number of bytes read and an error if one occurred.
func (r *Reader) Read(b []byte) (n int, err error) {
if len(b) == 0 {
return 0, nil
}
if r.pos >= len(r.buf) {
return 0, io.EOF
}
n = copy(b, r.buf[r.pos:])
r.pos += n
return
}
// Bytes returns the underlying byte slice.
func (r *Reader) Bytes() []byte {
return r.buf
}
// Reset resets the position of the read pointer to the beginning of the underlying byte slice.
func (r *Reader) Reset() {
r.pos = 0
}
// Len returns the length of the buffer.
func (r *Reader) Len() int {
return len(r.buf)
}

vendor/github.com/tdewolff/parse/buffer/streamlexer.go (generated, vendored, new file, 223 additions)

@@ -0,0 +1,223 @@
package buffer // import "github.com/tdewolff/parse/buffer"
import (
"io"
)
type block struct {
buf []byte
next int // index in pool plus one
active bool
}
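// bufferPool recycles byte buffers. Buffers whose contents may still be referenced by the
// caller stay active; head and tail thread the active buffers in allocation order, with
// indices offset by one so that zero means none.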
type bufferPool struct {
pool []block
head int // index in pool plus one
tail int // index in pool plus one
pos int // byte pos in tail
}
func (z *bufferPool) swap(oldBuf []byte, size int) []byte {
// find new buffer that can be reused
swap := -1
for i := 0; i < len(z.pool); i++ {
if !z.pool[i].active && size <= cap(z.pool[i].buf) {
swap = i
break
}
}
if swap == -1 { // no free buffer found for reuse
if z.tail == 0 && z.pos >= len(oldBuf) && size <= cap(oldBuf) { // but we can reuse the current buffer!
z.pos -= len(oldBuf)
return oldBuf[:0]
}
// allocate new
z.pool = append(z.pool, block{make([]byte, 0, size), 0, true})
swap = len(z.pool) - 1
}
newBuf := z.pool[swap].buf
// put current buffer into pool
z.pool[swap] = block{oldBuf, 0, true}
if z.head != 0 {
z.pool[z.head-1].next = swap + 1
}
z.head = swap + 1
if z.tail == 0 {
z.tail = swap + 1
}
return newBuf[:0]
}
func (z *bufferPool) free(n int) {
z.pos += n
// move the tail over to next buffers
for z.tail != 0 && z.pos >= len(z.pool[z.tail-1].buf) {
z.pos -= len(z.pool[z.tail-1].buf)
newTail := z.pool[z.tail-1].next
z.pool[z.tail-1].active = false // after this, any thread may pick up the inactive buffer, so it can't be used anymore
z.tail = newTail
}
if z.tail == 0 {
z.head = 0
}
}
// StreamLexer is a buffered reader that allows peeking forward and shifting, taking an io.Reader.
// It keeps data in-memory until Free, taking a byte length, is called to move beyond the data.
type StreamLexer struct {
r io.Reader
err error
pool bufferPool
buf []byte
start int // index in buf
pos int // index in buf
prevStart int
free int
}
// NewStreamLexer returns a new StreamLexer for a given io.Reader with a 4kB estimated buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewStreamLexer(r io.Reader) *StreamLexer {
return NewStreamLexerSize(r, defaultBufSize)
}
// NewStreamLexerSize returns a new StreamLexer for a given io.Reader and estimated required buffer size.
// If the io.Reader implements Bytes, that buffer is used instead.
func NewStreamLexerSize(r io.Reader, size int) *StreamLexer {
// if reader has the bytes in memory already, use that instead
if buffer, ok := r.(interface {
Bytes() []byte
}); ok {
return &StreamLexer{
err: io.EOF,
buf: buffer.Bytes(),
}
}
return &StreamLexer{
r: r,
buf: make([]byte, 0, size),
}
}
func (z *StreamLexer) read(pos int) byte {
if z.err != nil {
return 0
}
// free unused bytes
z.pool.free(z.free)
z.free = 0
// get new buffer
c := cap(z.buf)
p := pos - z.start + 1
if 2*p > c { // if the token is larger than half the buffer, increase buffer size
c = 2*c + p
}
d := len(z.buf) - z.start
buf := z.pool.swap(z.buf[:z.start], c)
copy(buf[:d], z.buf[z.start:]) // copy the left-overs (unfinished token) from the old buffer
// read in new data for the rest of the buffer
var n int
for pos-z.start >= d && z.err == nil {
n, z.err = z.r.Read(buf[d:cap(buf)])
d += n
}
pos -= z.start
z.pos -= z.start
z.start, z.buf = 0, buf[:d]
if pos >= d {
return 0
}
return z.buf[pos]
}
// Err returns the error returned from the io.Reader; io.EOF is only reported once the buffered bytes have been consumed.
func (z *StreamLexer) Err() error {
if z.err == io.EOF && z.pos < len(z.buf) {
return nil
}
return z.err
}
// Free frees up bytes of length n from previously shifted tokens.
// Each call to Shift should at one point be followed by a call to Free with a length returned by ShiftLen.
func (z *StreamLexer) Free(n int) {
z.free += n
}
// Peek returns the ith byte relative to the end position and possibly does an allocation.
// Peek returns zero when an error has occurred, Err returns the error.
// TODO: inline function
func (z *StreamLexer) Peek(pos int) byte {
pos += z.pos
if uint(pos) < uint(len(z.buf)) { // uint for BCE
return z.buf[pos]
}
return z.read(pos)
}
// PeekRune returns the rune and rune length of the ith byte relative to the end position.
func (z *StreamLexer) PeekRune(pos int) (rune, int) {
// from unicode/utf8
c := z.Peek(pos)
if c < 0xC0 {
return rune(c), 1
} else if c < 0xE0 {
return rune(c&0x1F)<<6 | rune(z.Peek(pos+1)&0x3F), 2
} else if c < 0xF0 {
return rune(c&0x0F)<<12 | rune(z.Peek(pos+1)&0x3F)<<6 | rune(z.Peek(pos+2)&0x3F), 3
}
return rune(c&0x07)<<18 | rune(z.Peek(pos+1)&0x3F)<<12 | rune(z.Peek(pos+2)&0x3F)<<6 | rune(z.Peek(pos+3)&0x3F), 4
}
// Move advances the position.
func (z *StreamLexer) Move(n int) {
z.pos += n
}
// Pos returns a mark to which the position can be rewound.
func (z *StreamLexer) Pos() int {
return z.pos - z.start
}
// Rewind rewinds the position to the given position.
func (z *StreamLexer) Rewind(pos int) {
z.pos = z.start + pos
}
// Lexeme returns the bytes of the current selection.
func (z *StreamLexer) Lexeme() []byte {
return z.buf[z.start:z.pos]
}
// Skip collapses the position to the end of the selection.
func (z *StreamLexer) Skip() {
z.start = z.pos
}
// Shift returns the bytes of the current selection and collapses the position to the end of the selection.
// It also returns the number of bytes we moved since the last call to Shift. This can be used in calls to Free.
func (z *StreamLexer) Shift() []byte {
if z.pos > len(z.buf) { // make sure we peeked at least as much as we shift
z.read(z.pos - 1)
}
b := z.buf[z.start:z.pos]
z.start = z.pos
return b
}
// ShiftLen returns the number of bytes moved since the last call to ShiftLen. This can be used in calls to Free because it takes into account multiple Shifts or Skips.
func (z *StreamLexer) ShiftLen() int {
n := z.start - z.prevStart
z.prevStart = z.start
return n
}

vendor/github.com/tdewolff/parse/buffer/writer.go (generated, vendored, new file, 41 additions)

@@ -0,0 +1,41 @@
package buffer // import "github.com/tdewolff/parse/buffer"
// Writer implements an io.Writer over a byte slice.
type Writer struct {
buf []byte
}
// NewWriter returns a new Writer for a given byte slice.
func NewWriter(buf []byte) *Writer {
return &Writer{
buf: buf,
}
}
// Write writes bytes from the given byte slice and returns the number of bytes written and an error if one occurred. When err != nil, n == 0.
func (w *Writer) Write(b []byte) (int, error) {
n := len(b)
end := len(w.buf)
if end+n > cap(w.buf) {
buf := make([]byte, end, 2*cap(w.buf)+n)
copy(buf, w.buf)
w.buf = buf
}
w.buf = w.buf[:end+n]
return copy(w.buf[end:], b), nil
}
// Len returns the length of the underlying byte slice.
func (w *Writer) Len() int {
return len(w.buf)
}
// Bytes returns the underlying byte slice.
func (w *Writer) Bytes() []byte {
return w.buf
}
// Reset empties and reuses the current buffer. Subsequent writes will overwrite the buffer, so any reference to the underlying slice is invalidated after this call.
func (w *Writer) Reset() {
w.buf = w.buf[:0]
}


@@ -2,6 +2,7 @@
package parse // import "github.com/tdewolff/parse"
import (
+"bytes"
"encoding/base64"
"errors"
"net/url"
@@ -145,7 +146,7 @@ func Mediatype(b []byte) ([]byte, map[string]string) {
// DataURI parses the given data URI and returns the mediatype, data and error.
func DataURI(dataURI []byte) ([]byte, []byte, error) {
-if len(dataURI) > 5 && Equal(dataURI[:5], []byte("data:")) {
+if len(dataURI) > 5 && bytes.Equal(dataURI[:5], []byte("data:")) {
dataURI = dataURI[5:]
inBase64 := false
var mediatype []byte
@@ -153,7 +154,7 @@ func DataURI(dataURI []byte) ([]byte, []byte, error) {
for j := 0; j < len(dataURI); j++ {
c := dataURI[j]
if c == '=' || c == ';' || c == ',' {
-if c != '=' && Equal(TrimWhitespace(dataURI[i:j]), []byte("base64")) {
+if c != '=' && bytes.Equal(TrimWhitespace(dataURI[i:j]), []byte("base64")) {
if len(mediatype) > 0 {
mediatype = mediatype[:len(mediatype)-1]
}

vendor/github.com/tdewolff/parse/error.go (generated, vendored, new file, 49 additions)

@@ -0,0 +1,49 @@
package parse
import (
"fmt"
"io"
"github.com/tdewolff/parse/buffer"
)
// Error is a parsing error returned by a parser. It contains a message and an offset at which the error occurred.
type Error struct {
Message string
r io.Reader
Offset int
line int
column int
context string
}
// NewError creates a new error
func NewError(msg string, r io.Reader, offset int) *Error {
return &Error{
Message: msg,
r: r,
Offset: offset,
}
}
// NewErrorLexer creates a new error from a *buffer.Lexer
func NewErrorLexer(msg string, l *buffer.Lexer) *Error {
r := buffer.NewReader(l.Bytes())
offset := l.Offset()
return NewError(msg, r, offset)
}
// Position re-parses the file to determine the line, column, and context of the error.
// Context is the entire line at which the error occurred.
func (e *Error) Position() (int, int, string) {
if e.line == 0 {
e.line, e.column, e.context, _ = Position(e.r, e.Offset)
}
return e.line, e.column, e.context
}
// Error returns the error string, containing the context and line + column number.
func (e *Error) Error() string {
line, column, context := e.Position()
return fmt.Sprintf("parse error:%d:%d: %s\n%s", line, column, e.Message, context)
}
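A minimal sketch of how a parser can attach a position to a message with this type (the message, input, and offset below are illustrative):

```go
package main

import (
	"fmt"

	"github.com/tdewolff/parse"
	"github.com/tdewolff/parse/buffer"
)

func main() {
	l := buffer.NewLexerBytes([]byte("{\n  \"a\" 1\n}"))
	l.Move(8) // pretend the parser stopped here, just before the offending token

	err := parse.NewErrorLexer("expected colon character after object key", l)
	line, col, _ := err.Position() // computed lazily from the lexer's offset
	fmt.Println(line, col)
	fmt.Println(err)
}
```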


@@ -2,33 +2,13 @@
package json // import "github.com/tdewolff/parse/json"
import (
-"errors"
"io"
"strconv"
-"github.com/tdewolff/buffer"
+"github.com/tdewolff/parse"
+"github.com/tdewolff/parse/buffer"
)
-// ErrBadComma is returned when an unexpected comma is encountered.
-var ErrBadComma = errors.New("unexpected comma character outside an array or object")
-// ErrNoComma is returned when no comma is present between two values.
-var ErrNoComma = errors.New("expected comma character or an array or object ending")
-// ErrBadObjectKey is returned when the object key is not a quoted string.
-var ErrBadObjectKey = errors.New("expected object key to be a quoted string")
-// ErrBadObjectDeclaration is returned when the object key is not followed by a colon character.
-var ErrBadObjectDeclaration = errors.New("expected colon character after object key")
-// ErrBadObjectEnding is returned when an unexpected right brace is encountered.
-var ErrBadObjectEnding = errors.New("unexpected right brace character")
-// ErrBadArrayEnding is returned when an unexpected right bracket is encountered.
-var ErrBadArrayEnding = errors.New("unexpected right bracket character")
////////////////////////////////////////////////////////////////
// GrammarType determines the type of grammar
type GrammarType uint32
@@ -118,24 +98,26 @@ func NewParser(r io.Reader) *Parser {
}
// Err returns the error encountered during tokenization; this is often io.EOF, but other errors can be returned as well.
-func (p Parser) Err() error {
-err := p.r.Err()
-if err != nil {
-return err
+func (p *Parser) Err() error {
+if p.err != nil {
+return p.err
}
-return p.err
+return p.r.Err()
}
+// Restore restores the NULL byte at the end of the buffer.
+func (p *Parser) Restore() {
+p.r.Restore()
+}
// Next returns the next Grammar. It returns ErrorGrammar when an error was encountered. Using Err() one can retrieve the error message.
func (p *Parser) Next() (GrammarType, []byte) {
-p.r.Free(p.r.ShiftLen())
p.moveWhitespace()
c := p.r.Peek(0)
state := p.state[len(p.state)-1]
if c == ',' {
if state != ArrayState && state != ObjectKeyState {
-p.err = ErrBadComma
+p.err = parse.NewErrorLexer("unexpected comma character outside an array or object", p.r)
return ErrorGrammar, nil
}
p.r.Move(1)
@@ -146,7 +128,7 @@ func (p *Parser) Next() (GrammarType, []byte) {
p.r.Skip()
if p.needComma && c != '}' && c != ']' && c != 0 {
-p.err = ErrNoComma
+p.err = parse.NewErrorLexer("expected comma character or an array or object ending", p.r)
return ErrorGrammar, nil
} else if c == '{' {
p.state = append(p.state, ObjectKeyState)
@@ -154,7 +136,7 @@ func (p *Parser) Next() (GrammarType, []byte) {
return StartObjectGrammar, p.r.Shift()
} else if c == '}' {
if state != ObjectKeyState {
-p.err = ErrBadObjectEnding
+p.err = parse.NewErrorLexer("unexpected right brace character", p.r)
return ErrorGrammar, nil
}
p.needComma = true
@@ -171,7 +153,7 @@ func (p *Parser) Next() (GrammarType, []byte) {
} else if c == ']' {
p.needComma = true
if state != ArrayState {
-p.err = ErrBadArrayEnding
+p.err = parse.NewErrorLexer("unexpected right bracket character", p.r)
return ErrorGrammar, nil
}
p.state = p.state[:len(p.state)-1]
@@ -182,13 +164,13 @@ func (p *Parser) Next() (GrammarType, []byte) {
return EndArrayGrammar, p.r.Shift()
} else if state == ObjectKeyState {
if c != '"' || !p.consumeStringToken() {
-p.err = ErrBadObjectKey
+p.err = parse.NewErrorLexer("expected object key to be a quoted string", p.r)
return ErrorGrammar, nil
}
n := p.r.Pos()
p.moveWhitespace()
if c := p.r.Peek(0); c != ':' {
-p.err = ErrBadObjectDeclaration
+p.err = parse.NewErrorLexer("expected colon character after object key", p.r)
return ErrorGrammar, nil
}
p.r.Move(1)
@@ -223,7 +205,7 @@ The following functions follow the specifications at http://json.org/
func (p *Parser) moveWhitespace() {
for {
-if c := p.r.Peek(0); c != ' ' && c != '\t' && c != '\r' && c != '\n' {
+if c := p.r.Peek(0); c != ' ' && c != '\n' && c != '\r' && c != '\t' {
break
}
p.r.Move(1)
@@ -304,10 +286,18 @@ func (p *Parser) consumeStringToken() bool {
for {
c := p.r.Peek(0)
if c == '"' {
-p.r.Move(1)
-break
-} else if c == '\\' && (p.r.Peek(1) != 0 || p.r.Err() == nil) {
-p.r.Move(1)
+escaped := false
+for i := p.r.Pos() - 1; i >= 0; i-- {
+if p.r.Lexeme()[i] == '\\' {
+escaped = !escaped
+} else {
+break
+}
+}
+if !escaped {
+p.r.Move(1)
+break
+}
} else if c == 0 {
return false
}
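A usage sketch of the streaming interface (NewParser, Next, Err, and ErrorGrammar are declared in this file; the input is illustrative):

```go
package main

import (
	"fmt"
	"io"
	"strings"

	"github.com/tdewolff/parse/json"
)

func main() {
	p := json.NewParser(strings.NewReader(`{"key": [1, 2.5, "three"]}`))
	for {
		gt, data := p.Next()
		if gt == json.ErrorGrammar {
			if err := p.Err(); err != io.EOF {
				fmt.Println("parse error:", err) // now carries line/column context
			}
			break
		}
		fmt.Printf("%v %q\n", gt, data)
	}
}
```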

vendor/github.com/tdewolff/parse/position.go (generated, vendored, new file, 79 additions)

@@ -0,0 +1,79 @@
package parse
import (
"fmt"
"io"
"strings"
"github.com/tdewolff/parse/buffer"
)
// Position returns the line and column number for a certain position in a file. It is useful for recovering the position in a file that caused an error.
// It only treats \n, \r, and \r\n as newlines, which differs from languages that also recognize \f, \u2028, and \u2029 as newlines.
func Position(r io.Reader, offset int) (line, col int, context string, err error) {
l := buffer.NewLexer(r)
line = 1
for {
c := l.Peek(0)
if c == 0 {
col = l.Pos() + 1
context = positionContext(l, line, col)
err = l.Err()
if err == nil {
err = io.EOF
}
return
}
if offset == l.Pos() {
col = l.Pos() + 1
context = positionContext(l, line, col)
return
}
if c == '\n' {
l.Move(1)
line++
offset -= l.Pos()
l.Skip()
} else if c == '\r' {
if l.Peek(1) == '\n' {
if offset == l.Pos()+1 {
l.Move(1)
continue
}
l.Move(2)
} else {
l.Move(1)
}
line++
offset -= l.Pos()
l.Skip()
} else {
l.Move(1)
}
}
}
func positionContext(l *buffer.Lexer, line, col int) (context string) {
for {
c := l.Peek(0)
if c == 0 && l.Err() != nil || c == '\n' || c == '\r' {
break
}
l.Move(1)
}
// replace unprintable characters by a space
b := l.Lexeme()
for i, c := range b {
if c < 0x20 || c == 0x7F {
b[i] = ' '
}
}
context += fmt.Sprintf("%5d: %s\n", line, string(b))
context += fmt.Sprintf("%s^", strings.Repeat(" ", col+6))
return
}
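For instance, a short sketch of recovering a line and column from a byte offset (the input and offset are illustrative):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/tdewolff/parse"
)

func main() {
	input := "first line\nsecond line with a mistake"
	offset := strings.Index(input, "mistake") // byte offset to locate

	line, col, context, err := parse.Position(strings.NewReader(input), offset)
	if err != nil {
		panic(err) // io.EOF here would mean the offset lies past the input
	}
	fmt.Printf("line %d, column %d\n%s\n", line, col, context)
}
```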


@@ -17,19 +17,6 @@ func ToLower(src []byte) []byte {
return src
}
-// Equal returns true when s matches the target.
-func Equal(s, target []byte) bool {
-if len(s) != len(target) {
-return false
-}
-for i, c := range target {
-if s[i] != c {
-return false
-}
-}
-return true
-}
// EqualFold returns true when s matches case-insensitively the targetLower (which must be lowercase).
func EqualFold(s, targetLower []byte) bool {
if len(s) != len(targetLower) {
@@ -92,6 +79,55 @@ func IsWhitespace(c byte) bool {
return whitespaceTable[c]
}
var newlineTable = [256]bool{
// ASCII
false, false, false, false, false, false, false, false,
false, false, true, false, false, true, false, false, // new line, carriage return
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
// non-ASCII
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false,
}
// IsNewline returns true for \n, \r.
func IsNewline(c byte) bool {
return newlineTable[c]
}
// IsAllWhitespace returns true when the entire byte slice consists of space, \n, \r, \t, \f.
func IsAllWhitespace(b []byte) bool {
for _, c := range b {
@@ -130,7 +166,7 @@ func ReplaceMultipleWhitespace(b []byte) []byte {
for i, c := range b {
if IsWhitespace(c) {
prevWS = true
-if c == '\n' || c == '\r' {
+if IsNewline(c) {
hasNewline = true
}
} else {